1. Introduction
Peatlands cover ≈3% of the global land area and a much larger proportion of northern regions (e.g., ≈12% of Canada), and they play an increasingly important role in carbon sequestration and climate change mitigation [1,2,3,4]. Ongoing monitoring of peatlands over large spatial extents through the use of satellite-based Earth observation products is needed to understand their response to climate change (e.g., [5,6,7]). However, given their generally poor accessibility and the fine-scale topographic variation of vegetation microforms (often <1 m in height), satellite-based mapping requires validation from ground data (e.g., water table depth, species composition, biochemistry) [8,9]. Unmanned aerial systems (UAS) have shown potential for characterizing these ecosystems at fine scales [9,10,11]. In general terms, microtopographic features such as hollows and hummocks are key elements that are closely related to the complex, interlinked hydrological, ecophysiological, and biogeochemical processes in peatlands [12]. Hummocks are elevated features composed of vascular plants overlying mosses that consistently remain above the water table, while hollows are lower-lying areas with primarily exposed mosses [13]. The multitemporal characterization of hollows and hummocks at submeter scales is key to validating satellite-derived products such as phenology tracking and net ecosystem exchange estimation [9].
To date, mapping microtopography with UAS has relied on two main technologies: light detection and ranging (LiDAR) and structure-from-motion (SfM) multiview stereo (MVS) photogrammetry (hereinafter referred to as SfM), with variable results for each technology (e.g., [14,15,16]). LiDAR is an active remote sensing technology that uses a pulsed laser, generally between 800 and 1500 nm for terrestrial applications, to measure ranges, i.e., the distances from the instrument to objects on the surface of the Earth. It does so by measuring the exact time it takes for the pulses to return after they are reflected off objects or the ground [17]. In contrast, SfM is a passive remote sensing technique that uses overlapping, offset photographs to reconstruct the landscape [18,19]. In forested areas, LiDAR pulses can penetrate the canopy and allow for the development of both canopy and ground surface models [17], while SfM only provides a surface model of the highest layer, often the canopy, as seen in the photographs [20]. Comparatively, across ecosystems, SfM has been shown to produce higher-density point clouds than LiDAR. In peatlands, microtopography mapping has previously been compared between UAS SfM and airborne LiDAR (e.g., [16]). Many studies have also employed airborne LiDAR for large-scale peatland assessments (e.g., [21,22,23,24,25,26]), and terrestrial laser scanning (TLS) has been shown to successfully map microforms at very high spatial detail (e.g., [27]). However, no formal study has rigorously compared UAS LiDAR and SfM for mapping peatland microtopography.
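To make the LiDAR ranging principle concrete, the following minimal Python sketch (with purely illustrative values, not part of any workflow in this study) converts a pulse round-trip time into a range:

```python
# Minimal sketch of the LiDAR ranging principle: range is half the round-trip
# travel time multiplied by the speed of light. Values are illustrative only.

C = 299_792_458.0  # speed of light (m/s); propagation in air approximated by the vacuum value

def pulse_range(round_trip_time_s: float) -> float:
    """Return the sensor-to-target range (m) for a given round-trip time (s)."""
    return C * round_trip_time_s / 2.0

if __name__ == "__main__":
    # A pulse returning after ~333 ns corresponds to a target ~50 m away,
    # roughly the flying height used in this study.
    print(f"{pulse_range(333e-9):.1f} m")
```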
Because peatlands are both fragile ecosystems and generally have poor accessibility, tools to remotely study, access, and visualize peatland structure in 3D are needed to advance our understanding of their response to climate change. Although not a new technology [28], recent advances in virtual reality (VR) [29], with its applications in medicine [30], conservation [31], geosciences [32,33], e-tourism [34,35], and education [36], among others, provide novel opportunities to study peatlands and other ecosystems remotely and without disturbance [37]. VR is technology (hardware and software) that generates a simulated environment which stimulates a “sense of being present” in the virtual representation [38]. In contrast, augmented reality (AR) superimposes the virtual representation on the real world through glasses or other mobile digital displays, supplementing reality rather than replacing it [39]. Thus, through VR, users gain an immersive experience of field conditions in a cost-effective and repeatable manner. For instance, the authors of [29] showcase advantages of VR such as the quantification and analysis of field observations at multiple scales. While early implementations required extensive and expensive hardware, such as CAVEs (CAVE Automatic Virtual Environments) [38], recent commercial-grade VR systems that utilize improved head-mounted displays (HMD), such as the Oculus Rift, Sony PlayStation VR, and HTC Vive Cosmos, allow for outstanding visualization capabilities and the sharing of scientific output through web-based platforms.
Our study aims to bridge 3D models derived from UAS data (LiDAR and SfM) with VR/AR visualization. Thus, our objectives are to (1) compare SfM and LiDAR point cloud characteristics from a peatland; (2) compare the representation of peatland microtopography from the SfM and LiDAR data; and (3) provide a qualitative evaluation of VR and AR usability and quality of visualization of the two point clouds. We further discuss the potential of VR in peatland research and provide web-based examples of the study area. While we primarily focus on VR due to the maturity of the technology and its suitability for scientific data visualization, we also briefly compare the point clouds in AR. To our knowledge, ours is the first study to compare microtopography between LiDAR and SfM for a peatland, in addition to investigating peatland VR/AR models derived from UAS data.
2. Materials and Methods
2.1. Study Area
This study was carried out at Mer Bleue, an ≈8500 year-old ombrotrophic bog near Ottawa in Ontario, Canada (Figure 1). A bog is a type of peatland commonly found in northern regions. Bogs are acidic, nutrient-poor ecosystems, receiving incoming water and nutrients only from precipitation and deposition. Mer Bleue is slightly domed, with peat depth decreasing from >5 m across most of its area to ≈30 cm along the edges. It has a hummock–hollow–lawn microtopography with a mean relief between hummocks and hollows of <30 cm [40,41]. While the water table depth is variable throughout the growing season, it generally remains below the surface of the hollows [42]. Malhotra et al. (2016) [43] found a strong association between spatial variations in vegetation composition, water table depth, and microtopography; however, the strength of the association varied spatially within the bog. Mosses, predominantly Sphagnum capillifolium, S. divinum, and S. medium (the latter two species were formerly referred to as S. magellanicum) [44], form the ground layer of the bog and can be seen exposed in low-lying hollows. Vascular plants comprise the visible upper plant canopy of the hummocks (Figure 1). The most common vascular plant species are dwarf evergreen and deciduous shrubs (Chamaedaphne calyculata, Rhododendron groenlandicum, Kalmia angustifolia, Vaccinium myrtilloides), sedges (Eriophorum vaginatum), and trees (Picea mariana, Betula populifolia, and Larix laricina) [45]. Hummocks have been estimated to account for 51.2% and hollows for 12.7% of the total area [46]. Trees and water bodies (open and vegetated) around the margins of the peatland, which are heavily impacted by beavers, comprise the remaining classes.
2.2. Airframe
We used a Matrice 600 Pro (M600P) (DJI, Shenzhen, China) for both the RGB photograph and LiDAR acquisitions (Figure 2, Table A1). The M600P is a six-rotor unmanned aerial vehicle (UAV) with a maximum takeoff weight of 21 kg (10.2 kg payload) (DJI Technical Support, 2017) that uses an A3 Pro flight controller with triple-redundant GPS, compass, and IMU units. We integrated a differential real-time kinematic (D-RTK) GPS module (dual-band, four-frequency receiver) with the A3 Pro [47] for improved navigation precision [10]. For both datasets, DJI Ground Station Pro was used for flight planning and for the automated flight control of the M600P.
2.3. Structure from Motion Photogrammetry
A Canon 5D Mark III digital single-lens reflex (DSLR) camera with a Canon EF 24–70 mm f/2.8L II USM lens set to 24 mm was used for the RGB photograph acquisition in June (Table A1). This is a full-frame (36 × 24 mm CMOS) 22.1 MP camera with an image size of 5760 × 3840 pixels (6.25 μm pixel pitch). At 24 mm, the field of view of the lens is 84°. With the camera body and lens combined, the total weight was 1.9 kg. The camera was mounted on a DJI Ronin MX gimbal (2.3 kg) for stabilization and orientation control (Figure 2a). The camera’s ISO was set to 800 to achieve fast shutter speeds of 1/640 to 1/1000 s at f/14 to f/16. The photographs were acquired at nadir in Canon RAW (.cr2) format and were subsequently converted to large JPG (.jpg) files in Adobe Lightroom® with minimal compression. Because the M600P does not automatically geotag photographs acquired by third-party cameras, geotags were acquired separately.
Geotagging was achieved through a post-processed kinematic (PPK) workflow with an M+ GNSS module and Tallysman TW4721 antenna (Emlid, St. Petersburg, Russia) to record the position and altitude each time the camera was triggered (5 Hz update rate for the GPS and GLONASS constellations) (Table A1). A 12 × 12 cm aluminum ground plane was used for the antenna to reduce multipath and electromagnetic interference and to improve signal reception. The camera was triggered at two-second intervals with a PocketWizard MultiMax II intervalometer (LPA Design, South Burlington, VT, USA). A hot shoe adaptor between the camera and the M+ recorded the time each photograph was taken with a resolution of <1 µs (i.e., the flash sync pulse generated by the camera). The setup and configuration steps are described in [48]. The combined weight of the M+ GNSS module, the Tallysman antenna, the intervalometer, and cables was 300 g. Photographs were acquired from an altitude of 50 m AGL with 90% front overlap and 85% side overlap. With the aforementioned camera characteristics, altitude, and overlap, the flight speed was set to 2.5 m/s by the flight controller. The total flight time required was ≈18 min.
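As an illustration of how these acquisition settings relate to one another, the short Python sketch below (helper names are ours; parameters are taken from the text) computes the ground sampling distance, image footprint, and the photo and line spacing implied by the stated overlaps. It reproduces the 2.5 m/s flight speed that follows from the 2 s trigger interval:

```python
# Sketch of the flight-geometry arithmetic behind the SfM acquisition settings.
# Camera and flight parameters are taken from the text; the variable names are ours.

SENSOR_W_M, SENSOR_H_M = 0.036, 0.024   # full-frame sensor dimensions (m)
IMAGE_W_PX = 5760                        # image width (pixels)
FOCAL_M = 0.024                          # focal length (m)
ALTITUDE_M = 50.0                        # flying height AGL (m)
FRONT_OVERLAP, SIDE_OVERLAP = 0.90, 0.85

gsd = (SENSOR_W_M / IMAGE_W_PX) * ALTITUDE_M / FOCAL_M    # ground sampling distance (m/px)
footprint_across = SENSOR_W_M * ALTITUDE_M / FOCAL_M      # across-track footprint (m)
footprint_along = SENSOR_H_M * ALTITUDE_M / FOCAL_M       # along-track footprint (m)

photo_spacing = footprint_along * (1 - FRONT_OVERLAP)     # distance between exposures (m)
line_spacing = footprint_across * (1 - SIDE_OVERLAP)      # distance between flight lines (m)
speed_for_2s_trigger = photo_spacing / 2.0                # flight speed for a 2 s intervalometer (m/s)

print(f"GSD ≈ {gsd*100:.1f} cm/px, photo spacing ≈ {photo_spacing:.1f} m, "
      f"line spacing ≈ {line_spacing:.1f} m, speed ≈ {speed_for_2s_trigger:.1f} m/s")
```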
Base station data from Natural Resources Canada’s Canadian Active Control System station 943020 [49] (9.8 km baseline) was downloaded with precise clock and ephemeris data for PPK processing of the M+ geotags. The open-source RTKLib software v2.4.3B33 [50] was used to generate a PPK-corrected geotag for each photograph. A lever arm correction was also applied to account for the separation of the camera sensor from the position of the TW4721 antenna.
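For readers unfamiliar with this correction, the Python sketch below illustrates a lever arm adjustment in its simplest form, assuming a nadir-pointing camera and approximately level flight so that the antenna-to-camera offset can be treated as a fixed local offset. The offset values and function names are hypothetical, and a rigorous implementation would rotate the body-frame offset by the platform attitude at each exposure:

```python
# Simplified lever-arm correction sketch for the PPK geotags. Assumes a nadir camera
# and near-level flight; the offset vector below is a placeholder, not the measured one.

import numpy as np

LEVER_ARM_ENU = np.array([0.00, 0.05, -0.30])  # east, north, up offset (m), hypothetical

def correct_geotag(antenna_enu: np.ndarray) -> np.ndarray:
    """Shift an antenna position (E, N, U in metres) to the camera perspective centre."""
    return antenna_enu + LEVER_ARM_ENU

# Example: a single PPK-corrected antenna position (E, N, ellipsoidal height)
camera_position = correct_geotag(np.array([459_450.0, 5_028_450.0, 115.0]))
```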
We used Pix4D Enterprise v4.6.4 (Pix4D S.A., Prilly, Switzerland) to carry out an SfM-MVS workflow and generate the dense 3D point cloud (Table A1). Unlike UAV-integrated cameras, which write the camera orientation to the EXIF data, the DSLR photographs lack this information. However, these initial estimates are not necessary because, during processing, Pix4D calculates and optimizes both the internal (e.g., focal length) and external (e.g., orientation) camera parameters. In addition to the camera calibration and optimization in the initial processing step, an automatic aerial triangulation and a bundle block adjustment are also carried out [51]. Pix4D generates a sparse 3D point cloud through a modified scale-invariant feature transform (SIFT) algorithm [52,53]. Next, the point cloud is densified with an MVS photogrammetry algorithm [54]. For this comparison, we did not generate the raster digital surface model (DSM) through Pix4D (see Section 2.5).
SfM Point Cloud Accuracy
Two separate flights (≈12 min total flight time) with the same equipment described above were carried out ≈30 min earlier in a vegetated field 300 m south of the primary bog study area. This field is located on mineral soil and is therefore less impacted by foot traffic than the fragile bog ecosystem. In an area of 0.2 ha, twenty targets to be used as checkpoints were placed flat on the ground. Their positions were recorded with an Emlid Reach RS+ single-band GNSS receiver (Emlid, St. Petersburg, Russia) (Table A1). The RS+ received incoming NTRIP corrections from the Smartnet North America (Hexagon Geosystems, Atlanta, GA, USA) NTRIP casting service on an RTCM3-iMAX (individualized master–auxiliary) mount point utilizing both the GPS and GLONASS constellations. The accuracy of the RS+ with the incoming NTRIP correction was previously determined in comparison to a Natural Resources Canada High Precision 3D Geodetic Passive Control Network station and was found to be <3 cm in X and Y and 5.1 cm in Z [55]. The photographs and geotags were processed in the same way as described above with RTKLib and Pix4D up to the generation of the sparse point cloud (i.e., prior to the implementation of the MVS algorithm). Horizontal and vertical positional accuracies of the sparse 3D point cloud were determined from the coordinates of the checkpoints within Pix4D. The results of this accuracy assessment are used as an estimate of the positional accuracy of the SfM model of the study area within the bog, where no checkpoints were available.
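The accuracy metrics reported by Pix4D can also be reproduced outside the software; the Python sketch below shows one way to compute horizontal and vertical RMSE from paired surveyed and model-derived checkpoint coordinates (the file names and formats are hypothetical):

```python
# Sketch of the checkpoint accuracy assessment. Each file is assumed to hold one row
# per checkpoint with comma-separated E, N, Z coordinates (no header), in the same order.

import numpy as np

surveyed = np.loadtxt("checkpoints_gnss.csv", delimiter=",")   # GNSS-surveyed positions
estimated = np.loadtxt("checkpoints_sfm.csv", delimiter=",")   # positions from the sparse cloud

residuals = estimated - surveyed
rmse_horizontal = np.sqrt(np.mean(residuals[:, 0]**2 + residuals[:, 1]**2))
rmse_vertical = np.sqrt(np.mean(residuals[:, 2]**2))

print(f"Horizontal RMSE: {rmse_horizontal:.3f} m, vertical RMSE: {rmse_vertical:.3f} m")
```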
2.4. LiDAR
We used a LiAIR S220 integrated UAS LiDAR system (4.8 kg) (GreenValley International, Berkeley, CA, USA), hard mounted to the M600P, in August (Figure 2b, Table A1). The system uses a Hesai Pandar40P 905 nm laser with a ±2 cm range accuracy, a range of 200 m at 10% reflectivity, and a vertical FOV of −25° to +15° [56,57]. The Pandar40P is a 40-channel mechanical LiDAR that creates the 3D scene through a 360° rotation of 40 laser diodes. The majority of the lasers (channels 6–30) are within the +2° to −6° portion of the FOV [58]. The integrated S220 system utilizes an RTK-enabled INS (0.1° attitude and azimuth resolution) with an external base station and a manufacturer-stated relative accuracy of ±5 cm for the final product. The system includes an integrated Sony a6000 mirrorless camera that is triggered automatically during flight; these JPG photographs are used to apply realistic RGB colors to the point cloud in postprocessing.
Two flights at 50 m AGL and 5 m/s, consisting of 6 parallel flight lines (40 m apart), were carried out. Importantly, prior to the flight lines, two figure-eight patterns were flown to calibrate the IMU; the same figure eights were repeated after the flight lines, prior to landing. The total flight time was ≈10 min. The LiAcquire software (GreenValley International, Berkeley, CA, USA) provided a real-time view of the point cloud generation.
LiAcquire and LiNAV were used for the postprocessing of trajectory data and the geotagging of the RGB photographs. The LiDAR360 software (GreenValley International, Berkeley, CA, USA) was then used to correct the boresight error, carry out a strip alignment, merge individual strips, and calculate quality metrics consisting of analyses of the overlap, elevation difference between flight lines, and trajectory quality.
2.5. Analysis
The open-source CloudCompare Stereo v2.11.3 software (https://www.danielgm.net/cc/) (accessed on 14 April 2021) was used to analyze the point clouds (Table A1). After the initial positional difference between the point clouds was computed, the LiDAR point cloud was coarsely aligned to the SfM point cloud, followed by a refinement with an iterative closest point (ICP) alignment. Each point cloud was detrended to remove the slope of the bog surface. The point clouds were then clipped to the same area and compared. Characteristics including the number of neighbor points, point density, height distribution, surface roughness (distance between a point and the best-fitting plane of its nearest neighbors), and the absolute difference between the point clouds were calculated. DSMs at 10 and 50 cm pixel sizes were also created from each dataset. CloudCompare was used to generate the DSMs, rather than Pix4D and LiDAR360, respectively, to ensure that differences in the surfaces are not due to varying interpolation methodologies between the software packages. The average method with nearest neighbor interpolation (in the case of empty cells) was chosen for the rasterization of the point clouds.
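The detrending and rasterization steps can be outlined with a short Python sketch. This is not the CloudCompare implementation, only an equivalent illustration under the stated settings (10 cm cells, per-cell average), with a hypothetical input file:

```python
# Illustration of (1) detrending a point cloud by removing a best-fit plane and
# (2) rasterizing the detrended heights to a DSM using the per-cell average.

import numpy as np

xyz = np.loadtxt("bog_points.txt")             # columns: x, y, z (m); file name hypothetical
x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]

# (1) Detrend: fit z = a*x + b*y + c by least squares and keep the residuals.
A = np.column_stack([x, y, np.ones_like(x)])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
z_detrended = z - A @ coeffs

# (2) Rasterize to a DSM: average detrended heights within each 10 cm grid cell.
cell = 0.10
ix = ((x - x.min()) / cell).astype(int)
iy = ((y - y.min()) / cell).astype(int)
dsm = np.full((iy.max() + 1, ix.max() + 1), np.nan)
sums = np.zeros_like(dsm)
counts = np.zeros_like(dsm)
np.add.at(sums, (iy, ix), z_detrended)
np.add.at(counts, (iy, ix), 1)
mask = counts > 0
dsm[mask] = sums[mask] / counts[mask]          # empty cells stay NaN here; the study fills them
                                               # with nearest neighbour interpolation
```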
To classify the hummocks and hollows, the DSMs were first normalized in MATLAB v2020b (MathWorks, Natick, MA, USA) by subtracting the median elevation in a sliding 10 × 10 m window [59]. Hummocks were defined as having heights 5–31 cm above the median and hollows as >5 cm below the median. These thresholds were defined on the basis of expert knowledge of the site. In the SfM data, this corresponded to the 55th–90th percentile of the heights for hummocks and the bottom 38th percentile for hollows; in the LiDAR data, it corresponded to the 48th–71st percentile for hummocks and the bottom 40th percentile for hollows. A decision tree was used to assign the DSM pixels to hummock, hollow, and other classes based on their normalized height values.
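The classification reduces to a moving-window normalization followed by simple thresholds. The Python sketch below (our reimplementation for illustration, not the MATLAB code used in the study) shows the idea for a gap-free 10 cm DSM held in a NumPy array:

```python
# Normalize a DSM by subtracting the median elevation in a sliding ~10 x 10 m window,
# then apply the height thresholds from the text. Assumes a gap-free DSM array.

import numpy as np
from scipy.ndimage import median_filter

def classify_microforms(dsm: np.ndarray, cell_size: float = 0.10) -> np.ndarray:
    """Return an integer map: 1 = hummock, 2 = hollow, 0 = other."""
    window = int(round(10.0 / cell_size)) | 1          # ~10 m window, forced to an odd size
    local_median = median_filter(dsm, size=window)
    normalized = dsm - local_median

    classes = np.zeros(dsm.shape, dtype=np.uint8)
    classes[(normalized >= 0.05) & (normalized <= 0.31)] = 1   # hummocks: 5-31 cm above median
    classes[normalized < -0.05] = 2                            # hollows: >5 cm below median
    return classes
```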
To quantify the shape and compare the apparent complexity of the microforms from the SfM and LiDAR data, we calculated the 3D Minkowski–Bouligand fractal dimension (D) of the bog surface [60]. The 3D fractal dimension combines information about an object or surface across different spatial scales to provide a holistic quantification of its shape [61]. The point clouds were converted to triangular meshes at rasterization scales of 10 and 50 cm in CloudCompare, and D was then calculated following the methodology described in [61]. The fractal dimension is a scale-independent measure of complexity. As defined by [62], fractals are “used to describe objects that possess self-similarity and scale-independent properties; small parts of the object resemble the whole object”. Here, D is a measure of the complexity of the bog surface as modeled by the triangular mesh objects from the SfM and LiDAR data sources. The value of D ranges from 0 to 3, with higher values indicating more complex shapes. In this case, the complexity quantified by D is related to the irregularity of the pattern [61], with more regular shapes having lower values.
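For intuition, the box-counting formulation of D can be sketched as follows. This simplified version operates directly on a point cloud rather than on the triangular meshes used in the study, so it illustrates the concept rather than reproducing the method of [61]:

```python
# Simplified Minkowski-Bouligand (box-counting) dimension for a point cloud.
# D is the slope of log(N) versus log(1/s), where N is the number of occupied
# boxes of edge length s.

import numpy as np

def box_counting_dimension(points: np.ndarray, box_sizes=(0.1, 0.2, 0.4, 0.8, 1.6)) -> float:
    points = points - points.min(axis=0)              # shift to the positive octant
    counts = []
    for s in box_sizes:
        occupied = np.unique((points // s).astype(int), axis=0)
        counts.append(len(occupied))
    # Fit log(N) = D * log(1/s) + b and return the slope D.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope
```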
Lastly, empirical semivariograms were used to compare the scale dependence of the hummock–hollow microtopography and to determine whether the scale of the vegetation pattern captured by the SfM and LiDAR datasets is similar. The spatial dependence of the vegetation height can be inferred from the semivariogram, which plots a dissimilarity measure (γ) against distance (h). The range, sill, and nugget describe the properties of the semivariogram: the range indicates the spatial distance below which the height values are autocorrelated, the sill indicates the amount of variability, and the nugget is a measure of sampling error and fine-scale variability. A previous application of empirical semivariograms to terrestrial LiDAR data from a peatland indicated that the hummock–hollow microtopography had an isotropic pattern with a range of up to 1 m and that, in sites with increased shrub cover, the range increased to 3–4 m [27]. The empirical semivariograms were calculated in MATLAB v2020b for a subset of the open bog that did not include boardwalks.
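An isotropic empirical semivariogram of the detrended heights can be computed along the lines of the following Python sketch; again, this is an illustrative reimplementation with assumed input arrays, not the MATLAB code used here:

```python
# Empirical (isotropic) semivariogram: gamma(h) is half the mean squared height
# difference of point pairs separated by roughly h. Inputs are assumed: xy is an
# (n, 2) array of coordinates and z an (n,) array of detrended heights.

import numpy as np

def empirical_semivariogram(xy: np.ndarray, z: np.ndarray, lags: np.ndarray, tol: float):
    # Random subsample keeps the pairwise distance matrix manageable.
    rng = np.random.default_rng(0)
    idx = rng.choice(len(z), size=min(len(z), 2000), replace=False)
    xy, z = xy[idx], z[idx]

    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # pairwise distances
    sq = (z[:, None] - z[None, :]) ** 2                            # squared height differences

    gamma = []
    for h in lags:
        pairs = (d > h - tol) & (d <= h + tol) & (d > 0)
        gamma.append(0.5 * sq[pairs].mean() if pairs.any() else np.nan)
    return np.array(gamma)

# Example: lags every 0.25 m out to 5 m
# gamma = empirical_semivariogram(xy, z, np.arange(0.25, 5.25, 0.25), tol=0.125)
```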
In order to generate the PLY files (i.e., Polygon file format, .ply) needed for VR and AR visualization, the horizontal coordinates (UTM) were reduced in size (i.e., number of digits before the decimal) using a global shift. In this case, 459,400 was subtracted from the easting and 5,028,400 from the northing. Binary PLY files were then generated with CloudCompare.
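The global shift itself is a one-line operation; the sketch below applies the offsets given above (elevation left unshifted) so that the coordinates remain small enough for the 32-bit floats typically used by VR/AR viewers:

```python
# Apply the global shift used before PLY export. Shift values are those given in the text.

import numpy as np

GLOBAL_SHIFT = np.array([459_400.0, 5_028_400.0, 0.0])   # easting, northing, elevation

def apply_global_shift(xyz: np.ndarray) -> np.ndarray:
    """Subtract the global shift from UTM coordinates (columns: E, N, Z)."""
    return xyz - GLOBAL_SHIFT
```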
Both the VR (Section 2.6) and AR (Section 2.7) visualizations were compared to a standard web-based 3D point cloud viewer as a baseline. We used a Windows server implementation of Potree v1.8 [63] (https://potree.github.io/) (accessed on 14 April 2021), a free, open-source, WebGL-based point cloud renderer, to host the point clouds. The Potree Converter application was used to convert the LAS files (.las) into the Potree file and folder structure used by the web-based viewer for efficient tile-based rendering. In addition to navigation within the point cloud, user interactions include measurements of distance and volume and the generation of cross sections.
2.6. Virtual Reality Visualization
We tested the VR visualization of the point clouds with an Oculus Quest 2 headset (Facebook Technologies LLC, Menlo Park, CA, USA) (Table A1). The Oculus Quest 2, released in 2020, is a relatively low-cost, consumer-grade standalone VR HMD. It has 6 GB of RAM and uses the Qualcomm Snapdragon XR2 chip running an Android-based operating system. The model we tested had 64 GB of internal storage. The fast-switching LCD display has 1832 × 1920 pixels per eye at a refresh rate of 72–90 Hz (depending on the application, with 120 Hz potentially available in a future update).
In order to access point cloud visualization software, the Oculus Quest 2 was connected to a Windows 10 PC through high-speed USB 3. In this tethered mode, the Oculus Link software uses the PC’s processing to simulate an Oculus Rift VR headset and to access software and data directly from the PC. The PC used had an Intel Core i7 4 GHz CPU, 64 GB RAM, and an NVIDIA GeForce GTX 1080 GPU. The PLY files were loaded in VRifier (Teatime Research Ltd., Helsinki, Finland), a 3D data viewer package that runs on Steam VR, a set of PC software and tools that allow for content to be viewed and interacted with on VR HMDs. The two touch controllers were used to navigate through the point clouds as well as to capture 2D and 360-degree “photographs” from within the VR environment.
As a simple and low-cost alternative VR visualization option, we also tested two Google Cardboard compatible viewers: a DSCVR viewer from I Am Cardboard (Sun Scale Technologies, Monrovia, CA, USA) and a second-generation Google Official 87002823-01 Cardboard viewer (Google, Mountain View, CA, USA) (Table A1). These low-tech viewers can be used with both iOS and Android smartphones by placing the phone in the headset and viewing VR content through the built-in lenses. The LiDAR and SfM point clouds were uploaded in PLY format to Sketchfab (https://sketchfab.com) (accessed on 14 April 2021), an online platform for hosting and viewing interactive and immersive 3D content, and the models were accessed through the smartphone’s web browser. The entire LiDAR point cloud was viewable in the smartphone’s web browser, but the SfM model was subset to a 0.3 ha area of the open bog and a 0.4 ha area of the treed bog due to the 200 MB maximum file size of our Sketchfab subscription. The PLY models were scaled in Sketchfab relative to a 1.8 m tall observer.
2.7. Augmented Reality Visualization
In comparison to consumer VR systems, AR head-up displays and smart glasses capable of visualizing scientific data are predominantly expensive, enterprise-grade systems (e.g., Magic Leap 1, Epson Moverio series, Microsoft HoloLens, Vuzix Blade). Therefore, we tested mobile AR using web-hosted data viewed through an iOS/Android smartphone application. The point clouds in PLY format were uploaded to Sketchfab, and the models were accessed in AR mode via the Sketchfab iOS/Android smartphone application. The entire LiDAR point cloud was viewable with the smartphone application, but the SfM model was subset to an area of 788 m² due to the RAM limitations of the phones tested (i.e., iPhone XR, 11 Pro, and 12 Pro and Samsung Galaxy S20 FE).
4. Discussion
Microtopography and vegetation patterns at various scales can provide important information about the composition and environmental gradients (e.g., moisture and aeration) in peatlands. Ecological functions, greenhouse gas sequestration and emission, and hydrology can further be inferred from detailed analyses of the vegetation patterns [27,43]. As expected, our study revealed differences between the SfM and LiDAR bog microtopography characterizations. The greatest difference is the spatial detail defining the microforms in the point clouds or DSMs. This is a result of the differing point densities, i.e., 570.4 ± 172.8 pts/m² from the SfM versus 19.4 ± 7.5 pts/m² from the LiDAR. Despite being sparser than the SfM, the UAS LiDAR data are considerably denser than conventional airborne LiDAR data from manned aircraft due to the low altitude of the UAS data collection. For example, airborne LiDAR data over the same study area produced a point cloud with a density of 2–4 pts/m² [59]. Similarly, the authors of [64] reported a point density of 1–2 pts/m² from airborne LiDAR for wetlands in Eastern Canada. Nevertheless, the point density achieved here for the LiDAR is lower than that reported for other UAS systems used to study forested ecosystems (e.g., up to 35 pts/m² [65]).
In contrast to most forest ecosystems, which have a solid mineral soil ground layer, the ground layer of the bog is composed of living Sphagnum sp. moss over a thick peat column (several meters) with high water content, which prevents the pulses from encountering a solid, non-vegetated surface below. Furthermore, the shrubs that comprise the hummocks have a complex branch architecture. A laser pulse encountering vegetation is likely to undergo foliage structural interference, resulting in a reduced amplitude of return in comparison to solid open ground [66]. Luscombe et al. (2015) [67] showed that dense bog vegetation disrupts the return of the laser pulses and can result in an uncertain representation of the microform topography. Similar to the authors of [22,25], who found that the penetration of airborne LiDAR pulses into the hummock shrub canopy was low because the vegetation blocked the interaction of the pulses with the ground beneath the hummocks, our results also did not show multiple returns over the hummocks. As can be seen in the cross section of the LiDAR point cloud (Figure 9b), the points follow the elevation of the top of the canopy. A similar phenomenon has been noted in other ecosystems with short, dense vegetation, such as crops and grasslands [27]. The SfM also cannot distinguish between the tops of the hummocks and the moss ground layer beneath. Our results were also similar to those of the authors of [23,24], who found that exposed Sphagnum sp. mosses are good planar reflectors for LiDAR, which allows for mapping surface details in open bogs.
As input to models that require a DSM as part of the workflow or as a covariate, e.g., peat burn intensity mapping [68], biomass estimation [59], and peat depth estimation [21], either the SfM or the LiDAR would be sufficient. Both retain the gross microtopography of the bog, with similar semivariogram ranges and complexity (at the 50 cm scale). LiDAR should be used with caution at fine scales of interpolation due to the artefacts introduced by the low point density. Where fine-scale detail is required (<10 cm), the SfM provides better results.
While both technologies provide valuable datasets of the bog, they are optimized for different scenarios (Table 6). The SfM dataset is better suited for studies that require fine spatial detail over a smaller area (<10 ha); the longer time for data acquisition and processing makes this technology more feasible for localized studies. In contrast, the more efficient LiDAR is better suited to acquiring data over larger areas at lower spatial detail. The point density of the LiDAR could be increased by flying at a lower altitude and a slower flight speed, at the expense of the total area covered, but further testing is required to determine by how much in this ecosystem. Both payloads are of moderate weight, 4.5 kg for the SfM and 4.8 kg for the LiDAR (Table 6), and as such require a UAS with sufficient payload capacity (e.g., the M600P used in our study).
When manipulating the point clouds on a desktop PC or viewing them through the web-based Potree viewer, the difference in file size (1 GB for the SfM vs. 51 MB for the LiDAR LAS files) is not apparent when navigating within or interacting with the datasets. Even with a slow mobile internet connection, the Potree viewer remained usable. The file size was also not an important consideration when viewing the point clouds in VR with the Oculus Quest 2: because the HMD is tethered to the PC during this operation and the desktop computer renders the data, the full datasets can be readily interacted with. When mobile VR (e.g., Google Cardboard) or mobile AR was used, however, the file size of the SfM dataset hindered the user experience. The main limitations were the file size limit of the cloud-based hosting platform (i.e., Sketchfab) for mobile VR and the RAM capacity of the smartphones for AR. Commercial AR implementations developed for medical imaging would potentially not have the same file size restrictions, although these were not tested here.
All of the VR and AR visualizations provided a sense of agency through the user’s ability to turn their head or smartphone, explore the bog through a full 360° panorama, and change their perspective or scale of observation. While this is also true of the 360° panoramas captured within VRifier, dynamic agency was only fully achieved by motion tracking in the VR and AR implementations. As described by [69], this is an important distinction between a desktop digital experience and immersive technology. Such transformative developments in visualization lead to the user becoming “part of” the digital representation as opposed to the digital content remaining distinct from the user’s experience [69]. Of the VR and AR options tested here, only the Oculus Quest 2 rendered a visually immersive experience. The full 360° panoramic view of a VR HMD cannot be matched by other VR implementations such as CAVEs and video walls with smart glasses [70].
Visualization technology is important because it allows users to study areas of interest in virtual 3D environments, and it facilitates the interaction of groups in different locations, the collection of data in time and space, and the ability to view the object of study at varying scales. In addition to its use in scientific queries, immersive digital content is a further benefit for educational material and for the individual exploration of questions related to the datasets. Providing virtual models of the region of interest that are accessible with immersive VR or AR technology greatly benefits the overall understanding of, and interest in, the subject matter [71,72]. Because VR/AR content is interactive, the datasets can be manipulated by each person according to their own questions or interests.
With the popularization of this technology for gaming and entertainment, there has been a surge in development and an improvement in hardware quality, as well as a decrease in the price of consumer-grade VR headsets. It is therefore becoming more feasible to equip teams to use this technology both for meetings and for virtual collaboration, working with datasets and colleagues from anywhere in the world. Although popular for virtual tech support, AR lags behind VR in technological maturity for geospatial visualization. Nevertheless, with more compact datasets, such as the LiDAR point cloud, these 3D scenes can be displayed on most modern smartphones, making interactive files both easily accessible and readily shareable. With the majority of VR and AR development occurring in fields other than the geospatial sciences (e.g., gaming, marketing, telepresence), there is a need for improved functionality and for specialized software that can effectively handle the large files produced by technologies such as SfM and LiDAR [73].
Despite their promise, neither VR nor AR can replicate virtual environments with sufficient detail or fidelity to be indistinguishable from the real world. They are not a substitute for fieldwork or for firsthand in situ field experience. Rather, they are tools to augment and enhance geospatial visualization, data exploration, and collaboration.