
3D Gaussian Splatting in Geosciences: A Novel High-Fidelity Approach for Digitizing Geoheritage from Minerals to Immersive Virtual Tours

by
Andrei Ionuţ Apopei
Department of Geology, Faculty of Geography and Geology, “Alexandru Ioan Cuza” University of Iaşi, 700505 Iaşi, Romania
Geosciences 2025, 15(10), 373; https://doi.org/10.3390/geosciences15100373
Submission received: 4 August 2025 / Revised: 20 September 2025 / Accepted: 23 September 2025 / Published: 24 September 2025
(This article belongs to the Special Issue Challenges and Research Trends of Geoheritage and Geoconservation)

Abstract

The digitization of geological heritage is essential for geoconservation, research, and education, yet traditional 3D methods like photogrammetry struggle to accurately capture specimens with complex optical properties. This paper evaluates 3D Gaussian Splatting (3DGS) as a high-fidelity alternative through a multi-scale comparative study, digitizing landscape-scale outcrops with UAVs, architectural-scale museum interiors with smartphones, and specimen-level minerals with complex lusters and transparency. The results demonstrate that 3DGS provides unprecedented realism, successfully capturing view-dependent phenomena such as the labradorescence of feldspar and the translucency of fluorite, which are poorly represented by photogrammetric textured meshes. Furthermore, the 3DGS workflow is significantly faster and eliminates the need for manual post-processing and texture painting. By enabling the creation of authentic digital twins and immersive virtual tours, 3DGS represents a transformative technology for the field. It offers powerful new avenues for enhancing public engagement and creating accessible, high-fidelity digital archives for geoeducation and geotourism.

1. Introduction

The ongoing digital transformation is reshaping research, education, and preservation across the geosciences, fostering an era of “Mineralogy 4.0” [1,2]. At the heart of this revolution is the three-dimensional (3D) digitization of geological heritage, which offers a powerful paradigm for geoconservation and geoeducation [3,4]. By creating high-fidelity “digital twins” of minerals, rocks, and fossil specimens, institutions can safeguard vulnerable collections from physical degradation while democratizing access on a global scale [5,6]. These interactive digital assets transcend the limitations of static 2D images, providing deeper cognitive engagement and enhancing the understanding of complex spatial properties [7,8]. The necessity for such robust digital solutions was underscored during global events like the COVID-19 pandemic, which highlighted the critical importance of remote access to educational and cultural content [8].
For years, the primary workhorse for creating these digital replicas has been photogrammetry, particularly workflows combining Structure-from-Motion (SfM) and Multi-View Stereo (MVS) techniques [5,8]. This approach has been instrumental in building extensive digital archives and has proven effective for a wide range of geological samples. However, despite its utility, the reliance on feature-matching algorithms exposes significant and well-documented limitations. Photogrammetry is notoriously sensitive to lighting conditions and struggles with homogeneously textured surfaces, often requiring extensive, manual post-processing to correct imperfections and optimize models for real-time applications [8,9].
These challenges are critically amplified when dealing with the unique material properties inherent to many geological specimens. Photogrammetry fundamentally fails when capturing objects with glossy, reflective, transparent, or translucent surfaces [7,10,11]. Complex light interactions, such as reflection, refraction, and internal scattering, disrupt the feature-matching process, making it nearly impossible to accurately reconstruct minerals with metallic, vitreous, or pearly lusters [8]. While methods like cross-polarized light photogrammetry or the use of anti-reflection sprays can mitigate these issues, they add complexity and are not always sufficient for achieving true-to-life results [1,8]. Consequently, capturing unique view-dependent optical phenomena, such as the labradorescence of feldspars (i.e., a vivid, metallic play-of-color) or the opalescence of opal (i.e., a milky, shimmering effect), is beyond the native capabilities of standard photogrammetric methods. This forces researchers and digital artists into a laborious post-processing workflow. They must manually create Physically Based Rendering (PBR) texture maps to simulate these visual effects, a process that is both time-consuming and based on artistic interpretation rather than a direct capture of reality [1].
In response to these persistent challenges, a transformative technology has emerged from the field of computer graphics: 3D Gaussian Splatting [12,13,14,15]. Unlike traditional methods that build a solid polygonal mesh, 3DGS represents a scene as a vast collection of explicit, anisotropic 3D Gaussians, each defined by its position, shape, color, and opacity [16,17]. This novel representation, combined with a differentiable rasterization pipeline, allows for both accelerated training and photorealistic, real-time rendering. By directly modeling the volumetric and view-dependent nature of objects, 3DGS has shown remarkable potential to overcome the key weaknesses of photogrammetry, offering superior quality and efficiency [11,14].
While 3DGS has seen explosive growth in computer graphics, its application within the geosciences is a nascent but rapidly developing field. Initial studies have demonstrated its potential for large-scale environmental applications, such as reconstructing the lunar surface for planetary geology [18], large-scale terrain reconstruction [19], urban modeling and planning [11,20,21,22,23], drone-based geological investigation [24], and environmental change detection [25]. Its utility has also been proven for challenging underwater scenes relevant to earth science research [26]. However, a comprehensive study is needed to evaluate 3DGS across the full spectrum of geoheritage applications, from capturing the fine optical details of individual minerals to reconstructing large-scale virtual museum tours.
To address this gap, this study advances and validates a central hypothesis: 3DGS provides a qualitatively and quantitatively superior methodology to traditional photogrammetry for the high-fidelity digitization of geoheritage assets, particularly those with complex, view-dependent optical properties. To test this hypothesis, this paper evaluates and demonstrates the application of 3DGS as a high-fidelity digitization tool for geoheritage across multiple scales. A series of case studies is presented, ranging from individual mineral specimens and complex scientific instruments to immersive virtual tours of museum spaces captured with Unmanned Aerial Vehicles (UAVs) and smartphones. Through direct qualitative and quantitative comparisons with traditional photogrammetry, the results show that 3DGS provides an unprecedented level of realism in capturing challenging geological materials, including those with transparency, complex luster, and dynamic optical effects. By showcasing its utility for creating accessible and engaging digital twins, this work makes the case that 3DGS represents a significant leap forward for geoconservation, geoeducation, and virtual geotourism.

2. Materials and Methods

This study employs a comparative methodology to evaluate two distinct 3D digitization workflows across multiple scales of geoheritage. The process is segmented into three key stages: data acquisition, 3D reconstruction and processing, and final digital deployment for visualization and interaction. The tangible objects used as case studies are presented in Table 1.

2.1. Data Acquisition

To ensure a comprehensive evaluation, source data were captured for the distinct scenarios outlined in Table 1. For landscape-scale digitization of geological outcrops (Limpedea Pillars), a DJI Mini 3 Pro UAV was utilized. Data were captured using timed-shot photography (a sequence of photos with a 10 s delay between shots) at a resolution of 4032 × 3024 pixels. Specimen-level digitization was performed using the high-resolution image sets and methodology established in previous research [1,8], involving a Canon 5D Mk III camera and a cross-polarized light photogrammetry rig. For the architectural-scale virtual tour of the museum and other object-scale models (e.g., a cabinet, a bust), continuous video was captured on an Apple iPhone 16 Pro in 4K at 30 fps. The Apple LOG color profile was used, as its “flat” image preserves a wider dynamic range, retaining crucial details in both the brightest and darkest areas of the scene. This footage was then processed in DaVinci Resolve (v19), where color grading was performed to restore accurate tonality and contrast, ensuring a photorealistic final appearance.

2.2. Three-Dimensional Reconstruction and Processing

The two distinct pipelines outlined in Figure 1 were processed on a custom workstation equipped with an AMD Ryzen 9 3950X processor, 64 GB of DDR4 RAM, and dual GPUs (an NVIDIA GeForce RTX 3090 with 24 GB and an NVIDIA GeForce RTX 2080 Super with 8 GB). It is important to note that while this high-end configuration was used to ensure optimal performance for this study, the software developer of Postshot (https://www.jawset.com, accessed on 1 August 2025) lists the minimum requirements as an NVIDIA RTX 2060 with 6 GB of VRAM.
As shown in Figure 1, the workflows present a fundamental difference: the photogrammetry pipeline requires a lengthy “Manual Post-Processing” stage (highlighted in yellow), which involves significant digital-artistic skill to clean the mesh and simulate optical effects. In contrast, the 3DGS workflow replaces this with a computationally driven “Automated 3DGS Training” step (highlighted in blue). This distinction is the primary factor accounting for the variations in processing time, required expertise, and the nature of the final result.

2.2.1. Photogrammetry Workflow (Comparative Baseline)

The baseline 3D models were generated using Agisoft Metashape software (v2.2.0). The standard workflow consisted of camera alignment, dense cloud generation, mesh construction, and texture baking, as detailed in [1,8].

2.2.2. Three-Dimensional Gaussian Splatting (3DGS) Workflow

The 3DGS scenes were generated using a desktop-based software solution. While a growing number of platforms now offer 3DGS creation, including accessible web and mobile applications (e.g., Kiri Engine, Polycam, Luma AI, Teleport) and open-source frameworks (e.g., Nerfstudio [27] or Brush [28]), Postshot (v0.6.150), a software implementation of the method introduced by [15], was selected for this study. Postshot was chosen due to its robust handling of both video and high-resolution image-based inputs, its optimized processing pipeline for high-fidelity results, and its direct support for importing pre-computed camera poses from other software, which was critical for the comparative analysis. This workflow transforms 2D images or video into a high-fidelity volumetric scene representation without requiring a traditional polygonal mesh. The process can be broken down into three main stages: initialization, optimization with adaptive density control, and rendering, as visually summarized in Figure 2.
The process begins with a sparse point cloud and associated camera positions derived from SfM (Figure 2a), which serves as a rough geometric scaffold for the scene [15]. Each point from this initial cloud is “splatted” into an explicit 3D Gaussian primitive (Figure 2b,c). These primitives are the fundamental building blocks of the scene and are defined by a set of learnable properties (Figure 2b*): a 3D space position (center), a 3D covariance matrix (defining its shape, scale, and orientation as an ellipsoid), an opacity (transparency), and color represented by Spherical Harmonics (SH) coefficients to model view-dependent effects like reflections [13,15].
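The learnable properties listed above can be sketched in code. The following is an illustrative Python container, not Postshot's internal representation (all names and shapes are assumptions), showing how a Gaussian's covariance matrix is built from a scale vector and a rotation quaternion so that it always remains a valid, positive semi-definite ellipsoid:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianPrimitive:
    """Illustrative container for the learnable properties of one 3D Gaussian."""
    position: np.ndarray   # (3,) center in world space
    scale: np.ndarray      # (3,) ellipsoid semi-axes
    rotation: np.ndarray   # (4,) unit quaternion (w, x, y, z)
    opacity: float         # alpha in [0, 1]
    sh_coeffs: np.ndarray  # (K, 3) spherical-harmonics RGB coefficients

    def covariance(self) -> np.ndarray:
        """Sigma = R S S^T R^T, which is positive semi-definite by construction."""
        w, x, y, z = self.rotation / np.linalg.norm(self.rotation)
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
        S = np.diag(self.scale)
        return R @ S @ S.T @ R.T
```

Parameterizing the covariance as scale plus rotation, rather than as a raw 3 × 3 matrix, is what allows the optimizer to update the shape freely without ever producing an invalid (non-ellipsoidal) Gaussian.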
Unlike photogrammetry, which bakes a final texture, 3DGS uses an iterative optimization process to “learn” the scene. In a continuous loop, an image is rendered from the current set of 3D Gaussians using a fast, differentiable rasterizer. This rendered image is then compared to a real ground truth photograph from the input dataset. The difference (error) between the two is calculated, and this error signal is propagated backward to automatically adjust the properties of every Gaussian [15]. This optimization seeks to minimize the discrepancy, effectively “sculpting” the cloud of Gaussians until it perfectly represents the scene’s appearance from all viewpoints.
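The render-compare-update loop described above can be illustrated with a deliberately simplified, self-contained Python sketch. It fits 1D Gaussian “splats” to a target “image” and uses finite-difference gradients in place of the differentiable rasterizer; all names and values are illustrative, not part of the actual 3DGS pipeline:

```python
import numpy as np

def render(params, xs):
    """Toy 1D 'rasterizer': the image is a sum of Gaussian bumps,
    each parameterized by (mean, log_sigma, amplitude)."""
    img = np.zeros_like(xs)
    for mu, log_s, a in params:
        img += a * np.exp(-0.5 * ((xs - mu) / np.exp(log_s)) ** 2)
    return img

def train(params, target, xs, lr=0.05, iters=2000, eps=1e-4):
    """Render-compare-update loop: render, measure photometric (L2) error
    against the ground truth, and push the error back into the parameters."""
    params = params.astype(float)
    for _ in range(iters):
        base = np.mean((render(params, xs) - target) ** 2)
        grad = np.zeros_like(params)
        for i in range(params.shape[0]):       # finite-difference gradient,
            for j in range(params.shape[1]):   # standing in for backprop
                p = params.copy()
                p[i, j] += eps
                grad[i, j] = (np.mean((render(p, xs) - target) ** 2) - base) / eps
        params -= lr * grad                    # gradient-descent update
    return params
```

In the real pipeline the “image” is 2D, the splats number in the millions, and the gradients come from a differentiable rasterizer rather than finite differences, but the loop structure is the same.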
A key component of the training process is the adaptive control of the Gaussians’ density. The system automatically identifies areas that are poorly reconstructed. In “under-reconstructed” regions (areas with missing detail), new Gaussians are created by cloning existing ones. In “over-reconstructed” regions (where one large Gaussian incorrectly covers a detailed area), the primitive is split into smaller ones [15]. This adaptive densification, combined with the periodic removal of transparent (unnecessary) Gaussians, allows the model to dynamically allocate detail and efficiently build a highly accurate representation of the scene [13].
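A minimal sketch of one such densification step follows, assuming illustrative thresholds (the actual implementation operates on accumulated view-space gradients and runs periodically during training; none of the values below are Postshot's):

```python
import numpy as np

def densify_and_prune(positions, scales, opacities, grads,
                      grad_thresh=0.0002, scale_thresh=0.05, alpha_min=0.005):
    """One illustrative adaptive-density step:
    - prune near-transparent Gaussians (floaters)
    - clone small Gaussians with a large gradient (missing detail)
    - split large Gaussians with a large gradient (one blob over fine detail)
    """
    keep = opacities >= alpha_min          # prune transparent primitives
    hot = grads > grad_thresh              # poorly reconstructed regions
    small = scales.max(axis=1) <= scale_thresh
    clone = keep & hot & small             # under-reconstructed: duplicate
    split = keep & hot & ~small            # over-reconstructed: subdivide

    kept_scales = scales[keep].copy()
    kept_scales[split[keep]] /= 1.6        # shrink the originals being split

    rng = np.random.default_rng(0)
    jitter = rng.normal(scale=scales[split])  # offset the split-off copy

    new_positions = np.concatenate([positions[keep], positions[clone],
                                    positions[split] + jitter])
    new_scales = np.concatenate([kept_scales, scales[clone],
                                 scales[split] / 1.6])
    new_opacities = np.concatenate([opacities[keep], opacities[clone],
                                    opacities[split]])
    return new_positions, new_scales, new_opacities
```

The key point is that the primitive count is not fixed in advance: detail is allocated where the photometric error says it is needed, and reclaimed where it is wasted.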
The 3DGS workflow is flexible and can be initiated in two ways, as shown in Figure 1. For case studies where a photogrammetric model already existed (e.g., the specimen-level minerals), the previously computed camera alignment data from Agisoft Metashape was imported directly into Postshot. This hybrid approach leverages existing SfM data to save significant processing time and ensure a perfectly consistent baseline for direct comparison. For case studies captured solely with video where no prior photogrammetry was performed (e.g., the architectural-scale museum tour), the SfM step was computed from scratch within Postshot to generate the necessary camera poses before training.
On the specified workstation, the training time for a typical scene varied from 30 to 90 min, depending on the complexity and size of the input dataset. The resulting 3DGS scenes were cleaned of outliers and then saved in the .ply format (i.e., polygon file format).

2.3. Digital Deployment and Visualization

The final 3D assets were deployed on platforms suited to their underlying technology: mesh-based platforms for the photogrammetric models and point-based or specialized renderers for the 3DGS scenes. Textured photogrammetric models were uploaded to the Sketchfab platform (https://sketchfab.com, accessed on 1 August 2025). The 3DGS scenes were rendered in real-time within a custom web-based viewer built using the Three.js library [29]. A publicly accessible online repository has been created to display the interactive 3D models presented in this study (Figure A1). For the most immersive experience, the 3DGS scene of the “Grigore Cobălcescu” Museum was imported into Unity (https://unity.com, accessed on 1 August 2025) 6 LTS to develop a standalone application for the Windows OS. This application provides a first-person gameplay experience, allowing users to navigate the approximately 200-square-meter digital twin of the museum. A key performance advantage is that the entire high-fidelity scene, including its 60 display cabinets, loads instantaneously with no discernible latency. The application serves as a platform for further enrichment with interactive elements such as informational text and audio guides, transforming the digital model into an engaging learning environment.
Figure 2. Visual progression of the 3DGS reconstruction for a pyrite specimen. (a) The initial sparse point cloud generated from the SfM step. (b) The scene represented by individual Gaussian primitives with their scale reduced, revealing their underlying distribution; the inset (*) provides a conceptual illustration of these anisotropic primitives. (c) The final photorealistic render after the Gaussians have been optimized to their correct size and appearance, accurately capturing the mineral’s metallic luster. (d) The ground truth: a real photograph of the pyrite specimen for comparison.

3. Results

The comparative analysis of the two reconstruction workflows yielded significant qualitative and quantitative differences, particularly in the representation of challenging geological materials and the overall efficiency of the digitization process.

3.1. Qualitative Comparison of Specimen-Level Reconstructions

The most striking differences between the two methods were observed in specimen-level models with complex optical properties. For minerals exhibiting view-dependent phenomena, 3DGS provided a vastly more realistic representation. The labradorite 3D model, for instance, successfully reproduced the characteristic schiller effect (labradorescence). As shown in Figure 3, the iridescence in the photogrammetric model (Figure 3a,c,e) appears as a static, baked-in texture that remains unchanged between viewing angles. In contrast, the 3DGS model (Figure 3b,d,f) accurately captures the dynamic, view-dependent effect, which shifts realistically as the perspective changes.
For transparent and translucent minerals like fluorite, 3DGS models realistically render their clarity, allowing for the visualization of internal fractures and inclusions. Photogrammetric models of the same specimens appeared largely opaque, with surface textures that concealed their true nature (Figure 4). High-luster minerals such as pyrite and galena were also rendered with greater fidelity in 3DGS, which accurately captured their metallic sheen, whereas photogrammetry often produced blurred or artificially flat reflections.

3.2. Versatility Across Multiple Scales

The 3DGS workflow demonstrated robust performance across all tested scales. At the architectural scale, the entire interior of the “Grigore Cobălcescu” Museum, encompassing 60 cabinets and approximately 6500 specimens, was reconstructed into a seamless and visually coherent digital twin. When deployed in the Unity application, this becomes a real-time, navigable first-person space that loads without any discernible latency. As shown in Figure 5, the final application demonstrates a seamless blending of the high-fidelity 3DGS scene with standard game development features. The player avatar provides a sense of scale, while the user interface (UI) dialogue boxes deliver historical and contextual information to the visitor, transforming the static model into an interactive educational tool.
Figure 6 showcases this versatility across a range of other challenging subjects. The complex geometry of the Fedorov stage (Figure 6a), with its fine-scale mechanical parts and highly reflective metallic surfaces, was captured with high precision. The landscape-scale reconstruction of the Limpedea Pillars from UAV imagery accurately captured the intricate columnar jointing of the andesite outcrop (Figure 6b). A complex museum cabinet with transparent glass (Figure 6c) and the fine sculptural details of the bust of Professor Grigore Cobălcescu (Figure 6d) were reconstructed with equal success. In all cases, the 3DGS scenes maintained high visual fidelity without the artifacts often associated with large-scale/complex photogrammetric reconstructions.

3.3. Quantitative Performance

To address the method’s scalability, a detailed breakdown of performance metrics across different scales is presented in Table 2. For specimen-level models like the pyrite specimen, training times were consistently low. For more complex, object-scale models like a single museum cabinet, both the training time and the final number of Gaussians increased moderately. The most significant resource consumption was observed for the architectural-scale museum tour. This complex scene, covering 200 m2 and captured from a lengthy video sequence, required the most training time and resulted in the largest file size, demonstrating a clear relationship between scene complexity and computational demand. It is important to note that training time is directly influenced by the hardware used, the size of the input image set, the selected model profile (e.g., splat3, splat MCMC), and the number of training iterations, with 30,000 steps typically being sufficient for a high-quality result.
When compared to traditional photogrammetry, the 3DGS workflow offers significant efficiencies, as detailed in Table 3. The primary advantage is the drastic reduction in total processing time. While photogrammetry required several hours per model, including lengthy manual post-processing, 3DGS training was completed in a fraction of that time. Although the final .ply files from 3DGS can be larger than their photogrammetric counterparts, this is a reasonable trade-off for the elimination of separate texture files and the vastly improved workflow speed.
To directly address the challenge of large .ply file sizes for web distribution and archiving, as shown in Table 2 and Table 3, an additional compression step was performed to generate optimized .splat files. This process employs a multi-faceted approach to reduce the data footprint. First, the model is trimmed by culling near-invisible “floater” artifacts, removing any Gaussian primitives with an opacity (alpha) value below a defined threshold. Second, the complexity of the view-dependent color information is reduced by lowering the degree of the Spherical Harmonics (SH). The core of the compression lies in vector quantization, where continuous properties of the Gaussians (such as scale and rotation) are grouped into a finite number of “buckets.” Instead of storing the full high-precision value for each property, the model stores a much smaller index pointing to the appropriate bucket. This combination of trimming, color simplification, and quantization results in a .splat file that is approximately ten times smaller than the original, making it far more suitable for web-based distribution and long-term archiving.
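The bucket-index idea at the core of this compression can be sketched as a toy 1D vector quantizer; the function names, bucket count, and ratio calculation below are illustrative assumptions, not the actual .splat exporter:

```python
import numpy as np

def quantize_attribute(values, n_buckets=256, iters=10, seed=0):
    """Toy 1D vector quantization (k-means): replace one float32 per Gaussian
    with one uint8 bucket index plus a shared codebook of bucket centers."""
    rng = np.random.default_rng(seed)
    codebook = rng.choice(values, size=n_buckets, replace=False).astype(float)
    for _ in range(iters):
        # assign each value to its nearest bucket, then re-center the buckets
        idx = np.abs(values[:, None] - codebook[None, :]).argmin(axis=1)
        for k in range(n_buckets):
            members = values[idx == k]
            if members.size:
                codebook[k] = members.mean()
    idx = np.abs(values[:, None] - codebook[None, :]).argmin(axis=1)
    return codebook, idx.astype(np.uint8)

def compression_ratio(n_values, n_buckets=256):
    """float32 payload vs. uint8 indices plus a float32 codebook."""
    return (4 * n_values) / (n_values + 4 * n_buckets)
```

For a single scalar attribute this already approaches a 4:1 reduction; quantizing grouped vectors (e.g., full scale or rotation tuples per index), lowering the SH degree, and culling low-alpha primitives is what pushes the combined saving toward the roughly tenfold reduction reported above.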

4. Discussion

The results of this study indicate that 3DGS offers a transformative pathway for the digitization of geoheritage, addressing many of the fundamental limitations of conventional photogrammetry. The discussion below interprets these findings, connecting the technological advantages of 3DGS to their practical implications for geoconservation, geoeducation, and geotourism.

4.1. The Technological Leap: From Surface Approximation to Volumetric Representation

The superior qualitative results presented in this study, particularly for minerals with complex optical properties, are not an incremental improvement but the result of a fundamental technological shift. Traditional photogrammetry, which relies on textured triangular meshes, struggles with these materials because it is designed to capture surface correspondence [1,10]. This surface-based approach is inherently ill-equipped to handle the complex interplay of light in view-dependent phenomena like reflection, transparency, and iridescence, often leading to geometric voids and texture inaccuracies [11,30]. While methods like PBR can simulate these effects, they remain an artistic approximation layered onto an imperfect geometric base [1,8].
In contrast, technologies like Neural Radiance Fields (NeRFs) and 3DGS represent scenes as continuous volumetric fields [16,31,32]. 3DGS, in particular, models a scene using millions of learnable 3D Gaussians, each with properties for position, shape, opacity, and view-dependent color via spherical harmonics [14,15]. This allows the model to learn a representation of how light radiates throughout the space directly from 2D images. It is this shift from a surface-based approximation to a direct volumetric representation that enables 3DGS to so faithfully capture the appearance of challenging geological materials, moving the field from simulation to high-fidelity representation.
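As an illustration of how spherical harmonics encode view-dependent color, the following sketch evaluates a degree-1 SH expansion for a given viewing direction. The basis constants are the standard real SH values; the function itself is a simplified assumption, not the renderer's code:

```python
import numpy as np

# Standard real spherical-harmonics basis constants for degrees 0 and 1
SH_C0 = 0.28209479177387814
SH_C1 = 0.4886025119029199

def sh_to_rgb(sh_coeffs, view_dir):
    """Evaluate view-dependent RGB from degree-1 SH coefficients.

    sh_coeffs: (4, 3) array, one row per basis function, one column per channel.
    view_dir:  (3,) viewing direction (need not be normalized).
    """
    x, y, z = view_dir / np.linalg.norm(view_dir)
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    # the 0.5 offset mirrors a common convention in 3DGS implementations
    return np.clip(basis @ sh_coeffs + 0.5, 0.0, 1.0)
```

With only the degree-0 (DC) coefficient set, the color is identical from every direction; nonzero degree-1 coefficients make it shift as the viewpoint moves, which is precisely the mechanism that lets a trained model reproduce effects such as labradorescence without any hand-authored texture maps.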
This fundamental distinction highlights that the choice between 3DGS and photogrammetry should be guided by the desired end-use case. Photogrammetry and LiDAR scanning are fundamentally surface reconstruction techniques, optimized to produce a view-independent, editable polygonal mesh. This output is ideal for applications where a clean, geometric asset is the primary requirement, such as for 3D printing or traditional CGI workflows. 3DGS, however, is a radiance field method optimized for a different goal: the rapid creation of photorealistic, view-dependent novel view synthesis. Its strength lies in capturing and rendering the appearance of a scene with unparalleled realism, making it the superior choice for applications like immersive virtual tours and digital archives for visual inspection.

4.2. A Versatile, Multi-Scale Tool for Geoheritage

This study demonstrates that the benefits of 3DGS extend beyond complex, specimen-level reconstructions. The successful digitization of the Limpedea Pillars and the entire museum interior underscores the method’s versatility for both landscape and architectural-scale applications. This capacity aligns with the growing demand for lifelike virtual environments and digital twins in heritage, urban planning, and virtual tourism, where 3DGS is emerging as a viable tool for creating large, navigable replicas [9,11,20,32]. Furthermore, this approach is crucial for the digital preservation of cultural sites at risk from conflict or degradation, ensuring their accessibility through immersive applications [33]. This success also demonstrates that common 3DGS artifacts, such as the “floaters” that can create a “hairy” surface appearance, can be effectively mitigated. These artifacts, which typically result from poorly constrained Gaussians in under-observed regions [15], were minimized in this study by ensuring comprehensive input data and applying a dedicated outlier cleaning step during processing. It is important to note, however, that applying 3DGS at these scales introduces significant computational challenges, including prohibitive GPU memory demands and the management of millions of primitives to maintain quality [11,13,34].

4.3. Enhancing Engagement in Geoeducation and Geotourism

The high visual fidelity achieved with 3DGS is not merely an aesthetic improvement; it is critical for enhancing user engagement and learning outcomes [33,35,36]. The realism of a 3D representation directly impacts the user’s sense of presence and immersion within a virtual environment, which is a key factor in the success of virtual museums [6,10,33,35,36,37,38]. For geoeducation, realistic 3D models provide a vastly superior method for learning to recognize mineral properties compared to static 2D images [1,4,5,7,8]. A student can now observe the schiller effect on the digital labradorite model in a way that was previously only possible with the physical specimen in hand, thus democratizing high-quality educational experiences.
Furthermore, the development of a first-person interactive tour in Unity 6 highlights a significant leap for virtual geotourism. This aligns with the growing demand for lifelike, large-scale virtual environments and digital twins in tourism and urban planning [9,11,32]. The application, which runs on any standard Windows OS machine, loads the entire 200-square-meter digital twin of the museum instantaneously with no discernible latency. This seamless performance transforms the user from a passive observer of a 3D model into an active participant within the reconstructed space. This active exploration and emotional involvement are highly effective in communicating cultural and scientific content [1,3,5,33,35,39,40]. The application serves as a platform for further enrichment with immersive experiences, such as an introductory audio monologue or interactive exhibit information, allowing institutions to engage visitors in more profound and memorable ways [3,39,40,41].

5. Conclusions and Future Work

This study demonstrates that for the specific challenges of geoheritage digitization, 3D Gaussian Splatting is a superior and more versatile method than conventional photogrammetry. Its novelty lies not in the invention of a new algorithm, but in the comprehensive, multi-scale evaluation of its application to geoheritage, answering critical questions about its fidelity with complex mineral optics, workflow efficiency, and versatility. By directly capturing view-dependent properties, 3DGS provides a faster, more scalable, and more authentic representation of geological reality, confirming its broad utility for geoconservation, geoeducation, and geotourism.
Despite its advantages, challenges remain. The primary limitations are the requirement for high-end GPU hardware for training and the substantial storage requirements for the resulting models, which can be an obstacle for large-scale applications [11,12]. Beyond these practical considerations, another fundamental issue is the difficulty in extracting clean, continuous 3D mesh surfaces from the unstructured Gaussian primitives, which can be a barrier for downstream applications that require traditional geometry [14,16]. Furthermore, while visually impressive, 3DGS can be prone to artifacts such as aliasing, floaters, or “splotchy” Gaussians in regions with sparse or inconsistent input data [13,15]. The explicit but unstructured nature of the data also makes direct editing and manipulation less intuitive than with traditional polygonal models [14]. While this study demonstrates a clear advantage over photogrammetry, a detailed comparison with other radiance field techniques like NeRFs [31], which may offer different trade-offs between rendering speed and geometric precision, was beyond the scope of this work and remains an important area for future investigation. This study is also limited to a qualitative and workflow-based quantitative comparison; a rigorous analysis of absolute geometric accuracy against a ground-truth dataset, such as one derived from laser scanning or high-precision ground control points (GCPs), is likewise left to future work.
Future work must focus on addressing these practical hurdles. Emerging compression techniques based on vector quantization (VQ), such as Compact3D and EAGLES, offer a promising solution by significantly reducing file sizes with minimal loss in visual quality [14,42]. Further research into robust mesh extraction algorithms and the development of user-friendly tools for editing 3DGS scenes will also be vital for broader adoption. This is an active area of research, with emerging methods showing promise in converting high-fidelity 3DGS scenes into clean polygonal meshes (Kiri Engine) (https://kiri-innovation.github.io/3DGStoMesh2/, accessed on 8 September 2025). A formal user study, following established methodologies [35,36], is also a critical next step to quantitatively assess the impact of 3DGS-driven realism on learning outcomes and user engagement. Furthermore, research could explore applications beyond pure visualization, as the explicit nature of the Gaussian primitives allows for the integration of physical properties, opening possibilities for dynamic and interactive geological simulations. Finally, integrating these high-fidelity models with GIS platforms remains a key opportunity to unlock the full potential of this transformative technology for preserving and sharing our planet’s invaluable geoheritage.

Funding

This research received no external funding.

Data Availability Statement

The interactive web-based viewers for all 3D models presented in this study are publicly accessible at https://geology.uaic.ro/muzee/mineralogie/gsplats/ (accessed on 3 August 2025). The compiled executable for the immersive Unity application is available for download from the public GitHub repository at https://github.com/aapopei/Virtual-Museum-of-Geology (accessed on 4 August 2025).

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
3D – Three-Dimensional
3DGS – 3D Gaussian Splatting
GIS – Geographic Information System
LOG – Logarithmic (color profile)
LTS – Long-Term Support
MVS – Multi-View Stereo
NeRF – Neural Radiance Field
PBR – Physically Based Rendering
PLY – Polygon File Format
SfM – Structure-from-Motion
SH – Spherical Harmonics
UAV – Unmanned Aerial Vehicle
VQ – Vector Quantization
WebGL – Web Graphics Library

Appendix A

All 3D models generated and analyzed in this study are publicly accessible for interactive viewing. The following table provides direct links to the comparative photogrammetry models and to both the high-fidelity and compressed versions of the 3D Gaussian Splatting models hosted on the project website.
Table A1. Complete list of 3D models and links to interactive viewers.
Sample / Type 1 / Public URL (accessed on 8 September 2025)
Limpedea Pillars
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/firiza_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/firiza_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/firiza_photogrammetry.html
Geology Museum
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/museum_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/museum_3DGS_low.html
  P: N/A
Cabinet of rocks and minerals
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/cabinet_17_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/cabinet_17_3DGS_low.html
  P: N/A
Bust of Grigore Cobălcescu
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/grigore_cobalcescu_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/grigore_cobalcescu_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/grigore_cobalcescu_photogrammetry.html
Fedorov stage
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/fedorov_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/fedorov_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/fedorov_photogrammetry.html
Gold pan
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/gold_pan_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/gold_pan_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/gold_pan_photogrammetry.html
Pyrite
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/pyrite_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/pyrite_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/pyrite_photogrammetry.html
Fluorite
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/fluorite_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/fluorite_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/fluorite_photogrammetry.html
Stibnite
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/stibnite_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/stibnite_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/stibnite_photogrammetry.html
Jasper
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/jasper_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/jasper_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/jasper_photogrammetry.html
Topaz
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/topaz_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/topaz_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/topaz_photogrammetry.html
Sulfur
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/sulfur_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/sulfur_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/sulfur_photogrammetry.html
Calcite
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/calcite_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/calcite_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/calcite_photogrammetry.html
Anthophyllite
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/anthophyllite_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/anthophyllite_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/anthophyllite_photogrammetry.html
Smoky Quartz
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/smoky_quartz_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/smoky_quartz_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/smoky_quartz_photogrammetry.html
Quartz
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/quartz_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/quartz_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/quartz_photogrammetry.html
Amethyst
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/amethyst_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/amethyst_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/amethyst_photogrammetry.html
Labradorite
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/labradorite_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/labradorite_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/labradorite_photogrammetry.html
Opal
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/opal_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/opal_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/opal_photogrammetry.html
Vanadinite
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/vanadinite_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/vanadinite_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/vanadinite_photogrammetry.html
Galena and Chalcopyrite
  H: https://geology.uaic.ro/muzee/mineralogie/gsplats/galena_and_chalcopyrite_3DGS.html
  L: https://geology.uaic.ro/muzee/mineralogie/gsplats/galena_and_chalcopyrite_3DGS_low.html
  P: https://geology.uaic.ro/muzee/mineralogie/gsplats/galena_and_chalcopyrite_photogrammetry.html
1 H – 3DGS high-fidelity model; L – 3DGS low-fidelity (compressed) model; P – photogrammetry model.
Figure A1. Screenshot of the online repository displaying the interactive 3D models presented in this study. This page provides access to the photogrammetry, 3DGS high-fidelity (high), and 3DGS compressed (low) versions of each model, as detailed in Appendix A. The repository is publicly accessible at https://geology.uaic.ro/muzee/mineralogie/gsplats/ (accessed on 8 September 2025).

References

  1. Apopei, A.I. Towards Mineralogy 4.0? Atlas of 3D Rocks and Minerals: Digitally Archiving Interactive and Immersive 3D Data of Rocks and Minerals. Minerals 2024, 14, 1196. [Google Scholar] [CrossRef]
  2. Prabhu, A.; Morrison, S.M.; Fox, P.; Ma, X.; Wong, M.L.; Williams, J.R.; McGuinness, K.N.; Krivovichev, S.V.; Lehnert, K.; Ralph, J. What is mineral informatics? Am. Mineral. 2023, 108, 1242–1257. [Google Scholar] [CrossRef]
  3. Apopei, A.I. Accessible Interface for Museum Geological Exhibitions: PETRA—A Gesture-Controlled Experience of Three-Dimensional Rocks and Minerals. Minerals 2025, 15, 775. [Google Scholar] [CrossRef]
  4. Kondyli, A. The Museums of Geology and Paleontology as Geoeducational and Geoconservation Tools against Climate Change: A Case Study in Greece. Geoheritage 2024, 16, 64. [Google Scholar]
  5. Cocal-Smith, V.; Hinchliffe, G.; Petterson, M.G. Digital Tools for the Promotion of Geological and Mining Heritage: Case Study from the Thames Goldfield, Aotearoa, New Zealand. Geosciences 2023, 13, 253. [Google Scholar] [CrossRef]
  6. Dong, S. Research on the application of digital media technology in museum exhibition design: A case study of the national museum of Singapore. In Proceedings of the ICDEBA 2023, Hangzhou, China, 19 November 2023. [Google Scholar]
  7. Andrews, G.D.M.; Labishak, G.; Brown, S.; Isom, S.L.; Pettus, H.D.; Byers, T. Teaching with Digital 3D Models of Minerals and Rocks. GSA Today 2020, 30, 42–43. [Google Scholar] [CrossRef]
  8. Apopei, A.I.; Buzgar, N.; Buzatu, A.; Maftei, A.E.; Apostoae, L. Digital 3D Models of Minerals and Rocks in a Nutshell: Enhancing Scientific, Learning, and Cultural Heritage Environments in Geosciences by Using Cross-Polarized Light Photogrammetry. Carpathian J. Earth Environ. Sci. 2021, 16, 237–249. [Google Scholar] [CrossRef]
  9. Abramov, N.; Lankegowda, H.; Liu, S.; Barazzetti, L.; Beltracchi, C.; Ruttico, P. Implementing Immersive Worlds for Metaverse-Based Participatory Design through Photogrammetry and Blockchain. ISPRS Int. J. Geo-Inf. 2024, 13, 211. [Google Scholar] [CrossRef]
  10. Cao, J.; Cui, J.; Liu, H.; Li, T.; Wang, X.; Wang, Q.; Yuan, Y. GlassGaussian: 3D Gaussian Splatting for Glass-Like Materials. IEEE Trans. Multimed. 2025, 13, 31517–31531. [Google Scholar]
  11. Do, T.L.P.; Choi, J.; Le, V.Q.; Gentet, P.; Hwang, L.; Lee, S. HoloGaussian Digital Twin: Reconstructing 3D Scenes with Gaussian Splatting for Tabletop Hologram Visualization of Real Environments. Remote Sens. 2024, 16, 4591. [Google Scholar] [CrossRef]
  12. Bao, Y.; Ding, T.; Huo, J.; Liu, Y.; Li, Y.; Li, W.; Gao, Y.; Luo, J. 3D Gaussian Splatting: Survey, Technologies, Challenges, and Opportunities. arXiv 2024, arXiv:2407.17418. [Google Scholar] [CrossRef]
  13. Chen, G.; Wang, W. A Survey on 3D Gaussian Splatting. arXiv 2025, arXiv:2501.07687. [Google Scholar] [CrossRef]
  14. Wu, T.; Yuan, Y.J.; Zhang, L.X.; Yang, J.; Cao, Y.P.; Yan, L.Q.; Gao, L. Recent advances in 3D Gaussian splatting. Comput. Vis. Media 2024, 10, 613–642. [Google Scholar] [CrossRef]
  15. Kerbl, B.; Kopanas, G.; Leimkühler, T.; Drettakis, G. 3D Gaussian Splatting for Real-Time Radiance Field Rendering. ACM Trans. Graph. 2023, 42, 1–14. [Google Scholar] [CrossRef]
  16. Fei, B.; Xu, J.; Zhang, R.; Zhou, Q.; Yang, W.; He, Y. 3D Gaussian Splatting as a New Era: A Survey. IEEE Trans. Vis. Comput. Graph. 2025, 31, 4429–4449. [Google Scholar] [CrossRef]
  17. Ye, V.; Li, R.; Kerr, J.; Turkulainen, M.; Yi, B.; Pan, Z.; Seiskari, O.; Ye, J.; Hu, J.; Tancik, M. gsplat: An Open-Source Library for Gaussian Splatting. J. Mach. Learn. Res. 2025, 26, 1–17. [Google Scholar]
  18. Prosvetov, A.; Prokhorenko, A.; Guryeva, E.; Starinskiy, V. Illuminating the Moon: A comparative study of photogrammetry, Neural Radiance Fields, and Gaussian Splatting for lunar surface reconstruction under varying illumination. Astron. Comput. 2025, 52, 100953. [Google Scholar] [CrossRef]
  19. Chen, L.; Wu, J.; Zhang, J.; Wu, W.; Cai, G.; Guo, Q. Large-scale 3D terrain reconstruction using 3D Gaussian Splatting. arXiv 2024, arXiv:2407.13329. [Google Scholar]
  20. Gao, K. Towards Urban Digital Twins with Gaussian Splatting: Challenges and Opportunities in Large Scale 3D Mapping and Beyond. Master’s Thesis, University of Waterloo, Waterloo, ON, Canada, 2025. [Google Scholar]
  21. Gao, K.; Lu, D.; Li, L.; Chen, N.; He, H.; Xu, L.; Li, J. Enhanced 3-D Urban Scene View Synthesis and Geometry Extraction Using Gaussian Splatting: A Case Study from Google Earth. IEEE Trans. Geosci. Remote Sens. 2025, 63, 4701714. [Google Scholar]
  22. Yan, Y.; Lin, H.; Zhou, C.; Wang, W.; Sun, H.; Zhan, K.; Peng, S. Street Gaussians: Modeling Dynamic Urban Scenes with Gaussian Splatting. In Proceedings of the Computer Vision–ECCV 2024, Milan, Italy, 29 September–4 October 2024. [Google Scholar]
  23. Yao, Y.; Zhang, W.; Zhang, B.; Li, B.; Wang, Y.; Wang, B. RSGaussian: 3D gaussian splatting with LiDAR for aerial remote sensing novel view synthesis. arXiv 2024, arXiv:2412.18380. [Google Scholar]
  24. Tang, X.; Yang, Y.; Wang, X.; Hou, Y.; Wang, Y. DroneSplat: 3D Gaussian Splatting for Drone Photography. arXiv 2025, arXiv:2501.07767. [Google Scholar]
  25. Winiwarter, L.; Urban, S.; Holst, L.; Jutzi, B. Assessing the Potential of NeRF and 3D Gaussian Splatting for Change Detection from Terrestrial Image Sequences. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Kashiwa, Japan, 26–29 May 2025; pp. 441–448. [Google Scholar]
  26. Fan, X.; Wang, X.; Ni, H.; Xin, Y.; Shi, P. Water-Adapted 3D Gaussian Splatting for precise underwater scene reconstruction. Front. Mar. Sci. 2025, 12, 1573612. [Google Scholar] [CrossRef]
  27. Tancik, M.; Weber, E.; Ng, E.; Li, R.; Yi, B.; Wang, T.; Kristoffersen, A.; Austin, J.; Salahi, K.; Ahuja, A. Nerfstudio: A modular framework for neural radiance field development. In Proceedings of the ACM SIGGRAPH 2023 Conference Proceedings, Los Angeles, CA, USA, 12–15 December 2023; pp. 1–12. [Google Scholar]
  28. Brussee, A. Brush. Available online: https://github.com/ArthurBrussee/brush (accessed on 5 September 2025).
  29. Boutsi, A.; Kourtis, A.; Tsiolakis, T.; Ioannidis, C. Interactive Online Visualization of Complex 3D Geo-Archaeological Models. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Enschede, The Netherlands, 10–14 June 2019; pp. 45–52. [Google Scholar]
  30. Chen, W.; Zhong, R.; Wang, K.; Xie, D. Li-GS: A fast 3D Gaussian reconstruction method assisted by LiDAR point clouds. Big Earth Data 2025, 1–25. [Google Scholar] [CrossRef]
  31. Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing scenes as Neural Radiance Fields for view synthesis. Commun. ACM 2021, 65, 99–106. [Google Scholar] [CrossRef]
  32. Wang, B.; Li, D. A new era of indoor scene reconstruction: A survey. IEEE Access 2024, 12, 110160–110192. [Google Scholar] [CrossRef]
  33. Pietroni, E.; Ferdani, D. Virtual Restoration and Virtual Reconstruction in Cultural Heritage: Terminology, Methodologies, Visual Representation Techniques and Cognitive Models. Information 2021, 12, 167. [Google Scholar] [CrossRef]
  34. Qian, J.; Wang, Q.; He, H.; Wang, Z.; Gao, K. C3DGS: A COLMAP-free 3D Gaussian Splatting method with geometric consistency. arXiv 2025, arXiv:2501.12752. [Google Scholar]
  35. Barbieri, L.; Bruno, F.; Muzzupappa, M. Virtual museum system evaluation through user studies. J. Cult. Herit. 2017, 26, 101–108. [Google Scholar] [CrossRef]
  36. Leopardi, A.; Ceccacci, S.; Mengoni, M.; Naspetti, S.; Gambelli, D.; Ozturk, E.; Zanoli, R. X-reality technologies for museums: A comparative evaluation based on presence and visitors experience through user studies. J. Cult. Herit. 2021, 47, 188–198. [Google Scholar] [CrossRef]
  37. Carvajal, D.A.L.; Morita, M.M.; Bilmes, G.M. Virtual museums. Captured reality and 3D modeling. J. Cult. Herit. 2020, 45, 234–239. [Google Scholar]
  38. Kim, H.J.; Jeong, S.C.; Kim, S.H. Comparative Analysis of Product Information Provision Methods: Traditional E-Commerce vs. 3D VR Shopping. Appl. Sci. 2025, 15, 2089. [Google Scholar] [CrossRef]
  39. Clini, P.; Angeloni, R.; D’Alessio, M.; Quarchioni, R. Enhancing onsite and online museum experience through digital reconstruction and reproduction: The Raphael and Angelo Colocci temporary exhibition. SCIRES-IT Sci. Res. Inf. Technol. 2023, 13, 71–84. [Google Scholar]
  40. Clini, P.; Nespeca, R.; Ferretti, U.; Galazzi, F.; Bernacchia, M. Inclusive Museum Engagement: Multisensory Storytelling of Cagli Warriors’ Journey and the Via Flaminia Landscape Through Interactive Tactile Experiences and Digital Replicas. Heritage 2025, 8, 61. [Google Scholar] [CrossRef]
  41. Papadopoulos, C.; Gillikin Schoueri, K.; Schreibman, S. And Now What? Three-Dimensional Scholarship and Infrastructures in the Post-Sketchfab Era. Heritage 2025, 8, 99. [Google Scholar] [CrossRef]
  42. Navaneet, K.; Meibodi, K.P.; Koohpayegani, S.A.; Pirsiavash, H. Compact3d: Compressing gaussian splat radiance field models with vector quantization. arXiv 2023, arXiv:2311.18159. [Google Scholar] [CrossRef]
Figure 1. Comparative diagram of the 3D reconstruction workflows. The figure illustrates the procedural steps for the conventional photogrammetry pipeline (left) and the 3DGS pipeline (right), starting from a common data acquisition stage.
Figure 3. Comparative reconstruction of a labradorite specimen from three viewing angles: (a,c,e) The photogrammetric model; and (b,d,f) The 3DGS model. The size of the sample is 8.5 × 5.5 × 2 cm.
Figure 4. Comparison of the translucent fluorite specimen. (a) The photogrammetric model results in an opaque surface. (b) The 3DGS model successfully renders the mineral’s translucency and internal features. The size of the sample is 2 × 2 × 2 cm.
Figure 5. A screenshot from the interactive first-person virtual tour of the “Grigore Cobălcescu” Geology Museum, developed in the Unity engine using 3DGS as 3D environment data.
Figure 6. Demonstration of 3DGS versatility across multiple scales and materials. (a) Fedorov stage. (b) “Limpedea Pillars” outcrop. (c) Museum cabinet with glass front. (d) Bust of Grigore Cobălcescu.
Table 1. Characteristics of the studied 3D geological models 1.
Sample | Type/Scale | Specific Feature
Limpedea Pillars | Landscape | Columnar-jointed andesite
Geology Museum | Architectural-scale | 60 cabinets with ~6500 minerals and rocks, over an area of 200 square meters
Cabinet of rocks and minerals | Object-scale | Cabinet with 50 samples of rocks and minerals
Bust of Grigore Cobălcescu | Object-scale | Plaster bust made by Dimitrie Tronescu around 1893
Fedorov stage | Object-scale | Fedorov five-axis universal stage for polarizing microscopes, made in the Soviet Union; highly reflective material; complex geometry
Gold pan | Object-scale | Wooden pan used in the Apuseni Mountains (Romania) for decades to wash auriferous sands
Pyrite | Mineral | Luster
Fluorite | Mineral | Transparency
Stibnite | Mineral | Luster
Jasper | Mineral | Luster
Topaz | Mineral | Transparency
Sulfur | Mineral | Homogeneous surface color
Calcite | Mineral | Crystal habit
Anthophyllite | Mineral | Crystal habit
Smoky Quartz | Mineral | Transparency
Quartz | Mineral | Crystal habit
Amethyst | Mineral | Crystal habit, transparency
Labradorite | Mineral | Iridescence
Opal | Mineral | Opalescence
Vanadinite | Mineral | Crystal habit, luster
Galena and Chalcopyrite | Mineral | Luster
1 The complete collection of 3D models (3D Gaussian Splatting and photogrammetry) is available for interactive viewing at the project website: https://geology.uaic.ro/muzee/mineralogie/gsplats/ (accessed on 8 September 2025). A detailed list of all models with direct links to their interactive online versions is provided in Appendix A.
Table 2. Detailed scalability analysis of the 3D Gaussian Splatting workflow.
Model Scale | Example | Input Data | Training Time (min) | Number of Gaussians | File Size, High (MB) | File Size, Low (MB)
Mineral | Pyrite | 252 images | 35 | 743,272 | 171 | 17
Object | Cabinet | 2 min 44 s of video | 44 | 981,165 | 226 | 23
Architectural | Geology Museum | 10 min 25 s of video | 90 | 2,658,468 | 612 | 62
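As a sanity check on the scalability figures above, the snippet below estimates raw .ply sizes assuming each Gaussian stores roughly 59 float32 values (3 position, 3 scale, 4 rotation, 1 opacity, 48 spherical-harmonic color coefficients). This per-Gaussian layout is an assumption based on the reference 3DGS export format; actual exporters differ slightly in headers and optional fields.

```python
# Back-of-the-envelope .ply size estimate from Gaussian counts.
FLOATS_PER_GAUSSIAN = 59   # assumed: 3 pos + 3 scale + 4 rot + 1 opacity + 48 SH
BYTES_PER_FLOAT = 4        # float32

models = {
    "Pyrite": (743_272, 171),            # (gaussians, reported high-fidelity MB)
    "Cabinet": (981_165, 226),
    "Geology Museum": (2_658_468, 612),
}

for name, (n_gaussians, reported_mb) in models.items():
    est_mb = n_gaussians * FLOATS_PER_GAUSSIAN * BYTES_PER_FLOAT / 1e6
    print(f"{name}: estimated {est_mb:.0f} MB vs reported {reported_mb} MB")
```

The estimates land within a few percent of the reported sizes, which supports the observation that uncompressed 3DGS file size scales essentially linearly with the number of Gaussians.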
Table 3. Quantitative comparison of the Photogrammetry and 3D Gaussian Splatting workflows.
Metric | Photogrammetry | 3D Gaussian Splatting
Input Data | ~350 still images | ~350 still images or video(s) up to 10 min
Processing Time | 2–4 h | 30–90 min (training)
Manual Post-Processing | 1–3 h (cleanup & PBR texturing) | ~10 min (outlier cleaning)
Final File Size | 10–100 MB (.obj + 4K textures) | 20–600 MB (.ply)
Compressed File Size | N/A | 2–60 MB (.splat)
Loading Time 1 | ~3–5 s | ~3–12 s
1 All processing was performed on the workstation detailed in Section 2.2. Loading times were measured in Google Chrome (version 138, official build, 64-bit) on a laptop running Windows 11 with an AMD Ryzen 9 5800H CPU (~3.2 GHz), 32 GB of RAM, an NVIDIA GeForce RTX 3070 (Laptop) GPU, an NVMe SSD, and a 500 Mbps internet connection.
