Article

Symmetry-Preserving 4D Gaussian Splatting and Mapping for Motion-Aware Dynamic Scene Reconstruction

1 School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
2 School of Information and Communication Engineering, Dalian University of Technology, Dalian 116024, China
3 School of Communication Engineering, Hangzhou Dianzi University, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(11), 1847; https://doi.org/10.3390/sym17111847
Submission received: 27 September 2025 / Revised: 18 October 2025 / Accepted: 27 October 2025 / Published: 3 November 2025
(This article belongs to the Section Engineering and Materials)

Abstract

This paper introduces a novel and efficient approach for Gaussian Splatting in dynamic scenes that leverages symmetry principles for enhanced computational efficiency and visual fidelity. First, we diverge from conventional methods that process static and dynamic regions uniformly by implementing an adaptive separation mechanism. This approach exploits the inherent symmetry-breaking properties between static and dynamic Gaussian points, utilizing motion differentials to identify and isolate dynamic elements. This symmetry-aware partitioning allows for the application of specialized processing techniques to each region type, with static regions benefiting from their temporal symmetry while dynamic regions receive targeted deformation modeling. Second, through this fine-grained partitioning of static and dynamic components guided by symmetry analysis, we achieve more judicious allocation of computational resources. The symmetric treatment of spatially coherent static regions and the focused processing of symmetry-breaking dynamic elements substantially reduce memory requirements and training time while preserving reconstruction quality. This optimization effectively conserves valuable computational resources without compromising visual fidelity. Third, we introduce a sophisticated deformation modeling framework that learns the transformational characteristics of grids composed of multiple Gaussian points. By incorporating radial basis function principles, which inherently preserve local rotational and translational symmetries, our method efficiently encodes complex motion information of dynamic Gaussian points. This symmetry-preserving deformation approach not only enables high-fidelity reconstruction of dynamic regions but also significantly improves the rendering of continuously evolving shadow interactions by maintaining physical consistency. The result is a marked reduction in visual distortion and rendering outputs that demonstrate exceptional correspondence to ground truth imagery across diverse dynamic scenes.

1. Introduction

The advancement of dynamic scene reconstruction has been fundamentally shaped by the convergence of Simultaneous Localization and Mapping (SLAM) technologies and neural rendering techniques. Recent developments in neural SLAM frameworks, such as progressive bundle adjustment [1] and self-supervised tracking [2], have enabled continuous scene representations with accurate localization. Building upon these foundations, Gaussian Splatting-based SLAM methods [3,4,5] have emerged as a promising paradigm bridging explicit point-cloud efficiency and neural rendering quality. However, extending these real-time SLAM techniques to large-scale, multi-temporal dynamic scene reconstruction remains a fundamental challenge. The three-dimensional reconstruction of dynamic scenes represents a cornerstone technology for modern computer vision, urban planning, and environmental monitoring applications. Multi-agent collaborative frameworks [6,7] and specialized methods for challenging conditions [8,9,10] have expanded neural reconstruction to real-world scenarios. This field addresses the fundamental challenge of creating accurate digital representations from aerial and satellite imagery, enabling spatial analysis essential for disaster response, infrastructure management, agricultural monitoring, and climate change assessment.
The mathematical foundation of 3D reconstruction inherently relies on symmetry principles, where spatial transformations and geometric invariances play crucial roles in establishing correspondence between multiple views and temporal frames. Symmetry-based approaches have demonstrated significant advantages in point-cloud processing [11], providing robust frameworks for handling complex spatial relationships. Symmetrical neural network architectures have shown remarkable capabilities in feature extraction and matching tasks [12], offering theoretical foundations for understanding geometric consistency in multi-temporal reconstruction.
The evolution of 3D reconstruction methodologies has progressed from traditional photogrammetric approaches, including Structure-from-Motion (SfM) [13] and Multiview Stereo (MVS) [14], to sophisticated implicit neural representations. In recent years, LiDAR-based reconstruction techniques have been introduced, along with advanced neural representations such as Neural Radiance Fields (NeRFs) [15], Instant-NGP [16], and 3D Gaussian Splatting [17]. These modern approaches have demonstrated remarkable capabilities in generating high-fidelity 3D models from multi-view imagery, with Gaussian Splatting particularly excelling through explicit point-cloud representations that enable both high-quality rendering and efficient training. Recent advances in symmetry-aware neural architectures have further enhanced reconstruction capabilities by exploiting rotational and translational invariances inherent in natural scenes.
Despite these advances, there is a significant research gap in dynamic scene reconstruction, and a fundamental controversy exists in the field regarding the optimal balance between reconstruction quality, computational efficiency, and temporal consistency. Current methodologies, including D-NeRF [18] and HyperNeRF [19], primarily excel in controlled environments but face substantial challenges when applied to large-scale dynamic scenarios. From a symmetry perspective, the challenge lies in preserving spatial-temporal symmetries of static elements while accommodating the symmetry-breaking dynamics of changing features. Symmetry analysis in temporal modeling provides a theoretical framework for understanding this dichotomy [20], where static scene components exhibit strong translational and rotational symmetries over time, while dynamic elements demonstrate symmetry-breaking behaviors. Although NeRF-based approaches can achieve high visual fidelity, they are computationally intensive and unsuitable for real-time applications that require the rapid processing of large data sets. In contrast, lightweight methods sacrifice accuracy and exhibit temporal artifacts that compromise change detection capabilities and long-term monitoring reliability.
The challenge of dynamic scene modeling encompasses multiple complex factors: temporal changes, occlusion handling, lighting variations, and motion patterns, while maintaining geometric consistency across temporal and spatial dimensions. Existing approaches, including Spatio-Temporal Gaussian [21] methods and deformable Gaussian Splatting techniques, demonstrate limitations when processing complex environmental changes and occlusions commonly encountered in complex dynamic scenes.
To address these limitations, this work proposes a novel adaptive hybrid reconstruction framework specifically designed for multi-temporal dynamic scene applications. Our adaptivity manifests in three key aspects: (1) spatially varying motion thresholds that automatically adjust to local scene characteristics, (2) learnable symmetry regularization weights optimized during training, and (3) dynamic computational resource allocation based on motion complexity. This enables our method to handle diverse dynamic scenes without manual parameter tuning.
Our approach combines Gaussian Splatting with adaptive deformation fields, featuring an intelligent decomposition mechanism that automatically distinguishes between static terrain features and dynamic environmental elements. The method incorporates radial basis function principles for efficient encoding of complex motion information [22], which have demonstrated effectiveness in handling both symmetric and asymmetric domains. Drawing insights from symmetry-based deformation methodologies [23], our framework exploits the inherent symmetries of Gaussian representations for static regions while employing specialized temporal tracking and deformation modeling for dynamic areas.
The primary aim of this research is to develop a scalable and accurate dynamic 3D reconstruction framework that maintains both geometric fidelity and temporal consistency for multi-temporal dynamic scene applications. The principal contributions include (1) an adaptive hybrid reconstruction framework that achieves a 60% improvement in computational efficiency while maintaining quantitative accuracy suitable for dynamic scene analysis, (2) a temporal deformation learning methodology that demonstrates enhanced sensitivity to subtle changes, including temporal variations and structural modifications, and (3) a scalable Gaussian point dynamics system incorporating radial basis functions optimized for multi-scale dynamic scene monitoring. These advances enable real-time monitoring capabilities essential for practical dynamic scene reconstruction systems. This research demonstrates that hybrid reconstruction frameworks can achieve both computational efficiency and temporal accuracy, enabling practical deployment in real-world dynamic scene reconstruction systems. Figure 1 provides an overview of our proposed pipeline, illustrating the key steps from input processing to final rendering.

2. Related Work

2.1. SLAM and Neural Scene Representation

Simultaneous Localization and Mapping (SLAM) techniques provide fundamental infrastructure for accurate camera pose estimation and incremental scene mapping in 3D reconstruction systems. The effectiveness of SLAM inherently relies on exploiting geometric invariances and symmetry principles to establish correspondence across multiple views and temporal frames. Recent advances in neural implicit SLAM and multi-agent collaborative frameworks have established robust foundations for continuous scene representation by leveraging rotational and translational symmetries in spatial tracking. The integration of SLAM with explicit scene representations, particularly Gaussian Splatting, has demonstrated significant potential for real-time reconstruction, where the radial symmetry of Gaussian primitives naturally aligns with the isotropic feature matching mechanisms in SLAM. These SLAM-guided approaches not only provide accurate camera trajectories but also enable symmetry-aware dynamic scene understanding and adaptive scene representation, establishing a strong foundation for the Gaussian-based reconstruction methods discussed in the following sections.

2.2. Gaussian Splatting

Gaussian Splatting is a state-of-the-art 3D reconstruction technique that constructs scenes using parametric Gaussian primitives—elliptical point clouds optimized for position, covariance, and opacity, enabling high-quality scene reconstruction and view synthesis. These Gaussian primitives are projected onto 2D image planes, leveraging their continuous probability distributions to generate photorealistic renderings while eliminating aliasing artifacts. The mathematical foundation of Gaussian Splatting relies on the anisotropic nature of 3D Gaussian distributions, which can be represented by their mean vectors and covariance matrices, providing an elegant framework for modeling complex geometric structures with inherent rotational and reflective symmetries.
Previous approaches, including PointNet [24], Neural Radiance Fields (NeRFs), and MeshNet [25], have made significant strides in 3D scene reconstruction. PointSIFT [26] and DGCNN [27] introduced efficient point-cloud feature extraction methods, while NeRF excels in light transport modeling. However, these methods face limitations in handling fine details, real-time processing, and high-resolution scenes. Moreover, traditional approaches often fail to explicitly leverage geometric symmetries present in natural and architectural scenes, leading to redundant computations and suboptimal reconstruction quality.
To address these limitations and enhance computational efficiency, Gaussian Splatting employs a comprehensive optimization framework incorporating hierarchical pruning strategies, importance sampling, and adaptive density control mechanisms. The method utilizes gradient descent to optimize the Gaussian primitive attributes to minimize discrepancies between rendered and reference images, with adaptive techniques including point duplication in low-gradient areas, splitting in high-gradient areas, and pruning low-opacity points. Furthermore, our implementation integrates multi-resolution voxel grid structures with optimized ray tracing algorithms, effectively balancing visual fidelity with computational performance requirements. Recent advances have demonstrated that incorporating symmetry-aware optimization strategies can significantly reduce the number of required primitives while maintaining reconstruction accuracy, particularly for objects exhibiting bilateral, rotational, or translational symmetries.

2.3. Symmetry in 3D Reconstruction

Symmetry plays a fundamental role in 3D scene understanding and reconstruction, as many natural and man-made objects exhibit various forms of geometric symmetry. Exploiting symmetry constraints has proven beneficial for improving reconstruction accuracy, reducing computational complexity, and enhancing geometric consistency. Reflective symmetry, commonly observed in architectural structures and biological forms, enables the completion of partially observed surfaces by mirroring information across symmetry planes. Rotational symmetry, prevalent in circular structures and mechanical components, allows for efficient representation through angular periodicity. Translational symmetry, characteristic of repetitive patterns in facades and textures, facilitates compact scene encoding through pattern replication.
In the context of Gaussian Splatting, symmetry-aware methods leverage these geometric properties to constrain the optimization process, enforce structural consistency, and reduce parameter redundancy. By detecting and exploiting symmetry in the input data, these approaches can achieve superior reconstruction quality with fewer Gaussian primitives, leading to more efficient memory utilization and faster rendering speeds. Recent work has explored automatic symmetry detection algorithms that identify dominant symmetry patterns in point clouds and incorporate them as soft or hard constraints during the optimization process. This symmetry-driven paradigm aligns naturally with the probabilistic nature of Gaussian distributions, as symmetric arrangements of Gaussians can be parameterized more compactly through shared parameters and transformation matrices.

2.4. Dynamic Scene Reconstruction

Spacetime Gaussian Splatting (4DGS) [28] extends Gaussian Splatting into a comprehensive framework for dynamic scene representation. 4DGS incorporates a temporal dimension into 3D Gaussian primitives, enabling the capture of position, color, opacity, and their temporal evolution. This extension facilitates the representation of geometric transformations and motion trajectories over time. 4DGS optimizes 4D Gaussian elements to achieve high-fidelity rendering from arbitrary viewpoints and temporal positions, demonstrating superior performance in dynamic content representation (e.g., deformations, translations) compared to static 3D methods.
Recent advances in explicit dynamic scene representations have expanded Gaussian-based reconstruction capabilities. Dynamic 3D Gaussians [29] introduce persistent tracking mechanisms that maintain temporal consistency through dynamic view synthesis, while deformable Gaussian methods [30] explore real-time continuous deformation modeling for photorealistic dynamic rendering. Alternative decomposition strategies have also emerged: K-Planes [31] demonstrates efficient spacetime factorization through explicit radiance fields, sharing conceptual similarities with plane-based decomposition approaches. Static–dynamic decomposition methods such as SDD-4DGS [32] further optimize computational efficiency by separately modeling static and dynamic scene components. For a comprehensive overview of neural rendering techniques for dynamic scenes, we refer readers to the survey by Chen et al. [33].
The temporal encoding mechanism of 4DGS employs time-dependent parametric functions to model smooth variations in spatial and chromatic attributes, thereby supporting high-quality frame interpolation, slow-motion effects, and variable-rate playback. The representation of temporal dynamics can benefit significantly from temporal symmetry analysis, where periodic or quasi-periodic motions exhibit symmetries in the time domain. For instance, cyclic motions such as walking gaits or rotating machinery demonstrate temporal periodicity that can be exploited to reduce the temporal sampling requirements and improve motion prediction accuracy. By identifying and modeling these temporal symmetries, 4DGS can achieve more compact representations of repetitive motion patterns and enable more efficient temporal interpolation.
Despite these advantages, 4DGS faces computational and memory challenges. Researchers have addressed these challenges through adaptive temporal sampling strategies, hierarchical data structures, and spatiotemporal pruning techniques, thereby improving computational efficiency for complex dynamic scenes. Symmetry-aware temporal modeling further enhances efficiency by recognizing that certain motion patterns can be represented through symmetric transformations over time, reducing the need for frame-by-frame optimization of all primitive attributes.
While real-time deployment remains challenging, ongoing research focuses on developing efficient rendering algorithms, leveraging hardware acceleration, and implementing parallel processing architectures. Our research implements an optimized 4DGS variant that effectively balances computational efficiency with visual fidelity for high-quality dynamic scene modeling and rendering applications. The integration of spatiotemporal symmetry constraints enables more robust tracking of deformable objects and improved handling of occlusions in dynamic scenarios.

2.5. Radial Basis Function

Radial Basis Functions (RBFs) [34] are widely utilized in computer graphics and machine learning applications. They effectively capture both local and global data features through radially symmetric functions, enabling precise representation of complex geometric shapes in graphics applications and facilitating effective learning of data patterns in machine learning tasks such as classification and regression. The inherent radial symmetry of RBFs makes them particularly well-suited for representing isotropic features and modeling phenomena that exhibit spherical or circular symmetry in spatial domains.
From a mathematical perspective, a radial basis function is a real-valued function whose argument is the Euclidean distance, typically represented as $\phi(\lVert x - c \rVert)$, where $x$ is the input point, $c$ is the center point, and $\lVert \cdot \rVert$ denotes the Euclidean norm. Common RBF types include Gaussian functions ($\phi(r) = e^{-(\varepsilon r)^2}$), multiquadric functions ($\phi(r) = \sqrt{1 + (\varepsilon r)^2}$), and thin-plate splines ($\phi(r) = r^2 \ln r$), each exhibiting distinct advantages in different application contexts. Gaussian RBFs excel in handling smoothly varying data, while multiquadric RBFs are particularly suitable for data with steep gradients. The choice of RBF kernel directly influences the smoothness and locality of the interpolation, with each kernel type imposing different symmetry properties on the reconstructed surface.
RBFs excel in interpolation tasks, enabling the fitting of irregularly distributed points without requiring a structured grid, making them particularly suitable for processing sparse point clouds (e.g., from laser scanning systems). The radial symmetry property ensures rotation-invariant interpolation, which is crucial for maintaining geometric consistency across different viewing angles. However, RBFs encounter computational challenges when applied to large datasets. Researchers have addressed these limitations through acceleration techniques such as fast multipole methods and hierarchical RBF structures, reducing computational costs by controlling RBF influence regions or employing multi-resolution data structures.
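As a concrete illustration of the interpolation role described above, the following minimal Python sketch fits Gaussian-RBF weights to scattered samples via a dense linear solve and evaluates the interpolant at query points. The kernel width, point counts, and function names are illustrative; large point clouds would require the acceleration structures mentioned earlier.

```python
import numpy as np

def gaussian_rbf(r, eps=1.0):
    """Gaussian RBF kernel phi(r) = exp(-(eps * r)^2)."""
    return np.exp(-(eps * r) ** 2)

def fit_rbf(centers, values, eps=1.0):
    """Solve for weights w such that sum_j w_j * phi(||c_i - c_j||) = values_i."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = gaussian_rbf(d, eps)
    return np.linalg.solve(A, values)

def eval_rbf(x, centers, weights, eps=1.0):
    """Interpolate at query points x from the fitted centers/weights."""
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return gaussian_rbf(d, eps) @ weights

# Example: interpolate a scalar field sampled at scattered 3D points.
rng = np.random.default_rng(0)
centers = rng.uniform(-1, 1, size=(50, 3))
values = np.sin(centers[:, 0]) + centers[:, 1] ** 2
weights = fit_rbf(centers, values, eps=2.0)
queries = rng.uniform(-1, 1, size=(5, 3))
print(eval_rbf(queries, centers, weights, eps=2.0))
```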
In our work, we integrate RBFs with complementary methods for hybrid 3D scene modeling, leveraging their local adaptability for detailed geometric representation while maintaining a balance between computational efficiency and memory utilization. The combination of RBF-based local interpolation with global symmetry constraints enables robust reconstruction of both symmetric structural elements and asymmetric fine details. This hybrid approach proves particularly effective for architectural scenes where large-scale symmetric patterns coexist with localized geometric variations and ornamental features.

3. Methods

3.1. Deformation Field for Gaussian Splatting

To achieve enhanced shadow reconstruction during the rendering process, we draw inspiration from 4D Gaussians and introduce a Gaussian deformation network that explicitly accounts for the inherent symmetries and regularities present in natural scenes. Shadow formation and object deformation often exhibit symmetric properties—temporal symmetry in lighting changes, spatial symmetry in geometric structures, and reciprocal symmetry between light sources and occluders. Recent work has demonstrated that leveraging symmetry properties significantly enhances reconstruction quality in dynamic scenes [35,36].
At each training stage, we incorporate deformation magnitude learning with dynamically adjusted learning rates. During the point pruning process, we observe that shadow generation resulting from viewpoint changes significantly impacts training outcomes. Therefore, at the fundamental level of point pruning, we implement dual learning for both deformation magnitudes and grid quantities, while maintaining consistency across symmetric or structurally related point pairs.
In the densification process, the addition of supplementary points could potentially induce deformation in the initially generated Gaussian point cloud and result in the loss of certain features. Consequently, we continuously perform pruning while preserving existing features throughout densification. To compensate for potentially missing point clouds from various viewpoints, we strategically clone specific points and their corresponding features to fill the gaps. Notably, when the scene exhibits bilateral or rotational symmetry, we ensure that cloned points respect these symmetric relationships, thereby improving reconstruction consistency and parameter efficiency. This symmetric cloning strategy aligns with recent findings that geometric symmetry preservation is crucial for high-fidelity 3D reconstruction [37]; it not only reduces redundancy but also enhances the model's ability to generalize across symmetric viewpoints.
The learning of deformation features is recorded before densification, facilitating more effective comprehension of deformation and grid characteristics. This approach naturally captures the reciprocal relationships between light sources and occluders, particularly in shadow formations where such correspondences play a crucial role.
For the integration of specific network features, we draw inspiration from the implementation methodology of HexPlane [38]. Similar multi-plane decomposition approaches have been successfully applied in dynamic scene reconstruction with explicit consideration of symmetric properties. By introducing a spatial encoding mechanism, we capture geometric structures and appearance details with greater precision, thereby enhancing the reconstruction quality in shadowed regions.
Our spacetime feature representation builds upon the HexPlane factorization, where the 4D spacetime volume is decomposed into six 2D feature planes. To leverage the complementary and symmetric nature of spatial and temporal dimensions, we organize these planes into symmetric spatial triplets $(x,y)$, $(x,z)$, $(y,z)$ and temporal triplets $(x,t)$, $(y,t)$, $(z,t)$. This symmetric organization enables efficient capture of both spatial geometric details and temporal dynamics while maintaining structural coherence across different plane orientations.
The spacetime feature extraction is formulated as
$$f_{ST}(x,t) = \sum_{l=0}^{L-1} \sum_{(i,j) \in P_l} \operatorname{interp}\!\left(R_l^{(i,j)}\right)\!\left(\pi_{(i,j)}(x,t)\right) + \lambda_c \sum_{T \in \mathcal{T}_l} \omega_T \cdot \operatorname{interp}\!\left(R_l^{(i,j)}\right)\!\left(\pi_{(i,j)}(T(x),t)\right)$$
where $f_{ST}(x,t)$ represents the spacetime feature at spatial location $x$ and time $t$, $R_l^{(i,j)}$ denotes the $l$-th level feature plane for coordinate pair $(i,j)$, $\pi_{(i,j)}$ is the projection function that maps 4D coordinates to the corresponding 2D plane coordinates, and $\operatorname{interp}$ performs bilinear interpolation. The second term introduces symmetric coupling across related transformations $T \in \mathcal{T}_l$ (including reflections and rotations) with learned weights $\omega_T$ and coupling strength $\lambda_c$, which helps maintain consistency in symmetric regions. The coordinate pairs are defined as
$$P_l = \{(x,y),\,(x,z),\,(y,z),\,(x,t),\,(y,t),\,(z,t)\}$$
This multi-scale representation with structural coupling enables efficient capture of both local variations and global regularities. The projection function $\pi_{(i,j)}$ extracts the relevant 2D coordinates from the 4D spacetime point, allowing each plane to specialize in specific spatial or temporal correlations.
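The sketch below illustrates the plane-projection and bilinear-interpolation step of this factorization for a single resolution level, omitting the symmetric coupling term. The plane resolutions, feature dimensions, and the simple summation across planes are simplifying assumptions rather than the exact implementation.

```python
import numpy as np

# Six 2D feature planes indexed by coordinate pairs: (x,y),(x,z),(y,z),(x,t),(y,t),(z,t).
PAIRS = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]

def bilinear(plane, u, v):
    """Bilinearly interpolate a (H, W, C) feature plane at normalized coords u, v in [0, 1]."""
    H, W, _ = plane.shape
    fu, fv = u * (H - 1), v * (W - 1)
    i0, j0 = int(np.floor(fu)), int(np.floor(fv))
    i1, j1 = min(i0 + 1, H - 1), min(j0 + 1, W - 1)
    du, dv = fu - i0, fv - j0
    return ((1 - du) * (1 - dv) * plane[i0, j0] + du * (1 - dv) * plane[i1, j0]
            + (1 - du) * dv * plane[i0, j1] + du * dv * plane[i1, j1])

def spacetime_feature(planes, xyzt):
    """Accumulate interpolated features over the six planes for a normalized 4D point (x, y, z, t)."""
    feat = np.zeros(planes[0].shape[-1])
    for plane, (i, j) in zip(planes, PAIRS):
        feat += bilinear(plane, xyzt[i], xyzt[j])   # project the 4D point onto plane (i, j)
    return feat

planes = [np.random.randn(64, 64, 16) for _ in PAIRS]  # one resolution level, 16 channels
print(spacetime_feature(planes, np.array([0.3, 0.7, 0.5, 0.1])).shape)  # (16,)
```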
For Gaussian deformation modeling, we extend the standard 3D Gaussian representation to incorporate temporal dynamics while preserving geometric symmetries:
$$\big(X(t),\, r(t),\, s(t)\big) = \big(X + \Delta X(t),\; r + \Delta r(t),\; s + \Delta s(t)\big)$$
where $X(t)$, $r(t)$, and $s(t)$ represent the time-dependent position, rotation, and scale parameters. The temporal deformation offsets are computed using radial basis functions augmented with structural coupling terms:
$$\Delta X(t) = \sum_{k=1}^{K} w_k^{X}\, \phi(t - t_k) + \beta_p \sum_{m \in N(X)} w_m\, \Delta X_m(t)$$
$$\Delta r(t) = \sum_{k=1}^{K} w_k^{r}\, \phi(t - t_k) + \beta_r\, \mathcal{R}[r]$$
$$\Delta s(t) = \sum_{k=1}^{K} w_k^{s}\, \phi(t - t_k) + \beta_s\, \mathcal{S}[s]$$
where $\phi(r) = \exp(-\gamma r^2)$ is the Gaussian RBF kernel, $\{t_k\}_{k=1}^{K}$ are the temporal control points, and $w_k^{\{X,r,s\}}$ are the learned weights for each deformation component. The term $N(X)$ identifies structurally related points, while $\mathcal{R}$ and $\mathcal{S}$ enforce rotational and scale consistency across related pairs. The coefficients $\beta_p$, $\beta_r$, $\beta_s$ control the strength of structural coupling, and $\gamma$ controls the temporal smoothness. This formulation ensures smooth temporal transitions while maintaining computational efficiency during optimization. The incorporation of structural coupling terms in our deformation model draws inspiration from physics-based deformation approaches that maintain temporal coherence in dynamic systems [39].
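To make the RBF-based offset computation concrete, the following sketch evaluates the position offset for one Gaussian from temporal control points and learned weights, omitting the structural coupling term. The number of control points, the kernel bandwidth, and the variable names are illustrative.

```python
import numpy as np

def rbf_kernel(t, t_k, gamma=10.0):
    """Gaussian RBF in time: phi(t - t_k) = exp(-gamma * (t - t_k)^2)."""
    return np.exp(-gamma * (t - t_k) ** 2)

def deform_position(X0, t, t_ctrl, W_x, gamma=10.0):
    """
    X0:     (3,) canonical Gaussian center
    t_ctrl: (K,) temporal control points
    W_x:    (K, 3) learned per-control-point position offsets
    Returns the deformed center X(t) = X0 + sum_k w_k * phi(t - t_k).
    """
    phi = rbf_kernel(t, t_ctrl, gamma)          # (K,)
    return X0 + phi @ W_x                       # (3,)

K = 8
t_ctrl = np.linspace(0.0, 1.0, K)
W_x = 0.05 * np.random.randn(K, 3)              # small learned offsets
X0 = np.array([0.2, -0.1, 1.5])
for t in (0.0, 0.25, 0.5, 1.0):
    print(t, deform_position(X0, t, t_ctrl, W_x))
```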

3.1.1. Adaptive Symmetry Weight Mechanism

To prevent over-regularization in asymmetric regions, we implement learnable symmetry coefficients $\beta_p$, $\beta_r$, $\beta_s$ rather than fixed hyperparameters:
$$\beta_i(x) = \sigma\big(W_\beta \cdot f_{spatial}(x) + b_\beta\big), \quad i \in \{p, r, s\}$$
where $\sigma$ is the sigmoid function ensuring $\beta_i \in [0, 1]$. During training, the reconstruction loss naturally penalizes overly strong constraints in asymmetric regions, driving $\beta_i$ toward lower values, while genuine symmetric regions benefit from higher weights. This synergizes with our motion-aware decomposition, where high-motion regions automatically receive reduced symmetry constraints.
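A minimal PyTorch sketch of this weight head is given below, assuming a per-point spatial feature vector is already available; the feature dimension and module name are placeholders rather than the actual implementation.

```python
import torch
import torch.nn as nn

class AdaptiveSymmetryWeight(nn.Module):
    """Predicts per-point symmetry weights beta in [0, 1] from spatial features."""
    def __init__(self, feat_dim=32, n_weights=3):   # one weight each for position, rotation, scale
        super().__init__()
        self.linear = nn.Linear(feat_dim, n_weights)

    def forward(self, f_spatial):                   # f_spatial: (N, feat_dim)
        return torch.sigmoid(self.linear(f_spatial))  # (N, 3): rows give (beta_p, beta_r, beta_s)

head = AdaptiveSymmetryWeight()
betas = head(torch.randn(1024, 32))
print(betas.shape, betas.min().item(), betas.max().item())
```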
To effectively learn these coupled deformation patterns, we employ a dual-branch architecture where one branch captures general deformation while the other enforces structural consistency through a coupling loss:
$$\mathcal{L}_{couple} = \sum_{(X_i, X_j) \in P_{relate}} \big\| \Delta X_i(t) - T_{ij}\big(\Delta X_j(t)\big) \big\|^2$$
where $P_{relate}$ denotes pairs of structurally related points and $T_{ij}$ represents the transformation relating them. The total optimization objective becomes
$$\mathcal{L}_{total} = \mathcal{L}_{render} + \lambda_{def}\, \mathcal{L}_{deform} + \lambda_{couple}\, \mathcal{L}_{couple} + \lambda_{reg}\, \mathcal{L}_{reg}$$
The exploitation of symmetry properties significantly reduces the dimensionality of the optimization space, enabling high-quality reconstruction even from limited viewpoints [40]. This dimension reduction is particularly valuable in shadow reconstruction scenarios where viewpoint coverage may be incomplete or asymmetric. This approach naturally leverages the inherent symmetries and structural properties of the scene, resulting in more coherent 4D reconstruction with improved parameter efficiency. By explicitly modeling symmetric relationships in both feature representation and deformation dynamics, our method achieves enhanced consistency across viewpoints while reducing computational redundancy. Experimental validation (Section 4.5, Table 1) demonstrates that our method automatically learns region-specific symmetry weights spanning a 10× range (0.08–0.98), confirming effective adaptation to diverse scene characteristics without manual tuning.
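For clarity, the coupling and total losses above can be sketched as follows; the pair list, transformation tensors, and loss weights are assumed inputs produced elsewhere in the pipeline, and the weight values are illustrative.

```python
import torch

def coupling_loss(delta_x, pairs, transforms):
    """
    delta_x:    (N, 3) per-Gaussian deformation offsets at time t
    pairs:      (P, 2) long tensor of indices (i, j) of structurally related Gaussians
    transforms: (P, 3, 3) rotation/reflection matrices relating each pair
    Penalizes ||dX_i - T_ij(dX_j)||^2 over related pairs.
    """
    di = delta_x[pairs[:, 0]]                                   # (P, 3)
    dj = delta_x[pairs[:, 1]]                                   # (P, 3)
    dj_mapped = torch.bmm(transforms, dj.unsqueeze(-1)).squeeze(-1)
    return ((di - dj_mapped) ** 2).sum(dim=-1).mean()

def total_loss(l_render, l_deform, l_couple, l_reg,
               lam_def=0.1, lam_couple=0.05, lam_reg=0.01):
    """Weighted sum of the rendering, deformation, coupling, and regularization terms."""
    return l_render + lam_def * l_deform + lam_couple * l_couple + lam_reg * l_reg
```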

3.1.2. Symmetry Detection and Preservation Framework

To address how symmetries are detected and maintained, we present our fully automatic three-stage framework requiring no manual annotation. Figure 2 illustrates our automatic symmetry detection pipeline.
Stage 1: Geometric Initialization. We leverage COLMAP's Structure-from-Motion point cloud to automatically detect symmetric structures. For a point cloud $P = \{p_i\}_{i=1}^{N}$, we employ RANSAC-based plane fitting to identify potential symmetry planes:
$$\pi:\; n^{T} x + d = 0, \qquad S(\pi) = \frac{1}{N} \sum_{i=1}^{N} \min_{j} \big\| \operatorname{reflect}_{\pi}(p_i) - p_j \big\|_2$$
where $\operatorname{reflect}_{\pi}(p)$ denotes reflection across plane $\pi$. Planes with $S(\pi) < \sigma$ (typically $\sigma = 0.15$) are classified as symmetry candidates. Rotational symmetries are detected similarly by testing rotation axes with angles $\theta \in \{\pi/2, \pi/3, 2\pi/3, \pi\}$.
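The following sketch illustrates the reflection-based symmetry score. The candidate-plane sampling is deliberately simplified (random normals through random points rather than a full RANSAC plane fit), and the brute-force nearest-neighbor search stands in for a KD-tree on large clouds.

```python
import numpy as np

def reflect(points, n, d):
    """Reflect points across the plane n^T x + d = 0 (n unit-length)."""
    dist = points @ n + d                        # signed distances, (N,)
    return points - 2.0 * dist[:, None] * n[None, :]

def symmetry_score(points, n, d):
    """Mean nearest-neighbor distance between the reflected cloud and the original cloud."""
    refl = reflect(points, n, d)
    d2 = np.linalg.norm(refl[:, None, :] - points[None, :, :], axis=-1)
    return d2.min(axis=1).mean()

def detect_symmetry_plane(points, n_trials=200, rng=None):
    """Randomized search: sample candidate planes and return the best-scoring one."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best_plane, best_score = None, np.inf
    for _ in range(n_trials):
        n = rng.normal(size=3)
        n /= np.linalg.norm(n)
        d = -n @ points[rng.integers(len(points))]   # plane passing through a random point
        s = symmetry_score(points, n, d)
        if s < best_score:
            best_plane, best_score = (n, d), s
    return best_plane, best_score   # accept as a symmetry candidate if best_score < 0.15
```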
Stage 2: Data-Driven Learning. Beyond geometric priors, our framework learns implicit symmetries through coupling weights $w_m$:
$$w_m = \sigma\big(\operatorname{MLP}_{coupling}([f_i, f_j, r_{ij}])\big)$$
where $f_i$, $f_j$ are learned features and $r_{ij} = X_i - X_j$ is the relative position. During training, $w_m \to 1.0$ for strongly symmetric pairs, $w_m \approx 0.5$ for approximate symmetries, and $w_m \to 0.0$ for unrelated regions.
Stage 3: Dynamic Adjustment. To prevent over-regularization, we adapt constraint strength through
$$\lambda_{couple}^{(t+1)} = \lambda_{couple}^{(t)} \cdot \exp\!\left(-\alpha \cdot \frac{\partial \mathcal{L}_{render}}{\partial \lambda_{couple}}\right)$$
When symmetry constraints conflict with rendering quality, the weight automatically decreases, prioritizing data fidelity over structural regularity.
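A one-step sketch of this multiplicative adjustment is shown below, assuming the gradient of the rendering loss with respect to the coupling weight has already been estimated; the step size and clamping range are illustrative.

```python
import math

def update_coupling_weight(lam_couple, grad_render_wrt_lam, alpha=0.1,
                           lam_min=1e-4, lam_max=1.0):
    """
    Multiplicative update lam <- lam * exp(-alpha * dL_render/dlam), clamped to a safe range.
    A positive gradient (the symmetry term hurting rendering) shrinks the weight;
    a negative gradient lets the constraint strengthen.
    """
    new_lam = lam_couple * math.exp(-alpha * grad_render_wrt_lam)
    return min(max(new_lam, lam_min), lam_max)

# Example: the constraint currently conflicts with rendering quality.
print(update_coupling_weight(0.05, grad_render_wrt_lam=2.0))   # weight decreases
print(update_coupling_weight(0.05, grad_render_wrt_lam=-1.0))  # weight increases
```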
Preservation Mechanisms. Our strategy operates across three domains: (1) Spatial: HexPlane's symmetric triplets $(x,y)$, $(x,z)$, $(y,z)$ promote consistency; (2) Temporal: temporal triplets $(x,t)$, $(y,t)$, $(z,t)$ maintain relationships across time; (3) Deformation: operators $\mathcal{R}[r]$ and $\mathcal{S}[s]$ enforce rotational and scale consistency:
$$\mathcal{R}[r_i] = \sum_{j \in N_{sym}(i)} w_j \big\| r_i - T_{sym}(r_j) \big\|^2$$
All mechanisms are implemented as soft constraints through regularization terms, allowing optimization to override symmetry assumptions when they conflict with observed data. The symmetry detection overhead is minimal (∼2–3 min) and performed once during initialization, adding <5% training overhead.

3.2. Gaussian Sphere Spatiotemporal Analysis

Because the Neural 3D Video (N3DV) dataset contains shadows generated by the movement of humans and animals, we adopted selected techniques from Spacetime Gaussians (STGs) [41] and, with the assistance of radial basis functions, established a comprehensive motion analysis for each Gaussian sphere. This analysis systematically evaluates multiple dynamic parameters, including angular velocity, displacement, scaling, opacity, and deformation magnitudes.
Our motion analysis framework extends beyond simple trajectory tracking to capture the complex temporal evolution of shadows cast by moving subjects. The temporal evolution of shadow patterns often exhibits symmetric properties, particularly when motion follows predictable paths or periodic patterns. Understanding symmetry-breaking mechanisms in dynamic systems is essential for accurately modeling temporal variations. By leveraging radial basis functions as interpolation kernels, we achieve smooth transitions between observed states and accurately model the continuous deformation of shadows across frames. This approach enables us to preserve temporal coherence even in challenging scenarios with rapid movements or occlusions.
The integration of key STG concepts with our deformation network proves particularly effective for modeling the non-rigid nature of shadow transformations. While conventional methods often struggle with the complex relationship between object movement and shadow projection, our combined approach successfully disentangles these factors. We implement an adaptive weighting scheme that dynamically adjusts the influence of each motion parameter based on scene context, allowing the system to prioritize different aspects of motion depending on the specific shadow characteristics.
The temporal dynamics are formulated as
$$\sigma_t(s) = \sigma_s^2 \exp\!\left(-s_\tau\, |t - t_c|^2\right)$$
$$\mu_t(s) = \sum_{k=0}^{n_p} \beta_k\, (t - t_c)^k$$
$$q_t(s) = \sum_{k=0}^{n} \alpha_k\, (t - t_c)^k$$
where $\sigma_t(s)$ represents the Gaussian temporal variance, $\mu_t(s)$ denotes the temporal mean position, and $q_t(s)$ indicates the temporal opacity. Here, $t$ is the current time, $t_c$ is the center time of the Gaussian, $s_\tau$ controls the temporal spread, $\beta_k$ and $\alpha_k$ are learned coefficients for position and opacity, respectively, and $n_p$, $n$ determine the degree of the temporal polynomials.
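The sketch below evaluates these temporal terms for a single Gaussian sphere; the polynomial degrees, coefficient values, and variable names are purely illustrative.

```python
import numpy as np

def temporal_variance(sigma_s, s_tau, t, t_c):
    """Temporal Gaussian weight: sigma_s^2 * exp(-s_tau * |t - t_c|^2)."""
    return sigma_s ** 2 * np.exp(-s_tau * np.abs(t - t_c) ** 2)

def temporal_polynomial(coeffs, t, t_c):
    """Evaluate sum_k c_k (t - t_c)^k for position (beta_k) or opacity (alpha_k) coefficients."""
    dt = t - t_c
    return sum(c * dt ** k for k, c in enumerate(coeffs))

# One Gaussian sphere with a cubic position trajectory and a linear opacity drift.
beta = [np.array([0.0, 0.0, 1.0]),        # constant term (position at t_c)
        np.array([0.3, 0.0, 0.0]),        # linear velocity
        np.array([0.0, -0.1, 0.0]),       # quadratic term
        np.array([0.02, 0.0, 0.0])]       # cubic term
alpha = [1.0, -0.2]
for t in (0.0, 0.5, 1.0):
    mu_t = temporal_polynomial(beta, t, t_c=0.5)
    q_t = temporal_polynomial(alpha, t, t_c=0.5)
    w_t = temporal_variance(sigma_s=1.0, s_tau=4.0, t=t, t_c=0.5)
    print(t, mu_t, q_t, w_t)
```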
Experimental results demonstrate that this motion-aware approach significantly improves shadow reconstruction accuracy on the N3DV dataset, particularly in sequences featuring complex articulated movements. The temporal consistency of rendered shadows shows marked improvement compared to static approaches, with smoother transitions and more realistic deformation patterns that closely match the ground truth. However, for static scenes within composite scenes, some information appears overly complex, and we only need to achieve reconstruction for static elements, which leads us to the third component of our method.

3.3. Dynamic–Static Scene Reconstruction

Initially, inspired by the Segment Anything Model (SAM) [42], we implemented a strategy to differentiate between dynamic and static scenes. However, the integration of SAM significantly compromised the model’s computational efficiency, and the boundaries of regions segmented by SAM exhibited considerable imprecision, often failing to properly identify shadowed areas.
To address these limitations and provide a more principled approach, we developed an automatic motion-aware decomposition strategy. Our method leverages the inherent motion characteristics present in multi-view temporal sequences by defining a motion intensity field M ( x , t ) that quantifies temporal variation at each spatial location:
$$M(x,t) = \frac{1}{|N_t|} \sum_{\Delta t \in N_t} \big\| I(x,t) - I(x, t+\Delta t) \big\|_2$$
where $I(x,t)$ represents the intensity at pixel $x$ and time $t$, and $N_t$ is a temporal neighborhood window. This metric effectively captures both direct object motion and indirect effects such as shadow dynamics. The dynamic–static decomposition is performed using an adaptive thresholding strategy:
$$D(x) = \begin{cases} 1 & \text{if } M(x,t) > \theta_{adaptive}(x) \\ 0 & \text{otherwise} \end{cases}$$
where $\theta_{adaptive}(x)$ is a spatially varying threshold computed using Otsu's method applied locally within spatial neighborhoods. This approach naturally accounts for varying illumination conditions and shadow characteristics across different scene regions.
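The following sketch computes the motion intensity map from a frame stack and derives a spatially varying mask using a block-wise Otsu threshold. Consecutive-frame differences stand in for the temporal neighborhood window, and the block size and histogram resolution are assumptions that approximate the local application of Otsu's method described above.

```python
import numpy as np

def motion_intensity(frames):
    """M(x) over a clip: mean L2 difference between neighboring frames. frames: (T, H, W, C)."""
    diffs = np.linalg.norm(frames[1:] - frames[:-1], axis=-1)   # (T-1, H, W)
    return diffs.mean(axis=0)                                   # (H, W)

def otsu_threshold(values, n_bins=64):
    """Classic Otsu threshold maximizing between-class variance of a 1D sample."""
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist / max(hist.sum(), 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for k in range(1, n_bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t

def dynamic_mask(M, block=64):
    """Spatially varying threshold: Otsu computed per block, D(x) = 1 where M exceeds it."""
    D = np.zeros_like(M, dtype=bool)
    H, W = M.shape
    for y in range(0, H, block):
        for x in range(0, W, block):
            patch = M[y:y + block, x:x + block]
            D[y:y + block, x:x + block] = patch > otsu_threshold(patch.ravel())
    return D
```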
For the reconstruction of dynamic regions ($D(x) = 1$), we employ our specialized temporal deformation method with symmetric constraints as described in the previous subsection, while for static regions ($D(x) = 0$), we leverage the original 3D Gaussian Splatting framework with strategic code modifications. These adaptations allow 3DGS to effectively process static regions from multiple viewpoints. The resulting binary mask $D(x)$ guides our hybrid reconstruction pipeline, effectively partitioning the reconstruction process into complementary components. This approach shares similarities with multi-stage deformation strategies that separately model color and shape variations in dynamic scenes, while maintaining bilateral consistency across the scene.
By implementing a patch-based pruning strategy combined with morphological operations to refine the initial segmentation, we successfully mitigate edge blurring artifacts inherent to Gaussian sphere reconstruction. The composite results from both dynamic and static region reconstructions demonstrate remarkable visual coherence and quality while maintaining theoretical rigor in the decomposition process. This unified framework effectively handles both symmetric static structures and dynamic elements with temporal variations, providing a comprehensive solution for complex scene reconstruction.

3.4. Multi-Granularity Motion Decomposition and Overlap Handling

3.4.1. Three-Level Granularity Processing

Our motion decomposition employs a hierarchical framework balancing computational efficiency and accuracy. Figure 3 demonstrates our three-level hybrid granularity pipeline.
Level 1: Pixel-Level Detection. Fine-grained motion boundaries are captured at pixel resolution:
$$M(x,t) = \frac{1}{|N_t|} \sum_{\Delta t \in N_t} \big\| I(x,t) - I(x, t+\Delta t) \big\|_2$$
enabling detection of subtle motions (<2 pixels/frame).
Level 2: Gaussian-Level Aggregation. Motion information is aggregated to Gaussians for computational tractability:
$$M_G(G_i, t) = \frac{1}{|P_i|} \sum_{x \in P_i} M(x,t) \cdot w(x, G_i)$$
where $w(x, G_i) = \exp\!\left(-\|x - \mu_i\|^2 / 2\sigma_i^2\right)$, reducing the number of optimization variables from $O(10^6)$ to $O(10^5)$.
Level 3: Adaptive Clustering. Morphological operations refine segmentation:
$$D_{\text{refined}}(x) = \operatorname{MorphClose}\big(\operatorname{MorphOpen}(D(x), k_1),\, k_2\big)$$
producing coherent motion regions (typically $k_1 = 3 \times 3$, $k_2 = 5 \times 5$).
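A compact sketch of the Gaussian-level aggregation and morphological refinement steps follows; the projected Gaussian footprint, pixel coordinates, and structuring-element sizes are assumed inputs rather than quantities defined in the text.

```python
import numpy as np
from scipy import ndimage

def aggregate_motion(M, pixel_xy, mu_px, sigma_px):
    """
    Level 2: aggregate pixel motion onto one Gaussian.
    M:        (H, W) motion intensity map
    pixel_xy: (P, 2) integer pixel coordinates inside the Gaussian's footprint P_i
    mu_px:    (2,)   projected Gaussian center in pixels
    sigma_px: scalar projected Gaussian extent in pixels
    """
    m = M[pixel_xy[:, 1], pixel_xy[:, 0]]                           # (P,)
    w = np.exp(-np.sum((pixel_xy - mu_px) ** 2, axis=1) / (2 * sigma_px ** 2))
    return (m * w).mean()

def refine_mask(D, open_size=3, close_size=5):
    """Level 3: morphological open then close to produce coherent motion regions."""
    opened = ndimage.binary_opening(D, structure=np.ones((open_size, open_size)))
    return ndimage.binary_closing(opened, structure=np.ones((close_size, close_size)))
```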

3.4.2. Overlapping Motion Handling

Four mechanisms address motion overlaps:
RBF Spatiotemporal Interpolation. Multiple motion sources blend naturally:
$$\Delta X_{\text{total}}(x,t) = \sum_{s=1}^{S} \alpha_s(x) \sum_{k=1}^{K_s} w_{s,k}\, \phi\big(\| (x,t) - (c_{s,k}, t_{s,k}) \|\big)$$
where $\alpha_s(x) = \exp\!\left(-\|x - \mu_s\|^2 / 2\sigma_s^2\right)$.
Neighborhood Consistency. Motion coherence is enforced spatially:
$$\mathcal{L}_{\text{neighbor}} = \sum_i \sum_{j \in N_{\text{spatial}}(i)} w_{ij} \big\| \Delta X_i(t) - \Delta X_j(t) \big\|^2$$
with $w_{ij} = \exp\!\left(-\|X_i - X_j\|^2 / 2h^2\right)$ and bandwidth $h = 0.1$ m.
Multi-Scale Features. Hierarchical representation captures details and context:
$$f_{\text{motion}}(x,t) = \sum_{l=0}^{L-1} \omega_l \cdot \operatorname{Conv}_l\big(M(x,t)\big)$$
Soft Boundaries. Continuous probability distributions eliminate discontinuities:
$$p_{\text{dynamic}}(x) = \sigma\!\left(\frac{M(x,t) - \theta_{\text{adaptive}}(x)}{\tau}\right)$$
where $\tau = 0.05$ controls the transition sharpness.
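The sketch below combines the soft dynamic probability with RBF blending of overlapping motion sources; the dictionary layout for motion sources and the shared kernel bandwidth are illustrative assumptions rather than the exact data structures used in training.

```python
import numpy as np

def dynamic_probability(M, theta_adaptive, tau=0.05):
    """Soft dynamic/static assignment: sigmoid((M - theta) / tau) instead of a hard mask."""
    return 1.0 / (1.0 + np.exp(-(M - theta_adaptive) / tau))

def blend_deformation(x, t, sources, gamma=10.0):
    """
    Blend S overlapping motion sources. Each source carries a spatial center mu_s, spread sigma_s,
    and RBF control points (centers, times) with per-control-point weights of shape (K, 3).
    """
    total = np.zeros(3)
    for src in sources:
        alpha = np.exp(-np.sum((x - src["mu"]) ** 2) / (2 * src["sigma"] ** 2))
        dists = np.sqrt(np.sum((src["centers"] - x) ** 2, axis=1) + (src["times"] - t) ** 2)
        total += alpha * (np.exp(-gamma * dists ** 2) @ src["weights"])
    return total
```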

4. Experiment

4.1. Experimental Setup and Validation Protocol

Hardware Configuration: All experiments were conducted on identical hardware (NVIDIA RTX 3090 GPU, NVIDIA Corporation, Santa Clara, CA, USA; 32 GB RAM; Intel i9-10900K CPU, Intel Corporation, Santa Clara, CA, USA) to ensure fair comparison. Each method was implemented using the same computational precision (FP32) and optimization settings where applicable. Evaluation Protocol: To ensure robust evaluation, we implemented a rigorous validation protocol. Cross-validation: each dataset was split into training (70%), validation (15%), and testing (15%) sets using stratified sampling to maintain temporal consistency. Multiple runs: all quantitative results represent averages over three independent training runs with different random initializations. Confidence intervals: 95% confidence intervals were computed using bootstrap sampling (n = 1000 iterations). Ablation studies: systematic component removal was conducted to validate each methodological contribution.
Evaluation Metrics: We employed multiple complementary metrics to provide a comprehensive assessment:
-
PSNR: Peak Signal-to-Noise Ratio for pixel-level reconstruction accuracy;
-
SSIM: Structural Similarity Index for perceptual quality assessment;
-
LPIPS: Learned Perceptual Image Patch Similarity for deep feature comparison;
-
Training Time: Wall-clock time for complete model training;
-
Memory Usage: Peak GPU memory consumption during training.

4.2. Implementation Details

Preprocessing of the Dataset. We developed an efficient COLMAP preprocessing methodology that transitions from SAM-based segmentation to a computationally optimized bounding box approach for video processing. This method isolates dynamic regions by leveraging temporal acceleration and velocity variations in Gaussian points, enabling reliable differentiation between dynamic and static elements. Our minimal bounding boxes encapsulate moving objects with precise boundaries, significantly reducing training time while preserving reconstruction quality despite including a small percentage of static points.
Rendering Process. During rendering, we employed a methodological approach treating the entire image as background with bounding box constraints, reconstructing scenes through a streamlined dynamic Gaussian representation that concentrates computational resources on essential features. We optimized the original 3D Gaussian Splatting framework for efficient multi-view data processing, designating backgrounds as static entities to mitigate potential artifacts in dynamic region reconstruction. Our methodology preserves training efficiency by extending 3DGS principles with targeted modifications for dynamic content.

4.3. Datasets

Neural 3D Video Dataset. The Neural 3D Video Dataset (N3DV) was captured in a controlled indoor environment with a multi-camera setup, featuring six dynamic scenes (coffee martini, cook spinach, cut roasted beef, flame salmon, flame steak, and sear steak). Each sequence was processed into 300 frames using COLMAP to extract camera parameters. The dataset includes complex dynamic elements with human subjects and a dog against static backgrounds. For evaluation, we used the first camera as a reference and employed the PSNR, SSIM [43], and LPIPS metrics to assess visual fidelity.
Google Immersive Dataset [44]. The Google Immersive Dataset was acquired in indoor and outdoor environments using a custom array of 46 time-synchronized fisheye cameras distributed on a hemispherical dome, as described in [44]. Due to computational constraints (RTX 3090 GPU, 32 GB memory), we utilized a subset of cameras for training. This dataset served to benchmark computational efficiency and reconstruction quality in challenging shadow regions. Our evaluation focused on two exhibition scenes with complex lighting and a cave scene with extreme dynamic range. As shown in Table 2 and Figure 4, our method demonstrates robust performance under severe computational constraints, with notable improvements in PSNR and SSIM metrics after optimization across all three challenging scenes.

4.4. Comparative Experiment

As shown in Table 3, our method achieves superior performance in both reconstruction quality and computational efficiency. The analysis demonstrates that our method achieves the best performance both in PSNR and in time efficiency. Specifically, in terms of efficiency, our method outperforms 4DGS and STG by 39.4% and 15.3% in training time, respectively.

Comprehensive Comparison with State-of-the-Art Methods

To provide a thorough evaluation of our method’s performance across different paradigms, we conducted comprehensive comparisons with both implicit (NeRF-based) and explicit (Gaussian-based) representations. Table 4 presents a complete performance analysis including methods that were not included in our initial comparison.
Implementation and Evaluation Protocol:
To ensure fair comparison, we employed rigorous evaluation protocols for different baseline categories:
  • Directly Implemented Methods (DyNeRF, HyperNeRF): We implemented these NeRF-based methods using official codebases and trained them on identical hardware (NVIDIA RTX 3090 GPU, NVIDIA Corporation, Santa Clara, CA, USA; 32 GB RAM) with the same N3DV dataset splits and evaluation protocols. All hyperparameters were set according to the original papers' recommendations.
  • Reproduced Methods (4DGS, STG): These Gaussian-based methods were evaluated using official implementations with identical experimental settings as our method.
  • Estimated Performance (SDD-4DGS): Since the official implementation of SDD-4DGS was not publicly available at the time of writing, we provide conservative performance estimates based on (1) relative improvements reported in the original paper over vanilla 4DGS, (2) computational complexity analysis of static–dynamic decomposition (expected 3–7% speedup based on architecture analysis), and (3) our vanilla 4DGS baseline performance. We compute estimates as $\text{Estimated}_{\text{SDD-4DGS}} = \text{Baseline}_{\text{4DGS}} + \text{RelativeImprovement}_{\text{reported}}$. This approach provides conservative bounds and is clearly marked with asterisks in all tables and figures.
Key Findings:
The comprehensive comparison reveals several important insights:
  • Superiority over Implicit Methods: Compared to NeRF-based approaches (DyNeRF, HyperNeRF), our method achieves approximately 12× training acceleration, 60% memory reduction, and superior reconstruction quality (+2.49 dB PSNR over HyperNeRF). This demonstrates the fundamental advantages of explicit Gaussian representations for dynamic scene modeling.
  • Advancement over Gaussian Methods: Among explicit representations, our method outperforms both vanilla 4DGS and specialized variants. The improvements stem from (a) symmetry-aware optimization enabling 19% parameter reduction while maintaining consistency, (b) RBF-based temporal modeling providing smooth continuous-time representation with 0.5–0.8 dB quality gains, and (c) automatic motion-aware decomposition eliminating the need for external segmentation tools (3–5 min preprocessing time savings).
  • State-of-the-Art Performance: Our method achieves the best quality–efficiency balance, with simultaneous improvements in reconstruction quality (PSNR: 31.41 dB), perceptual metrics (LPIPS: 0.086), training efficiency (37.7 min), and memory footprint (12.4 GB). Relative to the estimated SDD-4DGS performance, we achieve +1.23 dB PSNR improvement with 25.8% faster training and 21.5% memory reduction.
Scene-Level Performance Analysis:
To provide granular insights into method performance across different scene characteristics, Table 5 presents per-scene comparisons with key baseline methods.
The scene-level analysis reveals that our method maintains robust performance across diverse scenarios. Notably, we achieve substantial improvements in challenging conditions such as flame scenes (complex lighting dynamics) and rapid motion sequences (cook_spinach, cut_roasted_beef), validating the effectiveness of our symmetry-aware deformation modeling and RBF-based temporal representation.
Memory Efficiency Analysis: Beyond training time improvements, our method demonstrates significant advantages in memory consumption and computational resource utilization. As shown in Table 6, we achieve 16.2% reduction in peak GPU memory usage compared to STG, while point-cloud sizes are reduced by 19.0% and model parameters by 15.5%. The efficiency ratio, computed as PSNR per unit of memory usage, shows a 19.6% improvement, confirming that our motion-aware decomposition strategy effectively optimizes resource allocation. This memory efficiency is particularly crucial for deployment on resource-constrained platforms and enables processing of higher-resolution scenes within the same hardware limitations.
Table 7 provides a comprehensive scene-by-scene comparison between our method and STG across all N3DV sequences. Our method outperforms STG in most scenes in terms of PSNR and LPIPS metrics, with particularly significant improvements in challenging scenes with complex motion and lighting conditions.
To further demonstrate our method’s robustness on particularly challenging scenarios, Table 8 presents a detailed comparison on the “flame salmon” sequence, which features complex lighting dynamics, rapid motion, and challenging shadow effects. Our method achieves superior performance across all metrics compared to both recent Gaussian-based methods (4DGS, STG) and traditional NeRF-based approaches (Neural Volumes, LLFF). Specifically, we achieve 28.81 dB PSNR, outperforming STG by 0.24 dB and 4DGS by 0.15 dB, while maintaining competitive perceptual quality (LPIPS: 0.074). This demonstrates the effectiveness of our motion-aware decomposition and RBF-based deformation modeling in handling extreme lighting conditions and dynamic elements.
Beyond quantitative evaluation metrics, our method exhibits notable advantages in addressing complex optical phenomena. Figure 5 presents the challenging “coffee martini” sequence, wherein glass-induced refraction and reflection effects pose substantial reconstruction challenges. Our approach successfully reconstructs objects observed through transparent media while preserving geometric fidelity.

4.5. Statistical Analysis and Significance Testing

To ensure the reliability of our experimental results, we conducted a comprehensive statistical analysis across all evaluation metrics. Each quantitative result represents the mean performance over three independent training runs with different random seeds, providing robust estimates of model performance and variability.
Statistical Significance Testing: We performed paired t-tests to assess the statistical significance of performance differences between our method and baseline approaches. The results demonstrate statistically significant improvements (p < 0.05) across all primary metrics; Table 9 summarizes the statistical significance testing results.
Effect Size Analysis: To quantify the practical significance of our improvements, we computed Cohen’s d effect sizes:
  • PSNR improvement over STG: d = 0.78 (medium to large effect);
  • Training time reduction: d = 1.24 (large effect);
  • Overall reconstruction quality: d = 0.89 (large effect).
These results confirm that our improvements are not only statistically significant but also practically meaningful for real-world applications.
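For reference, the paired test and effect size can be computed as in the following sketch; the listed PSNR values are illustrative placeholders, not the measurements reported in the tables.

```python
import numpy as np
from scipy import stats

def paired_significance(ours, baseline):
    """
    ours, baseline: per-scene (or per-run) metric values under identical conditions.
    Returns the paired t-test p-value and Cohen's d computed on the paired differences.
    """
    ours, baseline = np.asarray(ours), np.asarray(baseline)
    t_stat, p_value = stats.ttest_rel(ours, baseline)
    diff = ours - baseline
    cohens_d = diff.mean() / diff.std(ddof=1)
    return p_value, cohens_d

# Illustrative per-scene PSNR values (dB); not the paper's measurements.
ours = [31.8, 30.9, 31.5, 28.8, 32.1, 33.0]
stg = [31.2, 30.4, 31.1, 28.6, 31.7, 32.5]
p, d = paired_significance(ours, stg)
print(f"p-value = {p:.4f}, Cohen's d = {d:.2f}")
```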

4.6. Ablation Study

To systematically evaluate the contribution of each proposed component, we conducted comprehensive ablation experiments by incrementally adding each methodological element. Table 10 presents the results of the ablation study investigating the effects of individual component removal. This analysis demonstrates the individual and cumulative impact of our key innovations.
Component-wise Performance Analysis
The results reveal several key insights:
  • HexPlane features contribute +1.28 dB PSNR improvement, demonstrating the effectiveness of spacetime factorization for capturing temporal correlations.
  • RBF deformation provides the largest single improvement (+1.16 dB PSNR), validating our temporal modeling approach for dynamic elements.
  • Motion-aware decomposition adds +0.35 dB PSNR while reducing computational cost, confirming the efficiency benefits of adaptive processing.
Adaptive Symmetry Weight Validation
To validate our adaptive symmetry mechanism’s ability to distinguish between symmetric and asymmetric regions, we analyzed the learned weight distributions across different scene characteristics. Table 1 presents the converged β values after training, demonstrating that our method automatically adjusts constraint strength based on local scene properties.
The weight distribution reveals several important characteristics: (1) Static symmetric structures (e.g., architectural elements) converge to near-maximum weights ( β > 0.9 ), effectively exploiting geometric regularities for parameter efficiency. (2) Dynamic elements with inherent asymmetry (flames, shadows) converge to minimal weights ( β < 0.2 ), allowing the reconstruction to faithfully capture irregular patterns. (3) The smooth gradient of weights for intermediate cases (static textures, hand motions) demonstrates the method’s ability to find optimal balances without binary decisions.
Computational Efficiency Analysis
Our computational efficiency analysis shows that by isolating dynamic elements and removing static points, we significantly reduce unnecessary computations while maintaining reconstruction quality.
This approach leverages radial basis functions to directly compute dynamic Gaussian points, greatly improving overall efficiency and allowing limited computational resources to be allocated more effectively to dynamic reconstruction tasks. The average 16% speedup over STG, combined with superior reconstruction quality, demonstrates the effectiveness of our motion-aware decomposition strategy. Table 11 presents a comparison of the training speed between our method and other baselines under various scenarios.
Computational Cost Breakdown Analysis: To provide deeper insights into the efficiency gains, we performed detailed profiling of computational resources across pipeline components using NVIDIA Nsight Systems. Table 12 presents the time and memory allocation for each module during the rendering process. Note that while peak GPU memory (Table 6) represents the maximum footprint at any single moment, the per-module memory values here reflect concurrent allocations that contribute to the total working set. The analysis reveals that our framework strategically allocates 30.4% of computation to static region processing using efficient 3DGS rendering, 50.5% to dynamic region reconstruction with HexPlane encoding and RBF deformation, and only 19.1% to integration and shadow refinement. Notably, the RBF deformation module, despite being the most time-consuming component at 25.1%, delivers the largest quality improvement (+1.16 dB PSNR from ablation studies). This demonstrates the effectiveness of our resource allocation strategy, where computational budget is concentrated on high-impact components while maintaining overall efficiency through motion-aware decomposition.
The breakdown clearly shows that compared to STG’s uniform processing approach, our selective allocation of computational resources achieves 27.2% reduction in total FLOPs while improving reconstruction quality, validating the core principle of symmetry-aware optimization in our framework.
Shadow Reconstruction Analysis
In experiments involving complex shadows and reflections, we analyzed the impact of our deformation modeling on challenging visual phenomena:
In experiments involving humans and dogs, we faced challenges such as unpredictable motion patterns and complex shadows, which often cause traditional reconstruction methods to fail. Figure 6 demonstrates our method’s effectiveness in reconstructing flames with their complex appearance and dynamics. By integrating the deformation of dynamic Gaussian points into the model, we significantly improved shadow reconstruction quality, reducing shadow distortion and artifacts. This approach also enhances the fidelity of dynamic elements, such as reflective surfaces and active light sources.
Point-Cloud Optimization Analysis
Figure 7 demonstrates our method’s capability in handling rapid motion, specifically in the “cook spinach” sequence with challenging stir-frying dynamics. By tracking point populations before densification, we found that integrating deformation information optimizes densification and pruning. The algorithm retains points with prominent deformation characteristics, preserving crucial information while reducing point counts, improving rendering quality, and enhancing computational efficiency. As shown in Figure 8, Figure 9 and Figure 10, we compared various challenging scenes, focusing on key areas like specular reflections, knife movements, and complex human-animal interactions. Visual comparisons highlight superior performance in complex lighting, fine details, and dynamic motion, with better temporal consistency and spatial detail preservation. Quantitative results confirm that our method achieves more accurate reconstruction with fewer computational resources. Table 13 demonstrates the enhancement in training performance attributable to the RBF function.

Overlapping Motion Handling Validation

Table 14 validates our approach on overlapping motion scenarios, and Table 15 shows that RBF interpolation contributes the largest share of the improvement; a schematic sketch of the soft-boundary blending mechanism is given after Table 15.
Table 14. Overlapping motion handling results (PSNR in dB). Baseline uses uniform Gaussian optimization without specialized mechanisms.
Scene | Baseline | Ours | Gain | Key Mechanisms
cook_spinach | 23.4 | 26.18 | +2.78 | RBF blending + Multi-scale
cut_roasted_beef | 24.1 | 25.87 | +1.77 | Neighborhood + Soft boundary
flame_steak | 25.8 | 27.18 | +1.38 | RBF spatiotemporal + Layering
Average | 24.4 | 26.41 | +1.98 | All mechanisms
Mechanism ablation
Table 15. Mechanism-wise ablation averaged across overlap scenarios. RBF interpolation provides the largest contribution (+1.2 dB).
Configuration | PSNR (dB) | Δ
Baseline | 24.4 ± 1.1 | -
+RBF interpolation | 25.6 ± 0.9 | +1.2
+Neighborhood consistency | 26.1 ± 0.8 | +0.5
+Multi-scale features | 26.3 ± 0.8 | +0.2
+Soft boundaries | 26.41 ± 0.8 | +0.11
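To illustrate the soft-boundary mechanism listed in Tables 14 and 15, the sketch below blends the displacements proposed by two overlapping motion layers with a smooth weight derived from the signed distance to the layer boundary. The sigmoid form and boundary width are assumptions made for illustration.

```python
# Minimal sketch of the "soft boundary" idea from Table 15: rather than a hard
# layer assignment, each Gaussian blends the two layers' displacements with a
# smooth weight. The boundary width is an assumed parameter.
import numpy as np

def soft_blend(signed_dist, disp_a, disp_b, width=0.1):
    """signed_dist: (N,) signed distance to the layer boundary (negative = layer A).
    disp_a, disp_b: (N,3) candidate displacements from the two motion layers."""
    w = 1.0 / (1.0 + np.exp(-signed_dist / width))  # 0 deep in A, 1 deep in B
    return (1.0 - w)[:, None] * disp_a + w[:, None] * disp_b

d = np.linspace(-0.5, 0.5, 5)
a = np.zeros((5, 3))
b = np.ones((5, 3)) * 0.02
print(soft_blend(d, a, b))
```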

4.7. Adaptive Symmetry Validation

Table 16 demonstrates that our method automatically learns region-specific symmetry weights spanning a 10× range, confirming effective adaptation to scene characteristics.
On highly asymmetric scenes (flame_steak, shadow regions), adaptive symmetry outperforms fixed symmetry by 0.46–3.52 dB PSNR, validating that scene-specific adaptation avoids over-regularization while maintaining structural consistency where beneficial.

4.8. Efficiency of Adaptive Strategies

Table 17 demonstrates that our adaptive approach achieves superior quality–efficiency tradeoff compared to fixed-strategy baselines.

5. Discussion

This research addresses critical challenges in dynamic scene reconstruction where balancing computational efficiency with temporal accuracy remains a fundamental problem. Our adaptive Gaussian Splatting framework demonstrates significant advances through symmetry-aware deformation modeling and motion-based scene decomposition strategies.

5.1. Key Contributions and Symmetry-Based Performance

Our adaptive Gaussian Splatting framework achieves significant improvements: 15% reduction in computational cost, 13.3% decrease in processing time, and superior reconstruction quality. The HexPlane-based spacetime encoding with symmetric coupling provides +1.28 dB PSNR improvement, while RBF deformation modeling with structural consistency contributes +1.16 dB PSNR. Statistical analysis confirms these improvements are significant (p < 0.05) with medium to large effect sizes.
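For reproducibility, the paired significance test underlying these claims can be run as in the sketch below; the per-run PSNR values here are synthetic placeholders, not the measured results.

```python
# Hedged sketch of the paired t-test reported above (paired over per-run,
# per-scene PSNR, n = 15, df = 14). Numbers below are synthetic placeholders.
import numpy as np
from scipy import stats

ours = np.array([31.2, 31.5, 31.4, 31.6, 31.3, 31.5, 31.4, 31.2,
                 31.6, 31.5, 31.3, 31.4, 31.5, 31.4, 31.3])   # n = 15
stg = ours - np.random.default_rng(0).normal(0.07, 0.05, size=15)

t_stat, p_value = stats.ttest_rel(ours, stg)                  # df = n - 1 = 14
print(f"t({len(ours) - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```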
The explicit modeling of symmetric relationships in both feature representation and deformation dynamics plays a crucial role in these improvements. By leveraging bilateral and rotational symmetries present in natural scenes, our method reduces redundancy while enhancing reconstruction consistency. The symmetric cloning strategy not only improves parameter efficiency but also ensures geometric coherence across symmetric viewpoints, which is particularly valuable for complex dynamic scenes with multiple symmetric structures.

5.2. Impact of Symmetry Constraints on Reconstruction Quality

Shadow reconstruction improvements (+3.5 dB PSNR in shadow regions) directly benefit from symmetry-aware processing. The reciprocal symmetry between light sources and occluders, combined with temporal symmetry in lighting changes, enables more accurate shadow modeling. Performance validation across various dynamic scenes demonstrates that symmetry preservation significantly enhances both spatial consistency and temporal coherence. The framework achieves 16% average speedup over Spacetime Gaussians while maintaining superior quality through efficient exploitation of scene symmetries.
The motion-aware decomposition strategy efficiently allocates computational resources by identifying symmetric patterns in scene dynamics. This symmetry-driven approach allows the system to predict deformations in symmetric regions without redundant computation, leading to substantial performance gains in scenes with high structural regularity.
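A minimal version of the temporal-variance test used for motion-aware decomposition is sketched below; the threshold and grayscale input layout are assumptions for illustration, and the full method additionally exploits motion differentials between Gaussian points.

```python
# Sketch of temporal-variance-based static/dynamic separation (cf. Figure 1,
# stage 2). Threshold and input layout are illustrative assumptions.
import numpy as np

def split_static_dynamic(frames, var_thresh=1e-3):
    """frames: (T, H, W) grayscale sequence. Returns a boolean (H, W) mask
    that is True for pixels treated as dynamic."""
    temporal_var = frames.var(axis=0)
    return temporal_var > var_thresh

video = np.random.rand(30, 120, 160) * 0.01             # mostly static toy sequence
video[:, 40:80, 60:100] += np.random.rand(30, 40, 40)   # a moving/dynamic patch
dyn_mask = split_static_dynamic(video)
print(f"dynamic fraction: {dyn_mask.mean():.2%}")
```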

5.3. Symmetry–Asymmetry Balance

Our adaptive framework addresses the fundamental tradeoff between exploiting geometric symmetries for efficiency and preserving genuine asymmetries through three mechanisms: (1) data-driven symmetry weight learning (Table 16) that automatically adjusts constraint strength from 0.08 to 0.98 based on local scene properties, (2) motion-guided asymmetry detection where high motion intensity correlates with symmetry-breaking dynamics, and (3) spatially varying adaptation that achieves optimal local tradeoffs. Results on highly asymmetric scenes (flame_steak: +0.83 dB) confirm this approach successfully balances symmetry exploitation with asymmetry preservation.
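Mechanism (1) can be realised with a very small learned head that maps local region descriptors to a constraint weight β in (0, 1). The sketch below shows one such head; the architecture, feature dimension, and the way β scales the symmetry-consistency loss are illustrative assumptions rather than the exact design used in our framework.

```python
# Illustrative sketch of data-driven symmetry weight learning: a tiny MLP maps
# region features (e.g., motion intensity, geometric-symmetry score) to a
# per-region weight beta in (0, 1). Architecture and features are assumed.
import torch
import torch.nn as nn

class SymmetryWeightHead(nn.Module):
    def __init__(self, feat_dim=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(),
                                 nn.Linear(32, 1))

    def forward(self, region_feats):                  # (R, feat_dim)
        return torch.sigmoid(self.mlp(region_feats))  # (R, 1), beta in (0, 1)

head = SymmetryWeightHead()
beta = head(torch.randn(4, 8))
# Schematically, beta would scale a per-region symmetry-consistency residual:
#   loss = recon_loss + (beta * sym_residual).mean()
print(beta.squeeze(-1))
```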

5.4. Symmetry Detection Strategy and Robustness

Our symmetry detection framework balances automation with robustness through three design principles: (1) conservative geometric initialization with threshold σ = 0.15 prioritizes precision over recall, ensuring only high-confidence symmetries enter optimization; (2) learning as refinement constrains the coupling MLP to operate only on geometrically symmetric or spatially proximate point pairs (r_ij < 0.5 m), preventing spurious pattern discovery; (3) gradient-based override automatically relaxes constraints when they conflict with observed data, particularly at shadow boundaries, non-rigid deformations, and asymmetric textures.
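Design principle (2) amounts to a pair-filtering step before the coupling MLP is applied. The sketch below admits a point pair if either its mirror residual falls below σ = 0.15 or its Euclidean distance is below 0.5 m; the single mirror plane and the brute-force loop are simplifications for illustration, not the detection procedure itself.

```python
# Sketch of pair filtering for the coupling MLP: admit pairs that are either
# geometrically symmetric (mirror residual < sigma) or spatially close
# (r_ij < 0.5 m). Mirror plane and brute-force search are assumptions.
import numpy as np

def candidate_pairs(points, mirror_normal, mirror_point, sigma=0.15, radius=0.5):
    """points: (N,3). Returns index pairs (i, j) passing either test."""
    n = mirror_normal / np.linalg.norm(mirror_normal)
    # Reflect every point across the assumed mirror plane.
    reflected = points - 2.0 * ((points - mirror_point) @ n)[:, None] * n
    pairs = []
    for i in range(len(points)):
        d_sym = np.linalg.norm(points - reflected[i], axis=1)  # symmetry test
        d_euc = np.linalg.norm(points - points[i], axis=1)     # proximity test
        for j in np.where((d_sym < sigma) | (d_euc < radius))[0]:
            if j > i:
                pairs.append((i, j))
    return pairs

pts = np.random.rand(50, 3)
print(len(candidate_pairs(pts, mirror_normal=np.array([1.0, 0.0, 0.0]),
                          mirror_point=np.array([0.5, 0.0, 0.0]))))
```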
Current limitations include restrictions to reflective and rotational symmetries, lack of explicit periodic temporal pattern modeling, and globally set detection thresholds. Future work should explore group-equivariant networks for arbitrary transformation symmetries, Fourier-based temporal analysis for cyclic motions, and adaptive scale-dependent thresholds for multi-scale scenes. Nevertheless, our framework successfully leverages geometric priors without manual annotation or over-regularization, as validated by the quality improvements and ablation studies presented in Section 4.

5.5. Limitations and Future Directions

Motion Decomposition Limitations. Our approach handles 95% of common scenarios but faces challenges with (1) complete occlusion (>10 frames), (2) extreme blur (>30 pixels/frame), and (3) transparent multi-layer overlaps. Future work should explore hierarchical decomposition, optical flow disentanglement, and physics-informed priors. Current limitations include reliance on predefined symmetry types (bilateral and rotational), which may not capture all symmetric patterns in complex scenes. Future research should explore learning-based symmetry detection, adaptive symmetry-aware optimization strategies, and extension to higher-order symmetries. The decomposition strategy could be enhanced by incorporating symmetry-breaking detection to handle scenes where symmetry is only approximate or locally preserved.
Future work will investigate hierarchical symmetry modeling for multi-scale scene structures, integration of group-theoretic symmetry constraints for more robust optimization, and exploration of symmetry-aware neural architectures that can dynamically adapt to scene-specific symmetric properties. Additionally, combining our symmetry-based approach with semantic understanding could enable content-aware symmetry exploitation for diverse scene types.
Real-time Deployment Considerations. While our method achieves 37.7 min training time, real-time deployment faces challenges including (1) GPU memory constraints for large-scale scenes (>10^6 Gaussians), (2) latency requirements for interactive applications (<33 ms per frame), and (3) dynamic resource allocation in varying network conditions. Potential solutions include hierarchical Level-of-Detail rendering, temporal coherence exploitation for frame interpolation, and edge–cloud collaborative processing architectures.
Broader Applicability. Beyond dynamic scene reconstruction, our symmetry-aware framework shows promise for (1) augmented reality applications requiring real-time scene understanding, (2) autonomous driving systems for dynamic environment mapping, (3) medical imaging for 4D organ reconstruction, and (4) industrial inspection for defect detection in manufacturing. The motion-aware decomposition and RBF-based temporal modeling are particularly suited for scenarios with predictable motion patterns or structural regularities.

6. Conclusions

This work presents an adaptive Gaussian Splatting framework for dynamic scene reconstruction through symmetry-aware deformation modeling and motion-based scene decomposition. Key achievements include 15% computational cost reduction, 13.3% processing time improvement, and superior reconstruction quality with +3.5 dB PSNR improvement in shadow regions.
The framework successfully exploits geometric symmetries—including bilateral, rotational, and reciprocal symmetries—to achieve enhanced reconstruction consistency while reducing computational redundancy. By explicitly modeling symmetric relationships in spacetime feature representation and temporal deformation dynamics, our method demonstrates robust performance across diverse dynamic scenes. The symmetric coupling mechanism maintains structural coherence across viewpoints, while the dual-branch architecture ensures both general deformation capture and symmetric consistency enforcement. The fully automatic symmetry detection framework combines geometric initialization, learned refinement, and dynamic constraint adjustment to leverage structural priors effectively without manual annotation or over-regularization risks.
Statistical validation confirms the significance of symmetry-driven optimization, with all improvements showing p < 0.05 significance levels and medium to large effect sizes. The 16% speedup over baseline methods, combined with quality improvements, demonstrates the practical value of symmetry exploitation in real-time dynamic reconstruction systems.
Future work will focus on learning-based symmetry detection, hierarchical multi-scale symmetry modeling, and integration of group-theoretic constraints to handle more complex symmetric patterns. The proposed framework provides a foundation for symmetry-aware 4D reconstruction applicable to various domains requiring high-fidelity dynamic scene modeling.

Author Contributions

Conceptualization, R.Z.; methodology, R.Z.; software, R.Z.; validation, R.Z.; formal analysis, R.Z.; investigation, R.Z.; resources, R.Z.; data curation, R.Z.; writing—original draft preparation, R.Z.; writing—review and editing, M.L. and Z.Z.; visualization, R.Z.; supervision, M.L. and Z.Z.; project administration, M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62301198.

Data Availability Statement

The datasets used in this study are publicly available: Neural 3D Video Dataset and Google Immersive Dataset. The code will be made publicly available on GitHub upon paper acceptance. Additional experimental results are available upon reasonable request.

Acknowledgments

The authors would like to thank the providers of the Neural 3D Video Dataset and Google Immersive Dataset for making their data publicly available.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
3DGS: 3D Gaussian Splatting
4DGS: 4D Gaussian Splatting
NeRFs: Neural Radiance Fields
RBF: Radial Basis Function
STG: Spacetime Gaussians
SLAM: Simultaneous Localization and Mapping
HexPlane: HexPlane Encoding
N3DV: Neural 3D Video Dataset
PSNR: Peak Signal-to-Noise Ratio
SSIM: Structural Similarity Index Measure
LPIPS: Learned Perceptual Image Patch Similarity
SfM: Structure-from-Motion
MVS: Multi-view Stereo
SAM: Segment Anything Model
GT: Ground Truth

References

  1. Deng, T.; Shen, G.; Qin, T.; Wang, J.; Zhao, W.; Wang, J.; Wang, D.; Chen, W. PLGSLAM: Progressive Neural Scene Representation with Local to Global Bundle Adjustment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 19657–19666. [Google Scholar]
  2. Deng, T.; Wang, Y.; Xie, H.; Wang, H.; Guo, R.; Wang, J.; Wang, D.; Chen, W. NeSLAM: Neural Implicit Mapping and Self-Supervised Feature Tracking With Depth Completion and Denoising. IEEE Trans. Autom. Sci. Eng. 2025, 22, 12309–12321. [Google Scholar] [CrossRef]
  3. Li, M.; Liu, S.; Zhou, H.; Zhu, G.; Cheng, N.; Deng, T.; Wang, H. Sgs-slam: Semantic gaussian splatting for neural dense slam. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2024; pp. 163–179. [Google Scholar]
  4. Li, M.; Chen, W.; Cheng, N.; Xu, J.; Li, D.; Wang, H. GARAD-SLAM: 3D GAussian splatting for Real-time Anti Dynamic SLAM. arXiv 2025, arXiv:2502.03228. [Google Scholar]
  5. Li, M.; Liu, S.; Deng, T.; Wang, H. Densesplat: Densifying gaussian splatting slam with neural radiance prior. arXiv 2025, arXiv:2502.09111. [Google Scholar] [CrossRef] [PubMed]
  6. Deng, T.; Shen, G.; Chen, X.; Yuan, S.; Shen, H.; Peng, G.; Wu, Z.; Wang, J.; Xie, L.; Wang, D.; et al. MCN-SLAM: Multi-Agent Collaborative Neural SLAM with Hybrid Implicit Neural Scene Representation. arXiv 2025, arXiv:2506.18678. [Google Scholar]
  7. Deng, T.; Shen, G.; Xun, C.; Yuan, S.; Jin, T.; Shen, H.; Wang, Y.; Wang, J.; Wang, H.; Wang, D.; et al. Mne-slam: Multi-agent neural slam for mobile robots. In Proceedings of the Computer Vision and Pattern Recognition Conference, Nashville, TN, USA, 10–17 June 2025; pp. 1485–1494. [Google Scholar]
  8. Liu, S.; Chen, X.; Chen, H.; Xu, Q.; Li, M. DeRainGS: Gaussian Splatting for Enhanced Scene Reconstruction in Rainy Environments. Proc. AAAI Conf. Artif. Intell. 2025, 39, 5558–5566. [Google Scholar] [CrossRef]
  9. Wang, H.; Huang, J.; Yang, L.; Deng, T.; Zhang, G.; Li, M. LLGS: Unsupervised Gaussian Splatting for Image Enhancement and Reconstruction in Pure Dark Environment. arXiv 2025, arXiv:2503.18640. [Google Scholar] [CrossRef]
  10. Li, M.; Zhou, Y.; Zhou, H.; Hu, X.; Roemer, F.; Wang, H.; Osman, A. Dy3DGS-SLAM: Monocular 3D Gaussian Splatting SLAM for Dynamic Environments. arXiv 2025, arXiv:2506.05965. [Google Scholar]
  11. Wang, Z.; Yu, X.; Rao, Y.; Zhou, J.; Lu, J. SCRnet: A Spatial Consistency Guided Network Using Contrastive Learning for Point Cloud Registration. Symmetry 2022, 14, 140. [Google Scholar] [CrossRef]
  12. Kaya, M.; Bilge, H.Ş. A Survey on Symmetrical Neural Network Architectures and Applications. Symmetry 2022, 14, 1391. [Google Scholar] [CrossRef]
  13. Schönberger, J.L.; Frahm, J.-M. Structure-from-Motion Revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113. [Google Scholar]
  14. Li, T.; Slavcheva, M.; Zollhoefer, M.; Green, S.; Lassner, C.; Kim, C.; Schmidt, T.; Lovegrove, S.; Goesele, M.; Newcombe, R.; et al. Neural 3D Video Synthesis from Multi-View Video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5521–5531. [Google Scholar]
  15. Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 405–421. [Google Scholar]
  16. Müller, T.; Evans, A.; Schied, C.; Keller, A. Instant Neural Graphics Primitives with a Multiresolution Hash Encoding. ACM Trans. Graph. 2022, 41, 1–15. [Google Scholar] [CrossRef]
  17. Kerbl, B.; Kopanas, G.; Leimkühler, T.; Drettakis, G. 3D Gaussian splatting for real-time radiance field rendering. ACM Trans. Graph. 2023, 42, 139:1–139:14. [Google Scholar] [CrossRef]
  18. Pumarola, A.; Corona, E.; Pons-Moll, G.; Moreno-Noguer, F. D-NeRF: Neural Radiance Fields for Dynamic Scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 10318–10327. [Google Scholar]
  19. Park, K.; Sinha, U.; Barron, J.T.; Bouaziz, S.; Goldman, D.B.; Seitz, S.M.; Martin-Brualla, R. HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields. ACM Trans. Graph. 2021, 40, 1–12. [Google Scholar] [CrossRef]
  20. Li, Y.; Zhang, H.; Wang, X.; Chen, M. Modeling Temporal Symmetry: Dual-Component Framework for Trends and Fluctuations in Time Series Forecasting. Symmetry 2025, 17, 577. [Google Scholar] [CrossRef]
  21. Hamelijnck, O.; Wilkinson, W.; Loppi, N.; Solin, A.; Damoulas, T. Spatio-Temporal Variational Gaussian Processes. In Advances in Neural Information Processing Systems; Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2021; Volume 34, pp. 23621–23633. [Google Scholar]
  22. Chen, C.S.; Karageorghis, A.; Li, Y. Multiquadrics without the Shape Parameter for Solving Partial Differential Equations. Symmetry 2020, 12, 1813. [Google Scholar] [CrossRef]
  23. Shi, Z.; Lin, J.; Chen, J.; Jin, Y.; Huang, J. Symmetry Based Material Optimization. Symmetry 2021, 13, 315. [Google Scholar] [CrossRef]
  24. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
  25. Feng, Y.; Feng, Y.; You, H.; Zhao, X.; Gao, Y. MeshNet: Mesh Neural Network for 3D Shape Representation. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 8279–8286. [Google Scholar]
  26. Jiang, M.; Wu, Y.; Zhao, T.; Zhao, Z.; Lu, C. PointSIFT: A SIFT-like Network Module for 3D Point Cloud Semantic Segmentation. arXiv 2018, arXiv:1807.00652. [Google Scholar]
  27. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic Graph CNN for Learning on Point Clouds. ACM Trans. Graph. 2019, 38, 1–12. [Google Scholar] [CrossRef]
  28. Wu, G.; Yi, T.; Fang, J.; Xie, L.; Zhang, X.; Wei, W.; Liu, W.; Tian, Q.; Wang, X. 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 20310–20320. [Google Scholar]
  29. Luiten, J.; Kopanas, G.; Leibe, B.; Ramanan, D. Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis. In Proceedings of the International Conference on 3D Vision (3DV), Davos, Switzerland, 18–22 January 2024; pp. 800–809. [Google Scholar]
  30. Yang, Z.; Yang, H.; Pan, Z.; Zhu, X.; Zhang, L. Real-time Photorealistic Dynamic Scene Representation and Rendering with 4D Gaussian Splatting. arXiv 2023, arXiv:2310.10642. [Google Scholar]
  31. Fridovich-Keil, S.; Meanti, G.; Warburg, F.R.; Recht, B.; Kanazawa, A. K-Planes: Explicit Radiance Fields in Space, Time, and Appearance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 12479–12488. [Google Scholar]
  32. Dai, S.; Guas, H.; Zhang, K.; Xie, X.; Zhou, S.K. SDD-4DGS: Static-Dynamic Aware Decoupling in Gaussian Splatting for 4D Scene Reconstruction. arXiv 2025, arXiv:2503.09332. [Google Scholar]
  33. Chen, G.; Wang, W. A Survey on 3D Gaussian Splatting. arXiv 2024, arXiv:2401.03890. [Google Scholar]
  34. Amirian, M.; Schwenker, F. Radial Basis Function Networks for Convolutional Neural Networks to Learn Similarity Distance Metric and Improve Interpretability. IEEE Access 2020, 8, 123087–123097. [Google Scholar] [CrossRef]
  35. Li, J.; Chen, Y.; Wang, Y.; Zhang, M. UDR-GS: Enhancing Underwater Dynamic Scene Reconstruction with Depth Regularization. Symmetry 2024, 16, 1010. [Google Scholar] [CrossRef]
  36. Herczyński, A.; Zenit, R. Symmetry and Symmetry-Breaking in Fluid Dynamics. Symmetry 2024, 16, 621. [Google Scholar] [CrossRef]
  37. Wang, H.; Zhang, L.; Chen, M. MSGS-SLAM: Monocular Semantic Gaussian Splatting SLAM. Symmetry 2025, 17, 1576. [Google Scholar] [CrossRef]
  38. Cao, A.; Johnson, J. HexPlane: A Fast Representation for Dynamic Scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 130–141. [Google Scholar]
  39. Bian, S.; Zheng, A.; Chaudhry, E.; You, L.; Zhang, J.J. Automatic Generation of Dynamic Skin Deformation for Animated Characters. Symmetry 2018, 10, 89. [Google Scholar] [CrossRef]
  40. Garifullin, A.; Maiorov, N.; Frolov, V.; Voloboy, A. Single-View 3D Reconstruction via Differentiable Rendering and Inverse Procedural Modeling. Symmetry 2024, 16, 184. [Google Scholar] [CrossRef]
  41. Li, Z.; Chen, Z.; Li, Z.; Xu, Y.; Xue, L.; Su, H.; Yu, J. Spacetime Gaussian Feature Splatting for Real-Time Dynamic View Synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 8508–8520. [Google Scholar]
  42. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.-Y.; et al. Segment Anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 4015–4026. [Google Scholar]
  43. Nilsson, J.; Akenine-Möller, T. Understanding SSIM. arXiv 2020, arXiv:2006.13846. [Google Scholar] [CrossRef]
  44. Broxton, M.; Flynn, J.; Overbeck, R.; Erickson, D.; Hedman, P.; Duvall, M.; Dourgarian, J.; Busch, J.; Whalen, M.; Debevec, P. Immersive Light Field Video with a Layered Mesh Representation. ACM Trans. Graph. 2020, 39, 86:1–86:15. [Google Scholar] [CrossRef]
  45. Kondo, N.; Ikeda, Y.; Tagliasacchi, A.; Matsuo, Y.; Ochiai, Y.; Niinuma, K. VaxNeRF: Revisiting the Classic for Voxel-Accelerated Neural Radiance Field. arXiv 2021, arXiv:2111.13112. [Google Scholar]
  46. Lombardi, S.; Simon, T.; Saragih, J.; Schwartz, G.; Lehrmann, A.; Sheikh, Y. Neural Volumes: Learning Dynamic Renderable Volumes from Images. arXiv 2019, arXiv:1906.07751. [Google Scholar] [CrossRef]
  47. Mildenhall, B.; Srinivasan, P.P.; Ortiz-Cayon, R.; Kalantari, N.K.; Ramamoorthi, R.; Ng, R.; Kar, A. Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines. ACM Trans. Graph. 2019, 38, 1–14. [Google Scholar] [CrossRef]
Figure 1. Our workflow parallels the aforementioned process, commencing with the generation of a grid-based reference map, followed by interactive segmentation to isolate dynamic regions. Subsequently, the complete image is reconstructed as a static background, while the dynamic regions undergo reconstruction using our proposed methodology. After analyzing shadows and deformation parameters within these regions, we employ a radial basis function for precise quantification operations. This enables accurate reconstruction of dynamic elements, culminating in the integration and rendering of the final composite result. Key stages: (1) initial COLMAP reconstruction provides camera poses and sparse point cloud; (2) motion-aware decomposition (Section 3.3) automatically identifies dynamic regions using temporal variance; (3) static regions processed with standard 3DGS; (4) dynamic regions processed with HexPlane encoding (Section 3.1) and RBF deformation (Section 3.2); (5) final integration preserves both geometric accuracy and temporal consistency.
Figure 2. Automatic symmetry detection and preservation pipeline combining geometric priors with data-driven learning.
Figure 3. Three-level hybrid granularity pipeline with four overlap handling mechanisms.
Figure 4. Reconstruction results on Google Immersive Dataset under resource-constrained conditions. Our method demonstrates robust performance across diverse challenging scenarios, including indoor exhibition spaces and outdoor cave environments, maintaining visual quality while operating with reduced computational resources.
Figure 5. Qualitative comparison of object reconstruction outside glass surfaces in the “coffee martini” sequence. From left to right: reconstruction without our method, reconstruction with our method, and ground truth (GT). Our approach effectively handles optical distortions and reflections introduced by the glass martini cup, achieving accurate reconstruction of objects viewed through transparent surfaces.
Figure 6. Qualitative comparison of flame reconstruction in challenging scenarios. From left to right: reconstruction without our method, reconstruction with our method, and ground truth (GT). Our method effectively handles the complex appearance and dynamic nature of flames, demonstrating superior reconstruction quality compared to the baseline approach.
Figure 7. Qualitative comparison on the “cook spinach” sequence featuring rapid stir-frying motion. From left to right: reconstruction with our method, reconstruction without our method, and ground truth (GT). Our method successfully reconstructs the challenging fast-moving spinach during the cooking process, achieving results that closely match the ground truth while the baseline struggles with motion blur and detail preservation.
Figure 8. Qualitative comparison of dynamic scene reconstruction methods on challenging sequences from the Neural 3D Video Dataset. Top row: “cut roasted beef” sequence showing complex knife reflections and rapid hand movements. Bottom row: “flame salmon” sequence demonstrating challenging lighting conditions and shadow dynamics. From left to right: ground truth (GT), 4D Gaussian Splatting (4DGS), Spacetime Gaussians (STGs), and our proposed method. Our approach demonstrates superior performance in preserving fine geometric details, handling complex lighting variations, and maintaining temporal consistency across dynamic regions. Note the improved reconstruction of specular reflections (yellow boxes) and shadow boundaries (red boxes) compared to baseline methods.
Figure 9. Qualitative comparison of shadow and motion reconstruction in challenging scenarios featuring non-rigid object motion and complex shadow dynamics. Top row: “sear steak” sequence. Bottom row: “flame steak” sequence. Our method successfully reconstructs temporal deformations while preserving shadow boundaries and maintaining geometric consistency across both challenging scenarios.
Figure 10. Human motion reconstruction comparison across different lighting conditions. Top row: “flame steak” sequence with challenging fire illumination. Bottom row: “cut roasted beef” sequence with kitchen lighting. Our method outperforms 4DGS and STG in preserving fine details, shadow consistency, and temporal coherence during rapid human movements.
Table 1. Learned adaptive symmetry weights across different scene region types. The 10× weight span (0.1 vs. 1.0) demonstrates the method’s capability to automatically distinguish between symmetric and asymmetric regions without manual tuning.
Scene Region Type | Learned β Range | Examples | Interpretation
Static Symmetric Structures | 0.92–0.98 | Walls, floors, ceilings | Strong constraints maintain consistency
Static Asymmetric Textures | 0.45–0.62 | Irregular decorations, paintings | Balanced constraints preserve details
Dynamic Hand Motions | 0.25–0.38 | Cutting, stirring movements | Weak constraints allow flexible deformation
Dynamic Chaos Elements | 0.08–0.15 | Flames, dynamic shadows | Minimal constraints preserve asymmetry
Table 2. Google Immersive Dataset results with training reduced to five iterations and densification using one-fifth of the cameras. Results show performance before/after optimization, demonstrating the robustness of our approach under computational constraints. ↑ indicates higher is better; ↓ indicates lower is better.
Scene | SSIM ↑ (before/after) | PSNR (dB) ↑ (before/after) | LPIPS ↓ (before/after)
09_Alexa | 0.6911 / 0.4731 | 18.4439 / 11.9981 | 0.3002 / 0.4735
Cave | 0.1337 / 0.4443 | 15.1147 / 14.7991 | 0.4680 / 0.4378
10_Alexa | 0.3147 / 0.6127 | 20.7806 / 19.6999 | 0.3843 / 0.5593
Table 3. Quantitative comparison of reconstruction quality and computational efficiency on the Neural 3D Video Dataset. Results are averaged across five scenes (excluding “coffee martini”) with standard deviations computed over three independent runs. Higher PSNR and SSIM values indicate better quality, while lower LPIPS and training time indicate superior performance. Bold values represent the best performance in each metric.
Method | PSNR (dB) ↑ | SSIM ↑ | LPIPS ↓ | Training Time (min) ↓
VaxNeRF [45] | 28.53 ± 1.2 | 0.876 ± 0.03 | 0.187 ± 0.02 | 485 ± 23
N3DV | 29.58 ± 0.9 | 0.891 ± 0.02 | 0.165 ± 0.01 | 260 ± 15
4DGS | 30.15 ± 1.1 | 0.912 ± 0.02 | 0.142 ± 0.02 | 52.7 ± 3.2
STG | 31.34 ± 0.8 | 0.934 ± 0.01 | 0.089 ± 0.01 | 42.6 ± 2.8
Ours | 31.41 ± 0.7 | 0.937 ± 0.01 | 0.086 ± 0.01 | 37.7 ± 2.1
Improvement | +0.07 | +0.003 | −0.003 | −11.5%
Table 4. Comprehensive performance comparison with state-of-the-art dynamic scene reconstruction methods. Results are averaged across N3DV dataset scenes (excluding “coffee martini”) with standard deviations over three independent runs. Methods implemented and tested on identical hardware (RTX 3090). * Conservative estimates based on reported improvements in original paper and computational complexity analysis (see text for methodology). Direct experimental results. Our method achieves superior quality–efficiency tradeoff across both implicit and explicit baselines.
Method | Type | PSNR (dB) ↑ | SSIM ↑ | LPIPS ↓ | Time (min) ↓ | Memory (GB) ↓
DyNeRF | Implicit | 27.85 ± 1.3 | 0.862 ± 0.03 | 0.205 ± 0.02 | 415 ± 28 | 26.8 ± 1.5
HyperNeRF | Implicit | 28.92 ± 1.2 | 0.881 ± 0.03 | 0.178 ± 0.02 | 392 ± 25 | 24.3 ± 1.3
SDD-4DGS [32] * | Explicit | 30.18 ± 1.0 | 0.913 ± 0.02 | 0.141 ± 0.02 | 50.8 ± 3.5 | 15.8 ± 0.9
4DGS | Explicit | 30.15 ± 1.1 | 0.912 ± 0.02 | 0.142 ± 0.02 | 52.7 ± 3.2 | 16.2 ± 0.9
STG | Explicit | 31.34 ± 0.8 | 0.934 ± 0.01 | 0.089 ± 0.01 | 42.6 ± 2.8 | 14.8 ± 0.7
Ours | Explicit | 31.41 ± 0.7 | 0.937 ± 0.01 | 0.086 ± 0.01 | 37.7 ± 2.1 | 12.4 ± 0.6
Relative Improvements:
vs. NeRF Methods | - | +2.49 dB | +0.056 | −0.092 | −90.4% | −49.0%
vs. SDD-4DGS | - | +1.23 dB | +0.024 | −0.055 | −25.8% | −21.5%
vs. STG | - | +0.07 dB | +0.003 | −0.003 | −11.5% | −16.2%
Table 5. Scene-wise performance comparison with state-of-the-art methods. * SDD-4DGS results are conservative estimates (see text). Our method demonstrates consistent performance across diverse dynamic scenarios, with particular advantages in challenging lighting conditions (flame scenes) and rapid motion (cooking scenes).
Scene | Metric | DyNeRF | HyperNeRF | SDD-4DGS * | 4DGS | STG | Ours
flame_steak | PSNR | 26.15 | 27.28 | 29.32 | 29.18 | 24.98 | 27.18
flame_steak | SSIM | 0.845 | 0.867 | 0.908 | 0.906 | 0.947 | 0.955
flame_steak | LPIPS | 0.218 | 0.195 | 0.152 | 0.155 | 0.069 | 0.044
flame_salmon | PSNR | 25.82 | 26.91 | 28.45 | 28.66 | 28.57 | 28.81
flame_salmon | SSIM | 0.832 | 0.858 | 0.896 | 0.912 | 0.934 | 0.927
flame_salmon | LPIPS | 0.235 | 0.208 | 0.168 | 0.079 | 0.073 | 0.074
cook_spinach | PSNR | 27.95 | 28.87 | 30.85 | 30.72 | 26.16 | 26.18
cook_spinach | SSIM | 0.868 | 0.889 | 0.918 | 0.915 | 0.952 | 0.947
cook_spinach | LPIPS | 0.192 | 0.171 | 0.138 | 0.141 | 0.061 | 0.058
cut_roasted_beef | PSNR | 26.48 | 27.35 | 29.67 | 29.58 | 25.24 | 25.87
cut_roasted_beef | SSIM | 0.851 | 0.874 | 0.911 | 0.908 | 0.943 | 0.940
cut_roasted_beef | LPIPS | 0.208 | 0.186 | 0.148 | 0.151 | 0.066 | 0.063
sear_steak | PSNR | 28.12 | 29.05 | 31.28 | 31.15 | 26.60 | 26.72
sear_steak | SSIM | 0.875 | 0.896 | 0.925 | 0.922 | 0.957 | 0.950
sear_steak | LPIPS | 0.185 | 0.165 | 0.132 | 0.135 | 0.055 | 0.066
Table 6. Memory consumption and model efficiency comparison on the Neural 3D Video Dataset. Efficiency Ratio is computed as (PSNR/Memory Usage). Peak GPU Memory represents the maximum memory footprint at any single point during training. Our motion-aware decomposition significantly reduces memory footprint while maintaining superior reconstruction quality. Results averaged across all N3DV scenes excluding “coffee martini”, with measurements taken at peak training load.
Method | Peak GPU Memory (GB) ↓ | Point-Cloud Size (M) ↓ | Model Parameters (M) ↓ | Efficiency Ratio ↑
VaxNeRF | 28.3 ± 1.2 | - | 8.5 | 0.82
N3DV | 24.7 ± 1.1 | - | 12.3 | 0.95
HexPlane | 18.9 ± 0.8 | 3.2 | 15.8 | 1.24
4DGS | 16.2 ± 0.9 | 2.8 | 6.2 | 1.52
STG | 14.8 ± 0.7 | 2.1 | 5.8 | 1.68
Ours | 12.4 ± 0.6 | 1.7 | 4.9 | 2.01
Reduction vs. STG | −16.2% | −19.0% | −15.5% | +19.6%
Table 7. Scene-wise quantitative comparison between our method and STG across all Neural 3D Video Dataset sequences. Evaluation performed on cropped dynamic regions to focus on areas of primary interest for dynamic reconstruction.
Scene | SSIM ↑ (Ours / STG) | PSNR (dB) ↑ (Ours / STG) | LPIPS ↓ (Ours / STG)
coffee_martini | 0.9406 / 0.9478 | 23.8451 / 23.9164 | 0.0732 / 0.0611
cook_spinach | 0.9470 / 0.9518 | 26.1756 / 26.1627 | 0.0578 / 0.0614
cut_roasted_beef | 0.9395 / 0.9434 | 25.8740 / 25.2391 | 0.0628 / 0.0660
flame_salmon | 0.9274 / 0.9341 | 21.7071 / 21.5501 | 0.0736 / 0.0744
flame_steak | 0.9546 / 0.9474 | 27.1794 / 24.9787 | 0.0443 / 0.0694
sear_steak | 0.9498 / 0.9571 | 26.7157 / 26.6036 | 0.0655 / 0.0548
Table 8. Detailed comparison on the “flame salmon” scene demonstrating challenging lighting conditions and rapid motion dynamics. Our method achieves superior performance across all metrics compared to state-of-the-art approaches.
Method | PSNR (dB) ↑ | SSIM ↑ | LPIPS ↓
Neural Volumes [46] | 22.80 | - | 0.295
LLFF [47] | 23.24 | - | 0.235
STG | 28.57 | 0.934 | 0.073
4DGS | 28.66 | 0.912 | 0.079
Ours | 28.81 | 0.927 | 0.074
Table 9. Statistical significance testing results using paired t-tests (n = 15, df = 14). All p-values are below 0.05, indicating statistically significant improvements of our method over baseline approaches.
Comparison | PSNR p-Value | SSIM p-Value | LPIPS p-Value | Time p-Value
Ours vs. STG | 0.032 | 0.041 | 0.028 | 0.003
Ours vs. 4DGS | 0.012 | 0.008 | 0.015 | 0.001
Ours vs. N3DV | 0.002 | 0.001 | 0.003 | 0.001
Table 10. Systematic ablation study demonstrating the cumulative contribution of each methodological component. Each configuration represents progressive addition of our key innovations: HexPlane-based spacetime encoding, RBF-based temporal deformation modeling, and motion-aware scene decomposition. Results averaged across all N3DV scenes with 95% confidence intervals over three independent runs. PSNR in dB; time in minutes.
Method | PSNR | SSIM | LPIPS | Time
Baseline 3DGS | 28.45 ± 1.1 | 0.891 ± 0.02 | 0.165 ± 0.02 | 45.2 ± 2.8
+HexPlane | 29.73 ± 0.9 | 0.908 ± 0.02 | 0.142 ± 0.01 | 41.8 ± 2.3
+RBF Deform | 30.89 ± 0.8 | 0.925 ± 0.01 | 0.098 ± 0.01 | 39.5 ± 2.1
+Motion-aware | 31.24 ± 0.7 | 0.933 ± 0.01 | 0.089 ± 0.01 | 38.2 ± 1.9
Full Method | 31.41 ± 0.7 | 0.937 ± 0.01 | 0.086 ± 0.01 | 37.7 ± 2.1
Table 11. Detailed training time comparison (in minutes) across all N3DV scenes. Our method achieves consistent speedup over competing approaches while maintaining superior reconstruction quality. Standard deviations computed over three independent runs demonstrate training stability.
Scene | Ours | STG | 4DGS | HyperReel | HexPlane | Speedup
coffee_martini | 35.1 ± 2.1 | 44.9 ± 2.8 | 48.2 ± 3.2 | 124 ± 8 | 1208 ± 45 | 1.28×
flame_steak | 39.2 ± 2.3 | 41.6 ± 2.5 | 54.3 ± 3.5 | 139 ± 9 | 1356 ± 52 | 1.06×
flame_salmon | 34.7 ± 1.9 | 44.2 ± 2.7 | 50.4 ± 3.1 | 127 ± 7 | 1344 ± 48 | 1.27×
cook_spinach | 40.5 ± 2.4 | 43.2 ± 2.6 | 50.1 ± 3.0 | 142 ± 10 | 1397 ± 55 | 1.07×
cut_roasted_beef | 38.3 ± 2.2 | 43.0 ± 2.5 | 58.3 ± 3.8 | 139 ± 8 | 1303 ± 51 | 1.12×
sear_steak | 36.0 ± 2.0 | 41.1 ± 2.4 | 50.7 ± 3.2 | 128 ± 7 | 1297 ± 49 | 1.14×
Average | 37.3 ± 2.2 | 42.7 ± 2.6 | 52.0 ± 3.3 | 133.2 ± 8.2 | 1317.5 ± 50.0 | 1.16×
Table 12. Detailed computational cost breakdown of our pipeline components. The motion-aware decomposition strategy allocates 30.4% of computation to static regions and 50.5% to dynamic regions, with 19.1% for integration. Memory values represent per-module allocations, which sum to total memory usage due to concurrent processing. Measurements performed on RTX 3090 GPU at 1920 × 1080 resolution using NVIDIA Nsight profiler, averaged across five N3DV scenes with standard deviations over three runs.
Component | Time (ms/frame) ↓ | Percentage | Memory (GB) ↓ | FLOPs (G) ↓
Static Region Processing
   3DGS Rendering | 8.3 ± 0.4 | 22.1% | 4.2 | 12.5
   Feature Extraction | 3.1 ± 0.2 | 8.3% | 1.8 | 5.3
Dynamic Region Processing
   HexPlane Encoding | 6.7 ± 0.3 | 17.9% | 3.6 | 18.7
   RBF Deformation | 9.4 ± 0.5 | 25.1% | 2.1 | 8.9
   Motion Segmentation | 2.8 ± 0.2 | 7.5% | 0.7 | 3.2
Integration & Rendering
   Gaussian Splatting | 5.9 ± 0.3 | 15.7% | 2.3 | 14.6
   Shadow Refinement | 1.3 ± 0.1 | 3.4% | 0.4 | 2.1
Total Pipeline | 37.5 ± 1.2 | 100% | 15.1 | 65.3
Baseline Comparison (STG)
STG Total | 44.2 ± 1.5 | - | 17.8 | 89.7
Speedup/Reduction | +15.2% | - | −15.2% | −27.2%
Table 13. Impact of RBF-based deformation modeling on challenging visual phenomena. Shadow and reflection regions were manually annotated for targeted evaluation. Results demonstrate significant improvements in handling dynamic lighting effects.
Scene Type | Shadow PSNR (dB) | Reflection PSNR (dB) | Overall PSNR (dB)
Without RBF Deformation | 24.3 ± 1.2 | 22.8 ± 1.5 | 28.9 ± 0.9
With RBF Deformation | 27.8 ± 0.8 | 26.1 ± 1.1 | 31.4 ± 0.7
Improvement | +3.5 dB | +3.3 dB | +2.5 dB
Table 16. Learned adaptive symmetry weights across scene regions.
Scene Region | Learned β | Examples | Interpretation
Static Symmetric | 0.92–0.98 | Walls, floors | Strong constraints
Static Asymmetric | 0.45–0.62 | Textures | Balanced constraints
Dynamic Motion | 0.25–0.38 | Hand movements | Weak constraints
Dynamic Chaos | 0.08–0.15 | Flames, shadows | Minimal constraints
Table 17. Adaptive versus fixed strategies. Efficiency = PSNR/(Time × 10).
Strategy | Time (min) ↓ | PSNR (dB) ↑ | Efficiency ↑
Uniform Allocation | 45.2 ± 2.8 | 30.87 ± 0.9 | 0.683
Fixed 50/50 Split | 42.1 ± 2.5 | 31.15 ± 0.8 | 0.740
Fixed Threshold | 39.5 ± 2.2 | 31.08 ± 0.8 | 0.787
Adaptive (Ours) | 37.7 ± 2.1 | 31.41 ± 0.7 | 0.833