Abstract
To address the low computational efficiency, insufficient recovery of fine detail, and dependence on multiple GPUs that 3D Gaussian Splatting (3DGS) exhibits in large-scale UAV scene reconstruction, this study introduces an improved 3DGS framework. The framework targets two main aspects: optimization of the partitioning strategy and enhancement of adaptive density control. Specifically, an adaptive partitioning strategy guided by scene complexity is designed to balance the computational workload across spatial blocks, and auxiliary point clouds are integrated during partition optimization to preserve scene integrity. Furthermore, a pixel weight-scaling mechanism regulates the average gradient used in adaptive density control, mitigating excessive densification of Gaussians; this design accelerates training while maintaining high-fidelity rendering quality. In addition, a task-scheduling algorithm based on frequency-domain analysis further improves computational resource utilization. Extensive experiments on multiple large-scale UAV datasets demonstrate that the proposed framework can be trained efficiently on a single RTX 3090 GPU, reducing average optimization time by more than 50% while achieving PSNR, SSIM, and LPIPS values comparable to or better than those of representative 3DGS-based methods. On the MatrixCity-S dataset (>6000 images), it attains the highest PSNR among 3DGS-based approaches and completes training on a single 24 GB GPU in less than 60% of the training time of DOGS. Nevertheless, the current framework still requires several hours of optimization for city-scale scenes and has so far been evaluated only on static UAV imagery with a fixed camera model, which may limit its applicability to dynamic scenes or heterogeneous sensor configurations.