An Improved Method for 3D Style Transfer of Cliff Carvings Based on Gaussian Splatting
Abstract
1. Introduction
- We propose a tailored 3D style transfer method for cliff carvings, integrating the 3D Gaussian Splatting (3DGS) model with the Nearest Neighbor Feature Matching (NNFM) loss function. This approach enables multi-period style simulation and dynamic stylized rendering, thereby enhancing the aesthetic expressiveness of cliff carvings while preserving their intricate historical details;
- We optimize the 3DGS + NNFM pipeline to handle the unique planar geometry and high-frequency historical textures of cliff carvings by embedding low-dimensional features into the 3D Gaussian representation and employing a learnable affine transformation, ensuring multi-view consistency and style fidelity;
- We evaluate the proposed method on the Kongshuidong Cliff Carvings dataset, demonstrating its superior performance in 3D reconstruction and stylization. This provides an innovative solution for the digital preservation and interactive display of cultural heritage artifacts.
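The NNFM loss that drives the stylization in this pipeline can be sketched compactly. The following is a minimal NumPy illustration of nearest-neighbor feature matching over flattened deep-feature arrays, assuming a cosine-distance formulation; the array shapes and helper name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nnfm_loss(render_feats: np.ndarray, style_feats: np.ndarray) -> float:
    """Nearest Neighbor Feature Matching (NNFM) loss sketch.

    render_feats: (N, C) deep features of the rendered view.
    style_feats:  (M, C) deep features of the style image.
    Each rendered feature is matched to its nearest style feature
    under cosine distance; the loss is the mean of those distances.
    """
    r = render_feats / np.linalg.norm(render_feats, axis=1, keepdims=True)
    s = style_feats / np.linalg.norm(style_feats, axis=1, keepdims=True)
    cos_dist = 1.0 - r @ s.T          # (N, M) pairwise cosine distances
    return float(cos_dist.min(axis=1).mean())
```

In the full pipeline this loss would be evaluated on VGG feature maps of the rendered and style images and backpropagated into the Gaussian parameters.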
2. Related Work
2.1. 3D Reconstruction
2.2. Style Transfer
3. Methods
3.1. 3D Gaussian Splatting
3.2. Nearest Neighbor Feature Matching Loss
3.3. 3D Style Transfer
4. Results
4.1. Dataset
4.2. Quantitative Results
4.3. Qualitative Results
4.4. Ablation Studies
4.5. Additional Results
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Tweed, C.; Sutherland, M. Built cultural heritage and sustainable urban development. Landsc. Urban Plan. 2007, 83, 62–69.
- Dahaghin, M.; Castillo, M.; Riahidehkordi, K.; Toso, M.; Del Bue, A. Gaussian heritage: 3D digitization of cultural heritage with integrated object segmentation. arXiv 2024, arXiv:2409.19039.
- Jamil, O.; Brennan, A. Immersive heritage through Gaussian splatting: A new visual aesthetic for reality capture. Front. Comput. Sci. 2025, 7, 1515609.
- Mazzacca, G.; Karami, A.; Rigon, S.; Farella, E.; Trybala, P.; Remondino, F. NeRF for heritage 3D reconstruction. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 1051–1058.
- Siliutina, I.; Tytar, O.; Barbash, M.; Petrenko, N.; Yepyk, L. Cultural preservation and digital heritage: Challenges and opportunities. Amazon. Investig. 2024, 13, 262–273.
- Samavati, T.; Soryani, M. Deep learning-based 3D reconstruction: A survey. Artif. Intell. Rev. 2023, 56, 9175–9219.
- Mandujano, R.; Maria, G. Integration of historic building information modeling and valuation approaches for managing cultural heritage sites. In Proceedings of the 27th Annual Conference of the International Group for Lean Construction (IGLC), Dublin, Ireland, 1–7 July 2019; pp. 1433–1444.
- Gatys, L.A.; Ecker, A.S.; Bethge, M. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2016; pp. 2414–2423.
- Huang, H.-P.; Tseng, H.-Y.; Saini, S.; Singh, M.; Yang, M.-H. Learning to stylize novel views. In Proceedings of the IEEE/CVF International Conference on Computer Vision; IEEE: New York, NY, USA, 2021; pp. 13869–13878.
- Mu, F.; Wang, J.; Wu, Y.; Li, Y. 3D photo stylization: Learning to generate stylized novel views from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2022; pp. 16273–16282.
- Kerbl, B.; Kopanas, G.; Leimkühler, T.; Drettakis, G. 3D Gaussian splatting for real-time radiance field rendering. ACM Trans. Graph. 2023, 42, 139:1–139:14.
- Fei, B.; Xu, J.; Zhang, R.; Zhou, Q.; Yang, W.; He, Y. 3D Gaussian splatting as new era: A survey. IEEE Trans. Vis. Comput. Graph. 2024, 31, 4429–4449.
- Chen, D.; Li, H.; Ye, W.; Wang, Y.; Xie, W.; Zhai, S.; Wang, N.; Liu, H.; Bao, H.; Zhang, G. PGSR: Planar-based Gaussian splatting for efficient and high-fidelity surface reconstruction. IEEE Trans. Vis. Comput. Graph. 2024, 31, 6100–6111.
- Zhang, K.; Kolkin, N.; Bi, S.; Luan, F.; Xu, Z.; Shechtman, E.; Snavely, N. ARF: Artistic radiance fields. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2022; pp. 717–733.
- Alshawabkeh, Y.; Baik, A.; Miky, Y. Integration of laser scanner and photogrammetry for heritage BIM enhancement. ISPRS Int. J. Geo-Inf. 2021, 10, 316.
- Reutebuch, S.E.; Andersen, H.-E.; McGaughey, R.J. Light detection and ranging (LiDAR): An emerging tool for multiple resource inventory. J. For. 2005, 103, 286–292.
- Raj, T.; Hashim, F.H.; Huddin, A.B.; Ibrahim, M.F.; Hussain, A. A survey on LiDAR scanning mechanisms. Electronics 2020, 9, 741.
- Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing scenes as neural radiance fields for view synthesis. Commun. ACM 2021, 65, 99–106.
- Croce, V.; Billi, D.; Caroti, G.; Piemonte, A.; De Luca, L.; Véron, P. Comparative assessment of neural radiance fields and photogrammetry in digital heritage: Impact of varying image conditions on 3D reconstruction. Remote Sens. 2024, 16, 301.
- Zhang, K.; Riegler, G.; Snavely, N.; Koltun, V. NeRF++: Analyzing and improving neural radiance fields. arXiv 2020, arXiv:2010.07492.
- Chen, R.; Zhao, J.; Zhang, F.-L.; Chalmers, A.; Rhee, T. Neural radiance fields for dynamic view synthesis using local temporal priors. In International Conference on Computational Visual Media; Springer: Singapore, 2024; pp. 74–90.
- Lee, J.C.; Rho, D.; Sun, X.; Ko, J.H.; Park, E. Compact 3D Gaussian representation for radiance field. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2024; pp. 21719–21728.
- Zhu, H.; Zhang, Z.; Zhao, J.; Duan, H.; Ding, Y.; Xiao, X.; Yuan, J. Scene reconstruction techniques for autonomous driving: A review of 3D Gaussian splatting. Artif. Intell. Rev. 2024, 58, 30.
- Liu, H.; Liu, B.; Hu, Q.; Du, P.; Li, J.; Bao, Y.; Wang, F. A review on 3D Gaussian splatting for sparse view reconstruction. Artif. Intell. Rev. 2025, 58, 215.
- Wu, T.; Yuan, Y.-J.; Zhang, L.-X.; Yang, J.; Cao, Y.-P.; Yan, L.-Q.; Gao, L. Recent advances in 3D Gaussian splatting. Comput. Vis. Media 2024, 10, 613–642.
- Liu, K.-H.; Liu, T.-J.; Wang, C.-C.; Liu, H.-H.; Pei, S.-C. Modern architecture style transfer for ruin or old buildings. In Proceedings of the 2019 IEEE International Symposium on Circuits and Systems (ISCAS); IEEE: New York, NY, USA, 2019; pp. 1–5.
- Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision; IEEE: New York, NY, USA, 2017; pp. 2223–2232.
- Wang, Z.; Zhao, L.; Xing, W.; Lu, D. GLStyleNet: Higher quality style transfer combining global and local pyramid features. arXiv 2018, arXiv:1811.07260.
- Li, W.; Wu, T.; Zhong, F.; Oztireli, C. ARF-Plus: Controlling perceptual factors in artistic radiance fields for 3D scene stylization. In Proceedings of the 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV); IEEE: New York, NY, USA, 2025; pp. 2301–2310.
- Yu, A.; Li, R.; Tancik, M.; Li, H.; Ng, R.; Kanazawa, A. PlenOctrees for real-time rendering of neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision; IEEE: New York, NY, USA, 2021; pp. 5752–5761.
- Chiang, P.-Z.; Tsai, M.-S.; Tseng, H.-Y.; Lai, W.-S.; Chiu, W.-C. Stylizing 3D scene via implicit representation and hypernetwork. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision; IEEE: New York, NY, USA, 2022; pp. 1475–1484.
- Liu, K.; Zhan, F.; Chen, Y.; Zhang, J.; Yu, Y.; El Saddik, A.; Lu, S.; Xing, E.P. StyleRF: Zero-shot 3D style transfer of neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2023; pp. 8338–8348.
- Chen, A.; Xu, Z.; Geiger, A.; Yu, J.; Su, H. TensoRF: Tensorial radiance fields. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2022; pp. 333–350.
- Liu, K.; Zhan, F.; Xu, M.; Theobalt, C.; Shao, L.; Lu, S. StyleGaussian: Instant 3D style transfer with Gaussian splatting. In SIGGRAPH Asia 2024 Technical Communications; Association for Computing Machinery: New York, NY, USA, 2024; pp. 1–4.
- Huang, X.; Belongie, S. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision; IEEE: New York, NY, USA, 2017.
- Kotovenko, D.; Grebenkova, O.; Sarafianos, N.; Paliwal, A.; Ma, P.; Poursaeed, O.; Mohan, S.; Fan, Y.; Li, Y.; Ranjan, R.; et al. WaSt-3D: Wasserstein-2 distance for scene-to-scene stylization on 3D Gaussians. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2024; pp. 298–314.
- Zhou, S.; Chang, H.; Jiang, S.; Fan, Z.; Zhu, Z.; Xu, D.; Chari, P.; You, S.; Wang, Z.; Kadambi, A. Feature 3DGS: Supercharging 3D Gaussian splatting to enable distilled feature fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2024; pp. 21676–21685.
- Barron, J.T.; Mildenhall, B.; Verbin, D.; Srinivasan, P.P.; Hedman, P. Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. arXiv 2021, arXiv:2111.12077.
- Schönberger, J.L.; Frahm, J.-M. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2016; pp. 4104–4113.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Qin, M.; Li, W.; Zhou, J.; Wang, H.; Pfister, H. LangSplat: 3D language Gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2024; pp. 20051–20060.
- Wang, Z.; Li, Y.; Li, H. Chinese inscription restoration based on artificial intelligent models. npj Herit. Sci. 2025, 13, 326.
- Mildenhall, B.; Srinivasan, P.P.; Ortiz-Cayon, R.; Kalantari, N.K.; Ramamoorthi, R.; Ng, R.; Kar, A. Local light field fusion: Practical view synthesis with prescriptive sampling guidelines. ACM Trans. Graph. 2019, 38, 29.
- Niklaus, S.; Liu, F. Softmax splatting for video frame interpolation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2020; pp. 5437–5446.
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2018; pp. 586–595.

| Method | Short-Range LPIPS (↓) | Short-Range RMSE (↓) | Long-Range LPIPS (↓) | Long-Range RMSE (↓) |
|---|---|---|---|---|
| ARF | 0.162 | 0.143 | 0.220 | 0.203 |
| StyleRF | 0.104 | 0.117 | 0.122 | 0.149 |
| StyleGaussian | 0.052 | 0.067 | 0.109 | 0.112 |
| Ours | 0.044 | 0.041 | 0.079 | 0.072 |
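Short- and long-range consistency scores of this kind are typically computed by warping one stylized view into another's frame (e.g., via optical flow or depth) and comparing the overlapping pixels. A minimal sketch of the RMSE half of that comparison, assuming the warp and its validity mask are computed elsewhere, might look like:

```python
import numpy as np

def consistency_rmse(view_a: np.ndarray, view_b_warped: np.ndarray,
                     valid_mask: np.ndarray) -> float:
    """RMSE between a stylized view and a second stylized view warped
    into the first view's frame, over pixels where the warp is valid.

    view_a, view_b_warped: (H, W, 3) images in [0, 1].
    valid_mask: (H, W) boolean mask of pixels covered by the warp.
    """
    diff = view_a[valid_mask] - view_b_warped[valid_mask]
    return float(np.sqrt(np.mean(diff ** 2)))
```

Short-range consistency compares adjacent camera poses; long-range consistency compares widely separated ones, where stylization drift is more visible, which is why the long-range columns are uniformly worse.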
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Li, Y.; Ren, H.; Li, Y.; Sui, D.; Guo, M. An Improved Method for 3D Style Transfer of Cliff Carvings Based on Gaussian Splatting. Math. Comput. Appl. 2026, 31, 47. https://doi.org/10.3390/mca31020047
