Proceeding Paper

Denoising and Voxelization for Finite Element Analysis: A Review †

by
Sara Gonizzi Barsanti
Department of Engineering, Università degli Studi della Campania Luigi Vanvitelli, Via Roma 29, 81031 Aversa, Italy
Presented at the Conference “Discovering Pompeii: From Effects to Causes—From Surveying to the Reconstructions of Ballistae and Scorpiones”, Aversa, Italy, 27 February 2025.
Eng. Proc. 2025, 96(1), 6; https://doi.org/10.3390/engproc2025096006
Published: 6 June 2025

Abstract

The conservation of cultural heritage is fundamental, yet it is difficult to predict how heritage objects will respond to structural damage. For these objects, the most common analysis workflow relies on NURBS models, which may introduce an excessive level of approximation and lead to incorrect simulation results. This work presents a preliminary literature review and first tests of denoising and voxelization algorithms, applied to the creation of volumetric models of a reconstruction of an ancient scorpionide, in order to identify the bottlenecks of the post-processing pipeline for creating volumetric data for the FEA of cultural heritage.

1. Introduction

World cultural heritage is jeopardized by hazards, both natural (e.g., floods, earthquakes, fires) and man-made (e.g., pollution, mass tourism, traffic, urban sprawl, neglect). Lack of funding and interest, as well as conflicts, can also accelerate the loss of this patrimony. Cultural heritage sites face significant threats during emergencies, recovery phases, and reconstruction efforts following environmental calamities such as earthquakes and floods, because reconstruction projects, if not carefully planned, can pose a serious danger to historically and culturally significant areas. Consequently, it is crucial to implement risk mitigation strategies to safeguard both movable and immovable cultural heritage, along with their surrounding landscapes. The conservators and owners of cultural sites and museums need a simplified risk analysis approach that does not require extensive expertise to implement; this allows them to calibrate and organize the conservation process in the best way. It is also an important resource for decision-makers who may not have sufficient knowledge and skills to deal with the complex risk assessment and evaluation process [1]. Diagnostic studies are fundamental for the conservation of cultural heritage; therefore, it is necessary to select and appropriately use current remote non-destructive testing (NDT) techniques. These techniques allow for examining different types of damage on a range of materials without taking samples from or touching the artifacts, providing diagnostics that assist in inspection and conservation. Three-dimensional reality-based modeling is an established technique that produces 3D models of the current state of the surveyed object with high accuracy and precision [2]. The models can be used directly for structural analysis with a strong simplification of the surface elements [3], which allows for maintaining good accuracy.
However, this involves several steps and an increasing level of possible errors or approximation. A good option is to apply voxels directly to 3D point clouds, avoiding these intermediate steps. This paper describes the application of voxels to a point cloud cleaned with denoising algorithms, using as test object a copy of the 1st-century AD Xanten-Wardt scorpionide, reconstructed at 1:1 scale by Flavio Russo for Archeotecnica.it [4]. The small dart launcher was discovered in 1999 on the bed of a river, in the area known today as Lake Sudsee in the government district of Düsseldorf.

1.1. Denoising

Three-dimensional point clouds generated from photogrammetric and laser scanning surveys are, most of the time, noisy, due to non-collaborative materials or surfaces, inadequate lighting, overly complex or convoluted geometries, and the imprecision of the instruments employed. This noise not only produces point clouds that do not comply with the geometry of the surveyed object, but also introduces erratic information, diminishing the geometric accuracy of the mesh model resulting from the following steps of the reality-based process and, consequently, the outcomes of any analyses conducted on it. As a result, it is essential to clean the raw data, and noise removal has emerged as a significant focus in 3D geometric data processing; this process helps to eliminate noise and recover the true original shape of the surveyed object. Bilateral filtering [5] is a non-linear technique used mainly for image smoothing and later adapted to point cloud denoising [6]; it considers the position, normal, and color of the points [7]. A second algorithm that can be used is guided filtering [8], an explicit image filter that functions as an edge-preserving smoothing operator [9]. There are also filter-based algorithms that use the normals of the points as guiding signals, leading to an iterative filtering process that updates the points to align them with the estimated normals. Graph-based point cloud denoising methods interpret the input point cloud as a graph signal and then perform denoising through selected graph filters [10], while patch-based graph methods construct patches of point clouds, with each patch represented as a node [11]. Optimization-based denoising techniques denoise the point cloud while maintaining the best approximation of the input point cloud [12].
Lastly, deep learning algorithms have been employed in point cloud processing [10], wherein noisy inputs are used to learn, in an offline phase, a mapping towards the ground-truth data. Deep learning-based approaches can be categorized into two types: unsupervised [10] and supervised denoising methods, such as PointCleanNet [13].
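As a concrete illustration of the filter-based family described above, the following is a minimal sketch of bilateral-style point cloud smoothing in Python (NumPy only). It weights neighbours by spatial distance alone, whereas the full bilateral filter [5,6] also weights by normal and color similarity; the function name and parameter values are illustrative, not taken from any of the cited implementations.

```python
import numpy as np

def bilateral_smooth(points, radius=0.05, sigma_d=0.02):
    """Simplified bilateral-style smoothing: each point is replaced by a
    Gaussian distance-weighted average of its neighbours within `radius`.
    (A full bilateral filter would also use normal/color similarity.)"""
    smoothed = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)      # distances to all points
        mask = d < radius                           # neighbourhood selection
        w = np.exp(-(d[mask] ** 2) / (2 * sigma_d ** 2))  # spatial weights
        smoothed[i] = (w[:, None] * points[mask]).sum(axis=0) / w.sum()
    return smoothed
```

On a noisy planar patch, this kind of averaging visibly reduces the scatter of points around the underlying surface, which is the effect the denoising step aims for before any volume creation.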

1.2. Voxel

Voxelization is used for object detection, especially in autonomous driving or element recognition in the majority of related works [14,15,16,17,18,19], and for 3D point cloud segmentation. The medical field is another scientific area where voxel-based modeling is used extensively [20,21,22,23,24]. Tests of using voxels for FEA have also been conducted. For instance, voxel-based micro-modeling was used to create a parametric model of a composite structure in order to calculate ballistic impacts on ceramic-polymer composite panels [25]. Another study predicted roof collapses using voxel modeling of caves; this method made it possible to overcome the limitations of the FEM software (Ansys 19.2) and the challenges associated with reconstructing the geometry of the caves [26]. To increase the accuracy of FEA, a homogenization approach for voxel elements is presented in [27,28]: although employing voxels speeds up mesh generation, it lacks accuracy when working with curved surfaces. Voxelization can be carried out automatically with specific software, for example Blender 4.1 and Meshmixer 3.5.474, starting from an input mesh, or with the open-source library Open3D. Blender 4.1 has the most straightforward process, but the operator can only control the number of approximated elements. Meshmixer 3.5.474 creates a watertight solid from mesh surfaces by recomputing the object into a voxel representation; the process is easy, and the only parameters the operator can change are the solid type (fast or accurate) and the solid accuracy, via a slider that yields a given cell count and mesh density. These numbers are not correlated with the final number of polygons of the volume [29]. The Open3D core features include (i) 3D data structures, (ii) 3D data processing algorithms, (iii) scene reconstruction, (iv) surface alignment, and (v) 3D visualization.
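The loss of accuracy on curved surfaces mentioned above can be quantified with a small numerical experiment (an illustrative sketch, not taken from [27,28]): approximate a sphere by counting the voxels whose centre falls inside it and compare the resulting volume with the analytic one. The function name and parameters are hypothetical.

```python
import numpy as np

def voxel_volume_error(radius=1.0, voxel_size=0.2):
    """Relative volume error of a sphere approximated by the voxels whose
    centre lies inside it: a proxy for voxelization accuracy on curved
    surfaces. Smaller voxels reduce, but never remove, the error."""
    n = int(np.ceil(2 * radius / voxel_size)) + 2
    # voxel-centre coordinates covering the sphere's bounding box
    coords = (np.arange(n) + 0.5) * voxel_size - radius - voxel_size
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    inside = x**2 + y**2 + z**2 <= radius**2
    voxel_vol = inside.sum() * voxel_size**3
    true_vol = 4.0 / 3.0 * np.pi * radius**3
    return abs(voxel_vol - true_vol) / true_vol
```

Refining the voxel size shrinks the error, which is exactly what the homogenization approaches in [27,28] try to compensate for without paying the full cost of a finer grid.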

1.3. Finite Element Analysis (FEA)

The finite element method (FEM) is a numerical technique used to perform finite element analysis (FEA), initially developed for structural mechanics and later applied to other types of problems [30]. The physical problem usually involves a structure, or a structural component, subjected to certain loads. The idealization of this physical problem into a mathematical one requires specific assumptions that lead to differential equations. Finite element analysis solves this mathematical model and, since the solution technique is a numerical procedure, the accuracy of the solution must be assessed: if the required accuracy is not met, the numerical solution is repeated with refined parameters (for example, finer meshes) until adequate accuracy is achieved. Finite element analysis approximates the exact solution of the problem, and the behavior of any point within a finite element is described by the nodal displacements, which are the first result of an FEM calculation. In summary, the use of the FEM increases accuracy, improves design, and enhances the identification of critical parts of a structure or object. In FEA, a mathematical model is commonly used as an idealization of the physical object, built to predict or simulate its behavior. The analysis is then performed on meshed models; the elements used differ depending on whether a 2D or 3D problem is evaluated. Two-dimensional elements are triangular and quadrangular: quadrangular elements are preferred since triangular ones have lower precision. Three-dimensional elements are tetrahedral and hexahedral: as with 2D elements, hexahedral ones are more accurate (e.g., they deform to a lower strain energy state), but it is more difficult to mesh a 3D volume with this type of element if it is not segmented.
Three-dimensional elements can be linear or quadratic: quadratic elements also have mid-side nodes, so the number of nodes ranges from four (linear tetrahedron) to twenty (quadratic hexahedron).
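In compact form, the displacement field within an element is interpolated from the nodal displacements through shape functions, and the assembled global system is then solved for the nodal displacement vector (standard FEM notation, following [30]):

```latex
% Displacement at a point x of an element, interpolated from the n nodal
% values u_i through the shape functions N_i (n = 4 for a linear
% tetrahedron, n = 20 for a quadratic hexahedron):
u(\mathbf{x}) = \sum_{i=1}^{n} N_i(\mathbf{x})\, u_i
% Assembled global system, with stiffness matrix K and load vector f,
% solved for the nodal displacement vector u:
\mathbf{K}\,\mathbf{u} = \mathbf{f}
```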

2. Materials and Methods

This research utilized the Scopus and Web of Science (WOS) databases as primary search tools. Scopus, recognized as the largest abstract and citation database globally, encompasses over 20,000 journals across various disciplines published by more than 5000 publishers. Its extensive coverage and focus on specific subjects provide researchers with access to a broader spectrum of literature, thereby offering more robust data support to the academic community. To reduce personal bias and enhance the overall quality of the review, this study implemented a structured data search methodology in accordance with the meta-analysis protocol for systematic reviews. The initial phase of this literature review involves an exploratory survey designed to identify studies that have employed both techniques in examining climate-induced degradation processes impacting historical buildings. The systematic literature review adheres to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [31].

2.1. Exploratory Survey for Keywords Detection

The exploratory survey enables the detection of trusted and specific keyword combinations. It was conducted by searching for documents that simultaneously used the keywords denoising, voxel, and 3D survey for FEA. In Scopus, the relevant documents were identified by searching within the fields “Article title, Abstract and Keyword” on the Search page. This search included all documents in the database published from 2022 through the end of January 2025. Since the intention was to analyze whether denoising and voxel algorithms have been used together, and whether they have been used in combination with FEA in cultural heritage, the search was performed with three keyword combinations using the Boolean operator “AND”: (1) “voxel” AND “FEA” AND “point cloud” (27 papers); (2) “voxel” AND “FEA” AND “point cloud” AND “cultural heritage*” (242 papers); (3) “voxel” AND “denoising” AND “cultural heritage*” (5 papers). Initially, the search yielded a total of 265 documents; the group was then narrowed down by reviewing the titles and excluding papers that (i) did not appear related to the simultaneous use of the techniques, (ii) were duplicates, or (iii) were either unavailable or not written in English. From this screening, 185 documents were excluded, resulting in a total of 71 papers. The literature review for this contribution focuses on the less-explored domain of the simultaneous application of denoising and voxel algorithms to the structural analysis of cultural heritage and photogrammetry.

2.2. Screening and Inclusion Phases

The second step of the screening process included the title and abstract. Articles “out of topic” were excluded as they dealt with the following topics:
  • Applications out of topic, such as medical or additive manufacturing research.
  • Applications regarding the built environment but not related to cultural heritage (both movable and immovable), such as aqueducts, viaducts, bridges, and/or very modern structures (built less than 20 years ago).
  • Papers particularly focused on the effectiveness of these techniques separately, without exploring their joint or simultaneous use for monitoring the condition of historical structures or heritage objects.
This step resulted in the exclusion of 51 documents, leading to the final selection of 13 publications, all in scientific journals.

2.3. State-of-the-Art in the Integration of Denoising Algorithms and Voxels

All the selected papers were analyzed based on their research approach. The first group analyzed was the one retrieved with the keywords “voxel” AND “denoising” AND “cultural heritage”. Unfortunately, none of these papers presented a combined use of voxels and denoising algorithms for the post-processing of cultural heritage models. Refs. [32,33,34,35] considered Deep and Machine Learning algorithms for the classification and representation of 3D point clouds, and three of them are reviews; none considered denoising algorithms or voxels for the optimization and post-processing of point clouds. The use of voxels is analyzed in [36], which provides a review of applications using voxel-based representations, while [37,38] consider the application of Machine and Deep Learning algorithms to the preservation of cultural heritage, but not the algorithms investigated in this paper. Finally, [39] presents a work on a hybrid survey technique for cultural heritage. In summary, none of the papers identified considered the two processes together. The second group, “voxel” AND “FEA” AND “point cloud”, is the one from which the most papers were selected. In [40], an interesting approach to damage identification is presented, but it is not related to cultural heritage. Topology optimization is analyzed in [41], but again the topic is not cultural heritage, and no denoising algorithms or voxels are analyzed. The use of algorithms for crack prediction is investigated in [42], while [43] presents a work on data structures and algorithms for efficiently converting piece-wise linear geometric data into topologically adequate voxel data. The last group is the one from the keywords “voxel” AND “FEA” AND “point cloud” AND “cultural heritage”. In [44], the only paper left after the two screening steps, FEA for cultural heritage is considered, but not Deep and Machine Learning algorithms.
It is hence clear that the field lacks comprehensive works on the combined use of these algorithms; even where an investigation follows one of the two processing methods, the goal of the research is not FEA.

3. Results

Given the lack of prior work, it was decided to test the most recent algorithms on a 3D point cloud of a copy of the scorpionide. A point cloud derived from a photogrammetric survey was processed with denoising and voxel algorithms to analyze the difficulties and problems of applying these algorithms to complex structures. The test object is the copy of a Roman scorpionide, a war throwing machine used in sieges (Figure 1).

3.1. Denoising

The point cloud was processed using Agisoft Metashape 2.2.1 and scaled using specific metric targets. The algorithm used for point cloud cleaning is one of the most relevant in recent research [45]. The method consists of a denoiser based on the score of the point cloud in three-dimensional space. It uses deep learning principles to train the model and identify the best-fitting region for each point; a gradient method is used to align the outlier groups with the estimated evaluation function. In other words, the technique simulates an intelligent smoothing of the underlying surface based on a majority-voting principle (or point density/size). The following methodology was employed: (i) use of the Git repository of the original project [46]; (ii) creation of an Anaconda environment, paying attention to the project requirements and the packages needed; (iii) large point cloud denoising (>50,000 points) run on the model; (iv) saving of an XYZ mesh as output. The system uses two libraries, pytorch3d and pytorch: the first for managing 3D elements and the second for training deep learning models. The latter is the core of the system: it extracts features from the input and passes the data to the evaluation network, which classifies them using a threshold-based clustering principle, aligns the points to match the threshold, and removes the outliers. The original and denoised point clouds were then compared using CloudCompare 2.14, which returns a Gaussian distribution (mean and standard deviation) and a signed C2C (cloud-to-cloud) distance, meaning that only the closest points are compared (Figure 2).
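The C2C comparison performed by CloudCompare can be sketched as follows: for each point of one cloud, take the distance to its nearest neighbour in the other cloud, then summarise with mean and standard deviation. This is a minimal unsigned version (the signed C2C additionally uses surface normals to orient each distance); the function name and the brute-force search are illustrative, as CloudCompare uses an octree internally.

```python
import numpy as np

def c2c_distances(source, reference):
    """For each point in `source`, the (unsigned) distance to its nearest
    neighbour in `reference` (brute force, O(n*m))."""
    d = np.linalg.norm(source[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1)

# The mean and standard deviation of these distances correspond to the
# Gaussian fit reported by CloudCompare:
# dist = c2c_distances(denoised, original); dist.mean(), dist.std()
```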
To better evaluate the differences between the two point clouds, a series of profiles was exported in *.dxf format and analyzed using Rhinoceros 7 to determine the maximum distance between the two lines (red for the non-denoised point cloud, blue for the denoised one). The overlap of these profiles illustrates how the algorithm impacts the geometric organization of the 3D points, realigning noisy data to match the ideal surface. The profiles reveal discrepancies, with the red line deviating significantly from the true geometry of the object (Figure 3a,b).
The results are summarized in Table 1.

3.2. Voxelization

The problem when dealing with complex geometries, such as those of cultural heritage artifacts, is that voxelization algorithms introduce too much simplification. A strong algorithm for computing voxels is presented in [46]; unfortunately, its starting point is a mesh, whereas here it was decided to start from the point cloud to create voxel grids. Therefore, the open-source Open3D library [47] was tested, with the function voxel_down_sample(self, voxel_size), which downsamples the input point cloud so that all points falling within a voxel are averaged into a single point; normals and colors are averaged if they exist. The steps of the methodology applied are as follows:
  • Imposition of voxel size to downsample;
  • Definition of the resolution of the voxel grid;
  • Creation of the voxel grid (Figure 4);
  • Creation of a binary occupancy grid;
  • Activation of a corresponding index in the binary grid for each voxel;
  • Application of the Marching Cubes algorithm to extract the mesh surface;
  • Saving of the resulting STL file.
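The first five steps above can be sketched with NumPy alone, mimicking the averaging behaviour of Open3D's voxel_down_sample and building the binary occupancy grid; the Marching Cubes surface extraction and STL export are omitted here, and the function name is illustrative.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Downsample a point cloud by averaging points per voxel (as
    voxel_down_sample does) and build a binary occupancy grid over the
    bounding box."""
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)  # voxel index per point
    # group points sharing the same voxel index and average them
    keys, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    downsampled = np.zeros((len(keys), 3))
    np.add.at(downsampled, inverse, points)
    counts = np.bincount(inverse).astype(float)
    downsampled /= counts[:, None]
    # binary occupancy grid: True where at least one point fell in the voxel
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[tuple(keys.T)] = True
    return downsampled, grid
```

The occupancy grid is the input one would pass to a Marching Cubes implementation (step 6) to extract the mesh surface.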
Starting from a simple visual analysis (Figure 5a,b), the result is promising: the algorithm managed to create a complete model, following even the most geometrically complex parts, such as the ropes and the small metal nails.
To further analyze the accuracy of the data, the voxel model was then compared with both the denoised point cloud and the mesh model derived from it (Figure 6a,b).
The results are shown in Table 2.

4. Conclusions

FEA can work with 2D or 3D models, depending on the object to be analyzed. The volumetric models usually employed in structural analysis software are formed by NURBS patches through 3D CAD modeling. If the analysis is carried out on models of cultural heritage, one possibility is to use 3D reality-based models directly, transformed into volumes. This procedure, although proven effective [3], consists of a sequence of iterative steps, each one adding a degree of approximation to the final result. Starting from a reality-based survey, the data, after the conversion of a disorganized point cloud into a mesh via 3D surface reconstruction (a first level of approximation), can be redrawn with CAD software from profiles and then exported to BIM or HBIM models, or converted to Non-Uniform Rational B-Splines (NURBS) following the process that uses retopology for the simplification of the mesh [45]. The export of 3D surface meshes to NURBS creates one patch for each element of the surface mesh, thereby approximating the shape of the object. Additionally, the application of finite element analysis (FEA) in structural analysis introduces another step of approximation, leading to a cumulative effect that can cause the final output to diverge significantly from reality. Consequently, starting with less precise data, such as a point cloud with a large amount of noise due to survey issues, inevitably results in a less accurate final product.
The denoising algorithm is effective because it improves geometric accuracy, that is, a more accurate and precise distribution of the points, bringing the geometry of the point cloud closer to that of the real surveyed object.
To skip a few steps and decrease the level of approximation of the volume used in FEA, voxelization algorithms can be used to transform 3D point clouds directly into volumes. The tessellated surface can be a problem when analyzing the model in FEA software, especially during the meshing process, and this problem still needs to be solved. In this paper, precompiled tools and an available library for the creation of volumes from 3D reality-based models were used, starting from a first attempt at a literature review with just three keyword combinations. The search was narrowed to these three specific combinations to obtain an immediate idea of the state of the art.
The test of the Open3D library gave optimal results in the creation of the voxel grid from 3D point clouds. This is why denoising has to be considered best practice and the first step towards obtaining a better voxel grid. The use of voxels for the creation of volumes shows great results in the medical field, probably because the data used have different accuracy, geometry, and dimensions. For FEA, on the other hand, some problems arose during the tests because the geometry of the volume, even if good in terms of standard deviation with respect to the input data, is not well read by the FEA software. A few tests have been carried out but are not presented in this paper since the results are poor; the main issues concern the meshing step with 3D elements and the imposition of forces and boundary conditions. One possibility for the latter is to segment the point cloud and then create a volume for each segmented part. This step may, however, run into problems related to the continuity of the adjacent surfaces that shape the model: if these volumetric surfaces are not strictly connected, the FEA will give inaccurate or wrong results. This is the first of a series of difficult improvements that the available voxelization tools need. The reason lies, probably, in the different fields in which voxels have typically been used, fields that either do not need this level of accuracy or deal with geometrically less complex data.
The intention for future work is to analyze different algorithms and software for voxel creation using denoised point clouds of objects that vary in geometry, material composition, and dimensions. What is expected, at the least, is to find the bottlenecks of the available algorithms that can be improved for a more accurate result. From this first stage of the state-of-the-art review, there is currently no script, algorithm, or software able to provide the level of accuracy needed for the proposed pipeline.

Funding

This research was supported by the project “SCORPiò-NIDI”, CUP B53D2302210 0006, funded by the Italian Ministry of Research under the PRIN (DD n.104/2022) funding initiative, PI prof. Rosa De Finis, University of Salento, co-PI prof. Sara Gonizzi Barsanti, University of Campania Luigi Vanvitelli.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Research data are available upon request.

Acknowledgments

The author would like to thank Gabriel Zuchtriegel, Director of the Archaeological Area of Pompeii, Giuseppe Scarpati, Head of the Study and Research Area, and Valeria Amoretti. The research activities are part of the MUR–PRIN 2022 project “SCORPiò-NIDI”.

Conflicts of Interest

The author declares no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Pedersoli, J.L., Jr.; Antomarchi, C.; Michalski, S. A Guide to Risk Management of Cultural Heritage; ICCROM ATHAR Regional Conservation Centre: Sharjah, United Arab Emirates, 2016. [Google Scholar]
  2. Rossi, A.; Cipriani, L.; Cabezos-Bernal, P.M. 3D Digital Models. Accessibility and Inclusive Fruition. DISEGNARECON 2024, 17, 1–6. [Google Scholar] [CrossRef]
  3. Gonizzi Barsanti, S.; Guagliano, M.; Rossi, A. 3D Reality-Based Survey and Retopology for Structural Analysis of Cultural Heritage. Sensors 2022, 22, 9593. [Google Scholar] [CrossRef] [PubMed]
  4. Fratino, M.; Rossi, A. Re-construction of the small Xanten dart launcher. In Discovering Pompeii: From Effects to Causes. From Surveying to the Reconstructions of Ballistae and Scorpiones; Real Casa dell’Annunziata, Department of Engineering Vanvitelli University; MDPI: Basel, Switzerland, 2025; under process of publication. [Google Scholar]
  5. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the Sixth International Conference on Computer Vision, Bombay, India, 7 January 1998; IEEE: Piscataway, NJ, USA, 1998; pp. 839–846. [Google Scholar]
  6. Chen, H.; Shen, J. Denoising of point cloud data for computer-aided design, engineering, and manufacturing. Eng. Comput. 2018, 34, 523–541. [Google Scholar] [CrossRef]
  7. Digne, J.; Franchis, C.D. The bilateral filter for point clouds. Image Process. Online 2017, 7, 278–287. [Google Scholar] [CrossRef]
  8. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef]
  9. Han, X.; Jin, J.S.; Wang, M.; Jiang, W. Guided 3D point cloud filtering. Multimed. Tools Appl. 2018, 77, 17397–17411. [Google Scholar] [CrossRef]
  10. Irfan, M.A.; Magli, E. Exploiting color for graph-based 3d point cloud denoising. J. Vis. Commun. Image Represent. 2021, 75, 103027. [Google Scholar] [CrossRef]
  11. Dinesh, C.; Cheung, G.; Bajić, I.V. Point cloud denoising via feature graph Laplacian regularization. IEEE Trans. Image Process. 2020, 29, 4143–4158. [Google Scholar] [CrossRef]
  12. Xu, Z.; Foi, A. Anisotropic denoising of 3D point clouds by aggregation of multiple surface-adaptive estimates. IEEE Trans. Vis. Comput. Graph. 2021, 27, 2851–2868. [Google Scholar] [CrossRef]
  13. Rakotosaona, M.J.; La Barbera, V.; Guerrero, P.; Mitra, N.J.; Ovsjanikov, M. Pointcleannet: Learning to denoise and remove outliers from dense point clouds. Comput. Graph. Forum 2021, 39, 185–203. [Google Scholar] [CrossRef]
  14. Sun, J.; Ji, Y.M.; Wu, F.; Zhang, C.; Sun, Y. Semantic-aware 3D-voxel CenterNet for point cloud object detection. Comput. Electr. Eng. 2022, 98, 107677. [Google Scholar] [CrossRef]
  15. He, C.; Li, R.; Li, S.; Zhang, L. Voxel set transformer: A set-to-set approach to 3D object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 8417–8427. [Google Scholar]
  16. Mahmoud, A.; Hu, J.S.; Waslander, S.L. Dense voxel fusion for 3D object detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 663–672. [Google Scholar]
  17. Shrout, O.; Ben-Shabat, Y.; Tal, A. GraVoS: Voxel Selection for 3D Point-Cloud Detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 21684–21693. [Google Scholar]
  18. Deng, J.; Shi, S.; Li, P.; Zhou, W.; Zhang, Y.; Li, H. Voxel r-cnn: Towards high performance voxel-based 3D object detection. Proc. AAAI Conf. Artif. Intell. 2021, 35, 1201–1209. [Google Scholar] [CrossRef]
  19. He, C.; Zeng, H.; Huang, J.; Hua, X.S.; Zhang, L. Structure Aware Single-Stage 3D Object Detection from Point Cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11870–11879. [Google Scholar]
  20. Lv, C.; Lin, W.; Zhao, B. Voxel Structure-Based Mesh Reconstruction From a 3D Point Cloud. IEEE Trans. Multimed. 2022, 24, 1815–1829. [Google Scholar] [CrossRef]
  21. Sas, A.; Ohs, N.; Tanck, E.; van Lenthe, G.H. Nonlinear voxel-based finite element model for strength assessment of healthy and metastatic proximal femurs. Bone Rep. 2020, 12, 100263. [Google Scholar] [CrossRef] [PubMed]
  22. Lee, T.Y.; Weng, T.L.; Lin, C.H.; Sun, Y.N. Interactive voxel surface rendering in medical applications. Comput. Med. Imaging Graph. 1999, 23, 193–200. [Google Scholar] [CrossRef]
  23. Han, G.; Li, J.; Wang, S.; Wang, L.; Zhou, Y.; Liu, Y. A comparison of voxel- and surface-based cone-beam computed tomography mandibular superimposition in adult orthodontic patients. J. Int. Med. Res. 2021, 49, 0300060520982708. [Google Scholar] [CrossRef] [PubMed]
  24. Goto, M.; Abe, O.; Hagiwara, A.; Fujita, S.; Kamagata, K.; Hori, M.; Aoki, S.; Osada, T.; Konishi, S.; Masutani, Y.; et al. Advantages of Using Both Voxel- and Surface-based Morphometry in Cortical Morphology Analysis: A Review of Various Applications, Magnetic Resonance. Med. Sci. 2022, 21, 41–57. [Google Scholar] [CrossRef] [PubMed]
  25. Babich, M.; Kublanov, V. Voxel Based Finite Element Method Modelling Framework for Electrical Stimulation Applications Using Open-Source Software. In Proceedings of the Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT), Yekaterinburg, Russia, 25–26 April 2019; pp. 127–130. [Google Scholar]
  26. Sapozhnikov, S.B.; Shchurova, E.I. Voxel and Finite Element Analysis Models for Ballistic Impact on Ceramic-polymer Composite Panels. Procedia Eng. 2017, 206, 182–187. [Google Scholar] [CrossRef]
  27. Doğan, S.; Güllü, H. Multiple methods for voxel modeling and finite element analysis for man-made caves in soft rock of Gaziantep. Bull. Eng. Geol. Environ. 2022, 81, 23. [Google Scholar] [CrossRef]
  28. Watanabe, K.; Iijima, Y.; Kawano, K.; Igarashi, H. Voxel Based Finite Element Method Using Homogenization. IEEE Trans. Magn. 2012, 48, 543–546. [Google Scholar] [CrossRef]
  29. Gonizzi Barsanti, S.; Marini, M.R.; Malatesta, S.G.; Rossi, A. Evaluation of Denoising and Voxelization Algorithms on 3D Point Clouds. Remote Sens. 2024, 16, 2632. [Google Scholar] [CrossRef]
  30. Zienkiewicz, O.C.; Taylor, R.L. The Finite Element Method; McGraw Hill: London, UK, 1989. [Google Scholar]
  31. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
  32. Muzahid, A.A.M.; Han, H.; Zhang, Y.; Dawei, L.; Zhang, Y.; Jamshid, J.; Sohel, F. Deep learning for 3D object recognition: A survey. Neurocomputing 2024, 608, 128436. [Google Scholar] [CrossRef]
  33. Xue, F.; Lu, W.; Webster, C.J.; Chen, K. A derivative-free optimization-based approach for detecting architectural symmetries from 3D point clouds. ISPRS J. Photogramm. Remote Sens. 2019, 148, 32–40. [Google Scholar] [CrossRef]
  34. Shahab, S.S.; Himeur, Y.; Kheddar, H.; Amira, A.; Fadli, F.; Atalla, S.; Copiaco, A.; Mansoor, W. Advancing 3D point cloud understanding through deep transfer learning: A comprehensive survey. Inf. Fusion 2025, 113, 102601. [Google Scholar]
  35. Di Angelo, L.; Di Stefano, P.; Guardiani, E. A review of computer-based methods for classification and reconstruction of 3D high-density scanned archaeological pottery. J. Cult. Herit. 2022, 56, 10–24. [Google Scholar] [CrossRef]
  36. Xu, Y.; Tong, X.; Stilla, U. Voxel-based representation of 3D point clouds: Methods, applications, and its potential use in the construction industry. Autom. Constr. 2021, 126, 103675. [Google Scholar] [CrossRef]
  37. Zu, X.; Gao, C.; Liu, Y.; Zhao, Z.; Hou, R.; Wang, Y. Machine intelligence for interpretation and preservation of built heritage. Autom. Constr. 2025, 172, 106055. [Google Scholar] [CrossRef]
  38. Liu, D.; Cao, K.; Tang, Y.; Zhang, J.; Meng, X.; Ao, T.; Zhang, H. Study on weathering corrosion characteristics of red sandstone of ancient buildings under the perspective of non-destructive testing. J. Build. Eng. 2024, 85, 108520. [Google Scholar] [CrossRef]
  39. Wang, Y.; Bi, W.; Liu, X.; Wang, Y. Overcoming single-technology limitations in digital heritage preservation: A study of the LiPhoScan 3D reconstruction model. Alex. Eng. J. 2025, 119, 518–530. [Google Scholar] [CrossRef]
  40. Gao, Y.; Li, H.; Fu, W.; Chai, C.; Su, T. Damage volumetric assessment and digital twin synchronization based on LiDAR point clouds. Autom. Constr. 2024, 157, 105168. [Google Scholar] [CrossRef]
  41. Zhang, Z.; Yao, W.; Li, Y.; Zhou, W.; Chen, X. Topology optimization via implicit neural representations. Comput. Methods Appl. Mech. Eng. 2023, 411, 116052. [Google Scholar] [CrossRef]
  42. Zhao, Y.; Liu, Y.; Xu, Z. Statistical learning prediction of fatigue crack growth via path slicing and re-weighting. Theor. Appl. Mech. Lett. 2023, 13, 100477. [Google Scholar] [CrossRef]
  43. Nourian, P.; Azadi, S. Voxel graph operators: Topological voxelization, graph generation, and derivation of discrete differential operators from voxel complexes. Adv. Eng. Softw. 2024, 196, 103722. [Google Scholar] [CrossRef]
  44. Cakir, F.; Kucuk, S. A case study on the restoration of a three-story historical structure based on field tests, laboratory tests and finite element analyses. Structures 2022, 44, 1356–1391. [Google Scholar] [CrossRef]
  45. Luo, S.; Hu, W. Score-based point cloud denoising. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 4563–4572. [Google Scholar]
  46. Baert, J. Cuda Voxelizer: A GPU-Accelerated Mesh Voxelizer. 2017. Available online: https://github.com/Forceflow/cuda_voxelizer (accessed on 5 February 2025).
  47. Zhou, Q.Y.; Park, J.; Koltun, V. Open3D: A modern library for 3D data processing. arXiv 2018, arXiv:1801.09847. [Google Scholar]
Figure 1. Image of the copy of the Xanten–Wardt scorpionide, reconstructed by Flavio Russo at 1:1 scale.
Figure 2. Comparison of the two point clouds.
Figure 3. Comparison of profiles of the raw 3D point cloud (red) and the denoised one (blue). (a) The entire profile extracted from a section of the 3D point cloud; (b) detail of the maximum distance between the two point clouds. It is evident how the denoising algorithm rearranges the surface, removing spikes and geometrically incoherent data.
Figure 4. The voxel grid created from the denoised point cloud.
Figure 5. The voxel model (a) obtained from the denoised point cloud, with a close-up of the front part (b).
Figure 6. The voxel model obtained compared with the denoised point cloud (a) and the mesh (b).
Table 1. Mean and standard deviation of the scorpionide point cloud analyzed after denoising.
Object      | Mean (mm) | Standard Deviation (mm) | Profiles Max Distance (mm)
Scorpionide | 0.000264  | 0.000226                | 16,764
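The statistics in Table 1 summarize nearest-neighbour distances between the raw and denoised clouds. The paper does not state which tool computed them; the following is a minimal NumPy sketch of the idea, using a brute-force nearest-neighbour search that is only suitable for small clouds (the cloud data and noise scale are illustrative, not the paper's):

```python
import numpy as np

def cloud_to_cloud_stats(source: np.ndarray, reference: np.ndarray):
    """Mean and standard deviation of the nearest-neighbour distance
    from every source point to the reference cloud (brute force)."""
    # Pairwise distance matrix of shape (n_source, n_reference)
    diffs = source[:, None, :] - reference[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    nearest = dists.min(axis=1)          # closest reference point per source point
    return nearest.mean(), nearest.std()

# Toy example: a slightly noisy copy of a random 200-point cloud
rng = np.random.default_rng(0)
ref = rng.random((200, 3))
src = ref + rng.normal(scale=0.001, size=ref.shape)
mean_d, std_d = cloud_to_cloud_stats(src, ref)
```

For clouds of realistic size, a k-d tree (e.g., `scipy.spatial.cKDTree`) or a dedicated tool such as CloudCompare would replace the quadratic-memory distance matrix.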
Table 2. Mean and standard deviation of the voxel model of the scorpionide compared with the denoised point cloud and the mesh.
                   | Mean (mm) | Standard Deviation (mm)
Cfr denoised/voxel | 0.00008   | 0.000393
Cfr voxel/mesh     | 0.000369  | 0.000760
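The voxel model compared above was built with Open3D [47]. A minimal NumPy sketch of the underlying quantization step (assuming a grid anchored at the cloud's minimum corner, which mirrors Open3D's occupancy-grid construction):

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Return the unique occupied voxel indices for a point cloud."""
    origin = points.min(axis=0)                              # anchor grid at cloud minimum
    indices = np.floor((points - origin) / voxel_size).astype(np.int64)
    return np.unique(indices, axis=0)                        # one entry per occupied voxel

# Toy cloud: the 8 corners of a unit cube, voxelized at 0.5 mm
pts = np.array([[x, y, z] for x in (0.0, 1.0)
                          for y in (0.0, 1.0)
                          for z in (0.0, 1.0)])
voxels = voxelize(pts, 0.5)   # 8 distinct occupied voxels
```

In Open3D itself the equivalent call is `o3d.geometry.VoxelGrid.create_from_point_cloud(pcd, voxel_size)`, which additionally stores per-voxel colors; the choice of `voxel_size` trades geometric fidelity against the element count of the resulting FEA mesh.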
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
