A Method for Determining the Shape Similarity of Complex Three-Dimensional Structures to Aid Decay Restoration and Digitization Error Correction

Abstract: This paper introduces a new method for determining the shape similarity of complex three-dimensional (3D) mesh structures based on extracting a vector of important vertices, ordered according to a matrix of their most important geometrical and topological features. The correlation of ordered matrix vectors is combined with a perceptual definition of salient regions in order to aid the detection, distinguishing, measurement and restoration of real degradation and digitization errors. The case study is the digital 3D structure of the Camino Degli Angeli, in Urbino's Ducal Palace, acquired by the structure from motion (SfM) technique. In order to obtain an accurate, featured representation of the matching shape, strong mesh processing computations are performed over the mesh surface while preserving the real shape and geometric structure. In addition to perceptually based feature ranking, a new theoretical approach for ranking the evaluation criteria by employing neural networks (NNs) is proposed to reduce the probability of deleting shape points subject to optimization. Numerical analysis and simulations, in combination with the developed virtual reality (VR) application, serve as an assurance to restoration specialists, providing visual and feature-based comparison of damaged parts with correct similar examples. The procedure also distinguishes mesh irregularities resulting from the photogrammetry process.


Introduction
Many cultural heritage (CH) archives are currently undergoing extensive 3D digitization to preserve artifacts from inevitable decay and provide visitors with remote access to rich cultural collections. In recent years, numerous digitization techniques have arisen, varying in accordance with the diverse nature of 3D objects. However, the possibility of a digitization error occurring is high due to the complexity of the 3D digitization process, which includes preparation, digital recording and data processing [1].
The superiority of computational algorithms for digital shape analysis and comparison, in combination with advanced concepts of artificial intelligence (AI), ensures a qualitative representation of 3D models in virtual reality (VR) and augmented reality (AR) environments [2]. Increasing the efficiency of CH model dissemination relies mostly on the quality of the acquisition equipment and the hardware elements that deal with the collected data, providing both semantic and visual information. For example, laser scanning and digital photogrammetry techniques, accompanied by developed image processing software, can provide very precise information regarding artifacts, ranging from miniature sculptures to large-scale monuments. However, even state-of-the-art hardware generates digitization errors due to technological limitations, variation of environmental light, low resolution of captured images or an insufficient number of scanning steps. On the other hand, imperfections of digitized 3D models can also be related to the nature of the artifact, i.e., the physicochemical characteristics of its material, the complexity of its shape and structure, its size and the environmental influence, amongst others. These problems are addressed by improvements in the resolution and quality of digital sensors, followed by the integration of geo-positioning sensors, with a permanent increase in computational power [3]. The other direction implies the development of new efficient algorithms that estimate camera parameters using a large number of captured images, such as structure from motion (SfM) [4] and visual simultaneous localization and mapping (VSLAM) [5]. Although the processed 3D objects satisfy the demands of a wider audience, they are still inadequate for restoration requirements due to their vulnerability to even non-malicious geometrical and topological deformations and transformations.
In this paper, we propose an advanced shape analysis method that meets restoration requirements by considering all geometric deviations including holes and missing geometry structures of the particular 3D structure. The shape analysis and similarity algorithms combined with VR applications may help restoration specialists to compare damaged parts of both the physical artifact and its 3D representation with the correct similar sample.
The importance of shape similarity was first underlined in the fields of computer vision, mechanical engineering and molecular biology applications [6]. As a response to new requirements, the automatic comparison of 3D models has been introduced in the form of tools that use algorithms for shape matching, such as the well-known iterative closest point (ICP) [7]. As a result of complex restoration requirements, the literature also offers significant content-based similarity matching solutions and algorithms for filling holes [8,9]. A myriad of visualization techniques has also been proposed for manual reconstruction in order to achieve qualitative visualization. However, such methods mainly rely on the use of computer-aided design (CAD) or geographic information system (GIS) software [10,11]. This kind of approach provides high-quality representations of restored models that do not actually correspond to their original geometric form.
This paper introduces a new method for determining the shape similarity of complex 3D mesh structures based on extracting a vector of important vertices, ordered according to a matrix of their most important geometrical and topological features. The case study is the digital 3D structure of the Camino Degli Angeli, in Urbino's Ducal Palace, acquired by the SfM technique [12,13], the practical result of which is illustrated in detail in Appendix A (Figure A1), with the technical performance and adjustments of the camera used given in Table A1. In order to obtain an accurate, featured representation of the matching shape, strong mesh processing computations were performed over the mesh surface while preserving the real shape and geometric structure.
In addition to perceptually based feature ranking, a new theoretical approach for ranking the evaluation criteria by employing neural networks (NNs) is proposed to reduce the probability of deleting shape points subject to optimization. The construction of NNs with input neuron values obtained from the 3D mesh features contributes directly to the computation of the salient parts of the geometry, avoiding the additional use of other systems that support queries based on 3D sketches, 2D sketches, 3D models, text and images and their association with particular 3D structures [14,15].
Numerical analysis and simulations in combination with the developed VR application serve as an assurance to restoration specialists providing visual and feature-based comparison of damaged parts with correct similar examples. The procedure also distinguishes mesh irregularities resulting from the photogrammetry process.
This paper is organized as follows: Section 2 describes prior work that addressed the shape similarity problem in analyzing complex geometrical structures. In this section, we also briefly describe methods and achievements that are exploited in our approach. Notation and concepts used throughout the paper are described in detail within Section 3. Section 4 contains a detailed description of our algorithm and a theoretical background of the techniques used with mathematical and geometrical notations. Numerical results with the corresponding visual illustrations are shown in Section 5. A brief discussion, conclusions and future work directions are given in Section 6.

Prior Work
The principle of geometric similarity and symmetry has been established in theory as a crucial shape description problem [16-18]. Recent studies [19,20] have been linked to measuring the distances between descriptors using dissimilarity measurements to reduce the set of measured values and achieve accurate results. A mathematical generalization that satisfactorily represents the salient regions and shapes of any 3D structure is an imperative and also a starting point of this research. One direction of research is focused on defining the most suitable formulation to make the representation invariant to the pose of the 3D object and to the method by which it was created. The comparison of 3D shapes using covariance matrices of features, instead of the computation of features separately, is one example [21]. At the same time, combining different shape matching methods is becoming more challenging and promising [22].
An interesting idea of employing surface analysis and filling holes with patches from valid regions of the given surface is presented in [23], within a recent context-based method for the restoration of 3D structures. Another concept of model repairing and simplification [24] converts polygonal models into voxels, employing either the parity-count method or the ray-stabbing method according to the type of deformation. In the final procedure, the volumetric model is converted back into polygons using iso-surface extraction for the final output. Unlike the approaches that use similarity for repairing architectural models [25], Chen et al. [26] propose a visual-similarity-based approach for 3D model retrieval, using features of the orthogonal projection encoded by Zernike moments and Fourier descriptors. Although it achieves good results in decreasing errors in 3D model generation by the selection and elimination of planar polygons, coincident coplanar polygons, improper polygon intersections and face orientation, it is not efficient enough for error correction in the general case.
The philosophy of our approach is different. We employ strong signal processing during the vertex feature detection to ensure proper synchronization prior to similarity detection, while the quantization is adaptive and operates only on a matching example and not on the whole complex mesh. The reasoning for such an approach is the following. Our content-aware extraction selects the mesh vertices that stay invariant during mesh transformations, thus reducing the probability of descriptor deletion. The heart of this method for selecting "important" vertices is the ordered statistics vertex extraction and tracing algorithm (OSVETA) [27]. This is a sophisticated and powerful algorithm that combines several mesh topology measures with human visual system (HVS) metrics to calculate vertex stabilities and trace the vertices most suitable for extraction. Such vertex preprocessing allows the use of low-complexity algorithms tailored to the matching computation.

Preliminaries
In this section, we introduce the notation and the concepts used throughout the paper. We start with a brief discussion of the most important features and introduce the geometrical and topological features that will be used in the calculations. Then, we introduce the quantization of important vertices and the calculation of the Hausdorff distance between points. We also introduce the basic terminology of the constructed NN that will be used in the theoretical concept of ranking criteria and vertices.

Gaussian and Mean Curvature Descriptors
From differential geometry [28], we can locally approximate a manifold surface M in R^3 by its tangent plane, which is orthogonal to the normal vector n. We define e as a unit direction in the tangent plane and the normal curvature κ_N as the curvature of the curve that belongs to both the surface M and the plane containing n and e. The average value and the product of the two principal curvatures κ_1 and κ_2 of the surface define the mean curvature κ_H = (κ_1 + κ_2)/2 and the Gaussian curvature κ_G = κ_1 κ_2. The mean curvature κ_H(v_i) and the unit normal n(v_i) at the vertex v_i are related through the mean curvature normal (Laplace-Beltrami) operator K(v_i) = 2κ_H(v_i) n(v_i). Using the derived expression for the discrete mean curvature normal operator [29], and defining at the point v_i: #f, the number of adjacent triangular faces; θ_ij, the angle of the jth adjacent triangle at v_i; and A_B, the area of the first ring of triangles around v_i, we can find the Gaussian and mean curvature of the discrete surface, which depend only on the vertex position and the angles of the adjacent triangles:

κ_G(v_i) = (2π − ∑_{j=1}^{#f} θ_ij) / A_B,    κ_H(v_i) = ‖K(v_i)‖ / 2.

The index "B" denotes the use of the barycentric region in the computation of the area A.
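As a minimal numeric illustration (not the authors' implementation), the angle-deficit form of the discrete Gaussian curvature at a single vertex can be sketched as follows; the function name `gaussian_curvature` and the neighbor layout are assumptions of this sketch:

```python
import numpy as np

def gaussian_curvature(vi, ring):
    """Angle-deficit estimate: (2*pi - sum(theta_ij)) / A_B, where `ring`
    lists the adjacent triangles as (v_j, v_k) vertex pairs and A_B
    accumulates one third of each adjacent triangle's area."""
    angle_sum, area_b = 0.0, 0.0
    for vj, vk in ring:
        e1, e2 = vj - vi, vk - vi
        cos_t = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
        angle_sum += np.arccos(np.clip(cos_t, -1.0, 1.0))  # theta_ij at v_i
        area_b += 0.5 * np.linalg.norm(np.cross(e1, e2)) / 3.0
    return (2.0 * np.pi - angle_sum) / area_b
```

A flat one-ring yields a curvature of zero, since the incident angles sum to 2π.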

Fitting Quadric Curvature Estimation
The idea of the fitting quadric curvature estimation method is that a smooth surface geometry can be locally approximated by a quadratic polynomial surface, fitting a quadric to the points in a local neighborhood of each chosen point in a local coordinate frame. The curvature of the quadric at the chosen point is taken as the estimate of the curvature of the discrete surface.
For a simple quadric of the form z = ax^2 + bxy + cy^2, we estimate the surface normal n at the point v either by simple or weighted averaging or by finding the least-squares fitted plane through the point and its neighbors. Then, we position a local coordinate system (x̂, ŷ, ẑ) at the point v and align the ẑ axis along the estimated normal. We follow the suggestion of McIvor and Valkenburg [30] for aligning the x̂ coordinate axis with the projection of the global x axis onto the tangent plane defined by n. If we use the suggested improvements, fit the mapped data to the extended quadric ẑ = a′x̂^2 + b′x̂ŷ + c′ŷ^2 + d′x̂ + e′ŷ, and solve the resulting system of linear equations, we finally obtain the estimates of the Gaussian and mean curvature:

κ_G = (4a′c′ − b′^2) / (1 + d′^2 + e′^2)^2,    κ_H = (a′(1 + e′^2) − b′d′e′ + c′(1 + d′^2)) / (1 + d′^2 + e′^2)^{3/2},

where a′, b′, c′, d′, e′ are the parameters of the fitted extended quadric.
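Assuming the neighborhood has already been mapped into the local frame (x̂, ŷ, ẑ), the extended-quadric fit and the curvature evaluation reduce to one least-squares solve. The sketch below is a hedged illustration; the function name and input layout are choices made here, not taken from [30]:

```python
import numpy as np

def quadric_curvatures(pts):
    """Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y to points given in the
    local frame, then evaluate Gaussian and mean curvature at the origin."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y])
    (a, b, c, d, e), *_ = np.linalg.lstsq(A, z, rcond=None)
    w = 1.0 + d * d + e * e                        # 1 + |gradient|^2 at origin
    k_g = (4.0 * a * c - b * b) / w ** 2           # Gaussian curvature
    k_h = (a * (1 + e * e) - b * d * e + c * (1 + d * d)) / w ** 1.5
    return k_g, k_h
```

For points sampled exactly on the paraboloid z = x^2 + y^2, the fit recovers a = c = 1 and b = d = e = 0, giving κ_G = 4 and κ_H = 2 at the origin.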

Mesh Quantization
For a given triangular mesh M with n vertices, each vertex v_i ∈ M is conventionally represented using absolute Cartesian coordinates, denoted by v_i = (x_i, y_i, z_i). The nominal uniform quantizer Q_M(v) in the classification stage maps an input value v to a quantization index k, i.e., it distributes vertices into quantization levels with a step ∆ according to the value of their coordinates [31]:

Q_M(v) = [v/∆],

where, with a slight abuse of notation, [∗] denotes the rounding operation, i.e., for a real x, [x] is the integer closest to x. Irregular meshes and complex geometrical structures require non-uniform or adaptive quantizers that partition the quantization space according to the input data. For any mesh M ∈ R^3 with non-proportional dimensional sizes, the quantizer Q^A_M(v), adaptive to each particular dimension, is constructed as a triplet of quantizers with the same number of quantization levels, k, but with a particular step size ∆_x, ∆_y, ∆_z for each dimension.
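A per-dimension adaptive quantizer of this kind can be sketched in a few lines. The bounding-box step size and the clipping of boundary vertices into the last cell are assumptions made here for a self-contained example:

```python
import numpy as np

def adaptive_quantize(verts, k):
    """Map each vertex (x, y, z) to a triplet of cell indices in [0, k-1],
    with a separate step Delta per dimension derived from the bounding box.
    Assumes a non-degenerate bounding box (no zero-extent dimension)."""
    vmin = verts.min(axis=0)
    step = (verts.max(axis=0) - vmin) / k     # Delta_x, Delta_y, Delta_z
    idx = np.floor((verts - vmin) / step).astype(int)
    return np.clip(idx, 0, k - 1)             # max-face vertices fall into the last cell
```

Each axis gets its own step, so elongated meshes are partitioned into cells proportional to their extent rather than into cubes.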

Ordered Statistics Vertex Extraction
Let G(v, f) = {g_1(v, f), g_2(v, f), . . . , g_l(v, f)} be the set of l functions over the vector of mesh vertices v and their corresponding indices in the triangular face construction f. If the function g_i has both positive and negative values, its argument vector v can be ordered by at least two criteria, v̄ = 〈v〉_χ1 and v_ = 〈v〉_χ2, where 〈∗〉 denotes ordering the argument vector in descending or ascending order in accordance with the criteria χ_1 = g_i > 0 and χ_2 = g_i < 0, respectively (the values g(v) = 0 of all features, except ψ_max and ψ_min, correspond to vertices belonging to surface regions irrelevant for our consideration). For the given set of functions G, one can define a new set Ĝ = wχ of q criteria weighted by the importance coefficients w = {w_1, w_2, . . . , w_q}, and extract a new vector of vertices ordered according to the sum of all elements of Ĝ [27].
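The weighted ordering itself reduces to a single matrix operation. The sketch below assumes the criterion values have already been evaluated into a matrix with one row per criterion; the names are illustrative only:

```python
import numpy as np

def rank_vertices(G, w):
    """G: (q, n) matrix of criterion values per vertex; w: importance
    weights of the q criteria.  Returns vertex indices ordered by the
    weighted sum of all criteria, most important first."""
    scores = w @ G              # weighted sum over all criteria per vertex
    return np.argsort(-scores)  # descending order of importance
```

The returned index vector plays the role of the ordered vector of vertices described above.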

Our Algorithm
The core of our method is a fundamental shape analysis of the 3D structure and the extraction of geometric primitives depending on their involvement in the shape creation. Each primitive is assigned its importance using the ordered statistics ranking of crucial geometrical and topological features. In order to decrease the computational time taken to describe their connectivity, all vertices are quantized by their Euclidean coordinates, assigning only the indices of the vertices with the highest importance to quantization levels. This quantized mesh ensures fast preliminary descriptor computation and, thus, good similarity results even when using a very small number of quantization levels. The refinement of the matching results is achieved by iteratively increasing the number of quantization levels while tightening the descriptor criteria.
The algorithm operates within four basic steps: (i) computation of geometrical and topological features g(v, f ) over the whole 3D mesh structure; (ii) extraction of the vector of vertices ordered according to their features' ranking v I ; (iii) adaptive mesh quantization depending on the selected sample mesh region, Q A (v), and (iv) similarity description computation. Figure 1 illustrates the flow of the whole process of our algorithm including the task of the topological error determination.

The contribution of our algorithm is threefold: (i) a novel method for extracting a set of crucial geometrical and topological features from Ĝ using strong signal processing; (ii) a fast and accurate ordered statistics ranking criteria algorithm for important vertex extraction, v_I; and (iii) a new adaptive 3D quantization technique that operates with iteratively determined quantization steps. The correlation and cooperation between all the above methods ensure a reliable set of vertices for the final computation of similarity matching descriptors. The value of the calculated Hausdorff distance between the matching shape descriptors and the sample shape descriptors triggers an increase or decrease of the quantization level, k, striving to reduce the quantization error.

Mesh Processing
Prior to considering the similarity problems, we perform strong 3D mesh processing that includes the calculation of most of the geometrical and topological features of the given mesh geometry. All computed values of the crucial shape description features are collected in matrix form, which is suitable for all further fast computations and statistical analyses. The mesh correction step also includes the extraction of boundary vertices and topological errors, thus decreasing the possibility of faults in all further steps. On the other hand, some topological errors can indicate the appearance of digitization errors, the detection of which is also important in distinguishing them from geometric decay deformations of the considered artifact.

Ordered Statistics Algorithm
The core of our approach toward the extraction of important vertices is an ordered statistics ranking criteria algorithm [27], which has already been proven in the field of 3D mesh watermarking [32] and is here adapted to shape recognition and similarity detection. Our idea is to assign weights to all criteria, Ĝ = wχ, according to the statistical information on their participation in determining the shape importance (Section 3.4), instead of separately sorting important vertices according to each geometrical and topological criterion, and to let these weighted criteria further participate in the quantization and similarity determination process. In this way, we not only reduce the computation time, due to a significant reduction of the number of loops in the execution of the software procedures, but also enable the additional use of neural networks in the criteria ranking.
All weight values in the previous table were taken from our experimental computation and improved by the results of our recent research using NN ranking [33]. The result of this step is the vector of vertex indices v_I ordered according to their computed importance.

Adaptive Mesh Quantization
The second significant achievement of this paper is the introduction of the adaptive quantization of important vertices, which simplifies the complex mesh structure for the subsequent computations. The first step of the algorithm is the sample mesh quantization, which sorts only the important vertices of the chosen sample structure into k cells for each dimension in R^3. From the set of vertices whose Euclidean coordinates belong to a particular quantization level, the algorithm selects only the one with the highest position in the previously ordered vector v_I (Section 4.2). The starting minimal number of quantization levels, k = 2, facilitates the very fast extraction of 2^3 vertices.
Considering a pair or triplet of matching vertices, using a low number of quantization levels is usually not enough for the shape description, but this step ensures a valid starting point and very fast computation of the selected distances. By increasing the quantization levels in the loop within the next steps, the algorithm provides higher precision of the important vertices and, thus, a more accurate 3D shape description.
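Combining the quantizer with the ordered vector v_I, the per-cell selection described above can be sketched as follows; the function name `sample_quantized_vertices` and its argument layout are assumptions of this sketch:

```python
import numpy as np

def sample_quantized_vertices(verts, order, k):
    """Keep, for each occupied quantization cell, only the vertex that
    appears earliest in the importance-ordered index vector `order`."""
    vmin = verts.min(axis=0)
    step = (verts.max(axis=0) - vmin) / k
    cells = np.clip(np.floor((verts - vmin) / step).astype(int), 0, k - 1)
    chosen = {}
    for idx in order:                              # iterate best-first
        chosen.setdefault(tuple(cells[idx]), idx)  # first hit wins the cell
    return sorted(chosen.values())
```

With k = 2 this extracts at most 2^3 vertices, matching the fast starting point described above.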

Similarity Matching Procedure
The simplified mesh structure obtained using both previous steps of the algorithm ensures a low computational time for the vertex distance computation and the determination of a correlation between the obtained measures. In other words, this step of the algorithm operates on a number of descriptors that is more than two orders of magnitude smaller than that of an ordinary complex mesh. The result of our quantization algorithm is a matrix with the information on the selected set of important vertices, the density of their distribution and also their position level in the vector of ordered vertices v_I. Thus, contrary to standard methods, our algorithm provides additional information prior to the rigid similarity calculations based on distances between selected points.
Assuming that most of the geometric information is concentrated in dense quants, we first determine the quants of the whole mesh that meet that condition. The second similarity criterion is the connection of the selected important vertices to the adjacent quants, according to the low-level quantized sample and their "importance" position. Finally, the mutual distance between the selected vertices in the sample mesh A and all resulting matching vertices B is calculated as the Hausdorff distance:

d_Hausdorff(A, B) = max{ sup_{a∈A} inf_{b∈B} d(a, b), sup_{b∈B} inf_{a∈A} d(a, b) }.

The vertices with a minimal value of the calculated d_Hausdorff are considered matching points in the first iteration step. The fine tuning of the selected matching points includes increasing the quantization level k, repeating the calculation procedure and considering more important vertices. Minimizing the number of matching mesh surface regions interrupts the iterative procedure.
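The symmetric Hausdorff distance between two point sets follows directly from the pairwise distance matrix. A brute-force sketch (adequate for the small descriptor sets produced by the quantization step):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A (m, dim) and
    B (n, dim): max over both directions of sup_a inf_b ||a - b||."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (m, n) pairwise
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

For large sets a k-d tree based nearest-neighbor query would replace the dense distance matrix, but the descriptor counts here stay small by construction.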
It is quite clear that a better choice of important points leads to a reduction in the number of iterations and a faster and more accurate matching region detection. In this paper, we propose improvements by introducing NNs for the ranking of the vertex extraction criteria.

Neural Networks for 3D Feature Ranking
An additional contribution of our method is a theoretical approach employing neural networks in the already described vertex extraction algorithm. It enables more precise feature ranking within the mesh geometry and topology estimation. Following the principles of feature-based neural networks and including all relevant 3D mesh features g(v), we designed a feature learning framework that directly uses the vector of all vertex features and their q derived criteria as the NN inputs [33]. In order to avoid the irregularity problem of 3D mesh geometry and topology, we set each of our hidden neurons to be activated by the same input weight value and to have the same bias for all input neurons. The NN learns by adjusting all weights and biases through backpropagation to obtain the ordered vector of vertex indices that provides information on vertex importance.
In the backpropagation process, the hidden neurons H receive inputs from the input neurons I. The activations of these neurons are the components of the input vector I = {i_1, . . . , i_n}. The weight between the input neuron I_i and the hidden-layer neuron H_j is w_ij. In this context, the NN input to the jth neuron is the sum of weighted signals from the input neurons, h_inj = b_j + ∑_i i_i w_ij, where the sigmoid activation function is σ(x) = 1/(1 + e^{−x}) and the bias b is included by adding it to the input vector. Updating the hidden-layer weights is the standard procedure for minimizing an error over a training set. Since each of our hidden-layer neurons is weighted by the same weights and biases from all input neurons, the gradient weight update of the hidden layer is given as ∆w_j = ησ_j i, where η is the learning rate.
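A minimal numeric sketch of the shared-weight update above; the names and the single-neuron simplification are assumptions made here, and the error term of full backpropagation is omitted, mirroring the simplified expression ∆w_j = ησ_j i:

```python
import numpy as np

def sigmoid(x):
    # logistic activation: sigma(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def hidden_update(i_vec, w, b, eta):
    """One step for a hidden neuron sharing the weights w and bias b over
    all feature inputs: h_in = b + sum_i i_i * w_i, followed by the
    simplified gradient step Delta_w = eta * sigma * i."""
    sigma = sigmoid(b + np.dot(i_vec, w))
    return sigma, w + eta * sigma * i_vec
```

Because the weights and bias are shared, one update serves every hidden neuron, which is what keeps the ranking network insensitive to the irregular mesh connectivity.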
This theoretical approach relies on a 3D feature-based NN application that is considered only within the step of stable vertex extraction. Additional overall improvements can be achieved using some of the already developed machine learning algorithms in the final similarity matching phase, particularly in the predictive selection of neighboring vertices after our quantization step.

Numerical Results
In order to obtain valuable and perceptually measurable results, we deliberately chose the digital 3D model of the Camino Degli Angeli, in Urbino's Ducal Palace, as the case study. This 3D model is, at a glance, perceptually appropriate for the experimental purpose for at least three reasons: (i) There is an obvious central symmetry and, thus, similarity between the left and right sides of the object. (ii) The model contains figures whose recognizable similar shapes represent a strict functionality test and a measurable evaluation of the algorithm's accuracy. (iii) Digitization errors on the right are also clearly visible and aid the assessment of the algorithm's efficiency in classifying this type of shortcoming.

Mesh Processing Performance
We started the simulation with separately performed computations for five parts of the model:
• angelo-1L.obj-the sculpture of an angel on the top-left,
• angelo-1R.obj-the symmetrical pair at the right,
• angelo-2L.obj-the left angel ornament,
• angelo-2R.obj-the symmetrical pair at the right, and
• camino degli angeli.obj-the whole fireplace 3D model.
The artifacts inside these parts were assumed to be the best example of the restoration process' requirements and the right guidance in performing the simulation. In addition, the previous assumptions also included that the type of decay at the right is not defined a priori. All input models were without textures, which would actually interfere with the perceptual tests. The renders are shown in Figure 2. Our algorithm estimates the curvature and shape of the model by computing 24 geometrical and topological features at each vertex over the mesh surface. According to the specificity of the similarity problem, in all the next steps we used the resulting vectors of eight features (Table 1). Figure 3 and the additional figures in Figure A2 illustrate the level of success of the computed features in the process of determining the salient regions of the mesh surface and, thus, in the shape recognition.


Ordered Statistics Vertex Extraction Performance
The first stage of this algorithm is selecting all the vertices and ordering their vector indices according to the value in the matrix of all defined criteria described in Table 1 and Section 4.2. Each resulting extracted matrix column now contains ordered vertex indices obtained by using all the criteria from the same table. Roughly speaking, we can visually illustrate the result of this step by observing all the images from Figure A2, in which each vertex color (from blue to red) represents the value of the corresponding feature.
The next stage statistically or empirically defines the "importance" value of each criterion or, more precisely, the importance of each derived criterion in the shape definition process. In other words, the purpose of using the criteria is to determine the grades of the vertices in the ranking criteria phase. For example, with the ranking criteria κ_G > 0 and κ_H < 0 graded highest, the algorithm will extract only convex regions. However, all criteria have an influence and a particular contribution to the shape creation. The result and output of this phase is an extracted vector of vertex indices v_I ordered in accordance with the sum vector of all 14 criteria (Section 3.4).

Another proof of our approach is the noticeable invariance of the top selected vertices after the simplification and optimization processes, even after strong optimization with only 5% of the surviving vertices (Figure 5).

Results of the Quantization
Our previously extracted set of important vertices, vI, ensures valid data for the next quantization procedure. As expected, our quantizer provides better results for higher levels of quantization, k, but also unexpectedly good results for a small number of levels, which is actually very useful in the proposed iterative computation (Section 4.3).
Our algorithm allows a user to interactively enter a desired level of quantization, k, but we can clearly see from Figure 6 the satisfactory efficiency of our quantizer using k = [2, 3, 4]. Table 2 below provides more extensive data for each level of quantization.
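The core quantization idea can be sketched as a uniform binning of the vertex coordinates into k cells per axis. This is a minimal illustration of the concept, not the paper's exact quantizer, which additionally uses per-axis level counts and a tolerance factor.

```python
import numpy as np

def quantize_vertices(points, k):
    """Uniformly quantize 3D coordinates into k levels per axis.
    points: (n, 3) float array. Returns integer cell indices in [0, k-1]."""
    lo = points.min(axis=0)
    span = points.max(axis=0) - lo
    span[span == 0] = 1.0                # avoid division by zero on flat axes
    cells = np.floor((points - lo) / span * k).astype(int)
    return np.clip(cells, 0, k - 1)      # put the max coordinate into the last cell
```

With small k (2 to 4, as in Figure 6), the integer cell triples already act as compact, comparable descriptors, which is why a low level of quantization can still perform well in the iterative matching loop.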

Supplementing the tabular data, Figure 7 illustrates the spatial distribution of the matched vertices by quantization. Regardless of the presented results in the above table, we can notice, even perceptually, the similarity of the black points' distribution in the sample area of the left angel ornament and the red points' distribution on the right. This is promising information for the final matching step.
For better visual comparison of the results of each considered quantization level, an additional set of illustrations is provided in Appendix B ( Figure A3).

Matching Shapes Performance
The adaptive 3D quantizer operates with iteratively determined quantization steps and correlates with the ordered-statistics vertex extraction algorithm, ensuring a reliable set of vertices for the final computation of the similarity matching descriptors. The value of the calculated Hausdorff distance between the matching shape descriptors and the sample shape descriptors triggers an increase or decrease in the quantization level, k, striving to reduce the quantization error. Figure 8 illustrates the successfully detected similarity between the shape in the selected mesh area and the corresponding shape inside the considered whole mesh. The shape matching algorithm is calibrated to exclude all sample vertices from the whole-mesh distance computation, thus decreasing the number of required combinations.
All computations in the paper were performed using our developed software, and the obtained results are verifiable via the open software code available online [34]. The supplementary materials, including all used 3D models, are also available at the same link [34].
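A minimal sketch of the descriptor comparison and the adaptive control of k might look as follows. The `hausdorff` function implements the standard symmetric Hausdorff distance between point sets; `adaptive_match` and its `tol` and `k_max` parameters are illustrative assumptions of ours, since the exact update rule and descriptor construction live in the published software [34].

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A (n, 3) and B (m, 3)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def adaptive_match(sample_desc, mesh_desc, k=2, k_max=8, tol=0.1):
    """Raise the quantization level k until the Hausdorff distance between
    the sample and mesh descriptors falls below tol (illustrative loop).
    sample_desc, mesh_desc: callables mapping k -> (n, 3) descriptor arrays."""
    while k < k_max and hausdorff(sample_desc(k), mesh_desc(k)) > tol:
        k += 1
    return k
```

In this sketch the descriptor distance shrinks as k grows, so the loop terminates at the first level that satisfies the tolerance, mirroring the "increase or decrease k to reduce the quantization error" behavior described above.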

Discussion, Conclusions and Future Work
In this paper, we aimed to reach a satisfactory level in the non-trivial field of 3D geometry estimation and especially in determining the geometrical similarities of shapes.
Our novel theoretical approach is described in detail with a clear explanation of the algorithm structure, including each of its steps and procedures. The method's efficiency is experimentally proven with the presented numerical results and appropriate illustrations. The complete computational software engine has been developed, with interactivity provided for research use. All developed source code is available online and free for use, review and improvement.
The contribution of this paper is threefold: (i) a novel method for extracting a set of crucial geometrical and topological features, G, using strong signal processing; (ii) a fast and accurate ordered-statistics ranking criteria algorithm for important-vertex extraction, vI; and (iii) a new adaptive 3D quantization technique that operates with iteratively determined quantization steps. Correlation and cooperation between all the above methods ensure a reliable set of vertices for the final computation of similarity matching descriptors. The value of the calculated Hausdorff distance between the matching shape descriptors and the sample shape descriptors triggers an increase or decrease in the quantization level, k, striving to reduce the quantization error.
The presented experimental results demonstrate that the similarity of two shapes (ornamental angels in our case study) can be fairly estimated using the eight features from Table 1. We show that our quantizer, using k = 3, quantization levels 14 × 42 × 20 for the whole mesh and a tolerance of 1/6, ensures an accurate, featured representation of the matching shape. Moreover, our novel adaptive quantization technique overcomes a mesh complexity shortcoming by improving the computation speed. The experimental data presented in this paper satisfactorily suit the requirements of experts in the restoration of CH artifacts, providing numerical and perceptual support for their needs.
In addition to all these achievements, we introduced a new theoretical approach employing AI that enables more precise feature ranking within mesh geometry and topology estimation. The limitation of this approach is the small available training set and the usually large and complex mesh structures, which result in long computation times. However, further improvements of the proposed novel method and its combination with other image-based NN applications are promising.
Our future work envisages applying and testing the method on a broader set of CH artifacts. The expected results will automate the feature ranking process, improve the quantization technique, and strengthen their positive impact on similarity matching as well as on mesh and point-cloud simplification. We also aim to expand our research by introducing different types of 3D models, such as point cloud data, as a common CH digitization format.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/info13030145/s1. These materials include all images, 3D models, and software code.

Appendix A
The case study, on which the present methodology was implemented and tested for the first time on a cultural artifact, was the "Camino degli Angeli" (the "Chimney of Angels"), from which the name of the Sala degli Angeli (Hall of Angels) derives, one of the most famous rooms of the Apartment of Duke Federico, located on the Piano Nobile of the Ducal Palace of Urbino. The rich and imaginative decoration of the room is due to the sculptural skill of Domenico Rosselli (1439-1498): in the large receiving room, he created a wide and refined repertoire of stone and stucco carvings, with which he celebrated the Montefeltro family. In particular, he produced the most richly decorated fireplace in the entire palace. The fireplace owes its name to the procession of "putti" with gilded hair and wings that unfolds on a blue background; above the fireplace is placed, within a circular garland, an eagle holding the Montefeltro coat of arms with its claw.
The photos were taken both with parallel axes (nadiral) and with converging axes (oblique). Considering the characteristics of the camera, the dimensions of the object and the maximum distance at which to position oneself, the shooting plan is summarized in the following table (Table A1). Photos were taken in RAW format, taking care to also acquire an additional picture with the "Color Checker", intended to reproduce the color faithfully. The photos were then processed in Adobe Photoshop Camera RAW and exported to JPEG format. These images were then processed in Agisoft Metashape software according to the established procedures to obtain the point cloud of the chimney. The 3D model of the fireplace of the Sala degli Angeli, also obtained with Metashape, contained a total of 499,984 faces. The steps of this part of the procedure are shown in Figure A1 below.
Figure A1. The illustration of all digitization and mesh constructing steps.
Figure A2 represents additional visualizations of the computed criteria specified in Table 1 and Section 4.2 and their perceptual results. A brief explanation of the rendering color scheme used is given in the figure caption below.
Figure A2. The input models ((a) angelo-1L.obj, (b) angelo-2L.obj, (c) camino degli angeli.obj, (d) angelo-2R.obj, (e) angelo-1R.obj) rendered and texturized by the color map in accordance with all computed criteria from Table 1 and Section 4.2. Blue corresponds to low values and red to high values of the computed criteria.

Appendix B
Within the mesh quantization step, we performed computations using various tolerance factors and quantization levels. The illustrated comparison of the best obtained results is presented in Figure A3. The red, black and magenta markers in all illustrations denote the tolerance levels 1/2, 1/6, and 1/12, respectively.