Cultural Heritage Restoration of a Hemispherical Vault by 3D Modelling and Projection of Video Images with Unknown Parameters and from Unknown Locations

Abstract: Reverse engineering applied to architectural restoration for the reconstruction of structural surfaces depends on metric precision. Sometimes there are elements on these surfaces whose value is even higher than that of the building itself. This is the case for many churches whose ceilings bear pictorial works of art. Reconstruction requires the existence of some identifiable remainder and/or a surface geometry that enables mathematical development. In our case, the vault has an irregular hemispherical geometry (with no possible mathematical development), and there are no significant remains of the painting (which was destroyed by a fire). Through the 3D modelling of the irregular vault and two historic frames taken with a camera of unknown geometry, an inverse methodology is designed to project the original painting without metric deformations. For this, a new methodology to locate the camera positions is developed. Afterwards, a 3D virtual mathematical model of the complete image on the vault is calculated, and from it, partial 3D virtual images are automatically computed depending on the variable, unknown positions of the video cannons (distributed along the upper corridor of the apse) that will project them, visually forming a perfect complete 3D image.


Introduction
Classical photogrammetric techniques have been used for reverse engineering in buildings for decades [1], and currently, due to UAVs, their applications are even more common [2,3]. The mathematical control of photographic images allows the orthorectification and measurement of elements on 2D images, or the volumetric measurement of 3D elements by stereoscopy. These techniques, widely known and described in numerous publications [4][5][6][7], have been complemented for years by laser scanners [8][9][10][11]. Although a photograph usually allows the interpretation of any visible element (there are many cases where other types of images prove more convenient [12]), the laser contributes the geometry with a very high degree of precision, automatically and in a short space of time [13][14][15][16][17]. The application of both techniques together allows for non-invasive reverse engineering processes in which no contact with the element is necessary [18][19][20][21][22], which is especially relevant in heritage studies [23]. Cultural heritage is one of the fields with the widest range of applications of combined techniques (as many different sciences are applied together in restoration works). Recently, augmented reality applications have become of great interest too [24,25].
Among the buildings that encompass the most architectural conservation and restoration activities, due to their high patrimonial value, are churches [26][27][28] and caverns with rock art [29]. Avoiding contact with the object is of great importance when the surface to be restored contains works of art (usually pictorial). In these cases, the work of art depends directly on the geometry of the structural surface on which it is located. For the restoration of paintings, the surface is usually modelled and developed mathematically in 2D (using cartographic projections) to perform the restoration on a plane and then transfer it to the surface of the structure [30,31]. However, there are cases in which this is not possible, mainly because there are surfaces that cannot be developed mathematically [32,33]. Sometimes this happens because the construction has not been well executed (mainly due to the constructive means of the era in question), because the surface is not mathematically developable (mainly irregular spherical surfaces), or both at once [34]. In these cases, in which it is not possible to apply the mathematical transformations that allow a two-dimensional development without metric and/or angular deformations, the only possible option is to act directly on the surface.
This study shows the restoration by video projection of the paintings that were destroyed by a fire on the vault of the church of Santos Juanes in Valencia. For this, there are only two black and white analogue frames taken before the fire, whose geometric origin is unknown (no data are available on the camera parameters or the shooting position). Thanks to the three-dimensional model obtained with the laser scanner, we have a precise geometry of the surface. Given that the ceiling of our case study is an irregular hemispherical vault, it is impossible to mathematically develop its surface in 2D to reconstruct the painting and then transfer it to the structure. The solution will be to perform the reconstruction directly on the element by projecting images on its surface (with video cannons) in the original position and without metric deformation. This implies being able to define the geometry with the greatest accuracy, so that we can project partial images on the real surface while mathematically controlling the deformations of the projection and the scale of each of them. For this, it will be necessary to first calculate a complete 3D virtual image of the paintings on the mathematical model of this irregular surface (from the original frames) and then calculate partial 3D virtual images (which will be projected by each video cannon). The designed methodology is independent of the spatial positions of the cannons (they do not need to be known), so that moving any of them does not prevent the formation of the correct image (the virtual image is recalculated automatically for each position). The composition of the partial images projected simultaneously will provide the complete three-dimensional image of the original paintings without metric deformations (in position and scale) from any point of observation in the church.

3D Model of the Surface
To obtain the 3D model of the surface of the vault, a Leica RTC360 laser scanner was used. The vault is a quarter sphere with a radius of 8.40 m and a total area of 220 m². The scan was performed from 4 positions, obtaining a complete point cloud formed by 80,000,000 points (Figure 1).

Full 3D Model of the Image
There are two photographs of the vault (Figure 2) that contain the original image before the fire (taken with a camera whose parameters are unknown). Since a photograph is a 2D document, it will be necessary to project the image on the 3D model of the surface to obtain a 3D image. For this, it is necessary to establish a relationship of each pixel of the image with its corresponding spatial position on the surface. First, it is necessary to establish the spatial coordinates of the shooting points of both photographs. To do this, identifiable existing points on the surface are located in the photograph (this is only possible on identifiable construction elements: corners and edges). Selecting points on straight lines in the ceiling, these lines must also be rectilinear in the image (1):

x_i = (a1·X + b1·Y + c1·Z + d1) / (a2·X + b2·Y + c2·Z + d2);
y_i = (a3·X + b3·Y + c3·Z + d3) / (a2·X + b2·Y + c2·Z + d2)    (1)

where a, b, c, d: projective parameters including Euler matrix angles and the scale factor between object space and image space; (X, Y, Z): coordinates of a point in space; (x_i, y_i): coordinates of the same point in the image.
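As an illustration of how the coefficients of Equation (1) can be estimated from identifiable points, the projective relation can be linearized into one pair of equations per point and solved by least squares. This is a minimal sketch, not the authors' implementation: the coefficient ordering in the vector `p` and the function names are assumptions, and one denominator coefficient (here c2, as for a near-vertical frame) is fixed to remove the scale ambiguity, leaving 11 unknowns.

```python
import numpy as np

def dlt_fit(XYZ, xy, fixed_c2=1.0):
    """Estimate the 11 projective coefficients of Equation (1) by least squares.

    x_i = (a1 X + b1 Y + c1 Z + d1) / (a2 X + b2 Y + c2 Z + d2)
    y_i = (a3 X + b3 Y + c3 Z + d3) / (a2 X + b2 Y + c2 Z + d2)

    c2 is fixed (here to 1), so the unknown vector is
    p = [a1, b1, c1, d1, a2, b2, d2, a3, b3, c3, d3].
    """
    A, rhs = [], []
    for (X, Y, Z), (x, y) in zip(XYZ, xy):
        # a1X + b1Y + c1Z + d1 - x(a2X + b2Y + d2) = x * c2 * Z
        A.append([X, Y, Z, 1, -x * X, -x * Y, -x, 0, 0, 0, 0])
        rhs.append(x * fixed_c2 * Z)
        # a3X + b3Y + c3Z + d3 - y(a2X + b2Y + d2) = y * c2 * Z
        A.append([0, 0, 0, 0, -y * X, -y * Y, -y, X, Y, Z, 1])
        rhs.append(y * fixed_c2 * Z)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(rhs), rcond=None)
    return p

def dlt_project(p, XYZ, fixed_c2=1.0):
    """Project 3D points into image coordinates with fitted coefficients."""
    a1, b1, c1, d1, a2, b2, d2, a3, b3, c3, d3 = p
    out = []
    for X, Y, Z in XYZ:
        den = a2 * X + b2 * Y + fixed_c2 * Z + d2
        out.append(((a1 * X + b1 * Y + c1 * Z + d1) / den,
                    (a3 * X + b3 * Y + c3 * Z + d3) / den))
    return out
```

Each identifiable point contributes two equations, so at least six well-distributed points are needed to determine the 11 coefficients.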
As the images were taken with old cameras that used a photographic plate (a photographic support made of a glass sheet covered with a light-sensitive emulsion), the supports were dimensionally stable and did not warp or distort (especially in large formats for wide-field imaging). Therefore, radial distortion is negligible and does not introduce any error into the process.
While in oblique photography there are enough identifiable points for the determination of the coefficients (Figure 2b), in zenith/nadir photography it is more difficult to find valid points, so the fit may be inconsistent. In this situation, it is necessary to add more condition equations to the system. The verticality of the lamp chain, located on one side of the vault, provides an equation of direction. The dimension of its vectors in space and in the image, defined by ∂ and ∂', is given by Equation (2). Equation (2) particularized for a vertical element is Equation (3). Knowing (∆X, ∆Y, ∆Z) in space, we can calculate that dimension in the image by Equation (4). The calculation of ∆X_i and ∆Y_i is obtained from the subtraction of Equations (1) and (4). The equation that allows the verification of the vanishing direction, which we will call the direction equation, is given by Equation (7), in which we explicitly verify that the direction of the image of the lamp chain is correct (since Equation (7) defines the direction of the vertical line determined by the quotient between the coordinate increments given by Equations (5) and (6) in the two spaces). Likewise, it is possible to incorporate more condition equations in relation to the size of identifiable elements. In this way, the springing arch of the dome, with a uniform thickness of 1 metre, presents variation in its width in the image due to the effect of perspective. Establishing an approach analogous to the previous one, it is possible to check the size of elements in the image according to Equation (8), an equation that we will call the size equation (Figure 3). Having established the above equations, it is possible to determine the shooting points of both frames and so establish the correspondence of all the pixels of the images with their corresponding points on the surface of the vault.
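The idea behind the direction condition for a vertical element such as the lamp chain can be sketched as follows: project the two endpoints of a short vertical segment (∆X = ∆Y = 0) with the fitted coefficients and compare the direction of its image with the observed direction. This is only a hedged illustration; since Equations (2)-(7) are not reproduced here, the coefficient ordering in `p`, the function names and the angular residual form are assumptions.

```python
import math

def project(p, X, Y, Z, fixed_c2=1.0):
    """Equation (1) with p = [a1, b1, c1, d1, a2, b2, d2, a3, b3, c3, d3]
    (c2 fixed, assumed ordering)."""
    a1, b1, c1, d1, a2, b2, d2, a3, b3, c3, d3 = p
    den = a2 * X + b2 * Y + fixed_c2 * Z + d2
    return ((a1 * X + b1 * Y + c1 * Z + d1) / den,
            (a3 * X + b3 * Y + c3 * Z + d3) / den)

def vertical_direction_residual(p, X, Y, Z, observed_angle, dZ=0.1):
    """Residual of the direction condition for a vertical element at (X, Y, Z).

    Projects a short vertical segment, measures the direction of its image,
    and returns the wrapped difference against the observed direction of the
    lamp chain in the photograph (0 when the condition is satisfied).
    """
    x1, y1 = project(p, X, Y, Z)
    x2, y2 = project(p, X, Y, Z + dZ)
    predicted = math.atan2(y2 - y1, x2 - x1)
    # wrap the angular difference into (-pi, pi]
    return (predicted - observed_angle + math.pi) % (2 * math.pi) - math.pi
```

In an adjustment, this residual (suitably weighted) would be appended to the point equations as an extra condition row.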

Calculation of Partial 3D Virtual Images from Unknown Positions
The projection of the complete 3D image will be performed using n partial images projected from n video cannons located in the upper corridor of the apse. For this, it will be necessary to project from each video cannon a preliminary rectangular mesh that allows establishing a relationship with the corresponding image portion. To calculate these partial (virtual) images, we know the value of each pixel of the original image that is associated with each spatial position (1). In this way, given that we have a 3D model of spatial coordinates with a pixel associated with each of them (the model will have more resolution the denser the cloud of points used), we can generate as many partial images as desired. The problem is the deformation caused in the calculated image by the position from which it is projected (Figure 4). To solve this problem, a reticulated matrix is projected from each video cannon on the surface of the vault. Let a(i, j), b(i+1, j), c(i, j+1) and d(i+1, j+1) be the four corner points of a unit cell k of the matrix [M] projected from h, and let 1(X1, Y1, Z1), 2(X2, Y2, Z2), 3(X3, Y3, Z3) and 4(X4, Y4, Z4) be their counterparts with spatial coordinates on the vault (known from the triangulated 3D model). The position of any point P within the cell, deformed by the projection, is given by bilinear interpolation with local parameters (u, v) in the range [0, 1] (the four weights sum to one):

P_ab = (1 − u)·a + u·b
P_cd = (1 − u)·c + u·d
P = (1 − v)·P_ab + v·P_cd = (1 − u)(1 − v)·a + u(1 − v)·b + (1 − u)v·c + u·v·d    (9)

The essence of this bilinear combination is to take the same affine combination on two opposite sides; the result is identical regardless of which pair of opposite sides is chosen. In a three-dimensional space, with 4 non-coplanar points, a ruled (bilinear) surface is obtained whose edges are the 4 segments.
The barycentric coordinates of 4 points have 3 degrees of freedom because they add up to 1, while in this case there are 2 free parameters. On the other hand, the surface is ruled because it is formed by an infinite succession of straight segments, obtained by varying the parameters between 0 and 1 along each pair of opposite sides, forming the surface with high precision. The equation, applied to each position in the space covered by the matrix, provides the coordinates that each pixel of the calculated image must have in order to be projected in its correct position in space.
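A minimal sketch of the bilinear interpolation of Equation (9), written with local cell parameters u, v ∈ [0, 1] (the weights sum to one, so the interpolated point lies on the ruled surface spanned by the cell corners):

```python
def bilinear(a, b, c, d, u, v):
    """Bilinear interpolation over one mesh cell, as in Equation (9).

    a, b, c, d: 3D coordinates (on the vault, from the triangulated model)
    of the cell corners at (i, j), (i+1, j), (i, j+1), (i+1, j+1).
    u, v: local parameters in [0, 1] across the cell.
    """
    w_a = (1 - u) * (1 - v)
    w_b = u * (1 - v)
    w_c = (1 - u) * v
    w_d = u * v
    # weights w_a + w_b + w_c + w_d == 1 for any (u, v)
    return tuple(w_a * pa + w_b * pb + w_c * pc + w_d * pd
                 for pa, pb, pc, pd in zip(a, b, c, d))
```

At the corners the interpolation reproduces a, b, c and d exactly; in between it sweeps the ruled (bilinear) surface described in the text.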

Adjustment of the Point Cloud to the Precision of the Image
The point cloud obtained has a density much higher than the pixel size projected on the surface, so it is necessary to reduce its density [35] without losing precision in its geometry (Table 1). The image format is 14,041 × 10,623 pixels (149 Mpixel) at 1500 dpi resolution (24 bits), and the original point cloud had almost 3,640,000 points. The point cloud with the highest density will be used for camera location, while at a projection distance of 20 m (the visualization distance is even higher), a mesh accuracy of 5 centimetres was required (Figure 5).
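The density reduction can be sketched as a voxel-grid downsampling in which one representative point (the centroid) is kept per cubic cell of the required mesh accuracy (5 cm at a 20 m projection distance). This is an illustrative approach under that assumption, not necessarily the reduction method of [35]:

```python
import numpy as np

def voxel_downsample(points, cell=0.05):
    """Reduce point-cloud density to a target mesh accuracy (default 5 cm)
    by keeping one representative point (the centroid) per cubic cell.

    points: (N, 3) array-like of XYZ coordinates in metres.
    Returns an (M, 3) array with M <= N.
    """
    pts = np.asarray(points, dtype=float)
    # integer voxel index of every point
    keys = np.floor(pts / cell).astype(np.int64)
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)
    # accumulate the coordinate sums per voxel, then average
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, pts)
    return sums / counts[:, None]
```

The cell size directly trades point count against geometric fidelity: a 5 cm cell guarantees that no surviving point is displaced by more than the required mesh accuracy.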

Three-Dimensional Model of the Complete Image of the Vault
Equation (1) generates a system of equations that allows determining its coefficients and, from them, the shooting position, the shooting direction and the focal length, whose least-squares solution is determined by Equation (10). Given the heterogeneity in the precision of the image coordinates, it is necessary to introduce a weight matrix [P] for the system, since not all the points are defined with the same precision (due to the state of conservation). If it is clear in which pixel a point is located, the error in coordinates is less than 0.5 pixel, and therefore the weight is 4 (the inverse of the square of the error). In all other cases, where precision is lower, the weight is lower too: if the imprecision is 2 pixels, the error in the coordinates is less than 1 pixel (weight = 1); if it is 4 pixels, the error is less than 2 pixels (weight = 0.25). If the precision is worse than 4 pixels, the point is not suitable.
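The weighting rule described above, together with the weighted normal equations p = (AᵀPA)⁻¹AᵀPb with P = diag(w), can be sketched as follows (the function names are illustrative, not from the paper):

```python
import numpy as np

def observation_weight(pixel_precision):
    """Weight of one image-coordinate equation, following the rule above:
    the coordinate error is taken as half the identification precision, and
    the weight is the inverse of its square (1 px -> 4, 2 px -> 1, 4 px -> 0.25).
    Points identified worse than 4 pixels are rejected (None)."""
    if pixel_precision > 4:
        return None  # point not suitable
    err = pixel_precision / 2.0
    return 1.0 / err ** 2

def weighted_lstsq(A, b, w):
    """Solve the weighted normal equations p = (A^T P A)^-1 A^T P b
    with P = diag(w)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    w = np.asarray(w, dtype=float)
    Aw = A * w[:, None]          # P A, row-scaled
    return np.linalg.solve(A.T @ Aw, Aw.T @ b)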
When the image is vertical, the relationship between object space and image space is determined by parameters a and b in the numerator, while the denominator defines the perspective effect. Obviously, where the numerator is 0 in both coordinates of Equation (1), the result is 0. If the denominator is 0 too, we have 0/0, which is indeterminate. There is only one point for which this indeterminacy can never be removed, which is the projection centre (camera position), and that is how we determine the camera position (with an accuracy of 20 cm).
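Determining the camera position as the 0/0 point can be sketched directly: the projection centre is the unique point where both numerators and the denominator of Equation (1) vanish simultaneously, i.e. the intersection of three planes, which reduces to a 3×3 linear solve. The coefficient ordering in `p` (with c2 fixed) is an assumption of this sketch:

```python
import numpy as np

def camera_centre(p, fixed_c2=1.0):
    """Projection centre as the unique point where both numerators and the
    denominator of Equation (1) vanish simultaneously (the 0/0 point).

    p = [a1, b1, c1, d1, a2, b2, d2, a3, b3, c3, d3] (assumed ordering).
    Solves the three planes:
        a1 X + b1 Y + c1 Z = -d1
        a2 X + b2 Y + c2 Z = -d2
        a3 X + b3 Y + c3 Z = -d3
    """
    a1, b1, c1, d1, a2, b2, d2, a3, b3, c3, d3 = p
    M = np.array([[a1, b1, c1],
                  [a2, b2, fixed_c2],
                  [a3, b3, c3]])
    return np.linalg.solve(M, -np.array([d1, d2, d3]))
```

Any point other than the centre makes at least one of the three expressions non-zero, which is why the indeterminacy singles out the camera position.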
For the first photograph, we fix c2 = 1, since it was obtained with the camera pointing in the direction of the Z axis (the image is practically vertical). We define the coordinate system in space with the X axis in the direction of the central aisle of the church and the Y axis perpendicular to the other two. Finally, we add a direction equation (lamp chain) and three size equations (initial, final and centre arch thickness). The complete system consists of 37 points, which provide 74 equations with 11 unknowns (Figure 6). To these are added the equations of the points used to apply the direction and size conditions. Both systems complement each other, providing a consistent system of equations (Table 2).
Applying the same procedure to the second frame, the coefficient a2 = 1, since the image was taken in the direction of the X axis. The complete system of equations consists of 34 points, which provide a system with 68 equations and 11 unknowns. The results are shown in Table 3. Once the positions of the cameras were determined and with the geometry provided by the laser, we obtained the relationship between both images and the complete point cloud of the vault, obtaining the complete calculated 3D image (Figure 8).

Calculation of Partial 3D Images
Once the pixel value associated with each spatial position was calculated, the partial images were calculated from the positions where the video cannons were located. Using Equation (9), all the pixels of each of the images were recalculated according to the projected mesh (Figures 9-11).

For the projection of the images to provide a homogeneous image, it is necessary to take into account the design of the optics and the level of focus across the range of depth differences in order to validate the quality of the complete projected image (Figure 12).

Discussion
Photogrammetry combined with laser scanning has been shown once again to be a suitable combination in the field of reverse engineering applied to restoration in architecture and cultural heritage. The contribution of the precise geometry, together with the original photographic image, allows the reconstruction of an image projected on the 3D surface without metric deformations, so that the original paintings can be observed.
Given the irregular spherical geometry of the surface, it is not possible to perform a mathematical development that would allow the application of a conventional restoration technique. For this reason, it was necessary to develop an alternative methodology based on recalculated and projected partial images. This solution allows visualization of the vault in its original state (not only from a geometrical point of view, but also with the original colour of the paintings, recovered through digital techniques applied to the images).
The algorithm developed consists of the formation of a calculation system solved by least squares (MMCC) with weight assignment depending on the type of point selected. Although the designed methodology has worked optimally, the limitations of its application have been revealed. In those cases in which the number of identifiable points is insufficient (as in our first image, where the identifiable points are distributed along the edge of the image and located practically in the same plane), it is necessary to introduce geometric condition equations of direction and/or size (the location of an element in the image does not matter, since radial distortion is negligible). These restrictions provide a degree of consistency to the algorithm that allows an optimal fit even in cases in which the arrangement and identification of points is very poor. Finding elements of mathematically modellable rectilinear or curvilinear geometry in a building is very common, whereas finding elements of known dimension is not so easy; in such cases, Equation (8) cannot be used in the adjustment. In cases where the number of identifiable points is high, the method converges optimally without the need for additional restrictions.
In this way, it can be affirmed that the methodology should be applicable in the vast majority of buildings (due to their elements of regular geometry) and with oblique photographs. Although inclined photographs have greater deformation in the projection than zenith or nadir photographs, it is easier to find identifiable points in them (eliminating the need for additional restrictions). In addition, oblique images allow us to capture the full 3D image, while a perpendicular image would make it impossible to record the parts farthest from the centre of the vault (in the case of a complete hemisphere, for example). Given that the point cloud provided by the laser guarantees the rectification of the image on the 3D model with total precision (including any geometric irregularity), the oblique image is more convenient than the perpendicular image (contrary to what might be expected and to what occurs in differential rectification processes on flat surfaces).
The formation of the complete image from different points of view with projective deformation forces the calculation of (virtual) images based on the relationship between each pixel of the original image and its position in space. The procedure does not generate any problem, and through bilinear interpolation the images are projected, creating a perfect joint image, so that any observer will see the original painting correctly regardless of their viewing position. However, while there is no mathematical limitation to the methodology, in cases where the projection positions force a very oblique perspective, the complete image would not be visible correctly (in our case there is no problem, since the vault is hemispherical and there are multiple suitable positions along the apse). Likewise, the number of necessary partial images will also be defined by the available projection positions (a greater number of projectors being necessary the closer we are to the element). From this point of view, this methodology can always be applied mathematically to similar situations, while limitations may be found if there are no suitable positions for the cannons (both in distance and in angle).

Data Availability Statement: Some data used during the study were provided by a third party (photographic images). Direct requests for these materials may be made to the provider as indicated in the Acknowledgments. The rest of the data, models or code that support the findings of this study are available from the corresponding author upon reasonable request.