Article

Cultural Heritage Restoration of a Hemispherical Vault by 3D Modelling and Projection of Video Images with Unknown Parameters and from Unknown Locations

1 School of Civil Engineering, Polytechnic University of Valencia, 46022 Valencia, Spain
2 School of Architecture, Polytechnic University of Valencia, 46022 Valencia, Spain
3 Polytechnic School of Engineering, University of Santiago de Compostela, 27002 Lugo, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(12), 5323; https://doi.org/10.3390/app11125323
Submission received: 18 May 2021 / Revised: 29 May 2021 / Accepted: 2 June 2021 / Published: 8 June 2021

Abstract

Reverse engineering applied to architectural restoration for the reconstruction of structural surfaces depends on metric precision. Sometimes there are elements on these surfaces whose value is even higher than that of the building itself. This is the case for many churches whose ceilings hold pictorial works of art. Reconstruction requires the existence of some identifiable remains and/or a surface geometry that admits mathematical development. In our case, the vault has an irregular hemispherical geometry (with no possible mathematical development), and there are no significant remains of the painting (which was destroyed by a fire). Through the 3D modelling of the irregular vault and of two historic frames taken with a camera of unknown geometry, an inverse methodology is designed to project the original painting without metric deformations. To this end, a new methodology for locating the camera positions is developed. Afterwards, a 3D virtual mathematical model of the complete image on the vault is calculated, and from it, partial 3D virtual images are computed automatically as a function of the variable, unknown positions of the video projectors (distributed along the upper corridor of the apse) that will project them, together forming a visually perfect complete 3D image.

1. Introduction

Classical photogrammetric techniques have been used for reverse engineering in buildings for decades [1], and nowadays, thanks to UAVs, their applications are even more common [2,3]. The mathematical control of photographic images allows the orthorectification and measurement of elements on 2D images, or the volumetric measurement of 3D elements by stereoscopy. These techniques, widely known and described in numerous publications [4,5,6,7], have been complemented for years by laser scanners [8,9,10,11]. While a photograph usually supports the interpretation of any visible element (although in many cases other types of images turn out to be more convenient [12]), the laser contributes the geometry with a very high degree of precision, automatically and in a short time [13,14,15,16,17]. Applying both techniques together allows non-invasive reverse engineering processes in which no contact with the element is necessary [18,19,20,21,22], which is especially relevant in heritage studies [23]. Cultural heritage is one of the fields with the widest range of applications of combined techniques (as many different sciences are brought together in restoration works). Recently, augmented reality applications have attracted great interest as well [24,25].
Among the buildings that concentrate the most architectural conservation and restoration activity, due to their high patrimonial value, are churches [26,27,28] and caves with rock art [29]. Avoiding contact with the object is of great importance when the surface to be restored contains works of art (usually pictorial). In these cases, the work of art depends directly on the geometry of the structural surface on which it lies. For the restoration of paintings, the surface is usually modelled and developed mathematically in 2D (using cartographic projections) so that the restoration can be performed on a plane and then transferred back to the surface of the structure [30,31]. However, there are cases in which this is not possible, mainly because some surfaces cannot be developed mathematically [32,33]. Sometimes this happens because the construction was not well executed (mainly due to the constructive means of the era in question), because the surface is not mathematically developable (mainly irregular spherical surfaces), or for both reasons at once [34]. In these cases, in which the mathematical transformations that allow a two-dimensional development without metric and/or angular deformations cannot be applied, the only possible option is to act directly on the surface.
This study presents the restoration, by video projection, of the paintings on the vault of the church of Santos Juanes in Valencia that were destroyed by a fire. Only two black and white analogue frames taken before the fire are available, and their geometric origin is unknown (no data are available on the camera parameters or the shooting positions). Thanks to the three-dimensional model obtained with the laser scanner, a precise geometry of the surface is available. Given that the ceiling of our case study is an irregular hemispherical vault, it is impossible to develop its surface mathematically in 2D in order to reconstruct the painting and then transfer it to the structure. The solution is to perform the reconstruction directly on the element by projecting images onto its surface (with video projectors) in the original position and without metric deformation. This requires defining the geometry with the greatest possible accuracy, so that partial images can be projected onto the real surface while mathematically controlling the deformations of the projection and the scale of each image. To this end, a complete 3D virtual image of the paintings is first calculated on the mathematical model of this irregular surface (from the original frames), and then partial 3D virtual images (one to be projected by each video projector) are calculated. The designed methodology is independent of the spatial positions of the projectors (their positions need not be known), so that moving any of them does not prevent the formation of the correct image (the virtual image is recalculated automatically for each position). The composition of the partial images projected simultaneously provides the complete three-dimensional image of the original paintings without metric deformations (in position and scale) from any point of observation in the church.

2. Materials and Methods

2.1. 3D Model of the Surface

To obtain the 3D model of the surface of the vault, a Leica RTC360 laser scanner was used. The vault is a quarter sphere with a radius of 8.40 m and a total area of 220 m². The scan was performed from 4 positions, obtaining a complete point cloud of 80,000,000 points (Figure 1).

2.2. Full 3D Model of the Image

There are two photographs of the vault (Figure 2) that contain the original image before the fire (taken with a camera whose parameters are unknown). Since a photograph is a 2D document, the image must be projected onto the 3D model of the surface to obtain a 3D image. For this, each pixel of the image has to be related to its corresponding spatial position on the surface.
First, the spatial coordinates of the shooting points of both photographs must be established. To do this, identifiable points on the surface are located in the photograph (this is only possible on identifiable construction elements: corners and edges). When the selected points lie on straight lines of the ceiling, these lines must also be rectilinear in the image, according to Equation (1):
$$x_i = \frac{aX + bY + cZ + d}{a_2 X + b_2 Y + c_2 Z + d_2}\,; \qquad y_i = \frac{a_1 X + b_1 Y + c_1 Z + d_1}{a_2 X + b_2 Y + c_2 Z + d_2} \tag{1}$$
where
  • a, b, c, d, a1, b1, c1, d1, a2, b2, c2, d2: projective parameters, encoding the Euler rotation angles and the scale factor between object space and image space;
  • (X, Y, Z): coordinates of a point in space;
  • (xi, yi): coordinates of the same point in the image.
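To make the construction of this system concrete, the sketch below (Python with NumPy; the function and variable names are ours, not the authors') rearranges Equation (1) into two linear equations per point by multiplying through by the denominator. Fixing c2 = 1 removes the overall scale, as is done for the first frame in Section 3.2; stacking the rows of all 37 points yields the 74 × 11 system solved later by Equation (10).

```python
import numpy as np

def build_rows(X, Y, Z, xi, yi):
    """Two linear equations for one correspondence between a space
    point (X, Y, Z) and its image coordinates (xi, yi), obtained by
    multiplying Equation (1) through by its denominator with c2 = 1.
    Unknown vector: [a, b, c, d, a1, b1, c1, d1, a2, b2, d2]."""
    # x-equation: aX + bY + cZ + d - xi*(a2*X + b2*Y + d2) = xi*Z
    row_x = [X, Y, Z, 1, 0, 0, 0, 0, -xi * X, -xi * Y, -xi]
    # y-equation: a1X + b1Y + c1Z + d1 - yi*(a2*X + b2*Y + d2) = yi*Z
    row_y = [0, 0, 0, 0, X, Y, Z, 1, -yi * X, -yi * Y, -yi]
    return np.array([row_x, row_y], dtype=float), np.array([xi * Z, yi * Z])
```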
As the images were taken with old cameras that used photographic plates (a support made of a glass sheet covered with a light-sensitive emulsion), they were dimensionally stable and did not warp or distort (especially in large formats for wide-field imaging). Therefore, radial distortion is negligible and does not introduce any error into the process.
While the oblique photograph contains enough identifiable points to determine the coefficients (Figure 2b), in the zenith/nadir photograph it is harder to find valid points, so the fit may be inconsistent. In this situation, additional condition equations must be added to the system. The verticality of the lamp chain, located on one side of the vault, provides a direction equation. The length of its vector in space and in the image, denoted ∂ and ∂′, is given by Equation (2):
$$\partial = \sqrt{\Delta X^2 + \Delta Y^2 + \Delta Z^2}\,; \qquad \partial' = \sqrt{\Delta x_{im}^2 + \Delta y_{im}^2} \tag{2}$$
where
  • (∆X, ∆Y, ∆Z): vector in space;
  • (∆xim, ∆yim): vector in the image.
Equation (2) particularized for a vertical element is Equation (3):
$$\Delta X = 0, \qquad \Delta Y = 0, \qquad \Delta Z = \partial \tag{3}$$
Knowing (∆X, ∆Y, ∆Z) in space, we can calculate the corresponding dimension in the image by Equation (4):
$$x_i + \Delta x_i = \frac{a(X + \Delta X) + b(Y + \Delta Y) + c(Z + \Delta Z) + d}{a_2(X + \Delta X) + b_2(Y + \Delta Y) + c_2(Z + \Delta Z) + d_2}\,; \qquad y_i + \Delta y_i = \frac{a_1(X + \Delta X) + b_1(Y + \Delta Y) + c_1(Z + \Delta Z) + d_1}{a_2(X + \Delta X) + b_2(Y + \Delta Y) + c_2(Z + \Delta Z) + d_2} \tag{4}$$
The image increments ∆xi and ∆yi are obtained from the subtraction of Equations (1) and (4), giving Equations (5) and (6):
$$\Delta x_i = \frac{aX + bY + cZ + d}{a_2 X + b_2 Y + c_2 Z + d_2} - \frac{a(X + \Delta X) + b(Y + \Delta Y) + c(Z + \Delta Z) + d}{a_2(X + \Delta X) + b_2(Y + \Delta Y) + c_2(Z + \Delta Z) + d_2} \tag{5}$$
$$\Delta y_i = \frac{a_1 X + b_1 Y + c_1 Z + d_1}{a_2 X + b_2 Y + c_2 Z + d_2} - \frac{a_1(X + \Delta X) + b_1(Y + \Delta Y) + c_1(Z + \Delta Z) + d_1}{a_2(X + \Delta X) + b_2(Y + \Delta Y) + c_2(Z + \Delta Z) + d_2} \tag{6}$$
The equation that allows the verification of the vanishing direction, which we will call the direction equation, is given by Equation (7):
$$\arctan\!\left(\frac{\Delta x_i}{\Delta y_i}\right) = \arctan\!\left(\frac{\Delta x_{im}}{\Delta y_{im}}\right) \tag{7}$$
This explicitly verifies that the direction of the image of the lamp chain is correct (Equation (7) equates the direction of the vertical line, given by the quotient of the coordinate increments of Equations (5) and (6), in the two spaces). Likewise, further condition equations can be incorporated relating to the size of identifiable elements. For instance, the arch at the springing of the dome, of 1 metre of uniform thickness, varies in width across the image due to the effect of perspective. Following an approach analogous to the previous one, the size of elements in the image can be checked according to Equation (8), which we will call the size equation (Figure 3):
$$\sqrt{\Delta x_i^2 + \Delta y_i^2} = \sqrt{\Delta x_{im}^2 + \Delta y_{im}^2} \tag{8}$$
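The following sketch shows how these direction and size conditions can be evaluated as residuals of Equations (5)–(8) once trial values of the Equation (1) coefficients are available (Python with NumPy; the names and the coefficient ordering are ours):

```python
import numpy as np

def project(coef, P):
    """Eq. (1): image coordinates of the space point P = (X, Y, Z).
    coef = [a, b, c, d, a1, b1, c1, d1, a2, b2, c2, d2]."""
    a, b, c, d, a1, b1, c1, d1, a2, b2, c2, d2 = coef
    X, Y, Z = P
    den = a2 * X + b2 * Y + c2 * Z + d2
    return np.array([(a * X + b * Y + c * Z + d) / den,
                     (a1 * X + b1 * Y + c1 * Z + d1) / den])

def direction_residual(coef, P, dP, v_im):
    """Eq. (7): angle of the projected increment (Eqs. (5)-(6)) minus
    the angle of the vector measured in the image, e.g., the vertical
    lamp chain with dP = (0, 0, length)."""
    d_im = project(coef, P + dP) - project(coef, P)   # (dxi, dyi)
    return np.arctan2(d_im[0], d_im[1]) - np.arctan2(v_im[0], v_im[1])

def size_residual(coef, P, dP, v_im):
    """Eq. (8): length of the projected increment minus the length
    measured in the image, e.g., the 1 m arch thickness."""
    d_im = project(coef, P + dP) - project(coef, P)
    return np.linalg.norm(d_im) - np.linalg.norm(v_im)
```

Both residuals can be appended, suitably weighted, to the point equations of the adjustment.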
Having established the above equations, it is possible to determine the shooting points of both frames to establish the correspondence of all the pixels of the images with their corresponding points on the surface of the vault.

2.3. Calculation of Partial 3D Virtual Images from Unknown Positions

The projection of the complete 3D image is performed using n partial images projected from n video projectors located in the upper corridor of the apse. For this, a preliminary rectangular mesh is projected from each video projector, which establishes a relationship with the corresponding portion of the image. To calculate these partial (virtual) images, we know the value of the pixel of the original image associated with each spatial position through Equation (1). In this way, given a 3D model of spatial coordinates with a pixel associated with each of them (the denser the point cloud used, the higher the resolution of the model), we can generate as many partial images as desired. The problem is the deformation caused in the calculated image by the position from which it is projected (Figure 4).
To solve this problem, a reticulated matrix is projected from each video projector onto the surface of the vault. Let a(i, j), b(i+1, j), c(i, j+1) and d(i+1, j+1) be the four corner points of a unit cell k of the matrix [M] projected from position h (the four interpolation parameters sum to one and lie within the range [0, 1]), and let 1(X1, Y1, Z1), 2(X2, Y2, Z2), 3(X3, Y3, Z3) and 4(X4, Y4, Z4) be their counterparts on the vault, with spatial coordinates known from the triangulated 3D model. The position of any point P within the cell, deformed by the projection, is given by bilinear interpolation (9):
$$P_{ab} = (1-i)\,b + i\,a\,; \qquad P_{cd} = (1-i)\,d + i\,c$$
$$P = (1-j)\,P_{ab} + j\,P_{cd} = (1-j)\,i\,a + (1-j)(1-i)\,b + j\,i\,c + (1-i)\,j\,d \tag{9}$$
The essence of this bilinear combination is to take the same affine combination along two opposite edges; the result is identical whichever pair of opposite edges is chosen. In three-dimensional space, with 4 non-coplanar points, a ruled (bilinear) surface is obtained whose edges are the 4 segments. The barycentric coordinates of 4 points have 3 degrees of freedom because they sum to 1, whereas here there are 2 free parameters. The surface is ruled because it is formed by an infinite succession of straight segments obtained by varying the parameters between 0 and 1 between each pair of opposite edges, forming the surface with high precision. Applied to each position in the space covered by the matrix, the equation provides the coordinates that each pixel of the calculated image must have in order to be projected to its correct position in space.
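A minimal sketch of Equation (9) (Python with NumPy; names are ours): given the four 3D corner points of a projected cell on the vault, it returns the spatial position corresponding to the local parameters (i, j) of a pixel.

```python
import numpy as np

def bilinear_point(a, b, c, d, i, j):
    """Eq. (9): position of a point inside a projected cell.
    a, b, c, d are the 3D corners of the cell on the vault (known
    from the triangulated model); i, j in [0, 1] are the local cell
    parameters. The four weights sum to 1, so the point lies on the
    ruled (bilinear) surface spanned by the corners."""
    a, b, c, d = map(np.asarray, (a, b, c, d))
    p_ab = (1 - i) * b + i * a        # affine combination on edge ab
    p_cd = (1 - i) * d + i * c        # same combination on edge cd
    return (1 - j) * p_ab + j * p_cd  # combination between the edges
```

Running every pixel of a partial image through this interpolation assigns it the vault coordinates at which it must land, which is how the images of Figures 9–11 are recalculated.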

3. Results

3.1. Adjustment of the Point Cloud to the Precision of the Image

The density of the point cloud obtained is much higher than the pixel size projected on the surface, so its density must be reduced [35] without losing precision in the geometry (Table 1). The image format is 14,041 × 10,623 pixels (149 Mpixel) at 1500 dpi resolution (24 bits), and the original point cloud had almost 3,640,000 points. The densest point cloud was used for camera location, while for projection at a distance of 20 m (the visualization distance is even greater), a mesh accuracy of 5 centimetres was required (Figure 5).
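As an illustration of this kind of reduction, the sketch below implements a plain voxel-grid centroid filter (Python with NumPy). The paper relies on the simplification method of [35], so this is an illustrative stand-in, not the authors' algorithm:

```python
import numpy as np

def decimate(points, cell=0.05):
    """Keep one representative point (the centroid) per cubic cell of
    side `cell` metres; 0.05 m corresponds to the 5 cm mesh of Table 1.
    `points` is an (N, 3) array of coordinates in metres."""
    keys = np.floor(points / cell).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    centroids = np.zeros((inverse.max() + 1, 3))
    np.add.at(centroids, inverse, points)          # sum the points per cell
    counts = np.bincount(inverse).astype(float)
    return centroids / counts[:, None]             # centroid per cell
```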

3.2. Three-Dimensional Model of the Complete Image of the Vault

Equation (1) generates a system of equations that allows determining its coefficients and, from them, the shooting position, the shooting direction and the focal length of the camera. The linearized mathematical model is of the type [A]·[x] = [k], whose solution is given by Equation (10):
$$[x] = \left([A]^t [P] [A]\right)^{-1} \left([A]^t [P]\right) [k] \tag{10}$$
where
  • [A]: coefficient matrix;
  • [x]: matrix of unknowns (coefficients of Equation (1));
  • [k]: matrix of independent terms;
  • [P]: matrix of weight of each equation.
Given the heterogeneity in the precision of the image coordinates, a weight matrix [P] must be introduced, since not all the points are defined with the same precision (due to the state of conservation). If it is clear in which pixel a point is located, the error in its coordinates is less than 0.5 pixel, and the weight is therefore 4 (the inverse of its square). In all other cases, where precision is lower, the weight is lower too: if the imprecision is 2 pixels, the error in the coordinates is less than 1 (weight = 1); if the imprecision is 4 pixels, the error in the coordinates is less than 2 (weight = 0.25). Points with an imprecision greater than 4 pixels are not suitable.
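A minimal sketch of the solution of Equation (10) with this weighting (Python with NumPy; names are ours), where each equation is weighted by the inverse square of its expected coordinate error (0.5 pixel → 4, 1 pixel → 1, 2 pixels → 0.25):

```python
import numpy as np

def solve_weighted(A, k, coord_error):
    """Eq. (10): [x] = ([A]^t [P] [A])^-1 ([A]^t [P]) [k].
    A: (m, n) coefficient matrix; k: (m,) independent terms;
    coord_error: (m,) expected coordinate error of each equation,
    in pixels, so that its weight is 1 / error**2."""
    P = np.diag(1.0 / np.asarray(coord_error) ** 2)
    N = A.T @ P @ A                        # normal matrix
    return np.linalg.solve(N, A.T @ P @ k)
```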
When the image is vertical, the relationship between object space and image space is determined by parameters a and b in the numerators, while the denominator contributes the perspective effect. Obviously, where the numerators of both coordinates of Equation (1) equal 0, the result is 0. If the denominator equals 0 as well, we obtain 0/0, an indeterminate form. There is only one point where this can occur, the projection centre (the camera position), and this is how we determine the camera position (with an accuracy of 20 cm).
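This observation translates directly into a 3 × 3 linear solve: the projection centre is the unique point where both numerators and the denominator of Equation (1) vanish simultaneously. A minimal sketch (Python with NumPy, same coefficient ordering as above):

```python
import numpy as np

def camera_centre(coef):
    """The camera position is the only point that Eq. (1) maps to the
    indeterminate form 0/0, i.e., the common root of the two numerators
    and the denominator."""
    a, b, c, d, a1, b1, c1, d1, a2, b2, c2, d2 = coef
    M = np.array([[a,  b,  c ],
                  [a1, b1, c1],
                  [a2, b2, c2]])
    return np.linalg.solve(M, -np.array([d, d1, d2]))
```

For instance, with the coefficients of Table 3 this solve returns approximately (11.84, −1.35, −0.18) m, consistent with the shooting position reported there.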
For the first photograph, we fix c2 = 1, since it was taken with the camera pointing in the direction of the Z axis (the image is practically vertical). We define the coordinate system in space with the X axis in the direction of the central aisle of the church and the Y axis perpendicular to the other two. Finally, we add one direction equation (the lamp chain) and three size equations (arch thickness at its start, centre and end). The complete system comprises 37 points, providing 74 equations and 11 unknowns (Figure 6). To these are added the equations of the points used for the direction and size conditions; the two sets complement each other, providing a consistent system of equations (Table 2).
Applying the same procedure to the second frame, we fix a2 = 1, since the image was taken in the direction of the X axis. The complete system comprises 34 points, providing 68 equations and 11 unknowns. The results are shown in Table 3.
Once the positions of the cameras had been determined, and with the geometry provided by the laser, the relationship between both images and the complete point cloud of the vault was obtained, yielding the complete calculated 3D image (Figure 8).

3.3. Calculation of Partial 3D Images

Once the pixel value associated with each spatial position was calculated, the partial images were calculated from the positions where the video projectors were located. Using Equation (9), all the pixels of each image were recalculated according to the projected mesh (Figure 9, Figure 10 and Figure 11).
For the projection of the images to form a homogeneous image, the design of the optics and the level of focus over a range of depth differences must be taken into account to validate the quality of the complete projected image (Figure 12).

4. Discussion

Photogrammetry combined with laser scanning has once again proved to be a suitable combination in the field of reverse engineering applied to restoration in architecture and cultural heritage. Combining the precise geometry with the original photographic image allows the reconstruction of an image projected on the 3D surface without metric deformations, so that the original paintings can be observed.
Given the irregular spherical geometry of the surface, a mathematical development that would allow the application of a conventional restoration technique is not possible. For this reason, it was necessary to develop an alternative methodology based on recalculated and projected partial images. This solution allows visualization of the vault in its original state (not only from a geometrical point of view, but also with the original colour of the paintings, recovered through digital techniques applied to the images).
The algorithm developed builds a least-squares (MMCC) calculation system with weights assigned according to the type of point selected. Although the designed methodology has worked optimally, the limits of its application have become apparent. In those cases in which the number of identifiable points is insufficient (as in our first image, where the identifiable points are distributed along the edge of the image and lie practically in the same plane), it is necessary to introduce geometry, direction and/or size condition equations (the location of an element within the image does not matter, since radial distortion is negligible). These restrictions give the algorithm a degree of consistency that allows an optimal fit even when the number and distribution of identifiable points is very poor. Elements of mathematically modellable rectilinear or curvilinear geometry are very common in buildings, whereas elements of known dimension are harder to find; in their absence, Equation (8) cannot be used in the adjustment. In cases where the number of identifiable points is high, the method converges optimally without the need for additional restrictions.
It can therefore be affirmed that the methodology should be applicable in the vast majority of buildings (thanks to their elements of regular geometry) and with oblique photographs. Although inclined photographs suffer greater deformation in the projection than zenith or nadir photographs, it is easier to find identifiable points in them (eliminating the need for additional restrictions). In addition, oblique images capture the complete 3D image, whereas a perpendicular image would make it impossible to record the parts farthest from the centre of the vault (in the case of a complete hemisphere, for example). Given that the point cloud provided by the laser guarantees the rectification of the image on the 3D model with total precision (including any geometric irregularity), the oblique image is more convenient than the perpendicular one (contrary to what might be expected, and to what occurs in differential rectification processes on flat surfaces).
The formation of the complete image from different points of view with projective deformation requires the calculation of (virtual) images based on the relationship between each pixel of the original image and its position in space. The procedure poses no problem, and through bilinear interpolation the images are projected, creating a seamless joint image, so that any observer will see the original painting correctly regardless of their viewing position. However, while the methodology has no mathematical limitation, in cases where the projection positions force a very oblique perspective the complete image would not be displayed correctly (in our case there is no problem, since the vault is hemispherical and there are multiple suitable positions along the apse). Likewise, the number of partial images needed is determined by the available projection positions (the closer the projectors are to the element, the more of them are necessary). From this point of view, the methodology can always be applied mathematically to similar situations, while limitations may arise if there are no suitable positions for the projectors (in terms of both distance and angle).

Author Contributions

Conceptualization and methodology, J.H. and J.R.; software, J.H.; validation and data curation, J.L.D. and E.P.; formal analysis and investigation, M.T.M. and P.N.; writing—original draft preparation, J.R.; writing—review and editing, J.H. and J.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Some data used during the study were provided by a third party (photographic images). Direct request for these materials may be made to the provider as indicated in the Acknowledgments. The rest of data, models or code that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The images used belong to the Luis Roig D Alos Archive. They were selected within the framework of the research project "Previous studies for the restoration of wall paintings, sculptures and stucco of the Santos Juanes church".

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jiang, R.; Jáuregui, D.V.; White, K.R. Close-range photogrammetry applications in bridge measurement: Literature review. Measurement 2008, 41, 823–834.
  2. Jeong, G.Y.; Nguyen, T.N.; Tran, D.K.; Hoang, T.B.H. Applying unmanned aerial vehicle photogrammetry for measuring dimension of structural elements in traditional timber building. Measurement 2020, 153, 107386.
  3. Manajitprasert, S.; Tripathi, N.K.; Arunplod, S. Three-Dimensional (3D) Modeling of Cultural Heritage Site Using UAV Imagery: A Case Study of the Pagodas in Wat Maha That, Thailand. Appl. Sci. 2019, 9, 3640.
  4. Zhizhuo, W. Principles of Photogrammetry (with Remote Sensing); Press of Wuhan Technical University of Surveying and Mapping, Publishing House of Surveying and Mapping: Beijing, China, 1990.
  5. Fryer, J.G. Close Range Photogrammetry and Machine Vision; Atkinson, K.B., Ed.; Whittles Publishing: Caithness, UK, 1996.
  6. Kraus, K.; Jansa, J.; Kager, H. Photogrammetry, Volume 2, Advanced Methods and Applications; Dümmler: Bonn, Germany, 1997.
  7. Wolf, P.R.; Dewitt, B.A. Elements of Photogrammetry with Applications in GIS, 3rd ed.; McGraw-Hill: New York, NY, USA, 2000.
  8. Herráez, J.; Navarro, P.; Denia, J.L.; Martín, M.T.; Rodríguez, J. Modeling the thickness of vaults in the church of Santa Maria de Magdalena (Valencia, Spain) with laser scanning techniques. J. Cult. Herit. 2014, 15, 679–686.
  9. Kilambi, S.; Tipton, S.M. Development of an algorithm to measure defect geometry using a 3D laser scanner. Meas. Sci. Technol. 2012, 23, 085604.
  10. González-Aguilera, D.; Gómez-Lahoz, J.; Muñoz-Nieto, Á.; Herrero-Pascual, J. Monitoring the health of an emblematic monument from terrestrial laser scanner. Nondestruct. Test. Eval. 2008, 23, 301–315.
  11. Park, H.S.; Lee, H.M.; Adeli, H.; Lee, I. A New Approach for Health Monitoring of Structures: Terrestrial Laser Scanning. Comput. Civ. Infrastruct. Eng. 2006, 22, 19–30.
  12. Campione, I.; Lucchi, F.; Santopuoli, N.; Seccia, L. 3D Thermal Imaging System with Decoupled Acquisition for Industrial and Cultural Heritage Applications. Appl. Sci. 2020, 10, 828.
  13. Chaudhry, S.; Salido-Monzú, D.; Wieser, A. A Modeling Approach for Predicting the Resolution Capability in Terrestrial Laser Scanning. Remote Sens. 2021, 13, 615.
  14. Martínez, S.; Cuesta, E.; Barreiro, J.; Álvarez, B.J. Analysis of laser scanning and strategies for dimensional and geometrical control. Int. J. Adv. Manuf. Technol. 2010, 46, 621–629.
  15. Lee, I.S.; Lee, J.O.; Park, H.J.; Bae, K.H. Investigations into the influence of object characteristics on the quality of terrestrial laser scanner data. KSCE J. Civ. Eng. 2010, 14, 905–913.
  16. Polo, M.E.; Felicísimo, Á.M.; Villanueva, A.G.; Martinez-del-Pozo, J.Á. Estimating the Uncertainty of Terrestrial Laser Scanner Measurements. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4804–4808.
  17. Gordon, S.J.; Lichti, D.D. Modeling Terrestrial Laser Scanner Data for Precise Structural Deformation Measurement. J. Surv. Eng. 2007, 133, 72–80.
  18. Conesa-García, C.; Puig-Mengual, C.; Riquelme, A.; Tomás, R.; Martínez-Capel, F.; García-Lorenzo, R.; Pastor, J.; Pérez-Cutillas, P.; Gonzalez, M.C. Combining SfM Photogrammetry and Terrestrial Laser Scanning to Assess Event-Scale Sediment Budgets Along a Gravel-Bed Ephemeral Stream. Remote Sens. 2020, 12, 3624.
  19. Al-Manasir, K.; Fraser, C.S. Registration of terrestrial laser scanner data using imagery. Photogramm. Rec. 2006, 21, 255–268.
  20. González-Aguilera, D.; Rodríguez-Gonzálvez, P.; Gómez-Lahoz, J. An automatic procedure for co-registration of terrestrial laser scanners and digital cameras. ISPRS J. Photogramm. Remote Sens. 2009, 64, 308–316.
  21. Guarnieri, A.; Milan, N.; Vettore, A. Monitoring of Complex Structure for Structural Control Using Terrestrial Laser Scanning (TLS) and Photogrammetry. Int. J. Arch. Herit. 2013, 7, 54–67.
  22. Pesci, A.; Bonali, E.; Galli, C.; Boschi, E. Laser scanning and digital imaging for the investigation of an ancient building: Palazzo d'Accursio study case (Bologna, Italy). J. Cult. Herit. 2012, 13, 215–220.
  23. Alshawabkeh, Y. Color and Laser Data as a Complementary Approach for Heritage Documentation. Remote Sens. 2020, 12, 3465.
  24. Luna, U.; Rivero, P.; Vicent, N. Augmented Reality in Heritage Apps: Current Trends in Europe. Appl. Sci. 2019, 9, 2756.
  25. Marto, A.; Gonçalves, A. Mobile AR: User Evaluation in a Cultural Heritage Context. Appl. Sci. 2019, 9, 5454.
  26. Faella, G.; Frunzio, G.; Guadagnuolo, M.; Donadio, A.; Ferri, L. The Church of the Nativity in Bethlehem: Non-destructive tests for the structural knowledge. J. Cult. Herit. 2012, 13, e27–e41.
  27. Baraccani, S.; Silvestri, S.; Gasparini, G.; Palermo, M.; Trombetti, T.; Silvestri, E.; Lancellotta, R.; Capra, A. A Structural Analysis of the Modena Cathedral. Int. J. Arch. Herit. 2015, 10, 235–253.
  28. Sánchez, A.R.; Meli, R.; Chávez, M.M. Structural Monitoring of the Mexico City Cathedral (1990–2014). Int. J. Arch. Herit. 2015, 10, 254–268.
  29. Bayarri, V.; Sebastián, M.A.; Ripoll, S. Hyperspectral Imaging Techniques for the Study, Conservation and Management of Rock Art. Appl. Sci. 2019, 9, 5011.
  30. Rosado, T.; Gil, M.; Caldeira, A.T.; Martins, M.D.R.; Dias, C.B.; Carvalho, L.; Mirão, J.; Candeias, A. Material Characterization and Biodegradation Assessment of Mural Paintings: Renaissance Frescoes from Santo Aleixo Church, Southern Portugal. Int. J. Arch. Herit. 2014, 8, 835–852.
  31. Bianchin, S.; Favaro, M.; Vigato, P.A.; Botticelli, G.; Germani, G.; Botticelli, S. The scientific approach to the restoration and monitoring of mural paintings at S. Girolamo Chapel—SS. Annunziata Church in Florence. J. Cult. Herit. 2009, 10, 379–387.
  32. Camallonga, P.N.; Esteve, P.N. "Trazar una bóveda cónica cuadrada": La trompa volada de Tomás Vicente Tosca. EGE 2020, 12, 4–17.
  33. Lanzara, E.; Samper, A.; Herrera, B. Point Cloud Segmentation and Filtering to Verify the Geometric Genesis of Simple and Composed Vaults. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W15, 645–652.
  34. Esteve, P.N.; Oliver, S.Y.; Boquera, J.H.; Ríos, J.L.D.; Sánchez, M.T.M.; Pereña, J.R. Restoration of paintings on domes with non-developable geometry (Los Santos Juanes Church in Valencia). Int. J. Arch. Herit. 2017, 12, 169–177.
  35. Herráez, J.; Denia, J.; Navarro, P.; Yudici, S.; Martin, M.; Rodriguez, J. Optimal modelling of buildings through simultaneous automatic simplifications of point clouds obtained with a laser scanner. Measurement 2016, 93, 243–251.
Figure 1. (a) Photograph of the altar and the vault above it. (b) Complete three-dimensional point cloud (laser). (c) 3D model (render).
Figure 2. (a) Original photograph (nadir). (b) Original photograph (oblique).
Figure 3. (a) Vertical element (lamp chain). (b) Dimensional element (arch). (c) Point cloud (both).
Figure 4. (a) Matrix portion generated in a projector. (b) Matrix cell projected on the dome.
Figure 5. (a) Complete original cloud formed by 3,640,000 points. (b) Reduced cloud formed by 556,000 points. (c) Detail of the reduced vault (points every 20 cm).
Figure 6. Selection of points. (a) Composition of the first image on the 3D model to identify common points (in blue). (b) Composition of the second image on the 3D model to identify common points (in red).
Figure 7. Shooting position of the oblique frame.
Figure 8. Simulation of the images projected on the vault.
Figure 9. The 4 × 4 mesh projected on the model. Expanded mesh detail.
Figure 10. Image calculated from one of the video projection positions (detail of the mesh portion).
Figure 11. Partial mosaic calculated from the video projection positions.
Figure 12. Complete image over the vault.
Table 1. Simplification of clouds.

Mesh size | Points retained | Points eliminated | % simplification
5 cm | 555,891 | 3,082,893 | 84.72%
10 cm | 150,491 | 3,488,293 | 95.86%
20 cm | 38,671 | 3,600,113 | 98.94%
50 cm | 6210 | 3,632,574 | 99.83%
100 cm | 1553 | 3,637,231 | 99.96%
Table 2. Values resulting from the coefficients (first frame).

Coefficient | Value | Coefficient | Value | Coefficient | Value
a | −338.2226338 | a1 | 11,057.49659 | a2 | −0.995080693
b | 10,817.14366 | b1 | 220.3747217 | b2 | 2.671136229
c | 222.2847508 | c1 | 4712.747525 | c2 | 1.000000000
d | 69,270.99399 | d1 | 173,118.9394 | d2 | 2.001528118
Derived from the calculation, the spatial position of the camera (Figure 6) (X = −15.824 m, Y = −6.913 m, Z = 0.718 m), the principal distance (Fx = 10,730, Fy = 12,070) and the shooting direction (∆X = −0.377, ∆Y = −0.051, ∆Z = 0.925) were determined.
Table 3. Values resulting from the coefficients (second frame).

Coefficient | Value | Coefficient | Value | Coefficient | Value
a | 727.7124709 | a1 | −873.7891925 | a2 | 1.000000000
b | −5596.546082 | b1 | −239.0502191 | b2 | 0.194472120
c | 82.79976057 | c1 | −5606.398259 | c2 | −0.106081323
d | −16,136.59733 | d1 | 9033.674876 | d2 | −11.59965274
Derived from the calculation, the spatial position of the camera (Figure 7) (X = 11.843 m, Y = −1.346 m, Z = −0.177 m), the principal distance (Fx = 5511, Fy = 5545) and the capture direction (∆X = −0.97633131, ∆Y = −0.18986922, ∆Z = 0.103570517) were determined.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

