Article

Close-Range Photogrammetry and RTI for 2.5D Documentation of Painted Surfaces: A Tiryns Mural Case Study

by Georgios Tsairis 1,*, Athina Georgia Alexopoulou 2, Nicolaos Zacharias 3 and Ioanna Kakoulli 4

1 Department of Conservation of Works of Art and Antiquities, University of West Attica, 12243 Athens, Greece
2 Laboratory of Conservation—Promotion of Visual Arts, Books and Archival Material (ARTICON Lab), School of Applied Arts and Culture, University of West Attica, 12243 Athens, Greece
3 Laboratory of Archaeometry, University of the Peloponnese, 24100 Kalamata, Greece
4 Materials Science and Engineering, University of California, Los Angeles, CA 90095, USA
* Author to whom correspondence should be addressed.
Coatings 2025, 15(4), 388; https://doi.org/10.3390/coatings15040388
Submission received: 19 February 2025 / Revised: 21 March 2025 / Accepted: 22 March 2025 / Published: 26 March 2025
(This article belongs to the Special Issue Coatings for Cultural Heritage: Cleaning, Protection and Restoration)

Abstract:
Painted surfaces, regardless of their substrate, possess unique elements crucial for their study and interpretation. These elements include geometric characteristics, surface texture, brushwork relief, color layer morphology, and preservation state indicators like overpainting, interventions, cracks, and mechanical deformations. Traditional recording methods such as handwritten or digital descriptions, 2D scale drawings, calipers, rulers, tape measures, sketches, tracings, and conventional or technical photography fall short in capturing the three-dimensional detail necessary for comprehensive analysis. To overcome these limitations, this paper proposes the integration of two digital tools, Close-Range Photogrammetry (SfM-MVS) and Reflectance Transformation Imaging (RTI), which have become accessible with the advancement of computing power. While other 3D imaging tools like laser scanners and structured light systems exist and may be preferred for very specialized applications, such as capturing the texture of the surface with sub-millimeter accuracy, SfM-MVS and RTI offer a cost-efficient and highly accurate alternative, with 3D modeling capabilities and advanced pixel color fidelity, essential for documenting the geometric and color details of painted artifacts. The application of these highly promising methods to the mural paintings from the Palace of Tiryns (Nafplion, Greece) demonstrates their potential, providing significant insights for art historians, researchers, conservators, and curators.

1. Introduction

The study of ancient frescoes presents considerable challenges due to their fragile and sensitive nature, as well as the limitations imposed by their state of preservation and inaccessibility. These factors often restrict direct physical interaction and prevent extensive dimensional measurements, making non-invasive and accurate documentation essential. At the same time, their high cultural and historical value, along with their unique artistic and material characteristics, further emphasize the necessity of their thorough study and long-term preservation.
To address these challenges, 2.5D surface reconstruction has emerged as an indispensable tool in archaeological documentation and conservation. This technique enables the creation of high-fidelity digital replicas that preserve the geometric accuracy, radiometric properties, and surface topography of frescoes. Unlike full three-dimensional (3D) reconstruction, the approach employed here focuses on 2.5D surface reconstruction, capturing the relief of the fresco without generating a full volumetric model. By providing a reliable, measurable, and interactive digital twin of the artwork, 2.5D digital modeling facilitates in-depth study, interpretation, and comparative analysis over time. It also serves as a critical resource for conservators, allowing them to assess deterioration, evaluate past interventions, and guide future conservation strategies with unprecedented precision.
Previous research in the 3D recording and computation of painted surfaces has explored various techniques to overcome these limitations. Methods such as 3D laser scanning and structured light scanning have been applied to capture the intricate surface textures of painted artefacts with sub-millimeter precision [1,2,3]. Although these techniques have produced highly valuable results, particularly in terms of geometric accuracy and surface detail, they are often associated with significant costs and require specialized equipment, potentially limiting their accessibility and widespread application [4].
In response to these limitations, the present paper proposes the integration of two cost-effective, image-based digital tools—Close-Range Photogrammetry using Structure from Motion-Multi-View Stereo (SfM-MVS) and Reflectance Transformation Imaging (RTI)—and outlines the practical workflow steps for both techniques. These methods have become widely available in recent decades due to advancements in computing power and the increasing accessibility of personal computers. They have been selected for their ability to generate highly accurate 3D models and photorealistic textures, coupled with their cost-efficiency [5,6,7,8,9,10,11]. We propose that these tools become fundamental components of any standard workflow for the scientific study of painted artefacts.
SfM photogrammetry is a passive, image-based documentation technique [12,13] (pp. 413–414) that allows the derivation of accurate, metric, and semantic information from a series of digital photographic images taken with off-the-shelf digital photographic equipment but processed in specialist photogrammetry software [14] (p. 15). However, SfM differs fundamentally from conventional photogrammetry, as the geometry of the scene, camera positions, and orientation are solved automatically, without the need to specify a priori a network of targets with known 3D positions. Instead, these are solved simultaneously using a highly redundant, iterative bundle adjustment procedure, based on a database of features automatically extracted from a set of multiple overlapping images [15]. It rigorously turns 2D image data into 3D data (such as digital 3D models), establishing the geometric relationship between the acquired images and the scene as surveyed at the time of the imaging event [16] (p. 65). Since it is a passive technique that emits no light of its own, it relies purely on incident illumination (e.g., the sun or artificial lighting) and thus does not physically harm the object's material [17]. However, it is essential to follow appropriate procedures with special care, as the objects in question are archaeological artefacts, which are often sensitive to prolonged or intense light exposure. Even though the term SfM photogrammetry describes only part of the process of obtaining 3D models from overlapping pictures, it is also widely used to denote the photogrammetric methodology as a whole [18,19].
RTI is a non-invasive, non-contact method rooted in the principles of raking illumination ([20] (p. 116), [21] (pp. 127–129)), which has been used extensively in museums and other heritage contexts. A raking light photograph is made by casting light across the surface of a painting at a very low angle, highlighting any surface texture or irregularities, including incisions, impasto, raised or flaking paint, damage, and deformations of the canvas or panel. However, as photographs are usually taken with lights in only one, or perhaps two, positions, the information obtained depends largely on the choice of lighting position; a photograph with lights designed to highlight a particular area may not reveal interesting features in another part of a painting [22] (p. 1). Polynomial Texture Mapping (PTM), an implementation of RTI discussed below, overcomes this drawback by allowing virtual re-lighting of the subject from any direction; subsequently, the mathematical enhancement of the shape and color attributes of the object's surface reveals information about the topology and reflection of the imaged surface in the form of a surface "normal" [23] for each pixel [24,25,26,27]. This "normal" information indicates the directional vector perpendicular to the subject's surface at each location recorded by the corresponding image pixel. Since each encoded normal corresponds to a point on the object, the whole set provides a complete and accurate "description" of its topography. Consequently, PTMs are 2D images containing true 3D information. This ability to document color and true 3D shape information by using normals is the source of RTI's documentary power [28]. The enhancement functions of RTI reveal surface information that is not readily discernible under direct empirical examination of the physical object. Today's RTI software and related methodologies were developed by an international team of researchers and developers at Cultural Heritage Imaging (CHI) [29].
In this study, SfM-MVS and RTI will be applied to a painted lime plaster fragment from the Palace of Tiryns, room 18 (Small Megaron), dated to the LHIIIB 2 period (late 13th century BC). The fragment, catalogued as MN33460, is housed in the conservation laboratory adjacent to the Archaeological Site of Ancient Tiryns (Nafplion, Peloponnese, Greece) [30,31]. The aim is to produce an accurate 2.5D model and a full-scale ortho-image (1:1 scale), alongside a detailed recording of the surface relief.
Subsequent phases of this research will focus on acquiring comprehensive optical data across various regions of the electromagnetic spectrum, which will be superimposed onto the geometrically accurate digital reference model generated by the SfM-MVS process. This digital substrate will represent a 2.5D model, precisely defined in terms of shape and dimension, free from geometric distortions. The combination of this digital substrate with the complementary data from the RTI application will enable the accurate rendering of the color, texture, surface relief, and pattern of the fresco.
While this research is still ongoing, the present study seeks to demonstrate the potential of these methods and to propose the most effective application procedure to achieve optimal results. Due to the absence of available data from physicochemical analyses of the murals, this paper will not present detailed findings concerning the mural itself. Instead, it will serve as a case study to highlight the proposed methodology for the integration of SfM-MVS and RTI in the documentation and study of painted artefacts.

2. Materials and Methods

In this paper, an integrated methodology is proposed, combining Close-Range Photogrammetry, utilizing Structure-from-Motion (SfM) and Multi-View Stereo (MVS) algorithms, with Reflectance Transformation Imaging (RTI). This dual approach aims to capture and accurately model the three-dimensional geometry of painted surfaces while also gathering high-resolution spatial and color data that are crucial for conservation and analytical purposes. The outlined methodological workflow demonstrates the appropriate application of these techniques to achieve highly detailed and accurate 2.5D models of painted artefacts.
To illustrate the effectiveness of this approach, a Mycenaean wall painting from the Palace of Tiryns, dated to the LHIIIB 2 period (late 13th century BC) [31] (p. 315), [32] (p. xvii), was selected as a case study. (According to [32] (p. xvii), "the initial excavations of Mycenaean palaces were conducted many years ago, when the only chronological label available to archaeologists was simply 'Mycenaean'". Carl Blegen's threefold subdivision of the Late Helladic period [33] (pp. 35 ff. and 120 ff.) later allowed scholars to place all extant palatial structures within the LH III phase. Further refinement by Arne Furumark enabled Blegen to attribute the palace at Pylos specifically to the LH IIIB phase, particularly its middle subphase, shortly after excavation commenced. Currently, there is a scholarly consensus that all remaining Mycenaean palaces date to the LH IIIB period (circa the thirteenth century BCE), a dating convention also adhered to in this study.) This fragmented lime plaster painting (see Figure 1a), originally part of the floor decoration within the small megaron of the palace (see Figure 1b) [34] (pp. 178–79, PLATE I), was detached from Room XVIII, the "women's hall" [34] (pp. 180–81, PLATE II) (see Figure 1c) [32] (p. 47). (Dr. Wilhelm Dörpfeld notes that Homer [35] (Od. IV. 627; XVII. 169; and XXI. 120) describes the use of simply beaten-in clay for the floors of Mycenaean megarons, a feature also observed in earlier phases at Tiryns. However, Dörpfeld points out that "the palace of Tiryns" exhibits a distinctly different type of flooring, specifically a concrete mixture composed of lime and small pebbles or, in some cases, entirely of lime. Within the rooms, these pebbles are generally absent, resulting in smoother floors. Moreover, in several rooms, the mortar has been fashioned into carpet-like patterns with scratched lines. Such features can still be discerned in the men's hall, its vestibule, and the women's hall [34] (pp. 275–76).) The painting is currently preserved as a portable wall painting fragment; it measures approximately 83.0 cm in height, 89.0 cm in width, and 2.5 cm in thickness [31] (p. 315) and is housed in the conservation laboratory adjacent to the Archaeological Site of Ancient Tiryns (Nafplion, Greece). Given the artefact's fragility and cultural significance, all data acquisition was performed in situ within the conservation facility.
The case study artefact exhibits surface irregularities due to the uneven preservation of its fragments, which were uncovered during excavation and consolidated without any attempt to alter their original topography. These preserved variations in surface elevation, along with the artefact's construction technique, make this painting an exemplary candidate for the detailed documentation and analysis afforded by the combined use of SfM photogrammetry and RTI.
In terms of stylistic context, painted floor decorations in the Bronze Age varied significantly between Minoan Crete and the Greek mainland. In Minoan contexts, painted floors in both palatial and non-palatial settings typically featured red pigments on plaster substrates, often arranged in solid color fields, borders for stone slabs, or geometric patterns. Red paint was also used to simulate stone pavements or to create decorative effects within interstitial spaces, occasionally in combination with inlaid materials. Conversely, on the Greek mainland, although Mycenaeans imitated Minoan floor decoration, their decoration was generally limited to the principal rooms of palaces and followed a distinct design approach. The floors were commonly adorned with a grid of intersecting red-painted lines, creating square fields that were colored in blue, yellow, or red, either in isolation or in varied combinations. These fields were further enhanced with linear and marine motifs, often rendered in vibrant colors [36].
Of particular interest in this painted floor fragment is the depiction of tricurved arches, a motif unique to the floors at Tiryns, enclosing a pattern of lines and circles (see Figure 1d) [30] (p. 273); (see Figure 1e) [36] (p. 458). This design portrays a stylized representation of stony or rocky terrain and includes additional elements of “flowers” within each arch as a simplified form of a voluted papyrus blossom in a T shape. The volutes have been reduced to circles on either side of the stem, and the stamen is indicated by a row of dots [37] (p. 229), [32] (p. 48). While the purpose of this floral detail remains unclear—whether it aimed to depict a natural landscape dotted with flowering vegetation or held symbolic significance—it likely functioned as an ornamental enhancement [36].

2.1. Close-Range Photogrammetry (SfM-MVS)

The process involved capturing a series of images from multiple angles, with over 75% overlap, which were then loaded into Agisoft Metashape Pro, a commercial and user-friendly computer vision-based software package, to generate a textured 2.5D model and achieve the ortho-projection of the painting surface. The process is referred to as Structure-from-Motion (SfM) [38] and Multi-View Stereo (MVS) [39] photogrammetry. The majority of studies discussing the application of Structure from Motion (SfM) have focused on the use of Agisoft’s Photoscan, later renamed Metashape [40,41,42,43]. However, Green et al. [44] highlight the advantages of utilizing open-source software.
SfM is considered an extension of stereo vision. Instead of image pairs, the method attempts to reconstruct depth from several unordered 2D images that depict a static scene or object from arbitrary viewpoints. It relies on computer vision algorithms [45] that detect and describe local interest points in each image (i.e., image locations that are in some way exceptional and locally surrounded by distinctive texture) and then match those 2D interest points across the multiple images. This results in a number of potential correspondences (often called tie points). Using this set of correspondences as input, SfM computes the locations of those interest points in a local coordinate frame and produces a sparse 3D point cloud that represents the geometric structure of the scene [46]. Inherently, the scale, position, and orientation of any photogrammetric model are arbitrary [47]. For the outputs to be scaled and their dimensions to be measurable, it is necessary to define the relationship between image and object coordinates. For this purpose, it is essential to place at least three control points (CPs) or ground control points (GCPs; a GCP is a point on the object depicted in the image whose 3D coordinates (X, Y, Z) are known, either in a local or a global reference system) with exactly known distances between them, or certified scale bars, at appropriately selected positions within the photographed scene. The accuracy of the reference information used to scale the photogrammetric model determines the scale accuracy of any data output. Once the photogrammetric model has been reconstructed at 1:1 scale, any measurement or study of the model can be carried out anywhere; physical presence at the location of the original object is not necessary.
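To make the scaling step concrete, here is a minimal sketch (the helper name and values are hypothetical, not part of the authors' Metashape workflow) that recovers real-world units from one certified scale-bar distance:

```python
import numpy as np

def scale_point_cloud(points, marker_a, marker_b, known_distance_mm):
    """Scale an arbitrarily scaled SfM point cloud to real-world units.

    points            : (N, 3) array of model-space coordinates
    marker_a/marker_b : model-space coordinates of two coded targets
    known_distance_mm : certified scale-bar distance between the targets
    """
    model_distance = np.linalg.norm(np.asarray(marker_a) - np.asarray(marker_b))
    scale = known_distance_mm / model_distance  # mm per model unit
    return np.asarray(points) * scale

# Example: two targets measured 250.00 mm apart on a certified scale bar
cloud_mm = scale_point_cloud(points=np.random.rand(1000, 3),
                             marker_a=[0.12, 0.34, 0.00],
                             marker_b=[0.88, 0.34, 0.01],
                             known_distance_mm=250.0)
```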
SfM photogrammetry is a passive, image-based technique, and its results are heavily influenced by the input image data. Because features are identified and matched automatically by computer vision algorithms, the outcome is fundamentally dependent on image quality. Consequently, sensors, settings, and acquisition designs should be chosen with great care [48].

2.1.1. Photogrammetry: Data Acquisition

The present photogrammetric survey was carried out using a Nikon D850, Full-Frame 45.7 MP Single-Lens Reflex Digital camera with a CMOS sensor (8256 × 5504 pixels, 4.35 μm pixel size), manufactured by Nikon Corporation, Tokyo, Japan, and equipped with a Nikon AF Nikkor 50 mm f/1.8 lens.
In total, 468 images (see Figure 2a–c) in RAW (NEF) format at 16-bit depth were captured with the Nikon D850 tethered to a laptop. The mural was placed horizontally on a tray on the floor. The camera was mounted at a distance of 715 mm from the painting on a custom-made positioning device [49] that allowed fully controlled manual movement along the x and y axes, with horizontal markings to ensure precise movement distances while remaining parallel to the painting surface, thus maintaining constant focus throughout the entire shooting process. A total of 139 images were taken at small angles of about +15° and −15° to the horizontal plane (see Figure 2b), and 49 images were taken at a larger angle around the mural (see Figure 2c) to support the creation of the 3D model while keeping focus and object distance constant. (Although parallel image acquisitions are good for human stereoscopic projection and automatic surface reconstruction, combining them with convergent acquisitions often leads to higher accuracy, especially in the z-direction [50] (p. 11), [51] (p. 374).) More specifically, in a coordinate system where the three camera rotation angles are as specified in Figure 3, the images acquired were as follows: 196 images with ω = 0°, φ = 0°, κ = 0°; 84 images with ω = 0°, φ = 0°, κ = 90° (see Figure 2a,b); 75 images with the camera tilted at φ = +15°; and 64 additional images with the camera tilted at φ = −15° toward the center of the mural (see Table 1, Figure 2a–c and Figure 3). Two Speedlight flashes in softboxes, to diffuse the light evenly, were chosen as the light source, oriented downwards at an angle of 45° to the surface of the wall painting, ensuring uniform illumination and avoiding strong shadows. Speedlight flashes were chosen because they provided adequate lighting conditions, ensuring controlled color balance (white balance), a fast shutter speed of 1/100 s, a medium aperture of f/8, and ISO 100. The two flashes were automatically triggered by the camera via a wireless transmitter at ¼ + 0.7 power.
Similar to the process of capturing a relatively flat landscape with a UAV (Unmanned Aerial Vehicle) following a flight path designed to ensure sufficient overlap between neighboring strips, a baseline distance of approximately 100 mm was selected along both the x and y axes between successive photographic positions. This systematic approach, analogous to the methodology used in aerial photogrammetry, ensured significant image overlap, thereby enhancing the spatial consistency of the dataset. Specifically, this configuration achieved an estimated 79% overlap in the longitudinal direction, commonly referred to as endlap, and approximately 69% overlap in the lateral direction, which is commonly referred to as sidelap [52] (pp. 718–719), as calculated using the following equations.
The sidelap q was calculated using the following formula:
$$ q\% = \frac{S_A - A}{S_A} \times 100 = \left(1 - \frac{A}{S_A}\right) \times 100 = \left(1 - \frac{100\ \text{mm}}{318\ \text{mm}}\right) \times 100 \;\Rightarrow\; q \approx 68.5\% \quad \text{(sidelap)} $$
where SA = 318 mm is the lateral image length, and A = 100 mm represents the lateral movement.
Similarly, the endlap p was derived as follows:
$$ p\% = \frac{S_B - B}{S_B} \times 100 = \left(1 - \frac{B}{S_B}\right) \times 100 = \left(1 - \frac{100\ \text{mm}}{473\ \text{mm}}\right) \times 100 \;\Rightarrow\; p \approx 78.8\% \quad \text{(endlap)} $$
where SB = 473 mm is the longitudinal image length, and B = 100 mm represents the longitudinal movement.
Thus, the configuration yielded a sidelap of approximately 68.5% and an endlap of approximately 78.8%, ensuring substantial image alignment for each camera position. These overlap percentages are critical for minimizing gaps and ensuring comprehensive coverage in both axes, directly supporting the integrity of the dataset for further analysis (see Figure 4).
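The overlap arithmetic above is easy to reproduce; the following few lines simply re-implement the two equations with the footprint and baseline values from the text:

```python
def overlap_percent(footprint_mm: float, baseline_mm: float) -> float:
    """Overlap between successive frames, as a percentage of the footprint."""
    return (1 - baseline_mm / footprint_mm) * 100

sidelap = overlap_percent(footprint_mm=318, baseline_mm=100)  # -> 68.55 %
endlap  = overlap_percent(footprint_mm=473, baseline_mm=100)  # -> 78.86 %
print(f"sidelap = {sidelap:.1f} %, endlap = {endlap:.1f} %")
```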
To establish the reference system, 16 coded circular targets (calibrated Agisoft Metashape markers) (see Figure 5 and Figure 6) were placed within the subject's frame. Target placement was carefully chosen to avoid areas of high interest on the painting's surface, instead positioning them on a neutral background and at four fixed locations outside the mural. The external targets consisted of custom-made 90° angle rulers, each equipped with prefabricated pairs of Metashape-coded markers separated by highly accurate, pre-measured distances. These rulers were aligned such that the coded targets (5, 1, 7) (see Figure 5 and Figure 6) formed a straight line and were leveled to lie on the same plane as the mural's painted surface.
By precisely measuring the distance between specific points, referred to as scale bars, Agisoft Metashape transforms the spatial data into real-world dimensions, enabling the creation of a 1:1 scale photogrammetric model. During the Align Photos procedure, the software exploits EXIF metadata from the camera and lens to estimate both interior and exterior orientation parameters, including nonlinear radial distortions [53]. Metashape employs Structure from Motion (SfM) and robust feature-matching algorithms, such as the Scale-Invariant Feature Transform (SIFT) [54], to automate the estimation of camera geometry without manual calibration. This process incorporates self-calibration, which iteratively refines intrinsic parameters—including focal length, principal point, and lens distortions—based on correspondences between overlapping images. The bundle adjustment algorithm further optimizes these parameters in conjunction with camera poses and 3D structure, minimizing reprojection errors and ensuring geometric consistency.
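As a rough illustration of this feature-matching stage (Metashape's internal implementation is proprietary; this sketch uses OpenCV's SIFT instead, with illustrative file names), the following detects keypoints in two overlapping frames and keeps candidate tie points that pass Lowe's ratio test:

```python
import cv2

# Two overlapping frames from the survey (file names illustrative)
img1 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_002.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints + descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to reject ambiguous matches
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
tie_points = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(tie_points)} candidate tie points")
```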
Förstner and Wrobel [52] describe how such methodologies, rooted in photogrammetric principles, enable accurate reconstructions even when using off-the-shelf cameras without pre-calibration. However, while Metashape’s automated approach delivers satisfactory results in many cases, projects requiring high precision may benefit from manual calibration to further reduce uncertainties.
In this study, the camera-lens calibration involved capturing 21 images of a checkerboard calibration pattern displayed on an LCD screen, using the same camera, lens, and focal length settings as for the in situ mural imaging. These images were imported into Metashape as a separate chunk, and calibration data were computed and stored. This calibration was incorporated into the photo alignment workflow, enhancing the overall accuracy of the 3D reconstruction process.
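For readers wishing to reproduce such a pre-calibration outside Metashape, a minimal OpenCV sketch is shown below; the checkerboard dimensions and file paths are assumptions, and the resulting parameters would still need to be converted to Metashape's calibration format:

```python
import glob
import cv2
import numpy as np

CORNERS = (9, 6)  # inner corners of the checkerboard pattern (assumed)
obj_pt = np.zeros((CORNERS[0] * CORNERS[1], 3), np.float32)
obj_pt[:, :2] = np.mgrid[0:CORNERS[0], 0:CORNERS[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calibration/*.jpg"):   # the 21 checkerboard shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, CORNERS, None)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(obj_pt)
        img_points.append(corners)

# Intrinsics (focal length, principal point) and distortion coefficients
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```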
In any digital photogrammetric project, the density and precision of measurements are intrinsically tied to three critical variables: the distance from the camera to the object (H), the sensor’s pixel size or resolution (ps), and the focal length of the lens (c). These variables collectively determine the ground sample distance (GSD), which represents the spatial resolution of the image and corresponds to the distance between the centers of two adjacent pixels on the subject’s surface [47]. GSD is a key metric that influences the accuracy of any photogrammetric analysis. According to [55] (p. 264), the relationship between these variables can be expressed as follows:
$$ \frac{c}{H} = \frac{p_s}{GSD} \;\Rightarrow\; H = \frac{GSD \times c}{p_s} $$
For a camera with a focal length c = 50 mm, pixel size ps = 4.35 μm, and object distance H = 715 mm, the GSD can be calculated as follows:
$$ GSD = \frac{H \times p_s}{c} = \frac{715\ \text{mm} \times 0.00435\ \text{mm}}{50\ \text{mm}} = \frac{3.11025}{50} = 0.062\ \text{mm} $$
Further refinement, using the principal distance c = 53.76 mm derived from the thin-lens relation presented below, yields the following GSD:
$$ GSD = \frac{H \times p_s}{c} = \frac{715\ \text{mm} \times 0.00435\ \text{mm}}{53.76\ \text{mm}} = \frac{3.11025}{53.76} = 0.0578\ \text{mm} $$
Agisoft Metashape (Professional Edition, Version 2.0.1 build 16069, 64-bit) estimates this value at 0.055 mm (see Table 1), indicating that, to achieve a GSD below 0.058 mm, images should be captured at distances of less than 715 mm from the object. Ref. [56] (p. 31) supports this by relating GSD to the sensor's width and footprint, using the following equation:
$$ \frac{p}{GSD} = \frac{f}{H} = \frac{w}{W} \;\Rightarrow\; GSD = \frac{H \times p}{f} $$
where (ps) or (p) represents the photosite pitch, and (H) is the object distance from the optical center. Additionally, based on [57] (p. 46), the principal distance (c) is determined as follows:
$$ \frac{1}{f} = \frac{1}{H} + \frac{1}{c} \;\Rightarrow\; c = \frac{H \times f}{H - f} $$
For H = 715 mm and f = 50 mm, this results in a principal distance of c = 53.76 mm. Notably, c = f only when H→∞, as explained by [58] (p. 168) and [55] (p. 167).
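These relations are straightforward to verify numerically; the short sketch below reproduces the values quoted above from the survey parameters:

```python
def principal_distance(f_mm: float, H_mm: float) -> float:
    """Thin-lens principal distance c = H*f / (H - f)."""
    return H_mm * f_mm / (H_mm - f_mm)

def gsd(H_mm: float, pixel_mm: float, c_mm: float) -> float:
    """Ground sample distance GSD = H * ps / c."""
    return H_mm * pixel_mm / c_mm

H, f, ps = 715.0, 50.0, 0.00435           # survey parameters from the text
c = principal_distance(f, H)              # -> 53.76 mm
print(f"c = {c:.2f} mm")
print(f"GSD (c = f)     : {gsd(H, ps, f):.4f} mm")   # -> 0.0622 mm
print(f"GSD (c = 53.76) : {gsd(H, ps, c):.4f} mm")   # -> 0.0578 mm
```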

2.1.2. Photogrammetry: Data Processing

After the image capture sequence was completed, Adobe Photoshop Lightroom Classic (12.3 Release) was used to adjust the white balance against the X-Rite Colorchecker® Passport Photo color calibration target (relating the recorded colors to the well-defined standards of ICC profiles); the adjustment was then applied to all images via synchronization. Finally, the RAW files were batch-processed into uncompressed DNG format in Adobe Photoshop (24.4.1 Release). (The term "batch" refers to editing an entire group of images rather than one image at a time, accelerating the editing process.)
The next step was to import all images into Agisoft Metashape Pro to generate the 2.5D models (see Table 1). This semi-automated process consisted of seven basic consecutive steps: Align Photos, Build Mesh (3D polygonal model), Build Texture, Build Point Cloud, Build DEM, Build Orthomosaic, and Export Results. A detailed step-by-step guide to the software procedure is presented in the Agisoft Metashape Pro manual. The Build Texture step, during which the color texture map is generated, was performed using 89 images captured parallel to the surface. These images were taken prior to the placement of the coded targets to ensure that the model’s surface remained fully visible without being covered by the targets.
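The same pipeline can also be scripted through Metashape's Python API. The fragment below is an illustrative sketch only: the method names follow the documented Metashape 2.x API (in its usual point-cloud-before-mesh order), while the parameters shown are placeholders rather than the settings used in this study:

```python
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["img_0001.dng", "img_0002.dng"])  # ... all 468 frames

chunk.matchPhotos(downscale=1)    # feature detection and matching
chunk.alignCameras()              # SfM: sparse cloud + camera poses
chunk.buildDepthMaps(downscale=2) # MVS depth estimation
chunk.buildPointCloud()           # dense point cloud
chunk.buildModel()                # polygonal mesh
chunk.buildTexture(texture_size=8192)  # color texture map
chunk.buildDem()                  # 2.5D digital elevation model
chunk.buildOrthomosaic()          # ortho-projection of the surface
doc.save("tiryns_mural.psx")
```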

2.2. Reflectance Transformation Imaging (RTI) Technique

RTI is a computational photographic method [20] (p. 125) that captures the relief of the object's surface through highlights and shadows in situ, taking advantage of the ability to relight the subject interactively and virtually in real time from various angles afterwards, off-site. It describes a suite of technologies and methods for generating surface reflectance information using photometric stereo, i.e., by comparison between images with fixed camera and object locations but varying lighting [23]. RTI refers to a file format [5] in addition to a set of methods. The most common implementations of RTI are the Polynomial Texture Mapping (PTM) and Hemispherical Harmonics (HSH) fitting algorithms. PTM, developed by Tom Malzbender and Dan Gelb at HP Labs [24,59,60], is a mathematical model describing luminance information for each pixel in an image in terms of a function representing the direction of incident illumination. The illumination direction function is approximated as a biquadratic polynomial whose six coefficients are stored along with the color information of each pixel [61]. The Hemispherical Harmonics (HSH) algorithm, by contrast, was developed in 2007–2008 by a research team at the University of California, Santa Cruz, under the supervision of Professor James Davis, in collaboration with Cultural Heritage Imaging (CHI), Inc., San Francisco, CA, USA, with consultation provided by Tom Malzbender [62].
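For reference, the biquadratic PTM model just described expresses the luminance $L$ of each pixel as a function of the projected light direction $(l_u, l_v)$, with the six per-pixel coefficients $a_0, \dots, a_5$ fitted from the image set [24]:

$$ L(u, v; l_u, l_v) = a_0 l_u^2 + a_1 l_v^2 + a_2 l_u l_v + a_3 l_u + a_4 l_v + a_5 $$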
Each RTI resembles a single, two-dimensional (2D) photographic image. Unlike a typical photograph, reflectance information is derived from the three-dimensional (3D) shape of the image subject and encoded in the image per pixel, so that the synthesized RTI image “knows” how light will reflect off the subject. When the RTI is opened in RTIViewer software, each constituent pixel can reflect the software’s interactive “virtual” light from any position selected by the user. This changing interplay of light and shadow in the image discloses fine details of the subject’s 3D surface form [62]. The interactive output is produced from multiple photographs that are taken from one stationary position, while the surface of the subject is illuminated from different raking light positions in each shot. Although this is technically a 2D recording approach, it is often described as 2.5D because of the high-level visual information provided by highlighting and shadowing 3D surfaces. It should be noted that while this procedure provides detailed qualitative surface information, it does not yield metrically accurate 3D data [63].
There are several capture methods that can be utilized to create a Polynomial Texture Map (PTM), each requiring different toolkits and budgets. Due to its relatively low-cost, portable toolkit and flexible recording parameters, the Highlight-RTI (H-RTI) method was selected for this project. H-RTI is a variation of the Reflectance Transformation Imaging (RTI) [5] capture technique, in which the light source is moved manually, and its position is estimated based on reflections detected on a reference sphere placed near the subject. Unlike the classical RTI method, which requires pre-defined and fixed lighting angles, H-RTI offers greater flexibility in data acquisition, making it particularly suitable for in situ applications and the study of objects that cannot be relocated. The H-RTI image capture process allows the recording of light positions as each photograph is taken, obtaining digital image data from which reflectance transformation images (RTIs) can be produced. An RTI not only stores color information for each pixel but also encodes a “normal” value that describes its surface orientation. The processing software computes this value using data from multiple lighting positions relative to the camera, enabling enhanced visualization of surface details.
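To make the fitting step concrete, the sketch below estimates the six PTM coefficients of a single pixel by linear least squares from N luminance samples and their light directions. This is a simplified, per-pixel illustration; production fitters such as RTIBuilder process whole images (and, in this study, the HSH fitter was used rather than PTM):

```python
import numpy as np

def fit_ptm_pixel(light_dirs, luminances):
    """Least-squares fit of the six PTM coefficients for one pixel.

    light_dirs : (N, 2) array of projected light directions (lu, lv)
    luminances : (N,) array of observed luminance values
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, luminances, rcond=None)
    return coeffs  # a0..a5

def relight_pixel(coeffs, lu, lv):
    """Evaluate the fitted biquadratic for a new virtual light direction."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0*lu**2 + a1*lv**2 + a2*lu*lv + a3*lu + a4*lv + a5
```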
A key element of the RTI methodology is the presence of one or (preferably) two reflective black glossy spheres in the frame of the photograph in each shot. The reflection of the light source on the spheres, in each image, enables the processing software to calculate the exact light direction for that image. The size of the spheres depends on the dimensions of the object being photographed and the resolution of the camera. It must be noted that the sphere diameter should be at least 250 pixels wide to be used for RTI calculation. As with the various camera setups, the sphere configuration needs to be adjusted according to the circumstances of the subject and environment. The appropriate position for the spheres can be determined by looking at the camera’s view.
During post-processing, the reflective spheres will be cropped out from the images; this is something to keep in mind when positioning them. They must be close enough to the subject so that the camera can focus on both the spheres and the subject with sufficient Depth of Field (DoF), but far away enough so that they can be cropped out of the image without losing any image data for the subject itself. It is very important to pay extreme attention during the shooting sequence so that the camera, the target object, and the reflective spheres do not move at all—only the light “moves” [64].
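The geometry behind this highlight-based light calibration can be sketched as follows: treating the sphere as a perfect mirror under a distant light, with an orthographic view direction of (0, 0, 1), the light direction is the view vector reflected about the surface normal at the highlight. This is a simplified model of the computation performed by the RTI processing software:

```python
import numpy as np

def light_direction(highlight_xy, center_xy, radius_px):
    """Estimate the incident light direction from a sphere highlight.

    highlight_xy : pixel coordinates of the specular highlight
    center_xy    : pixel coordinates of the sphere center
    radius_px    : sphere radius in pixels
    """
    dx = (highlight_xy[0] - center_xy[0]) / radius_px
    dy = (highlight_xy[1] - center_xy[1]) / radius_px
    nz = np.sqrt(max(0.0, 1.0 - dx*dx - dy*dy))
    n = np.array([dx, dy, nz])         # surface normal at the highlight
    v = np.array([0.0, 0.0, 1.0])      # orthographic view direction
    return 2.0 * np.dot(n, v) * n - v  # mirror reflection of v about n

# A highlight 40 px right of the center of a 125 px-radius sphere:
print(light_direction((540, 400), (500, 400), 125))
```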

2.2.1. RTI: Data Acquisition

In total, 32 lossless RAW (NEF) format photographs were taken with a Nikon Z6 Full-Frame 24.5 MP mirrorless camera (Nikon Corporation, Tokyo, Japan), equipped with a Tokina AT-X PRO SD 16–28 mm f/2.8 (IF) FX lens (Tokina, Tokyo, Japan), shooting at a 28 mm focal length and mounted in a fixed position throughout the process, with the camera sensor parallel to the surface of the painting. The camera was tethered to a laptop via a USB computer-control cable so that it could be remotely triggered from Adobe Photoshop Lightroom Classic (12.3 Release). This layout enabled the camera viewfinder to be displayed remotely in live-view mode on the larger screen of the laptop, which avoided any possible movement of the camera while shooting, provided better control of the photo frame, focus, brightness, and white balance, and allowed the quality of the captured images to be checked as they were saved directly to the laptop's hard drive. A Speedlight flash was employed as the illumination source and was positioned at multiple locations on the surface of an imaginary dome to simulate a three-dimensional lighting arrangement, while the camera remained fixed and stationary throughout the process. Initially, the flash was placed at eight equidistant positions along the perimeter of the dome's base, maintaining a constant distance from the center of the painting surface. These first eight photographs were captured at an elevation angle of 10 degrees above the painting surface. Subsequently, the flash was elevated along the dome's surface in three more successive steps, each aligned along an imaginary radial line extending from the center of the painting surface toward the dome's surface; these positions corresponded to elevation angles increasing in 15-degree increments, from 10 degrees at the base to a maximum of 55 degrees at the highest position. The flash was automatically triggered by the camera using a wireless transmitter. The flash unit exposes the Mycenaean frescoes to relatively minimal levels of ultraviolet light and allows each image to be captured at 1/100 s, f/8, and ISO 100.
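The lighting geometry just described (8 azimuths × 4 elevations = 32 shots) can be expressed compactly. The sketch below generates the nominal flash positions on a dome of radius r, an idealization of the manual flash placement actually used:

```python
import numpy as np

def dome_positions(radius_mm=1000.0,
                   azimuths_deg=range(0, 360, 45),
                   elevations_deg=(10, 25, 40, 55)):
    """Nominal flash positions: 8 azimuths x 4 elevations = 32 shots."""
    positions = []
    for elev in elevations_deg:
        for azim in azimuths_deg:
            e, a = np.radians(elev), np.radians(azim)
            positions.append((radius_mm * np.cos(e) * np.cos(a),
                              radius_mm * np.cos(e) * np.sin(a),
                              radius_mm * np.sin(e)))
    return positions

print(len(dome_positions()))  # -> 32
```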
The quality of the final RTI depends on the quality of the captured images. Although the processing software is relatively robust and photographing in RAW format allows for some post-processing adjustments, it is important to take preliminary test shots with a grey card to select the appropriate exposure (aperture, shutter speed, and light intensity of the light source) by examining the histogram of the captured images. In the era of analogue photography, this step would have required a light meter.
It is also recommended that, before each capture sequence or during the shooting, a shot be taken using the light source at the highest angle (55 degrees), with a color balance card incorporated in the subject's frame [64]. This image will be used later in the post-processing phase to adjust the color of the photographs and ensure accurate color by compensating for the color temperature of the light source [63]. The color balance card used was contained in the X-Rite Colorchecker® Passport [20] (p. 93). Two reflective, black, glossy spheres were positioned within the field of view alongside the mural and remained stationary throughout the acquisition process.

2.2.2. RTI: Data Processing

After the acquisition of the images was complete, Adobe Lightroom was used to make appropriate white balance adjustments based on the Colorchecker® Passport, applied to all images via synchronization. Finally, the RAW files were batch-processed into DNG format and then converted to JPEG (.jpg) in Adobe Photoshop. Tethered shooting and data processing for both techniques, Close-Range Photogrammetry (SfM-MVS) and Reflectance Transformation Imaging (RTI), were performed on a Microsoft Surface Studio laptop (Microsoft, Redmond, WA, USA) with a quad-core 11th Gen Intel Core H35 i7-11370H @ 3.30 GHz (Intel Corporation, Santa Clara, CA, USA), 32 GB of LPDDR4x memory, a 64-bit operating system (Windows 11), and an NVIDIA GeForce RTX 3050 Ti laptop graphics card with 4 GB of GDDR6 memory (NVIDIA Corporation, Santa Clara, CA, USA).
Post-processing of the 32 image captures was carried out with the open-source RTIBuilder software (Version 2.0.2) using the HSH fitter, in order to collate information about the direction of the light source and generate the final RTI file. The generated RTI file was then loaded into the RTIViewer software (Version 1.1), available from the CHI website [62].

3. Results

3.1. Photogrammetry Results

The outputs generated in the photogrammetric survey of the wall paintings are the point cloud, the digital elevation model (DEM), and the orthomosaics.
The point cloud consists of points in three-dimensional space, providing a detailed representation of the geometry of the mural (see Figure 7a,b, Figure 8, and Figure 9).
The produced digital elevation model (DEM) is a 2.5D representation (see Figure 10) of the surface, which uses pixel locations to represent X and Y coordinates and pixel values to represent depth. The color gradient corresponds to measurable altitudinal differences. The DEM can also be visualized in the form of elevation contours (see Figure 11). Metashape also enables the calculation of cross sections, with the cut made at a plane parallel to the z-axis (see Figure 12 and Figure 13).
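Outside Metashape, an exported DEM can be interrogated directly. The sketch below extracts an elevation profile along one row of a DEM GeoTIFF using the rasterio library (the file name is hypothetical):

```python
import numpy as np
import rasterio

with rasterio.open("tiryns_dem.tif") as dem:  # DEM exported from Metashape
    z = dem.read(1)                           # elevation grid (row, col)
    px = dem.transform.a                      # pixel size along x

row = z.shape[0] // 2                         # horizontal cut at mid-height
profile = z[row, :]
x = np.arange(profile.size) * px              # distance along the cut
print(f"relief range along cut: {np.nanmax(profile) - np.nanmin(profile):.2f}")
```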
Orthomosaics are high-resolution photomosaics (see Figure 14) produced from numerous overlapping photographs and the three-dimensional model (see Figure 7a,b, Figure 8, and Figure 9), providing a metric record of the paintings at a single point in time. The most utilized outputs are orthoimages (see Figure 14 and Figure 15a), which are rectified images corrected for most distortions [65].
The orthoimage is an image of the object surface in orthogonal parallel projection, allowing measurement of distances, angles, areas, etc. (see Figure 15a,b). The orthogonal projection reflects a uniform scale in every product point, since no relief displacements appear on the product. The orthoimage is divided into pixels by considering a specific pixel size based on the ground coordinate system. For each one of these pixels, there is a corresponding specific greyscale or RGB value. This value is obtained from the original image corresponding to the appropriate pixel by using one of the resampling methods, namely nearest neighbor, bilinear, and bicubic [55].
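As an illustration of the resampling step, a minimal bilinear sampler is shown below; nearest-neighbor and bicubic resampling follow the same pattern with different weighting:

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample image intensity at fractional pixel coordinates (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    # Weighted average of the four surrounding pixels
    return ((1-dx)*(1-dy)*img[y0, x0]   + dx*(1-dy)*img[y0, x0+1] +
            (1-dx)*dy    *img[y0+1, x0] + dx*dy    *img[y0+1, x0+1])

img = np.arange(16.0).reshape(4, 4)
print(bilinear_sample(img, 1.5, 1.5))  # -> 7.5
```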

3.2. RTI Results

The real power of this technique is the interactive RTIViewer tool (see Figure 16, Figure 17 and Figure 18), which allows the viewer to virtually relight the object's surface from any direction, much like tilting it back and forth to catch the light and shadow that best reveal features of interest. Furthermore, its enhancement functions reveal shallow relief that may not be easily distinguishable under normal lighting conditions or by photogrammetry. However, the results and their interpretation are the subject of an ongoing research project; to reach confident and reliable conclusions, the information obtained from these two methodologies should be considered in combination with other studies. Indicatively, the RTI methodology makes it possible to distinguish the painter's brushstroke as well as the sequence of the color layers, i.e., which of them was applied first and which last. Such information may otherwise only be suspected or assumed, but with RTI, a non-contact, non-invasive technique, it can be verified with confidence.

4. Discussion

This study highlights the role of Close-Range Photogrammetry (SfM-MVS) and Reflectance Transformation Imaging (RTI) as indispensable tools for the recording and documentation of painted works of art. These methods are proposed as complementary due to the unique and synergistic information they provide. Specifically, photogrammetry generates a precise, geometrically accurate 2.5D digital representation of the artwork's surface, capturing its relief with high fidelity. Meanwhile, RTI, through selective illumination, facilitates virtual relighting and reveals subtle surface texture variations by enhancing them mathematically, thus improving their interpretability.
Moreover, the accessibility of these methods, which rely on consumer-grade cameras, underscores their practical applicability for conservators and researchers, even with minimal technical expertise. However, achieving accurate and reliable results requires strict adherence to the methodology outlined in this study. By following the proposed workflow, practitioners can ensure a robust and standardized approach to the documentation of cultural heritage.
Digitally recording a painting with photogrammetry and RTI and creating the 2.5D model offer the unique ability to exploit depth information, significantly enhancing the study process. This approach overcomes limitations related to the object’s preservation status, exposure, storage location, available observation time, or object–observer distance. Regardless of where the physical object is located, its digital “twin” [66] can be accessed and studied from any location, providing the opportunity to examine it closely from a few centimeters away.
A digital replica allows for the observation of an entire scene or a focused area, down to the pixel level, collecting high-resolution spatial and color information. The model can be rotated, zoomed, and measured with absolute precision without physical contact with the original object. Taking advantage of the ability to interactively and virtually relight the subject in real time from various angles in the RTIViewer environment enhances the surface relief, revealing detailed information about construction techniques, color layers, brushstroke textures, and the artist's signature or fingerprint.
These techniques also facilitate the detection of micro-cracks, bulging, flaking, losses, and previous conservation-restoration interventions, which are often not visible to the naked eye. During conservation, they allow for three-dimensional recording of the stages of conservation, damage, or deformation during exhibition or storage, creating a valuable digital record of high geometric and imaging accuracy for future analysis.
This high-accuracy digital record serves as a precisely defined geometric reference for integrating images captured in different wavelengths, drawings, sketches, annotations, and notes. As a geometrically accurate background—free from distortions and with absolute metric precision—it provides a reliable framework upon which all additional data can be corrected and geometrically aligned. This ensures that images from different imaging devices, as well as other forms of documentation, are accurately positioned, enhancing the reliability of study, research, and interpretation. Consequently, this dynamic digital archive strengthens the accuracy and consistency of conservation and scholarly analysis.
In conclusion, the application of these methods is highly promising, and the results justify their proposal as fundamental non-invasive recording tools in the regular workflow of a standardized and integrated scientific methodology for the documentation and study of works of art.
This project is an initial presentation of the proposed methodology for creating a geometrically accurate 3D model, a true digital replica, and a full-scale ortho-image (1:1 scale) of the mural under examination, as well as for recording the surface relief and the morphological and geometric characteristics of the color layers and brushwork using the capabilities provided by Reflectance Transformation Imaging. The resulting orthophoto will serve as the geometrically defined background layer onto which imaging information from the hyperspectral analysis of the drawing technique and pigments will be superimposed in layers.
Next steps involve acquiring optical 2D information across various regions of the electromagnetic spectrum and overlaying these images, with pixel-level accuracy, onto the digital reference background produced by the SfM-MVS application. This substrate will be a 2.5D digital model, precisely defined in shape and dimensions, free of geometric distortions. Combined with complementary RTI images, this model will accurately render the color, texture, surface relief, and pattern of the painting.
However, research is ongoing to fully exploit these techniques. Due to the current lack of physicochemical study data, a detailed presentation of the mural and an extensive interpretation of the information obtained from the application of SfM-MVS and RTI are not included in this project. Instead, the mural serves as a case study for the application of SfM-MVS and RTI.

5. Conclusions

This study demonstrates the effectiveness of Close-Range Photogrammetry (SfM-MVS) and Reflectance Transformation Imaging (RTI) as complementary, non-invasive tools for the recording and documentation of painted artworks. Their integration provides both geometric and textural information, significantly enhancing the accuracy of digital replicas and conservation workflows. These techniques were applied as a case study on a Mycenaean fresco from the Palace of Tiryns, highlighting their capability to digitally document and analyze fragile cultural heritage artifacts with high precision.
Through photogrammetry, a true digital twin of the original artwork is created—a geometrically precise and metrically accurate 2.5D model that faithfully represents the physical object. This digital replica serves as a reliable foundation onto which additional imaging and analytical data, such as RTI-captured surface details, can be overlaid. The synergy of these techniques results in an enriched dataset that enhances study, conservation, and interpretation.
Beyond its documentation value, this approach is an essential non-destructive and non-invasive method for recording objects with high cultural and historical value of a fragile and sensitive nature. By allowing in-depth examination from a computer screen, it overcomes the limitations of in situ observation, such as restricted access, time constraints, and conservation risks. Researchers and conservators can now analyze fine details with unprecedented precision while ensuring the preservation of the original artifact.
Future work will focus on incorporating optical 2D imaging data across multiple wavelengths, aligning them with pixel-level accuracy onto the digital model. Additionally, further research is needed to integrate physicochemical analysis and enhance the interpretative potential of the recorded data. Despite current limitations, this study highlights the value of SfM-MVS and RTI in developing standardized, high-accuracy digital documentation methods for cultural heritage preservation.

Author Contributions

Methodology, G.T.; Investigation, G.T.; Resources, G.T.; Data curation, G.T.; Writing—original draft, G.T.; Writing—review & editing, G.T. and A.G.A.; Supervision, A.G.A., N.Z. and I.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors are grateful to Alkestis Papadimitriou, Director of the Ephorate of Antiquities of Argolida, for permission to study the floor fragment with painted decoration from the Palace of Tiryns, as well as for providing every assistance at the laboratories of the Ephorate in order to carry out this study. We also thank Aristides Stasinakis and Aristotelis Petrouleas for entrusting us with some of the required photographic equipment, and Metrica S.A. for facilitating the supply of the commercial software package Agisoft Metashape Pro. Furthermore, the article processing charges (APC) were funded by the Special Account for Research Grants (ELKE) of the University of West Attica.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Koutsoudis, A.; Vidmar, B.; Ioannakis, G.; Arnaoutoglou, F.; Pavlidis, G.; Chamzas, C. Multi-image 3D reconstruction data evaluation. J. Cult. Herit. 2014, 15, 73–79. [Google Scholar]
  2. Bianconi, F.; Catalucci, S.; Filippucci, M.; Marsili, R.; Moretti, M.; Rossi, G.; Speranzini, E. Comparison between Two Non-Contact Techniques for Art Digitalization. J. Phys. Conf. Ser. 2017, 882, 012005. [Google Scholar]
3. Adamopoulos, E.; Rinaudo, F.; Ardissono, L. A critical comparison of 3D digitization techniques for heritage objects. ISPRS Int. J. Geo-Inf. 2020, 10, 10. [Google Scholar]
  4. Abate, D.; Menna, F.; Remondino, F.; Gattari, M. 3D painting documentation: Evaluation of Conservation Conditions with 3D Imaging and Ranging Techniques. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 40, 1–8. [Google Scholar]
  5. Mudge, M.; Malzbender, T.; Schroer, C.; Lum, M. New Reflection Transformation Imaging Methods for Rock Art and Multiple-Viewpoint Display. In Proceedings of the 7th International Symposium on Virtual Reality, Archaeology and Cultural Heritage, VAST, Nicosia, Cyprus, 30 October–4 November 2006; Volume 6, pp. 195–202. [Google Scholar]
  6. Skarlatos, D.; Kiparissi, S. Comparison of Laser Scanning, Photogrammetry and SFM-MVS Pipeline Applied in Structures and Artificial Surfaces. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 3, 299–304. [Google Scholar]
  7. Miles, J.; Pitts, M.; Pagi, H.; Earl, G. Photogrammetry and RTI Survey of Hoa Hakananai’a Easter Island Statue. In Papers from the 41st Conference on Computer Applications and Quantitative Methods in Archaeology; Amsterdam University Press: Amsterdam, The Netherlands, 2013; pp. 144–156. [Google Scholar]
  8. Porter, S.T.; Huber, N.; Hoyer, C.; Floss, H. Portable and low-cost solutions to the imaging of Paleolithic art objects: A comparison of photogrammetry and reflectance transformation imaging. J. Archaeol. Sci. Rep. 2016, 10, 859–863. [Google Scholar]
  9. Altaratz, D.; Caine, M.; Maggen, M. Combining RTI & SFM. A Multi-Faceted Approach to Inscription Analysis. In Proceedings of the Electronic Imaging and the Visual Arts Florence, Florence, Italy, 8–9 May 2019. [Google Scholar]
  10. Kotoula, E.; Robinson, D.W.; Gandy, D.; Jolie, E.A. Computational Photography, 3-D Modeling, and Online Publication of Basketry for Cache Cave, California. Adv. Archaeol. Pract. 2019, 7, 366–381. [Google Scholar]
  11. Verhoeven, G.; Santner, M.; Trinks, I. From 2D (to 3D) to 2.5 D: Not All Gridded Digital Surfaces are Created Equally. In Proceedings of the 28th CIPA Symposium “Great Learning & Digital Emotion”, Beijing, China, 28 August–1 September 2021; Volume 8, pp. 171–178. [Google Scholar]
  12. Kraus, K. Photogrammetry: Geometry from Images and Laser Scans, 2nd ed.; Walter de Gruyter: Berlin, Germany, 2007. [Google Scholar]
  13. Remondino, F.; El-Hakim, S. Image-based 3D modelling: A review. Photogramm. Rec. 2006, 21, 269–291. [Google Scholar]
  14. Kelley, K.; Wood, R.K.L. Digital Imaging of Artefacts: Developments in Methods and Aims; Archaeopress Publishing Ltd.: Bicester, UK, 2018. [Google Scholar]
  15. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. ‘Structure-from-Motion’photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar]
  16. Remondino, F.; Campana, S. 3D Recording and Modelling in Archaeology and Cultural Heritage Theory and Best Practices; Archaeopress Publishing Ltd.: Bicester, UK, 2014; pp. 65–73. [Google Scholar]
  17. Fuhrmann, S.; Langguth, F.; Goesele, M. MVE-a multi-view reconstruction environment. GCH 2014, 3, 4. [Google Scholar]
  18. Iglhaut, J.; Cabo, C.; Puliti, S.; Piermattei, L.; O’Connor, J.; Rosette, J. Structure from motion photogrammetry in forestry: A review. Curr. For. Rep. 2019, 5, 155–168. [Google Scholar]
19. Solem, D.E.; Nau, E. Two new ways of documenting miniature incisions using a combination of image-based modelling and reflectance transformation imaging. Remote Sens. 2020, 12, 1626. [Google Scholar] [CrossRef]
  20. Frey, F.S.; Warda, J.; Heller, D.; Kushel, D.; Vitale, T.; Weaver, G. The AIC Guide to Digital Photography and Conservation Documentation; American Institute for Conservation of Historic and Artistic Works: Washington, DC, USA, 2017. [Google Scholar]
  21. Alexopoulou-Agoranou, A.; Chrysoulakis, Y. Sciences and Artworks; Gonis, N., Ed.; Gonis: Athens, Greece, 1993. [Google Scholar]
  22. Padfield, J.; Saunders, D.; Malzbender, T. Polynomial texture mapping: A new tool for examining the surface of paintings. ICOM Comm. Conserv. 2005, 1, 504–510. [Google Scholar]
  23. Woodham, R.J. Photometric method for determining surface orientation from multiple images. Opt. Eng. 1980, 19, 139–144. [Google Scholar] [CrossRef]
  24. Malzbender, T.; Gelb, D.; Wolters, H. Polynomial texture maps. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, ACM, Los Angeles, CA, USA, 12–17 August 2001; pp. 519–528. [Google Scholar] [CrossRef]
  25. Frank, E. Documenting Archaeological Textiles with Reflectance Transformation Imaging (RTI). Archaeol. Text. Rev. 2014, 56, 3–13. [Google Scholar]
  26. MacDonald, L.; Robson, S. Polynomial texture mapping and 3d representations. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38, 422–427. [Google Scholar]
27. Schädel, M.; Yavorskaya, M.; Beutel, R. The earliest beetle †Coleopsis archaica (Insecta: Coleoptera): Morphological re-evaluation using Reflectance Transformation Imaging (RTI) and phylogenetic assessment. Arthropod Syst. Phylogeny 2022, 80, 495–510. [Google Scholar]
  28. Happa, J.; Mudge, M.; Debattista, K.; Artusi, A.; Gonçalves, A.; Chalmers, A. Illuminating the past: State of the art. Virtual Real. 2010, 14, 155–182. [Google Scholar]
29. Cultural Heritage Imaging (CHI). 2002–2025. Available online: https://culturalheritageimaging.org/ (accessed on 16 January 2022).
30. Rodenwaldt, G.; Hackl, R.; Heaton, N. Tiryns: Die Ergebnisse der Ausgrabungen des Instituts, Band 2: Die Fresken des Palastes; Eleutheroudakis and Barth: Athens, Greece, 1912. [Google Scholar]
  31. Thaler, U. Mykene—Die Sagenhafte Welt des Agamemnon; Wbg Philipp von Zabern in Wissenschaftliche Buchgesellschaft (WBG): Darmstadt, Germany, 2018; p. 391. [Google Scholar]
  32. Hirsch, E.S. Painted and Decorated Floors on the Greek Mainland and Crete in the Bronze Age. 1974. Available online: https://orb.binghamton.edu/dissertation_and_theses/262/ (accessed on 2 October 2022).
  33. Blegen, C.W. Korakou: A Prehistoric Settlement Near Corinth; American School of Classical Studies at Athens: Boston, MA, USA, 1921. [Google Scholar]
34. Sturgis, R.; Schliemann, H. Tiryns: The Prehistoric Palace of the Kings of Tiryns, the Results of the Latest Excavations; Harper & Brothers: New York, NY, USA, 1886; Volume 2, p. 75. [Google Scholar]
  35. Homerus. Homēri Odyssea: Scholarum in Usum; Cauer, P., Ed.; G. Freytag: Lipsiae, Germany, 1887. [Google Scholar]
  36. Hirsch, E.S. Another look at Minoan and Mycenaean interrelationships in floor decoration. Am. J. Archaeol. 1980, 84, 453–462. [Google Scholar]
  37. Hackl, R. Die Fußboden. In Tiryns: Die Ergebnisse der Ausgrabungen des Instituts (Band 2): Die Fresken des Palastes; Rodenwaldt, G., Ed.; Eleutheroudakis and Barth: Athens, Greece, 1912; pp. 222–237. [Google Scholar]
38. Ullman, S. The interpretation of structure from motion. Proc. R. Soc. Lond. B Biol. Sci. 1979, 203, 405–426. [Google Scholar]
  39. Furukawa, Y.; Hernández, C. Multi-view stereo: A tutorial. Found. Trends® Comput. Graph. Vis. 2015, 9, 1–148. [Google Scholar] [CrossRef]
  40. Verhoeven, G. Taking computer vision aloft–archaeological three-dimensional reconstructions from aerial photographs with photoscan. Archaeol. Prospect. 2011, 18, 67–73. [Google Scholar] [CrossRef]
  41. Verhoeven, G.; Doneus, M.; Briese, C.; Vermeulen, F. Mapping by matching: A computer vision-based approach to fast and accurate georeferencing of archaeological aerial photographs. J. Archaeol. Sci. 2012, 39, 2060–2070. [Google Scholar] [CrossRef]
42. De Reu, J.; De Clercq, W.; Sergant, J.; Deconynck, J.; Laloo, P. Orthophoto mapping and digital surface modeling for archaeological excavations: An image-based 3D modeling approach. In 2013 Digital Heritage International Congress (DigitalHeritage); IEEE: Piscataway, NJ, USA, 2013. [Google Scholar]
43. Olson, B.R.; Placchetti, R.A.; Quartermaine, J.; Killebrew, A.E. The Tel Akko Total Archaeology Project (Akko, Israel): Assessing the suitability of multi-scale 3D field recording in archaeology. J. Field Archaeol. 2013, 38, 244–262. [Google Scholar] [CrossRef]
  44. Green, S.; Bevan, A.; Shapland, M. A comparative assessment of structure from motion methods for archaeological research. J. Archaeol. Sci. 2014, 46, 173–181. [Google Scholar] [CrossRef]
  45. Szeliski, R. Computer Vision: Algorithms and Applications; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  46. Verhoeven, G.; Doneus, N.; Doneus, M.; Štuhec, S. From Pixel to Mesh: Accurate and Straightforward 3D Documentation of Cultural Heritage from the Cres/Lošinj Archipelago. In Istraživanja na otocima; Hrvatsko arheološko društvo: Lošinjski muzej, Croatia, 2015; Volume 30, pp. 165–176. [Google Scholar]
  47. Barnes, A. Digital Photogrammetry. Encycl. Archaeol. Sci. 2018, 1–4. [Google Scholar]
  48. Georgopoulos, A. Photogrammetric automation: Is it worth? Mediterr. Archaeol. Archaeom. 2016, 16, 11. [Google Scholar]
  49. Tsairis, G. Development of a Methodology for the Digital Documentation of Fragile Paintings: Utilizing a Low-Cost, Custom-made, Portable Construction for Various Applications in Cultural Heritage Management. In Book of Abstracts 5th Panhellenic Conference on Cultural Heritage Digitization—EUROMED; EU: Larissa, Greece, 2024. [Google Scholar]
  50. Linder, W. Digital Photogrammetry; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  51. Luhmann, T.; Robson, S.; Kyle, S.; Boehm, J. Close-Range Photogrammetry and 3D Imaging; Walter de Gruyter GmbH & Co KG: Berlin, Germany, 2023. [Google Scholar]
  52. Förstner, W.; Wrobel, B.P. Photogrammetric Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
53. Agisoft LLC. Agisoft Metashape User Manual: Professional Edition, Version 2.0; Agisoft LLC: St. Petersburg, Russia, 2023. [Google Scholar]
  54. Lowe, D.G. Object recognition from local scale-invariant features. Proc. Seventh IEEE Int. Conf. Comput. Vis. 1999, 2, 1150–1157. [Google Scholar]
  55. Stylianidis, E.; Remondino, F. 3D Recording, Documentation and Management of Cultural Heritage; Whittles Publishing: Dunbeath, UK, 2016. [Google Scholar]
  56. Verhoeven, G. Resolving some spatial resolution issues: Part 1: Between line pairs and sampling distance. AARGnews 2018, 25–34. [Google Scholar]
  57. Patias, P.; Karras, G. Modern Photogrammetric Practices in Architecture and Archaeology Applications; Diptycho Publications: Thessaloniki, Greece, 1995. [Google Scholar]
  58. Hecht, E. Optics, 5th ed.; Pearson: Boston, MA, USA, 2017; Available online: https://emineter.wordpress.com/wp-content/uploads/2020/04/hecht-optics-5ed.pdf (accessed on 9 May 2023).
  59. Earl, G.; Beale, G.; Martinez, K.; Pagi, H. Polynomial texture mapping and related imaging technologies for the recording, analysis and presentation of archaeological materials. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38, 218–223. [Google Scholar]
  60. Malzbender, T.; Gelb, D.; Wolters, H.; Zuckerman, B. Enhancement of Shape Perception by Surface Reflectance Transformation. 2004. Available online: https://www.researchgate.net/publication/220839189_Enhancement_of_Shape_Perception_by_Surface_Reflectance_Transformation (accessed on 3 October 2022).
  61. Zányi, E.; Schroer, C.; Mudge, M.; Chalmers, A. Lighting and Byzantine Glass Tesserae. In Proceedings of the EVA London Conference, London, UK, 11–13 July 2007. [Google Scholar]
62. Cultural Heritage Imaging. Reflectance Transformation Imaging (RTI). 2002–2025. Available online: https://culturalheritageimaging.org/Technologies/RTI/ (accessed on 16 January 2022).
63. Historic England. Multi-Light Imaging for Cultural Heritage; Historic England: Swindon, UK, 2018. [Google Scholar]
64. Cultural Heritage Imaging. Reflectance Transformation Imaging: Guide to Highlight Image Capture, v2.0; Cultural Heritage Imaging: San Francisco, CA, USA, 2013.
  65. Granshaw, S.I. Photogrammetric terminology: Fourth edition. Photogramm. Rec. 2020, 35, 143–288. [Google Scholar] [CrossRef]
  66. Grieves, M. Digital Twin: Manufacturing Excellence Through Virtual Factory Replication. White Paper 2014, 1, 1–7. [Google Scholar]
Figure 1. (a) The painted floor fragment from room XVIII (Thaler 2018, 315 [31]). (b) Plan of Tiryns. Room XVIII is delineated with red parallel line hatching (Schliemann 1885, plate I, modified by G. Tsairis [34]). (c) Floor plan of room XVIII. The pattern on the squares is a tricurved arch enclosing a design of lines and circles, identified as flowers on a rock terrain or stone (Hirsch 1974, 47 [32]). (d) Reconstruction of the linear patterns from the squares of the Mycenaean floor (Rodenwaldt 1912, 273 [30]). (e) A generalized drawing of the linear patterns of tricurved arches (Hirsch 1980, 458 [36]).
Figure 2. (a) The camera positions during the photogrammetric survey are illustrated in blue. A total of 468 images were captured in RAW (NEF) format using the Nikon D850 camera. (b) A subset of 139 images was acquired at a slight inclination of 15° and −15° relative to the horizontal plane, which is parallel to the surface of the painting. (c) Additionally, 49 images were captured from a steeper inclination around the mural while maintaining a constant focusing distance. The two different shades of blue used to represent the camera positions reflect the significant variation in the camera’s inclination during the respective image acquisitions.
Figure 3. The rotation angles (φ, ω, κ) representing the camera’s tilt. The camera movement followed the horizontal plane defined by the x and y coordinates. Illustration by G. Tsairis. Camera image source: Nikon Z6.
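The three angles compose into a single camera rotation matrix. As a worked illustration, a minimal numpy sketch follows; the axis order used here (ω about X, φ about Y, κ about Z) is one common photogrammetric convention, and different software packages may adopt a different order or sign convention.

```python
import numpy as np

def rotation_matrix(omega_deg: float, phi_deg: float, kappa_deg: float) -> np.ndarray:
    """Compose a camera rotation from omega (X), phi (Y), kappa (Z), in degrees."""
    w, p, k = np.radians([omega_deg, phi_deg, kappa_deg])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(w), -np.sin(w)],
                   [0, np.sin(w),  np.cos(w)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [ 0,         1, 0        ],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(k), -np.sin(k), 0],
                   [np.sin(k),  np.cos(k), 0],
                   [0,          0,         1]])
    return Rz @ Ry @ Rx

# Example: the "Tilt 15°" stations of Table 1 (ω = 0°, φ = 15°, κ = 90°)
print(rotation_matrix(0, 15, 90))
```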
Figure 4. The black dots indicate the positions of the camera, while the different colors represent the number of overlapping images, with the dark blue color (marked as >9) indicating areas where there is overlap from more than nine images. The black outline delineates the surface of the mural. More overlap results in greater accuracy in the estimated measurements.
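The caption's point that more overlap yields greater accuracy can be made concrete with the standard normal-case stereo error model, σ_Z ≈ (Z²/(c·B))·σ_image, together with the rule of thumb that averaging n independent rays reduces random error by roughly √n. The baseline and image-measurement precision below are illustrative assumptions, not values measured in this survey.

```python
import numpy as np

Z = 0.715                  # camera-to-surface distance, m (Table 1 "altitude")
c = 0.050                  # nominal principal distance, m (50 mm lens)
B = 0.10                   # assumed baseline between neighbouring stations, m
sigma_img = 0.5 * 4.35e-6  # assumed 0.5-pixel image measurement error, m

sigma_z = (Z**2 / (c * B)) * sigma_img  # single-pair depth precision
for n in (2, 4, 9):                     # rays observing the same surface point
    # crude 1/sqrt(n) averaging assumption for independent observations
    print(f"{n} overlapping rays: ~{sigma_z / np.sqrt(n) * 1e3:.3f} mm")
```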
Figure 5. Screenshot from Agisoft Metashape software, Professional Edition, Version 2.0.1 build 16069 (64-bit). Eight calibrated coded targets (Agisoft markers) were positioned on the surface of the mural. Beyond the mural, four custom-designed scale bars were constructed using 90° angle rulers and four prefabricated coded pairs of Metashape targets (1–8) with precisely pre-measured distances. These scale bars were arranged such that the rulers on each side formed a straight line and were leveled on a plane defined by the coded targets (5, 1, 7) and the mural surface.
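This marker-and-scale-bar workflow can also be scripted. The sketch below uses the Agisoft Metashape Python API to detect coded targets and attach a measured distance; the project path, marker indices, and distance are placeholders, and exact API names can vary between Metashape versions.

```python
import Metashape

doc = Metashape.Document()
doc.open("tiryns.psx")  # hypothetical project file
chunk = doc.chunk

# Detect the printed 12-bit circular coded targets on and around the mural
chunk.detectMarkers(target_type=Metashape.TargetType.CircularTarget12bit)

# Pair two detected targets into a scale bar and assign the taped distance
scalebar = chunk.addScalebar(chunk.markers[0], chunk.markers[1])
scalebar.reference.distance = 0.500  # metres; placeholder for the measured value

chunk.updateTransform()  # re-scale the reconstruction
doc.save()
```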
Figure 6. A line laser projecting a linear beam to facilitate the precise alignment of rulers on either side of the mural, ensuring the formation of a straight reference line.
Figure 7. The dense point cloud of the mural, generated through photogrammetric processing and visualized within the Agisoft Metashape software environment, with (a) and without (b) the application of texture.
Figure 8. From top to bottom: (a) wireframe (triangulated mesh) of the surface, (b) 2.5D solid without texture, and (c) 2.5D model with texture. (d–f): Detailed views of (a–c), respectively.
Figure 9. Detailed view of the dense point cloud corresponding to the area depicted in Figure 8d–f. The point cloud consists of points in a three-dimensional space, each carrying color information derived from the source images, providing a high-fidelity representation of the mural’s geometry and chromatic characteristics.
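A dense cloud exported from the project can be inspected programmatically as exactly the coordinate-plus-colour structure the caption describes. A minimal sketch with open3d, assuming a hypothetical PLY export:

```python
import numpy as np
import open3d as o3d

# Load the dense point cloud exported from the project (hypothetical file name)
pcd = o3d.io.read_point_cloud("tiryns_dense_cloud.ply")

xyz = np.asarray(pcd.points)  # N x 3 coordinates in the scaled local system, m
rgb = np.asarray(pcd.colors)  # N x 3 per-point colour (0-1), sampled from photos

print(f"{len(xyz)} points; z range {xyz[:, 2].min():.4f}..{xyz[:, 2].max():.4f} m")
o3d.visualization.draw_geometries([pcd])  # interactive inspection
```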
Figure 10. The generated Digital Elevation Model (DEM) uses pixel locations to represent the X and Y coordinates, while pixel values correspond to depth measurements. These variations can also be represented through elevation contours.
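When the DEM is exported as a GeoTIFF, the pixel-to-coordinate mapping described in the caption is directly accessible; a short rasterio sketch (the file name and pixel indices are hypothetical, and nodata values are assumed to be masked):

```python
import rasterio

with rasterio.open("tiryns_dem.tif") as src:  # hypothetical DEM export
    depth = src.read(1)                       # 2-D array: pixel value = height, m
    x, y = src.xy(1200, 3400)                 # map X, Y of row 1200, column 3400
    print(f"Pixel (1200, 3400) -> X={x:.4f}, Y={y:.4f}, Z={depth[1200, 3400]:.4f} m")
```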
Figure 11. Altitude differences can alternatively be represented as contours. The generated contours are spaced at intervals of 0.001 m.
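The same 0.001 m contour interval can be reproduced from the hypothetical DEM export above, e.g. with matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt
import rasterio

with rasterio.open("tiryns_dem.tif") as src:  # hypothetical DEM export
    z = src.read(1)

levels = np.arange(np.nanmin(z), np.nanmax(z), 0.001)  # 1 mm contour interval
plt.contour(z, levels=levels, linewidths=0.3, cmap="viridis")
plt.gca().invert_yaxis()          # image row 0 is at the top
plt.gca().set_aspect("equal")
plt.title("Elevation contours at 0.001 m intervals")
plt.show()
```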
Figure 12. Cross-section providing both visual and metrical data regarding the relief and topography of the mural painting.
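Such a cross-section is simply a transect sampled from the elevation model; a minimal sketch taking a horizontal profile through the middle of the same hypothetical DEM:

```python
import numpy as np
import matplotlib.pyplot as plt
import rasterio

with rasterio.open("tiryns_dem.tif") as src:  # hypothetical DEM export
    z = src.read(1)
    px = src.res[0]                           # pixel size in map units, m

row = z.shape[0] // 2                         # horizontal section at mid-height
profile = z[row, :]
distance = np.arange(profile.size) * px

plt.plot(distance * 1e3, profile * 1e3)       # both axes in millimetres
plt.xlabel("Distance along section (mm)")
plt.ylabel("Relief height (mm)")
plt.show()
```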
Figure 13. The digital photogrammetric products deliver comprehensive information about the object that may prove useful in future applications. A relatively cost-effective and efficient data collection process thus makes a wealth of information available for both current and future use.
Figure 14. The reconstructed orthomosaic generated from 89 DNG images, with an overall resolution of 26,210 × 17,852 pixels and a pixel size of 0.0555 mm. The orthomosaic is orthorectified onto a plane defined by the three targets, 5, 1, and 7 (Figure 5), which are positioned at z = 0 in the local coordinate system.
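At 0.0555 mm per pixel, the mosaic spans roughly 1.45 × 0.99 m. Because the pixel size is constant across the orthorectified plane, any pair of picked pixels converts directly to a physical distance; a minimal sketch with hypothetical pixel coordinates:

```python
import math

PIXEL_SIZE_MM = 0.0555  # ground pixel size of the orthomosaic (Figure 14)

def distance_mm(p1: tuple, p2: tuple) -> float:
    """Euclidean distance between two orthomosaic pixel positions, in mm."""
    return math.dist(p1, p2) * PIXEL_SIZE_MM

# Example: two points picked on the orthomosaic (hypothetical coordinates)
print(f"{distance_mm((1025, 2310), (1025, 4110)):.1f} mm")  # 1800 px -> 99.9 mm
```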
Figure 15. (a) Precise measurements performed on the reconstructed orthomosaic. (b) Detailed view of the wall painting, highlighting the ability to obtain accurate spatial data not in situ or on the physical object, but through non-contact methods, virtually accessible from any location, at any time, and on any device.
Figure 16. Screenshot from RTIViewer. Reflectance Transformation Imaging (RTI) provides a detailed visualization of surface textures, such as brushstrokes, preliminary incised lines (a), and other subtle features that reflect the artist’s technique. This method facilitates a deeper analysis of the painting style and an enhanced understanding of the artist’s craftsmanship. It also highlights surface anomalies, damages, or irregularities on the painting’s surface (b).
Figure 17. (a–c) Screenshots from RTIViewer demonstrating the adjustment of the green sphere (top right) to simulate varying light directions for enhanced surface analysis.
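Moving the light in the viewer corresponds to re-evaluating, per pixel, the fitted reflectance function for a new light direction. For the PTM representation cited earlier [24], that function is a biquadratic in the image-plane components (lu, lv) of the light vector. The sketch below uses randomly generated stand-in coefficients, since parsing an actual .ptm file is omitted:

```python
import numpy as np

def relight(coeffs: np.ndarray, lu: float, lv: float) -> np.ndarray:
    """Evaluate the PTM biquadratic per pixel for light direction (lu, lv).

    coeffs: H x W x 6 array of per-pixel coefficients (a0..a5);
    lu, lv: projection of the unit light vector onto the image plane.
    """
    a0, a1, a2, a3, a4, a5 = np.moveaxis(coeffs, -1, 0)
    return a0 * lu**2 + a1 * lv**2 + a2 * lu * lv + a3 * lu + a4 * lv + a5

# Example: raking light from the left, 30 degrees above the surface
theta = np.radians(30)
lu, lv = -np.cos(theta), 0.0
coeffs = np.random.rand(4, 4, 6)  # stand-in for coefficients from a .ptm file
print(relight(coeffs, lu, lv))    # per-pixel luminance under the new light
```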
Figure 18. Screenshots from RTIViewer. In (a,b), different light directions enhance the topography information. In (c), the RTI Normals Visualization is utilized, while in (d), string impressions are most prominently revealed through the RTI-HSH specular enhancement algorithms.
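The normals visualization in (c) rests on the observation that, for a roughly diffuse surface, the light direction maximizing the fitted biquadratic approximates the surface normal (Malzbender et al. [24]). A sketch of that analytic maximum, again on an H × W × 6 coefficient array:

```python
import numpy as np

def ptm_normals(coeffs: np.ndarray) -> np.ndarray:
    """Per-pixel unit normals from PTM coefficients: H x W x 6 -> H x W x 3."""
    a0, a1, a2, a3, a4, a5 = np.moveaxis(coeffs, -1, 0)
    denom = 4 * a0 * a1 - a2**2
    denom = np.where(np.abs(denom) < 1e-12, 1e-12, denom)  # guard degenerate fits
    lu0 = (a2 * a4 - 2 * a1 * a3) / denom  # light direction of maximum luminance
    lv0 = (a2 * a3 - 2 * a0 * a4) / denom
    nz = np.sqrt(np.clip(1.0 - lu0**2 - lv0**2, 0.0, 1.0))
    return np.stack([lu0, lv0, nz], axis=-1)

# A normals map is conventionally displayed as rgb = (normals + 1) / 2
```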
Table 1. Summarized technical report of the procedure (Nikon D850, Nikon AF Nikkor 50 mm f/1.8 lens, 8256 × 5504 pix, 4.35 × 4.35 μm pixel pitch).

| Project | Camera Position (Figure 3) | Rotation Angles (ω, φ, κ) | Number of Images |
|---|---|---|---|
| D850 Tiryns | Parallel, no targets | 0°, 0°, 0° | 89 |
| | Parallel, targets | 0°, 0°, 0° | 107 |
| | Turn 90° | 0°, 0°, 90° | 84 |
| | Tilt 15° | 0°, 15°, 90° | 75 |
| | Tilt −15° | 0°, −15°, 90° | 64 |
| | Oblique | n/a | 49 |

Project-level values: altitude 715 mm; ground resolution 0.0555 mm/pix; 100,784 tie points; reprojection error 1 pix; F error 0.12 pix; scale bars error 0.000013 m.
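As a rough cross-check on Table 1, the nominal ground resolution follows from the pixel pitch, the object distance, and the focal length. The sketch below uses the nominal 50 mm focal length, so it comes out somewhat larger than the reported 0.0555 mm/pix, which derives from the bundle-calibrated principal distance at close focus:

```python
pixel_pitch_mm = 0.00435  # 4.35 um sensor pixel (Table 1)
focal_mm = 50.0           # nominal focal length of the 50 mm lens
distance_mm = 715.0       # camera-to-surface distance ("altitude", Table 1)

gsd = pixel_pitch_mm * distance_mm / focal_mm
print(f"Nominal GSD: {gsd:.4f} mm/pix")  # ~0.0622 vs. 0.0555 reported
```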