Article

Novel Paradigms in the Cultural Heritage Digitization with Self and Custom-Built Equipment

1 Department of Architecture, Alma Mater Studiorum-University of Bologna, 40126 Bologna, Italy
2 Relio Labs s.r.l., 25124 Brescia, Italy
* Author to whom correspondence should be addressed.
Heritage 2023, 6(9), 6422-6450; https://doi.org/10.3390/heritage6090336
Submission received: 29 July 2023 / Revised: 13 September 2023 / Accepted: 18 September 2023 / Published: 21 September 2023

Abstract:
In the field of Cultural Heritage (CH), image-based 2D and 3D digital acquisition is today the most common technique used to create digital replicas of existing artifacts. This is carried out for many reasons, including research, analysis, preservation, conservation, communication, and valorization. These activities usually require complementary specialized equipment, tailored to specific purposes in order to achieve the desired results. Such equipment is not easy to find on the market, is not always affordable for museum operators, is sometimes expensive, and usually needs tricky customizations. However, the development in recent years of more generalized, versatile, and affordable instruments and technologies has led to new approaches, leveraging a new generation of low-cost, adaptable equipment. This paper presents custom-made equipment following this new path, designed to provide optimized results through calibrated tools alongside the software that makes them work. The essay focuses specifically on the self-production of instruments for the digital reproduction of ancient drawings, manuscripts, paintings, and other museum artifacts, and on their transformative impact on digitization techniques. The outcomes of the self- and custom-built equipment produced for the contexts described in this paper highlight its potential to foster interdisciplinary collaboration, facilitate scholarly research, enhance conservation efforts, and promote cultural exchange. The final goal is to propose inexpensive equipment that is easy to use (even by operators without specific training) and that provides remarkable quality.

1. Introduction

Hardware instruments and equipment have played a key role in the activities of surveyors, architects, restorers, and other professionals involved in cultural heritage (CH) documentation, particularly in the field of arts and architecture, with the aims of knowledge, conservation, communication, and re-use as design background (even dating back to ancient times).
Despite their significance and the contemporary transformative impact of Information Technology (IT), these tools have recently received limited attention. New instruments and equipment were generally accepted without any critical analysis of their advantages, limitations, constraints, suitable applications, and potential to enable previously unfeasible analyses. The main result of this lack of attention is that the new workflows demonstrate minimal innovation within the processes in which they are embedded, so the benefits are far more limited than they could be.
This paper focuses on this issue with specific attention to the 2D and 3D digital data acquisition of CH by imaging, where the problem is particularly acute. For example, in the 3D domain, increasingly accurate three-dimensional models are usually authored using IT-enabled technologies such as laser scanners and digital cameras [1,2,3], but the process for reality capture has remained unaltered in its demand for specialized operators, due to the use of complex, usually uncalibrated, sometimes expensive, and hard-to-carry equipment. Currently, no laser scanner is capable of consistently capturing objects of different size classes. In the 2D domain, again by way of example, high-precision captures require specific and expensive instrumentation, such as the Phase One iXM Camera System [4].
To overcome this impasse, it is now essential to explore and evaluate new instruments, to precisely delimit their capabilities, and to accurately assess their proper utilization, which holds the utmost significance for CH-related fields. This remains a largely uncharted objective in present-day research. Some research groups have identified, analyzed, and measured the quality of the new instruments in certain uses: for example, in the 3D data capture of artifacts of all shapes and kinds, the 3Dom group at the Bruno Kessler Foundation in Trento (Italy) has conducted excellent work [5,6,7,8,9]. However, these analyses essentially evaluated the performance of single instruments, rather than studying and verifying their ability to fit into a well-determined pipeline or to enable new workflows and uses. Furthermore, there is little literature on frames, rigs, and stands designed for specific digitization techniques and easily replicable (a very isolated example is in [10]); a limited number of examples concern large-scale solutions (e.g., [11]), and most of them relate to topics outside the interest of this work [12]. Several studies exist on a limited number of devices, such as rotary-table-based solutions for 3D object acquisition, for example in [13,14]. However, the equipment presented usually requires complex handling and programming skills.
Lately, the scenario of image-based data capture instruments is potentially changing, thanks to the emergence of new general-purpose adaptable tools enabling new approaches to reality data acquisition, including on-the-fly surveys, and streamlining workflows while maintaining high-quality and accurate results. Devices such as smartphones, thanks to the various equipment they feature besides cameras (GPS, accelerometers, magnetometers, and gyroscopes) and their ability to connect to the Internet, have the potential to support real-time processing and tasks such as positioning, navigation, data recording, and transmission. Recent mobile phones, such as the Apple iPhone 14 Pro and the Huawei Mate 50 Pro, deliver levels of image resolution, sharpness, and color accuracy sometimes better than those of prosumer SLR cameras [15]. They force a rethinking of the entire way a survey is interpreted and carried out, and of the equipment needed to do so. Certainly, the problem of their configuration for the specific CH survey scenario needs to be explored in depth, but they have the strong advantage of being usable by any professional or non-professional. Moreover, specific physical accessories such as tripods, rigs, and stands have become easier to build and customize, thanks to the diffusion and affordability of 3D printers. These 3D prototyping systems allow a fabrication process based on standard elements and special parts at very low cost [16]. In photography-based reality capture, this innovative approach not only enhances the functionality and stability of the devices, but also enables the precise positioning and alignment necessary to capture images.
The potential of producing custom, affordable, on-demand accessories introduces transformative aspects to the digital imaging-based workflow [17,18], which can generate time savings, a wider user base, and better CH management, even though it has been only minimally explored to date.
During the last five years, our group at the Department of Architecture, University of Bologna (Fabrizio I. Apollonio, Andrea Ballabeni, Giovanni Bacci, Filippo Fantini, Riccardo Foschi, Marco Gaiani, and Simone Garagnani), has experimented with a new research path based on cost-effective solutions using generic instruments, self-produced accessories, and ad hoc software able to overcome the problems related to the use of general-purpose devices. The idea of developing streamlined workflows was pursued to enable real processes to better meet their lifecycle requirements, to address unresolved CH problems, and to reduce the need for manual, repetitive tasks. The outcomes of this research resulted in innovative concepts for creating tailored equipment capable of accommodating cameras, smartphones, and lighting systems to take pictures of various items such as paintings, drawings, small architectural artifacts, and sculptures.
Since 2000, the research group has been involved in the development of photography-based data acquisition methods, techniques, software, and hardware tools to digitize and visualize these objects in 2D and 3D [19,20], allowing the construction of a solid theoretical and practical background on problems and solutions. The wide range of dimensions, materials, and optical properties of the samples studied confirms that the solutions developed are comprehensive and robust enough to cover a broad range of object types, ensuring their consistency.
In this paper, we present an approach to design and fabricate the equipment needed to digitize the above-mentioned CH objects using image-based techniques. We present a critical overview of these new systems, with the aim of outlining the new methodological and technical scenario on which research will focus in the coming years. To fully understand problems and solutions, the evolution of each instrument created is described, allowing the reader to easily grasp the rationale behind each specific choice. An overview of the developed software, equipment, and enabling workflows is also given, to better explain their motivations, hardware solutions, and effective uses.

2. Materials and Methods

The digitization process for paintings, drawings, small architectural artifacts, and sculptures through photography involves the use of various hardware tools, mostly digital cameras and their accessories. The camera and any additional equipment depend on the goal of the acquisition, which can be one of two: to achieve the utmost precision, or to attain a specific level of accuracy while prioritizing user-friendliness and portability.
Usually, the highest precision is reached through costly professional equipment, while easy digitization can be achieved using more general devices, such as SLR cameras or, nowadays, smartphones. Operations with both types of devices may turn into complex activities that must be carried out by trained operators, while CH artifacts are often studied by museum curators, art historians, and restorers, who need digital models to improve and share their knowledge without being experts in digital 3D reproduction or owners of expensive tools [21]. The equipment they need aims to overcome typical capture problems, such as the lack of control over lighting sources and interreflections in the environment, the safety of the surveyed objects, and the maintenance of the predetermined photometric, dimensional, and spatial quality requirements.
The equipment presented in this paper aims to offer solutions to these needs. Basically, the workflows used are unchanged, but logistics, usability, and efficiency are much improved. Moreover, as the custom devices are closely linked to image-processing computer programs to achieve the desired outputs, the description of the new hardware solutions will be preceded by a description of the developed software needed to easily exploit the new instruments. Alongside these tools, the calibration methods for the entire solutions will be introduced, as they are essential to ensure the required colorimetric, resolution, metric, and formal accuracies granted by the equipment.

2.1. Pipelines

To meet the requirements of the two possible outputs listed above, image-based acquisition offers two well-established workflows, whose differences depend only on whether the desired output is 2D or 3D. They can be summarized, in general, as follows:
2D. 2D imaging:
  • Sensor resolution and color calibration,
  • Image acquisition,
  • Analog-to-digital conversion,
  • Demosaicking,
  • White balance,
  • Color correction,
  • Denoising and sharpening,
  • 2D output to target color space and gamma encoding.
3D. Photogrammetry:
  • Sensor radiometric calibration,
  • Image acquisition,
  • Analog-to-digital conversion,
  • Demosaicking,
  • White balance,
  • Color correction,
  • Denoising and sharpening,
  • 2D output to target color space and gamma encoding,
  • Sensor calibration and orientation through self-calibration,
  • Measurement introduction,
  • Surfaces generation,
  • 2D output:
    Texturing,
    Ortho-images production,
  • 3D output:
    Export of models towards 3D modeling applications.
3D. Photometric stereo:
  • Sensor resolution and color calibration,
  • Image acquisition,
  • Analog-to-digital conversion,
  • Demosaicking,
  • White balance,
  • Color correction,
  • Denoising and sharpening,
  • 2D output to target color space and gamma encoding,
  • Maps extraction:
    Diffusion map,
    Normal map,
    Specular map,
  • Generation of mesh surfaces,
  • 2D output:
    Maps for shaders,
  • 3D output:
    Export of models towards 3D applications.
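The shared 2D front end of the three pipelines (white balance, color correction, gamma encoding) can be sketched in Python as follows. This is a minimal numpy illustration with toy values, not the authors' MATLAB implementation: demosaicking, denoising, and sharpening are omitted, and the identity color-correction matrix is a placeholder for a matrix fitted per camera.

```python
import numpy as np

def white_balance(img, gray_patch_rgb):
    """Scale the channels so the measured gray patch becomes neutral."""
    gains = gray_patch_rgb.mean() / gray_patch_rgb
    return img * gains

def color_correct(img, ccm):
    """Apply a 3x3 color-correction matrix to linear RGB pixels."""
    return img @ ccm.T

def gamma_encode_srgb(img):
    """Encode linear RGB with the sRGB transfer function."""
    img = np.clip(img, 0.0, 1.0)
    return np.where(img <= 0.0031308,
                    12.92 * img,
                    1.055 * img ** (1.0 / 2.4) - 0.055)

# toy 2x2 linear image with a greenish cast (hypothetical values)
linear = np.array([[[0.20, 0.30, 0.18], [0.40, 0.55, 0.35]],
                   [[0.05, 0.08, 0.05], [0.70, 0.90, 0.62]]])
gray = np.array([0.40, 0.55, 0.35])   # gray patch as measured under the cast
ccm = np.eye(3)                       # placeholder for the fitted matrix

out = gamma_encode_srgb(color_correct(white_balance(linear, gray), ccm))
```

The same three operations appear, in the same order, in all three workflows above; only what follows them (photogrammetric orientation, map extraction) differs.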
Challenges in acquiring 2D images lie in achieving the appropriate resolution and color accuracy to ensure the most faithful reproduction. In the 3D case, additional requirements include images suitable for common techniques such as photogrammetry for geometric surveying and photometric stereo [22,23,24,25,26,27], used primarily to reconstruct the mesostructure, along with more recent advancements such as neural radiance fields (NeRF) [28,29,30,31].
As noted, the workflows remained unchanged in our case, but operability was greatly facilitated, sometimes allowing previously impossible operations, and the quality of the results improved significantly.

2.2. An Overview of the Companion Software

To make the 2D and 3D pipelines efficient, two companion software tools were developed alongside the equipment in the MATLAB programming environment:
  • SHAFT (SAT and HUE Adaptive Fine Tuning), a solution for Color Correction (CC) [32];
  • nLights, a solution to reconstruct the mesostructure and surface reflectance properties of objects [33].
For example, the typical color management workflow is based on the CIE illuminant D50 and the CIE 1931 standard observer. However, our equipment is based on illuminants at a 4000 K color temperature, as they offer more consistent spectral emission across all wavelengths and have been preferred by observers [34]. SHAFT is therefore designed to manage data coming from a 4000 K illuminant, embedding it in the ICC D50 pipeline.
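Embedding a 4000 K capture illuminant in a D50-based ICC pipeline is typically handled with a chromatic adaptation transform. The following is a sketch using the standard Bradford transform; the 4000 K white point below is an approximate blackbody value used only for illustration, and the actual SHAFT implementation may differ.

```python
import numpy as np

# Bradford cone-response matrix, as used in ICC-style chromatic adaptation
M_BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                       [-0.7502,  1.7135,  0.0367],
                       [ 0.0389, -0.0685,  1.0296]])

def bradford_adaptation(src_white, dst_white):
    """3x3 matrix mapping XYZ values captured under the source illuminant
    to XYZ under the destination illuminant (here ~4000 K -> D50)."""
    rho_src = M_BRADFORD @ src_white
    rho_dst = M_BRADFORD @ dst_white
    return np.linalg.inv(M_BRADFORD) @ np.diag(rho_dst / rho_src) @ M_BRADFORD

# white points (Y normalized to 1): approximate 4000 K blackbody, and D50
white_4000k = np.array([1.0098, 1.0, 0.6442])   # approximate value
white_d50   = np.array([0.9642, 1.0, 0.8249])

adapt = bradford_adaptation(white_4000k, white_d50)
```

By construction, the matrix maps the source white exactly onto the D50 white, which is the consistency condition an ICC-style adaptation must satisfy.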
In Figure 1, 2D and 3D pipelines are represented, highlighting the steps that are performed by the two software solutions.
SHAFT manages the 2D imaging steps of RAW conversion, white balance, and color correction (CC), exploiting target-based techniques [35]. Target-based characterization establishes the color relationship according to a set of color patches with pre-measured spectral or colorimetric data, per ISO 17321 [36]. CC is necessary since camera sensors (both in traditional DSLRs and in smartphones) do not have the same spectral sensitivity as the cones in the human eye [37], leading to metamerism phenomena between the camera and the eye [38]. SHAFT adopts the Calibrite ColorChecker Classic and Passport as targets and is organized in three basic steps: RAW image linearization; exposure equalization and white balance adjustment relative to the D4 patch of the ColorChecker; and CC. CC, in turn, proceeds in three phases: linear, polynomial, and successive approximations. The final images can be rendered in common color spaces such as sRGB and Display P3.
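The linear phase of such target-based CC can be sketched as a least-squares fit of a 3x3 matrix over the target patches (the polynomial and successive-approximation phases refine this further). The patch values below are hypothetical, not ColorChecker measurements.

```python
import numpy as np

def fit_ccm(measured, reference):
    """Least-squares fit of a 3x3 color-correction matrix M such that
    measured @ M.T ~= reference (one row per linear RGB patch value)."""
    X, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return X.T

# hypothetical camera readings for four patches, and their reference values
measured  = np.array([[0.9, 0.1, 0.1],
                      [0.1, 0.8, 0.2],
                      [0.1, 0.1, 0.9],
                      [0.5, 0.5, 0.5]])
reference = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0],
                      [0.5, 0.5, 0.5]])
ccm = fit_ccm(measured, reference)
corrected = measured @ ccm.T
```

With a real 24-patch target, the overdetermined system averages out noise across patches, which is why more patches generally yield a more stable matrix.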
nLights is an automatic photometric stereo solution for reconstructing bitmaps that describe the mesostructure and the reflectance properties of the artifact surface, namely albedo, normals, heights, and specularity. Additionally, nLights provides the 3D geometry of the shape as a 3D STL or OBJ file obtained from the depth map. nLights uses n pictures of the artifact under constant illumination from n directions, approximately orthogonal to each other, while keeping the camera position fixed with its axis perpendicular to the surface of the drawing or painting. In the first version the number of light sources was four, while the most recent update presents an algorithmic implementation that can accommodate an unlimited number of light sources.
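nLights itself is a MATLAB tool; the classical Lambertian photometric-stereo step that this family of methods builds on can be sketched in Python as a per-pixel least-squares problem (a minimal formulation under known directional lights, not the authors' actual code).

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Lambertian photometric stereo: recover per-pixel albedo and unit
    normals from n images taken under n known directional lights.
    intensities: (n, h, w) array; light_dirs: (n, 3) unit vectors."""
    n, h, w = intensities.shape
    I = intensities.reshape(n, -1)                        # (n, h*w)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)    # G = albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)
    return albedo.reshape(h, w), normals.reshape(3, h, w)

# synthetic check: a single pixel facing the camera with albedo 0.8,
# lit from four symmetric directions (as in the first nLights version)
L = np.array([[ 1.0,  0.0, 1.0],
              [-1.0,  0.0, 1.0],
              [ 0.0,  1.0, 1.0],
              [ 0.0, -1.0, 1.0]]) / np.sqrt(2.0)
true_normal, true_albedo = np.array([0.0, 0.0, 1.0]), 0.8
imgs = (true_albedo * (L @ true_normal)).reshape(4, 1, 1)
albedo, normals = photometric_stereo(imgs, L)
```

The least-squares form is what allows the number of lights to grow beyond four: any n >= 3 non-coplanar directions overdetermine the per-pixel system, improving robustness to shadows and specular outliers.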

2.3. Developed Equipment Solutions

The developed custom-built equipment solutions described refer to indoor acquisitions with controlled lighting and are designed to overcome the following common issues:
  • Lack of illuminants efficiency, with inconsistent or harmful lighting conditions leading to variations in image quality and accuracy;
  • Presence of erratic reflections and stray light coming from outdoors;
  • Absence of planarity of the camera and the surface to be captured: without correct planarity between the camera and the object’s surface, there can be discrepancies in image resolution, affecting the clarity and details of the captured images;
  • Significant time required to set up the shooting stage: lighting equipment, light-blocking screens, and cameras can be difficult and time-consuming to position, increasing the overall time required for capturing images;
  • Overly complex tools requiring specific expertise or considerable human resources, which may pose challenges for average users;
  • Costly and hard-to-find tools and spares: some hardware tools or setups may be expensive and not easily accessible, making it challenging to acquire them, as, e.g., in the Operation Night Watch project, by Rijksmuseum in Amsterdam [39].
The custom equipment solutions developed to minimize or solve these problems are:
  • A set of very portable lights with known emissions and efficiency to easily transport them and minimize the technical time required to set up the stage,
  • A repro stand for artworks to be captured on a horizontal plane, such as ancient drawings;
  • A repro stand for artworks to be captured on a vertical plane, such as paintings or frescoes;
  • A calibrated roundtable and 3D test-field plate to capture small museum objects.
A.
Portable lights—To avoid inefficient illuminants and overly complex lighting systems, a new lighting solution based on a series of single LED lights, characterized by limited dimensions and good portability, was chosen. A custom prototype was crafted, including either sixteen or thirty-two Relio2 LED lights distributed on the repro stand, with four or eight lights placed on each side. These lights boast a 4000 K Correlated Color Temperature (CCT) and an illuminance of 40,000 lux at 0.25 m. Their IES (Illuminating Engineering Society) TM-30-15 Color Fidelity Index is Rf = 91 [40]. The illuminators' Spectral Power Distribution (SPD) shows high chromatic consistency across the entire wavelength spectrum (Figure 2) and excellent color rendition even during the rendering phase. Moreover, they do not generate any potentially harmful UV or IR emissions. Finally, their warm-up time to the nominal color temperature and to light-intensity stabilization is minimized. The Relio2 illuminants were positioned and grouped with specifically designed 3D-printed supports that place the lights at the desired directions and inclinations (Figure 3). These supports were crafted in ABS (acrylonitrile-butadiene-styrene) using an XYZprinting Da Vinci Pro 1.0 3D printer and 1.75 mm diameter black matte filament. The choice of ABS rather than other easier-to-print filaments was driven by the need to prevent the supports from being damaged or deformed by the heat emitted by the illuminators (approximately 60 °C each). This solution is also economically affordable, since it solves the problem of finding costly spare parts in case of damage during working operations (they can simply be reprinted).
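Assuming each Relio2 behaves approximately as a point source, its nominal 40,000 lux at 0.25 m can be extrapolated to other working distances with the inverse-square law:

```python
def illuminance(lux_ref, d_ref, d):
    """Inverse-square extrapolation of illuminance for a quasi-point source."""
    return lux_ref * (d_ref / d) ** 2

# Relio2 nominal figure: 40,000 lux at 0.25 m, extrapolated to the
# 660 mm light-to-center distance used on the repro stand
lux_at_stand = illuminance(40_000, 0.25, 0.66)
```

This is an approximation (it ignores beam shape and the 45° incidence angle), but it gives a quick order-of-magnitude check of the light budget when planning exposure times.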
B.
Repro stand for artworks to be captured on a horizontal plane—The stand, intended to host ancient drawings or manuscripts to be digitally acquired, was designed to offer the following features:
  • A stable structure to minimize blurring caused by oscillations and vibrations and small movements of lamps that may cause potential non-uniformity of the light,
  • A wide reproduction area capable of accommodating the open passe-partout containing the drawings to be captured, ensuring its safe handling and planarity,
  • The lighting system positioned on all four sides, equidistant from the center of the drawing to guarantee homogeneous illumination for the whole acquisition area,
  • No interference between the light sources and the camera,
  • Easy portability within the locations where the drawings are usually stored.
To fulfill these requirements, two different solutions were built, in 2018 and in 2020, with progressive improvements.
The first stand (2018) was conceived by separating the structure supporting the lighting system and housing the drawing from the column holding the camera. The camera stand was a Manfrotto 809 Salon 230, 2800 mm in height, equipped with a Manfrotto 410 geared head. The lighting support and capture plane system was based on a medium-density fiberboard top, from which four detachable arms made of extruded aluminum profile accommodated four Relio2 lights per side, angled at 45° relative to the horizontal and equidistant from the center of the drawing to guarantee homogeneous illumination over the whole acquisition area. The arms were connected to the surface using steel angle brackets secured with bolts. These supports were deliberately positioned asymmetrically to enable the opening of the passe-partout hosting the drawing and to ensure the positioning of the lights at the same distance from the center of the acquisition plane (660 mm). Alignment gauges were constructed to check the correct arm inclination and light height once the stand was assembled at the acquisition location.
In Figure 4 the prototype is illustrated while Table 1 defines the nature of the (commercial or self-built) components used to assemble it. In Figure 5 the bill of custom-built components and the alignment gauge are shown.
Starting from the first version of the stand, a second prototype was developed to improve the planarity between the camera and the surface to be captured, extend the variety of formats and sizes of the drawings to be acquired, improve system portability, increase the number of lights to better model reflections, and minimize erratic reflections and stray light coming from outdoors.
The planarity of the shooting plane was achieved using adjustable screw feet and a series of laser distance meters (accuracy 0.25 mm) placed at the four corners of the frame.
To extend the range of drawing formats that could be acquired, the inclined arms supporting the illuminators were mounted not directly onto the surface as before, but rather onto an elevated frame positioned 20 mm above the horizontal plane. The frame, made of 20 mm × 20 mm aluminum profiles with a thickness of 1 mm and welded at the corners, was raised on four spacers at the vertices. Each of them can be removed, allowing the structure to rest stably on only three points. This allows passe-partouts of different sizes and wider formats to be hosted.
To improve portability, a new, shorter Lupo Repro 3 column, equipped with a Manfrotto 410 geared head, was adopted, as it can be disassembled into segments no longer than two meters.
Finally, the new stand accommodated light sources inclined at two different angles, 15° and 45° relative to the acquisition surface, allowing more accurate extraction of the normal and reflection maps.
To minimize erratic reflections and stray light coming from outdoors, the previous solution for darkening the shooting area, which was space-demanding (Figure 6), was replaced by a new guide atop the lamp-supporting arms, allowing a black curtain to slide and effectively block reflected light around the surface. This new solution proved more practical, both in assembly time and in light insulation.
In Figure 7 the second acquisition prototype is illustrated while Table 2 defines the nature of the components used to assemble it (commercial or self-built).
In Figure 8, the bill of custom-built components is illustrated.
C.
Repro stand for artworks to be captured on a vertical plane—The stand was designed to offer the following features:
  • A stable structure to minimize blurring caused by oscillations and vibrations and small movements of lamps that may cause potential non-uniformity of the light;
  • Both vertical and horizontal movement of the acquisition system, since paintings are larger than drawings and the limited frame of each picture taken at the needed resolution requires the shots to be mosaicked together;
  • Safety management and parallelism to the vertical plane of the painting to be acquired;
  • The lighting system positioned on all four sides, equidistant from the center of the captured area to guarantee homogeneous illumination for the whole acquisition area;
  • No interference between the light sources and the camera;
  • A system for camera calibration, since calibration panels or color checkerboards cannot be superimposed on the painting for safety reasons.
The solution was developed starting from the Manfrotto 809 Salon 230 as the main stand of the rig. A new structure made of square aluminum tubes measuring 20 mm × 20 mm × 1 mm, mounted along the horizontal arm of the column, is coupled to the column. Connections through ABS nodes, forming a trapezoidal shape that links them to the column via aluminum and steel tension rods, were introduced to strengthen the structure, as verified by finite element method (FEM) simulations following the von Mises criterion.
Figure 9 (left) shows internal forces influencing the structure when loaded with the whole wired lighting system and camera: notably, the lower aluminum rods are excessively stressed. Figure 9 (right) shows the internal forces after strengthening the same solution with a couple of tie rods: the capacity to withstand loading stress and vibrations is amplified.
The movement was ensured horizontally by wheels placed below the base and vertically exploiting the rack of the column. To ensure the parallelism of the arm to the painting plane and the control of the distance between the camera lens and the painting plane in different shots, two laser distance meters have been mounted on two brackets (one on the right and one on the left side).
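The two laser distance readings can be converted into the arm's misalignment angle relative to the painting plane with basic trigonometry. The 600 mm bracket spacing in this sketch is a hypothetical value, not a documented dimension of the rig.

```python
import math

def tilt_deg(d_left, d_right, baseline):
    """Misalignment angle of the camera arm relative to the painting plane,
    given two laser distance readings spaced `baseline` apart (same units)."""
    return math.degrees(math.atan2(d_right - d_left, baseline))

# readings differing by 2 mm over a hypothetical 600 mm bracket spacing
angle = tilt_deg(1200.0, 1202.0, 600.0)
```

Equal readings mean the arm is parallel to the painting; in practice the operator adjusts the stand until the two meters agree within their 0.25 mm accuracy.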
The camera is seamlessly integrated within the lighting structure itself, precisely positioned at its focal center to prevent interferences and to keep the camera in the correct position in relation to the lights.
Since the artwork often cannot be removed from the wall, and calibration panels or color checkerboards cannot be superimposed on the painting for safety reasons, a separate stand had to be built, in addition to the structure supporting the lights and camera, specifically for the preliminary calibration procedures.
In Figure 10 the acquisition prototype for vertical surfaces is illustrated, while Table 3 defines the nature of the components used to assemble it (commercial or self-built).
D.
Calibrated roundtable and 3D test-field plate—The developed solution addresses the challenges stemming from overly intricate tools that demand specialized expertise, even for technical procedures such as camera calibration [41]. This stands in contrast to sophisticated approaches such as the one outlined in [14], which features complex setups involving stepper motors and controllers to maneuver the table's movement. In place of complex rigs, which are challenging for operators to fabricate, a more user-friendly approach has been adopted: comprehensive usage guidelines and crafting instructions are provided directly to users, who can build their own acquisition set by folding basic cardboard.
This process follows a readily accessible open-source layout, as illustrated in Figure 11. Furthermore, this simple solution better matches acquisition devices such as smartphone cameras, aiming for effortless usage and portability at the desired level of accuracy rather than the highest possible precision. A rotating support was built to acquire small objects, with a set of Ringed Automatically Detected (RAD) coded targets printed on the circular flat acquisition surface, as well as on six regularly arranged cubes also textured with RAD targets to help alignment and scaling.
These cubes are firmly connected to the rotating table profile by metal rods, radially placed along the circular plate's thickness and rotating with it. For simpler calibration procedures, a 3D test-field plate hosting 150 RAD coded targets, using 12-bit coded coordinates, was also prepared; it can be easily plotted on a rigid paper sheet using a 600 dpi b/w printer.
In Figure 11 both tools are illustrated.

2.4. Acquisition Requirements and Calibration Procedures

Some requirements need to be considered as goals in the acquisition process using the introduced equipment. They can be basically listed as follows:
  • Color: color accuracy must be estimated between 0 and 1.5 CIE ΔE00 [42].
  • Resolution: the resolution of the acquired artwork must be at least 0.1 mm, which means acquiring digital information at a 0.05 mm sampling step, according to the Nyquist-Shannon sampling theorem: the sample rate must be at least twice the signal's bandwidth to effectively avoid aliasing distortion.
  • Time: it must be as short as possible, without affecting the quality of the acquisition.
  • Costs: they must be low compared to commercial solutions, to improve the economic sustainability of the acquisitions.
  • Usability: it must be wide, to involve potential users who are not necessarily specifically trained.
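The resolution requirement translates directly into a pixel-count budget for each shot or mosaic tile. A small worked example follows; the A3 sheet size is purely illustrative.

```python
import math

def required_pixels(width_mm, height_mm, sample_mm=0.05):
    """Pixel count needed to sample an artwork at `sample_mm` per pixel
    (0.05 mm = half the 0.1 mm target resolution, per Nyquist-Shannon)."""
    px_w = math.ceil(width_mm / sample_mm)
    px_h = math.ceil(height_mm / sample_mm)
    return px_w, px_h, px_w * px_h / 1e6   # width px, height px, megapixels

# e.g., an A3-sized drawing (420 mm x 297 mm; the size is illustrative)
w, h, mp = required_pixels(420, 297)
```

An A3 sheet at this sampling step already requires roughly a 50-megapixel frame, which is why larger artworks must be captured as mosaics of partially overlapping shots.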
Requirements are met by following strict procedures to calibrate the instruments, especially for color and resolution, and to assess time, costs, and usability. For color, resolution, and usability, the detailed procedures were as follows.
Color—Evaluating color accuracy in image acquisition typically requires a validation effort, since many factors can lead to inaccuracies: objective, subjective, technological, or simply due to carelessness or human approximation [43]. Various formulas exist to evaluate visual color difference; the most used is the CIELAB color difference, ΔE, calculated for each color patch of a reference color checkerboard. The overall color encoding performance is usually obtained by summing statistical measurements of ΔE over the entire set of color samples contained in the targets. The color metric issued by the CIE in 2000 was used: a formula that measures the Euclidean distance between the expected and the measured color in the CIE color space, introducing correction factors to minimize the problem of the non-perceptual uniformity of colors. As a complementary parameter, the Euclidean lightness error ΔL was introduced (ΔL = L2 − L1, where L2 and L1 are, respectively, the acquired and the measured values), since the L component closely matches human perception of lightness.
The evaluation of color accuracy, based on the color checker target and the rendered color space Display P3, is computed according to the following parameters: mean and maximum color difference relative to the mean ideal chroma in the ΔE00 color metric on the CIEXYZ chromaticity diagram; mean of absolute luminance; and exposure error in f-stops measured by the pixel levels of patches B4-E4, using measured gamma values rather than the standard value for the color space (i.e., 2.2 in the case of Display P3). As reference, ΔE00 mean values less than 1.5 were considered acceptable; ΔL mean values less than 1 are required; and exposure errors less than 0.25 f-stops are needed.
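The ΔE00 metric used above can be computed as follows: a compact Python transcription of the CIEDE2000 formula with kL = kC = kH = 1 (the zero-chroma hue edge cases of the full standard are omitted, so this is a sketch rather than a reference implementation).

```python
import math

def delta_e00(lab1, lab2):
    """CIEDE2000 color difference with kL = kC = kH = 1 (sketch)."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    Cbar = 0.5 * (C1 + C2)
    G = 0.5 * (1.0 - math.sqrt(Cbar**7 / (Cbar**7 + 25.0**7)))
    a1p, a2p = a1 * (1.0 + G), a2 * (1.0 + G)
    C1p, C2p = math.hypot(a1p, b1), math.hypot(a2p, b2)
    h1p = math.degrees(math.atan2(b1, a1p)) % 360.0
    h2p = math.degrees(math.atan2(b2, a2p)) % 360.0
    dLp, dCp = L2 - L1, C2p - C1p
    dh = h2p - h1p
    if abs(dh) > 180.0:
        dh -= math.copysign(360.0, dh)
    dHp = 2.0 * math.sqrt(C1p * C2p) * math.sin(math.radians(dh) / 2.0)
    Lbar, Cbp = 0.5 * (L1 + L2), 0.5 * (C1p + C2p)
    hsum = h1p + h2p
    if abs(h1p - h2p) > 180.0:
        hsum += 360.0
    Hbar = 0.5 * hsum
    T = (1.0 - 0.17 * math.cos(math.radians(Hbar - 30.0))
             + 0.24 * math.cos(math.radians(2.0 * Hbar))
             + 0.32 * math.cos(math.radians(3.0 * Hbar + 6.0))
             - 0.20 * math.cos(math.radians(4.0 * Hbar - 63.0)))
    d_theta = 30.0 * math.exp(-(((Hbar - 275.0) / 25.0) ** 2))
    Rc = 2.0 * math.sqrt(Cbp**7 / (Cbp**7 + 25.0**7))
    Sl = 1.0 + 0.015 * (Lbar - 50.0) ** 2 / math.sqrt(20.0 + (Lbar - 50.0) ** 2)
    Sc = 1.0 + 0.045 * Cbp
    Sh = 1.0 + 0.015 * Cbp * T
    Rt = -math.sin(math.radians(2.0 * d_theta)) * Rc
    return math.sqrt((dLp / Sl) ** 2 + (dCp / Sc) ** 2 + (dHp / Sh) ** 2
                     + Rt * (dCp / Sc) * (dHp / Sh))
```

In the validation workflow this function would be evaluated once per ColorChecker patch, with the mean over all patches compared against the 1.5 threshold stated above.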
Resolution—To define the resolution, two parameters must be measured: the spatial detail and its preservation. Spatial detail is defined by fixing the nominal sampling resolution. The common solution to measure detail preservation is the Modulation Transfer Function (MTF), a measure of how accurately an imaging device or system can reproduce a scene. The MTF quantifies the degree of attenuation with which the whole system (thus also including the custom frame) reproduces variations in luminous intensity as a function of their spatial frequency. The MTF is standardized in ISO 12233, of which the 2000 version was followed here [44].
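As a simplified illustration of the principle behind the slanted-edge method (the full ISO 12233 procedure adds edge-angle estimation and oversampling, omitted here), an MTF curve can be obtained from a 1D edge profile by differentiation and Fourier transform; the helper names below are hypothetical.

```python
import numpy as np

def mtf_from_edge(edge_profile, sample_pitch=1.0):
    """Derive an MTF curve from a 1D edge spread function (ESF):
    differentiate to obtain the line spread function (LSF), then take
    the magnitude of its Fourier transform, normalized at DC."""
    lsf = np.gradient(np.asarray(edge_profile, dtype=float))
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(len(lsf), d=sample_pitch)  # cycles per sample
    return freqs, mtf

def mtf_at_fraction(freqs, mtf, frac=0.5):
    """Spatial frequency at which the MTF drops to `frac` of its DC value
    (frac=0.5 gives MTF50, frac=0.1 gives MTF10), by linear interpolation."""
    idx = int(np.argmax(mtf < frac))
    f0, f1, m0, m1 = freqs[idx - 1], freqs[idx], mtf[idx - 1], mtf[idx]
    return f0 + (frac - m0) * (f1 - f0) / (m1 - m0)
```

Applied to a synthetic Gaussian-blurred edge, the recovered MTF50 matches the analytic prediction for the blur kernel, which is how such a sketch can be sanity-checked before use on real slanted-edge crops.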
Table 4 illustrates the average overall response of the systems to a vertically inclined edge. The evaluation is based on the rise distance (10–90% rise) and measured using MTF50 and MTF10 [45,46] at 1200 mm from the ISO chart. Both cameras used were equipped with a 120 mm Hasselblad lens.
Figure 12 is an example of the application of the MTF evaluation workflow.

2.5. Camera Geometric Calibration Procedure

The first step in data processing is the camera geometric calibration, which determines both the interior and the exterior orientation parameters, as well as the additional parameters. The most common set of additional parameters, employed to compensate for systematic errors in digital cameras, is the eight-term physical model originally formulated by Brown [47]. Brown's model includes the 3D position of the perspective center in image space (principal distance and principal point), three coefficients for radial distortion, and two for decentering distortion. The model can be extended by two further parameters to account for affinity and shear within the sensor system. The iPhone X camera calibration was carried out using the developed 3D test field: a set of 20 images was taken using a tripod and a photo-studio illumination set, including convergent images (some of them rotated by 90°) with good intersection angles of the rays from the camera to the test field [48,49]. The calibration was performed in Agisoft Metashape using RAD coded target based geometric calibration. The center of every RAD coded target is reconstructed by eight or more rays to enhance the accuracy, allowing the calculation of the Brown camera model parameters: fx, fy (focal length); cx, cy (principal point coordinates); K1, K2, K3 (radial distortion coefficients); P1, P2 (tangential distortion coefficients).
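The radial and tangential terms of Brown's model can be applied to normalized image coordinates as in the sketch below. Sign and ordering conventions vary between packages; the formulation here follows the common Metashape-style one and should be checked against the specific software's documentation.

```python
def brown_distort(x, y, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Apply Brown's radial (K1-K3) and tangential/decentering (P1, P2)
    distortion to normalized image coordinates (origin at the principal
    point, unit focal length). Convention as in Metashape-style models."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    yd = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return xd, yd
```

With all coefficients at zero the mapping is the identity; small positive K1 pushes points outward (pincushion in this convention), which is a quick way to verify signs against a calibrated data set.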
The performance of the photogrammetric pipeline is evaluated by a statistical analysis that considers the following parameters:
  • Number of oriented images.
  • Bundle adjustment (BA) (re-projection error).
  • Number of points collected in the dense point cloud.
  • Comparison of the dense point cloud to the ground truth of the object. The photogrammetric models were compared to a reference SLR camera model, using CloudCompare.
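The last item, the cloud-to-ground-truth comparison, amounts to computing nearest-neighbor distances between the two dense clouds. The brute-force sketch below illustrates the metric; CloudCompare itself uses accelerated octree search, so this is only practical for small clouds and the function name is hypothetical.

```python
import numpy as np

def cloud_to_cloud_distances(cloud, reference):
    """For each point of `cloud`, the distance to its nearest neighbor in
    `reference` (brute force: full pairwise distance matrix, then a min)."""
    diffs = cloud[:, None, :] - reference[None, :, :]   # (n, m, 3)
    d = np.sqrt((diffs ** 2).sum(axis=2))               # (n, m) pairwise distances
    return d.min(axis=1)                                # (n,) nearest distances

# summary statistics comparable to a CloudCompare report
def cloud_comparison_stats(cloud, reference):
    d = cloud_to_cloud_distances(cloud, reference)
    return {"mean": float(d.mean()), "max": float(d.max())}
```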
Usability—The ecosystem of tools introduced has become easier to use, more automatic, and friendlier over time. This was assessed by offering it to museum operators, who started using it proficiently after reduced training. Following Kirkpatrick's Four-Level Training Evaluation Model [50], the learners' reactions to training were positively influenced by the easy setup of the developed tools. The novelty of this approach does not lie in the well-entrenched practice of imparting technical knowledge to end-users; rather, it builds on the general familiarity the end-users already possess. They can effortlessly acquire new skills while leveraging their existing knowledge, for instance by employing ubiquitous devices such as smartphones, or simple rigs that only need to be assembled and switched on, with no need to worry about light angles, camera movements, etc., as these are preset in the rig. These devices, originally designed for general tasks, have been repurposed for specialized functions without altering their fundamental everyday usage.
The equipment presented so far was used, tested, and improved during acquisition projects involving the following case studies:
  • Original drawings by Leonardo da Vinci (mostly dating to the end of the XV century, with dimensions roughly similar to an A4 sheet), hosted and digitized in several locations such as the Gallerie dell’Accademia in Venice, Le Gallerie degli Uffizi in Florence, and the Civico Gabinetto dei Disegni al Castello Sforzesco and the Veneranda Biblioteca Ambrosiana, both in Milan.
  • Manuscript no. 589 (XIV century, 273 mm × 187 mm) titled Dante Alighieri, Commedia con rubriche volgari brevi e glosse dal Lana per le prime due cantiche, hosted and digitized at the Biblioteca Universitaria di Bologna (BUB).
  • Annunciation by Beato Angelico (c. 1430-32, 2380 mm × 2340 mm) hosted and digitized at the Museum of the Basilica of Santa Maria delle Grazie in San Giovanni Valdarno.
  • An embalmed Porcupinefish (Diodon Antennatus, 350 mm × 190 mm × 250 mm) and a Globe by astronomer Horn d’Arturo (310 mm × 310 mm × 460 mm), both hosted and digitized at the Sistema Museale di Ateneo (SMA) in Bologna.
These case studies cover different materials and dimensions and can therefore be considered representative of entire classes of objects belonging to the CH: the drawings by Leonardo da Vinci (handmade prepared paper, iron-gall inks, silver-point traces, black and red stones, or chalks), Manuscript no. 589 (medieval parchment, iron-gall inks, and gold foil), the Annunciation (tempera on a wooden panel, gold foil), the Porcupinefish (matte clear coat), and the Globe by astronomer Horn d’Arturo (glossy clear varnish).

3. Results

The results produced by the designed equipment concern the following:
  • Footprint of the acquisition area and on-field setup time for the acquisition set.
  • Color accuracy.
  • Quality of the normal maps for the 3D surface mesostructure reconstruction.
  • Dimensional quality achieved using general-purpose devices (i.e., smartphone cameras) on 3D CH objects.
  • Visual outcomes from the developed pipeline on different types of CH objects.
  • Costs.
  • Public and scientific successes of the outputs.

3.1. Footprint and On-Field Setup Time

Ancient drawings, paintings, and other CH objects hosted in museums are often placed in narrow spaces with tight access paths (museum storage rooms, artworks housed in historical buildings with narrow aisles or steep stairways, etc.). Under these conditions, transporting the digitization equipment is tricky. Commercial solutions usually require complex logistics: a van is typically necessary to transport the equipment to the acquisition location and to assemble it in the planned acquisition place. The footprint of the acquisition area and the volume occupied by the equipment during handling are therefore the main aspects considered.
These parameters were measured considering the stand developed to acquire the drawings (case study (A)). Table 5 outlines the improvements across various usability aspects, such as the occupancy of the equipment and the assembly time for the stage. Additionally, Table 5 presents the individual time components required for the pre-shooting setup and calibration activities. In 2014 (Venice), almost all of the equipment used was commercial, except the table on which the drawing was placed, which was necessarily purpose-built from a commercial version. The custom prototypes developed later (from 2018 onwards) instead offer an integrated solution with pre-assembled components, reducing setup times, with the advantage of keeping lights and cameras in mutually rigid and controlled positions for the whole photoshoot. Table 5 therefore makes it relatively easy to compare the occupancy of the equipment and the time required for the acquisition operations with the custom prototypes introduced later. Values are progressively decreasing; in particular, the room necessary to perform the acquisitions shrank substantially (in the first test in 2014 an area of about 4 m × 4 m × 2.80 m was necessary to host all of the equipment, while in 2022 a smaller area, about 1.5 m × 1.2 m × 1.6 m, was enough (Figure 13)).
The first prototypes presented needed a van to move the components to the acquisition locations, while the most recent prototype can easily be disassembled and carried in a common car. The reduction of complex setup operations for the acquisition environment also shortened the assembly time.
Figure 14 compares the equipment to move and assemble in 2014 (a) and in 2022 (b), showing how portability improved: while the former solution involved heavy components that were difficult to carry by car (i.e., the stand and the table), the latter adopts a shorter main column and a variety of components (including lights) that easily fit in a common trunk.

3.2. Color Accuracy

The color accuracy results consist of the measurement of improvements in two different phases:
(a) Lighting system design and construction;
(b) Lighting system use on the field.
To determine the best performance in terms of color rendition, three different sets of illuminators were tested using the same camera (Canon EOS 5D Mark III equipped with an EF 100 mm f/2.8 lens). Table 6 summarizes the results of tests conducted at the Dept. of Architecture in Bologna to compare Relio2 lights to the previously adopted solutions. The ΔE00 max. values are also documented, as they report the highest variation observable within a single color patch of the chart.
Table 7 presents the evolution of the statistical measures of ∆E00 and ∆L (the difference between measured and expected lightness values), considering the improvements of the stands in features and build quality. Starting from 2014, when custom prototypes were still non-existent, ∆E00 and ∆L showed better and better values, following a linear progression that confirmed the quality of the enhancements introduced in the prototypes; all results were estimated using the same SHAFT version to keep them comparable. Several camera devices were used over the years: the Rencay DiRECT Camera System 24k3 (image resolution 13,000 px × 8000 px and sensor size 72 mm × 118 mm, exploiting a trilinear RGB architecture), the Hasselblad H6D-400C (image resolution 11,600 px × 8700 px and sensor size 53.4 mm × 40.0 mm), and the Hasselblad H2D-100C (image resolution 11,656 px × 8742 px and sensor size 43.8 mm × 32.9 mm). Since the quality of the various sensors is very similar (over the years the single-pixel accuracy improved but the size of the pixels decreased, giving very close results), the measurement mainly reflects the difference in quality of the light source.
In case study (D), the color accuracy of a Nikon D5200 (image resolution 6000 px × 4000 px and sensor size 23.5 mm × 15.6 mm) and an iPhone X smartphone (image resolution 4032 px × 3024 px and sensor size 4.92 mm × 3.69 mm) was evaluated. The results (Table 8) showed full comparability between the color obtained with the smartphone and with the DSLR camera, with color differences hardly distinguishable by the human eye (i.e., the ΔE00 mean errors of the SLR and smartphone cameras differ by less than 1).

3.3. Normal Maps Improvements

The quality improvement in normal maps was measured for case study (C). The nominal resolution of the normal maps presented in Figure 15 is 1182 pixels per inch (ppi), evaluated with the equation [51]:
PD = (Sr · FL · Uf) / (D · Sw)
where:
  • PD = Pixel density (in ppi)
  • Sr = Sensor width resolution (in pixel): 23,200
  • FL = Focal length of the lens system (in mm): 120
  • Uf = Unit conversion factor: 25.4 for PD in inch (ppi)
  • D = Camera distance to the painting (mm): 1120
  • Sw = Camera sensor width (mm): 53.4
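Plugging the listed values into the equation reproduces the stated 1182 ppi; the function name below is a hypothetical helper, not part of the described software.

```python
def pixel_density_ppi(sensor_px, focal_mm, distance_mm, sensor_width_mm):
    """PD = (Sr * FL * Uf) / (D * Sw), with Uf = 25.4 mm per inch."""
    return sensor_px * focal_mm * 25.4 / (distance_mm * sensor_width_mm)

# values from the text: Sr = 23,200 px, FL = 120 mm, D = 1120 mm, Sw = 53.4 mm
pd = pixel_density_ppi(23200, 120, 1120, 53.4)  # ~1182 ppi
```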
The traditional solution, based on Woodham [27] and on four lights inclined at 45° [52], achieves results as in Figure 15a. Exploiting the developed software nLights, which accurately detects lighting direction and position, improvements can be observed: Figure 15b shows the result with four lights, again at 45°; Figure 15c shows the result achieved by introducing a second row of light sources on the arms of the repro stand, inclined at 15° to the acquisition plane.
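The core of Woodham-style photometric stereo is a per-pixel least-squares solve of I = L·(ρn) for the albedo-scaled normal. The sketch below is a minimal illustration of that principle, not a reimplementation of nLights (which additionally estimates light direction and position from the images).

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Woodham-style photometric stereo: given a stack of images of the same
    surface under known directional lights, recover per-pixel unit normals
    and albedo by least squares on I = L @ (rho * n)."""
    n_img, h, w = images.shape
    I = images.reshape(n_img, -1)               # (n_img, pixels) intensities
    L = np.asarray(light_dirs, dtype=float)     # (n_img, 3) unit light directions
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # (3, pixels): albedo-scaled normals
    rho = np.linalg.norm(G, axis=0)             # per-pixel albedo
    n = np.where(rho > 0, G / np.maximum(rho, 1e-12), 0.0)
    return n.reshape(3, h, w), rho.reshape(h, w)
```

Adding a second ring of lights at 15°, as described above, simply extends `light_dirs` and the image stack: more, better-conditioned equations per pixel, hence less noisy normals.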
Figure 15. Normal maps as extracted by commercial software for texture maps production (detail in (a)), and normal maps automatically extracted by nLights software: results from 4 pictures with 45° lights only (detail in (b)), and normal map from 8 pictures with 45° + 15° light sources on the same subject (detail in (c)).

3.4. Dimensional Quality

The dimensional quality was measured for the case study (D). The Porcupinefish featured highly specular skin, and tiny details, while the Globe by astronomer Horn d’Arturo presented a highly reflective regular surface and a considerable Fresnel effect (Figure 16).
The 3D test field device was used for smartphone calibration with RAD coded targets. The center of each RAD target is reconstructed by more than eight rays to enhance the accuracy, allowing the calculation of the camera model parameters.
The accuracy evaluation of the 3D shape results was carried out using the mentioned Nikon D5200 camera as ground truth. In both models (reference and comparison data), the average Ground Sample Distance (GSD) of the images was approximately 1 mm.
Table 9 reports the results of the evaluation of the photogrammetric performance for both the Nikon D5200, equipped with an 18 mm focal length lens, and iPhone X, with a 4 mm focal length lens. The results concerning the number of oriented images and mean reprojection error of the Bundle Adjustment for the two cameras are comparable. The number of points collected in the dense point cloud is different but congruent with the number of pixels in the image set: those acquired by Nikon D5200 doubled those in the iPhone X camera image set.
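The GSD quoted above follows from the camera geometry: the object-space size of one pixel is the pixel pitch scaled by the distance-to-focal-length ratio. The sketch below illustrates this; the 4596 mm shooting distance is an assumed value chosen only to reproduce the ~1 mm GSD of the Nikon D5200 configuration, as the actual distances are not stated in the text.

```python
def ground_sample_distance(distance_mm, focal_mm, sensor_width_mm, image_width_px):
    """GSD: object-space size covered by one pixel, via similar triangles."""
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return distance_mm * pixel_pitch_mm / focal_mm

# Nikon D5200: 23.5 mm sensor width, 6000 px, 18 mm lens;
# an assumed ~4.6 m distance yields the ~1 mm GSD reported in the text
gsd = ground_sample_distance(4596, 18, 23.5, 6000)
```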

3.5. Visual Outcomes

Case study (B) required the replication of quasi-planar surfaces and complex materials, such as parchment, different inks, and gold foil on the manuscript’s pages. The solution exploited photometric stereo techniques to obtain the 3D shape, the specular and diffuse reflection, and the mesostructure reconstruction. The camera used was a Canon EOS 5D Mark III (image resolution 5760 px × 3840 px and sensor size 36 mm × 24 mm). The visual results are presented in Figure 17, where a traditional 2D picture is compared with the 3D model built following the introduced pipelines and using the presented equipment. Case study (C), a painting surrounded by wooden frames, required the combination of photogrammetry and photometric stereo techniques to reproduce the artwork three-dimensionally with all its reflectance properties. A Canon EOS 5D Mark III camera was used to acquire the frame, while a Hasselblad H6D-400C Multishot was used for the painted surfaces.
Figure 18 presents the acquisition plan to replicate the painting’s reflectance properties and mesostructure, exploiting photometric stereo techniques with the repro stand prototype for vertical acquisitions. The whole painting was subdivided into portions whose resolution was 25 µm, as planned. The area covered by each picture was 500 mm × 375 mm, with a 20% overlap between images. The custom-built stand was moved according to the plan along the horizontal and vertical directions, while planar alignment during movements was ensured by estimating the rig/painting distance through stand-fixed laser distance meters.
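Given the painting size (2380 mm × 2340 mm), the per-shot coverage (500 mm × 375 mm), and the 20% overlap, the number of required camera stations per axis can be estimated as below. This is only a back-of-the-envelope check under those stated figures; the actual grid in Figure 18 may differ.

```python
import math

def stations_along_axis(artwork_mm, footprint_mm, overlap=0.2):
    """Number of camera stations along one axis: each step advances by the
    footprint minus the chosen fractional overlap, plus the first station."""
    step = footprint_mm * (1.0 - overlap)
    return max(1, math.ceil((artwork_mm - footprint_mm) / step) + 1)

cols = stations_along_axis(2380, 500)  # horizontal stations
rows = stations_along_axis(2340, 375)  # vertical stations
total_shots = cols * rows
```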
A detailed description of this case study is in [53], while a result of the visual accuracy and detail reached in replicating materials is reported in Figure 19.

3.6. Costs

Costs are assessed within the context of case study (C), focusing on the equipment required for the painting acquisition. The reproduction stand designed for artworks captured in a vertical orientation costs approximately €9000. This all-inclusive figure accounts for the Manfrotto stand, the aluminum profiles, the ABS 3D-printed components, the energy consumed during production, all of the necessary hardware, wiring, switches, 32 Relio2 lights, and labor costs. In contrast, several commercially available alternatives, though dissimilar due to their lack of calibration, start at €28,000 (commercial vendor no. 1) and go up to €35,000 (commercial vendor no. 2). Both of these options, moreover, require adaptation to the specific camera used and entail further equipment and camera calibration procedures.

3.7. Public and Scientific Successes

The equipment produced underwent rigorous testing during prominent exhibitions such as “Perfecto e Virtuale. L’Uomo Vitruviano di Leonardo” [54] in 2014; “Leonardo in Vinci, at the origin of the genius”, opened at the Museo Leonardiano in Vinci in 2019 and visited by more than 140,000 people; and, in the same year, “Leonardo, drawing anatomy” at the Museum of Palazzo Poggi in Bologna (Italy), attended by more than 12,000 people, and by more than 100,000 when the event later moved to Vinci, rebranded as “Leonardo, drawing anatomy [Reloaded]” [55]. Another exhibition, “The Leonardo’s anatomic drawing at the time of Salvator Mundi”, is open in Vinci (24 June–23 September 2023), hosting three more drawings by Leonardo replicated with the presented framework of tools [56].
The results on the Annunciation were part of the exhibit “Masaccio e Angelico. Dialogo sulla verità nella pittura”, open at the Museum of the Basilica of Santa Maria delle Grazie in San Giovanni Valdarno from 17 September 2022 to 5 February 2023 [57]. The 3D model, presented on a 4K touch screen and integrated with interactive captions in a guided tour, was actively part of the exhibit and is now a permanent installation at the museum. The exhibit “Dall’Alma Mater al Mondo. Dante at the University of Bologna”, held in 2021 at the Biblioteca Universitaria di Bologna (BUB), also hosted three replicas of pages belonging to the manuscript case study. All of these exhibitions attracted a substantial number of visitors, whose feedback highlighted the replicas’ success in conveying the intricate details of these artifacts.
The results of the applied methods and tools were also presented, upon invitation, during many important events at renowned museum institutions, such as two editions of the 2and3D Photography Conference organized by the Rijksmuseum in Amsterdam (2019 and 2021), and the two-day international academic conference Leonardo da Vinci’s Papers: Invention and Reconstruction, held at the British Library in London in May 2023.

4. Conclusions

This paper presented a critical overview of the self-production of equipment for creating digital replicas of cultural artifacts, such as ancient drawings, manuscripts, paintings, and other works of art. It demonstrated how the introduction of new and more accessible technologies, such as 3D printing, and the shift from specialized instruments to general-purpose devices such as smartphones make it possible to create workflows better suited to Cultural Heritage professionals. These advancements yield even more favorable outcomes compared to the traditional approach of relying on dedicated commercial equipment tailored for specific purposes.
Section 2 presented the equipment solutions supported by software, encompassing both interconnected and custom-built options, along with their distinctive attributes and the methods employed for assessing their efficiency, economic viability, and user-friendliness. Their detailed description was preceded by an overview of general digitization pipelines, providing context for situating these solutions within broader workflow frameworks and contextualizing the new operation modes.
In Section 3, the experimental results were described with reference to the specific case studies upon which the custom-made tools were assessed. These results also show a temporal dimension, illustrating how the developed equipment adheres to specific methodological guidelines, thereby facilitating consistent enhancements across all of the parameters that signify its quality (footprint of the acquisition area and on-field setup time for the acquisition set; color accuracy; quality of the normal maps for the 3D mesostructure reconstruction; dimensional quality achieved using general-purpose devices for the 3D acquisition; outcomes from the developed pipeline on different types of objects; costs; public and scientific successes of the outputs).
The broad selection of case studies showcases how the tools achieved a level of maturity and reliability that makes them applicable across a wide range of scenarios.
In summary, the outcomes highlight new possibilities for museums and cultural institutions seeking precise, budget-friendly, and user-friendly equipment and workflows for digitizing their collections.
While the suggested solutions may not be final, they open a path that can drive further exploration in the years ahead.
Planned future works include:
  • The full elimination of commercial components,
  • A modular design using a limited number of distinct parts with the goal to enhance the production standardization, and to reduce costs and design time,
  • The construction of stands tailored to specific cameras, with the aim to mitigate common issues related to parts that unintentionally shift the camera or fail to ensure precise positioning and orientation, challenges frequently encountered using conventional stands.
In general, future advancements in equipment construction along the path described here, together with the rapidly evolving technological landscape shaped by the introduction of Artificial Intelligence (AI), will bring new opportunities and complexities to the digitization of artworks.

Author Contributions

Conceptualization, S.G. and M.G.; methodology, S.G. and M.G.; equipment, G.B. and M.B.; investigation, G.B., M.B., M.G. and S.G.; data curation, S.G. and M.B.; writing—original draft preparation, S.G. and G.B.; writing—review and editing, S.G. and M.G.; supervision, M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Fabrizio Ivan Apollonio, Andrea Ballabeni and Filippo Fantini for being pro-active parts during the development of the research. Authors also thank Alfredo Liverani for the support in the mechanical design of prototypes and their structural assessment, Davide Giaffreda and Davide Prati for the logistics and building support for physical devices.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Addison, A.C.; Gaiani, M. Virtualized architectural heritage: New tools and techniques. IEEE MultiMedia 2000, 7, 26–31. [Google Scholar] [CrossRef]
  2. Masciotta, M.G.; Sanchez-Aparicio, L.J.; Oliveira, D.V.; Gonzalez-Aguilera, D. Integration of Laser Scanning Technologies and 360° Photography for the Digital Documentation and Management of Cultural Heritage Buildings. Int. J. Archit. Herit. 2023, 17, 56–75. [Google Scholar] [CrossRef]
  3. Chong, H.T.; Lim, C.K.; Rafi, A.; Tan, K.L.; Mokhtar, M. Comprehensive systematic review on virtual reality for cultural heritage practices: Coherent taxonomy and motivations. Multimed. Syst. 2022, 28, 711–726. [Google Scholar] [CrossRef]
  4. Phase One. Available online: https://www.phaseone.com/ (accessed on 25 July 2023).
  5. Nocerino, E.; Stathopoulou, E.K.; Rigon, S.; Remondino, F. Surface Reconstruction Assessment in Photogrammetric Applications. Sensors 2020, 20, 5863. [Google Scholar] [CrossRef] [PubMed]
  6. Matrone, F.; Grilli, E.; Martini, M.; Paolanti, M.; Pierdicca, R.; Remondino, F. Comparing Machine and Deep Learning Methods for Large 3D Heritage Semantic Segmentation. ISPRS Int. J. Geo-Inf. 2020, 9, 535. [Google Scholar] [CrossRef]
  7. Toschi, I.; Capra, A.; De Luca, L.; Beraldin, J.-A.; Cournoyer, L. On the evaluation of photogrammetric methods for dense 3D surface reconstruction in a metrological context. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2–5, 371–378. [Google Scholar] [CrossRef]
  8. Remondino, F.; Spera, M.G.; Nocerino, E.; Menna, F.; Nex, F.; Gonizzi-Barsanti, S. Dense image matching: Comparisons and analysis. In Proceedings of the IEEE Conference Digital Heritage 2013, Marseille, France, 1 November 2013; Volume 1, pp. 47–54. [Google Scholar] [CrossRef]
  9. Farella, E.M.; Morelli, L.; Grilli, E.; Rigon, S.; Remondino, F. Handling critical aspects in massive photogrammetric digitization of museum assets. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, XLVI-2/W1-2022, 215–222. [Google Scholar] [CrossRef]
  10. Porter, S.; Roussel, M.; Soressi, M. A Simple Photogrammetry Rig for the Reliable Creation of 3D Artifact Models in the Field: Lithic Examples from the Early Upper Paleolithic Sequence of Les Cottés (France). Adv. Archaeol. Pract. 2016, 4, 71–86. [Google Scholar] [CrossRef]
  11. Sun, T.; Xu, Z.; Zhang, X.; Fanello, S.; Rhemann, C.; Debevec, P.; Tsai, Y.T.; Barron, J.T.; Ramamoorthi, R. Light stage super-resolution: Continuous high-frequency relighting. ACM Trans. Graph. 2020, 39, 260. [Google Scholar] [CrossRef]
  12. Santos, P.; Ritz, M.; Tausch, R.; Schmedt, H.; Monroy, R.; De Stefano, A.; Posniak, O.; Fuhrmann, C.; Fellner, D.W. CultLab3D: On the verge of 3D mass digitization. In Proceedings of the Eurographics Workshop on Graphics and Cultural Heritage (GCH’ 14), Darmstadt, Germany, 6–8 October 2014; Eurographics Association: Goslar, Germany; pp. 65–73, ISBN 978-3-905674-63-7. [Google Scholar]
  13. Menna, F.; Nocerino, E.; Morabito, D.; Farella, E.M.; Perini, M.; Remondino, F. An open-source low-cost automatic system for image-based 3D digitization. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W8, 155–162. [Google Scholar] [CrossRef]
  14. Gattet, E.; Devogelaere, J.; Raffin, R.; Bergerot, L.; Daniel, M.; Jockey, P.; De Luca, L. A versatile and low-cost 3D acquisition and processing pipeline for collecting mass of archaeological findings on the field. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-5/W4, 299–305. [Google Scholar] [CrossRef]
  15. DXO Mark. Available online: https://www.dxomark.com/disruptive-technologies-mobile-imaging-taking-smartphone-cameras-next-level (accessed on 25 July 2023).
  16. Han, W.; Shin, J.; Ho Shin, J. Low-cost, open-source contact angle analyzer using a mobile phone, commercial tripods and 3D printed parts. HardwareX 2022, 12, 1175–1185. [Google Scholar] [CrossRef] [PubMed]
  17. Sylwan, S.; MacDonald, D.; Walter, J. Stereoscopic CG camera rigs and associated metadata for cinematic production. In Proceedings of the SPIE 7237, Stereoscopic Displays and Applications XX, 72370C, San Jose, CA, USA, 14 February 2009. [Google Scholar] [CrossRef]
  18. Lee, S.H.; Lee, S.J. Development of remote automatic panorama VR imaging rig systems using smartphones. Clust. Comput. 2018, 21, 1175–1185. [Google Scholar] [CrossRef]
  19. Apollonio, F.I.; Gaiani, M.; Garagnani, S. Visualization and Fruition of Cultural Heritage in the Knowledge-Intensive Society: New Paradigms of Interaction with Digital Replicas of Museum Objects, Drawings, and Manuscripts. In Handbook of Research on Implementing Digital Reality and Interactive Technologies to Achieve Society 5.0; Ugliotti, F.M., Osello, A., Eds.; IGI Global: Hershey, PA, USA, 2022; pp. 471–495. [Google Scholar] [CrossRef]
  20. Apollonio, F.I.; Fantini, F.; Garagnani, S.; Gaiani, M. A Photogrammetry-Based Workflow for the Accurate 3D Construction and Visualization of Museums Assets. Remote Sens. 2021, 13, 486. [Google Scholar] [CrossRef]
  21. Apollonio, F.I.; Foschi, R.; Gaiani, M.; Garagnani, S. How to Analyze, Preserve, and Communicate Leonardo’s Drawing? A Solution to Visualize in RTR Fine Art Graphics Established from “the Best Sense”. ACM J. Comput. Cult. Herit. 2021, 14, 1–30. [Google Scholar] [CrossRef]
  22. Ackermann, J.; Goesele, M. A Survey of Photometric Stereo Techniques. Found. Trends Comput. Graph. Vis. 2015, 9, 149–254. [Google Scholar] [CrossRef]
  23. Horn, B.K.P. Obtaining shape from shading information. In The Psychology of Computer Vision; Winston, P.H., Ed.; McGraw-Hill: New York, NY, USA, 1975; pp. 115–155. ISBN 978-0070710481. [Google Scholar]
  24. Horn, B.K.P.; Sjoberg, R.W. Calculating the reflectance map. Appl. Opt. 1979, 18, 1770–1779. [Google Scholar] [CrossRef]
  25. Ikeuchi, K.; Horn, B.K.P. An application of the photometric stereo method. In Proceedings of the 6th International Joint Conference on Artificial Intelligence, Tokyo, Japan, 20 August 1979; ISBN 978-0-934613-47-7. [Google Scholar]
  26. Silver, W.M. Determining Shape and Reflectance Using Multiple Images. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1980. [Google Scholar]
  27. Woodham, R.J. Photometric method for determining surface orientation from multiple images. Opt. Eng. 1980, 19, 139–144. [Google Scholar] [CrossRef]
  28. Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. Nerf: Representing scenes as neural radiance fields for view synthesis. Commun. ACM 2021, 65, 99–106. [Google Scholar] [CrossRef]
Figure 1. The 2D and 3D pipelines, highlighting the phases where new software applications are used.
Figure 2. Comparative SPD emission charts provided by the manufacturers, clearly showing the continuous visible-light spectrum of each tested source (from left to right: Osram, Godox, and Relio2).
Figure 3. 3D printed components in ABS to support the Relio2 lights.
Figure 4. The first prototype for horizontal acquisitions.
Figure 5. The first prototype: the bill of materials for the custom-built parts (dimensions in mm, left) and the alignment gauge (right).
Figure 6. The darkening system to minimize erratic reflections and stray light as it evolved over the years: the latest (bottom) version guarantees better coverage and is easier to assemble.
Figure 7. The second improved prototype for horizontal acquisitions, with the raised frame.
Figure 8. The second improved prototype: the bill of materials for the custom-built parts (dimensions in mm).
Figure 9. The aluminum frame, designed on the basis of structural simulations performed with a FEM solver.
Figure 10. The prototype custom stand for vertical acquisitions.
Figure 11. Our calibrated turntable and the foldable 3D test-field plate.
Figure 12. MTF evaluation during the calibration step: a laser-leveled orientation to check planarity.
Figure 13. The extension of the acquisition area as it evolved over the years: top left, the large space required in Venice (2014); top right, an improved setup in Florence (2018); bottom, from left to right, Milan (2019 and 2022).
Figure 14. Comparison between the solutions used in 2014 (a) and in 2022 (b), with their set of components.
Figure 16. The Porcupinefish (left) and the Globe by astronomer Horn d’Arturo (right) as reproduced using the developed tools (images courtesy of Filippo Fantini).
Figure 17. Comparative results in the reproduction of parchments, inks, and gold foils: on the left, a traditional static 2D picture; on the right, the generated 3D model, in which the whole geometry and the optical properties of each material are better replicated and can be dynamically visualized under different light directions.
Figure 18. The acquisition plan using the vertical movable rig, considering the spaces and distances needed to reach the desired resolution according to the camera features (different colors indicate different shots).
Figure 19. The visualization of the Annunciation. The whole 3D model (a) can be dynamically zoomed in and rotated to show how materials respond to light coming from different angles, such as the gold foil reflectance (b) or the vibrant colors in one of the scenes in the predella (c).
Table 1. Components used to assemble the first prototype and their nature.

| Component | Commercial | Custom Made |
|---|---|---|
| Camera system | Rencay DiRECT Camera System 24k3 camera equipped with a Rodenstock Apo Macro Sironar Digital 120 mm, f/5.6 lens | - |
| Column stand | Manfrotto 809 Salon 230 | - |
| Light system | 16 Relio2 single LED lamps (gathered in 4 groups consisting of 4 lights each) | - |
| Light support | - | Custom 3D printed joints placed on four detachable arms using 20 mm × 20 mm × 1 mm hollow aluminum extrusions |
| Flat acquisition surface | - | 900 mm × 650 mm × 32 mm medium-density panels, with laser engraved reference system |
Table 2. Components used to assemble the second prototype and their nature.

| Component | Commercial | Custom Made |
|---|---|---|
| Camera system | Hasselblad H6D-400C multi-shot camera | - |
| Column stand | Lupo Repro 3, equipped with a Manfrotto 410 geared head | Modified with reinforced welded steel sheet base and made sturdier with additional bracing elements attached to the structure |
| Light system | 16 Relio2 single LED lamps (gathered in 4 groups consisting of 4 lights each) | - |
| Light support | - | Custom 3D printed joints placed on four detachable arms using 20 mm × 20 mm × 1 mm hollow aluminum extrusions |
| Flat acquisition surface | - | 900 mm × 650 mm × 32 mm medium-density panels, with laser engraved reference system |
| Darkening system | - | Introduced with a guide at the top of the arms to hold a black drape |
Table 3. Components used to assemble the vertical prototype and their nature.

| Component | Commercial | Custom Made |
|---|---|---|
| Camera system | Hasselblad H6D-400C multi-shot camera | - |
| Column stand | Manfrotto 809 Salon 230 | - |
| Light system | 32 Relio2 single LED lamps (gathered in 8 groups consisting of 4 lights each) | - |
| Light support | - | Custom 3D printed joints placed on four arms made of 12 hollow aluminum extrusions, 20 mm × 20 mm × 1 mm each |
| Flat calibration surface | - | Medium-density panel with laser engraved reference system, hosted on a vertical aluminum frame |
Table 4. The general response of the systems to an inclined vertical edge, without image sharpening.

| Value | Camera | Prototype for Horizontal Acquisitions | Prototype for Vertical Acquisitions |
|---|---|---|---|
| MTF50 | Hasselblad H6D-400C | 0.1300 | 0.165 |
| MTF50 | Hasselblad X2D-100C | 0.119 | 0.308 |
| MTF10 | Hasselblad H6D-400C | 0.228 | 0.277 |
| MTF10 | Hasselblad X2D-100C | 0.119 | 0.556 |
| Efficiency | Hasselblad H6D-400C | 0.314 | 0.662 |
| Efficiency | Hasselblad X2D-100C | 0.475 | 0.754 |
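The MTF50 and MTF10 values above come from slanted-edge measurements of the kind standardized in ISO 12233. As a hedged illustration of the underlying idea only (not the authors' actual tool chain), the sketch below estimates MTF50 from a one-dimensional edge-spread function: differentiate the edge to obtain the line-spread function, take its normalized Fourier magnitude, and interpolate the frequency where the response falls to 0.5. The synthetic Gaussian-blurred edge and all names are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

def mtf50_from_edge(esf, dx=1.0):
    """Estimate MTF50 (cycles/pixel) from a 1D edge-spread function.

    Simplified stand-in for the ISO 12233 slanted-edge method: no edge
    binning or oversampling, just ESF -> LSF -> |FFT| -> 50% crossing.
    """
    lsf = np.diff(esf)                        # line-spread function
    lsf = lsf * np.hanning(len(lsf))          # window against spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                        # normalize DC response to 1
    freqs = np.fft.rfftfreq(len(lsf), d=dx)   # cycles per pixel
    i = int(np.argmax(mtf < 0.5))             # first sample below 0.5
    # linear interpolation between the samples bracketing MTF = 0.5
    f0, f1, m0, m1 = freqs[i - 1], freqs[i], mtf[i - 1], mtf[i]
    return f0 + (0.5 - m0) * (f1 - f0) / (m1 - m0)

# Synthetic edge blurred by a Gaussian PSF (sigma = 2 px); the analytic
# MTF50 of a Gaussian LSF is sqrt(ln 2 / 2) / (pi * sigma) ~ 0.094 c/px.
x = np.arange(256) - 128
sigma = 2.0
esf = np.array([0.5 * (1 + erf(xi / (sigma * sqrt(2)))) for xi in x])
f50 = mtf50_from_edge(esf)  # close to the analytic value, within discretization error
```

A sharper edge (smaller sigma) yields a higher MTF50, which is the intuition behind comparing the two prototypes in the table.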
Table 5. The evolution over time of the main sizes of the prototypes and their usage area, with the time necessary to perform the acquisitions.

| | 2014 | 2018 | 2019 | 2021 | 2022 |
|---|---|---|---|---|---|
| Usability area (in m) | ~4 × 4 × 3 | ~2.5 × 3 × 2.8 | ~2.5 × 3 × 2.8 | ~1.5 × 1.5 × 2.8 | ~1.5 × 1.2 × 1.6 |
| Setup time for the stand | 2 h | 1 h | 30 min | 30 min | 30 min |
| Setup time for lights | 1 h | 1 h | Embedded | Embedded | Embedded |
| Setup time for camera | 5 h | 30 min | Embedded | Embedded | Embedded |
| MTF + Color checker | 50 min | 30 min | 15 min | 10 min | 5 min |
| White image (flat fielding) | 30 min | 15 min | 8 min | 8 min | 5 min |
| Acquisition time (per shot) | 20 min | 8 min | 2 min | 2 min | <1 min |
| Dismantling time | 3 h | 1 h | 20 min | 15 min | 15 min |
| Total time (all shots) | 14 h | 7 h | 6 h | 4.5 h | 3 h |
Table 6. Results for the use of different light sources using the hardware and software tools developed.

| | Osram Fluorescent Lamps 1 | Godox LED Lamps 2 | Relio2 LED Lamps 3 |
|---|---|---|---|
| ΔE00 mean | 1.47 | 1.17 | 1.05 |
| ΔE00 max | 3.5 | 3.3 | 2.5 |

1 Two Lunarea LF-A220 illuminators with six 55W Osram Studioline fluorescent lamps each. 2 Two Godox SL150III VideoLight LED illuminators. 3 16 Relio2 LED lamps in groups of four, at the four sides of the rectangular capturing area.
Table 7. The evolution of the color accuracy with the adoption of the custom-built systems.

| Year | Prototype | General Results 1 |
|---|---|---|
| 2014 | First prototype for ancient drawings (horizontal) | ∆E00 mean = 1.34; ∆L mean = 0.14 |
| 2018 | Second prototype for ancient drawings (horizontal) | ∆E00 mean = 1.31; ∆L mean = 0.19 |
| 2019 | Second prototype for ancient drawings (horizontal, darkened) | ∆E00 mean = 1.33; ∆L mean = 0.21 |
| 2021 | Second prototype for ancient drawings (horizontal, darkened) | ∆E00 mean = 0.95; ∆L mean = 0.19 |
| 2022 | Second prototype for ancient drawings (horizontal, darkened) | ∆E00 mean = 0.94; ∆L mean = 0.19 |
| 2022 | Prototype for vertical paintings | ∆E00 mean = 0.85; ∆L mean = 0.18 |

1 Color accuracy determined with the CIE ∆E00 formula, using a Calibrite ColorChecker Classic.
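The ΔE00 figures reported in Tables 6 and 7 are CIEDE2000 color differences between measured and captured CIELAB values. For readers who wish to reproduce such numbers, here is a compact, self-contained Python implementation of the CIEDE2000 formula for two CIELAB triplets. It is a generic sketch of the standard formula (with all parametric factors set to 1), not the authors' SHAFT pipeline.

```python
import math

def delta_e_2000(lab1, lab2):
    """CIEDE2000 color difference between two (L*, a*, b*) triplets."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    Cbar = (C1 + C2) / 2
    # Chroma-dependent correction of the a* axis
    G = 0.5 * (1 - math.sqrt(Cbar**7 / (Cbar**7 + 25**7)))
    a1p, a2p = (1 + G) * a1, (1 + G) * a2
    C1p, C2p = math.hypot(a1p, b1), math.hypot(a2p, b2)
    h1p = math.degrees(math.atan2(b1, a1p)) % 360
    h2p = math.degrees(math.atan2(b2, a2p)) % 360
    dLp, dCp = L2 - L1, C2p - C1p
    # Hue difference, wrapped into (-180, 180]
    if C1p * C2p == 0:
        dhp = 0.0
    else:
        dhp = h2p - h1p
        if dhp > 180:
            dhp -= 360
        elif dhp < -180:
            dhp += 360
    dHp = 2 * math.sqrt(C1p * C2p) * math.sin(math.radians(dhp / 2))
    Lbp, Cbp = (L1 + L2) / 2, (C1p + C2p) / 2
    # Mean hue, handling the circular wrap-around
    if C1p * C2p == 0:
        hbp = h1p + h2p
    else:
        hbp = (h1p + h2p) / 2
        if abs(h1p - h2p) > 180:
            hbp += 180 if hbp < 180 else -180
    T = (1 - 0.17 * math.cos(math.radians(hbp - 30))
           + 0.24 * math.cos(math.radians(2 * hbp))
           + 0.32 * math.cos(math.radians(3 * hbp + 6))
           - 0.20 * math.cos(math.radians(4 * hbp - 63)))
    dtheta = 30 * math.exp(-(((hbp - 275) / 25) ** 2))
    RC = 2 * math.sqrt(Cbp**7 / (Cbp**7 + 25**7))
    SL = 1 + 0.015 * (Lbp - 50) ** 2 / math.sqrt(20 + (Lbp - 50) ** 2)
    SC = 1 + 0.045 * Cbp
    SH = 1 + 0.015 * Cbp * T
    RT = -math.sin(math.radians(2 * dtheta)) * RC
    return math.sqrt((dLp / SL) ** 2 + (dCp / SC) ** 2 + (dHp / SH) ** 2
                     + RT * (dCp / SC) * (dHp / SH))

# A one-step L* difference between two grays is very close to 1.
print(round(delta_e_2000((50.0, 0.0, 0.0), (51.0, 0.0, 0.0)), 3))  # 0.999
```

A mean ΔE00 around or below 1, as in the later rows of Table 7, is commonly taken as a difference that is barely perceptible to a trained observer.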
Table 8. Apple iPhone X vs. Nikon D5200—ΔE00 evaluation.

| | Camera | ΔE00 Mean | ΔE00 Max | ΔL | Exposure Error (f-stops) |
|---|---|---|---|---|---|
| Porcupinefish | iPhone X | 3.67 | 8.11 | 2.52 | −0.04 |
| Porcupinefish | Nikon D5200 | 2.79 | 6.79 | 1.72 | −0.03 |
| Horn d’Arturo’s Globe | iPhone X | 3.05 | 7.38 | 2.02 | −0.10 |
| Horn d’Arturo’s Globe | Nikon D5200 | 2.47 | 6.43 | 1.43 | −0.01 |
Table 9. iPhone X vs. Nikon D5200 photogrammetric process results.

| | Porcupinefish (iPhone X) | Porcupinefish (Nikon D5200) | Horn d’Arturo’s Globe (iPhone X) | Horn d’Arturo’s Globe (Nikon D5200) |
|---|---|---|---|---|
| Mean BA reprojection error (px) | 0.6608 | 0.4433 | 0.5703 | 0.4929 |
| Numb. oriented images | 141/141 | 141/141 | 76/76 | 76/76 |
| Observations | 220,568 | 352,664 | 222,170 | 344,729 |
| Points | 50,390 | 90,115 | 42,881 | 60,341 |
| Numb. 3D points dense matching | 1,815,027 | 3,216,005 | 1,541,992 | 1,645,099 |
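The mean bundle-adjustment reprojection error in Table 9 is the average 2D distance, in pixels, between each observed image point and the reprojection of its triangulated 3D point. As a minimal sketch of that metric for a single pinhole camera (no lens distortion; all names and the toy setup are illustrative, not the authors' photogrammetric software), it could be computed as follows:

```python
import numpy as np

def mean_reprojection_error(K, R, t, points3d, observations):
    """Mean pixel distance between observed 2D points and the
    projections of their 3D counterparts (pinhole model)."""
    Xc = R @ points3d.T + t.reshape(3, 1)   # world -> camera coordinates
    x = K @ Xc                              # apply camera intrinsics
    proj = (x[:2] / x[2]).T                 # perspective division -> pixels
    return float(np.mean(np.linalg.norm(proj - observations, axis=1)))

# Toy setup: identity pose, focal length 1000 px, principal point (500, 500).
K = np.array([[1000.0, 0.0, 500.0],
              [0.0, 1000.0, 500.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
X = np.array([[0.0, 0.0, 10.0], [1.0, 1.0, 10.0]])
uv_exact = np.array([[500.0, 500.0], [600.0, 600.0]])
err = mean_reprojection_error(K, R, t, X, uv_exact)  # 0.0 for exact observations
```

Sub-pixel values such as those in Table 9 indicate that, on average, the adjusted camera model predicts the measured image points to within half a pixel.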
Bacci, G.; Bozzola, M.; Gaiani, M.; Garagnani, S. Novel Paradigms in the Cultural Heritage Digitization with Self and Custom-Built Equipment. Heritage 2023, 6, 6422-6450. https://doi.org/10.3390/heritage6090336