Article

Integration of Hyperspectral Imaging and Robotics: A Novel Approach to Analysing Cultural Heritage Artefacts

1 Centre for Cultural Heritage Technology, Italian Institute of Technology, Via Adriano Olivetti 1, 31056 Roncade, Italy
2 Industrial Robotics Facility, Italian Institute of Technology, Via Greto di Cornigliano, 6/R, 16152 Genoa, Italy
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Heritage 2025, 8(10), 417; https://doi.org/10.3390/heritage8100417
Submission received: 20 June 2025 / Revised: 28 August 2025 / Accepted: 3 September 2025 / Published: 3 October 2025
(This article belongs to the Section Digital Heritage)

Abstract

This paper pioneers the integration of hyperspectral imaging and robotics for the automated analysis of cultural heritage, representing a measurable advancement over existing manually operated systems. For the first time in the cultural heritage domain, a compact push-broom hyperspectral camera working in the VNIR range has been successfully mounted on a robotic arm, enabling precise and repeatable acquisition trajectories without the need for manual intervention. Unlike traditional approaches that rely on fixed paths or manual repositioning, the proposed approach allows dynamic and programmable imaging of both planar and volumetric objects, greatly improving adaptability to complex geometries. The integrated system achieves spectral reliability comparable to established manual methods, while offering superior flexibility and scalability. Current limitations, particularly regarding the illumination setup, are discussed alongside planned optimisation strategies.

1. Introduction

Over the years, hyperspectral imaging has emerged as one of the most versatile tools for the non-invasive investigation of movable and immovable cultural heritage objects. This technique is based on the acquisition of many images sampled across a portion of the electromagnetic spectrum; the result is a three-dimensional data structure (often referred to as a data cube) consisting of two spatial dimensions and one spectral dimension. In this way, a full spectrum is obtained for each pixel of the image, revealing detailed information about the materials and their distribution across the surface of the artefact [1,2]. Although an in-depth review of hyperspectral imaging applications is beyond the scope of this study, it is worth noting the significant growth in the use of this technique in the field of cultural heritage over the past decade, particularly in combination with increasingly innovative and portable acquisition systems and setups specifically tailored to the diverse needs of the cultural heritage sector [1,3]. These advancements address the sector's unique challenges, which stem from the wide variety of artefacts in terms of shape, size, material, and mobility. Hyperspectral imaging is now applied to an extensive range of artistic and archaeological objects, from thin, fragile artefacts such as handwritten documents, paintings, cinematographic films, and textiles, to large surfaces such as wall paintings on various supports [4,5,6,7,8]. More recently, applications have also expanded to volumetric objects, such as bas-relief sculptures, ceramics, and metal artefacts [9,10,11].
To meet the technical demands of acquiring data from such a wide range of artefacts, several imaging solutions have been developed, with push-broom (or line-scanning) systems being the most widely adopted. This type of camera records the data cube by capturing one spatial line of pixels at a time by means of a two-dimensional detector (focal plane array) and a light-dispersive element (prism or grating), thus requiring linear or two-axis scanning systems [5,12,13,14,15,16] or tripods equipped with rotating or translating stages [17,18] to build the hyperspectral image. Accordingly, the acquisition can be carried out either by moving the object while keeping the camera stationary or by moving the camera around the object. Consequently, the data cube has one fixed spatial dimension (across-track, perpendicular to the camera movement), while the other (along-track) depends on the size of the object being analysed [1]. More recently, the need for more compact and lightweight cameras has led to the development of push-broom cameras with integrated scanning mechanisms [19,20,21], as well as systems based on Fourier transform spectrometers that capture spatial and spectral information simultaneously [22,23]. While these solutions eliminate the need for external scanning equipment, they also constrain the areas that can be analysed, as the resulting data cubes have fixed dimensions.
Despite these advances, current acquisition systems often rely on hardcoded trajectories (e.g., raster scanning) or manual repositioning of the camera, which can hinder their effectiveness when dealing with irregularly shaped objects, complex geometries, or constrained environments. These limitations underscore the need for a more adaptive and automated approach capable of dynamically adjusting to the object’s morphology and spatial configuration during scanning.
Against this backdrop, the present study explores the potential of using a robotic arm for hyperspectral imaging to automate the process, ensuring precision and consistency in data collection while providing greater flexibility in movement across space during scanning. To date, there are no documented applications of this approach in the field of cultural heritage, although the successful use of robotic arms for hyperspectral imaging is well-established in other domains such as plant phenotyping and agricultural robotics. In these areas, robotics-based hyperspectral imaging has enabled the automated, rapid, and labour-saving measurement of the morphological, chemical, and physiological properties of large quantities of plants [24,25]. Another notable application is in engineering, where hyperspectral cameras mounted on robotic arms have been used to detect anomalies in cluttered workspaces, thereby enhancing robot manipulation and task efficiency [26].
To evaluate the feasibility of such an integrated approach in the cultural heritage domain, this work presents a series of exploratory tests conducted on both planar (a painting) and volumetric (archaeological bronzes) objects using a Specim IQ (Specim Spectral Imaging Ltd., Oulu, Finland), a compact push-broom hyperspectral camera with an internal scanning system, positioned in various orientations. While still at a preliminary stage, the experiments detailed in this paper demonstrated promising results: the robotic arm showed excellent flexibility in reaching target positions during data acquisition and successfully covered relatively extensive scanning areas. The spectral data collected were reliable and consistent for all the objects studied, even when analysed in different orientations and under varying experimental conditions. Existing limitations, such as the illumination system, will be addressed in the following sections, along with the corresponding optimisation strategies, to emphasise the potential of this approach to achieve reliability equal to or better than that of more conventional methods.

2. Materials and Methods

2.1. Materials

The proposed approach was evaluated on planar and volumetric objects to gain a comprehensive understanding of the integrated system’s capabilities and challenges in different scenarios.
A test artwork was used to assess the capabilities of the robotic arm's movements across large planar surfaces, as well as the stability of the lighting system over long imaging sessions. The painting measured 51.5 × 73 cm and was characterised by a bright palette with predominant golden areas and thick brushstrokes.
Three small bronze archaeological artefacts (Figures S1–S3 in the Supplementary Material) were selected to explore the capability of obtaining good-quality hyperspectral data of volumetric objects at different angles. The artefacts were characterised by various shapes and sizes, and were covered by heterogeneous layers of red and green corrosion products, thus also serving as ideal case studies for exploring surface mapping from multiple angles.

2.2. Methods

2.2.1. Hyperspectral Imaging

The Specim IQ (Specim Spectral Imaging Ltd., Oulu, Finland), a compact push-broom hyperspectral camera, was used for the experiments. The camera requires no external scanning system (e.g., a rotating or translating stage), as the recording is performed through internal mechanisms, facilitating integration with the robotic arm and the acquisition workflow. The camera records 204 spectral bands covering the visible and near-infrared (VNIR) region between 400 and 1000 nm, with a spatial sampling of 512 pixels per line. The number of recorded lines is fixed; thus, the output is a square image with a resolution of 512 × 512 pixels. The camera has a field of view of 31° × 31°, and the effective pixel size on the object depends on the distance between the camera and the object analysed. Further technical details are given in [6,8].
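Since the field of view and the line resolution are fixed, the nominal footprint of one pixel on a flat target follows directly from the working distance. A minimal sketch of this relation (assuming a flat object perpendicular to the optical axis; the function name is illustrative, not part of the camera software):

```python
import math

def pixel_footprint_mm(distance_mm, fov_deg=31.0, pixels=512):
    """Nominal size of one pixel on the object plane, assuming a flat
    target perpendicular to the optical axis and a square field of view."""
    # Width of the imaged line at the given working distance
    swath_mm = 2 * distance_mm * math.tan(math.radians(fov_deg) / 2)
    return swath_mm / pixels

# At the 30 cm working distance later used for the painting:
print(round(pixel_footprint_mm(300), 3))  # → 0.325
```

At 30 cm this gives roughly 0.33 mm per pixel; the footprint scales linearly with distance, which is why the working distance is kept constant during each scan.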
Halogen illumination was used as the light source, as it provides a continuous spectrum spanning the entire VNIR range. To achieve a consistent light distribution during every acquisition and ensure experimental repeatability, two lamps were mounted directly on the robotic arm. A purpose-built supporting system kept the lighting sources at 45° with respect to the camera, in accordance with standard measurement geometry. Diffusion was enhanced with a temporary diffusing enclosure made from basic materials, such as aluminium foil and translucent paper. All tests were conducted in a darkened environment to eliminate interference from ambient light and ensure that the recorded spectra were determined solely by the halogen lamps.
During the experiments, all the acquisition parameters were controlled remotely through the camera's proprietary software, except for the focus, which was adjusted manually before the analysis. Flat fielding was performed through the camera's "Custom mode" option, which allows the white reference to be recorded at the beginning of each set of experiments and stored for all subsequent measurements. An acid-free EPSON Ultrasmooth Fine Art Paper sheet was used as the white reference: its dimensions ensured that the full width of the field of view was encompassed within the sheet, whereas standard reference materials are often either too small to fill the field of view or prohibitively expensive. This was crucial to compensate for variations in light intensity during the flat-fielding process. The selected paper contains no optical brighteners or other additives; its spectral signature is therefore relatively flat across the VNIR range, making it a suitable and cost-effective alternative to conventional references. To ensure that flat fielding was always conducted under the best possible conditions, the paper sheet was checked for dirt accumulation or creases between acquisitions and replaced as necessary. During the acquisitions, the exposure time was kept fixed at 60 ms for the painting and 80 ms for the bronze artefacts.
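Conceptually, the flat-fielding step reduces to a band-by-band ratio against the white reference. A minimal sketch of the standard correction (the dark-frame term and the toy values are illustrative; they do not reproduce the camera's internal procedure):

```python
def flat_field(raw, white, dark=None):
    """Band-by-band conversion of raw counts to apparent reflectance:
    R = (raw - dark) / (white - dark)."""
    if dark is None:
        dark = [0.0] * len(raw)
    return [(r - d) / (w - d) for r, w, d in zip(raw, white, dark)]

# Toy three-band example: counts from the object, the paper reference, and a dark frame
raw = [120.0, 340.0, 560.0]
white = [200.0, 500.0, 800.0]
dark = [20.0, 40.0, 60.0]
print([round(v, 3) for v in flat_field(raw, white, dark)])  # → [0.556, 0.652, 0.676]
```

Because the white reference fills the field of view, the same correction also compensates for spatial variations in lamp intensity across the scan line.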
The processed data cubes were then analysed through the Fiji distribution of the open-access software ImageJ (version 1.54p) [27].

2.2.2. Robotic Platform

The hyperspectral camera was mounted on the wrist of a UR5e robotic arm (Universal Robots) through a custom-made connector. The robotic arm comprises six rotating joints, covering a vertical area of 100 × 110 cm and a horizontal area of 70 × 80 cm, and supports a maximum payload of 5 kg. These characteristics enable exceptional flexibility in placing the hyperspectral camera at various poses (i.e., both positions and orientations), which is crucial for capturing high-quality images from different perspectives without compromising data integrity.
The robotic arm was controlled through the UR touchscreen teach pendant and the proprietary software PolyScope (version 5.11). The software provides an intuitive and user-friendly interface, allowing users to define waypoints (i.e., camera poses for each measurement) and create dedicated algorithms to automate each experimental sequence. For these tests, the moveL command was used to execute linear Cartesian trajectories between waypoints, ensuring that both the hyperspectral camera and the halogen lighting system retained a consistent spatial orientation throughout each transition. A motion speed of 0.20 m/s was configured within PolyScope to preserve mechanical stability and minimise vibrations that could affect the performance of the optical system. At each waypoint, a stop of 120 s was programmed to allow the hyperspectral sensor to complete the acquisition process safely and to verify data quality before proceeding to the following position (see Videos S1–S3 in the Supplementary Material).
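With a fixed motion speed and a fixed dwell at each waypoint, the duration of a session can be estimated up front. A rough back-of-the-envelope sketch (not PolyScope code; the 0.3 m leg length is an assumed figure for illustration, not a measured trajectory):

```python
def session_schedule(leg_lengths_m, dwell_s=120.0, speed_m_s=0.20):
    """Rough timeline of a scanning sequence: constant-speed linear moves
    between waypoints plus a fixed acquisition dwell at each waypoint
    (one more waypoint than there are legs)."""
    travel_s = [d / speed_m_s for d in leg_lengths_m]
    total_s = sum(travel_s) + dwell_s * (len(leg_lengths_m) + 1)
    return travel_s, total_s

# Six waypoints (white reference + five target areas) joined by five ~0.3 m moves:
_, total = session_schedule([0.3] * 5)
print(round(total / 60, 1))  # → 12.1
```

The dwell time dominates: travel contributes only a few seconds, which is consistent with the roughly 12 min sessions reported for six waypoints in this study.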

2.2.3. Object-Specific Imaging Workflow

In the following section, the specific workflows designed for the two classes of objects are described.
Painting. The painting was analysed in the vertical and horizontal positions (Figure 1a and Figure 2a, respectively) to evaluate and compare the system's ability to generate trajectories covering the entire surface of the painting in both orientations, the presence of geometric distortion, and the consistency of the light distribution throughout the scanning sequence. Notably, the transition between the two configurations did not require any physical modification to the setup: only the acquisition trajectory had to be redefined via the robot control software, without the need to dismantle or reposition any components. The illumination on the painting was monitored over the whole acquisition process and showed no significant variations over time, demonstrating the reliability of the developed acquisition system.
During the experiment, the camera was kept perpendicular to the painting at a constant distance of 30 cm. The white reference sheet was positioned at the same level as the painting surface, and captured at the same distance, to guarantee a correct flat-fielding process. Six waypoints were fixed in both orientations: one to record and store the white reference at the beginning of the test and five for representative areas across the painting (Figure 1b and Figure 2b), resulting in a total acquisition time of 12 min. Note that while the areas captured are roughly the same in both orientations, the acquisition order differs. In both cases, the starting point was set at the bottom right corner; however, in the vertical position the trajectory followed a Z-shape, while in the horizontal position it first covered the left and centre areas before scanning the right side. The horizontal trajectory was designed to optimise the arm's movement and avoid potential interference with the lighting support system mounted on the robotic arm. Despite these differences, all the areas of the painting were reached successfully.
For each orientation, two sets of experiments are presented as a preliminary demonstration of the system’s performance repeatability. The consistency of the results between the first and the second sets was evaluated first through visual inspection and then quantitatively by applying projective transformations [28,29,30] to estimate the extent of the variations introduced between distinct scanning sessions. For these calculations, the images acquired during the second scanning session were used as the reference to highlight differences in relation to the last experiment performed. Projective transformations were also applied to compare the results obtained from the vertical and horizontal positions, to evaluate the compatibility of the two orientations. In this case, images from the horizontal orientation were used as the reference, as their wider field of view allowed for more reliable and consistent alignment and comparison (see Section 3 for further details).
The resulting transformation matrices were subsequently utilised to generate false-colour comparison images, offering an intuitive visual representation of the observed differences. Technical details on the implementation, along with the tables reporting the values of the projective transformation matrices, are provided in the Supplementary Material (Document S1). Regarding the quality of the spectral data, the evaluation was performed by extracting at least one spectrum from each analysed area and comparing the results across all experiments. Each spectrum represents the average signal from a region of 5 × 6 pixels.
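The projective comparison amounts to mapping pixel coordinates through a 3 × 3 homography matrix, whose bottom row carries the perspective component. A minimal sketch of how such a matrix acts on image coordinates (the example matrix is a pure translation, chosen only to illustrate the small inter-session shifts discussed in Section 3):

```python
def apply_homography(H, pt):
    """Map a 2-D pixel coordinate through a 3x3 projective transformation,
    dividing by the homogeneous coordinate; the bottom row of H encodes
    the perspective component of the transformation."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# A pure translation of (5, -3) pixels: the bottom row (0, 0, 1) means no perspective distortion
H = [[1, 0, 5], [0, 1, -3], [0, 0, 1]]
print(apply_homography(H, (100, 200)))  # → (105.0, 197.0)
```

When the estimated matrix has non-zero entries in the first two positions of its bottom row, points are scaled non-uniformly across the image, which is exactly the perspective distortion quantified in the Supplementary tables.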
Archaeological bronzes. The three small bronze artefacts were positioned on a raised platform (Figure 3) to provide sufficient clearance between the lighting support and the table surface, thereby facilitating smooth movements of the robotic arm and enabling easy wrist rotation around the objects. Moreover, the hyperspectral camera mounted on the robotic arm allowed for the capture of multiple perspectives of the artefacts without requiring manual adjustment of the camera or lighting system, thus simplifying the overall acquisition process. The camera was kept at a distance of around 20 cm from the objects, calculated from their average height, in order to ensure that all the artefacts were in focus. The white reference was positioned directly on the platform, thus a little further away from the camera. For the preliminary experiments, five waypoints were fixed: as for the painting, the first waypoint was reserved for the acquisition and storage of the white reference, while the other four were adjusted to capture the bronzes from different perspectives (top, right, left, and front). Since the analysis from the front side was performed twice to account for focus issues (see Section 3.2), the total acquisition time was 12 min, as in the previous experiments.
The obtained data cubes were then cropped to eliminate non-informative areas and combined in Fiji (version 1.54p) to create a single hyperspectral image.
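Cropping and tiling data cubes in this way is a purely spatial operation on the first two axes. A minimal sketch with cubes stored as nested row/column/band lists (the layout is an assumption for illustration; in practice Fiji handles these steps interactively):

```python
def crop(cube, r0, r1, c0, c1):
    """Spatial crop of a data cube stored as cube[row][col][band]."""
    return [row[c0:c1] for row in cube[r0:r1]]

def hstack(left, right):
    """Tile two cubes with equal row counts side by side."""
    return [lr + rr for lr, rr in zip(left, right)]

# Two toy 2 x 2 x 3 cubes (2 rows, 2 columns, 3 bands each)
a = [[[1] * 3, [2] * 3], [[3] * 3, [4] * 3]]
b = [[[5] * 3, [6] * 3], [[7] * 3, [8] * 3]]
mosaic = hstack(crop(a, 0, 2, 0, 2), crop(b, 0, 2, 0, 2))
print(len(mosaic), len(mosaic[0]), len(mosaic[0][0]))  # → 2 4 3
```

The spectral axis is untouched, so spectra extracted from the combined image remain identical to those in the original acquisitions.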
Distribution maps of the bronze alteration patinas were obtained through the PoissonNMF algorithm implemented as a Fiji plug-in [31]. PoissonNMF is a blind source separation algorithm based on non-negative matrix factorisation, which was originally developed to unmix fluorescence microscopy data in biological applications. The algorithm is relatively fast and user-friendly, requiring no prior knowledge of the object under analysis. It operates by manually selecting Regions of Interest (ROIs)—i.e., areas with homogeneous colour or with known composition—directly from the data cube. Since the algorithm can only select up to 10 ROIs at a time, it is particularly effective for providing an initial approximation of pigment distribution when the number of components to identify is limited.
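PoissonNMF itself uses a Poisson noise model and ROI-seeded initialisation; as a rough illustration of the underlying idea only, the following sketches plain non-negative matrix factorisation with multiplicative updates under a Frobenius objective (a simplification of, not a substitute for, the plug-in):

```python
import random

def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def nmf(V, k, iters=500, eps=1e-9, seed=0):
    """Tiny non-negative matrix factorisation by multiplicative updates:
    V (pixels x bands) ~ W (pixels x k) @ H (k x bands). Rows of H act as
    endmember spectra, columns of W as their per-pixel abundance maps."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() for _ in range(k)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        WT = [list(c) for c in zip(*W)]
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)] for i in range(k)]
        HT = [list(c) for c in zip(*H)]
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)] for i in range(n)]
    return W, H

# Four "pixels" mixing two synthetic patina spectra over three bands
V = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.5], [0.5, 0.5, 0.5], [1.0, 1.0, 1.0]]
W, H = nmf(V, k=2)
R = matmul(W, H)
err = sum((V[i][j] - R[i][j]) ** 2 for i in range(len(V)) for j in range(3))
print("reconstruction error:", round(err, 6))
```

The multiplicative updates keep all entries non-negative by construction, which is what makes the factors physically interpretable as spectra and abundances.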
Tests were also conducted to assess the feasibility of capturing the bottom side of the artefacts, using a modified version of the setup in Figure 3. In this configuration, the black cardboard was substituted with a transparent polycarbonate panel, which slightly overhung the edge of the support. This solution allowed the robotic arm and hyperspectral camera to move beneath the panel and capture the lower surfaces of the objects.

3. Results

3.1. Painting

The areas analysed during the experiments are illustrated in Figure 4 (vertical position) and Figure 5 (horizontal position), along with the points selected for spectral comparison. The differences between the results of experiments conducted in the same orientation, as well as those between the horizontal and vertical acquisitions, are visualised in the false-colour images in Figure 6.
Regarding the analyses conducted in the vertical position, the second set of images shows a slight rightward shift and a different light distribution compared to the first set (Figure 4a). This effect is particularly evident in the left and central parts of the painting, where the golden areas appear brighter and, in some cases, oversaturated (Figure 4a, bottom-left image). These differences are well highlighted in the false-colour comparison images (Figure 6a): a green strip is visible on the left side where the first and second image sets do not overlap, and green spots indicate where the oversaturation occurs. This outcome may reflect minor variations in the positioning of the painting relative to the camera between the two sessions, likely due to subtle differences in how the easel was set up. On the other hand, the images recorded in the horizontal position (Figure 5) show no prominent differences, only a marginal shift to the left; this is confirmed by the comparison images, which show a more homogeneous grey-scale colouration (with a slight exception in the bottom-right area), indicating a higher similarity between the two images.
When comparing the results obtained in the two orientations, it can be observed that the images recorded with the painting in the vertical position are less affected by geometrical distortions and closely preserve the actual dimensions of the captured area. Conversely, the images acquired with the painting placed horizontally show a perceptible geometrical distortion appearing as a vertical stretch. These perspective differences between the two orientations are also confirmed by higher values in the last row of the transformation matrices (see Table S1c in Supplementary Material Document S1), which quantify the perspective distortion between the two images [29]. This effect may be partially attributed to the configuration of the camera mounting system: although the connector attaching the camera to the robotic arm ensures mechanical stability during the arm’s movements, the horizontal orientation might have introduced a slight tilt due to gravitational effects, subtly shifting the field of view downward. Additionally, the intrinsic characteristics of the painting may have played a role; in particular, the canvas shows a slight looseness in the bottom-left corner, which could have contributed to minor deformations due to gravity, potentially enhancing geometric distortion in the resulting images.
Despite these discrepancies related to the geometrical aspects of the images, the spectra obtained are well comparable (Figure 7). While a detailed identification of the signals is beyond the scope of this paper, the overall shapes and positions of the characteristic bands show no significant variations between the different measurements.
The main difference is a consistent, slightly higher spectral intensity in the data cubes recorded in the horizontal position, particularly for the teal and green2 points, and to a lesser extent for lilac, blue, and red. For green1 and red, the spectra taken from the first vertical measurement tend to overlap with those from the horizontal measurements; lilac shows the same behaviour, albeit with the spectra from the second vertical measurement. The yellow point exhibits the most divergent pattern of all the pigments: the spectrum taken from the first vertical measurement is significantly higher than the others. However, this behaviour is readily explained by the light distribution issue discussed in the previous paragraph, which causes the top right corner of the first vertical measurement to be more illuminated than the others.

3.2. Archaeological Bronzes

To demonstrate the benefits of integrating robotics and hyperspectral imaging for the analysis of volumetric objects, preliminary tests were performed by recording the archaeological bronzes from four perspectives: top, right, left, and front. As some objects appear at varying distances from the camera objective when viewed from the front, two acquisitions were made from this perspective, adjusting the focus accordingly (Figure 8a).
To obtain the distribution maps of the bronze alteration patinas, seven ROIs were selected to run the PoissonNMF algorithm. For the corrosion products, four reference spectra were identified and chosen by visually observing the colours and the relative spectral signature: a red patina associated with cuprite [32] and three greenish shades that can be roughly identified as copper-based patinas [32,33] (Figure 8b). In addition, three other ROIs were selected in the uncorroded bronze, the metallic reflection, and the cardboard background to ensure the accurate mapping of all components.
The maps revealed that cuprite (Figure 9a) and the pale green and dark green patinas (Figure 9c,d) appear to be present across all three artefacts, with varying degrees of thickness and distribution. On the other hand, the bright green corrosion product is found only in localised areas (Figure 9b), probably partially covered by the other alteration layers. While the exact number of endmembers, and consequently the accuracy of the maps, cannot be confirmed without additional analytical techniques, it is worth highlighting that the maps of the four components remain consistent across different acquisitions, even in areas affected by loss of focus, providing a sufficiently detailed and complete visualisation of the distribution pattern.
To assess the feasibility of acquiring hyperspectral information from all surfaces of volumetric objects, tests were conducted on two of the three bronze artefacts: the bell and the key fragment. A transparent polycarbonate panel was used to allow the robotic arm to capture the underside of the artefacts without altering the camera and lighting setup. To correct for the spectral interference introduced by the support, the white reference was placed on the top surface of the panel, and its spectral data were recorded and saved to compensate for the polycarbonate's spectral signal, which shows characteristic absorbance features beyond 800 nm [34]. In addition, the distance between the camera and the object was slightly increased to minimise the specular reflectance of the lamps on the transparent plate.
Reliable and consistent spectral data were obtained despite the presence of the polycarbonate panel, indicating that the setup maintained good performance under these conditions. Moreover, the spectra extracted from the underside of the object were broadly comparable to those obtained from the top. Minor differences can be observed at the extremes of the spectral range, likely due to the influence of the polycarbonate signal during the flat-fielding process (Figure 10). Nonetheless, the overall spectral shapes remain unaffected and clearly recognisable. These findings highlight the potential of this configuration for comprehensive full-surface spectral characterisation, although further optimisation of the support material and lighting geometry may help reduce residual spectral interferences.

3.3. Scientific Contribution

The experiments reported in this study represent the first documented application of a robotic arm for hyperspectral imaging in the cultural heritage domain. While robotics-based approaches have already been explored in other fields, their integration into cultural heritage analysis had not previously been documented. The results of the present work demonstrate that this combination is both technically feasible and scientifically valuable, as it provides:
  • Adaptive acquisition trajectories and programmable workflows. The six degrees of freedom of the robotic arm enable the system to follow trajectories that adjust to the shape, size, and spatial configuration of objects, overcoming the limitations of linear translation stages or fixed scanning setups. The possibility of storing these trajectories in the system represents an additional advantage, as it allows repeatable and user-independent acquisitions.
  • Consistent and reproducible spectral data. Experiments confirm that reliable and reproducible measurements can be obtained even when object orientation or geometry varies. As demonstrated by the geometric transformations calculated for the planar object, minimal distortions occur in the images and stable illumination is maintained throughout repeated acquisitions. This demonstrates that the robotic system can deliver high-quality hyperspectral data comparable to conventional setups while adding flexibility and adaptability.
  • Extended access to volumetric objects. The integration of the robotic arm with the hyperspectral camera allows multiple perspectives to be captured, including surfaces that are normally difficult to reach, such as the underside of artefacts. In particular, acquiring the bottom of the objects was made possible by the use of a transparent tray, which does not affect the colour rendition of the object studied or the spectral information retrieved. This approach ensures satisfactory data quality across all surfaces.
By combining robotic flexibility with hyperspectral imaging, this study introduces a methodological advancement for cultural heritage science. It shifts the emphasis from static, object-constrained setups towards adaptable, programmable, and systematic acquisition strategies, laying the groundwork for more comprehensive and reproducible characterisation in both laboratory and in situ contexts.

4. Conclusions

This study demonstrates that integrating a robotic arm with hyperspectral imaging provides an effective and flexible approach for acquiring high-quality spectral data from both planar and volumetric cultural heritage artefacts, regardless of their geometry. In particular, for applications involving paintings, the proposed approach ensured the acquisition of accurate and coherent data even when planar artefacts were positioned differently during scanning. Similarly, for metal objects, it was possible to extract detailed hyperspectral maps of corrosion patinas from multiple angles, demonstrating the robustness and versatility of the proposed method.
Collectively, these findings underscore the significance of this first application of a robotic arm for hyperspectral imaging in the cultural heritage field, marking a measurable advancement in the automation of high-resolution data acquisition. While traditional scanning methods, such as motorised translation stages, remain valid and efficient for flat surfaces in controlled environments, they are typically restricted to linear motion along fixed axes and may require repositioning of the object or the imaging system. In contrast, the flexibility of the proposed solution proves particularly advantageous in contexts where objects cannot be moved or where complex geometries are involved, making the system especially suitable for in situ applications. For example, this setup could be ideal for analysing contiguous sides of mural paintings, without the need to move the entire system to capture the desired area, even in the presence of corners or occluded areas.
It is also worth mentioning that the proposed system is inherently scalable. Although the experiments were conducted on mid-sized artefacts, robotic arms with greater reach and payload capacity can be integrated with minimal adjustments, extending the applicability of the system to the analysis of larger artworks or architectural elements. This physical scalability is complemented by the programmable nature of robotic systems, which allows seamless integration with external computational packages and in-house developed algorithms, enabling the implementation of sophisticated scanning protocols tailored to specific object geometries and analytical requirements.
Notwithstanding these promising features, the current configuration presents a few minor limitations that will require further refinement. Foremost among these is the illumination system, which is temporarily built from cost-effective materials. While generally functional, it may require occasional adjustments between imaging sessions to maintain optimal performance. To address these constraints, future work will focus on targeted improvements to overall stability and performance. A more robust illumination system will be developed to provide more stable light sources and improved light distribution during scanning. Alongside these refinements, alternative approaches for positioning scanned objects will be explored to achieve better alignment with the camera, further enhancing the accuracy and reliability of the data collection process. Lastly, further research will be dedicated to improving the quality of measurements made through the transparent support, evaluating, for example, the use of a purpose-built covering system to eliminate external interference.
In light of these considerations, this novel integration of robotic systems with hyperspectral imaging opens promising opportunities for scalable and systematic characterisation of cultural heritage, offering a valuable tool for condition monitoring in this field.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/heritage8100417/s1, Figure S1: Bronze bell fragment; Figure S2: Bronze bell fragment; Figure S3: Bronze key fragment; Video S1: Hyperspectral data acquisition of the painting in the vertical position; Video S2: Hyperspectral data acquisition of the painting in the horizontal position; Video S3: Hyperspectral data acquisition of the archaeological bronzes; Document S1: Geometric transformation.

Author Contributions

Conceptualization, A.T.; methodology, A.B., S.F., G.S. and F.A.; validation, A.B., S.F., G.S. and F.A.; formal analysis, A.B.; investigation, A.B. and S.F.; data curation, A.B.; writing—original draft preparation, A.B. and S.F.; writing—review and editing, A.T., F.A., F.C. and G.M.; visualization, A.B.; supervision, A.T. and F.C.; project administration, A.T.; funding acquisition, A.T. and F.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research and the APC were funded by the Italian Ministero delle Imprese e del Made in Italy (Ministry for Business and Made in Italy), within the ‘Casa delle Tecnologie Emergenti—Genova. Opificio della Cultura’ (CTE-Genova) project, CUP number (Codice Unico di Progetto—Unique Project Code) B37F23000000008.

Data Availability Statement

The original contributions presented in this study are included in the article/Supplementary Materials. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors acknowledge the ‘Casa delle Tecnologie Emergenti—Genova. Opificio della Cultura’ (CTE-Genova) project for supporting this research and Federico Dassiè for his technical assistance during the initial experiments. The images of the archaeological bronzes are used on authorization of the Museo Archeologico Nazionale di Aquileia—Direzione Regionale Musei del Friuli Venezia Giulia, Ministero della Cultura. The use of these images is regulated by current legislation (art. 108, co. 3 del D. Lgs 42/2004s.m.i.—DM 161/23). Any reproduction, duplication or manipulation is strictly prohibited.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Figure 1. (a) Imaging setup for the painting analysis in the vertical position. (b) Overview of the painting with highlights of the areas analysed and the trajectory of the robotic arm.
Figure 2. (a) Imaging setup for the painting analysis in the horizontal position. (b) Overview of the painting with highlights of the areas analysed and the trajectory of the robotic arm.
Figure 3. Examples of the robotic arm positions during the analysis of the volumetric objects.
Figure 4. Comparison of two sets of experiments with the painting in the vertical position. (a) RGB image output from the camera, showing the points selected for spectral comparison in the first experiment. (b) RGB image output from the camera, showing the points selected for spectral comparison in the second experiment.
Figure 5. Comparison of two sets of experiments with the painting in the horizontal position. (a) RGB image output from the camera, showing the points selected for spectral comparison in the first experiment. (b) RGB image output from the camera, showing the points selected for spectral comparison in the second experiment.
Figure 6. False-colour comparison images: (a) between vertical experiments; (b) between horizontal experiments; (c) between horizontal and vertical acquisitions. The green colour represents the reference image, while the registered (target) image is magenta.
Figure 7. Spectral comparison of selected points on the painting for each of the four experimental conditions (Figure 4 and Figure 5). The background image of the painting provides spatial reference, with spectral plots positioned approximately at the corresponding sampling locations.
Figure 8. (a) Composite RGB images from hyperspectral images of the bronze artefacts, with three points indicating where the spectra of the patinas (b) were extracted.
Figure 9. Distribution maps of the corrosion patinas: (a) cuprite, (b) green, (c) light green, and (d) dark green patina.
Figure 10. (a) RGB images from the hyperspectral data cube of the bronze artefacts acquired from the bottom, with points indicating where the spectra of the patinas were extracted. (b) Comparison between the spectra of corrosion layers analysed from the top (see points in Figure 8a) and the bottom side.