Article

Beyond Human Vision: Unlocking the Potential of Augmented Reality for Spectral Imaging

by Rafael Cavaco 1,2, Tomás Lopes 1,2, Diana Capela 1,2, Diana Guimarães 1,2,*, Pedro A. S. Jorge 1,2 and Nuno A. Silva 1,2,*

1 Department of Physics and Astronomy, Faculty of Sciences of the University of Porto, 4169-007 Porto, Portugal
2 Center for Applied Photonics, INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4169-007 Porto, Portugal
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(12), 6635; https://doi.org/10.3390/app15126635
Submission received: 6 May 2025 / Revised: 6 June 2025 / Accepted: 10 June 2025 / Published: 12 June 2025
(This article belongs to the Special Issue Novel Laser-Based Spectroscopic Techniques and Applications)

Abstract
Spectral imaging is a broad term that refers to the use of a spectroscopy technique to analyze sample surfaces, collecting and representing spatially referenced signals. Depending on the technique utilized, it allows the user to reveal features and properties of objects that are invisible to the human eye, such as chemical or molecular composition. However, interpretability and interaction with the results are often limited to on-screen visualization of two-dimensional representations. To surpass such limitations, augmented reality emerges as a promising technology, assisted by recent developments in the integration of spectral imaging datasets onto three-dimensional models. Building on this context, this work explores the integration of spectral imaging with augmented reality, aiming to create an immersive toolset that increases the interpretability and interactivity of the results of spectral imaging analysis. The procedure follows a two-step approach, starting with the integration of spectral maps onto three-dimensional models and proceeding with the development of an interactive interface that allows immersive visualization and interaction with the results. The approach and tool developed present the opportunity for a user-centric extension of reality, enabling more intuitive and comprehensive analyses with the potential to drive advancements in various research domains.

1. Introduction

In their own yet intertwined ways, spectral imaging and augmented reality may be considered enhancers of human perception with a wide range of applications across scientific and industrial fields, particularly in domains such as mineral exploration, cultural heritage diagnostics, and artifact conservation. On the one hand, spectral imaging captures information from a scene beyond classical RGB imaging, mapping it onto a two-dimensional (2D) image. By leveraging the potential of the underlying spectroscopy technique, it may provide information regarding the spatial distribution of chemical elements or sample composition, revealing hidden features that are invisible to the human eye. On the other hand, Augmented Reality (AR) is a technology that enhances the perception of reality by projecting digital information onto the physical world via mixed-reality displays or projection solutions. Recent technological advancements have unlocked the potential for new immersive and interactive systems that hold promise as tools for harnessing the full range of human sensory perception to interact with digital data [1]. Thus, while spectral imaging and AR have different purposes, they are connected in the way they provide a user-centric extension of reality and enhance human capabilities [2,3]. Within this context, it also makes sense to explore synergistic pathways for both technologies, in particular, how AR can enhance the visualization and interpretation of spectral imaging datasets.
Conceptually, the combination of spectral imaging with AR presents a unique opportunity to assist and enhance the capabilities of the individual tools in a bidirectional manner, i.e., (i) AR assisting existing spectral imaging and (ii) spectral imaging enhancing common AR applications.
From the perspective of spectral imaging, spectroscopy techniques may offer in situ, sensitive, and rapid spatial maps of chemical analyses, with minimal sample preparation and utilizing micro- and non-destructive analytical methods [4,5]. Although spectral imaging results are typically presented as 2D images, a recent approach [6] suggested a methodology for integrating 2D surface maps into 3D models of the sample, which may be constructed using various scanning solutions, such as photogrammetry, LIDAR, or laser scanning [7,8,9,10]. The results demonstrate the possibility of integrating both models in a seamless manner, allowing an external user with no access to the physical sample to better locate the data, look from different perspectives, and explore scientific visualization tools to interact with the spectral dataset in non-trivial ways. For instance, viewing spectral data in 3D allows the user to see how chemical elements are distributed across the surface texture of a sample, making it easier to correlate specific elements or signal variations with physical characteristics, such as surface roughness or material texture. This type of immersive visualization supports more intuitive pattern recognition and material interpretation compared to static 2D interfaces, where spectral maps are typically analyzed on flat screens without spatial context or physical scale. While such tools are well established, they often fragment the user's attention and require cognitive effort to mentally reconstruct the 3D geometry or correlate spectral data with real-world locations.
In this sense, bridging these 3D digital models to real environments via AR offers a pathway for a new data analysis environment, providing contextual interpretation with unique interaction and rescaling capabilities as explored in other areas [3]. In addition, while most of the work on the topic of spectral imaging focuses on the acquisition of images at the surface of a sample, specific techniques or applications allow access to volumetric information up to a certain degree [11,12,13,14]. In this case, AR can help visualize cross-sectional maps with subsurface 3D reconstructions in real time, holding interesting potential for applications in subjects such as artwork restoration, where understanding complex stratigraphy or volumetric characteristics may be critical.
From the perspective of AR, the key idea is to superimpose digital information onto the real world, contrasting with virtual reality, which immerses the user in a fully synthetic 3D space, often disconnecting them from their physical surroundings. Thus, AR maintains a connection with the physical space, allowing for a more contextual and interactive experience with digital models and data [15], supporting contextual awareness, in situ annotation, and collaborative analysis within shared physical environments. These characteristics are particularly relevant in domains such as cultural heritage diagnostics and the mining industry, where users may need to compare physical samples, access physical documentation, or interact collaboratively in shared spaces. In this context, spectral imaging holds the potential to add multiple novel layers of information, superimposed onto digital models or even real-world objects, allowing humans to see beyond human vision and enabling deeper analysis and better insights [16,17].
Indeed, some pioneering results in the literature already hint in these directions, either by utilizing projection methods [18] or interactive headsets, with the latter significantly enhancing the experience via additional embedded sensors. For example, ref. [17] explores the combination of a fiber optic Raman probe and an augmented reality localization strategy and interface to create a visualization of a chemical image of samples, overlaid yet not anchored to the physical object. Regarding hyperspectral imaging, interesting approaches include medical applications such as the classification of tissues during in vivo surgical procedures [19] and a toolset for guided brain tumor phantom resection that utilizes a HoloLens AR headset [16], to mention a few illustrative examples. In the geological sciences domain, ref. [20] presents a toolset to analyze rock surfaces and mineral types, which may be promising for assisting in situ operations but, again, misses the contextual interpretation offered by 3D models. In general, although these approaches demonstrate the potential of combining AR and spectral imaging, the analytical potential of the solution is often overlooked. To the best of our knowledge, no toolset fuses interactive 3D digital models with spectral imaging while adding the capabilities to arbitrarily scale the samples to enhance details, access other perspectives, or interactively explore spectral information.
In this manuscript, we describe a full-stack methodology and pipeline to integrate spectral imaging datasets onto 3D object digital models, deploying it as a user-friendly interface that provides an interactive environment to explore the object and dataset characteristics. For that purpose, a Unity software solution for spectral analysis was developed for the Microsoft HoloLens 2 device, taking advantage of the Mixed Reality Toolkit to add interactions such as zoom adjustment and rotation, in addition to more in-depth spectrum analysis tools. The results presented demonstrate the potential of the fusion of AR and spectral imaging to open new avenues for research and innovation, enabling us to perceive and interact with our world in ways previously unattainable and enhancing human vision capabilities.

2. Method

The main objective of this manuscript is to introduce a versatile and user-friendly solution that allows the analysis of a spectral imaging dataset in an Augmented Reality (AR) environment. More specifically, we effectively capitalize on the interpretability and interactive capabilities of the latter, utilizing 3D models of the analyzed samples.
Following the same rationale, for the purpose of this work, we will focus on headset-wearable AR devices, specifically Microsoft's HoloLens 2, a robust, low-latency, markerless tracking device. Indeed, although AR has advanced significantly in recent years, offering various solutions ranging from headsets to heads-up displays and projection methods [21,22,23], wearable devices are still the most suitable for providing user-centric interactive environments.
In generic terms, the solution presented here consists of a three-step process that will be further detailed in the sections below. The first step concerns data acquisition, i.e., obtaining the spectral imaging dataset and the 3D model of the object. The second step focuses on asset management and consists of a processing pipeline to pre-process spectral imaging datasets and construct spectral maps, as well as their integration with the 3D models via mesh alignment. Finally, the third step focuses on the development and deployment of the AR software solution as a user-friendly interface for the effective exploration of the acquired spectral data.

2.1. Step 1: Data Acquisition

As depicted in Figure 1, the first step is to acquire the necessary datasets to construct the 3D digital model of the object and integrate it with spectroscopy information.
For the acquisition of the 3D model of the sample, we utilized a photogrammetry technique [7,24] as previously explored in a recent work [6]. Note that although we opted for this method due to its lower cost, other techniques such as laser-based scanning [25,26] can be utilized if higher-resolution models are needed, in particular if the object is small (e.g., below the centimeter scale). For the photogrammetry technique, the 3D reconstruction of the sample was generated by capturing a video using a smartphone, which was then loaded into the Polycam platform to generate a realistic three-dimensional model of the sample and the respective texture.
Regarding the spectral imaging dataset, any spectral imaging technique could be explored, as long as it is used to scan a sample surface, resulting in a dataset of dimensions $N_x \times N_y \times N_\lambda$, where $N_x$ and $N_y$ relate to the transverse spatial dimensions, whereas $N_\lambda$ is associated with the spectral axis (i.e., wavelength or wavenumber). For the present case, we utilized the Laser-Induced Breakdown Spectroscopy (LIBS) technique [6] to scan a sample surface point by point, obtaining for each point a spectral signal that contains emission lines related to the elements present in the sample [27].
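To make this data layout concrete, the following minimal sketch builds a toy datacube and extracts a single-point spectrum and a single-line map; all shapes, wavelength ranges, and variable names are illustrative assumptions, not the authors' acquisition parameters.

```python
import numpy as np

# Illustrative toy datacube of shape (Nx, Ny, N_lambda): one spectrum per scan point.
rng = np.random.default_rng(0)
cube = rng.random((120, 80, 2048))             # 120 x 80 scan grid, 2048 spectral channels
wavelengths = np.linspace(200.0, 900.0, 2048)  # assumed broadband axis in nm

# Spectrum recorded at a single surface point (x, y):
spectrum = cube[10, 20, :]

# Spectral map for one emission line: integrate the channels in a window around it.
line_nm, half_width = 259.9, 0.5               # Fe II line discussed in the manuscript
window = np.abs(wavelengths - line_nm) <= half_width
line_map = cube[:, :, window].sum(axis=2)      # shape (120, 80): one intensity per point
```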

2.2. Step 2: Asset Management

Building on the datasets acquired in the first step, step 2 comprises the tasks necessary to analyze the spectral dataset, construct the relevant images, and merge them with the 3D digital model of the sample. This step is implemented as Python scripts with streamlined, user-centric actions.
First, having acquired the spectral dataset, it is necessary to analyze it and construct the relevant maps to be merged with the 3D digital model. While this process could be performed on the headset itself and incorporated into the software solution, we opted to perform it beforehand for efficiency purposes. Indeed, the analysis of spectral data typically requires a significant amount of pre-processing, such as baseline removal [28] or filtering [29], and spectral imaging datasets are typically large (on the order of gigabytes), requiring considerable processing power for efficient operation.
For this work, having utilized LIBS, we applied a baseline removal procedure using Asymmetrical Least Squares Smoothing (ALSS) [28] and extracted the main spectral lines corresponding to each characteristic spectral signature. Next, we selected a set of $N_i$ relevant spectral lines and constructed the spectral maps by integrating, in the $\lambda$ dimension, around each selected line and applying a spatial Gaussian filter. All of these steps are performed in a single Python script, outputting a series of $N_i$ images for the relevant features selected (see Figure 2). An additional image with the mean spectrum is also saved to a file to be displayed in the final solution, as further described in step 3.
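As an illustration of this stage, the sketch below implements a common formulation of the ALSS baseline estimator [28] together with the subsequent map smoothing; the parameter values and function names are our assumptions, not the settings of the authors' script.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve
from scipy.ndimage import gaussian_filter

def alss_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Estimate the baseline of a 1D spectrum y via Asymmetric Least Squares Smoothing [28]."""
    L = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L - 2))  # second-difference operator
    w = np.ones(L)
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, L, L)
        z = spsolve(W + lam * (D @ D.T), w * y)  # penalized weighted least squares
        w = p * (y > z) + (1 - p) * (y < z)      # asymmetric reweighting
    return z

# Typical usage on one spectrum of the datacube, then spatial smoothing of a line map:
# corrected = spectrum - alss_baseline(spectrum)
# smoothed_map = gaussian_filter(line_map, sigma=1.0)  # the spatial Gaussian filter of step 2
```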
Subsequently, in order to merge the spectral maps with the 3D digital model, we need to superimpose the spectroscopic data onto the corresponding surface of the sample. This is performed using a mesh matching algorithm previously presented in the literature [6], which consists of determining three transformations (translation, rotation, and scaling) between two point sets, one on the 3D model and the other on the 2D spectral image, by minimizing the mean squared error between them.
Mathematically speaking, these selected points, $p_i$ and $q_i$, corresponding to the same spatial region in both the 3D model and the 2D image, can be represented by two matrices (Equation (1)) that contain the sets of points to be aligned as follows:

$$P = \begin{bmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ \vdots & \vdots & \vdots \\ x_n & y_n & z_n \end{bmatrix} \quad \text{and} \quad Q = \begin{bmatrix} i_1 & j_1 & k_1 \\ i_2 & j_2 & k_2 \\ \vdots & \vdots & \vdots \\ i_n & j_n & k_n \end{bmatrix} \tag{1}$$
According to ref. [30], this minimization problem can be solved by computing the cross-covariance matrix, $\Sigma_{PQ}$, for the point patterns $P$ and $Q$ as

$$\Sigma_{PQ} = \frac{1}{n}\sum_{i=1}^{n} (q_i - \mu_q)^T (p_i - \mu_p) \tag{2}$$
By decomposing the matrix $\Sigma_{PQ}$ through Singular Value Decomposition (SVD), i.e., $\Sigma_{PQ} = UDV^T$, it is possible to obtain the optimal transformation parameters that minimize the mean squared error, given by the following:

$$\text{Rotation: } R = UCV^T, \qquad \text{Scaling: } s = \frac{1}{\sigma_P^2}\,\mathrm{tr}(DC), \qquad \text{Translation: } t = \mu_Q - sR\mu_P \tag{3}$$
In this description, $\mu_P$ and $\mu_Q$ correspond to the positions of the centroids, $\sigma_P^2$ and $\sigma_Q^2$ correspond to the variances of the matrices, and $C$ is a parameter that ensures that the rotation matrix $R$ retains the correct orientation, even when the alignment involves a reflection, and is given by

$$C = \begin{cases} I, & \text{if } \det(U)\det(V) = 1,\\ \mathrm{diag}(1, 1, \ldots, 1, -1), & \text{if } \det(U)\det(V) = -1. \end{cases} \tag{4}$$
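For reference, a minimal NumPy sketch of this estimation is given below. It is an illustrative re-implementation of the closed-form solution of ref. [30] (Equations (2)–(4)), not the authors' exact script; the function name and array conventions are ours.

```python
import numpy as np

def similarity_transform(P, Q):
    """Estimate the rotation R, scale s, and translation t mapping the points P
    onto Q in the least-squares sense [30]. P and Q are (n, 3) arrays of
    corresponding points selected on the 3D model and the 2D spectral image."""
    n = P.shape[0]
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mu_p, Q - mu_q
    sigma_p2 = (Pc ** 2).sum() / n                 # variance of P
    Sigma = Qc.T @ Pc / n                          # cross-covariance matrix (Equation (2))
    U, D, Vt = np.linalg.svd(Sigma)
    C = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # reflection guard (Equation (4))
        C[-1, -1] = -1.0
    R = U @ C @ Vt                                 # rotation (Equation (3))
    s = np.trace(np.diag(D) @ C) / sigma_p2        # scaling
    t = mu_q - s * R @ mu_p                        # translation
    return R, s, t

# Applying the estimated transform to all model points: aligned = s * P @ R.T + t
```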
This algorithm was implemented in Python (v3.12.7) and executed automatically within ParaView (v5.12) [31], requiring only the selection of the specified points by the user. Although this method provides flexibility and control, the manual point selection may limit reproducibility and increase the setup time for more complex samples.
Finally, some additional tweaks are applied to the scene for visualization purposes (e.g., adjusting the colorbar, brightness, and textures) before importing it into the software solution. Some results are depicted in Figure 3, showing a reliable alignment between the spectral data and the 3D model and highlighting the effectiveness of the proposed method.
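As an example of such a tweak, a spectral map can be converted into a color texture before import with a few lines of the following kind; this is a hypothetical sketch, and the colormap choice, normalization, and file name are ours rather than the authors'.

```python
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

def map_to_texture(spectral_map, cmap_name="viridis", out_path="spectral_texture.png"):
    """Normalize a 2D spectral map and save it as an RGBA texture image."""
    m = np.asarray(spectral_map, dtype=float)
    m = (m - m.min()) / (m.max() - m.min() + 1e-12)   # normalize intensities to [0, 1]
    rgba = (plt.get_cmap(cmap_name)(m) * 255).astype(np.uint8)
    Image.fromarray(rgba).save(out_path)              # texture to overlay on the 3D mesh
```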

2.3. Step 3: Development and Deployment of Unity Software Solution

In order to interact with the digital models in a meaningful manner in the context of spectral imaging, simply uploading the generated models to a third-party 3D model visualizer on the HoloLens 2 would be extremely limiting. Indeed, a complete toolset requires not only the visualization of the model but also interaction with the spectral data. This means, for example, integrating a window to analyze the spectral signal, as well as additional solutions to switch between different spectral maps of the same sample and even to place multiple samples side by side for integrated analysis.
For that purpose, we opted to develop a tailored software solution, building a Unity project within a 3D Universal Windows Platform adapted to the Microsoft HoloLens 2 headset. Unity is a game engine that can be used to create 3D and 2D interactive content. It excels in real-time rendering, which, together with its cross-platform capabilities, makes it a natural choice for a versatile approach to the central challenge of this manuscript [32,33]. In addition, to better explore AR capabilities, the Mixed Reality Toolkit (MRTK) package was utilized in the project. The MRTK adds a unique collection of components and advanced functionalities, such as spatial mapping, hand tracking, and eye tracking, which are essential for creating immersive and interactive AR experiences.
In Figure 1 (right), we present the designed interface, composed of a text box, a dropdown menu, a 2D graph, and the sample itself. Each of these components has a particular function in the interface, with the information updated according to that contained in the loaded model. The text box displays the sample name, and the dropdown menu lists the relevant features for which the spectral maps were previously generated (e.g., element lines for LIBS); by interacting with this menu, the user can switch between spectral maps. The 2D graph displays the image generated in the second step, which contains the average spectroscopy signal of the sample. Finally, in the center of the interface, the model of the sample appears, displaying the selected spectral map seamlessly overlaid on the sample surface while preserving the relevant object texture information. We added collision properties and object manipulator functions to the objects, alongside the GameObjects interaction tools, to provide the elements with all the necessary gesture-based interactions.
Finally, a menu window can be utilized to access additional functions. The first button hides all elements of the interface except for the sample and the menu, achieving an environment with fewer distractions. The second gives access to a list of samples, also enabling the simultaneous display of several samples to facilitate comparative examination. Finally, a third function activates a QR code reading mode that allows users to move the sample to their desired location and that, in the future, could be used to load models directly from a remote server. All of these buttons are also voice-activated, leaving the user's hands free for other tasks.

3. Results

As previously mentioned, for the purpose of this work, we focused on the LIBS technique for obtaining the spectral imaging datasets. In short, LIBS imaging allows for the creation of two-dimensional chemical maps of samples by identifying elemental and molecular emission lines (Figure 2) through the analysis of spectral signatures generated during the decay of laser-induced plasmas on the sample surface. This method was applied to two distinct samples for illustrative purposes (Figure 3): (i) an oxidized double open-end wrench composed of chrome–vanadium steel, and (ii) a traditional Portuguese tile fragment.
For the wrench sample, iron (Fe I and Fe II) emission lines were detected during the analytical phase in step 2, together with vanadium (V), chromium (Cr), aluminum (Al), and oxygen (O) lines. All of these are consistent with the surface oxidation observed on the sample. On the one hand, the presence of iron and vanadium is consistent with the chrome–vanadium steel composition. On the other hand, Cr and oxygen lines were detected with higher intensity, as they are part of the protective coating, revealing the less damaged regions.
For the tile fragment, the distribution of cobalt (Co), part of the paint used in the tile, was particularly noteworthy. The expected spatial match underscores the capability of LIBS to identify and analyze heritage materials, offering significant potential for the study and preservation of cultural heritage artifacts.
Figure 4 presents the user's perspective of the developed interactive tool, illustrating the user-friendly and informative interface that integrates spectral imaging with augmented reality (AR). This interface allows users to observe all relevant characteristics of the spectral map projected on a digital model of the sample, offering a degree of contextualized interpretation of, and gesture-based interaction with, the acquired data that is not achievable with traditional visualization methods. In particular, from the user experience (shown in the video in the Supplementary Materials), we highlight the following interaction capabilities:
  • Moving and rotating the sample: the user can position and move the sample freely, analyzing its composition in detail from multiple perspectives;
  • Scaling: small samples can be scaled up and larger ones reduced, offering unique perspectives of the samples in integration with additional spectral features;
  • Texture-based interpretation: because the surface texture and possible roughness of the sample are included, common signal variations (e.g., plasma signal in LIBS) caused by surface properties can be related and better contextualized;
  • Multiple-sample comparison: adding multiple samples to the environment at the same time eases visual comparisons, enabling clear connections or distinctions to be drawn.

4. Discussion

Leveraging these capabilities, the potential applications of this approach are vast and will be discussed in relation to the following two major directions, in close connection with the samples presented: industrial applications and academic ones, with a focus on heritage science.
From an industrial perspective, the applications of this technology essentially focus on quality control and monitoring. As seen for the wrench in Figure 2, LIBS imaging can identify and monitor material degradation, providing critical information for maintenance and conservation efforts. Integrated with AR, areas of corrosion or material fatigue may become more evident, allowing us to establish a deeper connection with the object volume. In addition, in the context of quality control, AR can guide maintenance personnel through complex inspection processes, overlaying spectral data onto physical components to highlight areas requiring attention, thus improving worker efficiency. This holds potential for upstream interventions, including the detection of production defects and ensuring product quality by detecting elemental inconsistencies or contamination.
From an academic perspective, the applications are also compelling. Focusing on heritage science for illustrative purposes, the study of objects and artifacts strongly benefits from spectral imaging techniques like LIBS, as they offer sensitive and rapid spatial maps of chemical compositions with minimal sample preparation. With such a technique, the researcher can analyze paintings, metals, ceramics, and stones, obtaining detailed information about the composition of the object, which may aid in its characterization and inform the planning of preservation interventions. AR is also rather popular in this research domain, as it provides a means for users to visualize and interact with accurately digitized models of objects, allowing free gesture-based interaction without harming asset preservation [34]. In addition, it can make cultural and educational experiences more engaging, facilitating collaborative and inclusive environments among scholars using shared digital platforms regardless of their physical location [34,35,36].
In this context, the potential to add novel layers of information from spectral imaging technologies to digital models of heritage artifacts may enable deeper analysis and yield insights without physical interaction, contributing to a better yet harmless understanding of, for example, the materials and techniques used, as well as connections to the historical period [34]. The 3D interactivity may also be utilized to inform the planning of preservation strategies and to establish collaborative playgrounds with multiple layers of information and multiple samples (e.g., fragments [37,38]).
All in all, besides the natural exploration of the tool in multiple domains of science and technology such as heritage research [34], biology and medicine [19,39,40], geology [41], and industrial process control [42], we anticipate that promising future research lines may also include the exploration of a few other ramifications, including the following: the exploration of other spectral imaging techniques with multimodality and sensor fusion; the integration of machine learning tools and solutions for analysis and sample classification; and the exploration of the tool in classroom settings for teaching and demonstration purposes.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app15126635/s1, Video S1: Demonstration of the augmented reality interface and interactive visualization tool developed for Microsoft HoloLens 2. The video illustrates user interactions, including object manipulation, spectral data visualization, and interface navigation.

Author Contributions

Conceptualization, R.C., T.L. and N.A.S.; methodology, R.C. and N.A.S.; software, R.C. and T.L.; validation, N.A.S. and P.A.S.J.; formal analysis, R.C. and T.L.; investigation, R.C. and T.L.; resources, R.C.; data curation, R.C., D.C. and D.G.; writing—original draft preparation, R.C. and N.A.S.; writing—review and editing, R.C. and N.A.S.; visualization, R.C. and T.L.; supervision, N.A.S.; project administration, P.A.S.J.; funding acquisition, P.A.S.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by national funds through FCT—Fundação para a Ciência e a Tecnologia, I.P., under the support UID/50014/2023. Tomás Lopes and Diana Capela acknowledge the support of the Foundation for Science and Technology (FCT), Portugal, through Grants 2024.01830.BD and 2024.02874.BD, respectively. Nuno A. Silva acknowledges the support of FCT under the grant 2022.08078.CEECIND/CP1740/CT0002 (https://doi.org/10.54499/2022.08078.CEECIND/CP1740/CT0002).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Crofton, E.; Botinestean, C.; Fenelon, M.; Gallagher, E. Potential applications for virtual and augmented reality technologies in sensory science. Innov. Food Sci. Emerg. Technol. 2019, 56, 102178. [Google Scholar] [CrossRef]
  2. Schmalstieg, D.; Hollerer, T. Augmented Reality: Principles and Practice; Addison-Wesley Professional: Boston, MA, USA, 2016. [Google Scholar]
  3. Carmigniani, J.; Furht, B. Augmented Reality: An Overview; Springer Nature: Berlin, Germany, 2011; pp. 3–46. [Google Scholar] [CrossRef]
  4. Capela, D.; Ferreira, M.F.; Lima, A.; Dias, F.; Lopes, T.; Guimarães, D.; Jorge, P.A.; Silva, N.A. Robust and interpretable mineral identification using laser-induced breakdown spectroscopy mapping. Spectrochim. Acta Part B At. Spectrosc. 2023, 206, 106733. [Google Scholar] [CrossRef]
  5. Das, R.S.; Agrawal, Y. Raman spectroscopy: Recent advancements, techniques and applications. Vib. Spectrosc. 2011, 57, 163–176. [Google Scholar] [CrossRef]
  6. Lopes, T.; Rodrigues, P.; Cavaco, R.; Capela, D.; Ferreira, M.F.; Guimarães, D.; Jorge, P.A.; Silva, N.A. Interactive three-dimensional chemical element maps with laser-induced breakdown spectroscopy and photogrammetry. Spectrochim. Acta Part B At. Spectrosc. 2023, 203, 106649. [Google Scholar] [CrossRef]
  7. Mikhail, E.M.; Bethel, J.S.; McGlone, J.C. Introduction to Modern Photogrammetry; John Wiley & Sons: New York, NY, USA, 2001. [Google Scholar]
  8. Marín-Buzón, C.; Pérez-Romero, A.; López-Castro, J.L.; Ben Jerbania, I.; Manzano-Agugliaro, F. Photogrammetry as a new scientific tool in archaeology: Worldwide research trends. Sustainability 2021, 13, 5319. [Google Scholar] [CrossRef]
  9. Tavani, S.; Billi, A.; Corradetti, A.; Mercuri, M.; Bosman, A.; Cuffaro, M.; Seers, T.; Carminati, E. Smartphone assisted fieldwork: Towards the digital transition of geoscience fieldwork using LiDAR-equipped iPhones. Earth-Sci. Rev. 2022, 227, 103969. [Google Scholar] [CrossRef]
  10. Bi, S.; Yuan, C.; Liu, C.; Cheng, J.; Wang, W.; Cai, Y. A survey of low-cost 3D laser scanning technology. Appl. Sci. 2021, 11, 3938. [Google Scholar] [CrossRef]
  11. Antony, M.M.; Sandeep, C.S.; Matham, M.V. Hyperspectral vision beyond 3D: A review. Opt. Lasers Eng. 2024, 178, 108238. [Google Scholar] [CrossRef]
  12. Wang, N.; Wang, L.; Feng, G.; Gong, M.; Wang, W.; Lin, S.; Huang, Z.; Chen, X. Volumetric Imaging From Raman Perspective: Review and Prospect. Laser Photonics Rev. 2024, 19, 2401444. [Google Scholar] [CrossRef]
  13. Ferreira, M.F.; Guimarães, D.; Oliveira, R.; Lopes, T.; Capela, D.; Marrafa, J.; Meneses, P.; Oliveira, A.; Baptista, C.; Gomes, T.; et al. Characterization of Functional Coatings on Cork Stoppers with Laser-Induced Breakdown Spectroscopy Imaging. Sensors 2023, 23, 9133. [Google Scholar] [CrossRef]
  14. Gallot-Duval, D.; Quere, C.; De Vito, E.; Sirven, J.B. Depth profile analysis and high-resolution surface mapping of lithium isotopes in solids using laser-induced breakdown spectroscopy (LIBS). Spectrochim. Acta Part B At. Spectrosc. 2024, 215, 106920. [Google Scholar] [CrossRef]
  15. Azuma, R.; Baillot, Y.; Behringer, R.; Feiner, S.; Julier, S.; MacIntyre, B. Recent advances in augmented reality. IEEE Comput. Graph. Appl. 2001, 21, 34–47. [Google Scholar] [CrossRef]
  16. Huang, J.; Halicek, M.; Shahedi, M.; Fei, B. Augmented reality visualization of hyperspectral imaging classifications for image-guided brain tumor phantom resection. Proc. SPIE Int. Soc. Opt. Eng. 2020, 11315, 113150U. [Google Scholar]
  17. Yang, W.; Mondol, A.S.; Stiebing, C.; Marcu, L.; Popp, J.; Schie, I.W. Raman ChemLighter: Fiber optic Raman probe imaging in combination with augmented chemical reality. J. Biophotonics 2019, 12, e201800447. [Google Scholar] [CrossRef]
  18. Alsberg, B.K. Is sensing spatially distributed chemical information using sensory substitution with hyperspectral imaging possible? Chemom. Intell. Lab. Syst. 2012, 114, 24–29. [Google Scholar] [CrossRef]
  19. Sancho, J.; Villa, M.; Chavarrías, M.; Juarez, E.; Lagares, A.; Sanz, C. SLIMBRAIN: Augmented reality real-time acquisition and processing system for hyperspectral classification mapping with depth information for in-vivo surgical procedures. J. Syst. Archit. 2023, 140, 102893. [Google Scholar] [CrossRef]
  20. Engelke, U.; Rogers, C.; Klump, J.; Lau, I. HypAR: Situated mineralogy exploration in augmented reality. In Proceedings of the 17th International Conference on Virtual-Reality Continuum and Its Applications in Industry, Brisbane, QLD, Australia, 14–16 November 2019; pp. 1–5. [Google Scholar]
  21. Xiong, J.; Hsiang, E.L.; He, Z.; Zhan, T.; Wu, S.T. Augmented reality and virtual reality displays: Emerging technologies and future perspectives. Light. Sci. Appl. 2021, 10, 1–30. [Google Scholar] [CrossRef]
  22. Bolton, A.; Burnett, G.; Large, D.R. An investigation of augmented reality presentations of landmark-based navigation using a head-up display. In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI’15, New York, NY, USA, 1–3 September 2015; pp. 56–63. [Google Scholar] [CrossRef]
  23. Stork, A.; Bimber, O.; Amicis, R.d. Projection-based Augmented Reality in Engineering Applications. In Proceedings of the CAD 2002, Dresden, Germany, 4–5 March 2002. [Google Scholar]
  24. Magnani, M.; Douglass, M.; Schroder, W.; Reeves, J.; Braun, D.R. The digital revolution to come: Photogrammetry in archaeological practice. Am. Antiq. 2020, 85, 737–760. [Google Scholar] [CrossRef]
  25. Baltsavias, E.P. A comparison between photogrammetry and laser scanning. ISPRS J. Photogramm. Remote Sens. 1999, 54, 83–94. [Google Scholar] [CrossRef]
  26. Lu, T.; Si, H.; Gao, Y. A research of 3D models for cloud-based technology combined with laser scanning close-range photogrammetry method. Int. J. Adv. Manuf. Technol. 2023, 1–10. [Google Scholar] [CrossRef]
  27. Gaudiuso, R.; Dell’Aglio, M.; De Pascale, O.; Senesi, G.S.; De Giacomo, A. Laser induced breakdown spectroscopy for elemental analysis in environmental, cultural heritage and space applications: A review of methods and results. Sensors 2010, 10, 7434–7468. [Google Scholar] [CrossRef]
  28. Peng, J.; Peng, S.; Jiang, A.; Wei, J.; Li, C.; Tan, J. Asymmetric least squares for multiple spectra baseline correction. Anal. Chim. Acta 2010, 683, 63–68. [Google Scholar] [CrossRef]
  29. Pořízka, P.; Klus, J.; Képeš, E.; Prochazka, D.; Hahn, D.W.; Kaiser, J. On the utilization of principal component analysis in laser-induced breakdown spectroscopy data analysis, a review. Spectrochim. Acta Part B At. Spectrosc. 2018, 148, 65–82. [Google Scholar] [CrossRef]
  30. Umeyama, S. Least-squares estimation of transformation parameters between two point patterns. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 376–380. [Google Scholar] [CrossRef]
  31. Ahrens, J.; Geveci, B.; Law, C. ParaView: An End-User Tool for Large Data Visualization. In Visualization Handbook; Elsevier: Amsterdam, The Netherlands, 2005; ISBN 978-0123875822. [Google Scholar]
  32. Kim, S.L.; Suk, H.J.; Kang, J.H.; Jung, J.M.; Laine, T.H.; Westlin, J. Using Unity 3D to facilitate mobile augmented reality game development. In Proceedings of the 2014 IEEE World Forum on Internet of Things (WF-IoT), Seoul, Republic of Korea, 6–8 March 2014; pp. 21–26. [Google Scholar]
  33. Carter, E.; Sakr, M.; Sadhu, A. Augmented Reality-Based Real-Time Visualization for Structural Modal Identification. Sensors 2024, 24, 1609. [Google Scholar] [CrossRef]
  34. Boboc, R.G.; Băutu, E.; Gîrbacia, F.; Popovici, N.; Popovici, D.M. Augmented reality in cultural heritage: An overview of the last decade of applications. Appl. Sci. 2022, 12, 9859. [Google Scholar] [CrossRef]
  35. Bekele, M.K.; Pierdicca, R.; Frontoni, E.; Malinverni, E.S.; Gain, J. A Survey of Augmented, Virtual, and Mixed Reality for Cultural Heritage. J. Comput. Cult. Herit. 2018, 11, 7. [Google Scholar] [CrossRef]
  36. Tscheu, F.; Buhalis, D. Augmented reality at cultural heritage sites. In Proceedings of the Information and Communication Technologies in Tourism 2016: Proceedings of the International Conference, Bilbao, Spain, 2–5 February 2016; Springer: Cham, Switzerland, 2016; pp. 607–619. [Google Scholar]
  37. Detalle, V.; Bai, X. The assets of laser-induced breakdown spectroscopy (LIBS) for the future of heritage science. Spectrochim. Acta Part B At. Spectrosc. 2022, 191, 106407. [Google Scholar] [CrossRef]
  38. Anglos, D.; Detalle, V. Cultural heritage applications of LIBS. In Laser-Induced Breakdown Spectroscopy: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2014; pp. 531–554. [Google Scholar]
  39. Moawad, G.; Elkhalil, J.; Klebanoff, J.; Rahman, S.; Habib, N.; Alkatout, I. Augmented Realities, Artificial Intelligence, and Machine Learning: Clinical Implications and How Technology Is Shaping the Future of Medicine. J. Clin. Med. 2020, 9, 3811. [Google Scholar] [CrossRef]
  40. Castelan, E.; Vinnikov, M.; Alex Zhou, X. Augmented reality anatomy visualization for surgery assistance with hololens: Ar surgery assistance with hololens. In Proceedings of the 2021 ACM International Conference on Interactive Media Experiences, Virtual, 21–23 June 2021; pp. 329–331. [Google Scholar]
  41. Mathiesen, D.; Myers, T.; Atkinson, I.; Trevathan, J. Geological visualisation with augmented reality. In Proceedings of the 2012 15th International Conference on Network-Based Information Systems, Melbourne, VIC, Australia, 26–28 September 2012; pp. 172–179. [Google Scholar]
  42. Mourtzis, D.; Siatras, V.; Angelopoulos, J. Real-time remote maintenance support based on augmented reality (AR). Appl. Sci. 2020, 10, 1855. [Google Scholar] [CrossRef]
Figure 1. Schematic of the described methodology to deploy an interactive visualization tool for a HoloLens 2 device, broken into the three steps referred to in the manuscript. (Left, bottom) Spectral map of the cobalt emission line at 345.3 nm and 3D model of the analyzed sample. (Middle, bottom) Illustrative results of the superimposition of the 3D model with the spectral data corresponding to the cobalt emission line at 345.3 nm. (Right, bottom) Illustration of the final user-friendly interface in Unity.
Figure 2. LIBS imaging of the chromium (Cr), iron (Fe II), and vanadium (V) emission lines at 283.5 nm, 259.9 nm, and 437.9 nm, respectively, over the three-dimensional model.
Figure 3. Three-dimensional models constructed using photogrammetry techniques: (A) Map of cobalt emission line at 345.3 nm, superimposed on a patterned tile model. (B) Map of iron emission line at 259.9 nm, superimposed on an oxidized wrench model.
Figure 4. Depiction of using the Microsoft HoloLens 2 device and interacting with the interface developed.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
