Search Results (50)

Search Parameters:
Keywords = holographic lens

11 pages, 2534 KiB  
Article
Holographic Lens Array for Solar Collector with Large Angle and Expanded Spectral Width
by Changyu Wang, Yuan Xu, Hong Xu and Juan Liu
Appl. Sci. 2025, 15(10), 5354; https://doi.org/10.3390/app15105354 - 11 May 2025
Viewed by 377
Abstract
Holographic optical elements (HOEs) are promising for solar energy collection because they are lightweight and highly efficient, but their narrow angular and spectral reconstruction widths limit their application. This paper proposes a novel holographic lens array solar collector (HLASC) that can collect light over a large angular range with an expanded spectrum. The large acceptance angle is achieved by a holographic lens array with a large relative aperture. To expand the collection spectral width over a large range of incident angles, the collection spectrum is spatially allocated within each single-lens region. Single-wavelength recording and multi-wavelength reconstruction contribute to the flexible design and simple fabrication of the HLASC. Optical experiments demonstrate that the HLASC can collect the entire visible spectrum over an angular range greater than 54°. We believe that an expanded reconstruction angle and wavelength range will promote the widespread application of HOEs in solar energy collection. Full article
(This article belongs to the Special Issue Digital Holography: Advancements, Applications, and Challenges)
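
The single-wavelength recording and multi-wavelength reconstruction mentioned in this abstract can be pictured with the textbook first-order Bragg condition for a volume grating; the relation below is a standard result included only for orientation, not an expression taken from the paper.

```latex
% Textbook first-order Bragg condition for a volume grating of period \Lambda
% in a medium of refractive index n, with \theta measured from the fringe planes:
\[
  2\, n\, \Lambda \sin\theta_B = \lambda .
\]
% A grating recorded at (\lambda_1, \theta_1) is therefore Bragg-matched for a
% different wavelength \lambda_2 at a shifted internal angle \theta_2:
\[
  \sin\theta_2 = \frac{\lambda_2}{\lambda_1}\,\sin\theta_1 ,
\]
% which is why different incidence directions across the lens aperture can be
% assigned to different parts of the collection spectrum.
```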

9 pages, 5726 KiB  
Communication
Mixed Reality (Holography)-Guided Minimally Invasive Cardiac Surgery—A Novel Comparative Feasibility Study
by Winn Maung Maung Aye, Laszlo Kiraly, Senthil S. Kumar, Ayyadarshan Kasivishvanaath, Yujia Gao and Theodoros Kofidis
J. Cardiovasc. Dev. Dis. 2025, 12(2), 49; https://doi.org/10.3390/jcdd12020049 - 27 Jan 2025
Cited by 2 | Viewed by 1126
Abstract
The operative field and exposure in minimally invasive cardiac surgery (MICS) are limited. Meticulous preoperative planning and intraoperative visualization are crucial. We present our initial experience with HoloLens® 2 as an intraoperative guide during MICS procedures: aortic valve replacement (AVR) via right anterior small thoracotomy, coronary artery bypass graft surgery (CABG) via left anterior small thoracotomy (LAST), and pulmonary valve replacement (PVR) via LAST. Three-dimensional (3D) segmentations were performed using the patient's computed tomography (CT) data, which were subsequently rendered into a 3D hologram on the HoloLens® 2. The holographic image was then superimposed on the patient lying on the operating table, using the xiphoid and the clavicle as landmarks, and was used as a real-time anatomical image guide for the surgery. The incision site marking made using the HoloLens® 2 differed by one intercostal space from the marking made from the surgeon's conventional mental reconstruction of the patient's preoperative imaging, and was found to be a more appropriate site of entry into the chest for the structure of interest. The transparent visor of the HoloLens® 2 provided unobstructed views of the operating field. A mixed reality (MR) device could contribute to preoperative surgical planning and intraoperative real-time image guidance, facilitating the understanding of anatomical relationships. MR has the potential to improve surgical precision, decrease risk, and enhance patient safety. Full article

11 pages, 7800 KiB  
Communication
Lens-Free On-Chip Quantitative Phase Microscopy for Large Phase Objects Based on a Biplane Phase Retrieval Method
by Yufan Chen, Xuejuan Wu, Yang Chen, Wenhui Lin, Haojie Gu, Yuzhen Zhang and Chao Zuo
Sensors 2025, 25(1), 3; https://doi.org/10.3390/s25010003 - 24 Dec 2024
Viewed by 936
Abstract
Lens-free on-chip microscopy (LFOCM) is a powerful computational imaging technology that combines high-throughput capabilities with cost efficiency. However, in LFOCM, the phase recovered by iterative phase retrieval techniques is generally wrapped into the range of −π to π, necessitating phase unwrapping to recover absolute phase distributions. Moreover, this unwrapping process is prone to errors, particularly in areas with large phase gradients or low spatial sampling, due to the absence of reliable initial guesses. To address these challenges, we propose a novel biplane phase retrieval (BPR) method that integrates phase unwrapping results obtained at different propagation distances to achieve accurate absolute phase reconstruction. The effectiveness of BPR is validated through live-cell imaging of HeLa cells, demonstrating improved quantitative phase imaging (QPI) accuracy when compared to conventional off-axis digital holographic microscopy. Furthermore, time-lapse imaging of COS-7 cells in vitro highlights the method’s robustness and capability for long-term quantitative analysis of large cell populations. Full article
(This article belongs to the Special Issue Digital Holography in Optics: Techniques and Applications)
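
As a side note on why unwrapping is needed at all: iterative phase retrieval returns the phase only modulo 2π. The short NumPy sketch below is purely illustrative (it is not the paper's biplane phase retrieval method); it shows a smooth phase profile being wrapped into (−π, π] and then recovered with a one-dimensional unwrap.

```python
import numpy as np

# Illustrative only: a smooth 1D phase ramp whose range exceeds 2*pi.
x = np.linspace(0, 1, 500)
true_phase = 12.0 * x**2          # radians, spans ~12 rad

# Any retrieval method that returns angle(exp(i*phi)) yields the wrapped phase.
wrapped = np.angle(np.exp(1j * true_phase))   # values confined to (-pi, pi]

# 1D unwrapping restores the absolute profile (up to a constant offset).
unwrapped = np.unwrap(wrapped)

print(np.allclose(unwrapped, true_phase, atol=1e-8))  # True for this smooth example
```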

24 pages, 4304 KiB  
Review
Photopolymer Holographic Lenses for Solar Energy Applications: A Review
by Eder Alfaro, Tomás Lloret, Juan M. Vilardy, Marlón Bastidas, Marta Morales-Vidal and Inmaculada Pascual
Polymers 2024, 16(6), 732; https://doi.org/10.3390/polym16060732 - 7 Mar 2024
Cited by 12 | Viewed by 2443
Abstract
Holographic lenses (HLs) are a class of holographic optical elements (HOEs) and are being applied to concentrate solar energy onto a focal point or focal line. In this way, the concentrated energy can be converted into electrical or thermal energy by means of a photovoltaic cell or a thermal absorber tube. HLs are able to passively track the apparent motion of the sun with a high acceptance angle, allowing tracking motors to be replaced and thus reducing the cost of support structures. This article reviews the materials used in the recording of a holographic lens (HL) or multiple HLs in photovoltaic and/or concentrating solar collectors. The review shows that the use of photopolymers for recording HLs enables high efficiency in physical systems designed for energy transformation, and presents some important elements to be taken into account in future designs, especially those related to the characteristics of the HL recording materials. Finally, the article outlines future recommendations, emphasizing potential research opportunities and challenges for researchers entering the field of HL-based concentrating solar photovoltaic and/or concentrating solar thermal collectors. Full article
(This article belongs to the Special Issue Feature Papers in Polymer Applications II)

15 pages, 2581 KiB  
Article
Quantifying the Dynamics of Bacterial Biofilm Formation on the Surface of Soft Contact Lens Materials Using Digital Holographic Tomography to Advance Biofilm Research
by Igor Buzalewicz, Aleksandra Kaczorowska, Wojciech Fijałkowski, Aleksandra Pietrowska, Anna Karolina Matczuk, Halina Podbielska, Alina Wieliczko, Wojciech Witkiewicz and Natalia Jędruchniewicz
Int. J. Mol. Sci. 2024, 25(5), 2653; https://doi.org/10.3390/ijms25052653 - 24 Feb 2024
Cited by 9 | Viewed by 3354
Abstract
The increase in bacterial resistance to antibiotics in recent years demands innovative strategies for detecting and combating biofilms, which are notoriously resilient. Biofilms, particularly those on contact lenses, can lead to biofilm-related infections (e.g., conjunctivitis and keratitis), posing a significant risk to patients. Non-destructive and non-contact sensing techniques are essential in addressing this threat. Digital holographic tomography emerges as a promising solution: it allows the 3D reconstruction of the refractive index distribution in biological samples, enabling label-free visualization and quantitative analysis of biofilms. This tool provides insight into the dynamics of biofilm formation and maturation on the surface of transparent materials. Applying digital holographic tomography to biofilm examination has the potential to advance our ability to combat the bacterial antibiotic resistance crisis. A recent study focused on characterizing biofilm formation and maturation on six soft contact lens materials (three silicone hydrogels, three hydrogels), with a particular emphasis on Staphylococcus epidermidis and Pseudomonas aeruginosa, both common culprits in ocular infections. The results revealed species- and time-dependent variations in the refractive indices and volumes of biofilms, shedding light on cell dynamics, cell death, and contact lens material-related factors. The use of digital holographic tomography enables the quantitative analysis of biofilm dynamics, providing a better understanding and characterization of bacterial biofilms. Full article
(This article belongs to the Special Issue Molecular Research of Biofilms in Microbial Infections)

15 pages, 10083 KiB  
Article
Aberration Estimation for Synthetic Aperture Digital Holographic Microscope Using Deep Neural Network
by Hosung Jeon, Minwoo Jung, Gunhee Lee and Joonku Hahn
Sensors 2023, 23(22), 9278; https://doi.org/10.3390/s23229278 - 20 Nov 2023
Cited by 1 | Viewed by 1624
Abstract
Digital holographic microscopy (DHM) is a valuable technique for investigating the optical properties of samples through the measurement of the intensity and phase of diffracted beams. However, DHMs are constrained by Lagrange invariance, compromising the spatial bandwidth product (SBP), which relates resolution and field of view. Synthetic aperture DHM (SA-DHM) was introduced to overcome this limitation, but it faces significant challenges such as aberrations in synthesizing the optical information corresponding to the steering angle of the incident wave. This paper proposes a novel approach utilizing deep neural networks (DNNs) to compensate aberrations in SA-DHM, extending the compensation scope beyond the numerical aperture (NA) of the objective lens. The method involves training a DNN on diffraction patterns observed through a circular aperture and their corresponding Zernike coefficients, enabling effective aberration compensation of the illumination beam. This makes it possible to estimate the aberration coefficients from only the part of the diffracted beam cut off by the circular aperture mask. With the proposed technique, simulation results show improved resolution and quality of the sample images. The integration of deep neural networks with SA-DHM holds promise for advancing microscopy capabilities and overcoming existing limitations. Full article
(This article belongs to the Special Issue Advanced Optical Sensors Based on Machine Learning)
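
For readers curious what training a network to regress Zernike coefficients from aperture-masked diffraction patterns might look like in practice, here is a minimal, hypothetical PyTorch sketch. The architecture, patch size, and number of Zernike terms are assumptions made for illustration; this is not the network described in the paper.

```python
import torch
import torch.nn as nn

N_ZERNIKE = 15        # assumed number of Zernike terms to estimate
PATCH = 128           # assumed size of the aperture-masked diffraction patch

# Small CNN regressor: diffraction-pattern patch -> Zernike coefficient vector.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 128 -> 64
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 64 -> 32
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 32 -> 16
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, N_ZERNIKE),
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random stand-in data.
patterns = torch.rand(8, 1, PATCH, PATCH)   # stand-in diffraction patches
coeffs = torch.randn(8, N_ZERNIKE)          # stand-in ground-truth coefficients
optimizer.zero_grad()
loss = loss_fn(model(patterns), coeffs)
loss.backward()
optimizer.step()
```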

17 pages, 19209 KiB  
Article
Terahertz Bessel Beams Formed by Binary and Holographic Axicons
by Boris Knyazev, Natalya Osintseva, Maxim Komlenok, Vladimir Pavelyev, Vasily Gerasimov, Oleg Kameshkov, Yulia Choporova and Konstantin Tukmakov
Photonics 2023, 10(6), 700; https://doi.org/10.3390/photonics10060700 - 20 Jun 2023
Cited by 2 | Viewed by 2264
Abstract
The characteristics of high-power vortex Bessel beams in the terahertz range (λ=141 μm) obtained with the use of diffractive axicons (DAs) illuminated by a Gaussian beam of the Novosibirsk free-electron laser were studied. Two of the three possible types of DA recently described in our previous paper, namely, binary spiral silicon axicons (BAs), forming beams with a topological charge l equal to 0–4 and 9, and a diamond “holographic” axicon (HA), forming a beam with l=9, were used in the experiments. These axicons formed beams whose cross sections in the region of inner Bessel rings were close to those of ideal Bessel beams, but their intensities varied in azimuth with a frequency of l and 2l for the BAs and HA, respectively. However, in the case of the BAs, the beams had a pronounced helical structure at the periphery, whereas for the HA, the beam was axisymmetric. By focusing these beams with a lens, we studied the structure of the so-called “perfect” beams (PBs). While an ideal Bessel beam exhibits a PB as a thin ring, in the case of the BAs, we observed a broadened ring structure consisting of 2l short spirals, and for the HA, we observed a narrow ring with 2l maxima in azimuth. A comparison of the numerical calculations and experiments showed that the observed azimuthal intensity variations can be attributed to inaccuracies in the preparation of the axicon relief and/or discrepancies between the calculated and actual wavelengths, within a few percent. The results of this work enable the establishment of quality requirements for axicon manufacture and the appropriate selection of the axicon type in accordance with the requirements for the beam. Full article
(This article belongs to the Special Issue Terahertz Spectroscopy and Imaging)
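
As background for the beams discussed above, the ideal vortex Bessel beam of topological charge l has the standard form below (a textbook expression, not reproduced from the paper). Its transverse intensity is azimuthally uniform, so the azimuthal modulations reported in the experiments quantify deviations from this ideal field; focusing it with a lens yields the thin annular "perfect" beam.

```latex
% Ideal vortex Bessel beam of integer topological charge l (textbook form):
\[
  E_l(r,\varphi,z) \;=\; A\, J_l(k_r r)\, e^{\,i l \varphi}\, e^{\,i k_z z},
  \qquad k_r^2 + k_z^2 = \left(\tfrac{2\pi}{\lambda}\right)^{2},
\]
% where J_l is the l-th order Bessel function of the first kind and the
% intensity |E_l|^2 does not depend on the azimuth \varphi.
```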

13 pages, 1800 KiB  
Article
Live Cell Light Sheet Imaging with Low- and High-Spatial-Coherence Detection Approaches Reveals Spatiotemporal Aspects of Neuronal Signaling
by Mariana Potcoava, Donatella Contini, Zachary Zurawski, Spencer Huynh, Christopher Mann, Jonathan Art and Simon Alford
J. Imaging 2023, 9(6), 121; https://doi.org/10.3390/jimaging9060121 - 16 Jun 2023
Cited by 2 | Viewed by 2186
Abstract
Light sheet microscopy in live cells requires minimal excitation intensity and resolves three-dimensional (3D) information rapidly. Lattice light sheet microscopy (LLSM) works similarly but uses a lattice configuration of Bessel beams to generate a flatter, diffraction-limited z-axis sheet suitable for investigating subcellular compartments, with better tissue penetration. We developed an LLSM method for investigating cellular properties of tissue in situ. Neural structures provide an important target: neurons are complex 3D structures, and signaling between cells and subcellular structures requires high-resolution imaging. We developed an LLSM configuration based on the Janelia Research Campus design for in situ recording that allows simultaneous electrophysiological recording. We give examples of using LLSM to assess synaptic function in situ. In presynapses, evoked Ca2+ entry causes vesicle fusion and neurotransmitter release. We demonstrate the use of LLSM to measure stimulus-evoked localized presynaptic Ca2+ entry and to track synaptic vesicle recycling. We also demonstrate the resolution of postsynaptic Ca2+ signaling in single synapses. A challenge in 3D imaging is the need to move the emission objective to maintain focus. We have developed an incoherent holographic lattice light-sheet (IHLLS) technique that replaces the LLS tube lens with a dual diffractive lens to obtain 3D images of spatially incoherent light diffracted from an object as incoherent holograms. The 3D structure is reproduced within the scanned volume without moving the emission objective, which eliminates mechanical artifacts and improves temporal resolution. We focus on LLS and IHLLS applications and data obtained in neuroscience and emphasize the increases in temporal and spatial resolution achieved with these approaches. Full article
(This article belongs to the Special Issue Fluorescence Imaging and Analysis of Cellular System)

12 pages, 2388 KiB  
Article
Off-Axis Polarization Volume Lens for Diffractive Waveguide
by Lixuan Zhang, Yishi Weng, Ran Wei, Chuang Wang, Yuchen Gu, Chenyu Huang and Yuning Zhang
Crystals 2023, 13(3), 390; https://doi.org/10.3390/cryst13030390 - 24 Feb 2023
Cited by 3 | Viewed by 2353
Abstract
In augmented reality diffractive waveguide technology, the light field needs to be collimated before being transmitted into the diffractive waveguide. Conventional schemes usually require additional collimating optics to collimate the light from the micro-image source and guide it into the waveguide in-coupling elements. In order to meet the needs of head-mounted devices and further miniaturize the equipment, this paper proposes a waveguide device that combines collimation and coupling by using a reflective polarization volume lens (PVL). A related model is also established and simulated to calculate the diffraction and transmission characteristics of the PVL element, and is then refined to fit the experiment. The diffractive lens studied in this paper has high diffraction efficiency at a large off-axis angle, which can fold the optical path and considerably reduce the volume of the optical system when applied to the waveguide system. Full article
(This article belongs to the Section Liquid Crystals)

17 pages, 1724 KiB  
Review
Towards Wearable Augmented Reality in Healthcare: A Comparative Survey and Analysis of Head-Mounted Displays
by Yahia Baashar, Gamal Alkawsi, Wan Nooraishya Wan Ahmad, Mohammad Ahmed Alomari, Hitham Alhussian and Sieh Kiong Tiong
Int. J. Environ. Res. Public Health 2023, 20(5), 3940; https://doi.org/10.3390/ijerph20053940 - 22 Feb 2023
Cited by 39 | Viewed by 6300
Abstract
Head-mounted displays (HMDs) have the potential to greatly impact the surgical field by maintaining sterile conditions in healthcare environments. Google Glass (GG) and Microsoft HoloLens (MH) are examples of optical HMDs. In this comparative survey related to wearable augmented reality (AR) technology in the medical field, we examine the current developments in wearable AR technology, as well as the medical aspects, with a specific emphasis on smart glasses and HoloLens. The authors searched recent articles (between 2017 and 2022) in the PubMed, Web of Science, Scopus, and ScienceDirect databases and a total of 37 relevant studies were considered for this analysis. The selected studies were divided into two main groups; 15 of the studies (around 41%) focused on smart glasses (e.g., Google Glass) and 22 (59%) focused on Microsoft HoloLens. Google Glass was used in various surgical specialities and preoperative settings, namely dermatology visits and nursing skill training. Moreover, Microsoft HoloLens was used in telepresence applications and holographic navigation of shoulder and gait impairment rehabilitation, among others. However, some limitations were associated with their use, such as low battery life, limited memory size, and possible ocular pain. Promising results were obtained by different studies regarding the feasibility, usability, and acceptability of using both Google Glass and Microsoft HoloLens in patient-centric settings as well as medical education and training. Further work and development of rigorous research designs are required to evaluate the efficacy and cost-effectiveness of wearable AR devices in the future. Full article

21 pages, 3468 KiB  
Review
Multi-Illumination Single-Holographic-Exposure Lensless Fresnel (MISHELF) Microscopy: Principles and Biomedical Applications
by José Ángel Picazo-Bueno, Martín Sanz, Luis Granero, Javier García and Vicente Micó
Sensors 2023, 23(3), 1472; https://doi.org/10.3390/s23031472 - 28 Jan 2023
Cited by 6 | Viewed by 2911
Abstract
Lensless holographic microscopy (LHM) has emerged as a promising label-free technique since it provides high-quality imaging and adaptive magnification in a lens-free, compact and cost-effective way. The compact size and reduced price of LHMs make them a perfect instrument for point-of-care diagnosis and increase their usability in limited-resource laboratories, remote areas, and poor countries. LHM can provide excellent intensity and phase imaging when the twin image is removed. In that sense, multi-illumination single-holographic-exposure lensless Fresnel (MISHELF) microscopy is a single-shot, phase-retrieved imaging technique employing multiple illumination/detection channels and a fast iterative phase-retrieval algorithm. In this contribution, we review MISHELF microscopy through a description of its principles, an analysis of its performance, a presentation of the microscope prototypes, and a summary of the main biomedical applications reported so far. Full article
(This article belongs to the Collection Biomedical Imaging and Sensing)

15 pages, 3268 KiB  
Article
Holographic Lens Resolution Using the Convolution Theorem
by Tomás Lloret, Marta Morales-Vidal, Víctor Navarro-Fuster, Manuel G. Ramírez, Augusto Beléndez and Inmaculada Pascual
Polymers 2022, 14(24), 5426; https://doi.org/10.3390/polym14245426 - 11 Dec 2022
Cited by 3 | Viewed by 1816
Abstract
The similarity between the object and image of negative asymmetrical holographic lenses (HLs) stored in a low-toxicity photopolymer has been evaluated theoretically and experimentally. Asymmetrical experimental setups with negative focal lengths were used to obtain the HLs. For this purpose, the resolution of the HLs was calculated using the convolution theorem. A USAF 1951 test target was used as the object, and the impulse responses of the HLs, which in this case were the amplitude spread functions (ASFs), were obtained with two different methods: using a CCD sensor and using a Hartmann–Shack (HS) wavefront sensor. For a negative asymmetrically recorded HL, a maximum resolution of 11.31 lp/mm was obtained, evaluated at a wavelength of 473 nm. A theoretical study of object-image similarity was carried out using the mean squared error (MSE) metric to quantitatively evaluate the experimental results. Full article
(This article belongs to the Section Polymer Applications)
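
The resolution analysis above rests on the convolution theorem: the image is the object convolved with the lens's impulse response (here the ASF), which can be evaluated with FFTs. The NumPy sketch below is a generic illustration of that computation under assumed array sizes and an assumed Gaussian-like ASF; it is not the authors' processing pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in object (e.g., a binary resolution target) and impulse response (ASF).
obj = (rng.random((256, 256)) > 0.5).astype(float)
yy, xx = np.mgrid[-128:128, -128:128]
asf = np.exp(-(xx**2 + yy**2) / (2 * 4.0**2))   # assumed Gaussian-like ASF
asf /= asf.sum()

# Convolution theorem: image = IFFT( FFT(object) * FFT(ASF) ).
image = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(np.fft.ifftshift(asf))))

# Mean squared error between object and image quantifies object-image similarity.
mse = np.mean((obj - image) ** 2)
print(f"MSE between object and simulated image: {mse:.4f}")
```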

10 pages, 3729 KiB  
Communication
Off-Axis Holographic Interferometer with Ensemble Deep Learning for Biological Tissues Identification
by Hoson Lam, Yanmin Zhu and Prathan Buranasiri
Appl. Sci. 2022, 12(24), 12674; https://doi.org/10.3390/app122412674 - 10 Dec 2022
Cited by 2 | Viewed by 1828
Abstract
This paper proposes a method with an off-axis interferometer and an ensemble deep learning (I-EDL) hologram classifier to interpret noisy digital holograms captured from the tissues of flawed biological specimens. The holograms are captured by an interferometer, which serves as a digital holographic scanner to scan the tissue with 3D information. The method achieves a high success rate of 99.60% in identifying the specimens from the tissue holograms. It is found that the ensemble deep learning hologram classifier can effectively adapt to optical aberrations coming from dust on mirrors and lens aberrations such as the Airy-plaque-like rings arising from the lenses in the interferometer. The deep learning network effectively adapts to these irregularities during the training stage and performs well in the later recognition stage without prior optical background compensation. The method does not require an intact sample with a full outline of the specimens or organs to determine the objects' identities. It demonstrates a new paradigm in object identification by ensemble deep learning through a direct wavefront recognition technique. Full article
(This article belongs to the Section Optics and Lasers)
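
The "ensemble" part of such a classifier can be pictured as combining several independently trained classifiers; a common, generic way to do that is soft voting over predicted class probabilities, sketched below with stand-in numbers. This illustrates ensembling in general, not the specific network combination used in the paper.

```python
import numpy as np

# Stand-in predicted class probabilities from three independently trained
# hologram classifiers, for a batch of 4 holograms and 5 tissue classes.
rng = np.random.default_rng(1)
member_probs = rng.dirichlet(np.ones(5), size=(3, 4))   # shape (members, batch, classes)

# Soft voting: average the members' probability outputs, then take the argmax.
ensemble_probs = member_probs.mean(axis=0)               # shape (batch, classes)
predicted_class = ensemble_probs.argmax(axis=1)

print(predicted_class)   # ensemble label for each hologram
```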

24 pages, 21607 KiB  
Article
Using Mixed Reality for the Visualization and Dissemination of Complex 3D Models in Geosciences—Application to the Montserrat Massif (Spain)
by Marc Janeras, Joan Roca, Josep A. Gili, Oriol Pedraza, Gerald Magnusson, M. Amparo Núñez-Andrés and Kathryn Franklin
Geosciences 2022, 12(10), 370; https://doi.org/10.3390/geosciences12100370 - 7 Oct 2022
Cited by 10 | Viewed by 3496
Abstract
In the last two decades, both the amount and quality of geoinformation in the geosciences field have improved substantially due to the increasingly widespread use of techniques such as laser scanning (LiDAR), digital photogrammetry, unmanned aerial vehicles, geophysical reconnaissance (seismic, electrical, geomagnetic), and ground-penetrating radar (GPR), among others. Furthermore, advances in computing, storage and visualization resources allow the acquisition of 3D terrain models (surface and underground) with unprecedented ease and versatility. However, despite these scientific and technical developments, it is still common practice to simplify 3D data into static 2D images, losing part of their communicative potential. The objective of this paper is to demonstrate the possibilities of extended reality (XR) for the communication and sharing of 3D geoinformation in the field of geosciences. A brief review of the different variants within XR is followed by a presentation of the design and functionalities of headset-type mixed reality (MR) devices, which allow 3D models to be investigated collaboratively by several users in the office environment. The specific focus is on the functionalities of Microsoft's HoloLens 2 untethered holographic head-mounted display (HMD) and the ADA Platform App by Clirio, which is used to manage model viewing with the HMD. We demonstrate the capabilities of MR for the visualization and dissemination of complex 3D information in geosciences in a data-rich and self-directed immersive environment, through selected 3D models (most of them of the Montserrat massif). Finally, we highlight the educational possibilities of MR technology. Today, MR use is still incipient and limited; we hope that it will gain popularity as the barriers to entry become lower. Full article
(This article belongs to the Section Natural Hazards)

28 pages, 6540 KiB  
Article
3D Space Layout Design of Holographic Command Cabin Information Display in Mixed Reality Environment Based on HoloLens 2
by Wei Wang, Xuefeng Hong, Sina Dang, Ning Xu and Jue Qu
Brain Sci. 2022, 12(8), 971; https://doi.org/10.3390/brainsci12080971 - 23 Jul 2022
Cited by 7 | Viewed by 2558
Abstract
When the command and control information of the command cabin is displayed in the form of mixed reality, the large amount of real-time and static information it contains forms a constantly changing dynamic situation. This places a heavy burden on the system operator's cognition, decision-making and operation. In order to solve this problem, this paper studies the three-dimensional spatial layout of holographic command cabin information display in a mixed reality environment. A total of 15 people participated in the experiment: 10 were experimental subjects and 5 were support staff. The ten subjects used the HoloLens 2 to perform visual characteristics and cognitive load experiments, and their task completion time, error rate, eye movement, EEG and subjective evaluation data were collected and analyzed. Analysis of the experimental data yields the visual and cognitive characteristics of three-dimensional space in a mixed reality environment. This paper systematically explores the effects of three key attributes of information distribution in 3D space, namely depth distance, number of information layers and relative position depth distance of targets, on visual search performance and cognitive load. The experimental results showed that the optimal depth distance ranges for information display in the mixed reality environment are: the best depth distance for operation interactions (0.6 m~1.0 m), the best depth distance for accurate identification (2.4 m~2.8 m) and the best depth distance for overall situational awareness (3.4 m~3.6 m). For a given viewing angle, the number of information layers in the space should be as small as possible and should not exceed five. The relative position depth distance between information layers in space should range from 0.2 m to 0.35 m. Based on these results, information layouts in 3D space can achieve faster and more accurate visual search in a mixed reality environment and effectively reduce cognitive load. Full article