Search Results (31)

Search Parameters:
Journal = J. Imaging
Section = Mixed, Augmented and Virtual Reality

31 pages, 4535 KiB  
Article
Prediction of Attention Groups and Big Five Personality Traits from Gaze Features Collected from an Outlier Search Game
by Rachid Rhyad Saboundji, Kinga Bettina Faragó and Violetta Firyaridi
J. Imaging 2024, 10(10), 255; https://doi.org/10.3390/jimaging10100255 - 16 Oct 2024
Cited by 1 | Viewed by 2084
Abstract
This study explores the intersection of personality, attention and task performance in traditional 2D and immersive virtual reality (VR) environments. A visual search task was developed that required participants to find anomalous images embedded in normal background images in 3D space. Experiments were conducted with 30 subjects who performed the task in 2D and VR environments while their eye movements were tracked. Following an exploratory correlation analysis, we applied machine learning techniques to investigate the predictive power of gaze features on human data derived from different data collection methods. Our proposed methodology consists of a pipeline of steps for extracting fixation and saccade features from raw gaze data and training machine learning models to classify the Big Five personality traits and attention-related processing speed/accuracy levels computed from the Group Bourdon test. The models achieved above-chance predictive performance in both 2D and VR settings despite visually complex 3D stimuli. We also explored further relationships between task performance, personality traits and attention characteristics.
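A rough illustration of the kind of pipeline the abstract describes (fixation summaries extracted from raw gaze data, then a classifier) is sketched below in Python. The dispersion threshold, the feature set and the random-forest model are illustrative assumptions, not the authors' exact implementation, and the data are synthetic.

```python
# Minimal sketch, assuming raw gaze samples with timestamps and x/y positions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def fixation_features(t, x, y, dispersion_thr=1.0, min_dur=0.1):
    """Split a gaze trace into fixations (I-DT style) and summarise their durations."""
    fix_durations, start = [], 0
    for i in range(1, len(t)):
        wx, wy = x[start:i + 1], y[start:i + 1]
        dispersion = (wx.max() - wx.min()) + (wy.max() - wy.min())
        if dispersion > dispersion_thr:            # window is no longer a single fixation
            if t[i - 1] - t[start] >= min_dur:
                fix_durations.append(t[i - 1] - t[start])
            start = i
    if not fix_durations:
        return np.zeros(3)
    d = np.array(fix_durations)
    return np.array([len(d), d.mean(), d.std()])   # count, mean and std of fixation durations

# Hypothetical data set: one feature vector per participant and a binary label
# (e.g., fast vs. slow attention group derived from the Group Bourdon test).
rng = np.random.default_rng(0)
X = np.vstack([
    fixation_features(np.arange(0, 30, 0.01),
                      rng.normal(0, 1, 3000).cumsum() * 0.01,
                      rng.normal(0, 1, 3000).cumsum() * 0.01)
    for _ in range(30)
])
y = np.array([0, 1] * 15)                          # placeholder labels
clf = RandomForestClassifier(random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())     # chance level is ~0.5 for this synthetic data
```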

11 pages, 5641 KiB  
Communication
Altered Movement Coordination during Functional Reach Tasks in Patients with Chronic Low Back Pain and Its Relationship to Numerical Pain Rating Scores
by Susanne M. van der Veen, Christopher R. France and James S. Thomas
J. Imaging 2024, 10(9), 225; https://doi.org/10.3390/jimaging10090225 - 12 Sep 2024
Viewed by 1164
Abstract
Identifying the effects of pain catastrophizing on movement patterns in people with chronic low back pain (CLBP) has important clinical implications for treatment approaches. Prior research has shown people with CLBP have decreased lumbar-hip ratios during trunk flexion movements, indicating a decrease in the contribution of lumbar flexion relative to hip flexion during trunk flexion. In this study, we aim to explore the relationship between pain catastrophizing and movement patterns during trunk flexion in a CLBP population. Participants with CLBP (N = 98, male = 59, age = 39.1 ± 13.0) completed a virtual reality standardized reaching task that necessitated a progressively larger amount of trunk flexion. Specifically, participants reached for four virtual targets to elicit 15°, 30°, 45°, and 60° trunk flexion in the mid-sagittal plane. Lumbar flexion was derived from the motion data. Self-report measures of numerical pain ratings, kinesiophobia, and pain catastrophizing were obtained. Pain catastrophizing leads to decreased lumbar flexion angles during forward reaching. This effect is greater in females than males.
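The lumbar-hip ratio mentioned above can be illustrated with a short sketch. The exact ratio definition and the angle values below are assumptions for demonstration only; the lumbar contribution is simply taken as trunk flexion minus hip flexion.

```python
# Illustrative computation of a lumbar-hip ratio from segment angles (a sketch,
# not the authors' measurement procedure).
import numpy as np

def lumbar_hip_ratio(trunk_flexion_deg, hip_flexion_deg):
    """Lumbar flexion is estimated as trunk flexion minus hip flexion."""
    hip = np.asarray(hip_flexion_deg, dtype=float)
    lumbar = np.asarray(trunk_flexion_deg, dtype=float) - hip
    return lumbar / np.where(hip == 0, np.nan, hip)

# Hypothetical peak angles for the four reaching targets (15°, 30°, 45°, 60°).
trunk = [15, 30, 45, 60]
hip = [10, 19, 27, 34]
print(np.round(lumbar_hip_ratio(trunk, hip), 2))   # lower values -> less lumbar contribution
```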

15 pages, 3247 KiB  
Article
The Usefulness of a Virtual Environment-Based Patient Setup Training System for Radiation Therapy
by Toshioh Fujibuchi, Kosuke Kaneko, Hiroyuki Arakawa and Yoshihiro Okada
J. Imaging 2024, 10(8), 184; https://doi.org/10.3390/jimaging10080184 - 30 Jul 2024
Cited by 1 | Viewed by 1949
Abstract
In radiation therapy, patient setup is important for improving treatment accuracy. The six-axis couch semi-automatically adjusts the patient’s position; however, correcting a twist of the patient’s body is difficult. In this study, we developed and evaluated a virtual reality setup training tool for medical students to understand and improve their patient setup skills for radiation therapy. First, we set up a simulated patient in a virtual space to reproduce the radiation treatment room. A gyro sensor was attached to the patient phantom in real space, and the twist of the phantom was linked to the patient in the virtual space. Training was conducted for 24 students, and their operation records were analyzed and evaluated. The training’s efficacy was also evaluated through questionnaires provided at the end of the training. The total time required for patient setup tests before and after training decreased significantly from 331.9 s to 146.2 s. Questionnaire responses regarding the usability of the training showed that most trainees rated it highly. We found that training significantly improved students’ understanding of the patient setup. With the proposed system, trainees can experience a simulated setup that can aid in deepening their understanding of radiation therapy treatments.
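The before/after comparison of setup times could be analysed along the lines of the sketch below. The per-student values are synthetic placeholders around the reported group means (331.9 s and 146.2 s), and the paired t-test is one plausible choice of test, not necessarily the one the authors used.

```python
# Sketch of a paired before/after comparison of setup times (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(331.9, 60.0, size=24)    # hypothetical setup times before training (s)
post = rng.normal(146.2, 40.0, size=24)   # hypothetical setup times after training (s)

t_stat, p_value = stats.ttest_rel(pre, post)
print(f"mean pre={pre.mean():.1f}s, post={post.mean():.1f}s, p={p_value:.4f}")
```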

13 pages, 8773 KiB  
Article
Exocentric and Egocentric Views for Biomedical Data Analytics in Virtual Environments—A Usability Study
by Jing Ng, David Arness, Ashlee Gronowski, Zhonglin Qu, Chng Wei Lau, Daniel Catchpoole and Quang Vinh Nguyen
J. Imaging 2024, 10(1), 3; https://doi.org/10.3390/jimaging10010003 - 23 Dec 2023
Cited by 2 | Viewed by 2320
Abstract
Biomedical datasets are usually large and complex, containing biological information about a disease. Computational analytics and the interactive visualisation of such data are essential decision-making tools for disease diagnosis and treatment. Oncology data models were observed in a virtual reality environment to analyse gene expression and clinical data from a cohort of cancer patients. The technology enables a new way to view information from the outside in (exocentric view) and the inside out (egocentric view), which is otherwise not possible on ordinary displays. This paper presents a usability study on the exocentric and egocentric views of biomedical data visualisation in virtual reality and their impact on usability, human behaviour, and perception. Our study revealed that the performance time was faster in the exocentric view than in the egocentric view. The exocentric view also received higher ease-of-use scores than the egocentric view. However, the influence of usability on time performance was only evident in the egocentric view. The findings of this study could be used to guide future development and refinement of visualisation tools in virtual reality.
(This article belongs to the Section Mixed, Augmented and Virtual Reality)

40 pages, 12295 KiB  
Article
Method for Assessing the Influence of Phobic Stimuli in Virtual Simulators
by Artem Obukhov, Mikhail Krasnyanskiy, Andrey Volkov, Alexandra Nazarova, Daniil Teselkin, Kirill Patutin and Darya Zajceva
J. Imaging 2023, 9(10), 195; https://doi.org/10.3390/jimaging9100195 - 25 Sep 2023
Cited by 1 | Viewed by 1718
Abstract
In organizing professional training, assessing the trainee’s reaction and state in stressful situations is of great importance. Phobic reactions are a specific type of stress reaction that is rarely taken into account when developing virtual simulators, yet they are a risk factor in the workplace. A method for evaluating the impact of various phobic stimuli on the quality of training is considered, which takes into account the time, accuracy, and speed of performing professional tasks, as well as the characteristics of electroencephalograms (the amplitude, power, coherence, Hurst exponent, and degree of interhemispheric asymmetry). To evaluate the impact of phobias during experimental research, participants in the experimental group performed exercises in different environments: under normal conditions and under the influence of acrophobic and arachnophobic stimuli. The participants were divided into subgroups using clustering algorithms and an expert neurologist. After that, a comparison of the subgroup metrics was carried out. The research conducted partially confirms our hypotheses about the negative impact of phobic effects on some participants in the experimental group. The relationship between the reaction to a phobia and the characteristics of brain activity was revealed, and the characteristics of the electroencephalogram signal were considered as the metrics for detecting a phobic reaction.
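Of the EEG characteristics listed, the Hurst exponent is the least standard; the sketch below shows one common rescaled-range (R/S) estimator for a single channel. The window sizes and the synthetic signal are illustrative assumptions, not the authors' processing parameters.

```python
# Sketch of a rescaled-range (R/S) Hurst exponent estimate for one EEG channel.
import numpy as np

def hurst_rs(signal, window_sizes=(32, 64, 128, 256, 512)):
    """Estimate the Hurst exponent by regressing log(R/S) on log(window size)."""
    signal = np.asarray(signal, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(signal) - n + 1, n):
            window = signal[start:start + n]
            dev = np.cumsum(window - window.mean())   # cumulative deviation from the mean
            r = dev.max() - dev.min()                 # range of the cumulative deviation
            s = window.std()                          # standard deviation of the window
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_values)))
    slope, _ = np.polyfit(log_n, log_rs, 1)           # slope approximates the Hurst exponent
    return slope

rng = np.random.default_rng(2)
eeg_like = rng.normal(size=4096)                      # white noise -> H near 0.5
print(round(hurst_rs(eeg_like), 2))
```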
(This article belongs to the Section Mixed, Augmented and Virtual Reality)

15 pages, 27462 KiB  
Article
A Collaborative Virtual Walkthrough of Matera’s Sassi Using Photogrammetric Reconstruction and Hand Gesture Navigation
by Nicla Maria Notarangelo, Gilda Manfredi and Gabriele Gilio
J. Imaging 2023, 9(4), 88; https://doi.org/10.3390/jimaging9040088 - 21 Apr 2023
Cited by 10 | Viewed by 2741
Abstract
The COVID-19 pandemic has underscored the need for real-time, collaborative virtual tools to support remote activities across various domains, including education and cultural heritage. Virtual walkthroughs provide a potent means of exploring, learning about, and interacting with historical sites worldwide. Nonetheless, creating realistic and user-friendly applications poses a significant challenge. This study investigates the potential of collaborative virtual walkthroughs as an educational tool for cultural heritage sites, with a focus on the Sassi of Matera, a UNESCO World Heritage Site in Italy. The virtual walkthrough application, developed using RealityCapture and Unreal Engine, leveraged photogrammetric reconstruction and deep learning-based hand gesture recognition to offer an immersive and accessible experience, allowing users to interact with the virtual environment using intuitive gestures. A test with 36 participants resulted in positive feedback regarding the application’s effectiveness, intuitiveness, and user-friendliness. The findings suggest that virtual walkthroughs can provide precise representations of complex historical locations, promoting tangible and intangible aspects of heritage. Future work should focus on expanding the reconstructed site, enhancing the performance, and assessing the impact on learning outcomes. Overall, this study highlights the potential of virtual walkthrough applications as a valuable resource for architecture, cultural heritage, and environmental education.
(This article belongs to the Special Issue The Roles of the Collaborative eXtended Reality in the New Social Era)

21 pages, 10030 KiB  
Article
VICO-DR: A Collaborative Virtual Dressing Room for Image Consulting
by Gilda Manfredi, Gabriele Gilio, Vincenzo Baldi, Hiba Youssef and Ugo Erra
J. Imaging 2023, 9(4), 76; https://doi.org/10.3390/jimaging9040076 - 26 Mar 2023
Cited by 15 | Viewed by 4210
Abstract
In recent years, extended reality has increasingly been used to enhance the shopping experience for customers. In particular, virtual dressing room applications have begun to appear, allowing customers to try on digital clothes and see how they fit. However, recent studies found that the presence of an AI or a real shopping assistant could improve the virtual dressing room experience. In response to this, we have developed a collaborative synchronous virtual dressing room for image consulting that allows customers to try on realistic digital garments chosen by a remotely connected human image consultant. The application has different features for the image consultant and the customer. The image consultant can connect to the application, define a database of garments, select different outfits with different sizes for the customer to try, and communicate with the customer through a single RGB camera system. The customer-side application can visualize the description of the outfit that the avatar is wearing, as well as the virtual shopping cart. The main purpose of the application is to offer an immersive experience, ensured by the presence of a realistic environment, an avatar that resembles the customer, a real-time physically-based cloth simulation algorithm, and a video-chat system.
(This article belongs to the Section Mixed, Augmented and Virtual Reality)

23 pages, 798 KiB  
Article
ARAM: A Technology Acceptance Model to Ascertain the Behavioural Intention to Use Augmented Reality
by Anabela Marto, Alexandrino Gonçalves, Miguel Melo, Maximino Bessa and Rui Silva
J. Imaging 2023, 9(3), 73; https://doi.org/10.3390/jimaging9030073 - 21 Mar 2023
Cited by 16 | Viewed by 4363
Abstract
The expansion of augmented reality across society, its availability on mobile platforms, and the novelty it brings to a growing number of areas have raised new questions about people’s predisposition to use this technology in their daily lives. Acceptance models, which have been updated following technological breakthroughs and societal changes, are known to be great tools for predicting the intention to use a new technological system. This paper proposes a new acceptance model aiming to ascertain the intention to use augmented reality technology in heritage sites—the Augmented Reality Acceptance Model (ARAM). ARAM relies on the use of the Unified Theory of Acceptance and Use of Technology (UTAUT) model’s constructs, namely performance expectancy, effort expectancy, social influence, and facilitating conditions, to which the new and adapted constructs of trust expectancy, technological innovation, computer anxiety and hedonic motivation are added. This model was validated with data gathered from 528 participants. Results confirm ARAM as a reliable tool to determine the acceptance of augmented reality technology for usage in cultural heritage sites. The direct impact of performance expectancy, facilitating conditions and hedonic motivation is validated as having a positive influence on behavioural intention. Trust expectancy and technological innovation are demonstrated to have a positive influence on performance expectancy, whereas hedonic motivation is negatively influenced by effort expectancy and by computer anxiety. The research, thus, supports ARAM as a suitable model to ascertain the behavioural intention to use augmented reality in new areas of activity.
(This article belongs to the Section Mixed, Augmented and Virtual Reality)

20 pages, 5345 KiB  
Article
Environment-Aware Rendering and Interaction in Web-Based Augmented Reality
by José Ferrão, Paulo Dias, Beatriz Sousa Santos and Miguel Oliveira
J. Imaging 2023, 9(3), 63; https://doi.org/10.3390/jimaging9030063 - 8 Mar 2023
Cited by 6 | Viewed by 5083
Abstract
This work presents a novel framework for web-based environment-aware rendering and interaction in augmented reality based on WebXR and three.js. It aims at accelerating the development of device-agnostic Augmented Reality (AR) applications. The solution allows for a realistic rendering of 3D elements, handles geometry occlusion, casts shadows of virtual objects onto real surfaces, and provides physics interaction with real-world objects. Unlike most existing state-of-the-art systems that are built to run on a specific hardware configuration, the proposed solution targets the web environment and is designed to work on a vast range of devices and configurations. Our solution can use monocular camera setups with depth data estimated by deep neural networks or, when available, use higher-quality depth sensors (e.g., LIDAR, structured light) that provide a more accurate perception of the environment. To ensure consistency in the rendering of the virtual scene, a physically based rendering pipeline is used, in which physically correct attributes are associated with each 3D object; combined with lighting information captured by the device, this enables the rendering of AR content matching the environment illumination. All these concepts are integrated and optimized into a pipeline capable of providing a fluid user experience even on mid-range devices. The solution is distributed as an open-source library that can be integrated into existing and new web-based AR projects. The proposed framework was evaluated and compared in terms of performance and visual features with two state-of-the-art alternatives.
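The framework itself is written in JavaScript on top of WebXR and three.js; the sketch below only illustrates, in NumPy, the depth-based occlusion idea it describes, where a virtual pixel is kept only if it is closer to the camera than the sensed or estimated real-world surface. It is a conceptual illustration, not the library's API.

```python
# Conceptual per-pixel occlusion test: composite virtual content over the camera
# image only where the virtual depth is in front of the real-world depth.
import numpy as np

def composite_with_occlusion(camera_rgb, virtual_rgb, real_depth, virtual_depth):
    """Keep the virtual pixel only if it lies in front of the real scene."""
    visible = virtual_depth < real_depth                     # boolean mask (H, W)
    return np.where(visible[..., None], virtual_rgb, camera_rgb)

# Hypothetical 2x2 example: the right column of the virtual object is hidden
# behind real geometry that is closer to the camera.
camera_rgb = np.zeros((2, 2, 3))
virtual_rgb = np.ones((2, 2, 3))
real_depth = np.array([[2.0, 0.5], [2.0, 0.5]])              # metres to real surfaces
virtual_depth = np.full((2, 2), 1.0)                         # metres to the virtual object
print(composite_with_occlusion(camera_rgb, virtual_rgb, real_depth, virtual_depth))
```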
(This article belongs to the Special Issue The Roles of the Collaborative eXtended Reality in the New Social Era)

13 pages, 6623 KiB  
Article
Remote Interactive Surgery Platform (RISP): Proof of Concept for an Augmented-Reality-Based Platform for Surgical Telementoring
by Yannik Kalbas, Hoijoon Jung, John Ricklin, Ge Jin, Mingjian Li, Thomas Rauer, Shervin Dehghani, Nassir Navab, Jinman Kim, Hans-Christoph Pape and Sandro-Michael Heining
J. Imaging 2023, 9(3), 56; https://doi.org/10.3390/jimaging9030056 - 23 Feb 2023
Cited by 8 | Viewed by 3379
Abstract
The “Remote Interactive Surgery Platform” (RISP) is an augmented reality (AR)-based platform for surgical telementoring. It builds upon recent advances in mixed reality head-mounted displays (MR-HMD) and associated immersive visualization technologies to assist the surgeon during an operation. It enables an interactive, real-time collaboration with a remote consultant by sharing the operating surgeon’s field of view through the Microsoft (MS) HoloLens2 (HL2). Development of the RISP started during the Medical Augmented Reality Summer School 2021 and is still ongoing. It currently includes features such as three-dimensional annotations, bidirectional voice communication and interactive windows to display radiographs within the sterile field. This manuscript provides an overview of the RISP and preliminary results regarding its annotation accuracy and user experience measured with ten participants.

10 pages, 933 KiB  
Concept Paper
Translation of Medical AR Research into Clinical Practice
by Matthias Seibold, José Miguel Spirig, Hooman Esfandiari, Mazda Farshad and Philipp Fürnstahl
J. Imaging 2023, 9(2), 44; https://doi.org/10.3390/jimaging9020044 - 14 Feb 2023
Cited by 6 | Viewed by 2443
Abstract
Translational research is aimed at turning discoveries from basic science into results that advance patient treatment. The translation of technical solutions into clinical use is a complex, iterative process that involves different stages of design, development, and validation, such as the identification of unmet clinical needs, technical conception, development, verification and validation, regulatory matters, and ethics. For this reason, many promising technical developments at the interface of technology, informatics, and medicine remain research prototypes without finding their way into clinical practice. Augmented reality is a technology that is now making its breakthrough into patient care, even though it has been available for decades. In this work, we explain the translational process for Medical AR devices and present associated challenges and opportunities. To the best of the authors’ knowledge, this concept paper is the first to present a guideline for the translation of medical AR research into clinical practice.

10 pages, 5150 KiB  
Article
Verification, Evaluation, and Validation: Which, How & Why, in Medical Augmented Reality System Design
by Roy Eagleson and Leo Joskowicz
J. Imaging 2023, 9(2), 20; https://doi.org/10.3390/jimaging9020020 - 17 Jan 2023
Cited by 3 | Viewed by 2200
Abstract
This paper presents a discussion of the fundamental principles of Analysis of Augmented and Virtual Reality (AR/VR) Systems for Medical Imaging and Computer-Assisted Interventions. The three key concepts of Analysis (Verification, Evaluation, and Validation) are introduced, defined, and illustrated with examples of systems using AR/VR. The concepts of system specifications, measurement accuracy, uncertainty, and observer variability are defined and related to the analysis principles. The concepts are illustrated with examples of working AR/VR systems.

13 pages, 1660 KiB  
Article
CAL-Tutor: A HoloLens 2 Application for Training in Obstetric Sonography and User Motion Data Recording
by Manuel Birlo, Philip J. Eddie Edwards, Soojeong Yoo, Brian Dromey, Francisco Vasconcelos, Matthew J. Clarkson and Danail Stoyanov
J. Imaging 2023, 9(1), 6; https://doi.org/10.3390/jimaging9010006 - 29 Dec 2022
Cited by 5 | Viewed by 3298
Abstract
Obstetric ultrasound (US) training teaches the relationship between foetal anatomy and the viewed US slice to enable navigation to standardised anatomical planes (head, abdomen and femur) where diagnostic measurements are taken. This process is difficult to learn and results in considerable inter-operator variability. We propose the CAL-Tutor system for US training based on a US scanner and phantom, where models of both the baby and the US slice are displayed to the trainee in their physical locations using the HoloLens 2. The intention is that AR guidance will shorten the learning curve for US trainees and improve spatial awareness. In addition to the AR guidance, we also record many data streams to assess user motion and the learning process. The HoloLens 2 provides eye gaze and head and hand positions; ARToolkit and NDI Aurora tracking give the US probe positions; and an external camera records the overall scene. These data can provide a rich source for further analysis, such as distinguishing expert from novice motion. We have demonstrated the system in a sample of engineers. Feedback suggests that the system helps novice users navigate the US probe to the standard plane. The data capture is successful, and initial data visualisations show that meaningful information about user behaviour can be captured. Initial feedback is encouraging and shows improved user assessment where AR guidance is provided.
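As a hypothetical example of how the recorded probe positions might feed further analysis, the sketch below computes two simple motion metrics (path length and mean speed). Treating these particular metrics as expert/novice discriminators is an assumption for illustration, not a result reported in the abstract.

```python
# Sketch of simple motion metrics derived from a stream of US-probe positions.
import numpy as np

def motion_metrics(timestamps, positions):
    """timestamps: (N,) seconds; positions: (N, 3) probe positions in metres."""
    positions = np.asarray(positions, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)
    step_lengths = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    path_length = step_lengths.sum()
    duration = timestamps[-1] - timestamps[0]
    return {"path_length_m": path_length, "mean_speed_m_s": path_length / duration}

# Hypothetical trace: 5 samples over 2 seconds.
t = np.linspace(0.0, 2.0, 5)
p = np.array([[0.00, 0.00, 0.00], [0.02, 0.00, 0.00], [0.04, 0.01, 0.00],
              [0.05, 0.02, 0.00], [0.05, 0.03, 0.01]])
print(motion_metrics(t, p))
```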

21 pages, 26921 KiB  
Article
Medical Augmented Reality: Definition, Principle Components, Domain Modeling, and Design-Development-Validation Process
by Nassir Navab, Alejandro Martin-Gomez, Matthias Seibold, Michael Sommersperger, Tianyu Song, Alexander Winkler, Kevin Yu and Ulrich Eck
J. Imaging 2023, 9(1), 4; https://doi.org/10.3390/jimaging9010004 - 23 Dec 2022
Cited by 30 | Viewed by 6655
Abstract
Three decades after the first set of work on Medical Augmented Reality (MAR) was presented to the international community, and ten years after the deployment of the first MAR solutions into operating rooms, its exact definition, basic components, systematic design, and validation still lack a detailed discussion. This paper defines the basic components of any Augmented Reality (AR) solution and extends them to exemplary Medical Augmented Reality Systems (MARS). We use some of the original MARS applications developed at the Chair for Computer Aided Medical Procedures and deployed into medical schools for teaching anatomy and into operating rooms for telemedicine and surgical guidance throughout the last decades to identify the corresponding basic components. In this regard, the paper does not discuss all past or existing solutions; it aims only at defining the principle components, discussing the particular domain modeling for MAR and its design-development-validation process, and providing exemplary cases through past in-house developments of such solutions.

8 pages, 6592 KiB  
Communication
Remote Training for Medical Staff in Low-Resource Environments Using Augmented Reality
by Austin Hale, Marc Fischer, Laura Schütz, Henry Fuchs and Christoph Leuze
J. Imaging 2022, 8(12), 319; https://doi.org/10.3390/jimaging8120319 - 29 Nov 2022
Cited by 4 | Viewed by 2690
Abstract
This work aims to leverage medical augmented reality (AR) technology to counter the shortage of medical experts in low-resource environments. We present a complete and cross-platform proof-of-concept AR system that enables remote users to teach and train medical procedures without expensive medical equipment or external sensors. By seeing the 3D viewpoint and head movements of the teacher, the student can follow the teacher’s actions on the real patient. Alternatively, it is possible to stream the 3D view of the patient from the student to the teacher, allowing the teacher to guide the student during the remote session. A pilot study of our system shows that it is easy to transfer detailed instructions through this remote teaching system and that the interface is easily accessible and intuitive for users. We provide a performant pipeline that synchronizes, compresses, and streams sensor data using efficient parallel processing.
