Search Results (16)

Search Parameters:
Keywords = Video See-Through

30 pages, 22347 KB  
Article
Enhancing V2V Communication by Parsimoniously Leveraging V2N2V Path in Connected Vehicles
by Songmu Heo, Yoo-Seung Song, Seungmo Kang and Hyogon Kim
Sensors 2026, 26(3), 819; https://doi.org/10.3390/s26030819 - 26 Jan 2026
Viewed by 314
Abstract
The rapid proliferation of connected vehicles equipped with both Vehicle-to-Vehicle (V2V) sidelink and cellular interfaces creates new opportunities for real-time vehicular applications, yet achieving ultra-reliable communication without prohibitive cellular costs remains challenging. This paper addresses reliable inter-vehicle video streaming for safety-critical applications such as See-Through for Passing and Obstructed View Assist, which require stringent Service Level Objectives (SLOs) of 50 ms latency with 99% reliability. Through measurements in Seoul urban environments, we characterize the complementary nature of V2V and Vehicle-to-Network-to-Vehicle (V2N2V) paths: V2V provides ultra-low latency (mean 2.99 ms) but imperfect reliability (95.77%), while V2N2V achieves perfect reliability but exhibits high latency variability (P99: 120.33 ms in centralized routing) that violates target SLOs. We propose a hybrid framework that exploits V2V as the primary path while selectively retransmitting only lost packets via V2N2V. The key innovation is a dual loss detection mechanism combining gap-based and timeout-based triggers leveraging Real-Time Protocol (RTP) headers for both immediate response and comprehensive coverage. Trace-driven simulation demonstrates that the proposed framework achieves a 99.96% packet reception rate and 99.71% frame playback ratio, approaching lossless transmission while maintaining cellular utilization at only 5.54%, which is merely 0.84 percentage points above the V2V loss rate. This represents a 7× cost reduction versus PLR Switching (4.2 GB vs. 28 GB monthly) while reducing video stalls by 10×. These results demonstrate that packet-level selective redundancy enables cost-effective ultra-reliable V2X communication at scale. Full article
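The dual loss-detection mechanism the abstract describes (a gap-based trigger on RTP sequence numbers plus a timeout-based trigger for stalled streams) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the class name, the 20 ms deadline, and the re-request policy are all assumptions:

```python
class DualLossDetector:
    """Watches RTP sequence numbers on the primary (V2V) path and decides
    which packets to re-request over the cellular (V2N2V) path."""

    def __init__(self, timeout_ms=20):
        self.timeout_ms = timeout_ms      # illustrative stall deadline
        self.expected_seq = None
        self.last_arrival_ms = None

    def on_packet(self, seq, now_ms):
        """Gap-based trigger: a jump in the sequence number means the
        packets in between were lost; return them for retransmission."""
        lost = []
        if self.expected_seq is not None and seq > self.expected_seq:
            lost = list(range(self.expected_seq, seq))
        self.expected_seq = seq + 1
        self.last_arrival_ms = now_ms
        return lost

    def on_tick(self, now_ms):
        """Timeout-based trigger: if the V2V stream has stalled past the
        deadline, re-request the next expected packet (gap detection alone
        cannot see a loss when nothing arrives after it)."""
        if (self.last_arrival_ms is not None
                and now_ms - self.last_arrival_ms > self.timeout_ms):
            self.last_arrival_ms = now_ms  # suppress duplicate requests
            return [self.expected_seq]
        return []
```

Combining both triggers gives the immediate response (gap) and comprehensive coverage (timeout) the abstract attributes to the framework, while cellular traffic stays proportional to the V2V loss rate.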

12 pages, 3049 KB  
Article
Supporting Tremor Rehabilitation Using Optical See-Through Augmented Reality Technology
by Kai Wang, Dong Tan, Zhe Li and Zhi Sun
Sensors 2023, 23(8), 3924; https://doi.org/10.3390/s23083924 - 12 Apr 2023
Cited by 6 | Viewed by 3352
Abstract
Tremor is a movement disorder that significantly impacts an individual’s physical stability and quality of life, and conventional medication or surgery often falls short in providing a cure. Rehabilitation training is, therefore, used as an auxiliary method to mitigate the exacerbation of individual tremors. Video-based rehabilitation training is a form of therapy that allows patients to exercise at home, reducing pressure on rehabilitation institutions’ resources. However, it has limitations in directly guiding and monitoring patients’ rehabilitation, leading to an ineffective training effect. This study proposes a low-cost rehabilitation training system that utilizes optical see-through augmented reality (AR) technology to enable tremor patients to conduct rehabilitation training at home. The system provides one-on-one demonstration, posture guidance, and training progress monitoring to achieve an optimal training effect. To assess the system’s effectiveness, we conducted experiments comparing the movement magnitudes of individuals with tremors in the proposed AR environment and video environment, while also comparing them with standard demonstrators. Participants wore a tremor simulation device during uncontrollable limb tremors, with tremor frequency and amplitude calibrated to typical tremor standards. The results showed that participants’ limb movement magnitudes in the AR environment were significantly higher than those in the video environment, approaching the movement magnitudes of the standard demonstrators. Hence, it can be inferred that individuals receiving tremor rehabilitation in the AR environment experience better movement quality than those in the video environment. Furthermore, participant experience surveys revealed that the AR environment not only provided a sense of comfort, relaxation, and enjoyment but also effectively guided them throughout the rehabilitation process. Full article
(This article belongs to the Special Issue Smart Mobile and Sensing Applications)

23 pages, 3541 KB  
Communication
Design Principles of a Mixed-Reality Shopping Assistant System in Omnichannel Retail
by Shubham Jain, Gabriele Obermeier, Andreas Auinger, Dirk Werth and Gabriel Kiss
Appl. Sci. 2023, 13(3), 1384; https://doi.org/10.3390/app13031384 - 20 Jan 2023
Cited by 10 | Viewed by 4858
Abstract
New digital technologies furnish retail managers with new means to enhance consumer experiences in omnichannel retailing. Conceptual academic literature and industry emphasize the promising use of immersive digital displays and their potential benefits for retailers. In this research, we present the design of a personal shopping assistance system that is based on optical see-through mixed-reality technology. Microsoft HoloLens 2 was leveraged as the archetype to realize this novel system, facilitating consumer information search and decision making. The design incorporates various shopping assistance elements (i.e., product information, reviews, recommendations, product availability, videos, a virtual cart, and an option to buy). Users can interact with these elements with gesture-based inputs to navigate through the interface. A qualitative study with 35 participants was conducted to collect users’ feedback and perceptions about the mixed-reality shopping assistant system. Derived from the qualitative feedback, we propose seven design principles that aim to support future designs and developments of mixed-reality shopping applications for head-mounted displays in omnichannel retail: rigor, informativeness, tangibility, summary, comparability, flexibility and holism. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality - 2nd Volume)

16 pages, 7479 KB  
Article
AR-Supported Supervision of Conditional Autonomous Robots: Considerations for Pedicle Screw Placement in the Future
by Josefine Schreiter, Danny Schott, Lovis Schwenderling, Christian Hansen, Florian Heinrich and Fabian Joeres
J. Imaging 2022, 8(10), 255; https://doi.org/10.3390/jimaging8100255 - 21 Sep 2022
Cited by 7 | Viewed by 3743
Abstract
Robotic assistance is applied in orthopedic interventions for pedicle screw placement (PSP). While current robots do not act autonomously, they are expected to have higher autonomy under surgeon supervision in the mid-term. Augmented reality (AR) is promising to support this supervision and to enable human–robot interaction (HRI). To outline a futuristic scenario for robotic PSP, the current workflow was analyzed through literature review and expert discussion. Based on this, a hypothetical workflow of the intervention was developed, which additionally contains the analysis of the necessary information exchange between human and robot. A video see-through AR prototype was designed and implemented. A robotic arm with an orthopedic drill mock-up simulated the robotic assistance. The AR prototype included a user interface to enable HRI. The interface provides data to facilitate understanding of the robot’s ”intentions”, e.g., patient-specific CT images, the current workflow phase, or the next planned robot motion. Two-dimensional and three-dimensional visualization illustrated patient-specific medical data and the drilling process. The findings of this work contribute a valuable approach in terms of addressing future clinical needs and highlighting the importance of AR support for HRI. Full article

21 pages, 13393 KB  
Article
Extensible Neck: A Gesture Input Method to Extend/Contract Neck Virtually in Video See-through AR Environment
by Shinnosuke Yamazaki, Ayumi Ohnishi, Tsutomu Terada and Masahiko Tsukamoto
Sensors 2022, 22(9), 3559; https://doi.org/10.3390/s22093559 - 7 May 2022
Viewed by 2238
Abstract
With the popularization of head-mounted displays (HMDs), many systems for human augmentation have been developed. This will increase the opportunities to use such systems in daily life. Therefore, the user interfaces for these systems must be designed to be intuitive and highly responsive. This paper proposes an intuitive input method that uses natural gestures as input cues for systems for human augmentation. We investigated the appropriate gestures for a system that expands the movements of the user’s viewpoint by extending and contracting the neck in a video see-through AR environment. We conducted an experiment to investigate natural gestures by observing the motions when a person wants to extend his/her neck. Furthermore, we determined the operation method for extending/contracting the neck and holding the position through additional experiments. Based on this investigation, we implemented a prototype of the proposed system in a VR environment. Note that we employed a VR environment since we could test our method in various situations, although our target environment is AR. We compared the operability of the proposed method and the handheld controller using our prototype. The results confirmed that the participants felt more immersed using our method, although the positioning speed using controller input was faster than that of our method. Full article
(This article belongs to the Section Sensing and Imaging)

15 pages, 3025 KB  
Article
Architecture of a Hybrid Video/Optical See-through Head-Mounted Display-Based Augmented Reality Surgical Navigation Platform
by Marina Carbone, Fabrizio Cutolo, Sara Condino, Laura Cercenelli, Renzo D’Amato, Giovanni Badiali and Vincenzo Ferrari
Information 2022, 13(2), 81; https://doi.org/10.3390/info13020081 - 8 Feb 2022
Cited by 30 | Viewed by 6863
Abstract
In the context of image-guided surgery, augmented reality (AR) represents a ground-breaking enticing improvement, mostly when paired with wearability in the case of open surgery. Commercially available AR head-mounted displays (HMDs), designed for general purposes, are increasingly used outside their indications to develop surgical guidance applications with the ambition to demonstrate the potential of AR in surgery. The applications proposed in the literature underline the hunger for AR-guidance in the surgical room together with the limitations that hinder commercial HMDs from being the answer to such a need. The medical domain demands specifically developed devices that address, together with ergonomics, the achievement of surgical accuracy objectives and compliance with medical device regulations. In the framework of an EU Horizon2020 project, a hybrid video and optical see-through augmented reality headset paired with a software architecture, both specifically designed to be seamlessly integrated into the surgical workflow, has been developed. In this paper, the overall architecture of the system is described. The developed AR HMD surgical navigation platform was positively tested on seven patients to aid the surgeon while performing Le Fort 1 osteotomy in cranio-maxillofacial surgery, demonstrating the value of the hybrid approach and the safety and usability of the navigation platform. Full article
(This article belongs to the Collection Augmented Reality Technologies, Systems and Applications)

17 pages, 18800 KB  
Article
AudienceMR: Extending the Local Space for Large-Scale Audience with Mixed Reality for Enhanced Remote Lecturer Experience
by Bin Han and Gerard Jounghyun Kim
Appl. Sci. 2021, 11(19), 9022; https://doi.org/10.3390/app11199022 - 28 Sep 2021
Cited by 8 | Viewed by 3172
Abstract
AudienceMR is designed as a multi-user mixed reality space that seamlessly extends the local user space to become a large, shared classroom where some of the audience members are seen seated in a real space, and more members are seen through an extended portal. AudienceMR can provide a sense of the presence of a large-scale crowd/audience with the associated spatial context. In contrast to virtual reality (VR), however, with mixed reality (MR), a lecturer can deliver content or conduct a performance from a real, actual, comfortable, and familiar local space, while interacting directly with real nearby objects, such as a desk, podium, educational props, instruments, and office materials. Such a design will elicit a realistic user experience closer to an actual classroom, which is currently prohibitive owing to the COVID-19 pandemic. This paper validated our hypothesis by conducting a comparative experiment assessing the lecturer’s experience with two independent variables: (1) an online classroom platform type, i.e., a 2D desktop video teleconference, a 2D video screen grid in VR, 3D VR, and AudienceMR, and (2) a student depiction, i.e., a 2D upper-body video screen and a 3D full-body avatar. Our experiment validated that AudienceMR exhibits a level of anxiety and fear of public speaking closer to that of a real classroom situation, and a higher social and spatial presence than 2D video grid-based solutions and even 3D VR. Compared to 3D VR, AudienceMR offers a more natural and easily usable real object-based interaction. Most subjects preferred AudienceMR over the alternatives despite the nuisance of having to wear a video see-through headset. Such qualities will result in information conveyance and an educational efficacy comparable to those of a real classroom, and better than those achieved through popular 2D desktop teleconferencing or immersive 3D VR solutions. Full article

12 pages, 1586 KB  
Article
Can Liquid Lenses Increase Depth of Field in Head Mounted Video See-Through Devices?
by Marina Carbone, Davide Domeneghetti, Fabrizio Cutolo, Renzo D’Amato, Emanuele Cigna, Paolo Domenico Parchi, Marco Gesi, Luca Morelli, Mauro Ferrari and Vincenzo Ferrari
J. Imaging 2021, 7(8), 138; https://doi.org/10.3390/jimaging7080138 - 5 Aug 2021
Cited by 6 | Viewed by 3500
Abstract
Wearable Video See-Through (VST) devices for Augmented Reality (AR) and for obtaining a Magnified View are taking hold in the medical and surgical fields. However, these devices are not yet usable in daily clinical practice, due to focusing problems and a limited depth of field. This study investigates the use of liquid-lens optics to create an autofocus system for wearable VST visors. The autofocus system is based on a Time of Flight (TOF) distance sensor and an active autofocus control system. The integrated autofocus system in the wearable VST viewers showed good potential in terms of providing rapid focus at various distances and a magnified view. Full article
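The autofocus idea described above (a Time of Flight distance reading driving the liquid lens) reduces to mapping distance to optical power and stepping the lens toward it. This is a hypothetical sketch using the thin-lens relation; the per-frame rate limit and the calibration offset are illustrative assumptions, not the paper's control law:

```python
def focus_power_dpt(distance_mm, offset_dpt=0.0):
    """Optical power (diopters) needed to focus at distance_mm:
    P = 1000 / distance_mm, plus a fixed calibration offset."""
    return 1000.0 / distance_mm + offset_dpt

def autofocus_step(tof_distance_mm, current_dpt, max_step_dpt=0.5):
    """Move the lens toward the target power, rate-limited per frame
    to avoid visible focus jumps."""
    target = focus_power_dpt(tof_distance_mm)
    delta = max(-max_step_dpt, min(max_step_dpt, target - current_dpt))
    return current_dpt + delta
```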

24 pages, 5581 KB  
Article
A Study on Persistence of GAN-Based Vision-Induced Gustatory Manipulation
by Kizashi Nakano, Daichi Horita, Norihiko Kawai, Naoya Isoyama, Nobuchika Sakata, Kiyoshi Kiyokawa, Keiji Yanai and Takuji Narumi
Electronics 2021, 10(10), 1157; https://doi.org/10.3390/electronics10101157 - 13 May 2021
Cited by 6 | Viewed by 4903
Abstract
Vision-induced gustatory manipulation interfaces can help people with dietary restrictions feel as if they are eating what they want by modulating the appearance of the alternative foods they are eating in reality. However, it is still unclear whether vision-induced gustatory change persists beyond a single bite, how the sensation changes over time, and how it varies among individuals from different cultural backgrounds. The present paper reports on a user study conducted to answer these questions using a generative adversarial network (GAN)-based real-time image-to-image translation system. In the user study, 16 participants were presented with somen noodles or steamed rice through a video see-through head-mounted display (HMD) in two conditions: without and with visual modulation (somen noodles and steamed rice were translated into ramen noodles and curry and rice, respectively). They brought the food to their mouths and tasted it five times at two-minute intervals. The results of the experiments revealed that vision-induced gustatory manipulation is persistent in many participants. Their persistent gustatory changes fall into three groups of roughly equal size: those in which the intensity of the gustatory change gradually increased, those in which it gradually decreased, and those in which it did not fluctuate. Although generalizability is limited due to the small population, it was also found that non-Japanese and male participants tended to perceive stronger gustatory manipulation than Japanese and female participants. We believe that our study deepens understanding of and insight into vision-induced gustatory manipulation and encourages further investigation. Full article
(This article belongs to the Special Issue Recent Advances in Virtual Reality and Augmented Reality)

13 pages, 7514 KB  
Article
The Wearable VOSTARS System for Augmented Reality-Guided Surgery: Preclinical Phantom Evaluation for High-Precision Maxillofacial Tasks
by Laura Cercenelli, Marina Carbone, Sara Condino, Fabrizio Cutolo, Emanuela Marcelli, Achille Tarsitano, Claudio Marchetti, Vincenzo Ferrari and Giovanni Badiali
J. Clin. Med. 2020, 9(11), 3562; https://doi.org/10.3390/jcm9113562 - 5 Nov 2020
Cited by 53 | Viewed by 5253
Abstract
Background: In the context of guided surgery, augmented reality (AR) represents a groundbreaking improvement. The Video and Optical See-Through Augmented Reality Surgical System (VOSTARS) is a new AR wearable head-mounted display (HMD), recently developed as an advanced navigation tool for maxillofacial and plastic surgery and other non-endoscopic surgeries. In this study, we report results of phantom tests with VOSTARS aimed to evaluate its feasibility and accuracy in performing maxillofacial surgical tasks. Methods: An early prototype of VOSTARS was used. Le Fort 1 osteotomy was selected as the experimental task to be performed under VOSTARS guidance. A dedicated set-up was prepared, including the design of a maxillofacial phantom, an ad hoc tracker anchored to the occlusal splint, and cutting templates for accuracy assessment. Both qualitative and quantitative assessments were carried out. Results: VOSTARS, used in combination with the designed maxilla tracker, showed excellent tracking robustness under operating room lighting. Accuracy tests showed that 100% of Le Fort 1 trajectories were traced with an accuracy of ±1.0 mm, and on average, 88% of the trajectory’s length was within ±0.5 mm accuracy. Conclusions: Our preliminary results suggest that the VOSTARS system can be a feasible and accurate solution for guiding maxillofacial surgical tasks, paving the way to its validation in clinical trials and for a wide spectrum of maxillofacial applications. Full article
(This article belongs to the Special Issue Innovation in Head and Neck Reconstructive Surgery)

13 pages, 19973 KB  
Article
Wearable Augmented Reality Platform for Aiding Complex 3D Trajectory Tracing
by Sara Condino, Benish Fida, Marina Carbone, Laura Cercenelli, Giovanni Badiali, Vincenzo Ferrari and Fabrizio Cutolo
Sensors 2020, 20(6), 1612; https://doi.org/10.3390/s20061612 - 13 Mar 2020
Cited by 38 | Viewed by 7951
Abstract
Augmented reality (AR) Head-Mounted Displays (HMDs) are emerging as the most efficient output medium to support manual tasks performed under direct vision. Despite that, technological and human-factor limitations still hinder their routine use for aiding high-precision manual tasks in the peripersonal space. To overcome such limitations, in this work, we show the results of a user study aimed to validate qualitatively and quantitatively a recently developed AR platform specifically conceived for guiding complex 3D trajectory tracing tasks. The AR platform comprises a new-concept AR video see-through (VST) HMD and a dedicated software framework for the effective deployment of the AR application. In the experiments, the subjects were asked to perform 3D trajectory tracing tasks on 3D-printed replica of planar structures or more elaborated bony anatomies. The accuracy of the trajectories traced by the subjects was evaluated by using templates designed ad hoc to match the surface of the phantoms. The quantitative results suggest that the AR platform could be used to guide high-precision tasks: on average more than 94% of the traced trajectories stayed within an error margin lower than 1 mm. The results confirm that the proposed AR platform will boost the profitable adoption of AR HMDs to guide high precision manual tasks in the peripersonal space. Full article
(This article belongs to the Special Issue Intelligent Sensors in the Industry 4.0 and Smart Factory)

12 pages, 3558 KB  
Article
Real-Time Augmented Reality Physics Simulator for Education
by Nak-Jun Sung, Jun Ma, Yoo-Joo Choi and Min Hong
Appl. Sci. 2019, 9(19), 4019; https://doi.org/10.3390/app9194019 - 25 Sep 2019
Cited by 31 | Viewed by 9731
Abstract
Physics education applications using augmented reality technology, which has been developed extensively in recent years, have a lot of restrictions in terms of performance and accuracy. The purpose of our research is to develop a real-time simulation system for physics education that is based on parallel processing. In this paper, we present a video see-through AR (Augmented Reality) system that includes an environment recognizer using a depth image of Microsoft’s Kinect V2 and a real-time soft body simulator based on parallel processing using GPU (Graphic Processing Unit). Soft body simulation can provide more realistic simulation results than rigid body simulation, so it can be more effective in systems for physics education. We have designed and implemented a system that provides the physical deformation and movement of 3D volumetric objects, and uses them in education. To verify the usefulness of the proposed system, we conducted a questionnaire survey of 10 students majoring in physics education. As a result of the questionnaire survey, 93% of respondents answered that they would like to use it for education. We plan to use the stand-alone AR device including one or more cameras to improve the system in the future. Full article
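As a toy illustration of the soft-body simulation the paper parallelizes on the GPU, here is a serial mass-spring update with explicit Euler integration. All names and constants are illustrative; the authors' GPU implementation is certainly more elaborate:

```python
def step(positions, velocities, springs, dt=0.01, k=50.0, damping=0.98):
    """One explicit-Euler step of a 2D mass-spring system.
    positions/velocities: lists of [x, y]; springs: (i, j, rest_len)."""
    forces = [[0.0, 0.0] for _ in positions]
    for i, j, rest in springs:
        dx = positions[j][0] - positions[i][0]
        dy = positions[j][1] - positions[i][1]
        length = (dx * dx + dy * dy) ** 0.5 or 1e-9
        f = k * (length - rest)               # Hooke's law along the spring
        fx, fy = f * dx / length, f * dy / length
        forces[i][0] += fx; forces[i][1] += fy
        forces[j][0] -= fx; forces[j][1] -= fy
    for p, v, fo in zip(positions, velocities, forces):
        v[0] = (v[0] + fo[0] * dt) * damping  # unit mass assumed
        v[1] = (v[1] + fo[1] * dt) * damping
        p[0] += v[0] * dt
        p[1] += v[1] * dt
    return positions, velocities
```

Each spring force is independent of the others, which is what makes this loop a natural fit for per-spring GPU threads.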
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)

19 pages, 7712 KB  
Article
Wand-Like Interaction with a Hand-Held Tablet Device—A Study on Selection and Pose Manipulation Techniques
by Ali Samini and Karljohan Lundin Palmerius
Information 2019, 10(4), 152; https://doi.org/10.3390/info10040152 - 24 Apr 2019
Cited by 1 | Viewed by 4447
Abstract
Current hand-held smart devices are supplied with powerful processors, high resolution screens, and sharp cameras that make them suitable for Augmented Reality (AR) applications. Such applications commonly use interaction techniques adapted for touch, such as touch selection and multi-touch pose manipulation, mapping 2D gestures to 3D action. To enable direct 3D interaction for hand-held AR, an alternative is to use the changes of the device pose for 6 degrees-of-freedom interaction. In this article we explore selection and pose manipulation techniques that aim to minimize the amount of touch. For this, we explore and study the characteristics of both non-touch selection and non-touch pose manipulation techniques. We present two studies that, on the one hand, compare selection techniques with the common touch selection and, on the other, investigate the effect of user gaze control on the non-touch pose manipulation techniques. Full article
(This article belongs to the Special Issue Human-Centered 3D Interaction and User Interface)

17 pages, 2592 KB  
Article
Contact-Less Real-Time Monitoring of Cardiovascular Risk Using Video Imaging and Fuzzy Inference Rules
by Gabriella Casalino, Giovanna Castellano, Vincenzo Pasquadibisceglie and Gianluca Zaza
Information 2019, 10(1), 9; https://doi.org/10.3390/info10010009 - 29 Dec 2018
Cited by 37 | Viewed by 10655
Abstract
Conventional methods for measuring cardiovascular parameters use skin contact techniques requiring a measuring device to be worn by the user. To avoid discomfort of contact devices, camera-based techniques using photoplethysmography have been recently introduced. Nevertheless, these solutions are typically expensive and difficult to be used daily at home. In this work, we propose an innovative solution for monitoring cardiovascular parameters that is low cost and can be easily integrated within any common home environment. The proposed system is a contact-less device composed of a see-through mirror equipped with a camera that detects the person’s face and processes video frames using photoplethysmography in order to estimate the heart rate, the breath rate and the blood oxygen saturation. In addition, the color of lips is automatically detected via clustering-based color quantization. The estimated parameters are used to predict a risk of cardiovascular disease by means of fuzzy inference rules integrated in the mirror-based monitoring system. Comparing our system to a contact device in measuring vital parameters on still or slightly moving subjects, we achieve measurement errors that are within acceptable margins according to the literature. Moreover, in most cases, the response of the fuzzy rule-based system is comparable with that of the clinician in assessing a risk level of cardiovascular disease. Full article
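The photoplethysmography step described above can be illustrated with a toy estimator: track the mean green-channel intensity of the face region over time and find the dominant frequency in the physiological band. This is a generic rPPG sketch under stated assumptions (band limits, grid step), not the authors' pipeline:

```python
import math

def heart_rate_bpm(signal, fps):
    """Estimate pulse rate from a per-frame brightness signal by scanning
    the typical heart-rate band (0.7-3 Hz) for the strongest component."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]     # remove the DC component
    best_freq, best_power = 0.0, -1.0
    freq = 0.7
    while freq <= 3.0:
        re = sum(c * math.cos(2 * math.pi * freq * t / fps)
                 for t, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * freq * t / fps)
                 for t, c in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_freq, best_power = freq, power
        freq += 0.05
    return best_freq * 60.0                   # Hz -> beats per minute

# Synthetic 1.2 Hz (72 bpm) pulse sampled at 30 fps:
pulse = [math.sin(2 * math.pi * 1.2 * t / 30) for t in range(300)]
```

Breath rate can be estimated the same way over a lower band (roughly 0.1 to 0.5 Hz).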
(This article belongs to the Special Issue eHealth and Artificial Intelligence)

20 pages, 12504 KB  
Article
Perspective Preserving Solution for Quasi-Orthoscopic Video See-Through HMDs
by Fabrizio Cutolo, Umberto Fontana and Vincenzo Ferrari
Technologies 2018, 6(1), 9; https://doi.org/10.3390/technologies6010009 - 13 Jan 2018
Cited by 28 | Viewed by 8066
Abstract
In non-orthoscopic video see-through (VST) head-mounted displays (HMDs), depth perception through stereopsis is adversely affected by sources of spatial perception errors. Solutions for parallax-free and orthoscopic VST HMDs have been proposed to ensure proper space perception, but at the expense of increased bulkiness and weight. In this work, we present a hybrid video-optical see-through HMD whose geometry explicitly violates the rigorous conditions of orthostereoscopy. To properly recover natural stereo fusion of the scene within the personal space in a region around a predefined distance from the observer, we partially resolve the eye-camera parallax by warping the camera images through a perspective-preserving homography that accounts for the geometry of the VST HMD and refers to that distance. To validate our solution, we conducted objective and subjective tests aimed at assessing its efficacy in recovering natural depth perception in the space around the reference distance. The results showed that the quasi-orthoscopic setting of the HMD, together with the perspective-preserving image warping, allows a correct perception of relative depths to be recovered. The perceived distortion of space around the reference plane proved not to be as severe as predicted by the mathematical models. Full article
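The perspective-preserving warp described above is a plane-induced homography. For the simplified case of identical intrinsics, no rotation, and a fronto-parallel reference plane at depth d, it reduces to H = K (I + t n^T / d) K^-1, which for a purely horizontal eye-camera offset is just a pixel shift of f*tx/d. This sketch and its numbers are illustrative, not the paper's calibration:

```python
def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def parallax_homography(f, cx, cy, tx_mm, d_mm):
    """Homography mapping camera pixels to eye pixels for a fronto-parallel
    plane at depth d_mm, camera offset tx_mm along x, identical intrinsics,
    no rotation (deliberate simplifications)."""
    K = [[f, 0, cx], [0, f, cy], [0, 0, 1]]
    K_inv = [[1 / f, 0, -cx / f], [0, 1 / f, -cy / f], [0, 0, 1]]
    M = [[1, 0, tx_mm / d_mm], [0, 1, 0], [0, 0, 1]]   # I + t*n^T / d
    return matmul(matmul(K, M), K_inv)

def warp_point(H, x, y):
    """Apply a homography to a pixel (with perspective division)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Points on the reference plane are mapped exactly; points off the plane retain a residual parallax that grows with their distance from it, which is why the evaluation focuses on a region around the reference distance.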
(This article belongs to the Special Issue Wearable Technologies)
