Search Results (103)

Search Parameters:
Keywords = human avatars

20 pages, 960 KiB  
Review
Zebrafish as a Model for Translational Immuno-Oncology
by Gabriela Rodrigues Barbosa, Augusto Monteiro de Souza, Priscila Fernandes Silva, Caroline Santarosa Fávero, José Leonardo de Oliveira, Hernandes F. Carvalho, Ana Carolina Luchiari and Leonardo O. Reis
J. Pers. Med. 2025, 15(7), 304; https://doi.org/10.3390/jpm15070304 - 11 Jul 2025
Viewed by 562
Abstract
Despite remarkable progress in cancer immunotherapy, many agents that show efficacy in murine or in vitro models fail to translate clinically. Zebrafish (Danio rerio) have emerged as a powerful complementary model that addresses several limitations of traditional systems. Their optical transparency, genetic tractability, and conserved immune and oncogenic signaling pathways enable high-resolution, real-time imaging of tumor–immune interactions in vivo. Importantly, zebrafish offer a unique opportunity to study the core mechanisms of health and disease, complementing other models and expanding our understanding of fundamental processes in vivo. This review provides an overview of zebrafish immune system development, highlighting tools for tracking innate and adaptive responses. We discuss their application in modeling immune evasion, checkpoint molecule expression, and tumor microenvironment dynamics using transgenic and xenograft approaches. Platforms for high-throughput drug screening and personalized therapy assessment using patient-derived xenografts ("zAvatars") are evaluated, alongside limitations such as temperature sensitivity, immature adaptive immunity in larvae, and interspecies differences in immune responses, tumor complexity, and pharmacokinetics. Emerging frontiers include humanized zebrafish and the testing of next-generation immunotherapies such as CAR T/CAR NK cells and novel checkpoint inhibitors (LAG-3, TIM-3, and TIGIT). We conclude by outlining key challenges and future opportunities for integrating zebrafish into the immuno-oncology pipeline to accelerate clinical translation.
(This article belongs to the Special Issue Advances in Animal Models and Precision Medicine for Cancer Research)

18 pages, 2044 KiB  
Article
Intuitive Recognition of a Virtual Agent’s Learning State Through Facial Expressions in VR
by Wonhyong Lee and Dong Hwan Jin
Electronics 2025, 14(13), 2666; https://doi.org/10.3390/electronics14132666 - 30 Jun 2025
Viewed by 346
Abstract
As artificial intelligence agents become integral to immersive virtual reality environments, their inherent opacity presents a significant challenge to transparent human–agent communication. This study aims to determine whether a virtual agent can effectively communicate its learning state to a user through facial expressions, and to empirically validate a set of designed expressions for this purpose. We designed three animated facial expression sequences for a stylized three-dimensional avatar, each corresponding to a distinct learning outcome: clear success (Case A), mixed performance (Case B), and moderate success (Case C). An initial online survey (n = 93) first confirmed the general interpretability of these expressions, followed by a main experiment in virtual reality (n = 30), where participants identified the agent's state based solely on these visual cues. The results strongly supported our primary hypothesis (H1), with participants achieving a high overall recognition accuracy of approximately 91%. While user background factors did not yield statistically significant differences, observable trends suggest they may be worthy of future investigation. These findings demonstrate that designed facial expressions serve as an effective and intuitive channel for real-time, affective explainable artificial intelligence (affective XAI), contributing a practical, human-centric method for enhancing agent transparency in collaborative virtual environments.
(This article belongs to the Special Issue Advances in Human-Computer Interaction: Challenges and Opportunities)

110 pages, 4617 KiB  
Review
Exploring Experimental Models of Colorectal Cancer: A Critical Appraisal from 2D Cell Systems to Organoids, Humanized Mouse Avatars, Organ-on-Chip, CRISPR Engineering, and AI-Driven Platforms—Challenges and Opportunities for Translational Precision Oncology
by Ahad Al-Kabani, Bintul Huda, Jewel Haddad, Maryam Yousuf, Farida Bhurka, Faika Ajaz, Rajashree Patnaik, Shirin Jannati and Yajnavalka Banerjee
Cancers 2025, 17(13), 2163; https://doi.org/10.3390/cancers17132163 - 26 Jun 2025
Viewed by 2417
Abstract
Background/Objectives: Colorectal cancer (CRC) remains a major global health burden, marked by complex tumor–microenvironment interactions, genetic heterogeneity, and varied treatment responses. Effective preclinical models are essential for dissecting CRC biology and guiding personalized therapeutic strategies. This review aims to critically evaluate current experimental CRC models, assessing their translational relevance, limitations, and potential for integration into precision oncology. Methods: A systematic literature search was conducted across PubMed, Scopus, and Web of Science, focusing on studies employing defined in vitro, in vivo, and emerging integrative CRC models. Studies were included based on experimental rigor and relevance to therapeutic or mechanistic investigation. Models were compared based on molecular fidelity, tumorigenic capacity, immune interactions, and predictive utility. Results: CRC models were classified into in vitro (2D cell lines, spheroids, patient-derived organoids), in vivo (murine, zebrafish, porcine, canine), and integrative platforms (tumor-on-chip systems, humanized mice, AI-augmented simulations). Traditional models offer accessibility and mechanistic insight, while advanced systems better mimic human tumor complexity, immune landscapes, and treatment response. Tumor-on-chip and AI-driven models show promise in simulating dynamic tumor behavior and predicting clinical outcomes. Cross-platform integration enhances translational validity and enables iterative model refinement. Conclusions: Strategic deployment of complementary CRC models is critical for advancing translational research. This review provides a roadmap for aligning model capabilities with specific research goals, advocating for integrated, patient-relevant systems to improve therapeutic development. Enhancing model fidelity and interoperability is key to accelerating bench-to-bedside translation in colorectal cancer care.
(This article belongs to the Special Issue Recent Advances in Basic and Clinical Colorectal Cancer Research)

20 pages, 1159 KiB  
Article
Visualization of a Multidimensional Point Cloud as a 3D Swarm of Avatars
by Leszek Luchowski and Dariusz Pojda
Appl. Sci. 2025, 15(13), 7209; https://doi.org/10.3390/app15137209 - 26 Jun 2025
Viewed by 239
Abstract
This paper proposes an innovative technique for representing multidimensional datasets using icons inspired by Chernoff faces. Our approach combines classical projection techniques with the explicit assignment of selected data dimensions to avatar (facial) features, leveraging the innate human ability to interpret facial traits. We introduce a semantic division of data dimensions into intuitive and technical categories, assigning the former to avatar features and projecting the latter into a four-dimensional (or higher) spatial embedding. The technique is implemented as a plugin for the open-source dpVision visualization platform, enabling users to interactively explore data in the form of a swarm of avatars whose spatial positions and visual features jointly encode various aspects of the dataset. Experimental results with synthetic test data and a 12-dimensional dataset of Portuguese Vinho Verde wines demonstrate that the proposed method enhances interpretability and facilitates the analysis of complex data structures.
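
The core idea of the abstract, splitting dimensions into "intuitive" columns that drive avatar facial features and "technical" columns that are projected to spatial coordinates, can be illustrated with a short sketch. This is an assumed reconstruction, not the dpVision plugin's actual code; the column split and the PCA-style projection are illustrative choices.

```python
import numpy as np

def avatar_swarm(data, intuitive_cols, technical_cols, n_spatial=3):
    """Map 'intuitive' dimensions to avatar facial features and project
    the 'technical' dimensions to spatial coordinates via PCA."""
    faces = data[:, intuitive_cols]      # one feature vector per avatar face
    tech = data[:, technical_cols]
    tech = tech - tech.mean(axis=0)      # center before projection
    # PCA via SVD: the principal axes are the rows of vt
    _, _, vt = np.linalg.svd(tech, full_matrices=False)
    positions = tech @ vt[:n_spatial].T  # spatial embedding of each point
    return positions, faces
```

Each row of `positions` then places one avatar in the swarm, while the matching row of `faces` drives its facial features (e.g., mouth curvature, eye size).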

26 pages, 8159 KiB  
Article
A Combined Mirror–EMG Robot-Assisted Therapy System for Lower Limb Rehabilitation
by Florin Covaciu, Bogdan Gherman, Calin Vaida, Adrian Pisla, Paul Tucan, Andrei Caprariu and Doina Pisla
Technologies 2025, 13(6), 227; https://doi.org/10.3390/technologies13060227 - 3 Jun 2025
Viewed by 2086
Abstract
This paper presents the development and initial evaluation of a novel protocol for robot-assisted lower limb rehabilitation. It integrates dual-modal patient interaction, employing mirror therapy and an auto-adaptive EMG-driven control system, and is designed to enhance lower limb rehabilitation in patients with hemiparesis. The system features a robotic platform specifically engineered for lower limb rehabilitation, which operates in conjunction with a virtual reality (VR) environment. This immersive environment comprises a digital twin of the robotic system alongside a human avatar representing the patient and a set of virtual targets for the patient to reach. To implement mirror therapy, the proposed protocol uses a set of inertial sensors placed on the patient's healthy limb to capture real-time motion data. The auto-adaptive protocol takes as input the EMG signals (if any) from sensors placed on the impaired limb and performs the motions required to reach the virtual targets in the VR application. By synchronizing the motions of the healthy limb with the digital twin in the VR space, the system aims to promote neuroplasticity, reduce pain perception, and encourage engagement in rehabilitation exercises. Initial laboratory trials demonstrate promising outcomes in terms of improved motor function and subject motivation. This research underscores the efficacy of integrating robotics and virtual reality in rehabilitation and opens avenues for advanced personalized therapies in clinical settings. Future work will evaluate the proposed solution with patients to demonstrate clinical usability and will explore the integration of additional feedback mechanisms to further enhance the system's therapeutic efficacy.
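
As a rough illustration of the mirror-therapy mapping (not the authors' implementation, which the abstract does not detail), joint angles captured from the healthy limb can be reflected across the sagittal plane before being applied to the avatar's impaired limb: flexion/extension copies directly, while abduction and axial rotation change sign. The joint names below are hypothetical.

```python
def mirror_joint_angles(healthy: dict) -> dict:
    """Reflect healthy-leg joint angles (degrees) across the sagittal plane
    so the avatar's impaired leg mirrors the motion."""
    flip = {"hip_abduction", "hip_rotation"}  # sign-inverted across the plane
    return {name: -angle if name in flip else angle
            for name, angle in healthy.items()}

# e.g. mirror_joint_angles({"hip_flexion": 30.0, "hip_abduction": 10.0})
# -> {"hip_flexion": 30.0, "hip_abduction": -10.0}
```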

12 pages, 1391 KiB  
Article
Speech Intelligibility in Virtual Avatars: Comparison Between Audio and Audio–Visual-Driven Facial Animation
by Federico Cioffi, Massimiliano Masullo, Aniello Pascale and Luigi Maffei
Acoustics 2025, 7(2), 30; https://doi.org/10.3390/acoustics7020030 - 23 May 2025
Viewed by 1195
Abstract
Speech intelligibility (SI) is critical for effective communication in various settings, although it is often compromised by adverse acoustic conditions. In noisy environments, visual cues such as lip movements and facial expressions, when congruent with auditory information, can significantly enhance speech perception and reduce cognitive effort. As virtual environments continue to spread, communication through virtual avatars is becoming increasingly prevalent, requiring a comprehensive understanding of these dynamics to ensure effective interactions. The present study used Unreal Engine's MetaHuman technology to compare four methodologies for creating facial animation: MetaHuman Animator (MHA), MetaHuman LiveLink (MHLL), Audio-Driven MetaHuman (ADMH), and Synthesized Audio-Driven MetaHuman (SADMH). Thirty-six word pairs from the Diagnostic Rhyme Test (DRT) were used as input stimuli to create the animations and to compare their intelligibility. To simulate challenging background noise, the animations were mixed with babble noise at a signal-to-noise ratio of −13 dB(A). Participants assessed a total of 144 facial animations. The ADMH condition proved the most intelligible of the four methodologies, probably because its generated facial animations were clearer and more consistent, eliminating distractions such as micro-expressions and the natural variations of human articulation.
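
The −13 dB mixing condition can be reproduced by scaling the babble noise against the speech power. A minimal sketch (ignoring the A-weighting implied by dB(A), and assuming the noise clip is at least as long as the speech) might look like this:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale noise so that 10*log10(P_speech / P_noise) equals snr_db,
    then add it to the speech signal."""
    noise = noise[: len(speech)]                   # trim noise to speech length
    p_speech = np.mean(speech.astype(float) ** 2)  # mean power of speech
    p_noise = np.mean(noise.astype(float) ** 2)
    target_noise_power = p_speech / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_noise_power / p_noise)
```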

9 pages, 765 KiB  
Article
Anthropometric Measurements from a 3D Photogrammetry-Based Digital Avatar: A Non-Experimental Cross-Sectional Study to Assess Reliability and Agreement
by Matteo Briguglio, Marialetizia Latella, Stefano Borghi, Sara Bizzozero, Lucia Imperiali, Thomas W. Wainwright, Jacopo A. Vitale and Giuseppe Banfi
Appl. Sci. 2025, 15(10), 5738; https://doi.org/10.3390/app15105738 - 20 May 2025
Viewed by 647
Abstract
Photogrammetry captures and stitches multiple images together to generate a digital model of the human body, called an avatar, making it potentially useful in healthcare. Its validity for anthropometry remains to be established. We evaluated the reliability and agreement of measurements derived from a three-dimensional digital avatar generated by photogrammetry compared to manual collection. Fifty-three volunteers (aged 34.02 ± 11.94 years, 64% female, body mass index 22.5 kg·m⁻²) were recruited, and twenty-two body regions (neck, armpits, biceps, elbows, wrists, chest, breast, waist, belly, hip, thighs, knees, calves, ankles) were measured by a single rater with a tape measure. Digital measurements were generated from photogrammetry. Intraclass correlation coefficients indicated strong consistency, with agreement above 90% for limb regions such as the biceps, elbows, wrists, thighs, knees, calves, and ankles, while the chest and armpits showed the lowest agreement (<60%). Random errors were low in limb regions, while trunk measurements showed the highest errors (up to >1 cm) and variation. Bland–Altman analysis revealed wider limits of agreement and larger biases for the chest (−2.44 cm), waist and belly (around −1.2 cm), and armpits (around −1.1 cm) compared to the limbs. Our findings suggest that photogrammetry-based digital avatars can be a promising tool for anthropometric assessment, particularly for the limbs, but may require refinement in trunk-related regions.
(This article belongs to the Special Issue Novel Anthropometric Techniques for Health and Nutrition Assessment)
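
For readers unfamiliar with the statistics reported above, the Bland–Altman bias and 95% limits of agreement reduce to a few lines. This is the generic formula, not the study's code; it assumes paired manual and digital measurements for the same subjects, in centimetres:

```python
import numpy as np

def bland_altman(manual, digital):
    """Return the mean difference (bias) and 95% limits of agreement
    between two measurement methods applied to the same subjects."""
    diff = np.asarray(digital, float) - np.asarray(manual, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A negative bias, as reported for the chest (−2.44 cm), means the digital avatar under-measured relative to the tape measure on average.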

18 pages, 2982 KiB  
Article
The Development of an Emotional Embodied Conversational Agent and the Evaluation of the Effect of Response Delay on User Impression
by Simon Christophe Jolibois, Akinori Ito and Takashi Nose
Appl. Sci. 2025, 15(8), 4256; https://doi.org/10.3390/app15084256 - 11 Apr 2025
Viewed by 1549
Abstract
Embodied conversational agents (ECAs) are autonomous interaction interfaces designed to communicate with humans. This study investigates the impact of response delays and emotional facial expressions of ECAs on user perception and engagement. The motivation for this study stems from the growing integration of ECAs in various sectors, where their ability to mimic human-like interactions significantly enhances user experience. To this end, we developed an ECA with multimodal emotion recognition, combining voice and facial feature recognition with emotional facial expressions on the agent's avatar. The system generates answers in real time based on media content. The development was supported by a case study of artwork images in which the agent played the role of a museum curator and the user asked the agent for information about the artwork. We evaluated the developed system in two respects. First, we investigated how the delay in the agent's responses influences user satisfaction and perception. Second, we explored the role of emotion in an ECA's face in shaping the user's perception of responsiveness. The results showed that a longer response delay negatively impacted the user's perception of responsiveness when the ECA did not express emotion, while emotional expression improved perceived responsiveness.
(This article belongs to the Special Issue Human–Computer Interaction and Virtual Environments)

14 pages, 2113 KiB  
Article
Immersive Virtual Reality for Enabling Patient Experience and Enrollment in Oncology Clinical Trials: A Feasibility Study
by Frank Tsai, Landon Gray, Amy Mirabella, Margaux Steinbach, Jacqueline M. Garrick, Nadine J. Barrett, Nelson Chao and Frederic Zenhausern
Cancers 2025, 17(7), 1148; https://doi.org/10.3390/cancers17071148 - 29 Mar 2025
Viewed by 837
Abstract
Background/Objectives: Informed consent is a crucial part of the clinical trial enrollment process in which patients are asked to understand and provide approval for medical interventions. Consent forms can be complex and hinder patient comprehension, highlighting the need for novel tools to improve the patient enrollment experience. This feasibility study aimed to develop an immersive technology to enroll human subjects in oncology clinical trials and provide 3D avatar-based informed consent in a virtual reality (VR) environment. Methods: Clinical feasibility and the effects of head-mounted VR devices on motion sickness and educational quality were evaluated in adult oncology patients enrolled in an intravenous (IV) port placement intervention study. Participants received before-and-after questionnaires to measure their understanding of the information received in VR. A follow-up questionnaire was given four weeks post-consent to measure knowledge retention. Results: Clinical staff reported that the VR technology was manageable to use. All 16 adult participants tolerated VR well, with no motion sickness reported. The mean pre-intervention knowledge score was 64.6%, and the immediate post-intervention knowledge score was 97.9%. A mean knowledge score of 93.3% four weeks post-consent was observed among the 10/16 participants who completed a follow-up questionnaire. Conclusions: These findings indicate that VR is well tolerated and effective at delivering information during the informed consent process for oncology clinical trials. Key limitations include the small sample size and single clinical population. Further trials are warranted to compare its efficacy against traditional consent mechanisms and to include more diverse clinical populations in a wider participant pool.
(This article belongs to the Special Issue Digital Health Technologies in Oncology)

22 pages, 1300 KiB  
Systematic Review
Emerging Roles of 3D Body Scanning in Human-Centric Applications
by Mahendran Balasubramanian and Pariya Sheykhmaleki
Technologies 2025, 13(4), 126; https://doi.org/10.3390/technologies13040126 - 24 Mar 2025
Cited by 1 | Viewed by 2263
Abstract
Three-dimensional (3D) body scanning technology has impacted various fields, from digital anthropometry to healthcare. This paper provides an exhaustive review of the existing literature on applications of 3D body scanning technology in human-centered work. Our systematic analysis of Web of Science and Scopus journal articles revealed six critical themes: product development, healthcare, body shape, anthropometric measurement, avatar creation, and body image. Three-dimensional body scanning is used to design and develop ergonomically sound, well-fitting products. In addition to its application in clothing, footwear, and furniture, its non-invasive and rapid image-capturing capabilities make it an attractive tool for clinical diagnostics and evaluations in healthcare. Given the exponential growth of digital interfaces, 3D avatars and body forms have gained popularity, and scanners facilitate their creation and adoption. Body scanning technology has made possible the creation of anthropometric databases for various populations, from children to baby boomers and from adolescents to pregnant women, which have proven helpful in several applications. This review highlights the growing importance of 3D body scanning technology in various contexts and provides a foundation for researchers and practitioners seeking to understand its utility and implications.
(This article belongs to the Section Manufacturing Technology)

18 pages, 592 KiB  
Article
Exploring Avatar Utilization in Workplace and Educational Environments: A Study on User Acceptance, Preferences, and Technostress
by Cristina Gasch, Alireza Javanmardi, Ameer Khan, Azucena Garcia-Palacios and Alain Pagani
Appl. Sci. 2025, 15(6), 3290; https://doi.org/10.3390/app15063290 - 18 Mar 2025
Viewed by 1178
Abstract
With the rise of virtual avatars in professional, educational, and recreational settings, this study investigates how different avatar types (varying in realism, gender, and identity) affect user perceptions of embodiment, acceptability, technostress, privacy, and preferences. Two studies were conducted, with 42 participants in Study 1 and 40 in Study 2, including professionals and students with varying VR experience. In Study 1, participants used pre-assigned avatars that they could control during interactions. In Study 2, an interviewer used different avatars to interact with participants and assess their impact. Questionnaires and correlation analyses measured embodiment, technostress, privacy, and preference variations across contexts. Results showed that hyper-realistic avatars resembling the user enhanced perceived embodiment and credibility in professional and educational settings, while non-realistic avatars were preferred in recreational contexts, particularly when interacting with strangers. Technostress was generally low, though younger users were more sensitive to avatar appearance, and privacy concerns increased when avatars were controlled by others. Gender differences emerged, with women expressing more concern about appearance and men preferring same-gender avatars in professional environments. These findings highlight the need for VR platform designers to balance realism with user comfort and to address privacy concerns in order to encourage broader adoption in professional and educational applications.
(This article belongs to the Special Issue Emerging Technologies of Human-Computer Interaction)

23 pages, 5646 KiB  
Article
Enhancing Security and Authenticity in Immersive Environments
by Rebecca Acheampong, Dorin-Mircea Popovici, Titus Balan, Alexandre Rekeraho and Manuel Soto Ramos
Information 2025, 16(3), 191; https://doi.org/10.3390/info16030191 - 1 Mar 2025
Viewed by 1057
Abstract
Immersive environments have transformed human–computer interaction by enabling realistic and interactive experiences within simulated or augmented spaces. In these environments, virtual assets such as custom avatars, digital artwork, and virtual real estate play an important role, often holding substantial value in both virtual and real worlds. However, this value also makes them attractive targets for fraud. As a result, ensuring the authenticity and integrity of virtual assets is a pressing concern. This study proposes a cryptographic solution that leverages digital signatures and hash algorithms to secure virtual assets in immersive environments. The system employs RSA-2048 for signing and SHA-256 hashing to bind the digital signature to the asset's data, preventing tampering and forgery. Our experimental evaluation demonstrates that the signing process operates efficiently: over ten trials, the signing time averaged 17.3 ms, with a narrow range of 16–19 ms and a standard deviation of 1.1 ms. Verification times were near-instantaneous (0–1 ms), ensuring real-time responsiveness. Moreover, the signing process incurred a minimal memory footprint of approximately 4 KB, highlighting the system's suitability for resource-constrained VR applications. Simulations of tampering and forgery attacks further validated the system's capability to detect unauthorized modifications, with a 100% detection rate observed across multiple trials. While the system currently employs RSA, which may be vulnerable to quantum computing in the future, its modular design ensures crypto-agility, allowing quantum-resistant algorithms to be integrated as needed. This work not only addresses immediate security challenges in immersive environments but also lays the groundwork for broader applications, including regulatory compliance for financial virtual assets.
(This article belongs to the Collection Augmented Reality Technologies, Systems and Applications)
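
The sign-then-verify flow described above maps directly onto standard primitives. A minimal sketch with Python's cryptography package follows; the asset payload and the padding scheme are assumptions, since the abstract does not state the paper's exact parameters.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# RSA-2048 key pair, matching the key size reported in the abstract
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

asset_data = b"avatar-mesh-bytes..."  # hypothetical serialized virtual asset

# Sign: SHA-256 binds the signature to the asset's bytes
signature = private_key.sign(asset_data, padding.PKCS1v15(), hashes.SHA256())

# Verify: raises cryptography.exceptions.InvalidSignature if the asset
# bytes were tampered with or the signature was forged
private_key.public_key().verify(
    signature, asset_data, padding.PKCS1v15(), hashes.SHA256()
)
```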

24 pages, 30185 KiB  
Article
3D Digital Human Generation from a Single Image Using Generative AI with Real-Time Motion Synchronization
by Myeongseop Kim, Taehyeon Kim and Kyung-Taek Lee
Electronics 2025, 14(4), 777; https://doi.org/10.3390/electronics14040777 - 17 Feb 2025
Cited by 1 | Viewed by 3431
Abstract
The generation of 3D digital humans has traditionally relied on multi-view imaging systems and large-scale datasets, posing challenges in cost, accessibility, and real-time applicability. To overcome these limitations, this study presents an efficient pipeline that constructs high-fidelity 3D digital humans from a single frontal image. By leveraging generative AI, the system synthesizes additional views and generates UV maps compatible with the SMPL-X model, ensuring anatomically accurate and photorealistic reconstructions. The generated 3D models are imported into Unity 3D, where they are rigged for real-time motion synchronization using BlazePose-based lightweight pose estimation. To further enhance motion realism, custom algorithms, including ground detection and rotation smoothing, are applied to improve movement stability and fluidity. The system was rigorously evaluated through both quantitative and qualitative analyses. Results show an average generation time of 211.1 s, segmentation accuracy of 92.1%, and real-time rendering at 64.4 FPS. In qualitative assessments, expert reviewers rated the system using the SUS usability framework and heuristic evaluation, confirming its usability and effectiveness. This method eliminates the need for multi-view cameras or depth sensors, significantly lowering the barrier to entry for real-time 3D avatar creation and interactive AI-driven applications. It has broad applications in virtual reality (VR), gaming, digital content creation, AI-driven simulation, digital twins, and telepresence systems. By introducing a scalable and accessible 3D modeling pipeline, this research lays the groundwork for future advancements in immersive and interactive environments.
(This article belongs to the Special Issue AI Synergy: Vision, Language, and Modality)
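
The abstract does not spell out the rotation-smoothing algorithm; one common way to smooth per-joint rotations from a pose estimator like BlazePose is to slerp each frame's estimated quaternion toward the previous filtered value. The sketch below is an assumed illustration of that idea, not the paper's implementation.

```python
import numpy as np

def smooth_rotation(prev_q, new_q, alpha=0.25):
    """Spherical interpolation from the previous filtered quaternion toward
    the newly estimated one; smaller alpha means heavier smoothing."""
    prev_q, new_q = np.asarray(prev_q, float), np.asarray(new_q, float)
    if np.dot(prev_q, new_q) < 0.0:  # keep the shorter great-circle arc
        new_q = -new_q
    dot = np.clip(np.dot(prev_q, new_q), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < 1e-6:                 # quaternions nearly identical
        return new_q
    s = np.sin(theta)
    out = (np.sin((1 - alpha) * theta) / s) * prev_q \
        + (np.sin(alpha * theta) / s) * new_q
    return out / np.linalg.norm(out)  # renormalize against numerical drift
```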

25 pages, 6644 KiB  
Review
Intelligent Virtual Reality and Augmented Reality Technologies: An Overview
by Georgios Lampropoulos
Future Internet 2025, 17(2), 58; https://doi.org/10.3390/fi17020058 - 2 Feb 2025
Cited by 5 | Viewed by 3158
Abstract
Research into artificial intelligence (AI), the metaverse, and extended reality (XR) technologies, such as augmented reality (AR), virtual reality (VR), and mixed reality (MR), has expanded in recent years. This study provides an overview of the combination of AI with XR technologies and the metaverse through an examination of 880 articles using different approaches. The field has experienced a 91.29% increase in its annual growth rate, and although it is still in its infancy, the outcomes of this study highlight the potential of these technologies to be effectively combined and applied across various domains, transforming and enriching them. Through content analysis and topic modeling, the main topics and areas in which this combination is being researched and applied are as follows: (1) "Education/Learning/Training", (2) "Healthcare and Medicine", (3) "Generative artificial intelligence/Large language models", (4) "Virtual worlds/Virtual avatars/Virtual assistants", (5) "Human-computer interaction", (6) "Machine learning/Deep learning/Neural networks", (7) "Communication networks", (8) "Industry", (9) "Manufacturing", (10) "E-commerce", (11) "Entertainment", (12) "Smart cities", and (13) "New technologies" (e.g., digital twins, blockchain, internet of things, etc.). The study explores the documents along various dimensions and concludes by presenting existing limitations, identifying key challenges, and offering suggestions for future research.

21 pages, 2476 KiB  
Article
Enhancing Human–Agent Interaction via Artificial Agents That Speculate About the Future
by Casey C. Bennett, Young-Ho Bae, Jun-Hyung Yoon, Say Young Kim and Benjamin Weiss
Future Internet 2025, 17(2), 52; https://doi.org/10.3390/fi17020052 - 21 Jan 2025
Viewed by 1273
Abstract
Human communication in daily life entails not only talking about what we are currently doing or will do, but also speculating about future possibilities that may (or may not) occur, i.e., "anticipatory speech". Such conversations are central to social cooperation and social cohesion in humans. This suggests that such capabilities may also be critical for developing improved speech systems for artificial agents, e.g., in human–agent interaction (HAI) and human–robot interaction (HRI). To do so successfully, however, it is imperative that we understand how anticipatory speech may affect the behavior of human users and, subsequently, the behavior of the agent or robot. Moreover, such effects may vary across cultures and languages. To that end, we conducted an experiment in which a human and an autonomous 3D virtual avatar interacted in a cooperative gameplay environment. The experiment included 40 participants across two languages (20 English, 20 Korean), with the artificial agent's anticipatory speech either enabled or disabled. The results showed that anticipatory speech significantly altered the speech patterns and turn-taking behavior of both the human and the agent, but these effects varied depending on the language spoken. We discuss how such novel communication forms hold potential for enhancing HAI/HRI, as well as for the development of mixed reality and virtual reality interactive systems for human users.
(This article belongs to the Special Issue Human-Centered Artificial Intelligence)
