Search Results (2,307)

Search Parameters:
Keywords = human-technology interaction

36 pages, 6020 KiB  
Article
“It Felt Like Solving a Mystery Together”: Exploring Virtual Reality Card-Based Interaction and Story Co-Creation Collaborative System Design
by Yaojiong Yu, Mike Phillips and Gianni Corino
Appl. Sci. 2025, 15(14), 8046; https://doi.org/10.3390/app15148046 - 19 Jul 2025
Abstract
Virtual reality interaction design and story co-creation design for multiple users is an interdisciplinary research field that merges human–computer interaction, creative design, and virtual reality technologies. Story co-creation design enables multiple users to collectively generate and share narratives, allowing them to contribute to the storyline, modify plot trajectories, and craft characters, thereby facilitating a dynamic storytelling experience. Through advanced virtual reality interaction design, collaboration and social engagement can be further enriched to encourage active participation. This study investigates the facilitation of narrative creation and enhancement of storytelling skills in virtual reality by leveraging existing research on story co-creation design and virtual reality technology. Subsequently, we developed and evaluated the virtual reality card-based collaborative storytelling platform Co-Relay. By analyzing interaction data and user feedback obtained from user testing and experimental trials, we observed substantial enhancements in user engagement, immersion, creativity, and fulfillment of emotional and social needs compared to a conventional web-based storytelling platform. The primary contribution of this study lies in demonstrating how the incorporation of story co-creation can elevate storytelling proficiency, plot development, and social interaction within the virtual reality environment. Our novel methodology offers a fresh outlook on the design of collaborative narrative creation in virtual reality, particularly by integrating participatory multi-user storytelling platforms that blur the traditional boundaries between creators and audiences, as well as between fiction and reality. Full article
(This article belongs to the Special Issue Extended Reality (XR) and User Experience (UX) Technologies)

30 pages, 2282 KiB  
Article
User Experience of Navigating Work Zones with Automated Vehicles: Insights from YouTube on Challenges and Strengths
by Melika Ansarinejad, Kian Ansarinejad, Pan Lu and Ying Huang
Smart Cities 2025, 8(4), 120; https://doi.org/10.3390/smartcities8040120 - 19 Jul 2025
Abstract
Understanding automated vehicle (AV) behavior in complex road environments and user attitudes in such contexts is critical for their safe and effective integration into smart cities. Despite growing deployment, limited public data exist on AV performance in construction zones: highly dynamic settings marked by irregular lane markings, shifting detours, and unpredictable human presence. This study investigates AV behavior in these conditions through qualitative, video-based analysis of user-documented experiences on YouTube, focusing on Tesla’s supervised Full Self-Driving (FSD) and Waymo systems. Spoken narration, captions, and subtitles were examined to evaluate AV perception, decision-making, control, and interaction with humans. Findings reveal that while AVs excel in structured tasks such as obstacle detection, lane tracking, and cautious speed control, they face challenges in interpreting temporary infrastructure, responding to unpredictable human actions, and navigating low-visibility environments. These limitations not only impact performance but also influence user trust and acceptance. The study underscores the need for continued technological refinement, improved infrastructure design, and user-informed deployment strategies. By addressing current shortcomings, this research offers critical insights into AV readiness for real-world conditions and contributes to safer, more adaptive urban mobility systems. Full article

6 pages, 766 KiB  
Proceeding Paper
Acoustics of Nature: Rebuilding Human–Plant Connection Through Art and Technology
by Wei Peng
Eng. Proc. 2025, 98(1), 38; https://doi.org/10.3390/engproc2025098038 - 18 Jul 2025
Abstract
An innovative approach is explored to reconnect urban populations with nature through the integration of technology and artistic expression. In a case study of London’s Canary Wharf, environmental sensor data of sound and visual art were analyzed to create new pathways for human–plant interaction. By transforming plant biological data into accessible artistic experiences, interdisciplinary methods spanning environmental science, plant biology, and artistic practice can enhance ecological awareness and engagement. The synthesized approach in this study offers promising solutions for addressing the growing disconnect between urban communities and their natural environment. Full article

21 pages, 588 KiB  
Article
Systemic Configurations of Functional Talent for Green Technological Innovation: A Fuzzy-Set QCA Study
by Mingjie Guo, Menghan Yan, Xin Yan and Yi Li
Systems 2025, 13(7), 604; https://doi.org/10.3390/systems13070604 - 18 Jul 2025
Abstract
Achieving high-level green technological innovation in heavily polluting enterprises is critical for advancing sustainable development, particularly in the context of both organizational and regional digitalization. This study adopts a configurational perspective grounded in the Technology–Organization–Environment (TOE) framework and integrates theoretical insights from resource orchestration, resource dependence, and IT capability theories. It investigates how different types of skilled talent, such as production, technical, sales, and managerial employees, contribute to green innovation under varying digital conditions. By applying fuzzy-set qualitative comparative analysis (fsQCA) to a sample of 96 publicly listed firms from China’s heavily polluting industries, this study identifies four distinct talent-based configurations that can lead to high levels of green innovation: production-centric, management-led, technical talent driven, and regionally enabled models. Each configuration reflects a specific system state in which a core group of skilled employees plays a leading role, supported by complementary functions, and shaped by the interaction between internal digital transformation and the external digital environment. This study contributes to the systems literature by elucidating the combinational roles of digital resources and talent deployment within the systemic TOE framework, and offers practical guidance for enterprises aiming to strategically utilize human capital to enhance green innovation performance amid ongoing digital transformations. Full article

13 pages, 1243 KiB  
Review
Evidence-Based Medicine: Past, Present, Future
by Filippos Triposkiadis and Dirk L. Brutsaert
J. Clin. Med. 2025, 14(14), 5094; https://doi.org/10.3390/jcm14145094 - 17 Jul 2025
Abstract
Early medical traditions include those of ancient Babylonia, China, Egypt, and India. The roots of modern Western medicine, however, go back to ancient Greece. During the Renaissance, physicians increasingly relied on observation and experimentation to understand the human body and develop new techniques for diagnosis and treatment. The discovery of antibiotics, antiseptics, and other drugs in the 19th century accelerated the development of modern medicine, the latter being fueled further by advances in technology, research, a better understanding of the human body, and, most recently, the introduction of evidence-based medicine (EBM). The EBM model de-emphasized intuition, unsystematic clinical experience, and pathophysiologic rationale as sufficient grounds for clinical decision-making and stressed the examination of evidence from clinical research. A later EBM model additionally incorporated clinical expertise, and the latest EBM model added patients’ preferences and actions. In this review article, we argue that in the era of precision medicine, major EBM principles must be based on (a) the systematic identification, analysis, and utilization of big data using artificial intelligence; (b) the magnifying effect of medical interventions by means of the physician–patient interaction, the latter being guided by the physician’s expertise, intuition, and philosophical beliefs; and (c) patient preferences, since, in healthcare under precision medicine, the patient will be a central stakeholder contributing data and actively participating in shared decision-making. Full article
(This article belongs to the Section Clinical Research Methods)

81 pages, 11973 KiB  
Article
Designing and Evaluating XR Cultural Heritage Applications Through Human–Computer Interaction Methods: Insights from Ten International Case Studies
by Jolanda Tromp, Damian Schofield, Pezhman Raeisian Parvari, Matthieu Poyade, Claire Eaglesham, Juan Carlos Torres, Theodore Johnson, Teele Jürivete, Nathan Lauer, Arcadio Reyes-Lecuona, Daniel González-Toledo, María Cuevas-Rodríguez and Luis Molina-Tanco
Appl. Sci. 2025, 15(14), 7973; https://doi.org/10.3390/app15147973 - 17 Jul 2025
Abstract
Advanced three-dimensional extended reality (XR) technologies are highly suitable for cultural heritage research and education. XR tools enable the creation of realistic virtual or augmented reality applications for curating and disseminating information about cultural artifacts and sites. Developing XR applications for cultural heritage requires interdisciplinary collaboration involving strong teamwork and soft skills to manage user requirements, system specifications, and design cycles. Given the diverse end-users, achieving high precision, accuracy, and efficiency in information management and user experience is crucial. Human–computer interaction (HCI) design and evaluation methods are essential for ensuring usability and return on investment. This article presents ten case studies of cultural heritage software projects, illustrating the interdisciplinary work between computer science and HCI design. Students from institutions such as the State University of New York (USA), Glasgow School of Art (UK), University of Granada (Spain), University of Málaga (Spain), Duy Tan University (Vietnam), Imperial College London (UK), Research University Institute of Communication & Computer Systems (Greece), Technical University of Košice (Slovakia), and Indiana University (USA) contributed to creating, assessing, and improving the usability of these diverse cultural heritage applications. The results include a structured typology of CH XR application scenarios, detailed insights into design and evaluation practices across ten international use cases, and a development framework that supports interdisciplinary collaboration and stakeholder integration in phygital cultural heritage projects. Full article
(This article belongs to the Special Issue Advanced Technologies Applied to Cultural Heritage)

30 pages, 2023 KiB  
Review
Fusion of Computer Vision and AI in Collaborative Robotics: A Review and Future Prospects
by Yuval Cohen, Amir Biton and Shraga Shoval
Appl. Sci. 2025, 15(14), 7905; https://doi.org/10.3390/app15147905 - 15 Jul 2025
Abstract
The integration of advanced computer vision and artificial intelligence (AI) techniques into collaborative robotic systems holds the potential to revolutionize human–robot interaction, productivity, and safety. Despite substantial research activity, a systematic synthesis of how vision and AI are jointly enabling context-aware, adaptive cobot capabilities across perception, planning, and decision-making remains lacking (especially in recent years). Addressing this gap, our review unifies the latest advances in visual recognition, deep learning, and semantic mapping within a structured taxonomy tailored to collaborative robotics. We examine foundational technologies such as object detection, human pose estimation, and environmental modeling, as well as emerging trends including multimodal sensor fusion, explainable AI, and ethically guided autonomy. Unlike prior surveys that focus narrowly on either vision or AI, this review uniquely analyzes their integrated use for real-world human–robot collaboration. Highlighting industrial and service applications, we distill the best practices, identify critical challenges, and present key performance metrics to guide future research. We conclude by proposing strategic directions—from scalable training methods to interoperability standards—to foster safe, robust, and proactive human–robot partnerships in the years ahead. Full article

23 pages, 3542 KiB  
Article
An Intuitive and Efficient Teleoperation Human–Robot Interface Based on a Wearable Myoelectric Armband
by Long Wang, Zhangyi Chen, Songyuan Han, Yao Luo, Xiaoling Li and Yang Liu
Biomimetics 2025, 10(7), 464; https://doi.org/10.3390/biomimetics10070464 - 15 Jul 2025
Abstract
Although artificial intelligence technologies have significantly enhanced autonomous robots’ capabilities in perception, decision-making, and planning, their autonomy may still fail when faced with complex, dynamic, or unpredictable environments. Therefore, it is critical to enable users to take over robot control in real-time and efficiently through teleoperation. The lightweight, wearable myoelectric armband, due to its portability and environmental robustness, provides a natural human–robot gesture interaction interface. However, current myoelectric teleoperation gesture control faces two major challenges: (1) poor intuitiveness due to visual-motor misalignment; and (2) low efficiency from discrete, single-degree-of-freedom control modes. To address these challenges, this study proposes an integrated myoelectric teleoperation interface. The interface integrates the following: (1) a novel hybrid reference frame aimed at effectively mitigating visual-motor misalignment; and (2) a finite state machine (FSM)-based control logic designed to enhance control efficiency and smoothness. Four experimental tasks were designed using different end-effectors (gripper/dexterous hand) and camera viewpoints (front/side view). Compared to benchmark methods, the proposed interface demonstrates significant advantages in task completion time, movement path efficiency, and subjective workload. This work demonstrates the potential of the proposed interface to significantly advance the practical application of wearable myoelectric sensors in human–robot interaction. Full article
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 4th Edition)

18 pages, 2200 KiB  
Article
A Self-Supervised Adversarial Deblurring Face Recognition Network for Edge Devices
by Hanwen Zhang, Myun Kim, Baitong Li and Yanping Lu
J. Imaging 2025, 11(7), 241; https://doi.org/10.3390/jimaging11070241 - 15 Jul 2025
Abstract
With the advancement of information technology, human activity recognition (HAR) has been widely applied in fields such as intelligent surveillance, health monitoring, and human–computer interaction. As a crucial component of HAR, facial recognition plays a key role, especially in vision-based activity recognition. However, current facial recognition models on the market perform poorly in handling blurry images and dynamic scenarios, limiting their effectiveness in real-world HAR applications. This study aims to construct a fast and accurate facial recognition model based on novel adversarial learning and deblurring theory to enhance its performance in human activity recognition. The model employs a generative adversarial network (GAN) as the core algorithm, optimizing its generation and recognition modules by decomposing the global loss function and incorporating a feature pyramid, thereby solving the balance challenge in GAN training. Additionally, deblurring techniques are introduced to improve the model’s ability to handle blurry and dynamic images. Experimental results show that the proposed model achieves high accuracy and recall rates across multiple facial recognition datasets, with an average recall rate of 87.40% and accuracy rates of 81.06% and 79.77% on the YTF, IMDB-WIKI, and WiderFace datasets, respectively. These findings confirm that the model effectively addresses the challenges of recognizing faces in dynamic and blurry conditions in human activity recognition, demonstrating significant application potential. Full article
(This article belongs to the Special Issue Techniques and Applications in Face Image Analysis)

30 pages, 8616 KiB  
Article
Evaluation and Prediction of Comprehensive Efficiency of Wind Power System in China Based on Two-Stage EBM Model and FNN Model
by Fang-Rong Ren, Hui-Lin Liu and Xiao-Yan Liu
Systems 2025, 13(7), 579; https://doi.org/10.3390/systems13070579 - 14 Jul 2025
Abstract
Wind power is a core component of a clean energy system. The efficiency of a wind power system evolves through coordinated interactions. These interactions occur among three regional subsystems: resource subsystem, technology subsystem, and economy subsystem. To reveal the operational mechanisms of its internal subsystems, this study analyzes the comprehensive efficiency of the wind power system in China from 2010 to 2022. The two-stage EBM model, the Tobit regression model and the feedforward neural network model are employed in combination. The results show that: (1) The comprehensive efficiency of the wind power system has gradually improved, but shows spatiotemporal variations due to uneven subsystem coordination. (2) The improvement of efficiency is characterized by stages. The optimization of technology subsystems drives the development stage, while economic scaling dominates the operation stage (though operation and maintenance technologies remain deficient). (3) The correlation between development and operation stages is suboptimal, and the coordination of subsystems remains weak. (4) Technology innovation and electricity demand boost comprehensive efficiency, while human resources hinder it. Extreme weather exerts either a contributing or an interfering effect on the system. (5) Future projections show continued efficiency growth. The study concludes with cross-system coordination strategies to enhance the contribution of wind power in clean energy. Full article

20 pages, 796 KiB  
Article
Exploring the Influence of Human–Computer Interaction Experience on Tourist Loyalty in the Context of Smart Tourism: A Case Study of Suzhou Museum
by Ke Xue, Xuanyu Jin and Yifei Li
Behav. Sci. 2025, 15(7), 949; https://doi.org/10.3390/bs15070949 - 14 Jul 2025
Abstract
As digital technology evolves rapidly, smart tourism has become a significant trend in the modernization of the industry, relying on advanced tools like big data and cloud computing to improve travelers’ experiences. Despite the growing use of human–computer interaction in museums, there remains a lack of in-depth academic investigation into its impact on visitors’ behavioral intentions regarding museum engagement. This paper employs Cognitive Appraisal Theory, considers human–computer interaction experience as the independent variable, and introduces destination image and satisfaction as mediators to examine their impact on destination loyalty. Based on a survey of 537 participants, the research shows that human–computer interaction experience has a significant positive impact on destination image, satisfaction, and loyalty. Destination image and satisfaction play a partial and sequential mediating role in this relationship. This paper explores the influence mechanism of human–computer interaction experience on destination loyalty and proposes practical interactive solutions for museums, aiming to offer insights for smart tourism research and practice. Full article

20 pages, 1012 KiB  
Article
Interaction with Tactile Paving in a Virtual Reality Environment: Simulation of an Urban Environment for People with Visual Impairments
by Nikolaos Tzimos, Iordanis Kyriazidis, George Voutsakelis, Sotirios Kontogiannis and George Kokkonis
Multimodal Technol. Interact. 2025, 9(7), 71; https://doi.org/10.3390/mti9070071 - 14 Jul 2025
Abstract
Blindness and low vision are increasingly serious public health issues that affect a significant percentage of the population worldwide. Vision plays a crucial role in spatial navigation and daily activities. Its reduction or loss creates numerous challenges for an individual. Assistive technology can enhance mobility and navigation in outdoor environments. In the field of orientation and mobility training, technologies with haptic interaction can assist individuals with visual impairments in learning how to navigate safely and effectively using the sense of touch. This paper presents a virtual reality platform designed to support the development of navigation techniques within a safe yet realistic environment, expanding upon existing research in the field. Following extensive optimization, we present a visual representation that accurately simulates various 3D tile textures using graphics replicating real tactile surfaces. We conducted a user interaction study in a virtual environment consisting of 3D navigation tiles enhanced with tactile textures, placed appropriately for a real-world scenario, to assess user performance and experience. This study also assesses the usability and user experience of the platform. We hope that the findings will contribute to the development of new universal navigation techniques for people with visual impairments. Full article

24 pages, 831 KiB  
Article
Spatiotemporal Evolution and Driving Factors of Coupling Coordination Among China’s Digital Economy, Carbon Emissions Efficiency, and High-Quality Economic Development
by Fusheng Li and Fuyi Ci
Sustainability 2025, 17(14), 6410; https://doi.org/10.3390/su17146410 - 13 Jul 2025
Abstract
Grounded in coupling theory, this study investigates the interplay among three key elements of economic growth, namely the digital economy, carbon emissions efficiency, and high-quality economic development. Drawing on data from 30 Chinese provinces from 2000 to 2023, we employ exploratory spatiotemporal data analysis and the GeoDetector model to examine the spatial–temporal evolution and underlying driving forces of coupling coordination. This research enriches the theoretical framework of multi-system synergistic development in a green transition context and offers empirical insights and policy recommendations for fostering regional coordination and sustainable development. The results reveal that (1) both the digital economy and high-quality economic development show a steady upward trend, while carbon emissions efficiency has a “U-shaped” curve pattern; (2) at the national level, the degree of coupling coordination has evolved over time from “mild disorder” to “on the verge of disorder” to “barely coordinated,” while at the regional level, this pattern of coupling coordination shifts over time from “Eastern–Northeastern–Central–Western” to “Eastern–Central–Northeastern–Western”; (3) although spatial polarization in coupling coordination has improved, disparities fluctuate in a “decline–rise” pattern, with interregional differences being the main source of that variation; (4) the degree of coupling coordination has a positive spatial correlation, but with a declining trend with fluctuations; and (5) improvements in the level of economic development, human capital, industrial structure, green technological innovation, and market development capacity all contribute positively to coupling coordination. Among them, green technological innovation and market development capacity are the most influential drivers, and the interactions among all driving factors further enhance their collective impact. Full article
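The coupling coordination degree referenced above is not defined in the abstract; a commonly used three-subsystem specification from this literature (an assumption here, not necessarily the authors' exact model) first computes a coupling degree C from the normalized subsystem scores U_1 (digital economy), U_2 (carbon emissions efficiency), and U_3 (high-quality economic development), then a composite index T, and finally the coordination degree D:

\[ C = \left[ \frac{U_1 U_2 U_3}{\left( \frac{U_1 + U_2 + U_3}{3} \right)^{3}} \right]^{1/3}, \qquad T = \alpha U_1 + \beta U_2 + \gamma U_3, \qquad D = \sqrt{C \cdot T}, \]

with weights \alpha + \beta + \gamma = 1 (often set equal). The resulting D is then mapped onto ordinal grades such as "mild disorder," "on the verge of disorder," and "barely coordinated," which is how stage labels of the kind named in the abstract are typically assigned.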

37 pages, 618 KiB  
Systematic Review
Interaction, Artificial Intelligence, and Motivation in Children’s Speech Learning and Rehabilitation Through Digital Games: A Systematic Literature Review
by Chra Abdoulqadir and Fernando Loizides
Information 2025, 16(7), 599; https://doi.org/10.3390/info16070599 - 12 Jul 2025
Abstract
The integration of digital serious games into speech learning (rehabilitation) has demonstrated significant potential in enhancing accessibility and inclusivity for children with speech disabilities. This review of the state of the art examines the role of serious games, Artificial Intelligence (AI), and Natural Language Processing (NLP) in speech rehabilitation, with a particular focus on interaction modalities, engagement autonomy, and motivation. We have reviewed 45 selected studies. Our key findings show how intelligent tutoring systems, adaptive voice-based interfaces, and gamified speech interventions can empower children to engage in self-directed speech learning, reducing dependence on therapists and caregivers. The diversity of interaction modalities, including speech recognition, phoneme-based exercises, and multimodal feedback, demonstrates how AI and Assistive Technology (AT) can personalise learning experiences to accommodate diverse needs. Furthermore, the incorporation of gamification strategies, such as reward systems and adaptive difficulty levels, has been shown to enhance children’s motivation and long-term participation in speech rehabilitation. The gaps identified show that despite advancements, challenges remain in achieving universal accessibility, particularly regarding speech recognition accuracy, multilingual support, and accessibility for users with multiple disabilities. This review advocates for interdisciplinary collaboration across educational technology, special education, cognitive science, and human–computer interaction (HCI). Our work contributes to the ongoing discourse on lifelong inclusive education, reinforcing the potential of AI-driven serious games as transformative tools for bridging learning gaps and promoting speech rehabilitation beyond clinical environments. Full article

18 pages, 3288 KiB  
Article
Influence of Material Optical Properties in Direct ToF LiDAR Optical Tactile Sensing: Comprehensive Evaluation
by Ilze Aulika, Andrejs Ogurcovs, Meldra Kemere, Arturs Bundulis, Jelena Butikova, Karlis Kundzins, Emmanuel Bacher, Martin Laurenzis, Stephane Schertzer, Julija Stopar, Ales Zore and Roman Kamnik
Materials 2025, 18(14), 3287; https://doi.org/10.3390/ma18143287 - 11 Jul 2025
Abstract
Optical tactile sensing is gaining traction as a foundational technology in collaborative and human-interactive robotics, where reliable touch and pressure feedback are critical. Traditional systems based on total internal reflection (TIR) and frustrated TIR (FTIR) often require complex infrared setups and lack adaptability to curved or flexible surfaces. To overcome these limitations, we developed OptoSkin—a novel tactile platform leveraging direct time-of-flight (ToF) LiDAR principles for robust contact and pressure detection. In this extended study, we systematically evaluate how key optical properties of waveguide materials affect ToF signal behavior and sensing fidelity. We examine a diverse set of materials, characterized by varying light transmission (82–92)%, scattering coefficients (0.02–1.1) cm⁻¹, diffuse reflectance (0.17–7.40)%, and refractive indices 1.398–1.537 at the ToF emitter wavelength of 940 nm. Through systematic evaluation, we demonstrate that controlled light scattering within the material significantly enhances ToF signal quality for both direct touch and near-proximity sensing. These findings underscore the critical role of material selection in designing efficient, low-cost, and geometry-independent optical tactile systems. Full article
(This article belongs to the Section Polymeric Materials)
