Search Results (81)

Search Parameters:
Keywords = touch gesture

16 pages, 1019 KiB  
Article
Micro-Yizkor and Hasidic Memory: A Post-Holocaust Letter from the Margins
by Isaac Hershkowitz
Religions 2025, 16(7), 937; https://doi.org/10.3390/rel16070937 - 19 Jul 2025
Viewed by 216
Abstract
This paper examines a previously unknown anonymous Hebrew letter inserted into a postwar edition of Shem HaGedolim, found in the library of the Jewish University in Budapest. The letter, composed in Győr in 1947, consists almost entirely of passages copied from Tiferet Chayim, a hagiographic genealogy of the Sanz Hasidic dynasty. Although derivative in content, the letter’s form and placement suggest it was not meant for transmission but instead served as a private act of mourning and historiographical preservation. By situating the letter within the broader context of post-Holocaust Jewish and Hasidic memory practices, including yizkor books, rabbinic memoirs, and grassroots commemorative writing, this study proposes that the document constitutes a “micro-yizkor”: a bibliographic ritual that aimed to re-inscribe lost tzaddikim into sacred memory. Drawing on theories of trauma, religious coping, and bereavement psychology, particularly the Two-Track Model of Bereavement, the paper examines the letter as both a therapeutic and historiographical gesture. The author’s meticulous copying, selective omissions, and personalized touches (such as modified honorifics and emotive phrases) reflect an attempt to maintain spiritual continuity in the wake of communal devastation. Engaging scholarship by Michal Shaul, Lior Becker, Gershon Greenberg, and others, the analysis demonstrates how citation, far from being a passive act, functions here as an instrument of resistance, memory, and redemptive reconstruction. The existence of such a document can also be examined through the lens of Maurice Rickards’ insights, particularly his characterization of the “compulsive note” as a salient form of ephemera, materials often inserted between the pages of books, which pose unique challenges for interpreting the time capsule their authors sought to construct. Ultimately, the paper argues that this modest and anonymous document offers a rare window into postwar ultra-Orthodox religious subjectivity. It challenges prevailing assumptions about Hasidic silence after the Holocaust and demonstrates how even derivative texts can serve as potent sites of historical testimony, spiritual resilience, and bibliographic mourning. The letter thus sheds light on a neglected form of Hasidic historiography, one authored not by professional historians, but by the broken-hearted, writing in the margins of sacred books. Full article

33 pages, 5057 KiB  
Article
Exploring Preferential Ring-Based Gesture Interaction Across 2D Screen and Spatial Interface Environments
by Hoon Yoon, Hojeong Im, Seonha Chung and Taeha Yi
Appl. Sci. 2025, 15(12), 6879; https://doi.org/10.3390/app15126879 - 18 Jun 2025
Viewed by 433
Abstract
As gesture-based interactions expand across traditional 2D screens and immersive XR platforms, designing intuitive input modalities tailored to specific contexts becomes increasingly essential. This study explores how users cognitively and experientially engage with gesture-based interactions in two distinct environments: a lean-back 2D television interface and an immersive XR spatial environment. A within-subject experimental design was employed, utilizing a gesture-recognizable smart ring to perform tasks using three gesture modalities: (a) Surface-Touch gesture, (b) mid-air gesture, and (c) micro finger-touch gesture. The results revealed clear, context-dependent user preferences; Surface-Touch gestures were preferred in the 2D context due to their controlled and pragmatic nature, whereas mid-air gestures were favored in the XR context for their immersive, intuitive qualities. Interestingly, longer gesture execution times did not consistently reduce user satisfaction, indicating that compatibility between the gesture modality and the interaction environment matters more than efficiency alone. This study concludes that successful gesture-based interface design must carefully consider the contextual alignment, highlighting the nuanced interplay among user expectations, environmental context, and gesture modality. Consequently, these findings provide practical considerations for designing Natural User Interfaces (NUIs) for various interaction contexts. Full article

18 pages, 1082 KiB  
Article
ITap: Index Finger Tap Interaction by Gaze and Tabletop Integration
by Jeonghyeon Kim, Jemin Lee, Jung-Hoon Ahn and Youngwon Kim
Sensors 2025, 25(9), 2833; https://doi.org/10.3390/s25092833 - 30 Apr 2025
Viewed by 509
Abstract
This paper presents ITap, a novel interaction method utilizing hand tracking to create a virtual touchpad on a tabletop. ITap facilitates touch interactions such as tapping, dragging, and swiping using the index finger. The technique combines gaze-based object selection with touch gestures, while a pinch gesture performed with the opposite hand activates a manual mode, enabling precise cursor control independently of gaze direction. The primary purpose of this research is to enhance interaction efficiency, reduce user fatigue, and improve accuracy in gaze-based object selection tasks, particularly in complex and cluttered XR environments. Specifically, we addressed two research questions: (1) How does ITap’s manual mode compare with the traditional gaze + pinch method regarding speed and accuracy in object selection tasks across varying distances and densities? (2) Does ITap provide improved user comfort, naturalness, and reduced fatigue compared to the traditional method during prolonged scrolling and swiping tasks? To evaluate these questions, two studies were conducted. The first study compared ITap’s manual mode with the traditional gaze + pinch method for object selection tasks across various distances and in cluttered environments. The second study examined both methods for scrolling and swiping tasks, focusing on user comfort, naturalness, and fatigue. The findings revealed that ITap outperformed gaze + pinch in terms of object selection speed and error reduction, particularly in scenarios involving distant or densely arranged objects. Additionally, ITap demonstrated superior performance in scrolling and swiping tasks, with participants reporting greater comfort and reduced fatigue. The integration of gaze-based input and touch gestures provided by ITap offers a more efficient and user-friendly interaction method compared to the traditional gaze + pinch technique. Its ability to reduce fatigue and improve accuracy makes it especially suitable for tasks involving complex environments or extended usage in XR settings. Full article
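The mode-switching behavior described above (the cursor follows gaze by default, while an opposite-hand pinch holds a manual, touchpad-style mode refined by index-finger input) can be pictured with a minimal sketch. This is not the authors' implementation; the class and method names below are hypothetical, and the drag gain is an arbitrary assumption.

```python
# Minimal sketch (not the ITap code): switching between gaze-driven and
# manual cursor control, with an index-finger tap committing the selection.

from dataclasses import dataclass

@dataclass
class Cursor:
    x: float = 0.0
    y: float = 0.0

class ITapController:
    def __init__(self):
        self.cursor = Cursor()
        self.manual_mode = False           # toggled by the opposite-hand pinch

    def on_opposite_hand_pinch(self, pinching: bool):
        # Pinch held -> precise manual control, independent of gaze direction.
        self.manual_mode = pinching

    def on_gaze(self, gaze_x: float, gaze_y: float):
        if not self.manual_mode:
            self.cursor.x, self.cursor.y = gaze_x, gaze_y

    def on_touchpad_drag(self, dx: float, dy: float, gain: float = 0.5):
        if self.manual_mode:
            self.cursor.x += dx * gain      # relative, touchpad-style refinement
            self.cursor.y += dy * gain

    def on_index_tap(self):
        # A tap on the virtual tabletop touchpad commits the selection
        # at the current cursor position.
        return (self.cursor.x, self.cursor.y)
```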

14 pages, 1689 KiB  
Article
Evaluating the Effectiveness of Tilt Gestures for Text Property Control in Mobile Interfaces
by Sang-Hwan Kim and Xuesen Liu
Multimodal Technol. Interact. 2025, 9(5), 41; https://doi.org/10.3390/mti9050041 - 29 Apr 2025
Viewed by 527
Abstract
The objective of this study is to verify the usability of gesture interactions such as tilting or shaking, rather than conventional touch gestures, on mobile devices. To this end, a prototype was developed that manipulates the text size in a mobile text messaging application through tilt gestures. In the text input interface, three types of tilt gesture interaction methods (‘Shaking’, ‘Leaning’, and ‘Acceleration’) were implemented to select the text size level among five levels (extra-small, small, normal, large, and extra-large). Along with the gesture-based interaction methods, the conventional button method was also evaluated. A total of 24 participants were asked to prepare text messages of specified font sizes using randomly assigned interaction methods to select the font size. Task completion time, accuracy (setting errors and input errors), workload, and subjective preferences were collected and analyzed. As a result, the ‘Shaking’ method was generally similar to the conventional button method and superior to the other two methods, ‘Leaning’ and ‘Acceleration’. This may be because ‘Leaning’ and ‘Acceleration’ are continuous operations, while ‘Shaking’ is a discrete operation for each menu selection (font size level). According to subjective comments, tilt gestures on mobile devices can be useful if users take the time to learn them and can also provide ways to convey intentions with simple text. Although tilt gestures were not found to significantly improve text editing performance compared to conventional screen touch methods, the use of motion gestures beyond touch on mobile devices can be considered for interface manipulations such as app navigation, gaming, or multimedia controls across diverse applications. Full article
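As a rough illustration of how the five text-size levels could be driven by a steady tilt (‘Leaning’) or by discrete shakes (‘Shaking’), here is a minimal sketch. It is not the study's prototype: the pitch formula is standard accelerometer geometry, while the bin edges and shake threshold are assumed values.

```python
# Illustrative sketch only: mapping tilt and shake events to five font-size levels.

import math

LEVELS = ["extra-small", "small", "normal", "large", "extra-large"]

def pitch_deg(ax: float, ay: float, az: float) -> float:
    """Device pitch estimated from a 3-axis accelerometer reading (in g)."""
    return math.degrees(math.atan2(-ax, math.hypot(ay, az)))

def level_from_tilt(ax, ay, az, bins=(-30, -10, 10, 30)) -> str:
    """'Leaning' style: the tilt angle selects the level directly."""
    p = pitch_deg(ax, ay, az)
    for i, edge in enumerate(bins):
        if p < edge:
            return LEVELS[i]
    return LEVELS[-1]

def step_on_shake(current: str, a_magnitude: float, threshold: float = 2.5) -> str:
    """'Shaking' style: each detected shake advances one discrete level."""
    if a_magnitude > threshold:                      # simple peak detector
        i = min(LEVELS.index(current) + 1, len(LEVELS) - 1)
        return LEVELS[i]
    return current

print(level_from_tilt(0.0, 0.0, 1.0))   # flat device -> "normal"
```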

10 pages, 3451 KiB  
Article
Stretchable and Wearable Sensors for Contact Touch and Gesture Recognition Based on Poling-Free Piezoelectric Polyester Elastomer
by Kaituo Wu, Wanli Zhang, Qian Zhang and Xiaoran Hu
Polymers 2025, 17(8), 1105; https://doi.org/10.3390/polym17081105 - 19 Apr 2025
Viewed by 523
Abstract
Human–computer interaction (HCI) enables communication between humans and computers and is widely applied in various fields such as consumer electronics, education, medical rehabilitation, and industrial control. Human motion monitoring is one of the most important methods of achieving HCI. In the present work, a novel human motion monitoring sensor for contact touch and gesture recognition is fabricated based on a polyester elastomer (PTE) synthesized from diols and diacids, with both piezoelectric and triboelectric properties. The PTE sensor can respond to contact and contactless mechanical signals through piezoelectric and triboelectric responses, respectively, which enables simultaneous touch control and gesture recognition. In addition, the PTE sensor presents high stretchability with elongation at break over 1000% and high durability over 4000 impact cycles, offering significant potential for consumer electronics and wearable devices. Full article
(This article belongs to the Special Issue Polymer-Based Smart Materials: Preparation and Applications)

43 pages, 2542 KiB  
Article
Mathematical Background and Algorithms of a Collection of Android Apps for a Google Play Store Page
by Roland Szabo
Appl. Sci. 2025, 15(8), 4431; https://doi.org/10.3390/app15084431 - 17 Apr 2025
Viewed by 388
Abstract
This paper discusses three algorithmic strategies tailored for distinct applications, each aiming to tackle specific operational challenges. The first application unveils an innovative SMS messaging system that substitutes manual typing with voice interaction. The key algorithm facilitates real-time conversion from speech to text for message creation and from text to speech for message playback, thus turning SMS communication into an audio-focused exchange while preserving conventional messaging standards. The second application suggests a secure file management system for Android, utilizing encryption and access control algorithms to safeguard user privacy. Its mathematical framework centers on cryptographic methods for file security and authentication processes to prevent unauthorized access. The third application redefines flashlight functionality using an optimized touch interface algorithm. By employing a screen-wide double-tap gesture recognition system, this approach removes the reliance on a physical button, depending instead on advanced event detection and hardware control logic to activate the device’s flash. All applications are fundamentally based on mathematical modeling and algorithmic effectiveness, emphasizing computational approaches over implementation specifics. Full article
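The flashlight application's screen-wide double-tap idea reduces to a small timing-and-position check. The sketch below is a platform-agnostic illustration rather than the app's Android code; DoubleTapDetector and toggle_flash are hypothetical names, and the 300 ms / 80 px thresholds are assumptions.

```python
# Minimal double-tap detection sketch: two taps close in time and position
# trigger a hardware callback (here, a stand-in for the flash control layer).

import time

class DoubleTapDetector:
    def __init__(self, toggle_flash, max_interval=0.3, max_offset=80.0):
        self.toggle_flash = toggle_flash
        self.max_interval = max_interval   # seconds allowed between taps
        self.max_offset = max_offset       # max px between tap positions
        self._last = None                  # (t, x, y) of the previous tap

    def on_tap(self, x, y, t=None):
        t = time.monotonic() if t is None else t
        if self._last is not None:
            lt, lx, ly = self._last
            close_in_time = (t - lt) <= self.max_interval
            close_in_space = abs(x - lx) <= self.max_offset and abs(y - ly) <= self.max_offset
            if close_in_time and close_in_space:
                self.toggle_flash()        # second tap of a double tap
                self._last = None
                return
        self._last = (t, x, y)

detector = DoubleTapDetector(toggle_flash=lambda: print("flash toggled"))
detector.on_tap(100, 200, t=0.00)
detector.on_tap(105, 198, t=0.20)          # prints "flash toggled"
```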
(This article belongs to the Section Computing and Artificial Intelligence)

23 pages, 7710 KiB  
Article
Immersive Interaction for Inclusive Virtual Reality Navigation: Enhancing Accessibility for Socially Underprivileged Users
by Jeonghyeon Kim, Jung-Hoon Ahn and Youngwon Kim
Electronics 2025, 14(5), 1046; https://doi.org/10.3390/electronics14051046 - 6 Mar 2025
Cited by 1 | Viewed by 1288
Abstract
Existing virtual reality (VR) street view and 360-degree road view applications often rely on complex controllers or touch interfaces, which can hinder user immersion and accessibility. These challenges are particularly pronounced for under-represented populations, such as older adults and individuals with limited familiarity with digital devices. Such groups frequently face physical or environmental constraints that restrict their ability to engage in outdoor activities, highlighting the need for alternative methods of experiencing the world through virtual environments. To address this issue, we propose a VR street view application featuring an intuitive, gesture-based interface designed to simplify user interaction and enhance accessibility for socially disadvantaged individuals. Our approach seeks to optimize digital accessibility by reducing barriers to entry, increasing user immersion, and facilitating a more inclusive virtual exploration experience. Through usability testing and iterative design, this study evaluates the effectiveness of gesture-based interactions in improving accessibility and engagement. The findings emphasize the importance of user-centered design in fostering an inclusive VR environment that accommodates diverse needs and abilities. Full article

21 pages, 3092 KiB  
Article
TapFix: Cursorless Typographical Error Correction for Touch-Sensor Displays
by Nicholas Dehnen, I. Scott MacKenzie and Aijun An
Sensors 2025, 25(5), 1421; https://doi.org/10.3390/s25051421 - 26 Feb 2025
Viewed by 504
Abstract
We present TapFix, a cursorless mobile text correction method for touch-sensor displays. Unlike traditional methods, TapFix eliminates the need to position a cursor to make corrections. Instead, the method allows for direct, character-level access, offering easy-to-use swipe gestures on a zoomed-in target word for corrective actions. A user study with 15 participants compared TapFix to two traditional text correction methods on Apple iOS. For each of the three methods, participants completed 100 text correction tasks of four different error types on an Apple iPhone 14 Pro. The TapFix method was on average between 43.0% and 44.1% faster than the existing methods in completing the tasks. Participants also reported experiencing 5.6% to 21.1% lower levels of frustration with TapFix, as indicated by post-experiment NASA TLX and SUS questionnaires, compared to the traditional methods. Additionally, they attributed a level of usability to TapFix that was comparable to the well-established TextMagnifier method. Full article
(This article belongs to the Section Physical Sensors)

25 pages, 2844 KiB  
Article
Real-Time Gesture-Based Hand Landmark Detection for Optimized Mobile Photo Capture and Synchronization
by Pedro Marques, Paulo Váz, José Silva, Pedro Martins and Maryam Abbasi
Electronics 2025, 14(4), 704; https://doi.org/10.3390/electronics14040704 - 12 Feb 2025
Viewed by 2089
Abstract
Gesture recognition technology has emerged as a transformative solution for natural and intuitive human–computer interaction (HCI), offering touch-free operation across diverse fields such as healthcare, gaming, and smart home systems. In mobile contexts, where hygiene, convenience, and the ability to operate under resource constraints are critical, hand gesture recognition provides a compelling alternative to traditional touch-based interfaces. However, implementing effective gesture recognition in real-world mobile settings involves challenges such as limited computational power, varying environmental conditions, and the requirement for robust offline–online data management. In this study, we introduce ThumbsUp, which is a gesture-driven system, and employ a partially systematic literature review approach (inspired by core PRISMA guidelines) to identify the key research gaps in mobile gesture recognition. By incorporating insights from deep learning–based methods (e.g., CNNs and Transformers) while focusing on low resource consumption, we leverage Google’s MediaPipe in our framework for real-time detection of 21 hand landmarks and adaptive lighting pre-processing, enabling accurate recognition of a “thumbs-up” gesture. The system features a secure queue-based offline–cloud synchronization model, which ensures that the captured images and metadata (encrypted with AES-GCM) remain consistent and accessible even with intermittent connectivity. Experimental results under dynamic lighting, distance variations, and partially cluttered environments confirm the system’s superior low-light performance and decreased resource consumption compared to baseline camera applications. Additionally, we highlight the feasibility of extending ThumbsUp to incorporate AI-driven enhancements for abrupt lighting changes and, in the future, electromyographic (EMG) signals for users with motor impairments. Our comprehensive evaluation demonstrates that ThumbsUp maintains robust performance on typical mobile hardware, showing resilience to unstable network conditions and minimal reliance on high-end GPUs. These findings offer new perspectives for deploying gesture-based interfaces in the broader IoT ecosystem, thus paving the way toward secure, efficient, and inclusive mobile HCI solutions. Full article
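Since the abstract names MediaPipe's 21 hand landmarks, a minimal landmark-based "thumbs-up" check is sketched below. The MediaPipe Hands calls are the library's real Python API, but the geometric heuristic is our own simplification and omits the paper's adaptive lighting pre-processing and offline–cloud synchronization; "frame.jpg" is a placeholder input.

```python
# Sketch: detect a "thumbs-up" from MediaPipe's 21 hand landmarks.

import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def is_thumbs_up(hand_landmarks) -> bool:
    lm = hand_landmarks.landmark            # 21 normalized (x, y, z) points
    # Image y grows downward, so "above" means a smaller y value.
    thumb_extended_up = lm[4].y < lm[3].y < lm[2].y
    # The other four fingers should be curled: tip below its PIP joint.
    fingers_curled = all(lm[tip].y > lm[tip - 2].y for tip in (8, 12, 16, 20))
    return thumb_extended_up and fingers_curled

with mp_hands.Hands(static_image_mode=True, max_num_hands=1,
                    min_detection_confidence=0.5) as hands:
    image = cv2.imread("frame.jpg")                     # placeholder captured frame
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            print("thumbs-up" if is_thumbs_up(hand) else "other gesture")
```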
(This article belongs to the Special Issue AI-Driven Digital Image Processing: Latest Advances and Prospects)

23 pages, 5966 KiB  
Article
Intelligent Human–Computer Interaction for Building Information Models Using Gesture Recognition
by Tianyi Zhang, Yukang Wang, Xiaoping Zhou, Deli Liu, Jingyi Ji and Junfu Feng
Inventions 2025, 10(1), 5; https://doi.org/10.3390/inventions10010005 - 16 Jan 2025
Cited by 2 | Viewed by 1276
Abstract
Human–computer interaction (HCI) with three-dimensional (3D) Building Information Modelling/Model (BIM) is a crucial ingredient in enhancing the user experience and fostering the value of BIM. Current BIMs mostly use keyboard, mouse, or touchscreen as media for HCI. Using these hardware devices for HCI with BIM may lead to space constraints and a lack of visual intuitiveness. Somatosensory interaction, such as gesture interaction, is an emergent modality that requires no equipment or direct touch and presents a potential approach to solving these problems. This paper proposes a computer-vision-based gesture interaction system for BIM. Firstly, a set of gestures for BIM model manipulation was designed, grounded in human ergonomics. These gestures include selection, translation, scaling, rotation, and restoration of the 3D model. Secondly, a gesture understanding algorithm dedicated to 3D model manipulation is introduced in this paper. Then, an interaction system for 3D models based on machine vision and gesture recognition was developed. A series of systematic experiments was conducted to confirm the effectiveness of the proposed system. In various environments, including pure white backgrounds, offices, and conference rooms, even when wearing gloves, the system achieves an accuracy rate of over 97% and maintains a frame rate between 26 and 30 frames per second. The final experimental results show that the method has good performance, confirming its feasibility, accuracy, and fluidity. Somatosensory interaction with 3D models enhances the interaction experience and operation efficiency between the user and the model, further expanding the application scenarios of BIM. Full article
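The gesture set listed above (selection, translation, scaling, rotation, restoration) suggests a simple dispatch from recognized gesture labels to model-manipulation state. The sketch below is only illustrative and is not the authors' system; the vision-based gesture understanding step is assumed to happen elsewhere and to emit the labels and parameters used here.

```python
# Sketch: dispatch recognized gesture labels onto 3D model manipulation state.

from copy import deepcopy

DEFAULT_STATE = {"selected": False,
                 "position": [0.0, 0.0, 0.0],
                 "scale": 1.0,
                 "rotation_deg": [0.0, 0.0, 0.0]}

class ModelManipulator:
    def __init__(self):
        self.state = deepcopy(DEFAULT_STATE)

    def apply(self, gesture: str, **params):
        if gesture == "select":
            self.state["selected"] = True
        elif gesture == "translate" and self.state["selected"]:
            for i, d in enumerate(params.get("delta", (0, 0, 0))):
                self.state["position"][i] += d
        elif gesture == "scale" and self.state["selected"]:
            self.state["scale"] *= params.get("factor", 1.0)
        elif gesture == "rotate" and self.state["selected"]:
            for i, d in enumerate(params.get("angles_deg", (0, 0, 0))):
                self.state["rotation_deg"][i] += d
        elif gesture == "restore":
            self.state = deepcopy(DEFAULT_STATE)   # reset to the initial view
        return self.state

m = ModelManipulator()
m.apply("select")
m.apply("scale", factor=1.5)
print(m.apply("rotate", angles_deg=(0, 45, 0)))
```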

32 pages, 475 KiB  
Review
Multimodal Interaction, Interfaces, and Communication: A Survey
by Elias Dritsas, Maria Trigka, Christos Troussas and Phivos Mylonas
Multimodal Technol. Interact. 2025, 9(1), 6; https://doi.org/10.3390/mti9010006 - 14 Jan 2025
Cited by 5 | Viewed by 7901
Abstract
Multimodal interaction is a transformative human-computer interaction (HCI) approach that allows users to interact with systems through various communication channels such as speech, gesture, touch, and gaze. With advancements in sensor technology and machine learning (ML), multimodal systems are becoming increasingly important in various applications, including virtual assistants, intelligent environments, healthcare, and accessibility technologies. This survey concisely overviews recent advancements in multimodal interaction, interfaces, and communication. It delves into integrating different input and output modalities, focusing on critical technologies and essential considerations in multimodal fusion, including temporal synchronization and decision-level integration. Furthermore, the survey explores the challenges of developing context-aware, adaptive systems that provide seamless and intuitive user experiences. Lastly, by examining current methodologies and trends, this study underscores the potential of multimodal systems and sheds light on future research directions. Full article

17 pages, 8226 KiB  
Article
Design of a Capacitive Tactile Sensor Array System for Human–Computer Interaction
by Fei Fei, Zhenkun Jia, Changcheng Wu, Xiong Lu and Zhi Li
Sensors 2024, 24(20), 6629; https://doi.org/10.3390/s24206629 - 14 Oct 2024
Cited by 3 | Viewed by 1523
Abstract
This paper introduces a novel capacitive sensor array designed for tactile perception applications. Utilizing an all-in-one inkjet deposition printing process, the sensor array exhibited exceptional flexibility and accuracy. With a resolution of up to 32.7 dpi, the sensor array was capable of capturing the fine details of touch inputs, making it suitable for applications requiring high spatial resolution. The design incorporates two multiplexers to achieve a scanning rate of 100 Hz, ensuring the rapid and responsive data acquisition that is essential for real-time feedback in interactive applications, such as gesture recognition and haptic interfaces. To evaluate the performance of the capacitive sensor array, an experiment that involved handwritten number recognition was conducted. The results demonstrated that the sensor accurately captured fingertip inputs with high precision. When combined with an Auxiliary Classifier Generative Adversarial Network (ACGAN) algorithm, the sensor system achieved a recognition accuracy of 98% for various handwritten numbers from “0” to “9”. These results show the potential of the capacitive sensor array for advanced human–computer interaction applications. Full article
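A two-multiplexer row/column scan at 100 Hz, as mentioned in the abstract, could be organized roughly as follows. This is an assumption-laden sketch, not a published driver: select_row, select_column, and read_capacitance are hypothetical hardware hooks, and the 16 x 16 array size is arbitrary.

```python
# Sketch: build one capacitance frame by stepping the row and column
# multiplexers, then pace the loop to a 100 Hz frame rate.

import time

ROWS, COLS = 16, 16
FRAME_PERIOD = 1.0 / 100.0                       # 100 Hz scanning rate

def scan_frame(select_row, select_column, read_capacitance):
    frame = [[0.0] * COLS for _ in range(ROWS)]
    for r in range(ROWS):
        select_row(r)                            # first multiplexer
        for c in range(COLS):
            select_column(c)                     # second multiplexer
            frame[r][c] = read_capacitance()     # one taxel at (r, c)
    return frame

def run(select_row, select_column, read_capacitance, on_frame, n_frames=10):
    for _ in range(n_frames):
        start = time.monotonic()
        on_frame(scan_frame(select_row, select_column, read_capacitance))
        # Sleep out the remainder of the 10 ms frame budget, if any.
        time.sleep(max(0.0, FRAME_PERIOD - (time.monotonic() - start)))

# Dry run with stub hooks:
run(lambda r: None, lambda c: None, lambda: 0.0, on_frame=lambda f: None, n_frames=1)
```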
(This article belongs to the Section Sensors Development)

10 pages, 215 KiB  
Article
Death Images in Michael Haneke’s Films
by Susana Viegas
Philosophies 2024, 9(5), 155; https://doi.org/10.3390/philosophies9050155 - 1 Oct 2024
Viewed by 2075
Abstract
Although meditating on death has long been a central philosophical practice and is gaining prominence in modern European public discourse, certain misconceptions still persist. The Austrian filmmaker Michael Haneke does not shy away from confronting real and performed images of death, combining a denouncing cinematic approach with no less polemic aesthetic and ethical theories. Certainly, visually shocking and disturbing films can, in their own way, challenge the boundaries of what is thinkable, at times even touching upon the unthinkable. Images of death and death-related themes are particularly pervasive in Haneke’s films. His films raise significant philosophical and ethical questions about mortality, violence, death, and ageing. This analysis is a tentative attempt to map how Haneke explores representations of death and dying in Benny’s Video (1992) and Funny Games (1997), with particular reference to the rewind gesture depicted in both films. In doing so, it aims to examine the conversation such films prompt between moving images and the audience. Full article
27 pages, 23655 KiB  
Article
Development and Usability Evaluation of Augmented Reality Content for Light Maintenance Training of Air Spring for Electric Multiple Unit
by Kyung-Sik Kim and Chul-Su Kim
Appl. Sci. 2024, 14(17), 7702; https://doi.org/10.3390/app14177702 - 31 Aug 2024
Cited by 1 | Viewed by 2061
Abstract
The air spring for railway vehicles uses the air pressure inside the bellows to absorb vibration and shock, improving ride comfort, and adjusts the height of the underframe with a leveling valve to keep the train running stably. This study developed augmented reality content that proposes a novel visual technology to effectively support the training of air spring maintenance tasks. In this study, a special effect algorithm that displays the dispersion and diffusion of fluid and an algorithm that allows objects to be rotated at various angles were proposed to increase the visual learning effect of fluid flow for maintenance. The FDG algorithm can increase the training effect by visualizing the leakage of air at a specific location when the air spring is damaged. In addition, the OAR algorithm allows an axisymmetric model, which is difficult to rotate by gestures, to be rotated at various angles using a touch cube. Using these algorithms, maintenance personnel can effectively learn complex maintenance tasks. The UMUX and CSUQ surveys were conducted with 40 railway maintenance workers to evaluate the effectiveness of the developed educational content. The results showed that the UMUX, across 4 items, averaged a score of 81.56. Likewise, the CSUQ survey score, consisting of 19 questions in 4 categories, was very high, at 80.83. These results show that this AR content is usable for air spring maintenance and field training support. Full article
(This article belongs to the Special Issue Application of Intelligent Human-Computer Interaction)

14 pages, 4546 KiB  
Article
Differential Signal-Amplitude-Modulated Multi-Beam Remote Optical Touch Based on Grating Antenna
by Yanwen Huang, Weiqiang Lin, Peijin Wu, Yongxin Wang, Ziyuan Guo, Pengcheng Huang and Zhicheng Ye
Sensors 2024, 24(16), 5319; https://doi.org/10.3390/s24165319 - 16 Aug 2024
Viewed by 1068
Abstract
As screen sizes are becoming larger and larger, exceeding human physical limitations for direct interaction via touching, remote control is inevitable. However, among the current solutions, inertial gyroscopes are susceptible to positional inaccuracies, and gesture recognition is limited by cameras’ focus depths and viewing angles. Provided that the issue of ghost points can be effectively addressed, grating antenna light-trapping technology is an ideal candidate for multipoint inputs. Therefore, we propose a differential amplitude modulation scheme for grating antenna-based multi-beam optical touch, which can recognize different incidence points. The amplitude of the incident beams was first coded with different pulse widths. Then, following the capture of incident beams by the grating antenna and their conversion into electrical currents by the aligned detector arrays, the incident points of the individual beams were recognized and differentiated. The scheme was successfully verified on an 18-inch screen, where two-point optical touch with a position accuracy error of under 3 mm and a response time of less than 7 ms under a modulation frequency of 10 kHz on both incident beams was achieved. This work demonstrates a practical method to achieve remote multi-point touch, which can make digital mice more accurately represent the users’ pointing directions by obeying the natural three-point one-line aiming rule instantaneously. Full article
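One way to picture the pulse-width (duty-cycle) coding described above is a per-detector decoder that measures how long the photocurrent stays high within one 10 kHz modulation period. The sketch below is our simplification, not the paper's hardware pipeline; the code book, sampling rate, and thresholds are assumed values.

```python
# Sketch: identify which amplitude-coded beam hit a detector element by
# measuring the duty cycle of its photocurrent over one modulation period.

SAMPLE_RATE = 1_000_000          # assumed 1 MHz sampling of the detector current
MOD_FREQ = 10_000                # 10 kHz modulation on both incident beams
SAMPLES_PER_PERIOD = SAMPLE_RATE // MOD_FREQ      # 100 samples per period

# Assumed code book: beam id -> nominal duty cycle of its pulse train.
BEAM_DUTY = {"beam_A": 0.25, "beam_B": 0.60}

def duty_cycle(samples, threshold):
    """Fraction of one modulation period the photocurrent is 'high'."""
    period = samples[:SAMPLES_PER_PERIOD]
    return sum(1 for s in period if s > threshold) / len(period)

def identify_beam(samples, threshold=0.5, tol=0.10):
    d = duty_cycle(samples, threshold)
    for beam, nominal in BEAM_DUTY.items():
        if abs(d - nominal) <= tol:
            return beam
    return None                                   # ambiguous reading / ghost point

# Synthetic detector trace: 25 high samples out of 100 -> beam_A.
trace = [1.0] * 25 + [0.0] * 75
print(identify_beam(trace))                       # "beam_A"
```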
(This article belongs to the Section Communications)
