Search Results (18)

Search Parameters:
Keywords = mid-air gestures

33 pages, 5057 KiB  
Article
Exploring Preferential Ring-Based Gesture Interaction Across 2D Screen and Spatial Interface Environments
by Hoon Yoon, Hojeong Im, Seonha Chung and Taeha Yi
Appl. Sci. 2025, 15(12), 6879; https://doi.org/10.3390/app15126879 - 18 Jun 2025
Viewed by 558
Abstract
As gesture-based interactions expand across traditional 2D screens and immersive XR platforms, designing intuitive input modalities tailored to specific contexts becomes increasingly essential. This study explores how users cognitively and experientially engage with gesture-based interactions in two distinct environments: a lean-back 2D television interface and an immersive XR spatial environment. A within-subject experimental design was employed, utilizing a gesture-recognizable smart ring to perform tasks using three gesture modalities: (a) Surface-Touch gesture, (b) mid-air gesture, and (c) micro finger-touch gesture. The results revealed clear, context-dependent user preferences: Surface-Touch gestures were preferred in the 2D context due to their controlled and pragmatic nature, whereas mid-air gestures were favored in the XR context for their immersive, intuitive qualities. Interestingly, longer gesture execution times did not consistently reduce user satisfaction, indicating that compatibility between the gesture modality and the interaction environment matters more than efficiency alone. This study concludes that successful gesture-based interface design must carefully consider contextual alignment, highlighting the nuanced interplay among user expectations, environmental context, and gesture modality. Consequently, these findings provide practical considerations for designing Natural User Interfaces (NUIs) for various interaction contexts.

26 pages, 5883 KiB  
Article
Real-Time Air-Writing Recognition for Arabic Letters Using Deep Learning
by Aseel Qedear, Aldanh AlMatrafy, Athary Al-Sowat, Abrar Saigh and Asmaa Alayed
Sensors 2024, 24(18), 6098; https://doi.org/10.3390/s24186098 - 20 Sep 2024
Cited by 1 | Viewed by 2329
Abstract
Learning to write the Arabic alphabet is crucial for Arab children’s cognitive development, enhancing their memory and retention skills. However, the lack of Arabic language educational applications may hamper the effectiveness of their learning experience. To bridge this gap, SamAbjd was developed: an interactive web application that leverages deep learning techniques, including air-writing recognition, to teach Arabic letters. SamAbjd was tailored to user needs through extensive surveys conducted with mothers and teachers, and a comprehensive literature review was performed to identify effective teaching methods and models. The development process involved gathering data from three publicly available datasets, culminating in a collection of 31,349 annotated images of handwritten Arabic letters. To enhance the dataset’s quality, data preprocessing techniques were applied, such as image denoising, grayscale conversion, and data augmentation. Two models, a convolutional neural network (CNN) and the Visual Geometry Group network (VGG16), were evaluated for their effectiveness in recognizing air-written Arabic characters. Among the CNN models tested, the standout performer was a seven-layer model without dropout, which achieved a high testing accuracy of 96.40%. This model also demonstrated strong precision and F1-score of around 96.44% and 96.43%, respectively, indicating a good fit without overfitting. The web application, built using Flask and PyCharm, offers a robust and user-friendly interface. By incorporating deep learning techniques and user feedback, the web application meets educational needs effectively.
(This article belongs to the Section Sensing and Imaging)
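
As a rough illustration of the kind of classifier the abstract describes, the sketch below builds a small CNN for Arabic letter images in Keras. The layer sizes, the 32×32 grayscale input, and the 28-class output are assumptions made for this example; they do not reproduce the paper’s published seven-layer architecture.

```python
# Hypothetical sketch of a small CNN for 28-class Arabic letter images.
# Layer counts/sizes and the 32x32 grayscale input are assumptions; the
# paper's exact seven-layer architecture is not reproduced here.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_letter_cnn(num_classes: int = 28, input_shape=(32, 32, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # one unit per Arabic letter
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_letter_cnn()
model.summary()
```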

22 pages, 7863 KiB  
Article
Designing Gestures for Data Exploration with Public Displays via Identification Studies
by Adina Friedman and Francesco Cafaro
Information 2024, 15(6), 292; https://doi.org/10.3390/info15060292 - 21 May 2024
Viewed by 1353
Abstract
In-lab elicitation studies inform gesture design by having participants suggest actions to activate system functions. Conversely, crowd-sourced identification studies follow the opposite path, asking users to associate control actions with functions. Identification studies have been used to validate the gestures produced by elicitation studies, but not to design interactive systems. In this paper, we show that identification studies can be combined with in situ observations to design gestures for data exploration with public displays. To illustrate this method, we developed two versions of a gesture-controlled system for data exploration with 368 users: one designed through an elicitation study, and one designed through in situ observations followed by an identification study. Our results show that users discovered the majority of gestures with similar accuracy across the two prototypes. Additionally, the in situ approach enabled the direct recruitment of target users, and the crowd-sourced approach typical of identification studies expedited the design process.
(This article belongs to the Special Issue Recent Advances and Perspectives in Human-Computer Interaction)

21 pages, 2546 KiB  
Article
Assessing the Acceptance of a Mid-Air Gesture Syntax for Smart Space Interaction: An Empirical Study
by Ana M. Bernardos, Xian Wang, Luca Bergesio, Juan A. Besada and José R. Casar
J. Sens. Actuator Netw. 2024, 13(2), 25; https://doi.org/10.3390/jsan13020025 - 9 Apr 2024
Cited by 3 | Viewed by 2626
Abstract
Mid-air gesture interfaces have become popular for specific scenarios, such as interactions with augmented reality via head-mounted displays, specific controls over smartphones, or gaming platforms. This article explores the use of a location-aware mid-air gesture-based command triplet syntax to interact with a smart space. The syntax, inspired by human language, is built as a vocative case with an imperative structure. In a sentence like “Light, please switch on!”, the object being activated is invoked by making a gesture that mimics its initial letter/acronym (vocative, coincident with the sentence’s elliptical subject). A geometrical or directional gesture then identifies the action (imperative verb) and may include an object feature or a second object with which to network (complement), which is also represented by its initial letter or acronym. Technically, an interpreter relying on a trainable multidevice gesture recognition layer and a specific compiler makes decoding of the pair/triplet syntax possible. The recognition layer works on acceleration and position input signals from graspable (smartphone) and free-hand devices (smartwatch and external depth cameras). In a specific deployment at a Living Lab facility, the syntax has been instantiated using a lexicon derived from English (with respect to the initial letters and acronyms). A within-subject analysis with twelve users enabled the analysis of the syntax’s acceptance (in terms of usability, gesture agreement for actions over objects, and social acceptance) and of technology preference among its three device implementations (graspable, wearable, and device-free). Participants expressed consensus regarding the simplicity of learning the syntax and its potential effectiveness in managing smart resources. Socially, participants favoured the Watch for outdoor activities and the Phone for home and work settings, underscoring the importance of social context in technology design. The Phone emerged as the preferred option for gesture recognition due to its efficiency and familiarity. The system, which can be adapted to different sensing technologies, addresses scalability concerns (as it can be easily extended to new objects and actions) and allows for personalised interaction.
(This article belongs to the Special Issue Machine-Environment Interaction, Volume II)
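
To make the vocative/imperative pair-or-triplet idea concrete, here is a minimal decoding sketch. The lexicon, gesture labels, and command names are hypothetical illustrations; the paper’s trainable recognition layer and compiler are not reproduced.

```python
# Minimal sketch of decoding a recognized gesture pair/triplet into a command.
# The lexicon and gesture labels are hypothetical, chosen only to illustrate the
# vocative-imperative structure described in the abstract.
LEXICON = {
    "L": "light",        # vocative: object invoked by its initial letter
    "T": "thermostat",
}
ACTIONS = {
    "swipe_up": "switch_on",    # imperative: geometric/directional gesture
    "swipe_down": "switch_off",
}

def decode(gestures):
    """Map a sequence of recognized gestures to (object, action[, complement])."""
    if len(gestures) < 2:
        raise ValueError("need at least a vocative + imperative gesture")
    obj = LEXICON[gestures[0]]
    action = ACTIONS[gestures[1]]
    complement = LEXICON.get(gestures[2]) if len(gestures) > 2 else None
    return (obj, action, complement) if complement else (obj, action)

# "Light, please switch on!" -> letter 'L' gesture followed by an upward swipe
print(decode(["L", "swipe_up"]))          # ('light', 'switch_on')
print(decode(["T", "swipe_down", "L"]))   # ('thermostat', 'switch_off', 'light')
```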

19 pages, 1245 KiB  
Article
Mid-Air Gestural Interaction with a Large Fogscreen
by Vera Remizova, Antti Sand, I. Scott MacKenzie, Oleg Špakov, Katariina Nyyssönen, Ismo Rakkolainen, Anneli Kylliäinen, Veikko Surakka and Yulia Gizatdinova
Multimodal Technol. Interact. 2023, 7(7), 63; https://doi.org/10.3390/mti7070063 - 24 Jun 2023
Cited by 5 | Viewed by 2902
Abstract
Projected walk-through fogscreens have been created, but there is little research evaluating interaction performance with them. The present study investigated mid-air hand gestures for interaction with a large fogscreen. Participants (N = 20) selected objects from a fogscreen using tapping and dwell-based gestural techniques, with and without vibrotactile/haptic feedback. In terms of Fitts’ law, the throughput was about 1.4 bps to 2.6 bps, suggesting that gestural interaction with a large fogscreen is a suitable and effective input method. Our results also suggest that tapping without haptic feedback has good performance and potential for interaction with a fogscreen, and that tactile feedback is not necessary for effective mid-air interaction. These findings have implications for the design of gestural interfaces suitable for interaction with fogscreens.
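
For context on the throughput figures quoted above, the sketch below shows the standard Fitts’ law throughput computation (ISO 9241-9 style) from the effective index of difficulty and movement time. The movement amplitude, endpoint spread, and movement time are invented numbers, not the study’s data.

```python
# Sketch of the standard Fitts' law throughput computation (ISO 9241-9 style);
# the amplitude, endpoint spread, and movement time below are made up.
import math

def effective_throughput(amplitude_mm, sd_endpoints_mm, movement_time_s):
    """Throughput (bits/s) = IDe / MT, with IDe = log2(Ae / We + 1), We = 4.133 * SDx."""
    we = 4.133 * sd_endpoints_mm                 # effective target width
    ide = math.log2(amplitude_mm / we + 1.0)     # effective index of difficulty
    return ide / movement_time_s

# Hypothetical condition: 300 mm movements, 18 mm endpoint spread, 1.2 s per selection
print(round(effective_throughput(300, 18, 1.2), 2), "bps")  # ~1.94 bps
```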

18 pages, 3497 KiB  
Article
Gesture Vocabularies for Hand Gestures for Controlling Air Conditioners in Home and Vehicle Environments
by Hasan J. Alyamani
Electronics 2023, 12(7), 1513; https://doi.org/10.3390/electronics12071513 - 23 Mar 2023
Viewed by 2310
Abstract
With the growing prevalence of modern technologies as part of everyday life, mid-air gestures have become a promising input method in the field of human–computer interaction. This paper analyses the gestures of actual users to define a preliminary gesture vocabulary for home air conditioning (AC) systems and suggests a gesture vocabulary for controlling the AC that applies to both home and vehicle environments. In this study, a user elicitation experiment was conducted. A total of 36 participants were filmed while employing their preferred hand gestures to manipulate a home air conditioning system. Comparisons were drawn between our proposed gesture vocabulary (HomeG) and a previously proposed vocabulary designed to identify the preferred hand gestures for in-vehicle air conditioners. The findings indicate that HomeG successfully identifies and describes the employed gestures in detail. To obtain a gesture taxonomy suitable for manipulating the AC both at home and in a vehicle, some modifications were applied to HomeG based on suggestions from other studies. The modified gesture vocabulary (CrossG) can identify the gestures of our study, although with a less detailed gesture pattern. Our results will help designers to understand user preferences and behaviour prior to designing and implementing a gesture-based user interface.

13 pages, 1221 KiB  
Article
Interpretative Structural Modeling Analyzes the Hierarchical Relationship between Mid-Air Gestures and Interaction Satisfaction
by Haoyue Guo and Younghwan Pan
Appl. Sci. 2023, 13(5), 3129; https://doi.org/10.3390/app13053129 - 28 Feb 2023
Cited by 2 | Viewed by 1880
Abstract
Mid-air gestures, as a new form of human–computer interaction, have a wide range of satisfaction factors, for which the primary, secondary, and hierarchical relationships between factors are unclear. By examining usability definitions and collecting satisfaction questionnaires and user interviews, 30 observed variables were obtained and a scale was developed. A total of 310 valid questionnaires were collected, and six latent variables were summarized through factor analysis. A quantitative matrix analysis of the latent variables, based on interpretative structural modeling (ISM), was used to construct a hierarchical model of the factors influencing satisfaction with mid-air gestures. The study shows that the factors influencing mid-air gesture satisfaction can be divided into three levels. In the first layer, Attractiveness is the surface-level, direct influencing factor and the goal of mid-air gesture design. In the second layer, Simplicity and Efficiency, Simplicity and Tiredness, and Tiredness and Friendliness interact with each other; Simplicity positively affects Friendliness, and Efficiency positively affects Tiredness. In the third layer, Intuitiveness is the root influencing factor, which affects Simplicity. This study provides a theoretical basis for the design of mid-air gestures so that they can be designed and selected more objectively.
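
As background on the method named in the abstract, the sketch below shows the core of interpretative structural modeling: building a reachability matrix by Boolean transitive closure and partitioning elements into levels. The four-factor adjacency matrix is invented for illustration and is not the paper’s data.

```python
# Minimal sketch of the core of Interpretative Structural Modeling (ISM):
# building the reachability matrix by Boolean transitive closure and then
# partitioning factors into levels. The 4-factor adjacency matrix is invented
# purely for illustration; it is not the paper's data.
import numpy as np

def reachability(adjacency):
    n = adjacency.shape[0]
    m = ((adjacency + np.eye(n, dtype=int)) > 0).astype(int)
    while True:
        m2 = ((m @ m) > 0).astype(int)   # Boolean matrix multiplication
        if np.array_equal(m2, m):
            return m
        m = m2

def level_partition(m):
    remaining, levels = set(range(m.shape[0])), []
    while remaining:
        level = []
        for i in remaining:
            reach = {j for j in remaining if m[i, j]}
            ante = {j for j in remaining if m[j, i]}
            if reach <= ante:            # R(i) intersect A(i) equals R(i)
                level.append(i)
        levels.append(level)
        remaining -= set(level)
    return levels

# factor 3 -> factor 1 -> factor 0, and factor 2 -> factor 0 (hypothetical links)
adj = np.array([[0, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [0, 1, 0, 0]])
print(level_partition(reachability(adj)))  # e.g. [[0], [1, 2], [3]]
```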

24 pages, 3218 KiB  
Article
Mapping Directional Mid-Air Unistroke Gestures to Interaction Commands: A User Elicitation and Evaluation Study
by Yiqi Xiao, Ke Miao and Chenhan Jiang
Symmetry 2021, 13(10), 1926; https://doi.org/10.3390/sym13101926 - 13 Oct 2021
Cited by 8 | Viewed by 3767
Abstract
A stroke is the basic limb movement that both humans and animals naturally and repetitiously perform. Since their introduction into gestural interaction, mid-air stroke gestures have seen a wide range of applications and quite intuitive use. In this paper, we present an approach for building command-to-gesture mapping that exploits the semantic association between interactive commands and the directions of mid-air unistroke gestures. Directional unistroke gestures make use of the symmetry of the semantics of commands, which yields a more systematic gesture set for users’ cognition and reduces the number of gestures users need to learn. However, the learnability of directional unistroke gestures varies across commands. Through a user elicitation study, a gesture set containing eight directional mid-air unistroke gestures was selected based on subjective ratings of each direction’s degree of association with the corresponding command. We evaluated this gesture set in a follow-up study to investigate learnability, comparing the directional mid-air unistroke gestures with user-preferred freehand gestures. Our findings offer preliminary evidence that “return”, “save”, “turn-off” and “mute” are the interaction commands most applicable to directional mid-air unistrokes, which may have implications for the design of mid-air gestures in human–computer interaction.
(This article belongs to the Section Computer)
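
As a simple illustration of how a system might snap a mid-air stroke to one of eight directions and look it up as a command, consider the sketch below. The direction-to-command mapping is hypothetical and does not reproduce the mapping elicited in the study.

```python
# Sketch of snapping a mid-air stroke to one of eight directional unistrokes by
# its dominant angle, and looking up a command. The command mapping is invented;
# the paper's elicited mapping is not reproduced here.
import math

DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]
COMMANDS = {"W": "return", "S": "save", "SW": "turn-off", "N": "mute"}  # hypothetical

def classify_stroke(start, end):
    """Return the 8-way direction of a stroke from its start and end points."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    angle = math.degrees(math.atan2(dy, dx)) % 360
    return DIRECTIONS[int((angle + 22.5) // 45) % 8]

direction = classify_stroke((0.0, 0.0), (-0.8, 0.1))
print(direction, "->", COMMANDS.get(direction, "unmapped"))  # W -> return
```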

18 pages, 2916 KiB  
Article
Investigating the Performance of Gesture-Based Input for Mid-Air Text Entry in a Virtual Environment: A Comparison of Hand-Up versus Hand-Down Postures
by Yahui Wang, Yueyang Wang, Jingzhou Chen, Yincheng Wang, Jie Yang, Ting Jiang and Jibo He
Sensors 2021, 21(5), 1582; https://doi.org/10.3390/s21051582 - 24 Feb 2021
Cited by 9 | Viewed by 4495
Abstract
Although the interaction technology for virtual reality (VR) systems has evolved significantly over the past years, text input efficiency in the virtual environment is still an ongoing problem. We deployed a word-gesture text entry technology based on gesture recognition in the virtual environment. This study aimed to investigate the performance of the word-gesture text entry technology with different input postures and levels of VR experience in the virtual environment. The study revealed that VR experience (how long or how often participants had used VR) had little effect on input performance. The hand-up posture yielded better input performance when using word-gesture text entry technology in a virtual environment. In addition, the study found that the perceived exertion required to complete text input with the word-gesture text entry technology was relatively high. Furthermore, typing accuracy and perceived usability for the hand-up posture were markedly higher than those for the hand-down posture. The hand-up posture also imposed a lower task workload than the hand-down posture. These results support that word-gesture text entry technology with the hand-up posture has greater application potential than with the hand-down posture.
(This article belongs to the Section Intelligent Sensors)

22 pages, 1998 KiB  
Article
Mid-Air Gesture Control of Multiple Home Devices in Spatial Augmented Reality Prototype
by Panagiotis Vogiatzidakis and Panayiotis Koutsabasis
Multimodal Technol. Interact. 2020, 4(3), 61; https://doi.org/10.3390/mti4030061 - 31 Aug 2020
Cited by 18 | Viewed by 7113
Abstract
Touchless, mid-air gesture-based interactions with remote devices have been investigated as alternative or complementary to interactions based on remote controls and smartphones. Related studies focus on user elicitation of a gesture vocabulary for one or a few home devices and explore recommendations of respective gesture vocabularies without validating them by empirical testing with interactive prototypes. We have developed an interactive prototype, based on spatial Augmented Reality (AR), of seven home devices. Each device responds to touchless gestures (identified from a previous elicitation study) via the MS Kinect sensor. Nineteen users participated in a two-phase test (with and without help provided by a virtual assistant) according to a scenario that required each user to apply 41 gestural commands (19 unique). We report on the main usability indicators: task success, task time, errors (false negatives/positives), memorability, perceived usability, and user experience. The main conclusion is that mid-air interaction with multiple home devices is feasible, fairly easy to learn and apply, and enjoyable. The contributions of this paper are (a) validation of a previously elicited gesture set, (b) development of a spatial AR prototype for testing mid-air gestures, and (c) extensive assessment of gestures and evidence in favor of mid-air interaction in smart environments.

22 pages, 19609 KiB  
Article
Evaluation of Full-Body Gestures Performed by Individuals with Down Syndrome: Proposal for Designing User Interfaces for All Based on Kinect Sensor
by Marta Sylvia Del Rio Guerra and Jorge Martin-Gutierrez
Sensors 2020, 20(14), 3930; https://doi.org/10.3390/s20143930 - 15 Jul 2020
Cited by 5 | Viewed by 3911
Abstract
The ever-growing and widespread use of touch, face, full-body, and 3D mid-air gesture recognition sensors in domestic and industrial settings is serving to highlight whether interactive gestures are sufficiently inclusive, and whether or not they can be executed by all users. The purpose of this study was to analyze full-body gestures from the point of view of user experience using the Microsoft Kinect sensor, to identify which gestures are easy for individuals living with Down syndrome. With this information, app developers can satisfy Design for All (DfA) requirements by selecting suitable gestures from existing lists of gesture sets. A set of twenty full-body gestures was analyzed in this study; to do so, the research team developed an application to measure the success/failure rates and execution times of each gesture. The results show that the failure rate for gesture execution is greater than the success rate, and that there is no difference between male and female participants in terms of execution times or the successful execution of gestures. Through this study, we conclude that, in general, people living with Down syndrome are not able to perform certain full-body gestures correctly. This is a direct consequence of limitations resulting from characteristic physical and motor impairments. As a consequence, the Microsoft Kinect sensor cannot identify the gestures. It is important to remember this fact when developing gesture-based Human–Computer Interaction (HCI) applications that use the Kinect sensor as an input device and that will be used by people who have such disabilities.

29 pages, 3944 KiB  
Article
Enhancing Interaction with Augmented Reality through Mid-Air Haptic Feedback: Architecture Design and User Feedback
by Diego Vaquero-Melchor and Ana M. Bernardos
Appl. Sci. 2019, 9(23), 5123; https://doi.org/10.3390/app9235123 - 26 Nov 2019
Cited by 16 | Viewed by 6916
Abstract
Nowadays, Augmented-Reality (AR) head-mounted displays (HMDs) deliver a more immersive visualization of virtual content, but the available means of interaction, mainly based on gesture and/or voice, are still limited and clearly lack realism and expressivity when compared to traditional physical means. In this sense, the integration of haptics within AR may help to deliver an enriched experience, while facilitating the performance of specific actions, such as repositioning or resizing tasks, that are still dependent on the user’s skills. In this direction, this paper describes a flexible architecture designed to deploy haptically enabled AR applications for both mobile and wearable visualization devices. The haptic feedback may be generated through a variety of devices (e.g., wearable, graspable, or mid-air ones), and the architecture facilitates handling the specificity of each. For this reason, the paper discusses how to generate a haptic representation of a 3D digital object depending on the application and the target device. Additionally, the paper includes an analysis of practical, relevant issues that arise when setting up a system to work with specific devices like HMDs (e.g., HoloLens) and mid-air haptic devices (e.g., Ultrahaptics), such as the alignment between the real and virtual worlds. The architecture’s applicability is demonstrated through the implementation of two applications, (a) Form Inspector and (b) Simon Game, built for HoloLens and iOS mobile phones for visualization and for UHK for mid-air haptics delivery. These applications were used with nine users to explore the efficiency, meaningfulness, and usefulness of mid-air haptics for form perception, object resizing, and push interaction tasks. Results show that, although mobile interaction is preferred when this option is available, haptics turn out to be more meaningful than users initially expect in identifying shapes and in contributing to the execution of resizing tasks. Moreover, this preliminary user study reveals some design issues when working with haptic AR. For example, users may expect a tailored interface metaphor, not necessarily inspired by natural interaction. This was the case with our proposal of virtual pressable buttons, built to mimic real buttons by using haptics but interpreted differently by the study participants.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)

20 pages, 3426 KiB  
Article
On the Use of Mobile Devices as Controllers for First-Person Navigation in Public Installations
by Spyros Vosinakis and Anna Gardeli
Information 2019, 10(7), 238; https://doi.org/10.3390/info10070238 - 11 Jul 2019
Cited by 4 | Viewed by 4233
Abstract
User navigation in public installations displaying 3D content is mostly supported by mid-air interactions using motion sensors, such as Microsoft Kinect. On the other hand, smartphones have been used as external controllers of large-screen installations or game environments, and they may also be effective in supporting 3D navigation. This paper aims to examine whether smartphone-based control is a reliable alternative to mid-air interaction for four degrees of freedom (4-DOF) first-person navigation, and to discover suitable interaction techniques for a smartphone controller. For this purpose, we set up two studies: a comparative study of smartphone-based versus Kinect-based navigation, and a gesture elicitation study to collect user preferences and intentions regarding 3D navigation methods using a smartphone. The results of the first study were encouraging, as users with smartphone input performed at least as well as with Kinect and most of them preferred it as a means of control, whilst the second study produced a number of noteworthy results regarding proposed user gestures and users’ stance towards using a mobile phone for 3D navigation.
(This article belongs to the Special Issue Wearable Augmented and Mixed Reality Applications)

16 pages, 4305 KiB  
Article
Hand Gestures in Virtual and Augmented 3D Environments for Down Syndrome Users
by Marta Sylvia Del Rio Guerra, Jorge Martin-Gutierrez, Renata Acevedo and Sofía Salinas
Appl. Sci. 2019, 9(13), 2641; https://doi.org/10.3390/app9132641 - 29 Jun 2019
Cited by 13 | Viewed by 6007
Abstract
Studies have revealed that applications using virtual and augmented reality provide immersion, motivation, fun and engagement. However, to date, few studies have researched how users with Down syndrome interact with these technologies. This research has identified the most commonly used interactive 3D gestures according to the literature and tested eight of these using Oculus, Atheer and Leap Motion technologies. By applying MANOVAs to measurements of the time taken to complete each gesture and the success rate of each gesture when performed by participants with Down syndrome versus neurotypical participants, it was determined that no significant difference was found for age or gender between these two sample groups. A difference was demonstrated only for the independent variable Down syndrome when analysed as a group. By using ANOVAs, it was determined that both groups found it easier to perform the gestures Stop, Point, Pan and Grab; thus, it is argued that these gestures should be used when programming software to create more inclusive AR and VR environments. The hardest gestures were Take, Pinch, Tap and Swipe; thus, these should be used to confirm critical actions, such as deleting data or cancelling actions. Lastly, the authors gather and make recommendations on how to develop inclusive 3D interfaces for individuals with Down syndrome.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
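
The abstract above compares gesture performance between two groups with ANOVAs; the sketch below shows a one-way ANOVA of that general kind using SciPy. The two samples of completion times are fabricated for illustration and are not the study’s measurements.

```python
# Sketch of the kind of group comparison the abstract describes, using a one-way
# ANOVA on gesture completion times (seconds). The two samples are fabricated;
# they are not the study's measurements.
from scipy import stats

down_syndrome_times = [2.9, 3.4, 3.1, 3.8, 3.5, 3.2, 3.6, 3.0]
neurotypical_times  = [2.1, 2.4, 2.0, 2.6, 2.3, 2.2, 2.5, 2.4]

f_stat, p_value = stats.f_oneway(down_syndrome_times, neurotypical_times)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would indicate a group difference
```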

16 pages, 1546 KiB  
Article
Frame-Based Elicitation of Mid-Air Gestures for a Smart Home Device Ecosystem
by Panagiotis Vogiatzidakis and Panayiotis Koutsabasis
Informatics 2019, 6(2), 23; https://doi.org/10.3390/informatics6020023 - 5 Jun 2019
Cited by 21 | Viewed by 7382
Abstract
If mid-air interaction is to be implemented in smart home environments, then users would have to perform in-air gestures to address and manipulate multiple devices. This paper investigates a user-defined gesture vocabulary for basic control of a smart home device ecosystem, consisting of 7 devices and a total of 55 referents (commands per device) that can be grouped into 14 commands (each referring to more than one device). The elicitation study was conducted within a frame (general scenario) of use of all devices to support contextual relevance; in addition, the referents were presented with minimal affordances to minimize widget-specific proposals. In addition to computing agreement rates for all referents, we also computed the internal consistency of user proposals (single-user agreement for multiple commands). In all, 1047 gestures from 18 participants were recorded, analyzed, and paired with think-aloud data. The study arrived at a mid-air gesture vocabulary for a smart-device ecosystem, which includes several gestures with very high, high and medium agreement rates. Furthermore, there was high consistency within most of the single-user gesture proposals, which reveals that each user developed and applied her/his own mental model of the whole set of interactions with the device ecosystem. Thus, we suggest that mid-air interaction support for smart homes should not only offer a built-in gesture set but also provide functions for identifying and defining personalised gesture assignments to basic user commands.
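
Since the abstract reports agreement rates, here is a minimal sketch of the agreement rate (AR) computation commonly used in gesture elicitation studies (the Vatavu and Wobbrock formulation). The proposal counts for the example referent are invented.

```python
# Sketch of the agreement rate (AR) computation commonly used in gesture
# elicitation studies. The proposals below for a hypothetical "TV: volume up"
# referent are invented for illustration only.
from collections import Counter

def agreement_rate(proposals):
    """AR(r) = sum_i |P_i|*(|P_i|-1) / (|P|*(|P|-1)), grouping identical proposals."""
    n = len(proposals)
    if n < 2:
        return 1.0
    groups = Counter(proposals).values()
    return sum(k * (k - 1) for k in groups) / (n * (n - 1))

proposals = ["swipe_up"] * 10 + ["raise_palm"] * 5 + ["circle_cw"] * 3
print(round(agreement_rate(proposals), 3))  # 0.379 for this made-up split
```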
