Design of Interactions for Handheld Augmented Reality Devices Using Wearable Smart Textiles: Findings from a User Elicitation Study

Abstract: Advanced developments in handheld devices' interactive 3D graphics capabilities, processing power, and cloud computing have created great potential for handheld augmented reality (HAR) applications, which allow users to access digital information anytime, anywhere. Nevertheless, existing interaction methods are still confined to the touch display, device camera, and built-in sensors of these handheld devices, which suffer from obtrusive interactions with AR content. Wearable fabric-based interfaces promote the subtle, natural, and eyes-free interactions that are needed when performing interactions in dynamic environments. Prior studies explored the possibilities of using fabric-based wearable interfaces for head-mounted AR display (HMD) devices. However, the interface metaphors of HMD AR devices are inadequate for handheld AR devices, as a typical HAR application requires users to perform interactions with only one hand. In this paper, we investigate the use of a fabric-based wearable device as an alternative interface option for performing interactions with HAR applications. We elicited user-preferred gestures that are socially acceptable and comfortable to use for HAR devices. We also derived an interaction vocabulary of wrist and thumb-to-index touch gestures, and present broader design guidelines for fabric-based wearable interfaces for handheld augmented reality applications.


Introduction
Augmented reality (AR) overlays computer-generated visual information, such as images, videos, text, and 3D virtual objects, onto the real world [1]. Unlike virtual reality (VR), which immerses users inside a computer-generated environment, AR allows users to see the real world with virtual objects blended into the real environment and enables them to interact with the virtual content in real time [2]. Increasing computational capabilities, powerful graphics processors, built-in cameras, a variety of sensors, and cloud-supported information access have made AR possible on handheld mobile devices such as smartphones and tablets [3]. The evolution of AR technology, from heavy wearable head-mounted display (HMD) devices to smartphones and tablets, has enabled new and rich handheld mobile AR experiences for everyday users. In handheld augmented reality (HAR), users see the real world overlaid with virtual information in real time through the camera on their mobile devices. Current mobile devices are small, lightweight, and portable enough to be carried wherever users go. With this ubiquitous availability, HAR allows us to develop and design innovative applications in navigation, education, gaming, tourism, interactive shopping, production, marketing, and other domains [3]. Thus, smartphones have been identified as an ideal platform for HAR experiences in various outdoor and indoor environments [4][5][6].
In order to interact with the virtual world using HAR displays, a user needs to position and orient the device with one hand and manipulate the virtual 3D objects with the other. In general, the touchscreen is used as the primary interface for interacting with AR content [7,8]. In addition, the various built-in sensors in handheld devices, such as cameras, GPS, compass, accelerometers, and gyroscopes, make it possible to precisely determine the position and orientation of the device in the real world (e.g., [8][9][10]). Furthermore, the device's camera can naturally capture the user's mid-air hand movements while the device is held [11,12].
As in HMD AR, manipulations such as selecting and moving virtual 3D information are primary interactions on HAR devices [13]. Existing HAR interaction methods, such as touch input, offer promising solutions for manipulating virtual content (e.g., [14]). However, they still have substantial limitations. For instance, touch input is limited by the device's physical boundary, and usability suffers as on-screen content becomes occluded by the fingers (i.e., finger occlusion [15,16]). Also, 2D inputs on the touch surface do not directly support manipulating the six degrees of freedom of a virtual object in the augmented world [17]. Unlike touch-based input, device-movement-based HAR interaction methods support 3D object manipulations (e.g., [8,9]). However, orienting the device to rotate 3D virtual objects may force users to lose sight of the manipulated virtual object on the display. Mid-air hand gesture input supports more natural and intuitive interaction for HAR applications (e.g., [11,12]). Nevertheless, using the back camera on the device to track hand and finger movements is often not feasible for several reasons: (1) it may not be very accurate, (2) the device must be held at a certain distance from the user's eyes, and (3) the human arm has a very limited region of reach. Furthermore, interactions with handheld devices also raise issues such as social acceptance when performed in front of unfamiliar people [18].
With the recent developments in fabric-based technology and new types of electronic textiles and smart clothing [19], Mark Weiser's dream of computers that "weave themselves into the fabric of everyday life" [20] has taken one step closer to reality. Integrated into users' clothing, fabric sensors can unobtrusively detect user movements [21] and convert them into input methods for controlling modern devices (e.g., [22][23][24]). Fabric-based wearable devices are soft, natural, and flexible; they remove the need for the user to hold or carry a device, can be worn without discomfort, and offer more freedom of movement compared to rigid sensors (e.g., accelerometers) (e.g., [25]). Recently, researchers have begun studying fabric-based interaction methods in an attempt to provide a more natural, unobtrusive end-user interaction experience (e.g., [24,26]).
As stated by Hansson et al., an interface is considered a "natural interface" when it builds upon the knowledge and experience that the user already possesses [27]. Humans are used to wearing clothes throughout their lives. In general, they do not need special training to wear or use clothes or, in other words, to use a clothing-based wearable interface. Among the different body parts, users' hands enable various poses that are particularly important and suitable for gestural interaction for the following reasons: (1) hands are always available, (2) they support high proprioception, (3) they require minimal movement, (4) they produce distinct subtle gestures, and (5) they offer socially acceptable gestures.
In this paper, we are particularly interested in exploring fabric-based interaction methods that allow a user to use combined touch and gesture input to perform HAR interactions. To this end, we explore the potential use of gestures performed via a hand-worn clothing-based artifact (similar to a glove). Textile-based input is not new (e.g., [22][23][24][25]), but its use has been directed primarily at traditional mobile devices [24] and smartwatches [23]. To the best of our knowledge, no studies have explored the combined use of hand and wrist gestures performed via a hand-worn device to interact with HAR content. To fill this gap, our work investigates the use of clothing-based hand and wrist gestures for HAR content. Our paper makes the following contributions: (1) an investigation of the potential design space of clothing-based gestures using the hand and wrist for HAR applications; (2) identification of a set of user-defined gestures; and (3) a set of design guidelines for fabric-based devices that allow users to interact with HAR content.

Participatory Elicitation Studies
An elicitation method involves potential users in the process of designing any new type of interface or interaction. This approach allows designers to explore the possibility of finding gestures from the end-users' perspective, which are often not in the initial gesture set designed by the designers. User-preferred gestures are easier to remember than expert-designed gestures [28]. To elicit gestures from users, an interface designer shows the result of an action for each task, known as a referent [29]. The designer then asks each participant to imagine and perform an action that would cause each referent to occur. These user-proposed actions are known as symbols [29]. The designer compares all of the symbols for a given referent across participants and groups them based on their similarity. Wobbrock et al. [29] introduced a method to understand the agreement and other factors associated with user-elicited gestures. Their method has been widely used in other elicitation studies (e.g., [30][31][32]). Most recently, Vatavu and Wobbrock [33] redefined the agreement measure to characterize the data more precisely. This refined agreement formula and approach is applied in the analysis of our results.
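The grouping-and-tallying step of this procedure can be sketched in a few lines of Python. The sketch below is illustrative only; the gesture labels are hypothetical, and real studies also involve human similarity judgments when grouping symbols:

```python
from collections import Counter

def group_proposals(proposals):
    """Group identical gesture proposals for one referent and report
    each group's share of all proposals (as a fraction of the total)."""
    counts = Counter(proposals)
    total = len(proposals)
    return {gesture: count / total for gesture, count in counts.items()}

# Hypothetical proposals from five participants for one referent:
# three participants proposed wrist extension, the others differed.
shares = group_proposals(["wrist_ext", "wrist_ext", "wrist_ext",
                          "tap_button1", "wrist_flex"])
```

A designer would then inspect the largest groups per referent as candidates for the final gesture set.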
In terms of user-elicited gestures for augmented reality, Piumsomboon et al. [34] presented a study of user-defined hand gestures for head-mounted AR displays. Their participants elicited gestures using both hands. By contrast, HAR devices are not head-worn and are often operated using only one hand on the touchscreen. Thus, the interface metaphors of HMD AR are not suitable for HAR displays. In terms of single-handed thumb-to-finger interactions, Chan et al. [35] presented a study of user-elicited single-hand gestures using two or more fingers. They reported that both the thumb and index finger were highly involved in the interactions of their gesture set. Their study focused on the fingers and did not include the wrist, which can provide richer interaction possibilities (e.g., [36,37]).
The above studies did not use devices to elicit user-designed gestures. This approach enables users to propose any gestures, regardless of the feasibility of implementation and the affordance of a device. Moreover, with this approach, the method of grouping elicited gestures that look similar significantly influences the magnitude of user agreement rates [38], as this process often depends on the researcher's goals for the specific domain for which gestures were elicited. Consequently, interface designers following this method are unable to recommend a suitable sensing technology to recognize the user-elicited gestures. To address these issues, Nanjappan et al. [39] recently co-designed a prototype with potential end-users and conducted an elicitation study with a fabric-based wearable prototype to devise a taxonomy of wrist and touch gestures. Their non-functional prototype allowed users to physically feel the affordance of the interface and perform their preferred gestures, which are implementable in the functional version of the device. We adopted and extended their approach to elicit both wrist and thumb-to-index touch gestures for performing interactions with HAR applications.

Hand-Worn Clothing-Based Interfaces
The human hand is our primary means of interacting with and manipulating real-world objects in everyday life. Hence, it is not surprising that hand gestures, involving the hands and fingers, are repeatedly studied in interactive systems (e.g., [25,26,40,41]). Glove-based interfaces, the most popular systems for capturing hand movements, appeared nearly thirty years ago and continue to attract the interest of a growing number of researchers. Glove-based input devices remove the requirement for the user to carry or hold a device to perform interactions. They are implemented by directly mounting different types of sensing technologies on the hand, mostly on each finger. The development of fabric-based strain sensing technology allows strain sensors to be either sewn onto or attached to a glove to detect hand or finger movements [25,41,42].
Notably, Miller et al. [42] proposed a glove system for entering input with the thumb using two approaches: (1) tapping the ring and little fingertips with the thumb or (2) sliding the thumb over the entire palmar surfaces of the index and middle fingers. Their glove consists of both conductive and non-conductive fabrics, supporting both discrete and continuous output. They used two different sensing layouts: (1) the palmar sides of both the index and middle fingers are completely covered with a grid of conductive threads woven in an over-and-under pattern, and (2) the pads of the thumb, ring, and little fingers are covered in conductive fabric. Their glove-based input essentially acts like a set of switches that can be closed by making contact with the thumb. This approach allows the glove to detect tap inputs on any fingertip, while swipe gestures are performed on the conductive grids on the index and middle fingers. While their approach sounds promising for single-handed eyes-free interactions, the authors reported that their sensing technology provided inaccurate results.

Peshock et al. [25] presented "Argot", a wearable one-handed keyboard glove that allows users to type English letters, numbers, and symbols. Argot is made of multiple polyester- or spandex-blended stretch knits and textile patches, which provide breathability and mobility. It employs conductive thread and textile conductive patches, which improve user comfort, and utilizes a simple input language. The patches are installed at 15 different locations: on the finger pads (5 patches, one per digit), the fingernails (4 patches, excluding the thumb), and the sides of the index, middle, and ring fingers (2 patches on each finger side). In addition, it provides both tactile and auditory feedback through magnetic connections. A layering system encloses the magnets within the conductive patches, and all materials are insulated using fusible stitchless bonding film. However, the authors reported that users would accidentally trigger the side buttons on the index, middle, and ring fingers while performing gesture inputs.
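The switch-like sensing described for Miller et al.'s glove lends itself to a simple classification rule: a single pad touched briefly reads as a tap, while contact traveling across adjacent grid pads reads as a swipe. The sketch below is an illustrative model, not the authors' implementation; it assumes contact events arrive as (time, pad) samples, with grid pads numbered along the swipe direction:

```python
def classify_contact(events):
    """Classify a sequence of (time, pad) thumb-contact samples.
    A single pad held briefly is a tap; contact moving across
    differently numbered grid pads is a swipe, with direction
    taken from the first and last pad touched."""
    pads = [pad for _, pad in events]
    if len(set(pads)) == 1:
        return "tap"
    return "swipe_right" if pads[-1] > pads[0] else "swipe_left"

# Thumb rests on one pad: a tap.
tap = classify_contact([(0.00, 2), (0.05, 2)])
# Thumb travels across pads 0 -> 1 -> 2 of the conductive grid: a swipe.
swipe = classify_contact([(0.00, 0), (0.10, 1), (0.20, 2)])
```

A real implementation would also need debouncing and timing thresholds, which likely contributed to the accuracy problems the authors reported.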
Thumb-to-finger interaction methods present a promising input mechanism, suitable for performing subtle, natural, and socially acceptable gestures. In particular, prior studies on hand gestures [35,43] found that the thumb and index finger are preferred over the middle, ring, and pinky fingers. As explained by Wolf et al. [43], the thumb is opposable and can rotate and touch the other fingers of the same hand. Similarly, the muscles involved in moving each finger and the biomechanics of the hand make the index finger well suited for independent movement [35]. In particular, Yoon et al. [24] proposed "Timmi", a finger-worn piezoresistive fabric-based prototype that supports thumb-to-finger multimodal input. Timmi comprises elastic fabric (80% Nylon and 20% Spandex) painted with conductive elastomers and conductive threads. A diluted conductive elastomer was used to transform a normal fabric into a piezoresistive fabric. It consists of pressure and strain sensors made using two different shapes, with conductive threads cross-stitched to control the fabric. The interface can be worn on a finger, especially the index finger; the Spandex provides exceptional elasticity. Timmi supports the following input methods: finger bend, touch pressure, and swipe gestures. This approach takes advantage of digit-wise independence and successfully uses thumb-to-index touch input in a natural, eyes-free manner.
Similarly, wrist-worn interfaces are particularly suitable for performing natural, subtle, and eyes-free interactions that can be both continuous and discrete [44] and support high proprioception [45]. In particular, Strohmeier et al. [46] proposed "WristFlicker", which detects wrist movements using conductive polymer cables as strain sensors embedded in wrist-worn clothing (e.g., a wrist warmer). They employed three sets of two counteracting strain sensors to precisely measure the flexion/extension, ulnar/radial deviation, and pronation/supination of the human wrist. WristFlicker supports both discrete and continuous input and can be used in an eyes-free manner. Recently, Ferrone et al. [47] developed a wristband equipped with stretchable polymeric strain gauge sensors to detect wrist movements. A number of small filaments (0.7 mm in diameter and 1 cm in length) are distributed over the surface of the wristband at regular intervals to cover the muscles of interest. These filaments are made from a mixture of thermoplastic and nano-conductive particles.
Appl. Sci. 2019, 9, 3177

Their system showed high accuracy in sensing wrist flexion and extension. Our proposed prototype can be implemented by combining the two approaches mentioned above, thumb-to-index touch inputs and wrist gestures; as such, the results of this study could also be domain-independent and useful for designers of applications beyond augmented reality.
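The counteracting sensor pairs used in WristFlicker [46] suggest a simple differential reading: one sensor of a pair stretches with flexion while the other relaxes, so their normalized difference indicates the direction and magnitude of wrist deflection along that axis. The sketch below is a minimal illustration of this idea; the linear gain and normalization are assumptions for the example, not values from [46]:

```python
def wrist_angle(flexor_strain, extensor_strain, gain=90.0):
    """Estimate a signed wrist angle from one counteracting strain pair.
    Positive values indicate flexion, negative values extension.
    Assumes non-negative strain readings; the linear gain is an
    illustrative calibration constant, not a measured one."""
    diff = flexor_strain - extensor_strain
    total = flexor_strain + extensor_strain
    if total == 0:
        return 0.0  # no strain on either sensor: treat as neutral
    return gain * diff / total

# Equal strain on both sensors corresponds to a neutral wrist.
neutral = wrist_angle(0.5, 0.5)
```

A full implementation would repeat this per axis (flexion/extension, ulnar/radial deviation, pronation/supination) and calibrate the gain per user.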

Summary
HAR is becoming promising in many different fields, such as tourism, shopping, learning, and gaming [3]. To allow a more natural and intuitive way of interacting with HAR content, researchers have suggested extending interaction methods beyond the handheld mobile device itself. Advances in fabric sensing technology allow us to combine multiple interface modalities. Prior studies on fabric-based interfaces focused on implementing fabric sensing technology to detect either finger (e.g., [24]) or wrist (e.g., [46]) movements, but not for HAR interactions. Thus, a thorough investigation of user preferences for a fabric interface worn on the hand will inform designers about the types of gestures to utilize in the design of HAR interfaces that lead to a natural and intuitive interactive experience.

Methodology
Our study explored a scenario in which a fabric-based hand-worn device is available to users to perform interactions with HAR devices. We followed the methodology proposed by [39] to identify user-preferred thumb-to-index touch and wrist gestures for HAR interactions using a hand-worn fabric-based prototype (see Figure 1a).

In order to minimize the effect of users' previous knowledge acquired from existing touch-based interfaces [48], we applied the production technique [48] by asking each participant to propose three different gestures for each HAR task. In a typical HAR usage scenario, only one hand is available for users to perform interactions. Therefore, when given a HAR task, we asked our participants to perform gestures that they thought were suitable and natural for that particular task while holding the device in one hand.


Fabric-Based Hand-Worn Interface
Our proposed interface is a modified hand glove with a fingerless design, made of Lycra and cotton. Three soft foam buttons, two on the sides of the index finger at the proximal and middle phalanges and one on the palmar side of the index finger (see Figure 1b), were fixed using fabric glue. All three buttons allow touch (to toggle between states) and hold (to keep one state over time) gestures. The locations of these soft buttons were determined based on prior studies [24,25] and on user preferences from our series of initial pilot studies.
Our co-designed fabric-based interface is based on human hand movements, especially wrist [49] and finger [24] joint movements, and supports both thumb-to-index touch and wrist gesture inputs (see Figure 1c-e). The wrist is a flexible joint, and wrist movements can take place along different axes. Our proposed design supports both horizontal and vertical wrist movements. Flexion occurs when the palm bends downward, towards the wrist; extension is the opposite movement. Ulnar and radial deviation are the rightward and leftward wrist movements, respectively, performed with the palm facing down. Furthermore, the three soft foam buttons enable legacy-inspired touch and hold gestures.
The hand-worn fabric-based prototype used in our study supports the following wrist and thumb-to-index touch gestures: (1) flexion, extension, and ulnar and radial deviation of the wrist joint; (2) tap and hold gestures using the thumb; and (3) combinations of these gestures. There are only two constraints when performing a combination of wrist and thumb-to-index touch gestures: first, the two touch gestures (tap and hold) cannot be performed together and should always be associated with one of the wrist gestures; second, the touch gesture must precede the wrist gesture. See Figure 1c-e for sample gestures supported by our prototype.
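The two constraints above define a small gesture grammar that can be checked mechanically. The sketch below is illustrative, with hypothetical gesture labels; it models a gesture as a sequence of one or two tokens, allowing standalone gestures and touch-then-wrist combinations as described:

```python
# Hypothetical labels for the gestures supported by the prototype.
WRIST = {"flexion", "extension", "ulnar_deviation", "radial_deviation"}
TOUCH = {"tap", "hold"}

def is_valid_combination(sequence):
    """Check a gesture sequence against the two stated constraints:
    (1) tap and hold cannot be combined with each other, and a touch
    gesture in a combination pairs with exactly one wrist gesture;
    (2) in a combination, the touch gesture precedes the wrist gesture.
    Standalone wrist or touch gestures are also allowed."""
    if len(sequence) == 1:
        return sequence[0] in WRIST | TOUCH
    if len(sequence) == 2:
        first, second = sequence
        return first in TOUCH and second in WRIST
    return False
```

Encoding the grammar this way makes the constraints easy to enforce in a gesture recognizer for the functional version of the device.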

Participants
Thirty-three participants volunteered for the study (10 female; mean age 20.61, SD = 1.13). All were university students from different educational backgrounds (mathematics, engineering, computer science, accounting, and industrial design) and were recruited using WeChat, a widely used social media platform. All participants reported owning a handheld mobile device (such as a smartphone or tablet). Of these, 87.9% (including 14 participants with HAR experience) mentioned that they had some knowledge of AR. Fourteen participants reported owning at least one wearable device, and 29 participants (87.88%) expressed interest in using wearable mid-air gestural interfaces to perform interactions with their handheld devices. None of our participants were left-handed.

Experimental Setup
The interaction area was defined above a table (160 cm × 80 cm) in a dedicated experimental laboratory space. All participants were seated in front of the table, and a Nexus 5X mobile phone running Android SDK 8.1 was used as the HAR device. Two 4K cameras mounted on tripods captured the entire experimental procedure from two different angles. A 55-inch 4K display was used to play the video of possible gestures (see Figure 2). The whole elicitation process comprised four phases for each participant. All 33 participants were video-recorded throughout the experiment, and extensive notes were taken. In addition to administering two questionnaires, one of the researchers observed each session and interpreted the gestures for the participants. The entire process lasted about 50 min per participant.

Introduction
Participants were introduced to the experimental setup, and a short video about HAR interactions was played for those who were not aware of this technology. A short online questionnaire was given to them to collect demographic and prior-experience information. Each participant was then given 5 min to familiarize themselves with our HAR app.


Handheld Augmented Reality Tasks
We wanted to identify a list of tasks related to commonly used HAR applications. To do so, we first looked at Piumsomboon et al.'s task list [34] for HMD AR interactions and adopted the tasks related to 3D object manipulation. In addition, we surveyed the most common operations used in HAR applications across different domains (such as IKEA's Place app) and collected a list of user interface (navigation) tasks. Furthermore, in HAR, the camera is the main interaction medium and enables different kinds of HAR experiences, such as creating AR-based video content and personalized emojis (notably Apple's Animoji). Therefore, we added tasks related to controlling the camera. After the pilot studies, we limited the task list to three categories to reduce the duration of the experiment and retain participants' commitment during the elicitation study. Table 1 presents the final set of 27 tasks, classified into three categories: (1) User Interface, (2) Object Transformation, and (3) Camera.


Pre-Elicitation
In this second phase, participants were informed of the purpose of the study and primed [48] with a two-minute video demonstrating the possible ways of using our prototype. The video explained the use of the prototype in detail, including the wrist and thumb-to-index touch gestures supported, such as tap and hold. The researcher answered each participant's questions at this stage, e.g., regarding the types of gestures supported by our prototype and the use of the same button with different gestures for multiple tasks (for example, using button one to perform both tap and hold gestures). We informed all of our participants that using the buttons was not compulsory, and they were told to use them based on their preference. After this, participants were given a suitably sized fabric-based interface. We prepared three different sizes of the black fabric-based wrist interface, suitable for both right and left hands, and did not constrain which hand participants wore the interface on; they could choose either hand. We informed participants that all 27 tasks would be introduced, one by one, via our HAR app, and asked them to perform three different gestures while holding the handheld device in one hand.

Elicitation
To elicit user-preferred gestures for HAR interactions, participants were asked to hold the AR device (i.e., the smartphone) in their preferred hand. They were encouraged to rest their arms on the table while holding the device. The 27 HAR tasks were presented via our HAR app and also on a printed A4 sheet for reference. All 33 participants were asked to follow the think-aloud protocol [29] while thinking of three different gestures for each task. Each participant was given a minute to associate three gestures of their choice with each task and perform them one by one while holding the device (see Figure 3a,b). They were also instructed to pick their preferred gesture for each task and to rate it in terms of "social acceptance" and "comfort of use" in dynamic environments. All 27 tasks were always presented in the same order for each participant. For each task, the experimenter recorded the gesture code of the preferred gesture and its ratings using custom-built software. For a greater understanding of their thought process, participants were asked to say a few words about their preferred gesture for each task while not holding the phone.


Semi-Structured Interview
Finally, we conducted a short interview with each participant about their experience with our fabric-based prototype, including their opinions and any difficulties encountered while performing gestures for the HAR tasks. Verbal feedback was also encouraged throughout the experiment while using the wrist and thumb-to-index touch gestures for HAR interactions.

Measures
We employed the following measures to assess and understand users' preferences and cognition for gestures produced with our fabric-based interface for HAR interactions. Initially, we grouped and tallied the wrist and thumb-to-index touch gestures based on predefined unique gesture codes, which produced a percentage score for each gesture. Then, we computed agreement rates (A-Rate) for each of the 27 tasks, coagreement rates (CR) between pairs of tasks (e.g., scroll up/scroll down), and between-subject agreements (e.g., by HAR experience) using the formulas proposed by Wobbrock et al. [29] and Vatavu and Wobbrock [33], respectively, and their agreement analysis application (AGATe: Agreement Analysis Toolkit). The agreement rate is computed as:

AR(r) = (|P| / (|P| − 1)) · Σ_{Pi ⊆ P} (|Pi| / |P|)² − 1 / (|P| − 1)

where "P is the set of all proposals for referent r, |P| the size of the set, and Pi subsets of identical proposals from P" [33]. Using this formula, we can understand how much agreement our participants shared for gestures produced with our fabric-based interface for HAR interactions. Finally, we asked participants to rate their preferred gesture for each task in terms of social acceptance and comfort of use, on a scale of 1-7.

Results
We collected a total of 2673 gestures (33 participants × 27 tasks × 3 gestures) for the 27 given HAR tasks from our 33 participants. In addition, our data collection included the video recordings, transcripts, and verbal feedback of each participant. Our results include the agreement between our users' wrist and thumb-to-index touch gesture proposals, a gesture taxonomy, a user-preferred gesture set, subjective feedback, and qualitative observations.

Classification of Gestures
All elicited user-preferred gestures were organized into three categories supported by our proposed interface: (1) wrist gestures: flexion, extension, and ulnar and radial deviations; (2) touch gestures: tap and hold; and (3) combinations of touch and wrist gestures. These gestures were performed using the wrist and fingers. Touch gestures were performed with the thumb by either tapping or holding any of the three soft foam buttons on the index finger. Wrist gestures were performed by moving the wrist joint on two different axes. As stated earlier, one key constraint is that the touch gesture (tap or hold) must always precede the wrist gesture when performing a combination of thumb-to-index and wrist gestures.
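For illustration, these three categories and the touch-before-wrist constraint could be encoded as follows (a hypothetical Python sketch; the labels are our own, not the study's actual gesture codes):

```python
from dataclasses import dataclass
from typing import Optional

WRIST = {"flexion", "extension", "ulnar deviation", "radial deviation"}
TOUCH = {"tap", "hold"}
BUTTONS = {1, 2, 3}

@dataclass(frozen=True)
class Gesture:
    """A gesture is a touch, a wrist movement, or a touch followed by a wrist movement."""
    touch: Optional[str] = None      # "tap" or "hold"
    button: Optional[int] = None     # 1-3, required with a touch
    wrist: Optional[str] = None      # wrist movement, if any

    def __post_init__(self):
        if self.touch is None and self.wrist is None:
            raise ValueError("empty gesture")
        if self.touch is not None and (self.touch not in TOUCH or self.button not in BUTTONS):
            raise ValueError("touch gestures need a valid action and button")
        if self.wrist is not None and self.wrist not in WRIST:
            raise ValueError("unknown wrist movement")
        # Key design constraint: in a combination, the touch (tap or hold)
        # always precedes the wrist movement, so a combined gesture is
        # simply (touch, wrist) in that fixed order.

    @property
    def category(self):
        if self.touch and self.wrist:
            return "combination"
        return "wrist" if self.wrist else "touch"

print(Gesture(touch="hold", button=2, wrist="flexion").category)  # → combination
```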

Consensus between the Users
Table 2 shows the agreement rate for each of the 27 tasks. The participants' agreement rates (A-Rate) ranged between 0.053 (lowest agreement) and 0.328 (medium agreement), with a mean A-Rate of 0.144. We applied the coagreement rate (CR) formula proposed by Wobbrock et al. [29] to understand the agreement shared between two tasks r1 and r2. For example, in most cases, users chose opposite gestures for directional pairs with related meanings, such as "Swipe left/Swipe right" and "Move closer/Move further". In our results, "Move closer" and "Move further" have equal agreement rates (A-Rate for "Move closer" = 0.138, and A-Rate for "Move further" = 0.138). The CR for "Move closer" and "Move further" was 0.095. This suggests that opposite gestures were used to perform these two tasks.
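As a sketch of the pairwise reading of coagreement (assuming CR counts the fraction of participant pairs that agree on both referents, per the agreement-analysis formulation cited above; this is our own illustrative code):

```python
from itertools import combinations

def coagreement_rate(proposals_r1, proposals_r2):
    """Coagreement CR(r1, r2): fraction of participant pairs whose
    proposals match for BOTH referents r1 and r2.

    proposals_r1[i] and proposals_r2[i] are participant i's gestures
    for the two referents (e.g., "Move closer" / "Move further").
    """
    n = len(proposals_r1)
    pairs = list(combinations(range(n), 2))
    agree = sum(
        1 for i, j in pairs
        if proposals_r1[i] == proposals_r1[j] and proposals_r2[i] == proposals_r2[j]
    )
    return agree / len(pairs)

# Three participants: the first two agree on both referents, the third on neither.
print(coagreement_rate(["flex", "flex", "tap"], ["ext", "ext", "hold"]))  # → 0.3333333333333333
```

Under this reading, a CR lower than the two individual A-Rates (as with "Move closer"/"Move further") indicates that the pairs agreeing on one task are not always the pairs agreeing on the other.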

Effects of Users' Prior Experience on Agreement
In our study, 33 participants were asked to perform gestures for HAR applications. We found more agreement among users with no prior HAR experience for the tasks "Long press on the target" (0.343 vs. 0.124, Vb(2,N=33) = 370.806, p = 0.003), "Select the target" (0.295 vs. 0.144, Vb(2,N=33) = 177.859, p = 0.020), and "Uniform scale up" (0.171 vs. 0.052, Vb(2,N=33) = 110.071, p = 0.052), while users with HAR experience achieved higher agreement for the tasks "Move right" (0.301 vs. 0.133, Vb(2,N=33) = 217.095, p = 0.012), "Move left" (0.255 vs. 0.152, Vb(2,N=33) = 81.504, p = 0.088), and "Scroll down" (0.229 vs. 0.124, Vb(2,N=33) = 85.409, p = 0.082); see Table 2. Similarly, there was more agreement among participants who do not use a wearable device for the tasks "End recording" (0.275 vs. 0.105, Vb(2,N=33) = 202.604, p = 0.019) and "Select next section" (0.392 vs. 0.2645, Vb(2,N=33) = 18.881, p = 0.057) than among those who use at least one wearable device. To further understand these differences, we computed between-group coagreement rates for each task. For example, the coagreement for the task "Long press on the target" was CRb = 0.207, showing that only 20.7% of all pairs of users across the two groups agreed on how to press the selected target in a HAR application, i.e., by holding button three. The other participants disagreed because, while users with prior HAR experience preferred to press the selected target by holding button three (the equivalent of performing a hold gesture on a smartphone), users with no prior HAR experience elicited more variations, such as tapping button one or holding button two. All of the gestures elicited from users who had never used HAR applications before showed a clear influence of previous experience acquired from touchscreen devices; this difference was statistically significant (p = 0.003) for "Long press on the target" and exposed the largest effect size (Vb = 370.806) among all tasks. A similar effect was observed for the "Take a photo" task, but from another perspective: although the agreement rates of the two groups were similar (0.190 vs. 0.210) and the difference was not significant (Vb(2,N=18) = 3.096), the coagreement rate revealed different gesture preferences for the two groups (CRb = 0.167).

Taxonomy of Wrist and Thumb-to-Index Touch Gestures
To further understand the wrist and thumb-to-index touch gestures used to perform HAR interactions, we considered three dimensions in our analysis. We adopted and modified these dimensions from previous studies [35,50,51] and grouped them by the specifics of both wrist and thumb-to-index touch gestures:

1. Complexity (see Figure 4) identifies both touch and wrist gestures as either (a) simple or (b) complex. We define simple gestures as gestures performed using only one gesture (e.g., a wrist or touch gesture). For example, moving the wrist downwards toward the palm to perform a downward flexion and/or using the thumb to press any of the soft foam buttons are identified as simple gestures. Gestures performed using two or more distinct gestures are identified as complex gestures (e.g., pressing a soft foam button followed by moving the wrist downwards toward the palm). We adopted this dimension from [52].

2. Structure (see Figure 5) represents the relative importance of the wrist and touch gestures in the elicitation of HAR gestures, with seven categories: (a) wrist, (b) touch (button one), (c) touch (button two), (d) touch (button three), (e) touch (button one) and wrist, (f) touch (button two) and wrist, and (g) touch (button three) and wrist. We modified this category from the taxonomy of Vatavu and Pentiuc [51]. For example, for the touch-only categories, the tap or hold gesture was performed using any one of the three buttons.

3. Action (see Figure 6) groups the gestures based on their actions rather than their semantic meaning, with five categories: (a) scroll, (b) swipe, (c) tap, (d) hold, and (e) compound. We adopted and modified this classification from Chan et al. [35], who used these dimensions to define user-designed single-hand microgestures without any specific domain. For example, downward flexion and upward extension were grouped as scrolls, while leftward flexion and rightward extension were grouped as swipes.
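These three dimensions can be read as a simple classification function. The sketch below is our own illustrative code (gesture labels are hypothetical), assigning a gesture its complexity, structure, and action following the definitions above; ulnar/radial deviations are not mapped to an action type in this sketch:

```python
SCROLL_WRIST = {"downward flexion", "upward extension"}   # vertical movements
SWIPE_WRIST = {"leftward flexion", "rightward extension"} # horizontal movements

def classify(touch=None, button=None, wrist=None):
    """Classify a gesture along the three taxonomy dimensions.

    touch:  "tap" or "hold" (None for wrist-only gestures)
    button: 1-3, the soft foam button pressed (None without a touch)
    wrist:  wrist movement label (None for touch-only gestures)
    """
    complexity = "complex" if touch and wrist else "simple"
    if touch and wrist:
        structure = f"touch (button {button}) and wrist"
        action = "compound"
    elif touch:
        structure = f"touch (button {button})"
        action = touch                      # "tap" or "hold"
    else:
        structure = "wrist"
        action = "scroll" if wrist in SCROLL_WRIST else "swipe"
    return complexity, structure, action

print(classify(wrist="downward flexion"))
# → ('simple', 'wrist', 'scroll')
print(classify(touch="hold", button=2, wrist="leftward flexion"))
# → ('complex', 'touch (button 2) and wrist', 'compound')
```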

Simple gestures were the most performed for both discrete (82.75%) and continuous tasks (49.78%). Interestingly, participants preferred simple touch gestures for discrete tasks (51.05% vs. 31.70%) and simple wrist gestures for continuous tasks (36.58% vs. 13.2%). Similarly, complex gestures (50.22% vs. 17.25%) were preferred for the continuous tasks. These results were confirmed by a one-way ANOVA test: we found a significant effect for complexity (F(1,25) = 73.274, p < 0.001), as our participants preferred simple gestures for the discrete HAR tasks.

Our participants preferred to use button one on the index finger to perform both simple and complex gestures for all 27 tasks. For example, for the discrete tasks, 27.51% of gestures were performed using button one (button two: 23.78%; button three: 17.01%). Similarly, button one was involved in nearly 19.91% of the complex gestures for the continuous tasks (button two and wrist: 17.97%; button three and wrist: 12.34%). A one-way ANOVA test revealed a statistically significant effect for touch inputs using button one for discrete tasks (F(1,25) = 14.275, p = 0.001) and for button one and wrist inputs for continuous tasks (F(1,25) = 23.773, p < 0.001).

Compounds (all 27 tasks) were the most common of the five action types, mainly used to perform continuous tasks, particularly the object transformation tasks. Our participants mentioned two key reasons why they preferred Compounds for object transformation tasks: (1) wrist gestures were simple and easy to perform; and (2) the buttons allowed adding functions to wrist gestures so that similar gestures could be associated with directional pairs. Taps (20 of 27 tasks) were the next most preferred action type. Notably, Taps were mainly used for the camera tasks. Participants mentioned that buttons are most suitable for camera tasks as (1) buttons allowed them to perform simple interactions and (2) they did not cover the visual content on the screen, making them particularly convenient for taking photos or recording videos. Notably, Tap and Hold actions were used to perform the state toggles "Select the target" and "Long press on the target".

Participants associated wrist movements with tasks that resemble symbolic actions. Scrolls were highly preferred for tasks that resemble vertical movements (e.g., "Select next section" and "Select previous section"), while Swipes were mostly associated with tasks that involve horizontal movements (e.g., "Go to next target" and "Go to previous target"). A one-way ANOVA test revealed a statistically significant effect for action (F(1,25) = 26.741, p < 0.001); users preferred Tap actions for the discrete tasks in HAR applications, which confirms the strong influence of previous interaction experience acquired from touchscreen devices.

Consensus Gesture Set for HAR Interactions
We isolated 891 preferred gestures (33 participants × 27 tasks) from the original 2673 gestures. Fifty-seven unique gestures were used to perform the 891 preferred gestures for the 27 tasks. To create a user-preferred gesture set, we picked the gesture that achieved the highest consensus for each task. This left us with 13 unique gestures for the 27 tasks, representing 599 of the 891 gestures, or 67.23% of the user-preferred gestures (see Figure 7).
We asked our participants to rate their preferred gesture proposals from 1 (very poor fit) to 7 (very good fit) to denote their confidence in the usability of their preferred gestures in two categories: social acceptance and comfort of use. We compared the subjective ratings between the user-preferred gesture set and the discarded set. The average scores for social acceptance were 6.019 (SD = 0.992) and 4.830 (SD = 1.603), and the average scores for comfort of use were 6.362 (SD = 0.366) and 5.612 (SD = 0.844), respectively. Independent t-tests confirmed that the user-preferred set was rated significantly higher than the discarded set on both factors: social acceptance (p = 0.023) and comfort of use (p = 0.005). Therefore, the gestures in the user-preferred set are more suitable than the discarded ones for HAR interactions in dynamic outdoor environments in terms of social acceptance and comfort of use.
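The consensus-set construction described above (keep, per task, the proposal with the highest count, then measure how much of the preferred data the winners cover) can be sketched as follows (our own illustrative code with hypothetical labels):

```python
from collections import Counter

def consensus_set(preferred):
    """Pick the highest-consensus gesture per task and report coverage.

    `preferred` maps each task to the list of preferred gestures
    (one per participant). Returns (task -> winning gesture, coverage).
    """
    winners, covered, total = {}, 0, 0
    for task, gestures in preferred.items():
        gesture, count = Counter(gestures).most_common(1)[0]
        winners[task] = gesture
        covered += count
        total += len(gestures)
    return winners, covered / total

# Toy data: three participants, two tasks.
prefs = {
    "Select the target": ["tap one", "tap one", "hold two"],
    "Move down": ["downward flexion"] * 3,
}
winners, coverage = consensus_set(prefs)
print(winners["Select the target"], coverage)  # → tap one 0.8333333333333334
```

In the study's data, the same procedure yields 13 unique winning gestures covering 599/891 (67.23%) of the preferred proposals.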

Users' Feedback
All participants were encouraged to share their opinions and suggestions during the semi-structured discussion interview. All participants felt comfortable wearing our proposed prototype to perform gestures for the 27 HAR tasks. Nineteen participants (4 females) expressed that our proposed interface is most suitable for candid interaction [53], especially in a public place. When asked about the positions of the buttons, all 33 participants were satisfied with the current locations of the three soft buttons, although three participants (P3, P25, P27) recommended adding an additional button. In particular, two of them preferred to have the fourth button on the side of the index finger between buttons one and two, while the other preferred to have it on the palmar side of the index finger. One user (P32) suggested that the buttons could provide haptic feedback (e.g., vibration), while another (P10) proposed an additional slider-type control on the fabric-based interface. All participants mentioned that the glove-based single-handed interface was convenient for controlling the camera, as it does not block the display like a touchscreen. Five female participants particularly expressed that they would like a fancier, more colorful interface.

Discussion
In this work, we adopted and extended the methodology proposed by [39] to uniquely identify usable gestures for HAR interactions. We defined all possible user-preferred wrist and thumb-to-index touch gestures using our prototype before the study began, and grouped the elicited gestures using the unique gesture codes. All 33 participants utilized the large degrees of freedom of the wrist and thumb-to-index touch gestures supported by our design to elicit distinct gestures for each HAR task. We used our HAR app to introduce all 27 tasks (most of them dichotomous pairs) in the same order to all participants. Though we set a one-minute time limit to think of three different gestures for each task, all participants settled on their three gestures within 45 s. As mentioned in the results section, the gestures users elicited with our hand-worn interface for this specific set of HAR tasks achieved medium agreement, which is in line with a previous elicitation study using a non-functional fabric-based prototype [39].
All participants performed three gestures for each task while holding the mobile phone in their left hand. Thus, they had to keep looking at the touchscreen to understand the tasks and to associate their preferred gesture with each task. A significant number of gestures from the user-preferred set (5 simple touch gestures and 4 touch-and-wrist gestures) were touchscreen-inspired tap and hold gestures. Nevertheless, our participants were not entirely driven by legacy bias, as they preferred two distinct gestures for state toggles. This finding contrasts with a prior elicitation study [35] in which no prototype was used; there, participants were often forced to use the same gesture for state toggles, whereas with our method, participants were able to perform two different gestures for state toggles.

Design Recommendations for Fabric-Based Interface for HAR Interactions
In this section, we discuss some of our participants' preferred gestures in more detail, propose design guidelines for using wearable smart textiles for HAR interactions, and recommend the following suggestions for further investigating the use of fabric-based interfaces for HAR interactions.

Simple Gestures Were Preferred for HAR Interactions
Of all user-preferred gestures, 82.75% (discrete) and 49.78% (continuous) were "simple" gestures, i.e., gestures performed using only one action, either a touch or a wrist gesture, as defined in our gesture taxonomy (see Figure 7a-i). Participants reported that simple gestures were convenient to perform while complementing the primary task. However, they preferred to use complex gestures for the 3D object manipulation tasks (see Figure 7j-m). We also found that complex gestures were less preferred for discrete (17.25%) than for continuous tasks (50.22%), showing a clear preference for simple gestures for discrete tasks. These findings align with a prior elicitation study using a fabric-based interface to perform interactions with in-vehicle systems [39].

Utilize Familiar Touch-Based Input Methods for Discrete HAR Tasks
We found that 51.05% of the gestures proposed for the discrete tasks were touch gestures (37.53% tap and 13.52% hold). The literature has documented the effect of users' prior experience [48]. Interestingly, subjects preferred to use two different variations of touch gestures for discrete tasks, adopted from their prior experience [31]. They favored touch gestures to toggle between states as they wanted to perform quick actions in a fairly short time. For example, the tap gesture was preferred for switching between photo and video mode (see Figure 7a), while the hold gesture was preferred for long-pressing on the target (see Figure 7d), particularly influenced by the semantic expression of the task. As reported in prior elicitation studies (e.g., [35,54]), most of our users preferred touch gestures identical to familiar touchscreen inputs for state toggles; this suggests that for HAR interactions, gesture designers need to consider the existing metaphors related to the nature of the tasks.

Design Wrist Gestures for Continuous HAR Tasks
Of all proposed gestures, 36.58% (continuous) and 31.70% (discrete) were simple wrist gestures. Participants preferred to use wrist gestures for continuous tasks even though touch gestures were available: only 13.20% of the gestures for continuous tasks were touch gestures (tap: 1.95%; hold: 11.26%). We found that participants used flexion/extension in 86.89% of their simple wrist gestures and in 89.21% of their complex gestures. Participants used in-air gestures to manipulate imaginary controls when associating wrist gestures with the continuous tasks, e.g., pushing a virtual object down (Figure 7f) or away in mid-air (Figure 7g), and sliding an imaginary control to either side (Figure 7h,i). This association of wrist gestures with imaginary objects produced more dichotomous gestures, i.e., similar gestures with opposite directions, as users tended to pair related tasks (Figure 7f-i). We recommend associating wrist gestures with imaginary controls to create an instinctive mapping between movement and action, e.g., performing a downward flexion replicates the physical action of moving an object down with control.

Favor Stretchable Fabric with Fingerless Design
We designed a fingerless hand glove (made of cotton and Lycra) as the physical interface. Three soft foam buttons were positioned on the index finger between the metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joints. The custom-made fingerless design allowed users to retain full control of their thumb and comfortably rotate it to perform touch (Tap and Hold) gestures on the index finger. For fabric-based interfaces to be practical and usable, we recommend that they be thin, lightweight, and exceptionally stretchy, with increased elasticity as well as enhanced comfort and breathability.

Consider Resistive Sensing Technique to Capture Both the Wrist and Touch Inputs
Prior studies on fabric-based interfaces demonstrated the advantages of deploying resistive sensing technology to unobtrusively detect both wrist gestures [46] and thumb-to-index inputs [24]. They used fabric-based strain sensors to capture wrist movements and pressure sensors to detect legacy-inspired touch inputs. Our proposed design supports both wrist and thumb-to-index touch input methods. To successfully recognize the gestures from our user-defined set, we recommend using two opposing fabric-based piezoresistive strain sensors [55] on the wrist and pressure sensors [56] on the index finger. The strain sensors can be sewn and the pressure sensors glued onto the hand glove to detect both wrist and thumb-to-index touch inputs.
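As a sketch of how such resistive sensors are typically read, the conversion below assumes a generic voltage-divider circuit (not the specific circuitry of [55,56]; the reference resistance, supply voltage, and ADC range are assumed values):

```python
def sensor_resistance(adc_value, adc_max=1023, v_supply=3.3, r_ref=10_000.0):
    """Estimate a resistive sensor's resistance from an ADC reading.

    Assumes a voltage divider: v_supply -- r_ref -- [ADC node] -- sensor -- GND,
    so v_out = v_supply * r_sensor / (r_ref + r_sensor).
    """
    v_out = adc_value / adc_max * v_supply
    if v_out >= v_supply:
        raise ValueError("open circuit or saturated reading")
    return r_ref * v_out / (v_supply - v_out)

# A mid-scale reading means the sensor matches the 10 kΩ reference resistor.
print(round(sensor_resistance(511.5)))  # → 10000
```

A piezoresistive strain sensor on the wrist would change this resistance with flexion/extension, and a pressure sensor on the index finger with thumb presses; thresholding the two streams would then distinguish wrist, touch, and combined gestures.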

Design Fabric-Based Interfaces That Foster Socially Acceptable Gestures
The aim of our study is to design an unobtrusive fabric-based interface for HAR applications that end-users are willing to use in public settings. Usability is defined by Bevan as "quality in use" [57]. However, a user's readiness to use ubiquitous devices, particularly wearable devices, is not limited to the quality of the device. Beyond the default usability aspects, the decision to use a wearable device, particularly a fabric-based one, is based on numerous factors, mainly (1) wearing comfort and (2) social acceptability. Knight et al. [58] reported that factors both internal and external to the wearer significantly influence how comfortable they feel about wearing and using the device and, as such, influence their intention to use it. Similarly, Rice and Brewster [59] reported that social acceptability is influenced by various factors, such as the device's appearance, social status, and the cultural conventions associated with it. Our proposed design allows both already familiar touch inputs and mid-air gestural inputs in a subtle, unobtrusive posture, and the users' subjective feedback shows that the gestures in the user-preferred set are suitable for HAR interactions in dynamic outdoor environments in terms of social acceptance and comfort of use. Furthermore, fabric-based wearable interfaces are useful in extremely cold places (e.g., Siberia), where touch interactions on HAR devices otherwise require users to remove their gloves.

Limitations
We conducted our study inside a room, and participants could rest the hand holding the AR device on the table while producing the gestures. It would be interesting to investigate whether users would produce the same interactions in other scenarios, such as standing or even walking. We used a non-functional fabric-based fingerless glove prototype in our study. We also excluded pronation/supination of the forearm from our design because our participants did not prefer these movements and mentioned difficulties associating them with the given 27 HAR tasks during the series of pilot studies. Despite the absence of interactive functionality and of this set of wrist movements, we were still able to understand users' behavior and responses to the glove as an input interface for HAR interactions. None of our participants were left-handed; thus, our gesture set is only suitable for right-handed users.

Conclusions
In this study, we investigated the use of a textile-based interface for HAR interactions. To explore the design space of hand-worn fabric-based interfaces as an alternative interface for HAR applications, we designed a glove-based prototype which supports both wrist and thumb-to-index input methods. The fingerless design allowed users to comfortably rotate their thumb to perform touch gestures on the soft foam buttons integrated into the index finger. We recruited 33 potential end-users to participate in the elicitation study. By following the methodology to elicit gestures using a textile-based wearable interface, we were able to elicit gestures for HAR interactions. In addition to reflecting user behavior, our user-preferred gesture set has properties that make it usable in dynamic HAR environments, such as social acceptance and comfort of use. We have also presented a taxonomy of wrist and thumb-to-index touch gestures useful for performing interactions with HAR devices. By identifying gestures in our study, we have gained insight into user-preferred gestures and derived design guidelines for further exploration. Our results suggest that using fabric-based interfaces to perform HAR interactions is simple and natural, and it keeps users' fingers off the screen, where they could otherwise cover the content.

Figure 1. (a) Fabric-based prototype used in our study for right-handed participants. Three soft foam buttons (two on the sides of the index finger on the proximal and middle phalanges, and one on the palmar side of the index proximal phalange) were glued to a fingerless hand glove. (b) Finger joints on the index finger. (c-e) Sample gestures supported by the in-house developed prototype: (c) wrist only; (d) touch only; and (e) touch and wrist.

Figure 2. A participant performing a gesture while wearing the fabric-based prototype on the right hand and holding the handheld augmented reality (HAR) device in the left hand. Two cameras were used to capture the entire process.

Figure 3. Variety of gestures performed by our participants using one hand (right hand) while holding the device in the other hand: (a) wrist gestures and (b) thumb-to-index touch gestures.

Figure 4. Frequency distribution of gesture complexity in each category. Simple gestures were highly preferred for discrete tasks.

Figure 5. Observed percentages of wrist and touch gestures for HAR interactions. Button one was highly preferred for both simple and complex gestures. Simple wrist gestures were preferred for continuous tasks and simple touch gestures for discrete tasks (showing a clear influence of prior experience).

Figure 6. Frequency distribution of action types in the preferred gesture set for the 27 tasks. The Tap action was highly preferred for the camera tasks.

Figure 7. User-preferred gesture set for HAR interactions. Simple touch gestures: (a) tap button one, (b) tap button two, (c) tap button three, (d) hold button two, (e) hold button three. Simple wrist gestures: (f) downward flexion, (g) upward extension, (h) leftward flexion, (i) rightward extension. Complex gestures: (j) hold button one and downward flexion, (k) hold button two and downward flexion, (l) hold button two and leftward flexion, (m) hold button two and rightward extension.

Table 1. The selected list of 27 HAR tasks used in our study. The tasks are classified into three categories and presented via our HAR app.

Table 2. Agreement rates for the 27 tasks in two categories (discrete and continuous), with the effects of users' prior experience (experience with HAR applications and use of a wearable device). The highest agreement rates are highlighted in dark gray, the lowest in light gray. Significance is highlighted in green.