Article

Towards a Taxonomy for In-Vehicle Interactions Using Wearable Smart Textiles: Insights from a User-Elicitation Study

1 Department of Computer Science and Software Engineering, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
2 Department of Chemistry, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
3 School of Electrical Engineering, Electronics and Computer Science, University of Liverpool, Liverpool L69 3BX, UK
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2019, 3(2), 33; https://doi.org/10.3390/mti3020033
Submission received: 15 March 2019 / Revised: 15 April 2019 / Accepted: 27 April 2019 / Published: 9 May 2019

Abstract

Textiles are a vital part of the clothing we wear every day. They are flexible, often lightweight, and serve a wide variety of uses. Today, with rapid developments in small and flexible sensing materials, textiles can be enhanced and used as input devices for interactive systems. Clothing-based wearable interfaces are well suited to in-vehicle controls: they can combine various modalities to enable simple, natural, and efficient interactions while minimizing any negative effect on driving. However, research on clothing-based wearable in-vehicle interfaces remains underexplored, so little is known about how to use textile-based input for in-vehicle controls. As a first step towards filling this gap, we conducted a user-elicitation study to involve users in the process of designing in-vehicle interactions via a fabric-based wearable device. From this study we distilled a taxonomy of wrist and touch gestures for in-vehicle interactions using a fabric-based wrist interface in a simulated driving setup. Our results help drive forward the investigation of the design space of clothing-based wearable interfaces for in-vehicle secondary interactions.

1. Introduction

This paper explores the use of a clothing-based device worn on drivers’ wrists that allows them to perform gestures to interact with in-vehicle systems. Touchscreens are rapidly replacing traditional dashboard controls, such as climate and media controls, and have become an essential part of current in-vehicle interfaces. For instance, the recently released Tesla Model 3 (https://www.tesla.com/model3) features a 15-inch touchscreen display that replaces all traditional dashboard controls. Other car manufacturers will likely follow Tesla’s approach and extend the use of touchscreen-based controls for in-vehicle systems [1]. However, touch displays intrinsically demand visual attention and cognitive workload, as it is almost impossible to interact with them without rapid eye glances and arm movements [2]. In addition, the absence of the tactile cues provided by traditional dashboard controls, combined with the smooth surface of the touchscreen, further draws the driver’s attention away from the road. Any visual focus directed towards touch displays can therefore seriously affect driving performance, as the driver may not be able to keep their focus on the road [3].
Driving requires very high levels of perceptual and cognitive focus. Performing secondary interactions with in-vehicle systems also demands considerable visual and cognitive resources [4]. To interact with touchscreen-based devices, users often need to take one hand off the steering wheel and perform an action, often with their eyes off the road. Dividing attention in this way increases the visual and cognitive workload, which can seriously degrade driving performance and increase the chance of road accidents: research shows that performing secondary tasks on touchscreens while driving challenges drivers’ focus on the primary task of steering the vehicle [3].
To improve driving safety, different alternative in-vehicle interaction methods have been explored (e.g., [2,5,6]). Prior studies proposed new in-vehicle interfaces on the steering wheel [5,6,7,8,9,10], influenced by González et al.’s “eyes on the road, hands on the wheel” paradigm [5]. Another frequently investigated in-vehicle interaction method is mid-air gestures [2,11]. Similarly, Riener and Wintersberger [12] explored the possibility of performing gestures while drivers are holding the gear lever. Other studies used multiple modalities for in-vehicle interactions [2,13].
With continuous rapid developments in new types of electronic textiles and smart clothing [14], interface designers and researchers are now able to produce fabric-based wearable interfaces (e.g., [15,16,17,18]). These combine textile processing methods with conductive fibers, yarns, and fabrics to enable a large space of sensing possibilities [15,16,18]. There are many ways of transforming a regular clothing material into a sensing device; for instance, capacitive or resistive sensors can be seamlessly integrated into fabrics using conductive yarns [18]. Unlike traditional rigid electronics or sensors (e.g., accelerometers), textile-based devices are shape-changeable, stretchable, bendable, and twistable, and can therefore detect and measure human body movements without discomfort. With fabric-based sensing prototypes (e.g., [15,18]) becoming technologically feasible and inexpensive, one can envision a future where wearable interfaces are used to control a variety of ubiquitous computing devices in our daily lives, including those in our vehicles.
While fabric-based interfaces are technologically advanced, most effort has been directed at mobile devices [17,19] and smartwatches [16]. To our knowledge, no study has examined users’ preferences for wrist gestures, especially those performed with smart textiles, to control in-vehicle systems or applications. As such, very little information is available to help designers determine what types of wrist wearable devices are suitable and what types of wrist gestures are natural and usable.
Our paper contributes towards filling this gap. We follow a methodology that involves users in the process of designing fabric-based wearable interfaces. We conducted a simulated driving experiment to elicit a taxonomy of wrist and touch gestures for in-vehicle interactions using a fabric-based wrist interface. Our results can help drive forward the investigation of the design space of clothing-based wearable interfaces for in-vehicle secondary interactions. The elicited wrist and touch gestures cover the most essential commands drivers need while steering the vehicle. Our results also offer recommendations for further investigation of fabric-based wearable interfaces for in-vehicle controls.

2. Related Work

2.1. Participatory In-Vehicle Elicitation Studies

To safely perform secondary tasks while driving, the following issues need to be carefully addressed: (1) reduce the visual and cognitive load, and (2) minimize the effort to switch attention between the primary and secondary tasks. To address these issues, numerous studies explored new in-vehicle interactions on (e.g., [5]) or around the steering wheel (e.g., [20]) and on different areas of the dashboard around the driver (e.g., [2,21]). Angelini et al. [22] performed a user-elicitation study for in-vehicle interactions on the surface of the steering wheel. They asked users to perform gestures using both hands while holding the steering wheel. Their users elicited gestures using the fingers, especially the index finger and thumb, on the external ring and the spokes of the steering wheel while not driving the car. Some of their elicited gestures require users to take both hands off the steering wheel, which is very unsafe while driving. Using a similar approach, Döring et al. [7] integrated a touch surface into the middle of the steering wheel, replacing the spokes, to elicit multi-touch in-vehicle gestures. They reported that their approach significantly reduced drivers’ visual demand. Their simulation setup consisted of a simple driving task on a two-lane endless highway with no vehicle traffic, in which participants had to change lanes while driving the car. Although the touch surface covered the entire wheel, they restricted the interaction space to the two sides of the steering wheel, so drivers could only interact on the surface with the thumbs of both hands. Most recently, Huber et al. [23] proposed a taxonomy of in-vehicle gestures using force-based touch interactions on the steering wheel. They elicited thumb-based touch interactions in a controlled simulation environment with oncoming traffic and an enforced speed limit. As one of their participants mentioned, force-based bi-manual touch interaction requires a significant amount of multi-tasking, which is not desirable for in-vehicle interaction.
Endres et al. [20] introduced a method, “Geremin”, to detect gestures performed in the immediate proximity of the steering wheel. They applied electric field sensing, mounting two antennas at fixed locations behind the steering wheel to detect the movement or presence of the driver’s hands. They reported that their approach is cost effective and that accuracy can be increased by adding two or more antennas. May et al. [24] addressed the importance of using mid-air gestures to perform in-vehicle interactions. They used a two-stage method to elicit mid-air gestures. Fourteen of their participants elicited air gestures while resting their elbow on an armrest in front of a turned-off driving simulation setup. The most common gestures produced by these participants were then evaluated by 147 remote participants through an online survey. Their study contributed an in-vehicle air gesture set for navigating touchscreen menus. Riener et al. [25] explored users’ preferences for performing open gestures inside the vehicle. They found that most of the gestures to manipulate car controls were performed around the steering wheel, while gestures to interact with media controls were performed near the gear lever.
The above studies aimed to minimize visual and cognitive load using either touch or mid-air gestures to interact with in-vehicle systems. Touch gestures can reduce visual attention [7], but multi-touch interaction is unsafe as it forces drivers to move both hands away from the steering wheel [9]. On the other hand, mid-air gestures still require rapid eye glances [2], as they are typically used to operate touchscreen menus that are hard to use without looking at them. Moreover, these studies addressed only one input modality, touch or air gestures. Pfleging et al. [13] proposed a multimodal interface combining speech with on-the-wheel touch gestures: speech is used to describe the object the driver wants to control, which is then manipulated using multi-touch gestures. They suggested that this combination can minimize the driver’s visual demand. However, speech increases cognitive load, can distract drivers, and can decrease their reaction time [26]. Additionally, speech is unsuitable in noisy environments and when drivers need to use their voice for other purposes (e.g., talking with passengers).

2.2. Wrist-Worn Wearable Interfaces

While the human wrist offers many degrees of freedom (both angular and rotational movements), previous studies on in-vehicle interactions made only limited use of it (e.g., [2,21]), relying on simple wrist movements to perform interactions inside the vehicle. Outside the vehicle, numerous studies have explored the wrist for tilt interactions in tasks such as text entry on smartphones [27] and smartwatches [28]. Wrist-worn interfaces are particularly suitable for performing eyes-free and hands-free subtle interactions that can be both continuous and discrete [29]. Lopes et al. [30] demonstrated that users are able to use their proprioceptive sense to feel the relative position of their wrist and perform interactions without actually looking at their hands; they reported that their participants enjoyed using their wrist as an interface. Prior studies used numerous types of sensors (e.g., accelerometers) to detect wrist movements (e.g., [28,31]). For instance, Crossan et al. [31] used 3-axis accelerometers to evaluate wrist rotation as an interaction technique for three postures: standing, seated, and rested. They reported that accelerometers were accurate for static positions but gave inaccurate estimates of wrist rotation when users were moving. Similarly, Gong et al. [28] used a dozen infrared proximity sensors on a watch strap to detect flexion, deviation, and extension of the wrist. The prototypes reported in these studies require sensors to be worn on the forearm or upper arm, which makes them less practical to use while steering a vehicle. In addition, their rigid sensor components restrict the movement of the wrist (e.g., circumduction).
To successfully use the human wrist as an interface for controlling in-vehicle systems, a device must meet the following key requirements. First, it should minimize interference with the primary task of driving: the device should be comfortable to wear on the hand, the fingers should still be able to hold the steering wheel securely and firmly, and the wrist interface should not interfere with natural hand movements. Second, the interface should be natural, and the gestures should be simple, intuitive, and subtle. Furthermore, the device should support single-hand input only, so the other hand can stay on the steering wheel. A clothing-based approach would meet these requirements and allow users to perform comfortable wrist gestures while steering the vehicle. Notably, Strohmeier et al. [32] applied resistive sensing to measure three types of wrist movements on a wrist-worn fabric. They embedded two stretch sensors in a wrist-warmer to recognize wrist movements in a device- and application-independent way. Similarly, Iwasaki et al. [33] developed clothing-based switches which offer standard switch functions (on/off) by simply touching the device. Our proposed wrist prototype combines these two approaches; as such, the results of this study could also be domain independent and useful for designers of applications beyond in-vehicle interactions, e.g., augmented reality applications.

2.3. Summary

Touchscreens are replacing traditional dashboard controls and have become an essential part of in-vehicle controls; their ease of use directly affects driving safety. To increase driving safety, experts recommend considering a variety of interfaces [34]. Wrist-based gesture input has the extra benefit of supporting eyes-free interactions (e.g., [28]). Similarly, advances in fabric sensing technology allow us to combine multiple interface modalities. A prior study [32] on a fabric-based wrist interface focused on implementing the wrist sensing technology but did not systematically investigate usable wrist gestures for different use cases, particularly in-vehicle secondary interactions. Previous studies reported that in-vehicle interfaces created through participatory design tend to be easier to learn and use [21,23]. Therefore, a thorough study of users’ preferences for a fabric artifact worn on the wrist will be useful for the design of in-vehicle interfaces that support safe driving while also allowing a rich interactive and driving experience.

3. Methodology

The current study explored the scenario where a clothing-based device would be available to users. Initially, we wanted to understand which materials would be suitable for such a device, so we conducted preliminary experiments with different fabric materials to find a suitable combination of elastic fabrics. Afterwards, we explored the subjective wearable comfort of the chosen fabric with different elasticities and lengths. Finally, we designed prototypes together with end users [35] and arrived at the design used in this study (see Figure 1a below).
User-elicitation studies conducted without a device ask users to perform gestures regardless of the feasibility of implementation and the affordance of a device, but this approach often produces gestures that are not technically feasible (e.g., [36]). In addition, the process of grouping elicited gestures that appear similar to find consensus between users influences the magnitude of agreement rates [37], because this grouping usually depends on the designer’s goals for the specific domain for which gestures were elicited. In general, with this approach, designers are unable to recommend a suitable sensing method to capture the proposed gestures. To address these issues, we co-designed a prototype with end users to elicit user-preferred wrist gestures for in-vehicle interactions, because it allows users to feel the affordance of the device and to perform gestures that are implementable in a future functional prototype.
To minimize the influence of legacy bias [38], we applied the production technique [38] by asking each participant to design three different gestures for each command. Prior in-vehicle elicitation studies were either conducted without driving [22,24] or limited the driving activity [7]; therefore, some of their elicited gestures are not practical in a real driving situation. To address this issue, we asked our participants, when given an interaction, to perform wrist-based gestures that they thought were suitable and natural for that interaction while steering the car in the simulation environment.

Fabric-based Wrist Interface

Our prototype is made of Lycra and cotton with a thumb-hole design. Two foam buttons, one on the bottom (under the wrist) and one on the posterior side of the forearm (see Figure 1a), were fixed using fabric glue. The location of the buttons was determined based on users’ preferences from our pilot studies.
Our in-house wearable prototype is based on the use of the wrist, especially wrist joint movements [39]. The wrist is a flexible joint, and wrist movements can take place along different axes. Our proposed design supports both horizontal and vertical wrist movements (see Figure 1b–d). Flexion occurs when the palm bends downward, towards the wrist; extension is the opposite movement. Ulnar and radial deviation are the rightward and leftward wrist movements that occur when the palm is facing down. The condyloid joint of the wrist can also produce circumduction, a circular movement in which the wrist and hand rotate. Circumduction offers more degrees of movement than the other wrist motions and can be very precise. In addition, two soft foam buttons allow touch and hold gestures.
The following wrist and touch gestures are possible using our proposed prototype: (1) flexion and extension, (2) ulnar and radial deviation, (3) clockwise and anti-clockwise circumduction, (4) tap, (5) hold, and (6) combinations of these gestures. Flexion/extension and ulnar/radial deviation support two different directions, vertical and horizontal; we considered these as distinct gestures, each identified through a unique gesture code.
There are only three constraints: (1) the two touch gestures (tap and hold) cannot be performed together; (2) a touch gesture should always be associated with one of the wrist gestures; and (3) a touch gesture must precede the wrist gesture. These gestures can be performed in three different in-vehicle locations: on the steering wheel, off the steering wheel (mid-air), and on the gear lever. All gestures were identified using predefined unique gesture codes.
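To make the gesture codes and the three constraints above concrete, the following Python sketch shows one hypothetical way to encode an elicited proposal and validate it; the enum names, location labels, and code format are our own illustration and are not taken from the study’s actual coding scheme.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Wrist(Enum):
    FLEXION = "flexion"
    EXTENSION = "extension"
    ULNAR_DEVIATION = "ulnar deviation"
    RADIAL_DEVIATION = "radial deviation"
    CW_CIRCUMDUCTION = "clockwise circumduction"
    ACW_CIRCUMDUCTION = "anti-clockwise circumduction"

class Touch(Enum):
    TAP = "tap"
    HOLD = "hold"

class Location(Enum):
    ON_WHEEL = "on steering wheel"
    OFF_WHEEL = "off steering wheel (mid-air)"
    ON_GEAR_LEVER = "on gear lever"

@dataclass
class GestureProposal:
    """One elicited proposal: an optional touch gesture followed by a wrist gesture."""
    location: Location
    wrist: Optional[Wrist] = None
    touch: Optional[Touch] = None   # a single tap OR hold (constraint 1: never both)

    def is_valid(self) -> bool:
        # Constraint 2: a touch gesture must be paired with a wrist gesture.
        if self.touch is not None and self.wrist is None:
            return False
        # Constraint 3 (the touch gesture precedes the wrist gesture) is implicit in code().
        return self.wrist is not None or self.touch is not None

    def code(self) -> str:
        """A compact, hypothetical gesture code, e.g. 'ON_WHEEL:TAP+FLEXION'."""
        parts = [g.name for g in (self.touch, self.wrist) if g is not None]
        return f"{self.location.name}:{'+'.join(parts)}"

# Example: tap the bottom button, then flex the wrist, on the steering wheel.
proposal = GestureProposal(Location.ON_WHEEL, wrist=Wrist.FLEXION, touch=Touch.TAP)
assert proposal.is_valid()
print(proposal.code())  # ON_WHEEL:TAP+FLEXION
```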

4. Co-Design Study

4.1. Participants

Eighteen unpaid participants (4 females) from a local university (aged between 18 and 36; Mean = 22.56, SD = 3.729) volunteered for the study. They were all university students from different backgrounds (such as computer science, engineering, mathematics, business, and accounting) and were recruited using WeChat, a popular social media platform. All of them held a valid driver’s license at the time of the study and were from right-hand traffic countries, such as the US or Germany. Four had more than 5 years of driving experience. Only 6 of them had experience using mid-air gestural interfaces (such as the Leap Motion and Kinect). Nine reported owning a wearable device and, except for one participant, all were right-handed. All of them preferred to use a map navigation application while driving and would carry their mobile phone into the car.

4.2. Simulated Elicitation Setup

Our elicitation setup was composed of a driving simulator displayed on a 55-inch 4K TV serving as the windscreen, a game steering wheel, pedals, and gear lever controls. A PC with an i7 processor and a GTX 1080 Ti graphics processing unit (GPU) ran this setup (see Figure 2a). We used two different driving conditions with a left-hand drive passenger car. We used the automatic driving mode, yet drivers had to use the pedals, gear lever, and handbrake while driving the car. An expressway in a city with zero percent car traffic and pedestrians was used in the practice session. In the elicitation phase, a downtown city setting with 40% traffic and pedestrians was used. Two cameras with 4K resolution were mounted on tripods to capture the elicitation phase and the interview; they were positioned at two different angles to capture the gestures. The participants were informed of the recording but could not see it live, to keep their focus on driving. We also screen-captured the simulation for each participant to find any lane deviation or change of speed while performing the gestures, which would indicate distraction or re-allocation of mental resources. We also recorded the conversation between the participants and researchers.

4.3. Commands

We identified 16 commands or interactions that are among the most commonly performed secondary tasks on a smartphone or a touchscreen device while driving (see Table 1). We classified these commands into three categories: (1) Phone, (2) Map Navigation, and (3) Music Player. Performing these non-driving related tasks is often distracting, and if drivers take their eyes off the road and hands off the steering wheel this can lead to accidents [40]. However, phone systems allow drivers to make emergency calls, map navigation lets them drive safely to their destination, and a music player provides a pleasant journey. Therefore, we selected the most common controls that are recommended for use while driving but which can distract drivers and often force them to remove one hand from the steering wheel. Our aim is to allow drivers to do these tasks while reducing driving risks.

4.4. Procedure

The elicitation procedure contained four stages for each participant. All 18 participants were video recorded throughout the study, and extensive notes were taken. The whole process was completed within 45 min.

4.4.1. Introduction and Driving Practice

At the beginning, the participants were introduced to the driving simulator setup and were asked to complete a short online questionnaire to collect demographic and prior-experience data. In the practice session, all 18 participants were given a maximum of 5 min to familiarize themselves with the driving simulator and to explore how to use the controls to drive the virtual car. We used an expressway in a city setting with zero percent vehicle traffic in the practice session. During this time, they were encouraged to practice the most common driving scenarios, such as urban, suburban, and motorway driving, and parking lot maneuvers.

4.4.2. Pre-Elicitation

In this stage, participants were informed of the aim of the elicitation study and we primed [38] them with a short video of the potential possibilities of using our wrist-worn prototype. In this two-minute video, we demonstrated all the supported wrist and touch gestures of our proposed interface, including the ways of using the soft buttons. Three types of locations, on the steering wheel, off the steering wheel, and on the gear lever, were shown and explained to the participants. Some generic ways of using the soft buttons in different locations were illustrated through examples. We encouraged participants to ask for clarification about gestures and in-vehicle locations (for example, about performing similar wrist or touch gestures in different locations). We informed them that using the buttons was not compulsory and that they could use them based on their preference. Only after priming was a wrist interface given to them. We had three different sizes of the black fabric-based wrist interface, suitable for both hands. We did not restrict participants on which hand they wore the band; they could choose to put it on either hand.
The 16 commands were also presented to the participants on a printed A4 sheet for reference. We informed participants that the commands would be verbally introduced one by one and asked them to perform three different gestures for each while driving the virtual car.

4.4.3. Elicitation

In this elicitation stage, the participants were asked to drive the virtual car in the simulation environment. They were asked to follow all driving safety regulations (such as keeping a safe distance from other vehicles) and obey all traffic rules. A researcher would ask them to pull the car over to introduce a command. All 16 commands were verbally introduced and always presented in the same order to every participant. For each command, participants were given a minute to think aloud of three different gestures and were then told to perform the three gestures of their choice while driving the car (see Figure 2b,c). We set this one-minute time limit based on the results of initial pilot studies. Participants were also instructed to pick their preferred gesture for each command after producing the three gestures. For each command, the experimenter wrote down the gesture codes of the three performed gestures and the locations where they were performed. For a greater understanding of their thought process, the researchers asked participants to say a few words about their preferred gesture for each command while not driving the car.

4.4.4. Semi-Structured Interview

At the end of the elicitation stage, we held a short semi-structured interview with each participant to elicit feedback about their experience with the fabric-based wrist-worn interface, including their opinions and any difficulties encountered while driving. Almost all participants were enthusiastic to provide their opinions. Oral feedback was also encouraged throughout the study while participants were using wrist gestures for in-vehicle interactions.

5. Results

Our 18 participants proposed a total of 864 gestures for the chosen 16 commands (18 participants × 16 commands × 3 gestures). First, we grouped and tallied the gestures based on the predefined gesture codes and the locations where they were performed, which produced a percentage score for each gesture. Using the formulas proposed in [41,42], we calculated the agreement rate for each command and the agreement shared between users with different driving experience. We present our results in the following subsections.

5.1. Classification of In-Vehicle Wrist Gestures

We organized all elicited gestures into eight different types of distinct gestures: flexion, extension, ulnar deviation, radial deviation, clockwise circumduction, anti-clockwise circumduction, tap and hold. As mentioned above, these gestures were performed on three different locations. On-steering wheel gestures were performed either while resting the hand or by simply tapping or pressing the foam button on the wrist joint on the surface of the wheel. Gestures performed while firmly holding the wheel were also identified as on-steering wheel gestures. Off-steering wheel gestures were mid-air gestures performed on the sides of the wheel without blocking the user’s face or on the top of a gear control. Gestures performed while resting the wrist on the gear lever were grouped into on gear lever gestures.

5.2. Consensus between the Drivers

Table 2 shows the agreement rate for each of the 16 commands (or referents [42]). We used the agreement method proposed by Vatavu et al. [41] and their agreement analysis application (AGATe: Agreement Analysis Toolkit). Using this formula, we can understand how much agreement is shared between the drivers. The following equation shows their agreement rate formula:
$$AR(r) = \frac{|P|}{|P|-1} \sum_{P_i \subseteq P} \left(\frac{|P_i|}{|P|}\right)^2 - \frac{1}{|P|-1}$$
where “P is the set of all proposals for referent r, |P| the size of the set, and Pi subsets of identical proposals from P” [41].
The participants’ agreement rate (AR) ranged between 0.033 (lowest agreement) and 0.183 (medium agreement), with a mean AR of 0.084. We applied Wobbrock et al.’s [42] coagreement rate (CR) formula to understand the agreement shared between two commands r1 and r2. For example, in most cases users chose to perform opposite gestures for directional pairs with similar meanings, such as “Volume up/Volume down”. In our results, “Move up” and “Move down” have almost equal agreement rates (AR for Move up = 0.111, AR for Move down = 0.105). The CR for “Move up” and “Move down” is 0.085, which suggests that opposite gestures were used to perform these two commands.
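As a minimal illustration of how these measures are computed, the Python sketch below implements the agreement rate formula given above and a pairwise coagreement rate in the spirit of Wobbrock et al. [42]; the proposal labels and counts are invented for the example and do not reproduce the study’s data.

```python
from collections import Counter
from itertools import combinations

def agreement_rate(proposals):
    """AR(r) for one referent:
    AR = |P|/(|P|-1) * sum((|Pi|/|P|)^2) - 1/(|P|-1),
    where the Pi are the subsets of identical proposals."""
    n = len(proposals)
    if n < 2:
        return 0.0
    groups = Counter(proposals)                     # sizes of the subsets of identical proposals
    s = sum((size / n) ** 2 for size in groups.values())
    return (n / (n - 1)) * s - 1 / (n - 1)

def coagreement_rate(proposals_r1, proposals_r2):
    """CR(r1, r2): share of participant pairs that agree on both referents."""
    n = len(proposals_r1)
    pairs = list(combinations(range(n), 2))
    agreeing = sum(1 for i, j in pairs
                   if proposals_r1[i] == proposals_r1[j] and proposals_r2[i] == proposals_r2[j])
    return agreeing / len(pairs)

# Hypothetical proposals from six participants for two directional commands.
move_up   = ["extension", "extension", "flexion",   "extension", "tap_side", "extension"]
move_down = ["flexion",   "flexion",   "extension", "flexion",   "tap_side", "flexion"]
print(round(agreement_rate(move_up), 3))                 # 0.4 for this toy data
print(round(coagreement_rate(move_up, move_down), 3))    # 0.4 for this toy data
```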

5.3. Effects of Driving Experience on Agreement Rate

In our study, the 18 participants (eight with more than two years of driving experience) produced a mean AR of 0.071 for drivers with less than two years of experience and 0.087 for the others. We found more agreement among drivers with more than two years of driving experience for the task “Answer the call” (0.250 versus 0.022, Vb(2, N=18) = 30.913, p = 0.078); see Table 2. However, there was no significant difference between drivers with different years of driving experience for any other command. To further understand these differences, we calculated the between-group coagreement rates for each command. For example, coagreement for the “Answer the call” task was CRb = 0.125, showing that only 12.5% of all participants across the two groups agreed about how to answer the call, i.e., by tapping the bottom button on the steering wheel. The reason the other participants disagreed was that while the more experienced drivers preferred to answer the phone call by tapping the bottom button (similar to performing this task on a smartphone), the drivers with less than two years of driving experience preferred more variations, such as performing clockwise circumduction in mid-air or holding the bottom button while moving the palm towards the wrist (downward flexion). These proposals from the drivers who started driving recently indicate a clear influence of driving experience that narrowly eluded significance (p = 0.078) for the “Answer the call” task and displayed the largest effect size (Vb = 30.913) of all 16 commands. A similar effect was observed for hanging up the call, but from another perspective: although the agreement rates of the two groups were similar (0.156 versus 0.143) and the difference was not significant (Vb(2, N=18) = 0.096), the coagreement rate revealed different gesture preferences for the two groups (CRb = 0.150). On the other hand, more experienced drivers were not able to reach agreement for the commands with directional mappings, such as “Move right” and “Move left”, for which agreement rates were also similar. We also found that for state toggles, such as “Play/Resume” and “Pause”, there were no significant differences between the agreement of the two groups. Coagreement was between 0.088 and 0.188, which shows the need for specific gesture designs for those commands, regardless of users’ driving experience.

5.4. Wrist Movements and Touch Gestures

To find out which wrist movements and touch gestures were elicited by our drivers, we assembled all the preferred wrist and touch gestures proposed for each command into eight categories supported by the design of the wrist prototype.
Figure 3 presents the distribution of both wrist and touch gestures for each command. Flexion and extension movements were involved in nearly 56% of all gestures across the three locations; 15.22% of flexion movements were performed on the steering wheel, while 9.24% were performed off the wheel. Similarly, 19.57% of the gestures involved ulnar and radial deviations and were performed in all three locations. Although circumduction was not suitable while users were holding the steering wheel, 24.46% of the elicited wrist gestures were circumduction movements. They were highly preferred for commands that required setting a specific value within a continuous range. In particular, flexion was highly preferred for commands resembling continuous downward actions, such as “Move down” or “Volume down”, while extension was preferred for the opposite actions, such as “Move up” and “Volume up”. Similarly, ulnar and radial deviations were preferred for actions which required precise continuous control (for example, “Next song” and “Previous song”). Unsurprisingly, circumduction was preferred for “Zoom-in/Zoom-out” types of commands. Some participants preferred anti-clockwise circumduction for “Ignore the call” command types. In particular, one participant (P7) mentioned: “I needed to think for a few seconds before ignoring the call; thus, I wanted to use it [circumduction] to delay the action to think a bit before ignoring the call”.
Only 24.31% (Tap: 19.10%; Hold: 5.21%) of gestures were performed using touch gestures. Tap gestures were preferred for state toggles, such as “Pause”. This suggests the influence of prior experience. Additionally, touch gestures were reserved as gesture delimiters.

5.5. In-Vehicle Consensus Gesture Set

We isolated the 288 preferred gestures (18 participants × 16 commands) from the original 864 gestures. Fifty-four unique gestures were used to perform these 288 preferred gestures for the 16 commands. To create a consensus gesture set, we grouped the gestures performed by at least three participants for each command. This led to 10 unique gestures covering 15 commands; one command (T4: Ignore the call) did not satisfy the consensus threshold. Similarly, seven commands had multiple consensus gestures. For example, T6 (Move right) was performed using rightward extension and rightward ulnar deviation by three participants each. To resolve such conflicts, we created two more gesture sets comprising unique gestures (a) performed by at least three participants and (b) performed for at least three commands, and chose the gestures which achieved the given consensus threshold (at least three participants or commands) in all three sets. This led to 10 unique gestures (see Figure 4), which represent 163 of the 288 user-preferred gestures, or 56%.
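The consensus-set construction described above amounts to simple thresholding over the preferred proposals. The sketch below is a hypothetical illustration with made-up proposal lists; the threshold value mirrors the “at least three participants or commands” criterion, but the command and gesture names are placeholders.

```python
from collections import Counter, defaultdict

CONSENSUS_THRESHOLD = 3   # minimum number of supporting participants (or commands)

def consensus_by_command(preferred):
    """preferred: dict mapping command -> list of preferred gesture codes (one per participant).
    Returns, per command, the gestures proposed by at least CONSENSUS_THRESHOLD participants."""
    consensus = {}
    for command, gestures in preferred.items():
        counts = Counter(gestures)
        winners = [g for g, c in counts.items() if c >= CONSENSUS_THRESHOLD]
        if winners:
            consensus[command] = winners
    return consensus

def gestures_spanning_commands(preferred):
    """Unique gestures that appear as a preferred proposal for at least CONSENSUS_THRESHOLD commands."""
    commands_per_gesture = defaultdict(set)
    for command, gestures in preferred.items():
        for g in set(gestures):
            commands_per_gesture[g].add(command)
    return {g for g, cmds in commands_per_gesture.items() if len(cmds) >= CONSENSUS_THRESHOLD}

# Hypothetical preferred proposals for three commands, six participants each.
preferred = {
    "Move up":   ["extension"] * 4 + ["tap_side", "cw_circumduction"],
    "Volume up": ["extension"] * 3 + ["tap_bottom"] * 2 + ["radial_deviation"],
    "Zoom in":   ["cw_circumduction"] * 5 + ["extension"],
}
print(consensus_by_command(preferred))       # gestures meeting the per-command threshold
print(gestures_spanning_commands(preferred)) # gestures preferred across enough commands
```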

5.6. Taxonomy of Wrist and Touch Gestures

To further understand our 18 participants’ gesture proposals for in-vehicle interactions, we consider the following four dimensions of analysis. We adopted or modified these dimensions from previous studies [36,43,44,45] and grouped them by the specifics of both wrist and touch gestures (a data-structure sketch of these dimensions follows the list):
  • Complexity (Figure 5a) identifies a proposed gesture as either (a) simple or (b) complex. Simple gestures are performed using only one action, a wrist or a touch gesture; for example, moving the wrist downwards toward the palm to perform downward flexion, or using a soft foam button to tap on the steering wheel, are simple gestures. Complex gestures are combinations of two distinct gestures, e.g., tapping one of the buttons followed by moving the wrist downwards toward the palm. We adopted this dimension from Reference [43].
  • Locale (Figure 5b) indicates the location inside the vehicle where the wrist and touch gestures were performed: (a) on steering wheel, (b) off steering wheel, and (c) on gear lever. We adopted and modified this measure from Reference [44]. For example, mid-air gestures were performed immediately off the steering wheel and also on top of the gear control. Similarly, touch gestures were performed on the steering wheel and also on the gear lever.
  • Structure (Figure 6) distinguishes the relative importance of the wrist and touch gestures in the elicitation of in-vehicle gestures, with five categories: (a) wrist, (b) touch (bottom button), (c) touch (side button), (d) touch (bottom button) and wrist, and (e) touch (side button) and wrist. For example, in the touch (bottom button) category, the tap or hold gesture was performed using the bottom button. The touch (bottom button) and wrist category includes any wrist gesture performed after either tapping or holding the bottom button. We modified this category from the taxonomy of Vatavu and Pentiuc [45].
  • Action (Figure 7) classifies the gestures based on their actions rather than their semantic meaning with six categories: (a) scroll, (b) swipe, (c) circle, (d) tap, (e) hold, and (f) compound. We adopted and modified this classification from Chan et al. [36], who used these dimensions to define user designed single-hand micro gestures without any specific domains. For example, downward flexion and upward extension were grouped as scrolls while leftward flexion and rightward extension were grouped as swipes.
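The following sketch makes the four dimensions concrete by tagging a single gesture proposal with one value per dimension; the category labels come from the list above, while the class and field names are our own illustration rather than the paper’s coding scheme.

```python
from dataclasses import dataclass
from typing import Literal

Complexity = Literal["simple", "complex"]
Locale     = Literal["on steering wheel", "off steering wheel", "on gear lever"]
Structure  = Literal["wrist", "touch (bottom button)", "touch (side button)",
                     "touch (bottom button) and wrist", "touch (side button) and wrist"]
Action     = Literal["scroll", "swipe", "circle", "tap", "hold", "compound"]

@dataclass
class TaxonomyEntry:
    gesture: str            # free-text description of the elicited gesture
    complexity: Complexity
    locale: Locale
    structure: Structure
    action: Action

# Example: tapping the bottom button and then flexing the wrist downward on the steering wheel.
entry = TaxonomyEntry(
    gesture="tap bottom button, then downward flexion",
    complexity="complex",                          # two distinct gestures combined
    locale="on steering wheel",
    structure="touch (bottom button) and wrist",
    action="compound",
)
print(entry)
```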
Simple gestures were highly preferred for the commands in all three categories. Interestingly, the drivers preferred to distinguish commands using the two kinds of simple gestures (touch and wrist): 45.84% of touch gestures (compared to 41.66% of wrist gestures) were preferred for the commands in the phone category, while wrist gestures (45.37% compared to 25%) were reserved for the music player. Unsurprisingly, 32.41% of wrist gestures were performed off the steering wheel for the commands in the music player category. Complex gestures (map navigation: 43.52%; music player: 29.63%; phone: 12.50%) were most preferred for the commands in the map navigation category. In particular, the bottom button was preferred for performing both simple (phone: 29.17% vs. 16.67%) and complex gestures (map navigation: 37.96% compared to 5.56%). Only 7.99% of gestures were performed on the gear lever.
64.81% of gestures were performed on the steering wheel; of these, nearly 3.5% were performed while holding the steering wheel. 27.78% of the gestures were mid-air gestures performed in very close proximity to the steering wheel, while nearly 8% were performed on the gear lever. These findings align with the ISO 3958 (https://www.iso.org/standard/9613.html) standard for “Passenger cars-Driver hand-control reach”, which is relevant for new in-vehicle interactions. These results were confirmed by a one-way ANOVA test. We found a significant effect for complexity (F(2,13) = 13.977, p = 0.001) and location (F(2,13) = 6.705, p = 0.010). A Tukey post hoc test revealed that participants preferred simple gestures for the tasks in the phone category and complex gestures for the map navigation tasks. Similarly, the on-steering wheel location was preferred for the phone and map navigation tasks, while the off-steering wheel location was used for music player tasks. However, there was no significant effect for structure (F(2,13) = 0.347, p = 0.713).
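For readers who want to reproduce this kind of analysis, the sketch below shows how a one-way ANOVA over per-command gesture proportions grouped by command category could be run with SciPy; the proportions are invented placeholders, so the resulting F and p values will not match those reported above.

```python
from scipy import stats

# Hypothetical per-command proportions of simple gestures, grouped by command category.
phone          = [0.72, 0.68, 0.75, 0.70, 0.66, 0.71]   # 6 phone commands
map_navigation = [0.45, 0.50, 0.42, 0.48, 0.46]         # 5 map navigation commands
music_player   = [0.60, 0.58, 0.63, 0.61, 0.59]         # 5 music player commands

# One-way ANOVA across the three command categories; with 16 commands in 3 groups,
# the degrees of freedom are 2 and 13, as in the results reported above.
f_value, p_value = stats.f_oneway(phone, map_navigation, music_player)
print(f"F(2,13) = {f_value:.3f}, p = {p_value:.3f}")
```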
Compounds (16 of 16 commands) were the most preferred of the six action types. Participants gave two key reasons for their popularity: (1) compounds helped them avoid triggering a gesture by accident, and (2) they were safe to perform because drivers did not have to move a hand away from the steering wheel. All 18 drivers mentioned that the buttons gave them more interaction options, which allowed them to perform combinations of gestures; accordingly, they assigned different functions to each button and combined the two buttons with wrist gestures.
Scrolls (14 of 16 commands) were the next most frequently used, appearing when commands elicited vertical movements, such as moving up or down. Scrolls were also used for commands that required setting a specific value within a continuous range, such as volume up and down. Sixteen participants mentioned that these gestures were figurative and resembled real-world scenarios, such as picking up a phone to answer a call. All 18 drivers mentioned that these actions were simple and easy to perform while steering the car. One participant (P4) mentioned that he “preferred to use Scrolls as driving requires more focus and the interactions should be very simple to perform with limited effort”.
Circles (13 of 16 commands) were performed for commands which required more precise continuous control, such as “Zoom-in” and “Zoom-out”. Taps (11 of 16 commands) were performed for selection types of interaction. Participants wanted to use this type of action for commands needing a quick and very short reaction time, such as playing or pausing the music player and answering or hanging up a phone call. Almost all participants preferred this type because of the influence of existing touch-based devices (like smartphones). Interestingly, some participants preferred to use Taps to perform moving left and right, using a different button for each command.
Swipe actions appeared in nine commands, especially commands which elicited horizontal, lateral movements, such as moving left or playing the next song. Of all elicited gestures, only 5.21% were Hold gestures; nine participants preferred to use Holds to perform Compound actions. Figure 6 shows the distribution of the action types of the user-preferred gestures for each command. A one-way ANOVA test revealed a statistically significant effect on action (F(2,13) = 4.468, p = 0.033), and users preferred to use Tap for the tasks related to phone actions. This shows a clear influence of previous experience with touchscreen devices transferred to in-vehicle interactions.

5.7. Participants’ Feedback

We held a semi-structured interview with each participant immediately after the elicitation stage. None of our 18 participants reported that performing wrist gestures with the fabric-based interface affected their focus on the main task of driving. All participants expressed their interest in using a fabric-based interface for in-vehicle interactions when one becomes available. They further mentioned that simple wrist gestures were convenient for commands such as “Move up/Move down” and that the soft buttons made complex commands such as “Zoom-in/Zoom-out” easier to perform while steering the car. All of our participants preferred a fabric-based interface with at least one button. They were satisfied with the current positions of the buttons; in particular, three participants would have preferred one more button under the bottom button. All participants commented, however, that performing wrist gestures in roundabouts considerably affected their driving performance. A few drivers expressed concerns about whether the wrist interface could absorb sweat when used for a long time. Finally, all female drivers opted for a colorful, fancier wrist interface.

6. Discussion

As mentioned in the results section, the elicited wrist gestures for our specific set of in-vehicle commands achieved a low agreement rate, which is not unusual for this type of user-defined interaction [37,46,47]. We see five possible explanations for this outcome: (1) our method of defining the possible functions of the wrist prototype and our experiment design; (2) the large degrees of freedom afforded by the wrist and touch gestures; (3) the novelty of this type of smart fabric-based wrist interface; (4) participants’ preference for keeping distinct gestures for different commands; and (5) our way of defining wrist and touch gestures. Regarding the method of defining our gestures, we identified the possible gestures using our prototype. As reported in a prior elicitation study [37], the criteria used to group gestures can influence the magnitude of agreement rates. Since our goal is to identify implementable gestures for in-vehicle interactions, we identified and grouped the gestures based on their distinct wrist movements. Similarly, we showed all the possible combinations of gestures using our prototype in the video and set a one-minute time limit (1 min × 16 commands = 16 min) for all 18 participants to think of three distinct gestures for each command. Interestingly, all 18 participants thought of their preferred gestures within the first 30 seconds and often picked their gestures for the next pair of commands; therefore, for nearly half of the commands participants performed their gestures immediately. A previous study [48] reported that increased thinking time leads to lower agreement rates. However, our participants took far less time to think of three different gestures, yet lower agreement rates still resulted. This is because all participants always had a gesture in mind for the next command, for two main reasons: (1) most of the commands were directional pairs, and (2) the commands were always presented in the same order to all the participants. This approach led participants to propose related gestures, as they were able to identify the relationships between commands. It may also be that, because we introduced all the tasks verbally, our participants thought of the gestures in different ways. Another possible explanation for the low agreement rate is that we allowed users to reuse the same gesture for more than one command, and our participants preferred to make use of the device to define and customize their own gestures. We observed this customization as our participants exploited the affordance of the device to define their own gestures.
Unlike previous in-vehicle elicitation studies (e.g., [22,24]), our participants performed three gestures for each command while steering the vehicle in the simulation environment. Despite performing gestures while driving, all participants remembered their preferred gesture and the order of their three proposed gestures for all 16 commands. This is mainly because we asked the participants to think of the three different gestures while not driving and to perform them while steering the car. Similarly, none of our 18 drivers deviated from the lane or collided with any other vehicle while performing the gestures, because we asked them not to exceed the allowed 65 km/h speed limit and to strictly follow the driving regulations. As in previous studies (e.g., [22,36]), participants were influenced by legacy bias, as they preferred similar gestures for state toggles.
In the next sections, we discuss our drivers’ preferred gestures in more detail and extrapolate design recommendations for wrist and touch gestures for in-vehicle interactions. We also provide the following set of proposals for further investigation of the use of fabric-based interfaces for in-vehicle interactions.

6.1. Design Recommendations for Fabric-Based Wrist Interfaces

6.1.1. Simple Gestures Were Preferred over Complex

Of all proposed gestures, the touch (24.31%) and wrist (45.14%) gestures were “simple” as defined in our gesture taxonomy, i.e., gestures performed using only one action. Our participants chose simple gestures for most of the commands, as they preferred gestures that complement the primary driving task. Additionally, we found that complex gestures were less preferred for phone activities (12.50%) than for music player (29.63%) and navigation commands (43.52%), showing a clear preference for simple gestures for in-vehicle interactions.

6.1.2. Reflect Legacy Inspired “Simple Taps” for State Toggles

We found that 24.31% of the gestures (19.10% Tap and 5.21% Hold) were touch gestures. Users preferred these gestures to switch between two states, such as “Play” and “Pause” (Figure 4g,h); “Pause”, for example, was highly influenced by the touch metaphor of touchscreens. Users preferred the soft foam buttons for toggling between states as they wanted to perform quick actions in a relatively short time. Most participants preferred to use identical gestures for state toggle operations such as “Play/Pause”. These findings align with previous elicitation studies outside of in-vehicle interactions [36,49]. As such, we suggest that for in-vehicle gestures designers also need to consider existing metaphors related to the nature of the commands.

6.1.3. Users Prefer “Simple Wrist Gestures” for Directional Pairs

Our results showed that simple wrist gestures were performed for 45.37% of the commands in the music player category and 41.67% of the commands in the phone category. We found that our users applied real-life natural actions to associate wrist gestures with commands. For example, P3 applied the real-life metaphor of pushing something down by moving the palm downward towards the wrist (downward flexion, Figure 4a) for “Volume down”. Similarly, P7 preferred the clockwise circumduction gesture to bring up a more detailed view of the map, and anti-clockwise circumduction to visualize a larger view of the map. This association of natural, intuitive gestures produced more dichotomous sets, as users tended to pair related commands. For example, upward extension and downward flexion (Figure 4a,b) were preferred for “Move up/Move down” and “Volume up/Volume down”, while rightward ulnar deviation and leftward radial deviation (Figure 4d,e) were performed for commands such as “Move right/Move left” and “Next song/Previous song”. Correspondingly, anti-clockwise circumduction (Figure 4f) was preferred for “Zoom-out”. These related commands used essentially similar gestures but in opposite directions. We recommend that designers associate wrist gestures to create an instinctive mapping between movement and action, e.g., performing clockwise circumduction replicates the physical action of turning a knob with precision. In addition, designers should aim to capture all possible wrist movements, and the sensors should not interfere with the natural wrist movements needed to perform interactions while driving the car.

6.1.4. Consider Similar Gestures for Similar Commands

Prior studies reported that users perform the same gestures in many different ways for multi-touch interactions [36,50]. By default, our prototype allows users to perform the same gesture in various ways, for instance, moving the palm towards the wrist in mid-air or while resting on the steering wheel (see Figure 4a,i above). Our participants proposed variations of the same gestures for similar commands, such as “Move up” and “Volume up” (see Figure 4b,j), thereby resulting in a small gesture set. This approach minimizes the number of gestures to be learned and remembered [51]. Our finding aligns with the heuristics for gestural interaction in vehicles proposed by Gable et al. [52]. We further recommend that designers consider mapping similar gestures to similar commands but distinguish them by the location in which they are performed. This approach can minimize the effort to learn and remember many gestures, as performing secondary tasks needs to complement the primary task of steering the car.

6.1.5. Design That Leverages the Synergy between Gestures and In-Vehicle Locations

The in-vehicle literature recommends some alternative interactions, such as tapping and swiping on the surface (e.g., [1]) or in the middle (e.g., [16]) of the steering wheel. With a fabric-based interface, our participants proposed gestures that not only used these previous gesture locations but went further. We found that our users performed gestures in relation to different locations, such as placing the wrist on top of the steering wheel (Figure 4i,j), on its side (Figure 4g,h), and on top of the gear lever. Most of the mid-air gestures were performed at the sides of the steering wheel (very close to the recommended 9 and 3 o’clock positions (https://www.nhtsa.gov/)) without blocking the user’s front view of the windshield. Prior research showed the benefits of applying users’ proprioceptive sense for eyes-free interaction with wearable wrist interfaces [30]. We recommend further investigation of combining users’ proprioceptive sense and in-vehicle locations to shift towards location-based interactions, where the wearable interface can be combined with other in-vehicle sensors.

6.1.6. Favor Stretchable Fabric with Fingerless Thumb-Hole Design

We co-designed a fingerless palm wrist-band (made of cotton and Lycra) with a thumb-hole design as the physical interface. The fingerless, thumb-hole design allowed drivers to retain full control of their fingers and to hold the steering wheel securely. All 18 of our users felt comfortable wearing the palm wrist interface while steering the car. We suggest that for clothing-based interfaces to be practical and usable, they need to be thin, lightweight, and highly stretchable, while also offering comfort and breathability.

6.1.7. Consider Side Button for Gesture Delimiter

The bottom button (36.11% vs. 18.75%) was preferred over the side button for performing both simple (14.93% vs. 9.38%) and complex (21.18% vs. 9.38%) gestures (see Figure 4g,i,j above). Participants gave two key reasons why they preferred the bottom button on the wrist interface: (1) it was convenient to use while steering the car, and (2) it was easier to use because they did not have to move their hand from the steering wheel. Three drivers highlighted that the bottom button is also convenient to use on the gear lever. Based on our observations, the button placed on the posterior side of the forearm can instead serve as a gesture delimiter.

6.2. Limitations

We used a non-functional fabric-based wrist-worn prototype. Despite the absence of interactive capabilities, we were still able to understand users’ behavior and responses to it as an input interface for in-vehicle secondary interactions. It was apparent that users were highly influenced by existing interaction technologies, such as touch phones and displays. We used only a driving simulation setup to elicit these gestures, so that participants were in a safe environment. All our participants were from right-hand traffic countries; thus, we simulated driving conditions in a left-hand drive passenger car. Even though we offered the fabric-based prototype for both hands, none of our participants opted to use the interface on their left hand, which restricts our gesture set to right-handed users. Similarly, our participants were students (mean age of ~23 years) with limited years of driving experience. It would be useful to investigate how the gestures change for a different population.

7. Conclusions

In this paper, we presented the results of a study investigating the use of a textile-based wrist interface that allows both gestural and touch input for in-vehicle interactions. We involved end users in the design process of the wrist interface. To further explore the design space of fabric-based interfaces as an alternative approach to support users’ secondary actions while driving, we conducted a user-elicitation study with a wrist-based non-functional artifact. By integrating soft foam buttons, users were able to perform touch gestures while steering the car. Eighteen end users, all with driving experience, were involved in eliciting in-vehicle gestures. We followed a methodology for eliciting gestures using a fabric-based interface and presented a taxonomy of wrist and touch gestures and a collection of in-vehicle gesture types. We also described a set of design recommendations and suggestions for further research. Our results suggest that in-vehicle interactions using a fabric-based interface are simple, natural, intuitive, and convenient to perform while steering the car. Our results on users’ input preferences can also inform the deployment of sensors to detect wrist movements accurately. We believe that our investigation of user-driven interface development will be useful for designers producing textile-based wearable interfaces for in-vehicle interactions. Our future work will focus on validating our user-driven in-vehicle input methods in a controlled driving setup that can sense all gestural and touch input actions listed in our gesture set.

Author Contributions

Conceptualization, V.N., H.-N.L., and K.K.-T.L.; methodology, V.N., H.-N.L., and K.K.-T.L.; software, R.S. and V.N.; validation, V.N. and H.-N.L.; formal analysis, V.N., R.S., and H.-N.L.; investigation, R.S. and V.N.; resources, H.-N.L., Y.Y., and K.K.-T.L.; data curation, R.S. and V.N.; writing—original draft preparation, V.N., R.S., and H.-N.L.; writing—review and editing, V.N., R.S., H.-N.L., Y.Y., K.A., and K.K.-T.L.; visualization, V.N. and R.S.; supervision, H.-N.L., K.K.-T.L., Y.Y., and K.A.; project administration, H.-N.L.; funding acquisition, H.-N.L.

Funding

This research was funded by Xi’an Jiaotong-Liverpool University (XJTLU) Key Program Special Fund (#KSF-A-03) and XJTLU Research Development Fund (#RDF-13-02-19).

Acknowledgments

We thank all the volunteers who participated in the experiment for their time. We also thank the reviewers for their comments and suggestions that have helped to improve our paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pfleging, B.; Rang, M.; Broy, N. Investigating user needs for non-driving-related activities during automated driving. In Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia, MUM ’16, Rovaniemi, Finland, 13–15 December 2016; pp. 91–99. [Google Scholar]
  2. May, K.R.; Gable, T.M.; Walker, B.N. A multimodal air gesture interface for in vehicle menu navigation. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Seattle, WA, USA, 17–19 September 2014; pp. 1–6. [Google Scholar]
  3. Tsimhoni, O.; Green, P. Visual demand of driving and the execution of display-intensive in-vehicle tasks. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2001, 45, 1586–1590. [Google Scholar] [CrossRef]
  4. Normark, C.J.; Tretten, P.; Gärling, A. Do redundant head-up and head-down display configurations cause distractions? In Proceedings of the Fifth International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, Big Sky, MT, USA, 22–25 June 2009; pp. 398–404. [Google Scholar]
  5. González, I.E.; Wobbrock, J.O.; Chau, D.H.; Faulring, A.; Myers, B.A. Eyes on the road, hands on the wheel: thumb-based interaction techniques for input on steering wheels. In Proceedings of the Graphics Interface 2007, Montreal, QC, Canada, 28–30 May 2007; pp. 95–102. [Google Scholar]
  6. Bach, K.M.; Jæger, M.G.; Skov, M.B.; Thomassen, N.G. You can touch, but you can’t look: Interacting with In-Vehicle Systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Florence, Italy, 5–10 April 2008; p. 1139. [Google Scholar]
  7. Döring, T.; Kern, D.; Marshall, P.; Pfeiffer, M.; Schöning, J.; Gruhn, V.; Schmidt, A. Gestural interaction on the steering wheel – Reducing the visual demand. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; p. 483. [Google Scholar]
  8. Koyama, S.; Sugiura, Y.; Ogata, M.; Withana, A.; Uema, Y.; Honda, M.; Yoshizu, S.; Sannomiya, C.; Nawa, K.; Inami, M. Multi-touch steering wheel for in-car tertiary applications using infrared sensors. In Proceedings of the 5th Augmented Human International Conference, Kobe, Japan, 7–9 March 2014; pp. 1–4. [Google Scholar]
  9. Pfeiffer, M.; Kern, D.; Schöning, J.; Döring, T.; Krüger, A.; Schmidt, A. A multi-touch enabled steering wheel — Exploring the design space. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; pp. 3355–3360. [Google Scholar]
  10. Werner, S. The steering wheel as a touch interface: using thumb-based gesture interfaces as control inputs while driving. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI ’14, Seattle, WA, USA, 17–19 September 2014; pp. 9–12. [Google Scholar]
  11. Hessan, J.F.; Zancanaro, M.; Kavakli, M.; Billinghurst, M. Towards Optimization of Mid-air Gestures for In-vehicle Interactions. In Proceedings of the 29th Australian Conference on Computer-Human Interaction, Brisbane, Australia, 28 November–1 December 2017; pp. 126–134. [Google Scholar]
  12. Riener, A.; Wintersberger, P. Natural, intuitive finger based input as substitution for traditional vehicle control. In Proceedings of the 3rd International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Salzburg, Austria, 30 November–2 December 2011; p. 159. [Google Scholar]
  13. Pfleging, B.; Schneegass, S.; Schmidt, A. Multimodal interaction in the car– Combining speech and gestures on the steering wheel. In Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Portsmouth, NH, USA, 17–19 October 2012; p. 155. [Google Scholar]
  14. Stoppa, M.; Chiolerio, A. Wearable electronics and smart textiles: A critical review. Sensors 2014, 14, 11957–11992. [Google Scholar] [CrossRef] [PubMed]
  15. Parzer, P.; Sharma, A.; Vogl, A.; Steimle, J.; Olwal, A.; Haller, M. SmartSleeve: Real-time sensing of surface and deformation gestures on flexible, interactive textiles, using a Hybrid gesture detection Pipeline. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, Quebec City, QC, Canada, 22–25 October 2017; pp. 565–577. [Google Scholar]
  16. Schneegass, S.; Voit, A. GestureSleeve: Using touch sensitive fabrics for gestural input on the forearm for controlling smartwatches. In Proceedings of the 2016 ACM International Symposium on Wearable Computers, Heidelberg, Germany, 12–16 September 2016; pp. 108–115. [Google Scholar]
  17. Yoon, S.H.; Huo, K.; Nguyen, V.P.; Ramani, K. TIMMi: Finger-worn textile input device with multimodal sensing in mobile interaction. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction, Stanford, CA, USA, 16–19 January 2015; pp. 269–272. [Google Scholar]
  18. Strohmeier, P.; Knibbe, J.; Boring, S.; Hornbæk, K. zPatch: Hybrid Resistive/Capacitive eTextile Input. In Proceedings of the Twelfth International Conference on Tangible, Embedded, and Embodied Interaction, Stockholm, Sweden, 18–21 March 2018; pp. 188–198. [Google Scholar]
  19. Yoon, S.H.; Huo, K.; Ramani, K. Plex: Finger-Worn textile sensor for mobile interaction during activities. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Seattle, WA, USA, 13–17 September 2014; pp. 191–194. [Google Scholar]
  20. Endres, C.; Schwartz, T.; Müller, C. Geremin’: 2D microgestures for drivers based on electric field sensing. In Proceedings of the 16th International Conference on Intelligent User Interfaces, Palo Alto, CA, USA, 13–16 February 2011; pp. 327–330. [Google Scholar]
  21. Riener, A. Gestural interaction in vehicular applications. Computer 2012, 45, 42–47. [Google Scholar] [CrossRef]
  22. Angelini, L.; Carrino, F.; Carrino, S.; Caon, M.; Khaled, O.A.; Baumgartner, J.; Sonderegger, A.; Lalanne, D.; Mugellini, E. Gesturing on the steering wheel: A user-elicited taxonomy. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Seattle, WA, USA, 17–19 September 2014; pp. 1–8. [Google Scholar]
  23. Huber, J.; Sheik-Nainar, M.; Matic, N. Force-enabled touch input on the steering wheel: An elicitation study. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct, AutomotiveUI ’17, Oldenburg, Germany, 24–27 September 2017; pp. 168–172. [Google Scholar]
  24. May, K.R.; Gable, T.M.; Walker, B.N. Designing an in-vehicle air gesture set using elicitation methods. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI ’17, Oldenburg, Germany, 24–27 September 2017; pp. 74–83. [Google Scholar]
  25. Riener, A.; Ferscha, A.; Bachmair, F.; Hagmüller, P.; Lemme, A.; Muttenthaler, D.; Pühringer, D.; Rogner, H.; Tappe, A.; Weger, F. Standardization of the in-car gesture interaction space. In Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications - AutomotiveUI ’13, Eindhoven, The Netherlands, 27–30 October 2013; pp. 14–21. [Google Scholar]
  26. Horswill, M.S.; McKenna, F.P. The effect of interference on dynamic risk-taking judgments. Br. J. Psychol. 1999, 90, 189–199. [Google Scholar] [CrossRef]
  27. Wigdor, D.; Balakrishnan, R. TiltText: Using tilt for text input to mobile phones. In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, Vancouver, BC, Canada, 2–5 November 2003; pp. 81–90. [Google Scholar]
  28. Gong, J.; Yang, X.-D.; Irani, P. WristWhirl: One-handed continuous smartwatch input using wrist gestures. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, Tokyo, Japan, 16–19 October 2016; pp. 861–872. [Google Scholar]
  29. Cheung, V.; Eady, A.K.; Girouard, A. Exploring Eyes-free Interaction with Wrist-Worn Deformable Materials. In Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, Yokohama, Japan, 20–23 March 2017; pp. 521–528. [Google Scholar]
  30. Lopes, P.; Ion, A.; Mueller, W.; Hoffmann, D.; Jonell, P.; Baudisch, P. Proprioceptive interaction. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, Seoul, Korea, 18–23 April 2015; pp. 939–948. [Google Scholar]
  31. Crossan, A.; Williamson, J.; Brewster, S.; Murray-Smith, R. Wrist rotation for interaction in mobile contexts. In Proceedings of the 10th International Conference on Human Computer Interaction with Mobile Devices and Services, Amsterdam, The Netherlands, 2–5 September 2008; p. 435. [Google Scholar]
  32. Strohmeier, P.; Vertegaal, R.; Girouard, A. With a flick of the wrist: Stretch sensors as lightweight input for mobile devices. In Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction, Kingston, ON, Canada, 19–22 February 2012; p. 307. [Google Scholar]
  33. Iwasaki, S.; Sakaguchi, S.; Abe, M.; Matsushita, M. Cloth switch: Configurable touch switch wearable device made with cloth. In Proceedings of the SIGGRAPH Asia 2015 Posters, Kobe, Japan, 2–6 November 2015; p. 22. [Google Scholar]
  34. Green, P. Visual and Task Demands of Driver Information Systems; UMTRI Technical Report 98-16; The University of Michigan Transportation Research Institute: Ann Arbor, MI, USA, June 1999; p. 120. [Google Scholar]
  35. Pakanen, M.; Lappalainen, T.; Roinesalo, P.; Häkkilä, J. Exploring smart handbag concepts through co-design. In Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia, MUM ’16, Rovaniemi, Finland, 12–15 December 2016; pp. 37–48. [Google Scholar]
  36. Chan, E.; Seyed, T.; Stuerzlinger, W.; Yang, X.-D.; Maurer, F. User elicitation on single-hand microgestures. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; pp. 3403–3414. [Google Scholar]
  37. Gheran, B.-F.; Vanderdonckt, J.; Vatavu, R.-D. Gestures for smart rings: Empirical results, insights, and design implications. In Proceedings of the 2018 Designing Interactive Systems Conference, Hong Kong, China, 9–13 June 2018; pp. 623–635. [Google Scholar]
  38. Morris, M.R.; Danielescu, A.; Drucker, S.; Fisher, D.; Lee, B.; Schraefel, M.C.; Wobbrock, J.O. Reducing legacy bias in gesture elicitation studies. Interactions 2014, 21, 40–45. [Google Scholar] [CrossRef]
  39. Rahman, M.; Gustafson, S.; Irani, P.; Subramanian, S. Tilt techniques: Investigating the dexterity of wrist-based input. In Proceedings of the 27th International Conference on Human Factors in Computing Systems, CHI 09, Boston, MA, USA, 4–9 April 2009; p. 1943. [Google Scholar]
  40. Green, P. Crashes induced by driver information systems and what can be done to reduce them. SAE Tech. Paper 2000, 1, C008. [Google Scholar]
  41. Vatavu, R.-D.; Wobbrock, J.O. Formalizing agreement analysis for elicitation studies: New measures, significance test, and toolkit. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, Seoul, Korea, 18–23 April 2015; pp. 1325–1334. [Google Scholar]
  42. Wobbrock, J.O.; Aung, H.H.; Rothrock, B.; Myers, B.A. Maximizing the guessability of symbolic input. In Proceedings of the CHI 2005 Conference on Human Factors in Computing Systems, Portland, OR, USA, 2–7 April 2005; pp. 1869–1872. [Google Scholar]
  43. Ruiz, J.; Vogel, D. Soft-Constraints to reduce legacy and performance bias to elicit whole-body gestures with low arm fatigue. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, Seoul, Korea, 18–23 April 2015; pp. 3347–3350. [Google Scholar]
  44. Piumsomboon, T.; Clark, A.; Billinghurst, M.; Cockburn, A. User-defined gestures for augmented reality. In Proceedings of the IFIP Conference on Human-Computer Interaction, Cape Town, South Africa, 2–6 September 2013; pp. 282–299. [Google Scholar]
  45. Vatavu, R.D.; Pentiuc, S.G. Multi-Level representation of gesture as command for human computer interaction. Comput. Inf. 2012, 27, 837–851. [Google Scholar]
  46. Liang, H.N.; Williams, C.; Semegen, M.; Stuerzlinger, W.; Irani, P. An investigation of suitable interactions for 3D manipulation of distant objects through a mobile device. Int. J. Innov. Comput. Inf. Control 2013, 9, 4737–4752. [Google Scholar]
  47. Seyed, T.; Burns, C.; Costa Sousa, M.; Maurer, F.; Tang, A. Eliciting usable gestures for multi-display environments. In Proceedings of the 2012 ACM International Conference on Interactive Tabletops and Surfaces, Cambridge/Boston, MA, USA, 11–14 November 2012; pp. 41–50. [Google Scholar]
  48. Gheran, B.-F.; Vatavu, R.-D.; Vanderdonckt, J. Ring x2: Designing gestures for smart rings using temporal calculus. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, Baltimore, MD, USA, 29 October–1 November 2017; pp. 117–122. [Google Scholar]
  49. Morris, M.R. Web on the wall: Insights from a multimodal interaction elicitation study. In Proceedings of the 2012 ACM International Conference on Interactive Tabletops and Surfaces, Cambridge/Boston, MA, USA, 11–14 November 2012; pp. 95–104. [Google Scholar]
  50. Anthony, L.; Vatavu, R.-D.; Wobbrock, J.O. Understanding the consistency of users’ pen and finger stroke gesture articulation. In Proceedings of the Graphics Interface 2013, Regina, SK, Canada, 29–31 May 2013; pp. 87–94. [Google Scholar]
  51. Pickering, C.A.; Burnham, K.J.; Richardson, M.J. A research study of hand gesture recognition technologies and applications for human vehicle interaction. In Proceedings of the 2007 3rd Institution of Engineering and Technology Conference on Automotive Electronics, Warwick, UK, 28–29 June 2007; pp. 1–15. [Google Scholar]
  52. Gable, T.M.; May, K.R.; Walker, B.N. Applying popular usability heuristics to gesture interaction in the vehicle. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Seattle, WA, USA, 17–19 September 2014; pp. 1–7. [Google Scholar]
Figure 1. (a) The clothing-based prototype used in our study for right-handed participants. Two foam buttons (one on the bottom under the wrist and one on the posterior side of the forearm) were glued to a palm wrist-band with a thumb-hole design. (b–d) Sample wrist movements supported by our prototype: (b) Flexion and Extension (vertical movements); (c) Radial and Ulnar Deviation (horizontal movements); (d) Circumduction: clockwise (right) and anti-clockwise (left).
Figure 2. (a) A participant driving on a three-lane highway in the simulator while wearing the wrist-band. The smartphone is mounted at the participant’s preferred location. (b,c) Two camera angles used to record the participant while performing gestures and driving the car.
Figure 3. Frequency distribution of the wrist and touch gestures used for the 16 commands (along the horizontal axis). They are grouped based on the three categories. Tap gestures were used for state toggles (e.g., pause).
Figure 4. Consensus in-vehicle gesture set: (a) downward flexion; (b) upward extension; (c) rightward extension; (d) rightward ulnar deviation; (e) leftward radial deviation; (f) anti-clockwise circumduction; (g) tap bottom button; (h) tap side button; (i) hold bottom button and downward flexion; and (j) hold bottom button and upward extension. Downward flexion was performed by 13 participants across 12 commands; the Move up command (T7) was the one most frequently performed using downward flexion. Gestures are referenced and discussed in the design guidelines section.
Figure 5. Observed percentages of wrist and touch gestures for the commands used in our study: (a) Complexity—simple gestures were highly preferred for the phone category; (b) Locale—on-steering-wheel gestures were highly preferred for all three categories.
Figure 6. Observed percentages of wrist gestures for in-vehicle interactions. The bottom button was highly preferred for both simple and complex gestures. Simple wrist gestures were performed for interacting with the music player, and simple touch gestures were performed for phone activities (showing a clear influence of prior experience).
Figure 7. Frequency distribution of actions used for each command in the preferred gesture set. Scrolls were highly preferred for the move up and move down commands.
Table 1. The 16 commands, grouped into three categories, used in our study. These tasks are among the most frequently performed on a touchscreen device as secondary tasks while driving.
Phone                 | Map Navigation  | Music Player
T1. Unlock the phone  | T5. Move left   | T11. Play/Resume
T2. Answer the call   | T6. Move right  | T12. Pause
T3. Hang up the call  | T7. Move up     | T13. Volume up
T4. Ignore the call   | T8. Move down   | T14. Volume down
                      | T9. Zoom-in     | T15. Next song
                      | T10. Zoom-out   | T16. Previous song
Table 2. Agreement rates (AR) for the 16 commands based on the participants’ driving experience. Commands with the highest AR are highlighted in dark gray while the lowest AR are shown in light gray.
Commands               | AR    | AR (Less than 2 Years) | AR (More than 2 Years) | p
T1. Unlock the phone   | 0.078 | 0.089 | 0.036 | 0.625
T2. Answer the call    | 0.118 | 0.022 | 0.250 | 0.078 1
T3. Hang up the call   | 0.15  | 0.156 | 0.143 | 0.909
T4. Ignore the call    | 0.039 | 0.022 | 0.071 | 0.646
T5. Move left          | 0.039 | 0.044 | 0     | 0.688
T6. Move right         | 0.052 | 0.044 | 0     | 0.688
T7. Move up            | 0.111 | 0.067 | 0.143 | 0.493
T8. Move down          | 0.105 | 0.067 | 0.107 | 0.713
T9. Zoom-in            | 0.046 | 0.022 | 0.036 | 0.898
T10. Zoom-out          | 0.033 | 0.044 | 0.036 | 0.942
T11. Play/Resume       | 0.078 | 0.089 | 0.036 | 0.625
T12. Pause             | 0.183 | 0.2   | 0.143 | 0.603
T13. Volume up         | 0.085 | 0.089 | 0.071 | 0.878
T14. Volume down       | 0.15  | 0.089 | 0.214 | 0.276
T15. Next song         | 0.039 | 0.044 | 0.071 | 0.804
T16. Previous song     | 0.039 | 0.044 | 0.036 | 0.942
1 Narrowly missed statistical significance (p = 0.078).
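
For readers less familiar with the measure, the agreement rates (AR) reported in Table 2 follow, to our reading, the formalization by Vatavu and Wobbrock [41]; the p-values compare the two driving-experience groups, presumably using the significance test proposed in the same work. A restatement of the formula:

```latex
AR(r) = \frac{|P|}{|P| - 1} \sum_{P_i \subseteq P} \left( \frac{|P_i|}{|P|} \right)^{2} - \frac{1}{|P| - 1}
```

Here, P is the set of gesture proposals elicited for a referent (command) r, and the P_i are its subsets of identical proposals. For example, if 18 proposals for a command split into groups of 10, 5, and 3 identical gestures, AR = (18/17)[(10/18)^2 + (5/18)^2 + (3/18)^2] - 1/17 ≈ 0.38.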
