Tactual Articulatory Feedback and Gestural Input

The research presented here investigates the role of the sense of touch in Human–Computer Interaction as a channel for feedback in manipulative processes. The paper discusses how information and feedback generated by the computer can be presented haptically, and focusses on the feedback that supports the articulation of human gesture. A range of experiments is described investigating the use of (redundant) tactual articulatory feedback. The results show that a significant improvement in effectiveness only occurs when the task is sufficiently difficult, while in some other cases the added feedback can actually lower the effectiveness. However, this work is not just about effectiveness and efficiency; it also explores how multimodal feedback can enhance the interaction and make it more pleasurable. Indeed, the qualitative data from this research show a perceived positive effect of the added tactual feedback on the overall experience. The discussion includes suggestions for further research, particularly investigating the effect in free-moving gestures, multiple points of contact, and the use of more sophisticated actuators.

The computer has a wide variety of information to present, such as feedback about processes, information about objects, representations of higher level meaning, and indications of relevant aspects of the system state. Currently, the desktop computer presents this information mainly visually, and in some cases with sounds. This paper describes a research project investigating whether there is an advantage in using tactual feedback as secondary notation or added feedback for the user who is manipulating the virtual objects in the computer, such as widgets, icons and menus. It looks particularly at feedback that supports the articulation of a gesture. A standard mouse input device is used with an added vibrotactile feedback element based on a small loudspeaker, usually placed under the tip of the index finger of the user's preferred hand.

An Information Based Approach
In the context of interactive systems and communication, it is useful to view the world around us (including all the objects and processes) as information. Objects and processes can be physical or mental, and they can be real or virtual - that is, computer generated, synthetic.
Objects inform the attentive perceiver of all sorts of things, from just being there and not to be bumped into, to revealing things about themselves - what they are made of, what one potentially can do with them, etc. The latter are called affordances, informing the perceiver about what can be done with the object and how it can be used (Gibson 1979, Ch. 8).
Several levels or classes of signs, through which information is communicated, can be described (Rosengren, 2000; Littlejohn, 2002). There is information from objects themselves, and what are called their indices or signals: information that they cause. Processes are often revealed through their symptoms. When manipulating things in the world around us, these kinds of information or signs give us feedback on our actions. Living things add further signs to our world, and so do human artefacts, particularly those that are made to convey information. Signs can be made that mimic the object that is signified, for instance the pictograms found in train stations or a gesture that imitates a certain action. These kinds of signs are called icons, which in the case of the computer interface can have their own behaviour or processes. Abstract signs that have no resemblance to the object or concept they signify are called symbols, such as in written or spoken language or a musical score. Symbols can be organised in a language, such as our speech, and then written down. Additionally, signals and symbols can have emotional value, supporting or contradicting the information and meaning.

Summary of information signs
• all the things in the world, their affordances, signals and symptoms
• icons that mimic and resemble the thing that is signified
• symbols organised in languages (or codes) and written down
A language can be verbal as well as non-verbal, such as gestures, the two often accompanying each other. Non-verbal signs can be uttered unconsciously. The information can be conveyed through a variety of modalities, such as speech, tactile symbols such as Braille, visual modalities such as images, written language, musical scores, and olfactory signs (Eco, 1976).

Presentation and Feedback
All these signs (of objects and processes) and symbolic languages are applied in the most complex artefact developed by humans so far, the computer. Computer generated phenomena are called virtual as opposed to real - that is, these synthetic phenomena are known not to be real. Virtual phenomena can often be discriminated from real phenomena because they lack tangible properties, but this is rapidly changing. There is a philosophical side to this debate, from Plato's cave to Daniel Dennett's 'Brain in the Vat' (Dennett, 1991), which is beyond the scope and relevance of this article.
There are two main forms of information generated by computers: presentation and feedback. Through its displays (visual, auditory, haptic) the system presents what objects and processes are offered and what affordances they have. When being used, manipulated, or communicated with by the user, the system conveys information about the process(es) at hand by feeding back information to the user, at different temporal scales. This can take the form of messages (verbal) or signals. If the feedback is presented in order to guide the user's actions, to support them in articulating their intentions, it can be called articulatory feedback (Hix & Hartson, 1993, p 40) or synchronous feedback (Erickson, 1995). Articulatory feedback is particularly relevant in gestural interfaces, as shown in musical instruments (Bongers, 1998; Chafe, 1993; Chafe & O'Modrain, 1996). When the system is generating information that actively draws the user to something, it is generally referred to as feed-forward. The strongest examples of this can be found in haptic systems that actively 'pull' the user towards some location. This is a form of presentation. Feedback can come from elements specially designed for that purpose, such as message boxes and widgets that allow manipulation, or from the content itself. Information that is presentation in one moment can be feedback in another situation, or even at the same time.
Most feedback is actively generated by the system, but some feedback can come from passive elements of the system. An example of such passive feedback, sometimes also called inherent feedback (Schomaker et al, 1995), is the mouse click felt when the button is pressed - it is useful information but not generated by the system. The system therefore cannot be held responsible for misinterpretations: the click will still be felt even if there is nothing to click on the screen, the machine has crashed, or is not even on. In fact this is a bit of real world information, often blending in a useful way with the virtual world information. Mixing real and virtual phenomena is a good thing as long as one is aware of it, and it is designed as a whole. In the example of the mouse click it can be said that it is estimated information - usually true, but not always.
Presentation and feedback information can be displayed in various ways, addressing various human senses. In each case, particularly as part of parallel and multitasking situations, the most appropriate modality has to be chosen to convey the information.

Levels of Interaction
In order to understand the interaction between human and technology, it is useful to discern various levels of interaction. An action is usually initiated in order to achieve some higher order goal or intention, which has to be prepared and verbalised, and finally presented and articulated through physical actions and utterances. The presentation and feedback by the computer passes through several stages as well, before it can be displayed, possibly in various modalities including the haptic, in order to be perceived by the user. The actual interaction takes place at the physical level. In the standard literature often three levels are discerned: semantic, syntactic, and lexical (Dix et al, 1993, p252), but for more specific cases more levels can be described. Nielsen's virtual protocol model (1986) is an example of this, specifying a task and a goal level above the semantic level, and an alphabetical and physical level below the lexical level. This can also be applied to direct manipulation interface paradigms (Nielsen, 1992). Norman (1986) makes a useful explicit discrimination between input and output flows of information in stages. Users have to accomplish their goals through the physical system's actions, bridging a Gulf of Execution and a Gulf of Evaluation through flows of actions in various stages. The Layered Protocol is an example of a more complex model (Taylor, 1988), developed particularly to describe the dialogue using the speech modality, but also applied to general user interface issues (Eggen, Haakma and Westerink, 1996). When more sensory modalities are included in the interaction, models often have to be refined. Applying the Layered Protocol to interaction which includes active haptic feedback introduces the idea of (higher level) E-Feedback, which has to do with the system's expectations of the user, and I-Feedback, which communicates the system's lower level interpretations of the user's actions (Engel, Goossens & Haakma, 1994). It can be said that virtual messages are exchanged at higher levels between user and system (still through translations to the physical level), and that various messages are multiplexed into others and vice versa (Taylor and Waugh, 1991). Garrett's Elements of User Experience is an example of a more recent model, developed to include approaches from design and engineering, particularly of web site architectures (Garrett, 2003). The articulatory feedback (or interpretation feedback) studied in the research described in this paper takes place at the physical level, but can be extended to include the semantic levels.

Articulation
In the final phase of the process of making an utterance or a gesture we rely on feedback in order to shape our actions. An action is a continuous process of acting, processing feedback, and adjusting. When speaking, we rely on the auditory feedback of our utterances and continuously adjust our articulation. When manipulating objects or tools, we rely on the information conveyed to our senses of vision, touch, hearing and smell, including our self-perception, in order to articulate. The mainstream computer interface paradigm relies almost entirely on the visuo-motor loop, sometimes with an added layer of auditory feedback such as the sound-sets built into the operating system. However, in a real world 'direct manipulation' action the closest sense involved in that act is usually the sense of touch. Musicians know how important touch feedback is for articulation, and so does the craftsman. Computers seem to have been conceived as a tool for intellectual processes such as mathematics rather than as a 'tool' for manual workers, and have inherited a strong anti-craft, and therefore anti-touch, tendency (McCullough, 1996).
Anticipating a further development of, and emphasis on, gestural control of computers, including fine movements and issues of dexterity, feedback will need to be generated properly in order to be adequate. That is, feedback that supports the articulation of the gesture, enabling the user to refine the action on the fly.

Tactual Perception
The human sense of touch gathers its information through various interrelated channels, together called tactual perception (Loomis & Lederman, 1986). These channels and their sub-channels can be functionally distinguished; in practice however they often interrelate. Tactile perception receives its information through the cutaneous sense, from the several different mechanoreceptors in the skin (Sherrick and Cholewiak, 1986). Proprioceptors (mechanoreceptors in the muscles, tendons and joints) are the main input for our kinaesthetic sense, which is the awareness of movement, position and orientation of limbs and parts of the human body.
A third source of information is 'efference copy', reflecting the state of the nerves that control the human motor system. In other words, we know that we are moving, and this is a source of input as well. This is called active touch (Gibson, 1962) as opposed to passive touch.
Haptic perception uses information from both the tactile and kinaesthetic senses when actively gathering information about objects outside of the body. Most of the interactions in the real world involve haptic perception, and it is the main tactual mode to apply in HCI.
The feedback discussed in this paper mainly involves the tactile sense. There are four sense systems involved, relating to four types of mechanoreceptors in the skin, making up all four combinations of the parameters adaptivity (slow and fast, having to do with frequency) and spatial sensitivity (diffuse and punctate) (Bolanowski et al, 1988). The experiments described in this paper particularly address the fast adapting and diffuse system. This is often called the Pacinian system (named after the Pacinian corpuscles that are the relevant mechanoreceptors), and is important for perceiving textures but also vibrations - its sensitivity overlaps with the audible range.

Tactual Feedback Devices
Several devices have been developed to actively address the human sense of touch, and many studies have shown improvements in speed or accuracy in the interaction.
Existing devices like the mouse were retrofitted with solenoids for tactile feedback on the index finger and electromagnets acting as brakes on an iron mousepad for force-feedback (Akamatsu and Sato, 1992, 1994; Münch and Dillmann, 1997), or with solenoids on the side of the mouse for vibrotactile feedback (Göbel et al, 1995). The principle of the electromagnetically braked mouse was also applied in an art interface, an interesting application where the emphasis was not on improving efficiency (Kruglanski, 2000). The now discontinued Logitech force-feedback mouse, and its predecessor the Immersion FEELit mouse, have two motors influencing the mouse movement through a pantograph mechanism. These have been used for several studies (Dennerlein et al, 2000) and in a continuation of the studies with the vibrotactile device as described in this paper, now with force-feedback (Hwang et al, 2003), all showing performance improvements from applying various forms of feedback as described above. The Immersion Touchware protocol, defining tangible display elements, is used in many of the commercial force-feedback devices. The Logitech iFeel mouse uses this, generating vibrations with a little motor device.
Several motor-based force-feedback joysticks have been used for generating virtual textures (Hardwick et al, 1996, 1998) and other experiences (Minsky et al, 1990), and became cheaply available for gaming applications, such as the devices from Microsoft and Logitech.
To investigate the notion of E- and I-Feedback as described above, a force-feedback trackball was developed using computer controlled motors to influence the movement of the ball, enabling the system to generate feedback and feed-forward. This device was further used for studies in tactual perception (Keyson & Houtsma, 1995; Keyson, 1996), and for the development of multimodal interfaces (Keyson & van Stuivenberg, 1997; Bongers et al, 1998).
The Phantom is a well known device: a multiple degree-of-freedom mechanical linkage that uses motors to generate force-feedback and feed-forward on the movements. It has been used in many studies that investigate the advantages of tactual feedback (Oakley et al, 2000, 2002). Somewhat similar to the solenoid is the (miniature) relay coil that can generate tactile cues, such as used in the Tactile Ring developed for gestural articulation (Bongers, 1998). The advantage of using a miniature loudspeaker, however, is that many more pressure levels and frequencies can be generated. This is the rationale behind the developments described in this paper and in earlier research (Bongers, 2002b).

Investigating Tactual Articulatory Feedback
As stated before, in the current computer interaction paradigm articulatory feedback is given visually, while it is known from everyday experience that every movement and manipulative action is guided by touch, vision and audition. The rationale behind the work described in this paper is the expectation that adding auditory and vibrotactile feedback to the visual articulatory feedback improves the articulation, either in speed or in accuracy, and that of the two the vibrotactile feedback will give the greatest benefit, as it is the most natural form of feedback in this case.

Method
In the experiments the users are given a simple task, and by measuring the response times and error rates in a controlled situation differences can be detected. It must be noted that, in the current desktop metaphor paradigm, a translation takes place between hand movement with the mouse (input) and the visual feedback of the pointer (output) on the screen. This translation is convincing due to the way our perceptual systems work. The tactual feedback in the experiments is presented where the input takes place, on the hand, using a custom built vibrotactile element. In some of the experiments auditory articulatory feedback was generated as well, firstly because this often happens in the real world, and secondly to investigate whether sound can substitute for tactual feedback. Mobile phones often have such an audible key click.
A gesture can be defined as a multiple degree-of-freedom meaningful movement. In order to investigate the effect of the feedback a restricted gesture was chosen. In its simplest form, the gesture has one degree-of-freedom and a certain development over time. As mentioned earlier, there are in fact several forms of tactual feedback that can occur through the tactual modes as described above, discriminating between cutaneous and proprioceptive, active and passive. When interacting with a computer, cues can be generated by the system (active feedback) while other cues can be the result of real world elements (passive feedback). When moving the mouse with the vibrotactile element, all tactual modes are addressed, actively or passively; it is a haptic experience drawing information from the cutaneous sense, proprioception, and efference copy (because the subject moves actively), while the system only actively addresses the cutaneous sense. All other feedback is passive, i.e. not generated by the system, but it can play some role as articulatory information.
It was a conscious decision not to incorporate additional feedback on the reaching of the goal. This kind of 'confirmation feedback' has often been researched in haptic feedback systems, and has proven to be an effective demonstration of positive effects, as discussed in section 1.6. However, we are interested in situations where the system does not know what the user is aiming for, and are primarily interested in the feedback that supports the articulation of the gesture - not the goal to be reached, as it is not known. For the same reason we chose not to guide (pull) the users towards the goal, which could be beneficial, although following Fitts' Law what happens in such cases is that the target size is effectively increased (Akamatsu & MacKenzie, 1996).
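To make the last point concrete, Fitts' Law in the Shannon formulation commonly used in HCI (a standard result, not specific to this study) predicts the movement time MT of a pointing task from the distance D to the target and the target width W:

$$ MT = a + b \log_2\!\left(\frac{D}{W} + 1\right) $$

where a and b are empirically determined constants. A haptic attraction that pulls the pointer onto the target behaves like an increase of the effective width W, lowering the index of difficulty $\log_2(D/W + 1)$ and hence the predicted movement time. Such guidance therefore improves performance without necessarily improving the articulation of the gesture itself, which is why it was excluded here.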

Experimental Set-Up
The visual, object based programming language Max/MSP (2003) was used. Max/MSP is a program originally developed for musical purposes and therefore has suitable real-time precision (the internal scheduler is set to 1 ms) and many built-in features for the generation of sound, images, video clips, and interface widgets. It has been used, and proved useful and valid, as a tool in several psychometric and HCI related experiments (Vertegaal, 1998, p. 41; Bongers 2002a). Experiments can be set up and modified very quickly.
The software for this experiment ran on an Apple PowerBook G3, used in dual-screen mode; the experimenter uses the internal screen for monitoring the experiment and the subject uses an external screen. The subject carries out the tasks using a standard Logitech mouse, with the vibrotactile element positioned under the index finger of the subject, where the feedback would be expected. For right handed users this was on the left mouse button; for left handed users the element could be attached to the right mouse button. The vibrotactile element is a small loudspeaker (Ø 20 mm) covered and protected by a ring, as shown in Fig. 2. The ring prevents the user from pressing on the loudspeaker cone, which would make the stimulus disappear. The vibrotactile element is covered by the user's finger so that frequencies in the auditory domain are further dampened. Generally the tactual stimuli, addressing the Pacinian system which is sensitive between 40 and 1000 Hz, are chosen with a low frequency to avoid interference with the audio range.

Fig. 2. The vibrotactile element in position
Auditory feedback is presented through a small loudspeaker next to the subject's screen. The signals sent to the vibrotactile element are low frequency sounds, making use of the sound generating capabilities of Max/MSP.
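To illustrate the kind of signal involved: the stimuli are simple low-frequency waveforms, played out through the audio path to the loudspeaker element. The sketch below is a re-creation in Python with NumPy, not the actual Max/MSP patch; the sample rate and function names are assumptions made for the illustration.

```python
import numpy as np

SAMPLE_RATE = 44100  # the element is driven like a loudspeaker, at audio rate

def triangle_pulse(freq_hz: float, amplitude: float, n_periods: int = 1) -> np.ndarray:
    """One or more periods of a triangular wave, the stimulus shape used
    for the vibrotactile 'bumps' (83.3 Hz in the slider phase)."""
    n_samples = int(SAMPLE_RATE * n_periods / freq_hz)
    t = np.arange(n_samples) / SAMPLE_RATE
    phase = (t * freq_hz) % 1.0
    tri = 4.0 * np.abs(phase - 0.5) - 1.0  # triangle wave in [-1, 1]
    return amplitude * tri

# A single full-amplitude pulse, e.g. one per slider step; played through a
# loudspeaker under the fingertip this is felt rather than heard.
pulse = triangle_pulse(83.3, 1.0)
```

The triangular shape is relevant because its overtones give the auditory version of the same signal its clicking character, as noted in the slider phase below.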
In total 35 subjects participated in the trials and pilot studies, mainly first year students of Computer Science. They carried out a combination of experiments, as described below, using their preferred hand to move the mouse and explore or carry out tasks, and were given a form with open questions for qualitative feedback. In all phases of the experiment the subjects could work at their own pace, self-timing when to click the "next" button to generate the next cue.
Special attention was given to the user's posture and movements, as the experiment involved many repetitive movements, which is a recipe for RSI - ironically enough, because the overall goal of the research is to come up with paradigms and interaction styles that are more varied, precisely to avoid such complaints. At the beginning of each session participants were therefore instructed and advised on strategies to avoid such problems, and monitored throughout the experiment.

Phase 1: Threshold Levels
The first phase of the experiment was designed to investigate the threshold levels of tactile sensitivity of the participants, in relation to the vibrotactile element under the circumstances of the test set-up. The program generated, in random order, a virtual texture, varying the parameters base frequency of the stimulus (25 or 83.3 Hertz), amplitude (30%, 60% and 100%) and spatial distribution (1, 2 or 4 pixels between stimuli), resulting in 18 different combinations. A case with an amplitude of 0, i.e. no stimulus, was also included for control purposes. In total therefore there were 24 combinations. The software was developed by the first author at Cambridge University in 2000 and is described in more detail elsewhere (Bongers, 2002b).
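A minimal sketch of how such a stimulus table can be built follows; the variable names are illustrative, not taken from the original software. One reading consistent with the numbers above is that the six zero-amplitude controls are one per frequency/spacing pair, giving 18 + 6 = 24 combinations.

```python
import itertools
import random

frequencies = [25.0, 83.3]     # Hz, base frequency of the stimulus
amplitudes  = [0.3, 0.6, 1.0]  # 30%, 60% and 100%
spacings    = [1, 2, 4]        # pixels between stimuli

# 2 x 3 x 3 = 18 textures, plus 6 zero-amplitude controls
# (one per frequency/spacing pair): 24 combinations in total.
textures = list(itertools.product(frequencies, amplitudes, spacings))
controls = [(f, 0.0, s) for f, s in itertools.product(frequencies, spacings)]

trials = textures + controls
random.shuffle(trials)  # presented in random order
```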
The participant had to actively explore an area on the screen, a white square of 400 x 400 pixels, and report if anything could be felt. This could be done by selecting the appropriate button on the screen, upon which the next texture was set. Their responses, together with a code corresponding to the specific combination of stimulus parameter values, were logged by the program into a file for later analysis. In this phase of the experiment 26 subjects participated.
This phase also helped to make the subjects more familiar with the use of tactual feedback.

Phase 2: Menu Selection
In this part of the experiment, a random number was generated and visually presented, and the participants had to select the matching item from a list of numbers (a familiar pop-up menu). There were two conditions, one visual only and the other supported by tactual feedback, where every transition between menu items generated a tangible 'bump' in addition to the normal visual feedback. Response times and error rates were logged into a file for further analysis.
The menu contained 20 items, a list of numbers from 100 to 119, as shown in Fig. 3. All 20 values (the cues) would occur twice, drawn from a randomly generated table in a fixed order so that the data across conditions could be easily compared - all distances travelled with the mouse were the same in each condition.
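The same device of a 'randomly generated, fixed order' cue table recurs in the slider phases below. A minimal sketch of the idea, with hypothetical names (the original tables were generated in Max/MSP, not in Python):

```python
import random

def make_cue_table(values, repeats=2, seed=1):
    """Shuffle the cue values once, with every value occurring `repeats`
    times; reusing the same table in every condition guarantees identical
    distances, in identical order, across conditions."""
    cues = list(values) * repeats
    random.Random(seed).shuffle(cues)  # fixed seed: random but reproducible
    return cues

# Menu phase: the numbers 100..119, each occurring twice -> 40 trials.
menu_cues = make_cue_table(range(100, 120))
```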
In total 30 subjects participated in this phase of the experiment.

Phase 3: Slider Manipulation
In this experiment a visual cue was generated by the system, moving a horizontal system slider on the screen to a certain position. The participants were instructed to place their slider in the same position, as fast and as accurately as they could. Feedback was given in four combinations of three modalities: visual only (V), visual + tactual (VT), visual + auditory (VA), and visual + auditory + tactual (VAT). The tactual feedback consisted of a pulse for every step of the slider, a triangular wave form with a base frequency of 83.3 Hz. The auditory feedback was generated with the same frequency, making a clicking sound due to the overtones of the triangular wave shape.
These combinations could be presented in 24 different orders, but given the nature of the work (the context of user interface applications) it was decided to choose only two, which would reveal enough about potential order effects: V-VT-VA-VAT and VAT-VA-VT-V. Fig. 4 shows the experimenter's screen.
All 40 cues were presented from a randomly generated, fixed order table of 20 values, every value occurring twice. Values near or at the extreme ends of the slider were avoided, as it was noted during pilot studies that participants developed a strategy of moving to the particular end very fast, knowing the system would ignore overshoot. The sliders were 600 pixels wide, and the mouse ratio was set to slow in order to make the participants really move. Through the use of the fixed values in the table across conditions, it was ensured that in all conditions the same distances would be travelled in the same order, thereby avoiding having to compensate for Fitts' law or the mouse-pointer ratio (which is proportional to the movement speed, a feature which cannot be turned off in the Macintosh operating system).
The cues were presented in blocks for each condition; if the conditions had been mixed, subjects would have developed a visual-only strategy in order to be independent of the secondary (and tertiary) feedback. The slider was also colour coded: each condition had its own colour. 31 subjects participated in this phase.

Phase 4: Step Counting
In this experiment the participants were prompted with a certain number of steps to be taken with a horizontal slider, similar to the one in the previous experiment. The range was set to 20 rather than 120 in this case, to keep it manageable. Conditions were visual only (V) and visual combined with touch (VT). No confirmative feedback was given; the subject would press a button when he or she thought that the goal was reached, and the next cue was generated.
In this phase of the experiment 11 subjects participated.

Phase 5: Questionnaire
The last part of the session consisted of a form with questions to obtain qualitative feedback on the chosen modalities, and some personal data such as gender, handedness and experience with computers and the mouse. This data was acquired for 31 of the subjects.

Results and Analysis
The data from the files compiled by the Max patch was assembled in an MS Excel spreadsheet, and then analysed in SPSS.
All phases of the experiment showed a strong learning effect, which is not surprising given the novelty of the presentation. The presentation orders in all phases were balanced to compensate for this effect.
The errors logged were divided into two types: misalignments and mistakes. A mistake is a mis-hit, for instance when the subject 'drops' the slider too early. A mistake can also appear in the measured response times: some values appeared that were below the physically possible minimum response time, as a result of the subject moving the slider by just clicking briefly at the desired position so that the slider jumps there. Subjects were instructed not to use this 'cheat', but accidentally it occurred. Some values were unrealistically high, probably due to some distraction or other error. All these cases contain no real information, and were therefore omitted from the data as mistakes. It was hoped that the misalignment errors could be analysed, as they might reveal information about the level of performance.
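As a sketch of this data-cleaning step (the actual analysis was done in Excel and SPSS; the bounds and column names here are hypothetical):

```python
import pandas as pd

MIN_RT_MS = 300    # below the physically possible minimum: slider 'clicked into place'
MAX_RT_MS = 15000  # unrealistically high: subject was probably distracted

def drop_mistakes(trials: pd.DataFrame) -> pd.DataFrame:
    """Remove trials whose response time carries no real information,
    while keeping misalignment errors for separate analysis."""
    plausible = trials["rt_ms"].between(MIN_RT_MS, MAX_RT_MS)
    return trials[plausible].copy()
```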

Phase 1: Threshold Levels
Of the 24 presented stimuli (averaged over two runs), twelve were recognised correctly (including four of the six non-textures) by all the subjects, and a further ten with more than 90% accuracy (including two non-textures). There were two lower scores (88% and 75%), which corresponded to the two most difficult to perceive conditions: the combination of the lowest amplitude (30%), the low frequency (25 Hz), and the spatial distributions of two and four pixels respectively.

Phase 2: Menu Selection
Table 1 shows the total mean response times. All trials were balanced to compensate for learning effects, and the number of trials was such that individual differences were spread out, so that all trials and subjects (N in the table) could be grouped. The response times for the Visual + Tactual condition were slightly higher than for the Visual Only condition; statistical analysis showed that this difference was not significant. The error rates were not statistically analysed; in both conditions they were too low to draw any conclusions from.

Phase 3: Slider Manipulation
The table below gives an overview of the means of all response times (for all distances) per condition. The response times were normally distributed, and not symmetrical in the extremes, so two-tailed t-tests were carried out in order to investigate whether the differences are significant. Of the three possible degrees-of-freedom, the interaction between V and VT as well as between VA and VAT was analysed, in order to test the hypothesis of the influence of T (tactual feedback added) on the performance. This shows that response times for the Visual Only condition were significantly faster than for the Visual + Tactual condition, and that the Visual + Auditory condition was significantly faster than the similar condition with tactual feedback added.
The differences in response times between conditions were further analysed for each of the 20 distances, but no strong effects were found. The differences in error rates were not significant.

Phase 4: Step Counting
This phase of the experiment was the most challenging for the subjects. In order to find out whether the difference between the two conditions is significant, an analysis of variance (ANOVA) was performed, to discriminate between the learning effect and a possible order effect in the data. The ANOVA systematically controlled for the following variables:
• between subject variable: sequence of conditions (V-VT vs. VT-V);
• within subject variables: the 20 different distances.
The results of this analysis are summarised in the table below. From this analysis it follows that the mean response times for the Visual + Tactual condition are significantly faster than for the Visual Only condition.
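A minimal sketch of this style of analysis in Python (the original analysis was done in SPSS; the file and column names are hypothetical, and the sketch keeps to within-subject factors, which is what statsmodels' AnovaRM supports):

```python
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# One row per subject x condition x distance, with the mean response time.
df = pd.read_csv("responses.csv")  # columns: subject, condition, distance, rt_ms

# Two-tailed paired t-test on per-subject means, e.g. V versus VT.
v = df[df.condition == "V"].groupby("subject").rt_ms.mean()
vt = df[df.condition == "VT"].groupby("subject").rt_ms.mean()
t_stat, p_value = ttest_rel(v, vt)

# Repeated-measures ANOVA over condition and distance; a between-subject
# order factor (V-VT vs. VT-V) would need a mixed design instead.
result = AnovaRM(df, depvar="rt_ms", subject="subject",
                 within=["condition", "distance"]).fit()
print(result)
```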

Phase 5: Questionnaire
The average age of the 31 subjects who filled in the questionnaire was 20 years; the majority were male (26 out of 31) and right handed (27 out of 31; they used their preferred hand, which was the right hand in all cases). They answered a question about their computer and mouse experience on a Likert scale from 0 (no experience) to 5 (a lot of experience). The average for this question was 4.4, meaning their computer and mousing skills were high. The qualitative information obtained from the open questions is categorised as shown in the table below. There was also a question about whether the added feedback was thought to be useful. For added sound, sixteen (out of 31) people answered "yes", four answered "no" and eleven thought it would depend on the context (issues mentioned included privacy, blind people, and precision). For the added touch feedback, 27 people thought it was useful, and four thought it would depend on the context. When asked which combination they preferred in general, three (out of 31) answered Visual Only, thirteen answered Visual + Tactual, three answered Visual + Auditory, seven Visual + Auditory + Tactual, and four thought it would depend on the context. One subject did not answer.

Discussion of the Results
The results from the Threshold experiment show that our subjects had no difficulty recognising the stimuli generated, apart from the cases with the really low levels. For the experiments, combinations of the stimuli were chosen that were far above these thresholds.
The results from the menu selection show no significant difference; this task was easy and familiar for people.
The results from the slider experiment show that people tend to slow down when the tactual feedback is added. This can be due to its novelty: people are not used to having their sense of touch actively addressed by their computer. The slowing down may also be explained by a factor of perceived physical resistance, as has been found in other research (Oakley et al, 2001). When the task reaches a sufficient level of difficulty, the advantage of the added tactual feedback can be shown. This is demonstrated by the Step Counter phase of the experiment, where task completion times were significantly shorter with the added feedback.
The questionnaires show that people generally appreciate the added feedback, and favour the tactual over the auditory. Auditory feedback is known to be perceived as irritating, particularly in some contexts. Computer generated tactual feedback can be unpleasant as well, and not much research has been carried out investigating this rather qualitative aspect and its relationship to the choice of actuator (e.g. a motor or a speaker).

Discussion
The participants carried out the experiments as part of their lecture series on Information Presentation at the department. They were therefore easily tempted to be thoroughly involved in the experiments. It can be argued that this would bias their responses, particularly in the quantitative parts of the session, but it must be stressed that the work here is carried out in the context of human-computer interaction research and not as pure psychometric experimentation. The response times between subjects vary widely, which is quite fascinating and a potential subject for further investigation. Clearly, people all have their own individual way of moving (what we call the 'movement fingerprint'), and these results show that even in the simplest gesture this idiosyncrasy can manifest itself.
The observation that in some cases some subjects actually seem to slow down when given the added tactual feedback has a lot to do with the tasks, which were primarily visual. The tactual feedback can be perceived by the user as resistance. This is interesting, as we know from everyday experience (particularly with musical instruments) that effort is indeed an important factor in gathering information about our environment, and for articulation. Dancers can control their precise movements while relying more on vision and internal kinaesthetic feedback, but only after many years of training. It is still expected that the greatest benefit of adding tactual feedback to a gesture will be found in a free moving gesture, without any passive feedback as is the case when moving the mouse. This has to be further investigated; some preliminary experiments, carried out both with lateral movements and with rotational movement in free space, show promise. The potential benefit of this has already been shown elsewhere, for instance at Sony Labs (Poupyrev, 2002), where a tilting movement of a handheld device did benefit from the added or secondary feedback (there somewhat confusingly labelled as 'ambient touch').
The set-up as described in this paper mainly addresses the Pacinian system, the one of the four tactual systems of cutaneous sensitivity that has the rapidly adapting and diffuse sensitivity (see section 1.5), and that can pick up frequencies from 40 Hz to 1000 Hz (Verrillo, 1992). Other feedback can be applied as well, conveying different kinds of information. In the current situation only pure articulatory feedback was considered; other extensions can be added later. This would also include the results from the ongoing research on virtual textures, resulting in a palette of textures, feels and other tangible information to design with. A logical extension of the set-up is to generate feedback upon reaching the goal, a 'confirmative feedback' which has been proven to produce a strong effect. This could greatly improve performance, as it is known that the gesture in a pointing task can be described by dichotomous models of human movement, which imply an initial ballistic movement towards a target followed by finer movements controlled by finer feedback (Mithal, 1995). This was observed in some cases in our user trials. Other improvements that come to mind are: to have multiple points of feedback addressing more fingers or parts of fingers (Max/MSP works with external hardware to address multiple output channels), and to make the vibrotactile element smaller, so that more can be put on one finger. This can be used to simulate direction, such as in the effect of stroking velvet. The palette eventually should also incorporate force-feedback, addressing the kinaesthetic sense.

Conclusion
In the research described we have investigated the influence of added tactile feedback in the interaction with the computer.
In Phase 1 it was found that the participants were very sensitive and were able to perceive fine tactile 'virtual textures' generated with the tactile display. In Phase 2 it was found that selecting items from a menu list did not show any significant difference in performance when tactile feedback was added. Manipulating a slider under various feedback conditions, in Phase 3, did show a significant difference: performance was influenced by the added tactile feedback, also in the case of added auditory feedback, in that task completion speed decreased. When the task was made more difficult, in the Step Counter experiment of Phase 4, a positive effect was shown in the tactile feedback condition. The vibrotactile element, based on a miniature loudspeaker and low-frequency 'sounds', proved to be a cost-effective and flexible solution, allowing a wide variety of tactile experiences to be generated using sound synthesis techniques.
In the experiments the added feedback was redundant. We are not trying to replace visual feedback by tactual feedback, but to add this feedback to make the interaction more natural. It is expected that in some cases this leads to a performance benefit. The widget manipulations we chose to investigate are standard user interactions; the subjects were very familiar with them, and could devote all their attention to fulfilling the tasks. Only when the task is made less mundane, such as in the last phase where the subjects had to count steps of the slider widget, does adding tactile feedback help to improve the interaction. This may seem irrelevant in the current computer interaction paradigm, which is entirely based on the visuo-motor loop, but our research has a longer term goal of developing new interaction paradigms based on natural interaction.
Multitasking is an element of such paradigms, and a future experiment can be developed where the participants have to divide their attention between the task (i.e. placing the slider under various feedback conditions) and a distracting task.
Multiple degree-of-freedom input is another expected element of new interaction paradigms, and we are interested to see the effects of added tactual feedback on a free-moving gesture. In such a case the passive feedback (kinaesthetic guidance from the table top surface in the 2D case) is absent, and it is expected that adding active feedback can compensate for this. Such an experiment has been developed and is currently being tested.

Fig. 3. The list of menu items for selection.

Fig. 4. The experimenter's screen of phase 3, the Slider experiment, showing the controls and monitoring of the experiment and the user's actions.

Table 1. Mean response times and standard deviations for the Menu Selection.

Table 2. Mean response times and standard deviations for the Slider.

Table 3. The results of the t-tests.

Table 4. Mean response times and standard deviations for the Counter.

Table 5. The results of the comparison for the two conditions of the Counter phase.

Table 6. Categorised answers to the open questions, out of 31 subjects.