
Awakening the Synthesizer Knob: Gestural Perspectives

Department of Industrial Design, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
Author to whom correspondence should be addressed.
Academic Editor: Denis Lalanne
Machines 2015, 3(4), 317-338;
Received: 24 July 2015 / Revised: 16 October 2015 / Accepted: 19 October 2015 / Published: 28 October 2015
(This article belongs to the Special Issue Tangible meets Gestural)


While being the primary mode of interaction with mainstream digital musical instruments, the knob has been greatly overlooked in its potential for innovation. In this paper, we aim to open up the thinking about new possibilities for the knob. Based on an analysis of the background of the knob and the relevant theory on interaction, three directives are formulated to guide the design of a new breed of knobs. Six prototypes are tested through an AttrakDiff questionnaire and Discourse Analysis. It is shown that the proposed new breed of knobs has stronger hedonic qualities than knobs of mainstream digital musical instruments, though the pragmatic quality appears lower. The strongest improvement is seen in stimulation, an important factor in enticing investment to play, and as such, more expressive control. Through using three directives, a new generation of knobs can be made that would improve the expressive affordances of digital musical instruments.
Keywords: knobs; interfaces; digital musical instruments; electronic musical instruments; interaction design; AttrakDiff; discourse analysis

1. Introduction

Digital musical instruments (DMI) have brought a revolution in the way music sounds, but in the case of mainstream commercialized and widespread DMI, or mDMI (a more extensive definition is given in Section 1.1.), devices have lost a lot of richness in interaction compared to traditional musical instruments. As the works from, among others, New Interfaces for Musical Expression conference (NIME) and the International Computer Music Conference (ICMC) show, there is a lot of interest in making new, more expressive instruments, though it seems rare to see reflections on the mainstream DMI paradigm. We believe there is a lot of untapped potential in the current mDMI interfaces. In this project, we focus on the knob, it being a prime example of an ever-present interface element in mDMI lacking rich interaction yet having a strong appeal to users.
Through considering relevant literature, we draft a set of directives guiding to a new generation of knobs, with the aim of having more expressive qualities than current knobs on mDMI. Six prototypes are made and tested in a user study.

1.1. Definition; Mainstream Digital Musical Instruments

The field of Digital Musical Instruments (DMI) is a wide one, ranging from sophisticated sound synthesis systems like Kyma, via CD or mp3 players, to experimental setups made with Pure Data (Pd). Not all of these DMI are relevant to this project; for that reason, we will only discuss a specific subgroup of DMI in this paper, namely mainstream commercialized and widespread Digital Musical Instruments, or mDMI.
These mDMI are commonly sold in music stores and are professionally used by DJs, sound designers, music producers and live performers. The main categories of mDMI are synthesizers (e.g., Supernova by Novation (Buckinghamshire, UK)), drum computers (e.g., Machinedrum by Elektron (Gothenburg, Sweden)), DJ controllers (e.g., DDJ-SX2 by Pioneer (Kawasaki, Japan)) and effect controllers (e.g., KP3 by KORG (Tokyo, Japan)). DMI not regarded as mDMI are, for instance, niche controllers (e.g., Seaboard by Roli (London, UK)), digitized traditional instruments (e.g., EZ-AG Guitar by Yamaha (Hamamatsu, Japan)) and one-off do-it-yourself instruments (e.g., setups with Kinect by Microsoft (Redmond, WA, USA)).
mDMI share a common interface paradigm, using knobs, buttons, sliders and keys, supplemented with small screens. A common logic of these instruments is based on analog synthesizers and early computer interfaces and allows a relatively low barrier in learning to use similar devices.

1.2. The Knob

On mDMI, knobs are the tangible interfaces of potentiometers (Figure 1) or rotary encoders, which differ in practice in that potentiometers have a limited range of motion, while encoders can turn endlessly. The position of the knob is translated into a variable within the synthesis engine, allowing the musician to modify the sound. Knobs are often used alongside buttons and switches (which are cheaper and take less space, but can only control discrete parameters) and rarer control elements like sliders, distance sensors and touch-strips (which are generally more expensive and take more space). While very limited, twisting a knob is a form of tangible gestural interaction. To understand why knobs are used, we briefly look at their history.
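The translation from knob position to synthesis variable can be sketched in a few lines of code. This is our minimal illustration, not taken from the paper; the 10-bit ADC resolution and the 20 Hz–20 kHz cutoff range are assumptions.

```python
# Illustrative sketch: a potentiometer's wiper voltage is read by an
# ADC, normalized to 0..1, and scaled to a synthesis parameter range.
# adc_max and the cutoff range are assumed values, not from the paper.

def pot_to_param(adc_value, adc_max=1023, lo=20.0, hi=20000.0):
    """Normalize a raw ADC reading, then scale it linearly to a
    parameter range, e.g., a filter cutoff in Hz."""
    position = adc_value / adc_max  # 0.0 .. 1.0
    return lo + position * (hi - lo)
```

In a real mDMI this mapping usually sits inside the synthesis engine's firmware; the sketch only shows the principle of position-to-parameter translation.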
Figure 1. A potentiometer with a generic push-on knob.

1.3. History

The potentiometer, with a knob as its interface, was invented in the late 19th century as a means to make components in electronic circuits variable. It took shape similar to the rheostat patented by Wheatstone, Figure 2. A deeper insight in the history of the potentiometer can be found in the work of Rutenberg [1].
Figure 2. Charles Wheatstone’s rheostat (1843).
The history of mDMI, as described by Manning [2], allows us to trace why knobs are used on mDMI interfaces in the way we are used to them.
Knobs were rarely used on the interfaces of early electronic musical instruments like the Hammond organ and the Ondes Martenot. These instruments, primarily being inspired by organs, favored sliders or buttons. They were not meant to have their sounds tweaked during performance and as such, the interfaces were made for set-and-forget rather than continuous interaction.
The scientific approach of deconstructing all synthesis elements into modular systems in the 1950s popularized knobs as practical control elements. Borrowing heavily from electronic test equipment, see Figure 3, knobs were mainly used for practical reasons: small size, highly versatile and commonly available. These huge systems, like the RCA Mark II Music Synthesizer, were focused on pushing the limits of sound synthesis rather than performing music, and because they were immensely expensive, more the domain of technicians and composers rather than performers.
With manufacturers like Moog and Buchla condensing the synthesizer into portable systems in the 1960s, the synthesizer form factor and paradigm were defined. In this period, with the instruments again in the hands of performers, musicians slowly started to discover the possibilities of all these knobs in live performance, with the Trips Festival in 1966 famously giving “knob wiggling” a place on stage [3].
In the 1980s, digital synthesis made for a revolution in sounds, leading to domination of electronic sounds in popular music. However, with the instruments more strongly relating to computers, the task of modifying the sound required such deep understanding of the synthesis engine that few people took the step to really learn to program the instrument. Evidence of this is found with the praised Yamaha DX7 synthesizer, of which the presets are recognizable in many hits [4].
Developments seem to be focused on making synthesizers less expensive, more portable and, recently, on giving them more understandable user interfaces. Still, little difference is seen in how knobs are used on the interfaces, compared to earlier synthesizers.
Knobs are ubiquitous on synthesizers, but as of yet, nobody seems to have considered their characteristics for making music. What could happen when we do?
Figure 3. Synthesizer (left) and oscilloscope (right).

1.4. Characteristics

To gain a deeper understanding of how the qualities of knobs on mDMI relate to making music, a practical study of external and internal characteristics was conducted, as shown in Figure 4 and Figure 5. External characteristics were compared by sorting an assortment of knobs taken from mDMI on parameters as adapted from Lederman and Klatzky [5]. Internal characteristics were compared by placing an assortment of potentiometers and rotary encoders from mDMI in a blank box and equipping them with the same knob, to unify the external characteristics.
An informal preliminary study was done with three knob-interaction experts. They were asked to turn a knob and describe the sounds the different types of knobs would make in their imagination as well as to which interactive qualities the differences could relate. This study was intended to lead to a practical direction on how to design for intimacy with knobs, but resulted only in a list of external and internal parameters and their relation to musical interaction, as seen in Table 1. This list was made by distilling the words related to knob parameter from the experts’ comments. While it is interesting to know which parameters there are and which ones were considered to have a greater influence on music than others, it was seen that only a unity of characteristics could lead to a strong emotional response according to the experts. For example, moving a large knob and making a big sound excited the participants, while it felt difficult to make any related sound with the medium sized grey knobs. The next paragraphs will describe the most relevant characteristics of knobs on mDMI in detail.
Figure 4. Internal knob characteristics study.
Figure 5. External knob characteristics study.
Table 1. Knob parameters and their relation to musical interaction: important (!), neutral ( ), eliminate (-), and unknown (?).
Shape (!)
Size (!)
Material (!)
Color ( )
Texture (!)
Knurling ( )
Bottom chamfer ( )
Position indicator (!)
Resistance (!)
Range of motion ( )
Detents (!)
Inertia (?)
Axial skew (-)
Axial play (-)
Dynamic feedback (?)
Knobs are made to be easy to grab by having a size that easily fits in between the index finger and thumb. Small knobs seem to be more suitable for making a quick motion by rolling the knobs in between the fingers, while bigger knobs give the feeling of having more precise control.
The gesture used to interact with a knob is either to hold it with two or with three fingers, and either with the index finger parallel to the interface or perpendicular to it. Extending to the body, variations can be seen in the level of involvement of the arms, body and even the face. In expressive performance, all of these elements are involved and connected.
While most knobs are made of a hard polymer, some have a soft-touch coating or are made out of metal. Combined with the surface texture, this is the main contributor to the initial feel of the knob.
While twisting them, the knob has resistance. A very low resistance feels cheap while a very high resistance can feel like the knob is broken. This suggests that there is an optimal resistance related to the musical function. Some knobs have rhythmic detents or a detent to indicate the center position. If used in a logical way, for instance in a detune knob with its neutral position marked with a detent in the center of the knob’s range, these detents are a major contribution to the interaction.
As the interaction with knobs is purely rotational around one axis, any other movement is undesirable. Any other linear movement is described with axial play, while any other rotational movement is described with axial skew.
To assist in twisting, most knobs are slightly slanted and have some form of straight knurling. This comes in various forms, but does not seem to have much impact on the interaction.
Indicating their current position, most knobs, excluding rotary encoders, have a position indicator both in shape and color. A position indicator that is both tactile and visually clear assists the user greatly in understanding the current status of the knob.
Some knobs have chamfers on the bottom to prevent friction between the fingers and the interface panel. This does not seem to give a large contribution to the interaction but mostly adds a traditional aesthetic.
To indicate function, nearly all knobs have a label printed next to them. Sometimes color codes are used to group knobs with a similar function.
While all of these parameters influence the way a knob is used in making music, this is not always a positive relationship. For example, there is no relationship between the shape of a knob and its function: both LFO speed and filter resonance can be assigned to identical knobs, while being radically different functions. Combined with the absence of tactile cues and the varying functions of a same knob, this makes the development of muscle memory difficult, which leads to a high cognitive load and a constant need for visual support.
The range of motion needed for an audible effect can differ greatly between knobs. For example, a commonly heard complaint is that all of a knob’s audible effect lies within a small part of its motion range.
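One reason for this complaint is that many sound parameters are perceived roughly logarithmically, while knobs are often mapped linearly. The sketch below contrasts the two tapers for a filter cutoff; the curve shapes and frequency range are our illustrative assumptions, not from the paper.

```python
# Sketch (our illustration): a linear taper crams most of the audible
# change of a filter-cutoff knob into a small arc, whereas an
# exponential taper spreads it over the whole travel.

def linear_cutoff(position, lo=20.0, hi=20000.0):
    """position in 0..1 -> cutoff in Hz, linear taper."""
    return lo + position * (hi - lo)

def exponential_cutoff(position, lo=20.0, hi=20000.0):
    """position in 0..1 -> cutoff in Hz; equal knob travel
    corresponds to an equal pitch interval."""
    return lo * (hi / lo) ** position
```

At half travel the linear taper already sits near 10 kHz, while the exponential taper sits near 630 Hz, much closer to how the sweep is heard.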
Lastly, the functions assigned to knobs might not always match the musical goals a musician has. For example, to make the sound more aggressive, the musician needs to simultaneously turn the filter, distortion, volume, and harmonics knobs, but can only twist two knobs with his two hands at the same time.
How much could we improve musical interaction when the aesthetics and feel of a knob would be connected to its function? With the use of tactile cues, could the use of visual cues be minimized? Could the interaction with knobs become truly musical?
The knob study shown in this section provided insight into the characteristics of current knobs on mDMI, and it allowed us to see that a focus on shape might have the most potential for finding new directions. In addition, doing experiments on deliberately using knob feedback parameters like resistance, might bear interesting results. The following sections will first explore other directions, before consolidating into a connected approach.

1.5. Innovation

There have been a few variations in mDMI exploring the effects of different shapes of knobs, leading to different gestural qualities. In this section we will discuss three alternate shapes of knobs shown in Figure 6.
Figure 6. Modulation wheels (left); Korg Kaoss Pad (center); and bender on the Teenage Engineering OP-1 (right).
Modulation/pitch wheels are used as an alternative interface for the potentiometer. A very different interaction is created through using a larger surface and rotating along a horizontal, rather than a vertical axis. These expression controls help, adding nuance during musical performance, but are limited in their application as supplements to keyboards.
The Korg Kaoss Pad uses a two-dimensional sensor to control several parameters at the same time, enabling impressive expressive control. This shows that simply putting two very different parameters under the same finger can have a great impact on expressivity.
The Teenage Engineering (Stockholm, Sweden) OP-1 accessories are a series of push-on objects that completely change the interaction. For instance, the bender turns the knob into a spring-loaded handle and the crank lets the knob be turned like the handle on a music box. While seemingly small differences in shape, these accessories make a huge difference in how the musician uses the knob and what kind of sounds it invites to make.
These examples show that there are fruitful ways to improve expression. However, to find more radical innovations, steps outside the synthesizer realm need to be taken.

1.6. Expression and Interaction

Towards developing a connected approach for our proposed new generation of knobs, and by following the lead of peers like Bovermann [6], we take a closer look at other scientific fields to see which aspects can improve expressivity.

1.6.1. Musical Expression

Research shows that making and perceiving music is most strongly connected with the human motor system [7]. This suggests that enhancing the senso-motoric components of interaction can lead to a more musical interaction. This can be done by simply increasing the size of a knob, i.e., requiring a stronger physical interaction, or by providing (force) feedback.
In classical music practice, a composer pre-sets most of the musical parameters while the musician expresses himself through variations within those constraints. However, while mDMI are made suitable for musicians to control those basic parameters, more subtle variations are often not translated into sound. Simply by making sure that the musician has control over all desirable musical parameters with both hands, expression could be enhanced greatly.
As Jordà [8] explains, expressivity can happen on different levels. mDMI may even exceed traditional instruments at offering stylistic possibilities, for example through sampling, but they lack in micro dynamics, i.e., performance nuances. For example, many more dimensions of how a performer is moving his or her body are heard within the sound of bowing a violin string than in the sound of opening a filter on a DJ controller. Making knobs sensitive to more modalities of interaction could be an approach with a lot of potential. Adding sensitivity to pressure, gesture shape and angle of motion could be a good start.

1.6.2. Embodiment

Virtuoso musicians play their instrument as if it is an extension of their body and as such, they embody their instrument. Therefore, to aim for such a high level of expressivity with mDMI knobs, it is important to design for embodiment. Fels [9] describes that to reach embodiment, the designer must aim to create certain conditions for the user to reach intimacy with a device. Just like in human relationships, certain conditions like reliability, attraction and receptiveness, can help or hinder the development of this intimacy. In addition, it is important to note that it simply takes time for intimacy to develop. The question then is which conditions can help for developing intimacy with knobs on mDMI.
Through making a diagram of all the information streams in the interaction with a knob (Figure 7) it becomes clear that interacting with a knob is mostly a sensory experience. Reinforcing the assumptions from Section 1.4, improvements can be made by designing for a strong coherence in the visual, tactile and auditory channels.
Figure 7. Knob interaction diagram.
mDMI design has focused on technological advancement rather than human interaction. Through literature we can formulate a number of guidelines to design for intimacy:
  • Humans do not think in abstract models, so a user should not be expected to fully understand the inner workings of the instrument, like complex interplay of elements in FM synthesis. Experience is stored in the form of concepts, which means that the same action should yield the same reaction, making menus and multi-functional knobs poor choices [10].
  • The brain is originally made for movement; design the movement of interaction! Perception and motor control are not separated; movement seen on the mDMI will influence the actions of the musician. This means that not only the shape of a knob should be designed, but also the (possible) motion the musician makes while turning the knob [10].
  • The brain is multimodal, which means that even seemingly irrelevant aspects of the interaction, like the axial skew of a knob, play a role in the overall experience. Interaction is always emotional: will you be making affective music on a technical looking instrument? While it is difficult to make a definite statement on this, it does seem that the delicate violin is more suitable for emotional music than a rugged drum computer [10].
  • Tactile feedback containing features of the controlled sound enables a stronger feedback loop, improving muscular control and expression [11].
  • While cognitive affordances are important when learning an instrument, and physical affordances are important to ensure a good interaction, the performer’s focus should be within the affective domain while performing. This means that to be expressive, an mDMI should not require high cognitive effort to be operated [12].
  • Many of the traditionally used guidelines for affordances might not be applicable to musical instruments. For instance, the function of holding a violin bow in different positions might not be intuitive or even cognitively understandable, but through sensorimotoric and affective learning the musician will understand what happens when using different bow positions, even if the musician is not consciously aware of the effect. Ease of use should not be the main goal in designing musical instruments, but ability to express is [13].

1.6.3. Cognitive Load

Having an overview of the conceptual frame for improving interaction, some practical aspects of new knob interaction propositions should also be considered. As the high cognitive load of mDMI interfaces is hindering intimacy, lowering this load will certainly improve interaction. There are three types of cognitive load, which can be reduced in the following ways [14]:
Intrinsic cognitive load describes the demand on working memory capacity. Sound synthesis algorithms have many parameters working on the same sound, each having an influence on the other. This makes the demand on working memory high and makes it difficult to fully understand the interdependencies. The only way to deal with this is to lower the amount of interactive elements and make the interaction more direct, e.g., avoid the use of menus or double functions.
Extraneous cognitive load describes the demand on long-term memory. Where acoustic instruments do not require you to know why it works, complex synthesis algorithms like FM synthesis are very difficult to operate without understanding the inner workings. Simpler algorithms reduce extraneous load, but as Paas [14] describes, it is often more effective to focus on the intrinsic load.
Germane cognitive load describes the way the interaction is presented. Something attractive and stimulating will motivate a user and will lower the perceived cognitive load. While the current interface paradigm, especially in modular synthesizers, favors the aesthetics of complex obscure interfaces, simplified minimal interfaces would lessen the cognitive load.

1.6.4. Gestures

The guidelines for tangible gestural interaction by van den Hoven and Mazalek [15] can be applied to knobs in a number of different ways. First of all, by making use of proprioception: right now all interaction with mDMI happens in a small area with small movements, by expanding this, spatial relationships can be used more consciously in the interface design. Secondly, more information about the gesture can be captured, for example, on the force of movement and hand configuration. Lastly, they suggest that finding different shapes for the knobs might elicit different gestures, which make better use of human modalities or carry more emotional content.

1.6.5. Enabling Expression

For supporting the perceived affordances, we need to shift our focus to the technical aspects. The model of MacLean [16] describes how more emotiveness can be allowed for by increasing the following aspects: the number of sensed modalities, the resolution, and the response time.
As Fels describes: “A high degree of intimacy is achieved when the relationship reaches a level where the mapping between control and sound is transparent to the player, that is, the player embodies the device” [17].
Hunt and Wanderley [18] suggest a three-stage mapping: the input parameters (e.g., knob position) are converted to performance parameters (e.g., Laban performance parameters [19]), which are then converted into musical parameters (e.g., timbre), which are finally translated into synthesis parameters (e.g., filter cutoff). An example of this is shown in Table 2. To illustrate, the turning direction influences the tone of the sound, which can directly be translated into pitch, but the force of turning will also have an influence on the pitch. Likewise, volume is influenced by nearly all parameters because of its relationship with timbre, articulation and loudness. While this method might take more effort to design, it will result in a more intuitive interaction.
Table 2. Example complex mapping.
Position → Direction → Tone → Pitch → Sound
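The staged mapping can be sketched as a chain of small functions, one per conversion. All parameter names, weights and formulas below are our illustrative assumptions, not Hunt and Wanderley's actual mappings.

```python
# Hedged sketch of a three-stage mapping: input -> performance ->
# musical -> synthesis parameters. Every constant here is assumed.

def performance_layer(position, prev_position, dt):
    """Input parameters (knob position samples) -> performance parameters."""
    velocity = (position - prev_position) / dt
    return {"direction": 1 if velocity >= 0 else -1, "force": abs(velocity)}

def musical_layer(perf):
    """Performance -> musical parameters: direction feeds the tone,
    but the force of turning contributes as well."""
    return {"pitch_offset": perf["direction"] * (1.0 + perf["force"])}

def synthesis_layer(musical, base_hz=440.0):
    """Musical -> synthesis parameters (here: a filter cutoff that
    follows the pitch offset in semitones)."""
    return {"cutoff_hz": base_hz * 2 ** (musical["pitch_offset"] / 12)}

params = synthesis_layer(musical_layer(performance_layer(0.6, 0.5, 0.1)))
```

The point of the chain is that each layer stays simple and musically interpretable, even though the end-to-end mapping is complex.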
Jordà [7] and Patrik [20] give some additional guidelines for matching natural behavior: variations are good, but randomness should be avoided; complexity can be added to improve consistency; and curved mappings are preferable to linear mappings.
Adding new sensors could improve the richness of the mapping even further. Interesting sensors measure touch, force of touch, shape of gesture [21], skew of rotation, and other parameters.

1.7. Three Directives for a New Generation of Knobs

As we have seen in the knob study in Section 1.4, making our desired new generation of knobs for mDMI is a complex endeavor, as all different parameters are connected and cannot be studied separately without losing the musical context. Similar to the work of peers, like that of Hunt and Wanderley [18] and Armstrong [22], we deemed it necessary to make a framework managing this complexity. However, as the existing frameworks were either too broad for the subject, or did not have enough concrete handles to guide practical design, three directives were formulated to serve as the practical handles resulting from the literature and knob study.
To clarify, we do not claim that all knobs will have to adhere to these directives, but only our new generation of knobs. They share similarities and can coexist with “traditional” knobs, but serve a different role on mDMI interfaces. These three directives that will be introduced under the next headings, are meant to be constantly kept in mind during the design of these new knobs.

1.7.1. Twisting the Knob Generates a Musical Outcome, Rather Than Changing a Technical Parameter

To make a knob intimate, it should be designed for what humans are capable of, rather than for technical functionality. Humans think in musical goals, rather than functional steps; for example, this means a knob for filter-cutoff will be much harder to understand than a knob to control openness.
It is a struggle for humans to memorize a synthesis engine with all of its constantly changing parameters; a human has a mental model of gestures causing outcomes, so never assume the musician has explicit knowledge on how the instrument works.
Knobs should not be the externalization of a synthesis algorithm at the interface, but they should enable making music. For this to happen, a mapping layer needs to be integrated. One knob should change multiple synthesis parameters, and many knobs should have an influence on the same synthesis parameter.

1.7.2. The Knob is Sensitive to Your Own Way of Using It

Knobs do not just sense position, but also direction, speed and acceleration. Through translating this into meaningful gestural parameters, the musician’s intentions can be sensed at a deeper level. Adding new sensors could enrich this even further.
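Direction, speed and acceleration can all be derived from nothing more than successive position samples. The finite-difference sketch below is our illustration; the sample values and names are assumptions.

```python
# Sketch: derive richer gestural parameters from three equally spaced
# position samples using finite differences. Names are illustrative.

def gesture_params(p_prev2, p_prev1, p_now, dt):
    """Estimate direction, speed and acceleration of a knob turn
    from three position samples taken dt seconds apart."""
    v_prev = (p_prev1 - p_prev2) / dt
    v_now = (p_now - p_prev1) / dt
    return {
        "direction": 1 if v_now >= 0 else -1,
        "speed": abs(v_now),
        "acceleration": (v_now - v_prev) / dt,
    }
```

A real implementation would add smoothing to suppress sensor noise, but the principle stays the same: one position sensor already yields several gestural dimensions.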
To perfect the mapping nuances, developments should be directed towards making use of natural rules: action = reaction, physical energy in = musical energy out, etc.
To be able to trust the instrument, it needs to be consistent. While being sensitive to micro changes, it needs to react in the same way every time. This more physical way of interacting might take more time to get used to, but will eventually give better musical results.

1.7.3. The Looks and Feel of the Knob Are Connected to its Musical Output

The designer should focus on creating consistency all the way from looks, through tactile quality, via haptic feedback, on to the sonic output. Everything is connected when making music, and only when all inconsistencies are eliminated will the knob make sense and become fully intuitive.
Where the shape of acoustic instruments is greatly determined by their physical sound generation mechanism, that is not the case for mDMI. Thus, we should make use of the freedom that new technologies offer, especially in regard to the new possibilities in computation and sensors, and make shapes related to the sound, rather than the sound generation mechanism.
The more modalities the system uses the better. Vibrotactile feedback is a strong contender for strengthening the loop, because it shortens the feedback loop to the actual location of interaction.
Labels might help when learning an instrument, but once that stage has been surpassed, the musician should be able to intuitively find the knob he is looking for. The function of the knobs should be indicated through their shape, however abstract, rather than with cognitive hints.

2. Experimental Section

With the three directives in hand, six prototypes were made. The new knobs were developed as part of a research-through-design process. In this approach, the knobs were designed as design-research prototypes to gain insight into their potential to translate expressive gestures into musical output. As design-research prototypes, each knob functions as “an artefact as vehicle for embodying what ‘ought to be’” [23]. In the context of studying new possibilities for the knob in mDMI, we see design research as the preferred approach, because it allows new propositions to be evaluated by real musicians in a realistic musical setting, because we learn about the topic by doing, and because the design-research process leads to discussions, new insights and ideas [24].
The prototypes, which will be discussed in detail in the next sections, were designed in the spirit of the third directive: the looks and feel of the knob are connected to its musical output. Starting with the concept of character, an iterative approach was used in which different sound, interaction and visual characters were drafted and correlated to form a group of potential “new knobs”. The six most diverse knob characters were chosen to develop into the six prototypes. Care was taken that each of the prototypes would shine a different light on the directives.
As was experienced in the knob study (see Section 1.4), the new knob can only be evaluated with actual users within a musical context. We chose to use two parallel methods. A discourse analysis (DA), as described by Stowell et al. [25], is a rigorous qualitative method, suitable for gaining deep insights on expressive qualities. However, as DA is an extremely time-consuming method, it was decided to supplement it with the semi-automated AttrakDiff, by Hassenzahl [26], to be able to get a larger sample size and make valid statements on hedonic qualities. While a neutral knob could have been used to compare the different prototypes, it is not the differences between the prototypes we are looking for. Rather, we want to compare the experience of using the prototypes with that of using knobs on mDMI. For this reason, it was deemed better to have the participants recall their memories of using knobs, instead of presenting a knob, novel to them, meant to represent a general mDMI knob. The methods will be described further in Section 2.3.1 and Section 2.3.2, respectively. The main goals of the user study are to make a statement on the validity of the three directives working towards an expressive new breed of knobs, and to identify the areas with the biggest potential for further research.
Because of the limited time available for the user study, it would be unfair to make a direct comparison to a usual mDMI knob.

2.1. Materials

2.1.1. The Six Prototypes

Found objects were used for the physical shape of the knob because they transfer strong emotional references, not present in traditional knobs, allowing them to inspire new types of use. To create visual unity and allow for a fairer comparison, they were all painted white.
As shown in Section 1.6.2, larger size is beneficial for reaching embodiment. As participants would only have a short time to get familiar with the prototypes, we chose to make the prototype knobs larger than generic knobs (i.e., 2–20 cm in height) to aid fast familiarity. While this leads to a potential disparity in the comparison, the disadvantage seemed to be outweighed by the advantage of using bigger knobs in the user study.
The sounds produced by each of them were made to blend into each other to generate an abstract soundscape. The following sections describe the form and behavior for each of the six prototypes as shown in Figure 8.
Figure 8. All six prototypes, from left to right: 1, 2, 3, 4, 5, and 6.

2.1.2. Knob 1: Passive Haptics

The first knob, the pointy one, is an abstract, diagonally upwards pointing object with two rubber rings. When turning it, detents are felt and a pointy laser sound is made every time it goes over one of the bumps. The speed and turning direction influence the pitch and duration of the sound. The main focus is to test whether matching passive haptic feedback to sound works.

2.1.3. Knob 2: Shape > Behavior > Sound

The second knob, the rocksteady, is a big rock that rhythmically moves back and forth. With each motion, it makes a percussive sound, which changes depending on the position of the rock. When grabbed, the motion and sound can be constrained, while moving it back and forth triggers percussive sounds. This knob contains a heavy vibration speaker (a type of speaker which vibrates an external surface rather than a diaphragm), so the percussiveness is felt. The main focus is to see the effect of a very literal translation from percussive shape to percussive behavior, up to percussive sound.

2.1.4. Knob 3: Knob in Control

The third knob, the subversive, is a big, blobby object that turns quickly to a position it wants, whenever it wants. It makes a sharply tuned note, starting on movement and coming to its peak when it finds its position. When grabbed, it takes a lot of force to make it do what you want, but it reacts by changing pitch depending on where you put it. The main focus is to see what happens when the knob is in control, rather than the musician.

2.1.5. Knob 4: Emotional Response

The fourth knob, the scary, is a slowly moving head of a baby doll. It makes an eerie, human voice-like sound when moving, of which the timbre and pitch change depending on the location. It is very easy to move and will pause its motion for a short moment, but then unexpectedly starts turning again. The main focus is to see the effect of a shape that evokes a very strong emotional reaction (scariness, creepiness) on the way the knob is used to make sound.

2.1.6. Knob 5: Energy In = Energy Out

The fifth knob, the conservative, is a traditional turned wooden object that constantly makes an organic sound, reminiscent of a tuning orchestra. It increases in volume and gets a brighter sound when you twist the knob. The knob is spring loaded, so you need increasingly more energy as it gets twisted further. The sound will contain increasingly more energy as it gets twisted towards its limit. This knob has a vibration speaker to really make the energy felt in the hands. The main focus of this knob is to test the effects of a strong energy in = energy out relationship.

2.1.7. Knob 6: The Holy

The last knob is a heavy geometric shape that holds the momentum of turning it for a few seconds. While turning, the sound is like a choir with a change in timbre depending on the turning speed. The sound stops as soon as the knob stops turning. The main focus is creating a strong coupling of movement and sound.

2.2. Prototype Design

The six prototypes were constructed with mechanisms taken from (force-feedback) racing wheels (2, 3 and 5), an off-the-shelf detented rotary encoder (1), a motorized potentiometer (4) and a hard drive motor converted to a rotary encoder (6).
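The encoders in prototypes 1 and 6 report rotation as two phase-shifted quadrature channels that firmware must decode into position and direction. The paper does not include its decoding code; the following is a minimal, hypothetical sketch (class name `QuadratureDecoder` is ours) of the standard state-transition approach.

```python
# Minimal quadrature decoder for a two-channel rotary encoder.
# The channels step through the 2-bit Gray code 00 -> 01 -> 11 -> 10
# when turning clockwise; each valid (old_state, new_state) transition
# contributes +1 (CW) or -1 (CCW). Invalid/no-change pairs contribute 0.
TRANSITION = {
    (0b00, 0b01): 1, (0b01, 0b11): 1, (0b11, 0b10): 1, (0b10, 0b00): 1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

class QuadratureDecoder:
    def __init__(self):
        self.state = 0b00     # last sampled (A, B) levels as a 2-bit value
        self.position = 0     # accumulated count, 4 counts per detent cycle

    def update(self, a, b):
        """Feed the current logic levels of channels A and B; returns position."""
        new_state = (a << 1) | b
        self.position += TRANSITION.get((self.state, new_state), 0)
        self.state = new_state
        return self.position
```

One full clockwise Gray-code cycle yields four counts; firmware would call `update` from pin-change interrupts rather than a polling loop.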
An Arduino Pro Mini was used to transfer the data from the potentiometers and the rotary encoders to the PC, while running a proportional-integral-derivative (PID) algorithm to control the motors. The control board is shown in Figure 9.
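The paper does not list the PID implementation; as an illustration, a generic discrete-time PID position loop of the kind that could run on the Arduino is sketched below. The gains and the toy motor model are hypothetical, chosen only so the simulation converges.

```python
class PID:
    """Generic discrete-time PID controller (illustrative, not the authors' firmware)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # On the real board this output would become a signed PWM duty
        # cycle for the L298N H-bridge.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy simulation: drive a knob, modeled as a pure integrator, to 90 degrees.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
position = 0.0
for _ in range(3000):               # 30 simulated seconds at 100 Hz
    command = pid.step(90.0, position)
    position += command * 0.01      # simplistic motor: velocity ~ command
```

With these gains the closed loop is overdamped and the knob settles at the setpoint; in firmware the same loop would run at a fixed interrupt rate.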
Communicating with an OSC-over-serial protocol, the PC received the sensor data in Pure Data (pd), which does all the interpreting and mapping.
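OSC-over-serial typically means encoding each OSC message in its standard binary form and framing it with SLIP for the serial link. The exact protocol variant used in the prototypes is not specified in the paper; the sketch below shows that common combination (a single-float message plus the double-ENDed SLIP framing often used with OSC).

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, per the OSC 1.0 spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: address, ',f' typetag, big-endian float32."""
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

def slip_frame(packet: bytes) -> bytes:
    """Wrap a packet in SLIP framing (RFC 1055), END byte on both sides."""
    END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD
    out = bytearray([END])
    for byte in packet:
        if byte == END:
            out += bytes([ESC, ESC_END])   # escape a literal END byte
        elif byte == ESC:
            out += bytes([ESC, ESC_ESC])   # escape a literal ESC byte
        else:
            out.append(byte)
    out.append(END)
    return bytes(out)

# Hypothetical sensor update: knob 1 at half its range.
frame = slip_frame(osc_message("/knob/1", 0.5))
```

The address pattern `/knob/1` is invented for illustration; on the receiving side, pd would unpack the same byte layout into an address and a float.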
To generate the sound, pd sends MIDI to Reaper, running a number of proprietary VST instruments.
From Reaper, six channels of audio are sent to a 7.1 audio interface, amplified with an assortment of 1–3 W amplifiers and played through vibration speakers (2 and 5) and small 3 W speakers. In this setup, the sound from each of the prototypes originates from the prototype itself, helping the participants differentiate the characters. The vibration speakers were complemented to facilitate low-frequency sounds.
Figure 9. Prototype control board with Sweex 7.1 sound card (a); assorted 1–3 W amplifiers (b); L298N H-bridge motor controller (c); Arduino Pro Mini (d); and terminals (e).

2.3. Methods

2.3.1. Discourse Analysis

Discourse analysis (DA) is an approach for gaining deep insights into the experience of the participants. By having the participants think out loud during their interaction with the prototypes and by conducting a post-test interview, a good view is gained of how the participants interacted with the prototypes. All speech by the participants was transcribed and coded, which formed the basis of the DA results in Section 3.2. More details on the specific method can be found in the paper by Stowell et al. [25]. Considering the large amount of time needed to perform a DA, this method was applied to only two of the participants.

2.3.2. AttrakDiff

The AttrakDiff (AD) questionnaire is a method for investigating the hedonic and pragmatic quality of interactive products. Before the test, a questionnaire is administered capturing the participants’ general experience with interacting with DMI. After the test, a second questionnaire is administered to measure the experience with the six prototypes. With this setup, it can be tested whether the new knobs are an improvement over DMI. More details and the statistical explanation can be found in the paper by Hassenzahl et al. [26].
It is hypothesized that the prototypes will be more stimulating and have a stronger identity, because that is the aim of the prototypes. In the context of AttrakDiff, stimulation refers to the degree to which the product motivates and interests the user, and identity refers to the degree to which users can identify themselves with the product. The prototypes might be judged to have lower pragmatic quality, the degree to which users feel they can reach their goals, as participants are less familiar with the functioning of the prototypes than with mDMI.
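AttrakDiff items are 7-point semantic differentials, commonly coded −3 to +3, grouped into scales such as pragmatic quality (PQ) and hedonic quality–stimulation (HQ-S); a scale score is the mean of its items. As an illustration with invented data (the real analysis follows Hassenzahl et al. [26] and used 16 participants and all four scales), the scoring plus the half-widths of the confidence rectangle can be sketched as follows.

```python
from math import sqrt
from statistics import mean, stdev

def scale_scores(responses):
    """For each scale, average the -3..+3 items per participant, then
    return the group mean and a ~95% confidence half-width (normal
    approximation) over participants."""
    results = {}
    for scale, per_participant_items in responses.items():
        per_participant = [mean(items) for items in per_participant_items]
        m = mean(per_participant)
        ci = 1.96 * stdev(per_participant) / sqrt(len(per_participant))
        results[scale] = (m, ci)
    return results

# Hypothetical ratings: four participants, three items per scale.
data = {
    "PQ":   [[-1, 0, -1], [0, 1, 0], [-2, -1, -1], [0, 0, 1]],
    "HQ-S": [[2, 3, 2], [1, 2, 2], [3, 2, 3], [2, 1, 2]],
}
scores = scale_scores(data)
```

In the AttrakDiff portfolio plot, the PQ mean ± half-width forms the horizontal extent and the HQ mean ± half-width the vertical extent of the confidence rectangle; the smaller the rectangle, the more the participants agree.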

2.3.3. Setup

Sixteen people participated in the study. They were chosen to have medium to long experience with DMI. A broad range of participants was chosen, covering different life phases and different music practices, e.g., DJs, producers, and live musicians. The majority of the participants were connected to the department of Industrial Design at the Eindhoven University of Technology.
As shown in Figure 10, the six prototypes were placed on a table in a quiet space. The control board and computer were placed underneath the table. For practical reasons, the AttrakDiff questionnaire was filled out on paper by half of the participants and on a laptop by the other half. Everything was recorded with a video camera and an audio recorder.
Figure 10. User study setup with the six prototypes and a participant filling out a questionnaire.
After the participants entered the space, a strict protocol was followed to ensure reproducibility of the test. First, the overall test was explained and verbal consent was obtained. This was followed by the participant filling in the first AD questionnaire. When the questionnaire was finished, the instruments were turned on and the participant was allowed a few minutes to explore the functioning of the knobs. To reach a deeper level of musical interaction, the participants were then asked, in turn, to: “bring a climax to the music”, “change the mood of the music as much as possible” and “give a solo”. After these tasks, the participants were asked to fill out the second AD questionnaire. An interview was subsequently held with the two DA participants only. From start to end, the whole test lasted 10 min for the majority of the participants and 30 min for the DA participants.

3. Results

3.1. AttrakDiff

The number of participants was sufficient to create a small confidence rectangle; the participants were largely in agreement in their evaluation. mDMI in general were rated neutral, with room for improvement in both hedonic and pragmatic quality. The prototypes were rated as “fairly self-oriented”; that is, not seen as pragmatic but as stimulating. Figure 11 shows a visual representation of the results.
A statistically significant difference is seen in general hedonic quality; stimulation in particular is better in the prototypes. While not statistically significant, the questionnaire shows that the prototypes are more attractive than mDMI in general, while scoring lower on the identity aspect and pragmatic quality. More details on the statistical analysis can be found in the paper by Hassenzahl [26].
Figure 11. Overview of AttrakDiff results, with A being knobs on mDMI and B being the prototype knobs.

3.2. Discourse Analysis

3.2.1. Participant 1

The participant intuitively approached the prototypes with a focus on shape, material and gesture. For example, knob 6 was described as having a good grip and a pleasurable momentum, while knob 4 was felt to be emotionally disturbing and unpleasant to touch. The participant was distracted by all the prototypes working at the same time and expressed discontent that he could not understand the coherence between them. The self-moving prototypes 2 and 3 were forced still while doing the tasks, to create space for the other sounds.
While the participant noticed that multiple modalities were translated into sound, he felt that those were not enough. For instance, on knob 5 there are holes on the top and several differently shaped ridges. He explained how he would expect that moving those would have an impact on the sound.
The biggest value was seen in having physical properties, like momentum and material, related to the sound. The participant expressed a desire for having multiple mechanisms work in the same instrument, for instance, with a spinning element controlling the sustain of a note and a blob shape controlling timbre.

3.2.2. Participant 2

The participant mostly remarked on the topic of control. She had strong reactions to the “screaming people” of knobs 3 and 4, and to the physical characteristics of knobs 5 and 6. While knob 1 was described as having a strong, chattering, mouse-like personality, it did not recur in her later reflections. She described the whole experience as being in a room with unruly people, in which she had to make some people talk more and some people shut up, to create balance.
She commented on the fact that the knobs were prototypes and that she was scared to damage them, but she was the only participant to actively rearrange the knobs, putting the right ones within her range to be able to perform the desired gestures. In addition, she made use of an unintended physical characteristic of the knob, in which the sound was influenced by slightly lifting the knob.
Where she initially seemed mostly bothered by the way knobs 2 and 3 moved autonomously, she later interpreted this behavior in a way that would be beneficial for her own music practice. She described wanting to be able to define a certain range of motion for the knob, within which it would repeat a gesture she had given to it. In this way, the behavior of the knob would be a physical metaphor for the musical concept of looping.

4. Discussion

Using the three directives to design six prototypes of a new generation of knobs for mDMI proved to be well received by the participants. Results of this study show that, compared to the participants’ experience with knobs on mDMI, the prototypes proved to be more stimulating, but not directly more expressive. There are two possible explanations: firstly, the prototypes might not have been developed to the same technical quality as mDMI, and secondly, the participants might not have had enough time to develop the intimacy needed to express themselves with the instruments. It is also important to note that, according to Cannon and Favilla [27], more stimulation will lead to more investment in play, which in turn will lead to more expression.
Some new ideas on enabling expressivity have arisen from experimenting. For example, an unexpected result is that the knobs with an active behavior led to lower stimulation. As completely passive objects generally do not stimulate, this suggests that there is an ideal level of activity between fully passive and fully actuated. Searching for such a sweet spot could be a fruitful approach for learning more about the control aspects of musical expressivity.
This first series of prototypes did not lead to a knob that can be directly applied in a product. However, it did demonstrate that there is value in the directives. Given a larger number of iterations, a full musical instrument could be made using these principles, with the directives revisited constantly along the way.


The three directives represent a concise summary of the research. While there are many different methods for making the proposed “new knob” work, the directives give conceptual, rather than technical, requirements. This allows for a non-dogmatic approach and for a broad interpretation without losing the essence.
While one new knob might make an interesting musical instrument by itself, it can have more value when combined with other elements or other “new knobs”. For instance, a knob with inertia might be used to control the articulation of the sound, while a knob with force-feedback might control the sound color, alongside some traditional control elements to control volume or program type.

5. Conclusions

Despite the seemingly endless sonic possibilities, mainstream digital musical instruments (mDMI) generally lack the right affordances for expressive performance. Through recognizing the interaction with mDMI as tangible and gestural, rather than purely functional, a line is set for making a new generation of one of the most ubiquitous interface elements: the knob.
Through analyzing the history and characteristics of the knob, it can be seen that while it is used in a musical context, it has never been considered for its true potential in musical interaction. By taking a step back and looking at the knob from the perspective of musicology and interaction design, some fields for improvement can be identified. The main point is that the design of a more expressive knob requires consistency in the whole chain, from knob parameters, via mapping, to perceived affordances. Computer science in turn provides a concrete handle for tackling these issues, such as the use of complex multi-layer mappings.
Three directives are formulated to guide future developments: twisting the knob generates a musical outcome, rather than changing a technical parameter; the knob is sensitive to your own way of twisting it; and the looks and feel of the knob are connected to its musical output.
By making six prototypes of extreme variations within these directives, the validity of the directives could be tested. The results show that while the prototypes were not considered more expressive by themselves, they were significantly more stimulating than mDMI in general.
While only the next generation of more nuanced prototypes can show whether this new breed of expressive knobs is viable, the results definitely show that the directives are a valid way of continuing this work. Through the user test it was shown that force-feedback had an adverse effect on expression, while using unusual mechanisms, like springs and high-inertia knobs, gave strong positive responses.
By taking a tangible-gestural approach, it is possible to free mDMI from the dogmas of the current interface paradigm, in which interface elements are seen as purely functional. When it is recognized that every interaction has an emotional load, and when interfaces are designed to be sensitive to those gestural subtleties, we believe enormous advances can be made in the expressivity of mDMI.
There is an ever-growing range of possibilities in digital and sensor technology, just waiting to be used. We hope that this paper will lead to thoughts about new, more musical, performance-oriented types of knobs.
The research-through-design approach has proved practical in making sense of a complex situation with ambitious goals. While the results from this paper cannot yet be applied to mDMI one-to-one, they give strong support for continuing further research towards a new generation of knobs.
It has been the aim of this paper to inspire thoughts on expanding the usage of knobs towards a more musical interaction. It is shown that the concept knobs are more stimulating than the traditional knob. Through this stimulation, a more intimate bond can be created between performer and instrument, which in turn should lead to more expressive performance. This new knob can be interpreted in many different ways, and we encourage anyone to apply the directives to any form of DMI.


Acknowledgments

Our thanks go to Marie Caye, / labs, the participants of the user study, and STEIM for their support.

Author Contributions

Arvid Jense conceived and executed the project. Berry Eggen provided coaching and in-depth expertise.

Conflicts of Interest

The authors declare no conflict of interest.


References

  1. Rutenberg, D. The early history of the potentiometer system of electrical measurement. Ann. Sci. 1939, 4, 212–243. [Google Scholar]
  2. Manning, P. Electronic and Computer Music; Oxford University Press: New York, NY, USA, 2013. [Google Scholar]
  3. Bernstein, D.W. The San Francisco Tape Music Center: 1960s Counterculture and the Avant-Garde; University of California Press: Oakland, CA, USA, 2008; p. 5. [Google Scholar]
  4. Yamaha DX7 Famous Examples. Available online (accessed on 1 August 2015).
  5. Lederman, S.J.; Klatzky, R.L. Hand movements: A window into haptic object recognition. Cognit. Psychol. 1987, 3, 342–368. [Google Scholar] [CrossRef]
  6. Bovermann, T.; Hinrichsen, A.; Lopes, D.H.M.; de Campo, A.; Egermann, H.; Foerstel, A.; Hardjowirogo, S.-I.; Pysiewicz, A.; Weinzierl, S. 3DMIN—Challenges and Interventions in Design, Development and Dissemination of New Musical Instruments. Int. Comput. Music Assoc. 2014, 2014, 1637–1641. [Google Scholar]
  7. Todd, N.P.M. The dynamics of dynamics: A model of musical expression. J. Acoust. Soc. Am. 1992, 91, 3540–3550. [Google Scholar] [CrossRef]
  8. Jordà, S. Digital Instruments and Players: Part II: Diversity, Freedom and Control. In Proceedings of the International Computer Music Conference, Miami, FL, USA, 1–6 November 2004.
  9. Fels, S. Intimacy and Embodiment: Implications for Art and Technology. Available online: (accessed on 26 October 2015).
  10. Streeck, J.; Goodwin, C.; LeBaron, C. Embodied Interaction in the Material World: An Introduction; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  11. Bongers, B. Tactile display in electronic musical instruments. In Proceedings of the IEEE Colloquium on Developments in Tactile Displays, London, UK, 21 January 1997; pp. 41–70.
  12. Bloom, B.S.; Engelhart, M.D.; Furst, E.J.; Hill, W.H.; Krathwohl, D.R. Taxonomy of Educational Objectives: The Classification of Educational Goals; Longman Group United Kingdom: London, UK, 1956. [Google Scholar]
  13. Hartson, R.H. Cognitive, physical, sensory, and functional affordances in interaction design. Behav. Inf. Technol. 2003, 22, 315–338. [Google Scholar] [CrossRef]
  14. Paas, F.; Renkl, A.; Sweller, J. Cognitive Load Theory and Instructional Design: Recent Developments. Educ. Psychol. 2003, 28, 1–4. [Google Scholar] [CrossRef]
  15. Hoven, E.V.D.; Mazalek, A. Grasping gestures: Gesturing with physical artifacts. Artif. Intell. Eng. Des. Anal. Manuf. 2011, 25, 255–271. [Google Scholar] [CrossRef]
  16. MacLean, K.E. Haptic interaction design for everyday interfaces. Rev. Hum. Factors Ergon. 2008, 1, 149–194. [Google Scholar] [CrossRef]
  17. Fels, S. Designing for Intimacy: Creating New Interfaces for Musical Expression. Proc. IEEE 2004, 92, 672–685. [Google Scholar] [CrossRef]
  18. Hunt, A.; Wanderley, M.M. Mapping performer parameters to synthesis engines. Organ. Sound 2012, 2, 97–108. [Google Scholar] [CrossRef]
  19. Von Laban, R. Principles of Dance and Movement Notation; Macdonald & Evans: London, UK, 1956. [Google Scholar]
  20. Juslin, P.N.; Friberg, A.; Bresin, R. Toward a computational model of expression in music performance: The GERM model. Music. Sci. 2002, 5, 63–122. [Google Scholar]
  21. Sato, M.; Poupyrev, I.; Harrison, C. Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects. In Proceedings of the CHI, Austin, TX, USA, 5–10 May 2012.
  22. Armstrong, N. An Enactive Approach to Digital Musical Instrument Design. Ph.D. Thesis, Princeton University, Princeton, NJ, USA, 2006. [Google Scholar]
  23. Zimmerman, J.; Forlizzi, J.; Evenson, S. Research through design as a method for interaction design research in HCI. In Proceedings of CHI, San Jose, CA, USA, 30 April–3 May 2007; pp. 493–502.
  24. Van den Hoven, E.; Frens, J.; Aliakseyeu, D.; Martens, J.B.; Overbeeke, K.; Peters, P. Design research & tangible interaction. In Proceedings of the 1st International Conference on Tangible and Embedded Interaction, Baton Rouge, LA, USA, 15–17 February 2007; pp. 109–115.
  25. Stowell, D.; Plumbley, M.D.; Bryan-Kinns, N. Discourse analysis evaluation method for expressive musical interfaces. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), Genova, Italy, 5–7 June 2008.
  26. Hassenzahl, M.; Burmester, M.; Koller, F. AttrakDiff: Ein Fragebogen zur Messung wahrgenommener hedonischer und pragmatischer Qualität. Mensch Comput. 2003, 57, 187–196. [Google Scholar]
  27. Cannon, J.; Favilla, S. The Investment of Play: Expression and Affordances in Digital Musical Instrument Design. In Proceedings of the ICMC, Ljubljana, Slovenia, 9–14 September 2012.