Body-Borne Computers as Extensions of Self

Introduction
The ubiquity of information technology means that we are no longer separate from technology. Clark argues that human beings are already cyborgs [1], as they are innately driven to use the environment and external tools as extensions of themselves [2]. A classic example mentioned by Clark is timepieces. When asked whether they know the time, a person will answer "yes" even before checking their watch. As such, we offload our cognitive processes onto artifacts around us [3], and for centuries we have extended our capabilities through similar means of incorporating technologies.
Exoskeletons [4,5] and prostheses [6–8] are well-known examples of the integration of humans and machines for improving personal abilities. Sensory substitution [9,10] is a field of research that aims to supplement the loss of one sensory modality with another, or to enable the perception of signals that we are normally incapable of sensing. While a variety of related subjects have been presented in the field of human–computer interaction (HCI), these kinds of work are distinct from traditional user interface (UI) [11] research. In contrast to UIs that temporarily provide abilities to a user, augmented human (AH) technologies are designed to be always available and to operate for extended periods of time. They also often establish direct interfacing with the body and are thereby not limited to control by our fingertips.
This raises the questions of how a technology that is tightly coupled with the human cognitive or physiological system will be perceived, and of where the boundary lies between the technology and the human. The field of embodied cognition [12,13] hints at the transformative potential of the human body image for incorporating extended, or alternative, capabilities. On a similar note, Lanier, in his pioneering virtual reality (VR) work, proposed the idea of homuncular flexibility (HF) [14,15]. He argues that human brain plasticity allows the remapping of the body's motor control to a different, non-anthropomorphic body (also see [16]). A recent VR user study by Lanier and his collaborators [17] reports that, within a few minutes, participants were able to learn how to control avatars with non-anthropomorphic (having a third arm) or juxtaposed (e.g., the legs controlling the hands of an avatar) configurations.
In this paper, we offer a perspective on future AH designs that facilitate a stronger interplay between the body and mind by directly changing the body's morphology or visual/sensing capabilities. We discuss how the use of a tool not only helps achieve a task but also changes the way we perceive ourselves and the body's functionalities. It is well known that the possession of an opposable thumb, as well as the skill of utilizing tools, set humans significantly apart from other species on Earth. The acquisition of sophisticated manual skills accompanied the growth in brain size of Homo sapiens, signifying the effect of corporeal capabilities on how the brain develops [18,19]. This effect is not limited to genetic change; modern neuroscience research suggests that our internal body model updates given temporary extensions to our sensorimotor capabilities [20–23].
In light of this discussion, we review three areas of augmentation of the human body: physical morphology, skin display, and somatosensory extension, along with a framework that puts works from different research domains into perspective. We add some of our ongoing research progress to the discussion and show how the HCI approach can extend the research with more computing, interactivity, and application perspectives. Finally, we identify future challenges and the opportunities that exist within the HCI domain.

The Human Body in HCI
Although the human body plays an important role in HCI, explicit discussions of the role of the body did not happen until the late 1990s [24]. Gaver's take on affordances [25] is also largely focused on media spaces, whereas the original concept by Gibson [26] describes the relation between a living organism and its environment. Such a tendency to look away from the body traces back to Heidegger's Being and Time [27], which devotes only six lines to the body.
On the other hand, insights into the role of the body in interaction can be found in Merleau-Ponty's work [28] on the embodied basis of human perception. He rejects the idea of perception as the passive reception of stimuli and holds that there is no perception without action; active perception is one of the core ideas in embodied cognition research [29]. In other words, the condition of the body (such as past experience or training) determines the way a person perceives. The effect of the body on perception is denoted the "phenomenal field" by Merleau-Ponty [28].
Neuroscience studies reveal evidence that tool-use induces changes in how the brain processes the body image [21,22]. When a macaque monkey uses a rake for collecting food, its visual receptive field is enlarged to include the rake [21]. Continuous tactile stimulation can lead a person to have the sensation of having a third arm, feeling touches on both a rubber hand and a biological hand [30]. Aymerich-Franch [23] proposed an overarching hypothesis relating to the neural phenomenon of self-attribution: "our brain attributes a perceived entity as our limb, if the physical properties of the entity are sufficient to afford certain actions the brain has associated with that limb."

Symbiotic Human-Machine Interfaces
The relationship between brain plasticity and tool-use hints at a new perspective on how computational systems can be designed to form a stronger bond with users. Tools, wearables, and prosthetic devices do not just offer functional capabilities, but can be designed to become an integral part of the human system. We envision such technologies offering extended functions to existing parts of the body and establishing low-level communication with our sensorimotor or somatosensory systems. A body-integrated (-worn or -implanted) technology therefore has the potential to be perceived as an intimate and natural extension of ourselves. A well-known example is a study by Bach-y-Rita [10], in which vibro-tactile feedback is applied to the backs of visually impaired subjects to give them an alternate way of perceiving images. A noteworthy passage in their report reads: "...subjects spontaneously report the external localization of stimuli, in that sensory information seems to come from in front of the camera, rather than from the vibro-tactors on their back." Their system not only provides an alternative channel for visual perception; the entire action–perception loop is reconfigured and encoded in the subjects' cognitive architecture.
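Bach-y-Rita's tactile–vision system mapped camera images onto a grid of vibro-tactors worn on the back. A minimal sketch of such an image-to-tactor mapping follows; the function name, grid size, and intensity scaling are our own illustrative assumptions, not the original system's parameters.

```python
def frame_to_tactor_grid(frame, rows=20, cols=20):
    """Downsample a grayscale camera frame (a list of rows of pixel
    values, 0-255) into a rows x cols grid of vibration intensities
    (0.0-1.0), one value per vibro-tactor."""
    h, w = len(frame), len(frame[0])
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Average the pixel block that this tactor covers.
            block = [frame[y][x]
                     for y in range(r * h // rows, (r + 1) * h // rows)
                     for x in range(c * w // cols, (c + 1) * w // cols)]
            row.append(sum(block) / len(block) / 255.0)  # brighter -> stronger
        grid.append(row)
    return grid
```

Note that the mapping itself is trivial; the sensory substitution happens in the user, whose brain, after training, relocates the percept from the skin to the space in front of the camera.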
The concept we propose is a continuation of Licklider's vision [31], in which humans and machines synergistically collaborate on accomplishing tasks by offloading repetitive and mechanical procedures to the machines. With the advances in computing, robotics, and bio-electronic technologies, it is possible to realize a much closer, lower-level cybernetic loop between humans and computers that goes well beyond a simple division of roles. In later sections of this paper, we discuss a larger body of related works that aim to realize such forms of human–computer integration.

In Relation to Traditional HCI
The field of HCI has so far explored two major paradigms for how people relate to machines. The dominant approach is that of the machine as a tool, exemplified by the widespread use of graphical user interfaces (GUIs) [11] as well as by the emerging category of tangible user interfaces (TUIs) [32]. The interaction consists of the user employing software or hardware tools to get things done. A second approach, which has recently gained a lot of attention, is that of the machine as an intelligent agent. Interactions in this case typically involve commands or conversations in natural language, and the Holy Grail is for the machine to become an autonomous worker as intelligent as a human.
The third approach (Figure 1) we discuss in this paper consists of an intimate integration of human and machine. On the one hand, it mediates between the two existing paradigms by offering semi-intelligent systems interfaced with the human body so that they become always-available tools for sensing, controlling, and performing actions. On the other hand, the semi-intelligent system and the user become integrated, with the system forming a natural extension of an augmented human. Such a realization will often, but not exclusively, take the form of wearable, implanted, or skin-attached systems.

Implications for Designing User Interfaces
Neuroscience studies provide evidence of how tool-use or repeated behaviors can lead to different paths of neural development. For example, skillful use of the hand changes the cortical activity allotted to the tactile receptors on the fingertips [33–35]. It is reported that string instrument players have greater somatosensory cortical activity in response to touch on the little fingertip than non-players [34,35]. Recently, scientists showed that the cortical potentials from the thumb and index fingertips were directly proportional to the intensity of smartphone use [33].
These findings point to the role of UI design in affecting the development of new neural circuitry and mappings. The change in users is not limited to tactile receptors; studies of neural prostheses [8] showed the development of new muscle synergies through continuous training on myoelectric controls. Cognitive science studies further demonstrate changes in the perception and behavior of users as a result of giving them VR avatars with different traits, such as gender [36], body shape [37], or character [38]. Despite this evidence, current interaction and interface designs rely heavily on the innate skills users possess, or on their natural body image, meaning that they regard the user as a time-invariant parameter. However, interfaces can be designed to co-develop with the user's physical or cognitive abilities.
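As a speculative illustration of an interface that treats the user as time-varying rather than fixed, consider a pointing device whose control gain adapts as the user's demonstrated accuracy improves; the class, thresholds, and adaptation rule below are entirely hypothetical.

```python
class AdaptivePointer:
    """Hypothetical sketch of a UI mapping that co-develops with the
    user: pointing gain rises as the user becomes more accurate."""

    def __init__(self, gain=1.0):
        self.gain = gain

    def record_trial(self, error):
        """error: miss distance on the last pointing trial, 0.0-1.0.
        Accurate trials earn a higher gain, letting a practiced user
        cover more screen with less physical movement."""
        if error < 0.1:
            self.gain = min(self.gain * 1.05, 4.0)
        else:
            self.gain = max(self.gain * 0.95, 0.5)

    def map_motion(self, hand_delta):
        """Map physical hand movement to cursor movement."""
        return hand_delta * self.gain
```

The point is the loop, not the rule: the device's mapping and the user's skill shape each other over time, rather than the mapping being calibrated once and frozen.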


Symbiotic Interfaces that Extend the Self
In this section, we divide possible augmentation technologies into subcategories: (1) physical morphology: sensorimotor augmentations that act as end effectors or sensory probes and offer functional extensions of the body; (2) skin display: augmentations that extend the aesthetic, informational, or sensing capabilities of our skin; and (3) somatosensory extension: means to create computational alterations of perception for pseudo-nervous mappings (Table 1). We exclude discussions of general sensory substitution in this section, focusing on works that modulate or extend the sense of self. The division into the three categories is based on the traditional roles of the body: its mechanical capability, its use for expression, and its sensing capability. The physical shape and mechanical (e.g., muscular) structure define the physical capabilities afforded by a body part or the body as a whole. The surface of the body affords interactions with the environment or in social contexts. It has been used as a display (for self-expression in the form of makeup and fashion), as a sensory organ (texture, temperature, humidity, and so on), and as a place from which to acquire biosignals (heartbeat, body temperature, and so on). Finally, events or feelings (somatosensory) from the body critically determine emotions and the perception of self. In order to illustrate the three spaces for augmentation, we introduce works from various domains, including some of the authors' own.
We also include a section on reconfigured bio-electronic circuits in living organisms. This separate section looks into further possibilities through more invasive modifications of the body. Such techniques are also often used as complementary elements to the other three categories, e.g., for sensing and transmitting biosignals through the body.

Physical Morphology
This section discusses augmentation technologies that provide functional morphology change. The augmentation may offer increased sensorimotor capabilities to a user [39–43] or exploit the existing body structure [44,45] for added functionality. "Morphology" in this section can mean (1) the physical shape of the body or the material/functional properties of the extremities (external morphology); or (2) how the body and the brain communicate for motor control (internal morphology).

The Shape of the Body-External Morphology
Unlike the commonly known type of exoskeleton, the parallel-limb exoskeleton [46,47], series-limb exoskeletons [48–50] offer interesting extensions to their wearers [51]. They are designed to be connected with a user in series, extending or transforming the extremities of the user to provide new properties. PowerSkip [49] is a wearable device providing leg extensions that help a user jump extremely high, and SpringWalker [50] is a contraption that allows faster and more economical maneuvering on longer legs. Gloves used by NASA [52] change stiffness so that their users can effortlessly hold heavy objects, and gecko-inspired climbing gloves [53] give their wearer a temporary ability to climb vertical walls. Supernumerary robotics (SR) [39–41] is a recently established field of research that studies additional robotic fingers or limbs that support physical tasks by reducing load or aiding in object manipulation. SR fingers [40,41] are designed to work in synergy with a user's fingers, offering improved hand grasp capabilities. Shoulder-mounted SR arms [39,54] are designed to support their wearer, via the extra robotic arms, in accomplishing assembly jobs that would normally require two people. Similarly, a pair of robotic legs [55] allows the user to maneuver on extreme terrain and balance better. Georgia Tech demonstrated a three-armed drumming system [56] in which high-level control of the robotic arm is performed by brain signals and by software that fine-tunes the rhythm to fit the music being played.
Seminal artworks by Stelarc, such as The Third Arm [57], are muscle-controlled robotic contraptions worn on the body. Through the muscles around his abdomen, the artist was able to control the third arm alongside his two biological ones and write characters with all three arms at the same time. Horn, in her art pieces Pencil Mask (1972) and Two Hands Scratching Both Walls (1974–1975), showcases structures worn on the body that heighten the wearer's senses.

Control of the Muscles through Computers-Internal Morphology
Existing limbs may also be used and controlled through computational input: a computational device acquires the capability to drive muscles, e.g., independently of the user's intention. Furthermore, the computational control of muscles can allow new synergies between individual muscles that are not possible with our internal neuromuscular structure. Tokyo University showcased the use of functional electrical stimulation (FES) as a means of controlling the human body [44], allowing a person to control another user's body as a surrogate for teleoperation. Inferno [58] is an art performance in which participants wear robotic exoskeletons and are controlled/forced to dance in synchronization. In other words, a user's body is temporarily controlled by another human or a machine operator.

Sensorimotor Retargeting
Sensory channels combined with motor actions allow active perception, like turning the head to see from different perspectives and, thereby, achieving better spatial cognition. Adding new sensing capabilities to a body part may open up opportunities for new sensorimotor mappings between action and sensing. In contrast to the external morphology section, sensorimotor retargeting focuses on the data transmitted or received in accordance with motor actions.
A study by Bach-y-Rita [10], mentioned earlier in this paper, reports that people perceive visual feeds delivered through tactile feedback on their back as coming from in front of the camera, not from their back. Subdermal implantation of a magnet has also been studied to explore sensory alterations of the fingertips [59]. FingerReader [42] is a camera worn on a finger that reads text shown to the camera aloud to its wearer. Effectively, the finger turns into an information probe that a user can move and point in different directions to read spatially-relevant information. Such a connection between motor behavior and sensing has been explained by Haans and IJsselsteijn: "When the mediating technology allows for these sensorimotor contingencies to be registered, or enacted, one will experience the digital content as having a specific physicality in time and space" [60].
The body can also be modified to output data. Warwick [61] implanted a microchip in his wrist that makes contact with the nerve bundles, where the nervous signals are read and transmitted to computers. Ishin-Den-Shin [43] is a wrist-worn device that conducts sound through the bones, enabling one to send audio messages to another person by gently tapping on their ear.

More Programmability in the Body
Research in shape-changing robotics hints at how we can further extend the adaptability of our body shape and, therefore, our physical capabilities. We have developed a set of wrist-worn programmable robotic joints that offer additional action capabilities to users [62] (Figure 2). The main motivation is to create a modular system whose functionality can be reconfigured in a more "programmatic" manner, offering a platform through which a multitude of applications can be realized. The applications include modes in which the joints act as extra actuators (or fingers), as a passive support structure (such as for holding a notepad for one-handed note-taking), or as haptic user interfaces for VR games. Different modes of operation require different controller schemes; however, the input signal used for control remains unchanged. In other words, software modifications alone can give the user interchangeable (robotic) functions controlled by the same set of muscle movements.
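The idea of interchangeable controller schemes driven by one unchanging input signal can be sketched as follows; the mode functions, class names, and signal ranges are hypothetical illustrations, not the actual system's [62] software.

```python
# One muscle-activation signal, several software-selectable behaviors.

def finger_mode(activation):
    """Map muscle activation (0.0-1.0) to a joint flexion angle."""
    return {"joint_angle_deg": 90.0 * activation}

def support_mode(activation):
    """Hold a fixed posture (e.g., gripping a notepad), ignoring input."""
    return {"joint_angle_deg": 45.0}

def haptic_mode(activation):
    """Translate activation into vibration feedback for a VR game."""
    return {"vibration_level": activation}

class WristJoints:
    def __init__(self):
        self.controller = finger_mode  # default behavior

    def set_mode(self, controller):
        # Swapping the controller changes the device's function
        # without changing the input signal or the hardware.
        self.controller = controller

    def update(self, muscle_activation):
        return self.controller(muscle_activation)
```

The design choice is that the hardware and the sensing channel stay fixed; only the mapping from signal to behavior is reprogrammed, which is what makes the joints feel like one reconfigurable body part rather than several devices.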
Modularity in a morphology-extending prosthetic device can further improve the adaptability of such technologies. We are developing a modular assembly kit consisting of basic sensor and motor components that somewhat resemble human joints (Figure 3). The modules, with a universal connector design, can easily be arranged in different orders, much as we arrange programming blocks in software development. To date, we have implemented servomotor modules with two different axes of rotation, as well as sensor modules, so that an engineer can add sensing capabilities of choice to the module chain. Customization of end-effectors is possible through 3D-printed pluggable fingertip modules, allowing the robotic device to have effectors designed for specific tasks, such as plucking guitar strings (Figure 4).
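The block-like composition of modules might be sketched as follows; the class names and the `describe` interface are our own illustrative assumptions rather than the kit's actual software.

```python
# Servo and sensor modules share a universal interface, so a chain
# can be composed and reordered like programming blocks.

class Module:
    def describe(self):
        raise NotImplementedError

class ServoModule(Module):
    def __init__(self, axis):
        self.axis = axis  # e.g., "pitch" or "yaw"

    def describe(self):
        return f"servo({self.axis})"

class SensorModule(Module):
    def __init__(self, kind):
        self.kind = kind  # e.g., "touch", "temperature"

    def describe(self):
        return f"sensor({self.kind})"

class ModuleChain:
    """Because every module uses the same connector, rearranging the
    physical chain corresponds to rearranging the list."""

    def __init__(self, modules):
        self.modules = list(modules)

    def describe(self):
        return " -> ".join(m.describe() for m in self.modules)
```

For example, a two-joint appendage with a touch-sensing fingertip would be `ModuleChain([ServoModule("pitch"), ServoModule("yaw"), SensorModule("touch")])`, and swapping the list order yields a different physical configuration.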

Skin Display
Emerging work at the intersection of fashion and technology has explored the skin and interactive garments as displays of information or of the wearer's emotional state [63–67]. Several interactive wearable garments visualize the wearer's discomfort at violations of personal space. For instance, Caress of the Gaze [63] morphs in response to unwanted visual attention. Amisha Gadani [68], Meejin Yoon [69], and Anouk Wipprecht [70] created defensive dresses that exhibit animal-like behaviors in response to fear upon the encroachment of personal space. These works are all examples of social prostheses that amplify the automatic emotional response a wearer may exhibit.

Other works, such as Ying Gao's Incertitudes garments [71], are integrated with motorized pins that shift according to spectators' voices to explore ambiguity in conversation. Liu and Lengeling [72] presented a speculative design of artificial goosebumps that reflect a user's stress levels. Their idea is inspired by the Iowa Gambling Task [73], a study showing that a subliminal "stress" signal is produced by the body even when the person is not consciously aware of the stressful situation.
The examples above illustrate the opportunities for augmenting skin in three application areas: display, bio-sensing, and reaction to external stimuli. The skin has not only been used for curating identity; it is also innately responsive to the environment (e.g., temperature, humidity) and channels emotional responses through visual or physical changes.
However, most HCI research to date has focused on on-skin interactions in touchscreen input styles, or on techniques for fabricating microcircuits in a small form factor to be attached to the skin [74–78]. We briefly discuss prior research and present three research studies that describe how further incorporating a biochemical approach can create more skin-friendly skin-augmentation products with customized bio-sensing or environmental-sensing capabilities.

Techniques for Skin Augmentation
HCI researchers have used bio-acoustic [74], capacitive [75,76], and magnetic [77] sensing to appropriate the human skin as an input surface. In order to use on-skin surfaces for visual output, researchers have explored mounting small projectors [74] or displays [78] onto the body. The Vivalnk tattoo, a commercially available tattoo sticker, has near-field communication (NFC) capability [79]. For a more in-depth review of wearable designs for skin interfaces, please refer to [65].
The most critical challenge in designing on-skin interfaces is the minimization of the electronics and form factor. In order to address this, several fabrication processes have been proposed. In materials science, epidermal electronics with soft, stretchable, skin-like forms have been explored [80]. However, incorporating sensing capabilities into such form factors poses challenges in manufacturing and cost [81,82]. More widely accessible fabrication processes have been proposed as well. iSkin is a silicone-based skin overlay for touch sensing [76]; however, it requires materials-science-grade materials (PDMS with carbon), and its 700 µm thickness is much greater than that of epidermal electronics (0.8 µm). Skintillates prints conductive silver ink on temporary tattoo paper to create 36 µm thick on-skin electrical traces [66].

Chemically-Produced Interactive Skin Interfaces
SkinSense [65] is a chemically-produced, tattoo-like interface that has a layer of stretchable electrodes (Figure 5). Components, such as sensors or LEDs, can be connected, enabling designs of interactive tattoos. One example is composed of RGB LEDs and IR receivers, so that a facial tattoo can be controlled by a remote controller for interactive play and aesthetic purposes. DuoSkin [64] is a fabrication process that generates ultra-thin functional devices which can be attached directly onto the skin. It leverages gold metal leaf, a material that is cheap, skin-friendly, and robust for everyday wear. By overlaying multiple layers of gold metal leaf, DuoSkin devices can function as a capacitive touchpad or as an antenna for wireless communication (Figure 6). Importantly, DuoSkin incorporates aesthetic customization similar to body decorations, giving form to exposed interfaces that so far have mostly been concealed.
A chemically-produced interactive tattoo can offer minimal, specific-purpose computing powers to the skin (Figure 7). Due to its spatial affinity to the skin, such a tattoo can acquire biosignals from the body or respond to environmental conditions to inform a user of potential risk. In the project Dermal Abyss, biochemical sensors are used to acquire and display information about the physiological state of the user. The biosensors developed are capable of indexing the concentration of sodium and glucose and the pH in the interstitial fluid of the skin, and they output the collected data through colors on a tattoo. Currently, the experiments are conducted ex vivo on pig skin due to clinical experimentation challenges.
EarthTones [83] is a cosmetics-inspired wearable chemical-sensing powder that detects and displays environmental conditions (Figure 8). It presents an alternative to current mainstream wearable displays (e.g., smart watches, LED textiles), which are digital and battery-laden. Instead, EarthTones points towards soft, analog wearable displays composed of chemical-sensing powders for increased wearability, built upon habitual makeup practices. The powders detect elevated levels of carbon monoxide (CO), ultraviolet (UV) rays, and ozone (O3), and trigger a corresponding color change to increase environmental awareness.
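In code, this kind of colorimetric readout can be thought of as a mapping from an analyte level to a discrete color band. The sketch below is purely illustrative: the thresholds, band boundaries, and color names are our assumptions, not values from Dermal Abyss or EarthTones.

```python
# Hypothetical colorimetric lookup: maps a measured analyte level to the
# color a chemically-reactive tattoo or powder would display.
# Thresholds and colors are illustrative, not from the cited projects.

def color_state(analyte, level, bands):
    """Return the display color of the first band whose upper bound covers level."""
    for upper, color in bands[analyte]:
        if level <= upper:
            return color
    return bands[analyte][-1][1]  # saturate at the last color

BANDS = {
    "pH": [(6.5, "purple"), (7.5, "blue"), (float("inf"), "pink")],
    "UV": [(2.0, "yellow"), (5.0, "orange"), (float("inf"), "dark red")],
}

print(color_state("pH", 7.0, BANDS))  # -> blue
print(color_state("UV", 8.0, BANDS))  # -> dark red
```

The lookup deliberately stays analog-like: there is no history or state, only the current chemistry-determined color for the current level.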

Somatosensory Extension
In this section, we discuss the third category of sensorimotor human augmentation, namely somatosensory extension. We distinguish our scope from the large body of sensory augmentation research that aims to substitute or add new sensory modalities. Instead, we focus on work that reconfigures how we interpret biosignals or the body image to include external entities as a part of oneself. We discuss interoceptive and exteroceptive means of "deceiving" the perception of self or of internal biosignals, and introduce a work in development that aims to go beyond relying on existing interoceptive cues.

Interoceptive Modulation
The phenomenon of gut instinct reveals a sensorial path from the inside out, where feelings are directed toward the external world. Work by Critchley and colleagues [84] suggests that activity in the right anterior insular/opercular cortex predicts subjects' interoceptive accuracy, which correlates with awareness of emotions. For example, a person may perceive themselves to be anxious if they notice an elevated heart rate. Interoceptive sensitivity and accessibility have further been shown to correlate with self-awareness, wellness, and social behaviors [85].
There are very few interactive applications that utilize interoception. In the field of medical and psychological therapies [86][87][88], however, interoceptive exposure has been widely used for mitigating hypersensitivity to internal alerts. Interoceptive exposure (IE) is a procedure in which a patient goes through a set of exercises that produce a sensation close to what triggers panic disorder [86,87], irritable bowel syndrome [88], or another problematic condition. In that way, the patient becomes desensitized to the interoceptive cues, thereby effectively removing the trigger for certain disorders.
Modulation of interoceptive signals has also been studied in VR environments. One study suggests that visual stimuli close to one's heartbeat signal can alter the perceived self-physiology [89]. Although the study's aim is to affect interoception through non-interoceptive stimuli, it suggests that, in immersive environments, such techniques can be used to attribute an emotional presence beyond visual manifestation in a virtual environment.
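The false-cardiac-feedback idea behind such studies can be sketched as driving a visual pulse at a rate slightly offset from the measured heart rate. The function and parameter names below are our illustrative assumptions, not the protocol of [89].

```python
# Illustrative sketch of false cardiac feedback: a visual stimulus pulses
# at a rate offset from the wearer's measured heart rate. Names and the
# offset convention are assumptions, not taken from study [89].

def feedback_rate(measured_bpm, offset_pct):
    """Rate (bpm) at which the visual stimulus should pulse."""
    return measured_bpm * (1.0 + offset_pct / 100.0)

def pulse_phase(t, bpm):
    """Phase in [0, 1) of the visual pulse at time t (seconds)."""
    period = 60.0 / bpm
    return (t % period) / period

rate = feedback_rate(72, -15)       # present a slower-than-actual heartbeat
print(round(rate, 1))               # -> 61.2
print(pulse_phase(1.0, 60))         # -> 0.0 (exactly one beat elapsed at 60 bpm)
```

A renderer would then scale the avatar's glow or size by `pulse_phase`, so the participant sees a heartbeat that is systematically slower or faster than their own.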

Exteroceptive Modulation
There are well-known examples that use multisensory stimulation to alter body representation, such as the rubber hand illusion [30] and mirror-touch synesthesia [90]. An experiment by Tajadura-Jiménez [91] shows that the representation of key properties of one's body, like its length, is affected by the sound of one's actions. Recent studies have shown that synchronous interpersonal multisensory stimulation (IMS) produces quantifiable changes in the mental representation of one's self and dilutes the differentiation from others [92]. BeAnotherLab [93] created a project called The Machine To Be Another, which uses virtual reality to encourage empathy by swapping the user's body with the body of another gender.

Novel Somatosensory Experience
A somatosensory intervention opens up opportunities for creating new sensations by triggering sensory receptors programmatically. Such a possibility can particularly reinforce immersion with virtual avatars, where embodiment in an avatar different from the self can be accompanied by corresponding somatosensory feedback.
TreeSense [94] is a tactile VR experience of being a tree (Figure 9). Through a headset, one sees one's body in the form of a tree, with two main branches moving along with one's arms. In addition, a pair of EMS devices on each arm triggers "novel" tactile sensations that are not innately possible. The triggered sensations are the result of tactile induction and the actuation of multiple muscles at varying intensities, in a way that would not happen normally. Therefore, a feeling of something crawling on the body, or growing from inside the body, can be simulated. The focus of this work is on somatosensory interventions through the electrical actuation of internal parts of the body. However, other sources, such as mechanical vibration, sound, or electronic implants, could expand how novel somatosensory experiences can be created. The programmatic induction of new somatosensory cues can be utilized as a natural modality for informing a user of their physiological state, or for virtual body ownership in VR with non-human configurations.
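One way a "crawling" sensation could be scheduled is to sweep a triangular intensity envelope across several EMS channels over time. The following is a minimal sketch under that assumption; the channel count, sweep timing, and intensity units are hypothetical and not TreeSense's actual parameters.

```python
# Hypothetical multi-channel EMS scheduler: a triangular intensity peak
# sweeps from the first electrode to the last, which could evoke a
# "crawling" feeling. All parameters are illustrative assumptions.

def crawl_intensities(t, n_channels, sweep_s, peak=1.0):
    """Per-channel intensities (0..peak) at time t for a sweep lasting sweep_s."""
    pos = (t % sweep_s) / sweep_s * (n_channels - 1)  # current peak position
    out = []
    for ch in range(n_channels):
        dist = abs(ch - pos)                          # distance from the peak
        out.append(round(max(0.0, peak * (1.0 - dist)), 3))
    return out

print(crawl_intensities(0.0, 4, 2.0))  # -> [1.0, 0.0, 0.0, 0.0]
print(crawl_intensities(1.0, 4, 2.0))  # -> [0.0, 0.5, 0.5, 0.0]
```

Sampling this function at the EMS driver's update rate yields a per-electrode intensity schedule; narrowing or widening the triangular falloff would change how localized the illusory touch feels.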

Reconfiguring Biological Circuits
In this category, we discuss research on computationally-augmented biological circuits. We focus on examples demonstrating practical applications, such as neural prostheses, in order to illustrate possibilities outside laboratories.
Advancements in electronics now afford a direct interface with biological systems. While pacemakers and defibrillators were initial instances of bio-hybrid systems, a new convergence of computation with biological substrates is on the horizon. Adamatzky and colleagues showed various functional biomorphic computing devices operated by slime mold [95]. In the same vein, mobile robots interfaced with the brain cells of worms [96], rats [97,98], and monkeys [99] have been proposed by various researchers.
These explorations are not limited to lab experiments; there have been many clinical studies regarding the treatment of peripheral nerve injuries that cause problems such as loss of function or neuropathic pain [100]. Neural prostheses [8,101,102] and cell or tissue transplants [100,103,104] have employed direct interfacing with the internal physiology for treatment. Osseointegration [101] is a technique that connects electrodes directly through a socket anchored through the bone. Thereby, nerves and muscle bundles can be accessed internally, reducing the noise and instability of acquired biosignals. It is further reported that tactile perception could be chronically reproduced through direct electrical stimulation of the peripheral nerves, despite long-term amputation (>10 years). Targeted muscle reinnervation (TMR) [105] is a surgical procedure for retargeting remaining arm nerves to alternative muscle sites, such as remaining upper-arm or chest muscles. Thus, the loss of muscles due to an amputation can be overcome, producing a sufficient number of signals for neuroprosthetic control. Researchers have also explored producing axons in vitro and implanting these bioartificial axon modules to bridge nerve damage [100,103,104]. The modules can include electrodes exposing electrical contacts to subdermal areas, making nervous-signal readings and inputs accessible [103].
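At the software level, control schemes built on reinnervated muscle sites often reduce to mapping per-channel muscle activity to discrete commands. The sketch below illustrates one simple such mapping; the channel names, threshold, and command set are our hypothetical assumptions, not the method of [105].

```python
# Illustrative EMG-to-command mapping for a neuroprosthesis: each channel
# is rectified and averaged, and the most active channel above a noise
# floor selects a command. Channel names and threshold are hypothetical.

def emg_command(samples_by_channel, command_map, threshold=0.2):
    """samples_by_channel: {channel: [raw EMG samples]}; returns a command or 'rest'."""
    activations = {
        ch: sum(abs(s) for s in samples) / len(samples)  # mean rectified value
        for ch, samples in samples_by_channel.items()
    }
    best = max(activations, key=activations.get)
    return command_map[best] if activations[best] >= threshold else "rest"

COMMANDS = {"chest_upper": "hand_open", "chest_lower": "hand_close"}

print(emg_command({"chest_upper": [0.5, -0.4, 0.6],
                   "chest_lower": [0.05, -0.02, 0.04]}, COMMANDS))  # -> hand_open
```

Real systems typically replace this winner-take-all rule with trained pattern-recognition classifiers, but the structure (biosignal features in, discrete actuation command out) is the same.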

Discussion
In this paper, we introduced works from various domains in order to shed light on the rise of human augmentation technologies. The paper highlights the interdisciplinary nature of the research and how the HCI perspective could contribute to new applications and corresponding studies.

In Relation to Robotics and Rehabilitation
The fields of robotics and rehabilitation study a wide range of prostheses and the design of neuromuscular control systems. A potential synergy between these fields and HCI could explore further uses of such systems beyond exclusive scenarios. For example, the works presented in previous sections [44,56,62,106] showcase new setups developed in light of computer-mediated interaction. The usability of such systems can be evaluated and tuned through the interaction practices and methodologies already established in HCI.

In Relation to Cognitive Science
Cognitive science and neuroscience research widely investigate the effects of embodiment on behaviors and cognitive processes. Understanding the depth of such functions will unveil further opportunities in computer-mediated communication, teleoperation, and wearable computers, all topics actively discussed within HCI. Such understanding can not only inform the design rationales for systems, but such systems could also be actively connected with the body's sensory or nervous systems. For example, researchers [107,108] have explored how rich haptic feedback for VR can be constructed by tricking the sense of orientation, or through a Wizard-of-Oz style reconfiguration of physical spaces.
In fact, cognitive science researchers are already pushing the boundaries, focusing not only on sociological factors [36][37][38] or neural phenomena [20][21][22], but also on parameters that are essential to designing an interaction system. For instance, the discussion by Aymerich-Franch and Ganesh [23] addresses the effect of (the body's) functionality on self-attribution. However, few works [17] have studied the relationship between an embodiment and its interaction with the world or with tasks. A more critical interrogation of the interaction aspect will yield insights for designing a system that partially becomes a part of its user, and for associating such systems with application contexts.

Evaluation and Deployment
Deployment and scaling of the technologies discussed in this article may be a challenge. A technology in very close proximity to the body, or one that is potentially invasive, has to be evaluated differently than conventional user interfaces. Reliability, robustness, running time, and health effects will all be key parameters in the success of such technologies. For example, in the context of skin interfaces, the composition should always adhere to the body, should not fail in sensing, and should be free from any long-term health effects. This will demand a different level of user study than what is customary in HCI research, in which prolonged use of the technology has to be investigated. Similar knowledge has been generated by wearables research [109,110], where studies have mostly been done through user surveys. HCI researchers have also explored the plausibility of implantable devices through tests on cadavers [111].
The field has attempted to find means to explore these new ideas. However, there is still a long way to go before actual products become available. It was only last year (2016) that an automatic insulin injector was approved by the FDA [112]. Still, only 5% of the 29 million people in the US with diabetes [113] are eligible for the use of the technology. This is just one case demonstrating that the productization of such technologies may take a long time, even when there is a large need. In the case of the new technologies discussed in this paper, deployment may take even longer, owing to the novelty of the concepts and applications.

Conclusions
In this paper, we discussed the concept of user interfaces that are closely integrated with the human body. As shown in numerous neuroscience and cognitive science studies, the use of tools affects how our brain functions. Building upon the human tendency to incorporate tools as extensions of ourselves, we envision computational interfaces that are internalized and that augment our cognition and functional capabilities. We focused on three areas of augmentation (physical morphology, on-skin interfaces, and somatosensory extension) and presented examples of work by researchers and artists. In addition, recent developments in bioelectronics for body-implanted computers were discussed. The works surveyed explore robotic and on-skin interfaces for transforming what the human body is and what it can do. While the research projects surveyed mostly focus on implementation, we believe future research into how such technologies can transform self-image and body-brain mapping will offer deeper insight into symbiotic user interfaces.

Figure 1.
Figure 1. Symbiotic interfaces combine aspects of the two prior interaction paradigms (direct manipulation and autonomous agents).

Figure 2.
Figure 2. Body-worn robotic device that transforms into multiple forms.

Figure 3.
Figure 3. Modular robotic wearable system: (a) modules for robot assembly; (b) the modules are connected by a single clasp action; (c) an assembled robot connected to the wrist attachment module; (d) the robot in action, worn on a user's wrist.

Figure 4.
Figure 4. Different end effector module designs (from left to right): trigger, pick holder, sensor tip, knob tip, and LEGO connector.

Figure 7.
Figure 7. (a) Biosensors tattooed onto pig skin and their interaction with chemical solutions; and (b) fluorescent biosensors made by a professional tattoo artist on pig skin.

Figure 8.
Figure 8. EarthTones is a wearable chemical display in the form of cosmetic powders. It senses environmental pollution and generates a color change to display elevated levels. We created three powder instantiations which detect carbon monoxide (CO), ultraviolet radiation (UV), and ozone (O3). In the example of a UV-sensing powder, a color change from yellow (a) to dark red (c) occurs when exposed to UV.

Table 1.
The categories of symbiotic interfaces discussed in the paper.