Editorial

Feeling the Future—Haptic Audio: Editorial

by Justin Paterson 1,* and Marcelo M. Wanderley 2,*
1 London College of Music, University of West London, London W5 5RF, UK
2 CIRMMT, McGill University, Montreal, QC H3A 1E3, Canada
* Authors to whom correspondence should be addressed.
Arts 2023, 12(4), 141; https://doi.org/10.3390/arts12040141
Submission received: 27 June 2023 / Accepted: 3 July 2023 / Published: 7 July 2023
(This article belongs to the Special Issue Feeling the Future—Haptic Audio)
This Special Issue in Arts aligns with the journal’s established theme of ‘Music and the Machine’. It is concerned with haptics—the transmission and understanding of touch and force-related information—and its application to music and audio.
Haptics increasingly pervades our interaction with technology; the vibrating phone is a simple everyday experience for billions of people. Ever more sophisticated haptic applications are now emerging across numerous industries, from autonomous cars to surgical simulation, wearables to wellbeing, and in creative sectors such as gaming and fashion.
It is well understood that the sense of touch is crucial to any musician—the string sliding under the finger, the vibration of the embouchure, the key hitting its end-stop. Yet, these qualities often remain elusive in the contemporary generation of (often electronic) new musical instruments and controllers, and this frequently compromises the human body’s ability to exert control upon them, foster experiential performative memory, and develop increasing mastery.
The era of working and playing in some form of extended reality is in its infancy, yet it is coming in more ways than we can yet imagine. Music performance is just one area that will increasingly explore this medium, and it is haptics that will not just bring an increased sense of realism, but also offer an essential modality to the intangible and relatively lifeless world of purely visual and audio experiences.
In recent decades, such research has built increasing momentum, giving rise to many novel tactile interfaces and approaches to musicking. Research on force feedback in musical applications, however, has traditionally suffered from issues such as hardware cost and the lack of community-wide access to software and hardware platforms for prototyping applications. Associated publications typically require quantitative analysis of primary data and formal user testing in controlled conditions, or present mathematical contexts for novel interfaces. Although many of these approaches have proved valuable, the literature tends to be rooted in engineering, technology, or computing, and thus remains out of reach to many in the creative arts.
The field has yet to demonstrate an amalgamated and compelling case for the actual benefit of haptics to audio and musical applications without employing any such overbearing mathematics or statistics. Accordingly, presenting this case to the artistic community represents the scope of this Special Issue: ‘Feeling the Future—Haptic Audio’. It offers a variety of reviews, case studies, insights, and explorations by a team of world experts.
We believe it is time to discuss future opportunities more openly, to propose directions in which this field can blossom, and eventually precipitate more ubiquitous tools for audio and music interaction.
  • Prof. Dr. Justin Paterson
  • Prof. Dr. Marcelo M. Wanderley
  • Guest Editors

Contributors:

1. Devon Blewett; University of Regina, Canada

Devon Blewett is a graduate student, research assistant, and lab instructor from the University of Regina. His research focuses on user-interaction design, with the goal of creating open-source and accessible educational tools on both web and virtual platforms for remote learners. His current projects center on VR-training simulations for music and healthcare, prioritizing the potential of gesture-based interactions through camera tracking.

2. Bert Bongers; University of Technology Sydney, Australia

Bert Bongers studies the relationships between technology, people and nature, which he has been researching and exploring for over three decades. He has a mixed background in technology, human sciences, and the arts and music, developed through education as well as practical experience in numerous R&D and design projects. In his work he combines insights and experiences gained from musical instrument design, interactive architecture, video performances, and interface development for multimedia systems to establish frameworks and an ecological approach for interaction between people and technology. Since 2007, Bert has been an Associate Professor in the School of Design at the Faculty of Design, Architecture and Building, University of Technology Sydney (UTS). From 2008 to 2017 he directed the Interactivation Studio there: a 100 m² flexible, reconfigurable space with audiovisual infrastructure (a multi-channel sound system and video projections) for developing new paradigms for interacting with technology. At UTS he teaches and develops subjects across a wide range of topics. He has published widely in academic journals and conferences; his latest book is Understanding Interaction (Volume 1, 2022; Volume 2, 2023).

3. Doga Cavdir; Stanford University, USA

Doga Cavdir is a researcher and a sound artist whose work integrates body movement and expression into music performance. Her research process actively engages with kinesthetic, immersive, and shared experiences for inclusivity as a way to bridge diverse abilities. Doga designs and builds new wearable musical instruments and listening experiences to study inclusive and embodied methods for music-making and to create inclusive, accessible performance collaborations. Her artistic work has been featured by Bay Area and European art initiatives. She is a recipient of the 2021 DARE fellowship from Stanford University where she is completing her PhD at CCRMA.

4. James Cunningham; Queen’s University Belfast, UK

James Cunningham is a visually impaired doctoral student at Queen’s University Belfast. As part of the Bridging the Gap project under the Performance without Barriers research umbrella at the Sonic Arts Research Center, he is interested in how music technology might be made more accessible for people who identify as disabled sound creatives, and how insights drawn from that question can benefit his own artistic practice as a music maker.

5. Mauricio Flores Vargas; Trinity College Dublin, Ireland

Mauricio Flores Vargas is a Creative Arts professional with a background in sound design and music composition, interested in audio for interactive applications and technological innovations. He is driven by the dynamic intersection of artistic ideas and new means of expression using technology to bridge the gap between artist and listener. His recent work has focused on developing Virtual and Augmented Reality experiences in sonic and visual contexts. He holds an M.Phil in Music and Media Technology (2020) and is undertaking a PhD in the ADAPT Centre at Trinity College Dublin focused on the influence of audio in the development of VR experiences. As an active composer, he creates music for renowned media publishers and has recorded/produced well-known Irish acts.

6. Federico Fontana; University of Udine, Italy

Federico Fontana received the Laurea degree in electronic engineering from the University of Padova, Italy, in 1996, and a PhD degree in computer science from the University of Verona, Italy, in 2003. During his PhD studies, he was a Research Consultant in the design and realization of real-time audio DSP systems. He is currently an Associate Professor in the Department of Mathematics, Computer Science and Physics, University of Udine, Italy, teaching auditory and tactile interaction and computer architectures. In 2001, he was a Visiting Scholar at the Laboratory of Acoustics and Audio Signal Processing, Helsinki University of Technology, Espoo, Finland. In summer 2016, he visited the Institute for Computer Music and Sound Technology at the Zurich University of the Arts. His current interests are in interactive sound processing methods and in the design and evaluation of musical interfaces. Professor Fontana coordinated the EU project 222107 NIW under the FP7 ICT-2007.8.0 FET-Open call from 2008 to 2011. From 2017 to 2021 he was an Associate Editor of the IEEE/ACM Transactions on Audio, Speech and Language Processing. Since 2022, he has also been an Associate Editor of the IEEE Signal Processing Letters.

7. Christian Frisson; Carleton University, Canada

Christian Frisson is a researcher and developer at the Society for Arts and Technology (SAT), Metalab, contract instructor at Carleton University, School of Information Technologies (CSIT). He was previously a postdoctoral researcher at McGill University, Input Devices and Music Interaction Lab (IDMIL), at the University of Calgary, Interactions Lab, and at Inria, Mjolnir/Loki team, France. He obtained his PhD at the University of Mons, numediart Institute, Belgium; his Master in “Art, Science, Technology” from Institut National Polytechnique de Grenoble with the Association for the Creation and Research on Expression Tools (ACROE) and his Master of Engineering in Acoustics and Metrology from ENSIM, France.

8. Francesco Ganis; Aalborg University, Denmark

Francesco Ganis is an engineer and sound technician pursuing his PhD at Aalborg University Copenhagen. His research focus is enhancing the musical experience for people with hearing disabilities using vibrotactile feedback. He is currently working on employing vibrations to add a layer of information to musical training targeted at children with cochlear implants and auditory nerve deficiency. Thanks to his studies on sound and his love of music, he is determined to create practical and effective methods of enhancing the musical experience for people with hearing impairments, collaborating with the Center for Hearing and Balance at Rigshospitalet in Copenhagen.

9. David Gerhard; University of Manitoba, Canada

David Gerhard is a Professor in the Department of Computer Science at the University of Manitoba, specializing in human–computer interaction as it relates to performance. His work focuses on computational interaction with information-rich human data, such as music, speech, vision, and movement. This research combines signal processing, pattern classification, information retrieval, and sensor-based physical computing techniques with multimedia, speech recognition, computer music, and human–computer interaction, especially in virtual and augmented reality. His work is often interdisciplinary, incorporating aspects of both technology and artistic expression. Both his teaching and his research have won awards.

10. Jacob Harrison; Queen Mary University of London, UK

Jacob Harrison is a postdoctoral researcher with the AHRC-funded ‘Bridging the Gap’ research project, co-funded by Ableton. He works with Andrew McPherson in the Augmented Instruments Lab at Queen Mary University of London and Imperial College London. His work focuses on accessibility, disability, and music technology, within the wider domain of music HCI. He has previously worked on a number of academic research projects collaborating with arts charities and organizations, such as CWPlus at Chelsea and Westminster Hospital, Heart n Soul, Drake Music and the OHMI Trust.

11. Hanna Järveläinen; Zurich University of the Arts, Switzerland

Hanna Järveläinen received her MSc and DSc degrees in electronics and communications engineering from Helsinki University of Technology (now Aalto University), Finland, in 1997 and 2003, respectively, and a BMus degree from the Sibelius Academy (now Uniarts Helsinki), Finland, in 2006. In 2012, after focusing on music for several years, she joined the Institute for Computer Music and Sound Technology, Zurich University of the Arts, Switzerland, where her main research field is multimodal perception and action.

12. Alexandros Kontogeorgakopoulos; National and Kapodistrian University of Athens, Greece

Alexandros Kontogeorgakopoulos is an academic, musician, and artist conducting transdisciplinary research and creating work at the intersection of art, science, and technology. He equally has a scientific, engineering, and musical background, reflected in the nature of his creative practice and his techno-scientific exploration. He is currently Assistant Professor in Interactive Digital Arts and Digital Art Installations in the Department of Digital Arts and Cinema of the National and Kapodistrian University of Athens (NKUA). His artistic work, which includes audiovisual performances, installations, and compositions, has been presented in Europe and the US. His research in the field of sound and music computing focuses principally on haptic musical interaction and sound synthesis/processing through physical modelling.

13. Alex Lucas

Alex Lucas is a product designer turned academic who specializes in designing music technology hardware interfaces. Commercial products designed by Alex include the Novation Peak polyphonic synthesizer and Circuit Mono Station sequenced monosynth. He is the postdoctoral lead on the AHRC project ‘Bridging the Gap’. Alex has a passion for intuitive user experience and aspires to improve the accessibility of music technology interfaces. Alex completed an MSc in Creative and Computation Sound at the University of Portsmouth in 2007. In 2017, Alex relocated to Belfast to take up a prestigious AHRC NPIF (National Productivity Investment Fund) PhD studentship to join the Performance Without Barriers Team.

14. Rachel McDonnell; Trinity College Dublin, Ireland

Rachel McDonnell is an Associate Professor of Creative Technologies at Trinity College Dublin, Ireland. Her research focuses on the animation of virtual characters, using perception to deepen our understanding of how virtual characters are perceived and directly provide new algorithms and guidelines for industry developers on where to focus their efforts. She has published over 100 papers in conferences and journals in her field, including many at top-tier venues such as SIGGRAPH, Eurographics, and IEEE TVCG. She serves as an Associate Editor of journals such as ACM Transactions on Applied Perception, Computers & Graphics, and Computer Graphics Forum, and is a regular member of many international program committees (including ACM SIGGRAPH and Eurographics).

15. Andrew McPherson; Imperial College London, UK

Andrew McPherson is a musician, engineer, and musical instrument designer. He is Chair in Design Engineering and Music at the Dyson School of Design Engineering, Imperial College London. He leads the Augmented Instruments Laboratory (http://instrumentslab.org, accessed on 27 June 2023), a research team based at Imperial and at the Centre for Digital Music at Queen Mary University of London. The lab brings together electronic engineering, human-computer interaction and musical practice to create new digital instruments and study the interaction between people and technology in creative contexts.

16. Néill O’Dwyer; Trinity College Dublin, Ireland

Néill O’Dwyer is a Research Fellow and the Principal Investigator for ‘Performative Investigations into Extended and Augmented Reality Technologies’ (PIX-ART) in the School of Creative Arts (SCA) at Trinity College Dublin (TCD). He previously worked on the V-SENSE project (Dept. of Computer Science), where he continues to be an associate researcher. Néill specializes in practice-based research in scenography and design-led performance with a specific focus on digital media, computer vision, human-computer interaction, prosthesis, synergy, agency, performativity, and the impact of technology on artistic processes. He is the sole author of Digital Scenography: 30 years of experimentation and innovation in performance and interactive media (2021) and a co-editor of The Performing Subject in the Space of Technology: Through the Virtual, Towards the Real (2015) and Aesthetics, Digital Studies and Bernard Stiegler (2021).

17. Marius George Onofrei

Marius George Onofrei is a recent graduate of the Sound and Music Computing M.S. program at Aalborg University in Denmark. There, he was involved in many projects in the fields of sound synthesis, machine learning for media technology, and music signal analysis, before specializing in physical modeling for sound synthesis. As part of these studies, he carried out an internship at the University of Udine, Italy, where he explored the multi-modal potential of physical models as a single interconnected source for both audio and tactile synthesis, which served as the starting point for his current research focus. He previously received an MS degree in structural engineering in 2013 and has had a successful career since, working on the analysis of metocean data, with a focus on the likelihood of rogue waves, as well as on the structural design of offshore wind-turbine foundations in the wind energy field.

18. Razvan Paisa; Aalborg University, Denmark

Razvan Paisa is a PhD student who is focused on making music accessible for people with cochlear implants. With a background in engineering and a passion for music, Razvan’s research is focused on creating vibrotactile displays that can enhance the music listening experience for individuals with severe hearing impairments. He is committed to making a positive impact on the lives of people with disabilities and is motivated by the potential of technology to transform lives.

19. Justin Paterson; University of West London, UK

Justin Paterson is Professor of Music Production at London College of Music in the University of West London, where he leads the MA Advanced Music Technology. He has numerous research publications ranging through journal articles, conference presentations, and book chapters, and is author of the Drum Programming Handbook. Justin is founding co-chair of the Innovation in Music conference series, and co-editor of its books. He is also an active music producer with many professional credits, and is a consultant to music-app-development company RT60 Ltd. His current research interests are 3D audio, interactive music, and haptic audio. Working with Warner Music Group to release prominent artists from their roster, he co-developed the variPlay interactive-music format, funded by the UK Arts and Humanities Research Council. This work was graded as ‘world leading’ in the UK Research Excellence Framework 2021. Justin leads a research group at UWL, originally funded by Innovate UK with consortium partners including Generic Robotics Ltd., Numerion Software Ltd., and the Science Museum Group. The group’s work centres on the development of novel music-production interfaces in mixed reality, utilizing haptic feedback via a range of devices from Ultraleap controllers and haptic gloves to dual Phantom® Premium™ devices. They are also working on the deployment of haptics alongside 3D audio to enhance audio description for visually impaired people. In 2021, along with University College London and Generic Robotics Ltd., the group demonstrated the world’s first real-time remote haptic–collaboration workspace.

20. Stefano Papetti; Zurich University of the Arts, Switzerland

Stefano Papetti received a Laurea (M.Sc.) degree in computer engineering from the University of Padova, Italy, in 2006, and a Ph.D. degree in computer science from the University of Verona, Italy, in 2010. Since 2011, he has been a research associate at the Institute for Computer Music and Sound Technology, Zurich University of the Arts, Switzerland. His current research focuses on the design and evaluation of haptic musical interfaces, and on models and applications for interactive sound synthesis. He was PI of two research projects funded by the Swiss National Science Foundation (AHMI, grant No. 150107, 2014–2016; HAPTEEV, grant No. 178972, 2018–2022) and co-edited the open-access book “Musical Haptics” (Springer Series on Touch and Haptic Systems, 2018).

21. Franziska Schroeder; Queen’s University Belfast, UK

Franziska Schroeder is a saxophonist and improviser, originally from Berlin. She works as a Professor of Music and Cultures at the Sonic Arts Research Centre, at Queen’s University Belfast, where she leads the research team “Performance without Barriers”, a group dedicated to researching more accessible and inclusive ways of designing music technologies for and with disabled musicians. The group’s agenda-setting research in designing virtual reality instruments was recognized by the Queen’s Vice Chancellor’s 2020 Prize for Research Innovation. Franziska’s newest passion is #StableDiffusion #Deforum #AIArt, which allow her to create and design her own immersive performance AI eco-system.

22. Stefania Serafin; Aalborg University, Denmark

Stefania Serafin is Professor of Sonic Interaction Design at Aalborg University in Copenhagen and leader of the Multisensory Experience Lab together with Rolf Nordahl. She is President of the Sound and Music Computing Association, Project Leader of the Nordic Sound and Music Computing network, and leads the Sound and Music Computing Master’s program at Aalborg University. Stefania received her PhD, entitled “The sound of friction: computer models, playability and musical applications”, from Stanford University in 2004, supervised by Professor Julius O. Smith III. Her research on sonic interaction design and on sound for virtual and augmented reality, with applications in health and culture, can be found here: tinyurl.com/35wjk3jn (accessed on 27 June 2023).

23. Aljosa Smolic; Lucerne University of Applied Sciences and Arts, Switzerland

Aljosa Smolic has been a lecturer in AR/VR at Hochschule Luzern since 2022. Before that, he was SFI Research Professor of Creative Technologies at Trinity College Dublin (TCD, 2016–2021), heading the research group V-SENSE on visual computing, combining computer vision, computer graphics, and media technology to extend the dimensions of visual sensation. Before joining TCD, Dr Smolic was with Disney Research Zurich as Senior Research Scientist and Head of the Advanced Video Technology group (2009–2016) and with the Fraunhofer Heinrich-Hertz-Institut (HHI), Berlin, also heading a research group as Scientific Project Manager (2001–2009). Dr Smolic is also co-founder of the start-up company Volograms, which commercializes volumetric video content creation. He received the IEEE ICME Star Innovator Award 2020 for his contributions to volumetric video content creation, TCD’s Campus Company Founders Award 2020, and several best paper awards.

24. Gerrit van der Veer; Open Universiteit, Netherlands

Gerrit van der Veer has been an educator and researcher in human–computer interaction and cognitive ergonomics at the Vrije Universiteit Amsterdam (VUA) since 1961. The VUA allowed him to teach at other Dutch universities, as well as in Romania, Belgium, Spain, Italy, and China. In 2005 he founded the VUA Department of Multimedia and Culture. Currently he is a guest Professor of Multimedia and Animation at the Luxun Academy of Fine Arts in China. His research is on individual differences, mental models, task analysis, user interface design, service design, and interaction design for art and for cultural heritage.

25. Tara Venkatesan; University of Oxford, UK

Tara Venkatesan is a cognitive science researcher who studies the intersection of technology and art. She completed her Ph.D. at the University of Oxford and B.S. at Yale University. Her research spans a wide variety of topics, including the multisensory effects of digital album art on music perception, music and psychological connection, and why we enjoy sad art. She is also a professional opera singer with a specialism in historical performance.

26. Qian Janice Wang; Aarhus University, Denmark

Qian Janice Wang is an experimental psychologist and computer scientist by training and an Assistant Professor at the Department of Food Science at Aarhus University, with affiliations at the Center for Hybrid Intelligence, the Cognitive Neuroscience Research Unit, and the Aarhus Institute of Advanced Studies. Her research examines multisensory flavor perception and preference, with a focus on how environmental factors (like background music) and cognitive factors (like expertise) can modify and enhance the way we perceive food and drink. She is interested in the role of the brain’s flavor system and its connection with eating behavior, in order to gain a deeper understanding of why people eat what they do, and to encourage behavior changes for a healthier, more sustainable lifestyle.

27. Marcelo M. Wanderley; McGill University, Canada

Marcelo M. Wanderley is Professor of Music Technology at McGill University, Canada. His research interests include the design and evaluation of digital musical instruments and the analysis of performer movements. He co-edited the electronic book “Trends in Gestural Control of Music” in 2000, co-authored the textbook “New Digital Musical Instruments: Control and Interaction Beyond the Keyboard” in 2006, and chaired the 2003 International Conference on New Interfaces for Musical Expression (NIME03). He is a member of the Computer Music Journal’s Editorial Advisory Board and a senior member of the ACM and the IEEE.

28. Peter Williams; Aalborg University, Denmark

Peter Williams is a jazz musician, engineer, and researcher affiliated with the rapid prototyping and embedded electronics laboratory at Aalborg University Copenhagen, where he teaches physical prototyping, interface design, and mobile and wearable computing. His focus is on interface design and evaluation from the performer’s perspective, and on multimodal technologies that enrich music-related experiences.

29. Gareth W. Young; Trinity College Dublin, Ireland

Gareth W. Young is an interdisciplinary senior research fellow on the TRANSMIXR project at Trinity College Dublin, researching the intersection of artistic practice, human-computer interaction (HCI), and extended-reality (XR) technology. His academic interests stem from a life-long passion for technology-mediated creativity, as applied in multiple disciplines and co-creative media contexts. Gareth has published numerous works on HCI in arts practices and innovative applications of XR, all of which can be explored on his website www.GarethYoung.org. Gareth has previously worked as a postdoctoral research fellow at V-SENSE (School of Computer Science and Statistics at Trinity College Dublin) and the Building City Dashboards project (National Centre for Geocomputation at Maynooth University).

In Memoriam Vincent Hayward (1955–2023)

It is hard to overestimate Vincent Hayward’s impact on haptics research. As a researcher, he made significant contributions to the area, from basic research in tribology to developing novel devices and establishing several groundbreaking companies. Vincent had a seemingly endless curiosity and passion for anything related to tactile and force feedback. Those who knew him can share similar stories of how he would passionately talk about his latest research results and present his prototypes developed to demonstrate haptic effects, whether at the Electrical and Computer Engineering Department at McGill University, Montreal, or at the Institut des Systèmes Intelligents et de Robotique at Sorbonne University, Paris.
But Vincent was also interested in haptic applications to other fields—for instance, computer music.
I met Vincent in 1997–1998 when he came to Ircam in Paris during a sabbatical leave from McGill. We shared the same office for roughly a year. During this time, I had the unique opportunity to discuss various topics related to haptics and computer music (I was then working on my Ph.D. thesis with the Analysis and Synthesis Team). He showed us the Pantograph and took part in the nascent Gesture Research in Music working group (http://recherche.ircam.fr/equipes/analyse-synthese/wanderle/Gestes/Externe/, accessed on 27 June 2023) that I created in 1997 with Joseph Butch Rovan and Shlomo Dubnov (Wanderley et al. 1998). What an excellent way to learn from him the endless possibilities of haptics in the arts!
Butch and Vincent wrote an amazing chapter for the e-book “Trends in Gestural Control of Music,” Ircam, 2000, on adding vibrotactile feedback to gestural controllers (aka musical interfaces), entitled “Typology of Tactile Sounds and their Synthesis in Gesture-Driven Computer Music Performance” (Rovan and Hayward 2000). I still use this reference every year in my seminar on Musical Human–Computer Interaction. It is a fantastic work, with a review of the foundations of haptic perception, and a description of the VR/TX (pronounced “Vortex”) system.
I met Vincent again in Montreal in late 1998 when I visited his lab at ECE McGill. What a place! It is funny to think that this happened some three years before I joined McGill as an assistant professor (2001). I obviously had no clue that we would end up as colleagues at McGill.
During our time together at McGill, even though he was busy directing the Centre for Intelligent Machines (CIM), we had the opportunity to work on the EU project Enactive Interfaces, coordinated by Annie Luciani (ACROE, Grenoble) and Massimo Bergamasco (Scuola Superiore Sant’Anna, Pisa), thanks to grants from Quebec’s Ministry of Economic Development and the Natural Sciences and Engineering Research Council of Canada. We later co-supervised Stephen Sinclair’s Ph.D. thesis (“Velocity-driven Audio-Haptic Interaction with Real-Time Digital Acoustic Models” (Sinclair 2012)) and published a few papers with Stephen and Gary Scavone on haptics and computer music (Sinclair et al. 2011, 2014).
Remembering Vincent brings back memories of his seemingly insatiable curiosity, his passion for research and knowledge, and also his chuckles when telling stories about his work. It is gratifying to see that his contributions have been recognized in the last few years (e.g., his election to the French Academy of Sciences in 2019). It is such a pity that he left us so young. I try to picture him at 85 years old and still passionately talking about his latest haptic-related discovery! Rest in peace, Vincent.
Justin and I would like to dedicate this Special Issue to him.
  • Marcelo M. Wanderley

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rovan, Joseph, and Vincent Hayward. 2000. Typology of Tactile Sounds and their Synthesis in Gesture-Driven Computer Music Performance. In Trends in Gestural Control of Music. Edited by Marcelo M. Wanderley and Marc Battier. Paris: Ircam.
  2. Sinclair, Stephen. 2012. Velocity-Driven Audio-Haptic Interaction with Real-Time Digital Acoustic Models. Ph.D. thesis, McGill University, Montreal, QC, Canada.
  3. Sinclair, Stephen, Marcelo M. Wanderley, and Vincent Hayward. 2014. Velocity estimation algorithms for audio-haptic simulations involving stick-slip. IEEE Transactions on Haptics 7: 533–44.
  4. Sinclair, Stephen, Marcelo M. Wanderley, Vincent Hayward, and Gary Scavone. 2011. Noise-free Haptic Interaction with a Bowed-string Acoustic Model. Presented at the IEEE World Haptics Conference, Istanbul, Turkey, June 21–24.
  5. Wanderley, Marcelo M., Marc Battier, Philippe Depalle, Shlomo Dubnov, Vincent Hayward, Francisco Iovino, Véronique Larcher, Mikhail Malt, Patrice Pierrot, Joseph Butch Rovan, and et al. 1998. Gestural Research at Ircam: A Progress Report. In Proceedings of the Journées d’Informatique Musicale (JIM98). La Londe-les-Maures, France.

