Article

Design and Evaluation of a Web- and Mobile-Based Binaural Audio Platform for Cultural Heritage

Dyson School of Design Engineering, Imperial College London, London SW7 2DB, UK
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2021, 11(4), 1540; https://doi.org/10.3390/app11041540
Submission received: 14 January 2021 / Revised: 29 January 2021 / Accepted: 2 February 2021 / Published: 8 February 2021

Abstract

PlugSonic is a suite of web- and mobile-based applications for the curation and experience of 3D interactive soundscapes and sonic narratives in the cultural heritage context. It was developed as part of the PLUGGY EU project (Pluggable Social Platform for Heritage Awareness and Participation) and consists of two main applications: PlugSonic Sample, to edit and apply audio effects, and PlugSonic Soundscape, to create and experience 3D soundscapes for headphone playback. The audio processing within PlugSonic is based on the Web Audio API and the 3D Tune-In Toolkit, while the mobile exploration of soundscapes in a physical space relies on Apple's ARKit. The main goal of PlugSonic is technology democratisation; PlugSonic users—whether cultural institutions or citizens—are all given the instruments needed to create, process and experience 3D soundscapes and sonic narratives, without the need for specific devices, external tools (software and/or hardware), specialised knowledge or custom development. The aims of this paper are to present the design and development choices and the user involvement processes, as well as a final evaluation conducted with inexperienced users on three tasks (creation, curation and experience), demonstrating that PlugSonic is a simple, effective, yet powerful tool.

Graphical Abstract

1. Introduction

A heritage that is everywhere, and is relevant to everyday life, is one of the preconditions for genuine sustainability [1]. Currently, there are few Information and Communication Technology (ICT) tools to support citizens in their everyday activities to shape cultural heritage and be shaped by it. Furthermore, existing applications and repositories for heritage dissemination (e.g., Google Arts and Culture, and Europeana) do not foster the creation of heritage communities [2,3]. Social platforms certainly offer the potential to build networks, but they have not yet been fully exploited for global cultural heritage promotion and integration into people's everyday life [4], and museums and institutions have only recently started to explore the potential of social media and technology for public engagement and co-creation purposes [5]. The PLUGGY project (Pluggable Social Platform for Heritage Awareness and Participation) [6] aims to bridge this gap by providing the necessary tools to allow users to share their local knowledge and everyday experience with others and, together with the contribution of cultural institutions, to build extensive networks around a common area of interest, connecting the past, the present and the future.
Within PLUGGY, several tools are being developed (Figure 1): a social platform, curation tools and four "pluggable" apps to demonstrate the platform's potential and kick-start applications for life after the project. The social platform serves both as a repository for all content uploaded (assets) and curated (exhibitions), and as a place for interaction and collaboration. It has all the typical features of a social platform: profiles; private and public content; follows, likes, comments and shares; notifications; folders to organise bookmarked content; and teams for collective contribution. At the same time, the social platform gives access to the curation tools—which are designed to create basic exhibitions in the form of blog stories or timelines—and is extended by the "pluggable" apps for the curation of augmented exhibitions and engaging experiences. These applications focus on various aspects of digital heritage, including Virtual Reality (VR), Augmented Reality (AR), Geolocation, Gamification and 3D Soundscapes and Sonic Narratives. The latter, called PlugSonic, is the focus of the current paper, which will look in particular at the web- and mobile-based binaural audio features of the application. The PLUGGY social platform also gives access to external repositories such as Europeana [7] and Wikipedia, and includes an open-source application programming interface (API) which allows anyone to develop new applications that access the platform's content and extend its features or curation tools. Any user can request to become a developer and—after adequate evaluation and approval—"plug" a new app into the platform for everyone to benefit (Figure 1).
The research presented here aims to contribute to both the cultural heritage and spatial audio communities through the development of novel tools for the creation and experience of realistic and interactive 3D soundscapes. We describe the design choices and the key implementation details, and we also illustrate the authoring workflow. We show how 3D audio and augmented reality technologies can be exploited to enhance visits to museums and monuments [8]; these could also be adopted to document and preserve the acoustics of an archaeological site [9] or a landscape [10], or to document acoustic heritage [11]. By integrating web- and mobile-based applications with a social platform and online repository, we show how binaural audio and sonic narratives could be used to encourage cultural heritage dissemination and availability.
In this work, special importance has been given to designing applications which allow users with no previous knowledge or experience in 3D audio production or soundscape curation to contribute to the popularisation and growth of sonic cultural heritage. Being web- and mobile-based, the tools presented here are intrinsically ubiquitous, and the content can be experienced online or in a physical space without the need for specialised software or hardware installation and/or custom development. To understand whether these goals were achieved, we conducted an evaluation with inexperienced users to substantiate our objective of a very low barrier to content development while retaining the public-engagement effectiveness of spatial audio technologies. In this respect, the focus of our evaluation was the usability of our tools, especially for naive users (no expert sound designers were involved). Therefore, the quality of the output and the pleasantness, accuracy and realism of the simulation are not within the scope of this work. There are already many tools available for expert users (see Section 2), as well as studies around the trade-offs between accuracy and plausibility of a sonic interaction [12].
The paper is organised as follows. Section 2 provides background on binaural spatialisation, as well as a review of the state of the art for web-based spatial audio and the use of sonic narratives in cultural heritage. Section 3 lists the research aims. Section 4 describes the apps that make up the PlugSonic Suite, their main functionalities (complete details are included in the appendices) and the implementation choices. In Section 5, we report on the evaluation methods and discuss the results. Section 6 is devoted to the conclusions and potential future work.

2. Background

2.1. Sonic Narratives and Web-Based Spatial Audio

A sonic narrative is defined as a sequence of sounds that suggests a sense of causality and temporal evolution. This is usually achieved through the interplay of tension and resolution in the acoustic properties of the sequence, or through its referential qualities [13]. Generally based on musical features (e.g., timbre, pitch and melody, tempo) [14], sonic narratives are often not interactive (i.e., simple audio playback). Conversely, a soundscape is defined as the sum of natural, animal and human sounds that describe a landscape or environment [15] and—while dynamic at the level of single acoustic events—it typically describes a static scene, with no control over the listening position or the sound sources' position and/or proximity.
The aim of binaural spatialisation is to provide the listener (through standard headphones) with the impression that sound is positioned in a specific location in the three-dimensional space. The 3D characteristics of the sound can be captured during recording with special hardware, or simulated in post-production via spatialisation techniques. The addition of spatial attributes (e.g., placement of sound sources on a full 360° sphere, and at different distances), and most of all the addition of interactivity (e.g., to navigate soundscapes or sonic narratives moving around in the acoustic virtual environment), are features which have not been widely explored until now within the context of digital heritage. For example, the authors of [16] explored spatial sonic narratives, but exploited simple 2-dimensional audio panning techniques.
The theories underlying the binaural spatialisation technique are not particularly recent, and the first binaural recording dates back to the end of the 19th century [17]. However, it is only within the last twenty years that the increase in the processing power of personal computers has enabled an accurate real-time simulation of three-dimensional sound fields over headphones.
With the release of the Web Audio (WAA) [18] and WebGL (WGL) [19] APIs in 2011, and with the specification of HTML5 in 2014, the World Wide Web Consortium (W3C) and the Mozilla Foundation set the basis for the development of modern web applications. As stated in the introduction to the WAA [20], the specification of a high-level Javascript (JS) API was necessary to satisfy the demand for audio and video processing capabilities that would allow developers to build "sophisticated web-based games or interactive applications". To obtain performance comparable to modern digital audio workstations (DAWs) and game graphics engines, the two organisations decided to move the burden of audio and video processing from the server to the client (i.e., browser) side.
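As a concrete illustration of this client-side model, the WAA can fetch, decode and spatialise a sound entirely in the browser. The snippet below is a generic sketch using the WAA's built-in HRTF panner; it is not PlugSonic code, whose binaural rendering relies on the 3D Tune-In Toolkit instead.

```typescript
// Minimal Web Audio API sketch: decode an audio asset and render it
// from a 3D position with the browser's built-in HRTF panner.
// Generic illustration only; PlugSonic uses the 3D Tune-In Toolkit.
async function playSpatialisedSource(url: string): Promise<void> {
  const ctx = new AudioContext();

  // All fetching and decoding happens on the client (browser) side.
  const response = await fetch(url);
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

  const source = ctx.createBufferSource();
  source.buffer = buffer;

  // PannerNode with HRTF-based binaural spatialisation.
  const panner = ctx.createPanner();
  panner.panningModel = "HRTF";
  panner.distanceModel = "inverse";
  panner.positionX.value = 2;  // 2 m to the listener's right
  panner.positionY.value = 0;
  panner.positionZ.value = -1; // 1 m in front of the listener

  source.connect(panner).connect(ctx.destination);
  source.start();
}
```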
Several tools currently exist that perform binaural spatialisation, to mention a few: Anaglyph [21], IRCAM Spat [22], the IEM binaural audio open library [23] and the 3D Tune-In Toolkit [24]. However, very few of these are designed for non-expert users or implemented within a web-based application, and therefore available on multiple platforms through a simple browser. The idea of developing and evaluating web and mobile applications for the creation and experience of 3D sonic narratives and soundscapes does indeed represent a novel contribution to both the digital heritage and audio technology domains.

2.2. Sonic Narratives for Cultural Heritage

In this section, we look at how concepts like sonic narratives, soundscapes and interactivity have been explored in the cultural heritage context. We also look at audio and/or web technologies adoption and implementation methods, the results achieved so far, and the limitations of current systems, when it comes to cultural heritage applications—describing the panorama in which our system is being proposed.
In [25], Ardissono et al. give an exhaustive review and comparative analysis of research about digital storytelling and the delivery of multimedia content, on-site or online, with a focus on cultural heritage. Here, we limit our review to those projects that use exclusively or mainly audio as the medium for content delivery or to design novel types of experiences.
Looking at the state of the art in this area, it can be noticed how research is delving into solutions to make cultural heritage immersive, adopting augmented reality (AR), virtual reality (VR) and spatial audio; engaging, using personalisation and emotional storytelling; adaptive, exploiting context-awareness and location-awareness; interactive, using the paradigm of dramas; and open and inclusive, developing content for people with impairments and/or difficulties.
Two of the first projects aiming at enriching the user experience and engagement in a museum visit were the HyperAudio and HIPS projects. In [26], the authors introduce the HyperAudio project, in which the user approaching an artefact is presented with audio content played through headphones, together with suggestions for further exploration on the display of a palmtop. In [27], the hardware and software architecture is presented, which uses infrared transmitters near the exhibits and a receiver on the user's headphones to implement a degree of location awareness. The HIPS project [28] expands on the HyperAudio experience, aiming at a user-adaptive presentation system.
The LISTEN project [29] investigated audio augmented environments and user adaptation technologies. This involved the development of ListenSpace [30]—a graphical authoring tool used to represent the real space and the sound sources' positions—as well as the implementation of a domain ontology [31] for an exhibition and the use of context-awareness to adapt to the user's interest. For the spatial rendering, the authors relied on IRCAM's Spatialisateur [32]. The main limitations, from the content creators' perspective, lay in the system's software (server-based processing) and hardware (antennas or infrared cameras for head-tracking) requirements, the necessity for custom development for each exhibition, and the fact that sound was rendered only on the horizontal plane.
In the CHESS project [33], the focus was on personalisation and interactivity: the story was delivered mainly through voice narration, with on-screen instructions, and interaction was obtained through applications running on web browsers. Personalisation was obtained through the definition of personae, profiling first-time visitors [34] to adapt the narration style. Once visitors were profiled, the adaptive storytelling engine [33] used contextual data to adapt the user's experience.
Interaction and context awareness (based on geolocation) were explored in [35] with mobile urban dramas, in which the user becomes the main character of a story. The project used a multimedia style (audio, video, images and animations) and was implemented to run on mobile web browsers, using XML to describe the content. Here, the advantage of multi-platform flexibility was limited by the need for specific knowledge of the content metadata structure, or for consultancy from the researchers for app implementation and web services.
Other projects have been studying and developing new ways to attract visitors with engaging experiences. The EMOTIVE project [36] aimed to "research, design, develop and evaluate methods and tools that can support the cultural and creative industries in creating narratives" that exploit emotional storytelling. A first evaluation of a mobile application for the ancient Athens Agora [37] produced positive results, with the users particularly appreciating the chance to freely explore the environment.
In the ARCHES project [38], researchers worked on the inclusivity aspect of storytelling, exploring new modalities to design cultural heritage experiences for people with difficulties and/or disabilities. Other examples of tools developed by academia that could be used to create soundscapes and immersive experiences are the Soundscape Renderer [39] and those developed for the CARROUSO project [40].
Commercial applications have also been developed for audio narratives, such as Echoes.XYZ [41]: GPS-triggered audio tours with web-based tools for the creation of audio guides and mobile apps to experience them. It is worth highlighting that none of the currently available commercial solutions use sound spatialisation.

3. Aims

The overall aims of our research are as follows.
  • Design and develop tools that foster the use of spatial audio for the creation of interactive soundscapes and sonic narratives, with a focus on cultural heritage.
  • Adopt web and mobile technologies to simplify and streamline the curation process and avoid the need for specialised software, custom application development and/or hardware requirements.
  • Prove—through an evaluation with inexperienced participants—that 3D audio technologies can be designed to be accessible while remaining capable of delivering an engaging and compelling experience.

4. PlugSonic Suite

In this section, we describe the design criteria and some implementation details of the web and mobile applications that make up the PlugSonic Suite (for complete implementation details, see in [42]). The apps—organised into the PlugSonic Sample and PlugSonic Soundscape groups—were developed to (1) facilitate the use of audio content to augment virtual exhibitions; (2) enhance on-line and/or on-site visits to museums, monuments or archaeological sites; and (3) share tangible and intangible cultural heritage.
Specifically, we designed PlugSonic Sample to edit sound files and apply audio effects, and PlugSonic Soundscape to create and experience interactive spatialised soundscapes.
In this way, the social platform, curation tools and pluggable apps can include standard sound files (i.e., mono/stereo, in mp3/wav format) to be used in voice descriptions, audio narratives or sound accompaniment to the platform's exhibitions, as well as interactive and explorable 3D audio narratives and soundscapes. PLUGGY users—whether institutions or citizens—are therefore given all the necessary instruments, and do not need specific devices, external tools (software and/or hardware), specialised knowledge and/or resources.
Figure 2 gives a complete overview of the structure of the web and mobile apps included in the PlugSonic Suite. PlugSonic Sample and PlugSonic Soundscape Create were implemented only as web applications, while PlugSonic Soundscape Experience was implemented as both a web and a mobile application. In this way, all the Soundscape exhibitions in PLUGGY can be explored in a virtual or physical space. When using Soundscape Experience Web, the navigation takes place in the browser using mouse, arrow keys or touch controls. With Soundscape Experience Mobile, the navigation can take place in a real space; thanks to the use of Apple's ARKit [43], the user can freely navigate within a room while the device's camera is used to estimate the person's position in the room and hence in the soundscape.
A typical use case of the PlugSonic Suite could be described as follows. Let us suppose that a user would like to curate a new 3D audio soundscape. The first step would be to select the appropriate audio assets to be included in the Soundscape exhibition. The user has the option to either upload new audio files to the PLUGGY social platform or use content already available. Any file uploaded to PLUGGY (e.g., image, video, 3D model or sound) is associated with an asset—with a dedicated web page—which contains extended information such as title, description, location, license, tags, number of likes, comments, etc. While creating a new audio asset, the user might decide to use PlugSonic Sample to edit the audio file and enhance the sound. The next step would be to create the new Soundscape exhibition using PlugSonic Soundscape Create. Upon loading the app, the user can set the size and shape of the virtual environment (which might relate to a real physical space or not) and search for and retrieve the desired audio files. The sound sources are imported into the soundscape and represented as circles in the virtual room. The curation process includes setting the position of each sound source, together with many other options controlling the sources' sound and the interaction between sources and user (see Section 4.2 for details). As PlugSonic Soundscape Create allows for playback and navigation, the user can listen to the soundscape and test the exhibition at any moment during the curation. After these curation steps are completed, the user can save and publish the exhibition. At this point, other users can explore the soundscape using their own device (personal computer, mobile or tablet). Some might choose to do so from home, using Soundscape Experience Web to visit the exhibition in the virtual environment. Others might be in the real space (e.g., gallery, museum or archaeological site), or might simply decide to explore the soundscape through physical movements within a space, and would therefore choose the Soundscape Experience Mobile app.
During the design process—and whenever a decision needed to be taken about user experience/interface, accessibility and ease of use—the authors' intent was to follow the principle stated in Article 4 of the Convention on the Value of Cultural Heritage for Society (Faro Convention, 2005) [44]: "Everyone, alone or collectively, has the right to benefit from the cultural heritage and to contribute towards its enrichment." Therefore, unlike most of the tools described in the previous sections, the focus of our project on inclusivity and participation required us to develop intuitive and immediate tools, usable by anyone without specific training, to ensure a true impact on cultural heritage.

4.1. PlugSonic Sample

As explained in the introduction, PlugSonic Sample can be used to edit and apply audio effects to any audio file uploaded to the PLUGGY social platform. The modified file can then be saved into the social platform and used in any exhibition (e.g., Blog Story, Time Line and Soundscape).
Figure A1 shows the UI of the web application, integrated into the PLUGGY social platform. The UI is divided into three main parts: (1–2) waveform display with mouse navigation and selection functions; (3–14) playback and edit controls, together with buttons to export the audio file and save the modifications; and (15–18) filters and effects menu.
For a complete description of the controls and features implemented in Sample, we refer the reader to Appendix A.

4.2. PlugSonic Soundscape Create

As illustrated in the previous sections, PlugSonic Soundscape Create was developed for the curation of interactive 3D audio narratives and soundscapes. To do so, a user proceeds with the creation of a new exhibition through the PLUGGY social platform. After selecting Soundscape as the exhibition's type and setting its title and description, the user is presented with the Soundscape Create UI, allowing them to proceed with the curation.
The UI, shown in Figure A2, is divided into three main sections: (1–3) the "room", which shows the physical/virtual space described in the soundscape and includes icons that represent sound sources and listener; (4–7) the top bar, hosting playback control buttons; and (8–12) the dismissible side menu, containing all the controls and options to modify the soundscape.
The curation process would typically proceed as follows. The user starts by setting the shape (round/rectangular) and size (width/depth/height) of the room and, if desired, chooses an image to be used as the floor-plan (room tab—Figure 3C). After uploading the sound sources—using the search tab (Figure 3A)—the user sets the options for each sound source (sound sources tab—Figure 3B). Appendix B includes a complete description of all the options available for each sound source; here we limit our description to what we believe are the most interesting and useful controls: Position—absolute or relative to the listener; Loop—to choose whether the sound source will loop or play only once; Reach—to control the interaction between listener and sound source (when ON, the listener will be able to hear the sound source only when inside the interaction area); Reach radius—to set the size of the interaction area; and Timings—to set an order in the reproduction of the sound sources by constraining the playback of one to the reproduction of another.
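In essence, the reach mechanism is a distance test between listener and source, with the configured behaviour deciding what happens at the boundary of the interaction area. The sketch below illustrates this logic under assumed names and structures; it is not PlugSonic's actual implementation.

```typescript
// Sketch of the reach logic under assumed names; not PlugSonic's
// actual implementation.
interface ReachOptions {
  enabled: boolean;                             // Reach toggle
  radius: number;                               // Reach radius (metres)
  behaviour: "fadeInOut" | "startWhenEntering"; // Reach behaviour
  fadeDuration: number;                         // Fade duration (seconds)
}

interface SourceState {
  x: number;
  y: number;
  reach: ReachOptions;
}

// A source with reach enabled is audible only while the listener is
// inside the interaction area: a circle of `radius` centred on the source.
function isListenerInReach(src: SourceState, lx: number, ly: number): boolean {
  if (!src.reach.enabled) return true; // no interaction area: always audible
  return Math.hypot(src.x - lx, src.y - ly) <= src.reach.radius;
}

// With "fadeInOut", playback runs from the start and the engine ramps the
// source's gain over fadeDuration seconds as the listener crosses the
// boundary; with "startWhenEntering", playback is triggered on entry.
```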
At any point during the soundscape’s creation, the user can explore the soundscape to verify the results. It is also possible to take a recording while navigating the soundscape in real-time and export it as a .wav audio file, which will include all the properties of the 3D audio rendering in a standard stereo audio file.
The exhibition's properties (Figure A2) can be modified if necessary, and the soundscape saved to the social platform and published. Buttons are also available to export the soundscape's metadata. This allows the user to keep a local copy of the soundscape. Furthermore, the metadata can be imported into any application capable of interpreting it. In the case of PlugSonic, the metadata can be imported back into Soundscape Create or opened with Soundscape Experience Mobile (see Section 4.4). There are two formats available to export the soundscape: the simple metadata, which requires an internet connection to access PLUGGY's social platform and retrieve the audio files; or the metadata including the assets, in which case the soundscape can be experienced off-line, as the audio files' data is embedded in the exported soundscape file.
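The exported format itself is internal to PlugSonic, but the information described above could be captured by a structure along the following lines; all field names here are illustrative assumptions, not the actual schema.

```typescript
// Hypothetical shape of an exported soundscape; illustrative only.
interface SoundscapeMetadata {
  title: string;
  description: string;
  tags: string[];
  room: {
    shape: "rectangular" | "round";
    width: number;  // metres
    depth: number;
    height: number;
  };
  sources: Array<{
    assetId: string;                      // reference into the PLUGGY platform
    position: { x: number; y: number; z: number };
    loop: boolean;
    reach: { enabled: boolean; radius: number };
    // Present only in the "metadata + assets" export: the audio data is
    // embedded so the soundscape can be experienced off-line.
    audioData?: string;                   // e.g., base64-encoded audio
  }>;
}
```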
The UI presented here is the result of a complete redesign and extension, as well as the outcome of the experts' evaluation described in [45]. For a complete description of the controls and features available in PlugSonic Soundscape Create, we refer the reader to Appendix B.

4.3. PlugSonic Soundscape Experience Web

The Soundscape Experience web app was developed to allow the navigation of Soundscape exhibitions using any device capable of running a web browser (PC, laptop, tablet or mobile). The app's UI is the same as that of Soundscape Create, stripped of all the features that allow the user to modify the soundscape—the only controls available are the playback and record buttons, the access to the touch arrow controls, and the listener's options (Figure 3D). Furthermore, the exhibition's title, description and tags are visible but not editable (Figure 3E). Soundscape Experience is loaded when a user clicks on the view button on the exhibition's page within the social platform.

4.4. PlugSonic Soundscape Experience Mobile

The Soundscape Experience Mobile application has been designed with two main goals: first, to enable users to navigate soundscapes using a natural, touch-based interface at home or on the go; and second, to allow users to explore soundscapes in an immersive virtual experience delivered in real-world environments. These goals are accomplished by providing two separate interaction modes.
In the first mode (Figure 4a), users can explore the soundscape by moving the listener's icon using their finger and can change orientation by rotating the device. The 3D audio simulation is updated in real-time according to the listener icon's current position.
The second interaction mode aims to provide an immersive experience by enabling users to explore a soundscape according to their movements in the real world. To achieve this goal, we use ARKit [43], a technology developed by Apple that supports detecting and tracking planar surfaces by analysing video frames captured by a device's camera and data collected by inertial sensors. The framework provides anchors in a real-world environment that can be used to determine the relative position of the device with respect to the detected plane. This position is used to update the 3D audio simulation in real-time (Figure 4b).
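Conceptually, each tracked frame reduces to expressing the device's pose relative to the detected plane's anchor and feeding the result to the audio renderer as the listener's pose. The sketch below illustrates only this mapping; the actual app performs it natively in Swift via ARKit's tracked transforms, and all names here are illustrative assumptions.

```typescript
// Illustrative pose mapping; the actual app performs this natively in
// Swift using ARKit's tracked transforms. All names are assumptions.
interface Pose {
  x: number; y: number; z: number; // metres, in the tracking world frame
  yaw: number;                     // heading in radians
}

// Express the device's tracked pose relative to the detected plane's
// anchor (assumed axis-aligned here for simplicity); the result is fed
// to the 3D audio renderer as the listener's pose on every frame.
function listenerPose(anchor: Pose, device: Pose): Pose {
  return {
    x: device.x - anchor.x,
    y: device.y - anchor.y,
    z: device.z - anchor.z,
    yaw: device.yaw - anchor.yaw,
  };
}
```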
Soundscapes can be loaded from the PLUGGY social platform using the QR code reader included in the app or by importing the metadata. The app includes buttons to reset the listener's orientation, play and stop the playback, and choose the HRTFs. It also allows some control over the convolutional reverb settings. The reverb uses impulse responses for three standard room sizes available from the 3D Tune-In Toolkit [24], but it could be extended to allow importing other impulse responses.
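Convolution reverb itself amounts to convolving the rendered signal with a measured impulse response (IR). The following is a generic browser-side sketch of the technique; the mobile app's reverb runs natively in the 3D Tune-In Toolkit, so this is an illustration rather than the app's code.

```typescript
// Generic Web Audio sketch of convolution reverb: the signal routed
// through the ConvolverNode is convolved with a measured impulse
// response, adding the captured room's acoustics.
async function makeReverb(
  ctx: AudioContext,
  irUrl: string // impulse response, e.g., for a small/medium/large room
): Promise<ConvolverNode> {
  const response = await fetch(irUrl);
  const ir = await ctx.decodeAudioData(await response.arrayBuffer());
  const convolver = ctx.createConvolver();
  convolver.buffer = ir;
  return convolver;
}
```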

5. Evaluation

The evaluation of PlugSonic focused on the Soundscape Create and Soundscape Experience Mobile apps, which we considered the most critical in terms of contributions to both the spatial audio and cultural heritage research communities. The purpose of the evaluation was manifold: to understand whether users with no previous experience in sound design or cultural heritage content curation, and without previous knowledge and experience of 3D audio, could easily familiarise themselves with the apps and use the functionalities and features as they were designed; to see whether users could quickly and effectively use PlugSonic to recreate a 3D soundscape that maps onto a real physical space; and to find out whether spatial audio technologies in general, and the Experience Mobile app in particular, constitute a useful and practical way to design compelling experiences and improve engagement with and understanding of cultural heritage. Specifically, we intended to answer the following questions:
  • How would users create a soundscape using PlugSonic?
  • Could users easily find functionalities/features they need?
  • Did users understand all functionalities/features offered in PlugSonic?
  • If not, what issues did participants face?
  • How long did it take them to successfully use the functionality/feature?
  • How easy or difficult was it for users to recreate a 3D soundscape experienced in a real physical space?
The evaluation was arranged in three parts. In the first part, participants created a soundscape following predefined tasks provided to them. In the second part, the same participants listened to a soundscape set up in a real physical space and were then asked to recreate it using Soundscape Create. In the third part, another group of participants was asked to explore the soundscape created for part 2 using Soundscape Experience Mobile. We discuss the details further below.

5.1. Part 1—Soundscape Creation

5.1.1. Methodology

In the first part, participants were asked to create a soundscape for one of the institutions involved in the project, the Open-Air Water Power Museum in Dimitsana (Greece). They were asked to use the Soundscape Create web app together with the material (images and sounds) already available on the PLUGGY social platform. Initially, participants were given a verbal introduction to the PLUGGY project and shown the PLUGGY social platform website, how it is organised and how to navigate it. They were then given 5 min to get familiar with the application, explore the different features and ask questions for clarification whenever necessary. Participants were then given 12 tasks (Table 1) that would lead to a complete Soundscape exhibition (Figure 5). Participants were asked to think aloud and, after each task, to rate how easy it was on a 7-point Likert scale—with 1 as extremely difficult and 7 as extremely easy (Single Ease Question (SEQ)). The time required to complete each task was measured and compared with baseline times from the first author. Observations were taken throughout the experiment. We recruited 5 participants for part 1: 2 males and 3 females, young academics in their mid-20s to mid-30s. All participants reported having normal hearing and no previous experience with 3D soundscapes.

5.1.2. Results

Overall, most of the tasks in part 1 were easy for participants to complete, and the average rating was 6.4 (out of 7). See Table 1 for the average ease score and completion time per task.
However, participants faced some difficulties with 3 sub-tasks. For example, users had difficulties adding a new sound source. One participant did not use the My Assets option for the search. One participant exited the app to search in the "My Assets" page of the social platform and then tried the Import button, which is meant to be used for Soundscape metadata (see Section 4.2). Once assets were found, 2 participants tried to drag and drop the sound source button from the search panel onto the room instead of clicking on it. Once an asset was added, one participant could not see the icon of the source, as it was masked by the floor-plan image. Participants also had difficulties understanding the hinting system, as it was perceived as not giving the right suggestions based on the word being typed. Two participants could not set up the reach of the sound, and 2 other participants had issues with the slider precision.
We also observed difficulties when users were asked to set up a sound source so that it would start playing when the user enters its area of reach. One participant struggled to find the drop-down menu, and it was not clear to them what "start when entering" meant. One participant had difficulty seeing where a new sound source panel started and commented that it was too easy to accidentally click on the delete button.
Finally, issues were observed when adding tags to the exhibition. One participant could not find the Exhibition tab, whereas another struggled to find the Tags field and checked through all the tabs. Once the field was found, one participant did not press Enter to add the tag, as the call to action was not clear.

5.2. Part 2—Soundscape Curation

5.2.1. Methodology

In the second part, participants were asked to become curators and use what they had learnt about the Soundscape Create app in part 1 to recreate, as faithfully as possible, a virtual soundscape from a real one. For this purpose, we set up a room (rectangular—14 × 16 × 6 m) to look like it was part of an exhibition about space exploration. Specifically, we prepared an installation about the Voyager space program. We collaborated with a professional musician who composed and recorded a 5 min piece of music to use as accompaniment to speech excerpts from the Golden Record [46]. The content was mixed into 6 tracks, each of which was assigned to 1 of 6 loudspeakers placed around the room. A MaxMSP patch was used to control two laptops (a master and a slave), each one driving 3 loudspeakers. The patch allowed the assignment of tracks to speakers, the setting of each track's volume and the control of playback, and used the Open Sound Control (OSC) protocol for communication between the master and slave laptops.
Participants were first asked to listen to and observe the real soundscape; they were also given a floor-plan of the room which they could use to take notes about the setup. Participants were asked to pay attention to the following: (1) the match between tracks and loudspeakers (each loudspeaker was labelled with the track assigned to it), (2) the loudspeakers' positions (e.g., position in the room and height) and (3) the sound sources' reach (i.e., from how far away the sound of each loudspeaker could be heard). Participants were allowed to listen to the soundscape as many times as needed; after noting down all the information they needed, they were asked to recreate the soundscape using the material (images and sounds) already available on the PLUGGY social platform. After completing the task, participants were asked to fill in the System Usability Scale (SUS) questionnaire [47] (Table 2) to measure usability. We used a 7-point Likert scale from strongly disagree (1) to strongly agree (7). Furthermore, to score how likely users would be to recommend PlugSonic to others, we adopted the net promoter score (NPS) [48], which is defined as the percentage of "promoters" (users rating their likelihood as 9 or 10 on a scale from 0 to 10) minus the percentage of "detractors" (6 or below); users rating 7 or 8 are considered "passive". Therefore, the NPS ranges from −100 (all detractors) to +100 (all promoters). We recruited 5 participants for part 2 (the same ones who participated in part 1).
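For clarity, the NPS computation just described can be expressed in a few lines; applied to the part-2 ratings reported in Section 5.2.2 (one 6, two 8s and two 9s), it yields the score of 20.

```typescript
// Net promoter score: percentage of promoters (ratings of 9-10) minus
// percentage of detractors (ratings of 0-6); 7-8 count as passive.
function netPromoterScore(ratings: number[]): number {
  const promoters = ratings.filter((r) => r >= 9).length;
  const detractors = ratings.filter((r) => r <= 6).length;
  return Math.round(((promoters - detractors) / ratings.length) * 100);
}

// Part-2 ratings reported in Section 5.2.2: one 6, two 8s, two 9s.
console.log(netPromoterScore([6, 8, 8, 9, 9])); // => 20
```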

5.2.2. Results

Overall, participants agreed that the application is easy to use and felt confident using it, but there is room for improvement with regard to the clarity of features, the iconography and the integration of features.
Specifically, participants found the visual and interactive aspects of PlugSonic most interesting, as they give quick results and a pleasant user experience. They also liked the possibility to test the soundscape and listen to the binaural audio directly in the Create app. What participants did not like about the application is that it first requires some level of understanding of the available features and icons. Furthermore, some tasks were judged as being "too manual" and requiring "too many clicks" to complete. Additional features were considered desirable, such as being able to save the settings of a single sound source and automatically close its panel, or to choose the colour of the sources' icons.
Participants were neutral on whether they would use the system frequently. This is consistent with their rating of 6 (out of 10) for the question of how likely they would be to use any of the curation tools and add stories to the social platform. Although they see the novelty in the interactivity, the main reason for these scores is that they do not work in the sector specifically or are not used to creating content.
For the NPS score: among the participants we had 1 detractor (one participant rated a 6), 2 passives (2 participants rated an 8) and 2 promoters (2 participants rated a 9). This results in an overall NPS score of 20, which is good. The list of questions with the average ratings is shown in Table 2.

5.3. Part 3—Soundscape Experience

5.3.1. Methodology

In part 3, participants were asked to use the PlugSonic Experience Mobile app to explore a soundscape within a real environment. The soundscape and physical space were the same used for part 2. The room was set up with photos and a video projection about the Voyager exploration program (Figure 6). Participants were asked to imagine visiting a museum; as part of the visit they were given a mobile device (iPad) which they could use to explore a soundscape. They were invited to explore the soundscape freely for as long as they wished while moving around the room. No specific recommendations were provided apart from the indication to point the device's camera in the direction they were facing—as the camera and inertial sensors were used by the device to infer position and orientation, which in turn were used by the mobile app to render the audio in 3D. They were also advised to ignore the device's screen, as no information was going to be displayed. After exploring the soundscape, participants were asked to fill in a questionnaire about their emotional response—selecting among 11 options (indifferent, interested, uninspired, bored, excited, captivated, engaged, disappointed, satisfied, neutral and frustrated) or adding their own. They were also asked to answer questions about social potential and learning while using the app—using a 7-point Likert scale (with 1 strongly disagree and 7 strongly agree). We recruited 7 participants for part 3: 5 males and 2 females in their mid-20s to mid-30s.

5.3.2. Results

During this experience, most participants indicated feeling "interested" (7 participants), "engaged" (6 participants) and "captivated" (6 participants). Some indicated having felt "excited" (2 participants), "indifferent", "satisfied", "inspired", "immersed", "calm", "overwhelmed" or "confused". The average score for personal resonance and emotional connection was 5.2 out of 7 (see Table 3a). The average score for learning and intellectual stimulation was 4.4 (see Table 3b). The average score for shared experience and social connectedness was 5.8 (see Table 3c). This suggests that future improvements should aim to increase the scores for personal resonance and emotional connection, and for learning and intellectual stimulation. It is important to underline that the soundscape adopted for the evaluation did not contain any speech or narration describing the material presented, which might help to explain the score for learning and intellectual stimulation.

5.4. Discussion

The evaluation aimed at understanding whether PlugSonic (1) is easy to learn and use for people unfamiliar with 3D audio in general and the concept of soundscape in particular, (2) is an effective tool for the creation of 3D soundscapes from and for real environments and (3) has the potential to improve engagement with and understanding of cultural heritage.
Overall, creating a soundscape was rated 6.4 (out of 7) for ease. Participants felt confident using PlugSonic Soundscape and found the interactive aspect of creating the soundscape most interesting and engaging. Some difficulties were observed, indicating that improvements could be made to the adding and setting up of new sound sources and the tagging of exhibitions. More generally, further improvements are possible to make the apps more streamlined and speed up the creation process. At the time of writing, the authors have already updated PlugSonic Soundscape Create based on the feedback obtained during the evaluation. For example, the reach radius slider visibility has been improved; visual feedback has been added when the listener is in reach of a source (the reach area changes colour); the precision of all sliders has been set to one decimal point; the separation between sources' panels has been made clearer; and a "Loading..." message has been added when retrieving a sound source.
With a net promoter score of 20, participants are likely to recommend PlugSonic Soundscape to others. However, one of the most critical aspects highlighted during the evaluation was users' resistance to becoming content creators. It seems clear that efforts are still necessary from research communities and institutions to empower citizens and provide them with more active participation in the act of sharing and protecting cultural heritage.
Additionally, for the experience part of PlugSonic Soundscape, positive results were obtained in terms of personal resonance as well as shared experience and social connectedness. Future improvements should be made to better support emotional connection and intellectual stimulation. However, it needs to be taken into account that the evaluation did not take place in an actual museum with real artefacts being exhibited.
The main limitation in the evaluation of PlugSonic was the number of participants. Only a total of 12 participants were involved: 5 for parts 1 and 2 and 7 for part 3. Adding the baseline completion times in Table 1 helped to partially compensate for this. Although the apps would benefit from extended testing and evaluation, similar tools [49] have been evaluated with a similar number of participants (10).
We can conclude that, even accounting for its current limitations, the PlugSonic Suite represents a major contribution to the digital heritage community, including both end users and researchers. Our evaluation with inexperienced participants showed that the apps and the creation, curation and experience processes, with the support of the PLUGGY social platform, were easily understood and effectively utilised. The speed with which participants were able to familiarise themselves with PlugSonic Soundscape Create and perform tasks shows that the design and implementation choices worked to simplify the learning process. Participants appreciated the look and feel of the app and the interaction with it, and did not feel limited when recreating a virtual soundscape from a real one. This was a particularly important aspect for the project, proving that 3D audio technologies can indeed be democratised without sacrificing rendering quality and interaction flexibility. Furthermore, the Experience Mobile app was well received; there was no particular friction in the navigation of the soundscape within a real space. From the questionnaire answers, it seems clear that 3D audio soundscapes in general, and PlugSonic in particular, can help cultural institutions in their mission to deliver engaging experiences and connect the public with cultural heritage.

6. Conclusions and Future Work

This paper presents the design, development and evaluation of a series of web and mobile applications—the PlugSonic Suite—for the curation and experience of 3D interactive soundscapes and audio narratives. These apps can be used to edit sound files and to create, test and experience soundscapes in a low-friction environment that allows users to transition from web-browser navigation to physical-space navigation when desired.
The umbrella project (PLUGGY) includes the development of a social platform and several apps (AR, 3D Audio, Geolocation and Gamification) to provide users with the necessary tools to shape cultural heritage, both as curators and as visitors of virtual or augmented exhibitions.
In Section 2, we defined sonic narratives and soundscapes, and introduced the binaural spatialisation technique and modern web-based technologies. We also described the state of the art for sonic narratives, with special attention to cultural heritage applications. Even if research in these fields has produced significant advances, no specific tools have been developed to democratise spatial audio technologies and encourage the general public to adopt them to impact tangible and intangible heritage fruition and dissemination.
PlugSonic aims to demonstrate that spatial audio technologies and software can be designed to be accessible to anyone, without having to compromise on the binaural rendering quality or the flexibility of interactions available to content creators, whether institutions or the general public. After introducing the technology used for the development, we described the features and functionalities included in the apps—which were, in part, the result of previous work with experts in the fields of cultural heritage and/or audio [45].
To understand whether our aims were achieved, we conducted an evaluation with subjects without previous experience of or knowledge about 3D audio or soundscape design. The evaluation included three parts: creation, curation and experience of soundscapes with the PlugSonic Soundscape Create web and PlugSonic Soundscape Experience Mobile applications. Participants were able to learn to use the apps quickly and effectively. They judged the task of creating a soundscape to be fairly simple and appreciated the level of interactivity and the possibility to test the results during the creation. Good results were also obtained for the task of curating a virtual soundscape starting from a real one, which shows how PlugSonic can indeed be effectively utilised to convey the atmosphere of a "real-world" situation. Limitations were highlighted in adding and setting up new sound sources and tagging exhibitions. A critical aspect observed in the evaluation was the resistance of users to becoming content creators. Further efforts seem necessary from research communities and institutions to empower citizens and allow them a more active participation in the act of sharing and protecting cultural heritage. The Experience Mobile app, exploiting Apple's ARKit [43] for the localisation of the user within a soundscape in a physical space, showed the potential of augmented exhibitions in increasing emotional resonance and connectedness with cultural heritage.
The feedback received during the evaluation was used to improve the user interface and user experience, but further work is necessary to simplify and speed up the soundscape creation process. The apps could also be improved by introducing moving sound sources, for an even more dynamic experience, and radiation patterns, for a more realistic simulation of sound directivity or to emulate occlusion effects. The Soundscape Create app already allows the user to choose the shape of the virtual environment, limited to rectangular and round spaces, and to use an image as the floor-plan or background of a soundscape. Further improvements would include custom room shapes and the option to set movement constraints.
Furthermore, we think that PlugSonic also has the potential to become a web- and mobile-based evaluation tool for 3D audio research. In fact, even if web-based audio evaluation tools are available [50,51], none focuses specifically on spatial audio topics (e.g., HRTF selection, HRTF adaptation, speech reception threshold and the cocktail party effect). Moreover, the hearing loss and hearing aid simulation algorithms available from the 3D Tune-In Toolkit would also allow for hearing-impairment-specific tests. PlugSonic could be used within a listening test framework that lets researchers easily design different types of online tests (e.g., AB, ABX and MUSHRA). Both the web- and mobile-based apps could be exploited for localisation tests or games within a virtual or physical space. To conclude, it is also worth highlighting that PlugSonic could continuously improve and be extended as a direct result of the constant development of the underlying technologies (WAA, 3D Tune-In Toolkit and ARKit).
Cultural institutions have started to adopt social media to promote events and stimulate participation, but cannot rely on a common ground when it comes to communication channels and tools to involve their audience. Being developed within the framework of the PLUGGY social platform, PlugSonic could help to bridge the gap between the general public and cultural institutions in an effort to encourage participation, co-creation and sharing of cultural heritage; especially because PlugSonic does not require the development or installation of software or hardware, and can be used on any device. Anyone could—while visiting a museum or a monument—retrieve audio narratives from the social platform servers and experience them straight away.

7. Links

(all links accessed on 8 February 2021)

Author Contributions

Conceptualization, L.P., M.C. and A.G.; methodology, M.C., L.P. and A.G.; software, M.C. and A.G.; validation, M.C. and L.P.; formal analysis, V.L. and M.C.; investigation, M.C.; resources, M.C. and L.P.; data curation, M.C. and L.P.; writing—original draft preparation, M.C.; writing—review and editing, M.C., A.G. and L.P.; visualization, M.C.; supervision, L.P.; project administration, L.P. and M.C.; funding acquisition, L.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the PLUGGY project (https://www.pluggy-project.eu/ (accessed on 8 February 2021)), European Union’s Horizon 2020 research and innovation programme under grant agreement No 726765.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Imperial College Research Ethics Committee (reference: 17IC4197; approved in October 2017).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data supporting reported results: https://doi.org/10.5281/zenodo.4513876 (accessed on 8 February 2021).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. PlugSonic Sample UI Controls Details

With reference to Figure A1, the user interface includes the following.
  1. Waveform of the audio file with the filename in the top left corner
  2. Use of the mouse to control the playback start and end points and to select parts of the waveform to be modified
  3. Play button to reproduce the whole audio file or the selected part
  4. Stop button to stop reproduction
  5. Undo button to cancel edit actions
  6. Cut button to cut part of the waveform
  7. Copy button to copy part of the waveform
  8. Paste button to paste the cut/copied part of the waveform
  9. Mute button to mute part of the waveform
  10. Fade In button to apply a volume fade-in to the selection
  11. Fade Out button to apply a volume fade-out to the selection
  12. Filters button to open the filters/effects menu (15)
  13. Export button to save the modified audio file to the user's device
  14. Save button to save the modified audio file to the PLUGGY social platform
  15. Audio Filters/Effects menu
  16. Equaliser panel, including lowpass, highpass, bandpass, lowshelf, highshelf, peaking and notch filters
  17. Compressor effect panel with threshold, knee, ratio, attack and release controls
  18. Reverb effect panel, including small, medium and large room reverbs
Figure A1. PlugSonic Sample user interface.
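Since Sample's processing is built on the Web Audio API, the equaliser filter types and compressor controls listed above map directly onto the WAA's BiquadFilterNode and DynamicsCompressorNode. The sketch below shows a generic chain of this kind; it illustrates the underlying API, not Sample's actual code.

```typescript
// Generic WAA effects chain matching the controls listed above:
// an equaliser stage (BiquadFilterNode) into a compressor
// (DynamicsCompressorNode with threshold/knee/ratio/attack/release).
function buildEffectsChain(ctx: AudioContext): { input: AudioNode } {
  // Equaliser: type can be any of lowpass/highpass/bandpass/lowshelf/
  // highshelf/peaking/notch, i.e., the BiquadFilterNode types.
  const eq = ctx.createBiquadFilter();
  eq.type = "peaking";
  eq.frequency.value = 1000; // Hz
  eq.gain.value = 3;         // dB boost around 1 kHz

  const comp = ctx.createDynamicsCompressor();
  comp.threshold.value = -24; // dBFS
  comp.knee.value = 30;
  comp.ratio.value = 4;
  comp.attack.value = 0.01;   // seconds
  comp.release.value = 0.25;

  eq.connect(comp).connect(ctx.destination);
  return { input: eq };      // connect a source node to this input
}
```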

Appendix B. PlugSonic Soundscape Create UI Controls Details

With reference to Figure A2, the user interface includes:
  • Virtual room used for the curation of the soundscape
  • Listener’s icon representing position and orientation of the listener in the virtual environment
  • Sound sources’ icons representing position of each sound source in the virtual environment
  • Play button
  • Stop button
  • Record button to record and export a .wav file of the rendered 3D audio
  • Touch arrows button to open a panel showing touch arrow controls (necessary to navigate the soundscape on a touchscreen-based device)
  • Search tab. To search and retrieve audio sources from the social platform (Figure 3A):
    • Search text field
    • Dropdown menu to search among the user’s audio assets (My Assets) or the whole social platform (All Pluggy)
    • Dropdown menu to choose the ordering of the results
    • Search button
    • One button for each sound found by the search. Upon clicking a button the source is retrieved and added to the soundscape.
  • Sound sources’ tab. Each sound source has the following options (Figure 3B):
    • On/off toggle to activate/deactivate the sound
    • Volume slider
    • Position options:
      • Relative to listener toggle. To set the sound’s position in an absolute or relative fashion
      • Position sliders. X/Y/Z for absolute positioning or Angle/Distance for relative positioning
    • Loop toggle to choose if the sound source will loop or play only once
    • Spatialisation toggle to turn on/off the spatialisation engine. When off, the source is reproduced as a mono/stereo file, depending on the original file format
    • Reach options:
      • Reach toggle—to turn on/off the interaction area. The interaction area (in yellow in Figure A2) is used to control the interaction between listener and sound source. When on, the listener will be able to hear the specific sound source only when they are inside the interaction area.
      • Reach radius slider—to choose the size of the interaction area
      • Reach behaviour dropdown menu—to choose the type of action the app will perform when entering/exiting the interaction area.
        • Fade in and out—playback will start as the user clicks the Play button but the source’s volume will fade in/out as the listener enters/exits the interaction area
        • Start when entering—Playback will start as the listener enters the interaction area
      • Fade duration slider—to set the volume’s fade in/out duration
    • Timings dropdown menu—to set an order in the reproduction of the sound sources. The reproduction of a specific source can be constrained to the start of another one.
    • Hidden toggle—to hide the sound source in the Soundscape Experience apps so that the user cannot see the source’s position on the screen
    • Delete button—to delete the sound source from the soundscape
  • Room options tab. The room has the following options (Figure 3C):
    • Room Shape dropdown menu (rectangular or round)
    • Room Size text fields (Width/Depth/Height)
    • Room floorplan—to search and select an image asset to be used as the soundscape’s floor-plan.
    • Reset listener position button—to reset the listener’s position to coordinate (0, 0)
  • Listener’s options tab—to set options regarding the 3D audio rendering engine. The listener’s options are the following (Figure 3D):
    • Performance mode toggle. When on, the performance mode is enabled, which requires less computational effort and allows rendering on low-performance devices.
    • HRTF function dropdown menu—to select the head related transfer functions to be used for the 3D sound rendering
    • HRTF sample length dropdown menu—to choose between 128/256/512 samples long HRTFs
  • Exhibition tab—to set the exhibition’s options, save and publish the exhibition (Figure A2). It includes:
    • Title text field—to set the exhibition’s title
    • Description text field—to add a description of the soundscape
    • Tags text field and icons—to add/delete tags to the exhibition
    • Save button—to save the exhibition in the social platform
    • Publish/Unpublish button—to make the exhibition available or not to the social platform’s users.
    • Import button—to import a previously exported soundscape in either format (metadata or metadata + assets)
    • Export metadata button—to export the exhibition's metadata only (access to the PLUGGY social platform is required to retrieve the audio files)
    • Export metadata+assets button—to export the exhibition's metadata and the audio assets in one file (access to the PLUGGY social platform is not required, so the soundscape can be experienced offline). An illustrative sketch of a possible metadata structure is given after this list.
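
The reach options described above reduce to a distance test between the listener and each source, with the per-source gain faded over the chosen duration. The following minimal sketch (TypeScript, using the standard Web Audio API GainNode) illustrates how the "Fade in and out" behaviour could be realised; all type and function names are our own illustration, not PlugSonic's actual implementation.

```typescript
// Minimal sketch of the "reach" gating logic, assuming hypothetical types.

interface Vec2 { x: number; y: number; }

interface SourceState {
  position: Vec2;       // source position in room coordinates (metres)
  reachRadius: number;  // size of the interaction area (metres)
  fadeDuration: number; // fade in/out duration (seconds)
  gainNode: GainNode;   // per-source gain feeding the spatialisation engine
  inside: boolean;      // was the listener inside the area at the last update?
}

function distance(a: Vec2, b: Vec2): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Called on every listener movement: fades the source in when the listener
// enters its interaction area and out when the listener leaves it.
function updateReach(ctx: AudioContext, listener: Vec2, src: SourceState): void {
  const inside = distance(listener, src.position) <= src.reachRadius;
  if (inside !== src.inside) {
    const gain = src.gainNode.gain;
    gain.cancelScheduledValues(ctx.currentTime);
    gain.setValueAtTime(gain.value, ctx.currentTime); // anchor the ramp at the current value
    gain.linearRampToValueAtTime(inside ? 1 : 0, ctx.currentTime + src.fadeDuration);
    src.inside = inside;
  }
}
```

In "Start when entering" mode, the same boundary test would instead trigger the source's playback on the first entry, rather than ramping the gain of an already-playing source.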
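For readers unfamiliar with browser-based spatial audio, the sketch below shows the general principle of positioning and binaurally rendering a source in the browser, using the Web Audio API's built-in PannerNode with its 'HRTF' panning model. PlugSonic instead performs spatialisation with the 3D Tune-In Toolkit (the built-in node exposes neither the HRTF selection nor the sample-length options listed above), so this is a generic illustration only; the asset URL and function name are hypothetical.

```typescript
// Generic Web Audio API spatialisation sketch (not PlugSonic's 3D Tune-In
// based engine): fetch an audio asset, decode it, and render it binaurally
// at a fixed position relative to the listener.

const ctx = new AudioContext();

async function playSpatialised(url: string, x: number, y: number, z: number) {
  const response = await fetch(url);
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = true; // equivalent of the "Loop toggle" above

  const panner = ctx.createPanner();
  panner.panningModel = 'HRTF';     // binaural rendering for headphone playback
  panner.distanceModel = 'inverse'; // distance-based attenuation
  panner.positionX.value = x;       // source position in the listener's space
  panner.positionY.value = y;
  panner.positionZ.value = z;

  source.connect(panner).connect(ctx.destination);
  source.start();
}

// Example: a hypothetical asset placed 2 m to the listener's right.
playSpatialised('https://example.org/assets/bells.mp3', 2, 0, 0);
```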
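Finally, to clarify the difference between the two export formats: with "metadata only", audio is referenced through the PLUGGY social platform and fetched on demand, whereas "metadata + assets" bundles the audio files into the exported file for offline use. The TypeScript interface below sketches what such exported metadata might contain; every field name here is hypothetical and does not reflect PlugSonic's actual schema.

```typescript
// Illustrative (hypothetical) shape of an exported soundscape exhibition.

interface SoundscapeMetadata {
  title: string;
  description: string;
  tags: string[];
  room: {
    shape: 'rectangular' | 'round';
    width: number;  // metres
    depth: number;
    height: number;
    floorplanAssetId?: string; // optional floorplan image
  };
  sources: Array<{
    assetId: string;    // reference resolved via the PLUGGY social platform
    audioData?: string; // embedded audio, present only in "metadata + assets" exports
    position: { x: number; y: number; z: number };
    relativeToListener: boolean;
    volume: number;     // 0..1
    loop: boolean;
    spatialised: boolean;
    hidden: boolean;
    reach?: {
      radius: number;   // metres
      behaviour: 'fadeInOut' | 'startOnEnter';
      fadeDuration: number; // seconds
    };
    startsAfter?: string; // Timings: id of the source that must start first
  }>;
}
```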
Figure A2. PlugSonic Soundscape Create user interface.

References

  1. Fairclough, G.; Dragićević-Šešić, M.; Rogač-Mijatović, L.; Auclair, E.; Soini, K. The Faro convention, a new paradigm for socially-and culturally-sustainable heritage action? Culture 2014, 8, 9–19. Available online: https://journals.cultcenter.net/index.php/culture/article/view/111 (accessed on 27 January 2021).
  2. Russo, A. The rise of the media museum: Creating interactive cultural experiences through social media. In Heritage and Social Media. Understanding Heritage in a Participatory Culture; Routledge: London, UK, 2012; pp. 145–157. [Google Scholar] [CrossRef]
  3. Stuedahl, D.; Mörtberg, C. Heritage knowledge, social media, and the sustainability of the intangible. In Heritage and Social Media. Understanding Heritage in a Participatory Culture; Routledge: London, UK, 2012; pp. 106–125. [Google Scholar] [CrossRef]
  4. Lim, V.; Frangakis, N.; Tanco, L.M.; Picinali, L. PLUGGY: A pluggable social platform for cultural heritage awareness and participation. In Advances in Digital Cultural Heritage; Springer: Cham, Switzerland, 2018; pp. 117–129. Available online: https://doi.org/10.1007/978-3-319-75789-6_9 (accessed on 27 January 2021). [CrossRef] [Green Version]
  5. Russo, A.; Watkins, J.; Kelly, L.; Chan, S. Participatory communication with social media. Curator Mus. J. 2008, 51, 21–31. Available online: https://onlinelibrary.wiley.com/doi/abs/10.1111/j.2151-6952.2008.tb00292.x (accessed on 27 January 2021). [CrossRef]
  6. PLUGGY Project. Available online: https://www.pluggy-project.eu/ (accessed on 27 January 2021).
  7. Europeana. Available online: https://www.europeana.eu/portal/en (accessed on 27 January 2021).
  8. Katz, B.F.; Murphy, D.; Farina, A. The Past Has Ears (PHE): XR explorations of acoustic spaces as cultural heritage. In Lecture Notes in Computer Science. Proceedings of the International Conference on Augmented Reality, Virtual Reality and Computer Graphics, Lecce, Italy, 7–10 September 2020; Springer: Cham, Switzerland, 2020; pp. 91–98. Available online: https://doi.org/10.1007/978-3-030-58468-9_7 (accessed on 27 January 2021). [CrossRef]
  9. Brezina, P. Acoustics of historic spaces as a form of intangible cultural heritage. Antiquity 2013, 87, 574–580. Available online: https://doi.org/10.1017/S0003598X00049139 (accessed on 27 January 2021). [CrossRef] [Green Version]
  10. Dumyahn, S.L.; Pijanowski, B.C. Soundscape conservation. Landsc. Ecol. 2011, 26, 1327–1344. Available online: https://doi.org/10.1007/s10980-011-9635-x (accessed on 27 January 2021). [CrossRef]
  11. Kytö, M.; Rémy, N.; Uimonen, H.; Acquier, F.; Bérubé, G.; Chelkoff, G.; Said, N.G.; Laroche, S.; McOisans, J.; Tixier, N.; et al. European Acoustic Heritage. 2012. Available online: https://hal.archives-ouvertes.fr/hal-00993848 (accessed on 27 January 2021).
  12. Serafin, S.; Geronazzo, M.; Erkut, C.; Nilsson, N.C.; Nordahl, R. Sonic interactions in virtual reality: State of the art, current challenges, and future directions. IEEE Comput. Graph. Appl. 2018, 38, 31–43. [Google Scholar] [CrossRef] [PubMed]
  13. Meelberg, V. Narrative sonic ambiances. Designing positive auditory environments using narrative strategies. In Proceedings of the Euronoise 2018, Crete, Greece, 27–31 May 2018; pp. 845–849. Available online: http://hdl.handle.net/2066/192407 (accessed on 27 January 2021).
  14. Delle Monache, S.; Rocchesso, D.; Qi, J.; Buechley, L.; De Götzen, A.; Cestaro, D. Paper mechanisms for sonic interaction. In Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction, Kingston, ON, Canada, 19–22 February 2012; pp. 61–68. [Google Scholar] [CrossRef] [Green Version]
  15. Pijanowski, B.C.; Villanueva-Rivera, L.J.; Dumyahn, S.L.; Farina, A.; Krause, B.L.; Napoletano, B.M.; Gage, S.H.; Pieretti, N. Soundscape ecology: The science of sound in the landscape. BioScience 2011, 61, 203–216. Available online: https://doi.org/10.1525/bio.2011.61.3.6 (accessed on 27 January 2021). [CrossRef] [Green Version]
  16. Krakowsky, T. Sonic storytelling: Designing musical spaces. AdAge 2009. Available online: https://adage.com/article/on-design/sonic-storytelling-designing-musical-spaces/138028/ (accessed on 27 January 2021).
  17. Collins, P. Theatrophone: The 19th-century iPod. New Sci. 2008, 197, 44–45. [Google Scholar] [CrossRef]
  18. WebAudio API. Available online: https://www.w3.org/TR/webaudio/ (accessed on 27 January 2021).
  19. WebGL. Available online: https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API (accessed on 27 January 2021).
  20. Web Audio API Specification Proposal. Available online: https://www.w3.org/2011/audio/drafts/1WD/WebAudio/ (accessed on 27 January 2021).
  21. Poirier-Quinot, D.; Katz, B.F. The Anaglyph binaural audio engine. In Proceedings of the Audio Engineering Society Convention 144, Milan, Italy, 23–26 March 2018; Available online: http://www.aes.org/e-lib/browse.cfm?elib=19544 (accessed on 27 January 2021).
  22. Carpentier, T.; Noisternig, M.; Warusfel, O. Twenty years of Ircam Spat: Looking back, looking forward. In Proceedings of the 41st International Computer Music Conference (ICMC), Denton, TX, USA, 25 September–1 October 2015; pp. 270–277. Available online: https://hal.archives-ouvertes.fr/hal-01247594 (accessed on 27 January 2021).
  23. Musil, T.; Noisternig, M.; Höldrich, R. A library for realtime 3D binaural sound reproduction in Pure Data (PD). In Proceedings of the 8th International Conference on Digital Audio Effects (DAFx-05), Madrid, Spain, 20–22 September 2005; pp. 167–171. Available online: http://dafx.de/paper-archive/2005/P_167.pdf (accessed on 27 January 2021).
  24. Cuevas-Rodríguez, M.; Picinali, L.; González-Toledo, D.; Garre, C.; de la Rubia-Cuestas, E.; Molina-Tanco, L.; Reyes-Lecuona, A. 3D Tune-In Toolkit: An open-source library for real-time binaural spatialisation. PLoS ONE 2019, 14, e0211899. Available online: https://doi.org/10.1371/journal.pone.0211899 (accessed on 27 January 2021). [CrossRef] [PubMed]
  25. Ardissono, L.; Kuflik, T.; Petrelli, D. Personalization in cultural heritage: The road travelled and the one ahead. User Model. User-Adapt. Interact. 2012, 22, 73–99. Available online: https://doi.org/10.1007/s11257-011-9104-x (accessed on 27 January 2021). [CrossRef] [Green Version]
  26. Not, E.; Zancanaro, M. Content adaptation for audio-based hypertexts in physical environments. In Hypertext’98: Second Workshop on Adaptive Hypertext and Hypermedia, 1998; pp. 27–34. Available online: https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.57.1908 (accessed on 27 January 2021).
  27. Petrelli, D.; Not, E. User-centred design of flexible hypermedia for a mobile guide: Reflections on the HyperAudio experience. User Model. User-Adapt. Interact. 2005, 15, 303–338. Available online: https://doi.org/10.1007/s11257-005-8816-1 (accessed on 27 January 2021). [CrossRef] [Green Version]
  28. Benelli, G.; Bianchi, A.; Marti, P.; Not, E.; Sennati, D. HIPS: Hyper-interaction within physical space. In Proceedings of the IEEE International Conference on Multimedia Computing and Systems, Florence, Italy, 7–11 June 1999; pp. 1075–1078. Available online: https://doi.ieeecomputersociety.org/10.1109/MMCS.1999.778663 (accessed on 27 January 2021). [CrossRef]
  29. Zimmermann, A.; Lorenz, A. LISTEN: A user-adaptive audio-augmented museum guide. User Model. User-Adapt. Interact. 2008, 18, 389–416. Available online: https://doi.org/10.1007/s11257-008-9049-x (accessed on 27 January 2021). [CrossRef]
  30. Delerue, O.; Warusfel, O. Authoring of virtual sound scenes in the context of the Listen project. In Proceedings of the Audio Engineering Society Conference, 22nd International Conference: Virtual, Synthetic, and Entertainment Audio, Espoo, Finland, 15–17 June 2002. [Google Scholar]
  31. Zimmermann, A.; Lorenz, A.; Birlinghoven, S. Listen: Contextualized presentation for audio-augmented environments. In Proceedings of the 11th Workshop on Adaptivity and User Modeling in Interactive Systems, Bonn, Germany, 6–8 October 2003; pp. 351–357. [Google Scholar]
  32. IRCAM Spatialisateur. Available online: http://forumnet.ircam.fr/product/spat-en/ (accessed on 27 January 2021).
  33. Vayanou, M.; Katifori, A.; Karvounis, M.; Kourtis, V.; Kyriakidi, M.; Roussou, M.; Tsangaris, M.; Ioannidis, Y.; Balet, O.; Prados, T.; et al. Authoring personalized interactive museum stories. In Lecture Notes in Computer Science, Proceedings of the International Conference on Interactive Digital Storytelling, Singapore, 3–6 November 2014; Springer: Cham, Switzerland, 2014; pp. 37–48. Available online: https://doi.org/10.1007/978-3-319-12337-0_4 (accessed on 27 January 2021). [CrossRef]
  34. Pujol, L.; Katifori, A.; Vayanou, M.; Roussou, M.; Karvounis, M.; Kyriakidi, M.; Eleftheratou, S.; Ioannidis, Y. From personalization to adaptivity: Creating immersive visits through interactive digital storytelling at the acropolis museum. In Proceedings of the 9th International Conference on Intelligent Environments, Athens, Greece, 16–19 July 2013; pp. 541–554. Available online: https://doi.org/10.3233/978-1-61499-286-8-541 (accessed on 27 January 2021). [CrossRef]
  35. Hansen, F.A.; Kortbek, K.J.; Grønbæk, K. Mobile urban drama: Interactive storytelling in real world environments. New Rev. Hypermedia Multimed. 2012, 18, 63–89. Available online: https://doi.org/10.1080/13614568.2012.617842 (accessed on 27 January 2021). [CrossRef]
  36. Emotive. Available online: https://emotiveproject.eu/ (accessed on 27 January 2021).
  37. Roussou, M.; Ripanti, F.; Servi, K. Engaging visitors of archaeological sites through “emotive” storytelling experiences: A pilot at the ancient agora of Athens. Archeol. E Calc. 2017, 28, 405–420. Available online: https://doi.org/10.19282/AC.28.2.2017.33 (accessed on 27 January 2021). [CrossRef]
  38. Arches. Available online: https://www.arches-project.eu/ (accessed on 27 January 2021).
  39. Geier, M.; Spors, S. Spatial audio with the soundscape renderer. In Proceedings of the 27th Tonmeistertagung—VDT International Convention, Cologne, Germany, 22–25 November 2012; Available online: https://www.int.uni-rostock.de/fileadmin/user_upload/publications/spors/2012/Geier_TMT2012_SSR.pdf (accessed on 27 January 2021).
  40. Vaananen, R. User interaction and authoring of 3D sound scenes in the Carrouso EU project. In Proceedings of the Audio Engineering Society Convention 114, Amsterdam, The Netherlands, 22–25 March 2003. [Google Scholar]
  41. Echoes. Available online: https://echoes.xyz/ (accessed on 27 January 2021).
  42. Comunità, M.; Gerino, A.; Lim, V.; Picinali, L. PlugSonic: A web- and mobile-based platform for binaural audio and sonic narratives. arXiv 2020. Available online: https://arxiv.org/abs/2008.04638 (accessed on 27 January 2021).
  43. Apple AR Kit. Available online: https://developer.apple.com/augmented-reality/ (accessed on 27 January 2021).
  44. Faro Convention. Available online: https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/199 (accessed on 27 January 2021).
  45. Comunità, M.; Gerino, A.; Lim, V.; Picinali, L. Web-based binaural audio and sonic narratives for cultural heritage. In Proceedings of the Audio Engineering Society Conference: 2019 AES International Conference on Immersive and Interactive Audio, York, UK, 27–29 March 2019; Available online: http://www.aes.org/e-lib/browse.cfm?elib=20435 (accessed on 27 January 2021).
  46. Voyager Golden Record. Available online: https://voyager.jpl.nasa.gov/golden-record (accessed on 27 January 2021).
  47. Lewis, J.R. The system usability scale: Past, present, and future. Int. J. Hum. Comput. Interact. 2018, 34, 577–590. Available online: https://doi.org/10.1080/10447318.2018.1455307 (accessed on 27 January 2021). [CrossRef]
  48. Reichheld, F.F.; Covey, S.R. The Ultimate Question: Driving Good Profits and True Growth; Harvard Business School Press: Boston, MA, USA, 2006; Volume 211. [Google Scholar]
  49. Çamcı, A.; Lee, K.; Roberts, C.J.; Forbes, A.G. INVISO: A cross-platform user interface for creating virtual sonic environments. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, Québec City, QC, Canada, 22–25 October 2017; pp. 507–518. Available online: https://doi.org/10.1145/3126594.3126644 (accessed on 27 January 2021). [CrossRef]
  50. Jillings, N.; Man, B.; Moffat, D.; Reiss, J.D. Web audio evaluation tool: A browser-based listening test environment. In Proceedings of the 12th International Conference on Sound and Music Computing, Maynooth, Ireland, 26 July–1 August 2015; pp. 147–152. Available online: https://doi.org/10.5281/zenodo.851157 (accessed on 27 January 2021). [CrossRef]
  51. Kraft, S.; Zölzer, U. BeaqleJS: HTML5 and JavaScript based framework for the subjective evaluation of audio quality. In Proceedings of the Linux Audio Conference, Karlsruhe, Germany, 1–4 May 2014; Available online: http://lac.linuxaudio.org/2014/papers/26.pdf (accessed on 27 January 2021).
Figure 1. The PLUGGY social platform and apps structure.
Figure 2. Apps of the PlugSonic Suite.
Figure 3. PlugSonic Soundscape Create dismissible menus: (A) Sound source search tab; (B) Sound sources settings tab; (C) Room settings tab; (D) Listener settings tab; (E) Exhibition tab.
Figure 4. PlugSonic Soundscape Experience Mobile UI.
Figure 5. Participant using the Soundscape Create app for part 1 of the evaluation.
Figure 6. Participant using the Soundscape Mobile app for part 3 of the evaluation.
Table 1. Evaluation: Part 1—Single Ease Question (SEQ) scores and completion times (average and standard deviation).

| # | Task Description | SEQ Avg (Std) | Completion Time (s) Avg (Std) | Baseline Completion Time (s) |
|---|------------------|---------------|-------------------------------|------------------------------|
| 1 | CREATE a new Soundscape exhibition | 6.8 (0.5) | 73 (35.9) | 66 |
| 2 | Set the ROOM SIZE | 7.0 (0.0) | 16 (5.9) | 10 |
| 3 | Set the ROOM FLOORPLAN image | 6.6 (0.9) | 20 (9.9) | 17 |
| 4 | ADD a new sound source (search, position and reach) | 4.2 (1.6) | 154 (64.5) | 48 |
| 5 | ADD a new sound source (search, position, reach and hidden) | 6.6 (0.6) | 62 (18.1) | 38 |
| 6 | ADD a new sound source (search, position and fade) | 6.6 (0.6) | 82 (46.7) | 43 |
| 7 | Set all sound sources to START PLAYING WHEN ENTERING reach | 5.6 (0.9) | 70 (63.5) | 25 |
| 8 | Set sound sources TIMINGS to play in order | 6.2 (0.5) | 74 (60.1) | 40 |
| 9 | PLAY the soundscape and MOVE the listener around to listen to the soundscape | 7.0 (0.0) | 12 (11.0) | 5 |
| 10 | Add TAGS to the exhibition | 5.8 (1.8) | 50 (36.7) | 15 |
| 11 | SAVE and PUBLISH the exhibition | 7.0 (0.0) | 18 (2.9) | 16 |
| 12 | EXPORT the metadata to the laptop | 6.8 (0.5) | 14 (10.3) | 8 |
Table 2. Average System Usability Scale (SUS) scores for part 2 of the evaluation.

| # | Question | SUS Avg (Std) |
|---|----------|---------------|
| 1 | I think that I would like to use this system frequently. | 4.4 (1.7) |
| 2 | I found the system unnecessarily complex. | 2.8 (1.3) |
| 3 | I thought the system was easy to use. | 6.2 (0.8) |
| 4 | I think that I would need the support of a technical person to be able to use this. | 3.0 (2.0) |
| 5 | I found the various functions in this system were well integrated. | 5.4 (0.9) |
| 6 | I found functionality of features and controls clear. | 5.0 (1.2) |
| 7 | I thought there was too much inconsistency in this application. | 1.6 (0.6) |
| 8 | I would imagine most people would learn to use the application very quickly. | 5.2 (1.6) |
| 9 | I felt very confident using the system. | 5.8 (0.8) |
| 10 | I needed to learn a lot of things before I could get going with this application. | 1.6 (0.6) |
| 11 | I wanted to share or talk to people about my experience with the application. | 5.8 (0.8) |
Table 3. Evaluation: Part 3—Likert scores (average and standard deviation) for (a) Personal Resonance and Emotional Connection, (b) Learning and Intellectual Stimulation, and (c) Shared Experience and Social Connectedness.

(a) Personal Resonance and Emotional Connection

| # | Question | Likert Score Avg (Std) |
|---|----------|------------------------|
| 1 | The soundscape made experiencing the objects more interesting/fun. | 5.4 (1.5) |
| 2 | I found the soundscape emotionally engaging. | 5.6 (1.0) |
| 3 | During the experience, I felt connected with the objects presented to me. | 4.1 (2.0) |
| 4 | I will be thinking about this experience for some time to come. | 5.7 (1.3) |

(b) Learning and Intellectual Stimulation

| # | Question | Likert Score Avg (Std) |
|---|----------|------------------------|
| 5 | I got a good understanding about the objects presented to me. | 3.1 (1.5) |
| 6 | I got a good understanding about where the objects were located. | 4.9 (2.4) |
| 7 | I felt engaged with the objects presented to me. | 5.0 (1.5) |
| 8 | I felt challenged and provoked. | 4.0 (2.1) |
| 9 | During this experience, my eyes were opened to new ideas. | 4.9 (2.0) |

(c) Shared Experience and Social Connectedness

| # | Question | Likert Score Avg (Std) |
|---|----------|------------------------|
| 10 | I would have liked to have shared about it with other people. | 5.7 (1.1) |
| 11 | After this experience, I wanted to talk to people about it. | 5.9 (0.9) |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
