Article

Tangible and Personalized DS Application Approach in Cultural Heritage: The CHATS Project

by Giorgos Trichopoulos *, John Aliprantis, Markos Konstantakis *, Konstantinos Michalakis and George Caridakis
Department of Cultural Technology and Communication, University of the Aegean, 81100 Mytilene, Greece
* Authors to whom correspondence should be addressed.
Computers 2022, 11(2), 19; https://doi.org/10.3390/computers11020019
Submission received: 12 December 2021 / Revised: 20 January 2022 / Accepted: 28 January 2022 / Published: 31 January 2022

Abstract:
Storytelling is widely used to project cultural elements and engage people emotionally. Digital storytelling enhances the process by integrating images, music, narrative, and voice with traditional storytelling methods. Newer visualization technologies such as Augmented Reality allow more vivid representations and further influence the way museums present their narratives. Cultural institutions aim to integrate such technologies in order to provide a more engaging experience, tailored to the user through personalization and context-awareness. This paper presents CHATS, a system for personalized digital storytelling in cultural heritage sites. Storytelling is based on a tangible interface, which adds a gamification aspect and improves interactivity for people with visual impairment. AR and smart glasses are used to enhance visitors’ experience. To test CHATS, a case study was implemented and evaluated.

1. Introduction

Storytelling is a widely used method for people across the world to engage emotionally, communicate, and project elements of their culture and personality. Humans benefit mentally and emotionally from their own stories, and after all these years, we are still learning and improving at telling them [1]. Narratologists agree that to constitute a narrative, a text must tell a story, exist in a world, be situated in time, include intelligent agents, and have some form of causal chain of events, while it usually also seeks to convey something meaningful to an audience [2].
Alongside humans, museums can be considered “natural storytellers” [3]. Museums aim to make their exhibits appealing and engaging to an increasing variety of audiences while also nurturing their role in conservation, interpretation, education, and outreach [4]. The use of multimodal storytelling mechanisms, in which digital information is presented through multiple communication channels (multimedia), is considered a supplement to physical/traditional heritage preservation, activating users’ involvement and collaboration in integrated digital environments. Digital storytelling is thus one of the resources museums have at hand for enriching their offer to visitors and to society at large. Through narratives, museums can find new ways to enhance and represent their exhibits’ stories, attracting visitors’ attention and increasing their interest through active engagement.
Digital Storytelling (DS) derives its engaging power from integrating images, music, narrative, and voice, giving deep dimension and vivid color to characters, situations, experiences, and insights. Consequently, technologies such as Augmented Reality (AR) can influence the way museums present their narratives and display cultural heritage information to their visitors. AR can be seen as a form of mediation, using interaction and customization, that supports narratives in which visitors can engage with or even create their own narrative scenarios during their cultural tour.
Meanwhile, designing user profiles as ‘fictional’ characters, based on real data and research and created for a digital storytelling application, is considered a consistent and representative way of defining actual users and their goals. Personalized cultural heritage (CH) applications require that the system collect data about the users and the environment and process them in order to tailor the user experience. Context-awareness is a technology that addresses this requirement by enhancing the interaction between human and machine and adding perception of the environment, which eventually leads to intelligence [5]. Context in the cultural space domain includes many features, such as location, profiles, user movement and behavior, and environmental data [6].
In this work, the authors present the Cultural Heritage Augmented and Tangible Storytelling (CHATS) project, a framework that combines Augmented Reality and tangible interactive narratives. The project is based on the famous painting “Children’s Concert” by G. Jacovides and on a previous project in which 3D models representing the painting’s characters and objects were created [7,8]. It is a hybrid architecture that combines state-of-the-art technologies with tangible artifacts and is usable and expandable to other areas and applications at a relatively low cost. In addition, CHATS takes into consideration the special needs of visually impaired people and aims for an immersive cultural experience even without the need for images.
The paper is organized as follows: Section 2 reviews and classifies related work on digital storytelling applications in cultural heritage. Section 3 presents CHATS: its architecture, its hardware infrastructure, and the modules that work together to deliver a DS application over a tangible interface. In Section 4, CHATS is implemented over a painting, and the project is tested and evaluated. The work is concluded in Section 5.

2. Related Work

Various applications employ digital storytelling techniques, almost all of them addressing the problem of delivering appropriate storytelling content to the visitor. The rationale for applying such techniques is that cultural heritage applications have a huge amount of information to present, which must be filtered so that the individual user can access it easily.
For the purposes of this paper, we reviewed applications described in a variety of sources. Specifically, we identified 26 digital storytelling applications from the last decade alone and included them in our collection. The primary criterion for selection was the use of interactive narratives about cultural heritage in combination with other technologies. Table 1 classifies the reviewed applications by the digital technologies they employ, as described below.
In CHESS [10], researchers designed and tested personalized audio narratives about specific exhibits in the Acropolis archeological museum in Athens. Visitors were assigned a predefined profile according to their age and had access to AR content through a mobile app representing the artifacts in 3D. Some members of this research team continued with EMOTIVE [29], which was tested in different museums and, in addition to the above, created an authoring tool for narratives and a mobile app with which to present them. Personalization in CULTURA [12] works differently, according to visitors’ interests and their level of engagement with the cultural artifacts. In works such as Lost State College [14] and iGuide [15], personalized content derives from the visitors themselves, who upload their own media.
Gossip at Palace [19] attempts to attract teenagers as its primary target group using gamification elements. In contrast, SVEVO [25] targets adults and seniors to engage them more deeply in a cultural visit. In some works, such as TolkArt [24], SPIRIT [22], and Střelák’s AR guide [20], personalized content results from location awareness and user interactions. Cicero [30] and MyWay [27] are recommender systems that take advantage of DS to promote cultural heritage. In most of the works [10,15,16,17,18,20,21,22,23,25,29], AR apps (for smartphones and tablets), AR smart glasses, or VR headsets are used to immerse users in 3D environments and engage them in a more participatory interaction with objects.
To summarize, all the above applications make use of interactive stories, and most of these stories have personalization elements. This means that, by some method, they either create user profiles or use ready-made, predesigned user profiles to deliver personalized content. Many of them exploit augmented and virtual reality technologies to immerse users in virtual environments and offer a more intense experience. The same technologies enable gamification and serious games that communicate cultural information in a more entertaining way, especially to younger audiences. Additionally, most applications are context-aware, providing content related to the user’s position and movement.
On the other hand, fewer applications offer a three-dimensional representation of cultural heritage objects and monuments, and fewer still provide a tangible interface for users to interact with. It is a common finding that cultural heritage research has shifted over the years towards mobile applications and the use of screens and has moved away from tangible interfaces. CHATS attempts to bridge this gap and exploit the full potential a tangible interface can offer for cultural heritage. In addition, it uses 3D representations of artifacts, incorporates gamification aspects, and opens horizons for people with visual impairments, all at a low cost.

3. CHATS—Cultural Heritage Augmented and Tangible Storytelling

3.1. Architecture

In CHATS, there is a tangible 3D printed and handcrafted representative diorama for each exhibit—an object of interest in a collection. The system can be installed in any gallery, library, archive, or museum (GLAM); thus, objects of interest could be paintings, museum objects, books, artworks, etc.
The basis of the CHATS architecture is a network of sensors and actuators for an entire collection (an actual Internet of Things network) that interacts with visitors and sends data to a server. The server can be remote or local. The number and type of sensors can differ for each exhibit, according to the desired level of interaction (detail, sensitivity, accuracy, modes of interaction). Sensors can sense the proximity of visitors and interactions such as touching objects, moving hands near objects, making noises, etc. Actuators offer automated interactivity, allowing events such as playing sounds or projecting images, triggered by user actions or context-aware procedures. Sensors are rather ubiquitous and not visible to users.
Nevertheless, the first interaction between visitors and exhibits is the sense of proximity via Bluetooth low energy (BLE) technology. All visitors carry a tag that is given to them when entering a cultural site, and the Bluetooth transceiver, also hidden in each construction, can pair automatically with the tags. Localization is executed in the Localization Module, which is responsible for the acquisition of localization data, i.e., data from the BLE infrastructure (on users and artifacts) and data that capture user activity. The output of the localization module is sent to the DS module, triggering the necessary actions.
In any case, the sensors are always active and stream data that are processed primarily by an Arduino controller (one per artifact). Useful data (data reflecting actual user interaction) are sent to the server for a second level of processing, while redundant data are ignored. The sensor network data flow, combined with localization and personalization data, is used to dynamically select one of several story trajectories to be heard by the visitors.
The Personalization Module is responsible for executing the personalization procedure. Acquiring data either from stored profile data in the database (DB) or directly from the user (e.g., through a questionnaire), the personalization module identifies the user’s persona, associates them with a profile relevant to the application, and outputs the result to the DS module to be exploited for a more personalized user experience.
The DS module sits at the center of the CHATS architecture, receiving the output of all other modules and delivering the appropriate content to the user. The typical output device of the DS module is the user’s mobile device, but delivery is not restricted to traditional methods. Where smart glasses are available, AR content is reproduced according to the narrative; AR content, along with binaural audio for the narratives, enhances the visitors’ experience. Finally, the DS module may also trigger actuators, such as playing appropriate music when the narrative requires it.
The complete architecture is illustrated in Figure 1. The three modules integrated into the CHATS architecture are further described in the following subsections.

3.2. DS Module

Narratives in CHATS take the form of audio. Binaural audio, combined with augmented reality projections through smart glasses, was chosen to immerse visitors in the painting’s environment and provide a richer user experience (UX). Nevertheless, the tangible interface can also be used without smart glasses.
The binaural recording technique has been known and studied for more than a century. The term is often treated as synonymous with stereo recording, but the two techniques are conceptually different and produce different audio results. Binaural recording uses specialized microphones, usually shaped like a dummy head, in which the ears are the actual microphones. Binaural audio creates an effect of immersion when reproduced through a headset: the listener feels as if they are sitting in the exact location where the sound was originally created.
Narratives, in general, can be linear, following a specific trajectory, but interactive narratives are mostly branching narratives: they can follow several paths, and the user has some level of control over the story outcome. In CHATS, audio narratives branch according to user input and profile. This technique requires many recordings to support more branching options, and the recordings must follow a general direction, or at least a specific format, so that the story always retains its meaning.
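As an illustration, such a branching audio narrative can be represented as a small graph of clips keyed by user interactions. The following is a minimal C++ sketch under assumed names (NarrativeNode, Interaction); it is not the project’s actual implementation:

```cpp
#include <map>
#include <string>
#include <vector>

// Minimal sketch of a branching audio narrative, assuming each node maps to
// one prerecorded clip and sensed interactions select the next branch.
// All identifiers here are illustrative, not taken from CHATS code.
enum class Interaction { None, TouchDrummer, TouchWateringCan, BothHands };

struct NarrativeNode {
    std::string audioFile;               // e.g., a prerecorded clip file name
    int level;                           // position in the story arc
    std::map<Interaction, int> branches; // interaction -> next node index
};

// Follow the branch for the sensed interaction if one exists; otherwise fall
// back to the default continuation of the same story (Interaction::None).
int nextNode(const std::vector<NarrativeNode>& graph, int current, Interaction in) {
    const auto& branches = graph[current].branches;
    auto it = branches.find(in);
    if (it != branches.end()) return it->second;
    auto def = branches.find(Interaction::None);
    return def != branches.end() ? def->second : current;
}
```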
Agency (a term mostly used in games) is the actual level of control that players feel while in the game world [2,35]. Multiplayer Interactive Narrative Experiences (MINEs) are interactive authored narratives in which multiple players experience distinct narratives (multiplayer differentiability) and their actions influence the storylines of both themselves and others (inter-player agency) [36]. The aim of CHATS is for the visitor-user to perceive the highest possible level of agency, because this also leads to a higher level of engagement [37]. MINEs are also supported: a narrative trajectory can be defined by the simultaneous interaction of multiple visitors, although there is no differentiability, as only one narrative can play at a time.
The DS module initiates the moment a visitor approaches an artifact (the trigger distance can be calibrated in software, may vary across hardware implementations, and can be affected by obstacles such as other visitors, metal objects, and walls). At that moment, sensors inside the tangible representation of the artifact start producing useful data that can initiate audio reproduction. This is the first level of interaction with CHATS, a rather involuntary one, and it can be affected by the presence of multiple users in range. Meanwhile, users with smart glasses can use them to display digital information about the artifact of interest, viewing 3D models, images, and text that complete and enhance their knowledge and perception. This contextual information about each artifact can also be part of its narrative, further explaining the story behind its existence. The combination of the physical artifacts with the virtual information available through the smart glasses, and the awareness of their narratives, can trigger curiosity and stimulate users’ interest to physically manipulate the object of interest.
At the second step comes the voluntary interaction, in which visitors are invited (by audio and, hopefully, by curiosity and self-interest) to touch and feel the 3D printed diorama. The 3D printed objects offer a tangible experience that is added to the digital one. In this way, users with smart glasses can also interact with the virtual information displayed through AR techniques. By manipulating the physical object, users access different types of information based on the viewing angle of the artifact. Each part of the artifact can serve as an “image marker” for the AR software, which displays different virtual data according to the user’s viewing perspective. Users can then interact with the AR content by manipulating the artifact.

3.3. Personalization Module

The personalization module involves the steps necessary to initialize the application: it identifies the user persona from data received from the users and adapts the initial customization of the DS application accordingly.
When a visitor uses a system for the first time, the system will most likely fail to recommend content effectively. This problem, commonly known as cold start, is a frequent issue in recommendation systems, and many solutions and methods have been proposed to address it. To construct an individual user’s profile, information may be collected explicitly, through direct user intervention, or implicitly, through agents that monitor user activity [38].
Therefore, the basic function of the CHATS personalization module is to define the user’s requirements and characteristics by building their profile. Most similar studies have used a questionnaire to categorize users according to their answers [9,39,40,41]. Presenting a short questionnaire with simple but targeted questions about the user’s profile and interests at the start of the CHATS application is a common and effective method in such cases for gathering the necessary profile information.
The first section of the module contains six questions covering the demographic information of the participants (gender, age range, and level of education), as well as their DS-related background (frequency of interaction with DS platforms, experience with different DS platforms or devices, preferred DS genres). This information enables us to determine the heterogeneity of the sample and to investigate the effect of personal and contextual factors such as age, education, and prior experience. The questionnaire is filled in digitally by the visitor upon registration at the start of the visit (when the BLE tagging is also performed).
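For illustration, the mapping from questionnaire answers to a starting persona could look like the sketch below. The answer buckets and persona labels are assumptions made for the example, since the paper does not publish the module’s exact rules:

```cpp
#include <string>

// Hypothetical questionnaire answers; field encodings are assumptions.
struct Answers {
    int ageRange;      // 0: young child ... 4: late adulthood
    int dsExperience;  // 0: none, 1: occasional, 2: frequent DS platform use
    bool prefersGames; // preferred DS genres include games
};

// Assign an initial persona so the DS module has a starting point and the
// cold-start problem is avoided; the labels are illustrative placeholders.
std::string initialPersona(const Answers& a) {
    if (a.ageRange <= 1) return a.prefersGames ? "young-gamer" : "young-listener";
    if (a.dsExperience == 0) return "first-time-visitor";  // cold-start default
    return a.ageRange >= 3 ? "senior-enthusiast" : "experienced-adult";
}
```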
The next step includes the data acquisition from profiled data derived from various resources (mobile devices, database repository) that allow a refinement of the associated user persona. Overall, the user profiling module provides a setup for the application to know where to start from, eliminating the cold-start issue mentioned before [42].
Finally, this cycle of persona identification is performed continuously throughout the user’s DS session, eventually storing the identified persona for future use and making the personalization module dynamic.

3.4. Localization and Context-Awareness Module

The proposed digital storytelling platform supports context-aware functionality, allowing for the automated identification of visitor proximity and interaction with the artifacts. Although context-awareness includes a wide range of context types such as location, time, social context, and activity, it is often sufficient to track visitor movement with respect to artifacts or other visitors. This approach has been applied in the proposed platform, which captures visitor location to initiate the next step of the narrative. A visitor’s location is expressed in three ways: (a) proximity with an artifact, (b) proximity with other visitors, and (c) tangible interaction with an artifact. Thus, apart from locative context, social context is also exploited, distinguishing between lone visitors and groups of visitors.
The infrastructure responsible for realizing context-awareness includes BLE modules and capacitive touch sensors installed on Arduino microcontrollers, as well as BLE beacons carried by visitors. The localization procedure is based on the following two independent sensing processes:
Firstly, the BLE module installed on the Arduino board is placed close to the related artifact and continuously searches for BLE devices. Visitors move in the area carrying BLE beacons (in the form of badges given to them) that advertise their presence. When the BLE module detects a beacon, it retrieves the registered visitor mapped to that beacon and calculates his/her distance from the related artifact using the Received Signal Strength Indicator (RSSI). When the calculated distance drops below a predetermined threshold, the registered visitor is considered to be approaching the artifact, and this event is sent to the server. Accordingly, if the distance of a registered visitor becomes greater than the threshold, the visitor is considered to be moving away from the artifact, and this event is likewise sent to the server.
Secondly, the capacitive touch sensor installed on the Arduino board is also placed in a critical position on the artifact. When the visitor touches that area and triggers the sensor, the Arduino microcontroller sends that event to the server.
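To make the first of these processes concrete, a common way to turn RSSI into a distance estimate is the log-distance path loss model. The paper does not state which estimator CHATS uses, so the sketch below is only indicative, and its calibration constants (transmit power at 1 m, path-loss exponent) are assumptions that would need per-site tuning:

```cpp
#include <cmath>

// Log-distance path loss model: distance grows exponentially as RSSI drops.
// txPowerAt1m is the expected RSSI at 1 m; n is the path-loss exponent
// (~2.0 in free space, higher indoors). Both are assumed values.
double rssiToMeters(int rssi, int txPowerAt1m = -59, double n = 2.0) {
    return std::pow(10.0, (txPowerAt1m - rssi) / (10.0 * n));
}

// Threshold logic matching the described behavior: report "approaching" when
// the estimate falls below the threshold. In practice, two thresholds
// (hysteresis) would reduce flapping between approach and leave events.
bool isApproaching(int rssi, double thresholdMeters = 1.5) {
    return rssiToMeters(rssi) < thresholdMeters;
}
```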
At the same time, the server, which runs continuously, receives events from the various Arduino boards. By performing logical functions, it can identify whether a visitor has approached an artifact (first level of proximity) or has touched it, thus initiating the second level of proximity and interaction. Furthermore, proximity between visitors is indirectly inferred from simultaneous proximity to an artifact. Thus, social context, i.e., the grouping of visitors, is acquired only in relation to an artifact.
It should be noted that the second layer of close proximity triggered by the capacitive touch sensor does not provide identification of the visitor, thus relying on BLE identification and mapping to a registered visitor to know who is currently interacting with a certain artifact. Overall, the described procedure disseminates the following contextual information to the DS module: who is approaching the artifact, which level of proximity has been reached, and who else is accompanying him/her.
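Putting the two processes together, the server-side fusion could be sketched as follows, assuming events arrive as (artifact, visitor, type) tuples; the actual CHATS message format is not published:

```cpp
#include <map>
#include <set>
#include <string>

// Sketch of server-side context fusion. Approach/Leave events carry a visitor
// identity from the BLE layer; Touch events do not, so touch is attributed to
// whoever is currently near the artifact, as described in the text.
enum class EventType { Approach, Leave, Touch };

struct ContextState {
    // visitors currently within BLE proximity of each artifact
    std::map<std::string, std::set<std::string>> nearArtifact;

    void onEvent(const std::string& artifact, const std::string& visitor, EventType t) {
        if (t == EventType::Approach) nearArtifact[artifact].insert(visitor);
        if (t == EventType::Leave)    nearArtifact[artifact].erase(visitor);
        // EventType::Touch changes no proximity state; the DS module would
        // query nearArtifact[artifact] to decide who is touching.
    }

    // Social context: a group exists only in relation to an artifact.
    bool isGroup(const std::string& artifact) const {
        auto it = nearArtifact.find(artifact);
        return it != nearArtifact.end() && it->second.size() >= 2;
    }
};
```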

4. Case Study

CHATS was tested on a painting named “Children’s Concert”. It was created around 1900 by the Greek artist George Jacovides and hangs in the Greek National Gallery in Athens (Figure 2). The painting was awarded a gold medal at the 1900 Paris Exposition. It depicts seven characters in total: children playing music for an infant and its mother. The scene is set in a bright room with some furniture, such as a table, a chair, and benches.

4.1. Tangible Interface Description and Characteristics

The 3D representation of the painting was part of another project, called ARTISTS [7,8]. Three-dimensional printing was applied to materialize some characters. The purpose of these prints was to create a tangible representation of the painting to be used in various instances:
  • For gallery visitors with vision disabilities;
  • For children, adding gamification features into gallery visits;
  • For any visitor wishing for a richer cultural user experience.
The printed representation model (diorama) works as a user interface, offering multiple interactions. The models’ fidelity was not a project target, as the case study mostly focuses on the interaction and the diorama represents a demo application for experimentation purposes.
Character 3D models were drawn in Cinema 4D and then reprocessed so that 3D printing would be feasible and efficient. After an initial trial-and-error period, it was decided that it would be more convenient for the models to be hollow and split in half: the empty space could house the electronics, less printing material would be used, and the printing process would be faster. A second round of processing and reprinting produced the half-model parts, plus additional elements of the painting scene, such as a table (Figure 3).
The material used for printing was polylactic acid (PLA), a common material for homemade 3D prints, on an Ultimaker 3 printer. The best results for our rather complicated models were achieved using an AA 0.4 print core at 180 °C, with a 0.2 mm layer height and 40% infill.
All these parts were assembled and glued together into a diorama, but, prior to this, the Arduino sensors had to be positioned inside the models. Arduino was chosen as the sensing and communicating interface because of its versatility, availability, low cost, and open architecture. An ultrasonic distance sensor, a capacitive touch sensor, and a Bluetooth BLE adapter, all connected to a Mega 2560 board, compose the electronics infrastructure; the code was implemented using the Arduino SDK and the Processing programming language.
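As an indication of the sensing code involved, the Arduino-style sketch below reads an HC-SR04-class ultrasonic sensor to detect a hand entering an interaction area. The pin numbers, detection range, and the specific sensor module are assumptions, since the exact part is not named:

```cpp
// Arduino sketch: detect a hand entering an interaction area with an
// HC-SR04-style ultrasonic sensor. Pins and thresholds are placeholders.
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;

void setup() {
    pinMode(TRIG_PIN, OUTPUT);
    pinMode(ECHO_PIN, INPUT);
    Serial.begin(9600);
}

long readDistanceCm() {
    digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
    digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);  // 10 us trigger pulse
    digitalWrite(TRIG_PIN, LOW);
    long duration = pulseIn(ECHO_PIN, HIGH);  // echo time in microseconds
    return duration / 58;  // ~58 us per cm of round trip at the speed of sound
}

void loop() {
    long d = readDistanceCm();
    if (d > 0 && d < 20) Serial.println("hand detected");  // hand enters area A/B
    delay(100);
}
```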
Model characters were placed in fixed positions, chosen so that they both represent the painted characters and are accessible by hand. Entering area A or area B, as shown in Figure 4, triggers an interaction. The same happens when touching the child who is sitting on the chair and holding the drum. This character is positioned closest to the visitors and is referred to as “the drummer”. There is also a character that adds a humorous note to the artwork: the boy trying to play music by blowing air into a watering can. This character hosts our BLE receiver and is referred to as “the watering can” (Figure 5).
Distance sensors are quite accurate and cover the whole painting area. The proximity of a visitor, before a hand enters the painting area, can be sensed via the BLE transceiver when it pairs with the beacon (BLE tag) carried by the visitor. The initial pairing triggers the first audio message from the painting, which welcomes and attracts visitors.
Arduino’s sensors are installed in and around the models and, with the aid of BLE beacons, the proximity of visitors can be sensed. Touching a model or entering the area around the models can also be sensed. Sensors trigger narratives about the painting in the form of sound. These narratives are personalized in the sense that they change according to the number of people involved and their behavior. Some form of profiling, matching visitors with a predesigned persona, could also be performed earlier by the personalization module; however, this function is not yet fully implemented. Narratives are based on actual historical data about the painting and its era, enriched with fictional features.
Those narratives are binaural recordings. Binaural audio was selected to impress the user and enhance the sense of presence in the virtual space. To keep the files small and fast to stream, narratives are short (about 10 s each) and stored in an SQL Server database accessible via HTTP. There are about ten recordings for each of the two 3D printed characters on the demo diorama, ten more recordings not associated with a specific character, plus ten recordings that are triggered when a group of people is sensed in proximity to the construction (Table 2).
An algorithm decides which recording will be streamed, according to user interaction with the characters. Each new session starts with the welcome sound, and then three usage scenarios are considered: (a) the user does not touch the objects; (b) the user uses one hand and touches a character of the painting; (c) the user puts both hands into the constructed painting space. These scenarios change when a group of people (two or more) interacts with the painting, in which case two usage scenarios are considered: (a) users approach and pass by; (b) someone starts touching. In more complex situations, e.g., when a session starts with a single user and another person joins, the current cycle must finish before a new session begins (Figure 6).
For each narrative in the above flowchart, there is a leveling approach (Figure 7): each recording belongs to a specific level, so that when a sound completes (the green rectangles) and the narrative continues to the next level, there is always logical continuity. It is essential for the narratives to have meaning and structure, that is, a beginning, a middle, and an end, and the use of levels is a technique to accomplish that. Users can even be led from level 1 of one narrative to level 2 of another without losing the coherence of the story. To make this possible, special attention had to be paid when authoring the parts of each level, so that they could relate and stitch to the previous and next levels.
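The level-based stitching can be illustrated with the following sketch, in which every clip carries a level tag and the next clip is always drawn from the next level, preferring the current branch. The struct and field names are placeholders rather than the project’s actual code:

```cpp
#include <string>
#include <vector>

// Each clip (cf. the SoundFile/GroupSound sets of Table 2) is authored so
// that any level-k clip can be followed by any level-(k+1) clip.
struct Clip { std::string file; std::string branch; int level; };

// Prefer the next clip of the same narrative branch; if the branch has no
// clip at the next level (the visitor switched interaction), fall back to
// any clip of the next level, which still preserves logical continuity.
std::string nextClip(const std::vector<Clip>& clips,
                     const std::string& preferredBranch, int currentLevel) {
    std::string fallback;
    for (const auto& c : clips) {
        if (c.level != currentLevel + 1) continue;
        if (c.branch == preferredBranch) return c.file;  // same story if possible
        if (fallback.empty()) fallback = c.file;         // else any level+1 clip
    }
    return fallback;  // empty string means the narrative has ended
}
```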
To make all of the above feasible, the Arduino microcontroller is equipped with a Wi-Fi module (NINA) and uses the WiFiNINA library to run a web server and respond to HTTP calls. The same library handles the communication and data exchange between the microcontroller and the AR portable device. Augmented Reality (AR) techniques are thus used to visualize data from the narratives digitally. Users can watch the characters “come to life” and narrate their stories through AR smart glasses, while also listening to binaural sounds from the surroundings of the painting. Moreover, data from the Arduino sensors about the user’s movements and position dynamically change the digital information available through the smart glasses while also improving the physical interaction with the system. AR techniques enhance the cultural user experience by immersing users in the digitally reconstructed painting and engaging them with its 3D models, thus combining digital storytelling with 3D virtual immersive learning environments [43].
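A minimal WiFiNINA web server of the kind described, serving a sensor snapshot over HTTP, could look like the sketch below; the SSID, password, and JSON payload are placeholders:

```cpp
#include <SPI.h>
#include <WiFiNINA.h>

// Minimal sketch: join Wi-Fi and answer HTTP calls from the AR device with
// the current sensor state. Credentials and payload fields are assumptions.
char ssid[] = "gallery-net";
char pass[] = "secret";
WiFiServer server(80);

void setup() {
    while (WiFi.begin(ssid, pass) != WL_CONNECTED) delay(1000);
    server.begin();  // start listening for HTTP requests
}

void loop() {
    WiFiClient client = server.available();
    if (!client) return;
    while (client.connected() && client.available()) client.read();  // drain request
    client.println("HTTP/1.1 200 OK");
    client.println("Content-Type: application/json");
    client.println("Connection: close");
    client.println();
    client.println("{\"touch\":1,\"areaA\":0,\"areaB\":1}");  // sensor snapshot
    client.stop();
}
```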
The AR demo application (Figure 8) was composed in Unity with Vuforia as the AR engine, though some other software tools were also used. The animations were all imported from Adobe’s Mixamo, while the painting’s environment was crafted in Adobe Photoshop.
The sound used for the application was either recorded with a 3Dio FS binaural microphone or imported from common related web portals. The smart glasses used for the experiment were Microsoft’s HoloLens 2 (Figure 9).
Vuforia performs the visual recognition of the painting representation: while using the AR device, when the user views the diorama from a certain angle, the application activates the projection of the augmented content over it. The smart glasses perform well, but the application needs improvements to align the augmented visual content more accurately over the tangible objects.

4.2. Evaluation

One of the most important stages in the implementation of an application is its evaluation by the users themselves. The conclusions drawn about the user experience are valuable, and a proper reading of the results helps developers optimize the application’s performance. Questionnaires, interviews, and user observations, as stated earlier, are the most widely known methods of evaluation. In this section, we present the results of testing the usability and effectiveness of our CHATS application and evaluate the feedback received from the testing users based on their answers in our interviews, questionnaire, and user observation procedures [42].
Forty-two users were recruited, with no previous experience in DS tangible applications. When recruiting, we opted for a balanced sample across age and gender and tried to match the age demographics with those of our personas, which meant that we were looking for participants fitting in five different age range buckets: young children (but not younger than 10), middle-school-age children, young adults, adults, and people in middle or late adulthood (Figure 10). Although we were not able to achieve a perfect balance, we managed to collect a full set of data for a total of 42 participants who spent approximately two hours each experiencing the CHATS prototypes and participating in the evaluation [44].
Participants were invited to complete a series of tasks. Different sets of storytelling experiences were designed specifically for different visitor profiles based on the personalization module described earlier.
To overcome the challenges, our study design sought to strike a balance between different methods, including the use of observation, interviews, questionnaires, as well as the automatic recording of system logs. Specifically, the CHATS system evaluation employed the mixed methods described below (Figure 11, Figure 12 and Figure 13):
  • A general, pre-experience demographic questionnaire, administered verbally in the form of an interview;
  • Video and audio recording and note-taking of the observation of visitors’ behavior throughout their interaction with the CHATS system;
  • Two semi-structured, post-experience interviews per visitor/group, conducted immediately after the end of the visit. The interviews were delivered in a conversational tone in order to draw visitors out about what they had experienced.
After their experience, the participants spoke with our experts, who concluded the following:
  • Most of the participants agreed that it was a pleasant educational experience and that they learned new things about the Jacovides painting and tangible Interactive Narratives. In the post-experience interviews, all users found the stories interesting and entertaining;
  • On a scale of 1 to 5, users rated the CHATS application 4 on how strongly it attracted them to continue using it after 2 min;
  • Most of the participants would appreciate the CHATS application as a massive, multiplayer online experience through social data login;
  • Some players found the tasks very easy and suggested having a longer version, adding tasks not strictly related to educational content, and including rewards;
  • Most visitors found the visual assets presented on AR glasses fascinating, reporting that media assets aided their understanding;
  • Similar to other evaluations concerning mobile devices in cultural heritage, an important observation was that the majority of users, especially younger ones, were fully absorbed by the imagery shown on the AR glasses and spent more time looking at the screen than observing the exhibits;
  • In some cases, visitors felt that the visuals augmented the experience by bringing forth exhibit details that were not otherwise visible or related to informational content and artifacts;
  • Some visitors liked being guided by the storytelling experience, but others would have liked to break away from it and focus on an unrelated exhibit that caught their attention;
  • Regarding usability, both the observations and visitors’ responses showed that, overall, the interface was regarded as straightforward and easy to use, even by visitors not experienced with touch screen devices, smartphones, or AR glasses.

5. Conclusions

In this research work, an architecture for dynamic digital storytelling on tangible objects, named CHATS, was proposed. CHATS combines storytelling techniques such as narratives, augmented reality visualization, and binaural audio in a dynamic environment that is personalized and context-aware. The system identifies user proximity and interaction using appropriate sensors and delivers an enhanced user experience based on user behavior.
The proposed architecture was evaluated in a use case involving 3D printed objects that represented figures of a painting. The tangible replication of the painted characters allowed for a vivid representation of the scene depicted in the painting, offering new and exciting ways of digital storytelling associated with it. The evaluation showed that the involved users experienced a more immersive story compared to more traditional digital storytelling approaches, and it provided insights into real users’ interactions in a visiting context, which is challenging. The studies revealed issues concerning both the favorable and the less favorable aspects of the deployed system. Overall, visitors had a very positive response to the experience, indicating that more unconventional approaches to engaging with cultural content, such as storytelling, may greatly contribute towards more compelling visiting experiences.
DS experts are keen to invest effort in providing different visitors with the right information at the right time and with the most effective type of interaction. CHATS provides a platform where personalization technology helps DS experts tailor aspects of a digitally enhanced visiting experience: the interaction modalities through which the content is disclosed and the pace of the visit, both for individuals and for groups. We believe that the direct involvement of cultural heritage professionals in the co-design of the CHATS technology, as well as the extensive evaluation with visitors in field studies, was instrumental in shaping a holistic approach to personalization that fully exploits the new opportunities offered by tangible and embodied interaction.
CHATS can be further extended to allow for more advanced personalization and context-aware procedures which capture more behavioral patterns of the visitor, such as the tracking of complex movement beyond proximity and the identification of visitor focus on specific tangible objects. Moreover, additional ways and methods of digital storytelling, apart from prerecorded audio narratives and AR visualization, can be integrated into the architecture. Future storytelling research focuses on computational DS techniques for emergent narratives. These challenges will be addressed in future work, while the evaluation will be conducted in a larger scale environment.

Author Contributions

Conceptualization, G.T., K.M. and J.A.; methodology, M.K. and K.M.; software, G.T.; validation, M.K.; formal analysis, K.M. and M.K.; writing—original draft preparation, J.A., G.T., K.M., M.K.; writing—review and editing, G.T., M.K., K.M., J.A.; supervision, G.C.; project administration, M.K., G.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kasunic, A.; Kaufman, G. Learning to Listen: Critically Considering the Role of AI in Human Storytelling and Character Creation. In Proceedings of the First Workshop on Storytelling, New Orleans, LA, USA, 5 June 2018.
  2. Ryan, J.O.; Mateas, M.; Wardrip-Fruin, N. Open design challenges for interactive emergent narrative. In ICIDS 2015; LNCS; Schoenau-Fog, H., Bruni, L.E., Louchart, S., Baceviciute, S., Eds.; Springer: Cham, Switzerland, 2015; Volume 9445, pp. 14–26.
  3. Bedford, L. Storytelling: The Real Work of Museums. Curator: Mus. J. 2001, 44, 27–34.
  4. Roussou, M.; Pujol, L.; Katifori, A.; Chrysanthi, A.; Perry, S.; Vayanou, M. The Museum as Digital Storyteller: Collaborative Participatory Creation of Interactive Digital Experiences. In Proceedings of the Annual Conference of Museums and the Web: MW2015, Chicago, IL, USA, 8–11 April 2015.
  5. Abowd, G.D.; Dey, A.K.; Brown, P.J.; Davies, N.; Smith, M.; Steggles, P. Towards a Better Understanding of Context and Context-Awareness. In International Symposium on Handheld and Ubiquitous Computing; Springer: Berlin/Heidelberg, Germany, 1999; pp. 304–307.
  6. Not, E.; Petrelli, D. Blending customisation, context-awareness and adaptivity for personalised tangible interaction in cultural heritage. Int. J. Hum. Comput. Stud. 2018, 114, 3–19.
  7. Trichopoulos, G.; Konstandakis, M.; Aliprantis, J.; Caridakis, G. ARTISTS: A virtual Reality culTural experIence perSonalized arTworks System: The “Children Concert” painting case study. In Proceedings of the International Conference on Digital Culture & AudioVisual Challenges (DCAC-2018), Corfu, Greece, 1–2 June 2018.
  8. Trichopoulos, G.; Aliprantis, J.; Konstantakis, M.; Michalakis, K.; Mylonas, P.; Voutos, Y.; Caridakis, G. Augmented and personalized digital narratives for Cultural Heritage under a tangible interface. In Proceedings of the 2021 16th International Workshop on Semantic and Social Media Adaptation & Personalization (SMAP), Online, 4–5 November 2021; pp. 1–5.
  9. Konstantakis, M.; Caridakis, G. Adding culture to UX: UX research methodologies and applications in cultural heritage. J. Comput. Cult. Herit. (JOCCH) 2020, 13, 1–17.
  10. Pujol, L.; Roussou, M.; Poulou, S.; Balet, O.; Vayanou, M.; Ioannidis, Y. Personalizing interactive digital storytelling in archaeological museums: The CHESS project. In Proceedings of the 40th Annual Conference of Computer Applications and Quantitative Methods in Archaeology, Southampton, UK, 26–29 March 2012.
  11. Lanir, J.; Kuflik, T.; Dim, E.; Wecker, A.J.; Stock, O. The Influence of a Location-Aware Mobile Guide on Museum Visitors’ Behavior. Interact. Comput. 2013, 25, 443–460.
  12. Hampson, C.; Bailey, E.; Munnelly, G.; Lawless, S.; Conlan, O. Dynamic Personalisation for Digital Cultural Heritage Collections. In Proceedings of the UMAP, Rome, Italy, 10–14 June 2013.
  13. Callaway, C.; Stock, O.; Dekoven, E. Experiments with Mobile Drama in an Instrumented Museum for Inducing Conversation in Small Groups. ACM Trans. Interact. Intell. Syst. 2014, 4, 1–39.
  14. Han, K.; Shih, P.C.; Rosson, M.B.; Carroll, J.M. Enhancing community awareness of and participation in local heritage with a mobile application. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, Baltimore, MD, USA, 15–19 February 2014; pp. 1144–1155.
  15. Tsekeridou, S.; Tsetsos, V.; Chalamandaris, A.; Chamzas, C.; Filippou, T.; Pantzoglou, C. iGuide: Socially-enriched mobile tourist guide for unexplored sites. In Proceedings of the Hellenic Conference on Artificial Intelligence, Ioannina, Greece, 15–17 May 2014.
  16. Tanenbaum, K.; Hatala, M.; Tanenbaum, J.; Wakkary, R.; Antle, A. A case study of intended versus actual experience of adaptivity in a tangible storytelling system. User Model. User-Adapt. Interact. 2013, 24, 175–217.
  17. Sylaiou, S.; Mania, K.; Liarokapis, F.; White, M.; Walczak, K.; Wojciechowski, R.; Patias, P. Evaluation of a Cultural Heritage Augmented Reality Game. Cartographies of Mind, Soul and Knowledge. Available online: https://www.researchgate.net/publication/292091242_Evaluation_of_a_Cultural_Heritage_Augmented_Reality_Game/stats (accessed on 11 December 2021).
  18. Chianese, A.; Marulli, F.; Moscato, V.; Piccialli, F. SmARTweet: A location-based smart application for exhibits and museums. In Proceedings of the 2013 International Conference on Signal-Image Technology & Internet-Based Systems, Kyoto, Japan, 2–5 December 2013; pp. 408–415.
  19. Rubino, I.; Barberis, C.; Xhembulla, J.; Malnati, G. Integrating a location-based mobile game in the museum visit: Evaluating visitors’ behaviour and learning. J. Comput. Cult. Herit. (JOCCH) 2015, 8, 1–18.
  20. Střelák, D.; Škola, F.; Liarokapis, F. Examining User Experiences in a Mobile Augmented Reality Tourist Guide. In Proceedings of the 9th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Corfu Island, Greece, 29 June–1 July 2016; Association for Computing Machinery (ACM): New York, NY, USA, 2016; p. 19.
  21. Van der Vaart, M.; Damala, A. Through the Loupe: Visitor Engagement with a Primarily Text-Based Handheld AR Application. In Proceedings of the 2015 Digital Heritage, Granada, Spain, 28 September–2 October 2015; Volume 2.
  22. Spierling, U.; Winzer, P.; Massarczyk, E. Experiencing the Presence of Historical Stories with Location-Based Augmented Reality. In Proceedings of the International Conference on Interactive Digital Storytelling, Madeira, Portugal, 14–17 November 2017; Springer: Cham, Switzerland; pp. 49–62.
  23. Hernández, S. Vapriikki Case: Design and Evaluation of an Interactive Mixed-Reality Museum Exhibit. Available online: https://trepo.tuni.fi/handle/10024/102557 (accessed on 11 December 2021).
  24. Piccialli, F.; Chianese, A. The Internet of Things Supporting Context-Aware Computing: A Cultural Heritage Case Study. Mob. Netw. Appl. 2017, 22, 332–343.
  25. Fenu, C.; Pittarello, F. Svevo tour: The design and the experimentation of an augmented reality application for engaging visitors of a literary museum. Int. J. Hum. Comput. Stud. 2018, 114, 20–35.
  26. Andritsou, G.; Katifori, A.; Kourtis, V.; Ioannidis, Y. MoMap–An Interactive Gamified App for the Museum of Mineralogy. In Proceedings of the 2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), Würzburg, Germany, 5–7 September 2018; pp. 1–4.
  27. Kountouris, A.; Sakkopoulos, E. Survey on Intelligent Personalized Mobile Tour Guides and a Use Case Walking Tour App. In Proceedings of the 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI), Volos, Greece, 5–7 November 2018.
  28. Vassilakis, C.; Poulopoulos, V.; Antoniou, A.; Wallace, M.; Lepouras, G.; Nores, M.L. exhiSTORY: Smart exhibits that tell their own stories. Future Gener. Comput. Syst. 2018, 81, 542–556.
  29. Katifori, A.; Roussou, M.; Perry, S.; Drettakis, G.; Vizcay, S.; Philip, J. The EMOTIVE Project–Emotive Virtual Cultural Experiences through Personalized Storytelling. In Proceedings of the Workshop on Cultural Informatics Research and Applications co-located with the International Conference on Digital Heritage, CIRA@EuroMed 2018, Nicosia, Cyprus, 3 November 2018.
  30. Sansonetti, G.; Gasparetti, F.; Micarelli, A.; Cena, F.; Gena, C. Enhancing cultural recommendations through social and linked open data. User Model. User-Adapt. Interact. 2019, 29, 121–159.
  31. Konstantakis, M.; Kalatha, E.; Caridakis, G. Cultural Heritage, Serious Games and User Personas Based on Gardner’s Theory of Multiple Intelligences: “The Stolen Painting” Game. In Proceedings of the International Conference on Games and Learning Alliance, Athens, Greece, 27–29 November 2019.
  32. Alinam, M.; Ciotoli, L.; Torre, I. WoTEdu: A Multimodal Interactive Storytelling System. In Proceedings of the 14th PErvasive Technologies Related to Assistive Environments Conference, Corfu, Greece, 29 June–2 July 2021; pp. 119–120.
  33. Cesário, V.; Olim, S.; Nisi, V. A Natural History Museum Experience: Memories of Carvalhal’s Palace–Turning Point. In Proceedings of the International Conference on Interactive Digital Storytelling, Bournemouth, UK, 3–6 November 2020.
  34. Ciotoli, L.; Alinam, M.; Torre, I. Sail with Columbus: Navigation through Tangible and Interactive Storytelling. In Proceedings of the CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter (CHItaly 21), Bolzano, Italy, 11–13 July 2021.
  35. Peinado, F.; Gervás, P. Transferring game mastering laws to interactive digital storytelling. In TIDSE 2004; LNCS; Göbel, S., Ed.; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3105, pp. 48–54.
  36. Spawforth, C.; Gibbins, N.; Millard, D.E. StoryMINE: A System for Multiplayer Interactive Narrative Experiences. In Interactive Storytelling; ICIDS 2018; Lecture Notes in Computer Science; Rouse, R., Koenitz, H., Haahr, M., Eds.; Springer: Cham, Switzerland, 2018; Volume 11318.
  37. Bouvier, P.; Lavoué, E.; Sehaba, K. Defining Engagement and Characterizing Engaged-Behaviors in Digital Gaming. Simul. Gaming 2014, 45, 491–507.
  38. Antoniou, A. Social network profiling for cultural heritage: Combining data from direct and indirect approaches. Soc. Netw. Anal. Min. 2017, 7, 1–11.
  39. Chen, G.; Huang, S. Understanding Chinese cultural tourists: Typology and profile. J. Travel Tour. Mark. 2018, 35, 162–177.
  40. Konstantakis, M.; Alexandridis, G.; Caridakis, G. A Personalized Heritage-Oriented Recommender System Based on Extended Cultural Tourist Typologies. Big Data Cogn. Comput. 2020, 4, 12.
  41. Vong, F. Application of cultural tourist typology in a gaming destination–Macao. Curr. Issues Tour. 2016, 19, 949–965.
  42. Konstantakis, M.; Aliprantis, J.; Michalakis, K.; Caridakis, G. Recommending user experiences based on extracted cultural personas for mobile applications–REPEAT methodology. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, Barcelona, Spain, 3–6 September 2018.
  43. Mystakidis, S.; Berki, E. The case of literacy motivation: Playful 3D immersive learning environments and problem-focused education for blended digital storytelling. Int. J. Web-Based Learn. Teach. Technol. (IJWLTT) 2018, 13, 64–79.
  44. Roussou, M.; Katifori, A. Flow, staging, wayfinding, personalization: Evaluating user experience with mobile museum narratives. Multimodal Technol. Interact. 2018, 2, 32.
Figure 1. CHATS architecture.
Figure 2. Painting “Children’s Concert” by George Jacovides.
Figure 3. A 3D printed character—the boy with the watering can.
Figure 4. Sensors’ positioning.
Figure 5. The testing prototype.
Figure 6. The flow of a session.
Figure 7. The flow of a narrative.
Figure 8. Creating the demo application.
Figure 9. Testing the prototype.
Figure 10. Testing by middle-school-aged children.
Figure 11. Interview responses on DS contribution.
Figure 12. Interview responses on user’s knowledge about DS.
Figure 13. Interview responses on app’s evaluation.
Table 1. CUX applications and digital storytelling over the last decade [9].
Columns: DS | TS | PER | AR | VR | CA | SG | 3D (H: high level, M: medium level, L: low level)
CHESS Project [10]: H H H H L M H
Lanir’s Mobile Guide [11]: M L H M
CULTURA [12]: H H
DRAMATRIC [13]: H H
Lost State College [14]: M H H
iGuide [15]: M H H H H
The Reading Glove [16]: H H H M
ARCO Project [17]: L H H H H H M
The Beauty or the Truth [18]: L H H L
Gossip at Palace [19]: H H M L H M
Střelák’s AR guide [20]: L M H L H
Through the Loupe [21]: L M H L L
SPIRIT Project [22]: H H H H
Vapriikki Case [23]: L M H L
TolkArt app [24]: H H H
SVEVO App [25]: H H H H L
MoMap [26]: H H H
MyWay [27]: H H H
exhiSTORY [28]: H H H
EMOTIVE [29]: H H H
Cicero [30]: M H
meSch project [6]: H H H H
The Stolen Painting [31]: H H H
WoTEdu [32]: H
Turning Point [33]: H H H
Sail with Columbus [34]: H H
Summary: DS: all, TS: 5, PER: 20, AR: 9, VR: 6, CA: 16, SG: 9, 3D: 6
DS: Digital Storytelling, TS: Tangible Storytelling, PER: PERsonalization, AR: Augmented Reality, VR: Virtual Reality, CA: Context Awareness, SG: Serious Games, 3D: 3D Digital Representation.
Table 2. Sound recordings used for demo application.
Each session starts with the welcome sound; the subsequent recording set depends on the context:
  • Single visitors, just looking: SoundFile 1–10
  • Single visitors, touching “Watering can”: SoundFile 11–20
  • Single visitors, touching “Drummer”: SoundFile 21–30
  • Group, approaching: GroupSound 1–5
  • Group, touching: GroupSound 6–10
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
