Article

Autonomous Critical Help by a Robotic Assistant in the Field of Cultural Heritage: A New Challenge for Evolving Human-Robot Interaction

by
Filippo Cantucci
*,† and
Rino Falcone
*,†
Institute of Cognitive Science and Technology, National Research Council of Italy (ISTC-CNR), 00185 Rome, Italy
*
Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Multimodal Technol. Interact. 2022, 6(8), 69; https://doi.org/10.3390/mti6080069
Submission received: 17 June 2022 / Revised: 12 August 2022 / Accepted: 14 August 2022 / Published: 17 August 2022
(This article belongs to the Special Issue Digital Cultural Heritage (Volume II))

Abstract

Over the years, the purpose of cultural heritage (CH) sites (e.g., museums) has increasingly focused on providing personalized services to different users, with the main goal of adapting those services to the visitors’ personal traits, goals, and interests. In this work, we propose a computational cognitive model that provides an artificial agent (e.g., a robot or a virtual assistant) with the capability to personalize a museum visit to the goals and interests of the user who intends to visit the museum, while also taking into account the goals and interests of the museum curators who designed the exhibition. In particular, we introduce and analyze a special type of help (critical help) that leads to a substantial change in the user’s request, with the objective of taking into account needs that the user themselves cannot, or has not been able to, assess. The computational model has been implemented by exploiting the multi-agent oriented programming (MAOP) framework JaCaMo, which integrates three different multi-agent programming levels. We provide the results of a pilot study that we conducted in order to test the potential of the computational model. The experiment was conducted with 26 participants who interacted with the humanoid robot Nao, widely used in Human-Robot interaction (HRI) scenarios.

1. Introduction

Information and communication technologies (ICT) have steadily gained ground over the years and are today a fundamental means of support for cultural heritage (CH) documentation, interpretation, recreation, and dissemination. Digitization and ICT applications have been recognized as effective support for cultural heritage preservation, as well as for producing a significant number of additional resources for the management of cultural heritage itself. For example, the Europeana project [1] has created new scientific and public access to cultural heritage resources. Archivists today have a suite of tools for creating digital copies, ranging from scanning and photography to 3D volumetric photogrammetry. Many of these tools have emerged as fundamental apparatuses for material-based CH archives. ArCo [2] is the Italian cultural heritage knowledge graph, consisting of a network of seven vocabularies and 169 million triples about 820 thousand cultural entities. It collects and validates the catalog records of (ideally) all Italian cultural heritage properties (excluding libraries and archives). A number of research projects have also made notable progress on intangible assets. For instance, the i-Treasures project [3] focuses on organizing intangible know-how: the aim of the project is to carry out a series of research activities to digitize CH assets, covering traditional dances, folk singing, craftsmanship, and contemporary music composition. Similarly, the Terpsichore project [4] aims to integrate ICT strategies with storytelling to advance the digitization of CH content related to traditional dances. Recent studies in cultural computing have triggered a growing number of algorithmic advancements to facilitate CH data usage [5,6]. In parallel to these approaches, innovative paradigms and technologies for cultural heritage exploitation have been proposed [7]. These technologies enable user-centred presentation and make cultural heritage digitally accessible, providing a user experience even when physical access is constrained. For example, virtual reality (VR) is becoming an increasingly important tool for the research, communication, and popularization of cultural heritage [8]. Many interactive 3D reconstructions of artifacts, monuments, and entire sites have been realized, meeting the approval of both specialists and the public at large [9].

1.1. Related Work

The last two decades have seen many efforts to deploy social robots in museum settings [10,11,12,13,14]. Social robots offer a wide range of capabilities that allow them to interact with and assist humans in a natural manner. This makes them suitable for museum settings, as they can greet, educate, or guide visitors. Early and remarkable work in this field includes the autonomous Rhino robot [10], a mobile tour-guide robot able to navigate in a museum and play pre-recorded descriptions of the exhibitions. The robot was deployed in real museums, with the effect of increasing overall attendance by at least 50%. Other important pioneering work [15] has led to the development of various autonomous and mobile robots with the purposes of greeting visitors, giving guided tours, and showing additional information (e.g., videos) that brings exhibitions to life. A similar robot, Minerva [11], provided guided tours to visitors; unlike Rhino, Minerva was equipped with a face and could display emotions through changes in vocal tonality and facial expressions. Despite this pioneering work, these robots were focused mainly on aspects related to motion and were designed to guide visitors inside the museum without investigating the visitors’ actual tastes. In the case of Minerva, for example, 36.9% of the 63 people questioned perceived the robot as having an intelligence similar to humans; however, 69.8% did not perceive Minerva to be alive, suggesting that its social interaction capabilities were still limited.
Over the years, the purpose of several museums has shifted from providing static information about the resources they handle (e.g., collections of artworks) to providing a much broader user experience. In this perspective, different Human-Robot interaction (HRI) approaches have been proposed in order to design intelligent robots able to properly interact with users in museums [13,14,16]. For example, the CiceRobot project [13] aimed to develop a robotic tour guide whose behavior was based on a cognitive architecture integrating perception, self-perception, planning, and Human-Robot interaction. In addition to navigating properly inside a museum, the robot was able to explain the contents of the display windows to visitors and enabled them to ask questions on topics related to the exhibited objects. Vásquez and Matía [16] proposed a research project with the main goal of developing a smart social robot showing sufficient intelligence to work as a tour guide in different environments; a fuzzy emotion system that controls the face and voice modules forms part of the architecture underlying the robot’s behavior during assisted interactions. Other works leverage robot and user gaze in order to establish a much deeper interaction with visitors. Recently developed museum robots also try to achieve personalization [17] of the user experience. For example, Iio et al. [18] developed an autonomous human-like guide robot for a science museum. The robot identifies individuals, estimates the exhibits at which visitors are looking, and proactively approaches them to provide explanations with gaze, using an approach the authors call speak-and-retreat interaction. The robot also performs relation-building behaviors such as greeting visitors by name and expressing a friendlier attitude toward repeat visitors. However, although these approaches enable natural and human-like interaction, they do not take into account the mental states of the interacting user. Therefore, they do not achieve real personalization [19], which would ideally be based on complex models that describe the user, starting from features that the robot can investigate during the interaction. Personalization of cultural heritage information requires a system that is able to model the user (e.g., interests, knowledge, and other personal characteristics) as well as contextual aspects, select the most appropriate content, and deliver it in the most suitable way. Alongside the ability to consider the user’s needs, an intelligent system should also take into account the interests, goals, and plans of those who manage the cultural heritage and allow its usage. These plans, goals, and interests are, in general, implicit in the restrictions and predefined choices that the museum offers to its users. However, they can be adapted and personalized for each user. In practice, on the basis of the mental attitudes attributed to the user and the constraints or needs attributable to the museum curators, a mediation system between these two parties (i.e., users and museum curators) can play a role in customizing the museum visit, in order to best satisfy both of them.
For example, the mediation system should not only personalize a visit based on the user’s artistic interests and other characteristics declared by or attributable to them (e.g., time available, level of interest, and so on), but it should also consider all those features related to the interests, goals, and plans that the museum curators designed for a museum tour. Most of the time (and this is the approach followed in this paper), the goals/interests of the museum curators are oriented toward the satisfaction of the user rather than in conflict with it (e.g., the intent to guide the user toward a truly relevant collection that the user did not know about and cannot assess the value of). However, a negotiation process is necessary; in our case, this process takes place through the role of the mediator (e.g., a robot). In any case, it is the user who, at the end of the visit, declares their satisfaction with the mediation process carried out by the robot. A museum potentially has a huge amount of digital information to present, in addition to the artworks that can be physically visited. Intelligent systems have to be able to handle this amount of information in order to adapt the level of detail in describing a specific artwork, not only with respect to the level of accuracy required by the user but also on the basis of the level of accuracy that the museum curators believe is necessary to understand the artwork.

1.2. Contribution

In this work, we propose a computational cognitive model that provides an artificial agent (e.g., a robot or a virtual assistant) with the capability of personalizing a museum tour with respect to the goals and interests of the user who intends to visit the museum, while also taking into account the goals and interests of the museum curators who designed the exhibition. In particular, the computational model is able to:
  • Investigate the artistic interests of the user and model the user with respect to those interests by attributing to them specific mental states (beliefs, goals, plans) and creating a complex user model;
  • Model the beliefs, goals, and plans of the museum curators;
  • Select the most suitable museum tour as a result of a negotiation internal to the agent, between the represented mental states of the user and the represented mental states of the exhibition curators;
  • Investigate different dimensions of the user’s satisfaction with respect to the tour proposed by the intelligent agent.
We provide the results of a Human-Robot interaction pilot study that we designed in order to test the capabilities of the computational model. We recruited 26 participants who interacted with the humanoid robot Nao [20], widely used in Human-Robot interaction scenarios. The robot plays the role of a museum assistant in a virtual museum, and it has the goal of providing a museum exhibition to the user. At the end of each interaction, the robot proposes a short survey to the user, with the aim of investigating different dimensions of their satisfaction with respect to the presented exhibition. The computational model has been implemented by exploiting the multi-agent oriented programming (MAOP) framework JaCaMo [21], which integrates three different multi-agent programming levels: agent-oriented (AOP), environment-oriented (EOP), and organization-oriented programming (OOP).
In conclusion, the main contribution of our work consists of investigating the possibility of offering a kind of help (provided by an artificial system, such as a robot) that does not necessarily correspond to the explicit and declared request of the user. This kind of collaboration, which tries to protect interests and goals of the user that the user themselves is not always able to perceive, represents a novelty in the panorama of Human-Robot collaboration. The novelty of this work lies in the fact that, in our model, the system solicits a request from the user but then analyzes it critically, also taking into account the actual collections of the museum and how these can best satisfy the profile that the system has built of the same user. The satisfactory results we obtained show that this experiment, albeit preliminary, goes in the right direction.
The paper is organized as follows: Section 2 describes the background underlying our approach; Section 3 focuses on the description of the cognitive model; Section 4 and Section 5 are dedicated to the experiment and its results; finally, Section 6 and Section 7 are dedicated to conclusions and future works.

2. Background

The human capability to attribute mental representations and states to AI agents becomes crucial in the context of Human–Agent Cooperation [22], where it is desirable that the role of such agents is not that of a passive executor but that of an active collaborator. Let us consider a collaborative scenario in which a human X and an artificial agent Y share the same plan. In this context, X relies on Y to realize some part of their common plan or of X’s own plan (task delegation); on its side, Y decides to help X achieve some of their goals by substituting for X in some role/action of X’s plan and achieving some goal (task adoption). Now, in order to do something for X, Y has to understand X’s goals and beliefs, for example, X’s expectations about Y’s behavior. Cooperation and, consequently, task delegation/adoption imply more than simple obedience to orders or simple execution of a prescribed action [23]. From the artificial agent’s point of view, delegation and adoption distinguish a collaborator from a simple tool and presuppose intelligence and autonomy [24]. In their complex sense, cooperation and help are not just order/task execution; they require more autonomy and even initiative. Let us focus on a deep level of cooperation, where agent Y can adopt a task delegated by X at different levels of effective help. The different levels of adoption can be identified, according to [22] (a minimal sketch encoding them follows the list below):
  • Sub help: agent Y satisfies a sub-part of the delegated world-state (so satisfying just a sub-goal of agent X),
  • Literal help: agent Y adopts exactly what has been delegated by agent X,
  • Over help: agent Y goes beyond what has been delegated by agent X without changing X’s plan (but including it within a hierarchically superior plan),
  • Critical over help: agent Y realizes an over help and, in addition, modifies the original plan/action (included in the new meta-plan),
  • Critical help: agent Y satisfies the relevant results of the requested plan/action (the goal), but modifies that plan/action,
  • Critical-sub help: agent Y realizes a sub help and, in addition, modifies the (sub) plan/action.
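For illustration only, the following Python sketch encodes these six levels of adoption as an enumeration; the type and constant names are ours (not taken from [22] or from the paper’s implementation) and could be used, for instance, to tag the kind of help an agent decides to provide.

    from enum import Enum, auto

    class AdoptionLevel(Enum):
        """Levels of task adoption an agent Y can provide to a delegating agent X (after [22])."""
        SUB_HELP = auto()            # Y satisfies only a sub-goal of what X delegated
        LITERAL_HELP = auto()        # Y adopts exactly what X delegated
        OVER_HELP = auto()           # Y goes beyond the delegation without changing X's plan
        CRITICAL_OVER_HELP = auto()  # over help plus a modification of the original plan/action
        CRITICAL_HELP = auto()       # Y satisfies the goal but modifies the requested plan/action
        CRITICAL_SUB_HELP = auto()   # sub help plus a modification of the (sub) plan/action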
The theory of delegation and adoption and the more general theory of adjustable social autonomy [24] represent the core theoretical background underlying the design of the computational cognitive model proposed in this work. Two additional theoretical tools, applied in the field of Human-Agent interaction (HAI), have supported the design process: theory of mind (ToM) and BDI agent modeling.
Theory of mind [25] can be defined as the ability of an agent (human or artificial) to ascribe specific mental states to other agents and to take them into account when making decisions. Modeling other agents is one of the most important abilities learned by humans when they cooperate with each other. Humans have a strong predisposition to anthropomorphize anything that surrounds them and to evaluate or predict the behavior of other humans on the basis of a strong ToM of their interlocutors, with the result of fostering intelligent collaboration. However, the recent, though growing, introduction of intelligent systems into society has not yet allowed people (mainly non-specialists) to build a ToM of these systems based on correct assumptions. Providing artificial agents with the capability to build complex models of the interlocutor’s mental states and to adapt their decisions on the basis of these models represents a crucial point for promoting an intelligent and trustworthy collaboration.
BDI agent modeling [26] is one of the most popular models in agent theory [27]. Originally inspired by the theory of human practical reasoning developed by Michael Bratman [28], the BDI model focuses on the role of intentions in reasoning and allows the characterization of agents from a human-like point of view. Very briefly, in the BDI model, the agent has beliefs, i.e., information representing what it perceives in the environment and what it communicates with other agents, and desires, i.e., states of the world that the agent means to accomplish. The agent deliberates on its desires and decides to commit to one of them: committed desires become intentions. To satisfy its intentions, it executes plans in the form of courses of action or sub-goals to achieve. The behavior of the agent is thus described or predicted by what it has committed to carry out. An important feature of BDI agents is the ability to react to changes in their environment as soon as possible while keeping their proactive behavior.
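As a rough illustration of this reasoning cycle, the Python sketch below implements a toy BDI-style deliberation loop. It is a didactic simplification under our own naming (it is not the JaCaMo/Jason API): the agent updates its beliefs from percepts, commits to a desire as an intention, and executes the corresponding plan step by step.

    from collections import deque

    class ToyBDIAgent:
        """Didactic sketch of a BDI reasoning cycle (not the JaCaMo/Jason API)."""

        def __init__(self, beliefs, desires, plan_library):
            self.beliefs = set(beliefs)          # what the agent currently holds true
            self.desires = list(desires)         # candidate goals
            self.plan_library = plan_library     # goal -> list of actions (callables on the beliefs)
            self.intentions = deque()            # committed goals with their remaining plan steps

        def perceive(self, percepts):
            # Reactivity: new percepts update the belief base before each deliberation step.
            self.beliefs.update(percepts)

        def deliberate(self):
            # Commit to the first desire that has an applicable plan and is not yet intended.
            for goal in self.desires:
                if goal in self.plan_library and all(goal != g for g, _ in self.intentions):
                    self.intentions.append((goal, deque(self.plan_library[goal])))
                    break

        def act(self):
            # Proactivity: execute one step of the current intention.
            if not self.intentions:
                return
            goal, plan = self.intentions[0]
            if plan:
                action = plan.popleft()
                action(self.beliefs)
            if not plan:                         # plan finished: the committed goal is achieved
                self.intentions.popleft()

        def step(self, percepts=()):
            self.perceive(percepts)
            self.deliberate()
            self.act()

    # Example: one desire ("greet_visitor") whose single-step plan adds a belief when executed.
    agent = ToyBDIAgent(beliefs={"at_entrance"},
                        desires=["greet_visitor"],
                        plan_library={"greet_visitor": [lambda b: b.add("visitor_greeted")]})
    agent.step(percepts={"visitor_present"})
    print("visitor_greeted" in agent.beliefs)    # True

In JaCaMo, which is used in this work, these elements are instead expressed declaratively through Jason agent plans, CArtAgO environment artifacts, and Moise organizational specifications.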

3. An Overview of the Computational Cognitive Model

The proposed computational cognitive model (Figure 1) provides a cognitive artificial agent (with its own beliefs, goals, intentions, and so on) with the capability of personalizing a museum tour on the basis of the mental state of the user that intends to visit the museum, by also taking into account the mental states of the museum curators that have designed the exhibition. The final tour recommended is the result of an agent’s internal process of negotiation between the mental states of the user and those of the museum curators.
The mental states of the agent are stored in the Beliefs Base, a database where the following are collected:
  • The current state of the environment, excluding the agents involved in the scenario;
  • The mental states of the user; that is, the beliefs, goals, and plans that the agent attributes to the user thanks to the capability of having a ToM of the user themselves;
  • The mental states of other agents involved in the scenario. In this case, the agents are the museum curators; that is, those who designed, realized, and maintain the museum exhibition;
  • General beliefs, which correspond to the agent’s knowledge.
The computational model provides the agent with the tools to interact with the user in order to map, into its Beliefs Base, the information it considers relevant for adapting the museum visit to the user. The agent establishes an initial interaction with the user, with the goal of profiling them by investigating their artistic interests (Artistic User Profiling). Through voice interaction, supported by interactive tools (GUI), the agent is able to extract this information and collect it into a user profile P_U = <p_F, P_D, Acc_u>, defined as a tuple of features encoding the following (a minimal data-structure sketch is given after the list):
  • p_F: the user’s favorite artistic period;
  • P_D: the artistic periods in which the user has no interest;
  • Acc_u: the level of accuracy with which the user intends to view the material proposed during the visit to the museum.
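A minimal sketch of how such a profile could be represented follows, assuming plain string labels for artistic periods and accuracy levels; the field names mirror the tuple above, but the class itself is illustrative and not the paper’s implementation.

    from dataclasses import dataclass, field

    @dataclass
    class UserProfile:
        """User profile P_U = <p_F, P_D, Acc_u> built during Artistic User Profiling."""
        p_f: str                                    # favorite artistic period, e.g. "Impressionism"
        p_d: set[str] = field(default_factory=set)  # artistic periods of no interest
        acc_u: str = "high"                         # accuracy level requested by the user ("high"/"low")

    # Example: a visitor who loves Impressionism, has no interest in Greek Art,
    # and wants detailed artwork descriptions.
    profile = UserProfile(p_f="Impressionism", p_d={"Greek Art"}, acc_u="high")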
In addition to the user, the cognitive model allows the agent to model the mental states of other agents involved in the scenario. In this case, the agent is able to model in its Beliefs Base some beliefs, goals, and plans ascribed to the museum curators who designed the entire exhibition. Unlike the user model, which is created at run-time on the basis of the user’s profile, the exhibition curators’ model is described in advance in the agent’s Beliefs Base. While it represents a priori knowledge of the agent, this model can be modified by the agent itself through interaction with the museum curators.
After investigating the user’s artistic interests, profiling them, and attributing mental states consistent with the profile created, the agent has to select a museum visit to propose to the user. The cognitive model defines multiple heuristics that can be exploited by the agent to identify the most suitable museum tour; these heuristics implement different internal negotiation processes that the agent triggers with the aim of mediating the choice of the museum visit, considering the mental states of the user and those of the curators of the exhibition (Negotiation strategies). The selection of the most suitable heuristic depends on the mental states that are modeled in the agent’s Beliefs Base (Strategy selection). In Section 4, we describe the heuristic exploited by the agent in the pilot study.

4. The Pilot Study

This section describes a Human-Robot interaction (HRI) pilot study that we designed in order to test the capabilities of the computational model. We recruited 26 participants who interacted with the Nao robot. The robot plays the role of a museum assistant in a virtual museum, and it has the goal of providing a museum exhibition to the user. During the interaction, the robot collects information for profiling the user and then acts as an assistant during the museum visit, offering the user the possibility of listening to the descriptions of the artworks read aloud by the robot itself. At the end of each tour, the robot proposes a short survey to the user, with the aim of investigating different dimensions of their satisfaction with respect to the recommended exhibition. The robot helps users to visit the part of the museum that is most appropriate to their artistic interests and that represents a mediation between these interests and those of the museum curators. The museum tour resulting from the mediation process can correspond to the artistic interests explicitly declared by the user, or it can differ slightly from the declared interests. In the first case, the robot provides literal help to the user; in the second case, it provides critical help. When the robot provides critical help, it tries to satisfy the user by leveraging implicit assumptions based on the artistic interests the user has explicitly declared.

4.1. Experimental Design

The museum that the user explores is organized into multiple thematic tours (Figure 2), each containing artworks (Figure 3) that belong to the same artistic period (e.g., Impressionism, Surrealism, Baroque, Greek Art, and so on). The museum is designed in such a way that it covers the entire body of the history of art. As a reference for the classification of the history of art into historical periods, we referred to the work of one of the most important art historians of the 20th century, Giulio Carlo Argan [29]. The categorization of the history of art periods follows the schema shown in Figure 4. This categorization allows us to establish plausible assumptions about the users: artistic periods belonging to the same category are more homogeneous and, therefore, closer to a user’s preferences than artistic periods of other categories. For example, a user who indicates “Impressionism” as their preferred artistic period will probably be more inclined toward “modern art” than toward “ancient art”. The final model that the agent attributes to the user is a collection of beliefs, goals, etc., that the agent infers on the basis of the features perceived during the profiling phase. Each thematic tour is described by three attributes: relevance, accuracy, and category.
  • The relevance of an artistic period is defined on the basis of the originality of the artworks that compose it and the impact they had in the field of art history.
  • The accuracy, on the other hand, specifies the detail in the description of each artwork present in a thematic room.
  • Each thematic tour (artistic period) belongs to a category that collects different artistic periods; for example, the “Impressionism” tour belongs to the same category as the “Surrealism” and “Cubism” tours, which are in the more general class named “modern art”. This is replicated for any artistic period.
For the experiment, we defined three levels of relevance (high, medium, low) and two levels of accuracy (high, low). The user can explore the museum room by choosing the artworks they wish to see, and they can leave the museum at any time.
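To make the tour attributes concrete, here is an illustrative Python sketch of how each thematic tour could be annotated with relevance, accuracy, and category. The attribute values and the category mapping shown are placeholders: only periods explicitly mentioned in the text are used, and the relevance/accuracy assignments are invented for the example (the actual categorization follows Argan’s schema in Figure 4).

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Tour:
        """Metadata attached by the museum curators to each thematic tour."""
        period: str        # artistic period covered by the tour, e.g. "Impressionism"
        relevance: str     # curators' judgment: "high", "medium", or "low"
        accuracy: str      # detail level of the artwork descriptions: "high" or "low"
        category: str      # broader class of the history of art, e.g. "modern art"

    # Partial, illustrative catalogue (values are placeholders, not the museum data used in the paper).
    tours = [
        Tour("Impressionism", relevance="medium", accuracy="high", category="modern art"),
        Tour("Surrealism",    relevance="high",   accuracy="high", category="modern art"),
        Tour("Cubism",        relevance="low",    accuracy="low",  category="modern art"),
    ]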

4.2. The Heuristic for the Tour Selection

Algorithm 1 describes the heuristic exploited by the agent in order to select the most appropriate section to visit (a Python sketch of the same heuristic is given after the algorithm). The algorithm takes as input the user’s preferred artistic period, the periods of non-interest, and the level of accuracy chosen by the user. After obtaining the values of relevance, accuracy, and category of the tour corresponding to the user’s preferred artistic period, the algorithm checks multiple conditions. The first condition (C1) verifies whether the artistic period requested by the user has maximum relevance from the museum curators’ point of view and, in that case, whether the accuracy of its description corresponds to that chosen by the user. If both conditions are true, then the robot recommends the visit of the corresponding tour. If only the accuracy condition is not satisfied, the algorithm still chooses the period requested by the user but presents it with a level of accuracy different from the one indicated; the accuracy will be the one believed appropriate by the museum curators (condition C2). If condition C2 is not verified either, then the algorithm investigates the tours corresponding to the artistic periods that the user has not discarded (P_M). If there is a tour with a high level of relevance that belongs to the same category as the user’s preferred artistic period and requires a level of accuracy equal to that chosen by the user, then the robot recommends the visit of the corresponding tour (condition C3). If this condition cannot be satisfied either, then the algorithm tries to select a tour with a high level of relevance that belongs to the same category as the user’s preferred artistic period, regardless of the level of accuracy it requires; the accuracy will be the one believed appropriate by the museum curators (condition C4). Condition C5 instead occurs when, no tour to recommend having been found in the same category as the requested artistic period, there is a tour that corresponds to an artistic period belonging to the category next to or preceding that of the user’s preferred artistic period and that has a level of relevance immediately following that of the user’s preferred artistic period. Finally, if not even C5 is satisfied, the algorithm selects a random tour among those corresponding to the artistic periods not discarded by the user (condition C6).
Algorithm 1 Artistic Period Selection Algorithm.
Input: p_F, P_D, Acc_u
procedure Heuristic for Selection
    r_pF ← getRelevance(p_F)
    a_pF ← getAccuracy(p_F)
    c_pF ← getCategory(p_F)
    P_M ← remove(P_D, p_F)
    if (r_pF = rMax & a_pF = Acc_u) then                                   ▹ C1
        R_toVisit ← p_F
    else
        if (r_pF = rMax) then                                              ▹ C2
            R_toVisit ← p_F
        else
            for p_M ∈ P_M do
                r_pM ← getRelevance(p_M)
                a_pM ← getAccuracy(p_M)
                c_pM ← getCategory(p_M)
                if (r_pM = rMax & c_pM = c_pF & a_pM = Acc_u) then         ▹ C3
                    R_toVisit ← p_M
                    return R_toVisit
                else
                    if (r_pM = rMax & c_pM = c_pF) then                    ▹ C4
                        R_toVisit ← p_M
                        return R_toVisit
                    else
                        c_next ← getNextCategory(p_M, p_F)
                        c_prev ← getPreviousCategory(p_M, p_F)
                        r_new ← getNewRelevance(p_M, p_F)
                        if (r_pM = r_new & (c_pM = c_next | c_pM = c_prev)) then   ▹ C5
                            R_toVisit ← p_M
                            return R_toVisit
                        else
                            R_toVisit ← getRandom(P_M)                     ▹ C6
                        end if
                    end if
                end if
            end for
        end if
    end if
end procedure
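For readers who prefer running code, the following Python sketch reproduces the heuristic following the prose description above. The lookup helpers (relevance, accuracy, category, category adjacency) and all data values are assumptions standing in for the agent’s Beliefs Base, and conditions are evaluated in the same order as the pseudocode (C1 through C6); it is an illustration, not the JaCaMo implementation used in the study.

    import random

    # Illustrative Beliefs Base content: period -> (relevance, accuracy, category).
    # Names and values are placeholders, not the museum data used in the paper.
    BELIEFS = {
        "Impressionism": ("medium", "high", "modern art"),
        "Surrealism":    ("high",   "high", "modern art"),
        "Cubism":        ("low",    "low",  "modern art"),
        "Baroque":       ("high",   "high", "baroque art"),
        "Greek Art":     ("medium", "low",  "ancient art"),
    }
    CATEGORY_ORDER = ["ancient art", "baroque art", "modern art"]   # assumed chronological order
    RELEVANCE_ORDER = ["high", "medium", "low"]                     # "high" plays the role of rMax

    def neighbours(category):
        """Previous and next categories in the (assumed) chronological ordering."""
        i = CATEGORY_ORDER.index(category)
        return {CATEGORY_ORDER[j] for j in (i - 1, i + 1) if 0 <= j < len(CATEGORY_ORDER)}

    def next_relevance(relevance):
        """Relevance level immediately following the given one (our reading of getNewRelevance)."""
        i = RELEVANCE_ORDER.index(relevance)
        return RELEVANCE_ORDER[min(i + 1, len(RELEVANCE_ORDER) - 1)]

    def select_tour(p_f, p_d, acc_u):
        r_f, a_f, c_f = BELIEFS[p_f]
        # P_M: periods the user has not discarded (assumed to be non-empty).
        candidates = [p for p in BELIEFS if p not in p_d and p != p_f]
        if r_f == "high" and a_f == acc_u:      # C1: preferred period, matching accuracy
            return p_f
        if r_f == "high":                       # C2: preferred period, curators' accuracy
            return p_f
        for p in candidates:
            r, a, c = BELIEFS[p]
            if r == "high" and c == c_f and a == acc_u:   # C3: same category, matching accuracy
                return p
            if r == "high" and c == c_f:                  # C4: same category, curators' accuracy
                return p
            if r == next_relevance(r_f) and c in neighbours(c_f):   # C5: adjacent category
                return p
        return random.choice(candidates)        # C6: random non-discarded tour

    print(select_tour("Impressionism", p_d={"Greek Art"}, acc_u="high"))

With the placeholder data above, the call prints "Surrealism": the preferred period (Impressionism) has only medium relevance, so the agent recommends a highly relevant tour from the same category, which is precisely the critical-help case discussed in Section 4 and Section 5.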

4.3. Experimental Procedure

A total of 26 participants were recruited for this pilot study. The sample was composed of 6 females and 20 males, aged between 25 and 75 years old. The subjects were not necessarily robotics experts, nor did they necessarily deal with robots in daily life. Each participant carried out an entire interaction with the robot (a trial): the participant aims to take part in a tour of the virtual museum corresponding to a specific artistic period and is aware that the tour will be chosen by the robot managing the virtual museum, which will select the most suitable one. Each trial develops in the following phases:
  • Starting interaction: the robot introduces itself to the user, describing its role and the virtual museum it manages.
  • User artistic profiling: the robot proposes a series of questions to the user, which aim to investigate their artistic interests in terms of their favorite artistic periods and the artistic periods of no interest. In this phase, the interaction is supported by a GUI through which the user can express their artistic preferences, and the robot can collect useful data to profile the user. In addition to defining the artistic periods of interest and non-interest, the robot asks the user with what degree of accuracy they intend to visit the section.
  • Tour visit: once the user profile has been established, the robot exploits the heuristic defined in Section 4.2 to select the tour on behalf of the user. Once the selection has been made, the robot activates the corresponding tour in the virtual museum and leaves the control to the user, who can visit the room, selecting the artworks inside.
  • End museum tour: the user can leave the recommended tour and, therefore, the museum. Once this happens, the robot returns to interact with the user, asking them questions. These questions, which belong to a short survey, are used to investigate how satisfied the user is with the visit. We have decided to adopt a five-level scale to encode the user responses, where value 1 is the worst case, and 5 is the best one.
In particular, the survey’s questions that the user had to answer are the following:
  • Q1: How satisfied were you with the duration of the visit?
  • Q2: How satisfied were you with the quality of the artworks?
  • Q3: How satisfied were you with the number of the artworks?
  • Q4: How surprised were you with the artistic period recommended by the robot compared to the artistic period initially chosen by you?
  • Q5: How satisfied are you with the robot’s recommendation given the artistic period initially chosen by you?

5. Results

The pilot study has been designed with the goal of answering the following research questions (RQ):
  • RQ1: How risky/acceptable is critical help compared to literal help? Does the proposed heuristic help make this kind of help more acceptable?
  • RQ2: Given the risks that critical help entails, in what situations, and to what extent, can critical help be useful?
Here we report the results obtained in the pilot study. We divided the results into users who received critical help from the agent (the preferred artistic period chosen by the user does not match the tour recommended by the robot), summarized in Table 1, and users who received literal help (the preferred artistic period selected by the user coincides with the tour recommended by the robot), summarized in Table 2. We can observe that 15 users received critical help, while 11 received literal help. We are interested in investigating the answers to questions Q4 and Q5; these questions are designed to understand the impact of the robot’s ability to propose to the user a tour different from the one they expected. In this experiment, questions Q1, Q2, and Q3 are not deeply analyzed; they were asked to contextualize the user and to ensure the user could focus on specific aspects before answering questions Q4 and Q5. In this way, we try to draw the user’s attention to the contents of the tour and, therefore, to focus on these and not only on the quality of the interaction with the robot (and on the modes of critical or literal help).
In order to answer RQ1, we ran an independent samples t-test. From the parametric analysis of the answers to question Q5, reported in Table 3, we observed that users who received a tour recommendation consistent with the initial choice of their preferred artistic period (literal help) show, on average, a higher level of satisfaction than users to whom the robot proposed a tour referring to an artistic period different from the one initially chosen. This shows how critical help implies the risk of leaving the user, at least partially, dissatisfied because the robot expressly violated their requests. However, although the difference between the two group means is significant (D = 1.36), the mean satisfaction value for critical help (M = 3) shows that this type of help does not produce a low level of satisfaction, especially if we consider that the robot provided no justification for contradicting the user’s requests. Indeed, value 3 on the scale used for the survey corresponds to a medium level of satisfaction. We recall that the heuristic for tour selection is designed so that, if the robot does not find a tour that matches the artistic period chosen by the user, it tries to recommend a tour corresponding to an artistic period belonging to the same category as the one selected by the user (e.g., Impressionism, Surrealism, and Romanticism all belong to modern art).
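The figures in Table 3 can be reproduced directly from the Q5 columns of Table 1 and Table 2. The short script below is a verification sketch using SciPy (not the authors’ analysis code); under the standard equal-variance assumption of an independent samples t-test, it recovers the same means, standard deviations, and a two-tailed p-value of about 0.0103.

    import statistics
    from scipy import stats

    # Q5 (satisfaction) answers taken from Table 1 (critical help) and Table 2 (literal help).
    critical = [5, 4, 3, 1, 4, 3, 1, 4, 3, 4, 1, 3, 1, 4, 4]
    literal = [5, 5, 5, 4, 3, 4, 2, 5, 5, 5, 5]

    print(statistics.mean(literal), statistics.stdev(literal))     # ≈ 4.36, 1.03
    print(statistics.mean(critical), statistics.stdev(critical))   # ≈ 3.00, 1.36

    t, p = stats.ttest_ind(literal, critical, equal_var=True)      # Student's t-test, two-tailed
    print(round(t, 2), round(p, 4))                                # ≈ 2.78, 0.0103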
Much more interesting is the analysis regarding the critical help. To answer RQ2, we focus only on the group of users who have received critical help (Table 1).
Figure 5 shows how, among the 15 participants who received critical help, only 4 evaluated the tour recommended by the robot as unsatisfactory, while 4 users evaluated it with a medium satisfaction value, 6 evaluated it as satisfying, and 1 evaluated it as strongly satisfying. In total, 73.3% of the participants who received critical help (11 of 15) evaluated it with a medium or higher level of satisfaction. This result is particularly relevant if we consider that the visitors had no prior notice of the possibility that their request could be changed by modifying the artistic period they had chosen. As we have seen, this change is justified by the museum curators’ intent to offer visits to highly relevant artistic periods (given that, considering the collection owned by the museum, the artistic period offered has greater relevance than the one chosen by the visitor) and thus to favor the goals of the user. However, this information is not communicated to the visitor and is difficult for them to deduce directly. Nevertheless, the users’ satisfaction is particularly high. The surprise effect (encoded by the answers to Q4) confirms that the robot’s choice was unexpected in the face of a different explicit request. If this change had been explained, the number of those who gave a negative judgment of the robot’s suggestion would probably have been even lower. Surely, a lot also depends on the collections owned by the museum and their value or presentation, as well as on the artistic flair of the user. In any case, the awareness that the robot made a choice closer to the user’s artistic taste would certainly play a positive role in the final satisfaction of the user.

Experiment Limitations

Here we discuss some limitations of this pilot study. First of all, the number of users considered in the pilot study is low, and this can be a limitation. In any case, despite the low number of users, the main results show statistical significance. Given the statistical significance of some of the results obtained, in future work we will consider a larger sample than the one used in this pilot study. A further limitation is that we did not consider the artistic habits and expertise of the users involved in the experiment. We have not investigated this variable, nor have we made the robot investigate it; this can be considered a confounding variable that introduces a bias into our pilot study. Another possible bias is that we did not consider the participants’ prior interaction with robots, their comfort with robots, or their willingness to interact with and accept the introduction of robots into society. These variables can have an impact on how participants perceive the robots they interact with; such influences are often hard to control and can be a source of contamination that affects the results. We tried to mitigate this bias by constructing a questionnaire made up of multiple questions, the first of which are not directly needed to investigate the impact of the robot’s behavior on user satisfaction. Indeed, as mentioned in Section 5, questions Q1, Q2, and Q3 are not deeply analyzed; they were asked to contextualize the user and to ensure the user could focus on specific aspects before answering questions Q4 and Q5. In this way, we try to draw the user’s attention to the contents of the tour and, therefore, to focus on these and not only on the quality of the interaction with the robot. Because of the preliminary nature of the study, we built the questionnaire with ad hoc questions that do not refer to standardized tools. This can limit the comparability of the results with other similar works. With this preliminary work, we wanted to investigate the impact of a particular type of help that a robot can provide to a user, one that can lead to a substantial change in the user’s request with the objective of taking into account needs that the user themselves cannot, or has not been able to, assess. The results show some significance, and we will try to minimize the biases by following more standard methodological approaches [30,31,32].

6. Conclusions

In this paper, we present a computational cognitive model that provides an artificial agent with the capability of personalizing a museum tour with respect to the goals and interests of the users who intend to visit the museum. The model does not only consider the mental states of the user related to their artistic interests, but it also takes into account the constraints and goals of the curators who designed the exhibitions hosted by the museum. In this way, the artificial agent assumes the role of a mediator between the user and the curators, with the goal of offering an experience that is as satisfying as possible for the user. The negotiation process between the user’s mental states and the constraints/goals of the exhibition curators can lead the artificial agent to suggest, in some cases, a museum tour that is very close to the user’s artistic interests (literal help); in other cases, the agent can suggest a tour that diverges from the user’s more explicit interests but that still tries to satisfy the interests/goals that, although not explicitly declared, may be attributable to them (critical help). This form of help, based on the consolidated theory of adjustable social autonomy [24], has the main goal of keeping the level of user satisfaction high, making a choice that is as suitable as possible for the user while also taking into account constraints that could otherwise determine low levels of user satisfaction. Naturally, changing the user’s request without negotiation involves the risk of user dissatisfaction. However, it remains useful to evaluate how an alternative choice by the robot, made to protect the user’s goals/interests, can be accepted by the user. We conducted a Human-Robot interaction pilot study with 26 participants in order to investigate the potential of the cognitive model. The participants interacted with the humanoid robot Nao, which played the role of a museum assistant in a virtual museum with the goal of providing a museum exhibition to the user. At the end of each interaction, the robot proposed a short survey to the user, with the aim of investigating different dimensions of their satisfaction with respect to the presented exhibition. The exploratory study has shown promising results. In fact, although literal help proved, on average, more satisfactory for users than critical help, in most cases where users received critical help they positively evaluated the museum tour recommended by the robot. This result is particularly relevant given that users did not know the reasons that led to a choice different from the one they expected. Despite this, even though they were surprised by the recommended tour, they maintained high levels of satisfaction after the visit to the exhibition.

7. Future Works

First of all, our goal is to follow up on this pilot study in order to systematize the preliminary results obtained and to give more solid grounding to the research questions we investigated. Another relevant line of future work is to focus on explainability. In particular, we want to design further experiments in order to evaluate different dimensions of user satisfaction when the robot provides an explanation of the reasons that led it to recommend a specific tour to the user. We are convinced that explaining the reasons that led the robot, for example, to suggest a museum tour different from the one the user expected has a decisive impact on the user’s acceptance of critical help, which tries to satisfy the results requested by the user while adapting the request to a context that may be unfavorable compared to the initial request. Finally, we intend to extend the computational model by integrating other levels of help, as provided by the theory of delegation and adoption, and to test their impact through other HRI experiments in real cultural heritage scenarios.

Author Contributions

Conceptualization, methodology, validation, formal analysis, investigation, resources, data curation, writing—review and editing: F.C. and R.F.; software, writing—original draft preparation, visualization: F.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank Galleria Borghese, Rome, for having granted the publication of a photograph of the artwork il ratto di Proserpina (photo Luciano Romano).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Isaac, A.; Haslhofer, B. Europeana linked open data–data. europeana. eu. Semant. Web 2013, 4, 291–297. [Google Scholar] [CrossRef]
  2. Carriero, V.A.; Gangemi, A.; Mancinelli, M.L.; Marinucci, L.; Nuzzolese, A.G.; Presutti, V.; Veninata, C. ArCo: The Italian cultural heritage knowledge graph. In Proceedings of the International Semantic Web Conference, Auckland, New Zealand, 26–30 October 2019; pp. 36–52. [Google Scholar]
  3. Dimitropoulos, K.; Manitsaris, S.; Tsalakanidou, F.; Nikolopoulos, S.; Denby, B.; Al Kork, S.; Crevier-Buchman, L.; Pillot-Loiseau, C.; Adda-Decker, M.; Dupont, S.; et al. Capturing the intangible an introduction to the i-Treasures project. In Proceedings of the 2014 International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 5–8 January 2014; Volume 2, pp. 773–781. [Google Scholar]
  4. Doulamis, A.D.; Voulodimos, A.; Doulamis, N.D.; Soile, S.; Lampropoulos, A. Transforming Intangible Folkloric Performing Arts into Tangible Choreographic Digital Objects: The Terpsichore Approach. In Proceedings of the VISIGRAPP (5: VISAPP), Porto, Portugal, 27 February–1 March 2017; pp. 451–460. [Google Scholar]
  5. Fiorucci, M.; Khoroshiltseva, M.; Pontil, M.; Traviglia, A.; Del Bue, A.; James, S. Machine learning for cultural heritage: A survey. Pattern Recognit. Lett. 2020, 133, 102–108. [Google Scholar] [CrossRef]
  6. Sansonetti, G.; Gasparetti, F.; Micarelli, A.; Cena, F.; Gena, C. Enhancing cultural recommendations through social and linked open data. User Model. User-Adapt. Interact. 2019, 29, 121–159. [Google Scholar] [CrossRef]
  7. Bekele, M.K.; Pierdicca, R.; Frontoni, E.; Malinverni, E.S.; Gain, J. A survey of augmented, virtual, and mixed reality for cultural heritage. J. Comput. Cult. Herit. (JOCCH) 2018, 11, 1–36. [Google Scholar] [CrossRef]
  8. Trunfio, M.; Lucia, M.D.; Campana, S.; Magnelli, A. Innovating the cultural heritage museum service model through virtual reality and augmented reality: The effects on the overall visitor experience and satisfaction. J. Herit. Tour. 2022, 17, 1–19. [Google Scholar] [CrossRef]
  9. Machidon, O.M.; Duguleana, M.; Carrozzino, M. Virtual humans in cultural heritage ICT applications: A review. J. Cult. Herit. 2018, 33, 249–260. [Google Scholar] [CrossRef]
  10. Burgard, W.; Cremers, A.B.; Fox, D.; Hähnel, D.; Lakemeyer, G.; Schulz, D.; Steiner, W.; Thrun, S. Experiences with an interactive museum tour-guide robot. Artif. Intell. 1999, 114, 3–55. [Google Scholar] [CrossRef]
  11. Thrun, S.; Bennewitz, M.; Burgard, W.; Cremers, A.B.; Dellaert, F.; Fox, D.; Hahnel, D.; Rosenberg, C.; Roy, N.; Schulte, J.; et al. MINERVA: A second-generation museum tour-guide robot. In Proceedings of the 1999 IEEE International Conference on Robotics and Automation (Cat. No. 99CH36288C), Detroit, MI, USA, 10–15 May 1999; Volume 3. [Google Scholar]
  12. Nieuwenhuisen, M.; Behnke, S. Human-like interaction skills for the mobile communication robot robotinho. Int. J. Soc. Robot. 2013, 5, 549–561. [Google Scholar] [CrossRef]
  13. Chella, A.; Liotta, M.; Macaluso, I. CiceRobot: A cognitive robot for interactive museum tours. Ind. Robot. Int. J. 2007, 34, 503–511. [Google Scholar] [CrossRef]
  14. Gehle, R.; Pitsch, K.; Dankert, T.; Wrede, S. How to open an interaction between robot and museum visitor? Strategies to establish a focused encounter in HRI. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 187–195. [Google Scholar]
  15. Willeke, T.; Kunz, C.; Nourbakhsh, I.R. The History of the Mobot Museum Robot Series: An Evolutionary Study. In Proceedings of the FLAIRS Conference, Key West, FL, USA, 21–23 May 2001; pp. 514–518. [Google Scholar]
  16. Vásquez, B.P.E.A.; Matía, F. A tour-guide robot: Moving towards interaction with humans. Eng. Appl. Artif. Intell. 2020, 88, 103356. [Google Scholar] [CrossRef]
  17. Lee, M.K.; Forlizzi, J.; Kiesler, S.; Rybski, P.; Antanitis, J.; Savetsila, S. Personalization in HRI: A longitudinal field experiment. In Proceedings of the 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Boston, MA, USA, 5–8 March 2012; pp. 319–326. [Google Scholar]
  18. Iio, T.; Satake, S.; Kanda, T.; Hayashi, K.; Ferreri, F.; Hagita, N. Human-like guide robot that proactively explains exhibits. Int. J. Soc. Robot. 2020, 12, 549–566. [Google Scholar] [CrossRef]
  19. Ardissono, L.; Kuflik, T.; Petrelli, D. Personalization in cultural heritage: The road travelled and the one ahead. User Model. User-Adapt. Interact. 2012, 22, 73–99. [Google Scholar] [CrossRef]
  20. Robaczewski, A.; Bouchard, J.; Bouchard, K.; Gaboury, S. Socially assistive robots: The specific case of the NAO. Int. J. Soc. Robot. 2021, 13, 795–831. [Google Scholar] [CrossRef]
  21. Boissier, O.; Bordini, R.H.; Hübner, J.F.; Ricci, A.; Santi, A. Multi-agent oriented programming with JaCaMo. Sci. Comput. Program. 2013, 78, 747–761. [Google Scholar] [CrossRef]
  22. Castelfranchi, C.; Falcone, R. Towards a theory of delegation for agent-based systems. Robot. Auton. Syst. 1998, 24, 141–157. [Google Scholar] [CrossRef]
  23. Chella, A.; Lanza, F.; Pipitone, A.; Seidita, V. Knowledge acquisition through introspection in human-robot cooperation. Biol. Inspir. Cogn. Archit. 2018, 25, 1–7. [Google Scholar] [CrossRef]
  24. Falcone, R.; Castelfranchi, C. The human in the loop of a delegated agent: The theory of adjustable social autonomy. IEEE Trans. Syst. Man, Cybern.-Part A Syst. Hum. 2001, 31, 406–418. [Google Scholar] [CrossRef]
  25. Scassellati, B.M. Foundations for a Theory of Mind for a Humanoid Robot. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2001. [Google Scholar]
  26. Rao, A.S.; Georgeff, M.P. BDI agents: From theory to practice. In Proceedings of the ICMAS, San Francisco, CA, USA, 12–14 June 1995; Volume 95, pp. 312–319. [Google Scholar]
  27. Wooldridge, M.; Jennings, N.R. Agent theories, architectures, and languages: A survey. In Proceedings of the International Workshop on Agent Theories, Architectures, and Languages, Amsterdam, The Netherlands, 8–9 August 1994; pp. 1–39. [Google Scholar]
  28. Bratman, M. Intention, Plans, and Practical Reason; Harvard University Press: Cambridge, MA, USA, 1987; Volume 10. [Google Scholar]
  29. Argan, G.C.; Oliva, A.B. L’arte Moderna; Sansoni: Florence, Italy, 1999. [Google Scholar]
  30. Innes, J.M.; Morrison, B.W. Experimental studies of Human-Robot interaction: Threats to valid interpretation from methodological constraints associated with experimental manipulations. Int. J. Soc. Robot. 2021, 13, 765–773. [Google Scholar] [CrossRef]
  31. Oliveira, R.; Arriaga, P.; Paiva, A. Human-robot interaction in groups: Methodological and research practices. Multimodal Technol. Interact. 2021, 5, 59. [Google Scholar] [CrossRef]
  32. Hoffman, G.; Zhao, X. A primer for conducting experiments in Human-Robot interaction. ACM Trans. Hum.-Robot Interact. (THRI) 2020, 10, 1–31. [Google Scholar] [CrossRef]
Figure 1. Perception-Reasoning-Action (PRA) cycle of the computational cognitive model.
Figure 2. The virtual museum is organized as follows: on the left are listed the artworks that can be visited during the tour, while on the right there is the map of the tour. The red button over the map allows the closing of the tour. Please notice that the numbers on the map indicate the rooms of the museum (room 1, room 2), while the letters before each artwork’s title are used to sort the list of artworks.
Figure 3. Example of an artwork. On the left there is a copy of the artwork; on the right, the artwork’s description. If the level of accuracy is high, the user can read, in addition to the basic characteristics (Title, Author, Date, Technique, Location), a more detailed description. The description can be read by the robot as well as by the user. If the level is low, the bottom-right description is excluded. (Image credits: Galleria Borghese/photo Luciano Romano).
Figure 4. History of Art categorization based on Giulio Carlo Argan’s work.
Figure 5. The bar plot reports the level of user satisfaction investigated through Q5 in the case of robot critical help.
Table 1. This table reports the answers to the questions in the survey proposed by the robot after it provides critical help to the user. In these cases, the robot recommends a tour slightly different from the artistic period the user indicated as preferred in the history of art.

User | Preferred Artistic Period | User Accuracy | Recommended Tour | Tour Accuracy | Q1 | Q2 | Q3 | Q4 (Surprise) | Q5 (Satisfaction)
1 | Baroque | Medium | Caravaggio | High | 4 | 5 | 4 | 5 | 5
3 | Baroque | High | Caravaggio | High | 4 | 5 | 5 | 4 | 4
4 | Impressionism | Medium | Romanticism | Medium | 5 | 4 | 4 | 5 | 3
4 | Cubism | High | Espressionism | Medium | 3 | 2 | 3 | 5 | 1
5 | 700 Sculpture | High | 700 Painting | High | 5 | 5 | 4 | 3 | 4
8 | Cubism | High | Neoclassicism | High | 5 | 4 | 4 | 4 | 3
10 | Impressionism | High | Espressionism | Low | 4 | 1 | 3 | 5 | 1
12 | Impressionism | High | Surrealism | Medium | 3 | 3 | 4 | 3 | 4
15 | Art Nouveau | Medium | Romanticism | Medium | 5 | 5 | 5 | 4 | 3
17 | Art Nouveau | Medium | Neoclassicism | High | 5 | 5 | 4 | 3 | 4
20 | Futurism | Medium | Romanticism | Low | 4 | 2 | 4 | 5 | 1
21 | Cubism | Medium | Surrealism | Medium | 5 | 4 | 3 | 3 | 3
22 | Baroque | High | 400 Painting | High | 2 | 5 | 3 | 4 | 1
23 | Romanticism | Medium | Simbolism | High | 1 | 5 | 4 | 3 | 4
24 | Cubism | Medium | Surrealism | Medium | 5 | 3 | 5 | 5 | 4
Table 2. This table reports the answers to the questions in the survey proposed by the robot after it provides literal help to the user. In these cases, the robot recommends a tour corresponding to the artistic period the user indicated as preferred in the history of art.

User | Preferred Artistic Period | User Accuracy | Recommended Tour | Tour Accuracy | Q1 | Q2 | Q3 | Q4 (Surprise) | Q5 (Satisfaction)
2 | 500 Italian Painting | High | 500 Italian Painting | High | 4 | 5 | 4 | 1 | 5
5 | 500 Italian Painting | High | 500 Italian Painting | High | 5 | 5 | 5 | 2 | 5
6 | Greek Art | Medium | Greek Art | Medium | 5 | 5 | 5 | 1 | 5
7 | Gothic | Medium | Gothic | Medium | 5 | 5 | 5 | 1 | 4
9 | 500 Italian Painting | Medium | 500 Italian Painting | High | 5 | 4 | 4 | 2 | 3
11 | Caravaggio | Low | Caravaggio | High | 5 | 4 | 4 | 1 | 4
13 | Gothic | Low | Gothic | Low | 1 | 1 | 1 | 1 | 2
14 | Contemporary Art | Medium | Contemporary Art | Medium | 5 | 3 | 3 | 1 | 5
16 | 700 Painting | High | 700 Painting | Medium | 4 | 4 | 5 | 2 | 5
18 | 500 Italian Painting | High | 500 Italian Painting | High | 5 | 3 | 4 | 1 | 5
19 | Contemporary Art | High | Contemporary Art | Low | 5 | 4 | 4 | 1 | 5
Table 3. Independent t-test conducted in order to answer RQ1. Please notice that there is a significant difference between the means of the two groups: the p-value is p = 0.0103.

Group | Literal Help | Critical Help
Mean | 4.36 | 3.00
SD | 1.03 | 1.36
N | 11 | 15