Article

Comparison of Philosophical Dialogue with a Robot and with a Human

Yurina Someya and Takamasa Iio *
1 College of Humanities, University of Tsukuba, Tsukuba 305-8571, Japan
2 Faculty of Culture and Information Science, Doshisha University, Kyoto 610-0394, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(3), 1237; https://doi.org/10.3390/app12031237
Submission received: 18 December 2021 / Revised: 14 January 2022 / Accepted: 19 January 2022 / Published: 25 January 2022
(This article belongs to the Special Issue Social Robotics: Theory, Methods and Applications)

Abstract

Philosophical dialogue is an effective way to deepen one’s thoughts, but it is not easy to practice because humans have emotions. We proposed the use of a robot for practicing philosophical dialogue and experimentally investigated how philosophical dialogue with a robot differs from philosophical dialogue with a human. The results of the experiment showed that (1) participants talking to a human spent more time answering than those talking to a robot; (2) the increase in answering time came from an increase in speaking time and was not strongly influenced by reaction latency or pause time; (3) the reason for the increase in speaking time was that some participants who talked to a human tended to choose their words so as not to make the interlocutor uncomfortable and rephrased their thoughts so that they were easier for the interlocutor to understand, whereas some participants who talked to a robot might have thought that the robot would not be concerned even if they gave a brief answer; and finally (4) there seems to be no significant difference in the depth of thought between participants talking to a human and participants talking to a robot. These results suggest the effectiveness of using robots for philosophical dialogue, in particular for people who feel nervous about talking to others.

1. Introduction

Philosophical dialogue is one of the methods that help us deepen our thoughts through talking with others [1]. Its origin dates back to the question-and-answer method of Socrates, the ancient Greek philosopher [2]. In ancient Greece, dialogue was the only way to do philosophy. Since then, philosophical dialogue has continued to be practiced and has taken on modern forms. For example, project-based learning (PBL) is an example of philosophical dialogue in education, and it is considered to achieve higher standards than traditional learning styles [3]. Philosophical counseling and open dialogue emerged in Europe in the 1980s and have since spread rapidly to major countries around the world [4]. In 1992, the philosopher Sautet held a philosophical dialogue event for the general public in a café on the Place de la Bastille [5]. This event was the beginning of the philosophical cafés that are now held all over the world.
Practicing philosophical dialogue is not easy for two reasons. First, people tend to be concerned about what others think of them. People basically want to look smart and do not want to make the atmosphere awkward, and these tendencies lead us to hide our true opinions. Especially in dialogues conducted within a community such as a class or an office, already established hierarchical relationships and social positions may influence the discussion process [6]. For the sake of a good relationship afterward, some people may accommodate the feelings of their interlocutor and reluctantly agree with them, even if they hold a different opinion. We cannot regard such a situation as pure philosophical dialogue. Second, some people get emotional in philosophical dialogue. Emotional statements lacking politeness encourage a hostile attitude and can prevent the dialogue from continuing smoothly; in the worst case, the dialogue could be terminated there and then. This kind of dialogue that generates hate is often seen not only in anonymous Internet forums but also in the real world. In short, because we are human beings with emotions, it is often difficult for us to practice philosophical dialogue in its pure form.
To support the practice of purer philosophical dialogue, we propose using robots, which are artificial beings, as interlocutors. Although it is still unclear how people behave and feel in philosophical dialogue with a robot, we suppose that using a robot can have two advantages. First, people do not need to be concerned with what the robot thinks of them: if people disagree with the robot’s opinions, the robot does not show a disgusted look, does not come to dislike them, and does not spread a bad reputation about them. Second, even if people behave aggressively, the robot thinks nothing of that behavior and can continue the dialogue because it does not feel anger. According to Uchida et al. [7], people tended to self-disclose negative content more readily to a robot than to a human interlocutor, which suggests that people hesitate less to have discussions with a robot.
In this study, as a first step toward finding out whether our proposal is meaningful, we explore the characteristics of philosophical dialogue with a robot by comparing it with philosophical dialogue with a human. Through an experiment, we analyzed participants’ behavior during the dialogue and their free descriptions written before and after the dialogue.

2. Robots versus Other Artifacts

In this study, we use a robot as the interlocutor of philosophical dialogue. This is because robots have properties that make them more suitable for dialogue than other artifacts.
First, robots have a body structure similar to humans. When people interact with another person, they look at the other’s face, body, and surrounding areas at certain ratios [8]. Robots with a human-like body structure can naturally encourage people to behave in this way, whereas artifacts without a human-like body structure (e.g., cell phones and tablets) are less likely to do so. Compared with cell phones and tablets, robots are therefore expected to give people the sense that they are easier to interact with.
Second, robots can control their bodies. According to studies on human body movements, people produce beat gestures, moving their hands in front of their bodies while speaking [9,10], and they also move their bodies slightly during silence (e.g., breathing movements). A robot that can control its body can implement beat gestures and micro-movements during a conversation, which allows it to naturally express that it is speaking to, and listening to, the other person. Therefore, we expected robots to interact with humans more smoothly than toys or mannequins whose bodies cannot be controlled.
Third, robots are embodied in physical space. Some human–robot interaction studies have suggested that a physical robot can provide interactions that humans prefer over those with a virtual agent [11,12,13]. For example, Kidd and Breazeal showed that humans are more likely to perceive a physical robot as more credible, informative, and enjoyable to interact with than a virtual agent [11]. Shinozawa et al. reported that, when performing the same task in physical space, people tended to accept the recommendation of a physical robot sharing that space more than that of a virtual agent on a computer screen [12]. With this background in mind, we decided to conduct our experiment using a physical robot.
Note that the above discussion does not mean that other artifacts cannot be used for philosophical dialogue. Cell phones, toys without physical movements, and virtual agents may be capable of philosophical dialogue. However, as mentioned above, we judged that it is robots that hold the greatest promise for engaging humans in dialogue.

3. Materials and Methods

3.1. Participants

A total of 36 participants (18 males and 18 females) between the ages of 20 and 39 (average 29.9 years) took part in the study. We recruited the participants through a staffing agency. Their native language was Japanese, and the experiment was conducted in Japanese. We targeted young adults because, in the 18–30 age group, 48% did not pay attention to daily news and 23% had little interest in the news [14]. This generation is also said to be poor at interpersonal communication because they have grown up with social networking on the Internet [15]. We believed that there was social value in engaging such people in a philosophical dialogue about social issues.
We asked the participants to read and sign a consent form. The form explained the outline of the research; the handling of the participants’ personal information, the video and audio recorded in the experiment, and the questionnaire response data; the measures to prevent the spread of the coronavirus; and the fact that participation in the experiment was voluntary. We paid the participants for their participation, including transportation costs, through the staffing agency. The consent procedure was approved by the ethics committee of the University of Tsukuba (approval number: 2020R442).
Before the experiment, we asked the participants to complete the following psychological scales: the Negative Attitudes toward Robots scale [16] and the Multidimensional Empathy Scale [17]. We used these scores to divide the participants into two groups (described in the following section).

3.2. Conditions

This experiment adopted a between-participants design. We divided the participants into two groups, (1) a human interlocutor group and (2) a robot interlocutor group, and each group experienced a philosophical dialogue. We assigned the participants so that the ratio of males to females and the scores on the Negative Attitudes toward Robots and Multidimensional Empathy scales were as balanced as possible between the groups, because negative feelings toward robots and the level of empathy may affect the experimental results.

3.2.1. Human Interlocutor Group

The participants in the human interlocutor group interacted one-on-one with an experimental collaborator (Figure 1a). The collaborator proceeded with the dialogue by speaking the lines of a scenario prepared in advance (described in Section 3.4.3) and did not say anything other than the sentences in the scenario.

3.2.2. Robot Interlocutor Group

The participants in the robot interlocutor group interacted one-on-one with a robot (Figure 1b) that the collaborator controlled remotely. The robot spoke the lines of the same scenario as in the human interlocutor group. We describe the details of the robot in Section 3.6.

3.3. Procedure

Figure 2 illustrates the experimental procedure. First, the participants watched a two-minute news video about a serious social issue. After watching the video, they wrote their impressions and thoughts on a form for 10 min. Next, they had a dialogue with a human or a robot about the news. After the dialogue, they spent another 10 min writing their impressions and thoughts about the news in light of the discussion and describing any hesitation they felt in the discussion with their interlocutor.

3.4. Materials

3.4.1. Video Stimuli

The news report that the participants watched was about the murder of George Floyd by a white police officer in 2020 and the BLM (Black Lives Matter) movement that resulted from it. We chose the BLM video because it addresses a sensitive issue. People being killed because of racism is a significant issue about which we humans should think. On the other hand, since attitudes toward political and racial issues differ from person to person, such issues are often not easy to talk about with others.

3.4.2. Free Description Form

Two free description forms were prepared for before and after the dialogue. The pre-description form had only one item: “Please feel free to describe what you felt or thought after watching the video.” The post-description form had two items: (1) “How did you feel after this dialogue about the news? Please feel free to describe how it has influenced your own thinking, changed your view of the news, etc.” (2) “Did you feel any hesitation, difficulty or other impression in talking with the interlocutor? Please feel free to describe your thoughts”.

3.4.3. Dialogue Scenario

We created the following dialogue scenario for this experiment. In the scenario, person A is the interlocutor (human or robot) and person B is the participant.
  • A: Do you think that racism should not exist?
  • B: (Answer)
  • A: Uh-huh. For example, let us suppose that you are born as a human again in the next life. However, you don’t know what country you will be born in, your race, or your abilities yet. At this time, what rules do you think should be in the world?
  • B: (Answer)
  • A: I see. By the way, in the latter half of the video, many people were asserting their own opinions. What would you do if you disagreed with some of the rules that everyone decided?
  • B: (Answer)
  • A: I understand. Then, how can we decide what the proper rules are for everyone?
  • B: (Answer)
  • A: This is a difficult question. I think we need to look for other ways than talking together.
There are two points in this scenario.
First, the interlocutor took the initiative in the dialogue and controlled the timing of the participant’s speech. Specifically, the interlocutor asked the participant questions and let the participant express an opinion. Whatever the opinion was, the interlocutor accepted it with a response such as "Uh-huh" or "I see" and then moved on to the next question related to the first one. Such an approach can make a dialogue relatively smooth even without adapting to the participant’s speech [18,19]. Although natural response generation technology is developing rapidly, it is still hard to generate natural and consistent responses tailored to individual positions and opinions. Therefore, we designed the scenario around a dialogue that a robot with current technology can realize.
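The sketch below illustrates the kind of scripted, wizard-of-oz-style flow described above: the interlocutor asks a fixed question, accepts whatever answer is given with a fixed acknowledgement, and moves on. This is our own illustration in Python, not the authors’ implementation; speak and wait_for_answer are hypothetical placeholders for speech output and end-of-answer detection.

```python
# Minimal sketch (not the authors' implementation) of the scripted dialogue flow:
# ask a fixed question, wait for the participant to finish, acknowledge the answer
# regardless of its content, then move on to the next question.

SCENARIO = [
    ("Do you think that racism should not exist?", "Uh-huh."),
    ("Suppose you are born as a human again in the next life, but you do not yet "
     "know your country, race, or abilities. What rules should the world have?",
     "I see."),
    ("What would you do if you disagreed with some of the rules that everyone decided?",
     "I understand."),
    ("Then, how can we decide what the proper rules are for everyone?",
     "This is a difficult question. I think we need to look for other ways than "
     "talking together."),
]

def speak(text: str) -> None:
    print(f"A: {text}")          # placeholder for text-to-speech / the collaborator speaking

def wait_for_answer() -> str:
    return input("B: ")          # placeholder for detecting the end of the participant's answer

def run_scenario() -> None:
    for question, acknowledgement in SCENARIO:
        speak(question)
        wait_for_answer()        # the answer is accepted regardless of what is said
        speak(acknowledgement)   # fixed backchannel, then the next question follows

if __name__ == "__main__":
    run_scenario()
```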
Second, we incorporated philosophers’ ideas into the scenario to elicit more profound thoughts from the participants. For example, we drew on Rawls’ "veil of ignorance" [20] and Aristotle’s notion of the common good [21] in the third and ninth sentences, respectively.

3.5. Environment

The experiment took place in a laboratory (Figure 3). In the human interlocutor group, the collaborator and the participant sat facing each other to watch the video and interact (Figure 3a). In the robot interlocutor group, the collaborator was in a compartment in the laboratory and remotely controlled the robot (Figure 3b); the participant could not see the collaborator and sat at the table facing the robot.

3.6. Robot

This experiment used a Sota robot (Figure 4), a table-top communication robot that enables interaction through words and gestures. The robot has a head, a body, and two arms. The head has two eyes and a mouth that can be lit by LEDs but do not move mechanically. The hands have no fingers, and the body has no legs. The robot has an anthropomorphic design intended to give people a sense of familiarity. In the experiment, the robot lit up its eyes and performed simple motions while speaking, expressing that it was active and had not stopped [22].

3.7. Measurements

We measured participants’ speech during the dialogue to analyze their unconscious reactions. In addition, we measured participants’ free descriptions before and after the dialogue to analyze their psychological states and impressions of the dialogue. The specific items of the measurements are described below.

3.7.1. Speech Analysis

We measured the duration, reaction latency, speaking time, and pause time of each participant’s turn. We defined a turn as the interval from when the interlocutor asks a question until the participant finishes answering it, reaction latency as the interval from the start of a turn until the participant starts speaking, speaking time as the time during which the participant is speaking, and a pause as a voiceless interval of 300 ms or more. Figure 5 shows the relationship between these elements, which satisfy
T = R + S + P
where T is the turn duration, R is the reaction latency, S is the total speaking time, and P is the total pause time.
In addition, we counted the number of pauses in each turn and compared the means across conditions. The number of pauses is related to the fluency of speech: the more pauses and fillers there are, the more the speaker stuttered. A difference in these counts between the conditions may indicate difficulty in speaking to the other party [23].
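As an illustration of these definitions, the following sketch (our own, not the authors’ analysis code) computes T, R, S, P, and the pause count from a turn’s start and end times and the participant’s voice-activity segments, assuming all times are given in milliseconds.

```python
# Minimal sketch of the turn decomposition T = R + S + P described above.
# `speech_segments` are the participant's voiced intervals (start, end) within the turn.

PAUSE_THRESHOLD_MS = 300  # a voiceless interval of 300 ms or more counts as a pause

def analyze_turn(turn_start, turn_end, speech_segments):
    """Return turn duration T, reaction latency R, speaking time S, pause time P,
    and pause count. For simplicity, all non-speech time after the first word is
    folded into P (an assumption of this sketch)."""
    T = turn_end - turn_start
    R = speech_segments[0][0] - turn_start             # silence before the first word
    S = sum(end - start for start, end in speech_segments)
    P = T - R - S                                       # remaining time counts as pauses
    pause_count = sum(
        1
        for (_, prev_end), (next_start, _) in zip(speech_segments, speech_segments[1:])
        if next_start - prev_end >= PAUSE_THRESHOLD_MS
    )
    return {"turn": T, "latency": R, "speaking": S, "pause": P, "pauses": pause_count}

# Example: a 12 s turn with two speech segments separated by a 1.5 s pause.
print(analyze_turn(0, 12_000, [(1_200, 6_000), (7_500, 11_000)]))
```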

3.7.2. Free Description Analysis

We counted the number of characters in the pre- and post-descriptions, and we categorized participants’ impressions in terms of the feeling of hesitation in the dialogue using the post-descriptions. The number of characters can be regarded as one indicator of the depth of thought: if the number of characters in a post-description is significantly lower than in the corresponding pre-description, the participant may have obtained little insight through the dialogue.
For categorizing the impressions, we coded participants’ answers to the second question in each post-description as follows:
  • Hesitation: A description has phrases that indicate hesitation, such as “I felt nervous to talk with the listener” or “It was difficult to choose words due to the listener”.
  • No hesitation: A description has phrases that indicate no hesitation, such as “I did not feel hesitation” or “It was easy to talk with the listener”.
  • Both: A description has hesitation phrases and no-hesitation phrases.
  • Unknown: A description has no mention of hesitation.

4. Results

4.1. Speech Analysis

Figure 6 shows the data for turn duration, reaction latency, speaking time, pause time, and pause count for every participant. The black bars indicate the mean of each measurement. We analyzed the data using the Mann–Whitney test because we could not assume that some of the data followed a normal distribution. The descriptive statistics and the analysis results (the U statistic, p-value, and effect size Cohen’s d) are summarized in Table 1.
The Mann–Whitney test showed significant differences between the human interlocutor group and the robot interlocutor group in turn duration, speaking time, and pause count (alpha level 0.05): turn duration and speaking time were longer, and the pause count was higher, in the human interlocutor group than in the robot interlocutor group. In contrast, reaction latency and pause time showed no significant difference between the groups.
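For reference, a comparison of this kind can be reproduced with SciPy as sketched below: a two-sided Mann–Whitney U test plus Cohen’s d computed from the pooled standard deviation. The arrays are randomly generated placeholders, not the experimental data.

```python
# Hedged sketch (not the authors' code) of the between-group comparison in Table 1.
import numpy as np
from scipy.stats import mannwhitneyu

def compare_groups(human: np.ndarray, robot: np.ndarray):
    u, p = mannwhitneyu(human, robot, alternative="two-sided")
    # Cohen's d using the pooled standard deviation of the two samples
    pooled_sd = np.sqrt(
        ((len(human) - 1) * human.var(ddof=1) + (len(robot) - 1) * robot.var(ddof=1))
        / (len(human) + len(robot) - 2)
    )
    d = (human.mean() - robot.mean()) / pooled_sd
    return u, p, d

rng = np.random.default_rng(0)
human_turns = rng.normal(35_000, 14_000, 18)   # placeholder turn durations
robot_turns = rng.normal(24_000, 10_000, 18)
print(compare_groups(human_turns, robot_turns))
```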

4.2. Free Description Analysis

Figure 7 shows the numbers of characters in the pre- and post-descriptions. The mean number of characters in the pre-descriptions was 306 (SD = 69.6) in the human interlocutor group and 315 (SD = 104) in the robot interlocutor group. The mean number of characters in the post-descriptions was 304 (SD = 75.2) in the human interlocutor group and 296 (SD = 98.2) in the robot interlocutor group. A repeated-measures ANOVA showed no main effect within participants (F(1, 34) = 1.456, p = 0.236, η² = 0.004), no main effect between the groups (F(1, 34) = 4.85 × 10⁻⁵, p = 0.994, η² < 0.001), and no interaction (F(1, 34) = 0.890, p = 0.352, η² = 0.002). Therefore, we did not find any significant difference in the number of characters between conditions or between the pre- and post-descriptions.
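As a rough illustration of this analysis, the snippet below runs a mixed-design ANOVA (pre/post as the within-participant factor, interlocutor group as the between-participant factor) on placeholder character counts. It assumes the pingouin package and is a sketch of the analysis type only, not the authors’ code or data.

```python
# Hedged sketch of a mixed-design ANOVA on description length (placeholder data).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for pid in range(36):
    group = "human" if pid < 18 else "robot"
    for time in ("pre", "post"):
        rows.append({"participant": pid, "group": group, "time": time,
                     "chars": rng.normal(305, 85)})   # placeholder character counts
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="chars", within="time",
                     subject="participant", between="group")
print(aov[["Source", "F", "p-unc", "np2"]])
```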

Feeling of Hesitation

Table 2 shows the categorization of participants’ impressions of the dialogue regarding hesitation. The number of participants who mentioned that they felt hesitation was 6 in the human interlocutor group and 10 in the robot interlocutor group, and the number who mentioned that they did not feel hesitation was 7 in the human interlocutor group and 5 in the robot interlocutor group. More participants in the robot group than in the human group wrote that they hesitated in interacting with their interlocutor. However, the reasons for the hesitation were quite different between the two groups (see the typical comments in Table 2).

5. Discussion and Conclusions

5.1. Implications

In philosophical dialogue, participants talking to a human spend more time answering than participants talking to a robot. This follows directly from the result that the mean turn duration was longer in the human interlocutor group than in the robot interlocutor group.
The increase in the time participants spent answering came from an increase in the time they spent speaking. The time participants spend answering consists of reaction latency, speaking time, and pause time. Reaction latency and pause time were not significantly different between the human interlocutor group and the robot interlocutor group, but speaking time was significantly longer in the human interlocutor group than in the robot interlocutor group, and its effect size was also larger than those of reaction latency and pause time. These results indicate that the increase in speaking time contributed most to the increase in answering time.
The reasons participants talking to a human felt hesitation in the philosophical dialogue were quite different from the reasons participants talking to a robot felt so. Many participants in the human interlocutor group mentioned consideration for the human interlocutor’s thoughts and position as the cause of their hesitation. In contrast, most participants in the robot interlocutor group mentioned the unnaturalness of the robot’s appearance and behavior as the cause of their hesitation. Since none of the participants in the robot interlocutor group mentioned consideration for the robot’s thoughts or position, it seems that the participants felt little of the hesitation toward the robot that they would feel toward a human being.
The reason why the speaking time of participants talking to a human is longer than that of participants talking to a robot can be explained from two perspectives: some people are hesitant to talk to a human, and some people do not consider a robot as a partner for a deep dialogue.
First, participants in the human interlocutor group made more pauses during their turns than participants in the robot interlocutor group. The number of pauses is related to hesitation in word choice and can be regarded as one indicator of difficulty in speaking [24]. The three participants with the highest number of pauses were all in the human interlocutor group, and they commented in the free descriptions, “I had trouble deciding what words to use”, “I felt a little uncomfortable because the dialogue was not casual. I felt a little hesitation and difficulty in speaking”, and “I felt uncomfortable when I thought about the possibility of making others feel uncomfortable”. This suggests that some participants in the human interlocutor group chose their words so as not to make the interlocutor uncomfortable and rephrased their thoughts so that they were easier for the interlocutor to understand. We believe this is why the participants talking to a human had longer speaking times.
Second, whereas no participant in the human interlocutor group had a speaking time of less than 10 s, five participants in the robot interlocutor group did. Most of the utterances shorter than 10 s were brief claims such as “I think it is ***.” According to the free descriptions, most participants in the robot interlocutor group mentioned that the robot had no emotions. In other words, some of the participants may have thought that the robot would not be concerned even if they gave such a brief answer. We believe this is why participants talking to the robot had shorter speaking times.
We cannot say that there is any difference in the depth of thought between participants talking to a human and participants talking to a robot. We did not find any significant differences between the human interlocutor group and the robot interlocutor group regarding the number of characters of the post-dialogue free descriptions and the change of pre- and post-dialogue free descriptions. There was also little correlation between speaking time and the number of characters. It seems that longer speech does not lead to deeper thinking.
Based on the above considerations, we conclude that it is worthwhile for people to engage in philosophical dialogue with a robot. At least, as long as the content of the dialogue is the same, there is not much difference between talking to a human and talking to a robot in terms of deepening one’s thoughts. For people who feel nervous about talking to others, having a philosophical dialogue with a robot could be an effective way to deepen their thinking. Robots also have the advantage of being able to avoid deviations from the discussion by predetermining what the robot will say. Furthermore, even if there is no one to talk to, we can always practice philosophical dialogue with a robot. As facial design and speech recognition technologies become more advanced, robots will be able to interact more smoothly.

5.2. Applications

We believe philosophical dialogue with a robot can be applied to various fields, such as philosophical counseling and philosophical cafés. In particular, its application to counseling for social anxiety disorder (SAD) would be of value. Stein and Stein (2008) [25] describe social anxiety disorder as a condition in which a person feels embarrassed or nervous in interpersonal situations, which causes difficulties in communicating with others, performing various activities in public, and sometimes even being in public at all. Along with depression, it is attracting a great deal of attention around the world as a mental disorder: its prevalence is high, ranging from 3 to 13%, and it has been shown to be a major problem in social life. Social anxiety disorder is closely related to interpersonal phobia, which Alden and Taylor (2004) [26] describe as a type of neurosis in which a person feels unjustifiably strong anxiety and mental tension in the presence of other people and tries to withdraw as much as possible from interpersonal relationships, fearing that others will despise them or that they will make others feel uncomfortable. Philosophical dialogue with a robot would give such people an opportunity to deepen their own thoughts through dialogue.

5.3. Limitations and Future Work

Characteristics of participants such as age, gender, and education level may play an important role in their responses, but we did not analyze the relationships between these characteristics and the responses because such an analysis is beyond the scope of this paper. The purpose of this paper is to explore the differences in participants’ behavior in philosophical dialogues with humans and robots. Investigating what role participant characteristics play in philosophical dialogue with a robot is the next task based on the findings of this study.
The topic of the philosophical dialogue in this experiment was limited to BLM, which was not particularly familiar news to typical Japanese participants. The results may differ if the experiment were conducted in other countries such as the US.
In addition, our participants were all Japanese. According to Kamide and Arai [27], Americans perceived robots with a higher quality of human essence as more comfortable than Japanese did. Since philosophy is an effort to explore human nature, Americans may therefore be more willing than Japanese to engage in philosophical dialogue with a robot and, as a result, may respond to a robot interlocutor at the same length as to a human interlocutor. We believe these points are exciting directions for future work.
The appearance of the robot was also limited in this experiment. It remains possible that realistic androids or more deformed pet-like robots would lead to different interactions. Furthermore, if robots become everyday objects and people get used to them just as they are with smartphones today, the results may change. In this study, we believe that the anthropomorphism of the robot helped facilitate the way humans responded to the robot’s questions. According to a survey paper on anthropomorphism in human–robot interaction [28], many studies have pointed out that robot anthropomorphism facilitates human interaction. Even in philosophical dialogues, the anthropomorphic nature of the robot is likely to give humans the sense that they are interacting with an intelligent individual.
Relating to appearance, the robot’s size may have affected the impression of cuteness and brought about a sense of familiarity. In the experiment, some participants who talked with the robot commented that they felt less hesitation toward the robot because of its cute appearance. Since some studies [29,30] have shown that height and eye level can be intimidating to participants, we consider that the robot’s facial appearance and size contributed to such a feeling. However, to discuss the effects of anthropomorphism and size in detail, we need to conduct experiments similar to the present one using robots with different degrees of anthropomorphism and different sizes.
This paper did not analyze the complexity of the participants’ responses to the robot’s questions, even though it is one of the essential features in philosophical dialogue. This is because defining the complexity of the responses was difficult. For example, it is hard to quantify how complex or simple concept words like “freedom” and “race” are. Additionally, the sentence “It is important to solve problems through discussion with others” and the sentence “Democracy is important” mean similar concepts, although the lengths of the sentences are different. It is not easy to quantitatively determine which of these is more complex.
However, it is meaningful to mention the difference in the complexity of participants’ responses between the human and robot interlocutor groups, even if that mention is based on subjective, qualitative observation. From our subjective point of view, the difficulty of the words and concepts used by the participants appeared to be comparable between the two groups. Most participants in both groups used neither advanced terms learned in higher education nor overly simple words that might be used with children. Perhaps the reason the participants spoke with the same level of complexity as they would to the adult experimenters, even to a small robot reminiscent of a child, was that the robot spoke the same language as the adult experimenters. This point relates to the behavior design of robots used in philosophical dialogues and is worth investigating in the future.
As a prospect of this study, we should investigate the changes in human behavior caused by long-term interventions. Some works have investigated the effects of interventions using interactive devices; for example, Alhasan et al. [31] report on a long-term intervention (twice a week for eight weeks in total). It would be fascinating to investigate what changes occur in human thinking after practicing philosophical dialogue with a robot for several weeks.
Finally, as a challenging topic, we should begin to consider the significance of embodiment in philosophical dialogue. In the past few years, advances in artificial intelligence and humanoid robots have been made through the development of deep learning and a better understanding of the information processing mechanisms of the human brain. Although the robot in this study used a dialogue we designed, in the near future philosophical dialogues may be generated autonomously. However, human thoughts reflect not only symbolic knowledge but also the experience of interaction between one’s body and the environment. Therefore, to practice a genuinely philosophical dialogue with a robot, it might be necessary to realize a dialogue grounded in the robot’s physicality, including its somatosensory system [32]. We believe this is a challenging problem that approaches the essence of human philosophy.

Author Contributions

Conceptualization, Y.S. and T.I.; methodology, Y.S.; software, T.I.; validation, Y.S.; formal analysis, T.I.; investigation, Y.S.; resources, Y.S.; data curation, Y.S.; writing—original draft preparation, Y.S. and T.I.; writing—review and editing, Y.S. and T.I.; visualization, Y.S. and T.I.; supervision, T.I.; project administration, T.I.; funding acquisition, T.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by JST, PRESTO grant number JPMJPR1851 and JSPS KAKENHI grant number 19H05691.

Institutional Review Board Statement

The study was conducted according to the guidelines of the University of Tsukuba and approved by the Institutional Review Board of the University of Tsukuba (protocol code 2020R442 and date of approval 15 January 2020).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Rakoczy, M. Philosophical dialogue—Towards the cultural history of the genre. Ling. Posnan. 2017, 59, 79–93. [Google Scholar] [CrossRef] [Green Version]
  2. Benson, H.H. Essays on the Philosophy of Socrates; Oxford University Press: Oxford, UK, 1992. [Google Scholar]
  3. Overby, K. Student-Centered Learning. ESSAI 2008, 9, 154–196. [Google Scholar]
  4. Peter, B.R. Philosophical Counseling–Theory and Practice; Praeger: Westport, CT, USA; London, UK, 2001. [Google Scholar]
  5. Sautet, M. Un Café pour Socrate; Robert Laffont: Paris, France, 1995. [Google Scholar]
  6. Schultz, H.S. Unpleasant interactions. J. Int. Stud. 2020, 5, 95–104. [Google Scholar]
  7. Uchida, T.; Takahashi, H.; Ban, M.; Shimaya, J.; Minato, T.; Ogawa, K.; Yoshikawa, Y.; Ishiguro, H. Japanese Young Women did not discriminate between robots and humans as listeners for their self-disclosure-pilot study. Multimodal Technol. Interact. 2020, 4, 35. [Google Scholar] [CrossRef]
  8. Mutlu, B.; Kanda, T.; Forlizzi, J.; Hodgins, J.; Ishiguro, H. Conversational gaze mechanisms for human-like robots. ACM Trans. Interact. Intell. Syst. (TiiS) 2012, 1, 1–33. [Google Scholar] [CrossRef]
  9. McNeill, D. Hand and Mind: What Gestures Reveal about Thought; University of Chicago Press: Chicago, IL, USA, 1992. [Google Scholar]
  10. Biau, E.; Soto-Faraco, S. Beat gestures modulate auditory integration in speech perception. Brain Lang. 2013, 124, 143–152. [Google Scholar] [CrossRef] [Green Version]
  11. Kidd, C.D.; Breazeal, C. Effect of a robot on user perceptions. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566), Sendai, Japan, 28 September–2 October 2004; Volume 4, pp. 3559–3564. [Google Scholar]
  12. Shinozawa, K.; Naya, F.; Yamato, J.; Kogure, K. Differences in effect of robot and screen agent recommendations on human decision-making. Int. J. Hum.-Comput. Stud. 2005, 62, 267–279. [Google Scholar] [CrossRef] [Green Version]
  13. Fridin, M.; Belokopytov, M. Embodied robot versus virtual agent: Involvement of preschool children in motor task performance. Int. J. Hum.-Comput. Interact. 2014, 30, 459–469. [Google Scholar] [CrossRef]
  14. Patterson, T.E. Young People and News: A Report from the Joan Shorenstein Center on the Press, Politics and Public Policy, John F. Kennedy School of Government, Harvard University; Joan Shorenstein Center on the Press, Politics and Public Policy: Cambridge, MA, USA, 2007. [Google Scholar]
  15. Gentina, E.; Chen, R. Digital natives’ coping with loneliness: Facebook or face-to-face? Inf. Manag. 2019, 56, 103138. [Google Scholar] [CrossRef]
  16. Nomura, T.; Kanda, T.; Suzuki, T. Experimental investigation into influence of negative attitudes toward robots on human–robot interaction. AI Soc. 2006, 20, 138–150. [Google Scholar] [CrossRef]
  17. Reidenbach, R.E.; Robin, D.P. Toward the Development of a Multidimensional Scale for Improving Evaluations of Business Ethics. J. Bus. Ethics 1990, 9, 639–653. [Google Scholar] [CrossRef]
  18. Iio, T.; Yoshikawa, Y.; Chiba, M.; Asami, T.; Isoda, Y.; Ishiguro, H. Twin-robot dialogue system with robustness against speech recognition failure in human-robot dialogue with elderly people. Appl. Sci. 2020, 10, 1522. [Google Scholar] [CrossRef] [Green Version]
  19. Iio, T.; Yoshikawa, Y.; Ishiguro, H. Double-meaning agreements by two robots to conceal incoherent agreements to user’s opinions. Adv. Robot. 2021, 35, 1145–1155. [Google Scholar] [CrossRef]
  20. Rawls, J. A Theory of Justice; Harvard University Press: Cambridge, MA, USA, 2009. [Google Scholar]
  21. Aristotle; Sinclair, T.A.; Saunders, T.J. The Politics; Penguin Books: London, UK, 1981. [Google Scholar]
  22. Matsui, Y.; Kanoh, M.; Kato, S.; Nakamura, T.; Itoh, H. A Model for Generating Facial Expressions Using Virtual Emotion Based on Simple Recurrent Network. J. Adv. Comput. Intell. Intell. Inform. 2010, 14, 453–463. [Google Scholar] [CrossRef]
  23. Senft, G. Phatic communion. In Handbook of Pragmatics (Loose Leaf Installment); John Benjamins: Amsterdam, The Netherlands, 1996. [Google Scholar]
  24. Carlson, R.; Gustafson, K.; Strangert, E. Modelling hesitation for synthesis of spontaneous speech. In Proceedings of the Speech Prosody 2006, Dresden, Germany, 2–5 May 2006. [Google Scholar]
  25. Stein, M.B.; Stein, D.J. Social anxiety disorder. Lancet 2008, 371, 1115–1125. [Google Scholar] [CrossRef]
  26. Alden, L.E.; Taylor, C.T. Interpersonal processes in social phobia. Clin. Psychol. Rev. 2004, 24, 857–882. [Google Scholar] [CrossRef] [PubMed]
  27. Kamide, H.; Arai, T. Perceived comfortableness of anthropomorphized robots in US and Japan. Int. J. Soc. Robot. 2017, 9, 537–543. [Google Scholar] [CrossRef]
  28. Złotowski, J.; Proudfoot, D.; Yogeeswaran, K.; Bartneck, C. Anthropomorphism: Opportunities and challenges in human–robot interaction. Int. J. Soc. Robot. 2015, 7, 347–360. [Google Scholar] [CrossRef]
  29. Hiroi, Y.; Ito, A. Influence of the Height of a Robot on Comfortableness of Verbal Interaction. IAENG Int. J. Comput. Sci. 2016, 43, 447–455. [Google Scholar]
  30. Rae, I.; Takayama, L.; Mutlu, B. The influence of height in robot-mediated communication. In Proceedings of the 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 3–6 March 2013; pp. 1–8. [Google Scholar]
  31. Alhasan, H.S.; Wheeler, P.C.; Fong, D.T. Application of Interactive Video Games as Rehabilitation Tools to Improve Postural Control and Risk of Falls in Prefrail Older Adults. Cyborg Bionic Syst. 2021, 2021, 9841342. [Google Scholar] [CrossRef]
  32. Wang, L.; Ma, L.; Yang, J.; Wu, J. Human Somatosensory Processing and Artificial Somatosensation. Cyborg Bionic Syst. 2021, 2021, 9843259. [Google Scholar] [CrossRef]
Figure 1. Experimental scenes of the human interlocutor group (a) and the robot interlocutor group (b).
Figure 2. Experimental procedure.
Figure 3. Experimental environment of the human interlocutor group (a) and the robot interlocutor group (b).
Figure 4. Sota robot and its axis arrangement.
Figure 5. The visualization of the meaning of each measurement.
Figure 6. The raw data of measurements of non-verbal behavior in each participant’s turn.
Figure 7. The number of characters of pre- and post-descriptions.
Table 1. The summary of the descriptive statistics values and the analysis results of participants’ speeches. The symbol * means that the p-value is lower than 0.05.

Measure          | Group | N  | Mean   | SD     | U   | p       | Cohen’s d
Turn duration    | Human | 18 | 35,111 | 14,049 | 81  | 0.010 * | 0.906
                 | Robot | 18 | 24,022 | 10,108 |     |         |
Reaction latency | Human | 18 | 3243   | 2591   | 162 | 1.000   | 0.299
                 | Robot | 18 | 2607   | 1524   |     |         |
Speaking time    | Human | 18 | 22,389 | 10,650 | 89  | 0.016 * | 0.830
                 | Robot | 18 | 14,692 | 7667   |     |         |
Pause time       | Human | 18 | 10,599 | 6119   | 106 | 0.064   | 0.596
                 | Robot | 18 | 7322   | 4797   |     |         |
Pause count      | Human | 18 | 8.65   | 4.61   | 97  | 0.041 * | 0.834
                 | Robot | 18 | 5.52   | 2.62   |     |         |
Table 2. The results of categorizing participants’ descriptions about their hesitation in the dialogue. Typical comments are our aggregation of the participants’ opinions.

Category      | Group | N  | Typical Comments
Hesitation    | Human | 7  | In conversations about discrimination and politics, there is a diversity of ideas, and we need to be mindful of the other person’s thoughts when we speak. (5/7)
Hesitation    | Robot | 10 | No facial expressions, gestures, pauses before speech starts, eye contact, etc. (10/10); It was difficult to grasp the robot’s intent behind the questions and to predict the robot’s responses. (5/10); The robot spoke too bluntly. (2/10)
No hesitation | Human | 7  | The interlocutor had a listening attitude, was basically neutral, and did not give a critical response; furthermore, the interlocutor was not a person concerned. (2/7); Since the interlocutor was not an acquaintance, it was not necessary to be concerned about the relationship afterward. (1/7)
No hesitation | Robot | 5  | The robot’s pauses and backchannels were good. (2/5); It looks pretty. (1/5)
Both          | Human | 1  | It’s not often that I get a chance to talk to a stranger about something like this, so I was a bit nervous at first, wondering what I would be asked. (1/1); I didn’t find it difficult to talk, but I was worried about whether I could convey what I wanted to say to the human interlocutor. (1/1)
Both          | Robot | 3  | I didn’t feel hesitation talking to the robot, but I felt a sense of discomfort different from talking with a human being. (2/3)
Unknown       | Human | 3  | Whether I feel hesitation depends on the relationship with the interlocutor or the situation of the interaction. (3/3)
Unknown       | Robot | 0  |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
