Yōkobo: A Robot to Strengthen Links Amongst Users with Non-Verbal Behaviours
Abstract
1. Introduction
2. Contribution
3. Related Works
3.1. Creating Encounters and Links between Persons
3.2. Form Factor
3.3. Perception through Reduced Robotic Movement
4. Design and Implementation
4.1. Yōkobo’s Shape Design
4.2. Services and Associated Functions
4.3. Behaviour Design
- Humidity inside: when the value is high, Yōkobo displays characteristics of human sleepiness, hinting at slowness and body stretching.
- Humidity outside: Yōkobo performs a movement combining the body and apex to suggest a human sneeze.
- Temperature: depending on the temperature variation, it shakes the body and apex or moves slowly.
- CO2 and AP: these values increase or decrease the motor speed of Yōkobo's periodic movements, evoking the change in human breathing caused by a higher CO2 concentration or an increase in atmospheric pressure (a sketch of this mapping follows the list).
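As a minimal illustration of the CO2/AP modulation described above, the ambient readings could be mapped to a scaling factor on the periodic-motion speed. The sketch below is not the authors' implementation: the thresholds, ranges, and function names are assumptions made only to show the idea.

```python
# Hypothetical sketch: scale the speed of Yōkobo's periodic movement from
# ambient readings. Ranges and names are assumptions, not the robot's values.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def motor_speed_factor(co2_ppm, pressure_hpa,
                       co2_range=(400.0, 2000.0),
                       pressure_range=(980.0, 1040.0)):
    """Map CO2 concentration and atmospheric pressure to a speed factor.

    Higher CO2 or pressure -> faster periodic motion, evoking altered breathing.
    Returns a factor in [0.5, 1.5] applied to the nominal motor speed.
    """
    co2_norm = clamp((co2_ppm - co2_range[0]) / (co2_range[1] - co2_range[0]), 0.0, 1.0)
    ap_norm = clamp((pressure_hpa - pressure_range[0]) / (pressure_range[1] - pressure_range[0]), 0.0, 1.0)
    blended = 0.5 * (co2_norm + ap_norm)
    return 0.5 + blended  # 0.5x (calm) to 1.5x (agitated)

# Example: a stuffy room at high pressure speeds the motion up.
print(motor_speed_factor(co2_ppm=1500, pressure_hpa=1030))  # ~1.26
```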
4.4. Hardware
4.5. Software Control
1. Initialisation: ensures the correct system boot-up and coordination among the FSMs.
2. Idle: pause state while the first sensor data package is gathered.
3. Rest: the robot apex follows a pre-defined rest movement.
4. Wake-up: Yōkobo displays an animation selected according to the current house temperature.
5. State of the house: Yōkobo's motion is guided by hm; it can also perform pre-defined animations based on the two humidity sensor readings.
6. Go back to rest: analogous to the Wake-up state, except that the motion is selected according to the outdoor temperature.
7. MDL mimic: the MDL commands the MC to enact its mimic behaviour or to play back a previously recorded trace. If r has no value, the robot mimics the user's motion; otherwise it plays the trace, provided that the current RFID tag is not the same as the one that left the message. Any stored trace is played before a new recording starts, and a blue light signals the user that the trace is playing.
8. Record: still mimicking, but the person's movements are now saved for 10 s and the light turns green. (A sketch of these states as a finite state machine follows this list.)
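For readers who prefer code to prose, the eight MDL states above could be organised as a small finite state machine. The sketch below uses the state names from the list; the transition predicates and the context fields are assumptions for illustration only, not the authors' logic.

```python
# Hypothetical sketch of the main decision loop (MDL) as a finite state machine.
# State names follow the list above; transition conditions are assumed.
from enum import Enum, auto

class MDLState(Enum):
    INITIALISATION = auto()
    IDLE = auto()
    REST = auto()
    WAKE_UP = auto()
    STATE_OF_THE_HOUSE = auto()
    GO_BACK_TO_REST = auto()
    MDL_MIMIC = auto()
    RECORD = auto()

def next_state(state, ctx):
    """Return the next MDL state given a context dict (assumed fields)."""
    if state is MDLState.INITIALISATION and ctx["boot_ok"]:
        return MDLState.IDLE
    if state is MDLState.IDLE and ctx["first_sensor_packet"]:
        return MDLState.REST
    if state is MDLState.REST and ctx["person_detected"]:
        return MDLState.WAKE_UP
    if state is MDLState.WAKE_UP:
        return MDLState.STATE_OF_THE_HOUSE
    if state is MDLState.STATE_OF_THE_HOUSE and ctx["rfid_badge_present"]:
        return MDLState.MDL_MIMIC
    if state is MDLState.STATE_OF_THE_HOUSE and ctx["person_left"]:
        return MDLState.GO_BACK_TO_REST
    if state is MDLState.MDL_MIMIC and ctx["trace_played"]:
        return MDLState.RECORD
    if state is MDLState.RECORD and ctx["recording_done"]:
        return MDLState.GO_BACK_TO_REST
    if state is MDLState.GO_BACK_TO_REST:
        return MDLState.REST
    return state
```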
1. The Move state, where the animation data points are sent to the motors.
2. The House State, the node in charge of executing the humidity-guided planned trajectory, applying the generator.
3. Continue Idle.
4. Move to the MC Mimic subprocess. (A sketch of this dispatch follows the list.)
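On the MC side, the four options above amount to dispatching on the command received from the MDL. The short sketch below is a hypothetical reading of that dispatch; the command strings, handler names, and `motors`/`planner` interfaces are assumptions, not the actual software.

```python
# Hypothetical dispatch for the motor-control (MC) FSM, driven by MDL commands.
# Command strings and handler names are illustrative only.
def mc_step(command, motors, planner):
    if command == "move":
        motors.send(planner.next_animation_points())        # Move state
    elif command == "house_state":
        motors.send(planner.humidity_guided_trajectory())   # House State
    elif command == "idle":
        pass                                                 # Continue Idle
    elif command == "mimic":
        return "mc_mimic"                                    # hand over to the MC Mimic subprocess
    return command
```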
- Sidesteps: the system obtains the XY coordinates of the human waist centre from the image data and rotates the base motor so that the human always stays centred in the field of view.
- Bowing: the application tracks the vertical motion of the user's shoulders, neck, and hips. If the shoulders or the neck drop below a given threshold while the hip position has not changed, the second motor lowers the apex.
- Twist: the system captures the human shoulder width and torso length, viewed from the front, and continually computes the shoulder-width-to-torso-length ratio. This ratio decreases when the user turns to the side, because the apparent shoulder width becomes smaller. When this is detected, the program registers a twist, rotates the top motor by 90°, and reproduces the gesture. (A sketch of these three heuristics follows this list.)
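To make the three heuristics concrete, the following is a minimal sketch operating on 2D keypoints such as those produced by OpenPose. The keypoint handling, thresholds, and gains are assumptions rather than the authors' parameters.

```python
# Hypothetical detection of the three mimicked gestures from 2D keypoints
# (e.g., OpenPose output). Thresholds and gains are assumed for illustration.

def base_rotation_command(waist_x, image_width, gain=1.0):
    """Sidesteps: rotate the base so the waist centre stays at the image centre."""
    error = (waist_x - image_width / 2.0) / (image_width / 2.0)  # range -1..1
    return gain * error  # signed rotation command for the base motor

def is_bowing(shoulder_y, neck_y, hip_y, prev, drop_threshold=0.05, hip_tolerance=0.02):
    """Bowing: shoulders or neck drop while the hips stay put (y grows downward)."""
    shoulder_drop = shoulder_y - prev["shoulder_y"] > drop_threshold
    neck_drop = neck_y - prev["neck_y"] > drop_threshold
    hips_static = abs(hip_y - prev["hip_y"]) < hip_tolerance
    return (shoulder_drop or neck_drop) and hips_static

def is_twisting(shoulder_width, torso_length, ratio_threshold=0.45):
    """Twist: the shoulder-width / torso-length ratio shrinks when the user turns."""
    ratio = shoulder_width / max(torso_length, 1e-6)
    return ratio < ratio_threshold  # if True, rotate the top motor by 90 degrees
```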
5. Experimental Validation
- (i) The technical robustness over multiple days;
- (ii) People's perceptions of the robot's motions;
- (iii) The users' reception of Yōkobo;
- (iv) The usability and user experience.
5.1. Experiment Preparation
5.2. Modifications Made for the Experiment
5.3. Tools and Protocol
5.4. Instructions to Participants
6. Results and Discussion
6.1. Differences between Experiments
6.2. Regarding Robustness, Stability, and Usability
6.3. Regarding Perception and Reception
7. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning |
---|---|
AP | atmospheric pressure |
DoF | degree of freedom |
E1, E2 | experiments 1 and 2 |
FSM | finite state machine |
HPEA | human position estimation algorithm |
HRHI | human–robot–human interaction |
HRI | human–robot interaction |
LED | light-emitting diode |
MC | motor control |
MDL | main decision loop |
NVB | non-verbal behaviour |
PCB | printed circuit board |
QS, QW1, QW2 | questionnaire (start, end week 1, end week 2) |
RA | robot assistant |
RFID | radio frequency identification |
RPi | Raspberry Pi |
SP | selected participant |
SR | social robot |
SUS | system usability scale |
US | ultrasonic sensor |
VA | vocal assistant |
Appendix A. Experiment
Appendix A.1. Questionnaires
Question | Answer Choice |
---|---|
Personal questions | |
Student ID 1 | number |
How old are you? 2 | number |
What is your gender? 2 | Female, Male, Prefer not to say |
What is your nationality? 2 | text |
What is your level of knowledge about robots? 2 | Not Familiar (1)–Familiar (5) |
What is your knowledge about Yōkobo? 2 | I was involved in the first experiment; I already interacted with Yōkobo on my own; I know how it works and already saw it; I only saw it; I don't know about Yōkobo |
About Yōkobo and its behaviour | |
How much has Yōkobo welcomed you? [Likert scale] | Not at all (1)–I felt welcomed (5) |
Did you see intelligence in Yōkobo? [Likert scale] | Not at all (1)–Totally (5) |
Did you see life in Yōkobo? [Likert scale] | Not at all (1)–Totally (5) |
What is your feeling about Yōkobo? (1: not at all; 5: totally) [Likert scale] | Curious; Happy; Afraid; Enthusiastic; Confusion; Friendly |
Yōkobo is [positive adj. ➀➁➂➃➄ negative adj.—semantic scale] | Smart (1)–Stupid (5); Simple (1)–Complicated (5); Dynamic (1)–Static (5); Lifelike (1)–Artificial (5); Responsive (1)–Slow (5); Emotional (1)–Emotionless (5); Useful (1)–Useless (5); Familiar (1)–Unknown (5); Desirable (1)–Undesirable (5); Cute (1)–Ugly (5); Modern (1)–Old (5); Attractive (1)–Unattractive (5) |
I ______ Yōkobo [semantic scale] | Like (1)–Dislike (5) |
Design | |
In the next questions, name the different parts of Yōkobo in your own words.
Numbers 1 to 4 point to whole parts; numbers 5 and 6 point to the holes.
What do you (or will you) call mark 1? 3 | text |
What do you (or will you) call mark 2? 3 | text |
What do you (or will you) call mark 3? 3 | text |
What do you (or will you) call mark 4? 3 | text |
What do you (or will you) call mark 5? 3 | text |
What do you (or will you) call mark 6? 3 | text |
Interactions | |
How many times did you interact with Yōkobo? 4 | number |
Additional remarks | |
Do you have any additional remarks or comments about Yōkobo? 5 | text |
Question | Answer Choice |
---|---|
Messages | |
How many messages did you receive from your partner? | number |
How many messages did you send to your partner? | number |
How would you rate the recording of a message? [semantic scale] | Hard (1)–Easy (5) |
How much did you feel the existence of your partner during this week? [Likert scale] | Does not exist (1)–Exists (5) |
Do you think Yōkobo helped you to feel your partner? [Likert scale] | Not at all (1)–A lot (5) |
What does Yōkobo represent in the connection with your partner? | text |
Did you have the impression that the movement of the robot was Yōkobo’s behaviours or was coming from your partner? | text |
Question | Answer Choice |
---|---|
I think I would like to use Yōkobo frequently | Strongly Disagree (1)–Strongly Agree (5) |
I found Yōkobo unnecessarily complex | |
I thought Yōkobo was easy to use | |
I think I would need the support of a technical person to be able to use Yōkobo | |
I found the various functions in Yōkobo were well integrated | |
I thought there was too much inconsistency in Yōkobo | |
I would imagine that most people would learn to use Yōkobo very quickly | |
I found Yōkobo very cumbersome to use | |
I felt very confident using Yōkobo | |
I need to learn a lot of things before I could get going with Yōkobo | |
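The ten statements above follow the standard SUS template, scored on the usual 0–100 scale: odd-numbered items contribute (score − 1), even-numbered items contribute (5 − score), and the sum is multiplied by 2.5. A minimal sketch of that scoring rule (illustrative code, not the authors' analysis script) is shown below.

```python
# Standard SUS scoring: 10 items rated 1-5, alternating positive/negative wording.
def sus_score(responses):
    """responses: list of 10 integers in [1, 5], in questionnaire order."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # final score on a 0-100 scale

# Example: a mildly positive answer pattern.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```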
Appendix A.2. Graffiti Wall
References
Cpt | Positive (1) | Negative (5) | QS | QW1 | QW2 |
---|---|---|---|---|---|
Behaviour | Dynamic | Static | 2.7 (1.0) | 2.2 (1.0) | 2.6 (1.0) |
 | Smart | Stupid | 2.2 (0.8) | 2.7 (0.7) | 2.4 (0.5) |
 | Simple | Complicated | 2.5 (0.9) | 2.3 (0.9) | 3.0 (0.9) |
 | Responsive | Slow | 3.1 (0.9) | 2.9 (1.2) | 3.2 (0.8) |
Interaction | Lifelike | Artificial | 3.2 (0.8) | 2.7 (0.9) | 3.4 (1.1) |
 | Emotional | Emotionless | 2.7 (1.2) | 2.6 (1.0) | 2.9 (0.9) |
 | Familiar | Unknown | 2.1 (0.8) | 2.6 (1.1) | 2.1 (0.9) |
 | Useful | Useless | 2.4 (0.5) | 2.7 (0.7) | 2.0 (0.7) |
Appearance | Desirable | Undesirable | 2.9 (0.8) | 3.0 (1.3) | 2.6 (1.1) |
 | Cute | Ugly | 1.5 (0.5) | 1.9 (0.3) | 1.6 (0.5) |
 | Modern | Old | 1.5 (0.5) | 1.9 (0.8) | 1.5 (0.5) |
 | Attractive | Unattractive | 1.9 (0.5) | 2.1 (0.3) | 2.1 (0.7) |
 | Like | Dislike | 1.8 (0.6) | 2.1 (0.9) | 1.7 (0.7) |
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).