Article

Effects of Voice and Lighting Color on the Social Perception of Home Healthcare Robots

Xiao Dou, Li Yan, Kai Wu and Jin Niu

1 College of Fine Arts, Guangdong Polytechnic Normal University, Guangzhou 510665, China
2 Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou 511442, China
3 Department of Creative Design, Dongguan City University, Dongguan 523419, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(23), 12191; https://doi.org/10.3390/app122312191
Submission received: 10 October 2022 / Revised: 12 November 2022 / Accepted: 23 November 2022 / Published: 28 November 2022

Abstract: The influence of the match between robots' social cues on users' social perceptions should be investigated systematically so that robots better fit their occupational roles. In this study, an experiment with 69 older and middle-aged participants was conducted to explore the effects of the voice and lighting color of a home healthcare robot on users' social perception, which was measured with the Robotic Social Attributes Scale (RoSAS). The results indicated that voice and lighting color significantly affected social perceptions of the healthcare robot. Specifically, the robot received high warmth ratings when it had an adult female voice or a child voice, whereas it received high competence ratings when it had an adult male voice. The robot received a high warmth rating when warm lighting was used as visual feedback and a high competence rating when cool lighting was used. Furthermore, a mismatch between the robot's voice and lighting color was found to evoke feelings of discomfort. The findings of this study can serve as a reference for designing robots with acceptable social perception and for expanding the roles of social robots in the future.

1. Introduction

The emerging application of social robots has led to increasing systematic research on human–robot interaction (HRI) [1,2,3]. Robots that are perceived to be consistent with their professional roles and that conform to user expectations are more readily accepted by users. Healthcare robots are expected to play crucial roles in people's daily lives in the future [4]. Such robots can be used in hospitals, care centers, and homes to manage users' health and provide care services or companionship [5]. In contrast to the service robots used in malls, which interact with people for short periods, healthcare robots tend to interact frequently with users over extended periods. Therefore, people's perception of and interaction with healthcare robots have become increasingly crucial topics [6]. On the basis of the computers-are-social-actors (CASA) paradigm [7], studies have found that a robot's social cues during human–robot interaction can affect people's social perception of the robot [8,9,10]. People tend to categorize the gender and age of a robot in accordance with the social cues given by the robot [11,12,13]. Such categorization is particularly crucial during the first encounter between a human and a robot [14]. After a robot is assigned social membership, people are more likely to treat the robot as a part of their social group and apply social norms to it. Thus, the social perception of a robot during the first HRI can strongly affect people's subsequent interaction experience with, trust in, and acceptance of the robot [15]. If healthcare robots trigger more favorable social perceptions, they are more likely to be accepted by users, which leads to fewer difficulties in subsequent HRI. For example, a study on gender effects related to robots found that users associate stereotypes of women with robots that are categorized into the female group [13]. Such robots might be considered caring and compassionate and might be accepted in industries that favor these characteristics [10]. A social robot's voice, facial expressions, and appearance are crucial social cues [16] that can influence a user's social perception of it [17,18]. The manipulation of these social cues can produce a stable social perception of robots and influence users' acceptance of robots [19,20]. Stroessner and Benitez [12] measured the effects of a robot's facial stimuli on social perceptions of the robot and discovered that people highly favor contact with robots that appear humanlike and feminine. Morillo-Mendez et al. [21] investigated how robots' eye gaze affects older users' social perception of robots. Dou et al. [22] explored users' affective evaluation of robots' voices and found that users judge whether a robot is suited to its professional role on the basis of its voice during their first interaction with it. Verbal and nonverbal cues both influence the social perception of robots [23]. Relevant studies have examined the effects of a single type of robot cue (visual or verbal) on the social perception of robots; however, research investigating how matched verbal and visual cues [23] affect the social perception of robots is lacking. Studying visual or verbal cues in isolation may therefore overlook perceptual mismatches, which have even been found to cause user disgust. The influence of matching between the aforementioned cues of a robot on social perceptions of the robot must thus be explored.
Verbal interaction is the most intuitive HRI method. The voice carries considerable socially relevant information [24], and the voice of an intelligent robot can strongly affect people's social perception of it [10,13,25]. People have preferences for certain types of voices, and they associate stereotypes with these voices [26]. For example, men and women consider a relatively low-frequency voice to be attractive and dominant [27]. By contrast, a relatively high-frequency voice is considered emotional and immature [28]. People judge whether a speaker conforms to a stereotype by listening to their voice, and on the basis of this judgment, they feel affection or disaffection for the speaker [26,29,30]. Despite the importance of verbal communication in HRI, relying on verbal communication alone is insufficient, particularly in noisy environments or when a robot interacts with a user who has impaired hearing or is located far away [31]. In addition to the effects of verbal interaction on HRI, the effects of visual feedback on HRI should be examined [32]. Visual feedback in human interactions occurs mainly through nonverbal cues, such as gestures and facial expressions. However, only a few social robots currently have the ability to present vivid facial expressions and behaviors; thus, most robots use lighting to provide visual feedback [33]. Commonly, a light flashes in a robot's eyes while it interacts with a user, and this light acts as interactive feedback. Researchers have indicated that lighting can allow a robot to exhibit visual feedback cues and express its inner state, thereby facilitating HRI [31,32,33,34]. Lights are often installed in the eyes of a robot (e.g., Nao and Pepper) to improve its engagement with users. A robot can convey nonverbal information, such as "I am talking to you" and "I am listening to you", to a user, and this information might affect the user's social perception of the robot. Animation patterns, lighting speed, and lighting color are crucial factors that affect the perception of robots [32]. Kobayashi et al. [35] analyzed several patterns of blinking lights used in intelligent robots during verbal communication with humans and investigated the effects of the lighting animation and speed. The color of lighting is particularly crucial because colors are strongly related to people's feelings and life experiences and can influence their social perceptions of robots. Studies have shown that recognition of facial emotional states can affect social perception [36]. The six basic emotions (anger, disgust, fear, joy, sadness, and surprise) identified by Ekman [37] have been favored by researchers investigating the emotional expressions of robots. Studies have also observed emotional associations with colors. For example, Fromme and O'Brien [38] developed a color circle that matches the circular model of emotions proposed by Plutchik [39]. Moreover, Terada et al. [40] manipulated the color luminosity of a robot's body to express the robot's emotions and found that hue can represent basic emotion types (red for anger, amber for anticipation, yellow for joy, and green for trust). Blue lighting usually causes people to feel comfortable and safe [41], whereas red lighting can invoke positivity (e.g., feeling strong) or negativity (e.g., associations with blood and aggression) [42]. These color-associated feelings can be paired with verbal cues to create a stable social perception of robots and ensure that people form a suitable image of robots in their minds.
On the basis of Fiske et al.'s [43] social–cognitive dimensions (sociability and competence), our previous study [44] established a user expectation coordinate system for robot applications (Figure 1) and explored the effect of matching between the verbal and visual cues of robots on the social perception of these robots within the contexts of shopping reception, education, and companionship [45]. Studies have not yet examined the aforementioned effects for healthcare robots, of which users have high expectations. Therefore, the aim of this study was to explore how a home healthcare robot's verbal (voice type) and visual (lighting color) cues can be matched and how this matching influences older and middle-aged users' social perception of the robot. The results of this study can be used as a basis to derive generalized design guidelines for social robots to be used in various occupational fields.

2. Materials and Methods

2.1. Materials

2.1.1. Interaction Context

The present study focused on healthcare robots, and the considered interaction context was the first conversation between a user and a healthcare robot that provides a heart rate monitoring service. The content of the conversation between the robot and the user was selected from the research of Dou [46]. Healthcare robots are developed to address the physical, cognitive, medical, and psychosocial issues of users [4]. They can support people living independently by assisting with mobility, completing household tasks, and monitoring health and safety [47]. Dou et al. [44] found that users have high expectations of healthcare robots (e.g., high sociality and suitable functions). These high expectations, together with the accessibility requirements of users with special needs [48], place extremely high demands on the design of healthcare robots.
In this study, the topic of the human–robot dialogue was a medication reminder, which is a common function of healthcare robots. The dialogue consisted of free talk and task-oriented talk. In the video clip, the user communicated with the robot and asked about her blood pressure, and the robot answered and suggested the time and quantity for taking her medicine.

2.1.2. Voice Stimulus

In this study, we examined the effects of voice type and lighting color on the social perception of a robot. The robot used in this study was Alpha 1 Pro, a humanoid robot (401 × 198 × 124 mm) developed by UBTECH in Shenzhen, China. This robot can talk to people through a 3-W speaker. We added a circuit board to the head of the robot and placed light-emitting diodes (LEDs) in its eyes.
Most studies on robot voices have focused on recorded adult male and female voices; few have employed children's voices [10,49,50]. We used the Chinese text-to-speech system iFlytek Open Platform, which is among the most advanced synthetic Chinese speech systems and has been used in robot experiments [51], to produce the three most commonly used types of voice: adult male, adult female, and child. The Jiu Ge iFlytek synthetic voice was used as the adult male voice, whereas the Xiao Yuan iFlytek synthetic voice was used as the adult female voice. Because iFlytek offers no child voice, a child's voice was generated by increasing the pitch of an iFlytek synthetic female voice (Fang Fang) by 10 units using Adobe Audition 2015. Praat, which is widely used in acoustic research, was employed to conduct parametric analysis of the robot voice acoustics [52,53], and the results indicated that the mean fundamental frequency of each voice sample was within the range for its category (Table 1).
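For readers who wish to replicate this check, the mean fundamental frequency can be computed with Praat's Python bindings (praat-parselmouth). The sketch below is illustrative only: the file name is hypothetical, and Praat's default pitch settings are assumed rather than the exact settings used in the study.

```python
import numpy as np
import parselmouth  # praat-parselmouth: Python interface to Praat


def mean_f0(wav_path: str) -> float:
    """Return the mean fundamental frequency (Hz) of a voice recording."""
    sound = parselmouth.Sound(wav_path)
    pitch = sound.to_pitch()                # Praat's default pitch analysis
    f0 = pitch.selected_array["frequency"]  # one F0 estimate per analysis frame
    voiced = f0[f0 > 0]                     # 0 Hz marks unvoiced frames
    return float(np.mean(voiced))


# Hypothetical file name for one of the three synthesized voice stimuli.
print(f"Mean F0: {mean_f0('robot_voice_child.wav'):.2f} Hz")
```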

2.1.3. Lighting Colors

To correctly convey the robot's states while it listened and spoke, we first determined the robot's lighting feedback pattern on the basis of previous studies [32,35,57]. When the robot was listening to a person, the two LEDs in its eyes flashed in a fixed on–off pattern over 2 s at a frequency of 15 Hz (Figure 2), which was reported to be the most acceptable frequency [35]. The robot's eyes remained lit while it was talking.
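As a rough illustration of this feedback pattern, the listening and talking states could be driven as follows on a Raspberry Pi with the gpiozero library; this is only a sketch under that assumption (the study used a custom circuit board, and the GPIO pin number here is hypothetical).

```python
from gpiozero import LED  # assumes the eye LEDs are wired to a Raspberry Pi GPIO pin

eye_leds = LED(17)  # hypothetical pin; the study used a custom circuit board instead


def show_listening() -> None:
    """Blink for 2 s at 15 Hz: 30 on-off cycles of roughly 33 ms each."""
    period = 1.0 / 15.0
    eye_leds.blink(on_time=period / 2, off_time=period / 2, n=30, background=False)


def show_talking() -> None:
    """Keep the eyes steadily lit while the robot speaks."""
    eye_leds.on()
```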
Colors are typically divided into cool and warm tones [58]. Cool colors include blue and green, whereas warm colors include red and orange [59]. Neutral colors, such as black and white, have no hue. A robot's inner state is exhibited mainly by using LEDs, which can emit a wide range of cool, warm, and neutral colors. A colored LED can be characterized by its wavelength in nanometers [60]. Cultural associations indicate that people normally consider yellow (550–600 nm), orange (600–630 nm), and red (630–780 nm) to be warm colors and blue (450–500 nm) to be a cool color [61]. Song and Yamada [34] used LEDs for a robot's lighting feedback and found that, despite the numerous colors available, people preferred blue, green, and yellow as the robot's feedback. White light does not exist in the spectrum; the white light that humans see is a mixture of other wavelengths. Thus, a white LED is instead characterized by its color temperature, which is approximately 6500–10,000 K. In the present study, two LEDs were placed in the robot's eyes and used as light sources. Considering current application trends, these LEDs were set to emit cool (blue: 460–465 nm), warm (yellow: 610–615 nm), and white (6500–7500 K) light (Table 2).

2.2. Experimental Design

This study employed a 3 (lighting color: warm, cool, and white) × 3 (robot voice type: adult male, adult female, and child) within-subject experimental design and video-based HRI (VHRI), which is a mainstream experimental paradigm [62,63] in which participants watch a recorded video clip of HRI. VHRI is advantageous because it minimizes the influence of changes in dialogue content and enables repeated measurement with several participants. On the basis of the experimental variables, nine video clips were produced and viewed by each participant (Figure 3).
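The nine conditions and a randomized presentation order per participant can be generated as in the following sketch (the variable names and the per-participant seed are illustrative, not taken from the study's materials).

```python
import itertools
import random

VOICES = ["adult male", "adult female", "child"]
LIGHTS = ["warm", "cool", "white"]

# The 3 x 3 within-subject design yields nine video-clip conditions.
CONDITIONS = list(itertools.product(VOICES, LIGHTS))


def presentation_order(participant_id: int) -> list[tuple[str, str]]:
    """Return a randomized order of all nine clips for one participant."""
    rng = random.Random(participant_id)  # seed per participant for reproducibility
    order = CONDITIONS.copy()
    rng.shuffle(order)
    return order


print(presentation_order(participant_id=1))
```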

2.3. Measurements

Although the Godspeed scales [64] are widely used to evaluate robots, these scales mainly focus on basic robot properties and do not fully consider the psychological dimension of HRI [65]. On the basis of the Godspeed scales and two dimensions of social judgment (warmth and competence) that have psychological consensus [43,66], the Robotic Social Attributes Scale (RoSAS) was developed by Carpinella, Wyman [65] (Table 3). This scale was employed in the present study because it can measure social perceptions of robots [12,65]. The scale comprises 18 items related to three fundamental aspects of the social perceptions of robots, namely warmth (happy, feeling, social, organic, compassionate, and emotional), competence (capable, responsive, interactive, reliable, competent, and knowledgeable), and discomfort (scary, strange, awkward, dangerous, awful, and aggressive). Each item is assessed in random order on a 5-point Likert scale ranging from 1 for “not at all” to 5 for “very much so”. In addition, the RoSAS contains an item related to the overall acceptance of the robot by participants.
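Scoring the RoSAS amounts to averaging the six 1-5 item ratings of each subscale. A minimal sketch is shown below; the column names and data layout are assumptions for illustration, not the authors' actual analysis scripts.

```python
import pandas as pd

# RoSAS subscales and their six items each (rated 1-5).
ROSAS_ITEMS = {
    "warmth": ["happy", "feeling", "social", "organic", "compassionate", "emotional"],
    "competence": ["capable", "responsive", "interactive", "reliable", "competent", "knowledgeable"],
    "discomfort": ["scary", "strange", "awkward", "dangerous", "awful", "aggressive"],
}


def score_rosas(responses: pd.DataFrame) -> pd.DataFrame:
    """Average item ratings into warmth, competence, and discomfort subscale scores."""
    return pd.DataFrame(
        {subscale: responses[items].mean(axis=1) for subscale, items in ROSAS_ITEMS.items()}
    )


# Hypothetical example: one questionnaire with all 18 items rated 3.
example = pd.DataFrame([{item: 3 for items in ROSAS_ITEMS.values() for item in items}])
print(score_rosas(example))
```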

2.4. Participants

A total of 69 adults participated in this study, and 62 valid questionnaires were retrieved (22 men and 18 women; mean age = 64.05 years, SD = 6.51, age range: 52–84). All participants were native Chinese speakers from China. They were recruited through leaflets randomly distributed in their neighborhoods. They were not required to be in perfect health but could not have severe hearing impairment, a history of hearing disorders, or severe vision problems.

2.5. Procedures

The experiment was conducted from July to August 2021 at a local community activity center in Binzhou City. The participants were asked to make an appointment to participate in the study. Two researchers ran the experiment. One researcher was responsible for helping participants who had difficulty recognizing the words or understanding the items in the questionnaire. After a participant arrived at the location, the other researcher briefed them on the experiment to be conducted. Each participant was then asked to provide informed consent and complete a personal information questionnaire. Subsequently, the participants were led to a room, where they were randomly shown one of the nine videos. After viewing the video, the participants were asked to complete the RoSAS questionnaire. After the participants had completed this questionnaire, another video was played at random, and this process continued until all nine videos had been presented. Each participant thus submitted nine questionnaires by the end of the experiment.

3. Results

3.1. Manipulation Check

Before the experiment, participants (N = 12) were asked to evaluate the voices and lighting colors. One-way analysis of variance (ANOVA) was used in the manipulation checks for the three voice types and three colors (Table 4). The participants perceived the female voice (M = 4.92, SD = 0.29) to be significantly more feminine than the other two voices [F(8, 107) = 172.89; p < 0.001], they perceived the male voice (M = 5.00, SD = 0.00) to be significantly more masculine than the other two voices, and they perceived the child voice (M = 4.83, SD = 0.39) to be more childish than the other two voices. The lighting colors were also successfully manipulated [F(8, 107) = 38.89; p < 0.001]. The warm lighting (M = 4.67, SD = 0.49) was perceived to be significantly warmer than the cool and white lighting, whereas the cool lighting (M = 4.83, SD = 0.39) was perceived to be significantly cooler than the warm and white lighting. The white lighting (M = 3.83, SD = 1.03) was perceived to be whiter than the other lighting colors.
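A one-way ANOVA of this kind can be run with scipy.stats.f_oneway, as sketched below on made-up pretest ratings (the numbers are hypothetical and only illustrate the procedure, not the reported data).

```python
from scipy import stats

# Hypothetical femininity ratings (1-5) of each voice from a small pretest group.
female_voice = [5, 5, 4, 5, 5, 5, 5, 4, 5, 5, 5, 5]
male_voice = [1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1]
child_voice = [3, 2, 3, 3, 2, 3, 3, 2, 3, 3, 2, 3]

f_stat, p_value = stats.f_oneway(female_voice, male_voice, child_voice)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```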

3.2. Effects of Robot Voice Type and Lighting Color on the Participants’ Social Perception and Overall Acceptance of the Healthcare Robot

Two-factor multivariate ANOVA (MANOVA) was conducted to analyze the effects of robot voice type and lighting color on the participants’ feelings of warmth, competence, and discomfort during HRI and on their overall acceptance of the robot (Table 5). The reliabilities of the three subscales of the RoSAS were also analyzed. The Cronbach’s α values for the warmth, competence, and discomfort subscales were 0.800, 0.819, and 0.874, respectively.
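The reliability and factorial analyses can be approximated in Python with pingouin (Cronbach's α) and statsmodels (a two-way ANOVA on the warmth score), as in the sketch below. The data frame is synthetic, and the between-style ANOVA is a simplification of the within-subject MANOVA reported here; it only illustrates the pipeline.

```python
import numpy as np
import pandas as pd
import pingouin as pg
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
warmth_items = ["happy", "feeling", "social", "organic", "compassionate", "emotional"]

# Synthetic long-format data: one row per questionnaire (participant x condition).
n = 62 * 9
df = pd.DataFrame(rng.integers(1, 6, size=(n, len(warmth_items))), columns=warmth_items)
df["voice"] = rng.choice(["male", "female", "child"], size=n)
df["light"] = rng.choice(["warm", "cool", "white"], size=n)
df["warmth"] = df[warmth_items].mean(axis=1)

alpha, _ = pg.cronbach_alpha(data=df[warmth_items])  # internal consistency of the subscale
print(f"Cronbach's alpha (warmth): {alpha:.3f}")

model = smf.ols("warmth ~ C(voice) * C(light)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects of voice and light plus their interaction
```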
The results of the main effects analysis indicated that robot voice type had significant effects on warmth [F(2, 558) = 20.43, p < 0.001] and competence [F(2, 558) = 31.68, p < 0.001] scores. In particular, the child voice was considered the warmest (M = 3.30, SD = 0.71), followed by the adult female voice (M = 3.05, SD = 0.72) and the adult male voice (M = 2.85, SD = 0.62). The adult male voice (M = 3.53, SD = 0.62) was considered the most competent, followed by the adult female voice (M = 3.40, SD = 0.63) and the child voice (M = 3.31, SD = 0.70). Moreover, the adult male voice (M = 2.26, SD = 0.95) and adult female voice (M = 2.13, SD = 0.84) caused considerably stronger feelings of discomfort than did the child voice (M = 1.97, SD = 0.72).
The lighting color of the robot had significant main effects on the participants' feelings of warmth [F(2, 558) = 3.68, p < 0.05], competence [F(2, 558) = 11.42, p < 0.001], and discomfort [F(2, 558) = 23.51, p < 0.001], as well as on their overall acceptance of the robot [F(2, 558) = 28.56, p < 0.001]. Warm lighting (M = 3.15, SD = 0.66) and white lighting (M = 3.09, SD = 0.77) were rated as considerably warmer than cool lighting (M = 2.96, SD = 0.68). Moreover, white lighting (M = 3.42, SD = 0.68) was perceived to be associated with higher competence than cool lighting (M = 3.39, SD = 0.73) and warm lighting (M = 3.13, SD = 0.66). Cool lighting (M = 2.38, SD = 0.94) caused the most discomfort, followed by warm lighting (M = 2.17, SD = 0.81) and white lighting (M = 1.81, SD = 0.67). Furthermore, white lighting was associated with significantly higher user acceptance of the robot than warm and cool lighting.
In the discomfort evaluation, the interaction between robot voice type and lighting color had a significant effect [F(4, 558) = 3.22, p < 0.05; Figure 4]. The results indicated that the participants felt strong discomfort when white lighting was combined with the child or adult male voice. When warm lighting was used, the participants felt strong discomfort if the adult male voice was employed. When cool lighting was used, the participants felt strong discomfort if the adult male or female voice was employed.
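An interaction plot of the kind shown in Figure 4 can be drawn by averaging the ratings per voice–lighting cell, as in the sketch below; the data frame is synthetic and serves purely to demonstrate the plotting step.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Synthetic long-format ratings (one row per questionnaire); values are illustrative only.
rng = np.random.default_rng(1)
n = 62 * 9
df = pd.DataFrame({
    "voice": rng.choice(["male", "female", "child"], size=n),
    "light": rng.choice(["warm", "cool", "white"], size=n),
    "discomfort": rng.normal(loc=2.1, scale=0.8, size=n),
})

# Mean discomfort per voice x light cell, plotted as one line per voice.
cell_means = df.groupby(["voice", "light"])["discomfort"].mean().unstack("light")
cell_means = cell_means[["warm", "cool", "white"]]  # fixed x-axis order
cell_means.T.plot(marker="o")
plt.xlabel("Lighting color")
plt.ylabel("Mean discomfort rating")
plt.title("Voice x lighting interaction (synthetic data)")
plt.show()
```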

4. Discussion

A healthcare robot’s social cues can affect users’ perception, experience, acceptance, and rejection of it. In this study, we examined the effects of a healthcare robot’s voice type and lighting color during HRI on its social perception. In line with previous relevant studies, the results of the present study suggest that the voice of robots is a strong social cue that can influence people’s perception of them [23,67]. The results also reveal that a robot’s lighting color can affect users’ social perceptions of healthcare robots. The adopted healthcare robot received higher warmth scores when it provided feedback in the form of warm lighting, and it received higher competence scores when it provided feedback in the form of cool or white lighting. The participants considered the robot to be warm and competent when it used white lighting. This result can be attributed to the properties of white light, which is colorless and associated with complex feelings, such as innocence, purity, sadness, and coldness.
In the present study, high overall acceptance of the adopted healthcare robot was observed among the participants when an adult female voice and white lighting were used, which is consistent with the results of our previous study [45]. White lighting (i.e., neutral lighting) is the most conservative choice for healthcare robots and does not easily evoke unpleasant feelings; thus, white lighting is a suitable form of visual feedback for robots used in any field. Baraka et al. [32] claimed that robots using blue (cool) lighting can attract attention. However, our findings indicate that using cool lighting for a healthcare robot can cause discomfort. Although Naz et al. [68] reported that green can evoke positive emotions and red can excite people, these colors were not selected in the present study because Baraka et al. [32] suggested that they should be avoided on account of the prevalence of red–green color blindness and the misreading of colors across cultures. In addition, our previous study found that robots with children's voices are highly accepted in shopping reception, education, and companionship-related applications; however, the descriptive statistics of the present study suggest that healthcare robots with a child's voice are less suitable than those with an adult female voice, which is in agreement with the results of Tay et al. [13]. In the present study, the lighting color exhibited a moderating effect; for example, the healthcare robot was more widely accepted when it had the child voice and white lighting than when it had an adult female voice and warm lighting.
The results of this study suggest that controlling verbal and visual cues not only leads to robots being perceived favorably in social terms but can also reinforce specific aspects of social perception (i.e., warmth or competence) in HRI. Although many studies have discussed how the social cues of robots should be adjusted in accordance with user preferences, they have not explored how verbal and visual cues should be matched when a robot's occupation requires users to perceive it as highly capable or highly sociable. The results of this study suggest that an adult female voice and warm lighting can be used to reinforce the perception of warmth, whereas an adult male voice and cool lighting should be employed to ensure that users perceive a robot to be highly competent.
Studies have claimed that using an adult female voice for robots is a golden rule in robot design and development [10]; however, our previous study [22] indicated that this rule may be invalid depending on the robot's occupational role. Moreover, the results of the present study reveal that if a mismatch exists between a robot's verbal and visual cues, this golden rule is invalid. In the present study, the overall acceptance of the adopted robot was lower when it had an adult female voice and warm lighting than when it had an adult male voice. When the healthcare robot's voice and lighting color did not match (e.g., the use of warm lighting and an adult male voice), feelings of discomfort were evoked in the participants. This finding suggests that robots' social cues should be paired consistently to prevent discomfort in users. Many studies on the uncanny valley effect have reported that the humanlike nature and realism of a robot's appearance are key to inducing this effect [69,70,71]; however, these studies have ignored the potential influence of multimodal interaction on the aforementioned effect [72]. The discomfort evaluation conducted in this study indicates that even when lighting is used as visual feedback, users experience discomfort if the lighting does not match the robot's voice. The results of the present study are similar to those of Mitchell et al. [73], who determined that a mismatch between a robot's face and voice considerably increased users' feelings of discomfort when interacting with the robot. Paetzel et al. [74] found that a mismatch between a robot's facial texture and voice can cause a perception of uncanniness among children. The influence of matching robots' social cues on HRI is not completely clear and must be investigated in future studies. To a certain extent, this study provides insights into the influence of verbal cue–visual cue matching on overall user acceptance of robots.
The participants of this study rated the warmth of the adopted robot highly when it had an adult female voice or a child voice, and they rated the robot’s competence highly when it had an adult male voice. When robots have an adult male voice, stereotypes about men are activated. Therefore, robots that have an adult male voice are perceived to have high competence but low warmth. Stereotypes about children, such as innocence and harmlessness, are activated when a robot has a child voice; thus, users perceive a robot with a child voice to have high warmth but low competence. In the discomfort evaluation in this study, use of an adult male or female voice caused considerably more discomfort than did use of a child voice. This result may have been due to the stereotype of a child’s voice, which is associated with harmlessness, unlike adult male and female voices. Robots that have a child’s voice make people comfortable, which explains the current use of children’s voices in many commercial robots. From the social perception perspective, people generally prefer competent and warm people; however, an evaluation of warmth takes precedence over an evaluation of ability. Therefore, robots that evoke warm feelings in people make them feel comfortable [75]. Similar results were obtained by Niculescu et al. [67], who found that people favor robots with high-pitched adult female voices because they consider these voices to be more attractive than low-pitched adult male voices. Dou et al. [76] reported that users generally perceive robots with an adult male voice to be more capable than robots with an adult female voice or a child voice [77]. This result appears to confirm another golden rule in HRI; that is, robots should conform to users’ social stereotypes. Similar results were acquired by Tay et al. [13], who found that robots’ social cues can trigger gender stereotypes with respect to occupation [78]. Their results suggest that robots with an extroverted personality and adult female voice are perceived by users to be suitable for healthcare occupations, whereas robots with an introverted personality and adult male voice are considered to be suitable for security-related tasks. This phenomenon may be attributable to the amount of information transferred during HRI. Some social cues—especially social categorization-related information, such as gender—can trigger a social perception during an interaction. Additional information is required to evaluate other aspects of robots apart from their social perception. Because only a few robots (e.g., Ameca [79] and Sophia [80]) are capable of producing key nonverbal cues, such as humanlike facial expressions and behaviors, users must evaluate social robots on the basis of limited verbal and visual information; thus, gender stereotypes have strong effects on HRI.

Implications and Limitations

The findings of this study can be used as a reference for designing a robot with acceptable social perceptions. Robots with high social interactivity are generally assigned high warmth scores. For example, if a companion robot has a child voice or an adult female voice, the most suitable lighting colors are warm and white light. The use of an adult male voice combined with cool or white light is recommended if the robot must be skillful or highly competent. Moreover, the robot’s voice should match its lighting, appearance, and other design features. Mismatches in these factors can cause discomfort during interactions, which reduces the acceptance of robots.
This research has certain limitations. First, in this study, the fundamental frequency (F0) was used to control voice type; thus, other voice characteristics could not be analyzed. Many studies have indicated that controlling F0 is the most effective method for producing different types of voices [10]; however, other acoustic parameters, such as formants, may also influence the social perception of a robot. Future research could explore more acoustic parameters to reach more comprehensive results. Second, this study analyzed only a humanoid robot; mechanical robots were not included. Although humanoid robots represent a mainstream intelligent robot appearance, future research should cover a wider range of robot appearances. Third, cultural background can influence perceptions of colors. Cross-cultural comparisons could be conducted in future studies to extend the results of the present study.

5. Conclusions

An understanding of the social perceptions of healthcare robots can enable manufacturers to develop healthcare robots that will be accepted by users and that can easily integrate into human society. A healthcare robot’s social cues affect social perceptions of the robot. In the present study, the RoSAS was used to evaluate the effects of two social cues, namely voice type and lighting color, on the social perceptions of a healthcare robot. The results indicated that voice affected the social perceptions of the healthcare robot. The participants of this study provided high warmth ratings to the robot when it had a child voice or an adult female voice, and they provided a high competence rating to the robot when it had an adult male voice. Matching between the visual feedback and verbal interactions of current intelligent robots is essential for increasing their acceptance by users. In this study, the lighting color of the healthcare robot was controlled to analyze its effect on social perceptions of the robot. The robot obtained high warmth scores when warm colors were used for lighting as visual feedback, whereas it obtained high competence scores when a cool color was employed. The robot was perceived to be warm and competent when white lighting was used as visual feedback.
The findings of this study offer insights regarding the consistency in the social perceptions triggered by a robot’s social cues. Mismatches in robot voice and lighting color can cause user discomfort, which results in low user acceptance of the robot. The present results indicate that the user acceptance of healthcare robots can be increased by optimizing their voice and lighting color. On the basis of the findings of the present study, further research will be conducted on the effects of other social cues of robots on social perceptions.

Author Contributions

X.D.: writing—review and editing; L.Y.: supervision; K.W.: methodology; J.N.: writing—original draft preparation; investigation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Guangdong Social Science Association (grant number GD20XYS22) and Guangdong Polytechnic Normal University (grant number 2021SDKYB026). The APC was funded under grant GD20XYS22.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dou, X.; Wu, C.-F. Are We Ready for “Them” Now? The Relationship Between Human and Humanoid Robots. In Integrated Science; Springer: Berlin/Heidelberg, Germany, 2021; pp. 377–394. [Google Scholar]
  2. Liu, S.X.; Shen, Q.; Hancock, J. Can a social robot be too warm or too competent? Older Chinese adults’ perceptions of social robots and vulnerabilities. Comput. Hum. Behav. 2021, 125, 106942. [Google Scholar] [CrossRef]
  3. Song, Y.; Luximon, A.; Luximon, Y. The effect of facial features on facial anthropomorphic trustworthiness in social robots. Appl. Ergon. 2021, 94, 103420. [Google Scholar] [CrossRef] [PubMed]
  4. Robinson, H.; Macdonald, B.; Broadbent, E. The Role of Healthcare Robots for Older People at Home: A Review. Int. J. Soc. Robot. 2014, 6, 575–591. [Google Scholar] [CrossRef]
  5. Stafford, R.Q.; MacDonald, B.A.; Jayawardena, C.; Wegner, D.M.; Broadbent, E. Does the Robot Have a Mind? Mind Perception and Attitudes towards Robots Predict Use of an Eldercare Robot. Int. J. Soc. Robot. 2014, 6, 17–32. [Google Scholar] [CrossRef]
  6. McGinn, C.; Sena, A.; Kelly, K. Controlling robots in the home: Factors that affect the performance of novice robot operators. Appl. Ergon. 2017, 65, 23–32. [Google Scholar] [CrossRef]
  7. Nass, C.; Steuer, J.; Tauber, E.R. Computers Are Social Actors: Conference Companion on Human Factors in Computing Systems-CHI’94; Association for Computing Machinery: New York, NY, USA, 1994. [Google Scholar]
  8. Beer, J.M.; Liles, K.R.; Wu, X.; Pakala, S. Affective Human—Robot Interaction. In Emotions and Affect in Human Factors and Human-Computer Interaction; Academic Press: Cambridge, MA, USA, 2017. [Google Scholar]
  9. Skjuve, M.; Følstad, A.; Fostervold, K.I.; Brandtzaeg, P.B. My chatbot companion-a study of human-chatbot relationships. Int. J. Hum. Comput. Stud. 2021, 149, 102601. [Google Scholar] [CrossRef]
  10. Chang, R.C.-S.; Lu, H.-P.; Yang, P. Stereotypes or golden rules? Exploring likable voice traits of social robots as active aging companions for tech-savvy baby boomers in Taiwan. Comput. Hum. Behav. 2018, 84, 194–210. [Google Scholar] [CrossRef]
  11. Cheng, Y.-W.; Sun, P.-C.; Chen, N.-S. The essential applications of educational robot: Requirement analysis from the perspectives of experts, researchers and instructors. Comput. Educ. 2018, 126, 399–416. [Google Scholar] [CrossRef]
  12. Stroessner, S.J.; Benitez, J. The Social Perception of Humanoid and Non-Humanoid Robots: Effects of Gendered and Machinelike Features. Int. J. Soc. Robot. 2019, 11, 305–315. [Google Scholar] [CrossRef]
  13. Tay, B.; Jung, Y.; Park, T. When stereotypes meet robots: The double-edge sword of robot gender and personality in human-robot interaction. Comput. Hum. Behav. 2014, 38, 75–84. [Google Scholar] [CrossRef]
  14. Aronson, E.; Wilson, T.D.; Akert, R.M. Social Psychology, 7th ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2010. [Google Scholar]
  15. Paetzel, M.; Perugia, G.; Castellano, G. The persistence of first impressions: The effect of repeated interactions on the perception of a social robot. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020. [Google Scholar]
  16. Fong, T.; Nourbakhsh, I. Socially interactive robots. Robot. Auton. Syst. 2003, 42, 139–141. [Google Scholar] [CrossRef]
  17. Powers, A.; Kiesler, S. The advisor robot: Tracing people’s mental model from a robot’s physical attributes. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, HRI 2006, Salt Lake City, UT, USA, 2–3 March 2006. [Google Scholar]
  18. Powers, A.; Kramer, A.; Lim, S.; Kuo, J.; Lee, S.-L.; Kiesler, S. Eliciting information from people with a gendered humanoid robot. In Proceedings of the IEEE International Workshop on Robot and Human Communication (ROMAN), Nashville, TN, USA, 13–15 August 2005. [Google Scholar]
  19. Chidambaram, V.; Chiang, Y.-H.; Mutlu, B. Designing persuasive robots: How robots might persuade people using vocal and nonverbal cues. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, Boston, MA, USA, 5–8 March 2012. [Google Scholar]
  20. Hirano, T.; Shiomi, M.; Iio, T.; Kimoto, M.; Tanev, I.; Shimohara, K.; Hagita, N. How Do Communication Cues Change Impressions of Human–Robot Touch Interaction? Int. J. Soc. Robot. 2018, 10, 21–31. [Google Scholar] [CrossRef]
  21. Morillo-Mendez, L.; Schrooten, M.G.S.; Loutfi, A.; Mozos, O.M. Age-related Differences in the Perception of Eye-gaze from a Social Robot. Soc. Robot. 2021, 350–361. [Google Scholar] [CrossRef]
  22. Dou, X.; Wu, C.-F.; Lin, K.-C.; Gan, S.; Tseng, T.-M. Effects of different types of social robot voices on affective evaluations in different application fields. Int. J. Soc. Robot. 2021, 13, 615–628. [Google Scholar] [CrossRef]
  23. Lee, S.-Y.; Lee, G.; Kim, S.; Lee, J. Expressing Personalities of Conversational Agents through Visual and Verbal Feedback. Electronics 2019, 8, 794. [Google Scholar] [CrossRef] [Green Version]
  24. Belin, P.; Bestelmeyer, P.E.G.; Latinus, M.; Watson, R. Understanding voice perception. Br. J. Psychol. 2011, 102, 711–725. [Google Scholar] [CrossRef]
  25. Niculescu, A.; van Dijk, B.; Nijholt, A.; Li, H.; See, S.L. Making Social Robots More Attractive: The Effects of Voice Pitch, Humor and Empathy. Int. J. Soc. Robot. 2013, 5, 171–191. [Google Scholar] [CrossRef] [Green Version]
  26. Berry, D.S. Vocal types and stereotypes: Joint effects of vocal attractiveness and vocal maturity on person perception. J. Nonverbal Behav. 1992, 16, 41–54. [Google Scholar] [CrossRef]
  27. Vukovic, J.; Feinberg, D.; Jones, B.; DeBruine, L.; Welling, L.; Little, A.; Smith, F. Self-rated attractiveness predicts individual differences in women’s preferences for masculine men’s voices. Personal. Individ. Differ. 2008, 45, 451–456. [Google Scholar] [CrossRef]
  28. Scherer, K.R. Vocal communication of emotion: A review of research paradigms. Speech Commun. 2003, 40, 227–256. [Google Scholar] [CrossRef]
  29. Riordan, C.A.; Quigley-Fernandez, B.; Tedeschi, J.T. Some variables affecting changes in interpersonal attraction. J. Exp. Soc. Psychol. 1982, 18, 358–374. [Google Scholar] [CrossRef]
  30. Zuckerman, M.; Driver, R.E. What sounds beautiful is good: The vocal attractiveness stereotype. J. Nonverbal Behav. 1989, 13, 67–82. [Google Scholar] [CrossRef]
  31. Baraka, K.; Rosenthal, S.; Veloso, M. Enhancing Human Understanding of a Mobile Robot’s State and Actions Using Expressive Lights. IEEE Xplore 2018, 652–657. [Google Scholar] [CrossRef]
  32. Baraka, K.; Veloso, M.M. Mobile Service Robot State Revealing Through Expressive Lights: Formalism, Design, and Evaluation. Int. J. Soc. Robot. 2018, 10, 65–92. [Google Scholar] [CrossRef]
  33. Baraka, K. Effective Non-Verbal Communication for Mobile Robots Using Expressive Lights. Ph.D. Thesis, Carnegie Mellon University, Pittsburgh, PA, USA, 2016. [Google Scholar]
  34. Song, S.; Yamada, S. Designing LED lights for a robot to communicate gaze. Adv. Robot. 2019, 33, 360–368. [Google Scholar] [CrossRef]
  35. Kobayashi, K.; Funakoshi, K.; Yamada, S.; Nakano, M.; Komatsu, T.; Saito, Y. Blinking light patterns as artificial subtle expressions in human-robot speech interaction. In Proceedings of the 20th IEEE International Symposium on Robot and Human Interactive Communication, Atlanta, GA, USA, 31 July–3 August 2011. [Google Scholar]
  36. Barbato, M.; Liu, L.; Cadenhead, K.S.; Cannon, T.D.; Cornblatt, B.A.; McGlashan, T.H.; Perkins, D.O.; Seidman, L.J.; Tsuang, M.T.; Walker, E.F.; et al. Theory of mind, emotion recognition and social perception in individuals at clinical high risk for psychosis: Findings from the NAPLS-2 cohort. Schizophr. Res. Cogn. 2015, 2, 133–139. [Google Scholar] [CrossRef] [Green Version]
  37. Ekman, P. What emotion categories or dimensions can observers judge from facial behavior? Emot. Hum. Face 1982, 39–55. [Google Scholar]
  38. Fromme, D.K.; O’Brien, C.S. A dimensional approach to the circular ordering of the emotions. Motiv. Emot. 1982, 6, 337–363. [Google Scholar] [CrossRef]
  39. Plutchik, R. A Psychoevolutionary Theory of Emotions; Sage Publications: Thousand Oaks, CA, USA, 1982. [Google Scholar]
  40. Terada, K.; Yamauchi, A.; Ito, A. Artificial emotion expression for a robot by dynamic color change. In Proceedings of the IEEE International Workshop on Robot and Human Communication (ROMAN), Paris, France, 9–13 September 2012. [Google Scholar]
  41. Tijssen, I.; Zandstra, E.H.; de Graaf, C.; Jager, G. Why a ‘light’product package should not be light blue: Effects of package colour on perceived healthiness and attractiveness of sugar-and fat-reduced products. Food Qual. Prefer. 2017, 59, 46–58. [Google Scholar] [CrossRef]
  42. Hemphill, M. A note on adults’ color–emotion associations. J. Genet. Psychol. 1996, 157, 275–280. [Google Scholar] [CrossRef]
  43. Fiske, S.T.; Cuddy, A.J.C.; Glick, P. Universal dimensions of social cognition: Warmth and competence. Trends Cogn. Sci. 2007, 11, 77–83. [Google Scholar] [CrossRef]
  44. Dou, X.; Wu, C.-F.; Niu, J.; Pan, K.-R. Effect of Voice Type and Head-Light Color in Social Robots for Different Applications. Int. J. Soc. Robot. 2021, 14, 229–244. [Google Scholar] [CrossRef]
  45. Dou, X.; Wu, C.-F.; Wang, X.; Niu, J. User expectations of social robots in different applications: An online user study. In Proceedings of the HCI International 2020-Late Breaking Papers: Multimodality and Intelligence, Copenhagen, Denmark, 19–24 July 2020. [Google Scholar]
  46. Dou, X. A Study on the Application Model of Social Robot’s Dialogue Style and Speech Parameters in Different Industries. Ph.D. Thesis, Tatung University, Taipei, Taiwan, 2020. [Google Scholar]
  47. Broekens, J.; Heerink, M.; Rosendal, H. Assistive social robots in elderly care: A review. Gerontechnology 2009, 8, 94–103. [Google Scholar] [CrossRef] [Green Version]
  48. Wang, X.; Dou, X. Designing a More Inclusive Healthcare Robot: The Relationship Between Healthcare Robot Tasks and User Capability. In Proceedings of the HCI International 2022—Late Breaking Papers: HCI for Health, Well-being, Universal Access and Healthy Aging, Virtual Event, 26 June–1 July 2022. [Google Scholar]
  49. Walters, M.L.; Syrdal, D.S.; Koay, K.L.; Dautenhahn, K.; Boekhorst, R.T. Human Approach Distances to a Mechanical-Looking Robot with Different Robot Voice Styles. In Proceedings of the 17th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN, Munich, Germany, 1–3 August 2008. [Google Scholar]
  50. Niu, J.; Wu, C.-F.; Dou, X.; Lin, K.-C. Designing Gestures of Robots in Specific Fields for Different Perceived Personality Traits. Front. Psychol 2022, 13. [Google Scholar] [CrossRef]
  51. Aaron, A.; Eide, E.; Pitrelli, J.F. Conversational computers. Sci. Am. 2005, 292, 64–69. [Google Scholar] [CrossRef]
  52. Liu, X.; Xu, Y. What makes a female voice attractive? ICPhS 2011, v, 17–21. [Google Scholar]
  53. Boersma, P.J.G.I. Praat, a system for doing phonetics by computer. Speech Lang. Pathol. 2002, 5, 341–345. [Google Scholar]
  54. Laver, J.; John, L. Principles of Phonetics; Cambridge University Press: Cambridge, UK, 1994. [Google Scholar]
  55. Kent, R. Anatomical and neuromuscular maturation of the speech mechanism: Evidence from acoustic studies. J. Speech Hear. Res. 1976, 19, 421–447. [Google Scholar] [CrossRef]
  56. Sheppard, W.C.; Lane, H.L. Development of the prosodic features of infant vocalizing. J. Speech Hear. Res. 1968, 11, 94–108. [Google Scholar] [CrossRef]
  57. Kobayashi, K.; Funakoshi, K.; Yamada, S.; Nakano, M.; Komatsu, T.; Saito, Y. Impressions made by blinking light used to create artificial subtle expressions and by robot appearance in human-robot speech interaction. In Proceedings of the IEEE International Workshop on Robot and Human Communication (ROMAN), Paris, France, 9–13 September 2012. [Google Scholar]
  58. Holtzschue, L. Understanding Color: An Introduction for Designers; John Wiley and Sons: Hoboken, NJ, USA, 2016. [Google Scholar]
  59. Bellizzi, J.A.; Hite, R.E. Environmental color, consumer feelings, and purchase likelihood. Psychol. Mark. 1992, 9, 347–363. [Google Scholar] [CrossRef]
  60. Narendran, N.; Deng, L. Color rendering properties of LED light sources. In Solid State Lighting II; SPIE: Bellingham, WA, USA, 2002; Volume 4776, pp. 61–67. [Google Scholar]
  61. Lee, H. Effects of Light-Emitting Diode (LED) Lighting Color on Human Emotion, Behavior, and Spatial Impression. Ph.D. Thesis, Michigan State University, East Lansing, MI, USA, 2019. [Google Scholar]
  62. Dautenhahn, K. Methodology & themes of human-robot interaction: A growing research field. Int. J. Adv. Robot. Syst. 2007, 4, 15. [Google Scholar] [CrossRef]
  63. Brule, R.V.D.; Dotsch, R.; Bijlstra, G.; Wigboldus, D.H.J.; Haselager, P. Do Robot Performance and Behavioral Style affect Human Trust?: A Multi-Method Approach. Int. J. Adv. Robot. Syst. 2014, 6, 519–531. [Google Scholar] [CrossRef]
  64. Bartneck, C.; Kulić, D.; Croft, E.; Zoghbi, S. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int. J. Soc. Robot. 2009, 1, 71–81. [Google Scholar] [CrossRef]
  65. Carpinella, C.M.; Wyman, A.B.; Perez, M.A.; Stroessner, S.J. The Robotic Social Attributes Scale (RoSAS). In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction-HRI, ‘17, Vienna, Austria, 6–9 March 2017; pp. 254–262. [Google Scholar] [CrossRef]
  66. Ko, S.J.; Judd, C.M.; Stapel, D.A. Stereotyping based on voice in the presence of individuating information: Vocal femininity affects perceived competence but not warmth. Personal. Soc. Psychol. Bull. 2009, 35, 198–211. [Google Scholar] [CrossRef] [Green Version]
  67. Niculescu, A.; Dijk, B.V.; Nijholt, A.; See, S.L. The influence of voice pitch on the evaluation of a social robot receptionist. In Proceedings of the 2011 International Conference on User Science and Engineering (i-USEr), Selangor, Malaysia, 29 November–2 December 2011. [Google Scholar]
  68. Naz, K.; Helen, H. Color-emotion associations: Past experience and personal preference. Psychology 2004, 5, 31. [Google Scholar]
  69. Ho, C.-C.; MacDorman, K.F. Measuring the Uncanny Valley Effect: Refinements to Indices for Perceived Humanness, Attractiveness, and Eeriness. Int. J. Soc. Robot. 2017, 9, 129–139. [Google Scholar] [CrossRef] [Green Version]
  70. Walters, M.L.; Syrdal, D.S.; Dautenhahn, K.; Boekhorst, R.T.; Koay, K.L. Avoiding the uncanny valley: Robot appearance, personality and consistency of behavior in an attention-seeking home scenario for a robot companion. Auton. Robot. 2008, 24, 159–178. [Google Scholar] [CrossRef] [Green Version]
  71. Prakash, A.; Rogers, W.A. Why some humanoid faces are perceived more positively than others: Effects of human-likeness and task. Int. J. Soc. Robot. 2015, 7, 309–331. [Google Scholar] [CrossRef]
  72. Paetzel, M. The influence of appearance and interaction strategy of a social robot on the feeling of uncanniness in humans. In Proceedings of the 18th ACM International Conference on Multimodal Interaction, Tokyo, Japan, 12–16 November 2016. [Google Scholar]
  73. Mitchell, W.J.; Szerszen, K.A.; Lu, A.S.; Schermerhorn, P.W.; Scheutz, M.; MacDorman, K.F. A mismatch in the human realism of face and voice produces an uncanny valley. i-Perception 2011, 2, 10–12. [Google Scholar] [CrossRef]
  74. Paetzel, M.; Peters, C.; Nyström, I.; Castellano, G. Effects of multimodal cues on children’s perception of uncanniness in a social robot. In Proceedings of the 18th ACM International Conference on Multimodal Interaction, Tokyo, Japan, 12–16 November 2016. [Google Scholar]
  75. Ko, Y.H. The effects of luminance contrast, colour combinations, font, and search time on brand icon legibility. Appl. Ergon. 2017, 65, 33–40. [Google Scholar] [CrossRef]
  76. Dou, X.; Wu, C.F.; Lin, K.-C.; Tseng, T.M. The Effects of Robot Voice and Gesture Types on the Perceived Robot Personalities. In Proceedings of the International Conference on Human-Computer Interaction, Orlando, FL, USA, 26–31 July 2019. [Google Scholar]
  77. Siegel, M.; Breazeal, C.; Norton, M.I. Persuasive robotics: The influence of robot gender on human behavior. In Proceedings of the IEEE International Workshop on Intelligent Robots and Systems (IROS), St Louis, MO, USA, 10–15 October 2009. [Google Scholar]
  78. Piçarra, N.; Giger, J.C. Predicting intention to work with social robots at anticipation stage: Assessing the role of behavioral desire and anticipated emotions. Comput. Hum. Behav. 2018, 86, 129–146. [Google Scholar] [CrossRef]
  79. Singh, S.; Chaudhary, D.; Gupta, A.D.; Lohani, B.P.; Kushwaha, P.K.; Bibhu, V. Artificial Intelligence, Cognitive Robotics and Nature of Consciousness. In Proceedings of the 3rd International Conference on Intelligent Engineering and Management (ICIEM), London, UK, 27–29 April 2022. [Google Scholar]
  80. Weller, C. Meet the first-ever robot citizen-a humanoid named Sophia that once said it would ‘destroy humans’. Bus. Insid. 2017, 27. [Google Scholar]
Figure 1. User expectation coordinate system for robot applications.
Figure 2. Lighting flash pattern when the robot was listening to a person.
Figure 3. Video clips.
Figure 4. Effects of the interaction between robot voice type and lighting color on the participants’ (a) feeling of warmth, (b) feeling of competence, (c) feeling of discomfort, and (d) overall acceptance of the robot.
Table 1. Mean fundamental frequencies of voice stimuli.

Voice Type         | Mean Fundamental Frequency (Hz) | Normal Range (Hz)
Adult male voice   | 131.35                          | 50–250 [54]
Adult female voice | 231.54                          | 120–480 [54]
Child voice        | 339.23                          | 200–451 [55,56]
Table 2. Values of lighting colors.

Lighting Color | Perceived Color | Wavelength or Color Temperature
Warm           | Yellow          | 610–615 nm
Cool           | Blue            | 460–465 nm
White          | White           | 6500–7500 K

Note: nm = nanometer (wavelength); K = Kelvin (color temperature).
Table 3. Items of the RoSAS.

Construct  | Items                                                                | Measurement
Warmth     | Happy, Feeling, Social, Organic, Compassionate, Emotional            | 5-point Likert scale
Competence | Capable, Responsive, Interactive, Reliable, Competent, Knowledgeable | 5-point Likert scale
Discomfort | Scary, Strange, Awkward, Dangerous, Awful, Aggressive                | 5-point Likert scale
Table 4. Results of the manipulation check.

Factor         | Level  | M    | SD   | F         | p
Voice type     | Female | 4.92 | 0.29 | 172.89 ** | <0.001
               | Male   | 5.00 | 0.00 |           |
               | Child  | 4.83 | 0.39 |           |
Lighting color | Warm   | 4.67 | 0.49 | 38.89 **  | <0.001
               | Cool   | 4.83 | 0.39 |           |
               | White  | 3.83 | 1.03 |           |

Note: ** p < 0.001.
Table 5. Results of the MANOVA.

Means (SD):
Measure            | Male voice (M) | Female voice (F) | Child voice (C) | White light (WH) | Warm light (WA) | Cool light (CO)
Warmth             | 2.85 (0.62)    | 3.05 (0.72)      | 3.30 (0.71)     | 3.09 (0.77)      | 3.15 (0.66)     | 2.96 (0.68)
Competence         | 3.53 (0.63)    | 3.40 (0.63)      | 3.31 (0.70)     | 3.42 (0.68)      | 3.13 (0.66)     | 3.39 (0.73)
Discomfort         | 2.26 (0.95)    | 2.13 (0.84)      | 1.97 (0.72)     | 1.81 (0.67)      | 2.17 (0.81)     | 2.38 (0.94)
Overall acceptance | 3.12 (1.02)    | 3.25 (0.93)      | 3.06 (1.13)     | 3.58 (0.96)      | 2.84 (0.96)     | 3.01 (1.03)

F values, effect sizes (ηp²), and comparisons:
Measure            | Voice (V)                        | Light (L)                          | V × L
Warmth             | 20.43 *, ηp² = 0.07, C > F > M   | 3.68 *, ηp² = 0.01, WA, WH > CO    | 1.19, ηp² = 0.01
Competence         | 31.68 **, ηp² = 0.10, M > F > C  | 11.42 **, ηp² = 0.04, WH, CO > WA  | 1.97, ηp² = 0.01
Discomfort         | 6.10 *, ηp² = 0.02, M, F > C     | 23.51 **, ηp² = 0.08, CO > WA > WH | 3.22 *, ηp² = 0.02; M: WH > WA, CO; F: WH > WA > CO; C: WH > WA, CO
Overall acceptance | 1.80, ηp² = 0.01                 | 28.56 **, ηp² = 0.09, WH > WA, CO  | 0.75, ηp² = 0.01

Note: M = male voice; F = female voice; C = child voice; WH = white lighting; WA = warm lighting; CO = cool lighting. * p < 0.05, ** p < 0.001.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
