Electronics 2019, 8(7), 794;

Expressing Personalities of Conversational Agents through Visual and Verbal Feedback
Department of Communication, Seoul National University, Seoul 08826, Korea
Author to whom correspondence should be addressed.
Received: 3 June 2019 / Accepted: 6 July 2019 / Published: 16 July 2019


As the use of conversational agents increases, their affective and social abilities become as important as their functional abilities. Agents that lack affective abilities could frustrate users during interaction. This study applied personality to implement natural feedback for conversational agents, drawing on the concept of affective computing. Two types of feedback were used to express conversational agents’ personality: (1) visual feedback and (2) verbal cues. For visual feedback, participants (N = 45) watched visual feedback with different colors and motions. For verbal cues, participants (N = 60) heard agents’ voices under different conditions with different scripts. The results indicated that the motions of visual feedback were more influential than their colors. Fast motions could express distinct and positive personalities. Different verbal cues were perceived as different personalities. The perceptions of personalities differed according to vocal gender. This study provides design implications for personality expressions applicable to diverse interfaces.
personality expression; conversational agent; visual feedback; verbal cues; artificial intelligence speakers

1. Introduction

Future artificial intelligence (AI) assistants must be more than just question-and-answer machines. Computers need to express and recognize affect and emotions because affective computers can help reduce user frustration during interactions and enable smooth communication between users and computers [1]. Computers with unnatural expressions and reactions could hinder human–computer interaction (HCI) and frustrate users. For instance, when users expect their conversational agents to deliver factual information without intense emotions, but the agents speak with intense emotions, users will become frustrated. Therefore, feedback that matches users’ expectations of natural human–computer interaction needs to be designed.
Most studies in the field of affective computing have applied emotion to convey an agent’s internal states naturally. However, this research applied the concept of personality to implement more natural feedback for a conversational agent for the following reasons: (1) The concept of personality can allow individualized and complex features to be conveyed simultaneously; (2) it is possible to predict users’ perceptions by giving a personality to conversational agents; and (3) with personality, agents can implement consistent patterns of reactions rather than simple and immediate reactions.
Personalities are not innate for computer systems or interfaces; thus, humans must program them. In addition, they cannot be defined in a single sentence; instead, they are combinations of abilities, beliefs, preferences, dispositions, behaviors, and temperamental features with diverse behavioral and emotional attributes [2,3]. Therefore, the study posits the following general research question: RQ. How can conversational agents effectively express personalities?
Personalities can be expressed with two factors: (1) behavioral features (gestures, movements) and (2) verbal traits (voice, speech style). People with different personalities demonstrate different amplitudes and speeds of gestures and movements. Extroverts demonstrate faster, wider, and broader movements than introverts [4]. In addition, extroverts tend to demonstrate more reactive and faster movements and body gestures than introverts [5]. Personality and verbal traits are also highly interrelated. Extroverts tend to express more emotionality with positive emotions and use fewer formal expressions with agreement and gratitude [6]. Extroverts also demonstrate shorter silences and use more positive words and informal expressions than introverts. Introverts use more abstract words and formal words than extroverts [2,7].
Gestures are difficult to implement with current conversational agents because most of them are designed in the form of AI speakers. Existing conversational agents in the current market, including Amazon Alexa and Google Home, are not able to implement gesture movements. Instead of using gestures, AI speakers deliver simple visual feedback through a smart display in response to voice commands. Considering the current form of conversational agents, visual feedback was chosen as the personality expression element. As people with different personalities demonstrate different gesture speeds, different lighting speeds could also be perceived as different personalities. Quick lighting is perceived as more active than slow lighting; therefore, the study posits the following research question: RQ 1. Can different visual feedback be perceived as different personalities?
Most current AI speakers use a consistent voice with the same speech style. As extroverts and introverts demonstrate different verbal traits, different verbal traits are perceived as different personalities. Therefore, the study posits the second research question: RQ 2. Can different verbal cues be perceived as different personalities?
Unlike previous studies, the current study applied a wider range of personalities rather than focusing on expressing only two contrasting personalities, such as extroversion and introversion. In addition, personality was expressed with diverse factors, including visual feedback with different colors and motions and five verbal cues, rather than focusing on a single element.

2. Related Work

2.1. Expressing Internal Machine States in HCI

People treat computers as social actors [8]. Considering the uncertainty of systems, it is difficult to set absolute standards for expressing a machine’s psychological and emotional states. In the field of HCI, machines’ internal states can be represented by diverse elements.
Verbal cues and text are the most direct ways of representing machines’ and computers’ internal states. Voice-related factors, such as volume, frequency range, and speech rate, are associated with the internal states of machines and computers [9,10]. Non-verbal behaviors, such as facial expressions, interpersonal closeness, gestures [9], and gaze [11,12], of embodied conversational agents were adjusted to demonstrate their social competencies in conversations. In particular, the gaze of conversational agents was used to deliver back-channel feedback in turn-taking situations [13]. Diverse colors and lights were also used to express social robots’ emotions [14,15,16], and a simple blinking light was used as a subtle expression to smooth human–robot speech interactions [17].
A large number of studies have focused on emotion [18]. This study argues that, for long-term HCI, the interface’s personality plays a more crucial role than simple emotional reactions. The current approach to personality expression was adapted from previous affective computing [19] and expressive output modality studies [15,16]. For instance, similar to prior studies, this study adopted cues to express conversational agents’ internal states, such as personality. Affective computers can both assist humans more effectively and enhance their decision-making skills.

2.2. Personality Expressions of Computers and Interfaces

Personality can be defined as consistent patterns of feeling, thinking, and behavior [3]. In this study, personality was chosen to express the internal states of conversational agents for the following reasons: First, personality can easily express subtle and complex dimensions in individual differences. Second, personality can provide a wide range of predictions concerning how people may respond to various personalities [8]. In other words, people unconsciously use personality as a tool for assessing social partners [20]. In addition, much literature supports that individuals assess computers according to personality [8,21,22]. Lastly, personality is essential [23,24] in that it can help agents build natural relationships with humans.
Personality can be expressed and perceived in diverse ways because diverse viewpoints exist regarding personality. Personality could be seen as only measurable consistencies in behavior, or it could be seen as perceived consistencies regardless of its actual measurable consistencies. Among many diverse perspectives, the current study argues that personality is expressed and perceived through the interactions between the expectations of observers and the behaviors of the observed [20].
Most prior studies on a machine’s expression of personality have focused on expressing two contrasting personalities, such as extroversion/introversion or dominance/submissiveness. Comparatively few studies of personality expression have applied diverse types of personalities, such as openness and conscientiousness. Among the diverse categorizations of personalities, the Big Five personality traits were applied because they have demonstrated high reliability and validity in psychology and communication studies [25,26,27].

2.3. Combination of Diverse Cues

The two representative environmental factors in affective computing are (1) visual and behavioral elements (including color, light, video, and animations) and (2) audio stimuli (including noise, sound, and music). The current study uses both visual and verbal feedback.
Previous studies effectively examined personalities using diverse interfaces and contexts. Behavioral and linguistic factors and their correlations with personality were observed in virtual environments [28]. A social robot using different postures, gestures, and eye gazes depending on its personality and that of its user was also studied [9,29]. Linguistic cues and their correlations with personality [8] were also studied. LED lights [30] were used for the internal expressions of interfaces.
Personality and verbal traits, such as voice pitch, speech speed, wordiness, questioning, and vocal emotionality, are highly related. Pitch and speech speed are two key factors of vocal communication [31] for delivering individual features. Vocal emotionality, which was expressed as the emotional reactions and feedback of the conversational agents in this study, was selected because the agent’s generation of the desired emotional state and personality is important in users’ perceptions of the agent [1]. The questioning element was chosen because the use of questions shapes the perception of voices [31]. Wordiness was chosen with reference to the natural language generator named PERSONAGE [9], which uses verbosity as an element to express personality dimensions through language. Vocal gender could be an element of a verbal cue because it is meaningful for personal attributes [6].
Based on previous studies, the current study argues that the personality of interfaces must be expressed using combinations of diverse cues. Therefore, both visual feedback and paralinguistic and verbal cues were adopted to express the conversational agent’s personality.

2.4. Big Five Personality

A number of researchers have examined and established human personality measurements. The Big Five model is a widely used taxonomy that classifies human personality into five dimensions [32]. Researchers agree that there are five robust factors of personality that can serve as a meaningful taxonomy for classifying personality attributes [33]. The first dimension is extraversion/introversion, which is associated with being sociable, talkative, and active. The second dimension is related to emotional stability and neuroticism; it is associated with being anxious, depressed, angry, and insecure. The third dimension is agreeableness (likeability), which is associated with being trusting, good-natured, cooperative, and forgiving. The fourth dimension is conscientiousness, which is related to being careful, thorough, hardworking, and organized. The last dimension is openness to experience, which is related to being imaginative, cultured, intelligent, and artistically sensitive.
Diverse previous studies have applied the Big Five personality taxonomies. Researchers mapped human verbal traits to corresponding nonverbal and verbal behaviors of robots based on the extraversion–introversion personality dimension, which is one element of the Big Five personalities [9]. Another study investigated the role of a robot’s personality in the hands-off therapy process focusing only on the extraversion and introversion personality dimension [34]. A previous study examined the role of task context in perceived social robot personality applying the Big Five personality traits [10,35,36]. They found that extraverted people preferred robots with a similar personality when the robot was a tour guide [36]. Most previous studies that applied the Big Five personality traits focused on the extraversion/introversion dimension. In contrast to previous studies, this study adopted more diverse dimensions of the Big Five personality model for the conversational agent interface.

3. Methods

3.1. Overall Experimental Procedure

A total of 105 participants were recruited for the entire study (Table 1). All participants were recruited from a local community group located in Seoul, South Korea. They were all college students with a similar educational background, and they all had an intermediate level of English. In addition, they had similar residences and similar economic backgrounds.
Participants entered the room and received oral instructions about the study. Divided into groups, they were exposed to different experimental materials (visual feedback or verbal cues) because this research aims to express conversational agents’ personalities both with visual feedback and verbal cues.
A total of 45 participants (20 males, 25 females; mean age = 25, standard deviation of age = 3.6) were recruited for visual feedback. Divided into eight groups of five or six, participants each watched one of the eight visual feedback designs (see Section 3.2). A total of 60 participants (34 males, 26 females; mean age = 23.5, standard deviation of age = 2.5) heard 24 different verbal cues. Divided into five groups of 12, participants each heard one of the five verbal cue conditions (see Section 3.3).
After being exposed to experimental materials, they answered survey questions asking which personalities best matched the shown visual feedback and verbal cue voices. The survey questions were the short version of the Big Five personality questionnaires [37] (Table A1). The Big Five personality traits were applied because they have demonstrated high reliability and validity in psychology and communication studies [24,25,26,27]. In total, 10 questions were asked of participants in the survey.
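As an illustration, short Big Five inventories of this kind are typically scored by averaging a regular and a reverse-keyed item per trait. The sketch below assumes a hypothetical item-to-trait mapping and 7-point responses; the actual items and keying are those of the short questionnaire in [37] (Table A1):

```python
# Hypothetical scoring sketch for a 10-item short Big Five inventory.
# Assumptions (illustrative, not the study's actual keying): items 2i and
# 2i+1 map to trait i, the second item of each pair is reverse-keyed, and
# responses use a 7-point scale (reverse score = 8 - response).
TRAITS = ["extraversion", "agreeableness", "conscientiousness",
          "neuroticism", "openness"]

def score_big_five(responses):
    """responses: list of 10 ratings (1-7) -> {trait: mean score}."""
    scores = {}
    for i, trait in enumerate(TRAITS):
        regular = responses[2 * i]
        reverse_keyed = 8 - responses[2 * i + 1]
        scores[trait] = (regular + reverse_keyed) / 2
    return scores
```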

3.2. Experimental Materials: Conversational Agent

A telepresence robot was used during the entire study because it is suitable for demonstrating both visual feedback and verbal cues (Figure 1). The visual feedback was shown to participants on the telepresence robot’s screen, and the verbal cues were played through the robot’s speaker. The height of the telepresence robot is adjustable between 119.36 cm and 149.86 cm; for the study, it was fixed at 125 cm. The robot weighs 6.35 kg, or 7 kg including the iPad.

3.3. Experimental Materials: Visual Feedback

The main objects of the visual feedback were abstract moving circles (Figure 2). The background of the visual feedback was white blank space. The design of the visual feedback was motivated by diverse traits of the Big Five personalities. Considering that the Big Five personality traits are measured with bipolar adjectives [32], colors with bipolar traits were chosen based on previous studies about colors. Visual feedback was designed with combinations of bipolar or consistent colors. When using multiple colors, colors were only partially blended and overlapped, not entirely mixed. The partial overlapping of colors was motivated by the concept of continuity, since each personality is defined as continuous traits between bipolar adjectives.
Among eight visual feedbacks, four visual feedbacks were designed with a single color (Visual feedbacks 2 and 5–7) and the other four visual feedbacks (1, 3, 4, and 8) were designed with combinations of two or three colors (Table 2). Both red and yellow reflect exciting, stimulating, and enlivening properties [38,39]. Considering this, visual feedback 1 was designed with combinations of red and yellow expressing an active personality. Visual feedback 6 used only red to observe the single impact. In contrast to visual feedback 1, visual feedback 2 was designed with a single purple color, which is related to vigorous and unhappy properties [38,40]. Visual feedback 3 combined red, yellow, and dark blue, which are colors with contrasting traits. Dark blue is related to soothing and tender properties [38], but red and yellow reflect active properties [38,39]. Visual feedback 4 was designed with green and blue, the opposite of visual feedback 3. In contrast with active colors, both green and blue are related to calm, peaceful, and comfortable properties [41]. Visual feedback 5 was designed with a combination of black and white. Black is related to melancholy and unhappy properties [42], while white is relaxing [15].
Motion was chosen as a design element of visual feedback because the five personalities have distinctly different levels of expressivity, proactivity, and energy [43]. The radius of the circles’ movements was about 4 cm, and the circles moved clockwise and counterclockwise. All visual feedback moved with the same sequence (Figure 3). The duration of an object’s movement was 30 seconds. The motion size was also adjusted in parallel with the motion speed. The motion speed was adjusted using the Adobe Premiere Pro and After Effects programs, with three levels: fast (0.35), moderate (0.25), and slow (0.15). Visual feedback with fast motion demonstrated wider movements. Motions were paired with colors of contrasting traits.

3.4. Experimental Materials: Verbal Cues

Each script was written based on small talk from the chatbot Mitsuku [2] and current AI speakers such as Google Home and Amazon Alexa. Content that is commonly provided by current AI speakers and chatbots as the default setting was used, to observe the impacts of verbal cues while avoiding a heavy influence from content. Two Oddcast TTS programs [3] were used for vocal manipulations. Different voices with different scripts were recorded using the PC’s internal recorder. Each verbal cue was manipulated as follows (Table 3).
The pitch level was adjusted using the Audacity program and Oddcast TTS. Using the Amazon Polly female and male voices as the default voices, the vocal pitch level was adjusted. The effects of vocal pitch on the evaluation of a social robot were examined [44]. Exuberant and calm voices were manipulated with different pitch levels. In the case of the male voice, the high pitch was 125.5 Hz, the moderate pitch was 110 Hz, and the low pitch was 98 Hz. In the case of the female voice, the high pitch was 226 Hz, the moderate pitch was 213 Hz, and the low pitch was 200 Hz. The pitch level was adjusted based on previous studies about pitch and personality [31,44]. Script 1 asked a social robot to introduce itself.
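Pitch-shifting tools such as Audacity can also specify the change in semitones rather than a target frequency; the offset follows from the standard frequency-ratio formula. A small sketch (the Hz values are from the text; the function name is ours):

```python
import math

def semitone_shift(base_hz, target_hz):
    """Semitone offset that moves a voice from base_hz to target_hz."""
    return 12 * math.log2(target_hz / base_hz)

# e.g., male default 110 Hz -> high pitch 125.5 Hz is roughly +2.3
# semitones, and 110 Hz -> low pitch 98 Hz is roughly -2 semitones.
```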
Wordiness was manipulated with two levels: high and low. The wordiness was adjusted with the length of the conversational agent’s answers. High wordiness was manipulated as more than 20 words, and the low wordiness condition was manipulated as less than 20 words. Script 2 asked for the name of the robot.
The speed was manipulated with three levels using the Audacity program. For female voices, the high speed was 18.5 times and the low speed was −11 times. For male voices, the high speed was 20 times and the low speed was −11 times. Voices with a moderate speed were not adjusted by the program. Script 3 asked the robot about the weather conditions.
Vocal emotionality was manipulated with two levels. For voices with low emotionality, a default voice was used. For voices with high emotionality, emotion functions were applied through the Oddcast program. Emotional expressions such as "Aha" and "Wow" were included. The Oddcast program without emotion was applied for voices without emotionality. Script 4 gave directions. The participant asked the agent for directions from their home to the subway station.
The questioning condition was manipulated with two levels: high and low. The high-level condition was expressed in the format of interrogative sentences, and the low-level condition in the format of declarative sentences.

4. Results

4.1. Visual Feedback

One-way ANOVAs and Tukey post-hoc tests were conducted. The detailed statistical results of the one-way ANOVAs are as follows (Table 4). For agreeableness, visual feedback 4 demonstrated the highest mean value (M = 6.16; F(4, 30) = 14.37, p < 0.001). For conscientiousness, visual feedback 2 demonstrated the highest mean value (M = 5.83; F(4, 30) = 6.12, p < 0.01). For extraversion, visual feedback 8 demonstrated the highest mean value (M = 6.6; F(4, 30) = 11.01, p < 0.001). For neuroticism, visual feedback 2 demonstrated the highest value (M = 6.167; F(4, 30) = 6.167, p < 0.0001). For openness, visual feedback 4 demonstrated the highest mean value (M = 3.66; F(4, 30) = 3.24, p < 0.05).
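For readers reproducing this kind of analysis, the one-way ANOVA F statistic can be computed directly from per-group ratings. A minimal pure-Python sketch with illustrative data (not the study’s raw responses):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of rating groups."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    group_means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative ratings for three hypothetical groups
print(round(one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]]), 6))  # 13.0
```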
Based on the results of visual feedback (Table 4), we analyzed personality perceptions depending on the motions and colors of visual feedback. Our initial research purpose was to find personality perceptions depending on visual feedback, not the separate elements of visual feedback (colors, motions). Therefore, in the experiment, we did not separately test these two elements as individual factors. However, in the process of analysis, we discovered that levels of motions influenced the personality perceptions.
Depending on the levels of motions, eight visual feedbacks could be categorized into three groups. Visual feedbacks 3 and 8 were designed with fast motions. Visual feedbacks 1, 4, and 6 were designed with moderate motions. Visual feedbacks 2 and 5 were designed with slow motions (Table 2). We calculated the average mean values of conversational agents’ perceived personalities for each personality depending on the levels of motions (Table 5).
Visual feedback with faster motions was more likely to be perceived as active and likeable personalities, such as extraversion and agreeableness. In the case of extraversion, fast motion showed the highest value (M = 6.25) compared with moderate (M = 3.31) and slow motions (M = 3.25). Agreeableness demonstrated a similar tendency in that faster motions showed higher values. However, slow motions were more likely to be perceived as negative and anxious personalities, including conscientiousness and neuroticism (Figure 4).
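The aggregation by motion level can be sketched as a simple grouping. The per-feedback input means below are hypothetical values, chosen only so that the extraversion group averages match the reported ones (fast 6.25, moderate 3.31, slow 3.25); feedback 7’s motion group is not listed in the text and is omitted:

```python
# Motion grouping as described in the text (feedback 7's group is
# not listed and is omitted from this sketch).
MOTION_GROUP = {3: "fast", 8: "fast",
                1: "moderate", 4: "moderate", 6: "moderate",
                2: "slow", 5: "slow"}

def mean_by_motion(ratings):
    """ratings: {feedback_id: mean rating} -> {motion level: average}."""
    sums, counts = {}, {}
    for fid, rating in ratings.items():
        level = MOTION_GROUP[fid]
        sums[level] = sums.get(level, 0.0) + rating
        counts[level] = counts.get(level, 0) + 1
    return {level: sums[level] / counts[level] for level in sums}

# Hypothetical per-feedback extraversion means
extraversion = {3: 6.0, 8: 6.5, 1: 3.0, 4: 3.5, 6: 3.43, 2: 3.2, 5: 3.3}
```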

4.2. Verbal Cues

To analyze the results of study 2, two-way ANOVAs and Tukey post-hoc tests were conducted. The female voice with slow speed (M = 5.87, p < 0.05) and the voice with low emotionality (M = 5.20, p < 0.005) were more likely to be perceived as conscientious than the other conditions (Table A2). The male voice that asked many questions (M = 4.62, p < 0.05), the voice with high emotionality (M = 6.29, p < 0.001), and the voice speaking fast (M = 4.13, p < 0.01) were most often perceived as extraverted (Table A3). The female voice with high emotionality (M = 4.95, p < 0.05) was most often perceived as open (Table A4). The male voice speaking fast (M = 3.50, p < 0.01) and with low wordiness (M = 3.33, p < 0.001) was most often perceived as neurotic (Table A5). Agreeableness was excluded from the table because no verbal cue condition showed a significant effect (Table A6).

5. Discussion

Considering the visual feedback results, different motions of visual feedback were highly influential on the perceptions of personalities. Fast, moderate, and slow motions could express different personalities, which answers the first research question of whether visual feedback with different motions could express different personalities. Regardless of colors, visual feedback with fast and moderate-speed motions were perceived as agreeable and kind (agreeableness). Visual feedback with slow motions was perceived as deliberate and careful (conscientiousness). Visual feedback with fast and moderate motions was considered to have a creative and imaginative personality (openness). Visual feedback with fast motions was considered active and sociable (extraversion). Visual feedback with slow motions was considered depressed and anxious (neuroticism). Different motions of visual feedback were perceived as different personalities.
The study suggested that fast motions were appropriate for expressing positive personalities such as agreeableness and extraversion. The traits of extroversion are related to ambition and sociability. Agreeableness can be interpreted as likability, cooperation, social conformity, and love [32], and these traits are more highly related to positive personalities than the other Big Five personalities. The results supported the previous finding that users’ perceptions of extraversion increased as the motion level increased [45]. Considering that previous studies about motions were conducted with virtual characters’ behavioral movements, this study provides design implications for more diverse formats of appearance-constrained agents. Given the personality perceptions associated with slow motions, the study argues that slow motions are suitable for personalities that are usually perceived as negative. The results reveal that slow motions were perceived as neurotic and conscientious. According to the Big Five personality trait study [43], traits associated with neuroticism are highly related to emotional instability, anxiety, and insecurity. In addition, traits associated with conscientiousness are related to thoroughness and planning, which could be negatively perceived in relationships. Slow motions of visual feedback are thus required to design conversational agents with negative personalities, such as neuroticism and conscientiousness. This parallels human–human conversation, in which actively reactive conversational partners are perceived as having more active and positive personalities than passive and sullen conversational partners [5].
The study demonstrated that color was not a valid factor for expressing different personality traits. Even though the experimental materials were designed with the viewpoint that colors have different traits, colors do not show any consistent or significant patterns depending on different personality traits. Even though the two studies used experimental materials with diverse colors, such as red, purple, green, and green with blue, consistent patterns of personality perceptions were only demonstrated depending on the motion speed, regardless of color.
The visual feedback results highlight the issue of color subjectivity and objectivity. The subjectivity and objectivity of colors are a highly controversial issue among color scientists. Based on the argument that color perceptions can be organized with objective standards and systems, the relationship between colors and personalities has been widely observed and studied, particularly in human psychology. For instance, personality testing systems, such as the Color Pyramid Test [46], exist to evaluate personalities depending on different colors. In addition, according to the theory of color [39], red, yellow, and orange are related to exciting and enlivening features, blue and purple are related to anxious and yearning features, yellow is related to anger, and black is related to depression. Bold colors are more suitable for expressing dominant personalities than submissive personalities [20].
The current study argues that colors are not suitable for expressing diverse personalities because personal preference is decisive in the perception of colors. The finding of this study that colors are not influential factors is supported by diverse color perception studies. Color eliminativism, the notion that physical objects are not colored at all and colors are perceived psychologically rather than objectively, supports the current study’s results [47]. In addition, the color researcher Jean-Philippe Lenclos’s study of geographical color demonstrated that colors could be explained and defined based on geographical and cultural conditions [48], which is highly related to the ecological perspective on color [40]. The correlations between specific colors and psychological factors are neither objective nor straightforward [49]; therefore, more sophisticated color schemes that reflect other design elements should be designed for future studies.
The results could suggest the possibility that the number of colors could influence personality perceptions rather than their hue. In particular, both visual feedbacks 3 and 8 demonstrated significantly high mean values compared with other visual feedback instances in extraversion. Considering that only visual feedbacks 3 and 8 used three colors, while the others used only one or two colors, we suggest that using diverse colors rather than a single color could be suitable for expressing extraversion. However, since this study cannot ignore the impacts of hues, this study’s results do not fully support this argument.
Perceptions of the conversational agent’s personalities differed according to vocal gender. Fast speaking speed was perceived differently depending on the vocal gender. When the male voice spoke fast, it was perceived as emotionally unstable and nervous, but the female voice speaking fast was perceived as sociable and outgoing. Sociable and outgoing personalities are considered more positive than unstable and nervous personalities [50]. Even though the same verbal cue was used, the personality perception differed depending on vocal gender. In particular, people tended to perceive female voices as more positive than male voices. This result is highly related to the previous study findings that people prefer female extraverted voices for assistive social robots for elders [43,51].
The limitation of this study was that the design standard for choosing the colors of the visual feedback was based on previous studies, which could be subjective. Perceptions of colors could be influenced by personal preferences and cultural backgrounds. The findings of this study could be restricted to subjective and personal perceptions. For future studies, participants’ characteristics, such as personalities and gender, could be added as variables. Another limitation is that we did not separately measure the personality perceptions depending on motions and colors because visual feedback was designed with the combination of these two elements. Based on these findings, with more detailed conditions, the impacts of motions could be measured separately for a future study.

6. Conclusions

This study has explored how a conversational agent’s personalities can be expressed. The current study provided a novel approach to implement the natural feedback of conversational agents with the concept of personality rather than emotion. In addition, rather than focusing on two contrasting personalities, the current study applied wider personality categories. The study sought to implement more natural feedback by applying more diverse personality expression cues.
Visual feedback was used as the experimental material to answer the first research question: "Can different visual feedback be perceived as different personalities?" The results demonstrated that the motion of visual feedback strongly influenced the perception of personality regardless of color. Fast motion was perceived as active and sociable; moderate motion was perceived as creative, agreeable, and imaginative; and slow motion was perceived as conscientious but also neurotic, depressed, and anxious.
Verbal cues were used as the experimental materials to answer the second research question: "Can different verbal cues be perceived as different personalities?" The results demonstrated that different verbal cues were perceived as different personalities. Regarding conscientiousness, the male voice with low emotionality and the female voice with slow and moderate speed were statistically significant. Voices with high emotionality, the female voice with fast speech, and the male voice asking many questions were perceived as extraverted. The female voice with high emotionality was perceived as having an open personality. The male voice speaking fast with low wordiness was perceived as neurotic.
The overall results could be applied to diverse interfaces designed around smart displays, such as AI speakers, social robots, cars, and Internet of Things environments. In addition, this study applied a wider range of personalities rather than focusing on two contrasting ones, and it used more diverse elements for personality expression rather than single, simple factors.

Author Contributions

Methodology, S.-y.L., G.L. and S.K.; Supervision, J.L.; Writing—original draft, S.-y.L.; Writing—review & editing, S.-y.L., G.L., S.K., J.L.


Funding

This research was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korean National Police Agency and the Ministry of Science and ICT for the Police field customized research and development project (NRF-2018M3E2A1081492).


Acknowledgments

Early in the research, Soohyun Shin helped develop the initial ideas for the visual feedback design. We also worked closely with Seung Hyuk Choe when designing the visual feedback for the study.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Survey questionnaires of the Big Five Personality used in the experiment.

How well do the following statements describe the personality of the experimental material? (1 = Disagree strongly, 7 = Agree strongly)

Item Number | Questionnaire
1 | Is reserved.
2 | Is generally trusting.
3 | Tends to be lazy.
4 | Is relaxed, handles stress well.
5 | Has few artistic interests.
6 | Is outgoing, sociable.
7 | Tends to find fault with others.
8 | Does a thorough job.
9 | Gets nervous easily.
10 | Has an active imagination.
Note. Scoring of the Big Five Inventory-10 scales is as follows (R = item is reverse-scored): Extraversion: 1R, 6; Agreeableness: 2, 7R; Conscientiousness: 3R, 8; Neuroticism: 4R, 9; Openness: 5R, 10.
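The scoring rule in the note above can be implemented directly. The sketch below (a hypothetical helper, not part of the study's materials) computes the five BFI-10 scale scores from the ten 7-point ratings, reverse-scoring flagged items as 8 − x:

```python
# Minimal sketch of BFI-10 scoring on a 7-point scale.
# (item number, reverse-scored?) pairs per trait, following Table A1's note.
SCALES = {
    "Extraversion":      [(1, True),  (6, False)],
    "Agreeableness":     [(2, False), (7, True)],
    "Conscientiousness": [(3, True),  (8, False)],
    "Neuroticism":       [(4, True),  (9, False)],
    "Openness":          [(5, True),  (10, False)],
}

def score_bfi10(ratings):
    """ratings: dict mapping item number (1-10) to a 1-7 response."""
    scores = {}
    for trait, items in SCALES.items():
        # Reverse-scored items become 8 - x on a 7-point scale.
        vals = [(8 - ratings[i]) if rev else ratings[i] for i, rev in items]
        scores[trait] = sum(vals) / len(vals)  # mean of the two items
    return scores

# Example: a respondent who answers 4 ("neutral") to every item
print(score_bfi10({i: 4 for i in range(1, 11)}))  # every trait scores 4.0
```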
Table A2. Results of two-way ANOVA for verbal cue conditions: Conscientiousness.

Verbal Cues | Condition | Mean (SD) | df | F-Value | p-Value
Voice Gender | Female | 4.62 (1.30) | 1 | 0.12 | 0.72
 | Male | 4.75 (1.31)
Emotionality | High | 4.17 (1.22) | 1 | 8.69 | 0.00 ***
 | Low | 5.20 (1.16)
Gender × Emotionality | Female × High | 4.29 (1.25) | 1 | 1.16 | 0.29
 | Male × High | 4.04 (1.23)
 | Female × Low | 5.45 (0.98)
 | Male × Low | 4.95 (1.32)
Question | High | 5.13 (1.15) | 1 | 1.05 | 0.31
 | Low | 4.81 (0.97)
Gender × Question | Female × High | 4.87 (1.22) | 1 | 0.37 | 0.54
 | Male × High | 5.38 (1.07)
 | Female × Low | 4.75 (0.94)
 | Male × Low | 4.87 (1.05)
Speed | High | 5.00 (1.56) | 2 | 1.55 | 0.22
 | Moderate | 5.29 (1.08)
 | Low | 4.63 (1.76)
Gender × Speed | Female × High | 5.21 (1.50) | 2 | 4.81 | 0.01 *
 | Male × High | 4.79 (1.66)
 | Female × Moderate | 5.54 (0.62)
 | Male × Moderate | 5.04 (1.39)
 | Female × Low | 5.88 (0.86)
 | Male × Low | 3.37 (1.54)
Pitch | High | 5.16 (1.20) | 2 | 0.04 | 0.96
 | Moderate | 5.25 (1.25)
 | Low | 5.25 (1.19)
Gender × Pitch | Female × High | 5.71 (0.96) | 2 | 0.79 | 0.46
 | Male × High | 4.63 (1.20)
 | Female × Moderate | 5.38 (1.15)
 | Male × Moderate | 5.13 (1.28)
 | Female × Low | 5.67 (0.86)
 | Male × Low | 4.83 (1.48)
Wordiness | High | 5.33 (1.48) | 1 | 3.20 | 0.09
 | Low | 5.16 (0.65)
Gender × Wordiness | Female × High | 5.11 (0.40) | 1 | 3.10 | 0.78
 | Male × High | 4.32 (0.23)
 | Female × Low | 4.53 (0.15)
 | Male × Low | 5.23 (0.42)
Note: * p < 0.05, ** p < 0.01, *** p < 0.001.
Table A3. Results of two-way ANOVA for verbal cue conditions: Extraversion.

Verbal Cues | Condition | Mean (SD) | df | F-Value | p-Value
Voice Gender | Female | 4.72 (1.67) | 1 | 0.13 | 0.72
 | Male | 5.14 (1.47)
Emotionality | High | 6.29 (0.69) | 1 | 8.96 | 0.00 ***
 | Low | 3.60 (0.92)
Gender × Emotionality | Female × High | 6.33 (0.61) | 1 | 1.68 | 0.20
 | Male × High | 6.20 (0.78)
 | Female × Low | 3.95 (1.03)
 | Male × Low | 3.25 (0.65)
Question | High | 3.85 (1.57) | 1 | 4.15 | 0.06
 | Low | 3.10 (0.98)
Gender × Question | Female × High | 3.08 (1.54) | 1 | 4.62 | 0.03 *
 | Male × High | 4.62 (1.22)
 | Female × Low | 3.12 (1.26)
 | Male × Low | 3.08 (0.66)
Speed | High | 4.13 (0.81) | 2 | 5.17 | 0.00 **
 | Moderate | 3.16 (1.46)
 | Low | 3.17 (1.46)
Gender × Speed | Female × High | 4.29 (0.81) | 2 | 0.15 | 0.86
 | Male × High | 3.95 (0.81)
 | Female × Moderate | 3.42 (1.62)
 | Male × Moderate | 2.92 (1.31)
 | Female × Low | 3.54 (1.21)
 | Male × Low | 2.83 (1.19)
Pitch | High | 4.13 (1.27) | 2 | 2.45 | 0.09
 | Moderate | 3.27 (1.42)
 | Low | 3.60 (1.25)
Gender × Pitch | Female × High | 4.08 (0.76) | 2 | 0.10 | 0.89
 | Male × High | 4.17 (1.67)
 | Female × Moderate | 3.54 (1.39)
 | Male × Moderate | 3.67 (1.17)
 | Female × Low | 3.38 (1.35)
 | Male × Low | 3.17 (1.54)
Wordiness | High | 2.95 (1.30) | 1 | 0.17 | 0.68
 | Low | 3.16 (1.15)
Gender × Wordiness | Female × High | 2.83 (1.23) | 1 | 1.49 | 0.76
 | Male × High | 2.45 (0.89)
 | Female × Low | 3.08 (0.54)
 | Male × Low | 3.21 (0.12)
Note: * p < 0.05, ** p < 0.01, *** p < 0.001.
Table A4. Results of two-way ANOVA for verbal cue conditions: Openness.

Verbal Cues | Condition | Mean (SD) | df | F-Value | p-Value
Voice Gender | Female | 4.22 (1.76) | 1 | 0.01 | 0.93
 | Male | 4.18 (1.65)
Emotionality | High | 4.83 (1.35) | 1 | 7.17 | 0.66
 | Low | 3.58 (1.78)
Gender × Emotionality | Female × High | 4.95 (1.43) | 1 | 0.19 | 0.01 *
 | Male × High | 4.70 (1.32)
 | Female × Low | 3.50 (1.81)
 | Male × Low | 3.66 (1.83)
Question | High | 4.42 (1.41) | 1 | 0.35 | 0.56
 | Low | 4.19 (1.28)
Gender × Question | Female × High | 4.79 (1.27) | 1 | 0.03 | 0.87
 | Male × High | 4.04 (1.49)
 | Female × Low | 4.63 (1.11)
 | Male × Low | 4.79 (1.27)
Speed | High | 4.08 (1.44) | 2 | 0.03 | 0.97
 | Moderate | 4.04 (1.25)
 | Low | 3.98 (1.46)
Gender × Speed | Female × High | 4.13 (1.28) | 2 | 0.18 | 0.83
 | Male × High | 4.04 (1.64)
 | Female × Moderate | 4.29 (1.18)
 | Male × Moderate | 3.79 (1.32)
 | Female × Low | 4.25 (1.35)
 | Male × Low | 3.71 (1.57)
Pitch | High | 4.44 (1.48) | 2 | 0.66 | 0.53
 | Moderate | 4.29 (1.22)
 | Low | 4.00 (1.29)
Gender × Pitch | Female × High | 4.04 (1.46) | 2 | 0.77 | 0.47
 | Male × High | 4.83 (1.45)
 | Female × Moderate | 4.29 (1.18)
 | Male × Moderate | 4.29 (1.32)
 | Female × Low | 4.04 (1.09)
 | Male × Low | 3.95 (1.51)
Wordiness | High | 3.88 (1.83) | 1 | 0.03 | 0.85
 | Low | 4.00 (1.52)
Gender × Wordiness | Female × High | 3.87 (1.84) | 1 | 0.25 | 0.62
 | Male × High | 4.00 (1.52)
 | Female × Low | 3.76 (1.23)
 | Male × Low | 4.12 (1.76)
Note: * p < 0.05, ** p < 0.01, *** p < 0.001.
Table A5. Results of two-way ANOVA for verbal cue conditions: Neuroticism.

Verbal Cues | Condition | Mean (SD) | df | F-Value | p-Value
Voice Gender | Female | 2.93 (1.14) | 1 | 0.22 | 0.64
 | Male | 3.08 (0.96)
Emotionality | High | 2.97 (1.02) | 1 | 0.04 | 0.83
 | Low | 3.04 (1.09)
Gender × Emotionality | Female × High | 2.87 (0.97) | 1 | 1.33 | 0.26
 | Male × High | 3.08 (1.10)
 | Female × Low | 3.29 (0.94)
 | Male × Low | 2.79 (1.21)
Question | High | 3.19 (1.39) | 1 | 0.01 | 0.91
 | Low | 3.15 (0.99)
Gender × Question | Female × High | 3.58 (1.56) | 1 | 0.48 | 0.49
 | Male × High | 2.79 (1.12)
 | Female × Low | 3.29 (1.09)
 | Male × Low | 3.00 (0.90)
Speed | High | 3.52 (1.49) | 2 | 2.79 | 0.07
 | Moderate | 2.87 (1.29)
 | Low | 2.73 (0.97)
Gender × Speed | Female × High | 3.50 (1.31) | 2 | 0.08 | 0.00 **
 | Male × High | 3.54 (1.72)
 | Female × Moderate | 2.67 (0.98)
 | Male × Moderate | 2.79 (1.01)
 | Female × Low | 2.76 (1.03)
 | Male × Low | 3.00 (1.55)
Pitch | High | 3.02 (1.13) | 2 | 0.91 | 0.46
 | Moderate | 2.56 (1.25)
 | Low | 2.96 (1.38)
Gender × Pitch | Female × High | 2.96 (0.86) | 2 | 0.07 | 0.92
 | Male × High | 3.08 (1.38)
 | Female × Moderate | 2.42 (1.26)
 | Male × Moderate | 2.71 (1.27)
 | Female × Low | 2.75 (1.25)
 | Male × Low | 3.17 (1.53)
Wordiness | High | 2.67 (1.07) | 1 | 3.20 | 0.00 ***
 | Low | 3.33 (0.72)
Gender × Wordiness | Female × High | 5.33 (1.48) | 1 | 2.20 | 0.78
 | Male × High | 5.24 (0.78)
 | Female × Low | 4.16 (0.65)
 | Male × Low | 4.77 (0.78)
Note: * p < 0.05, ** p < 0.01, *** p < 0.001.
Table A6. Results of two-way ANOVA for verbal cue conditions: Agreeableness.

Verbal Cues | Condition | Mean (SD) | df | F-Value | p-Value
Voice Gender | Female | 5.04 (1.39) | 1 | 0.12 | 0.73
 | Male | 5.16 (1.08)
Emotionality | High | 4.87 (1.20) | 1 | 1.62 | 0.21
 | Low | 5.33 (1.25)
Gender × Emotionality | Female × High | 4.96 (1.51) | 1 | 0.66 | 0.42
 | Male × High | 4.79 (0.86)
 | Female × Low | 5.12 (1.33)
 | Male × Low | 5.54 (1.17)
Question | High | 5.15 (1.28) | 1 | 0.01 | 0.91
 | Low | 5.29 (0.87)
Gender × Question | Female × High | 5.13 (1.13) | 1 | 0.48 | 0.49
 | Male × High | 5.17 (1.46)
 | Female × Low | 5.38 (0.86)
 | Male × Low | 5.20 (0.92)
Speed | High | 5.15 (1.66) | 2 | 0.39 | 0.67
 | Moderate | 5.48 (1.19)
 | Low | 5.19 (1.39)
Gender × Speed | Female × High | 5.50 (1.52) | 2 | 0.19 | 0.83
 | Male × High | 4.79 (1.79)
 | Female × Moderate | 5.71 (0.99)
 | Male × Moderate | 5.25 (1.37)
 | Female × Low | 5.67 (1.13)
 | Male × Low | 4.71 (1.51)
Pitch | High | 5.46 (1.21) | 2 | 0.00 | 1.00
 | Moderate | 5.46 (1.09)
 | Low | 5.46 (1.25)
Gender × Pitch | Female × High | 5.42 (1.39) | 2 | 1.49 | 0.23
 | Male × High | 5.50 (1.04)
 | Female × Moderate | 5.21 (0.96)
 | Male × Moderate | 5.71 (1.19)
 | Female × Low | 5.79 (0.84)
 | Male × Low | 5.13 (1.52)
Wordiness | High | 5.50 (1.52) | 1 | 2.19 | 0.15
 | Low | 4.71 (1.05)
Gender × Wordiness | Female × High | 3.88 (1.84) | 1 | 2.30 | 0.78
 | Male × High | 3.75 (0.98)
 | Female × Low | 4.00 (1.52)
 | Male × Low | 4.15 (0.23)
Note: * p < 0.05, ** p < 0.01, *** p < 0.001.
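Each F-value in Tables A2–A6 comes from a two-way ANOVA crossing vocal gender with one verbal cue. As a sketch of that computation, the balanced case can be computed from sums of squares; the data below are made-up illustrative ratings, not the study's data:

```python
import numpy as np
from scipy import stats

def two_way_anova(data):
    """Balanced two-way ANOVA.

    data: array of shape (levels_A, levels_B, n_per_cell) of ratings.
    Returns {factor: (F, p)} for factor A, factor B, and the A x B interaction.
    """
    data = np.asarray(data, dtype=float)
    a, b, n = data.shape
    grand = data.mean()
    mean_a = data.mean(axis=(1, 2))   # factor A level means
    mean_b = data.mean(axis=(0, 2))   # factor B level means
    mean_ab = data.mean(axis=2)       # cell means

    # Sums of squares for main effects, interaction, and error
    ss_a = b * n * ((mean_a - grand) ** 2).sum()
    ss_b = a * n * ((mean_b - grand) ** 2).sum()
    ss_ab = n * ((mean_ab - mean_a[:, None] - mean_b[None, :] + grand) ** 2).sum()
    ss_err = ((data - mean_ab[:, :, None]) ** 2).sum()

    df_a, df_b, df_ab, df_err = a - 1, b - 1, (a - 1) * (b - 1), a * b * (n - 1)
    out = {}
    for name, ss, df in [("A", ss_a, df_a), ("B", ss_b, df_b), ("AxB", ss_ab, df_ab)]:
        F = (ss / df) / (ss_err / df_err)
        out[name] = (F, stats.f.sf(F, df, df_err))  # p from the F survival function
    return out

# Illustrative only: 2 genders x 2 emotionality levels, 3 ratings per cell
toy = [[[6, 6, 7], [3, 4, 3]],   # female voice: high / low emotionality
       [[6, 7, 6], [3, 3, 4]]]   # male voice:   high / low emotionality
res = two_way_anova(toy)
print(res["B"])  # emotionality has a large F here; gender and interaction do not
```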


  1. Picard, R.W. Affective computing for HCI. HCI 1999, 1, 829–833. [Google Scholar]
  2. Mairesse, F.; Walker, M.A. Can Conversational Agents Express Big Five Personality Traits through Language?: Evaluating a Psychologically-Informed Language Generator. Camb. Sheffield. 2009, 1–59. [Google Scholar]
  3. Pervin, L.A.; John, O.P. Handbook of Personality; Guilford Press: New York, NY, USA, 1999. [Google Scholar]
  4. Neff, M.; Wang, Y.; Abbott, R.; Walker, M. Evaluating the Effect of Gesture and Language on Personality Perception in Conversational Agents. In Proceedings of the 10th International Conference on Intelligent Virtual Agents, Philadelphia, PA, USA, 20–22 September 2010; pp. 222–235. [Google Scholar]
  5. Brebner, J. Personality Theory and Movement; Springer: Dordrecht, The Netherlands, 1985; pp. 27–41. [Google Scholar]
  6. Dewaele, J.M.; Furnham, A. Extraversion: The Unloved Variable in Applied Linguistic Research. Lang. Learn. 2010, 49, 509–544. [Google Scholar] [CrossRef]
  7. Dewaele, J.M.; Heylighen, F. Variation in the Contextuality of Language: An Empirical Measure. Found. Sci. 2002, 7, 293–340. [Google Scholar]
  8. Nass, C. Machines and Mindlessness: Social Responses to Computers. J. Soc. Issues 2000, 56, 81–103. [Google Scholar] [CrossRef]
  9. Aly, A.; Tapus, A. A Model for Synthesizing a Combined Verbal and Nonverbal Behavior Based on Personality Traits in Human-Robot Interaction. In Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction, Tokyo, Japan, 3–6 March 2013; pp. 325–332. [Google Scholar]
  10. Lee, S.; Kim, S.; Lee, G.; Lee, J. Robots in Diverse Contexts: Effects of Robots Tasks on Expected Personality. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 169–170. [Google Scholar]
  11. Bickmore, T.; Cassell, J. Relational Agents: A Model and Implementation of Building User Trust. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Seattle, Washington, USA, 1 March 2001; pp. 396–403. [Google Scholar]
  12. Oliveira, R.; Arriaga, P.; Alves-Oliveira, P.; Correia, F.; Petisca, S.; Paiva, A. Friends or Foes?: Socioemotional Support and Gaze Behaviors in Mixed Groups of Humans and Robots. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 279–288. [Google Scholar]
  13. Torres, O.E.; Cassell, J.; Prevost, S. Modeling Gaze Behavior as a Function of Discourse Structure. In Proceedings of the First International Workshop on Human-Computer Conversation, Bellagio, Italy, 14–16 July 1997. [Google Scholar]
  14. Löffler, D.; Schmidt, N.; Tscharn, R. Multimodal Expression of Artificial Emotion in Social Robots Using Color, Motion and Sound. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 334–343. [Google Scholar]
  15. Song, S.; Yamada, S. Expressing Emotions through Color, Sound, and Vibration with an Appearance-Constrained Social Robot. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 2–11. [Google Scholar]
  16. Terada, K.; Yamauchi, A.; Ito, A. Artificial emotion expression for a robot by dynamic color change. In Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 9–13 September 2012; pp. 314–321. [Google Scholar]
  17. Funakoshi, K.; Kobayashi, K.; Nakano, M.; Yamada, S.; Kitamura, Y.; Tsujino, H. Smoothing human-robot speech interactions by using a blinking-light as subtle expression. In Proceedings of the 10th international conference on Multimodal interfaces, Chania, Crete, Greece, 20–22 October 2008; pp. 293–296. [Google Scholar]
  18. Seo, Y.S.; Huh, J.H. Automatic Emotion-Based Music Classification for Supporting Intelligent IoT Applications. Electronics 2019, 8, 164. [Google Scholar] [CrossRef]
  19. Picard, R.W. Affective Computing; The MIT Press: London, UK, 2000. [Google Scholar]
  20. Dryer, D.C. Getting personal with computers: How to design personalities for agents. Appl. Artif. Intell. 1999, 13, 273–295. [Google Scholar] [CrossRef]
  21. Isbister, K.; Nass, C. Consistency of personality in interactive characters: Verbal cues, non-verbal cues, and user characteristics. Int. J. Hum. Comput. Stud. 2000, 53, 251–267. [Google Scholar] [CrossRef]
  22. Nass, C.; Youngme, M.; Fogg, B.J.; Reeves, B.; Dryer, D.C. Can computer personalities be human personalities? Int. J. Hum. Comput. Stud. 1995, 43, 223–239. [Google Scholar] [CrossRef]
  23. Breazeal, C. Social interactions in HRI: The robot view. IEEE Trans. Syst. Man Cybern. Part. C Appl. Rev. 2004, 34, 181–186. [Google Scholar] [CrossRef]
  24. Lee, K.M.; Peng, W.; Jin, S.A.; Yan, C. Can robots manifest personality?: An empirical test of personality recognition, social responses, and social presence in human-robot interaction. J. Commun. 2006, 56, 754–772. [Google Scholar] [CrossRef]
  25. Moon, Y.; Nass, C. How “real” are computer personalities? Psychological responses to personality types in human-computer interaction. Commun. Res. 1996, 23, 651–674. [Google Scholar] [CrossRef]
  26. Wang, R.; Harari, G.; Hao, P.; Zhou, X.; Campbell, A.T. SmartGPA: How Smartphones Can Assess and Predict Academic Performance of College Students. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Osaka, Japan, 7–11 September 2015; pp. 295–306. [Google Scholar]
  27. Hancock, J.T.; Dunham, P.J. Impression Formation in Computer-Mediated Communication Revisited. Commun. Res. 2001, 28, 325–347. [Google Scholar] [CrossRef]
  28. Bailenson, J.N.; Yee, N.; Brave, S.; Merget, D.; Koslow, D. Virtual Interpersonal Touch: Expressing and Recognizing Emotions Through Haptic Devices. Hum. Comput. Interact. 2007, 22, 325–353. [Google Scholar]
  29. Andrist, S.; Mutlu, B.; Tapus, A. Look Like Me: Matching Robot Personality via Gaze to Increase Motivation. ACM Conf. Hum. Factors Comput. Syst. 2015, 2, 3603–3612. [Google Scholar]
  30. Sokolova, M.; Fernández-Caballero, A. A Review on the Role of Color and Light in Affective Computing. Appl. Sci. 2015, 5, 275–293. [Google Scholar] [CrossRef]
  31. Apple, W.; Streeter, L.A.; Krauss, R.M. Effects of pitch and speech rate on personal attributions. J. Pers. Soc. Psychol. 1979, 37, 715–727. [Google Scholar] [CrossRef]
  32. Barrick, M.R.; Mount, M.K. The big five personality dimensions and job performance: A meta-analysis. Pers. Psychol. 1991, 44, 1–26. [Google Scholar] [CrossRef]
  33. Digman, J.M. Personality Structure: Emergence of the Five-Factor Model. Annu. Rev. Psychol. 1990, 41, 417–440. [Google Scholar] [CrossRef]
  34. Tapus, A.; Matarić, M.J. User personality matching with a hands-off robot for post-stroke rehabilitation therapy. Springer Tracts Adv. Robot. 2008, 39, 165–175. [Google Scholar]
  35. Lee, S. Expressing the personalities of the conversational agents with visual feedback and verbal cues. Master’s Thesis, Seoul National University, Seoul, South Korea, 2019; pp. 1–104. [Google Scholar]
  36. Joosse, M.; Lohse, M.; Perez, J.G.; Evers, V. What you do is who you are: The role of task context in perceived social robot personality. In Proceedings of the IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 2134–2139. [Google Scholar]
  37. Rammstedt, B.; John, O.P. Measuring personality in one minute or less: A 10-item short version of the Big Five Inventory in English and German. J. Res. Pers. 2007, 41, 203–212. [Google Scholar] [CrossRef]
  38. Murray, D.C.; Deabler, H.L. Colors and mood-tones. J. Appl. Psychol. 1957, 41, 279–283. [Google Scholar] [CrossRef]
  39. Warner, S. On the relation of color and personality. J. Proj. Tech. Pers. Assess. 1966, 30, 512–524. [Google Scholar]
  40. Karwoski, T.F.; Odbert, H.S. Color music. Psychol. Monogr. 1938, 50, i-60. [Google Scholar] [CrossRef]
  41. Naz, K.; Helen, H. Color-emotion associations: Past experience and personal preference. AIC Color. Paint. Interim Meet. Int. Color. Assoc. 2004, 5, 31. [Google Scholar]
  42. Bricks, M. Mental hygiene value of children’s art work. Am. J. Orthopsychiatry 1944, 14, 136–146. [Google Scholar] [CrossRef]
  43. Costa, P.T.; McCrae, R.R. The Revised NEO Personality Inventory (NEO-PI-R); The SAGE Handbook of Personality Theory and Assessment: Thousand Oaks, CA, USA, 2008; pp. 179–198. [Google Scholar]
  44. Niculescu, A.; Van Dijk, B.; Nijholt, A.; See, S.L. The influence of voice pitch on the evaluation of a social robot receptionist. In Proceedings of the International Conference on User Science and Engineering (i-USEr), Shah Alam, Selangor, Malaysia, 29 November–1 December 2011; pp. 18–23. [Google Scholar]
  45. Hyde, J.; Carter, E.J.; Kiesler, S.; Hodgins, J.K. Using an Interactive Animated Avatar’s Facial Expressiveness to Increase Persuasiveness and Socialness. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; pp. 1719–1728. [Google Scholar]
  46. Schaie, W. The color pyramid test: A nonverbal technique for personality assessment. Psychol. Bull. 1963, 60, 530–547. [Google Scholar] [CrossRef]
  47. Byrne, A.; Hilbert, D.R. Color realism and color science. Behav. Brain Sci. 2003, 26, 3–21. [Google Scholar] [CrossRef]
  48. Lenclos, J.-P.; Lenclos, D. Colors of the World: The Geography of Color; WW Norton & Company: New York, NY, USA, 2004. [Google Scholar]
  49. Brave, S.; Nass, C. Emotion in Human—Computer Interaction; The Human-Computer Interaction Handbook: Hillsdale, NJ, USA, 2003; pp. 81–96. [Google Scholar]
  50. McCrae, R.R.; Costa, P.T. Personality Trait Structure as a Human Universal. Am. Psychol. 1997, 52, 509–516. [Google Scholar] [CrossRef]
  51. Chang, R.C.S.; Lu, H.P.; Yang, P. Stereotypes or golden rules? Exploring likable voice traits of social robots as active aging companions for tech-savvy baby boomers in Taiwan. Comput. Human Behav. 2018, 84, 194–210. [Google Scholar] [CrossRef]
Figure 1. Telepresence robot used in the entire study.
Figure 2. Visual feedback 1–8 used in the study.
Figure 3. Partially captured sequence of motions of visual feedback. All visual feedback moved with the same sequence.
Figure 4. Average mean values of conversational agent’s perceived personalities depending on different levels of motions of visual feedback.
Table 1. Number of participants depending on conditions.

Conditions | Number of Participants | Total
Visual feedback | Male: 20 | 45
Verbal cues | Male: 34 | 60
Table 2. Details of experimental materials for visual feedback, designed with different motions and colors.

Visual Feedback | Motion | Motion Speed | Colors | Number of Colors
1 | Moderate | 0.25 | Red + Yellow | 2
2 | Slow | 0.15 | Purple | 1
3 | Fast | 0.35 | Dark Blue + Yellow + Red | 3
4 | Moderate | 0.25 | Green + Blue | 2
5 | Slow | 0.15 | Grey | 1
8 | Fast | 0.35 | Dark Blue + Yellow + Purple | 3
Table 3. Experimental materials of verbal cues.

Script | Verbal Cues
1: Introduction | 3 Pitch (high/moderate/low) × 2 Gender
2: Weather conditions | 2 Wordiness (high/low) × 2 Gender
3: Give directions | 3 Speed (high/moderate/low) × 2 Gender
4: Ask a name | 2 Emotionality (high/low) × 2 Gender
5: Scheduling | 2 Questioning styles (high/low) × 2 Gender
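Each script in Table 3 crosses one verbal cue with vocal gender. As a small illustration (the variable names are ours, not from the study), the resulting condition lists can be enumerated with a Cartesian product:

```python
from itertools import product

# Verbal-cue levels per script, as listed in Table 3 (names are illustrative)
cues = {
    "pitch":        ["high", "moderate", "low"],
    "wordiness":    ["high", "low"],
    "speed":        ["high", "moderate", "low"],
    "emotionality": ["high", "low"],
    "questioning":  ["high", "low"],
}
genders = ["female", "male"]

# Every (cue level, gender) pair for each script
conditions = {cue: list(product(levels, genders)) for cue, levels in cues.items()}
print(len(conditions["pitch"]))            # 3 levels x 2 genders = 6 conditions
print(sum(len(v) for v in conditions.values()))  # 24 combinations across the five scripts
```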
Table 4. Results of visual feedback. Mean values of the conversational agent’s perceived personalities depending on different visual feedback.

Personality Traits | Mean | SD | df | F-Value | p-Value
Agreeableness | 5.83 | 0.75 | 4 | 14.37 | 0.00 ***
Conscientiousness | 3.50 | 1.05 | 4 | 6.12 | 0.00 **
Extraversion | 3.50 | 0.55 | 4 | 11.01 | 0.00 ***
Neuroticism | 2.00 | 0.89 | 4 | 23.15 | 0.00 ***
Openness | 3.16 | 1.21 | 4 | 3.24 | 0.03 *
Note: * p < 0.05, ** p < 0.01, *** p < 0.001.
Table 5. Average mean values of conversational agent’s perceived personalities depending on different levels of motions of visual feedback.

Personality Traits | Fast (3, 8) | Moderate (1, 4, 6) | Slow (2, 5)
Agreeableness | 5.51 | 5.39 | 2.99

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Electronics EISSN 2079-9292, published by MDPI AG, Basel, Switzerland.