Article

Artificial Empathy in Home Service Agents: A Conceptual Framework and Typology of Empathic Human–Agent Interactions

by Joohyun Lee 1 and Hyo-Jin Kang 2,*
1 Department of Future Convergence Technology Engineering, Sungshin Women’s University, Seoul 02844, Republic of Korea
2 Department of Service Design Engineering, Sungshin Women’s University, Seoul 02844, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(6), 3096; https://doi.org/10.3390/app15063096
Submission received: 14 February 2025 / Revised: 4 March 2025 / Accepted: 5 March 2025 / Published: 12 March 2025

Abstract: As artificial intelligence (AI) technology advances, home service functions and services have diversified, along with the technologies applied to them, because user needs and expectations vary with the purpose of the task even within the same environment. Although interactions with AI frequently occur in the home, a personal space, research that examines these interactions through the lens of empathy remains scarce. This study therefore reviews prior work on the interaction between users and technology and systematizes the interaction elements relevant to intelligent agents commonly used in the home environment. Building on research showing that interaction between technology and users should be natural and incorporate sophisticated psychological anthropomorphism, a framework was established to examine these elements from multiple perspectives. We analyzed the literature to establish an artificial empathy interaction system and presented an initial framework. We then applied authentic industry cases to the framework to ascertain whether groups exhibiting analogous trends could be mapped. This process culminated in the categorization of the cases into three distinct types, alongside the identification of the empathic interaction elements to consider for each category. Moreover, we identified additional components necessary for the final framework, as well as elements deemed superfluous, and refined the framework accordingly. The final framework, the “Empathic HAX (Human–Agent Interaction) Canvas”, is designed to examine whether empathic interaction between users and AI agents is necessary in the home service domain and to determine how such interaction is best designed. The significance of this study lies in the creation of a framework that has not previously existed and in the presentation of a design tool with strong potential for both academic and practical use.

1. Introduction

The advent of artificial intelligence (AI) has precipitated a paradigm shift in the realm of domestic services, ushering in a new era of technology that can perform a wide range of tasks [1]. Conventional household appliances are increasingly displaced as intelligent agents absorb and extend their functions. In home services, intelligent agents now support a wide range of capabilities, including home automation, domestic assistance, and healthcare management; however, the scope of these services varies depending on the specific agent and the needs of the user.
A study of what users want from smart home intelligent agents found that users would actively employ an agent whose purpose was to answer questions or perform simple tasks as instructed, but were reluctant to use agents that intruded on their personal space or took the lead in a situation [2]. At the same time, the range of tasks that home services can perform has expanded, and their capabilities have been enhanced and extended to additional tasks [3]. Activity recognition, for example, is a more advanced capability than image recognition or basic data processing, and the same progression applies to the interaction between the intelligent home environment and the user. The home environment is also where personal characteristics are most pronounced, yet the degree to which current systems account for them has received comparatively little attention [4]. Moreover, system functionality is no longer limited to the individual user but extends to interaction between users, a more advanced feature. Research to date has focused on the most prominent field, human–robot interaction (HRI), and on the potential of humanoid robots to interact with humans in a more comprehensive manner.
A representative line of research on technology and user interaction designs intelligent agents as physical robots for specific fields or proposes specific services or systems [5,6]. Most of these studies structure systems or show data flows from the perspective of technology application and operation, spanning fields such as medicine, healthcare, social communication, and education. Other studies have examined factors that could narrow the gap in long-term effects and acceptance in interactions, or have sought to identify the factors that enable interaction and the extent to which it is possible [7,8]. Extant studies, however, have predominantly centered on technical implementation and have identified only a limited number of factors that may affect interaction, based on particular hypotheses. Little research has systematized and identified the various factors that can be considered for intelligent agents. In particular, prior work emphasizes that psychological anthropomorphism must be meticulously and organically integrated to elicit a substantial effect in the interaction between technology and users [9]. While such work touches upon factors such as behavior and speech patterns, the extent and method of application may fluctuate contingent on the context and situation, and it underscores the necessity for research that contemplates these issues from a multifaceted perspective, encompassing artificial empathy interaction. This study aims to ascertain how the interaction methods demanded by users vary across service domains, functions, contexts, and tasks within home services. This endeavor is of particular importance as the scope of AI technology undergoes progressive diversification, offering a growing range of services that enhance daily well-being.
In this regard, this study seeks to identify the components that delineate the artificial empathy interaction of home service intelligent agents, with the overarching objective of establishing a conceptual framework for empathic human–intelligent agent interaction. We propose the designation “HAX (Human–Agent Interaction)” for this interaction and establish a framework to serve as a theoretical guideline for the design of empathetic human–agent interaction in the future.

2. Theoretical Background

2.1. Home Services AI Agent Evolution and HRI Theory-Based Interactions

An intelligent agent can be defined as software able to interact with its environment and the user, collecting and utilizing data in order to make its own decisions and perform the correct task at the optimum time [10]. When applied in the home environment, intelligent agents are often combined with multiple smart home services and can make predictions based on the user’s context, perform repetitive and cumbersome tasks on the user’s behalf, and more [11]. These agents can take tangible physical forms or intangible virtual forms, used separately or in combination for different purposes. While previously focused on functional roles related to efficiency, such as increasing productivity and reducing costs, the scope of intelligent agents in home services is expanding and diversifying to include making contextual decisions based on collected data, interacting with users, and providing personalized experiences. The following home service areas have been identified as the primary domains for the implementation of intelligent agents: cleaning, entertainment, care, security, energy management, social and communication, and work. Table 1 below presents these domains categorized according to their purpose, task, form, and interaction. The robotic form of the agent is proposed as an integrated system that can be managed and controlled in conjunction with a variety of sensors and other devices; it is most frequently mentioned in cleaning, care, and social and communication services, which are service areas that require physical assistance or involve varied interactions with the user.
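As a rough illustration of how such a taxonomy might be encoded in a design tool, the sketch below models the seven service domains with a simple data structure. The purpose labels and embodiment flags are assumptions chosen for illustration, not values taken from Table 1.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceDomain:
    name: str
    purpose: str     # illustrative purpose label (assumption)
    embodied: bool   # True if a robotic (physical) form is typical

# The seven domains named in the text; attribute values are illustrative.
DOMAINS = [
    ServiceDomain("cleaning", "physical assistance", True),
    ServiceDomain("entertainment", "content delivery", False),
    ServiceDomain("care", "physical and social assistance", True),
    ServiceDomain("security", "monitoring and control", False),
    ServiceDomain("energy management", "contextual control", False),
    ServiceDomain("social and communication", "varied user interaction", True),
    ServiceDomain("work", "task support", False),
]

# Domains where a robotic form is most frequently mentioned in the text
robotic = [d.name for d in DOMAINS if d.embodied]
print(robotic)  # ['cleaning', 'care', 'social and communication']
```

A structure like this would let a design team filter candidate interaction styles by domain characteristics, though the actual dimensions in Table 1 are richer than the three fields shown here.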
Other service areas that have been identified include those that recognize the surrounding environment and situation to provide contextual control or information, or to provide integrated, intelligent control of tasks that are difficult for the user to see each time or that require different tasks to be performed individually. The proposition of specific guidelines or methods for interacting with the user is contingent upon the level of visual complexity of the service area. In the case of telework, it was determined that body movements, including those of the head and arms, are a significant factor, given the potential impact of visual factors on interaction, even in the absence of a physical robot.
As demonstrated in the preceding section, a comprehensive review of the extant literature on intelligent agents in home service areas was conducted, revealing significant variations in interaction design across diverse domains. These variations are attributed to the specific purpose, primary tasks, and agent form and type. It was observed that the level and nature of interaction is contingent on the circumstances and conditions encountered by the user when employing intelligent agents within the domestic environment. It is evident that the implementation and configuration conditions, encompassing the interaction method, are undergoing refinement in scenarios or tasks where interaction with the user is paramount. Furthermore, we have identified the literature in the field of HRI that elucidates factors influencing interaction. However, with the exception of a few studies that have identified interaction factors that should be considered in virtual environments, the majority of studies in this domain are applicable exclusively to physical forms. The paucity of research identifying appropriate interaction application factors under various complex conditions is particularly salient in the context of intelligent agents, which are embedded in physical forms (e.g., robots) and utilized systematically. To illustrate, in the case of a robot, a nod may be sufficient to indicate agreement with the other person’s opinion; however, it remains unclear what would be appropriate in the case of an interactive agent. Psychological anthropomorphizing is important to enhance the effectiveness of technology interaction with users, and this is related to empathy; it is therefore necessary to consider how to apply the reaction method to the relevant service area. Shim and Choi [24] found that the presence or absence of anthropomorphism, the degree of humanization, and specific role settings of an agent can affect user experience. 
In a similar vein, a study [25] found that user adoption rates increase with problem-solving capabilities and anthropomorphizing levels in interactions with chatbots, which are interactive agents, confirming the need for research that examines interactions in terms of artificial empathy. Accordingly, this study aimed to identify the appropriate interaction method, and the empathy expected by users according to the setting factors categorized by home service area.

2.2. Artificial Empathy Theory and Applications

Empathy, a fundamental component of interpersonal relationships, is crucial to maintaining acceptance and flexible relationships with others [26]. The traditional definition of empathy, frequently employed in psychological discourse, refers to the process of humanizing an object and reading or feeling its cues [27]. The concept is also significant in various other fields, including neuroscience, psychotherapy, social development, art, and engineering. While research continues on applying empathic concepts and methodologies to AI and user interaction systems, the anthropomorphizing factor remains a complex problem that must be addressed to bridge the gap. In psychology itself, moreover, empathy is characterized as a complex process in which the emotions and perceptions of others can be expressed differently depending on individual characteristics, capabilities, reactions, and relationships. Empathy research has highlighted the integration of emotional and cognitive dimensions in its conceptualization, emphasizing that empathy is not a discrete process but a multifaceted and interconnected one [27,28]. An empathic process that includes both emotional and cognitive dimensions requires maintaining the cognitive and experiential boundaries that separate one’s own thoughts and feelings from those of the other person while emotional resonance occurs, which suggests that empathy demands nuanced and appropriate responses in different situations and contexts. Given the intricacies of empathy and the challenges inherent in implementing it naturally, as observed in human behavior, numerous researchers in agent and robot development have sought to identify factors that can enhance the efficacy of empathy or to propose an overarching framework to systematize the empathy process [29,30,31].
The extant literature has demonstrated that the process of recognizing emotional cues begins with a common process of perception, followed by a series of processes of judgment and emotional expression [32]. Firstly, the process of recognizing emotional cues involves embedded emotional cues, such as elements perceived through the senses. This is followed by the process of identifying intentions based on the recognized cues, and then modulating and matching them with appropriate empathic responses. The level of response can vary depending on factors such as personality and role. Empathic responses are then demonstrated, and ways to express emotional behavior are also considered. The express and react phase usually includes elements for the means of output. Empirical research in this domain has focused on the development of interfaces for empathic responses and the categorization of expressive modalities that facilitate multimodal communication to enhance mutual empathy and understanding [33,34,35]. These have been classified into verbal, visual, auditory, and non-visual categories, or even proposed combinations thereof, to design and propose interfaces that can achieve effective interaction. Within this basic empathic response structure, it was found that different fields and service areas have different detailed configuration factors that determine what to emphasize, interpret, or express in different ways [36,37]. However, given that the empathy factors were explored by focusing on different purposes and tasks in different fields, there are different definitions and levels of application for each factor, and the application of empathy needs to be checked individually under different conditions. 
In addition, although interaction methods are also related, there is a lack of an efficient system that can be connected to each empathy factor and identified by service area, and related research emphasizes the need to develop a framework with empathy measurement factors for intelligent agents. This study aims to identify various components that affect empathic interaction between intelligent agents and humans based on the literature studies and establish a conceptual framework of empathic human–agent interaction, which can be used as a theoretical guideline for future empathic human–agent interaction design.

3. Methods

Based on prior research, this study extracts components that affect interaction with AI agents and establishes and proposes a conceptual framework for human–agent interaction that applies existing artificial empathy structures. The overall research flow and detailed objectives are shown in Figure 1 below.
Firstly, a comprehensive review of the extant literature on the empathic responses that users expect from intelligent agents, and the empathy elements they deem significant, was conducted. Thereafter, the flow or structure underlying the basic concept of empathy, and the empathy components applicable within the framework, were derived. The proposed framework was then used to categorize additional factors that may affect the interaction itself, and to explain the process of empathy in AI–user interaction systems, which share the basic structure adopted in this study. As an initial tool for the framework, we borrowed a model proposed for the development of empathic robots that allows specific, purposeful capabilities to be defined [38]. This model can serve as a reference for designing empathic social robots, since empathy in robots must be designed to be human-like in order to be perceived positively by users. Its main components are the process by which a robot forms empathy, the outcomes corresponding to the reactions that occur during the empathy process, and the moderating factors that influence the process and outcomes of empathy. This structure reflects both the empathy process described in conventional psychology and the considerations that arise when applying AI. It was therefore deemed an appropriate format for examining empathy and interaction between intelligent agents and users, as is the purpose of this study. However, this study also intended to include intelligent agents that operate systematically, in addition to tangible robots. To this end, relevant studies were identified and added using a chain-referral sampling method. Subsequent steps were then taken before the final framework proposal to validate the structure and components of the initially proposed framework.
This was achieved by reflecting actual industrial cases of smart home services using intelligent agents that are currently commercialized or planned to be developed in the framework and checking whether there are additional factors that need to be considered or are unnecessary. For the industry cases, we selected future scenarios and products on the market that well reflect the empathy interaction component and focused on the top 10 IT companies. Through this process, we redefined the necessary components of the framework and proposed a final framework to identify the empathy users expect from each home service area.
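The three-part structure of the borrowed model described above can be sketched as a minimal data structure. The field names and example values below are illustrative assumptions, not terminology taken verbatim from [38].

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EmpathyModel:
    process: List[str]     # steps by which the agent forms empathy
    outcomes: List[str]    # reactions produced during the empathy process
    moderators: List[str]  # factors influencing both process and outcomes

# Hypothetical instantiation to show how the three parts relate;
# the entries are placeholders, not the model's actual components.
model = EmpathyModel(
    process=["recognize cues", "interpret intent", "select response"],
    outcomes=["verbal reaction", "non-verbal reaction"],
    moderators=["agent role", "user traits", "context"],
)
```

Separating moderators from the process itself mirrors the design decision in this study to treat context, user characteristics, and agent characteristics as inputs that shape, rather than belong to, the empathy pipeline.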

4. Results

4.1. Analyzing Artificial Empathy Interaction Components and Deriving an Initial Framework

Previous studies have established that psychological anthropomorphism is significant in the interaction between intelligent agents and users, and that this phenomenon is related to empathic responses. To demonstrate natural, human-like responses, external environmental factors such as context and situation must be considered in addition to form and function. Consequently, this research identified components to derive a framework that can determine, in a multifaceted manner, which components should be considered for various home service areas and situations and how interactions should be designed. To establish the framework, we first checked whether an existing empathy structure could be borrowed to capture the overall flow. A comprehensive review of studies that have employed the empathy process in the context of AI revealed that the structure of recognizing, interpreting, and responding to emotions remains consistent across these applications, with each stage delineating the components identified [30]. Furthermore, the empathy process comprises additional components that must be given due consideration, such as information pertaining to the user, the situation and context, and the setting factors of the agent, given that the majority of studies focus on the interaction between a robot or agent and a user. Consequently, the structure of the framework proposed in this study draws from previous studies, as it is essential to examine the connections between various components. However, the borrowed framework is constrained to tangible robots; consequently, additional literature on empathic interfaces and empathic interactions of intelligent agents was identified and collated according to the purpose of this study (see Table 2 below).
Additional concepts were cited from the literature to include the characteristics and details of intelligent agents operating as a system within the borrowed structure.
The components of the proposed initial framework are broadly categorized as follows: the purpose of the intelligent agent, context, user characteristics, AI agent characteristics, and interaction. Firstly, the purpose of the intelligent agent corresponds to the purpose of providing services and the goals of the tasks it performs to meet that purpose. This element must be identified to define the goals to be pursued and the functions required when designing an intelligent agent; it is also where the overall implementation direction can be confirmed.
The subsequent section pertains to the context component, encompassing situations and relationships. This component is further categorized into five intermediate categories: situation and context, task, relationship, space, and time. These elements have been explored in studies that utilize the concept of layers to describe the flow of smart technology within the home environment or the elements that should be viewed from the user’s perspective [45]. The studies that addressed the flow of technology and those that viewed it from a user-centered perspective highlighted the same point: the perceived context and surrounding factors related to the user, such as tasks and relationships. These are factors that are also mentioned in existing product ecosystem models and need to be considered to understand the overall user experience. In this study, we analyzed these factors closely and added them to the components to find customized, empathic interactions [46].
Firstly, detailed factors characterizing the situation and context are established, particularly for tasks. Importance captures how significant the situation is to the user’s needs. Accuracy determines whether the situation requires precise information delivery or action performance; the lower this factor’s weight, the more social, rather than informational or functional, the situation is. Controllability determines whether the situation itself can be controlled and manipulated by an intelligent agent. Situation significance determines how much attention the user needs to pay to the situation they are in; it is treated as a sub-factor because the more urgent the situation, the more likely it is that empathic interactions will be configured differently, for example in the manner of speaking or expression.
The subsequent relationship factor is composed of sub-factors to examine the relationship between the user and the intelligent agent. The first of the sub-factors is the similarity between the user and the AI, which is measured as the degree to which the user’s image type is similar to the AI’s image type and whether they can feel a sense of unity. The intimacy sub-factor, meanwhile, is concerned with the closeness between user and AI, whilst the affinity sub-factor determines whether the user holds a favorable opinion of the AI. Intimacy may vary according to frequency of use or acceptance of the technology, whilst user perception of AI responses may be positive or negative, and thus influence user liking. The involvement sub-factor is dependent upon the extent of user interaction with the AI. This construct was inspired by a study by Kim and Kang, who posited that the anticipated functions or roles may be contingent on the user’s level of engagement [4]. This notion was deemed pertinent as the anticipated empathic interaction may also be subject to variation depending on whether the user requires basic task execution or social functions. The spatial dimension encompasses the characteristics of the physical environment in which the intelligent agent is situated. Physical proximity within a space is an essential factor because the type and level of direct interaction can vary depending on whether the space has a physical appearance. Sub-factors were also set up to categorize the type of space: individuality and concentration. Individuality determines whether a space is utilized individually or shared with many users. Concentration is a factor that determines whether a space is focused and utilized for a specific purpose or whether it is transient and used in passing. As with space, the temporal factor is further subdivided into smaller factors. Proximity is a factor that distinguishes the duration of the user’s interaction with the AI. 
Utilization is organized so that the user can choose whether time is utilized for efficiency and practicality or for other values. Repetitiveness was categorized according to whether the task occurs at regular intervals of time or intermittently.
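The five context categories and their sub-factors described above can be sketched as nested data structures. This is an illustrative modeling assumption: the framework itself uses indicator scales and free-text fields, so the 0.0–1.0 floats and booleans below are one possible encoding, not the paper’s formal notation.

```python
from dataclasses import dataclass

@dataclass
class TaskContext:
    importance: float       # how significant the situation is to the user's needs
    accuracy: float         # low value -> more social than informational/functional
    controllability: float  # can the agent control/manipulate the situation?
    significance: float     # urgency / attention the user must pay

@dataclass
class Relationship:
    similarity: float   # user's image type vs. the AI's image type
    intimacy: float     # closeness, e.g. shaped by frequency of use
    affinity: float     # favorable vs. unfavorable opinion of the AI
    involvement: float  # basic task execution vs. social engagement

@dataclass
class Space:
    proximity: float    # physical closeness within the space
    individual: bool    # individually used vs. shared with many users
    concentrated: bool  # purpose-focused vs. transient, used in passing

@dataclass
class Time:
    proximity: float    # duration of the user's interaction with the AI
    efficiency: bool    # time used for efficiency/practicality vs. other values
    repetitive: bool    # regular intervals vs. intermittent occurrence

@dataclass
class Context:
    task: TaskContext
    relationship: Relationship
    space: Space
    time: Time

# Hypothetical example: an urgent, individually used, one-off situation.
ctx = Context(
    task=TaskContext(importance=0.9, accuracy=0.8, controllability=0.5, significance=0.7),
    relationship=Relationship(similarity=0.4, intimacy=0.6, affinity=0.7, involvement=0.5),
    space=Space(proximity=0.8, individual=True, concentrated=False),
    time=Time(proximity=0.3, efficiency=True, repetitive=False),
)
```

Encoding the sub-factors explicitly makes it possible to compare two service scenarios field by field, which is essentially what the canvas asks a designer to do by hand.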
The following characteristics are attributed to users and intelligent agents (see Table 3). These characteristics correspond to the HRI empathy model which was adopted in this study. However, due to the comprehensive scope of the detailed elements, this study added and refined them based on empathy theories in the field of psychology and studies related to empathic interfaces.
Firstly, user characteristics are divided into user disposition and user’s characteristic situation. User disposition is a factor that distinguishes identity, and we have categorized it into individual-centered and group-centered. Change acceptance is a factor that distinguishes whether a user is oriented towards stable situations and responses or an achievement-oriented person who prefers stimulation, change, and variety. Relatedness to others is a factor that categorizes whether a user is an extrovert or introvert. Situation control is a factor that categorizes whether a user has some control over a given situation by planning or improvising.
User situation consisted of factors that were optional, as opposed to the factors previously identified in context and situation. First, for the number of users, we distinguished between single and multiple users. This was deemed necessary because in the case of multiple users, we need to define which situations or criteria they should interact with. The user support relationship role was added as a sub-factor because the expected empathy and interaction may vary depending on whether the user is in the position of providing care or receiving care in a care situation.
Intelligent agents exhibit personality traits analogous to those of users, yet an additional transparency factor has been incorporated, enabling the adjustment of their degree of honesty when presenting information. A distinguishing feature of intelligent agents is their categorization based on the presence or absence of a physical form, thereby distinguishing between extrinsic factors and intrinsic characteristics. The specific details of extrinsic and intrinsic factors vary across studies and cases, necessitating their organization without the establishment of an index.
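A hedged sketch of the user and agent characteristic factors just described is shown below. The boolean encodings of the dispositional dichotomies, and the numeric transparency scale, are modeling assumptions for illustration rather than definitions from Table 3.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Disposition:
    group_centered: bool    # individual-centered vs. group-centered identity
    change_accepting: bool  # stability-oriented vs. achievement/variety-oriented
    extroverted: bool       # relatedness to others: extrovert vs. introvert
    improvising: bool       # situation control: planned vs. improvised

@dataclass
class UserSituation:
    multiple_users: bool       # single vs. multiple users
    caregiver: Optional[bool]  # support-relationship role; None outside care contexts

@dataclass
class AgentCharacteristics:
    disposition: Disposition  # agents share the user's trait dimensions (per the text)
    transparency: float       # degree of honesty when presenting information
    embodied: bool            # physical form present or absent

# Hypothetical example: an embodied, fairly transparent agent configured
# with an outgoing, change-accepting persona.
agent = AgentCharacteristics(
    disposition=Disposition(group_centered=False, change_accepting=True,
                            extroverted=True, improvising=False),
    transparency=0.8,
    embodied=True,
)
```

Reusing the same `Disposition` type for users and agents reflects the text’s observation that intelligent agents exhibit personality traits analogous to those of users, with transparency added as the agent-specific factor.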
In the final analysis, the overall empathy process was divided into distinct phases, encompassing the interaction between the user and the intelligent agent. The detailed elements of this process are delineated in Table 4. In accordance with extant empathy theories and HRI research, the organization of empathic cues into a sequence of recognizing, understanding, interpreting, and responding to their meaning was facilitated. In addition, we have included technologies that can be utilized in the empathy process to ascertain which empathic cues are matched and connected to each process.
In the recognition stage, empathic cues are generally recognized through sensing and are divided into direct and indirect cues. The concept of the empathic cue was borrowed from previous studies on empathic signals and data and renamed for application within the framework of this study. In a related study, empathic signals that are expressed explicitly and visibly, and those that are latent and require inference, are referred to as knowledge; for the purpose of this study and to facilitate comprehension, the term empathic cue is used instead. Direct cues include explicit clues, such as mechanical or electronic manipulations, spoken language, gestures, and sounds. Indirect cues are implicit or latent cues with implied meaning, including facial expressions, vital signs, specific actions or tasks, and environmental information such as light, temperature, humidity, and odors. These cues indicate when further interpretation is needed to clarify the situation and message. Techniques used in the recognition phase include recognizing emotions or vital signs through the empathic cues.
The comprehension stage is the component of interpretation that facilitates comprehension of the meaning of previously recognized empathic cues. It comprises the empathic understanding method, the tools for empathic understanding, and the skills utilized. In this step, a level of accumulation was added to determine whether the cues were emotional or cognitive and to correlate them with existing data. Emotional empathy involves understanding the emotions and emotional cues that another person is expressing or experiencing. In contrast, cognitive empathy pertains to the ability to comprehend the situation through indirect or direct cues.
The subsequent judgment stage follows on from the understanding stage and involves reasoning and deciding what empathic response to implement. Since this stage is concerned with determining how to express the cues interpreted in the understanding stage, it is divided into emotional empathy and cognitive empathy and consists of the same skills utilized in the other stages. The expression of similar emotions in the judgment phase is indicative of emotional empathy, while the ability to judge the user’s thoughts or feelings is indicative of cognitive empathy. The factors that influence cognitive empathy include the robot’s role and perspective, which can be considered in the process of making judgments.
The final reaction phase addresses the manner and contexts in which the intelligent agent expresses empathy. This phase is categorized into reaction methods, reaction types, and enabling technologies. The types of reactions that users can recognize are categorized as verbal empathy, non-verbal empathy, and empathy through environmental changes. Verbal empathy includes text, spatial language, spoken words, and sound effects. Non-verbal empathy encompasses facial expressions conveyed through facial movements such as the eyebrows, eyes, and mouth; images, including emojis; tactile responses such as vibration; and movement through body control. Environmental-change empathy includes light, temperature, humidity, and odor. The categorization of reactions was further refined by the nature of the intelligent agent’s response, distinguishing between positive and negative responses, and whether supporting technology was used for the response was also recorded.
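The four-stage process described above (recognition → understanding → judgment → reaction) can be sketched as a minimal pipeline. This is an illustrative sketch only: the type names, the `source` labels, and the mapping rules in `understand` and `judge` are hypothetical simplifications, not part of the proposed framework.

```python
from dataclasses import dataclass
from enum import Enum, auto

class CueType(Enum):
    DIRECT = auto()    # explicit: manipulations, spoken language, gestures, sounds
    INDIRECT = auto()  # latent: facial expressions, vital signs, environmental information

class EmpathyMode(Enum):
    EMOTIONAL = auto()  # understanding expressed or experienced emotions
    COGNITIVE = auto()  # understanding the situation through direct or indirect cues

class ReactionType(Enum):
    VERBAL = auto()         # text, spoken words, sound effects
    NON_VERBAL = auto()     # facial movement, emojis, vibration, body control
    ENVIRONMENTAL = auto()  # light, temperature, humidity, odor

@dataclass
class EmpathicCue:
    source: str       # hypothetical label, e.g. "voice_command" or "room_temperature"
    cue_type: CueType

def understand(cue: EmpathicCue) -> EmpathyMode:
    # Comprehension stage: a toy rule that treats emotion-bearing sources as
    # emotional empathy and everything else as cognitive empathy.
    emotional_sources = {"facial_expression", "vital_signs", "crying"}
    return EmpathyMode.EMOTIONAL if cue.source in emotional_sources else EmpathyMode.COGNITIVE

def judge(mode: EmpathyMode) -> ReactionType:
    # Judgment stage: decide how to express empathy; here emotional empathy
    # triggers a verbal response and cognitive empathy an environmental change.
    return ReactionType.VERBAL if mode is EmpathyMode.EMOTIONAL else ReactionType.ENVIRONMENTAL

def react(cue: EmpathicCue) -> ReactionType:
    # Full pipeline: recognition (the sensed cue) -> understanding -> judgment -> reaction.
    return judge(understand(cue))
```

For example, an indirect `room_temperature` cue would flow through cognitive understanding to an environmental reaction, while a `crying` cue would be understood emotionally and answered verbally.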
The components and details of the initial framework described above are visualized in Figure 2 below. In a manner analogous to the HRI empathy process model adopted in this study, arrows are included to show the flow. The detailed factors that can be rated on a scale are expressed as indicators, while the parts that require further explanation are left unrestricted so that they can be written freely.
The initial framework proposed in this study aligns structurally with traditional product ecosystem models, encompassing users, context, and interactions. Product ecosystem models serve as a framework for comprehending the interactions between users and products, as well as the interactions that arise from products influencing user behavior [46]. They provide a way to understand the experience of using a product at the product or system level and can support the design process in enhancing the user’s experience. The empathy process addressed in this study is in the same vein, as it aims to provide an enhanced experience through the empathic responses that users need when using home services. The similarity between the components of the product ecosystem model and those of this study’s framework supports the necessity of identifying them from a user experience perspective. However, further validation is required to determine whether all the components in the initial framework are appropriate and whether anything is missing.

4.2. Advancing the Framework by Applying Industry Practices

This study refines the initial empathic human–agent interaction framework proposed earlier, with the objective of determining whether the organization of the components is valid and identifying any components that are missing or require refinement. Industry examples are incorporated within the framework to reflect each component and visually illustrate the interaction between AI agents and users. Given that this study proposes a framework for identifying appropriate empathic interactions in the realm of home services, cases of intelligent agents in the home environment were selected. A total of 30 cases were selected, comprising future scenarios or products on the market represented by intelligent agents applied to robots and home appliances by various companies, including leading IT companies. However, because the scenarios and product features cover many areas of home services, they do not include very detailed tasks. Therefore, we extracted only what was clearly stated or understood in the scenarios and applied it to the framework to see whether the cases showed similar behavior. Where scenarios encompassed multiple functions and situations, these were examined and applied to the framework individually. An evaluation was then conducted to ascertain the applicability of the findings to each element. This process resulted in an analysis table of the 30 cases according to the framework structure, and Table 5 illustrates three representative cases as examples. The comprehensive analysis data for all cases are provided in the Supplementary Materials.
A thorough examination of the outcomes of applying the framework to the cases showed that analogous patterns and characteristics could be identified even where the service areas differed, which paved the way for classifying the cases. To facilitate the classification, an initial classification axis was established based on trends that exhibited similarity. This study confirmed that the empathy interaction method varies according to the service area, function, context, and task within the home service domain; consequently, dividing by empathy area appeared to be the clearest approach. The first axis was thus set to delineate emotional empathy from cognitive empathy. The second axis was positioned to reflect the degree of technological advancement in accurately identifying situations, contexts, and other parameters and executing tasks accordingly. This was chosen as a discernible criterion across the cases and aligned with the research objective.
The classification of cases along these axes resulted in the distinct categories illustrated in Figure 3. The first category is designated the “Loyal Assistant” type and encompasses tasks executed solely upon user request. This category includes childcare assistance, wherein the assistant observes the situation and context on behalf of the user and notifies them of immediate tasks, as well as efficient energy management within the domestic environment. The cases in this category respond primarily by accurately recognizing the situation, demonstrating a greater focus on cognitive empathy than on emotional empathy. They are also constrained to tasks that monitor the situation or address straightforward user requests, reflecting a comparatively lower level of AI than the other cases. A distinguishing feature of this category is the inclusion of scenarios where the user’s inability to articulate their needs results in suboptimal performance. Consequently, these cases have been classified within the Low Intelligence category.
The second category is the “Qualified Butler”, which includes cases that effectively manage the overall home environment. Most of these cases execute functions at the opportune moment to resolve situations that are problematic for the user, such as understanding the context and switching to a customized mode or managing security. Like the first category, this category is closely related to cognitive empathy in that the agent comprehends the overall situation and controls the environment accordingly. However, this type exhibits a more natural empathy response through accurate situational judgment, context recognition, and analysis than the first type, thus occupying a higher intelligence level.
The final type is the “Reliable Mate”, which refers to cases where the level of intelligence is highly advanced, founded on strong AI, and which, unlike the previous two types, displays emotional empathy. Typical examples include digital health and agents that directly assist in childcare when the user is absent, display emotional responses in various forms of empathy, and involve experts to deliver and manage knowledge. In particular, the tone of voice, gender, and characteristics of the AI are set according to the characteristics of caregivers, who account for a high proportion of childcare, to enhance the effectiveness of empathic interactions. In the digital health case, the agent is set up as a virtual avatar that takes on the characteristics of medical staff to create a sense of professionalism and induce both cognitive and emotional empathy. These systems are unique in that, unlike the other types, they emphasize emotional empathy responses even though they cover service areas demanding strong expertise. They also include scenarios, such as the use of virtual avatars, that show more diverse interaction methods.
The empty quadrant in the case analysis represents instances in which the level of artificial intelligence is low yet emotional empathy is desired. A single identified instance illustrated a scenario of false empathy: a user whose pronunciation was unclear due to a physical condition repeatedly failed to complete a task because of incorrect command recognition, despite the agent’s empathic responses. The user’s satisfaction was consequently low, and this instance of empathy was not perceived as genuine. This outcome was deemed an undesirable form of empathy interaction due to its adverse effect.
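The two-axis typology described above can be summarized as a simple lookup from axis positions to types, including the empty quadrant identified as undesirable false empathy. The function name and string labels below are illustrative conveniences, not part of the proposed framework.

```python
def classify_case(empathy_focus: str, intelligence: str) -> str:
    """Map a case's position on the two classification axes
    (empathy_focus: "cognitive" or "emotional"; intelligence: "low" or "high")
    to one of the three derived types, or to the empty quadrant."""
    typology = {
        ("cognitive", "low"):  "Loyal Assistant",    # on-request tasks, monitoring
        ("cognitive", "high"): "Qualified Butler",   # proactive environment control
        ("emotional", "high"): "Reliable Mate",      # emotional responses, avatars
        ("emotional", "low"):  "False empathy (undesirable)",  # empty quadrant
    }
    return typology[(empathy_focus, intelligence)]
```

For instance, a childcare-monitoring scenario that reacts only to explicit requests sits at ("cognitive", "low") and is classified as a Loyal Assistant, whereas the digital health avatar case sits at ("emotional", "high").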
Following a thorough examination of each case and their classification according to the predetermined criteria, the three derived types were organized according to the initially proposed framework format. This was done to verify the absence of superfluous elements and to ascertain whether any additional verification was required. Given that the same type encompasses diverse service areas, it was visualized in a manner that makes clear whether an element needs to be expressed on a scale or selected multiple times. The upper part describes the purpose of using the AI agent, expressed by the type name and the cases included in the type. Below that is a section that captures the overall context, divided into context, relationship, space, and time. The lower left describes the characteristics of the user and is placed next to the user section so that recognition, the initial stage of the empathy interaction process, can be seen. The entire sequence is designed to be visible, from the AI agent’s recognition of the user through to the final response. The characteristics of the AI agent are displayed at the bottom right and are closely related to the understanding, reasoning, and response stages of the empathy interaction process. For clarity, this section is displayed in the same color as the user section, so that the connection from empathy recognition to response can be understood. Furthermore, any ambiguous or unclear expressions identified in the nomenclature of the initial framework were replaced with straightforward terms to eliminate potential confusion.
The framework visualizing the first type, the Loyal Assistant, is shown in Figure 4 below. This type includes cases that monitor the situation and perform requested tasks, including those focusing on functional aspects such as energy management and those controlling entertainment elements. Most are tasks that users can control directly and that are incidental to daily life, located between hedonic and functional tasks. Furthermore, the scenarios themselves do not constitute urgent situations and thus do not require significant attention, because they remain within the user’s capacity to control. The characteristics of the situation are ultimately connected to the relationship between humans and AI agents; since the primary function of the AI agent is to receive a specific task from the user and to perform it, the level of intimacy and sense of unity is moderate, and the level of user involvement is either Cooperator or Collaborator. The spatial characteristics exhibited in the scenarios were personalized and focused: the systems automatically identify voice requests and environmental factors and perform tasks, primarily in embedded form, demonstrating limited physical accessibility within the space. In terms of temporal dynamics, the tasks performed resembled routines, with an emphasis on practicality.
After identifying the overall scenario in terms of situation, relationship, space, and time, the flow and characteristics of the empathy interaction and interface were examined in the lower part. The empathy interaction process was identified by arranging the relevant studies referenced when establishing the initial framework in the order of recognition, understanding, reasoning, and reaction. Due to the nature of this type of scenario, the level of recognition technology is very high because the agent must recognize and resolve situations. The degree of users’ expression of intention is high because they request tasks, but the degree of expression of emotion is very low because the main function is to control the situation and change the environment. This is related to the cues that the AI agent recognizes: environmental changes, movements, user voice commands, and sounds were found to serve as the main cues for empathy. This type encompasses numerous scenarios pertaining to situational awareness, necessitating a high level of proficiency in comprehending contextual nuances. Conversely, the capacity to interpret, reason, and respond empathically is less developed, as the scenarios involve notifications based on straightforward requests or monitoring data. Because the agent must adjust the environment or perform tasks after establishing context, an embedded system is preferable to an AI agent endowed with a human-like appearance.
In this section, we set out the direction for establishing empathy interaction within the framework, as demonstrated by representative examples in this category. The first is an instance of automatic control and adjustment to conserve energy. The user experienced empathy interaction with the AI agent through a brief, rapid notification sound and the text display in the companion app. We determined that such interaction is essential where regular notifications are necessary or where data are collected and reported. Secondly, in the context of monitoring to facilitate childcare, empathy interaction is manifested through light, notification sounds, and on-screen images. Given the urgency characteristic of many situations that arise during monitoring, empathy reactions are typically brief yet impactful. Finally, a case study demonstrated a scenario in which the agent performs a task or controls a function in response to a user request. Because such responses are primarily conducted through dialogue, an interaction method that seamlessly integrates into the context is necessary; this may involve an environmental adjustment that gradually raises the music volume, or the natural movement of a connected device.
The framework visualizing the second type, the Qualified Butler, is shown in Figure 5 below. In this type, the agent performs tasks by identifying the home environment, as in the first type. In contrast to the previous type, however, it not only performs the tasks requested by the user, but also covers many scenarios in which it identifies the environment and situation and controls them smartly in advance. This type was almost entirely focused on functionality and is characterized by a multitude of automated features that govern the environment even without direct user intervention. The second type thus involves fewer situations requiring user intervention, which prompted an investigation into the impact of this on empathy interaction. In terms of relationship dynamics, the intimacy and sense of unity between humans and AI agents is minimal, primarily because the user’s role is that of a Bystander or a Cooperator. In terms of space, these are cases where the automation function is activated without much user involvement, resulting in low physical accessibility and diverse degrees of spatial division and concentration. In terms of time, the functional aspect was emphasized, corresponding to the “Time Saving” category; it was also confirmed that these functions are not repeated and appear only when necessary, corresponding to the “Occasional” category.
Having identified the characteristics of this type in terms of situation, relationship, space, and time, we then examined the characteristics exhibited by the empathy interaction flow at the bottom. The recognition stage, which collects empathic cues, included numerous elements that detect changes in the surrounding environment, in addition to user requests. The level of awareness of the user’s situation and the types of cues collected are similar to those of the first type. This category also encompasses scenarios where tasks are executed following situational awareness, resulting in elevated levels of context understanding, reasoning, and responsiveness. As these systems primarily identify situations and environments, they are predominantly embedded and lack a physical form. While analogous to the first category, this type demands a more advanced technological capacity to comprehend, reason, and respond, owing to the necessity of handling sophisticated functions and performing higher-level tasks. A notable distinction is the emphasis on time efficiency, combined with a substantial capacity for empathy responses.
To ascertain the direction in which empathy interaction is set up in the framework, representative examples were examined. Firstly, the home environment control case shows different empathy responses depending on whether the situation concerns entertainment or security, and presents customized interactions at the right time and in the right way, demonstrating a higher level of empathy. Specifically, the tablet attached to the device displays a motion as if tilting its head, or presents the subsequent step on the screen in advance along with the necessary information, demonstrating a sophisticated level of empathy. In the home security scenario, the embedded system signals an emergency through a whistle, as if a person had detected the intrusion. However, the system was also found to respond in ways inconsistent with human behavior; for instance, it did not immediately respond to movement, such as by activating lighting or emitting audible signals, in situations where such responses might be expected, such as when an individual approaches. Consequently, it was determined that the system should exhibit interactions that convey a strong message at the opportune moment.
The framework visualizing the third type, the Reliable Mate, is shown in Figure 6 below. Unlike the other two types, this type includes examples of emotional empathy in digital health, advanced parenting, and daily tasks. It emphasizes emotional responses and shows that appropriate forms of empathy are needed in healthcare and parenting. In terms of situation, these cases use an emotional empathy approach but are closer to the functional end, as they perform functional tasks. The user has a high degree of autonomy, with no situations requiring significant attention. In terms of relationships, the level of intimacy and sense of unity between humans and AI agents is notably high, particularly when user engagement is high and they work together as Collaborators, communicating closely. Reflecting this engagement, the space was close to the user, since user and agent work together in a close relationship. The spatial aspect of the environment was found to be of medium size, with varying degrees of both separability and centrality. In terms of time, these cases occurred frequently, were not tied to the occurrence of a specific event, and were conducive to time efficiency.
An examination of the flow of this type of empathic interaction reveals a notably elevated level of recognition technology. This is primarily attributable to tasks such as direct user requests, communication with the AI agent, and the balancing stage, which together contribute to the comprehensive recognition process. The identified cues encompassed a balanced distribution of both direct and indirect factors, including the user’s behavior, voice, and environmental changes. The perceived cues differed little from those of the first two types; the divergence emerged in how they were interpreted and responded to. This category demonstrated a high degree of proficiency in comprehending both cognitive and emotional cues, which carried through to the subsequent reasoning stage based on the comprehended content. A diverse array of empathy reactions was exhibited in the reaction stage, in contrast to the preceding two types, encompassing text, images, environmental control, and virtual avatars. This was attributed to the unique characteristics of the AI agent, including its form and verbal expression, and to its high emotional responsiveness, which distinguished it from the previous two types.
The analysis of representative examples reveals a more diverse set of empathy methods. Firstly, the home healthcare case demonstrated empathy interaction in which the user was immersed in the situation and felt as if they were being treated by a doctor, with a virtual avatar conveying a doctor’s professionalism through details such as clothing and glasses. While virtual environments have predominantly been employed for entertainment, this case shows their potential in professional domains to demonstrate a high level of empathy, serving as a reference point for other forms of empathy. The second example pertains to parenting; in contrast to the monitoring-type parenting observed in the first type, it exemplifies a high level of empathy. Here, the voice is matched to the gender of the primary caregiver, fostering a sense of realism and emotional resonance. This interaction exemplifies the integration of an emotional empathy method within a professional parenting framework, underscoring the significance of customizing methods to the user’s unique context and patterns. The third example, daily task assistance, reinforces the efficacy of emotional empathy in fostering human-like interactions. This was primarily observed in stand-alone robots, which exhibited human-like behavioral reactions such as nodding and tilting their heads, suggesting that non-verbal empathy interactions play a significant role in communication with users when providing information and performing tasks.

4.3. Final Framework Proposal: Empathic HAX (Human–Agent Interaction) Canvas

The industry cases were applied to the framework in order to identify the elements that should be considered in the final canvas. The final canvas-type framework is exhibited in Figure 7 below.
First, it was confirmed that the level of user empathy varies depending on the degree of empathy expression. In the case of emotional empathy, the level of empathy interaction between the user and the agent increases when the user clearly expresses their thoughts or feelings, or when the user treats the AI agent like a real person. By contrast, the first and second types show empathy by recognizing changes in the situation surrounding the user and adjusting the environment appropriately. This suggests that even where the user’s level of expression or emotional response is modest, they can still experience empathy when the environment is optimized. Consequently, in service sectors that prioritize environmental settings, a multifaceted approach is imperative: the agent must intelligently identify the user’s environment and situation to support an interaction with the environment that is context-sensitive and responsive to changes. In conclusion, distinct considerations must be established depending on the nature of the service to facilitate effective empathy and interaction.
The next step concerned refining nomenclature and elucidating classification criteria where they appeared opaque, with the objective of enhancing the clarity of the framework. The first concerns the external manifestation of the AI agent: its classification depends on its resemblance to a human in both appearance and behavior, and entities that meet these criteria are designated “Human-like”. This modification was implemented to avoid confusion arising from non-uniform nomenclature within the framework. Second, we augmented the framework with additional explanations of the empathy interaction process. Specifically, we incorporated a definition indicating the technology employed by the AI agent during the understanding and reasoning stages, so that observers of the canvas can comprehend it immediately. In the response stage, the intention is to indicate the appropriateness of the response and the level of response technology; because this is difficult to ascertain at a glance, a definition was added to the canvas. The recognition stage, however, already has a section where empathic cues can be entered and is named after the level of related cognitive skills, so no additional content was added there.
In the domain of AI agent characteristics, certain elements pertaining to reactions were incorporated. A case study substantiated that the background sound is played at a rapid tempo in urgent situations; this was therefore incorporated into the characteristics section as a factor that influences empathy reactions and exhibits a correlation with them. The next addition concerns the categories of light expression. The manner in which light is expressed was found to vary between scenarios, such as dim lighting along the user’s movement path versus the floor being rapidly illuminated with a welcome message where the user is greeted. These reactive features were incorporated to allow more precise consideration when setting up empathic interactions. Furthermore, matters such as the intensity and method of expressing voice or sound frequently required more detailed explanation; consequently, these were composed as descriptive rather than selective fields, so that users could freely document the voice and background sound they receive.
In the preceding case analysis stage, these options were designated as selection types. However, in the AI agent feature area and the empathy recognition and empathy response stages, where the appropriate option cannot be determined at a glance, they were modified so that appropriate empathy interactions could be examined through free description. In other areas, selection in the form of action buttons was enabled to increase the usability of the canvas, as shown in Figure 8 below.

5. Discussion

This study commenced with an exploration of how the methods and types of empathy expression differ across categories of interaction between users and intelligent AI agents within the home service domain. A review of the extant literature revealed that studies on the interaction between technology and users were predominantly discussed from a technical perspective, and it was concluded that studies taking a multifaceted approach to identifying empathic interaction based on the context, applicability, and level of the technology are needed. In this study, we examined how empathic interactions vary according to services, functions, contexts, and tasks in the home service area, where AI technology can assist users in their daily lives. To this end, we defined the concept of empathy for AI agents and conducted a literature analysis to derive the components of empathy. Based on this, we derived an initial framework to establish an artificial empathy interaction system. We then selected actual industry cases, supplemented and advanced the framework, and proposed the final framework.
The research identified three types of empathy interaction in the home service area, together with the characteristics of each type and the technology settings and levels for empathy interaction. The most prominent feature that emerged during categorization was that in the professional areas of digital healthcare and professional childcare, more emotional empathy responses were configured than functional ones. Consequently, it was determined that the level and method of empathy interaction differ depending on the function and purpose, confirming the framework proposed in this study as a versatile tool. Furthermore, the manner in which sound and light are expressed was observed to vary with the situation, as did the level of empathy users derive from them. This framework enables the identification of which interactions are appropriate in different contexts and situations.
We propose a canvas-type framework that can be used in the stages of setting up and verifying empathy interactions, together with how it can be applied in research and practice for designing empathy interactions. It can be used by researchers, marketers, service planners, and developers throughout the entire service development process wherever empathy interaction is required. Accordingly, the following applications are proposed for practitioners in each field.
Firstly, researchers and marketers can use it to identify the status and trends of existing services, as reflected in the industry cases analyzed within the framework of this study. Since the case analysis identified three types of empathy interactions in the home service area, the characteristics of services under analysis can easily be identified by mapping them onto the framework at the research stage. This facilitates the immediate identification of the general characteristics of existing services, as well as the users, AI agents, and situational characteristics at each stage where empathy interactions occur; these characteristics can then be actively used for comparative analysis between services.
Secondly, from the perspective of a service planner, the framework can be used to devise new empathy interaction methods and at the service ideation stage. Among the cases analyzed in this study, a representative example is one in which the father is responsible for childcare in the morning hours, and the tone is set to be gentle so that the father can play with the child. While existing parenting services were limited to monitoring functions, this confirmed that advanced AI technology can support diverse empathy interaction methods and settings depending on the situation and context. It also confirmed that the framework can be used in the benchmarking stage to derive new ideas beyond existing limited functions. Furthermore, it can serve as a checklist for missing settings or elements at the idea and strategy derivation stage; if considered in planning, this makes it easier to identify additional and unnecessary items during actual development.
From the developer's perspective, the framework helps identify the characteristics of each element and the items that need modification when designing actual empathy interactions or establishing guidelines. It makes it possible to identify the level of connectivity at each stage of empathy, depending on the situation, user characteristics, agent characteristics, and conditions, enabling the design of appropriate interactions. The framework also supports assessing how well related factors fit together when determining the level and type of technology at each stage of the empathy response, and it can serve as a diagnostic checklist for verifying the validity and appropriateness of a chosen method after the empathy interaction and function design stages. Because this study reflects actual industry cases, mapping a planned service's empathy interaction onto the framework and evaluating its usability and effectiveness should make it easier to find vulnerabilities and areas for improvement.

6. Conclusions and Further Study

This study confirmed that, as AI technology advances and smart features and services in the home sector become increasingly sophisticated, the functions and values users expect may vary with their level of engagement in the home environment, a highly personal space. These expectations can differ not only in functionality but also in empathic interaction, which corresponds to social functions. The objective of this study was therefore to examine how the appropriate empathic interaction method varies with function, context, and task within the home service domain. To this end, an empathy interaction framework was established in the form of a canvas based on a literature analysis. Actual industry cases were then mapped onto the framework, and empathy interactions in the home service area were classified into three types whose characteristics were analyzed. This process confirmed the academic and practical applicability of the proposed empathy interaction framework.
The proposed empathy interaction framework categorizes and subcategorizes the overall context, users, and AI agents from the perspective of the interaction ecosystem between users and AI agents. Its academic significance lies in identifying and structuring, into a canvas, the elements and flows that make up this entire ecosystem from an empathy interaction perspective. Because the home environment is conducive to eliciting users' empathic responses toward AI agents, the design of the user interaction is particularly important, and visualizing the interaction ecosystem through a comprehensible canvas facilitates the identification of variable definitions and considerations, enhancing usability.
The practical significance of this framework lies in its visualization as a canvas that practitioners can comprehend immediately. The canvas functions as a checklist that various departments and personnel can use seamlessly, serving as a unifying medium for communication across departmental boundaries. A further contribution is the use of apt metaphors as type names: each type, classified along the defined axes, is well characterized by its metaphor and can be communicated simply by name. The study is also practically significant in analyzing the points to consider when setting up an empathy interaction after categorization, as well as the patterns between variables, so that users of the canvas can find the appropriate method for each human–AI agent type.
Notwithstanding its academic and practical significance and usefulness, the study has certain limitations in scope and in the technical issues considered. First, its scope is confined to smart home services, excluding other potential areas and scenarios. Although this study was constrained to empathic interaction within smart home environments, a domain with considerable potential for future development, subsequent studies could identify additional canvas elements that are necessary, or that should be considered, in other situations and locations. Second, agents and robots equipped with advanced technology are not yet employed in diverse ways within smart homes, so advanced service cases are not reflected, constraining the analysis results. In particular, recent LLM technology needs to be better incorporated into the stages and processes of empathy, and specific items, plans, and implementation methods for designing it differently for each situation are still lacking; this remains an area for future research.
Additionally, while this study delineates the skill levels in the empathy recognition, understanding, reasoning, and response stages on the canvas, it is difficult to ascertain the specific technologies employed, particularly in the analyzed scenario cases, which hinders their incorporation into the canvas. However, once the technical details of the recognition and judgment stages are delineated, the canvas could be integrated as an auxiliary tool for interface design, complementing the more technical design aspects.
Ultimately, while this study was advanced through the analysis of cases, the framework remains a theoretically established conceptual framework that requires further verification and validity assessment through empirical research. Because it reflects conceptual aspects such as empathy theory, it is necessary to check whether the corresponding empathy methods and setting directions are appropriate for products or services slated for commercialization. The proposed canvas-type framework is significant in that it helps identify the necessary elements, without omissions, in complex situations and contexts, making it well suited to planning and designing empathy interactions. However, the scope of the study must be expanded and a verification stage added; with these supplements and further research, it can be actively used as a tool for designing empathy methods in various situations.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/app15063096/s1. The comprehensive analysis table of 30 industrial cases according to the initial framework of empathic human–agent interactions is provided in the Supplementary Materials.

Author Contributions

J.L.: conceptualization, methodology, formal analysis, investigation, resources, data curation, writing, visualization, and project administration. H.-J.K.: conceptualization, methodology, writing, review, visualization, supervision, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Sungshin Women’s University Research Grant (H20240051). The funder had no role in or influence on the research process.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article and Supplementary Materials.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zachiotis, G.A.; Andrikopoulos, G.; Gornez, R.; Nakamura, K.; Nikolakopoulos, G. A survey on the application trends of home service robotics. In Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO), Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 1999–2006. [Google Scholar]
  2. Reig, S.; Carter, E.J.; Kirabo, L.; Fong, T.; Steinfeld, A.; Forlizzi, J. Smart home agents and devices of today and tomorrow: Surveying use and desires. In Proceedings of the 9th International Conference on Human-Agent Interaction, Online, 9–11 November 2021; pp. 300–304. [Google Scholar]
  3. Guo, X.; Shen, Z.; Zhang, Y.; Wu, T. Review on the application of artificial intelligence in smart homes. Smart Cities 2019, 2, 402–420. [Google Scholar] [CrossRef]
  4. Kim, H.M.; Kang, H.J. Smart Home AIoT Service Domains Derivation and Typology Based on User Involvement: Focused on Case Analysis. Des. Converg. Study 2023, 22, 1–20. [Google Scholar]
  5. Rialle, V.; Lamy, J.B.; Noury, N.; Bajolle, L. Telemonitoring of patients at home: A software agent approach. Comput. Methods Programs Biomed. 2003, 72, 257–268. [Google Scholar] [CrossRef] [PubMed]
  6. Chang, Y.; Gao, Y.; Zhu, D.; Safeer, A.A. Social robots: Partner or intruder in the home? The roles of self-construal, social support, and relationship intrusion in consumer preference. Technol. Forecast. Soc. Chang. 2023, 197, 122914. [Google Scholar] [CrossRef]
  7. Robert, L. Personality in the human robot interaction literature: A review and brief critique. In Proceedings of the 24th Americas Conference on Information Systems (AMCIS 2018), New Orleans, LA, USA, 16–18 August 2018. [Google Scholar]
  8. Rossi, S.; Staffa, M.; de Graaf, M.M.; Gena, C. Preface to the special issue on personalization and adaptation in human–robot interactive communication. User Model. User-Adapt. Interact. 2023, 33, 189–194. [Google Scholar] [CrossRef]
  9. Reich, N.; Eyssel, F. Attitudes towards service robots in domestic environments: The role of personality characteristics, individual interests, and demographic variables. Paladyn J. Behav. Robot. 2013, 4, 123–130. [Google Scholar] [CrossRef]
  10. What Is an AI Agent? Available online: https://aws.amazon.com/what-is/ai-agents/ (accessed on 18 September 2024).
  11. Cavone, D.; De Carolis, B.; Ferilli, S.; Novielli, N. An Agent-based Approach for Adapting the Behavior of a Smart Home Environment. In Proceedings of the 12th Workshop on Objects and Agents, Rende, Italy, 4–6 July 2011; pp. 105–111. [Google Scholar]
  12. Kim, S.W.; Kim, H.J. A Study on Design of Smart Home Service Robot McBot II. J. Korea Acad.-Ind. Coop. Soc. 2011, 12, 1824–1832. [Google Scholar]
  13. Anyfantis, N.; Kalligiannakis, E.; Tsiolkas, A.; Leonidis, A.; Korozi, M.; Lilitsis, P.; Antona, M.; Stephanidis, C. AmITV: Enhancing the role of TV in ambient intelligence environments. In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, Corfu, Greece, 26–29 June 2018; pp. 507–514. [Google Scholar]
  14. Do, H.M.; Pham, M.; Sheng, W.; Yang, D.; Liu, M. RiSH: A robot-integrated smart home for elderly care. Robot. Auton. Syst. 2018, 101, 74–92. [Google Scholar] [CrossRef]
  15. Garg, R.; Sengupta, S. He is just like me: A study of the long-term use of smart speakers by parents and children. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2020, 4, 1–24. [Google Scholar] [CrossRef]
  16. Lin, G.C.; Schoenfeld, I.; Thompson, M.; Xia, Y.; Uz-Bilgin, C.; Leech, K. “What color are the fish’s scales?” Exploring parents’ and children’s natural interactions with a child-friendly virtual agent during storybook reading. In Proceedings of the 21st Annual ACM Interaction Design and Children Conference, Braga, Portugal, 27–30 June 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 185–195. [Google Scholar]
  17. Ramoly, N.; Bouzeghoub, A.; Finance, B. A framework for service robots in smart home: An efficient solution for domestic healthcare. IRBM 2018, 39, 413–420. [Google Scholar] [CrossRef]
  18. Das, S.K.; Cook, D.J. Health monitoring in an agent-based smart home by activity prediction. In Proceedings of the International Conference on Smart Homes and Health Telematics, Singapore, 15 September 2004; Washington State University: Pullman, WA, USA, 2004; Volume 14, pp. 3–14. [Google Scholar]
  19. Sisavath, C.; Yu, L. Design and implementation of security system for smart home based on IOT technology. Procedia Comput. Sci. 2021, 183, 4–13. [Google Scholar] [CrossRef]
  20. Bangali, J.; Shaligram, A. Design and Implementation of Security Systems for Smart Home based on GSM technology. Int. J. Smart Home 2013, 7, 201–208. [Google Scholar] [CrossRef]
  21. Alan, A.T.; Costanza, E.; Ramchurn, S.D.; Fischer, J.; Rodden, T.; Jennings, N.R. Tariff agent: Interacting with a future smart energy system at home. ACM Trans. Comput.-Hum. Interact. 2016, 23, 1–28. [Google Scholar] [CrossRef]
  22. Luria, M.; Hoffman, G.; Megidish, B.; Zuckerman, O.; Park, S. Designing Vyo, a robotic Smart Home assistant: Bridging the gap between device and social agent. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 1019–1025. [Google Scholar]
  23. Nijholt, A. Google home: Experience, support and re-experience of social home activities. Inf. Sci. 2008, 178, 612–630. [Google Scholar] [CrossRef]
  24. Shim, H.R.; Choi, J.H. Anthropomorphic Design Factors of Pedagogical Agent: Focusing on the Human Nature and Role. J. Korea Contents Assoc. 2022, 22, 358–369. [Google Scholar]
  25. Sheehan, B.; Jin, H.S.; Gottlieb, U. Customer service chatbots: Anthropomorphism and adoption. J. Bus. Res. 2020, 115, 14–24. [Google Scholar] [CrossRef]
  26. Bošnjaković, J.; Radionov, T. Empathy: Concepts, theories and neuroscientific basis. Alcohol. Psychiatry Res. J. Psychiatr. Res. Addict. 2018, 54, 123–150. [Google Scholar] [CrossRef]
  27. Jin, H.O.; Kim, M.S.; Kim, C.S. A study on the Emotional Interface Elements of Design Products-focusing on newtro home appliances. J. Basic Des. Art 2021, 22, 477–490. [Google Scholar] [CrossRef]
  28. Dai, X.; Liu, Z.; Liu, T.; Zuo, G.; Xu, J.; Shi, C.; Wang, Y. Modelling conversational agent with empathy mechanism. Cogn. Syst. Res. 2024, 84, 101206. [Google Scholar] [CrossRef]
  29. Zhu, Q.; Luo, J. Toward artificial empathy for human-centered design: A framework. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Boston, MA, USA, 13–16 August 2023; Volume 87318, p. V03BT03A072. [Google Scholar]
  30. Alanazi, S.A.; Shabbir, M.; Alshammari, N.; Alruwaili, M.; Hussain, I.; Ahmad, F. Prediction of emotional empathy in intelligent agents to facilitate precise social interaction. Appl. Sci. 2023, 13, 1163. [Google Scholar] [CrossRef]
  31. Paiva, A. Empathy in social agents. Int. J. Virtual Real. 2011, 10, 1–4. [Google Scholar] [CrossRef]
  32. Yalçın, Ö.N. Evaluating empathy in artificial agents. In Proceedings of the 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), Cambridge, UK, 3–6 September 2019; pp. 1–7. [Google Scholar]
  33. Prendinger, H.; Ishizuka, M. The Empathic Companion: A Character-Based Interface that Addresses Users’ Affective States. Appl. Artif. Intell. 2005, 19, 267–285. [Google Scholar] [CrossRef]
  34. Feine, J.; Gnewuch, U.; Morana, S.; Maedche, A. A taxonomy of social cues for conversational agents. Int. J. Hum.-Comput. Stud. 2019, 132, 138–161. [Google Scholar] [CrossRef]
  35. Piumsomboon, T.; Lee, Y.; Lee, G.A.; Dey, A.; Billinghurst, M. Empathic mixed reality: Sharing what you feel and interacting with what you see. In Proceedings of the 2017 International Symposium on Ubiquitous Virtual Reality (ISUVR), Nara, Japan, 27–29 June 2017; pp. 38–41. [Google Scholar]
  36. Gladstein, G.A. Understanding empathy: Integrating counseling, developmental, and social psychology perspectives. J. Couns. Psychol. 1983, 30, 467. [Google Scholar] [CrossRef]
  37. Kwak, S.S.; Kim, Y.; Kim, E.; Shin, C.; Cho, K. What makes people empathize with an emotional robot?: The impact of agency and physical embodiment on human empathy for a robot. In Proceedings of the 2013 IEEE Ro-man, Gyeongju, Republic of Korea, 26–29 August 2013; pp. 180–185. [Google Scholar]
  38. Park, S.; Whang, M. Empathy in human–robot interaction: Designing for social robots. Int. J. Environ. Res. Public Health 2022, 19, 1889. [Google Scholar] [CrossRef]
  39. Glikson, E.; Woolley, A.W. Human trust in artificial intelligence: Review of empirical research. Acad. Manag. Ann. 2020, 14, 627–660. [Google Scholar] [CrossRef]
  40. Pelau, C.; Dabija, D.C.; Ene, I. What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Comput. Hum. Behav. 2021, 122, 106855. [Google Scholar] [CrossRef]
  41. Ryu, Y. A Development and Validation of Cognitive Empathy Scale. Master’s Thesis, Ewha Womans University, Seoul, Republic of Korea, 2019. [Google Scholar]
  42. Kim, M. A Study on the Correlation between Empathy and Personal Traits. Ph.D. Thesis, Sunmoon University, Chungcheongnam-do, Republic of Korea, 2011. [Google Scholar]
  43. Kim, B.; Kim, Y. A Study on the Design of User Emotional Interface by Smartphone Application Type. Des. Knowl. J. 2011, 20, 181–192. [Google Scholar]
  44. Onnasch, L.; Roesler, E. A taxonomy to structure and analyze human–robot interaction. Int. J. Soc. Robot. 2021, 13, 833–849. [Google Scholar] [CrossRef]
  45. Lee, J.H.; Kang, H.J. Design Strategies per Smart Work Types based on Multi-Layered Framework. Arch. Des. Res. 2024, 37, 431–455. [Google Scholar]
  46. Forlizzi, J. The product service ecology: Using a systems approach in design. In Proceedings of the Relating Systems Thinking and Design 2013 Symposium Proceedings, Oslo, Norway, 9–11 October 2013. [Google Scholar]
  47. Hsieh, W.F.; Sato-Shimokawara, E.; Yamaguchi, T. Investigation of robot expression style in human-robot interaction. J. Robot. Mechatron. 2020, 32, 224–235. [Google Scholar] [CrossRef]
  48. Clarke, C.; Krishnamurthy, K.; Talamonti, W.; Kang, Y.; Tang, L.; Mars, J. One Agent Too Many: User Perspectives on Approaches to Multi-agent Conversational AI. arXiv 2024, arXiv:2401.07123. [Google Scholar]
  49. Kozima, H.; Yano, H. A robot that learns to communicate with human caregivers. In Proceedings of the First International Workshop on Epigenetic Robotics, Lund, Sweden, 17–18 September 2001; Volume 2001. [Google Scholar]
  50. Park, M.; Kim, H.; Kim, E.; Seok, H. A Study on Scenario Based Virtual Reality Contents Design Guideline for Psychological Type Diagnosis: Focusing on Empathy Type. J. Digit. Art Eng. Multimed. 2021, 8, 73–86. [Google Scholar]
  51. Lee, S.; Choi, M.; Lee, J.; Son, M.; Choi, S.; Lee, J.; Kim, M.; Kim, J.; Nam, Y.; Nam, S. Effects of Sympathetic Talking and Happiness Level on Consumer Trust and Decision-making Styles. J. Consum. Policy Stud. 2022, 53, 121–147. [Google Scholar]
  52. Lee, S.; Yoo, S. Design of the emotion expression in multimodal conversation interaction of companion robot. Des. Converg. Study 2017, 16, 137–152. [Google Scholar]
  53. Surma-Aho, A.; Hölttä-Otto, K. Conceptualization and operationalization of empathy in design research. Des. Stud. 2022, 78, 101075. [Google Scholar] [CrossRef]
  54. Jai, L.; Park, J. Artificial Intelligence Technology Trends and Application. J. Korea Soc. Inf. Technol. Policy Manag. 2022, 14, 2827–2832. [Google Scholar]
  55. Girju, R.; Girju, M. Design considerations for an NLP-driven empathy and emotion interface for clinician training via telemedicine. In Proceedings of the Second Workshop on Bridging Human—Computer Interaction and Natural Language Processing, Online, 15 July 2022; pp. 21–27. [Google Scholar]
Figure 1. Research flow diagram.
Figure 2. Initial framework of Empathic HAX (human–agent interactions).
Figure 3. Typology of empathic human–agent interactions.
Figure 4. Empathic human–agent interaction type a: Loyal Assistant.
Figure 5. Empathic human–agent interaction type b: Qualified Butler.
Figure 6. Empathic human–agent interaction type c: Reliable Mate.
Figure 7. Final framework of Empathic HAX (human–agent interactions) Canvas.
Figure 8. Empathic HAX (human–agent interactions) Canvas (Action Button Type).
Table 1. Smart home service cases from literature review.

| Area | Purpose | Agent | Interaction | Type of Interaction | Ref No. |
|---|---|---|---|---|---|
| Cleaning | Efficiency | Robot, Integrated System | Context Recognition and Control, Alert | Text, Device Movement | [12] |
| Entertainment | Immersion | Smart Appliances, Smart Furniture, Integrated System | Context Recognition and Control, Alert, Communication | Text, Voice | [13] |
| Adult Care | Support | Robot, Integrated System | Context Recognition, Information Alert | Text, Device Movement | [14] |
| Baby Care | Support | Robot, Integrated System | Reaction by Context | Voice, Images, Video on Screen, Device Movement | [15,16] |
| Healthcare | Information Delivery | Robot, Integrated System | Informative Contents Delivery | Device Movement, Voice, Images, Video on Screen | [17,18] |
| Security | Efficiency and Support | Integrated System | Context Recognition and Control, Alert | Text | [19,20] |
| Energy | Efficiency and Support | Integrated System, Application | Context Recognition and Control, Alert, Summarization | Text | [21] |
| Social and Communication | Communication | Robot, Controller, Integrated System, Application | Physical Reaction for Empathic Synchronization | Device Movement, Voice, Images, Video on Screen | [22] |
| Work at Home | Immersion | System, Virtual Application System | – | – | [23] |
Table 2. Initial framework elements and definition (1): purpose and context.

| Domain | Category | Elements | Metrics | Ref. |
|---|---|---|---|---|
| Purpose | Purpose of Use AI Agent | Purpose of AI Agent Service | N/A | [39] |
| | | Target Goal of Tasks | N/A | |
| Context | Situation, Context, Task | Task Characteristics | Properness / Accuracy / Controllability / Importance | [40,41] |
| Relationship | User–AI Agent Relationship | User–AI Agent Intimacy | High-Mid-Low | [42,43] |
| | | User–AI Agent Togetherness | High-Mid-Low | |
| | | User Involvement | Supervisor / Operator / Collaborator / Cooperator / Bystander | |
| Space | Space Characteristic | Physical Proximity | High-Mid-Low | [44] |
| | | Space Separation | Personal / Shared | |
| | | Space Concentration | Focal / Access | |
| Time | Time Characteristic | Time Proximity | High-Mid-Low | |
| | | Time Value | Time Saving / Quality Time | |
| | | Repetition | Routine / Occasional | |
Table 3. Initial framework elements and definition (2): characteristics of user and AI agent.

| Domain | Category | Elements | Metrics | Ref. |
|---|---|---|---|---|
| User | User's Disposition | Identity Standard | Individualism / Groupism | [47] |
| | | Change Acceptability | Stability-oriented / Openness-oriented | |
| | | Affinity for Relationship | Extroversion / Introversion | |
| | | Situation Control | Planned / Impulsive | |
| | | Tendency to Express Empathy | Positive / Negative | |
| | User's Situation | Number of Users | Single / Multiple | [48,49] |
| | | User Support Relationship Role | Caretaker / Caregiver | |
| AI Agent | AI Agent's Disposition | Identity Standard | Individualism / Groupism | [50] |
| | | Change Acceptability | Stability-oriented / Openness-oriented | |
| | | Affinity for Relationship | Extroversion / Introversion | |
| | | Informational Honesty | Direct / Indirect | |
| | AI Agent Settings | External Factor | Human-like / Non-human-like; Tangible / Intangible | [51,52] |
| | | Potential Factor: Speech Style | N/A | |
| | | Potential Factor: Utterance Style | N/A | |
Table 4. Initial framework elements and definition (3): interaction.

| Domain | Category | Elements | Metrics | Ref. |
|---|---|---|---|---|
| Interaction | Recognition | Empathic Cue | Direct / Indirect | [29] |
| | | Related Technology | High-Mid-Low | |
| | Understand | Way of Empathic Understanding | Affective / Cognitive | [33,53] |
| | | Related Technology | High-Mid-Low | |
| | Reasoning | Way of Empathic Reasoning | Affective / Cognitive | [54] |
| | | Related Technology | High-Mid-Low | |
| | Response | Way of Empathic Response | Linguistic / Non-linguistic | [34,55] |
| | | Related Technology | High-Mid-Low | |
Table 5. Representative examples of industry cases application and analysis by initial framework of empathic human–agent interactions.

| Domain | Category | Elements | Metrics | Case 1 | Case 2 | Case 3 |
|---|---|---|---|---|---|---|
| Purpose | Purpose of Use AI Agent | Purpose of AI Agent Service | N/A | Entertainment | Home Security Management | Auto Task for Healthcare |
| | | Target Goal of Tasks | N/A | Customized Settings | Overall Security Managing | Provide Health Service |
| Context | Situation, Context, Task | Task Characteristics | Accuracy | N | Y | Y |
| | | | Controllability | Y | N | Y |
| | | | Attentional Priority | L | H | M |
| Relationship | User–AI Agent Relationship | User–AI Agent Intimacy | | H | H | M |
| | | User–AI Agent Togetherness | | H | L | M |
| | | User Involvement | | Cooperator | Bystander | Operator |
| Space | Space Characteristic | Proximity | | L | L | L |
| | | Space Separation | | Shared | Both | Personal |
| | | Space Concentration | | Focal | Both | Focal |
| Time | Time Characteristic | Time Utilization Value | | Quality Time | Time Saving | Time Saving |
| | | Time Repetition | | Occasional | Occasional | Routine |
| User | User Characteristic | User's Disposition | Identity Standard | N/A | Individualism | N/A |
| | | | Change Acceptability | Openness-oriented | Stability-oriented | Stability-oriented |
| | | | Affinity for Relationships | N/A | N/A | N/A |
| | | | Situation Control | N/A | Planned | Planned |
| | | | Tendency to Express Empathy | Positive | Positive | Positive |
| | | Empathic Expression Degree | Verbal Expression | H | L | H |
| | | | Affective Expression | H | L | H |
| | | User's Situation | Number of Users | Single | Single | Single |
| | | | User Support Relationship Role | N/A | N/A | N/A |
| AI Agent | AI Agent Characteristic | AI Agent's Disposition | Identity Standard | N/A | Groupism | N/A |
| | | | Affinity for Relationships | N/A | N/A | N/A |
| | | | Informational Honesty | Both | Direct | Both |
| | | AI Agent Settings | External Factor | Non-human-like | Human-like (Partially) | Human-like (Virtual Avatar) |
| | | | Potential Factor | Solution, Positive, Women | Solution, Women | Solution, Positive, Women |
| Interaction | Interaction Type and Method | Recognition | Empathic Cue | User's Voice | Sound | User's Voice |
| | | | Related Technology | Contents, Physiological | Contents, Physiological | Contents, Physiological |
| | | Understand | Way of Empathic Understanding | Cognitive (Affective Little) | Cognitive | Cognitive |
| | | | Related Technology | H | M | M |
| | | Reasoning | Way of Empathic Reasoning | Cognitive (Affective Little) | Cognitive | Cognitive |
| | | | Related Technology | M | M | M |
| | | Response | Way of Empathic Response | Voice, Light, Control | Image, Movement, Colors | Image, Movement |
| | | | Related Technology | H | H | H |

Share and Cite

MDPI and ACS Style

Lee, J.; Kang, H.-J. Artificial Empathy in Home Service Agents: A Conceptual Framework and Typology of Empathic Human–Agent Interactions. Appl. Sci. 2025, 15, 3096. https://doi.org/10.3390/app15063096
