Fintech Agents: Technologies and Theories

Abstract: Many financial technology (fintech) applications have incorporated interactive computer agents to act as mediators between the user and the fintech system. This paper provides a comprehensive review of interactive fintech agents from technological and social science perspectives. First, we explain the general fintech landscape and define interactive fintech agents. Next, we review the major technologies involved in creating fintech: (1) artificial intelligence and machine learning, (2) big data, (3) cloud computing, and (4) blockchain; as well as the specific key technologies enabling the following aspects of interactive fintech agents: (1) intelligence, (2) understanding of users, and (3) manifestation as social actors. Following the technology review, we examine issues and theories related to human-fintech agent interaction in the following areas: (1) agents' understanding of users, (2) agents' manifestation as social actors (via embodiment, emotion, and personality), and (3) users' social interaction with agents. Finally, we suggest directions for future research on fintech agents.


Introduction
Financial technology (fintech) is an emerging field where novel technologies are used to improve the business operations or services offered by financial institutions and enterprises. Some applications of fintech include e-commerce, crowdfunding, insurance-related technology, and automated investment apps (usually referred to as robo-advisors) [1]. These developments have had a significant impact on the traditional financial landscape. For example, the digital financial institution Nubank provides financial services without the high interest rates and fees of traditional banks, enabling financial inclusion for more sections of society [2]. Globally, an expanded user base is turning to technology for its financial needs. Hence, user experience with fintech services has become an important area for research.
As technology develops as a tool for optimizing services and cutting costs in the financial sector [3], financial services have seen increased usage of computer agents. Agents are traditionally defined as computer programs that can imitate human action and communication and act on behalf of the user [4]. The development of agents led to an important shift in human-computer interaction [5], from direct manipulation (e.g., controlling computers with a keyboard) to indirect manipulation (e.g., controlling smartphones via voice assistants), allowing the automation of mundane tasks such as email filtering, scheduling, and bank account checking [6]. Direct manipulation is not well suited to complex computer environments, whereas indirect manipulation increases accessibility and allows for easier social interaction [7].
Agents acquire knowledge about users and predict their needs [8]. They have been deployed to enable more sophisticated services in many industries, including education, e-commerce, finance, and transport. In short, agents simplify computer use by allowing users to delegate tasks to the computer [9]. This is particularly useful for tasks that humans cannot do [10], or for tasks for which it is expensive to hire humans.

Artificial Intelligence and Machine Learning

The advancement of AI and ML enables real-time analysis of multimedia streaming data, facilitating informed decision-making. Diverse sources generate vast amounts of valuable data, which AI and ML techniques efficiently process to extract meaningful insights. This empowers organizations to identify trends, anomalies, and critical events, optimizing processes and services [19].
Another application of AI is with electromyogram (EMG) signals, which are generated by the electrical activity of muscles and are widely used in applications such as prosthetics, rehabilitation, and human-computer interaction. Traditionally, hardware processing techniques have been employed to analyze and interpret EMG signals. However, with the advancements in AI and edge computing, intelligent embedded processing has emerged as a superior approach [20].
AI applications in marketing have revolutionized the ability to customize services and content on websites and apps, serving as a crucial initial step in driving personalized marketing campaigns and fostering meaningful consumer engagement. ML-powered AI chatbots play a vital role in this process by continuously improving and becoming smarter over time. These chatbots are vast, adaptable, and intelligent, enhancing user experiences with a more lifelike interaction [21].
Similarly, AI and ML have far-reaching implications for fintech. AI systems can process and analyze large amounts of financial data in a consistent and accurate manner that is not possible for humans [22]. AI-powered financial apps provide a greater range of tailor-made services and products at a lower cost by leveraging personal customer data [23]. AI and ML can be applied to data such as the client's income, saving and spending habits, assets, and liabilities, and can give investment recommendations that match their needs [24] as well as more customized advice than traditional advisors offer [25]. AI and ML also power conversational interfaces, automatically providing relevant and increasingly more accurate information over time [26,27].
For example, Bank of America's AI-driven virtual assistant, Erica, is used by millions of customers to answer basic banking questions. The chatbot is fed with customer data, including past financial history and location information. Applying ML and deep learning, Erica can provide tailor-made services [28]. The BlackRock Robo-Advisor 4.0 also uses AI and ML and can outperform human stock-pickers in the task of buying stocks whose estimated intrinsic value is higher than the market value [29].
While AI has a diverse range of applications, including fintech, AI technology faces the challenge of needing to be human-centered and placing human well-being at its core. This approach entails designing AI systems responsibly, respecting privacy, adhering to human-centered design principles, implementing appropriate governance and oversight, and ensuring that an AI system's interactions with individuals consider and respect users' cognitive capacities. By adopting such an approach, stakeholders can navigate the complexities of AI while prioritizing ethical considerations and harnessing the full potential of these technologies to benefit humanity [30]. Other challenges regarding the development of AI include security, privacy, energy consumption, morality, and ethics [31].

Big Data
Big data refers to massive data sets that are complex, varied, and fast-moving, requiring advanced management and analysis techniques. Big data analytics refers to a set of technologies and techniques used to find patterns and information from data sets that are substantially larger and more complex than usual data sets [32].
Big data helps banks provide improved services to their customers, boost their security systems, and gauge customer sentiment from social media data. For example, Q.ai, a robo-advisory app, uses AI and big data to provide customized portfolio recommendations and maximize returns on investments [33]. Banks also use big data analytics to study consumption patterns and customer behavior [34,35]. Similarly, health insurance companies use data from wearable technologies to provide superior customer service and product innovations [36], and some insurance companies track driving data to reward safe driving [37]. It is also possible to build a system that recommends buying, selling, or holding a stock at specific times of the day [38].

Cloud Computing
The U.S. National Institute of Standards and Technology (NIST) defines cloud computing as "ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction" [39] (p. 2). Cloud computing lets clients access their personal financial files through the internet from anywhere. It is used extensively in finance, especially by banks, to reduce hardware, software, and human resource costs.
Cloud computing improves cash flows for banks, allowing them to rapidly provide and scale up services [40] and adopt newer technologies effectively [41]. Coupled with big data analytics, cloud computing enables banks to provide customized services and sound financial advice services [42]. The service Temenos Banking Cloud, for example, allows banks to launch and scale banking services quickly and at a low cost [43].

Blockchain
A blockchain comprises data sets, each composed of data packages or blocks. A block contains multiple transactions. Each additional block extends the blockchain, and together the blocks denote a full ledger of the transaction history. These blocks can be validated by the network using cryptography [44]. Thus, a blockchain is a decentralized, open ledger on which anyone can transact or validate transactions.
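The hash-linked structure described above can be illustrated with a minimal Python sketch. This is a toy ledger for illustration only: it omits proof-of-work, digital signatures, and network consensus, and the transaction fields are invented.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain: list) -> bool:
    """Valid when every block references its predecessor's actual hash."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 10}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 4}])
assert is_valid(chain)

# Tampering with an earlier, already-registered transaction breaks
# every later hash link, so the change is immediately detectable.
chain[0]["transactions"][0]["amount"] = 1000
assert not is_valid(chain)
```

Because each block commits to the hash of its predecessor, altering any registered transaction invalidates every subsequent link, which is the source of the tamper resistance noted in this section.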
The impact of blockchain on the financial industry is far-reaching, promising lower costs and improved security [45]. Because each new block is linked to the previous one through verified transactions, attackers cannot tamper with a transaction once it is registered [46]. Blockchain also allows transactions to be automated based on self-enforced mathematical rules; hence, transactions are largely secure, free of errors and illegal practices, and do not require verification from a trusted third party [47].
One of the most prominent uses of blockchain in finance is cryptocurrency [48], one example of which is Bitcoin, launched in 2008. It established a peer-to-peer system of payments based on electronic transactions, enabling different entities to send payments to one another without a central authority [49]. There are several other cryptocurrencies like Ethereum, Litecoin, Dash, and Ripple, and the industry is worth hundreds of billions of dollars [50].

Fintech Agent Technologies
This section will provide an overview of the key technological elements specific to fintech agents. Reviewing technologies related to fintech agents will deepen our understanding of theories and issues related to human-fintech agent interaction. First, we discuss a major design consideration in developing agents: embodiment. Then, we discuss the technological factors involved in creating disembodied and embodied fintech agents that display intelligence, understand the user, and manifest as social actors.

Types of Fintech Agents: Disembodied or Embodied
Whether an agent should be embodied or disembodied is a key decision in how the agent signals its status as an interaction partner. Embodiment refers to physical instantiation, i.e., bodily presence [51], whereas disembodiment refers to the absence of physical instantiation in either real or virtual forms. Embodied fintech agents can have bodily presence in either virtual form (the agent has a digital body perceivable via a computer, mobile device, virtual reality goggles, etc.) or physical form (a three-dimensional form in the actual world, which requires no specific technology to perceive and interact with). Physically embodied agents include robots that interact with users, although this remains an area for further development in fintech.
As computing power increased and graphics interfaces became more widely used, embodied agents were proposed as ideal for digital collaborative environments and for engaging in conversation with users [52,53], as they displayed a body, shape, face, or other variations of form. Examples of virtually embodied fintech agents are HSBC Hong Kong's virtual chatbot assistant, Amy (Figure 1), and Rachel (Figure 2), a digital assistant for the mortgage process faced by home buyers. Currently, most fintech agents are designed for virtual embodiment. One example of a physically embodied fintech agent is Xiaoi (Figure 3), an intelligent banking robot developed in China that can communicate verbally, guide customers to relevant queues, and make use of facial recognition and identification cards to check account balances [54].
Unlike embodied fintech agents, disembodied fintech agents do not have bodily form and thus rely on speech, text, or other modalities such as emoticons to simulate their presence as social actors. A prominent example is bank chatbots, which have become popular over the last decade. Despite not having bodily form, disembodied agents can display emotion or personality through verbal or textual cues and can successfully signal their presence as a social interaction partner [55,56]. For example, the chat-based personal finance management app Cleo uses a personality that appeals to younger users, with "roast" and "hype" options that can be brutally honest (Figure 4), yet are strongly socially present in younger users' minds.

Figure 4. Cleo's personality-driven interaction design.

It is possible that in the future, we will see more physically embodied fintech agents deployed to aid customers at financial institutions' physical locations since physical embodiment may allow the user to experience a more positive interaction with the agent [51,57]. Whether physical or virtual, the use of embodied agents is likely to be the trend [58].


Fintech Agent Technology
Fintech agents need to be intelligent enough to perform financial activities [59], be able to understand users quickly [60], and manifest themselves as social actors so that users can easily understand their behaviors and intentional stance quickly [61,62]. Hence, among the many dimensions of agent technologies, we focus on the following three areas in this section: technologies for agent intelligence, agents' understanding of users, and their manifestation as social actors.

Technologies for Agent Intelligence
To be perceived as intelligent, agents must handle complex information, work in online environments, process large data sets, and be fast, efficient, and accurate when performing tasks [63,64]. Intelligent fintech agents start by analyzing vast amounts of data based on programmed models to provide responses during customer interaction [65,66]. For example, the automated insurance chatbot Magda, deployed by Polish insurer Link 4, provides constant customer support based on analyses of a vast knowledge database of motor, vehicle, and property insurance [67].
Beyond programmed intelligence, fintech agents learn from experience and the environment [68,69]. Using advanced ML, fintech agents continuously learn and draw from past knowledge, developing improved problem-solving capabilities [70] and overcoming the limitations of programmed intelligence. Such agents can handle ambiguous or novel questions using natural language processing, and the more they interact with users, the more information they gather and the more accurate they become [71,72]. An example of a fintech agent that can self-learn is HSBC's virtual assistant Amy, introduced earlier, which has a built-in customer feedback mechanism that enhances its knowledge over time and enables it to answer complex queries [70].

Technologies for Agents to Understand Users
In this section, we review the technologies required for agents to understand user intention [73] and user emotion [74]. Further, since fintech agents provide financial information and advice, they need technologies to analyze users' financial behavior.

Technologies for Understanding User Intention
To be useful, an agent needs to gauge user intention as soon as possible, determine how to collaborate with the user, and respond to the user's requirements based on his or her current goal [73]. All available information regarding the user's intention, including his or her actions in the environment, is used to provide the agent with a ranked list of the most probable user goals at every instant. ML is used to construct user models incrementally by studying users as they perform their tasks [75]. Along with AI and ML, natural language understanding (NLU) is used to analyze the text users submit and match it to a certain intent, which the agent is programmed to respond to [76]. Agents that use voice interaction have a speech recognizer component that converts the customer's voice input into a text message. The NLU component receives the text message, processes its meaning, and performs intent recognition and entity recognition (identifying numbers or names input by the user) to understand the user's request or action [77].
For example, Capital One bank developed its own natural language processing technology for its intelligent agent, Eno, which can understand 2500 possible ways a user may ask for his bank balance, including misspellings and autocorrections, while simultaneously learning new variations and identifying user intent [78].
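As a rough illustration of the intent- and entity-recognition step, the following sketch scores candidate intents by keyword overlap and extracts numeric entities with a regular expression. The intents, keywords, and patterns are invented for illustration; production NLU systems such as Eno's rely on learned language models rather than hand-written lists.

```python
import re

# Toy intent inventory: each intent is a set of trigger keywords.
INTENT_KEYWORDS = {
    "check_balance": {"balance", "money", "account"},
    "block_card": {"block", "card", "lost", "stolen"},
    "transfer": {"transfer", "send", "pay"},
}

def recognize(utterance: str) -> dict:
    """Match an utterance to the best-scoring intent and pull out amounts."""
    tokens = set(re.findall(r"[a-z]+", utterance.lower()))
    scores = {i: len(tokens & kw) for i, kw in INTENT_KEYWORDS.items()}
    intent = max(scores, key=scores.get)
    if scores[intent] == 0:
        intent = "fallback"
    # Entity recognition: identify numeric amounts typed by the user.
    amounts = [float(a) for a in re.findall(r"\$?(\d+(?:\.\d+)?)", utterance)]
    return {"intent": intent, "amounts": amounts}

print(recognize("Please transfer $250 to savings"))
# → {'intent': 'transfer', 'amounts': [250.0]}
```

A real system would also normalize misspellings and learn new phrasings from usage, as described for Eno above.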

Technologies for Understanding User Emotion
When humans interact with machines, many of the same social principles are followed as when humans communicate with one another [79]. Emotion recognition, an essential aspect of human communication, is thus a critical ability for interactive agents. Agents can recognize human emotions through facial expressions captured by a camera, through voice recorded by a microphone, or by measuring heart rate, sweat, and other physiological signals through electrodes [80]. Additionally, text-based fintech agents such as chatbots must understand the emotions a user conveys via text to provide emotionally appropriate responses, as non-verbal cues can also play an important role in detecting human emotion [81]. These agents use deep learning and big data to detect users' emotions via a keyword-based analysis of words and sentiments [82,83]. Fintech agents currently use emotion detection techniques at a rudimentary level, and much potential exists in this space. For example, Rosbank, a Russian universal bank, is working with Neurodata Labs, an AI solution company, to detect real-time customer emotions using several parameters [84]. China-based Emotibot has created a chatbot capable of reading 22 emotional patterns in text and seven patterns from voice and facial expressions and has partnered with China Minsheng Bank [85].
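The keyword-based analysis mentioned above can be sketched as a simple lexicon lookup. The emotion categories and word lists below are invented for illustration; deployed systems learn such associations from large labeled corpora with deep learning rather than from hand lists.

```python
import re

# Tiny illustrative emotion lexicon (hand-written for this sketch).
EMOTION_LEXICON = {
    "frustration": {"annoyed", "ridiculous", "unacceptable", "angry"},
    "anxiety": {"worried", "afraid", "urgent", "nervous"},
    "satisfaction": {"thanks", "great", "helpful", "perfect"},
}

def detect_emotion(message: str) -> str:
    """Score each emotion by keyword overlap; fall back to neutral."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    scores = {e: len(words & lex) for e, lex in EMOTION_LEXICON.items()}
    top = max(scores, key=scores.get)
    return top if scores[top] > 0 else "neutral"

print(detect_emotion("I'm worried this charge went through twice"))  # anxiety
print(detect_emotion("Can you show my statement?"))                  # neutral
```

The detected label can then steer the agent toward an emotionally appropriate response, as the Rosbank and Emotibot examples illustrate at much larger scale.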

Technologies for Understanding Users' Financial Behaviors
Fintech agents often take on the traditional role of a human financial advisor and collect information related to users' financial behaviors, including investment preferences, goals, and risk appetite [86]. After collecting financial and demographic data, fintech agents undertake user analyses (e.g., risk assessments based on demographics, investment history and preferences, and risk appetite), suggest suitable recommendations, keep track of the user's behaviors (e.g., portfolio modification), and adjust the recommendations (e.g., asset allocation, managing taxes, product selection, and even trade execution) according to the user's needs, goals, and behavioral tendencies [87,88]. As discussed, AI and ML are key technological mechanisms behind these capabilities [23]. For example, the robo-advisory app Wealthfront evaluates customer information in detail with the help of AI algorithms before recommending which stock to buy [89]. It assesses the risk appetite of the customer by asking them how they would react to substantial losses, which can happen due to a market decline, and whether they prefer to capitalize on the market and maximize their gains. Based on answers to these questions and other data like the number of years to retirement, income, and so on, an investment risk metric is built [90]. The app then uses a proprietary AI algorithm to recommend a portfolio to its clients [25].
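The risk-profiling logic described above can be illustrated with a toy scoring function. The questions, weights, and thresholds below are invented for illustration; Wealthfront's actual model is proprietary.

```python
def risk_score(years_to_retirement: int, loss_reaction: str,
               income: float, liquid_savings: float) -> float:
    """Toy risk metric on a 0-10 scale from questionnaire answers."""
    # Longer investment horizons tolerate more volatility.
    horizon = min(years_to_retirement / 40, 1.0)
    # Self-reported reaction to a substantial market decline.
    reaction = {"sell": 0.0, "hold": 0.5, "buy_more": 1.0}[loss_reaction]
    # A savings cushion relative to income supports risk-taking.
    cushion = min(liquid_savings / max(income, 1.0), 1.0)
    return round(10 * (0.4 * horizon + 0.4 * reaction + 0.2 * cushion), 1)

def recommend(score: float) -> dict:
    """Map the risk metric to a stock/bond split (purely illustrative)."""
    stocks = 0.2 + 0.06 * score  # 20% stocks at score 0, 80% at score 10
    return {"stocks": round(stocks, 2), "bonds": round(1 - stocks, 2)}

s = risk_score(years_to_retirement=30, loss_reaction="hold",
               income=80_000, liquid_savings=40_000)
print(s, recommend(s))
```

A production robo-advisor would additionally track portfolio modifications over time and re-run this assessment as the user's goals and behavior change.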

Technologies for Agents' Manifestation as Social Actors
In addition to having financial intelligence and the capability to understand users, fintech agents should be able to present themselves as social actors for efficient and natural interaction with users. Non-human entities such as fintech agents are able to elicit social responses from humans, although users clearly know that they are interacting with a non-human [91]. This is because humans have evolved to have social modules in the brain (e.g., the Theory of Mind Module, discussed in a later section), which enable a natural and efficient understanding of other humans or human-like entities [92]. When an agent presents itself with certain social cues, users' brains selectively attend to that information and respond socially. An agent can present itself as a social actor through embodiment or by presenting human-like personalities and emotions. This section explores technologies for agent embodiment, personality, and emotion.

Technologies for Agent Embodiment
Many financial institutions use disembodied fintech agents in the form of chatbots. However, due to the importance of embodiment (having a perceptible physical body [93]) in human-computer interaction, computer games, and other technology applications [94], there is a growing trend of using virtually embodied agents such as animated characters in fintech. Currently, very few commercially available physically embodied fintech agents (i.e., fintech robots) exist, but they may become popular in the future, especially for assisting older adults [57,95]. Embodiment can involve more than giving an agent a perceptible body or face. For an embodied agent, conforming to cultural and social norms is essential. It must exhibit appropriate behavior and communication consistent with its physical embodiment, such as providing consistent verbal, non-verbal, and other behavioral cues [96].
Many technologies are used to create the look, voice, and behaviors of embodied agents [97]. For example, agents' bodies are usually created with computer animation software (e.g., Adobe Character Animator, version 23.1), and their behaviors are controlled by a fixed set of algorithms [98]. Such software platforms can simulate embodied agents in 3D virtual environments [99].
For voice, synthetic voice, pre-recorded human voice, or even a mix of the two can be used [100]. Affective [80] and emotion AI technology [101] are used to make agents' embodiment more compelling by enabling them to understand and react appropriately to user emotions [102]. Examples of virtually embodied fintech agents include the following: Raiffeisen Bank International AG (RBI) in Serbia developed REA, an AI-based digital assistant with blond hair and colored eyes. REA is available anytime to answer questions within five seconds and is popular among younger users [103]. The Royal Bank of Scotland's virtual chatbot, Cora, answers customer queries and brings a human element to the digital banking experience. The digital teller wears the branded uniform of the bank, has ear piercings, and helps to answer queries on mortgages and how a customer should block a card if it is lost [104].

Technologies for Agent Personality
Personality refers to a set of qualities and character traits that give an individual a distinctive character [105]. Personality plays an important role in building and maintaining interpersonal relationships [106,107], as well as human-agent [52,108] relationships. Agent personalities are typically created by simulations of human personalities through neural networks [109][110][111] and then expressed via visual appearance, voice, or behavioral cues such as facial expression, gesture, or interaction styles [112]. Conversational agents can be designed in such a way that their personalities align with an organization's image, which would be distinct for a financial institution or service as opposed to other contexts such as commerce or transport [113].
An example of a fintech chatbot with a rich personality is Adam, a virtual banker at Tatra Bank in Slovakia. The chatbot speaks Slovak, provides all the required information about the bank's products, and resolves customer queries. It has an agreeable, rich personality: modest, eager to help, and self-confident [114]. Similarly, the Australian bank ANZ has launched Jamie, an agent with a human face, voice, and facial expressions (Figure 5) that is capable of two-way voice communication with users. By combining neural networks and models of the human brain, it can express a distinct personality [115].
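One simple way to operationalize an agent personality is to parameterize response phrasing with trait scores. The traits and phrasing rules below are purely illustrative and are not how Adam or Jamie are actually implemented; real systems express personality through learned models, voice, and appearance as well as wording.

```python
def style_response(base: str, extraversion: float, agreeableness: float) -> str:
    """Toy mapping from two Big Five-style trait scores (0-1) to phrasing."""
    out = base
    if agreeableness > 0.6:
        # An agreeable persona opens warmly and signals eagerness to help.
        out = "Happy to help! " + out
    if extraversion > 0.6:
        # An extraverted persona volunteers further conversation.
        out += " Anything else I can do for you today?"
    return out

print(style_response("Your card has been blocked.",
                     extraversion=0.8, agreeableness=0.9))
```

Keeping the trait scores fixed across a session is what makes the persona read as a consistent character rather than random variation in tone.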



Technologies for Agent Emotion
Virtual agents are perceived as more believable and relatable when they express emotions [116]. Emotions like anger, anxiety, thoughtfulness, and confidence can be expressed by a virtual character through simple body gestures. This increases its effectiveness, as the facial expressions of a virtual agent can provide feedback to users [117]. Moreover, emotions can be communicated vocally by disembodied agents that cannot display visual cues [118]. Thus, AI-based agents can display emotions in terms of facial expressions, tone of voice, or physical postures based on the design and hardware [119,120]. Several computational models of emotions are used to enable agents to process emotional stimuli and generate emotional responses. Apart from computational technologies, such models are also based on findings from other branches of science, especially psychology [121]. One proposed way of instilling emotions in agents is for the system to apply a sentiment analysis model for the extraction of emotions from user input. This data is sent to another module known as the action selection module. Using Artificial Intelligence Markup Language (AIML) format, the module searches for the best answer based on emotions, which is then sent to the output system [122]. Currently, there are no commercially available fintech agents that express their emotional states to users, possibly because of the serious nature of financial tasks.
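The two-module pipeline described above (sentiment extraction followed by action selection) can be sketched as follows. The topics, word lists, and response templates are invented stand-ins for an AIML-style pattern-to-template lookup.

```python
# Stage 1: toy sentiment extraction from user input.
def extract_sentiment(text: str) -> str:
    negative = {"lost", "angry", "worried", "problem", "fail"}
    positive = {"thanks", "great", "happy"}
    words = set(text.lower().split())
    if words & negative:
        return "negative"
    if words & positive:
        return "positive"
    return "neutral"

# Stage 2: action selection keyed on (topic, sentiment), loosely
# mimicking AIML's pattern -> template lookup.
TEMPLATES = {
    ("balance", "negative"): "I understand this is stressful. Your balance is {balance}.",
    ("balance", "neutral"):  "Your current balance is {balance}.",
    ("balance", "positive"): "Happy to help! Your balance is {balance}.",
}

def respond(topic: str, text: str, **slots) -> str:
    """Pick the best template for the detected emotion and fill its slots."""
    sentiment = extract_sentiment(text)
    template = TEMPLATES.get((topic, sentiment),
                             TEMPLATES[(topic, "neutral")])
    return template.format(**slots)

print(respond("balance", "I'm worried about my spending", balance="$1,240"))
```

The point of the split is modularity: the sentiment model can be replaced with a learned classifier without touching the response inventory, and vice versa.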

Issues and Theories Related to Human-Fintech Agent Interaction
This section examines the issues and theories related to: (1) agents' understanding of user intention, emotion, and financial behavior, (2) agents' manifestation as social actors (embodiment, emotion, and personality), and (3) users' social interaction with agents. As fintech agent interaction is a new and emerging field of study, studies on general human-agent interaction will also be included.

Understanding User Intention
Understanding user intention enables an agent to predict the next interaction required, estimate the user's learning and performance of the task, and predict and prepare for potential mistakes [123]. User intention is largely deduced from user inputs or overt signals to the system. Older input devices included tools such as a mouse and keyboard, but these methods have expanded to include speech, touch, and gestures, which are more natural ways of interacting and easier to learn and perform [124]. Studies have also employed search behaviors [125] and eye tracking data [126] to estimate user intention. However, while user inputs and eye tracking can provide detailed information, the underlying cognitive processes and intentions are not explicitly revealed and need to be inferred [127]. Modeling these semantic intentions [128] accurately and thoroughly remains a challenge for machines.
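The inference step described above can be sketched as a Bayesian update over candidate goals: each observed user action re-weights the hypotheses, yielding a ranked list of probable intentions at every instant. The goals, priors, and likelihoods below are invented for illustration.

```python
# Candidate user goals with prior probabilities (illustrative values).
priors = {"check_balance": 0.5, "transfer_money": 0.3, "apply_loan": 0.2}

# P(observed action | goal): how strongly each goal predicts each action.
likelihood = {
    "open_accounts_tab": {"check_balance": 0.7, "transfer_money": 0.2, "apply_loan": 0.1},
    "tap_send":          {"check_balance": 0.05, "transfer_money": 0.9, "apply_loan": 0.05},
}

def update(posterior: dict, action: str) -> dict:
    """One Bayesian step: weight each goal by the likelihood, renormalize."""
    weighted = {g: p * likelihood[action][g] for g, p in posterior.items()}
    total = sum(weighted.values())
    return {g: w / total for g, w in weighted.items()}

beliefs = priors
for action in ["open_accounts_tab", "tap_send"]:
    beliefs = update(beliefs, action)

# Goals ranked from most to least probable after both observations.
print(sorted(beliefs, key=beliefs.get, reverse=True))
```

After observing a tap on "send", the transfer goal overtakes the initially more probable balance check, illustrating how overt inputs let the system revise its estimate of the hidden intention.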
Humans are much more adept than computers at understanding human semantic intention because the structure of the human brain contains specialized neural mechanisms (mental modules) [92]. One of these, the Theory of Mind Module (ToMM) [95], allows humans with typical brain development to deduce and predict the intentions, desires, and beliefs of others. Researchers do not yet have a full picture of the mechanisms of ToMM, but aspects of it have been simulated in interactive technology. These include educational game agents that demonstrate that they are aware of other agents' intentions and mental models [129], a robot equipped with an internal model of itself and the actors in its environment to anticipate the consequences of an action [130], and the robots Kismet and Cog, designed to demonstrate their mental states via movement and facial expression [131]. ToMM has also been discussed as a requirement for inclusion in the design of technology such as social robots [132] and autonomous robots [133] to better facilitate human interaction.
Along this line of research and development, we propose that future research should consider designing fintech agents with specialized modules to observe specific social cues from users and better determine their intentions. A user's internal states may be manifested via eye movements, speech, touch, handwriting, and facial expression. An agent with a simulation of ToMM could include dedicated sensors for detecting these cues. A thorough understanding of user intention is crucial in the financial context, as user behavior is not uniform and individual user factors have been found to influence financial decision-making [134]. While current robo-advisory apps can identify different risk profiles among their users, agents that can discern and predict user intention more accurately on an individual level may have the potential to minimize losses. The concept of user intention has also been explored in the broader sense of whether a user plans to adopt a technology. The Unified Theory of Acceptance and Use of Technology (UTAUT), originally conceived for organizational contexts, has also been applied as a framework to analyze adoption intention in consumer contexts. For instance, UTAUT has been utilized to investigate the intention to adopt mobile financial services, online banking, and blockchain technology [135]. Additionally, social cues have been incorporated into a fintech chatbot to determine if users intend to continue using the technology [136].
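The cue-observing intention module proposed above could take the form of a simple Bayesian filter over candidate intentions. The sketch below is entirely hypothetical: the intentions, cue names, and likelihood values are invented for illustration, not drawn from any cited system.

```python
# Hypothetical cue-based intention module: the agent maintains a
# probability distribution over candidate user intentions and updates
# it as multimodal cues (gaze, speech keywords, menu actions) arrive.
# All intentions, cues, and likelihoods here are illustrative assumptions.

INTENTIONS = ["check_balance", "transfer_funds", "seek_advice"]

# P(cue | intention): made-up likelihoods for three observable cues.
LIKELIHOOD = {
    "gaze_on_balance": {"check_balance": 0.7, "transfer_funds": 0.2, "seek_advice": 0.1},
    "says_send":       {"check_balance": 0.1, "transfer_funds": 0.8, "seek_advice": 0.1},
    "opens_help_menu": {"check_balance": 0.1, "transfer_funds": 0.1, "seek_advice": 0.8},
}

def infer_intention(cues: list[str]) -> dict[str, float]:
    """Bayesian update from a uniform prior over candidate intentions."""
    posterior = {i: 1.0 / len(INTENTIONS) for i in INTENTIONS}
    for cue in cues:
        for intention in INTENTIONS:
            posterior[intention] *= LIKELIHOOD[cue][intention]
        norm = sum(posterior.values())
        posterior = {i: p / norm for i, p in posterior.items()}
    return posterior

# The user looks at their balance, then says "send": the distribution
# shifts towards a funds transfer despite the initial gaze cue.
posterior = infer_intention(["gaze_on_balance", "says_send"])
top = max(posterior, key=posterior.get)
```

In practice the likelihoods would be learned from interaction logs, and dedicated sensors (eye tracker, microphone, touch events) would emit the cue stream.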

Understanding User Emotion
Emotion, a subset of affect, is generally conceptualized as having valence (a positive-to-negative continuum), intensity (mild to strong), and duration (brief to enduring). It can significantly arouse and orient humans towards behavioral responses such as flight when experiencing fear [137]. Studies have shown that human consciousness arises from the interplay of cognitive processes and emotions. Emotional activity in the brain has a significant impact on cognitive mechanisms such as learning, memory, and decision-making [138]. This makes it crucial for interactive agents to understand users' emotions [84], especially in the financial context, as emotional biases may influence rational thought processes [139].
Developments in affective computing have enabled objective and reliable measures of human emotion with little human involvement [140], providing significant opportunity for the development of fintech agents that can assess user emotion. Computers can assess emotional data from visual, audio, textual, physiological, and behavioral modalities. Visual modalities (images and videos) provide information such as facial expression [141] and body gestures [142]. Audio modalities provide data such as speech, everyday sounds, acoustics, and elements of vocal data such as pitch and intensity, which are used in emotion and sentiment analysis. Textual modalities, including client interaction logs, social media posts, and reviews, have been used to gain insights into emotion [143]. One drawback of these modalities is that participants may not produce a continuous stream of visual, speech, or text data for emotion analysis. These modalities can be complemented by physiological measures such as heart rate [144], galvanic skin response [145], electrocardiography and electroencephalography [146], respiration rate and eye gaze [147], among others. Finally, behavioral modalities such as the user's gestures, postures, and actions with the computer mouse and keyboard have been used to determine affective states [148].
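A simple way to combine these modalities, and to cope with the drawback that no single modality produces a continuous data stream, is late fusion with per-modality confidence weights. The sketch below is illustrative: the modality names and weights are assumptions, not empirically derived values.

```python
# Illustrative late fusion across emotion modalities. Each available
# modality reports a valence estimate in [-1, 1] plus a confidence
# weight; modalities that produced no data in the current window (e.g.,
# the user is silent, so no speech) are skipped, which is how continuous
# physiological channels can complement intermittent visual/audio/text data.

from typing import Optional

def fuse_valence(readings: dict[str, Optional[tuple[float, float]]]) -> float:
    """Confidence-weighted average of (valence, confidence) per modality.

    readings maps modality name -> (valence, confidence), or None if the
    modality produced no data in this time window.
    """
    total, weight = 0.0, 0.0
    for value in readings.values():
        if value is None:
            continue  # modality silent in this window; rely on the others
        valence, confidence = value
        total += valence * confidence
        weight += confidence
    return total / weight if weight > 0 else 0.0

# Example: no speech at the moment, but face and heart rate are available.
estimate = fuse_valence({
    "facial_expression": (0.4, 0.8),
    "speech": None,
    "heart_rate": (-0.2, 0.5),
})
```

Real systems typically learn the fusion weights rather than fixing them, but the fallback behavior when a modality is absent is the key point here.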
Once agents have identified user emotion, this data can be integrated into the agent's response [149]. Agents can thus be a useful means of providing cognitive and emotional feedback and enhancing online learning and interaction. For example, a speech-based emotion recognition system was developed in an interactive robot, which aided the human-robot interaction process [150]. To enhance their usefulness, agents should be taught subcategories of emotions to enable better inferences and provide more tailored responses [151]. Integrating affective measurement capabilities specifically into fintech agents is in the preliminary stages of exploration, usually involving only one modality, but results from other domains provide promising opportunities for the development of next-generation fintech agents. In addition, future studies should address the shortcomings of using a single modality. People express their emotions in different ways; for example, some may use the tonal range of their voices to express their emotions, while others may rely more on facial expression [152]. Fintech agents equipped with multimodal sensors, which can analyze various forms of data such as tone of voice, facial expressions, eye gaze, physiological responses, and other tactile cues, will be capable of accounting for individual differences in emotional expression. This will enable further exploration of fintech agents as social actors that can adapt to a user's emotional style, assess their emotional state, and deliver an appropriate response [144].

Understanding Users' Financial Behaviors
Apart from understanding users' intentions and emotions, fintech agents must also understand users' financial behaviors, such as their decision biases and investment history. Traditional utility theory assumes the rationality of practitioners. In contrast, behavioral finance scholars have examined how psychology influences the behavior of financial practitioners [141,153]. Fintech agents, especially robo-advisors that automate investments for users, must be programmed with theoretical knowledge of these biases and identify individual user characteristics to minimize financial losses. It is unclear if current robo-advisory services are specifically programmed to account for human biases, but preliminary research suggests that fintech agents may have the potential to mitigate them [88,154]. Some financial biases include overconfidence, or overestimating one's skills and chances of success [155]; conservatism bias, where investors maintain their initial forecasts or views without paying adequate attention to new information [156]; confirmation bias, where people attend to or actively seek information that is in line with their beliefs or the hypothesis at hand [157]; and loss aversion bias, the phenomenon of people preferring to avoid losses rather than acquire equivalent gains [158]. Fintech agents may be used to further our understanding of these financial biases and to examine what advisory approach is best suited to different user emotions and intentions. Relative to a human advisor, the ease with which a fintech agent can customize its approach to different users and incorporate data from users' past behaviors holds promise for further research.
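Loss aversion in particular can be made concrete with Kahneman and Tversky's prospect-theory value function, which an agent could use to model how a user subjectively weighs gains against losses. The function itself is standard; the parameter values below are Tversky and Kahneman's published median estimates, used here purely for illustration.

```python
# Prospect-theory value function illustrating loss aversion:
# v(x) = x**alpha for gains, -lam * (-x)**beta for losses.
# alpha = beta = 0.88 and lam = 2.25 are Tversky and Kahneman's (1992)
# median estimates; a real agent would fit them per user.

def value(x: float, alpha: float = 0.88, beta: float = 0.88,
          lam: float = 2.25) -> float:
    """Subjective value of a gain or loss x relative to a reference point."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# A $100 loss "hurts" more than a $100 gain "pleases": |v(-100)| > v(100),
# which is the asymmetry behind loss aversion bias.
gain, loss = value(100), value(-100)
```

An agent that models a user's value function this way could, for instance, anticipate panic selling after a drawdown and frame its advice accordingly.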
Beyond human biases, financial behaviors can also be influenced by market conditions such as high volatility, as well as by the investor's personality. A preliminary study [159] found a complementary-attraction effect in how investors allocate funds to a robo-advisor: investors with dominant personalities allocated more funds for investment when they interacted with robo-advisors with a submissive personality. This suggests that fintech agents should ascertain the personality of the user along with their risk appetite to foster a better interaction experience and profit-making behavior. The same study also demonstrated that during high market volatility, investors prefer submissive robo-advisors regardless of their own personality. These findings suggest that a fintech agent may be perceived more favorably if it takes a more controlled approach to its recommendations, tailoring its advice not only to the personality of its users but also to changing market conditions.
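These two findings (complementary attraction in calm markets, a general preference for submissive agents in volatile markets) could be operationalized as a simple style-selection rule. The sketch below is hypothetical: the threshold, scales, and labels are illustrative assumptions, not parameters from the cited study.

```python
# Hypothetical adaptation rule for a robo-advisor's personality style.
# Under high market volatility every user gets a submissive style;
# otherwise a complementary style is chosen (dominant user -> submissive
# agent, and vice versa). The 0.3 threshold is an illustrative assumption.

def advisor_style(user_dominance: float, market_volatility: float,
                  volatility_threshold: float = 0.3) -> str:
    """Pick an advisor personality on the dominance dimension.

    user_dominance: 0 (submissive user) .. 1 (dominant user)
    market_volatility: normalized volatility measure, 0 .. 1
    """
    if market_volatility >= volatility_threshold:
        return "submissive"  # preferred by all users in volatile markets
    # Complementary attraction: mirror the opposite of the user's style.
    return "submissive" if user_dominance >= 0.5 else "dominant"

print(advisor_style(0.8, 0.1))  # dominant user, calm market
```

A deployed system would estimate `user_dominance` from a personality assessment or interaction history rather than taking it as a given input.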

Issues and Theories Concerning Agents Manifesting as Social Actors
Once agents have determined what the user wishes to achieve, they must respond in the best way possible. As experts in social interaction, humans are more likely to enjoy interacting with agents, and to perceive them as competent, when the agents follow social conventions and expectations [106]. This section explores three issues (agent embodiment, agent emotion, and agent personality) that have been dominant topics of study in human-agent interaction.

Embodiment
While embodiment is not a prerequisite for social interaction, it can quickly establish the possibility of interaction and has therefore received significant attention [106]. Agent embodiment can be anthropomorphic (human-like), zoomorphic (animal-like), caricatured (character-like), or functional (task-oriented). Among these, the human-like method, anthropomorphism, has been regarded as an efficient way of enhancing users' experiences with an agent. Anthropomorphism originally referred to the human tendency to superimpose human functional and behavioral characteristics on animals or objects to allow humans to rationalize their actions with greater ease [160]. In human-computer interaction (HCI), however, the term has been used to refer to equipping agents with human-like characteristics such as face, body, voice, emotion, personality, and even identity. Anthropomorphism in HCI increases social bonding and user perception of agent competence [161], which is crucial given the serious nature of financial transactions. Hence, we believe that anthropomorphism is likely to be the dominant method for designing fintech agents.
Research on embodied agents in fintech is very limited, but studies in related areas such as e-commerce and e-learning (which can be likened to a financial advisory relationship) can provide insights. A study found that online shoppers reported a better experience when interacting with an anthropomorphic agent that had a human voice and trusted the agent more to help them with their purchase decisions [162]. Most studies on the effects of anthropomorphic agents in e-commerce show that anthropomorphism has generally positive impacts on user experience and buying intention [163-165]. In the context of e-learning, a study found that students' performance improved with an anthropomorphic agent (human-voiced, with human-like gestures, facial expression, eye gaze, and body movement) compared to an agent with only a human voice [166]. Most studies on the effects of anthropomorphic agents in e-learning show that anthropomorphism has positive impacts on learning and memory [167-169].
As technologies progress, fintech agents have developed from static images on the screen to fully animated characters incorporating facial expressions and lip movements that synchronize with speech (e.g., Figure 2). This form factor allows clients to have more natural face-to-face conversations at any time, combining the features of human interaction with the benefits of agent interaction. Furthermore, the inclusion of human-like behaviors such as eye movements has the potential to boost users' communication with agents [170].
In spite of recent technological developments, the implementation of highly realistic anthropomorphic agents has its challenges. These include having a negative effect if the accompanying conversational abilities are underdeveloped [171], difficulties in scaling up due to differences in manners, tone, and speech across different populations [172], needing to maintain a degree of artificiality to prevent unrealistic expectations [162,173], and needing to avoid the uncanny valley effect, where an agent begins to look eerie [174]. An alternative is to use caricatured agents, as in the case of REA, the Serbian fintech agent, and Joy, a virtual assistant on DBS Bank's website for corporate banking (Figure 6), to control unwarranted user expectations and prevent the uncanny valley effect.
Figure 6. Joy, a caricatured fintech agent for corporate banking. Source: https://www.dbs.com.hk/sme/business-banking/frequently-asked-questions.page (accessed April 2023).
As technologies to embody agents develop and become scalable, we expect that more financial firms will move towards embodiment. Given the lack of studies directly testing the effects of embodiment in fintech, future research should examine the effects of embodiment on trust, liking, perceived competence, and other important dimensions of financial relationships [175].

Agent Emotion
When humans interact, emotions are used to infer an interaction partner's internal states, behavior, or traits. Similarly, when computers portray emotions, this can act as a proxy to indicate the computer's internal state. This can foster productive and engaging interactions between users and technology [176]. Typically, artificial emotions are created based on emotional theories that categorize emotions as distinct types such as fear, anger, joy, sadness, disgust, and surprise [143]. Alternatively, emotions may be viewed as a system consisting of two dimensions, namely, arousal and valence [177]. Some approaches to designing artificial emotions in machines combine both methods.
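The relationship between the two emotion models mentioned above can be sketched by mapping a point in the dimensional (valence, arousal) space to a coarse categorical label by quadrant. This quadrant mapping is a common textbook simplification, assumed here for illustration; it covers only a subset of the basic emotions listed in the text.

```python
# Sketch relating the dimensional and categorical emotion models:
# a (valence, arousal) pair in [-1, 1]^2 is mapped to a discrete label
# by quadrant. The quadrant-to-label assignment is a simplification
# assumed for illustration (e.g., it cannot distinguish anger from fear,
# which share negative valence and high arousal).

def categorize(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) point to a coarse categorical emotion."""
    if valence >= 0:
        return "joy" if arousal >= 0 else "contentment"
    return "anger" if arousal >= 0 else "sadness"

print(categorize(0.7, 0.6))    # high valence, high arousal
print(categorize(-0.5, -0.4))  # low valence, low arousal
```

Hybrid approaches, as noted above, combine both views: a dimensional model drives continuous expression (e.g., posture, vocal pitch) while categorical labels select discrete behaviors.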
In human communication, mimicry fosters liking for the interaction partner [178]. An experiment using fMRI techniques found that even when users were explicitly informed that they were interacting with a computer, agents that displayed positive emotion in response to the user's smile allowed the user to experience positive emotion [179]. This finding suggests that the robust theory of human mimicry may apply to human-agent communication as well, making the inclusion of emotional display mechanisms in fintech agents a crucial area for research. It adds to previous research [180] that found that agents displaying self-oriented emotion had little or no effect on a user's reactions to the agent, but agents displaying empathic emotion had major positive effects on both liking and trust. Additionally, the emotion displayed by an agent must be calibrated according to its intended users' demographic variables. For example, older adults may need more emotionally expressive empathic agents, since they may be less adept than younger adults at pinpointing the emotions manifested by certain agent designs [181]. In the financial context, agents displaying emotion are not yet common. A preliminary attempt can be seen in the Dutch bank ING's fintech chatbot, Inga (Figure 7), which is designed to respond with empathy when a customer loses a credit card [182]. Although Inga is less embodied than the agent in Figure 2, which displays more body language, Inga uses emoticons to convey empathy for the user.
In other contexts, interactive agents have primarily displayed emotion via speech, facial expression, and body language. When it comes to identifying an agent's emotion, a disembodied voice agent was found to be just as effective as an embodied agent in conveying happy, content, and bored emotions [183]. A study found that students recognized the emotional state of the human or agent instructor, experienced the same emotional state as the instructor, and felt more motivated when the instructors displayed positive emotion [184]. Given that different channels, such as facial expression and voice, can contribute to emotional expression, a study examined which channel plays the most important role in emotion display [185]. Examining three channels (torso/limbs/head, face, and speech) and their contribution to five emotions (happiness, sadness, anger, fear, and surprise), this research found that the biggest contributor to the perceived believability of the animated emotion was the agent's body, followed by its face and speech. This is in line with previous research stating that humans have a strong tendency to respond to motion and to find semantic significance in motor action [106]. Studies examining the effects of fintech agents' body movements on user trust and liking will meaningfully extend the current literature. In addition, future studies should compare the distinctive effects of different categories of agent emotion (e.g., joy versus sadness) and dimensions of agent emotion (e.g., valence and arousal) in diverse financial contexts (e.g., bull versus bear markets; gain versus loss situations). This research can lead to the development of suitably emotional agents in finance.
Figure 7. Fintech agent Inga, designed to respond with empathy when a client loses a card. Source: https://medium.com/design-ing/how-we-designed-inga-a-delightful-banking-chatbotfor-ing-941d18c4646f (accessed 25 April 2023).

Agent Personality
Personality has been defined as the characteristic patterns of thinking, feeling, and behaving that distinguish individuals. Personality could serve as a useful affordance of the technology that guides users towards understanding an agent's behavior and eases the interaction [106]. The proliferation of computers will demand a more natural form of communication, for example, using embodied agents that are psychologically sensitive to the user [186]. Using personality to categorize humans significantly reduces cognitive load in an interaction, making it a useful construct to implement in human-computer interaction as well [187]. The Big Five model is a widely accepted typology of personality that comprehensively represents the fundamental traits of human personality [188]. This model identifies five main dimensions of personality: extraversion, neuroticism, agreeableness, conscientiousness, and openness to experience. Each of these broad dimensions is further broken down into more specific characteristics. Additionally, personality has also been defined as patterns of interpersonal interaction styles, which is a useful model for studying human-agent interaction [189]. In this area of research, two critical dimensions of personality are affiliation, which refers to how agreeable or quarrelsome an interaction partner is, and dominance, which refers to how dominant or submissive an interaction partner is. Research on personality in human-fintech agent interaction has focused primarily on the user's personality and how it impacts one's decision to use fintech agents. However, there is currently limited research on the design and effects of fintech agents' personalities.
Advisory service is a highly esteemed profession. If an agent should take over this role, its personality should be in line with what users expect from a human advisor. In terms of the affiliation dimension, fintech agents clearly need to be agreeable. In terms of the dominance dimension, however, it is unclear if a fintech agent should be dominant (taking leads, confidently making decisions) or submissive (letting customers take more leads, suggesting financial decisions). Preliminary research suggests that submissive robo-advisors may be preferred, although the user's own personality and the volatility of the market should be considered as well [190]. Future research on fintech agents should examine the personality of the agent as a factor affecting human-fintech agent interaction.
Preliminary qualitative research [159] found that a general context chatbot that identified users' personalities, adapted to them, and demonstrated its own personality through language cues provided a positive interaction experience for the participants. Examples from industry use are Rachel, Jamie, and Cleo, agents discussed earlier in this review [191]. Research on personalizing a mobile learning user interface according to the user's personality showed that the personalization helped to stimulate learning [192]. This holds potential for future exploration in the fintech context.

Issues and Theories Concerning Social Interaction with Fintech Agents
The previous two sections discussed the importance of fintech agents understanding their users and how agents should present themselves as interaction partners. The social cues agents present trigger heuristics in the human mind, whereby the interaction between human and agent becomes social. In the following section, we review two major research paradigms relevant to people's social interaction with technology, which could have direct implications for studies on human-fintech agent interaction.

The Media Equation (TME) and Computers Are Social Actors (CASA) Paradigms
The Media Equation (TME) theory and its derivation, the Computers are Social Actors (CASA) research paradigm [79], demonstrate that people process mediated experiences as though they were real and engage in fundamentally social interactions with interactive technologies like computers. The primary focus of TME and CASA revolves around how people react to the physical and social attributes of media and interactive technologies. Physical attributes include features like screen size, audio quality, and synchrony, while social attributes involve factors such as gender, personality, and manners. People tend to respond to virtual or nonhuman stimuli in the same way they would respond to actual human beings or real objects, despite being aware of their virtual or nonhuman nature [79]. For example, the automatic human tendency to pay attention to motion in our surroundings applies to media too: we pay more attention to motion on a screen even though we know it is not real. Similarly, when a computer provides fundamental social cues such as politeness and flattery, the user evaluates the computer more favorably, just as humans evaluate polite people more favorably than others.
CASA has been studied and verified in many HCI contexts, including human-agent interaction [93], human-smartphone interaction [193], and human-robot interaction [51]. People respond socially to these interactive technologies due to mindless behavior, whereby the user does not pay attention to all relevant features of the situation, such as the fact that the social cue is from an inanimate object. Instead, the user focuses on the social cues and relies on the overuse of human social categories, overlearned social behaviors, and premature cognitive commitments that are made based on the salience of a technology's social cues [62]. While TME explains that the human brain has not evolved enough to distinguish between actual stimuli and technology-mediated stimuli [80], other research argues that with continued interaction with an agent, users' responses to social cues change and are different from their responses to other humans. This suggests that users could develop agent-specific social responses that are subtly different from general social responses to other humans [194]. Research in this area will provide insight into how technology should be designed to create natural and enjoyable interactions.
Human-fintech agent interaction is an emerging area of interest and is likely to attract more scholarly attention, given that finance is a very personally relevant context for users. Examining the principles of TME and CASA in a fintech context (such as giving an agent motion where necessary or programming it to be polite) would lead to an expanded understanding of the mechanisms underlying human-agent interaction and how they may function in the financial context. This will allow financial institutions to create agents that are better accepted, trusted, and liked. Future studies should also examine the effects of long-term interaction with fintech agents, given the recent developments in CASA literature and the long-term nature of the human-fintech agent relationship.

Social Presence Theory
Following the robust findings of TME and CASA, scholars have examined the phenomenon whereby people automatically behave as though they are interacting with another human, even though the experience cannot exist without human-made technology. This phenomenon is called presence, a psychological state in which the virtual nature of objects goes unnoticed and they are experienced as though they were actual objects [195]. Presence can be experienced with objects, social actors, and representations of one's own self. Among these three types of presence (physical, social, and self), social presence is highly relevant to the study of human-fintech agent interaction. Exploring the factors that increase social presence, and the effects of social presence when interacting with fintech agents, will have important theoretical and practical implications.
Several factors can affect social presence in human-agent communication, including the behavioral realism of the agent (e.g., nodding in response), the level of interactivity provided by the agent's design, psychological factors such as the similarity-attraction principle (e.g., agents of the same ethnicity as the user), and individual user factors such as gender and familiarity [196]. While embodiment in general can increase presence, highly anthropomorphic agents may create unrealistic expectations that the agents cannot fulfill, ultimately leading to lower levels of presence [197]. In text-based communication, higher synchronicity (immediate responses) and the use of emoticons have been shown to increase social presence with agents [173]. In e-commerce, agents that signal their expertise in their tasks can create higher social presence, which mediates trust [198].
The effects of social presence have been examined in various technology-mediated contexts, including finance, e-learning, and healthcare. Feelings of social presence lead to increased enjoyment, trust, perceived usefulness, positive evaluation, intention to use the technology, memory and task performance, persuasion, and message processing [187,196,199-202].
Given the well-recorded effects of social presence in various HCI contexts, future studies should further examine the effects of social presence in fintech. More nuanced approaches are needed, however, because of the private and sensitive nature of financial matters. A study showed that participants interacting with fintech chatbots prefer a mechanical chatbot over a human-like one when it comes to sharing sensitive financial information [203], suggesting that the role of social presence in fintech might be subtly different from its role in other contexts.
In addition, individual user factors are an important consideration in designing agents with social presence. For example, a user's financial knowledge or previous investment experience should be considered, as financial transactions can be complex. Hence, beginner investors might prefer interacting with an anthropomorphic agent with a high level of social presence. Other factors, such as market conditions or interaction duration (long-term versus short-term), should also be examined in future studies examining the effects of social presence in fintech.

Discussion
This review article has mapped the ecosystem of technical and social factors involved in the development of interactive fintech agents. It identified the different aspects or elements through which fintech agents are perceived as interaction partners. These elements provide a theory-based reference for the development of technologies for interactive fintech agents.
First, interactive agents can employ embodied and disembodied modes of communication, each of which has its advantages and disadvantages. Understanding these differences will aid financial institutions and scholars in developing user-friendly interactive agents. For example, embodied agents can provide a richer interaction experience by making use of nonverbal communication modalities such as facial expressions. Embodied agents may also be personalized based on users' culture or preferences. On the other hand, disembodied agents may be preferred in situations where online accessibility or technical infrastructure is less developed.
Second, this review highlighted that interactive agents must have the intelligence to understand and execute different financial tasks. Beyond this, it would be useful for engineers and designers to examine how agents can demonstrate their intelligence to users, for example by displaying the user's question while the answer is being processed, by having embodied agents provide visual cues such as nods to show that they understand the user's request, or by predicting the user's subsequent requests. Demonstrating intelligence can help agents gain users' trust and encourage their continued use.
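The feedback cues described above can be illustrated with a minimal sketch. The class, its canned answer, and the topic-to-follow-up mapping below are all hypothetical, intended only to show how an agent might echo the user's question, answer it, and anticipate a likely next request:

```python
class FeedbackChatbot:
    """Toy sketch of a chatbot that surfaces 'intelligence cues' to the
    user: it echoes the question being processed, answers it, and
    suggests a likely follow-up request. All content is illustrative."""

    # Hypothetical mapping from detected topics to likely next requests.
    FOLLOW_UPS = {
        "balance": "Would you like to see recent transactions?",
        "transfer": "Would you like to set up a recurring transfer?",
    }

    def respond(self, query):
        # Echo the query so the user sees the agent has understood it.
        cues = [f'Processing your question: "{query}" ...']
        cues.append(self._answer(query))
        follow_up = self._predict_follow_up(query)
        if follow_up:
            cues.append(follow_up)  # anticipate the user's next request
        return cues

    def _answer(self, query):
        return "Your current balance is $1,234.56."  # placeholder answer

    def _predict_follow_up(self, query):
        for topic, suggestion in self.FOLLOW_UPS.items():
            if topic in query.lower():
                return suggestion
        return None
```

In practice the echo, the answer, and the prediction would come from the agent's NLP pipeline rather than canned strings, but the structure of the cues would be similar.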
Third, apart from understanding the financial tasks assigned to them, interactive agents must aim to understand their users. This review identified three domains of user-related knowledge that interactive agents must focus on: users' intentions, their emotions, and their financial behaviors. Technology development in these domains will allow interactive agents to perform their tasks in a closer approximation of a traditional human financial advisor.
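The three user-knowledge domains can be pictured as fields of a simple user model that the agent maintains and updates. The field names and the sentiment threshold below are hypothetical placeholders, not a proposal from the literature:

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    """Illustrative sketch of the three user-knowledge domains:
    intentions, emotions, and financial behaviors."""
    intent: str = "unknown"       # e.g., "rebalance_portfolio"
    emotion: str = "neutral"      # e.g., inferred from text sentiment
    risk_tolerance: float = 0.5   # 0 = very cautious, 1 = aggressive
    recent_actions: list = field(default_factory=list)  # observed behaviors

    def update_from_event(self, action, sentiment):
        """Fold an observed action and a sentiment score (-1..1) into
        the model, updating the behavior log and the inferred emotion."""
        self.recent_actions.append(action)
        if sentiment < -0.3:
            self.emotion = "anxious"
        elif sentiment > 0.3:
            self.emotion = "pleased"
        else:
            self.emotion = "neutral"
```

A real system would populate these fields from intent-classification, affect-recognition, and transaction-history components, but the division of labor mirrors the three domains identified here.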
Fourth, to further bridge the gap between interactive fintech agents and human financial advisors, this review explained how it is important for agents to manifest as social actors, i.e., take on human-like traits to some degree. By presenting these traits or social cues, agents can trigger heuristics in the minds of users and make the interaction seem more natural. Three ways that agents can present themselves as social actors were discussed in this review: presenting themselves with form, displaying a personality, and displaying pseudo-emotions.
These aspects of human-fintech agent interaction provide a critical overview of the areas where technical development should focus, which can lead to more efficient and enjoyable interaction experiences.
This review also highlighted some areas for future research and development. Foremost among these are the need to develop embodied agents and the need for research to determine if anthropomorphic agents are perceived positively in the financial industry. Furthermore, agents with mechanisms to display emotions such as empathy may be successful and well-liked, although further empirical research in this area is required. For example, in highly volatile market conditions, users may prefer agents that display less emotion. In this context, less emotion from an agent could be an advantage, as users may appreciate the agent's purely rational decision-making, in contrast to the human decision-making process, which can be fueled by fear or herd behavior. Similarly, the effect of motion or animation, as well as agent personality, must be further investigated in the context of interactive fintech agents. Moreover, research that examines users' feelings of social presence when interacting with a fintech agent would be useful to determine the extent to which fintech agents must present themselves as actual interaction partners. This is an important consideration in finance, a field where the human touch from a financial advisor may hold as much importance as the neutrality or rationality that a computer program can provide. Hence, the question of how human-like or machine-like a fintech agent must present itself must be investigated.
In addition, tailored experiences are another important area for future research. Agents that are intelligent enough to personalize themselves to each user would allow for greater ease of interaction. For example, agents that can learn about a user's personality over time and adapt to suit the user or market conditions would be beneficial. Currently, some fintech agents have limited interaction, such as robo-advisors that present general information to users but do not interact on a more personalized level with individual users. This is a major area for future development. We can also expect to see hybrid models, where fintech agents and human financial advisors work in tandem to cater to and assist users.
Other areas for research include examining the effects of agents that detect user emotion and convey artificial or pseudo-emotion as a means of reflecting their internal states. A modular approach to designing fintech agents may allow for increased levels of social presence, enhancing the interaction experience. This would aid in the development of physically embodied fintech agents whose presence in the same physical space as the user may create opportunities for novel and enjoyable financial experiences. Social presence could be examined as a mediator of the effects of interaction with a fintech agent on users' financial behaviors. As fintech agents continue to develop and seek to differentiate themselves from both human advisors and competing agents, they may be able to use customized voice modalities to interview new users and obtain detailed responses instead of relying on traditional, static questionnaires [204]. Such interactions would require advances in technology as well as careful implementation.
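The modular approach mentioned above can be sketched as a pipeline of independent components, each adding one social cue to the agent's reply. The module names and the specific cues below are hypothetical; the point is only that social presence can be dialed up or down by composing or removing modules without touching the core logic:

```python
class EmotionModule:
    """Hypothetical module: wraps the reply in a pseudo-emotional tone."""
    def process(self, message):
        return f"[warm tone] {message}"

class PresenceModule:
    """Hypothetical module: adds a social-presence cue (greeting by name)."""
    def __init__(self, user_name):
        self.user_name = user_name

    def process(self, message):
        return f"{self.user_name}, {message}"

class ModularAgent:
    """Composes independent modules; adding or removing one changes the
    level of social presence independently of the core financial logic."""
    def __init__(self, modules):
        self.modules = modules

    def reply(self, core_answer):
        for module in self.modules:
            core_answer = module.process(core_answer)
        return core_answer
```

For example, `ModularAgent([PresenceModule("Ana"), EmotionModule()])` layers a greeting and a tone cue onto a plain answer, whereas `ModularAgent([])` passes the answer through unchanged.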

Conclusions
This paper examined a fast-expanding category of media technology: computer agents used in finance to interact with users and assist in completing financial tasks and goals. The use of interactive agents has seen considerable growth, even in contexts where the interaction could have potentially critical outcomes for users, such as healthcare [205]. However, there is a lack of review articles on the range of technologies and social interaction considerations that are important to the development of interactive agents in the context of finance. The financial context, like healthcare, has a significant degree of personal relevance to users, as they may rely on the agents for information and to make crucial decisions. Hence, both the back-end technology and the user-facing interactive aspect of the technology can have severe consequences for financial institutions and users. Considering the growth of interactive agents in this context and the serious nature of financial interactions, this review aimed to contribute to the understanding of fintech agents by providing an overview of the technologies and theoretical issues involved in creating successful interactive agents. Furthermore, this review offers a unique contribution to fintech agent understanding as it includes a social-scientific perspective. By discussing the major theoretical issues related to interaction, this review can guide the understanding of interactive agents as social actors, which will benefit scholars as well as practitioners.
The main technologies required to develop fintech, and fintech agents in particular, were presented. These emphasize the technological advancements enabling these agents' crucial role as interaction partners with users. This was followed by a discussion of how issues and theories from a social science perspective are necessary and relevant to human-fintech agent interaction. These theoretical issues explain how an interactive agent may understand the user, how agents can manifest as social actors, and how users can perceive virtual objects as though they were actual social actors during their interaction with the technology.
Some issues are beyond the scope of this paper but must be taken into consideration when agents are used in daily life. For example, while robo-advisors are meant to overcome the behavioral biases of human investors, researchers argue that since agents are programmed by humans, they are not free from biases such as giving more allocation weight to domestic stocks [206]. It remains to be seen whether fintech agents can drastically reduce the errors that a human financial advisor may be prone to and how their performance influences the interaction with users. Furthermore, when users interact with agents that take an in-depth approach to understanding users, there is a risk of users' data being leaked or misused. The ethics of artificial conversation partners should also be examined to ensure that humans who have grown to trust their interactions with fintech agents do not become victims of malicious programmers looking to exploit less savvy users.
Moreover, an ethical challenge persists with embodied agents. The examples of anthropomorphic fintech agents discussed in this review and available in the industry are predominantly female, which can raise concerns about perpetuating gender stereotypes such as women being assistants or taking on service roles [182]. A similar trend can be observed with disembodied chatbots, many of which have been given female names. Future research must consider the social implications of gendered agents in the finance industry, which is already male-dominated [207].
Finally, the use of fintech agents brings attention to the field of machine ethics, which aims to create agents that follow a set of ethical principles when making decisions or taking actions [208]. To establish an ethical frame of reference, organizations must have a clear understanding of their ethical stance, including how business should be conducted and what standards should be followed [209]. Fintech agents such as robo-advisors may raise ethical concerns, such as whether their interactions should prioritize the performance of a single user's portfolio or the overall performance of all portfolios they manage, or perhaps even prioritize the overall stability of the financial market for the long-term benefit of all users. Furthermore, robo-advisors may not fulfill their fiduciary duty when advising clients if they are not advanced enough to provide adequately personalized financial advice [210]. One potential solution is to develop AI technology that has a strong moral code and understands the consequences of violating it. This would require significantly more data and continuous scenario simulation for introspective guidance [206].
Nevertheless, interactive fintech agents are likely to be an important element of the financial landscape for users of all levels of experience. Fintech provides a good opportunity for practitioners and social scientists to test social science theories in a novel context that is deeply personal to users. Human-fintech agent interaction may also function as a platform for advancing our understanding of behavioral economics and examining ways to overcome human limitations with the aid of agents. By incorporating social science theories, developers and researchers will have the opportunity to create successful fintech platforms that promote a natural and enjoyable interaction with users.
The emerging area of human-fintech agent interaction is an exciting opportunity for scholars to advance the state of research. Findings from human-fintech agent interaction will also inform other emerging fields, such as autonomous vehicle agents and agents in the healthcare context. Like finance, both these areas involve high levels of trust between users and service providers. Understanding the interaction between users and fintech agents using behavioral science will provide a good starting point for research in these other critical areas.

Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest:
The authors declare no conflict of interest.