Article

Emotion and Intention Detection in a Large Language Model

Centro de Investigación en Computación, Instituto Politécnico Nacional, Mexico City 07738, Mexico
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(23), 3768; https://doi.org/10.3390/math13233768
Submission received: 25 September 2025 / Revised: 5 November 2025 / Accepted: 17 November 2025 / Published: 24 November 2025
(This article belongs to the Special Issue Mathematical Foundations in NLP: Applications and Challenges)

Abstract

Large language models (LLMs) have recently shown remarkable capabilities in natural language processing. In this work, we investigate whether an advanced LLM can recognize user emotions and intentions from text, focusing on the open-source model DeepSeek. We evaluate zero-shot emotion classification and dialog act (intention) classification using two benchmark conversational datasets (IEMOCAP and MELD). We test the model under various prompting conditions, including those with and without conversational context, as well as with auxiliary information (dialog act labels or emotion labels). Our results show that DeepSeek achieves an accuracy of up to 63% in emotion recognition on MELD when utilizing context and dialog-act information. For intention recognition, the model improved from 45% to 61% with the aid of context, but no further improvement was observed when emotional cues were provided. These results support the hypothesis that providing conversational context aids emotion and intention detection; conversely, adding emotion cues did not enhance intent classification, suggesting an asymmetric relationship. These findings highlight both the potential and limitations of current LLMs in understanding affective and intentional aspects of dialogue. For comparison, we also ran the same emotion and intention detection tasks on GPT-4 and Gemini-2.5. DeepSeek-r1 performed as well as Gemini-2.5 and better than GPT-4, confirming its place as a strong, competitive model in the field.

1. Introduction

Chatbot technology has advanced rapidly in the past decade, driven by improvements in artificial intelligence (AI) techniques such as natural language processing (NLP) and machine learning (ML) [1]. Modern conversational agents benefit from text-to-speech and speech-to-text capabilities, making interactions more natural [2] and increasing user acceptance worldwide [3]. Intelligent personal assistants (Apple Siri, Microsoft Cortana, Amazon Alexa, Google Assistant) now demonstrate a better understanding of user input than earlier chatbots [4], thanks to these AI advances. They can even mimic human voices and produce coherent, well-structured sentences.
Chatbots are employed across diverse domains—entertainment, healthcare, customer service, education, finance, travel [5]—and even as companions [6]. These successes result from years of research aimed at improving conversational AI functionality, performance, and language understanding accuracy [7], along with new strategies for efficient implementation [8] and dialogue quality evaluation [9]. Nevertheless, current chatbots often still fall short of user expectations [1], leading to frustration and dissatisfaction [10]. This shortfall has led researchers to emphasize the importance of endowing such systems with affective recognition capabilities, on the premise that acknowledging and responding to user emotions improves user experience [11].
Due to issues like the above, users may perceive chatbot interactions as unnatural or impersonal. To address this, Wolk [12] argued that conversational agents should provide personalized experiences, foster lasting relationships, and receive positive user feedback. He suggested this could be achieved if such systems were capable of processing users’ intents (in addition to understanding content).
In summary, many researchers advocate for conversational agents and other AI applications to be able to recognize both the user’s intentions and emotions during interactions. This dual capability could make dialogues more natural and satisfying.
Recently, large language models (LLMs) have had a profound impact on the NLP community with their remarkable zero-shot performance on a wide range of language tasks [13]. As LLMs are increasingly integrated into daily life applications, it is vital to analyze how well these models can recognize and classify user intentions and emotions. Developments like supervised fine-tuning have further improved LLMs’ understanding of user instructions and intentions [14].
The debut of ChatGPT in late 2022 [15] revolutionized AI conversations with its ability to engage in human-like dialog. More recently, in early 2025, a new open-source large language model (LLM) called DeepSeek-r1 (DS-r1) [16] was introduced as a promising model with purportedly advanced dialogue capabilities. DS-r1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks, and outstanding results on benchmarks such as MMLU, MMLU-Pro, and GPQA Diamond, with scores of 90.8% on MMLU, 84.0% on MMLU-Pro, and 71.5% on GPQA Diamond, showing its competitiveness [17]. DS-r1 also excels in a wide range of tasks, including creative writing, general question answering, editing, summarization, and more. It achieves an impressive length-controlled win-rate of 87.6% on AlpacaEval 2.0 and a win-rate of 92.3% on ArenaHard, showcasing its strong ability to handle non-exam-oriented queries intelligently [17].
To the best of our knowledge, this is the first systematic study evaluating DeepSeek, a recently released open-source large language model, on the combined tasks of emotion and intent detection in dialogue, and on whether providing the additional context of the conversation (plus, in the case of emotion recognition, the prior annotation of intent, and vice versa) influences performance on these tasks. While prior research has assessed ChatGPT, GPT-4, or fine-tuned transformer models for these tasks, DeepSeek has not been examined in this context despite its reported strengths in reasoning and general dialogue. Our work, therefore, provides novel insights into the affective and pragmatic capacities of a state-of-the-art open-source LLM, highlighting an asymmetric relationship between emotions and intentions that has not been previously documented.
Objectives: In this study, we evaluate the performance of DeepSeek (V3-0324 and R1-0528) as a representative LLM on two key tasks: emotion recognition and intent recognition from text. We focus on zero-shot prompting scenarios—without any fine-tuning—on established conversational datasets. Specifically, we assess (1) how accurately DeepSeek-v3 (DS-v3) and DeepSeek-r1 (DS-r1) can classify the emotion expressed in a user utterance, and (2) how accurately they can classify the user’s intent (dialogue act). We also investigate whether providing additional context (such as the surrounding conversation or knowledge of the other aspect, emotion versus intent) affects performance on these tasks.
The main contributions of this work are:
  • a demonstration that DeepSeek can produce reasonable zero-shot classifications of intent and emotional state;
  • a demonstration that DeepSeek improves its recognition of intents and emotional states when it is provided with the conversational context;
  • a reaffirmation of the suggestion that providing an LLM with the intention of an utterance can improve emotional state classification.
The remainder of this paper is organized as follows: Section 2 provides background on emotion and intention detection, as well as their relationship. Section 3 reviews related work on emotion recognition, intention recognition, and their combination. Section 4 describes the experimental methodology, including the prompt design and datasets. Section 5 presents the results of the emotion and intent classification experiments. Section 6 presents the comparative results of ChatGPT, DeepSeek, and Gemini. Section 7 discusses the findings, and Section 8 concludes the paper and outlines future work.

2. Background and Problem Formulation

2.1. Emotion Detection: Definition and Importance

Emotion detection (ED), also called emotion recognition, is a branch of sentiment analysis focused on identifying and interpreting human emotions from various data sources. Its primary goal is to recognize and interpret emotional states by analyzing inputs such as facial expressions, vocal tone, body language, or textual content [18]. In text-based ED, the aim is to infer the writer’s or speaker’s emotional state from language cues.
A foundational psychological framework in ED is Ekman’s model of basic emotions. Ekman identified six basic emotions (happiness, sadness, anger, disgust, surprise, fear) that are universally recognized across cultures [19]. These categories often serve as a basis for labeling and classifying emotions in research.
Emotion detection is essential because it allows artificial agents to gather affective information from users and adapt accordingly. By understanding a user’s emotional state, a system can tailor its responses and create a more engaging, long-term interaction [20]. Indeed, systems capable of detecting or expressing emotions have been shown to improve user experience [11]. For example, an emotion-aware system might detect frustration and adjust its dialogue strategy to be more supportive.
Beyond improving general human–computer interaction, emotion recognition has diverse applications. In security and law enforcement, verbal and written cues to emotion can assist in lie detection or threat assessment [21,22]. In customer experience management, businesses use emotion detection to gauge customer satisfaction or dissatisfaction from feedback, enabling proactive service improvements [18]. In healthcare, emotion recognition from text (such as therapy transcripts or patient journals) can help identify signs of mental health issues like depression or anxiety [23]. In education, real-time emotion monitoring could alert teachers to student confusion or frustration [24]. Across these domains, adding emotional context enables AI systems to provide more personalized and effective responses.

2.2. Intention Detection: Definition and Importance

Intention detection is a subfield of NLP and AI that identifies the underlying goal or purpose behind a user’s utterance. For example, a user query might intend to ask a question, issue a command, make a request, or express an opinion. Recognizing intent is critical in task-oriented dialogue systems (such as virtual assistants), which need to map natural language to specific actions or replies. In dialog systems like Apple’s Siri, Amazon Alexa, or Google Assistant, intent detection is the process that interprets a user’s natural-language command and triggers the appropriate function or answer [25]. This interpretation process is rooted in pragmatics: Austin argued that language is used to accomplish actions [26], and Searle [27] classified speech acts into five illocutionary acts (representatives, directives, commissives, expressives, and declarations), from which categorized intents in terms of dialog acts (DAs) such as Question, Command, Statement, and Request are derived.
Intention detection greatly enhances human–computer interaction by enabling systems to respond to what the user means, not just what they literally say. This leads to more efficient and meaningful interactions. Users experience less frustration when the system correctly interprets their requests [12] and provides relevant answers quickly [28]. Moreover, accurate intent recognition allows conversational agents to personalize responses to a user’s needs and context, creating a more engaging experience [28].
In customer service automation, intent classification allows chatbots to resolve standard queries without human intervention, thus reducing response times and operational costs [28]. In decision support, identifying user intent from feedback or questions can help businesses uncover what customers aim to do (e.g., complain, seek information, make a purchase) and adapt strategies accordingly. Intention detection also enables proactive assistance: a system that understands a user’s goal can sometimes anticipate needs and offer help before the user explicitly asks [28]. In accessibility contexts, intent-aware interfaces can make communication more seamless and reduce ambiguity [29].
Overall, intention detection is a key component in modern conversational AI, driving improvements in user experience, automation, personalization, and the overall effectiveness of dialogue systems.

2.3. The Relationship Between Emotions and Intentions

Emotions and intentions in communication are closely interconnected. Emotional states often influence intentions: for instance, fear may lead to the intention to escape a situation, whereas anger might lead to the intention to confront [30]. Positive emotions, such as excitement, can motivate intentions to pursue a goal, while gratitude can increase a consumer’s intention to repeat a purchase [31,32]. Thus, emotion can bias decision-making and drive particular intentions.
Conversely, intentions (or outcomes related to one’s intentions) can influence subsequent emotions. If a person achieves their intended goal, they may feel satisfaction or contentment; if their intention is thwarted, they may feel disappointment or sadness [33]. In dialog, a speaker’s success or failure in fulfilling their communicative intention (e.g., making a request or joke) can alter their emotional state.
Emotions and intentions often exist in a feedback loop: emotions give rise to intentions, and executing or thwarting those intentions generates new emotional responses. In conversation, this interplay can be subtle. For example, consider an utterance like “Yeah, sure.” Its literal meaning is agreement, but if delivered sarcastically (with an emotional nuance), the true intention might be disagreement or dismissal. The speaker’s emotion (frustration) is key to correctly inferring the communicative intent in such cases [34].
Understanding the relationship between emotion and intention is especially important for AI systems performing emotion or intent detection. Emotions can provide context to disambiguate a user’s intent when the language alone is unclear [35]. Likewise, knowing a speaker’s intent (such as their dialog act) could help interpret ambiguous emotional expressions. We hypothesize that incorporating knowledge of one aspect (emotion or intent) could improve the detection of the other. In other words, an AI that knows the user’s emotional tone might better judge their intent, and vice versa. This forms the central hypothesis of our study.

2.4. Problem Statement and Hypothesis

Despite progress in specialized emotion or intent classifiers, it remains unclear whether large pre-trained language models can achieve a similar level of understanding in a zero-shot setting. Preliminary studies have shown that LLMs like ChatGPT can perform emotion classification, although not at state-of-the-art levels of accuracy (e.g., around 58% on one benchmark [36]). In another study [37], various models of LLMs (GPT-3.5-turbo, GPT-4 and flan-alpaca) were explored in a zero-shot condition to recognize emotions. Their evaluation showed that these models did not perform as well as the top-rated state-of-the-art models (BERT and RoBERTa). Few studies have examined LLMs for intent (dialog act) classification, especially in the context of emotion.
The problem we address is how to improve the performance of an LLM in recognizing emotions and intentions from dialog text as reliably as task-specific models, and whether providing information about one will help the other. Specifically, we explore whether adding conversational context and cross-task information (emotion annotations or intent annotations) can improve zero-shot classification performance.
We formulate the hypothesis that emotions provide helpful cues for intent detection and vice versa. For example, knowing an utterance is spoken in anger might hint that it is intended as a criticism rather than a neutral statement. Likewise, knowing that an utterance is a question (intent) might constrain the likely emotions expressed. We test this hypothesis by comparing model performance with and without access to such supplemental information.

3. Literature Review

3.1. Emotion Recognition in Dialogue

Researchers have approached emotion recognition in text from various perspectives. One early milestone was the work of Zhou et al. [38], who incorporated an explicit “emotion feature” into a chatbot model to generate emotionally relevant responses. Their encoder–decoder model with gated recurrent units (GRUs) introduced emotion category embeddings and an internal/external memory for emotional context, enabling it to respond with contextually appropriate emotion.
Building on the importance of affect in conversation, Asghar et al. [39] improved upon Zhou’s work by using a sequence-to-sequence model with a Long Short-Term Memory (LSTM) network. This model generated responses that more accurately reflected the user’s emotions. With the advent of transformer architectures, Majumder et al. [40] proposed an encoder–decoder transformer chatbot that could mimic the user’s emotions to produce empathetic responses. Their model injected stochastic variation into emotional responses and mirrored the user’s emotional tone to appear more empathetic. Luo et al. [41] propose a fine-tuned pre-trained RoBERTa model with a CNN-LSTM network for textual emotion recognition in a conversation, taking into consideration the long-term emotion-relevant context information. Their results outperformed the state-of-the-art models on the MELD dataset in most cases.
The emergence of powerful LLMs led to new approaches. For example, the release of ChatGPT in 2022 opened the possibility of using a pre-trained conversational model for emotion recognition. Some researchers fine-tuned GPT-style models to create end-to-end empathetic chatbots [42]. Others evaluated ChatGPT directly as an emotion classifier. Banimelhem and Amayreh [36] tested ChatGPT on a standard emotion classification dataset (dair-ai/emotion) and reported an accuracy of about 58%, indicating some competence but leaving room for improvement. Similarly, Mullangi et al. [43] explored sentiment and emotion modeling with ChatGPT, highlighting both its potential and its pitfalls (e.g., inconsistent label assignment without fine-tuning). Wake et al. [44] investigated biases in ChatGPT’s emotion recognition, pointing out that while performance was generally good, certain emotions were systematically more complicated for the model to identify correctly. Mohammad et al. [37] explored various LLMs in a zero-shot setting for emotion recognition; their evaluation showed that these models did not perform as well as the top-rated state-of-the-art models.
In summary, emotion recognition in text has progressed from specialized neural architectures to the current exploration of prompting large pre-trained models. Fine-tuned models show that incorporating emotional embeddings or memories can enhance dialogue generation. Meanwhile, recent studies suggest that large language models can recognize emotions to some extent. Still, their zero-shot performance may not match that of dedicated models without further adaptation or context.

3.2. Intention (Dialog Act) Recognition

Detecting a speaker’s intent—often operationalized as dialog act (DA) classification—has also seen extensive research. Early work by Ortega and Vu [45] used recurrent neural networks (RNNs) with attention mechanisms for DA classification, highlighting the value of context representations in improving performance. Raheja and Tetreault [46] introduced a context-aware self-attention model, which improved on prior RNN approaches on the Switchboard dialogue corpus by capturing long-range dependencies in conversation.
Other studies combined statistical models with neural networks. Saha et al. [47] experimented with Conditional Random Fields (CRFs) along with neural encoders for DA tagging, and Li et al. [48] developed a dual-attention hierarchical RNN with a CRF output layer to label utterances with DAs, using both utterance-level and context-level attention. Shang et al. [49] further showed that incorporating information about speaker turns (who is speaking when) can improve DA classification; they modified the CRF layer to account for speaker change information and found that this yielded more accurate results.
Most of the above methods involve supervised learning on labeled dialogue datasets. There is less published work on using LLMs for DA classification in a zero-shot or prompt-based way. However, the intent detection task in virtual assistants is related and has been addressed with modern techniques. For example, in a recent dissertation, Ye [28] studied user intent modeling in conversational systems, though primarily focusing on task-specific modeling.
In general, intent recognition benefits from understanding dialogue context and structure. The current research gap lies in assessing whether LLMs, which have implicit knowledge of language and dialogue, can infer intents without explicit training on DA labels.

3.3. Emotion and Intent Recognition Together

Very few studies have directly examined the interaction of emotion and dialog act recognition in conversation analysis. A couple of notable exceptions include Bosma and André [35], who attempted to disambiguate speech acts by incorporating the user’s emotional state. In cases where an utterance was ambiguous or difficult to classify by intent alone, knowing the emotion led to more accurate speech-act classification. Their results were encouraging, suggesting a complementary relationship between affect and intent.
Novielli and Strapparava [50] explored affective analysis in dialogue act identification. They proposed that lexical features associated with emotion could inform DA classification. Their findings provided positive evidence that affective lexical cues correlated with particular intentions in dialogue, reinforcing the idea that emotions and intentions are linked.
These studies support our hypothesis that joint modeling or cross-conditioning of emotion and intention could be beneficial. However, they used relatively small models or specific algorithms. It remains to be seen how a large language model might implicitly capture these relationships.
In summary, integrating emotion and intention recognition is a nascent research area. Prior work indicates that emotion cues can help resolve intent ambiguities and that affective lexicons align with communicative intent. This motivates our experimental approach: we will test an LLM on both tasks and observe how providing it with additional emotional or intentional context influences its performance.

4. Materials and Methods

To classify every utterance of each conversation, utterances were sent to DeepSeek (DS) through a module specifically designed for this purpose. The classification was made within a predefined set of categories. A prompt specifying the desired task was added to each utterance. The classification produced by DS was then retrieved, compared, and evaluated against the dataset’s original annotations. Finally, we computed appropriate metrics to assess the model’s performance.
Formally, the method can be described as follows:
Given a conversation C,
$C = u_1 + u_2 + \cdots + u_n, \quad i \in \{1, 2, \ldots, n\}$
where $u_i$ represents the i-th utterance.
The function that maps every utterance to its prediction can be defined as,
$f_1 : \{p_1 + u_i\} \rightarrow y_i, \quad i \in \{1, 2, \ldots, n\}, \ \text{and} \ y_i \in E$
where $p_1$ represents the prompt added to each utterance, $y_i$ the prediction made by the model for the specific utterance $u_i$, and $E$ the set of predefined labels.
Then, let us say that,
$L = y_1 + y_2 + \cdots + y_i, \quad i \in \{1, 2, \ldots, n\}, \ \text{and} \ y_i \in E$
where $L$ represents the ordered list of all the predictions made by the model over the set of predefined labels.
Now, if we want to consider the entire context of the conversation, we have the following function,
$f_2 : \{p_2 + \{u_1 + u_2 + \cdots + u_i\}\} \rightarrow \{y_1 + y_2 + \cdots + y_i\}, \quad i \in \{1, 2, \ldots, n\}$
where $p_2$ represents the prompt that asks the model to take the conversational context into account, and $y_i$ is an element of $L$, the ordered list of all the predictions made by the model.
Now, let $L'$ be an ordered list with the original annotations of the conversation $C$, mapped from each utterance,
$L' = y'_1 + y'_2 + \cdots + y'_i, \quad i \in \{1, 2, \ldots, n\}$
In the last case, when we want to consider the context of the conversation and provide the model with the additional information of the pre-annotated label class, we have the following function,
$f_3 : \{p_3 + \{u_1 y'_1 + u_2 y'_2 + \cdots + u_i y'_i\}\} \rightarrow \{y_1 + y_2 + \cdots + y_i\}, \quad i \in \{1, 2, \ldots, n\}$
where $p_3$ represents the prompt that asks the model to consider the conversational context together with the additional information, and $y'_i$ represents the original label of the conversation $C$ belonging to $L'$, mapped from each utterance $u_i$.
Finally, the next function returns the confusion matrix that will help us obtain the necessary metrics to evaluate the model,
$f_4(y_i, y'_i) \rightarrow M_{ij}, \quad i \ \text{and} \ j \in \{1, 2, \ldots, n\}$
The matrix $M_{ij}$ enables us to obtain the metrics for evaluation (see Section 4.3 for more details on this function and other algorithms).
The following diagram (Figure 1) visualizes the method described above:
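To make the formulation above concrete, the following minimal Python sketch illustrates how the three mappings f1, f2, and f3 differ only in how the prompt and the conversation are assembled. The helper query_model is a hypothetical stand-in for whatever routine submits a prompt to DeepSeek and returns its raw text answer, and the label parsing is deliberately simplified.

from typing import Callable, List

def f1_utterance_level(p1: str, conversation: List[str],
                       query_model: Callable[[str], str]) -> List[str]:
    # Classify each utterance in isolation (no conversational context).
    return [query_model(f"{p1} '{u}'").strip().lower() for u in conversation]

def f2_with_context(p2: str, conversation: List[str],
                    query_model: Callable[[str], str]) -> List[str]:
    # Classify all utterances at once, giving the model the whole dialogue,
    # numbered so that the answer comes back as an ordered list.
    numbered = "\n".join(f"{i + 1}.- {u}" for i, u in enumerate(conversation))
    reply = query_model(f"{p2}\n{numbered}")
    return [line.split(".", 1)[-1].strip().lower()
            for line in reply.splitlines() if line.strip()]

def f3_with_context_and_labels(p3: str, conversation: List[str],
                               counterpart_labels: List[str],
                               query_model: Callable[[str], str]) -> List[str]:
    # Same as f2, but each utterance carries its pre-annotated counterpart
    # label (a dialog act for emotion classification, or vice versa).
    numbered = "\n".join(f"{i + 1}.- {u} ({lab})"
                         for i, (u, lab) in enumerate(zip(conversation, counterpart_labels)))
    reply = query_model(f"{p3}\n{numbered}")
    return [line.split(".", 1)[-1].strip().lower()
            for line in reply.splitlines() if line.strip()]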

4.1. Prompt Design for the LLM

Because we operate in a zero-shot setting, carefully phrased prompts are essential for eliciting the desired behavior from the model. We therefore designed a series of text prompts to query DeepSeek for emotion and intent classification and experimented with multiple formulations to determine which yielded the most accurate and consistent responses. In the end, we issued two separate prompts per input: one requesting the dominant emotion and another requesting the inferred communicative intention. Emotion categories followed the Ekman taxonomy for the MELD dataset (see Section 4.2 below) and, for the IEMOCAP dataset, the same Ekman taxonomy plus excited and frustrated, while intentions were defined using the corpus authors’ annotation schema. All prompts instructed the model to output a single-word label indicating either an emotion or an intent.
For emotion recognition, without considering the context of the conversation, the prompt used was:
“In one word, choose between anger, excited, fear, frustrated, happy, neutral, sad, or surprised. Not a summary. What emotion is shown in the next text?: ‘…’”
We found that constraining the output to a single word and providing an explicit list of emotion labels often improved the consistency of the model’s answers. Adding “Not a summary” to the prompt is essential for obtaining a clean response and avoiding irrelevant information that makes it harder to extract the desired word.
In some variants, we provided conversation context and asked the model to label the emotion of each utterance, evaluating its ability to perform in-context learning across multiple turns.
For intent (dialog act) recognition, we used analogous prompts but with dialog act labels. For example, when we wanted the model to classify the intent of an utterance, we might prompt:
“In one word, choose between Greeting, Question, …, and Others. Not a summary. What dialogical act is shown in the next text?: ‘…’.”
In the case of classifying the emotion of each sentence, considering the context of the conversation, we used the following prompt:
“According to the conversation context, choose between anger, excitement, fear, frustration, happiness, neutral, sadness, or surprise. Answer each sentence in one word with a list and a corresponding number. Not a summary. What emotion is shown in each sentence?: ‘…’.”
In the prompt shown above, the phrases “According to the conversation context” and “Answer each sentence in one word with a list and a corresponding number” can be omitted; what is essential is the structure in which the conversation to classify is provided: each utterance must carry a sequential number (e.g., “1.—Joey—But then who? The waitress I went out with last month? 2.—Rachel—You know? Forget it! 3.—Joey—No-no-no-no, no!…”) so that the corresponding list is returned as expected.
However, in this study, we primarily focus on intent classification in conjunction with emotion (see Section 4.4 below).
For experiments where we provided cross-task information, the prompts were extended. For instance, to see if emotion context aids intent classification, we gave the model an utterance along with a known emotion label and asked for the intent. Conversely, to test if knowing the intent aids emotion recognition, we provided the dialog act label in the prompt and asked for the emotion. An example of the latter:
“According to the context of the conversation and its dialog act classification (given in parentheses), choose the emotion (fear, surprise, sadness, anger, joy, disgust, or neutral) of each sentence. Each sentence’s dialog act is provided. Not a summary. What emotion is shown in each sentence of the conversation? ‘…’.”
Here, the conversation lines were annotated with dialog act tags in parentheses, and the model was asked to output an emotion for each line.
In summary, our prompt engineering strategy was to specify the task, restrict the output format (for reliability), and supply additional context or options when needed. We did not perform exhaustive prompt tuning; rather, we aimed for reasonably straightforward prompts, under the assumption that an effective LLM should handle such direct instructions (this reflects a typical end-user approach).
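As an illustration only, the snippet below sketches how one such zero-shot emotion query could be issued. It assumes DeepSeek’s OpenAI-compatible API, so the base URL and model names shown here are assumptions to be checked against the provider’s current documentation rather than part of our method.

from openai import OpenAI

# Assumed endpoint and model names for DeepSeek's OpenAI-compatible API;
# verify against the provider's current documentation.
client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

EMOTION_PROMPT = ("In one word, choose between anger, excited, fear, frustrated, "
                  "happy, neutral, sad, or surprised. Not a summary. "
                  "What emotion is shown in the next text?: ")

def classify_emotion(utterance: str) -> str:
    # One zero-shot query per utterance; temperature 1 as in our experiments.
    response = client.chat.completions.create(
        model="deepseek-chat",  # e.g., "deepseek-reasoner" for the r1 variant (assumed name)
        temperature=1,
        messages=[{"role": "user", "content": f"{EMOTION_PROMPT}'{utterance}'"}],
    )
    return response.choices[0].message.content.strip().lower()

print(classify_emotion("I've been to the back of the line five times."))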

4.2. Datasets

We evaluated the model on the EMOTyDA (Emotion aware Dialogue Act) dataset (Saha et al. [34]), which was constructed by combining and reannotating two public dialogue datasets: IEMOCAP (Interactive Emotional Dyadic Motion Capture Database) and MELD (Multimodal Emotion Lines Dataset). Both are multimodal dialogue corpora with emotion labels; for our purposes, we used only the text transcripts and associated labels.
IEMOCAP [51] contains scripted and improvised two-person conversations performed by actors. It has about 152 dialogues and over 10,000 utterances (turns), each annotated by multiple annotators for emotion, in the following emotion categories: happy, sad, fear, disgust, neutral, angry, excited, frustrated, and surprise.
MELD [52] is derived from TV show transcripts (dialogues from the Friends series). It includes over 1400 dialogues and 13,000 utterances, labeled with seven emotion categories: joy, sadness, neutral, anger, fear, disgust, and surprise. To compare DS’s classifications against the gold annotations, we used the same emotion categories as those provided in the labels.
From these, EMOTyDA was formed by taking the text of each conversation along with its emotion labels (Saha et al. [34]). Additionally, for a subset of the data, each utterance was annotated with one of 12 dialog act (intent) categories: greeting, question, answer, statement-opinion, statement-non-opinion, apology, command, agreement, disagreement, acknowledge, backchannel, and other. The SWBD-DAMSL tag-set developed by [53] consists of 42 dialogue acts (DAs) and has been widely used for this classification task; Saha et al. [34] took it as the base for the EMOTyDA tag-set, since both datasets contain task-independent conversations. Of those 42 tags, only the 12 most common were used to annotate utterances in EMOTyDA, because EMOTyDA is smaller than the SWBD corpus and many of the SWBD-DAMSL tags never appear in it. These are the same categories used in the present study.
These dialogue act annotations were available or adapted from the original datasets, and we used those for consistency.
This unified dataset enabled us to test the model for emotion and intent classification using both corpora.
For evaluation, we considered each utterance in isolation or with its conversation context, depending on the experiment (see Section 4.4). The model’s predicted label was compared with the ground truth label to assess accuracy (see Section 4.3).

4.3. Metrics and Algorithms

We evaluated DeepSeek’s performance using various metrics. Several algorithms were implemented to obtain the values of the variables used in the formulas: Algorithm 1 computes each confusion matrix, Algorithm 2 obtains the True Positives, Algorithm 3 the True Negatives, Algorithm 4 the False Positives, and Algorithm 5 the False Negatives [54].
Algorithm 1 Confusion Matrix
Input: List of classes (listClass), original labels (y), model predictions (y′)
Output: confusionMatrix (Confusion Matrix)
  lenClass = len(listClass)
  confusionMatrix = lenClass × lenClass matrix of zeros
  for k = 0 to len(y) do
    i = listClass.index(y[k])
    j = listClass.index(y′[k])
    confusionMatrix[i][j] += 1
  end for
  return confusionMatrix
Algorithm 2 True Positives (TP)
Input: confusionMatrix, listClass, class
Output: truePositives
  k = listClass.index(class)
  truePositives = confusionMatrix[k][k]
  return truePositives
Algorithm 3 True Negatives (TN)
Input: confusionMatrix, listClass, class
Output: trueNegatives
  trueNegatives = 0
  k = listClass.index(class)
  for i = 0 to len(confusionMatrix[0]) do
    if k ≠ i then
      trueNegatives += confusionMatrix[i][i]
    end if
  end for
  return trueNegatives
Algorithm 4 False Positives (FP)
Input: confusionMatrix, listClass, class
Output: falsePositives
  falsePositives = 0
  k = listClass.index(class)
  for i = 0 to len(confusionMatrix[0]) do
    if k ≠ i then
      falsePositives += confusionMatrix[i][k]
    end if
  end for
  return falsePositives
Algorithm 5 False Negatives (FN)
Input: confusionMatrix, listClass, class
Output: falseNegatives
  falseNegatives = 0
  k = listClass.index(class)
  for i = 0 to len(confusionMatrix[0]) do
    if k ≠ i then
      falseNegatives += confusionMatrix[k][i]
    end if
  end for
  return falseNegatives
Next, we used specific formulas to obtain every metric:
Accuracy. Accuracy refers to the percentage of correct predictions [54].
$Acc = \dfrac{TP + TN}{TP + TN + FP + FN}$
Precision. Precision is understood as the fraction of values that belong to a positive class out of all of the values that are predicted to belong to the same class [54].
$Precision = \dfrac{TP}{TP + FP}$
Recall. Recall is equal to the number of correct predictions out of all the values that truly belong to the positive class [54].
$Recall = \dfrac{TP}{TP + FN}$
F1 score. The F1 score is the harmonic mean of precision and recall, with a value of 1 indicating perfect performance and 0 indicating no performance [54].
$F1 = \dfrac{2\,TP}{2\,TP + FP + FN}$
Macro-F1. Macro-F1 averages the F1 score across all classes equally [55].
$\text{Macro-}F1 = \dfrac{F1_{class_1} + F1_{class_2} + \cdots + F1_{class_N}}{N}$
Weighted F1 scores. Weighted (W) F1 accounts for class imbalance by giving more weight to frequent classes [55].
$\text{Weighted-}F1 = F1_{class_1} \cdot W_1 + F1_{class_2} \cdot W_2 + \cdots + F1_{class_N} \cdot W_N$
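For completeness, the following Python sketch consolidates Algorithms 1–5 and the formulas above into a single set of routines. The true-negative count follows Algorithm 3 as written (the diagonal entries of the other classes), and the weights W_i for the weighted F1 are assumed to be the class proportions in the gold labels.

from typing import List

def confusion_matrix(list_class: List[str], y: List[str], y_pred: List[str]) -> List[List[int]]:
    # Algorithm 1: rows are gold labels, columns are model predictions.
    n = len(list_class)
    m = [[0] * n for _ in range(n)]
    for gold, pred in zip(y, y_pred):
        m[list_class.index(gold)][list_class.index(pred)] += 1
    return m

def per_class_counts(m: List[List[int]], list_class: List[str], cls: str):
    # Algorithms 2-5 for one class.
    k = list_class.index(cls)
    tp = m[k][k]
    tn = sum(m[i][i] for i in range(len(m)) if i != k)   # Algorithm 3 as written
    fp = sum(m[i][k] for i in range(len(m)) if i != k)   # column k, other rows
    fn = sum(m[k][i] for i in range(len(m)) if i != k)   # row k, other columns
    return tp, tn, fp, fn

def f1_from_counts(tp: int, fp: int, fn: int) -> float:
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def macro_and_weighted_f1(m: List[List[int]], list_class: List[str], y: List[str]):
    # Macro-F1 averages per-class F1 equally; weighted F1 uses class proportions.
    f1s, weights = [], []
    for cls in list_class:
        tp, _, fp, fn = per_class_counts(m, list_class, cls)
        f1s.append(f1_from_counts(tp, fp, fn))
        weights.append(y.count(cls) / len(y))
    macro = sum(f1s) / len(f1s)
    weighted = sum(f * w for f, w in zip(f1s, weights))
    return macro, weighted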

4.4. Experimental Settings

We implemented a Python script to interface with DeepSeek models (DeepSeek-R1-0528 and DeepSeek-V3-0324, accessed in June 2025) via their API (simulating user queries to the model). All evaluations were done in a zero-shot manner; no few-shot examples or in-context demonstrations were used—no model fine-tuning was performed, only prompt-based queries. We used the EMOTyDA dataset, comprising 9420 instances from the IEMOCAP dataset (151 conversations) and 9988 cases from the MELD dataset (943 conversations). All prompts were submitted to DS with its standard temperature set to 1 and the default max output of 32k tokens. Each input was queried once. For future replication, full prompt logs are available: https://github.com/Emmanuel-Castro-M/EmotionAndIntentionRecognition (accessed on 24 September 2025).
We conducted three main experiments for emotion recognition and three for intention recognition, for each corpus (IEMOCAP and MELD):
  • Classification at the utterance level (no conversational context): The model was given each utterance independently, with a prompt asking for the classification (emotion or intention, separately). This setting mimics classifying each sentence in isolation, without conversational context.
  • Classification with conversational context: Utterances were presented to the model within their dialogue, and the model was asked to output an emotion or intention for each utterance. This tests whether providing context (preceding utterances) improves per-utterance classification.
  • Classification with conversational context and the counterpart label (dialog act or emotion): Here, in addition to the conversation context, we provided the model with each utterance’s human-annotated dialog act label (for emotion classification) or with its emotion label (for dialog act classification), and then asked for the classification. This condition assesses whether an additional hint about the counterpart aspect can enhance the prediction.
We calculated overall accuracy as the primary metric since the class distributions were moderately balanced. (In cases of class imbalance, we planned to consider F1-scores per class, but for simplicity and given our focus on overall trends, we mainly report accuracy.)
We used the gold labels provided by human annotators as ground truth. The model predictions were compared with these labels using accuracy and F1 scores. No additional human annotation was performed.
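Putting the pieces together, the following sketch shows how the three conditions could be run and scored for one corpus. It is a minimal sketch under the assumption that each conversation is stored as a list of (utterance, gold label, counterpart label) triples; the classifiers passed in would wrap the f1/f2/f3 mappings of the method formulation with the prompts of Section 4.1.

def accuracy(gold, pred):
    # Fraction of predictions matching the gold labels (our primary metric).
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def run_conditions(conversations, classifiers):
    # conversations: list of dialogues, each a list of (utterance, gold_label, counterpart_label)
    # classifiers: dict mapping a condition name ("baseline", "context", "context+counterpart")
    #              to a function (utterances, counterpart_labels) -> predicted labels
    gold = []
    preds = {name: [] for name in classifiers}
    for conv in conversations:
        utts = [u for u, _, _ in conv]
        golds = [g for _, g, _ in conv]
        counterparts = [c for _, _, c in conv]
        gold.extend(golds)
        for name, clf in classifiers.items():
            preds[name].extend(clf(utts, counterparts))
    return {name: accuracy(gold, p) for name, p in preds.items()}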

5. Results

5.1. Emotion Recognition Performance

We first evaluate DeepSeek’s (DeepSeek-V3-0324 and DeepSeek-R1-0528) emotion classification on both datasets and then assess its intention classification. In each case, the single-label response generated by the model was extracted, parsed, and matched against the predefined class labels. The IEMOCAP portion obtained from EMOTyDA originally comprised 9420 utterances, but 2354 lacked a valid emotion label (marked as xxx in the original dataset); these sentences were therefore removed from some tables in the final results. In addition, only two sentences were annotated as disgust, so they were removed from the evaluation to avoid distorting the metric values.
The model’s performance on emotion classification is summarized in Table 1 and Table 2, and its intention classification in Table 3 and Table 4. For further detail, Tables A1–A12 report the metric values for these experiments, and Tables A13–A35 show the corresponding confusion matrices.

5.1.1. IEMOCAP on Emotion Recognition

At the utterance level (baseline), without context and under zero-shot conditions, DeepSeek attempted to classify seven emotions plus the label neutral, achieving very low performance. As shown in the baseline condition of Table 1, DeepSeek-v3 (DS-v3) obtained only 36% accuracy on IEMOCAP utterances, while DeepSeek-r1 (DS-r1) reached 37% on the same task. The confusion matrices, Table A13 and Table A14, indicate that the model had a bias toward classifying many utterances as frustrated even when the true label was different. For example, mild anger in text was often labeled as frustrated.
In the context-level setting (providing the additional context of each conversation), DS-v3 achieved an overall accuracy of approximately 47%, and DS-r1 performed slightly better, achieving 51%. The confusion matrices, Table A15 and Table A16, show that the model performed best on clearly expressed emotions such as neutral, anger, and sadness, but struggled to distinguish excited from happy.
When DS-v3 was provided with the whole dialogue context (the preceding and following utterances of a target line) plus the DA labels and asked to classify each line in context, its accuracy improved modestly from 47% to 49%; under the same conditions, DS-r1 stepped back from 51% to 48%. This suggests that conversation context plus DAs helped DS-v3 infer emotions in some ambiguous cases (for example, understanding whether a short utterance such as “Okay.” is meant positively or negatively by seeing the previous turn), but did not help DS-r1 on this occasion (further analysis is needed to find a satisfactory explanation).

5.1.2. MELD on Emotion Recognition

Zero-shot performance on MELD without context was low, including for the neutral emotion and the more fine-grained negative emotions. Table 2 shows that DS-v3 obtained only 44% accuracy on MELD utterances under these conditions, while DS-r1 obtained 54%. A likely factor is the presence of the seven emotions, including neutral, which the model frequently over-predicted. The confusion matrices, Table A19 and Table A20, describe similar behavior, indicating that the model was biased toward classifying many utterances as anger or joy even when the true label was different; for example, disgust in text was often labeled as anger. Including conversational context significantly improved MELD emotion recognition: with the complete dialogue context, DS-v3’s accuracy increased to 57%. The model benefited from context to correctly identify emotions such as sadness, surprise, and sarcasm-related anger, which require understanding of the situation across multiple utterances. The confusion matrices for the context condition, Table A21 and Table A22, show fewer confusions; for instance, with context, the model distinguished surprise from anger more reliably. On this task, DS-r1 obtained 62% accuracy, 8 percentage points better than its previous performance.
When we also provided the dialog act label for each utterance, DS-v3’s accuracy further increased to 62%, as shown in Table 2. This supports our hypothesis that understanding the intent can aid emotion classification. In particular, we observed that when an utterance was labeled as a question or sarcasm (disagreement), as shown in Table A23, the model was more likely to assign the correct emotional tone (e.g., distinguishing a neutral factual question from one asked in an angry rhetorical manner). The improvement from 57% to 62% is modest but consistent across multiple emotion categories. Similarly, DS-r1’s accuracy rose to 63%, surpassing its previous performance. Its confusion matrix, Table A24, indicates a bias toward classifying utterances as disgust when the true label was anger, or as joy when it was neutral.
Table 2 summarizes the MELD emotion results for the three conditions for each model, DS-v3 and DS-r1. The trend indicates that conversation context and dialog act cues can enhance an LLM’s emotion detection performance.

5.2. Intent (Dialog Act) Recognition Performance

We evaluated DeepSeek’s (DS-v3 and DS-r1) ability to classify user intents (dialogue acts) using EMOTyDA (reannotated by [47] with dialogue acts alongside the emotion annotations). Table 3 presents the results of classifying intents under different conditions: baseline, context, and context plus emotions. Note that EMOTyDA’s dialog act label set is quite detailed (12 categories: Greeting, Question, Answer, Statement-Opinion (Stat. Op.), Statement-Non-Opinion (Stat. No Op.), Apology, Command, Agreement, Disagreement (Disagreem.), Acknowledge (Acknow.), Backchannel (Backch.), and Other).

5.2.1. IEMOCAP on Intent Recognition

In the utterance-only baseline setting (without context), DS-v3 achieved about 44% accuracy on intent classification (Table 3), while DS-r1 achieved 45%. Both models often confused specific pairs of DAs; for example, they struggled to correctly distinguish Agreement, Backchannel, or Greeting from Acknowledge (Table A25 and Table A26). These results indicate that the LLM required more information to achieve better performance: without context and speaker roles, it is difficult for the LLM to accurately recognize the user’s intent, given its intrinsic limitations and the complexity of human communication.
Adding the conversation context improved the classification results. The accuracy under these conditions was 56% for DS-v3, an improvement over the baseline. The model improved significantly at identifying Answer, since a question often receives an answer in context, and also performed better on Greeting (Table A27). DS-r1 performed better than in the previous experiments, obtaining an accuracy of 61%; it improved at identifying Apology and Question, since a question often receives an answer in context and vice versa, and classified the Command class better than before (Table A28).
In the third experiment, in addition to the context, the emotion label of each sentence was also provided to support the classification. Contrary to expectations, DS-v3’s performance was poorer than in the previous experiment, with an accuracy of 53% instead of 56%, as shown in Table 3; the confusion matrix in Table A29 shows increased confusion between Answer and Stat. Non Op., as well as between Others and Backchannel. DS-r1 maintained similar values under these conditions, with an accuracy of 61%; it mainly confused Greeting with Acknowledge (Table A30), but its performance in identifying Question was higher than in the previous runs.

5.2.2. MELD on Intent Recognition

In the utterance-only setting (no context), DS-v3 achieved about 35% accuracy on intent classification (Table 4) and DS-r1 about 45%. Many utterances in MELD are short (e.g., “Yeah.”, which could be Acknowledge, Backchannel, or Agreement), making the task inherently challenging even for humans without context. The models tended to confuse specific pairs: for example, they often misclassified polite requests as statements, or struggled to detect rhetorical questions as questions (confusion matrices in Table A31 and Table A32). The baseline indicates that the LLM without additional information performs worse on intent than it did on emotions in isolation.
Adding conversational context (the preceding turn or the entire dialog) improved intent classification to around 50% accuracy for DS-v3 and 55% for DS-r1. This improvement is more modest than what we saw with emotions. The model did better at identifying Questions (since a question often receives an answer in context) and Answers, and at distinguishing Backchannels (like “uh-huh”) when it could see the surrounding speech. However, some intents, such as Statement-Opinion vs. Statement-Non-Opinion, remained difficult, as context did not always clarify whether a statement was intended as factual or opinionated.
Finally, when we provided the model with the ground-truth emotion label for each utterance (context + emotion condition), we expected an improvement if our hypothesis held. Interestingly, DS-v3’s performance slightly decreased to 48% in this condition (a drop of two points, within the margin of error). Essentially, giving the emotion did not significantly help or hurt; it appears to have introduced a slight inconsistency in model responses. For instance, knowing an utterance was labeled with an emotion like anger sometimes led the model to incorrectly assume the dialog act was a Complaint or Disagreement, even if it was actually just an angry question. In other cases, the emotional information was irrelevant. The slight drop to 48% suggests that, at least for this LLM, emotion cues did not assist intent recognition and occasionally misled it. DS-r1 remained almost the same as in the previous experiment (Table A35), with an accuracy of 55%.
In summary, DeepSeek-v3’s intent recognition hovered around 50% accuracy in context conditions, and 55% for DeepSeek-r1. Conversation context provided a considerable benefit, but giving emotion information did not improve intent detection accuracy. This asymmetrical result is notable: it implies that while context and intent cues benefit emotion classification, emotion cues alone did not meaningfully benefit intent classification for the model. We discuss possible reasons for this in Section 7.
Qualitatively, we observed the model sometimes making interpretable errors in intent classification. For example:
Dialogue snippet: A: “Could you BE any later?” B: “Sorry, traffic was terrible.”
Model output for A’s line: Question (expected label: Disagreement/Sarcasm).
The model treated the utterance as a literal question. This highlights the limitation that detecting non-literal intents (such as sarcasm or rhetorical questions) is difficult without specialized handling, despite emotion cues (speaker A is likely annoyed, which the model might recognize, but it still labels the form as a question).
Another example:
Utterance: “Yes, that’s exactly what I meant.” Model output: Agreement (expected label: Agreement).
This straightforward case was handled correctly. Given these results, we proceed to analyze and discuss their implications.

6. Comparing the Results with ChatGPT-4 and Gemini-2.5

To contextualize DeepSeek’s performance, we additionally tested GPT-4 and Gemini-2.5 on the same zero-shot tasks using identical prompts. Results (Table 5) show that DeepSeek-r1 achieves accuracy comparable to Gemini and superior to GPT-4 on emotion detection, confirming its competitive standing among current LLMs.
We used similar conditions to evaluate GPT-4 and Gemini-2.5 for DA classification. Results (Table 6) show that Gemini achieves an accuracy slightly higher than DeepSeek-r1 (DS-r1) and that the performance of DS-r1 was comparable to GPT-4 on DAs classification.

7. Discussion

Our experiments highlight an asymmetric relationship between emotion and intent detection in large language models. DeepSeek demonstrated moderate zero-shot performance, but its accuracy increased substantially when conversational context was available, and further improved when dialog act labels were supplied. This confirms that contextual and intentional cues are strong predictors of emotional tone. In contrast, providing emotion labels did not enhance intent recognition and sometimes misled the model, suggesting that emotions are not equally reliable indicators of communicative function.
This asymmetry is theoretically significant: while dialog acts often imply emotional tendencies (e.g., apologies correlate with sadness, disagreements with anger), emotions alone do not comparably constrain intent categories. Such findings suggest a structural imbalance in the interaction between affect and pragmatics, a phenomenon that is not widely reported in the current literature. They also highlight a limitation of zero-shot LLMs: although they capture affective nuance, they are less consistent in mapping emotions to communicative goals without explicit training.
Beyond accuracy scores, this study emphasizes the importance of designing evaluation frameworks that test both dimensions of dialogue simultaneously. By openly releasing our prompts and methodology, we provide a reproducible benchmark for exploring the affect–intent interplay in other models.
Emotion Detection: DeepSeek demonstrated a reasonable zero-shot ability to classify emotions from text, especially when provided with conversation context. The improvement in DS-v3 from 44% to 57% accuracy due to context, and then to 62% with dialog act cues, and in DS-r1 from 54% to 62%, then to 63%, underscores the importance of contextual understanding. This suggests that LLMs like DeepSeek can effectively leverage additional information. When the conversation’s flow or the nature of the utterance is known, the model can disambiguate emotions that might be neutral or ambiguous out of context. For instance, the model often failed to detect surprise or sarcasm without context, but with preceding lines, it could infer those emotions from an unexpected turn in the dialogue.
The positive impact of including dialog act labels on emotion classification (a five percentage point gain on MELD, from 57% to 62%) provides empirical support for our hypothesis in one direction: knowing what someone is doing (question, apologizing, etc.) helps the model figure out how they feel. Why might this be the case? Specific dialog acts carry implicit emotional connotations—an apology often correlates with regret or sadness; a disagreement may correlate with anger or frustration; a question might be asked in a curious (neutral/happy) tone or a challenging (angry) tone depending on context. By giving the model the dialog act, we essentially narrowed down the plausible emotions. The model could then map, say, a Disagreement act to a likely negative emotion such as anger or disgust, rather than considering all emotion categories. These findings may aid the development of empathy agents for mental health support, education, or customer interaction in low-resource settings, potentially benefiting the community.
Intent Detection: In contrast, providing emotion information did not aid intent classification. One interpretation is that emotion is a less reliable predictor of dialog act in our data. A user could be angry when asking a question or happy when making a statement—emotion does not deterministically indicate the functional intent of an utterance. The slight performance drop suggests that the model might have over-relied on emotion cues when they were present, leading to misclassifications (e.g., it might assume an angry utterance is a disagreement when in fact it was an angry question). This points to a limitation: the model does not inherently know when to separate style (emotion) from speech act (intent), and additional information can sometimes confuse style with content.
Another observation is that DeepSeek’s overall intent classification accuracy (61%) is modest. This is perhaps not surprising—dialog act classification can be quite nuanced, and our label set was large. Additionally, the model was not fine-tuned on any dialog act data; it relied solely on its pre-training. Its performance is roughly comparable to random guessing among a handful of dominant classes (since many utterances are statements or questions, which the model did get right). This indicates that while LLMs have implicit knowledge of language patterns, translating that into explicit dialog act labels may require either few-shot examples or fine-tuning. Indeed, prior work on ChatGPT (a similar LLM) has noted it can follow conversation flows but might not explicitly categorize them without guidance.
Logical Consistency and Errors: There were a few logical inconsistencies in the model outputs. For example, in one scenario the model assigned conflicting emotions to consecutive utterances from the same speaker, likely because it treated each utterance in isolation and did not enforce temporal consistency of that speaker’s emotional state. This highlights that the LLM does not maintain a persistent “persona state” of emotion unless it is explicitly modeled. In future work, incorporating a constraint that a speaker’s emotion should not oscillate wildly within a short exchange might yield more realistic output.
We also noticed that ambiguous expressions like “Fine.” could be either neutral or negative; the model’s guess would sometimes flip depending on prompt phrasing. This suggests some instability typical of prompt-based LLM responses. More advanced prompt techniques or calibration might be needed for critical applications.
Implications for Affective Dialogue Systems: Our findings indicate that LLMs are promising as all-in-one classifiers for affect and intent, but their raw zero-shot capability might not yet match specialized models. For building empathetic chatbots, one could imagine using an LLM like DeepSeek to detect user emotion in real time and craft responses accordingly. The advantage is that the LLM does not require separate training for the detection task. However, as we saw, it reaches around 60% accuracy, whereas task-specific models can exceed 80% on some emotion benchmarks. There is, therefore, a trade-off between convenience (using one model for everything) and accuracy.
For intent detection in conversational AI, relying on an LLM’s internal knowledge might be risky if high precision is needed (for example, correctly interpreting a user’s request vs. a question can be critical). Fine-tuning an LLM on annotated intents or providing few-shot exemplars in the prompt could likely boost performance.
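As a concrete illustration of this direction, the following sketch shows how few-shot exemplars could be placed in the prompt when querying DeepSeek through an OpenAI-compatible client. The endpoint URL, model identifier, and exemplars are assumptions based on DeepSeek’s published API conventions and would need to be adapted to the actual deployment.

# Sketch of few-shot dialog-act classification through an OpenAI-compatible client.
# The endpoint URL and model name below are assumptions and may need to be adapted.
from openai import OpenAI

client = OpenAI(api_key="YOUR_KEY", base_url="https://api.deepseek.com")  # assumed endpoint

FEW_SHOT = [
    ("Could you pass me the salt?", "Command"),
    ("I'm so sorry, I didn't mean that.", "Apology"),
]

def classify_dialog_act(utterance, labels):
    messages = [{"role": "system",
                 "content": "Classify the dialog act of the utterance. "
                            f"Answer with one of: {', '.join(labels)}."}]
    for text, act in FEW_SHOT:  # few-shot exemplars placed in the prompt
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": act})
    messages.append({"role": "user", "content": utterance})
    response = client.chat.completions.create(model="deepseek-chat",  # assumed model id
                                              messages=messages, temperature=0)
    return response.choices[0].message.content.strip()

print(classify_dialog_act("Did you get the letter?", ["Question", "Statement-Opinion", "Command"]))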
Error Analysis: The hypothesis that “emotions help detect intentions and vice versa” was only half-supported by our results. Emotions (especially when combined with context) did help detect intentions in some manual observations; for instance, when the model knew the user was laughing (happy), it correctly recognized a statement as a joke (a kind of intentional act). However, these were isolated cases, and the automated evaluation did not show an aggregate improvement.
This asymmetry might stem from the mapping from intent to likely emotion being more direct than from emotion to intent. Intent categories are numerous and orthogonal to emotion in many cases, whereas emotions typically fall into a smaller set and can align with broad intent tones (e.g., anger often accompanies disagreement, sadness often accompanies apologetic statements, etc.). Our LLM could perhaps leverage intent cues to narrow down emotion options, but when given emotion, it still had to choose among many possible intents.
Some of the most common errors occurred when the conversational context was not provided to the LLM: in both datasets it confused closely related emotions, predicting anger instead of frustration, excited instead of happy, or disgust instead of anger. For example, consider the following sentence from the IEMOCAP dataset: “I’ve been to the back of the line five times.”, annotated as anger but predicted by the model as frustration. Once the context was provided, the model made the correct prediction: knowing that the previous sentence was “There’s nothing I can do for you. Do you understand that? Nothing.”, itself annotated as anger, it becomes clear that the target utterance is a reaction of anger directed at the previous speaker, who is perceived as hostile, whereas frustration tends to be more generalized or self-directed.
In another example, the sentence “Yes. There’s a big envelope, it says, you’re in. I know.” was annotated as excited but predicted as happy in the baseline condition. Once the context was provided, the model made the correct prediction: given the preceding sentence “Did you get the letter?”, it could recognize that the emotional state is an intense, temporary reaction linked to anticipation of or enthusiasm for a specific event, so excited is the correct label.
Consider also the sentence “Oh my god, what were you thinking?” from the MELD dataset, annotated as disgust and initially predicted by the model as anger. Without the conversational context the model struggled to make an accurate prediction; with the context provided, it changed its classification to disgust, the correct label. The reason is clear from the two preceding utterances: Monica’s “Joey, this is sick, it’s disgusting, it’s, it’s—not really true, is it?”, which clearly expresses disgust, and Joey’s reply “Well, who’s to say what’s true? I mean…”. The target sentence is Monica’s response to Joey, and from the flow of the conversation there is no reason to think that her emotional state shifted from disgust to anger. This is how context helps the model make better predictions.
Limitations: While our study sheds light on several aspects of emotion and intention recognition using DeepSeek, certain limitations should be acknowledged. The MELD and IEMOCAP datasets differ in their labeling criteria: although they share similar label sets, their annotation guidelines elicited different emotion judgments from human annotators, which may have introduced discrepancies into the results. In addition, results for emotion and intent detection obtained on MELD-style data (scripted TV-show dialogues) might differ in more formal conversations or other contexts.
Another limitation of our study is that DeepSeek is a single model, and its behavior may not generalize to all LLMs; newer or larger models might have different capabilities. Studies with other LLMs could also help explain why, in some cases, providing additional information (the emotion label for each sentence) does not improve DA prediction performance.
Also, our prompt designs, while reasonable, could potentially be optimized. It is possible that different wording (or the use of few-shot examples) could significantly improve performance on either task; we did not exhaustively tune the prompts due to scope constraints.
Finally, although we report comparisons with GPT-4 and Gemini-2.5 under the context and context-plus-cue conditions (Tables 5 and 6), a broader comparison across additional models and prompting regimes was not completed; the detailed analysis in this study therefore focuses on DeepSeek. Future work should compare multiple models on the same tasks to see whether they behave similarly or whether some handle the affect-intent interplay better.

8. Conclusions and Future Work

In this paper, we present a study on using large language models (DeepSeek-v3 and DeepSeek-r1) for emotion and intent detection in dialogues. Our evaluation, conducted without any task-specific fine-tuning, yielded several insights:
DeepSeek can recognize basic emotions from text with moderate accuracy in a zero-shot setting. Its performance improves substantially when given conversational context, highlighting the model’s strength in understanding dialogue flow.
Providing the model with information about the conversational intent (dialog act) of an utterance further enhanced emotion recognition, suggesting a synergistic effect where knowing “what” the utterance is doing helps determine “how it is said” emotionally.
For intent recognition, the model’s zero-shot performance was weaker (roughly comparable to guessing among the few dominant classes across a broad set of dialog act labels). Unlike the emotion case, providing the model with the speaker’s emotion did not assist, and occasionally confused, the intent classification.
The relationship between emotions and intentions is asymmetric in the context of this LLM: context and intent cues aid emotion detection, but emotion cues do not significantly aid intent detection.
The DeepSeek model, while powerful, sometimes struggled with non-literal language (e.g., sarcasm) and maintaining consistency, indicating areas for further refinement if used in practical systems.
In terms of academic contribution, our work demonstrates the feasibility of leveraging a single LLM for multiple dialogue understanding tasks simultaneously. This opens the door to developing more unified conversational AI systems. Rather than having separate pipelines for intent detection and sentiment analysis, a single model could potentially handle both, simplifying the architecture.
Future Work: There are several directions for future exploration:
1. Few-shot Prompting: We will experiment with providing a few examples of labeled emotions and intents in the prompt (in-context learning) to see if DeepSeek’s performance improves. Preliminary research on models like GPT-3/4 suggests that few-shot examples can dramatically boost accuracy.
2. Model Fine-tuning: Fine-tuning DeepSeek on a small portion of our dataset for each task might yield significant gains. It would be interesting to quantify how much fine-tuning data is required to reach parity with dedicated models.
3. Multi-task Learning: An extension of fine-tuning is to train a model on both emotion and intent labels jointly (multi-task learning). This could encourage the model to learn representations that capture both aspects internally. We hypothesize that multi-task training might enforce the kind of beneficial relationship between emotion and intent that we partially observed.
4. Applying to Real-world Conversations: We intend to test the model on more spontaneous, real user conversations (such as dialogues from customer support chats or social media threads). These often use noisier language, which would test the model’s robustness.
5. Incorporating External Knowledge: Emotions and intents might be better inferred if a model had access to external knowledge about typical scenarios. For instance, recognizing that “I’m fine!” with a particular punctuation is likely anger or sarcasm might be improved by knowledge distillation or rules. Hybrid systems combining LLMs with rule-based disambiguation for specific, tricky cases could be fruitful.
6. Improving Intent Granularity: Our results showed difficulty in fine-grained intent categories. Collapsing intents into broader classes (e.g., question, statement, command) might yield higher reliability. Future work could focus on whether LLMs are better suited to broad categorization and on refining them for detailed subclasses.
7. User State Tracking: One promising area is to have the LLM maintain a running estimate of a user’s emotional state throughout a conversation (rather than independent per-utterance classification). This could potentially smooth out moment-to-moment classification noise and provide a more stable assessment. Techniques from state tracking in dialogue could be applied here; a minimal sketch of this idea appears after this list.
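The following minimal sketch illustrates the state-tracking idea in item 7: an exponentially weighted running distribution over emotion labels, kept per speaker, smooths noisy per-utterance predictions. The decay factor, label set, and class name are illustrative choices, not an implemented component of this study.

# Sketch of per-speaker emotion state tracking (item 7). The decay factor and
# label set are illustrative assumptions, not part of our experimental pipeline.
from collections import defaultdict

EMOTIONS = ["anger", "disgust", "fear", "joy", "neutral", "sadness", "surprise"]

class SpeakerEmotionTracker:
    def __init__(self, decay=0.6):
        self.decay = decay  # weight kept from the previous state
        self.state = defaultdict(lambda: {e: 1.0 / len(EMOTIONS) for e in EMOTIONS})

    def update(self, speaker, predicted_label):
        dist = self.state[speaker]
        for e in EMOTIONS:  # blend the old distribution with the new hard prediction
            dist[e] = self.decay * dist[e] + (1 - self.decay) * (1.0 if e == predicted_label else 0.0)
        return max(dist, key=dist.get)  # smoothed label for this turn

tracker = SpeakerEmotionTracker()
for speaker, label in [("Monica", "disgust"), ("Joey", "neutral"), ("Monica", "anger")]:
    print(speaker, tracker.update(speaker, label))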
In conclusion, this work examined the capacity of DeepSeek to recognize emotions and intentions in dialogues under zero-shot conditions. The results show that the model can achieve reasonable emotion recognition when provided with context and dialog act cues. In contrast, intent recognition remains weaker and does not consistently benefit from emotional information.
The central contribution is the identification of an asymmetric relationship: intent knowledge helps disambiguate emotions, but emotion knowledge does not aid intent classification to the same extent. This finding advances our understanding of the limits of current LLMs and opens new perspectives for building empathetic and context-aware conversational systems.
Future research should test whether few-shot prompting, fine-tuning, or multi-task learning can enforce a more balanced integration of affect and pragmatics. Applying these methods to real-world conversational data, beyond scripted corpora, will also be crucial for validating the robustness of the proposed approach.
We acknowledge that, although accuracy improved (e.g., from 57% to 62% for emotion classification on MELD, with similar gains in other settings), no statistical significance test (e.g., McNemar’s test or a bootstrap test) was conducted. Future work should verify these differences statistically.
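As a pointer for such an analysis, the sketch below applies McNemar’s exact test to paired predictions from two prompting conditions scored on the same utterances. The label vectors shown are illustrative placeholders, not our experimental outputs.

# Sketch of a paired significance check (McNemar's exact test) between two prompting
# conditions evaluated on the same utterances. Input vectors are placeholders.
from scipy.stats import binomtest

def mcnemar_exact(gold, pred_a, pred_b):
    """Two-sided exact McNemar test on items where the two conditions disagree."""
    b = sum(g == a and g != p for g, a, p in zip(gold, pred_a, pred_b))  # A right, B wrong
    c = sum(g != a and g == p for g, a, p in zip(gold, pred_a, pred_b))  # A wrong, B right
    if b + c == 0:
        return 1.0
    return binomtest(b, b + c, 0.5).pvalue

gold   = ["joy", "anger", "neutral", "sadness", "joy"]
no_ctx = ["joy", "neutral", "neutral", "neutral", "joy"]
ctx    = ["joy", "anger", "neutral", "sadness", "neutral"]
print(mcnemar_exact(gold, no_ctx, ctx))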
We also recognize that using different experimental approaches is crucial for thoroughly validating our work. Future studies will include comparisons with transformer-based and classifier architectures to better demonstrate the strength and generalizability of our results.

9. Ethical Considerations

The confidence placed in a dialogue system that automatically analyzes a user’s intentions and emotions and then makes decisions based on that analysis, or generates emotionally appropriate utterances, raises many challenges, including risks of misinterpretation, user dependency, and privacy concerns.
These challenges highlight the need to mitigate risks and ensure the proper use of such systems, with clear responsibilities and appropriate safeguards. Such systems must also provide benefits that clearly outweigh their potential harms. One way to reduce potential harm is to provide human supervision to the extent possible.
Because some users perceive dialogue systems (chatbots) as more human-like and even conscious, increased anthropomorphism can lead them to engage in relationships with these systems and to form stronger emotional attachments [56]. Whether such relationships are beneficial depends on their nature and on the user’s pre-existing social needs, and this must be considered carefully.

Author Contributions

Conceptualization, E.C., H.C. and O.K.; methodology, E.C.; formal analysis, E.C., H.C. and O.K.; investigation, E.C.; writing—original draft preparation, E.C. and H.C.; writing—review and editing, E.C. and H.C.; visualization, O.K.; supervision, O.K.; funding acquisition, H.C. All authors have read and agreed to the published version of the manuscript.

Funding

Supported by the Instituto Politécnico Nacional (COFAA, SIP-IPN, Grants SIP 20250015 and 20253468) and the Mexican Government (SECIHTI, SNII).

Data Availability Statement

The original data presented in the study are openly available in [GitHub] at [https://github.com/sahatulika15/EMOTyDA] (accessed on 15 May 2025).

Acknowledgments

The authors wish to thank the support of the Instituto Politécnico Nacional.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LLM	Large language models
IEMOCAP	Interactive Emotional Dyadic Motion Capture Database
MELD	Multimodal EmotionLines Dataset
AI	Artificial Intelligence
DS	DeepSeek
ED	Emotion detection
DA	Dialog Acts
EMOTyDA	Emotion-aware Dialogue Act
Stat. Op.	Statement-Opinion
Stat. No Op.	Statement-Non-Opinion
Disagreem.	Disagreement
Acknow.	Acknowledge
Backch.	Backchannel

Appendix A

Table A1. Classification reports emotions DS-v3 and DS-r1 (baseline) IEMOCAP.
Emotion | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Support
anger0.500.610.310.240.380.351034
excited0.680.520.130.230.220.32999
fear0.070.040.340.370.110.0738
frustrated0.370.370.580.600.450.461714
happy0.210.260.470.210.290.23563
neutral0.440.400.270.050.330.091613
sad0.430.500.380.250.400.331011
surprise0.160.110.280.440.200.1794
accuracy 0.360.377066
macro avg0.360.350.340.300.300.25
weighted0.440.440.360.290.360.30
Table A2. Classification reports emotions DS-v3 and DS-r1 (context) IEMOCAP.
Emotion | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Support
anger0.540.710.550.420.550.531034
excited0.750.700.270.330.390.45999
fear0.170.090.420.580.240.1538
frustrated0.530.500.310.460.390.481714
happy0.250.430.620.370.360.40562
neutral0.480.470.590.750.530.581613
sad0.570.650.590.540.580.591011
surprise0.270.210.460.540.340.3095
accuracy 0.470.517066
macro avg0.450.470.480.500.450.43
weighted avg0.530.560.470.510.470.51
Table A3. Classification reports emotions DS-v3 and DS-r1 (context + DAs) IEMOCAP.
Emotion | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Support
anger0.720.720.370.390.490.511034
excited0.740.750.270.280.400.41999
fear0.180.100.470.550.260.1638
frustrated0.480.470.440.440.460.451714
happy0.330.380.570.420.420.40563
neutral0.460.440.690.730.550.551613
sad0.570.630.530.490.550.551011
surprise0.250.250.470.560.330.3494
accuracy 0.490.487066
macro avg0.470.470.480.480.430.42
weighted0.540.550.490.480.480.48
Table A4. Classification reports emotions DS-v3 and DS-r1 (baseline) MELD.
Emotion | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Support
anger0.280.470.680.460.390.461109
disgust0.190.170.270.480.220.26271
fear0.280.210.280.520.280.30268
joy0.390.500.710.660.510.571743
neutral0.900.860.310.520.460.654709
sadness0.340.370.410.440.370.40683
surprise0.500.490.420.620.460.551205
accuracy 0.440.549988
macro avg0.410.440.440.530.380.46
weighted0.620.640.440.540.440.56
Table A5. Classification reports emotions DS-v3 and DS-r1 (context) MELD.
Emotion | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Support
anger0.400.560.640.560.500.561109
disgust0.250.270.310.460.280.34271
fear0.290.220.370.560.330.31268
joy0.510.560.700.640.590.601743
neutral0.860.790.530.680.650.734709
sadness0.350.500.570.410.430.45683
surprise0.590.650.560.610.570.631205
accuracy 0.570.629988
macro avg0.460.510.530.560.480.52
weighted0.650.660.570.620.580.63
Table A6. Classification reports emotions DS-v3 and DS-r1 (context + DAs) MELD.
Emotion | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Support
anger0.470.570.580.510.520.541109
disgust0.340.280.300.440.320.35271
fear0.350.240.350.550.350.33268
joy0.560.580.650.610.600.591743
neutral0.780.750.690.720.730.734709
sadness0.400.530.550.430.460.47683
surprise0.660.670.550.580.600.621205
accuracy 0.620.639988
macro avg0.510.520.520.550.510.52
weighted0.640.650.620.630.630.63
Table A7. Classification reports DAs DS-v3 and DS-r1 (baseline) IEMOCAP.
Dialogue Act | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Support
Acknowledge0.040.040.590.430.080.0756
Agreement0.290.310.220.310.250.31507
Answer0.580.720.050.040.090.081434
Apology0.420.410.960.950.590.5775
Backchannel0.360.360.340.410.350.38305
Command0.310.340.860.830.460.49350
Disagreement0.170.150.600.550.260.24373
Greeting0.330.350.270.220.290.2760
Others0.720.180.490.510.580.2770
Question0.850.880.760.770.800.821945
Stat. Non Op.0.540.520.270.340.360.412172
Stat. Opinion0.470.480.550.490.510.482073
accuracy 0.440.459420
macro avg0.420.390.500.490.390.37
weighted0.550.570.440.450.440.45
Table A8. Classification reports DAs DS-v3 and DS-r1 (context) IEMOCAP.
Dialogue Act | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Support
Acknowledge0.050.070.390.380.080.1256
Agreement0.580.560.280.460.380.50507
Answer0.860.900.200.320.320.471434
Apology0.650.740.960.830.780.7875
Backchannel0.580.580.490.390.530.47305
Command0.460.400.710.750.560.52350
Disagreement0.450.560.490.580.470.57373
Greeting0.390.450.270.250.320.3260
Others0.540.360.760.710.630.4870
Question0.910.870.800.890.850.881945
Stat. Non Op.0.520.580.430.500.470.542172
Stat. Opinion0.460.520.770.710.580.602073
accuracy 0.560.619420
macro avg0.540.550.540.560.500.52
weighted0.640.660.560.610.550.61
Table A9. Classification reports DAs DS-v3 and DS-r1 (context + emotions) IEMOCAP.
Dialogue Act | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Support
Acknowledge0.050.080.430.360.100.1356
Agreement0.600.550.180.490.270.52507
Answer0.890.900.100.330.170.491434
Apology0.660.750.870.790.750.7775
Backchannel0.540.590.500.450.520.51305
Command0.460.400.680.790.550.53350
Disagreement0.490.550.380.540.430.55373
Greeting0.350.420.280.230.310.3060
Others0.540.290.510.640.530.4070
Question0.910.860.830.880.870.871945
Stat. Non Op.0.450.500.580.510.470.542172
Stat. Opinion0.450.530.690.700.540.602073
accuracy 0.530.619420
macro avg0.530.540.500.560.460.52
weighted0.620.660.530.610.510.61
Table A10. Classification reports DAs DS-v3 and DS-r1 (baseline) MELD.
Dialogue Act | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Support
Acknowledge0.050.070.520.480.090.13103
Agreement0.280.310.320.350.300.33446
Answer0.290.510.030.040.050.071284
Apology0.350.630.840.790.490.70178
Backchannel0.210.210.270.660.240.31172
Command0.220.220.760.920.340.35286
Disagreement0.140.220.650.650.230.33289
Greeting0.670.720.760.750.710.73486
Others0.170.430.030.190.050.26615
Question0.860.860.560.780.680.822042
St. Non Op.0.540.630.190.290.290.392993
St. Opinion0.250.320.530.550.340.401093
accuracy 0.350.459988
macro avg0.340.430.450.540.320.40
weighted0.480.570.350.450.350.44
Table A11. Classification reports DAs DS-v3 and DS-r1 (context) MELD.
Dialogue Act | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Support
Acknowledge0.070.090.540.340.120.14103
Agreement0.360.430.350.490.350.46446
Answer0.780.800.200.350.320.491284
Apology0.650.760.800.760.720.76178
Backchannel0.310.400.380.430.340.41172
Command0.290.270.850.860.440.42286
Disagreement0.350.380.560.550.430.45289
Greeting0.730.780.730.680.730.73486
Others0.620.460.070.220.120.30615
Question0.890.870.780.850.830.862042
Stat. Non Op.0.580.620.410.440.480.522994
Stat. Opinion0.300.330.600.600.400.421093
accuracy 0.500.559988
macro avg0.490.520.520.550.440.50
weighted0.610.630.500.550.500.56
Table A12. Classification reports DAs DS-v3 and DS-r1 (context + emotions) MELD.
Dialogue Act | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Support
Acknowledge0.070.100.460.370.120.15113
Agreement0.350.420.310.440.330.43446
Answer0.810.830.150.330.250.471284
Apology0.710.760.760.740.730.75178
Backchannel0.290.340.380.370.330.35172
Command0.320.270.810.850.460.41286
Disagreement0.350.380.460.450.400.41289
Greeting0.710.800.730.700.720.75486
Others0.400.520.040.220.070.30615
Question0.890.860.780.840.840.852042
Stat. Non Op.0.520.600.430.480.470.542994
Stat. Opinion0.280.330.580.580.370.421093
accuracy 0.480.559988
macro avg0.470.520.490.530.420.49
weighted0.580.630.480.550.480.56

Appendix B

Table A13. Confusion matrix emotions DS-v3 (baseline) IEMOCAP.
anger | excited | fear | frustrated | happy | neutral | sad | surprise
anger0.3100.020.440.070.060.080.02
excited0.030.130.050.230.360.100.050.05
fear0.1100.340.2600.130.080.08
frustrated0.090.010.030.580.090.110.090.01
happy0.040.030.020.210.470.120.100.01
neutral0.040.010.020.370.170.270.100.02
sad0.030.010.040.250.150.120.380.01
surprise0.1500.030.310.060.120.050.28
Table A14. Confusion matrix emotions DS-r1 (baseline) IEMOCAP.
anger | excited | fear | frustrated | happy | neutral | sad | surprise
anger0.240.020.030.490.030.110.040.04
excited0.020.230.060.220.160.170.020.11
fear000.370.1600.260.050.16
frustrated0.050.020.040.60.030.190.040.03
happy0.020.130.030.210.260.240.050.07
neutral0.010.040.050.340.050.420.050.04
sad0.020.030.060.290.090.230.250.03
surprise0.040.020.010.310.020.150.010.44
Table A15. Confusion matrix emotions DS-v3 (context) IEMOCAP.
anger | excited | fear | frustrated | happy | neutral | sad | surprise
anger0.5500.010.180.060.110.070.02
excited0.030.270.010.040.470.120.030.03
fear0.0500.420.030.340.080.050.03
frustrated0.20.010.010.310.050.270.140.01
happy0.040.0700.060.620.170.040.01
neutral0.040.020.010.080.190.590.060.02
sad0.020.010.020.080.060.210.590.01
surprise0.110.040.040.090.110.130.020.46
Table A16. Confusion matrix emotions DS-r1 (context) IEMOCAP.
anger | excited | fear | frustrated | happy | neutral | sad | surprise
anger0.420.010.030.3400.100.060.03
excited0.010.330.030.080.210.280.020.05
fear000.580.0300.240.050.11
frustrated0.080.010.050.4600.290.090.02
happy0.010.1400.070.370.340.040.02
neutral0.010.020.020.120.020.750.030.03
sad00.010.050.120.010.240.540.01
surprise0.030.030.010.100.050.220.010.54
Table A17. Confusion matrix emotions DS-v3 (context + DAs) IEMOCAP.
anger | excited | fear | frustrated | happy | neutral | sad | surprise
anger0.3700.010.3700.140.070.03
excited00.270.010.080.380.190.020.04
fear0.0300.470.1100.320.080
frustrated0.070.010.010.440.020.320.120.01
happy0.010.0800.060.570.230.040.02
neutral0.010.0200.120.090.690.050.02
sad00.010.020.110.070.250.530.01
surprise0.030.050.050.140.020.220.010.47
Table A18. Confusion matrix emotions DS-r1 (context + DAs) IEMOCAP.
anger | excited | fear | frustrated | happy | neutral | sad | surprise
anger0.3900.030.3800.120.050.03
excited00.280.020.070.280.280.020.04
fear000.550.0800.260.030.08
frustrated0.080.010.040.4400.330.090.02
happy0.010.0900.080.420.360.030.01
neutral0.010.010.020.140.040.730.030.02
sad0.010.010.060.120.020.290.490.01
surprise0.020.020.010.070.030.270.010.56
Table A19. Confusion matrix emotions DS-v3 (baseline) MELD.
anger | disgust | fear | joy | neutral | sadness | surprise
anger0.680.020.020.170.030.040.04
disgust0.550.270.010.050.020.050.05
fear0.280.030.280.150.060.120.09
joy0.150.020.010.710.020.030.05
neutral0.200.040.020.280.310.080.07
sadness0.280.050.040.110.040.410.07
surprise0.280.020.010.20.030.030.42
Table A20. Confusion matrix emotions DS-r1 (baseline) MELD.
anger | disgust | fear | joy | neutral | sadness | surprise
anger0.460.100.070.150.070.060.10
disgust0.270.480.030.040.040.070.07
fear0.060.060.520.090.080.090.11
joy0.050.050.030.660.070.040.10
neutral0.060.060.060.160.520.070.08
sadness0.080.080.090.080.110.440.12
surprise0.060.060.030.120.080.020.62
Table A21. Confusion matrix emotions DS-v3 (context) MELD.
anger | disgust | fear | joy | neutral | sadness | surprise
anger0.640.040.030.090.070.090.04
disgust0.430.310.020.020.080.090.06
fear0.150.020.370.080.070.220.09
joy0.100.010.010.700.090.050.04
neutral0.090.020.020.190.530.090.06
sadness0.170.040.050.060.080.570.04
surprise0.160.030.030.120.070.030.56
Table A22. Confusion matrix emotions DS-r1 (context) MELD.
anger | disgust | fear | joy | neutral | sadness | surprise
anger0.560.070.080.080.130.040.06
disgust0.210.460.020.040.150.070.05
fear0.080.010.560.050.180.050.07
joy0.050.020.030.640.180.020.05
neutral0.040.030.060.130.680.030.04
sadness0.090.060.130.050.200.410.06
surprise0.060.030.040.090.150.020.61
Table A23. Confusion matrix emotions DS-v3 (context + DAs) MELD.
anger | disgust | fear | joy | neutral | sadness | surprise
anger0.580.030.030.080.160.090.04
disgust0.380.300.010.010.170.090.04
fear0.120.020.350.040.240.190.04
joy0.070.010.010.650.190.030.04
neutral0.050.010.010.130.690.060.04
sadness0.140.020.040.050.150.550.04
surprise0.110.010.030.110.160.030.55
Table A24. Confusion matrix emotions DS-r1 (context + DAs) MELD.
anger | disgust | fear | joy | neutral | sadness | surprise
anger0.510.070.060.070.190.040.06
disgust0.230.440.030.010.200.040.04
fear0.060.030.550.030.210.060.05
joy0.040.020.020.610.240.020.04
neutral0.030.020.050.120.720.030.03
sadness0.090.050.120.040.220.430.04
surprise0.050.030.060.090.190.020.58
Table A25. Confusion matrix DAs DS-v3 (baseline) IEMOCAP.
Acknow. | Agreement | Answer | Apology | Backch. | Command | Disagr. | Greeting | Others | Question | St. No Op. | St. Op.
Acknowledge0.590.14000.0400.02000.020.020.18
Agreement0.450.220.010.010.060.020.100000.010.12
Answer0.090.080.050.010.030.060.21000.030.190.25
Apology0000.9600.010.0300000
Backchannel0.410.0200.020.340.030.040.030.010.040.050.01
Command0.01000.0100.860.06000.010.010.04
Disagreement0.020.100.01000.060.600000.080.11
Greeting0.700.0200000.27000.020
Others00000.440.01000.4900.060
Question0.010.0100.010.020.040.07000.760.020.06
Stat. Non Op.0.050.020.010.020.010.120.120.0100.050.270.32
Stat. Opinion0.040.0300.010.010.090.17000.030.060.55
Table A26. Confusion matrix DAs DS-r1 (baseline) IEMOCAP.
Acknow. | Agreement | Answer | Apology | Backch. | Command | Disagr. | Greeting | Others | Question | St. No Op. | St. Op.
Acknowledge0.430.23000.0900.0400.0200.020.18
Agreement0.290.3100.010.130.020.0700.020.010.030.12
Answer0.070.100.040.020.040.060.1700.010.030.240.22
Apology0.03000.9500.010.0100000
Backchannel0.250.08000.410.020.0500.130.030.020
Command0.02000.0100.830.0400.010.010.050.04
Disagreement0.010.100.0100.050.55000.010.150.12
Greeting0.720.0200.020.02000.2200.0200
Others0.010000.440.01000.51000.01
Question0.010.01000.010.030.090.0100.770.020.05
Stat. Non Op.0.060.020.010.020.020.090.1200.030.040.340.26
Stat. Opinion0.030.0400.010.010.090.2000.010.030.100.49
Table A27. Confusion matrix DAs DS-v3 (context) IEMOCAP.
Acknow. | Agreement | Answer | Apology | Backch. | Command | Disagr. | Greeting | Others | Question | St. No Op. | St. Op.
Acknowledge0.390.0700.020.1100.02000.020.090.29
Agreement0.340.280.010.010.090.010.010000.060.20
Answer0.030.030.200.010.010.030.02000.010.340.33
Apology0.01000.9600.0100000.010
Backchannel0.3000.0100.490.010.0100.080.040.030.03
Command0.0300000.710.040.0100.010.050.16
Disagreement0.030.040.0200.010.020.490000.140.26
Greeting0.630000000.270.0500.020.03
Others0.040000.170.01000.7600.010
Question00000.010.020.01000.80.040.1
Stat. Non Op.0.0200.010.010.010.050.020.010.010.030.430.4
Stat. Opinion0.020.010000.040.04000.030.090.77
Table A28. Confusion matrix DAs DS-r1 (context) IEMOCAP.
Acknow. | Agreement | Answer | Apology | Backch. | Command | Disagr. | Greeting | Others | Question | St. No Op. | St. Op.
Acknowledge0.380.11000.050.020000.040.040.38
Agreement0.20.460.0200.060.020.010000.090.14
Answer0.010.050.320.010.010.040.03000.030.260.25
Apology0.01000.8300.050000.010.030.07
Backchannel0.200.080.0100.390.010.0100.160.040.030.06
Command0.010000.010.750.0100.010.010.050.14
Disagreement0.010.030.0200.010.030.5800.010.010.140.18
Greeting0.60.0200000.020.250.070.020.030
Others0.0600.0100.20.01000.71000
Question000000.020.010.0100.890.020.06
Stat. Non Op.0.020.010.0100.010.060.0200.010.050.500.31
Stat. Opinion0.010.020000.070.03000.040.120.71
Table A29. Confusion matrix DAs DS-v3 (context + emotions) IEMOCAP.
Acknow. | Agreement | Answer | Apology | Backch. | Command | Disagr. | Greeting | Others | Question | St. No Op. | St. Op.
Acknowledge0.430.0700.020.0700.020.0200.020.160.20
Agreement0.330.180.010.010.100.010.010000.110.26
Answer0.040.020.100.0100.020.02000.010.460.32
Apology0.04000.8700.0300000.070
Backchannel0.260.01000.50.010.010.020.060.030.070.02
Command0.0100000.680.01000.010.090.19
Disagreement0.010.030.0100.010.040.38000.010.160.35
Greeting0.570000000.280.020.020.070.05
Others0.060000.400000.5100.010.01
Question00000.010.020.010.0100.830.040.08
Stat. Non Op.0.020000.010.050.020.0100.030.500.37
Stat. Opinion0.0100000.040.03000.030.190.69
Table A30. Confusion matrix DAs DS-r1 (context + emotions) IEMOCAP.
Acknow. | Agreement | Answer | Apology | Backch. | Command | Disagr. | Greeting | Others | Question | St. No Op. | St. Op.
Acknowledge0.360.11000.070000.020.020.020.41
Agreement0.160.490.0200.070.030.0200.010.010.070.12
Answer0.010.060.330.0100.030.02000.030.260.24
Apology0.040.010.010.7900.050.010000.080
Backchannel0.160.070.0100.450.010.0100.180.040.020.04
Command0.010000.010.790.0200.010.010.050.10
Disagreement0.010.030.01000.030.54000.010.140.23
Greeting0.480.02000000.230.120.020.070.07
Others0.0100.0100.290.01000.6400.010.01
Question000000.020.010.0100.880.020.05
Stat. Non Op.0.010.020.0100.010.070.0200.010.050.510.29
Stat. Opinion0.010.020000.070.03000.040.120.7
Table A31. Confusion matrix DAs DS-v3 (baseline) MELD.
Acknow. | Agreement | Answer | Apology | Backch. | Command | Disagr. | Greeting | Others | Question | St. No Op. | St. Op.
Acknowledge0.520.100.010.040.020.010.0300.010.010.050.2
Agreement0.410.320.0200.050.030.0600.010.010.020.08
Answer0.120.090.030.030.030.060.180.010.030.040.170.23
Apology0.040.0600.8400000000.06
Backchannel0.370.0600.030.270.020.0600.020.050.020.11
Command0.040.0100.020.010.760.03000.030.040.04
Disagreement0.020.0100.060.010.060.6500.030.010.040.10
Greeting0.140.010000.020.010.7600.010.050.01
Others0.170.040.010.060.050.140.100.030.030.040.160.18
Question0.020.020.010.020.020.040.140.0400.560.030.10
Stat. Non Op.0.100.030.010.030.010.130.110.020.010.020.190.33
Stat. Opinion0.050.050.010.040.010.100.160.0100.020.040.53
Table A32. Confusion matrix DAs DS-r1 (baseline) MELD.
Acknow. | Agreement | Answer | Apology | Backch. | Command | Disagr. | Greeting | Others | Question | St. No Op. | St. Op.
Acknowledge0.480.0600.010.1200.020.010.010.020.100.18
Agreement0.200.35000.210.040.04000.020.040.09
Answer0.060.100.040.010.060.080.150.010.020.040.230.20
Apology0.020.0300.790.020.020.010.010.010.010.010.08
Backchannel0.060.06000.660.020.010.010.090.050.010.03
Command0.010000.020.920.01000.010.020.01
Disagreement0.030.020.010.030.020.080.6500.010.010.050.09
Greeting0.140000.010.0200.750.030.020.010.01
Others0.110.040.0100.170.140.070.050.190.060.080.08
Question0.010.01000.020.050.040.020.010.780.020.04
Stat. Non Op.0.080.030.010.010.020.150.070.020.020.030.290.27
Stat. Opinion0.040.0500.010.010.140.090.0100.030.070.55
Table A33. Confusion matrix DAs DS-v3 (context) MELD.
Acknow. | Agreement | Answer | Apology | Backch. | Command | Disagr. | Greeting | Others | Question | St. No Op. | St. Op.
Acknowledge0.540.0300.010.0600.01000.020.140.19
Agreement0.340.350.0200.030.020.03000.020.060.11
Answer0.050.070.20.010.010.030.05000.010.350.21
Apology0.040.0700.800.010.01000.0100.07
Backchannel0.370.010.0200.380.0200.010.030.110.020.02
Command0.020.010000.850.010.0100.010.050.03
Disagreement0.020.020.030.020.010.060.56000.010.140.12
Greeting0.1500.01000.0100.7300.010.050.02
Others0.260.030.0100.100.120.040.050.070.060.160.11
Question0.020.010.010.010.010.030.020.0300.780.030.07
Stat. Non Op.0.060.030.010.010.010.090.030.0100.020.410.31
Stat. Opinion0.030.0400.0100.080.05000.030.150.6
Table A34. Confusion matrix DAs DS-r1 (context) MELD.
Acknow. | Agreement | Answer | Apology | Backch. | Command | Disagr. | Greeting | Others | Question | St. No Op. | St. Op.
Acknowledge0.340.0700.020.070.010.010.010.060.020.150.25
Agreement0.150.490.0400.050.040.0100.010.030.070.11
Answer0.010.070.350.010.010.040.0400.010.030.280.15
Apology0.010.0300.7600.030.010.010.010.030.010.10
Backchannel0.140.040.0200.430.020.0100.180.120.020.01
Command0.010.020000.860.0200.020.010.030.04
Disagreement0.020.020.020.0100.070.5500.020.010.140.14
Greeting0.110.010.0100.010.0100.680.060.020.050.03
Others0.110.040.0200.070.120.040.020.220.060.150.15
Question000000.030.010.020.010.850.030.03
Stat. Non Op.0.030.030.02000.100.030.010.010.030.440.28
Stat. Opinion0.010.030.010.0100.100.03000.040.160.60
Table A35. Confusion matrix DAs DS-v3 (context + emotions) MELD.
Acknow. | Agreement | Answer | Apology | Backch. | Command | Disagr. | Greeting | Others | Question | St. No Op. | St. Op.
Acknowledge0.460.060.020.010.0500.0100.010.010.150.23
Agreement0.330.310.0100.040.020.03000.020.100.13
Answer0.040.060.150.010.010.020.04000.020.440.21
Apology0.040.0400.7600000.010.010.020.12
Backchannel0.360.03000.380.020.0100.040.110.040
Command0.020000.010.810.0100.010.020.070.05
Disagreement0.030.020.010.020.010.060.46000.010.230.14
Greeting0.130.01000.010.0100.7300.010.070.03
Others0.210.03000.110.110.040.050.040.070.210.12
Question0.010.010.0100.010.020.010.0300.780.050.07
Stat. Non Op.0.050.030.010.010.010.080.030.0100.020.430.33
Stat. Opinion0.020.0400.0100.060.04000.020.220.58
Table A36. Confusion matrix DAs DS-r1 (context + emotions) MELD.
Acknow. | Agreement | Answer | Apology | Backch. | Command | Disagr. | Greeting | Others | Question | St. No Op. | St. Op.
Acknowledge0.370.070.010.010.080.0100.010.030.020.150.25
Agreement0.150.440.0200.060.040.0300.010.030.110.11
Answer0.010.070.330.010.010.040.04000.030.310.15
Apology0.020.0300.740.010.0300.010.010.020.030.10
Backchannel0.190.030.0100.370.020.0200.190.140.030.01
Command0.010.02000.010.850.0100.020.010.040.03
Disagreement0.010.030.030.0200.080.4500.010.020.180.16
Greeting0.110.010.01000.0100.700.040.020.070.03
Others0.110.030.0200.050.120.030.020.220.070.190.13
Question00000.010.030.010.020.010.840.030.04
Stat. Non Op.0.030.030.0100.010.100.020.010.010.030.480.26
Stat. Opinion0.010.030.010.0100.100.03000.040.200.58

References

  1. Brandtzaeg, P.B.; Følstad, A. Chatbots: Changing user needs and motivations. Interactions 2018, 25, 38–43. [Google Scholar] [CrossRef]
  2. Bittner, E.; Oeste-Reiß, S.; Leimeister, J.M. Where is the bot in our team? Toward a taxonomy of design option combinations for conversational agents in collaborative work. In Proceedings of the Hawaii International Conference on System Sciences (HICSS), Maui, HI, USA, 8–11 January 2019; pp. 284–293. [Google Scholar]
  3. Di Prospero, A.; Norouzi, N.; Fokaefs, M.; Litoiu, M. Chatbots as assistants: An architectural framework. In Proceedings of the 27th Annual International Conference on Computer Science and Software Engineering, Markham, ON, USA, 6–8 November 2017; pp. 76–86. [Google Scholar]
  4. Grudin, J.; Jacques, R. Chatbots, humbots, and the quest for artificial general intelligence. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–11. [Google Scholar]
  5. Caldarini, G.; Jaf, S.; McGarry, K. A literature survey of recent advances in chatbots. Information 2022, 13, 41. [Google Scholar] [CrossRef]
  6. Indrayani, L.; Amalia, R.; Hakim, F. Emotive expressions on social chatbot. J. Sosioteknologi 2020, 18, 509–516. [Google Scholar] [CrossRef]
  7. Chaves, A.; Gerosa, M. How should my chatbot interact? A survey on social characteristics in human–chatbot interaction design. Int. J. Hum. Comput. Interact. 2021, 37, 729–758. [Google Scholar] [CrossRef]
  8. Almansor, E.H.; Hussain, F.K. Survey on intelligent chatbots: State-of-the-art and future research directions. In Proceedings of the 13th International Conference on Complex, Intelligent, and Software Intensive Systems (CISIS-2019), Sydney, Australia, 3–5 July 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 534–543. [Google Scholar]
  9. Lowe, R.; Noseworthy, M.; Serban, I.V.; Angelard-Gontier, N.; Bengio, Y.; Pineau, J. Towards an automatic Turing test: Learning to evaluate dialogue responses. arXiv 2017, arXiv:1708.07149. [Google Scholar]
  10. Luger, E.; Sellen, A. “Like having a really bad PA”: The gulf between user expectation and experience of conversational agents. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; pp. 5286–5297. [Google Scholar]
  11. Song, Z.; Zheng, X.; Liu, L.; Xu, M.; Huang, X. Generating responses with a specific emotion in dialog. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), Florence, Italy, 28 July–2 August 2019; pp. 3685–3695. [Google Scholar]
  12. Wolk, K. Real-time sentiment analysis for Polish dialog systems using MT as pivot. Electronics 2021, 10, 1813. [Google Scholar] [CrossRef]
  13. Wang, Z.; Xie, Q.; Feng, Y.; Ding, Z.; Yang, Z.; Xia, R. Is ChatGPT a Good Sentiment Analyzer? In Proceedings of the First Conference on Language Modeling, Philadelphia, PA, USA, 7–9 October 2024. [Google Scholar]
  14. Bi, X.; Chen, D.; Chen, G.; Chen, S.; Dai, D.; Deng, C.; Ding, H.; Dong, K.; Du, Q.; Fu, Z.; et al. DeepSeek LLM: Scaling open-source language models with longtermism. arXiv 2024, arXiv:2401.02954. [Google Scholar]
  15. OpenAI. ChatGPT (Online AI Model Interface). Available online: https://chat.openai.com (accessed on 15 January 2025).
  16. DeepSeek Research. DeepSeek-R1 Model Page. Available online: https://www.deepseek.com (accessed on 1 February 2025).
  17. Guo, D.; Yang, D.; Zhang, H.; Song, J.; Zhang, R.; Xu, R.; Zhu, Q.; Ma, S.; Wang, P.; Bi, X.; et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv 2025, arXiv:2501.12948. [Google Scholar]
  18. Garcia-Garcia, J.M.; Penichet, V.M.; Lozano, M.D. Emotion detection: A technology review. In Proceedings of the XVIII International Conference on Human-Computer Interaction, Cancun, Mexico, 25–27 September 2017; pp. 1–8. [Google Scholar]
  19. Ekman, P. Basic emotions. In Handbook of Cognition and Emotion; John Wiley & Sons: Hoboken, NJ, USA, 1999; pp. 45–60. [Google Scholar]
  20. Acheampong, F.A.; Wenyu, C.; Nunoo-Mensah, H. Text-based emotion detection: Advances, challenges, and opportunities. Eng. Rep. 2020, 2, e12189. [Google Scholar] [CrossRef]
  21. Cowie, R.; Douglas-Cowie, E.; Tsapatsoulis, N.; Votsis, G.; Kollias, S.; Fellenz, W.; Taylor, J.G. Emotion recognition in human-computer interaction. IEEE Signal Process. Mag. 2001, 18, 32–80. [Google Scholar] [CrossRef]
  22. Fragopanagos, N.; Taylor, J.G. Emotion recognition in human–computer interaction. Neural Netw. 2005, 18, 389–405. [Google Scholar] [CrossRef]
  23. Chen, A.; Koegel, S.; Hannon, O.; Ciriello, R. Feels Like Empathy: How “Emotional” AI Challenges Human Essence. In Proceedings of the Australasian Conference on Information Systems, Wellington, New Zealand, 5–8 December 2023. [Google Scholar]
  24. Pabba, C.; Kumar, P. An intelligent system for monitoring students’ engagement in large classroom teaching through facial expression recognition. Expert Syst. 2022, 39, e12839. [Google Scholar] [CrossRef]
  25. Zhang, L.; Lyu, Q.; Callison-Burch, C. Intent detection with WikiHow. arXiv 2020, arXiv:2009.05781. [Google Scholar]
  26. Austin, J.L. How to Do Things with Words; Harvard University Press: Cambridge, MA, USA, 1975. [Google Scholar]
  27. Searle, J.R. A taxonomy of illocutionary acts. In Language, Mind and Knowledge; Gunderson, K., Ed.; University of Minnesota Press: Minneapolis, MN, USA, 1975; pp. 344–369. [Google Scholar]
  28. Ye, F. User Intent and State Modeling in Conversational Systems. Ph.D. Thesis, University College London, London, UK, 2024. [Google Scholar]
  29. Ghafoor, K.; Ahmad, T.; Aslam, M.; Wahla, S. Improving social interaction of visually impaired individuals through conversational assistive technology. Int. J. Intell. Comput. Cybern. 2024, 17, 126–142. [Google Scholar] [CrossRef]
  30. Barnum, T.C.; Solomon, S.J. Fight or flight: Integral emotions and violent intentions. Criminology 2019, 57, 659–686. [Google Scholar] [CrossRef]
  31. Bee, C.C.; Madrigal, R. Consumer uncertainty: The influence of anticipatory emotions on ambivalence, attitudes, and intentions. J. Consum. Behav. 2013, 12, 370–381. [Google Scholar] [CrossRef]
  32. Soscia, I. Gratitude, delight, or guilt: The role of consumers’ emotions in predicting postconsumption behaviors. Psychol. Mark. 2007, 24, 871–894. [Google Scholar] [CrossRef]
  33. Peng, W.; Hu, Y.; Xie, Y.; Xing, L.; Sun, Y. CogIntAc: Modeling the Relationships between Intention, Emotion, and Action in Interactive Process from Cognitive Perspective. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022. [Google Scholar]
  34. Saha, T.; Ekbal, A.; Bhattacharyya, P. Towards emotion-aided multi-modal dialogue act classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020. [Google Scholar]
  35. Bosma, W.; André, E. Exploiting emotions to disambiguate dialogue acts. In Proceedings of the 9th International Conference on Intelligent User Interfaces, Funchal, Portugal, 13–16 January 2004; pp. 85–92. [Google Scholar]
  36. Banimelhem, O.; Amayreh, W. The performance of ChatGPT in emotion classification. In Proceedings of the 2023 14th International Conference on Information and Communication Systems (ICICS), Irbid, Jordan, 21–23 November 2023; IEEE: New York, NY, USA, 2023. [Google Scholar]
  37. Imran, M.M.; Chatterjee, P.; Damevski, K. Uncovering the causes of emotions in software developer communication using zero-shot LLMs. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering, Lisbon, Portugal, 14–20 April 2024; pp. 1–13. [Google Scholar]
  38. Zhou, H.; Huang, M.; Zhang, T.; Zhu, X.; Liu, B. Emotional chatting machine: Emotional conversation generation with internal and external memory. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar]
  39. Asghar, N.; Poupart, P.; Hoey, J.; Jiang, X.; Mou, L. Affective neural response generation. In Proceedings of the Advances in Information Retrieval (ECIR 2018), Grenoble, France, 26–29 March 2018; pp. 154–166. [Google Scholar]
  40. Majumder, N.; Hong, P.; Peng, S.; Lu, J.; Ghosal, D.; Gelbukh, A.; Mihalcea, R.; Poria, S. MIME: MIMicking emotions for empathetic response generation. arXiv 2020, arXiv:2002.00193. [Google Scholar]
  41. Luo, J.; Phan, H.; Reiss, J. Fine-tuned RoBERTa Model with a CNN-LSTM Network for Conversational Emotion Recognition. In Proceedings of the Interspeech, Dublin, Ireland, 20–24 August 2023. [Google Scholar]
  42. Lin, Z.; Xu, P.; Winata, G.I.; Siddique, F.B.; Liu, Z.; Shin, J.; Fung, P. Caire: An end-to-end empathetic chatbot. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 13622–13623. [Google Scholar]
  43. Mullangi, P.; Dimmita, N.; Supriya, M.; Murty, P.S.C.; Nirmala, G.V.; Palagan, C.A.; Rao, K.T.; Rajeswaran, N. Sentiment and Emotion Modeling in Text-based Conversations utilizing ChatGPT. Eng. Technol. Appl. Sci. Res. 2025, 15, 20042–20048. [Google Scholar] [CrossRef]
  44. Wake, N.; Kanehira, A.; Sasabuchi, K.; Takamatsu, J.; Ikeuchi, K. Bias in emotion recognition with ChatGPT. arXiv 2023, arXiv:2310.11753. [Google Scholar]
  45. Ortega, D.; Vu, N.T. Neural-based context representation learning for dialog act classification. arXiv 2017, arXiv:1708.02561. [Google Scholar]
  46. Raheja, V.; Tetreault, J. Dialogue act classification with context-aware self-attention. arXiv 2019, arXiv:1904.02594. [Google Scholar]
  47. Saha, T.; Srivastava, S.; Firdaus, M.; Saha, S.; Ekbal, A.; Bhattacharyya, P. Exploring machine learning and deep learning frameworks for task-oriented dialogue act classification. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
  48. Li, R.; Lin, C.; Collinson, M.; Li, X.; Chen, G. A dual-attention hierarchical recurrent neural network for dialogue act classification. arXiv 2018, arXiv:1810.09151. [Google Scholar]
  49. Shang, G.; Tixier, A.J.P.; Vazirgiannis, M.; Lorré, J.P. Speaker-change aware CRF for dialogue act classification. arXiv 2020, arXiv:2004.02913. [Google Scholar]
  50. Novielli, N.; Strapparava, C. The Role of Affect Analysis in Dialogue Act Identification. IEEE Trans. Affect. Comput. 2013, 4, 439–451. [Google Scholar] [CrossRef]
  51. Busso, C.; Bulut, M.; Lee, C.C.; Kazemzadeh, A.; Mower, E.; Kim, S.; Chang, J.; Lee, S.; Narayanan, S. IEMOCAP: Interactive emotional dyadic motion capture database. Lang. Resour. Eval. 2008, 42, 335–359. [Google Scholar] [CrossRef]
  52. Poria, S.; Hazarika, D.; Majumder, N.; Naik, G.; Cambria, E.; Mihalcea, R. Meld: A multimodal multi-party dataset for emotion recognition in conversations. arXiv 2018, arXiv:1810.02508. [Google Scholar]
  53. Jurafsky, D. Switchboard SWBD-DAMSL Shallow-Discourse-Function Annotation Coders Manual. 1997. Available online: https://www.colorado.edu/ics/sites/default/files/attached-files/97-02-part1.pdf (accessed on 1 January 2025).
  54. Géron, A. Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow; O’Reilly Media, Inc.: Newton, MA, USA, 2022. [Google Scholar]
  55. Müller, A.C.; Guido, S. Introduction to Machine Learning with Python: A Guide for Data Scientists; O’Reilly Media, Inc.: Newton, MA, USA, 2016. [Google Scholar]
  56. Guingrich, R.E.; Graziano, M.S. Chatbots as social companions: How people perceive consciousness, human likeness, and social health benefits in machines. arXiv 2023, arXiv:2311.10599. [Google Scholar]
Figure 1. Classification model diagram.
Table 1. DS-v3 and DS-r1 emotion classification on IEMOCAP under different conditions.
Condition | Metric | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Performance
Baseline | accuracy | – | – | 0.36 / 0.37 | low
Baseline | macro avg | 0.36 / 0.35 | 0.34 / 0.30 | 0.30 / 0.25 |
Baseline | weighted avg | 0.44 / 0.44 | 0.36 / 0.29 | 0.36 / 0.30 |
Context | accuracy | – | – | 0.47 / 0.51 | medium
Context | macro avg | 0.45 / 0.47 | 0.48 / 0.50 | 0.45 / 0.43 |
Context | weighted avg | 0.53 / 0.56 | 0.47 / 0.51 | 0.47 / 0.51 |
Context + DA | accuracy | – | – | 0.49 / 0.48 | medium
Context + DA | macro avg | 0.47 / 0.47 | 0.48 / 0.48 | 0.43 / 0.42 |
Context + DA | weighted avg | 0.54 / 0.55 | 0.49 / 0.48 | 0.48 / 0.48 |
Table 2. DS-v3 and DS-r1 emotion classification on MELD under different conditions.
Condition | Metric | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Performance
Baseline | accuracy | – | – | 0.44 / 0.54 | low
Baseline | macro avg | 0.41 / 0.44 | 0.44 / 0.53 | 0.38 / 0.46 |
Baseline | weighted avg | 0.62 / 0.64 | 0.44 / 0.54 | 0.44 / 0.56 |
Context | accuracy | – | – | 0.57 / 0.62 | medium
Context | macro avg | 0.46 / 0.51 | 0.53 / 0.56 | 0.48 / 0.52 |
Context | weighted avg | 0.65 / 0.66 | 0.57 / 0.62 | 0.58 / 0.63 |
Context + DA | accuracy | – | – | 0.62 / 0.63 | medium-high
Context + DA | macro avg | 0.51 / 0.52 | 0.52 / 0.55 | 0.51 / 0.52 |
Context + DA | weighted avg | 0.64 / 0.65 | 0.62 / 0.63 | 0.63 / 0.63 |
Table 3. DS-v3 and DS-r1 DAs classification on IEMOCAP under different conditions.
Condition | Metric | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Performance
Baseline | accuracy | – | – | 0.44 / 0.45 | low
Baseline | macro avg | 0.42 / 0.39 | 0.50 / 0.49 | 0.39 / 0.37 |
Baseline | weighted avg | 0.55 / 0.57 | 0.44 / 0.45 | 0.44 / 0.45 |
Context | accuracy | – | – | 0.56 / 0.61 | medium
Context | macro avg | 0.54 / 0.55 | 0.54 / 0.56 | 0.50 / 0.52 |
Context | weighted avg | 0.64 / 0.66 | 0.56 / 0.61 | 0.55 / 0.61 |
Context + emotions | accuracy | – | – | 0.53 / 0.61 | medium
Context + emotions | macro avg | 0.53 / 0.54 | 0.50 / 0.56 | 0.46 / 0.52 |
Context + emotions | weighted avg | 0.62 / 0.66 | 0.53 / 0.61 | 0.51 / 0.61 |
Table 4. DS-v3 and DS-r1 DAs classification on MELD under different conditions.
Condition | Metric | Precision (v3 / r1) | Recall (v3 / r1) | F1-Score (v3 / r1) | Performance
Baseline | accuracy | – | – | 0.35 / 0.45 | low
Baseline | macro avg | 0.34 / 0.43 | 0.45 / 0.54 | 0.32 / 0.40 |
Baseline | weighted avg | 0.48 / 0.57 | 0.35 / 0.45 | 0.35 / 0.44 |
Context | accuracy | – | – | 0.50 / 0.55 | medium
Context | macro avg | 0.49 / 0.52 | 0.52 / 0.55 | 0.44 / 0.50 |
Context | weighted avg | 0.61 / 0.63 | 0.50 / 0.55 | 0.50 / 0.56 |
Context + emotions | accuracy | – | – | 0.48 / 0.55 | medium
Context + emotions | macro avg | 0.47 / 0.52 | 0.49 / 0.55 | 0.42 / 0.50 |
Context + emotions | weighted avg | 0.58 / 0.63 | 0.48 / 0.55 | 0.48 / 0.56 |
Table 5. Performance of DeepSeek-r1, Gemini-2.5, and ChatGPT-4 under different conditions on the classification of emotions.
Condition | Model | Precision (M / I) | Recall (M / I) | F1-Score (M / I) | Accuracy (M / I)
Context | ChatGPT-4 | 0.49 / 0.43 | 0.55 / 0.47 | 0.49 / 0.39 | 0.60 / 0.46
Context | DeepSeek-r1 | 0.51 / 0.47 | 0.56 / 0.50 | 0.52 / 0.43 | 0.62 / 0.51
Context | Gemini-2.5 | 0.49 / 0.50 | 0.57 / 0.60 | 0.51 / 0.49 | 0.61 / 0.55
Context + DA | ChatGPT-4 | 0.53 / 0.43 | 0.52 / 0.44 | 0.51 / 0.37 | 0.63 / 0.45
Context + DA | DeepSeek-r1 | 0.52 / 0.47 | 0.55 / 0.48 | 0.52 / 0.42 | 0.63 / 0.48
Context + DA | Gemini-2.5 | 0.51 / 0.50 | 0.56 / 0.60 | 0.52 / 0.49 | 0.63 / 0.55
M = MELD; I = IEMOCAP.
Table 6. Performance of DeepSeek-r1, Gemini-2.5, and ChatGPT-4 under different conditions on the classification of DAs.
Condition | Model | Precision (M / I) | Recall (M / I) | F1-Score (M / I) | Accuracy (M / I)
Context | ChatGPT-4 | 0.53 / 0.52 | 0.50 / 0.46 | 0.48 / 0.46 | 0.57 / 0.53
Context | DeepSeek-r1 | 0.52 / 0.51 | 0.55 / 0.56 | 0.50 / 0.52 | 0.55 / 0.62
Context | Gemini-2.5 | 0.53 / 0.54 | 0.59 / 0.61 | 0.52 / 0.53 | 0.58 / 0.64
Context + emotion | ChatGPT-4 | 0.52 / 0.51 | 0.49 / 0.44 | 0.47 / 0.44 | 0.55 / 0.52
Context + emotion | DeepSeek-r1 | 0.52 / 0.54 | 0.55 / 0.56 | 0.50 / 0.52 | 0.55 / 0.61
Context + emotion | Gemini-2.5 | 0.52 / 0.55 | 0.58 / 0.63 | 0.51 / 0.54 | 0.58 / 0.64
M = MELD; I = IEMOCAP.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
