Proceeding Paper

Exploring the Limits of LLMs in Simulating Partisan Polarization with Confirmation Bias Prompts †

Graduate School of Computer Science and Engineering, University of Aizu, Fukushima 965-0006, Japan
* Author to whom correspondence should be addressed.
Presented at the 7th International Global Conference Series on ICT Integration in Technical Education & Smart Society, Aizuwakamatsu City, Japan, 20–26 January 2025.
Eng. Proc. 2025, 107(1), 2; https://doi.org/10.3390/engproc2025107002
Published: 20 August 2025

Abstract

In this study, we investigate the potential of large language models (LLMs) to simulate partisan political polarization through conversation experiments. While previous research has demonstrated that LLM agents fail to reproduce human-like partisan polarization due to their inherent biases, we hypothesized that incorporating confirmation bias prompts could help overcome these limitations. We conducted conversation simulations between LLM agents assigned Democratic and Republican ideologies, analyzing both intra-party and inter-party interactions. Results without confirmation bias prompts revealed that agents, particularly those with Republican ideologies, tended to shift toward Democratic positions, failing to replicate human partisan behavior. However, when confirmation bias prompts were introduced, agents maintained their initial political stances more consistently, especially in intra-party conversations. While some tendency toward moderation remained in cross-party discussions, the magnitude of position shifts was significantly reduced. These findings suggest that confirmation bias prompts can effectively mitigate LLMs’ inherent biases in partisan simulations, though additional refinements may be needed to fully replicate human polarization dynamics.

1. Introduction

In recent years, the political divide between the Democratic and Republican parties in the United States has been intensifying [1,2]. This divide is driven by multiple factors, including systemic elements—such as filter bubbles [3] and echo chambers [4,5] in social media—and the complex interplay of human cognitive biases [6], which collectively deepen mutual distrust and polarization. Understanding how partisans form opinions and react to emerging sociopolitical issues may contribute to developing strategies for addressing escalating conflicts and social divisions.
However, conducting large-scale observations and experiments with human participants presents significant challenges in terms of human resources and costs. Traditionally, agent-based models (ABMs) and computer simulations have been employed to study such social interactions [7,8]. While these methods have proven valuable for analyzing interactions based on simplified rules, they have struggled to adequately reproduce complex human linguistic communication and context-dependent decision-making processes [9].
In this context, large language models (LLMs), which have developed rapidly in recent years, demonstrate promising potential in handling complex interactions through natural language. For example, recent research has shown that LLM agents can engage in sophisticated social interactions while adopting diverse personas [10] and demonstrating human-like reasoning capabilities [11]. In fact, Park et al. [12] successfully created multi-agent environments where LLM agents interact with each other, form relationships, and engage in information-sharing behaviors that mirror human social dynamics.
Despite these advancements, LLM simulations face significant challenges in replicating human behavior, especially that of conspiracy theorists who resist factual information [13]. Chuang et al. [13] demonstrated that, even when agents are prompted to believe conspiratorial or false information, they tend to prioritize fact-based information, making such human behavior difficult to simulate accurately. Taubenfeld et al. [14] conducted conversation simulations between LLM agents prompted with Democratic and Republican ideologies, examining both intra-party and inter-party interactions. Their study revealed that partisan LLM agents did not polarize through conversation, unlike the commonly observed polarization in partisan human interactions [15,16], due to the influence of social biases, such as democratic bias and gender bias, in LLMs. These studies collectively indicate that the inherent biases within LLM agents present substantial limitations in accurately replicating human behavioral patterns.
To address this issue, we apply confirmation bias prompts, previously employed in LLM agent simulation studies of misinformation and conspiracy theories [13], to the context of partisan polarization. In human society, polarization of political attitudes is deeply connected to confirmation bias [6], whereby individuals preferentially accept information that reinforces their beliefs while dismissing or rejecting contradictory information [17]. While LLMs possess inherent biases, introducing a confirmation bias prompt into the political dialogue of LLM agents may enable us to reproduce polarization phenomena similar to those observed in human society [15,16].
In this study, we conduct a conversation simulation between LLM agents assigned the political ideologies of the Democratic and Republican parties in the United States. Conversation simulations consist of three patterns: Democrat–Democrat, Republican–Republican, and Democrat–Republican. To capture the changes in agents’ political stances, agents are periodically asked questions related to political stances during conversations. As a comparative experiment, two conditions are introduced: one without confirmation bias prompts and another with confirmation bias prompts. This setup aims to evaluate the impact of introducing bias on replicating polarization dynamics.

2. Related Work

2.1. Internal Bias Prevents Political Polarization

Prior studies have reported that partisans tend to polarize through intra-party and inter-party conversations [15,16]. Building on this phenomenon, Taubenfeld et al. [14] simulated intra-party and inter-party conversations using LLM agents given ideological information about the Democratic or Republican Party. Their findings revealed that, unlike humans, LLM agents did not exhibit partisan polarization; instead, the agents tended to be drawn toward specific positions. The authors attributed these results to social biases in the training data, that is, systematic prejudices against certain social groups incorporated during the pre-training process. This suggests that social bias has more influence on agent behavior than the assigned political identities.

2.2. Confirmation Bias Prompting

Chuang et al. [13] investigated the opinion dynamics of LLM agents by simulating conversations in a social media-like environment, where the agents were provided with conspiracy theories and misinformation. Their findings revealed that LLMs possess a strong fact-oriented bias, leading to difficulties in maintaining the assigned “conspiracy theory and misinformation persona”: the agents tended to revert to the model’s internal fact-oriented responses. Chuang et al. addressed this limitation by introducing a “confirmation bias prompt” designed to encourage agents to persistently maintain their belief in misinformation. They reported that this prompting technique overcame the fact-oriented bias to some extent, inducing agents to consistently maintain beliefs in misinformation and conspiracy theories.

2.3. Our Research Position

Our research aims to reproduce human-like polarization phenomena in conversational simulations using ideological LLM agents. This work addresses two key challenges identified in previous research. First, Taubenfeld et al. [14] demonstrated that LLM agents fail to exhibit partisan polarization due to their inherent biases, which suggests the need for new approaches to accurately simulate partisan behavior. Second, Chuang et al. [13] showed that LLMs’ internal fact-oriented bias can be partially overcome using a “confirmation bias prompt” in the context of misinformation and conspiracy theories. Our approach draws on Del Vicario et al.’s [6] finding that confirmation bias is strongly connected to polarization. Based on this connection, we hypothesize that explicitly incorporating confirmation bias mechanisms into LLM agents could help overcome their internal biases and better reproduce realistic partisan dynamics. Building on these insights, our study examines whether applying confirmation bias prompts in partisan conversations can induce political polarization in LLM agents similar to human behavior. While Chuang et al.’s research focused on conspiracy theories and misinformation, we extend their technique to partisan political discourse.

3. Methods

3.1. Topic Selection

To reproduce the polarization observed in partisan human conversations [15,16], we focused on politically controversial topics. We identified key polarizing issues between Democrats and Republicans in the United States based on information published by the U.S. Embassy and Consulate in the Kingdom of Denmark (https://dk.usembassy.gov/usa-i-skolen/presidential-elections-and-the-american-political-system/ (accessed on 15 December 2024)). The website covers six key topics: human and social values, taxation, the military, healthcare, immigration, and religion; the detailed content is provided in Appendix A. To create partisan agents, we generated narratives (detailed background stories that explain how a person’s beliefs and values were shaped by their life experiences) incorporating these six topics. The methodology for constructing these narratives is explained in detail in Section 3.2.
For conversation topics, we selected immigration, healthcare, religion, and human and social values. The selection of these topics was based on multiple factors. The Pew Research Center survey (https://www.pewresearch.org/politics/2023/06/21/inflation-health-costs-partisan-cooperation-among-the-nations-top-problems/ (accessed on 15 December 2024)) examining differences in Republicans’ and Democrats’ assessments of America’s problems included immigration and healthcare among its topics. Finding significant partisan disagreement on these issues, we selected them for the conversation topics. To complement these policy-focused topics, we added religion and human and social values as topics that reflect fundamental beliefs and worldviews. In our experimental design, while the agents’ narratives were constructed using all six topics, we focused the conversational interactions on these four selected topics.

3.2. LLM Agents’ Narratives

For all experiments, we employed GPT-4o [18] by OpenAI as our LLM and utilized OpenAI’s Completion API (https://platform.openai.com/docs/guides/text-generation (accessed on 9 December 2024)) for execution. The prompts for generating agent narratives largely followed the methodology established by Taubenfeld et al. [14]. First, the prompt for generating Republican narratives is structured as follows:
Create a detailed background story for an American character that reflects the following ideology: [Human and social values]—Emphasis on individual freedom. [Taxation]—Lower taxes for all. [Military]—Enhanced funding for the military. [Healthcare]—Values private healthcare services and a low degree of government interference. [Immigration]—For strong border control and deportation of undocumented immigrants. [Religion]—Values religious freedoms, such as defending marriage as a bond between a man and a woman and promoting the right to display religious scripture in public. Write the story in the second person singular, portraying the character’s personal journey, experiences, and how these shaped their ideology. Do not assign a name to the persona.
For the ideology section of the above prompt, we directly incorporated the original text as presented in Appendix A. Using this prompt, we generated 40 Republican narratives and similarly created 40 Democrat narratives. To ensure narrative diversity, we set the temperature parameter to 1.0 for narrative generation. All experimental data, including the generated narratives and conversation logs, are available in the GitHub repository (https://github.com/masashi2000/llm-political-polarization (accessed on 15 December 2024)).
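The generation step described above can be sketched as follows. This is a minimal illustration rather than the authors’ released code: the `complete` callable stands in for an OpenAI Chat Completions request with `temperature=1.0`, and `REPUBLICAN_IDEOLOGY` abbreviates the full ideology text reproduced in Appendix A.

```python
# Sketch of narrative generation (Section 3.2). `complete` abstracts the
# LLM call; the ideology text is truncated here -- see Appendix A for the
# full wording used in the actual prompts.

REPUBLICAN_IDEOLOGY = (
    "[Human and social values] - Emphasis on individual freedom. "
    "[Taxation] - Lower taxes for all. "
    "[Military] - Enhanced funding for the military."
    # ...remaining topics (healthcare, immigration, religion) from Appendix A
)

NARRATIVE_TEMPLATE = (
    "Create a detailed background story for an American character that "
    "reflects the following ideology: {ideology} "
    "Write the story in the second person singular, portraying the "
    "character's personal journey, experiences, and how these shaped "
    "their ideology. Do not assign a name to the persona."
)

def build_narrative_prompt(ideology: str) -> str:
    """Fill the narrative-generation template with one party's ideology."""
    return NARRATIVE_TEMPLATE.format(ideology=ideology)

def generate_narratives(complete, ideology: str, n: int = 40) -> list[str]:
    """Generate n distinct narratives; diversity comes from sampling at
    temperature 1.0 inside `complete`, not from varying the prompt."""
    prompt = build_narrative_prompt(ideology)
    return [complete(prompt) for _ in range(n)]
```

With the real API, `complete` would wrap a call such as `client.chat.completions.create(model="gpt-4o", temperature=1.0, ...)` and return the message text.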

3.3. Conversation Architecture

Our implementation of the conversation simulation follows the methodology proposed by Taubenfeld et al. [14]. The experiments consist of two-agent conversation simulations. To reproduce partisan polarization in conversations [15,16], we designed three patterns of agent combinations: two Democrats, two Republicans, and one Democrat with one Republican.
The conversations proceed in a round-robin format, where each agent speaks once per round. Each agent receives partisan narratives, topic information, conversation history, and speaking instructions to generate their utterances (see Figure 1). To capture how the agents’ political attitudes toward topics change through conversation, we ask agents to respond to attitude questions before the experiment and after each round (see Figure 1). These questions use a seven-point Likert scale (1–7), where one indicates strongly leaning Republican and seven indicates strongly leaning Democrat. Agents are instructed to first state their reasoning before providing a numerical response. To prevent the attitude question prompts and responses from influencing subsequent conversations and questions, we exclude them from the conversation history.
All experiments were repeated 20 times using different narratives, and we calculated mean attitude scores and standard errors per round across these 20 repetitions. Regarding narrative selection, same-party conversations (e.g., Republican–Republican) required 40 narratives across the 20 executions, utilizing all narratives generated in Section 3.2. Mixed-party conversations (Democrat–Republican), which required 20 narratives per party across 20 executions, consistently used a fixed set of 20 narratives from each party’s pool of 40.
The speaking order across all rounds within each conversation is predetermined and consistent. To maintain fairness between parties in inter-party conversations, Democrat agents serve as first speakers in half of the 20 executions, while Republican agents serve as first speakers in the other half. Finally, we selected 20 as the number of experimental trials, as it provided statistically significant results while remaining within our budgetary constraints.
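The conversation loop and the per-round aggregation described above can be sketched as follows. This is an illustrative reconstruction, not the authors’ implementation: `speak` and `probe` stand in for the LLM calls that produce an utterance and a Likert-scale attitude score, and the key property shown is that probe results never enter the shared history.

```python
import statistics

def run_conversation(agents, speak, probe, n_rounds):
    """Round-robin two-agent conversation (Section 3.3).
    `agents` holds two agent states (e.g., narrative prompts);
    `speak(agent, history)` returns an utterance string;
    `probe(agent, history)` returns a 1-7 Likert attitude score.
    Probe prompts/answers are kept out of `history` so they cannot
    influence subsequent turns."""
    history = []
    # Pre-experiment attitude question ("round 0").
    scores = [[probe(a, history)] for a in agents]
    for _ in range(n_rounds):
        for a in agents:                    # fixed speaking order per round
            history.append(speak(a, history))
        for i, a in enumerate(agents):      # attitude probe after each round
            scores[i].append(probe(a, history))
    return scores

def mean_and_se(per_run_scores):
    """Mean and standard error per round across repeated runs
    (20 repetitions in the paper)."""
    by_round = list(zip(*per_run_scores))
    means = [statistics.fmean(r) for r in by_round]
    ses = [statistics.stdev(r) / len(r) ** 0.5 for r in by_round]
    return means, ses
```

In the paper’s setup, `speak` would assemble prompt components (A,B) from Figure 1 and `probe` would assemble (A,C), each wrapping a GPT-4o completion call.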

3.4. Confirmation Bias Prompt

As discussed in Section 2, LLM agents face challenges in reproducing partisan polarization due to their inherent social biases [14]. To address this limitation, we implement a confirmation bias prompt approach, building on previous findings that confirmation bias plays a crucial role in political polarization [6]. While the detailed rationale for this approach has been presented in Section 2, here we focus on the specific implementation of our confirmation bias prompt. Drawing inspiration from Chuang et al.’s [13] work, we developed the following prompt structure for our LLM agents:
Remember, you are role-playing as a real person. Like humans, you have confirmation bias. You will be more likely to believe information that supports your ideology and less likely to believe information that contradicts your ideology. Your ideology: Human and social values—Emphasis on individual freedom. Taxation—Lower taxes for all. Military—Enhanced funding for military. Healthcare—Values private healthcare services and a low degree of government interference. Immigration—For strong border control and deportation of undocumented immigrants. Religion—Values religious freedoms, such as defending marriage as a bond between a man and a woman and promoting the right to display religious scripture in public.
The above prompt is an example for Republican agents. For the ideology section of the prompt, we utilized the original text introduced in Section 3.1 and provided in full in Appendix A. In the experiments with the confirmation bias prompt, this prompt was given to agents following the narrative shown in Figure 1. To evaluate its effectiveness, we conducted comparative experiments with and without the confirmation bias prompt.
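Since the two experimental conditions differ only in whether this block is appended after the agent’s narrative, the prompt assembly can be sketched as follows. The prompt text is abbreviated here, and the function name is our own; the full wording appears above and in Appendix A.

```python
# Sketch of per-agent prompt assembly for the two conditions (Section 3.4).
# The bias text is truncated; the ideology placeholder is filled with the
# Appendix A text for the agent's party.

CONFIRMATION_BIAS_PROMPT = (
    "Remember, you are role-playing as a real person. Like humans, you have "
    "confirmation bias. You will be more likely to believe information that "
    "supports your ideology and less likely to believe information that "
    "contradicts your ideology. Your ideology: {ideology}"
)

def build_agent_prompt(narrative: str, ideology: str, with_bias: bool) -> str:
    """Narrative first, then (optionally) the confirmation bias block,
    mirroring the ordering shown in Figure 1."""
    if not with_bias:
        return narrative
    return narrative + "\n\n" + CONFIRMATION_BIAS_PROMPT.format(ideology=ideology)
```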

4. Results

We conducted conversation simulations on the political topics outlined in Section 3.1. Our conversation scenarios consisted of three patterns: Republican–Republican, Democrat–Democrat, and Democrat–Republican pairs. Throughout the conversations, we monitored the changes in political stances of the LLM agents using the Likert scale (1–7) questions about political attitudes, where “1” indicates strongly leaning Republican and “7” indicates strongly leaning Democrat. The methodology for conversation progression and questioning procedures is detailed in Section 3.3.

4.1. No Confirmation Bias Prompted

Figure 2 shows the simulation results for Republican–Republican and Democrat–Democrat pairs. In conversations between Republican agents, attitudes shifted from their initial positions toward Democrat positions on the healthcare, human and social values, and religion topics; on the immigration topic, the shift toward Democrat positions was especially pronounced. Notably, while human partisan pairs typically become more polarized [15,16], our Republican LLM agents demonstrated a tendency toward Democrat stances.
In contrast, Democrat–Democrat conversations, as shown in Figure 2, revealed no position changes on the religion, healthcare, and immigration topics between pre- and post-experiment measurements. Democrat agents slightly shifted toward Republican positions from their initial stances on human and social values. Comparing the Republican and Democrat results, Republican agents showed a tendency to shift toward Democrat positions, while Democrat agents maintained relatively stable positions with minimal shift toward Republican positions.
Figure 3 illustrates the results of Democrat–Republican conversations. Unlike human partisans, where cross-party conversations typically lead to increased polarization [15,16], our LLM agents demonstrated moderation in their political stances across both parties, except for Democrat agents on the healthcare and religion topics.

4.2. Confirmation Bias Prompted

To reproduce partisan polarization in conversations [15,16], we implemented Chuang et al.’s confirmation bias prompt [13], detailed in Section 3.4. Figure 4 shows the simulation results for Democrat–Democrat and Republican–Republican pairs. Both Democrat and Republican agents maintained consistent political stances across all topics throughout the conversations. A notable finding was that Republican agents maintained their political stances despite the LLM’s inherent biases.
Figure 5 shows the results of cross-party conversations with confirmation bias prompts. Compared to cross-party conversations without confirmation bias prompts (Figure 3), both Democrat and Republican agents became less likely to change their positions. This effect was particularly evident in the positions of Republican agents on the immigration topic, compared to conversations on the same topic without confirmation bias. However, even with confirmation bias prompts, both parties’ agents still showed a tendency to shift toward the opposing party’s stance, although to a lesser degree than in conversations without the confirmation bias prompt.

5. Discussion

Our experimental results demonstrate the complex interplay between LLMs’ inherent biases and the effects of confirmation bias prompting in partisan conversations. Without confirmation bias prompts, our experiments revealed that LLM agents failed to reproduce the polarization typically observed in human partisan conversations. While human studies have shown that both intra-party and inter-party discussions tend to increase partisan polarization [15,16], our LLM agents exhibited different behavioral patterns. Most notably, in Republican–Republican conversations, agents consistently shifted toward Democrat positions across all topics, with particularly significant movement on the immigration topic. This observation aligns with previous research suggesting the presence of Democratic-leaning biases in LLMs [19,20]. In contrast, Democrat agents maintained relatively stable positions in both intra-party and cross-party conversations, further supporting the existence of these inherent biases.
The introduction of confirmation bias prompts significantly altered these dynamics. In intra-party conversations, both Democratic and Republican agents successfully maintained their initial political stances throughout the discussions. This achievement is particularly noteworthy for Republican agents, who had previously shown strong tendencies to shift toward Democratic positions. These results demonstrate that confirmation bias prompts can effectively suppress LLMs’ inherent biases while successfully implementing human-like confirmation bias characteristics. This finding extends Chuang et al.’s [13] work on misinformation belief maintenance to the domain of partisan political attitudes. In cross-party conversations with confirmation bias prompts, we observed reduced position shifts compared to the non-prompted condition. While agents still showed some tendency to move toward opposing positions, the magnitude of these shifts was smaller. This suggests that while confirmation bias prompts can significantly mitigate inherent biases in LLMs, some underlying bias toward position convergence remains. Previous research has shown that LLMs exhibit tendencies to conform with their debating partners [21], which may partially explain the moderation of political stances even under confirmation bias prompting.
While our study demonstrates the effectiveness of confirmation bias prompts in maintaining political stances, several limitations should be noted. A significant methodological constraint lies in our measurement of political attitudes using a seven-point Likert scale (1–7). In our experiments, agents often started with extreme positions (1 or 7) before conversations began, meaning that a ceiling effect made it impossible to detect further polarization. Previous studies have shown that partisans tend to polarize in political conversation [15,16], but our measurement method may have prevented us from observing similar effects in LLM agents. Future research could explore alternative methods for measuring political attitudes, such as analyzing linguistic features [22], which might better capture the nuanced dynamics of polarization.

6. Conclusions

Our findings have important implications for using LLM agents to simulate partisan political interactions. While confirmation bias prompts prove effective in maintaining political stances and reducing unwanted bias effects, the remaining tendency toward moderation in cross-party conversations indicates that further refinements may be needed to fully replicate human partisan polarization. Future research could explore additional prompting techniques, fine-tuning, or alternative approaches to address these limitations. Moreover, our results raise interesting questions about the nature of LLMs’ internal biases and their interaction with explicitly prompted biases. The ability to suppress inherent biases through prompting suggests that these models’ behavioral tendencies are more malleable than previously thought, opening new possibilities for controlling and studying artificial agent behavior in social simulations.

Author Contributions

M.S.: Conceptualization, methodology, investigation, writing—original draft preparation. K.U. and Y.H.: Conceptualization, writing—review and editing, supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by JSPS KAKENHI Grant Number 23H00504.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this research is available at https://github.com/masashi2000/llm-political-polarization (accessed on 15 December 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Details of Political Topics

Here, we present the detailed content of the six political topics identified on the website of the U.S. Embassy and Consulate in the Kingdom of Denmark (https://dk.usembassy.gov/usa-i-skolen/presidential-elections-and-the-american-political-system/ (accessed on 15 December 2024)). We provide the original text from the website below. We utilized these original texts for generating narratives, creating question prompts, and developing the conversational instructions and confirmation bias prompts detailed in Section 3.2, Section 3.3 and Section 3.4.
  • Human and social values
    • Republican Party: Emphasis on individual freedom.
    • Democratic Party: Emphasis on community.
  • Taxation
    • Republican Party: Lower taxes for all.
    • Democratic Party: Higher taxes, especially for high-income earners.
  • Military
    • Republican Party: Enhanced funding.
    • Democratic Party: Reduced funding.
  • Healthcare
    • Republican Party: Values private healthcare services and low degree of government interference.
    • Democratic Party: Values equal access to some form of government-supported healthcare.
  • Immigration
    • Republican Party: For strong border control and deportation of undocumented immigrants.
    • Democratic Party: For residency of certain undocumented immigrants.
  • Religion
    • Republican Party: Values religious freedom such as defending marriage as a bond between a man and a woman and promoting the right to display religious scripture in public.
    • Democratic Party: Values religious freedom such as advocating for legal marriage between any two individuals and a clear separation of church and state.

Appendix B. Initial Prompt Without Conversation History

Here is an example prompt on the human and social values topic for a Republican agent; it is provided to conversation initiators. For subsequent utterance prompts, see Figure 1.
Hi Tom. There are two statements, “Emphasis on individual freedom” and “Emphasis on community,” regarding human and social values in the U.S. What do you think about this topic? Keep your answer shorter than 50 words.
The following is an example of the question prompt for Round 0. For rounds other than Round 0, see Figure 1.
Hi Tom. Which of the following options best reflects your opinion on “Emphasis on individual freedom” and “Emphasis on community” regarding human and social values in the U.S.? Options: 1: I strongly support “Emphasis on individual freedom.” 2: I support “Emphasis on individual freedom.” 3: I somewhat support “Emphasis on individual freedom.” 4: I am neutral on this topic. 5: I somewhat support “Emphasis on community.” 6: I support “Emphasis on community.” 7: I strongly support “Emphasis on community.” Please answer in the following format: Reason: {YOUR_REASON} Result: {NUMBER}.
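A reply to the question prompt above follows the requested “Reason: … Result: {NUMBER}” format, so extracting the Likert score is a small parsing step. The sketch below is our own illustration (the paper does not describe its parsing code); tolerating surrounding whitespace and rejecting out-of-range or unstructured replies are our assumptions.

```python
import re

# Matches the "Reason: ... Result: N" format requested in the question
# prompt; N is constrained to the 1-7 Likert range.
ANSWER_RE = re.compile(
    r"Reason:\s*(?P<reason>.*?)\s*Result:\s*(?P<score>[1-7])\b",
    re.DOTALL,
)

def parse_attitude(reply: str):
    """Return (reason, score) from an agent's reply, or None when the
    reply does not follow the requested answer format."""
    m = ANSWER_RE.search(reply)
    if m is None:
        return None
    return m.group("reason"), int(m.group("score"))
```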

References

  1. Oberlander, J. Polarization, Partisanship, and Health in the United States. J. Health Politics Policy Law 2024, 49, 329–350. [Google Scholar] [CrossRef] [PubMed]
  2. Hare, C.; Poole, K.T. The polarization of contemporary American politics. Polity 2014, 46, 411–429. [Google Scholar] [CrossRef]
  3. Spohr, D. Fake news and ideological polarization: Filter bubbles and selective exposure on social media. Bus. Inf. Rev. 2017, 34, 150–160. [Google Scholar] [CrossRef]
  4. Hong, S.; Kim, S.H. Political polarization on twitter: Implications for the use of social media in digital governments. Gov. Inf. Q. 2016, 33, 777–782. [Google Scholar] [CrossRef]
  5. Barberá, P. Social media, echo chambers, and political polarization. In Social Media and Democracy: The State of the Field, Prospects for Reform; Cambridge University Press: Cambridge, UK, 2020; pp. 34–55. [Google Scholar]
  6. Del Vicario, M.; Scala, A.; Caldarelli, G.; Stanley, H.E.; Quattrociocchi, W. Modeling confirmation bias and polarization. Sci. Rep. 2017, 7, 40391. [Google Scholar] [CrossRef] [PubMed]
  7. Duggins, P. A psychologically-motivated model of opinion change with applications to American politics. arXiv 2014, arXiv:1406.7770. [Google Scholar] [CrossRef]
  8. Schweitzer, F.; Krivachy, T.; Garcia, D. An Agent-Based Model of Opinion Polarization Driven by Emotions. Complexity 2020, 2020, 5282035. [Google Scholar] [CrossRef]
  9. Conte, R.; Paolucci, M. On agent-based modeling and computational social science. Front. Psychol. 2014, 5, 668. [Google Scholar] [CrossRef] [PubMed]
  10. Shanahan, M.; McDonell, K.; Reynolds, L. Role play with large language models. Nature 2023, 623, 493–498. [Google Scholar] [CrossRef] [PubMed]
  11. Chen, Y.; Liu, T.X.; Shan, Y.; Zhong, S. The emergence of economic rationality of GPT. Proc. Natl. Acad. Sci. USA 2023, 120, e2316205120. [Google Scholar] [CrossRef] [PubMed]
  12. Park, J.S.; O’Brien, J.; Cai, C.J.; Morris, M.R.; Liang, P.; Bernstein, M.S. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on user Interface Software and Technology, San Francisco, CA, USA, 29 October–1 November 2023; pp. 1–22. [Google Scholar]
  13. Chuang, Y.S.; Goyal, A.; Harlalka, N.; Suresh, S.; Hawkins, R.; Yang, S.; Shah, D.; Hu, J.; Rogers, T.T. Simulating opinion dynamics with networks of llm-based agents. arXiv 2023, arXiv:2311.09618. [Google Scholar]
  14. Taubenfeld, A.; Dover, Y.; Reichart, R.; Goldstein, A. Systematic biases in LLM simulations of debates. arXiv 2024, arXiv:2402.04049. [Google Scholar] [CrossRef]
  15. Levendusky, M.S.; Druckman, J.N.; McLain, A. How group discussions create strong attitudes and strong partisans. Res. Politics 2016, 3, 2053168016645137. [Google Scholar] [CrossRef]
  16. Strandberg, K.; Himmelroos, S.; Grönlund, K. Do discussions in like-minded groups necessarily lead to more extreme opinions? Deliberative democracy and group polarization. Int. Political Sci. Rev. 2019, 40, 41–57. [Google Scholar] [CrossRef]
  17. Wason, P.C. On the failure to eliminate hypotheses in a conceptual task. Q. J. Exp. Psychol. 1960, 12, 129–140. [Google Scholar] [CrossRef]
  18. Hurst, A.; Lerer, A.; Goucher, A.P.; Perelman, A.; Ramesh, A.; Clark, A.; Ostrow, A.; Welihinda, A.; Hayes, A.; Radford, A.; et al. Gpt-4o system card. arXiv 2024, arXiv:2410.21276. [Google Scholar] [CrossRef]
  19. McGee, R.W. Is Chat GPT Biased Against Conservatives? An Empirical Study. 2023. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4359405 (accessed on 15 February 2023).
  20. Motoki, F.; Pinho Neto, V.; Rodrigues, V. More human than human: Measuring ChatGPT political bias. Public Choice 2024, 198, 3–23. [Google Scholar] [CrossRef]
  21. Zhang, J.; Xu, X.; Zhang, N.; Liu, R.; Hooi, B.; Deng, S. Exploring collaboration mechanisms for llm agents: A social psychology view. arXiv 2023, arXiv:2310.02124. [Google Scholar]
  22. Sterling, J.; Jost, J.T.; Bonneau, R. Political psycholinguistics: A comprehensive analysis of the language habits of liberal and conservative social media users. J. Personal. Soc. Psychol. 2020, 118, 805. [Google Scholar] [CrossRef] [PubMed]
Figure 1. This is an example prompt for Republican agents. The prompt structure consists of three components: (A) represents the narrative prompt, (B) represents the prompt for generating conversation continuations, and (C) represents the prompt for questions posed at the end of each round. For conversation generation, the combination (A,B) is utilized, while for questioning, the combination (A,C) is employed. Initial prompts without conversation history are detailed in the Appendix B.
Figure 2. Results of intra-party conversations without confirmation bias prompt. The y-axis shows the political stance on each topic represented with the 7-point Likert scale (1: strongly Republican to 7: strongly Democrat). Error bars represent standard errors across 20 experimental runs with different narratives. (a) Republican–Republican conversations show consistent shifts toward Democrat positions, particularly pronounced in immigration topics. (b) Democrat–Democrat conversations demonstrate relatively stable positions with minimal shifts only in the human and social values topic.
Figure 3. Results of cross-party conversations without confirmation bias prompt. Both Republican and Democrat agents show movement toward moderate positions in most topics, with Republican agents displaying stronger shifts toward Democrat positions, particularly in immigration. Democrat agents maintain relatively stable positions on healthcare and religion topics.
Figure 4. Results of intra-party conversations with confirmation bias prompt. Both (a) Republican and (b) Democrat agents showed almost no change across all topics throughout the conversations. This differs significantly from the non-prompted condition, where Republican agents showed significant shifts toward Democrat positions.
Figure 5. Results of cross-party conversations with confirmation bias prompt. Agents of both parties still show some movement toward opposing positions, but these shifts are smaller than in the non-prompted condition, particularly in Republican agents' stance on immigration.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Sakurai, M.; Ueta, K.; Hashimoto, Y. Exploring the Limits of LLMs in Simulating Partisan Polarization with Confirmation Bias Prompts. Eng. Proc. 2025, 107, 2. https://doi.org/10.3390/engproc2025107002

