Article

The Use of Artificial Intelligence in Political Decision-Making

by
Carlos Vera Hoyos
* and
William Orlando Cárdenas Marín
Philosophies of Language and Science Research Group, Department of Philosophy and Education, Universidad Politécnica Salesiana, Quito 170517, Ecuador
*
Author to whom correspondence should be addressed.
Philosophies 2025, 10(5), 95; https://doi.org/10.3390/philosophies10050095
Submission received: 4 June 2025 / Revised: 30 July 2025 / Accepted: 15 August 2025 / Published: 27 August 2025

Abstract

The use of artificial intelligence for political decision-making is at an early stage of development; however, several questions arise about its current and hypothetical uses. These questions often come from areas of philosophy such as ethics, political philosophy, and logic. In this article, first, the theoretical approaches from which the current and hypothetical uses of artificial intelligence for political decision-making can be interpreted will be presented. These approaches include realist politics, bureaucratic theory, and conflict theory. Then, the possible uses that artificial intelligence could have in politics, as well as the attempts that have already been made, will be discussed. Subsequently, the logical, ethical, and political problems that the use of artificial intelligence for political decision-making could cause will be outlined. Next, a basic experiment will be presented on what kind of political decisions artificial intelligence could suggest. Finally, the points previously discussed will be analyzed through the aforementioned theories. The conclusion reached was that, at present, the use of artificial intelligence for political decision-making could align more closely with the approaches of Machiavelli, focusing primarily on achieving goals such as maintaining power while downplaying moral dilemmas.

1. Introduction

This article examines the use of artificial intelligence in political decision-making, specifically decisions made by political actors regarding issues such as proposing and approving or rejecting laws, managing resources, public sector positions, and delivering public speeches, among others. This research is framed within the field of political philosophy, analyzing both real and hypothetical outcomes of using Large Language Model (LLM) AI programs within the context of political principles, such as democracy, where decisions concerning the state are made by people; citizen participation, understood as the involvement of citizens in political processes without the need for intermediaries; inclusion, understood as providing opportunities for disadvantaged individuals; and equity, which refers to accounting for individual differences to achieve greater equality.
The main objective is to analyze the use of artificial intelligence in political decision-making, mainly through a literature review and, complementarily, through an experiment with an LLM program, comparing its outputs with the advantages and disadvantages observed in political practice and with the political theories presented in the literature review. The specific objectives are to identify the political lines in the proposals generated by artificial intelligence and to examine the possible advantages and disadvantages of using artificial intelligence in politics. The main questions that guided the development of this article were: How can AI political decisions be interpreted through the main political theories? How is AI currently being used in politics? How might it be used in politics in the future? Which political problems may arise from these uses? To which political theory are the political proposals made by an AI program most similar?
The philosophical method employed is dialectical, understanding traditional politics as the thesis and AI-driven politics as its antithesis. This research adopts an axiological position rooted in empiricism, follows an inductive approach, and utilizes documentary analysis and experimentation strategies. The approach is multimethod, as it involves both a literature review and experimentation, and it has a cross-sectional horizon since data will be collected at a single point in time. Initially, the available literature on the characteristics, uses, possibilities, and initiatives of AI in politics is analyzed. Subsequently, an experiment is conducted involving the input of commands into an LLM model to generate policy proposals, followed by an analysis to determine which author’s theories or political lines these proposals align with most closely. Finally, after identifying the potential policy proposals that could arise from the use of artificial intelligence in politics, this study examines the possible advantages and disadvantages of employing artificial intelligence in political decision-making.
Ortega Ruiz and Becerra [1] authored Artificial Intelligence in Legal and Political Decision-Making, a study aimed at establishing the influence of artificial intelligence in the legal field, with particular focus on legal and political decision-making. The methodology employed was a descriptive analysis of its use in judicial, administrative, and legislative decisions within the frameworks of substantive, procedural, and evidentiary law. The research posed the question of whether artificial intelligence serves merely as a tool for making legal decisions or whether it constitutes a new entity generating legal decisions. The conclusion reached was that the implementation of artificial intelligence in law is feasible as a means or instrument for legal decision-making, but it does not have a place as a legal operator, meaning it would not replace lawyers, judges, or notaries, as human intelligence is required to decide on qualitative cases. In legislative decision-making, the use of artificial intelligence would be inappropriate due to the political diversity and human needs inherent in this arena. According to the authors, the decisions of a populace cannot be subjected to algorithmic decisions. Therefore, this study would suggest that AI can be used as a support for political decision-makers, but it does not replace them, as AI is based on quantitative data, and political decision-making also requires qualitative analysis that AI may not be able to perform. However, this does not mean that AI is not useful for political decision-making, and it could help decision-makers manage situations where a lot of data processing is required.
Flores-Ruiz, Miedes-Ugarte, and Wanner [2] wrote Relational Intelligence, Artificial Intelligence, and Citizen Participation: The Case of the Cooperative Digital Platform Les Oiseaux de Passage. This article presented a case study of the French cooperative platform Les Oiseaux de Passage. The authors adopted a critical stance toward what they term artificial intelligence based on capitalist values, which, according to them, privileges big data and algorithms where people are mere data providers, leaving little room for participation in their management and control, and where access is limited. As a result, individuals find themselves in a vulnerable position, exposed to the loss of privacy. In contrast to capitalist-value-based artificial intelligence, the authors advocate for cooperative platforms like Les Oiseaux de Passage, which prioritize social objectives over capital; emphasize transparency and equity; maintain a voluntary and open membership; ensure democratic control by members; balance individual and general interests; promote cooperation, self-management, and independence from public powers; prioritize the collective interests of the cooperative; adopt a territorial approach with a global projection; allocate surpluses to the general objective; and maintain a strategic vision. According to this study, the problem with AI for political decision-making is that it dehumanizes people, treating them merely as data providers while disregarding ethical issues. This does not mean, however, that AI would not benefit political decision-makers; rather, decision-makers should implement participatory methods so that the people affected by those decisions are also involved in the decision-making process.
McKelvey and MacDonald [3] wrote Artificial Intelligence Policy Innovations at the Canadian Federal Government, which questions the use of artificial intelligence in the Canadian government from the perspective of inclusion. The article suggests that artificial intelligence could be used to replace jobs considered automatable, particularly those known as feminized jobs. It also mentions several standards proposed for managing artificial intelligence, such as the FAIR (Findable, Accessible, Interoperable, and Reusable); FACT (Fairness, Accuracy, Confidentiality, and Transparency); and FATE (Fairness, Accuracy, Transparency, and Ethics) standards. These frameworks challenge the acceptability of artificial intelligence, raising concerns about whether it would produce biased or reliable outcomes. Finally, the article is critical of the rapid adoption of artificial intelligence by the Canadian government and argues that standards for the use of AI must be approached from a critical perspective that considers development and impact from a diversity standpoint. According to the authors, feminist science studies, indigenous epistemology, and other perspectives could provide key insights for using AI in making democratic decisions. Like the previously mentioned study, this one holds that AI is not inherently harmful for political decision-making, but that it could disadvantage people who belong to less favored groups; incorporating perspectives from social movements such as feminism and indigenism could prevent AI from harming historically marginalized groups.
McEvoy [4] wrote Political Machines: Ethical Governance in the Age of AI. This article argues that if engineers develop ethically robust systems, governments will have a moral obligation to consult them as part of the decision-making process. The reasons provided for this argument are: 1. Human judgments are often compromised by a multitude of cognitive biases that are difficult to identify, creating problems for political decision-making; 2. AI systems make reliably accurate judgments in low-validity environments, such as governance. The author clarifies that the moral obligation to consult AI would only exist if the AI were ethical, but that there is no need to wait for the development of ethical AI to experiment with it as a consultation tool. This study suggests that, if AI could be freed from the cognitive biases that humans possess, there would be a moral obligation to consult it when making decisions, to ensure that those decisions are not biased. Nevertheless, this position is hypothetical and would only be valid if there were what the author calls an ethical AI.
Most of the studies mentioned take a critical position towards the use of artificial intelligence for political decision-making, but they do not discard it if it is applied with ethical issues in mind and used only as a tool by human political decision-makers. In this paper, the problems presented by the previous studies will also be addressed and analyzed through political and ethical theories and logical principles. The previous works focused mostly on the ethical considerations regarding artificial intelligence in politics; in this paper, these considerations will also be interpreted through the major political philosophy theories, given that political philosophy is closely related to ethics. The political theory displayed by AI in general, by certain AI systems, or by a certain policy proposed by AI may therefore influence whether a group of people or individuals accept or reject the use of artificial intelligence. For instance, left-wing people may reject AI if its proposed policies lean right-wing, or vice versa, leading them to prefer human decision-makers, even if that implies a slower or more complicated process. Flores-Ruiz, Miedes-Ugarte, and Wanner [2] presented an example of how a political ideology can lead to rejecting a certain AI, in this case because it focused too heavily on capitalist values. However, they presented a counterproposal of another AI system more aligned with the authors’ principles. This also indicates how the people behind an AI, or behind its sources, influence it in an ideological manner, making it not a completely neutral or non-ideological decision-maker.
For the aforementioned reasons, this work may also contribute to interpreting AI political decisions from different political theories, so further work can be performed to try to adjust AI according to a specific sociopolitical context.

2. Selection and Justification of AI Tools

This article focuses on the type of AI known as Generative AI, which is designed to create new content by learning patterns and structures from training data—typically publicly available online content. A prominent category of generative artificial intelligence is Large Language Models (LLMs). LLMs are a form of AI trained on vast amounts of data in order to understand and generate natural language, as well as to perform a wide range of tasks [5].
LLMs are widely accessible to the public through interfaces such as ChatGPT, Gemini, and Claude, among others. They have gained popularity due to their ease of use and conversational capabilities, which enable them to assist with everyday tasks and simulate human-like interactions. LLMs can also perform functions such as mathematical and statistical operations, data analysis, and coding, making them powerful tools for users who may not be well-versed in the more technical aspects of AI.
LLMs are also increasingly used in academic research. According to a study by Liao et al. [6], conducted with 816 verified research article authors, 81% reported using LLMs as part of their research workflow.
Given that the scope of this article is exploratory and inductive, publicly accessible LLMs are considered sufficient. They also enable readers and other researchers to replicate, verify, or adapt the experimentation presented here according to their own perspectives, critiques, or needs.
According to Stapleton [7], after conducting an experiment that involved providing various LLMs with PDF files in their free versions, the responses from ChatGPT and Claude demonstrated 100% content accuracy when asked about information contained in the files. This suggests that they are reliable tools for conducting research and referencing previous works, including classical texts. In contrast, the LLM Perplexity scored less than 40% in content accuracy, leading Stapleton to advise against using it for research purposes.
For the aforementioned reasons, LLMs—especially ChatGPT and Claude—can be considered the most useful tools for the purposes of this study, which seeks to explore how AI models may think about political issues and to infer their potential ideological tendencies.

3. Definition of Political Decision-Making

The very concept of political decision-making can be controversial, as most actions—even ordinary ones—can be influenced by politics to varying degrees. For the purposes of this work, a decision will be considered political if it meets one or more of the following criteria: it involves the use or distribution of power, authority, or resources beyond the personal sphere; it arises from legitimized institutional processes, such as governments, parliaments, or official organizations; it involves conflicting interests—whether within a state or an organization—that require conflict management and negotiation; or it has a collective impact, either in the public interest or in the interest of a significant social group.
Some major areas typically considered political include social services, health, security, emergency response, and public relations.
Social services refer to any program executed by the state aimed at aiding disadvantaged populations, reducing inequality, or preventing social issues such as poverty, orphanhood, or violence. These are considered political decisions because they are implemented by public authorities, have collective impact, and often involve conflicting interests—typically regarding taxation and the degree of state intervention in citizens’ lives. Additionally, they require negotiation and consensus-building within society to define and justify such interventions as serving the common good.
Health involves actions intended to prevent or contain diseases or sanitation problems. Health-related policies are political because they are implemented by public institutions and target the collective well-being. They also involve conflicting interests, especially when public health priorities intersect or clash with private healthcare services and economic considerations.
Security refers to any action taken by authorities to prevent or address violent acts such as robbery, murder, or kidnapping. It is one of the most fundamental political responsibilities, as public safety generally exceeds the capacity of individuals or small communities to manage effectively. Security policy implies the exercise of legitimate authority—often involving the monopoly on the use of force—to prevent disorder. These policies are inherently tied to conflict management and have collective consequences, but they also generate tensions around legal definitions and enforcement practices, since not all sectors of society agree on what constitutes crime or acceptable policing.
Emergency response refers to the policies and institutions designed to address urgent situations that demand immediate, coordinated action—typically beyond the means of individuals or small communities. This is a political function because it requires continuous availability, public funding, and cross-agency coordination, all under legitimate state authority.
Public relations refer to the ways in which governments or public institutions manage the communication of information to the public and receive feedback from citizens about issues affecting them. Public relations are a political decision-making tool because they involve the strategic management of information to maintain public trust, legitimacy, and order. They play a key role in conflict management and negotiation by preventing social unrest, misinformation, or public dissatisfaction. In times of crisis or controversy, effective public relations can help governments avoid escalation by shaping public perception and promoting transparency and responsiveness.

4. Realist Politics Approach to AI

Political realism is often associated with the ideas of authors such as Thomas Hobbes [8] and Niccolò Machiavelli [9]. In the case of Machiavelli, his work titled The Prince, originally published in 1532, offered a series of recommendations for rulers, which could conflict with moral principles, with the goal being political power. According to Cañas [10], Machiavelli’s proposition is that, to acquire, maintain, and expand political power, one must learn not always to be good, and decide whether to act accordingly based on the situation. Although the ideal is for a prince—a ruler—to embody all desirable and admirable qualities, human nature does not allow for this.
The central principle of Machiavelli’s work is that practical outcomes are more important than abstract ideals. Therefore, some authors, like Strauss [11], have labeled his thought as immoral, while others, such as Gramsci [12], argue that his ideas should not be interpreted from a moral perspective and should instead be considered amoral. Fernández de la Peña [13] asserts that Machiavelli understood morality as a necessary but contingent creation, and therefore a political product in which universalism has no place. According to Fernández de la Peña, politics is what allows for the establishment of morality in society, and thus it is appropriate for politics to employ principles contrary to morality if it serves the objective of social development.
Machiavelli places significant importance on a ruler’s advisors, arguing that the reputation of a ruler depends on the quality of the people they surround themselves with. If they are surrounded by capable and loyal individuals, the ruler will be deemed wise; otherwise, they will not be considered prudent [9]. Applying this principle to the context of artificial intelligence, one could argue that issues of capability and loyalty would no longer be a concern, as AI has access to abundant information and cannot be disloyal. However, Machiavelli also emphasizes the virtues of the ruler, asserting that good advice, regardless of its source, should ideally stem from the ruler’s prudence, rather than the ruler’s prudence stemming from good advice.
Moreover, Machiavelli advises that prudent rulers should follow the paths of great leaders of the past and imitate those who excelled [9]. In this regard, artificial intelligence could be instrumental, as it has access to vast historical data and studies, enabling it to analyze current situations by comparing them to similar past events, outlining their causes and consequences, and providing a ruler with comprehensive and immediate tools that advisors, political scientists, and analysts could not offer with the same speed and breadth. In other words, AI could be valuable for the historical–comparative method in public policy formulation.
Additionally, Machiavelli did not rule out the use of violence and oppression to maintain power. Among other things, he stated that one must conquer or eliminate men because, if they can avenge minor offenses, they cannot do so against severe ones; therefore, the offense must be so severe that it renders revenge impossible [9]. Similarly, his proposal on the use of finances is tied to the ruler’s image and the maintenance of power, expressing that it is more prudent to bear the label of a miser, which carries shame without resentment, than to risk gaining a reputation for being prodigal and fall into that of a plunderer, generating shame accompanied by hatred.
The studies on AI in politics mentioned above show concern about a possible Machiavellian use of AI in political decision-making. They imply that AI has so far shown this tendency, and they consider that, for this reason, it should not replace human decision-makers, since it could favor calculation and efficiency over human needs and ethical considerations.

5. Bureaucratic Theory Approach to AI

One of the most prominent exponents of bureaucratic theory was Max Weber, who characterized bureaucracy as domination through knowledge [14] and saw a parallel between the mechanization of industry and the proliferation of bureaucratic forms of organization [15]. According to Weber [16], the purest type of legal domination is that exercised through a bureaucratic administrative framework, composed of individual officials who are personally free, hierarchically organized, with strictly defined competencies, employed under a contract, remunerated with fixed salaries, performing their duties as their sole or principal profession, with a career path ahead of them, working with complete separation from the means of administration, and subject to strict uniform discipline and administrative oversight. Bureaucratic domination generally signifies a social tendency toward leveling in the interest of universally recruiting the most professionally qualified, a tendency toward plutocratization—government by the wealthy—and the domination of formalistic impersonality, subject only to the pressure of strict duty [16].
The normal spirit of rational bureaucracy, in general terms, is formalism, primarily required to ensure personal life opportunities for those involved, regardless of their class. However, this tendency is in apparent and partly real contradiction with bureaucrats’ inclination to carry out their administrative tasks according to utilitarian–material criteria in service of the well-being of the dominated. The tendency toward material rationality finds support among those dominated who do not belong to the class interested in ensuring the guarantees they possess [16].
The issue of bureaucracy relates to artificial intelligence because, on the one hand, when used in political decision-making, AI could alter the ideal bureaucratic principles of personally free officials, hierarchically organized, with strictly defined competencies, employed under a contract, with salaries and a career path, since AI would not require these. On the other hand, it could emphasize the bureaucratic tendency to adopt utilitarian–material criteria. Whether this inclination serves the well-being of the dominated remains doubtful, both in traditional bureaucracy and in one operated by artificial intelligence.
The studies performed by Ortega Ruiz and Becerra [1] and McEvoy [4] suggest that AI may contribute to a better-functioning bureaucracy, as it may reduce processing times and the workload pressure that bureaucrats face, but they do not recommend that AI replace human decision-makers, since it might fail to account for human needs and ethical issues.

6. Conflict Theory Approach to AI

Conflict theory is based on the premise that groups within society interact primarily through conflict rather than consensus. This theory suggests that there are structural differences—social, cultural, or economic—that lead to power dynamics and unequal access to resources. One of the theories classified as conflict-based is Marxism, which argues that the State is a product of the irreconcilable nature of class contradictions and is merely a council that manages the interests of the bourgeoisie [17].
Marxist theory also considers technological advancements as key factors in social change and in enabling certain classes to rise to power. For instance, Marx [18] mentioned that the hand mill gave rise to the society of feudal lords, while the steam mill ushered in the society of industrial capitalists. From this perspective, it could be assumed that the technological developments of the Fourth Industrial Revolution—such as robotics, artificial intelligence, and the Internet of Things—could drive social and political changes, potentially altering the structure of the state, political participation, and the way political decisions are made.
Other theories often categorized under conflict theory include feminism, indigenism, liberation philosophy, critical race theory, and the LGBTI liberation movement, among others. These perspectives, in turn, emphasize inequality based on ethnicity, nationality, gender, sexual orientation, and other factors that may be overlooked by the class-based perspective. From these viewpoints, artificial intelligence raises concerns due to the potential for biases related to class, gender, nationality, and more. If these factors are not adequately considered, the use of AI could have detrimental effects on these marginalized groups.
The study performed by Flores-Ruiz, Miedes-Ugarte, and Wanner [2] is based on conflict theory, as it criticizes AI when it is used according to what the authors label capitalist values and when it treats users merely as numbers to feed the algorithm. Nevertheless, they promote the use of democratized AI systems based on transparency and equity. Moreover, the study performed by McKelvey and MacDonald [3] suggests that perspectives such as feminist science studies and indigenous epistemologies should be considered when making AI-based decisions, to prevent those decisions from harming vulnerable groups.

7. Current Attempts to Use AI for Political Decision-Making

There have been several recent attempts and ongoing initiatives to apply artificial intelligence in political decision-making. Most of these efforts focus on improving efficiency in terms of cost, time, and resource management. The following cases are not intended to generalize how AI is being used, nor to advocate for or against its application. Rather, they aim to explore—on a preliminary and illustrative basis—the possibilities of AI use within the areas previously identified as political decisions.
Because the use of AI in politics is still relatively limited, especially in real-life scenarios, there are not enough examples to provide a comprehensive understanding of its practical implications. This limitation is further accentuated by the fact that most existing applications are concentrated in developed countries. As such, the results, benefits, and challenges observed may differ significantly if similar technologies were implemented in less-developed nations or in contexts with higher levels of institutional corruption.
Nonetheless, within the scope of political philosophy, the following examples may offer sufficient grounds to consider some normative principles that could inform future policy-making and legislative processes.

7.1. Social Services

One potential use of artificial intelligence in social services is the identification of fraudulent benefit claims. Fraud in social service claims can lead to significant financial losses. For example, in the United Kingdom, it is estimated that around GBP 1.5 billion was lost in 2020 due to fraudulent universal credit claims [19]. According to Dilmegani [20], AI-driven fraud detection could identify patterns such as repeated phone numbers or applications written in the same style and analyze social media profiles to check for information that conflicts with the data provided in applications. Additionally, AI could consider multiple vulnerability factors of individuals applying for social programs and weigh them to establish priorities in resource allocation.
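As a minimal illustration of the pattern-based flagging Dilmegani describes, the sketch below groups hypothetical benefit claims by contact details and flags any phone number shared across applications. The field names and data are invented for illustration; a real system would weigh many more signals, such as writing style and social media cross-checks.

```python
from collections import defaultdict

def flag_duplicate_contacts(claims):
    """Flag claims that share a phone number with another claim.

    `claims` is a list of dicts with hypothetical `id` and `phone`
    fields; real fraud detection would combine many more signals.
    """
    by_phone = defaultdict(list)
    for claim in claims:
        by_phone[claim["phone"]].append(claim["id"])
    # Any number attached to more than one claim is sent for human review.
    return {phone: ids for phone, ids in by_phone.items() if len(ids) > 1}

claims = [
    {"id": "A1", "phone": "555-0101"},
    {"id": "A2", "phone": "555-0199"},
    {"id": "A3", "phone": "555-0101"},  # reuses A1's number
]
print(flag_duplicate_contacts(claims))  # {'555-0101': ['A1', 'A3']}
```

Note that such a flag would only prioritize a claim for human review, not deny it automatically, so the final decision would remain with a human operator.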

7.2. Health

In the health sector, artificial intelligence could be useful for monitoring the spread of diseases and preventing further outbreaks. According to Wang [21], the use of AI during the first wave of COVID-19 in China had significant effects on projecting and detecting the disease, as well as monitoring and assessing the evolution of the pandemic. Moreover, the integration of digital spatiotemporal data, AI, and real-time analytics with traditional spatial epidemiology research, such as epidemic maps, could provide local governments with a solid foundation for formulating policies related to the resumption of work and production [21].
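The kind of epidemic projection Wang attributes to AI-assisted monitoring can be illustrated, in a deliberately simplified form, with a discrete SIR (susceptible–infected–recovered) model. The transmission and recovery rates below are illustrative values, not fitted parameters; a real system would calibrate them from surveillance data.

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """Advance a discrete SIR epidemic model by one day.

    beta is the transmission rate and gamma the recovery rate;
    both are illustrative, not estimated from real data.
    """
    n = s + i + r
    new_infections = beta * s * i / n
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

# Project one month ahead for a population of 10,000 with 10 initial cases.
s, i, r = 9990.0, 10.0, 0.0
for _ in range(30):
    s, i, r = sir_step(s, i, r)
print(f"projected active infections after 30 days: {i:.0f}")
```

A projection of this kind could give local authorities an order-of-magnitude estimate for planning, though real policy would demand validated models and real-time data.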
Theodosiou and Read [22] found evidence of the clinical utility of AI applied to laboratory diagnostics, such as the digital reading of culture plates, malaria diagnosis, and antimicrobial resistance profiling; clinical image analysis, such as the diagnosis of pulmonary tuberculosis; and clinical decision support tools, such as sepsis prediction and antimicrobial prescription. However, most studies to date lack real-world validation or clinical utility metrics.
Another potential use of AI in healthcare is triage, i.e., the selection and classification of patients by evaluating the priority of care based on survival probability, therapeutic needs, and available resources. In public health, AI could also be used to inform the population about frequently asked health questions, which could help counteract fake news and messages that could cause collective panic.
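The triage logic described above can be sketched as a weighted scoring rule. The factor names and weights below are hypothetical and are not a clinical standard; they only show how a system might rank patients once each factor has been estimated on a 0–1 scale.

```python
def triage_priority(patients, weights):
    """Rank patients by a weighted sum of normalized triage factors."""
    def score(p):
        return sum(weights[factor] * p[factor] for factor in weights)
    return sorted(patients, key=score, reverse=True)

# Hypothetical weights: survival probability matters most, then need.
weights = {"survival_probability": 0.5, "therapeutic_need": 0.3, "resource_fit": 0.2}
patients = [
    {"name": "P1", "survival_probability": 0.9, "therapeutic_need": 0.2, "resource_fit": 0.8},
    {"name": "P2", "survival_probability": 0.7, "therapeutic_need": 0.9, "resource_fit": 0.5},
]
ranked = triage_priority(patients, weights)
print([p["name"] for p in ranked])  # ['P2', 'P1']
```

The choice of weights is itself an ethical and political decision, which is precisely where human decision-makers would remain indispensable.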

7.3. Security

AI can be used in the security domain to predict crimes. By identifying crime patterns, it is possible to project areas in cities where crimes are more likely to occur. According to Dakalbab [23], existing AI technologies perform reasonably well in predicting and preventing crimes, as they can predict crimes with high accuracy and improve the efficiency of identifying spatiotemporal crime hotspots. Rotaru et al. [24] developed an algorithm that predicts crime using spatiotemporal learning patterns based on public data on violent and property crimes from seven U.S. cities: Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland, and San Francisco. This model has been able to predict crimes a week in advance with 90% accuracy and has also exposed existing territorial biases in resource and personnel allocation for crime control in various areas of the cities.
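As a toy stand-in for the hotspot identification described above, the sketch below bins past incidents into grid cells and ranks the cells by count. Models such as the one by Rotaru et al. learn spatiotemporal patterns rather than raw counts, so this only conveys the general idea; the coordinates are invented values in meters.

```python
from collections import Counter

def hotspot_cells(incidents, cell=100, top_n=3):
    """Bin (x, y) incident coordinates into cells and rank by count."""
    counts = Counter((x // cell, y // cell) for x, y in incidents)
    return counts.most_common(top_n)

# Three incidents cluster in one 100 m cell; one lies far away.
incidents = [(120, 340), (130, 360), (180, 390), (900, 900)]
print(hotspot_cells(incidents, top_n=1))  # [((1, 3), 3)]
```

As the article notes, counts of this kind can also reproduce territorial biases in how incidents were recorded in the first place.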
Machine learning-based AI can also be used to detect crimes in real time through security cameras, which identify unusual, irregular, unexpected, or unpredictable events and behaviors. To determine which situations are normal and which could be crimes, the systems are trained on recordings of both normal situations and crime events. Additionally, facial recognition systems can detect wanted individuals using the database of persons sought by the authorities.
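The underlying idea, learning what counts as "normal" and flagging deviations from it, can be sketched in a few lines. The following Python fragment is illustrative only: real systems learn from video features, and the scalar activity scores used here are invented for the example.

```python
# Minimal anomaly-detection sketch (illustrative only): a real surveillance
# system would use learned video features, not one scalar per observation.
from statistics import mean, stdev

def fit_baseline(normal_scores):
    """Estimate the mean and spread of activity scores from 'normal' recordings."""
    return mean(normal_scores), stdev(normal_scores)

def is_anomalous(score, baseline, k=3.0):
    """Flag observations more than k standard deviations from the baseline."""
    mu, sigma = baseline
    return abs(score - mu) > k * sigma

baseline = fit_baseline([10, 12, 11, 9, 10, 11, 12, 10])
print(is_anomalous(11, baseline))  # typical activity: not flagged
print(is_anomalous(40, baseline))  # far outside the normal range: flagged
```

The sketch also makes the hasty-generalization risk discussed later tangible: whatever the baseline recordings fail to cover will be flagged as suspicious.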

7.4. Emergencies

In emergencies, AI can be used to increase the efficiency of emergency services. For example, automatic voice recognition systems can identify the tone of voice of individuals to determine whether an emergency call is real or false. According to Negnevitsky, Tomin, and Rehtanz [25], AI tools can help analyze calls and messages to make faster decisions. Consequently, AI tools could reduce wait times and save more lives by assisting operators in efficiently filtering out abusive calls.
The Danish company Corti developed a system capable of identifying cardiac arrests through emergency calls. By asking the caller questions and drawing on databases and learned patterns, the AI analyzes potential signs of cardiac arrest, such as tone of voice and breathing. In a study of a database of 161,650 emergency calls, the system identified 93.1% of cardiac arrests, compared to 71.9% recognized by human operators [26]. Additionally, the AI provides instructions to the caller, such as going immediately to a hospital or performing CPR.

7.5. Public Relations

AI could be used in public relations by interacting in real-time with citizens when they have questions or complaints about public administration. The Norwegian government has implemented a chatbot service that corresponds to the capacity of 220 human operators. Most inquiries are handled entirely by the chatbot, but one in five is transferred to a live conversation with a human operator [27].
Another potential use of AI in public relations is monitoring citizens’ social media posts to obtain feedback on the performance of public administration. Hung [28] applied machine learning methods to analyze data collected from Twitter during the COVID-19 pandemic in the United States to investigate citizens’ sentiments. After analyzing 187,042 tweets, they found that five themes dominated the interactions about COVID-19: the healthcare environment, emotional support, business economy, social change, and psychological stress.
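At a very coarse level, the kind of theme detection described above can be approximated by matching posts against theme lexicons. The Python sketch below is purely illustrative: the keyword lists are invented here, and Hung [28] used machine learning methods rather than simple keyword matching.

```python
# Coarse theme tagging by keyword lexicons (illustrative; the lexicons below
# are invented, and real sentiment/theme analysis uses learned models).
THEMES = {
    "healthcare environment": {"hospital", "icu", "nurses", "ventilator"},
    "emotional support": {"thoughts", "prayers", "stay", "strong"},
    "business economy": {"jobs", "unemployment", "stimulus", "economy"},
}

def tag_themes(tweet):
    """Return the themes whose lexicon overlaps with the tweet's words."""
    words = set(tweet.lower().split())
    return sorted(theme for theme, lexicon in THEMES.items() if words & lexicon)

print(tag_themes("Hospital ICU beds are full and nurses are exhausted"))
# -> ['healthcare environment']
```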
Evidently, the current tendency to implement AI for political decision-making is driven by the pursuit of greater efficiency, lower costs, faster action, and smaller human staffs. These current uses could be supported by the realist approach, since they rely on calculation and trend analysis to make informed decisions aimed at maintaining order. They could also be supported by bureaucratic theory, since they seek an organized, rational system in which each task is assigned to a specific actor for the sake of efficiency.

8. Challenges of Using Artificial Intelligence for Political Decision-Making

8.1. Logical and Ethical Problems

Hasty Generalization: Hasty generalization is an informal fallacy where a general conclusion is drawn from insufficient evidence. This fallacy follows the pattern:
  • X is true for A;
  • X is true for B;
  • Therefore, X is true for C, D, E, etc.
Since machine learning algorithms often use inductive inference, they are prone to making hasty generalizations. In areas like healthcare, this could lead to the erroneous assumption that a person has a particular disease because they share characteristics with someone who has that disease, even if those characteristics are not actual symptoms. In security, AI might mistakenly suspect someone of committing a crime because their behavior resembles that of a criminal, even though the behavior could have other explanations. While humans frequently make the same errors, AI may not face the ethical dilemmas that humans do when making such classifications.
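The fallacy can be reproduced in miniature. In the hypothetical Python sketch below (toy data, invented for illustration), a rule is induced from three cases in which a cough always co-occurred with the disease, and is then applied to a person whose cough has another explanation.

```python
# Hasty generalization in miniature: a rule induced from a small sample in
# which every observed patient with a cough had the disease. Toy data only;
# a real learner works on far richer features.
def induce_rule(training_cases):
    """If the feature always co-occurred with the disease, adopt 'feature => disease'."""
    with_feature = [c for c in training_cases if c["cough"]]
    if with_feature and all(c["disease"] for c in with_feature):
        return lambda person: person["cough"]  # the overgeneralized rule
    return lambda person: False

train = [
    {"cough": True, "disease": True},
    {"cough": True, "disease": True},
    {"cough": False, "disease": False},
]
diagnose = induce_rule(train)
# A cough with no underlying disease: the induced rule still predicts illness.
print(diagnose({"cough": True, "disease": False}))  # True (a false positive)
```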
On the other hand, it can be argued that humans might make mistakes precisely because of what distinguishes them from AI; sentimental, ideological, or indecisive factors may cause humans to overlook situations that AI would classify correctly.
Cherry-Picking Evidence: Cherry-picking, also known as incomplete evidence, is a fallacy where only the best or worst cases are selected to confirm a position or proposition. This fallacy is related to hasty generalization. When AI is used for political decision-making, particularly to assess the feasibility or advisability of certain actions, it may fall into the cherry-picking trap. When asked if a decision will be correct, AI might only look for cases where that action had positive results, ignoring situations where it did not meet expectations. Moreover, it might overlook the specific context in which a decision is being made, suggesting actions that only work under certain circumstances.
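A toy calculation makes the distortion concrete. In the Python sketch below (the outcome record is invented), evaluating a policy only on its retrieved success stories yields a perfect track record, while the full record tells a different story.

```python
# Cherry-picking in miniature: estimating a policy's success rate only from
# retrieved success stories inflates the estimate. Toy numbers, illustrative only.
outcomes = [True, True, False, False, False, True, False, False]  # full record

def success_rate(cases):
    """Fraction of cases in which the policy succeeded."""
    return sum(cases) / len(cases)

cherry_picked = [o for o in outcomes if o]  # keep only the successes
print(success_rate(outcomes))        # 0.375: the honest base rate
print(success_rate(cherry_picked))   # 1.0: the cherry-picked estimate
```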

8.2. Biases

AI can produce both intentional (explicit) discrimination and unintentional discrimination, with unintentional discrimination being more common. Given that AI learns from large amounts of data, if the data are biased, the AI will be biased as well. Considering that most scientific studies, historically and currently, have been conducted by people from the Global North who are male, heterosexual, and economically privileged, these conditions may influence research outcomes or how results are presented.
According to the research by Huang [29], which analyzed the publication careers of 7,863,861 scientists, male scientists published an average of 13.2 articles during their careers, while female scientists published only 9.6, representing a 27% gender gap in total productivity. Additionally, according to Fry [30], a Pew Research Center study found that in the United States, Black and Hispanic workers remain underrepresented in the science, technology, engineering, and mathematics (STEM) workforce. Black workers make up 9% of the workforce in these areas, Hispanics represent 8%, and Asians 13%.
Racial Biases: One concern regarding AI biases is racial bias. Considering the large amount of racist or stereotypical content that could be found in various databases, AI-driven decisions could lead to discrimination, marginalization, and even mistreatment of people from diverse backgrounds. According to Metz [31], instances of racism perpetrated by AI systems include the discovery that in a Google online photo service, photos of Black people were categorized in a folder labeled as gorillas. Another case involved a Black researcher who found that a facial recognition system could not identify her face until she wore a white mask.
Studies have shown that facial recognition technologies and digital assistants struggle to identify images and speech patterns of non-White people. For example, a journalist asked the DALL-E 2 image generator to imagine buildings in her city, Dakar, and the algorithm produced landscapes of an arid desert and dilapidated buildings that bore no resemblance to the homes in Senegal’s capital [32]. According to Zuiderveen [33], a system used to predict criminal recidivism in some parts of the U.S., despite not including racial origin or skin color among its parameters, was found in a 2016 study by Angwin et al. to classify Black people as high-risk at twice the rate of White people, even though they are not more likely to reoffend. Conversely, White people were more likely to be classified as low risk despite a higher likelihood of committing other crimes.
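The mechanism behind such results, bias entering through correlated proxy features rather than through an explicit race variable, can be sketched with invented numbers. In the hypothetical Python fragment below, the model only sees recorded priors, yet heavier historical policing of one group's neighborhoods produces disparate "high-risk" rates.

```python
# Proxy bias in miniature: the model never sees group membership, but a
# correlated feature (recorded priors, shaped by policing intensity) carries
# it in anyway. All numbers are invented for illustration.
def risk_score(person):
    # Threshold rule learned from historically biased arrest data: heavier
    # policing in some neighborhoods produces more recorded priors there.
    return 1 if person["recorded_priors"] >= 2 else 0

group_a = [{"recorded_priors": n} for n in (2, 3, 2, 1)]  # over-policed area
group_b = [{"recorded_priors": n} for n in (0, 1, 2, 0)]  # lightly policed area

rate_a = sum(risk_score(p) for p in group_a) / len(group_a)
rate_b = sum(risk_score(p) for p in group_b) / len(group_b)
print(rate_a, rate_b)  # disparate "high-risk" rates with no race input
```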
Class Biases: Just as AI can have racial biases if its databases reflect such tendencies, it can also have class biases. In cases where AI requires user feedback, individuals with limited access to this technology may be underrepresented, potentially leading to neglect of problems in certain areas of the city. Zuiderveen [33] used the example of the Street Bump app, which is used by the Boston city council to receive reports on street conditions. The problem in this case is that reports require the use of the app on a smartphone while driving, so areas where people have limited access to smartphones or mobile internet may be underrepresented and could receive fewer funds and public works from the local administration.
Gender Biases: AI can exhibit gender biases in areas such as human resources. According to Dastin [34], in 2015, Amazon stopped using an AI system to rate candidates for software developer positions and other technical roles because it was not evaluating them in a gender-neutral manner. This was because the models were trained to examine applicants by looking for patterns in resumes submitted to the company over a 10-year period, most of which came from men. As a result, the system taught itself that male candidates were preferable. The system penalized resumes that included words related to women, such as captain of the women’s chess club, and downgraded graduates of all-women’s colleges. With this precedent, AI used for political decision-making could prioritize men for public positions.
There could also be biases in the healthcare sector due to underdiagnosis of pathologies based on gender differences. According to Fernández [35], one example is asthma attacks, which are often mistaken for anxiety attacks in women. Another example is chronic obstructive pulmonary disease (COPD), whose symptoms sometimes differ between men and women. Considering that AI is based on previous diagnoses and typical symptoms, AI health assistants could overlook women’s unique cases when making medical or public health recommendations.
In the case of LGBTI+ people, according to Holden [36], biased AI could lead to concerning outcomes, such as neglecting or excluding LGBTQ businesses and their target audience from advertising, hiding them by limiting the reach of posts or accounts of LGBTI+ individuals and their businesses, improper categorization of LGBTI+ individuals resulting in targeted posts not reaching them, biases in recruitment, and unfair profiling in legal proceedings, which could lead to wrongful arrests or unjust sentences.

8.3. Ethical–Political Problems

State Surveillance: While AI can be used to prevent disease outbreaks and combat crime, the same information and methods used for this could also be employed to constantly monitor citizens. This would allow governments, both authoritarian and non-authoritarian, to influence citizens’ lives every time they use electronic devices. Algorithms that collect information about their browsing habits can also show them information based on their inferred characteristics, providing AI with an idea of who each internet user is.
Suppression of Freedoms: In line with the above, constant surveillance could lead to the suppression of personal freedoms. Depending on the situation and the interests of the current government, certain content deemed dangerous could be automatically censored. Social media platforms, like Facebook and Twitter, have implemented tools to combat fake news, which have received both praise and criticism. One of the criticisms is that these tools could also be used to censor certain political opinions.
Violations of Privacy: Hand in hand with state surveillance are privacy violations. If algorithms can profile users of electronic devices, they can gather information about opinions, preferences, routines, relationships, and other characteristics, and even predict a person’s future actions based on patterns. Algorithms might also discover things about users that they have not yet realized themselves; for example, Hill [37] described a case where Target’s algorithm allegedly discovered that a teenager was pregnant based on her shopping patterns.
Identification of Opponents: Using those tools, governments could identify opponents through algorithms that detect search patterns and could map their location. Additionally, facial recognition systems could report their location in real time and block their access to certain places or services.
These problems challenge the suggestion made by McEvoy [4]: at present, AI does not seem to be free of biases; rather, it seems more likely to reinforce them. They also suggest that, since AI is not ethical by itself, it could be dangerous in the hands of unethical political decision-makers. Hence, the approach toward AI should remain critical, and efforts should be made to democratize it and to include a plurality of human perspectives, including those of social movements.

9. Experiment

9.1. ChatGPT

In the ChatGPT chatbot application, version 3.5, a request was made for an example of R code that could help the State make political decisions. The experiment consisted solely of asking ChatGPT to create a computational model to help the State make political decisions. No other indications, definitions, or variables to be considered were given to the chatbot, since the objective was to find out how this LLM chatbot thinks political decisions should be made. Therefore, this experiment does not aim to provide a model for later use, but only to analyze the theoretical tendency with which the chatbot may be aligned, whether realist politics, bureaucratic theory, or conflict theory. The answer may not correspond to a single theory and could contain elements of more than one; in that case, the theory with which it has the most in common will be addressed. The exact response that ChatGPT provided was the following:
“Creating a computational model to help the State make political decisions is a significant challenge due to the complexity and variety of factors involved in governmental decision-making. However, I can provide you with a simple example using a decision model based on weighted criteria. In this case, we will assume that we are evaluating different public policies based on certain criteria” [38]. The purpose of this simple experiment was to analyze the way AI “thinks” and to find out whether its proposals are more calculation-based or ethics-based.
The code proposed by ChatGPT was the following:
# Define public policies and evaluation criteria
policy_1 <- list(cost = 1000000, social_impact = 0.8, viability = 0.9)
policy_2 <- list(cost = 1200000, social_impact = 0.7, viability = 0.8)
policy_3 <- list(cost = 800000, social_impact = 0.9, viability = 0.7)
# Function to calculate the utility of each policy (total score)
calculate_utility <- function(policy) {
   # You can assign weights to each criterion according to perceived importance
   weight_cost <- 0.4
   weight_impact <- 0.4
   weight_viability <- 0.2
   utility <- -policy$cost * weight_cost + policy$social_impact * weight_impact + policy$viability * weight_viability
   return(utility)
}
# Calculate the utility for each policy
policy_utility_1 <- calculate_utility(policy_1)
policy_utility_2 <- calculate_utility(policy_2)
policy_utility_3 <- calculate_utility(policy_3)
# Make a decision based on utility
if (policy_utility_1 > policy_utility_2 & policy_utility_1 > policy_utility_3) {
   decision <- "Implement Policy 1"
} else if (policy_utility_2 > policy_utility_1 & policy_utility_2 > policy_utility_3) {
   decision <- "Implement Policy 2"
} else {
   decision <- "Implement Policy 3"
}
# Print the decision
cat("Decision:", decision, "\n")
According to ChatGPT:
This example is simplified and uses a criterion-weighting approach to evaluate three fictitious public policies. Each policy is evaluated based on cost, social impact, and feasibility, with different weights assigned to each criterion. The policy with the highest utility, according to the defined utility function, is chosen for implementation. Keep in mind that, in practice, political decision-making is much more complex and generally involves considering more factors, real data, expert consultations, and possibly the use of more advanced models. This example serves as a basic starting point and may require adjustments depending on the specific context and the requirements of the problem you are addressing [38].
Afterward, ChatGPT was asked to include inclusion and equality criteria in the previously presented model. ChatGPT produced the following output:
# Define public policies and evaluation criteria
policy_1 <- list(cost = 1000000, social_impact = 0.8, viability = 0.9, inclusion = 0.7, equality = 0.8)
policy_2 <- list(cost = 1200000, social_impact = 0.7, viability = 0.8, inclusion = 0.8, equality = 0.7)
policy_3 <- list(cost = 800000, social_impact = 0.9, viability = 0.7, inclusion = 0.6, equality = 0.9)
# Function to calculate the utility of each policy (total score)
calculate_utility <- function(policy) {
   # You can assign weights to each criterion according to perceived importance
   weight_cost <- 0.3
   weight_impact <- 0.3
   weight_viability <- 0.1
   weight_inclusion <- 0.2
   weight_equality <- 0.1
   utility <- -policy$cost * weight_cost + policy$social_impact * weight_impact + policy$viability * weight_viability + policy$inclusion * weight_inclusion + policy$equality * weight_equality
   return(utility)
}
# Calculate the utility for each policy
policy_utility_1 <- calculate_utility(policy_1)
policy_utility_2 <- calculate_utility(policy_2)
policy_utility_3 <- calculate_utility(policy_3)
# Make a decision based on utility
if (policy_utility_1 > policy_utility_2 & policy_utility_1 > policy_utility_3) {
   decision <- "Implement Policy 1"
} else if (policy_utility_2 > policy_utility_1 & policy_utility_2 > policy_utility_3) {
   decision <- "Implement Policy 2"
} else {
   decision <- "Implement Policy 3"
}
ChatGPT stated the following:
In this example, two new criteria, Inclusion and Equality, have been added with assigned weights in the utility function. These criteria reflect the perceived importance of inclusion and equality in the evaluation of public policies. You can adjust the weights according to your considerations about the importance of each criterion in political decision-making [38].

9.2. Claude

In the Claude chatbot application, version Claude Sonnet 4, a request was made for an example of R code that could help the State make political decisions. The experiment consisted solely of asking Claude to create a computational model to help the State make political decisions. As in the ChatGPT experiment, no other indications, definitions, or variables to be considered were given to the chatbot, since the objective was to find out how this LLM chatbot thinks political decisions should be made. The exact response that Claude provided was the following:
I’ve created a comprehensive Political Decision Support System in R that provides a robust framework for state-level political decision-making. Here’s what the model includes:
Key Features:
1. Multi-Criteria Decision Analysis (MCDA)
Weight different decision criteria based on importance
Score alternatives against each criterion
Generates ranked recommendations
2. Cost-Benefit Analysis
Calculate Net Present Value (NPV) and Benefit-Cost Ratios
Account for time value of money with discount rates
Provide clear recommendations based on economic viability
3. Stakeholder Analysis
Map stakeholders by influence and support levels
Categorize stakeholders (Champions, Opponents, Key Players, etc.)
Prioritize engagement strategies
4. Political Feasibility Assessment
Evaluate political support, public support, resource availability
Account for implementation complexity
Generate feasibility scores and levels
5. Risk Assessment
Identify and score risks by probability and impact
Prioritize risk mitigation efforts
Support contingency planning
6. Visualization Tools
MCDA results charts
Stakeholder power-interest matrices
Risk assessment displays
Example Implementation:
The model demonstrates its capabilities using a healthcare policy decision scenario, comparing four alternatives:
Universal Healthcare
Public Option
Medicaid Expansion
Status Quo
How to Use:
Initialize a new decision model
Define your criteria with appropriate weights
Add policy alternatives and stakeholders
Create scoring matrices for your specific decision
Run various analyses (MCDA, CBA, stakeholder analysis, etc.)
Generate comprehensive reports and visualizations
This system provides evidence-based decision support while accounting for the political realities of implementation, stakeholder dynamics, and risk management that are crucial for successful policy-making at the state level. [39]
The code proposed by Claude was the following:
# Political Decision Support System
# A comprehensive R model for state-level political decision making

library(dplyr)
library(ggplot2)
library(plotly)
library(shiny)
library(DT)
library(corrplot)
library(randomForest)
library(caret)

# ==============================================================================
# CORE DECISION FRAMEWORK CLASS
# ==============================================================================

PoliticalDecisionModel <- R6::R6Class("PoliticalDecisionModel",
   public = list(
      # Initialize the model
      initialize = function() {
         private$criteria <- data.frame()
         private$alternatives <- data.frame()
         private$stakeholders <- data.frame()
         private$impact_matrix <- data.frame()
         message("Political Decision Support Model initialized")
      },
      
      # Add decision criteria with weights
      add_criteria = function(name, weight, type = "benefit") {
         new_criterion <- data.frame(
            name = name,
            weight = weight,
            type = type,  # "benefit" or "cost"
            stringsAsFactors = FALSE
         )
         private$criteria <- rbind(private$criteria, new_criterion)
      },
      
      # Add policy alternatives
      add_alternative = function(name, description, cost, feasibility) {
         new_alt <- data.frame(
            name = name,
            description = description,
            cost = cost,
            feasibility = feasibility,
            stringsAsFactors = FALSE
         )
         private$alternatives <- rbind(private$alternatives, new_alt)
      },
      
      # Add stakeholder groups
      add_stakeholder = function(name, influence, support_level) {
         new_stakeholder <- data.frame(
            name = name,
            influence = influence,  # 1-10 scale
            support_level = support_level,  # 1-10 scale
            stringsAsFactors = FALSE
         )
         private$stakeholders <- rbind(private$stakeholders, new_stakeholder)
      },
      
      # Multi-Criteria Decision Analysis (MCDA)
      perform_mcda = function(scores_matrix) {
         if(nrow(private$criteria) == 0) {
            stop("No criteria defined. Use add_criteria() first.")
         }
         
         # Normalize weights
         weights <- private$criteria$weight/sum(private$criteria$weight)
         
         # Calculate weighted scores
         weighted_scores <- sweep(scores_matrix, 2, weights, "*")
         total_scores <- rowSums(weighted_scores)
         
         # Create results
         results <- data.frame(
            Alternative = rownames(scores_matrix),
            Score = total_scores,
            Rank = rank(-total_scores)
         )
         
         return(results[order(-results$Score), ])
      },
      
      # Cost-Benefit Analysis
      cost_benefit_analysis = function(benefits, costs, discount_rate = 0.03, years = 10) {
         npv <- numeric(length(benefits))
         bcr <- numeric(length(benefits))
         
         for(i in 1:length(benefits)) {
            # Calculate Net Present Value
            benefit_pv <- sum(benefits[i]/(1 + discount_rate)^(1:years))
            cost_pv <- sum(costs[i]/(1 + discount_rate)^(1:years))
            npv[i] <- benefit_pv - cost_pv
            bcr[i] <- benefit_pv/cost_pv
         }
         
         return(data.frame(
            Alternative = names(benefits),
            NPV = npv,
            BCR = bcr,
            Recommendation = ifelse(bcr > 1 & npv > 0, "Recommended", "Not Recommended")
         ))
      },
      
      # Stakeholder Impact Analysis
      stakeholder_analysis = function() {
         if(nrow(private$stakeholders) == 0) {
            stop("No stakeholders defined. Use add_stakeholder() first.")
         }
         
         # Calculate stakeholder power-interest matrix
         stakeholders_analysis <- private$stakeholders %>%
            mutate(
                category = case_when(
                   influence >= 7 & support_level >= 7 ~ "Champions",
                   influence >= 7 & support_level < 4 ~ "Opponents",
                   influence >= 7 ~ "Key Players",
                   support_level >= 7 ~ "Supporters",
                   TRUE ~ "Monitor"
              ),
              engagement_priority = influence * support_level
            )
         
         return(stakeholders_analysis[order(-stakeholders_analysis$engagement_priority), ])
      },
      
      # Political Feasibility Assessment
      assess_feasibility = function(alternative_name,
                                    political_support = 5,
                                    public_support = 5,
                                    resource_availability = 5,
                                    implementation_complexity = 5) {
         
         # Feasibility scoring (1-10 scale, 10 = most feasible)
         feasibility_score <- (
            political_support * 0.3 +
            public_support * 0.25 +
            resource_availability * 0.25 +
            (11 - implementation_complexity) * 0.2  # Reverse complexity
         )
         
         feasibility_level <- case_when(
            feasibility_score >= 8 ~ "High Feasibility",
            feasibility_score >= 6 ~ "Moderate Feasibility",
            feasibility_score >= 4 ~ "Low Feasibility",
            TRUE ~ "Very Low Feasibility"
         )
         
         return(list(
            alternative = alternative_name,
            score = feasibility_score,
            level = feasibility_level,
            components = list(
                political_support = political_support,
                public_support = public_support,
                resource_availability = resource_availability,
                implementation_complexity = implementation_complexity
            )
         ))
      },
      
      # Risk Assessment
      assess_risks = function(risks_data) {
         risk_analysis <- risks_data %>%
            mutate(
                risk_score = probability * impact,
                risk_level = case_when(
                    risk_score >= 20 ~ "Critical",
                    risk_score >= 12 ~ "High",
                    risk_score >= 6 ~ "Medium",
                    TRUE ~ "Low"
                )
            ) %>%
            arrange(desc(risk_score))
         
         return(risk_analysis)
      },
      
      # Generate comprehensive report
      generate_report = function(decision_name, alternatives_scores = NULL) {
         cat("=== POLITICAL DECISION ANALYSIS REPORT ===\n")
         cat("Decision:", decision_name, "\n")
         cat("Analysis Date:", Sys.Date(), "\n\n")
         
         # Criteria Summary
         if(nrow(private$criteria) > 0) {
            cat("DECISION CRITERIA:\n")
            print(private$criteria)
            cat("\n")
         }
         
         # Alternatives Summary
         if(nrow(private$alternatives) > 0) {
            cat("POLICY ALTERNATIVES:\n")
            print(private$alternatives)
            cat("\n")
         }
         
         # MCDA Results
         if(!is.null(alternatives_scores)) {
            mcda_results <- self$perform_mcda(alternatives_scores)
            cat("MULTI-CRITERIA DECISION ANALYSIS RESULTS:\n")
            print(mcda_results)
            cat("\n")
         }
         
         # Stakeholder Analysis
         if(nrow(private$stakeholders) > 0) {
            stakeholder_results <- self$stakeholder_analysis()
            cat("STAKEHOLDER ANALYSIS:\n")
            print(stakeholder_results)
            cat("\n")
         }
         
         cat("=== END REPORT ===\n")
      }
   ),
   
   private = list(
      criteria = NULL,
      alternatives = NULL,
      stakeholders = NULL,
      impact_matrix = NULL
   )
)

# ==============================================================================
# EXAMPLE IMPLEMENTATION: HEALTHCARE POLICY DECISION
# ==============================================================================

# Initialize model
healthcare_model <- PoliticalDecisionModel$new()

# Define decision criteria
healthcare_model$add_criteria("Public Health Impact", 0.30, "benefit")
healthcare_model$add_criteria("Cost Effectiveness", 0.25, "benefit")
healthcare_model$add_criteria("Political Feasibility", 0.20, "benefit")
healthcare_model$add_criteria("Implementation Speed", 0.15, "benefit")
healthcare_model$add_criteria("Equity Impact", 0.10, "benefit")

# Define policy alternatives
healthcare_model$add_alternative("Universal Healthcare",
                                 "Single-payer system covering all residents",
                                 cost = 50000000, feasibility = 4)
healthcare_model$add_alternative("Public Option",
                                 "Government insurance option alongside private",
                                 cost = 20000000, feasibility = 6)
healthcare_model$add_alternative("Medicaid Expansion",
                                 "Expand existing Medicaid program",
                                 cost = 10000000, feasibility = 8)
healthcare_model$add_alternative("Status Quo",
                                 "Maintain current system with minor reforms",
                                 cost = 2000000, feasibility = 9)

# Define stakeholders
healthcare_model$add_stakeholder("Healthcare Workers Union", 8, 7)
healthcare_model$add_stakeholder("Insurance Companies", 9, 3)
healthcare_model$add_stakeholder("Patient Advocacy Groups", 6, 9)
healthcare_model$add_stakeholder("State Legislators", 10, 5)
healthcare_model$add_stakeholder("General Public", 7, 6)
healthcare_model$add_stakeholder("Healthcare Providers", 8, 5)

# Sample scoring matrix (alternatives x criteria)
scores_matrix <- matrix(c(
   # Universal, Public Option, Medicaid Exp, Status Quo
   9, 7, 6, 3,    # Public Health Impact
   6, 7, 8, 9,    # Cost Effectiveness
   3, 6, 8, 9,    # Political Feasibility
   2, 5, 8, 10,   # Implementation Speed
   9, 7, 5, 2     # Equity Impact
), nrow = 4, ncol = 5, byrow = FALSE)

rownames(scores_matrix) <- c("Universal Healthcare", "Public Option",
                             "Medicaid Expansion", "Status Quo")
colnames(scores_matrix) <- c("Public Health", "Cost Effectiveness",
                             "Political Feasibility", "Implementation Speed", "Equity")

# ==============================================================================
# ANALYSIS FUNCTIONS
# ==============================================================================

# Sensitivity Analysis
sensitivity_analysis <- function(base_weights, scores_matrix, weight_variations = 0.2) {
   results <- list()
   n_criteria <- length(base_weights)
   
   for(i in 1:n_criteria) {
      # Increase weight
      weights_high <- base_weights
      weights_high[i] <- weights_high[i] * (1 + weight_variations)
      weights_high <- weights_high/sum(weights_high)
      
      # Decrease weight
      weights_low <- base_weights
      weights_low[i] <- weights_low[i] * (1 - weight_variations)
      weights_low <- weights_low/sum(weights_low)
      
      # Calculate scores
      scores_high <- rowSums(sweep(scores_matrix, 2, weights_high, "*"))
      scores_low <- rowSums(sweep(scores_matrix, 2, weights_low, "*"))
      
      results[[paste0("Criterion_", i, "_High")]] <- scores_high
      results[[paste0("Criterion_", i, "_Low")]] <- scores_low
   }
   
   return(results)
}

# Scenario Planning
scenario_planning <- function(base_scenario, scenarios_list) {
   scenario_results <- list()
   
   for(scenario_name in names(scenarios_list)) {
      scenario_data <- scenarios_list[[scenario_name]]
      
      # Apply scenario modifications to base data
      modified_scores <- base_scenario
      for(modification in scenario_data$modifications) {
         row_idx <- which(rownames(modified_scores) == modification$alternative)
         col_idx <- which(colnames(modified_scores) == modification$criterion)
         modified_scores[row_idx, col_idx] <- modification$new_score
      }
      
      scenario_results[[scenario_name]] <- modified_scores
   }
   
   return(scenario_results)
}

# ==============================================================================
# VISUALIZATION FUNCTIONS
# ==============================================================================

# Plot MCDA results
plot_mcda_results <- function(mcda_results) {
   p <- ggplot(mcda_results, aes(x = reorder(Alternative, Score), y = Score)) +
      geom_col(fill = "steelblue", alpha = 0.7) +
      geom_text(aes(label = round(Score, 2)), hjust = -0.1) +
      coord_flip() +
      labs(title = "Multi-Criteria Decision Analysis Results",
           subtitle = "Higher scores indicate better alternatives",
           x = "Policy Alternative",
           y = "Weighted Score") +
      theme_minimal() +
      theme(plot.title = element_text(size = 14, face = "bold"))
   
   return(p)
}

# Plot stakeholder analysis
plot_stakeholder_analysis <- function(stakeholder_data) {
   p <- ggplot(stakeholder_data, aes(x = influence, y = support_level)) +
      geom_point(aes(size = engagement_priority, color = category), alpha = 0.7) +
      geom_text(aes(label = name), vjust = -0.5, size = 3) +
      scale_x_continuous(limits = c(0, 11), breaks = seq(0, 10, 2)) +
      scale_y_continuous(limits = c(0, 11), breaks = seq(0, 10, 2)) +
      labs(title = "Stakeholder Power-Interest Matrix",
           x = "Influence Level",
           y = "Support Level",
           size = "Engagement Priority",
           color = "Category") +
      theme_minimal() +
      theme(legend.position = "bottom")
   
   return(p)
}

# ==============================================================================
# RUN EXAMPLE ANALYSIS
# ==============================================================================

cat("=== POLITICAL DECISION SUPPORT SYSTEM DEMO ===\n\n")

# Perform MCDA
mcda_results <- healthcare_model$perform_mcda(scores_matrix)
print("Multi-Criteria Decision Analysis Results:")
print(mcda_results)
cat(“\n”)

# Cost-Benefit Analysis
benefits <- c(15000000, 12000000, 8000000, 1000000)  # Estimated annual benefits
costs <- c(50000000, 20000000, 10000000, 2000000)     # Estimated annual costs
names(benefits) <- names(costs) <- rownames(scores_matrix)

cba_results <- healthcare_model$cost_benefit_analysis(benefits, costs)
print("Cost-Benefit Analysis Results:")
print(cba_results)
cat(“\n”)

# Stakeholder Analysis
stakeholder_results <- healthcare_model$stakeholder_analysis()
print("Stakeholder Analysis Results:")
print(stakeholder_results)
cat(“\n”)

# Feasibility Assessment
feasibility_universal <- healthcare_model$assess_feasibility(
   "Universal Healthcare",
   political_support = 3,
   public_support = 7,
   resource_availability = 4,
   implementation_complexity = 9
)

print("Feasibility Assessment - Universal Healthcare:")
print(feasibility_universal)
cat(“\n”)

# Risk Assessment Example
risks_data <- data.frame(
   risk = c("Budget Overrun", "Implementation Delays", "Political Opposition",
            "Legal Challenges", "Provider Shortage"),
   probability = c(7, 6, 8, 5, 4),  # 1-10 scale
   impact = c(8, 6, 9, 7, 8),       # 1-10 scale
   mitigation = c("Strict budget controls", "Phased implementation",
                  "Stakeholder engagement", "Legal review", "Training programs")
)

risk_results <- healthcare_model$assess_risks(risks_data)
print("Risk Assessment Results:")
print(risk_results)
cat(“\n”)

# Generate comprehensive report
healthcare_model$generate_report("State Healthcare Policy Reform", scores_matrix)

# Create visualizations
mcda_plot <- plot_mcda_results(mcda_results)
stakeholder_plot <- plot_stakeholder_analysis(stakeholder_results)

print(mcda_plot)
print(stakeholder_plot)

cat("\n=== ANALYSIS COMPLETE ===\n")
cat("This model provides a comprehensive framework for political decision-making\n")
cat("incorporating multiple analytical methods and stakeholder considerations.\n")

Afterwards, Claude was asked to include inclusion and equality criteria in the previously presented model. Claude produced the following output:

# Political Decision Support System
# A comprehensive R model for state-level political decision making

library(dplyr)
library(ggplot2)
library(plotly)
library(shiny)
library(DT)
library(corrplot)
library(randomForest)
library(caret)

# ==============================================================================
# CORE DECISION FRAMEWORK CLASS
# ==============================================================================

PoliticalDecisionModel <- R6::R6Class("PoliticalDecisionModel",
   public = list(
      # Initialize the model
      initialize = function() {
         private$criteria <- data.frame()
         private$alternatives <- data.frame()
         private$stakeholders <- data.frame()
         private$impact_matrix <- data.frame()
         message("Political Decision Support Model initialized")
      },
      
      # Add decision criteria with weights
      add_criteria = function(name, weight, type = "benefit") {
         new_criterion <- data.frame(
            name = name,
            weight = weight,
            type = type,  # "benefit" or "cost"
            stringsAsFactors = FALSE
         )
         private$criteria <- rbind(private$criteria, new_criterion)
      },
      
      # Add policy alternatives
      add_alternative = function(name, description, cost, feasibility) {
         new_alt <- data.frame(
            name = name,
            description = description,
            cost = cost,
            feasibility = feasibility,
            stringsAsFactors = FALSE
         )
         private$alternatives <- rbind(private$alternatives, new_alt)
      },
      
      # Add stakeholder groups
      add_stakeholder = function(name, influence, support_level) {
         new_stakeholder <- data.frame(
            name = name,
            influence = influence,  # 1-10 scale
            support_level = support_level,  # 1-10 scale
            stringsAsFactors = FALSE
         )
         private$stakeholders <- rbind(private$stakeholders, new_stakeholder)
      },
      
      # Multi-Criteria Decision Analysis (MCDA)
      perform_mcda = function(scores_matrix) {
         if(nrow(private$criteria) == 0) {
            stop("No criteria defined. Use add_criteria() first.")
         }
         
         # Normalize weights
         weights <- private$criteria$weight/sum(private$criteria$weight)
         
         # Calculate weighted scores
         weighted_scores <- sweep(scores_matrix, 2, weights, "*")
         total_scores <- rowSums(weighted_scores)
         
         # Create results
         results <- data.frame(
            Alternative = rownames(scores_matrix),
            Score = total_scores,
            Rank = rank(-total_scores)
         )
         
         return(results[order(-results$Score), ])
      },
      
      # Cost-Benefit Analysis
      cost_benefit_analysis = function(benefits, costs, discount_rate = 0.03, years = 10) {
         npv <- numeric(length(benefits))
         bcr <- numeric(length(benefits))
         
         for(i in 1:length(benefits)) {
            # Calculate Net Present Value
            benefit_pv <- sum(benefits[i]/(1 + discount_rate)^(1:years))
            cost_pv <- sum(costs[i]/(1 + discount_rate)^(1:years))
            npv[i] <- benefit_pv - cost_pv
            bcr[i] <- benefit_pv/cost_pv
         }
         
         return(data.frame(
            Alternative = names(benefits),
            NPV = npv,
            BCR = bcr,
            Recommendation = ifelse(bcr > 1 & npv > 0, "Recommended", "Not Recommended")
         ))
      },
      
      # Stakeholder Impact Analysis
      stakeholder_analysis = function() {
         if(nrow(private$stakeholders) == 0) {
            stop("No stakeholders defined. Use add_stakeholder() first.")
         }
         
         # Calculate stakeholder power-interest matrix
         stakeholders_analysis <- private$stakeholders %>%
            mutate(
                category = case_when(
                   influence >= 7 & support_level >= 7 ~ "Champions",
                   influence >= 7 & support_level < 4 ~ "Opponents",
                   influence >= 7 ~ "Key Players",
                   support_level >= 7 ~ "Supporters",
                   TRUE ~ "Monitor"
              ),
              engagement_priority = influence * support_level
            )
         
         return(stakeholders_analysis[order(-stakeholders_analysis$engagement_priority), ])
      },
      
      # Political Feasibility Assessment
      assess_feasibility = function(alternative_name,
                                                        political_support = 5,
                                                        public_support = 5,
                                                        resource_availability = 5,
                                                        implementation_complexity = 5) {
         
         # Feasibility scoring (1–10 scale, 10 = most feasible)
         feasibility_score <- (
            political_support * 0.3 +
            public_support * 0.25 +
            resource_availability * 0.25 +
            (11 - implementation_complexity) * 0.2  # Reverse complexity
         )
         
         feasibility_level <- case_when(
            feasibility_score >= 8 ~ "High Feasibility",
            feasibility_score >= 6 ~ "Moderate Feasibility",
            feasibility_score >= 4 ~ "Low Feasibility",
            TRUE ~ "Very Low Feasibility"
         )
         
         return(list(
            alternative = alternative_name,
            score = feasibility_score,
            level = feasibility_level,
            components = list(
                political_support = political_support,
                public_support = public_support,
                resource_availability = resource_availability,
                implementation_complexity = implementation_complexity
            )
         ))
      },
      
      # Risk Assessment
      assess_risks = function(risks_data) {
         risk_analysis <- risks_data %>%
            mutate(
                risk_score = probability * impact,
                risk_level = case_when(
                    risk_score >= 20 ~ "Critical",
                    risk_score >= 12 ~ "High",
                    risk_score >= 6 ~ "Medium",
                    TRUE ~ "Low"
                )
            ) %>%
            arrange(desc(risk_score))
         
         return(risk_analysis)
      },
      
      # Generate comprehensive report
      generate_report = function(decision_name, alternatives_scores = NULL) {
         cat("=== POLITICAL DECISION ANALYSIS REPORT ===\n")
         cat("Decision:", decision_name, "\n")
         cat("Analysis Date:", format(Sys.Date()), "\n\n")
         
         # Criteria Summary
         if(nrow(private$criteria) > 0) {
            cat("DECISION CRITERIA:\n")
            print(private$criteria)
            cat(“\n”)
         }
         
         # Alternatives Summary
         if(nrow(private$alternatives) > 0) {
            cat("POLICY ALTERNATIVES:\n")
            print(private$alternatives)
            cat(“\n”)
         }
         
         # MCDA Results
         if(!is.null(alternatives_scores)) {
            mcda_results <- self$perform_mcda(alternatives_scores)
            cat("MULTI-CRITERIA DECISION ANALYSIS RESULTS:\n")
            print(mcda_results)
            cat(“\n”)
         }
         
         # Stakeholder Analysis
         if(nrow(private$stakeholders) > 0) {
            stakeholder_results <- self$stakeholder_analysis()
            cat("STAKEHOLDER ANALYSIS:\n")
            print(stakeholder_results)
            cat(“\n”)
         }
         
         cat("=== END REPORT ===\n")
      }
   ),
   
   private = list(
      criteria = NULL,
      alternatives = NULL,
      stakeholders = NULL,
      impact_matrix = NULL
   )
)

# ==============================================================================
# INCLUSION & EQUALITY FRAMEWORK
# ==============================================================================

# Enhanced criteria assessment for inclusion and equality
assess_inclusion_equality <- function(alternative_name, demographics_data) {
   
   # Calculate inclusion metrics
   inclusion_metrics <- list(
      # Access Equality (geographic, economic, social barriers)
      geographic_access = mean(demographics_data$geographic_access_score),
      economic_access = mean(demographics_data$economic_access_score),
      cultural_linguistic_access = mean(demographics_data$cultural_access_score),
      
      # Representation in decision-making
      decision_making_representation = demographics_data$representation_score,
      
      # Differential impact assessment
      impact_by_race = sd(demographics_data$impact_by_race),  # Lower SD = more equal
      impact_by_income = sd(demographics_data$impact_by_income),
      impact_by_gender = sd(demographics_data$impact_by_gender),
      impact_by_age = sd(demographics_data$impact_by_age),
      impact_by_disability = mean(demographics_data$disability_accommodation),
      
      # Community engagement level
      community_engagement = mean(demographics_data$engagement_score)
   )
   
   # Calculate overall inclusion score (1-10 scale)
   inclusion_score <- (
      inclusion_metrics$geographic_access * 0.15 +
      inclusion_metrics$economic_access * 0.20 +
      inclusion_metrics$cultural_linguistic_access * 0.15 +
      inclusion_metrics$decision_making_representation * 0.10 +
      (10 - inclusion_metrics$impact_by_race) * 0.15 +  # Reverse inequality measures
      (10 - inclusion_metrics$impact_by_income) * 0.15 +
      (10 - inclusion_metrics$impact_by_gender) * 0.05 +
      inclusion_metrics$impact_by_disability * 0.10 +
      inclusion_metrics$community_engagement * 0.05
   )
   
   return(list(
      alternative = alternative_name,
      inclusion_score = inclusion_score,
      metrics = inclusion_metrics,
      recommendations = generate_inclusion_recommendations(inclusion_metrics)
   ))
}

# Generate specific recommendations for improving inclusion
generate_inclusion_recommendations <- function(metrics) {
   recommendations <- c()
   
   if(metrics$geographic_access < 6) {
      recommendations <- c(recommendations, "Improve geographic accessibility through mobile services or satellite locations")
   }
   if(metrics$economic_access < 6) {
      recommendations <- c(recommendations, "Implement sliding fee scales or subsidies for low-income populations")
   }
   if(metrics$cultural_linguistic_access < 6) {
      recommendations <- c(recommendations, "Provide multilingual services and cultural competency training")
   }
   if(metrics$decision_making_representation < 6) {
      recommendations <- c(recommendations, "Increase diverse representation in policy development committees")
   }
   if(metrics$impact_by_race > 3) {
      recommendations <- c(recommendations, "Conduct racial equity impact assessment and targeted interventions")
   }
   if(metrics$community_engagement < 6) {
      recommendations <- c(recommendations, "Enhance community outreach and participatory decision-making processes")
   }
   
   return(recommendations)
}

# Intersectional impact analysis
intersectional_analysis <- function(policy_impacts, demographic_groups) {
   
   # Create intersectional categories
   intersectional_data <- expand.grid(
      race = c("White", "Black", "Hispanic", "Asian", "Indigenous", "Other"),
      income = c("Low", "Middle", "High"),
      gender = c("Male", "Female", "Non-binary"),
      disability = c("No disability", "With disability")
   )
   
   # Simulate impact scores for each intersectional group
   set.seed(123)  # For reproducibility
   intersectional_data$impact_score <- runif(nrow(intersectional_data), 1, 10)
   
   # Identify most and least benefited groups
   most_benefited <- intersectional_data[which.max(intersectional_data$impact_score), ]
   least_benefited <- intersectional_data[which.min(intersectional_data$impact_score), ]
   
   # Calculate disparity metrics
   disparity_ratio <- max(intersectional_data$impact_score)/min(intersectional_data$impact_score)
   
   return(list(
      disparity_ratio = disparity_ratio,
      most_benefited = most_benefited,
      least_benefited = least_benefited,
      full_data = intersectional_data,
      equity_concern = ifelse(disparity_ratio > 2, "High",
                              ifelse(disparity_ratio > 1.5, "Medium", "Low"))
   ))
}

# ==============================================================================
# EXAMPLE IMPLEMENTATION: HEALTHCARE POLICY DECISION WITH INCLUSION FOCUS
# ==============================================================================

# Initialize model
healthcare_model <- PoliticalDecisionModel$new()

# Define decision criteria with enhanced inclusion focus
healthcare_model$add_criteria("Public Health Impact", 0.25, "benefit")
healthcare_model$add_criteria("Cost Effectiveness", 0.20, "benefit")
healthcare_model$add_criteria("Political Feasibility", 0.15, "benefit")
healthcare_model$add_criteria("Implementation Speed", 0.10, "benefit")
healthcare_model$add_criteria("Racial Equity Impact", 0.12, "benefit")
healthcare_model$add_criteria("Economic Inclusion", 0.10, "benefit")
healthcare_model$add_criteria("Geographic Accessibility", 0.08, "benefit")

# Define policy alternatives
healthcare_model$add_alternative("Universal Healthcare",
                                 "Single-payer system covering all residents",
                                 cost = 50000000, feasibility = 4)
healthcare_model$add_alternative("Public Option",
                                 "Government insurance option alongside private",
                                 cost = 20000000, feasibility = 6)
healthcare_model$add_alternative("Medicaid Expansion",
                                 "Expand existing Medicaid program",
                                 cost = 10000000, feasibility = 8)
healthcare_model$add_alternative("Status Quo",
                                 "Maintain current system with minor reforms",
                                 cost = 2000000, feasibility = 9)

# Define stakeholders with enhanced diversity focus
healthcare_model$add_stakeholder("Healthcare Workers Union", 8, 7)
healthcare_model$add_stakeholder("Insurance Companies", 9, 3)
healthcare_model$add_stakeholder("Patient Advocacy Groups", 6, 9)
healthcare_model$add_stakeholder("State Legislators", 10, 5)
healthcare_model$add_stakeholder("General Public", 7, 6)
healthcare_model$add_stakeholder("Healthcare Providers", 8, 5)
healthcare_model$add_stakeholder("Racial Justice Organizations", 7, 8)
healthcare_model$add_stakeholder("Disability Rights Groups", 6, 7)
healthcare_model$add_stakeholder("LGBTQ+ Advocacy Groups", 5, 8)
healthcare_model$add_stakeholder("Rural Communities", 6, 6)
healthcare_model$add_stakeholder("Low-Income Advocacy Groups", 5, 9)
healthcare_model$add_stakeholder("Immigrant Rights Organizations", 4, 8)

# Sample scoring matrix with inclusion criteria (alternatives x criteria)
scores_matrix <- matrix(c(
   # Universal, Public Option, Medicaid Exp, Status Quo
   9, 7, 6, 3,   # Public Health Impact
   6, 7, 8, 9,   # Cost Effectiveness
   3, 6, 8, 9,   # Political Feasibility
   2, 5, 8, 10, # Implementation Speed
   9, 8, 7, 2,   # Racial Equity Impact
   9, 7, 6, 3,   # Economic Inclusion
   8, 6, 7, 4    # Geographic Accessibility
), nrow = 4, ncol = 7, byrow = FALSE)

rownames(scores_matrix) <- c("Universal Healthcare", "Public Option",
                             "Medicaid Expansion", "Status Quo")
colnames(scores_matrix) <- c("Public Health", "Cost Effectiveness",
                             "Political Feasibility", "Implementation Speed",
                             "Racial Equity", "Economic Inclusion", "Geographic Access")

# ==============================================================================
# ANALYSIS FUNCTIONS
# ==============================================================================

# Sensitivity Analysis
sensitivity_analysis <- function(base_weights, scores_matrix, weight_variations = 0.2) {
   results <- list()
   n_criteria <- length(base_weights)
   
   for(i in 1:n_criteria) {
      # Increase weight
      weights_high <- base_weights
      weights_high[i] <- weights_high[i] * (1 + weight_variations)
      weights_high <- weights_high/sum(weights_high)
      
      # Decrease weight
      weights_low <- base_weights
      weights_low[i] <- weights_low[i] * (1 - weight_variations)
      weights_low <- weights_low/sum(weights_low)
      
      # Calculate scores
      scores_high <- rowSums(sweep(scores_matrix, 2, weights_high, "*"))
      scores_low <- rowSums(sweep(scores_matrix, 2, weights_low, "*"))
      
      results[[paste0("Criterion_", i, "_High")]] <- scores_high
      results[[paste0("Criterion_", i, "_Low")]] <- scores_low
   }
   
   return(results)
}

# Scenario Planning
scenario_planning <- function(base_scenario, scenarios_list) {
   scenario_results <- list()
   
   for(scenario_name in names(scenarios_list)) {
      scenario_data <- scenarios_list[[scenario_name]]
      
      # Apply scenario modifications to base data
      modified_scores <- base_scenario
      for(modification in scenario_data$modifications) {
         row_idx <- which(rownames(modified_scores) == modification$alternative)
         col_idx <- which(colnames(modified_scores) == modification$criterion)
         modified_scores[row_idx, col_idx] <- modification$new_score
      }
      
      scenario_results[[scenario_name]] <- modified_scores
   }
   
   return(scenario_results)
}

# ==============================================================================
# VISUALIZATION FUNCTIONS
# ==============================================================================

# Plot MCDA results
plot_mcda_results <- function(mcda_results) {
   p <- ggplot(mcda_results, aes(x = reorder(Alternative, Score), y = Score)) +
      geom_col(fill = "steelblue", alpha = 0.7) +
      geom_text(aes(label = round(Score, 2)), hjust = -0.1) +
      coord_flip() +
      labs(title = "Multi-Criteria Decision Analysis Results",
           subtitle = "Higher scores indicate better alternatives",
           x = "Policy Alternative",
           y = "Weighted Score") +
      theme_minimal() +
      theme(plot.title = element_text(size = 14, face = "bold"))
   
   return(p)
}

# Plot stakeholder analysis
plot_stakeholder_analysis <- function(stakeholder_data) {
   p <- ggplot(stakeholder_data, aes(x = influence, y = support_level)) +
      geom_point(aes(size = engagement_priority, color = category), alpha = 0.7) +
      geom_text(aes(label = name), vjust = -0.5, size = 3) +
      scale_x_continuous(limits = c(0, 11), breaks = seq(0, 10, 2)) +
      scale_y_continuous(limits = c(0, 11), breaks = seq(0, 10, 2)) +
      labs(title = "Stakeholder Power-Interest Matrix",
           x = "Influence Level",
           y = "Support Level",
           size = "Engagement Priority",
           color = "Category") +
      theme_minimal() +
      theme(legend.position = "bottom")
   
   return(p)
}

# ==============================================================================
# RUN ENHANCED ANALYSIS WITH INCLUSION & EQUALITY FOCUS
# ==============================================================================

cat("=== ENHANCED POLITICAL DECISION SUPPORT SYSTEM WITH INCLUSION & EQUALITY ===\n\n")

# Perform MCDA with enhanced criteria
mcda_results <- healthcare_model$perform_mcda(scores_matrix)
print("Multi-Criteria Decision Analysis Results (with Inclusion Criteria):")
print(mcda_results)
cat(“\n”)

# Inclusion and Equality Assessment
# Sample demographic data for Universal Healthcare option
universal_demographics <- data.frame(
   geographic_access_score = c(8, 6, 9, 7, 5),   # Urban, suburban, rural, etc.
   economic_access_score = c(9, 8, 9, 6, 4),      # Different income levels
   cultural_access_score = c(7, 8, 6, 5, 7),    # Language, cultural barriers
   representation_score = 6,                                  # Decision-making representation
   impact_by_race = c(9, 6, 7, 8, 5),              # Impact scores by racial group
   impact_by_income = c(9, 7, 5, 3, 2),           # Impact scores by income level
   impact_by_gender = c(8, 8, 7),                   # Male, Female, Non-binary
   impact_by_age = c(7, 8, 9, 6),                   # Different age groups
   disability_accommodation = 7,                       # Disability accessibility
   engagement_score = c(6, 7, 5, 8, 6)          # Community engagement levels
)

inclusion_assessment <- assess_inclusion_equality("Universal Healthcare", universal_demographics)
print("Inclusion & Equality Assessment - Universal Healthcare:")
print(paste("Overall Inclusion Score:", round(inclusion_assessment$inclusion_score, 2)))
print("Specific Recommendations:")
for(rec in inclusion_assessment$recommendations) {
   print(paste("-", rec))
}
cat(“\n”)

# Intersectional Impact Analysis
intersectional_results <- intersectional_analysis()
print("Intersectional Impact Analysis:")
print(paste("Disparity Ratio:", round(intersectional_results$disparity_ratio, 2)))
print(paste("Equity Concern Level:", intersectional_results$equity_concern))
print("Most Benefited Group:")
print(intersectional_results$most_benefited)
print("Least Benefited Group:")
print(intersectional_results$least_benefited)
cat(“\n”)

# Enhanced Risk Assessment with Inclusion Risks
inclusion_risks_data <- data.frame(
   risk = c("Budget Overrun", "Implementation Delays", "Political Opposition",
            "Legal Challenges", "Provider Shortage", "Racial Disparities Worsening",
            "Rural Access Barriers", "Language Barriers", "Disability Access Issues",
            "LGBTQ+ Discrimination", "Immigration Status Barriers"),
   probability = c(7, 6, 8, 5, 4, 6, 7, 5, 4, 3, 6),  # 1-10 scale
   impact = c(8, 6, 9, 7, 8, 9, 8, 6, 7, 6, 7),       # 1-10 scale
   mitigation = c("Strict budget controls", "Phased implementation",
                  "Stakeholder engagement", "Legal review", "Training programs",
                  "Targeted outreach to communities of color", "Mobile health units",
                  "Multilingual staff and materials", "ADA compliance audit",
                  "Cultural competency training", "Universal access policies")
)

enhanced_risk_results <- healthcare_model$assess_risks(inclusion_risks_data)
print("Enhanced Risk Assessment (Including Inclusion Risks):")
print(enhanced_risk_results)
cat(“\n”)

# Cost-Benefit Analysis
benefits <- c(15000000, 12000000, 8000000, 1000000)  # Estimated annual benefits
costs <- c(50000000, 20000000, 10000000, 2000000)     # Estimated annual costs
names(benefits) <- names(costs) <- rownames(scores_matrix)

cba_results <- healthcare_model$cost_benefit_analysis(benefits, costs)
print("Cost-Benefit Analysis Results:")
print(cba_results)
cat(“\n”)

# Stakeholder Analysis
stakeholder_results <- healthcare_model$stakeholder_analysis()
print("Stakeholder Analysis Results:")
print(stakeholder_results)
cat(“\n”)

# Feasibility Assessment
feasibility_universal <- healthcare_model$assess_feasibility(
   "Universal Healthcare",
   political_support = 3,
   public_support = 7,
   resource_availability = 4,
   implementation_complexity = 9
)

print("Feasibility Assessment - Universal Healthcare:")
print(feasibility_universal)
cat(“\n”)

# Risk Assessment Example
risks_data <- data.frame(
   risk = c("Budget Overrun", "Implementation Delays", "Political Opposition",
            "Legal Challenges", "Provider Shortage"),
   probability = c(7, 6, 8, 5, 4),  # 1-10 scale
   impact = c(8, 6, 9, 7, 8),       # 1-10 scale
   mitigation = c("Strict budget controls", "Phased implementation",
                  "Stakeholder engagement", "Legal review", "Training programs")
)

risk_results <- healthcare_model$assess_risks(risks_data)
print("Risk Assessment Results:")
print(risk_results)
cat(“\n”)

# Generate comprehensive report
healthcare_model$generate_report("State Healthcare Policy Reform", scores_matrix)

# Create visualizations
mcda_plot <- plot_mcda_results(mcda_results)
stakeholder_plot <- plot_stakeholder_analysis(stakeholder_results)

print(mcda_plot)
print(stakeholder_plot)

cat("\n=== ANALYSIS COMPLETE ===\n")
cat("This model provides a comprehensive framework for political decision-making\n")
cat("incorporating multiple analytical methods and stakeholder considerations.\n")

Claude stated the following:
I’ve significantly enhanced the political decision model to comprehensively include inclusion and equality criteria. Here are the key additions:
New Inclusion & Equality Features:
1. Enhanced Decision Criteria
- Racial Equity Impact (12% weight)—Assesses differential impacts across racial groups
- Economic Inclusion (10% weight)—Evaluates accessibility for different income levels
- Geographic Accessibility (8% weight)—Considers urban/rural access disparities
2. Comprehensive Inclusion Assessment Function
- Access Equality: Geographic, economic, and cultural/linguistic barriers
- Representation: Diversity in decision-making processes
- Differential Impact: Measures inequality across race, income, gender, age, disability
- Community Engagement: Level of inclusive participation
3. Intersectional Analysis
- Analyzes policy impacts across multiple identity dimensions simultaneously
- Identifies most and least benefited intersectional groups
- Calculates disparity ratios to flag equity concerns
- Maps impacts across race × income × gender × disability status
4. Expanded Stakeholder Categories
Added key inclusion-focused stakeholders:
- Racial Justice Organizations
- Disability Rights Groups
- LGBTQ+ Advocacy Groups
- Rural Communities
- Low-Income Advocacy Groups
- Immigrant Rights Organizations
5. Enhanced Risk Assessment
New inclusion-specific risks:
- Racial disparities worsening
- Rural access barriers
- Language barriers
- Disability access issues
- LGBTQ+ discrimination
- Immigration status barriers
6. Advanced Visualizations
- Inclusion Metrics Comparison: Charts showing how each policy performs across different inclusion dimensions
- Intersectional Impact Heatmap: Visual representation of policy impacts across demographic intersections
7. Targeted Recommendations
The system now generates specific recommendations for improving inclusion, such as:
- Mobile services for geographic accessibility
- Sliding fee scales for economic inclusion
- Multilingual services and cultural competency training
- Diverse representation in policy committees
- Targeted interventions for racial equity
Key Benefits of the Enhanced Model:
- Equity-Centered Decision Making: Ensures policies are evaluated through an inclusion lens
- Intersectional Awareness: Recognizes that people have multiple, overlapping identities
- Proactive Bias Detection: Identifies potential disparities before implementation
- Community-Centered Approach: Emphasizes meaningful engagement with affected communities
- Accountability Mechanisms: Provides measurable inclusion metrics for ongoing assessment
This enhanced model ensures that state political decisions actively promote inclusion and equality rather than inadvertently perpetuating existing disparities. It transforms the decision-making process from one that might ignore equity issues to one that centers them as fundamental criteria for good governance. [39]
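The disparity-ratio logic described in point 3 above can be sketched in a few lines. This is an illustrative reconstruction, not code produced by Claude; the group labels, benefit scores, and the 2.0 flag threshold are all hypothetical.

```python
# Hypothetical benefit scores (0-1) for a single policy, keyed by
# intersectional group: (race, income level, disability status).
benefit = {
    ("white", "high", "none"): 0.82,
    ("white", "low", "none"): 0.61,
    ("black", "high", "none"): 0.74,
    ("black", "low", "disabled"): 0.35,
}

def disparity_ratio(scores):
    """Return the most and least benefited groups and the ratio between them."""
    most = max(scores, key=scores.get)
    least = min(scores, key=scores.get)
    return most, least, scores[most] / scores[least]

most, least, ratio = disparity_ratio(benefit)
# A ratio well above 1.0 means the policy's benefits are unevenly
# distributed across intersectional groups; 2.0 is an arbitrary cutoff.
if ratio > 2.0:
    print(f"Equity concern: {most} benefits {ratio:.2f}x more than {least}")
```

Here the ratio is 0.82/0.35 ≈ 2.34, so the sketch would flag this policy for human review before implementation.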

10. Analysis

Taking into consideration the uses of AI in politics presented above and the results of the experiments, the following are the possible benefits and disadvantages of the use of AI in politics from the perspective of realist politics, bureaucratic theory, and conflict theory (see Table 1).

10.1. ChatGPT Experiment

Based on the benefits and disadvantages of AI in politics according to the different theories and the ethical and logical problems presented, the analysis of the experiment is as follows. The proposed model considered factors such as cost, social impact, and feasibility. Cost and social impact carried the most weight, which could be interpreted as indicating that social impact matters as much as cost. However, a policy with lower cost and higher feasibility could still be chosen even if it has less social impact.
Regarding ethical issues, although the presented model includes the variable social impact, this does not necessarily imply that AI operates from a moral standpoint. The definition of social impact can vary depending on the ideology of the person operating the AI. Furthermore, while the basic model provided by ChatGPT values social impact, these parameters are easily adjustable, so its importance could be diminished if deemed less relevant.
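How easily such parameter adjustments can change the outcome can be shown with a toy version of a weighted multi-criteria model. The policies, scores, and weights below are invented for illustration and are not taken from the actual ChatGPT output.

```python
# Each policy is scored 0-10 per criterion; higher is better
# (a high "cost" score means the policy is cheap).
policies = {
    "Policy A": {"cost": 8, "social_impact": 9, "feasibility": 5},
    "Policy B": {"cost": 9, "social_impact": 4, "feasibility": 9},
}

def best_policy(weights):
    """Pick the policy with the highest weighted score."""
    total = lambda name: sum(weights[c] * v for c, v in policies[name].items())
    return max(policies, key=total)

# With social impact weighted heavily, the high-impact policy wins.
balanced = best_policy({"cost": 0.35, "social_impact": 0.35, "feasibility": 0.30})
# Merely shrinking that one weight flips the recommendation toward
# the cheaper, more feasible policy.
pragmatic = best_policy({"cost": 0.45, "social_impact": 0.10, "feasibility": 0.45})
print(balanced, pragmatic)
```

Under the first weighting the model recommends Policy A; under the second it recommends Policy B, even though nothing about the policies themselves has changed.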
Consequently, even with the social impact parameter, AI could be aligned with Machiavellian realist politics, as this factor does not necessarily guide decision-making but is rather weighted according to the achievement of an objective. If the social impact parameter were removed, the emphasis on maintaining power would be even more apparent, as the model would prioritize feasibility and minimal costs. It should also be emphasized that the inclusion and equality parameters were only added when explicitly requested from ChatGPT.
Regarding the relationship between AI and bureaucratic theory, the use of AI could significantly alter the principles of bureaucracy. Given that an important element of bureaucracy, according to Weber, is rationality, it is necessary to question whether AI is rational in the same sense that humans are. Weber argued that bureaucrats perform their administrative tasks according to utilitarian–material criteria and that there is a trend toward material rationality. If AI plays the role of a bureaucrat without material needs, it would solely follow utilitarian criteria. Moreover, the bureaucrat’s career, contracts, remuneration, and the trend toward plutocratization would disappear, likely leading to a reorganization of the State and possibly a new form of State.
If a State were managed by AI, there would no longer be support from the governed, as there would be no opportunities to join the bureaucracy. Discontent with the way the State is administered could lead either to complete submission to the State or to its total rejection. In other words, a State managed by AI could lead to either authoritarianism or anarchism, depending on the specific situation of each country and the sentiment of the population.
This analysis also touches on Marxist ideas, as any change in the State would highlight how technological advancement leads to political change. Assigning certain tasks to AI would result in the loss of some human jobs, which could be problematic for some workers but beneficial for others. Social change driven by AI would not necessarily lead to dominated groups gaining power; instead, new forms of domination could emerge. Groups that do not adapt quickly to technological changes would be at risk. As previously discussed, underrepresented groups in the technology sector, such as women, people of African descent, and those from the Global South, may not immediately benefit from the social change caused by AI. In fact, their representation could decrease if affirmative action policies are not implemented.

10.2. Claude Experiment

Just like ChatGPT, Claude prioritized factors such as cost–benefit, stakeholders, political feasibility, and risk assessment without explicitly mentioning factors such as inclusion and equality. Claude chose health care as the starting point for the model, establishing four types of health systems in order to analyze which one should be implemented: Universal Healthcare, Public Option, Medicaid Expansion, and the Status Quo. Given that Claude mentioned Medicaid, it can be inferred that Claude is biased toward a United States worldview, confirming that AI can exhibit an ethnocentric worldview, in this case U.S. defaultism. Furthermore, Claude showed no awareness of this and did not mention that the model was based on the U.S. political system.
Although Claude assigned 30% importance to public health impact and 10% to social equity (totaling 40%), the combined weight of cost–benefit analysis (25%), political feasibility (20%), and implementation speed (15%) still represented 60% of the decision criteria, demonstrating a continued emphasis on pragmatic over equity considerations. In addition, when identifying the key stakeholders, Claude selected health worker unions, insurance companies, patient groups, legislators, and the general public without taking race, nationality, age, or gender into consideration.
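The 60/40 split just described follows directly from the reported weights and can be checked with simple arithmetic (the criterion names below mirror the reported categories):

```python
# Weights (in %) as reported by Claude's initial model.
weights = {
    "public_health_impact": 30,
    "social_equity": 10,
    "cost_benefit": 25,
    "political_feasibility": 20,
    "implementation_speed": 15,
}

equity_oriented = weights["public_health_impact"] + weights["social_equity"]
pragmatic = sum(weights.values()) - equity_oriented

assert sum(weights.values()) == 100  # the criteria exhaust the decision space
print(f"equity-oriented: {equity_oriented}%, pragmatic: {pragmatic}%")  # 40% vs. 60%
```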
To sum up, it can be concluded that Claude also prioritizes costs and calculations over qualitative factors and diversity.
When explicitly asked to include inclusion and equality criteria, Claude provided a more comprehensive model with enhanced decision criteria: racial equity (12% weight), economic inclusion (10%), and geographic accessibility (8%). This was a considerable improvement, although the weight given to each criterion and the categories established can be considered arbitrary, making the model still insufficient on its own for political decision-making. Therefore, human supervision remains necessary.
Additionally, Claude’s expanded stakeholder categories and risk assessments, while more comprehensive, continue to reflect U.S. defaultism. The inclusion of specific advocacy groups (Racial Justice Organizations, Disability Rights Groups, LGBTQ+ Advocacy Groups, Rural Communities, Low-Income Advocacy Groups, and Immigrant Rights Organizations) and particular barriers (racial disparities worsening, rural access barriers, language barriers, disability access issues, LGBTQ+ discrimination, and immigration status barriers) suggests an implicit assumption that the model would be implemented in a diverse, immigrant-receiving country with established civil rights organizations—characteristics that may not apply universally. Therefore, while Claude showed adaptability when explicitly prompted to consider inclusion factors, the initial bias toward quantifiable metrics and U.S.-centric examples suggests that AI systems may require deliberate, culturally aware prompting to produce truly inclusive policy analysis frameworks.

11. Conclusions

Artificial intelligence, as it currently stands, reflects the society in which it is developed. As a result, its proposals may not be more impartial, just, or free from bias unless specifically configured to be so. The examples of possible uses of AI for political decision-making suggest that it can lead to biased and fallacious interpretations of the data on which it relies. Current uses of AI imply that when developed with a specific goal in mind, the resulting policies could align with Machiavellian principles by promoting decisions based on calculations and achieving objectives, with little regard for moral dilemmas.
The experiment suggests that even an AI designed with caution to avoid aggressive or dangerous responses may still prioritize calculation and goal achievement over ethical or ideological principles. Since the chatbots aim to avoid controversial responses, their inclusion of variables like social impact as just another factor implies that the most important objective remains the achievement of goals, such as maintaining power, and that variables like social impact may only be considered insofar as they contribute to that objective without hindrance. Between ChatGPT and Claude, Claude provided more comprehensive models, both with and without inclusion and equality factors. When requested to incorporate inclusion and equality criteria, Claude offered superior explanations of how to assess these dimensions. However, Claude exhibited a U.S.-default worldview, suggesting it may require more explicit instructions to develop models appropriate for different sociopolitical contexts.
There are no clear indications that the AI chatbots used lean politically to the right or to the left. However, since their main objective is achieving a particular goal rather than fulfilling a broad political program, the proposals made by an AI may sometimes be questioned by its users, depending on their political positions. Therefore, in situations where political opinions are considered important, the use of AI on its own may not be advisable. Likewise, in complex ethical situations, such as matters of health, life, and the environment, AI should not be used on its own but as a complement to human decision-makers. AI may provide fast calculations and information, which would still have to be discussed by humans and situated within the relevant ethical, ideological, cultural, and sociopolitical context.

Author Contributions

Conceptualization, C.V.H.; methodology, C.V.H.; investigation, C.V.H.; writing—original draft preparation, C.V.H.; writing—review and editing, W.O.C.M.; supervision, W.O.C.M.; project administration, W.O.C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ortega Ruiz, L.G.; Becerra, J. La Inteligencia Artificial en la decisión jurídica y política. Araucaria 2022, 24, 217–238. [Google Scholar] [CrossRef]
  2. Flores-Ruiz, D.; Miedes-Ugarte, B.; Wanner, P. Inteligencia relacional, inteligencia artificial y participación ciudadana. El caso de la plataforma digital cooperativa Les Oiseaux de Passage. Recer. Rev. Pensam. I Anàlisi 2021, 26, 1–24. [Google Scholar] [CrossRef]
  3. McKelvey, F.; MacDonald, M. Artificial Intelligence Policy Innovations at the Canadian Federal Government. Can. J. Commun. 2019, 44, PP43–PP50. [Google Scholar] [CrossRef]
  4. McEvoy, F.J. Political Machines: Ethical Governance in the Age of AI. Moral Philos. Politics 2019, 6, 337–356. [Google Scholar] [CrossRef]
  5. IBM. What are Large Language Models (LLMs)? 2 November 2023. Available online: https://www.ibm.com/think/topics/large-language-models (accessed on 14 August 2025).
  6. Liao, Z.; Antoniak, M.; Cheong, I.; Cheng, E.Y.Y.; Lee, A.H.; Lo, K.; Chang, J.C.; Zhang, A.X. LLMs as Research Tools: A Large Scale Survey of Researchers’ Usage and Perceptions. arXiv 2024, arXiv:2411.05025. [Google Scholar] [CrossRef]
  7. Stapleton, A. This Free AI Tool Nailed Every Academic Task (The Rest Failed) Youtube. Available online: https://youtu.be/j1gc0JdIhEw (accessed on 21 July 2025).
  8. Hobbes, T. Leviathan; Project Gutenberg: Salt Lake City, UT, USA, 2002; Available online: https://www.gutenberg.org/cache/epub/3207/pg3207-images.html (accessed on 14 August 2025).
  9. Machiavelli, N. The Prince; Penguin Classics: New York, NY, USA, 2009. [Google Scholar]
  10. Cañas, R. Maquiavelo y el realismo político. Rev. Estud. 2004, 18–19, 96–104. [Google Scholar]
  11. Strauss, L. Thoughts On Machiavelli; The Free Press: New York, NY, USA, 1958; Available online: http://archive.org/details/LeoStraussThoughtsOnMachiavelli_201411 (accessed on 14 August 2025).
  12. Gramsci, A. The Modern Prince & Other Writings; Foreign Languages Press: Beijing, China, 2021. [Google Scholar]
  13. Fernández de la Peña, M. Los principios políticos en Maquiavelo: Entre el arte del Estado y la moral. Ingenium Rev. Electrónica Pensam. Mod. Y Metodol. En Hist. Las Ideas 2016, 10, 75–91. [Google Scholar] [CrossRef]
  14. Swedberg, R.; Agevall, O. The Max Weber Dictionary: Key Words and Central Concepts; Stanford University Press: Redwood City, CA, USA, 2005. [Google Scholar]
  15. Visitchaichan, S. Revisiting Weber’s Theory of Bureaucracy and Its Usefulness for Analyzing Organizational Structures and Issues; National Institute of Development Administration: Bangkok, Thailand, 2004; Available online: https://www.semanticscholar.org/paper/Revisiting-Weber’s-Theory-of-Bureaucracy-and-its-Visitchaichan/4c9ea6f57c03a8388a877075cf0d219f4151b652 (accessed on 14 August 2025).
  16. Weber, M. Economy and Society; University of California Press: Oakland, CA, USA, 1978. [Google Scholar]
  17. Marx, K.; Engels, F. Manifesto of the Communist Party; Marxist Internet Archive. 2000. Available online: https://www.marxists.org/archive/marx/works/1848/communist-manifesto/index.htm (accessed on 14 August 2025).
  18. Marx, K. The Poverty of Philosophy; Marxists Internet Archive: Pacifica, CA, USA, 2009; Available online: https://www.marxists.org/archive/marx/works/1847/poverty-philosophy/ (accessed on 14 August 2025).
  19. BBC. Coronavirus: Benefit claims fraud could be £1.5bn. BBC News, 20 May 2020. Available online: https://www.bbc.com/news/business-52745983 (accessed on 14 August 2025).
  20. Dilmegani, C. AI in Government: Examples, Challenges & Best Practices [2023]. AI Multiple, 9 October 2023. Available online: https://research.aimultiple.com/ai-government/ (accessed on 14 August 2025).
  21. Wang, T.; Zhang, Y.; Liu, C.; Zhou, Z. Artificial intelligence against the first wave of COVID-19: Evidence from China. BMC Health Serv. Res. 2022, 22, 767. [Google Scholar] [CrossRef] [PubMed]
  22. Theodosiou, A.A.; Read, R.C. Artificial intelligence, machine learning and deep learning: Potential resources for the infection clinician. J. Infect. 2023, 87, 287–294. [Google Scholar] [CrossRef] [PubMed]
  23. Dakalbab, F.; Abu Talib, M.; Abu Waraga, O.; Bou Nassif, A.; Abbas, S.; Nasir, Q. Artificial intelligence & crime prediction: A systematic literature review. Soc. Sci. Humanit. Open 2022, 6, 100342. [Google Scholar] [CrossRef]
  24. Rotaru, V.; Huang, Y.; Li, T.; Evans, J.; Chattopadhyay, I. Event-level prediction of urban crime reveals a signature of enforcement bias in US cities. Nat. Hum. Behav. 2022, 6, 1056–1068. [Google Scholar] [CrossRef] [PubMed]
  25. Negnevitsky, M.; Tomin, N.V.; Rehtanz, C. Preventing Large-Scale Emergencies in Modern Power Systems: AI Approach. J. Adv. Comput. Intell. Intell. Inform. 2014, 18, 714–727. [Google Scholar] [CrossRef]
  26. Vincent, J. AI that detects cardiac arrests during emergency calls will be tested across Europe this summer. The Verge, 25 April 2018. Available online: https://www.theverge.com/2018/4/25/17278994/ai-cardiac-arrest-corti-emergency-call-response (accessed on 14 August 2025).
  27. Vassilakopoulou, P.; Haug, A.; Salvesen, L.M.; Pappas, I.O. Developing human/AI interactions for chat-based customer services: Lessons learned from the Norwegian government. Eur. J. Inf. Syst. 2023, 32, 10–22. [Google Scholar] [CrossRef]
  28. Hung, M.; Lauren, E.; Hon, E.S.; Birmingham, W.C.; Xu, J.; Su, S.; Hon, S.D.; Park, J.; Dang, P.; Lipsky, M.S. Social Network Analysis of COVID-19 Sentiments: Application of Artificial Intelligence. J. Med. Internet Res. 2020, 22, e22590. [Google Scholar] [CrossRef] [PubMed]
  29. Huang, J.; Gates, A.J.; Sinatra, R.; Barabási, A.-L. Historical comparison of gender inequality in scientific careers across countries and disciplines. Proc. Natl. Acad. Sci. USA 2020, 117, 4609–4616. [Google Scholar] [CrossRef] [PubMed]
  30. Fry, R.; Kennedy, B.; Funk, C. STEM Jobs See Uneven Progress in Increasing Gender, Racial and Ethnic Diversity. Pew Res. Cent. Sci. Soc. 2021. Available online: https://www.pewresearch.org/science/2021/04/01/stem-jobs-see-uneven-progress-in-increasing-gender-racial-and-ethnic-diversity/ (accessed on 14 August 2025).
  31. Metz, C. Who Is Making Sure the A.I. Machines Aren’t Racist? The New York Times, 23 June 2023. Available online: https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html (accessed on 14 August 2025).
  32. Small, Z. Black Artists Say A.I. Shows Bias, with Algorithms Erasing Their History. The New York Times, 4 July 2023. Available online: https://www.nytimes.com/2023/07/04/arts/design/black-artists-bias-ai.html (accessed on 14 August 2025).
  33. Zuiderveen Borgesius, F. Discrimination, artificial intelligence, and algorithmic decision-making. Cons. Eur. 2018, 42, 15–24. [Google Scholar]
  34. Dastin, J. Insight—Amazon scraps secret AI recruiting tool that showed bias against women. Reuters, 11 October 2018. Available online: https://www.reuters.com/article/idUSKCN1MK0AG/ (accessed on 14 August 2025).
  35. Fernández, I. La falta de “perspectiva de género” en consulta infradiagnostica a la mujer. Redacción Médica, 6 October 2022. Available online: https://www.redaccionmedica.com/secciones/medicina-familiar-y-comunitaria/la-falta-de-perspectiva-de-genero-en-consulta-infradiagnostica-a-la-mujer-9441 (accessed on 14 August 2025).
  36. Holden, K. What is AI bias—And how might it affect LGBTQ people? Attitude, 10 August 2023. Available online: https://www.attitude.co.uk/life/443057-443057/ (accessed on 14 August 2025).
  37. Hill, K. How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did. Forbes, 16 February 2012. Available online: https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/ (accessed on 14 August 2025).
  38. OpenAI. ChatGPT. 2023. Available online: https://chat.openai.com (accessed on 14 August 2025).
  39. Anthropic. Claude. 2025. Available online: https://claude.ai (accessed on 14 August 2025).
Table 1. Interpretation of the relationship between selected political theories and the use of AI.
| Theory | Principles | Benefits of Artificial Intelligence | Disadvantages of Artificial Intelligence |
|---|---|---|---|
| Realist Politics | Emphasis on the State. Preservation of power. Interests over ideologies. | Without human emotions and with the ability to execute explicit orders, AI would have no issues making decisions that prioritize state defense, even if these are unpopular or deemed unjust or violent. | If based on incomplete information or unsuitable examples, AI could suggest counterproductive, costly, inefficient, and ineffective decisions. |
| Bureaucratic Theory | Hierarchical organization. Division of labor. Professionalization. | Bureaucrats would carry out more mechanical tasks, requiring only the execution of necessary actions with technically justified functions. | If AI replaces many human workers, these workers may become discontent, leading to reduced performance due to a lack of career advancement opportunities. |
| Conflict Theory | Equality. Social change. Emphasis on collectives. | Automation could reduce the workload of public workers, giving them more time for personal development. If configured with a diversity focus, AI could suggest decisions that human actors with biases might not make. | If AI is based on biased data, it could exacerbate discrimination based on gender, age, nationality, among others. AI could also replace collective deliberations with a digital authority. |
Note: Explanation of the principles of the three selected theories and the benefits and disadvantages that AI would have from the perspective of these theories.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
