The Use of Artificial Intelligence in Political Decision-Making
Round 1
Reviewer 1 Report (Previous Reviewer 3)
Comments and Suggestions for Authors
Two things need to be addressed before the paper can be considered again for publication:
- Definition of AI. This pertains to the kind of AI that is used to make political decisions. One can rely on generative AI such as ChatGPT, or on other forms of AI, such as those used for classification and prediction. Without a proper discussion of the kind of AI used in the paper, the paper is too general and thus unlikely to make any substantial contribution.
- Definition of "political decision-making." This is also important. On one side, any action at all is political by nature; even the decision of what food to eat for lunch can be political. So the author must make clear what is meant by political decision-making here. In the paper the author refers to health care decisions or security issues; while these can be political in the broad sense, the paper does not address political decision-making proper, such as that made by political leaders like the prime minister or the president. It would be interesting to find out whether political leaders actually rely on AI for their decisions.
Author Response
Comment 1: Definition of AI. This pertains to the kind of AI that is used to make political decisions. One can rely on generative AI such as ChatGPT, or on other forms of AI, such as those used for classification and prediction. Without a proper discussion of the kind of AI used in the paper, the paper is too general and thus unlikely to make any substantial contribution.
Response 1: I added a justification for the type of AI chosen.
Comment 2: Definition of "political decision-making." This is also important. On one side, any action at all is political by nature; even the decision of what food to eat for lunch can be political. So the author must make clear what is meant by political decision-making here. In the paper the author refers to health care decisions or security issues; while these can be political in the broad sense, the paper does not address political decision-making proper, such as that made by political leaders like the prime minister or the president. It would be interesting to find out whether political leaders actually rely on AI for their decisions.
Response 2: I added a definition of the scope of political decision-making and a justification for the categories chosen.
Additionally, I ran a second experiment with the AI model Claude, as suggested by another reviewer.
Reviewer 2 Report (New Reviewer)
Comments and Suggestions for Authors
This paper examines the growing impact of artificial intelligence (AI) on political decision-making in health, security, and public services. Using political theories such as realism, bureaucracy, and conflict theory, it assesses both ethical and logical concerns. Findings show that AI often favors efficiency and power, raising concerns about bias. The paper incorporates an illustrative experiment using ChatGPT, thereby bridging theoretical analysis with practical illustration, thus making the discussion more tangible and relatable.

The paper, however, lacks original data, as it does not introduce unique datasets or empirical findings, instead largely synthesizing existing literature and an AI-generated example. The arguments are grounded in a literature review and theoretical reflection, with minimal empirical demonstration or case analysis beyond the AI experiment. The author should incorporate more applied case studies beyond the brief mentions of initiatives in the UK, Canada, and Norway, as this would help substantiate the claims.

Moreover, the author should consider expanding the scope of the experiment. The single interaction with ChatGPT 3.5—though illustrative of general tendencies—lacks robustness. Additional prompts, scenarios, or comparative models across different platforms (e.g., Claude, Gemini, or open-source LLMs) would provide a broader understanding of AI’s political alignment or procedural logic.

The paper also has minor editorial issues. For example, in line 94, the word “take” is misplaced.
Author Response
Comment 1: This paper examines the growing impact of artificial intelligence (AI) on political decision-making in health, security, and public services. Using political theories such as realism, bureaucracy, and conflict theory, it assesses both ethical and logical concerns. Findings show that AI often favors efficiency and power, raising concerns about bias. The paper incorporates an illustrative experiment using ChatGPT, thereby bridging theoretical analysis with practical illustration, thus making the discussion more tangible and relatable. The paper, however, lacks original data, as it does not introduce unique datasets or empirical findings, instead largely synthesizing existing literature and an AI-generated example. The arguments are grounded in a literature review and theoretical reflection, with minimal empirical demonstration or case analysis beyond the AI experiment. The author should incorporate more applied case studies beyond the brief mentions of initiatives in the UK, Canada, and Norway, as this would help substantiate the claims. Moreover, the author should consider expanding the scope of the experiment. The single interaction with ChatGPT 3.5—though illustrative of general tendencies—lacks robustness. Additional prompts, scenarios, or comparative models across different platforms (e.g., Claude, Gemini, or open-source LLMs) would provide a broader understanding of AI’s political alignment or procedural logic. The paper also has minor editorial issues. For example, in line 94, the word “take” is misplaced.
Response 1:
I have added a justification for the selection of these specific cases, framing them as part of an exploratory analysis rather than as a means of generalization.
Additionally, I have incorporated a second experiment using Claude to broaden the empirical scope.
Finally, I have addressed the editorial issue previously noted.
Round 2
Reviewer 1 Report (Previous Reviewer 3)
Comments and Suggestions for Authors
The author has made appropriate revisions to the manuscript according to the reviewers' comments. I recommend publication now.
This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The present paper seeks “to analyze the use of artificial intelligence in political decision-making”. For this purpose, it promises to conduct a “literature review” and “experimentation with LLM programs” (p. 1).
I find that the paper does not make a significant contribution to the debate about the relevance of artificial intelligence for political decision-making. The various parts of the paper are, in my opinion, largely irrelevant for that debate.
The literature review on pp. 2–3 is disconnected from the rest of the paper. The author summarizes four papers on issues of AI and politics, but does not explain how the present paper is related to these existing papers. That is, they do not explain whether and to what extent their paper is supposed to extend, amend, challenge, etc., the existing papers.
Section 5, entitled “Uses of Artificial Intelligence in Politics”, is largely irrelevant for the stated topic of the paper (i.e., the use of artificial intelligence in political decision-making). It is a list of brief reflections on how AI could be used in various domains of society (health care, social services, policing, etc.). As far as I can see, these reflections have nothing to do with political decision-making. At any rate, the author does not explain what these reflections have to do with that topic.
Section 6, entitled “Challenges of Using Artificial Intelligence for Political Decision-Making”, is a list of various kinds of errors and biases to which AI systems have been shown to be prone. These errors and biases are clearly relevant for the question of the use of AI in political decision-making. However, the errors and biases listed are well known, and the summary given by the author does not, as far as I can see, go beyond the existing literature. Therefore, I cannot see a relevant contribution in this section, either.
The “experiment” in section 7, finally, is, in my opinion, completely ill-conceived and irrelevant for anything whatsoever. The author asked ChatGPT to produce “an example of R code that could help the State make political decisions”. That is, they asked ChatGPT to program another system that could then be used in political decision-making. They are confusing two levels here. It is one thing to use a language model like ChatGPT as a tool in political decision-making. For instance, such a language model could be used to summarize and/or aggregate citizen opinions, to summarize relevant facts or source materials, to improve the expression of citizen opinions or government communications, etc. From the title and the abstract of the paper, I had expected an analysis of some of these or similar possibilities. Yet the paper does not do any of this.

In the experiment, the author did not test the usefulness of ChatGPT for any of these possible tasks, but asked ChatGPT to program another system that would then help with political decision-making. Not surprisingly, what ChatGPT delivered is completely uninteresting: it delivered a formula that allows one to assign a specific score to a proposed policy, given partial scores on aspects like cost, social impact, viability, inclusion, etc. In any real instance of political decision-making, the real question would be how these aspects (i.e., cost, social impact, viability, etc.) are defined and measured. The formula delivered by ChatGPT does not address this question, but presupposes that it has been answered, and is therefore completely inane and useless.
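To make the reviewer's objection concrete, the kind of weighted-sum scoring formula described above can be sketched as follows. This is a hypothetical illustration in Python rather than the R of the paper's experiment; the aspect names, weights, and partial scores are invented, and, as the reviewer notes, the formula presupposes that those partial scores have already been defined and measured.

```python
def policy_score(scores, weights):
    """Weighted sum of partial scores for one policy proposal.

    `scores` and `weights` map aspect names (e.g. cost, social impact)
    to numbers; weights are assumed to sum to 1.
    """
    return sum(weights[aspect] * scores[aspect] for aspect in weights)

# Invented weights and partial scores, for illustration only.
weights = {"cost": 0.25, "social_impact": 0.35, "viability": 0.25, "inclusion": 0.15}
policy_a = {"cost": 7, "social_impact": 8, "viability": 6, "inclusion": 9}

print(policy_score(policy_a, weights))  # 7.4
```

The sketch makes the reviewer's point visible: the entire substance of the decision lies in how the partial scores and weights are produced, which the formula simply takes as given.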
Author Response
Comment 1: The literature review on pp. 2–3 is disconnected from the rest of the paper. The author summarizes four papers on issues of AI and politics, but does not explain how the present paper is related to these existing papers. That is, they do not explain whether and to what extent their paper is supposed to extend, amend, challenge, etc., the existing papers.
Response 1: I added an analysis placing the literature review in the context of the theories presented, to show that previous research tends to present the use of AI in political decision-making as more aligned with realist political theory.
Comment 2: Section 5, entitled “Uses of Artificial Intelligence in Politics”, is largely irrelevant for the stated topic of the paper (i.e., the use of artificial intelligence in political decision-making). It is a list of brief reflections on how AI could be used in various domains of society (health care, social services, policing, etc.). As far as I can see, these reflections have nothing to do with political decision-making. At any rate, the author does not explain what these reflections have to do with that topic.
Response 2: I added an explanation of how these uses relate to political decision-making and to the theories presented.
Comment 3: Section 6, entitled “Challenges of Using Artificial Intelligence for Political Decision-Making”, is a list of various kinds of errors and biases to which AI systems have been shown to be prone. These errors and biases are clearly relevant for the question of the use of AI in political decision-making. However, the errors and biases listed are well known, and the summary given by the author does not, as far as I can see, go beyond the existing literature. Therefore, I cannot see a relevant contribution in this section, either.
Response 3: I added an explanation of how these errors and biases relate to the previous research and to the theories presented.
Comment 4: The “experiment” in section 7, finally, is, in my opinion, completely ill-conceived and irrelevant for anything whatsoever. The author asked ChatGPT to produce “an example of R code that could help the State make political decisions”. That is, they asked ChatGPT to program another system that could then be used in political decision-making. They are confusing two levels here. It is one thing to use a language model like ChatGPT as a tool in political decision-making. For instance, such a language model could be used to summarize and/or aggregate citizen opinions, to summarize relevant facts or source materials, to improve the expression of citizen opinions or government communications, etc.
Response 4: I added an explanation of the purpose of the experiment.
Reviewer 2 Report
Comments and Suggestions for Authors
The paper explores the role of artificial intelligence in political decision-making, focusing on the application of LLMs in various political contexts. The analysis is conducted through a mixed-method strategy, using both a literature review and an experiment.
The article is well written and the goals are outlined clearly. Moreover, the paper represents a valuable and needed contribution to the literature on AI and political decision-making. That being said, in my view, the paper needs to be revised before being published in Philosophies.
First, in the introduction of the article the author presents four previous works that address the implications of the use of AI technologies in policy-making, but he/she does not locate any of them within the theoretical approaches discussed in the subsequent paragraphs of the paper. In addition, he/she does not highlight the limitations of previous studies or explain how the submitted manuscript will contribute to existing research.
Second, the author should also better clarify the methodology. As for the experiment, he/she should explain in more depth how the different factors (social impact, viability, cost and, later, inclusion and equality) are weighted. In other words, does ChatGPT suggest the weight to be attributed to each factor, or is this task performed by the researcher?
Third, the additional criteria (equality and inclusion) included in the second model should be part of the decision-making from the outset. In this regard, the author should emphasize more strongly that policy-making is complex and that this basic model does not address issues like decision-makers' ideological preferences, pressures from interest groups, interministerial rivalries and divisions, and so on.
Finally, in section 6, the author lists a long series of uses and challenges of AI in political decision-making, but this section is connected neither to the theoretical approaches discussed in the paper nor to the experiment. Overall, in my view the author should work to increase the internal coherence of the paper.
Author Response
Comment 1: First, in the introduction of the article the author presents four previous works that address the implications of the use of AI technologies in policy-making, but he/she does not locate any of them within the theoretical approaches discussed in the subsequent paragraphs of the paper. In addition, he/she does not highlight the limitations of previous studies or explain how the submitted manuscript will contribute to existing research.
Response 1: After each previous work, and in the following sections, I added analysis and explanations of how the previous studies relate to the article's goal.
Comment 2: Second, the author should also better clarify the methodology. As for the experiment, he/she should explain in more depth how the different factors (social impact, viability, cost and, later, inclusion and equality) are weighted. In other words, does ChatGPT suggest the weight to be attributed to each factor, or is this task performed by the researcher?
Response 2: I added a further explanation of how the experiment was done and what its purpose was.
Comment 3: Third, the additional criteria (equality and inclusion) included in the second model should be part of the decision-making from the outset. In this regard, the author should emphasize more strongly that policy-making is complex and that this basic model does not address issues like decision-makers' ideological preferences, pressures from interest groups, interministerial rivalries and divisions, and so on.
Response 3: Same as above. The added explanation clarifies why those criteria were not included in the first model.
Comment 4: Finally, in section 6, the author lists a long series of uses and challenges of AI in political decision-making, but this section is connected neither to the theoretical approaches discussed in the paper nor to the experiment. Overall, in my view the author should work to increase the internal coherence of the paper.
Response 4: I added explanations of how these uses and challenges relate to the previous works and the theoretical approaches.
Reviewer 3 Report
Comments and Suggestions for Authors
The paper suffers from too much review of the literature and too little argument or contribution from the author's own findings. The experiment with ChatGPT is supposed to be an original contribution, but it is not related to the discussion section. The code that ChatGPT generated was not used or discussed in the following section.
Round 2
Reviewer 2 Report
Comments and Suggestions for Authors
The authors have addressed the main issues I raised in my review. I recommend the publication of the article.