Article
Peer-Review Record

Acceptance of AI in Semi-Structured Decision-Making Situations Applying the Four-Sides Model of Communication—An Empirical Analysis Focused on Higher Education

Educ. Sci. 2023, 13(9), 865; https://doi.org/10.3390/educsci13090865
by Christian Greiner *, Thomas C. Peisl, Felix Höpfl and Olivia Beese
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 10 July 2023 / Revised: 18 August 2023 / Accepted: 22 August 2023 / Published: 24 August 2023
(This article belongs to the Special Issue New Technology Challenges in Education for New Learning Ecosystem)

Round 1

Reviewer 1 Report

Abstract: Somewhat long. Perhaps remove lines 12-28. The Abstract provides a very interesting investigation idea.

Introduction: Brief. Good overview. Perhaps add lines 12-28 here.

Related Work: Very good coverage with detailed descriptions.

Research Strategy: The Qualitative approach is good for this type of research. Good overall description.

Findings and Discussion: Interesting and informative descriptions. Well presented findings relative to TAM.

Further Research: Agreed, but the focus should remain on the positive description of the findings.

Conclusions: With the description of modifications to existing models, the findings should be further investigated by academics.

Acceptable with minor changes.

Author Response

To Reviewer 1:

Thank you very much for your valuable comments on our paper. Please find our changes below as well as in the attached paper.

Abstract: Somewhat long. Perhaps remove lines 12-28. The Abstract provides a very interesting investigation idea.

A new abstract has been sent to the editor and follows below.

Abstract: Research on the scrutiny of technologies like ChatGPT in academia has been well-received, signaling a significant change in research and academic norms. This study investigates the impact of ChatGPT on semi-structured decision-making, specifically in evaluating undergraduate dissertations. We propose using Davis' Technology Acceptance Model (TAM) and Schulz von Thun's four-sided communication model to understand human-AI interaction and necessary adaptations for acceptance in dissertation grading. We employed an inductive research design, conducting interviews with ten experts using four scenarios with escalating consequences, resembling dissertation grading in higher education. In all scenarios, the AI functioned as a sender based on the four-sided model. Findings reveal that technology acceptance for human-AI interaction is adaptive but requires modifications, particularly regarding AI's transparency. Testing the four-sided model showed support for three sides, with the appeal side receiving negative feedback for AI acceptance as a sender. Respondents struggled to accept the idea of AI suggesting a grading decision through an appeal. Consequently, transparency about AI's role emerged as vital. When AI supports instructors transparently, acceptance levels are higher. These results encourage further research on AI as a receiver and the impartiality of AI decision-making without instructor influence. This study emphasizes communication modes in learning ecosystems, especially in semi-structured decision-making situations with AI as a sender, while highlighting the potential to enhance AI-based decision-making acceptance.

Introduction: Brief. Good overview. Perhaps add lines 12-28 here.

Because the rewritten abstract now covers all of these topics, adding the same content to the introduction seemed like a duplication.

Related Work: Very good coverage with detailed descriptions.

Research Strategy: The Qualitative approach is good for this type of research. Good overall description.

Findings and Discussion: Interesting and informative descriptions. Well presented findings relative to TAM.

Further Research: Agreed, but the focus should remain on the positive description of the findings.

The limitations necessitate further avenues of research. The accessibility of AI tools to the general public is steadily expanding, with ChatGPT achieving a historical record as the fastest-growing consumer application [31]. Consequently, conducting a quantitative study to explore the acceptance of AI-generated decisions would prove highly insightful.

The broad applicability of the four-sides model was unsurprising, given the limited time available for the development of distinct communicational behaviors when interacting with an AI. However, our emphasis hitherto has been on the sender's perspective. It remains pertinent to investigate whether an AI, placed in the role of a recipient, could discern varying levels of communication.

The majority of participants in our interviews expressed a notable level of comfort with AI's role as a decision supporter. Nonetheless, their primary focus remained on humans as the ultimate arbiters in decision-making processes.

During our extensive interview sessions, we encountered an individual who displayed a highly positive outlook regarding AI's potential as a decision maker. This optimism stemmed from the belief that an AI, in comparison to a human decision maker, would be less susceptible to the influence of discriminatory tendencies. This particular aspect presents an intriguing avenue for future research: exploring whether AI, by its nature, possesses greater objectivity in decision-making processes compared to humans.

Conclusions: With the description of modifications to existing models, the findings should be further investigated by academics.

This study reveals that the four-sides model proves suitable for analyzing communication between an AI sender and a human receiver. However, we have identified transparency as a pivotal factor in ensuring a high level of acceptance in the context of AI's role in decision-making processes. To analyze achieved acceptance, the well-established Technology Acceptance Model (TAM) can be utilized, albeit with the incorporation of additional factors necessitated by the need for heightened transparency.

Despite the limitations imposed by a relatively small group of interviewed experts, our findings indicate that we possess adequate groundwork to delve into the analysis of AI-human communication. Moreover, we have garnered valuable insights on its potential application within the domain of higher education, particularly concerning the evaluation of academic theses. Notably, the notion that AI could mitigate the individual biases of an examiner in the grading process of bachelor's or master's theses opens up intriguing and valuable avenues for the application of artificial intelligence in this realm. The description of modifications made to the existing models warrants further investigation by the academic community.

Author Response File: Author Response.docx

Reviewer 2 Report

This paper describes an empirical investigation in the form of an interview study to understand the impact of ChatGPT on humans' decision-making. The authors interviewed ten 'experts' on their engagement with four AI-enabled scenarios. The scenarios differed with respect to the kinds of decisions made, from partner choice to thesis evaluation, salary increases, and determining a criminal sentence. The authors draw on a communication framework by Schulz von Thun, the four-sides communication model, as a lens for their analysis of the qualitative interview data.

While the authors have provided some clear theoretical frameworks as background for their analysis, the main weaknesses of the paper lie in the methodology of the study, which also affects the results and conclusions. The main issues are outlined below.

1.     The abstract describes the study as aiming to understand the impact of ChatGPT on humans' decision-making. However, nowhere in the methodology is it made clear how this form of GenAI is operationalised, especially in the scenarios; the description in the Methodology simply refers to the scenarios as being 'AI decisions'. The authors need to provide more details regarding the scenarios, especially how they are presented, so that readers can see where the AI (or even the ChatGPT) component lies. At least one clear description of a scenario should be included in order to show what output of the AI is being imagined.

2.     Interview partners. In the abstract, the interviewees are described as 'experts' in the AI field; however, Table 1 shows that the 10 participants range in their skill levels from 'Advanced beginner' to 'Expert'. This will likely affect the way they respond to the scenarios. Furthermore, while the skill level of each respondent is identified, the authors did not draw this variable out in order to provide a more nuanced understanding of how they perceived the AI. This conflict needs to be clarified: were these informants experts or not?

3.     In the Results, the data from the interviewees are hardly presented. In qualitative research, it is essential to show key quotes from the informants and to demonstrate how these were translated into themes. The authors say that they have used grounded theory, but they have hardly shown how they applied this method to the interview data. In particular, on p.6, Figure 2 is shown as an 'expanded' version of the original framework by Schulz von Thun, but nowhere do they show how the interview data were used to expand this graphic. There is a gap in terms of how the authors have analysed the data and translated it to inform the adaptation.

4.     In the conclusion, the authors make the claim that “we understood that transparency about the AI's role in a decision process is key to achieving a high level of acceptance” (p.8, line 292). Again, it is a big leap from the data (which is barely shown) to this conclusion. The authors could highlight the key result and signpost it so that the reader can draw the connection.

5.     Another point in the conclusion that raises questions is the sentence “we are well prepared to analyze AI-human communication and have received signals” (p.9, line 298). What is meant by signals? There is no mention of signals anywhere else in the manuscript. The authors could use a different word, or signpost this earlier in the literature review, or in the results. Otherwise it leaves the reader hanging with respect to what the authors are referring to.

Other minor points which need to be addressed for further clarity of the submission are:

1.     Table 2 (p.7). The description seems to be missing for the factor ‘Credibility’, as the cell in the second column is empty, whereas all the others have content in terms of a question or description.

2.     On p.8, “The overall acceptance levels can be found in the figure below” (line 264). However, there is no figure below.  

While the submission has been written in language that is fairly accessible to laypersons, there are a number of grammatical inaccuracies as well as expression irregularities that would benefit from third-party proofreading. In particular, long sentences need to be broken up, if not into separate sentences, then at least by using commas. One example is “As decision making by an AI that affects a human being is very specific and up to now not experienced on a wider scale” (p.7, line 236).

Also, it is hard to understand this sentence on p.8 “at the time being the experience with AI is in the overall society reduced it was needed to question AI-experts” (line 269). This sentence needs to be rephrased for clarity.

Author Response

Please see attached documents.

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

Thank you for addressing the feedback, there is discernible improvement to the submission now, especially in terms of clarity and the reporting of the results.

A point to note is that the abstract still states "interviews with ten experts", whereas the methodology states that participants with varying degrees of expertise were recruited. This needs to be consistent; please update the abstract accordingly.

Author Response

Thanks again. Abstract updated: 

... Utilizing an inductive research design, we conducted ten interviews with respondents with varying levels of AI and management expertise, employing four escalating-consequence scenarios mirroring higher education dissertation grading. ...
Back to TopTop