Peer-Review Record

Exploring University Staff’s Perceptions of Using Generative Artificial Intelligence at University

by Molly Whitbread, Chloe Hayes, Sanjana Prabhakar and Rebecca Upsher
Educ. Sci. 2025, 15(3), 367; https://doi.org/10.3390/educsci15030367
Submission received: 5 December 2024 / Revised: 10 March 2025 / Accepted: 11 March 2025 / Published: 16 March 2025

Round 1

Reviewer 1 Report (New Reviewer)

Comments and Suggestions for Authors

The paper does a great job of introducing the issue of GenAI and its implications for higher education. The methodological novelty of the analysis of social media posts from X offers something new. However, the contextualization could be more deeply rooted in previous work on using social media as a data source in educational research.

The research design and methods are well explained, and the selection process of posts is transparent and rigorous. However, there is little information on the distribution of posts by geographical location and academic discipline. This information would give a better overview of the representativeness of the data and potential biases.

The findings are thematically organized with subthemes and are well-illustrated. However, the discussion would be more complete if the proportion of posts in each theme and subtheme were quantified. This would give the reader an idea of the relative weight of each category. Moreover, although the short period of data collection is recognized as a limitation, the implications of this for the generalisability and timeliness of the findings are not explicitly discussed.

The results are clearly structured, and the use of representative quotes enhances their readability. However, the absence of numerical data about the distribution of posts within themes limits the depth of the analysis. Providing this data in the results section would strengthen the study’s empirical contribution.

The references are relevant and up-to-date, but the paper could engage with a wider body of literature, especially studies that investigate educators' perceptions of GenAI through qualitative approaches, as well as references on the qualitative analysis of online data. This would further strengthen the study and better contextualise the findings.

The conclusions are consistent with the findings but do not present actionable policy or practice recommendations. Moreover, the limitations of this study, such as the short time period, language restrictions, and limited consideration of contextual and disciplinary factors, are not substantially reflected in the conclusions (some of these may have been impossible to determine, but addressing them would certainly improve the quality of the analysis). This section could be strengthened by including practical implications, suggestions for future research, and a discussion of how these limitations may affect the findings.

Suggestions:

• Quantify the distribution of posts across themes and subthemes to give a better idea of their relative importance.

• Provide information on the contextual and disciplinary context of the posts in order to enhance the representativeness of the dataset (if possible).

• Expand the discussion of limitations, especially the limited time of data collection and how that might affect the timeliness and generalizability of findings. This could include further reflection on how such a limitation may have resulted in an unrepresentative data set and what could be done to address such concerns in future studies.

• Engage with a broader range of literature, including literature on qualitative analysis of online data and educators' perceptions of generative AI, in particular LLMs, to set a wider context.

• Consider providing actionable recommendations for policy and practice based on the findings.

Author Response

  • Quantify the distribution of posts across themes and subthemes to give a better idea of their relative importance.

Thank you for your suggestion. We have now quantified the distribution of posts across themes and subthemes by adding subtheme frequencies in Table 1 and incorporating these figures in the Results section.
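As an illustration, frequencies of this kind could be tallied from the published Figshare dataset along the following lines. This is a minimal Python sketch; the file name and the "subtheme" column header are assumed placeholders, not the dataset's actual schema:

    from collections import Counter
    import csv

    # Minimal sketch: tally coded posts per subtheme. The file name and
    # "subtheme" column are hypothetical, not the dataset's real schema.
    with open("coded_posts.csv", newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    counts = Counter(row["subtheme"] for row in rows)

    # Report absolute and relative frequencies, as the reviewer requested.
    total = sum(counts.values())
    for subtheme, n in counts.most_common():
        print(f"{subtheme}: {n} posts ({n / total:.1%})")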

  • Provide information on the contextual and disciplinary context of the posts in order to enhance the representativeness of the dataset (if possible).

Thank you for highlighting the importance of providing contextual and disciplinary information to enhance the representativeness of the dataset. Unfortunately, we were unable to extract such details (e.g., country of origin, discipline of the poster) manually from the posts, as this information is not explicitly available or reliably discernible within the data retrieved from the platform.

We acknowledge the value of such contextual information and recognise its potential to enrich the analysis. However, given the constraints of the dataset, we have focused on the content of the posts themselves to provide meaningful insights. We have addressed this limitation in the discussion by reflecting on the absence of contextual data and its implications for the interpretation of our findings:

However, a limitation of this study is the lack of contextual and disciplinary information, such as the country of origin or professional background of the users. This absence limits the ability to assess the representativeness of the dataset and may influence the interpretation of findings, as perspectives may vary across cultural and disciplinary contexts. Future research could address this limitation by employing methods that allow for the inclusion of such metadata, such as surveys or interviews with platform users. Despite this limitation, the current study provides valuable insights into the discourse on ChatGPT in higher education by focusing on the content of the posts themselves.

  • Expand the discussion of limitations, especially the limited time of data collection and how that might affect the timeliness and generalizability of findings. This could include further reflection on how such a limitation may have resulted in an unrepresentative data set and what could be done to address such concerns in future studies.

Thank you for your suggestion to expand the discussion of limitations. We have now reflected on how the timeframe of data collection and language constraints may have influenced the representativeness and generalisability of the dataset, and we propose future research directions to address these concerns:

Another limitation of this study is the short timeframe of data collection (April–July 2023), which, while capturing timely insights into staff perceptions of ChatGPT, may not reflect longer-term or evolving attitudes. This temporal scope, combined with language constraints, may have resulted in an unrepresentative dataset, as discourse during this period could differ from broader staff perspectives. Future research could address this by employing longitudinal designs or triangulating social media analysis with surveys or interviews to enhance representativeness and generalisability.

  • Engage with a broader range of literature, including literature on qualitative analysis of online data and educators' perceptions of generative AI, in particular LLMs, to set a wider context.

Thank you for your feedback. We have expanded the literature review in the introduction to include studies on the qualitative analysis of social media data and qualitative research on higher education educators’ perspectives on ChatGPT.

 

  • Consider providing actionable recommendations for policy and practice based on the findings.

Thank you for this suggestion; it provided an excellent opportunity to tie our findings into actionable points. We have now added a “recommendations for policy and practice” section to the discussion. This section highlights several steps that universities can take to address the challenges and opportunities associated with the integration of GenAI in higher education. These recommendations are based on key themes identified in our analysis, such as staff concerns about institutional responses to GenAI, the need for improved literacy, and the impact on assessment practices:

The findings of this study highlight several actionable steps that universities can take to address the challenges and opportunities associated with the integration of GenAI in higher education. First, institutions should prioritise the development of clear and comprehensive policies that define the ethical use of GenAI. These policies should be co-created with input from staff and students to build trust and reflect the practical challenges faced by educators. Key areas for inclusion include academic integrity, equitable access to GenAI tools, and data privacy. Transparent communication of these policies across the institution is essential to ensure widespread understanding and adherence.

In addition to policy development, universities should implement targeted training programmes to improve GenAI literacy among staff and students. For staff, such training should focus on practical applications of GenAI in teaching and assessment, helping them to integrate these tools effectively while understanding their limitations. For students, training should emphasise the responsible use of GenAI and the critical evaluation of AI-generated outputs, encouraging a balanced approach to its adoption.

Redesigning assessments to reduce dependence on GenAI while maintaining inclusivity and academic integrity is also essential. Innovative assessment methods, such as open-book exams, oral assessments, and reflective essays, can encourage genuine student engagement and the development of higher-order thinking skills. Furthermore, establishing mechanisms to regularly collect feedback from staff and students will provide valuable insights into their experiences with GenAI, allowing institutions to refine policies and practices iteratively.

Finally, universities should invest in robust support structures to ensure that staff feel adequately equipped to navigate the integration of GenAI. This includes providing dedicated resources, ongoing professional development opportunities, and accessible support teams. By adopting these measures, universities can create an environment that maximises the potential benefits of GenAI while addressing the challenges it presents, ensuring its integration is ethical, effective, and aligned with academic values.

Reviewer 2 Report (New Reviewer)

Comments and Suggestions for Authors

The manuscript is currently unsuitable for publication in this journal for the following reasons. Major revisions are necessary to address these issues:

  1. The manuscript does not comply with the journal's formatting and submission guidelines. Specifically:

    • References are missing or improperly formatted.
    • Figures are not appropriately footnoted.
    • Tables lack proper headings.
  2. A critical flaw in the manuscript is the failure to adhere to the PRISMA 2020 protocol (https://www.prisma-statement.org/prisma-2020). This protocol is essential for bibliometric research to ensure systematic and transparent reporting. Revising the manuscript to align with PRISMA standards is imperative.

  3. The discussion section is superficial and does not adequately interpret or support the results presented. A deeper and more comprehensive analysis is required to strengthen the manuscript's contribution to the field.

Author Response

  1. The manuscript does not comply with the journal's formatting and submission guidelines. Specifically:
    • References are missing or improperly formatted.
    • Figures are not appropriately footnoted.
    • Tables lack proper headings.

 

We appreciate the reviewer’s feedback regarding formatting and submission guidelines and have made the necessary revisions to address these concerns. All references have been reviewed and updated to comply with the journal’s requirements, with missing details such as DOIs, URLs, and publication information added, titles adjusted to sentence case, and journal names italicised. In-text citations have been cross-checked for consistency. Figure and table headings have been updated. For example, Figure 1 is now titled “Flow Diagram of Screening and Inclusion of Posts from X in Final Analysis”. Table 1 has been updated to “Themes and Subthemes of Analysis of University Staff’s Perceptions of Generative AI”. We believe these revisions ensure full compliance with the journal’s formatting and submission guidelines.

 

  2. A critical flaw in the manuscript is the failure to adhere to the PRISMA 2020 protocol (https://www.prisma-statement.org/prisma-2020). This protocol is essential for bibliometric research to ensure systematic and transparent reporting. Revising the manuscript to align with PRISMA standards is imperative.

Thank you for your suggestion regarding the PRISMA protocol. While PRISMA is an excellent tool for ensuring transparency and reproducibility in systematic reviews and meta-analyses, our study employs a qualitative content analysis approach, which is not aligned with the systematic review methodology that PRISMA is designed to support.

In qualitative content analysis, the focus is on analysing the content of pre-defined data sources—in this case, posts from the social media platform X—rather than systematically identifying and screening studies from a larger body of literature. To ensure transparency in our methods, we have provided detailed information on our study design, the eligibility criteria for the posts, our search strategy within X, the selection process, the data collected, and our data analysis approach.

We hope this clarification addresses your concern, and we would be happy to provide further details if needed.

  3. The discussion section is superficial and does not adequately interpret or support the results presented. A deeper and more comprehensive analysis is required to strengthen the manuscript's contribution to the field.

Thank you for this valuable feedback. We have revised the discussion to provide a deeper and more comprehensive analysis of the results and their implications. In particular, we have added a “recommendations for policy and practice” subsection that ties our findings into actionable steps for universities addressing the integration of GenAI in higher education. This addition ensures that the discussion not only interprets the results but also highlights their practical relevance to the field:

The findings of this study highlight several actionable steps that universities can take to address the challenges and opportunities associated with the integration of GenAI in higher education. First, institutions should prioritise the development of clear and comprehensive policies that define the ethical use of GenAI. These policies should be co-created with input from staff and students to build trust and reflect the practical challenges faced by educators. Key areas for inclusion include academic integrity, equitable access to GenAI tools, and data privacy. Transparent communication of these policies across the institution is essential to ensure widespread understanding and adherence.

In addition to policy development, universities should implement targeted training programmes to improve GenAI literacy among staff and students. For staff, such training should focus on practical applications of GenAI in teaching and assessment, helping them to integrate these tools effectively while understanding their limitations. For students, training should emphasise the responsible use of GenAI and the critical evaluation of AI-generated outputs, encouraging a balanced approach to its adoption.

Redesigning assessments to reduce dependence on GenAI while maintaining inclusivity and academic integrity is also essential. Innovative assessment methods, such as open-book exams, oral assessments, and reflective essays, can encourage genuine student engagement and the development of higher-order thinking skills. Furthermore, establishing mechanisms to regularly collect feedback from staff and students will provide valuable insights into their experiences with GenAI, allowing institutions to refine policies and practices iteratively.

Finally, universities should invest in robust support structures to ensure that staff feel adequately equipped to navigate the integration of GenAI. This includes providing dedicated resources, ongoing professional development opportunities, and accessible support teams. By adopting these measures, universities can create an environment that maximises the potential benefits of GenAI while addressing the challenges it presents, ensuring its integration is ethical, effective, and aligned with academic values.

 

Round 2

Reviewer 1 Report (New Reviewer)

Comments and Suggestions for Authors

Summary of Revisions and Improvements:

  • The manuscript demonstrates clear enhancements in the latest version. The authors have:
    1. Quantified the dataset (e.g., number of posts for each theme/subtheme) to illustrate distribution and relevance.
    2. Expanded their discussion of literature on generative AI in higher education.
    3. Addressed limitations regarding timeframe, language exclusivity, and lack of disciplinary or geographic data.
    4. Strengthened their policy/practice recommendations, outlining actionable steps for institutions.

Persisting Limitations / Potential Improvements:

  1. Additional Methodological References: While the authors now include more literature, further discussion on social media research methods—especially concerning Twitter (X)—could fortify the methodological justification.
  2. Short Data-Collection Window: The April–July 2023 timeframe is acknowledged, yet the fast-paced evolution of ChatGPT and related tools may rapidly outdate perceptions. A reflection on how subsequent longitudinal studies could verify or expand these findings would be valuable.
  3. Monolingual Focus: Restricting data to English may omit culturally and contextually diverse staff perceptions. We recommend emphasizing this limitation and discussing how future research might broaden linguistic scope.
  4. Future Directions: Providing more explicit proposals on how to integrate different data sources (other social platforms, interviews, or surveys) would help address representativeness and disciplinary context limitations.

Overall, we recognize that the authors have substantially addressed earlier feedback and strengthened the manuscript. We encourage them to refine the discussion on methodological and temporal constraints. With these minor adjustments, this paper is well-positioned to contribute meaningfully to ongoing debates on integrating GenAI in higher education.

Comments on the Quality of English Language

While the manuscript is understandable, a careful edit (particularly for style and clarity) could further enhance the presentation of the research.

Author Response

Thank you for your comments. We appreciate your feedback to strengthen our manuscript.

Please see below for detail on how we have addressed these comments:

Comment 1- Additional Methodological References: While the authors now include more literature, further discussion on social media research methods—especially concerning Twitter (X)—could fortify the methodological justification.

Response 1- We have added to the introduction section to further justify this methodology:

Much of the outlined existing research has focused on understanding staff perceptions of GenAI through methods such as focus groups, interviews, and surveys, typically featuring smaller sample sizes and specific geographical locations (Dhamija & Dhamija, 2025; Firat, 2023; Wilkinson et al., 2024). In the current research, we employed a qualitative analysis of social media posts retrieved from X (formerly Twitter) to explore higher education staff’s perspectives regarding students' use of ChatGPT in academic settings. Analysis of social media posts has been used in a variety of contexts (Williams, Terras, & Warwick, 2013), for example, healthcare (Fu et al., 2023), mental health (Talbot et al., 2023), public health (Sleigh et al., 2021; Diddi, 2015), and education (Hadi Mogavi et al., 2021). Analysing social media posts provides access to a geographically broader and more diverse range of opinions due to the prevalent use of social media by academics (Jordan & Weller, 2018). Social media, and Twitter/X in particular, provides a dynamic and organic space for academic discussions, making it a valuable data source for exploring staff perspectives on emerging technologies in higher education. Previous research has demonstrated that educators and academics frequently use Twitter/X for professional dialogue, networking, and sharing insights about pedagogical practices (Veletsianos, 2020). By leveraging this rich source of naturally occurring discourse, this study captures real-time reflections and discussions on GenAI, offering insights that may not emerge through traditional qualitative methods.

 

Comment 2- Short Data-Collection Window: The April–July 2023 timeframe is acknowledged, yet the fast-paced evolution of ChatGPT and related tools may rapidly outdate perceptions. A reflection on how subsequent longitudinal studies could verify or expand these findings would be valuable.

Response 2- Thank you for this comment. We have changed a couple of things here to make this clearer.

Firstly, we added a sentence into the ‘strengths and limitations’ section:

Another limitation of this study is the short timeframe of data collection (April–July 2023), which, while capturing timely insights into staff perceptions of ChatGPT, may not reflect longer-term or evolving attitudes. Given the rapid advancements in generative AI and its increasing integration into educational contexts, staff perceptions are likely to shift as institutions develop clearer policies, guidance, and practical applications for these tools. This temporal scope, combined with language constraints, may have resulted in an unrepresentative dataset, as discourse during this period could differ from broader staff perspectives.

Secondly, we removed the line about future research in the ‘strengths and limitations’ section (4.1), and added detail about longitudinal research in the ‘future research’ section (4.3):

Additionally, a longitudinal approach, tracing posts over a longer duration, can provide insights into how staff attitudes change over time as institutions and educators gain more experience with GenAI.

Comment 3- Monolingual Focus: Restricting data to English may omit culturally and contextually diverse staff perceptions. We recommend emphasizing this limitation and discussing how future research might broaden linguistic scope.

 

Response 3- Thank you, we have now added to the limitations section to emphasise this point:

Additionally, restricting the dataset to English-language posts may have excluded valuable insights from non-English-speaking university staff, whose perspectives on GenAI could differ based on regional policies, institutional norms, and broader sociocultural attitudes toward AI in education.

We have also added the following to the future research section:

Expanding the scope of the study to include posts in other languages or from different global regions would offer a richer, cross-cultural understanding of the topic. This could be achieved by incorporating multilingual data collection and analysis, leveraging machine translation tools or involving researchers fluent in other languages to ensure accuracy and cultural sensitivity.

Comment 4- Future Directions: Providing more explicit proposals on how to integrate different data sources (other social platforms, interviews, or surveys) would help address representativeness and disciplinary context limitations.

Response 4- Thank you. We have now made this point more explicit in the future research section:

To gain more contextual and disciplinary information, future research could employ methods that allow for the inclusion of such metadata, such as surveys or interviews with platform users. Additionally, integrating data from multiple social media platforms—such as LinkedIn, Facebook, or academic forums—could provide a broader and more diverse range of perspectives, as different platforms attract distinct user demographics and professional communities. Combining social media analysis with other qualitative and quantitative methods, such as follow-up surveys or interviews, could provide deeper insights into how perceptions evolve in response to policy shifts, pedagogical developments, and real-world experiences with ChatGPT in educational settings. A multi-method approach incorporating diverse data sources would enhance representativeness and generalisability while offering a more nuanced and comprehensive understanding of the ongoing integration of AI in higher education.

Reviewer 2 Report (New Reviewer)

Comments and Suggestions for Authors

The authors have made some changes. However, they have not marked them in the manuscript, as stated in the MDPI guidelines. Therefore, I am unable to properly identify the changes.

Author Response

Please see the attached revised manuscript, with changes marked. It also now includes the comments addressed in the round 2 review from Reviewer 1.

Author Response File: Author Response.pdf

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Methodology:

1. The authors used #chatgpt as the only search term; however, even in the posts they cited, #ai was very common, indicating a need to search for that as well.

2. Not including this search term is a critical problem because the final sample of posts is small (211). Given the claim of a worldwide representative search, 211 analyzed posts are not enough. The thinness of individual posts can be compensated for by quantity; however, here the quantity does not justify the method.

3. What is the source of the limit to English? Is there literature about how many posts (or just academic posts) are in English vs. non-English? The authors need to provide justification, since there is a claim to go worldwide.

4. The authors missed the fact that the post coded in lines 203-206 was from someone who appears to be a student. If they missed this one, were their reliability checks working?

5. In line 207 there is an interpretation of a post as negative when in fact it is not negative.

6. The paper is missing a reliability check to validate inclusion criteria and categories.

7. The authors failed to identify responses and comments to posts as a unit of analysis (basic in social media research).

8. The authors failed to identify whether there were repeat posters and, if so, the frequency of repeated posts from the same author.

9. In the table and text, for each category please add the frequency so we can evaluate the breadth of each theme/subtheme.

10. Why does it say in the limitations that over 700 posts were found when only 211 were analyzed?

Author Response

Dear reviewers,

We greatly appreciate your thorough review of our manuscript and the constructive comments you provided. Your insights have been invaluable in enhancing the quality of our work. Below, we have addressed each of your comments in detail. We hope our revisions meet your expectations and look forward to publishing our findings.

 

Reviewer 1

  1–2. The authors used #chatgpt as the only search term; however, even in the posts they cited, #ai was very common, indicating a need to search for that as well. Not including this search term is a critical problem because the final sample of posts is small (211). Given the claim of a worldwide representative search, 211 analyzed posts are not enough. The thinness of individual posts can be compensated for by quantity; however, here the quantity does not justify the method.

We chose to focus on #chatgpt as the search term because ChatGPT was the most widely used and most publicised GenAI tool at the time and remains highly prevalent in academia. This provided a focused and relevant dataset directly related to our research objectives. Using a broader term like #ai could have generated a large number of irrelevant social media posts, diluting the specificity and applicability of our findings. Therefore, we believe our approach is justified given the context and goals of our study.

Moreover, we framed the title and literature search within the broader context of 'GenAI' to future-proof this piece, anticipating that other GenAI tools will gain popularity over time. This strategy was based on sound advice from other publications in this field, ensuring our study remains relevant as the landscape of GenAI evolves. However, reflecting on your comment here, we have returned to the abstract and introduction section to make sure we have been explicit about the framing of ChatGPT in the existing literature and our study.
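For concreteness, a hashtag-focused search of this kind, combined with the English-language restriction and the April–July 2023 collection window described elsewhere in this record, might be expressed against the X API v2 full-archive search endpoint roughly as follows. This is only a sketch: the bearer token, the access level required for the endpoint, and the pagination handling are assumptions, not the authors' documented procedure:

    import requests

    # Illustrative sketch of the kind of hashtag search described above,
    # using the X (Twitter) API v2 full-archive search endpoint.
    # The bearer token, access level, and pagination are assumptions.
    BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # hypothetical placeholder

    resp = requests.get(
        "https://api.twitter.com/2/tweets/search/all",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={
            "query": "#chatgpt lang:en",        # hashtag plus English-only filter
            "start_time": "2023-04-01T00:00:00Z",
            "end_time": "2023-07-31T23:59:59Z",
            "max_results": 100,                 # per-page limit
        },
        timeout=30,
    )
    resp.raise_for_status()
    for tweet in resp.json().get("data", []):
        print(tweet["id"], tweet["text"][:80])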

 

  3. What is the source of the limit to English? Is there literature about how many posts (or just academic posts) are in English vs. non-English? The authors need to provide justification, since there is a claim to go worldwide.

 

We state clearly in our methods that we reviewed posts in English language only:

The included posts could originate from any location globally, provided they were in the English language.

We also already acknowledge this in our discussion: “By focusing on English-language posts, we ensured consistent linguistic analysis, offering a concentrated view of prevailing sentiments. However, these restrictions might have led to a potentially skewed or incomplete understanding of broader perspectives and cultural nuances.”

  4. The authors missed the fact that the post coded in lines 203-206 was from someone who appears to be a student. If they missed this one, were their reliability checks working?

Thank you for this observation; we apologise for this error, and the quote has been removed.

The whole research team (n=3) has since re-read all social media posts, checking them against their codes. We have clarified this check in the data analysis subsection of the methods section:

All three researchers (MW, CH, RU) read all the posts to verify inclusion criteria and agree on the initial coding and broader categories before the next stage.

Braun & Clarke’s reflexive thematic analysis does not require ‘reliability checks’. However, we acknowledge that a) our approach to qualitative analysis was unclear, and b) we did not specify who (and how many researchers) reviewed the dataset. To improve clarity, we have now added more detail to the ‘data analysis’ subsection of the Methods section:

Data analysis was conducted utilising Microsoft Excel. The dataset is available on Figshare: https://figshare.com/s/03d9e3b335ae5f24e5a5. A reflexive thematic analysis was employed (Braun & Clarke, 2019). Two researchers (MW, CH) familiarised themselves with the dataset, initially reading through the posts again. The same researchers then generated initial inductive codes independently but later discussed and generated an initial set of codes applied across the whole dataset (Figure 1). These codes were part of three broader categories: Opinion (n=82), Advice-seeking (n=28), and Resource sharing (n=84). Seven posts were categorised as miscellaneous/other and were therefore excluded from the analysis. Consequently, themes were based on the analysis of n=194 posts. All three researchers (MW, CH, RU) read all the posts to verify inclusion criteria and agree on the initial coding and broader categories before the next stage.

Further coding of posts was conducted to understand nuances in the data. Initially, this process was conducted separately for the three broad categories (Opinion, Advice-seeking, and Resource sharing). For the advice-seeking posts (MW), additional coding identified the specific challenges or concerns staff were articulating. For opinion posts (RU), where sentiment had already been determined (positive, negative, mixed), additional coding described the specific elements of ChatGPT that users either liked or disliked. For the resource-sharing posts, additional codes were created to explore the nature of the shared resources. The research team reviewed the refined codes across all three categories, resolved any discrepancies in interpretations, and modified codes where necessary.

Subsequently, the research team collaboratively generated overarching themes and subthemes across the three refined categories (advice-seeking, opinion, and resource-sharing). The shared themes across the three categories allowed for a more nuanced and detailed exploration of the inherent patterns within the data. Themes were then clearly defined and named, and quotes (i.e., posts) were selected to represent these themes and subthemes across the dataset.
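As a rough illustration of the independent-coding-then-reconciliation step described above, the comparison of two coders' category assignments could be mechanised along these lines. Everything here, from the function name to the post IDs and codes, is invented for illustration and does not reflect the authors' actual Excel workflow:

    # Hypothetical illustration of the reconciliation step described above:
    # two researchers code the same posts independently, then discuss any
    # posts on which their codes disagree. Names and data are invented.

    def find_discrepancies(codes_a: dict, codes_b: dict) -> list:
        """Return post IDs where the two coders assigned different codes."""
        return [post_id for post_id in codes_a
                if codes_b.get(post_id) != codes_a[post_id]]

    coder_mw = {"post_001": "Opinion", "post_002": "Advice-seeking", "post_003": "Resource sharing"}
    coder_ch = {"post_001": "Opinion", "post_002": "Resource sharing", "post_003": "Resource sharing"}

    for post_id in find_discrepancies(coder_mw, coder_ch):
        print(f"{post_id}: MW={coder_mw[post_id]!r} vs CH={coder_ch[post_id]!r} -> discuss")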

  5. In line 207 there is an interpretation of a post as negative when in fact it is not negative.

 

We have revised the sentence to clarify this interpretation. The new sentence reads:

"One explanation for the previously mentioned negative reactions to GenAI could be a lack of knowledge and understanding of such technologies, as alluded to by one user:

'Talking to my 11yo about #chatGPT last night & reasons why I didn't think he should use it. … Wondering whether my concerns come from an academic perspective or just from my lack of understanding. I think I need to increase my knowledge in order to support him with its potential.'"

 

  6. The paper is missing a reliability check to validate inclusion criteria and categories.

As stated in our response to comment 4 above, Braun & Clarke’s reflexive thematic analysis does not require such measures. Please see the response to comment 4.

 

  7–8. The authors failed to identify responses and comments to posts as a unit of analysis (basic in social media research). The authors also failed to identify whether there were repeat posters and, if so, the frequency of repeated posts from the same author.

Thank you for these comments. We have addressed them together as we believe they can be discussed collectively.

For this topic and our review of the dataset, we did not feel these distinctions were necessary. However, we have now added these points as limitations in the discussion section:

One consideration for our study is that we did not specifically distinguish between original posts and replies or comments in our analysis, nor did we identify repeat posters or analyse the frequency of repeated posts from the same user. This approach allowed us to maintain a broad perspective on the dataset and focus on overall themes and patterns. However, differentiating between these types of interactions and recognising repeat contributors could provide additional insights into user engagement, sentiment, and the dynamics of online discussions. Future research could benefit from incorporating these levels of analysis to enhance understanding of individual contributions within social media contexts.

 

  9. In the table and text, for each category please add the frequency so we can evaluate the breadth of each theme/subtheme.

This is not common practice in reflexive thematic analysis. However, to ensure transparency with the dataset, we have now published it on Figshare: https://figshare.com/s/03d9e3b335ae5f24e5a5

 

  10. Why does it say in the limitations that over 700 posts were found when only 211 were analyzed?

Thank you for picking up on this. This figure reflected the posts generated in the initial search. We have changed it to represent the number of posts analysed instead.

Reviewer 2 Report

Comments and Suggestions for Authors

The way in which the data is presented should be improved for a journal like this. I strongly recommend making use of software specialised in the analysis of qualitative data, and trying to make the most of it. In the case of this paper, I would prepare a co-occurrence table and include it, because it will provide the reader with a concise perspective of the corpus that has been analysed. You include examples of tweets, but I think this is not enough for the scholarly community; you should go beyond this, and I think that the co-occurrence table (which is a graph) would provide your paper with a stronger methodological framework.
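For context, the code co-occurrence table suggested here can be derived in a few lines once each post carries a set of codes. The sketch below uses pandas, with invented codes and posts standing in for the study's actual coding; it is an illustration of the technique, not the reviewer's or authors' procedure:

    import pandas as pd

    # Invented example: each post carries the set of codes applied to it.
    coded_posts = [
        {"assessment", "academic integrity"},
        {"assessment", "training"},
        {"academic integrity", "policy"},
        {"assessment", "academic integrity", "policy"},
    ]

    codes = sorted(set().union(*coded_posts))
    # One-hot matrix: rows are posts, columns are codes.
    onehot = pd.DataFrame([[code in post for code in codes] for post in coded_posts],
                          columns=codes).astype(int)

    # Code-by-code co-occurrence: off-diagonal cells count posts in which
    # two codes appear together; the diagonal gives each code's frequency.
    cooccurrence = onehot.T @ onehot
    print(cooccurrence)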

Please check that your citation style sticks to the guidelines of the journal. In addition, the section devoted to the conclusions is too short and, consequently, too weak. This issue should be thoroughly addressed.

 

Comments on the Quality of English Language

It is acceptable; minor editing of the language is required.

Author Response

Please see the attachment

Author Response File: Author Response.docx

Reviewer 3 Report

Comments and Suggestions for Authors

The subject is relevant and fascinating! I follow it daily and, as a professor, I also discuss the topic with my students (undergrad and grad levels). I personally use GenAI and recommend that students use it with ethics and respect.

The paper is well written and well structured. The methodology adopted is appropriate for the matter at hand, and Table 1 provides a good starting point for further and deeper analysis. The discussion follows a scientific path, even though the subject can easily be thrown into a biased debate.

The authors are encouraged to pursue a continuation of this work, focusing on lecturers' practices in the classroom. Some guiding questions could be the following: How are they using GenAI? How are they exposing it to students? How are they supporting GenAI use in their classes? Is it possible to conceive a teaching methodology strongly rooted in GenAI? Would it be applicable to countries all over the world in the same way, or would cultural differences require significant adaptations?

Author Response

Please see the attachment

Author Response File: Author Response.docx

Reviewer 4 Report

Comments and Suggestions for Authors

The use of GenAI at universities is a relevant and interesting topic as well as the staff's attitude to this phenomenon. I have some comments and remarks on your manuscript.

It is recommended to have clearly defined research questions.

The posts' selection process (the eligibility criteria and data on how many posts meet them at each step) could be displayed graphically. 

The results in the Data Analysis section can be presented tabularly or graphically.

In the Results section the authors do not discuss the results with specific data, but use phrases like "many posts", "many users", "some users", "some staff members", etc. Using specific data when commenting on research results would be more scientific and well-founded. 

The conclusion could be more extensive and summarise the research outcomes.

Citations should be formatted as specified in the paper template.

Author Response

Dear reviewers,

We greatly appreciate your thorough review of our manuscript and the constructive comments you provided. Your insights have been invaluable in enhancing the quality of our work. Below, we have addressed each of your comments in detail. We hope our revisions meet your expectations and look forward to publishing our findings.

 

  1. The use of GenAI at universities is a relevant and interesting topic as well as the staff's attitude to this phenomenon. I have some comments and remarks on your manuscript.

Thank you.

 

  2. It is recommended to have clearly defined research questions.

We agree; this gives clarity to the overall narrative. We have added the following research questions:

What are the perspectives of higher education teaching staff on students' use of ChatGPT in academic settings as expressed on the social media platform, X?

What approaches do higher education teaching staff propose for the appropriate use of ChatGPT by students in academic practices?

 

  3. The posts' selection process (the eligibility criteria and data on how many posts meet them at each step) could be displayed graphically.

Thank you for this suggestion. We have now added a figure (Figure 1).

 

  4. The results in the Data Analysis section can be presented tabularly or graphically.

Thank you. We combined this information into the same figure as for the comment above (Figure 1).

 

  5. In the Results section the authors do not discuss the results with specific data, but use phrases like "many posts", "many users", "some users", "some staff members", etc. Using specific data when commenting on research results would be more scientific and well-founded.

 

The quotes that accompany each section are indeed ‘specific data’, as they are the social media posts themselves. This style of reporting is common in qualitative research and aligns with Braun & Clarke’s reflexive thematic analysis. To clarify this, we have now added more detail to the ‘data analysis’ subsection of the Methods section:

Subsequently, the research team collaboratively generated overarching themes and subthemes across the three refined categories (advice-seeking, opinion, and resource-sharing). The shared themes across the three categories allowed for a more nuanced and detailed exploration of the inherent patterns within the data. Themes were then clearly defined and named, and quotes (i.e., posts) were selected to represent these themes and subthemes across the dataset.

 

To ensure transparency with the dataset, we have now published it on Figshare: https://figshare.com/s/03d9e3b335ae5f24e5a5

 

  6. The conclusion could be more extensive and summarise the research outcomes.

 

We have now reviewed the conclusion with this in mind:

This study explored higher education teaching staff's perspectives on students' use of ChatGPT in academic settings, as expressed on the social media platform X. The analysis highlighted a mixed sentiment among staff. Concerns included GenAI's potential to weaken critical thinking, reduce human involvement, and disrupt traditional teaching methods. Conversely, there was recognition of its ability to enhance creativity, personalise education, and support higher-order thinking tasks.

The findings also highlight the need for improved GenAI literacy among staff through comprehensive training and support. Distrust in institutional responses to GenAI underscores the importance of establishing clear guidelines and ethical frameworks for its use in academia.

Despite concerns, there was optimism about GenAI's role in promoting meaningful discussions about the future of education. The rapid evolution of GenAI requires prompt, ethical integration by universities. Inclusive assessment methods that consider GenAI's capabilities while maintaining academic integrity are essential.

In conclusion, while GenAI presents both challenges and opportunities, a balanced approach is essential to harness its benefits while mitigating risks. This study contributes to ongoing discussions on AI integration in higher education, emphasising the need for collaboration, ethical considerations, and continual adaptation to technological advancements.

  7. Citations should be formatted as specified in the paper template.

 

Thank you, this has now been amended.

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The authors have done very little to respond to the methodological critique provided on the first draft! In essence, they have admitted the shortcomings but did nothing to correct them. As a result, I do not recommend publication until some of the work is redone. Stated differently: research based on very short social media posts, with fewer than 100 quotations, is too thin a database to justify the conclusions. The other markers (such as likes) that could "thicken" the results are ignored as well.

Comments on the Quality of English Language

The writing is fine, with a few minor language issues.

Reviewer 4 Report

Comments and Suggestions for Authors

The authors have made the necessary corrections.
