1. Introduction
Generative AI is a branch of artificial intelligence that focuses on creating technological systems capable of producing human-like output in different formats such as images, text, audio, and even videos [
1]. Generative AI produces human-like output using artificial neural networks, natural language generation, and machine and deep learning [
2]. Generative AI can be employed in various fields, including creative design, content creation for entertainment, education, marketing, and many more.
Among the various applications of generative AI, one prominent example is generative AI writing tools. Generative AI writing tools are software applications that use artificial intelligence to create written content. Users provide prompts to these software applications, and, based on these prompts, the application produces different forms of content, such as essays, research articles, stories, presentations, poems, translations, and technical documentation [
3].
Generative AI has made remarkable progress over the last twenty years. One of the early indicators of AI’s potential in handling and generating human-like text and responses was IBM’s Watson, which competed on the quiz show “Jeopardy!” in 2011 and showcased natural language processing [
4]. Additionally, Apple’s Siri was introduced in 2011 as a digital assistant that enabled voice-based interactions and tasks [
5]. Another breakthrough that led to subsequent AI advancements was the introduction of Generative Adversarial Networks (GANs) in 2014, which were designed to generate realistic images [
6]. Another AI breakthrough that revolutionized natural language processing and generation was the transformer architecture, introduced in 2017, which has served as the backbone of many large language models [
7]. In the past few years, OpenAI has been at the forefront of developing large language models (LLMs) by releasing the Generative Pre-trained Transformer (GPT) family that demonstrated the potential of these models to generate coherent and contextually relevant text. Currently, generative AI writing tools have become essential across various sectors due to their advanced capabilities and widespread adoption. Some of the most notable tools include OpenAI’s Chat Generative Pre-Trained Transformer (ChatGPT), Jasper AI, Copy.ai, Writesonic, Rytr, Wordtune, Grammarly, ShortlyAI, QuillBot, INK Editor, and Scribe [
8,
9]. Generative AI tools have fundamentally transformed various industries, and education is no exception. Their introduction has brought about significant changes in teaching and learning, and the widespread adoption of ChatGPT in particular has prompted educators and researchers to consider its implications for education.
Generative AI writing tools provide several services for university students, including translation, writing assistance, creative text generation, automated interaction, support for language learning, feedback on writing, and the automatic summarization of text [
10,
11,
12,
13]. These services might offer several benefits for students, e.g., enhancing learning experiences [
14], customizing education to individual needs [
15], fostering better collaboration and communication among students and between educators and students [
13], improving access and equity [
15], increasing efficiency and productivity, supporting critical thinking and analysis [
16], and boosting creativity and innovation [
17].
However, despite the services provided by generative AI writing tools and their potential benefits in education, there are concerns about the use of these tools, including hindering the development of essential writing skills, which might reduce learning effectiveness [18], negatively affecting job prospects and professional growth [19], requiring advanced technical skills, risking plagiarism and compromising originality [20,21], raising issues of authorship, accountability, and fair use [21,22], raising security and privacy issues [23,24], and spreading misinformation or being manipulated [25].
There are several factors that raise the importance of understanding university students’ perceptions of new technologies, e.g., generative AI writing tools, in education. For instance, students’ perceptions of new technologies can inform best practices for integrating these technologies into educational settings. Furthermore, educators and policymakers can use students’ perceptions of new technologies to adapt the use of these technologies to address students’ needs while promoting effective learning outcomes. Students’ perceptions of generative AI tools vary from one region to another. These variations are influenced by factors such as infrastructure, funding, resources, cultural attitudes toward innovation, supportive government policies, access to information, and collaborative networks. Therefore, understanding students’ perspectives on these technologies in specific educational settings is essential. Jordan is a developing country with a vision to achieve a significant technological leap in the integration of technology in education [
26,
27,
28]. The current study examined Jordanian university students’ insights into generative AI writing tools, focusing on their familiarity with, perceived concerns about, and perceived benefits of these tools in their academic work. In addition, the study examined the differences in Jordanian university students’ insights into generative AI writing tools based on their gender and program of study.
3. Previous Studies
Students’ perceptions of generative AI writing tools have been examined in several studies across the world. For instance, Jowarder [
44] examined social science students’ awareness, adoption, and perceived usefulness of an AI writing tool, i.e., ChatGPT, and its impact on their academic performance. The study followed a qualitative research design in which data were collected using semi-structured interviews with two hundred undergraduate students from the United States. The results showed that the participants were mostly aware of ChatGPT and had utilized it for academic purposes. Variables such as perceived usefulness, ease of use, and social influence played significant roles in students’ adoption of this technology. The findings also revealed that ChatGPT positively impacted students’ academic performance, with students reporting that it helped them comprehend complex concepts and provided relevant study materials.
In addition, Sánchez [
45] conducted a study that aimed to analyze higher education students’ perceptions of using generative AI technology, i.e., ChatGPT, in their academic activities at a university in Mexico. A mixed-methods research design was used to carry out the study, which involved collecting data using a questionnaire with closed-ended questions. University students from various majors participated in the study. The results showed that a minority of respondents had used ChatGPT in their school practices, with many of the participants reporting that they did not consider it suitable for educational tasks.
In another study, Chan and Hu [
46] examined higher education students’ perceptions of generative AI technologies, e.g., ChatGPT. The study followed a descriptive research design in which 399 undergraduate and graduate Chinese students completed an online questionnaire. The results showed that the students were generally familiar with generative AI technologies and were positive about using them in teaching and learning. The students believed that generative AI technologies were useful in facilitating personalized learning, writing and brainstorming assistance, and research and analysis capabilities. However, concerns about accuracy, privacy, ethical issues, and the impact on personal development, career prospects, and societal values were also expressed.
In another study, Arowosegbe, Alqahtani, and Oyelade [
47] assessed students’ use and perceptions of generative AI tools in higher education in the United Kingdom (UK). The study followed a descriptive research design in which 136 students completed a questionnaire. The findings revealed that 61% of respondents were aware of AI tools in academia, with 52% having personal experience using them. Additionally, 56% agreed that AI provides an academic edge, and 40% had a positive overall perception of AI use in academia. However, challenges such as plagiarism concerns, privacy issues, and a lack of clarity from universities were highlighted.
In India, Raman, Mandal, Das, Kaur, Sanjanasri, and Nedungadi [
48] conducted a study that aimed to examine higher education students’ adoption of a generative AI writing tool, i.e., ChatGPT. The researchers adopted Rogers’ diffusion of innovation theory with sentiment analysis as a theoretical framework to guide their investigation. They used a mixed-methods design in which 288 students participated in the study. The results showed that the participants believed that ChatGPT was innovative, compatible, and user-friendly, enabling the independent pursuit of educational goals. In Pakistan, Imran and Lashari [
49] conducted a study that aimed to examine university students’ perceptions of the use of generative AI technology, i.e., ChatGPT, and its impact on their writing skills. The study followed a qualitative design in which 24 students were interviewed. The study highlighted a mixed trend in students’ perceptions of ChatGPT. The results showed that most students believed that ChatGPT hindered creative writing. However, some students felt it could be beneficial under proper supervision and controlled conditions. In Vietnam, Ngo [
50] examined university students’ perceptions of the use of a generative AI writing tool, i.e., ChatGPT, in education. The researcher followed a mixed-methods design in which 200 students completed a questionnaire, and interviews were conducted with 30 students. The results showed that students had a positive perception of ChatGPT’s application. The students reported several benefits of ChatGPT that included saving time, providing information in various areas, providing personalized tutoring and feedback, and illuminating ideas in writing. However, they also reported some barriers to using ChatGPT that included the inability to assess the quality and reliability of sources, the inability to cite sources accurately, and the inability to replace words and use idioms accurately.
In Kazakhstan, Yilmaz, Maxutov, Baitekov, and Balta [
51] conducted a study that aimed to examine university students’ attitudes towards ChatGPT. The researchers used the Technology Acceptance Model (TAM) to guide their investigation. The data were collected from 239 students using a questionnaire. The results showed that the students had positive perceptions of the use of ChatGPT. In addition, the results showed that their attitudes toward the use of ChatGPT were not influenced by their gender, grade levels, or major.
In Saudi Arabia, Al-Qahtani and Al-Dayel [
52] conducted a study that aimed to examine university students’ awareness of artificial intelligence applications in education and their attitudes toward using these applications. The study followed a descriptive research design in which data were collected from 333 students with different majors using a questionnaire. The results showed that the students had a high degree of conceptual awareness of artificial intelligence. In addition, the results demonstrated that students generally reported a positive attitude regarding the use of these applications in education. In another study in Saudi Arabia, Abouammoh et al. [
53] conducted a study to explore the knowledge, perceived benefits, concerns, and limitations of using generative AI tools, i.e., ChatGPT, in medical education among faculty members and students. The researchers used focused interviews to collect qualitative data from the participants. The results showed that the participants demonstrated good knowledge of ChatGPT and its functions. The participants listed several benefits of using ChatGPT, including information collection, summarization, and time saving. They also reported some concerns related to its negative effects on critical thinking, access limitations, trust in generated content, and ethical use.
In a study that focused on pre-service teachers’ perceptions, Markos, Prentzas, and Sidiropoulou [
54] examined Greek pre-service teachers’ perceptions of the use of a generative AI writing tool, i.e., ChatGPT, in higher education. The study followed a descriptive research design in which 257 students completed a structured questionnaire. The results showed that students believed that the tool has both strengths and drawbacks. The main strengths were related to the tool’s adaptability and its role in enhancing learning experiences, academic research, and collaboration. The main concerns were related to data privacy, the originality of work, and the potential introduction of biases or inaccuracies. In another study that focused on future teachers, Fontao, Santos, and Lozano [
55] investigated future Spanish teachers’ perspectives on the application of a generative AI writing tool, i.e., ChatGPT, in education. The study followed a descriptive research design in which 70 students completed a questionnaire. The results showed that the participants believed that ChatGPT is a beneficial tool for students in terms of enhancing various aspects of teaching performance. However, they had concerns regarding the use of such tools in education, in terms of the difficulty of detecting plagiarism and the potential decrease in students’ critical thinking skills.
In Jordan, Habes, Alanani, Youssef, and Sharif [
56] examined the factors behind university students’ preferences for a generative AI writing tool, i.e., ChatGPT. They used the technology readiness theory as a theoretical framework to guide their investigation. Three hundred and sixty-five students participated in the study by completing a questionnaire. The results showed that technology readiness, encompassing innovativeness and optimism, had a positive correlation with the perceived ease of use and perceived usefulness regarding ChatGPT usage. Additionally, the perceived ease of use and perceived usefulness had a positive correlation with students’ intention to use ChatGPT in Jordan. In another study that was conducted in Jordan, Gammoh [
57] examined the risks, misuses, and challenges associated with Jordanian university students’ use of a generative AI writing tool, i.e., ChatGPT. The study followed a qualitative research design in which 25 academics from various professional backgrounds in both public and private Jordanian universities participated in interviews. The results showed that four key risks were associated with the use of ChatGPT in academia: plagiarism and compromised originality; overdependency on technology; weakened critical thinking skills; and decreased overall assignment quality.
The reviewed studies showed that generative AI tools might have a significant role in education. However, they also showed that students had varied perceptions of these tools across different regions and academic disciplines. Previous studies showed that students believed that generative AI tools might be useful in enhancing their academic performance; facilitating personalized learning; assisting with complex concepts; providing relevant study materials; offering tutoring and feedback; assisting in brainstorming; helping them to save time and access diverse information; facilitating research, analysis, and collaboration; aiding in summarization, information collection, and improving writing performance; and fostering adaptability in educational environments. However, students also reported concerns about generative AI tools related to creativity, accuracy, privacy, plagiarism, originality, critical thinking, overdependency on technology, and ethical considerations. These mixed perceptions raise the need for continuous research to navigate the evolving landscape of AI in education and to further understand students’ perceptions across different regions and academic disciplines. This study makes a notable contribution to the existing research on generative AI writing tools by focusing specifically on the insights of Jordanian university students from a College of Education using a cross-sectional descriptive research design. While previous studies have explored the perceptions of students from various countries and disciplines, this research addresses a gap by providing a comprehensive exploration of College of Education students’ perceptions of generative AI writing tools within the Jordanian context.
The current study examined Jordanian College of Education students’ insights into generative AI writing tools regarding their familiarity with, perceived concerns about, and perceived benefits of these tools in their academic work. In addition, the study examined the differences in these students’ insights into generative AI writing tools based on their gender and program of study. Few studies offer as detailed an analysis of these three dimensions, i.e., familiarity, concerns, and benefits, and their subdimensions as this one, making it particularly valuable.
6. Data Collection Tool
The questionnaire consists of four parts. The first part aims to collect data regarding participants’ gender and program. The second part aims to collect data regarding participants’ familiarity with generative AI writing tools. The familiarity scale was developed after reviewing previous studies [
29,
30,
31,
44]. The familiarity scale consisted of 14 items distributed evenly across seven dimensions: technical knowledge, practical experience, conceptual understanding, critical thinking and evaluation, awareness of applications, ethical considerations, and interest and engagement.
The third part aims to collect data regarding participants’ concerns about generative AI writing tools. The concerns scale was developed based on reviewing previous studies [
18,
33,
34,
35,
36,
37]. The concerns scale consisted of 14 items distributed evenly across seven dimensions: learning efficacy, career and professional development, technical knowledge, plagiarism and originality, ethical considerations, security and privacy, and misinformation and manipulation.
The fourth part aims to collect data regarding participants’ perceived benefits from generative AI writing tools. The benefits scale was developed based on reviewing previous studies [
14,
38,
39,
40,
41,
42,
43]. The benefits scale consisted of 14 items distributed evenly across seven dimensions: enhanced learning experiences, personalization and differentiation, collaboration and communication, access and equity, efficiency and productivity, critical thinking and analysis, and creativity and innovation.
The response options utilized in the familiarity, concerns, and benefits scales comprised a five-point Likert scale, with each numerical value corresponding to a specific level of agreement. Specifically, “1” denoted “Strongly disagree”, “2” indicated “Disagree”, “3” represented “Neutral”, “4” signified “Agree”, and “5” reflected “Strongly agree”.
The reliability of each scale in the questionnaire instrument was verified using Cronbach’s Alpha. The values of Cronbach’s Alpha ranged between 0.91 and 0.94, indicating excellent internal consistency for each scale [
58], showing that the items in the scale were strongly related to each other and were likely measuring the same construct.
Table 1 shows a summary of the reliability analysis. In addition, the validity of the questionnaire was checked through a panel of experts in the field of education.
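To make the reliability check concrete, the following minimal Python sketch shows how Cronbach’s Alpha can be computed for one of the scales. The response matrix here is randomly generated as a hypothetical stand-in for the study’s data; only the formula itself is standard.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's Alpha for a (respondents x items) matrix of Likert scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items in the scale
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 95 respondents answering the 14 familiarity items (1-5 Likert)
rng = np.random.default_rng(0)
familiarity_responses = rng.integers(1, 6, size=(95, 14))
print(round(cronbach_alpha(familiarity_responses), 2))
```

Random responses such as these typically yield an alpha near zero; the values of 0.91 to 0.94 reported in Table 1 reflect items that are genuinely correlated within each scale.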
9. Results and Discussion
9.1. First Research Question: To What Extent Are University Students Familiar with Generative AI Writing Tools?
The results regarding the first research question, which aimed to measure university students’ familiarity with generative AI writing tools, show that, while all the dimensions of familiarity fall within the “Moderate” range, there are notable disparities among them. The students responded least positively to the “Technical knowledge” dimension (M = 2.61, SD = 0.90), while they responded most positively to the “Interest and engagement” dimension (M = 3.48, SD = 0.96). The lower technical familiarity may be related to the nature of the students’ academic programs in a College of Education. Colleges of education usually offer programs that might not provide students with sufficient opportunities to develop technical competencies. Moreover, the participants might have limited access to specialized training, workshops, or courses on generative AI, which could hinder them from gaining technical expertise. Despite their lower technical familiarity, students showed a comparatively high level of interest and engagement. Interest and engagement, in the context of learning about and exploring generative AI writing tools, refers to the level of active participation and interest an individual demonstrates in the subject. The students seem interested in seeking out more information about generative AI writing tools. Their relatively high level of interest and engagement indicates that they recognize the potential benefits of these tools and are eager to learn about and use them. This motivation could be driven by the perceived advantages of AI in enhancing their academic work. The results indicate that, while the students are motivated to use generative AI writing tools, they may need additional support and resources to build their technical competence with these tools. Addressing technical competence might help ensure a more balanced and comprehensive understanding, enabling the students to apply their interest and engagement effectively in practical and technical contexts.
Students’ responses to four dimensions, namely, the “Conceptual understanding”, “Practical experience”, “Awareness of applications”, and “Critical thinking and evaluation” dimensions, were particularly close to each other. The scores in these dimensions suggest that the students have a foundational comprehension of generative AI writing tools, but there is room for deeper understanding and exploration of these tools. The larger standard deviation in the “Practical experience” dimension (M = 3.02, SD = 1.11) indicates more unevenness in this dimension among the students. While some students may have considerable hands-on involvement with generative AI writing tools, others might have less, suggesting a need for more equal access to these tools and consistent practical training opportunities. The score in the “Awareness of applications” dimension (M = 3.21, SD = 1.16) suggests that students are moderately aware of how generative AI writing tools can be used in various domains. Furthermore, the score in the “Critical thinking and evaluation” dimension (M = 3.13, SD = 1.01) indicates some capability in analyzing and assessing the effectiveness and implications of these tools, but it also suggests that there is room for improvement with more practice and guidance. Students’ scores in the “Ethical considerations” dimension (M = 3.46, SD = 1.13) are among the higher ones, reflecting a relatively better familiarity with ethical issues related to the use of generative AI writing tools, though further discussion and exploration of ethical guidelines could enhance this understanding.
Generally, students have a moderate level of familiarity with generative AI writing tools (
M = 3.14,
SD = 0.81). This score suggests that, while students have a foundational understanding of and engagement with these tools, there is considerable room for improvement. These findings align with those of previous studies, which showed that students were generally aware of the different aspects of generative AI writing tools [
44,
46,
47].
Table 3 shows the means and standard deviations for university students’ responses to the questionnaire dimensions that examined their extent of familiarity with generative AI writing tools.
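The qualitative labels used throughout this section (e.g., “Moderate”) map mean scores onto bands of the five-point scale. The exact cut-offs used by the authors are not restated in this section; the sketch below assumes the common equal-width convention, which is consistent with how the scores above are labeled.

```python
def likert_level(mean_score: float) -> str:
    """Map a 1-5 Likert mean to a qualitative level (assumed equal-width bands)."""
    if mean_score <= 2.33:
        return "Low"
    if mean_score <= 3.67:
        return "Moderate"
    return "High"

# Familiarity dimension means reported above
for name, mean in [("Technical knowledge", 2.61),
                   ("Interest and engagement", 3.48),
                   ("Overall familiarity", 3.14)]:
    print(f"{name}: M = {mean} -> {likert_level(mean)}")
```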
9.2. Second Research Question: To What Extent Do University Students Have Concerns About Using Generative AI Writing Tools in Their Academic Work?
The results regarding the second research question, which aimed to measure university students’ concerns about using generative AI writing tools in their academic work, show that, while all the dimensions of concerns fall within the “Moderate” range, there are notable disparities among them. The students responded least positively to the “Learning efficacy” dimension (
M = 3.18,
SD = 1.10). This dimension explores students’ concerns regarding the negative impact of generative AI writing tools on students’ critical thinking skills and the effectiveness of generative AI writing tools in advancing their academic knowledge. The students responded most positively to the “Misinformation and manipulation” dimension (
M = 3.55,
SD = 1.06). This dimension explores students’ concerns about the potential negative impacts of generative AI on academic discourse and research integrity. Such a result suggests that, while students are moderately concerned about the impact of generative AI writing tools on their learning processes, this concern is not as pronounced as their fears about the integrity and trustworthiness of AI-generated content. Concerns about misinformation and manipulation can negatively influence the adoption of generative AI writing tools. When students doubt the reliability and accuracy of the content generated by these tools, they may be hesitant to integrate them into their academic work. Such concerns can be addressed by providing training on how to evaluate AI-generated content and by establishing ethical use policies that guide the responsible use of generative AI writing tools [
59].
The score in the “Security and privacy” dimension ranks as the second highest in terms of concern among university students (
M = 3.45,
SD = 1.08). This dimension explores students’ apprehensions about the potential risks associated with using AI platforms, particularly in terms of data security and privacy. Students’ concerns regarding potential security and privacy issues associated with using AI platforms were reported in previous studies [
35,
60]. Another dimension that ranks among the highest concerns is the “Ethical considerations” dimension (
M = 3.39,
SD = 0.92). This dimension examines ethical issues related to the usage of generative AI writing tools in academic work, e.g., the misuse or misinterpretation of data without proper ethical safeguards. The “Security and privacy” and the “Ethical considerations” concerns suggest a need for enhanced data protection measures, transparency, comprehensive training, and ethical guidelines [
23,
35] to ensure the secure and responsible use of generative AI writing tools in academic work.
The score in the “Technical knowledge” dimension (M = 3.28, SD = 1.02), which measures students’ concern about the technical knowledge required to use AI tools effectively, suggests that students feel moderately challenged by the complexity and accessibility of these technologies. Furthermore, the “Plagiarism and originality” dimension addresses students’ worries about unconsciously plagiarizing AI-generated content and maintaining originality and academic integrity in their work. The score in this dimension (M = 3.34, SD = 1.10) indicates that students are moderately worried about the risk of using AI-generated content without proper attribution, which could lead to unintentional plagiarism, and they may fear that over-reliance on AI tools could weaken their intellectual contributions and compromise the integrity of their research. The dimension that explores students’ concerns about how the increasing reliance on generative AI tools might impact their future career expectations and academic achievements is “Career and professional development”. The score in this dimension (M = 3.24, SD = 0.92) suggests that the students are moderately aware of the transformative impact of AI on the job market and of the need to adapt.
Generally, students have a moderate level of concern about using generative AI writing tools (
M = 3.35,
SD = 0.85). This score suggests that the students are wary of the potential drawbacks and ethical implications of generative AI writing tools. Some of the findings about students’ concerns regarding the negative impact of generative AI writing tools align with those of previous studies, which showed that students had negative perceptions of the suitability of generative AI writing tools for educational tasks [
45] and of the use of such tools, i.e., ChatGPT, for creative writing [
49].
Table 4 shows the means and standard deviations for university students’ responses to the questionnaire dimensions that examined their extent of concerns about using generative AI writing tools.
9.3. Third Research Question: To What Extent Do University Students Perceive Benefits from Using Generative AI Writing Tools in Their Academic Work?
The results regarding the third research question, which aimed to measure university students’ perceived benefits from using generative AI writing tools in their academic work, show that all but one of the benefit dimensions fall within the “Moderate” range and that the disparities among them are small.
The students responded least positively to the “Enhanced learning experiences” (M = 3.56, SD = 0.94) and “Personalization and differentiation” dimensions (M = 3.56, SD = 1.01). The “Enhanced learning experiences” dimension explores students’ perceptions of how generative AI writing tools can make learning more interactive, engaging, and effective by providing personalized and insightful responses to students’ inquiries. The “Personalization and differentiation” dimension explores students’ perceptions of the effectiveness of generative AI writing tools in adapting to their learning styles and providing personalized assistance and feedback. The score for the “Collaboration and communication” dimension was close to the least positively scored dimensions regarding the benefits of generative AI writing tools (M = 3.57, SD = 0.97). This dimension aims to explore students’ perceptions of how generative AI writing tools can enhance communication between students and educators and promote peer-to-peer learning through interactive and timely feedback.
The students responded most positively to the “Creativity and innovation” dimension (
M = 3.71,
SD = 1.00). This dimension explores students’ perceptions of the potential of AI to introduce new perspectives, encourage unconventional approaches, and inspire creative problem-solving. The benefits of generative AI writing tools, i.e., ChatGPT, in fostering students’ creativity were documented in previous studies [
61,
62]. Such results suggest that, while students recognize and value the role of generative AI writing tools in fostering creativity and innovation, they perceive less benefit in these tools’ ability to fully enhance and personalize their learning experiences, to adapt to learning styles, to provide personalized assistance and feedback, to enhance communication between students and educators, and to promote peer-to-peer learning through interactive and timely feedback.
Three dimensions share similar mean scores (M = 3.66) and close standard deviations, reflecting consistent levels of appreciation for different aspects of generative AI writing tools. These dimensions are “Access and equity”, “Efficiency and productivity”, and “Critical thinking and analysis”. The “Access and equity” dimension explores students’ perceptions of how generative AI writing tools can bridge gaps in access to educational resources, especially for students who might be disadvantaged by location or financial status. The “Efficiency and productivity” dimension explores students’ perceptions of the tools’ ability to save time, automate tasks, and provide quick access to educational resources. The “Critical thinking and analysis” dimension explores students’ perceptions of generative AI writing tools’ capability to present stimulating thoughts and complex information, thereby challenging and improving their analytical skills. The overall consistency in scores across these three dimensions suggests a balanced recognition of the diverse benefits that generative AI writing tools can offer, making them valuable tools for modern education.
Overall, students demonstrate a favorable attitude towards the advantages offered by generative AI writing tools (
M = 3.62,
SD = 0.81). This indicates that students generally acknowledge and appreciate the potential benefits of integrating AI into education, such as fostering creativity, streamlining productivity, promoting fairness, and facilitating critical thinking and analysis. These findings align with those of previous studies, which showed that students believed that generative AI writing tools had several benefits in their academic work [
44,
46,
47]. However, the findings regarding students’ positive perceptions of the use of generative AI writing tools in improving their creativity and innovation contrasted with the findings of another study, which reported that most students believed that ChatGPT hindered creative writing [
49]. The context in which generative AI writing tools are proposed to be used and individual differences in students’ attitudes toward technology could contribute to these varying perceptions.
Refer to
Table 5 for a summary of the means and standard deviations of university students’ responses to the questionnaire sections exploring their perceptions of the benefits associated with the use of generative AI writing tools.
9.4. Fourth Research Question: What Are the Differences in the College of Education Students’ Insights into Generative AI Writing Tools Based on Their Gender and Program?
9.4.1. Insights into Generative AI Writing Tools Based on Gender
The results regarding the differences in College of Education students’ insights into generative AI writing tools based on their gender show that there were no significant differences across familiarity, concerns, and perceived benefits. Female participants reported a mean familiarity of 3.16 (
SD = 0.75), while male participants reported a mean of 3.09 (
SD = 0.99), with a
t-test result of
t(93) = 0.35,
p = 0.73. Similarly, concerns were nearly identical between the two groups (female
M = 3.34,
SD = 0.79; male
M = 3.35,
SD = 1.04;
t(93) = −0.05,
p = 0.96). Perceived benefits also showed no significant difference (female
M = 3.67,
SD = 0.80; male
M = 3.50,
SD = 0.87;
t(93) = 0.87,
p = 0.39). These results suggest that gender does not significantly influence participants’ familiarity with, concerns about, or perceived benefits of generative AI writing tools.
Table 6 shows the results of the independent-samples
t-tests comparing the different dimensions of insights into generative AI writing tools between female and male participants.
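As an illustration of the comparison reported in Table 6, the following Python sketch runs an independent-samples t-test with SciPy. The per-participant scores are simulated stand-ins rather than the study’s data, and the group sizes are hypothetical, chosen only so that the degrees of freedom match the reported t(93).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated familiarity scores on the 1-5 scale (hypothetical group sizes: 66 + 29 = 95)
female_scores = rng.normal(loc=3.16, scale=0.75, size=66)
male_scores = rng.normal(loc=3.09, scale=0.99, size=29)

# Independent-samples t-test (Student's t, equal variances assumed)
t_stat, p_value = stats.ttest_ind(female_scores, male_scores, equal_var=True)
df = len(female_scores) + len(male_scores) - 2
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.2f}")
```

The same procedure applies to the concerns and benefits scales, and to the undergraduate versus graduate comparison reported in Table 7.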
9.4.2. Insights into Generative AI Writing Tools Based on Program
Similarly, the results regarding the differences in College of Education students’ insights into generative AI writing tools based on their program of study show that there were no significant differences across familiarity, concerns, and perceived benefits. Undergraduate participants had a mean familiarity of 3.11 (
SD = 0.70) compared to graduate participants’ mean of 3.17 (
SD = 0.90), with t(93) = −0.36,
p = 0.72. Concerns were also similar (undergraduate
M = 3.31,
SD = 0.84; graduate
M = 3.38,
SD = 0.87;
t(93) = −0.42,
p = 0.67). The perceived benefits showed no significant difference either (undergraduate
M = 3.73,
SD = 0.82; graduate
M = 3.54,
SD = 0.80;
t(93) = 1.14,
p = 0.26). These findings indicate that educational level does not significantly impact participants’ insights into generative AI writing tools.
Table 7 shows the results of the independent-samples
t-tests comparing different dimensions of insights into generative AI writing tools between undergraduate and graduate participants. Overall, both gender and educational level appear to have little effect on familiarity, concerns, and perceived benefits regarding these tools.
10. Conclusions and Recommendations
Students show a moderate familiarity with generative AI writing tools, indicating a foundational but not yet deep engagement. In addition, their concerns are moderate, reflecting a balanced awareness of potential drawbacks. However, the perceived benefits are viewed more positively, especially in terms of creativity and innovation, suggesting that students see significant potential for AI to enhance their educational experiences despite existing concerns. This overall moderate but positive perception underscores the importance of the continued integration, education, and refinement of AI tools in academic settings to maximize their benefits while addressing students’ concerns. Moreover, both gender and educational level appear to have little effect on familiarity, concerns, and perceived benefits regarding generative AI writing tools.
Since Jordan is a developing country with a vision to achieve a significant technological leap in the integration of technology in education, the practical implications of this study are significant. There is a need to bridge the gap in the Jordanian educational system between traditional methods and innovative, digitally supported teaching methods. To improve students’ familiarity with generative AI writing tools, it is essential to enhance technical training through targeted workshops and resources. Addressing the notable gap in technical knowledge will help students to build a balanced and comprehensive understanding of these tools. Additionally, providing more hands-on opportunities, such as lab sessions and practical assignments, will ensure equal access and improve students’ practical experience. Integrating discussions and explorations of ethical considerations into the curriculum will further deepen students’ understanding and application of ethical guidelines, enhancing their overall familiarity with generative AI writing tools.
Addressing students’ concerns about using generative AI writing tools requires a multidimensional approach. For instance, strengthening data security measures and communicating these protocols will alleviate significant concerns about security and privacy. In addition, developing comprehensive ethical guidelines and providing related training will help students navigate potential issues like misinformation, manipulation, and plagiarism. Additionally, improving AI literacy through dedicated courses or modules focusing on the technical aspects of AI tools will reduce perceived complexity and accessibility challenges, further mitigating students’ concerns.
To maximize the perceived benefits of generative AI writing tools, fostering creativity should remain a priority by highlighting the creative potential of these tools within the curriculum. Investing in AI tools that offer personalized learning experiences, adapting to individual learning styles, and providing tailored feedback will enhance educational outcomes. Promoting collaborative learning by utilizing AI tools to facilitate better communication and interaction between students and educators will encourage effective feedback, thereby improving learning experiences and maximizing the benefits of generative AI writing tools in academic settings.
Future studies should integrate both quantitative and qualitative methods to gain a more comprehensive and in-depth understanding of students’ insights into generative AI writing tools, i.e., familiarity, concerns, and perceived benefits. In addition, future research should involve a larger and more diverse sample in terms of disciplines and educational levels to enhance the generalizability of the findings. While convenience sampling and voluntary participation were practical for this study, such a procedure might limit the sample’s representativeness, which might affect the generalizability of the findings.
Since the participants reported a moderate level of familiarity with generative AI writing tools, future studies should examine how to deepen students’ engagement with these tools through educational interventions such as workshops, tutorials, and courses. In addition, the current study raised some specific issues that need further investigation, such as building and evaluating ethical guidelines that would address students’ concerns about misinformation, data security, and privacy related to the use of generative AI writing tools. Future research could investigate the effect of the concerns raised in this study on students’ adoption of generative AI writing tools in the short and long term. In addition, the issue of how generative AI writing tools can enhance creativity and innovation in academic work needs further investigation. For instance, future studies could examine the level of improvement in creative writing based on using generative AI writing tools.