9. Research Results
To respond to this inquiry, descriptive statistical analyses—including means, standard deviations, and percentages—were calculated from participants’ responses across all scale items.
Table 7 presents the aggregate level of Generative Artificial Intelligence (GAI) utilization in enhancing cognitive and research talent, together with the corresponding performance levels for each of the six dimensions among postgraduate learners.
The descriptive results in Table 7 indicate that postgraduate students show a moderate overall level of engagement with Generative AI in developing their cognitive and research abilities, with a mean score of 6.18 (61.80%). Ethical and responsible use records the highest mean (7.80; 78.03%), reflecting strong awareness of transparency, accuracy, and responsible application when using AI in academic work. GAI-supported cognitive skills in literature review and theoretical framework development rank second (6.35; 63.53%), suggesting that students find AI particularly useful for navigating complex texts and synthesizing information.
Moderate engagement is also observed in research design, academic writing, and data analysis (5.43–5.82), indicating uneven integration across research stages. The lower score for AI-assisted data analysis (5.43; 54.31%) may reflect limited familiarity with advanced analytical tools or uncertainty about AI-generated interpretations. Challenges related to GAI use also appear at a moderate level (6.02; 60.17%), pointing to obstacles such as technical limitations and insufficient training.
Overall, the findings suggest that students are in a transitional phase of adoption, with strong ethical awareness but incomplete practical integration, highlighting the need for targeted training and institutional support. In summary, ethical and responsible use shows the highest engagement, while data analysis reflects the lowest, indicating that students prioritize responsible application of GAI but remain less confident in employing it for advanced analytical tasks.
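The paired mean and percentage figures reported throughout are mutually consistent if item scores are expressed on a 10-point scale; this scale is inferred here from the reported numbers rather than restated in the tables:

```latex
% Assuming a 10-point response scale (inferred from the reported figures):
\text{Percentage} = \frac{M}{10} \times 100\%,
\qquad \text{e.g. } \frac{6.18}{10} \times 100\% = 61.8\%.
```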
Table 8 presents the descriptive statistical results for the items of the first dimension of the GAI-RCT Scale, namely GAI-supported cognitive skills in literature review and theoretical framework development.
The results in Table 8 indicate that postgraduate students use GAI at a moderate level to support cognitive skills in literature review and theoretical framework development, with an overall mean of 6.35 (63.53%). This suggests meaningful engagement with AI tools during the early stages of research, though usage is not uniformly high across all tasks.
The most frequent practice is using GAI to translate content from international studies (M = 6.81; 68.08%), highlighting its value in accessing global research beyond language barriers. Students also report strong use of AI for identifying key concepts (M = 6.55) and summarizing lengthy studies (M = 6.44), reflecting the role of GAI in managing extensive academic literature efficiently.
Moderate use appears in tasks requiring deeper analysis—such as organizing literature, interpreting theories, and comparing perspectives—with means between 6.22 and 6.38. This suggests that students rely on AI for clarification and synthesis but still prefer personal judgment for theoretical interpretation.
The lowest mean relates to using AI to design the initial structure of the theoretical framework chapter (M = 5.82; 58.22%), indicating caution in delegating higher-order conceptual tasks.
Overall, students use GAI confidently for information-processing tasks while remaining more reserved in areas requiring advanced theoretical reasoning, reflecting a balanced and thoughtful integration of AI into their research practices. In summary, students show higher engagement with GAI in translation and summarization tasks, while their use is more cautious in structuring theoretical frameworks, underscoring a preference for AI in information management rather than conceptual modeling.
Table 9 presents the descriptive statistical results for the items of the second dimension of the GAI-RCT Scale, namely GAI-enhanced research design and problem-formulation skills.
The results in Table 9 show that postgraduate students use GAI at a moderate level to support research design and problem formulation, with an overall mean of 5.82 (58.18%). This indicates that students are beginning to integrate AI tools into early research planning, though their use remains cautious and not yet fully embedded in methodological practice.
The highest-rated use of GAI involves improving the clarity and suitability of research instrument items (M = 6.14; 61.36%), reflecting confidence in AI’s ability to refine wording and enhance precision. Students also report moderate use of GAI for generating questionnaire and interview items and organizing instrument domains (M = 5.84–5.90), suggesting that AI is viewed as a helpful assistant in structuring data-collection tools.
Lower means appear in tasks requiring deeper conceptual reasoning—such as formulating objectives, developing hypotheses, and identifying variable relationships (M = 5.62–5.76). This pattern suggests hesitation to rely on AI for theoretical modeling or foundational research decisions.
Overall, the findings depict a balanced yet cautious adoption of GAI. Students are comfortable using AI for procedural and practical tasks but remain selective when engaging with conceptual and analytical components of research design. In summary, students show stronger engagement with GAI in refining and structuring research instruments, while their use is weaker in formulating objectives and hypotheses, underscoring a tendency to rely on AI for practical rather than conceptual aspects of research design.
Table 10 presents the descriptive statistical results for the items of the third dimension of the GAI-RCT Scale, namely GAI-assisted data analysis and interpretation skills.
The results in Table 10 show that postgraduate students use GAI at a moderate level for data analysis and interpretation, with an overall mean of 5.43 (54.31%). This indicates that AI tools are beginning to enter the analytical phase of research, though their use remains cautious and not yet fully integrated into students’ routines.
The highest-rated practice is using AI to analyze findings and compare them with prior studies (M = 5.70; 56.96%), reflecting recognition of AI’s value in synthesizing results and situating them within existing literature. Students also report moderate use of AI to explain statistical concepts (M = 5.67) and identify patterns in qualitative data (M = 5.58), suggesting that GAI supports clarification and early-stage coding.
Moderate engagement is also seen in drafting initial versions of results and discussion sections, suggesting tables and charts, and linking findings to hypotheses (M = 5.36–5.58). These uses indicate reliance on AI for preliminary structuring, followed by human refinement.
The lowest means relate to interpreting statistical outputs (M = 5.00) and analyzing qualitative interviews (M = 5.08), reflecting caution in delegating tasks requiring precision and methodological rigor.
Overall, the findings suggest that GAI is becoming a supportive tool for explanation, synthesis, and early drafting, while students maintain careful oversight in areas demanding accuracy and nuanced interpretation. In summary, students show stronger engagement with GAI in synthesizing findings and explaining statistical concepts, while their use is weakest in interpreting statistical outputs and qualitative interviews, underscoring reliance on AI for supportive rather than precision-driven analytical tasks.
Table 11 presents the descriptive statistical results for the items of the fourth dimension of the GAI-RCT Scale, namely GAI-driven academic writing and research refinement skills.
The results in Table 11 show that postgraduate students use GAI at a moderate level to support academic writing and research refinement, with an overall mean of 5.66 (56.55%). The highest-rated uses involve preparing abstracts in Arabic and English (M = 6.02) and rephrasing long sentences into clearer academic language (M = 6.00), indicating that students value GAI for improving clarity, readability, and bilingual communication.
Moderate engagement is also observed in generating recommendations, enhancing linguistic quality, and standardizing citation styles (M = 5.64–5.68), suggesting that students rely on GAI for technical and stylistic refinement. Lower scores appear in tasks requiring structural or conceptual development, such as improving coherence across thesis chapters (M = 5.32) and generating research titles (M = 5.40). These results imply that students prefer to retain control over the intellectual organization of their work.
Overall, GAI is emerging as a useful tool for enhancing clarity and presentation, while students continue to exercise judgment in higher-level conceptual tasks. In summary, students show stronger engagement with GAI in improving clarity, abstracts, and stylistic refinement, while their use is weaker in structuring coherence and generating titles, underscoring reliance on AI for linguistic and technical support rather than conceptual organization.
Table 12 presents the descriptive statistical results for the items of the fifth dimension of the GAI-RCT Scale, namely the ethical and responsible use of GAI in research.
The results in Table 12 show that learners consistently exhibit a high level of ethical and responsible use of Generative AI, with an overall mean of 7.80 (78.03%). This indicates that students engage with AI tools while maintaining strong awareness of academic integrity, institutional expectations, and the potential risks linked to AI-generated content. The highest-rated item reflects students’ recognition of potential errors or hallucinations in AI outputs (M = 8.25), highlighting a critical and cautious approach to evaluating AI-produced information. Students also show strong commitment to ensuring that AI-generated content aligns with academic integrity standards and to viewing AI as a supportive tool rather than a substitute for scholarly effort (M = 8.10).
High levels of diligence are evident in verifying compliance with academic regulations (M = 8.07) and checking the accuracy of AI-provided data (M = 8.02). The lowest mean, which falls to a moderate level by the scale’s own benchmarks, relates to citing original sources suggested by AI (M = 6.44), possibly reflecting uncertainty about citation practices. Overall, the findings portray a student population that uses AI thoughtfully, balancing efficiency with responsible oversight. In summary, students show the strongest engagement in critically evaluating AI outputs and ensuring academic integrity, while relatively lower engagement appears in citation practices, underscoring a cautious yet responsible approach to integrating GAI into research.
Table 13 presents the descriptive statistics related to the challenges faced by postgraduate students in employing generative AI (GAI) for cognitive and research development.
The results in Table 13 indicate that postgraduate students face a moderate level of challenges when using generative AI for cognitive and research development, with an overall mean of 6.02 (60.17%). Although students are increasingly engaging with GAI tools, several obstacles still limit full integration into their research practices. The most significant challenge is the lack of adequate training within academic programs (M = 7.10), revealing a clear gap between rapid AI advancements and institutional support. Concerns about privacy and data protection also rank highly (M = 6.78), reflecting students’ awareness of the ethical and security risks associated with AI platforms.
Plagiarism-related concerns (M = 6.63) further highlight students’ caution regarding academic integrity. Moderate challenges appear in handling technical issues, assessing originality, and aligning AI-generated content with academic requirements (M = 5.63–6.11). The lowest means relate to distinguishing accurate from unreliable outputs (M = 5.22) and limited knowledge of effective AI use (M = 5.01).
In summary, the findings reveal that although students acknowledge the significance of GAI, practical, ethical, and technical barriers persist, underscoring the need for structured training, institutional guidance, and enhanced digital literacy. The synthesis of the results highlights that the most critical challenges are institutional in nature, particularly the lack of adequate training and support, followed by ethical concerns such as privacy and plagiarism. Technical and knowledge-related barriers appear at a moderate level. Importantly, the relatively large mean differences across items point to substantive challenges with practical significance, calling for targeted interventions and structured guidance rather than being dismissed as mere statistical variation.
2. Second research question: Do notable variations exist in postgraduate students’ utilization of GAI for enhancing Cognitive and Research Talent when considered across gender, age, academic department, program level, academic standing, specialization, degree of technology adoption, and intensity of GAI engagement?
Independent samples t-tests were conducted to examine differences in postgraduate students’ use of generative AI across gender, program level, and technology engagement.
Table 14 presents the results, including both statistical significance and effect sizes to provide a more comprehensive interpretation of the findings.
The results in Table 14 show several notable differences in postgraduate learners’ use of GAI to enhance cognitive and research talent. A statistically significant gender difference emerged (t = 2.417, p = 0.017), with male students reporting higher overall use of GAI (M = 312.25) than female students (M = 278.46). The effect size (d = 0.33) is small to moderate by Cohen’s conventions, suggesting that while the difference is meaningful, it is not large. This highlights the importance of pedagogical strategies that ensure equitable support for both male and female students in adopting AI-enhanced research practices.
No significant difference was found between master’s and doctoral students (t = −1.711, p = 0.088). The effect size (d = 0.17) is small, indicating that program level alone does not substantially influence GAI use. This suggests that both groups engage with AI at comparable levels, and differences in adoption are better explained by other factors such as digital readiness.
The strongest difference relates to students’ general level of technology use. Those with high technological engagement reported significantly greater use of GAI (M = 337.88) compared to those with moderate engagement (M = 253.79), with a highly significant result (t = 6.518, p < 0.001). The effect size (d = 0.89) is large, underscoring digital proficiency as a critical enabler of effective AI adoption.
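To make the reported quantities concrete, the sketch below shows how an independent-samples t statistic and Cohen’s d (pooled standard deviation) are computed. The data are synthetic: the group means echo Table 14, but the sample sizes and standard deviations are invented for demonstration and are not the study’s data.

```python
# Illustrative sketch with synthetic data (NOT the study's dataset):
# independent-samples t statistic and Cohen's d via the pooled SD.
import math
import random
import statistics

random.seed(42)
# Hypothetical total-scale scores; means echo Table 14, spreads are assumed.
high = [random.gauss(337.9, 95.0) for _ in range(120)]      # high tech engagement
moderate = [random.gauss(253.8, 95.0) for _ in range(120)]  # moderate engagement

def pooled_sd(a, b):
    """Pooled standard deviation from two independent samples."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    return math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))

def t_statistic(a, b):
    """Student's t for two independent samples (equal variances assumed)."""
    sp = pooled_sd(a, b)
    return (statistics.mean(a) - statistics.mean(b)) / (sp * math.sqrt(1 / len(a) + 1 / len(b)))

def cohens_d(a, b):
    """Cohen's d: standardized mean difference using the pooled SD."""
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd(a, b)

t = t_statistic(high, moderate)
d = cohens_d(high, moderate)
```

By Cohen’s conventions (0.2 small, 0.5 medium, 0.8 large), the reported d = 0.89 for technology engagement is a large effect, while d = 0.33 for gender falls between small and medium.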
By integrating effect sizes and pedagogical interpretation, the results move beyond statistical significance to emphasize their educational relevance. Overall, the findings suggest that gender and technological confidence influence GAI use, whereas program level does not. These patterns highlight the need for targeted support for students with lower digital readiness to ensure equitable access to AI-enhanced research practices. A more comprehensive examination of these implications is offered in Section 10.
Table 15 reports the ANOVA findings that explore variations in postgraduate learners’ use of GAI across multiple demographic and academic variables, including age, department, academic level, specialization, and overall GAI utilization.
The ANOVA results in Table 15 show a varied pattern in the factors influencing postgraduate learners’ use of GAI to enhance cognitive and research talent. No significant differences were found for age, academic department, or program specialization (p > 0.05), indicating that students across demographic and disciplinary groups engage with GAI at comparable levels. This suggests that GAI adoption is broadly distributed rather than concentrated within specific segments of the postgraduate population.
In contrast, two variables demonstrate strong and statistically significant effects. Academic level shows a notably high F value (F = 249.19, p < 0.001), indicating substantial increases in GAI use as students advance through their programs, likely due to greater research demands and growing familiarity with digital tools. Similarly, the extent of GAI utilization emerged as a significant factor (F = 79.64, p < 0.001), emphasizing the importance of digital preparedness and prior experience in influencing students’ participation in AI-supported research practices.
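The F values reported in Table 15 come from a standard one-way ANOVA decomposition (between-group mean square over within-group mean square). The sketch below demonstrates that computation on synthetic scores; the group means, sizes, and spreads are hypothetical and chosen only to produce a clearly significant F.

```python
# Illustrative sketch with synthetic data (NOT the study's dataset):
# one-way ANOVA F statistic across several groups.
import random
import statistics

random.seed(7)
# Hypothetical total-scale scores for three academic levels.
groups = [
    [random.gauss(220, 40) for _ in range(60)],  # early level
    [random.gauss(300, 40) for _ in range(60)],  # intermediate level
    [random.gauss(400, 40) for _ in range(60)],  # thesis preparation
]

def one_way_anova_f(groups):
    """F = MS_between / MS_within for a one-way layout."""
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    # The p-value would come from the F distribution with
    # (df_between, df_within) degrees of freedom; omitted to stay stdlib-only.
    return (ss_between / df_between) / (ss_within / df_within)

f_value = one_way_anova_f(groups)
```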
Overall, the findings suggest that while demographic and disciplinary factors exert limited influence, academic progression and technological proficiency are key drivers of GAI adoption among postgraduate students.
Table 16 displays the outcomes of the post hoc (LSD) analyses, which investigated variations in postgraduate learners’ utilization of GAI to advance cognitive and research talent across different academic levels and degrees of GAI use.
The post hoc analyses presented in Table 16 provide a deeper understanding of the significant differences related to academic level and GAI-use level. For academic level, the LSD results show consistent and statistically significant differences across all groups, with students in the thesis preparation stage reporting the highest use of GAI. The large mean gaps, such as the 241-point difference between thesis-stage students and first-level students, indicate a sharp increase in GAI engagement as students progress academically. This pattern suggests that advanced students rely more heavily on GAI for literature synthesis, methodological refinement, data interpretation, and academic writing, whereas early-stage students appear more cautious, likely due to limited research experience.
A similarly clear pattern emerges for levels of GAI use. High users report significantly greater engagement than both moderate and low users, and moderate users differ significantly from low users. The substantial mean differences, including the 202-point gap between high and low users, highlight the pivotal importance of digital literacy and prior exposure in shaping effective GAI integration.
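Fisher’s LSD procedure, used for the comparisons in Table 16, amounts to uncorrected pairwise comparisons that reuse the ANOVA’s within-group mean square. The sketch below runs it on synthetic data (the group values, sizes, and labels are invented) and approximates the critical t by 1.96, which is reasonable only for large within-group degrees of freedom.

```python
# Illustrative sketch with synthetic data (NOT the study's dataset):
# Fisher's LSD pairwise comparisons after a significant one-way ANOVA.
import itertools
import math
import random
import statistics

random.seed(1)
# Hypothetical total-scale scores for three levels of GAI use.
groups = {
    "low": [random.gauss(180, 50) for _ in range(50)],
    "moderate": [random.gauss(280, 50) for _ in range(50)],
    "high": [random.gauss(380, 50) for _ in range(50)],
}

# Within-group mean square (MSE) from the one-way ANOVA.
n_total = sum(len(g) for g in groups.values())
ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups.values())
mse = ss_within / (n_total - len(groups))

significant_pairs = []
for (name_a, a), (name_b, b) in itertools.combinations(groups.items(), 2):
    diff = abs(statistics.mean(a) - statistics.mean(b))
    # Least significant difference; 1.96 approximates the critical t
    # at alpha = .05 for large within-group degrees of freedom.
    lsd = 1.96 * math.sqrt(mse * (1 / len(a) + 1 / len(b)))
    if diff > lsd:
        significant_pairs.append((name_a, name_b))
```

Because LSD applies no correction for multiple comparisons, it is conventionally run only after the omnibus ANOVA is significant, as is the case for the academic-level and GAI-use factors here.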
Overall, the findings indicate that academic progression and AI proficiency are the strongest predictors of GAI adoption, while demographic and disciplinary factors exert minimal influence. These results emphasize the need for structured training and early exposure to AI tools to ensure equitable and effective use among postgraduate students.
11. Conclusions
This study fosters a comprehensive understanding of how postgraduate students engage with Generative Artificial Intelligence (GAI) as part of their cognitive and research development. The findings reveal a research environment in transition, where GAI is increasingly recognized as a valuable academic partner, yet its adoption remains selective and shaped by individual readiness, institutional support, and the nature of the research task. Students appear to be integrating GAI in ways that enhance efficiency, improve clarity, and support access to information, particularly in tasks related to literature review, academic writing, and early-stage data interpretation.
At the same time, the results show that students maintain clear boundaries regarding the types of tasks they are willing to delegate to AI. They continue to rely on human judgment for activities requiring deep conceptual reasoning, theoretical structuring, and methodological precision. This selective approach reflects a balanced understanding of both the strengths and limitations of GAI and demonstrates that students view AI as a complementary tool rather than a replacement for scholarly expertise.
A notable conclusion emerging from this study is the strong ethical awareness demonstrated by postgraduate students. They exhibit a high level of responsibility in verifying AI-generated content, cross-checking information, and adhering to academic integrity standards. This ethical orientation suggests that students are developing a mature and critical stance toward AI use, recognizing the importance of human oversight in maintaining the quality and credibility of academic work.
Despite these positive trends, this study identifies several challenges that may hinder the full integration of GAI into postgraduate research. The most prominent of these is the lack of structured training, which leaves many students uncertain about how to use AI tools effectively for advanced research tasks. Concerns about privacy, data protection, and the appropriate use of AI further complicate adoption, highlighting the need for clear institutional guidelines and supportive learning environments.
The inferential analyses underscore the importance of academic experience and digital proficiency in shaping GAI use. Students at more advanced academic levels and those with higher levels of technological readiness are more likely to integrate AI into their research practices, suggesting that both experience and digital literacy play critical roles in enabling effective AI adoption.
Overall, this study concludes that while GAI holds significant potential for enhancing cognitive and research talent, its impact depends on thoughtful, responsible, and well-supported integration. To fully realize the benefits of GAI, higher education institutions must invest in training, establish clear policies, and cultivate environments that empower students to use AI confidently, ethically, and creatively. The findings highlight the need for a balanced approach, one that embraces innovation while safeguarding the intellectual rigor and ethical standards that define postgraduate scholarship. Importantly, the integration of GAI should be framed not only as a technical enhancement but also as part of broader professionalization efforts in postgraduate education, positioning AI literacy and ethical competence as core elements of academic and research development.