Article · Open Access · 6 December 2025

Generative AI in Research Group Formation: Academic Perceptions and Institutional Pathways

1 Department of Oral and Maxillofacial Surgery, Oral Medicine, and Periodontology, School of Dentistry, The University of Jordan, Amman 11942, Jordan
2 Nutrition and Food Technology Department, School of Agriculture, Deanship of Scientific Research, The University of Jordan, Amman 11942, Jordan
* Author to whom correspondence should be addressed.
Information 2025, 16(12), 1081; https://doi.org/10.3390/info16121081
This article belongs to the Section Artificial Intelligence

Abstract

Objective: This study provides timely insights into how faculty perceive the role of generative AI in academic collaboration and offers a case study on aligning institutional policy with emerging technological opportunities in higher education. It investigates how generative artificial intelligence (AI) tools are perceived and utilized in the formation of academic research groups, focusing on faculty at the University of Jordan. Design/Methodology: A descriptive cross-sectional study involving a mixed-methods survey of 100 faculty members, primarily principal investigators (PIs), was conducted, gathering quantitative data on AI familiarity, usage across research group (RG) planning tasks, and perceived benefits and risks, together with qualitative feedback on recommended institutional actions. Findings: The results indicate moderate adoption of generative AI in RG formation, especially for creative and writing tasks, with younger and junior faculty tending to be significantly more optimistic about AI’s benefits (e.g., increased efficiency, improved content quality) than senior faculty, who reported greater concerns. The top concerns identified include data privacy, academic integrity (plagiarism), the accuracy of AI outputs, and overreliance on AI at the expense of human expertise. Despite these reservations, a large majority agree on the need for official policies and training to guide AI’s ethical and effective use. Conclusion: The findings underscore a generational divide in attitudes, suggesting targeted interventions to support senior academics and to harness juniors’ interest. Institutions should craft clear guidelines, provide training, and ensure access to AI tools to facilitate interdisciplinary collaboration and innovation while safeguarding academic standards.

1. Introduction

Forming and maintaining effective research groups (RGs) is an important but difficult task in academia [1]. These groups usually bring together scholars from different disciplines and career stages to work on complex problems that require collaboration [2]. While diversity in research groups can bring creativity and innovation, it may also create challenges related to disciplinary differences, goal alignment, and communication [3]. Members often struggle to align their goals, communicate effectively, and maintain productivity over time [4]. For junior academics, these challenges are even harder, since leadership, project management, and cross-disciplinary collaboration are skills that are rarely formally taught [5]. Senior academics also face difficulties, especially in sustaining momentum and securing stable funding [6]. These obstacles highlight the growing need for new, well-structured methods and tools to support the formation and long-term success of RGs [2].
The development and performance of RGs can be understood through several theoretical lenses. Team diversity and collaboration theory suggests that groups with varied expertise and backgrounds enhance creativity and innovation, but may also face conflicts due to differences in goals, communication styles, or disciplinary perspectives [7,8]. Social cognitive theory highlights the role of self-efficacy and experience in shaping attitudes toward adopting new tools and practices [9], which helps to explain generational differences in artificial intelligence (AI) adoption. On the other hand, technology acceptance models (TAM) and diffusion of innovations theory provide a framework for understanding how individuals decide to adopt AI tools based on perceived usefulness, ease of use, and social influence [9,10,11]. Finally, institutional theory contextualizes how formal structures, norms, and organizational support, such as those provided by universities, affect the adoption and integration of AI tools in academic practice [12,13]. Research across universities in Asia, the Middle East, and Africa suggests that these determinants significantly shape educators’ readiness to integrate AI in academic tasks, especially when institutions provide infrastructure, training, and explicit usage guidelines [12,14]. Together, these frameworks provide a strong conceptual basis for understanding how individual attitudes, team dynamics, and institutional contexts converge to shape AI adoption in research settings.
Generative AI has recently emerged as a potentially valuable tool to support academics. Unlike traditional software, generative AI can produce human-like text and images, making it useful for brainstorming, drafting, editing, and even planning collaboration [15]. For example, AI tools can help propose group names, write research missions, identify possible collaborators, or link projects with strategic or global priorities [2]. By accessing large knowledge bases, AI can also bring new interdisciplinary ideas and connect researchers to global frameworks, such as the United Nations Sustainable Development Goals [16]. In this way, AI does not replace academic creativity but can add value by helping researchers organize their work more effectively.
However, the use of generative AI in higher education has created both excitement and concern. Younger faculty and students tend to report more openness to adopting AI, often seeing it as a means to improve efficiency and productivity. Older academics, in contrast, report being more cautious, with concerns about plagiarism, ethical misuse, and the risk of weakening critical thinking skills [17]. Surveys show a clear adoption gap: students report using AI much more than their instructors. Still, studies also suggest that faculty who try AI grow less skeptical over time, showing that experience plays a significant role in shaping attitudes [18,19]. Nonetheless, issues of academic integrity, privacy, and clear institutional rules remain central to the debate [20]. These patterns reflect the generational and institutional contexts anticipated by the theoretical models, including TAM and institutional theory.
It is in this context that the University of Jordan (UJ), a large research-intensive institution, has launched a structured program to establish and accredit interdisciplinary research groups. This initiative reflects both national and institutional priorities and aims to promote collaboration across disciplines while linking research to strategic plans, socio-economic development, and global goals such as the Sustainable Development Goals. Faculty members who apply to create new groups must define a clear research line, assemble a diverse group, and show how their work aligns with institutional and global priorities. These requirements mirror common challenges faced in research group formation worldwide, such as defining a clear identity, selecting the right expertise, and planning projects that contribute to wider objectives.
This paper uses the case of the University of Jordan to study how generative AI is perceived and used in the process of research group formation. The primary aim of this study is descriptive: it does not attempt to test predictive models or establish causal relationships. Rather, it seeks to document how faculty members at the University of Jordan perceive and utilize generative AI tools during the formation of interdisciplinary research groups, to examine generational differences in adoption, to explore the benefits (such as efficiency, interdisciplinarity, and collaboration) and risks (such as plagiarism, overreliance, and ethical issues), and to identify institutional challenges that influence adoption. By focusing on UJ, this paper contributes to the wider discussion on AI in higher education, offering insights into how universities can build policies, training, and infrastructure to ensure that AI is used responsibly and effectively in academic research.

2. Materials and Methods

This study employed a mixed-methods survey design to collect data on faculty perceptions and use of generative AI in research group formation at the University of Jordan. The target population was academic staff who serve as principal investigators (PIs) and lead research groups at UJ. In May 2024, an online questionnaire was distributed via the Research Group accreditation committee to all faculty who had registered or were in the process of forming research groups. Participation was voluntary and anonymous, and 100 valid responses were obtained (response rate: approximately 85% of those invited). This sample included representatives from all major disciplines and academic ranks, enabling comparison across demographic segments.
Survey Instrument: The questionnaire, which was administered in English, comprised three sections. The first section covered demographics and background, including age group, sex, academic rank, faculty (disciplinary cluster), and years of research experience, along with two questions on AI familiarity and prior AI use. Familiarity with generative AI tools (e.g., ChatGPT, Google Bard, Claude) was self-rated on a 5-point scale from “Not familiar” to “Expert”, and respondents indicated whether they had used such tools in any academic or research work before (Yes/No). The second section focused on AI usage in research group formation, listing specific tasks derived from UJ’s research group formation guidelines. Respondents were asked to select which stages, if any, they had used or considered using generative AI for; for example, identifying the research group’s focus area (“research line”), naming the group, creating the group’s logo, generating images for the group’s website, writing the group’s mission statement and research interest descriptions, identifying additional disciplines to include for a multidisciplinary approach, finding potential international group members, identifying entities for collaboration (local or international), assessing alignment with institutional priorities and SDGs, and editing/formatting website content (e.g., welcome notes, news, project descriptions). This section also included a multiple-response question on concerns about using AI in academic RG planning, where participants could select any of six provided concerns (data privacy and security; risk of academic integrity violations or plagiarism; ethical or legal implications; inaccurate or “hallucinated” outputs; overreliance on AI reducing human creativity; or “none of the above”). The third section consisted of statements evaluating perceptions of AI’s impact, using a 5-point Likert scale (1 = Strongly Disagree to 5 = Strongly Agree). Finally, an open-ended question invited participants to share any recommendations to improve the integration of AI in research planning at UJ. This qualitative prompt was designed to capture ideas, concerns, or suggestions in the respondents’ own words, complementing the structured data with richer insights.
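To make the data coding concrete, the sketch below shows one way a multiple-response item such as the six AI concerns could be converted into binary indicator columns for frequency analysis. This is an illustrative Python example only: the concern labels are paraphrased and the raw export format is an assumption, not the study’s actual dataset or instrument export.

import pandas as pd

# Assumed labels, paraphrasing the six concern options in the questionnaire.
CONCERNS = [
    "data privacy and security",
    "academic integrity or plagiarism",
    "ethical or legal implications",
    "inaccurate or hallucinated outputs",
    "overreliance on AI",
    "none of the above",
]

# Hypothetical raw export: one semicolon-separated string per respondent.
raw = pd.Series([
    "data privacy and security;ethical or legal implications",
    "academic integrity or plagiarism",
    "none of the above",
])

# One 0/1 column per concern; rows may sum to more than 1 because
# respondents could select multiple concerns.
indicators = pd.DataFrame(
    {c: raw.str.contains(c, regex=False).astype(int) for c in CONCERNS}
)

# Percentage of respondents endorsing each concern (cf. Table 3).
print(indicators.mean().mul(100).round(1))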

Statistical Analysis

Data were analyzed using IBM SPSS Statistics (version 29). Descriptive statistics, including frequencies and percentages, were calculated to summarize respondents’ perceptions and recommendations regarding generative AI (GAI) integration. Associations between categorical variables (e.g., age groups, years of experience, academic school categories) and response categories were assessed using Pearson’s Chi-Square tests. When cell counts were low, the Likelihood Ratio Chi-Square test was also considered to verify the results. Additionally, Linear-by-Linear Association tests were performed to evaluate potential linear trends in ordinal data. Statistical significance was set at p < 0.05.
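For readers without SPSS, the tests named above can be reproduced in open tooling. The sketch below is a minimal Python illustration using a hypothetical contingency table rather than the study’s data; it computes Pearson’s Chi-Square, the Likelihood Ratio Chi-Square, and the Linear-by-Linear Association (Mantel-Haenszel trend) statistic, M² = (N − 1)r², where r is the Pearson correlation between integer row and column scores.

import numpy as np
from scipy.stats import chi2, chi2_contingency

# Hypothetical counts: rows = three ordered age groups,
# columns = Likert responses 1 (Strongly Disagree) .. 5 (Strongly Agree).
table = np.array([
    [2, 3, 8, 9, 4],
    [5, 8, 15, 10, 3],
    [6, 9, 10, 6, 2],
])

# Pearson's Chi-Square and the Likelihood Ratio (G) Chi-Square.
chi2_p, p_p, dof, _ = chi2_contingency(table, correction=False)
chi2_lr, p_lr, _, _ = chi2_contingency(table, correction=False, lambda_="log-likelihood")
print(f"Pearson chi2({dof}) = {chi2_p:.2f}, p = {p_p:.3f}")
print(f"Likelihood ratio chi2({dof}) = {chi2_lr:.2f}, p = {p_lr:.3f}")

# Linear-by-Linear Association: M^2 = (N - 1) * r^2 on integer scores,
# referred to a chi-square distribution with 1 degree of freedom.
N = table.sum()
row_idx, col_idx = np.indices(table.shape)
x = np.repeat(row_idx.ravel() + 1, table.ravel())  # row scores 1..3
y = np.repeat(col_idx.ravel() + 1, table.ravel())  # column scores 1..5
r = np.corrcoef(x, y)[0, 1]
M2 = (N - 1) * r**2
print(f"Linear-by-linear M2 = {M2:.2f}, p = {chi2.sf(M2, df=1):.3f}")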

3. Results

As shown in Table 1, the survey respondents (n = 100) represented a balanced gender distribution, with a slight male majority. Most participants were aged between 41 and 50 years, and the largest proportion held the rank of professor, followed by associate professor. Nearly half had 11–20 years of academic experience. Humanities and Social Sciences schools constituted the largest field representation, followed by Scientific and Health schools.
Table 1. Demographic and professional characteristics of survey participants.
When asked about their familiarity with GAI tools (e.g., ChatGPT, Gemini, Claude), 3% of respondents described themselves as experts, 37% as very familiar, and another 37% as moderately familiar. Eighteen percent were slightly familiar, while only 5% reported no familiarity at all. Familiarity with these tools was not significantly related to any demographic or professional characteristics of the participants.
Just over half of the respondents (53%) reported having used GAI in academic or research work, while 47% had not. This usage showed no significant association with any demographic or professional characteristics.
A large majority of respondents (77%) believed that the use of GAI in forming RGs should be regulated by policy, while 23% did not. There was a statistically significant association between this opinion and participants’ age (p = 0.032), with support for regulation highest among those younger than 30 or older than 60.
Of the 100 surveyed faculty members, 76% reported use of GAI across RG formation stages. The survey results reveal that GAI is most commonly used in creating the RG logo (used by 38% of respondents) and editing or formatting website content (34%). Around one-quarter of respondents reported using GAI for creating website images and writing research interests (27% each). Less frequent use was noted in naming the group (23%) and writing the RG’s mission (17%). The lowest usage was seen in identifying international group members and collaboration entities (7% each), indicating varied adoption of GAI across different stages of RG formation and website creation (Table 2).
Table 2. Usage of generative AI across research group formation stages.
As shown in Table 3, the primary concerns regarding the use of GAI in RG planning focus on data privacy and security (43%), followed closely by risks related to academic integrity and plagiarism (38%). Ethical and legal implications are also significant, noted by 27% of respondents. Issues with inaccurate or fabricated AI outputs (“hallucinations”) and overreliance on AI impacting human creativity were each cited by 20% of participants. A small portion (7%) reported no concerns. These results highlight the need for clear guidelines and safeguards when integrating GAI in research activities.
Table 3. Frequency of concerns about using GAI in research group planning.
Most participants expressed neutral (39%) or somewhat positive (27%) views on whether GAI enhanced their RG’s concept, with only 9% reporting strong agreement. About one-quarter (25%) expressed some level of disagreement, indicating an overall moderate perception of its benefit. When examining academic rank, a statistically significant linear trend was observed (p = 0.024), showing that higher ranks were more likely to express disagreement, while lower ranks tended to be neutral or positive. A similar pattern emerged with age group, where a significant linear trend (p = 0.025) indicated older respondents were more likely to disagree and younger respondents were more neutral or positive. Finally, the relationship between years of experience and perceptions of GAI’s impact showed both a statistically significant overall association (p = 0.017) and a strong linear-by-linear trend (p = 0.001), with more experienced respondents tending toward disagreement and less experienced respondents leaning toward neutrality or positivity about AI’s contribution to the RG concept.
Regarding whether GAI increased the efficiency of the RG planning process, responses were generally positive. Nearly one-third (32%) agreed and 10% strongly agreed, while 29% remained neutral. A smaller portion expressed disagreement, with 12% strongly disagreeing and 17% somewhat disagreeing. Overall, these results suggest a modest positive impact of GAI on planning efficiency. Analysis of the relationship between age and perceptions of GAI’s effect on planning efficiency revealed a statistically significant overall association (p = 0.043) and a significant linear trend (p = 0.036), with younger respondents more likely to report improved efficiency and older respondents more likely to be less positive or disagree. Similarly, analysis by years of experience showed a statistically significant overall association (p = 0.013) and a significant linear trend (p = 0.003), where respondents with fewer years of experience reported greater efficiency improvements compared to those with more experience, who tended to be less positive or disagree about AI’s impact.
Regarding whether GAI helped in finding a clear mission and vision for the RG, responses were mixed but generally positive. Thirty percent of respondents were neutral, 28% agreed, and 8% strongly agreed. Meanwhile, 14% strongly disagreed and 20% somewhat disagreed. Overall, these results indicate a moderate positive perception of AI’s role in clarifying the group’s mission and vision. A significant linear trend was found between years of experience and perceptions of GAI’s help in defining the mission and vision (p = 0.007), with less experienced respondents more likely to perceive AI as helpful.
Regarding whether GAI facilitated interdisciplinary connections and networking, responses were mixed. While 25% were neutral, 17% agreed and 8% strongly agreed. However, a notable proportion expressed disagreement, with 21% strongly disagreeing and 29% somewhat disagreeing. Overall, perceptions of AI’s role in fostering interdisciplinary collaboration were moderate to low.
Respondents’ views were mixed regarding GAI’s role in introducing new or innovative research areas. About 46% disagreed to some extent, while 32% agreed that AI helped reveal research topics they had not previously considered. The remaining 22% were neutral, indicating a moderate level of recognition for AI’s contribution to expanding research ideas.
Regarding whether GAI helped align the RG with institutional goals, national priorities, and SDGs, responses were mixed. Approximately 41% expressed some level of disagreement, while 33% agreed or strongly agreed. About 26% remained neutral, reflecting a moderate perception of AI’s role in supporting strategic alignment. Analysis of the relationship between years of experience and these perceptions revealed a statistically significant overall association (p = 0.027) and a significant linear trend (p = 0.003). Respondents with fewer years of experience were more likely to view AI as helpful in this alignment, whereas those with greater experience tended to be less positive or disagreed.
Regarding whether GAI helped speed up the completion of the full website content for the RG, responses were generally positive. Over half of respondents (52%) agreed or strongly agreed, with 36% somewhat agreeing and 16% strongly agreeing. Meanwhile, 25% expressed some level of disagreement. These results suggest that GAI was perceived as an effective tool for accelerating website content development. A significant linear trend was observed between years of experience and perceptions of GAI’s role in speeding up website content completion (p = 0.006), with less experienced respondents more likely to perceive AI as helpful.
Regarding whether GAI improved the language of the RG’s content, responses were predominantly positive. A total of 66% of respondents agreed or strongly agreed (36% and 30%, respectively), while 19% somewhat disagreed and 8% strongly disagreed. These findings indicate a strong perception of AI’s effectiveness in enhancing language quality. Analysis of the relationship between years of experience and perceptions of AI’s effectiveness revealed a statistically significant overall association (p = 0.044) and a significant linear trend (p = 0.014). Respondents with fewer years of experience were more likely to perceive AI as beneficial for language improvement, whereas those with greater experience were less likely to do so.
Regarding intentions to continue using AI tools in future research planning, responses were generally positive but somewhat mixed. A combined 42% of respondents agreed or strongly agreed (27% and 15%, respectively), while 30% somewhat disagreed or strongly disagreed (15% each). About 28% remained neutral. This indicates a moderate level of willingness among participants to continue integrating AI tools in their research planning. Analysis of the relationship between age and intentions to continue using AI tools in future research planning revealed a statistically significant overall association (Pearson χ2(8) = 15.99, p = 0.043) and a significant linear trend (p = 0.008). Younger respondents were more likely to express willingness to continue using AI, while older respondents tended to be less willing or more uncertain.
Regarding the potential role of AI tools in shaping research strategies or policies, responses were generally favorable. A combined 44% of respondents agreed or strongly agreed (34% and 10%, respectively), while 29% somewhat or strongly disagreed (18% and 11%, respectively). About 27% were neutral. These results indicate a moderate level of optimism about AI’s influence on research strategy and policy development. Analysis of perceptions regarding AI’s potential role in shaping research strategies or policies showed some gender differences (p = 0.045). Female respondents were more likely to view AI as influential in shaping research strategies and policies compared to male respondents.
Regarding the willingness to recommend GAI tools to colleagues, responses were generally positive. About 41% of respondents agreed or strongly agreed with recommending AI tools (27% and 14%, respectively), while 27% remained neutral and another 27% expressed some level of disagreement. Overall, this indicates a moderately favorable attitude toward promoting GAI among peers. A significant association was found with school category (p = 0.047), reflecting variation in recommendation levels across disciplines. Specifically, 53.3% of respondents from Humanities and Social Sciences, 38.1% from Health schools, and 26.5% from Scientific schools expressed a positive recommendation of GAI tools.
Nearly three-quarters (72%) of respondents agreed or strongly agreed on the need for training in the effective use of GAI in academic settings, with 47% strongly agreeing. Only 18% disagreed to some extent, while 10% remained neutral. This indicates a strong consensus on the importance of training for successful AI integration. A statistically significant association was found between school category and the perceived need for training (p = 0.013). The highest agreement was observed among respondents from Health schools (95.2%), followed by Humanities and Social Sciences (73.3%), and Scientific schools (55.9%).
When respondents were asked about recommendations to improve the integration of GAI in research planning at the university, a variety of themes emerged. Many respondents emphasized the necessity of comprehensive training and capacity-building initiatives, advocating for expert-led, practical workshops and ongoing training programs that focus on the effective and ethical use of AI tools, including prompt engineering and responsible integration within research workflows. A strong call was made for the establishment of clear ethical and regulatory frameworks, highlighting the need for policies and guidelines that address academic integrity, plagiarism, and define permissible boundaries for AI usage across different academic disciplines.
Institutional support was also identified as critical, with suggestions for providing campus-wide licenses for professional AI software, such as paid versions of GPT, alongside the creation of centralized AI research support units and the integration of AI tools within the university’s research infrastructure to facilitate planning and execution. Increasing AI literacy among faculty members was frequently mentioned as well, with recommendations to organize seminars, orientation sessions, and share successful case studies to promote a culture that embraces AI as a research aid while ensuring continued human oversight.
Several respondents underscored the importance of cautious and responsible AI use, urging researchers to use AI as a supplementary tool rather than a substitute, and to maintain a deep understanding of their research content to critically evaluate AI-generated outputs. The encouragement of interdisciplinary collaboration through the formation of cross-disciplinary hubs was noted as a means to foster AI-driven innovation and knowledge exchange across faculties. Additional suggestions included integrating AI education into postgraduate curricula, establishing university-level policies regarding AI use, and promoting transparency and ethical conduct in AI-assisted research activities. Collectively, these recommendations reflect a multifaceted approach to enhancing the responsible and effective integration of AI technologies within academic research.

4. Discussion

This study illustrates how generative AI is beginning to reshape academic practice, with the University of Jordan’s research group initiative offering a timely case study [15]. The PIs’ responses show a generational divide in perceptions, a set of benefits that enhance efficiency and writing quality, as well as serious risks that demand careful institutional responses. From a theoretical perspective, the generational divide observed in this study can be interpreted through established models of technology adoption and institutional behavior, which explain how professional identity and institutional norms shape responses to technological change. According to the Technology Acceptance Model (TAM), perceived usefulness and ease of use strongly shape attitudes toward new technologies [21]. Senior faculty, whose professional identities were formed before the emergence of generative AI and whose sense of originality was built on human-centered intellectual labor, tend to perceive AI as a threat to the symbolic foundations of traditional academic logics [22,23]. Their evaluations reflect what organizational theorists describe as institutional conservatism, in which established actors resist innovations that disrupt long-standing academic norms [22]. Conversely, junior and early-career faculty, having been socialized in more technologized academic environments [23], see AI as an instrument that enhances efficiency and linguistic clarity. This aligns with research indicating that early-career academics often adopt a pragmatic orientation toward digital technologies because of their regular exposure and workload pressures. These differences mirror global findings that younger users, or direct users of emerging technologies, hold more positive perceptions than non-users [24]. The shared call for structured training across seniority levels suggests that institutions are in a transitional phase of developing coherent norms and expectations around AI use [25].
The generational gap emerges as one of the clearest findings in the study. Senior faculty, particularly full professors with decades of experience, displayed skepticism about the value of generative AI, questioning its impact on quality, integrity, and academic rigor. Their concerns stem from established research practices, their roles as guardians of scholarly standards, and heightened sensitivity to plagiarism and ethical misuse. For them, reliance on AI risks undermining the authenticity of academic work. In contrast, junior and early-career faculty, who are more digitally fluent, expressed greater enthusiasm [17]. They reported that AI saved time, improved clarity, and gave them more confidence, especially in tasks such as drafting proposals and group documents. These findings support the previous literature indicating that actual users of AI hold more positive attitudes than non-users. In this context, age and experience, rather than gender or discipline, appeared to be the more reliable predictors of attitudes toward AI. While younger faculty may quietly adopt AI to improve productivity, their work may be viewed with suspicion by senior colleagues responsible for promotion or contract decisions [26]. However, both groups expressed strong interest in training, suggesting that resistance is not absolute. Structured workshops, involving respected senior scholars in policy development, and showcasing successful cases where AI has demonstrably improved academic outcomes could help build trust and reduce tension [27].
Beyond generational dynamics, the study highlights significant benefits of generative AI. Faculty reported substantial gains in efficiency, particularly in drafting and editing documents, allowing them to devote more time to conceptual and analytical tasks. Many noted that AI improved the quality of their writing by refining grammar and structure, an important advantage in an environment where English is often a second language [28]. These findings are particularly relevant in multilingual academic environments, where AI-supported language editing can reduce time spent on editing and increase focus on conceptual work. Some respondents also credited AI with stimulating new ideas and drawing unexpected interdisciplinary links, though this effect was less pronounced. AI’s potential to enhance collaboration and networking is emerging, as faculty visualize tools that could analyze institutional expertise and suggest partnerships across disciplines. Even in its current state, AI was found to ease the early, often overwhelming stages of research group formation by providing quick drafts and answers that reduced initial friction and boosted confidence.
These benefits are balanced by serious risks. Data privacy and security remain pressing concerns, as academics often handle sensitive or unpublished information that could be compromised if entered into third-party systems. Academic integrity is another major issue, with questions about blurred authorship, the possibility of plagiarism, and misuse by students or junior researchers. AI’s tendency to produce inaccuracies or fabricated references was also noted, underscoring the need for rigorous verification of outputs. Finally, overreliance on AI risks eroding essential academic skills such as writing, critical analysis, and creativity, potentially reducing scholars to supervisors of machine-generated content rather than originators of knowledge. These concerns are consistent with international debates on the implications of AI for scholarly rigor. Nonetheless, these risks are manageable through ethical guidelines, clear policies, comprehensive and well-structured training, and institutional oversight [20].
A recurring theme in both faculty feedback and the wider literature is that AI must be understood as a support tool rather than a substitute for human scholarship [29]. When framed as an assistant that proofreads, drafts, or offers alternative perspectives, AI enhances productivity without displacing human judgment or creativity. The human-in-the-loop model maintains accountability and ensures that researchers remain responsible for originality and rigor. This framing is also pedagogically valuable, signaling to students that AI is a tool to aid learning, not a replacement for it, much as calculators supported mathematics without eliminating the need to understand its foundations.
The University of Jordan is well-positioned to lead in this domain. The study’s findings, drawn from the faculty responses, highlight several institutional priorities. The development of a comprehensive AI policy is essential, clarifying issues of authorship, confidentiality, and acceptable use in academic evaluation, and aligning with Jordan’s national AI strategy. Structured training programs are needed to build competence and confidence across disciplines, with special attention to sensitive fields such as the health sciences. Providing secure access to AI systems, ideally through institutional subscriptions or locally hosted platforms, would address privacy concerns and ensure equity. Integrating AI guidance into existing procedures for research group formation, coupled with transparent reporting on its use, would normalize ethical practices. Finally, systematic monitoring and evaluation of outcomes would allow the university to refine its approach over time.
The implications extend beyond UJ. Globally, younger academics are often early adopters of AI, while senior faculty raise legitimate concerns about ethics and rigor. Institutions must navigate this divide by pairing innovation with governance. Those that provide training, secure infrastructure, and transparent policies are likely to realize AI’s benefits while safeguarding academic integrity.

5. Limitations

This study has several limitations. First, the sample was limited to principal investigators (PIs) from the University of Jordan, who are officially responsible for RGs and accountable for the data and outputs. As a result, the findings may not generalize to other universities, disciplines, or countries, and future research could compare similar structures across multiple institutions. Second, all data were self-reported, which may introduce bias, and no objective indicators of group performance or AI usage were collected. Third, the cross-sectional survey design prevents causal inferences; observed associations, such as between younger faculty and AI adoption, may be influenced by promotion requirements, prior experience, or technological familiarity. Fourth, the study focused exclusively on generative AI tools, without considering broader technology use in research groups. Fifth, statistical analyses were primarily descriptive and associational, reflecting the study’s exploratory focus on perceptions rather than on predicting outcomes.

6. Conclusions

In conclusion, generative AI is becoming an integral part of academic research. At UJ, faculty are already using it to draft, edit, and support group formation. The enthusiasm of junior academics and the caution of senior scholars reflect a broader tension across higher education. The most constructive path forward lies in positioning AI as a supplementary tool under human oversight. For universities, the task is to establish clear policies, invest in training, and provide secure access to tools, thereby ensuring responsible adoption. UJ’s experience demonstrates that proactive integration can bridge generational divides, enhance productivity, and foster interdisciplinary collaboration. The principle that must guide this process is simple but crucial: AI should augment, not replace, human scholarship.

Author Contributions

The study was conceptualized and designed by F.S., who also conducted statistical analysis and oversaw language editing. Data collection, verification, and drafting of the manuscript were performed by H.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study involved only the collection of fully anonymous survey data. No names, emails, IP addresses, demographic identifiers, or any other personally identifiable information were collected. The data cannot be traced back to any individual. According to the research ethics policy at the University of Jordan, studies that involve anonymous questionnaires and do not include identifiable human data are classified as “Exempt” and therefore do not require IRB approval. For this category of minimal-risk research, the University of Jordan does not issue IRB approval letters, as there is no human subject identification, intervention, or risk involved.

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hall, K.L.; Vogel, A.L.; Huang, G.C.; Serrano, K.J.; Rice, E.L.; Tsakraklides, S.P.; Fiore, S.M. The science of team science: A review of the empirical evidence and research gaps on collaboration in science. Am. Psychol. 2018, 73, 532–548. [Google Scholar] [CrossRef]
  2. Srivastava, B.; Koppel, T.; Paladi, S.T.; Valluru, S.L.; Sharma, R.; Bond, O. ULTRA: A Data-driven Approach for Recommending Team Formation in Response to Proposal Calls. In Proceedings of the 2022 IEEE International Conference on Data Mining Workshops (ICDMW), Orlando, FL, USA, 28 November–1 December 2022; pp. 1002–1009. [Google Scholar] [CrossRef]
  3. Hundschell, A.; Razinskas, S.; Backmann, J.; Hoegl, M. The effects of diversity on creativity: A literature review and synthesis. Appl. Psychol. 2022, 71, 1598–1634. [Google Scholar] [CrossRef]
  4. Newman, J. Promoting Interdisciplinary Research Collaboration: A Systematic Review, a Critical Literature Review, and a Pathway Forward. Soc. Epistem. 2024, 38, 135–151. [Google Scholar] [CrossRef]
  5. Sheng, J.; Liang, B.; Wang, L.; Wang, X. Evolution of scientific collaboration based on academic ages. Phys. A Stat. Mech. Its Appl. 2023, 624, 128846. [Google Scholar] [CrossRef]
  6. Breen, S.M.; Olson, T.H.; Gonzales, L.D.; Griffin, K.A. Barriers to Change: A Collective Case Study of Four Universities’ Efforts to Advance Faculty Diversity and Inclusion. Innov. High. Educ. 2025, 50, 513–539. [Google Scholar] [CrossRef]
  7. Wang, J.; Cheng, G.H.; Chen, T.; Leung, K. Team creativity/innovation in culturally diverse teams: A meta-analysis. J. Organ. Behav. 2019, 40, 693–708. [Google Scholar] [CrossRef]
  8. Tang, M. Fostering Creativity in Intercultural and Interdisciplinary Teams: The VICTORY Model. Front. Psychol. 2019, 10, 2020. [Google Scholar] [CrossRef]
  9. Shata, A.; Hartley, K. Artificial intelligence and communication technologies in academia: Faculty perceptions and the adoption of generative AI. Int. J. Educ. Technol. High. Educ. 2025, 22, 14. [Google Scholar] [CrossRef]
  10. Singh, S.; Strzelecki, A. Academics as adopters of generative AI: An application of diffusion of innovations theory. Educ. Inf. Technol. 2025. [Google Scholar] [CrossRef]
  11. Zhang, X.; Chen, S.; Wang, X. How can technology leverage university teaching & learning innovation? A longitudinal case study of diffusion of technology innovation from the knowledge creation perspective. Educ. Inf. Technol. 2023, 28, 15543–15569. [Google Scholar] [CrossRef]
  12. Zhao, Z.; An, Q.; Liu, J. Exploring AI tool adoption in higher education: Evidence from a PLS-SEM model integrating multimodal literacy, self-efficacy, and university support. Front. Psychol. 2025, 16, 1619391. [Google Scholar] [CrossRef]
  13. Jeilani, A.; Abubakar, S. Perceived institutional support and its effects on student perceptions of AI learning in higher education: The role of mediating perceived learning outcomes and moderating technology self-efficacy. Front. Educ. 2025, 10, 1548900. [Google Scholar] [CrossRef]
  14. Møgelvang, A.; Cipriani, E.; Grassini, S. Generative AI in Action: Acceptance and Use Among Higher Education Staff Pre- and Post-training. Technol. Knowl. Learn. 2025. [Google Scholar] [CrossRef]
  15. Almisad, B.; Aleidan, A. Faculty perspectives on generative artificial intelligence: Insights into awareness, benefits, concerns, and uses. Front. Educ. 2025, 10, 1632742. [Google Scholar] [CrossRef]
  16. Llorca, J.; Royuela, V.; Evans, C.; Diaz-Guilera, A.; Ramos, R. Fostering interdisciplinarity and collaboration: The role of challenge-driven research in European University Alliances through the CHARM-EU experience. Humanit. Soc. Sci. Commun. 2025, 12, 479. [Google Scholar] [CrossRef]
  17. Chan, C.K.Y.; Lee, K.K.W. The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learn. Environ. 2023, 10, 60. [Google Scholar] [CrossRef]
  18. Khlaif, Z.N.; Ayyoub, A.; Hamamra, B.; Bensalem, E.; Mitwally, M.A.A.; Ayyoub, A.; Hattab, M.K.; Shadid, F. University Teachers’ Views on the Adoption and Integration of Generative AI Tools for Student Assessment in Higher Education. Educ. Sci. 2024, 14, 1090. [Google Scholar] [CrossRef]
  19. Vieriu, A.M.; Petrea, G. The Impact of Artificial Intelligence (AI) on Students’ Academic Development. Educ. Sci. 2025, 15, 343. [Google Scholar] [CrossRef]
  20. Al-kfairy, M.; Mustafa, D.; Kshetri, N.; Insiew, M.; Alfandi, O. Ethical Challenges and Solutions of Generative AI: An Interdisciplinary Perspective. Informatics 2024, 11, 58. [Google Scholar] [CrossRef]
  21. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  22. DiMaggio, P.J.; Powell, W.W. The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. In Economics Meets Sociology in Strategic Management; Emerald Publishing: Leeds, UK, 2000; pp. 143–166. [Google Scholar] [CrossRef]
  23. Parasuraman, A. Technology Readiness Index (Tri): A Multiple-Item scale to measure readiness to embrace new technologies. J. Serv. Res. 2000, 2, 307–320. [Google Scholar] [CrossRef]
  24. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  25. Czarniawska, B. Sensemaking in organizations. Scand. J. Manag. 1997, 13, 113–116. [Google Scholar] [CrossRef]
  26. Fazi, L.; Zaniboni, S.; Wang, M. Age differences in the adoption of technology at work: A review and recommendations for managerial practice. J. Organ. Change Manag. 2025, 38, 138–175. [Google Scholar] [CrossRef]
  27. Al-Abdullatif, A.M. Modeling Teachers’ Acceptance of Generative Artificial Intelligence Use in Higher Education: The Role of AI Literacy, Intelligent TPACK, and Perceived Trust. Educ. Sci. 2024, 14, 1209. [Google Scholar] [CrossRef]
  28. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja, M.; et al. Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  29. Yusuf, A.; Pervin, N.; Román-González, M. Generative AI and the future of higher education: A threat to academic integrity or reformation? Evidence from multicultural perspectives. Int. J. Educ. Technol. High. Educ. 2024, 21, 21. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
