Article

Reducing AI-Generated Misinformation in Australian Higher Education: A Qualitative Analysis of Institutional Responses to AI-Generated Misinformation and Implications for Cybercrime Prevention

by Leo S. F. Lin 1,*, Geberew Tulu Mekonnen 2, Mladen Zecevic 3, Immaculate Motsi-Omoijiade 4, Duane Aslett 2 and Douglas M. C. Allan 1

1 Australian Graduate School of Policing and Security, Charles Sturt University, 10-12 Brisbane Ave, Barton, Canberra, ACT 2600, Australia
2 Centre for Law and Justice, Charles Sturt University, 10-12 Brisbane Ave, Barton, Canberra, ACT 2600, Australia
3 School of Policing Studies, Charles Sturt University, McDermott Drive, Goulburn, NSW 2580, Australia
4 Artificial Intelligence and Cyber Futures Institute, Charles Sturt University, Panorama Ave, Bathurst, NSW 2795, Australia
* Author to whom correspondence should be addressed.
Informatics 2025, 12(4), 132; https://doi.org/10.3390/informatics12040132
Submission received: 31 August 2025 / Revised: 20 November 2025 / Accepted: 24 November 2025 / Published: 28 November 2025

Abstract

Generative Artificial Intelligence (GenAI) has transformed Australian higher education, amplifying online harms such as misinformation, fraud, and image-based abuse, with significant implications for cybercrime prevention. Combining a PRISMA-guided systematic review with MAXQDA-driven analysis of Australian university policies, this research evaluates institutional strategies against national frameworks such as the Australian Cyber Security Strategy 2023–2030. Analyzing data from the academic literature, we identify three key themes: educational strategies, alignment with national frameworks, and policy gaps and development. As the first qualitative analysis of 40 Australian university policies, this study uncovers systemic fragmentation in governance frameworks, with only 12 institutions addressing data privacy risks and none directly targeting AI-driven disinformation threats like deepfake harassment, a critical gap in the global AI governance literature. The study provides actionable recommendations to enhance digital safety and prevent cybercrime in Australian higher education: a National GenAI Governance Framework co-developed by TEQSA, Universities Australia (UA), and the Department of Education (DoE); enhanced cyberbullying policies; behavior-focused training; and a mandatory annual CyberAI Literacy Module for all students and staff to ensure awareness of cybersecurity risks, responsible use of artificial intelligence, and digital safety practices within the university community.

1. Introduction

The increasing popularity of Generative Artificial Intelligence (GenAI) has impacted and transformed the pedagogical and administrative landscapes [1,2,3,4]. However, this transformation introduces novel cybercrime vectors: AI-generated misinformation fuels opportunities for sophisticated phishing campaigns impersonating faculty, deepfake-driven fraud exploiting identities, sextortion, and enhanced ransomware targeting institutional databases. These harms, including AI-forged academic credentials and fictitious research data, directly challenge institutional digital safety, exposing students and staff to data breaches and academic integrity violations [5,6,7,8]. Bearman, Ryan and Ajjawi (2023) [9] argue that AI policy discourse shapes institutional responses and capacities. When mis- and disinformation are framed merely as a side effect of AI or conflated with academic misconduct, rather than recognized as a distinct threat, policy solutions may remain shallow or peripheral [9].
Cyber threats increasingly target Australian higher education, and there is a direct link between AI-powered tools in academia and university data breaches (See https://australiancybersecuritymagazine.com.au/the-negative-impact-of-ai-on-academic-integrity-in-tertiary-education (accessed on 25 May 2025)). As critical infrastructure under the Security of Critical Infrastructure Act 2018 and in alignment with the Australian Cyber Security Strategy 2023–2030, Australian universities responsible for critical infrastructure assets are obligated to mitigate cyber risks, which would include threats posed by AI-driven cybercrime, even though the legislation does not explicitly reference AI-driven attacks (https://universitiesaustralia.edu.au/submission/universities-australias-response-to-the-cyber-security-legislative-package-2024-inquiry/ (accessed on 22 May 2025)). However, policy fragmentation, such as inconsistent detection standards and uneven deployment of secure systems, creates exploitable vulnerabilities [10].
The Australian government and sector bodies, such as the Tertiary Education Quality and Standards Agency (TEQSA), have urged institutions to develop action plans to address AI-driven academic misconduct (See https://www.teqsa.gov.au/sites/default/files/2023-04/aain-generative-ai-guidelines.pdf (accessed on 22 May 2025)). In June 2024, TEQSA requested institutional action plans to address risks posed by generative artificial intelligence to award integrity, academic misconduct, plagiarism, and cheating. The request resulted in TEQSA’s Gen AI strategies for Australian higher education: Emerging Practice (https://www.teqsa.gov.au/guides-resources/resources/corporate-publications/gen-ai-strategies-australian-higher-education-emerging-practice (accessed on 25 May 2025)) and the Artificial Intelligence Hub (https://www.teqsa.gov.au/guides-resources/higher-education-good-practice-hub/gen-ai-knowledge-hub (accessed on 25 May 2025)). The focus of these initiatives is on award integrity and admissions through general risk mitigation, staff and student support, and engagement. However, they make only limited reference to student and staff exposure to harmful or false information generated by AI, generally referring only to incorrect responses created by GenAI (see https://theconversation.com/everyones-having-a-field-day-with-chatgpt-but-nobody-knows-how-it-actually-works-196378 (accessed on 25 May 2025); and How you should—and shouldn’t—use ChatGPT as a student, Open Universities Australia, https://www.open.edu.au/advice/insights/ethical-way-to-use-chatgpt-as-a-student (accessed on 22 June 2025)).
Established under the Tertiary Education Quality and Standards Agency Act 2011 (https://www.legislation.gov.au/C2011A00073/latest/versions (accessed on 22 June 2025)), TEQSA regulates provider entry into the higher education sector and compliance with the Higher Education Standards Framework (Threshold Standards) 2021 (https://www.legislation.gov.au/Series/F2021L00488 (accessed on 22 June 2025)). While the HESF addresses academic and research integrity in its standards, it lacks specific obligations relating to AI or mis/disinformation, other than broad statements related to risk mitigation (see Sections 5.2, 6.2 and 6.3 of the HESF: https://www.teqsa.gov.au/how-we-regulate/higher-education-standards-framework-2021/hesf-domain-5-institutional-quality-assurance (accessed on 22 November 2025)).
The current regulatory landscape at the national level, including TEQSA guidance, the HESF, the voluntary Australia’s AI Ethics Principles (https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-principles/australias-ai-ethics-principles (accessed on 20 May 2025)) and the now withdrawn Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 (https://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/Bills_Search_Results/Result?bId=r7239 (accessed on 20 May 2025)), suggests that mis- and disinformation are conceptualised more as a regulatory issue than as a specific policy priority demanding systemic safeguards [10]. More research is therefore needed in this emerging and under-researched field. As stated in the Tertiary Education Quality and Standards Agency report, Australian government and academic experts have acknowledged “a high degree of uncertainty about the future of AI in higher education and will require a significant, systemic overhaul of assessment practices” [11].
This paper addresses three research questions:
  • What institutional strategies have Australian universities implemented to counter AI-generated misinformation and associated cybercrime risks?
  • To what extent do current policies and practices align with national frameworks for digital safety and cybercrime prevention?
  • What gaps exist in institutional responses to AI-driven online harms, and how can they inform future prevention?
This paper is arranged as follows: the Introduction outlines the impact of generative artificial intelligence (GenAI) on Australian higher education and introduces the research questions, and the remainder of Section 1 reviews national and sector-specific AI guidelines and analyses the general stance and focus areas of Australian universities’ GenAI policies. Section 2 outlines the qualitative approach, which combines a systematic literature review (SLR) with content analysis of 40 university policies and national frameworks; the SLR identifies key themes through a PRISMA-guided review of the academic literature. Section 3 presents the results from the SLR and the policy analysis. Section 4 critiques the fragmented policy landscape, proposes strategies to address misinformation, and suggests future policy development. Finally, Section 5 summarizes the findings and recommends directions for future research.

1.1. Background: Australian Universities’ Current Generative AI Policies

1.1.1. Australian National and Sector-Specific Guidelines and Frameworks

Australian universities do not operate in isolation when addressing the challenges of AI. Several national and sector-specific bodies provide guidance, facilitate collaboration, and establish standards that inform institutional strategies. These organizations play a crucial role in shaping a coordinated response to AI-driven misinformation and cybercrime.
The Department of Education (DoE) provides relevant national frameworks and takes an active interest in universities’ AI policy. The department published an AI Transparency Statement detailing its internal use of AI tools like Microsoft Copilot for productivity, information analytics, and policy/legal review (See Artificial Intelligence (AI) Transparency Statement—Department of Education, Australian Government https://www.education.gov.au/about-department/corporate-reporting/artificial-intelligence-ai-transparency-statement (accessed on 20 May 2025)). This statement emphasizes a “human-in-the-loop” approach, prohibiting fully automated output processes and direct public interaction with AI chatbots, and ensures compliance with existing legislation such as the Privacy Act 1988 and the Public Governance, Performance and Accountability Act 2013 (ibid.).
Several institutions provide relevant sector-specific frameworks. Universities Australia (UA) has made submissions to government inquiries, such as the House Standing Committee on Employment, Education and Training’s inquiry into generative AI in the Australian education system (See https://universitiesaustralia.edu.au/wp-content/uploads/2024/05/UA-Response-to-Adopting-AI-Inquiry.pdf (accessed on 20 May 2025)). These submissions typically outline the benefits of AI tools in research and teaching, such as enhanced productivity and personalized learning, while also acknowledging risks, particularly concerning academic integrity (ibid.). UA advocates for universities to retain autonomy in developing internal policies and guidance for the appropriate use of AI by staff and students (ibid.). In this context, establishing a National AI Safety Taskforce comprising the DoE, the Australian Federal Police (AFP), and key technology firms would be crucial to oversee AI risk mitigation, promote responsible adoption, and develop robust national standards for AI use.
The Group of Eight (Go8), representing Australia’s leading research-intensive universities, has also established principles on the use of generative AI (See https://go8.edu.au/group-of-eight-principles-on-the-use-of-generative-artificial-intelligence (accessed on 22 October 2025)). These principles, published in September 2023, emphasise a commitment to the ethical and responsible use of AI to enhance teaching, learning, assessment, and research. Key tenets include maintaining academic excellence and integrity, promulgating clear guidelines for appropriate AI use by students and staff, developing resources to empower users, ensuring equitable access to AI, and engaging in collaborative efforts to exchange best practices (ibid.). The Go8 acknowledges the ethical challenges and potential risks associated with generative AI, including the creation of content that may infringe on intellectual property or contain errors (ibid.).
The Tertiary Education Quality and Standards Agency (TEQSA) plays a critical role in quality assurance for the Australian higher education sector. In response to the rise of generative AI, TEQSA has developed resources such as the “Gen AI Knowledge Hub” aimed at supporting institutions in considering the impacts of GenAI tools (See https://www.teqsa.gov.au/guides-resources/higher-education-good-practice-hub/gen-ai-knowledge-hub (accessed on 22 May 2025)). TEQSA also provides links to various resources developed by universities and other organizations on these topics and, significantly, requires all universities to develop and implement an “Artificial Intelligence Action Plan” (See https://i.unisa.edu.au/staff/teaching-innovation-unit/academic-integrity/artificial-intelligence/ (accessed on 22 May 2025)). This mandate is a powerful driver for institutions to move beyond ad-hoc responses and develop more strategic, documented approaches to managing AI’s opportunities and risks across their operations.
The Australian Research Council (ARC), a major funding body for research, has also addressed the implications of GenAI. The ARC has a specific policy regarding the use of GenAI in its grant programs, emphasizing the principles of research integrity and the responsible conduct of research (See https://tinyurl.com/46xu4wea (accessed on 22 May 2025)). This policy signals the seriousness with which the research community and its funders are treating the integration of AI into research practices, compelling universities to ensure their researchers are aware of and comply with these national standards.
The Australasian Academic Integrity Network (AAIN) has developed influential “Generative Artificial Intelligence Guidelines.” These guidelines are frequently referenced by universities and serve as a valuable resource for developing institutional policies, particularly those related to academic integrity (https://tinyurl.com/57jme72p (accessed on 23 May 2025)).

1.1.2. Australian Universities’ GenAI Policies: General Stance and Focus Areas

The use of GenAI in the higher education sector has grown exponentially in recent years. GenAI is being used for research (e.g., data analysis, simulation, and modelling), learning (e.g., language support and personalised learning content), as well as teaching (e.g., content generation and chatbots to handle routine enquiries). The increasing prevalence of this technology is fuelled by the ubiquity of its use across the board, from students and academics to support and administrative staff. This has resulted in universities putting in place new policies and/or updating existing policies aimed at addressing the novel risks presented by GenAI across various dimensions of higher education provision. As confirmed in a submission by Universities Australia to the federal inquiry into the use of generative artificial intelligence in the Australian education system (Universities Australia: 2023), the Australian comprehensive universities consulted had either implemented or were in the process of implementing institutional policies or strategic frameworks addressing the use of generative AI. The following analysis highlights areas of convergence and divergence in the general stance and focus areas of Australian universities’ GenAI policies, guidelines, and frameworks (Data for this study were collected and collated using the Gemini Deep Research tool version 2.5 with the search term <Australian Universities Generative AI Policies>. Data cleaning, data verification (triangulation with AAIN’s report on institutional responses to Gen AI and individual university policy documents) and data analysis were conducted by the authors).
Findings on the general stance taken by Australian universities in their GenAI policies indicate that the dominant approach towards GenAI use is permissive, allowing its use as a learning tool. However, this permissive stance is caveated by an emphasis on the need for responsible and ethical use. For example, Curtin University’s stance “supports teaching students to use GenAI ethically and responsibly for future professional environments,” stating that GenAI should be “used with caution.” (See https://www.curtin.edu.au/students/essentials/rights/academic-integrity/gen-ai/ (accessed on 22 May 2025)). This is taken further by universities that provide tools to ensure responsible use, such as the “Generative AI Toolkit” developed by Central Queensland University (launched March 2025) for the ICT discipline, promoting responsible AI adoption in education (See Centre for Machine Learning—Networking and Education Technology—CQUniversity (accessed on 22 May 2025)). Other universities take more nuanced approaches, mirroring the risk-based approaches prevalent in the AI governance field. For example, whilst emphasising ethical considerations such as copyright, transparency, accuracy, bias, reproducibility and privacy, Federation University Australia provides flexibility at the course level, with recommended use ranging from “ZERO use” (AI prohibited in assessments) to “SOME use” (AI allowed for drafting with citation) and “ENCOURAGED use” (AI integrated as a learning tool) (See https://libguides.federation.edu.au/AI/about (accessed on 22 May 2025)). A similar nuanced approach is taken by Macquarie University, whose Academic Integrity Policy (updated 2 August 2023) recognises that AI may be used at many stages and that use does not automatically constitute misconduct, given that acceptable use varies by discipline/course/assessment (See https://policies.mq.edu.au/download.php?associated=1&id=768&version=1 (accessed on 22 May 2025)). A final observable trend in the findings is the focus on education and upskilling of staff and students, with universities such as Charles Sturt University stating its commitment to “preparing students to use AI tools effectively and ethically” (See https://policy.csu.edu.au/document/view-current.php?id=577 (accessed on 25 May 2025)) and Deakin University welcoming students to “develop skills to use GenAI.” (See https://deakin.libguides.com/generative-ai-research/about (accessed on 25 May 2025)).
Several focus areas emerge from the analysis. GenAI policies vary in scope but commonly converge on areas related to student use, staff use, acknowledgement, disclosure and referencing, academic integrity, and the use of university-endorsed/provided tools (See Appendix A for a comprehensive overview of university GenAI policies). Student-related guidance generally refers to GenAI use in coursework and assessment, while staff-related guidance falls in the areas of teaching, research and operations. The provision of guidance and resources for both staff and students is evident in all GenAI policies, with some universities placing an emphasis on one category over another, while others provide guidance relevant to both students and staff. Universities whose publicised policies and guidance are directed solely at students include Avondale University, Edith Cowan University, James Cook University, and Griffith University, which provides a module on “using generative AI ethically and responsibly” aimed at ensuring students adhere to academic integrity goals and values (See https://www.griffith.edu.au/__data/assets/pdf_file/0029/1763444/17_AI.pdf (accessed on 25 May 2025)). This focus on academic integrity in GenAI policies is linked to the observed foci on acknowledgement of GenAI use as well as guidance on disclosure and referencing of GenAI outputs, with a growing emphasis on fostering critical AI literacy. Another key trend has been the move towards providing secure, university-vetted AI platforms to mitigate the data security and intellectual property risks associated with public tools. These range from the provision and endorsement of specific tools such as Microsoft CoPilot Enterprise to the development of bespoke, tailored tools such as RMIT’s “Val” GenAI chatbot, a private, secure tool for students with features like image generation, personas (essay feedback, quiz), and document summarization (See https://www.rmit.edu.au/students/support-services/study-support/val (accessed on 25 May 2025)), and the University of Melbourne’s Spark AI for secure data processing (See https://mdhs.unimelb.edu.au/research/innovation-and-enterprise-trash/spark-melbourne (accessed on 25 May 2025)). Despite convergence of GenAI policies in certain areas, discrepancies exist in policy implementation, permissiveness in academic assessment and the provision of institutionally endorsed AI tools. For example, while there is consensus around the explicit classification of unauthorised or unacknowledged use of GenAI in assessments as academic misconduct, acknowledgement of GenAI use is almost universally mandated where such use is permitted (albeit with varying levels of detail provided).

2. Method

To capture the extent to which Australian universities have implemented policies to deal with AI-generated misinformation and associated cybercrime risks, their alignment with national frameworks, and the gaps within institutional responses, this study employs a qualitative approach to investigate institutional responses to AI-generated misinformation and their implications for cybercrime prevention in Australian higher education. Combining a systematic literature review (SLR) with qualitative content analysis, the research synthesizes evidence from academic literature, university policies, and regulatory frameworks. The methodology is structured in two phases: (1) an SLR to identify key themes, and (2) a qualitative content analysis applying these themes to 40 Australian university policies (see Appendix A) and national regulatory frameworks. Five researchers, purposively selected for their specialized expertise in qualitative methodologies and cybercrime prevention, independently conducted and cross-validated the analysis through a systematic triangulation process, ensuring robust confirmation of findings and bolstering the reliability and validity of the study’s results. The SLR, guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [12], provides a rigorous foundation for identifying global and regional trends in institutional responses. Qualitative content analysis enables a detailed examination of policy documents and regulatory frameworks within the Australian context.
Data were collected from three primary sources to ensure a comprehensive evidence base: peer-reviewed articles published between 2000 and 2025, identified through the SLR; publicly available policies from 40 Australian universities, including academic integrity and AI usage guidelines, accessed via university websites; and regulatory frameworks, such as TEQSA’s AI Action Plans and Australia’s Cyber Security Strategy, obtained from government and agency reports. Data analysis involved the SLR and thematic content analysis using MAXQDA software, version 24.11 (MAXQDA is a software program designed for qualitative and mixed-methods data analysis, offering tools for thematic content analysis to support academic research).
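As a minimal illustrative sketch of theme-based content coding (not the MAXQDA codebook or workflow used in this study; the keyword indicators and file layout below are hypothetical), the three SLR-derived themes could be expressed as a simple keyword-based coding frame and tallied across a folder of policy documents:

```python
from collections import Counter
from pathlib import Path

# Hypothetical coding frame: the three SLR-derived themes mapped to
# illustrative keyword indicators (not the authors' actual codebook).
CODING_FRAME = {
    "Educational Strategies": ["literacy", "training", "curriculum", "digital citizenship"],
    "Alignment with National Frameworks": ["teqsa", "cyber security strategy", "esafety", "privacy act"],
    "Policy Gaps and Development": ["misinformation", "disinformation", "deepfake", "detection"],
}

def code_document(text: str) -> Counter:
    """Count keyword hits per theme for a single policy document."""
    text = text.lower()
    counts: Counter = Counter()
    for theme, keywords in CODING_FRAME.items():
        counts[theme] = sum(text.count(kw) for kw in keywords)
    return counts

def code_corpus(folder: str) -> dict:
    """Apply the coding frame to every .txt policy file in a folder."""
    return {path.name: code_document(path.read_text(encoding="utf-8"))
            for path in Path(folder).glob("*.txt")}

if __name__ == "__main__":
    # Hypothetical folder of plain-text university policy documents.
    for doc, counts in code_corpus("university_policies").items():
        print(doc, dict(counts))
```

Such keyword tallies only approximate the manual, context-sensitive coding performed by the research team; they are shown here solely to make the coding-frame logic concrete.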

2.1. Systematic Literature Review: Identifying Key Themes

To identify key themes from the academic literature, this section explores how Australian universities address AI-generated misinformation as a significant online harm, examining its intersections with broader cyber threats, including fraud, image-based abuse, and ransomware. Through a systematic review, the study employed content analysis of Australian university policies, government reports, and academic literature to identify institutional strategies and gaps in mitigating these harms. The review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, ensuring a standardized and transparent approach to the systematic review process.

2.1.1. Literature Search Strategy and Source

All potentially relevant articles were systematically and thoroughly searched to ensure a comprehensive foundation for this study. A wide range of academic literature was searched across multiple databases, including Scopus (n = 1091 records), Web of Science (n = 375), IEEE Xplore (n = 652), and Google Scholar (n = 34), yielding a total of 2152 records, with a further 2 records identified through citation searching. In addition, to ensure a comprehensive literature search, we carefully examined the reference lists of all identified studies to retrieve additional relevant articles not captured in the initial database searches. The search used a strategic combination of keywords across all platforms and was conducted from 15 to 24 June 2025. The inclusion and exclusion criteria are shown in Table 1 below.

2.1.2. Study Selection and Eligibility Criteria

Articles published in English between 2000 and 2025 were included in this study. The rationale for selecting 2000 as the starting point is that a major cybercrime incident in that year caused disruption and damage to computer systems worldwide [13]. Conversely, articles published before 2000 or in languages other than English were excluded, as the study and regulation of online harms were not well developed before 2000 [14]. Non-peer-reviewed materials, such as opinion pieces, editorials, and commentaries, were not considered, to ensure the reliability and academic rigor of the sources. Additionally, articles lacking the specified keywords or focusing on topics unrelated to online harm were excluded. Finally, articles were omitted if they did not address strategies for preventing online harm and cybercrimes or failed to provide empirical data or theoretical insights directly relevant to the study’s research questions. The study considered the inclusion and exclusion criteria in Table 1.

2.1.3. Quality Assessment, Assessment of Risk of Bias, and Data Extraction

Mixed Methods Quality Assurance
The quality assurance process for mixed-methods studies employed the Mixed Methods Appraisal Tool (MMAT), version 2018 [15], to ensure methodological rigor. This tool assesses several critical dimensions: it verifies that research questions are clearly articulated and that the collected data effectively address these questions. It also assesses the justification for adopting a mixed-methods design, confirming its suitability for the research objectives. In addition, the tool evaluates the consistency of results, confirming that findings from diverse methods are synthesised into a unified interpretation.
Qualitative Methods Quality Assurance
The Mixed Methods Appraisal Tool (MMAT) evaluates several key criteria to ensure the quality of qualitative studies. It assesses whether research questions are clearly defined and aligned with the collected data. The tool also examines the suitability of the qualitative approach for addressing the research question and the appropriateness of the data collection methods. In addition, it evaluates whether findings are robustly derived from the data and if the interpretation of results is well-supported by evidence.
The coherence between the qualitative data sources, collection, analysis, and interpretation was also assessed. For instance, the studies by Cubbage and Smith [16], Reeves, Delfabbro and Calic [17], and Striepe, Thomson and Sefcik [18] scored 7 out of 7, indicating a high level of methodological rigour and coherence in their qualitative approach (see Appendix B).
Data Extraction
Duplicate records were removed using reference management software (EndNote 21) to ensure a consolidated dataset. Two reviewers (GM and LL) independently screened the titles and abstracts, excluding records that did not meet the inclusion criteria, such as reviews and position papers. The methodological quality of the selected studies was evaluated using the Mixed Methods Appraisal Tool (MMAT), version 2018 [15]. Disagreements between the two reviewers were discussed with another review team member until consensus was reached; where disagreements persisted, four reviewers (DA, DA, IM, and MZ) were consulted to negotiate and resolve the conflicts. Data were extracted into a spreadsheet capturing key details, including the primary author’s name, year of publication, institutional strategies to counter AI-generated misinformation and cybercrime risks, alignment with national frameworks for digital safety and cybercrime prevention, and gaps in institutional responses. The review process was documented and visualised using a PRISMA flow diagram (Figure 1).
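As a purely illustrative sketch of the extraction step (the field names and sample values below paraphrase the description above and the Cubbage and Smith study [16] discussed in Section 3.1; they are not the authors’ actual spreadsheet), the extracted study details could be structured as simple records and written to a spreadsheet as follows:

```python
from dataclasses import dataclass, asdict, fields
import csv

# Hypothetical record mirroring the extraction fields described above.
@dataclass
class ExtractionRecord:
    first_author: str
    year: int
    institutional_strategies: str   # strategies countering AI-generated misinformation / cybercrime
    framework_alignment: str        # alignment with national digital safety frameworks
    identified_gaps: str            # gaps in institutional responses

def write_extraction_sheet(records: list, path: str) -> None:
    """Write extracted study details to a CSV spreadsheet."""
    fieldnames = [f.name for f in fields(ExtractionRecord)]
    with open(path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)

if __name__ == "__main__":
    # Illustrative single entry only; values are paraphrased, not extracted data.
    sample = [ExtractionRecord(
        first_author="Cubbage", year=2009,
        institutional_strategies="campus security awareness programs",
        framework_alignment="partial alignment with national digital safety goals",
        identified_gaps="no AI-specific integrity or misinformation measures")]
    write_extraction_sheet(sample, "extraction_sheet.csv")
```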
Search Results
A total of 2152 records were initially identified, with 2 records from citation searching. After removing duplicates (n = 46) and excluding irrelevant records based on titles (n = 1968), 138 records were screened. Further screening based on titles and abstracts excluded 111 records. Of the 31 reports sought for retrieval, 4 were not retrieved, and 27 were assessed for eligibility, alongside 1 additional report from citation searching. Following quality assessment, 11 reports were excluded (7 for not reporting outcomes of interest and 4 for failing quality criteria). Ultimately, 16 new studies were included in the review, contributing to a total of 17 studies in the meta-analysis. This structured approach ensures a robust and reproducible synthesis of evidence, adhering to high standards of systematic review methodology.
This systematic review synthesises evidence on how Australian universities address AI-driven online harms, including misinformation and associated cybercrime risks, by examining institutional strategies, their alignment with national frameworks, and existing gaps with implications for future prevention. The findings reveal a multifaceted but fragmented approach, with strengths in educational initiatives and security measures but significant weaknesses in addressing AI-specific challenges and achieving consistent alignment with national digital safety and cybercrime prevention frameworks. The following discussion explores these themes, their implications, and directions for future research and practice. From the systematic literature review, this study identified three key themes: Educational Strategies, Alignment with National Frameworks, and Policy Gaps and Development [9,11]. For the identified themes, see Table 2.

3. Findings

3.1. Findings from SLR

Educational Strategies

Educational approaches form the foundation of institutional efforts focusing on academic integrity education through explicit instruction to promote ethical behavior [16]. Such approaches have proven effective in reducing fraud in enabling programs, suggesting potential for tackling AI-driven breaches, like assessment outsourcing. However, their reactive nature limits their ability to address AI-specific challenges, such as students using tools like ChatGPT for academic work. Proposed generative AI (GenAI) integration frameworks offer a proactive solution by advocating for curriculum revisions, training for staff and students, and evaluation matrices to ethically incorporate generative AI. These frameworks encourage critical evaluation of AI-generated content, helping to mitigate misinformation risks. Additionally, digital citizenship programs foster responsible technology use and empathy, potentially reducing AI-driven cyberbullying, such as deepfake harassment.
Cubbage and Smith (2009) [16] emphasised that robust security and environmental measures are vital for addressing online harm. Campus safety initiatives, such as collaboratively delivered security awareness programs, promoted both physical and psychological safety [16]. These efforts created supportive environments that empower students to report cybercrimes, including AI-facilitated image-based abuse, with confidence. Crime Prevention Through Environmental Design (CPTED) principles, such as improved surveillance and open space design, reduce opportunities for crimes, including those enabled by AI technologies (e.g., stalking via AI-generated tracking tools) [19,23,27]. As Striepe, Thomson and Sefcik (2023) [18] noted, these measures have been used to support cybercrime prevention by fostering safer campus environments.

3.2. Alignment with National Frameworks

The alignment of university policies and practices with national frameworks, such as Australia’s Cyber Security Strategy [28] and the eSafety Commissioner’s guidelines, is partial, with notable strengths and weaknesses [16,19,21,23]. Alignment strengths include support for digital literacy through academic integrity education, which aligns with national goals of promoting ethical technology use [19]. Teaching students to verify sources, for instance, supports the eSafety Commissioner’s focus on critical digital literacy to combat misinformation. Multi-stakeholder collaboration, such as police involvement in cyberbullying cases and safety committees [16,29], reflects the national emphasis on partnerships between institutions, law enforcement, and government, as outlined in the Cyber Security Strategy.
However, there are alignment weaknesses as well. The limited scope of AI-specific strategies is a critical gap, as current educational approaches focus on general academic integrity rather than AI-driven breaches like automated essay generation [18,20]. This is misaligned with national calls for proactive measures against emerging technologies. Vaill, Campbell and Whiteford (2020) [21] further noted that inconsistent policy implementation, particularly in anti-bullying policies, undermined the eSafety Commissioner’s emphasis on clear, accessible guidelines. The lack of standardized procedures for addressing AI-driven cyberbullying, such as deepfake harassment, also weakened alignment. In addition, cybersecurity training undermined by cybersecurity fatigue failed to meet national expectations for behavior-focused education [17,23]. Australian universities have not fully embraced national frameworks promoting the use of cybercrime prevalence and harm perceptions to drive preventive behaviours, underscoring the need for full adoption of these frameworks and more effective motivational training strategies [18,21]. This incomplete alignment suggests that while universities support national digital safety objectives, their efforts are hindered by inconsistent policies and an insufficient focus on AI-specific challenges, necessitating stronger coordination with national frameworks to enhance their impact.

3.3. Policy Gaps and Development

The review identified several critical gaps in current responses to AI-driven online harms, each with significant implications for future prevention [17,21,24]. Limited GenAI adoption left universities unprepared to address AI-generated misinformation [20]. Implementing standardized GenAI frameworks, including curriculum redesign and training, together with revised assessment types (e.g., oral exams, in-class tasks) and open-book policies, could enable ethical AI use and reduce misinformation risks [22,26]. Inadequate AI-specific integrity measures, including the lack of policies and detection tools for AI-driven academic misconduct, underscore the need for targeted strategies [18]. Universities could develop comprehensive AI-specific academic integrity policies that explicitly outline procedures for detecting, reporting, and adjudicating AI-generated misconduct. Universities could also invest in AI detection software, such as Turnitin. However, such software, alongside a shift to formative assessments, should be used cautiously and complemented by education to minimise reliance on AI-susceptible tasks [30].
Weak cyberbullying policies, characterised by inconsistency and poor accessibility, fail to address AI-driven harms like deepfake harassment [21,26]. Behaviour-focused training, using gamified simulations and case studies, could improve engagement and threat recognition [31,32]. In addition, the lack of inter-systemic collaboration limits responses to complex AI-driven harms [24]. Finally, as Jacqueline (2020) [23] argued, the reliance on knowledge-based education over perception-based prevention misses opportunities to motivate preventive behaviours by highlighting cybercrime risks. Awareness campaigns emphasising the real-world impacts of AI-driven harms could enhance the adoption of preventive measures.
Effective policy development encompasses anti-bullying, cyberbullying, and cybersecurity policies. Vaill, Campbell and Whiteford (2020) [21] found that while all 37 Australian universities have anti-bullying policies, their inconsistency and lack of user-friendliness hinder effectiveness, particularly in addressing AI-driven cyberbullying, such as automated hate campaigns. Similarly, Reeves, Delfabbro and Calic (2021) [17] noted that Security Education, Training, and Awareness (SETA) programs are used to combat cyber threats, including AI-driven phishing. While these strategies reflected a commitment to tackling AI-driven harms, they tended to respond to issues rather than anticipate and prevent AI-related risks, particularly in integrating AI-specific measures [17,21,33].
By integrating these strategies, particularly through forward-thinking GenAI frameworks and digital citizenship initiatives, universities can better address the evolving challenges of AI-driven harms while promoting the ethical and responsible use of technology [21].

3.4. Findings from Policy Content Analysis

3.4.1. Educational Strategies

AI-Generated Misleading Content (Misinformation)
Australian universities demonstrate a range of educational strategies to address the risks of AI-generated misinformation within the context of generative AI (GenAI) use. These strategies focus on fostering ethical, responsible, and critical engagement with GenAI tools to ensure academic integrity and mitigate the potential for misinformation. While no university explicitly references “misinformation” in their GenAI policies, their approaches implicitly tackle this issue through education, policy frameworks, and tool-specific guidance. The strategies can be broadly categorized into fostering digital literacy, embedding principles of academic integrity, providing practical guidance, and promoting the critical evaluation of AI outputs (please refer to Appendix A).
One key educational strategy is the emphasis on digital and AI literacy to equip students and staff with the skills to assess and use GenAI tools critically. Universities like the University of Sydney explicitly aim to develop digital literacy by encouraging students to understand the limitations of GenAI and balance its use with traditional learning methods. Similarly, the University of Melbourne supports staff and students in building AI/data literacy, with its Generative AI Taskforce (GAIT) outlining 10 guiding principles to navigate GenAI responsibly. This focus on literacy is evident across institutions like Monash University, which provides free access to tools like CoPilot and emphasizes safe usage, and the University of Western Australia, where the GenAI Think Tank advises on selecting safe tools and educating staff to avoid accidental data leakage. By prioritizing literacy, these institutions aim to empower users to recognize and mitigate AI-generated inaccuracies or biases that could lead to misinformation (please refer to Appendix A).
For instance, James Cook University mandates that students verify the credibility of GenAI outputs, fostering a culture of critical scrutiny that indirectly addresses misinformation risks. Edith Cowan University and Charles Sturt University further reinforce ethical principles like integrity, transparency, and accountability, encouraging students to question AI-generated content (please refer to Appendix A).
Finally, universities emphasize critical evaluation and transparency in GenAI use to combat potential misinformation. Institutions like Macquarie University and the University of Newcastle recognize that GenAI use varies by discipline and assessment, requiring students to disclose and reference AI contributions transparently. This transparency, coupled with policies at universities like Swinburne University of Technology and the University of Tasmania, which mandate accurate acknowledgment of GenAI use, encourages students to verify AI outputs against credible sources. The University of South Australia and Southern Cross University further promote a balanced approach, integrating GenAI into education while emphasizing ethical use and critical evaluation (please refer to Appendix A). This focus on scrutiny and accountability ensures that students and staff are equipped to identify and correct AI-generated errors or biases, thereby mitigating the spread of misinformation.
AI-Executed Attacks (e.g., Deepfakes)
Bond University and Deakin University, for example, warn against uploading sensitive or copyrighted material to non-approved GenAI tools, as these could be used for training and potentially generate misleading outputs. Such warnings also address data privacy risks, which could otherwise contribute to misinformation through data misuse. While no policies explicitly name deepfakes, institutional controls on data handling indirectly limit vectors for AI-executed identity-based attacks.
Unethical AI Use in Academic Writing
Another critical strategy is the integration of academic integrity principles into GenAI education. Universities such as Griffith University, James Cook University, and Flinders University emphasize core values like honesty, trust, fairness, and responsibility, requiring students to acknowledge and reference GenAI use appropriately. This approach ensures that students are not only aware of the ethical implications of GenAI but also trained to evaluate its outputs critically, reducing the likelihood of perpetuating misinformation.
Practical guidance and tool-specific education form another pillar of these strategies. Many universities, such as Central Queensland University with its “Generative AI Toolkit” and RMIT University with its secure AI tool “Val,” provide structured resources to guide responsible GenAI use. These tools and guidelines help students and staff navigate the practical aspects of GenAI, including how to use university-endorsed platforms that minimize data privacy risks. By promoting university-provided tools like CoPilot (Federation University Australia, Monash University) or Spark AI (University of Melbourne), institutions ensure safer environments for AI use, indirectly reducing misinformation risks associated with unverified or external platforms (please refer to Appendix A).

3.4.2. Alignment with National Frameworks

The policies of Australian universities regarding generative artificial intelligence (GenAI) generally align with national frameworks such as those provided by the Tertiary Education Quality and Standards Agency (TEQSA) and the Australian Academic Integrity Network (AAIN). These frameworks emphasize academic integrity, ethical use of technology, and responsible engagement with AI tools. Most universities, such as the Australian National University (ANU), Deakin University, and Monash University, explicitly reference principles like honesty, transparency, fairness, and accountability, which mirror TEQSA’s focus on maintaining academic standards and AAIN’s guidelines for ethical AI use in education. For instance, ANU’s policy emphasizes responsible and ethical use consistent with academic integrity, aligning with TEQSA’s expectation that institutions mitigate risks of academic misconduct through education and policy enforcement. Similarly, Monash University’s adherence to the Australian Code for Responsible Conduct of Research and its provision of secure tools like CoPilot reflect a commitment to national standards for ethical research and data protection (please refer to Appendix A).
However, alignment varies in depth and specificity. Universities like the University of Melbourne, with its 10 guiding AI principles developed by the Generative AI Taskforce (GAIT), and the University of Western Australia, with its GenAI Think Tank, demonstrate proactive engagement with national frameworks by establishing structured advisory groups to address AI’s risks and opportunities. These institutions integrate national guidelines into detailed, context-specific policies that address both teaching and research applications. In contrast, universities like Queensland University of Technology (QUT) and the University of Technology Sydney (UTS) lack detailed GenAI-specific policies in the publicly available documents reviewed, suggesting a reliance on general academic integrity principles and national guidelines without institution-specific adaptations. This indicates a spectrum of alignment, with some institutions embedding national frameworks more robustly into their policies than others, potentially reflecting differences in institutional resources or strategic priorities.

3.4.3. Policy Gaps and Development

A significant gap across many Australian university GenAI policies is the limited explicit focus on AI-generated misinformation and cybercrime. While universities like ANU, Bond University, and the University of Western Australia emphasize data privacy and security—warning against uploading sensitive or copyrighted material to GenAI tools—none of the policies explicitly address the risks of AI-generated misinformation, such as the spread of false narratives or manipulated content. This omission is notable given the potential for GenAI to produce misleading outputs, which could undermine academic integrity or public trust in research. For example, James Cook University stresses checking GenAI output for credibility, but this is not framed specifically as a countermeasure against misinformation. Developing policies that explicitly address misinformation, such as requiring critical evaluation of AI outputs or integrating media literacy into AI education, could strengthen institutional responses.
Another gap lies in the variability of policy specificity and enforcement. Universities like Federation University Australia offer flexible approaches (ranging from “ZERO use” to “ENCOURAGED use”), which allow for discipline-specific adaptations but risk inconsistency in addressing misinformation-related challenges. In contrast, institutions like RMIT University, with its secure “Val” chatbot, and the University of Wollongong, recommending Copilot for enterprise data protection, show progress in developing tools to mitigate data-related risks, but these are not universally adopted. To address these gaps, universities could develop standardized guidelines for identifying and mitigating AI-generated misinformation, potentially through national collaboration via TEQSA or AAIN. Additionally, investing in staff training, as seen at Murdoch University, and student education, as at Griffith University, could enhance policy implementation. Future policy development should prioritize explicit strategies for misinformation detection, robust data governance, and consistent enforcement to ensure academic integrity in the face of evolving AI technologies.

4. Discussion

The Australian Cyber Security Strategy 2023–2030 (https://www.homeaffairs.gov.au/about-us/our-portfolios/cyber-security/strategy/2023-2030-australian-cyber-security-strategy (accessed on 22 May 2025)) designates the higher education sector as critical infrastructure (p. 39 of the Strategy) that will drive cyber innovation and technological advancement (pp. 8 and 13 of the Strategy). This places higher education institutions at the forefront of GenAI use and development, requiring robust and flexible AI policies.
However, the overall AI policy landscape within the Australian higher education sector remains significantly fragmented, reactive, and largely subordinated to concerns of academic and award integrity. Current AI policy responses (this paper considered the five top-ranked Australian universities under the THE (Times Higher Education) rankings: University of Melbourne, Monash University, Australian National University (ANU), University of New South Wales Sydney (UNSW) and University of Technology Sydney (UTS)) range from general guiding principles (ANU (https://learningandteaching.anu.edu.au/resources/anu-institutional-ai-principles/ (accessed on 22 May 2025)), UNSW (https://www.student.unsw.edu.au/notices/2024/05/ethical-and-responsible-use-artificial-intelligence-unsw (accessed on 22 May 2025)) and University of Melbourne (https://www.unimelb.edu.au/ai/university-of-melbourne-ai-principles (accessed on 22 May 2025))) to formalized operational policies and procedures (Monash University and UTS).
Despite progress, significant policy gaps remain in comprehensively addressing AI-generated misinformation. While educational strategies like critical AI literacy are mirrored in policies from universities like Griffith and James Cook, the policy analysis reveals stark discrepancies: only 12 of the 40 universities explicitly address data privacy risks, undermining systematic safeguards against AI-driven misinformation and cybercrime. Institutions like the Queensland University of Technology and the University of Technology Sydney lack detailed GenAI policies, relying on general academic integrity principles, which may not sufficiently address misinformation-specific challenges. The lack of explicit references to misinformation in publicly available policy information suggests that universities such as Flinders University and Griffith University primarily focus on academic integrity, rather than directly addressing broader misinformation risks.
Although these principles and policies embed fairness, accountability, transparency, and safety, concentrating on ethical AI use and risk management, they rarely address misinformation directly. Such risks are generally subsumed within broader risk mitigation frameworks involving procurement, integrity, and digital literacy programs.
This lack of specificity is at odds with the concerns expressed by senior Australian educational policy makers, who have identified the management of misinformation as one of the main challenges in the use and deployment of AI. They are uncertain and concerned about how the technology is used and about the possible production of inaccurate or misleading data [10].
Additionally, the Australian Government parliamentary inquiry into the use of generative AI in education (Inquiry into the use of generative artificial intelligence in the Australian education system—Parliament of Australia, https://www.aph.gov.au/Parliamentary_Business/Committees/House/Former_Committees/Employment_Education_and_Training/AIineducation (accessed on 2 May 2025)) highlighted the risk that mis- and disinformation pose to the health and safety of individuals, arising from hallucinations, factual errors, fake news, doctored images and videos, and false social media information and advertisements that foster distrust and biases leading to poor outcomes (Sections 3.24 to 3.30 of the inquiry report). The inquiry recommended (Recommendation 13) that the Australian Government work with educational providers to mitigate the risk of misinformation and disinformation by:
  • training educators to teach students how to critique AI-generated outputs;
  • mandating that institutional deployers of AI systems in educational settings run regular bias audits and testing;
  • prohibiting the use of GenAI to create deceptive or malicious content in education settings;
  • completing risk assessments, for example, identifying and seeking to eliminate bias and discrimination through the data the model is trained on, the design of the model and its intended uses;
  • mandating that independent researchers be allowed ‘under-the-hood’ access to algorithmic information.
While some of these elements are reflected in Australian higher education AI principles and policies, their effectiveness may be limited. For example, AI literacy programs may be falling short of the required intent due to a lack of sufficient staffing to vet AI-generated content, inconsistent institutional support for AI literacy programs, and limited expertise among staff [34]. A disconnect is therefore evident between policy ambitions and implementation ‘on the ground’ [10].
The reliance on AI detection tools, such as Turnitin, highlights another gap, as their efficacy is debated [35]. Universities like the University of Southern Queensland and the University of Adelaide use these tools but lack clear strategies for addressing false positives or evolving AI capabilities that may evade detection. Additionally, while universities like RMIT and the University of Melbourne provide secure tools (Val and Spark AI), others do not specify approved platforms, potentially exposing users to external tools with data privacy risks that could contribute to misinformation.
Further, transparency and accountability requirements may be difficult to enforce when third-party proprietary AI algorithms are used. Such algorithms are owned by businesses that often rely on opacity and advertisements for revenue [36,37]. The tools and applications that may be subject to misinformation and disinformation are incredibly varied and pervasive. To understand how and when an AI system is engaging or impacting users, policies need to specifically address both education about AI (literacy and ethics), and education on using AI (tools and applications) [38].
Some Australian universities are developing policies that could be used as a framework to specifically address mis- and disinformation. Monash’s AI Operations Policy and its associated Operations Procedure and Framework could be adapted to include dynamic, dedicated components targeting misinformation and disinformation, such as specific risk assessments, user safeguards (e.g., information assessment systems), mis- and disinformation reporting mechanisms with rapid response times, and interdisciplinary governance networks [10,39]. Likewise, the UNSW AI Capability Framework could be expanded to directly counter misinformation and disinformation in student outputs and learning environments.
UTS has integrated the New South Wales Government AI Assessment Framework (https://www.digital.nsw.gov.au/policy/artificial-intelligence/nsw-artificial-intelligence-assessment-framework (accessed on 22 May 2025)) into its AI policy. The framework is a living document that promotes safe and responsible use of AI while considering community benefit, fairness, privacy and security, transparency, and accountability. This demonstrates the potential for universities to adapt government-developed guidelines, including specific mis- and disinformation guidance, into institutional practice.
The Monash University AI Operations Policy indicates that AI risk management is a shared responsibility between the university and AI users. It also denotes the roles and responsibilities of managers and supervisors (Monash University AI Operations Policy Section 4) (please refer to https://publicpolicydms.monash.edu/Monash/documents/2904420, accessed on 22 November 2025). A similar approach must be applied to mis and disinformation risk mitigation. Any AI policy must ensure that all stakeholders and target groups are aware of risks (including specifically, mis- and disinformation) [38].
Due to the exponential rate of AI development, ways in which AI users are mis- and dis-informed will evolve over time, necessitating ongoing oversight and policy reviews. Monash University has recognized this need:
Monash will ensure ongoing compliance with relevant regulatory frameworks governing the use of AI technologies, with regular reviews and amendments made to relevant policies to respond to new risks created by the evolving technology and its requirements (Monash AI Operations policy Section 3.5) (please refer to https://publicpolicydms.monash.edu/Monash/documents/2904420, accessed on 22 November 2025).
Critically, oversight of any AI policy must be inclusive. Oversight committees must ‘include students, academics, administrators, technologists, and external specialists’ and must convene regularly to address the pace of change [40]. Given the high level of AI usage among higher education students (Students are embracing AI, but are they confident about it? | Learning and Teaching | University of Adelaide (https://www.adelaide.edu.au/learning/news/list/2024/08/19/students-are-embracing-ai-but-are-they-confident-about-it (accessed on 22 May 2025)), and their exposure to misinformation and disinformation, students must have a direct role in drafting and implementing related policy [41].
The approach to mis- and disinformation in higher education policy in Australia reflects the state of AI policy and regulation development at the state and national levels. Whatever the approach, it remains fragmented, and disagreement persists over how mis- and disinformation should be regulated; higher education staff and students may therefore be at the forefront of education campaigns regarding AI risks. Given the awareness of the issue and the high level of concern regarding misinformation (Digital News Report: Australia 2024: AI, social media, misinformation and distrust—what the data tells us about the news landscape in 2024—University of Canberra (https://www.canberra.edu.au/about-uc/media/newsroom/2024/june/digital-news-report-australia-2024-ai,-social-media,-misinformation-and-distrust-what-the-data-tells-us-about-the-news-landscape-in-2024 (accessed on 23 May 2025)), liability may also become an issue [42]. While a decision on whether to regulate at the national level (or to pursue alternatives) is pending, expanding TEQSA’s mandate could be considered to ensure that educational campaigns are robust enough to meet the growing need, that the onus for risk management is appropriately shared, and that all parties are aware of their level of liability.
To address these gaps, universities are encouraged to develop more explicit policies on misinformation, integrating insights from ongoing research, such as the University of Adelaide’s work on tracking false narratives. Collaboration with national bodies like TEQSA and AAIN could facilitate the development of standardized guidelines. At the same time, investment in advanced detection technologies and staff training, as seen at Murdoch University, could enhance proactive responses. By addressing these gaps, universities can enhance their policies to combat AI-generated misinformation more effectively.
The research has several limitations. First, it is constrained by its primary focus on Australian universities, which may limit the generalizability of findings to other regions or educational contexts with differing AI policy landscapes. Second, the qualitative nature of the study, while rich in detail, relies heavily on publicly available policy documents and peer-reviewed literature, potentially missing unpublished institutional strategies or emerging AI-driven threats not yet documented. Third, the scope is limited to articles published between 2000 and 2025, which may exclude relevant historical perspectives or very recent developments given the rapid evolution of AI technologies. Finally, the lack of quantitative data on the effectiveness of AI-specific interventions hinders precise evaluation of institutional strategies, underscoring the need for future research to incorporate broader contexts, longitudinal data, and quantitative measures to enhance comprehensiveness.

5. Conclusions

As generative AI becomes increasingly prevalent in the Australian public space, both government and industry are embedding AI in their operations. In higher education, universities have also recognized the benefits of AI for marketing, enhancing learning, assistance with assessments, student retention, recruitment and administration, curriculum development, social change, and improving equity [43]. However, these benefits come with significant challenges, including privacy risks, misinformation, plagiarism, a possible decline in critical thinking skills, reduced social integration and equity, and losses of fairness, transparency, and integrity. These risks are multilayered, interconnected, and expose all users irrespective of the industry in which they operate.
Australian universities have implemented diverse strategies to counter AI-driven online harms, including educational approaches such as academic integrity and digital citizenship programs, security measures such as CPTED, and policy development; however, their effectiveness is limited by reactive, inconsistent policies and inadequate AI-specific measures. Partial alignment with national frameworks, such as Australia’s Cyber Security Strategy 2023–2030 and the eSafety Commissioner’s guidelines, shows strengths in digital literacy and multi-stakeholder collaboration but weaknesses in addressing AI-driven misinformation, owing to policy fragmentation and cybersecurity fatigue, and therefore calls for stronger coordination. Significant gaps, such as only 12 of 40 universities addressing data privacy, reliance on debated tools like Turnitin, and limited inter-systemic collaboration, highlight the need for standardized GenAI frameworks, targeted policies, and inclusive oversight so that universities can contribute effectively to national digital safety goals while managing financial costs [44]. Clearly, there is a need to prioritise proactive, evidence-based strategies to navigate the evolving landscape of AI-driven harms.
The findings suggest several practical implications. Institutions should establish a “National GenAI Governance Framework”, co-developed by TEQSA, Universities Australia, and the Department of Education, incorporating mandatory AI risk audits and disinformation reporting protocols. Universities should prioritise standardized GenAI frameworks to proactively address misinformation, integrating AI education into curricula and training programs [20]. Developing AI-specific academic integrity policies and detection tools will strengthen responses to misconduct [18]. Reforming cyberbullying policies and enhancing training to combat cybersecurity fatigue are critical for aligning with national frameworks [17,21].
Addressing these gaps through standardized frameworks, targeted policies, and collaborative partnerships will enhance prevention efforts, ensuring universities contribute effectively to national digital safety and cybercrime prevention goals. Implementing these measures through inclusive oversight committees will foster adaptive policies capable of evolving with AI advancements. Future research should explore the longitudinal impacts of GenAI adoption frameworks to assess their effectiveness in reducing misinformation, and should expand the scope to encompass ransomware, deepfake attacks, and phishing, with multi-year comparative analyses (tracking policy impact over five years or more) across jurisdictions such as the UK, New Zealand, and Canada to yield actionable benchmarks for global best practice. Quantitative studies evaluating behavior-focused cybersecurity training could provide insights into overcoming online harms, and research on inter-systemic collaboration models could identify best practices for integrating legal and educational responses to AI-driven harms. Comparative studies across universities could further clarify the factors contributing to policy inconsistency and inform standardization efforts.

Author Contributions

L.S.F.L. contributed to conceptualization, validation, investigation, writing (original draft preparation), supervision, and project administration. G.T.M. contributed to methodology, formal analysis, investigation, and writing (draft preparation). M.Z. contributed to validation and writing (review and editing). I.M.-O. contributed to validation and writing (review and editing). D.A. contributed to proofreading. D.M.C.A. contributed to resources, writing (review and editing), and supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Acknowledgments

The Gemini Deep Research tool was used to conduct a targeted search using the term “Australian Universities Generative AI Policies.” Data cleaning, data verification through triangulation with the Australasian Academic Integrity Network’s (AAIN) report on institutional responses to GenAI and individual university policy documents, and data analysis were performed exclusively by the authors. Additionally, Grammarly was employed to assist with grammar checks, ensuring adherence to linguistic standards.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

Online harm: any negative experience caused by technology that affects an institution’s or a person’s safety, reputation, or privacy.
Misinformation: according to Wardle and Derakhshan [45], false information that is shared without the intent to cause harm.
Cybercrime prevention: the proactive measures and strategies implemented to mitigate criminal activities targeting or leveraging computer systems and networks.
GenAI: the application of artificial intelligence models that can generate novel content, which paradoxically offers both advanced defense mechanisms against cyber threats and new avenues for malicious exploitation by sophisticated actors.
Digital safety/security: a broader concept encompassing the protection of personal data, privacy, and overall well-being in the digital realm.

Appendix A

Table A1. AI Policies of 40 Australian Universities.
University | Guidelines/Policies—General Stance | Guidelines/Policies—Focus Areas | Reference to Misinformation and Cybercrime in GenAI Policy | Sources
1Australian Catholic University Specific GenAI policy information for ACU is not extensively detailed in the provided research. General academic integrity principles would apply. TEQSA guidelines and AAIN guidelines would likely inform ACU’s approach.Not SpecifiedNo explicit mention of misinformation or cybercrime in the context of generative AI.Mitigating AI misuse in assessments; Research Centre for Digital Data and Assessment in Education research
2Australian National University ANU permits GenAI as a learning tool but emphasizes responsible and ethical use, consistent with academic integrity. The university acknowledges the diversity of applications across disciplines.
  • Student use
  • Staff use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
Data Privacy and Security: Strong emphasis on data privacy. Personal information and unpublished research should not be put into systems that may breach privacy or feed into GenAI data. University-approved tools are recommended for security.Artificial Intelligence including generative AI; Best Practice When Using Generative AI
3Avondale University Encourages ethical and responsible use of GenAI if permitted by lecturers, consistent with academic integrity policies.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Our Policies—Avondale University; GenAI Library Guide
4Bond UniversityGuided by the need for informed, mindful, and critical use of GenAI. Endorses specific licensed tools. Emphasizes academic integrity principles: honesty, trust, fairness, respect, responsibility, courage, and professionalism.
  • Student use
  • Staff use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
  • University-endorsed/provided tools
Data Privacy and Security: Strong warnings against uploading sensitive or copyrighted material to GenAI tools, especially those that use data for training. Licensed library resources generally forbid use as input to AI technologies.Generative Artificial Intelligence
5Central Queensland University Developed a “Generative AI Toolkit” (launched March 2025) for the ICT discipline, promoting responsible AI adoption in education. The toolkit suggests a model for GenAI adoption including guided introduction, ethical use policy, and integrative learning.
  • Student use
  • Staff use
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.The Centre for Machine Learning—Networking and Education Technology
6Charles Darwin UniversityPrioritizes prevention of academic dishonesty through education. Students are expected to act with honesty, trust, fairness, respect, and responsibility.
  • Student use
  • Staff use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Using AI tools at university; NT Academic Centre for Cyber Security and Innovation
7Charles Sturt UniversityCommitted to preparing students to use AI tools effectively and ethically. Principles for student AI use include Integrity, Transparency, Accountability, Fairness, and Respect for Privacy.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
  • University-endorsed/provided tools
No explicit mention of misinformation or cybercrime in the context of generative AI.Generative AI: For Study; Statement of Principles for the use of Artificial Intelligence; Your guide to generative Artificial Intelligence (AI)
8Curtin UniversitySupports teaching students to use GenAI ethically and responsibly for future professional environments. GenAI should be used with caution.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
  • University-endorsed/provided tools
No explicit mention of misinformation or cybercrime in the context of generative AI.Appropriate use of Gen-AI technologies; The Curtin AI in Research Group
9Deakin UniversityWelcomes students to develop skills to use GenAI ethically and responsibly. Emphasizes acting with honesty, trust, fairness, respect, and responsibility.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
  • University-endorsed/provided tools
Data Privacy and Security: Do not submit private/personal information or copyrighted/Deakin IP to AI platforms without prior written consent.Generative Artificial Intelligence (AI); Responsible use of GenAI in Research; GenAI basics
10Edith Cowan UniversityEncourages embracing emerging technologies responsibly. GenAI use must align with ECU’s Ethical Principles: Courage, Integrity, Personal Excellence, Rational Inquiry, and Respect.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
  • University-endorsed/provided tools
Data Privacy and Security: Do not prompt using personal/sensitive data. Follow ECU guidelines for data security and privacy.https://www.ecu.edu.au/schools/science/research/school-centres/centre-for-artificial-intelligence-and-machine-learning-aiml-centre/overview (accessed on 22 November 2025)
11Federation University AustraliaEmphasizes ethical considerations, copyright, transparency, accuracy, bias, reproducibility, privacy, and financial cost of AI tools. Policy examples provided range from “ZERO use” to “ENCOURAGED use” to “SOME use,” suggesting flexibility at course level.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
Data Privacy and Security: Don’t share copyright content of others, personal information, or IP you don’t have rights to share. Prefer data-locked (private) tools like University Co-Pilot. Be aware that some tool terms allow reuse of inputs/outputs.Generative artificial intelligence: Use at University
12Flinders UniversityCommitted to principles of academic integrity (honesty, respect, trust, fairness). Misusing AI tools (e.g., ChatGPT, Gemini, DALL-E) without permission and appropriate acknowledgement/citation is a failure to meet integrity requirements.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Flinders University Statement on the use of AI in research; Using AI tools in research; Good practice guide—Designing assessment for Artificial Intelligence and academic integrity
13Griffith UniversityAcademic integrity means students act with honesty, trust, fairness, respect, responsibility, and courage. Provides a module on “Using generative AI ethically and responsibly”.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Institute for Integrated and Intelligent Systems Topic Archives—Griffith News
14James Cook UniversityUse of AI in learning and assessment must be ethical, transparent, and purposeful, upholding Academic Integrity principles. Students must always check GenAI output for credibility.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Generative AI and Assignments; GenAI Guidelines; Generative Artificial Intelligence
15La Trobe UniversityProvides guides on understanding AI and working with it responsibly. Emphasizes abiding by Academic Integrity policy.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Generative AI in your research;
AI and Machine Learning; Cisco—La Trobe Centre for Artificial Intelligence and Internet of Things
16Macquarie UniversityAcademic Integrity Policy (updated Aug 2, 2023) defines “Unauthorised use of generative artificial intelligence.” Recognizes AI may be used at many stages; use does not automatically constitute misconduct. Acceptable use varies by discipline/course/assessment.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
Data Privacy and Security: Recognises privacy risks with GenAI tools (data recorded, may become public/shared) and IP issues (terms of service vary).Guidance Note: Using Generative Artificial Intelligence in Research
17Monash UniversityAcknowledges GenAI opportunities for enhancing research/innovation. Expects all GenAI use in research to comply with Australian Code for Responsible Conduct of Research and ARC Research Integrity Policy. Has an Artificial Intelligence Operations Policy Suite for responsible AI use. Students have free access to CoPilot; emphasizes safe and responsible use.
  • Student use
  • Staff use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
  • University-endorsed/provided tools
No explicit mention of misinformation or cybercrime in the context of generative AI. https://www.monash.edu/graduate-research/support-and-resources/resources/guidance-on-generative-ai (accessed on 22 November 2025)
18Murdoch UniversityIntegrating AI to positively impact students, equipping them for the future. Staff receive training to support academics and innovative course offerings.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.https://www.murdoch.edu.au/schools/information-technology/research (accessed on 22 November 2025)
19Queensland University of TechnologySpecific GenAI policy information for QUT is not detailed in the provided research. General academic integrity principles and national guidelines (TEQSA, AAIN) would likely inform its approach.
  • Not specified
No explicit mention of misinformation or cybercrime in the context of generative AI.Ethical and evaluative use
20RMIT UniversitySupports critical and ethical engagement with GenAI, in accordance with established principles for responsible conduct of research. Provides “Val,” a private, secure, free AI tool for students.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
Data Privacy and Security: Val (private GenAI Chatbot) ensures user-provided information is not used for training or shared with third parties.Research Integrity and Generative AI; Principles for the use of Generative AI at RMIT; Teaching and Research guides
21Southern Cross UniversitySupports and encourages appropriate GenAI use where it doesn’t pose unacceptable risk to academic integrity/standards. Approach is consistent with AAIN and TEQSA guidelines. Taking a “first principles approach”—GenAI is a tool that can be used constructively.
  • Student use
  • Staff use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.GenAI tools for research
GenAI
22Swinburne University of TechnologyAcademic integrity is key. Students may use GenAI tools under direction of unit teaching staff and with proper acknowledgement.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Academic Integrity Swinburne; Beating the bots
23The University of AdelaideAcademic Integrity Policy promotes and upholds academic integrity. Provides educational resources, support, and guidance.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Artificial Intelligence
24The University of MelbourneSupports staff use with AI/data literacy; does not ban for students but use varies by discipline/assessment. Emphasizes navigating GenAI for policy, practice, and integrity. Has 10 guiding AI principles developed by its Generative AI Taskforce (GAIT).
  • Student use
  • Staff use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
  • University-endorsed/provided tools
Data Privacy and Security: Warns against uploading university content/student data to external tools; provides Spark AI for secure processing.Statement on responsible use of digital assistance tools in research; University of Melbourne AI principles; Graduate researchers and digital assistance tools
25The University of New EnglandEncourages Unit Coordinators to take a balanced approach, considering discipline-appropriate applications, educating students on appropriate use, and assessment design that maintains integrity.
  • Student use
  • Staff use
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Ethical AI use and original thinking; Guidance for the Use of Artificial Intelligence (AI) in Research
26The University of New South WalesHas an AI leadership group and AI ecosystem to guide ethical, responsible, innovative AI use. Approved core principles for ethical/responsible AI use. AI capability framework for teaching staff.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
  • University-endorsed/provided tools
No explicit mention of misinformation or cybercrime in the context of generative AI.Chat GPT and Generative AI at UNSW; Artificial Intelligence at USW: Using AI in assignments
27The University of NewcastleRecognises AI may be used by students at many stages; use is not automatically misconduct. Work submitted must be original. Acceptable use varies by discipline/course/assessment. Misuse may breach Student Conduct Rule.
  • Student use
  • Staff use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Can I use Generative Artificial Intelligence (such as ChatGPT or Copilot) to complete an assignment? / AskUON / The University of Newcastle, Australia
28The University of Notre Dame AustraliaUse of AI tools must adhere to existing policies (e.g., Responsible Use of Data & IT Resources). Students expected to abide by Generative AI Policy for Students. Faculty to communicate clear expectations.
  • Student use
  • Staff use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
Data Privacy and Security: Protect confidential, copyrighted, personal information. Understand AI provider data policies. University reviewing AI tools for use with non-public data; see Approved AI Tools.Policies, procedures and guidelines
29The University of QueenslandStudents may use AI tools responsibly where permitted. Some assessments may restrict/prohibit AI. Staff encouraged to explore AI in line with UQ policies.
  • Student use
  • Staff use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.University of Queensland Library Guide on Artificial Intelligence; A framework for discussing AI-assisted academic research and writing; Artificial Intelligence at UQ
30The University of South AustraliaBalances benefits of AI in research efficiency with ethics, transparency, IP, and critical evaluation. No blanket ban on AI tools; incorporating technology as part of teaching responsible/ethical use.
  • Student use
  • Staff use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.University of South Australia’s perspective on AI
31University of the Sunshine Coast Expects students to act with academic integrity (ethical, honest, responsible approach). Unauthorised use of GenAI or paraphrasing tools can be academic misconduct.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Generative AI and Artificial Intelligence library guide
32The University of SydneyDefines GenAI. Using AI responsibly involves ethical use, understanding limitations, balancing technology with traditional learning. Aims to develop digital literacy.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Artificial intelligence and education at Sydney; Generative AI Guardrails; Guidelines for Researchers
33The University of Western AustraliaUWA GenAI Think Tank (created 2024) offers strategic advice on risks/opportunities for teaching, research, operations. Core AI values: Collaborative responsibility, Data-informed and human-driven agility, Sustainable innovation. Academic integrity requires acknowledging contributions.
  • Student use
  • Staff use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
Data Privacy and Security: GenAI Think Tank advises on data sensitivity and selection of safe GenAI tools, risks of local GenAI platforms, educating staff on accidental data leakage. Users should not upload copyrighted works they don’t own into GenAI tools.Using AI Tools at UWA: A Guide for
Students
34University of CanberraStudent must not use AI tools/services for assessment or preparation unless explicitly permitted in published assessment instructions.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Human Centred Technology Research Cluster
35University of Southern QueenslandStudents must use AI in assessments within clearly defined levels to maintain academic integrity
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Using artificial intelligence (AI) in study
36University of TasmaniaAcademic integrity policy requires ethical, responsible, trustworthy conduct. Where GenAI use is permitted, it must be accurately acknowledged.
  • Student use
  • Staff use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Referencing guide: AI use
37University of Technology SydneySpecific GenAI policy information for UTS is not detailed in the provided research. General academic integrity principles and national guidelines would likely inform its approach.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Ethics of Artificial Intelligence: From Principles to Practice: summary; Generative AI: Ethical Use and Evaluation; Artificial Intelligence Operations Policy
38University of WollongongCommitted to embracing GenAI to enhance learning and develop work-readiness skills. No universal policy: guidance in Subject Outline, varies between subjects.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
  • University-endorsed/provided tools
Data Privacy and Security: Data harvesting is a risk; UOW recommends Copilot for its Enterprise Data Protection.Using Generative AI tools well; Research integrity: Generative artificial intelligence (GenAI)
39Victoria UniversityPotential to use GenAI responsibly in study, but risks must be considered. Student responsibility to be aware of policy/guidelines.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
Data Privacy and Security: Do not provide private/sensitive/confidential information to GenAI tools (can be used for training data).AI in Education for Students
40Western Sydney UniversityImportant to use GenAI tools honestly and responsibly. Inappropriate use has serious consequences. Students and staff co-designing agreements on GenAI use.
  • Student use
  • Acknowledgement, Disclosure and referencing
  • Academic Integrity
No explicit mention of misinformation or cybercrime in the context of generative AI.Integrating generative AI; AI Tools in Academic Writing and Research; Generative AI

Appendix B

Table A2. Qualitative Appraisal Tool.
Appraisal criteria (column headings): C1 = the research questions are clearly defined; C2 = the collected data addresses the research questions; C3 = the qualitative approach is appropriate for answering the research question; C4 = the qualitative data collection methods are adequate to address the research question; C5 = the findings are adequately derived from the data; C6 = the interpretation of results is sufficiently substantiated by the data; C7 = there is coherence between the qualitative data sources, collection, analysis, and interpretation.
Studies | C1 | C2 | C3 | C4 | C5 | C6 | C7 | Total Score Out of 7 | Level of Bias
1. Cubbage and Smith [16] | Yes | Yes | Yes | Yes | Yes | Yes | Yes | 7 | 100%
2. Fudge, Ulpen [19] | No | Yes | Yes | Yes | Yes | No | Yes | 5 | 71.4%
3. Jacqueline [23] | No | Yes | Yes | Yes | Yes | Yes | Yes | 6 | 86.7%
4. Luu, Rathjens [46] | Yes | Yes | Yes | Yes | Yes | No | Yes | 6 | 86.7%
5. Mitchell [47] | Yes | Yes | Yes | No | Yes | Yes | Yes | 6 | 86.7%
6. Pennell, Campbell and Tangen [24] | Yes | Yes | No | Yes | Yes | Yes | No | 5 | 71.4%
7. Pennell, Campbell [25] | Yes | Yes | Yes | Yes | Yes | No | Yes | 6 | 86.7%
8. Reeves, Delfabbro and Calic [17] | Yes | Yes | Yes | Yes | Yes | Yes | Yes | 7 | 100%
9. Samar, Rajan and Aakanksha [20] | Yes | Yes | Yes | No | Yes | Yes | Yes | 6 | 86.7%
10. Sandu, Gide and Elkhodr [48] | Yes | Yes | Yes | Yes | Yes | Yes | Yes | 7 | 100%
11. Sheanoda, Bussey and Jones [26] | No | Yes | Yes | Yes | Yes | No | Yes | 5 | 71.4%
12. Spears, Taddeo and Ey [33] | Yes | Yes | Yes | Yes | Yes | No | Yes | 6 | 86.7%
13. Striepe, Thomson and Sefcik [18] | Yes | Yes | Yes | Yes | Yes | Yes | Yes | 7 | 100%
14. Vaill, Campbell and Whiteford [21] | No | Yes | Yes | Yes | Yes | No | Yes | 5 | 71.4%
15. Whitty [49] | Yes | Yes | Yes | Yes | Yes | No | Yes | 6 | 86.7%
16. Xing, Mu and Henderson [50] | Yes | Yes | Yes | Yes | Yes | No | Yes | 6 | 86.7%
17. Young, Campbell [22] | No | Yes | Yes | Yes | Yes | No | Yes | 5 | 71.4%

References

  1. Noviandy, T.R.; Maulana, A.; Idroes, G.M.; Zahriah, Z.; Paristiowati, M.; Emran, T.B.; Ilyas, M.; Idroes, R. Embrace, Don’t Avoid: Reimagining Higher Education with Generative Artificial Intelligence. J. Educ. Manag. Learn. 2024, 2, 81–90. [Google Scholar] [CrossRef]
  2. Bahroun, Z.; Anane, C.; Ahmed, V.; Zacca, A. Transforming education: A comprehensive review of generative artificial intelligence in educational settings through bibliometric and content analysis. Sustainability 2023, 15, 12983. [Google Scholar] [CrossRef]
  3. Zhang, J.; Goyal, S. AI-driven decision support system innovations to empower higher education administration. J. Comput. Mech. Manag. 2024, 3, 35–41. [Google Scholar] [CrossRef]
  4. Mariam, G.; Adil, L.; Zakaria, B. The integration of artificial intelligence (ai) into education systems and its impact on the governance of higher education institutions. Int. J. Prof. Bus. Rev. 2024, 9, 13. [Google Scholar] [CrossRef]
  5. Loh, P.K.; Lee, A.Z.; Balachandran, V. Towards a hybrid security framework for phishing awareness education and defense. Future Internet 2024, 16, 86. [Google Scholar] [CrossRef]
  6. Balogun, A.Y.; Ismaila Alao, A.; Olaniyi, O.O. Disinformation in the digital era: The role of deepfakes, artificial intelligence, and open-source intelligence in shaping public trust and policy responses. Comput. Sci. IT Res. J. 2025, 6, 28–48. [Google Scholar] [CrossRef]
  7. Singh, P.; Dhiman, D.B. Exploding AI-Generated Deepfakes and Misinformation: A Threat to Global Concern in the 21st Century. Available at SSRN 4651093. 2023. Available online: https://www.qeios.com/read/DPLE2L (accessed on 23 May 2025).
  8. Lin, L.S.; Aslett, D.; Mekonnen, G.; Zecevic, M. The Dangers of Voice Cloning and How to Combat It. 2024. Available online: https://theconversation.com/the-dangers-of-voice-cloning-and-how-to-combat-it-239926 (accessed on 22 May 2025).
  9. Bearman, M.; Ryan, J.; Ajjawi, R. Discourses of artificial intelligence in higher education: A critical literature review. High. Educ. 2023, 86, 369–385. [Google Scholar] [CrossRef]
  10. Bower, M.; Henderson, M.; Slade, C.; Southgate, E.; Gulson, K.; Lodge, J. What generative Artificial Intelligence priorities and challenges do senior Australian educational policy makers identify (and why)? Aust. Educ. Res. 2025, 52, 2069–2094. [Google Scholar] [CrossRef]
  11. Lodge, J.M. The evolving risk to academic integrity posed by generative artificial intelligence: Options for immediate action. Tert. Educ. Qual. Stand. Agency 2024, 8. Available online: https://www.teqsa.gov.au/sites/default/files/2024-08/evolving-risk-to-academic-integrity-posed-by-generative-artificial-intelligence.pdf (accessed on 22 May 2025).
  12. Selçuk, A.A. A guide for systematic reviews: PRISMA. Turk. Arch. Otorhinolaryngol. 2019, 57, 57. [Google Scholar] [CrossRef]
  13. Yazdanifard, R.; Oyegoke, T.; Seyedi, A.P. Cyber-crimes: Challenges of the millennium age. In Advances in Electrical Engineering and Electrical Machines; Springer: Berlin/Heidelberg, Germany, 2012; pp. 527–534. [Google Scholar]
  14. Tan, S. Regulating online harms: Are current efforts working–or even workable? RSIS Comment. 2023, 170-23. Available online: https://dr.ntu.edu.sg/entities/publication/068523a4-583b-4df8-8cd3-95166a9723a9 (accessed on 22 May 2025).
  15. Hong, Q.N.; Pluye, P.; Fàbregues, S.; Bartlett, G.; Boardman, F.; Cargo, M.; Dagenais, P.; Gagnon, M.-P.; Griffiths, F.; Nicolau, B. Mixed methods appraisal tool (MMAT), version 2018. Regist. Copyr. 2018, 1148552, 1–7. [Google Scholar]
  16. Cubbage, C.J.; Smith, C.L. The function of security in reducing women’s fear of crime in open public spaces: A case study of serial sex attacks at a Western Australian university. Secur. J. 2009, 22, 73–86. [Google Scholar] [CrossRef]
  17. Reeves, A.; Delfabbro, P.; Calic, D. Encouraging Employee Engagement With Cybersecurity: How to Tackle Cyber Fatigue. SAGE Open 2021, 11, 21582440211000049. [Google Scholar] [CrossRef]
  18. Striepe, M.; Thomson, S.; Sefcik, L. Understanding Academic Integrity Education: Case Studies from Two Australian Universities. J. Acad. Ethics 2023, 21, 1–17. [Google Scholar] [CrossRef]
  19. Fudge, A.; Ulpen, T.; Bilic, S.; Picard, M.; Carter, C. Does an educative approach work? A reflective case study of how two Australian higher education Enabling programs support students and staff uphold a responsible culture of academic integrity. Int. J. Educ. Integr. 2022, 18, 5. [Google Scholar] [CrossRef]
  20. Samar, S.; Rajan, K.; Aakanksha, S. Framework for Adoption of Generative Artificial Intelligence (GenAI) in Education. IEEE Trans. Educ. 2024, 67, 777–785. [Google Scholar] [CrossRef]
  21. Vaill, Z.; Campbell, M.; Whiteford, C. Analysing the quality of Australian universities’ student anti-bullying policies. High. Educ. Res. Dev. 2020, 39, 1262–1275. [Google Scholar] [CrossRef]
  22. Young, H.; Campbell, M.; Spears, B.; Butler, D.; Cross, D.; Slee, P. Cyberbullying and the role of the law in Australian schools: Views of senior officials. Aust. J. Educ. 2016, 60, 86–101. [Google Scholar] [CrossRef]
  23. Jacqueline, M.D. A study of cybercrime victimisation and prevention: Exploring the use of online crime prevention behaviours and strategies. J. Criminol. Res. Policy Pract. 2020, 6, 17–33. [Google Scholar]
  24. Pennell, D.; Campbell, M.; Tangen, D. The education and the legal system: Inter-systemic collaborations identified by Australian schools to more effectively reduce cyberbullying. Prev. Sch. Fail. 2022, 66, 175–185. [Google Scholar] [CrossRef]
  25. Pennell, D.; Campbell, M.; Tangen, D.; Knott, A. Should Australia have a law against cyberbullying? Problematising the murky legal environment of cyberbullying from perspectives within schools. Aust. Educ. Res. 2022, 49, 827–844. [Google Scholar] [CrossRef]
  26. Sheanoda, V.; Bussey, K.; Jones, T. Sexuality, gender and culturally diverse interpretations of cyberbullying. New Media Soc. 2024, 26, 154–171. [Google Scholar] [CrossRef]
  27. Jayshri, N. Comprehensive Review of Digital Harassment Prevention and Intervention Strategies: Bystanders, Automated Content Moderation, Legal Frameworks, AI, Education, Reporting, and Blocking. Int. J. Multidiscip. Res. 2025, 7. [Google Scholar] [CrossRef]
  28. Australia’s Cyber Security Strategy. Australia’s Cyber Security Strategy 2020 at a Glance; Commonwealth of Australia: Barton, Australia, 2020. [Google Scholar]
  29. Bell, M.; Keles, S.; Furenes Klippen, M.I.; Caravita, S.C.S.; Fandrem, H. Cooperation within the school community to overcome cyberbullying: A systematic scoping review. Scand. J. Educ. Res. 2025, 1–16. [Google Scholar] [CrossRef]
  30. Ballantine, J.; Boyce, G.; Stoner, G. A critical review of AI in accounting education: Threat and opportunity. Crit. Perspect. Account. 2024, 99, 102711. [Google Scholar] [CrossRef]
  31. Smiderle, R.; Rigo, S.J.; Marques, L.B.; Peçanha de Miranda Coelho, J.A.; Jaques, P.A. The impact of gamification on students’ learning, engagement and behavior based on their personality traits. Smart Learn. Environ. 2020, 7, 3. [Google Scholar] [CrossRef]
  32. Bassanelli, S.; Vasta, N.; Bucchiarone, A.; Marconi, A. Gamification for behavior change: A scientometric review. Acta Psychol. 2022, 228, 103657. [Google Scholar] [CrossRef]
  33. Spears, B.A.; Taddeo, C.; Ey, L.A. Using participatory design to inform cyber/bullying prevention and intervention practices: Evidence-Informed insights and strategies. J. Psychol. Couns. Sch. 2021, 31, 159–171. [Google Scholar] [CrossRef]
  34. Johnston, N. The impact and management of mis/disinformation at university libraries in Australia. J. Aust. Libr. Inf. Assoc. 2023, 72, 251–269. [Google Scholar] [CrossRef]
  35. Salem, L.; Fiore, S.; Kelly, S.; Brock, B. Evaluating the Effectiveness of Turnitin’s AI Writing Indicator Model; Temple University: Philadelphia, PA, USA, 2021. [Google Scholar]
  36. Fowler, S.; Korolkiewicz, M.; Marrone, R. First 100 days of ChatGPT at Australian universities: An analysis of policy landscape and media discussions about the role of AI in higher education. Learn. Lett. 2023, 1, 1. [Google Scholar] [CrossRef]
  37. Bontridder, N.; Poullet, Y. The role of artificial intelligence in disinformation. Data Policy 2021, 3, e32. [Google Scholar] [CrossRef]
  38. Stracke, C.M.; Griffiths, D.; Pappa, D.; Bećirović, S.; Polz, E.; Perla, L.; Di Grassi, A.; Massaro, S.; Skenduli, M.P.; Burgos, D. Analysis of Artificial Intelligence Policies for Higher Education in Europe. Int. J. Interact. Multimed. Artif. Intell. 2025, 9, 124–137. [Google Scholar] [CrossRef]
  39. Williamson, S.M.; Prybutok, V. The era of artificial intelligence deception: Unraveling the complexities of false realities and emerging threats of misinformation. Information 2024, 15, 299. [Google Scholar] [CrossRef]
  40. Khairullah, S.A.; Harris, S.; Hadi, H.J.; Sandhu, R.A.; Ahmad, N.; Alshara, M.A. Implementing artificial intelligence in academic and administrative processes through responsible strategic leadership in the higher education institutions. Front. Educ. 2025, 10, 1548104. [Google Scholar] [CrossRef]
  41. Chan, C.K.Y. A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. High. Educ. 2023, 20, 38. [Google Scholar] [CrossRef]
  42. Braun, T. Liability for artificial intelligence reasoning technologies–a cognitive autonomy that does not help. Corp. Gov. Int. J. Bus. Soc. 2025. [Google Scholar] [CrossRef]
  43. Khan, R.H.; Balapumi, R. Artificial Intelligence (AI) as Strategy to Gain Competitive Advantage for Australian Higher Education Institutions (HEI) Under the New Post COVID-19 Scenario. In Artificial Intelligence-Enabled Businesses: How to Develop Strategies for Innovation; Scrivener Publishing LLC: Beverly, MA, USA, 2025; pp. 439–449. [Google Scholar]
  44. Lin, L.S.; Aslett, D.; Mekonnen, G.; Zecevic, M. The UN Cybercrime Convention: What It Means for Policing and Community Safety in Australia. 2024. Available online: https://www.internationalaffairs.org.au/australianoutlook/the-un-cybercrime-convention-what-it-means-for-policing-and-community-safety-in-australia/ (accessed on 22 November 2025).
  45. Wardle, C.; Derakhshan, H. Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking; Council of Europe Strasbourg: Strasbourg, France, 2017; Volume 27. [Google Scholar]
  46. Luu, X.; Rathjens, C.; Swadling, M.; Gresham, B.; Hockman, L.; Scott-Young, C.; Leifels, K.; Zadow, A.J.; Dollard, M.F.; Kent, L. How university climate impacts psychosocial safety, psychosocial risk, and mental health among staff in Australian higher education: A qualitative study. High. Educ. 2024. [Google Scholar] [CrossRef]
  47. Mitchell, M. The discursive production of public inquiries: The case of Australia’s Royal Commission into Institutional Responses to Child Sexual Abuse. Crime Media Cult. 2021, 17, 353–374. [Google Scholar] [CrossRef]
  48. Sandu, R.; Gide, E.; Elkhodr, M. The role and impact of ChatGPT in educational practices: Insights from an Australian higher education case study. Discov. Educ. 2024, 3, 71. [Google Scholar] [CrossRef]
  49. Whitty, M.T. Drug mule for love. J. Financ. Crime 2023, 30, 795–812. [Google Scholar] [CrossRef]
  50. Xing, C.; Mu, G.M.; Henderson, D. Submission or subversion: Survival and resilience of Chinese international research students in neoliberalised Australian universities. High. Educ. 2022, 84, 435–450. [Google Scholar] [CrossRef] [PubMed]
Figure 1. PRISMA Flow Diagram.
Table 1. Inclusion and Exclusion Criteria.
Criteria | Inclusion Criteria | Exclusion Criteria
Databases | Articles indexed in SCOPUS, IEEE Xplore, Web of Science, and Google Scholar. | Articles not indexed in the specified databases (SCOPUS, IEEE Xplore, Web of Science, Google Scholar).
Keywords | Articles that include keywords such as “online harm,” “digital harm,” “cyber harm,” “online safety,” “cyberbullying,” “online harassment,” “Australian higher education,” “Australian universities,” “institutional responses,” “institutional strategies,” “university interventions,” “AI-generated misinformation,” “artificial intelligence misinformation,” “deepfake,” “synthetic media,” “AI-driven disinformation,” “generative AI,” “cybercrime prevention,” “cybersecurity,” “online crime prevention,” “digital security,” “qualitative analysis,” “qualitative research,” “thematic analysis,” and “case studies.” | Articles that do not include the specified keywords or focus on unrelated topics.
Language | Articles published in English. | Articles not published in English.
Location | Studies conducted in Australia. | Studies conducted outside Australia.
Publication Date | Articles published between 2000 and 2025. | Articles published before 2000.
Relevance | Articles that focus on institutional strategies to counter AI-generated misinformation and cybercrime risks. | Articles that do not address institutional strategies to counter AI-generated misinformation and cybercrime risks, or that do not provide empirical data or theoretical insights relevant to the study.
Type of Publication | Peer-reviewed journal articles. | Non-peer-reviewed articles, opinion pieces, and editorials.
(made by authors).
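To make the screening logic concrete, the following minimal sketch (an illustrative rendering of the criteria in Table 1, not the actual screening code used in this study, and with only a representative subset of the search terms) shows how the inclusion and exclusion rules could be applied programmatically to candidate records during title and abstract screening:

```python
from dataclasses import dataclass

ALLOWED_DATABASES = {"SCOPUS", "IEEE Xplore", "Web of Science", "Google Scholar"}
# Representative subset of the Table 1 search terms, used here only for illustration.
KEYWORDS = {"online harm", "cyberbullying", "australian higher education",
            "ai-generated misinformation", "generative ai", "cybercrime prevention"}

@dataclass
class Record:
    title: str
    abstract: str
    database: str
    language: str
    country: str
    year: int
    peer_reviewed: bool

def include(record: Record) -> bool:
    """Apply the Table 1 inclusion/exclusion criteria to a candidate record."""
    text = f"{record.title} {record.abstract}".lower()
    return (
        record.database in ALLOWED_DATABASES      # indexed in a specified database
        and record.language == "English"          # published in English
        and record.country == "Australia"         # study conducted in Australia
        and 2000 <= record.year <= 2025           # publication date window
        and record.peer_reviewed                  # peer-reviewed journal article
        and any(keyword in text for keyword in KEYWORDS)  # matches search terms
    )

# Usage: keep only records meeting every criterion; topical relevance is then
# assessed manually, as in the review protocol.
# screened = [r for r in candidate_records if include(r)]
```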
Table 2. Identified Themes.
Main Theme | Description | References
Educational Strategies | Institutional strategies to counter AI-generated misinformation and cybercrime risks, including AI literacy programs, academic integrity training, and digital citizenship education. | [16,17,18,19,20,21,22]
Alignment with National Frameworks | Alignment of university policies and practices with national frameworks for digital safety and cybercrime prevention, such as Australia’s Cyber Security Strategy and eSafety Commissioner guidelines. | [16,17,18,19,21,23,24]
Policy Gaps and Development | Gaps in institutional responses, such as limited GenAI adoption and weak cyberbullying policies, and their implications for developing future cybercrime prevention strategies. | [17,18,20,21,22,23,25,26]
(made by authors).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
