Reducing AI-Generated Misinformation in Australian Higher Education: A Qualitative Analysis of Institutional Responses and Implications for Cybercrime Prevention
Abstract
1. Introduction
- What institutional strategies have Australian universities implemented to counter AI-generated misinformation and associated cybercrime risks?
- To what extent do current policies and practices align with national frameworks for digital safety and cybercrime prevention?
- What gaps exist in institutional responses to AI-driven online harms, and how can they inform future prevention?
1.1. Background: Australian Universities’ Current Generative AI Policies
1.1.1. Australian National and Sector-Specific Guidelines and Frameworks
1.1.2. Australian Universities’ GenAI Policies: General Stance and Focus Areas
2. Method
2.1. Systematic Literature Review: Identifying Key Themes
2.1.1. Literature Search Strategy and Source
2.1.2. Study Selection and Eligibility Criteria
2.1.3. Quality Assessment, Assessment of Risk of Bias, and Data Extraction
Mixed Methods Quality Assurance
Qualitative Methods Quality Assurance
Data Extraction
Search Results
3. Findings
3.1. Findings from SLR
Educational Strategies
3.2. Alignment with National Frameworks
3.3. Policy Gaps and Development
3.4. Findings from Policy Content Analysis
3.4.1. Educational Strategies
3.4.2. Alignment with National Frameworks
3.4.3. Policy Gaps and Development
4. Discussion
- training educators to teach students how to critique AI-generated outputs;
- mandating that institutional deployers of AI systems in educational settings run regular bias audits and testing;
- prohibiting the use of GenAI to create deceptive or malicious content in education settings;
- completing risk assessments, for example, identifying and seeking to eliminate bias and discrimination arising from the data a model is trained on, the design of the model, and its intended uses;
- mandating that independent researchers be allowed ‘under-the-hood’ access to algorithmic information.
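The bias-audit recommendation above can be made concrete with a minimal sketch. The example below checks one standard fairness metric, demographic parity (the gap in positive-outcome rates between two groups); the data, threshold, and function names are illustrative assumptions, not drawn from any institutional audit described in this study.

```python
# Minimal sketch of one check a regular bias audit might include: comparing
# the rate of positive model outcomes across two demographic groups
# (demographic parity). Data and threshold below are hypothetical.

def positive_rate(outcomes):
    """Share of positive (1) outcomes in a list of 0/1 predictions."""
    return sum(outcomes) / len(outcomes)

def parity_gap(group_a, group_b):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical model decisions for two student groups (1 = flagged as misuse).
group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # 3/8 flagged
group_b = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 flagged

gap = parity_gap(group_a, group_b)   # 0.375
print(f"Demographic parity gap: {gap:.3f}")
if gap > 0.1:                        # illustrative audit threshold
    print("Audit flag: review model for group-level disparity.")
```

An institutional audit would of course apply such checks to real deployment data and alongside other metrics; the point is only that "regular bias audits" can be operationalized as simple, repeatable measurements.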
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Nomenclature
| Term | Definition |
|---|---|
| Online harm | Any negative experience caused by technology that affects an institution’s or a person’s safety, reputation, or privacy. |
| Misinformation | False information that is shared without the intent to cause harm (Wardle and Derakhshan [45]). |
| Cybercrime prevention | The proactive measures and strategies implemented to mitigate criminal activities targeting or leveraging computer systems and networks. |
| GenAI | The application of artificial intelligence models that can generate novel content, offering both new defense mechanisms against cyber threats and new avenues for malicious exploitation. |
| Digital safety/security | A broader concept encompassing the protection of personal data, privacy, and overall well-being in the digital realm. |
Appendix A
| No. | University | Guidelines/Policies—General Stance | Guidelines/Policies—Focus Areas | Reference to Misinformation and Cybercrime in GenAI Policy | Sources |
|---|---|---|---|---|---|
| 1 | Australian Catholic University | Specific GenAI policy information for ACU is not extensively detailed in the provided research. General academic integrity principles would apply. TEQSA guidelines and AAIN guidelines would likely inform ACU’s approach. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Mitigating AI misuse in assessments; Research Centre for Digital Data and Assessment in Education research |
| 2 | Australian National University | ANU permits GenAI as a learning tool but emphasizes responsible and ethical use, consistent with academic integrity. The university acknowledges the diversity of applications across disciplines. | Data Privacy and Security: Strong emphasis on data privacy. Personal information and unpublished research should not be put into systems that may breach privacy or feed into GenAI data. University-approved tools are recommended for security. | | Artificial Intelligence including generative AI; Best Practice When Using Generative AI |
| 3 | Avondale University | Encourages ethical and responsible use of GenAI if permitted by lecturers, consistent with academic integrity policies. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Our Policies—Avondale University; GenAI Library Guide |
| 4 | Bond University | Guided by the need for informed, mindful, and critical use of GenAI. Endorses specific licensed tools. Emphasizes academic integrity principles: honesty, trust, fairness, respect, responsibility, courage, and professionalism. | Data Privacy and Security: Strong warnings against uploading sensitive or copyrighted material to GenAI tools, especially those that use data for training. Licensed library resources generally forbid use as input to AI technologies. | | Generative Artificial Intelligence |
| 5 | Central Queensland University | Developed a “Generative AI Toolkit” (launched March 2025) for the ICT discipline, promoting responsible AI adoption in education. The toolkit suggests a model for GenAI adoption including guided introduction, ethical use policy, and integrative learning. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | The Centre for Machine Learning—Networking and Education Technology |
| 6 | Charles Darwin University | Prioritizes prevention of academic dishonesty through education. Students are expected to act with honesty, trust, fairness, respect, and responsibility. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Using AI tools at university; NT Academic Centre for Cyber Security and Innovation |
| 7 | Charles Sturt University | Committed to preparing students to use AI tools effectively and ethically. Principles for student AI use include Integrity, Transparency, Accountability, Fairness, and Respect for Privacy. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Generative AI: For Study; Statement of Principles for the use of Artificial Intelligence; Your guide to generative Artificial Intelligence (AI) |
| 8 | Curtin University | Supports teaching students to use GenAI ethically and responsibly for future professional environments. GenAI should be used with caution. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Appropriate use of Gen-AI technologies; The Curtin AI in Research Group |
| 9 | Deakin University | Welcomes students to develop skills to use GenAI ethically and responsibly. Emphasizes acting with honesty, trust, fairness, respect, and responsibility. | Data Privacy and Security: Do not submit private/personal information or copyrighted/Deakin IP to AI platforms without prior written consent. | | Generative Artificial Intelligence (AI); Responsible use of GenAI in Research; GenAI basics |
| 10 | Edith Cowan University | Encourages embracing emerging technologies responsibly. GenAI use must align with ECU’s Ethical Principles: Courage, Integrity, Personal Excellence, Rational Inquiry, and Respect. | Data Privacy and Security: Do not prompt using personal/sensitive data. Follow ECU guidelines for data security and privacy. | | https://www.ecu.edu.au/schools/science/research/school-centres/centre-for-artificial-intelligence-and-machine-learning-aiml-centre/overview (accessed on 22 November 2025) |
| 11 | Federation University Australia | Emphasizes ethical considerations, copyright, transparency, accuracy, bias, reproducibility, privacy, and financial cost of AI tools. Policy examples provided range from “ZERO use” to “ENCOURAGED use” to “SOME use,” suggesting flexibility at course level. | Data Privacy and Security: Don’t share copyright content of others, personal information, or IP you don’t have rights to share. Prefer data-locked (private) tools like University Co-Pilot. Be aware that some tool terms allow reuse of inputs/outputs. | | Generative artificial intelligence: Use at University |
| 12 | Flinders University | Committed to principles of academic integrity (honesty, respect, trust, fairness). Misusing AI tools (e.g., ChatGPT, Gemini, DALL-E) without permission and appropriate acknowledgement/citation is a failure to meet integrity requirements. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Flinders University Statement on the use of AI in research; Using AI tools in research; Good practice guide—Designing assessment for Artificial Intelligence and academic integrity |
| 13 | Griffith University | Academic integrity means students act with honesty, trust, fairness, respect, responsibility, and courage. Provides a module on “Using generative AI ethically and responsibly”. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Institute for Integrated and Intelligent Systems Topic Archives—Griffith News |
| 14 | James Cook University | Use of AI in learning and assessment must be ethical, transparent, and purposeful, upholding Academic Integrity principles. Students must always check GenAI output for credibility. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Generative AI and Assignments; GenAI Guidelines; Generative Artificial Intelligence |
| 15 | La Trobe University | Provides guides on understanding AI and working with it responsibly. Emphasizes abiding by Academic Integrity policy. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Generative AI in your research; AI and Machine Learning; Cisco—La Trobe Centre for Artificial Intelligence and Internet of Things |
| 16 | Macquarie University | Academic Integrity Policy (updated Aug 2, 2023) defines “Unauthorised use of generative artificial intelligence.” Recognizes AI may be used at many stages; use does not automatically constitute misconduct. Acceptable use varies by discipline/course/assessment. | Data Privacy and Security: Recognises privacy risks with GenAI tools (data recorded, may become public/shared) and IP issues (terms of service vary). | | Guidance Note: Using Generative Artificial Intelligence in Research |
| 17 | Monash University | Acknowledges GenAI opportunities for enhancing research/innovation. Expects all GenAI use in research to comply with Australian Code for Responsible Conduct of Research and ARC Research Integrity Policy. Has an Artificial Intelligence Operations Policy Suite for responsible AI use. Students have free access to CoPilot; emphasizes safe and responsible use. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | https://www.monash.edu/graduate-research/support-and-resources/resources/guidance-on-generative-ai (accessed on 22 November 2025) |
| 18 | Murdoch University | Integrating AI to positively impact students, equipping them for the future. Staff receive training to support academics and innovative course offerings. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | https://www.murdoch.edu.au/schools/information-technology/research (accessed on 22 November 2025) |
| 19 | Queensland University of Technology | Specific GenAI policy information for QUT is not detailed in the provided research. General academic integrity principles and national guidelines (TEQSA, AAIN) would likely inform its approach. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Ethical and evaluative use |
| 20 | RMIT University | Supports critical and ethical engagement with GenAI, in accordance with established principles for responsible conduct of research. Provides “Val,” a private, secure, free AI tool for students. | Data Privacy and Security: Val (private GenAI Chatbot) ensures user-provided information is not used for training or shared with third parties. | | Research Integrity and Generative AI; Principles for the use of Generative AI at RMIT; Teaching and Research guides |
| 21 | Southern Cross University | Supports and encourages appropriate GenAI use where it doesn’t pose unacceptable risk to academic integrity/standards. Approach is consistent with AAIN and TEQSA guidelines. Taking a “first principles approach”—GenAI is a tool that can be used constructively. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | GenAI tools for research GenAI |
| 22 | Swinburne University of Technology | Academic integrity is key. Students may use GenAI tools under direction of unit teaching staff and with proper acknowledgement. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Academic Integrity Swinburne; Beating the bots |
| 23 | The University of Adelaide | Academic Integrity Policy promotes and upholds academic integrity. Provides educational resources, support, and guidance. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Artificial Intelligence |
| 24 | The University of Melbourne | Supports staff use with AI/data literacy; does not ban for students but use varies by discipline/assessment. Emphasizes navigating GenAI for policy, practice, and integrity. Has 10 guiding AI principles developed by its Generative AI Taskforce (GAIT). | Data Privacy and Security: Warns against uploading university content/student data to external tools; provides Spark AI for secure processing. | | Statement on responsible use of digital assistance tools in research; University of Melbourne AI principles; Graduate researchers and digital assistance tools |
| 25 | The University of New England | Encourages Unit Coordinators to take a balanced approach, considering discipline-appropriate applications, educating students on appropriate use, and assessment design that maintains integrity. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Ethical AI use and original thinking; Guidance for the Use of Artificial Intelligence (AI) in Research |
| 26 | The University of New South Wales | Has an AI leadership group and AI ecosystem to guide ethical, responsible, innovative AI use. Approved core principles for ethical/responsible AI use. AI capability framework for teaching staff. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Chat GPT and Generative AI at UNSW; Artificial Intelligence at UNSW: Using AI in assignments |
| 27 | The University of Newcastle | Recognises AI may be used by students at many stages; use is not automatically misconduct. Work submitted must be original. Acceptable use varies by discipline/course/assessment. Misuse may breach Student Conduct Rule. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Can I use Generative Artificial Intelligence (such as ChatGPT or Copilot) to complete an assignment? / AskUON / The University of Newcastle, Australia |
| 28 | The University of Notre Dame Australia | Use of AI tools must adhere to existing policies (e.g., Responsible Use of Data & IT Resources). Students expected to abide by Generative AI Policy for Students. Faculty to communicate clear expectations. | Data Privacy and Security: Protect confidential, copyrighted, personal information. Understand AI provider data policies. University reviewing AI tools for use with non-public data; see Approved AI Tools. | | Policies, procedures and guidelines |
| 29 | The University of Queensland | Students may use AI tools responsibly where permitted. Some assessments may restrict/prohibit AI. Staff encouraged to explore AI in line with UQ policies. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | University of Queensland Library Guide on Artificial Intelligence; A framework for discussing AI-assisted academic research and writing; Artificial Intelligence at UQ |
| 30 | The University of South Australia | Balances benefits of AI in research efficiency with ethics, transparency, IP, and critical evaluation. No blanket ban on AI tools; incorporating technology as part of teaching responsible/ethical use. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | University of South Australia’s perspective on AI |
| 31 | University of the Sunshine Coast | Expects students to act with academic integrity (ethical, honest, responsible approach). Unauthorised use of GenAI or paraphrasing tools can be academic misconduct. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Generative AI and Artificial Intelligence library guide |
| 32 | The University of Sydney | Defines GenAI. Using AI responsibly involves ethical use, understanding limitations, balancing technology with traditional learning. Aims to develop digital literacy. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Artificial intelligence and education at Sydney; Generative AI Guardrails; Guidelines for Researchers |
| 33 | The University of Western Australia | UWA GenAI Think Tank (created 2024) offers strategic advice on risks/opportunities for teaching, research, operations. Core AI values: Collaborative responsibility, Data-informed and human-driven agility, Sustainable innovation. Academic integrity requires acknowledging contributions. | Data Privacy and Security: GenAI Think Tank advises on data sensitivity and selection of safe GenAI tools, risks of local GenAI platforms, educating staff on accidental data leakage. Users should not upload copyrighted works they don’t own into GenAI tools. | | Using AI Tools at UWA: A Guide for Students |
| 34 | University of Canberra | Students must not use AI tools/services for assessment or preparation unless explicitly permitted in published assessment instructions. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Human Centred Technology Research Cluster |
| 35 | University of Southern Queensland | Students must use AI in assessments within clearly defined levels to maintain academic integrity. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Using artificial intelligence (AI) in study |
| 36 | University of Tasmania | Academic integrity policy requires ethical, responsible, trustworthy conduct. Where GenAI use is permitted, it must be accurately acknowledged. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Referencing guide: AI use |
| 37 | University of Technology Sydney | Specific GenAI policy information for UTS is not detailed in the provided research. General academic integrity principles and national guidelines would likely inform its approach. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Ethics of Artificial Intelligence: From Principles to Practice: summary; Generative AI: Ethical Use and Evaluation; Artificial Intelligence Operations Policy |
| 38 | University of Wollongong | Committed to embracing GenAI to enhance learning and develop work-readiness skills. No universal policy: guidance in Subject Outline, varies between subjects. | Data Privacy and Security: Data harvesting is a risk; UOW recommends Copilot for its Enterprise Data Protection. | | Using Generative AI tools well; Research integrity: Generative artificial intelligence (GenAI) |
| 39 | Victoria University | Potential to use GenAI responsibly in study, but risks must be considered. Student responsibility to be aware of policy/guidelines. | Data Privacy and Security: Do not provide private/sensitive/confidential information to GenAI tools (can be used for training data). | | AI in Education for Students |
| 40 | Western Sydney University | Important to use GenAI tools honestly and responsibly. Inappropriate use has serious consequences. Students and staff co-designing agreements on GenAI use. | Not Specified | No explicit mention of misinformation or cybercrime in the context of generative AI. | Integrating generative AI; AI Tools in Academic Writing and Research; Generative AI |
Appendix B
| Studies | The Research Questions Are Clearly Defined | The Collected Data Addresses the Research Questions | The Qualitative Approach Is Appropriate for Answering the Research Question | The Qualitative Data Collection Methods Are Adequate to Address the Research Question | The Findings Are Adequately Derived from the Data | The Interpretation of Results Is Sufficiently Substantiated by the Data | There Is Coherence Between the Qualitative Data Sources, Collection, Analysis, and Interpretation | Total Score Out of 7 | Level of Bias |
|---|---|---|---|---|---|---|---|---|---|
| | Yes | Yes | Yes | Yes | Yes | Yes | Yes | 7 | 100% |
| | No | Yes | Yes | Yes | Yes | No | Yes | 5 | 71.4% |
| | No | Yes | Yes | Yes | Yes | Yes | Yes | 6 | 85.7% |
| | Yes | Yes | Yes | Yes | Yes | No | Yes | 6 | 85.7% |
| | Yes | Yes | Yes | No | Yes | Yes | Yes | 6 | 85.7% |
| | Yes | Yes | No | Yes | Yes | Yes | No | 5 | 71.4% |
| | Yes | Yes | Yes | Yes | Yes | No | Yes | 6 | 85.7% |
| | Yes | Yes | Yes | Yes | Yes | Yes | Yes | 7 | 100% |
| | Yes | Yes | Yes | No | Yes | Yes | Yes | 6 | 85.7% |
| | Yes | Yes | Yes | Yes | Yes | Yes | Yes | 7 | 100% |
| | No | Yes | Yes | Yes | Yes | No | Yes | 5 | 71.4% |
| | Yes | Yes | Yes | Yes | Yes | No | Yes | 6 | 85.7% |
| | Yes | Yes | Yes | Yes | Yes | Yes | Yes | 7 | 100% |
| | No | Yes | Yes | Yes | Yes | No | Yes | 5 | 71.4% |
| | Yes | Yes | Yes | Yes | Yes | No | Yes | 6 | 85.7% |
| | Yes | Yes | Yes | Yes | Yes | No | Yes | 6 | 85.7% |
| | No | Yes | Yes | Yes | Yes | No | Yes | 5 | 71.4% |
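The percentage column in the table above is simply each study's criteria score divided by seven (so 7/7 = 100%, 6/7 = 85.7%, and 5/7 = 71.4%). A minimal sketch of the conversion (the function name is mine, not part of the MMAT):

```python
# Convert MMAT-style Yes/No criterion ratings into a score out of 7 and a
# percentage, matching the quality-assessment table above.

def mmat_score(ratings):
    """ratings: list of seven 'Yes'/'No' strings; returns (score, percent)."""
    score = sum(1 for r in ratings if r == "Yes")
    return score, round(100 * score / len(ratings), 1)

print(mmat_score(["Yes"] * 7))                    # (7, 100.0)
print(mmat_score(["Yes"] * 6 + ["No"]))           # (6, 85.7)
print(mmat_score(["No"] + ["Yes"] * 5 + ["No"]))  # (5, 71.4)
```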
References
- Noviandy, T.R.; Maulana, A.; Idroes, G.M.; Zahriah, Z.; Paristiowati, M.; Emran, T.B.; Ilyas, M.; Idroes, R. Embrace, Don’t Avoid: Reimagining Higher Education with Generative Artificial Intelligence. J. Educ. Manag. Learn. 2024, 2, 81–90. [Google Scholar] [CrossRef]
- Bahroun, Z.; Anane, C.; Ahmed, V.; Zacca, A. Transforming education: A comprehensive review of generative artificial intelligence in educational settings through bibliometric and content analysis. Sustainability 2023, 15, 12983. [Google Scholar] [CrossRef]
- Zhang, J.; Goyal, S. AI-driven decision support system innovations to empower higher education administration. J. Comput. Mech. Manag. 2024, 3, 35–41. [Google Scholar] [CrossRef]
- Mariam, G.; Adil, L.; Zakaria, B. The integration of artificial intelligence (ai) into education systems and its impact on the governance of higher education institutions. Int. J. Prof. Bus. Rev. 2024, 9, 13. [Google Scholar] [CrossRef]
- Loh, P.K.; Lee, A.Z.; Balachandran, V. Towards a hybrid security framework for phishing awareness education and defense. Future Internet 2024, 16, 86. [Google Scholar] [CrossRef]
- Balogun, A.Y.; Ismaila Alao, A.; Olaniyi, O.O. Disinformation in the digital era: The role of deepfakes, artificial intelligence, and open-source intelligence in shaping public trust and policy responses. Comput. Sci. IT Res. J. 2025, 6, 28–48. [Google Scholar] [CrossRef]
- Singh, P.; Dhiman, D.B. Exploding AI-Generated Deepfakes and Misinformation: A Threat to Global Concern in the 21st Century. Available at SSRN 4651093. 2023. Available online: https://www.qeios.com/read/DPLE2L (accessed on 23 May 2025).
- Lin, L.S.; Aslett, D.; Mekonnen, G.; Zecevic, M. The Dangers of Voice Cloning and How to Combat It. 2024. Available online: https://theconversation.com/the-dangers-of-voice-cloning-and-how-to-combat-it-239926 (accessed on 22 May 2025).
- Bearman, M.; Ryan, J.; Ajjawi, R. Discourses of artificial intelligence in higher education: A critical literature review. High. Educ. 2023, 86, 369–385. [Google Scholar] [CrossRef]
- Bower, M.; Henderson, M.; Slade, C.; Southgate, E.; Gulson, K.; Lodge, J. What generative Artificial Intelligence priorities and challenges do senior Australian educational policy makers identify (and why)? Aust. Educ. Res. 2025, 52, 2069–2094. [Google Scholar] [CrossRef]
- Lodge, J.M. The evolving risk to academic integrity posed by generative artificial intelligence: Options for immediate action. Tert. Educ. Qual. Stand. Agency 2024, 8. Available online: https://www.teqsa.gov.au/sites/default/files/2024-08/evolving-risk-to-academic-integrity-posed-by-generative-artificial-intelligence.pdf (accessed on 22 May 2025).
- Selçuk, A.A. A guide for systematic reviews: PRISMA. Turk. Arch. Otorhinolaryngol. 2019, 57, 57. [Google Scholar] [CrossRef]
- Yazdanifard, R.; Oyegoke, T.; Seyedi, A.P. Cyber-crimes: Challenges of the millennium age. In Advances in Electrical Engineering and Electrical Machines; Springer: Berlin/Heidelberg, Germany, 2012; pp. 527–534. [Google Scholar]
- Tan, S. Regulating online harms: Are current efforts working–or even workable? RSIS Comment. 2023, 170-23. Available online: https://dr.ntu.edu.sg/entities/publication/068523a4-583b-4df8-8cd3-95166a9723a9 (accessed on 22 May 2025).
- Hong, Q.N.; Pluye, P.; Fàbregues, S.; Bartlett, G.; Boardman, F.; Cargo, M.; Dagenais, P.; Gagnon, M.-P.; Griffiths, F.; Nicolau, B. Mixed methods appraisal tool (MMAT), version 2018. Regist. Copyr. 2018, 1148552, 1–7. [Google Scholar]
- Cubbage, C.J.; Smith, C.L. The function of security in reducing women’s fear of crime in open public spaces: A case study of serial sex attacks at a Western Australian university. Secur. J. 2009, 22, 73–86. [Google Scholar] [CrossRef]
- Reeves, A.; Delfabbro, P.; Calic, D. Encouraging Employee Engagement With Cybersecurity: How to Tackle Cyber Fatigue. SAGE Open 2021, 11, 21582440211000049. [Google Scholar] [CrossRef]
- Striepe, M.; Thomson, S.; Sefcik, L. Understanding Academic Integrity Education: Case Studies from Two Australian Universities. J. Acad. Ethics 2023, 21, 1–17. [Google Scholar] [CrossRef]
- Fudge, A.; Ulpen, T.; Bilic, S.; Picard, M.; Carter, C. Does an educative approach work? A reflective case study of how two Australian higher education Enabling programs support students and staff uphold a responsible culture of academic integrity. Int. J. Educ. Integr. 2022, 18, 5. [Google Scholar] [CrossRef]
- Samar, S.; Rajan, K.; Aakanksha, S. Framework for Adoption of Generative Artificial Intelligence (GenAI) in Education. IEEE Trans. Educ. 2024, 67, 777–785. [Google Scholar] [CrossRef]
- Vaill, Z.; Campbell, M.; Whiteford, C. Analysing the quality of Australian universities’ student anti-bullying policies. High. Educ. Res. Dev. 2020, 39, 1262–1275. [Google Scholar] [CrossRef]
- Young, H.; Campbell, M.; Spears, B.; Butler, D.; Cross, D.; Slee, P. Cyberbullying and the role of the law in Australian schools: Views of senior officials. Aust. J. Educ. 2016, 60, 86–101. [Google Scholar] [CrossRef]
- Jacqueline, M.D. A study of cybercrime victimisation and prevention: Exploring the use of online crime prevention behaviours and strategies. J. Criminol. Res. Policy Pract. 2020, 6, 17–33. [Google Scholar]
- Pennell, D.; Campbell, M.; Tangen, D. The education and the legal system: Inter-systemic collaborations identified by Australian schools to more effectively reduce cyberbullying. Prev. Sch. Fail. 2022, 66, 175–185. [Google Scholar] [CrossRef]
- Pennell, D.; Campbell, M.; Tangen, D.; Knott, A. Should Australia have a law against cyberbullying? Problematising the murky legal environment of cyberbullying from perspectives within schools. Aust. Educ. Res. 2022, 49, 827–844. [Google Scholar] [CrossRef]
- Sheanoda, V.; Bussey, K.; Jones, T. Sexuality, gender and culturally diverse interpretations of cyberbullying. New Media Soc. 2024, 26, 154–171. [Google Scholar] [CrossRef]
- Jayshri, N. Comprehensive Review of Digital Harassment Prevention and Intervention Strategies: Bystanders, Automated Content Moderation, Legal Frameworks, AI, Education, Reporting, and Blocking. Int. J. Multidiscip. Res. 2025, 7. [Google Scholar] [CrossRef]
- Australia’s Cyber Security Strategy. Australia’s Cyber Security Strategy 2020 at a Glance; Commonwealth of Australia: Barton, Australia, 2020. [Google Scholar]
- Bell, M.; Keles, S.; Furenes Klippen, M.I.; Caravita, S.C.S.; Fandrem, H. Cooperation within the school community to overcome cyberbullying: A systematic scoping review. Scand. J. Educ. Res. 2025, 1–16. [Google Scholar] [CrossRef]
- Ballantine, J.; Boyce, G.; Stoner, G. A critical review of AI in accounting education: Threat and opportunity. Crit. Perspect. Account. 2024, 99, 102711. [Google Scholar] [CrossRef]
- Smiderle, R.; Rigo, S.J.; Marques, L.B.; Peçanha de Miranda Coelho, J.A.; Jaques, P.A. The impact of gamification on students’ learning, engagement and behavior based on their personality traits. Smart Learn. Environ. 2020, 7, 3. [Google Scholar] [CrossRef]
- Bassanelli, S.; Vasta, N.; Bucchiarone, A.; Marconi, A. Gamification for behavior change: A scientometric review. Acta Psychol. 2022, 228, 103657. [Google Scholar] [CrossRef]
- Spears, B.A.; Taddeo, C.; Ey, L.A. Using participatory design to inform cyber/bullying prevention and intervention practices: Evidence-Informed insights and strategies. J. Psychol. Couns. Sch. 2021, 31, 159–171. [Google Scholar] [CrossRef]
- Johnston, N. The impact and management of mis/disinformation at university libraries in Australia. J. Aust. Libr. Inf. Assoc. 2023, 72, 251–269. [Google Scholar] [CrossRef]
- Salem, L.; Fiore, S.; Kelly, S.; Brock, B. Evaluating the Effectiveness of Turnitin’s AI Writing Indicator Model; Temple University: Philadelphia, PA, USA, 2021. [Google Scholar]
- Fowler, S.; Korolkiewicz, M.; Marrone, R. First 100 days of ChatGPT at Australian universities: An analysis of policy landscape and media discussions about the role of AI in higher education. Learn. Lett. 2023, 1, 1. [Google Scholar] [CrossRef]
- Bontridder, N.; Poullet, Y. The role of artificial intelligence in disinformation. Data Policy 2021, 3, e32. [Google Scholar] [CrossRef]
- Stracke, C.M.; Griffiths, D.; Pappa, D.; Bećirović, S.; Polz, E.; Perla, L.; Di Grassi, A.; Massaro, S.; Skenduli, M.P.; Burgos, D. Analysis of Artificial Intelligence Policies for Higher Education in Europe. Int. J. Interact. Multimed. Artif. Intell. 2025, 9, 124–137. [Google Scholar] [CrossRef]
- Williamson, S.M.; Prybutok, V. The era of artificial intelligence deception: Unraveling the complexities of false realities and emerging threats of misinformation. Information 2024, 15, 299. [Google Scholar] [CrossRef]
- Khairullah, S.A.; Harris, S.; Hadi, H.J.; Sandhu, R.A.; Ahmad, N.; Alshara, M.A. Implementing artificial intelligence in academic and administrative processes through responsible strategic leadership in the higher education institutions. Front. Educ. 2025, 10, 1548104. [Google Scholar] [CrossRef]
- Chan, C.K.Y. A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. High. Educ. 2023, 20, 38. [Google Scholar] [CrossRef]
- Braun, T. Liability for artificial intelligence reasoning technologies–a cognitive autonomy that does not help. Corp. Gov. Int. J. Bus. Soc. 2025. [Google Scholar] [CrossRef]
- Khan, R.H.; Balapumi, R. Artificial Intelligence (AI) as Strategy to Gain Competitive Advantage for Australian Higher Education Institutions (HEI) Under the New Post COVID-19 Scenario. In Artificial Intelligence-Enabled Businesses: How to Develop Strategies for Innovation; Scrivener Publishing LLC: Beverly, MA, USA, 2025; pp. 439–449. [Google Scholar]
- Lin, L.S.; Aslett, D.; Mekonnen, G.; Zecevic, M. The UN Cybercrime Convention: What It Means for Policing and Community Safety in Australia. 2024. Available online: https://www.internationalaffairs.org.au/australianoutlook/the-un-cybercrime-convention-what-it-means-for-policing-and-community-safety-in-australia/ (accessed on 22 November 2025).
- Wardle, C.; Derakhshan, H. Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking; Council of Europe Strasbourg: Strasbourg, France, 2017; Volume 27. [Google Scholar]
- Luu, X.; Rathjens, C.; Swadling, M.; Gresham, B.; Hockman, L.; Scott-Young, C.; Leifels, K.; Zadow, A.J.; Dollard, M.F.; Kent, L. How university climate impacts psychosocial safety, psychosocial risk, and mental health among staff in Australian higher education: A qualitative study. High. Educ. 2024. [Google Scholar] [CrossRef]
- Mitchell, M. The discursive production of public inquiries: The case of Australia’s Royal Commission into Institutional Responses to Child Sexual Abuse. Crime Media Cult. 2021, 17, 353–374. [Google Scholar] [CrossRef]
- Sandu, R.; Gide, E.; Elkhodr, M. The role and impact of ChatGPT in educational practices: Insights from an Australian higher education case study. Discov. Educ. 2024, 3, 71. [Google Scholar] [CrossRef]
- Whitty, M.T. Drug mule for love. J. Financ. Crime 2023, 30, 795–812. [Google Scholar] [CrossRef]
- Xing, C.; Mu, G.M.; Henderson, D. Submission or subversion: Survival and resilience of Chinese international research students in neoliberalised Australian universities. High. Educ. 2022, 84, 435–450. [Google Scholar] [CrossRef] [PubMed]

| Criteria | Inclusion Criteria | Exclusion Criteria |
|---|---|---|
| Databases | Articles indexed in SCOPUS, IEEE Xplore, Web of Science, and Google Scholar. | Articles not indexed in the specified databases (SCOPUS, IEEE Xplore, Web of Science, Google Scholar). |
| Keywords | Articles that include keywords such as "online harm," "digital harm," "cyber harm," "online safety," "cyberbullying," "online harassment," "Australian higher education," "Australian universities," "institutional responses," "institutional strategies," "university interventions," "AI-generated misinformation," "artificial intelligence misinformation," "deepfake," "synthetic media," "AI-driven disinformation," "generative AI," "cybercrime prevention," "cybersecurity," "online crime prevention," "digital security," "qualitative analysis," "qualitative research," "thematic analysis," and "case studies." | Articles that do not include the specified keywords or that focus on unrelated topics. |
| Language | Articles published in English. | Articles not published in English. |
| Location | Studies conducted in Australia. | Studies conducted outside Australia. |
| Publication Date | Articles published between 2000 and 2025. | Articles published before 2000. |
| Relevance | Articles that focus on institutional strategies to counter AI-generated misinformation and cybercrime risks. | Articles that do not address institutional strategies to counter AI-generated misinformation and cybercrime risks, or that do not provide empirical data or theoretical insights relevant to the study. |
| Type of Publication | Peer-reviewed journal articles. | Non-peer-reviewed articles, opinion pieces, and editorials. |

| Main Theme | Description | References |
|---|---|---|
| Educational Strategies | Institutional strategies to counter AI-generated misinformation and cybercrime risks, including AI literacy programs, academic integrity training, and digital citizenship education. | [16,17,18,19,20,21,22] |
| Alignment with National Frameworks | Alignment of university policies and practices with national frameworks for digital safety and cybercrime prevention, such as Australia’s Cyber Security Strategy and eSafety Commissioner guidelines. | [16,17,18,19,21,23,24] |
| Policy Gaps and Development | Gaps in institutional responses, such as limited GenAI adoption and weak cyberbullying policies, and their implications for developing future cybercrime prevention strategies. | [17,18,20,21,22,23,25,26] |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lin, L.S.F.; Mekonnen, G.T.; Zecevic, M.; Motsi-Omoijiade, I.; Aslett, D.; Allan, D.M.C. Reducing AI-Generated Misinformation in Australian Higher Education: A Qualitative Analysis of Institutional Responses to AI-Generated Misinformation and Implications for Cybercrime Prevention. Informatics 2025, 12, 132. https://doi.org/10.3390/informatics12040132

