Article

AI Ethics Bylaws for Academia: Teaching, Learning, and Assessment

by Ali F. Almutairi 1, Jonathan Pils 2, Nazeer Muhammad 3,* and Shafiullah Khan 3
1 College of Engineering and Energy, Abdullah Al Salem University, Khaldiya 72303, Kuwait
2 College of Integrated Studies, Abdullah Al Salem University, Khaldiya 72303, Kuwait
3 College of Computer and Systems Engineering, Abdullah Al Salem University, Khaldiya 72303, Kuwait
* Author to whom correspondence should be addressed.
Societies 2026, 16(4), 106; https://doi.org/10.3390/soc16040106
Submission received: 29 November 2025 / Revised: 11 March 2026 / Accepted: 12 March 2026 / Published: 25 March 2026
(This article belongs to the Topic AI Trends in Teacher and Student Training)

Abstract

The establishment of AI ethics bylaws in academia is needed for teaching, learning, and assessment. The adaptive parameters of these bylaws define the ethical, pedagogical, and operational standards for the use of artificial intelligence tools within academia. The main aim is to ensure that AI tools enhance educational practice while preserving human judgment, safeguarding academic integrity, and promoting critical thinking. Specifically, the bylaws are intended to guide all domains of academia in upholding the core values of fairness and transparency while adapting to modern technologies. While many are enthused by the support provided by large language models, it is also important to prevent over-reliance on, or misuse of, AI technologies. The bylaws establish clear responsibilities for faculty, students, and administration, and they provide a foundation for the governance, evaluation, and amendment of AI-related practices. To provide normative insight into the anticipated reception of these bylaws, we conducted a small exploratory pilot study with STEM faculty. The resulting observations offer preliminary indications of the feasibility of the proposed method for future research and policy development.

1. Introduction

Ethics in computing education has gained prominence in academia due to developments in AI. Enrollment in computing majors has grown rapidly, and with it the use of AI to support academic work across diverse fields of interest for faculty and students. The ACM updated its ethical code of conduct [1]: the previous code focused on quality and professionalism, whereas the updated code addresses potential risks in AI and the ethical issues created by new systems. Rapid advances in AI technologies, automation, and robotics have displaced labor jobs [2], and these advancements create complex tasks that may harm low-skill jobs [3]. According to the AI expert Andrew Ng [3], AI may transform multiple industries, but it remains far from the science-fiction claim that it will surpass human intelligence.
AI ethics is a new field of critical thinking that emerged from the rapid development, design, and implementation of AI [4]. The power of AI to resolve complex tasks has often been demonstrated [5]; for example, AlphaGo Zero defeated a world champion in the game of Go [6]. Automated decision-making also has social impact; for example, the COMPAS risk assessment tool predicts an individual's likelihood of reoffending [7], and the news organization ProPublica claimed that it is biased against Black defendants [8]. Even when the AI process itself is unbiased, biased input data can still produce biased decisions. Regarding ethics, the IEEE Global Initiative aimed to improve the fairness of decision systems [9]. In the absence of AI ethics bylaws, AI models are often regarded as "black boxes" [10], which are difficult to understand. Experts have advocated embedding ethical deliberation in AI systems to deal with such black boxes [11]; in this way, AI algorithms can be regulated in terms of Fairness, Accountability, and Transparency (FAT) [10]. The idea of the explainability of AI-based decisions is contested in Europe [12]. It is difficult to justify decisions made by AI systems that are crucial to human life [13], and full explainability is virtually impossible for artificial neural networks [14]. Bremner et al. argued that AI systems' ethical reasoning should certifiably abide by ethics bylaws and an established code of conduct [14,15].
The curricula of various institutions now take AI ethics policy issues more seriously [16,17,18]. This raises graduates' awareness of the legal and ethical aspects of advanced computing, such as AI integration in studies [8]. Ethics represents moral principles [19], and AI ethics bylaws provide a moral map, a beacon of light in darkness, especially in complex situations [19]. AI ethics bylaws also inform the decision-making process [20]. Ethics bylaws are important in academia for AI usage because such usage requires heightened accountability. Without established AI ethics bylaws in academia, decisions could lead to public distrust. It is also important that AI ethics bylaws promote positive and responsible outcomes at the teaching, learning, and assessment levels [21]. The use of data requires ethical consideration of its collection and possible outcomes [22], and the quality of the data and the authenticity of the evidence associated with them are also important [23].
Quinn, in his study [24], focused on ethics for the information age and computing reliability. Previously, many AI experts followed the ethical framework known as utilitarianism [25]. Goldsmith and Burton described utilitarian theory and virtue ethics [25] and clearly stated [25,26] that ethics bylaws are an integral part of AI-based curricula. Researchers claim that ethical knowledge of AI should be embedded in all curricula [26,27]. Various organizations have already established ethical codes of professional conduct, partly in response to court cases [28], to protect the dignity of the organization and the reputation of the profession, e.g., the Association for Computing Machinery (ACM) [1]. Zeng [29] listed 27 sets of AI principles that align with ethics bylaws.
These bylaws apply to the use of AI tools in the following areas: teaching and instructional design; learning and self-directed study; formative and summative assessment; academic supervision and mentoring; research and scholarly communication; and administrative or support functions linked to academic activity. With the advent of modern advancements, AI ethics bylaws are concerned with questions of good and bad, right and wrong. Ethical rules in academic institutions differ from cultural or societal norms, and there is a dire need to address this with a basic understanding of the decision-making process [30]. Computer ethics education was not common several years ago, but recent advances in AI have brought AI ethics serious attention [30].
The purpose of this work is to fill a gap in existing institutional AI governance frameworks in academia, even though AI tools are already prevalent in STEM education [31]. The proposed work aims to bridge the gap between the advent of modern technology and conventional teaching, learning, and assessment (TLA) practices. Although the ethos of the paper is the development of an ethical governance framework, an exploratory pilot study was carried out with faculty in Mathematics, Computing, and Engineering to contextualize disciplinary views of the proposed bylaws. The pilot study does not aim at empirical validation of the bylaws; rather, it provides preliminary evidence of feasibility and indicates differences in disciplinary ethical awareness across novel contexts [32].

Research Questions

To direct the exploratory aspect of this research, the following research questions were analyzed:
1. What are the perceptions of STEM faculty regarding key dimensions of AI Ethics Bylaws addressing TLA?
2. Does the perceived importance of AI ethics principles such as transparency, fairness, and human oversight vary between disciplines with high and low levels of presence in the sample?
3. Does the pilot study give preliminary evidence of the relevance and viability of the suggested AI governance model in an academic setting?
Structure of this paper: Section 1 describes the issues related to the promise of AI decision making in academia and ethical problems of autonomous systems associated with it. Section 2 is about Teaching, Learning, and Assessment issues related to AI ethics. Section 3 deals with role dependencies. Section 4 describes justifications and initiatives for AI ethics bylaws on academic code of conduct and ethical dilemmas. Section 5 is a statistical analysis and Section 6 is the conclusion.

2. Materials and Methods

The proposed method supports the normative framework through exploratory evidence using a pilot study sample and literature exploration.
In this study, the literature review was conducted as a structured literature exploration rather than a formal systematic review. This approach involved purposefully identifying key publications relevant to AI ethics, academic integrity, and governance in higher education without implementing predefined search strings, multi-stage screening procedures, or formal inclusion/exclusion criteria. The aim of this structured exploration was to map established ethical principles—particularly from UNESCO, OECD, and IEEE AI frameworks—onto academic practices in teaching, learning, and assessment. Hence, the literature review served a normative and conceptual function that informed the development of the proposed bylaws, rather than operating as an exhaustive or protocol-driven systematic evidence synthesis.
The pilot study does not aim to statistically prove the correctness of the bylaws; rather, it demonstrates first signs of tendencies and puts into perspective how academic staff view the suggested governance structure. The study was aimed at designing and refining AI Ethics Bylaws for Teaching, Learning, and Assessment (TLA) in academic institutions. The suggested framework focuses on aspects such as transparency, disclosure, and responsible utilization of AI tools in the context of learning and research. Relevant literature was identified and used in the development of the bylaws [31,33,34,35,36,37,38]. Notable issues, including privacy, bias, transparency, and academic misconduct, were defined and classified, as shown in Table 1, Table 2, Table 3, Table 4 and Table 5. The pilot study was carried out with 30 faculty members from three disciplines: Mathematics, Computing, and Engineering. The participant profile covered years of experience, gender proportion, tenure on ethics committees, and leadership roles.
Data were collected using structured questionnaire statements on AI ethics in the teaching, learning, and assessment dimensions. These statements were formulated to cover the scope of AI assistance, human oversight, and adherence to the principles of academic integrity. The disclosure framework was integrated into the proposed bylaws, validated against the existing literature [32,39,40,41,42,43,44,45,46], and further evaluated through mathematical formulation and statistical analysis, as shown in Table 1.

2.1. Major Use of AI in TLA

Major use of AI is defined as any AI engagement that adds value to the overall content creation, organization, analysis, or presentation of scholarly work, such as writing significant parts of learning materials, creating assessment questions or rubrics to support instructional use [47], or performing assessment analysis [48]. Disclosure of major AI use is obligatory and should be presented in the acknowledgments section of the written work, on the title page, and, where needed, in the project documentation section of capstone or thesis work.
Table 1. The proposed major and minor AI-use taxonomy in comparison with existing university AI policies.
Dimension | Typical University Practice | Proposed Taxonomy | Refs.
Classification logic | "Allowed vs. prohibited" criteria | Contribution rules for consistent categorization | [49,50]
Minor AI Use | Grammar or formatting assistance tools | Non-transformative aid; disclosure optional unless required | [51]
Major AI Use | Vague reference to "substantive" AI assistance | Meaningful input into ideas, structure, or analysis; mandatory disclosure | [41]
Disclosure requirements | To disclose AI use; no placement rules | Structured placement (title page, acknowledgments, appendices) | [49]
Pedagogical sensitivity | Rules across disciplines | Discipline templates; draws on pilot studies | [32]
Governance integration | Enforcement by instructor; workflow alignment | Linked to university workflow: approval, audits, documentation, appeals | [52]
Our contributions | High-level guidance | Operational detail with references: role-based dependencies, disclosure taxonomy, approval criteria, workflows | Our work

2.2. Minor Use of AI in TLA

Minor use of AI in TLA covers common, helpful interactions that do not produce a significant change in academic output, such as grammar and spelling correction, formatting, and citation styling. Disclosure of minor uses is not usually required unless designated by the instructor or department [51].
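The major/minor distinction in Sections 2.1 and 2.2 can be sketched as a simple contribution-rule check. This is a hypothetical illustration only: the category names and the rule that any transformative contribution triggers mandatory disclosure are our reading of the taxonomy, not institutional policy.

```python
# Hypothetical sketch of the proposed major/minor AI-use taxonomy.
# The contribution labels below are illustrative assumptions drawn from the
# examples in Sections 2.1-2.2, not an official classification scheme.

# Transformative contributions (major use: disclosure mandatory).
MAJOR_CONTRIBUTIONS = {
    "content_creation", "organization", "analysis", "presentation",
    "assessment_questions", "rubric_design",
}

# Non-transformative aids (minor use: disclosure optional unless required).
MINOR_CONTRIBUTIONS = {
    "grammar_check", "spelling_fix", "formatting", "citation_style",
}

def classify_ai_use(contributions):
    """Return (category, disclosure_required) for a set of declared AI
    contributions: 'major' if any contribution is transformative."""
    if any(c in MAJOR_CONTRIBUTIONS for c in contributions):
        return "major", True   # disclose on title page / in acknowledgments
    return "minor", False      # optional unless the instructor requires it

print(classify_ai_use({"grammar_check", "analysis"}))  # ('major', True)
print(classify_ai_use({"spelling_fix"}))               # ('minor', False)
```

The design choice here is that a single transformative contribution is enough to make the whole interaction "major", which matches the bylaws' conservative stance on disclosure.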

2.3. Important Protocols for Use of AI in TLA

Instructors should state the course-level AI-use policy in the course cards/syllabus and reiterate it during the first lecture. Departments will maintain a centralized policy and address any accessibility-related exceptions (e.g., for students with disabilities) in line with institutional ethics and inclusion standards [41,52]. Any variation or exception should be recorded and approved by the Head of Academics to maintain uniformity across programs. The refined disclosure taxonomy and role relationships are represented in Figure 1 and Figure 2, while ethical issues and their correlation with the existing literature are summarized in tabular form in Table 2.

2.4. Governance Workflow Process

The governance bylaws are officially approved by the University Academic Council after consultation with departmental heads and faculty representatives [50,51]. The university ethics committee will conduct a periodic, biennial review to accommodate developments in AI technologies and ever-evolving academic standards [41]. Updates will incorporate organized feedback from students, faculty, and university administrators via surveys and focus groups, thereby remaining inclusive and transparent [32,39]. Alleged violations will be investigated with the help of documented evidence (e.g., submitted work, logs of AI tools), as shown in Figure 3. The committee will apply standards that accord with institutional academic integrity policies [52]. Procedural justice is ensured because students and faculty can appeal decisions to an independent academic integrity appeal board. Where institutional bylaws conflict with course-specific rules, institutional policy prevails. Departments route accessibility-related exception requests through the Head of Academics.
The workflow is summarized in Figure 4, which illustrates the role dependencies and approval hierarchy. Key ethical concerns and their mapping to governance processes are detailed in Table 2. Statistical validation of these operational definitions and their perception across disciplines is presented in Section 5 and emphasized in [43,44]. By cross-verification and observation, the following governance steps are established:
1. The committee drafts the bylaws, forwards them for departmental review, and then submits them to the university academic council for final approval.
2. Publish the bylaws in the institutional repository and the course syllabus.
3. Conduct periodic review and stakeholder consultation.
4. Investigate violations, collect documented evidence, and apply academic integrity policies.
5. Provide an appeal mechanism and resolve conflicts through institutional authority.
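The governance steps above form an ordered pipeline with an approval gate, which can be sketched as a minimal state sequence. The stage names and the responsible roles attached to them are illustrative assumptions based on the workflow description, not a mandated implementation.

```python
# Minimal sketch of the governance workflow (Section 2.4) as an ordered
# sequence of stages. Stage names and responsible roles are hypothetical
# labels for illustration; they do not prescribe an actual system.

WORKFLOW = [
    ("draft",          "ethics committee"),   # step 1: draft bylaws
    ("dept_review",    "departments"),        # step 1: departmental review
    ("final_approval", "academic council"),   # step 1: final approval
    ("publish",        "administration"),     # step 2: repository + syllabus
    ("review_cycle",   "ethics committee"),   # step 3: biennial consultation
]

def advance(stage):
    """Return the stage that follows `stage`, or None when the cycle ends."""
    names = [name for name, _role in WORKFLOW]
    i = names.index(stage)
    return names[i + 1] if i + 1 < len(names) else None

# Walk the pipeline from drafting to the review cycle.
stage = "draft"
trace = [stage]
while (stage := advance(stage)) is not None:
    trace.append(stage)
print(trace)
```

Steps 4 and 5 (violation investigation and appeals) are event-driven rather than sequential, so they are deliberately left out of the linear pipeline.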

2.5. Role of the Empirical Component

This study includes an empirical analysis that serves an exploratory purpose and does not objectively or generalizably confirm the proposed bylaws. Its purpose is to contextualize how faculty in different STEM fields perceive the key ethical aspects (e.g., transparency, fairness, assessment integrity) addressed by the proposed governance model. Given the small, convenience-based sample, the statistical operations (ANOVA, reliability analysis, and post hoc tests) are reported to show the internal consistency of the instrument and to highlight tendencies across disciplines. This framing acknowledges that the normative, policy-based contribution is the main focus; the preliminary, non-generalizable empirical results serve only as contextual grounding.
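The analyses named above (one-way ANOVA across the three disciplines and a reliability check) can be sketched with standard formulas. The data below are synthetic placeholders, not the study's actual responses; the group means and item structure are assumptions chosen purely for demonstration.

```python
# Hedged sketch of the pilot study's statistical operations, implemented
# from the textbook formulas with NumPy. All data here are synthetic.
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of 1-D samples."""
    grand = np.concatenate(groups)
    ss_between = sum(len(g) * (g.mean() - grand.mean()) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = len(groups) - 1, len(grand) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Synthetic Likert-style ratings for three hypothetical discipline groups.
math_, comp, eng = (rng.normal(m, 0.5, 10) for m in (3.8, 4.0, 4.3))
print(round(one_way_anova_f([math_, comp, eng]), 2))

# Synthetic 30-respondent, 5-item questionnaire with correlated items.
base = rng.normal(4.0, 0.6, 30)
responses = base[:, None] + rng.normal(0.0, 0.3, (30, 5))
print(round(cronbach_alpha(responses), 2))
```

A post hoc test (e.g., Tukey's HSD) would follow a significant F statistic; it is omitted here to keep the sketch minimal.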

2.6. AI Ethics Bylaws for TLA

We propose transparency and disclosure requirements for TLA. Members of the university community should disclose significant applications of AI tools in scholarly work whenever such use adds value to content creation, organization, analysis, or presentation. Specifically, a complete disclosure should name the AI tool applied and briefly describe its role in the work. Learners will make disclosures in written text, presentations, or projects according to the instructions of their instructors or coursework. Faculty members should likewise report their own use of AI when it has a material impact on teaching, feedback, or assessment design, as depicted in Figure 1. Non-disclosure of major use of AI tools can be treated as an academic offense.
Routine and minor interactions with AI that do not influence academic output and are not specified otherwise are not subject to disclosure, e.g., idea prompting or grammar correction.

2.7. Sample Disclosure Statement for Students

“I acknowledge that I used artificial intelligence (AI) tools, such as (tool names), to assist with parts of this work. These tools were used to help with (mention the type of help taken, such as idea generation, grammar correction, and improving the clarity of written content). All AI-generated material was carefully reviewed, verified, and supplemented with my own understanding and effort. I confirm that the final submission reflects my own learning and comprehension of the subject matter.”

2.8. Sample Disclosure Statement for Faculty Using AI Tools in Assessment

“As part of continuous efforts to enhance the efficiency and effectiveness of the assessment process, artificial intelligence (AI) tools (e.g., ChatGPT by OpenAI or similar platforms) were used to support the development of assessment materials, including question formulation, rubric drafting, and generation of formative feedback.”
“The use of AI was strictly limited to assistive functions. All AI-assisted outputs were critically reviewed, edited, and aligned with the intended learning outcomes and academic standards of the course. Final decisions regarding student evaluation and feedback were made solely by the faculty member to ensure fairness, accuracy, and pedagogical integrity.”

2.9. Sample Disclosure Statement for Researchers Using AI Tools

“I acknowledge the use of artificial intelligence (AI) tools (e.g., ChatGPT by OpenAI, GPT, or similar platforms) to assist in various stages of the research process, including literature review, data analysis, hypothesis generation, and the drafting of sections of the manuscript.”
“These tools were utilized to help enhance efficiency, provide suggestions, and support data-driven insights. All AI-generated content was carefully reviewed, verified, and integrated with my own analysis, expertise, and understanding of the subject matter.”
“I confirm that the findings, conclusions, and interpretations presented in this research are based on my independent academic judgment. The use of AI was supplementary and did not replace critical thinking, academic integrity, or original analysis in any phase of the research.”
The observations supporting the AI ethics bylaws, gathered to verify the effectiveness of the proposed method, are shown in Table 2.
Table 2. AI ethics bylaw observations in classroom teaching and faculty feedback pilot studies.
Concerns | Observations | Existing Studies
Loss of privacy | 1. Leakage of personal information. 2. Increasing the surveillance culture. 3. Compromised consent. | Kobis and Mehner [33]; Reiss [34]; Adams et al. [35]
Bias and discrimination | 1. Gender discrimination. 2. Regional and ethnic discrimination. 3. Class discrimination. 4. Cultural discrimination. | Ghotbi and Ho [36]; Ghotbi et al. [53]; Matias and Zipitria [37]; Masters [38]
Transparency issue | 1. Teacher vs. student relation for knowledge imparting and learning. 2. Potential risks of using AI models in the classroom. | Memarian and Doleck [54]; Wang et al. [55]
Academic misconduct | Plagiarism and cheating issues. | Adams et al. [35]
Learner autonomy | Freedom of the knowledge-sharing environment is limited. | Han et al. [31]
To avoid ambiguity regarding the origin of the concerns listed in Table 2, we explicitly note that the items in the “Existing Studies” column are derived from prior literature, whereas entries in the “Observations” column reflect preliminary insights gathered during the exploratory pilot study. These pilot-derived observations are illustrative and non-generalizable, and they should not be interpreted as empirical validation or as reflective of broader disciplinary trends. This distinction ensures transparency regarding the evidentiary basis of each concern.
Table 3. Essential approval components for AI tools in academia.
Evaluation Component | Description | Evidence Required
Privacy Impact Assessment | Compliance with GDPR and local data laws | Formal PIA report
Data Handling and Retention | Policy for storage, encryption, and deletion | Vendor documentation
Model Limitations | Known biases, explainability constraints | Technical whitepaper
Training Requirements | User training for ethical and secure use | Training module outline
Re-approval Process | Annual review or upon major version change | Audit checklist
De-listing Criteria | Non-compliance or security breach | Incident report
Table 4. Mapping of academic tasks to permit AI uses, disclosures, and prohibitions.
Academic Task | Permitted AI Use | Required Disclosure | Prohibited Use
Closed-book Exam | None (except formatting) | N/A | Answer generation
Take-home Project | Grammar check | Scope of help | Full project
Thesis/Dissertation | Language, citation, formatting | Contribution details | Fabrication/plagiarism
Programming Assignment | Syntax, debugging hints | Tool name, assistance | Code generation
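The task-to-permission mapping in Table 4 can be represented as a simple lookup structure that, for instance, a syllabus generator or an integrity-review checklist could consult. The task keys and field names are hypothetical labels introduced for this sketch; only the policy text is taken from the table.

```python
# Illustrative sketch: Table 4's mapping of academic tasks to permitted,
# disclosed, and prohibited AI uses as a lookup table. Keys and field names
# are assumptions for demonstration, not official policy identifiers.
from dataclasses import dataclass

@dataclass(frozen=True)
class AIUsePolicy:
    permitted: str
    disclosure: str
    prohibited: str

TASK_POLICIES = {
    "closed_book_exam":       AIUsePolicy("None (except formatting)", "N/A", "Answer generation"),
    "take_home_project":      AIUsePolicy("Grammar check", "Scope of help", "Full project"),
    "thesis":                 AIUsePolicy("Language, citation, formatting", "Contribution details", "Fabrication/plagiarism"),
    "programming_assignment": AIUsePolicy("Syntax, debugging hints", "Tool name, assistance", "Code generation"),
}

def policy_for(task):
    """Look up the AI-use policy for an academic task (KeyError if unknown)."""
    return TASK_POLICIES[task]

print(policy_for("thesis").prohibited)  # Fabrication/plagiarism
```

Keeping the policy in one structure makes it easy for departments to extend the table with discipline-specific tasks while inheriting the same three-field schema.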
Table 5. Comparison of UNESCO, OECD AI ethics frameworks and proposed academic bylaws.
Dimension | UNESCO AI Ethics [51] | OECD AI Principles [48] | Proposed Academic AI Ethics Bylaws
Scope | Global policy and societal impact | Economic growth and innovation | Institutional governance for teaching, learning, and assessment
Focus Area | Human rights, fairness, transparency | Responsible AI, accountability | Academic integrity, disclosure, assessment validity
Implementation Level | Macro-level national strategies | Policy guidelines for governments | Micro-level university bylaws and workflows
Operational Tools | Principles and recommendations | High-level policy statements | Disclosure templates, mapping tables, approval dossier (Table 3)
Governance Mechanism | Ethical principles for AI systems | Risk-based regulatory approach | Role-based governance workflow (Figure 4)
Applied Examples | Not specified | Not specified | Illustrative bylaw clauses, academic task mapping (Table 4)

3. AI Ethics Bylaw Roles with Dependencies

To prevent AI hallucination in academic work, AI-generated content must be treated as a starting point, not a final product. Users must never accept information, data, or citations without consulting a reliable scholarly database such as IEEE Xplore, ScienceDirect, Springer, or Google Scholar, or other valid sources. AI can fabricate references or distort information, so all statements must be checked and only verifiable, peer-reviewed sources should be cited. Academic honesty and the validity of academic work are upheld when brainstorming, outlining, and writing remain under human control and are validated through a critical human viewpoint, with AI used responsibly. Alleged AI-related violations will be investigated based on the intention and context of use, following the roles and dependencies shown in Figure 2. These are the key elements of the proposed AI ethics bylaws and their structural relationships within the Teaching, Learning, and Assessment (TLA) ecosystem. The figure identifies three major artefacts: AI tools, AI-generated products, and AI-related data, which are the operational foundations of academic AI application. These artefacts are embedded in an ethical framework of fairness, accountability, transparency, data protection, and human control. The two-way interactions between artefacts and ethical parameters show that ethical issues are not fixed limitations but ongoing obligations that can be re-examined as AI is implemented in academic settings. Figure 2, the conceptual foundation of the governance model, identifies what should be governed and which ethical principles apply. This anchors the governance process in which non-disclosure of major AI-tool use can be considered academic misconduct.
Sanctions for AI-related misconduct must align with those applied for comparable academic integrity violations. Faculty and departments shall maintain records of suspected misuse and report patterns that may require institutional review or intervention.

Data Privacy and Security

All members of the institution must ensure that the use of AI tools complies with relevant data protection laws and institutional privacy policies. Personal, confidential, or sensitive data shall not be entered into AI tools unless explicitly authorized by the university and protected by secure agreements [52]. In addition, the AI ethics bylaws address risks that may arise from the use of data, e.g., bias, lack of explainability, legality, societal and religious implications, and the impersonation of humans.

4. AI Ethics Bylaws’ Theoretical and Governance Foundations

The developed bylaws are informed by established theories of ethics and governance models rather than being purely prescriptive guidelines. Transparency, fairness, and human control are central to global normative practice, as expressed in the UNESCO principles on AI use [49], the OECD AI Principles [50], and IEEE Ethically Aligned Design. By positioning the bylaws within the utilitarian, virtue-ethics, and FAT (Fairness, Accountability, Transparency) traditions, the model aligns responsible AI use with internationally recognized norms. Furthermore, the findings of the empirical exploration speak to discipline-specific issues, such as the elevated importance of assessment integrity among Engineering faculty and the high sensitivity to privacy among Mathematics faculty, which can inform the practical design of disclosure policy, approval procedures, and functional responsibilities. The combination of ethical theory, comparative policy analysis, and empirical insight renders the bylaws analytically grounded, situation-oriented, and compatible with existing governance paradigms, rather than leaving the current void of university-level AI policies unaddressed.

4.1. Contributions of the Proposed Bylaws

Current institutional AI policies and international bodies (e.g., UNESCO, OECD, IEEE) offer high-level ethical principles, but these general standards seldom specify how they should be operationalized in higher education. The proposed bylaws offer several types of novelty beyond the adaptation and aggregation of existing guidelines. First, the framework formalizes role-based dependencies that explicitly describe how faculty, departments, the ethics committee, and the academic council interact in evaluating, approving, disclosing, and overseeing AI use. Second, the bylaws include a systematic major-minor disclosure taxonomy with discipline-sensitive templates and specific disclosure placement requirements, which are generally missing from university-level policies that demand only generic statements of AI usage. Third, the provision of a specific AI-tool approval dossier (covering the privacy impact assessment, model limitations, training requirements, re-approval schedule, and de-listing criteria) provides procedural specificity uncharacteristic of institutional AI guidance. Lastly, these elements are operationalized through the governance workflow, which provides a clear step-by-step process for the formulation, approval, enforcement, and appeal of policies, as shown in Figure 4. Taken together, these contributions make the bylaws an actionable, institution-ready governance model with a depth of operational clarity and implementability that existing frameworks lack.
Faculty, students, and staff must ensure that any use of AI is consistent with the university’s academic policies and code of conduct; respects data privacy, confidentiality, and intellectual property; and avoids bias, discrimination, or harm to individuals or communities. Users must apply critical judgment when interpreting AI output and remain responsible for all academic content submitted, published, or distributed. The university promotes openness in the use of AI tools. Any major use of AI in academic work must be disclosed in accordance with the AI ethics bylaw standards defined by the Head of Academics of the institute. AI tools must not be used to impersonate individuals, fabricate data, or manipulate academic outcomes. Students are expected to use AI tools in a manner that supports their learning and respects the principles of academic honesty. Faculty are encouraged to design assessments that reward original thinking, process awareness, and responsible AI interaction [56]. Departments and colleges may issue further assessment-specific AI guidelines, subject to alignment with these bylaws.

4.2. AI Tool Evaluation and Approval

Any AI tools applied to teaching, learning, or assessment must meet institutional standards of security, reliability, and academic integrity. The Head of Academics is responsible for assessing and recommending AI tools for use at the institution where the need arises, irrespective of department/program input, as seen in Figure 4. Figure 4 presents the governance scheme that realizes the proposed AI ethics bylaws: the flowchart shows the chain of responsibility among the participants, including faculty, students, departments, the AI ethics committee, and the academic council, and how they interact in their duties.
All outputs generated with AI must be reviewed, interpreted, and evaluated by a human before use in any academic context; human oversight ensures that accountability and judgment remain with the user. This requirement covers any formal assessment contributing significantly to a student’s academic record, including but not limited to exams, final projects, thesis work, research work, and capstone assignments. It also covers formative or practice tasks that do not directly affect degree qualification but are used for learning, development, or feedback. All students, academic staff, researchers, administrators, and other individuals participating in teaching, learning, or research activities within the academic community are subject to these provisions. All use of AI tools within academia shall reflect the core values of academic integrity, human responsibility, and transparency. AI tools should support learning, teaching, and research rather than supplant human thinking, understanding, and authorship, as presented in Figure 3, which gives the directional factors that govern the ethical use of AI tools in education. Figure 3 highlights the importance of aligning the institutional values of accuracy, fairness, confidentiality, bias mitigation, and transparency with pedagogical obligations to ensure the ethical use of AI. These values are represented by arrows flowing into instructional design, assessment practices, and learning support, making clear that AI tools should not displace human judgment but only improve it. This diagram explains how ethical values can be applied in everyday academic actions and demonstrates the practical connection between the framework’s values and the duties set out in the proposed bylaws, as represented in Figure 5, which forms the practical implementation layer of the bylaws: a workflow that translates ethical principles and role expectations into a clear administrative process.
The design rationale of the proposed AI ethics bylaws is illustrated in Figure 5. The workflow follows six major steps, summarized here to supplement the visual diagram: (1) initial drafting of the bylaws by the committee or the relevant academic body; (2) departmental review and refinement; (3) university-level approval by the academic council; (4) publication and incorporation into institutional documents, syllabi, and policy repositories; (5) periodic monitoring and audit by the ethics committee; and (6) mechanisms for handling violations (review of documentation and imposition of sanctions in line with academic integrity policies). This overview explains how the workflow operates and how every step is associated with the responsible institutional actors shown in the diagram. Individual colleges, departments, or academic programs are free to propose tools appropriate to their area of expertise, provided the proposal adheres to these bylaws. No AI tool may be used if it has been prohibited by the Head of Academics, as shown in Table 3. Where feasible, preference shall be given to tools that facilitate human supervision, explainability, and customization to academic environments, as indicated in Table 6. All approved tools should be accompanied by documentation or training so that users understand proper handling, restrictions, and the relevant ethics.
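For institutions that track bylaw status in software (e.g., a policy repository or workflow system), the six steps above can be sketched as a minimal state machine. This is purely an illustrative sketch: the stage names and the transition rule are hypothetical conveniences, not part of the bylaws themselves.

```python
# Hypothetical sketch of the six-step bylaw lifecycle described above.
# Stage names and transitions are illustrative only.

STAGES = [
    "drafting",             # 1. initial drafting by the committee
    "departmental_review",  # 2. departmental review and refinement
    "council_approval",     # 3. university-level approval by the academic council
    "publication",          # 4. publication into syllabi and policy repositories
    "monitoring",           # 5. periodic monitoring and audit
    "enforcement",          # 6. handling violations and appeals
]

def next_stage(current: str) -> str:
    """Advance a bylaw through its lifecycle; after a violation case is
    resolved (enforcement), periodic monitoring resumes."""
    if current == "enforcement":
        return "monitoring"
    i = STAGES.index(current)
    return STAGES[min(i + 1, len(STAGES) - 1)]
```

The loop from enforcement back to monitoring mirrors the periodic audit role assigned to the ethics committee in the workflow.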
Figure 6 illustrates the relationship between compliance achievement and the risk of non-compliance at different stages of the governance workflow [57]. The findings show that compliance is highest at the initial review stage and then declines toward the annual re-approval stage, while the risk of non-compliance increases by up to 30 percent across the stages. This tendency implies that, although initial checks are effective, maintaining compliance over time requires continuous monitoring, reinforcement of the rules, periodic audits, and follow-up training. The inverse relationship between compliance and risk highlights the importance of proactive governance strategies to mitigate long-term vulnerabilities.

4.3. Academic Integrity and Misuse of AI

A research ethics committee at the university level will investigate and take action related to any violation or misuse of AI tools. This committee will be formed by the Office of the Vice President for Research and Graduate Studies. The use of AI tools to deceive, misrepresent authorship, or gain an unfair academic advantage shall be considered a violation of academic integrity policies. Misuse includes, but is not limited to:
1. Submitting AI-generated work as original human work.
2. Fabricating sources, data, or citations using AI tools.
3. Using AI in assessments where its use is prohibited.

4.4. Comparison with International AI Ethics in Academia

This section compares our approach with the principles of UNESCO and the OECD. International AI ethics efforts in academia primarily address macro-level policy and research ethics, while our work governs the pedagogical level of teaching, learning, and assessment, which remains under-represented in the current literature. The originality of this work lies in a taxonomy of AI ethics bylaws tailored to academic contexts, integrating disclosure requirements, governance workflows, and role-based responsibilities, as shown in Figure 3. To link AI governance directly to classroom practice and assessment integrity, rather than limiting it to research or data governance, we present the implementation pathway in Table 5. Moreover, practical instruments such as discipline-sensitive disclosure templates and a mapping table connecting academic tasks to permissible AI usage are provided, as shown in Table 4. The framework further includes a minimal approval dossier and a re-approval/de-listing process for a working governance model, which is absent from most international frameworks; this operational detail provides implementable steps for institutions, as shown in Table 3. We include citations and discussion comparing our approach with the UNESCO AI Ethics Recommendations [51], the OECD AI Principles [50], and recent institutional AI policies, explaining how our bylaws extend these by embedding ethical AI use into teaching, learning, and assessment workflows, as shown in Figure 4.

4.5. Interpretive Status of the Taxonomy

The division of AI use into major and minor categories, and the mapping of academic uses to those that may be employed, those that may not, and those that must be disclosed, are intended as recommended institutional defaults rather than fixed, prescriptive rules. These mappings demonstrate how the recommended bylaws could be operationalized in a higher education context and provide a consistent starting point that institutions can adopt, adapt, or refine to fit local pedagogical traditions, disciplinary cultures, and governance needs. The framework therefore provides viable examples intended to support consistent application, while making situational flexibility explicit as universities develop their own policies on AI usage.

5. Statistical Analysis

The analysis conducted in this study is exploratory and is intended only to provide contextual insight into how faculty across STEM disciplines perceive the proposed AI ethics bylaws in Teaching, Learning, and Assessment (TLA). With a small pilot sample (n = 10 per group) and convenience-based sampling, the results are not generalizable and are not used to validate the bylaws. The statistical component therefore serves only to expose indicative disciplinary tendencies that support the normative governance framework; the participant profile is shown in Table 6.
This section emphasizes interpretive meaning rather than technical computation. Standard analyses (descriptives, ANOVA, Tukey HSD, and reliability estimates) were applied using conventional procedures and are presented only to contextualize disciplinary patterns relevant to the implementation of the AI ethics bylaws.

5.1. Descriptive Insights

Table 7 summarizes perceptions across TLA dimensions. The computing faculty consistently reported slightly higher mean scores in Teaching, Learning, and Assessment, followed by Engineering and Mathematics. This suggests that computing faculty, due to more frequent interaction with AI tools, may exhibit heightened sensitivity to AI ethics integration in academic practice.
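The descriptive quantities reported in Table 7 follow standard formulas (SEM = SD/√n; 95% CI = mean ± t·SEM; Cronbach’s α from item and total-score variances). A minimal Python sketch of these computations is given below; the t critical value of 2.262 (two-tailed, df = 9) and the toy item scores in the test case are assumptions for illustration, not the study’s data.

```python
import math
from statistics import variance  # sample variance

def sem(sd: float, n: int) -> float:
    """Standard error of the mean: SD / sqrt(n)."""
    return sd / math.sqrt(n)

def ci95(mean: float, sd: float, n: int, t_crit: float = 2.262) -> tuple:
    """95% confidence interval; default t_crit assumes df = n - 1 = 9."""
    half = t_crit * sem(sd, n)
    return (mean - half, mean + half)

def cronbach_alpha(items):
    """Cronbach's alpha. `items` holds one list of respondent scores
    per scale item (all items scored by the same respondents)."""
    k = len(items)
    n = len(items[0])
    totals = [sum(item[r] for item in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

# Teaching/Mathematics row of Table 7: SD = 0.8, n = 10 gives SEM of about 0.25
print(round(sem(0.8, 10), 2))
```

Applying `sem` to the SD column of Table 7 reproduces its SEM column (e.g., 0.8/√10 ≈ 0.25, 1.1/√10 ≈ 0.35).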

5.2. Disciplinary Comparisons

The results of the ANOVA (Table 8 and Table 9) indicate marginal differences in the Teaching dimension and statistically significant differences in the Learning and Assessment dimensions, with small-to-medium effect sizes. Tukey HSD post hoc comparisons (Table 10) show that:
1. Mathematics and Computing differ in the Learning dimension;
2. Computing and Engineering differ in the Assessment dimension;
3. Mathematics and Engineering remain similar across most dimensions.
These results are not confirmatory but indicate early tendencies suggesting that disciplinary cultures may influence how faculty perceive AI integration in TLA, as shown in Figure 7. Such tendencies justify embedding discipline-sensitive disclosure patterns and governance pathways in the proposed bylaws.
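The group comparisons above rest on a standard one-way ANOVA. As an illustration of the underlying computation (not a reanalysis of the pilot data), a minimal pure-Python implementation returning the F statistic and its degrees of freedom might look like:

```python
def one_way_anova(groups):
    """One-way ANOVA for a list of groups (each a list of scores).
    Returns (F, df_between, df_within)."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n_total - k
    f_stat = (ss_between / df_b) / (ss_within / df_w)
    return f_stat, df_b, df_w

# Toy example with three small hypothetical groups (not the study data):
f, df_b, df_w = one_way_anova([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
print(f, df_b, df_w)  # F = 3.0 with df = (2, 6)
```

With three groups of n = 10, as in the pilot, the degrees of freedom become (2, 27), matching the F(2,27) column of Table 8.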

Interpretive Insights

Overall, three insights emerge from the streamlined analysis: (1) All STEM faculty acknowledge the relevance of AI ethics in TLA; (2) Perceptions vary moderately by discipline; (3) These variations underscore the need for adaptable, context-sensitive governance rather than uniform enforcement.
Future research with larger stratified samples is needed to investigate these preliminary patterns more rigorously and to examine how disciplinary norms shape the reception and operationalization of AI ethics bylaws.

5.3. Limitations and Considerations

It should be noted that the pilot study offers only exploratory insight into faculty perceptions of AI in TLA. While internal consistency values (e.g., Cronbach’s α) indicate reasonable reliability of the scale items, they do not establish construct validity or guarantee that the instrument captures the full conceptual range of AI ethics perceptions. The pilot data serve only to illustrate how faculty across STEM disciplines might initially respond to the ethical aspects of the proposed bylaws. The sample of n = 10 per group yields statistical power in the range of 56–78% (Table 8), indicating a moderate risk of Type II errors: the present design can detect medium-sized effects (Table 7), but results may differ with substantially larger samples, and future studies will therefore use larger datasets. In Table 8, the values of η_p² represent small-to-medium effects by Cohen’s conventions; nevertheless, even small effects may be practically important in AI ethics standards [58]. The pattern of p-values (Table 10) indicates differences that warrant attention, while the overlap of confidence intervals (CIs) in the post hoc tests (Table 10) underscores the need for cautious interpretation. Computing nonetheless shows the highest means across all three dimensions, suggesting a consistent trend in support of implementing the proposed bylaws.
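For reference, partial eta squared can be recovered directly from an F statistic and its degrees of freedom via the identity η_p² = F·df1 / (F·df1 + df2). The short sketch below applies this identity to the F values of Table 8 (df1 = 2, df2 = 27) and reproduces the reported effect sizes.

```python
def partial_eta_squared(f: float, df1: int, df2: int) -> float:
    """Effect size from an ANOVA F statistic: F*df1 / (F*df1 + df2)."""
    return f * df1 / (f * df1 + df2)

# F statistics from Table 8, each with df = (2, 27)
for dim, f in [("Teaching", 3.12), ("Learning", 4.90), ("Assessment", 5.44)]:
    print(dim, round(partial_eta_squared(f, 2, 27), 3))
# Matches the reported values 0.188, 0.266, and 0.287
```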

6. Conclusions

This article proposes an ethically grounded, governance-oriented set of AI Ethics Bylaws for teaching, learning, and assessment, deliberately positioned between utilitarian principles, virtue ethics, and FAT (Fairness, Accountability, Transparency), and combining disclosure requirements, governance workflows, and role responsibilities to operationalize the ethical use of AI in higher education. The bylaws translate the high-level direction of UNESCO and the OECD into standard micro-level institutional practices, such as course-level policies, procedures for approving AI applications, and norms of structured disclosure.
A preliminary investigation was conducted with STEM faculty, based on the findings of an exploratory pilot study of current attitudes towards AI ethics in TLA in Mathematics, Computing, and Engineering. The statistical results identify emerging trends and differences among disciplines in perceived ethical integration, without claiming to prove the validity of the bylaws or their effects. Rather, these results contextualize the framework, indicating that implementation strategies may need to be sensitive to disciplinary cultures and practices.
The proposed bylaws offer a structured channel for governing AI tools in a manner likely to strengthen academic integrity, human accountability, and transparency, by embedding AI ethics principles into the everyday practice of teaching, assessment design, and learning on campus. The framework also reaffirms the importance of drawing a rigorous distinction between major and minor AI use, of disclosing AI activity in formal scholarly artifacts, and of following institutional administration procedures.
Future work should broaden this normative foundation with more extensive studies investigating the effects of different AI ethics bylaws on academic conduct, perceptions of fairness, and learning in post-secondary education.

Author Contributions

Conceptualization, A.F.A. and S.K.; methodology, A.F.A., J.P., and N.M.; software, A.F.A. and S.K.; validation, A.F.A., J.P., N.M., and S.K.; formal analysis, A.F.A., S.K., and N.M.; investigation, N.M.; resources, A.F.A.; data curation, A.F.A.; writing—original draft preparation, A.F.A., S.K., and N.M.; writing—review and editing, A.F.A., J.P., S.K., and N.M.; visualization, A.F.A.; supervision, A.F.A.; project administration, A.F.A. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by Abdullah Al Salem University, Kuwait.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data supporting the results of the study are available from the corresponding author upon request.

Acknowledgments

All authors have reviewed and approved the final version of the manuscript and consent to its publication.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Association for Computing Machinery. ACM Code of Ethics and Professional Conduct. 2018. Available online: https://www.codes-isss.org/ethics_subdomain/code-of-ethics/code-2018-update-project/ (accessed on 28 September 2025).
  2. Acemoglu, D.; Restrepo, P. The Race Between Machine and Man: Implications of Technology for Growth, Factor Shares, and Employment. Am. Econ. Rev. 2018, 108, 1488–1542. [Google Scholar] [CrossRef]
  3. Ng, A. What Artificial Intelligence Can and Can’t Do Right Now. Harvard Business Review, 9 November 2016. Available online: https://hbr.org/2016/11/what-artificial-intelligence-can-and-cant-do-right-now (accessed on 11 March 2026).
  4. European Commission. Ethics Guidelines for Trustworthy AI (Draft). 2019. Available online: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (accessed on 11 March 2026).
  5. Rafique, Z.; Bibi, N.; Muhammad, N. Quantum-Inspired Ant Colony Optimization for Task Scheduling in Edge Environment. In Proceedings of the 2025 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 15–16 December 2025; pp. 1–6. [Google Scholar] [CrossRef]
  6. Silver, D.; Schrittwieser, J.; Simonyan, K.; Antonoglou, I.; Huang, A.; Guez, A.; Hubert, T.; Baker, L.; Lai, M.; Bolton, A.; et al. Mastering the game of Go without human knowledge. Nature 2017, 550, 354–359. [Google Scholar] [CrossRef]
  7. State v. Loomis, 881 N.W.2d 749 (Wis. 2016). Justia U.S. Law, Supreme Court of Wisconsin Decision, 13 July 2016. Available online: https://law.justia.com/cases/wisconsin/supreme-court/2016/2015ap000157-cr.html (accessed on 11 March 2026).
  8. Feller, A.; Pierson, E.; Corbett-Davies, S.; Goel, S. A Computer Program Used for Bail and Sentencing Decisions Was Labeled Biased Against Blacks. It’s Actually Not That Clear. The Washington Post, 17 October 2016.
  9. IEEE Standards Association. Ethically Aligned Design, 2nd ed.; IEEE Standards Association: Piscataway, NJ, USA, 2018. [Google Scholar]
  10. Doshi-Velez, F.; Kortz, M.; Budish, R.; Bavitz, C.; Gershman, S.; O’Brien, D.; Scott, K.; Schieber, S.; Waldo, J.; Weinberger, D.; et al. Accountability of AI Under the Law: The Role of Explanation. arXiv 2017, arXiv:1711.01134. [Google Scholar] [CrossRef]
  11. AI Now Institute. AI Now Report; AI Now Institute: New York, NY, USA, 2017. [Google Scholar]
  12. Cath, C. Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philos. Trans. R. Soc. A 2018, 376, 20180080. [Google Scholar] [CrossRef]
  13. Villani, C. For a Meaningful Artificial Intelligence: Towards a French and European Strategy. In Villani Report; Conseil National du Numérique: Paris, France, 2018. [Google Scholar]
  14. Bremner, P.; Dennis, L.A.; Fisher, M.; Winfield, A.F. On Proactive, Transparent, and Verifiable Ethical Reasoning for Robots. Proc. IEEE 2019, 107, 541–561. [Google Scholar] [CrossRef]
  15. Bashir, A.; Bibi, N.; Muhammad, N. Ransomware Detection using Machine Learning Approaches. In Proceedings of the 2025 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 15–16 December 2025; pp. 1–6. [Google Scholar] [CrossRef]
  16. Wilk, A. Cyber Security Education and Law. In Proceedings of the 2016 IEEE International Conference on Software Science, Technology and Engineering (SWSTE), Beer Sheva, Israel, 23–24 June 2016; pp. 58–62. [Google Scholar] [CrossRef]
  17. ACM/IEEE Computer Society Joint Task Group on Computer Engineering Curricula. Computer Engineering Curricula 2016: Final Report; A Report in the Computing Curricula Series. 15 December 2016. Available online: https://www.acm.org/binaries/content/assets/education/ce2016-final-report.pdf (accessed on 11 March 2026).
  18. Bielefeldt, A.R.; Polmear, M.; Knight, D.; Swan, C.; Canney, N. Education of Electrical Engineering Students about Ethics and Societal Impacts in Courses and Co-curricular Activities. In Proceedings of the 2018 IEEE Frontiers in Education Conference (FIE), San Jose, CA, USA, 3–6 October 2018; pp. 1–5. Available online: https://ieeexplore.ieee.org/abstract/document/8658888 (accessed on 11 March 2026).
  19. Duncan, S.; Healey, J. (Eds.) The Ethics. In Trauma Reporting; Routledge: Abingdon, UK, 2019; pp. 186–198. ISBN 9781138482098. Available online: https://strathprints.strath.ac.uk/70023/ (accessed on 11 March 2026).
  20. Shalvi, S.; Gino, F.; Barkan, R.; Ayal, S. Self-Serving Justifications: Doing Wrong and Feeling Moral. Curr. Dir. Psychol. Sci. 2015, 24, 125–130. [Google Scholar] [CrossRef]
  21. World Economic Forum. How Global Tech Companies Can Champion Ethical AI. 14 January 2020. Available online: https://www.weforum.org/stories/2020/01/tech-companies-ethics-responsible-ai-microsoft/ (accessed on 11 March 2026).
  22. Saltz, J.S.; Dewar, N.I.; Heckman, R. Key Concepts for a Data Science Ethics Curriculum. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education (SIGCSE’18), New York, NY, USA, 21–24 February 2018; pp. 952–957. Available online: https://dl.acm.org/doi/abs/10.1145/3159450.3159483 (accessed on 11 March 2026).
  23. Zeide, E.; Nissenbaum, H. Learner Privacy in MOOCs and Virtual Education. Theory Res. Educ. 2018, 16, 280–307. Available online: https://journals.sagepub.com/doi/10.1177/1477878518815340 (accessed on 11 March 2026). [CrossRef]
  24. Quinn, M.J. Ethics for the Information Age, 7th ed.; Pearson: London, UK, 2017. [Google Scholar]
  25. Goldsmith, J.; Burton, E. Why Teaching Ethics to AI Practitioners Is Important. In Proceedings of the AAAI-17 Workshop on AI, Ethics, and Society; Association for the Advancement of Artificial Intelligence: Washington, DC, USA, 2017. [Google Scholar]
  26. Burton, E.; Goldsmith, J.; Koenig, S.; Kuipers, B.; Mattei, N.; Walsh, T. Ethical Considerations in Artificial Intelligence Courses. arXiv 2017, arXiv:1701.07769. [Google Scholar] [CrossRef]
  27. Lafollette, H. The Practice of Ethics; Blackwell Publishing: Oxford, UK, 2007. [Google Scholar]
  28. Fort, T.; Presser, S. The Legal Environment of Business; West Academic Publishing: Saint Paul, MN, USA, 2017. [Google Scholar]
  29. Zeng, Y.; Lu, E.; Huangfu, C. Linking Artificial Intelligence Principles. arXiv 2018, arXiv:1812.04814. [Google Scholar] [CrossRef]
  30. Wilk, A. Teaching AI, Ethics, Law and Policy. arXiv 2019, arXiv:1904.12470. [Google Scholar] [CrossRef]
  31. Han, B.; Nawaz, S.; Buchanan, G.; McKay, D. Ethical and Pedagogical Impacts of AI in Education. In Artificial Intelligence in Education (AIED); Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2023; Volume 13916, pp. 667–673. [Google Scholar] [CrossRef]
  32. Borenstein, J.; Howard, A. Emerging Challenges in AI and the Need for Ethics Education. AI Soc. 2021, 1, 61–65. Available online: https://link.springer.com/article/10.1007/s43681-020-00002-7 (accessed on 11 March 2026). [CrossRef] [PubMed]
  33. Köbis, N.; Mehner, C. Ethical Questions Raised by AI-Supported Mentoring in Higher Education. Front. Artif. Intell. 2021, 4, 624050. [Google Scholar] [CrossRef]
  34. Reiss, M.J. The use of AI in education: Practicalities and ethical considerations. Lond. Rev. Educ. 2021, 19, 1–14. [Google Scholar] [CrossRef]
  35. Adams, C.; Pente, P.; Lemermeyer, G.; Rockwell, G. Ethical principles for artificial intelligence in K-12 education. Comput. Educ. Artif. Intell. 2023, 4, 100131. [Google Scholar] [CrossRef]
  36. Ghotbi, N.; Ho, M.T. Moral Awareness of College Students Regarding Artificial Intelligence. Asian Bioeth. Rev. 2021, 13, 421–433. [Google Scholar] [CrossRef]
  37. Matias, A.; Zipitria, I. Promoting Ethical Uses in Artificial Intelligence Applied to Education. In Augmented Intelligence and Intelligent Tutoring Systems; International Conference on Intelligent Tutoring Systems; Frasson, C., Mylonas, P., Troussas, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2023; Volume 13891, pp. 604–615. [Google Scholar] [CrossRef]
  38. Masters, K. Ethical use of Artificial Intelligence in Health Professions Education: AMEE Guide No. 158. Med. Teach. 2023, 45, 574–584. [Google Scholar] [CrossRef]
  39. Anderson, J.; Rainie, L. The Future of Human Agency. Pew Research Center. 24 February 2023. Available online: https://www.pewresearch.org/internet/2023/02/24/the-future-of-human-agency/ (accessed on 11 March 2026).
  40. Herlinawati, H.; Marwa, M.; Ismail, N.; Junaidi, J.; Liza, L.O.; Situmorang, D.D.B. The Integration of 21st Century Skills in the Curriculum of Education. Heliyon 2024, 10, e35148. Available online: https://www.cell.com/heliyon/fulltext/S2405-8440(24)11179-6?uuid=uuid%3Abeb69b96-916f-44ef-aa85-d0092138f43a (accessed on 11 March 2026). [CrossRef]
  41. Chatila, R.; Havens, J.C. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. In Robotics and Well-Being; Springer: Cham, Switzerland, 2019; pp. 11–16. Available online: https://link.springer.com/chapter/10.1007/978-3-030-12524-0_2 (accessed on 11 March 2026).
  42. Matsiola, M.; Lappas, G.; Yannacopoulou, A. Generative AI in Education: Assessing Usability, Ethical Implications, and Communication Effectiveness. Societies 2024, 14, 267. [Google Scholar] [CrossRef]
  43. Field, A. Discovering Statistics Using IBM SPSS Statistics, 5th ed.; Sage Publications: Thousand Oaks, CA, USA, 2018. [Google Scholar]
  44. Cronbach, L.J. Coefficient alpha and the internal structure of tests. Psychometrika 1951, 16, 297–334. [Google Scholar] [CrossRef]
  45. Hsu, H. Multiple Comparisons: Theory and Methods; Chapman & Hall: London, UK, 1996. [Google Scholar]
  46. Cohen, J. Statistical Power Analysis for the Behavioral Sciences, 2nd ed.; Routledge: London, UK, 1988. [Google Scholar]
  47. Naqvi, S.R.; Akram, T.; Haider, S.A.; Khan, W.; Kamran, M.; Muhammad, N.; Nawaz Qadri, N. Learning outcomes and assessment methodology: Case study of an undergraduate engineering project. Int. J. Electr. Eng. Educ. 2019, 56, 140–162. [Google Scholar] [CrossRef]
  48. Nguyen, A.; Ngo, H.N.; Hong, Y.; Dang, B.; Nguyen, B.-P.T. Ethical Principles for Artificial Intelligence in Education. Educ. Inf. Technol. 2023, 28, 4221–4241. Available online: https://link.springer.com/article/10.1007/s10639-022-11316-w (accessed on 11 March 2026). [CrossRef]
  49. UNESCO. Recommendation on the Ethics of Artificial Intelligence; UNESCO: Paris, France, 2021; Available online: http://unesdoc.unesco.org/in/rest/annotationSVC/DownloadWatermarkedAttachment/attach_import_75c9fb6b-92a6-4982-b772-79f540c9fc39?_=381137eng.pdf&to=44&from=1 (accessed on 11 March 2026).
  50. OECD. OECD Principles on Artificial Intelligence; OECD: Paris, France, 2019; Available online: https://archive.epic.org/algorithmic-transparency/OECD-AI-Principles-flyer.pdf (accessed on 11 March 2026).
  51. UNESCO. Recommendation on the Ethics of Artificial Intelligence. Updated 2024. Global Normative Framework for Ethical AI. 2021. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000381137 (accessed on 11 March 2026).
  52. Okulich-Kazarin, V.; Artyukhov, A. (Un)invited Assistant: AI as a Structural Element of the University Environment. Societies 2025, 15, 297. [Google Scholar] [CrossRef]
  53. Ghotbi, N.; Ho, M.T.; Mantello, P. Attitude of college students towards ethical issues of artificial intelligence in an international university in Japan. AI Soc. 2022, 37, 283–290. [Google Scholar] [CrossRef]
  54. Memarian, B.; Doleck, T. Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and Higher Education: A Systematic Review. Computers and Education: Artificial Intelligence 2023, 5, 100152. Available online: https://www.sciencedirect.com/science/article/pii/S2666920X23000310 (accessed on 11 March 2026). [CrossRef]
  55. Wang, D.; Tao, Y.; Chen, G. Artificial Intelligence in Classroom Discourse: A Systematic Review of the Past Decade. Int. J. Educ. Res. 2024, 123, 102275. Available online: https://www.sciencedirect.com/science/article/pii/S0883035523001386 (accessed on 11 March 2026). [CrossRef]
  56. Cheong, B.C. Transparency and Accountability in AI Systems: Safeguarding Wellbeing in the Age of Algorithmic Decision-Making. Front. Hum. Dyn. 2024, 6, 1421273. Available online: https://www.frontiersin.org/journals/human-dynamics/articles/10.3389/fhumd.2024.1421273 (accessed on 11 March 2026). [CrossRef]
  57. Aror, T.A.; Mupa, M.N. What role does Artificial Intelligence (AI) play in enhancing risk management practices in corporations. World J. Adv. Res. Rev. 2025, 27, 1072–1080. [Google Scholar] [CrossRef]
  58. Khogali, H.O.; Mekid, S. Perception and Ethical Challenges for the Future of AI as Encountered by Surveyed New Engineers. Societies 2024, 14, 271. [Google Scholar] [CrossRef]
Figure 1. Academic evolving need for AI ethics bylaws.
Figure 2. Main component of AI ethics bylaws for academic TLA.
Figure 3. Directional elements of AI ethics bylaws for academic TLA.
Figure 4. Governance flowchart of AI ethics bylaws for academic TLA.
Figure 5. AI ethics bylaw stages for executing and implementing the proposed model.
Figure 6. Governance workflow compliance vs. risk comparison across key stages. Higher compliance indicates stronger adherence to governance standards, while risk values represent potential vulnerabilities.
Figure 7. Mean ethical integration scores across STEM disciplines (1–5 Likert scale). Each score represents the average of dimension items within the respective domain (Teaching, Learning, Assessment). The “Ethical Integration Score” is computed as the arithmetic mean of responses to items aligned with ethical principles such as fairness, transparency, and accountability.
Table 6. Participant teacher profile (N = 30).

Discipline | Experience (Years) | Gender (M/F) | Ethics Committee | Leadership Role
Mathematics | 12 | 6/4 | 4.2 | 70%
Computing | 10 | 7/3 | 8.1 | 80%
Engineering | 14 | 5/5 | 5.6 | 75%
Table 7. Comprehensive statistics of TLA sample courses (n = 10 per discipline).

Dimension | Discipline | Mean | SD | SEM | 95% CI | Cronbach’s α
Teaching | Mathematics | 7.6 | 0.8 | 0.25 | [7.04, 8.16] | 0.86
Teaching | Computing | 7.9 | 0.6 | 0.19 | [7.47, 8.33] |
Teaching | Engineering | 7.7 | 0.7 | 0.22 | [7.20, 8.20] |
Learning | Mathematics | 6.8 | 1.1 | 0.35 | [6.01, 7.59] | 0.89
Learning | Computing | 7.2 | 0.9 | 0.28 | [6.56, 7.84] |
Learning | Engineering | 7.0 | 1.0 | 0.32 | [6.27, 7.73] |
Assessment | Mathematics | 6.3 | 1.2 | 0.38 | [5.44, 7.16] | 0.83
Assessment | Computing | 6.5 | 1.0 | 0.32 | [5.77, 7.23] |
Assessment | Engineering | 6.1 | 1.1 | 0.35 | [5.31, 6.89] |
Table 8. ANOVA results with power analysis.

Dimension | F(2,27) | p-Value | Significance | η_p² | Power | Required N *
Teaching | 3.12 | 0.058 | Marginal | 0.188 | 0.56 | 18
Learning | 4.90 | 0.015 | * | 0.266 | 0.74 | 12
Assessment | 5.44 | 0.011 | * | 0.287 | 0.78 | 11
* Required sample size per group for power = 0.80, α = 0.05.
Table 9. Assumption checks for ANOVA (Shapiro–Wilk and Levene’s tests).

Dimension | Shapiro–Wilk (p) | Levene’s Test (p)
Teaching | 0.13 | 0.23
Learning | 0.10 | 0.22
Assessment | 0.16 | 0.22
Table 10. Tukey HSD post hoc test results with exact computations.
Table 10. Tukey HSD post hoc test results with exact computations.
Comparison | Mean Diff | SE | 95% CI | p-Value | Sig.
Teaching (MS_within = 1.39, q = 3.51)
Math vs. Computing | −0.30 | 0.167 | [−0.89, 0.29] | 0.071 | NS
Math vs. Engineering | −0.10 | 0.167 | [−0.69, 0.49] | 0.735 | NS
Computing vs. Engineering | 0.20 | 0.167 | [−0.39, 0.79] | 0.284 | NS
Learning (MS_within = 1.57, q = 3.51)
Math vs. Computing | −0.40 | 0.177 | [−1.02, 0.22] | 0.038 | *
Math vs. Engineering | −0.20 | 0.177 | [−0.82, 0.42] | 0.427 | NS
Computing vs. Engineering | 0.20 | 0.177 | [−0.42, 0.82] | 0.406 | NS
Assessment (MS_within = 1.57, q = 3.51)
Math vs. Computing | −0.20 | 0.177 | [−0.82, 0.42] | 0.543 | NS
Math vs. Engineering | 0.20 | 0.177 | [−0.42, 0.82] | 0.543 | NS
Computing vs. Engineering | 0.40 | 0.177 | [−0.22, 1.02] | 0.023 | *
* significant (p &lt; 0.05); NS, not significant.
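For a balanced one-way design, the studentized-range statistic underlying Tukey’s HSD compares each pairwise mean difference against the critical q reported in Table 10. A textbook sketch of that statistic, assuming n = 10 per discipline (from Table 7); this illustrates the formula only and is not guaranteed to reproduce the authors’ exact p-values:

```python
import math

def tukey_q_observed(mean_i, mean_j, ms_within, n):
    """Observed studentized-range statistic for a balanced one-way design:
    q_obs = |mean_i - mean_j| / sqrt(MS_within / n).
    A pair is significant at alpha if q_obs exceeds the critical q."""
    return abs(mean_i - mean_j) / math.sqrt(ms_within / n)

# Teaching means from Table 7: Computing 7.9 vs. Mathematics 7.6
q_obs = tukey_q_observed(7.9, 7.6, ms_within=1.39, n=10)
print(round(q_obs, 2))  # 0.8 -- compared against critical q = 3.51
```

In practice this computation is handled by a statistics package (e.g., a Tukey HSD routine) rather than by hand.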
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
