Rethinking Ethics, Standards, and Criteria in Language Testing and Assessment in the Age of AI
A special issue of Education Sciences (ISSN 2227-7102). This special issue belongs to the section "Language and Literacy Education".
Deadline for manuscript submissions: 5 July 2026
Special Issue Editors
Interests: AI; academic writing; digital language learning; alternative assessments; peer review; teacher–student dynamics; quantitative analysis
Interests: AI education; language learning; language teaching; AI-based feedback; human raters
Interests: language assessment; reading comprehension; teaching and assessment; technology-based instruction and assessment; self-regulated learning; test-taker strategies
Special Issue Information
Dear Colleagues,
For centuries, language assessment has relied on traditional methods administered under timed conditions, and these remain common across higher education institutions today. The rapid integration of artificial intelligence (AI) into language education has reshaped teaching and learning, and it should also inform language testing and assessment practices. Assessment design is a complex undertaking even without AI tools that may reshape our understanding of academic standards and student performance. While AI offers new opportunities for personalization, efficiency, scalability, and innovation in assessment, it also challenges long-standing principles in language testing, giving rise to ethical, methodological, and epistemological concerns. These concerns relate to algorithmic bias, data privacy, construct shift, automated decisions, automated feedback, automated scoring, and the evolving role of human evaluation, and they have prompted renewed scrutiny of standards and ethical guidelines. It is therefore necessary to reconsider whether traditional assessment and measurement principles remain relevant in AI-mediated contexts. There is a growing need for critical, empirical, and conceptual work that revisits ethical frameworks, quality criteria, and standards in language assessment to ensure that they remain fit for purpose in the age of AI, and that rethinks assessment literacy for teachers and learners.
This Special Issue addresses assessment reform in the age of AI, drawing on the most recent advances in research, theory, and practice related to ethics, standards, and criteria in language testing and assessment in AI‑mediated contexts. It seeks to bring together interdisciplinary perspectives from applied linguistics, language testing, educational measurement, AI ethics, and educational technology to rethink how we assess language learning. Given ongoing concerns about the reliability and fairness of AI-mediated assessment, as well as arguments that AI may signal the end of traditional testing, we aim to contribute to these debates by examining the potential for transformative assessment with AI. We also address the question of which aspects of assessment and evaluation should remain the responsibility of teachers when AI systems can assess students’ assignments and examinations.
Submissions may cover, but are not limited to, the following:
- Assessment literacy.
- The role of GenAI tools in assessment and feedback.
- Implications of GenAI for each phase of the assessment cycle.
- Capabilities and limitations of GenAI in assessment.
- Impact of GenAI on assessment practices in language education.
- Possibilities for transformative assessment with AI.
- Inclusive and multimodal foreign language assessment.
- Grading contracts, digital badges, and adaptable assessments.
- Flexible evaluation of students’ multimodal competencies.
- Evidence of the learning process rather than products per se.
- Validity, reliability, and fairness in automated and AI‑assisted scoring.
- Algorithmic bias, equity, and inclusivity in language assessment systems.
- Transparency, explainability, and accountability of AI‑driven assessment decisions.
- Ethical frameworks for AI‑mediated language testing.
- Assessment of out-of-class language learning.
- Construct definition and construct shift in the presence of generative AI tools.
- Fostering human–AI approaches to assessment design, scoring, and feedback.
- Data privacy, surveillance, and governance in large‑scale AI‑based assessments.
- AI, academic integrity, and the ethics of test use and interpretation.
- Assessment literacy for teachers, testers, and learners in AI‑rich environments.
- Policy implications and regulatory responses to AI in language testing.
- Empirical studies, theoretical analyses, methodological innovations, and practitioner‑oriented perspectives related to AI and language assessment ethics.
Submissions may take the form of empirical research articles, conceptual papers, methodological contributions, or practice‑oriented studies that advance scholarly understanding and inform responsible assessment practice in this rapidly evolving domain.
Dr. Abdu Al-Kadi
Dr. Jamal Ali
Dr. Asma Maaoui
Guest Editors
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Education Sciences is an international peer-reviewed open access monthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- alternative assessment
- assessment literacy
- ethical principles in assessment
- assessment criteria
- validity
- reliability
- language proficiency benchmarks
- inclusivity in assessment
- scoring
- feedback
- feedforward
- ethics
- AI
- washback effect
- test bias
Benefits of Publishing in a Special Issue
- Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
- Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
- Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
- External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
- Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.
Further information on MDPI's Special Issue policies can be found here.


