Special Issue "Artificial Intelligence Applications for Education"

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 28 February 2022.

Special Issue Editors

Dr. Kevin Gary
Guest Editor
School of Computing and Augmented Intelligence, The Ira A. Fulton Schools of Engineering, Arizona State University, Tempe, AZ, USA
Interests: AI in software engineering; agile methods; software engineering education; mHealth
Dr. Ajay Bansal
Guest Editor
School of Computing and Augmented Intelligence, The Ira A. Fulton Schools of Engineering, Arizona State University, Tempe, AZ, USA
Interests: intelligent systems; knowledge representation and reasoning; computational logic; declarative programming

Special Issue Information

Dear Colleagues,

The MDPI journal Information is inviting submissions to a Special Issue on “Artificial Intelligence Applications for Education”.

Artificial intelligence (AI) plays an increasingly important and pervasive role in society, including in the area of education. Recent events have cast a spotlight on technology’s role in education, and AI is leading the way in creating intelligent, impactful, and scalable education solutions. From intelligent tutors to machine learning for learning analytics, AI’s impact is being felt on multiple levels—from the individual learner–teacher relationship to organizational strategies for achieving large-scale outcomes.

This Special Issue seeks novel research reports across the spectrum of AI's influence on education. The editors welcome submissions on all forms of AI approaches, with an emphasis on applications of these approaches in real-world settings with fully analyzed research results. Quantitative, qualitative, and mixed-methods studies are welcome, as are case studies and experience reports, provided they describe an impactful application at scale that delivers useful lessons to the journal's readership.

Topics of Interest include (but are not limited to):

  • Intelligent tutoring systems
  • Applications of learning analytics to learning situations
  • Personalized and adaptive learning systems
  • AI in support of behavior change models for learning
  • Hybrid teacher–agent implementation support for teachers
  • AI impacts on pedagogy
  • AI for learning at scale
  • Intelligent assessment models
  • Natural language processing (NLP) in education
  • Challenges implementing AI in real-world scenarios
  • Modeling learner types using AI
  • Modeling domain expertise using AI
  • Human–AI hybrid systems for learning
  • Modeling learning contexts using AI
  • Informal learning using educational games
  • Domain-specific learning using AI
  • Evaluation of AI-based learning systems

Dr. Kevin Gary
Dr. Ajay Bansal
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • personalized and adaptive learning
  • learning analytics
  • intelligent tutoring systems

Published Papers (3 papers)


Research

Article
Predicting Student Dropout in Self-Paced MOOC Course Using Random Forest Model
Information 2021, 12(11), 476; https://doi.org/10.3390/info12110476 - 17 Nov 2021
Abstract
A significant problem in Massive Open Online Courses (MOOCs) is the high rate of student dropout in these courses. An effective student dropout prediction model of MOOC courses can identify the factors responsible and provide insight on how to initiate interventions to increase student success in a MOOC. Different features and various approaches are available for the prediction of student dropout in MOOC courses. In this paper, the data derived from a self-paced math course, College Algebra and Problem Solving, offered on the MOOC platform Open edX in partnership with Arizona State University (ASU) from 2016 to 2020 are considered. This paper presents a model to predict the dropout of students from a MOOC course given a set of features engineered from student daily learning progress. The Random Forest technique in Machine Learning (ML) is used for the prediction and is evaluated using validation metrics including accuracy, precision, recall, F1-score, Area Under the Curve (AUC), and the Receiver Operating Characteristic (ROC) curve. The model developed can predict the dropout or continuation of students on any given day in the MOOC course with an accuracy of 87.5%, AUC of 94.5%, precision of 88%, recall of 87.5%, and F1-score of 87.5%. The features and interactions contributing to the model's predictions were explained using Shapley values. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications for Education)
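The evaluation pipeline the abstract describes (a random-forest classifier scored with accuracy, precision, recall, F1, and ROC-AUC) can be sketched in scikit-learn. The feature matrix below is a synthetic stand-in for the engineered daily-progress features, not the ASU course data, and the hyperparameters are illustrative assumptions:

```python
# Minimal sketch of random-forest dropout prediction with the validation
# metrics listed in the abstract; data and hyperparameters are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

rng = np.random.default_rng(0)
# Stand-in features, e.g. days active, problems attempted, fraction complete.
X = rng.random((1000, 3))
# Toy label: learners with little activity are more likely to drop out.
y = (X[:, 0] + rng.normal(0.0, 0.2, 1000) < 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)                 # hard dropout/continue labels
proba = clf.predict_proba(X_test)[:, 1]    # dropout probability for ROC-AUC
print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
print("F1       :", f1_score(y_test, pred))
print("ROC-AUC  :", roc_auc_score(y_test, proba))
```

The per-prediction feature attributions the paper reports could then be obtained by passing the fitted model to a Shapley-value explainer such as the SHAP library's tree explainer.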

Article
WebPGA: An Educational Technology That Supports Learning by Reviewing Paper-Based Programming Assessments
Information 2021, 12(11), 450; https://doi.org/10.3390/info12110450 - 29 Oct 2021
Abstract
Providing feedback to students is one of the most effective ways to enhance their learning. With the advancement of technology, many tools have been developed to provide personalized feedback. However, these systems are only beneficial when interactions are done on digital platforms. As paper-based assessment remains the dominant evaluation method, particularly in large blended-instruction classes, the sole use of electronic educational systems creates a gap between how students learn the subject in the physical and digital worlds. This has motivated the design and development of a new educational technology that facilitates the digitization, grading, and distribution of paper-based assessments to support blended-instruction classes. With the aid of this technology, different learning analytics can be readily captured. A retrospective analysis was conducted to understand students' behaviors in an Object-Oriented Programming and Data Structures class at a public university. Their behavioral differences and the associated learning impacts were analyzed by leveraging their digital footprints. Results showed that students made significant efforts in reviewing their examinations. Notably, the high-achieving and the improving students spent more time reviewing their mistakes and started doing so as soon as the assessment became available. Finally, when students were guided in the reviewing process, they were able to identify items where they had misconceptions. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications for Education)

Article
Online At-Risk Student Identification using RNN-GRU Joint Neural Networks
Information 2020, 11(10), 474; https://doi.org/10.3390/info11100474 - 09 Oct 2020
Cited by 4
Abstract
Although online learning platforms are gradually becoming commonplace in modern society, learners' high dropout rates and poor academic performance require more attention within the virtual learning environment (VLE). This study aims to predict students' performance in a specific course as it is continuously running, using static personal biographical information and sequential behavior data from the VLE. To achieve this goal, a novel recurrent neural network (RNN)-gated recurrent unit (GRU) joint neural network is proposed to fit both static and sequential data, and a data completion mechanism is adopted to fill in the missing stream data. To incorporate the sequential relationships in the learning data, three kinds of time-series deep neural network algorithms (simple RNN, GRU, and LSTM) are first taken into consideration as baseline models, and their performance in identifying at-risk students is compared. Experimental results on the Open University Learning Analytics Dataset (OULAD) show that simpler methods such as GRU and simple RNN achieve better results than the relatively complex LSTM model. The results also reveal that different models peak at different times, which motivates the proposed joint model; it achieves over 80% prediction accuracy for at-risk students at the end of the semester. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications for Education)
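At the core of the models this abstract compares is a gated recurrence that folds a student's week-by-week activity into a single hidden state. The NumPy sketch below shows one GRU step and its use over a behavior sequence; the feature dimensions, weight initialization, and 12-week horizon are assumptions for illustration, not the authors' joint architecture:

```python
# Illustrative GRU recurrence over sequential learner-behavior features.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8   # e.g. 4 weekly VLE activity counts, 8 hidden units

def w(*shape):
    return rng.normal(0.0, 0.1, shape)

# One (input, recurrent, bias) triple per gate: update z, reset r, candidate.
Wz, Uz, bz = w(n_in, n_hid), w(n_hid, n_hid), np.zeros(n_hid)
Wr, Ur, br = w(n_in, n_hid), w(n_hid, n_hid), np.zeros(n_hid)
Wh, Uh, bh = w(n_in, n_hid), w(n_hid, n_hid), np.zeros(n_hid)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h):
    z = sigmoid(x @ Wz + h @ Uz + bz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)              # reset gate
    h_cand = np.tanh(x @ Wh + (r * h) @ Uh + bh)   # candidate state
    return (1.0 - z) * h + z * h_cand              # interpolate old/new state

# Fold a 12-week sequence of behavior features into one hidden state; a
# downstream classifier could combine it with static biographical features
# to flag at-risk students.
seq = rng.random((12, n_in))
h = np.zeros(n_hid)
for x in seq:
    h = gru_step(x, h)
print(h.shape)
```

Because each week's output depends on the state so far, such a model can be queried mid-course, which is what makes week-by-week at-risk prediction possible.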
