A Systematic Model of an Adaptive Teaching, Learning and Assessment Environment Designed Using Genetic Algorithms
Abstract
Featured Application
1. Introduction
2. Literature Review
2.1. General Aspects
- personalization in education [6], leading to individually optimized learning experiences [7]: personalization in education is an approach that adapts the learning process to the needs, abilities, and interests of each individual student.
- educational data analysis based on ML and AI technologies [8], which involves the use of advanced algorithms and techniques to analyze educational data in order to gain insights, make predictions, and improve educational outcomes.
- Extensive accessibility [11], which refers to ensuring that educational resources, materials, and opportunities are easily available and reachable to all learners, regardless of their backgrounds, abilities, or circumstances.
2.2. Quantitative Analysis
- Step 1:
- Keyword search: A direct search in the Dimensions.ai database using the keyword “education AND digital technology”.
- Step 2:
- Export of search results: export the search results using the database’s configuration options.
- Step 3:
- Mapping of exported results: utilize mapping software (VOSViewer 1.6.20) to visualize the exported results.
2.3. Direct Observation Analysis
2.3.1. Usage of Digital Technology in Teaching
2.3.2. Usage of Digital Technology in Learning
2.3.3. Usage of Digital Technology in Assessment
- Creating a fair assessment for a group of students who were taught as a group. Equity refers to a fair distribution of balanced assessment items for each student within a group.
- Reducing the human errors that occur during assessment.
- Covering an extended area of previously learned subjects.
- The mix between practical and theoretical evaluation.
2.3.4. Review Summary
- Lack of real-time adaptive personalization: most research directions develop a standardized educational context, based on fixed, non-varied educational content for all participants in the educational process.
- Limited integration of the educational process: the solutions often treat teaching, learning, and assessment as separate components, without integrating them into a single system.
- Limited use of evolutionary optimization algorithms: the usage of AI- and ML-based methods is extensive for the learning and assessment components, but the usage of heuristic algorithms, such as genetic methods, is under-explored.
- Lack of validation in real-world educational environments: the integration of models within educational tools is low, with the described models validated only through simulated data or theoretical models.
3. DMAIR Description
3.1. General Considerations and Purpose
- Dynamic personalization: The learning process, emphasized by the learning path generation, and the assessment component, related to assessment test generation, carry a high potential for learning personalization.
- An integrated model for adaptive learning: The proposed model offers a unified framework that connects the teaching, learning, and assessment processes.
- Usage of genetic algorithms: the model design draws on specific elements of Classical Test Theory and Item Response Theory, among other theoretical frameworks, to provide a valid scientific foundation while exploring the advantages of genetic-algorithm solutions.
- Scalable architecture: the model is designed with flexibility in mind, following a scalability-driven structure that takes into account future integration into existing and developing Learning Management Systems.
- The interaction between the learning and teaching component and the assessment component, both using dynamic evolutionary-based methods to obtain the desired results;
- traditional approaches to learning path generation rely on rule-based approaches or static branching logic; in contrast, the current model uses genetic algorithms to optimize the structure and sequence of learning blocks for each learner;
- the integration of a closed feedback loop by integrating automated generation, real-time analysis, and interpretation of results related to assessment;
- the emphasis is placed on the optimization of the learning and assessment process with a direct influence on learning progression; the model adapts both what is taught and how learners are assessed;
- the architecture of the model is modular, designed with scalability and integration into specific learning platforms in mind.
3.2. Learning and Teaching (LT)
3.2.1. General Considerations and Purpose
- Context input: a set of learning blocks is given, each described by the concepts needed for learning that block.
- Desired output: a learning path is requested that respects specific criteria based on start and end concepts, duration, and order of generality.
3.2.2. LT Component Structure
Learning Structures
- O is an ontology of concepts, which is a structured set of semantic tags related by a hierarchical relationship. It can be represented as a graph or as a symbolic logic structure. In this paper, an ontology is a directed graph of concepts O = (C, E), where:
- C is the set of concepts, which can be determined using an automated method of extracting concepts from a corpus of text (e.g., using Natural Language Processing);
- E is the set of edges, where an edge defines a hierarchical relationship between two concepts;
- B is the learning block, which is the most basic unit of learning. A block is a quintuple of elements, codified as follows:
- the set of keywords that define the prerequisite concepts, i.e., the concepts that need to be known in order to follow the current learning block;
- the set of keywords that define the concepts learnt in the current learning block;
- the set of semantic tags that reflects the upper level in a digital ontology of concepts describing the learning block;
- the set of semantic tags that reflects the lower level in a digital ontology of concepts describing the learning block;
- P, the ensemble of learning processes, methods, instruments, and elements needed in the learning process run in the current learning block.
A graphical representation of a learning block is shown in Figure 2. The elements of the input and output keyword sets are represented as sockets that can be connected with similar ones. A block is also characterized by a level in a corresponding ontology of concepts, formed based on the set of blocks and delimited by an upper level (concepts more general than the ones studied in the current block) and a lower level (concepts more particular or detailed than the ones studied in the current block). Colours are used to differentiate more easily between the elements of the block.
- The block relationship, established between two blocks, can be differentiated into two types:
- linear, established between two blocks situated on the same ontology level, based on keyword matching. The connection can be made if the output keywords of the first block fully or partially match the input keywords of the second block;
- leveled, established between two blocks situated on neighboring levels of a specific ontology, based on the sets of semantic tags. The connection can be made if the current semantic tag of the first block is connected in the ontology O with the semantic tag of the second block.
- The bidimensional array (matrix) contains a specific configuration of a set of learning blocks; it is a structure formed of learning blocks that may be connected along the four edges of a block. A matrix is obtained as a result of a genetic algorithm described in the next sections. In this context, L(O) represents the number of levels within the generated ontology O.
- The learning path is a successive enumeration of blocks that starts with an initial block BS and ends with a final block BE. A path contains only blocks that are connected either linearly or leveled.
- The requirements for determining the best matrix are as follows:
- R1: the matrix contains the BS and BE learning blocks;
- R2: the matrix has the highest number of connections between the forming blocks;
- R3: the matrix has a generality index as close as possible to a value given by the user (uGP), where the generality index is the ratio between the column index of a block in the determined matrix and L(O).
- The requirements for determining the best path are as follows:
- R4: the path is the optimal path between the BS and BE learning blocks, where optimal may mean the minimum or the maximum number of steps in the matrix, according to the user requirement;
- R5: the path has its "lowest" block (where lowest refers to the maximum index of the matrix lines in which a block from the path is found) as close as possible to the user-given value.
Genetic Structures
- g is the gene of the genetic algorithm and codifies a given block in the bidimensional array;
- C is a chromosome and codifies a bidimensional array, which can be structured as an unfolded unidimensional array of genes;
- P is the parameter set of the genetic algorithm, a quadruple of variables P = (NP, NG, rm, rc), where NP is the initial population size, NG the number of generations, rm the mutation rate and rc the crossover rate, with rm, rc ∈ [0, 1].
- The fitness function is defined as the maximum number of valid connections between the blocks within the chromosome, with components as follows:
- a component that defines the R1 requirement;
- a component that defines the R2 requirement based on linear relationships;
- a component that defines the R2 requirement based on leveled relationships;
- a component that defines the R3 requirement.
Genetic Operators
- Mutation (Mut), represented by the replacement of a randomly-selected gene with a randomly-chosen block in a randomly-chosen position;
- Crossover (Crs), determined between two parent chromosomes C1 and C2: a common random position is generated for both chromosomes, which are split at that position. The first part of C1 is combined with the second part of C2, and the first part of C2 with the second part of C1, yielding two offspring chromosomes;
- Selection (Sel), represented by sorting the chromosomes by their fitness function value.
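The three operators above can be sketched as follows; this is a minimal illustration under our own naming, with chromosomes held as Python lists, not the paper's implementation.

```python
import random

def mutate(chromosome, block_pool, rng=random):
    """Mutation: replace a randomly-selected gene with a randomly-chosen block."""
    offspring = list(chromosome)
    offspring[rng.randrange(len(offspring))] = rng.choice(block_pool)
    return offspring

def crossover(c1, c2, rng=random):
    """Single-point crossover: split both parents at a common random position
    and swap the tails, producing two offspring."""
    p = rng.randrange(1, len(c1))
    return c1[:p] + c2[p:], c2[:p] + c1[p:]

def select(population, fitness, keep):
    """Selection: sort chromosomes by fitness (best first) and keep the top ones."""
    return sorted(population, key=fitness, reverse=True)[:keep]
```

Passing a seeded `random.Random` instance as `rng` makes the operators reproducible for testing.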
3.2.3. LT Component Functionality
Genetic Algorithms
- Step 1:
- The input data (B set, BS, BE, P, uGP) is read.
- Step 2:
- The genetic algorithm is applied as follows:
- (a)
- the initial population of chromosomes is generated;
- (b)
- the mutation operation is applied;
- (c)
- the crossover operation is applied;
- (d)
- the resulting chromosomes are selected;
- (e)
- after NG generations, the best chromosome is selected.
- Step 3:
- The best chromosome is provided as input to the next processing stage.
Matrices Algorithms
- Step 1:
- Initialize:
- (a)
- Initialize a queue to store cells to explore.
- (b)
- Mark the start cell as visited and add it to the queue.
- (c)
- Initialize the current level to 0.
- Step 2:
- Explore and expand:
- (a)
- While the queue is not empty:
- Increment the current level.
- Determine the number of cells at the current level.
- For each cell at the current level:
- Check if this cell is the end cell.
- If it is, return the current level (indicating the shortest path is found).
- Otherwise, for each unvisited neighbor cell:
- Mark the neighbor as visited.
- Enqueue the neighbor to the queue.
- Step 3:
- Update queue: Remove all cells at the current level from the queue.
- Step 4:
- Termination: If no path is found, return a message indicating that there is no path between the start and end cells.
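The level-order search described in Steps 1-4 is a standard breadth-first search over the matrix. A minimal sketch (our own naming; the paper increments the level counter before scanning a level, so its counter starts at 1, whereas here the start cell is counted as level 0):

```python
from collections import deque

def shortest_path_length(grid, start, end):
    """Level-order BFS over a matrix of cells (1 = passable, 0 = blocked).
    Returns the number of steps on the shortest 4-neighbor path from start
    to end, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    visited = {start}
    level = 0
    while queue:
        for _ in range(len(queue)):          # process all cells at the current level
            r, c = queue.popleft()
            if (r, c) == end:
                return level                 # shortest path found
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols \
                        and (nr, nc) not in visited and grid[nr][nc]:
                    visited.add((nr, nc))
                    queue.append((nr, nc))
        level += 1
    return None                              # termination: no path between start and end
```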
3.3. Assessment–Test Generation (TG)
3.3.1. General Considerations and Purpose
3.3.2. TG Component Structure
Assessment Structures
- q: the item, a tuple generated and stored in a database, where the elements of the tuple are as follows:
- the unique identification particle of the item;
- the statement, which consists of a phrase or set of phrases that describes the initial data and the item requests to be resolved;
- the number of keywords that define an item;
- the set of keywords, considered similar to semantic tags, representing a collection of keywords defining an item. A keyword refers to a term or phrase that characterizes the subject matter of the item. These keywords can be acquired either manually by a human operator or automatically through Machine Learning (ML) powered Natural Language Processing (NLP) methodologies;
- the degree of difficulty of the item, determined through specific metrics (typically statistical, such as the ratio between the number of correct responses to the item and the total number of attempts or responses);
- the item type, where m denotes a multiple-choice item, e an essay item, and s a short-answer item;
- (where necessary) a list of two or more possible answers when the item type is multiple-choice, or null when the item type is short-answer or essay;
- the theoretical or practical nature of the item, where 0 is theoretical and 1 is practical.
- SI: the sequence of items, a tuple that encodes an educational assessment test created according to specified criteria or requirements using genetic algorithms. The components of the tuple are as follows:
- the unique identification particle of the test;
- the test size (the number of questions);
- the set of items that form the test;
- the union of the sets of keywords of all the items q within the sequence;
- the degree of difficulty of the test, calculated as the average of the degrees of difficulty of all the items that form the test;
- TP: the theoretical-practical ratio, which gives the predominant type of SI; the value of the ratio is the proportion of theoretical questions, and the difference 1 − TP is the proportion of practical questions;
- T: an array indicating the predominant item type in SI; its components contain the number of items of each type in SI (multiple-choice, short-answer, and essay items, respectively).
- R: the set of requirements, where each element is a requirement for the test generation and k is the total number of requirements. For this paper, k = 4 and the requirements are as follows:
- the requirement associated with the topic of the items required in the sequence; this requirement is linked to the set of user-desired keywords (the kw set), with nKW representing their total count;
- the requirement related to the degree of difficulty, linked to the desired degree of difficulty uD, where uD ∈ [0, 1]; a value closer to zero means that the test is desired to be "less difficult" and a value closer to 1 "more difficult";
- the requirement related to the predominant item type, which can take values from the type set; thus, uT ∈ {m, e, s};
- the requirement related to the desired theoretical/practical ratio (uTP).
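To make the item and sequence structures concrete, the following sketch encodes an item q and computes the aggregate SI indicators described above (field and function names are our assumptions, mirroring the tuple components):

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Item:
    qid: str            # unique identification particle
    statement: str      # the item statement
    keywords: set       # set of keywords defining the item
    difficulty: float   # degree of difficulty, in [0, 1]
    itype: str          # 'm' (multiple-choice), 'e' (essay) or 's' (short answer)
    theoretical: bool   # True = theoretical, False = practical

def sequence_stats(items):
    """Aggregate SI indicators: keyword union, average difficulty, TP ratio."""
    union_kw = set().union(*(q.keywords for q in items))
    difficulty = mean(q.difficulty for q in items)
    tp_ratio = sum(q.theoretical for q in items) / len(items)
    return union_kw, difficulty, tp_ratio
```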
Arborescent Structures
- it forms a partial tree of the main tree;
- the number of missing edges between nodes in the generated subtree is zero or minimal, based on the connections in the main tree (the tree is connected);
- the generated tree contains items for which the cardinality of the union of the sets of keywords within the tree is as close as possible to the number of keywords desired by the user.
- G: the undirected graph that represents the items and the relationships between them. The set of vertices (nodes) contains the items in the database and the set of edges contains all the conceptual relationships between the items, determined based on the ontology of concepts (O);
- the subgraph generated by various methods: its set of vertices contains the items selected to be part of an SI and its set of edges contains all the conceptual relationships between the items in the SI, based on the graph G;
- P: a quadruple of variables P = (NP, NG, rm, rc) which defines the set of the genetic algorithm parameters, where NP is the initial population size, NG the number of generations, rm the mutation rate and rc the crossover rate, with rm, rc ∈ [0, 1];
- the fitness function, which verifies that the generated subgraph is connected, is a tree, and contains the keywords given by the user; thus, it combines the requirements described above.
Genetic Structures
- the gene, which encodes an item q within a test;
- the chromosome, which encodes a sequence of items SI;
- P: a quadruple of variables P = (NP, NG, rm, rc) which defines the set of the genetic algorithm parameters, where NP is the initial population size, NG the number of generations, rm the mutation rate and rc the crossover rate, with rm, rc ∈ [0, 1];
- the fitness function, defined in various stages depending on the requirements. Two forms used in various papers are presented:
- as an average value of several sigmoid functions, whose components are as follows:
- a component related to the number of common keywords between the SI and the desired keywords;
- the keyword coverage of the SI, measuring the proportion of the uKW keywords in the sequence;
- the inverse of the dispersion of the variation of user-defined keyword frequencies throughout the sequence (the balance of the keywords within the sequence);
- the inverse of the absolute difference between the desired degree of difficulty and that of the sequence;
- a component defining the predominant item type, based on the frequency of the user-defined item type in the sequence;
- as a sum of various functions. In short, this form of the fitness function aggregates the following constraint values:
- the highest average value of similarity between user-given keywords and item keywords, calculated using edit distance and specific NLP methods;
- the smallest value of the difference between the desired degree of difficulty (uD) and the calculated degree of difficulty (D) for SI;
- the smallest value of the difference between the desired theoretical/practical ratio (uTP) and the ratio calculated for SI;
- the smallest value of the sum of the differences between the components of the desired type vector and the one calculated for SI, describing the predominant type of item.
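A simplified sketch of the second fitness form: each term rewards closeness to a user requirement and lies in [0, 1]. The term set and weighting are our assumptions (the paper's full form includes more constraints, such as the type-vector difference):

```python
def fitness(seq_difficulty, seq_tp, kw_overlap, uD, uTP, nKW):
    """Average of three constraint terms; each equals 1 when the sequence
    exactly matches the corresponding user requirement."""
    f_kw = kw_overlap / nKW                # keyword coverage of the user's kw set
    f_d = 1 - abs(uD - seq_difficulty)     # closeness to the desired difficulty
    f_tp = 1 - abs(uTP - seq_tp)           # closeness to the desired T/P ratio
    return (f_kw + f_d + f_tp) / 3
```

A sequence matching all three targets scores 1.0; mismatches lower the score proportionally.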
Genetic Operators
3.3.3. TG Component Functionality–func1(I)
- Step 1:
- The input data (q set, uD, uTP, P, kw set, nKW) are read.
- Step 2:
- The genetic algorithm is applied as follows:
- (a)
- the initial population of chromosomes is generated;
- (b)
- the mutation operation is applied;
- (c)
- the crossover operation is applied;
- (d)
- the resulting chromosomes are selected;
- (e)
- after NG generations, the best chromosome is selected.
- Step 3:
- The best chromosome is provided as input to the next processing stage.
- Step 1:
- The initial set of items is constructed, either retrieved from a database or generated from a corpus.
- Step 2:
- A tree containing a relational structure of items is created based on Automatic Taxonomy Construction (ATC) and/or sequential difficulty.
- Step 3:
- The leaf nodes and their number are determined, their values being stored in the leaf array. In a simplified scheme, the determination is made as follows:
- Step 4:
- Using the values determined in step 3, the leaf-to-node sequences are constructed starting from the leaves to the root and the nodes are stored in an array L.
- Step 5:
- Within each sequence, the number of user keywords that appear in it and the number of occurrences of each keyword are determined.
- Step 6:
- The sequence with the maximum number of keywords is determined. The output values are the sequence and the number of occurrences of each keyword from the set.
3.4. Assessment–func2(II) (Check Mechanism–CM)
3.5. Assessment–func3(III) (Item Analysis–IA)
- q: the item, described in the previous subsection, but with some additional statistical features;
- SI: the sequence of items, also described in the previous subsection, which will be enriched with more statistical indicators;
- S: student results, which contain information related to the assessment results of a particular student;
- G: the result of the group of students, which contains statistical information related to the results of the assessment of a certain group of students (for example, class, group).
- Step 1:
- Students connect and solve the item sequences.
- Step 2:
- For each student and a specific sequence of items, a report is generated by following these steps:
- (a)
- Items that obtained lower values of mq (the average score of an item q) and lq (the number of correct answers to the item q) are filtered for further analysis.
- (b)
- The item parameter values dq (the discrimination index of the item q), pbsq (the point-biserial coefficient of the item q), taq (the number of students who answered the item q), ddq (the degree of difficulty of the item q), uD and tsS (the total score of a student on the items of the same subject) are checked.
- (c)
- Item subjects are then extracted and verified to have obtained lower values for mq and lq in other items with the same subject for a large number of students.
- Step 3:
- The subjects of the items that validate the rule in substep 2(c) are presented.
- Step 4:
- The reports are entered into a report dataset, hereafter referred to as BD2.
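Substep 2(a) amounts to a simple threshold filter over per-item statistics; a minimal sketch, in which the threshold values and the dictionary layout are our assumptions:

```python
def flag_items(item_stats, mq_min=0.5, lq_min=10):
    """Return the ids of items whose average score mq or number of correct
    answers lq falls below the given thresholds (cf. substep 2(a))."""
    return [s["id"] for s in item_stats if s["mq"] < mq_min or s["lq"] < lq_min]
```

The flagged ids would then feed the parameter checks of substep 2(b) and the cross-subject verification of substep 2(c).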
Algorithm 1: IA approach algorithm
4. Results
4.1. General Methodology
- Step 1:
- The definition of objectives and purpose: The main purposes were related to model simulations in a laboratory or real environment using several methods (direct observation, comparison, etc.).
- Step 2:
- The implementation of the model
- (a)
- Application design: the application design comprised the choice of the optimal application development environment (web, mobile, desktop, etc.) and the instruments used (formal modeling languages and techniques, programming languages, methods, architectures, frameworks, etc.);
- (b)
- Application development: the development consisted of the actual implementation of the model based on the blueprint design obtained in the previous step;
- (c)
- Testing and troubleshooting: the testing phase consisted of the calibration of the obtained instruments and the identification of specific errors or miscalculations;
- (d)
- Application integration and launching: the integration consisted of connecting the resulting implementation to a common learning framework. The launching aspects were related to the dissemination of the implementation and its usage in data collection for various research contexts.
- Step 3:
- Data collection
- (a)
- The definition of objectives: the main purposes were related to the model validation using domain-specific methods, general (direct observation, comparison, etc.) or statistical;
- (b)
- Data collection: the data collection consisted of obtaining information based on the implementation behavior or specific research contexts;
- (c)
- Data pre-processing: in order to apply several instruments or methods, pre-processing was necessary in several cases.
- Step 4:
- Data analysis: the collected data were analyzed to assess the achievement of the objectives and to test the formulated hypotheses. This analysis involved the use of statistical or analytical techniques to identify patterns, relationships, or trends in the collected data.
- Step 5:
- Data interpretation: The results of the analysis were interpreted in the context of previously established objectives and assumptions. The assessment was related to whether or not the data collected supported the hypotheses formulated and to identify the implications of these results for the implementation and the model theoretical assumptions.
4.2. Learning and Teaching (LT)
- for the levels: 1-fundamentals, 2-algorithms, 3-programming, 4-advanced_techniques, and 5-applications;
- for the inputs and outputs: 1-Data types, 2-Operations, 3-Structures, 4-Algorithms, 5-Searching, 6-Sorting, 7-Functions, 8-Recursion, 9-OOP, 10-Threads, 11-Databases, 12-SQL, 13-Web, 14-JavaScript, and 15-APIs.
4.3. Assessment–Test Generation (TG)
4.3.1. TG Using Arborescent Structures
- the number of nodes (the total number of items) equal to 35;
- the initial graph in the form of a tree, given by the parent array t = (0, 1, 1, 2, 15, 15, 3, 3, 4, 4, 4, 4, 9, 25, 2, 10, 11, 11, 12, 7, 7, 8, 8, 8, 8, 5, 5, 6, 6, 20, 20, 21, 22, 23, 24);
- the keywords given by the user = (hardware, PC, hard disk, memory, unit, external, reading, peripheral, software, application, browser, Internet).
- L = (5, 1, 2, 4, 11, 18): hardware 1 time; PC 1 time; hard disk 1 time; memory 1 time; unit 1 time;
- L = (5, 1, 2, 15, 5, 27): hardware 1 time; PC 1 time; external 1 time; reading 1 time; peripheral 1 time;
- L = (5, 1, 2, 15, 6, 29): hardware 1 time; PC 1 time; external 1 time; reading 1 time; peripheral 1 time;
- L = (5, 1, 3, 8, 25, 36): PC 1 time; software 1 time; application 1 time; browser 1 time; Internet 1 time.
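The leaf detection and leaf-to-root traversal (Steps 3-4 of the arborescent approach) can be sketched directly on the parent array t given above; function names are ours. Following parents from leaf 18 up to the root and reversing appears to reproduce the node part of the first reported sequence, 1→2→4→11→18 (the reported L tuples prefix the path with its length):

```python
# 1-indexed parent array: the parent of node i is t[i - 1]; 0 marks the root.
t = [0, 1, 1, 2, 15, 15, 3, 3, 4, 4, 4, 4, 9, 25, 2, 10, 11, 11, 12,
     7, 7, 8, 8, 8, 8, 5, 5, 6, 6, 20, 20, 21, 22, 23, 24]

def leaf_nodes(t):
    """Leaves are the nodes that never appear as a parent of another node."""
    parents = set(t)
    return [i for i in range(1, len(t) + 1) if i not in parents]

def leaf_to_root(t, leaf):
    """Follow parent links up to the root (parent 0) and reverse the path."""
    path = [leaf]
    while t[path[-1] - 1] != 0:
        path.append(t[path[-1] - 1])
    return path[::-1]
```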
4.3.2. TG Using Genetic Structures
- the number of items in the database (N) was 800;
- the number of desired items in the sequence (m) was 10;
- three keywords were chosen (uKW = 3);
- a degree of difficulty of 0.4 was chosen (uD = 0.4);
- the desired type of question was chosen as multiple-choice (uT = ′m′);
- the mutation rate was established at 0.1 (rm = 0.1);
- the crossover rate was established at 0.5 (rc = 0.5);
- the population size was established at 50 (NP = 50);
- the number of generations was established at 100 (NG = 100);
- the fitness function is the sigmoid average.
- β0 is the intercept (the constant term);
- β1, β2, and β3 are the coefficients that determine the influence of each variable (N, K, NG) on the runtime.
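For illustration, the linear runtime model can be evaluated as below; the coefficient values passed in are placeholders, not the fitted values reported in the paper.

```python
def predicted_runtime(N, K, NG, b0, b1, b2, b3):
    """Linear runtime model: Runtime = b0 + b1*N + b2*K + b3*NG."""
    return b0 + b1 * N + b2 * K + b3 * NG
```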
4.4. Assessment–Item Analysis (IA)
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
TLA | Teaching, Learning and Assessment |
HCI | Human-Computer Interaction |
ML | Machine Learning |
AI | Artificial Intelligence |
LMS | Learning Management System |
UDL | Universal Design for Learning |
UDA | Universal Design for Assessment |
LA | Learning Analytics |
QG | Question Generation |
AE | Answer Evaluation |
AES | Automated Essay Scoring |
NLP | Natural Language Processing |
IA | Item Analysis |
CTT | Classical Test Theory |
IRT | Item Response Theory |
DMAIR | Dynamic Model for Assessment and Interpretation of Results |
TG | Test Generation |
CM | Check Mechanism |
LT | Learning and Teaching |
OOP | Object-Oriented Programming |
SQL | Structured Query Language |
API | Application Programming Interface |
IT | Information Technology |
PC | Personal Computer |
MySQL | My Structured Query Language |
PHP | Hypertext Preprocessor |
MVC | Model-View-Controller |
RAM | Random Access Memory |
MSE | Mean Squared Error |
RMSE | Root Mean Squared Error |
MAE | Mean Absolute Error |
MAPE | Mean Absolute Percentage Error |
R2 | Determination Coefficient |
GUI | Graphical User Interface |
References
- Nuțescu, C.I.; Mocanu, M. Test data generation using genetic algorithms and information content. U.P.B. Sci. Bull. Ser. C 2020, 2, 33–44. [Google Scholar]
- Nuțescu, C.I.; Mocanu, M. Creating a personality model using genetic algorithms, behavioral psychology, and a happiness dataset. U.P.B. Sci. Bull. Ser. C 2023, 85, 25–36. [Google Scholar]
- Al-Alwash, H.M.; Borcoci, E. Non-dominated sorting genetic optimisation for charging scheduling of electrical vehicles with time and cost awareness. U.P.B. Sci. Bull. Ser. C 2024, 1, 117–128. [Google Scholar]
- Choudhury, N. World wide web and its journey from web 1.0 to web 4.0. Int. J. Comput. Sci. Inf. Technol. 2014, 5, 8096–8100. [Google Scholar]
- MacKenzie, I.S. Human-Computer Interaction: An Empirical Research Perspective; Morgan Kaufmann: Burlington, MA, USA, 2024. [Google Scholar]
- Klašnja-Milićević, A.; Ivanović, M. E-learning personalization systems and sustainable education. Sustainability 2021, 13, 6713. [Google Scholar] [CrossRef]
- Kolb, D.A. Experiential Learning: Experience as the Source of Learning and Development; FT Press: Upper Saddle River, NJ, USA, 2014. [Google Scholar]
- Romero, C.; Ventura, S. Educational data mining and learning analytics: An updated survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2020, 10, e1355. [Google Scholar] [CrossRef]
- Birkeland, N.R.; Drange, E.M.D.; Tønnessen, E.S. Digital collaboration inside and outside educational systems. E-Learn. Digit. Media 2015, 12, 226–241. [Google Scholar]
- Langset, I.D.; Jacobsen, D.Y.; Haugsbakken, H. Digital professional development: Towards a collaborative learning approach for taking higher education into the digitalized age. Nord. J. Digit. Lit. 2018, 13, 24–39. [Google Scholar]
- Coverdale, A.; Lewthwaite, S.; Horton, S. Digital Accessibility Education in Context: Expert Perspectives on Building Capacity in Academia and the Workplace. ACM Trans. Access. Comput. 2024, 17, 1–21. [Google Scholar] [CrossRef]
- Seman, L.O.; Hausmann, R.; Bezerra, E.A. On the students’ perceptions of the knowledge formation when submitted to a Project-Based Learning environment using web applications. Comput. Educ. 2018, 117, 16–30. [Google Scholar] [CrossRef]
- Tiţa, V.; Necula, R. Trends In Educational Training for Agriculture In Olt County. Sci. Pap. Ser. Manag. Econ. Eng. Agric. Rural. Dev. 2015, 15, 357–364. [Google Scholar]
- Wang, F.; Wang, W.; Yang, H.; Pan, Q. A novel discrete differential evolution algorithm for computer-aided test-sheet composition problems. In Proceedings of the Information Engineering and Computer Science ICIECS 2009, Wuhan, China, 19–20 December 2009; pp. 1–4. [Google Scholar]
- Digital Science. Dimensions [Software], Free Version; Digital Science: London, UK, 2018; under licence agreement. Available online: https://www.dimensions.ai (accessed on 27 March 2024).
- Wong, D. Vosviewer. Tech. Serv. Q. 2018, 35, 219–220. [Google Scholar]
- Bradley, V.M. Learning Management System (LMS) use with online instruction. Int. J. Technol. Educ. 2021, 4, 68–92.
- Ortakand, B.R. Monitor and Predict Student Engagement and Retention Using Learning Management System (LMS). In Teaching and Learning in the Digital Era: Issues and Studies; World Scientific Publishing Co Pte Ltd.: Singapore, 2024; pp. 259–277.
- Johnston, E.; Olivas, G.; Steele, P.; Smith, C.; Bailey, L. Exploring pedagogical foundations of existing virtual reality educational applications: A content analysis study. J. Educ. Technol. Syst. 2018, 46, 414–439.
- Meyer, M.; Zosh, J.M.; McLaren, C.; Robb, M.; McCafferty, H.; Golinkoff, R.M.; Hirsh-Pasek, K.; Radesky, J. How educational are “educational” apps for young children? App store content analysis using the Four Pillars of Learning framework. J. Child. Media 2021, 15, 526–548.
- Hakimi, M.; Katebzadah, S.; Fazil, A.W. Comprehensive Insights into E-Learning in Contemporary Education: Analyzing Trends, Challenges, and Best Practices. J. Educ. Teach. Learn. (JETL) 2024, 6, 86–105.
- Hinchliffe, G. What is a significant educational experience? J. Philos. Educ. 2011, 45, 417–431.
- Jung, S.; Son, M.; Kim, C.I.; Rew, J.; Hwang, E. Video-based learning assistant scheme for sustainable education. New Rev. Hypermedia Multimed. 2019, 25, 161–181.
- Vinu, P.V.; Sherimon, P.C.; Krishnan, R. Towards pervasive mobile learning–the vision of 21st century. Procedia-Soc. Behav. Sci. 2011, 15, 3067–3073.
- Kabudi, T.; Pappas, I.; Olsen, D.H. AI-enabled adaptive learning systems: A systematic mapping of the literature. Comput. Educ. Artif. Intell. 2021, 2, 100017.
- O’Donnell, A.M.; Hmelo-Silver, C.E. Introduction: What is collaborative learning?: An overview. In The International Handbook of Collaborative Learning; Taylor and Francis: New York, NY, USA, 2013; pp. 1–15.
- Hamari, J.; Xi, N.; Legaki, Z.; Morschheuser, B. Gamification. In Proceedings of the Hawaii International Conference on System Sciences, Maui, HI, USA, 3–6 January 2023; p. 1105.
- Craig, S.L.; Smith, S.J.; Frey, B.B. Professional development with universal design for learning: Supporting teachers as learners to increase the implementation of UDL. Prof. Dev. Educ. 2019, 48, 22–37.
- Ketterlin-Geller, L.R. Knowing what all students know: Procedures for developing universal design for assessment. J. Technol. Learn. Assess. 2005, 4, 2.
- Clow, D. An overview of learning analytics. Teach. High. Educ. 2013, 18, 683–695.
- Bokander, L. Psychometric Assessments. In The Routledge Handbook of Second Language Acquisition and Individual Differences; Routledge: London, UK, 2022; pp. 454–465.
- Moses, T. A Review of Developments and Applications in Item Analysis. In Methodology of Educational Measurement and Assessment; Springer: Cham, Switzerland, 2017; pp. 19–46.
- Webb, M.; Gibson, D.; Forkosh-Baruch, A. Challenges for information technology supporting educational assessment. J. Comput. Assist. Learn. 2013, 29, 451–462.
- Ben-Simon, A.; Bennett, R.E. Towards more substantively meaningful automated essay scoring. J. Teach. Learn. Assess. 2007, 6, 4–44.
- Deane, P. On the relation between automated essay scoring and modern views of the writing construct. Assess. Writ. 2013, 18, 7–24.
- Gardner, J.; O’Leary, M.; Yuan, L. Artificial intelligence in educational assessment: ‘Breakthrough? Or buncombe and ballyhoo?’. J. Comput. Assist. Learn. 2021, 37, 1207–1216.
- Bidyut, D.; Mukta, M.; Santanu, P.; Arif, A.S. Multiple-choice question generation with auto-generated distractors for computer-assisted educational assessment. Multimed. Tools Appl. 2021, 80, 31907–31925.
- Dhawaleswar, R.C.; Sujan, K.S. Automatic Multiple Choice Question Generation From Text: A Survey. IEEE Trans. Learn. Technol. 2020, 13, 14–25.
- Zou, B.; Pengfei, L.; Liangming, P.; Ai, T.A. Automatic True/False Question Generation for Educational Purpose. In Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022); Association for Computational Linguistics: Seattle, WA, USA, 2022.
- Bidyut, D.; Majumder, M. Factual open cloze question generation for assessment of learner’s knowledge. Int. J. Educ. Technol. High. Educ. 2017, 14, 1–12.
- Malafeev, A. Automatic Generation of Text-Based Open Cloze Exercises. In Communications in Computer and Information Science; Springer: Cham, Switzerland, 2014; pp. 140–151.
- Husam, A.; Yllias, C.; Sadid, A.H. Automatic Question Generation from Sentences. In Actes de la 17e Conférence sur le Traitement Automatique des Langues Naturelles, Montréal, QC, Canada, 19–22 July 2010.
- Zheng, X. Automatic Question Generation from Freeform Text. Master’s Thesis, Nanyang Technological University, Singapore, 2022.
- Burrows, S.; Gurevych, I.; Stein, B. The eras and trends of automatic short answer grading. Int. J. Artif. Intell. Educ. 2014, 25, 60–117.
- Mohd, J.A.A.; Fatimah, D.A.; Abdul, A.A.G.; Ramlan, M. Automated marking system for Short Answer Examination (AMS-SAE). In Proceedings of the 2009 IEEE Symposium on Industrial Electronics & Applications, Xi’an, China, 25–27 May 2009.
- Zhong, V.; Shi, W.; Yih, W.T.; Zettlemoyer, L. RoMQA: A Benchmark for Robust, Multi-evidence Multi-answer Question Answering. arXiv 2022, arXiv:2210.14353.
- Sujan, K.S.; Dhawaleswar, R.C. Development of a practical system for computerized evaluation of descriptive answers of Middle School Level Students. Interact. Learn. Environ. 2019, 30, 215–228.
- Ganz, R. An individualistic approach to item analysis. In Readings in Mathematical Social Science; The MIT Press: Cambridge, MA, USA, 1966; pp. 89–108.
- Aqeel, K.H.; Aqeel, M.A.H. Testing & the impact of item analysis in improving students’ performance in end-of-year final exams. Engl. Linguist. Res. 2022, 11, 30.
- Novick, M.R. The axioms and principal results of classical test theory. J. Math. Psychol. 1966, 3, 1–18.
- Weiss, D.J.; Yoes, M.E. Item response theory. In Advances in Educational and Psychological Testing: Theory and Applications; Springer: Dordrecht, The Netherlands, 1991; pp. 69–95.
- Hambleton, R.K.; Jones, R.W. An NCME instructional module on comparison of classical test theory and item response theory and their applications to test development. Educ. Meas. Issues Pract. 1993, 12, 38–47.
- Abdelrahman, G.; Wang, Q.; Nunes, B. Knowledge Tracing: A Survey. ACM Comput. Surv. 2023, 55, 1–37.
- Popescu, D.A.; Nijloveanu, D.; Bold, N. Approaches on Generating Optimized Sequences of Items Used in Assessment. In Proceedings of GeNeDis 2016: Computational Biology and Bioinformatics; Springer International Publishing: Cham, Switzerland, 2017; pp. 73–87.
- Popescu, D.A.; Bold, N.; Popescu, A.I. The generation of tests of knowledge check using genetic algorithms. In Soft Computing Applications: Proceedings of the 7th International Workshop Soft Computing Applications (SOFA 2016), Arad, Romania, 24–26 August 2016; Springer International Publishing: Cham, Switzerland, 2018; Volume 2, pp. 28–35.
- Domşa, O.; Bold, N. Reusing Assessments Tests by Generating Arborescent Test Groups Using a Genetic Algorithm. Int. J. Inf. Commun. Eng. 2017, 10, 1434–1439.
- Popescu, D.A.; Bold, N.; Domşa, O. Generating assessment tests with restrictions using genetic algorithms. In Proceedings of the 2016 12th IEEE International Conference on Control and Automation (ICCA), Kathmandu, Nepal, 1–3 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 696–700.
- Popescu, D.A.; Constantin, D.; Bold, N. Generating assessment tests using image-based items. In Proceedings of the 2023 IEEE International Conference on Data Mining Workshops (ICDMW), Shanghai, China, 1–4 December 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 379–385.
- Popescu, D.A.; Bold, N. Learning Using Connections Between Concepts. eLearning Softw. Educ. 2017, 1, 224–229.
- Popescu, D.A.; Nijloveanu, D.; Bold, N. Generator of Tests for Learning Check in Case of Courses that Use Learning Blocks. In Proceedings of the Methodologies and Intelligent Systems for Technology Enhanced Learning, 8th International Conference, Toledo, Spain, 20–22 June 2019; pp. 239–244.
- Popescu, D.A.; Nijloveanu, D.; Bold, N. Puzzle Learning Trail Generation Using Learning Blocks. In Soft Computing Applications: Proceedings of the 8th International Workshop Soft Computing Applications (SOFA 2018); Springer: Cham, Switzerland, 2021; Volume I, pp. 385–391.
- Popescu, D.A.; Nicolae, D. Determining the Similarity of Two Web Applications Using the Edit Distance. In Soft Computing Applications: Proceedings of the 6th International Workshop Soft Computing Applications (SOFA 2014), Timisoara, Romania, 24–26 July 2014; Advances in Intelligent Systems and Computing; Balas, V.E., Jain, L.C., Kovacevic, B.D., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; Volume 356, pp. 681–690.
- Popescu, D.A.; Domsa, O.; Bold, N. The Determination of the Learning Performance based on Assessment Item Analysis. In Proceedings of the CSW@WSDM, Singapore, 27 February–3 March 2023; pp. 59–76.
- Popescu, D.A.; Cristea, D.M.; Bold, N. On an Integrated Assessment for the Students Within an Academic Consortium. In Proceedings of the International Conference on Intelligent Tutoring Systems, Corfu, Greece, 2–5 June 2023; Springer Nature Switzerland: Cham, Switzerland, 2023; pp. 518–529.
Rank | Term | Relevance Score | Occurrences |
---|---|---|---|
54 | older adult | 4.2628 | 130 |
82 | teaching | 3.0753 | 423 |
46 | learner | 2.9500 | 167 |
81 | teacher | 2.7435 | 377 |
80 | systematic review | 2.4310 | 159 |
19 | digital health | 2.4148 | 134 |
60 | patient | 2.3371 | 319 |
64 | population | 2.3194 | 256 |
85 | university | 2.2092 | 307 |
No. | Educational Process Phase | Methods | Instruments |
---|---|---|---|
1 | Teaching | Integration of educational information and processes | LMS |
 | | Content generation, processing, storage and presentation | Educational apps |
 | | Content management | LMS |
 | | Experience development | LMS, Educational apps |
 | | Video-based content, webinars, videoconferences | Zoom, Microsoft Teams, Google Meet, LMS videoconferencing apps |
2 | Learning | Pervasive learning | Mobile devices |
 | | Adaptive learning systems | AI-based tools, ML techniques |
 | | Collaborative learning | Cloud apps |
 | | Gamification | Educational apps |
3 | Assessment | Universal Design for Assessment | LMS |
 | | Question Generation | Natural Language Processing, ML techniques |
 | | Answer Evaluation | Natural Language Processing, ML techniques |
 | | Item Analysis | ML techniques, statistical instruments |
Indicator | Description |
---|---|
Knowledge Gain (KG) | Measures the improvement in knowledge by comparing pre-test and post-test scores. |
Learning Efficiency (LE) | Assesses how quickly and effectively learners reach educational objectives. |
Error Reduction Rate (ERR) | Evaluates the decrease in repeated errors throughout the learning process. |
Engagement Level (EL) | Reflects the level of student interaction with content, such as time spent and activities accessed. |
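The four indicators above can be operationalized in several ways; the paper does not spell out closed-form formulas, so the sketch below uses common conventions (normalized gain for KG, score per unit time for LE, a simple weighted mix for EL) purely as illustrative assumptions.

```python
# Illustrative computation of the four indicators (KG, LE, ERR, EL).
# The formulas below are common conventions, not the authors' own definitions.

def knowledge_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Normalized gain: improvement relative to the maximum possible gain."""
    return (post - pre) / (max_score - pre) if max_score > pre else 0.0

def learning_efficiency(score: float, time_spent_min: float) -> float:
    """Score achieved per minute of study time."""
    return score / time_spent_min if time_spent_min > 0 else 0.0

def error_reduction_rate(errors_before: int, errors_after: int) -> float:
    """Relative decrease in repeated errors over the learning process."""
    return (errors_before - errors_after) / errors_before if errors_before else 0.0

def engagement_level(time_spent_min: float, activities_accessed: int,
                     time_weight: float = 0.5) -> float:
    """Weighted mix of time on task and number of accessed activities."""
    return time_weight * time_spent_min + (1 - time_weight) * activities_accessed

print(round(knowledge_gain(pre=40, post=70), 2))   # 0.5
print(round(error_reduction_rate(10, 4), 2))       # 0.6
```

Any of these could be swapped for the definitions the authors actually use once those are fixed; the point is that each indicator reduces to a small function over logged learner data.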
B | N | S | I | O |
---|---|---|---|---|
B1 | - | 2 | - | 1,2 |
B2 | - | 2 | 1 | 2, 3 |
B3 | 2 | 3 | 2, 3 | 2, 3 |
B4 | 2 | 3 | 3 | 3, 4 |
B5 | 3 | 4 | 2, 3, 4 | 4, 5 |
B6 | 3 | 4 | 4, 5 | 5, 6 |
B7 | 3 | 4 | 5 | 5, 6 |
B8 | 4 | 5 | 5, 6 | 7, 8, 9 |
B9 | 4 | 5 | 7, 8 | 10, 11, 12 |
B10 | 4 | 5 | 10, 11, 12 | 12, 13, 14, 15 |
L | |||||
---|---|---|---|---|---|
1 | B1 | B2 | - | - | - |
2 | - | B3 | B4 | - | - |
3 | - | B5 | B6 | B7 | - |
4 | - | - | - | B8 | - |
5 | - | - | - | B9 | B10 |
Level | ID | Keywords | NCK * |
---|---|---|---|
1 | 35 | block, path, cloud, data | 1 |
2 | 87 | array, process, sync, bit, concat | 1 |
3 | 9 | bfs, stack, object | 1 |
4 | 18 | loop, merge | 1 |
5 | 1 | logic, sync, bfs, security | 1 |
6 | 16 | write, lock, bool, array | 1 |
7 | 19 | queue, char, graph | 0 |
8 | 22 | process, path, modulo | 1 |
9 | 12 | node, array, write, sort | 1 |
ID | Keywords | Difficulty | Type |
---|---|---|---|
75 | algorithm, python | 0.96 | m |
102 | python, data | 0.34 | m |
466 | data, python | 0.21 | e |
705 | python, data | 0.43 | m |
663 | data, algorithm | 0.44 | m |
369 | python, algorithm | 0.68 | m |
583 | python, data | 0.31 | m |
46 | algorithm | 0.43 | s |
118 | data, algorithm | 0.09 | s |
272 | python, algorithm | 0.28 | s |
Run | Obtained Difficulty | Initial Fitness Value | Final Fitness Value | Runtime (s) | Population Variation |
---|---|---|---|---|---|
Run 1 | 0.383 | 0.584 | 0.584 | 12.63 | 463.505 |
Run 2 | 0.402 | 0.585 | 0.589 | 11.85 | 457.852 |
Run 3 | 0.331 | 0.585 | 0.583 | 13.81 | 442.229 |
Run 4 | 0.374 | 0.587 | 0.587 | 13.41 | 442.998 |
Run 5 | 0.357 | 0.584 | 0.586 | 13.35 | 438.508 |
Run 6 | 0.367 | 0.586 | 0.590 | 15.13 | 438.234 |
Run 7 | 0.363 | 0.587 | 0.586 | 12.12 | 449.449 |
Run 8 | 0.372 | 0.589 | 0.599 | 21.35 | 428.399 |
Run 9 | 0.333 | 0.588 | 0.586 | 15.37 | 427.699 |
Run 10 | 0.370 | 0.589 | 0.589 | 15.32 | 439.631 |
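The runs above come from a genetic algorithm that assembles tests from an item bank while targeting a difficulty level. A minimal sketch of that kind of GA is shown below; the bank, population size, and fitness function are illustrative assumptions and do not reproduce the paper's actual fitness values (around 0.58 in the table).

```python
import random

# Minimal GA sketch: select K items from a bank of difficulty values so that
# the mean difficulty of the test approaches a target. Illustrative only.
random.seed(0)
BANK = [round(random.random(), 2) for _ in range(200)]  # item difficulties
K, TARGET, POP, GENS = 10, 0.4, 50, 100

def fitness(chromo):
    # Deviation of the test's mean difficulty from the target (lower is better).
    return abs(sum(BANK[i] for i in chromo) / K - TARGET)

def make():
    return random.sample(range(len(BANK)), K)

def crossover(a, b):
    # Combine halves of two parents, then repair duplicates with fresh items.
    child = list(dict.fromkeys(a[:K // 2] + b))[:K]
    while len(child) < K:
        i = random.randrange(len(BANK))
        if i not in child:
            child.append(i)
    return child

def mutate(chromo):
    j = random.randrange(K)
    chromo[j] = random.choice([i for i in range(len(BANK)) if i not in chromo])
    return chromo

pop = [make() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    elite = pop[: POP // 2]                      # keep the best half
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in elite]

best = min(pop, key=fitness)
print(round(sum(BANK[i] for i in best) / K, 3))  # mean difficulty near 0.4
```

The real system adds restrictions (keywords, item types, equity between students), but the selection/crossover/mutation loop has the same shape.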
N | K | NG | Runtime (s) | Runtime for Previous Algorithm (s) |
---|---|---|---|---|
1000 | 20 | 500 | 6.655214961 | 8.098882903 |
2000 | 20 | 500 | 6.682372113 | 8.346786351 |
3000 | 20 | 500 | 6.869802016 | 8.200428548 |
4000 | 20 | 500 | 6.568096433 | 8.857499464 |
5000 | 20 | 500 | 6.759531288 | 9.293001839 |
2000 | 30 | 500 | 7.124899292 | 10.177044323 |
2000 | 50 | 500 | 7.311507564 | 9.689560915 |
2000 | 70 | 500 | 7.738539504 | 12.886415575 |
2000 | 90 | 500 | 8.295781972 | 12.904098568 |
2000 | 100 | 500 | 8.626971045 | 12.422781658 |
2000 | 20 | 600 | 8.002793317 | 10.171294628 |
2000 | 20 | 800 | 10.636422365 | 13.425690146 |
2000 | 20 | 1000 | 13.094546275 | 16.932125198 |
2000 | 20 | 1200 | 15.762861716 | 19.769909546 |
2000 | 20 | 1400 | 18.808763568 | 23.090812005 |
Item | Score | dd_q | sd_q | d_q | pbs_q | m_q | l_q |
---|---|---|---|---|---|---|---|
T1Q2 | 4 | 0.80 | 0.40 | 0.6 | 0.06 | 0.20 | 4 |
T2Q4 | 6 | 0.70 | 0.46 | 0.8 | 0.28 | 0.30 | 6 |
T2Q5 | 5 | 0.75 | 0.43 | 0.8 | 0.55 | 0.25 | 5 |
T4Q2 | 5 | 0.75 | 0.43 | 0.20 | −0.22 | 0.20 | 5 |
T4Q5 | 4 | 0.80 | 0.40 | 0.4 | 0.11 | 0.20 | 4 |
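The item statistics above follow classical test theory. Assuming (since the column abbreviations are not expanded here) that d_q is the difficulty index, the proportion of the maximum score earned, and pbs_q the point-biserial correlation between item and total scores, both reduce to short formulas:

```python
import statistics

# Classical item-analysis sketch. The mapping of d_q to the difficulty index
# and pbs_q to the point-biserial correlation is an assumption, not stated
# explicitly in the table.

def difficulty_index(item_scores, max_score):
    """Proportion of the maximum item score earned on average."""
    return statistics.mean(item_scores) / max_score

def point_biserial(item_scores, total_scores):
    """Pearson correlation between item score and total test score."""
    mi, mt = statistics.mean(item_scores), statistics.mean(total_scores)
    cov = sum((i - mi) * (t - mt) for i, t in zip(item_scores, total_scores))
    si = sum((i - mi) ** 2 for i in item_scores) ** 0.5
    st = sum((t - mt) ** 2 for t in total_scores) ** 0.5
    return cov / (si * st) if si and st else 0.0

# Hypothetical responses for one 4-point item across five students:
item = [4, 2, 3, 1, 4]
total = [38, 20, 30, 15, 40]
print(round(difficulty_index(item, 4), 2))   # 0.7
print(round(point_biserial(item, total), 2)) # 0.99
```

A negative point-biserial, like the −0.22 for T4Q2 above, flags an item where weaker students outperform stronger ones, which is why such items are candidates for revision or removal.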
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Popescu, D.A.; Bold, N.; Stefanidakis, M. A Systematic Model of an Adaptive Teaching, Learning and Assessment Environment Designed Using Genetic Algorithms. Appl. Sci. 2025, 15, 4039. https://doi.org/10.3390/app15074039