Article

Beyond Traditional Assessment: A Fuzzy Logic-Infused Hybrid Approach to Equitable Proficiency Evaluation via Online Practice Tests

Todorka Glushkova, Vanya Ivanova and Boyan Zlatanov
1 Department of Computer Technology, Faculty of Mathematics and Informatics, University of Plovdiv Paisii Hilendarski, 4000 Plovdiv, Bulgaria
2 Department of Computer Systems, Faculty of Mathematics and Informatics, University of Plovdiv Paisii Hilendarski, 4000 Plovdiv, Bulgaria
3 Department of Mathematical Analysis, Faculty of Mathematics and Informatics, University of Plovdiv Paisii Hilendarski, 4000 Plovdiv, Bulgaria
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(3), 371; https://doi.org/10.3390/math12030371
Submission received: 23 December 2023 / Revised: 15 January 2024 / Accepted: 22 January 2024 / Published: 24 January 2024

Abstract: This article presents a hybrid approach to assessing students’ foreign language proficiency in a cyber–physical educational environment. It focuses on the advantages of the integrated assessment of student knowledge by considering the impact of automatic assessment, learners’ independent work, and their achievements to date. An assessment approach is described using the mathematical theory of fuzzy functions, which are employed to ensure the fair evaluation of students. The largest possible number of students whose reevaluation of test results will not affect the overall performance of the student group is automatically determined. The study also models the assessment process in the cyber–physical educational environment through the formal semantics of the calculus of context-aware ambients (CCAs).

1. Introduction

Student assessment is an important aspect of the educational process that aims to measure the knowledge, skills, and achievements of students in the course of their studies. The development of modern education, along with traditional learning, also requires the use of new assessment models, which must provide flexibility in terms of the ways and forms of education, personalization, and the possibility of taking into account the individual characteristics of each student. The problem is becoming even more relevant as the share of formal education shrinks in comparison with hybrid forms and the use of virtual educational platforms [1].
Cyber–physical educational environments provide integration of virtual and physical components in the educational process. This model of education is used in various contexts, including in educational institutions, vocational training, distance learning, and for nonformal, lifelong learning. Motivated by the advantages that these environments provide, teams from the Faculty of Mathematics and Informatics (FMI) of the University of Plovdiv are developing prototypes of cyber–physical educational platforms intended to train both university students and nonlearners [2]. Assessment is a key feature of any such platform, and the difficulties are related to the hybrid nature of learning—face-to-face, distance learning, self-study, project-based, etc.
This article presents a model for assessing students’ knowledge in a cyber–physical educational environment by using the mathematical theory of fuzzy sets. Modeling of processes and communication between intelligent components is realized through the formal semantics of the calculus of context-aware ambients (CCAs). To test the proposed model, data were used from the assessment of first-year students from FMI of the University of Plovdiv in their English language course.
A distinctive aspect of our work lies in the assessment approach utilizing fuzzy functions. Notably, our paper introduces a novel methodology by using a cyber–physical educational platform to automatically determine the largest feasible number of students whose work can undergo reevaluation through the application of fuzzy functions. This allows for the adjustment of individual grades, either upward or downward, without impacting the overall result of the entire student cohort. In other words, the average score of the entire group of students remains unaltered despite these individual adjustments. This feature sets our work apart and contributes significantly to the existing body of knowledge in the field.
The structure of this article is as follows: In the second section, the motivation and research related to the topic are discussed. The third part considers the possibilities of applying the theory of fuzzy sets to evaluate students and the CCA modeling of the process of assessment, and the fourth part comments on the findings of the research. The conclusion summarizes the results and points out perspectives for future studies.

2. Motivation and Related Works

Based on the definition of the National Science Foundation (NSF) [3], cyber–physical systems (CPSs) provide integration between computational, network, and physical processes. Embedded digital components and networks monitor and control physical processes, supplying continuous feedback and control. CPSs integrate the dynamics of physical processes with those of software and networks, providing opportunities for modeling, design, and analysis techniques. These systems must ensure the dynamic interaction with objects from the physical and virtual worlds, which requires the use of autonomous intelligent components. Thus, cyber–physical systems move into cyber–physical spaces, in which users are placed at the center of social interactions. In this sense, the cyber–physical–social space (CPSS) is a fusion of the physical space, cyberspace, and the social space. The evolution from the CPS to the CPSS [4] is a long process that involves solving challenges and problems of a different nature. These spaces have the potential to be adapted in all spheres of the modern world: in a "smart city", agriculture, animal husbandry, transport, medicine, tourism, and, of course, education.
The creation of educational cyber–physical platforms is a challenge but also a necessity [5]. In these spaces, continuous blended (face-to-face and virtual) learning is possible, tailored to the personal characteristics of each student. An important component of these environments is the development of an appropriate test platform that provides opportunities for assessment and self-assessment of learners [6].
The Virtual–Physical Space (ViPS) is being developed by a team of the Distributed e-Learning Center (DeLC) laboratory at the University of Plovdiv as a reference IoT-based architecture that can be adapted to different CPSS applications in different application areas. In the field of higher education, the Virtual Education Space (VES) platform is being developed, which builds on the DeLC educational platform. DeLC has been actively used in university education since 2004 and provides e-learning services and SCORM-based learning content [7]. Some of the strengths of the testing platform in DeLC are that it is user-friendly and employs metadata, which allows tests to be compiled based on different criteria, enables the use of photos and other visual components, provides an option to preview the test before being administered to students, and offers the option to set a time limit and different validity periods for each test.
An alternative platform used for e-tests at the Faculty of Mathematics and Informatics is the Distributed Platform for e-Learning (DisPeL), whose key elements are administration of the learning process and adaptability of the learning content [8]. DisPeL enhances the educational process by providing the following electronic services:
  • Automation of the administrative process;
  • Maintaining an adaptive learning process;
  • Online review of student progress and assessment;
  • Supporting conventional testing and evaluation.
Due to its advantages and characteristics, the DisPeL test system has been successfully implemented in several universities in Bulgaria. By choosing the appropriate platform, the cyber–physical space allows for the automatic processing of the learners’ scores, and through fuzzy sets, corrections can be made, leading to a fairer assessment. We have further illustrated that there is a limit on the number of students being reevaluated, which ensures that after their grades are corrected, the mean and the variance of the group do not change in a statistically significant way. If this set of revised scores is enlarged, not only does a different distribution result, but the corrected scores may no longer correspond to the actual knowledge of the learners.

3. Materials and Methods

The study was carried out in the context of studies at the FMI of the University of Plovdiv. All students study English as a foreign language for 160 or 200 academic hours in the first academic year, depending on their major. Learners have seminars in English once a week, and they are taught in groups of about 20 learners of a similar level of knowledge and skills in accordance with the Common European Framework of Reference for Languages (CEFR) [9]. Since grading is done through continuous assessment, besides the placement test, students at FMI sit for a midterm and a final test. Their final grades in English are formed at the end of the course on a six-point scale (2 is the lowest grade and corresponds to fail, and 6 is the highest grade and corresponds to excellent).

3.1. Application of the Fuzzy Set Theory in Student Assessment

The standard evaluation of online practice tests at FMI of the University of Plovdiv is conducted by using the following grading system:
  • An excellent (6) grade is awarded for test scores from 87% to 100%;
  • A very good (5) grade, from 75% to 86%;
  • A good (4) grade, from 62% to 74%;
  • A satisfactory (3) grade, from 50% to 61%;
  • A poor (2) grade, for 49% and below.
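Expressed as code, this scale is a simple threshold function. The following is a minimal Python sketch; the function name and the percentage-based interface are ours, not part of the FMI platform:

```python
def bulgarian_grade(percent: float) -> int:
    """Map a test score percentage to the Bulgarian six-point scale
    used for online practice tests (2 = fail, 6 = excellent)."""
    if percent >= 87:
        return 6  # excellent
    if percent >= 75:
        return 5  # very good
    if percent >= 62:
        return 4  # good
    if percent >= 50:
        return 3  # satisfactory
    return 2      # poor
```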
However, the fairness of marginal, “borderline” scores that determine whether a learner is assigned the higher or the lower grade can be viewed as questionable. It can be considered unjust for one learner to pass a test with 50 points out of 100 and for another to fail the same exam with only a single point less. In an attempt to make students’ boundary grades more equitable, we have employed a fuzzy-set technique to change the grades of the borderline cases in two examination tests. Similar efforts have been described in [10,11], where fuzzy logic and fuzzy functions were applied to evaluate learners’ tests with a view to allocating them a more objective grade.
The first exam considered was the final test taken by 78 students at the end of the language course. It consisted of 60 closed questions and one open question, with a maximum score of 80 points, 20 of which were awarded for the open question. When test questions measure a similar capability or expertise, they yield a high internal consistency reliability. If a test comprises various types of questions evaluating different kinds of capabilities and knowledge, Cronbach’s coefficient tends to be smaller as a consequence of the dissimilarity of the questions in terms of layout and content. For this reason, we initially needed to decide what weight to assign to the open question to guarantee the reliability of the test; an erroneous choice can often prove discriminatory. Cronbach’s alpha coefficient can be estimated by means of the following formula:
$$\alpha = \frac{k}{k-1}\left(1 - \frac{\text{Sum of Item Variances}}{\text{Test Variance}}\right) \quad (1)$$
The evaluated test has k = 61 items, with a sum of item variances of 50.91 and a test variance of 179.45. Inserting these numbers in (1), we obtain α = 0.72, which means that the reliability is merely acceptable. The largest variance is present in the open question; consequently, we search for a γ > 0 by which to multiply the scores of the open-ended question so that α becomes larger. If we choose γ = 0.2, we obtain a sum of item variances of 12.92 and a test variance of 114.99. Inserting the new numbers in (1), we obtain α = 0.902; therefore, the reliability is excellent.
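This calculation is straightforward to reproduce. The sketch below computes Cronbach’s alpha from a (students × items) score matrix and shows the effect of rescaling the open question by γ; the data layout and function names are our assumptions, since we do not have access to the raw item scores:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (students x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / test variance)."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    test_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_vars / test_var)

def alpha_with_gamma(items: np.ndarray, gamma: float) -> float:
    """Alpha after multiplying the open question (assumed to be the
    last column) by the scaling coefficient gamma."""
    scaled = items.astype(float).copy()
    scaled[:, -1] *= gamma
    return cronbach_alpha(scaled)
```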
Thus, the maximum score that a learner can receive is 64 (60 for the closed questions + 20 × 0.2 for the open one); i.e., the maximum points for the closed questions are 60, and the maximum score for the open one is 4. With the standard grading of online practice tests at FMI, the results should be interpreted as follows: an excellent (6) grade is from 56 to 64 points, a very good (5) grade is from 48 to 55, a good (4) grade is from 40 to 47, a satisfactory (3) grade is from 32 to 39, and a poor (2) grade is 31 points and below.
The second test is a midterm test, taken approximately in the middle of the language course. It was administered to 36 students altogether. The test consists of 70 closed questions, with 1 point awarded for a correct answer, and 3 open questions, with a maximum of 6 points for each. Thus, the overall maximum test score is 88 points, and the highest possible score for the open items is 18. Cronbach’s alpha was 0.82, meaning that the reliability of the test is good. There were no great differences in the variances of the different types of questions, so searching for a scaling coefficient to increase Cronbach’s alpha, as was done for the first test, is not required. At first glance, the results of the second test appear to be worse when compared with the first one because not a single test-taker obtained the maximum number of points. Solely to simplify the calculations and the notations, and in order to use one and the same functions in the fuzzy-set technique, we scaled the results of the second test. We scaled the points so that the maximum score obtained by a student in the second test would represent the maximum points of the test; i.e., the highest number of points received by the students was 80; therefore, we scaled the results of all students by the factor 64/80. Since the scores from the open questions are used for fuzzifying some of the test results, we also scaled the open-question results (maximum 18 points) in the second test by a factor of 20/18 in order to be able to use the same functions in the calculations.
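For example, a student with 72 total points and 13 open-question points on the second test would enter the subsequent calculations with 72 × 64/80 = 57.6 total points and 13 × 20/18 ≈ 14.4 open-question points.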
Fuzzy logic, fuzzy sets, and fuzzy functions have been widely used since first being introduced by Zadeh [11]. We would like to highlight several works connected with our investigation in e-learning and e-testing: [12,13,14,15].
A classical technique for the reevaluation of test results with fuzzy sets is to consider some borderline grades that need to be reassessed [12,13]. However, we present a different approach, in which we search for the maximum number of borderline grades that can be fuzzified without changing the statistical distribution of the overall grades. We considered five functions that represent fuzzy membership in the sets of marks. We would like to mention that in recent years there has been a large increase in the use of fuzzy logic in the evaluation of students’ performance [16,17,18,19,20,21,22,23,24,25,26].
Let f : ℝ³ → ℝ and g : ℝ³ → ℝ be two functions. We specify five functions, which denote the bell-like functions of fuzzy membership to the sets of marks, as follows:

$$\mu_{Poor}(x) = \begin{cases} 1, & x < x_1 \\ f(x_1, x_2, x), & x_1 \le x < x_2 \\ 0, & x_2 \le x \le x_9 \end{cases} \qquad \mu_{Satis.}(x) = \begin{cases} 0, & x_0 \le x < x_1 \\ g(x_1, x_2, x), & x_1 \le x < x_2 \\ 1, & x_2 \le x < x_3 \\ f(x_3, x_4, x), & x_3 \le x < x_4 \\ 0, & x_4 \le x \le x_9 \end{cases}$$

$$\mu_{Good}(x) = \begin{cases} 0, & x_0 \le x < x_3 \\ g(x_3, x_4, x), & x_3 \le x < x_4 \\ 1, & x_4 \le x < x_5 \\ f(x_5, x_6, x), & x_5 \le x < x_6 \\ 0, & x_6 \le x \le x_9 \end{cases} \qquad \mu_{VeryGood}(x) = \begin{cases} 0, & x_0 \le x < x_5 \\ g(x_5, x_6, x), & x_5 \le x < x_6 \\ 1, & x_6 \le x < x_7 \\ f(x_7, x_8, x), & x_7 \le x < x_8 \\ 0, & x_8 \le x \le x_9 \end{cases}$$

$$\mu_{Excellent}(x) = \begin{cases} 0, & x_0 \le x < x_7 \\ g(x_7, x_8, x), & x_7 \le x < x_8 \\ 1, & x_8 \le x \le x_9 \end{cases}$$
We can consider the following functions:
$$f(a, b, x) = \frac{1}{2}\cos\left(\frac{a\pi}{a-b} - \frac{\pi x}{a-b}\right) + \frac{1}{2}, \qquad g(a, b, x) = \frac{1}{2}\cos\left(\frac{(2a-b)\pi}{a-b} - \frac{\pi x}{a-b}\right) + \frac{1}{2},$$
from which we obtain the bell-shaped fuzzy membership functions μ_Poor, μ_Satis., μ_Good, μ_VeryGood, and μ_Excellent, denoted by μ_P, μ_S, μ_G, μ_V, and μ_E, respectively.

3.2. The Test Construction

The test consists of several sections, each aimed at checking specific knowledge.
Criterion I (reproduction of information) is the lowest level in the cognitive domain; hence, multiple-choice questions (MCQs) are employed most frequently to determine whether students correctly remember the form of certain expressions. The following is an example:
He kept explaining his point of view until he was ........... in the face, but the inspectors were not impressed.
(a) red
(b) blue
(c) black
(d) pink
The test questions related to criterion II usually incorporate multiple choice, true/false, or closed questions to match words or expressions with their definitions, synonyms, and antonyms. The following is an example:
Choose the word which is a SYNONYM (a word with a similar meaning) to the capitalized word in the sentence:
The company was FOUNDED by two partners.
(a) set up
(b) discovered
(c) reformed
(d) destroyed
The test questions relating to criterion III (detection of errors in various contexts) are usually multiple-choice items asking students to choose the part of the sentence which contains a spelling or grammar mistake, or true/false items to determine whether a sentence is free of lexical or grammatical errors. The following is an example:
Choose the part of the sentence which contains a spelling or grammar mistake.
The money doesn’t smell.
(a) the
(b) money
(c) doesn’t
(d) smell
Criterion IV (analysis of the lexical and grammatical items of a sentence) most often comprises MCQs to select the sentence which most accurately explains the meaning of another one or to choose the correct grammatical form of a verb, as well as short-answer test items to write the most appropriate word(s)/expression(s) or grammatical form in a sentence. For instance,
Select the sentence which most accurately explains the meaning of the given one.
He was considered a nut and was ridiculed for standing out.
(a) He was believed to be crazy so everyone mocked him because he wasn’t sitting down like the others.
(b) The others made fun of him because he liked nuts more than anything else.
(c) He wasn’t like the others so people thought he was out of his mind and praised him.
(d) People thought he was a lunatic and laughed at him because he was different.
Finally, the test questions with reference to criterion V (text creation) usually instruct students to compose a text with a limited number of words to demonstrate specific knowledge or skills such as writing an email or a review of a product that they have bought, to explain and illustrate the meaning and use of idioms, etc. An example would be the following task:
WRITING: Your friend Kate had asked you to look after her cat while she was abroad but the cat disappeared. Write a short email (50–70 words) to her in which you:
  • Include a greeting;
  • Apologize and say that you lost her cat;
  • Explain how exactly it happened;
  • Say what you have done about it;
  • Close your email.

3.3. Illustration of the Fuzzy Logic Usage in Recalculating Students’ Marks

We illustrate the functions defined above in the case when x_0 = 0, x_1 = 29.5, x_2 = 34.5, x_3 = 37.5, x_4 = 42.5, x_5 = 45.5, x_6 = 50.5, x_7 = 53.5, x_8 = 58.5, and x_9 = 64 (Figure 1).
Therefore, a student with 41 overall points belongs to the set of satisfactory grades with a degree of 0.21, to the set of good grades with a degree of 0.79, and to the other sets of poor, very good, or excellent with a degree of 0. A student with an overall score of 44 points belongs to the set of good grades with a degree of 1 and to the other sets of poor, satisfactory, very good, or excellent with a degree of 0.
If the points a learner has received on a test do not belong definitively to a given set, we need an additional criterion, depending on the learner’s result, in order to decide which grade to assign, and that will be the learner’s result on the open items. We again divide the learners’ marks into 5 groups: poor (from y_0 to y_1 points), satisfactory (from y_2 to y_3), good (from y_4 to y_5), very good (from y_6 to y_7), and excellent (from y_8 to y_9), and we also define their membership functions, which we denote by ν_P, ν_S, ν_G, ν_V, and ν_E.
We illustrate the corresponding functions in the case y_0 = 0, y_1 = 6.9, y_2 = 9.1, y_3 = 10.4, y_4 = 12.6, y_5 = 13.4, y_6 = 15.6, y_7 = 16.4, y_8 = 18.6, and y_9 = 20 (Figure 2).
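The membership degrees quoted above are easy to verify numerically. The following Python sketch implements f, g, and two of the μ functions with the Figure 1 breakpoints and reproduces the degrees for a student with 41 points; the code is our reconstruction, not the authors’ Maple worksheet:

```python
import math

def f(a: float, b: float, x: float) -> float:
    """Falling cosine transition: f(a, b, a) = 1 and f(a, b, b) = 0."""
    return math.cos(a * math.pi / (a - b) - math.pi * x / (a - b)) / 2 + 1 / 2

def g(a: float, b: float, x: float) -> float:
    """Rising cosine transition: g(a, b, a) = 0 and g(a, b, b) = 1."""
    return math.cos((2 * a - b) * math.pi / (a - b) - math.pi * x / (a - b)) / 2 + 1 / 2

# Breakpoints from Figure 1 (final test, 64-point scale).
x1, x2, x3, x4, x5, x6 = 29.5, 34.5, 37.5, 42.5, 45.5, 50.5

def mu_satis(x: float) -> float:
    if x < x1: return 0.0
    if x < x2: return g(x1, x2, x)
    if x < x3: return 1.0
    if x < x4: return f(x3, x4, x)
    return 0.0

def mu_good(x: float) -> float:
    if x < x3: return 0.0
    if x < x4: return g(x3, x4, x)
    if x < x5: return 1.0
    if x < x6: return f(x5, x6, x)
    return 0.0

print(round(mu_satis(41), 2), round(mu_good(41), 2))  # 0.21 0.79, as in the text
```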
The norms for executing the set operations of intersection (AND), union (OR), and complement (NOT) that concern us the most are given below.
For intersection, we look at the degrees of membership in each set and pick the lower of the two as follows: μ_{A∩B} = min(μ_A, μ_B) (Figure 3).
For union, we inspect the degrees of membership in each set and choose the larger of the two as follows: μ_{A∪B} = max(μ_A, μ_B) (Figure 4).
The fuzzy associative matrix (Table 1) provides a convenient way to directly combine the input relations in order to obtain the fuzzified output results [10,12]. The input values for the scores of the open-ended items are in the top row of the matrix, and the input values for the total results of the test are in the leftmost column. We have used the conventional Bulgarian grading scale.
Let us review a learner with a total result of 49 points and a mark on the open-ended item of 19 points. He or she belongs to the set of very good marks with a degree of μ_VeryGood(49) = 0.79 and to the set of good grades with a degree of μ_Good(49) = 0.21. Normally, he/she would be assessed as very good (5). Nevertheless, the intersection of the two marks, the total points together with the points of the open question, indicates the following: according to the matrix in Table 1, he or she belongs to the set μ_VeryGood ∧ ν_Excellent with a degree of 0.79, to the set μ_Good ∧ ν_Excellent with a degree of 0.21, to the set μ_VeryGood ∧ ν_VeryGood with a degree of 0.005, and to the set μ_Good ∧ ν_VeryGood with a degree of 0.005. Consequently, we can assign him or her excellent (6).
In accordance with [12] and [10], we have to recalculate the mark of each learner whose test score does not belong definitively to a given set. For this purpose, we can consider the table in which the function F returns the minimums of μ and ν. The highest membership degree obtained from the table determines the corrected mark from the matrix (Table 2).
Now, by way of illustration, let a student have a total test score of 57 points and an open-question result of 18 points (p = 57 and q = 18). As shown in Table 3, after the fuzzification, the student will be marked with excellent (6), which coincides with the traditional evaluation.
If we analyze another learner that has received 42 points (which corresponds to good (4) in the traditional scoring system) and 17 points for the open-ended item, it is seen from Table 4 that after the correction, this particular learner should get a higher mark, namely very good (5).
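The lookup itself reduces to taking the minimum of the two membership degrees in every cell of Table 1 (the F of Table 2) and returning the grade of the cell with the highest degree. Below is a small Python sketch of this step, using the membership degrees from the 49-point/19-point example above; we assume ν_Excellent(19) = 1, consistent with the degrees quoted in the text:

```python
# Table 1: fuzzy associative matrix mapping (total-score set, open-question set) -> grade.
GRADES = ["P", "S", "G", "V", "E"]
FAM = {
    "P": {"P": 2, "S": 2, "G": 3, "V": 3, "E": 4},
    "S": {"P": 2, "S": 3, "G": 3, "V": 4, "E": 4},
    "G": {"P": 2, "S": 3, "G": 4, "V": 5, "E": 5},
    "V": {"P": 3, "S": 4, "G": 5, "V": 5, "E": 6},
    "E": {"P": 3, "S": 4, "G": 5, "V": 6, "E": 6},
}

def fuzzified_grade(mu: dict, nu: dict) -> int:
    """Combine memberships with F = min (Table 2) and return the grade
    of the cell with the highest membership degree."""
    best_grade, best_degree = None, -1.0
    for row in GRADES:
        for col in GRADES:
            degree = min(mu[row], nu[col])  # intersection t-norm
            if degree > best_degree:
                best_degree, best_grade = degree, FAM[row][col]
    return best_grade

# Degrees for the learner discussed in the text (total 49 points, open question 19):
mu = {"P": 0, "S": 0, "G": 0.21, "V": 0.79, "E": 0}
nu = {"P": 0, "S": 0, "G": 0, "V": 0.005, "E": 1}
print(fuzzified_grade(mu, nu))  # 6: the VeryGood/Excellent cell wins with degree 0.79
```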
To recalculate the test results, we only need to input the test scores, and MapleSoft 2016.0 automatically chooses the scores to be fuzzified and calculates the fuzzified grades.
We define two parameters ε > 0 and δ > 0, and we specify the following: x_0 = 0, x_1 = 32 − ε, x_2 = 32 + ε, x_3 = 40 − ε, x_4 = 40 + ε, x_5 = 48 − ε, x_6 = 48 + ε, x_7 = 56 − ε, x_8 = 56 + ε, x_9 = 64, y_0 = 0, y_1 = 8 − δ, y_2 = 8 + δ, y_3 = 11 − δ, y_4 = 11 + δ, y_5 = 14 − δ, y_6 = 14 + δ, y_7 = 17 − δ, y_8 = 17 + δ, y_9 = 20.
As a result, Maple indicates that when ε = 2.6 and δ = 1.5, the two distributions do not differ statistically, and ε + δ = 4.1 is the largest possible sum. In this case, we have changed 46 marks. When we fuzzify the grades with ε = 2.7 and δ = 1.5, the two distributions differ statistically. We obtain 56 marks to be changed: by increasing the set of fuzzified marks, we only add new students whose marks will be reevaluated. That is why we have listed in Table 5 the 56 fuzzified marks (the information is organized in this order: [classical test grade, classical open-question grade], student’s number in the list, test score, open-question score, fuzzified grade, classical grade). The 10 new students that were added by increasing ε from 2.6 to 2.7 are marked with an asterisk in Table 5.
At every stage of the calculations, Maple performs a standard paired-samples t-test. In the case of ε = 2.6 and δ = 1.5, the test indicates that we should accept the hypothesis that the two distributions have equal means, and in the case of ε = 2.7 and δ = 1.5, Maple indicates that we should reject this hypothesis.
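The same check can be reproduced outside Maple. Below is a Python/SciPy sketch of the paired t-test together with a grid search for the largest ε + δ that keeps the means statistically equal; the `fuzzify` callable and the grids are placeholders for the recalculation procedure described above, which we do not reproduce here:

```python
from itertools import product
from scipy import stats

def equal_means(before, after, alpha: float = 0.05) -> bool:
    """Paired-samples t-test; True when the hypothesis of equal means
    cannot be rejected at significance level alpha."""
    _, p_value = stats.ttest_rel(before, after)
    return p_value > alpha

def largest_eps_delta(classic_grades, fuzzify, eps_grid, delta_grid):
    """Find the (eps, delta) pair with the largest sum for which the
    fuzzified grades still have the same mean as the classical ones.
    `fuzzify(eps, delta)` is a placeholder returning the recalculated
    grades for the given transition half-widths."""
    best = None
    for eps, delta in product(eps_grid, delta_grid):
        fuzzy = fuzzify(eps, delta)
        if equal_means(classic_grades, fuzzy):
            if best is None or eps + delta > best[0] + best[1]:
                best = (eps, delta)
    return best
```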
We will analyze the fuzzified grades in the Discussion below in order to justify that we have obtained fairer marks in the first case and less equitable marks in the second one.
By comparing the results of the second exam in terms of overall points (78, 76, 80, 63, 67, 61, 68, 75, 13, 72, 74, 76, 72, 79, 70, 58, 57, 66, 66, 59, 74, 67, 36, 35, 75, 38, 36, 52, 47, 55, 25, 47, 73, 35, 31, 70) and points on the open questions (15, 14, 15, 7, 6, 8, 9, 13, 0, 13, 13, 14, 11, 15, 15, 3, 9, 11, 8, 9, 13, 12, 0, 3, 14, 0, 0, 6, 7, 8, 0, 3, 13, 0, 0, 7) to those of the first exam in terms of overall points (57, 74, 58, 58, 22, 65, 19, 70, 21, 63, 42, 42, 62, 42, 63, 23, 47, 62, 66, 74, 59, 56, 69, 59, 67, 55, 48, 41, 52, 46, 42, 37, 53, 61, 58, 30, 35, 69, 63, 66, 54, 41, 48, 47, 50, 63, 56, 41, 50, 56, 45, 36, 60, 43, 69, 55, 70, 77, 71, 74, 24, 66, 54, 43, 49, 59, 53, 42, 54, 60, 62, 56, 35, 59, 71, 53, 61, 52) and points on the open questions (18, 20, 19, 19, 7, 18, 0, 20, 0, 17, 14, 16, 14, 16, 18, 10, 8, 20, 16, 19, 16, 16, 20, 14, 20, 18, 18, 0, 14, 7, 9, 0, 8, 13, 20, 9, 0, 12, 16, 18, 18, 13, 19, 15, 17, 19, 8, 15, 20, 19, 16, 0, 16, 15, 20, 20, 19, 20, 20, 20, 10, 19, 19, 0, 0, 0, 16, 10, 14, 14, 18, 18, 7, 19, 17, 16, 16, 6), we can see that the results from the second test for the open questions are much lower than those from the first.
When we fuzzify the grades of the second test with ε = 2.4 and δ = 1.2, the two distributions do not differ statistically. We obtain 18 grades to be modified without changing the distribution of the overall marks before and after the fuzzification.

3.4. CCA Modeling of the Assessment Process in a Cyber–Physical Educational Environment

Ambient-oriented modeling (AOM) is a type of computational process in the context of which interactions between objects from the physical and the virtual worlds play a major role. The calculus of context-aware ambients (CCA) formalism models the system’s ability to respond to changes in the surrounding space [27]. An ambient is an entity that is used to describe an object or a component: a process, a device, a location, etc. Each ambient has a name and boundaries, can contain other ambients within itself, and can be included in another ambient. There are three possible relationships between any two ambients: parent, child, and relative. Each ambient can communicate with the ambients around it, and ambients can exchange messages with each other. Message exchange is carried out through a handshaking process. In the notation, :: is the symbol for relative ambients; ↑ and ↓ are the parent and child symbols, respectively; < > means sending; and ( ) means receiving a message. An ambient can be mobile; i.e., it can move within its surroundings. In CCA, there are two movement capabilities, in and out, which allow ambients to move from one location to another. In CCA, four syntactic categories can be distinguished:
  • Processes P;
  • Capabilities M;
  • Locations α ;
  • Context expressions k.
As we have already pointed out, the concept of ambients is an abstraction of the limited space where some computation is performed. Ambients are mobile and can build ambient hierarchies. Through these hierarchies, any entity in a cyber–physical system can be modeled regardless of its nature (physical, logical, mobile, or static) or the environment (or context) of that entity. In addition, an ambient contains a process representing its capabilities; i.e., the actions that this ambient is allowed to perform, as well as mobility capabilities, contextual capabilities, and communication capabilities.
Due to its dynamic and hybrid nature, the process of assessing student knowledge in the context described in the previous section can be modeled using the mathematical notation of CCAs. The cyber–physical educational environment is, by its nature, a multiagent system that implements processes and services through interaction between various intelligent agents. Each component of the environment is served by one or more specialist assistants, and users are represented in the platform by their personal assistants. Each such intelligent environment component can be represented by a separate ambient. Let us consider the following ambients:
  • PA_T—a personal assistant to the teacher;
  • PA_Si—a personal assistant of the i-th student;
  • SA_TS—a specialist assistant serving the test system in the education space;
  • SA_DM—a specialist assistant providing services related to the use of data from the data module;
  • SA_SB—a specialist assistant supporting interaction with the student books component;
  • AA—an analytical assistant that provides services related to information analysis by using the described fuzzy set approach.
We model the processes of these ambients according to the hybrid approach described above.
The instructor, through their personal assistant, sends a message to the assistant of the test system requesting it to open the test for all students. After a student completes the test, their score is recorded in the data module, and the teacher receives information about it. The instructor’s personal assistant communicates with the AA ambient with a request to analyze the results of that student according to the considered approach and, as a consequence, receives a proposal for an assessment, which it sends to the student’s virtual student book. The process of this ambient is represented by (2).
P_PA_T ≜ SA_TS::&lt;Open the test&gt;.0 | SA_DM::(Student_i completed the test).0 | AA::&lt;Analyze the results of student_i&gt;. AA::(Post-analysis evaluation proposal). SA_SB::&lt;Record the grade of student_i&gt;.0    (2)
After receiving a request to open the test from the teacher’s personal assistant PA_T, the SA_TS ambient sends information to the students’ personal assistants. This communication with the i-th student is modeled in (3).
P_SA_TS ≜ PA_T::(Open the test). PA_S_i::&lt;Test is open, you can start&gt;.0    (3)
As soon as the student finishes working on the test, his/her personal assistant sends a message to the specialist assistant of the data module SA_DM with a request to record the results obtained. The ambient process is represented by (4).
P_PA_S_i ≜ SA_TS::(Test is open, you can start). SA_DM::&lt;The test is complete, save the result&gt;.0    (4)
The specialist assistant of the data module SA_DM records the results of the students and sends information to the teacher. When it receives a request from the AA ambient, it selects the requested data and sends it for analysis. The process of this ambient is represented in (5).
P_SA_DM ≜ PA_S_i::(The test is complete, save the result). PA_T::&lt;Student_i completed the test&gt;.0 | AA::(Need data for analysis). AA::&lt;Set of data&gt;.0    (5)
The AA ambient analyzes the results of the conducted test after a request from the teacher’s personal assistant. To access a particular set of data, it sends a request to the SA_DM ambient. The process is presented in (6).
P_AA ≜ PA_T::(Analyze the results of student_i).0 | SA_DM::&lt;Need data for analysis&gt;.0 | SA_DM::(Set of data). PA_T::&lt;Post-analysis evaluation proposal&gt;.0    (6)
The closing stage of the implementation of the process is the recording of the final assessment of the students in the administrative system of the virtual student book (SA_SB).
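To make the choreography concrete, here is a minimal Python sketch that simulates the message flow of processes (2)-(6) as mailbox handshakes between the six ambients; this is an illustrative simulation of the scenario for a single student, not ccaPL code:

```python
from collections import deque

# One mailbox per ambient; send <...> appends a message, receive (...) pops one.
AMBIENTS = ["PA_T", "SA_TS", "PA_S1", "SA_DM", "AA", "SA_SB"]
mailboxes = {name: deque() for name in AMBIENTS}

def send(target: str, message: str) -> None:
    mailboxes[target].append(message)

def receive(me: str) -> str:
    return mailboxes[me].popleft()

# Scenario (2)-(6): open the test, complete it, save and analyze the
# results, and record the final grade in the virtual student book.
send("SA_TS", "Open the test")                                   # PA_T  -> SA_TS
receive("SA_TS"); send("PA_S1", "Test is open, you can start")   # SA_TS -> PA_S1
receive("PA_S1"); send("SA_DM", "The test is complete, save the result")
receive("SA_DM"); send("PA_T", "Student 1 completed the test")
receive("PA_T");  send("AA", "Analyze the results of student 1")
receive("AA");    send("SA_DM", "Need data for analysis")
receive("SA_DM"); send("AA", "Set of data")
receive("AA");    send("PA_T", "Post-analysis evaluation proposal")
receive("PA_T");  send("SA_SB", "Record the grade of student 1")
print("SA_SB received:", receive("SA_SB"))
```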
The ccaPL programming language is a computer-readable version of the CCA syntax. The interpreter of this language enables testing and verification of the modeled scenario (Figure 5).

4. Results and Discussion

We analyze the fuzzified marks in this section in order to illustrate that we have obtained fairer marks when the two distributions do not differ statistically and less equitable ones when they do.
From practical experience, we can verify that the use of practice tests is an effective tool for both student evaluation and learning. By combining tests with open and closed questions and using the fuzzy-set technique to correct the results, we achieve a more accurate assessment, which helps, to a certain extent, to avoid the possibility of randomly selected answers, a well-known risk when administering tests. To justify the ethics and credibility of changing borderline grades, we explain why we consider it fair to apply changes to the grades when the two distributions after the fuzzification do not differ statistically and, vice versa, why it would not be equitable to make alterations when the two distributions differ statistically.
Let us take as an example the grades of student number 35 from the list of test takers ([4, 6], 35, 42.0, 20, 5, 4 in Table 5). Initially, he had obtained the grade good (4) with 42 out of 64 points. Inspecting his test, it becomes obvious that he has some knowledge gaps related to grammar usage; for example, his use of the present perfect tense is rather inconsistent because in some cases he uses it correctly but in others he confuses it with the past simple tense. However, he has grasped the use of the present simple and present continuous tenses and has completed all the tasks involving these two tenses appropriately. On the other hand, the student has received the maximum of 20 points for the open question: his answer is coherent, logical, and clearly structured, and his thesis is supported by a relevant example. Moreover, from our observations as teachers of this particular student, we can confirm that he is diligent and hard-working and that he puts a good deal of effort into his work. Therefore, we are convinced that it would be unfair to assess his test with the lower grade of good (4); he well deserves the grade of very good (5), obtained after the process of fuzzification when the two distributions do not differ statistically.
Next, we consider the instance when ε is larger, in which case new students are added whose grades will be automatically revised. As the hypothesis test shows, the distribution of assessments then differs statistically from the classical evaluation, and we argue that the changes in the assessments that occur in this case can be regarded as unfair.
Now, let us regard another example of a student whose grade it would be unjust to alter (when the two distributions differ statistically). Let us review the results, for instance, of student number 6 from the list of test takers ([5, 6], 6, 50.6, 18, 6, 5 in Table 5). His original grade, based on the standard grading system, is very good (5). Although he has obtained a considerably high score on the open question, the mistakes made on the MCQs are quite sporadic; for instance, in certain cases, he has applied a grammar rule correctly, while in others he has not, or he has selected the appropriate preposition from a list of options in a closed question and then used it incorrectly in his written text, which suggests a random choice of answers in the test. Consequently, we believe that it would be improper to correct his grade to the maximum possible score.
On the other hand, ambient-oriented CCA modeling makes it possible to describe, in a unified way, different objects from the physical and the virtual worlds. The development of cyber–physical educational platforms is a labor-intensive, long, and expensive process, which is why the preliminary modeling of the main processes and services in the space is of particular importance. In the context of the need to assess students’ knowledge in the cyber–physical educational space, CCA modeling provides abundant opportunities for preliminary verification, testing, and process analysis.

5. Conclusions

The article discusses a hybrid approach to assessing students’ foreign language knowledge in a cyber–physical educational environment. The presented assessment approach, through the use of the mathematical theory of fuzzy functions, ensures a fair assessment of students, which motivates them to take tests conscientiously in order to get the most out of them.
Ambient-oriented CCA modeling provides ample opportunities for preliminary testing, verification, and analysis of the base scenarios related to both student assessment and provision of the entire learning process in the cyber–physical educational space.

Author Contributions

Formal analysis: T.G., V.I. and B.Z.; Methodology: T.G., V.I. and B.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financed by the European Union-NextGenerationEU through the National Recovery and Resilience Plan of the Republic of Bulgaria under project no. DUECOS BG-RRP-2.004-0001-C01.

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Morales-Salas, R.E.; Infante-Moro, J.C.; Gallardo-Pérez, J. Evaluation of virtual learning environments. A management to improve. Int. J. Educ. Res. Innov. 2020, 2020, 126–142.
  2. Todorov, J.; Krasteva, I.; Ivanova, V.; Doychev, E. BLISS-A CPSS-like Application for Lifelong Learning. In Proceedings of the IEEE International Symposium on Innovations in Intelligent SysTems and Applications (INISTA 2019), Sofia, Bulgaria, 3–5 July 2019.
  3. National Science Foundation (US). Cyber-Physical Systems (CPS). 2008. Available online: https://www.nsf.gov/pubs/2008/nsf08611/nsf08611.htm (accessed on 20 December 2023).
  4. Wang, X.; Yang, J.; Han, J.; Wang, W.; Wang, F.Y. Metaverses and DeMetaverses: From digital twins in CPS to parallel intelligence in CPSS. IEEE Intell. Syst. 2022, 37, 97–102.
  5. Gürdür Broo, D.; Boman, U.; Törngren, M. Cyber-physical systems research and education in 2030: Scenarios and strategies. J. Ind. Inf. Integr. 2021, 21, 100192.
  6. National Academies of Sciences, Engineering, and Medicine. A 21st Century Cyber-Physical Systems Education. 2017. Available online: https://nap.nationalacademies.org/catalog/23686/a-21st-century-cyber-physical-systems-education (accessed on 20 December 2023).
  7. Stoyanov, S.; Glushkova, T.; Stoyanova-Doycheva, A.; Todorov, J.; Toskova, A. A Generic Architecture for Cyber-Physical-Social Space Applications. Intell. Syst. Theory Res. Innov. Appl. Stud. Comput. Intell. 2020, 864, 319–343.
  8. Rahnev, A.; Pavlov, N.; Golev, A.; Stieger, M.; Gardjeva, T. New electronic education services using the Distributed E-Learning Platform (DisPeL). Int. Electron. J. Pure Appl. Math. (IEJPAM) 2014, 7, 63–72.
  9. Council of Europe. Common European Framework of Reference for Languages: Learning, Teaching, Assessment; Press Syndicate of the University of Cambridge: Cambridge, UK, 2011. Available online: https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=0900001680459f97 (accessed on 20 December 2023).
  10. Ivanova, V.; Zlatanov, B. Implementation of fuzzy functions aimed at fairer grading of students’ tests. Educ. Sci. 2019, 8, 214.
  11. Zadeh, L. Fuzzy Sets. Inf. Control 1965, 8, 338–353.
  12. Fahad, S.; Shah, A. Intelligent testing using fuzzy logic: Applying fuzzy logic to examination of students. Innov. Learn. Instr. Technol. Assess. Eng. Educ. 2007, 95–98.
  13. Gokmena, G.; Akinci, T.; Tektau, M.; Onat, N.; Kocyigit, G.; Tektau, N. Evaluation of student performance in laboratory applications using fuzzy logic. Procedia Soc. Behav. Sci. 2010, 2, 902–909.
  14. Dias, S.; Diniz, J. FuzzyQoI model: A fuzzy logic-based modelling of users’ quality of interaction with a learning management system under blended learning. Comput. Educ. 2013, 69, 38–59.
  15. Troussas, C.; Krouska, A.; Sgouropoulou, C. Collaboration and fuzzy-modeled personalization for mobile game-based learning in higher education. Comput. Educ. 2020, 144, 103698.
  16. Aldana-Burgos, L.; Gaona-García, P.; Montenegro-Marín, C. A Fuzzy Logic Implementation to Support Second Language Learning Through 3D Immersive Scenarios. Smart Innov. Syst. Technol. 2023, 320, 501–511.
  17. Nandwalkar, B.; Pardeshi, S.; Shahade, M.; Awate, A. Descriptive Handwritten Paper Grading System using NLP and Fuzzy Logic. Int. J. Perform. Eng. 2023, 19, 273–282.
  18. Brimzhanova, S.; Atanov, S.; Moldamurat, K.; Baymuhambetova, B.; Brimzhanova, K.; Seitmetova, A. An intelligent testing system development based on the shingle algorithm for assessing humanities students’ academic achievements. Educ. Inf. Technol. 2022, 27, 10785–10807.
  19. Doz, D.; Cotič, M.; Felda, D. Random Forest Regression in Predicting Students’ Achievements and Fuzzy Grades. Mathematics 2023, 11, 4129.
  20. Doz, D.; Felda, D.; Cotič, M. Demographic Factors Affecting Fuzzy Grading: A Hierarchical Linear Regression Analysis. Mathematics 2023, 11, 1488.
  21. Doz, D.; Felda, D.; Cotič, M. Assessing Students’ Mathematical Knowledge with Fuzzy Logic. Educ. Sci. 2022, 12, 266.
  22. Doz, D.; Felda, D.; Cotič, M. Combining Students’ Grades and Achievements on the National Assessment of Knowledge: A Fuzzy Logic Approach. Axioms 2022, 11, 359.
  23. Doz, D.; Felda, D.; Cotič, M. Using Fuzzy Logic to Assess Students’ Mathematical Knowledge. In Proceedings of the Nauka i Obrazovanje—Izazovi i Perspektive, Užice, Serbia, 21 October 2022; pp. 263–278.
  24. Özseven, B.E.; Çağman, N. A Novel Student Performance Evaluation Model Based on Fuzzy Logic for Distance Learning. Int. J. Multidiscip. Stud. Innov. Technol. 2022, 6, 29–37.
  25. Özseven, B.E.; Çağman, N. A novel evaluation model based on fuzzy logic for distance learning. Soft Comput. 2022; preprint.
  26. Sobrino, A. Fuzzy Logic and Education: Teaching the Basics of Fuzzy Logic through an Example (by Way of Cycling). Educ. Sci. 2013, 3, 75–97.
  27. Siewe, F.; Zedan, H.; Cau, A. The calculus of context-aware ambients. J. Comput. Syst. Sci. 2011, 77, 597–620.
Figure 1. Plots of μ_Poor, μ_Satis., μ_Good, μ_VeryGood, and μ_Excellent.
Figure 2. Plots of ν_Poor, ν_Satis., ν_Good, ν_VeryGood, and ν_Excellent.
Figure 3. Plots of μ_A, μ_B, and the intersection μ_{A∩B} = min(μ_A, μ_B).
Figure 4. Plots of the union μ_{A∪B} = max(μ_A, μ_B).
Figure 5. Testing and verification using the ccaPL interpreter and animator.
Table 1. Fuzzy associative matrix.

              ν_Poor   ν_Satisf.   ν_Good   ν_VeryGood   ν_Excellent
μ_Poor          2         2          3          3            4
μ_Satisf.       2         3          3          4            4
μ_Good          2         3          4          5            5
μ_VeryGood      3         4          5          5            6
μ_Excellent     3         4          5          6            6
Table 2. The different combinations of the minimums of μ and ν, computed by means of F.

F(μ_P(p), ν_P(q))   F(μ_P(p), ν_S(q))   F(μ_P(p), ν_G(q))   F(μ_P(p), ν_VG(q))   F(μ_P(p), ν_E(q))
F(μ_S(p), ν_P(q))   F(μ_S(p), ν_S(q))   F(μ_S(p), ν_G(q))   F(μ_S(p), ν_VG(q))   F(μ_S(p), ν_E(q))
F(μ_G(p), ν_P(q))   F(μ_G(p), ν_S(q))   F(μ_G(p), ν_G(q))   F(μ_G(p), ν_VG(q))   F(μ_G(p), ν_E(q))
F(μ_V(p), ν_P(q))   F(μ_V(p), ν_S(q))   F(μ_V(p), ν_G(q))   F(μ_V(p), ν_VG(q))   F(μ_V(p), ν_E(q))
F(μ_E(p), ν_P(q))   F(μ_E(p), ν_S(q))   F(μ_E(p), ν_G(q))   F(μ_E(p), ν_VG(q))   F(μ_E(p), ν_E(q))
Table 3. The fuzzified learner’s mark when p = 57, q = 18.

0   0   0   0       0
0   0   0   0       0
0   0   0   0       0
0   0   0   0.005   0.21
0   0   0   0.005   0.79
Table 4. The fuzzified learner’s mark when p = 42, q = 17.

0   0   0   0      0
0   0   0   0.02   0.02
0   0   0   0.5    0.49
0   0   0   0      0
0   0   0   0      0
Table 5. Fuzzified students’ marks. Entries marked with * are the 10 students added when ε is increased from 2.6 to 2.7.

[4, 6], 1, 42.6, 18, 5, 4      [6, 6], 2, 58.0, 20, 6, 6      [4, 6], 3, 42.80, 19, 5, 4 *
[4, 6], 4, 42.80, 19, 5, 4 *   [5, 6], 6, 50.6, 18, 6, 5      [5, 6], 8, 54.0, 20, 6, 5
[5, 5], 10, 49.4, 17, 5, 5     [2, 5], 11, 30.8, 14, 3, 2     [2, 5], 12, 29.20, 16, 3, 2 *
[5, 5], 13, 50.80, 14, 5, 5 *  [2, 5], 14, 29.20, 16, 3, 2 *  [5, 6], 15, 48.6, 18, 6, 5
[4, 2], 17, 40.6, 8, 2, 4      [4, 6], 18, 46.0, 20, 5, 4     [5, 5], 19, 53.20, 16, 5, 5 *
[6, 6], 20, 58.80, 19, 6, 6 *  [4, 5], 21, 46.2, 16, 4, 4     [5, 5], 24, 47.8, 14, 5, 4
[4, 6], 26, 40.6, 18, 5, 4     [3, 6], 27, 33.6, 18, 4, 3     [4, 2], 28, 41.0, 0, 2, 4
[4, 5], 29, 40.8, 14, 4, 4     [4, 2], 30, 40.4, 7, 2, 4      [3, 3], 31, 34.80, 9, 3, 3 *
[5, 2], 33, 46.6, 8, 3, 4      [5, 4], 34, 50.6, 13, 5, 5     [4, 6], 35, 42.0, 20, 5, 4
[5, 5], 39, 50.2, 16, 5, 5     [4, 6], 41, 39.6, 18, 5, 3     [2, 4], 42, 30.6, 13, 3, 2
[3, 6], 43, 32.8, 19, 4, 3     [5, 6], 46, 47.8, 19, 6, 4     [5, 2], 47, 49.6, 8, 3, 5
[3, 6], 49, 34.0, 20, 4, 3     [4, 6], 50, 40.8, 19, 5, 4     [3, 5], 51, 32.2, 16, 4, 3
[5, 5], 53, 47.2, 16, 5, 4     [2, 5], 54, 31.0, 15, 3, 2     [3, 6], 56, 39.0, 20, 4, 3
[5, 6], 57, 54.80, 19, 6, 5    [5, 6], 59, 55.0, 20, 6, 5     [6, 6], 60, 58.0, 20, 6, 6
[5, 6], 62, 50.80, 19, 6, 5 *  [3, 6], 63, 38.8, 19, 4, 3     [5, 2], 65, 49.0, 0, 3, 5
[4, 5], 67, 40.2, 16, 4, 4     [3, 3], 68, 34.0, 10, 3, 3     [4, 5], 69, 42.80, 14, 4, 4 *
[5, 5], 70, 48.8, 14, 5, 5     [5, 6], 71, 47.6, 18, 6, 4     [4, 6], 72, 41.6, 18, 5, 4
[2, 2], 73, 29.4, 7, 2, 2      [6, 6], 75, 57.4, 17, 6, 6     [4, 5], 76, 40.2, 16, 4, 4
[5, 5], 77, 48.2, 16, 5, 5     [5, 2], 78, 47.2, 6, 3, 4

