Article

Inheritance Coding with Gagné-Based Learning Hierarchy Approach to Developing Mathematics Skills Assessment Systems

Wei-Ling Tang, Jinn-Tsong Tsai and Ching-Ying Huang

1 Department of Education, National Pingtung University, 4–18 Min-Sheng Road, Pingtung 900, Taiwan
2 Department of Computer Science, National Pingtung University, 4–18 Min-Sheng Road, Pingtung 900, Taiwan
3 Graduate Institute of Education, National Chung Cheng University, 168, Sec. 1, University Rd., Min-Hsiung Township, Chia-Yi County 621, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(4), 1465; https://doi.org/10.3390/app10041465
Submission received: 16 December 2019 / Revised: 12 February 2020 / Accepted: 17 February 2020 / Published: 21 February 2020
(This article belongs to the Special Issue Selected Papers from IMETI 2018)

Abstract

This study developed an inheritance coding with Gagné-based learning hierarchy approach to building systems for assessing mathematics skills and diagnosing student learning problems. The proposed Gagné-based learning hierarchy approach combines Gagné learning hierarchy theory with an inheritance coding approach. First, Gagné learning hierarchy theory is used to generate test questions and learning path diagrams for a skills assessment system. To assess learning achievement, an inheritance coding approach is then used to encode the test questions according to the learning hierarchy paths. The analysis, design, development, implementation, and evaluation (ADDIE) design model is used throughout the process of developing the assessment system. Statistical analyses of the test questions for assessing student learning achievement included expert validity, internal reliability, test–retest reliability, and parallel-form reliability. System performance questionnaires were also designed to survey student opinions of the mathematics skills assessment system, and the internal reliability of the overall questionnaire was calculated. Practical application of the assessment system developed by the Gagné-based learning hierarchy approach showed that it can accurately diagnose student learning barriers and provide learning suggestions for students and teachers.

1. Introduction

The Gagné model identifies five major categories of learning: verbal information, intellectual skills, cognitive strategies, motor skills, and attitudes [1]. Robert M. Gagné suggested that learning tasks for intellectual skills could be hierarchically organized according to their complexity and proposed the following categories of learning tasks: stimulus recognition, response generation, procedure following, use of terminology, discriminations, concept formation, rule application, and problem solving. The alternative Gagné taxonomy worked well with student outcomes in many courses because it could distinguish among verbal information, attitudes, and psychomotor skills. Additionally, the taxonomy effectively distinguished among intellectual skills, i.e., concept learning, rule using, and problem solving [2]. The significance of the hierarchy was that it identified the prerequisites for facilitated learning at each level. Learning hierarchies provided a basis for determining the best sequence of instruction [3]. A study by Flynn [4] established that the Gagné learning theory, including its events of instruction, accurately described and supported learning in a cooperative learning environment. The findings clearly suggested that collaborative learning should also be consistent with and/or supportive of the Gagné learning hierarchy [4,5]. In addition to improving understanding of learning hierarchies and hierarchical analysis, the Gagné model introduced an important notion: learners incrementally acquire prerequisite skills before they attempt to master higher or more complex skills. According to the decomposition hypothesis, using the Gagné learning hierarchies to analyze complex tasks required an assumption that mastery of less complex units is a prerequisite for mastery of more complex units. The theory is applicable in the design of increasingly complex task levels [6,7]. Gagné [6] conceptualized learning as a change in human disposition or capability. Gagné also argued that the change could be retained and should not simply be ascribed to the growth process. Curriculum planners can apply this concept when examining the nature, depth and breadth of coverage of linear programming material such as that in a mathematics curriculum [8]. In the hierarchy proposed by Gagné, problem solving is the highest level of learning because it requires mastery of the next lower type of learning. Problem solving requires the application of principles and facts to explain and solve new phenomena or to predict consequences from known conditions. Additionally, problem solving requires the use of prediction and the analysis of facts and principles to identify cause and effect relationships among physical phenomena in the environment [9]. In the above research studies, the results were in agreement with the Gagné cognitive theory of prerequisite knowledge and learning hierarchy. Therefore, this study uses the Gagné model to develop learning hierarchy diagrams and test questions for a mathematics skills assessment system used to diagnose student learning problems.
The analysis, design, development, implementation, and evaluation (ADDIE) model is an instructional design model [10] that provides descriptive guidelines for building effective academic performance support tools in five phases. The main purpose of the ADDIE model is to improve learning efficiency by analyzing learning needs and developing a system of learning tools. Although it has been criticized in recent years, the ADDIE model has proven to be sufficiently flexible for building effective academic performance support tools [11,12]. Arkün and Akkoyunlu [13] examined the process of using the ADDIE model and student opinions to develop a multimedia learning environment. The authors concluded that, in terms of problem solving, the ADDIE model is highly effective, in a cyclical way. Hsu, Lee-Hsieh, Turton, and Cheng [14] used the ADDIE model to develop online continuing education courses on caring for nurses. The authors concluded that ADDIE is useful and developed a procedure for using the model to collect data and analyze results. Moradmand, Datta, and Oakley [15] used the ADDIE instructional system design method to guide the process of designing, implementing and evaluating educational multimedia software for learning mathematics. They concluded that this application appears to have considerable potential for use in teaching elementary school mathematics through storytelling and multimedia. Azimi, Ahmadigol, and Rastegarpour [16] evaluated the effectiveness of ADDIE instructional design and multimedia for learning key skills in futsal. Participants who were trained by an ADDIE model had higher mean scores for key futsal skills compared to students trained by traditional methods. Additionally, the authors successfully used ADDIE for developing e-materials for elementary students, including a mathematics speed unit, a complex graphics area unit, a fraction area unit, and a Mandarin character recognition learning course [17,18,19,20,21,22]. According to these studies, the ADDIE design model is sufficiently dynamic and flexible for use throughout the process of developing a mathematics skills assessment system.
For both students and educators, assessment systems developed using the ADDIE design model have proven useful for evaluating learning outputs. Researchers have developed various methods of analyzing the learning problems and barriers of students [23,24,25,26]. For example, Chen [27] developed an intelligent personalized learning system that used genetic algorithms to determine the best learning paths according to the incorrect answers of individual learners. Ketterlin-Geller and Yovanoff [28] used cognitive diagnostic assessments to provide detailed and precise information about the cognitive processes of students who had difficulty learning mathematics. Ozyurt, Ozyurt, and Baki [29] proposed a structure and improvement processes for computerized adaptive testing systems. Conventional systems for assessing the abilities of students are now being replaced by adaptive assessment systems. The objective of adaptive assessment systems is to evaluate students according to their actual ability rather than according to their test grades. Hauswirth and Adamoli [30] proposed a pedagogical approach to facilitating students in learning from their mistakes by answering a series of questions. Hwang, Panjaburee, Triampo, and Shih [31] proposed a group decision approach for developing a concept-effect relationship model with the cooperation of multiple domain experts. Low-achievement students who were taught using the group decision approach had significantly better learning achievement compared to those who were taught using the conventional approach. Yang, Hwang, Yang, and Hwang [32] proposed a two-tier test-based learning approach to enhancing learning outcomes in computer-programming courses in a web-based learning environment. Lai, Kobrin, DiCerbo, and Holland [33] described an application of the assessment triangle, and compared two studies to evaluate whether evidence of student performance is consistent. Wilkins and Norton [34] used a hierarchy of fraction scheme that charted the outline of a progression from part–whole concepts to measurement concepts of fractions. However, for elementary students struggling in mathematics, the issue of which diagnostic method is most effective for understanding their learning problems needs further study.
The aim of this research is to propose an effective theoretical method, to construct a development process, and to establish an assessment system for diagnosing student learning problems. This study proposes a Gagné-based learning hierarchy approach (GBLHA) to developing mathematics skills assessment systems that use the ADDIE model to diagnose student learning problems. Firstly, a learning hierarchy diagram for a mathematics unit is built by applying Gagné learning hierarchy theory. Secondly, an inheritance coding approach is used to encode the test questions according to the learning hierarchy paths: the level value for a lower-level skill is added to the level value for a higher-level skill. Thirdly, when the questions are used to test a student, the accuracy ratio (AR) for each skill level concept is computed to assess the learning achievement of the student. The experiments showed that students and teachers can use the mathematics skills assessment system developed by the GBLHA to identify barriers to learning and to obtain suggestions for improving learning efficiency.
This paper is organized as follows. Section 2 discusses how Gagné theory is used to diagnose student learning problems. Section 3 introduces the proposed Gagné-based hierarchy approach. Section 4 describes the system development process. Section 5 discusses our observations in practical applications of the approach in diagnosing student learning problems. Finally, Section 6 concludes the study.

2. Diagnosing Student Learning Problems by Applying Gagné Theory

Gagné [6] advocated a hierarchical analysis of intellectual skills. The component skills are task-analyzed in a parts-to-whole sequence that follows the hierarchy in a “bottom-up” fashion, followed by increasingly complex combinations of the parts [35]. Gagné developed the concept of a schema, which he defined as a modifiable information structure that contains prototypical information about frequently experienced situations. A schema can be used to interpret new situations; thus, understanding a new situation requires prior knowledge [1]. For example, prior knowledge of mathematics can be identified by pre-test scores in mathematics. Acquisition of mathematics knowledge can then be assessed by post-tests administered after each module.
Merrill [36] applied the Gagné hierarchical task analysis by breaking down the processes of learning and teaching mathematics into bite-sized chunks. Merrill [36] concluded that the best way to teach such tasks is to have a subject-matter expert demonstrate how the task should be performed and then let the learners practice performing the task. Mathematics and science assessments based on the Gagné theory of learning would address some limitations of using tests for assessment. When Gagné theory is used to define concepts, the instructional designer or test writer must explicate the dimensions of a concept, and the learner must discriminate among examples or generate instances based on these defined or agreed-upon limits.
In learning mathematics, some tasks, e.g., long division, are much too complex to be taught in a single session. The subparts, such as multiplication and subtraction, must be mastered in prior instructional sessions before the teacher can attempt to teach long division. In such a case, the subparts are often independent and functional tasks in their own right. Some paths include all the steps of other paths. Such paths have hierarchical part–whole relationships to one another. The hierarchical relationship can be used to specify an instructional or learning prerequisite sequence. Students cannot learn how to perform the task until they know how to perform the subtasks. A positive transfer from simpler tasks to more complex tasks is expected to occur when performance of the simpler tasks is required to complete the more complex tasks [36]. Gagné developed hierarchical task analysis, which is the most widely accepted task analysis approach. This approach entails identification of a hierarchy of subskills such that the acquisition of low-level skills or behaviors is followed by a positive transfer of the skills to a higher level. Gagné conceptualized intellectual skills as a hierarchy of skills organized from simple to complex. In other words, students must learn low-level elements before moving on to higher-level elements, which requires a cumulative learning process. The intellectual skills involved include discriminative learning, concrete concept learning, defined concept learning, rule learning, and problem solving. According to the learning hierarchy, concept learning is the foundation of problem solving [37]. Further studies are needed to address issues such as how long a unit on counting should last and to what extent children should be required to master relevant concepts before learning a mathematical procedure. Development of mathematics knowledge in children is often described as a hierarchical process during which later skills build on earlier skills. Some skills acquired earlier are considered particularly foundational for further learning of mathematics in children. Identifying mathematics skills that should be acquired at an early stage and structuring mathematics curricula around them may be an effective way to boost subsequent efficiency in learning mathematics. Questions about which specific mathematics skills are most important are often addressed through a combination of experimental studies and correlational research based on observational data. Although correlational research is free from certain logistical problems associated with experimental research, its utility for making causal inferences about the likely effects of interventions often relies on untested assumptions. For example, some researchers have hypothesized that individual differences in early acquisition of mathematics skills reflect variation in truly unique constructs [38,39,40].

3. Gagné-Based Learning Hierarchy Approach

According to the Gagné learning hierarchy theory, the acquisition of intellectual skills is cumulative, i.e., mastery of higher-level skills requires prior mastery of lower-level skills. Since intellectual skills are acquired in a hierarchical order, successful instruction requires the student to acquire lower-level skills before progressing to higher-level skills [6,41,42]. The current study uses a learning hierarchy to build learning path diagrams and test questions for a mathematics skills assessment system. To assess the knowledge of students, teachers were required to apply their expertise in designing test questions for each level. For example, integers and fractions are both related to certain learning processes even though they have different learning paths. Therefore, the learning hierarchy diagrams for integers and for fractions were integrated to enhance student learning. Figure 1 shows the learning hierarchy diagrams for integers and fractions. The a-path and b-path are the paths for integers and fractions, respectively, and the numbers in front of a and b represent the learning level.
To compute the learning achievements of students, an inheritance coding approach was used to encode the test questions according to the learning hierarchy paths. Mastering higher-level skills requires students to understand the concepts of lower-level skills. In other words, in inheritance coding, higher-level skills inherit the concepts of lower-level skills. However, each higher-level skill also adds a new concept, which differentiates it from the lower-level skills. Therefore, each skill level inherits the concepts from all lower skill levels and presents one new concept, whereas the lowest skill level only presents its own concept. Each test question was encoded as a vector of binary numbers (i.e., 1 and 0) with the same length as the vector of the selected skill level in the learning hierarchy diagram. A code of 1 indicated that the test question included the skill level concept; otherwise, the code was 0. The inheritance coding representation is accurate and efficient because it represents the skill level concepts in hierarchical order. For convenience and simplicity, the individual Xj of skill level codes in the learning hierarchy diagram is defined in Equation (1).
Xj = (x1, x2, …, xi, …, xρ) (1)
where xi (i = 1, 2, …, ρ) denotes the code for the skill level, its value being 0 or 1, and ρ is the number of selected skill levels in a learning hierarchy diagram. j = 1, 2, …, ps, where ps denotes the total number of test questions.
For example, the range of skill levels from 1b to 14b in the fractions unit (Figure 1) is used to generate test questions. For a test question generated from 1b of the fractions unit, the vector for individual X is encoded as X1b = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1). For test questions generated from 7b and 14b of the fractions unit, the vectors for individuals Xs are encoded as X7b = (0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1) and X14b = (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1), respectively. That is, the higher skill level 7b inherits the concepts of lower-level skills 1b to 6b in addition to its own. To solve a test question for 7b of the fractions unit, a student must have learned the concepts for skill levels 1b to 7b.
The inheritance coding procedure uses the following steps to produce ps individuals.
Step 1.
Design test questions from each skill level in the learning hierarchy diagram. The number of selected skill levels is ρ.
Step 2.
Encode Xj as in Equation (1): set each xi to 0 or 1. Repeating this ρ times produces the vector (x1, x2, …, xρ).
Step 3.
Repeat Step 2 ps times to produce ps feasible individuals for test questions.
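As a concrete illustration of Steps 1–3, the following C# sketch encodes a test question written for a given skill level. The paper's system was implemented in C#, but this code is not taken from it; the method name EncodeQuestion is a hypothetical label. Vector positions run from the highest selected level on the left down to level 1 on the right, matching the printed form of X1b, X7b, and X14b above.

```csharp
// Minimal sketch of the inheritance coding procedure (Steps 1-3).
// EncodeQuestion is an illustrative name, not from the paper's system.
using System;

class InheritanceCoding
{
    // Encode one test question designed for skill level `level` (1 = lowest)
    // in a hierarchy with `rho` selected skill levels. The question inherits
    // every concept from level 1 up to `level`, so the last `level` positions
    // of the printed vector (which runs high-to-low) are set to 1.
    static int[] EncodeQuestion(int level, int rho)
    {
        var code = new int[rho];
        for (int i = rho - level; i < rho; i++)
            code[i] = 1; // inherited concepts plus the level's own concept
        return code;
    }

    static void Main()
    {
        int rho = 14; // e.g., skill levels 1b..14b of the fractions unit
        Console.WriteLine(string.Join(", ", EncodeQuestion(1, rho)));  // X1b
        Console.WriteLine(string.Join(", ", EncodeQuestion(7, rho)));  // X7b
        Console.WriteLine(string.Join(", ", EncodeQuestion(14, rho))); // X14b
    }
}
```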
The questions designed for each skill level in the learning hierarchy diagram are used to test a student, and the accuracy ratio (AR) of each skill level concept is computed to determine the learning achievement of the student. For skill level concept Ci, ARi is calculated as in Equation (2).
ARi = (ni / mi) × 100% (2)
where ni is the number of correct answers a student gave for skill level concept Ci, and mi is the number of test questions for skill level concept Ci.
A threshold is set to evaluate the learning achievement of a student. The threshold should be set so that the teacher can determine whether a student answered a question correctly due to mastery of the concept or due to a correct guess. The threshold should also indicate whether a student answered a question incorrectly due to failure to grasp the concept or due to a careless mistake. Based on the test data analyses and discussions with the participating teachers in the study, a threshold between 50% and 60% was considered reasonable. When the AR for a skill level concept was higher than the threshold, the teacher could conclude that the student understood that concept. Furthermore, the AR trend for a student typically ran from high to low as the concepts progressed from lower levels to higher levels, reflecting the cumulative nature of the learning hierarchy.
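The following C# sketch shows the AR computation of Equation (2) together with the threshold test described above. The variable names and the sample data are illustrative assumptions, not values from the paper's system.

```csharp
// Sketch of the AR-based diagnosis: AR_i = (n_i / m_i) * 100%.
// A concept is judged understood when AR_i exceeds the threshold
// (50%-60% per the paper). The arrays hold illustrative sample data.
using System;

class AccuracyRatio
{
    static void Main()
    {
        const double threshold = 50.0;   // percent
        int[] n = { 27, 24, 21, 18 };    // correct answers covering C1..C4 (sample)
        int[] m = { 36, 33, 30, 27 };    // test questions covering C1..C4

        for (int i = 0; i < n.Length; i++)
        {
            double ar = 100.0 * n[i] / m[i]; // Equation (2)
            Console.WriteLine($"C{i + 1}: AR = {ar:F0}%, understood = {ar > threshold}");
        }
    }
}
```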

4. Developing Mathematics Skills Assessment Systems

Figure 2 shows the five phases of ADDIE (analysis, design, development, implementation, and evaluation), which is widely used for instructional design. The “Analysis” phase identifies the probable causes of a performance gap. The “Design” phase verifies the desired performance level and selects appropriate testing methods. The “Development” phase generates and validates the learning resources. The “Implementation” phase prepares the learning environment and engages the students. The last phase, “Evaluation”, assesses the quality of the instructional products and processes [10]. The authors have successfully used ADDIE to develop an auto-reply system for questions and answers in a mathematics speed unit [18,22].
This study used the ADDIE design model throughout the process of developing the mathematics skills assessment system.

4.1. Analysis Phase

The analysis phase clarifies the skills assessment problems, establishes the assessment goals, and identifies the existing knowledge and skills of the learner. The aim is to ensure that the assessment problems can be used to diagnose student learning barriers. In Taiwan, textbooks from the Kang-Hsuan, Han-Lin, and Nan-I editions were integrated into the question database for the mathematics skills assessment system. Benchmark mathematics skills for grades 1–9 were selected according to curriculum guidelines. The learning hierarchy for each mathematics unit was then integrated according to the Gagné learning hierarchy. The aim of this phase was to analyze student learning barrier points and to integrate the appropriate test questions for designing learning hierarchy diagrams.

4.2. Design Phase

Based on the results of the analysis phase, learning objectives and assessment questions were selected in the design phase. The test questions were developed and evaluated for each learning hierarchy. The aim was to ensure appropriate test questions were selected for each learning hierarchy.
The skills assessment system designed in this study includes several mathematics units. The test questions for each mathematics unit were designed by professional teachers. Figure 1 shows that the integers unit has 11 (1a–11a) levels, and the fractions unit has 14 (1b–14b) levels. Figure 3 shows that the area computation unit has 10 (1a–10a) levels.
Two course design experts and two teaching experts examined and revised the test questions. Expert validity exceeded 0.8, which indicated that expert agreement was acceptable.
The test questions designed for each skill level were evaluated in terms of their reliability for assessing mastery of each skill level concept. For all skill level concepts, the Cronbach alpha value exceeded 0.8, which indicated good internal reliability.
The test questions designed to assess student learning in mathematics were administered to a group of students twice at a 1-month interval. The Pearson correlation coefficient for test–retest reliability was approximately 0.91, which indicated acceptable stability of the test questions over time.
A large set of test questions pertaining to all critical concepts was generated. The questions were then randomly split into two parallel-form sets. The parallel-form test questions were given to the same group of students. The Pearson correlation coefficient obtained for parallel-form reliability approached 0.9, which indicated that both versions had acceptable consistency for assessing student learning achievement.

4.3. Development Phase

In the development phase, the test questions and content were selected, and web-based storyboards and graphics were laid out. Software technologies, including ASP.NET, C#, and a Structured Query Language (SQL) Server database, were integrated in the web-based mathematics skills assessment system. ASP.NET is a server-side web application framework developed by Microsoft that allows programmers to build dynamic web sites, applications, and services. The system was built as an ASP.NET Web Forms application programmed in C# with an SQL Server database. Because ASP.NET, C#, and SQL Server are all developed by Microsoft, they offer good compatibility. Finally, tests of the system were performed.
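To make the described architecture concrete, the hypothetical sketch below shows how a Web Forms code-behind page might load test questions from SQL Server. The database name, table, columns, and connection string are assumptions for illustration only; the paper does not publish its schema, and the matching .aspx markup is omitted.

```csharp
// Hypothetical Web Forms code-behind loading test questions from SQL Server.
// The schema (Questions table, QuestionId/SkillLevel/Text columns, UnitId key)
// and connection string are assumed, not taken from the paper's system.
using System;
using System.Data.SqlClient;
using System.Web.UI;

public partial class TestPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        const string connString =
            "Server=.;Database=MathAssessment;Integrated Security=true";
        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand(
            "SELECT QuestionId, SkillLevel, Text FROM Questions WHERE UnitId = @unit",
            conn))
        {
            cmd.Parameters.AddWithValue("@unit", "fractions"); // assumed unit key
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    int questionId = reader.GetInt32(0);
                    int skillLevel = reader.GetInt32(1);
                    string text = reader.GetString(2);
                    // Rendering each question into the page is omitted in this sketch.
                }
            }
        }
    }
}
```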
The aim was to complete all content and software and to test the mathematics skills assessment system. The system was reviewed by students and teachers during development and then revised according to their feedback.

4.4. Implementation Phase

In the implementation phase, a procedure for training teachers and students was developed. Teachers were trained in using the web-based software, performing tests, and assessing learning outcomes. Students were trained in using the software and completing the testing procedures. Tests of the web-based system were also performed to ensure that it functioned properly.
The aim of the implementation phase was to ensure that the mathematics skills assessment system effectively diagnosed student learning barrier points and provided learning suggestions for students and teachers.
Purposive sampling was performed to evaluate the efficiency and compatibility of the system. The participating students included grade 5 and grade 6 students in Pingtung and Kaohsiung, Taiwan. The teachers actively participated in testing the system and provided test responses.

4.5. Evaluation Phase

The evaluation phase was performed in two parts: formative and summative. Formative evaluation was performed at each stage of the ADDIE process to improve the effectiveness of the system and the content of the test questions. Summative evaluation was performed to understand the effects of the system, to survey user satisfaction, and to survey expert opinions. The aims of the evaluation phase were to use the feedback to revise the system and judge its effectiveness. The results were then used to refine the content of the test questions and web-based storyboards.
A self-administered questionnaire with acceptable expert validity was used to assess student opinions of the mathematics skills assessment system. The overall questionnaire had a Cronbach alpha value higher than 0.785, which indicated acceptable internal reliability.

5. Actual Implementation of the GBLHA Mathematics Skills Assessment System for Diagnosing Student Learning Problems

A practical implementation of the mathematics skills assessment system developed by the GBLHA enabled evaluation of its effectiveness for diagnosing student learning problems. Eighty students used the mathematics skills assessment system to diagnose learning problems. The results are shown below for five representative students in each of the two test subjects.

5.1. Test Subject 1: Fractions Unit

Figure 1 shows that the range of tested material included 1b to 12b in the fractions unit. Therefore, twelve skill level concepts were used to diagnose student learning problems. Three test questions were designed for each skill level, which resulted in 36 questions. Table 1 shows the inheritance codes for the test questions (Qj) and concepts (Ci) in each skill level, where j = 1, …, 36 and i = 1, …, 12. The total number of test questions, ps, is 36. The numbers of test questions, mi, for skill level concepts Ci (i = 1, 2, …, 12) are 36, 33, …, 3, respectively. To account for incorrect answers resulting from carelessness rather than lack of understanding, the threshold was set to 50%, a value agreed upon with the participating teachers. An AR exceeding 50% for a skill level concept indicated that the student understood that concept.
Table 2 shows the learning achievement results for student 1, who correctly answered questions Q1–Q17, Q19, Q20, Q24–Q26, Q29–Q32, and Q36. Student 1 correctly answered 27 of the 36 test questions. Traditionally, if the 36 test questions are worth 100 points, student 1 scored 75 points, which can be interpreted as an understanding of 75% of the tested concepts. However, according to the learning achievement analysis at a threshold of 50%, student 1 understood 92% (C1–C11) of concepts C1–C12. The teacher inferred that student 1 made many careless mistakes. The AR trend in learning achievement was from high (75%) to low (50%) from C1 to C11, respectively.
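As a check on these numbers, the short C# sketch below recomputes the AR row of Table 2 from student 1's raw answers. It is illustrative rather than taken from the paper's system, and it assumes the Table 1 layout, in which questions are numbered three per skill level, so question q covers concepts C1 through ⌈q/3⌉ and mi = 3 × (13 − i).

```csharp
// Recomputes Table 2 for student 1 (fractions unit) from the raw answer set.
// Assumes the Table 1 layout: Q1..Q36, three questions per skill level,
// so question q covers concepts C1..ceil(q/3), and m_i = 3 * (13 - i).
using System;
using System.Collections.Generic;
using System.Linq;

class Table2Check
{
    static void Main()
    {
        var correct = new HashSet<int>(
            Enumerable.Range(1, 17) // Q1-Q17
                      .Concat(new[] { 19, 20, 24, 25, 26, 29, 30, 31, 32, 36 }));

        for (int i = 1; i <= 12; i++)
        {
            int m = 3 * (13 - i);                         // questions covering C_i
            int n = correct.Count(q => (q + 2) / 3 >= i); // correct answers covering C_i
            Console.WriteLine($"C{i}: {n}/{m} = {100.0 * n / m:F0}%");
        }
        // Output matches Table 2: C1 = 75% down to C11 = 50% and C12 = 33%.
    }
}
```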
Table 3 shows the learning achievement results for student 2, who correctly answered questions Q1–Q12, Q15–Q23, Q26, Q28–Q30, Q32, and Q35. Student 2 correctly answered 27 of the 36 test questions. If the 36 test questions are worth 100 points, student 2 scored 75 points, which would traditionally be interpreted as an understanding of 75% of the tested concepts. However, according to the learning achievement analysis at a threshold of 50%, student 2 only understood 83% (C1–C10) of concepts C1–C12. The interpretation by the teacher was that student 2 needed an improved understanding of concepts C11–C12. The teacher also inferred that student 2 made several careless mistakes.
Table 4 shows the learning achievement results for student 3, who correctly answered questions Q1, Q3, Q4, Q7, Q8, Q10, Q14, Q15, Q17, Q18, Q21, Q23, Q26, Q28, Q30, Q33, and Q36. Student 3 correctly answered 17 of the 36 test questions. If the 36 test questions are worth 100 points, student 3 scored 47 points, which would traditionally be interpreted as an understanding of 47% of the tested concepts. However, according to the learning achievement analysis at a threshold of 50%, student 3 did not completely understand any concept because none of the AR values for the 12 concepts exceeded 50%. Since the AR for learning achievement showed no trend from high to low from C1 to C12, the teacher inferred that student 3 did not understand any concepts and answered the test questions by guessing. This example shows the tolerance of inheritance coding with AR to guessing: even if a student correctly answers some test questions, an AR value lower than the threshold still informs the teacher that the student does not completely understand those concepts.
Table 5 shows the learning achievement results for student 4, who correctly answered questions Q1–Q24, Q26–Q29, Q31, Q32, Q35, and Q36. Student 4 correctly answered 32 of the 36 test questions. Traditionally, if the 36 test questions are worth 100 points, student 4 scored 89 points, which can be interpreted as an understanding of 89% of the tested concepts. According to the learning achievement analysis at a threshold of 50%, student 4 understood 100% of concepts C1–C12. In the example, the AR trend for learning achievement was from high (89%) to low (67%) from C1 to C12, respectively. The teacher inferred that student 4 made careless mistakes.
Table 6 shows the learning achievement results for student 5, who correctly answered questions Q1, Q2, Q5, Q6, Q9–Q11, Q14–Q17, Q20–Q22, Q25, Q26, Q29, and Q30. Student 5 correctly answered 18 of the 36 test questions. Traditionally, if the 36 test questions are worth 100 points, student 5 scored 50 points, which can be interpreted as an understanding of 50% of the tested concepts. However, according to the learning achievement analysis at a threshold of 50%, student 5 only understood 8% of the concepts, i.e., C1 alone. The teacher concluded that student 5 needed an improved understanding of C2–C12 and inferred that student 5 answered the test questions by guessing. In the example, the AR trend in learning achievement was from high (50%) to low (0%) from C1 to C12, respectively.
For students 1–5, the learning achievements for the fractions unit were validated by their mathematics teachers. The assessment system accurately diagnosed student learning barriers and provided students and teachers with suggestions for improving learning efficiency.

5.2. Test Subject 2: Area Computation Unit

Figure 3 shows that concepts 1a to 10a of the area computation unit were tested. Therefore, ten skill level concepts were used to diagnose student learning problems. Three test questions were designed for each skill level, which resulted in 30 test questions. Table 7 shows the inheritance codes for the test questions (Qj) and concepts (Ci) for all skill levels, where j = 1, …, 30 and i = 1, …, 10. The total number of test questions, ps, is 30. The numbers of test questions, mi, for skill level concept Ci (i = 1, 2, …, 10) are 30, 27, …, 3, respectively. To account for incorrect answers resulting from carelessness rather than poor understanding, the threshold was set to 50%. A teacher could confirm that the student understood the skill level concept if the student had an AR higher than 50%.
Table 8 shows the learning achievement results for student 1, who correctly answered questions Q1–Q13, Q15–Q17, Q19, Q20, Q24, Q25, and Q30. Student 1 correctly answered 21 of the 30 test questions. Traditionally, if the 30 test questions are worth 100 points, student 1 scored 70 points, which can be interpreted as an understanding of 70% of the tested concepts. However, according to the learning achievement analysis at a threshold of 50%, student 1 only understood 50% (C1–C5) of concepts C1–C10. The teacher concluded that student 1 needed an improved understanding of concepts C6–C10. In the example, the AR trend in learning achievement was from high (70%) to low (50%) from C1 to C5, respectively.
Table 9 shows the learning achievement results for student 2, who correctly answered questions Q1–Q12, Q14–Q20, Q22, Q23, Q26, and Q28. Student 2 correctly answered 23 out of 30 test questions. Traditionally, if the 30 test questions are worth 100 points, student 2 scored 77 points, which can be interpreted as an understanding of 77% of the tested concepts. However, according to the learning achievement analysis at a threshold of 50%, student 2 only understood 70% (C1–C7) of concepts C1–C10. The teacher concluded that student 2 needed an improved understanding of C8–C10. In this example, the AR trend in learning achievement was from high (77%) to low (50%) for C1 to C7, respectively.
Table 10 shows the learning achievement results for student 3, who correctly answered questions Q1, Q3, Q4, Q7, Q9, Q10, Q14, Q15, Q17–Q19, Q21, Q23, Q25, Q27, and Q29. Student 3 correctly answered 16 out of 30 test questions. Traditionally, if the 30 test questions are worth 100 points, student 3 scored 53 points, which can be interpreted as an understanding of 53% of the tested concepts. However, according to the learning achievement analysis at a threshold of 50%, student 3 understood 70% (C1–C7) of concepts C1–C10 or 90% (C1–C9) of concepts C1–C10. The teacher inferred that student 3 made many careless mistakes.
Table 11 shows learning achievement results for student 4, who correctly answered questions Q1–Q25, Q27, Q29, and Q30. Student 4 correctly answered 28 out of 30 test questions. Traditionally, if the 30 questions are worth 100 points, student 4 scored 93 points, which can be interpreted as an understanding of 93% of the tested concepts. According to the learning achievement analysis at a threshold of 50%, student 4 understood 100% of concepts C1–C10. In the example, the AR trend for learning achievement was from high (93%) to low (67%) for C1 to C10, respectively. The teacher inferred that student 4 made careless mistakes.
Table 12 shows the learning achievement results for student 5, who correctly answered questions Q1, Q3, Q4, Q6, Q9, Q10, Q12, Q14, Q16, Q18, Q21, Q24, Q26, and Q30. Student 5 correctly answered 14 out of 30 test questions. Traditionally, if the 30 test questions are worth 100 points, student 5 scored 47 points, which can be interpreted as an understanding of 47% of the tested concepts. However, according to the learning achievement analysis at a threshold of 50%, student 5 did not completely understand any concept because none of the AR values for the 10 concepts exceeded 50%. The teacher inferred that student 5 answered the test questions by guessing and concluded that student 5 needed an improved understanding of concepts C1–C10. In this example, the AR trend for learning achievement was from high (47%) to low (33%) for C1 to C10, respectively.
For the area computation unit, the learning achievements of students 1–5 were validated by their mathematics teachers. The assessment system accurately diagnosed student learning barrier points and provided students and teachers with suggestions for improving learning efficiency.
The above results obtained in practical applications of the GBLHA indicate that it can accurately diagnose student learning barrier points and provide students and teachers with suggestions for improving learning efficiency. The GBLHA for developing mathematics skills assessment systems has three notable requirements. First, the test questions for each level of a mathematics unit must be carefully designed according to Gagné learning hierarchy theory and the expertise of elementary school teachers in assessing student learning achievement. Second, based on the learning hierarchy paths, the test questions for each level must be designed to enable inheritance coding. Third, when a student is tested by the proposed assessment system, the AR trend from high (lower-level concepts) to low (higher-level concepts) is used to depict barriers in student learning. Notably, some elementary schools in Taiwan have already adopted the proposed GBLHA for assessing student learning achievement in mathematics. Based on the test data analyses, student learning barrier points, and discussions with teachers, the threshold should be set to between 50% and 60% when evaluating the learning achievement of a student.
Figure 4 shows part of the mathematics skills assessment system. After participating in the learning activities, students were required to complete a questionnaire surveying their satisfaction with the mathematics skills assessment system, including its interface, functions, appearance, operation, stability, and response speed. Each item was scored from 1 (low satisfaction) to 10 (high satisfaction). Among the eighty students who used the system, 10% gave 6 points, 28% gave 7 points, 32% gave 8 points, 20% gave 9 points, and 10% gave 10 points. Thus, all students rated their satisfaction as 6 or higher, and the 90% of students who gave 7 points or more can be interpreted as satisfied with the system. Therefore, the system appears to have practical applications for diagnosing student learning achievement.

6. Conclusions

By integrating Gagné learning hierarchy theory and an inheritance coding approach, the GBLHA provided a systematic method of building a mathematics skills assessment system for diagnosing student learning problems. The ADDIE design model was used throughout the process of developing the system. The main contribution of this study is the use of the proposed GBLHA to build a system that can accurately identify learning barriers with fewer tests and can provide students and teachers with suggestions for improving learning efficiency in real-world classroom settings. The statistical indices for the test questions used in the mathematics assessment system revealed acceptable expert validity, internal reliability, test–retest reliability, and parallel-form reliability. The function and performance of the system were rated favorably by the students. Practical applications of the system in elementary schools showed that the GBLHA is superior to conventional methods in terms of accuracy in assessing student learning achievement. Since the learning achievements of students who used the system were validated by their mathematics teachers, we conclude that the proposed GBLHA performs well in mathematics skills assessment. Future research will add different mathematics units to test the learning problems of a large number of students and further improve the performance of the system.

Author Contributions

The inheritance coding approach and GBLHA were proposed by W.-L.T. and J.-T.T. Experiments were performed by C.-Y.H. Data analyses were performed by W.-L.T. and J.-T.T. The manuscript was prepared by C.-Y.H., W.-L.T., and J.-T.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Ministry of Science and Technology, Taiwan, R.O.C., grant numbers MOST 106-2410-H-153-013, MOST 107-2410-H-153-008-MY2, and MOST 107-2221-E-153-005-MY2. The APC was funded by MOST 107-2410-H-153-008-MY2.

Acknowledgments

The authors thank R.Y. Hong and S.Y. Tsai for their assistance in developing systems and experiments.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gagné, R.M.; Glaser, R. Foundations in learning research. In Instructional Technology: Foundations; Gagné, R.M., Ed.; Routledge: New York, NY, USA, 2010.
2. Goodson, L.A.; Slater, D.; Zubovic, Y. Adding confidence to knowledge. J. Scholarsh. Teach. Learn. 2015, 15, 20–37.
3. Gagné, R.M.; Briggs, L.; Wager, W. Principles of Instructional Design; HBJ College Publishers: Fort Worth, TX, USA, 1992.
4. Flynn, J.L. Cooperative learning and Gagné’s events of instruction: A syncretic view. Educ. Technol. 1992, 32, 53–60.
5. Singleton, C.M. Evaluate Whether Uri Treisman’s Model of Collaborative Learning is Consistent and/or Supportive of Robert Gagné’s Learning Hierarchy (ERIC Report ED559985). Available online: http://files.eric.ed.gov/fulltext/ED559985.pdf (accessed on 1 December 2019).
6. Gagné, R.M. The Conditions of Learning and Theory of Instruction; Holt, Rinehart & Winston: New York, NY, USA, 1985.
7. Khare, A.P.; Kumar, J. A framework for evaluation of e-learning applications in developing countries. Adv. Comput. Sci. Inf. Technol. 2015, 2, 62–67.
8. Shikuku, B.N.; Wasike, D.W. Problem based learning and its effect on Kenyan secondary school students learning outcomes in linear programming. J. Educ. Res. Behav. Sci. 2015, 1, 1–8.
9. Devi, K.R.P. Role of problem solving ability on enhancing students’ achievement. Indian J. Res. 2016, 5, 238–240.
10. Branch, R.M. Instructional Design: The ADDIE Approach; Springer: New York, NY, USA, 2009.
11. Molenda, M. In search of the elusive ADDIE model. Perform. Improv. 2003, 42, 34–37.
12. Morrison, G.R.; Ross, S.M.; Kalman, H.; Kemp, J.E. Designing Effective Instruction; John Wiley & Sons: New York, NY, USA, 2003.
13. Arkün, S.; Akkoyunlu, B. A study on the development process of a multimedia learning environment according to the ADDIE model and students’ opinions of the multimedia learning environment. Interact. Educ. Multimed. 2008, 17, 1–19.
14. Hsu, T.C.; Lee-Hsieh, J.; Turton, M.; Cheng, S.F. Using the ADDIE model to develop online continuing education courses on caring for nurses in Taiwan. J. Contin. Educ. Nurs. 2014, 45, 124–131.
15. Moradmand, N.; Datta, A.; Oakley, G. The design and implementation of an educational multimedia mathematics software: Using ADDIE to guide instructional system design. J. Appl. Instr. Des. 2014, 4, 37–49.
16. Azimi, K.; Ahmadigol, J.; Rastegarpour, H. A survey of the effectiveness of instructional design ADDIE and multimedia on learning key skills of futsal. J. Educ. Manag. Stud. 2015, 5, 180–186.
17. Huang, C.Y.; Tang, W.L.; Chen, C.H.; Tsai, J.T. Development of e-materials on the complex graphics area unit for elementary school students. In Proceedings of the SICE 2014 Annual Conference, Sapporo, Japan, 9–12 September 2014; pp. 1899–1901.
18. Huang, C.Y.; Tang, W.L.; Tseng, S.H.; Tsai, J.T. Action research on the development of digital material in the mandarin character recognition learning course for an elementary school in Taiwan. In Proceedings of the Asian Conference on Education & International Development, Kobe, Japan, 3–6 April 2016; p. 22930.
19. Tang, W.L.; Hsu, P.S.; Tsai, J.T. Study of instructional design for elementary school teachers applying digital materials to remedial instruction on mathematics in Taiwan. In Proceedings of the Asian Conference on Education & International Development, Kobe, Japan, 3–6 April 2016; p. 22928.
20. Tang, W.L.; Kang, H.M.; Tsai, J.T. Action research of mathematical learning effects. In Proceedings of the SICE 2013 Annual Conference, Nagoya, Japan, 14–17 September 2013; pp. 873–875.
21. Tang, W.L.; Tsai, J.T. Research on the development of mathematical digital materials on the fraction area unit for elementary school students. In Proceedings of the 2015 Asia-Pacific Conference on Education, Society, and Psychology, Seoul, Korea, 8–10 January 2015; p. 405.
22. Tsai, J.T.; Wu, Y.F.; Tang, W.L. Auto-reply system of questions and answers on mathematic speed unit for elementary school students. Curric. Instr. Q. 2012, 15, 179–210.
23. Algraini, S.; McIntyre-Mills, J. Human development in Saudi education: A critical systemic approach. Syst. Pract. Action Res. 2018, 31, 121–157.
24. Díaz, A.; Olaya, C. An engineering view for social systems: Agency as an operational principle for designing higher education access policies. Syst. Pract. Action Res. 2017, 30, 627–649.
25. Henshaw, J.L. Systems thinking for systems making: Joining systems of thought and action. Syst. Pract. Action Res. 2018.
26. Malik, S.I. Improvements in introductory programming course: Action research insights and outcomes. Syst. Pract. Action Res. 2018, 31, 637–656.
27. Chen, C.M. Intelligent web-based learning system with personalized learning path guidance. Comput. Educ. 2008, 51, 787–814.
28. Ketterlin-Geller, L.R.; Yovanoff, P. Diagnostic assessments in mathematics to support instructional decision making. Pract. Assess. Res. Eval. 2009, 14, 1–11.
29. Ozyurt, H.; Ozyurt, O.; Baki, A. Architecture and design process of the individualized assessment system integrable to distance education. Turk. Online J. Distance Educ. 2012, 13, 212–225.
30. Hauswirth, M.; Adamoli, A. Teaching Java programming with the Informa clicker system. Sci. Comput. Program. 2013, 78, 499–520.
31. Hwang, G.J.; Panjaburee, P.; Triampo, W.; Shih, B.Y. A group decision approach to developing concept effect models for diagnosing student learning problems in mathematics. Br. J. Educ. Technol. 2013, 44, 453–468.
32. Yang, T.C.; Hwang, G.J.; Yang, S.J.H.; Hwang, G.H. A two-tier test-based approach to improving students’ computer-programming skills in a web-based learning environment. Educ. Technol. Soc. 2015, 18, 198–210.
33. Lai, E.R.; Kobrin, J.L.; DiCerbo, K.E.; Holland, L.R. Tracing the assessment triangle with learning progression-aligned assessments in mathematics. Meas. Interdiscip. Res. Perspect. 2017, 15, 143–162.
34. Wilkins, J.; Norton, A. Learning progression toward a measurement concept of fractions. Int. J. STEM Educ. 2018, 5, 1–11.
35. Reigeluth, C.M.; Curtis, R.V. Learning situations and instructional models. In Instructional Technology: Foundations; Gagné, R.M., Ed.; Routledge: New York, NY, USA, 2010.
36. Merrill, P.F. Job and task analysis. In Instructional Technology: Foundations; Gagné, R.M., Ed.; Routledge: New York, NY, USA, 2010.
37. Kim, M.K.; Cho, M.K. Design and implementation of integrated instruction of mathematics and science in Korea. Eurasia J. Math. Sci. Technol. Educ. 2014, 11, 3–15.
38. Schenke, K.; Rutherford, T.; Lam, A.C.; Bailey, D.H. Construct confounding among predictors of mathematics achievement. AERA Open 2016, 2, 1–16.
39. MacDonald, B.L.; Wilkins, J.L.M. Subitising activity relative to units construction: A case study. Res. Math. Educ. 2019, 21, 77–95.
40. Jäder, J.; Lithner, J.; Sidenvall, J. Mathematical problem solving in textbooks from twelve countries. Int. J. Math. Educ. Sci. Technol. 2019.
41. Gagné, R.M.; Wager, W.W.; Golas, K.C.; Keller, J.M. Principles of Instructional Design, 5th ed.; Thomson/Wadsworth: Belmont, CA, USA, 2005.
42. Takker, S.; Subramaniam, K. Knowledge demands in teaching decimal numbers. J. Math. Teach. Educ. 2019, 22, 257–280.
Figure 1. Gagné learning hierarchy diagram for integers and fractions.
Figure 2. Analysis, design, development, implementation, and evaluation (ADDIE) model for the system development process.
Figure 3. Gagné learning hierarchy diagram for the area computation unit.
Figure 4. Part of the mathematics skills assessment system. (a) Four mathematics units in the system; (b) screenshot of test answers.
Table 1. Codes for test questions (Qj) and concepts (Ci) in each skill level for the fractions unit. Rows are grouped in threes because the three questions at each skill level share the same code.

Qj \ Ci       C12  C11  C10  C9   C8   C7   C6   C5   C4   C3   C2   C1
Q1–Q3          0    0    0    0    0    0    0    0    0    0    0    1
Q4–Q6          0    0    0    0    0    0    0    0    0    0    1    1
Q7–Q9          0    0    0    0    0    0    0    0    0    1    1    1
Q10–Q12        0    0    0    0    0    0    0    0    1    1    1    1
Q13–Q15        0    0    0    0    0    0    0    1    1    1    1    1
Q16–Q18        0    0    0    0    0    0    1    1    1    1    1    1
Q19–Q21        0    0    0    0    0    1    1    1    1    1    1    1
Q22–Q24        0    0    0    0    1    1    1    1    1    1    1    1
Q25–Q27        0    0    0    1    1    1    1    1    1    1    1    1
Q28–Q30        0    0    1    1    1    1    1    1    1    1    1    1
Q31–Q33        0    1    1    1    1    1    1    1    1    1    1    1
Q34–Q36        1    1    1    1    1    1    1    1    1    1    1    1
Sum (mi)       3    6    9    12   15   18   21   24   27   30   33   36
Table 2. Learning achievement results for student 1, who correctly answered Q1–Q17, Q19, Q20, Q24–Q26, Q29–Q32, and Q36.

                          C12  C11  C10  C9   C8   C7   C6   C5   C4   C3   C2   C1
No. of correct concepts    1    3    5    7    8    10   12   15   18   21   24   27
Sum of concepts            3    6    9    12   15   18   21   24   27   30   33   36
AR (%)                     33   50   56   58   53   56   57   63   67   70   73   75
Table 3. Learning achievement results for student 2, who correctly answered Q1–Q12, Q15–Q23, Q26, Q28–Q30, Q32, and Q35.

                          C12  C11  C10  C9   C8   C7   C6   C5   C4   C3   C2   C1
No. of correct concepts    1    2    5    6    8    11   14   15   18   21   24   27
Sum of concepts            3    6    9    12   15   18   21   24   27   30   33   36
AR (%)                     33   33   56   50   53   61   67   63   67   70   73   75
Table 4. Learning achievement results for student 3, who correctly answered Q1, Q3, Q4, Q7, Q8, Q10, Q14, Q15, Q17, Q18, Q21, Q23, Q26, Q28, Q30, Q33, and Q36.

                          C12  C11  C10  C9   C8   C7   C6   C5   C4   C3   C2   C1
No. of correct concepts    1    2    4    5    6    7    9    11   12   14   15   17
Sum of concepts            3    6    9    12   15   18   21   24   27   30   33   36
AR (%)                     33   33   44   42   40   39   43   46   44   47   45   47
Table 5. Learning achievement results for student 4, who correctly answered Q1–Q24, Q26–Q29, Q31, Q32, Q35, and Q36.

                          C12  C11  C10  C9   C8   C7   C6   C5   C4   C3   C2   C1
No. of correct concepts    2    4    6    8    11   14   17   20   23   26   29   32
Sum of concepts            3    6    9    12   15   18   21   24   27   30   33   36
AR (%)                     67   67   67   67   73   78   81   83   85   87   88   89
Table 6. Learning achievement results for student 5, who correctly answered Q1, Q2, Q5, Q6, Q9–Q11, Q14–Q17, Q20–Q22, Q25, Q26, Q29, and Q30.

                          C12  C11  C10  C9   C8   C7   C6   C5   C4   C3   C2   C1
No. of correct concepts    0    0    2    4    5    7    9    11   13   14   16   18
Sum of concepts            3    6    9    12   15   18   21   24   27   30   33   36
AR (%)                     0    0    22   33   33   39   43   46   48   47   48   50
Table 7. Codes for test questions (Qj) and concepts (Ci) in each skill level for the area computation unit. Rows are grouped in threes because the three questions at each skill level share the same code.

Qj \ Ci       C10  C9   C8   C7   C6   C5   C4   C3   C2   C1
Q1–Q3          0    0    0    0    0    0    0    0    0    1
Q4–Q6          0    0    0    0    0    0    0    0    1    1
Q7–Q9          0    0    0    0    0    0    0    1    1    1
Q10–Q12        0    0    0    0    0    0    1    1    1    1
Q13–Q15        0    0    0    0    0    1    1    1    1    1
Q16–Q18        0    0    0    0    1    1    1    1    1    1
Q19–Q21        0    0    0    1    1    1    1    1    1    1
Q22–Q24        0    0    1    1    1    1    1    1    1    1
Q25–Q27        0    1    1    1    1    1    1    1    1    1
Q28–Q30        1    1    1    1    1    1    1    1    1    1
Sum (mi)       3    6    9    12   15   18   21   24   27   30
Table 8. Learning achievement results for student 1, who correctly answered Q1–Q13, Q15–Q17, Q19, Q20, Q24, Q25, and Q30.

                          C10  C9   C8   C7   C6   C5   C4   C3   C2   C1
No. of correct concepts    1    2    3    5    7    9    12   15   18   21
Sum of concepts            3    6    9    12   15   18   21   24   27   30
AR (%)                     33   33   33   42   47   50   57   63   67   70
Table 9. Learning achievement results for student 2, who correctly answered Q1–Q12, Q14–Q20, Q22, Q23, Q26, and Q28.

                          C10  C9   C8   C7   C6   C5   C4   C3   C2   C1
No. of correct concepts    1    2    4    6    9    11   14   17   20   23
Sum of concepts            3    6    9    12   15   18   21   24   27   30
AR (%)                     33   33   44   50   60   61   67   71   74   77
Table 10. Learning achievement results for student 3, who correctly answered Q1, Q3, Q4, Q7, Q9, Q10, Q14, Q15, Q17–Q19, Q21, Q23, Q25, Q27, and Q29.

                          C10  C9   C8   C7   C6   C5   C4   C3   C2   C1
No. of correct concepts    1    3    4    6    8    10   11   13   14   16
Sum of concepts            3    6    9    12   15   18   21   24   27   30
AR (%)                     33   50   44   50   53   56   52   54   52   53
Table 11. Learning achievement results for student 4, who correctly answered Q1–Q25, Q27, Q29, and Q30.

                          C10  C9   C8   C7   C6   C5   C4   C3   C2   C1
No. of correct concepts    2    4    7    10   13   16   19   22   25   28
Sum of concepts            3    6    9    12   15   18   21   24   27   30
AR (%)                     67   67   78   83   87   89   90   92   93   93
Table 12. Learning achievement results for student 5, who correctly answered Q1, Q3, Q4, Q6, Q9, Q10, Q12, Q14, Q16, Q18, Q21, Q24, Q26, and Q30.

                          C10  C9   C8   C7   C6   C5   C4   C3   C2   C1
No. of correct concepts    1    2    3    4    6    7    9    10   12   14
Sum of concepts            3    6    9    12   15   18   21   24   27   30
AR (%)                     33   33   33   33   40   39   43   42   44   47
