A Cuboid Registers Topic, Activity and Competency Data to Exude Feedforward and Continuous Assessment of Competencies

Evaluating the competencies achieved by students within a subject and its different topics is a multivariable and complex task whose outcome should provide actual information on their evolution. A relevant feature when a continuous assessment (CA) rules this evaluation is to track the learning process so that pertinent feedforward may be harnessed to proactively promote improvement when required. As this process is performed via a number of activities, such as lectures, problem solving, and lab practice, different competencies are developed, depending on the recurrence and type of conducted activity. Measuring and registering their achievement is the leitmotif of competency-based assessment. In this paper, we assemble topic, activity and competency data into a 3D matrix array to form what we call a TAC cuboid. This cuboid showcases a detailed account of each student's evolution, aiding instructors and students to design and follow, respectively, an individualized curricular strategy within a continuous and aligned assessment methodology, which helps each student adequately adjust his/her level of development of each competency. In addition, we compare the use of TAC cuboids in grading a mathematics subject against a traditional CA method, as well as when a dynamical continuous assessment approach is considered to measure the achievement of mathematical competencies.


Introduction
Assessment is a very important part of the learning process, not only because it can measure the assimilation of some educational aspect, skill, competence, or knowledge, but because it can also be used as a compass for decision-making and fixing pathways so that it becomes a complete and true learning endeavour. In the literature, assessment definitions highlight some of the aforementioned objectives. For example, in [1], test development is the process of measuring some feature of an individual's knowledge, skills, abilities, interests, attitudes, or other characteristics. In [2], assessment is defined as a systematic collection of information about a student's learning process, using the time, knowledge, expertise and resources available, with the objective to inform decisions that affect student learning. Similar definitions can be found in [3,4].
Both the measurement and decision-making aspects are closely linked to the objectives of instruction [4,5], since these provide the cognitive development steps that must be achieved. In this line, Baker and Zuvela [6] used feedforward to provide first-year students with some exposure to their real capabilities prior to actual assessment, through online and distributed learning environments, so that students could get involved in meaningful engagement with their learning process, with assessment and enhanced student performance as the goal. Cathcart et al. [7] went further and, looking for student-centred agile teaching and learning methodologies, engaged in a framework involving feedforward closely related to a concurrent evaluation with feedback. Thus, assessment was continuously used in an agile learning environment, demonstrating that evaluation benefited not just future students but also current ones. To obtain meaningful feedforward for students, an important characteristic that arises in the continuous assessment (CA) process is that it must be systematic. An approach to CA of competencies under a blended-learning methodology, where the strengths of both strategies are combined, was developed in [8] by considering a dynamical component to encourage students' improvement of previously assessed competencies, which was coined Dynamical Continuous Discrete Assessment (DCDA).
Despite all the above advances in assessment techniques, in the end, students usually receive just a single number as their final mark reflecting their level of competency. Bearing in mind the different activities that may have been developed, such as project-based learning, lab sessions, collaborative learning, and so on, this single number is a bare average, a grain of sand that does not reflect the whole learning pyramid that has been built.
Mastering (acquisition and training) a set of competencies is linked to properly conducting the activities related to some topic(s). The grade for a given activity or topic may indicate that some competencies are lacking, but the student is not usually aware of which competencies should have been developed or improved. Concurrently, data on the progress of competency achievement are not normally communicated in the academic world.
Thus, the research question addressed in this paper is how to register all the relevant data of the learning process, i.e., all the activities conducted, linked to the learning objectives and the competencies to which they are related, so that a meaningful feedforward may be provided at any moment. This paper also aims to achieve thorough monitoring to provide a clear picture of the activities and level of competencies achieved on the different topics of a subject during the learning process.
This issue recalls the rationale for the Bologna Framework to provide a mechanism to relate national frameworks to each other so as to enable international transparency. This mechanism is the Diploma Supplement, which ensures that qualifications can be easily read and compared across borders [9,10]. The Diploma Supplement provides much more information than just the name of a degree, as it provides a wide range of information, including personal achievements, course credits, grades, and what a student has learned. It contains information on the type and level of qualification awarded, the institution that issued the qualification, the content of the courses, results gained and even details of the national education system.
The procedure and computational method developed to tackle the proposed research question generates a TAC (Topics addressed through Activities and Competencies achieved) cuboid that provides a device exuding exhaustive information on the activities performed and competencies achieved related to the topics covered in a subject, similar to how the Diploma Supplement depicts, in a transparent way, a degree under the Bologna Framework.
The method has been tested in the assessment of mathematical competencies but it can be used to register data and monitor the learning process of any subject where different topics, activities and competencies are on stage.
In Section 2, we highlight the relevance of feedforward within a formative methodology and the three elements of TAC. Section 3 describes how the TAC cuboid emerges in a natural way to register TAC data, and in Section 4, we show how the cuboid has been implemented to assess topics, activities and competencies in a Mathematics subject run in a first-year engineering degree. Sections 5-7 present the results and discussion, conclusions, and possible future work, respectively.

Feedback vs. Feedforward
Assessment is defined as formative when decisions based on it can positively influence the learning process [4,8] by providing students with some feedforward as input for their improvement and growth.
Assessment is summative if it occurs once the instruction has ended and the students do some activity to show their competencies in some relevant moment of evaluation. Then, students may receive only some feedback as a summary of their performance, not allowing them to take advantage of this information as part of their learning process. Kirschner et al. [11] highlight that summative assessment ignores the structures that constitute human cognitive architecture, despite the abundant evidence from empirical studies indicating that minimally guided instruction is less efficient and less effective than instructional approaches that emphasise guidance of the student learning process.
Although some students may consider the summative assessment to be important, since it reflects the degree of fulfillment of the objectives of the instruction, and it is a mark won and remaining in their curriculum, in our opinion, only the formative assessment allows them to develop skills, knowledge and competencies during the learning process. That is, formative assessment gives room to provide the students with some feedforward that, in this way, allows students to gain control of the evolution of their competencies.
Indeed, decision-making based on the results of an assessment usually comes on the part of the instructor and the institution, though in a curriculum following the principles of andragogy [12], the students must be an active element in their education. Recent results have pursued this approach, in which feedforward prevails over feedback, with different environments [13][14][15], in some cases making it even more effective when feedforward comes out through a dialogue or verbal communication.

Topics, Activities and Competencies (TAC)
According to Tyler [16] and Nilson [5], possibly the most important feature of a curriculum is the detailed description of the learning objectives. This is most certainly the case in all subjects where the assessment of learning outcomes on the topics covered is fundamental both at the student level and the curriculum level [17,18]. This assessment seems to be significantly useful when it is part of a CA methodology [19,20].
When STEM subjects are on stage, there are other relevant features in addition to the learning objectives, e.g., mathematical language skills, understanding of that language, intrinsic competencies, etc. The Danish KOM project [21] considers several relevant points concerning the mathematical competencies to be achieved and presents a guide seeking coherence and evolution in mathematics teaching, paying attention to how to evaluate the mathematical competencies. That study outlined eight mathematical competencies and how they are displayed in the different activities performed while covering the different topics that compose a syllabus.
Activities performed to achieve competencies on some given topic so that the learning objectives are reached are common beyond STEM subjects and occur frequently in any learning process. Hence, even though the next section exemplifies mathematical competencies, the message conveyed may be exported to any other discipline where competencies and activities are properly detailed, both in terms of their development and assessment.

Assessment of Activities Based on Competencies
The SEFI Mathematics Working Group [22], concerned with engineering education and inspired by the KOM project, clustered mathematical competencies into two groups:
• Competencies related to asking and answering questions:
1. Thinking mathematically. Involves recognising mathematical concepts, understanding their scope and limitations, and extending their scope by abstraction and generalization of results;
2. Reasoning mathematically. Includes understanding the notion of proof, recognising the ongoing ideas in proofs and the ability to distinguish between different kinds of mathematical statements;
3. Posing and solving mathematical problems. Comprises identifying and specifying mathematical problems and the ability to solve them;
4. Modelling mathematically. Deals with analysing and working with existing models, and the ability to conduct active modelling, too;
• Competencies related to managing mathematical language and tools:
5. Representing mathematical entities. Concerns understanding and using different representations of mathematical objects, phenomena and situations;
6. Handling mathematical symbols and formalism. Includes using and properly manipulating symbolic statements and expressions;
7. Communicating in, with and about mathematics. Involves understanding mathematical statements made by others and being able to express oneself mathematically to an audience;
8. Using aids and tools. Includes skills regarding the use of available digital aids and tools, as well as knowledge of their capabilities and limitations.
These competencies overlap, but since they emphasise different aspects, they may be considered separately. Distinguishing the relevant competencies is not a closed task. Consider, for example, the competencies outlined by García et al. [23]: (a) self-learning; (b) critical thinking; (c) ICT usage; (d) problem solving; (e) technical communication; and (f) team work. Their (a) refers to engaging in independent life-long learning abilities; (b) is related to analysing, synthesising and applying relevant information; (c) regards using modern digital technologies; (d) and (e) are related to 3 and 7; and (f) is related to the ability to work efficiently in a multidisciplinary team.
In any case, if these or some other competencies are intrinsically relevant in the learning process, there should be a constructive alignment in the assessment of mathematical competencies and the grading of students in every moment of evaluation (MoE), where a MoE is any assessed activity, as students take goals and settings more seriously when they are part of their assessment [24][25][26].
In order to register students' evolution and the level of achievement of their mathematical competencies, we will evaluate how each activity performance contributes to the development of the competencies related to that given activity by using a binary rubric assessment.

Rubrics Assessing Competencies
Traditionally, each student activity has been evaluated with a quantitative grade or a letter. Instead of providing a single mark, this paper aims to evaluate the impact of each activity on the overall competencies. Therefore, it is necessary to know the relation between each activity and the competencies involved, that is, to know what part of each competency is embedded in the performance of an activity. The constructive alignment concept will be followed, which means that students build up meaning via conducting relevant learning activities [24].
Binary evaluation lies within the assessment by a rubric framework. A rubric is a list of criteria for students' work that describes different levels of performance quality [27]. Rubrics provide a directed structure to observe the quality of each student performance, but, moreover, they may help teachers and students judge the quality and the progression of student performance.
Rubrics are categorized as analytic or holistic depending on whether all criteria involved are evaluated separately or globally. The former are suited to formative assessment, as students may learn which features of their work need attention; they also suit grading that will feed decision-making in the immediate future. The latter fit situations in which students do not see the grading results and the information is used only for grading.
Rubrics can be used for different purposes in assessment procedures, including, inter alia, promoting learning in the classroom (instructional rubrics) [28] and fixing a set of clear expectations or criteria on what is valued in a topic or activity [29] (analytic-trait rubrics [30,31] or skill-focused rubrics) [32]. According to [33], rubrics can be used to teach and evaluate within a formative student-centered assessment, as they help develop an objective judgment about the quality of the performance. Merely explaining a rubric appears to steer student activity [28], and Hafner and Hafner [34] provide evidence that rubrics are an effective tool for peer grading by students in the realm of a university biology classroom. Schafer et al. [35] state that higher scores are the consequence of a clear operational performance definition. In short, rubrics have helped teachers and students establish a basis of common understanding for rating the performance and behavior of students during activities [36,37]. An assessment evaluation rubric with a wide scope may be found in [18], which was introduced to support, carry out and promote the systematic evaluation of learning outcomes.
We cannot dismiss that there are critical points of view on using rubrics in the educational system in general, as in [38], or on their misuse when they are poorly designed or implemented [39]. Sharing their explicit criteria with students has also been questioned, as this can lead to instrumental learning [40,41]. However, there is accumulated empirical support for their mainly positive effects [42][43][44], and we will focus on rubrics that use generic traits based on analytic performance criteria to evaluate the achievement or development of mathematical competencies.

Materials and Methods
The data regarding the topics covered, activities performed and competencies analysed come from all the activities executed by 118 students following the compulsory annual subject Mathematics I corresponding to the curricula of the BEng Degree in Aerospace Engineering at the Technical University of Valencia (UPV) during the 2020/21 academic year [45]. These students had been evaluated following a DCDA methodology [8] and had fulfilled a number of different activities during the course, consisting of: • assignments controlled by tests prior to relevant exams; • weekly lab sessions following a flipped methodology and closely linked to the topics and learning objectives of the subject; • individual lab exams Lex1 and Lex2 at the end of each semester, assessing their use of the technological competencies achieved during the weekly sessions linked to the topics and learning objectives covered; • four relevant exams described in Section 4. In all these performed activities, the competencies worked out and the topics covered were clearly identified.

The TAC Cuboid Hatchling
The KOM project [21], Chapter 9, declares that the core idea of a competency is an insight-based readiness to act, where "the action" can be physical, behavioural (including oral) or mental. Therefore, a valid and comprehensive assessment of the mathematical competencies of a person must start by identifying the presence and extent of these features in relation to the mathematical activities in which the respective person has been or is being involved. A mathematical activity can consist, for example, of solving a pure or applied mathematical problem, understanding or constructing a concrete mathematical model, reading a mathematical text with a view to understanding or acting on it, proving a mathematical theorem, studying the interrelations of a theory, writing a mathematical text for others to read, or giving a presentation. Different activities have different impacts on each of the eight competencies [25] recalled in Section 4.1. For example, a theoretical question provides a stronger input to the first four competencies, while a problem-solving activity is more linked to other competencies, such as representing mathematical entities, handling mathematical symbols and, in some cases, using calculus tools. The first task thus consists of estimating the usage of each competency while some activity or different types of activities are conducted and distributing the impact weights on a binary basis (1-weighted). With this aim, a collection of different types of activities is needed, as exemplified in the eighth chapter of the KOM project, where we find a descriptive matrix gathering subjects and competencies. Different types of activities present different impact spectra on the competencies considered. The SEFI MWG developed a study on the relationship between kinds of activities and competencies [22].
Hence, a picture of the influence of the different activities on the achievement of the desired competencies is needed so that, consequently, activities can be designed with a competency-oriented perspective in the curricula. Clearly, this quantification should be done by teacher communities in order to obtain a normalized procedure of assessment. It is then possible to construct a master table of the impacts of activities on the competencies related to a given topic. Each class activity i (lectures, projects, etc.) should have some impact a_ij on each of the competencies C_j considered. Moreover, each type of activity will have some impact or weight w_i on the assessed topic. Table 1 may be considered a master table where different activities of the same type might attribute different impacts to different competencies. This master table works as a reference table, and some minor tuning might be carried out in specific activities. In this way, a type of activity may be split into several subclass activities, which have to be 1-weighted in order to estimate a grade for each type of activity, or else each subclass activity may be considered as a type of activity. Table 1. Activity/topic impact on basic competencies.

Since a given topic is covered by different types of activities, a quantitative grade G of topic T can be obtained by means of G = ∑_i w_i g(A_i), where w_i fixes the relevance of each activity in topic T. Finally, a third dimension in the model is represented by the list of topics to be studied in a course, each of them being given a specific relevance represented by a weight. The final grade FG may be calculated by means of the expression FG = (1/100) ∑_i p_i G_i, where p_i represents the relevance of topic T_i in the subject, with ∑_i p_i = 100, and G_i is the grade of topic T_i. Table 1 can be considered as a 2D matrix that measures the impacts of activities on competencies related to a given topic (Topic 1), as shown in Figure 1, in which the elements a_ij have been replaced by a_ijk, where k = 1 indicates the topic being evaluated.
When one considers adding the impacts of a different topic (Topic 2), a 3D matrix arises (Figure 2) with two levels, one per topic. This 3D matrix is called a cuboid, and it can be upgraded with as many levels as topics are evaluated.
In brief, a TAC cuboid is a 3D matrix exuding the connection between activities, topics and competencies, which are quantified by assigning appropriate weights to each topic-activity-competency knot, as depicted in Figure 3. Namely, element a_ijk of the TAC cuboid represents the weight corresponding to the impact of activity A_i on competency C_j within topic T_k of the curricular list of topics.
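As a minimal illustration, the TAC cuboid described above can be represented as a plain 3D array indexed by topic, activity and competency. This is only a sketch with hypothetical dimensions and weights; the paper does not prescribe a concrete implementation.

```python
def make_tac_cuboid(n_topics, n_activities, n_competencies):
    """Create an all-zero TAC cuboid: element a[k][i][j] holds the impact of
    activity A_i on competency C_j within topic T_k."""
    return [[[0.0] * n_competencies for _ in range(n_activities)]
            for _ in range(n_topics)]

# Hypothetical example: 2 topics, 3 activity types, 4 competencies.
cuboid = make_tac_cuboid(n_topics=2, n_activities=3, n_competencies=4)

# Register a (hypothetical) impact weight of activity A_1 on competency C_2
# within topic T_1, i.e., a_121 in the notation of the text:
cuboid[0][0][1] = 0.5
```

Any other topic-activity-competency knot keeps its zero value until an impact weight is registered there, mirroring how the master table is filled in.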

Personal TAC Cuboid
A personal TAC cuboid (pTAC) is an all-zero cuboid assigned to each student at the beginning of a course; each of its elements changes its value as it registers information on: • One of the different types of activities performed; • One of the different topics covered that conform to the learning objectives of the subject; • One of the competencies considered, or a combination of them if their achievement is assessed indistinguishably.
When some activity on a given topic is performed, it is assessed by means of a binary (true/false) evaluation from the perspective of each competency involved. This binary assessment generates a vector that is adequately placed in the pTAC. To do so, each non-zero binary component of the assessed competencies is multiplied by the weight of the subclass activity assessed in that MoE and placed within the cuboid.
Whenever a subclass activity is performed several times, the number of occurrences within that subclass has to be saved, and the new row placed within the cuboid must be duly averaged with the values of the existing row by using an activity frequency factor. Considering all the activities performed by each student in a course and pushing them into the corresponding pTAC with binary assessment, a tool to implement competency-based continuous assessment is obtained.
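The registration step just described can be sketched as follows. This is a simplified illustration under assumptions: the binary competency vector of a MoE is multiplied by the subclass weight and merged into the existing cuboid row with a running average as the frequency factor. The function name `push_assessment` and the activity label "E11" are hypothetical.

```python
def push_assessment(row, counts_key, counts, weight, binary_vector):
    """Average a weighted binary assessment vector into an existing cuboid row.

    row           : list of accumulated competency values for this subclass
    counts        : dict mapping subclass label -> number of occurrences so far
    weight        : weight of the assessed subclass activity
    binary_vector : 1 if the competency was shown in this MoE, 0 otherwise
    """
    n = counts.get(counts_key, 0)
    for j, shown in enumerate(binary_vector):
        new_value = weight * shown
        # running average over the occurrences of this subclass activity
        row[j] = (row[j] * n + new_value) / (n + 1)
    counts[counts_key] = n + 1
    return row

row = [0.0, 0.0, 0.0, 0.0]
counts = {}
push_assessment(row, "E11", counts, weight=2.0, binary_vector=[1, 0, 1, 0])
push_assessment(row, "E11", counts, weight=2.0, binary_vector=[1, 1, 0, 0])
# row is now [2.0, 1.0, 1.0, 0.0]: competencies shown twice keep the full
# weight, those shown once are averaged down, those never shown stay at zero.
```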
As the course progresses, the students should increase their overall competencies. Thus, an activity that is properly and well-conducted should have a positive impact on the related results which, in turn, might redeem past deficiencies or gaps or lead to reconsidering previous assessments of the level of competency, as discussed in [8], where a dynamical component was included as a continuous assessment approach. For this purpose, a grade impact amplifier as a continuous assessment procedure (GI_A|CAP) is introduced to recognise the improvement in the level of competency or knowledge achieved throughout the learning process. It consists of a set of weights applied to the new activity rows placed within the pTAC. This may be monitored by using adequate GI_A|CAP stages after relevant MoEs in order to establish different controlling steps along the programme. These impact amplifiers are established only in the upward sense, as a tool to motivate students to develop a continuous effort to improve their overall competency in the subject.
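A minimal sketch of the upward-only amplifier, assuming a single multiplicative weight and a 0-10 grade scale (both are assumptions; the paper defines GI_A|CAP only as a set of stage-dependent weights):

```python
def amplify_upward(previous_grade, new_grade, amplifier=1.5):
    """Upward-only grade impact amplifier: an observed improvement is rewarded,
    but a weaker later performance never lowers the registered level."""
    if new_grade > previous_grade:
        # amplify the observed gain, capped at the maximum grade of the scale
        return min(previous_grade + amplifier * (new_grade - previous_grade), 10.0)
    return previous_grade

# An improvement from 5.0 to 7.0 is amplified to 8.0;
# a later drop from 7.0 to 5.0 leaves the registered 7.0 untouched.
```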
The keystone is to design a well-balanced TAC cuboid that enables each activity performance to be computed. It must be carefully designed, considering the linkage between each activity, competency and topic, based on historical data and tuned in a timely fashion, if appropriate, according to feedback obtained during the last course.

Targeting with TAC Cuboids
As the course advances, each pTAC is filled with the binary assessment results of the different activities conducted. At any given moment, the pTAC provides information on the activities performed and their impact on the level of competency achievement related to the topics covered. The internal structure of the cuboid, far from being a simple scalar, works like a Computed Axial Tomography showing the status quo of the student's mastery of the topics and learning objectives covered up to that moment. This complex information provides data that iScholars [46], digital natives used to an increasingly digitalised society, may find more meaningful than single grades. With it as a handy source on the evolution and actual state of the learning process of each student, this registering methodology works as a valid tool to address continuing mathematical deficiencies with advanced diagnostic testing [47].
On the other hand, the educational objectives can be placed as reference cuboids to establish some target levels, which can be called target TACs (tTAC). These tTACs may guide individualized pathways for students to reach the learning objectives. Targeting is a way to predict future results which embraces studies that seek some class of predictors [48], or those based on conducting daily activities such as quizzes or tests to follow continuous progression as an estimate of the final results [49]. Finally, aggregating every pTAC, we may depict the development of competencies within the whole student collective and their topic mastery in the learning objectives.
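One illustrative reading of targeting is an element-wise gap between a tTAC and a pTAC; this is a sketch of that idea, not a procedure given in the paper.

```python
def competency_gaps(ptac, ttac):
    """Element-wise gap between a target TAC (tTAC) and a personal TAC (pTAC).

    Positive entries flag topic/activity/competency knots where the student is
    still below target, suggesting where feedforward should be directed."""
    return [[[max(t - p, 0.0) for p, t in zip(prow, trow)]
             for prow, trow in zip(pmat, tmat)]
            for pmat, tmat in zip(ptac, ttac)]

# Hypothetical single-topic example with two activities and two competencies:
ptac = [[[0.5, 1.0], [0.0, 0.5]]]
ttac = [[[1.0, 1.0], [0.5, 0.5]]]
gaps = competency_gaps(ptac, ttac)
# gaps == [[[0.5, 0.0], [0.5, 0.0]]]: the first competency is below target in
# both activities, while the second has already reached its target level.
```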

Implementing the Computational Algorithm
A comparison between the evaluation method based on TAC cuboids and the continuous assessment run with Mathematics I students in Aerospace Engineering at UPV [45] has been carried out. In this degree, a formative dynamical continuous discrete assessment (DCDA) [8] has been developed for over 11 years, which has proven to encourage students toward the improvement of previously assessed competencies when chains of topics had been recognised. This meant a shifted re-assessment of their proven mathematical competencies when used in ulterior activities on topics that belong to those chains.
TAC cuboids systematically register data on the learning process. We will study whether there is any significant deviation between the grading provided by both methods in terms of final grades. Independently, the TAC method carries a wealth of information on the whole learning process that awaits deep harvest.
Since we are dealing with Aerospace Engineering students, who traditionally show good mathematical skills and good levels of competency achievement, the authors decided to cluster the eight mathematical competencies considered in [22] into just four mathematical competencies characterised by the following facets:
• M1 Capacity of reasoning. This involves the "thinking mathematically" and "reasoning mathematically" competencies, for which students should show this capability in other contexts;
• M2 Ability to solve problems. This involves the "posing and solving mathematical problems" and "modelling" competencies of SEFI MWG. Here, attention is paid to the students' capabilities in calculation procedures and their accuracy in results. Posing and modelling complex problems is addressed in more advanced subjects;
• M3 Capacity of using formal language and communication. This clusters the "representing mathematical entities", "handling mathematical symbols and formalism" and "communicating in, with and about mathematics" competencies of SEFI MWG. Competency in this cluster means an adequate use of symbolic and written language and representation of mathematical objects. Oral language should also be observed when appropriate. Communication in and about mathematics must be understandable and contain the expected lexis;
• M4 Use of tools and aids. This involves the correct use of scientific computing algebraic systems, calculators, measuring instruments, and their implemented help tools.
The subject covers 4 topics: • Calculus I (CI): Devoted to functions of one real variable, with antiderivatives, definite and improper integration and their applications, and an introduction to ordinary differential equations and initial value problems; • Linear Algebra (LA): Devoted to linear algebra with matrix calculus, vector spaces and matrix diagonalisation; • Calculus II (CII): Devoted to differential calculus of two or more variables, and multiple, line and surface integrals and their applications; • Series: Devoted to numerical, power and Fourier series.
Their specific learning outcomes are standard and need not be recalled here; for the interested researcher, they are available in [45]. Having fixed the competencies of interest and the topics, we need to display the set of activities where these competencies will be analysed. This set must be split into the different topics, and each of them must be accompanied by its corresponding MoE so that their assessment and feedforward are feasible. They happen to occur as follows: To follow the course development, we consider an activities backlog where each question of each MoE is displayed as an independent activity, so that the register of performance data is more comprehensive and meaningful. Whenever some questions have different values according to their extension or complexity, we create subclasses of activities. With this structure, a master TAC is built to facilitate the learning process assessment, as depicted in Table 2. The backlog consists of the 72 activities mentioned above and enables the analysis of the results of any student and topic. For instance, the backlog corresponding to topic TP2 of one student, whom we will call Student X, is shown in Table 3. The pTAC of any student gathers all registered data concerning the activities he/she has performed. For instance, once the 72 activities have been assessed, the pTAC of Student X may be downloaded; its information is shown in Table 4. The acronyms used in the columns of the pTAC are the same as or similar to the ones used in Table 3. The new ones have the following meanings: • #Act: number of activities performed in the given subclass activity; • D = ∑WA: sum of the weights of the subclass activities within each activity type, obtained by adding the corresponding weight column in the backlog table, with N_t being the number of activities performed;
• w_{S_k}: the weight of the activity subclass (D_t is used for averaging purposes when new binary assessment vectors are pushed into the pTAC); • SM_i: sum of the corresponding M_i values for the group of activities performed in topic t, SM_i = ∑_k M^t_{ik}, where M^t_{ik} is the binary component of competency M_i in topic t for activity k; • wSM_i: sum of all products of M_i times the weight of each subclass activity, i.e., wSM_i = ∑_k w_{S_k} M^t_{ik}. The grade g(M^t_i) obtained in competency M_i for topic t is given in the g(M_i) column by g(M^t_i) = wSM_i / D_t. The grade g(A^t) of the activities on a given topic t is obtained analogously as a weighted average: for each topic t, its grade comes from evaluating all the activities belonging to the different subclasses S in topic t.
The final grade FG is the weighted average of the topic grades, $FG = \sum_k w_{t_k}\, g(t_k) / \sum_k w_{t_k}$, where $g(t_k)$ is the grade of topic $t_k$ and $w_{t_k}$ is its weight in the course programme. Table 4 also includes a contributive final grade (CFG), which represents the grade earned throughout all the activities assessed up to a given moment; it is obtained as the same weighted sum restricted to the topics already assessed, still divided by the total weight. In addition to the above, some other information may be obtained from each pTAC, which also appears summarised in the above table. Therefore, we may be interested in obtaining the following data,
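The final and contributive grades can be sketched in the same spirit (hypothetical topic weights following the 100-point distribution discussed later; the formulas are our reading of the definitions above):

```python
# Sketch: FG and CFG from topic grades and programme weights (hypothetical values).
w_t = {"TP1": 20, "TP2": 20, "TP3": 20, "TP4": 20, "LP": 10, "T": 10}  # weights sum to 100
g_t = {"TP1": 0.8, "TP2": 0.6, "TP3": 0.7}                              # topics graded so far

def final_grade(grades, weights):
    # Weighted average over the topics present in `grades`; equals FG once
    # every topic of the course has been graded.
    return sum(weights[t] * grades[t] for t in grades) / sum(weights[t] for t in grades)

def contributive_grade(grades, weights):
    # CFG: contribution already earned, relative to the full weight distribution.
    return sum(weights[t] * grades[t] for t in grades) / sum(weights.values())

print(final_grade(g_t, w_t))
print(contributive_grade(g_t, w_t))
```

With three of six topics graded, the weighted average of the graded topics is 0.7, while the contributive grade, measured against the full 100 points, is only 0.42: the CFG grows toward the FG as more activities are assessed.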

Implementing the Dynamical Computational Assessment
As may be observed, the MoE TP4 covers topics already addressed in TP1 and TP3, which means that there is a logical chain of topics, TP1 → TP3 → TP4, which was exploited to apply the DCDA methodology in [8]. This has provided good results in engaging students to improve their level of competency on topics previously assessed, and in recognising when this positive evolution happens and when all the mathematical competencies of the course have been achieved.
This dynamical assessment assumes that a good performance in an ulterior related activity showing a higher level of competency should be taken into account and lead to some reconsideration of previous MoEs. The TAC cuboid system streams the evolution of the competencies achieved; thus, when an activity is correctly performed according to the defined rules, the system may issue a reassessment of previous MoEs, directly improving the grades of the predecessor topics within the identified chains of topics.
In this subsection, we follow the dynamical continuous assessment approach in its simplest form, fixing some conditions that trigger re-assessments. These conditions are set to ensure that students have achieved some general basic competencies in all parts of the subject. Other instructors may choose to make them more or less demanding, or simply ignore them.
In our implementation, we have defined the next workflow within a dynamical continuous assessment: • Activity E42 may dynamically re-assess activities E12 and E13; • Activity E43 may dynamically re-assess activity E22; • Activity L32 may dynamically re-assess activity E32.
As triggering rules, we required the grade of the triggering activity to be greater than 40%, in which case GI_{A|CAP} = 1, or greater than 75%, in which case GI_{A|CAP} = 2. Each dynamical assessment rule generates a binary vector with all components equal to 1 (grade 100%) when the retroactivity effect is applied.
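A minimal sketch of these triggering rules follows (the `RULES` mapping encodes the workflow listed above; the representation of activities and the number of course competencies are assumptions for illustration):

```python
# Sketch of the dynamical re-assessment triggers: an activity grade above 40%
# issues a re-assessment with GI = 1, and above 75% with GI = 2; the generated
# binary vector has every competency component set to 1 (grade 100%).
RULES = {"E42": ["E12", "E13"], "E43": ["E22"], "L32": ["E32"]}
N_COMPETENCIES = 7  # assumed number of course competencies

def dynamical_reassessment(activity, grade):
    if activity not in RULES or grade <= 0.40:
        return []  # no rule defined, or trigger condition not met
    gi = 2 if grade > 0.75 else 1
    vector = [1] * N_COMPETENCIES  # all-ones binary assessment vector
    return [(target, gi, vector) for target in RULES[activity]]

print(dynamical_reassessment("E42", 0.80))
```

For example, a grade of 80% in activity E42 re-assesses E12 and E13 with GI = 2, while a grade of 50% in E43 re-assesses E22 with GI = 1.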
The weights of the master TAC must not be changed, and they should be known from the beginning. Due to the additive nature of the pTAC, an impact amplifier of order p, with p a natural number, can be enforced, if agreed, by introducing p identical assessment vectors with each activity conducted. This increases the impact of the activity p times and accelerates the convergence of previous grades toward the latest assessment.
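The effect of the impact amplifier can be illustrated with a small sketch (hypothetical weights and grades; `push` is our illustrative helper, not part of the described system):

```python
# Sketch of the impact amplifier of order p: because pTAC grades are additive
# weighted averages, pushing p identical assessment vectors pulls the running
# grade toward the new assessment p times as strongly.
def push(history, weight, grade, p=1):
    """Append p identical (weight, grade) assessments and return the new average."""
    history.extend([(weight, grade)] * p)
    total_w = sum(w for w, _ in history)
    return sum(w * g for w, g in history) / total_w

history = [(1.0, 0.2), (1.0, 0.2)]          # previous weak performance
print(push(list(history), 1.0, 1.0, p=1))   # single re-assessment
print(push(list(history), 1.0, 1.0, p=3))   # amplified re-assessment, p = 3
```

With p = 1 the running grade moves from 0.2 to about 0.47, while p = 3 pushes it to 0.68, converging faster toward the latest (perfect) assessment.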
Our Student X fulfils the triggering conditions, and the dynamical assessment applied to the mentioned MoEs generates the assessments given in Table 5.
For this purpose, all activities performed at every MoE were assessed following the TAC methodology; in this way, for each student, we obtained paired data of these assessments and the ones originally done following DCDA. Since the sample size is greater than 40 items without outliers, and the Shapiro–Wilk test provides a p-value = 0.0786389, at a significance level of 0.05, we can assume that the sample follows a normal distribution.
The hypotheses concern a new variable d, based on the difference between paired grades from the two data sets corresponding to each of the moments of evaluation TP1, TP2, TP3, TP4, LP, T and FG. We establish the hypotheses: H0: the difference in the average scores is not significant; both procedures generate comparable results. H1: the difference in the average scores is significant; the procedures do not give comparable results. Table 7 shows the hypothesis test results for the difference between paired means. Since the p-value is greater than the significance level (0.05) in every MoE, we cannot reject the null hypothesis and conclude that both assessment procedures give comparable results. The values of the weights in the master TAC cuboid are chosen by the instructor so that they reflect the previously agreed-upon scoring strategy.
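The paired comparison can be sketched with the standard library alone (the grades below are hypothetical, not the study's data; in practice the Shapiro–Wilk check and the p-value lookup would come from a statistics package):

```python
# Sketch of the paired comparison: d is the per-student difference between the
# two assessment procedures, and the paired t statistic is
#   t = mean(d) / (stdev(d) / sqrt(n)),
# to be compared against the t distribution with n - 1 degrees of freedom.
from math import sqrt
from statistics import mean, stdev

tac = [7.1, 6.4, 8.0, 5.9, 9.2, 6.8]   # grades under TAC (hypothetical)
dcda = [7.0, 6.6, 7.9, 6.1, 9.0, 6.9]  # grades under DCDA (hypothetical)

d = [a - b for a, b in zip(tac, dcda)]
t_stat = mean(d) / (stdev(d) / sqrt(len(d)))
print(round(t_stat, 3))
```

A t statistic this close to zero corresponds to a large p-value, so with these toy numbers H0 would not be rejected, matching the kind of outcome reported in Table 7.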

Insight into a Sample of Students with Different Performances
In this subsection, we take a closer look at 7 of the 118 students, who performed with different levels of success: two A-level students (St-A1 with A+ and St-A2 with a standard A), two C-level students (St-C1 with C+ and St-C2 with a standard C), two D-level students (St-D1 with D+ and St-D2 with a standard D), and one F-level student.
This analysis was carried out in two stages. First, Table 8 gathers the results of this sample without taking into account the dynamical assessment, i.e., without re-assessing competencies that students had proven to have improved along chains of topics whose activities used previously assessed competencies. If students receive acknowledgement of improving their level of achievement of competencies along chains of topics, as considered in the DCDA approach developed in [8], Table 9 displays the results of that re-assessment. The results obtained by both methodologies are quite alike. Figure 4 provides an illustrative image of the differences between the TAC cuboids and the seminal DCDA approach [8]. This confirms that the general results of the previous section carry over to the different types of students. The dynamical assessment with TAC seems, in general, to provide a slightly higher recognition in lower marks and a smaller increase in higher marks. Clearly, the size of this effect depends on the design of the rules for dynamical continuous assessment; in this implementation, the rules mimic those deployed at UPV.

Discussion
The TAC cuboid registers data on topics, activities and competencies to assess students' performance and provide feedforward. Its implementation is flexible: the dynamical re-assessment may be ignored, which makes the implementation much simpler, or considered with different levels of intensity. This is a decision to be made by instructors in advance and may be applied with different emphasis depending on purpose, intention and, obviously, possible local constraints.
The weight assigned to each activity within a topic, or to the set of activities that compose a given moment of evaluation, may be defined to highlight the importance of some activities over others. In this way, the instructor may introduce any number of activities into the learning process, the keystone being their weight. Along these lines, in the case presented in this paper, the activities developed under FT received little weight, so that, even though there might have been plenty of them, their input into the final grade is limited.
For convenience, the weights of the topics within the overall assessment process follow a 100-point distribution, but any other arrangement is feasible; a transformation might be required to obtain a final grade in the standard range of each local community. TAC-based assessment is additive but used with a formative goal; hence, new activities may be added, and if dynamical assessment rules are triggered, students' improvement in their level of achievement of competencies is duly acknowledged. The main difference between the DCDA- and TAC-based methodologies is the procedure to assess an activity through binary indicators. Rather than producing scalar grades, TAC cuboids generate vector grades related to some predefined competencies. TAC cuboids register a detailed record of the progress of students, who may receive instant feedforward.
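The contrast between scalar and vector grades can be illustrated with a tiny sketch (hypothetical competency labels; deriving a scalar as the fraction of achieved indicators is just one possible reduction):

```python
# Sketch: a DCDA-style assessment yields one scalar per activity, while a
# TAC-style assessment yields a binary vector over predefined competencies,
# from which a scalar can still be derived when a single grade is needed.
competencies = ["C1", "C2", "C3", "C4"]   # hypothetical competency labels
vector_grade = [1, 0, 1, 1]               # TAC binary indicators for one activity
scalar_grade = sum(vector_grade) / len(vector_grade)  # one possible scalar reduction
print(vector_grade, scalar_grade)
```

The vector preserves which competencies were achieved (here, C2 was not), information that a single scalar of 0.75 would hide.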
From the results, it follows that the implementation of TAC cuboids does not necessarily convey a great difference in the grading records. Its advantage lies in the systematic registering of data and in a wealth of information that awaits exploitation. This data makes information on topics, activities and competencies visible and accessible, with a clear image of its complexity, as described in this paper.

Conclusions
The objective of a competency assessment is to measure the progress in the achievement of skills associated with learning objectives, but in the current educational system, each student usually receives only a single score as the measure of the level of achievement of the course learning objectives.
TAC cuboids are structures that, through a binary evaluation, exude information in the context of the performance of activities designed to achieve the desired competencies. By placing adequate weights on each conducted activity and on the competencies considered, the instructor and the students receive a detailed assessment of the outcome, as well as a grade for the activity.
Different activities may have different relevance, as TAC cuboids support the performance of daily activities as well as discrete relevant moments of evaluation. This allows students to be provided with feedforward and enables the use of dynamical continuous assessment, which softens the effect of unsuccessful performances in the context of a formative assessment methodology.
The personal TAC cuboid enables the instructor to control the performance of activities. The more activities are introduced, the better the pTAC reports the students' progress, and the closer the evaluation comes to continuous assessment in "real" time.
The competency assessment becomes a control method that directs the system of competencies achieved with appropriately selected inputs, so that the system output is applied as feedforward in the learning process, forcing it to converge to the desired level of achievement of competencies. Simultaneously, the learners get involved in their learning process as they indeed become co-producers of new learning activities that will eventually modify their level of achievement of competencies. This setting transforms continuous assessment into a meaningful, responsive dialogue between students and their instructors, providing continuous feedforward to the former, who will grow as scholars in a non-planar assessment environment.
Master TAC cuboids represent the metrics that enable the detailed binary assessment of the activities performed by each student to be quantified and placed into his/her personal cuboid. On the other hand, different target TAC cuboids may be fixed to mark different development milestones of the subject. Thus, each personal TAC cuboid traces a finite succession of cuboids to be compared against one of these target TAC cuboids, landmarking the success of students in achieving the desired level of development of competencies.
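A comparison between a personal and a target TAC cuboid can be sketched as follows (a minimal illustration with hypothetical topic-competency grades; real cuboids would carry the full topic × activity × competency structure):

```python
# Sketch: flag, per (topic, competency) cell, whether the personal cuboid
# meets the milestone fixed by a target cuboid (hypothetical values).
personal = {("TP1", "C1"): 0.8, ("TP1", "C2"): 0.5, ("TP2", "C1"): 0.9}
target   = {("TP1", "C1"): 0.7, ("TP1", "C2"): 0.6, ("TP2", "C1"): 0.8}

gaps = {key: personal[key] - target[key] for key in target}
achieved = {key: diff >= 0 for key, diff in gaps.items()}
print(achieved)
```

Cells where the milestone is not yet met (here, competency C2 in topic TP1) are exactly the places where feedforward should be directed.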

Future Work
After implementing TAC cuboids, it might be interesting to investigate which activities, and which types of activities related to different competencies, are more meaningful in the learning process among different groups of students. This valuable feedback would enable future work to feedforward training needs within the implemented learning process and also to adapt activities to the different learning styles of students.
TAC cuboids may be used to register data whenever different topics, activities and competencies coexist within a course. The competencies considered in the cuboid might also include transversal competencies or soft skills. The weights on each competency might be reconsidered, if required, so that the instructor favours the competencies that are relevant for the learning objectives of the subject; all data would remain available in the cuboids, so that research may be developed into the influence of, and relationships between, the different competencies and the specific competencies of the subject.

Acknowledgments: This experience has been developed within the GRoup of Innovative Methodologies for Assessment in Engineering Education GRIM4E, of Universitat Politècnica de València (Valencia, Spain).

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript: