Article

Engagement with Optional Formative Feedback in a Portfolio-Based Digital Design Module

by Eirini Kalaitzopoulou 1,*, Paul Matthews 1, Stylianos Mystakidis 2,3 and Athanasios Christopoulos 4
1 School of Computing and Creative Technologies, University of the West of England, Bristol BS16 1QY, UK
2 School of Natural Sciences, University of Patras, 26504 Patras, Greece
3 School of Humanities, Hellenic Open University, 26335 Patras, Greece
4 Turku Research Institute for Learning Analytics, University of Turku, 20101 Turku, Finland
* Author to whom correspondence should be addressed.
Information 2023, 14(5), 287; https://doi.org/10.3390/info14050287
Submission received: 19 April 2023 / Revised: 8 May 2023 / Accepted: 10 May 2023 / Published: 12 May 2023
(This article belongs to the Special Issue Information Technologies in Education, Research and Innovation)

Abstract

Design skills are considered important in software engineering, and formative feedback may facilitate the learning process and help students master those skills. However, little is known about student usage of and reaction to the feedback and its impact on learning and assessment outcomes. This study explores the effects of optional formative assessment feedback on learners’ performance and engagement by considering LMS interactions, student demographics, personality types, and motivation sources. Forty-five postgraduate students completed an enrolment questionnaire addressing the Big Five personality dimensions, the Situational Motivation Scale and background data. The main methods included monitoring LMS engagement over 10 weeks of teaching and analysing assessment marks to develop student profiles and assess the influence of formative feedback on engagement and performance. The main findings revealed that while formative feedback helped improve marks on portfolio tasks, it did not lead to higher performance overall compared to students who did not receive it. Students seeking feedback engaged more actively with the LMS assessments. Feedback-seeking behaviour was associated with gender, intrinsic motivation, conscientiousness, and extrinsic motivation, although not all associations were significant. The study’s main contributions are in highlighting the impact of formative feedback on performance in linked assessments and in starting to reveal the complex relationship between feedback-seeking behaviour and student characteristics.

1. Introduction

The provision of feedback helps justify the assigned grade or score that represents a certain level of learning outcome achievement [1]. In addition, providing timely and elaborate feedback significantly contributes to deep learning, reflection, and self-regulation. However, formal feedback, whether for assignments or exams, is typically provided to learners at the end of their modules, when there are no more opportunities to act upon it. Furthermore, the tight time constraints faced by educators when assessing students’ work often lead to ‘patternised’ feedback (codified in the language and format of the rubric) which learners struggle to understand [2]. One potential solution is to encourage students to submit drafts of their summative assessments for feedback at an intermediate stage. However, the issue of interpretability may persist if this feedback is too brief: students cannot then determine whether their work and approach would result in a fail, a pass, or a distinction. This dilemma contributes to the identification of feedback as one of the most challenging aspects of teaching and learning, particularly in Higher Education Institutions (HEIs) [3].
We are seeking to better understand the impact of formative feedback in our postgraduate teaching, with a view to improving its uptake and impact. In this study, we explore the connections between the seeking and receipt of formative feedback and students’ motivation and ability to engage with learning materials and successfully complete connected assessment tasks. To investigate this, we implemented an intervention in a digital design course in which the assignment was divided into sub-tasks, allowing students to complete the human factors and problem definition phases first. Students could then reflect on their grades and feedback before advancing to solution design and development. This approach enabled them to develop the professional design skills required for both the module and their future careers, with the feedback offering constructive criticism and guidance to foster skill development.
In consideration of the above, the present study aims to answer the following overarching research questions (RQs):
RQ1. Does the use of formative feedback on assessment tasks help students improve their performance in summative assessments?
RQ2. How does content engagement differ between students who receive formative feedback and those who do not?
RQ3. What student characteristics are associated with seeking formative feedback?
RQ4. How useful do students find formative feedback in relation to the learning objectives of a digital design module? Does this kind of feedback change their attitudes to learning?
By addressing these RQs, this study aims to offer valuable insights into students’ uptake of formative feedback and its contribution to course engagement and outcomes. A deeper understanding of these dynamics can inform the development of more efficient feedback practices and strategies, ultimately leading to improved student performance and satisfaction.

2. Literature Review

To establish the context for this work, we first define formative feedback, establish the centrality of Learning Management Systems in contemporary learning, and explore the personal characteristics associated with engagement with both. We also define Learning Analytics, the method used for the analysis. The aim is to introduce relevant research that highlights the need for a better understanding of the role of feedback in computer science education and to identify areas where increased knowledge is warranted.

2.1. Formative Assessment and Feedback

The advent of digital learning has triggered a paradigm shift in teaching and learning, with Learning Management Systems (LMS) serving as the primary platform for communication and interaction between teachers and students [4]. In this context, educators have a moral responsibility to empower and support students to be successful in their learning. Students’ active participation in the LMS, in terms of time spent and effort invested, predicts academic performance [5]. On the other hand, disengagement from the LMS can provide early indications of possible poor academic performance or even a potential dropout [6]. Learning Analytics (LA) practices concern the extraction and analysis of educational data, providing valuable insights to prevent undesirable student outcomes [7]. The application of LA can support behavioural engagement by providing students with accurate information, notifications, and alerts regarding their progress [8,9]. However, there is a notable research gap in understanding how best to integrate LA with formative assessment and feedback practices to enhance student engagement and learning outcomes [10].
Formative assessment plays a significant role in evaluating students’ work to support them and help them improve their performance [11]. It allows teachers to identify weaknesses, provide directions for improvement, and offer guidelines to enhance student achievement [12]. In contrast, summative assessment occurs at the end of a course and focuses on verifying and certifying student comprehension and achievement. Existing formative assessment techniques, such as quizzes, e-portfolios, discussion forums, and tests, may not always account for individual student needs and learning styles [2]. Additionally, feedback is often perceived as a one-way, transmissive process, limiting its effectiveness in supporting student improvement. A more dynamic, two-way feedback system that encourages personalised coaching or open dialogue, where students can ask for clarifications and additional information or even argumentatively contest teachers’ interpretations, could address these shortcomings [13].
Moreover, student motivation, either implicit or explicit, is crucial in the learning process [11]. Student goals, associated with education or a specific course, heavily inform and influence their decisions and actual behaviour [11]. According to self-determination theory, motivation ranges from intrinsic (high) through extrinsic to amotivation (low) [14]. Extrinsic motivation may be further divided into external (lower) and identified regulation (higher). In the former, behaviour is determined by the need to seek reward or avoid sanction, whereas in the latter, the motivation is still external but identified to have been chosen by oneself [14]. Akin to self-determination, self-regulation is the ability to independently set goals, plan, and successfully execute tasks and activities within a study program [11]. However, the current literature on motivation, particularly in relation to self-determination theory, lacks exploration into how formative assessment and feedback practices can effectively promote intrinsic motivation in online learning environments [15]. Addressing this gap could provide valuable insights for improving student engagement and performance.
Formative assessments can improve student engagement when feedback is characterised by immediacy, validity, and reliability [2]. Optional formative feedback activities are a solid method to facilitate student self-regulation and progress toward a desirable performance in the final summative assessment [5]. Innovative formative assessment methods based on gamification and game design elements can facilitate multiple emotional and cognitive dimensions of engagement, such as curiosity, interest, motivation, problem-solving, and durable retention [16,17]. It has been demonstrated that replacing mandatory assignments with optional formative peer assessment activities improves both student engagement and achievement in exam performance [18].
Considering the above, the present study proposes an approach to concurrently examine learners’ digital traces, self-regulation competence, and motivation levels [19]. This comprehensive approach aims to address the research gap in integrating Learning Analytics with formative assessment and feedback practices, as well as exploring the promotion of intrinsic motivation in online learning environments. By doing so, the study seeks to provide valuable insights that can help educators enhance student engagement, performance, and overall learning outcomes.

2.2. Leveraging Learning Analytics to Enhance Formative Assessment and Feedback in Computer Science Education

The positive influence of formative assessment and feedback strategies on students’ problem-solving skills, creativity, and critical thinking abilities, all crucial components of computing education, is well-established [20]. Formative assessments have also demonstrated their effectiveness in assisting computing students in comprehending complex concepts and mastering programming languages [21].
Despite the considerable potential of formative assessments and feedback to enhance student learning outcomes in computing education, there is limited research on effectively implementing and adapting these strategies in this context [22,23,24]. Furthermore, the role of LA in augmenting the impact of formative feedback and addressing the challenges associated with its implementation remains underexplored [25]. Understanding how to best tailor formative assessment and feedback practices, as well as the integration of LA to address the unique challenges and opportunities present in Computer Science Education, can greatly improve the learning outcomes and overall experience of learners [26].
The above highlight two key research gaps that the present study aims to fill: firstly, a deeper understanding of how students utilise and respond to the feedback they receive is necessary, as this aspect has not been sufficiently explored [22,24,27]. Secondly, the literature has not adequately investigated the influence of formative feedback during intermediate assessment tasks, especially when accompanied by a provisional grade [28,29]. Supplying an indicative or actual mark on draft coursework could potentially enable students to better understand their performance and ascertain whether they need to improve their work. A reference point, such as a mark, combined with personalized feedback based on assessment criteria, may help students better comprehend their standing and transform the feedback received into feedforward [25,30]. This opens up a research opportunity to explore the potential benefits and drawbacks of providing provisional grades alongside formative feedback in the context of Computer Science/Software Engineering education [27].
Addressing these gaps can contribute to the development of more effective feedback strategies tailored to individual learners, ultimately improving their learning outcomes.

3. Materials and Methods

3.1. General Procedure

A mixed-methods analysis approach was adopted, focusing on quantitative and qualitative data collected from the LMS, psychometric instruments, and freeform student feedback. The collection of multidimensional data enabled us, primarily, to examine how marked formative feedback is reflected in students’ engagement with the LMS features and, secondarily, how their behaviour alters after receiving formative feedback, combined with provisional marks, on draft portfolio submissions. The specific ‘portfolio’ assessment method (a series of linked, practical tasks) was the focus of this study.
As “assessment feedback” has many definitions, there is a need to clarify this concept and its meaning in this study. The focus is on portfolio assessment tasks that were voluntarily submitted to tutors, marked by tutors, and returned to students along with comments during the span of the course. The aim is to understand the impact of assessment feedback where the assessment tasks have formative characteristics that lead into the final summative assessment and grades.

3.2. Educational Context/Course Unit

Digital Design and Development is a core postgraduate module offered within the MSc Information Technology. It ran in the first semester of 2022 over 10 weeks of teaching, each week comprising a one-hour pre-recorded lecture and two hours of in-class practicals. Most of the students enrolled in the module have a computer science background and a basic understanding of the digital design and development process. However, student approaches toward learning vary, from those who regularly engage with the materials provided to those who just want to pass the assessment. It was therefore considered important to divide the portfolio coursework into tasks and to provide formative feedback and marks for each task separately, in order to engage more students with the assignment process and enable them to master all the tasks of the portfolio. This approach helped us to assess student usage of and reaction to the formative assessment.
Table 1 summarises the portfolio tasks during the module, including when each task was released, a description of what students were required to submit, the submission deadline, and when students received feedback.
As far as the assessment is concerned, the summative assessment for this module comprises two components: (a) a presentation worth 25% and (b) the portfolio assignment worth 75% of the overall mark for the module. The portfolio comprises four separate tasks based on the first four stages of the design process (empathise, define, ideate, prototype), each worth 25% of the available marks for the portfolio.
Documentation was available to the students at the beginning of the module, outlining the objectives of the module and details of the assignment and how they were going to be assessed.
The assignment assesses the following module learning objectives; students have to:
(1) Conduct professional quality stakeholder, context and competitor research using industry-standard methodologies.
(2) Identify and develop creative solutions to a design problem and iterate and select among them for prototyping.
(3) Identify good practices in a particular programming language and use this to implement key features for mobile, web or other digital interfaces.
Students were assessed on the portfolio coursework, which includes the four tasks and personal reflections explaining their choices in developing each item and what this shows about their growing abilities and skills. To help students complete the assessment tasks, we provided a list of assessment criteria indicating that students would be graded based on the evidence and quality of the following.
Task 1: Professional-looking image collage with a good range of contrasting colours in the colour theme. Excellent font pairings with good examples. Excellent coverage of reference images.
Task 2: Fully rounded definition of persona and professional standard. Excellent user-centred scenario offering a resolution to the persona’s needs and frustrations.
Task 3: Professional level of wireframing, comparable to sites you would find on the web. Excellent IA analysis, performed to a professional level.
Task 4: Professional output with great styling and comprehensive description. Excellent walkthrough with clear links back to the UX mapping.
Students were free to decide whether to complete and submit any of their drafts for feedback. The formative assessment tasks were the actual summative assessment tasks that students had to submit in their final assignment submission. If, after the marked feedback, students decided not to improve the mark received on a formative task, the formative mark became their final mark. Table 2 provides examples of the formative feedback and marks returned to students.
With regards to the LMS features, the main content areas analysed, considered to attract the most student content interaction, were:
(a) The module information, which includes information about the module aims, the learning outcomes, and the technologies utilised in the context of the module;
(b) The learning materials, which contain the weekly lecture notes, the pre-recorded lectures, the tutorial notes, and the tutorial activities that students had to complete on a weekly basis;
(c) The reading list, which includes various resources for the students (books, journal articles);
(d) The assessment requirements/criteria, which include the assignment specification/criteria for each task.
More specifically, the data retrieved weekly from the above Blackboard content areas comprised the total time spent (in hours) and the number of times each area was accessed by each individual student, as well as the total items viewed and total logins of each student.
Students were asked to access the learning materials folder a week in advance and watch the pre-recorded lectures in order to be prepared for the weekly tutorial activities in class. The assessment tasks were made available once the weekly content had been covered in class. The submission window for each task was two weeks, and tasks could only be submitted once. In this way, students had the opportunity to receive marked feedback from the instructor, which could subsequently be used to improve their work.

3.3. Sample

Seventy-six students were registered for the module, and forty-five volunteered to participate in this study. Participants had a range of backgrounds, ethnicities, and ages, which could lead to different student experiences and behaviour patterns. The majority of participants were male, and most had no previous experience with the formative assessment process.

3.4. Measures and Data Analysis

To address RQ1, we recorded the formative feedback and indicative grades given to students, allowing us to disaggregate the data into feedback-seeking and non-feedback-seeking groups and compare summative and indicative (formative) marks. To address RQ2, data were collected from Blackboard analytics reports covering activity and time spent in each content area. For RQ3 and RQ4, an onboarding questionnaire was administered to gather consent and to record students’ background information, experience and perceptions of the formative assessment process, expectations from the module, Big Five personality traits, and motivation levels using the Situational Motivation Scale (SIMS) [14]. Finally, to provide 360-degree data for RQ4, a retrospective questionnaire was distributed at the end of the module, in which students reflected on the overall learning experience. We also performed a further qualitative analysis focusing on the last two open-ended questions: “Please add some further comments about things that you felt went well” and “What would your opinion be if the personalised feedback you received during the module were replaced by feedback based on rubrics/criteria specified in the assignment specification?”.
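For illustration, the SIMS yields four subscale scores (intrinsic motivation, identified regulation, external regulation, amotivation) by averaging the items belonging to each subscale [14]. The following is a minimal R sketch of such scoring, assuming the standard 16-item instrument; the column names (sims_q1 … sims_q16) are hypothetical placeholders, not the study’s actual variable names.

```r
# Minimal sketch of SIMS subscale scoring, assuming the standard 16-item
# instrument with four items per subscale [14]. Column names sims_q1..sims_q16
# are hypothetical placeholders.
subscale_items <- list(
  intrinsic   = c(1, 5, 9, 13),
  identified  = c(2, 6, 10, 14),
  external    = c(3, 7, 11, 15),
  amotivation = c(4, 8, 12, 16)
)

score_sims <- function(responses) {
  # Returns one column of subscale means per motivation type.
  sapply(subscale_items, function(items) {
    rowMeans(responses[, paste0("sims_q", items)], na.rm = TRUE)
  })
}
```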
Data from the grading, LMS interactions and the pre- and post-module questionnaires were aligned using the student ID initially, which was then anonymised into a simple numeric identifier. Analysis was conducted in R. For Blackboard interaction data, significance tests assumed non-parametric distributions, while for questionnaire tests, near-normal distributions were assumed.
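As a hedged illustration of this pipeline (not the authors’ actual script), the following R sketch joins the three data sources on student ID, anonymises it, and runs the kinds of non-parametric tests reported in Section 4; all data frame and column names are hypothetical.

```r
library(dplyr)

# Align grading, LMS interaction and questionnaire data on student ID,
# then replace the ID with a simple numeric identifier.
# 'grades', 'lms_activity', 'questionnaire' and their columns are
# hypothetical placeholders.
merged <- grades |>
  inner_join(lms_activity, by = "student_id") |>
  inner_join(questionnaire, by = "student_id") |>
  mutate(anon_id = as.integer(factor(student_id))) |>
  select(-student_id)

# Non-parametric tests for the Blackboard interaction data:
# Spearman correlation between items viewed and formative tasks submitted,
cor.test(merged$items_viewed, merged$feedback_tasks, method = "spearman")

# and a Mann-Whitney (Wilcoxon rank-sum) test comparing time spent in the
# Assignments area between feedback seekers and non-seekers.
wilcox.test(assignments_hours ~ sought_feedback, data = merged)
```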

4. Results

We will present the results following the order of the research questions, RQ1 to RQ4.
Firstly, there were no overall differences in the final marks between those obtaining formative feedback and those who did not (RQ1: Formative mean student mark 61%, N = 22; Non-formative mean 59%, N = 21). However, those who did get the feedback were able to improve their marks by 11–18% (Table 3).
Forty-nine percent of students submitted at least one task for feedback. Looking at the number of feedback tasks completed per student, the effect was cumulative, with those students receiving feedback for all tasks improving the most on average overall (Table 4).
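To illustrate how such figures can be derived, a short R sketch follows that computes the per-task improvements of Table 3 from paired formative and final marks; the data frame and column names are hypothetical placeholders.

```r
library(dplyr)

# Mean improvement per task (cf. Table 3): the difference between the final
# summative mark and the formative mark, over students who submitted a draft.
# 'task_marks' and its columns are hypothetical placeholders.
task_marks |>
  filter(!is.na(formative_mark)) |>
  group_by(task_no) |>
  summarise(
    improvement_pct = mean(final_mark - formative_mark),
    n = n()
  )
```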
In relation to RQ2, Blackboard interactions in terms of the number of items viewed were associated with formative feedback seeking (Spearman’s rho = 0.5, p < 0.01) but not with final marks achieved across all students (rho = 0.23, p = 0.23). In terms of content areas, the time spent in the ‘Assignments’ area was significantly higher for those who obtained formative feedback than for those who did not (Mann–Whitney W = 16,982, p < 0.01), but this association was not found for other content areas (Learning Materials, Module Information and Reading List). Figure 1 below shows the different patterns of engagement with the assessment area between the two groups.
Looking across the whole cohort (Table 5), when we regress the final marks on time spent in the different Blackboard content areas, the same relationship holds: time spent in the Assessments area is the only significant influence on performance, though with only a small effect on the grade outcome (beta = 0.01, p = 0.02).
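A sketch of this kind of regression in R, assuming hypothetical variable names and the merged data frame from the earlier sketch; betas, confidence intervals, p-values, and variance inflation factors as reported in Table 5 can be obtained as follows (car::vif() computes the VIFs).

```r
library(car)  # provides vif()

# Regress the final module mark on hours spent in each Blackboard content
# area and the number of engaged weeks (cf. Table 5). All variable names
# are hypothetical placeholders; 'merged' is the aligned data frame from
# the earlier sketch.
fit <- lm(final_mark ~ learning_materials_hours + assessments_hours +
            reading_list_hours + module_info_hours + engaged_weeks,
          data = merged)

summary(fit)   # coefficients (beta) and p-values
confint(fit)   # 95% confidence intervals
vif(fit)       # variance inflation factors

# The same pattern, with gender, Big Five traits and SIMS subscales as
# predictors of the number of formative tasks completed, yields Table 6.
```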
To investigate the influence of personality and motivation preferences on the likelihood of formative feedback seeking (RQ3), we regressed the number of formative tasks completed on these variables (Table 6). Gender has a large effect in terms of the coefficient (women seemed more likely to seek feedback), but this was not statistically significant, probably due to the relatively low number of women in our group (12 of the 45). Of the Big Five personality traits, Conscientiousness was strongly negatively associated with formative feedback seeking (beta = −0.35, p < 0.01). Of the motivational factors, Identified Regulation was also negatively associated with feedback tasks (beta = −0.22, p < 0.01), whereas External Regulation had a somewhat positive effect (beta = 0.13, p < 0.01). It should be noted that in all these cases the confidence intervals were quite wide, though of the same sign.
Finally, to contribute to RQ4, we compared the self-assessed attitudes to the module from the pre- and post-module questionnaires (Figure 2). The pre-module Likert-scale statements were:
  • “I will enjoy this module”
  • “The formative assessment tasks will be highly effective for my learning”
  • “Communication with the instructor will be very frequent”
  • “I will be able to identify my strengths and weaknesses within my own work”
  • “I will collaborate with others to complete some of the tasks”
  • “Formative assessment tasks will help me know how well I am doing”
  • “I will be a more independent learner”
  • “I will develop my writing skills and my approach to writing coursework as a result of the module”
  • “I will adjust my work regularly as I go along”
  • “I will stick to my original plan”
  • “I will produce an end result which I am proud of”
Figure 2. Attitudes to the module comparing pre- and post-module assessments by formative feedback seeking (Likert scale: 5 = Strongly Agree, 1 = Strongly Disagree).
Post-module statements were essentially the same but posed in the past tense. The majority of ratings fell slightly in the post-module survey, with feedback and non-feedback seekers giving similar ratings. Exceptions were “communication regularity with tutors”, which feedback seekers rated slightly lower post-module while non-feedback seekers rated it similarly pre- and post-module, and “perception of self as independent learner”, the only rating to increase, with feedback seekers rising more sharply than non-feedback seekers. There was also a difference between the groups in the ratings of “sticking to the original plan”, with feedback seekers starting from a lower expectation and both groups expressing more equivocation on this after the end of the module.
To examine the usefulness of the feedback received, we analysed students’ answers. All students who received feedback found it useful; the distribution of ratings was: extremely useful (6), very useful (3), moderately useful (2), slightly useful (2), not at all useful (0).
All students in the retrospective questionnaire reported that they found the feedback useful and that they appreciated the formative assessment and feedback process. Representative positive comments about the formative feedback included:
“feedback helped me to understand what the assessment is asking to do to meet the learning outcomes”;
“The good thing is that I had formative feedback on my tasks for this module, in which I was very happy as I could improve the tasks that went wrong or something was missing”.
In addition, students were asked whether a different type of feedback would have been more or less effective. Students diverged on this question, though they agreed that the purpose of feedback is to help them improve their work and that it should be clear and “straightforward”. Forty percent of students held the view that personalised feedback is more effective, whereas a slight majority (60%) considered feedback based on criteria/rubrics to be more effective.
“I think using rubrics is more straightforward”.
“Actually, the feedback received has been mentioning comments that were in the rubric and marking criteria. It is more useful if the feedback comes from the marking criteria as this is what the marker sees when marking the assessment”.

5. Discussion

The evidence across our research questions indicates that for our design-thinking-oriented practical computing module, formative feedback, as offered, can help students improve their outcomes through earlier and better engagement with the assessment task requirements. For RQ1, while the average marks for students receiving or not receiving formative feedback were similar, there is a clear benefit for those who engage with the feedback, as the data presented earlier suggest. This finding aligns with previous research demonstrating the positive effects of formative feedback on student performance and learning [22,24]. It can be inferred that lower-performing students who did not receive feedback would also have benefitted from it, while higher-performing students might have needed it less.
In relation to RQ2, our data highlight the important finding that engagement, as measured through the LMS, is higher in the formative group when it comes to the assessment task specifications. This indicates a more significant effort by the formative group to understand the tasks and meet the criteria. However, the connection between feedback-seeking and LMS engagement, as well as between final outcomes and LMS engagement, is less significant compared to previous studies [5,31]. This discrepancy may indicate a need for better linkage between tasks and module resources, such as hyperlinks that enable students to navigate from the task specifications to other module content. This improvement could help direct students’ attention to relevant resources and enhance their overall learning experience.
In terms of student characteristics as per RQ3, our finding of a negative relationship between conscientiousness and feedback-seeking is counterintuitive, as we would expect this personality trait to be connected to both motivation and performance. However, the authors of [32] suggest that conscientious individuals may engage in self-deception, believing themselves to be performing better than they are, which may explain the lower feedback-seeking behaviour among conscientious students in our data. Consequently, educators should encourage these students to engage with feedback just as much as those less sure of their self-discipline.
In terms of sources of motivation, our results indicate that external regulators are more important for feedback-seeking than intrinsic ones. This corroborates previous research [33], which argued that extrinsic motivation, such as the need to pass the module, can sometimes be a more powerful motivator than intrinsic factors such as personal interest. Our results are also consistent with Self-Determination Theory (SDT) [34], which posits that individuals are motivated to seek feedback when they perceive external contingencies linked to their performance. Furthermore, our findings suggest that internally motivated students may exhibit overconfidence, similar to conscientious students, leading to a reduced tendency to seek feedback. This observation is consistent with the Dunning-Kruger effect [35], whereby individuals with high confidence in their abilities may not recognise their own shortcomings and thus may not perceive the need to seek feedback.
For RQ4, the impact of the module on students’ attitudes toward their learning was mixed. The data provided earlier shows a general grounding of attitudes from initial optimism to a more realistic perception after engaging and completing the assessments. Interestingly, the formative-seeking group displayed more flexibility in both their initial attitude and their final self-assessment. Research on cognitive flexibility suggests that inflexibility can hinder learning by leading students to resist changes in plans and struggle to come up with alternative approaches to problems [36]. Our data also show that formative seekers gained more in their self-perception as independent learners following engagement with the formative feedback, suggesting a feedback loop from extrinsic motivation to personal agency.
Our study highlights the importance of formative feedback in promoting student engagement, learning, and flexibility. It also underscores the potential pitfalls of overconfidence among conscientious and internally motivated students. Educators should consider implementing strategies to improve the connections between tasks and module resources and encourage all students, regardless of their personality traits or motivations, to seek and engage with formative feedback to enhance their learning outcomes.

6. Conclusions

Our study exploring learning behaviour in a practical module with formative feedback on a portfolio of design and development tasks reveals that formative feedback is indeed beneficial. Early feedback stimulates engagement with online task specifications, indicating that these tasks serve as the central point of focus for students studying the module.
Among those who believe they do not require formative input, some may be engaging in self-deception regarding their abilities to complete the work to a high standard without proper guidance. In contrast, those who benefit from feedback appear more flexible and begin with lower confidence in their knowledge and skills. Our findings also show that formative feedback seekers experience an increased self-perception as independent learners, suggesting a feedback loop from extrinsic motivation to personal agency.
While we do not yet advocate for making formative input mandatory, we recommend encouraging more students to take up the opportunity by discussing these findings. Educators should consider implementing strategies to improve connections between tasks and module resources and encourage all students, regardless of their personality traits or motivations, to seek and engage with formative feedback to enhance their learning outcomes. By doing so, we can foster an environment where students can better understand their strengths and weaknesses and develop the necessary skills to succeed in their academic pursuits.

7. Limitations and Future Work

Given the limitations of our study, we acknowledge that our formative and non-formative group sizes were relatively small. To address this issue and gain more statistical power, we plan to increase the data pool by repeating the study in future modules. This would help provide more robust insights and further validate our findings.
Regarding the assessment of LMS interactions associated with feedback, our current analysis does not capture whether and for how long students access their feedback; we only see their engagement with the tasks. Obtaining this additional piece of data could provide a more comprehensive understanding of students’ interaction with feedback and its impact on their learning experience. While this feature is not built into our LMS analytics capture, we can create new feedback interfaces that provide richer data and give more explicit links to sources elsewhere in the learning content that can help with specific shortcomings in the students’ submissions.
Furthermore, considering the observed relationships between conscientiousness, extrinsic motivation, and cognitive flexibility, it would be beneficial to include further measures to shed more light on these characteristics, which we have found to be associated with feedback-seeking behaviour. This can be combined with student interviews to give a richer picture of students’ perceptions of both the form and content of the feedback.
Finally, we can make our findings more explicit in the progress data that the students see. We aim to work in a participatory way with students to design better learner dashboards that highlight feedback opportunities and progress against these.

Author Contributions

Conceptualization, E.K. and P.M.; methodology, E.K. and P.M.; formal analysis, P.M. and E.K.; writing—original draft preparation, E.K. and P.M.; writing—review and editing, P.M., A.C. and S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. P.M. and E.K. acknowledge internally funded research time from the University of the West of England.

Data Availability Statement

Anonymised data that support the findings of this research are available from the following URL: https://github.com/paulusm/PADSTeams-Data (accessed on 9 May 2023).

Acknowledgments

We thank all the students who participated in the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Boud, D.; Molloy, E. What Is the Problem with Feedback? In Feedback in Higher and Professional Education; Routledge: Oxfordshire, UK, 2012; pp. 1–10. ISBN 0-203-07433-5. [Google Scholar]
  2. Gikandi, J.W.; Morrow, D.; Davis, N.E. Online Formative Assessment in Higher Education: A Review of the Literature. Comput. Educ. 2011, 57, 2333–2351. [Google Scholar] [CrossRef]
  3. Boud, D.; Molloy, E. Rethinking Models of Feedback for Learning: The Challenge of Design. Assess. Eval. High. Educ. 2013, 38, 698–712. [Google Scholar] [CrossRef]
  4. Orr, D.; Weller, M.; Farrow, R. How Is Digitalisation Affecting the Flexibility and Openness of Higher Education Provision? Results of a Global Survey Using a New Conceptual Model. J. Interact. Media Educ. 2019, 2019, 71539. [Google Scholar] [CrossRef]
  5. Avcı, Ü.; Ergün, E. Online Students’ LMS Activities and Their Effect on Engagement, Information Literacy and Academic Performance. Interact. Learn. Environ. 2022, 30, 71–84. [Google Scholar] [CrossRef]
  6. Lokkila, E.; Christopoulos, A.; Laakso, M.-J. A Data-Driven Approach to Compare the Syntactic Difficulty of Programming Languages. J. Inf. Syst. Educ. 2023, 34, 84–93. [Google Scholar]
  7. Helal, S.; Li, J.; Liu, L.; Ebrahimie, E.; Dawson, S.; Murray, D.J.; Long, Q. Predicting Academic Performance by Considering Student Heterogeneity. Knowl.-Based Syst. 2018, 161, 134–146. [Google Scholar] [CrossRef]
  8. Silvola, A.; Näykki, P.; Kaveri, A.; Muukkonen, H. Expectations for Supporting Student Engagement with Learning Analytics: An Academic Path Perspective. Comput. Educ. 2021, 168, 104192. [Google Scholar] [CrossRef]
  9. Tsimaras, D.O.; Mystakidis, S.; Christopoulos, A.; Zoulias, E.; Hatzilygeroudis, I. E-Learning Courses Evaluation on the Basis of Trainees’ Feedback on Open Questions Text Analysis. Educ. Sci. 2022, 12, 633. [Google Scholar] [CrossRef]
  10. Viberg, O.; Hatakka, M.; Bälter, O.; Mavroudi, A. The Current Landscape of Learning Analytics in Higher Education. Comput. Hum. Behav. 2018, 89, 98–110. [Google Scholar] [CrossRef]
  11. Broadbent, J.; Sharman, S.; Panadero, E.; Fuller-Tyszkiewicz, M. How Does Self-Regulated Learning Influence Formative Assessment and Summative Grade? Comparing Online and Blended Learners. Internet High. Educ. 2021, 50, 100805. [Google Scholar] [CrossRef]
  12. Fukuda, S.T.; Lander, B.W.; Pope, C.J. Formative Assessment for Learning How to Learn: Exploring University Student Learning Experiences. RELC J. 2022, 53, 118–133. [Google Scholar] [CrossRef]
  13. Hatzipanagos, S.; Warburton, S. Feedback as Dialogue: Exploring the Links between Formative Assessment and Social Software in Distance Learning. Learn. Media Technol. 2009, 34, 45–59. [Google Scholar] [CrossRef]
  14. Guay, F.; Vallerand, R.J.; Blanchard, C. On the Assessment of Situational Intrinsic and Extrinsic Motivation: The Situational Motivation Scale (SIMS). Motiv. Emot. 2000, 24, 175–213. [Google Scholar] [CrossRef]
  15. Ryan, R.M.; Deci, E.L. Intrinsic and Extrinsic Motivation from a Self-Determination Theory Perspective: Definitions, Theory, Practices, and Future Directions. Contemp. Educ. Psychol. 2020, 61, 101860. [Google Scholar] [CrossRef]
  16. Mystakidis, S. Sustainable Engagement in Open and Distance Learning with Play and Games in Virtual Reality. In Handbook of Research on Gamification Dynamics and User Experience Design; Bernardes, O., Amorim, V., Moreira, A.C., Eds.; IGI Global: Hershey, PA, USA, 2022; pp. 409–424. [Google Scholar]
  17. Zainuddin, Z.; Shujahat, M.; Haruna, H.; Chu, S.K.W. The Role of Gamified E-Quizzes on Student Learning and Engagement: An Interactive Gamification Solution for a Formative Assessment System. Comput. Educ. 2020, 145, 103729. [Google Scholar] [CrossRef]
  18. Haugan, J.; Lysebo, M.; Lauvas, P. Mandatory Coursework Assignments Can Be, and Should Be, Eliminated! Eur. J. Eng. Educ. 2017, 42, 1408–1421. [Google Scholar] [CrossRef]
  19. Pardo, A.; Han, F.; Ellis, R.A. Combining University Student Self-Regulated Learning Indicators and Engagement with Online Learning Events to Predict Academic Performance. IEEE Trans. Learn. Technol. 2017, 10, 82–92. [Google Scholar] [CrossRef]
  20. Ertmer, P.A.; Koehler, A.A. Facilitated versus Non-Facilitated Online Case Discussions: Comparing Differences in Problem Space Coverage. J. Comput. High. Educ. 2015, 27, 69–93. [Google Scholar] [CrossRef]
  21. Caspersen, M.; Bennedsen, J. Instructional Design of a Programming Course: A Learning Theoretic Approach. In Proceedings of the Third International Computing Education Research Workshop, ICER’07, Atlanta, GA, USA, 15–16 September 2007; pp. 111–122. [Google Scholar] [CrossRef]
  22. Hattie, J.; Timperley, H. The Power of Feedback. Rev. Educ. Res. 2007, 77, 81–112. [Google Scholar] [CrossRef]
  23. Shute, V.J. Focus on Formative Feedback. Rev. Educ. Res. 2008, 78, 153–189. [Google Scholar] [CrossRef]
  24. Nicol, D.J.; Macfarlane-Dick, D. Formative Assessment and Self-regulated Learning: A Model and Seven Principles of Good Feedback Practice. Stud. High. Educ. 2006, 31, 199–218. [Google Scholar] [CrossRef]
  25. Sadler, I.; Reimann, N.; Sambell, K. Feedforward Practices: A Systematic Review of the Literature. Assess. Eval. High. Educ. 2022, 48, 305–320. [Google Scholar] [CrossRef]
  26. Morris, R.; Perry, T.; Wardle, L. Formative Assessment and Feedback for Learning in Higher Education: A Systematic Review. Rev. Educ. 2021, 9, e3292. [Google Scholar] [CrossRef]
  27. Carless, D.; Boud, D. The Development of Student Feedback Literacy: Enabling Uptake of Feedback. Assess. Eval. High. Educ. 2018, 43, 1315–1325. [Google Scholar] [CrossRef]
  28. Winstone, N.E.; Nash, R.A.; Parker, M.; Rowntree, J. Supporting Learners’ Agentic Engagement with Feedback: A Systematic Review and a Taxonomy of Recipience Processes. Educ. Psychol. 2017, 52, 17–37. [Google Scholar] [CrossRef]
  29. Tai, J.; Ajjawi, R.; Boud, D.; Dawson, P.; Panadero, E. Developing Evaluative Judgement: Enabling Students to Make Decisions about the Quality of Work. High. Educ. 2018, 76, 467–481. [Google Scholar] [CrossRef]
  30. Iraj, H.; Fudge, A.; Khan, H.; Faulkner, M.; Pardo, A.; Kovanović, V. Narrowing the Feedback Gap: Examining Student Engagement with Personalized and Actionable Feedback Messages. J. Learn. Anal. 2021, 8, 101–116. [Google Scholar] [CrossRef]
  31. Chen, Z.; Jiao, J.; Hu, K. Formative Assessment as an Online Instruction Intervention. Int. J. Distance Educ. Technol. 2021, 19, 50–65. [Google Scholar] [CrossRef]
  32. Colquitt, J.; Simmering, M. Conscientiousness, goal orientation, and motivation to learn during the learning process: A longitudinal study. J. Appl. Psychol. 1998, 83, 654–665. [Google Scholar] [CrossRef]
  33. Ryan, R.M.; Deci, E.L. Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being. Am. Psychol. 2000, 55, 68–78. [Google Scholar] [CrossRef]
  34. Deci, E.L.; Ryan, R.M. The “What” and “Why” of Goal Pursuits: Human Needs and the Self-Determination of Behavior. Psychol. Inq. 2000, 11, 227–268. [Google Scholar] [CrossRef]
  35. Kruger, J.; Dunning, D. Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments. J. Personal. Soc. Psychol. 1999, 77, 1121–1134. [Google Scholar] [CrossRef] [PubMed]
  36. Huizenga, M.; Smidts, D.P.; Ridderinkhof, K.R. Change of Mind: Cognitive Flexibility in the Classroom. Perspect. Lang. Lit. 2014, 40, 31–35. [Google Scholar]
Figure 1. Time spent in the assignments area of Blackboard by formative feedback seeking.
Table 1. Assessment tasks.
Task 1 (Empathise), 25% of the portfolio. Released: beginning of week 1. Submission deadline: end of week 2. Feedback returned: beginning of week 3. Requirement: create a mood board containing representative photos, colours, fonts, and reference designs using Figma, and provide a description of the process of creating the mood board.
Task 2 (Define), 25%. Released: beginning of week 3. Submission deadline: end of week 4. Feedback returned: beginning of week 5. Requirement: define the User Experience (UX) mapping, define a persona and an associated scenario represented as a design map, and describe this process.
Task 3 (Ideate), 25%. Released: beginning of week 5. Submission deadline: end of week 6. Feedback returned: beginning of week 7. Requirement: capture the user interface as a wireframe and provide an analysis of the Information Architecture of the wireframe(s) covering Organisation, Labelling, Navigation, and Search; include a blueprint diagram representing the organisational hierarchy to indicate flow between screens, and describe this process.
Task 4 (Prototype), 25%. Released: beginning of week 7. Submission deadline: end of week 8. Feedback returned: beginning of week 9. Requirement: construct an interactive functional prototype of the app as one or more static web pages that run in the browser, created using HTML and CSS; provide a walkthrough of the scenario and a description of the CSS styling used.
Table 2. Examples of feedback given for each task.
Task 1 (14 out of 25): “Good introduction. Good choice of collage images but more explanation on the choices is needed. Good research and choice of colours, but you could say more about them i.e., what personality/energy do they express? which are the base/accent/neutral colours? Some background research for the font selection would be beneficial. Good set of reference designs and reasons for selecting them.”
Task 2 (17 out of 25): “Persona defined and explanation on goals and frustration of the user provided. Scenario describes solutions to the persona’s needs and includes the points that the user interacts with the app.”
Task 3 (15 out of 25): “Good grid stylist and good use of wire flow. Good analysis and good blueprint. However, navigation, labelling, search etc. could be further discussed.”
Task 4 (17 out of 25): “You describe the prototype more from a technical perspective. It would be good to return to the user scenario and describe how this plays out. Good work using the Open Data to extract data, this is what a functional app have to do. You are way over your word count but your description of the task is very good.”
Table 3. Mean mark improvements per task with formative feedback.
Task No | Description | Improvement (%) | N
1 | Empathise | 15 | 17
2 | Define | 14 | 20
3 | Ideate | 11 | 15
4 | Prototype | 18 | 6
Table 4. Overall mark improvement by tasks completed.
Tasks Completed | Overall Improvement (%) | N
1 | 4.7 | 3
2 | 8 | 7
3 | 9.8 | 7
4 | 13.6 | 5
Table 5. Time in content areas and engagement weeks—relationship to overall module marks (all students).
Characteristic | Beta | 95% CI * | p | VIF **
Learning Materials | 0.01 | −0.01, 0.04 | 0.3 | 2.4
Assessments | 0.01 | 0.00, 0.02 | 0.015 | 1.4
Reading List | −0.01 | −0.07, 0.04 | 0.6 | 2.2
Module Information | 0.03 | −0.07, 0.01 | 0.13 | 1.1
Engaged Weeks | 0.00 | −0.02, 0.01 | >0.9 | 1.2
* Confidence Interval, ** Variance Inflation Factor.
Table 6. Impact of student characteristics on formative feedback seeking.
Characteristic | Beta | 95% CI * | p | VIF **
Gender | 0.68 | −0.27, 1.6 | 0.2 | 1.1
Intrinsic | −0.05 | −0.18, 0.07 | 0.4 | 2.7
Identified regulation | −0.22 | −0.37, −0.08 | 0.006 | 2.4
External regulation | 0.13 | 0.05, 0.21 | 0.003 | 1.5
Amotivation | −0.03 | −0.09, 0.02 | 0.2 | 1.5
Agreeableness | 0.05 | −0.13, 0.22 | 0.6 | 1.2
Extroversion | 0.06 | −0.15, 0.27 | 0.6 | 1.3
Openness | 0.03 | −0.19, 0.26 | 0.8 | 1.9
Conscientiousness | −0.35 | −0.56, −0.15 | 0.003 | 1.4
* Confidence Interval, ** Variance Inflation Factor.

