Article

Enhancing Educational Opportunities with Computer-Mediated Assessment Feedback

David Tuffley 1,* and Amy Antonio 2

1 School of ICT, Griffith University, Nathan Campus, Qld. 4111, Australia
2 Australian Digital Futures Institute, University of Southern Queensland, Springfield Campus, 4300, Australia
* Author to whom correspondence should be addressed.
Future Internet 2015, 7(3), 294-306; https://doi.org/10.3390/fi7030294
Submission received: 17 June 2015 / Revised: 30 July 2015 / Accepted: 5 August 2015 / Published: 11 August 2015

Abstract

As internet technologies make their way into developing areas, so too does the possibility of education and training being delivered to the people living in those previously unserved areas. The growing catalogue of free, high quality courseware, when combined with the newly acquired means of delivery, creates the potential for millions of people in the developing world to acquire a good education. Yet a good education obviously requires more than simply delivering information; students must also receive high quality feedback on their assessments. They must be told how their performance compares with the ideal, and be shown how to close the gap between the two. However, delivering high quality feedback is labor-intensive, and therefore expensive, and has long been recognized as a problematic issue by educators. This paper outlines a case study that uses a Learning Management System (LMS) to efficiently deliver detailed feedback that is informed by the principles of best practice. We make the case that the efficiencies of this method allow for large-scale courses with thousands of enrolments that are accessible to developing and developed areas alike. We explore the question: is computer-mediated feedback delivery efficient and effective, and might it be applied to large-scale courses at low cost?

1. Introduction

In the increasingly connected world of the 21st Century, Internet technology is spreading into developing areas where access to the World Wide Web was previously limited or non-existent. Over time, a range of social benefits will derive from this connectedness, not least of which will be the provision of educational services to the people living in those areas. However, connectedness is only one half of the proposition; the other half is the growing catalogue of free educational materials that is now available from top-rated universities in the form of MOOCs (Massive Open Online Courses), lectures from the Khan Academy, TED Talks (Technology, Entertainment and Design Conference) and YouTube EDU, which in one year alone partnered with over 300 universities and other providers to offer more than 65,000 free lectures.
The scale of this free on-line content is impressive by any standard, with TED Talks, for example, being viewed by over 450,000 people a day [1]. World experts in their subject area present ideas on a wide range of topics, and these high-quality videos can be used as a free learning resource by anyone with an Internet connection, irrespective of their location. The student only needs sufficient digital literacy to access the video. However, in order to be fully effective, on-line education should go beyond the presentation of content alone; it needs to provide assessment tasks that allow students to reinforce and integrate the content into their knowledge base by applying what they have learned. Once work is submitted and marked, the student needs to be given feedback, which allows them to see the difference between their performance and the ideal, and how they can close the gap [2]. This is the essence of good assessment feedback.
This case study examines whether the process of delivering feedback can be automated by using the assessment feedback function that comes standard in Learning Management Systems (LMS). This strategy, were it to be successful, would address the issue of delivering quality feedback, which has historically been problematic and has resulted in student expectations for feedback not being met [3]. Delivering feedback is a labor-intensive process, with the time and expense adding considerably to the cost of delivering education. The feedback in this case study, while generic, is informed and structured by best practice principles [4]. We argue that the efficiencies of this method will allow for the scalability of course delivery, making it cost-effective to deliver education to people in the developing and developed world alike.

2. Review of Literature

2.1. What Is Good Feedback?

The technology of computer-mediated assessment feedback is relatively user-friendly and easily mastered by anyone with basic computer skills. If the strategy is to work, it is critical that the teacher arrives at a good understanding of the nature of quality feedback and is able to apply it. An analysis of 250 studies [5] from across all educational sectors confirms what many educators already know—high quality feedback improves learning outcomes, and this has been observed across all disciplines and levels. Sadler [2] maintains that the student must know (a) how to recognize good performance, (b) how their own performance compares in relation to the good performance, and (c) what they must do to close the gap between actual and ideal performance. The student needs to know what the teacher knows if they are to close the gap; the provision of quality feedback of this kind results in the creation of self-regulated, self-directed learners. Such students set their own goals and creatively devise strategies to achieve them [4].
Self-regulated learning is inhibited when there is an excessive focus on grades and the inevitable negative comparisons that this generates. Good feedback, which can be characterized by positive, helpful comments, gives constructive guidance on closing the gap between actual and ideal performance, while avoiding being judgmental or otherwise negative [6,7,8,9]. In short, Cope et al. [8] suggest that the assessor (a) situates the feedback in the context of knowledge-making practice, (b) draws explicitly on social cognition, (c) measures metacognition, (d) addresses multimodal texts, (e) is “for learning”, not just “of learning”, and (f) is ubiquitous. These principles are broadly compatible with those outlined earlier [2,4].

2.2. Principles of Effective Assessment Feedback

Central to the strategy of computer-mediated delivery of feedback is the structuring of said feedback around principles of best practice. A search of such principles in the literature reveals Nicol and MacFarlane-Dick’s seven principles of effective feedback [4]. These principles were used as the basis for structuring the feedback in this study, although adjustments have been made for computer-mediated delivery.
Nicol and MacFarlane-Dick [4] consolidated the literature on self-regulation and formative assessment in order to formulate a set of principles for good feedback practice. To the authors’ knowledge, these principles have not been applied to summative assessment or computer-mediated delivery. Students use the teacher’s feedback to critically assess their work and produce their own feedback based on their interpretation of the teacher’s comments. Ultimately, it is desirable that students become increasingly self-regulating learners who take the teacher’s feedback and build upon their understanding of how to do the job correctly—effectively to “close the gap” [2].
Good assessment feedback therefore:
  • Clarifies the nature of desirable performance. Students have a clear idea of what they are aiming for, so they will know when they have achieved it. Clearly expressed assessment criteria are essential, and help to mitigate the documented difficulties [10,11] students face in grasping up-front the essence or underlying meaning of an assessment task.
  • Cultivates the capacity to self-assess. Purposeful engagement with an assessment task should automatically and systematically lead a student to self-monitor their performance against the stated goals or objectives of the assessment task.
  • Provides information about performance. Peer-feedback can be useful, but it is teachers who are the primary source of corrective feedback on student performance, particularly in relation to correcting misconceptions and fostering self-regulation. Nicol and MacFarlane-Dick [4] propose this definition: good quality external feedback is information that helps students troubleshoot their own performance and self-correct in relation to the assessment goals; that is, it helps students take action to reduce the discrepancy between their intentions and the resulting effects.
  • Stimulates two-way discussion. Students should not just receive information from teachers; the feedback should be conveyed so that the student understands it and is able to take action [12].
  • Generates motivation and self-esteem. Positive motivational beliefs and high self-esteem are influenced by the student’s perception of their own abilities. Under the entity view, the student believes that their abilities are fixed, while those invested in the incremental view see their ability as improvable with effort. In a survey of 2269 undergraduate students, roughly one third were found to take the entity view [13].
  • Allows the student to close the gap between current and desired performance. Feedback cannot be said to be fully effective unless it shows how to close the gap [2].
  • Provides the teacher with feedback to help her/him improve. The teacher is conscious of their own performance and of the feedback they are getting from the students, and makes continuous improvements as they proceed.

2.3. Computer-Mediated Feedback Delivery

The integration of computer-mediated learning tools into the delivery of courses creates several benefits; however, it is acknowledged that implementing such tools is often challenging [14]. A survey of the literature examining the use of technology as a feedback mechanism offers strong support for the assertion that technology can be effective at engaging students in a process of productive learning [15,16,17,18,19,20]. Students have also been found to view electronic feedback as superior; in one comparative study [18], a Mann-Whitney analysis of satisfaction ratings points to four ways in which feedback was improved by electronic delivery: marking scheme clarity, legibility of feedback, guidance on deficient aspects, and the identification of where the student performed well.
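For readers unfamiliar with the analysis mentioned above, the following minimal sketch shows how such a Mann-Whitney comparison of satisfaction ratings can be run in Python with SciPy; the ratings are invented for illustration and are not the cited study’s data.

    from scipy.stats import mannwhitneyu

    # Hypothetical 1-5 Likert satisfaction ratings for two feedback
    # delivery modes; invented for illustration, not the cited study's data.
    handwritten = [3, 2, 4, 3, 3, 2, 4, 3]
    electronic = [4, 4, 5, 3, 4, 5, 4, 4]

    # Two-sided test of whether the two groups' ratings differ.
    stat, p = mannwhitneyu(handwritten, electronic, alternative="two-sided")
    print(f"U = {stat}, p = {p:.3f}")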
The problems with conventional approaches to feedback, such as the assessor writing individual feedback for each student, might therefore be remedied by the use of technology [21]. Computer-mediated delivery offers a kind of flexibility not found in other methods: flexibility in how and when the feedback is delivered and read by the end-user. Electronically delivered feedback is also more private, meaning the student avoids the peer-group pressure that classroom settings can impose. Electronic feedback can also be stored on-line for future reference, for example, when undertaking later assessments.

2.4. Usefulness of Computer-Mediated Feedback Delivery in Developing Countries

In an early landmark study [22], Levin and Lockheed identify five characteristics of effective schools that are applicable in the developing world: (a) strong leadership by the principal, (b) mastery of basic skills, (c) a clean and orderly school environment, (d) high teacher expectations of student performance, and (e) frequent assessment of student progress [22]. These characteristics are clearly relevant to schools in the conventional sense of having a physical location. In the world of the early 1990s, Internet technology was only beginning to be widely available in the developed world; it would be decades before such technologies made their way into developing areas.
The first four of the five characteristics are provided by educators on-site in developing areas: school principals acting as effective leaders and role models, a focus on the basic skills of literacy and numeracy, a safe and orderly environment, and the setting of appropriately high expectations for students to live up to. The fifth—frequent assessment with appropriate feedback—can also be delivered by people on-site, but the computer-mediated technology now available lends itself well to the conduct of assessment, including the delivery of feedback. The efficiency of this method allows more frequent assessment and/or the enrolment of larger numbers of students, since the time-consuming nature of manual marking would otherwise limit how many students can be managed at any given time.
The technology of computer-mediated feedback also lends itself to home-schooling, in the event that suitable public or private schools are too far away or other factors make attendance difficult.

3. Method

The nature of the research question and the availability of a suitable cohort of students make a case study approach appropriate to explore the research question in two parts: (a) does computer-mediated feedback, suitably informed by best practice principles, offer an improved educational experience for students, and (b) does this method have the potential to allow large-scale courses to be delivered in a cost-effective way to people in developing and developed areas alike? The first part is answered on the basis of empirical evidence gathered by a series of student surveys, while the second is deduced from the results of the first.
In general, case studies allow the underlying factors of phenomena to be explored, including those occurring in an educational context. The case study described here involved the participation of 312 university-level students taking a first-year, first-semester Communications course. The course proceeded over a standard 13-week semester across three campuses of an Australian university: one campus in a low socio-economic status (SES) area (15 students), one in a low-to-middle SES area (166 students), and one in a middle-to-high SES area (141 students). Of the participants, 85% are male; 65% are domestic students, while 35% are international, mainly from East Asia and India.
The research method focuses on three successive assessment items (due in semester weeks four, six and nine). Each item is submitted via the Learning Management System (LMS) assignment submission function. The process is paperless, with students not needing to print out their work; however, the documents must be developed in a word processor and lodged either as a PDF (Portable Document Format) or an MS Word compatible document. Students are not obliged to buy MS Word; they can use a free word processor if they wish and save their work in either of these formats. The requirement for particular formats comes from the third-party plagiarism checker; if such checking is not necessary, almost any submission format is accepted by the LMS.
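To make the constraint concrete, the following minimal sketch (in Python) shows the kind of extension check implied by the submission requirement; the function name and the exact extension set (.pdf, .doc, .docx) are our assumptions based on the “PDF or MS Word compatible” wording, not details of the study’s LMS.

    from pathlib import Path

    # Formats accepted because of the third-party plagiarism checker; the
    # exact extension set is an assumption based on the paper's "PDF or
    # MS Word compatible document" requirement.
    ALLOWED_EXTENSIONS = {".pdf", ".doc", ".docx"}

    def is_acceptable_submission(filename: str) -> bool:
        """Return True if the submitted file is in an accepted format."""
        return Path(filename).suffix.lower() in ALLOWED_EXTENSIONS

    print(is_acceptable_submission("essay_final.docx"))   # True
    print(is_acceptable_submission("essay_final.pages"))  # False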
Table 1. Data Collection Instruments.
Data Collection | Procedure
First Survey | In Week 0, students were surveyed as to their expectations for assessment feedback.
Second Survey | In Week 10, after the third and final assessment task had concluded, the students completed an anonymous web-survey.
Third Survey | In Week 10 of the following semester (5 months later), the students completed a follow-up two-question web-survey to determine whether the process had had a positive, negative or neutral effect on their learning (as distinct from student satisfaction). The delay between Surveys 2 and 3 allowed the lasting effect of the technique to be gauged.
The assessor/marker displays the submissions online, and the feedback is given by entering mark-appropriate, pre-written text into the feedback text-box or, alternatively, a file containing the feedback can be attached.
A series of three data collection surveys was conducted, as shown in Table 1.

3.1. Feedback

For each assessment item, a graduated set of feedback was developed across four bands of marks: 5–10 out of 20, 11–15 out of 20, 16–18 out of 20, and 18+ out of 20 (see Appendix). The appropriate feedback, personally addressed, was copied from a source text file (Notepad) and pasted into the LMS text feedback box. The copy/paste feedback process is appropriate, as it has long been observed that many, if not most, students make the same errors and omissions when writing essays. As long as the feedback is well written and applicable to the individual student’s work, the student should benefit from receiving it. The feedback provided was informed by Nicol and Macfarlane-Dick’s (2006) seven principles of good feedback [4], as listed earlier.
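The band-selection step described above is simple enough to express in a few lines. The sketch below (Python; the template file names and function names are hypothetical, not part of the study) maps a mark out of 20 to the appropriate pre-written feedback and personalises the salutation. Since the paper lists both a “16–18” and an “18+” band, a mark of exactly 18 is resolved here to the higher band, which is an assumption.

    def select_feedback_template(mark: int) -> str:
        """Map a mark out of 20 to one of the four graduated feedback bands."""
        if mark >= 18:
            return "feedback_excellent.txt"   # Excellent (18+)
        if mark >= 16:
            return "feedback_very_good.txt"   # Very Good (16-18)
        if mark >= 11:
            return "feedback_average.txt"     # Average (11-15)
        return "feedback_poor.txt"            # Poor (5-10)

    def personalise(template_text: str, first_name: str) -> str:
        """Address the feedback to the student, as the Appendix samples do."""
        return template_text.replace("(Student's given name)", first_name)

    # Example: the marker reads the selected template and pastes the
    # personalised text into the LMS feedback box.
    print(select_feedback_template(14))   # feedback_average.txt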

3.2. Recruitment of Participants

All 312 active enrolments in the course were potential participants. Recruitment was performed via a two-stage process: (a) the project was announced during lectures, and the nature and benefits of the outcomes were described; (b) prior to each of the three surveys, an email containing the URL of the online survey was sent to all enrollees, with one follow-up reminder.

4. Results and Discussion

4.1. Results

The pool of participants was drawn from three campuses, which collectively represent a broad cross-section of demographic categories. The results from the three campuses are consolidated into a single merged group. Overall, between 53% and 57% of students said they did not expect as much feedback as they received in relation to the three assessment tasks. A high proportion, between 70% and 74%, were satisfied overall with the feedback they received, while between 15% and 18% were dissatisfied with the amount of feedback. In relation to student learning, 76% said the process had improved their learning to a significant degree.
Survey 1 was undertaken independently in Week 0 as part of a larger orientation day survey. In Survey 2, from a potential pool of 312 active enrolments, a total of 33 students participated, yielding a participation rate of 10.6%. In Survey 3, a total of 20 students participated in the non-compulsory survey, yielding a participation rate of 6.7%. This relatively low participation rate is most likely the product of survey fatigue. The results for each question on the five-point scale are expressed as a percentage of the total; for example, a result of 3.0% indicates one response in a sample of 33. The surveys combined quantitative items with optional qualitative responses in the form of “do you have any comment on this?”
The surveys were administered via the SurveyMonkey online survey service (see links at the end of the paper). An email was sent to all students asking for their participation, which included a web-link to the survey; one click on the link presented the student with page 1 of the survey. SurveyMonkey consolidates the results for easy processing and presentation.

4.2. Discussion

Computer-mediated delivery of detailed assessment feedback has strong potential as an efficient way of providing quality feedback, when that feedback is structured according to best practice and contains orders of magnitude more information than a conventional marker would have time to write.
We might therefore characterize the process of computer-mediated feedback examined in this study as follows:
  • It has the virtue of simplicity and efficiency, requiring fewer marking resources; this is an advantage particularly in larger courses, allowing courses to be scaled up without loss of quality, and possibly with an improvement in quality;
  • It can deliver quality feedback that is informed by best practice;
  • It can deliver a large amount of feedback with no extra effort;
  • The widespread availability of LMS technology and the simplicity of the process mean that the practice could be generalized across disciplines; and
  • The process produces measurable improvements in both student satisfaction and learning, a combination likely to improve student engagement and retention, which is a strategic imperative for any institution.
Scaled-up, computer-mediated methods such as this have potential to address equity of access to education in developing parts of the world. Courses can be made available to people in disadvantaged areas provided they have a computer and an internet connection. This may be a great benefit to women, given the lower rates of participation in technology relative to men [23].
The practice of delivering standardized feedback via the LMS does not please everyone, specifically those who expect personally addressed feedback. In this study, less than 20% of respondents expressed dissatisfaction with the method. A possible remedy is for the marker to append a voice-recorded message in addition to the written feedback, or to offer one-to-one consultations, where practical.

4.3. Limitations

The surveys had a low response rate. At the school and faculty level, the number of surveys that students had been asked to participate in was relatively high; survey fatigue is therefore the most likely explanation for the relatively low response rates (10.6% and 6.7%). Response rates of this order make this more of an exploratory study in terms of its usefulness. Future research could focus on organizing the surveys to achieve higher response rates.

5. Conclusions

The case study set out to explore (a) the efficiency of using the Learning Management Systems’ feedback delivery function in conjunction with graduated feedback informed by the principles of best practice, and (b) whether this method might be adapted to a scaled-up course offering for deployment in the developing world.
In relation to the first part of the research question, the results of the study clearly indicate that this method works well to provide feedback that (a) is more detailed than a marker might otherwise write for each student, (b) meets (and in some cases exceeds) the students’ expectations for the amount of feedback, and (c) is delivered in a more timely manner because the process is less labor-intensive. All of these factors contribute to a positive reaction from the students to the method. The concerns of the proportion (<20%) who preferred custom-written feedback could be addressed by offering one-to-one consultations in which feedback is delivered in person. These were largely the high-achieving students, who would likely benefit from a more direct, mentoring-based approach.
In relation to the second part of the research question, we conclude that a strong argument exists for the method to be used on a larger scale. What is required is for a carefully developed set of graduated feedback to be produced for each assessment item, based on Nicol and Macfarlane-Dick’s principles [4], and for an assessor/marker to deliver this feedback via an LMS. In this way, an experienced marker can process around three or four assignments in the time a conventional marker takes to mark one. This would permit the number of enrolments in a course to be scaled up, which would be an advantage in both developed and developing world applications.
While the participants in this study were IT students, the generic nature of the process means it can be applied across a wide range of disciplines and student types, including those living in developing areas and/or people without much formal education. As long as the feedback is written in clear, simple language, there is no reason why it should not be effective.

Acknowledgments

This research was carried out with the support of Griffith University and the University of Southern Queensland. The research team gratefully acknowledges the financial support from Griffith and USQ. The team also acknowledges the willingness of the participants, particularly the students of the case study classes.
Data access. A secure facility provided by SurveyMonkey.com was used for data collection. Survey 1 data can be viewed at: https://www.surveymonkey.com/results/SM-7HGQPGY8/. Survey 2 data can be viewed at: https://www.surveymonkey.com/results/SM-9XSCWQ3L/.
Disclosure statement. David Tuffley and Amy Antonio do not work for, consult for, own shares in or receive funding from any company or organization that would benefit from this paper. They have no relevant affiliations.
Ethics. Human Research Ethics Committee advice was received on 14 February 2014, granting approval for a period of one year:
“I write further to the additional information provided in relation to the conditional approval granted to your application for ethical clearance for your project “NR: Meeting Students’ Expectations for Assessment Feedback Using a Learning Management System (LMS)” (GU Ref No: ICT/01/14/HREC). This is to confirm receipt of the remaining required information, assurances or amendments to this protocol. Consequently, I reconfirm my earlier advice that you are authorized to immediately commence this research on this basis”.

Author Contributions

David Tuffley performed the data collection and the literature review. David Tuffley and Amy Antonio jointly analysed the data. David Tuffley drafted the analysis of results. David Tuffley and Amy Antonio drafted the paper and iteratively refined it to the final version.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix: Graduated Feedback Samples

Notes (Not Included in Feedback, Provided here for Reference)

The numbers preceding each passage in the samples below indicate which of the following seven principles of good feedback [4] that passage addresses:
  1. Clarifies the nature of good performance.
  2. Engenders the ability to self-assess.
  3. Provides quality information about a student’s performance.
  4. Stimulates two-way teacher/student and student/student discussion.
  5. Engenders positive motivational beliefs and self-esteem in students.
  6. Allows the student to close the gap between current and desired performance.
  7. Gives the teacher feedback to help her/him improve their teaching practice.

Poor (between 5 and 10 out of 20 Marks)

(4) (Student’s first name), (1,2,3,6) a good essay/report is one that informs and entertains, conveying information in a way that does not draw attention to itself but puts the spotlight on the message. There should be evidence of research and thoughtful reflection, resulting in a piece of work that the reader has learned something from. The language should be clear and concise, appropriate to the topic, and spelling/grammar should be correct (use the word processor’s spell/grammar checker). That is what the ideal essay would be like.
(1,2,3,4,5,6) Hopefully, doing this essay has helped you clarify your future aspirations or at least allowed you to learn something about the technology of the future that you did not already know. You have tried and that is good. Your topic is an interesting one. Your essay shows a degree of insight that tells me that you have thought about it somewhat, but you needed to go much further. I encourage you to make use of the support services available in the university that help with writing assignments. The Student Success Advisor will also be most happy to help you in this regard. It is important that you do this, since your future success depends to some degree on being able to communicate well.
(1,2,3,6) The textbook for this course is an example of how to communicate a complex topic in simple direct language. As you continue with your quizzes and work through the chapters, take note of the writing style and try to emulate it in the future.
(2,3,6) Assessment Criteria (from the assignment specification):
  • Writing Style: your sentences need to have a more coherent structure. Paragraphs should have a unified theme, referring to a single identifiable topic. Many of your paragraphs have multiple topics.
  • Evidence of Research: more, higher quality sources needed.
  • Content: you have covered some good points but there is room for improvement. You should have edited for conciseness, that is, removed words that were not adding value to the essay. Some points needed a reference to support them. Without references, these points are simply opinion, and cannot be considered evidence.
  • Coherence of Argument: your argument was reasonably clear.
  • Correct APA Style: some inconsistencies; see the material provided with the assignment on how to do referencing.
  • Overall Impression: as described in the first two paragraphs above.
(5) You should not be discouraged by this result—you have the potential to improve, and with further effort I am sure you will.
(4) (Signed the Assessor)

Average (between 11 and 15 out of 20 Marks)

(Student’s given name), a good essay/report is one that informs and entertains, conveying information in a way that does not draw attention to itself but puts the spotlight on the message. There is evidence of research and thoughtful reflection, resulting in a piece of work that the reader has learned something from. The language should be clear and concise, and spelling and grammar should be correct (making use of the word processor’s spell/grammar checker). Your essay achieves these to a reasonable degree. The textbook for this course is an example of how to communicate a complex topic in simple direct language. As you continue with your quizzes and work through the chapters, I recommend you take note of the writing style and try to emulate it.
Hopefully, doing the essay has helped you clarify your future aspirations or at least allowed you to learn something about the technology of the future that you did not already know. Your topic is an interesting one. Your essay has achieved a degree of insight that shows me that you have thought about it and been able to find suitable sources. The way you have expressed your ideas shows me that you have the ability to write. I encourage you to continue your efforts to communicate well in writing. Your chances of future success will be improved by this ability.
Assessment Criteria:
  • Writing Style: You have written in generally well-structured sentences. Paragraphs are coherent and refer to an identifiable topic. Some aspects of the style could be improved in relation to getting a good flow of ideas, one after another that connects logically to build a coherent narrative. You have achieved this to some extent, though there is some room for improvement.
  • Evidence of Research: your sources are generally suitable. More would have been better.
  • Content: You have covered the important points. You could have edited for conciseness, that is, removed words that were not carrying their weight. Some points would have benefited from references to support them.
  • Coherence of Argument: Argument was reasonably clear. Order of sections/content was logical. Conclusion was justified.
  • Correct APA Style: References were correctly formatted; citations were used appropriately and formatted correctly.
  • Overall Impression: as described above.
Keep up the good work.
(Signed the Assessor)

Very Good (between 16 and 18 out of 20 Marks)

(Student’s given name), a good essay/report is one that informs and entertains, conveying information in a way that does not draw attention to itself but puts the spotlight on the message. There is evidence of research and thoughtful reflection, resulting in a piece of work that the reader has learned something from. The language should be clear and concise, and spelling and grammar should be correct. Your essay achieves these to a high degree; well done. The textbook for this course is an example of how to communicate a complex topic in simple direct language. As you continue with your quizzes and work through the chapters, I recommend you take note of the writing style and try to integrate it into your already well-developed writing style.
Hopefully, doing the essay has helped you clarify your future aspirations or at least allowed you to learn something about the technology of the future that you did not already know. Your topic is an interesting one. Your essay achieves in-depth insight into the topic (within the space allowed, which is a challenge in itself). It shows me that you have thought very carefully about it and have found some good sources. I like your writing style. I encourage you to continue your efforts to communicate very well in writing. Your future success will certainly be improved by this ability.
Assessment Criteria
  • Writing Style: You have written well-structured sentences. Paragraphs are coherent and refer to an identifiable topic.
  • Evidence of Research: your sources are mostly very good.
  • Content: You have covered the important points in good detail. There is still some room for editing for conciseness, that is, removing words that are not carrying their weight. Some points needed a reference to support them.
  • Coherence of Argument: Argument was clear. Order of sections/content was logical. Conclusion was justified.
  • Correct APA Style: References were correctly formatted; citations were used appropriately and formatted correctly.
  • Overall Impression: as described above.
Keep up the great work.
(Signed the Assessor)

Excellent (18+ out of 20 Marks)

(Student’s given name), a good essay/report is one that informs and entertains, conveying information in a way that does not draw attention to itself but puts the spotlight on the message. There is evidence of research and thoughtful reflection, resulting in a piece of work that the reader has learned something from. The language should be clear and concise, and spelling and grammar should be correct. Your essay has achieved these to an exceptional degree—very well done. The textbook for this course is an example of how to communicate a complex topic in simple direct language. As you continue with your quizzes and work through the chapters, I recommend you take note of the writing style and try to integrate it into your already very good writing style.
Hopefully, doing the essay has helped you clarify your future aspirations or at least allowed you to learn something about the technology of the future that you did not already know. Your essay achieves in-depth insight into the topic (within the space allowed, which is a challenge in itself). It shows me that you have thought very carefully about it and have found some good sources. I like your writing style. I encourage you to continue your efforts to communicate very well in writing. Your future success will certainly be improved by this ability.
Assessment Criteria
  • Writing Style: You have written well-structured sentences. Paragraphs are coherent and refer to an identifiable topic.
  • Evidence of Research: your sources are mostly very good.
  • Content: You have covered the important points in good detail. There is still some room for editing for conciseness, that is, removing words that are not carrying their weight. Some points needed a reference to support them.
  • Coherence of Argument: Argument was clear. Order of sections/content was logical. Conclusion was justified.
  • Correct APA Style: References were correctly formatted; citations were used appropriately and formatted correctly.
  • Overall Impression: as described above.
Keep up the great work.
(Signed the Assessor)

References

  1. May, K. Who Else is watching TEDTalks? A Visual Map. 2012. Available online: http://blog.ted.com/who-else-is-watching-tedtalks-a-visual-map/ (accessed on 15 June 2015).
  2. Sadler, D.R. Formative assessment and the design of instructional systems. Instr. Sci. 1989, 18, 119–144.
  3. Robinson, S.; Pope, D.; Holyoak, L. Can we meet their expectations? Experiences and perceptions of feedback in first year undergraduate students. Assess. Evaluation High. Educ. 2013, 38, 260–272.
  4. Nicol, D.J.; Macfarlane-Dick, D. Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Stud. High. Educ. 2006, 31, 199–218.
  5. Black, P.J.; Wiliam, D. Assessment and classroom learning. Assess. Educ. 1998, 5, 7–74.
  6. Hyland, F.; Hyland, K. Sugaring the pill: Praise and criticism in written feedback. J. Second Lang. Writ. 2001, 10, 185–212.
  7. MacClellan, E. Assessment for learning: The differing perceptions of tutors and students. Assess. Evaluation High. Educ. 2001, 26, 307–318.
  8. Cope, B.; Kalantzis, M.; McCarthey, S.; Vojak, C.; Kline, S. Technology-mediated writing assessments: Principles and processes. Comput. Compos. 2011, 28, 79–96.
  9. Whitelock, D. Activating Assessment for Learning: Are we on the way with Web 2.0? In Web 2.0-Based E-Learning: Applying Social Informatics for Tertiary Teaching; Lee, M.J.W., McLoughlin, C., Eds.; IGI Global: Hershey, PA, USA, 2011; pp. 319–342. Available online: http://oro.open.ac.uk/19706/ (accessed on 9 June 2015).
  10. Rust, C.; Price, M.; O’Donovan, B. Improving students’ learning by developing their understanding of assessment criteria and processes. Assess. Evaluation High. Educ. 2003, 28, 147–164.
  11. Yorke, M. Formative assessment in higher education: Moves towards theory and the enhancement of pedagogic practice. High. Educ. 2003, 45, 477–501.
  12. Chanock, K. Comments on essays: Do students understand what tutors write? Teach. High. Educ. 2000, 5, 95–105.
  13. Yorke, M.; Knight, P. Self-theories: Some implications for teaching and learning in higher education. Stud. High. Educ. 2004, 29, 25–37.
  14. Alavi, M. Computer-mediated collaborative learning: An empirical evaluation. MIS Q. 1994, 18, 159–174.
  15. Bloxham, S.; Boyd, P. Developing Effective Assessment in Higher Education; Open University Press: Maidenhead, UK, 2007.
  16. Crossouard, B.; Pryor, J. Using email for formative assessment with professional doctorate students. Assess. Evaluation High. Educ. 2009, 34, 377–388.
  17. Denton, P. Generating and e-mailing feedback to students using MS Office. In Proceedings of the 5th International Computer Assisted Assessment Conference, Loughborough University, Leicestershire, UK, 2–3 July 2001; pp. 157–173.
  18. Denton, P.; Madden, J.; Roberts, M.; Rowe, P. Students’ response to traditional and computer-assisted formative feedback: A comparative case study. Br. J. Educ. Technol. 2008, 39, 486–500.
  19. Denton, P. Generating coursework feedback for large groups of students using MS Excel and MS Word. Univ. Chem. Educ. 2001, 5, 1–8.
  20. Gipps, C.V. What is the role for ICT-based assessment in universities? Stud. High. Educ. 2005, 30, 171–180.
  21. Hepplestone, S.; Holden, G.; Irwin, B.; Parkin, H.J.; Thorpe, L. Using technology to encourage student engagement with feedback: A literature review. Res. Learn. Technol. 2011, 19, 117–127.
  22. Levin, H.M.; Lockheed, M.E. (Eds.) Effective Schools in Developing Countries; Education and Employment Division, Population and Human Resources Department, The World Bank, 1991. Available online: http://www-wds.worldbank.org/servlet/WDSContentServer/WDSP/IB/1991/06/01/000009265_3961002213218/Rendered/PDF/multi_page.pdf (accessed on 15 June 2015).
  23. Antonio, A.; Tuffley, D. The gender digital divide in developing countries. Future Internet 2014, 6, 673–687.
