Article

Strategies for Enhancing Assessment Information Integrity in Mobile Learning

by Godwin Kaisara 1,* and Kelvin Joseph Bwalya 1,2
1 Department of Information and Knowledge Management, College of Business and Economics, Auckland Park, Kingsway Campus, University of Johannesburg, Johannesburg 2006, South Africa
2 Research Department, Sohar University, Sohar 311, Oman
* Author to whom correspondence should be addressed.
Informatics 2023, 10(1), 29; https://doi.org/10.3390/informatics10010029
Submission received: 3 February 2023 / Revised: 28 February 2023 / Accepted: 7 March 2023 / Published: 10 March 2023

Abstract

Mobile learning is a global trend, one which has become more widespread in the post-COVID-19 pandemic era. However, the adoption of mobile learning brings with it new assessment approaches for evaluating students’ understanding of the information and knowledge they acquire, and there is scant knowledge of how to enhance assessment information integrity in mobile learning assessments. Given the importance of assessments in evaluating knowledge, integrity is the sine qua non of online assessment. This research focuses on the strategies universities could use to improve assessment information integrity. It adopts a qualitative design, employing interviews with academics as well as teaching and learning support staff for data collection. The findings reveal five strategies that academics and support staff recommend to enhance assessment information integrity in mobile learning. The theoretical and practical implications are discussed, as well as future research directions.

1. Introduction

The worldwide closure of universities in response to the outbreak of the COVID-19 pandemic generated significant challenges for university administrators and academics. Arguably, the inaccessibility of learning information was among the most serious of these, as it deprived students of opportunities for learning and development. As a result, the adoption of educational technologies has received renewed attention from academia, government and the business sector [1], leading to a proliferation of research addressing various facets of e-learning. Mobile ICTs in particular became indispensable for bridging the gap between students and academics created by social distancing regulations. For example, Yuan et al. [2] highlight that Chinese universities introduced mandatory mobile learning courses to enable students to access learning information remotely. This shift in how academic information is acquired, accessed, exchanged and stored has had a marked influence on integrity. Compounding the problem, new software and mobile technologies make cheating easier, exacerbating an old problem with new techniques [3].
One of the key challenges, and one that remains pertinent today, is ensuring the academic integrity of online assessments. In a comprehensive review of studies on online assessments, Butler-Henderson and Crawford [4] found that cheating in examinations was the most prevalent focus. Concerns around assessment information integrity are not new, with research on the topic dating back about three decades [3]. Nevertheless, the global mass migration to online assessment has provided new impetus to promote and maintain assessment information integrity, which remains critical to the future of e-learning in all its forms. Online assessments continue to be necessary due to the prevalence of the highly infectious COVID-19, which, although it has subsided somewhat, remains a concern. Yet cheating and other forms of academic dishonesty have been reported to be more widespread during the COVID-19 era than before [5], with some describing the problem as “at an alarming level” [6]. In studies undertaken in countries such as Germany [7], students confirmed that they cheated more frequently during online assessments than during traditional face-to-face assessments, with some studies [8] finding that up to 95% of students cheated. Cheating manifests in different forms, such as impersonation, forbidden aids, peeking, peer collaboration, outside assistance and student–staff collusion [9]. As a result, there has been a crescendo of voices calling on stakeholders to “rescue academic integrity and combat academic dishonesty” [5]. Furthermore, some scholars [10] argue that, beyond the COVID-19 pandemic, there will be a continued push towards online assessments in order to guard against future COVID-19-type disruptions.
The main objective of this paper is to determine the strategies adopted by universities to promote assessment information integrity in mobile learning. Whilst there is evidence of research on academic integrity in online learning environments, to the best of our knowledge there is little that focuses on assessment information integrity in mobile learning specifically. The need for such studies cannot be overstated, particularly in a mobile-centric continent such as Africa [11], where mobile devices are the digital devices most commonly used by students to access learning information [12]. Understanding mobile learning assessment information integrity strategies can cultivate stakeholder confidence in the quality of mobile learning qualifications. Since evidence suggests that students’ performance improved during COVID-19 online-mediated assessments [10], suspicions of academic dishonesty are likely to persist until concrete reasons for the improvement are advanced. Such suspicions are reinforced by studies such as those of Harmon and Lambrinos [13] and Arnold [14], who found that students scored significantly lower in supervised assessments than the high marks attained when there was no supervision. The study is situated in Namibia, a middle-income country with a mobile penetration rate of over 100 percent.

2. Literature Review

2.1. Academic Integrity

Assessment is defined as “the systematic collection of information about student learning, using the time, knowledge, expertise and resources available, in order to inform decisions that affect student learning” ([15], p. 2). Through assessments, educators can establish the skill or knowledge level of students [16]. As noted by Comas-Forgas and colleagues [6], the assessment results obtained by students play a critical part in shaping their future career as well as their economic and social prospects. Consequently, the pressure to score high marks may make cheating an attractive option for many students. In traditional assessments, cheating is discouraged through the physical presence of invigilators, something which is a near impossibility during online assessments. Due to the distributed geography of students enrolled in online learning, questions around the authenticity and integrity of online assessments remain a topical issue among scholars and practitioners. As a result, some employers dismiss online education as sub-standard, and often discriminate against holders of online degrees, regardless of how competent they may appear to be [17]. There is a stigma attached to online degrees, one which could disadvantage holders of such qualifications when applying for employment opportunities.
On the other hand, institutions that ensure that staff and students act with integrity reinforce their reputation [18], which in turn cultivates confidence in the quality of their online degrees. In a study conducted among employers in Tanzania, Kisanga [17] found that the reputation of the institution awarding a qualification was an important consideration influencing the hiring decisions of otherwise sceptical employers. In light of the foregoing, it is apparent that the enforcement of assessment information integrity benefits not only universities, but students and graduates as well.

2.2. Proctoring in Online Assessments

A popular solution to the cheating phenomenon is the use of proctoring. The frenzied efforts to adopt proctoring tools have been labelled a technological arms race as universities attempt to address assessment integrity concerns associated with online learning [19]. Proctoring may involve the use of live webcams to observe student behaviour during online assessments. Bilen and Matros [20] argue that in the absence of proctoring, cheating among students is likely to increase. Consequently, the authors propose that universities devise and implement uniform policies governing the use of cameras to monitor student behaviour during online assessments.
Whilst proctoring has been shown to reduce cheating and inflated assessment performance [7], it is costly, compromises privacy [19,21], communicates educators’ lack of trust in their students [8] and may harm students’ mental health [6]. In a study undertaken in the United Arab Emirates on students’ attitudes and concerns regarding the use of e-proctoring tools during their final exams, Kharbat and Abu Daabes [22] found that more than 86% of students deemed online proctoring impractical and invasive of their privacy. Furthermore, the majority of the students (91.6%) felt that being watched via webcams made them anxious and negatively affected their exam performance. In addition to privacy concerns, some proctoring software has been found to discriminate against students based on their skin colour [18]. Proctoring may also be hindered by a lack of access to proctoring software [7], particularly in developing countries.

2.3. Mobile Devices and Learning

In traditional face-to-face assessments, mobile devices have been identified as one of the chief enablers of cheating and academic dishonesty [6]. In a study undertaken among accounting students in Saudi Arabia, El-Sayed Ebaid [23] found that the most common cheating behaviour during examinations was the use of cell phones to text exam questions and answers between students, as well as consulting personal class notes. The role of cell phones in cheating was further elucidated by Bernardi and Higgins [24] in the United States of America. To counter the use of mobile devices to cheat, interventions such as mobile phone detectors and jammers have been developed [25].
Although mobile devices have gained notoriety due to their ability to act as an enabler of cheating, they remain an invaluable tool for enhancing the accessibility of learning information. The role of mobile devices in knowledge acquisition, or mobile learning, was further highlighted due to the social distancing regulations necessitated by the COVID-19 pandemic. Several studies have elucidated the significant role played by mobile learning in various contexts such as Greece [26], Namibia [12] and Bangladesh [27]. The popularity of mobile devices such as smartphones and tablets is increasing at an exponential pace, particularly in developing countries [26]. Due to the limited number of computers in households in developing countries, the prevalence of mobile devices plays a critical role in enhancing access to the Internet. For example, in South Africa, only 22% of households have access to a computer, and of these, only 10–12% have access to the Internet [28,29]. In such a case, mobile devices such as smartphones become an indispensable tool for accessing online information.
Whilst interest in mobile learning has grown, there is scant literature focusing on mobile learning assessment, or m-assessment [26]. This is despite the fact that the lack of a specific monitoring system has been identified as a major weakness of mobile learning. However, with the COVID-19 pandemic transforming mobile learning from an option to a necessity, there is a need to address this gap in the literature and devise measures that ensure that assessment integrity in mobile learning is not compromised.

3. Materials and Methods

3.1. Methodology

This paper was underpinned by a qualitative design. Qualitative research questions taken-for-granted variables [30], allowing researchers to derive explanations of processes and human behavioural patterns that may otherwise be difficult to capture quantitatively. Qualitative approaches are most appropriate where little is known about a phenomenon, which we believe to be the case with assessment integrity in mobile learning. Additionally, a qualitative study of this nature can unearth critical information that is useful for guiding policy-making [31]. In mobile learning research, some studies [11,32] have found that quantitative studies are more prevalent than qualitative studies. Therefore, in adopting a qualitative approach, we chose to take the road less travelled. Qualitative studies are better at eliciting thick descriptions and insider perspectives than quantitative studies, which are predominantly used in the Global North. A qualitative approach allowed us to gain a richer understanding of interviewees’ lived experiences and perceptions regarding mobile learning assessment integrity. This was particularly important, given the scant mobile learning research emanating from Namibia. Interviews were conducted with academics and e-learning system administrators from two public universities in Namibia, as well as an Australian mobile learning expert who previously worked in the Southern African region. The goal of the interviews was to elicit respondents’ perceptions (and lived experiences) of how assessment integrity could be enhanced in mobile learning. To achieve this goal, the following research question was asked: what strategies could be employed to mitigate cheating and enhance assessment integrity in mobile learning?

3.2. Participants

In selecting participants for this study, we sought those deemed to have the best knowledge of the subject and thus able to provide a rich and detailed account of the phenomenon [33]. The interviews were conducted with a total of 12 participants, as depicted in Table 1, comprising 8 academics and 4 teaching and learning technical support staff from two public universities in Namibia. The 4 support (administrative) staff members also had lecturing experience (only 1 was not teaching at the time of the study). A purposive sampling technique was adopted in the selection of participants. The number of participants in this study seems sufficient, given that scholars have indicated that as few as 4 participants may be enough in a qualitative study due to the large quantities of data produced.
We wish to highlight that in some cases, the list of courses taught by interviewees is not exhaustive. Many of the interviewees indicated that their courses often changed, depending on the semester and level of class(es) that they are allocated in that particular year. However, the department where the academic works rarely changes; therefore, the department variable provides a better idea of the field the academic belongs to.

3.3. Data Analysis

The interview recordings were transcribed by one of the researchers and shared with a professional transcriber for quality checks to ensure the accuracy of the transcripts. Then, the interview data were coded and analysed using inductive thematic analysis. Thematic analysis is useful where researchers seek to understand participants’ feelings, experiences and behaviours from a given dataset, as opposed to a single set or data item [34].

3.4. Dependability and Trustworthiness

Whilst validity and reliability are common terms in research, they are more suited to quantitative research designs rather than qualitative research. In qualitative research, it is more common to use the terms dependability and trustworthiness. To ensure trustworthiness and rigor, we adopted the member checking technique to verify the results. As noted by Brady [35], member checking is commonly used in qualitative research to ensure the integrity and accuracy of results.

4. Results

Our analysis revealed five key themes that could be used to inform approaches aimed at enhancing assessment information integrity in mobile learning. The themes and supporting verbatim quotations are presented next.

4.1. Theme One: Assessment Approach

A number of the respondents indicated that educators should strive to set assessment questions that require learners to draw on higher-order thinking skills, rather than mere recall. For example, interviewee BC argued that “you can give them open book tests… open book does not necessarily mean that the questions come from the book, you can have case studies for example”. Interviewee ND added that academics “…could maybe bring the concept of applications more”. Through application questions, students are given questions that pose “a world problem”. In this way, students are forced to apply their understanding, and searching for answers online is less likely to be useful to them. Rather, students are forced to draw from their “critical and analytical thinking in the process” (interviewee TD).
However, for this to happen, it is important that educators are capacitated with the requisite skills to formulate the right questions. Interviewee BC added:
“I definitely feel that there is a need for the teachers themselves to be capacitated to learn more about mobile learning because they think [it] is still the traditional way of teaching.”
The educators’ lack of capacity to formulate assessments suitable for mobile learning had some negative repercussions. As highlighted by interviewee BC:
“there were quite a number of challenges, our students actually performed better but the reason why they performed better during the past two years is because they cheated and…that I can say comes from the fact that our lecturers or tutors…do not know how to set the assessments or the types of assessments.”
Interviewee LK, who has acted in various capacities as an educator, mobile learning researcher and other middle management roles, corroborated the above, arguing that
…it all depends on how the questions are set up, you can set up [questions] and then that brings another dimension; how are you setting up your questions because if you are setting up a question where the student can easily find the answer or copy and paste the answer then yah, student will automatically cheat but…, if the questions are designed in such a way that they have to apply their mind and really think through and they are not going to find the answer in the book or even by googling, that would make it a bit more of a challenge.
Furthermore, assessment integrity could also be enhanced through what interviewee AW, a distance learning division senior administrator, called “near transfer”. The interviewee argued:
I believe in what we call near transfer and far transfer…if I receive content right now and I get an activity where I need to interact, or an instruction where I need to interact with a certain topic whether an activity [or what] and it is done in class, as [in] right now, it is like the formative assessment, that is my near transfer, because what you taught me I can immediately apply or I can immediately give feedback on it.
Such an approach calls for continuous formative assessments to be administered. This viewpoint is shared by interviewee BC, who opined that “I would even suggest that you do not do summative assessments you do formative assessments, I mean continuously”. AW added that some assessments, the “far transfer” format, should be “alive”, that is, include aspects such as “a problem based assignment, project based assignment, I would want to see that assignment to be a living document”.
Respondents stated that if questions were framed properly, then open book assessment could be a useful approach that eliminated the need to monitor students.

4.2. Theme Two: Monitoring

Some respondents suggested that educators could require students to switch on their cameras to be monitored when writing certain assessments such as examinations. When cameras are on, students’ movements and actions during the assessment are monitored, and thus they may be dissuaded from engaging in any behaviour that could compromise assessment integrity. Interviewee JL argued that “as long as the student knows that they are being watched somehow, they cannot go beyond what is prescribed…”. Monitoring could also take the form of inviting students to write assessments in controlled environments such as regional centres, where invigilators are utilised. Interviewee ES stated that “although we have the mobile learning option, we can have regional centers or regional places where they go” to write assessments. The interviewee argued that technical fields such as engineering and health sciences “…also need the physical presence somehow, so it’s a matter of blending the two…”. The interviewee acknowledged that whilst mobile learning apps could be used for simulation, “that component of physical presence is also important”. Interviewee JL shared similar sentiments, adding that “this can only apply in a normal situation where you do not have an issue like [the] pandemic, where people could not be [put] together”.

4.3. Theme Three: Technical Tools

Technical features could also be built into the mobile learning system to minimise opportunities for cheating. Interviewee LK stated that mobile learning can “have a few security features which you can put in and…once you start the quiz it does not allow you to copy and paste and it does not allow you to get out of the browser window…”. Proctoring was proposed as a tool that could be used to enhance mobile learning assessment integrity. According to ES, proctoring tools “can monitor the movement…as to where the student was actually browsing at the time of the assessment. So that is really recorded. It’s analysed and provided with the submission. So when the student is submitting [the assessment] also the records of such [navigation patterns during assessment] are there. I can actually verify and the student is aware of that”. Not only can students’ browsing activities be monitored, but they can also be restricted, so that students are unable to open other online pages during the test. This functionality is sometimes referred to as a safe exam browser. Interviewee EG stated that “safe exam browser is to make sure that they do not leave the screen where they currently are”. Nevertheless, the interviewee indicated that in some instances the tool was not functional when students accessed their tests using mobile devices.
Interviewee CK shared how the university’s e-learning system is configured to make it possible to identify the location or device used to take a particular test. Scrutinising IP addresses is often a reactive measure, taken when there are suspicions of cheating. Narrating a previous incident where the university used an IP address to scrutinise network activity, interviewee CK stated that the system detected that many “students were taking the test at the same IP address”. She added that “we learned that it may have been one person doing it for them”.
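The IP-based scrutiny described by interviewee CK can be illustrated with a short script. The following is a hypothetical sketch (not the university’s actual system; the function name, record format and threshold are our own illustrative assumptions) that groups submission log records by IP address and flags any address used by several distinct students:

```python
from collections import defaultdict

def flag_shared_ips(submissions, threshold=3):
    """Flag IP addresses used by `threshold` or more distinct students.

    `submissions` is a list of (student_id, ip_address) pairs, as might
    be exported from an e-learning platform's activity log. The
    threshold is illustrative, not an established cut-off.
    """
    students_by_ip = defaultdict(set)
    for student_id, ip in submissions:
        students_by_ip[ip].add(student_id)
    return {ip: sorted(ids)
            for ip, ids in students_by_ip.items()
            if len(ids) >= threshold}

# Four students submitting from one address warrants a closer look,
# mirroring the incident narrated by interviewee CK.
log = [("s01", "10.0.0.5"), ("s02", "10.0.0.5"),
       ("s03", "10.0.0.5"), ("s04", "10.0.0.5"),
       ("s05", "192.168.1.9")]
print(flag_shared_ips(log))  # {'10.0.0.5': ['s01', 's02', 's03', 's04']}
```

Note that such a flag is only a trigger for human follow-up: shared IP addresses can also arise legitimately, for example from students writing behind the same campus or ISP network address translation.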

4.4. Theme Four: Awareness Creation

Assessment integrity can also be enhanced through awareness campaigns that are meant to change students’ attitudes towards cheating during assessments. To cultivate awareness, “…academic integrity workshops were conducted” and some academics “often reminded students of the consequences of cheating in examinations” (AB). This approach was supported by RM, who argued that students need to be taught to maintain “a certain level of ethics knowing that they should maintain and uphold”. Nevertheless, she expressed doubts over the effectiveness of this method, adding that “I do not know how that can be administered, how do you ensure that everyone is ethical since the interest of most students is that we want to pass”. The interviewee went on to suggest that students may be made aware of the implications of cheating behaviours on their future career prospects. She argued that there is a need for “learning for students in terms of the impact of, what the impact of cheating would have on their future careers, like the moment you are not able to master a certain concept or subject and then you get into the workplace it means it is going to be challenging”.

4.5. Theme Five: Verification of Identity

Sometimes the various measures put in place to promote integrity do not work as intended. Some academics indicated that their suspicions were raised when there was a significant discrepancy between assignment marks (not obtained under controlled conditions) and test marks (assessed under controlled conditions). One interviewee stated:
I even had a lot of zeros in my assessments but in the assignments I will see people who have 74, 75 in the assignments, having zero in the test, so it was a puzzle that what happens, what is happening in the background? So I experienced that a lot, when you give an assignment they perform but when you give an assessment of up to 2 h, suddenly some people who were even passing, who passed their assignment are having zero in a test or in the assessment.
(ET)
The interviewee indicated that they resorted to taking steps to authenticate the identity of the student who wrote the assessment. The interviewee argued:
I could also verify because when I have received term papers from students and I just call them that they should come and present their work to see if, I really want to authenticate whether they are the ones who did the work.
(ET)
The interviewee (ET) was confident in their authentication approach, and proceeded to narrate how it had worked in the recent past. When suspicious of compromised integrity, the academic would call the student in for an oral assessment, asking questions drawn from the student’s assignment script to test their knowledge. He shared the following experience:
I called the student in my office [to] ask one or two questions, he could not even answer one question, he did not even know anything then at the end…the conclusion was that it was not written by him…
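The discrepancy that raises suspicion in the first place, high marks on unsupervised assignments against near-zero marks under controlled conditions, could also be screened for automatically. The sketch below is a hypothetical illustration (the function name, data format and gap threshold are our own assumptions, not a tool the interviewees described) of flagging students whose assignment marks exceed their test marks by a wide margin:

```python
def flag_discrepancies(marks, gap_threshold=50):
    """Flag students whose unsupervised assignment mark exceeds their
    supervised test mark by more than `gap_threshold` percentage points.

    `marks` maps a student ID to an (assignment_mark, test_mark) pair.
    The threshold is illustrative, not an established cut-off.
    """
    return sorted(student
                  for student, (assignment, test) in marks.items()
                  if assignment - test > gap_threshold)

# Mirrors interviewee ET's experience: 74-75% on assignments, zero in tests.
marks = {"s01": (74, 0), "s02": (75, 0), "s03": (68, 62)}
print(flag_discrepancies(marks))  # ['s01', 's02']
```

As with IP scrutiny, a flag of this kind can only prioritise whom to invite for an oral verification; a large gap may equally reflect test anxiety or connectivity problems rather than dishonesty.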

5. Discussion

This study sought to identify the strategies adopted by academics to ensure assessment information integrity in mobile learning in the Namibian higher education landscape. The results have important implications for scholars and practitioners alike, as well as for the broader future of mobile learning.

5.1. Theoretical Implications

Namibian academics and support staff revealed that assessment integrity in mobile learning could be enhanced by first accepting that, given the chance, students will consult illicit information such as notes when writing an assessment without close supervision. Therefore, they argued that academics need to set assessments in cognisance of this fact. The approach adopted by some Namibian academics, whereby they set their assessments with an open-book mindset, is supported by authors such as Butler-Henderson and Crawford ([4], p. 7), who argue that “online examinations should be oriented towards the same theory of open book examinations”. This approach places the student in the role of an expert witness, and seeks to validate their knowledge by setting questions that do not require rote answers. Furthermore, the approach usually involves setting essay questions and, more importantly, citing real examples [21]. Reedy and others [36] also suggest setting higher-order questions that require students to use information instead of merely regurgitating it. With mobile learning characterised by the high mobility of learners, this approach could prove useful and practical to incorporate into mobile learning implementation plans.
Our findings also indicate that academics believe that monitoring students through remote proctoring via webcams could enhance assessment integrity in mobile learning. This approach has been used in various contexts, such as Australia [36] and the United States of America [37]. It could be useful where there is contract cheating, that is, where students pay someone to take the assessment on their behalf and then present it as their own effort [21]. However, it is worth noting that some students have complained of the intrusive nature of monitoring, feeling that their privacy was being violated [38], which raises ethical issues. On the other hand, arguments have been made that in-person invigilation is also invasive of students’ privacy [39]. One software tool that has been used in e-learning is Respondus [38,40], although its collection of sensitive student personal information under “nebulous terms of service…” ([40], p. 719) has been strongly criticised. Concerns regarding monitoring through proctoring software have led some to suggest that other approaches to enhancing integrity be considered instead [19].
Technical tools may also be appropriated and used to improve integrity in mobile learning assessments. Some of the academics argued that a screen-lock or browser lock-down software can be used to prevent students from moving from the test website or app to another app or website, as well as preventing copying, screen-sharing or printing any information using their device during the time of the assessment.
Our findings reveal that academics and support staff believe that appealing to the students’ sense of morality could help to reduce instances of academic dishonesty. According to Hsu [41], the most important determinant of whether a student cheats or not is their moral anchor, which could be enhanced through explicit discussions on the importance of academic integrity. Indeed, such an approach has been found to be effective in some instances. In a study conducted by Reedy [36] among Australian students, many students indicated that although cheating was relatively easy, their own moral compass and beliefs made it impossible for them to cheat. Consequently, this necessitates further research on strategies that could be used to inculcate a sense of morality among mobile learning users.
However, cheating may still occur regardless of the measures enacted. In that case, our findings reveal that some Namibian academics choose to interview students to verify their ownership of the work in question. Such an approach, whilst effective, may be difficult where mass cheating is suspected, as it is practically impossible to interview each and every student individually. Furthermore, this approach relies on an educator’s subjective evaluation of potential cheating: if the instructor fails to identify any cues suggesting that cheating took place, they are unlikely to initiate the interview process.

5.2. Practical Implications

This study has a number of practical implications. Firstly, the issue of academic dishonesty is a persistent problem, and the ubiquitous nature of mobile learning probably worsens it. The findings once again highlight that some students continue to cheat. The perpetuation of this malpractice is likely to reinforce the attitudes of naysayers who lack confidence in knowledge acquisition through mobile learning and other forms of ICT-mediated learning. Nevertheless, academics use various methods to enhance information integrity in mobile learning assessments. This suggests that efforts to enhance assessment information integrity should take a multi-pronged approach, using the various tools at academics’ disposal. Some tools are preventative (before the fact), whilst post hoc approaches are diagnostic, aimed at identifying whether any dishonesty took place. Some of the approaches (e.g., the design of open-book assessments) necessitate that academics be trained to develop questions that can test a student’s own knowledge with reasonable accuracy. Perhaps mobile learning can borrow the concept of reasonable assurance, rather than seeking absolute academic information integrity.

5.3. Limitations and Future Work

This study, however, is not without its limitations. Firstly, it was conducted in Namibia; combined with its qualitative nature, this means that the results cannot be easily generalized to other contexts. Future studies may therefore replicate this research in other cultural and country contexts, as such variations may yield further interesting insights. Furthermore, other researchers may utilise a quantitative approach, which would permit much larger sample sizes and more generalizable results.

6. Conclusions

Cheating, or academic dishonesty, remains a concern for all stakeholders. As technology evolves, newer ways to cheat are likely to emerge. Enforcing assessment information integrity is not a unidimensional exercise, but one that can be approached from different angles. Mobile learning practitioners have various options they could consider to improve confidence in the integrity of the information given by remote students during online assessments. The pursuit of absolute certainty in assessment integrity may be an elusive dream in mobile learning; then again, there is no evidence of absolute assessment integrity in offline assessment either.

Author Contributions

Conceptualization, G.K. and K.J.B.; methodology, G.K.; validation, G.K. and K.J.B.; formal analysis, G.K.; investigation, G.K.; resources, K.J.B.; data curation, G.K. and K.J.B.; writing—original draft preparation, G.K.; writing—review and editing, G.K. and K.J.B.; visualization, G.K.; supervision, K.J.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets generated during and/or analysed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Giannakos, M.N.; Mikalef, P.; Pappas, I.O. Systematic Literature Review of E-Learning Capabilities to Enhance Organizational Learning. Inf. Syst. Front. 2022, 24, 619–635.
  2. Yuan, Y.; Tan, G.W.; Ooi, K.; Lim, W. Can COVID-19 pandemic influence experience response in mobile learning? Telemat. Inform. 2021, 64, 101676.
  3. McHaney, R.; Cronan, T.P.; Douglas, D.E. Academic integrity: Information systems education perspective. J. Inf. Syst. Educ. 2016, 27, 153–158.
  4. Butler-Henderson, K.; Crawford, J. A systematic review of online examinations: A pedagogical innovation for scalable authentication and integrity. Comput. Educ. 2020, 159, 104024.
  5. Mutongoza, B. Impetuses for Cheating in COVID-19-Induced Online Assessments at a Rural University in South Africa. In Proceedings of the 4th International Conference on Advanced Research in Social Sciences, Oxford, UK, 26–28 November 2021.
  6. Comas-Forgas, R.; Lancaster, T.; Calvo-Sastre, A.; Sureda-Negre, J. Exam cheating and academic integrity breaches during the COVID-19 pandemic: An analysis of internet search activity in Spain. Heliyon 2021, 7, e08233.
  7. Janke, S.; Rudert, S.C.; Petersen, Ä.; Fritz, T.M.; Daumiller, M. Cheating in the wake of COVID-19: How dangerous is ad-hoc online testing for academic integrity? Comput. Educ. Open 2021, 2, 100055.
  8. Li, M.; Luo, L.; Sikdar, S.; Nizam, N.I.; Gao, S.; Shan, H.; Kruger, M.; Kruger, U.; Mohamed, H.; Xia, L.; et al. Optimized collusion prevention for online exams during social distancing. npj Sci. Learn. 2021, 6, 5.
  9. Chirumamilla, A.; Sindre, G.; Nguyen-Duc, A. Cheating in e-exams and paper exams: The perceptions of engineering students and teachers in Norway. Assess. Eval. High. Educ. 2020, 45, 940–957.
  10. Aguilera-Hermida, P.A. College students’ use and acceptance of emergency online learning due to COVID-19. Int. J. Educ. Res. Open 2020, 1, 100011.
  11. Kaisara, G.; Bwalya, K.J. Trends in Mobile Learning Research in sub-Saharan Africa: A Systematic Literature Review. Int. J. Educ. Dev. Using Inf. Commun. Technol. 2022, 18, 231–244.
  12. Kaisara, G.; Bwalya, K.J. Investigating the E-Learning Challenges Faced by Students during COVID-19 in Namibia. Int. J. High. Educ. 2021, 10, 308–318.
  13. Harmon, O.R.; Lambrinos, J. Are online exams an invitation to cheat? J. Econ. Educ. 2008, 39, 116–125.
  14. Arnold, I.J.M. Cheating at online formative tests: Does it pay off? Internet High. Educ. 2016, 29, 98–106.
  15. Walvoord, B.E. Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education, 2nd ed.; Jossey-Bass: San Francisco, CA, USA, 2010.
  16. Tosuncuoglu, I. Importance of Assessment in ELT. J. Educ. Train. Stud. 2018, 6, 163.
  17. Kisanga, D.H. Employers’ perception of graduates with on-line degrees in Tanzania: Two-pronged lesson for on-line graduates and course developers. Inf. Learn. Sci. 2020, 121, 829–845.
  18. Holden, O.L.; Norris, M.E.; Kuhlmeier, V.A. Academic Integrity in Online Assessment: A Research Review. Front. Educ. 2021, 6, 1–13.
  19. Khalil, M.; Prinsloo, P.; Slade, S. In the nexus of integrity and surveillance: Proctoring (re)considered. J. Comput. Assist. Learn. 2022, 38, 1589–1602.
  20. Bilen, E.; Matros, A. Online cheating amid COVID-19. J. Econ. Behav. Organ. 2021, 182, 196–211.
  21. Gamage, K.A.A.; de Silva, E.K.; Gunawardhana, N. Online delivery and assessment during COVID-19: Safeguarding academic integrity. Educ. Sci. 2020, 10, 301.
  22. Kharbat, F.F.; Abu Daabes, A.S. E-proctored exams during the COVID-19 pandemic: A close understanding. Educ. Inf. Technol. 2021, 26, 6589–6605.
  23. El-Sayed Ebaid, I. Cheating among Accounting Students in Online Exams during COVID-19 Pandemic: Exploratory Evidence from Saudi Arabia. Asian J. Econ. Financ. Manag. 2021, 4, 9–19.
  24. Bernardi, R.A.; Higgins, K.M. Factors Associated with College Cheating and Suggestions for Reducing Classroom Cheating. Account. Educ. J. 2020, 30, 1–25.
  25. Nyamawe, A.S.; Mtonyole, N. The Use of Mobile Phones in University Exams Cheating: Proposed Solution. Int. J. Eng. Trends Technol. 2014, 17, 14–17.
  26. Matzavela, V.; Alepis, E. M-learning in the COVID-19 era: Physical vs. digital class. Educ. Inf. Technol. 2021, 26, 7183–7203.
  27. Biswas, B.; Roy, S.K.; Roy, F. Students Perception of Mobile Learning during COVID-19 in Bangladesh: University Student Perspective. Aquademia 2020, 4, 1–9.
  28. Soudien, C.; Reddy, V.; Harvey, J. The Impact of COVID-19 on a Fragile Education System: The Case of South Africa. In Primary and Secondary Education during COVID-19: Disruptions to Educational Opportunity during a Pandemic; Reimers, F.M., Ed.; Springer: Berlin/Heidelberg, Germany, 2021; pp. 303–324.
  29. Frans, C.; Pather, S. Determinants of ICT adoption and uptake at a rural public-access ICT centre: A South African case study. Afr. J. Sci. Technol. Innov. Dev. 2022, 14, 1575–1590.
  30. Aspers, P.; Corte, U. What is Qualitative in Qualitative Research. Qual. Sociol. 2019, 42, 139–160.
  31. Tolman, D.L.; Hirschman, C.; Impett, E.A. There is more to the story: The place of qualitative research on female adolescent sexuality in policy making. Sex. Res. Soc. Policy 2005, 2, 4–17.
  32. Wu, W.H.; Jim Wu, Y.C.; Chen, C.Y.; Kao, H.Y.; Lin, C.H.; Huang, S.H. Review of trends from mobile learning studies: A meta-analysis. Comput. Educ. 2012, 59, 817–827.
  33. Almaiah, M.A.; Al-Khasawneh, A.; Althunibat, A. Exploring the critical challenges and factors influencing the E-learning system usage during COVID-19 pandemic. Educ. Inf. Technol. 2020, 25, 5261–5280.
  34. Kiger, M.E.; Varpio, L. Thematic analysis of qualitative data: AMEE Guide No. 131. Med. Teach. 2020, 42, 846–854.
  35. Brady, S.R. Utilizing and Adapting the Delphi Method for Use in Qualitative Research. Int. J. Qual. Methods 2015, 14, 1–6.
  36. Reedy, A.; Pfitzner, D.; Rook, L.; Ellis, L. Responding to the COVID-19 emergency: Student and academic staff perceptions of academic integrity in the transition to online exams at three Australian universities. Int. J. Educ. Integr. 2021, 17, 1–32.
  37. Woldeab, D.; Brothen, T. Video Surveillance of Online Exam Proctoring: Exam Anxiety and Student Performance. Int. J. E-Learn. Distance Educ. 2021, 36, 1–27.
  38. Lee-Post, A.; Hapke, H. Online learning integrity approaches: Current practices and future solutions. Online Learn. J. 2017, 21, 135–145.
  39. Coghlan, S.; Miller, T.; Paterson, J. Good Proctor or “Big Brother”? Ethics of Online Exam Supervision Technologies. Philos. Technol. 2021, 34, 1581–1606.
  40. Paris, B.; Reynolds, R.; McGowan, C. Sins of omission: Critical informatics perspectives on privacy in e-learning systems in higher education. J. Assoc. Inf. Sci. Technol. 2022, 73, 708–725.
  41. Hsu, J.L. Promoting Academic Integrity and Student Learning in Online Biology Courses. J. Microbiol. Biol. Educ. 2021, 22, ev22i1-2291.
Table 1. Demographic profile of interviewees.
Interviewee | Position | Department | Courses Taught
AW | Administrator | Centre for Open and Lifelong Learning | Web Communications, Communication Skills
BC | Administrator | Centre for Open and Lifelong Learning | Microteaching, Classroom and Workshop Management
LK | Administrator/Academic | Technical and Vocational Education and Training | ICT for Educational Management, Psychology of Learning, Microteaching
ES | Academic/Management | Marketing and Logistics | Customer Care, Product Innovation, Project Management
ET | Academic | Accounting | Labour Economics, Econometrics, Advanced Microeconomics Theory
AB | Academic | Information Technology | Python, C#
RM | Academic | Land and Property Sciences | Information Systems Law, Land Policy and Development, Project Management for Land Administration
TD | Academic | Architecture, Planning and Construction | Construction Economics, Building and Engineering Law, Project Management
JL | Academic | Department of Biology, Chemistry and Physics | Analytical Principles and Practice, Molecular Spectroscopy and Chemometrics
EG | Academic | Department of Communication | Professional Communication, English in Practice
CK | Administrator/Academic | Department of Higher Education and Lifelong Learning | Educational Research, Instructional Design, Contemporary Social Issues
ND | Academic | Department of Mathematics, Statistics and Actuarial Science | Advanced Calculus, Engineering Mathematics, Business Statistics