Article

Evaluating the Feasibility of Implementing E-Assessment in the Objective Structured Clinical Examination (OSCE) in Pharmacy Education from the Examiner’s Perspective

1 Department of Pharmacy, Tajen University, Pingtung 90741, Taiwan
2 Department of Digital Multimedia Design, Tajen University, Pingtung 90741, Taiwan
3 Department of Nursing, Tajen University, Pingtung 90741, Taiwan
* Authors to whom correspondence should be addressed.
Educ. Sci. 2021, 11(5), 194; https://doi.org/10.3390/educsci11050194
Submission received: 19 March 2021 / Revised: 18 April 2021 / Accepted: 20 April 2021 / Published: 22 April 2021

Abstract

With the wide application of the OSCE in pharmacy, the development of e-assessment for the OSCE is a predictable trend. However, the feasibility of its practical application and its acceptance by examiners accustomed to traditional paper-based methods are worth examining. In this study, an e-assessment system (EAS) was constructed and used in the examination process, and examiner satisfaction and changes in acceptance of the EAS were evaluated. Examiners showed high recognition of the advantages of the EAS in data processing, but the EAS initially made examiners noticeably more nervous than the paper-based method. After repeated use of the e-OSCE system, examiner satisfaction and acceptance improved significantly, indicating a correlation between examiners’ familiarity with the assessment method and their acceptance of it. Overall, the EAS has substantial advantages over traditional paper-based methods and is feasible for clinical practice examinations in pharmacy education.

1. Introduction

As medical care has become increasingly specialized, a consensus has formed around the concept of patient-oriented care. The professional clinical skills of pharmacists are multifaceted, and training methods in pharmacy education have changed accordingly. Beyond basic professional knowledge, clinical practice and communication skills are gaining importance in teaching [1,2]. In response to these changes in teaching modes, innovative assessment methods such as the objective structured clinical examination (OSCE) [3,4], the mini-clinical evaluation exercise (mini-CEX) [5], and direct observation of procedural skills (DOPS) [6] have emerged to evaluate teaching effectiveness. Among them, the OSCE is the most widely used in pharmacy education and in the evaluation of pharmacists’ clinical practice skills [7]. The OSCE originated in the United Kingdom (UK) as an objective means of assessing medical students’ skills [8]. It facilitates the assessment of students’ competency in clinical skills in a controlled, simulated environment [9]. The OSCE is used in medical qualification examinations in many countries, such as the UK, Canada, and the United States, and has also been adopted elsewhere, including in Asian countries such as Taiwan [10,11,12]. Taiwan has applied the OSCE in the second stage of the qualification examination for physicians since 2013. At present, the OSCE is being extended to pharmacy education, with plans to implement it in the pharmacist qualification examination in the future. Although the value of the OSCE in assessing clinical competencies is well established, several barriers hinder its optimal integration into educational programs. Traditionally, OSCEs have been assessed with paper-based methods, but a number of issues have been highlighted with this approach. Administering an OSCE requires considerable resources, including time, personnel, and a massive amount of paper forms, which are the primary drawbacks cited in the literature [13,14,15]. In addition, the data processing and analysis after testing can be troublesome: manually calculating results and entering them into a database is time-consuming [16,17]. Conventional OSCE methods can therefore be expected to become increasingly challenging to administer as the number of students grows.
Recognizing these potential shortfalls of OSCE evaluation, technology solutions have been explored to improve evaluative efficiency and objectivity. Incorporating information technology may resolve many of these problems. Some studies have pointed out that tablet-assisted alternatives can reduce resource use [18], and that iPads with specialized software can be used to develop an electronic OSCE (e-OSCE) [15].
The development of the e-OSCE is a predictable future trend. However, when considering innovative assessment strategies such as an e-assessment system (EAS) for the OSCE, it is important to identify and evaluate the potential advantages and disadvantages of the innovation. Before this mode of assessment can be implemented on a broader basis, the effectiveness of its practical application and its acceptance by administrators and examiners accustomed to traditional paper-based methods must be examined.
The aim of this work was to construct an e-OSCE system, apply the EAS to the examination of clinical practice skills, and evaluate the acceptance and feasibility of the EAS from the examiner’s perspective.

2. Methods

2.1. Construction and Application of e-OSCE

Recognizing the potential of technology innovations to enhance OSCE efficiency, objectivity, and organization, the Department of Pharmacy cooperated with the Department of Digital Multimedia Design (DMD) to develop an e-OSCE system and applied it in the teaching and examination process. Pharmacy teachers took responsibility for the clinical and organizational requirements of OSCE administration, developing the OSCE cases, marking guides, and schedule. The DMD specialists secured the required information technology resources and supported the technology implementation. The e-OSCE system comprises two main parts: the e-learning system (ELS) for student functions and the e-assessment system (EAS) for teacher functions.
The main functions of the ELS included OSCE situational teaching, teaching cases, and an online test question bank. Based on its functions, the ELS could be divided into (1) a learning and discussion module, (2) a teaching materials module, (3) a question-bank and test module, and (4) a learning process module. The users of the EAS were mainly administrators and examiners, and it was divided into an administrator module and an examiner module accordingly. In the administrator module, the administrator could operate the following functions:
  • System management: performing system settings, such as managing the accounts of examiners, teachers, and students.
  • Assessment management: configuring the relevant information for an OSCE.
  • Results management: analyzing and managing the data of ongoing or completed test results.
  • Teaching materials management: providing related learning and teaching materials.
  • Question-bank management: creating OSCE-related test questions in the question bank.
  • Student portfolios management: generating OSCE portfolio records (test score records, test result analyses, and student feedback on test results) in the learning portfolios module after the OSCE tests.
The examiner module, on the other hand, was for examiners to score through the interface during OSCE testing. After the rating, the results were automatically calculated according to the formula preset by the administrator and then stored. At any time during the test, examiners could view a student’s total score or correct mistaken scores through “student portfolios management” in the examiner interface. A minimal sketch of this scoring flow follows.
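The paper does not publish the internals of the EAS, so the following is a minimal sketch of the scoring flow described above, under assumptions: the class names (ChecklistItem, EvaluationForm), the 0–2 rating scale, and the weighted-sum formula are hypothetical illustrations rather than the system’s actual design.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChecklistItem:
    text: str
    weight: float                # item weight preset by the administrator
    score: Optional[int] = None  # examiner's rating; None until marked

@dataclass
class EvaluationForm:
    student_id: str
    items: list[ChecklistItem] = field(default_factory=list)
    adjustment: int = 0  # add/subtract score from the floating window
    comments: str = ""   # written feedback (added after the system revision)

    def is_complete(self) -> bool:
        # Results cannot be sent while any item is unmarked, which
        # eliminates the missing-data problem of paper checklists.
        return all(item.score is not None for item in self.items)

    def total_score(self) -> float:
        if not self.is_complete():
            raise ValueError("Evaluation incomplete: results cannot be sent.")
        return sum(i.score * i.weight for i in self.items) + self.adjustment

# Example: an examiner marks two items and applies a +1 adjustment.
form = EvaluationForm("S001", [ChecklistItem("Shakes and primes the MDI", 2.0),
                               ChecklistItem("Explains rinsing after use", 1.0)])
form.items[0].score = 2
form.items[1].score = 1
form.adjustment = 1
print(form.total_score())  # 6.0 -> uploaded to the server and stored
```

The is_complete check mirrors the “results cannot be sent when the evaluation is incomplete” rule the examiners highlighted in Section 3.3.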

2.2. OSCE Planning and Implementation

2.2.1. Description of Course

This study was approved by the Ethics Committee of Pingtung Christian Hospital (Institutional Review Board number IRB570B) and was carried out at Tajen University in Taiwan. Implementing the OSCE requires specialized venues and facilities. Our university established a specialized classroom system for the objective structured clinical examination in 2014 so that students could learn clinical practice skills more effectively in simulated clinical situations; this specialized venue was also used to conduct the OSCE assessment in this study. The examinees were fourth-year students of the Department of Pharmacy who were about to undertake their internships; 191 students completed the OSCE in this study. The examinees took a pharmacy-related course before the test: the OSCE was implemented in a compulsory three-credit course (Dispensing Pharmacy) that included instructional content on practical clinical skills. The OSCE has been implemented in this course for six years, since the specialized classroom was established, and over those years all topics of the course were taught by the same faculty members. The examination topics for the OSCE test were formulated by hospital clinical pharmacists and the teachers responsible for the school curricula, with a focus on professional skills, communication skills, and problem-solving abilities.

2.2.2. Implementation of EAS in OSCE

In this study, the e-OSCE system was used in the teaching and testing process with the aim of understanding the feasibility of applying it to clinical pharmacy practice education. Given the diverse functions of the system, this paper focused on evaluating the use of the EAS. If applying the EAS to the pharmacy OSCE proved feasible, the next goal would be a study of the functionalities of the ELS, to understand whether the ELS helps teachers and students in their teaching and learning. Before the test, the administrator (the teacher in charge of the course) set the examination content in the EAS, including the date and time of the examination, the students to be tested, the examination venues and examiners, the examination questions, the evaluation forms, and the marks calculation method; a sketch of such a configuration is given below. The topic of the test was “guidance on the use of a metered-dose inhaler (MDI) for asthma patients”, one of the topics most frequently raised in communication with patients. The assessment of each student lasted 11 min: 1 min for reading the test question, 8 min for the test, and 2 min for instant feedback by the examiner. During the 8 min test, examiners accessed the examiner interface through the examiner module on their tablets and awarded marks according to the evaluation form preset by the administrator. With the EAS, an examiner used the tablet to enter marks and comments about a student’s performance; information entered on the tablet was immediately sent to a server, which saved and organized the data. After the 8 min test, the examiner gave the student 2 min of feedback on shortcomings and necessary improvements, with the aim of enhancing learning outcomes. According to the results of the OSCE test, the students’ performance was satisfactory thanks to class lectures combined with teaching videos on the learning platform: 89.6% of examinees passed the exam. This indicates that providing an appropriate platform for students’ multi-modal learning is a feasible way to enhance learning efficiency.
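As a concrete illustration of the examination content the administrator presets, here is a minimal configuration sketch; every field name and placeholder value is a hypothetical assumption, since the paper does not specify the EAS data format (only the 1/8/2 min timing, topic, and cohort sizes are taken from the text).

```python
# Hypothetical sketch of the pre-test configuration described above.
# Field names and placeholder values are assumptions for illustration.
exam_config = {
    "topic": "Guidance on the use of MDI for asthma patients",
    "schedule": {"date": "2020-12-01", "start": "09:00"},  # placeholder
    "station_timing_min": {  # 11-minute station: 1 + 8 + 2
        "read_question": 1,
        "test": 8,
        "examiner_feedback": 2,
    },
    "students": [f"S{i:03d}" for i in range(1, 192)],  # 191 examinees
    "examiners": [f"E{i:02d}" for i in range(1, 13)],  # 12 examiners
    "evaluation_form": "mdi_checklist_v1",             # preset marking guide
    "score_formula": "weighted_sum_plus_adjustment",   # preset by admin
}
```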

2.3. Assessment of Examiners’ Acceptance of the e-OSCE System

The examiners were 12 clinical course teachers from the Department of Pharmacy. They had participated in relevant practical study seminars or training on the OSCE, and all had experience in using traditional paper-based assessments and in serving as OSCE examiners on the same test topic (guidance on the use of MDI for asthma patients). Therefore, in addition to assessing the examiners’ satisfaction with the EAS, a comparison between paper-based evaluation and the EAS was also evaluated from the examiners’ perspectives. The students who participated in OSCE tests in past years had received the same course at school, and the paper-based assessment form used previously was translated directly into the electronic format in this study; the two assessment methods were therefore comparable. Since the examiners had never used the EAS before, the motives for and advantages of the test system were first explained, and the method of use was introduced as a guide for them to practice mock evaluations through the examiner interface. After the examiners had conducted a few practice rounds and all agreed that they had become sufficiently familiar with tablet marking, they were invited to perform an evaluation using the test system and to fill out a questionnaire on their level of satisfaction with the EAS (pre-test).
The questionnaire was slightly modified from the one previously described by Luimes and Labrecque [13]. It consisted of Likert-scale [19] questions plus a free-text section for additional comments on the EAS, organized into three themes (Figure 1). Themes one and two covered satisfaction with the operation of the EAS (5 questions) and satisfaction with the tablets and network signals (5 questions); theme three compared the EAS with the paper-based evaluation form (7 questions). The questionnaire results and the examiners’ suggestions were used as a reference for revising and improving the system’s functions. After the pre-test, a few mock online e-OSCEs were arranged by the administrator to familiarize the examiners with the operation of the EAS and to test the functions and stability of the revised system. Through these mock practice rounds on the marking process, the application of the EAS in the formal test proceeded smoothly. After the formal OSCE test, the examiners were invited to fill out the same questionnaire again (post-test) to evaluate changes in their acceptance of the EAS.

2.4. Statistical Analysis

For the examiners’ satisfaction with the EAS, a 5-point Likert scale was used for scoring (5 points for “very satisfied”, 4 for “satisfied”, 3 for “fair”, 2 for “dissatisfied”, and 1 for “very dissatisfied”). In addition to tabulating the number of examiners and the average score for each question, paired t-tests [20] were used to analyze the pre-test and post-test questionnaire results for themes one and two, and the McNemar test [21] was used to compare the examiners’ satisfaction between the EAS and the paper-based assessment method (theme three). A sketch of both tests follows.
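As a concrete illustration, both tests can be run in a few lines of Python with SciPy and statsmodels; the Likert ratings below are placeholder values, not the study’s raw data, while the 2 × 2 table reproduces the Q11 counts from Table 2.

```python
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

# Placeholder 5-point Likert ratings from n = 12 examiners for one question;
# illustrative values only, not the study's raw data.
pre  = np.array([3, 2, 3, 2, 3, 3, 2, 3, 2, 3, 3, 2])
post = np.array([4, 4, 5, 4, 4, 5, 4, 4, 4, 5, 4, 4])

# Themes one and two: paired t-test (the same examiners rate pre and post).
t_stat, p_value = ttest_rel(pre, post)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.5f}")

# Theme three: McNemar test on a 2 x 2 table of paired dichotomous choices
# (rows: post-test choice; columns: pre-test choice). Counts are Q11, Table 2.
table = [[3, 6],   # post = EAS:         pre = EAS, pre = paper-based
         [0, 3]]   # post = paper-based: pre = EAS, pre = paper-based
result = mcnemar(table, exact=True)  # exact binomial version, apt for n = 12
print(f"McNemar: p = {result.pvalue:.4f}")  # 0.0313 < 0.05, matching Table 2
```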

3. Results and Discussion

3.1. Results of the Examiners’ Satisfaction with EAS

The examiners filled out the pre-test questionnaire at the first examiners’ consensus meeting, after the operation of the system had been explained and fully practiced and they had performed an evaluation with the EAS. The post-test questionnaire was filled out after the formal OSCE test. The examiners evaluated the EAS in terms of three themes: the operation of the EAS, the tablets and network signals, and comparison with the paper-based evaluation form, with the results shown in Table 1 and Figure 2.
For themes one and two, the pre-test items with the highest and lowest satisfaction were “resolution of the tablet screen” (average score 3.8) and “operational convenience” (average score 2.7), respectively. Evidently, the examiners’ satisfaction with theme two (tablets and network signals) was higher than with theme one (operation of the EAS): scores for the former ranged from 3.1 to 3.8, while the latter ranged only from 2.7 to 2.8. In the post-test, average scores ranged from 4.1 to 4.5 for theme one and from 4.6 to 4.8 for theme two. A paired t-test on each question of themes one and two showed a significant pre-test to post-test difference (p ≤ 0.001) in the examiners’ satisfaction with both themes.
In the pre-test, 5 examiners were dissatisfied with “operational convenience” and 3 with “scoring convenience” (Table 2), giving average scores of 2.7 and 2.8, respectively. Because the examiners were more familiar with paper-based evaluation, and some in particular were not computer-savvy, they felt unfamiliar with the design and operation of the EAS on first contact. As a result, they showed some resistance and doubt toward the test system, still preferring the paper-based evaluation they knew better. However, after the examiners became familiar with the operation of the system through repeated practice, their satisfaction in each aspect increased. In the post-test, the number of examiners expressing dissatisfaction with these two questions fell to zero, the numbers who were very satisfied were 4 and 6, and the average scores rose to 4.2 and 4.5, respectively. Examiners need proper training and sufficient practice to be confident in using an electronic assessment system. Further, for the other questions of the first and second themes, no examiner selected “very satisfied” in the pre-test (Figure 2(1),(3)), while in the post-test the number selecting “very satisfied” increased remarkably in all questions (Figure 2(2),(4)); this change was significant (p ≤ 0.001). Nevertheless, although the average score for “system functionality” in theme one increased from 2.8 pre-test to 4.1 post-test, the examiners still rated this item lower than the other questions. Discussions revealed two main reasons for the examiners’ dissatisfaction with the system’s functional design at the pre-test stage. First, they could not mark or write comments on the evaluation form at any time, as they could with paper-based evaluation, because the system lacked a function for written feedback; this finding is similar to the previous study by Hochlehnert et al. [18]. Second, they hoped the add/subtract score function would be available as a click item, making it easier to adjust the overall score at their discretion according to the students’ performance. Therefore, based on the recommendations made at the first examiners’ consensus meeting, parts of the system were immediately updated with two additional functions: feedback input (written comments by examiners) and an add/subtract score floating window. After these revisions, the data showed that satisfaction with “system functionality” increased remarkably, with 2 examiners stating “very satisfied” and 9 “satisfied”.
Although some examiners considered it more convenient to write feedback directly on an evaluation form than on a tablet, this method in fact has many shortcomings, such as the consumption of a massive amount of paper, the considerable time needed to enter the feedback into a computer for filing after the examination, and the difficulty of reading illegible handwriting or accidentally stained forms. One might not be used to writing on a tablet at first, but this improves with practice and adaptation; the EAS therefore still holds considerable advantages. To address the issues raised by the examiners, adding a recording function could be tried to make the system’s functions more complete. Regarding network signals, because the school strengthened the network in the examination venue according to the recommendations from the first examiners’ consensus meeting, post-test satisfaction with this item increased: the average score for “network stability” rose from 3.1 (pre-test) to 4.6 (post-test), with 41.67% of examiners satisfied and 58.33% very satisfied. The average score for “stability of tablets” increased from 3.2 (pre-test) to 4.8 (post-test), with 25.00% of examiners satisfied and 75.00% very satisfied. In addition, 75.00% and 83.33% of the examiners were very satisfied with the “network transmission rate” and the “data-processing speed of tablets”, respectively. These results show that a suitable environment and equipment are also very important for examiner acceptance.
For the third theme, the comparison between the EAS and the paper-based evaluation form, the questionnaire results are shown in Figure 2(5). Satisfaction differences between pre-test and post-test were compared using the McNemar test, a matched-pair test used when the dependent variable is dichotomous; it is often used to determine whether nominal data change significantly before and after an event. The McNemar results are shown in Table 2 and confirm a significant difference in examiner satisfaction. Of the seven questions, only “make examiners more mobile” and “make examiners more nervous” showed no significant difference between pre-test and post-test (p = 1.000); the other questions showed significant differences (p < 0.05). Although some examiners believed the paper evaluation form was lighter than the tablet and therefore more convenient to carry, others indicated that the tablet was smaller and easier to move. In addition, in the pre-test of the third theme (Table 2), although most examiners (9) acknowledged that the EAS made data processing more convenient, 10 still thought the system made examiners nervous, because they were apprehensive about the technology and feared mechanical failure, and half of them (6) hoped to use paper-based evaluation for the OSCE assessment. However, after repeated practice with the EAS, the examiners’ acceptance increased remarkably, with some considering the test system very novel and convenient and showing great interest in this innovative method; this was confirmed in the post-test satisfaction questionnaire.
The number of examiners choosing the EAS in the post-test survey increased markedly, with the most noticeable differences in “more convenient to use” and “which is your preferred method”. In the post-test, all 12 examiners selected the EAS as their preferred evaluation method, compared with only 6 in the pre-test, a statistically significant change (p < 0.05). Similarly, the number of examiners who considered the EAS more convenient to use increased by 6 from pre-test to post-test. The examiners also showed very high recognition of the advantages of the EAS with respect to “speedier evaluation” and “more convenient in processing data”, both reaching 100% (12 examiners) in the post-test, as did “which is your preferred method”. These results indicate a clear correlation between the examiners’ familiarity with the evaluation method and their level of acceptance.

3.2. Comments Relating to the EAS from the Course Administrator’s Perspective

In OSCE tests, the cumbersome preparatory work is a significant burden on administrators: creating the various evaluation forms, arranging candidates’ examination stations and timetables, and carrying out a complicated calculation process. Addressing these shortcomings, the EAS provides the following advantages:
  • Removes the potential for missing data: owing to factors such as the examiner dozing off or losing focus [22], items are frequently left unmarked in traditional paper-based methods, leading to uncertainty about student performance on those items and inaccurate results. The EAS helps reduce such evaluation errors, since submission is allowed only when the electronic form is fully complete: if an examiner overlooks an evaluation item, the data are not transferred to or stored in the system (see the completeness check sketched in Section 2.1). With paper-based forms, illegible handwriting also often causes trouble for administrators. The EAS facilitates the storage and analysis of assessment results and reduces the possibility of data loss [14].
  • Instantly processes evaluation data and provides timely feedback to students: with paper assessment forms, the total score and overall performance of students could not be known immediately after the OSCE ended; the analyzed results were available only after the data had been integrated and calculated, a process whose length depended on the number of students tested. Automatic scoring technology helps make large-scale testing convenient and cost-effective [23], and many large-scale assessments have moved from paper-based to computer-based formats [24]. E-assessment can reduce the teachers’ burden of assessing large numbers of students [25]. With the EAS, the evaluation data are uploaded to the server for calculation and storage as the examiners carry out the evaluation, so administrators and examiners can verify final scores quickly and efficiently. The EAS also analyzes the stored results and collates them into personal or overall performance reports, presented as diagrams for examiners to review and used to provide appropriate, timely feedback to students [14,26], enhancing the measurement of learner outcomes [24,27].
  • Reduces waste of resources and preserves data well: paper-based evaluation generates a large volume of written material, such as evaluation forms. Electronic methods not only reduce the enormous paper waste of testing; they also save storage space, keep stored data from being easily damaged, and make filing and data retrieval convenient, advantages that paper forms cannot match. Kropmans et al. estimated that e-assessment may reduce costs by 70% compared with paper-based methods [28].
  • Saves staff time: some studies have indicated that e-assessment saves teacher or staff time compared with paper-based assessment [23,29,30]. Although administrators needed some time to input schedules and paper-based checklists into the EAS program, the system proved user-friendly and efficient. For a paper-based OSCE, the cumbersome preparation is a significant burden on organizers: preparing the various evaluation forms and arranging candidates’ examination stations and timetables usually takes 1 to 2 days, and collecting, collating, checking, and recording each examiner’s evaluation forms after the test usually takes another 3 to 6 days, with even greater trouble if scores go missing or data are damaged or lost in the process. Compared with traditional paper-based methods, the EAS requires less setup time before the test and less data-processing time after it: about 1 day and 1 to 3 days, respectively. The EAS thus reduces the overall workload of pre-test preparation and post-test data processing by approximately 40% to 50% (a back-of-envelope check follows this list), a result similar to the finding of Snodgrass et al. [15].
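The following quick check, using the midpoints of the day ranges quoted above, is consistent with the stated 40–50% workload reduction; the midpoint choice is ours for illustration, not the authors’ exact accounting.

```python
# Back-of-envelope check of the workload figures quoted above (in days),
# using range midpoints; the midpoint choice is illustrative.
paper_prep = (1 + 2) / 2   # 1.5 days of pre-test preparation
paper_post = (3 + 6) / 2   # 4.5 days of post-test data processing
eas_prep = 1.0             # about 1 day
eas_post = (1 + 3) / 2     # 2.0 days

reduction = 1 - (eas_prep + eas_post) / (paper_prep + paper_post)
print(f"workload reduction ≈ {reduction:.0%}")  # ≈ 50%, within the 40-50% claim
```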
On the other hand, EAS also has shortcomings that need to be overcome, such as:
  • Potential technical problems: additional contingency planning is needed for technical problems such as battery life [7]. The day before the exam, the tablets must be checked to ensure they are fully charged and that charging stations are placed at each OSCE station, which may increase the administrators’ workload. To guard against unpredictable tablet failure, a few paper copies of the marking checklists can be kept at each OSCE station for use in the event of technical problems with the EAS.
  • Issues inherent in adopting new technology: administrators who have never used the system need time to learn and become familiar with its operation, which lengthens the initial EAS setup. Problems for examiners included technical difficulties when logging in, trouble selecting and deselecting items due to unfamiliarity with the tablet touch screen, and noticeable fatigue from holding the weight of the tablet throughout the examination.
  • Scrolling through long checklists: for a longer checklist, the examiner needs to scroll through the list to grade student performance. Although scrolling through iPad checklists has been reported to be easier than searching through pages of paper checklists [13], some examiners in this study noted that, because the order in which students addressed the graded items varied, scrolling through a long list to find the relevant item complicated scoring. On the other hand, some examiners noted that flipping through paper checklists creates noise that makes examinees nervous, whereas scrolling through the tablet checklist causes no such problem.

3.3. Comments Relating to the EAS from the Examiners’ Perspective

An open-ended question was included in the questionnaire. Apart from the recommendations for improvement described previously, the examiners also provided plenty of positive feedback, which can be summarized as four points:
  • Convenient to use: the functions were clear and simple to click. After practice, the examiners became familiar with the operation and scored smoothly.
  • High calculation efficiency: when the examiners clicked to submit the results, the results were calculated instantaneously.
  • Enlargeable fonts: the fonts on paper evaluation forms were small, making it difficult for the examiners to mark items and risking ticks in the wrong places. The fonts in the EAS could be enlarged whenever necessary, making it more convenient for the examiners to mark the checkboxes.
  • Completeness of scoring: examiners can easily overlook checkboxes in a paper-based checklist. The EAS does not allow results to be sent while the evaluation is incomplete, which ensures the completeness of the evaluation and prevents omissions from affecting students’ scores.
Based on all the above results, parts of the system’s functions were updated, and the examiners’ satisfaction with the overall functions and use of the system increased markedly. The EAS also showed clear advantages in the examiners’ choice of evaluation method. This study demonstrated the feasibility of applying e-assessment to the implementation of the OSCE in pharmacy education. The next goal will be a study of the effect of the ELS functionalities on learning efficiency: OSCE-related lesson information and test content will be converted into various question banks for use in teaching and practice to enhance learning outcomes in clinical practice training courses.

4. Conclusions

The application of the EAS can resolve the difficulties encountered in the complicated OSCE process. Compared with traditional paper OSCE records, the EAS offers increased scoring integrity, fast and precise calculation over large databases, and timely feedback to students. As the examiners became familiar with the operation of the system, their satisfaction with its overall function and usage increased significantly. The EAS can increase the efficiency of OSCE planning and implementation; it holds clear advantages, is feasible for clinical practice examinations, and can be expected to see wide use in practical learning in related medical fields in the future.

Author Contributions

Conceptualization, D.-H.K.; methodology, C.-Y.L. and H.-H.L.; software, T.-S.W. and P.-S.H.; investigation, W.-H.C. and C.-Y.L.; validation, M.-C.W.; writing—original draft preparation, W.-H.C. and G.A.; writing—review and editing, W.-H.C., D.-H.K. and G.A.; funding acquisition, D.-H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Technology (MOST), grant numbers MOST 106-2632-S-127-001 and MOST 109-2637-H-127-001.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of Pingtung Christian Hospital (protocol code IRB570B; approved on 15 January 2018).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical considerations.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Grover, A.B.; Mehta, B.H.; Rodis, J.L.; Casper, K.A.; Wexler, R.K. Evaluation of pharmacy faculty knowledge and perceptions of the patient-centered medical home within pharmacy education. Curr. Pharm. Teach. Learn. 2014, 6, 210–225.
  2. Paul, W.J.; Kristi, W.K.; Dana, P.H.; Stuart, T.H.; Karen, F.M. Addressing competencies for the future in the professional curriculum. Am. J. Pharm. Educ. 2009, 73, 156–170.
  3. Branch, C. An assessment of students’ performance and satisfaction with an OSCE early in an undergraduate pharmacy curriculum. Curr. Pharm. Teach. Learn. 2014, 6, 22–31.
  4. Austin, Z.; Ensom, M.H. Education of pharmacists in Canada. Am. J. Pharm. Educ. 2008, 72, 1–11.
  5. Vaughan, B.; Moore, K. The mini Clinical Evaluation Exercise (mini-CEX) in a pre-registration osteopathy program: Exploring aspects of its validity. Int. J. Osteopath. Med. 2016, 19, 61–72.
  6. Hassanpour, N.; Chen, R.; Baikpour, M.; Moghimi, S. Video observation of procedural skills for assessment of trabeculectomy performed by residents. J. Curr. Ophthalmol. 2016, 28, 61–64.
  7. Savage, A.; Minshew, L.M.; Anksorus, H.N.; McLaughlin, J.E. Remote OSCE experience: What first year pharmacy students liked, learned, and suggested for future implementations. Pharmacy 2021, 9, 62.
  8. Harden, R.M.; Stevenson, M.; Downie, W.W.; Wilson, G.M. Assessment of clinical competence using objective structured examination. Br. Med. J. 1975, 1, 447–451.
  9. Baid, H. The objective structured clinical examination within intensive care nursing education. Nurs. Crit. Care 2011, 16, 99–105.
  10. Reznick, R.K.; Blackmore, D.; Dauphinee, W.D.; Rothman, A.I.; Smee, S. Large-scale high-stakes testing with an OSCE. Acad. Med. 1996, 71, S19–S21.
  11. Rahayu, G.R.; Suhoyo, Y.; Nurhidayah, R.; Hasdianda, M.A.; Dewi, S.P.; Chaniago, Y.; Wikaningrum, R.; Hariyanto, T.; Wonodirekso, S.; Achmad, T. Large-scale multi-site OSCEs for national competency examination of medical doctors in Indonesia. Med. Teach. 2016, 38, 801–807.
  12. Yang, Y.Y.; Lee, F.Y.; Hsu, H.C.; Huang, C.C.; Chen, J.W.; Lee, W.S.; Chuang, C.L.; Chang, C.C.; Chen, H.M.; Huang, C.C. A core competence-based objective structured clinical examination (OSCE) in evaluation of clinical performance of postgraduate year-1 (PGY1) residents. J. Chin. Med. Assoc. 2011, 74, 198–204.
  13. Luimes, J.; Labrecque, M. Implementation of electronic objective structured clinical examination evaluation in a nurse practitioner program. J. Nurs. Educ. 2018, 57, 502–505.
  14. Meskell, P.; Burke, E.; Kropmans, T.J.; Byrne, E.; Setyonugroho, W.; Kennedy, K.M. Back to the future: An online OSCE Management Information System for nursing OSCEs. Nurse Educ. Today 2015, 35, 1091–1096.
  15. Snodgrass, S.J.; Ashby, S.E.; Rivett, D.A.; Russell, T. Implementation of an electronic objective structured clinical exam for assessing practical skills in pre-professional physiotherapy and occupational therapy programs: Examiner and course coordinator perspectives. Australas. J. Educ. Technol. 2014, 30, 152–166.
  16. Patricio, M.F.; Julião, M.; Fareleira, F.; Carneiro, A.V. Is the OSCE a feasible tool to assess competencies in undergraduate medical education? Med. Teach. 2013, 35, 503–514.
  17. Alkureishi, M.A.; Lee, W.W.; Lyons, M.; Wroblewski, K.; Farnan, J.M.; Arora, V.M. Electronic-clinical evaluation exercise (e-CEX): A new patient-centered EHR use tool. Patient Educ. Couns. 2018, 101, 481–489.
  18. Hochlehnert, A.; Schultz, J.H.; Möltner, A.; Tımbıl, S.; Brass, K.; Jünger, J. Electronic acquisition of OSCE performance using tablets. GMS J. Med. Educ. 2015, 32, Doc41.
  19. Likert, R. A technique for the measurement of attitudes. Arch. Psychol. 1932, 140, 1–55.
  20. Müller, S.; Koch, I.; Settmacher, U.; Dahmen, U. How the introduction of OSCEs has affected the time students spend studying: Results of a nationwide study. BMC Med. Educ. 2019, 19, 146.
  21. Alkhateeb, N.E.; Al-Dabbagh, A.; Ibrahim, M.; Al-Tawil, N.G. Effect of a formative objective structured clinical examination on the clinical performance of undergraduate medical students in a summative examination: A randomized controlled trial. Indian Pediatr. 2019, 56, 745–748.
  22. Monteiro, S.; Sibbald, D.; Coetzee, K. i-Assess: Evaluating the impact of electronic data capture for OSCE. Perspect. Med. Educ. 2018, 7, 110–119.
  23. Alruwais, N.; Wills, G.; Wald, M. Advantages and challenges of using e-assessment. Int. J. Inf. Educ. Technol. 2018, 8, 34–37.
  24. Buerger, S.; Kroehne, U.; Goldhammer, F. The transition to computer-based testing in large-scale assessments: Investigating (partial) measurement invariance between modes. Psychol. Test. Assess. Model. 2016, 58, 597–616.
  25. Nicol, D. E-assessment by design: Using multiple-choice tests to good effect. J. Furth. High. Educ. 2007, 31, 53–64.
  26. McKinley, R.K.; Strand, J.; Gray, T.; Schuwirth, L.; Alun-Jones, T.; Miller, H. Development of a tool to support holistic generic assessment of clinical procedure skills. Med. Educ. 2008, 42, 619–627.
  27. Crews, T.B.; Curtis, D.F. Online course evaluations: Faculty perspective and strategies for improved response rates. Assess. Eval. High. Educ. 2011, 36, 865–878.
  28. Kropmans, T.; O’Donovan, B.G.G.; Cunningham, D.; Murphy, A.W.; Flaherty, G.; Nestel, D.; Dunne, F. An online management information system for Objective Structured Clinical Examinations. Comput. Sci. Inf. Syst. 2012, 5, 38–48.
  29. Sorensen, E. Implementation and student perceptions of e-assessment in a chemical engineering module. Eur. J. Eng. Educ. 2013, 38, 172–185.
  30. Gikandi, J.W.; Morrow, D.; Davis, N.E. Online formative assessment in higher education: A review of the literature. Comput. Educ. 2011, 57, 2333–2351.
Figure 1. Satisfaction questionnaire design of the EAS.
Figure 2. Results of the examiners’ satisfaction with the EAS. (1) Theme one: satisfaction with the operation of the EAS (pre-test); (2) theme one: satisfaction with the operation of the EAS (post-test); (3) theme two: satisfaction with the tablets and network signals (pre-test); (4) theme two: satisfaction with the tablets and network signals (post-test); (5) theme three: comparison of the EAS with the paper-based evaluation form.
Table 1. Analysis of the satisfaction questionnaire on the examiners’ use of the EAS (themes one and two).

Question | Pre (Mean ± SD #) | Post (Mean ± SD #) | p Value a
Theme one: satisfaction with the operation of the EAS
Q1: Operational convenience | 2.7 ± 0.7 | 4.2 ± 0.7 | <0.001 ***
Q2: Scoring convenience | 2.8 ± 0.6 | 4.5 ± 0.5 | <0.001 ***
Q3: System functionality | 2.8 ± 1.1 | 4.1 ± 0.5 | 0.001 **
Q4: System stability | 2.8 ± 0.7 | 4.3 ± 0.8 | <0.001 ***
Q5: Overall satisfaction with the system | 2.8 ± 0.6 | 4.4 ± 0.5 | <0.001 ***
Theme two: satisfaction with the tablets and network signals
Q6: Network stability | 3.1 ± 0.7 | 4.6 ± 0.5 | <0.001 ***
Q7: Network transmission rate | 3.6 ± 0.5 | 4.7 ± 0.7 | <0.001 ***
Q8: Resolution of the tablet screen | 3.8 ± 0.4 | 4.6 ± 0.5 | 0.001 **
Q9: Data-processing speed of tablets | 3.5 ± 0.5 | 4.7 ± 0.8 | 0.001 **
Q10: Stability of tablets | 3.2 ± 0.6 | 4.8 ± 0.5 | <0.001 ***

Pre: pre-test; Post: post-test; # values are expressed as mean ± SD (n = 12); a paired t-test was used to determine significance; ** p < 0.01; *** p < 0.001.
Table 2. Analysis of the satisfaction questionnaire on the examiners’ use of the EAS (theme three). Each question forms a paired 2 × 2 table: rows give the post-test choice, columns the pre-test choice, and cells the number of examiners.

Question | Post EAS / Pre EAS | Post EAS / Pre Paper | Post Paper / Pre EAS | Post Paper / Pre Paper | p Value a
Q11: More convenient to use | 3 | 6 | 0 | 3 | <0.05 *
Q12: Clearer in formation | 5 | 5 | 0 | 2 | <0.01 **
Q13: Make examiners more mobile | 5 | 1 | 0 | 6 | 1.000
Q14: Make examiners more nervous | 10 | 0 | 0 | 2 | 1.000
Q15: Speedier evaluation | 7 | 5 | 0 | 0 | <0.05 *
Q16: More convenient in processing data | 9 | 3 | 0 | 0 | <0.05 *
Q17: Which is your preferred method | 7 | 6 | 0 | 0 | <0.05 *

Pre: pre-test; Post: post-test; values are numbers of examiners; a the McNemar test was used to determine significance; * p < 0.05; ** p < 0.01.