Article
Peer-Review Record

A Comparative Study of Face-to-Face and Online Interprofessional Education Models for Nursing Students in Japan: A Cross-Sectional Survey

by Aya Saitoh 1,*, Tomoe Yokono 1, Tomoko Sumiyoshi 1, Izumi Kawachi 2,3 and Mieko Uchiyama 1,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Educ. Sci. 2023, 13(9), 937; https://doi.org/10.3390/educsci13090937
Submission received: 16 June 2023 / Revised: 7 September 2023 / Accepted: 12 September 2023 / Published: 15 September 2023

Round 1

Reviewer 1 Report

I enjoyed reading your manuscript. It was obvious that a lot of time and attention went into creating the educational experiences. Media-comparative studies, such as yours, are not unique. For example, Carbonaro et al. (2008) published a study comparing in-person and synchronous online IPE. There have been others since then, as well.

I was curious about the use of the RIPLS as one of your key outcome measures. Why is readiness (an attitudinal measure) of interest? Is it because it is an available instrument? There is previous literature citing key limitations of the RIPLS; Oates and Davidson (2015) and Violato and King (2020) are two examples outlining these limitations.

I would rethink what your study contributes to the IPE literature. I think focusing on a short report describing the education may interest readers.

Author Response

1. I enjoyed reading your manuscript. It was obvious that a lot of time and attention went into creating the educational experiences. Media-comparative studies, such as yours, are not unique. For example, Carbonaro et al. (2008) published a study comparing in-person and synchronous online IPE. There have been others since then, as well.

Response: We appreciate your comments. We revised our Introduction to further highlight the study’s novelty (Lines 77–94): “Owing to the rapid transition to online learning, higher education institutions had to adapt established interprofessional learning (IPL) environments to online formats on an improvised basis [14]. Since the transformation of team functioning into an online environment was largely unknown, this posed numerous challenges [15]. There is limited prior research comparing in-person and online IPE events, and the results are mixed. In one study, when transitioning IPE student simulations from in-person to online, higher IPE skills were observed compared to previous in-person simulations. Additionally, all student learning outcomes were achieved, indicating the effectiveness of virtual IPE simulation education [16]. In a prior study comparing the cognitive effects of in-person and virtual remote debriefing, remote IPE debriefing was less effective than in-person IPE debriefing but proved to be a viable option when in-person IPE debriefing opportunities were unavailable [17]. Research on facilitation strategies used in synchronous and asynchronous IPE experiences revealed that students and facilitators perceived previously recognized facilitation strategies in asynchronous and in-person IPE environments to be utilized in online synchronous environments, as well [18].”

2. I was curious about the use of the RIPLS as one of your key outcome measures. Why is readiness (an attitudinal measure) of interest? Is it because it is an available instrument? There is previous literature citing key limitations of the RIPLS; Oates and Davidson (2015) and Violato and King (2020) are two examples outlining these limitations.

Response: We appreciate your comments. We added the following sentences to our Methods section (Lines 163–169): “The RIPLS is one of the most commonly used outcome measures in IPE events, as it allows for the assessment of attitudes, beliefs, and preparedness for interacting with other students during shared learning experiences among healthcare professional students. We adopted the RIPLS because while there are doubts about the robustness of its subscales [21], its overall consistency is suitable across various cultural backgrounds [22–24]. Moreover, the RIPLS total score has consistently demonstrated high reliability [25].”

3. I would rethink what your study contributes to the IPE literature. I think focusing on a short report describing the education may interest readers.

Response: Amid the mixed outcomes of prior research comparing in-person and online IPE events, our results suggest that readiness for interprofessional learning may improve relative to students participating in in-person learning sessions. Remote interventions are expected to remain valuable even after the conclusion of the COVID-19 pandemic, supplementing the in-person educational model. Considering these points, we believe that this study makes a modest contribution to the IPE literature.

Reviewer 2 Report

Many thanks for the manuscript. Below are my comments and suggestions:

(1) Abstract: Information regarding the sampling method(s) should be presented.

(2) Keywords: face-to-face, online, and nursing are not good keywords. They should be combined with other words to form compound nouns.

(3) Introduction: The statement of the research purpose and specific objectives should be described more clearly.

(4) Literature review: This manuscript lacks a literature review, a very important section.

(5) Materials and Methods: The sampling method(s) should be described. Also, the kind of test (e.g., t-tests) should be presented.

(6) Results: Tables are difficult to see. They should be presented more clearly.

(7) Discussion: The author(s) did not clearly compare the current research results with those of previous studies. It is difficult for readers to see whether the current research results support or contradict previous studies.

(8) Conclusion: This section is too short and it does not summarise the key results or provide the implications of the study.

The author(s) should have the manuscript carefully proofread.

Author Response

(1) Abstract: Information regarding the sampling method(s) should be presented.

Response: We added the following text regarding the sampling (Lines 7–8): “All students who enrolled in the ‘Team Medical Practice’ course in both 2019 and 2020 were invited to participate.”

(2) Keywords: face-to-face, online and nursing are not good keywords. They should be used with other words to make compound nouns.

Response: We revised the keywords to “in-person,” “virtual,” and “nursing students” (Line 21).

(3) Introduction: The statement of the research purpose and specific objectives should be described more clearly.

Response: We revised our Introduction accordingly (Lines 92–94): “The study purpose is to evaluate the outcomes of an online IPE intervention (remote education model) compared with a face-to-face IPE format (conventional education model).”

(4) Literature review: This manuscript lacks the literature review, a very important section.

Response: We appreciate your suggestion. We revised our manuscript as follows (Lines 77–94): “Owing to the rapid transition to online learning, higher education institutions had to adapt established interprofessional learning (IPL) environments to online formats on an improvised basis [14]. Since the transformation of team functioning into an online environment was largely unknown, this posed numerous challenges [15]. There is limited prior research comparing in-person and online IPE events, and the results are mixed. In one study, when transitioning IPE student simulations from in-person to online, higher IPE skills were observed compared to previous in-person simulations. Additionally, all student learning outcomes were achieved, indicating the effectiveness of virtual IPE simulation education [16]. In a prior study comparing the cognitive effects of in-person and virtual remote debriefing, remote IPE debriefing was less effective than in-person IPE debriefing but proved to be a viable option when in-person IPE debriefing opportunities were unavailable [17]. Research on facilitation strategies used in synchronous and asynchronous IPE experiences revealed that students and facilitators perceived previously recognized facilitation strategies in asynchronous and in-person IPE environments to be utilized in online synchronous environments, as well [18].”

(5) Materials and Methods: The sampling method(s) should be described. Also, the kind of test (e.g. T-tests) should be presented.

Response: We revised our Methods section as follows (Lines 98–101): “All students who enrolled in the ‘Team Medical Practice’ course (one credit)—required for third-year nursing students—in both 2019 and 2020 were invited to participate in a survey. The survey was administered twice, voluntarily, before and after the commencement of the IPE joint classes.” The statistical test methods employed in the study are described in the Statistical analysis and ethical considerations section.

(6) Results: Tables are difficult to see. They should be presented more clearly.

Response: We revised our tables for clarity and readability.

(7) Discussion: The author(s) did not clearly compare the current research results with those of previous studies. It is difficult for readers to see whether the current research results support or contradict previous studies.

Response: We revised our Discussion as follows (Lines 341–371): “In a previous study that evaluated the readiness for IPE conducted online targeting healthcare students in Indonesia, explorations of the perceived competency identity scores for IPE competencies revealed a higher number of positive perceptions [33]. In our study, the post-IPE scores of the online group were significantly higher in all subscales of the RIPLS, suggesting a positive educational effect of online IPE sessions. Our results align with previous studies, showing that all participants exhibited positive perceptions of IPE [34,35].

The results of the IPE Attitude Test (IPET) assessing the attitudes of healthcare students toward IPE showed that the online group exhibited a significant increase in post-IPE scores in Domains 3–5. These domains pertain to the necessity of teamwork, group work, and collaboration among different healthcare professions. Evans et al. reported the effectiveness of online IPE in improving student attitudes and knowledge related to interprofessional practice [36]. Additionally, while some students might have initially faced barriers owing to limited experience with online education, there are advantages such as the potential for active discussions and effective time utilization, which can contribute to positive outcomes [37]. Our study aligns with these previous findings. Another factor contributing to the increase in IPET scores could be the implementation of peer assessment as part of the IPE program evaluation, exclusively in the online group. This might have motivated students to evaluate themselves and be evaluated by peers, fostering a proactive engagement in group work, collaborative activities, and enhancing their motivation to learn. Consequently, the utilization of online communication tools like Zoom and Teams could have facilitated effective interaction between faculty and students, as well as among students themselves. Even in an online setting, providing an environment for interaction, along with clear instructions and tasks, could potentially surpass face-to-face instruction in terms of learning efficiency. With the increasing prevalence of online healthcare, including telemedicine, online IPE is expected to become a valuable experience for students. Constructing such an online IPE program requires thorough discussions among faculty members through web conferencing beforehand, as well as a high level of collaboration among interdisciplinary experts within the IPE team [38–40].”

(8) Conclusion: This section is too short and it does not summarise the key results or provide the implications of the study.

Response: Thank you for your suggestion. We revised our Conclusion as follows (Lines 402–414): “An evaluation of the results of the face-to-face IPE format (conventional education model) and the online IPE intervention (remote education model) revealed that both the face-to-face group and the online group obtained positive outcomes from IPE. Specifically, the online group demonstrated higher scores in various domains related to teamwork, collaboration, and participation in group work. The results of the RIPLS indicated that the post-IPE scores of the online group were significantly higher in all subscales of the RIPLS, suggesting the positive educational effects of online IPE sessions. In the IPET, the online group showed increased post-intervention scores in domains related to teamwork, group work, and collaboration among different professions. These findings suggest that understanding the importance of team members, fulfilling respective roles, and working together toward team achievements can be effectively realized even in online IPE. By adopting a positive approach that recognizes the unique opportunities provided by online learning, new possibilities for future education can be opened up.”

Comments on the Quality of English Language

The author(s) should have the manuscript carefully proofread.

Response: We sought the services of a professional English-language proofreading company: Editage.

Reviewer 3 Report

Limit the use of abbreviations, as they interrupt the flow of ideas; one needs to constantly check what each abbreviation stands for.

Justify the use of Wilcoxon signed-rank test and Mann-Whitney U test. Revise the method of reporting the test results for both tests.

Lines 91–92: “The participants were divided into groups and subjected to pre- and post-intervention testing through questionnaires.” This statement needs to be rephrased. How were the groups formed? How many members were in each group? Randomisation needs to be clear!

There is a need to be clear about the control, if there was one.

The methodology needs to be clear so that it can be replicated by other researchers. Currently, it's not very clear.

What was the nature of the rubric used? You may need to provide this as an appendix.

Revise the title of Table 1: “Participants characteristics” is not the correct title; I suggest “Number of participants.” Instead of 2019 and 2020, write face-to-face and online. The Likert scale items need to be clear, and the mean values must tally with the Likert scale items; e.g., when the scale is from 1 (strongly disagree) to 5 (strongly agree), the mean cannot be outside that range of values.

Language editing is required.

Author Response

Limit the use of abbreviations, as they interrupt the flow of ideas; one needs to constantly check what each abbreviation stands for.

Response: Thank you for pointing this out. We reduced our use of abbreviations, removing two that were unnecessary. In the revised manuscript, abbreviations are used only when the term appears more than five times in the paper.

 Justify the use of Wilcoxon signed-rank test and Mann-Whitney U test. Revise the method of reporting the test results for both tests.

Response: For the comparisons of scores before and after the intervention (Tables 2 and 4), the Wilcoxon signed-rank test was used, as it tests differences in paired data measured twice for a single sample. For Tables 3 and 5, which involve comparisons between two independent samples (the face-to-face and online groups), the Mann–Whitney U test was employed. Both tests were chosen owing to the non-normal distribution of the data; thus, non-parametric methods were utilized.
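For illustration only, this pairing of tests could be run as follows. This is a minimal Python sketch with scipy.stats and hypothetical scores; it is not the analysis code actually used in the study.

    # Minimal sketch of the two non-parametric tests described above (hypothetical data).
    from scipy.stats import mannwhitneyu, wilcoxon

    # Paired pre-/post-IPE scores for one group (as in Tables 2 and 4):
    # the Wilcoxon signed-rank test compares two measurements of the same sample.
    pre_scores = [68, 72, 70, 75, 71, 73, 69, 74]
    post_scores = [74, 76, 73, 78, 75, 77, 72, 79]
    w_stat, w_p = wilcoxon(pre_scores, post_scores)

    # Independent face-to-face vs. online scores (as in Tables 3 and 5):
    # the Mann-Whitney U test compares two unrelated samples.
    face_to_face = [71, 69, 73, 70, 72, 68, 74, 70]
    online = [75, 77, 74, 78, 76, 73, 79, 75]
    u_stat, u_p = mannwhitneyu(face_to_face, online)

    print(f"Wilcoxon signed-rank: W = {w_stat}, p = {w_p:.3f}")
    print(f"Mann-Whitney U: U = {u_stat}, p = {u_p:.3f}")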

Lines 91–92: “The participants were divided into groups and subjected to pre- and post-intervention testing through questionnaires.” This statement needs to be rephrased. How were the groups formed? How many members were in each group? Randomisation needs to be clear! There is a need to be clear about the control, if there was one. The methodology needs to be clear so that it can be replicated by other researchers. Currently, it is not very clear.

Response: The study design is not a randomized controlled trial but an observational comparative design. We revised the manuscript as follows for additional clarity (Lines 98–103): “All students who enrolled in the ‘Team Medical Practice’ course (one credit)—required for third-year nursing students—in both 2019 and 2020 were invited to participate in a survey. The survey was administered twice, voluntarily, before and after the commencement of the IPE joint classes. Participants accessed the questionnaire by logging into the academic information system and providing their responses through an online interface.”

What was the nature of the rubric used? You may need to provide this as an appendix.

Response: The rubric used for the PIA evaluation has been added to Supplementary Table 1.

Revise the title of Table 1: “Participants characteristics” is not the correct title; I suggest “Number of participants.” Instead of 2019 and 2020, write face-to-face and online. The Likert scale items need to be clear, and the mean values must tally with the Likert scale items; e.g., when the scale is from 1 (strongly disagree) to 5 (strongly agree), the mean cannot be outside that range of values.

Response: We appreciate your comments. We revised Table 1 accordingly. The scale scores represent the average of the total scores of the subscales. We added the number of items and the score range for each subscale to the table.

Thank you again for your thoughtful review and helpful comments. We hope our article is now suitable for publication in Education Sciences.

Round 2

Reviewer 2 Report

Thank you very much for your hard work and effort to revise the manuscript. All of my suggestions have been addressed in detail. I do not have any further comment.

Minor editing of English language required.

Author Response

Thank you very much for reviewing this manuscript. Your feedback is greatly appreciated.

We sought the services of a professional English-language proofreading company: Editage.

Reviewer 3 Report

Line 110 indicates that there are 20 mixed groups, each with 10–11 members. The minimum number, assuming that each group has 10 members, is 200. This contradicts line 104, which states that there were 160 participants. The researcher needs to align the statement in line 104 with the statement in line 110 to ensure that the statements are complementary.

There is a need to report the Wilcoxon test using the APA standard, e.g., a matched-pairs example: A significant difference in behavioural disturbances was found between students assigned to the new drug trial (n = 50, M = 0.5008, SD = 0.29) and students not assigned to the new drug trial (n = 50, M = 0.9867, SD = 0.37), t(49) = -7.531, p < 0.05, d = 1.46. This indicates that students who were assigned to the new drug trial exhibited significantly fewer behavioural disturbances than their peers in the control group.

A Mann-Whitney U test was conducted to determine whether there is a difference in Math test scores between males and females. The results indicate a non-significant difference between groups [U = 53.00, p = .173]. In conclusion, we fail to reject the null hypothesis and conclude that there is no difference in the Math test scores between males and females.

Author Response

Line 110 indicates that there are 20 mixed groups, each with 10–11 members. The minimum number, assuming that each group has 10 members, is 200. This contradicts line 104, which states that there were 160 participants. The researcher needs to align the statement in line 104 with the statement in line 110 to ensure that the statements are complementary.

Response: We appreciate your comments. Each of the 20 groups consisted of six to seven medical students and four nursing students (10–11 members in total); thus, the 20 groups included 80 nursing students per year, or approximately 160 across the two years, consistent with the stated sample size.

We added the following sentences to our Methods section:

(Lines 109–113): The initial sample comprised approximately 160 nursing students who participated in the IPE sessions held in 2019 and 2020 (80 third-year students from each year). All were enrolled in the Department of Nursing, had registered for Theory of Team Medical Practice, and had participated in the Joint Seminar during the 2019–2020 academic year.

(Lines 117–118): The nursing students were placed in 20 groups of four students each.

There is a need to report the Wilcoxon test using the APA standard, e.g., a matched-pairs example: A significant difference in behavioural disturbances was found between students assigned to the new drug trial (n = 50, M = 0.5008, SD = 0.29) and students not assigned to the new drug trial (n = 50, M = 0.9867, SD = 0.37), t(49) = -7.531, p < 0.05, d = 1.46. This indicates that students who were assigned to the new drug trial exhibited significantly fewer behavioural disturbances than their peers in the control group.

Response: We revised our Results section as follows (Lines 226–268):

3.1 Comparison of RIPLS Scores

Table 2 shows a comparison of the RIPLS scores between the face-to-face group and the online group. When comparing the scores before and after the IPE, there was a significant difference in the mean total score for the online group (face-to-face: p = .153, d = 0.17; online: p = .001, d = 0.34). When comparing the scores of the subscales before and after the IPE, both groups showed a significant increase in subscale 2, “Opportunities for IPE,” after the IPE (face-to-face: p = .004, d = 0.28; online: p = .053, d = 0.18). However, only the online group showed a significant increase in subscale 1, “Teamwork and Collaboration,” after the IPE (face-to-face: p = .558, d = 0.04; online: p = .008, d = 0.69).

A comparison of the changes in scores before and after IPE between the face-to-face and online groups showed no significant differences in subscale 3, “Uniqueness of the profession” (face-to-face: p = .108, d = 0.25; online: p = .068, d = 0.2).

Table 3 shows a comparison of RIPLS scores between the online and face-to-face groups before and after IPE. Analysis of pre-IPE scores revealed that the score for subscale 2, “Opportunities for IPE” (face-to-face: mean score 6.6 vs. online: mean score 7.9; p = .001, d = 0.69), and the total RIPLS score (face-to-face: mean score 71.2 vs. online: mean score 74.7; p = .013, d = 0.46) were significantly higher in the online group. In the post-IPE score analysis, the online group showed significantly higher scores in all subscales, including subscale 1, “Teamwork and collaboration” (face-to-face: mean score 51.9 vs. online: mean score 54.7; p = .048, d = 0.36), subscale 2, “Opportunities for IPE” (face-to-face: mean score 7.3 vs. online: mean score 8.2; p = .002, d = 0.38), and subscale 3, “Uniqueness of profession” (face-to-face: mean score 13.5 vs. online: mean score 14.4; p = .046, d = 0.34), as well as a higher total RIPLS score (face-to-face: mean score 72.7 vs. online: mean score 77.3; p = .001, d = 0.51).
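For illustration only, an effect size such as the paired Cohen’s d reported above could be computed alongside the Wilcoxon test as in the following hypothetical Python sketch (not the authors’ actual analysis code):

    # Minimal sketch: Wilcoxon p-value plus a paired Cohen's d (hypothetical data).
    import numpy as np
    from scipy.stats import wilcoxon

    pre = np.array([70, 72, 68, 75, 71, 74, 69, 73])
    post = np.array([74, 77, 71, 78, 75, 79, 72, 76])

    stat, p = wilcoxon(pre, post)       # non-parametric paired test
    diff = post - pre
    d = diff.mean() / diff.std(ddof=1)  # Cohen's d for paired samples

    print(f"W = {stat}, p = {p:.3f}, d = {d:.2f}")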
