Article
Peer-Review Record

Teaching Online: Lessons Learned about Methodological Strategies in Postgraduate Studies

Educ. Sci. 2022, 12(10), 688; https://doi.org/10.3390/educsci12100688
by Anabel Ramos-Pla 1,*, Leslie Reese 2, Consuelo Arce 3, Jorge Balladares 4 and Blanca Fiallos 4
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 12 September 2022 / Revised: 28 September 2022 / Accepted: 5 October 2022 / Published: 10 October 2022

Round 1

Reviewer 1 Report

This is an interesting contribution to the post-Covid literature in HE, particularly in linking mid-pandemic practice to current/future face-to-face teaching practice.

It is understandably difficult to refer to relevant literature as pandemic-related research has only just begun to be published. This could partially explain why some references were not from peer-reviewed publications, or from those with a lower H-index. I would suggest looking at more recent publications as well, e.g. international journal BJET's special issue on the post-Covid future of online learning in HE: https://bera-journals-onlinelibrary-wiley-com.ezphost.dur.ac.uk/toc/14678535/2022/53/3; also this literature review: Sanjaya Mishra, Sidhartha Sahoo & Shriram Pandey (2021) Research trends in online distance learning during the COVID-19 pandemic, Distance Education, 42:4, 494-519, DOI: 10.1080/01587919.2021.1986373 

In relation to this, it wasn't entirely clear in the findings or discussion whether some professors who meant to change their teaching were going to continue to use online platforms or were going to use new methods in face-to-face teaching. This was the case for the flipped classroom method, collaborative learning and academic advising in particular. If this wasn't clear from the survey data, this should be explained to avoid ambiguity.

Following on from this, there seemed to be a few other limitations to the study that were not mentioned. For example, could differences between institutions or between female and male respondents be caused by factors other than the institution or gender per se (e.g. respondents' disciplines)? was it outside the scope of the study to ask participants why they chose to continue or discontinue certain teaching methods?

One detail to note - Table 3 does not seem to include the non-significant (NS) or almost significant (=) indicators in the table itself.

Author Response

Reviewer

Aspects to improve

Modifications made

1st

This is an interesting contribution to the post-Covid literature in HE, particularly in linking mid-pandemic practice to current/future face-to-face teaching practice.

Thank you for your comment.

It is understandably difficult to refer to relevant literature as pandemic-related research has only just begun to be published. This could partially explain why some references were not from peer-reviewed publications, or from those with a lower H-index. I would suggest looking at more recent publications as well, e.g. international journal BJET's special issue on the post-Covid future of online learning in HE: https://bera-journals-onlinelibrary-wiley-com.ezphost.dur.ac.uk/toc/14678535/2022/53/3; also this literature review: Sanjaya Mishra, Sidhartha Sahoo & Shriram Pandey (2021) Research trends in online distance learning during the COVID-19 pandemic, Distance Education, 42:4, 494-519, DOI: 10.1080/01587919.2021.1986373

We have added the references.

In relation to this, it wasn't entirely clear in the findings or discussion whether some professors who meant to change their teaching were going to continue to use online platforms or were going to use new methods in face-to-face teaching. This was the case for the flipped classroom method, collaborative learning and academic advising in particular. If this wasn't clear from the survey data, this should be explained to avoid ambiguity.

We have clarified this in the results by adding: "The activities that are expected to be transferred to face-to-face".

Following on from this, there seemed to be a few other limitations to the study that were not mentioned. For example, could differences between institutions or between female and male respondents be caused by factors other than the institution or gender per se (e.g. respondents' disciplines)? was it outside the scope of the study to ask participants why they chose to continue or discontinue certain teaching methods?

 

Thank you for your comments. The following limitation and future prospect has been included in the discussion: "Another limitation of the study, and also a future prospect, is to go deeper into the data obtained in relation to the reasons that informants had for using the selected strategies in the face-to-face setting."

One detail to note - Table 3 does not seem to include the non-significant (NS) or almost significant (=) indicators in the table itself.

We have added this information to the table.

Reviewer 2 Report

The article describes how different educators apply different strategies to transition from face-to-face to online education. To do so, it relies on a large data pool from four universities in Europe and South America.

The large data pool is a strength of this paper. It tells a good story of how many educational practitioners have met the transition. However, the presentation is less than optimal, and hence it took a while to find the meaning.

Suggested improvements: 

Tables: There seem to be two table designs in this paper. Why does the design change, and what is the meaning of the order of the categories? They are neither alphabetical nor ordered by frequency.

Figure 1: Hard to read, and no obvious order of categories.

I see a few "close to significant" comments. As I also see Cronbach's alpha applied to ordinal data, for which it is not suited, I suggest recalculating, e.g., the ordinal alpha, which is better suited to Likert-scale data. See Liddell and Kruschke below. Furthermore, I would also advise discussing the value instead of stating significance or not: "it is 90% likely that this is the case" rather than "this is significant".

In the discussion, a future implication for educational professionals might strengthen this part as well. 

 

Liddell, T. M., & Kruschke, J. K. (2018). Analyzing ordinal data with metric models: What could possibly go wrong? Journal of Experimental Social Psychology, 79, 328-348. https://doi.org/10.1016/j.jesp.2018.08.009

Author Response

Reviewer

Aspects to improve

Modifications made

2nd

The article describes how different educators apply different strategies to transition from face-to-face to online education. To do so, it relies on a large data pool from four universities in Europe and South America.

The large data pool is a strength of this paper. It tells a good story of how many educational practitioners have met the transition. However, the presentation is less than optimal, and hence it took a while to find the meaning.

Thank you for your comments.

 

Suggested improvements:

 

Tables: There seem to be two table designs in this paper. Why does the design change, and what is the meaning of the order of the categories? They are neither alphabetical nor ordered by frequency.

 

The tables are different because some are extracted from the SPSS programme (quantitative) and others from the results of Atlas.ti (qualitative).

The order of the categories is based on the order of appearance in the questionnaire.

 

Figure 1: Hard to read, and no obvious order of categories.

 

 

Figure 1 has been expanded. The order of the categories follows their appearance in the questionnaire.

 

I see a few "close to significant" comments. As I also see Cronbach's alpha applied to ordinal data, for which it is not suited, I suggest recalculating, e.g., the ordinal alpha, which is better suited to Likert-scale data. See Liddell and Kruschke below. Furthermore, I would also advise discussing the value instead of stating significance or not: "it is 90% likely that this is the case" rather than "this is significant".

 

We have added: "From this, the reliability of the instrument was calculated using Cronbach's alpha coefficient, obtaining a value of 0.67, which was considered acceptable. The value of the ordinal Cronbach's alpha turned out to be somewhat higher than the classical alpha coefficient calculated earlier: its value is 0.75, and since it is >0.70 and <0.90, the reliability is considered good."
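As an editorial illustration only (not part of the manuscript), the classical coefficient the authors report can be computed directly from a respondents-by-items score matrix; the data below are invented for the example:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Classical Cronbach's alpha for a respondents x items score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented 5-point Likert responses: 6 respondents, 4 items.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(round(cronbach_alpha(scores), 2))  # 0.93 for this toy data
```

The ordinal alpha the reviewer recommends uses the same formula but replaces the item covariances with a polychoric correlation matrix, which treats the Likert categories as thresholds on a latent continuous variable and requires specialized software to estimate.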

 

In the discussion, a future implication for educational professionals might strengthen this part as well.

 

We have added a paragraph at the end of the discussion: "The results obtained show that the most successful virtual strategies and activities are those that involve the active participation of students. It is therefore necessary for education professionals to reflect on this in order to encourage more active methods."

Reviewer 3 Report

Thank you very much for giving me the opportunity to read this manuscript. There is no doubt that the topic is of extraordinary topicality and interest in the field of educational research and, therefore, falls within the scope of the journal. The approach is quantitative, which is in line with the instrument used (questionnaire) and the type of statistical analysis carried out on the responses, which seems to be correct. Moreover, the sample is large and representative, as the authors rigorously explain. However, it seems to me that the formulation of the research objectives is a bit vague and generalist, which affects the development of the study. Basically, the authors analyze the frequency of use of a family of ICT tools during the pandemic and to what extent they are also used after the pandemic. They conclude results that are to be expected: there has been a greater increase in the use of tools that were previously little used, such as assessment tools, and, in this sense, teachers have been forced to restructure the definition of the training activity (in this case, assessment). In my view, the problem with the article is that, at least at this time, it does not establish novel or impactful results, even though, in my opinion, it has the potential to do so.

I propose to the authors a thorough revision of their manuscript along several lines:

1. Strengthen the theoretical underpinning and literature review. This will help to better situate the state of the art and make it easier for the authors to discuss the results obtained in relation to previous work. In addition, the classification of the ICT families considered should be based on the different taxonomies included in the literature. In this regard, I suggest some sources:

- Impact of the pandemic on the use of ICT in higher education: https://doi.org/10.3390/educsci12090635

- Process of virtualization of higher education in Latin America: https://doi.org/10.7238/rusc.v11i3.1729

- ICT in higher education (classifications and new trends in the use of teaching ICT): https://doi.org/10.1186/s41239-017-0076-8

http://unesdoc.unesco.org/images/0012/001295/129533e.pdf

https://doi.org/10.1002/cae.22504

2. Define independent variables and specific hypotheses. I believe that, based on the data obtained by the authors, a comparative analysis of the Spanish and Latin American cases could be proposed. This approach would make it possible to overcome the lack of novelty present at this time. I also suggest analyzing differences by sex or other sociodemographic or academic variables.

3. It would be convenient if some of the results were expressed graphically, in order to facilitate reading.

Author Response

Reviewer

Aspects to improve

Modifications made

3rd

Thank you very much for giving me the opportunity to read this manuscript. There is no doubt that the topic is of extraordinary topicality and interest in the field of educational research and, therefore, falls within the scope of the journal. The approach is quantitative, which is in line with the instrument used (questionnaire) and the type of statistical analysis carried out on the responses, which seems to be correct. Moreover, the sample is large and representative, as the authors rigorously explain. However, it seems to me that the formulation of the research objectives is a bit vague and generalist, which affects the development of the study. Basically, the authors analyze the frequency of use of a family of ICT tools during the pandemic and to what extent they are also used after the pandemic. They conclude results that are to be expected: there has been a greater increase in the use of tools that were previously little used, such as assessment tools, and, in this sense, teachers have been forced to restructure the definition of the training activity (in this case, assessment). In my view, the problem with the article is that, at least at this time, it does not establish novel or impactful results, even though, in my opinion, it has the potential to do so.

Thank you for your comments.

I propose to the authors a thorough revision of their manuscript along several lines:

 

1. Strengthen the theoretical underpinning and literature review. This will help to better situate the state of the art and make it easier for the authors to discuss the results obtained in relation to previous work. In addition, the classification of the ICT families considered should be based on the different taxonomies included in the literature. In this regard, I suggest some sources:

 

- Impact of the pandemic on the use of ICT in higher education: https://doi.org/10.3390/educsci12090635

 

- Process of virtualization of higher education in Latin America: https://doi.org/10.7238/rusc.v11i3.1729

 

- ICT in higher education (classifications and new trends in the use of teaching ICT): https://doi.org/10.1186/s41239-017-0076-8

 

http://unesdoc.unesco.org/images/0012/001295/129533e.pdf

 

https://doi.org/10.1002/cae.22504

 

We have added some of these references.

 

2. Define independent variables and specific hypotheses. I believe that, based on the data obtained by the authors, a comparative analysis of the Spanish and Latin American cases could be proposed. This approach would make it possible to overcome the lack of novelty present at this time. I also suggest analyzing differences by sex or other sociodemographic or academic variables.

 

 

Specific hypotheses have been added from line 175 onwards.

The independent variables can be found in line 137.

In relation to the analysis of differences by gender or other socio-demographic or academic variables, this is addressed in the limitations of the study and, in turn, is also a future prospect.

 

3. It would be convenient if some of the results were expressed graphically, in order to facilitate reading.

We have added three figures at the end of the results.

Round 2

Reviewer 2 Report

My comments have been met; only a minor thing: lines 220-224 could benefit from rephrasing to make them simpler and more precise.

 

Author Response

Reviewer

Aspects to improve

Modifications made

2nd

My comments have been met; only a minor thing: lines 220-224 could benefit from rephrasing to make them simpler and more precise.

 

Thank you for your comments. The sentence has been rephrased.

Author Response File: Author Response.docx

Reviewer 3 Report

In my opinion, the authors have satisfactorily addressed the comments and suggestions and some of the issues I noted in my first report have been clarified. I suggest the authors revise the text in the following ways:

1. I do not clearly understand the hypothesis 3. It is unclear and lacks specificity. If it could be made more specific, that would be great.

2. I recommend improving the aesthetics of the graphs: include solid axes, increase the size of the graphs slightly, and indicate in the figure caption on what scale the data are represented (for example, in Figure 2, is it out of 1?).

Author Response

Reviewer

Aspects to improve

Modifications made

3rd

In my opinion, the authors have satisfactorily addressed the comments and suggestions and some of the issues I noted in my first report have been clarified. I suggest the authors revise the text in the following ways:

Thank you for your comments.

1. I do not clearly understand the hypothesis 3. It is unclear and lacks specificity. If it could be made more specific, that would be great.

We have rewritten hypothesis 3.

 

2. I recommend improving the aesthetics of the graphs: include solid axes, increase the size of the graphs slightly, and indicate in the figure caption on what scale the data are represented (for example, in Figure 2, is it out of 1?).

The graphics have been made as large as possible (as large as the template allows). The scale has been referenced at the bottom of the graphs.

Author Response File: Author Response.docx
