Peer-Review Record

Observational Analysis of Corner Kicks in High-Level Football: A Mixed Methods Study

Sustainability 2021, 13(14), 7562; https://doi.org/10.3390/su13147562
by Rubén Maneiro 1,2,*, José Luís Losada 2, Mariona Portell 3 and Antonio Ardá 4
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 8 April 2021 / Revised: 28 June 2021 / Accepted: 1 July 2021 / Published: 6 July 2021

Round 1

Reviewer 1 Report

Having read the article "Observational analysis of the static phases in opposition sports: a mixed methods study", I believe it can make an interesting contribution to the current state of the art on this specific topic. However, the clarity and flow of some parts still need improvement. I have therefore recommended revision to further improve the text's clarity before I can consider recommending it for publication, for the following reasons:

 

Keywords should complement the text, not repeat it.

 

The authors define corner kicks as “static ball situations”. Although that designation is not incorrect, “set pieces” is the more appropriate term. I suggest that in the first line of the introduction you insert the following text: “Static ball actions (or set-pieces) are those….”

 

In the last paragraph of the introduction, please use the past tense to state the objective of the paper.

In the Participants section, please write “…the units of analysis were the corner kicks taken in the…” instead of “…the units of analysis were the kicks taken in the…”

 

“3740 events…”: please do not start a sentence with a numeral.

 

Results – the results were presented in a detailed way.

 

If possible, provide a Figure 3 with better quality.

 

Tables 6,7 and 8 are not necessary, since the authors can introduce all the information in the text.

 

In my opinion, the limitations should be placed before the conclusion section.

 

Please, provide some practical applications for coaches and practitioners.

Author Response

Having read the article "Observational analysis of the static phases in opposition sports: a mixed methods study", I believe it can make an interesting contribution to the current state of the art on this specific topic. However, the clarity and flow of some parts still need improvement. I have therefore recommended revision to further improve the text's clarity before I can consider recommending it for publication, for the following reasons:

Dear Reviewer

First of all, thank you for your suggestions for improving the study. The changes made are marked in green in the text.

Keywords should complement the text, not repeat it.

The keywords have been corrected

The authors define the corner kicks as “static ball situations”. Although the designation is not incorrect, the term “set pieces” is the most adequate. I suggest that in the first line of introduction you can insert the following text “Static ball actions (or set-pieces) are those….”

The term “set pieces” has been corrected and used, both in the abstract and throughout the document.

Last paragraph of introduction, please use the past tense to define the objective of the paper.

It has been corrected and the past tense has been used

In the section of participants please write “ ..the units of analysis were the corner kicks taken in the…” instead of “…the units of analysis were the kicks taken in the…”

Thanks for the observation, it has been corrected.

“3740 events…”: please do not start a sentence with a numeral.

This section has been corrected

Results – the results were presented in a detailed way.

If possible, provide a Figure 3 with better quality.

The figure format has been changed to TIFF for better quality, and a sharpness filter has also been applied.

Tables 6,7 and 8 are not necessary, since the authors can introduce all the information in the text.

Tables 6, 7 and 8 have been removed, and the information in the tables is in the text.

In my opinion, the limitations should be places before the conclusion section.

This section has been modified, the limitations have been placed before the conclusions.

Please, provide some practical applications for coaches and practitioners.

A specific section on practical applications for coaches has been included

Reviewer 2 Report

1. The abstract is not clear:

  • Line 19. Please provide more detailed information about what observational methodology and mixed methods are.
  • Please rephrase the abstract to better explain the results, discussion, and conclusion of this study.

2. Section 2.2 is confusing.

  • Line 99. This section does not provide information about the participants, only about the data in the dataset. Please change the section title or provide information about the players involved.
  • Line 103. What does "The observation sample was a convenience sample" mean? It is not clear.
  • Lines 103-107. Are the 3740 events related to the 1704 corners? Were 192 or 52 games analyzed? In the first part of this section you state that 192 matches were analyzed, while the second paragraph says 52. Which events did you record? Please describe the dataset better.
  • What does "the possession ends in a draw" mean? Are these the results at the end of a match?
  • Please provide a description of all the features used in this study.

3. Section 2.4 has to be improved by adding a reliability test of the raters.

4. Line 147. The formula has to be described.

5. Line 213. What is “criterion1.Shot”? Please describe this variable.

6. Line 216. Is the train/test split performed with a stratified approach?

7. Please provide the precision, recall, f1-score and accuracy of the model.

8. Line 234. Please provide the cross-validated goodness of fit of the model.

9. To validate your models, a comparison with other machine learning models is mandatory. Please provide the results of at least one baseline classifier.

10. Please provide the feature importance in each competition and a statistical comparison. Moreover, discuss the differences in the Discussion section.

Author Response

Dear Reviewer

First of all, thank you for your suggestions to improve the study.

You can see the changes made in green to the text.

  1. The abstract is not clear:
  • Line 19. Please provide more detailed information about what observational methodology and mixed methods are.

Thanks for the observation. Owing to the length limit of the abstract, a specific section has been included in the Materials and Methods section. In addition, new references by leading authors in this field have been included in this section to accompany the explanation.

 

  • Please rephrase the abstract to better explain the results, discussion, and conclusion of this study.

The abstract has been rewritten.

  2. Section 2.2 is confusing.
  • Line 99. This section does not provide information about participants, but only the data used in the dataset. Please change the title of the section or provide information about the players involved.

The section title “Participants” has been changed to “Sample”.

 

  • Line 103. What does "The observation sample was a convenience sample" mean? It is not clear.

Intentional or convenience sampling, according to Anguera, Arnau, Ato, Martínez, Pascual and Vallejo (1995), is sampling that does not attempt to represent the population in order to generalize results, but rather to obtain data to gather information. In line with this, the sample of this research consists of the corner kicks taken in three international championships. This sampling guarantees the participation of high-level players with previous competitive experience.

  • Lines 103-107. Are the 3740 events related to the 1704 corners? Were 192 or 52 games analyzed? In the first part of this section you state that 192 matches were analyzed, while the second paragraph says 52. Which events did you record? Please describe the dataset better.

  • What does "the possession ends in a draw" mean? Are these the results at the end of a match?

  • Please provide a description of all the features used in this study.

Thanks for the observation, this whole section has been corrected.

  3. Section 2.4 has to be improved by adding a reliability test of the raters.

Test results have been included in this section

  4. Line 147. The formula has to be described.

The formula has been included in this section

  5. Line 213. What is “criterion1.Shot”? Please describe this variable.

An explanation of this variable has been included in the section

  6. Line 216. Is the train/test split performed with a stratified approach?

Sample selection was based on a randomized process.

  7. Please provide the precision, recall, f1-score and accuracy of the model.

Excuse us, but we do not understand the question: what do you mean by f1-score?
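[Editor's note] The f1-score the reviewer refers to is the harmonic mean of precision and recall. A minimal sketch with scikit-learn, using invented labels rather than the study's data:

```python
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

# Hypothetical true and predicted outcomes for a binary "shot / no shot" criterion
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

precision = precision_score(y_true, y_pred)  # TP / (TP + FP)
recall = recall_score(y_true, y_pred)        # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)                # 2 * P * R / (P + R)
accuracy = accuracy_score(y_true, y_pred)    # fraction of correct predictions

print(precision, recall, f1, accuracy)
```

With these invented labels all four metrics happen to equal 0.75; on real data they generally differ, which is why reviewers ask for all of them.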

  8. Line 234. Please provide the cross-validated goodness of fit of the model.

It has been included in the section

  9. To validate your models, a comparison with other machine learning models is mandatory. Please provide the results of at least one baseline classifier.

It has been included in the section

  10. Please provide the feature importance in each competition and a statistical comparison. Moreover, discuss the differences in the Discussion section.

It has been included in the Results section and also in the Discussion section.

Round 2

Reviewer 2 Report

The authors have sufficiently answered all of my doubts. However, some other problems still have to be fixed:

  1. Please provide in the title that the analysis was conducted in soccer and on Corner kicks.
  2. Section 2.4 (Procedure): Please provide more details about the raters' reliability. In particular, how many familiarization sessions were done by the raters? How many matches were assessed to compute Cohen's Kappa coefficient? Finally, the intra-rater reliability is missing. Instead of Cohen's Kappa coefficient, please provide the intraclass correlation coefficient for your aim, also specifying the kind of ICC (e.g., ICC(1,k) or ICC(2,1)).
  3. Please provide in Method section the description of all the features used in the text and the procedure used to extract them.
  4. Please provide the results of the ROC curves. You describe them in Section 3.3, but I do not find the results for any of the championships analysed. Moreover, for each classifier please provide precision, recall, and F1-score (the harmonic mean of precision and recall).
  5. Finally, to validate your classifiers, a comparison with other machine learning models is mandatory. In particular, please provide the results of a baseline classifier (e.g., stratified prediction). A Dummy Classifier makes predictions using simple rules and is useful as a simple baseline to compare with other (real) classifiers. If the prediction performance of the Dummy Classifier is similar to or higher than that of the trained classifier, the trained model is not detecting any pattern in the data.
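[Editor's note] The baseline comparison the reviewer describes can be sketched with scikit-learn's `DummyClassifier`. The data below is synthetic and only illustrates the procedure, not the paper's dataset:

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a corner-kick feature matrix and binary outcome
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Baseline: predicts by sampling the training class distribution ("stratified")
dummy = DummyClassifier(strategy="stratified", random_state=0).fit(X_tr, y_tr)
# Any real model to compare against (logistic regression as a simple example)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("baseline accuracy:", dummy.score(X_te, y_te))
print("model accuracy:   ", model.score(X_te, y_te))
# If the model does not clearly beat the baseline, it has learned no pattern.
```

The point of the comparison is exactly the reviewer's criterion: a trained classifier is only informative if it outperforms this rule-based baseline.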

Author Response

Dear Reviewer

 

Here we respond to your suggestions for improvement:

Comments and Suggestions for Authors

The authors have sufficiently answered all of my doubts. However, some others problems have to be fixed:

  1. Please provide in the title that the analysis was conducted in soccer and on Corner kicks.

The title has been modified and made more specific. The new title is: “Observational Analysis of Corner Kicks in High-Level Football: A Mixed Methods Study”.

2. Section 2.4 (Procedure): Please provide more details about the raters' reliability. In particular, how many familiarization sessions were done by the raters? How many matches were assessed to compute Cohen's Kappa coefficient? Finally, the intra-rater reliability is missing. Instead of Cohen's Kappa coefficient, please provide the intraclass correlation coefficient for your aim, also specifying the kind of ICC (e.g., ICC(1,k) or ICC(2,1)).

Thanks for the observation. The description of the data quality process, the results of the Kappa analysis, and information on the training of the observers have been included in Section 2.4.

Following previous studies on performance analysis in team sports using observational methodology, Cohen's Kappa coefficient has been used, as it is a reliable tool for this type of analysis (Castellano, 2000; Castellano & Hernández-Mendo, 2002; Anguera et al., 2017; Castañer et al., 2018; Lapresa et al., 2015; Lapresa et al., 2016).
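[Editor's note] Cohen's Kappa corrects raw inter-observer agreement for the agreement expected by chance. A minimal sketch with scikit-learn, using invented codings rather than the study's actual observation data:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codings of the same eight events by two observers
observer_a = ["goal", "shot", "clear", "shot", "clear", "goal", "shot", "clear"]
observer_b = ["goal", "shot", "clear", "clear", "clear", "goal", "shot", "clear"]

# kappa = (p_observed - p_chance) / (1 - p_chance)
kappa = cohen_kappa_score(observer_a, observer_b)
print(round(kappa, 2))  # raw agreement is 7/8, but kappa discounts chance
```

Here raw agreement is 0.875 while kappa is about 0.81, illustrating why kappa is preferred over simple percentage agreement when reporting observer reliability.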

 

3. Please provide in the Method section a description of all the features used in the text and the procedure used to extract them.

An explanation of the methodological process carried out has been included in the Method section.

 

4. Please provide the results of the ROC curves. You describe them in Section 3.3, but I do not find the results for any of the championships analysed. Moreover, for each classifier please provide precision, recall, and F1-score (the harmonic mean of precision and recall).

ROC curves have been included for each of the competitions considered.
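[Editor's note] The single "ROC curve value" a table can carry is the area under the curve (AUC). A minimal sketch with scikit-learn on invented outcomes and scores, not the study's data:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical true outcomes and predicted shot probabilities
y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]

# AUC = probability that a random positive is ranked above a random negative
auc = roc_auc_score(y_true, y_score)
print(auc)  # 1.0 is a perfect ranking; 0.5 is chance level
```

Reporting the AUC alongside the curve lets readers compare championships numerically even if the plots themselves are omitted.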

 

5. Finally, to validate your classifiers, a comparison with other machine learning models is mandatory. In particular, please provide the results of a baseline classifier (e.g., stratified prediction). A Dummy Classifier makes predictions using simple rules and is useful as a simple baseline to compare with other (real) classifiers. If the prediction performance of the Dummy Classifier is similar to or higher than that of the trained classifier, the trained model is not detecting any pattern in the data.

They have been included in the text (Tables 7, 8 and 9).

 

Round 3

Reviewer 2 Report

The authors have sufficiently answered some of my doubts. However, some other problems still have to be fixed:

  1. As asked in the previous revision, please provide a table with precision, recall, and F1-score for each class. Moreover, I think that the plot of the ROC curve is unnecessary. Please also insert the ROC curve value in this table.
  2. I am not sure I fully understand Tables 7, 8, and 9. Are they the prediction matrices? If so, why are there different labels for the columns and rows? Additionally, if Table 7 refers to the ROC curves of Figure 2, the results are not linked. If Table 7 is correct, the ROC score would be 1, but you report 0.68. Please revise these analyses.
  3. You do not provide a baseline classifier analysis to assess the validity of your prediction. Please provide the results of a baseline classifier (e.g., stratified prediction). A Dummy Classifier makes predictions using simple rules and is useful as a simple baseline to compare with other (real) classifiers. If the prediction performance of the Dummy Classifier is similar to or higher than that of the trained classifier, the trained model is not detecting any pattern in the data. Of note, you can find a Python library that could help with this analysis at this link: https://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyClassifier.html.

Author Response

Dear Reviewer

First of all, thank you for your proposals to improve the study. We have addressed your suggestions, and our answers are detailed below.

 

  • As asked in the previous revision, please provide a table with precision, recall, and F1-score for each class. Moreover, I think that the plot of the ROC curve is unnecessary. Please also insert the ROC curve value in this table.

Thank you for the correction. The ROC curve values are now included. However, we do not know the meaning of F1-score per class, nor how it is computed.

 

  • I am not sure I fully understand Tables 7, 8, and 9. Are they the prediction matrices? If so, why are there different labels for the columns and rows? Additionally, if Table 7 refers to the ROC curves of Figure 2, the results are not linked. If Table 7 is correct, the ROC score would be 1, but you report 0.68. Please revise these analyses.

Thank you for your observation and for pointing out the error; it was our fault that the values were not correct. They have been re-analyzed and corrected. Again, thanks for this fix, as it was a major error.

 

  • You do not provide a baseline classifier analysis to assess the validity of your prediction. Please provide the results of a baseline classifier (e.g., stratified prediction). A Dummy Classifier makes predictions using simple rules and is useful as a simple baseline to compare with other (real) classifiers. If the prediction performance of the Dummy Classifier is similar to or higher than that of the trained classifier, the trained model is not detecting any pattern in the data. Of note, you can find a Python library that could help with this analysis at this link: https://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyClassifier.html.

Training tables have been included in the results section for more information. However, we could not use the Python library suggested, since we are not familiar with this program, and in the short time available to respond it was impossible for us to learn it.

We will bear this program in mind for future studies.

Round 4

Reviewer 2 Report

Unfortunately, the answers provided by the authors do not solve the problems highlighted in the previous revisions:

  • It is possible to train and test a dummy classifier using the R programming language (https://www.r-bloggers.com/2017/10/practical-machine-learning-with-r-and-python-part-4/). Please provide the results of this classifier based on a stratified approach.
  • In addition to the ROC curves, please also provide classification metrics (precision, recall, and f1-score). Since you provide the classification matrices, you can compute these metrics from those tables. You can find an example in R at this link: https://rpubs.com/tmartens/classification_metrics
  • Table 10 is useless; this result is obvious. Please remove it.
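[Editor's note] As the reviewer points out, the classification metrics can be computed directly from a confusion matrix. A minimal sketch with invented counts, not the study's tables:

```python
# Hypothetical 2x2 confusion matrix: rows = actual class, columns = predicted class
tp, fn = 40, 10   # actual positives: predicted positive / predicted negative
fp, tn = 20, 130  # actual negatives: predicted positive / predicted negative

precision = tp / (tp + fp)                         # how many flagged positives are real
recall = tp / (tp + fn)                            # how many real positives were found
f1 = 2 * precision * recall / (precision + recall) # harmonic mean of the two
accuracy = (tp + tn) / (tp + fn + fp + tn)         # overall fraction correct

print(precision, recall, f1, accuracy)
```

Because these formulas need only the four cell counts, any already-published confusion matrix can be converted into the requested metrics without rerunning the models.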

Author Response

  • It is possible to train and test a dummy classifier using the R programming language (https://www.r-bloggers.com/2017/10/practical-machine-learning-with-r-and-python-part-4/). Please provide the results of this classifier based on a stratified approach.
  • In addition to the ROC curves, please also provide classification metrics (precision, recall, and f1-score). Since you provide the classification matrices, you can compute these metrics from those tables. You can find an example in R at this link: https://rpubs.com/tmartens/classification_metrics
  • Table 10 is useless; this result is obvious. Please remove it.

 

Dear Reviewer

Corrections have been made. You can check them in the results tables below the ROC curves.

Table 10 has been retained to preserve the structure used for the previous tables and championships.

Author Response File: Author Response.docx

Round 5

Reviewer 2 Report

The authors have sufficiently answered all of my doubts. Before the paper is accepted, the authors should provide in the Method section a description of all the metrics (i.e., accuracy, error rate, precision, sensitivity, specificity, and f1-score) and of how the dummy classifier is created. In this way, readers can better understand the results of this paper.

Author Response

Dear Reviewer

These suggestions were previously included, in tables 8, 12 and 14.

Author Response File: Author Response.docx
