Article
Peer-Review Record

Evaluation of a Pilot School-Based Mindfulness Program in Primary Education

Educ. Sci. 2025, 15(9), 1088; https://doi.org/10.3390/educsci15091088
by Matej Hrabovsky and Iveta Kovalcikova *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 24 April 2025 / Revised: 9 August 2025 / Accepted: 18 August 2025 / Published: 22 August 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The study aims to verify the effectiveness of a mindfulness-based intervention in primary school. The topic is very interesting; we need more evidence on the effectiveness of mindfulness-based interventions in primary school and on the characteristics an intervention must have to be effective in this age group. I appreciated the use of different types of measures and the fact that the intervention was conducted by a certified teacher with experience in mindfulness.

Despite these positive aspects, the very small sample, some aspects of the method, and the way the hypothesis was tested do not allow me to recommend publication of this article. My major issues are outlined below:

My main concern is the small sample size of the experimental group. There are only 7 participants in this group. Although this is a pilot study and you mentioned in the discussion that the small sample size is a limitation of the study, I believe that with such a small number of participants in the experimental group the results are very weak.

Participants: I personally had to reread and check the article more than once to understand whether there was a control group or not. This is because the Participants section states that there were 14 participants (also reported in the abstract), which later became 7; it does not specify that there were two groups (experimental and control), and it only reports information for those 7 participants. The Results section compares the experimental and control groups, so I assumed there was a control group, but discovering the control group only in the Results section is, in my opinion, confusing, as is the sentence "Another limitation of the study is the lack of a control group" in the Discussion. If you have a control group, it is important to describe it in the Participants section, including information about gender.

Procedure: To allow other researchers to replicate the study, it is important to specify the duration of each session of the intervention.

Measures: Has the FFMQ-39 been validated for children in the country where the study was conducted? If yes, please provide the reference. If not, please explain whether back-translation was used in this study and whether items were adapted for children (and how), as in other countries. Please provide a measure of internal consistency (e.g. Cronbach's alpha) and specify how the score was calculated (is it the mean of all 39 items?).

Results: In my opinion, as mentioned above, with such a small sample the results are too weak. In addition, I think it is more appropriate to use non-parametric analyses when the sample is this small. Finally, the mean and standard deviation of each group (pre- and post-intervention) could be added.

Author Response

Dear Reviewer,
We sincerely thank you for your valuable comments, which have contributed to improving the overall quality of the manuscript. The following revisions have been made to the manuscript:

  1. The sample size of the experimental group is very small (n = 7), which limits the strength of the conclusions.
    Response:
    We agree and have now emphasized this limitation more clearly in the Discussion section. The study is described as exploratory and interpreted as a pilot study with limited generalizability. We also explain that the final group size was affected by a high dropout rate in the experimental group.
  2. It is unclear whether there was a control group.
    Response:
    Thank you for this observation. We have revised the Participants section to clarify that the initial sample of 21 participants was divided into an experimental group (n = 14) and a control group (n = 7). The dropout rate and its effects on group sizes are now clearly described.
  3. The duration of each session is not mentioned.
    Response:
    We have added that each session lasted approximately 45 minutes and was delivered weekly after school hours.
  4. Has the FFMQ-39 been validated for children in your country? Was it adapted?
    Response:
    The FFMQ-39 has not been formally validated for children in Slovakia. We have added a note to the Measures section explaining that the wording was adapted to be age-appropriate using back-translation procedures and simplified language. Internal consistency (Cronbach’s alpha) was calculated and is now reported.
  5. Mean and SD values per group (pre/post) should be reported.
    Response:
    We have included summary tables reporting the mean and standard deviation values for each group and time point (pre- and post-intervention) for all dependent variables.
  6. Consider using non-parametric tests due to the small sample size.
    Response:
    As mentioned earlier, we acknowledge the limitations of parametric methods with small samples. However, we chose to retain repeated-measures ANOVA and ANCOVA because of their robustness and interpretability, especially in pilot studies. This decision is now explicitly justified in both the Methods and Discussion sections, and a note about this limitation has been added to the Limitations section (an illustrative sketch of a non-parametric sensitivity check follows this list).
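For readers weighing points 4-6, the snippet below is a minimal illustration only: the column names, group sizes, and values are simulated and hypothetical, not the authors' data or analysis code. It sketches (a) Cronbach's alpha for a 39-item scale scored as the mean of its items, (b) per-group pre/post means and standard deviations, and (c) an exact Mann-Whitney U test on pre-to-post change scores as a non-parametric sensitivity check alongside the retained parametric models.

```python
# Minimal sketch with simulated, hypothetical data; not the authors' dataset or code.
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# (a) Cronbach's alpha for a 39-item scale; the scale score is taken as the item mean
items = pd.DataFrame(rng.integers(1, 6, size=(21, 39)))  # 21 children x 39 Likert items
k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))
ffmq_total = items.mean(axis=1)

# (b) Means and SDs per group and time point
df = pd.DataFrame({
    "group": ["experimental"] * 7 + ["control"] * 7,
    "pre":  rng.normal(50, 10, 14),
    "post": rng.normal(53, 10, 14),
})
print(df.groupby("group")[["pre", "post"]].agg(["mean", "std"]).round(2))

# (c) Non-parametric sensitivity check with n = 7 per group: in a 2 (group) x 2 (time)
# design, comparing pre-to-post change between groups addresses the interaction question,
# and the exact Mann-Whitney U null distribution is valid at this sample size.
df["change"] = df["post"] - df["pre"]
u, p = mannwhitneyu(df.loc[df.group == "experimental", "change"],
                    df.loc[df.group == "control", "change"],
                    alternative="two-sided", method="exact")
print(f"Cronbach's alpha = {alpha:.2f}; Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```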

 

 

 

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The manuscript presents a pilot study aimed at evaluating the impact of a structured mindfulness-based intervention on executive functioning and dispositional mindfulness in young learners.

I commend the authors for several strengths in their work, including:

  1. The study used neuropsychological tests (the Delis-Kaplan Executive Function System, D-KEFS, subtests) and the Five Facet Mindfulness Questionnaire (FFMQ), which enhances the validity and reliability of the results.
  2. The analyses included ANOVA, ANCOVA, and Kendall's tau correlations, which demonstrate the thoroughness and reliability of the data analysis conducted.
  3.  This study contributes to the growing body of evidence supporting the effectiveness of mindfulness interventions in enhancing executive functioning and trait mindfulness in children at the primary education level.

These are all important strengths of the study.

Considering these strengths, as I read the manuscript I nevertheless encountered certain areas where greater clarity would be appreciated. I believe the paper could be further improved in the following respects:

  1. The methods used. 

    After reviewing the methodology section, I noticed that it lacks several important details regarding the research procedures. The article does not clearly present the exclusion criteria for participants, nor does it provide information about how the intervention was implemented. The absence of these details may limit the ability to thoroughly assess the validity of the research approach.

    I recommend that the authors consider expanding the methods section to include the following elements:

    a) Precise inclusion and exclusion criteria for the study participants,

    b) Detailed information on how the intervention was carried out, including session duration, location, the presence of a teacher, and whether the children participated individually or in groups.
  2. I appreciate the authors’ comments on the practical application of their findings; however, further elaboration would be beneficial — for example, by providing a more detailed explanation of how the assessment of symptoms reported by students could support the daily work of school psychologists.

 

Author Response

Dear Reviewer,
We sincerely thank you for your valuable comments, which have contributed to improving the overall quality of the manuscript. The following revisions have been made to the manuscript:

  1. Missing details on inclusion and exclusion criteria.
    Response:
    We have revised the Methods section to clearly state both inclusion and exclusion criteria. Specifically, children with special educational needs were excluded to ensure a developmentally typical sample.
  2. More information needed about the intervention delivery (format, location, supervision, group vs. individual).
    Response:
    The Procedure section has been expanded to clarify that the intervention was delivered in small groups (n = 7) in a quiet room at the school by a trained mindfulness facilitator. Sessions were 45 minutes long and included both formal mindfulness exercises and interactive components.
  3. Add practical implications for school psychologists.
    Response:
    We appreciate this suggestion. The Conclusion section has been revised to include practical implications for school-based applications. Specifically, we discuss how school psychologists and educators may use brief, cost-effective mindfulness interventions to support students’ executive functioning and emotional regulation.

We have also added the following content:
School psychologists may deliver group mindfulness interventions, with a recommended maximum of 10 students per group—especially for participants with no prior mindfulness experience. As familiarity increases, group size can be gradually scaled up to match typical classroom sizes.

Group-based delivery also allows the psychologist to observe students' behavioral patterns in a natural setting. Observational insights may subsequently inform individualized psychological support and intervention planning.

 

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

Dear Author(s),

the quality of the manuscript has improved. However, as I wrote in my first review, my main concern remains the small sample size of the experimental group. With only 7 participants in the experimental group, I believe the results are too weak. For this reason, I cannot recommend publication of this article.

I am including a few suggestions, in case you find them useful:

  • I suggest specifying in the descriptions of the instruments how the variables were calculated.
  • It would also be helpful to indicate in the tables which group is experimental and which is control.
  • For consistency, please clarify whether the study design should be defined as experimental, as stated in the abstract, or quasi-experimental, as indicated in the procedure section.
  • In the description of the FFMQ-39, please verify the accuracy of the sentence: “...the adult version was used in this pilot study without modifications.” In your response to my previous review, you explained that the wording was adapted to be age-appropriate using back-translation procedures and simplified language.

Author Response

Reviewer 1 Comments

Specify how the variables were calculated.
Response: This has been clarified in the Instruments section. For each cognitive task (e.g., Stroop, TMT), we now specify how performance scores were computed.

Indicate in the tables which group is experimental and which is control.
Response: We have updated all tables to clearly label the experimental and control groups and have added explanatory notes under the tables.

Clarify whether the design is experimental or quasi-experimental.
Response: We now consistently refer to the design as "quasi-experimental" across the abstract, method, and procedure sections to reflect the non-random group allocation.

Verify the FFMQ-39 adaptation statement.
Response: We revised the sentence to reflect the actual procedure.

 

Editor’s Specific Comments

Expand the literature review to better justify the variables.
Response: We significantly expanded the Theoretical Background section. We now provide a clearer rationale for the selection of inhibitory control, working memory, and cognitive flexibility as outcome variables. Each is explicitly linked to mindfulness-based mechanisms and supported by relevant literature (e.g., Diamond, 2013; Jha et al., 2010; Zelazo, 2020). Information about the prospective relation of the mindfulness intervention to the ability to learn, and its possible transfer, was added: we approached the research with the anticipation that students might benefit from the intervention along two main lines: (1) improvements in wellbeing, and (2) enhancements in the level of executive functioning. Mindfulness training aimed at enhancing executive functions can support students' academic performance by strengthening abilities such as selective attention, inhibition, and cognitive flexibility. Executive functioning is typically stimulated through both domain-specific (e.g., mathematical or language-based) and domain-general (e.g., working memory training) cognitive programs; however, emerging evidence suggests that mindfulness-based interventions can also enhance cognitive capacities essential for academic success. Improvements in executive functioning would transfer directly to the school context, since skills such as maintaining attention, suppressing distractions, and responding deliberately are critical for effective learning and classroom performance.

   

Contextualize intervention elements in terms of the dependent variables.
Response: We revised the Intervention section to link each core activity to the executive function it targeted. A new table has been added (Table 1) listing sessions, core activities and targeted EF components.

Add a visualization of the intervention.
Response: A detailed table summarizing all intervention sessions has been included in the main text (Table 1), rather than in the appendix, to ensure clarity and visibility.

Rephrase interpretations of “intervention effect.”
Response: We revised all formulations of “intervention effect.” The Results and Discussion sections now refer to “group differences in change over time” or “greater improvements in the intervention group” instead. No causal attributions were made regarding the control group.

Include qualitative observations.
Response: We added a subsection titled “Qualitative observations” in the Results. It includes real examples from facilitator field notes (e.g., children's engagement, behavioral shifts), providing depth to the interpretation of findings.

Expand the discussion on implications and connect to literature.
Response: The Discussion was expanded with practical implications for educators and psychologists. We now include three additional references to support the transferability of findings (e.g., Mettler et al., 2023; Sciutto et al., 2021; Zelazo, 2020), and clarify how each finding could inform school-based interventions.

 

Final Editorial Revisions

Terminology: The word “skupina” has been replaced with “group” throughout.
Formatting: All statistical symbols (e.g., F, p, η²) are italicized as per APA style.
Justification: Full text justification has been replaced by left alignment throughout the document to improve readability.

 

Author Response File: Author Response.pdf

Round 3

Reviewer 1 Report

Comments and Suggestions for Authors

Dear Author(s),

I appreciated the responses provided. However, as I previously stated in both the first and second rounds of review, I believe that the very small sample sizes in both the experimental and control groups significantly limit the strength of the findings. I am sorry, but in my view this limitation renders the article unsuitable for publication.

Author Response

Esteemed Reviewer,

 

I am writing regarding the recent decision on our manuscript "[EVALUATION OF A PILOT SCHOOL-BASED MINDFULNESS PROGRAM IN PRIMARY EDUCATION]" (Manuscript ID: [education-3635384]). While I fully respect the reviewer's right to reject a paper, I would like to express concern about the review process in this particular case.

 

From the initial submission, the sample size was clearly stated and could not be increased due to the nature of the study. This information was also present in the methods section of the first version. Despite this, the manuscript went through two rounds of revisions, in which we addressed in detail all the reviewers’ comments, including substantial changes to the introduction, methods, results, and discussion. In both rounds, no indication was given that the sample size alone would ultimately preclude publication.

 

In the final decision, the main stated reason for rejection was the small sample size — a factor that has remained unchanged and was known to the reviewer and editorial team from the very beginning. This raises concerns about whether the review process in this case was efficient and fair, given the time and effort invested by both the authors and the reviewers.

 
