A Theory of Impacts Model for Assessing Computer Science Interventions through an Equity Lens: Identifying Systemic Impacts Using the CAPE Framework
Round 1
Reviewer 1 Report
This paper reports on a feasibility study conducted in a multi-school intervention to test the ToI-CAPE model (an extension of the CAPE model using the Theory of Impacts) and to evaluate its ability to help measure equity in CS education.
- The topic is timely, as every student today must receive an education in computer science.
- The feasibility study is well designed and explained.
- The hypotheses are well thought out and sufficient in number.
- The intervention is large enough to allow generalization of outcomes.
- The multi-school setting brings a sufficient complexity to provide evidence on the feasibility of using ToI-CAPE.
- The study shows a positive appreciation of the proposed model.
- Limitations of the study are stated.
These strengths and positive aspects of the paper weigh in favor of its acceptance. There are, however, some points for improvement:
- The choice of the Theory of Impacts to complement CAPE needs to be more motivated and explained.
- The contribution of the paper needs to be more clearly stated in the introductory part.
- How to use the ToI-CAPE model in other settings should be dealt with in a specific section, providing guidelines if possible. Alternatively, those could be introduced in Section 4. That would better justify/explain the content of Figures 5, 6, etc.
Author Response
For the item, "the choice of the Theory of Impacts to complement CAPE needs to be more motivated and explained," we rearranged sections 1 and 2 and then specified the article's significant contributions in the last three paragraphs of the introduction. By doing so, we explained our motivation for this work. We also strengthened the conclusion to ensure the reader is left with this understanding.
For "the contribution of the paper needs to be more clearly stated in the introductory part," we rearranged sections 1 and 2 and then specified the article's significant contributions in the last two paragraphs of the introduction.
For "how to use the ToI-CAPE model in other settings should be delt with in a specific section, providing guidelines if possible. Alternatively, those could be introduced in section 4. That would justify/explain better the content of Figures, 5, 6 etc.", in Section 4, for each component of CAPE, we've added "important considerations" that deals with how to use the model in other research settings. We provide some basic recommendations to assist.
In addition, we cleaned up some of the text for clarity. We changed the title so that it more accurately reflects the content and slightly altered the abstract so that it fits better.
Reviewer 2 Report
This paper presents a new model for evaluating computer science education interventions, which is based on the CAPE (Capacity, Access, Participation, and Experience) framework and is focused on equity.
Although the idea of proposing an equity-focused model for evaluating computer science education interventions is somewhat novel and useful, the paper has several flaws that make its contribution very weak.
Firstly, the model seems to have been tailored to a specific intervention, and thus it is questionable whether it can be used for other interventions and how useful it would be for them.
Secondly, the paper does not provide robust evidence of the usefulness of the proposed model. The authors should provide evidence that the model is useful and offers significant advantages compared with other approaches. The authors should also clarify what benefits the model can bring to researchers in the computing education field. Why should a team of researchers use the proposed model?
Taking the previous issues into account, it becomes clear to me that the contribution of the paper is not very useful for the research community.
Author Response
- "Firstly, the model seems to have been developed tailored for a specific intervention and thus, it is questionable whether it can be used for other interventions, as well as its usefulness for such interventions."
- We've addressed this in our limitations and we invite others to use it to build upon it. This is an early phase work.
- "Secondly, the paper does not provide robust evidence of the usefulness of the proposed model. The authors should provide evidence that the model is useful and offers significant advantages compared with other approaches. The authors should also clarify what benefits the model can bring to researchers in the computing education field. Why should a team of researchers use the proposed model?"
- We provided additional framing to better describe the article's contribution and who can benefit from it. We were unable to find "other approaches" for a theory of impacts (though admittedly they may exist).
Round 2
Reviewer 2 Report
This paper presents a new model for evaluating computer science education interventions, which is based on the CAPE (Capacity, Access, Participation, and Experience) framework and is focused on equity.
In my first review, I pointed out two major flaws of the article:
1) The presented model seems to have been tailored to a specific intervention, and thus it is questionable whether it can be used for other interventions and how useful it would be for them.
2) The paper does not provide robust evidence of the usefulness of the proposed model. The authors should provide evidence that the model is useful and offers significant advantages compared with other approaches. The authors should also clarify what benefits the model can bring to researchers in the computing education field.
Both flaws still remain in the new version of the manuscript. Therefore, I still think that the contribution of the paper is too weak.
Author Response
Thank you for your comments. When we revised our paper based on the comments, we centered our changes on the meta-reviewer/editor's feedback, which highlighted different criteria. Therefore, we did not address Reviewer 2's concerns.
The two primary concerns of Reviewer 2 are:
"1) The presented model seems to have been developed tailored for a specific intervention and thus, it is questionable whether it can be used for other interventions, as well as its usefulness for such interventions.
2) The paper does not provide robust evidence on the usefulness of the proposed model. Authors should provide evidence proving that the model is useful and provide significant advantages compared with other approaches. Authors should also clarify what benefits the model can bring to researchers in the computing education field."
For 1), the model was not, in fact, tailored to a specific intervention. We developed it for use within one intervention; however, we abstracted the model and its description so that it can be applied to any intervention where research is needed and hypotheses need to be formed. Therefore, the model itself is generic to any intervention and, in this way, is similar to other logic-model tools that enable researchers to lay out the investigation of their intervention in clear and concrete steps.
For 2), in our paper we provide early evidence that the model is useful based on one intervention. We are unable to survey others to see whether it was useful, since this work has yet to be published or used elsewhere. As a tool, we believe it will be useful. We will be using the model in another upcoming intervention, and at that time we will be able to share further evidence of its utility. We agree that it would be ideal to have significantly more evidence; however, we are also aware of the urgent need for additional models in K-12 computer science education. For example, the CAPE framework itself is only published on a website and in a two-page article, yet it is quickly being adopted in the CS education research community because the field is greatly in need of such models. (Regarding the last sentence of this comment, we did in fact clarify the benefits that this model brings to researchers in the computing education field.)
We appreciate your comments and feedback. We believe this new model is ready for investigation by others and therefore support its publication as is.