Does Increased Choice over Learning Topic Improve the Effectiveness of Automated Feedback for Educators?
Abstract
1. Introduction
- Does providing instructors choice over feedback impact their engagement with the feedback, their perception of the feedback, or their teaching practice?
- Does choice over feedback for instructors impact their students’ outcomes?
- How do treatment effects vary by instructor demographics and whether the instructor engages with self-directed training beyond automated feedback (i.e., training modules, teaching simulations)?
2. Related Work
2.1. Educator Agency and Choice
2.2. Empirical Studies of Educator Choice in Professional Learning
2.3. Automated Feedback on Instruction
2.4. Instructors Versus K-12 Educators
3. Study Background
3.1. Participants
3.2. Automated Feedback to Instructors
4. Randomized Controlled Trial
4.1. Emails About Feedback
4.2. Measures of Outcomes
- Ever Viewed: Whether instructors ever viewed their feedback before their subsequent session (binary). We also tracked the number of times they viewed the feedback, but the results were similar to Ever Viewed—hence, we use this binary measure.
- Seconds Spent: Total seconds spent viewing feedback across weeks.
- Net Promoter Score (NPS): 1–10 rating of the likelihood of recommending the feedback tool.
- Overall Perception: Aggregated items from the final instructor survey measuring perceptions of feedback utility and satisfaction. As explained in the preregistration, a factor analysis showed a single dominant factor explaining most variance; hence, we mean-aggregated the items.
- Week 1 Talk Move Rate: The standardized talk move rate within the first session, across all talk moves. This measure captures discourse practices after treatment assignment but before instructors received any feedback.
- Week 2+ Talk Move Rate: The standardized talk move rate within the second through sixth sessions, across all talk moves. This measure captures discourse practices after instructors received their first feedback. To improve precision, models that use this outcome control for talk move rates in the first session.
- Number of Sessions Attended: Number of sessions attended by students between Week 2 and Week 6. We excluded attendance at the first session—while the first session was after random assignment, students did not interact with instructors until showing up (or not) for this session; thus, attendance at the first session could not have been affected by treatment.
- Number of Assignments Completed: The total number of assignments completed by summing completion rates across the six course assignments (usually one assignment per week).
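The Week 2+ outcome model described above can be summarized in a simple regression form; the notation here is our illustrative shorthand, not the paper's exact specification:

```latex
Y^{(2+)}_{is} = \alpha + \beta\,\mathrm{Treat}_{i} + \gamma\,\bar{Y}^{(1)}_{i} + \varepsilon_{is}
```

where $Y^{(2+)}_{is}$ is instructor $i$'s standardized talk move rate in session $s$ (Weeks 2–6), $\mathrm{Treat}_{i}$ is the random assignment indicator, and $\bar{Y}^{(1)}_{i}$ is the Week 1 baseline rate included to improve precision. The Week 1 outcome model is the same regression without the baseline term.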
4.3. Variables for Subgroup Analysis
- Training Modules About Talk Moves: The four optional training modules included interactive videos and reflection questions related to each of the three talk moves (Getting Ideas On the Table, Building On Ideas, Orienting Students to One Another), as well as a module synthesizing all three. We used a binary measure indicating whether the instructor completed any of these modules. (Using the number of completed modules did not change our results.) Overall, 43% of instructors completed at least one training module.
- GPTeach: GPTeach (Markel et al., 2023) is an LLM-powered chat-based training tool that allows instructors to practice engaging with simulated students. Created via GPT-3, the simulated students had diverse backgrounds and familiarity with course material (programming), and the instructor was asked to facilitate office hours with these simulated students. We used a binary measure indicating whether the instructor accessed GPTeach; however, using the number of times they accessed GPTeach did not change our results. Overall, 23% of instructors accessed GPTeach at least once during the course.
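One way to read the subgroup tables in Appendix D (and the training-module and GPTeach tables) is as a saturated cell-means regression; this sketch uses our own symbols and is not the authors' verbatim specification:

```latex
Y_{i} = \alpha + \beta_{a}\,\mathbb{1}[T_{i}{=}0,\,G_{i}{=}1]
      + \beta_{b}\,\mathbb{1}[T_{i}{=}1,\,G_{i}{=}0]
      + \beta_{c}\,\mathbb{1}[T_{i}{=}1,\,G_{i}{=}1] + \varepsilon_{i}
```

where $T_{i}$ is treatment and $G_{i}$ the subgroup indicator, with the $T_{i}{=}0,\,G_{i}{=}0$ cell omitted as the base. Rows (a), (b), and (c) then report $\beta_{a}$, $\beta_{b}$, and $\beta_{c}$; the contrast (c)-(a) is $\beta_{c}-\beta_{a}$ (the treatment effect within the subgroup) and (c)-(b) is $\beta_{c}-\beta_{b}$ (the subgroup difference among the treated).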
4.4. Validating Randomization
4.5. Regression Analysis
5. Results
6. Discussion
6.1. Summary and Theoretical Implications
6.2. Practical Implications
6.3. Limitations
6.4. Future Directions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Experimental Feedback
Appendix B. Final Survey on AI Feedback
- How often did you engage with the AI teaching feedback?
- (a) Not at all.
- (b) Once or twice.
- (c) Regularly.
- Could you tell us why you didn’t engage with the AI teaching feedback? Select all that apply.
- (a) I didn’t know about it.
- (b) It wasn’t available to me.
- (c) I didn’t have the time.
- (d) I didn’t think it would be helpful.
- (e) Other (please explain).
- Could you tell us why you engaged with the AI teaching feedback only once or twice? Select all that apply.
- (a) I only learned about it later in the course.
- (b) It wasn’t available to me after each session.
- (c) I didn’t have the time.
- (d) I didn’t find it helpful.
- (e) Other (please explain).
- To what extent do you agree with the following about the AI teaching feedback? (Strongly disagree, Somewhat disagree, Neither agree nor disagree, Somewhat agree, Strongly agree)
- (a) The feedback has helped me become a better teacher.
- (b) The feedback made me realize things about my teaching that I otherwise would not have.
- (c) The feedback was difficult to understand.
- (d) The feedback made me pay more attention to the teaching strategies I was using.
- (e) I tried new things in my teaching because of this feedback.
- (f) The feedback areas (e.g., getting ideas on the table, building on student ideas, orienting students to one another) represented important aspects of good teaching.
- (g) The feedback allowed me to improve my teaching around areas that were important to me.
- (h) The feedback felt appropriate to my teaching strengths and weaknesses.
- (i) The feedback aligned with my priorities for growth in my teaching.
- How likely are you to recommend AI teaching feedback to other educators? (Scale of 1–10)
- How helpful was each of the following types of feedback?
- (a) Getting Ideas on the Table.
- (b) Building on Student Ideas.
- (c) Orienting Students to One Another.
- (d) Experimental (ChatGPT) Feedback.
- Please rank the different elements of feedback in terms of helpfulness.
- (a) Number of talk move moments identified.
- (b) Chart to compare the number of moments to previous weeks.
- (c) Comparison of the number of moments to class average.
- (d) Talk time percentage.
- (e) Tips to improve the talk move.
- (f) Examples from your transcript demonstrating the talk move.
- (g) Selecting moments when curiosity was exhibited.
- (h) Answering the reflection question.
- (i) Seeing other section leaders’ answers to the reflection question.
- (j) Resources to improve the talk move.
- (k) Other (please explain).
- Do you have any suggestions for how we could improve this feedback tool?
- Any other thoughts/comments?
Appendix C. Talk Move Rates for Weeks 2+ with No Week 1 Controls
Week 2+ Talk Move Rate | |
---|---|
Treatment | 0.017 (0.047) |
Control Mean | −0.014 |
R2 | 0.023 |
Observations | 7992 |
Appendix D. Heterogeneity by Instructor Demographics
Engagement | Perception | Practice | Students | |||||
---|---|---|---|---|---|---|---|---|
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | |
Ever Viewed | Seconds Spent | NPS | Overall Perception | Wk 1 Talk Move Rate | Wk 2+ Talk Move Rate | Num. Sessions Attended | Num. Assn. Completed | |
(a) Treatment = 0 # Female Instr. = 1 | −0.001 (0.039) | 79.820 (95.513) | 0.392 (0.566) | 0.232 (0.189) | 0.040 (0.114) | 0.012 (0.074) | 0.092 (0.074) | 0.010 (0.081) |
(b) Treatment = 1 # Female Instr. = 0 | 0.004 (0.031) | 52.090 (86.134) | 0.140 (0.489) | 0.172 (0.151) | 0.104 (0.086) | 0.043 (0.057) | 0.147 * (0.058) | 0.059 (0.063) |
(c) Treatment = 1 # Female Instr. = 1 | −0.069 (0.046) | 28.036 (100.069) | 0.136 (0.586) | 0.115 (0.198) | 0.079 (0.105) | −0.029 (0.072) | 0.127+ (0.074) | 0.041 (0.082) |
(c)-(a) | −0.068 | −51.784 | −0.256 | −0.117 | 0.039 | −0.041 | 0.035 | 0.031 |
(c)-(b) | −0.073 | −24.054 | −0.004 | −0.057 | −0.025 | −0.072 | −0.020 | −0.018 |
Control Mean | 0.882 | 462.026 | 5.903 | 3.432 | −0.014 | −0.014 | 3.560 | 3.435 |
R2 | 0.068 | 0.089 | 0.132 | 0.120 | 0.029 | 0.023 | 0.043 | 0.013 |
Observations | 567 | 567 | 193 | 193 | 1611 | 7992 | 8254 | 8254 |
Engagement | Perception | Practice | Students | |||||
---|---|---|---|---|---|---|---|---|
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | |
Ever Viewed | Seconds Spent | NPS | Overall Perception | Wk 1 Talk Move Rate | Wk 2+ Talk Move Rate | Num. Sessions Attended | Num. Assn. Completed | |
(a) Treatment = 0 # Returning Instr. = 1 | −0.067 (0.050) | −319.437 ** (116.275) | −0.792 (0.718) | −0.239 (0.195) | −0.011 (0.112) | −0.069 (0.073) | 0.047 (0.080) | −0.022 (0.087) |
(b) Treatment = 1 # Returning Instr. = 0 | −0.049 (0.030) | 23.512 (89.590) | −0.047 (0.444) | 0.086 (0.148) | 0.099 (0.082) | −0.005 (0.056) | 0.111 * (0.056) | 0.064 (0.061) |
(c) Treatment = 1 # Returning Instr. = 1 | 0.003 (0.040) | −310.959 ** (91.156) | −0.542 (0.610) | −0.162 (0.191) | 0.031 (0.120) | 0.014 (0.073) | 0.159 * (0.077) | −0.001 (0.085) |
(c)-(a) | 0.07 | 8.478 | 0.25 | 0.077 | 0.042 | 0.083 | 0.112 | 0.021 |
(c)-(b) | 0.052 | −334.471 *** | −0.495 | −0.248 | −0.068 | 0.019 | 0.048 | −0.065 |
Control Mean | 0.896 | 528.682 | 6.405 | 3.616 | −0.003 | −0.003 | 3.486 | 3.391 |
R2 | 0.071 | 0.088 | 0.131 | 0.114 | 0.029 | 0.023 | 0.042 | 0.012 |
Observations | 567 | 567 | 193 | 193 | 1611 | 7992 | 8254 | 8254 |
Engagement | Perception | Practice | Students | |||||
---|---|---|---|---|---|---|---|---|
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | |
Ever Viewed | Seconds Spent | NPS | Overall Perception | Wk 1 Talk Move Rate | Wk 2+ Talk Move Rate | Num. Sessions Attended | Num. Assn. Completed |
(a) Treatment = 0 # Instr. in US = 1 | −0.101 * (0.041) | −95.381 (82.656) | −1.444 * (0.650) | −0.363+ (0.212) | 0.222 * (0.110) | 0.114 (0.070) | −0.024 (0.070) | −0.095 (0.077) |
(b) Treatment = 1 # Instr. in US = 0 | 0.007 (0.029) | 171.193+ (90.395) | 0.009 (0.460) | 0.094 (0.153) | 0.010 (0.092) | −0.006 (0.067) | 0.064 (0.066) | 0.035 (0.072) |
(c) Treatment = 1 # Instr. in US = 1 | −0.148 ** (0.043) | −239.925 ** (83.171) | −1.408 * (0.692) | −0.298 (0.223) | 0.390 ** (0.108) | 0.156 * (0.069) | 0.143 * (0.070) | −0.026 (0.076) |
(c)-(a) | −0.047 | −144.544 | 0.036 | 0.065 | 0.168 | 0.042 | 0.167 * | 0.069 |
(c)-(b) | −0.155 *** | −411.118 *** | −1.417 * | −0.392+ | 0.38 ** | 0.162 * | 0.079 | −0.061 |
Control Mean | 0.917 | 469.458 | 6.71 | 3.683 | −0.087 | −0.087 | 3.471 | 3.417 |
R2 | 0.067 | 0.096 | 0.131 | 0.115 | 0.031 | 0.023 | 0.043 | 0.013 |
Observations | 567 | 567 | 193 | 193 | 1611 | 7992 | 8254 | 8254 |
1 | Students were assigned to instructors pre-randomization, using the following process: (1) instructors selected their preferred time slots; (2) students chose available time slots; (3) within each time slot, students were assigned to sections randomly, with one exception—instructors were sorted by age, so older students were assigned to older instructors, and underage students (<18) were never paired with adult (18+) instructors. Our study focuses only on adult instructors and their students. |
2 | This was the best-performing, cost-effective GPT model available at the time of the study (spring 2023). |
3 | We had thought that the course would only be 5 weeks long; hence, the choice interface only had Week 5 listed for the third box. When we realized the course would be 6 weeks long, we applied their choices for Week 5 to Week 6 as well. |
References
- Anderson, L. (2010). Embedded, emboldened, and (net) working for change: Support-seeking and teacher agency in urban, high-needs schools. Harvard Educational Review, 80(4), 541–573. [Google Scholar] [CrossRef]
- Biesta, G., Priestley, M., & Robinson, S. (2015). The role of beliefs in teacher agency. Teachers and Teaching, 21(6), 624–640. [Google Scholar] [CrossRef]
- Bill & Melinda Gates Foundation. (2014). Teachers know best: Teachers’ views on professional development. ERIC Clearinghouse. [Google Scholar]
- Brod, G., Kucirkova, N., Shepherd, J., Jolles, D., & Molenaar, I. (2023). Agency in educational technology: Interdisciplinary perspectives and implications for learning design. Educational Psychology Review, 35(1), 25. [Google Scholar] [CrossRef]
- Brodie, K. (2021). Teacher agency in professional learning communities. Professional Development in Education, 47(4), 560–573. [Google Scholar] [CrossRef]
- Calvert, L. (2016). The power of teacher agency. The Learning Professional, 37(2), 51. [Google Scholar]
- Carter Andrews, D. J., & Richmond, G. (2019). Professional development for equity: What constitutes powerful professional learning? (Vol. 70, No. 5) SAGE Publications. [Google Scholar]
- Chadha, D. (2013). Reconceptualising and reframing graduate teaching assistant (GTA) provision for a research-intensive institution. Teaching in Higher Education, 18(2), 205–217. [Google Scholar] [CrossRef]
- Chen, X., Mitrovic, A., & Mathews, M. (2019). Investigating the effect of agency on learning from worked examples, erroneous examples and problem solving. International Journal of Artificial Intelligence in Education, 29(3), 396–424. [Google Scholar] [CrossRef]
- Clarke, D., & Hollingsworth, H. (2002). Elaborating a model of teacher professional growth. Teaching and Teacher Education, 18(8), 947–967. [Google Scholar] [CrossRef]
- Darling-Hammond, L., Wei, R. C., Andree, A., Richardson, N., & Orphanos, S. (2009). Professional learning in the learning profession: A status report on teacher development in the United States and abroad. National Staff Development Council. [Google Scholar]
- Deci, E. L., & Ryan, R. M. (2013). Intrinsic motivation and self-determination in human behavior. Springer Science & Business Media. [Google Scholar]
- Demszky, D., & Liu, J. (2023, July 20–22). M-powering teachers: Natural language processing powered feedback improves 1:1 instruction and student outcomes. Tenth ACM Conference on Learning @ Scale (L@S ’23), Copenhagen, Denmark. [Google Scholar]
- Demszky, D., Liu, J., Hill, H. C., Jurafsky, D., & Piech, C. (2023). Can automated feedback improve teachers’ uptake of student ideas? Evidence from a randomized controlled trial in a large-scale online course. Educational Evaluation and Policy Analysis, 46(3), 483–505. [Google Scholar] [CrossRef]
- Demszky, D., Liu, J., Hill, H. C., Sanghi, S., & Chung, A. (2024). Automated feedback improves teachers’ questioning quality in brick-and-mortar classrooms: Opportunities for further enhancement. Computers & Education, 227, 105183. [Google Scholar]
- Demszky, D., Liu, J., Mancenido, Z., Cohen, J., Hill, H., Jurafsky, D., & Hashimoto, T. (2021, August 1–6). Measuring conversational uptake: A case study on student-teacher interactions. 59th Annual Meeting of the Association for Computational Linguistics (pp. 1638–1653), Online. [Google Scholar]
- Diaz-Maggioli, G. (2004). Teacher-centered professional development. ASCD. [Google Scholar]
- Doan, S., Fernandez, M.-P., Grant, D., Kaufman, J. H., Setodji, C. M., Snoke, J., Strawn, M., & Young, C. J. (2021). American instructional resources surveys: 2021 technical documentation and survey results. Research Report RR-A134-10. RAND Corporation. [Google Scholar]
- Fischer, C., Fishman, B., & Schoenebeck, S. Y. (2019). New contexts for professional learning: Analyzing high school science teachers’ engagement on Twitter. AERA Open, 5(4), 2332858419894252. [Google Scholar] [CrossRef]
- Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. [Google Scholar] [CrossRef]
- Hill, H. C. (2009). Fixing teacher professional development. Phi Delta Kappan, 90(7), 470–476. [Google Scholar] [CrossRef]
- Hübner, N., Fischer, C., Fishman, B., Lawrenz, F., & Eisenkraft, A. (2021). One program fits all? Patterns and outcomes of professional development during a large-scale reform in a high-stakes science curriculum. AERA Open, 7, 23328584211028601. [Google Scholar] [CrossRef]
- Jacobs, J., Scornavacco, K., Harty, C., Suresh, A., Lai, V., & Sumner, T. (2022). Promoting rich discussions in mathematics classrooms: Using personalized, automated feedback to support reflection and instructional change. Teaching and Teacher Education, 112, 103631. [Google Scholar] [CrossRef]
- Jensen, E., Dale, M., Donnelly, P. J., Stone, C., Kelly, S., Godley, A., & D’Mello, S. K. (2020, April 25–30). Toward automated feedback on teacher discourse to enhance teacher learning. 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–13), Honolulu, HI, USA. [Google Scholar]
- Kelly, S., Olney, A. M., Donnelly, P., Nystrand, M., & D’Mello, S. K. (2018). Automatically measuring question authenticity in real-world classrooms. Educational Researcher, 47(7), 451–464. [Google Scholar] [CrossRef]
- Kennedy, M. (2016a). How does professional development improve teaching? Review of Educational Research, 86(4), 945–980. [Google Scholar] [CrossRef]
- Kennedy, M. (2016b). Parsing the practice of teaching. Journal of Teacher Education, 67(1), 6–17. [Google Scholar] [CrossRef]
- Knowles, M. S. (1984). The adult learner: A neglected species. Gulf Publishing Company. [Google Scholar]
- Kraft, M. A., Blazar, D., & Hogan, D. (2018). The effect of teacher coaching on instruction and achievement: A meta-analysis of the causal evidence. Review of Educational Research, 88(4), 547–588. [Google Scholar] [CrossRef]
- Kupor, A., Morgan, C., & Demszky, D. (2023). Measuring five accountable talk moves to improve instruction at scale. arXiv, arXiv:2311.10749. [Google Scholar] [CrossRef]
- Lieberman, A., & Pointer Mace, D. H. (2008). Teacher learning: The key to educational reform. Journal of Teacher Education, 59(3), 226–234. [Google Scholar] [CrossRef]
- Liu, Y. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv, arXiv:1907.11692. [Google Scholar]
- Lynch, K., Hill, H. C., Gonzalez, K. E., & Pollard, C. (2019). Strengthening the research base that informs stem instructional improvement efforts: A meta-analysis. Educational Evaluation and Policy Analysis, 41(3), 260–293. [Google Scholar] [CrossRef]
- Markel, J. M., Opferman, S. G., Landay, J. A., & Piech, C. (2023, July 20–22). GPTeach: Interactive ta training with GPT based students. Tenth ACM Conference on Learning @ Scale (L@S ’23), Copenhagen, Denmark. [Google Scholar]
- Martin, L. E., Kragler, S., Quatroche, D., & Bauserman, K. (2019). Transforming schools: The power of teachers’ input in professional development. Journal of Educational Research and Practice, 9(1), 179–188. [Google Scholar] [CrossRef]
- Merriam, S. B. (2001). Andragogy and self-directed learning: Pillars of adult learning theory. New Directions for Adult and Continuing Education, 2001(89), 3. [Google Scholar] [CrossRef]
- Mohammad Nezhad, P., & Stolz, S. A. (2024). Unveiling teachers’ professional agency and decision-making in professional learning: The illusion of choice. Professional Development in Education, 1–21. [Google Scholar] [CrossRef]
- Molla, T., & Nolan, A. (2020). Teacher agency and professional practice. Teachers and Teaching, 26(1), 67–87. [Google Scholar] [CrossRef]
- Morales, M. P. E. (2016). Participatory action research (PAR) cum action research (AR) in teacher professional development: A literature review. International Journal of Research in Education and Science, 2(1), 156–165. [Google Scholar] [CrossRef]
- O’Brien, E., & Reale, J. (2021). Supporting learner agency using the pedagogy of choice. Unleashing the Power of Learner Agency, 73–82. [Google Scholar] [CrossRef]
- O’Connor, C., Michaels, S., & Chapin, S. (2015). “Scaling down” to explore the role of talk in learning: From district intervention to controlled classroom study. In Socializing intelligence through academic talk and dialogue (pp. 111–126). American Educational Research Association (AERA). [Google Scholar]
- Philpott, C., & Oates, C. (2017). Teacher agency and professional learning communities; what can learning rounds in Scotland teach us? Professional Development in Education, 43(3), 318–333. [Google Scholar] [CrossRef]
- Priestley, M., Biesta, G., Philippou, S., & Robinson, S. (2015). The teacher and the curriculum: Exploring teacher agency. In The SAGE handbook of curriculum, pedagogy and assessment (pp. 187–201). SAGE Publications Ltd. [Google Scholar]
- Schön, D. A. (2017). The reflective practitioner: How professionals think in action. Routledge. [Google Scholar]
- Slaughter, J., Rodgers, T., & Henninger, C. (2023). An evidence-based approach to developing faculty-wide training for graduate teaching assistants. Journal of University Teaching and Learning Practice, 20(4), 1–20. [Google Scholar] [CrossRef]
- Smith, K. (2017). Teachers as self-directed learners. Springer. [Google Scholar]
- Stanley, A. M. (2011). Professional development within collaborative teacher study groups: Pitfalls and promises. Arts Education Policy Review, 112(2), 71–78. [Google Scholar] [CrossRef]
- Stoll, L., Bolam, R., McMahon, A., Wallace, M., & Thomas, S. (2006). Professional learning communities: A review of the literature. Journal of Educational Change, 7(4), 221–258. [Google Scholar] [CrossRef]
- Suresh, A., Jacobs, J., Lai, V., Tan, C., Ward, W., Martin, J. H., & Sumner, T. (2021). Using transformers to provide teachers with personalized feedback on their classroom discourse: The talkmoves application. arXiv, arXiv:2105.07949. [Google Scholar] [CrossRef]
- Vähäsantanen, K., Hökkä, P., Paloniemi, S., Herranen, S., & Eteläpelto, A. (2017). Professional learning and agency in an identity coaching programme. Professional Development in Education, 43(4), 514–536. [Google Scholar] [CrossRef]
- Wang, Z., Miller, K., & Cortina, K. (2013). Using the LENA in teacher training: Promoting student involvement through automated feedback. Unterrichtswissenschaft, 41(4), 290–302. [Google Scholar]
- Weisenfeld, G. G., Hodges, K. S., & Copeman Petig, A. (2023). Qualifications and supports for teaching teams in state-funded preschool in the United States. International Journal of Child Care and Education Policy, 17(1), 18. [Google Scholar] [CrossRef]
- Zeichner, K. (2019). The importance of teacher agency and expertise in education reform and policymaking. Revista Portuguesa de Educação, 32(1), 5–15. [Google Scholar] [CrossRef]
- Zuo, G., Doan, S., & Kaufman, J. H. (2023). How do teachers spend professional learning time, and does it connect to classroom practice? Findings from the 2022 American instructional resources survey. American Educator Panels, Research Report RR-A134-18. RAND Corporation.
Variable | Mean/% | SD |
---|---|---|
A. Instructor Characteristics | ||
Number of instructors | 583 | |
Female | 31.7% | |
Age | 30.153 | 12.076 |
First time Code in Place instructor | 74.6% | |
In United States | 47.9% | |
In India | 14.1% | |
In Great Britain | 3.9% | |
In Canada | 3.6% | |
In Bangladesh | 3.4% | |
In other country | 27.1% | |
B. Student Characteristics | ||
Number of students | 8254 | |
Female | 51.9% | |
Age | 31.357 | 10.091 |
In United States | 27.5% | |
In Bangladesh | 8.0% | |
In India | 7.9% | |
In China | 5.6% | |
In Canada | 4.3% | |
In Great Britain | 3.9% | |
In Türkiye | 3.2% |
In other country | 39.6% |
Control Mean | Treatment Mean | p Value | N | |
---|---|---|---|---|
Female | 0.31 | 0.32 | 0.75 | 583 |
In United States | 0.49 | 0.47 | 0.55 | 583 |
Age | 30.61 | 29.72 | 0.37 | 583 |
Returning Instructor | 0.25 | 0.25 | 0.98 | 583 |
Number of Transcripts | 5.75 | 5.74 | 0.81 | 583 |
Proportion of Female Students | 0.52 | 0.52 | 0.681 | 567 |
Proportion of Students in United States | 0.31 | 0.26 | 0.004 | 567 |
Mean Student Age | 31.39 | 30.91 | 0.535 | 567 |
Engagement | Perception | Practice | ||||
---|---|---|---|---|---|---|
(1) | (2) | (3) | (4) | (5) | (6) | |
Ever Viewed | Seconds Spent | NPS | Overall Perception | Wk 1 Talk Move Rate (std) | Wk 2+ Talk Move Rate (std) | |
Treatment | −0.019 | 19.713 | 0.018 | 0.084 | 0.084 | −0.001 |
(0.027) | (69.283) | (0.393) | (0.126) | (0.071) | (0.047) | |
Control Mean | 0.876 | 483.979 | 6.03 | 3.505 | −0.014 | −0.014 |
R2 | 0.065 | 0.088 | 0.131 | 0.114 | 0.029 | 0.032 |
Observations | 567 | 567 | 193 | 193 | 1611 | 7686 |
(1) | (2) | |
---|---|---|
Num. Sessions Attended | Num. Assignments Completed | |
Treatment | 0.112 * | 0.050 |
(0.048) | (0.052) | |
Control Mean | 3.575 | 3.434 |
R2 | 0.042 | 0.013 |
Observations | 8254 | 8254 |
Engagement | Perception | Practice | Students | |||||
---|---|---|---|---|---|---|---|---|
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | |
Ever Viewed | Seconds Spent | NPS | Overall Perception | Wk 1 Talk Move Rate | Wk 2+ Talk Move Rate | Num. Sessions Attended | Num. Assn. Completed | |
(a) Treatment = 0 # Compl. Module = 1 | 0.078 * | 390.388 ** | 1.322 * | 0.401 * | 0.158 | 0.125+ | 0.113 | 0.058 |
(0.037) | (94.403) | (0.602) | (0.191) | (0.105) | (0.069) | (0.070) | (0.076) | |
(b) Treatment = 1 # Compl. Module = 0 | −0.007 | 143.350 * | 0.740 | 0.261 | 0.114 | 0.018 | 0.081 | −0.005 |
(0.040) | (66.800) | (0.733) | (0.224) | (0.092) | (0.060) | (0.064) | (0.071) | |
(c) Treatment = 1 # Compl. Module = 1 | 0.041 | 240.703 ** | 0.796 | 0.347+ | 0.198 * | 0.135+ | 0.260 ** | 0.176 * |
(0.037) | (90.675) | (0.593) | (0.193) | (0.099) | (0.070) | (0.069) | (0.076) | |
(c)-(a) | −0.037 | −149.685 | −0.526 | −0.054 | 0.04 | 0.01 | 0.147 * | 0.118 |
(c)-(b) | 0.048 | 97.353 | 0.056 | 0.086 | 0.084 | 0.117 | 0.179 ** | 0.181 * |
Control Mean | 0.827 | 273.056 | 5.410 | 3.311 | −0.070 | −0.070 | 3.468 | 3.378 |
R2 | 0.074 | 0.113 | 0.155 | 0.138 | 0.033 | 0.026 | 0.043 | 0.013 |
Observations | 567 | 567 | 193 | 193 | 1611 | 7992 | 8254 | 8254 |
Engagement | Perception | Practice | Students | |||||
---|---|---|---|---|---|---|---|---|
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | |
Ever Viewed | Seconds Spent | NPS | Overall Perception | Wk 1 Talk Move Rate | Wk 2+ Talk Move Rate | Num. Sessions Attended | Num. Assn. Completed | |
(a) Treatment = 0 # Used GPTeach = 1 | 0.142 ** | 380.527 ** | 0.158 | 0.036 | 0.069 | 0.222 ** | 0.049 | 0.091 |
(0.026) | (116.504) | (0.580) | (0.184) | (0.117) | (0.075) | (0.084) | (0.091) | |
(b) Treatment = 1 # Used GPTeach = 0 | 0.000 | 34.228 | −0.000 | 0.069 | 0.114 | 0.069 | 0.080 | 0.025 |
(0.033) | (72.142) | (0.484) | (0.153) | (0.083) | (0.052) | (0.054) | (0.060) | |
(c) Treatment = 1 # Used GPTeach = 1 | 0.060 | 353.411 * | 0.226 | 0.154 | 0.057 | 0.067 | 0.265 ** | 0.226 ** |
(0.039) | (147.144) | (0.617) | (0.186) | (0.116) | (0.090) | (0.080) | (0.087) |
(c)-(a) | −0.082 * | −27.116 | 0.068 | 0.118 | −0.012 | −0.155 | 0.216 * | 0.135 |
(c)-(b) | 0.06 | 319.183 * | 0.226 | 0.085 | −0.057 | −0.002 | 0.185 * | 0.201 * |
Control Mean | 0.853 | 392.078 | 5.918 | 3.470 | −0.050 | −0.050 | 3.568 | 3.413 |
R2 | 0.084 | 0.116 | 0.132 | 0.116 | 0.030 | 0.027 | 0.043 | 0.013 |
Observations | 567 | 567 | 193 | 193 | 1611 | 7992 | 8254 | 8254 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Demszky, D.; Hill, H.C.; Taylor, E.; Kupor, A.; Varuvel Dennison, D.; Piech, C. Does Increased Choice over Learning Topic Improve the Effectiveness of Automated Feedback for Educators? Educ. Sci. 2025, 15, 1162. https://doi.org/10.3390/educsci15091162