Exploring Relationships Between Qualitative Student Evaluation Comments and Quantitative Instructor Ratings: A Structural Topic Modeling Framework
Abstract
1. Introduction
1.1. The Evolution of Student Evaluations of Teaching
1.2. The Rise of Qualitative Comment Analysis
1.3. Gender and Language in Student Comments
1.4. Topic Modeling in Educational Research
1.5. Research Questions
- What latent topics emerge from student comments, and how do they vary across course and instructor contexts?
- How do these topics relate to quantitative SET scores, and what does this reveal about students’ perceptions of effective teaching?
- Do students emphasize different themes when evaluating male and female instructors, after accounting for course and instructor characteristics?
- Are the relationships between comment content and SET scores consistent across instructor gender?
2. Materials and Methods
2.1. Sample
2.2. Measures
- Academic division of the course: Arts and Humanities (Humanities), Social Science, Science, Engineering and Applied Science (SEAS), First-Year Writing (FYW), First-Year Seminars (FRSM), and General Education (Gen Ed).
- Instructor rank: Professor, Associate Professor, Assistant Professor, Senior Non-Ladder Faculty (SrNonLadder), Non-Faculty of Arts and Sciences or SEAS (NonFAS), Visitor (Visiting faculty member), and Other (including lecturers, etc.).
- Instructor gender (self-reported).
- Instructor age (as a proxy for career stage).
- Instructor age squared.
- Natural logarithm of enrollment.
- Fraction of students taking the course as an elective (as a proxy for whether the course is a requirement).
- Year and term that the course was taught.
- Numeric SET score.
- Interaction between the numeric SET score and instructor gender.
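As a rough sketch of how these measures enter the model, the covariates above can be passed to the topic-prevalence formula of the `stm` R package used in this study. This is an illustrative reconstruction, not the authors' code; all column names (`division`, `rank`, `gender`, `age`, `enrollment`, `frac_elective`, `year_term`, `set_score`) are hypothetical.

```r
# Illustrative only: fitting an STM whose prevalence model includes the
# covariates listed in Section 2.2 (variable names are hypothetical).
library(stm)

processed <- textProcessor(comments$text, metadata = comments)
out <- prepDocuments(processed$documents, processed$vocab, processed$meta)

fit <- stm(
  documents  = out$documents,
  vocab      = out$vocab,
  K          = 11,  # the topic count ultimately selected in the paper
  prevalence = ~ division + rank + gender + age + I(age^2) +
                 log(enrollment) + frac_elective + factor(year_term) +
                 set_score + set_score:gender,
  data       = out$meta,
  init.type  = "Spectral"
)
```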
2.3. Method
2.3.1. Optimal Number of Topics
- If too few topics are specified, distinct themes may be merged into a single topic, resulting in broad or ambiguous classifications.
- If too many topics are specified, coherent ideas may become fragmented across multiple topics, introducing redundancy.
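The trade-off above is typically navigated by fitting candidate models across a grid of topic counts and comparing diagnostics. A minimal sketch using `stm::searchK` (which reports semantic coherence, exclusivity, and held-out likelihood for each K) might look as follows; the candidate range and the simplified covariate set are assumptions for illustration.

```r
# Hedged sketch: compare candidate topic counts with stm::searchK.
library(stm)

kresult <- searchK(
  documents  = out$documents,
  vocab      = out$vocab,
  K          = seq(5, 25, by = 2),     # illustrative candidate grid
  prevalence = ~ gender + set_score,   # simplified for the sketch
  data       = out$meta
)
plot(kresult)  # coherence/exclusivity/held-out likelihood across K
```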
2.3.2. Topic Labeling
- “Enthusiastic, engaging, funny, and fantastic lecturer; fun and enjoyable course.” (Human)
- “Their lectures are very engaging, and they have a great sense of humor.” (Human)
- “Lectures are engaging.” (Human)
- “Students deeply appreciate instructors who deliver highly engaging, humorous, and enthusiastic lectures that make the material enjoyable and keep them actively interested in class.” (GPT-4o)
- Engaging lectures; humorous lecturer.
- Boring and disorganized lectures.
- Explains complex concepts effectively.
- Poor teaching and course design; disorganized instructor.
- Caring and enthusiastic instructor.
- Lectures are interesting and relevant.
- Praised as best professor.
- Facilitates effective discussion; encourages critical thinking; creates welcoming environment.
- Provides constructive, timely feedback.
- Passionate about subject; committed teacher.
- Approachability and accessibility of instructor.
2.4. Estimating Effects of Covariates on Topic Prevalence
- Statistical significance: The effect is significantly different from zero at the 5% level (p < 0.05).
- Practical significance: The effect corresponds to a change in topic prevalence of at least 0.1 standard deviations (SD).
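These two screens can be sketched with the `stm` tooling named in the appendix notes: `estimateEffect` supplies the statistical test, and the practical-significance ratio divides each estimated difference by the standard deviation of the topic's prevalence. This is a simplified illustration, not the authors' modified, cluster-aware implementation.

```r
# Sketch of the two significance screens (simplified; no clustering).
library(stm)

eff <- estimateEffect(1:11 ~ gender + set_score, stmobj = fit,
                      metadata = out$meta, uncertainty = "Global")
summary(eff)  # p-values for the 5%-level statistical screen

# Practical screen: |estimated difference| / SD of topic prevalence >= 0.1.
theta_sd <- apply(fit$theta, 2, sd)  # fit$theta: document-topic proportions
```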
3. Results
3.1. Topic Prevalence
3.2. Correlation with SET Scores
- Engaging lectures; humorous lecturer
- Caring and enthusiastic instructor
- Lectures are interesting and relevant
- Praised as best professor
- Provides constructive, timely feedback
- Boring and disorganized lectures
- Poor teaching and course design; disorganized instructor
- Explains complex concepts effectively
- Facilitates effective discussion; encourages critical thinking; creates welcoming environment
- Passionate about subject; committed teacher
- Approachability and accessibility of instructor
3.3. Academic Division and Course Type
3.4. Variation by Instructor Gender
- Caring and enthusiastic instructor (1.4 percentage points, 0.25 SD)
- Facilitates effective discussion; encourages critical thinking; creates welcoming environment (1.2 percentage points, 0.16 SD)
- Provides constructive, timely feedback (1.1 percentage points, 0.16 SD)
- Lectures are interesting and relevant (1.3 percentage points, 0.28 SD)
- Engaging lectures; humorous lecturer (1.0 percentage point, 0.15 SD)
- Passionate about subject; committed teacher (0.5 percentage points, 0.19 SD)
4. Discussion
4.1. Topics Reflect Meaningful Aspects of Teaching
4.2. SET and Topic Prevalence: Alignment and Interpretation
4.3. Gender Differences in Comment Content
- Caring and enthusiastic instructor
- Facilitates effective discussion; encourages critical thinking; creates welcoming environment
- Provides constructive, timely feedback
- Engaging lectures; humorous lecturer
- Lectures are interesting and relevant
- Passionate about subject; committed teacher
4.4. Topic–SET Correlations Are Consistent Across Genders
4.5. Limitations and Future Directions
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Model Selection and Evaluation Metrics for STM
Appendix B. Topic Labeling of Student Written Evaluation Comments by Human Readers and GPT-4o
Topic | Reader 1 | Reader 2 | Reader 3 | GPT-4o | Label |
---|---|---|---|---|---|
1 | Enthusiastic, engaging, funny, and fantastic lecturer; fun and enjoyable course. | Their lectures are very engaging, and they have a great sense of humor. | Lectures are engaging. | Students deeply appreciate instructors who deliver highly engaging, humorous, and enthusiastic lectures that make the material enjoyable and keep them actively interested in class. | Engaging lectures; humorous lecturer. |
2 | Lectures are dry, boring, monotonous, disorganized, and difficult to follow. | They were boring, and lectures were disorganized and hard to follow. | Lectures are boring, difficult to follow, or disorganized. | Students frequently found lectures difficult to follow due to issues such as disorganization, unclear slides or notes, rapid or inconsistent pacing, and lack of engaging delivery. | Boring and disorganized lectures. |
3 | Great teacher who is a thorough, thoughtful, clear, organized, and effective lecturer. | They are very good at explaining complex concepts in ways students can understand. | Instructor breaks down complex concepts effectively. | Students consistently praise instructors for delivering clear, effective, and well-organized explanations of complex concepts, especially during lectures and review sessions. | Explains complex concepts effectively. |
4 | Course is not recommended; course head is disorganized and uninvolved, and the course falls short of expectations. | They are incredibly disorganized, no syllabus, and exam format changed with little notice. | Quality of instruction and/or course design is poor. | Students express dissatisfaction with poor teaching practices, disorganization, lack of engagement, unclear communication, and instructors who are perceived as dismissive, unapproachable, or disrespectful. | Poor teaching and course design; disorganized instructor. |
5 | Really wonderful professor who is friendly, kind, knowledgeable, and an incredible instructor. | They are kind, caring and enthusiastic about their subject matter. | Instructor is enthusiastic and caring. | Students overwhelmingly praise instructors who are deeply passionate about their subject, genuinely care for and support their students, and create an engaging, approachable, and inspiring learning environment. | Caring and enthusiastic instructor. |
6 | Lectures were interesting and informative. | Their lectures were interesting and relevant. | Lectures connect course concepts to topics of student interest. | Students praise instructors who are engaging and dynamic lecturers, capable of presenting complex or relevant topics in a clear, insightful, and interesting manner—often incorporating personal experiences, current events, or real-world applications to enhance understanding. | Lectures are interesting and relevant. |
7 | Outstanding professor, who is one of the best at [Institution], and is compassionate, dedicated, and exceptional at their job. | They are the best professor students have ever had at [Institution]. | Instructor is the best professor at [Institution]. | Students enthusiastically praise the instructor as one of the best or most memorable they have had, highlighting exceptional teaching ability, deep subject knowledge, kindness, dedication, and a lasting positive impact on their academic experience. | Praised as best professor. |
8 | Effective and welcoming classroom environment; excellent teacher, moderator, and facilitator who is warm and passionate. | They are great at facilitating meaningful discussion where everyone feels comfortable sharing ideas. | Instructor effectively facilitates group discussion and encourages critical thinking. | Instructors foster thoughtful, inclusive, and engaging classroom discussions by creating safe spaces for dialogue, effectively guiding conversations, and encouraging critical thinking and student participation. | Facilitates effective discussion; encourages critical thinking; creates welcoming environment. |
9 | Wonderful teacher who was helpful, supportive, accessible, and provided great feedback. | They were a great instructor and gave constructive feedback. | Instructor gave helpful and timely feedback throughout the course. | Students consistently praise the instructor for providing detailed, timely, and constructive feedback on writing assignments, being highly accessible and supportive throughout the writing and revision process. | Provides constructive, timely feedback. |
10 | Faculty member is passionate, enthusiastic, knowledgeable, and clearly put a lot of work into the course. | They are passionate about their subject matter and puts great effort into teaching it. | Instructor is committed to teaching and passionate about the course material. | Students frequently comment on instructors who demonstrate passion and deep knowledge of the subject matter, often balancing a strong personality or unique teaching style with genuine care for student learning and engagement. | Passionate about subject; committed teacher. |
11 | Knowledgeable, helpful, approachable, and engaging lecturer who is very patient with questions. | Their method of answering student’s questions made the majority feel they were willing to answer any question to make sure they understand the material, but for a subset their response style made students feel stupid for asking. | Instructor is either accessible and approachable or inaccessible and/or unapproachable for student questions. | Instructors who foster a supportive learning environment by being approachable, patient, and responsive to students’ questions—particularly through effective use of office hours and individualized guidance. | Approachability and accessibility of instructor. |
Appendix C. Protocol Used to Generate Summaries by GPT-4o
Prevalence | Instructor’s Teaching: Top 100 Comments for Topic 4 |
---|---|
0.47 | Professor LastName is the most amazing teacher. He is incredibly kind, caring, and truly cares about the success of students. |
0.46 | Professor LastName is a very friendly and kind teacher—he is obviously extremely knowledgeable about his field and eager to share his knowledge with students. |
0.46 | Professor LastName is fantastic. She demonstrates a genuine love for the subject matter and exudes a brilliance that is both friendly and extremely articulate. |
. . . | |
0.38 | Professor LastName is an excellent and caring teacher. He has a lot of passion for what he does, is very knowledgeable about his field, and extends that passion to all his students. He is also accessible and accommodating of what students are going through- just a lovely person overall. |
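The retrieval step behind this protocol, i.e., pulling the 100 comments with the highest prevalence for a given topic before passing them to GPT-4o, can be sketched with `stm::findThoughts`. The text column name is an assumption.

```r
# Hedged sketch: extract the top-100 highest-prevalence comments for a topic.
library(stm)

top <- findThoughts(fit, texts = out$meta$text, topics = 4, n = 100)
# top$docs[[1]] then holds the 100 comments with the largest topic-4
# proportions, ready to be formatted into the prompt shown above.
```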
Appendix D. Statistical and Practical Significance of Topic Prevalence: Marginal SET and Gender Effects
Topics marked with an asterisk (*) satisfy both criteria defined in Section 2.4: statistical significance at the 5% level and a change in topic prevalence of at least 0.1 SD.

Topic | Topic Description | Marginal Effect of SET | Stand. Error | t Value | Pr(>\|t\|) | Mean Prevalence | St. Dev. of Prevalence | Est. Diff./St. Dev. (Econ. Sign.)
---|---|---|---|---|---|---|---|---
1 | Engaging lectures; humorous lecturer. * | 0.029 | 0.000 | 61.377 | 0.000 | 0.115 | 0.069 | 0.424 |
2 | Boring and disorganized lectures. * | −0.058 | 0.001 | −100.724 | 0.000 | 0.100 | 0.098 | −0.591 |
3 | Explains complex concepts effectively. | −0.004 | 0.000 | −10.380 | 0.000 | 0.099 | 0.068 | −0.055 |
4 | Poor teaching and course design; disorganized instructor. * | −0.046 | 0.000 | −120.094 | 0.000 | 0.096 | 0.072 | −0.634 |
5 | Caring and enthusiastic instructor. * | 0.024 | 0.000 | 103.338 | 0.000 | 0.094 | 0.056 | 0.432 |
6 | Lectures are interesting and relevant. * | 0.010 | 0.000 | 52.305 | 0.000 | 0.087 | 0.045 | 0.224 |
7 | Praised as best professor. * | 0.027 | 0.000 | 69.460 | 0.000 | 0.086 | 0.075 | 0.369 |
8 | Facilitates effective discussion; encourages critical thinking; creates welcoming environment. | 0.007 | 0.000 | 27.055 | 0.000 | 0.085 | 0.075 | 0.091 |
9 | Provides constructive, timely feedback. * | 0.008 | 0.000 | 29.202 | 0.000 | 0.085 | 0.067 | 0.118 |
10 | Passionate about subject; committed teacher. | −0.002 | 0.000 | −14.931 | 0.000 | 0.077 | 0.025 | −0.087 |
11 | Approachability and accessibility of instructor. | 0.003 | 0.000 | 16.513 | 0.000 | 0.074 | 0.042 | 0.082 |
Topic | Topic Description | Est. Diff. (F-M) | Stand. Error | t Value | Pr(>\|t\|) | Mean Prevalence | St. Dev. of Prevalence | Est. Diff./St. Dev. (Econ. Sign.)
---|---|---|---|---|---|---|---|---
1 | Engaging lectures; humorous lecturer. | −0.010 | 0.001 | −10.348 | 0.000 | 0.115 | 0.069 | −0.149 |
2 | Boring and disorganized lectures. | −0.008 | 0.001 | −6.894 | 0.000 | 0.100 | 0.098 | −0.080 |
3 | Explains complex concepts effectively. | 0.000 | 0.001 | 0.207 | 0.836 | 0.099 | 0.068 | 0.003 |
4 | Poor teaching and course design; disorganized instructor. | −0.004 | 0.001 | −6.946 | 0.000 | 0.096 | 0.072 | −0.057 |
5 | Caring and enthusiastic instructor. | 0.014 | 0.001 | 26.658 | 0.000 | 0.094 | 0.056 | 0.251 |
6 | Lectures are interesting and relevant. | −0.013 | 0.001 | −23.803 | 0.000 | 0.087 | 0.045 | −0.279 |
7 | Praised as best professor. | 0.003 | 0.001 | 3.175 | 0.001 | 0.086 | 0.075 | 0.036 |
8 | Facilitates effective discussion; encourages critical thinking; creates welcoming environment. | 0.012 | 0.001 | 13.897 | 0.000 | 0.085 | 0.075 | 0.158 |
9 | Provides constructive, timely feedback. | 0.011 | 0.001 | 15.647 | 0.000 | 0.085 | 0.067 | 0.164 |
10 | Passionate about subject; committed teacher. | −0.005 | 0.000 | −15.836 | 0.000 | 0.077 | 0.025 | −0.194 |
11 | Approachability and accessibility of instructor. | 0.000 | 0.001 | −0.303 | 0.762 | 0.074 | 0.042 | −0.004 |
Appendix E. Covariates with Prevalence
1. Each year, fewer than five comments, on average, are removed at the request of the instructor and with institutional approval; these are not included in the analysis. There were no evaluations for Spring 2020 due to COVID-19.
2. Response categories include 5 “Excellent,” 4 “Very Good,” 3 “Good,” 2 “Fair,” and 1 “Unsatisfactory.”
3. Reading/Research/Thesis Writing courses; courses with enrollments aggregated by the Registrar’s Office (i.e., lectures taught in section and tutorials); and courses taught by Teaching Fellows or Teaching Assistants were not included in the original dataset. Additionally, not all students choose to submit qualitative comments.
4. Observations with one or more of the considered covariates missing were not included in the original dataset.
5. Prior to fitting the STM, we replaced the instructors’ last names in the comments with “LastName” and their first names with “FirstName.” We then followed the standard procedure of preparing the comments by dropping punctuation, numbers, and stop words (e.g., “the,” “is,” “at”). Comments that contained only stop words, numbers, or punctuation became empty and were excluded from the dataset. Finally, we applied stemming to reduce words to their root form.
6. Another commonly used metric for evaluating topic models is held-out likelihood, which measures the extent to which a model can be generalized to out-of-sample data by assessing its ability to predict withheld portions of the text. Models with higher held-out likelihood are considered to have stronger predictive performance. This metric is particularly useful when topic models are applied in predictive tasks or classification contexts (Wallach et al., 2009). However, held-out likelihood does not assess whether topics are thematically coherent or meaningfully distinct—features that are essential when topic modeling is used for interpretive purposes (Chang et al., 2009; Mimno et al., 2011; M. E. Roberts et al., 2014; Wallach et al., 2009).
7. All three human readers were blind to the questions studied in this paper.
8. Per the University, “The AI Sandbox provides a secure environment in which to explore Generative AI, mitigating many security and privacy risks and ensuring the data entered will not be used to train any vendor large language models (LLMs).”
9. Topic prevalence values were sampled using the ‘thetaPosterior’ function, and marginal effects were estimated using a modified version of ‘estimateEffect’, both from the ‘stm’ R package (R version 3.5.2; https://cran.r-project.org/web/packages/stm/stm.pdf, accessed on 5 August 2025). The ‘estimateEffect’ function was modified to incorporate clustering by faculty-course.
10. The clustered covariance matrix was estimated via the ‘vcovCL’ function from the ‘sandwich’ package (R version 3.5.2), using the “HC1” estimation type.
11. This analysis is conducted using the modified ‘estimateEffect’ function from the ‘stm’ package (R version 3.5.2), as previously described.
12. To obtain the results presented in Figure A1, we specified 30 STM runs per topic count, 6 of which converged successfully in each case and were included in the analysis.
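Notes 9 and 10 describe cluster-robust inference by faculty-course. A minimal sketch of the `sandwich::vcovCL` call with the “HC1” type, applied to a simple prevalence regression rather than the authors' modified `estimateEffect`, is below; `theta_topic5`, `meta`, and `faculty_course` are hypothetical names.

```r
# Illustrative sketch of clustered SEs as in notes 9-10 (hypothetical names).
library(sandwich)
library(lmtest)

fit_k <- lm(theta_topic5 ~ gender + set_score, data = meta)
V <- vcovCL(fit_k, cluster = ~ faculty_course, type = "HC1")
coeftest(fit_k, vcov. = V)  # t-tests with faculty-course-clustered SEs
```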
References
- Adams, S., Bekker, S., Fan, Y., Gordon, T., Shepherd, L. J., Slavich, E., & Waters, D. (2022). Gender bias in student evaluations of teaching: ‘Punish[ing] those who fail to do their gender right’. Higher Education, 83, 787–807. [Google Scholar] [CrossRef]
- Bain, K. (2004). What the best college teachers do. Harvard University Press. [Google Scholar]
- Binderkrantz, A. S., Bisgaard, M., & Lassesen, B. (2022). Contradicting findings of gender bias in teaching evaluations: Evidence from two experiments in Denmark. Assessment & Evaluation in Higher Education, 47, 1345–1357. [Google Scholar] [CrossRef]
- Boring, A. (2017). Gender biases in student evaluations of teaching. Journal of Public Economics, 145, 27–41. [Google Scholar] [CrossRef]
- Brookfield, S. D. (2015). The skillful teacher: On technique, trust, and responsiveness in the classroom (3rd ed.). Jossey-Bass. [Google Scholar]
- Chang, J., Boyd-Graber, J., Gerrish, S., Wang, C., & Blei, D. M. (2009, December 7–10). Reading tea leaves: How humans interpret topic models. NIPS’09: Proceedings of the 23rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada. [Google Scholar]
- Fan, Y., Shepherd, L. J., Slavich, E., Waters, D., Stone, M., Abel, R., & Johnston, E. L. (2019). Gender and cultural bias in student evaluations: Why representation matters. PLoS ONE, 14(2), e0209749. [Google Scholar] [CrossRef] [PubMed]
- Feldman, K. A. (2007). Identifying exemplary teachers and teaching: Evidence from student ratings. In R. P. Perry, & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 93–143). Springer. [Google Scholar]
- Fuller, K. A., Morbitzer, K. A., Zeeman, J. M., Persky, A. M., Savage, A. C., & McLaughlin, J. E. (2024). Exploring the use of ChatGPT to analyze student course evaluation comments. BMC Medical Education, 24(1), 423. [Google Scholar] [CrossRef] [PubMed]
- Gelber, K., Brennan, K., Duriesmith, D., & Fenton, E. (2022). Gendered mundanities: Gender bias in student evaluations of teaching in political science. Australian Journal of Political Science, 57(2), 199–220. [Google Scholar] [CrossRef]
- Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81–112. [Google Scholar] [CrossRef]
- Hill, C. J., Bloom, H. S., Black, A. R., & Lipsey, M. W. (2008). Empirical benchmarks for interpreting effect sizes in research. Child Development Perspectives, 2(3), 172–177. [Google Scholar] [CrossRef]
- Kraft, M. A. (2020). Interpreting effect sizes of education interventions. Educational Researcher, 49(4), 241–253. [Google Scholar] [CrossRef]
- MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40, 291–303. [Google Scholar] [CrossRef]
- Marsh, H. W. (1984). Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases, and utility. Journal of Educational Psychology, 76(5), 707–754. [Google Scholar] [CrossRef]
- Martin, L. L. (2016). Gender, teaching evaluations, and professional success in political science. PS: Political Science and Politics, 49(2), 313–319. [Google Scholar] [CrossRef]
- Mengel, F., Sauermann, J., & Zölitz, U. (2019). Gender bias in teaching evaluations. Journal of the European Economic Association, 17(2), 535–566. [Google Scholar] [CrossRef]
- Miles, P. C., & House, D. (2015). The tail wagging the dog: An overdue examination of student teaching evaluations. International Journal of Higher Education, 4(2), 116–126. [Google Scholar] [CrossRef]
- Mimno, D., Wallach, H., Talley, E., Leenders, M., & McCallum, A. (2011, July 27–31). Optimizing semantic coherence in topic models. 2011 Conference on Empirical Methods in Natural Language Processing, Edinburgh, UK. [Google Scholar]
- Mitchell, K. M. W., & Martin, J. (2018). Gender bias in student evaluations. PS: Political Science and Politics, 51(3), 648–652. [Google Scholar] [CrossRef]
- Okoye, K., Arrona-Palacios, A., Camacho-Zuñiga, C., Achem, J. A. G., Escamilla, J., & Hosseini, S. (2021). Towards teaching analytics: A contextual model for analysis of students’ evaluation of teaching through text mining and machine learning classification. Education and Information Technologies, 27, 3891–3933. [Google Scholar] [CrossRef] [PubMed]
- OpenAI. (2024). GPT-4o. Available online: https://sandbox.ai.huit.harvard.edu/c/new (accessed on 1 May 2025).
- Roberts, M., Stewart, B., & Tingley, D. (2016). Navigating the local modes of big data: The case of topic models. In Computational social science: Discovery and prediction. Cambridge University Press. [Google Scholar]
- Roberts, M. E., Stewart, B. M., & Tingley, D. (2019). stm: An R package for structural topic models. Journal of Statistical Software, 91(2), 1–40. [Google Scholar] [CrossRef]
- Roberts, M. E., Stewart, B. M., Tingley, D., Lucas, C., Leder-Luis, J., Gadarian, S. K., Albertson, B., & Rand, D. G. (2014). Structural topic models for open-ended survey responses. American Journal of Political Science, 58(4), 1064–1082. [Google Scholar] [CrossRef]
- Sun, J., & Yan, L. (2023). Using topic modeling to understand comments in student evaluations of teaching. Discover Education, 2, 25. [Google Scholar] [CrossRef]
- Taylor, M. A., Su, Y., Barry, K., & Mustillo, S. (2021). Using structural topic modelling to estimate gender bias in student evaluations of teaching. In E. Zaitseva, B. Tucker, & E. Santhanam (Eds.), Analysing student feedback in higher education (1st ed.). Routledge. [Google Scholar]
- Theall, M., & Franklin, J. (2001). Looking for bias in all the wrong places: A search for truth or a witch hunt in student ratings of instruction? New Directions for Institutional Research, 27(5), 45–56. [Google Scholar] [CrossRef]
- Wallach, H. M., Murray, I., Salakhutdinov, R., & Mimno, D. (2009, June 14–18). Evaluation methods for topic models. ICML ’09: Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, QC, Canada. [Google Scholar]
- Zipser, N., & Mincieli, L. (2024). A framework for using SET when evaluating faculty. Studies in Higher Education, 50(1), 168–182. [Google Scholar] [CrossRef]
- Zipser, N., Mincieli, L., & Kurochkin, D. (2021). Are there gender differences in quantitative student evaluations of instructors? Research in Higher Education, 62(7), 976–997. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zipser, N.; Kurochkin, D.; Yu, K.W.; Mincieli, L.A. Exploring Relationships Between Qualitative Student Evaluation Comments and Quantitative Instructor Ratings: A Structural Topic Modeling Framework. Educ. Sci. 2025, 15, 1011. https://doi.org/10.3390/educsci15081011