Article

Closing the Gap? The Ability of Adaptive Learning Courseware to Close Outcome Gaps in Principles of Microeconomics

by Karen Gebhardt 1,*,† and Christopher D. Blake 2,†
1 Department of Economics, University of Colorado Boulder, 256 UCB, Boulder, CO 80309, USA
2 Oxford College, Emory University, Seney 317, 801 E Emory Str., Oxford, GA 30054, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Educ. Sci. 2024, 14(9), 1025; https://doi.org/10.3390/educsci14091025
Submission received: 15 July 2024 / Revised: 6 September 2024 / Accepted: 17 September 2024 / Published: 19 September 2024

Abstract: Research shows that students who identify as low-income, first-generation, and/or racially diverse disproportionately underperform in college and earn fewer degrees than other students. This study explores the integration of adaptive learning courseware assignments as a tool to help close these outcome gaps and to ensure more equitable learning across diverse student groups. Adaptive learning courseware is an educational technology that requires students to master the same learning objectives but, for each student, the courseware determines the order and timing of content based on how that student interacts with the courseware, thus enabling an individualized learning path for each student. Adaptive learning assignments were implemented in five sections of a highly enrolled Principles of Microeconomics course at a medium-sized state university in the United States. This study draws from student data (n = 581), which includes adaptive learning assignment completion data, detailed exam and final grade data, and institutional demographic data. Descriptive statistics and regression analyses are used to explore if the completion of adaptive learning assignments disproportionately benefited low-income, first-generation, or racially diverse students, thus helping close the gap between students from different backgrounds. Findings include significant evidence that adaptive learning assignment completion was correlated with more exam questions answered correctly by all students, with this correlation being disproportionately stronger for students who identify as being from a minority background and for foundational (easy) exam questions.

1. Introduction

Students begin their higher educational journeys with unique experiences, diverse backgrounds, and different preparation levels which can affect their outcomes in college [1,2]. Research shows that students who identify as low-income, first-generation, and/or racially diverse disproportionately underperform in college and earn fewer degrees than other students [3,4,5], leading to outcome gaps. This raises equity concerns, as students who are low-income, for example, might not have the baseline resources available to them that others from higher-income households do. As completion of a college education is linked with future outcomes such as higher incomes, lower unemployment rates, better health, and increased longevity [6], a natural question becomes how to best close outcome gaps between students from all backgrounds, thus helping students to end their educational journeys more equitably.
The integration of adaptive learning courseware can be a way to help close this gap and ensure equitable learning across diverse student groups [5]. Adaptive learning is an approach to personalized learning which moves education away from a “one-size-fits-all” model to one that meets the needs of each learner individually [7]. Adaptive learning courseware is recognized as having the potential to improve student outcomes by making personalized learning scalable.
Using data collected from Principles of Microeconomics taught in the spring of 2017 at a medium-sized state university in the United States, this research seeks to determine whether adaptive courseware use helped close the outcome gap between students from different backgrounds (e.g., first-generation, low-income, racially diverse) by comparing adaptive learning assignment completion and corresponding outcomes on exam questions. Findings include significant evidence that adaptive learning assignment completion was correlated with more exam questions answered correctly by all students, and that this correlation was disproportionately stronger for students who identify as being from a minority background and for foundational (easy) exam questions. These findings provide important evidence in the context of ongoing discussions about how educators and higher education institutions can close the gap between students from different backgrounds.
The paper is organized as follows. This section continues with a discussion of adaptive learning courseware and its impact on outcomes. Section 2 introduces the research questions, describes how adaptive learning courseware was implemented into the course, provides information about the data used in the study, and describes the research methods. Results are in Section 3 and a discussion is provided in Section 4.

Adaptive Learning Courseware and Its Impact on Outcomes

Adaptive learning courseware is an educational technology that requires students to master the same learning objectives, but for each student, the order and timing of content is determined by the courseware. The courseware assesses the student’s performance on a number of factors (e.g., confidence in answer and correctness), which are used to create an individualized learning path for each student. Adaptive courseware has been available for more than twenty years (e.g., ALEKS for mathematics), has expanded to many disciplines (e.g., economics, foreign languages, business, anatomy and physiology) and is used across all levels of education including K-12 [8], higher education [9], and professional development [10]. For example, at this study’s university, adaptive courseware was implemented in language, physics, accounting, economics, philosophy, biology, chemistry, sociology, government, and several other courses [11]. Adaptive courseware can be used to create individual adaptive assignments, or even to develop an entire course that is adaptive [12].
The features of adaptive courseware are the primary reason for the potential for the courseware to help close outcome gaps between students of diverse backgrounds. Adaptive courseware tends to be self-paced, is often graded based on completion (sometimes called mastery), allows for flexibility in the timing of completion, and frequently includes features to improve metacognitive awareness. The adaptive courseware is typically designed to focus student effort towards content not yet mastered and allows the student to keep working until mastery is demonstrated. For students less prepared or otherwise struggling with course content, the benefit is clear. As an untimed activity focused on the student and their pace, adaptive learning courseware is a different way for students to interact with course content while still holding them accountable for the course’s learning objectives. Moreover, adaptive learning courseware places emphasis on the study process while allowing each student to take a unique learning path.
Studies researching the impact of the adoption and use of adaptive learning courseware on outcomes have shown mixed results, ranging from no impact to a positive impact (see the systematic review of adaptive learning courseware research between 2007 and 2017 in Xie et al. [13]). These studies typically compare outcomes between courses that use and do not use adaptive courseware. Outcomes are frequently measured as (1) the proportion of students earning passing grades (A, B, or C) versus the proportion of those either not passing (D or F) or withdrawing (W) from the course, (2) grades earned on formative or summative assessments, (3) final course grades, (4) retention, or (5) a combination of these.
Some studies show that adaptive courseware has no impact on outcomes. For example, in a study related to a digital literacy course, Murray and Pérez [14] find that outcomes, measured by grades earned on two examinations, do not vary significantly across the sections when comparing an adaptive learning versus a more traditional quiz method as a mode of instructional delivery and assessment. Griff and Matter [15] find no significant improvement on post-tests relative to pre-tests, grade distributions, and retention in undergraduate anatomy and physiology courses between sections using either only adaptive learning courseware or online quizzes of equal length (in time to complete). White [16] shows no relationship between optional adaptive learning usage and test or course grades in a management information course.
Other studies show a positive impact. Gebhardt [11], for example, showed that students who completed adaptive learning assignments in an economics course scored higher on exams, correctly answering significantly more exam questions as compared to non-completers. Results from a study evaluating the effectiveness of ALEKS (an adaptive learning courseware) in college algebra courses show that the students using the courseware outperformed students not using the courseware on a comprehensive final exam [17]. Another study, funded by the Bill & Melinda Gates Foundation, reports that several universities find gains in student outcomes in courses integrating adaptive learning [18].
Many other studies show mixed results. For example, results from the broad Adaptive Learning Market Acceleration Program (ALMAP) study are mixed, indicating that some courses using adaptive courseware resulted in slightly higher course grades, but the majority report no discernible impact on overall course grades [19]. Additionally, the researchers found that the odds of successfully completing a course are not affected by adaptive courseware use. Despite the lack of significance on completion, other findings from this study show that in seven controlled side-by-side comparisons of scores on exams, the average impact of adaptive learning courseware use is modest yet significantly positive.
When demographic data were included in these studies, the typical research focus was on measuring the change in outcomes across groups when adaptive learning courseware is used, not analyzing if its use changes outcomes between groups. Only a few focus on changes in outcomes between groups to analyze whether or not the use of adaptive learning courseware disproportionately benefits, for example, low-income, first-generation, or racially diverse students. The ALMAP study gathers data on student race and Pell Grant status (a proxy for whether a student comes from a low-income background). That study’s findings only include some results for Pell Grant students and show that there is no disproportionate advantage for these students [19]. The study by Bailey et al. [18] also includes some of these key demographic data. Relevantly, findings show that Georgia State University experienced a disproportionate decrease in failure (i.e., “DFW”) rates for minority and Pell-eligible students in introductory writing courses when using adaptive courseware. More recently, Anderson and Devlin [5] found that in some core math courses that used adaptive courseware, students in general earned higher final course grades and students who were identified as low-income disproportionately benefited as compared to students in courses that did not use the courseware. The authors did not find similar results for racially diverse or first-generation students in math courses, or at all in core English courses.
These latter studies are of particular interest to this research, because this study seeks not only to determine if adaptive learning courseware benefits students, but also if it disproportionately benefits low-income, first-generation, or racially diverse students. As past research shows that these students tend to have lower outcomes, disproportionate benefits of adaptive courseware use would suggest a meaningful option to help close the gap in outcomes between students with different backgrounds.

2. Materials and Methods

2.1. Research Questions and Hypotheses

This research examines if the integration of adaptive learning courseware assignments is correlated with better student outcomes, with disproportionately stronger correlations for key demographics of students (of a low-income, first-generation, and/or racially diverse status). Three testable hypotheses are evaluated: completing adaptive learning assignments is (1) correlated with improved outcomes for all students, (2) disproportionately correlated with improved outcomes for key demographics of students, and (3) most strongly correlated (for both (1) and (2)) with foundational course content (i.e., easy questions). Outcomes are measured by the number of matched easy, moderate, or hard exam questions answered correctly. Here, “matching” means that adaptive courseware assignment completion for a particular chapter is compared to performance on exam questions from that same chapter, where questions are ranked as easy, moderate, or hard.

2.2. Implementation of Adaptive Learning Courseware into Principles of Microeconomics

Adaptive learning courseware assignments were implemented in every large-lecture, in-person instruction section of an undergraduate Principles of Microeconomics course (five sections total, 907 students enrolled). A small (20-student) honors-only section was excluded from the sample because of an entirely different course design. Courses were taught during the Spring 2017 semester at a medium-sized state university in the United States (33,198 students enrolled in 2016–2017, of which 23,768 were resident instruction undergraduate students). At this university, Principles of Microeconomics is a first- or second-year course that introduces students to how economists model the decisions made by households, firms, and the government, and how these agents interact in a market setting. This course has high enrollment and is considered foundational as part of the university core curriculum. It is also considered a “gateway” course, as it is a required course for over 40 majors.
These five sections were taught by four instructors and assisted by a total of twelve graduate teaching assistants. Each section of this 3-credit course was structured in a lecture-recitation format where students attended a large, 90-, 180- or 270-student lecture twice weekly, led by the instructor, and a small, 30-student recitation once weekly, led by the graduate teaching assistant. The sections followed the same schedule and were coordinated (e.g., each section covered the same content in lecture and recitations, students completed the same quizzes and adaptive learning assignments, exam questions were drawn from a pool of questions curated by the instructors). To support student success, the instructors and graduate teaching assistants followed the assigned text closely in lecture and recitation. This is a deliberate design because the adaptive courseware used in this course is based on the chapters, learning objectives, and language in the text. Course assessments included low-stakes in-class polling (iClicker) for nearly every class session, almost weekly low-stakes adaptive learning assignments and higher-stakes quizzes, two high-stakes writing assignments, and three high-stakes exams. The adaptive assignments were assigned for 13 out of 15 weeks and were very low stakes (13 points out of a possible 500 in the course, or approximately 2.6% of the final grade). Adaptive assignments corresponded to textbook chapters covered in lecture and recitation and each week’s adaptive assignment was associated with a single exam (i.e., an exam for Unit 1, 2, or 3). The characteristics of the adaptive courseware, along with the university’s holistic perspective on course design (where adaptive learning use is considered important, but not the only useful pedagogical tool) informed how the technology was integrated into the Principles of Microeconomics course.
The adaptive learning courseware assignments assigned in Principles of Microeconomics came from LearnSmart (now called SmartBook) by McGraw Hill Higher Education. This technology was selected because it was associated with the course textbook. As students progress through LearnSmart, they answer a series of questions and indicate their confidence level that they answered correctly by choosing from 4 confidence levels ranging from high to low. Just like other adaptive courseware, LearnSmart focuses the student’s effort on learning objectives not yet mastered as determined by the student answering a question incorrectly or correctly without confidence, thus creating an individualized learning path for each student. The types of questions, or “probes”, associated with each learning objective are typically multiple choice or fill-in-the-blank questions, but true/false, multiple answer, or matching questions, among others, are also used. These questions are comparable in terms of language and content to a typical question that would be asked on another assignment (e.g., a quiz or exam).
Since the assignment is based on completion, all students demonstrate mastery of the learning objectives associated with the assigned sections of the chapter as shown by receiving full credit on the assignment. Students can complete the assignment all at once or complete it in multiple logins, with the technology maintaining and returning them to their current position in the assignment. If a student answers questions correctly with confidence, they will progress more quickly than a student who answers questions incorrectly or without confidence. Multiple types of remediation were available, including linked access to the relevant sections in the eBook, instructional videos, correct/incorrect indicators, explanations, and other types of learning objects. Students can freely access this remediation throughout the assignment with no grade penalty, supporting the reading of the textbook and additional moments of learning.
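To make the mastery mechanics concrete, the following is a minimal sketch in R (the language used elsewhere in this study's analyses) of the kind of loop described above. It is purely illustrative and not LearnSmart's actual, proprietary algorithm; the objective names, probabilities, and mastery rule are all assumptions.

```r
# Illustrative sketch only (NOT LearnSmart's proprietary algorithm): a toy
# mastery loop in which an objective is retired once a probe is answered
# correctly AND with confidence. All names and probabilities are assumptions.
set.seed(42)
objectives <- paste0("LO", 1:5)                     # hypothetical learning objectives
mastered   <- setNames(rep(FALSE, length(objectives)), objectives)

while (!all(mastered)) {
  # the courseware focuses effort on objectives not yet mastered
  lo        <- sample(names(mastered)[!mastered], 1)
  correct   <- runif(1) < 0.7                       # placeholder: student's answer
  confident <- runif(1) < 0.6                       # placeholder: self-reported confidence
  if (correct && confident) {
    mastered[lo] <- TRUE                            # mastered; not re-queued
  }
  # otherwise the objective is re-queued and remediation (eBook sections,
  # videos, explanations) is offered with no grade penalty
}
```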
On the instructor side of each LearnSmart assignment, the instructor selects publisher-identified sections of the chapter and identifies the average time required to complete them (the so-called “depth of coverage”). The longer the average time, the more learning objectives are covered. The courseware predominantly focuses on foundational learning objectives at the lower levels of Bloom’s taxonomy (such as remember and understand). Some of the learning objectives involve higher levels of Bloom’s taxonomy (such as apply), but these are relatively few in number. If an instructor selects a shorter average time for assignment completion, then fewer of these higher-level Bloom’s competencies will be presented to students. In this course, instructors chose a shorter average assignment time (15–20 min), so these adaptive courseware assignments are best thought of as a way to build specific content mastery corresponding to foundational (lower-level) Bloom’s competencies.
Based on the characteristics of the adaptive learning courseware and how it was implemented into the course, this study follows the work of Gebhardt [11] to explore how completion of an adaptive learning assignment is correlated with outcomes. Outcomes are measured as whether a student correctly answered a matched easy, moderate, or difficult exam question from the same chapter’s content. Measuring outcomes in this way, as compared to measuring outcomes as final exam or course grades, provides a more nuanced understanding of the strengths and weaknesses of this particular use of educational technology and its potential impact.

2.3. Data Collected for Analysis

The first step to assembling the dataset was to solicit student participation and consent. Students were invited to participate in this study twice during regularly scheduled lectures: first, approximately one month after the start of the semester, and second, in the last month of the semester. Students consented to participate in the study by signing a paper consent form and only students consenting to participate were included in the dataset. Per IRB procedure, consent was solicited by a member of the university who was not the instructor to help ensure students did not feel compelled to participate. For this reason, certain demographic groups may have consented at different rates, a pattern noted in many studies utilizing survey data (e.g., program participation rates differ by demographic group; see Li et al. [20]). This does not appear to be the case for this sample, with specific numbers cited in the Descriptive Statistics section. All data were collected in accordance with protocols approved by the University’s Institutional Review Board.
Across all five sections, 581 (of 907) students consented to participate in the research and were included in the sample population. Only consenting students’ course performance and demographic data were included in the dataset. In this sample population, there were 249 freshmen, 208 sophomores, 85 juniors, 38 seniors, and 1 graduate student. Sixty majors were represented; the most prevalent declared major was Business Administration (n = 114), while the single largest group was undeclared students (n = 127). Twelve economics majors were in the sample.
The authors then collected course performance data by student from course records using administrative tools and added them to the dataset. These data included adaptive learning assignment completion, question-level exam correctness, final course grade, and if the course was required for a student to complete their major. Adaptive learning assignments were associated with each chapter of content covered and each student’s assignment completion was noted as either complete (completer = 1), not complete (completer = 0), or partially complete (completer is between 0 and 1). Since the focus of this paper is on engagement with adaptive software, for this study, partial completers were grouped with those that fully completed adaptive courseware assignments. Multiple choice exam questions were drawn from an instructor-curated pool selected from the textbook’s testbank which included information on each question’s difficulty, associated chapter, topic, and learning objective. As discussed earlier, based on LearnSmart’s characteristics and how it was implemented in the course, completion is predicted to prepare students for foundational (i.e., easy) exam questions. To identify exam question difficulty, each chapter’s exam questions were coded into 3 levels (1 = easy, 2 = moderate, 3 = difficult) based on the publisher’s difficulty rating and the instructors’ perception of difficulty using Bloom’s taxonomy as a guide. This allowed for matching between each week’s adaptive assignment completion and each question ranked by difficulty.
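As an illustration of the coding and matching just described, a minimal sketch follows using toy data; all object and column names (completion, questions, student_id, chapter, difficulty, correct) are hypothetical stand-ins for the actual course records, which are not public.

```r
# A minimal sketch of the data assembly described above, using toy data;
# all object and column names are hypothetical stand-ins for the course records.
library(dplyr)

completion <- tibble(
  student_id = c(1, 1, 2, 2),
  chapter    = c(1, 2, 1, 2),
  completion = c(1.0, 0.4, 0.0, 1.0)          # 1 = complete, (0,1) = partial, 0 = none
)

questions <- tibble(
  student_id = rep(c(1, 2), each = 4),
  chapter    = rep(c(1, 1, 2, 2), times = 2),
  difficulty = rep(c(1, 2, 1, 3), times = 2), # 1 = easy, 2 = moderate, 3 = difficult
  correct    = c(1, 0, 1, 1, 0, 0, 1, 0)      # question-level exam correctness
)

# Match each exam question to the same chapter's adaptive assignment; partial
# completers are grouped with full completers, as in the study
matched <- questions %>%
  left_join(completion, by = c("student_id", "chapter")) %>%
  mutate(completer = as.integer(completion > 0))
```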
Table 1 summarizes the classification of exam questions. “Easy” questions corresponded to the foundational levels of Bloom’s taxonomy (i.e., remember, knowledge) and associated exam questions used keywords such as define, identify, and choose. “Moderate” questions corresponded to a higher level on the taxonomy and used keywords such as explain, interpret, show, and so on. Question-level performance (correctness) for all exam questions was then recorded for each student (correct = 1, incorrect = 0). Final course grades were recorded where passing grades included A (90–100 percent), B (80–89.9 percent), and C (70–79.9 percent). Non-passing grades included D (60–69.9 percent) and F (<60 percent).
Finally, institutional data for each student were added to the dataset. Student-level institutional data were retrieved by the university’s Office of Institutional Research, Planning, and Effectiveness and by the authors using university administrative tools. Institutional data collected focused on demographic identifiers and included gender, college GPA at census, residency, honors program participation, and this study’s key demographic identifiers (i.e., low-income, first-generation, and racially diverse status), among other variables. From the student-level institutional data in the dataset, the key demographic identifiers serve as the variables of interest since they best match the student groups that previous studies identified as those with worse outcomes. Pell Grant eligibility was used as the identifier for a low-income status (n = 105). U.S. Federal Pell Grants are usually awarded only to undergraduate students who display exceptional financial need. A first-generation flag was used to identify students whose parents never earned a bachelor’s degree (n = 148). These students may have less social capital as compared to non-first generation students. Racial and ethnic diversity was identified by students’ self-identified minority status (n = 149). For consistency throughout the rest of the paper, these students are termed “low-income,” “first-gen,” and “minority.”

2.4. Research Methods

This study follows and builds upon the methodology and findings of Gebhardt [11]. With the demographic identifiers present in the constructed dataset, this study investigates demographic differences in student performance in a manner beyond what is typical in previous studies because of the depth of data collected for analysis. Specifically, this study explores the relationship between the completion of the adaptive learning assignments and the outcomes for different student groups by comparing the completion of the assignment and the correctness on matched exam questions. This was possible because each week’s adaptive learning assignment was associated with a single chapter, which was then associated with a single exam (i.e., an exam for Unit 1, Unit 2, or Unit 3). This allows for a chapter-by-chapter comparison of a student’s adaptive assignment completion with performance on exam questions matched by difficulty and exam. This study uses a combination of quantitative methods (descriptive statistics and regression analysis) to test the hypotheses.

2.4.1. Descriptive Statistics

Descriptive statistics presented in Section 3 describe the demographic and academic characteristics of the sample population, show adaptive assignment completion frequency, and summarize student-level performance by key demographics. These statistics provide an overview of mean exam question correctness for key demographic and other students based on whether the student was a completer or non-completer for a particular adaptive learning assignment. t-tests were used to explore if students who frequently or always complete the adaptive assignment are different than their peers who infrequently or never complete the adaptive assignment. When interpreting the t-test, a p-value ≤ 0.05 suggests the difference between groups is statistically significant, whereas a p-value > 0.05 suggests the difference is insignificant. This basic comparison is helpful because it helps confirm that completer students from this study’s key demographic groups were not fundamentally different from non-completers. t-tests demonstrating statistical insignificance lend credence to the idea that results presented here are not driven solely by differences in the demographics of students who complete. Subsequent descriptive statistics of mean exam scores across completer and demographic groups provide the suggestive evidence needed to (1) show that outcome gaps exist, and (2) suggest that adaptive learning assignment completion helped close these gaps.
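A minimal sketch of this kind of comparison in R, using simulated toy data rather than the study's records; the eight-or-more completion cutoff mirrors the grouping described in Section 3, and the column names are illustrative.

```r
# Toy sketch of the frequent/always (F/A) vs. infrequent/never (I/N) comparison;
# simulated data with illustrative column names, not the study's records.
set.seed(1)
students <- data.frame(
  n_completed = sample(0:13, 200, replace = TRUE),  # assignments completed (of 13)
  first_gen   = rbinom(200, 1, 0.25)                # toy demographic flag
)
students$freq_group <- ifelse(students$n_completed >= 8, "F/A", "I/N")

# Welch two-sample t-test of the first-generation share across groups;
# p > 0.05 would suggest no significant compositional difference
t.test(first_gen ~ freq_group, data = students)
```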

2.4.2. Regression Analysis (Fixed Effects Panel Models)

Fixed effects panel regression methods extend the evidence presented in the descriptive statistics to evaluate if adaptive learning assignment completion was correlated with exam question correctness for students in general, with disproportionately stronger correlations for the key demographics of students and for foundational exam questions. The estimation technique used is relatively simple. After categorizing students as completers or not for a given week’s adaptive assignment, a general-form function is estimated as described in Equation (1).
$$C_{q,i,j,t} = f\big(A_{i,j,t},\, D_i,\, P_i\big) \qquad (1)$$
where $C_{q,i,j,t}$ is a binary indicator for whether a question ($q$) was correctly answered by individual ($i$), on content from chapter ($j$), during exam ($t$). $A_{i,j,t}$ is the indicator of the adaptive learning assignment completer status, taking on a value of 1 if the student completed some or all of the adaptive assignment associated with that question’s content. The remaining variables are controls, with $D_i$ representing the vector of student-specific demographic identifiers. $P_i$ is a binary indicator of whether a particular student participated often in class, taking on a value of 1 if the student’s in-class polling (iClicker) score was at least 60%. This latter variable was included to differentiate between the effects of in-class participation relative to adaptive learning assignments completed outside of class.
Here, it should be noted that because each chapter’s content was only included on one exam throughout the semester, the indices j and t effectively coincide. However, as Figure 1 shows, there is some weekly variation in adaptive learning assignment completion. Exam question correctness is undoubtedly affected by week-to-week assignment completion; however, it is also impacted by the content progression itself (i.e., more difficult units are likely to have lower exam question correctness, regardless of weekly variation in assignment completion). To account for this fact, along with the myriad unobserved factors that influence exam question correctness, the regression models presented in this study include fixed effects for each student, week, and unit.
To best address each of the hypotheses in this study, Equation (1) is converted to a linearly parameterized, fixed effects panel model described in Equation (2):
$$C_{q,i,j,t} = \alpha_0 + \alpha_1 A_{i,j,t} + \alpha_2 A_{i,j,t} D_{1,i} + \alpha_3 D_{2,i} + \alpha_4 P_i + \phi_{j,t} + \gamma_i + \omega_t + \epsilon_{q,i,j,t} \qquad (2)$$
There are two differences between the variables displayed in Equations (1) and (2). The first is the separation of the demographic indicator vector into two parts. The key demographic indicators (labeled $D_{1,i}$) include low-income, first-generation, and minority status and are interacted with completer status. Other demographic characteristics are collected in $D_{2,i}$ and are not interacted with completer status.
The second difference is the inclusion of $\phi$, $\gamma$, $\omega$, and $\epsilon$. $\phi$ is a fixed effect for the unit of the course (where each unit ends with a non-cumulative exam) to capture the trend in course progression. The schedule of the course is fairly standard and presents (1) microeconomic core content, (2) extensions to market structures, and (3) applications. Content tends to be more challenging in later units. $\omega$ represents weekly fixed effects to account for content timing and varied student assignment completion, $\gamma$ represents fixed effects for each individual student to capture unobserved heterogeneity, and $\epsilon$ is an error term.
The hypothesis that adaptive learning assignment completion is positively correlated with outcomes for all students should result in a statistically significant and positive coefficient $\alpha_1$ in Equation (2). For the collection of coefficients estimated in the vector $\alpha_2$, this study hypothesizes a similarly positive and significant result for these key demographic interaction terms, signifying an above-trend relationship between adaptive learning assignment completion and student outcomes for the key demographics of students captured by $D_{1,i}$. Because the key demographic indicators are interacted with the adaptive assignment completer indicator, $\alpha_2$ can be interpreted in much the same way as a coefficient from a difference-in-difference model. If adaptive learning assignment completion disproportionately benefits certain groups, a positive and significant $\alpha_2$ would reflect this.
Equation (2) is estimated using fixed effects panel estimation methods provided in the R package fixest [21]. Standard errors are clustered by student to further account for student heterogeneity. Further, as a robustness check, standard errors were clustered by section to account for inter-instructor heterogeneity; the results for all specifications did not change under either clustering technique. The student-specific fixed effects are highly collinear with student demographics in the assembled data, as expected. As a result, this study cannot report coefficients for most demographic variables: they are collinear with other covariates to a degree that warrants omission under the default collinearity tolerance parameter in the fixest package ($1 \times 10^{-6}$). The fixest package allows the modeler to include variables of their choosing, but automatically drops highly collinear covariates, leaving the associated coefficients unestimated to reduce bias. Even when coefficients are not automatically omitted due to collinearity concerns, fixed effects estimates of Equation (2) show that the coefficients attached to the key demographic indicators are very imprecisely measured, which further highlights that the student-specific fixed effects absorb much of the potential explanatory power of many of the demographic indicators.
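For concreteness, the estimation just described might look roughly as follows in fixest syntax. This is a sketch under assumed, illustrative column names (correct, completer, low_income, first_gen, minority, participation, student_id, week, unit), not the authors' actual estimation script.

```r
# A rough fixest rendering of Equation (2); column names are illustrative
library(fixest)

m <- feols(
  correct ~ completer +                              # alpha_1: completion, all students
    completer:(low_income + first_gen + minority) +  # alpha_2: key demographic interactions
    participation |                                  # in-class polling >= 60%
    student_id + week + unit,                        # gamma_i, omega_t, phi_{j,t}
  data    = matched,                                 # question-level data, as in Section 2.3
  cluster = ~student_id                              # SEs clustered by student
)
summary(m)
```

The $D_{2,i}$ controls are not written out here because, as noted above, they are largely absorbed by the student fixed effects, and fixest automatically drops whatever remains collinear.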
The frequency of collinearity between demographic indicators and fixed effects is quite high, meaning stories about some demographic sub-populations may be left untold in this study’s primary analyses. In such cases, stratified regressions can verify that results are consistent across different sub-populations. All told, the analysis undertaken stratified the sample to compare results for (1) students taking the course as a major requirement, (2) female students, (3) honors program participants, and (4) active class participants, versus each of their opposites. The results were too numerous to report, particularly given the consistency of significance, sign, and magnitude for each prospective grouping. Instead, Section 3 provides all results corresponding to major requirement and participation status, while summarizing the other stratified regressions. The interested reader can find full results for these stratified models in the appendices.
In addition, two more stratified samples are presented. The first constitutes a robustness check and focuses on “sometimes” completers. Students who complete the adaptive assignment only some of the time (frequent or infrequent completers) might be fundamentally different relative to those who either always or never completed the adaptive learning assignments. Making sure that the results are consistent when looking at these sometimes completers is important to ensure that the results are not merely reflections of engaged students (those who always complete them) versus less engaged students (those who never complete them).
The final stratified sample is by question difficulty, and its results directly address the third hypothesis. Given the frequency of lower-level Bloom’s taxonomy questions in LearnSmart assignments, this study’s third hypothesis is that the impact of adaptive learning assignment completion would be strongest for easy questions. In this set of stratified results, it is expected that $\alpha_1$ and $\alpha_2$ will be positive, but it is also hypothesized that their magnitudes will be larger for easy questions than for moderate and difficult questions.
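Continuing the illustrative fixest sketch above, the difficulty stratification could be implemented by re-estimating the same specification on each subsample; names remain hypothetical, and the interaction set is abbreviated for brevity.

```r
# Sketch of the difficulty stratification: re-estimate the same specification
# on easy / moderate / difficult subsamples (names as in the earlier sketches)
models_by_difficulty <- lapply(
  split(matched, matched$difficulty),
  function(d) feols(
    correct ~ completer + completer:minority + participation |   # interactions abbreviated
      student_id + week + unit,
    data = d, cluster = ~student_id
  )
)
# Hypothesis (3) predicts the completer coefficients are largest for easy questions
etable(models_by_difficulty)
```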

3. Results

Before presenting the results, the hypotheses are again summarized as follows:
  • Adaptive learning assignment completion is positively correlated with the likelihood of answering exam questions correctly ($\alpha_1 > 0$ in Equation (2)), thus suggesting that completing improves student outcomes.
  • Adaptive learning assignment completion is positively correlated with a disproportionate effect on exam question correctness for key demographics of students ($\alpha_2 > 0$ in Equation (2)), thus suggesting that completing can help close outcome gaps.
  • Adaptive learning effects for the previous two hypotheses are strongest for foundational (easy) exam questions (with $\alpha_1$ and $\alpha_2$ being more positive for easy questions as compared to moderate and difficult questions), reflecting the adaptive assignment’s emphasis on the lower levels of Bloom’s taxonomy.

3.1. Descriptive Statistics

To better understand the overall demographics and course performance, Table 2 displays key summary statistics for all 581 consenting students in the sample overall (left columns) and separates these students by frequency of adaptive assignment completion (right columns). Focusing on the left columns, this table shows, for example, the proportion of students in the sample who are low-income (18%), first-generation (25%), or who identify as being from a minority group background (26%). The proportions of key demographic groups consenting to participate in this study are equal to or greater than the proportions represented at the university (low-income = 18%, first-generation = 22%, and minority = 19%), which alleviates concerns that this study’s sample is not representative of the student population as a whole. The left columns also show the average adaptive learning assignment completion (79%) and the average percentage of exam questions answered correctly (72%).
Average student adaptive assignment completion frequency was mostly consistent over time, but individual student completion frequency over time was more varied. Students could choose whether to complete adaptive assignments from week to week, which this study’s hypothesis suggests will impact correctness on matched exam questions from that week. Figure 1 shows the percentage of students that completed the adaptive learning assignments throughout the semester. Participation remained relatively high and consistent after the first two weeks. Figure 2 demonstrates that a large number of students completed all 13 assigned adaptive learning activities over the course of the semester (34%).
Because many students participated in the adaptive learning assignments only sometimes (i.e., skipped one or more weeks’ assignments), this study can identify the impact of adaptive assignments by comparing students who did and did not complete a given week’s assignment. Since the focus of this paper is on engagement with the adaptive software, students who partially or fully completed the weekly assignment were included in the “completer” group and students who did not attempt the assignment were included in the “non-completer” group. Under this definition, a student could be a completer one week and a non-completer the next, based on whether they attempted the adaptive assignment for that week.
To further explore if there is a difference in performance between completers and non-completers by week, Table 3 shows the mean exam question correctness for students belonging to different demographic groups based on whether the student was a completer or a non-completer for a particular week’s adaptive assignment. These data provide two compelling pieces of evidence that adaptive learning assignment completion both increased mean question correctness for most students and disproportionately benefited some key demographics of students.
Here, observations do not represent a particular student (n = 581), but rather a particular question faced by a particular student (n = 52,290 = 581 students × 90 multiple choice questions possible). Recall that each unit’s exam questions were matched with each week’s adaptive learning assignment. Each mean in this table can therefore be interpreted as the percentage of matched questions students answered correctly based on their demographic identifiers and completer status for that week. “Baseline” represents all non-minority, non-low-income, non-first-generation students.
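Before turning to the evidence, a minimal dplyr sketch of how such question-level group means could be computed from the hypothetical matched data frame used in the earlier sketches (names illustrative):

```r
# Sketch of a Table 3 style summary: mean question-level correctness by
# demographic group and weekly completer status (hypothetical `matched` data)
library(dplyr)

matched %>%
  group_by(minority, completer) %>%
  summarise(
    n_questions  = n(),             # questions faced, not students
    mean_correct = mean(correct),   # share answered correctly
    .groups      = "drop"
  )
```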
The first piece of evidence compares mean question correctness across non-completer and completer status. For example, non-completing baseline students faced 4354 matched exam questions and answered 69% correctly. As completers, these students answered 74% of exam questions correctly, which is a 5% gain (Diff (C-NC) column). Minority-status-only students who did not complete the adaptive assignments answered 65% of the questions correctly and this improved to 75% when they were completers, a 10% gain. Notably, the students who intersect across all three key demographics (low-income + first-generation + minority, last row) faced 286 questions as non-completers and 818 as completers, correctly answering 57% and 69% of questions, respectively. This is a 12% gain. What this evidence shows is that for most student groups, completers did better than their non-completing counterparts (except for low-income + first-gen students).
The second piece of evidence is shown in the columns “Diff (BL)”, which compare key demographic students to baseline students by completer status. For example, minority-only students who did not complete the adaptive learning assignments answered 65% of the questions correctly, which is 4% fewer questions than non-completing baseline students (69%). This difference completely disappears when students from a minority background complete the adaptive learning assignments. In this case, students from minority groups answer 75% of the questions correctly as compared to completing baseline students (74%), thus eliminating the gap in outcomes. Although this narrowing of outcome gaps is inconsistent and in some cases nonexistent (e.g., low-income) or worsening (e.g., low-income + first-gen), again, the students who intersect across all three key demographics showed a narrowing of the outcome gap when these students complete (the gap goes from 12% to 5%). These two pieces of evidence combined provide strong suggestive evidence that the adaptive learning assignments are correlated with both increasing outcomes for all students and disproportionately benefiting some key demographics of students through narrowing outcome gaps.
It could be the case that students who frequently or always completed have fundamentally different demographic and academic characteristics as compared to those who infrequently or never completed an adaptive assignment (Figure 2). To understand the extent to which this may be the case, return to the columns on the right in Table 2. This part of the table separates students into two groups: students who completed eight or more adaptive assignments (frequent + always completers (F/A)) and students who completed seven or fewer assignments (infrequent + never completers (I/N)). A t-test was conducted to determine if the mean values across demographic and academic characteristics were significantly different between F/A and I/N students. When interpreting the t-test, a p-value > 0.05 suggests the difference between groups is statistically insignificant, whereas a p-value ≤ 0.05 suggests that the difference is significant.
With most demographic or academic variables, there are no significant differences between students who frequently or always completed the adaptive assignments and those who did not. It is notable for this study that there is no significant difference for this study’s three key demographics between F/A and I/N students. With p-values > 0.05, this suggests that the probability of a student being low-income, first-generation, or from a minority group did not affect the adaptive assignment completion frequency. The same is observed for high-performing students (as proxied by census GPA and honors program participation) and student commitment to the course (as proxied by major requirement). Some significant differences between F/A and I/N students are also observed. Consistent with other results, the percentage of exam questions answered correctly is higher for F/A students. Additional differences include that female students and students who frequently participate in in-class polling were significantly more likely to frequently or always complete the adaptive assignment.
As the effect of adaptive learning assignment completion on exam question correctness cannot be causally identified, these results both support and qualify this study’s findings. They support the findings by providing evidence that the regression results presented in the next section are not simply reflecting demographic and academic-driven differences in students, but are instead representative of student performance. As the regression results will show, there is a statistically strong and positive relationship between completer status and exam question correctness, even when controlling for in-class participation and frequency of adaptive assignment completion. The data collected allow for strong correlative statements, and the ability to account for participation and frequency suggests the hypothesis that adaptive assignment completion benefits students has some merit, even if data availability prohibits a more causal argument.
With this in mind, the third hypothesis of this study was explored by comparing the mean correctness of matched questions and their difficulty for completers and non-completers. First, all students are grouped according to completer status and then key demographics of students and other students are separated out. Again in this table, observations do not represent a particular student (n = 581), but rather a particular question faced by a particular student (n = 52,290).
Table 4 displays these descriptive statistics. The first rows present data for all students and show that adaptive learning assignment completers correctly answered 78% of the easy exam questions, 72% of the moderate questions, and 67% of the difficult questions. Students who did not complete the adaptive assignment correctly answered 72% of the easy exam questions, 68% of the moderate questions, and 62% of the difficult questions. Comparing between the completer and the non-completer statuses, the difference in the means is larger for the easy questions (6%) as compared to the moderate (4%) or difficult questions (5%). These data show that completers answered more exam questions correctly across all question difficulties and that adaptive assignment completion was disproportionately associated with improved correctness for the easy questions. This is consistent with what was hypothesized because LearnSmart emphasizes the foundational course content found on the lower levels of Bloom’s taxonomy, while other assignments such as recitation activities and weekly quizzes would have emphasized higher levels of Bloom’s taxonomy.
Although the regression results presented in the next section provide more concrete evidence that mean question correctness was stronger for easy questions for the key demographics of students, evidence can also be presented with descriptive statistics. Table 4 provides select results for all students and for the key demographic groups by completer status. Non-completer baseline students (who are non-low-income, non-first-gen, and not from a minority group) answered 73%, 69%, and 63% of easy, moderate, and difficult questions correctly, respectively. This far surpassed low-income, first-gen, and students from minority groups, whose mean correctness was 67%, 51%, and 51%. Again, these descriptive statistics show that completing adaptive learning assignments disproportionately benefited some key demographic groups, particularly students from minority groups. Table 4 suggests that completer students from minority groups answered 12% more easy questions correctly than non-completers, with smaller (but positive) impacts of completer status for moderate and difficult exam questions. The final row, which provides descriptive data for the intersection of low-income, first-gen, and students from minority groups, suggests the evidence for the third hypothesis is weak for this group because adaptive learning assignment completion appeared to benefit these students most for moderate and difficult questions.

3.2. Regression Analysis Results

The descriptive statistics provide strong evidence that adaptive learning assignment completion is linked with improved outcomes for all students. They also suggest that adaptive learning assignment completion narrowed outcome gaps between students from different backgrounds, with disproportionately strong relationships between completion and outcomes for certain groups.
Table 5 presents the first Fixed Effects Panel Regression results, which provide statistical evidence for this study’s first two hypotheses. More specifically, Table 5 ignores the difficulty of exam questions in order to assess whether adaptive learning completion was linked with more questions answered correctly for all students, along with stronger links for the key demographics of students. Each unique model specification is presented as a separate column, with the primary distinction between each model being which key demographic indicators are included. All models include the controls listed in Table 2: female, census GPA, residency status, honors participation, and major requirement. From Equation (2), Table 5 includes fixed effects for each student (i), week (t), unit (exam), and course section, with the latter included to capture any instructor-level heterogeneity. Both week-specific fixed effects and unit-specific fixed effects are included to account for the timing of content as well as its progression.
Table 5 provides strong evidence for this study’s first hypothesis. Specifically, the positive and significant coefficients on “Completer Indicator (A)” suggest that adaptive learning assignment completion benefited all students when controlling for other factors that might influence the probability of a student correctly answering a matched exam question. These estimates reflect α 1 from the estimating function, Equation (2). Regardless of specification, the estimates suggest that adaptive learning assignment completion is linked with students answering between 1.8 and 3.8% more questions correctly on exams, as compared to otherwise equal non-completers. The consistently positive and statistically significant coefficients on the Completer Indicator suggest a strong link between adaptive assignment completion and improved student outcomes.
The next three rows in Table 5 (low-income indicator to minority indicator) could provide statistical evidence of whether outcome gaps exist for this study’s primary demographic groups. Statistically negative coefficients could be interpreted as the gap between students of a particular group and baseline students. However, each is insignificant, implying that the outcome gaps documented in the Descriptive Statistics section are captured by the student-based fixed effects. This is not to say that outcome gaps do not exist (indeed, the descriptive statistics show that they do), but simply that these gaps are absorbed by the student fixed effects. These insignificant results are carried through in every other specification of the panel regression results presented in this study. Therefore, future tables do not display these estimates to improve readability. It is important to note, though, that in the absence of fixed effects, the standard Ordinary Least Squares (OLS) model estimates significant and consistently negative coefficients for these rows.
Table 5 shows weak evidence in support of the second hypothesis. The remaining coefficients in Table 5 assess the extent to which adaptive learning assignment completion is differentially correlated with improved exam question correctness for this study’s key demographic groups. There is evidence that adaptive assignment completion was linked with improved exam question correctness for students from minority groups. When minority status indicators are included, the third column (which incorporates the minority status indicator, but no others) implies that completers from minority groups answered 5.5% more questions correctly than their non-completer peers from minority groups. Analogously, when all combinations of demographic indicators are included (fourth column), the coefficient is similar in magnitude, but barely insignificant. This highlights that adaptive assignment completion may have been correlated with improved student outcomes, but only for students from minority groups and not for either first-generation students or low-income students.
As a set of baseline results, Table 5 provides strong evidence that adaptive learning assignment completion benefited students overall. There is some evidence that this link is stronger for certain groups, given the significant and positive coefficient attached to minority completer students. With many other demographic variables collapsing in the student fixed effects, it is challenging to disentangle the extent to which results differ across other demographic groups. The following section provides estimates for samples stratified by the categorical variables mentioned in the Research Methods section.

3.2.1. Stratified Regression Results

To explore how adaptive learning assignment completion was related with improved student outcomes across groups, four sets of stratified regressions were estimated which focus on key demographic students as they are members of different sub-populations within the sample (e.g., female or male, honors participation or none).
The first set of stratified regression results compares students taking the course as part of their major requirements (major requirement) with those who are not (not required for major). Table 6 provides key results for this comparison. As shown in Table 5, the coefficients for low-income and first-generation students were either inestimable (due to correlation with the student fixed effects) or exceptionally insignificant. Therefore, Table 6 presents only results related to students from minority groups taking the course as part of a major requirement and those who were not. Results show that adaptive learning assignment completion is positively correlated with improved exam question correctness for those taking the course as part of their major (improved correctness between 2% and 4%), but not for other students. In addition, Table 6 reinforces the potentially positive link between completion and outcomes for students from minority groups, with a coefficient of 4.1% in the first specification for students taking the course as a major requirement and 8.4% when the course is not a requirement.
Unlike the baseline results of Table 5, completer status is significantly correlated with outcomes for other groups. Low-income major-requirement students and low-income, first-generation, not-required students who completed adaptive assignments performed worse. This effect is canceled by an exceptionally large and significant coefficient for low-income, first-generation, minority group, not-required students. The same cannot be said for major requirement students.
The next set of stratified estimates focused on students who identify as either female or male. Female student adaptive learning assignment completion was more positively correlated with exam question correctness than that for male students. However, there are no other meaningful differences in the results for female and male students when compared with the full sample. Appendix A displays Table A1 and Table A2, which show results for female and male students.
Another set of stratified regressions focused on students who participated in the honors program, but enrolled in a non-honors section of Principles of Microeconomics. For these students, it is observed that completer status is not correlated with exam question correctness. Non-honors student results mirrored those of the full sample, with completers performing better on average, with some evidence of stronger correlations between completion and exam question correctness for students from minority groups. In Appendix B, Table A3 displays results for only honors students in the sample, while Table A4 provides the same for non-honors students.
To further explore the extent to which this study's results are driven by student participation in the adaptive assignments specifically (as opposed to participation in class in general), Table 7 displays results for students who were active class participants. When controlling for class participation, the results show that completer status was positively correlated with exam question correctness. As in other specifications, completer students from minority groups showed stronger correlations than non-minority completers, particularly those at the intersection of all key demographic indicators. Table 7 also reinforces that low-income and first-generation completers did not share this positive association: the statistically significant and negative coefficients on these indicators' interactions with completer status imply worse outcomes for completers in these groups unless they were also students from minority groups. This again underscores the mixed evidence that completing adaptive learning assignments closes outcome gaps, despite the consistently positive relationship associated with completion in general. Analogous results for students with low participation show that completer status was not linked with improved outcomes, implying that prospective benefits from adaptive assignment completion are more likely to be present when students actively participate in class. Table A5 in Appendix C shows the results for lower-participating students.

3.2.2. Robustness Checks

Several robustness checks were implemented for each of the models previously described. Earlier estimates for this study utilized pooled OLS rather than fixed-effects panel methods. The results from the OLS models mirror those presented here, with two primary findings: (1) completer status was positively correlated with improved correctness on exam questions, and (2) students from minority groups had more positive correlations. Results for the other key demographic groups were similarly mixed.
Other robustness checks involved omitting various fixed effects (sketched below). When student fixed effects were omitted, the coefficients on low-income status, first-generation status, and minority status were all consistently negative and significant. Given the outcome gaps shown in the Descriptive Statistics section, this aligns with expectations. Omitting the fixed effects for the week of the semester, the course section, and the unit alters the magnitudes of the results, but not their signs or significance.
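Under the same assumptions as the earlier sketch, these robustness comparisons reduce to re-estimating the model with different fixed-effect sets; fixest's etable() then prints the specifications side by side for the sign and significance comparison described above.

```r
# Sketch of the robustness comparisons (assumed names as before): pooled OLS
# with explicit demographic indicators, the model with student FE omitted,
# and the full fixed-effects specification.
fit_ols <- feols(correct ~ completer * minority + low_income + first_gen,
                 data = panel, cluster = ~student)

fit_no_student <- feols(correct ~ completer * minority + low_income + first_gen |
                          week + section + unit,   # student FE omitted
                        data = panel, cluster = ~student)

fit_full <- feols(correct ~ completer * minority |
                    student + week + section + unit,
                  data = panel, cluster = ~student)

etable(fit_ols, fit_no_student, fit_full)  # compare signs and significance
```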
As a final robustness check, the same models as those in Table 5 were estimated, but with the data subset to only students who were "sometimes" completers. Some students never attempted adaptive learning assignments and were therefore non-completers every week (never completers). Others completed every adaptive learning assignment and were therefore completers every week (always completers). There were 371 sometimes completers in the sample (out of 581 students) who completed adaptive learning assignments in some weeks but not in others (the frequent or infrequent students in Table 2). Because sometimes completers made a choice each week about their assignment completion, they may be fundamentally different from never and always completers. The potential concern is that if results for sometimes completers differ from those in Table 5, then the results thus far might be driven more by student type than by the adaptive learning software itself. This robustness check provides strong, reinforcing support for the results of Table 5. In particular, sometimes completers performed better in the weeks when they completed adaptive learning assignments (between 2% and 4%). The magnitudes of these coefficients are lower than when never completers and always completers are included in the sample, but results are consistent enough with the full sample to alleviate concerns about student type driving the results. The lower magnitude does suggest that, while completion itself carries innate benefits, part of the benefit stems from continuous use (colloquially, "practice makes perfect"). As in Table 5, the completer status of students from minority groups was positively correlated with improved outcomes beyond the implied baseline effect (3.5% to 5.8% in these specifications, relative to otherwise equal peers). Appendix D (Table A6) shows these results.
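The sometimes-completer subsample is mechanical to construct from the weekly panel: a student belongs to it exactly when their share of completed weeks is strictly between 0 and 1. A sketch under the same assumed names:

```r
# Flag students whose weekly completer status varies: share of completed
# weeks strictly between 0 (never completers) and 1 (always completers).
share <- tapply(panel$completer, panel$student, mean)
sometimes_ids <- names(share)[share > 0 & share < 1]

fit_sometimes <- feols(
  correct ~ completer * minority | student + week + section + unit,
  data    = subset(panel, student %in% sometimes_ids),
  cluster = ~student
)
```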
Between the results from the full sample and those focusing on sometimes completers, the consistency across specifications provides promising evidence in favor of the first two hypotheses. There is strong and consistent evidence that adaptive learning assignment completion was linked with improved student outcomes (the first hypothesis). Evidence for the second hypothesis (that adaptive learning assignments can close outcome gaps) was mixed, holding only for some key demographic groups. Results consistently show that the completer status of students from minority groups was more positively correlated with improved outcomes than non-minority completer status; the same cannot be concluded as consistently for first-generation and low-income students.

3.2.3. Results by Exam Question Difficulty

Turning to the last of this study's three hypotheses, Table 8 displays the equivalent of the final column in Table 5, stratified by exam question difficulty. The third hypothesis was that the effects of adaptive learning assignment completion should be strongest for easier exam questions. The first and second columns of Table 8 provide strong evidence that student performance on easier questions was more positively correlated with adaptive learning assignment completion than performance on more difficult questions. Specifically, these stratified models suggest that completers performed 3.4% better on easier questions than their non-completer peers. The magnitude of this effect was smaller for moderate questions, and exam question correctness for difficult questions was unaffected by completer status.
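The difficulty stratification mirrors the earlier subsetting, with one regression per difficulty tier. A sketch under the same assumed names follows; the four-way interaction expands to the A:L, A:F, A:M, ..., A:L:F:M terms reported in Table 8 (demographic main effects are absorbed by the student fixed effects, and fixest drops such collinear terms automatically).

```r
# One full-interaction regression per difficulty tier, as in Table 8's columns.
fits <- lapply(c("easy", "moderate", "difficult"), function(d) {
  feols(correct ~ completer * low_income * first_gen * minority |
          student + week + section + unit,
        data    = subset(panel, difficulty == d),
        cluster = ~student)
})
etable(fits[[1]], fits[[2]], fits[[3]])
```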
Table 8 also demonstrates that completer students from minority groups performed disproportionately better on easy exam questions, with completion correlated with a 7.5% improvement in question correctness beyond the baseline. The first column of results suggests that adaptive assignment completion was an unambiguous benefit to students from minority groups, with a combined effect of about 11% when the interaction is added to the baseline effect of adaptive learning assignment completion. A similar positive correlation also exists for difficult questions for students from minority groups. The continued result is that completion for low-income and first-generation students was not positively correlated with increased exam question correctness.
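The combined magnitude of about 11% is simply the sum of the two coefficients in the first column of Table 8:

```latex
\underbrace{0.034}_{\text{Completer (A)}} \; + \; \underbrace{0.075}_{\text{A:M}} \; = \; 0.109 \; \approx \; 11\%
```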
Still, Table 8 provides strong evidence for the third hypothesis: the positive association with assignment completion was strongest for easy questions, decreasing in magnitude until becoming insignificant for increasingly difficult questions. As with the other models presented in this study, only students from minority groups reaped significant additional benefits, while low-income and first-generation students did not.

4. Discussion

Improving student outcomes is a goal for higher education, and the use of adaptive learning courseware has the potential to support this goal. This study evaluated three hypotheses, the first of which was that students who completed the low-stakes adaptive assignments (on LearnSmart) outperformed their peers who did not. Both the descriptive statistics and the regression analysis show that completion is correlated with a significant increase in exam question correctness, and this result is consistent across most stratified regression specifications. This finding suggests that integrating adaptive learning courseware as low-stakes assignments may improve student outcomes.
The second hypothesis was that adaptive assignment completion would disproportionately benefit key demographic groups (i.e., low-income, first-generation, and students from minority groups), thus helping to close outcome gaps between these students and their peers. While there is strong evidence of outcome gaps, the findings of this study suggest that only students from minority groups consistently and disproportionately benefited from completing adaptive learning assignments, a result that holds across most stratified regression specifications. These strong results are promising: for students from minority groups, adaptive learning assignment completion is correlated with increased exam question correctness of between 4% and 6% beyond the baseline increase, depending on specification, which would help decrease outcome gaps. A more nuanced story emerged when evaluating sub-populations using the stratified specifications. For example, the magnitude of the coefficient attached to students in all key demographic groups when the course is not a requirement suggests that, for educators who want to improve the success of these students in their course, adaptive learning assignment completion is correlated with higher outcomes. Results were less promising for low-income and first-generation students. In most specifications, completing the adaptive assignments was associated with these students answering more exam questions correctly, but not disproportionately more, and many of these estimates were insignificant; in some specifications, they were negative and significant. Without further data, it is not possible to know exactly why adaptive learning courseware completion was correlated with mixed outcomes for these groups, but the result does highlight that adaptive learning courseware cannot (and should not) be treated as a "one-size-fits-all" solution for closing outcome gaps.
The third hypothesis was that adaptive learning use would most affect outcomes for more foundational (i.e., easy) content. This hypothesis provides an intuitive link for why adaptive learning assignment completion may benefit students, as the structure of adaptive assignments in the sample explicitly targeted lower-level Bloom's taxonomy questions. Results show strong evidence that adaptive learning assignment completion was correlated with increased correctness for easy exam questions, with students from minority groups again disproportionately benefiting.
Overall, the results of this study speak to the potential of adaptive technology to improve outcomes, but a key challenge is encouraging instructors to integrate adaptive courseware assignments into their courses. Simply put, the benefit to students may be clear, but instructors may not feel it is worth their time and effort. Instructors often face the choice between generating the content themselves (a substantial time commitment) or using publisher-provided adaptive learning courseware (which can be expensive for students). It should therefore come as no surprise that adoption of these types of assignments is not as fast as the benefits might suggest. To facilitate greater use of adaptive courseware, departments and their administrations can support instructors in collaborating to generate content themselves. If instructors have neither the time nor the interest to generate content, they can look toward publisher-authored content; adaptive learning courseware is sometimes included when students purchase eBooks through large publishers for high-enrollment or gateway courses. If it is available at no or low additional cost, instructors should be encouraged to integrate this technology into their courses.
While this study generated some exciting results, further study is needed to more broadly determine the impacts of adopting adaptive courseware on student outcomes. The results from this study suggest a modest ability of adaptive courseware to close the gap between students of different backgrounds, but they are limited to a single course covering one topic at one school. Additional data collection would reinforce the results and provide insight into whether they extend beyond these specific circumstances. These data could be combined with data from other course assignments (e.g., weekly higher-stakes quizzes, papers, etc.), student attendance data, and information about pedagogical approach to create a more holistic picture of the role of adaptive courseware in promoting and supporting student success. Another extension could include qualitative data to better understand students' perspectives on the contribution of these types of low-stakes assignments to their course outcomes. Information about frequent or infrequent completion from a student's perspective (perhaps completing when they feel least comfortable with the content) would be a valuable addition to this work. Finally, when observing the rate of completion of the adaptive assignments, some students completed all of them (34.4%) and many completed most of them (78.8% of students completed at least eight assignments), and a particular student could be a completer one week and a non-completer the next. It would be interesting to conduct a longitudinal study following individual students throughout the semester to determine the individual impact of adaptive learning assignment completion on course outcomes.
The results of this study suggest that adaptive learning assignments can be an educational technology tool that helps to close the gap between students of different backgrounds. Based on the data collected, there is strong evidence that such activities are correlated with gap-closing for students from minority groups, but limited evidence of gap-closing for low-income or first-generation students. This study is just one contribution to the body of research on closing outcome gaps. Along with other research findings that highlight the role of advising [22], the importance of a sense of belonging [23], the importance of active learning [24], and more, the results of this study contribute to the continuing effort by educators and higher education institutions to eliminate outcome gaps between different demographic groups.

Author Contributions

Conceptualization, K.G.; methodology, K.G. and C.D.B.; software, C.D.B.; validation, C.D.B.; formal analysis, C.D.B. and K.G.; investigation, K.G. and C.D.B.; resources, K.G.; data curation, K.G.; writing—original draft preparation, K.G.; writing—review and editing, C.D.B. and K.G.; visualization, K.G. and C.D.B.; supervision, K.G.; project administration, K.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and was approved by the Institutional Review Board (or Ethics Committee), USA (IRB ID 295-17H, 21 December 2016).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets presented in this article are not readily available because the data are part of an ongoing study that also contains study participants’ private demographic information. Requests to access the datasets should be directed to Karen Gebhardt ([email protected]).

Acknowledgments

The authors thank session participants that provided helpful feedback at the Thirteenth Annual American Economic Association (AEA) Conference on Teaching and Research in Economic Education (CTREE). Special thanks go to Karen Bernhardt-Walther of York University (Canada) for detailed comments both during and after the paper presentation. The authors also appreciate the thorough and helpful suggestions provided by this paper’s two anonymous reviewers.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
A (in tables): Adaptive Learning Assignment Completers
ALEKS: Assessment and LEarning in Knowledge Spaces
ALMAP: Adaptive Learning Market Acceleration Program
DFW: D, F (grades earned), or Withdrawing students
F (in tables): First-gen Student Indicator
L (in tables): Low-income Student Indicator
M (in tables): Minority Student Indicator
MDPI: Multidisciplinary Digital Publishing Institute
OLS: Ordinary Least Squares

Appendix A. Fixed Effects Regression Results for Female and Male Students

Table A1. Fixed Effects Panel Regression estimates for all question difficulties for female students. Dependent Variable: Exam Question Correct (1 = Yes, 0 = No); standard errors in parentheses.

| | Low-Income Only | First-Gen Only | Minority Only | All Interactions |
|---|---|---|---|---|
| Completer Indicator (A) | 0.032 ** (0.013) | 0.041 *** (0.013) | 0.025 * (0.014) | 0.036 ** (0.017) |
| A:Low-Income Indicator (L) | 0.026 (0.031) | | | −0.001 (0.072) |
| A:First-Gen Indicator (F) | | −0.014 (0.027) | | −0.047 * (0.028) |
| A:Minority Indicator (M) | | | 0.036 (0.025) | 0.029 (0.030) |
| A:L:F | | | | −0.010 (0.137) |
| A:L:M | | | | −0.023 (0.084) |
| A:F:M | | | | 0.003 (0.069) |
| A:L:F:M | | | | 0.122 (0.161) |
| Fixed Effects (Student, Week, Course Section, Unit (Exam)) | Yes | Yes | Yes | Yes |
| Observations | 19,898 | 19,898 | 19,898 | 19,898 |
| R² | 0.102 | 0.102 | 0.102 | 0.102 |

Significance of coefficients denoted as: * = p-value < 0.05; ** = p-value < 0.01; *** = p-value < 0.001.
Table A2. Fixed Effects Panel Regression estimates for all question difficulties for male students. Dependent Variable: Exam Question Correct (1 = Yes, 0 = No); standard errors in parentheses.

| | Low-Income Only | First-Gen Only | Minority Only | All Interactions |
|---|---|---|---|---|
| Completer Indicator (A) | 0.042 *** (0.011) | 0.032 *** (0.012) | 0.013 (0.012) | 0.030 ** (0.014) |
| A:Low-Income Indicator (L) | −0.047 * (0.028) | | | −0.083 ** (0.037) |
| A:First-Gen Indicator (F) | | −0.005 (0.025) | | 0.005 (0.028) |
| A:Minority Indicator (M) | | | 0.069 *** (0.023) | 0.042 (0.027) |
| A:L:F | | | | −0.021 (0.052) |
| A:L:M | | | | 0.072 (0.058) |
| A:F:M | | | | 0.009 (0.042) |
| A:L:F:M | | | | 0.271 ** (0.124) |
| Fixed Effects (Student, Week, Course Section, Unit (Exam)) | Yes | Yes | Yes | Yes |
| Observations | 21,002 | 21,002 | 21,002 | 21,002 |
| R² | 0.100 | 0.100 | 0.100 | 0.101 |

Significance of coefficients denoted as: * = p-value < 0.05; ** = p-value < 0.01; *** = p-value < 0.001.

Appendix B. Fixed Effects Regression Results for Honors and Non-Honors Students

Table A3. Fixed Effects Panel Regression estimates for all question difficulties for honors students. Dependent Variable: Exam Question Correct (1 = Yes, 0 = No); standard errors in parentheses.

| | Low-Income Only | First-Gen Only | Minority Only | All Interactions |
|---|---|---|---|---|
| Completer Indicator (A) | 0.046 (0.035) | 0.058 (0.040) | 0.060 (0.056) | 0.072 (0.067) |
| A:Low-Income Indicator (L) | 0.116 ** (0.041) | | | 0.137 *** (0.032) |
| A:First-Gen Indicator (F) | | −0.044 (0.061) | | −0.060 (0.078) |
| A:Minority Indicator (M) | | | −0.015 (0.058) | −0.046 (0.062) |
| Fixed Effects (Student, Week, Course Section, Unit (Exam)) | Yes | Yes | Yes | Yes |
| Observations | 1324 | 1324 | 1324 | 1324 |
| R² | 0.100 | 0.099 | 0.099 | 0.100 |

Significance of coefficients denoted as: ** = p-value < 0.01; *** = p-value < 0.001.
Table A4. Fixed Effects Panel Regression estimates for all question difficulties for non-honors students. Dependent Variable: Exam Question Correct (1 = Yes, 0 = No); standard errors in parentheses.

| | Low-Income Only | First-Gen Only | Minority Only | All Interactions |
|---|---|---|---|---|
| Completer Indicator (A) | 0.038 *** (0.009) | 0.035 *** (0.009) | 0.017 * (0.009) | 0.031 *** (0.011) |
| A:L | −0.019 (0.022) | | | −0.055 (0.035) |
| A:F | | −0.007 (0.019) | | −0.014 (0.021) |
| A:M | | | 0.057 *** (0.017) | 0.045 ** (0.022) |
| A:L:F | | | | −0.025 (0.050) |
| A:L:M | | | | 0.025 (0.048) |
| A:F:M | | | | −0.011 (0.046) |
| A:L:F:M | | | | 0.152 * (0.089) |
| Fixed Effects (Student, Week, Course Section, Unit (Exam)) | Yes | Yes | Yes | Yes |
| Observations | 39,576 | 39,576 | 39,576 | 39,576 |
| R² | 0.100 | 0.100 | 0.101 | 0.101 |

Significance of coefficients denoted as: * = p-value < 0.05; ** = p-value < 0.01; *** = p-value < 0.001.

Appendix C. Fixed Effects Regression Results for Non-Participant Students

Table A5. Fixed Effects Panel Regression estimates for all question difficulties for non-active-participant students. Dependent Variable: Exam Question Correct (1 = Yes, 0 = No); standard errors in parentheses.

| | Low-Income Only | First-Gen Only | Minority Only | All Interactions |
|---|---|---|---|---|
| Completer Indicator (A) | 0.008 (0.018) | 0.014 (0.018) | 0.007 (0.018) | −0.007 (0.022) |
| A:L | 0.059 (0.037) | | | 0.061 * (0.032) |
| A:F | | 0.035 (0.036) | | 0.072 * (0.038) |
| A:M | | | 0.052 (0.032) | 0.066 (0.051) |
| A:L:F | | | | −0.098 (0.070) |
| A:L:M | | | | −0.045 (0.064) |
| A:F:M | | | | −0.146 * (0.074) |
| A:L:F:M | | | | 0.157 (0.144) |
| Fixed Effects (Student, Week, Course Section, Unit (Exam)) | Yes | Yes | Yes | Yes |
| Observations | 5829 | 5829 | 5829 | 5829 |
| R² | 0.108 | 0.108 | 0.108 | 0.109 |
| Within R² | 0.002 | 0.002 | 0.002 | 0.002 |

Significance of coefficients denoted as: * = p-value < 0.05.

Appendix D. Fixed Effects Regression Results for “Sometimes-Completer” Students

Table A6. Fixed Effects Panel Regression estimates for all question difficulties for sometimes completers only. Dependent Variable: Exam Question Correct (1 = Yes, 0 = No); standard errors in parentheses.

| | Low-Income Only | First-Gen Only | Minority Only | All Interactions |
|---|---|---|---|---|
| Completer Indicator (A) | 0.040 *** (0.009) | 0.038 *** (0.009) | 0.020 ** (0.009) | 0.034 *** (0.011) |
| A:Low-Income Indicator (L) | −0.015 (0.022) | | | −0.052 (0.035) |
| A:First-Gen Indicator (F) | | −0.007 (0.019) | | −0.015 (0.020) |
| A:Minority Indicator (M) | | | 0.058 *** (0.017) | 0.035 * (0.020) |
| A:L:F | | | | −0.031 (0.049) |
| A:L:M | | | | 0.044 (0.047) |
| A:F:M | | | | 0.014 (0.045) |
| A:L:F:M | | | | 0.121 (0.087) |
| Fixed Effects (Student, Week, Course Section, Unit (Exam)) | Yes | Yes | Yes | Yes |
| Observations | 27,522 | 27,522 | 27,522 | 27,522 |
| R² | 0.104 | 0.104 | 0.105 | 0.105 |

Significance of coefficients denoted as: * = p-value < 0.05; ** = p-value < 0.01; *** = p-value < 0.001.

References

  1. Kuh, G.D.; Kinzie, J.L.; Buckley, J.A.; Bridges, B.K.; Hayek, J.C. What Matters to Student Success: A Review of the Literature; National Postsecondary Education Cooperative: Washington, DC, USA, 2006; Volume 8. [Google Scholar]
  2. Millea, M.; Wills, R.; Elder, A.; Molina, D. What matters in college student success? Determinants of college retention and graduation rates. Education 2018, 138, 309–322. [Google Scholar]
  3. Lewis, N.A., Jr.; Yates, J.F. Preparing disadvantaged students for success in college: Lessons learned from the preparation initiative. Perspect. Psychol. Sci. 2019, 14, 54–59. [Google Scholar] [CrossRef] [PubMed]
  4. Kassis, M.M.; Boldt, D.J. Factors impacting student success in introductory economics courses. J. Econ. Educ. 2020, 20, 41–63. [Google Scholar]
  5. Anderson, J.; Devlin, M. Data analytics in adaptive learning for equitable outcomes. In Data Analytics and Adaptive Learning; Routledge: Abingdon, UK, 2023; pp. 170–188. [Google Scholar]
  6. Hout, M. Social and economic returns to college education in the United States. Annu. Rev. Sociol. 2012, 38, 379–400. [Google Scholar] [CrossRef]
  7. Tyton Partners. Learning to Adapt: A Case for Accelerating Adaptive Learning in Higher Education. 2013. Available online: https://tytonpartners.com/learning-to-adapt-a-case-for-accelerating-adaptive-learning-in-higher-education/ (accessed on 1 May 2024).
  8. McCarthy, B. Journey to Personalized Learning-Bright Future: A Race to the Top-District Initiative in Galt Joint Union Elementary School District. WestEd. 2017. Available online: https://www.wested.org/wp-content/uploads/2017/03/resource-journey-to-personalized-learning.pdf (accessed on 1 May 2024).
  9. Konnova, L.; Lipagina, L.; Postovalova, G.; Rylov, A.; Stepanyan, I. Designing adaptive online mathematics course based on individualization learning. Educ. Sci. 2019, 9, 182. [Google Scholar] [CrossRef]
  10. Sharma, A.; Szostak, B. Adapting to Adaptive Learning. 2018. Available online: https://www.chieflearningofficer.com/2018/01/10/adapting-adaptive-learning/ (accessed on 1 May 2024).
  11. Gebhardt, K. Adaptive learning courseware as a tool to build foundational content mastery: Evidence from principles of microeconomics. Curr. Issues Emerg. eLearning 2018, 5, 2. [Google Scholar]
  12. Dziuban, C.; Moskal, P.; Parker, L.; Campbell, M.; Howlin, C.; Johnson, C. Adaptive Learning: A Stabilizing Influence across Disciplines and Universities. Online Learn. 2018, 22, 7–39. [Google Scholar] [CrossRef]
  13. Xie, H.; Chu, H.C.; Hwang, G.J.; Wang, C.C. Trends and development in technology-enhanced adaptive/personalized learning: A systematic review of journal publications from 2007 to 2017. Comput. Educ. 2019, 140, 103599. [Google Scholar] [CrossRef]
  14. Murray, M.C.; Pérez, J. Informing and performing: A study comparing adaptive learning to traditional learning. Inform. Sci. Int. J. Emerg. Transdicipline 2015, 18, 111. [Google Scholar]
  15. Griff, E.R.; Matter, S.F. Evaluation of an adaptive online learning system. Br. J. Educ. Technol. 2013, 44, 170–176. [Google Scholar] [CrossRef]
  16. White, G. Adaptive learning technology relationship with student learning outcomes. J. Inf. Technol. Educ. Res. 2020, 19, 113–130. [Google Scholar] [CrossRef] [PubMed]
  17. Hagerty, G.; Smith, S. Using the web-based interactive software ALEKS to enhance college algebra. Math. Comput. Educ. 2005, 39, 183–194. [Google Scholar]
  18. Bailey, A.; Vaduganathan, N.; Henry, T.; Laverdiere, R.; Pugliese, L. Making Digital Learning Work: Success Strategies from Six Leading Universities and Community Colleges; Boston Consulting Group: Boston, MA, USA, 2018. [Google Scholar]
  19. Yarnall, L.; Means, B.; Wetzel, T. Lessons learned from early implementations of adaptive courseware. SRI Educ. 2016. [Google Scholar] [CrossRef]
  20. Li, W.; Sun, K.; Schaub, F.; Brooks, C. Disparities in students’ propensity to consent to learning analytics. Int. J. Artif. Intell. Educ. 2022, 32, 564–608. [Google Scholar] [CrossRef]
  21. Bergé, L. Efficient Estimation of Maximum Likelihood Models with Multiple Fixed-Effects: The R Package FENmlm. CREA Discussion Papers. 2018. Available online: https://cran.r-project.org/web/packages/FENmlm/vignettes/FENmlm.html (accessed on 1 August 2024).
  22. Renick, T.M. Predictive analytics, artificial intelligence and the impact of delivering personalized supports to students from underserved backgrounds. In Data Analytics and Adaptive Learning; Routledge: Abingdon, UK, 2023; pp. 78–91. [Google Scholar]
  23. Long, M.G.; Gebhardt, K.; McKenna, K. Success Rate Disparities between Online and Face-to-Face Economics Courses: Understanding the Impacts of Student Affiliation and Course Modality. Online Learn. 2023, 27, 461–485. [Google Scholar] [CrossRef]
  24. Haak, D.C.; HilleRisLambers, J.; Pitre, E.; Freeman, S. Increased structure and active learning reduce the achievement gap in introductory biology. Science 2011, 332, 1213–1216. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Weekly adaptive learning assignment completer status (percentage of students).
Figure 2. Number of adaptive learning assignments completed (number of students).
Table 1. Exam question difficulty classification.

| Difficulty Ranking | Difficulty Description | Bloom's Taxonomy | Exam Question Keywords |
|---|---|---|---|
| 1 | Easy | Remember / Knowledge | Define, Identify, Choose |
| 2 | Moderate | Understand / Comprehension | Explain, Interpret, Show |
| 3 | Difficult | Apply / Application | Calculate, Implement, Solve |
Table 2. Summary of student statistics. Columns report All Students; Frequent + Always Completers (F/A); Infrequent + Never Completers (I/N); and the p-value of a t-test of (F/A) − (I/N).

| Variable | n | Mean | SD | Min. | Max. | n (F/A) | Mean (F/A) | SD (F/A) | n (I/N) | Mean (I/N) | SD (I/N) | t-Test (p-Value) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Female | 581 | 0.470 | 0.500 | 0.000 | 1.000 | 490 | 0.500 | 0.500 | 91 | 0.310 | 0.460 | 0.0004 |
| Census GPA | 581 | 2.950 | 0.890 | 0.000 | 4.000 | 490 | 2.930 | 0.910 | 91 | 3.060 | 0.740 | 0.1476 |
| Resident | 581 | 0.750 | 0.440 | 0.000 | 1.000 | 490 | 0.750 | 0.430 | 91 | 0.730 | 0.450 | 0.6427 |
| Honors Participation | 581 | 0.040 | 0.190 | 0.000 | 1.000 | 490 | 0.030 | 0.180 | 91 | 0.040 | 0.210 | 0.6897 |
| Low-income (L) | 581 | 0.180 | 0.390 | 0.000 | 1.000 | 490 | 0.180 | 0.380 | 91 | 0.200 | 0.400 | 0.6564 |
| First-gen (F) | 581 | 0.250 | 0.440 | 0.000 | 1.000 | 490 | 0.250 | 0.430 | 91 | 0.300 | 0.460 | 0.3400 |
| Minority (M) | 581 | 0.260 | 0.440 | 0.000 | 1.000 | 490 | 0.250 | 0.440 | 91 | 0.270 | 0.450 | 0.6717 |
| Participation | 580 | 0.850 | 0.360 | 0.000 | 1.000 | 490 | 0.900 | 0.300 | 91 | 0.580 | 0.500 | 0.0000 |
| Major Requirement | 468 | 0.720 | 0.260 | 0.000 | 1.000 | 392 | 0.720 | 0.450 | 76 | 0.700 | 0.460 | 0.7036 |
| Adaptive Assignment | 581 | 0.790 | 0.260 | 0.000 | 1.000 | 490 | 0.880 | 0.140 | 91 | 0.280 | 0.170 | 0.0000 |
| Exam Questions Correct | 579 | 0.720 | 0.110 | 0.380 | 0.980 | 490 | 0.720 | 0.110 | 89 | 0.690 | 0.110 | 0.0039 |

Notes: Most variables are expressed as a binary with values equal to 1 if the characteristic is true of the student, 0 otherwise, unless otherwise noted. Participation is 1 if the in-class polling (iClicker) score was at least 60%. Adaptive Assignment is the LearnSmart assignment completion as a percentage (where students who fully complete earn 100%, partially complete earn between 0% and 100%, and do not complete earn 0%). F/A students completed 8 or more adaptive assignments and I/N students completed 7 or fewer.
Table 3. Mean exam question correctness by adaptive learning assignment completer status and key demographic status.

| Group | n (NC) | Mean (NC) | SD (NC) | Diff (BL, NC) | n (C) | Mean (C) | SD (C) | Diff (BL, C) | Diff (C−NC) | Comp (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| Baseline | 4354 | 0.69 | 0.46 | | 20,751 | 0.74 | 0.44 | | 0.05 | 83 |
| L only | 621 | 0.71 | 0.45 | 0.02 | 2144 | 0.72 | 0.45 | −0.02 | 0.01 | 78 |
| F only | 1484 | 0.69 | 0.46 | −0.01 | 6712 | 0.70 | 0.46 | −0.04 | 0.01 | 82 |
| M only | 1416 | 0.65 | 0.48 | −0.04 | 4974 | 0.75 | 0.44 | 0.01 | 0.10 | 78 |
| L + F | 385 | 0.69 | 0.46 | −0.02 | 1210 | 0.65 | 0.48 | −0.09 | −0.04 | 76 |
| L + M | 460 | 0.72 | 0.45 | 0.03 | 3230 | 0.76 | 0.43 | 0.02 | 0.04 | 88 |
| F + M | 353 | 0.62 | 0.49 | −0.07 | 1487 | 0.66 | 0.47 | −0.08 | 0.04 | 81 |
| L + F + M | 286 | 0.57 | 0.50 | −0.12 | 818 | 0.69 | 0.46 | −0.05 | 0.12 | 74 |

Notes: NC = non-completer; C = completer. Diff (BL) is the difference in mean from baseline by completer status for each student status group. Diff (C−NC) is the difference in mean for each student group by completer status. Comp is the percent of students classified as completers of the adaptive learning assignments.
Table 4. Mean question correctness by adaptive learning assignment completer status and exam question difficulty.

| Group | Difficulty | n (NC) | Mean (NC) | SD (NC) | n (C) | Mean (C) | SD (C) | Diff (C−NC) |
|---|---|---|---|---|---|---|---|---|
| All students | Easy | 3639 | 0.72 | 0.45 | 15,721 | 0.78 | 0.42 | 0.06 |
| All students | Moderate | 3049 | 0.68 | 0.47 | 13,972 | 0.72 | 0.45 | 0.04 |
| All students | Difficult | 2671 | 0.62 | 0.49 | 11,650 | 0.67 | 0.47 | 0.05 |
| Baseline | Easy | 1684 | 0.73 | 0.44 | 7882 | 0.79 | 0.41 | 0.06 |
| Baseline | Moderate | 1482 | 0.69 | 0.46 | 7007 | 0.73 | 0.44 | 0.04 |
| Baseline | Difficult | 1241 | 0.63 | 0.48 | 5862 | 0.68 | 0.47 | 0.05 |
| L only | Easy | 245 | 0.74 | 0.44 | 809 | 0.78 | 0.42 | 0.04 |
| L only | Moderate | 202 | 0.72 | 0.45 | 726 | 0.71 | 0.45 | −0.01 |
| L only | Difficult | 174 | 0.66 | 0.47 | 608 | 0.64 | 0.48 | −0.02 |
| F only | Easy | 579 | 0.73 | 0.44 | 2555 | 0.74 | 0.44 | 0.01 |
| F only | Moderate | 485 | 0.68 | 0.47 | 2265 | 0.71 | 0.45 | 0.03 |
| F only | Difficult | 420 | 0.64 | 0.48 | 1892 | 0.64 | 0.48 | 0.00 |
| M only | Easy | 537 | 0.68 | 0.47 | 1905 | 0.80 | 0.40 | 0.12 |
| M only | Moderate | 468 | 0.65 | 0.48 | 1685 | 0.72 | 0.45 | 0.07 |
| M only | Difficult | 411 | 0.60 | 0.49 | 1384 | 0.69 | 0.46 | 0.09 |
| L + F + M | Easy | 109 | 0.67 | 0.47 | 315 | 0.73 | 0.44 | 0.06 |
| L + F + M | Moderate | 88 | 0.51 | 0.50 | 278 | 0.68 | 0.47 | 0.17 |
| L + F + M | Difficult | 89 | 0.51 | 0.50 | 225 | 0.63 | 0.48 | 0.12 |
Table 5. Fixed Effects Panel Regression estimates for all question difficulties. Dependent Variable: Exam Question Correct (1 = Yes, 0 = No); standard errors in parentheses.

| | Low-Income Only | First-Gen Only | Minority Only | All Interactions |
|---|---|---|---|---|
| Completer Indicator (A) | 0.038 *** (0.008) | 0.036 *** (0.009) | 0.018 * (0.009) | 0.033 ** (0.011) |
| Low-Income Indicator (L) | −1.754 (50,650.2) | | | −1.510 (40,453.8) |
| First-Gen Indicator (F) | | −0.530 (22,125.3) | | −0.554 (21,763.2) |
| Minority Indicator (M) | | | −0.581 (27,499.7) | |
| A:L | −0.018 (0.021) | | | −0.056 (0.034) |
| A:F | | −0.008 (0.019) | | −0.015 (0.020) |
| A:M | | | 0.055 *** (0.017) | 0.039 (0.020) |
| A:L:F | | | | −0.024 (0.050) |
| A:L:M | | | | 0.034 (0.047) |
| A:F:M | | | | −0.006 (0.045) |
| A:L:F:M | | | | 0.144 (0.088) |
| Fixed Effects (Student, Week, Course Section, Unit (Exam)) | Yes | Yes | Yes | Yes |
| Observations | 40,900 | 40,900 | 40,900 | 40,900 |
| R² | 0.100 | 0.100 | 0.101 | 0.101 |

Each column identifies regressions with different combinations of included key demographic controls. Significance of coefficients is denoted as: * = p-value < 0.05; ** = p-value < 0.01; *** = p-value < 0.001 (i.e., one * indicates that the estimate is statistically significant at the 95% confidence level, α = 0.05). Controls are consistent across specifications and include female status, census GPA, residency status, honors program participation, a major requirement indicator, and a participation indicator. Week-of-semester fixed effects are included as timing controls. The coefficients on rows such as A:L are differential effects of adaptive learning assignment completion for the given combinations of key demographics.
Table 6. Fixed Effects Panel Regression estimates for all question difficulties for students from minority groups, taking the course as a major requirement vs. those who are not. Dependent Variable: Exam Question Correct (1 = Yes, 0 = No); standard errors in parentheses.

| | Major Requirement, Minority Only | Major Requirement, All Interactions | Not a Requirement, Minority Only | Not a Requirement, All Interactions |
|---|---|---|---|---|
| Completer Indicator (A) | 0.024 ** (0.011) | 0.043 *** (0.013) | 0.004 (0.016) | 0.008 (0.020) |
| A:Low-Income Indicator (L) | | −0.080 * (0.041) | | 0.031 (0.024) |
| A:First-Gen Indicator (F) | | −0.019 (0.025) | | −0.006 (0.036) |
| A:Minority Indicator (M) | 0.041 ** (0.019) | 0.027 (0.024) | 0.084 *** (0.032) | 0.065 (0.039) |
| A:L:F | | 0.006 (0.057) | | −0.167 * (0.094) |
| A:L:M | | 0.052 (0.058) | | −0.058 (0.056) |
| A:F:M | | 0.009 (0.057) | | −0.035 (0.061) |
| A:L:F:M | | 0.040 (0.098) | | 0.456 *** (0.137) |
| Fixed Effects (Student, Week, Course Section, Unit (Exam)) | Yes | Yes | Yes | Yes |
| Observations | 40,900 | 40,900 | 40,900 | 40,900 |
| R² | 0.100 | 0.100 | 0.101 | 0.101 |

Significance of coefficients denoted as: * = p-value < 0.05; ** = p-value < 0.01; *** = p-value < 0.001.
Table 7. Fixed Effects Panel Regression estimates for all question difficulties for active participant students. Dependent Variable: Exam Question Correct (1 = Yes, 0 = No); standard errors in parentheses.

| | Low-Income Only | First-Gen Only | Minority Only | All Interactions |
|---|---|---|---|---|
| Completer Indicator (A) | 0.043 *** (0.009) | 0.040 *** (0.010) | 0.019 * (0.011) | 0.044 *** (0.012) |
| A:Low-Income Indicator (L) | −0.039 (0.025) | | | −0.073 ** (0.037) |
| A:First-Gen Indicator (F) | | −0.019 (0.021) | | −0.039 * (0.022) |
| A:Minority Indicator (M) | | | 0.053 *** (0.020) | 0.024 (0.022) |
| A:L:F | | | | −0.004 (0.053) |
| A:L:M | | | | 0.030 (0.055) |
| A:F:M | | | | 0.030 (0.053) |
| A:L:F:M | | | | 0.175 * (0.102) |
| Fixed Effects (Student, Week, Course Section, Unit (Exam)) | Yes | Yes | Yes | Yes |
| Observations | 35,071 | 35,071 | 35,071 | 35,071 |
| R² | 0.098 | 0.098 | 0.098 | 0.099 |

Significance of coefficients denoted as: * = p-value < 0.05; ** = p-value < 0.01; *** = p-value < 0.001.
Table 8. Fixed Effects Panel Regression estimates stratified by question difficulty. Dependent Variable: Exam Question Correct (1 = Yes, 0 = No); standard errors in parentheses.

| | Easy Questions | Moderate Questions | Difficult Questions |
|---|---|---|---|
| Completer Indicator (A) | 0.034 ** (0.016) | 0.032 * (0.016) | 0.026 (0.023) |
| A:Low-Income Indicator (L) | −0.033 (0.059) | −0.011 (0.044) | −0.132 ** (0.063) |
| A:First-Gen Indicator (F) | −0.037 (0.033) | 0.030 (0.046) | −0.035 (0.042) |
| A:Minority Indicator (M) | 0.075 *** (0.026) | −0.033 (0.034) | 0.084 ** (0.037) |
| A:L:F | −0.050 (0.080) | −0.087 (0.087) | 0.076 (0.085) |
| A:L:M | −0.028 (0.073) | 0.028 (0.066) | 0.108 (0.085) |
| A:F:M | −0.046 (0.054) | 0.041 (0.080) | −0.011 (0.086) |
| A:L:F:M | 0.196 (0.132) | 0.175 (0.127) | 0.046 (0.139) |
| Fixed Effects (Student, Week, Course Section, Unit (Exam)) | Yes | Yes | Yes |
| Observations | 15,671 | 13,716 | 11,512 |
| R² | 0.125 | 0.144 | 0.133 |

Significance of coefficients denoted as: * = p-value < 0.05; ** = p-value < 0.01; *** = p-value < 0.001.