Transfer has attracted educational and psychological research attention for more than 100 years (see, for example, [13]). However, despite the ubiquity of transfer, there are gaps in the literature, including a scarcity of applied studies. Although there are insightful sociocultural perspectives offering qualitative accounts of transfer [14], to date research measuring transfer has been dominated by experimental approaches, controlled conditions, and customised research assessment tools [16]. This study, by contrast, provides a naturalistic measure of transfer, using standard university assessments (details to follow). By naturalistic, we mean that transfer has been measured in natural and authentic learning contexts: in this case, classroom tests and end-of-semester exams that form part of routine university assessment.
The transfer examined in this paper can be conceived of using Barnett and Ceci’s [20] taxonomy for transfer (see Table 1), which outlines the possible contexts for transfer. In terms of this study, we examine transfer of mathematical learning to science and engineering across the following contexts: knowledge domain (mathematics/science), physical context (different rooms at university), temporal context (one semester later), functional context (both clearly academic), social context (both individual), and modality (both written exams or tests). Applying this typology, we can describe the transfer of learning examined here as near transfer.
2.1. Transfer in Science/Engineering Educational Research
Given the acknowledged importance of transfer of learning, we argue that there has been a relative paucity of research examining it in recent decades. This is especially the case in relation to transfer between mathematics and science [21], and is surprising given the substantial policy and investment focus recently directed to STEM education. However, within the relatively slim literature, several different aspects of transfer have been investigated, with some studies exploring quantitative skills in university science education (see [22]), and others looking at the transfer of mathematics more generally, using both quantitative and qualitative methodological approaches (see, for example, [14]).
According to Becker and Park’s [25] meta-analysis, the integration of all four STEM domains together has the largest effect size on student achievement. However, other research suggests that while learning outcomes are best predicted by prior learning in the same subject area, mathematics learning is consistently the most influential interdisciplinary predictor of those outcomes [26].
Mathematics is foundational to science and engineering, and it has been argued that “the best ‘practical’ approach to mathematics is to understand it as a language for describing physical and chemical laws” [28] (p. 145). Sazhin [28] also emphasises the need for balance between practical application and in-depth understanding of mathematical equations. Given the ever-increasing dependency of science/engineering on the mathematical sciences, this need for a balanced approach takes on even more importance [7].
To date, there has been little research examining the transfer of mathematics to science. This is surprising, as worldwide many college and university programs are built upon the assumption that learning in one area transfers into the primary disciplinary learning area. For example, university “service courses” in mathematics are offered by mathematics departments to students in non-mathematics majors, such as science and engineering. These courses cover mathematical content (e.g., differential and integral calculus) that can be applied to learning in disciplines other than mathematics. In this way, what is learned in the service courses is assumed to contribute to learning in other disciplinary areas, such as science and engineering. A measure of the transfer of learning from these courses to others would be a practical aid in evaluating, and innovating to improve, that interdisciplinary learning. There are a handful of studies measuring transfer, which can help inform the development of such a measure. We review these, and subsequently develop a measure of transfer from mathematics to science/engineering at university. Such a measure will enable us to test the assumption that mathematics learning is transferred to science/engineering, and provide a tool for understanding this process further, so that it can be explored and improved.
2.2. Quantitative Measures of Transfer
There is a wide range of approaches used to quantitatively measure transfer, and much of the broader transfer literature highlights the difficulties of empirical study, including problems in demonstrating examples [29]. Furthermore, Potgieter et al. [34] remind us of the difficulty, and the subsequent disappointment, faced by researchers in demonstrating transfer, and of their tendency to assume that the mathematics has first been learnt, which may not always be the case. Thus, attempting to measure mathematical transfer at university is a difficult but important endeavour.
Within the scant literature specifically on mathematics transfer, there are two different formulae for the quantitative measurement of students’ transfer of mathematical learning to science (see Table 2). First, Britton et al. [35] developed an instrument consisting of two parts: mathematics components and non-mathematics components, such as physics. They used the transfer rating to give a relative score of transfer, based on a comparison of scores on the mathematics components with scores on their application, such as in the physics components. However, this transfer rating (No. 1) had a problem: the student with the lowest mathematics score had the highest transfer rating.
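To illustrate the problem with a relative rating of this kind, consider a minimal sketch in Python. The exact formula appears in Table 2 of the original studies and is not reproduced here; the ratio form below is our own hypothetical construction, chosen only because it exhibits the pathology described in the text (a weak mathematics score inflates the rating).

```python
def transfer_rating(maths_score: float, physics_score: float) -> float:
    """Hypothetical ratio-style transfer rating (assumed form, for
    illustration only): application score relative to mathematics score."""
    return physics_score / maths_score

# Two students with the same physics score of 4:
print(transfer_rating(2, 4))   # weaker mathematics -> higher rating
print(transfer_rating(10, 4))  # stronger mathematics -> lower rating
```

Under any rating of this relative form, the student with the lowest mathematics score can obtain the highest transfer rating, which is precisely the shortcoming that motivated the second index.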
To overcome this problem, a second mathematics transfer index (No. 2) was developed by Roberts et al. [36]. Their index was calculated by summing transfer scores for pairs of questions with mathematically matched content, reflecting the degree of transfer of learning. Using this index, the researchers showed that transfer was associated with the Universities Admission Index and with university marks in mathematics and science (Spearman’s rank correlation coefficients of 0.58 (n = 36, p < 0.01), 0.62 (n = 47, p < 0.01) and 0.61 (n = 43, p < 0.01), respectively). However, like Britton et al. [35], this study also used a customised exam to assess transfer, and the sample consisted of student volunteers. The test was composed of two sections, i.e., mathematics and non-mathematics (biology). There were seven pairs of questions, each consisting of a mathematics and a non-mathematics component, and the paired questions were matched in terms of transfer of mathematics. In other words, students were required to apply mathematical skills and knowledge from a mathematics question (e.g., exponential and logarithmic functions) to its corresponding non-mathematics question (e.g., exponential decay in the context of biology). For each pair of questions, a transfer score of 0, 1 or 2 was allocated. The four patterns for allocating transfer scores were as follows:
(i) A student gave the correct answer in both sections. A transfer score of 2 was given, as it is assumed that transfer of learning occurred;
(ii) A student gave the wrong answer to a mathematics question but answered its corresponding non-mathematics question correctly. A score of 1 was given, as it was considered that transfer of learning had occurred to some extent;
(iii) A student gave the right answer in the mathematics section but not on the corresponding question in the non-mathematics section. A score of 0 was awarded;
(iv) A student answered both questions incorrectly. A score of 0 was given.
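The scoring scheme above can be expressed as a short routine. The following Python sketch is our own illustration of the described patterns, not code from Roberts et al.; the seven example pairs are hypothetical.

```python
def transfer_score(math_correct: bool, science_correct: bool) -> int:
    """Allocate a transfer score for one matched question pair,
    following the four patterns described in the text."""
    if math_correct and science_correct:
        return 2  # pattern (i): transfer assumed to have occurred
    if not math_correct and science_correct:
        return 1  # pattern (ii): some transfer despite the mathematics error
    return 0      # patterns (iii) and (iv): no transfer credited

def transfer_index(pairs):
    """Sum the per-pair scores to give a student's transfer index."""
    return sum(transfer_score(m, s) for m, s in pairs)

# Seven matched pairs for one hypothetical student:
pairs = [(True, True), (True, False), (False, True),
         (True, True), (False, False), (True, True), (True, False)]
print(transfer_index(pairs))  # 7, out of a possible 14
```

With seven question pairs, the index ranges from 0 to 14, and the asymmetry between patterns (ii) and (iii) is visible in the scoring branches.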
In particular, the contrast between pattern (ii) and patterns (i) and (iii) is important when considering the degree to which transfer of learning has occurred. We adopted this approach to calculating transfer scores, although we acknowledge that it has limitations, particularly in relation to the allocation of points for (ii) and (iii). Where students get questions wrong in exams, this may not be due to a lack of learning but may instead be a product of exam conditions and performance factors in assessment. In measuring transfer, these factors are important, and they apply to two different sets of assessments, completed under differing conditions.
Roberts et al.’s [36] transfer index has strong content validity, because mathematicians and scientists cooperated to develop the customised transfer tasks; that is, they designed science questions specifically requiring the transfer of mathematical skills and knowledge. However, this strength is offset by other challenges, in terms of limited external and ecological validity. The development of the transfer index used only mathematics and biology questions in the instrument, but the students volunteered from a range of degree programs, and some had no experience in biology. The representativeness of the self-selecting samples of student volunteers is unknown and open to question, especially as it relies on volunteers willing to sit an additional transfer exam. The customised nature of the tasks also means that their relationship with the actual teaching, learning, and assessment that go on in universities can be questioned. We extend the work of Britton, Roberts, and colleagues by adapting and applying their approach to extant university assessment data, thus building ecological validity and exploring transfer in a full university cohort.
From the outset, it is important to note that because we use existing university exams, there are some constraints on transparency, and we are not able to provide all the details of the exam questions. Instead, we provide some synthetic examples. As educational researchers, we worked closely with mathematics and science faculty staff to conduct the study, but we had no influence over the exam content, and we needed to respect the confidentiality of the university exam system.
We developed two transfer-of-learning indices, and provided a path model to explain attainments in science, based upon data measuring prior attainment in mathematics, Australian Tertiary Admission Rank (ATAR, a university entrance rank), and the students’ transfer of mathematics. In doing so, we tested the feasibility of assessing transfer within an applied university context, and evaluated the premise that transfer of learning contributes to overall attainment.
We asked the question: what is the measurable transfer of learning from mathematics university service courses to biology, molecular bioscience, engineering, and physics? More specifically:
Can transfer of mathematics learning be observed in the biology, molecular bioscience, engineering, and physics exam performances?
How is transfer related to overall attainment in mathematics and science/engineering courses?
What are the relationships between general educational attainment (university entrance rank), mathematics attainment, science/engineering attainment, and the transfer of learning between mathematics and science/engineering?
We hope that our approach can be replicated and used by academics to evaluate and improve teaching for interdisciplinary transfer. Such a measure could also be used to empirically test the assumption that interdisciplinary learning occurs within university classes, across a wide range of disciplines.