1. Introduction
The term “student experience” increasingly functions as a fraught nomenclature and a discipline of its own within higher education [1]. Under this banner different political ideas about the university, the purpose of education, student voice, economics and learning are debated. These issues manifest within student experience survey work, with high political stakes externally through rankings and internally through quality assurance and enhancement regimes within institutions [2,3].
Policy changes have thrust the “student experience” to the forefront of the debate by promoting a different model, that of “student as consumer” or “student as customer” [4]. Across the UK, the Browne Review, consultation on fees and changes to government agencies have led to new models of funding and regulation [5,6]. Particularly in England, the shift from central allocation to a student purchase model places greater emphasis on bottom-up data collection: for information for applicants as well as institutional marketing [7]. In this respect, the nature of information about student experience, how this information is used and who it is used by have all undergone radical changes [8].
Measuring the student experience has become a key policy initiative in the UK [9], using data for three primary purposes. The first is part of the neo-liberal discourse of providing public information, largely to help prospective students, as customers, make informed choices in a market context [10]. The other two are internal and external quality assurance and quality enhancement efforts [11]. This tripartite purpose has presented challenges because one short survey currently dominates the undergraduate UK higher education landscape.
The US offers another approach to measuring the student experience. To counteract discussions on quality based on research and reputation-based rankings, the focus moved toward measuring the student experience based on student activities linked to student success in higher education [12]. Importantly, this provides actionable data for institutions to improve the student experience.
Selecting what to measure is a highly political undertaking, based on value judgements about the purpose of higher education. While research details the history and rationale for developing student experience surveys in specific contexts [12,13,14], little is known about the relationship between them.
This study aims to address this gap by analysing satisfaction-based and engagement-based survey items through an institutional student experience survey. It puts forward an alternative methodology to the one currently used in the UK and other countries. In doing so, it aims to bridge the seeming divergence between institutional pressures on the one hand and students' and staff's negative perceptions and experiences on the other. It highlights the need for a different focus in data collection approaches to the student experience. By exploring the relationship between satisfaction and engagement benchmarks, this study provides insights into different approaches to measuring the student experience and, in doing so, to enhancing it.
2. NSS: Measuring the UK Student Experience
The National Student Survey (NSS) originated in Australia [13], and was launched in the UK in 2005. It was agreed with the UK Government that the sector would publish key data on quality matters to help enable prospective students to make more informed judgements on where to study, and thus help to fulfil the accountability function of a sector in receipt of large sums of public money [14] (p. 557).
The origins of the NSS stemmed from the challenges of institutionally based student evaluations of teachers, which focused on individual staff, not broader course units, so were not suitable for aggregation. Such evaluations were difficult to scale up for use as wider performance indicators due to differing administration and collection procedures, a lack of standardization and their non-mandatory nature. In response, the Course Experience Questionnaire (CEQ) was explicitly developed as a performance indicator of teaching quality [13]. The NSS largely adopted items from the CEQ to replace the subject review process in assuring the quality of provision. Although it was developed primarily to inform prospective student choice, it soon became used for quality assurance and quality enhancement purposes [15], and now has wider regulatory purposes.
Performance in most league tables is very heavily influenced by NSS results [16]. The NSS covers particular aspects of support for teaching and learning and focuses on student satisfaction. However, there remains no clear consensus on a definition of student satisfaction or an accepted measure of satisfaction with higher education experiences (see [17]). It necessarily contains implicit assumptions about what good teaching provision looks like. Douglas et al. [18] found that student satisfaction is strongly correlated with how “attentive” the teaching staff were to students during their studies, particularly in the final year. Disadvantages are inevitable in any methodology, and there is now extensive literature critiquing the NSS and other similar approaches [19,20,21,22]. Additionally, the NSS faced strong opposition from several Russell Group student unions at its inception, and further boycotts due to its mooted link to raising tuition fees through the Teaching Excellence Framework. The National Union of Students is strongly opposed to the NSS and has called for its boycott since 2017.
Some have argued that the consumer theory basis of satisfaction surveys places the student in the role of customer and that the responsibilities and contribution of the student as learner are not represented. The NSS does not focus on active student engagement; it collects student perception data based on satisfaction measures. This highlights an implicit measure of students as consumers of education, with an inherent focus on “satisfaction” as opposed to “engagement” [23]. It has also been argued that it does not help institutions improve teaching provision [24]. “The course” as the unit of analysis silences arguments about institutional responsibility and policy context [1]. Furthermore, whilst the breadth of the NSS allows for comparisons across institutions, it presents challenges for institutions to improve internally. In his development of the NSS, Richardson [25] concluded that “As with students’ evaluations of teaching, there is little evidence that the collection of student feedback using the CEQ in itself leads to any improvement in the perceived quality of programmes of study” (p. 400).
Universities await the summer release of the NSS results with anxiety and usually take drops in satisfaction very seriously. Those that fare well tend to be very vocal about it and publicise the results, whereas those that do not tend to focus much of their attention on improving their scores the following year [26]. It is important to note that, in many cases, despite the substantial resources allocated to improving scores, these efforts meet with mixed success.
The NSS has presented challenges to institutions [27], particularly in how the data are presented publicly [28]. On one hand, it has led them to pay considerable attention to specific aspects of the student experience and to devote considerable resources to addressing issues flagged up by NSS results (primarily issues around assessment and feedback across the sector) (see [29]). On the other hand, the survey is widely disliked by faculty [30] and students alike. For students, since they fill in the survey at the end of their final year, there is no real “benefit” to them from changes put in place as a consequence of what may be deemed poor results on various items. For academics, it is often used as a blunt instrument against departments and provides such general data that individuals struggle to respond to the results. Academics are generally sceptical of survey instruments that are mainly used by external bodies to rank departments and of the quality of the information provided, not least because of the perceived lack of quality of the survey in question [24,30].
However, regular reviews of the NSS, including the most recent call for a “root and branch” review in 2020 by the Department for Education, have all concluded that the sector is comfortable with the NSS. The data have become so embedded in sector quality assurance and quality enhancement processes, and so integral to marketing campaigns, that sector leaders would be lost without them. Furthermore, although there is little evidence that NSS scores directly influence students in their higher education choices, they do provide a sense-check in students’ decision-making processes.
Within a student consumer model, student experience is measured by pre-entry student expectations, on-course programme satisfaction and post-course graduate reflection, employment and salary data [31]. Institutions are interested in taking advantage of such data for enhancement; however, in practice, the underlying aims and epistemologies often challenge coherent efforts for improvement, particularly of teaching and learning [32]. As any student survey will implicitly transmit a certain paradigm of learning, it is essential that institutions deploy quantitative research instruments that are appropriate, cohesive and meaningful [33].
3. NSSE: Measuring the US Student Experience
There has been quite a different story in the US regarding student experience data. In response to public discourse dominated by institutional rankings, largely based on research and reputation metrics, a need was seen to change the conversation around quality in higher education. Research in the 1980s, funded through the Spencer Foundation, led to the College Student Experience Questionnaire (CSEQ), which focused on “quality of effort” to explore student learning, development and progress towards the attainment of important goals of higher education [34]. This questionnaire was concerned with the quality of the process in addition to the quality of the product, valuing the process in its own right rather than merely for its relationship to the product. It focused on objectively observable behaviour, with 14 scales, half of which addressed students’ use of facilities and opportunities for experiences in their environment.
This notion of student involvement, or engagement, has been seen to provide an instructive focus for researching the student experience. Researchers have developed two key components of the concept of student engagement [35]. The first is the amount of time and effort students put into academic pursuits and other activities associated with high levels of learning and personal development, as demonstrated by decades of research (see [36,37,38]). The second is how institutions allocate their resources and organise their curriculum, other learning opportunities and support services [23]. Together, these areas measure how institutions provide an environment for students that leads to the experiences and outcomes that constitute student success, broadly defined as persistence, learning and degree attainment [12].
Chickering and Gamson [36] outline a variety of educational practices that are associated with high levels of student engagement: student–staff contact, cooperation among students, active learning, prompt feedback, time on task and high expectations. These practices and metrics form the basis of the National Survey of Student Engagement (NSSE), used widely in the US, Canada, Ireland and other countries to provide data on students’ university experiences. Whilst retaining a focus on aspects that are likely to lead to effective student learning, the NSSE reflects a move away from student satisfaction. It asks about student behaviours, institutional actions and requirements, reactions to the institution, and student background information, providing a system of student evaluation and feedback that allows for local customisation, in keeping with the particular mission of the institution. The NSSE focuses on the time and effort students devote to educationally purposeful activities and on students’ perceptions of the quality of other aspects of their university experience, and the data allow institutions to identify areas of the student experience, inside and outside of the classroom, that can be improved. Engagement in this context is also seen as a mutual responsibility: both the institution and the students have to be actively involved in many aspects of the student experience.
In its development, the NSSE explicitly did not focus on directly measuring student outcomes, but rather on providing information that institutions could use to focus their efforts on improving the student experience. “Indices of effective educational practice can thus serve as a valuable proxy for quality in undergraduate education” ([12], p. 12), because quality is not just about inputs, but what institutions do with them [12]. It was designed as a tool to assess the extent to which institutions were using the good educational practices identified by leading higher education researchers [36,38].
The NSSE has run annually in the US since 2000; in 2020, 484,242 students at 601 institutions completed it [39]. There has been extensive international interest in the NSSE, including from Australia, China, Ireland and South Africa. Engagement provides a useful measure of what students do inside and outside of the classroom and of the gains in skills and competencies they have acquired. Institutions with highly engaged students can be used as benchmarks of effective educational practice. Ideally, this information can be used by prospective students to choose where to attend university, and by employers to seek out the best-prepared graduates.
3.1. Student Engagement in the UK
In the UK, there has been increasing interest in the NSSE, with the government funding several studies that explored engagement work abroad and its role and relevance in the UK [19,40,41,42,43]. This line of work resulted in the development of the UK Engagement Survey (UKES) [44]. However, in policy and practice, the NSS continues to drive the decisions of many senior management teams and the institutional use of engagement surveys has been sporadic.
Several key themes are addressed in the comparison of engagement and satisfaction surveys in the UK context. These include the relevance and definition of the engagement concept and the need to adapt questions to different national contexts. Since its inception, there have been competing agendas of merging engagement themes into the NSS, running an engagement survey alongside the NSS or replacing the NSS altogether. There is widespread debate about the usefulness of nationally standardised, institutionally based learning experience surveys, which are common in Western higher education systems but not universally adopted (as module- and course-based evaluations largely have been).
3.2. Summary of Satisfaction and Engagement Approaches
Engagement and satisfaction approaches have different outcome goals. They also differ in who their precise audiences are (i.e., government, quality assurance, current students, prospective students, university managers and academic staff), who controls and disseminates data and results (both officially and unofficially through league tables), and who has responsibility for using the data to enhance learning and direct change.
The NSS has had a significant influence on national policy decisions. Data from the survey featured in the launch of the Teaching Excellence Framework, a new regulatory tool in England. It is also a key factor in domestic league tables, which influence institutional decision making. This can support institutional enhancement efforts but the literature on this is sparse [45]. The accountability focus of the NSS has led to accusations of regulatory burden and links to grade inflation [46].
In the US, however, the NSSE is sustained by institutional subscriptions and has a wide impact on institutional-level enhancement. There is a broad community of users of the data and a plethora of research and case studies on the use of engagement data for accreditation and improvement (see [47]). Research drawing on engagement data is used as evidence to support institutional, state and regional policy decision-making [48].
Despite the integration of a few engagement-based questions into the NSS, these two student experience survey approaches have largely been seen as being in opposition. This paper draws on a case study that attempts to merge the two approaches into one institutional survey. Survey data were collected using items drawn from both the NSSE and the NSS to provide a platform for discussion about the use of satisfaction and engagement surveys in the UK context.
4. Methodology
The Institutional Experience Survey (IES) was developed at a large, urban research-intensive UK institution to provide information and assistance for faculties, departments, managers, students and others, to improve student learning and the student experience. Data was collected to help understand how the student population engaged with different aspects of their student life and to help the institution ensure it was offering the best possible experience, matched to students’ needs. The survey documents dimensions of quality in undergraduate education, inviting students to assess the extent to which they engage in educational practices associated with high levels of learning and development. All participants gave their informed consent for participation in the study, which was granted ethical approval by the institution’s ethics committee.
4.1. Survey Design and Sampling
The IES combined the approaches of the NSSE and the NSS, with 40 of the survey’s 71 questions derived from the NSSE. Another 10 questions asked how students spend their time, four drew from the NSS, two were open-ended questions about what the institution does well and what it could do to improve, and the rest were designed to address strategic concerns of the institution. A majority of the items were drawn from validated scales from the core NSS and NSSE surveys. Both the NSS and NSSE have been widely tested for validity and reliability (see [12,25]). Further scales were developed from additional item banks of the surveys. Reliability and cognitive testing of the complete IES survey were completed in partnership with groups of undergraduate students, validating the wording of the questions and item responses and confirming content and construct validity. The survey was specifically designed to incorporate both engagement and satisfaction survey approaches in order to measure the relationship between them.
The survey adopted probabilistic sampling and was sent to just over 9000 non-final-year undergraduate students during the winter term, yielding 1480 responses. The institutional response rate was 16.4 per cent. Of the respondents, 42 per cent were first-year students, 40 per cent were second-year students and the rest were third- and fourth-year students. In common with national survey responses, two-thirds of respondents were female. Three-quarters of the respondents were home students; the remainder were split between international and EU students. The reason for sending the survey to non-final-year students was to avoid overburdening final-year students with surveys, since they would be responding to the NSS around the same time. The survey was run as part of the institution’s internal quality assurance and quality enhancement processes.
4.2. Benchmarks
Items were grouped into “benchmarks” of related activities, following the NSSE approach, to organise items thematically and improve the understanding and use of the data by staff with varying levels of quantitative skills [12]. Each benchmark summarises students’ responses to a set of related questions and concisely distils important aspects of the student experience inside and outside of the classroom. The benchmarks were created from scales covering 55 of the 71 questions on the survey. Each benchmark is expressed on a 100-point scale, computed by rescaling responses to each component question from 0 to 100 and then taking the average of the survey items. The 17 benchmarks are as follows: Critical Thinking, Course Challenge, Academic Integration, Collaborative Learning, Research-rich Environment, Interdisciplinarity, Academic Literacy, Community Engagement, Global Connectedness, Academic Challenge, Learning with Peers, Student-Academic Relationships, Feedback on the Course, Feedback Overall, Assessment, Academic Support, and Co-Curricular Engagement. Benchmarks were drawn from NSSE data, UK-based work and institutional priorities. All were tested and modified using exploratory factor analysis to ensure they were robust, further supporting their construct validity [49].
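To make the scoring concrete, the sketch below reproduces the rescaling-and-averaging step described above in Python. The 1–5 Likert coding, the item names and the three-item “Course Challenge” composite are illustrative assumptions rather than the actual IES items or response options.

```python
import pandas as pd

def rescale_to_100(item: pd.Series, low: int = 1, high: int = 5) -> pd.Series:
    """Map a Likert item onto a 0-100 scale (lowest response -> 0, highest -> 100)."""
    return (item - low) / (high - low) * 100

def benchmark_score(responses: pd.DataFrame, items: list) -> pd.Series:
    """Average the rescaled component items into a per-respondent benchmark score."""
    rescaled = responses[items].apply(rescale_to_100)
    return rescaled.mean(axis=1)

# Illustrative three-item composite loosely mirroring "Course Challenge";
# the item names and 1-5 coding are assumptions, not the actual IES items.
responses = pd.DataFrame({
    "worked_harder_than_expected": [4, 2, 5],
    "came_to_class_prepared":      [3, 2, 4],
    "challenged_to_do_best_work":  [5, 3, 4],
})
responses["course_challenge"] = benchmark_score(
    responses,
    ["worked_harder_than_expected", "came_to_class_prepared", "challenged_to_do_best_work"],
)
print(responses["course_challenge"])  # per-respondent scores on the 0-100 scale
```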
5. Findings
Overall, more engaged students, measured across the 17 scales and benchmarks, reported significantly higher levels of satisfaction than less engaged students. The data show a strong correlation between levels of student engagement and levels of student satisfaction: the higher the level of engagement on each benchmark, the higher the level of satisfaction.
The satisfaction mean scores for each of the engagement benchmarks are shown in Table 1. As an example, the benchmark “Course Challenge” is a composite score of questions about whether students felt they worked harder than they thought they would to meet instructors’ expectations, preparedness for class and feeling challenged to do their best work, put on a 100-point scale. Students who were dissatisfied with their experience at the institution reported an average of 40.98 on “Course Challenge”, indicating comparatively low reported levels of challenge. The average rose to 46.69 for students who said they were neither satisfied nor dissatisfied with their course. Students who were satisfied with their experience at the institution, however, had an average of 58.14, indicating higher reported levels of Course Challenge and showing a correlation between students feeling their instructors pushed them to work hard and being more satisfied with their student experience.
The data demonstrate the positive relationship between student satisfaction and student engagement: the higher the level of satisfaction, the higher the mean benchmark score. From the survey, the highest levels of engagement can be found on the items related to the following benchmarks: Academic Challenge, Critical Thinking and Feedback on the Course. Critical Thinking reflects levels of engagement relating to analysis and evaluation of ideas, theories and knowledge, as well as the formation of new ideas. Here, student responses indicate medium to high levels of engagement. Students who are dissatisfied with their overall course experience have a reported engagement score of 50.36, those who report being neither satisfied nor dissatisfied with their overall course experience report a higher engagement score of 54.79, and those who are satisfied with their course experience report a considerably higher engagement score. Other benchmarks also rated quite highly, and these are also the areas where students reported higher levels of satisfaction. In the case of Academic Support, students who were satisfied with their overall course experience reported high levels of engagement, with a mean score of 51.26, whereas students who were dissatisfied with their overall course experience reported considerably lower levels of engagement on this benchmark, with a mean score of 29.45.
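Summaries of the kind shown in Table 1 can be produced by grouping respondent-level benchmark scores by the overall satisfaction response. The sketch below is a minimal illustration using invented data and column names; it does not reproduce the actual IES dataset.

```python
import pandas as pd

# Hypothetical respondent-level data: one row per student, 0-100 benchmark
# scores and a categorical overall-satisfaction response (invented values).
df = pd.DataFrame({
    "overall_satisfaction": ["dissatisfied", "neither", "satisfied",
                             "satisfied", "neither", "dissatisfied"],
    "course_challenge":     [41.0, 47.5, 58.0, 60.2, 45.9, 40.1],
    "academic_support":     [29.0, 43.1, 51.4, 52.8, 40.7, 30.2],
})

# Mean benchmark score within each satisfaction group (a Table 1-style summary).
table1_style = (
    df.groupby("overall_satisfaction")[["course_challenge", "academic_support"]]
      .mean()
      .round(2)
)
print(table1_style)
```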
The correlation coefficients between each benchmark and overall satisfaction are all positive (see Table 2). All but one of the coefficients are statistically significant; the exception indicates that there is no significant correlation between students’ engagement with co-curricular activities and satisfaction. The higher the correlation coefficient, the stronger the correlation between the engagement benchmark and satisfaction. Very strong correlations were found for the following benchmarks: Feedback on the Course, Feedback Overall, Academic Support, Academic Challenge, Student-Academic Relationships, Course Challenge, Assessment, Interdisciplinarity and Community Engagement. This indicates that more satisfied students report greater opportunities to provide feedback on their course, greater academic challenge and support, and more positive relationships with staff. A regression analysis of satisfaction scores produced similar findings, although it is not reported here.
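The benchmark–satisfaction coefficients reported in Table 2 can be computed with a standard correlation test per benchmark. The sketch below uses a Pearson correlation from scipy on invented data; the paper does not state which correlation statistic was used, so this is illustrative only.

```python
import pandas as pd
from scipy.stats import pearsonr  # spearmanr would be the rank-based alternative

# Invented respondent-level data: 0-100 benchmark scores and overall satisfaction
# coded numerically (e.g., 1 = very dissatisfied ... 5 = very satisfied).
df = pd.DataFrame({
    "overall_satisfaction": [2, 3, 4, 5, 3, 4, 5, 2],
    "course_challenge":     [41, 48, 55, 61, 46, 57, 63, 39],
    "academic_support":     [30, 42, 50, 55, 41, 52, 58, 28],
    "co_curricular":        [35, 60, 40, 55, 30, 65, 45, 50],
})

rows = []
for benchmark in ["course_challenge", "academic_support", "co_curricular"]:
    r, p = pearsonr(df[benchmark], df["overall_satisfaction"])
    rows.append({"benchmark": benchmark, "r": round(r, 3), "p": round(p, 4)})

print(pd.DataFrame(rows))  # Table 2-style summary: coefficient and p-value per benchmark
```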
These results demonstrate that the more students engage with different aspects of their overall experience as university students, the more satisfied they tend to be with it. Furthermore, they show that engagement benchmarks can be more useful than satisfaction scores alone in helping institutions understand how their students experience the multiple elements of their university life. If we look, for instance, at the scores of the respondents in Table 1 who stated they were dissatisfied with their overall course experience, we can see that their engagement scores vary considerably, from 52.51 in Academic Challenge to 22.30 in Assessment (a composite of having discussed academic performance and/or feedback with a member of the academic staff, made significant changes to work based on feedback, been provided with detailed oral feedback on work and been provided with detailed written feedback on work). On a satisfaction survey these important variations in levels of engagement, which reflect very different levels of student experience across the benchmarks, would be lost.
An engagement survey, which includes satisfaction questions, allows us to understand a variety of variables that play a role in the student experience. Equally importantly, it gives institutions a much clearer path to improve the student experience, as well as providing the students with greater opportunities to engage more fully with their university experience and all that it can encompass. In this sense, a more engaged student experience is an improved student experience.
6. Discussion
This study shows that more engaged students are more satisfied, and that more satisfied students are more engaged. There is a strong positive relationship between engagement and satisfaction, whichever way causation runs. Therefore, if the enhancement of the student experience is focused on engagement rather than on satisfaction, there is a shift toward engaging students in educationally purposeful activities, rather than “making students happy”. Following up on specific engagement survey items, particularly those around academic challenge and critical thinking, concentrates institutional activities on areas at the core of the academic endeavour, rather than driving resources towards tangentially related aspects such as library resources (as it matters more how resources are integrated with the curriculum than whether they are there at all).
The aim here is not to be prescriptive since different institutions have different types of students and are themselves very different in their institutional practices. Different institutions also have different resources that they can draw upon in devising their relationships with their students and in improving the student experience. Our experience in researching the student experience strongly suggests that an approach that includes engagement is not only more holistic but will probably be met more enthusiastically by students than the current NSS.
Therefore, engagement surveys that follow the NSSE model, or the Institutional Experience Survey, which includes a satisfaction element, are likely to provide richer information and, more importantly, information that institutions can act upon for enhancement purposes. Engagement surveys view the student experience as an all-encompassing reality, including dynamics that go beyond the classroom and the immediate resources made available to students. They treat the experience of students as a more dialectical relationship: engagement requires both parties to be active participants and to share responsibility in the relationship. Moreover, they move away from the student-as-customer approach that has become more prevalent in the decade following changes to tuition fees.
An engagement survey puts categories that both students and the literature emphasise at its core: higher-order learning, course challenge, collaborative learning, academic integration, reflective learning, skills development and academic support. These are obviously categories that go well beyond the “agree–disagree” scale and that encompass a multitude of elements that should be, and are, part of what students value in their university experience. The information provided by engagement surveys allows universities to be much clearer about what the student experience is and, therefore, provides ground for more directed action if necessary.
Engagement surveys may require different institutional resources than the NSS currently does. In many ways, they are more complex survey tools to analyse and require central administration staff who are more fluent in statistical analysis. Constructively implementing findings from an engagement survey also necessitates relationships between those analysing the data and those involved in the teaching and learning activities. For example, raising levels of academic challenge cannot be enacted with simple directives such as mandated feedback turnaround times. However, engagement items do provide numerous indications of good practice to develop and enhance student learning.
Furthermore, there are concerns with the NSS as a basis for teaching and learning development. These issues are both epistemic and methodological. Significant methodological concerns relate to the timing of the NSS, student interpretation of survey statements (see [50]), applicability across disciplines and the central issues of the validity of student evaluations of teaching quality and the representational validity of results.
The NSS does not touch on active student engagement with the curriculum or co-curriculum. Current measures do not engage with students’ intellectual development. In its current use, the student experience is articulated in a way that sees it fused with the commodity of education, arguably occluding more diverse perspectives on both students and experience [1]. In short, “The student experience homogenises students at the same time as apparently giving them ‘voice’” [1] (p. 657).
This study demonstrates the relevance of repositioning student experience surveys from a service delivery and satisfaction-only approach to one that situates individual engagement in the context of institutional offerings. We acknowledge that quantitative satisfaction data are useful and allow institutions to gather valuable and timely information. However, when this is framed within broader analyses of engagement, institutions will likely be better able to enhance the experience of their students through a more in-depth understanding of the multidimensional characteristics of that experience.
In terms of learning experience, Sabri’s discourse analysis highlights how experience is represented as individually focused rather than co-created or socially constructed, how the NSS discourse of satisfaction evokes oppositional relationships between university and student, and how “the course” as the unit of analysis silences arguments about institutional responsibility and policy context [1]. In addition, as Feldman [51] and many others have observed, “A good score for ‘teaching’ does not necessarily equate with good teaching or learning. Students don’t necessarily respond well to difficult concepts or challenging assessments”.
This study builds upon a substantial international literature focusing on student engagement and satisfaction by bringing these two approaches together to support institutional enhancement. It contributes to national and international reflections around methodological approaches and tools for understanding the student experience in order to improve it. Rather than positioning satisfaction and engagement in opposition, this study shows the relationship between the two, most strongly in areas where students and staff interact. This is in contrast to practices for improving satisfaction that focus on non-academic staff resources [52,53]. The link between satisfaction and academic challenge also highlights how students expect a rigorous learning environment and are not made “happier” by a dumbing down of the curriculum [46].
This paper reports results from a single research-intensive urban institution and its conclusions need to be situated accordingly. While the results may be applicable to other similar institutions, further studies are needed to substantiate this hypothesis. Furthermore, we cannot extrapolate these results directly to institutions with very different characteristics and contexts. Research into varied institutional practices and realities is necessary. There would also be benefits to adopting a mixed-methods investigation of the student experience and satisfaction to provide a more in-depth understanding of students’ lived experiences.
7. Conclusions
This paper provides insights into two different approaches to measuring the student experience. Interestingly, despite coming from different views on quality and uses of data, the findings presented here show the interrelationship between engagement and satisfaction for students. Efforts to raise satisfaction are critiqued for dumbing down the student experience, contributing to grade inflation and fostering a student-as-customer mentality. The design of engagement metrics, by contrast, leads institutions to improve by adopting educationally purposeful activities. The policy discourse surrounding the development of these surveys offers insights for other countries looking to measure the student experience and change the conversation around quality.
The NSS offers useful lessons for policy relevance and for changing national and institutional policy decision making. Satisfaction-based questions provide insights into the success of meeting student expectations. NSSE data have a proven record of supporting institutional enhancement and of presenting data through benchmarks that can be used by a wide range of stakeholders. This paper highlights how combining these two approaches in student experience surveys can support policy decisions and institutional enhancement. This is particularly relevant for the UK, which is reviewing the NSS yet again and looking for a greater focus on enhancement [54].
To enhance student experience, more meaningful and ongoing engagement with students is necessary to ensure that the appropriate aspects of student experience are measured and that pressures to enhance experience do not result in counter-productive moves that narrow curriculum content or inhibit innovation. Within this, a key question is whether voice, engagement and satisfaction measures and perspectives can, or should, be reconciled.