Article

Capturing Student Satisfaction: A Case Study on the National Student Survey Results to Identify the Needs of Students in STEM Related Courses for a Better Learning Experience

by Anastasia Sofroniou 1,*,†, Bhairavi Premnath 1,† and Konstantinos Poutos 2,†
1 School of Computing and Engineering, University of West London, London W5 5RF, UK
2 School of Engineering and the Environment, Kingston University, London KT1 2EE, UK
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Educ. Sci. 2020, 10(12), 378; https://doi.org/10.3390/educsci10120378
Submission received: 18 November 2020 / Revised: 5 December 2020 / Accepted: 8 December 2020 / Published: 14 December 2020

Abstract:
UK higher education has been one of the top destinations for international students over the last few decades, benefiting the UK local and national economy. However, recent changes in governmental policies on the way UK universities are funded, together with the recession that still affects economies around the world, have left many universities around the UK at risk of financial survival. With relatively limited access to reduced research funds, student recruitment is of vital importance for most universities. It has been well established in the literature that academic reputation and the level of services have the most significant impact on national and international students. Thus, to maintain their market share, universities must spend much energy and many resources on improving the level of services offered to their students. The recently introduced National Student Survey (NSS) has become one of the most important metrics of student satisfaction, directly influencing the university league tables and the Teaching Excellence Framework (TEF), which in turn affects international and national student recruitment. It is not surprising that improving student satisfaction has become a major target of UK universities. Therefore, a research investigation was carried out to identify the most influential factors contributing to the overall satisfaction of students studying Science, Technology, Engineering and Mathematics (STEM) subjects. For this purpose, a detailed statistical analysis was carried out on the NSS results, and it was concluded that there is strong evidence that “teaching” and “organisation and management” are the vital influential factors on the overall satisfaction of students.

1. Introduction

UK higher education has been one of the top destinations for international students over the last few decades, and the benefit to the UK local and national economy has been well reported in the literature [1,2]. According to the Higher Education Statistics Agency (HESA), in the academic year 2011/2012 [3], more than 435,000 non-UK students chose UK universities for their studies, with an estimated direct and indirect contribution to the UK economy of £8 billion [4].
However, the recent changes in governmental policies on the way universities are funded have completely changed the higher education landscape. Most UK higher education institutions increased their tuition fees from £3000 to £9000 over three years. This increase in tuition fees, along with the global recession, resulted in a significant reduction in national and international student recruitment. Universities across the UK experienced a sharp drop in student recruitment numbers. UCAS (2012) [5] reported that UK applicants fell by approximately 8% between 2011 and 2012, while applications from prospective EU students fell by approximately 12.4% in 2012.
This reduction in student recruitment had an immediate effect on the financial sustainability of many universities and departments across the UK. According to the University and College Union [6], many universities and departments face financial and survival risk because of the recent governmental changes to university funding. Among these, four universities face very high risk, while 45 universities across the UK face high to high/medium risk.
The relatively recently introduced National Student Survey (NSS), along with the increased university tuition fees, has elevated the expectations of students (customers), who now request better services from academic staff [7,8]. The NSS report plays a crucial role in student recruitment directly, and also in university rankings, which in turn indirectly influence student recruitment. Research published by Roberts and Thompson [9] and by Bell and Brooks [10] indicates that university rankings are significantly considered by new applicants both nationally and internationally and therefore play a significant role in student recruitment.
Consequently, this case study proposes to examine the National Student Survey (NSS) database with the aim of “decoding” the leading factors behind students’ satisfaction, enabling higher education providers to invest in these factors in order to attract more students to their institutions. From a parallel viewpoint, many retail industries use various methods to adhere to their customers’ voice by looking into the feedback provided about their products and services; the voice of the student is thus equally pertinent for this purpose.
The NSS, which is the focus of this paper, helps us to investigate and understand the mentality of students in higher education institutions in the UK. The statistical analysis helps us to conclude which factor is the most influential for student satisfaction during the period of study in their respective courses. Since there is a very broad range of subjects and combinations taught at different universities, this study focuses only on students pursuing a degree associated with STEM (Science, Technology, Engineering and Mathematics), whilst simultaneously comparing Modern and Redbrick Universities. Redbrick Universities are the universities founded in major industrial cities in England during the 19th century.

2. Literature Review

According to Coulter et al. [11], satisfaction is the feeling of contentment or disappointment that arises upon comparing the perceived performance of a product with its expected performance. Within the context of this study, Ramsden qualified that ‘students do not have a right to be satisfied’ [12], yet university rankings play a vital role in students’ decision to enter higher education, and hence the NSS must consequently have an impact on them [13,14]. Students’ satisfaction and engagement have not yet been critically analysed [15], allowing scope for further research. This accentuates the necessity to evaluate the NSS results in much greater depth: answering the research question of this study, namely identifying the most influential factor of student satisfaction as perceived by the student voice, would enrich the current literature and perhaps aid the development of an improved education service in the future [16,17].
Students are also often addressed as ‘customers’ who help the economy and, in view of this, can be likened to customers in the retail industry [18]. ‘Customer satisfaction’ plays an important role in retail industries [19]. Hence, this study connects the customer satisfaction of the retail industry to the student satisfaction output from the NSS results. Feedback from students is considered valuable information for universities to use to improve and develop for future generations [20]. Developing good educational attitudes among staff and students will be fruitful in the future, yielding extraordinary outcomes, and this can be done by assessing student feedback [21,22]. The examination of customer satisfaction differs from company to company [23]. Similarly, some institutions change their focus according to the published NSS results in order to grow and to protect some areas of provision [24]. Students are the ‘primary customers’ of UK universities even before they pay their tuition fees [25]. Customer loyalty and service operations are very important factors for retail networks in general, addressed by analysing and improving the satisfaction customers receive from the services provided [26]. The idea is that both sectors’ ultimate goal is to offer better services to their customers and to keep their business growing. Students typically consult the NSS results before actually applying to study any course. The ranking of the university and the previous experiences of its students are thus very important in selecting a university [27].
Companies use methods such as customer satisfaction surveys, customer satisfaction scores, net promoter scores, customer effort scores and social media monitoring to analyse satisfaction [28]. This study finds numerical results based on the NSS, which are accordingly reliable for the universities. University rankings play an important role in increasing student demand and earning revenue [29]. Student satisfaction is a path by which universities can obtain a competitive advantage [30]. There may be low response rates on some questions, so it is also necessary for universities to research why some students do not opt to answer the questionnaires and, in addition, to make sure that as many students as possible participate in the survey [31]. Educators should be voluntarily involved in analysing student feedback by conducting a smaller survey in class before students answer any larger questionnaires [32,33]. In comparison to Redbrick Universities, those established during the 19th century, Modern Universities are those founded after 1992 [34].
The National Student Survey (NSS) was established in 2005 to analyse the satisfaction and experience of final-year undergraduate students in the UK. After previous methods of examining student satisfaction were abolished, the NSS was formed to help understand students’ experiences more effectively [35]. It is a questionnaire with 27 questions, each answered on a scale from 1 to 5, where 5 is considered the ‘most satisfied’ option (Appendix A). Table 1 summarises the different factors and scales as defined by the NSS.
Other scales assessed by the NSS include learning opportunities, learning resources, learning community and student voice. However, these have not been considered in this study as they do not contribute to rankings or TEF positions.
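For reference in the methodology that follows, the four factors analysed in this study and their constituent questions, taken from the questionnaire in Appendix A, can be written as a simple mapping. This is a minimal sketch; the Q01–Q17 labels are shorthand introduced here for illustration, not the NSS’s own notation:

```python
# Factor-to-question mapping used throughout this study, transcribed from the
# NSS questionnaire in Appendix A (only the four factors analysed here).
FACTORS = {
    "Teaching": ["Q01", "Q02", "Q03", "Q04"],
    "Assessment and feedback": ["Q08", "Q09", "Q10", "Q11"],
    "Academic support": ["Q12", "Q13", "Q14"],
    "Organisation and management": ["Q15", "Q16", "Q17"],
}
```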

3. Methodology

3.1. Research Approach

This paper begins with a literature review showcasing the importance of analysing customer satisfaction for improving the economy of the country, whereby customer satisfaction in retail can be linked to student satisfaction in education, solidifying the interest of the paper and marking the applicability of the analysis. Data on each question of the NSS survey were collated using Excel, and the responses of students studying different STEM courses, specifically civil engineering, building and surveying, computer science and mathematics, are depicted initially using graphs. Using Excel, the relationship between each question and Question 27 (overall satisfaction) is presented in bar charts and tables, as sketched below. These visibly show which universities have a relationship between each question and the overall satisfaction.
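As a minimal illustration of this collation step, the following Python sketch loads question-level scores and charts them against Q27. The file name nss_scores.csv and its Q01–Q27 columns of mean agreement scores (one row per university) are assumptions for illustration, not the study’s actual dataset:

```python
# Sketch of the data-collation and charting step, assuming a hypothetical CSV
# with one row per university and columns Q01..Q27 of mean agreement scores.
import pandas as pd
import matplotlib.pyplot as plt

scores = pd.read_csv("nss_scores.csv")
questions = [c for c in scores.columns if c.startswith("Q")]

# Bar chart of each question's mean score, mirroring the Excel charts:
# Q27 (overall satisfaction) appears alongside the other questions.
scores[questions].mean().plot(kind="bar")
plt.ylabel("Mean score (1 to 5)")
plt.title("Mean question scores vs. overall satisfaction (Q27)")
plt.tight_layout()
plt.show()
```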
The following process was carried out in order to perform the analysis, as depicted in Figure 1.

3.2. Sampling Universities for the Analysis

To identify any potential differences between students with different A level performance, this case study analyses a representative sample of universities from both Modern and Redbrick Universities (Table 2).
The above universities were selected from all universities in the UK; in other words, they form a sample taken from the whole population of institutions. For the analysis of this study, approximately 250 responses were taken for each subject from each university.
Although the main focus of the paper is on the “other universities” (i.e., Modern Universities), data collated from Redbrick Universities are also used to provide a comprehensive comparison and enhance the results of the study. The target group of universities analysed in this study are summarised in Table 2, specifically for 2017, 2018 and 2019.

3.3. Courses under Investigation

In addition, the sampled universities are then further examined by considering different STEM courses. The courses under consideration are:
  • Civil Engineering
  • Building
  • Mathematics
  • Computer Science
It is acknowledged that there are many branches of STEM subjects, but only the above representative subjects are considered so as to find evidence of the influential factors. The same method, however, can be extended to other subjects, not only STEM related but in any other field.

3.4. Software and Results Processing

NSS data were inputted into MATLAB, producing graphs for each question relative to Question 27; this was done for validity and comparability with the Excel findings, providing a strong conclusion on the analysis of the data and aiding the identification of the most influential factors.
MATLAB was employed to produce measures of central tendency and dispersion of the data (mean, median, standard deviation, etc.), calculations required to find the relationship between the questions.
Analysis of the NSS data was also performed using R Studio, in order to strengthen the deductions from the aforementioned software packages. This statistical package was used to produce diagrams for each question relative to Question 27 (the overall satisfaction question).
The graphs were plotted accordingly by each software package and compared. Further, from the information in these graphs, coefficients of regression for each question against Q27 (overall satisfaction) were calculated, with the objective of finding the trend in correlation. A value very close to 1 suggests a strong positive correlation between the two variables. Here, Q27 is considered the independent variable and the other questions are the dependent variables. The p values were also found through the ‘data analysis’ option in Excel. These showed the relationship between the variables and strongly supported the findings from the correlation coefficients and the earlier plotted graphs. The t-tests performed in Excel gave a sound understanding of the means and variances. A sketch of this screening step is shown below.
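The following Python sketch reproduces this screening, using SciPy in place of Excel’s ‘data analysis’ tool. The nss_scores.csv file is the same hypothetical dataset as above, and the 0.9 cutoff for a correlation ‘close to 1’ is an illustrative assumption, since the paper does not state an exact threshold:

```python
# Per-question Pearson correlations, p values and t-tests against Q27,
# reproducing the Excel-based screening described in the text.
import pandas as pd
from scipy import stats

scores = pd.read_csv("nss_scores.csv")
questions = [c for c in scores.columns if c.startswith("Q") and c != "Q27"]

for q in questions:
    r, p = stats.pearsonr(scores[q], scores["Q27"])           # correlation with Q27
    t_stat, p_t = stats.ttest_ind(scores[q], scores["Q27"])   # t-test on the means
    # Screening rule: r close to 1 and p below both 0.05 and 0.01
    # (the 0.9 cutoff for "close to 1" is an assumption for illustration).
    influential = r > 0.9 and p < 0.01
    print(f"{q}: r={r:.3f}, p={p:.4f}, t={t_stat:.2f}"
          + ("  <- influential" if influential else ""))
```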
All three software packages used different types of graphical representation of the raw data—bar charts, histograms and tables—to interpret the relationship between the variables considered.
The regression analysis gave an improved appreciation of which questions had more influence on the overall satisfaction of the student experience. The correlation coefficients gave the relationship between each question and Question 27: values close to 1 indicated a strong relationship between the two variables considered (each question and Q27), while smaller values implied a weak relationship. Similarly, p values were calculated for the specific questions, whereby p values less than both the 0.05 and 0.01 significance levels suggested a strong relationship between a question and the overall satisfaction of the students. A confidence interval was also found for each question and analysed for both types of universities. These intervals provide stronger evidence of where the true value lies.

3.5. Statistical Analysis Models and Justification

For a question to have an effect on satisfaction, it is expected that the correlation coefficient should have a value close to 1 and the p values should be very small (less than both 0.05 and 0.01). If these two conditions are satisfied, then we can conclude that the questions under each factor have an effect on Q27 (overall satisfaction). Hence, the forthcoming graphs show how many questions under each factor have an effect on the satisfaction results. Variance is calculated with the formula below [36]:
s_{xx} = s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2
The coefficient of correlation is found by using the following [37]:
r = \frac{s_{xy}}{\sqrt{s_{xx}\,s_{yy}}}
The Coefficient of Variation (CV) is another term used to measure dispersion. It is a unitless quantity that helps in analysing the relationship between the variables. The smaller the CV, the less variable and more uniform the group, and hence the better the validity of the outcome. The formula is given by
CV = \frac{\text{Standard Deviation}}{\text{Mean}}
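The three measures just defined can be illustrated with a short NumPy sketch; the score vectors below are invented values standing in for the per-university scores of one question (x) and Q27 (y):

```python
# Worked example of sample variance, correlation coefficient and CV,
# following the formulas above; x and y are illustrative scores only.
import numpy as np

x = np.array([4.1, 3.8, 4.3, 3.9, 4.0])
y = np.array([4.0, 3.7, 4.4, 3.8, 4.1])
n = len(x)

s_xx = np.sum((x - x.mean()) ** 2) / (n - 1)               # sample variance of x
s_yy = np.sum((y - y.mean()) ** 2) / (n - 1)               # sample variance of y
s_xy = np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)   # sample covariance

r = s_xy / np.sqrt(s_xx * s_yy)    # coefficient of correlation
cv = np.sqrt(s_xx) / x.mean()      # coefficient of variation: SD over mean

print(f"s_xx={s_xx:.4f}, r={r:.3f}, CV={cv:.3f}")
```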
Effect size in statistics is said to be the measurement of the magnitude of an event [38]. This concept is usually connected with hypothesis testing. As the above authors suggest, effect size gives valid scientific evidence for the analysis. In this study, it is calculated using the relation below:
\text{effect size} = \frac{\text{Average of Q27} - \text{Average of other question}}{\text{Average of the standard deviations of the two questions considered}}
This follows Hedges and Olkin’s equation, which can be calculated in Excel [37].
Usually, the absolute effect size is considered when analysing the statistical aspect of the variables [39]. That article also emphasises the importance of effect size. The above authors further suggest that the effect size does not depend on the sample size, whereas the p value does. Cohen’s d (effect size) is conventionally classified as follows:
  • small effect for values around 0.2;
  • medium effect for values around 0.5;
  • large effect for values around 0.8; and
  • very large effect for values around 1.3.
In this investigation, the idealistic scenario is to find an effect size within 0–0.5 (representing a small to medium effect), supporting the reliability of the findings. Cohen also suggested that, if the sample size is less than 50, a correction factor should be applied to the effect size found, given by:
\text{correction factor} = \frac{N-3}{N-2.25}\sqrt{\frac{N-2}{N}}
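A sketch of the effect-size calculation with this correction factor follows. The confidence interval shown uses a common normal approximation for Cohen’s d; this is an assumption, as the paper does not state which interval formula it used, and the input scores are invented:

```python
# Effect size as defined above (difference of means over the average of the
# two standard deviations), with the small-sample correction factor and an
# approximate 95% confidence interval for d.
import numpy as np

def corrected_effect_size(q27, other):
    d = (q27.mean() - other.mean()) / ((q27.std(ddof=1) + other.std(ddof=1)) / 2)
    N = len(q27) + len(other)                                # total sample size
    correction = (N - 3) / (N - 2.25) * np.sqrt((N - 2) / N)
    n1, n2 = len(q27), len(other)
    se = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, d * correction, (d - 1.96 * se, d + 1.96 * se)

# Illustrative scores for Q27 and one other question
q27 = np.array([4.0, 3.7, 4.4, 3.8, 4.1])
q02 = np.array([4.1, 3.8, 4.3, 3.9, 4.0])
d, d_corr, ci = corrected_effect_size(q27, q02)
print(f"d={d:.3f}, corrected d={d_corr:.3f}, 95% CI=({ci[0]:.3f}, {ci[1]:.3f})")
```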

4. Results

This study focused on the National Student Survey conducted among final-year bachelor’s degree students in the UK, to find the most influential factor in the overall satisfaction of students studying STEM subjects. This survey plays a vital role in helping higher education organisations to improve their satisfaction rates and increase the number of students in their respective institutions. Only a few Redbrick and non-Redbrick Universities were considered, as a sample out of the population, to analyse the trend and to find relationships between the other questions and the important Question 27 (Overall Satisfaction) for 2019, 2018 and 2017. Although the sample of universities considered was small, it still gives an idea of the trend that universities as a whole follow with respect to students’ satisfaction. Large correlation coefficients and low p values were initially used to find numerical evidence of relationships between the questions. The questions that satisfied these conditions were then analysed in greater depth, with effect size and the coefficient of variation as parameters. Out of the 27 questions, only 13 were considered as directly impacting the students’ satisfaction rate, the other questions being connected instead to the university environment and facilities. The main factors considered in this study are teaching, assessment and feedback, academic support, and organisation and management.

4.1. Graphical Representation

Figure 2, Figure 3 and Figure 4 show the correlation coefficients of each question considered with the overall satisfaction question, together with the p values. They portray the factors affecting the overall satisfaction for the three consecutive years for both types of universities. As the years go by, Redbrick Universities show increasing evidence of factors that influence the overall satisfaction. The main deduction for these years is that teaching and assessment and feedback are the reasons that motivate students in answering the NSS question on overall satisfaction (Q27).

4.2. Statistical Analysis

In this section, the statistical analysis adopted is presented through bar charts and through tables of the correlation coefficients, p values and effect sizes for every year for the two university type categories.
Modern Universities
(1) Civil Engineering
The presentation of these results begins with the focus on the civil engineering subject group in Modern Universities. Statistical analysis was performed and the results are as shown below.
The questions with high correlation coefficients and very small p values are clearly visible in Table 3, showing their influence on the student satisfaction rate. Questions 2, 3, 9, 11, 13 and 15 therefore have an impact on satisfaction.
Question 2 (Staff have made the subject interesting), under the teaching factor, has an effect size of magnitude 0.41. After the correction factor, the effect size becomes 0.291; thus, this is a small effect on the means. The coefficient of variation is 0.056 (5.6%). The confidence interval for this question is (−1.47, 0.651), and the true effect size lies within this range.
For Q03 (The course is intellectually stimulating), which is again part of the teaching factor, the effect size is calculated as 0.335, and after the correction it becomes 0.238. This is considered a small effect size. The coefficient of variation is 0.043 (4.3%) and the confidence interval is (−0.72, 1.39), showing where the true effect size lies.
The same analysis is performed for Q09 (Marking and assessment has been fair), under the assessment and feedback factor, giving an effect size of 1.014, which the correction factor reduces to 0.721, revealing a large effect size. The coefficient of variation is 0.081 (8.1%) and the confidence interval at the 5% significance level is (−2.127, 0.099).
On the other hand, Q11 (I have received helpful comments on my work), under the assessment and feedback factor, has an effect size of 0.92, yielding a value of 0.65 after the correction factor is imposed. This is a large effect on the means. The coefficient of variation is 0.087 (8.7%) and, with 95% confidence, the true effect size lies within the interval (−2.03, 0.179).
Focusing on Q13 (I have received sufficient advice and guidance in relation to my course), under the academic support factor, the effect size is calculated as having a magnitude of 0.35. Incorporating the correction factor, this value changes to 0.249, representing a small effect size. The CV is 0.061 (6.1%) and the confidence interval is (−1.41, 0.7), with the true effect size lying within this range.
Finally, for Q15 (The course is well organised and is running smoothly), a question under the organisation and management factor, the effect size after the correction factor has a value of 0.2683, which again shows a small effect size based upon Cohen’s criteria. The coefficient of variation is found to be 0.043 (4.3%) and the confidence interval is (−1.434, 0.68), within which the true effect size lies. This metric shows the influence that Q15 has on Q27. All questions have a small effect size after the correction factor is applied, supporting the reliability of the findings.
For the same subject group within Modern Universities, but for 2018, the analysis shows that Questions 1–3 and 15 have an influence on student satisfaction, as they possess large correlation coefficients and very small p values. This can be clearly seen in the bar chart below, Figure 5.
Q01 (Staff are good at explaining things) has an effect size of 0.1679, which becomes 0.119 after the correction factor. This is a small effect size, indicating a small difference between the means. The coefficient of variation is 0.080 (8%). The confidence interval at the 0.05 significance level is (−0.882, 1.2174), within which lies the true effect size of Q01’s influence on Q27.
Q02 (Staff have made the subject interesting) has an effect size of 0.189. After applying the correction factor, Cohen’s d is 0.134; thus, it has a small effect size. The CV is 0.081 (8.1%). The confidence interval from the table is (−1.239, 0.861), where the true effect size lies.
Q03 (The course is intellectually stimulating) has a Cohen’s d (effect size) of 0.493, which after the correction factor is 0.3508. This is also a small effect size. The coefficient of variation is 0.049 (4.9%). The confidence interval is (−0.57, 1.557), within which lies the true effect size of Q03’s influence on Q27.
Finally, Q15 (The course is well organised and is running smoothly) has an effect size of 0.599, which becomes 0.426 after the correction factor. This is still a small effect. The CV is 0.1698 (16.9%). The confidence interval at 5% significance is (−1.67, 0.472), where the true effect size lies.
All the considered questions have a small effect size, which supports the findings.
For 2019 (Table 4), it is evident that Questions 4, 13, 15 and 17 are the ones that have an impact on student satisfaction. To distinguish between the questions with an influence on student satisfaction, Question 4 is considered first.
This question (My course has challenged me to achieve my best work) has an effect size of 0.246 (a small effect), which suggests that the means of Q04 and Q27 differ by only about a quarter of a standard deviation and thus show a trivial difference. That is, Q04 has a small effect on Q27. The coefficient of variation is low compared to the other questions, 0.068 (6.8%); hence, this group is less variable. In addition, at 0.05 significance, the confidence interval is found to be (−0.81, 1.298), with the true effect size lying within this range. After applying the correction factor, Cohen’s d is 0.175, still a small effect size.
For Q13 (I have received sufficient advice and guidance in relation to my course), the effect size is −0.03, but its absolute value is considered so as to look simply at the magnitude of the effect. A very small effect (0.03) exemplifies a very small difference. Further, the coefficient of variation is calculated as 0.079 (7.9%) and the 95% confidence interval as (−1.08, 1.016), showing that the true effect size (the influence on Q27) lies within this range. Upon including the correction factor, the effect size for Q13 is 0.021, again a very small effect size.
For Q15 (The course is well organised and is running smoothly), the absolute effect size of 0.743 is considered. This shows a large effect size, and the difference between the means is also large. The CV for Q15 is 0.158 (15.8%), which is large in comparison to the other questions considered; thus, this group is more variable and less uniform. The confidence interval found is (−1.827, 0.34), with the effect size lying within this range. Including the correction factor, Cohen’s d becomes 0.5288, representing a medium effect size.
For Q17 (Any changes in the course or teaching have been communicated effectively), the absolute value is 0.22, which implies a small effect between the two groups considered. The CV is 0.141 (14.1%). The confidence interval at 5% significance is (−1.27, 0.836), where the true effect size lies. After applying the correction factor, the effect size is 0.1565, a small effect.
(2) Building
The next subject group to be considered within this paper is the Building subject group for the Modern University type category. This includes the general building courses delivered by the universities at hand. Filtering the large available dataset, statistical analysis was performed, and Figure 6, Figure 7 and Figure 8 summarise the correlation coefficients and p values for the Building subject group for 2017, 2018 and 2019, respectively.
(3) Computer Science
Figure 9 summarises the statistical analysis findings for the computer science subject group for Modern Universities in 2017, 2018 and 2019.
(4) Mathematics
The results of the statistical analysis tools employed within this paper, for the last STEM subject group, Mathematics, for the Modern University category and for the respective years of interest, are tabulated and presented in Table 5.
For 2017, the results indicate that no question satisfied both conditions of a high correlation coefficient and a very low p value (at both the 0.01 and 0.05 significance levels).
Gradually, in 2018, the mathematics group showed signs of improvement, with Question 8 showing evidence of an influence on the overall satisfaction.
In 2019, only Question 9 has an impact on student satisfaction, as shown in the table by the high correlation coefficient near unity and the very low p value.
Redbrick Universities
This section comprises a similar analysis implemented for Redbrick Universities, for all four subject groups. The final accumulated results are explained below.
(1) Civil Engineering
Comparative yearly analysis of the data showed that only in 2019 did two specific questions show evidence of influence, whereas, in 2017 and 2018, no questions had influence for the civil engineering subject group at Redbrick Universities. This can be seen in the low values of the correlation coefficients and the p values depicted in Figure 10. These values, and hence the deductions, are also supported by the effect sizes that accompany each question for the stated years.
(2) Building
Out of the list of Redbrick Universities considered, only two provide Building as a course. Two universities do not give enough data to reliably calculate the correlation coefficients and the p values for the three years 2017, 2018 and 2019.
(3) Computer Science
For this subject group, only for 2018 and 2019 is there clear evidence of strong relationships between some of the questions considered. The questions that are influential towards the students’ overall satisfaction can be seen in Table 6, where high correlation coefficient values and low p values are evident for the above stated years.
(4) Mathematics
Performing the statistical analysis adopted throughout the study, it is deduced that, for Redbrick Universities under this subject group, only one question in 2018 and one question in 2019 show an influence on the overall satisfaction of students. Figure 11 portrays the questions with a high peaked correlation coefficient value and simultaneously low p values.

5. Discussion

In 2019, the Building group considered at Modern Universities showed evidence that, out of the 12 questions considered for the analysis, 10 clearly had an influence on the overall satisfaction question. These relate to the factors ‘teaching’ (Q02–Q04), ‘assessment and feedback’ (Q08–Q11), ‘academic support’ (Q13) and ‘organisation and management’ (Q15 and Q17). Students drew on these questions when answering Question 27; that is, they used at least one question from each factor considered in this study. However, Redbrick Universities clearly did not show any evidence of a relationship with Q27, as shown in the study. For the Computer Science group, the findings showed evidence of eight questions out of the 17 being used to answer Q27 (Q02, Q03, Q10, Q11, Q13, Q15, Q16 and Q17); these again span all the factors considered: teaching, assessment and feedback, academic support and organisation and management. However, at Redbrick Universities, only Q16 and Q17, which fall under the organisation and management factor, supported an influence on the overall satisfaction of the students. Both types of universities share ‘The timetable works efficiently for me’ (Q16) and ‘Any changes in the course or teaching have been communicated effectively’ (Q17) as common questions, which shows the importance of the organisation and management factor from the viewpoint of computer science students. On the other hand, the Civil Engineering subject group showed only four questions (Q04, Q13, Q15 and Q17), falling mainly under the teaching, academic support and organisation and management factors, as having an influence on Q27.
In view of the Redbrick Universities, only two questions (Q01 and Q03, teaching) played a vital part in influencing student satisfaction during 2019. This shows the clear gap between the Redbrick and the other universities under consideration. Finally, the Mathematics group at Modern Universities showed evidence of only ‘Marking and assessment has been fair’ (Q09), part of ‘assessment and feedback’, having an impact when answering the overall satisfaction question, and correspondingly for the Redbrick Universities it was only Q04 (My course has challenged me to achieve my best work). This shows the different expectations of students at each type of university. All questions for this year possessed a small effect size, providing numerical support for the findings.
In 2018, the Civil Engineering subject category for Modern Universities provided evidence of four questions (Q01–Q03, teaching, and Q15, organisation and management) having a connection to Q27; three of these fall under the teaching factor. However, Redbrick Universities did not show any questions having an impact on the overall satisfaction decision. Next, for the Building group at non-Redbrick institutions, three questions (Q03, Q04 and Q15), relating to the ‘teaching’ and ‘organisation and management’ factors, showed clear relationships to Q27. However, students from Redbrick Universities yet again did not strongly support any of these questions, allowing for the deduction that there was not enough evidence to show an impact. The computer science students at non-Redbrick education providers also supported that Q03, Q04 and Q15 have impacts on the overall satisfaction; moreover, for the students studying at Redbrick Universities, Q12 and Q13 (academic support) portrayed an influence on Q27, thus clearly showing the importance of teaching for students at non-Redbrick Universities and the necessity of sound academic support for students at Redbrick Universities.
The mathematics subject group at Modern Universities, however, showed evidence of Q08 (The criteria used in marking have been clear in advance) having an impact on the satisfaction rate, whilst the opinions of students at Redbrick Universities support that Q02 (Staff have made the subject interesting) has a strong impact on Q27.
For 2017, it can be seen that the Computer Science group at Modern Universities showed Q01–Q04 (teaching), Q08–Q11 (assessment and feedback), Q13 (academic support) and Q15 and Q17 (organisation and management) as having a strong positive relationship with overall satisfaction, but the analysis undertaken for Redbrick Universities did not convey any evidence of questions having an influence on it. The Civil Engineering group at Modern Universities showed that Q02 and Q03 (teaching), Q09 and Q11 (assessment and feedback), Q13 (academic support) and Q15 (organisation and management) provided proof of an effect on overall satisfaction; however, the analysis for Redbrick institutions did not produce any evidence of questions having an impact on students’ satisfaction.
The results for the Building subject category for Modern Universities provided only ‘I have received sufficient advice and guidance in relation to my course’ (Q13) as having a connection with Q27, but the analogous subject group at Redbrick Universities did not show any evidence of questions influencing the overall satisfaction. In 2017, the mathematics degrees under both Modern and Redbrick providers did not justify any questions as having an impact on the overall student satisfaction.

6. Conclusions

The NSS is an evergreen topic, and its results and the perceptions of students should be taken into consideration for the healthy running of any university. In some literature, authors refer to students as ‘customers’ and the education sector as a ‘business’, noting that universities would do a great deal to increase their student intake in order to secure larger revenue [26]. Additionally, students also acknowledge the importance of university degrees [15]. Hence, the analysis within this paper provides a wider viewpoint on how to encourage more students to continue their studies in higher education. The findings in this study give detailed options for improving the students’ satisfaction rate. The rankings of institutions in the league tables might be considered a condition by some students who wish to pursue their studies [14]. The NSS helps universities to reflect on the results and to elevate the level of services and facilities for the betterment of the students. Therefore, the findings that stem from the analysis in this paper provide a numerically evidenced vision to universities as to what features must be addressed in order to enhance student satisfaction. In addition, the data from the NSS survey are said to be accurate, exemplifying the significance of the results obtained herein as reliable information for all types of universities [18].
The results obtained from the National Student Survey have not previously been analysed critically in this way; this paper focuses on statistical analysis of the data obtained, giving clear suggestions to universities and higher education providers about the overall satisfaction of their students. This study has delivered methods for universities to analyse and improve students’ satisfaction rate, with the ultimate aim of providing them with a good education. The numerical values calculated for each year show the impact of each factor considered on the students’ satisfaction rate.
During 2017–2019, upon considering the Modern Universities, the results in this study indicated that there was a strong positive relationship between the overall satisfaction and the factors teaching (Q01–Q04) and organisation and management (Q15–Q17). This outcome reinforces that the delivery of a course plays an integral part in students’ education and makes them part of the learning environment [22]. The statistical analysis performed in this study clearly shows that the aforementioned questions have an impact on students’ overall satisfaction (Q27). Although other factors in the study affect Q27, only these two specific factors stood out, allowing for the stipulation that students still weight the teaching factor, including lecturers, facilities and teaching techniques, in their overall satisfaction, along with the organisation of the course, which in turn affects students’ time during their studies. Universities thus increase their efforts and the time spent on teaching in order to improve the overall scores [10], allowing students to value teaching and gain advice from experienced educators [7,8]. Although in 2017 no evidence could be found for Redbrick Universities of questions having an impact on overall satisfaction, the subsequent years, 2018 and 2019, showed evidence of strong correlation with “teaching” and “organisation and management”, factors similar to the outcome for the Modern Universities. This implies that students studying at these types of universities also seem to prefer the same factors, exemplifying these needs for overall satisfaction.
The NSS plays an important role in helping higher education providers to analyse and improve students’ satisfaction rates. This study will help the NSS to understand students and what they expect from universities. Many universities carry out internal surveys to improve the satisfaction rate systematically, before the students have the opportunity to respond to the NSS [33]. The techniques used here can be used by the survey creators and perhaps even guide them to include more factors that might trigger students’ decisions on overall satisfaction, thereby helping the education sector of the country as a whole. The NSS will be able to attain a wider explanation of which factors should be deliberated even further in order to help universities provide a better educational service. It may in turn also pave the way to attracting more students from different backgrounds to the higher education sector.
In conclusion, the analysis performed in this study on the data collected from the National Student Survey reveals clear and reliable evidence, for both university groups, that students consider the factors teaching and organisation and management as having the greater influence on their overall satisfaction with the learning experience. This study has also shown that, regardless of which group of universities students are graduating from, students in higher education institutions in general want good teaching and good organisation of their chosen degree programme. Most students still consider good teaching practice one of the main influences when answering Q27, even with the major technological developments in the education system. Regardless of the many new discoveries and inventions in the fields of science and education nowadays, students still value teaching and the organisation of the course when investing their money and effort in their chosen degree path.

7. Suggestions for Future Research

The statistical techniques and methods used to find the influential factors for overall satisfaction have been the theme of this study. These methods can be used not only by the education sector but also by other sectors, such as finance, business and healthcare, to find relationships between any two variables of interest.
Many businesses and companies spend a lot of money to learn about customer satisfaction with their products or services. Most of them issue a questionnaire to customers, gather the feedback and analyse the data through charts, which are then used in presentations and reports to understand customer satisfaction and to develop the products sold accordingly. This study provides clear methods for finding the satisfaction rate of students, which can be adapted to fields other than education to obtain a broad vision of the preferences of customers or consumers.
Businesses such as supermarkets or even financial companies can use this analysis to tackle large amounts of data in order to find out more about their customer or employee satisfaction, in turn improving their business by increasing revenue, which could perhaps even impact the economy of the country. The statistical techniques used in this study can be applied in the business sector to obtain valid numerical evidence, form reliable conclusions and thereby use the results to improve customer satisfaction.
Healthcare sectors can use these methods to identify any drawbacks in the care they provide for patients, for example, increasing patients’ comfort levels by improving facilities, waiting times, etc., after analysing the factors affecting them. The methods in this study can be applied to analyse patients’ satisfaction rates and help health institutions to develop the services they provide.
The transport sector can also use the statistical analysis within this paper to find out about passenger satisfaction rates on trains, buses, subways or aircraft. This analysis could provide companies with a wider insight into the problems hindering their smooth running, such as waiting periods, cancellations and delays, and hence can aid in improving these issues and prompt the company to act accordingly, allowing in return improvements in customer satisfaction rates.
These are merely a few of the sectors to which the analysis of this study can be applied. The methods in this study can assist in measuring and improving satisfaction rates for ‘customers’ and in response perhaps even contribute to the development of the economy of the country.

Author Contributions

Conceptualisation, K.P. and A.S.; Methodology, A.S. and K.P.; Software, B.P.; Validation, A.S. and B.P.; Formal analysis, all authors; Investigation, all authors; Resources, K.P. and B.P.; Data curation, B.P.; Writing—original draft preparation, B.P. and A.S.; Writing—review and editing, All authors; Visualisation, A.S. and B.P.; and Supervisors, A.S. and K.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NSS: National Student Survey
STEM: Science, Technology, Engineering and Mathematics

Appendix A

Appendix A.1.

The National Student Survey Questionnaire (Office for Students Website and Readkong website)
Teaching
1. Staff are good at explaining things.
2. Staff have made the subject interesting.
3. The course is intellectually stimulating.
4. My course has challenged me to achieve my best work.
Learning Opportunities
5. My course has provided me with opportunities to explore ideas or concepts in depth.
6. My course has provided me with opportunities to bring information and ideas together from different topics.
7. My course has provided me with opportunities to apply what I have learnt.
Assessment and Feedback
8. The criteria used in marking have been clear in advance.
9. Marking and assessment has been fair.
10. Feedback on my work has been timely.
11. I have received helpful comments on my work.
Academic support
12. I have been able to contact staff when I needed to.
13. I have received sufficient advice and guidance in relation to my course.
14. Good advice was available when I needed to make study choices on my course.
Organisation and management
15. The course is well organised and is running smoothly.
16. The timetable works efficiently for me.
17. Any changes in the course or teaching have been communicated effectively.
Learning resources
18. The IT resources and facilities provided have supported my learning well.
19. The library resources (e.g., books, online services and learning spaces) have supported my learning well.
20. I have been able to access course-specific resources (e.g., equipment, facilities, software, collections) when I needed to.
Learning community
21. I feel part of a community of staff and students.
22. I have had the right opportunities to work with other students as part of my course.
Student voice
23. I have had the right opportunities to provide feedback on my course.
24. Staff value students’ views and opinions about the course.
25. It is clear how students’ feedback on the course has been acted on.
26. The students’ union (association or guild) effectively represents students’ academic interest.
27. Overall, I am satisfied with the quality of the course.
28. Looking back on the experience, are there any particularly positive or negative aspects you would like to highlight?
Questions for students studying NHS subjects: Practice placements
29. I received sufficient preparatory information prior to my placement(s).
30. I was allocated placement(s) suitable for my course.
31. I received appropriate supervision on placement(s).
32. I was given opportunities to meet my required practice learning outcomes/competences.
33. My contribution during placement(s) as part of the clinical team was valued.
34. My practice supervisor(s) understood how my placement(s) related to the broader requirements of my course.
Please note that students will be asked to respond to the practice placement questions outlined above; however, they will be included before Question 27 (although for the purposes of this guidance they are listed afterwards).
Questions for students studying degree apprenticeships: Degree apprenticeships
1. How satisfied or dissatisfied are you with the advice you have been given about what you can do after this training programme? (Likert 0–10 with 0 labelled ‘very dissatisfied’ and 10 labelled ‘very satisfied’. Include ‘Does not apply’ option).
2. How likely is it that you would recommend the college or organisation that provides your learning to friends or family? (Extremely likely; Likely; Neither likely nor unlikely; Unlikely; Extremely unlikely; Does not apply).
3. Why did you choose to do the training programme? (Tick all that apply).
To gain skills and knowledge
To get a qualification
To meet people and make new friends
For personal interest or pleasure
To help me take part in social activities
To help me get into work
It is needed for my work
To improve my health or well-being
To progress onto another course or higher education
To help other people
Other reason
4. What was the main reason for choosing to do the training programme? (Tick one only)
To gain skills and knowledge
To get a qualification
To meet people and make new friends
For personal interest or pleasure
To help me take part in social activities
To help me get into work
It is needed for my work
To improve my health or well-being
To progress onto another course or higher education
To help other people
Other reason
5. Which of the following do you think will apply when you have finished your training programme? (Tick all that apply)
I’ll have more skills or knowledge
I’ll have gained a qualification
I’ll have made new friends
I’ll be more confident
I’ll be more likely to take part in social activities
I’ll be more likely to get into work
I’ll be more likely to progress at work
My health or well-being will have improved
I’ll be more likely to progress onto another course or higher education
I’ll be more able to help other people
None of the above
6. What do you think will be the main outcome of taking the training programme? (Tick one only)
I’ll have more skills or knowledge
I’ll have gained a qualification
I’ll have made new friends
I’ll be more confident
I’ll be more likely to take part in social activities
I’ll be more likely to get into work
I’ll be more likely to progress at work
My health or well-being will have improved
I’ll be more likely to progress onto another course or higher education
I’ll be more able to help other people
None of the above
These questions will be included after the optional banks (although for the purposes of this guidance they are listed before).
Marketing question: What prompted you to complete the National Student Survey through our website www.thestudentsurvey.com (Optional).

References

  1. Mellors-Bourne, R.; Humfrey, C.; Kemp, N.; Woodfield, S. The Wider Benefits of International Higher Education in the UK; Department of Business Innovation and Skills: London, UK, 2013.
  2. The Impact of Universities on the UK Economy—Universities UK. Fourth Edition. 2009. Available online: http://www.universitiesuk.ac.uk/highereducation/Documents/2009/EconomicImpact4Full.pdf (accessed on 16 October 2020).
  3. Higher Education Statistics Agency. Students at UK HE Institutions by Domicile 2010/11 and 2011/12; Higher Education Statistics Agency: Cheltenham, UK, 2012. [Google Scholar]
  4. Conlon, G.; Litchfield, A.; Sadlier, G. Estimating the Value to the UK of Education Exports; BIS Research Paper 46; Department for Business, Innovation and Skills: London, UK, 2011.
  5. UCAS, End of Cycle Report, Rosehill. 2012. Available online: http://www.ucas.com (accessed on 16 October 2020).
  6. University and College Union. Universities at risk—The impact of cuts in higher education spending in local economy. 2010. Available online: http://www.ucu.org.uk (accessed on 16 October 2020).
  7. MacKay, J.R.D.; Hughes, K.; Marzetti, H.; Lent, N.; Rhind, S.M. Using National Student Survey (NSS) qualitative data and social identity theory to explore students’ experiences of assessment and feedback. High. Educ. Pedagog. 2019, 4, 315–330. [Google Scholar] [CrossRef] [Green Version]
  8. Nurunnabi, M.; Abdelhadi, A.; Aburas, R.; Fallatah, S. Does teaching qualification matter in higher education in the UK? An analysis of National Student Survey data. MethodsX 2019, 6, 788–799. [Google Scholar] [CrossRef] [PubMed]
  9. Roberts, D.; Thompson, L. Reputation Management for Universities, University League Tables and the Impact on Student Recruitment. Marketing, Strategy and Communications for an Educated World. 2007. Available online: http://www.theknowledgepartnership.com (accessed on 7 November 2019).
  10. Bell, A.R.; Brooks, C. What makes students satisfied? A discussion and analysis of the UK’s national student survey. J. Furth. High. Educ. 2018, 42, 1118–1142. [Google Scholar] [CrossRef]
  11. Coulter, K.S.; Coulter, R.A. The effects of industry knowledge on the development of trust in service relationships. Int. J. Res. Mark. 2003, 20, 31–43. [Google Scholar] [CrossRef]
  12. Ramsden, P. National Student Survey. 2007. Available online: https://paulramsden48.wordpress.com (accessed on 7 November 2019).
  13. Williams, J.; Kane, D.; Sagu, S. Exploring the national student survey: Assessment and feedback issues. In The Higher Education Academy, Centre for Research into Quality; The Higher Education Academy: Heslington, UK, 2008. [Google Scholar]
  14. Poutos, K. Analysis of Anglia Ruskin University in UK Higher Education Environment. 2015. Available online: https://www.readkong.com/page/nss-2019-core-questionnaire-192181 (accessed on 16 October 2020).
  15. Zepke, N. Student engagement research in higher education: Questioning an academic orthodoxy. Teach. High. Educ. 2014, 19, 697–708. [Google Scholar] [CrossRef]
  16. Cheng, J.H.S.; Marsh, H.W. National student survey: Are differences between universities and courses reliable and meaningful? Oxf. Rev. Educ. 2010, 36, 693–712. [Google Scholar] [CrossRef]
  17. Buckley, A. Making It Count: Reflecting on the National Student Survey in the Process of Enhancement; The Higher Education Academy: Heslington, UK, 2012. [Google Scholar]
  18. Bunce, L.; Baird, A.; Jones, S.E. The student-as-consumer approach in higher education and its effects on academic performance. Stud. High. Educ. 2017, 42, 1958–1978. [Google Scholar] [CrossRef] [Green Version]
  19. Angilella, S.; Corrente, S.; Greco, S.; Slowinski, R. MUSA-INT: Multicriteria customer satisfaction analysis with interacting criteria. Omega 2014, 42, 189–200. [Google Scholar] [CrossRef] [Green Version]
  20. Pokorny, H.; Pickford, P. Complexity, cues and relationships: Student perceptions of feedback. Act. Learn. High. Educ. 2010, 11, 21–30. [Google Scholar] [CrossRef]
  21. Kuh, G.D. Framework and Psychometric Properties the National Survey of Student Engagement: Conceptual Framework and Overview of Psychometric Properties; Indiana University Center for Postsecondary Research and Planning: Bloomington, IN, USA, 2002. [Google Scholar]
  22. Gibbons, C. Stress, Positive Psychology and the National Student Survey. Psychol. Teach. Rev. 2012, 18, 22–30. [Google Scholar]
  23. Josephat, P.; Ismail, A. A Logistic Regression Model of Customer Satisfaction of Airline. Int. J. Hum. Resour. Stud. 2012, 2, 255. [Google Scholar] [CrossRef] [Green Version]
  24. Taylor, C.; Mccaig, C. Evaluating the Impact of Number Controls, Choice and Competition: An Analysis of the Student Profile and the Student Learning Environment in the New Higher Education Landscape; The Higher Education Academy: Heslington, UK, 2014; pp. 1–72. [Google Scholar]
  25. Douglas, J.; Douglas, A.; Barnes, B. Measuring student satisfaction at a UK university. Qual. Assur. Educ. 2006, 14, 251–267. [Google Scholar] [CrossRef]
  26. Ramanathan, U.; Subramanian, N.; Yu, W.; Vijaygopal, R. Impact of customer loyalty and service operations on customer behaviour and firm performance: Empirical evidence from UK retail sector. In Production Planning and Control; Taylor and Francis Ltd.: Abingdon, UK, 2017; Volume 28, pp. 478–488. [Google Scholar] [CrossRef]
  27. Abbott, A.; Leslie, D. Recent trends in higher education applications and acceptances. Educ. Econ. 2004, 12, 67–86. [Google Scholar] [CrossRef]
  28. Pascal. 6 Proven Methods for Measuring Customer Satisfaction. 2016. Available online: Https://www.userlike.com/en/blog/6-proven-methods-for-measuring-your-customer-satisfaction (accessed on 16 October 2020).
  29. Lenton, P. Determining student satisfaction: An economic analysis of the National Student Survey. Econ. Educ. Rev. 2015, 47, 118–127. [Google Scholar] [CrossRef] [Green Version]
  30. Elliott, K.M.; Shin, D. Student Satisfaction: An alternative approach to assessing this important concept. J. High. Educ. Policy Manag. 2002, 24, 197–209. [Google Scholar] [CrossRef]
  31. Hoel, A.; Dahl, T.I. Why bother? Student motivation to participate in student evaluations of teaching. In Assessment and Evaluation in Higher Education; Routledge: Abingdon-on-Thames, UK, 2019; Volume 44, pp. 361–378. [Google Scholar] [CrossRef]
  32. Moore, S.; Kuol, N. Students evaluating teachers: Exploring the importance of faculty reaction to feedback on teaching. Teach. High. Educ. 2005, 10, 57–73. [Google Scholar] [CrossRef]
  33. Arthur, L. Evaluating student satisfaction-restricting lecturer professionalism: Outcomes of using the UK national student survey questionnaire for internal student evaluation of teaching. Assess. Eval. High. Educ. 2020, 45, 331–344. [Google Scholar] [CrossRef]
  34. Guardian. What Makes a University Modern. 2013. Available online: www.theguardian.com (accessed on 16 October 2020).
  35. Richardson, J.T.E.; Slater, J.B.; Wilson, J. The National Student Survey: Development, findings and implications. Stud. High. Educ. 2007, 32, 557–580. [Google Scholar] [CrossRef]
  36. Startrek Website. 2020. Available online: Https://startrek.com/statistics/ (accessed on 16 October 2020).
  37. Coe, R. It’s the effect size, stupid: What effect size is and why it is important. In Proceedings of the Annual Conference of the British Educational Research Association, Exeter, UK, 12–14 September 2002. [Google Scholar]
  38. Sofroniou, A.; Poutos, K. Investigating the effectiveness of group work in mathematics. Educ. Sci. 2016, 6, 30. [Google Scholar] [CrossRef] [Green Version]
  39. Sullivan, G.M.; Feinn, R. Using effect size—Or why the p Value is not enough. J. Grad. Med Educ. 2012, 4, 279–282. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Process used for this study at a glance.
Figure 2. Comparison of the factors between Redbrick and Modern Universities for 2019.
Figure 3. Comparison of the factors between Redbrick and Modern Universities for 2018.
Figure 4. Comparison of the factors between Redbrick and Modern Universities for 2017.
Figure 5. Depiction of correlation coefficients and p values (2018).
Figure 6. Depiction of correlation coefficients and p values (2017).
Figure 7. Depiction of correlation coefficients and p values (2018).
Figure 8. Depiction of correlation coefficients and p values (2019).
Figure 9. Summary of the correlation coefficients and p values for the Computer Science subject group at Modern Universities for 2017, 2018 and 2019.
Figure 10. Summary of the correlation coefficients and p values for the Civil Engineering subject group at Redbrick Universities for 2017, 2018 and 2019.
Figure 11. Summary of the correlation coefficients and p values for the Mathematics subject group at Redbrick Universities for 2017, 2018 and 2019.
Table 1. Factors considered and their respective question number representation within the survey.

Scale 1. The teaching in my course:
 1. Staff are good at explaining things.
 2. Staff have made the subject interesting.
 3. The course is intellectually stimulating.
 4. My course has challenged me to achieve my best work.
Scale 2. Learning Opportunities:
 5. My course has provided me with opportunities to explore ideas or concepts in depth.
 6. My course has provided me with opportunities to bring information and ideas together from different topics.
 7. My course has provided me with opportunities to apply what I have learnt.
Scale 3. Assessment and Feedback:
 8. The criteria used in marking have been clear in advance.
 9. Marking and assessment has been fair.
 10. Feedback on my work has been timely.
 11. I have received helpful comments on my work.
Scale 4. Academic Support:
 12. I have been able to contact staff when I needed to.
 13. I have received sufficient advice and guidance in relation to my course.
 14. Good advice was available when I needed to make study choices on my course.
Scale 5. Organisation and Management:
 15. The course is well organised and is running smoothly.
 16. The timetable works efficiently for me.
 17. Any changes in the course or teaching have been communicated effectively.
Scale 6. Learning Resources:
 18. The IT resources and facilities provided have supported my learning well.
 19. The library resources (e.g., books, online services and learning spaces) have supported my learning well.
 20. I have been able to access course-specific resources (e.g., equipment, facilities, software, collections) when I needed to.
Scale 7. Learning Community:
 21. I feel part of a community of staff and students.
 22. I have had the right opportunities to work with other students as part of my course.
Scale 8. Student Voice:
 23. I have had the right opportunities to provide feedback on my course.
 24. Staff value students' views and opinions about the course.
 25. It is clear how students' feedback on the course has been acted on.
 26. The students' union (association or guild) effectively represents students' academic interests.
Overall satisfaction:
 27. Overall, I am satisfied with the quality of the course.
Table 2. The universities considered in this study.

Modern Universities | Redbrick Universities
Anglia Ruskin University Higher Education Corporation | The University of Birmingham
University of Brighton | University of Bristol
University of East London | The University of Leeds
University of Greenwich | The University of Liverpool
Kingston University | The University of Manchester
London South Bank University | The University of Nottingham
The University of West London | The University of Sheffield
Table 3. Correlation coefficients, p values and effect sizes for the Civil Engineering subject group in Modern Universities for 2017.

Question | Correlation Coefficient | p Value | Effect Size
1 | 0.8444 | 0.0168 | 0.321
2 | 0.9618 | 0.0005 | −0.41
3 | 0.9199 | 0.0033 | 0.335
4 | 0.8726 | 0.0104 | 0.3
8 | 0.7958 | 0.0323 | −0.4
9 | 0.9318 | 0.0022 | −1.014
10 | 0.6228 | 0.1352 | −1.2
11 | 0.8868 | 0.0078 | −0.92
12 | 0.8037 | 0.0294 | 0.195
13 | 0.9831 | 0.0001 | −0.35
15 | 0.9043 | 0.0052 | −0.377
16 | 0.5563 | 0.1947 | 0.219
17 | 0.8039 | 0.0293 | −0.37
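The article does not publish the analysis code behind Tables 3, 4, 5 and 6, but the three tabulated quantities are standard statistics. The sketch below shows one plausible way to compute them in Python, assuming the inputs are per-university mean agreement scores for a given question: a Pearson correlation (with its two-tailed p value) against the overall-satisfaction question, and a Cohen's d effect size between the Modern and Redbrick groups. Every array, value and name here is a hypothetical placeholder, and the exact pairing of inputs is an assumption rather than the authors' documented method.

```python
# Minimal sketch, not the authors' code: one way to produce the
# "Correlation Coefficient", "p Value" and "Effect Size" columns of
# Tables 3-6. All numbers below are invented placeholders.
import numpy as np
from scipy import stats

# Hypothetical mean % agreement per university for one NSS question (Q1)
# and for overall satisfaction (Q27), across seven Modern universities.
q1_modern = np.array([78.0, 82.5, 74.1, 80.3, 76.8, 83.9, 79.2])
q27_modern = np.array([75.4, 81.0, 72.3, 79.8, 74.9, 84.1, 78.0])

# Pearson correlation between the question and overall satisfaction,
# with its two-tailed p value.
r, p = stats.pearsonr(q1_modern, q27_modern)

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d with a pooled standard deviation, one common
    definition of the effect size between two groups."""
    n_a, n_b = len(a), len(b)
    pooled_var = ((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical Redbrick scores on the same question, giving a
# Modern-vs-Redbrick effect size (negative when Modern scores lower).
q1_redbrick = np.array([81.2, 84.7, 79.9, 83.1, 80.6, 85.0, 82.4])
d = cohens_d(q1_modern, q1_redbrick)

print(f"r = {r:.4f}, p = {p:.4f}, d = {d:.3f}")
```

With only seven institutions per group, a single university can move both r and p substantially, which is one reason effect sizes are worth reporting alongside significance tests.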
Table 4. Correlation coefficients, p values and effect sizes for the Civil Engineering subject group in Modern Universities for 2019.

Question | Correlation Coefficient | p Value | Effect Size
1 | 0.8457 | 0.0165 | −0.0620
2 | 0.8621 | 0.0126 | −0.1396
3 | 0.7986 | 0.0313 | 0.4689
4 | 0.9616 | 0.0005 | 0.2460
8 | 0.7238 | 0.0659 | −0.4117
9 | 0.5558 | 0.1951 | −0.7685
10 | 0.7178 | 0.0693 | −1.0057
11 | 0.7560 | 0.0493 | −0.5984
12 | 0.8111 | 0.0268 | 0.4356
13 | 0.9198 | 0.0034 | −0.0321
15 | 0.9574 | 0.0007 | −0.7433
16 | 0.8721 | 0.1047 | 0.2714
17 | 0.8823 | 0.0086 | −0.2151
Table 5. Correlation coefficients, p values and effect sizes for the Mathematics subject group in Modern Universities for 2017, 2018 and 2019 (each cell lists coefficient / p value / effect size).

Question | 2017 | 2018 | 2019
1 | 0.8239 / 0.3836 / −0.266 | 0.042 / 0.9732 / −1.93 | 0.8674 / 0.3316 / −0.793
2 | 0.9563 / 0.1888 / −1.02 | 0.1627 / 0.896 / −6.705 | 0.2546 / 0.8361 / −2.281
3 | 0.9896 / 0.0921 / 0.067 | 0.8225 / 0.3852 / 0.2725 | 0.9652 / 0.1684 / −0.340
4 | 0.87 / 0.3282 / −0.227 | 0.0092 / 0.9941 / −1.03 | 0.6296 / 0.5665 / −0.508
8 | 0.9737 / 0.1462 / −1.091 | 0.9999 / 0.0072 / −5.073 | 0.0403 / 0.9744 / −2.779
9 | 0.3699 / 0.7588 / −0.668 | 0.8889 / 0.3029 / −2.321 | – / 0.0036 / −2.314
10 | 0.5757 / 0.6095 / −0.93 | 0.1804 / 0.8846 / −3.97 | 0.9726 / 0.1494 / −0.660
11 | 0.3699 / 0.7588 / −0.91 | 0.5836 / 0.6033 / −3.8 | 0.4244 / 0.7209 / −2.212
12 | 0.4226 / 0.7222 / 0.934 | 0.2683 / 0.8271 / 1.448 | 0.9774 / 0.1356 / 2.294
13 | 0.9348 / 0.2312 / −0.2 | 0.9571 / 0.1872 / −1.911 | 0.2734 / 0.8237 / −1.173
15 | 0.9478 / 0.2066 / −0.3 | 0.8803 / 0.3147 / −2.69 | 0.8546 / 0.3477 / −1.220
16 | 0.9 / 0.2871 / −0.32 | 0.6411 / 0.5569 / −0.299 | 0.9565 / 0.1884 / −0.708
17 | 0.994 / 0.0696 / 0.302 | 0.3841 / 0.7491 / −2.41 | 0.6139 / 0.5792 / −0.249
Table 6. Correlation coefficients, p values and effect sizes for the Computer Science subject group at Redbrick Universities for 2017, 2018 and 2019 (each cell lists coefficient / p value / effect size).

Question | 2017 | 2018 | 2019
1 | 0.248 / 0.688 / −3.331 | 0.794 / 0.059 / −1.337 | 0.799 / 0.031 / −0.476
2 | 0.869 / 0.056 / −5.952 | 0.542 / 0.266 / −1.850 | 0.168 / 0.719 / −0.997
3 | 0.700 / 0.188 / 0.612 | 0.709 / 0.115 / 0.818 | 0.503 / 0.250 / 0.931
4 | 0.800 / 0.104 / −1.423 | 0.764 / 0.077 / −0.764 | 0.239 / 0.605 / −0.249
8 | 0.301 / 0.622 / −2.835 | 0.877 / 0.022 / −2.569 | 0.541 / 0.209 / −1.814
9 | 0.256 / 0.678 / −3.245 | 0.704 / 0.118 / −2.597 | 0.718 / 0.069 / −1.443
10 | 0.127 / 0.838 / −3.101 | 0.442 / 0.380 / −3.211 | 0.863 / 0.012 / −1.972
11 | 0.095 / 0.880 / −3.833 | 0.772 / 0.072 / −3.300 | 0.591 / 0.162 / −2.402
12 | 0.757 / 0.138 / 2.303 | 0.964 / 0.002 / 1.390 | 0.752 / 0.051 / 0.874
13 | 0.835 / 0.078 / −2.050 | 0.936 / 0.006 / −1.006 | 0.815 / 0.025 / −1.013
15 | 0.365 / 0.546 / −2.705 | 0.601 / 0.207 / −2.568 | 0.767 / 0.044 / −1.614
16 | 0.411 / 0.492 / −2.646 | 0.316 / 0.542 / −1.584 | 0.926 / 0.003 / −0.780
17 | 0.270 / 0.661 / −1.606 | 0.615 / 0.194 / −1.066 | 0.910 / 0.004 / −0.331
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
