1. Introduction
The COVID-19 pandemic has had a significant impact on the ways in which higher education institutions deliver their education provision [1]. The sudden shift from traditional learning to online learning in the spring of 2020 had a major impact on academics and on many higher education students, as their academic studies were radically changed. The challenges faced in the context of emergency remote teaching include social, educational, psychological and technical aspects, as both teachers and students had to rethink their roles and the ways in which they organize and carry out their work and studies [2,3,4]. While the rapid shift to digital forms of teaching and learning was challenging in many ways, both positive and negative perspectives have been highlighted internationally.
As we are now transitioning to what has been termed “the new normal” [5,6] in a post-pandemic setting, where pedagogy and technologies must be balanced in new ways, it is important to keep in mind that well-planned online learning experiences are different from courses offered online in response to a crisis or disaster. Colleges and universities working to maintain instruction during the pandemic should understand those differences when evaluating emergency remote teaching. At the same time, we must learn from the experiences of unplanned and forced versions of online learning and teaching to bridge the gap between online teaching and campus learning in the years to come. The pandemic has called for innovative adaptations that can be used for the digital transformation of higher education institutions by building on the empirical evidence accumulated during this period of crisis. A review of the literature reveals that the emergency remote teaching during the pandemic can indeed give way to a more long-term integration of physical and digital tools and methods that can sustain a more active, flexible and meaningful learning environment [7]. Understanding how students perceive the learning environment and learning process is important, as it both affects student engagement and helps educators rethink the principles of learning [8]. However, in exploring studies on student experiences with online learning during the pandemic, we saw that a majority focused on specific student groups or study programs. The few cross-sectional studies that we found also focused on specific student groups, e.g., medical students at various universities [9].
The motivation behind this study is two-fold. First, we wanted to see if there are detectable variations between students with different experiences of online use across subject fields and study programs at one higher education institute. Second, understanding how students with different online platform experiences and habits experience online learning may help identify specific pedagogical challenges and possible means of quality improvement across the university. As such, the study is based on an underlying assumption that previous experience and perceptions of online platforms for leisure (e.g., Facebook and YouTube) influence students’ experiences and perceptions of platforms for online learning.
The study was carried out in the latter part of the pandemic, when it could be argued that emergency remote teaching had converged toward what is generally referred to as online learning. We can thus discern some of the lasting traces that the pandemic left as we discuss the question of how to plan our teaching and learning environment in a post-pandemic educational setting.
2. Related Work
The integration of online and face-to-face instruction has historically been referred to as blended learning [8]. As we have discussed in a previous article [10], although the term was coined in the late 1990s [11], there is still a lack of a shared blended learning terminology. Hrastinski has even characterized the discourse on blended learning as “pre-paradigmatic, searching for generally acknowledged definitions and ways of conducting research and practice”, urging the need for established and clear definitions and conceptualizations [12]. As we have pointed out, however, the variety of proposed blended learning models find common ground in the ingredients of face-to-face and online instruction or learning [10,13,14]. Face-to-face instruction is here synonymous with the traditional classroom setting, where teacher and students interact in a physical learning space [6].
With online learning, also referred to as distance learning and e-learning, access to a learning environment is achieved through a computer (or mobile media) mediated via the internet [15,16]. Online education today uses technology platforms such as Zoom, Microsoft Teams and Adobe Connect to provide a video conferencing experience [17,18]. In line with our previous article [10], we follow a broad definition of blended learning and use the terms online learning, distance learning and e-learning interchangeably [19]. While some researchers and practitioners distinguish between these terms, for others they carry the same meaning. We further suggest that online learning changes the components of teaching and learning in ways that require specific considerations, such as the need for higher involvement on behalf of both academics and students, a higher social online presence, and a series of personal characteristics [20].
As stated above, both positive and negative perspectives have been highlighted internationally with regard to online learning in the wake of the pandemic [21]. Feedback from teachers is one important aspect that has been considered from both positive and negative viewpoints [22]. A scoping review that synthesizes and describes research related to online learning interventions in six disciplines across six continents, focusing on technological outcomes in higher education, shows that digital formats of learning were seen as effective, resource-saving and participatory forms of learning in a majority of the articles [22]. Students especially appreciated the self-paced learning, and many found a greater motivation to learn. Other studies show that students appreciated the flexibility that online learning provides [23]. On a similar note, a study from France shows that learners liked the possibility of re-playing lectures [24], and a survey of STEM students’ experiences with the COVID pandemic and online learning shows that a majority enjoyed the convenience and flexibility of online learning [25].
While much could be said about the positive aspects of online learning, a sizeable body of research indicates negative academic outcomes among students in online courses as opposed to campus courses [26,27,28,29]. Students in online courses have lower course-completion rates and grades and tend to be less persistent and motivated to finish [30]. Among the reasons raised, procrastination is a key issue [31]. While psychosocial challenges have received much attention in the academic field, especially during the COVID-19 pandemic, technical challenges in using digital devices or platforms are also common [32]. Technical difficulties were a central finding in a cross-sectional study among Malaysian medical students [33], while others considered the effects of technologies in a more nuanced manner, considering various specific factors. Interestingly, a study from two of Romania’s largest universities shows that the advantages of online learning identified in other studies seem to diminish in value, while disadvantages become more prominent [32].
A growing body of research on online learning reinforces the potential negative impacts that an abrupt shift to online courses could have on students’ academic performance. Well-planned online learning experiences are claimed to be “meaningfully different” from online courses offered in response to an emergency or crisis [34]. Increased levels of anxiety and stress for both teachers and students [35] are underlying factors. Piaget’s rationale that affective factors have a major impact on the cognitive process supports this line of argumentation. While the pandemic forced emergency changes with raised levels of anxiety and stress, studies show that online learning and the new technologies and teaching strategies that emerged can continue to be used in a post-pandemic setting [36,37]. Among these we find student response technologies, which are used to increase students’ engagement [38,39].
In pondering the question of how to maintain and develop teaching strategies in a post-pandemic setting, it is important to understand how students perceive the learning environment and learning process and its connection to student motivation and engagement [8]. While most studies tend to focus on the here and now, which includes issues such as teacher visibility, feedback and challenges related to psychosocial and technical matters, a smaller body of research focuses on how student experience with online platforms affects their experience of the online learning setting. As noted in the introduction, however, a majority of the studies of online learning focus on target student groups or study programs, even across universities. The cross-sectional studies that have been made during the COVID-19 pandemic rarely include all students across one particular university. One of the biggest studies of online experiences among students focuses on understanding the characteristics of online learning experiences among Chinese undergraduate medical students [9]. Investigating students’ perceptions of online education developed in response to the COVID-19 pandemic in relation to prior online learning experience, the authors find a significant positive correlation. Another result is that this correlation decreases in significance the further the students are in their learning phases. In a similar fashion to our study, the authors collected data using an online survey, albeit with a significantly higher number of students (the questionnaire was sent to 225,329 students, of whom 52.38% (118,080/225,329) replied, with valid data available for 44.18%).
In this study, we wanted to see if we could detect variations between students with different experiences of online use across subject fields and study programs at one higher education institute. The study rests on the belief that it is indeed important to understand how students perceive the learning environment and learning process, as this both affects student engagement and helps educators rethink the principles of learning and the ways in which we should support our students in their learning process. We hope to contribute to an understanding of how students with different online platform experiences and habits experience the online learning environment. We also hope that the results can assist in the identification of specific pedagogical challenges and possible means of improvement in the ways in which we plan our teaching and support the students in their learning process.
3. Methodology
The research study presented in this manuscript was carried out at Blekinge Institute of Technology (BTH) in Sweden. BTH is a technical university with programs that include computer science, software engineering, mechanical engineering, nursing, industrial engineering and management, as well as spatial planning. Similar to most universities around the world, BTH was forced to switch to a completely online teaching mode during the COVID-19 pandemic. The study was carried out across all study programs in the spring of 2021 to gain insights into the students’ experiences of this teaching reform and to inform future online teaching.
3.1. Research Objective
In this section, we present the research objective and detail the design that was used to collect and analyze the data. The objective of this work was to elicit students’ experiences with online teaching and evaluate whether students from different programs experienced the teaching in different ways. To guide the research, we split the research objective into two research questions.
RQ1: How do students at BTH experience online teaching in terms of quality and usefulness?
The rationale for this question was to gain insights from the students as to whether the teaching methods used during the pandemic had been sufficient. The results of this analysis would provide an indicator of current quality as well as of the possible improvements, or concerns, that would need to be addressed in the future.
RQ2: Are there differences in experience with online teaching related to subject areas?
The rationale for this question is based on an underlying assumption that there would be clear differences in how different students experience online learning, for instance, due to previous technical experience with online platforms for streamed media. Identifying differences between groups would help teachers find which groups may require additional, or different, forms of teaching in an online teaching environment.
3.2. Research Design
The data for this study was collected through a questionnaire survey with students at BTH.
Figure 1 presents a visualization of the methodology, which was divided into five phases: formulation of research objective, construction of the survey, pilot survey, data collection and analysis. The boxes in the figure explain the activities of each phase, whilst boxes with rounded edges are the outcomes of each phase. Arrows indicate the order of activities, where back-arrows indicate redesign due to collected results. In the remainder of this section, the activities and their outcomes are detailed.
3.3. Phase 1: Formulation of Research Objective
In the first phase of the study, the research objective was defined. This was achieved through discussions among the authors as well as discussions within the faculty at BTH. The research also originated from a previous study, conducted by the authors, looking at student perceptions of online teaching [10]. In that study, the influence of prior experience with online platforms was investigated, but the evaluation was limited to only one class of students. In this work, the evaluation was scaled up to encompass a larger sample of the students at BTH.
The primary concerns discussed were based on the assumption that students with varying technical backgrounds would have different experiences of online teaching. This assumption, if true, would indicate that students also require different modes of online teaching, e.g., with varying teaching materials, tools or platforms. Furthermore, BTH recognized the value of eliciting students’ current experiences of existing online teaching to help provide input for future improvements.
These objectives make the study descriptive in nature, which justifies the use of a survey. Questionnaires were chosen as the research method since the objective required an outreach to all students at the university. This made interviews an impractical data collection method due to resource constraints.
3.4. Phase 2: Construction of the Survey
The survey was constructed in an iterative manner, as indicated in Figure 1 by the back-arrow within the phase. As a first step, the sample frame [40] for the study was considered. Since the study addressed the research objective at the university level, the entire student body was set as the sample frame, from which a representative sample could be drawn, i.e., one-stage sampling [41]. From this sample frame, students were selected from all 20 programs at the university, covering both 1st cycle (undergraduate or bachelor’s level) and 2nd cycle (graduate or master’s level) students [42]. However, since surveys are a frequently used method for data elicitation at the university, we identified a risk of survey fatigue [43]. To mitigate this threat, random sampling was used to decide whether first-, second- or third-year (if applicable) students were selected from each program. Whilst this approach mitigated the situation that some students were exposed to multiple surveys within a short time-span, it introduced a threat in terms of sampling error [44], i.e., that some groups within the sample would not be representative of their peers. However, since this random sampling was deployed over the entire sample frame, where multiple programs have shared characteristics, this threat is perceived to be low. Using this approach, a final sample of 1515 students was chosen for the study.
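To illustrate, the year-group selection described above could be sketched as follows in R; the program names, the number of available year groups and the seed are hypothetical placeholders, not the actual sampling script used in the study.

```r
# Minimal sketch of the sampling step: for each program, randomly select
# one year group (first, second or third year, where applicable) whose
# students are invited to the survey. All names and counts below are
# hypothetical placeholders.
set.seed(42)  # fixed seed only to make the illustration reproducible

programs <- c(
  "Software Engineering" = 3,  # three year groups available
  "Nursing"              = 3,
  "Spatial Planning"     = 2   # e.g., only two cohorts enrolled
)

# One randomly chosen year group per program.
sampled_years <- vapply(programs, function(n_years) sample(n_years, 1),
                        integer(1))
sampled_years
```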
Once the sample had been decided, the second step of this phase was to define the survey questions. The survey was initially drafted by three of the study’s four authors, in a shared online document, after which the questions were reviewed by all authors. These revisions were carried out iteratively in online sessions. Before each session, all authors individually reviewed the questions and commented upon the semantics as well as the syntax of the questions. These comments were then discussed, question by question, after which the questions were updated until a consensus was reached by the authors, inspired by the Delphi method [45]. This was possible due to the authors’ combined domain, survey research and pedagogical knowledge, amounting to over 35 years of experience.
The focus of the reviews was primarily on the semantic meaning of each question, i.e., that it correctly captured the phenomenon under investigation, and secondly on the syntax of each question, i.e., that the question could not be interpreted in multiple ways. Through this process, the survey guide went through six iterations, after which a final draft of 18 questionnaire questions was proposed, most of which were multiple-choice or Likert-scale [46]. The drafted questions were written in Swedish, the authors’ native language, after which they were translated into English to be applicable to the entire sample. In the translation process, care was taken to preserve both the semantic meaning and the syntactic non-ambiguity of the Swedish questionnaire guide.
The questionnaire guide was divided into five parts. Part 1 was a description of the study, its intent and how collected data would be used. A statement of anonymity was also given to ensure that students answered the questionnaire truthfully.
Part 2, consisting of only one question, aimed to elicit demographic data about the students’ age. We were careful not to elicit other information due to potential ethical concerns. We did not ask which program the students belonged to, because we could derive this from how the questionnaire was distributed among the sample. This was achieved by using the university’s questionnaire system, Blue, which allows different surveys to be sent to different student groups, thus allowing us to trace the answers of a specific survey to a particular group of students.
Part 3 of the questionnaire consisted of 11 questions related to the students’ prior knowledge, experience and habits with using online media platforms, e.g., YouTube, Facebook, etc. These questions included which platforms the students were familiar with, how often they used them on a monthly basis, when they were introduced to online platforms, and why they used the different platforms. This was followed up with questions aimed at comparing the students’ experiences with the platforms they use for leisure and the ones available at the university. The students were also asked which devices they use to view these platforms, both for leisure and education. These questions consisted of multiple-choice questions where students could mark one or several alternatives. In addition, the questions included the option to answer “other” if the predefined answers were not sufficient.
Part 4 of the questionnaire then focused on the students’ study habits and use of available platforms, in six questions. The first of these aimed to elicit which study materials the students use, provided as a multiple-choice list where students could also add “other” materials if needed. This was followed by questions regarding the experienced quality of the education, whether the quality was comparable to campus education, and whether teachers did a suitable job in the online education. Next, the students were asked if they would perceive a benefit in having the lectures subtitled; a more theoretical question, based on observations and the assumption that subtitles would be of value to student learning. In the final two questions, the students were asked whether they experienced that their education had been affected by the switch to online learning and what changes/improvements they would like to see made to the education. The latter question was open-ended, allowing students to write their answers freely.
Part 5 of the questionnaire consisted of a single question where the students were allowed to give feedback on the questionnaire itself, for instance, if they found any questions ambiguous or if they wished to express some information that was not elicited by the questionnaire.
For detailed information about the questionnaire, the questionnaire guide can be found in Appendix A. A replication package with the survey answers has also been made available here:
3.5. Phase 3: Pilot Survey
To evaluate the correctness of the questionnaire guide, it was distributed in a pilot study with a sample of 226 students. Out of the 226, 96 students responded, providing a response rate [47] of 42.5 percent. In particular, the final question, where students could provide feedback on the questionnaire itself, e.g., if any of the questions were considered ambiguous, was analyzed. Analysis of the feedback revealed only minor improvements, which were applied by updating the wording, i.e., the syntax, of the affected questions. These changes are indicated in Figure 1 by the backwards arrow from Phase 3 to Phase 2.
Since these changes did not affect the semantic meaning of the affected questions, the results of the pilot were retained and integrated into the final dataset. Although this presents a minor threat to validity, our analysis of the pilot results, also when compared to the results of the final survey, indicates no major differences. Hence, we perceive this threat to be low.
The pilot was distributed using the Blue survey tool in the same way as the final survey. Blue provides an online interface for taking the survey, with multiple pages and one to several questions per page. Access to the tool was provided through a URL sent to students via e-mail. For the pilot study, only students from the industrial economy and Master of Business Administration programs were sampled. This sampling was motivated by the programs having a heterogeneous group of students in terms of nationality, i.e., both Swedish and international students. However, this sampling does impose a possible threat, since all students came from the economics domain and thereby have a similar background. Since the response rate was high, and initial analysis showed diversity in the answers, we perceive this threat to be low.
3.6. Phase 4: Data Collection
After the pilot verification and updates, the questionnaire was distributed to the full sample of 1515 students. As mentioned, to track the results from individual programs, the survey was sent via e-mail to the e-mail lists of each of the 20 included programs.
Out of the 1515 sent requests, 431 students completed the final survey, providing a response rate of 28 percent [47]. The results were provided as comma-separated value (CSV) files, as well as descriptive statistics generated by the Blue tool. These outputs were used as input for the continued analysis in Phase 5.
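As a minimal sketch, such an export could be loaded for the Phase 5 analysis as follows; the file name and structure are hypothetical placeholders, not the actual Blue export schema.

```r
# Load the Blue CSV export into a data frame; the file name is a
# hypothetical placeholder for the actual export.
responses <- read.csv("blue_export.csv", stringsAsFactors = FALSE)

# Basic overview before the formal analysis in Phase 5.
nrow(responses)  # should correspond to the 431 completed surveys
str(responses)   # one column per questionnaire question
```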
3.7. Phase 5: Analysis
In the final phase of the methodology, the survey results were analyzed to draw conclusions to answer the study’s research questions and to verify or refute the study’s assumptions. The analysis was carried out in three different activities.
In the first activity, a qualitative analysis was made of the resulting descriptive statistics provided by the Blue survey tool. This analysis was done first individually by all authors, where each author drew conclusions from the graphs. Next, the Delphi method was once more used where the individual conclusions were discussed and compared to reach a consensus.
This analysis, since it was based on the answers to individual questions, was at a lower level of abstraction and thereby only contributed to answering RQ1 regarding the students’ experiences of the quality and usefulness of the online teaching at BTH. In particular, survey questions A, B, C, D and E were analyzed during this activity.
In the second activity, formal statistical analysis was applied together with descriptive visualization to answer RQ2 regarding the differences in experience of online learning in different subject areas. The formal statistics were calculated using the statistical programming tool R, starting with a correlation analysis of all quantitative answers. To simplify the analysis, the results were plotted in a correlation matrix. The results showed that there was no correlation between any pair of questions in the questionnaire, implying that the student responses were heterogeneous across the sample.
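A minimal sketch of this step, assuming the quantitative answers have been coded numerically in a data frame `answers` (a hypothetical object; the paper does not state which correlation coefficient was used, so Spearman is shown here as a common choice for Likert-type items):

```r
# Pairwise correlation of all quantitative questionnaire answers.
# "answers" is a hypothetical data frame with one numeric column per
# question and one row per respondent.
cor_matrix <- cor(answers, use = "pairwise.complete.obs",
                  method = "spearman")  # rank-based, suits Likert items

# Plot the correlation matrix to simplify visual inspection (base R).
heatmap(cor_matrix, symm = TRUE, main = "Correlation of survey questions")
```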
To gain deeper insights into potential sub-divisions within the data set, a cluster analysis was performed using hierarchical clustering [48] with Euclidean distance on the complete data set. The result was plotted in a dendrogram that revealed six clusters within the data set, indicating, from a semantic perspective, six sets of students with similar answers, i.e., experiences and perceptions.
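A sketch of the clustering step under the same assumptions; the linkage method and the scaling of answers are our assumptions, as the paper specifies only hierarchical clustering with Euclidean distance:

```r
# Hierarchical clustering of respondents using Euclidean distance.
# "answers" is the same hypothetical, numerically coded data frame.
d  <- dist(scale(answers), method = "euclidean")  # scaling is assumed
hc <- hclust(d, method = "complete")              # linkage method assumed

# Dendrogram of the kind in which the six clusters were identified.
plot(hc, labels = FALSE, main = "Respondent dendrogram")

# Cut the tree into six clusters, as reported in the paper.
clusters <- cutree(hc, k = 6)
table(clusters)  # cluster sizes
```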
To answer RQ2, these clusters were then analyzed to see if any of the clusters were populated by more students from any given program. This analysis was based on the hypothesis that if previous experience was influential in the students’ perception of online learning, there would be a correlation between prior experience and students’ experiences with online learning, i.e., between responses to Part 3 and Part 4 of the questionnaire. Since the number of respondents from each program varied, the counts were first normalized by the number of respondents per program. The normalized numbers of respondents in each cluster were then compared to see if students from any particular program had a higher affiliation to any one cluster. For this final analysis, students from the 20 programs were grouped into their core programs, such as computer science, software engineering and mechanical engineering, resulting in eight main programs. In addition to looking for trends in program distribution among the clusters, we also looked at the distribution of 1st and 2nd cycle students, aiming to find whether any cluster was more highly populated by students from a given cycle.
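The per-program comparison could then be sketched as below, assuming a hypothetical factor `program` holding each respondent’s main program; `prop.table` performs the normalization by the number of respondents per program.

```r
# Cross-tabulate cluster membership against the eight main programs.
# "program" is a hypothetical factor with one entry per respondent,
# aligned with the "clusters" vector from the previous step.
counts <- table(program, clusters)

# Normalize row-wise by the number of respondents per program, so that
# large programs do not dominate the comparison between clusters.
proportions <- prop.table(counts, margin = 1)

round(proportions, 2)  # does any program concentrate in one cluster?
```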
The results of the cluster analysis provided insights into the distribution of students from different programs, which allowed us to draw conclusions to answer RQ2.
To complement the conclusions from the statistical analysis, a qualitative analysis was made of the open-ended questions. This analysis was inspired by open coding [49], where the students’ statements were read and re-read, and codes were generated from, or for, each statement. Each statement was considered in relation to its content, semantics, and related words and phrases. When a new statement was found with similar content, it was assigned the existing code. The statements and connected codes were then re-read in search of thematic patterns, clustering statements of similar content together. The coding of 248 statements resulted in eight codes/themes: pedagogical/didactic improvement, technical improvements, course structure, interaction/feedback, online lectures, negative non-constructive feedback, positive non-constructive feedback and online/recorded lectures.
3.8. Threats to Validity
In this section we report on the threats to the validity of this research, including both its design and results.
Sample: The sample frame for the study was the entire student body at Blekinge Institute of Technology. However, since there was, as discussed, a risk of survey fatigue, we delimited the sample by randomly sampling only one year of students per program. Whilst this provides coverage of all programs, it does not provide a comprehensive view of all year groups. This is a threat, since second- and third-year students had experienced both online and campus teaching, whilst first-year students had only experienced online learning. However, due to the size of the sample and the overlap between programs, we perceive this threat to be minor.
Furthermore, although the number of respondents (N = 431) was smaller than the full sample, this still represented a response rate of 28 percent. Hence, although the response rate could have been higher, we perceive this result to be adequate for our study.
A larger threat to the sample was that first-year students had only experienced the online teaching format, limiting their ability to answer questions comparing campus and online teaching. However, these questions had an option for students to abstain from answering, and this group of students was only a subset of the entire sample. Still, we cannot rule out some impact of students from this group providing answers based on perceptions rather than experiences.
Questionnaire ambiguity: Although we did our utmost to mitigate ambiguity in the questionnaire, it cannot be completely excluded. This conclusion is supported by the responses to the questionnaire question where we asked the students to mention potential ambiguities. The answers highlighted that the questions aimed at comparing higher education materials with media that the students consume for leisure were difficult to answer or difficult to understand. Hence, we recognize that there might be some variation in what students included in their answers, e.g., content quality, quality of the production or platform features.
Research focus: The study aimed to elicit how students’ previous experiences with online platforms affected their experiences with online teaching. The questionnaire was explicitly designed to elicit these two factors, and the results were then synthesized to draw conclusions. From a construct validity standpoint, the study is thereby perceived to be of high quality. However, there is still the risk, as discussed regarding questionnaire ambiguity, that the questionnaire questions may not have elicited the information in the right way. We do, however, perceive this as a limited threat, since the questionnaire was iterated several times and also piloted within the sample frame with successful results.
Analysis: The main result of this work is based on formal statistical analysis. Although the sample on which the result was calculated is quite large, 431 students, in the analysis this sample was broken down into subgroups. These subgroups varied in size, with some groups being smaller, affecting their statistical power. However, once more, due to the overlapping characteristics of the student population, we perceive this threat to be minor for the study’s main conclusions. A larger threat is instead the generalizability of the results within the sub-clusters of the sample.
Generalizability: The study was performed at only one higher education institute in Sweden. Thus, it cannot be excluded that contextual, regional or national factors may affect the results. As the sample is focused on technical engineering and nursing students, the study cannot be said to be comprehensive enough to include all forms of higher education. This is a delimitation that must be considered when interpreting the results. In addition, this delimitation must be considered in replications of this study, where the choice of included programs may have an impact on the result.
Reliability: The questionnaire was scrutinized prior to being sent out and was also validated in a pilot survey. However, there is still a risk that some students intentionally, or unintentionally, answered the questionnaire incorrectly, which would be a threat to the reliability of the results, especially since the questionnaire, due to restrictions on size, did not include correlating or verifying questions. To mitigate this threat, a sanity check was done to identify extreme outliers.
5. Discussion
The main conclusion of this paper is that prior experience with online media platforms has no direct impact on students’ experience with online learning. No patterns could be identified with regard to which programs the students were studying, nor whether they were 1st or 2nd cycle students. The results stand in stark contrast to the survey of online experiences among Chinese undergraduate medical students [9] that was discussed in Related Work. Investigating students’ perceptions of ongoing online education developed in response to the COVID-19 pandemic in relation to prior online learning experience, the authors find a significant positive correlation. An important difference, however, is that their study targets a specific student group, i.e., medical students. The fact that we find no significant correlation, then, may signify that such differences are less discernible when comparing various study programs and student groups. When studying student experiences across the curriculum, differences may be less significant and more difficult to detect; when studying student experiences within the same student group or study program, differences may be more prominent. Another potential explanation would be that lecture-based teaching is more common at BTH and that the mode of delivery was therefore not dramatically changed when transferred to online mode. Since the sample is taken from all programs at the university, however, such a conclusion cannot be made. Across the curriculum and in the different courses taught, a wide range of teaching methods are used at BTH, which was also the case during the pandemic.
This is not, of course, to say that “one size fits all” with regard to how we deliver our educational provision online, even within the same student group or study program. Learning variations will still exist, between individuals and student groups, and these must be catered to. Research finds that student-centered teaching engages students more, at the same time as the level of achievement increases compared with traditional classroom settings [50]. In student-centered environments, as learning is personalized and competency-based, regardless of whether it is on campus or online, students take ownership of their learning [50,51,52]. In other words, a differentiated learning approach is more likely to engage and motivate students than the traditional one-size-fits-all approach, within the same student group as well as across study programs. Research also shows that student engagement not only increases student satisfaction and enhances student motivation to learn, it also reduces the sense of isolation and improves student performance in online courses [50,51,52]. What we can say, then, based on the results of our study and previous research, is that while we should continue to push towards student-centered teaching in our online learning platforms, we do not need to tailor our teaching to the students’ previous experiences with online platforms.
Even though the results of our study indicate few or no variations in the students’ perception of online learning based on previous experience with online platforms, there are other factors to consider. Although this study did not explicitly investigate these factors, the qualitative answers regarding the changes the students want to see in the online learning do provide some important insights. One such insight is that the students are divided when it comes to the quality of the online teaching, as only 67 percent are of the opinion that the teaching has provided enough knowledge to pass assessments. The fact that one third of the students do not perceive the online learning to provide the knowledge needed to pass the course assessments is a worrying result. However, there are several possible reasons for this outcome. It may suggest, for instance, that there is something about the way we plan and execute our teaching online that we must change. It should be emphasized, of course, that this study was performed at the end of the COVID-19 pandemic, and not all teachers may have been able to transition well to online teaching. Learning activities or methods of assessment, e.g., labs or exams, may not have been fully aligned with the learning outcomes (i.e., constructively aligned [53]), specifically when swiftly transferred to an online setting. Changes to course plans take time, require careful planning and need to follow certain university processes. Moreover, teachers may not have been trained in the differences between campus and online teaching. An alternative explanation is that the courses do not provide sufficient knowledge for students, whether on campus or online. Whilst this latter explanation cannot be discarded, we have no data to either support or refute this possibility. A third explanation could be that students may have had one or several negative experiences that influenced their overall answer, i.e., the result of the question does not reflect all courses, but some. Regardless of which line of argument or assumption we follow, the result that students do not perceive the online learning to provide enough knowledge to pass the course assessments is one of the more troubling outcomes of this study. As such, it will be investigated further at the university to find areas of improvement for the future.
Similarly, the division in students’ perceptions regarding how their education has been affected by the online learning is also troublesome. Although 43 percent of the students perceive a positive effect, 57 percent perceive it to have been negative. An interesting observation is that there is a clear division between students with negative, neutral and positive perceptions. These divisions, spread among the different programs and types of students, once more reflect other factors that influence student perception and experience. In our previous work, we have noted that such factors include the students’ inclination towards being intrinsically or extrinsically motivated, or whether they are introverted or extroverted individuals [10].
One crucial conclusion that can be drawn from the open-ended questions is that students in online learning lack the sense of group-belonging that comes with campus education. This factor was not explicitly elicited, but it is one of the core differences between how the students work in the two educational contexts. From these results, we draw the conclusion that students feel that interaction suffers in online teaching, which also relates to interaction with student colleagues. Thus, in future instances where online teaching becomes mandatory, more emphasis should be placed on creating group-belonging in the student population.
6. Conclusions
In this study, to gain insights into students’ perceptions about online learning, we have performed a questionnaire survey at Blekinge Institute of Technology. The survey included 431 students from 20 programs, ranging from technical programs to social sciences, with students of varying age and gender. The survey responses were then analyzed using descriptive statistics and qualitative analysis to draw conclusions.
Our analysis shows that regardless of the students’ previous experiences and habits regarding online platforms, e.g., video/music streaming or social media, there was no correlation with their perceptions of online teaching and learning. Moreover, there was no correlation between their experiences of online learning and their program of study or whether they were 1st or 2nd cycle students. Hence, student background is not a core attribute of their perception of teaching quality and learning online.
However, the analysis of the qualitative answers from the survey does provide some additional insights. Students highlight eight factors, including interaction/feedback and pedagogical/didactic challenges with materials, as significant in their learning process. From these results, we draw the conclusion that students feel that interaction suffers in online teaching, including interaction with student colleagues. An important subject for future research is thereby how to improve student group engagement and collaboration.