1. Introduction
The development and adoption of artificial intelligence (AI) tools have caused significant disruption across many industries. In a similar fashion, higher education institutions have grappled with what the advancement and adoption of AI will mean for the teaching and learning landscape [
1,
2]. While ethical and privacy concerns surrounding the use of AI in educational settings are abundant [
3,
4], many higher education institutions have promoted the integration of AI in teaching and learning to improve efficiencies and to better prepare students for the changing demands of the twenty-first-century workforce [
5].
Baker and Smith (2019) [
6] developed three categories to classify educational AI tools: (a) learner-facing; (b) teacher-facing; and (c) system-facing. With learner-facing tools, the student uses the software to facilitate their learning of subject matter (e.g., a personalized learning management system; LMS). Teacher-facing AI tools support teaching and administration, including assessment, feedback, and plagiarism detection. The least developed category is system-facing, in which AI tools are used by school administrators or managers to analyze data and make informed system-wide decisions. It is important to note that AI tools are not limited to a single category but can span multiple categories depending on their intended users and functions.
There are abundant opportunities to harness AI tools to improve teaching and learning. For example, Merino-Campos (2025) [
7] conducted a systematic review and identified a consensus that AI technologies can be integrated into educational systems to optimize student learning by providing tailored content and feedback to individual learners. These learner-facing AI tools can provide real-time feedback on students’ writing; thus, students can improve their writing through virtual coaching [
8]. Such AI-generated feedback has been shown to improve writing outcomes in college students over traditional, human-only feedback [
9].
When considering these AI tools as teacher-facing tools, many platforms can provide automatic, tailored student feedback and can even provide automatic grading [
10,
11]. These features can save teachers copious amounts of time and can increase teaching efficiencies [
12,
13]. For example, instead of providing routine, task-heavy feedback on items such as grammatical edits, teachers have more time for meaningful interventions, such as offering in-depth constructive feedback or assisting struggling students [
14]. This may be particularly advantageous in large enrollment courses and in lower-stakes yet meaningful assignments, such as online discussion boards.
1.1. AI-Driven Discussion Boards
Asynchronous discussion boards are a common teaching approach in higher education, particularly in asynchronous, online courses [
15]. Discussion boards on course topics can uniquely foster the multiple types of interactions important to online teaching: (a) teacher-to-student; (b) student-to-student; and (c) student-to-content [
16]. In fact, many LMS platforms, such as Canvas, integrate native online discussion tools to provide educators the opportunities to implement graded or ungraded discussion boards in their courses to foster students’ critical thinking and application of course content through discussion and reflection [
16].
New educational technologies that harness AI have been developed to facilitate online discussion boards. Packback is a digital teaching and learning platform that integrates instructional AI to guide the writing and grading of online discussions (Packback Questions) and writing forums (Deep Dives). According to Packback [
17], their inquiry-based student discussion platform can “engage curiosity, increase motivation, build community, and support writing practice” through “built-in AI coaching that helps students develop better questions and responses.”
In the present study, we integrated Packback Questions (online discussion boards) throughout our course. Prior to the use of Packback, only Canvas discussion features were utilized. We integrated Packback into our course in an attempt to foster students’ higher-order thinking skills and to have the platform assist us with the grading load of the course. In these discussion boards, Packback requires students to generate an open-ended question based on guidance from an instructor’s prompt. Although instructors can set varying criteria, students are typically required to respond to a set number of questions posed by their peers. As students generate written responses, they receive real-time, personalized writing feedback that can help improve writing quality and performance. The AI technology provides student writers with feedback on (a) curiosity, or the effectiveness and depth of inquiry; (b) credibility, or the presence of cited sources; (c) writing convention, or the effectiveness of rich text, formatting, and paragraphs; and (d) writing communication, or the accuracy of grammar, spelling, and sentence structure. Of note, at the time of this study, Packback did not generate writing for students through generative AI but instead offered instant feedback and guidance to improve students’ writing quality [
17]. However, peer-reviewed research on AI-generated feedback in the classroom has revealed that students believe instructor feedback to be more useful compared to AI-generated feedback [
18]. Additionally, in lab settings, students showed greater improvement in scores when receiving instructor feedback compared to AI-generated feedback [
18].
1.2. Concerns of AI in Education
Gallup estimates that 60% of U.S. teachers currently use AI tools for work, saving approximately 6 h of work per week [
19]. However, some critics have called into question teachers’ use of AI tools. Concerns have centered around the use of AI for grading, and while teachers are expected to double-check the accuracy of the technology, many question whether this step is happening [
20]. Other critics point out the hypocritical stance some teachers have on AI, where the technology is prohibited for students to use in class, yet the teacher will use it to grade assignments [
21]. This has led people to wonder if we are at a moment in education where teachers are using AI to grade papers that, whether allowed or not, students used AI to write [
21], which brings into question whether we can even know if the learning outcomes were met. Prior research has also highlighted gender discrepancies in the context of AI and education that may raise equity concerns. Female students have been found to be less knowledgeable on AI, adopt AI less frequently, and have lower AI efficacy, compared to their male peers [
22,
23,
24].
In response to these growing concerns of AI use in education [
25], schools have introduced guidance and training for educators interested in using the technology. These trainings have focused on the limitations associated with AI-powered grading and have emphasized the need for instructors to remain engaged in providing feedback [
26]. While the developers of Packback have explored the technology’s impact on students’ learning outcomes, there is a need to understand how the students perceive the technology and if its use has any influence on how they perceive their teachers. If the AI technology is too challenging to use or leads to negative perceptions of the instructor, the teacher will likely not continue its use in the future. Therefore, there is a need to explore students’ perceptions of using Packback and its influence on perceptions of teachers’ credibility to understand its potential for sustained use in higher education.
1.3. Theoretical and Conceptual Framework
To best guide our study, we utilized instructor communication theory and developed a conceptual framework informed by this theory in addition to ‘user experience’ (UX) research. Since instructors’ integration of AI is not intended to replace the role of the instructor, but may influence their credibility, we also included concepts related to teacher credibility in our framework. A description of these concepts and the framework follows.
1.3.1. Instructor Communication Theory
Instructor communication theory postulates that communication between the instructor and student is central to teaching and learning. According to this theory, teachers’ behaviors, strategies, and messages used in their classroom communication influence students’ affective, behavioral, and cognitive learning outcomes [
27]. Scholars have investigated how instructor choices (e.g., nonverbal behaviors, messages, feedback, etc.) influence students’ learning and their perceptions toward the learning environment [
28,
29,
30].
Prior research in instructional communication has also linked the communication strategies instructors have utilized to students’ perceptions of their credibility as teachers [
31,
32,
33]. In fact, McCroskey et al. (2004) [
34] included student perceptions of the teacher as a component of their instructional communications model and expressed that students can form perceptions of a teacher before they even have a class with the instructor due to pre-existing knowledge or word-of-mouth.
An emerging application of this theory is the impact of technology as a modality for communication, particularly technologies that aid instructors’ communication with students. While technologies can enhance teacher–student communication [
29,
30], the integration of some AI platforms can supplant aspects of instructors’ communication by replacing teacher feedback with AI feedback. Therefore, we ask how teachers’ integration of AI platforms influences the instructional communication process, particularly students’ initial perceptions of teachers who require students’ use of these platforms. To further guide our study, we integrated concepts from teacher credibility scholarship and from technology research and development.
1.3.2. User Experience
Scholars have coined the term ‘user experience’ (UX) as the process to examine the experience of human–product interaction [
35]. Prior studies have examined students’ UX of educational technologies to improve student satisfaction, motivation, and course experience [
36,
37,
38]. Díaz et al. (2015) [
36] examined students’ UX using Edmodo as a course tool, finding that most students perceived the platform as both useful and user-friendly. Similarly, Nieto García and Sit (2023) [
37] found that students had a positive UX of the educational technology, Kahoot!, and their positive experience using the technology motivated them to attend class. Acemoglu and Johnson (2023) [
39] described that a human-centered approach must be taken when integrating AI into education systems in order to complement human capabilities and to empower learners. Therefore, as novel AI-driven educational tools emerge and are adopted in teaching and learning, it is critical to evaluate students’ UX of these tools.
1.3.3. Teacher Credibility
The concept of source credibility and its influence on the transfer and acceptance of information has been central to the study of communication and persuasive discourse since the times of Aristotle in ancient Greece [
40]. However, research has shown that source credibility within educational contexts is critical for fostering positive interactions between students and teachers [
41]. The culmination of various studies on teacher credibility has provided substantiated evidence that positive associations between perceived teacher credibility and student learning exist. For example, student outcomes such as greater motivation to learn [
42], increased communication with their instructors [
32], and increased cognitive and affective learning [
43] have been linked to teacher credibility. Additionally, teachers who are perceived to have more credibility by their students receive higher teaching evaluation scores from students [
44]. Teachers should aim to maintain a high level of credibility in the classroom, which is foundational to a positive teacher-student relationship and student learning.
Scholars exploring teacher credibility and its influences have long analyzed gender as a characteristic influencing credibility. For instance, some researchers have found that students perceived male instructors to be more credible [
45], while others have reported female instructors to be found more credible [
46], and yet some have found no difference when it comes to gender and teacher credibility [
47]. There is a lack of research that has examined how gender differences of students influence their perception of teacher credibility, but further exploration of this area has been called for [
48].
When discussing source credibility, scholars typically break it down into three sub-dimensions to more holistically describe how these sources of information are viewed. These dimensions of credibility are (1) competence, (2) trustworthiness, and (3) goodwill (or caring) [
49]. Within the context of education, competence has been defined as the instructor’s apparent knowledge or perceived expertise related to a topic [
41]. Trustworthiness reflects the teacher’s perceived character, including whether they are perceived as sincere, respectable, and selfless [
50,
51]. Finally, student perceptions of an instructor’s concern for their welfare and willingness to respond to problems with a student-centered approach have been defined as goodwill or caring [
52]. While some teachers may be high in one of these dimensions and low in others, it is the culmination of the three that informs their overall credibility as a teacher. Those with positive perceptions across all three dimensions will be viewed as the most credible [
53].
1.3.4. Model for AI Use Influences on Credibility
As the landscape of higher education is rapidly changing due to AI integration, the impact of teachers’ use of AI on their perceived credibility is not well understood. Denecke et al. (2023) [
13] postulated that human interaction and interpersonal connections are foundational to effective teaching, and the use of AI, particularly by the teacher to provide student feedback, may undermine interpersonal connections in favor of efficiency. In addition to the use of learner-facing and teacher-facing AI as a potential threat to student-teacher relationships [
13], it may have unintended impacts on student perceptions of instructor credibility.
Figure 1 demonstrates the anticipated relationship between student perceptions of the learning tool and teacher credibility.
We propose that students’ overall perceptions of the AI learning platform will be informed by their personal characteristics, past experiences using the specific AI tool, and the UX of the platform. These perceptions of the AI learning tool would have a direct influence on students’ perceptions of their teacher’s credibility. Depending on students’ prior perceptions of and experience with AI, its integration into a class setting may lead them to question whether their instructors possess the knowledge to teach the content. Additionally, they may question how much teachers care about students if AI is used to replace instructor feedback. A negative UX with the AI-powered learning platform may also lead students to believe the instructor is not concerned about addressing students’ problems, further eroding perceptions of goodwill [
52].
Alternatively, students may view teachers’ use of AI positively and at the cutting edge of education. If students have used the platform before and had a positive experience, they would likely transfer those positive perceptions to their instructor as well. This would improve students’ perceptions of their instructors’ competence and trustworthiness, which would elevate their overall credibility.
Because perceived teacher credibility has been linked to teaching evaluation scores [
44], we expect teachers to continue to use an AI learning platform when it elicits positive perceptions of credibility but terminate its use if not. Additionally, students’ overall perceptions of the platform and its UX would likely have a direct influence on continued use of the program in the future as well. While this study did not examine intent to continue using Packback, focusing instead on how its use influenced teacher credibility, the proposed model provides a greater understanding of the sustained use of AI in education.
1.4. Study Purpose and Research Objectives
As the use of AI has been rapidly integrated into higher education [
54,
55], the implications of specific AI-driven technologies must be critically analyzed. While measures of teaching and learning efficiency and impact on student learning outcomes are central in this regard, it is also imperative to examine students’ UX of these technologies and the effect instructors’ integration of these technologies has on students’ perceptions of instructor credibility. Packback has emerged as a leading AI-driven platform in higher education as a mode for instructors to facilitate online discussion boards. The purpose of this study was to examine students’ user experience of Packback in a college course and to explore how instructors’ integration of AI tools may affect students’ perceptions of instructor credibility. The two descriptive research objectives that guided this study were as follows:
The data collected in alignment with the two descriptive research objectives provided insight into variables that permitted a testable hypothesis. Namely, data associated with students’ UX provides information on their attitudes and perceptions of AI-generated feedback, which can differ from those associated with instructor feedback [
18]. Additionally, students’ perceptions of AI as an evaluation tool to assess their performance have been mixed, with a noted distrust toward AI grading of assignments [
56]. Based on these factors, we propose the following null and alternative hypotheses:
H0. Students’ personal characteristics, past required use of AI in coursework, and the UX of Packback do not serve as significant predictors of student perceptions of credibility for instructors who require the use of AI platforms in their courses.
H1. Students’ personal characteristics, past required use of AI in coursework, and the UX of Packback serve as significant predictors of student perceptions of credibility for instructors who require the use of AI platforms in their courses.
2. Materials and Methods
2.1. Context of the Study
Undergraduate students enrolled in the course, Knowledge, Society, and Leadership, during the spring 2024 semester at the University of Tennessee-Knoxville served as the participants for this study. Two separate sections of the 3-credit-hour, 16-week course were offered in the spring semester of 2024. One section was administered in an asynchronous online modality and enrolled 61 students. The other section was administered face-to-face and enrolled 39 students. Both sections of the course followed the same general structure and learning objectives, but they were stand-alone courses, and students registered for either the in-person or online section only. The course served as an elective for students to fulfill a general education requirement in the Arts and Humanities, and therefore, the course typically enrolled students from majors across the university. However, the home department of the course was within the Herbert College of Agriculture, and therefore, a majority of enrollees had majors within the college.
The structure of the course was similar for the in-person and online modality. Students were exposed to historical concepts and schools of thought regarding knowledge and education. Toward the conclusion of the course, students applied these foundational concepts to propose solutions to address challenging issues in education, agriculture, food, and natural resources that have emerged in modern times.
In addition to several larger assignments spaced throughout the semester, a major component of the course was to foster essential student-to-student interaction [
16] through the completion of online discussion boards throughout the course. In the 2024 spring semester, eight discussion boards were moderated using the Packback Questions platform in both the online and in-person sections of the course. These discussion boards prompted students to ask an initial question on a topic related to their current class module. To provide an example, in the two-week course module on educational philosophy, a student referenced our class material on vocational education and early educational philosophers by posting the question, “If Sneddon’s educational philosophy was applied instead of Dewey’s to the K-12 education system, how might schools be different today?” After each student wrote their own open-ended question, they were asked to provide detailed responses to at least two questions proposed by their peers. Responses were typically several paragraphs in length and had to meet a Packback curiosity score of 50 to receive full credit. Students were typically given one to two weeks to complete each discussion board.
2.2. Survey Design
An online survey was developed to serve as the data collection instrument for this study, which examined students’ UX of Packback and the perceived credibility of instructors who require the use of AI-based platforms, such as Packback. The survey was administered via Qualtrics, an online survey provider, at the conclusion of the 16-week semester. All students were asked to complete the survey for participation credit; however, students could opt out of having their data used in this research and still receive credit. Students were told their identifying information would be kept confidential and that their responses to the survey would not be seen by the instructors of the course until after grades were posted. Additionally, the survey was set up so that student responses would be anonymous. At the end of the survey, students clicked on a link that opened a new browser window where they entered their name to receive credit for taking the survey. Therefore, student names were not linked to survey responses. This study was reviewed by the University of Tennessee-Knoxville Institutional Review Board and deemed exempt (#UTK IRB-24-08244-XM).
This survey was part of a larger research project that utilized several instruments to allow researchers to understand students’ perspectives of the course, their experiences using the Packback platform, and their perceptions of instructors who require the use of AI technology in their courses. The instruments and data reported hereafter were solely used and reported for this study and are not reported elsewhere. All components of the survey were reviewed by a panel of experts for content and face validity.
2.2.1. User Experience (UX) of Packback
We measured students’ general UX of the Packback platform through a 14-item questionnaire consisting of a series of statements. Students were asked to read each statement and select their extent of agreement using a 5-point Likert scale measured from 1 = strongly disagree to 5 = strongly agree. Example statements include “The interface of Packback is pleasant” and “I believe I became productive quickly using Packback.” Post hoc internal scale reliability for the UX questionnaire (α = 0.97) was found to be above the threshold for reliability (α > 0.70; [
57]). The original instrument from which we modified our statements was from an empirical research study on software testing by Clarke et al. (2012) [
58]. We modified each statement from the existing instrument by replacing the word “website” with “Packback”.
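The study reports reliability from SPSS; for readers who want to reproduce the calculation, Cronbach’s alpha can be computed directly from an item-response matrix. The sketch below is illustrative only; the function name and toy data are ours, not the study’s:

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scale responses.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance of total scores)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # sample variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy example: 4 respondents answering 3 Likert items (1-5)
responses = [[4, 5, 4],
             [2, 2, 3],
             [5, 5, 5],
             [3, 3, 2]]
print(round(cronbach_alpha(responses), 2))  # → 0.94
```

A value above 0.70 is the conventional threshold cited in the text; the UX scale’s α = 0.97 indicates highly consistent responses across the 14 items.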
2.2.2. Belief of Instructor Credibility
We assessed students’ belief in the credibility of instructors who require the use of AI-assisted platforms in their courses through the measure of three sub-constructs: competence, goodwill, and trustworthiness. The scales to measure competence and trustworthiness used in this study were originally produced by McCroskey and Young (1981) [
41]. McCroskey and Teven (1999) [
49] later added a scale to measure goodwill or perceived caring, in addition to trustworthiness and competence, to identify students’ belief in instructor credibility. The subscales of trustworthiness, competence, and goodwill were each measured by a 6-item, 7-point, bipolar semantic differential scale, and all 18 items were used to measure overall credibility. For each item, participants were asked to select the point between two sets of contrary adjectives (e.g., Unethical and Ethical) that best represented their thoughts about instructors who require the use of AI-assisted technology in their classrooms. To limit the influence of other teacher characteristics, the prompt asked students not to include their current teacher(s) in their assessment, but rather to envision a future teacher who requires the use of AI technology. The prompt for the scale instrument was “NOT considering the current instructor of ALEC 211, please indicate how you would feel about future instructors who incorporate artificial intelligence (AI) platforms, such as Packback, as requirements into their courses?” Several adjective pairs were reverse-ordered in our instrument and later recoded to reduce item-response bias. Post hoc internal scale reliability was found for the subscales of competence (α = 0.95), goodwill (α = 0.91), and trustworthiness (α = 0.97). Scale reliability was also found for the full, 18-item scale to measure credibility (α = 0.97).
2.2.3. Respondent Characteristics
Several multiple-choice questions were asked at the end of the survey to gather information on participants’ demographics and characteristics. Participants were asked to identify their gender, ethnicity, and race. Students also reported their year in school (freshman, sophomore, junior, senior), if they were majoring within or outside the Herbert College of Agriculture, if they were taking the course, Knowledge, Society, and Leadership, through the online or in-person modality, and if they had previously used Packback in other courses prior to the current semester.
2.3. Data Analysis
Data were transferred from Qualtrics to SPSS Version 29 for analyses. Descriptive statistics (means, standard deviations, frequencies, and percentages) were used to describe students’ user experience of Packback. Means and standard deviations were used to describe students’ perceived credibility of instructors who require the use of AI platforms in their courses. Lastly, a linear regression was used to determine whether the variables studied could be used to predict students’ beliefs of instructor credibility. In the regression analysis, dummy-coded categories were used for gender (0 = male, 1 = female), class level (0 = underclassman, 1 = upperclassman), and prior student use of Packback (0 = no, 1 = yes). Students’ UX of Packback and belief in instructor credibility, determined by mean scores from each scale, were treated as continuous data.
2.4. Respondents
Ninety-six (
n = 96) students out of the 100 enrolled provided their informed consent and completed the survey in full. The characteristics of these students are shown in
Table 1.
4. Discussion
Examining students’ UX of novel AI technologies is an important step to ensure that the integration of these tools is fostering a human-centered approach to teaching and learning [
39]. Similar to prior research examining students’ UX of educational technologies [
36,
37,
38], we found that students had an overall favorable UX of the integration of education technology in the classroom. Particularly, our investigation focused on the AI-driven educational tool, Packback, as a means to facilitate online discussion boards as a required course component. Students’ overall favorable UX of Packback is a positive sign, illustrating that students are not only improving learning outcomes as a result of this technology [
14,
59] but are also experiencing a positive human-technology interaction.
The replacement of typical teacher tasks (e.g., writing assistance, grammatical editing, and grading) by AI-driven technologies reduces typical teacher–student interactions. Although these AI tools have been shown to save teachers time and increase efficiency [
12,
13], as well as to offer benefits to student learning [
8,
9], skepticism toward teachers’ use of AI tools has arisen [
20,
21,
25], and their impacts on teacher credibility are not well known. Our findings show that students perceive teachers who integrate AI-driven tools, such as Packback, in their courses as credible. Students perceived instructors who incorporate AI tools in their teaching to have moderate-to-high competence, goodwill, and trustworthiness. However, significant variability in these perceptions was observed. Our findings illustrated that female students were more likely than male students to perceive teachers who required the use of AI tools in their courses as credible. This finding was surprising, as prior research has identified that male students use AI technologies more often and have more positive attitudes toward AI use [
22,
23,
24]. Perhaps female students view teachers’ adoption of AI tools in education as a strategy to improve equity and support, thereby increasing teachers’ credibility. On the other hand, male students, who typically report having higher AI efficacy [
22], may view the required use of a specific AI tool as a hindrance and feel they should have the autonomy to navigate AI use independently.
A novel finding of our study was that students’ UX of a specific AI-driven technology (i.e., Packback) influenced their perceptions of the credibility of teachers who require the use of these technologies in the classroom. Students who had a more negative UX perceived teachers integrating AI technologies to have lower credibility. This finding is not surprising, as students who had negative experiences with the AI-driven technology may interpret the teacher as not being competent in applying appropriate pedagogical skills or addressing students’ problems, which erodes perceptions of goodwill [
52]. Furthermore, students may not like using these technologies when they believe that teachers requiring these technologies may be shortchanging their education by replacing teacher tasks (e.g., support, grading, etc.) with AI [
20,
21,
25]. On the contrary, students who had a positive UX likely felt that the technology was beneficial to their learning and felt the teachers’ required use of AI technology was purposeful to improve their education, thus supporting their perceptions of teacher credibility.
This finding underscores the need for instructors to critically examine the functionality and student experience of these tools, as they impact not only student learning outcomes but also students’ perceptions of their teachers’ credibility. Teachers should give careful consideration to the selection and adoption of these tools, ensuring that students have a positive UX. Therefore, corroborating Grosseck et al. (2024) [
60], we recommend teacher training on the best practices to integrate AI tools to enhance the learning environment.
Limitations
We acknowledge several limitations to this study. As the participants in this study were students enrolled in two sections of a single course, our results are not well-suited for broad generalizations. Course, university, and instructor context may have impacted the results of this study. We recommend replication of this study across broader university and geographical scales. This research primarily focused on the integration of a single AI tool, Packback, in course discussion boards. The abundance and variety of AI tools used in education should be examined in a similar context to uncover trends in students’ UX and perceptions of teachers. Additional variables, such as students’ perceptions of the degree of teacher engagement and support, were not measured in this study. These variables may have an impact on students’ perceptions of teacher credibility and should be considered in future research. Although students in our study could opt out of having their survey responses used in reporting, and survey responses were collected anonymously, students did complete this survey for participation credit. Therefore, they may have intentionally or unintentionally answered questions in a manner they believed the instructor of the course wanted to see, introducing the possibility of response bias [
61]. Lastly, as AI technologies rapidly evolve and are adopted, we recognize that the results of this study are limited to the time and place of the study. Continuous research should be conducted to investigate the evolving acceptance, use, and attitudes toward AI technologies in education.
5. Conclusions
We can conclude that students in our study had an overall positive UX of the AI-driven platform, Packback, when using it to complete online discussion boards. Additionally, our findings show that students believe instructors who implement AI tools in their courses are credible, as indicated by overall moderate-to-high beliefs of instructor competence, goodwill, and trustworthiness. However, we also found that students’ UX of these technologies impacted their perceptions of teacher credibility, such that a more positive UX supported perceptions of instructor credibility. We recommend that future research further investigate the association between teacher credibility and the selection and use of AI tools in education. Additionally, future research employing a qualitative lens could be useful to explore the complexities among student characteristics, user experience of AI tools, and perceptions toward teachers’ integration of AI in coursework. As the integration of AI tools becomes commonplace in higher education, there is a need to investigate student learning outcomes and the UX of these novel tools, and furthermore, the impact they may have on student perceptions toward the most important facilitator of teaching and learning—the educator.