Article

Examining Student Perceptions of AI-Driven Learning: User Experience and Instructor Credibility in Higher Education

Department of Agricultural Leadership, Education and Communications, University of Tennessee, Knoxville, TN 37996, USA
* Author to whom correspondence should be addressed.
Trends High. Educ. 2025, 4(4), 59; https://doi.org/10.3390/higheredu4040059
Submission received: 2 June 2025 / Revised: 4 October 2025 / Accepted: 9 October 2025 / Published: 13 October 2025

Abstract

The increasing prevalence of artificial intelligence (AI) in higher education has established the need to examine the implications of specific AI-based technologies. We analyzed students’ perceptions of Packback, an AI-driven discussion board platform, in a large-enrollment undergraduate course at the University of Tennessee, United States. Valid and reliable quantitative survey instruments were used to measure students’ (n = 96) user experience (UX) of Packback and their perceptions of instructors who require the use of AI platforms in their courses. Data were analyzed to determine how students’ personal characteristics, prior use of Packback, and the UX of Packback influence their perceptions of the credibility (competence, goodwill, trustworthiness) of instructors who require the use of AI platforms. Findings indicated that students had an overall favorable experience of the Packback platform, despite moderate variability. For the credibility of instructors who require the use of AI technologies, students reported a moderate-to-high belief of competence, a moderate belief of goodwill, and a moderate-to-high belief of trustworthiness. A significant model was produced to explain the variance in students’ perception of teacher credibility. Being female and having a more favorable UX of Packback were significantly associated with higher beliefs in instructor credibility. Although the use of AI platforms can improve efficiency in teaching and learning, our data suggest it can also influence students’ perceptions of instructor credibility.

1. Introduction

The development and adoption of artificial intelligence (AI) tools have caused significant disruption across many industries. In a similar fashion, higher education institutions have grappled with what the advancement and adoption of AI will mean for the teaching and learning landscape [1,2]. While ethical and privacy concerns surrounding the use of AI in educational settings are abundant [3,4], many higher education institutions have promoted the integration of AI in teaching and learning to improve efficiencies and to better prepare students for the changing demands of the twenty-first-century workforce [5].
Baker and Smith (2019) [6] developed three categories to classify educational AI tools: (a) learner-facing; (b) teacher-facing; and (c) system-facing. With learner-facing tools, the student uses the software to facilitate their learning of subject matter (e.g., a personalized learning management system; LMS). Teacher-facing AI tools support teaching and administration, assessment, feedback, and plagiarism detection. The least developed tool category is system-facing, where AI tools are used by school administrators or managers to analyze data and make informed system-wide decisions. It is important to note that AI tools are not limited to one category but can extend into additional categories based on their intended users and functions.
There are abundant opportunities to harness AI tools to improve teaching and learning. For example, Merino-Campos (2025) [7] conducted a systematic review and identified a consensus that AI technologies can be integrated into educational systems to optimize student learning by providing tailored content and feedback to individual learners. These student-facing AI tools can provide real-time feedback on students’ writing; thus, students can improve their writing through virtual coaching [8]. Such AI-generated feedback has been shown to improve writing outcomes in college students over traditional, human-only feedback [9].
When considering these AI tools as teacher-facing tools, many platforms can provide automatic, tailored student feedback and can even provide automatic grading [10,11]. These features can save teachers copious amounts of time and can increase teaching efficiencies [12,13]. For example, instead of providing task-heavy and routine feedback on things like grammatical edits, teachers have more time to provide more meaningful interventions, such as in-depth constructive feedback or to better assist struggling students [14]. This may be particularly advantageous in large enrollment courses and in lower-stakes yet meaningful assignments, such as online discussion boards.

1.1. AI-Driven Discussion Boards

Asynchronous discussion boards are a common teaching approach in higher education, particularly in asynchronous, online courses [15]. Discussion boards on course topics can uniquely foster the multiple types of interactions important to online teaching: (a) teacher-to-student; (b) student-to-student; and (c) student-to-content [16]. In fact, many LMS platforms, such as Canvas, integrate native online discussion tools to provide educators the opportunities to implement graded or ungraded discussion boards in their courses to foster students’ critical thinking and application of course content through discussion and reflection [16].
New educational technologies that harness AI have been developed to facilitate online discussion boards. Packback is a digital teaching and learning platform that integrates instructional AI to guide the writing and grading of online discussions (Packback Questions) and writing forums (Deep Dives). According to Packback [17], their inquiry-based student discussion platform can “engage curiosity, increase motivation, build community, and support writing practice” through “built-in AI coaching that helps students develop better questions and responses.”
In the present study, we integrated Packback Questions (online discussion boards) throughout our course. Prior to the use of Packback, only Canvas discussion features were utilized. We integrated Packback into our course in an attempt to foster students’ higher-order thinking skills and to assist with the grading load of the course. In these discussion boards, Packback requires students to generate an open-ended question based on the guidance from an instructor’s prompt. Although instructors can set varying criteria, students are typically required to respond to a set number of questions posed by their peers. As students generate written responses, they receive real-time, personalized writing feedback that can help improve writing quality and performance. The AI technology provides student writers with feedback on (a) curiosity, or the effectiveness and depth of inquiry; (b) credibility, or the presence of cited sources; (c) writing convention, or the effectiveness of rich text, formatting, and paragraphs; and (d) writing communication, or the accuracy of grammar, spelling, and sentence structure. Of note, at the time of this study, Packback did not generate writing for students through generative AI but instead offered instant feedback and guidance to improve students’ writing quality [17]. However, peer-reviewed research on AI-generated feedback in the classroom has revealed that students believe instructor feedback to be more useful than AI-generated feedback [18]. Additionally, in lab settings, students showed greater improvement in scores when receiving instructor feedback compared to AI-generated feedback [18].

1.2. Concerns of AI in Education

Gallup estimates that 60% of U.S. teachers currently use AI tools for work, saving approximately six hours of work per week [19]. However, some critics have called into question teachers’ use of AI tools. Concerns have centered on the use of AI for grading: while teachers are expected to double-check the accuracy of the technology, many question whether this step is happening [20]. Other critics point out the hypocritical stance some teachers take on AI, where the technology is prohibited for students to use in class, yet the teacher will use it to grade assignments [21]. This has led people to wonder whether we are at a moment in education where teachers use AI to grade papers that students, whether permitted or not, used AI to write [21], which calls into question whether we can even know if learning outcomes were met. Prior research has also highlighted gender discrepancies in the context of AI and education that may raise equity concerns. Female students have been found to be less knowledgeable about AI, to adopt AI less frequently, and to have lower AI efficacy, compared to their male peers [22,23,24].
In response to these growing concerns about AI use in education [25], schools have introduced guidance and training for educators interested in using the technology. These trainings have focused on the limitations associated with AI-powered grading and have emphasized the need for the instructor to remain engaged in providing feedback [26]. While the developers of Packback have explored the technology’s impact on students’ learning outcomes, there is a need to understand how students perceive the technology and whether its use has any influence on how they perceive their teachers. If the AI technology is too challenging to use or leads to negative perceptions of the instructor, the teacher will likely not continue its use in the future. Therefore, there is a need to explore students’ perceptions of using Packback and its influence on perceptions of teachers’ credibility to understand its potential for sustained use in higher education.

1.3. Theoretical and Conceptual Framework

To best guide our study, we utilized instructor communication theory and developed a conceptual framework informed by this theory in addition to ‘user experience’ (UX) research. Since instructors’ integration of AI is not intended to replace the role of the instructor, but may influence their credibility, we also included concepts related to teacher credibility in our framework. A description of these concepts and the framework follows.

1.3.1. Instructor Communication Theory

Instructor communication theory postulates that communication between the instructor and student is central to teaching and learning. According to this theory, teachers’ behaviors, strategies, and messages used in their classroom communication influence students’ affective, behavioral, and cognitive learning outcomes [27]. Scholars have investigated how instructor choices (e.g., nonverbal behaviors, messages, feedback, etc.) influence students’ learning and their perceptions toward the learning environment [28,29,30].
Prior research in instructional communication has also linked the communication strategies instructors have utilized to students’ perceptions of their credibility as teachers [31,32,33]. In fact, McCroskey et al. (2004) [34] included student perceptions of the teacher as a component of their instructional communication model and expressed that students can form perceptions of a teacher before they even have a class with the instructor due to pre-existing knowledge or word-of-mouth.
An emerging application of this theory is the impact of technology as a modality for communication, particularly technologies that aid instructors’ communication with students. While technologies can enhance teacher–student communication [29,30], the integration of some AI platforms can supplant portions of instructors’ communication with students by replacing teacher feedback with AI feedback. Therefore, we ask how teachers’ integration of AI platforms influences the instructional communication process, particularly students’ initial perceptions of teachers who require students’ use of these platforms. To further guide our study, we integrated concepts from teacher credibility scholarship and concepts from technology research and development.

1.3.2. User Experience

Scholars have coined the term ‘user experience’ (UX) to describe the process of examining the experience of human–product interaction [35]. Prior studies have examined students’ UX of educational technologies to improve student satisfaction, motivation, and course experience [36,37,38]. Díaz et al. (2015) [36] examined students’ UX using Edmodo as a course tool, finding that most students perceived the platform as both useful and user-friendly. Similarly, Nieto García and Sit (2023) [37] found that students had a positive UX of the educational technology Kahoot!, and their positive experience using the technology motivated them to attend class. Acemoglu and Johnson (2023) [39] argued that a human-centered approach must be taken when integrating AI into education systems in order to complement human capabilities and to empower learners. Therefore, as novel AI-driven educational tools emerge and are adopted in teaching and learning, it is critical to evaluate students’ UX of these tools.

1.3.3. Teacher Credibility

The concept of source credibility and its influence on the transfer and acceptance of information has been central to the study of communication and persuasive discourse since the times of Aristotle in ancient Greece [40]. However, research has shown that source credibility within educational contexts is critical for fostering positive interactions between students and teachers [41]. The culmination of various studies on teacher credibility has provided substantiated evidence that positive associations between perceived teacher credibility and student learning exist. For example, student outcomes such as greater motivation to learn [42], increased communication with their instructors [32], and increased cognitive and affective learning [43] have been linked to teacher credibility. Additionally, teachers who are perceived to have more credibility by their students receive higher teaching evaluation scores from students [44]. Teachers should aim to maintain a high level of credibility in the classroom, which is foundational to a positive teacher-student relationship and student learning.
Scholars exploring teacher credibility and its influences have long analyzed gender as a characteristic influencing credibility. For instance, some researchers have found that students perceived male instructors to be more credible [45], while others have reported female instructors to be found more credible [46], and yet some have found no difference when it comes to gender and teacher credibility [47]. There is a lack of research that has examined how gender differences of students influence their perception of teacher credibility, but further exploration of this area has been called for [48].
When discussing source credibility, scholars typically break it down into three sub-dimensions to more holistically describe how these sources of information are viewed. These dimensions of credibility are (1) competence, (2) trustworthiness, and (3) goodwill (or caring) [49]. Within the context of education, competence has been defined as the instructor’s apparent knowledge or perceived expertise related to a topic [41]. Trustworthiness reflects the teacher’s perceived character, including whether they are perceived as sincere, respectable, and selfless [50,51]. Finally, student perceptions of an instructor’s concern for their welfare and willingness to respond to problems with a student-centered approach have been defined as goodwill or caring [52]. While some teachers may be high in one of these dimensions and low in others, it is the combination of the three that informs their overall credibility as a teacher. Those with positive perceptions across all three dimensions will be viewed as the most credible [53].

1.3.4. Model for AI Use Influences on Credibility

As the landscape of higher education is rapidly changing due to AI integration, the impact of teachers’ use of AI on their perceived credibility is not well understood. Denecke et al. (2023) [13] postulated that human interaction and interpersonal connections are foundational to effective teaching, and the use of AI, particularly by the teacher to provide student feedback, may undermine interpersonal connections in favor of efficiency. In addition to the use of learner-facing and teacher-facing AI as a potential threat to student-teacher relationships [13], it may have unintended impacts on student perceptions of instructor credibility. Figure 1 demonstrates the anticipated relationship between student perceptions of the learning tool and teacher credibility.
We propose that students’ overall perceptions of the AI learning platform will be informed by their personal characteristics, past experiences using the specific AI tool, and the UX of the platform. These perceptions of the AI learning tool would have a direct influence on students’ perceptions of their teacher’s credibility. Depending on students’ prior perceptions of and experience with AI, its integration into a class setting may lead them to question whether their instructors possess the knowledge to teach the content. Additionally, they may question how much teachers care about the students if AI is used to replace instructor feedback. Negative UX in the AI-powered learning platform may also cause the student to believe the instructor is not concerned about addressing students’ problems, thus further eroding perceptions of goodwill [52].
Alternatively, students may view teachers’ use of AI positively and at the cutting edge of education. If students have used the platform before and had a positive experience, they would likely transfer those positive perceptions to their instructor as well. This would improve students’ perceptions of their instructors’ competence and trustworthiness, which would elevate their overall credibility.
Because perceived teacher credibility has been linked to teaching evaluation scores [44], we expect teachers to continue to use the AI learning platform when it elicits positive perceptions of credibility but to terminate its use if not. Additionally, students’ overall perceptions of the platform and its UX would likely have a direct influence on continued use of the program in the future as well. While this study did not examine intent to continue using Packback, focusing instead on how its use influenced teacher credibility, the proposed model provides a greater understanding of the sustained use of AI in education.

1.4. Study Purpose and Research Objectives

As the use of AI has been rapidly integrated into higher education [54,55], the implications of specific AI-driven technologies must be critically analyzed. While measures of teaching and learning efficiency and impact on student learning outcomes are central in this regard, it is also imperative to examine students’ UX of these technologies and the impact that instructors’ integration of these technologies has on students’ perceptions of instructor credibility. Packback has emerged as a leading AI-driven platform in higher education as a mode for instructors to facilitate online discussion boards. The purpose of this study was to examine students’ user experience of Packback in a college course and to explore how instructors’ integration of AI tools may affect students’ perceptions of instructor credibility. The two descriptive research objectives that guided this study were as follows:
  • Describe students’ UX of Packback.
  • Describe students’ perceptions of instructors who require the use of AI platforms in their courses.
The data collected in alignment with the two descriptive research objectives provided insight into variables that permitted a testable hypothesis. Namely, data associated with students’ UX provide information on their attitudes and perceptions of AI-generated feedback, which can differ from those associated with instructor feedback [18]. Additionally, students’ perceptions of AI as an evaluation tool to assess their performance have been mixed, with a noted distrust toward AI grading of assignments [56]. Based on these factors, we propose the following null and alternative hypotheses:
H0. 
Students’ personal characteristics, past required use of AI in coursework, and the UX of Packback do not serve as significant predictors of student perceptions of credibility for instructors who require the use of AI platforms in their courses.
H1. 
Students’ personal characteristics, past required use of AI in coursework, and the UX of Packback serve as significant predictors of student perceptions of credibility for instructors who require the use of AI platforms in their courses.

2. Materials and Methods

2.1. Context of the Study

Undergraduate students enrolled in the course, Knowledge, Society, and Leadership, during the spring 2024 semester at the University of Tennessee-Knoxville served as the participants for this study. Two separate sections of the 3-credit-hour, 16-week course were offered in the spring semester of 2024. One section was administered in an asynchronous online modality and enrolled 61 students. The other section was administered face-to-face and enrolled 39 students. Both sections of the course followed the same general structure and learning objectives, but they were stand-alone courses, and students registered for either the in-person or the online section of the course. The course served as an elective for students to fulfill a general education requirement for the Arts and Humanities, and therefore, the course typically enrolled students from majors across the university. However, the home department of the course was within the Herbert College of Agriculture, and therefore, the majority of enrollees had majors within the college.
The structure of the course was similar for the in-person and online modality. Students were exposed to historical concepts and schools of thought regarding knowledge and education. Toward the conclusion of the course, students applied these foundational concepts to propose solutions to address challenging issues in education, agriculture, food, and natural resources that have emerged in modern times.
In addition to several larger assignments spaced throughout the semester, a major component of the course was to foster essential student-to-student interaction [16] through the completion of online discussion boards throughout the course. In the 2024 spring semester, eight discussion boards were moderated using the Packback Questions platform in both the online and in-person sections of the course. These discussion boards prompted students to ask an initial question on a topic related to their current class module. To provide an example, in the two-week course module on educational philosophy, a student referenced our class material on vocational education and early educational philosophers by posting the question, “If Snedden’s educational philosophy was applied instead of Dewey’s to the K-12 education system, how might schools be different today?”. After each student wrote their own open-ended question, they were asked to provide detailed responses to at least two questions proposed by their peers. Responses were typically several paragraphs in length and had to meet a Packback curiosity score of 50 to receive full credit. Students were typically given one to two weeks to complete each discussion board.

2.2. Survey Design

An online survey was developed to serve as the data collection instrument for this study, which examined students’ UX of Packback and perceived credibility of instructors who require the use of AI-based platforms, such as Packback. The survey was administered via Qualtrics, an online survey provider, at the conclusion of the 16-week semester. All students were asked to complete the survey for participation credit; however, students could opt out if they did not want their data to be used in this research and still receive credit. Students were told their identifying information would be kept confidential and that their responses to the survey would not be seen by the instructors of the course until after grades were posted. Additionally, the survey was set up so that student responses would be anonymous. At the end of the survey, students clicked on a link that took them to a new browser window where they entered their name to receive credit for taking the survey. Therefore, student names were not linked to student responses. This study was reviewed by the University of Tennessee-Knoxville Institutional Review Board and deemed exempt (#UTK IRB-24-08244-XM).
This survey was part of a larger research project that utilized several instruments to allow researchers to understand students’ perspectives of the course, their experiences using the Packback platform, and their perceptions of instructors who require the use of AI technology in their courses. The instruments and data reported hereafter were used solely for this study and are not reported elsewhere. All components of the survey were reviewed by a panel of experts for content and face validity.

2.2.1. User Experience (UX) of Packback

We measured students’ general UX of the Packback platform through a 14-item questionnaire consisting of a series of statements. Students were asked to read each statement and select their extent of agreement using a 5-point Likert scale, measured from 1 = strongly disagree to 5 = strongly agree. Example statements include “The interface of Packback is pleasant” and “I believe I became productive quickly using Packback.” Post hoc internal scale reliability for the UX questionnaire (α = 0.97) was found to be above the threshold for reliability (α > 0.70; [57]). The original instrument was developed for an empirical research study on software testing by Clarke et al. (2012) [58]; we adapted each statement by replacing the word “website” with “Packback”.
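Although the reliability analysis was conducted in SPSS, the computation is straightforward to reproduce. The sketch below is a hypothetical illustration in Python (the response matrix and seed are stand-ins, not the study’s data) of computing Cronbach’s alpha for a 96 × 14 matrix of Likert responses like the UX scale described above.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items (14 for the UX scale)
    item_variances = items.var(axis=0, ddof=1)  # variance of each individual item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical stand-in: 96 respondents x 14 UX items on a 1-5 Likert scale.
rng = np.random.default_rng(0)
ux_responses = rng.integers(1, 6, size=(96, 14))
print(f"alpha = {cronbach_alpha(ux_responses):.2f}")  # the study reported alpha = 0.97
```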

2.2.2. Belief of Instructor Credibility

We assessed students’ belief in the credibility of instructors who require the use of AI-assisted platforms in their courses through the measure of three sub-constructs: competence, goodwill, and trustworthiness. The scales to measure competence and trustworthiness used in this study were originally produced by McCroskey and Young (1981) [41]. McCroskey and Teven (1999) [49] later added a scale to measure goodwill or perceived caring, in addition to trustworthiness and competence, to identify students’ belief in instructor credibility. The subscales of trustworthiness, competence, and goodwill were each measured by a 6-item, 7-point, bipolar semantic differential scale, and all 18 items were used to measure overall credibility. For each item, participants were asked to select the point between two sets of contrary adjectives (e.g., Unethical and Ethical) that best represented their thoughts about instructors who require the use of AI-assisted technology in their classrooms. To limit the influence of other teacher characteristics, the prompt asked students not to include their current teacher(s) in their assessment, but rather to envision a future teacher who requires the use of AI technology. The prompt for the scale instrument was “NOT considering the current instructor of ALEC 211, please indicate how you would feel about future instructors who incorporate artificial intelligence (AI) platforms, such as Packback, as requirements into their courses?” Several adjective pairs were reverse-ordered in our instrument and later recoded to reduce item-response bias. Post hoc internal scale reliability was found for the subscales of competence (α = 0.95), goodwill (α = 0.91), and trustworthiness (α = 0.97). Scale reliability was also found for the full, 18-item scale to measure credibility (α = 0.97).
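To illustrate the recoding step described above, the following sketch shows how reverse-ordered items on a 7-point semantic differential scale are recoded before averaging; the column names and values are hypothetical placeholders, not the instrument’s actual variable names.

```python
import pandas as pd

# Hypothetical slice of the survey export: 7-point semantic differential items,
# coded so that 1 = strongest negative anchor and 7 = strongest positive anchor.
responses = pd.DataFrame({
    "intelligent": [2, 6, 7],  # presented reverse-ordered in the instrument
    "trustworthy": [5, 6, 4],  # presented in the standard direction
})

REVERSED_ITEMS = ["intelligent"]  # items whose anchors were flipped in the survey

# On a 1-7 scale, reversing maps 1<->7, 2<->6, 3<->5 via (8 - response).
responses[REVERSED_ITEMS] = 8 - responses[REVERSED_ITEMS]

credibility_mean = responses.mean(axis=1)  # each student's scale mean
```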

2.2.3. Respondent Characteristics

Several multiple-choice questions were asked at the end of the survey to gather information on participants’ demographics and characteristics. Participants were asked to identify their gender, ethnicity, and race. Students also reported their year in school (freshman, sophomore, junior, senior), if they were majoring within or outside the Herbert College of Agriculture, if they were taking the course, Knowledge, Society, and Leadership, through the online or in-person modality, and if they had previously used Packback in other courses prior to the current semester.

2.3. Data Analysis

Data were transferred from Qualtrics to SPSS Version 29 for analyses. Descriptive statistics (means, standard deviations, frequencies, and percentages) were used to describe students’ user experience of Packback. Means and standard deviations were used to describe students’ perceived credibility of instructors who require the use of AI platforms in their courses. Lastly, a linear regression was used to determine whether the variables studied could be used to predict students’ beliefs of instructor credibility. In the regression analysis, dummy-coded categories were used for gender (0 = male, 1 = female), class level (0 = underclassman, 1 = upperclassman), and prior student use of Packback (0 = no, 1 = yes). Students’ UX of Packback and belief in instructor credibility, determined by mean scores from each scale, were treated as continuous data.
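For readers who prefer an open-source equivalent of the SPSS procedure, a minimal sketch using pandas and statsmodels follows. The file name and column names are hypothetical; only the dummy coding scheme mirrors the one described above.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey export; column names are placeholders for illustration.
df = pd.read_csv("survey_export.csv")

# Dummy coding as described: 0 = male / 1 = female; 0 = underclassman /
# 1 = upperclassman; 0 = no prior Packback use / 1 = prior use.
df["female"] = (df["gender"] == "female").astype(int)
df["upperclassman"] = df["class_level"].isin(["junior", "senior"]).astype(int)
df["prior_use"] = (df["prior_packback"] == "yes").astype(int)

# Scale means (UX and credibility) are treated as continuous variables.
X = sm.add_constant(df[["upperclassman", "female", "prior_use", "ux_mean"]])
model = sm.OLS(df["credibility_mean"], X).fit()
print(model.summary())  # reports R-squared, F-statistic, coefficients, t, and p
```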

2.4. Respondents

Ninety-six (n = 96) of the 100 enrolled students provided their informed consent and completed the survey in full. The characteristics of these students are shown in Table 1.

3. Results

3.1. Research Objective 1: Describe Students’ UX of Packback

The UX questionnaire modified from Clarke et al. (2012) [58] was used to describe students’ UX of Packback in our course. Analysis of student responses across the 14-item instrument produced a grand mean score of 3.79, indicating an overall favorable experience. However, a standard deviation of 0.90 for the grand mean was found, demonstrating moderate variability in the student experience.
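As an aside on how such a grand mean is typically computed, the sketch below averages each respondent’s 14-item scale mean and takes the standard deviation across respondents; the simulated response matrix is a hypothetical stand-in for the actual survey data.

```python
import numpy as np

# Hypothetical 96 x 14 matrix of UX responses on the 1-5 Likert scale.
rng = np.random.default_rng(1)
ux = rng.integers(1, 6, size=(96, 14)).astype(float)

person_means = ux.mean(axis=1)     # each student's average across the 14 items
grand_mean = person_means.mean()   # the study reported 3.79 on the real data
spread = person_means.std(ddof=1)  # the study reported SD = 0.90
print(f"grand mean = {grand_mean:.2f}, SD = {spread:.2f}")
```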
The item with the most positive rating was “I feel comfortable using Packback,” with 83% (n = 80) of students selecting that they agreed or strongly agreed with this statement. Students also reported that they felt the platform was easy to learn how to use, with 83% (n = 80) of students agreeing or strongly agreeing with the statement “It was easy to learn how to use Packback.” Students reported a less favorable UX with the interface of the platform, as illustrated by only approximately half of students agreeing (n = 35; 36.5%) or strongly agreeing (n = 16; 16.7%) with the statement “I like using the interface of Packback.” Similarly, approximately one-third (n = 27) of students indicated they would not recommend Packback to fellow students. Student responses across the scale can be seen in Table 2.

3.2. Research Objective 2: Describe Students’ Beliefs Toward the Credibility of Instructors Who Require the Use of AI Platforms in Their Courses

Students’ perceptions of instructors were measured through three sub-constructs (competence, goodwill, and trustworthiness), which are measures of credibility [49].

3.2.1. Competence

The scale average for students’ beliefs of instructors’ competence was 5.30 on the 7-point scale. When considering real limits, this can be interpreted as a moderate-to-high belief in competence. However, a high standard deviation was found (SD = 1.17), indicating variable perceptions. Descriptive statistics for the adjective pairs used in the 6-item competence scale are illustrated in Table 3.

3.2.2. Goodwill

The scale average for students’ belief in goodwill was 5.00 (SD = 1.08), which can be interpreted as a moderate belief in goodwill. This high standard deviation indicates a substantial degree of variability. Descriptive statistics for the adjective pairs used in the six-item goodwill scale are illustrated in Table 4.

3.2.3. Trustworthiness

The scale average for students’ beliefs in trustworthiness was 5.20, which can be interpreted as a moderate-to-high belief in trustworthiness. Once again, high variability (SD = 1.27) was seen, indicating varying perceptions. Descriptive statistics for the adjective pairs used in the six-item trustworthiness scale are illustrated in Table 5.

3.3. Hypothesis: Predictive Ability of Students’ Personal Characteristics, Past Required Use of AI in Coursework, and User Experience of Packback on Student Perceptions of Instructors Who Require the Use of AI Platforms in Their Courses

A regression analysis was used to examine factors that predict students’ perceptions of the credibility of instructors who require the use of AI platforms in their courses. A significant model was produced. Therefore, we reject our null hypothesis and accept the alternative hypothesis that students’ personal characteristics, past required use of AI in coursework, and UX of Packback can serve as significant predictors of student perceptions of credibility for instructors who require the use of AI platforms in their courses.
The significant model produced in our regression analysis explained 42.9% of the variance in perceived credibility of instructors who require the use of AI platforms in their courses (R² = 0.429, F(4, 91) = 18.845, p < 0.001). While students’ class level and prior use of Packback were not significant predictors, gender (β = 0.232, p = 0.004) and Packback UX (β = 0.599, p < 0.001) were significant. Findings reveal that female students were more likely to view these instructors as more credible and that students with a more positive UX of Packback viewed these instructors as more credible. Table 6 illustrates the regression output for predictors of perceived credibility of instructors who require the use of AI in their teaching.
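To make the fitted model concrete, the unstandardized coefficients in Table 6 imply the prediction equation credibility ≈ 2.135 − 0.220·(class level) + 0.506·(gender) + 0.199·(prior use) + 0.733·(UX mean). A small worked sketch applying these published coefficients follows; the function name is ours, not part of the study.

```python
def predict_credibility(upperclassman: int, female: int,
                        prior_use: int, ux_mean: float) -> float:
    """Predicted credibility (1-7 scale) from the unstandardized B values in Table 6."""
    return (2.135
            - 0.220 * upperclassman
            + 0.506 * female
            + 0.199 * prior_use
            + 0.733 * ux_mean)

# Example: a female underclassman with no prior Packback use, at the sample's
# grand-mean UX of 3.79, has a predicted credibility of about 5.42.
print(round(predict_credibility(0, 1, 0, 3.79), 2))
```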

4. Discussion

Examining students’ UX of novel AI technologies is an important step to ensure that the integration of these tools fosters a human-centered approach to teaching and learning [39]. Similar to prior research examining students’ UX of educational technologies [36,37,38], we found that students had an overall favorable UX of the educational technology integrated in the classroom. In particular, our investigation focused on the AI-driven educational tool, Packback, as a means to facilitate online discussion boards as a required course component. Students’ overall favorable UX of Packback is a positive sign, suggesting that students may not only improve learning outcomes as a result of this technology [14,59] but also experience a positive human–technology interaction.
The replacement of typical teacher tasks (i.e., writing assistance, grammatical editing, and grading) by AI-driven technologies reduces these typical teacher–student interactions. Although these AI tools have been shown to save teachers time and increase efficiency [12,13], as well as to offer benefits to student learning [8,9], skepticism toward teachers’ use of AI tools has arisen [20,21,25], and their impacts on teacher credibility are not well known. Our findings show that students perceive teachers who integrate AI-driven tools, such as Packback, in their courses as credible. Students perceived instructors who incorporate AI tools in their teaching to have moderate-to-high competence, goodwill, and trustworthiness. However, significant variability in these perceptions was observed. Our findings illustrated that female students were more likely than male students to perceive teachers who required the use of AI tools in their courses as more credible. This finding was surprising, as prior research has identified that male students use AI technologies more often and have more positive attitudes toward AI use [22,23,24]. Perhaps female students view teachers’ adoption of AI tools in education as a strategy to improve equity and support, thereby increasing teachers’ credibility. On the other hand, male students, who typically report having higher AI efficacy [22], may view the required use of a specific AI tool as a hindrance and feel they should have the autonomy to navigate AI use independently.
A novel finding of our study was that students’ UX of specific AI-driven technologies (i.e., Packback) influenced students’ perceptions of the credibility of teachers who require the use of these technologies in the classroom. Students who had a more negative UX perceived teachers integrating AI technologies to have lower credibility. This finding is not surprising, as students who had negative experiences with the AI-driven technology may perceive the teacher as lacking competence in applying appropriate pedagogical skills or as failing to address students’ problems, which erodes perceptions of goodwill [52]. Furthermore, students may dislike using these technologies when they believe that teachers requiring them are shortchanging their education by replacing teacher tasks (e.g., support, grading) with AI [20,21,25]. On the contrary, students who had a positive UX likely felt that the technology was beneficial to their learning and that the teachers’ required use of AI technology was purposeful in improving their education, thus supporting their perceptions of teacher credibility.
This finding underscores the need for instructors to critically examine the functionality and student experience of using these tools, as it not only impacts student learning outcomes but also students’ perceptions of their teachers’ credibility. Teachers should give careful consideration to the selection and adoption of these tools, ensuring that students have a positive UX. Therefore, corroborating Grosseck et al. (2024) [60], we recommend teacher training on best practices for integrating AI tools to enhance the learning environment.

Limitations

We acknowledge several limitations to this study. As the participants in this study were students enrolled in two course sections of a single class, our results are not well-suited for broad generalizations. Course, university, and instructor context may have impacted the results of this study. We recommend replication of this study across broader university and geographical scales. This research primarily focused on the integration of the AI tool, Packback, in course discussion boards. The abundance and variety of AI tools used in education should be examined in a similar context to uncover trends in students’ UX and teacher perceptions. Additional variables, such as students’ perceptions of the degree of teacher engagement and support, were not measured in this study. These variables may have an impact on students’ perceptions of teacher credibility and should be considered in future research. Although students in our study could opt out of having their survey responses used in reporting, and survey responses were collected anonymously, students did complete this survey for participation credit. Therefore, they may have intentionally or unintentionally answered questions in a manner they believed the instructor of the course wanted to see, leading to a possibility of response bias [61]. Lastly, as AI technologies rapidly evolve and are adopted, we recognize that the results of this study are limited to the time and place of the study. Continuous research should be conducted to investigate the evolving acceptance, use, and attitudes toward AI technologies in education.

5. Conclusions

We can conclude that students in our study had an overall positive UX of the AI-driven platform, Packback, when using the platform to complete online discussion boards. Additionally, our findings show that students believe instructors who implement AI tools in their courses are credible, as indicated by overall moderate-to-high beliefs of instructor competence, goodwill, and trustworthiness. However, we also found that students’ UX of these technologies impacted their perceptions of teacher credibility, such that a more positive UX of these technologies supported perceptions of instructor credibility. We recommend that future research further investigate the association between teacher credibility and the selection and use of AI tools in education. Additionally, future research employing a qualitative lens can be useful to explore the complexities between student characteristics, user experience of AI tools, and perceptions toward teachers’ integration of AI in coursework. As the integration of AI tools becomes commonplace in higher education, there is a need to investigate student learning outcomes and the UX of these novel tools, and furthermore, the impact they may have on student perceptions toward the most important facilitator of teaching and learning: the educator.

Author Contributions

Conceptualization, B.C.C., T.K.R., V.B. and T.G.; methodology, B.C.C. and T.K.R.; software, B.C.C.; validation, B.C.C. and T.K.R.; formal analysis, B.C.C.; investigation, B.C.C., T.K.R., V.B. and T.G.; resources, B.C.C., T.K.R., V.B. and T.G.; data curation, B.C.C.; writing—original draft preparation, B.C.C.; writing—review and editing, B.C.C., T.K.R., V.B. and T.G.; visualization, B.C.C.; supervision, B.C.C., T.K.R., V.B. and T.G.; project administration, B.C.C. and V.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of University of Tennessee (protocol code #UTK IRB-24-08244-XM) on 14 June 2024.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. All students were asked to complete the survey for participation credit; however, students could opt out if they did not want their data to be used in this research and still receive credit. Students were told their identifying information would be kept confidential and that their responses to the survey would not be seen by the instructors of the course until after grades were posted. Additionally, the survey was set up so that student responses would be anonymous.

Data Availability Statement

Individual student data cannot be shared to protect student respondents.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
LMS: Learning Management System
UX: User Experience

References

  1. Batista, J.; Mesquita, A.; Carnaz, G. Generative AI and higher education: Trends, challenges, and future directions from a systematic literature review. Information 2024, 15, 676. [Google Scholar] [CrossRef]
  2. Jin, Y.; Yan, L.; Echeverria, V.; Gašević, D.; Martinez-Maldonado, R. Generative AI in higher education: A global perspective of institutional adoption policies and guidelines. Comput. Educ. Artif. Intell. 2025, 8, 100348. [Google Scholar] [CrossRef]
  3. Chen, B.; Lewis, C.; West, M.; Zilles, C. Plagiarism in the age of generative AI: Cheating method change and learning loss in an intro to CS course. In Proceedings of the L@S ’24: Eleventh ACM Conference on Learning @ Scale, Atlanta, GA, USA, 18–20 July 2024; pp. 75–85. [Google Scholar]
  4. Pudasaini, S.; Miralles-Pechúan, L.; Lillis, D.; Salvador, M.L. Survey on AI-generated plagiarism detection: The impact of large language models on academic integrity. J. Acad. Ethics 2024, 23, 1137–1170. [Google Scholar] [CrossRef]
  5. Kiesel, K.; Zuo, N.; Plakias, Z.T.; Peña-Lévano, L.M.; Barkley, A.; Lacy, K.; Hanson, E.; Treme, J. Enhancing student engagement in a changing academic environment-Tested innovations for traditional classes and online teaching. Appl. Econ. Teach. Resour. 2020, 2, 15–28. [Google Scholar] [CrossRef]
  6. Baker, T.; Smith, L. Educ-AI-tion Rebooted? Exploring the Future of Artificial Intelligence in Schools and Colleges. Available online: https://media.nesta.org.uk/documents/Future_of_AI_and_education_v5_WEB.pdf (accessed on 15 May 2025).
  7. Merino-Campos, C. The impact of artificial intelligence on personalized learning in higher education: A systematic review. Trends High. Educ. 2025, 4, 17. [Google Scholar] [CrossRef]
  8. Shum, S.B.; Sándor, Á.; Goldsmith, R.; Bass, R.; McWilliams, M. Towards reflective writing analytics: Rationale, methodology, and preliminary results. J. Learn. Anal. 2017, 4, 58–84. [Google Scholar] [CrossRef]
  9. Weber, F.; Wambsganss, T.; Söllner, M. Enhancing legal writing skills: The impact of formative feedback in a hybrid intelligence learning environment. Br. J. Educ. Technol. 2024, 56, 650–677. [Google Scholar] [CrossRef]
  10. Ding, L.; Zou, D. Automated writing evaluation systems: A systematic review of Grammarly, Pigai, and Criterion with a perspective on future directions in the age of generative artificial intelligence. Educ. Inf. Technol. 2024, 29, 14151–14203. [Google Scholar] [CrossRef]
  11. Gnanaprakasam, J.; Lourdusamy, R. The role of AI in automating grading: Enhancing feedback and efficiency. In Artificial Intelligence and Education—Shaping the Future of Learning; Kadry, S., Ed.; Intech Open: London, UK, 2024. [Google Scholar]
  12. Lantz, J.L.; Liu, J.C.; Basnyat, I. Piloting Artificial Intelligence (AI) to Facilitate Online Discussion in Large Online Classes: A Case Study. In Cases on Innovative and Successful Uses of Digital Resources for Online Learning; IGI Global: Hershey, PA, USA, 2022. [Google Scholar]
  13. Denecke, K.; Glauser, R.; Reichenpfader, D. Assessing the potential and risk of AI-based tools in higher education: Results from an eSurvey and SWOT analysis. Trends High. Educ. 2023, 2, 667–688. [Google Scholar] [CrossRef]
  14. Archibald, A.; Hudson, C.; Heap, T.; Thompson, R.; Lin, L.; DeMeritt, J.; Lucke, H. A validation of AI-enabled discussion platform metrics and relationships to student efforts. TechTrends 2023, 67, 285–293. [Google Scholar] [CrossRef]
  15. Fehrman, S.; Watson, S.L. A systematic review of asynchronous online discussions in online higher education. Am. J. Distance Educ. 2020, 35, 200–213. [Google Scholar] [CrossRef]
  16. Bernard, R.M.; Abrami, P.C.; Borokhovski, E.; Wade, C.A.; Tamim, R.M.; Surkes, M.A.; Bethel, E.C. A meta-analysis of three types of interaction treatments in distance education. Rev. Educ. Res. 2009, 79, 1243–1289. [Google Scholar] [CrossRef]
  17. Packback.co. Impact of Online Discussion Platform and Pedagogy on Student Outcomes—Packback. Available online: https://www.packback.co/resources/impact-of-online-discussion-platform-and-pedagogy-on-student-outcomes/ (accessed on 15 May 2025).
  18. Er, E.; Akcapinar, G.; Bayazit, A.; Noroozi, O.; Banihashem, S.K. Assessing student perceptions and use of instructor versus AI-generated feedback. Br. J. Educ. Technol. 2024, 56, 1074–1091. [Google Scholar] [CrossRef]
  19. Gallup. Teaching for Tomorrow: Unlocking Six Weeks a Year with AI. 2024. Available online: https://www.gallup.com/analytics/659819/k-12-teacher-research.aspx (accessed on 15 May 2025).
  20. Tangermann, V. Lazy High School Teachers Using ChatGPT for Grading. 2024. Available online: https://futurism.com/the-byte/teachers-ai-grading (accessed on 15 May 2025).
  21. Prada, L. Teachers Are Using AI to Grade Papers—While Banning Students from It. 2025. Available online: https://www.vice.com/en/article/teachers-are-using-ai-to-grade-papers-while-banning-students-from-it/ (accessed on 15 May 2025).
  22. Cachero, C.; Tomás, D.; Pujol, F.A. Gender bias in self-perception of AI knowledge, impact, and support among higher education students: An observational study. ACM Trans. Comput. Educ. 2025, 25, 1–26. [Google Scholar] [CrossRef]
  23. Stöhr, C.; Ou, A.W.; Malmström, H. Perceptions and usage of AI chatbots among students in higher education across genders, academic levels and fields of study. Comput. Educ. Artif. Intell. 2024, 7, 100259. [Google Scholar] [CrossRef]
  24. Nouraldeen, R.M. The impact of technology readiness and use perceptions on students’ adoption of artificial intelligence: The moderating role of gender. Dev. Learn. Organ. 2022, 37, 7–10. [Google Scholar] [CrossRef]
  25. Hill, K. Professors are using ChatGPT. Students aren’t happy about it. The New York Times Newsletters, 14 May 2025. [Google Scholar]
  26. Associated Press. 6 in 10 teachers use AI to grade papers, write lessons—But they say it’s making their students stop thinking. The New York Post, 25 June 2025. [Google Scholar]
  27. Wheeless, L.R.; Hurt, H.T. Instructional communication theory and research: An overview of instructional strategies as instructional communication systems. Instr. Commun. 1979, 3, 525–541. [Google Scholar] [CrossRef]
  28. Selvaraj, A.M.; Azman, H.; Wahi, W. Teacher’s feedback practice and students’ academic achievement: A systematic literature review. Int. J. Learn. Teach. Educ. Res. 2021, 20, 308–322. [Google Scholar] [CrossRef]
  29. Froment, F.; González, A.J.G.; Bohórquez, M.R. The use of social networks as a communication tool between teachers and students: A literature review. Turk. Online J. Educ. Technol. 2017, 16, 126–144. [Google Scholar]
  30. Xie, J.; Correia, A. The effects of instructor participation in asynchronous online discussions on student performance: A systematic review. Br. J. Educ. Technol. 2023, 55, 71–89. [Google Scholar] [CrossRef]
  31. Fernandes, C. The Relationship Between Teacher Communication, and Teacher Credibility, Student Motivation, and Academic Achievement in India. Ph.D. Thesis, Concordia University, Portland, OR, USA, 2019. [Google Scholar]
  32. Finn, A.N.; Schrodt, P.; Witt, P.L.; Elledge, N.; Jerberg, K.A.; Larson, L.M. A meta-analytical review of teacher credibility and its associations with teacher behaviors and student outcomes. Commun. Educ. 2009, 58, 516–537. [Google Scholar] [CrossRef]
  33. Henning, Z. Teaching with style to manage student perceptions: The effects of socio-communicative style and teacher credibility on student affective learning. Commun. Res. Rep. 2010, 27, 58–67. [Google Scholar] [CrossRef]
  34. McCroskey, J.C.; Valencia, K.M.; Richmond, V.P. Toward a general model of instructional communication. Commun. Q. 2004, 52, 197–210. [Google Scholar] [CrossRef]
  35. Nielsen, J. A 100-Year View of User Experience. Available online: https://www.nngroup.com/articles/100-years-ux/ (accessed on 15 May 2025).
  36. Díaz, A.; Magreñán, A.A.; Orcos, L. UX of social network Edmodo in undergraduate engineering students. Int. J. Interact. Multimed. Artif. Intell. 2015, 3, 31–36. [Google Scholar] [CrossRef]
  37. Nieto García, M.; Sit, J. Student’s recalled desirability of using game-based student response systems (GSRSS): A user experience (UX) perspective. Mark. Educ. Rev. 2023, 33, 272–284. [Google Scholar] [CrossRef]
  38. Sefcik, L.; Veeran-Colton, T.; Baird, M.; Price, C.; Steyn, S. An examination of student user experience (UX) and perceptions of remote invigilation during online assessment. Australas. J. Educ. Technol. 2022, 38, 49–69. [Google Scholar] [CrossRef]
  39. Acemoglu, D.; Johnson, S. Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity; Basic Books: London, UK, 2023. [Google Scholar]
  40. Garver, E. Aristotle’s Rhetoric: An Art of Character; University of Chicago Press: Chicago, IL, USA, 1994. [Google Scholar]
  41. McCroskey, J.C.; Young, T.J. Ethos and credibility: The construct and its measurement after three decades. Cent. States Speech J. 1981, 32, 24–34. [Google Scholar] [CrossRef]
  42. Tibbles, D.; Richmond, V.P.; McCroskey, J.C.; Weber, K. Organizational orientations in an instructional setting. Commun. Educ. 2008, 57, 389–407. [Google Scholar] [CrossRef]
  43. Teven, J.J.; McCroskey, J.C. The relationship of perceived teacher caring with student learning and teacher evaluation. Commun. Educ. 1997, 46, 1–9. [Google Scholar] [CrossRef]
  44. Schrodt, P. Students’ appraisals of instructors as a function of students’ perceptions of instructors’ aggressive communication. Commun. Educ. 2003, 52, 106–121. [Google Scholar] [CrossRef]
  45. Clune, M.K.F. Students’ Perceptions of Instructor Credibility: Effects of Instructor Sex, Gender Role, and Communication Style. Ph.D. Thesis, University of Kansas, Lawrence, KS, USA, 2009. [Google Scholar]
  46. Gomez, C.F.R.; Pearson, J.C. Students’ perceptions of the credibility and homophily of native and non-native English speaking teaching assistants. Commun. Res. Rep. 1990, 7, 58–62. [Google Scholar] [CrossRef]
  47. Statham, A.; Richardson, L.; Cook, J. Gender and University Teaching: A Negotiated Difference; SUNY Press: Albany, NY, USA, 1991. [Google Scholar]
  48. Froment, F.; Gutiérrez, M.d.-B. The prediction of teacher credibility on student motivation: Academic engagement and satisfaction as mediating variables. Rev. Psicodidáctica 2022, 27, 149–157. [Google Scholar] [CrossRef]
  49. McCroskey, J.C.; Teven, J.J. Goodwill: A reexamination of the construct and its measurement. Commun. Monogr. 1999, 66, 90–103. [Google Scholar] [CrossRef]
  50. Frymier, A.B.; Thompson, C.A. Perceived teacher affinity-seeking relation to perceived teacher credibility. Commun. Educ. 1992, 41, 388–399. [Google Scholar] [CrossRef]
  51. Thweatt, K.S.; McCroskey, J.C. The impact of teacher immediacy and misbehaviors on teacher credibility. Commun. Educ. 1998, 47, 348–358. [Google Scholar] [CrossRef]
  52. Teven, J.J. The relationship among teacher characteristics and perceived caring. Commun. Educ. 2001, 50, 159–169. [Google Scholar] [CrossRef]
  53. Semlak, J.L.; Pearson, J.C. Through the years: An examination of instructor age and misbehavior on perceived teacher credibility. Commun. Res. Rep. 2008, 25, 76–85. [Google Scholar] [CrossRef]
  54. Bozkurt, A.; Xiao, J.; Lambert, S.; Pazurek, A.; Crompton, H.; Koseoglu, S.; Farrow, R.; Bond, M.; Nerantzi, C.; Honeychurch, S.; et al. Speculative futures on ChatGPT and generative artificial intelligence (AI): A collective reflection from the educational landscape. Asian J. Distance Educ. 2023, 18, 53–130. [Google Scholar] [CrossRef]
  55. Nagaraj, B.K.; Kalaivani, A.; Begum, R.S.; Akila, S.; Sachdev, H.K.; Kumar, N.S. The emerging role of artificial intelligence in STEM higher education: A critical review. Int. Res. J. Multidiscip. Technovation 2023, 5, 1–19. [Google Scholar] [CrossRef]
  56. Tossell, C.C.; Tenhundfeld, N.L.; Momen, A.; Cooley, K.; Visser, E.J. Student perceptions of ChatGPT use in a college essay assignment: Implications for learning, grading, and trust in artificial intelligence. IEEE Trans. Learn. Technol. 2024, 17, 1069–1081. [Google Scholar] [CrossRef]
  57. Field, A. Discovering Statistics Using IBM SPSS Statistics, 4th ed.; Sage Publications: Thousand Oaks, CA, USA, 2013. [Google Scholar]
  58. Clarke, P.J.; Pava, J.; Davis, D.; Hernandez, F.; King, T.M. Using WReSTT in SE courses: An empirical study. In Proceedings of the 43rd ACM Technical Symposium on Computer Science Education, Raleigh, NC, USA, 29 February–3 March 2012; pp. 307–312. [Google Scholar]
  59. Rutner, S.M.; Scott, R.A. Use of artificial intelligence to grade student discussion boards: An exploratory study. Inf. Syst. Educ. J. 2022, 20, 4–18. [Google Scholar]
  60. Grosseck, G.; Bran, R.A.; Tîru, G. Digital assessment: A survey of Romanian higher education teachers’ practices and needs. Educ. Sci. 2024, 14, 32. [Google Scholar] [CrossRef]
  61. Richardson, J.T.E. The role of response biases in the relationship between students’ perceptions of their courses and their approaches to studying in higher education. Br. Educ. Res. J. 2011, 38, 399–418. [Google Scholar] [CrossRef]
Figure 1. Anticipated relationship between student perceptions of the learning tool and teacher credibility.
Table 1. Characteristics of participants (n = 96).
Characteristic                               f     %
Gender
  Male                                       44    45.8
  Female                                     50    52.1
  Other/Prefer Not to Say                    2     2.1
Ethnicity
  Spanish, Hispanic, or Latino Origin        7     7.3
  Non-Spanish, Hispanic, or Latino Origin    88    91.7
Race
  White or Caucasian                         86    89.6
  Black or African American                  9     9.4
  Native American or Alaskan Native          6     6.3
  Asian                                      2     2.1
  Native Hawaiian or Pacific Islander        4     4.2
  Other                                      4     4.2
Year in School
  Freshman                                   25    26.0
  Sophomore                                  31    32.3
  Junior                                     20    20.8
  Senior                                     20    20.8
Major
  Within College of Agriculture              59    61.5
  Outside of College of Agriculture          37    38.5
Modality
  Online                                     61    63.5
  In-Person                                  35    36.5
Prior Use of Packback
  Yes                                        59    61.5
  No                                         37    38.5
Table 2. Students’ user experience of Packback (n = 96). For each statement, cells report f (%). SD = strongly disagree; D = disagree; N = neither agree nor disagree; A = agree; SA = strongly agree.
Overall, I am satisfied with how easy it is to use Packback. SD: 4 (4.2); D: 5 (5.2); N: 8 (8.3); A: 47 (49.0); SA: 32 (33.3)
It is simple to use Packback. SD: 4 (4.2); D: 3 (3.1); N: 8 (8.3); A: 48 (50.0); SA: 33 (34.4)
I feel comfortable using Packback. SD: 3 (3.1); D: 3 (3.1); N: 10 (10.4); A: 40 (41.7); SA: 40 (41.7)
It was easy to learn how to use Packback. SD: 3 (3.1); D: 4 (4.2); N: 9 (9.4); A: 42 (43.8); SA: 38 (39.6)
I believe I became productive quickly using Packback. SD: 6 (6.3); D: 9 (9.4); N: 23 (24.0); A: 35 (36.5); SA: 23 (24.0)
The information provided with Packback was clear. SD: 3 (3.1); D: 6 (6.3); N: 16 (16.7); A: 49 (51.0); SA: 22 (22.9)
It is easy to find the information I need. SD: 3 (3.1); D: 7 (7.3); N: 19 (19.8); A: 43 (44.8); SA: 24 (25.0)
The information is effective in helping me complete the tasks and scenarios. SD: 2 (2.1); D: 10 (10.4); N: 19 (19.8); A: 40 (41.7); SA: 25 (26.0)
The interface of Packback is pleasant. SD: 3 (3.1); D: 7 (7.3); N: 21 (21.9); A: 45 (46.9); SA: 20 (20.8)
I like using the interface of Packback. SD: 8 (8.3); D: 15 (15.6); N: 22 (22.9); A: 35 (36.5); SA: 16 (16.7)
Packback has all the functions and capabilities I expect to have. SD: 4 (4.2); D: 3 (3.1); N: 11 (11.5); A: 57 (59.4); SA: 21 (21.9)
I believe that Packback helped me earn a better grade. SD: 8 (8.3); D: 10 (10.4); N: 21 (21.9); A: 28 (29.2); SA: 29 (30.2)
I would recommend Packback to fellow students. SD: 10 (10.4); D: 17 (17.7); N: 19 (19.8); A: 27 (28.1); SA: 23 (24.0)
Overall, I am satisfied with Packback. SD: 6 (6.3); D: 9 (9.4); N: 24 (25.0); A: 33 (34.4); SA: 24 (25.0)
Table 3. Students’ belief in competence for instructors who incorporate AI learning platforms, such as Packback, as requirements in their courses.
Adjective Pairs 1              Mean    Standard Deviation
Unintelligent/Intelligent 2    5.40    1.30
Untrained/Trained              5.19    1.46
Inexpert/Expert                5.11    1.37
Uninformed/Informed 2          5.27    1.33
Incompetent/Competent          5.38    1.25
Stupid/Bright 2                5.45    1.12
Scale Average                  5.30    1.17
1 A 7-point bipolar semantic differential scale was used for each adjective pair and coded so that negative adjectives were a 1 and positive adjectives were a 7. Real limits for the bipolar semantic differential scale can be interpreted as 1 being the strongest negative anchor and 7 being the strongest positive anchor. 2 Adjective pairs were reversed in the instrument, and reverse coding was used.
Table 4. Students’ belief in goodwill for instructors who incorporate AI learning platforms, such as Packback, as requirements in their courses.
Adjective Pairs 1                                               Mean    Standard Deviation
Doesn’t care about me/Cares about me 2                          5.01    1.30
Doesn’t have my interest at heart/Has my interest at heart 2    5.00    1.34
Self-centered/Not self-centered                                 5.05    1.41
Unconcerned with me/Concerned with me 2                         4.84    1.24
Insensitive/Sensitive                                           4.90    1.21
Not understanding/Understanding                                 5.22    1.27
Scale Average                                                   5.00    1.08
1 A 7-point bipolar semantic differential scale was used for each adjective pair and coded so that negative adjectives were a 1 and positive adjectives were a 7. Real limits for the bipolar semantic differential scale can be interpreted as 1 being the strongest negative anchor and 7 being the strongest positive anchor. 2 Adjective pairs were reversed in the instrument, and reverse coding was used.
Table 5. Students’ belief in trustworthiness for instructors who incorporate AI learning platforms, such as Packback, as requirements in their courses.
Adjective Pairs 1            Mean    Standard Deviation
Dishonest/Honest 2           5.42    1.23
Untrustworthy/Trustworthy    5.27    1.32
Dishonorable/Honorable 2     5.02    1.36
Immoral/Moral 2              5.14    1.45
Unethical/Ethical            5.24    1.38
Phony/Genuine                5.12    1.44
Scale Average                5.20    1.27
1 A 7-point bipolar semantic differential scale was used for each adjective pair and coded so that negative adjectives were a 1 and positive adjectives were a 7. Real limits for the bipolar semantic differential scale can be interpreted as 1 being the strongest negative anchor and 7 being the strongest positive anchor. 2 Adjective pairs were reversed in the instrument, and reverse coding was used.
Table 6. Regression analysis for students’ perceived credibility of instructors.
Predictor Variable    B (Coefficient)    SE       β         t        p
Constant              2.135              0.398              5.364    <0.001
Class Level           −0.220             0.179    −0.100    −1.233   0.221
Gender                0.506              0.170    0.232     2.980    0.004 a
Prior Packback Use    0.199              0.178    0.090     1.115    0.268
UX Packback           0.733              0.096    0.599     7.657    <0.001 a
a Significant at p < 0.05.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
