Abstract
In response to the growing need for accessible data analytics education among low-computing disciplines, this study presents the design, implementation, and outcomes of a no-code graduate-level data analytics course offered within the Engineering Management and Technology Department at the University of Tennessee at Chattanooga. The course utilizes Alteryx Designer 2025.2, an end-to-end, drag-and-drop analytics platform that enables students with minimal programming background to conduct complete data workflows, including data cleansing, transformation, and predictive modeling. Through a project-based learning (PBL) approach, students engage in real-world problem solving, developing data reasoning and interpretation skills rather than focusing on programming syntax. Course artifacts, student project outcomes, and instructional observations suggest that the use of a no-code platform, combined with hands-on assessment through video exercises and mentored projects, supports the development of analytical reasoning, engagement, and data interpretation skills. The paper concludes that GUI-based, no-code tools can effectively bridge the technical accessibility gap in data analytics education, making data-driven learning practical and scalable across low-computing academic programs. This paper is presented as a descriptive pedagogical case study, focusing on course design, instructional practices, and observed learning outcomes rather than a controlled empirical evaluation.
1. Introduction
In the 21st century, data has been at the heart of every field and discipline, including accounting, finance, process control, engineering, robotics, and construction. Through data analysis, organizations can discern prevailing patterns and trends in a fast-paced environment, acquiring knowledge that guides their decision-making processes. More than half of firms worldwide consider data analytics (DA) to be an integral element of their operational framework (Menukin et al., 2023).
While interest in DA academic programs has increased, it still falls short of expectations due to the field’s rapidly evolving technical and non-technical skill requirements. A typical DA curriculum balances mathematical proficiency (e.g., statistical analysis, probability), programming skills (e.g., data management, Python 3.14, R 4.5.2, machine learning), and business-influence competencies (e.g., communication, visualization, storytelling). However, most courses remain programming-focused and often require prerequisites in technical subjects such as programming or machine learning. Consequently, many students lose enthusiasm for DA-related subjects despite their initial interest and the field’s opportunities for advancement (Selwyn, 2019).
To address these challenges, universities have refined DA curricula across low-computing majors and non-computing fields. For instance, Sullivan (2013) proposed a data-centric computing course for non-majors, while Anderson et al. (2015) and Krishnamurthi and Fisler (2020) developed introductory computing courses to teach DA to students from diverse backgrounds. However, these still retained coding components. In contrast, Liu et al. (2023) eliminated coding entirely by designing a web-based Data Science Learning Platform (DSLP) for non-computing majors, and Velaj et al. (2022) used a no-code DA platform to teach DA to non-computer science students. Similarly, Sundberg and Holmström (2024) and Delen (KNIME, n.d.) used no-code AI tools to teach machine learning in higher education.
Prior work also highlights important pedagogical trade-offs between no-code analytics tools and traditional programming-based data analytics instruction. Programming-centric courses (e.g., Python 3.14 or R 4.5.2) typically emphasize algorithmic implementation, mathematical formulation, and coding proficiency. While this approach provides strong theoretical foundations and flexibility for advanced customization, it often imposes high technical and cognitive entry barriers, especially for students in low-computing or practice-oriented programs (Selwyn, 2019; Velaj et al., 2022). As a result, instructional time in such courses is frequently devoted to debugging, syntax, and software configuration rather than data reasoning or decision-making.
In this paper, the term “low-computing programs” refers to academic programs in which computing and programming are not core curricular requirements and where students are not expected to enter with prior coding experience. These programs emphasize applied decision-making, domain knowledge, and managerial or professional competencies rather than software development or algorithmic design. Conceptually, “low-computing programs” overlap with categories described in prior literature, including non-computer science (non-CS) programs (Sullivan, 2013; Anderson et al., 2015), non-computing majors (Liu et al., 2023), and computationally light or data-enabled disciplines in which data analysis is increasingly required but traditional programming-centric instruction presents barriers (Selwyn, 2019; Velaj et al., 2022). However, existing terms are often broad and do not explicitly distinguish between programs that require computational thinking and those that intentionally minimize programming depth. Accordingly, the term “low-computing programs” is adopted in this manuscript as a descriptor that emphasizes curricular context rather than disciplinary identity.
The Need for Data Analytics in Low-Computing Programs at UTC
The Engineering Management and Technology (EMT) department at UTC offers diverse programs at the graduate (GR) level. Graduate programs include Engineering Management (EM) and Construction Management (CM). These programs can be characterized as low-computing, requiring little to no programming skill because they focus on practical rather than computational competencies. However, with the rise of big data and data-driven operations, students in these programs increasingly encounter complex datasets. For instance, EM coursework integrates principles from engineering and business and thus covers business processes, quality management, project engineering, and supply chain management, all grounded in data-driven decision making. Teaching EM students DA helps them develop accurate and timely solutions. Similarly, CM integrates architecture, business, and engineering, emphasizing construction materials, project management, and financial operations. DA skills enable CM students to analyze patterns and generate insights that reduce risks and optimize project performance. While GR students may take computer science electives, these courses often require programming prerequisites, discouraging enrollment.
To encourage interest in DA, a pilot graduate course was introduced in Spring 2021 for EM students. The purpose of this manuscript is to document and reflect on the design, implementation, and instructional outcomes of a graduate-level DA course delivered in a low-computing academic context. Rather than presenting a hypothesis-driven empirical study, the paper adopts a descriptive pedagogical case study approach, drawing on course artifacts, student projects, enrollment trends, and instructor observations to illustrate instructional effectiveness and practical lessons learned.
2. Data Analytics Platform
Alteryx
To make DA more accessible to students without programming experience, Alteryx Designer was integrated into the course. Other Graphical User Interface (GUI)-based DA tools, such as RapidMiner 2025.1, Weka 3.8.6, SPSS V31, CODAP 3.0.0, and KNIME 5.9.0, have gained popularity by eliminating the need for coding, but they are often limited to small datasets, lack advanced variable-type conversion and automation features, and may have less intuitive interfaces. More advanced GUI-based tools, such as Informatica and IBM InfoSphere DataStage, meet most analytical needs but have steep learning curves and are not freely available to students or educators.
Alteryx allows users to extract, merge, manipulate, analyze, and present data from multiple sources without writing code. Its intuitive, no-code, drag-and-drop interface supports users of all skill levels and integrates data preprocessing and predictive analytics functions. It operates by linking visual tools representing specific tasks such as importing, cleaning, transforming, analyzing, and exporting data. Users combine these tools into workflows to prepare datasets, run predictive models, and visualize results. For example, the sample workflow in Figure 1 combines two datasets using a series of regular expressions, filtering, and cleaning operations. The platform serves as a modern alternative to traditional programming environments, offering accessibility to users who lack technical expertise.
Figure 1.
Sample Workflow: Alteryx.
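For readers who do script, the blend-and-cleanse pattern in Figure 1 has a direct textual analogue. The sketch below is a rough, standard-library Python equivalent with hypothetical field names and values (none of it comes from the actual course workflow): it trims and normalizes IDs with a regular expression, filters out incomplete rows, and joins the two datasets.

```python
import re

# Two raw datasets, as a workflow might ingest them
# (field names and values are hypothetical, for illustration only).
orders = [
    {"order_id": " A-001 ", "amount": "250"},
    {"order_id": "A-002",   "amount": ""},      # missing amount
    {"order_id": "a-003",   "amount": "480"},
]
customers = [
    {"order_id": "A-001", "customer": "Acme"},
    {"order_id": "A-003", "customer": "Globex"},
]

def clean_id(raw):
    """Cleansing step: trim spaces, uppercase, enforce an 'A-###' pattern."""
    s = raw.strip().upper()
    return s if re.fullmatch(r"A-\d{3}", s) else None

# Cleanse + filter: drop rows with malformed IDs or missing amounts.
cleaned = []
for row in orders:
    oid = clean_id(row["order_id"])
    if oid and row["amount"]:
        cleaned.append({"order_id": oid, "amount": int(row["amount"])})

# Join step: combine the two datasets on the cleaned key.
lookup = {c["order_id"]: c["customer"] for c in customers}
merged = [dict(r, customer=lookup[r["order_id"]])
          for r in cleaned if r["order_id"] in lookup]

print(merged)
# → [{'order_id': 'A-001', 'amount': 250, 'customer': 'Acme'},
#    {'order_id': 'A-003', 'amount': 480, 'customer': 'Globex'}]
```

In Alteryx, each of these steps corresponds to a tool dragged onto the canvas rather than lines of code; the logic is identical, but no syntax stands between the student and the data.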
The main data preprocessing functions within Alteryx are summarized in Table 1, while Table 2 presents the range of analytical capabilities, including predictive, prescriptive, and classification analyses available within the Alteryx Designer software. The platform also supports integration with Python 3.14, R 4.5.2, and JavaScript, allowing existing code to be reused within visual workflows. Through the Alteryx SparkED education program, both students and educators receive a fully enabled, renewable one-year Designer license, making it a cost-effective and scalable choice for teaching data analytics.
Table 1.
Data Preprocessing in Alteryx.
Table 2.
Alteryx Tool Sets.
3. Course Design
This graduate course is offered in two modalities: hybrid and online. During the first seven weeks, students in both sections engage with and practice the course materials on the online learning platform (e.g., watching lecture videos, completing practice activities). The lecture videos are designed for students with no prior background in data analytics; they use animations and clear explanations (e.g., https://www.youtube.com/watch?v=4wI0xAuz59A, accessed on 2 November 2025) to simplify complex technical concepts. In the second half of the semester, students focus on their projects, receiving one-on-one guidance from the instructor, as detailed in the following sections.
During the first week of class, students visit the Alteryx for Education website, apply for a free one-year Education License using their university email, and receive a download link with an activation key. After installing Alteryx Designer, the license can be activated in a few steps. Students then download the Predictive Tools Package from the same page to access regression, clustering, and other predictive tools. Once installation is complete, the course transitions into hands-on, application-based learning in which students make data-driven decisions using data investigation tools, predictive tools, A/B testing tools (test-and-learn experiments), time series tools, predictive grouping tools, and prescriptive tools. The course content includes data extraction, data cleaning, data profiling, the design of data processing pipelines (including data transformation and standardization models), probability and regression models, image processing and text mining methods, and the operational, applied use of supervised and unsupervised learning, including Naïve Bayes, Support Vector Machines, K-Means clustering, K-Nearest Neighbors, and Decision Trees. After successful completion of the course, students are able to (1) perform data munging and exploratory DA; (2) apply the basics of the ETL (Extract–Transform–Load) process; (3) conduct supervised and unsupervised learning experiments at an applied and interpretive level to generate predictions from real-world data based on historical records, while also engaging in text mining and image classification tasks to explore diverse data analytics applications; and (4) communicate business insights from real data in both written and oral presentations.
The course does not aim to develop mathematical or algorithmic mastery of machine learning models; instead, it emphasizes decision-oriented application, interpretation of results, and critical evaluation of model performance within a no-code analytics environment. The tentative topic sequence of the course is offered in a typical 16-week spring setting that is made up of 16 Modules, as shown in Figure 2.
Figure 2.
Course Outline.
3.1. Course Assignments
This course uses two forms of assessment: (1) hands-on exercises that reinforce students’ creativity and critical thinking, and (2) a term project in which students gain firsthand experience with real-world datasets. By working on hands-on projects and presenting their findings through video, students not only apply theoretical knowledge but also develop critical thinking, communication, and digital literacy skills (Zhang & Ma, 2023; UBicast, 2023). Both assessment types are detailed in the next section.
3.1.1. Exercises
The rapid adoption of generative artificial intelligence (AI) tools such as ChatGPT 5.2 has created significant challenges for higher education. The authenticity of student work and the integrity of assessment have become major concerns. Recent studies show that many students use AI tools to complete parts of their assignments (Jones, 2023; Yeo, 2023). While some use AI for idea generation or editing, others rely on it to produce entire submissions, raising questions about authorship and genuine learning outcomes (Das & Eliseev, 2025; Karkoulian et al., 2025). Traditional written assessments are increasingly vulnerable to AI-assisted plagiarism and fabrication. To address these issues, some educators redesign tasks to emphasize creativity, process, and reflection, making them harder for AI to replicate (Jones, 2023; Peters & Angelov, 2025). Despite these efforts, the uncontrolled use of AI in assignments continues to threaten academic credibility and critical thinking development. To promote active learning and critical skills, this course uses video and audio submissions for exercises. For selected Modules, students complete hands-on tasks to grasp complex concepts through experiential learning. Each exercise requires a video submission (8–14 min) in which students record their Alteryx workflows and verbally interpret their results by answering the exercise questions. The instructor uses Vimeo to provide time-coded feedback. Grading is based on how thoroughly students answer the questions and how confidently they build predictive or statistical models (see Table 3). At the beginning of each exercise, students define the model being implemented (e.g., building a Support Vector Machine (SVM) model and explaining its kernel parameters in Exercise 3, Module 5), and they also interpret confusion matrices and SVM plots as part of their analysis. Overall, video submissions reduce reliance on AI-generated work by requiring students to build and explain their own workflows. This approach also enhances engagement, creativity, oral communication, and digital literacy (UBicast, 2023).
Table 3.
Sample Exercise Evaluation for SVM.
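Since exercise grading hinges on how well students interpret their model output, it may help to spell out the quantities behind a confusion matrix. The short sketch below uses hypothetical labels and predictions (not data from any actual exercise) to compute the metrics a student would discuss in an SVM video submission.

```python
# Hypothetical ground-truth labels and model predictions for a
# binary classification exercise (illustrative values only).
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# The four cells of the confusion matrix.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

# Metrics students are expected to interpret in their recordings.
accuracy  = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)

print(f"TP={tp} TN={tn} FP={fp} FN={fn}  acc={accuracy:.2f}  F1={f1:.2f}")
# → TP=4 TN=4 FP=1 FN=1  acc=0.80  F1=0.80
```

In the video submissions, the point is not computing these numbers (the platform reports them) but explaining what each cell and metric means for the problem at hand.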
3.1.2. Course Project
Research indicates that PBL increases student engagement and develops critical thinking and problem-solving skills by allowing students to apply knowledge in real-world contexts (Zhang & Ma, 2023; Gratchev, 2023). Unlike exams that assess memorization, PBL assessments promote creativity, collaboration, and integration of interdisciplinary knowledge, leading to improved academic outcomes (Lucas Education Research, 2021). PBL is also particularly effective for students from diverse backgrounds, supporting equitable learning experiences and fostering motivation and ownership of learning (Bell, 2010). Overall, in business and data analytics education, PBL has been shown to improve academic achievement and higher-order thinking skills compared to traditional models (Zhang & Ma, 2023).
In this course, students are instructed to work with real datasets and apply statistical methods learned in class to produce academic papers or technical reports suitable for conference or journal submission. Projects range from analyzing large-scale traffic safety datasets (e.g., NHTSA), to health-related datasets from HealthData.gov, as well as original data collected from manufacturing lines where students are completing internships. For example, one study focused on predictive modeling of electronic control unit (ECU) system defects in automotive manufacturing (Varol & Ridder, 2024). In another case, a graduate student utilized institutional data to analyze student retention patterns at UTC (Varol & Odougherty, 2022).
A 16-page PDF provides step-by-step formatting guidance (e.g., APA style) and outlines what to include in key sections such as the introduction and literature review. Additional resources and sample papers from previous years are available to help students understand expectations. All submissions are evaluated by the instructor and an external subject-matter expert selected from a pool of academic reviewers using a comprehensive rubric (see Table 4). A peer evaluation system involving 14 faculty members from various universities further enhances feedback quality and supports faculty-mentored student research. This collaborative evaluation model strengthens academic engagement and contributes to the development of practical research and analytical skills, as discussed in Section 6.
Table 4.
Project Evaluation.
The project follows an eight-step process designed to promote mentorship, structure, and independent research. (1) Students begin with a pre-assessment to evaluate their data analytics (DA) competencies. (2) They then submit an initial proposal outlining their research interests, background, and objectives. (3) Next, students complete a series of article summaries to build a foundation for their project. (4) A final proposal is then submitted with an extended literature review and detailed statistical plan. (5) Students attend three individual meetings (via Zoom or in person) to discuss progress, datasets, and feedback. (6) They submit a full draft paper in APA format. The draft is reviewed by the instructor and an external evaluator who provides comments on structure, methodology, and clarity. (7) After revisions, the final paper is submitted and graded for clarity, rigor, interpretation, and organization. (8) Finally, students create a voice-over PowerPoint presentation summarizing their research, evaluated for content accuracy, logical flow, and delivery quality. This structured sequence ensures continuous guidance, authentic assessment, and practical experience in conducting and communicating real-world data analytics projects.
The overall workflow of the project paper preparation and submission process is depicted in Figure 3 below. The workflow is structured so that students receive mentorship from the instructor at key stages, including data acquisition, preprocessing, and model development. Each student’s work is thoroughly reviewed by both the instructor and an external reviewer assigned to the project, which strengthens the overall quality of the work and the resulting paper. The next section presents two case studies that exemplify the scope and depth of projects achievable through this course design.
Figure 3.
Project Paper Workflow.
4. Alteryx in Action: Case Studies
The following case studies are presented as illustrative examples of the types and complexity of analytical workflows students can complete using the course structure and Alteryx platform. They are not intended as formal empirical validations of model performance.
4.1. Case 1: Predicting Engineering Student Retention
The overall goal of this project was to build a logistic regression workflow in Alteryx Designer to investigate whether an incoming student would be retained after the first year. Initially, the student imported the original dataset using the Input Data Module (corresponding to Module 1 content). The Select function (covered in Module 2) was then applied to retain only the relevant variables. Next, the Data Cleansing Module (covered in Module 2) was used to address data quality issues such as removing extra or trailing spaces and imputing missing values. Following this, the Filter function (covered in Module 2) was used to transform variables (e.g., converting string values into binary indicators). Finally, the Logistic Regression Module (covered in Modules 3 and 4) was employed to perform the analysis. The student used a logit model, but this Module also includes various customization options (e.g., choosing between logit, probit, and complementary log-log link functions, or applying regularized regression techniques). After building the workflow and conducting the analysis, the student found that variables such as Highest Placement Score and Cumulative GPA show positive, statistically significant relationships with retention (p < 0.05). This result confirmed the initial hypothesis that mathematical readiness and early academic success are central factors. Figure 4 presents a visual representation of the Modules used, ordered as data input, selection, data cleansing, filtering, and logistic regression.
Figure 4.
Modules Used for Case 1.
Beyond executing the workflow, the student was required to explain the preprocessing and modeling choices (such as handling missing GPA values and converting categorical fields). This reflection helped the student understand the connection between data preparation and model validity. The entire exercise was completed within 15 weeks of class time and a series of independent analyses.
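As an illustration of what the no-code workflow computes, the sketch below fits a logistic model by plain gradient descent in standard-library Python on a synthetic stand-in dataset; the feature names, coefficients, and data are fabricated for this example, since the actual study used institutional records. With retention generated to depend positively on both predictors, the fitted weights come out positive, conceptually mirroring the student's finding.

```python
import math
import random

random.seed(0)

# Synthetic stand-in for the retention data (fabricated for illustration).
# Features: [placement_score, first_year_gpa], both scaled to 0-1.
data = []
for _ in range(200):
    score, gpa = random.random(), random.random()
    # Retention is generated to be more likely with a higher score and GPA.
    p = 1 / (1 + math.exp(-(4 * score + 4 * gpa - 4)))
    data.append(([score, gpa], 1 if random.random() < p else 0))

# Logistic regression fit by batch gradient descent -- a bare-bones
# conceptual version of what a logistic-regression tool runs internally.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    gw0 = gw1 = gb = 0.0
    for x, y in data:
        pred = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        err = pred - y                 # gradient of the log-loss
        gw0 += err * x[0]
        gw1 += err * x[1]
        gb += err
    n = len(data)
    w[0] -= lr * gw0 / n
    w[1] -= lr * gw1 / n
    b -= lr * gb / n

print(f"weights: {w[0]:.2f}, {w[1]:.2f}  intercept: {b:.2f}")
# Both weights should come out positive, matching how the data were generated.
```

In the course itself, none of this code is written: the student drags a Logistic Regression Module onto the canvas and spends the saved effort on interpreting the coefficients and their significance.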
4.2. Case 2: Predicting Semiconductor Defects in Automotive Manufacturing
The overall goal of this project was to build a predictive analytics workflow in Alteryx Designer to classify electronic control unit (ECU) test failures as either supplier-related or non-supplier-related. After receiving approval from the company’s data quality department, the student imported the original production dataset using the Input Data Module (corresponding to Module 1 content). The Select function (Module 2) was then applied to retain the relevant variables from the 6700-record dataset, which included testing parameters such as temperature, voltage, and inspection status. Next, the Data Cleansing Module (Module 2) was used to address quality issues (e.g., removing missing fields, standardizing column names, and trimming spaces). Then, the Formula tool (Module 3) was used to transform categorical outcomes into binary indicators (“1” for supplier-related, “0” for non-supplier-related).
The dataset was then split into training and validation subsets using the Sampling Module (Module 3), allowing comparison of model performance at different sampling ratios (75/25, 80/20, and 85/15). Finally, the student built three predictive models using Logistic Regression, Decision Tree, and Random Forest (covered in Modules 4 and 5) to analyze defect patterns. Among these, the Random Forest model achieved the highest validation accuracy of approximately 81.8%, outperforming the other two models. The student also examined ROC curves and F1 scores to evaluate each model’s discriminative power. Figure 5 presents a visual representation of the modules used, illustrating the workflow in the following order: data input, regex, selection, data cleansing, filtering, formula, find-and-replace, text-to-columns, logistic regression, random forest model, decision tree, and scoring.
Figure 5.
Modules Used for Case 2.
The student justified each modeling decision, including the choice of sampling ratios, model selection, and parameter tuning. This reflective process helped the student understand how data preparation, feature selection, and model configuration directly influence predictive performance.
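The sampling-ratio comparison in this case can be sketched conceptually. The code below uses synthetic data (the production dataset is restricted) and substitutes a deliberately simple threshold classifier for the three actual models, but it shows the mechanics of holding out 25%, 20%, and 15% of the records and scoring validation accuracy for each split.

```python
import random

random.seed(1)

# Synthetic stand-in for the ECU test data (the real data are under NDA):
# one numeric feature (e.g., a voltage deviation) and a binary label
# (1 = supplier-related defect, 0 = non-supplier-related).
labels = [random.randint(0, 1) for _ in range(1000)]
rows = [(random.gauss(1.0 if y else 0.0, 0.7), y) for y in labels]

def split_accuracy(train, valid):
    """Train a trivial threshold 'model' on `train`, score it on `valid`.

    The threshold is the midpoint of the two class means -- a placeholder
    for the logistic regression / decision tree / random forest models
    the student actually compared.
    """
    supplier = [x for x, y in train if y == 1]
    other = [x for x, y in train if y == 0]
    thr = (sum(supplier) / len(supplier) + sum(other) / len(other)) / 2
    return sum((x > thr) == bool(y) for x, y in valid) / len(valid)

random.shuffle(rows)
results = {}
for ratio in (0.75, 0.80, 0.85):   # the 75/25, 80/20, and 85/15 splits
    cut = int(len(rows) * ratio)
    results[ratio] = split_accuracy(rows[:cut], rows[cut:])
    print(f"{ratio:.0%} train split: validation accuracy {results[ratio]:.3f}")
```

The same loop structure applies regardless of the model plugged in; in Alteryx, the student achieved it by varying the Sampling Module's ratio and feeding each split into the three model Modules.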
5. Experiential Learning and Discussions
The outcomes reported in this section summarize descriptive trends observed across multiple course offerings and are based on rubric-based evaluations, enrollment data, and instructor reflections rather than controlled experimental comparisons.
Using Alteryx, an end-to-end analytics platform, the course emphasized hands-on data preprocessing methods and machine learning models. Rather than focusing on theory or coding, it prioritized practical, experiential learning in which students collected real data and solved real-world problems. The strong student engagement and positive outcomes led to increased interest beyond the EM program. During Spring 2021–2022, 16 students enrolled: 15 from EM and one from Environmental Science. Due to its success, the course was subsequently opened to other departments. By Spring 2025, enrollment reached 31 students from multiple disciplines, including Mechanical, Civil, Electrical, and Chemical Engineering, as well as Computer Science. Between 2021 and 2025, a total of 51 student project papers were submitted in this course. External reviewers evaluated 38 projects (the remainder were restricted by non-disclosure agreements) using a standardized rubric. Based on their assessments, 25 projects demonstrated appropriate data preprocessing techniques and successfully applied relevant analytical Modules.
The success rate was initially low in Spring 2021 but improved notably in Spring 2022 and Spring 2023. In 2021, the inclusion of quizzes and exams limited students’ engagement in hands-on learning. Beginning in 2022, the course was redesigned as fully project- and exercise-based, leading to significant gains in student performance. In Spring 2024, all lecture videos were revised and shortened by approximately 38%, focusing on fewer algorithms and emphasizing conceptual understanding and application. This instructional enhancement has had a positive impact on student outcomes, as over 90% of projects demonstrated appropriate use of data and analytical techniques. In addition, the inclusion of the course project has supported collaborative research between students and faculty. Between 2021 and 2025, two journal articles and twelve conference papers were published. Integrating a PBL approach provides accessible, career-relevant skills, increases interest in data analytics, and promotes data literacy across disciplines.
Although designed for engineering students with limited computing skills, the course framework is adaptable to non-engineering disciplines such as business, health, and social sciences. No-code platforms remove programming barriers, enabling diverse students to engage with real data and focus on interpretation and decision-making. Gaining proficiency in Alteryx helps students transition easily to other GUI-based tools like IBM InfoSphere DataStage and Informatica, which share similar interfaces and functionalities. Benchmarks suggest that users with limited programming experience can reduce data preprocessing time by 70–90% compared to coding-based methods (Markaicode, 2025; Sghani, 2022). However, script-based tools remain more flexible and scalable for advanced, customized models.
6. Pedagogical Methods and Teaching Philosophy
This manuscript adopts a descriptive pedagogical case study design. Evidence used to support instructional claims is drawn from multiple non-experimental sources (e.g., course enrollment records, student project artifacts, rubric-based evaluations, external reviewer feedback, and instructor observations collected across multiple course offerings between 2021 and 2025). These sources are used descriptively to illustrate patterns in student engagement, project quality, and learning outcomes rather than to establish causal effects or statistical significance.
Overall, the teaching design of this course is guided by a structured pedagogical framework (see Figure 6) consisting of five interrelated categories that collectively promote experiential, inclusive, and reflective learning. These categories include the following: (1) Learning Approach: Learn by Doing: This method encourages constructive, hands-on exploration using Alteryx Designer. The hands-on engagement reinforces conceptual comprehension and helps students link theory to application (Kirstein, 2021; Velaj et al., 2022). (2) PBL: This integrates real-world analytics challenges into the course design. Through iterative stages of proposal submission, one-on-one mentoring, draft review, and presentation, students learn to manage complete analytics workflows from data acquisition to interpretation. In the literature, this has been shown to improve engagement, retention, and problem-solving abilities (Bell, 2010; Lucas Education Research, 2021; Zhang & Ma, 2023). (3) Scaffolded Hands-On Exercises: This approach helps to build analytical competency through structured practice and reflection. Course exercises require video submissions in which students record and explain their workflows, reinforcing metacognitive skills through reflection and verbal explanation. This approach discourages AI-assisted plagiarism and promotes authentic demonstration of analytical reasoning (Jones, 2023; Peters & Angelov, 2025). (4) Video Assessment and Feedback: This technique replaces traditional testing with evidence-based evaluation of reasoning, interpretation, and communication. Such a strategy aligns with modern assessment models that prioritize creativity, integrity, and iterative learning (Das & Eliseev, 2025; Gratchev, 2023). (5) Inclusive and Technology-Enhanced Pedagogy: This approach ensures accessibility and engagement for students across diverse academic backgrounds through no-code analytics tools. It ensures equitable learning across diverse majors and programs while allowing all learners to participate in data-driven decision-making processes (Liu et al., 2023; Qazi & Pachler, 2025). Together, these five categories form the foundation of a learning environment where students develop both technical proficiency and critical analytical thinking without the barriers of coding.
Figure 6.
Pedagogical Method.
7. Conclusions
Teaching data analytics to students in low-computing programs presents challenges because many students may lack prior experience with programming, statistics, or data manipulation. However, embedding non-technical skills (e.g., problem-solving, data interpretation, and visualization) into the learning process helps students develop competencies that are highly valued in industry, even when their technical background is limited (Mew, 2019; Qazi & Pachler, 2025). Prior research has highlighted the importance of using accessible tools and frameworks that emphasize practical application and conceptual understanding rather than technical complexity (Cellante, 2021; Kirstein, 2021). Overall, these studies suggest that a combination of simplified tools, scaffolded instruction, and a focus on applied analytics can make data analytics education accessible and effective for non-technical learners.
It is not feasible to expect students to learn Python or similar programming languages while simultaneously completing a full project. Students would struggle to decide whether to focus on understanding the programming, the data itself, or the project findings. Therefore, for low-computing programs, educators should prioritize students’ understanding of the data and their ability to interpret results, rather than mastery of theoretical models. Hence, no-code and GUI-based analytics platforms shift the instructional focus from implementation to application. These tools enable students to engage with the full data analytics workflow (including data preparation, modeling, and evaluation) while focusing on model selection, interpretation of results, and contextual decision-making. However, this increased accessibility comes at the cost of limited algorithmic transparency and reduced opportunities for low-level model customization. As a result, no-code approaches are best viewed not as replacements for programming-based analytics education but as complementary pedagogical strategies that align more effectively with the learning objectives of low-computing programs.
8. Limitations
Perhaps the most obvious limitation of this course approach is the number of students the instructor can effectively mentor. Because significant time is dedicated to evaluating video submissions and reviewing papers, enrollment is capped at 20 students per course (e.g., 10 online and 10 hybrid). This limit is set to maximize instructional efficiency, learning quality, and overall student performance. Another limitation, though rarely observed, is a lack of fundamental statistical knowledge among students. Although the course is designed for low-computing majors, implementing it for students with no prior quantitative background can be challenging, as it requires an understanding of basic concepts such as the central limit theorem, standard deviation, related statistical principles, and Microsoft Excel. In addition, Alteryx Designer’s extensive set of tools and icons can be overwhelming for students unfamiliar with data workflows, sometimes making it difficult to determine which tool to use for a specific analytical task. To address this challenge, students can use the Alteryx Expert Assistant (AEA), an AI-powered feature within Alteryx Designer, to build workflows more efficiently. It acts as an intelligent guide that allows users to describe what they want to do in plain language; for example, prompts such as “clean missing values” or “merge two datasets” can automatically generate or suggest the appropriate tools and workflow steps. Moreover, a built-in coach powered by ChatGPT allows users to describe data workflow goals in natural language, after which ChatGPT (via the Alteryx Expert Assistant) can help design and explain Alteryx workflows step by step, suggesting the right tools and even generating or editing .yxmd workflow files.
Because this work is presented as a descriptive pedagogical case study conducted within a single institutional context, findings should be interpreted as illustrative rather than broadly generalizable.
Funding
This research received no external funding.
Institutional Review Board Statement
This study did not involve the collection of new human subjects research data and relied exclusively on retrospective course artifacts and instructional documentation. Institutional Review Board approval was therefore not required.
Informed Consent Statement
No informed consent was necessary for this study.
Data Availability Statement
No new research data were generated for this study.
Acknowledgments
During the preparation of this manuscript, the author used artificial intelligence (AI) tools, including ChatGPT 5.2 (OpenAI), for grammar correction, sentence refinement, and formatting consistency. The author has reviewed and edited the output and takes full responsibility for the content of this publication.
Conflicts of Interest
The author declares no conflicts of interest.
References
- Anderson, R. L., Ernst, M., Ordóñez, R., Pham, P., & Tribelhorn, B. (2015, March 4–7). A data programming CS1 course. 46th ACM Technical Symposium on Computer Science Education (SIGCSE ’15) (pp. 150–155), Kansas City, MO, USA. [Google Scholar] [CrossRef]
- Bell, S. (2010). Project-based learning for the 21st century: Skills for the future. The Clearing House, 83(2), 39–43. [Google Scholar] [CrossRef]
- Cellante, D. (2021). Teaching non-technical skills in a data analytics program in higher education. Issues in Information Systems, 22(3), 197–211. [Google Scholar]
- Das, S., & Eliseev, A. (2025). Predicting ChatGPT use in assignments: Implications for AI-aware assessment design. arXiv, arXiv:2508.12013. [Google Scholar]
- Gratchev, I. (2023). Replacing exams with project-based assessment: Analysis of students’ performance and experience. Education Sciences, 13(4), 408. [Google Scholar] [CrossRef]
- Jones, M. (2023). Preserving academic integrity in the age of artificial intelligence: Redesigning courses to combat AI-assisted plagiarism. International Dialogues on Education Journal, 10(1), 101–123. [Google Scholar]
- Karkoulian, S., Sayegh, N., & Sayegh, N. (2025). ChatGPT unveiled: Understanding perceptions of academic integrity in higher education—A qualitative approach. Journal of Academic Ethics, 23, 1171–1188. [Google Scholar] [CrossRef]
- Kirstein, K. (2021). Integrating common data analytics tools into non-technical education. Central Washington University Faculty Publications. Available online: https://digitalcommons.cwu.edu/cgi/viewcontent.cgi?article=1201&context=cepsfac (accessed on 4 October 2025).
- KNIME. (n.d.). Teaching low code data science: A lecturer’s view. KNIME. Available online: https://www.knime.com/blog/teach-low-code-data-science-lecturers-view (accessed on 4 October 2025).
- Krishnamurthi, S., & Fisler, K. (2020). Data-centricity. Communications of the ACM, 63(8), 24–26. [Google Scholar] [CrossRef]
- Liu, X., Golen, E., Raj, R. K., & Fluet, K. (2023). Offering data science coursework to non-computing majors. In DataEd ’23: Proceedings of the 2nd international workshop on data systems education: Bridging education practice with education research. Association for Computing Machinery. [Google Scholar] [CrossRef]
- Lucas Education Research. (2021). Research summary of project-based learning (PBL). Available online: https://www.lucasedresearch.org/wp-content/uploads/2021/04/Research-Summary-of-PBL-Rev1-1.pdf (accessed on 9 October 2025).
- Markaicode. (2025). No-code vs. traditional coding: Development speed, costs, and scalability. Markaicode Blog. Available online: https://markaicode.com/no-code-vs-traditional-coding-2025/ (accessed on 22 December 2025).
- Menukin, O., Mandungu, C., Shahgholian, A., & Mehandjiev, N. (2023). Guiding the integration of analytics in business operations through a maturity framework. Annals of Operations Research, 348, 2017–2047. [Google Scholar] [CrossRef]
- Mew, L. (2019, November 6–9). Developing an undergraduate data analytics program for non-traditional students. EDSIG Conference, Cleveland, OH, USA. Available online: https://iscap.us/proceedings/2019/pdf/4929.pdf (accessed on 13 December 2025).
- Peters, M., & Angelov, D. (2025). Redefining assessment tasks to promote students’ creativity and integrity in the age of generative artificial intelligence. International Journal of Educational Integrity, 21, 25. [Google Scholar] [CrossRef] [PubMed]
- Qazi, A. G., & Pachler, N. (2025). Conceptualising a data analytics framework to support targeted teacher professional development. Professional Development in Education, 51(3), 495–518. [Google Scholar] [CrossRef]
- Selwyn, N. (2019). What’s the problem with learning analytics? Journal of Learning Analytics, 6(3), 11–19. [Google Scholar] [CrossRef]
- Sghani, S. (2022). Alteryx vs. Python. Medium. Available online: https://medium.com/@sghani77/alteryx-vs-python-afce59c60ffb (accessed on 29 September 2025).
- Sullivan, D. G. (2013). A data-centric introduction to computer science for non-majors. In SIGCSE ’13: Proceedings of the 44th ACM technical symposium on computer science education (pp. 71–76). Association for Computing Machinery. [Google Scholar] [CrossRef]
- Sundberg, L. S., & Holmström, J. H. (2024). Using no-code AI to teach machine learning in higher education. Journal of Information Systems Education, 35(1), 56–66. [Google Scholar] [CrossRef]
- UBicast. (2023). Include video assignments in your courses. UBicast Education. Available online: https://news.ubicast.eu/en/include-video-assignment-to-your-courses (accessed on 9 October 2025).
- Varol, S., & Odougherty, P. (2022). A predictive analysis of electronic control unit system defects within automotive manufacturing. Journal of Failure Analysis and Prevention, 22, 918–925. [Google Scholar] [CrossRef]
- Varol, S., & Ridder, Z. (2024). Analysis of engineering student retention based on math placement and performance. Journal of College Orientation, Transition, and Retention, 31, 5978. [Google Scholar] [CrossRef]
- Velaj, Y., Dolezal, D., Ambros, R., Plant, C., & Motschnig, R. (2022, October 8–11). Designing a data science course for non-computer science students: Practical considerations and findings. 2022 IEEE Frontiers in Education Conference (FIE), Uppsala, Sweden. [Google Scholar] [CrossRef]
- Yeo, M. A. (2023). Academic integrity in the age of artificial intelligence authoring apps. TESOL Journal, 14(3), e716. [Google Scholar] [CrossRef]
- Zhang, L., & Ma, Y. (2023). A study of the impact of project-based learning on student learning effects: A meta-analysis study. Frontiers in Psychology, 14, 1202728. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.