Article

A Practice-Oriented Computational Thinking Framework for Teaching Neural Networks to Working Professionals

NUS-ISS, National University of Singapore, Singapore 119615, Singapore
AI 2025, 6(7), 140; https://doi.org/10.3390/ai6070140
Submission received: 27 May 2025 / Revised: 27 June 2025 / Accepted: 27 June 2025 / Published: 29 June 2025

Abstract

Background: Conventional machine learning courses are usually designed for academic learners rather than working professionals. This study addresses this gap by proposing a new instructional framework that builds practical computational thinking skills for developing neural network models on business data. Methods: This study proposes a five-component computational thinking framework tailored for working professionals, aligned with the standard data science pipeline and an artificial intelligence instructional taxonomy. The resulting course combines lectures, visualization-driven and coding-driven workshops, case studies, group discussions, and gamified model-tuning tasks. Results: Across 28 face-to-face course runs conducted between 2019 and 2024, participants consistently reported satisfaction with the computational thinking skills they gained. Conclusions: The tailored framework has been implemented to strengthen working professionals’ computational thinking skills for applying neural networks to industrial applications.

1. Introduction

Machine learning has become a core capability in the modern digital economy, driving advances in fields ranging from healthcare and finance to manufacturing and supply chains [1,2,3]. As industrial organizations increasingly rely on data to automate processes and support decision-making, the demand for machine learning talent continues to grow [4]. Motivated by this need, education and training in machine learning are now essential not only for students in formal academic programs but also for working professionals across a wide range of industries [5]. Working professionals are expected to apply machine learning to real-world tasks, such as forecasting customer demand, detecting fraud, or optimizing operations. As a result, many institutions and companies offer focused training programs, including online courses, bootcamps, and short workshops, designed specifically to equip professionals with practical machine learning skills, particularly in using tools such as neural networks [6].
However, machine learning education cannot follow a one-size-fits-all model. The needs and learning habits of academic students differ significantly from those of working professionals in many respects [7,8]. In academic settings, learners often progress through a structured curriculum that builds from foundational mathematics, such as linear algebra and statistics, to core machine learning concepts, and finally to neural network architectures. This approach provides deep conceptual understanding, which is valuable for research-driven machine learning system development, but it often lacks hands-on experience and real-world application. In contrast, working professionals usually prefer learning experiences that are applied and directly relevant to their job roles. Such learning often begins with a real-world application problem, which helps maintain motivation and engagement, particularly in short-term or intensive training contexts. Theoretical explanations are then introduced as needed, supporting a just-in-time learning model that aligns with how professionals apply new skills on the job.
In both academic and professional settings, computational thinking is a critical foundation for success in machine learning [9,10]. Computational thinking refers to the mindset and skillset used to understand and solve problems using principles from computer science [11]. It includes a few core components, such as problem decomposition, abstraction, pattern recognition, algorithmic thinking, data handling, and systematic debugging. In machine learning, these skills are essential at every stage [12]: defining the problem by converting the analytic task into a machine learning task (e.g., a classification or regression task), cleaning and structuring the data with feature engineering (if necessary), designing the model, including its type and architecture, interpreting the model outputs, and refining the model architecture or configuration to improve performance. Academic students typically learn computational thinking through structured exercises, theoretical coursework, and algorithm implementation. Working professionals, however, develop and apply computational thinking skills in more complex and dynamic environments, often using messy real-world data. Their computational thinking is shaped through practice: iteratively analyzing results, debugging model errors, and refining workflows to meet business needs [13].
Given these differences between academic and professional settings, there is a clear need for a tailored approach to teaching machine learning that develops computational thinking in working professionals. The desired framework must reflect the realities of professional environments and emphasize practical application. Furthermore, it must build the underlying computational thinking skills that enable learners to evolve with the fast-changing machine learning landscape. To address the challenges of teaching neural networks to working professionals, this paper investigates the following two research questions.
  • Research Question 1: How can a computational thinking framework be tailored for working professionals, and how should it align with both the standard data science pipeline and artificial intelligence instructional taxonomy? To address this question, this study proposes a computational thinking framework specifically designed for working professionals who apply neural networks in real-world data science contexts.
  • Research Question 2: How can the proposed computational thinking framework be implemented to improve participants’ computational thinking skills and neural network competence? Leveraging the proposed tailored computational thinking framework, this study develops a detailed instructional framework with various learning activities. They were implemented across 28 course runs (2019–2024).
The rest of this paper is organized as follows. Section 2 provides a brief introduction to relevant work in computational thinking, instructional taxonomy, and machine learning education, together with an analysis of existing studies. Then, Section 3 presents our course context, the proposed computational thinking framework tailored for practitioners, its mapping to the standard data science pipeline and AI instructional taxonomy, and the detailed course implementation plan. Section 4 reports the course implementations, instructor reflections, and limitations. Finally, Section 5 concludes this paper.

2. Background and Related Works

This section introduces relevant works in computational thinking, instructional taxonomy, and machine learning education, as well as an analysis of existing studies.

2.1. Computational Thinking

Computational thinking is a method for solving problems using ideas drawn from computer science [14,15,16]. It encourages logical reasoning and planning based on the formulated problem rather than relying on trial and error. The approach includes several key aspects that help manage the complexity of an analytics problem. It usually starts with breaking a large, complicated problem down into smaller, more manageable subproblems. It then focuses on identifying what information is most important, making complex situations easier to understand and work with. It also involves noticing patterns or repeated structures, which can reveal useful insights or help predict outcomes. Finally, it includes using digital tools to carry out repetitive or time-consuming tasks efficiently. Together, these components form an effective problem-solving method that is not limited to programming but supports critical thinking and systematic reasoning in a wide range of real-world situations [17,18].
Although computational thinking is taught in both schools and workplaces, how it is learned and used can be quite different [19]. While both groups use core skills like breaking down problems and designing solutions, professionals apply them in a broader and more flexible way. They also care more about model performance, explainability, and how their work fits into larger systems. Working professionals focus more on practical outcomes, such as building systems that solve real problems [20], and they rely heavily on data storytelling [21] and project-based learning [16,22]. This shows that the same basic thinking skills need to be taught and applied differently depending on the learner’s context.

2.2. Instructional Taxonomy

Educational psychologist Benjamin Bloom [23] introduced a taxonomy of cognitive learning objectives that has become foundational in education. This framework emphasizes fostering higher-order thinking rather than mere memorization and is widely adaptable across various contexts. Bloom’s taxonomy comprises six hierarchical levels: knowledge (recalling facts), comprehension (understanding and interpreting information), application (using knowledge to solve problems), analysis (breaking down information and examining relationships), synthesis (combining elements to create new ideas or solutions), and evaluation (making judgments based on criteria). Each level builds on the previous one, encouraging learners to progress from basic recall to critical thinking and creativity. Anderson and Krathwohl later revised Bloom’s taxonomy [24]. Andrew Churches developed a further revision tailored to the digital age, integrating web technologies and aligning higher-order thinking skills with modern digital demands [25]. This updated framework incorporates tools at each level to support effective and seamless learning, making it more comprehensive in addressing the educational needs of a technology-driven era.
A newer AI-driven taxonomy framework enhances education through six key components [26]. Collect helps students efficiently find and organize online resources, improving knowledge acquisition. Adapt personalizes learning experiences with AI-powered adaptive systems, tailoring instruction to individual progress. Simulate incorporates technologies like virtual reality to enable hands-on exploration of complex concepts. Process utilizes AI tools for data visualization and interpretation, uncovering insights beyond traditional methods. Evaluate empowers students to track their progress while providing teachers with personalized feedback. Innovate grants access to cutting-edge research and AI tools, fostering creativity and experimentation.

2.3. Machine Learning Education

Machine learning education plays a vital role in equipping the workforce with the skills needed in today’s data-driven world [27,28]. It can generally be divided into three categories. The first focuses on building basic machine learning awareness among the general public, using approaches such as traditional classroom instruction and online courses. These materials typically assume little prior knowledge and offer clear explanations of what machine learning is, along with its potential uses, benefits, and risks [29]. The second category is aimed at those who want to deepen their technical expertise and contribute to the development of machine learning methods. This includes advanced topics such as theoretical foundations, algorithm design, model architecture, and the current challenges and frontiers in the field. Examples include graduate-level AI courses and research seminars [30]. The third category is domain-specific learning, where machine learning is taught as a practical tool within a particular professional or academic context [31,32,33]. This approach helps learners apply machine learning techniques directly to problems in areas such as finance, healthcare, marketing, or engineering. Each category serves a different purpose and audience, reflecting the diverse ways machine learning knowledge can be developed and applied.
Best practice in following the data science pipeline is key in machine learning education [34,35]. The pipeline starts by collecting the information the project needs from databases or external feeds, because useful output depends on trustworthy raw material. It then cleans and fixes that material, fills gaps, and converts formats so the dataset becomes consistent and ready for study. With clean data in hand, it explores the data through quick charts and summaries to spot trends and possible signals worth chasing. Guided by those findings, it builds a learning model, choosing and training algorithms that can classify, predict, or cluster the records. Next, it checks how well the model performs on unseen samples, measuring accuracy, recall, precision, or other goals, and adjusting things if results fall short. When the model meets its targets, it moves into a live setting where new data flows through and real-time predictions or insights reach end users, turning analysis into action.
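To make these six stages concrete, the following minimal sketch walks through them end-to-end with scikit-learn; the synthetic dataset and the small multilayer perceptron are illustrative assumptions rather than material from any specific course.

```python
# A minimal sketch of the six-stage data science pipeline described above,
# using scikit-learn on a synthetic dataset (all data here is illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

# 1. Data collection: stand-in for pulling records from a database or feed.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1],
                           random_state=42)

# 2. Data preprocessing: scale features to a consistent range.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# 3. Exploration: quick summaries to spot signals (here, class balance).
print("Class counts:", np.bincount(y_train))

# 4. Modeling: train a small multilayer perceptron.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                      random_state=42).fit(X_train, y_train)

# 5. Evaluation: check accuracy, precision, and recall on unseen samples.
print(classification_report(y_test, model.predict(X_test)))

# 6. Deployment: score a new incoming record (simulated here).
new_record = scaler.transform(X_test[:1])
print("Prediction for new record:", model.predict(new_record))
```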

2.4. Analysis of Related Works

Most existing research on teaching computational thinking focuses on students in academic settings. These studies often look at how to help students understand basic computer science ideas or learn programming. However, working professionals, such as data scientists and analysts, have very different learning needs. They are usually focused on solving real-world problems quickly, applying skills directly to their jobs, and learning in short, targeted sessions. They also tend to have limited time and prefer hands-on, practical approaches rather than theory-heavy lessons. As a result, a learning approach designed for students may not work well for professionals. This creates a clear need for a computational thinking framework tailored to working professionals.
In addition, newer AI-related instructional taxonomy suggests that good instruction should follow both the typical steps of a data science project and the goals of modern AI education. These include skills like experimenting with tools, interpreting data, and creating new solutions. Therefore, any new framework should not only reflect what professionals do in their daily work but also follow best practices in how AI is taught. Our study aims to fill this gap by building a practical computational thinking framework that supports real-world learning for professionals using neural networks.

3. Methodology

3.1. Course Context

Our course prepares participants to apply neural networks to real-world business data, turning raw records into insights that drive predictive decisions. It shows how trend detection and forecasting can help organizations act sooner in marketing and operations, cutting costs and boosting returns. It is designed for working professionals who build predictive tools to streamline processes, as well as data and business analysts who want deeper machine learning skills to strengthen their recommendations. It assumes prior study of statistics and basic predictive techniques, including linear and logistic regression and decision trees.

3.2. Proposed Framework

This paper proposes a new practical computational thinking framework tailored for data practitioners in the context of a neural network course. This tailored framework helps working professionals solve real-world problems using neural networks, especially when working with structured data like tables or records. It focuses on practical steps that support reliable and effective model development. The proposed framework consists of the following five key components.
  • The first part is problem decomposition, which means turning a big business question into smaller tasks, like gathering the right data, defining what needs to be predicted, and deciding how to measure success.
  • Next is data representation, quality, and imbalance handling. This involves turning raw data into useful features, fixing missing or incorrect values, and making sure rare cases are not ignored.
  • The third part is model architecture and training strategy. This means choosing the right type of neural network and setting up how it will learn, including selecting loss functions, handling class imbalance, and tuning settings for the best results.
  • The fourth component is interpretability-aware analysis, which helps explain why the model makes certain predictions. Tools like feature importance are used to check if the model focuses on the right things and to build trust in its output (a code sketch of this step follows the list).
  • Finally, testing, debugging, and error analysis ensure that everything works correctly. This includes checking the data pipeline, monitoring training results, and studying errors to improve the model.
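As one concrete illustration of the interpretability-aware analysis component, the sketch below estimates permutation feature importance for a fitted model; the synthetic data, the hypothetical sensor-style feature names, and the model choice are all assumptions for illustration, not the course's actual lab code.

```python
# A minimal sketch of interpretability-aware analysis: permutation feature
# importance on a fitted model. Data, model, and feature names are
# illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           random_state=0)
feature_names = [f"sensor_{i}" for i in range(X.shape[1])]  # hypothetical names
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                      random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in score; a large drop
# means the model relies heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

Ranking features this way gives participants a simple, model-agnostic check on whether the network attends to variables that make business sense.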
Each element of the proposed computational thinking framework is paired with a stage of the data science pipeline and an AI-driven Bloom action [26], as illustrated in Figure 1. Domain-driven problem decomposition anchors the framework because, like the Data Collection step and Collect level, it clarifies the business question and what information must be gathered and organized before any modeling can begin. After data are collected, data representation, quality control, and imbalance handling align with Data Preprocessing and the Process level because they transform messy inputs into well-structured tensors. Model architecture and training strategy correspond to Modeling and the Innovate level; selecting layers or loss functions demands creative experimentation, exactly the kind of higher-order thinking the taxonomy labels as innovation. Interpretability-aware analysis fits Model Evaluation and the Evaluate level because judging feature importance requires critical analysis of a model’s reasoning. Finally, testing, debugging, and error analysis map to Model Deployment and the Simulate level because running controlled tests and monitoring in a staging environment help identify failures before the model serves real users. A list of recommended activities, such as lectures, hands-on programming, and discussions, is presented in Table 1.

3.3. Course Implementation

This course follows a blended learning approach that combines lectures, hands-on programming workshops, case studies, group discussions, and assessments. Each component is designed to support practical computational skill development. The following sections describe the details of each component.

3.3.1. Lecture

The lecture begins by introducing a five-step learning flow, as illustrated in Figure 1. It starts with a discussion of real-world use cases across key industries in Singapore, such as manufacturing, healthcare, government services, and finance. For instance, in the manufacturing sector, predictive maintenance is a common application. This can take two forms: one approach uses historical maintenance records to estimate the failure risk of machine units; the other uses real-time Internet of Things sensor data for time-series-based failure prediction. The next part of the lecture addresses data processing, guided by key questions: What types of data can be analyzed by neural networks? What preprocessing steps, such as normalization or formatting, are necessary before analysis? This is followed by a technical introduction to neural networks, beginning with the perceptron (the simplest model) and then expanding to multilayer perceptrons. Core concepts include model architecture (e.g., how to choose the number of layers and neurons) and model training (e.g., the role of loss functions and optimizers). Finally, the lecture covers feature importance for model interpretability and concludes with best practices for applying neural networks in practical settings [36,37].
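To ground the lecture's progression from the perceptron to the multilayer perceptron, a minimal NumPy sketch of the forward computation is shown below; the weights and inputs are illustrative, not taken from the course slides.

```python
# A minimal sketch of the lecture's starting point: a single perceptron
# computing a weighted sum plus bias, followed by an activation.
# Weights and inputs below are illustrative, not from the course materials.
import numpy as np

def perceptron(x, w, b):
    """Single neuron: step activation over a weighted sum."""
    return 1 if np.dot(w, x) + b > 0 else 0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stacking neurons into layers gives a multilayer perceptron: each layer
# is a matrix multiply plus a bias, followed by a nonlinearity.
def mlp_forward(x, W1, b1, W2, b2):
    hidden = sigmoid(W1 @ x + b1)      # hidden layer
    return sigmoid(W2 @ hidden + b2)   # output layer

rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 0.3])                   # one input record (3 features)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)    # 4 hidden neurons
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)    # 1 output neuron
print("Perceptron output:", perceptron(x, np.array([1.0, 0.5, -0.2]), 0.1))
print("MLP output:", mlp_forward(x, W1, b1, W2, b2))
```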

3.3.2. Programming Workshop

To support hands-on learning in neural network model building, the course offers two types of programming workshops: visualization-based and coding-based. The visualization-based workshops use Rattle (for R version 3) [38] and Orange (for Python version 3) [39], both of which provide user-friendly, interactive interfaces. These tools allow participants to load datasets (such as Excel files) [40] and build neural network models without writing any code. This format is especially helpful for beginners, as it focuses on the core concepts of model training and evaluation; however, it offers only limited options for model tuning and customization. The coding-based workshops are delivered through Jupyter notebooks for both R (version 4.4.1) and Python (version 3). Jupyter supports a combination of code, explanations, and visual outputs, making it well suited for instructional use. Participants use notebook-based lab materials to experiment with neural network models, adjust hyperparameters, and immediately observe the impact on model performance through visualizations and metrics. All coding-based programming exercises are designed to run smoothly on Google Colab [41], which provides free, cloud-based execution with no setup required. This ensures accessibility for all participants and enables a deeper, more flexible exploration of model-building techniques.
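The workshop notebooks themselves are not reproduced here, but a sketch of the kind of Colab-ready experiment they involve might look as follows, with the hyperparameters gathered at the top so a participant can change one setting, retrain, and observe the effect; the dataset and architecture are assumptions for illustration.

```python
# A Colab-ready sketch of a workshop-style experiment: tweak the
# hyperparameters at the top, retrain, and watch the validation metrics
# change. The dataset and architecture are illustrative assumptions.
import tensorflow as tf
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Hyperparameters to experiment with
HIDDEN_UNITS = [64, 32]
LEARNING_RATE = 1e-3
EPOCHS = 20
BATCH_SIZE = 32

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2,
                                                  random_state=1)

model = tf.keras.Sequential(
    [tf.keras.layers.Dense(u, activation="relu") for u in HIDDEN_UNITS]
    + [tf.keras.layers.Dense(1, activation="sigmoid")]
)
model.compile(optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),
              loss="binary_crossentropy", metrics=["accuracy"])

history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
                    epochs=EPOCHS, batch_size=BATCH_SIZE, verbose=0)
print("Final validation accuracy:", history.history["val_accuracy"][-1])
```

Rerunning the cell after changing, say, LEARNING_RATE or HIDDEN_UNITS lets learners compare validation curves side by side, which is the core habit the coding workshops aim to build.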

3.3.3. Case Study

The course includes a case study selected to reflect an important economic sector in the Singaporean context: machine health classification using Internet of Things sensor data [42]. The Air Pressure System Failure at Scania Trucks dataset was released by Scania and Chalmers University for the 2016 Industrial Data Analytics challenge and is widely used to study predictive maintenance methods. It contains sensor readings and operational logs collected from heavy trucks in day-to-day service, focusing on the air-pressure system that supplies compressed air for braking and gear changes. Each record is described by 170 anonymized features, ranging from single numerical counters to histogram-type measures, and the full set spans roughly 60,000 instances split into predefined training and test files. The goal is to perform a binary classification: the positive class flags failures of a specific Air Pressure System component, while the negative class covers trucks whose issues lie elsewhere. Only a small percentage of rows belong to the positive class, making it a classic imbalanced-data problem that mirrors real industrial conditions in which true failures are rare. Researchers use the dataset to develop and benchmark neural network models that can detect faults early, helping fleets cut downtime and maintenance costs.
The case study session begins with exploratory data analysis using Excel, focusing on key preprocessing tasks such as handling missing values, normalizing continuous features, and converting categorical variables into numerical form. Once preprocessing is finished, a prepared Python script is used to train a multilayer perceptron model. The instructor demonstrates how changes to specific hyperparameters can influence model performance. Evaluation is discussed beyond accuracy, highlighting the importance of precision, recall, and F1-score, especially in real-world industrial applications [43].
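A condensed sketch of these preprocessing steps in Python (rather than Excel) is given below; the file name, header offset, and the "na"/"pos"/"neg" encodings follow the commonly distributed UCI copy of the Scania dataset and should be verified against the actual files.

```python
# A condensed sketch of the case-study preprocessing in code rather than
# Excel. File name, header offset, and label encoding follow the commonly
# distributed UCI copy of the dataset; verify them against your own copy.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("aps_failure_training_set.csv", skiprows=20, na_values="na")

# Binary target: 'pos' flags an Air Pressure System component failure.
y = (df["class"] == "pos").astype(int)
X = df.drop(columns=["class"])

print("Class balance:", y.value_counts(normalize=True).round(4).to_dict())

# Handle missing values and normalize continuous features.
X = SimpleImputer(strategy="median").fit_transform(X)
X = StandardScaler().fit_transform(X)
print("Prepared matrix:", X.shape)
```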
To encourage active engagement and hands-on learning, a gamification strategy is used during the model tuning activity. After running the baseline model using the prepared Python code, students are divided into teams and challenged to improve its performance. Each team applies their understanding of neural networks by adjusting model architecture, training parameters, or optimization settings. The goal is to achieve the highest possible accuracy on the validation dataset. At the end of the session, the team with the best performance presents their solution to the class, explaining what changes they made and how those led to improvements. Other teams are encouraged to ask questions or suggest alternative ideas.
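A minimal sketch of how such a tuning round can be scored is shown below: each hypothetical "team" entry is a candidate configuration, and a simple leaderboard ranks them by validation accuracy. The configurations and synthetic data are illustrative, not actual class entries.

```python
# A minimal sketch of the gamified tuning round: each "team" entry is a
# candidate configuration, and the leaderboard ranks them by validation
# accuracy. Configurations and data are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=7)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, stratify=y,
                                            random_state=7)

teams = {
    "Team A": dict(hidden_layer_sizes=(32,), learning_rate_init=1e-3),
    "Team B": dict(hidden_layer_sizes=(64, 32), learning_rate_init=1e-3),
    "Team C": dict(hidden_layer_sizes=(64, 32), learning_rate_init=1e-2),
}

leaderboard = []
for name, params in teams.items():
    clf = MLPClassifier(max_iter=300, random_state=7, **params).fit(X_tr, y_tr)
    leaderboard.append((clf.score(X_val, y_val), name))

for score, name in sorted(leaderboard, reverse=True):
    print(f"{name}: validation accuracy = {score:.3f}")
```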

3.3.4. Group Discussion

Because the dataset is naturally imbalanced, as actual machine failures are rare, the class engages in a discussion on handling class imbalance in practical scenarios. Participants work in small groups to analyze the case, respond to structured prompts, and present their strategies and insights to the full class.
Each group starts by duplicating the baseline notebook provided in class and then implements one resampling or re-weighting method to retrain the multilayer perceptron, keeping all hyperparameters identical except for the new balancing technique. During the experiments, they record the following in a short report: (1) updated metrics (precision, recall, F1-score, and the confusion matrix) compared with the baseline; (2) a brief reflection (three to four sentences) on any performance trade-offs observed, such as changes in minority-class recall versus overall accuracy; and (3) at least one recommendation for further improvement. Finally, each group takes a turn sharing their ideas on handling class imbalance and how they applied their chosen strategy to the machine-health dataset.
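As one example of what a group's notebook might contain, the sketch below retrains the same multilayer perceptron after oversampling the minority class and compares the resulting metrics with the baseline; the synthetic imbalanced dataset stands in for the machine-health data, and the oversampling choice is just one of the balancing techniques a group might pick.

```python
# A minimal sketch of one group's experiment: retrain the same MLP after
# oversampling the minority class, then compare metrics with the baseline.
# The synthetic data stands in for the machine-health dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report
from sklearn.utils import resample

X, y = make_classification(n_samples=3000, n_features=20, weights=[0.95, 0.05],
                           random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=3)

def train_and_report(Xa, ya, label):
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=400,
                        random_state=3).fit(Xa, ya)
    print(f"--- {label} ---")
    print(classification_report(y_te, clf.predict(X_te), digits=3))

# Baseline: train on the imbalanced data as-is.
train_and_report(X_tr, y_tr, "baseline")

# Rebalanced: oversample the minority class to match the majority count.
minority, majority = X_tr[y_tr == 1], X_tr[y_tr == 0]
minority_up = resample(minority, replace=True, n_samples=len(majority),
                       random_state=3)
X_bal = np.vstack([majority, minority_up])
y_bal = np.array([0] * len(majority) + [1] * len(minority_up))
train_and_report(X_bal, y_bal, "oversampled minority")
```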

3.3.5. Assessment

Assessment consists of two parts. The first is a graded multiple-choice quiz that tests foundational neural network concepts, including computation mechanisms and parameter tuning. The second is formative, with ongoing feedback provided during group discussions to support reflective learning and concept reinforcement.

4. Implementations and Reflections

4.1. Implementations

From March 2019 to August 2024, the course was delivered 28 times, enrolling a total of 683 participants, as shown in Table 2 and Table 3. This one-day, face-to-face class spans 6.5 h, running from 9:00 AM to 5:00 PM with two short breaks and a lunch break. The course materials are hosted in an online learning management system, Canvas [44]. An anonymous post-course survey was conducted at the end of the course. No demographic data were collected, to preserve participant anonymity. The survey consists of five questions, as shown in Table 4. The questions were drawn from our routine teaching evaluation for their relevance to a short course on neural network concepts for working professionals. They are organized into three dimensions:
  • Skill-related (Q1–Q2). These items assess whether the course materials and activities enabled participants to acquire practical skills and knowledge they can apply to neural network tasks.
  • Delivery-related (Q3–Q4). These items capture participants’ reflections on (i) their confidence in applying what they learned and (ii) the instructor’s effectiveness in explaining concepts and facilitating class interaction.
  • Overall satisfaction (Q5). This item measures whether the course met its stated objectives and provides an overall indicator of participant satisfaction.

4.2. Instructor Reflection

Based on observations obtained from 28 course runs, several key lessons have been learned. First, it is critical to start with real-world use cases and actual datasets rather than abstract mathematical models to help participants better understand the relevance of neural networks to their work. Second, hands-on programming workshops are essential for participants to observe how changes in model configuration affect training and performance. Third, in-class discussions on practical challenges, such as dealing with imbalanced datasets, are critical for equipping participants with the skills needed to address similar problems in their jobs.
Several opportunities for instructional improvement have also been identified. Currently, the course focuses on traditional neural networks, such as multilayer perceptrons, and does not cover deep learning models due to its limited scope and one-day duration. It is therefore recommended to provide post-course materials that bridge the gap between foundational knowledge and more advanced deep neural networks. Moreover, due to organizational data policies, some participants are unable to use public platforms like Google Colab. To accommodate this, a locally installable version of the programming workshop should be offered for those working on restricted computers.

4.3. Limitations

This study has several limitations. Demographic information such as participants’ job roles, educational backgrounds, or gender was not collected, which limits the ability to analyze how different learner profiles may influence the development of computational thinking skills. Moreover, the course is delivered in a short, one-day, face-to-face format. The impact of longer-term learning or alternative delivery formats, such as online or hybrid classes, remains an open area for future exploration. A pre-test/post-test design will be embedded in each course run to capture individual learning gains and enable paired-sample statistical analysis. Furthermore, only descriptive statistics were used; future studies will include inferential comparisons on outcomes across class sizes and yearly cohorts. These enhancements will provide a stronger evidence base for the effectiveness of the proposed framework and reveal factors that influence skill acquisition among working professionals.

5. Conclusions

This study bridges the gap between academic machine learning instruction and the practice-driven needs of working professionals. Recognizing that working professionals require data-centric skills that traditional curricula rarely emphasize, this study proposes a five-component computational thinking framework tailored to this audience. The framework has been implemented across 28 course runs to strengthen working professionals’ computational thinking skills for applying neural networks to industrial applications.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived because this study focused on instructional design and its implementation, conducted during routine teaching activities in the author’s course; no student grades or course feedback were analyzed.

Informed Consent Statement

Not applicable.

Data Availability Statement

Course materials will be available upon request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sun, J.C.; Pratt, T.L. Navigating AI Integration in Career and Technical Education: Diffusion Challenges, Opportunities, and Decisions. Educ. Sci. 2024, 14, 1285.
  2. Babashahi, L.; Barbosa, C.E.; Lima, Y.; Lyra, A.; Salazar, H.; Argolo, M.; Almeida, M.A.d.; Souza, J.M.d. AI in the Workplace: A Systematic Review of Skill Transformation in the Industry. Adm. Sci. 2024, 14, 127.
  3. Sidhu, G.S.; Sayem, M.A.; Taslima, N.; Anwar, A.S.; Chowdhury, F.; Rowshon, M. AI and Workforce Development: A Comparative Analysis of Skill Gaps and Training Needs in Emerging Economies. Int. J. Bus. Manag. Sci. 2024, 4, 12–28.
  4. Xu, J.J.; Babaian, T. Artificial intelligence in business curriculum: The pedagogy and learning outcomes. Int. J. Manag. Educ. 2021, 19, 100550.
  5. Memarian, B.; Doleck, T. Teaching and learning artificial intelligence: Insights from the literature. Educ. Inf. Technol. 2024, 29, 21523–21546.
  6. Geron, A. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2019.
  7. Salazar-Gomez, A.F.; Bagiati, A.; Minicucci, N.; Kennedy, K.D.; Du, X.; Breazeal, C. Designing and implementing an AI education program for learners with diverse background at scale. In Proceedings of the IEEE Frontiers in Education Conference, Uppsala, Sweden, 8–11 October 2022; pp. 1–8.
  8. Schleiss, J.; Laupichler, M.C.; Raupach, T.; Stober, S. AI Course Design Planning Framework: Developing Domain-Specific AI Education Courses. Educ. Sci. 2023, 13, 954.
  9. Juca-Aulestia, M.; Cabrera-Paucar, E.; Sánchez-Burneo, V. Education and Characteristics of Computational Thinking: A Systematic Literature Review. In Proceedings of the World Conference on Information Systems and Technologies, Pisa, Italy, 4–6 April 2023; Volume 800, pp. 156–171.
  10. Dohn, N.B.; Kafai, Y.; Mørch, A.; Ragni, M. Survey: Artificial Intelligence, Computational Thinking and Learning. KI Künstl. Intell. 2022, 36, 5–16.
  11. Wing, J.M. Computational thinking. Commun. ACM 2006, 49, 33–35.
  12. Tedre, M. Computational Thinking 2.0. In Proceedings of the 17th Workshop in Primary and Secondary Computing Education, Morschach, Switzerland, 31 October–2 November 2022.
  13. Denning, P.J.; Tedre, M. Computational thinking for professionals. Commun. ACM 2021, 64, 30–33.
  14. Lyon, J.A.; Magana, A.J. Computational thinking in higher education: A review of the literature. Comput. Appl. Eng. Educ. 2020, 28, 1174–1189.
  15. Tsai, M.J.; Liang, J.C.; Hsu, C.Y. The Computational Thinking Scale for Computer Literacy Education. J. Educ. Comput. Res. 2021, 59, 579–602.
  16. Agbo, F.J.; Everetts, C. Towards Computing Education for Lifelong Learners: Exploring Computational Thinking Unplugged Approaches. In Proceedings of the ACM Virtual Global Computing Education Conference, Virtual, 5–8 December 2024; pp. 295–296.
  17. Angeli, C.; Giannakos, M. Computational thinking education: Issues and challenges. Comput. Hum. Behav. 2020, 105, 106185.
  18. de Jong, I.; Jeuring, J. Computational Thinking Interventions in Higher Education: A Scoping Literature Review of Interventions Used to Teach Computational Thinking. In Proceedings of the International Conference on Computing Education Research, Virtual, 10–12 August 2020.
  19. Liu, T. Relationships Between Executive Functions and Computational Thinking. J. Educ. Comput. Res. 2024, 62, 1267–1301.
  20. Ezeamuzie, N.O.; Leung, J.S.C. Computational Thinking Through an Empirical Lens: A Systematic Review of Literature. J. Educ. Comput. Res. 2022, 60, 481–511.
  21. Parsazadeh, N.; Cheng, P.Y.; Wu, T.T.; Huang, Y.M. Integrating Computational Thinking Concept Into Digital Storytelling to Improve Learners’ Motivation and Performance. J. Educ. Comput. Res. 2021, 59, 470–495.
  22. Zhou, C.; Zhang, W. Computational Thinking (CT) towards Creative Action: Developing a Project-Based Instructional Taxonomy (PBIT) in AI Education. Educ. Sci. 2024, 14, 134.
  23. Bloom, B. Taxonomy of Educational Objectives: The Classification of Educational Goals; Longmans, Green: New York, NY, USA, 1956.
  24. Anderson, L.W.; Krathwohl, D.R. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives; Longman: London, UK, 2001.
  25. Churches, A. Bloom’s Digital Taxonomy. 2008. Available online: http://burtonslifelearning.pbworks.com/f/BloomDigitalTaxonomy2001.pdf (accessed on 25 May 2025).
  26. Hmoud, M.; Shaqour, A. AIEd Bloom’s Taxonomy: A Proposed Model for Enhancing Educational Efficiency and Effectiveness in the Artificial Intelligence Era. Int. J. Technol. Learn. 2024, 31, 111–128.
  27. Ng, D.T.K.; Lee, M.; Tan, R.J.Y.; Hu, X.; Downie, J.S.; Chu, S.K.W. A review of AI teaching and learning from 2000 to 2020. Educ. Inf. Technol. 2023, 28, 8445–8501.
  28. Hazzan, O.; Mike, K. Guide to Teaching Data Science: An Interdisciplinary Approach; Springer: Cham, Switzerland, 2023; pp. 1–321.
  29. Camerlingo, G.; Fantozzi, P.; Laura, L.; Parrillo, M. Teaching Neural Networks Using Comic Strips. In Proceedings of the International Conference in Methodologies and Intelligent Systems for Technology Enhanced Learning, Salamanca, Spain, 26–28 June 2024; pp. 1–10.
  30. Xiao, T.; Gao, D.; Chen, S.; Mei, X.; Yang, Y.; Lu, X. Artificial Neural Network Course Designing Based on Large-Unit Teaching Mode. In Proceedings of the International Conference on Computer Science and Technologies in Education, Xi’an, China, 19–21 April 2024; pp. 342–346.
  31. Nadzinski, G.; Gerazov, B.; Zlatinov, S.; Kartalov, T.; Dimitrovska, M.M.; Gjoreski, H.; Chavdarov, R.; Kokolanski, Z.; Atanasov, I.; Horstmann, J.; et al. Data Science and Machine Learning Teaching Practices with Focus on Vocational Education and Training. Inform. Educ. 2023, 22, 671–690.
  32. Dogan, G. Teaching Machine Learning with Applied Interdisciplinary Real World Projects. In Proceedings of Machine Learning Research; ML Research Press: Norfolk, MA, USA, 2022; Volume 207, pp. 12–15.
  33. Brungel, R.; Bracke, B.; Ruckert, J.; Friedrich, C.M. Teaching Machine Learning with Industrial Projects in a Joint Computer Science Master Course: Experiences, Challenges, Perspectives. In Proceedings of the IEEE German Education Conference, Berlin, Germany, 2–4 August 2023.
  34. Stoyanovich, J. Teaching Responsible Data Science. In Proceedings of the International Workshop on Data Systems Education, Philadelphia, PA, USA, 12–17 June 2022; pp. 4–9.
  35. Lewis, A.; Stoyanovich, J. Teaching Responsible Data Science: Charting New Pedagogical Territory. Int. J. Artif. Intell. Educ. 2022, 32, 783–807.
  36. Mersha, M.; Lam, K.; Wood, J.; AlShami, A.K.; Kalita, J. Explainable artificial intelligence: A survey of needs, techniques, applications, and future direction. Neurocomputing 2024, 599, 128111.
  37. Sheridan, H.; Murphy, E.; O’Sullivan, D. Human Centered Approaches and Taxonomies for Explainable Artificial Intelligence. In Proceedings of the International Conference on Human-Computer Interaction, Washington, DC, USA, 29 June–4 July 2024; pp. 144–163.
  38. Williams, G.J. Data Mining with Rattle and R: The Art of Excavating Data for Knowledge Discovery; Springer: Berlin/Heidelberg, Germany, 2011.
  39. Demšar, J.; Curk, T.; Erjavec, A.; Gorup, Č.; Hočevar, T.; Milutinovič, M.; Možina, M.; Polajnar, M.; Toplak, M.; Starič, A.; et al. Orange: Data Mining Toolbox in Python. J. Mach. Learn. Res. 2013, 14, 2349–2353.
  40. Dua, D.; Graff, C. UCI Machine Learning Repository. 2017. Available online: https://archive.ics.uci.edu/ml (accessed on 25 May 2025).
  41. Google. Google Colaboratory. 2025. Available online: https://colab.research.google.com/ (accessed on 25 May 2025).
  42. UCI Machine Learning Repository. APS Failure at Scania Trucks. 2016. Available online: https://archive.ics.uci.edu/dataset/421/aps+failure+at+scania+trucks (accessed on 25 May 2025).
  43. Paleyes, A.; Urma, R.G.; Lawrence, N.D. Challenges in Deploying Machine Learning: A Survey of Case Studies. ACM Comput. Surv. 2022, 55, 114.
  44. Instructure. Canvas LMS. 2025. Available online: https://www.instructure.com/canvas (accessed on 25 May 2025).
Figure 1. A high-level illustration of the proposed practical computational thinking framework tailored for data practitioners. Each component aligns with a specific stage of the data science pipeline and a matching AI-driven Bloom-style action [26], drawn from a six-component framework comprising Collect, Adapt, Simulate, Process, Evaluate, and Innovate.
Table 1. A list of recommended learning activities.

Domain-driven problem decomposition
Lecture: Formulate a business analytics question and decompose it into machine learning tasks. Start with a use case (like predictive maintenance) and show step-by-step how to turn the business goal into sub-tasks, such as data collection, annotation, and success metrics.
Hands-on programming: Provide a small CSV file of machine operation records. Students write a short script that lists the subtasks as to-do comments and then programmatically checks whether each required column and label exists in the file.
Discussion: Split into groups; let them outline which subtasks they would create and why. Groups share their formulations.

Data representation, quality, and imbalance handling
Lecture: Cover data preprocessing, encoding (one-hot, embeddings), handling missing values, detecting data drift, and techniques such as weighting for rare classes.
Hands-on programming: Use the public dataset; students clean nulls, scale numerical data, and print before/after class counts.
Discussion: Ask learners to reflect on which preprocessing step changed performance most and debate whether synthetic examples could create hidden bias.

Model architecture and training strategy
Lecture: Define the neural network architecture, such as the multilayer perceptron, and the training strategy, such as the choice of loss function, data augmentation (if necessary), optimizer, and hyperparameters such as learning rate and number of epochs.
Hands-on programming: Students build a neural network with a programming tool, train it with different architectures and configurations, and plot performance curves for the training and validation datasets.
Discussion: Learners debate trade-offs and the choice of the best model based on speed, accuracy, and model complexity.

Interpretability-aware analysis
Lecture: Explain the black-box neural network model, covering global and local feature importance and inner calculations.
Hands-on programming: Load the trained neural network model and apply an explainer tool to plot the feature-importance curve; students write two sentences interpreting the top few features.
Discussion: Discuss and share the identified important features; debate them against the business domain understanding of the dataset.

Testing, debugging, and error analysis
Lecture: Cover model maintenance, updating, and tuning instead of retraining from scratch.
Hands-on programming: Students write a test that fails if any feature contains nulls at prediction time and implement a confusion-matrix heatmap across customer data subsets.
Discussion: Simulate model failure scenarios, such as precision suddenly falling on new data; teams inspect provided logs and propose root causes and fixes.
Table 2. Frequency of course runs conducted annually over a six-year period (2019–2024).

Year:            2019  2020  2021  2022  2023  2024
Number of runs:     9     9     4     2     2     2
Table 3. Number of course runs categorized by class size.

Participants per class:  ≤20  21–40  41–60  ≥61
Number of runs:           15      9      2    2
Table 4. A summary of questions used in the post-course survey.

Skill-related
Q1: The training resources provided were useful for my learning.
Q2: I have acquired new skills and/or knowledge from the training.

Delivery-related
Q3: I am confident that I am able to apply what I learnt in the course.
Q4: The instructor was able to communicate ideas effectively, link concepts to practices with examples, and has good class interaction and facilitation/coaching.

Overall
Q5: The course met its intended objective(s).
