The AI-Atlas: Didactics for Teaching AI and Machine Learning On-Site, Online, and Hybrid

Abstract: We present the “AI-Atlas” didactic concept as a coherent set of best practices for teaching Artificial Intelligence (AI) and Machine Learning (ML) to a technical audience in tertiary education, and report on its implementation and evaluation within a design-based research framework and two actual courses: an introduction to AI within the final year of an undergraduate computer science program, as well as an introduction to ML within an interdisciplinary graduate program in engineering. The concept was developed in reaction to the recent AI surge and the corresponding demand for foundational teaching on the subject to a broad and diverse audience, with on-site teaching of small classes in mind, and designed to build on the specific strengths of the lecturers in motivational public speaking. The research question and focus of our evaluation is to what extent the concept serves this purpose, specifically taking into account the necessary but unforeseen transfer to ongoing hybrid and fully online teaching since March 2020 due to the COVID-19 pandemic. Our contribution is two-fold: besides (i) presenting a general didactic concept for tertiary engineering education in AI and ML, ready for adoption, we (ii) draw conclusions from the comparison of qualitative student evaluations (n = 24–30) and quantitative exam results (n = 62–113) of two full semesters under pandemic conditions with the results of previous years (participants from Zurich, Switzerland). This yields specific recommendations for the adoption of any technical curriculum under flexible teaching conditions, be it on-site, hybrid, or online.


The Problem of Teaching Artificial Intelligence as a Foundational Subject
Students enter their careers in a time that awaits nothing short of a digital disruption [1]. The core of the disruptive potential of the digital transformation is provided by technological developments, foremost by Artificial Intelligence (AI) and its currently most prominent branch, Machine Learning (ML) [2]. While technical in nature, AI and ML have the potential to disrupt society on all levels, including business, public service, justice, art, science, and health [3]. Numerous scientific articles are published daily [4], and coverage in popular media turns these technological developments into not only a scientific but also a mainstream hype [5]. The hype leads to misconceptions about what the technology is capable of, or designed to do, including the idea that it is concerned with understanding human intelligence or creating conscious machines [6] (while the reality boils down to mere "complex problem solving" [7]). This example highlights the necessity of a foundational understanding of AI for engineers in two respects: (i) the understanding needs to be solid, i.e., rooted in the foundations of the discipline rather than purely in its applications, tools, or business value, in order to manage expectations; (ii) like algebra and other foundational subjects, the understanding of AI needs to serve as an underlying methodological framework for an increasing number of future engineering efforts, i.e., it needs to serve as a foundation for future careers in engineering rather than merely as an interesting, fashionable specialization.
We think of AI and its sub-discipline ML as one of the five pillars of the discipline of computer science (besides theoretical, practical, technical, and applied computer science [8]). As such, it requires similar treatment to other foundational subjects (e.g., thorough establishment of basic ideas and theory), but holds a peculiarity: unlike foundations such as algebra, AI and ML already build upon a body of knowledge from computer science. This implies a later slot in the respective programs, and hence more experienced students who can better judge the impact of AI on other aspects of their profession or on society as a whole. However, according to surveys [9], teaching AI as a foundational aspect of a technical education was until recently largely neglected in curricula, and novel data science courses [10] do not cover the same terrain.
As educators and researchers in this context, we are thus faced with the problem of having to prepare experienced students with diverse professional backgrounds for the reality of AI and ML applications and their implications. For this, we need university courses that provide a solid theoretical foundation while offering opportunities and a mind-set for applied lifelong learning, especially in tertiary engineering education at universities of applied sciences. We need to teach solid AI and ML foundations to this diverse, technical audience while the field is rapidly evolving, students may hold initial misconceptions, and the outcome has to be practice-oriented and relevant for a broad range of careers. In short: we need a didactic concept that solves this problem by providing a powerful heuristic (a set of coherent best practices) for curriculum and didactic planning and implementation. While many courses exist, and new ones are rapidly created to keep up with the current surge, to our knowledge, no such didactic concept has been made explicit. Earlier work on AI didactics, like References [11,12], focused only on special teaching situations or on a single sub-discipline of AI.
In this paper, we report on the iterative design of two courses based on the "AI-Atlas" didactic concept. We discuss how the concept itself was developed in specific response to the aforementioned challenges induced by the current surge of AI, and how the resulting course design was adapted based on student feedback and the rapidly changing context of COVID-19. Specifically, our case report is based on the structure of Design-Based Research (DBR) [13] to present the AI-Atlas (didactic concept) and two exemplary implementations (concrete courses), and to evaluate the merits of the concept based on student results and feedback. The courses, "Artificial Intelligence 1" (referred to as the "AI course" below) for undergraduate students of computer science, and "Machine Learning" ("ML course") within an interdisciplinary graduate program in engineering, are described in detail in Appendix A. The atlas (or map) metaphor itself that gave rise to the AI-Atlas concept and its underlying design choices is introduced in Section 2.1. We chose to present our findings as a case report rather than a research article because there are certain limitations regarding our evaluation data, as explained in Section 3.2. Our intention is, therefore, to introduce our didactic concept and to motivate educators to test it in order to gather more experience and evidence for the suitability of the AI-Atlas as a flexible basis for designing a curriculum for AI and ML courses in tertiary education.

Existing AI Curricula and Their Relation to Our Didactic Concept
It is beyond the scope of this case report to present a comprehensive survey of related literature on AI and ML teaching methodology. Nevertheless, the AI-Atlas fits well within the recent discussion on AI curricula: as for Williams et al. [14], constructionist ideas are central to our didactic design, as discussed in Section 2.3. Moreover, the origins of the AI-Atlas [15,16] evolved from teaching best practices of the involved lecturers, putting this case report in line with similar post-hoc analyses, for example, References [17][18][19].
Chiu and Chai [20] reflect on AI curriculum development for K-12 [21] by teachers with and without prior exposure to AI training. They use the psychological framework of Self-Determination Theory (SDT) to understand how teachers' motivation shapes curriculum design. Further, the authors use the four planning paradigms of curriculum as content, product, process, or practice to distinguish fundamentally different ways of thinking about course development and the respective didactic designs. While SDT can also be used to explain our motivation for the specific design of the AI-Atlas, our concept can be classified as a blend of content (the syllabus is important to us), product (we care about the achieved educational objectives), and practice (we frequently connect the learning experience to real-world applications); see Section 2.5.
Finally, Li et al. [22] explore the fit between university AI curricula and the demands of the job market. Our design, with its focus on professional applicability, can be seen as a solution to the problems they identify within their analyzed courses: what distinguishes our concept from the many other adoptions of, e.g., the AIMA textbook (cf. Reference [23]) is an end-to-end focus on the usefulness of the taught foundations in daily application, which should lead to successful transfer into personal problem-solving skills and professional practice. This implies focusing less on covering a large number of different AI/ML algorithms and more on teaching the underlying relationships. Of course, practical relevance is a major concern for any course at a university of applied sciences. While we cannot compare the AI-Atlas to a previous AI course at our institutions (it was developed for the initial design of the first AI and ML courses here), we notice that our approach takes an atypical route to this destination: most courses are made more "applied" by stripping them of theoretical concepts. We, however, put theoretical foundations at the center. The reasoning for this seemingly uncommon choice follows below.

Design and Construction
In this section, we explain why we named our didactic concept the AI-Atlas and what core mentality it is meant to convey. We also look at three didactic principles that guided us in its design and three aspects of AI and ML teaching we believe it needs to cover in order to solve the problem stated above, before breaking them down into suggestions for specific didactic settings. Together, the principles and ideas laid out in the following five subsections constitute the AI-Atlas didactic concept.

The Atlas Metaphor
In the late 16th century, Gerhard Mercator's "Atlas Sive Cosmographicae Meditationes de Fabrica Mundi et Fabricati Figura" combined maps and associated explanations of the known world [24]. They were used by generations to explore, push boundaries, and further trade and development [25]. For an atoll, for example, such an atlas would show all individual islands with their borders, list their characteristics, and show their relation to each other. This aids travelers by allowing them to plan the most effective or efficient route based on their current needs or interests. However, it does not set a pre-conceived path, and, if a new island is discovered, it can be added to the atlas without disrupting existing knowledge.
Within the AI-Atlas, we think of the sub-disciplines of AI, such as search, planning, and machine learning, as individual islands of an atoll: well developed in themselves and somewhat related to each other, but missing a direct connection. Hence, to solve the problem we have identified, future generations of professionals need the analogue of what Mercator gave to his contemporaries: an atlas to the world of AI (cf. Figure 1). An explorer can profit from the help of this atlas to get an overview and find the best path for a specific journey (i.e., application). It highlights main routes (i.e., baseline approaches), special landmarks (e.g., important algorithms, killer applications), and borders (i.e., limits of the approaches, future work), but never restricts learners to knowing or using only a single path.

Figure 1. Illustration of the AI-Atlas metaphor: the sub-disciplines of AI are like an atoll that is best navigated with the help of an atlas. The AI-Atlas didactic concept provides the means, to be employed by educators, to let such atlases emerge in their learners' minds. Image credit: Colorbox, used with permission.
Our AI-Atlas didactic concept contains the means to let the actual atlas emerge only in a learner's mind. It is thus created by analog means and stored in analog form (in natural neural networks), not manifested in some digital format on the learner's computer. This aspect of the metaphor underlines the AI philosophy underlying the didactic concept and the derived courses: artificial intelligence is not primarily replacing human intelligence, and machine learning does not render human learning unnecessary, just like digital does not primarily replace analog, but augments it [26]. AI thus finds an optimal environment for application where human and machine complement each other with their respective strengths and weaknesses.

The Core Mentality: AI Professionals Are Explorers
The discipline of AI and its major tool of ML do not have a single goal ("creating intelligence"), but rather offer a methodological toolbox for approaching multiple targets ("solving complex problems") [7]. Thus, at their core, they are constituted not so much by technology as by a specific mentality: since AI's inception as a discipline in the 1950s, AI researchers have notoriously approached, with creativity and pragmatism, the kind of problems that fellow researchers from other disciplines had laid aside as "too hard" [27]. In other words, AI researchers explored previously unknown territories. They did so by employing an interdisciplinary "let's do it" mentality. Today, this mentality distinguishes the work of the AI professional from other modeling approaches used by software engineers, database designers, or statisticians, although skills in all these areas are relevant for success in and with AI or ML. The AI-Atlas not only acknowledges but actively hones this explorer mentality [28]. It does so by building on a set of corresponding didactic principles.

Principle of "Constructivist Learning"
Since the late 1980s, constructivist ideas have increasingly found their way into pedagogy, as well as into discussions on the design of teaching-learning environments. Constructivism postulates that individuals do not faithfully take over information from external sources, but actively construct knowledge through social negotiation processes based on prior experiences. Knowledge is situation-specific and must be actively and independently linked by the individual to prior knowledge [29]. It follows that teaching-learning settings must be designed in such a way that learners are given the opportunity to actively engage with the learning content, as well as the associated problems, and to relate these to their prior experiences, whereby active engagement can be of both a visible and a non-visible nature.

Principle of "Intrinsic Motivation"
In their Self-Determination Theory (SDT) of motivation, Deci and Ryan [30] explain the relationship between motivation, learning, and the influence of the social environment on the fulfillment of basic needs. Intrinsically motivated behavior is associated with individuals freely engaging with the subject matter and striving of their own accord to understand phenomena and master activities that appear personally highly significant to them. As a further component of their theory, Deci and Ryan postulate three basic human needs that motivate behavior: (i) the need to experience competence, (ii) the need for social inclusion, and (iii) the need for autonomy. They assume that striving to satisfy these three basic needs explains why people pursue certain goals of action and why certain activities are more likely to be perceived as intrinsically motivating.

Principle of "EEE" (Good Explanation, Enthusiasm, and Empathy)
According to Winteler [31], the following characteristics of university teaching promote student learning: the instructor's preparation, organization of the course, clarity and comprehensibility, perceived efficiency of the teaching, the instructor's openness to questions and discussion, and openness to other opinions. Helmke and Schrader [32] reduce the state of research on key characteristics of successful university teaching to the short formula "EEE": (i) good explanation, which facilitates information processing and arouses curiosity and factual interest; (ii) commitment and enthusiasm, i.e., the infectious enthusiasm of the lecturer about the content; and (iii) empathy, by which they mean personal appreciation of students, openness to their problems, and efforts to obtain feedback to better adapt teaching. The fact that the didactic setup of a course is well planned and fine-tuned is a "conditio sine qua non", i.e., a necessary but not sufficient condition for a successful course. Nevertheless, a committed, authentic (i.e., experienced in the field), and enthusiastic teacher, open to questions and igniting curiosity and factual interest, can bring the majority of students to engage with the topic and start the learning process in a self-motivated manner.

The Aspects of Establishing AI Foundations
The above didactic principles need to be combined with the proper mediation of AI and ML foundations, if the AI-Atlas is to successfully guide our explorers-in-training. We suggest that there are three aspects to which the principles need to be applied: canonization, deconstruction, and cross-linkage.

Canonization
The aforementioned hype [5] around AI, and especially deep learning, and the daily growth of scientific literature on the topic [4] make a proper selection of content a central challenge of teaching AI and ML. Hence, a key aspect of the AI-Atlas is to suggest a timely selection of materials that emphasizes topics with future relevance alongside their historic development, thereby making the overarching principles that have stood the test of time stand out. This is given priority over intriguing detail or formal derivations. In a specific implementation of the AI-Atlas for an introductory AI course, for example, canonization means ensuring that the full canon of relevant methods is taught (ranging from heuristic search and logical planning to machine learning). We link each of these areas with practice (e.g., controlling a fashionable browser game, building a dragnet investigation system, or decision support for second-hand vehicles). This way, students see for themselves that practical relevance is not limited to the currently most fashionable methodology, ML, nor, within ML, to neural networks.

Deconstruction
Due to the current extensive media coverage of AI and ML, many misconceptions about the field abound among prospective students (such as the focus of the field being the understanding of human intelligence or the creation of conscious machines [6]). Thus, an important aspect of teaching according to the AI-Atlas is a form of demystification that preserves the original motivation of the students and channels it into more realistic, sustainable paths. We suggest supporting students in forming a personal view through critical engagement with scientific texts and programming tasks, which they then present in their own write-ups and oral discussions.

Cross-Linkage
Both aspects above, a stable body of knowledge in AI and ML fundamentals and a careful treatment of real and misguided excitement, become a firm foundation given the third ingredient: a dense network of cross-references to other subjects in the study program that is compatible with the different occupations of a professional career in engineering and related fields. The AI-Atlas suggests teaching AI not only to future scientists but also to software developers, data scientists, and process engineers, acknowledging the future importance of AI methodology in any field.

The AI-Atlas in Practice: Suggested Didactic Settings to Combine It All
Building on the principles described above, we suggest adopting the following specific didactic settings for any AI or ML course facing the problems outlined in Section 1.1. Section 3 then evaluates to what extent employing these means in the two exemplary courses mentioned above achieves these ends. Nevertheless, the following subsections already make frequent reference to examples from the AI course and ML course to put the abstract suggestions into concrete terms.

Basic Didactic Setting
As laid out in the theoretical framework above, active engagement is a mandatory key component in an AI-Atlas inspired course. One way to increase engagement is to work with small to medium classes. For example, both exemplary courses had only 30 to 60 students. Courses should, therefore, build upon the "lecture + lab" format widespread in engineering education: weekly lectures accompanied by lab exercises with a roughly even time-split between them. However, we make important adaptations to foster active engagement as follows.
For lectures, students should read weekly portions of well-established textbooks as accompaniment, e.g., Reference [33] on AI and References [34,35] on ML, complemented where necessary by shorter articles (e.g., References [26,36]). The anecdotes and historical notes conveyed therein specifically contribute to the students' socialization in the discipline of AI and the field of ML. The lectures themselves connect such context with problem awareness and technical solutions without degenerating into pure 90-min talks that would push learners into passive consuming roles (cf. Section 2.5.5).
Labs, on the other hand, should go beyond mere programming and development to accommodate essay writing or the examination of philosophical questions. This way, the AI-Atlas ensures educational objectives for professional and methodical competences on levels K1-K4 [37] by presenting AI and ML as socio-technical, not purely technological. One example of how broader theory can be consolidated by practice are the gamification elements provided in one lab of the AI course (cf. Appendix B). In addition, programming skills are only a means to an end in AI and ML labs, while problem analysis and experimentation become the focus, thereby encouraging exploration.

Fostering Reflection
We suggest repeatedly asking students to reflect on their preconceptions of the course content and on the technical and societal ramifications of this prior knowledge. Making them think about the myths and ethical problems of the application of AI and learning algorithms starts an active, though non-visible, process in each individual. For example, in the context of the ML course, we repeatedly highlight the cognitive dissonance between the current focus on deep learning methodology in public opinion and the irrefutable results of the "no free lunch" theorem [38]. In the AI course, a lab assignment at the start of the semester asks students to create a blog post that presents their well-founded and reflected opinion on the contents of a futuristic essay [6]. At the end of the semester, students can reflect on their initial statement with a second blog post that may incorporate insights gained throughout the course. While all opinions are welcome, the emphasis in grading is on self-reflection and reasoning.
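The "no free lunch" result can be made tangible in class with a small counting experiment. The following sketch (our illustration, not part of the course materials) enumerates every Boolean target function on a four-element domain and shows that two very different learners achieve exactly the same average accuracy on the unseen point, regardless of their strategy:

```python
from itertools import product

def average_offtraining_accuracy(learner):
    """Average accuracy on the one unseen input, over all Boolean targets."""
    targets = list(product([0, 1], repeat=4))  # all f: {0,1,2,3} -> {0,1}
    correct = 0
    for f in targets:
        train = f[:3]          # observed labels for inputs 0, 1, 2
        pred = learner(train)  # learner predicts the label of unseen input 3
        correct += (pred == f[3])
    return correct / len(targets)

# Two very different "learners" -- both score exactly 0.5 off-training-set,
# because each training set is completed by two targets differing only there.
majority = lambda train: int(sum(train) >= 2)
pessimist = lambda train: 0
assert average_offtraining_accuracy(majority) == 0.5
assert average_offtraining_accuracy(pessimist) == 0.5
```

Averaged over all conceivable targets, no learner beats another on off-training-set data; practical performance differences stem entirely from assumptions about which targets are likely.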
As a more regular intervention, lectures in the AI course end with an outlook called "where's the intelligence?". It explains why what was discussed is a "clever solution", but also what separates it from human-like intelligence. In the ML course, the same time slot is used to show state-of-the-art implementations of the discussed material. This not only helps demystify the technology; it also helps students spot the kind of tasks they might approach in their future jobs using the conveyed foundations.

Encouraging Self-Responsibility and Motivation
Up to twenty percent of the final grade should be acquired by each student during the semester through self-chosen lab assignments, with results depending on an oral defense of one's own work. The choice of 20% is justified by allotting, in this way, a substantial part of the final result to active participation throughout the semester without replacing competence-based results with effort-based grading.
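As a minimal illustration (our sketch; the exact grading formula is course-specific and not given in this report), such a weighting might blend the in-semester lab grade with the exam grade as follows, with the lab share capped at 20% so that effort cannot substitute for demonstrated competence:

```python
def final_grade(exam_grade, lab_grade, lab_weight=0.2):
    """Blend a competence-based exam grade with in-semester lab results.

    Hypothetical formula on the Swiss grading scale from 1 (worst) to
    6 (best); the lab share is capped at 20% of the final grade.
    """
    if not 0 <= lab_weight <= 0.2:
        raise ValueError("lab share must not exceed 20%")
    return round((1 - lab_weight) * exam_grade + lab_weight * lab_grade, 1)

# A strong lab performance lifts, but cannot dominate, the final grade.
assert final_grade(4.5, 6.0) == 4.8
```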
From the six assignments in the AI course, for example, which are distributed evenly throughout the semester, students can choose any two to be graded in a short colloquium between the student team and lecturer during in-class time (students usually work on all assignments, but put considerably more time into the two graded ones). This way, students are empowered to prioritize their own learning goals and take ownership of their investment of time and its distribution over the semester. Even where grading is not explicitly tied to the lab assignments, as in the ML course, the further questions presented there, the probing of the lecturers, and not least the relevance of practical implementation skills for the final exams motivate students to work deeply on the assignments, even if sample solutions are freely provided. A second method to encourage self-exploration and motivation is to set up the labs in such a fashion that they require students to independently dive deeper into the respective methods to find practical solutions. The lab descriptions of the ML course, for example, actively encouraged this, and the lab exercises are often not solvable without going beyond the lecture content.

Promoting Cooperative Competence Development
Lab assignments should usually be worked on in teams of two students. This way, students can strategically pair up their existing competencies, as well as learn from each other. Teams should be allowed to help each other as long as any help is disclosed (according to good scientific practice), and competitive elements, such as the public leaderboard for the AI lab assignment presented in Appendix B, only increase the appeal of and the necessity for good team work.

Activation of Students
Each 90-min lecture block should contain a part of up to 30 min that assigns an active role to the students rather than the lecturer. Technical understanding is deepened by embedding interactive parts, such as small-group research tasks and discussions, as well as thinking and pen-and-paper exercises, thereby increasing the practical treatment of the subject. For example, in the AI course, a classical brain twister [39] can be used to show the difference between AI (having a computer program appear intelligent) and human intelligence: approaching it by efficient search through all combinations of possible solution steps constitutes an excellent AI solution for that problem but typically gets labeled "just brute force" by the students at first sight. Other activations in the two exemplary courses take the form of jointly solving a puzzle (e.g., "escape from the Wumpus world"), computing results in small groups (e.g., "help Inspector Clouseau probabilistically convict a murderer"), individually applying learned principles (e.g., logic training), or sharing insights from individual research at tables (e.g., exploration of the possibilities of OpenAI Gym [40]).
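The specific brain twister used in the course is not reproduced here; as a hypothetical stand-in, the classic two-jug measuring puzzle illustrates the point. A plain breadth-first search over all possible pouring moves, exactly the kind of approach students dismiss as "just brute force", systematically explores the state space and is guaranteed to find a shortest solution:

```python
from collections import deque

def solve_jugs(cap_a=3, cap_b=5, goal=4):
    """Find a shortest move sequence so that one jug holds `goal` liters.

    States are (liters in jug a, liters in jug b); moves are fill,
    empty, and pour. Breadth-first search guarantees a shortest plan.
    """
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if goal in (a, b):
            path, s = [], (a, b)       # reconstruct plan via parent links
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        pour_ab = min(a, cap_b - b)    # how much fits when pouring a -> b
        pour_ba = min(b, cap_a - a)    # how much fits when pouring b -> a
        moves = [
            (cap_a, b), (a, cap_b),    # fill either jug
            (0, b), (a, 0),            # empty either jug
            (a - pour_ab, b + pour_ab),
            (a + pour_ba, b - pour_ba),
        ]
        for m in moves:
            if m not in parent:        # visit each state only once
                parent[m] = (a, b)
                queue.append(m)
    return None                        # goal unreachable

path = solve_jugs()
assert path[0] == (0, 0) and 4 in path[-1]
```

The same exhaustive-but-systematic pattern underlies the "excellent AI solution" discussed above: intelligence appears on the outside, while inside, the program simply never forgets a state it has already tried.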

Enabling Social Learning
A prominent place throughout a course based on the AI-Atlas concept should be given to the research work and careers of course alumni and junior teaching staff. Linking course content to concrete outcomes of applied research projects with regional industrial partners known to the students creates a pull that contributes to the students' motivation and an expanded vision for AI and ML in practice, as well as their role in it. Key to creating this are tutors (e.g., graduate students) who teach part of the labs: closer in age and role to the course participants, they are, in our experience, frequently approached by the class to give a second opinion on the more philosophical and career-related aspects of AI. Innumerable lunches, coffee invitations, and after-work beers between teaching staff and students have come about this way in the AI course and ML course.

Providing Open Educational Resources (OER) and Blended Learning
All course materials, including lecture recordings, slides, and lab materials, should be fully and freely available online [41]. This enables flexible, deepened learning (e.g., for exam preparation) but, in our experience, does not compromise live lecture attendance. Students can also recap all details when needed on the job, as all material is permanently and openly available. This enhances the atlas of AI and ML solution strategies they know by heart. As an add-on, it supports the transition to live online teaching (as was required during the COVID-19-induced lockdowns in 2020, see Section 3.2), as content is already designed to be streaming-friendly.

Creating Practical Career Relevance
Students' diverse professional backgrounds should be addressed by showing how different AI and ML methods serve as puzzle pieces in numerous everyday situations of (software) engineering. For example, in the AI course and ML course, the lab tasks and in-class exercises are strictly sourced from practical applications, such as automatic university timetabling, biometric access control, or data analysis, to reinforce this point. By connecting the practical coursework with typical tasks of an engineer, programmer, or consultant, students clearly see how learning the foundations of AI and ML makes them better at their original career goal. By confronting them with new opportunities in and through AI and learning algorithms in business and research, they recognize new and viable career paths (e.g., data scientist [10]) that are only beginning to gain traction in public awareness.
Additionally, we suggest inviting specialists, ideally course alumni, from regional industrial partners for guest lectures reporting on recent successes. Learners should be encouraged to actively use these opportunities to network and engage with the speakers and their ideas. In contrast to the culturally typical reticence of engineering students, the fresh setup, with people on stage who might be considered peers age-wise, opens them up both in the direct sense (active participation) and in the metaphorical sense (opening up to the idea of other career options within the fields of AI and ML).

Implementation and Analysis
In this section, we evaluate to what extent the different measures advised by the AI-Atlas didactic concept, as implemented within our two exemplary courses, achieve their aspired goal of contributing to foundational AI education in times of a heightened AI surge. By this, we aim to shed light, in a post-hoc fashion, on the merits of the AI-Atlas concept per se as a basis for designing future courses for similar needs.

Participants and Data Basis for Analysis
The principles and settings laid out in the AI-Atlas didactic concept emerged, in parallel to designing our exemplary courses, out of teaching experience, and were codified only upon the courses' completion. The presented analysis is thus necessarily a post-hoc analysis: no baseline data from before the AI-Atlas is available, because no previous courses on the subject existed at the involved universities. Furthermore, the analysis is based purely on routinely available data for any course: qualitative student feedback (Likert-scaled, as well as free-text comments) from surveys conducted by the central program administration, and quantitative results from end-of-semester exams (points and grades per task and overall). This is also due to the evolving fashion of our design, which had not been planned as a study from the beginning. It means, inter alia, that no control group is available. While this form of evaluation leaves certain aspects to be desired, it is not an uncommon situation for data scientists to have to work with the available data in the best possible way, without the possibility of changing the basis for evaluation [42]. In what follows, we thus evaluate our concept with the mindset of data scientists, aiming to establish a relationship between learning success and the employed measures from the AI-Atlas concept.
Demographic and background information on the participants in our courses is listed in Appendix A, while information on specific class sizes (or the number of students who returned a questionnaire) is listed in the captions of the figures below that deal with them. In the ML course, most students seek a career in their original fields of study, though a growing minority considers a job as an ML engineer or ML researcher (the possibility to, e.g., take up graduate studies is typically completely unknown to our students due to the setup of a "Fachhochschule" [43]). Our computer science students in the AI course usually envisage a career in software engineering, not specifically in AI.
We will use the qualitative student feedback in Section 3.3 and the exam data in Section 3.4 to evaluate whether the design choices of the AI-Atlas had an appreciable impact. We do this for two iterations: The first iteration was executed with on-site teaching methods as anticipated by the original AI-Atlas concept (data basis: qualitative and quantitative data from fall term 2019 for the AI course; qualitative data from spring term 2019 for the ML course). The second, due to the COVID-19 pandemic, was executed using hybrid and online teaching, which was not specifically anticipated in the design of the AI-Atlas (data basis: quantitative data from fall term 2020 for the AI course; qualitative data from fall term 2020 and quantitative data from spring and fall terms 2020 for the ML course). The context for the second iteration, the natural experiment that the COVID-19 pandemic provided us with, is described below in Section 3.2. For reasons explained there, we organize our data in the remaining sections as follows: the qualitative feedback is combined for both iterations and evaluated per didactic principle or setting from Section 2, while the exam data is presented per term to allow for a comparison between the two iterations.

Going Online by Necessity
The AI-Atlas was designed specifically for the on-site teaching of small to medium groups (30 to 60 students), but the COVID-19 pandemic forced its execution as hybrid or fully online teaching for two full semesters. Of course, the rapid digitization of higher education in the wake of the pandemic forced teaching around the world to move online in a matter of days. Good teaching methodology differs depending on whether one teaches on-site or online [44,45], but the new didactic credo seems to be one of flexibility: one year into the pandemic, there is, in many countries, still considerable volatility regarding the possibility, desirability, and potential timeline of returning to a teaching mode of choice.
Thus, it is important to know how a course specifically designed for, e.g., on-site teaching will perform in a hybrid or online setting in terms of the students reaching that course's educational objectives. This transcends the question of whether going online is merely possible at all [46], as the didactic concept and respective teaching material cannot be adapted that quickly. The move away from on-site teaching happened rapidly and involuntarily, with the side-effect that no planned, controlled data collection on student learning before and after took place. Furthermore, as the courses are single-semester, we also have no longitudinal data within the same cohort of students. Thus, we saw ourselves forced to stray from DBR principles and to (i) combine the qualitative data across iterations and (ii) use the quantitative exam data to specifically compare the effectiveness of the AI-Atlas between on-site and online teaching.

Qualitative Assessment
In the following subsections, we collect evidence for and against the effectiveness of specific dimensions (i.e., didactic aspects and settings, cf. Sections 2.4 and 2.5) of the AI-Atlas by providing quotations taken from students' feedback forms at the end of different semesters and drawing conclusions from them. A short tag at the end of each statement ("AI" or "ML") indicates the source course. These written qualitative comments are optional for the students and, thus, normally very sparse (though most precious for the improvement of curriculum and course). The answers might be highly skewed, since we cannot control which subset of students wrote comments; we therefore refrain from a statistical analysis of them. Nevertheless, the presented example statements are chosen to be representative in order to support the conclusions we draw. In case of counter-evidence, we rather over-sample critical comments to avoid any cherry-picking (cf. Section 3.3.5).

Dimensions "Canonization and Deconstruction"
The following statements are taken from students' free and voluntary comments in the evaluation surveys. Students seem to grasp that AI and ML are ways to solve complex practical problems rather than theories to explain how we think or create artificial life. The content is indeed perceived as a foundation for practice rather than a narrow specialization (cf. Sections 2.4.1 and 2.4.2).

Dimensions "Motivation and Social Learning"
Quality assessment of the two courses is unfortunately neither done in the same way nor regularly by the involved central administrations (cf. Sections 2.5.3 and 2.5.6). In addition to being given the opportunity to provide free-text comments, the students of the ML course are asked to rate their agreement with the following statements on a Likert scale (cf. Table 1 for details): (i) Motivation: I regard the lecturer as being motivated and committed; (ii) Competence: I regard the lecturer as being competent in their subject; (iii) Teaching skills: I regard the lecturer as having good teaching skills; (iv) Clear structure: His/her teaching is clearly structured (a clear thread), and the subject matter was imparted in a comprehensible manner.

Table 1 summarizes the evaluation of teaching skills and motivation for the ML course (unfortunately, no evaluation of the ML course was carried out in spring term 2020 due to COVID-19-induced stress in the administration, and the AI course evaluates slightly different questions not pertaining to the dimensions discussed here). The presented score is averaged over the two lecturers. It mainly reflects the qualitative judgements the students also gave in the free-text comments:

Table 1. Teaching evaluation scores for the ML course. Two teachers were assessed, and the average score is presented (spring 2019: n = 30, and fall 2020: n = 27).

"The professors [. . . ] enthusiastically explained it very precisely. I also had the feeling that the fun of the topic seemed very important to them. It was also important for them that everyone understood." (ML)

[Table 1 data: per-topic ratings per semester on the scale 1 (strongly disagree) to 4 (strongly agree), with the resulting score; values not reproduced here.]
"The two lecturers are very motivated and they pass on their enthusiasm and experience in the respective field. I find the exercises and tools we use (Jupyter notebooks, scikit-learn, Orange) very useful and they complement the lessons well. I also appreciate that discussions among each other and in plenary are stimulated." (ML)

"You can feel that the lecturer is convinced of the subject. He also often brings good examples to help the students on their way." (AI)

"Very good commitment, super presentation style. Enthusiasm for the subject is obvious and motivates me a lot." (AI)

Students' perception of the course contents is, in our perspective, strongly connected with and dependent on the person who teaches. Insofar, concept, curriculum, or OER availability alone is no guarantee for the intended outcome: enthusiastic teaching is an integral part of the AI-Atlas, as it facilitates activation.

Table 2 summarizes evaluations of the AI course, as it has specific questions on the perceived activation (cf. Section 2.5.5) of students and the practical relevance of the presented material (unfortunately, no such questions are asked for the ML course in the central questionnaires, as they are issued by a different program administration). Specifically, the students were asked to rate their agreement with the following statements: (i) Activation: the students are actively involved in the teaching process; (ii) Practical relevance: in class, theory is reinforced with examples and applications.

Table 2. Evaluation of the activation in and the practical relevance of the AI course in fall 2019 (unfortunately, no evaluation of the course took place in fall 2020 due to anti-pandemic measures). The number of students who handed in the questionnaire was n = 24.

These findings are also supported by the following optional written statements that these students handed in:

[Table 2 data: per-semester ratings on the scale 1 (strongly disagree) to 4 (strongly agree), with the resulting score; values not reproduced here.]
"The labs support the learning process very much; similarly helpful are the exercises throughout the lectures." (AI)

"Very handy are the labs where one implements hands-on what should be learned." (AI)

"Good lecture-style presentation, active presence of the lecturers during the labs that motivates students to listen even on Friday afternoons." (AI)

A similar response was received from students of the ML course (spring term 2019):

"Good, guided (tutorial) exercises that were very helpful, especially for less advanced students." (ML)

"The lectures are very interactive." (ML)

We have increased the time for in-class exercises and interactivity over the years and received increasingly positive feedback on its effects. Despite the success of more modern teaching styles, such as the "flipped classroom" [47], lecture-style teaching still seems to be a very helpful didactic setting for technical education if mixed with practical and interactive aspects where applicable.

Dimension "Open Educational Resources"
"The videos on YouTube are ideal for repeating." (ML)

"The recording of the lectures is very helpful. It gives the students the possibility to review parts of the lecture for exam preparation or if you haven't understood everything during the lecture." (ML)

Students use video recordings as intended, for repetition, without getting distracted by the new flexibility (a real danger of digital transformation: procrastination because everything is available anytime) (cf. Section 2.5.7).

Criticism: Dimensions "Self-Responsibility and Activation"
Most criticism that we face concerns the workload, the practical work in the lab sessions, and the depth of mathematical derivation versus the pure application of taught algorithms (cf. Sections 2.5.3 and 2.5.5). Table 3 summarizes the evaluation of the appeal and the organization of the ML course for spring term 2019 and fall term 2020 (the same data is not available for the AI course due to differing questionnaires). The students were asked to evaluate the following statements (the same scale applies as in Table 1): (i) Labs: the labs supplement the lectures in a meaningful manner and support the learning process; (ii) Material: the support materials (e.g., recommended books, documents handed out) are appropriate; (iii) Organization: the module is sensibly organized, and the coordination between the different lecturers works well.

Table 3. Appeal and organization evaluation scores for the ML course (spring 2019: n = 28, and fall 2020: n = 27).

From Table 3, we see that there is much room for improvement. We are of the opinion that learning should be done based on examples and not by pure mathematical derivation. In addition, in an applied-sciences master course such as the ML course, the practical application of the methods should be the deciding factor, and only the necessary mathematical definitions and derivations should be presented. From this criticism, we conclude that even more time should be spent on application, lab sessions, and exercises. The topics covered in the ML course should be reduced to even fewer knowledge islands within the big ocean of AI and ML. The lecturer needs the courage to omit certain topics (some islands) and to feel confident that the educational objective of generating a helpful map in each student's mind can still be reached. Over the years, hence, we have moved more and more content from the actual lecture slides to the (optional) appendix of the lectures.

Quantitative Assessment
The following quantitative assessments focus on evaluating student learning success (i.e., the reaching of educational objectives) under changing teaching and assessment modes (e-assessments since spring term 2020, on-site written exams before, corresponding to the distance and contact teaching modes of the respective terms). In the absence of more direct data to measure the effectiveness of the AI-Atlas, we aim at drawing conclusions about its effectiveness in different teaching and learning settings from these outcome-based comparisons. This is possible since the respective exams were all designed with the same educational objectives and assessment goals, as well as similar question types, in mind, irrespective of the rather drastic change in assessment mode (from closed-book in-class to self-supervised, open-book, open-internet).

AI Course Fall Terms 2019 vs. 2020
The content of the AI final exam has remained largely stable over the years. It contains free-text questions to test the students' ability to precisely define and argue for specific viewpoints; multiple-choice exercises to show comprehensive knowledge; programming exercises inspired by the labs; and design tasks with transfer components to reveal higher-level competencies. The move from a closed-book written exam to an open-book, open-internet e-assessment in the fall term 2020 of course changed the wording of numerous questions, but the overall depth and composition remained the same, as did the number and topics of the tasks.
The distributions of the achieved relative scores for pairable tasks, as shown in Figure 2, are very similar, which indicates that the AI-Atlas measures also work well in distance learning mode. Nevertheless, significant differences are discernible in the overall result: the distribution is shifted to the left by about 10 percentage points, as shown in Figure 3 (left) and discussed later in Section 4.1. It is striking that, in the planning task (bottom left of Figure 2), a modeling task involving transfer based on a suitable problem formulation using the Planning Domain Definition Language (PDDL) [48], many students in 2020 did not know how to deal with it at all, while, in the group with contact instruction, this rather dry matter could be conveyed and apparently learned well.

Figure 2. Per-task exam results; the content of the 9 tasks is irrelevant for the evaluation here unless otherwise noted, but is indicated above each plot.

Figure 3. Comparisons of the total achieved relative scores of the AI (left) and ML (right) course exam results of the respective semesters (higher is better; kde = kernel density estimate): fall terms 2019 (total number of participants n = 91) and 2020 (n = 113) for the AI course, and spring (n = 68) and fall terms 2020 (n = 62) for the ML course. An overview of the group sample can be found in Table A1 (AI) resp. Table A3 (ML).

ML Course Spring Term 2020 vs. Fall Term 2020
Due to the COVID-19 pandemic, the final assessments of the spring and fall terms 2020 had to be taken in fully remote mode over the Moodle [49] learning platform. This opened up the opportunity to re-design the existing exam: open-book, as online proctoring could not be extended far enough to meaningfully control the use of only permissible aids; and involving hands-on programming, as every participant would sit in front of a well-set-up developer machine (the personal laptop). For this reason, programming, which was paramount for the lab exercises, could now also be included in the exam in the form of two programming tasks, together making up 50% of the exam's content. This is also the reason why a comparison with pre-pandemic results is not possible for the ML course: the respective exams would be too different to compare in a meaningful way.
Participants uploaded Jupyter notebooks [50] containing all programming at the end of the 120-min exam. The programming tasks asked the students to implement a small but full ML process in Python (using scikit-learn), including (i) explorative data analysis (EDA), (ii) data preprocessing, (iii) feature generation and selection, (iv) algorithm selection, (v) hyper-parameter tuning, (vi) performance assessment, and (vii) comparison and conclusion. Although the tasks differed in topic and were based on different data sets, we think that the results can nevertheless be compared, as they are based on the same learning objectives and lecture parts. With these two programming tasks, we aimed to reach levels K3 and K4 of Bloom's taxonomy [51] and to test the educational objective of being able to apply and reflect on the ML process on a real data set from end to end.
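For illustration, the core of such an end-to-end process (steps (ii) to (vi)) can be sketched in a few lines of scikit-learn. The dataset, feature selector, algorithm, and parameter grid below are our own illustrative choices, not the actual exam tasks:

```python
# Minimal sketch of an end-to-end scikit-learn ML process (illustrative, not the exam task):
# preprocessing, feature selection, algorithm choice, tuning, and assessment in one pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# (ii)-(iv): preprocessing, feature selection, and learning algorithm chained in a pipeline
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif)),
    ("clf", RandomForestClassifier(random_state=42)),
])

# (v): hyper-parameter tuning via cross-validated grid search
grid = GridSearchCV(pipe, {
    "select__k": [10, 20, 30],
    "clf__n_estimators": [50, 100],
}, cv=5)
grid.fit(X_train, y_train)

# (vi): performance assessment on held-out data
print(grid.best_params_)
print(classification_report(y_test, grid.predict(X_test)))
```

Exam solutions additionally contained EDA plots and a written comparison and conclusion (steps (i) and (vii)), which do not fit a compact sketch.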
The result is noteworthy: most of the participants did very well in programming. Figure 4 shows the histograms of the relative scores of the programming tasks. They are left-skewed, meaning that most of the students know how to apply machine learning to solve real-life tasks (the many 0-point entries might be the result of time problems with the exam as a whole, as this was the last task). This indicates that the overall educational objective of the ML course, to apply ML algorithms, is met by the majority.

Figure 4. Histograms and kernel density estimates (kde) of the achieved relative scores (higher is better) for the programming tasks in the ML course's spring and fall term final exams 2020. The number of participating students was n = 68 for the spring and n = 62 for the fall term. An overview of the group sample can be found in Table A3.

Evaluation and Reflection
According to our experience and as demonstrated above, the AI-Atlas is highly effective for teaching AI and ML principles in an on-site setting. Aside from the data presented here, two additional, anecdotal pieces of evidence support this claim: First, our alumni and alumnae have produced several award-winning theses inspired by the courses, and many have ongoing (research) careers in ML. Second, the AI-Atlas was recognized by the Zurich University of Applied Sciences with the "best teaching-best practice" award in 2019.

Tracing Weaker Quantitative Results in Online Teaching Mode
In our opinion, one reason for the effectiveness of the AI-Atlas is that it embodies the general learning settings, as laid out in Section 2.3, under which students learn best, adapted specifically to the problems faced by current AI and ML tertiary education. In distance teaching mode, this effectiveness suffers somewhat for the AI course, as can be seen in Figure 3 (left). We performed a discrete Kolmogorov-Smirnov hypothesis test [52] on the total scores of the final exams of the AI and the ML course to check whether the samples of the two semesters stem from a common distribution (null hypothesis H0). At a significance level of α = 5%, we had to reject the null hypothesis H0 and assume that there is a significant difference in the distribution of the final scores in both courses when going either from hybrid to online (ML course, just barely different distributions) or from contact to online teaching (AI course, very clearly different distributions). We believe there is one main reason for this drop-off, which will need addressing in future iterations: it concerns especially those educational objectives that are based on practical implementation (programming, labs), social interaction (discussions, competition, study groups), and the teachers' presence (theoretical foundations), as explained next.
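The flavor of such a two-sample test can be sketched as follows. The score samples are synthetic stand-ins (not our exam data), and SciPy's `ks_2samp` implements the continuous variant rather than the discrete test we used; the sketch only illustrates the procedure:

```python
# Two-sample Kolmogorov-Smirnov test on synthetic exam scores (illustrative data only);
# scipy.stats.ks_2samp is the continuous KS variant, used here to sketch the procedure.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
scores_a = rng.normal(0.70, 0.12, size=91).clip(0, 1)  # hypothetical on-site cohort
scores_b = rng.normal(0.60, 0.12, size=73).clip(0, 1)  # hypothetical online cohort, ~10 pp lower

stat, p = ks_2samp(scores_a, scores_b)  # H0: both samples share one distribution
if p < 0.05:  # significance level alpha = 5%
    print(f"reject H0: distributions differ (D={stat:.3f}, p={p:.4f})")
```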
Tracing the shift in overall grades in the AI course between 2019 and 2020, as visible in Figure 3 (left), we see from Figure 2 that the reduction in effectiveness is mainly due to tasks 1, 5, and 7. These are tasks concerning precise definitions (1) or modeling (5, 7), each with a high transfer aspect. While modeling is technically difficult and complex, precise definitions are a matter of overarching concerns. We hypothesize that these two areas are the first to suffer from the increasingly indirect influence of the lecturer on the students when shifting from contact to distance teaching: our students are usually technically interested, with less intrinsic motivation for overarching concerns, like, e.g., the precise difference between an AI system and other complex software systems (part of a question on defining what makes up AI). And, while they are technically interested, they are less motivated for prolonged sequences on, e.g., formal logic (part of the "planning" topic). In contact mode, the enthusiasm of the lecturer may help carry the motivation of a larger proportion of students through such sequences. In distance mode, where attention is naturally divided between the video stream, chat, the home environment, etc., less motivation is transmitted, and the subjects with the least intrinsic motivation suffer. Additionally, the social learning component through team-work is likely weaker in distance mode with imperfect collaboration tools, so that not every student who would normally be a member of a successful group is able to develop the deep skills necessary for novel modeling tasks on his or her own. Although AI and ML are practically done on a computer, and despite the fact that "break-out rooms" could be used to organize team work remotely, the social inclusion of each individual likely suffered.

Tracing Worst Quantitative Results in Hybrid Teaching Mode
Quite counterintuitively, the overall quantitative results for the ML course, as depicted in Figure 3 (right), improve again when moving to full online mode. In our opinion, the counterintuitive impression lifts when considering that the comparison in Figure 3 (right) is not with on-site teaching but with hybrid teaching. In our experience, hybrid teaching is the most demanding mode for all participants, educators and learners alike, as the teacher has to try to address people in the room as well as on the computer, which usually results in neglecting one group. In the first pandemic semester of spring term 2020, this condition was worsened by virtually no training for these special circumstances on the side of the educators and by imperfect hardware equipment, leading to frequent technical problems (degraded acoustic quality for discussion in the lecture room, illegible writing on whiteboards for students online, etc.).
We conjecture that the increase in overall scores for the ML course in fall term 2020 (full online mode) is due to the educator in this setting being able to fully concentrate on one stakeholder group again, focusing on delivering good streaming content. This interpretation is in line with the somewhat weaker result from the K-S test of the similarity of the two distributions, which are significantly dissimilar, but not as clearly different as for the AI course when going from on-site (good results) to online (degraded results).

Moving towards a Didactic Concept for Flexible Teaching and Learning
Ultimately, we believe that the AI-Atlas, conceived of and designed for an on-site learning environment, may be equally effective for online teaching if we solve the problem of transferring some of the more social and teacher-enthusiasm-based design principles in ways that are suitable for the format. This would allow the AI-Atlas to become a fully flexible concept supporting teaching regimes that might be any mix of online, hybrid, and on-site teaching. What follows are the respective adjustments we are planning for future semesters (regardless of format), based on students' feedback and the principles outlined in Section 2.3 in light of the above discussion.

Regarding "Reflection" and "Motivation"
To address the diminished attention span of students and the subsequently lower motivation for overarching concerns in distance education settings, we plan more exercises and labs on such aspects to train reflective, interdisciplinary, or even holistic thought patterns and to reinforce the importance of such "soft" content (cf. Sections 2.5.2 and 2.5.3).

Regarding "Cooperation" and "Social Learning"
To strengthen the connection of students with their core groups for the lab exercises and with other key persons in the class (e.g., top students), it is important to solicit quick discussions in the whole video call or in frequent, smaller (possibly random) break-out rooms during distance lectures (cf. Sections 2.5.4 and 2.5.6).

Regarding "Activation of Students"
This element can be strengthened by frequently activating students via online survey tools, like Mentimeter [53], during lectures to have them think about the latest input and produce some output (write a free-text answer, make a choice, solve a puzzle or quiz, etc.). We have already had positive experiences with such pauses for thought every second to sixth slide (corresponding to a 6-15-min interval between them) (cf. Section 2.5.5).

Regarding "Blended Learning"
The usual best practices for online teaching and learning [54] also apply to AI and ML. What we found especially helpful was to train students to keep their cameras on most of the time (this increases the perception of connectedness and the degree of interaction) and to roll out tools, like wonder.me (also see Reference [55]), for informal conversation in spontaneous participant groups, much like in a physical break or reception context. This informal networking is important to initiate cooperative competence development as well (cf. Section 2.5.7).

Conclusions
With these measures added to the AI-Atlas didactic concept, most of them already implemented within our current AI and ML courses, we see any course crafted after the AI-Atlas principles as fit for a very flexible range of teaching modes, whether on-site, online, or hybrid. Thus, we recommend the AI-Atlas as a viable basis for consideration when designing tertiary educational courses in AI, ML, and beyond (for the former, the syllabi and online teaching material presented in the appendix can also serve as starters for creating new respective courses).

Institutional Review Board Statement: Ethical review and approval were waived for this study, as only standard processes and data sources of the involved study programs were used for data collection and analysis (exams, voluntary feedback).

Informed Consent Statement: Not applicable.
Data Availability Statement: Exam results, voluntary feedback, and student enrollment data are hosted at the involved universities in accordance with applicable law.

Acknowledgments:
We are thankful for the recognition of the AI-Atlas didactic concept through the "best teaching-best practice" award (3rd rank) of the Zurich University of Applied Sciences; for very helpful discussions with and insights on design-based research by Andrea Reichmuth; and for the fruitful discussion on e-didactics with Elisabeth Dumont.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Outline of the AI and ML Modules
Appendix A.1. The AI Course

"Artificial Intelligence 1" (cf. https://stdm.github.io/ai-course/, accessed on 23 June 2021) is a practice-oriented elective course in the final year of a B.Sc. computer science program at a university of applied sciences, encompassing selected foundations of AI and ML and aiming at hands-on problem-solving competency for everyday software challenges. It is geared towards students who have a general curiosity for smartness in software but no aspirations towards research. Most of them, when starting the course, look forward to a career as software engineers, with some thinking about becoming data scientists or about further interdisciplinary studies in areas like information engineering, speech processing, computer vision, or robotics. This group is quite homogeneous with respect to demography and educational background (cf. Table A1). Age-wise, the students are predominantly in their early twenties, ca. 1-3 years younger than in the ML course, due to the AI course taking place at least a year earlier and some students entering industry for a while before engaging in master studies. The B.Sc. computer science program can be completed on a full-time or part-time basis. The superior learning objectives are defined as (a) knowing the breadth of AI and particularly ML problem-solving strategies, thus identifying such challenges in practice and developing corresponding solutions on one's own; (b) being able to explain the discussed algorithms and methodologies, thus being enabled to transfer the respective knowledge to the real world. The corresponding syllabus is depicted in Table A2. It is structured in five phases based on the main approaches to AI (symbolic and sub-symbolic) and an elaborate parenthesis dealing with overarching concerns.
The AI course is based on the well-known "AIMA" text book [33] (the much-welcomed updates in the recent 4th edition from April 2020 have not yet been adopted; they include a more timely selection and framing of the contents that has partly been anticipated by our curriculum design). It presents AI as a toolbox with separate compartments (=sub-fields), each containing tools to mimic specific aspects of intelligent behavior suitable for certain ranges of practical problems. The curriculum is special in that it gives equal time to the most relevant ideas from the complete field of AI, not just to fashionable topics around ML and neural networks or the main research areas of the lecturers. The course has been taught once per year on-site during fall terms since 2017. The fall term 2020 started in online-only mode and went hybrid for the second half of the course.

Table A2. The curriculum of the AI course, spanning a 14-week semester with 2 lectures and 2 labs (45 min each) per week. On successful completion, the students are awarded 4 ECTS, meaning they have invested ca. 120 h into the coursework (i.e., they spent roughly twice the amount of time in self-study as in class, with most of this time invested into the lab assignments).

Appendix A.2. The ML Course

"Machine Learning" (cf. https://stdm.github.io/ml-course/, accessed on 23 June 2021) is an elective course in an interdisciplinary joint graduate program on engineering of different universities of applied sciences. It builds upon basic knowledge in math, programming, analytics, and statistics as is typically gained in respective undergraduate courses of diverse engineering disciplines and draws on a correspondingly diverse audience with homogeneous demographics (age: 22-25 years) but rather heterogeneous backgrounds (cf. Table A3).
The module teaches the foundations of modern machine learning techniques in a way that focuses on practical applicability to real-world problems. The complete process of building a learning system is considered: formulating the task at hand as a learning problem; extracting useful features from the available data; and choosing and parameterizing a suitable learning algorithm. The syllabus highlights cross-cutting concerns, like ML system design and debugging (how to get intuition into learned models and results), as well as feature engineering, aspects typically cut short in previous courses these students took that touched on learning algorithms.
The corresponding educational objectives are designed as follows: (a) students know the background and taxonomy of machine learning methods; (b) on this basis, they formulate given problems as learning tasks and select a proper learning method; (c) students are able to convert a data set into a trained model by first defining a proper feature set fitting the task at hand; then they evaluate the chosen approach in a structured way using proper design of experiments; they know how to select models and "debug" features and learning algorithms if results do not fit expectations; finally, they are able to leverage the evaluation framework to tune the parameters of a given system and optimize its performance; (d) students have seen examples of different data sources and problem types and are able to acquire additional expert knowledge from the scientific literature. The curriculum, depicted in Table A4, spends most time on first principles and illustrates them by specific, selected learning algorithms as the basis for life-long learning in ML. The ML course is not built around any specific textbook but draws upon multiple sources, including References [33-36,56], having >90% original content. This is contrary to many courses that try to teach a large number of learning algorithms; it also eases the problem of heterogeneous entry competencies, where students might have learned about the typical ML algorithms in some class already but do not know what reasoning led to this specific class of algorithms. The ML course is structured four-fold, with an introduction followed by supervised, unsupervised, and reinforcement learning, and specifically does not touch neural networks, as these are treated in a specialized course. The course has been taught on-site, usually once a year in spring terms, since 2017. Since spring 2020, the course has moved to online teaching mode (with hybrid episodes) and is also taught in the fall term.

Figure A1. Top: screenshot from the 2048 number puzzle; the goal of the game is to reach a 2048 tile by joining adjacent tiles of similar value through consecutive up/down/left/right movements of the whole board (cf. Reference [57] for a fuller description of the gameplay). Bottom: exemplary search tree as processed by Expectimax for a fictional board configuration, excerpted from the assignment.
Phase one is about taking one's software development and problem-solving skills, together with one's understanding of the game after a few hours of playing, and implementing an agent ad hoc by designing useful heuristics (links to the literature and online forums are provided, where ideas abound). The usual experience of a student after phase one is that it is very difficult and not overly successful to try encoding one's own strategies purely ad hoc (and that it is impossible to exhaust the knowledge on the web and in the literature without a clear idea of how to conceptually approach the problem).
Phase two introduces the conceptual framework of adversarial heuristic search and the Expectimax algorithm. Students can leverage their previously developed ideas for a heuristic function here but, thanks to the look-ahead provided by the search, reach scores usually an order of magnitude higher than their previous results (or manual play). This drives home the point that mapping the problem at hand to the best-fitting conceptual/algorithmic approach from the literature pays off far more in AI than investing many hours of manual labor. It also reinforces Sutton's "bitter lesson" that leveraging compute through search is usually the smartest thing one can do [58].
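The core of such an agent can be sketched as follows. This is a generic, hypothetical Expectimax skeleton, not the course's starter code; the 2048-specific move model, spawn model, and heuristic are left as parameters:

```python
# Generic Expectimax sketch (illustrative, not the course's starter code):
# max nodes model the player's moves, chance nodes the random tile spawns.
def expectimax(board, depth, is_max_turn, moves, spawn_states, heuristic):
    """Return the expectimax value of a board.

    moves(board)        -> list of (move, resulting board) after a player move
    spawn_states(board) -> list of (probability, board) after a random spawn
    heuristic(board)    -> static evaluation of a board
    """
    if depth == 0:
        return heuristic(board)
    if is_max_turn:
        options = moves(board)
        if not options:
            return heuristic(board)  # no legal move: evaluate terminal board
        return max(expectimax(b, depth - 1, False, moves, spawn_states, heuristic)
                   for _, b in options)
    # chance node: expectation over the possible random outcomes
    return sum(p * expectimax(b, depth - 1, True, moves, spawn_states, heuristic)
               for p, b in spawn_states(board))

def best_move(board, depth, moves, spawn_states, heuristic):
    """Pick the player move with the highest expectimax value."""
    return max(moves(board),
               key=lambda mb: expectimax(mb[1], depth - 1, False,
                                         moves, spawn_states, heuristic))[0]
```

Plugged into a 2048 move model (sliding the board in one of four directions, followed by a random 2-or-4 tile spawn) and a board heuristic, `best_move` selects the action with the highest expected heuristic value at the chosen search depth.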