Article

Adopting AI in the Context of Knowledge Work: Empirical Insights from German Organizations

by Georg von Richthofen 1,*, Shirley Ogolla 1 and Hendrik Send 1,2,*
1 Alexander von Humboldt Institute for Internet and Society, Französische Straße 9, 10117 Berlin, Germany
2 Hochschule für Technik und Wirtschaft (HTW) Berlin, 10313 Berlin, Germany
* Authors to whom correspondence should be addressed.
Information 2022, 13(4), 199; https://doi.org/10.3390/info13040199
Submission received: 17 March 2022 / Revised: 8 April 2022 / Accepted: 12 April 2022 / Published: 15 April 2022

Abstract

Artificial Intelligence (AI) is increasingly adopted by organizations. In general, scholars agree that the adoption of AI will be associated with substantial changes in the workplace. Empirical evidence on the phenomenon remains scarce, however. In this article, we explore the adoption of AI in the context of knowledge work. Drawing on case study research in eight German organizations that have either implemented AI or are in the process of developing AI systems, we identify three pervasive changes that knowledge workers perceive: a shift from manual labor and repetitive tasks to tasks that involve reasoning and empathy, an emergence of new tasks and roles, and an emergence of new skill requirements. In addition, we identify three factors that are conducive to the development of AI systems in the context of knowledge work: leadership support, participative change management, and effective integration of domain knowledge. Theoretical and managerial implications are discussed.

1. Introduction

Advancements in the field of Artificial Intelligence (AI), combined with interrelated developments such as the growth of cloud-based services, have enabled more capable applications and a more wide-ranging adoption of AI by organizations [1]. Organization researchers are therefore increasingly interested in studying how the use of AI shapes and is shaped by organizations [2,3]. One question, in the context of this larger phenomenon, is how the adoption of AI by organizations shapes and is shaped by the world of work [3,4,5].
Scholars from various disciplines generally agree that the increasing adoption of AI will fundamentally change work [6,7,8]. The self-learning capabilities of AI algorithms imply that these changes even concern cognitive non-routine tasks performed by knowledge workers [3,9,10], including those of experts in fields such as law [11], medicine [12], and marketing [13]. Where scholars disagree, however, is what these changes will actually look like.
Labor economists generally assume that AI and automation will impact employment [6,9]. In an often cited article, Frey and Osborne [14] predicted that a substantial proportion of jobs could be automated by AI in the near future. Although such predictions have failed to materialize to date [15], labor economists continue to focus on how AI and automation impact employment [16].
Scholars from both organizational research and sociology have argued that focusing on the impact of AI on employment often suffers from a form of technological determinism that disregards the complexity of context [4,17] and that the more pressing question may concern the quality, not the quantity of jobs [18]. While the exploration of such changes is still at an early stage, pioneering studies point to a number of relevant developments, namely, the emergence of new forms of control [19], new roles [20,21], and entirely new forms of labor, some of which are of questionable quality, however [22].
When studying the adoption of AI in the context of work, it is important to distinguish between the use of AI in different sectors of the economy. For example, AI plays a key role in the shift towards digital manufacturing [23,24]. In the context of Industry 4.0, AI is often used in conjunction with big data and robots to automate tasks that are too strenuous, dangerous, or tiring for human workers [25] or in machine control and fault detection [26]. These deployment scenarios differ from those in knowledge work, where analytical and cognitive tasks are the focus of automation and augmentation through AI [27].
Knowledge work refers to work that focuses on the generation, editing, processing, and transfer of knowledge and information [28]. According to Pyöriä [29], knowledge work has three characteristics: the use of information technology as an integral part of the labor process, a high degree of education of employees, and a high degree of non-routine tasks. Despite these characteristics, however, knowledge work is not a clear-cut category but a continuum along which different occupations can vary. The concept was introduced by scholars as a fourth sector of the economy, in order to distinguish the work performed by a growing number of office workers from the work performed in established economic sectors, namely, agriculture, industry, and service [28].
The introduction of AI in the area of knowledge work is a complex challenge for organizations. Surveys show a low adoption rate in the single-digit percentage range [30]. Anecdotal and empirical reports suggest a high rate of failed AI projects [31]. Similarly, studies on AI adoption in the healthcare sector indicate a variety of challenges, such as the lack of data integration, resistance among employees, and insufficient competencies in organizations [32,33]. The goal of this article, therefore, is to find answers to the following research questions. Our first research question concerns the changes perceived by knowledge workers who have started working with AI applications. Given the numerous challenges associated with the adoption of AI, our second research question asks about the factors that participants perceive as conducive to the development and implementation of their respective AI systems.
Responding to calls for empirical organizational research on AI adoption at work [2,4,5], we chose a qualitative research design and conducted case studies in German organizations that are in the process of adopting AI in their work processes. We compare multiple cases of organizations to evaluate whether findings are specific to a single case or reliable across several cases [34,35]. Based on an inductive analysis of these case studies, we find three broad changes associated with the implementation of AI: a shift from manual labor and repetitive tasks to tasks involving reasoning and empathy, the emergence of new tasks and roles, and the emergence of new skill requirements. In addition, we identify three factors that are conducive to the development of AI systems in the context of knowledge work: leadership support, participative change management, and effective integration of domain knowledge. In what follows, we review the extant literature on AI and work and describe our data and method in more detail. We then present our findings and discuss their theoretical and managerial implications.

2. Literature Review

2.1. Artificial Intelligence

The term AI has a variety of meanings [36]. One popular meaning in academia is that AI refers to “a field of computer science dedicated to the creation of systems performing tasks that usually require human intelligence” [12] (p. 2). This explains why AI is sometimes also referred to as intelligent machines [1,36]. Even in academia, however, there is no universally accepted substantive definition of AI [37,38].
There are several reasons why it is difficult to (conclusively) define AI. First, the meaning of the concept of intelligence itself is ambiguous. With more than 70 definitions in the literature, the concept is associated with abilities such as learning, planning, and problem solving, but can also encompass abilities such as consciousness, reasoning, creativity, logic, and critical thinking [37]. This, De Bruyn, Viswanathan, Beh, Brock and von Wangenheim [37] (p. 2) argue, is one of the reasons why the widely used definition of AI as intelligence demonstrated by machines can lead to such different claims of what AI means:
Based on this widely accepted definition of AI, and depending on how intelligence is defined or understood, some may argue that we are decades away from achieving AI, while others may consider that a simple regression analysis […] is achieving artificial intelligence already. This quite lax definition has allowed companies to claim they offer AI powered products and services […], where most AI researchers would be dubious at best to qualify them as such.
Second, there is the tendency to see complex tasks, once they are performed by machines, as tasks that do not require intelligence [37,38,39]. This so-called AI effect “makes the definition of AI a moving target in the sense that AI always seems to be out of reach” [39] (p. 39).
In this article, we adopt the definition by De Bruyn, Viswanathan, Beh, Brock and von Wangenheim [37] (p. 3), who define AI as “machines that mimic human intelligence in tasks such as learning, planning, and problem-solving through higher-level, autonomous knowledge creation.” The benefit of this definition is that it does not claim that AI actually achieves intelligence and that it confines AI to algorithms that perform tasks autonomously, in contrast to regular algorithms [3]. Nevertheless, in our fieldwork we remained sensitive to the fact that practitioners may have different understandings of AI [36].

2.2. Impact of AI on Work

In the past, the role of information systems in knowledge work has been primarily that of a support tool for domain experts. Due to AI methods such as machine learning, however, information systems can now adapt their behavior and the rules of their actions without human intervention [10]. In some contexts, these systems are increasingly able to perform individual tasks more quickly and reliably than domain experts [3].
Broadly speaking, AI can be used in the workplace to automate or augment human work [5,40,41]. Automation implies that machines take over tasks that were previously performed by humans. In the case of augmentation, humans work closely with machines to perform a task [5]. In the case of customer service, for example, automation could mean that a chatbot answers certain customer inquiries autonomously. An example of augmentation would be an AI application that provides call center employees with possible solutions to customer inquiries, from which the employees can then choose.
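To make the distinction concrete, the following minimal sketch shows one and the same toy intent classifier used in both modes: in automation mode the system answers the customer directly, while in augmentation mode it merely suggests a reply for a call center agent to choose from. All intents, keywords, and canned replies are invented for illustration and are not taken from the studied organizations.

```python
# Toy illustration of automation vs. augmentation in customer service.
# Intents, keywords, and replies are hypothetical.

def classify_intent(message: str) -> tuple[str, float]:
    """Return an (intent, confidence) pair based on simple keyword matching."""
    rules = {
        "invoice": ["bill", "invoice", "charge"],
        "contract": ["cancel", "contract", "terminate"],
    }
    text = message.lower()
    for intent, keywords in rules.items():
        hits = sum(keyword in text for keyword in keywords)
        if hits:
            return intent, min(1.0, 0.5 + 0.25 * hits)
    return "unknown", 0.0

CANNED_REPLIES = {"invoice": "Here is a copy of your latest invoice."}

def automation_mode(message: str) -> str:
    """Automation: the system responds to the customer autonomously."""
    intent, _ = classify_intent(message)
    return CANNED_REPLIES.get(intent, "Let me connect you to an agent.")

def augmentation_mode(message: str) -> str:
    """Augmentation: the system only suggests a reply; an agent decides."""
    intent, confidence = classify_intent(message)
    suggestion = CANNED_REPLIES.get(intent, "No template available.")
    return f"Suggested reply ({confidence:.0%} confidence): {suggestion}"

if __name__ == "__main__":
    print(automation_mode("Why is my bill so high?"))
    print(augmentation_mode("Why is my bill so high?"))
```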
The decision to automate or augment human work is often framed as a trade-off [40]. Raisch and Krakowski [5] explain, however, that automation and augmentation have a paradoxical relationship. Specifically, they argue that the process of automation requires a period of intense collaboration between developers and domain experts. For example, developing an AI application that screens job candidates requires that developers cooperate with human resources (HR) experts. Raisch and Krakowski [5] further argue that automation may often not be permanent. For example, in the long term, HR experts are still needed even after the completion of the project, because a change in job profiles will likely necessitate that developers and HR experts collaborate again to adjust the algorithm used to evaluate candidates. Thus, increasing the temporal (or spatial) scale often reveals a paradoxical relation between automation and augmentation [5]. Although the distinction between automation and augmentation is helpful to characterize the use of AI at work, it provides little insight into its quantitative and qualitative impacts.
Labor economists focus on studying the impact of AI and automation on the quantity of work [16]. While initial predictions focused on the impact of AI on jobs more broadly, economists now use a more fine-grained approach and study the impact of AI on tasks. This approach is consistent with the argument advanced by Tschang and Mezquita [8], namely that the deployment of AI is associated with a modularization of human work. More specifically, they argue that job profiles are increasingly broken down into modules, that is, into groups of tasks that are highly interdependent and depend little on tasks in other modules, which are then increasingly automated.
Both sociologists and organizational scholars have criticized studies on the quantitative impact of AI on work on the grounds that they tend to share a problematic “implicit technological deterministic assumption that AI has the power in itself to change work” [4] (p. 307), although these changes may actually be driven by profit-seeking [18]. One reason why economic predictions tend to overestimate the impacts of technologies on work and employment is that they do not sufficiently account for the complexity of (knowledge) work [42]. Wajcman [18] emphasizes more generally that the risk of a debate focused (almost exclusively) on job losses is that such visions can have performative effects and may lead us to neglect potentially more interesting and relevant qualitative changes associated with AI adoption.
To date, we still know surprisingly little about the qualitative changes associated with the adoption of AI, especially in established organizations. A notable example in this regard is the study by Waardenburg, Sergeeva and Huysman [21] on predictive policing, which found that the AI system led to the introduction of a new occupational group called “intelligence officers” that assumed more and more responsibility over time. While existing studies certainly contribute important insights, more studies are needed to provide a more comprehensive understanding of AI at work. Responding to calls by organizational scholars for empirical research on the use of AI in organizations and for the inclusion of the various stakeholders involved in the AI adoption process [4,5], we conducted case study research and examined the qualitative changes associated with the adoption of AI systems.

2.3. Challenges and Success Factors of Adopting AI at Work

According to von Krogh [1], four interrelated technological developments propel the increasing adoption of AI by organizations, namely, (1) significant advances in the field of AI (e.g., convolutional neural networks) and the availability of these technologies under open-source licenses, (2) information technology that is increasingly effective in capturing and storing data that are needed for the training and performance of algorithms, (3) the increasing affordability of computational power needed for AI, and (4) the growth of cloud-based services. Regardless of these advancements, however, adopting AI can be complicated by numerous challenges that can be grouped into seven categories: (1) social challenges, (2) economic challenges, (3) technological challenges, (4) data challenges, (5) organizational and managerial challenges, (6) ethical challenges, and (7) political, legal, and policy challenges [32,33]. Organizational and managerial challenges, for example, involve challenges such as a resistance to data sharing, a lack of in-house AI talent, and a fear in the workforce that AI threatens to replace them [32,33].
While the challenges associated with adopting AI have been studied extensively, relatively few studies identify conditions for success to overcome these challenges. A notable exception is the study by Chen et al. [43], who explore success factors that impact AI adoption in China’s telecom industry. Among other success factors, they find a significant positive relationship between management support and adoption of AI. This indicates that the conditions for success in the adoption of AI have strong parallels to the conditions for success in the adoption of past IT innovations [44]. In addition, there are practitioner-oriented studies, which describe the importance of change management during the adoption of AI [45]. These include, among other things, early participation of the workforce and a change in corporate culture toward decentralized decision-making and an agile way of working. While valuable, such studies are often not grounded in systematic empirical research. In addition, Fountaine, McCarthy and Saleh [45] do not distinguish between AI adoption in different sectors, for example, between the adoption of AI in the context of Industry 4.0 and knowledge work. To start addressing this gap in the literature, our second research question asks about the factors that are conducive to the development of AI systems in the context of knowledge work.

3. Materials and Methods

3.1. Case Selection

Due to the explorative nature of our research questions, we chose a qualitative research design. We compare multiple cases of organizations to evaluate whether findings are specific to a single case or reliable across several cases [34,35]. The goal of the case selection was to capture a variety of application scenarios and to represent typical cases [46].
We studied eight organizations (see Table 1). We selected the case studies to (1) reflect the diversity of use cases typically encountered in practice (e.g., customer service). In addition, the cases (2) cover a range of sectors of the economy (including financial services, telecommunications, media and information services), and (3) also represent variance with respect to company size (small, medium-sized, and large companies). In addition, the respective AI applications studied are (4) in different stages of development: while some AI applications were still in the research stage at the time of the study, other applications were under development or already implemented. The selected organizations are typical of their respective markets in Germany in the sense that they have only started experimenting with automating and augmenting specific tasks in the context of knowledge work.

3.2. Data Collection

To account for the perspectives of different stakeholders involved in the development process, we interviewed project leaders and managers, developers, and employees/domain experts. In total, we conducted 41 semi-structured interviews, covering the following topics: (1) reasons for developing or implementing AI systems, (2) challenges and conditions of success during the development and implementation of AI applications in the context of work, and (3) work-related changes that actors perceive or anticipate. Due to contact restrictions in the context of the COVID-19 pandemic, we conducted the vast majority of interviews virtually via Zoom. We recorded all interviews and transcribed them verbatim. In addition, we collected and analyzed published materials such as press releases, website content, publications, and, when accessible, internal documents. Due to the pandemic, we were unable to conduct on-site observations at the organizations.

3.3. Data Analysis

In a first step of data analysis, we considered each case separately and analyzed it with respect to our research questions. More specifically, we engaged in several rounds of coding of all interview transcripts and documents using Microsoft Word. This led to a number of initial, more descriptive codes. In regard to our first research question, for example, these included codes such as “automation of assignment of customer inquiries” or “automatic response to invoice-related inquiries.” Similarly, in regard to our second research question, this procedure yielded open codes such as “training users to use AI system.” In a next step, we categorized these descriptive codes into broader and more abstract “clusters” [46]. We then conducted a cross-case analysis regarding similarities and differences, guided by an established cross-case analysis technique [34]. Reasoning inductively, we ensured continuous reflection of our findings in three ways. First, we divided fieldwork among two of the three authors of this study. This reduced the tendency to base emerging insights too heavily on a subset of cases. Second, we met regularly as a team to discuss intermediate results. Third, we started with data analysis early on and continued to collect data until the newly collected and analyzed data no longer added to theory building [35]. Below, we present the findings of our analysis.
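To illustrate the coding procedure described above, the following minimal sketch shows how descriptive codes might be grouped into more abstract clusters and traced back to cases in a cross-case view. The code labels are taken from the examples mentioned in the text; the assignment of codes to cases and the cluster labels are simplified illustrations, not our full code book.

```python
# Illustrative sketch of the two-step coding procedure: descriptive codes
# from individual cases are grouped into broader clusters, which can then
# be compared across cases. The mapping below is a simplified example.

descriptive_codes = {
    "O3": ["automation of assignment of customer inquiries"],
    "O6": ["automatic response to invoice-related inquiries",
           "training users to use AI system"],
}

clusters = {
    "shift away from repetitive tasks": [
        "automation of assignment of customer inquiries",
        "automatic response to invoice-related inquiries",
    ],
    "participative change management": [
        "training users to use AI system",
    ],
}

# Cross-case view: which cases provide evidence for which cluster?
for cluster, codes in clusters.items():
    cases = sorted(case for case, case_codes in descriptive_codes.items()
                   if any(code in codes for code in case_codes))
    print(f"{cluster}: {', '.join(cases)}")
```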

4. Results

In this section, we present the findings of our analysis. We first summarize the changes that knowledge workers perceive in association with the adoption of AI. Next, we present the conditions that are conducive to the adoption of AI in the context of knowledge work. Table 2 provides an overview of our findings.

4.1. Perceived Changes in the Workplace

Based on our analysis, we identify three broad changes associated with the adoption of AI in the context of knowledge work: (1) a shift from manual labor and repetitive tasks to tasks that involve reasoning and empathy, (2) the emergence of new tasks and roles, and (3) the development of new skills and/or skill requirements.

4.1.1. Shift from Manual Labor and Repetitive Tasks to Tasks Involving Reasoning and Empathy

Based on our analysis, we find emerging evidence that AI adoption is associated with a shift from manual labor and repetitive tasks to tasks involving reasoning and empathy.
First, we find support for the notion that the deployment of AI is associated with a modularization and automation of tasks performed by humans [8]. However, we find that there can be a period of coexistence, in which certain tasks are performed by both AI applications and human workers. One especially illustrative example is O1, a company that offers transcription services for audio recordings to market research firms, media companies, and research institutes. Since 2019, the company has offered both manual and automatic transcriptions to customers. The reason for this coexistence is that the quality of automatic transcriptions does not yet match that of manual transcriptions.
The modularization and automation of tasks is especially salient in the cases in our sample where AI is used to increase automation in customer service. In the case of a telecommunication provider that is increasingly relying on chatbots (O6), managers of the service division teamed up with data scientists to determine which use cases are eligible for automation. One time-intensive (and thus costly) task that was automated early on, for example, was the authentication of customers. Later, data scientists identified other use cases, such as providing feedback when customers contact the company with regard to their phone bills.
Similarly, O3, a customer service provider in the energy sector, uses AI to analyze customer requests, identify the underlying concerns, and assign them to employees who are specialized in the respective cases, thus automating the task of coordinating customer requests. Moreover, O3 uses robotic process automation to send reminders to customers. In the following quote, a team manager at O3 describes the relief provided by the robotic process automation that was implemented two years ago:
[The system] relieves employees from a lot of work. You have to imagine that we once wrote individual letters or emails. Then we started using text templates. […] And now we sort of only press a button and 500 emails go out.
(Team manager, O3)
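The kind of request coordination described for O3 can be illustrated with a minimal, hypothetical sketch: the underlying concern of a customer request is identified and the request is assigned to a team specialized in that concern. The keywords, concerns, and team names below are our own assumptions, not details from the case.

```python
# Hypothetical sketch of automated request routing: identify the underlying
# concern of a customer request and assign it to a specialized team.

SPECIALIST_TEAMS = {
    "billing": "Team Billing",
    "relocation": "Team Relocation",
    "contract": "Team Contracts",
}

CONCERN_KEYWORDS = {
    "billing": ["invoice", "bill", "payment"],
    "relocation": ["moving", "relocation", "new address"],
    "contract": ["tariff", "contract", "price plan"],
}

def identify_concern(request: str) -> str:
    """Return the first concern whose keywords appear in the request."""
    text = request.lower()
    for concern, keywords in CONCERN_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return concern
    return "general"

def assign_request(request: str) -> str:
    """Assign the request to the team specialized in the identified concern."""
    return SPECIALIST_TEAMS.get(identify_concern(request), "Team General Service")

print(assign_request("I am moving next month, what do I need to do?"))
```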
In some cases in which AI systems were used to automate manual and repetitive tasks, humans were still needed to check the system’s output. This was especially salient in the case of O1, where the deployment of an AI application to transcribe audio files shifts the tasks of freelancers from manually transcribing text themselves to checking the machine’s transcriptions. Similarly, at O4, auditors no longer have to assign accounts manually, but they are still responsible for checking the output of the AI application to make sure all accounts were assigned correctly and for manually coding accounts that the machine could not allocate with sufficient certainty.
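The human-in-the-loop pattern described for O4 can be sketched as a simple confidence-based triage: the system assigns an account automatically only when its confidence exceeds a threshold, while low-confidence items are flagged for manual coding and the remainder is checked by auditors. The threshold, account names, and data structure are our own assumptions for illustration.

```python
# Minimal sketch of confidence-based triage between automatic assignment
# (checked by auditors) and manual coding. Values are hypothetical.

from dataclasses import dataclass

@dataclass
class Assignment:
    item: str
    account: str
    confidence: float

def triage(predictions: list[Assignment], threshold: float = 0.9):
    """Split predictions into auto-assigned and manually coded items."""
    auto, manual = [], []
    for prediction in predictions:
        (auto if prediction.confidence >= threshold else manual).append(prediction)
    return auto, manual

predictions = [
    Assignment("Invoice 4711", "Office supplies", 0.97),
    Assignment("Invoice 4712", "Travel expenses", 0.62),
]
auto, manual = triage(predictions)
print("Assigned automatically, checked by auditors:", [p.item for p in auto])
print("To be coded manually:", [p.item for p in manual])
```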
As automation relieves employees to some extent from manual and repetitive tasks, they become more focused on tasks that involve reasoning and empathy. Instead of writing dozens of reminders, for example, the employees of O3 can now focus on more complex cases: “The AI takes care of cases that are easy and the employee gets […] more difficult cases” (Team leader, O3). Similarly, the leader of the automation division at O3 argues that automating repetitive tasks enables employees to focus on more complex and customer-oriented tasks that involve problem solving:
And this has led to the elimination of these routine tasks so that employees can work on tasks where they have strengths, where they do research, where they have to make conclusions, but also where they work customer-oriented and service-oriented. Thus this means they can concentrate on really important things.
(Project leader, O3)
To summarize, we find that the adoption of AI was in several cases associated with the automation of specific manual and repetitive tasks so that employees became more focused on tasks that involve reasoning and empathy. It is worth emphasizing that the elimination of tasks was not associated with an elimination of entire roles. Instead, we find that AI adoption was actually associated with the emergence of new roles. We elaborate on this finding next.

4.1.2. Emergence of New Tasks and Roles

We find that the adoption of AI is associated with the emergence of new tasks, which, in some cases, justifies the creation of new roles. Consider O6, a telecommunication provider that is increasingly using chatbots to automate its customer service. One key task in the chatbot’s development and implementation is the creation of content for the chatbot. Starting out, the project leader and her colleagues from the business side scripted most of the content themselves, which, however, became increasingly unfeasible over time. In order to create the substantial amount of content needed for the chatbot, the project leader hired several frontline employees, who are used to dealing with customers directly over chat and phone and have knowledge in a range of domains including sales and complaint handling. Eventually, the project leader created an entire team dedicated to creating chatbot content: “The content team consists of former customer service employees and agents who used to work at the hotline or in chat. And who do nothing nowadays but create content for artificial intelligence”. In our interviews, the manager stressed the importance of these employees’ work. They are not only familiar with the company’s products, but also know how conversations between frontline employees and customers unfold. Similarly, the leader of the technical development team also emphasized their relevance in the development process:
You actually need people, who know how customers react and who [in case that is needed] perhaps simply write content for two weeks. What would they explain to the customer on the phone or what would the customer […] ask in the first place?
(Developer, O6)
The project leader predicts that training chatbots may even become a new role in customer service more generally:
I believe this will be a new occupational role in customer service […] that employees no longer work at the hotline but train chatbots. We started with three or four colleagues and now they are eight [employees], whose daily business consists of preparing content, creating intents and training [the chatbot] and maintaining the […] knowledge database.
(Project leader, O6)
It is further worth noting that work in the newly created roles provides workers with better working conditions compared with their former positions on the frontlines of customer service. Consider the following quote by one of the team members, in which he contrasts his new role with his previous role:
Generally and in contrast to my first job, chatting with customers, I like it considerably more. It is fun. […] It’s exactly my thing. And with Rasa [a program for creating chatbot conversations] I admit I still have my problems. But I think it’s the future. Because I believe that no customer will be interested in this classical question-answer-game in three, four, five years. Instead, it’s about conversational design. […] This is why I find it really exciting to learn how to do this.
(Employee, O6)
Note that the quote indicates several facets of satisfying work, namely that the employee perceives the work as fun, challenging, and meaningful.
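The content team’s work of preparing content, creating intents, and training the chatbot can be illustrated with a simplified sketch: each intent is defined by example utterances, drawn from experience of how customers actually phrase their requests, and a response. The sketch mirrors the spirit of tools such as Rasa mentioned by the interviewee, but the intents, examples, and data structure below are our own simplification, not Rasa’s actual training-data format.

```python
# Simplified, hypothetical illustration of chatbot content creation:
# intents defined by example utterances and a response.

intents = {
    "authenticate_customer": {
        "examples": [
            "I need to verify my identity",
            "Can you confirm it is really me?",
        ],
        "response": "Please enter the last four digits of your customer number.",
    },
    "question_about_bill": {
        "examples": [
            "Why is my bill higher this month?",
            "I do not understand a charge on my invoice",
        ],
        "response": "I can help with your bill. Which item would you like to check?",
    },
}

# A content team member mainly extends the example lists, drawing on their
# experience of how customers phrase these requests on the phone or in chat.
for name, spec in intents.items():
    print(f"{name}: {len(spec['examples'])} training examples")
```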

4.1.3. Emergence of New Skill Requirements

Corresponding to our previous finding, we find that AI adoption is also associated with the emergence of new skill requirements at various hierarchical levels. In the case of O6, for example, both the former frontline employees and the project leader had to learn new skills to contribute in their respective roles to the implementation of the chatbot. These include technical skills, such as working with open-source software to create conversations, as well as soft skills, such as working in interdisciplinary teams following the Scrum framework. In the following quote, the project leader of O6 describes how she acquired the AI skills that she needed in her role through a combination of training, collaborating with colleagues, and self-study.
I also attended training for the use of AI […] for businesses. I also obtained certification and certificates for that to have an official acknowledgment of [my training] but at the end of the day I learned most of what I am doing nowadays from colleagues. So […] I am learning every day. And I would have never thought that I would consider architecture infrastructure diagrams totally appealing one day, because I didn’t even know what that was four years ago. So, that’s why there is a lot of self-learning involved.
(Project leader, O6)
In cases in which AI systems are still in development, participants voiced the expectation that the deployment of AI will be associated with the emergence of new skill requirements and/or a shift in the relative importance of skill requirements. For example, one librarian at O5 expects that the relative importance of competences will change due to the implementation of AI systems and digitization more generally. She illustrates this point using paleography, the study of historic writing systems, as an example.
Due to digitization, I expect that these classic so-called historical ancillary sciences will disappear and competences will decline in these fields. […] At the same time, these cultural objects are not self-explanatory, which is why I believe that the service of conveying the meaning of these objects will become more important.
(Librarian, O5)
Thus, while the practice of deciphering, reading, and dating manuscripts may be performed more and more with or by AI systems, the task of explaining the context of these manuscripts may become more relevant than ever.
Similarly, the project leader of O2 expects that the implementation of the AI system to support employees in their decisions on when to dispatch trains will change the occupational profile lastingly: “Overall, the job description will change in consequence [of the implementation of the AI system], namely, it will require a higher digital affinity because it is supported more strongly” (Project leader, O2).
Last but not least, it is worth noting that new skill requirements do not only emerge in positions where people work with or are directly affected by AI systems. An AI expert at O2 illustrated the importance of teaching relevant skills to various organizational members by giving the example of when a division within the organization wants to announce a public tender for a project that involves the use or development of AI systems. Even if the respective employees charged with writing the announcement may not be involved in the project themselves, they still require sufficient knowledge of AI to write the announcement.
Now and this is the point: Making an EU-wide tender for an AI-application requires that the person who writes it has a clue. He has to write down things such as: “It’s not allowed to be a neural network, because of explainability”. But they can’t do this. We first need to educate those people to an extent that they feel capable to write the text for the tender.
(AI expert, O2)
In summary, we find that the adoption of AI is associated with the emergence of new skill requirements, which involve knowledge of AI, technical skills, and soft skills, and that these skills are not only relevant for employees who work directly with AI.

4.2. Organizational Conditions Conducive to the Development of AI Systems

Based on our cross-case analysis, we identified three conditions that are conducive to the development of AI systems in the context of knowledge work: leadership support, participative change management, and effective integration of domain knowledge. Next, we describe and illustrate each condition in more detail.

4.2.1. Leadership Support

A key condition that is conducive to the development of AI systems in the context of knowledge work is that the top management supports the respective AI projects. This is because top management support enables project members to secure the significant financial and human resources that are often needed during the development and implementation stage. Moreover, top management support is also valuable for gaining the freedom to actually work on and realize projects, something that would not be possible without being relieved to some extent from day-to-day operations, as a project leader from O5 explains:
“My direct supervisor is […] responsible for digital transformation in our house. In this respect, it was possible to get certain freedoms and implement certain things that […] would not have been possible in regular operations.”
It is further beneficial if there are members of management who possess AI expertise and understand the characteristics of AI projects. In the case of O2, for example, a member of the board of directors is a professor who has done research on AI. In the following quote, an interview participant elaborates on how conducive this circumstance has been:
On the other hand, there is the group and the group sets targets […], in our case it is the board member […]. And that is the fortunate circumstance that she has a background in AI. […] She is a computer scientist by training, […] and did research on AI before she joined [O2]. So she really gets it and is now really pushing the [group in this regard]. For example, this disposition topic in Stuttgart. That was her idea. […] So you don’t have to explain that to her, but she comes up with ideas and challenges us to implement them. And she has set up a so-called house on AI at the board level, which is trying to form the group-wide hub that we as [subsidiary] cannot build, because we are only a service provider.

4.2.2. Participative Change Management

Participative change management is another condition that facilitates the development of AI systems in the context of knowledge work. Several interview participants mentioned the importance of involving employees in the AI development process at an early stage in order to address employees’ concerns and to communicate potential impacts of the application. Participants also stressed that transparent communication helped to reduce reservations about AI in general and to promote acceptance among employees. The reason why transparent communication and early involvement are so important seems related to the fact that the discourse on AI and work is so focused on job losses. Consider the following statement by one of the project leaders: “There are websites on the internet called “Will a robot take my job?” […] This danger and this fear that digitization reduces jobs is there every day and […] is a major topic in work councils” (Project leader, O6).
Another component of effective change management during the adoption of AI in the context of work involves enabling the employees who will ultimately work with the application by training them to use it, as our interview partner at an auditing and tax consulting firm elaborates:
[It was important] […] that we teach the [users] what we know about the application so that they can really use it in a meaningful way and don’t just say “Yeah, so what’s that about? I don’t get it. I don’t want it”.
(Developer, O4)
Similarly, enabling employees involves not only training those who will directly work with AI applications, but also determining competency requirements for different roles and providing appropriate training. The AI expert at O2 illustrates the different training offerings as follows:
We now have a range of trainings that we offer. Depending on your role in the company, you can attend a half-day training, that enables you to differentiate between a neural network and machine learning, up to a training that lasts several days, where we say: “Okay, the person who now implements an AI application needs to know in a bit more detail what is going on”.

4.2.3. Effective Integration of Domain Knowledge

A third factor conducive to the development of AI projects concerns the effective integration of domain knowledge. AI developers depend on the knowledge of domain experts to better understand the use case, develop the solution, and evaluate the performance of the algorithms and applications they develop. A developer at O4 explicitly emphasized the value of integrating the domain knowledge of auditors to develop a system which assigns accounts automatically: “We want to feed the intelligence of the auditors into the artificial intelligence. And for that you need the knowledge of the auditors […].” In the following quote, he describes how the development team collaborated with one of the auditors to that end:
[W]e worked very closely with the manager who provided the data […] and […] had regular meetings and [trained] each other a bit, so that I told him in machine learning this matters and then he told me […] in the allocation of the audit data that matters and then we tried to bring our knowledge a bit closer together.
(Developer, O4)
While we found in several cases that the effective integration of domain knowledge was conducive to the development of AI systems, we also found, conversely, that in one case a lack of collaboration from domain experts hampered development.
Our data analysis further indicates that managers and developers use three distinct practices to effectively integrate domain knowledge in the development process; we labeled these practices “incorporating”, “investigating” and “iterative feedback” and will consider each practice in turn.
Incorporating entails managers creating new roles to involve domain experts in the development process. These roles may be temporary and project-based or permanent. As mentioned above, the project leader of O6 initially recruited several call center employees to support her in creating chatbot scripts for customer inquiries. Similarly, O5 is experimenting with appointing individual domain experts as cross-departmental product owners for the AI solutions. To this end, it trains them in the skills necessary for agile working methods. In addition, the product owners also serve as facilitators and translators between the developers and the domain experts.
Investigating involves conducting research to understand and extract domain knowledge. For example, the developers of O2 carried out interviews with domain experts and observed how they make decisions in their everyday work.
Actually, we work together closely with them [the train dispatchers]. I was on the early shift today, from 6:30 to 9:30 […] and observed the disposition for the three hours […] a month ago, we also conducted interviews with them.
(Project leader, O2)
Similarly, the project team of O6 listened to conversations between employees and customers to develop chatbot scripts: “Well, we also listened to calls, […] how do [call center] employees lead customers through the conversation” (Project leader, O6).
A third practice that organizations use to integrate domain knowledge in the AI development process is through iterative feedback from domain experts. Although organizations use different ways to integrate expert feedback, they all have institutionalized some sort of forum or process. At O2, for example, the project team invites a number of end-users to participate in regular analogue “workshop conversations”; the project leader in O6 has created both a regular meeting and a digital channel that domain experts can use to comment on the project, to express criticism, and make suggestions for improvement. In O8, regular test-cycles allow domain experts to evaluate the decision-making of the AI solution based on their expertise.
It’s more of a mutual, they get a feel for what those data look like and then they in turn can maybe give us a gut feeling about whether or not a machine is doing something systematically wrong.
(Project leader, O8)
The identified practices of incorporating, investigating, and iterative feedback are of course not mutually exclusive. Project leaders may draw on one or all of the identified practices.

5. Discussion

5.1. Theoretical Implications

The goal of this paper was to contribute to the burgeoning organizational literature on AI and work by exploring the changes that knowledge workers perceive when adopting AI systems as well as the conditions that are conducive to the development of AI systems in the context of knowledge work. Below, we summarize our insights and discuss their implications.
Overall, we identified three broad changes associated with the adoption of AI systems in the context of knowledge work. First, we found that AI adoption is associated with a shift from manual labor and repetitive tasks to tasks that require reasoning and empathy. Our findings in this regard support the notion that the deployment of AI is associated with a modularization of work and a subsequent automation of tasks performed by humans [8]. However, we also found that there can be a period of co-existence in which the same tasks are performed by both an AI system and humans. This finding indicates that AI-related changes in the workplace may be gradual rather than disruptive, with tasks, not entire jobs, being automated over longer periods of time. Moreover, our finding resonates with the theorizing by Huang, Rust and colleagues, who have argued that the adoption of AI systems will initially propel the automation of mechanical tasks, then of analytical tasks, and finally of feeling tasks [13,27,47]. It is further worth noting that the automation of tasks has thus far not led to the elimination of jobs in the cases we studied. Instead, it relieved employees from manual and highly repetitive tasks. While we found emerging evidence that employees perceived the identified shift as a relief, further research is required to explore employees’ perception of this shift in more detail.
Second, we present additional support for the notion that the adoption of AI can be associated with the emergence of new tasks and roles, thus adding to previous studies that have noted the development of new tasks and/or roles [21]. Our insight that the newly created roles enable an upskilling of former call center employees indicates that AI adoption may present opportunities not only for developers and managers, but also for employees and workers at the bottom of the hierarchy, and that AI adoption does not necessarily lead to a deskilling of workers.
Our third finding, that AI adoption is also associated with the emergence of new skill requirements, resonates with the two previous findings. As tasks shift from manual and repetitive tasks to tasks that involve reasoning and empathy, and new tasks and roles emerge, the relevance of specific skills changes. It is worth reiterating that these skill requirements relate not only to technical skills (e.g., programming) but also to soft skills, such as collaborating across functional and disciplinary boundaries during the development, implementation, and maintenance of AI systems. Moreover, our findings also indicate the complexity of AI skills: there is not one set of AI skills; rather, different actors in different roles need very different types of AI skills. This is in line with arguments by scholars studying skill development in the context of AI adoption [48].
Our findings regarding the factors that are conducive to the development of AI systems in the context of work resonate with initial research on the topic [43,45,49]. First, consistent with previous studies, we found that leadership support is a key requirement for successfully developing AI projects [43]. Leadership support does not only involve providing sufficient resources; it also involves encouraging the development of AI projects and creating supporting structures to enable knowledge exchange and cooperation across functional and organizational boundaries. Moreover, we found emerging evidence that it may be beneficial if (some of the) board members themselves have AI expertise.
Our second finding on the relevance of participative change management is consistent with practitioner-oriented studies [45]. Our cross-case analysis indicates that early involvement of the workforce in the development of AI projects is important to address a pervasive challenge of organizations during the adoption of AI, namely fears that AI could be used to replace workers [32]. Moreover, we found that effective change management also involves enabling employees to use AI applications and to train different employees according to their potential needs.
Third, we found that the successful development of AI projects in the context of knowledge work is facilitated by the effective integration of domain knowledge. This finding indicates that claims suggesting that the adoption of AI will render employees increasingly irrelevant may underestimate the value of domain knowledge and how various actors (can) contribute to the development of AI systems [20,49]. In addition, the finding provides empirical support for the notion that any form of automation involves a period of intense collaboration between humans and AI systems and hence a form of augmentation [5]. We also identified three practices that project leaders and developers can use to integrate the knowledge of domain experts in the development process: incorporating, investigating, and iterative feedback. The plurality of these practices indicates that there is not one but many ways to integrate domain experts in the development process.

5.2. Managerial Implications

Our findings have several managerial implications. First, we found that project participants perceive leadership support as an important enabling factor during the development of AI projects in the context of knowledge work. Leadership support goes beyond the mere provision of sufficient resources (although this may constitute a necessary condition); it also concerns leaders finding ways to inspire projects and challenge organizational members to experiment with AI [45]. Our findings suggest that, to this end, it may be beneficial to have organizational leaders who have acquired first-hand experience in the field of AI. Second, we found that participative change management is needed, given that the popular discourse is mainly focused on the elimination of jobs due to automation [18]. Involving employees in the development of AI projects emerges as an effective tactic to combat the “hype and fear narrative” [4] (p. 307). In the studied cases, managers used various means to inform employees about the development of AI projects, including but not limited to trainings, presentations, and newsletters. Moreover, effective participative change management also involved teaching employees how to use the respective AI system. Third, managers should find ways to effectively integrate the knowledge of domain experts during the development and implementation of AI systems. We identified three practices that managers can draw on strategically to that end: incorporating, investigating, and iterative feedback. Incorporating entails managers creating new roles to involve domain experts in the development process. Investigating involves conducting research to understand and extract domain knowledge (e.g., interviewing or observing domain experts). Iterative feedback involves institutionalizing forums or processes through which domain experts regularly evaluate the developing system. Incorporating has the advantage that it serves two ends simultaneously, namely, the participation of employees and the integration of domain knowledge.
Based on our findings that AI adoption can lead to the emergence of new tasks and roles as well as new skill requirements, we further suggest that managers monitor AI projects with a view to emerging roles and tasks, in order to enable these developments. This seems especially important as existing organizational structures, processes, and taken-for-granted ways of doing things may hinder their emergence. In terms of training, it is important for managers to consider the specific skill requirements of different occupations, roles, and tasks.

5.3. Limitations and Future Research

One limitation of this study is that it is based on cross-sectional data. An avenue for future research is therefore to study the questions raised in this article using longitudinal and ethnographic research approaches [21] and to engage in process theorizing [50]. This is especially desirable given the long duration of AI projects and that it will take time for work practices to change following the implementation of AI systems. The still relatively low diffusion of AI, the high rate of failure in adoption, and the rapid evolution of AI applications and complementary innovations suggest that the importance of AI at work will continue to intensify. Therefore, longitudinal studies are needed in order to understand the further development of knowledge work.
A second limitation of this article relates to our sampling approach. For this study, we sampled organizations from various sectors. While this provided us with insights into a range of sectors and an understanding of the common dynamics when AI is adopted at work, it also meant focusing more on breadth than depth. One avenue for future research is therefore to explore the adoption of AI in specific industries in more depth. Moreover, the organizations in our sample constitute arguably rather positive examples of AI adoption in the workplace. The employees we interviewed, for example, generally highlighted the relief provided by the automation of tasks. In none of the cases was automation associated with job losses. Therefore, future research should identify cases in which the potential dark side of AI for work is more salient.
One final avenue for future research concerns a more detailed examination of new skill requirements. Our analysis indicates that even employees, who are neither involved in the development of AI systems nor work with these systems directly, need to have a foundational understanding of AI systems in order to fulfill their tasks. Future research could develop a typology of skills that are relevant for different types of actors in organizations with varying exposure to AI systems.

Author Contributions

Conceptualization, G.v.R., S.O. and H.S.; methodology, G.v.R., S.O. and H.S.; project administration, G.v.R.; investigation: G.v.R. and S.O.; formal analysis, G.v.R., S.O. and H.S.; writing—original draft preparation, G.v.R. and S.O.; writing—review and editing, G.v.R. and H.S.; supervision, G.v.R.; funding acquisition, S.O. and H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the German Federal Ministry of Labour and Social Affairs.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are grateful for the support from all interview participants who provided their valuable time to enable this research. Moreover, the authors are grateful for the feedback by the members of the “Innovation, Entrepreneurship and Society” research group at the HIIG and the participants of the “AI at Work” subtheme at the 37th EGOS Colloquium in Amsterdam. Last but not least, the authors appreciate the time and effort that the three anonymous reviewers and the editor invested into providing feedback and valuable improvements to the paper.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

1. Von Krogh, G. Artificial Intelligence in Organizations: New Opportunities for Phenomenon-Based Theorizing. Acad. Manag. Discov. 2018, 4, 404–409.
2. Bailey, D.; Faraj, S.; Hinds, P.; von Krogh, G.; Leonardi, P. Special Issue of Organization Science: Emerging Technologies and Organizing. Organ. Sci. 2019, 30, 642–646.
3. Faraj, S.; Pachidi, S.; Sayegh, K. Working and organizing in the age of the learning algorithm. Inf. Organ. 2018, 28, 62–70.
4. Huysman, M. Information systems research on artificial intelligence and work: A commentary on “Robo-Apocalypse cancelled? Reframing the automation and future of work debate”. J. Inf. Technol. 2020, 35, 307–309.
5. Raisch, S.; Krakowski, S. Artificial Intelligence and Management: The Automation-Augmentation Paradox. Acad. Manag. Rev. 2021, 46, 192–210.
6. Brynjolfsson, E.; McAfee, A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies; WW Norton & Company: New York, NY, USA, 2014.
7. Østerlund, C.; Jarrahi, M.H.; Willis, M.; Boyd, K.T.; Wolf, C. Artificial intelligence and the world of work, a co-constitutive relationship. J. Assoc. Inf. Sci. Technol. 2021, 72, 128–135.
8. Tschang, F.T.; Mezquita, E.A. Artificial Intelligence as Augmenting Automation: Implications for Employment. Acad. Manag. Perspect. 2021, 35, 642–659.
9. Susskind, R.E.; Susskind, D. The Future of the Professions: How Technology Will Transform the Work of Human Experts; Oxford University Press: Oxford, UK, 2015.
10. Ågerfalk, P.J. Artificial intelligence as digital agency. Eur. J. Inf. Syst. 2020, 29, 1–8.
11. Armour, J.; Sako, M. AI-enabled business models in legal services: From traditional law firms to next-generation law companies? J. Prof. Organ. 2020, 7, 27–46.
12. Pesapane, F.; Codari, M.; Sardanelli, F. Artificial intelligence in medical imaging: Threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur. Radiol. Exp. 2018, 2, 35.
13. Huang, M.-H.; Rust, R.T. A strategic framework for artificial intelligence in marketing. J. Acad. Mark. Sci. 2021, 49, 30–50.
14. Frey, C.B.; Osborne, M.A. The future of employment: How susceptible are jobs to computerisation? Technol. Forecast. Soc. Chang. 2017, 114, 254–280.
15. Georgieff, A.; Milanez, A. What happened to jobs at high risk of automation? In OECD Social, Employment and Migration Working Papers; OECD Publishing: Paris, France, 2021.
16. Frank, M.R.; Autor, D.; Bessen, J.E.; Brynjolfsson, E.; Cebrian, M.; Deming, D.J.; Feldman, M.; Groh, M.; Lobo, J.; Moro, E.; et al. Toward understanding the impact of artificial intelligence on labor. Proc. Natl. Acad. Sci. USA 2019, 116, 6531–6539.
17. Pettersen, L. Why Artificial Intelligence Will Not Outsmart Complex Knowledge Work. Work Employ. Soc. 2019, 33, 1058–1067.
18. Wajcman, J. Automatisierung: Ist es diesmal wirklich anders? In Marx und Die Roboter: Vernetzte Produktion, Künstliche Intelligenz und lebendige Arbeit; Karl Dietz Berlin GmbH: Berlin, Germany, 2019; Volume 1.
19. Kellogg, K.C.; Valentine, M.A.; Christin, A. Algorithms at Work: The New Contested Terrain of Control. Acad. Manag. Ann. 2020, 14, 366–410.
20. Frey, W.R.; Patton, D.U.; Gaskell, M.B.; McGregor, K.A. Artificial Intelligence and Inclusion: Formerly Gang-Involved Youth as Domain Experts for Analyzing Unstructured Twitter Data. Soc. Sci. Comput. Rev. 2020, 38, 42–56.
21. Waardenburg, L.; Sergeeva, A.; Huysman, M. Hotspots and Blind Spots. In Living with Monsters? Social Implications of Algorithmic Phenomena, Hybrid Agency, and the Performativity of Technology, Proceedings of the IFIP WG 8.2 Working Conference on the Interaction of Information Systems and the Organization, San Francisco, CA, USA, 11–12 December 2018; Springer: Cham, Switzerland, 2018; pp. 96–109.
22. Gray, M.L.; Suri, S. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass; Houghton Mifflin Harcourt: Boston, MA, USA, 2019.
23. Oztemel, E.; Gursev, S. Literature review of Industry 4.0 and related technologies. J. Intell. Manuf. 2020, 31, 127–182.
24. Kerin, M.; Pham, D.T. A review of emerging industry 4.0 technologies in remanufacturing. J. Clean. Prod. 2019, 237, 117805.
25. Lara, B.; Ciria, A.; Escobar, E.; Gaona, W.; Hermosillo, J. Cognitive Robotics: The New Challenges in Artificial Intelligence. In Advanced Topics on Computer Vision, Control and Robotics in Mechatronics; Vergara Villegas, O.O., Nandayapa, M., Soto, I., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 321–347.
26. Wang, J.; Zhang, L.; Duan, L.; Gao, R.X. A new paradigm of cloud-based predictive maintenance for intelligent manufacturing. J. Intell. Manuf. 2017, 28, 1125–1137.
27. Huang, M.-H.; Rust, R.; Maksimovic, V. The Feeling Economy: Managing in the Next Generation of Artificial Intelligence (AI). Calif. Manag. Rev. 2019, 61, 43–65.
28. Boes, A.; Kämpf, T. Informations- und Wissensarbeit. In Lexikon der Arbeits- und Industriesoziologie, 2nd ed.; Hirsch-Kreinsen, H., Minssen, H., Eds.; Nomos: Baden-Baden, Germany, 2017; pp. 184–187.
29. Pyöriä, P. The concept of knowledge work revisited. J. Knowl. Manag. 2005, 9, 116–127.
30. Zolas, N.; Kroff, Z.; Brynjolfsson, E.; McElheran, K.; Beede, D.N.; Buffington, C.; Goldschlag, N.; Foster, L.; Dinlersoz, E. Advanced Technologies Adoption and Use by US Firms: Evidence from the Annual Business Survey; National Bureau of Economic Research: Cambridge, MA, USA, 2021.
31. Brock, J.K.-U.; von Wangenheim, F. Demystifying AI: What Digital Transformation Leaders Can Teach You about Realistic Artificial Intelligence. Calif. Manag. Rev. 2019, 61, 110–134.
32. Dwivedi, Y.K.; Hughes, L.; Ismagilova, E.; Aarts, G.; Coombs, C.; Crick, T.; Duan, Y.; Dwivedi, R.; Edwards, J.; Eirug, A.; et al. Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 2019, 57, 101994.
33. Sun, T.Q.; Medaglia, R. Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare. Gov. Inf. Q. 2019, 36, 368–383.
34. Eisenhardt, K.M. Building Theories from Case Study Research. Acad. Manag. Rev. 1989, 14, 532–550.
35. Eisenhardt, K.M.; Graebner, M.E. Theory Building from Cases: Opportunities and Challenges. Acad. Manag. J. 2007, 50, 25–32.
36. Liu, Z. Sociological perspectives on artificial intelligence: A typological reading. Sociol. Compass 2021, 15, e12851.
37. De Bruyn, A.; Viswanathan, V.; Beh, Y.S.; Brock, J.K.-U.; von Wangenheim, F. Artificial Intelligence and Marketing: Pitfalls and Opportunities. J. Interact. Mark. 2020, 51, 91–105.
38. Kaplan, A.; Haenlein, M. Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus. Horiz. 2019, 62, 15–25.
39. Kaplan, A.; Haenlein, M. Rulers of the world, unite! The challenges and opportunities of artificial intelligence. Bus. Horiz. 2020, 63, 37–50.
40. Davenport, T.H.; Kirby, J. Only Humans Need Apply: Winners and Losers in the Age of Smart Machines; Harper Business: New York, NY, USA, 2016.
41. von Richthofen, G.; Gümüsay, A.A.; Send, H. Künstliche Intelligenz und die Zukunft von Arbeit. In CSR und Künstliche Intelligenz; Altenburger, R., Schmidpeter, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2021; pp. 353–366.
42. Spencer, D.A. Fear and hope in an age of mass automation: Debating the future of work. New Technol. Work Employ. 2018, 33, 1–12.
43. Chen, H.; Li, L.; Chen, Y. Explore success factors that impact artificial intelligence adoption on telecom industry in China. J. Manag. Anal. 2021, 8, 36–68.
44. Oliveira, T.; Martins, M.F. Literature review of information technology adoption models at firm level. Electron. J. Inf. Syst. Eval. 2011, 14, 110.
45. Fountaine, T.; McCarthy, B.; Saleh, T. Building the AI-powered organization. Harv. Bus. Rev. 2019, 97, 62–73.
46. Miles, M.B.; Huberman, A.M. Qualitative Data Analysis: An Expanded Sourcebook; Sage: Thousand Oaks, CA, USA, 1994.
47. Huang, M.-H.; Rust, R.T. Artificial Intelligence in Service. J. Serv. Res. 2018, 21, 155–172.
48. Stephany, F. There is Not One But Many AI: A Network Perspective on Regional Demand in AI Skills. OSF Prepr. 2020.
49. Pfeiffer, S. Kontext und KI: Zum Potenzial der Beschäftigten für Künstliche Intelligenz und Machine-Learning. HMD Prax. Wirtsch. 2020, 57, 465–479.
50. Langley, A. Strategies for Theorizing from Process Data. Acad. Manag. Rev. 1999, 24, 691–710.
Table 1. Organizations and interviews.
Organization | Description | AI application | Number of interviews
O1 | Transcription of audio and video recordings | Since 2018, O1 has offered its customers AI-based transcription of audio and video recordings. After the automatic transcription by the AI system, a freelancer is hired to correct mistakes. The AI system was developed by an external solution provider. | 4
O2 | Railroad company | O2 is developing an AI system that provides decision support to train dispatchers. Train dispatching involves making decisions such as which train receives priority at track switches and train stations in the event of delays. The project's goal is to minimize delays in the system. | 5
O3 | Customer service provider in the energy industry | O3 is using an AI system developed by an external solution provider to identify customer concerns in customer inquiries (i.e., the reason why customers contact their energy provider). Each inquiry is then forwarded to an employee who specializes in that type of inquiry. | 5
O4 | Auditing and tax consulting firm | O4 developed and implemented an AI system that supports employees in allocating accounts to specific target structures (e.g., the commercial code). | 3
O5 | Universal library | O5 is developing several interrelated AI systems to digitize its historical library collection. For example, one AI system is meant to recognize the layout of pages, another to process text. The project's goal is to make the collection accessible to scholars around the globe. | 8
O6 | Telecommunications company | O6 purchased an external chatbot solution to automate customer service processes and tasks. Over time, the project team has been implementing additional use cases; for example, the chatbot is used to authenticate customers or to provide feedback on invoices. | 4
O7 | Collaborative, free knowledge base | O7 is developing an AI system to support editors with the quality assurance of content in the knowledge base. The software automatically evaluates the quality of entries and identifies cases of vandalism. | 6
O8 | Library for economic literature | O8 is developing an AI system that automatically indexes publications for its library catalog and provides keyword suggestions to librarians during the intellectual indexing process. The goal is to support librarians in handling the ever-increasing number of publications. | 6
Total | | | 41
Table 2. Overview of findings.
Perceived changes in the workplace | Shift from manual labor and repetitive tasks to tasks involving reasoning and empathy | Emergence of new tasks and roles | Emergence of new skill requirements
Organizational conditions conducive to the development of AI systems | Leadership support | Participative change management | Effective integration of domain knowledge
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
