In this section, we summarize the findings of our analysis. First, we describe the changes that knowledge workers perceive in association with the adoption of AI. Next, we present conditions that are conducive to the adoption of AI in the context of knowledge work.
Table 2 provides an overview of our findings.
4.1. Perceived Changes in the Workplace
Based on our analysis, we identify three broad changes associated with the adoption of AI in the context of knowledge work: (1) a shift from manual labor and repetitive tasks to tasks that involve reasoning and empathy, (2) the emergence of new tasks and roles, and (3) the development of new skills and/or skill requirements.
4.1.1. Shift from Manual Labor and Repetitive Tasks to Tasks Involving Reasoning and Empathy
Based on our analysis, we find emerging evidence that AI adoption is associated with a shift from manual labor and repetitive tasks to tasks involving reasoning and empathy.
First, we find support for the notion that the deployment of AI is associated with a modularization and automation of tasks performed by humans [8]. However, we find that there can be a period of coexistence in which certain tasks are performed by both AI applications and human workers. One especially illustrative example is O1, a company that offers transcription services for audio recordings to market research firms, media companies, and research institutes. Since 2019, the company has offered both manual and automatic transcriptions to customers. The reason for this coexistence is that the quality of automatic transcriptions does not yet match that of manual transcriptions.
The modularization and automation of tasks is especially salient in those cases in our sample where AI is used to increase automation in customer service. In the case of a telecommunications provider that is increasingly relying on chatbots (O6), managers of the service division teamed up with data scientists to determine which use cases were eligible for automation. One time-intensive (and thus costly) task that was automated early on, for example, was the authentication of customers. Later, data scientists identified further use cases, such as providing feedback when customers contact the company regarding their phone bills.
Similarly, O3, a customer service provider in the energy sector, uses AI to analyze customer requests, identify the underlying concerns, and assign them to employees who specialize in the respective cases, thus automating the task of coordinating customer requests. Moreover, O3 uses robotic process automation to send reminders to customers. A team manager at O3 describes the relief provided by the robotic process automation, which was implemented two years ago, as follows:
[The system] relieves employees from a lot of work. You have to imagine that we once wrote individual letters or emails. Then we started using text templates. […] And now we sort of only press a button and 500 emails go out.
(Team manager, O3)
In some cases in which AI systems were used to automate manual and repetitive tasks, humans were still needed to control the system’s output. This was especially salient in the case of O1, where the deployment of an AI application to transcribe audio files shifted the task of freelancers from manually transcribing the audio themselves to checking the machine’s transcriptions. Similarly, at O4, auditors no longer have to assign accounts manually but are still responsible for checking the output of the AI application to make sure that all accounts were assigned correctly and for manually coding accounts that the machine could not allocate with sufficient certainty.
As automation relieves employees to some extent from manual and repetitive tasks, they become more focused on tasks that involve reasoning and empathy. Instead of writing dozens of reminders, for example, the employees of O3 can now focus on more complex cases: “The AI takes care of cases that are easy and the employee gets […] more difficult cases” (Team leader, O3). Similarly, the leader of the automation division at O3 argues that automating repetitive tasks enables employees to focus on more complex and customer-oriented tasks that involve problem solving:
And this has led to the elimination of these routine tasks so that employees can work on tasks where they have strengths, where they do research, where they have to make conclusions, but also where they work customer-oriented and service-oriented. Thus this means they can concentrate on really important things.
(Project leader, O3)
To summarize, we find that the adoption of AI was in several cases associated with the automation of specific manual and repetitive tasks so that employees became more focused on tasks that involve reasoning and empathy. It is worth emphasizing that the elimination of tasks was not associated with an elimination of entire roles. Instead, we find that AI adoption was actually associated with the emergence of new roles. We elaborate on this finding next.
4.1.3. Emergence of New Skill Requirements
In line with our previous finding, we find that AI adoption is also associated with the emergence of new skill requirements at various hierarchical levels. In the case of O6, for example, both the former frontline employees and the project leader had to learn new skills to contribute in their respective roles to the implementation of the chatbot. These included both technical skills, such as working with open-source software to create conversations, and soft skills, such as working in interdisciplinary teams following the Scrum framework. In the following quote, the project leader of O6 describes how she acquired the AI skills she needed in her role through a combination of training, collaborating with colleagues, and self-study.
I also attended training for the use of AI […] for businesses. I also obtained certification and certificates for that to have an official acknowledgment of [my training] but at the end of the day I learned most of what I am doing nowadays from colleagues. So […] I am learning every day. And I would have never thought that I would consider architecture infrastructure diagrams totally appealing one day, because I didn’t even know what that was four years ago. So, that’s why there is a lot of self-learning involved.
(Project leader, O6)
In cases in which AI systems are still in development, participants voiced the expectation that the deployment of AI will be associated with the emergence of new skill requirements and/or a shift in the relative importance of existing skill requirements. For example, one librarian at O5 expects that the relative importance of competences will change due to the implementation of AI systems and digitization more generally. She illustrates this point using paleography, the study of historical writing systems, as an example.
Due to digitization, I expect that these classic so-called historical ancillary sciences will disappear and competences will decline in these fields. […] At the same time, these cultural objects are not self-explanatory, which is why I believe that the service of conveying the meaning of these objects will become more important.
(Librarian, O5)
Thus, while the practice of deciphering, reading, and dating manuscripts may be performed more and more with or by AI systems, the task of explaining the context of these manuscripts may become more relevant than ever.
Similarly, the project leader of O2 expects that the implementation of the AI system to support employees in their decisions about when to dispatch trains will lastingly change the occupational profile: “Overall, the job description will change in consequence [of the implementation of the AI system], namely, it will require a higher digital affinity because it is supported more strongly” (Project leader, O2).
Last but not least, it is worth noting that new skill requirements emerge not only in positions where people work with or are directly affected by AI systems. An AI expert at O2 illustrated the importance of teaching relevant skills to various organizational members by giving the example of a division within the organization that wants to announce a public tender for a project involving the use or development of AI systems. Even if the employees charged with writing the announcement are not involved in the project themselves, they still require sufficient knowledge of AI to write it.
Now and this is the point: Making an EU-wide tender for an AI-application requires that the person who writes it has a clue. He has to write down things such as: “It’s not allowed to be a neural network, because of explainability”. But they can’t do this. We first need to educate those people to an extent that they feel capable to write the text for the tender.
(AI expert, O2)
In summary, we find that the adoption of AI is associated with the emergence of new skill requirements, which involve knowledge of AI, technical skills, and soft skills, and that these skills are not only relevant to employees who work directly with AI.
4.2. Organizational Conditions Conducive to the Development of AI Systems
Based on our cross-case analysis, we identified three conditions that are conducive to the development of AI systems in the context of knowledge work: leadership support, participative change management, and effective integration of domain knowledge. Next, we describe and illustrate each condition in more detail.
4.2.3. Effective Integration of Domain Knowledge
A third factor conducive to the development of AI systems concerns the effective integration of domain knowledge. AI developers depend on the knowledge of domain experts to better understand the use case, develop the solution, and evaluate the performance of the algorithms and applications they develop. A developer at O4 explicitly emphasized the value of integrating the domain knowledge of auditors to develop a system that assigns accounts automatically: “We want to feed the intelligence of the auditors into the artificial intelligence. And for that you need the knowledge of the auditors […].” In the following quote, he describes how the development team collaborated with one of the auditors to that end:
[W]e worked very closely with the manager who provided the data […] and […] had regular meetings and [trained] each other a bit, so that I told him in machine learning this matters and then he told me […] in the allocation of the audit data that matters and then we tried to bring our knowledge a bit closer together.
(Developer, O4)
While we found in several cases that the effective integration of domain knowledge was conducive to the development of AI systems, we also found, conversely, that a lack of collaboration from domain experts hampered development in one case.
Our data analysis further indicates that managers and developers use three distinct practices to effectively integrate domain knowledge into the development process; we labeled these practices “incorporating”, “investigating”, and “iterative feedback” and will consider each practice in turn.
Incorporating entails managers creating new roles to involve domain experts in the development process. These roles may be temporary and project-based or permanent. As mentioned above, the project leader of O6 initially recruited several call center employees to support her in creating chatbot scripts for customer inquiries. Similarly, O5 is experimenting with appointing individual domain experts as cross-departmental product owners for the AI solutions. To this end, it trains them in the skills necessary for agile working methods. In addition, the product owners serve as facilitators and translators between the developers and the domain experts.
Investigating involves conducting research to understand and extract domain knowledge. For example, the developers of O2 carried out interviews with domain experts and observed how they make decisions in their everyday work.
Actually, we work together closely with them [the train dispatchers]. I was on the early shift today, from 6:30 to 9:30 […] and observed the disposition for the three hours […] a month ago, we also conducted interviews with them.
(Project leader, O2)
Similarly, the project team of O6 listened to conversations between employees and customers to develop chatbot scripts: “Well, we also listened to calls, […] how do [call center] employees lead customers through the conversation” (Project leader, O6).
A third practice that organizations use to integrate domain knowledge into the AI development process is iterative feedback from domain experts. Although organizations use different ways to integrate expert feedback, they all have institutionalized some sort of forum or process. At O2, for example, the project team invites a number of end users to participate in regular analogue “workshop conversations”; the project leader at O6 has created both a regular meeting and a digital channel that domain experts can use to comment on the project, express criticism, and make suggestions for improvement. At O8, regular test cycles allow domain experts to evaluate the decision-making of the AI solution based on their expertise.
It’s more of a mutual, they get a feel for what those data look like and then they in turn can maybe give us a gut feeling about whether or not a machine is doing something systematically wrong.
(Project leader, O8)
The identified practices of incorporating, investigating, and iterative feedback are, of course, not mutually exclusive. Project leaders may draw on one or several of these practices.