4.1. Perceived Changes in the Workplace
Based on our analysis, we identify three broad changes associated with the adoption of AI in the context of knowledge work: (1) a shift from manual labor and repetitive tasks to tasks that involve reasoning and empathy, (2) the emergence of new tasks and roles, and (3) the development of new skills and/or skill requirements.
4.1.1. Shift from Manual Labor and Repetitive Tasks to Tasks Involving Reasoning and Empathy
Based on our analysis, we find emerging evidence that AI adoption is associated with a shift from manual labor and repetitive tasks to tasks involving reasoning and empathy.
First, we find support for the notion that the deployment of AI is associated with a modularization and automation of tasks performed by humans [8]. However, we find that there can be a period of coexistence, in which certain tasks are performed by both AI applications and human workers. One especially illustrative example is O1, a company that offers transcription services for audio recordings to market research firms, media companies, and research institutes. Since 2019, the company has offered both manual and automatic transcriptions to customers. The reason for this coexistence is that, in this case, the quality of automatic transcriptions does not yet match that of manual transcriptions.
The modularization and automation of tasks is especially salient in those cases in our sample where AI is used to increase automation in customer service. In the case of a telecommunications provider that is increasingly relying on chatbots (O6), managers of the service division teamed up with data scientists to determine which use cases are eligible for automation. One time-intensive (and thus costly) task that was automated early on, for example, was the authentication of customers. Later, data scientists identified further use cases, such as providing feedback when customers contact the company regarding their phone bills.
Similarly, O3, a customer service provider in the energy sector, uses AI to analyze customer requests in order to identify the underlying concerns and assign them to employees who specialize in the respective cases, thus automating the coordination of customer requests. Moreover, O3 uses robotic process automation to send reminders to customers. In the following quote, a team manager at O3 describes the relief provided by the robotic process automation that was implemented two years ago:
[The system] relieves employees from a lot of work. You have to imagine that we once wrote individual letters or emails. Then we started using text templates. […] And now we sort of only press a button and 500 emails go out.
(Team manager, O3)
In some cases in which AI systems were used to automate manual and repetitive tasks, humans were still needed to control the system’s output. This was especially salient in the case of O1, where the deployment of an AI application to transcribe audio files shifted the freelancers’ task from manually transcribing text themselves to checking the machine’s transcriptions. Similarly, at O4, auditors no longer have to assign accounts manually, but are still responsible for checking the output of the AI application to make sure all accounts were assigned correctly and for manually coding accounts that the machine could not allocate with sufficient certainty.
As automation relieves employees to some extent from manual and repetitive tasks, they become more focused on tasks that involve reasoning and empathy. Instead of writing dozens of reminders, for example, the employees of O3 can now focus on more complex cases: “The AI takes care of cases that are easy and the employee gets […] more difficult cases” (Team leader, O3). Similarly, the leader of the automation division at O3 argues that automating repetitive tasks enables employees to focus on more complex and customer-oriented tasks that involve problem solving:
And this has led to the elimination of these routine tasks so that employees can work on tasks where they have strengths, where they do research, where they have to make conclusions, but also where they work customer-oriented and service-oriented. Thus this means they can concentrate on really important things.
(Project leader, O3)
To summarize, we find that the adoption of AI was in several cases associated with the automation of specific manual and repetitive tasks so that employees became more focused on tasks that involve reasoning and empathy. It is worth emphasizing that the elimination of tasks was not associated with an elimination of entire roles. Instead, we find that AI adoption was actually associated with the emergence of new roles. We elaborate on this finding next.
4.1.2. Emergence of New Tasks and Roles
We find that the adoption of AI is associated with the emergence of new tasks, which, in some cases, justifies the creation of new roles. Consider O6, a telecommunications provider that is increasingly using chatbots to automate its customer service. One key task in the chatbot’s development and implementation is the creation of content for the chatbot. Starting out, the project leader and her colleagues from the business side scripted most of the content themselves, which, however, became increasingly infeasible over time. In order to create the substantial amount of content needed for the chatbot, the project leader hired several frontline employees, who are used to dealing with customers directly over chat and phone and have knowledge in a range of domains, including sales and complaint handling. Eventually, the project leader created an entire team dedicated to creating chatbot content: “The content team consists of former customer service employees and agents who used to work at the hotline or in chat. And who do nothing nowadays but create content for artificial intelligence”. In our interviews, the manager stressed the importance of these employees’ work. They are not only familiar with the company’s products, but also know how conversations between frontline employees and customers unfold. Similarly, the leader of the technical development team emphasized their relevance in the development process:
You actually need people, who know how customers react and who [–in case that is needed–] perhaps simply write content for two weeks. What would they explain to the customer on the phone or what would the customer […] ask in the first place?
The project leader predicts that training chatbots may even become a new role in customer service more generally:
I believe this will be a new occupational role in customer service […] that employees no longer work at the hotline but train chatbots. We started with three or four colleagues and now they are eight [employees], whose daily business consists of preparing content, creating intents and training [the chatbot] and maintaining the […] knowledge database.
(Project leader, O6)
It is further worth noting that the newly created roles provide workers with better working conditions compared with their former positions on the frontlines of customer service. Consider the following quote from one of the team members, in which he contrasts his new role with his previous one:
Generally and in contrast to my first job, chatting with customers, I like it considerably more. It is fun. […] It’s exactly my thing. And with Rasa [a program for creating chatbot conversations] I admit I still have my problems. But I think it’s the future. Because I believe that no customer will be interested in this classical question-answer-game in three, four, five years. Instead, it’s about conversational design. […] This is why I find it really exciting to learn how to do this.
Note that the quote indicates several facets of satisfying work, namely that the employee perceives the work as fun, challenging, and meaningful.
4.1.3. Emergence of New Skill Requirements
Corresponding to our previous finding, we find that AI adoption is also associated with the emergence of new skill requirements at various hierarchical levels. In the case of O6, for example, both the former frontline employees and the project leader had to learn new skills to contribute, in their respective roles, to the implementation of the chatbot. These include technical skills, such as working with open-source software to create conversations, as well as soft skills, such as working in interdisciplinary teams following the Scrum framework. In the following quote, the project leader of O6 describes how she acquired the AI skills that she needed in her role through a combination of training, collaborating with colleagues, and self-study.
I also attended training for the use of AI […] for businesses. I also obtained certification and certificates for that to have an official acknowledgment of [my training] but at the end of the day I learned most of what I am doing nowadays from colleagues. So […] I am learning every day. And I would have never thought that I would consider architecture infrastructure diagrams totally appealing one day, because I didn’t even know what that was four years ago. So, that’s why there is a lot of self-learning involved.
(Project leader, O6)
In cases in which AI systems are still in development, participants voiced the expectation that the deployment of AI will be associated with the emergence of new skill requirements and/or a shift in the relative importance of skill requirements. For example, one librarian at O5 expects that the relative importance of competences will change due to the implementation of AI systems and digitization more generally. She illustrates this point using paleography, the study of historic writing systems, as an example.
Due to digitization, I expect that these classic so-called historical ancillary sciences will disappear and competences will decline in these fields. […] At the same time, these cultural objects are not self-explanatory, which is why I believe that the service of conveying the meaning of these objects will become more important.
Thus, while the practice of deciphering, reading, and dating manuscripts may be performed more and more with or by AI systems, the task of explaining the context of these manuscripts may become more relevant than ever.
Similarly, the project leader of O2 expects that the implementation of the AI system supporting employees in their train dispatching decisions will lastingly change the occupational profile: “Overall, the job description will change in consequence [of the implementation of the AI system], namely, it will require a higher digital affinity because it is supported more strongly” (Project leader, O2).
Last but not least, it is worth noting that new skill requirements do not emerge only in positions where people work with or are directly affected by AI systems. An AI expert at O2 illustrated the importance of teaching relevant skills to various organizational members with the example of a division within the organization wanting to issue a public tender for a project that involves the use or development of AI systems. Even if the employees charged with writing the announcement are not involved in the project themselves, they still require sufficient knowledge of AI to write the announcement.
Now and this is the point: Making an EU-wide tender for an AI-application requires that the person who writes it has a clue. He has to write down things such as: “It’s not allowed to be a neural network, because of explainability”. But they can’t do this. We first need to educate those people to an extent that they feel capable to write the text for the tender.
(AI expert, O2)
In summary, we find that the adoption of AI is associated with the emergence of new skill requirements involving knowledge of AI, technical skills, and soft skills, and that these skills are relevant not only to employees who work directly with AI.
4.2. Organizational Conditions Conducive to the Development of AI Systems
Based on our cross-case analysis, we identified three conditions that are conducive to the development of AI systems in the context of knowledge work: leadership support, participative change management, and effective integration of domain knowledge. Next, we describe and illustrate each condition in more detail.
4.2.1. Leadership Support
A key condition that is conducive to the development of AI systems in the context of knowledge work is that top management supports the respective AI projects. This is because top management support enables project members to secure the significant financial and human resources that are often needed during the development and implementation stage. Moreover, top management support is also valuable for receiving the freedom to actually work on and realize projects, something that would not be possible without being relieved to some extent from day-to-day operations, as a project leader from O5 explains:
“My direct supervisor is […] responsible for digital transformation in our house. In this respect, it was possible to get certain freedoms and implement certain things that […] would not have been possible in regular operations.”
It is further beneficial if members of management possess AI expertise and understand the characteristics of AI projects. In the case of O2, for example, a member of the board of directors is a professor who has done research on AI. In the following quote, an interview participant elaborates on how conducive this circumstance has been:
On the other hand, there is the group and the group sets targets […], in our case it is the board member […]. And that is the fortunate circumstance that she has a background in AI. […] She is a computer scientist by training, […] and did research on AI before she joined [O2]. So she really gets it and is now really pushing the [group in this regard]. For example, this disposition topic in Stuttgart. That was her idea. […] So you don’t have to explain that to her, but she comes up with ideas and challenges us to implement them. And she has set up a so-called house on AI at the board level, which is trying to form the group-wide hub that we as [subsidiary] cannot build, because we are only a service provider.
4.2.2. Participative Change Management
Participative change management is another condition that facilitates the development of AI systems in the context of knowledge work. Several interview participants mentioned the importance of involving employees in the AI development process at an early stage in order to address employees’ concerns and to communicate the potential impacts of the application. Several participants stressed that transparent communication helped to reduce reservations about AI in general and to promote acceptance among employees. The reason why transparent communication and early involvement are so important appears related to the strong focus on job losses in the discourse on AI and work. Consider the following statement by one of the project leaders: “There are websites on the internet called “Will a robot take my job?” […] This danger and this fear that digitization reduces jobs is there every day and […] is a major topic in work councils” (Project leader, O6).
Another component of effective change management during the adoption of AI in the context of work involves enabling the employees who will ultimately work with the application by training them to use it, as our interview partner in an auditing and tax consulting firm elaborates:
[It was important] […] that we teach the [users] what we know about the application so that they can really use it in a meaningful way and don’t just say “Yeah, so what’s that about? I don’t get it. I don’t want it”.
Moreover, enabling employees involves not only training those who will directly work with AI applications, but also determining competency requirements for different roles and providing appropriate training. The AI expert at O2 describes the different training offerings as follows:
We now have a range of trainings that we offer. Depending on your role in the company, you can attend a half-day training, that enables you to differentiate between a neural network and machine learning, up to a training that lasts several days, where we say: “Okay, the person who now implements an AI application needs to know in a bit more detail what is going on”.
4.2.3. Effective Integration of Domain Knowledge
A third factor conducive to the development of AI projects concerns the effective integration of domain knowledge. AI developers depend on the knowledge of domain experts to better understand the use case, to develop the solution, and to evaluate the performance of the algorithms and applications they develop. A developer at O4 explicitly emphasized the value of integrating auditors’ domain knowledge to develop a system that assigns accounts automatically: “We want to feed the intelligence of the auditors into the artificial intelligence. And for that you need the knowledge of the auditors […].” In the following quote, he describes how the development team collaborated with one of the auditors to that end:
[W]e worked very closely with the manager who provided the data […] and […] had regular meetings and [trained] each other a bit, so that I told him in machine learning this matters and then he told me […] in the allocation of the audit data that matters and then we tried to bring our knowledge a bit closer together.
While we found in several cases that the effective integration of domain knowledge was conducive to the development of AI systems, we also found, conversely, that a lack of collaboration from domain experts hampered development in one case.
Our data analysis further indicates that managers and developers use three distinct practices to effectively integrate domain knowledge into the development process; we labeled these practices “incorporating”, “investigating”, and “iterative feedback”, and consider each practice in turn.
Incorporating entails managers creating new roles to involve domain experts in the development process. These roles may be temporary and project-based or permanent. As mentioned above, the project leader of O6 initially recruited several call center employees to support her in creating chatbot scripts for customer inquiries. Similarly, O5 is experimenting with appointing individual domain experts as cross-departmental product owners for its AI solutions. To this end, it trains them in the skills necessary for agile working methods. In addition, the product owners serve as facilitators and translators between the developers and the domain experts.
Investigating involves conducting research to understand and extract domain knowledge. For example, the developers of O2 carried out interviews with domain experts and observed how they make decisions in their everyday work.
Actually, we work together closely with them [the train dispatchers]. I was on the early shift today, from 6:30 to 9:30 […] and observed the disposition for the three hours […] a month ago, we also conducted interviews with them.
(Project leader, O2)
Similarly, the project team of O6 listened to conversations between employees and customers to develop chatbot scripts: “Well, we also listened to calls, […] how do [call center] employees lead customers through the conversation” (Project leader, O6).
A third practice that organizations use to integrate domain knowledge in the AI development process is iterative feedback from domain experts. Although organizations use different ways to integrate expert feedback, they have all institutionalized some sort of forum or process. At O2, for example, the project team invites a number of end users to participate in regular in-person “workshop conversations”; the project leader at O6 has created both a regular meeting and a digital channel that domain experts can use to comment on the project, express criticism, and make suggestions for improvement. At O8, regular test cycles allow domain experts to evaluate the decision-making of the AI solution based on their expertise.
It’s more of a mutual, they get a feel for what those data look like and then they in turn can maybe give us a gut feeling about whether or not a machine is doing something systematically wrong.
(Project leader, O8)
The identified practices of incorporating, investigating, and iterative feedback are, of course, not mutually exclusive; project leaders may draw on one, several, or all of them.