Article

AI Integration in Organisational Workflows: A Case Study on Job Reconfiguration, Efficiency, and Workforce Adaptation

by Pedro Oliveira *, João M. S. Carvalho and Sílvia Faria
REMIT—Research on Economics, Management and Information Technologies, Universidade Portucalense Infante D. Henrique, 4200-072 Porto, Portugal
* Author to whom correspondence should be addressed.
Information 2025, 16(9), 764; https://doi.org/10.3390/info16090764
Submission received: 13 June 2025 / Revised: 26 August 2025 / Accepted: 2 September 2025 / Published: 3 September 2025

Abstract

This study investigates how the integration of artificial intelligence (AI) transforms job practices within a leading European infrastructure company. Grounded in the Feeling Economy framework, the research explores the shift in task composition following AI implementation, focusing on the emergence of new roles, required competencies, and the ongoing reconfiguration of work. Using a qualitative, single-case study methodology, data were collected through semi-structured interviews with ten employees and company documentation. Thematic analysis revealed five key dimensions: the reconfiguration of job tasks, the improvement of efficiency and quality, psychological and adaptation challenges, the need for AI-related competencies, and concerns about dehumanisation. Findings show that AI systems increasingly assume repetitive and analytical tasks, enabling workers to focus on strategic, empathetic, and creative responsibilities. However, psychological resistance, fears of job displacement, and a perceived erosion of human interaction present implementation barriers. The study provides theoretical contributions by empirically extending the Feeling Economy and task modularisation frameworks. It also offers managerial insights into workforce adaptation, training needs, and the importance of ethical and emotionally intelligent AI integration. Additionally, this study highlights that the Feeling Economy must address AI’s epistemic risks, emphasising fairness, transparency, and participatory governance as essential for trustworthy, emotionally intelligent, and sustainable AI systems.

Graphical Abstract

1. Introduction

In recent years, artificial intelligence (AI) has become increasingly omnipresent, driven by advancements in data storage, computing capabilities, and enhanced methodologies in machine learning, big data analytics, and robotics. This widespread adoption has had a growing impact on the way people work [1,2]. AI is now increasingly capable of performing functions once exclusively managed by humans. This extensive integration of AI into human tasks suggests substantial changes in workflows and specific functions [3,4]. Such integration is likely to significantly alter roles, responsibilities, and tasks in the foreseeable future [5,6].
As AI gradually takes over various cognitive tasks, it may eventually extend to intuitive and emotional capabilities [7,8]. However, humans have not yet been surpassed in their most valuable contributions, particularly in areas such as emotions, empathy, and interpersonal connections [9]. This transition heralds the emergence of a “Feeling Economy”, where AI excels in cognitive tasks while humans excel in emotional understanding [8,9]. While we have not fully reached this stage, the trajectory is evident and substantiated by empirical evidence [6,9].
While conceptual frameworks such as the Feeling Economy [10] suggest a growing importance of emotional tasks in the age of AI, few empirical studies have tested this proposition in real organisational settings. Furthermore, research on AI adoption in infrastructure companies, where human–machine collaboration tends to be highly operational, remains limited. Our study seeks to address this gap by investigating how AI-driven reconfiguration affects emotional, cognitive, and mechanical tasks, thereby contributing to both theoretical development and practical insight.
Despite increasing scholarly attention to AI’s impact on task automation, there remains a notable lack of empirical research examining how emotional tasks are reconfigured in organisations that have already integrated AI into their operations. Existing studies tend to focus on cognitive and mechanical tasks, often overlooking the complex emotional dimension of work, particularly in traditionally technical sectors such as infrastructure. This gap presents an opportunity to explore how emotional and interpersonal responsibilities are transformed or preserved in AI-mediated environments. To address this, we conduct an exploratory case study of a company specialising in the construction, management, and maintenance of highways, to analyse how emotional labour is reshaped through the AI transition. In this regard, based on the theoretical framework of the Feeling Economy [7,8], our research question is: How did changes in job practices occur when the company implemented AI in its operations? We aim to understand the advantages and difficulties of the AI implementation process within a large company through the lens of the Feeling Economy. This approach aligns with the argument advanced by Tschang and Almirall [11], who posited that AI deployment is associated with the modularisation of human work, where job profiles are increasingly divided into interdependent task clusters that are progressively automated. Recent contributions by Shin [12,13] add a critical dimension to this perspective by highlighting the epistemic and ethical implications of algorithmic systems. While the Feeling Economy emphasises the rising importance of emotional labour in AI-mediated environments, Shin draws attention to how AI shapes organisational knowledge, decision-making, and perceptions through mechanisms such as artificial misinformation and embedded bias.
By linking emotional intelligence with epistemic integrity, this study not only addresses changes in job design but also interrogates how AI redefines trust, fairness, and human agency within hybrid human–machine systems. Together, these frameworks support a more comprehensive and critical analysis of the social, cognitive, and ethical transformations associated with AI integration.

2. Theoretical Framework

The economy is undergoing swift transformations in terms of the skills demanded in the workplace. AI is poised to alter the characteristics of tasks, occupations, and services traditionally handled by human workers [7,14].
The literature on AI is primarily centred on the advancement of machine intelligence to mimic human intelligence [15]. This includes skills such as knowledge and reasoning [16], problem-solving [17], communicating, perceiving, interacting, learning, and acting [18]. Moreover, Huang et al. [9] proposed three types of artificial intelligence: mechanical, thinking, and feeling.
Mechanical intelligence relates to simple, standardised, routine, and repeated tasks (e.g., many tasks in call centres are routine). Thinking intelligence is used for complex, systematic, rule-based, and well-defined tasks. It also involves the ability to process complex information, think creatively and holistically, and be effective in novel situations that require understanding (e.g., accounting, robot-advisory services, legal advice, and medical diagnostics). Feeling intelligence relates to the ability to read, understand, and respond to people’s emotions. It is used for social, emotional, communicative, and interactive tasks [19]. These three types of AI, manifested by machines that exhibit various aspects of human intelligence, are increasingly employed in job tasks [9].
Most roles within companies involve a blend of mechanical tasks (such as managing routine operations and monitoring attendance), thinking tasks (like analysing customer preferences and organising logistics), and feeling tasks (such as empathising with customers and providing therapy recommendations to patients). These task dimensions can differ across various roles and demand different levels of intelligence [7,9]. As AI increasingly handles mechanical and thinking tasks previously performed by humans, individuals need to shift their attention towards tasks that are more challenging for AI to undertake, specifically those that require empathy and emotional aptitude [8,9].
Emotional tasks within jobs, like communication, coordination, and collaboration, are gaining prominence in the workplace for human workers compared to the technical and cognitive aspects of roles. This shift increases the demand for positions that emphasise emotional intelligence [20]. Simultaneously, “soft” skills such as intuition, empathy, adaptability, flexibility, critical thinking, and creativity are forecasted to become relatively more significant in the labour market [9,21].
This discussion arises from the collection of studies conducted by Huang and Rust, which present a theoretical framework outlining the concept of the Feeling Economy [7,8,9]. According to their research, the impact of AI on employees’ job prospects depends significantly on the nature of the tasks they are engaged in. For example, in the service sector, AI will gradually supplant service workers by performing a growing share of routine cognitive tasks such as answering questions, processing transactions, or providing basic recommendations [8,9].
In parallel, a growing body of survey-based research has examined how employees across various sectors perceive the integration of AI into their professional environments. These studies reveal that perceptions are shaped not only by technological capabilities but also by organisational support structures, communication clarity, and ethical safeguards. For instance, Qin et al. [22] found that while AI was perceived as more consistent in performance evaluations, human-led assessments were considered fairer and more empathetic. Abdullah and Fakieh [23] observed that healthcare professionals’ acceptance of AI applications correlated with clarity in task definition and the availability of training. Similarly, Kelley [24] highlighted that ethical adoption principles—such as transparency and accountability—contributed to greater trust in AI systems. Zhu et al. [25] emphasised that perceptions of organisational support were key predictors of positive attitudes toward AI in the workplace. These contributions underline the importance of contextual and sectoral variables in shaping how AI is received and integrated. However, they often rely on quantitative measures that may overlook the nuanced, task-level transformations and affective dynamics brought about by AI adoption. Building on these insights, the present study adopts a qualitative approach to examine how AI reconfigures tasks, roles, and emotional labour in a complex, data-driven infrastructure organisation, thereby offering a more granular understanding of employee experiences within the Feeling Economy.

3. Materials and Methods

3.1. Research Design

This exploratory study adopted a qualitative research method to understand how changes in job composition occur when a company implements AI in its operations. To comprehensively understand these changes, we adopted a case study approach, as outlined by Dyer Jr and Wilkins [26] and Yin [27], leveraging rich qualitative data. Single case studies are appropriate for providing a deep, contextualised examination of complex socio-economic phenomena in real-world settings [28]. Scholars widely acknowledge that a single case can serve as a robust foundation for the in-depth investigation of emerging phenomena [29]. Consequently, this approach is well-suited to our exploratory objective regarding the emerging concept of the Feeling Economy [7,19], representing a novel inquiry area. Furthermore, case studies are frequently employed to yield unique insights that may not be accessible through other research methodologies [27].
In line with the single case study approach [27], we collected information from the company’s documents and via interviews [30] with 10 professionals who used AI systems in their work; the interviews were subjected to thematic analysis. The documents analysed included public presentations, reports, and company website information. These sources were instrumental in understanding the company’s history, work context, and recent innovations related to AI. However, the company did not permit disclosure of its identity. As such, its website and all the documents are confidential and cannot be shared as supplementary material.
The interviewees were selected based on their direct engagement with AI-related changes across diverse operational roles, including technical, administrative, and supervisory positions. This purposive sampling strategy aimed at capturing multiple perspectives on AI-driven task transformation. Interviewees were identified with the support of a liaison within the company and represented various departments, thereby contributing to data triangulation. Although we conducted 10 interviews, we achieved thematic saturation as no new relevant themes emerged in the final interviews, consistent with recommendations in qualitative research methodology.
Interviews were conducted face-to-face and averaged approximately 30 min. All interviews were audio recorded and transcribed verbatim for analysis. The data were analysed using thematic analysis, supported by a hybrid inductive–deductive coding framework aligned with our theoretical lenses (Feeling Economy, modularisation, and epistemic risk). Coding was performed independently by the first two researchers, and the research team resolved discrepancies to ensure inter-coder reliability.
Finally, the study followed the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines [31], which provide a 32-item checklist to ensure transparency, methodological rigour, and comprehensiveness in reporting qualitative studies. This approach includes clear documentation of the study context, participant selection, interview procedures, data analysis steps, and reflexivity considerations. Adherence to COREQ enhances the credibility and reproducibility of qualitative research, addressing concerns about transparency and analytical robustness.

3.2. The Case Study

To understand how AI capabilities change work tasks, we conducted an in-depth case study at a leading European company in the Portuguese market, boasting majority ownership in six motorway concessions. The primary endeavours of this company are related to asset management and providing services for the acquisition, operation, and maintenance of road infrastructures. It has over twenty years of operational history and is esteemed for its innovative prowess and effectiveness. The company consistently pursues profitable and strategic avenues for business growth by investing in human and intellectual capital. Notably, it focuses on projects related to digital transformation, including initiatives such as digital training programmes for employees and the creation of infrastructure management tools utilising AI.
In pursuit of its objectives, the company incorporated Vaisala’s RoadAI solution into its toolkit in January 2022. This integration led to enhancements in three key areas of pavement visual inspections: safety, information quality, and performance. Additionally, the company utilises Drive, a solution developed by ARMIS Intelligent Transport Systems. Drive 3.0 software offers intelligent management tailored to support traffic control centre operations, facilitating network monitoring, incident management, traffic control, roadwork coordination, traffic surveillance, and the operation and control of various road and telematics equipment. The company also employs DAS (Distributed Acoustic Sensing) and DTS (Distributed Temperature Sensing) technologies, developed by Indra and Aragon Photonics Labs, which integrate artificial intelligence and fibre optics to enhance road safety. DAS detects incidents by monitoring the vibrations of optical fibre installed within the infrastructure, while DTS utilises the same fibre to monitor temperature changes in critical structures like tunnels. With artificial intelligence algorithms, these systems can identify various situations, including accidents, hazardous driving, congestion, and fires, enabling motorway operators to respond swiftly and accurately. This eco-friendly solution not only contributes to road safety but also detects wildlife presence on the road and assesses road deterioration using environmentally sustainable methods.
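The vendors’ actual DAS detection algorithms are proprietary and not described in the source. Purely as an illustration of the principle described above (flagging anomalous vibration readings along the fibre so operators can respond), a toy sketch might look like the following; the function name, window size, and synthetic data are all hypothetical:

```python
import statistics

def flag_incidents(amplitudes, window=5, k=3.0):
    """Flag positions along the fibre whose vibration amplitude deviates
    strongly from their local neighbourhood (a simple z-score-style rule).
    This is an illustrative toy detector, not the deployed system."""
    flags = []
    for i in range(len(amplitudes)):
        lo, hi = max(0, i - window), min(len(amplitudes), i + window + 1)
        # Compare each reading against its neighbours, excluding itself.
        neighbourhood = amplitudes[lo:i] + amplitudes[i + 1:hi]
        mean = statistics.mean(neighbourhood)
        sd = statistics.pstdev(neighbourhood) or 1e-9  # avoid division by zero
        if abs(amplitudes[i] - mean) / sd > k:
            flags.append(i)
    return flags

# Synthetic example: a flat vibration baseline with one sharp spike.
readings = [1.0] * 20
readings[10] = 50.0
print(flag_incidents(readings))
```

In practice, production systems classify the *pattern* of vibration (accident, hazardous driving, congestion, fire, wildlife) rather than applying a single threshold, but the sketch conveys the core idea of localising anomalies along the instrumented fibre.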
Furthermore, the company has implemented technologies such as Automatic Licence Plate Recognition (ALPR) systems and Robotic Process Automation (RPA) to streamline operational and administrative processes. These tools automate data extraction, recognition, and decision-making tasks, ranging from toll enforcement to internal documentation workflows, thereby reshaping both the technical and emotional components of daily routines. The choice of this company for the present study is justified by its integration of AI tools that span the three levels of task intelligence: mechanical, cognitive, and emotional.

3.3. Participants

To deepen our understanding of how job roles are being transformed by AI, we selected 10 participants from distinct hierarchical levels and departments that actively engage with AI technologies. The interviewees included the company’s General Manager (holding the position of Business Manager), as well as the directors of the three departments where AI tools are most actively deployed. The remaining six participants were operational staff members directly involved with AI-based systems. Although other operational employees in the organisation also interact with AI, these six participants were selected for their strategic or operational proximity to the technologies under study. Notably, the company has not dismissed any employees as a direct result of AI implementation. Instead, its approach has focused on internal relocation and upskilling. In this context, two of the interviewees had experienced significant role transitions due to the integration of AI into their departments. Their perspectives enriched the dataset by offering insight into adaptation trajectories and internal workforce restructuring.
Thematic saturation was reached during the final stages of data collection. Specifically, the ninth and tenth interviews did not generate novel insights compared to earlier interviews, suggesting that the main themes had been adequately captured and additional data collection would likely yield redundant information. Given the absence of job losses, concerns over survivorship bias, commonly associated with studies on automation and workforce displacement, were not applicable in this case.
To enhance transparency while preserving confidentiality, the participants have been assigned anonymised codes (E01–E10), used throughout the Results section. The correspondence between these labels and their professional roles is as follows:
- E01—General Manager (Business Manager)
- E02, E03, E05, E08, E09, E10—Operational staff from departments employing AI tools
- E04—Director of the Operation and Maintenance Department
- E06—Director of Information Systems Department
- E07—Director of Conservation Management Department
This classification allows readers to contextualise the quoted material while respecting the anonymity agreements established with the participants.

3.4. The Interview

In the semi-structured interviews, each participant was questioned for an average of 30 min, allowing them to convey their understanding of the issues under study. The interviewees were encouraged to describe their subjective perspectives [32] on the changes in their work practices in the AI-assisted context. The researcher asked questions to acquire a holistic understanding of the phenomenon. All interviews were audio recorded with participants’ consent to enhance data accuracy and reduce potential bias. To ensure anonymity, respondents were assigned coded identifiers in place of names. The data collection was conducted in July and August 2024.
The guiding questions were divided into three sections. The first section explored how tasks were performed prior to AI implementation, with a focus on classifying them as mechanical, cognitive, or emotional (see questions 1–3 in Appendix A). The second section examined the integration of AI into these tasks and its influence on the nature and classification of work activities (see questions 4–6 in Appendix A). The third section investigated the emergence of new tasks and roles, the need for novel skillsets, and the ongoing reconfiguration of organisational structures in response to AI adoption (see questions 7–13 in Appendix A).
To enhance methodological transparency, Appendix B provides a summary table of categorical responses to selected interview questions, offering an overview of participant perceptions and reported task changes.

3.5. Data Analysis

The interviews were fully transcribed, after which the data were analysed using a thematic approach based on the framework established by [33]. This approach involves six key phases: (1) familiarising with the data, (2) generating initial codes, (3) searching for themes, (4) reviewing themes, (5) defining and naming themes, and (6) producing the final report. This method offered a structured yet flexible framework to identify and interpret meaningful patterns across participants’ accounts.
Although a hybrid inductive–deductive strategy was defined a priori (as detailed in Section 3.1), the analysis began with an inductive process to allow themes to emerge directly from the empirical material, without being constrained by predefined categories. Coding was conducted manually by the first two authors, who independently reviewed all transcripts and generated initial codes. The research team discussed these codes and refined them through successive analytical rounds. Disagreements were resolved through deliberation with the third author until consensus was achieved, ensuring the reliability of the thematic structure.
The development of themes focused on how participants described changes in their work related to AI technologies. The emerging categories were organised by the researchers around the three types of tasks highlighted in the Feeling Economy framework—mechanical, cognitive, and emotional—as well as additional dimensions linked to perceived value, professional identity, psychological impact, and ethical risks. These were interpreted considering the theoretical constructs of modularisation and epistemic risk.
To enhance analytical rigour, the researchers revisited deviant cases, and the research team re-examined the themes against raw data excerpts to ensure consistency, credibility, and depth of interpretation.
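The coding disagreements described above were resolved through discussion rather than a statistical index. For readers who wish to complement such deliberation with a quantitative check of inter-coder reliability, a minimal sketch of Cohen’s kappa for two coders is shown below; the theme labels and excerpt data are hypothetical, not drawn from this study:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders labelling the same
    excerpts, corrected for the agreement expected by chance."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of excerpts coded identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement, derived from each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical theme codes assigned by two coders to ten excerpts.
a = ["task", "task", "skill", "psych", "task", "dehum", "skill", "task", "psych", "dehum"]
b = ["task", "skill", "skill", "psych", "task", "dehum", "skill", "task", "psych", "psych"]
print(round(cohens_kappa(a, b), 2))
```

Values above roughly 0.6 are conventionally read as substantial agreement; a low kappa would signal that the codebook needs refinement before further discussion rounds.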

4. Results

The results are presented following the six phases of the thematic analysis [33]. In phase one, we conducted transcription and an initial reading of participants’ responses, which enabled thorough and repeated reading to understand their perceptions and experiences fully. It was observed that workers notice changes, such as the replacement of repetitive tasks by AI, and mention mixed feelings about the integration of these technologies, highlighting both benefits (increased productivity and quality) and challenges (concerns about dehumanisation and job loss). In the second phase, text segments were identified and coded with a focus on changes in job descriptions and practices, resulting in codes such as “replacement of repetitive tasks”, “increased productivity”, “feeling of job insecurity”, “need for new skills”, and “improved process accuracy”. Through collaborative discussions among the researchers, the process was reviewed and validated, ensuring that the codes adequately represented participants’ perceptions of the effects of AI on the workplace. In the third phase, the initial codes were clustered to identify potential themes, such as Task Reconfiguration, Effects on Productivity and Quality, Psychological and Adaptation Challenges, Need for AI Skills and Knowledge, and Dehumanisation of Interactions with AI. These preliminary themes reflect an analysis of perceptions about how AI influences the organisation of work and expectations for the future of jobs. Afterwards, these themes were reviewed to ensure they aligned with the data. For example, themes on productivity and quality were grouped under Improving Efficiency and Quality of Work, given the frequent mention of the benefits of AI for performance and error reduction. The research team worked to resolve discrepancies and reached consensus, consolidating themes and subthemes to best represent the participants’ experiences. The final theme definitions are as follows:
- Reconfiguring Work Tasks: includes shifting repetitive tasks to AI, freeing up time for other activities;
- Improving Efficiency and Quality of Work: represents the benefits of AI, including increased productivity and reduced errors;
- Psychological and Adaptation Challenges: relates to insecurities and resistance regarding the loss of functions to AI;
- Need for Qualification and Knowledge in AI: emphasises the demand for specific skills to maximise the use of AI tools; and
- Dehumanisation of Interactions with AI: reflects concerns about the possible loss of empathy and human interactions.
The results of the thematic analysis are presented in Table 1, where the five key themes are identified, which address the research question of understanding how job descriptions and practices change when the company integrates AI into its operations.

4.1. Reconfiguration of Job Tasks

Integrating AI into organisational processes has led to significant changes in the allocation and execution of job tasks [9]. This reconfiguration is primarily characterised by automating repetitive, mechanical processes, enabling employees to redirect their efforts toward more analytical and strategic activities [8], as emphasised by the interviewees:
“[…] Before, we spent hours filtering and sorting data manually; now, the robot completes that process instantly.”
(E02)
“[…] tasks that were extremely repetitive, such as manually entering data, are now automated by robots, […]”
(E06)
“[…] The AI system helps reduce redundant work, allowing us to allocate resources to more valuable processes.”
(E08)
“[…] the introduction of AI has significantly reduced the mechanical workload on my team. […]”
(E10)
Similarly, some interviewees described that with the implementation of RPA:
“[…] We used to check thousands of transactions manually, but now, the system processes them, and we only intervene in exceptions.”
(E02)
“[…] Previously, we had to go through all cases one by one, but now the system flags the ones that require manual review, making our work much faster.”
(E04)
“[…] replaced routine, simple tasks that were part of everyday life.”
(E05)
“[…] manual tasks like data entry and validation were reduced.”
(E07)
This shift in job responsibilities exemplifies what Kalateh et al. [34] refer to as the Feeling Smart Industry, where AI allows workers to move from repetitive manual tasks toward strategic decision-making and collaboration. This integration demonstrates the shift predicted in the Feeling Economy, where AI takes over thinking tasks, leaving humans to focus on interpersonal and creative roles [8,9].
The reconfiguration of tasks is further supported by the modularisation of workflows, where AI systems take over segmented portions of work while employees oversee more complex operations [11]. This modularisation framework explains how AI divides workflows into distinct components, allowing for an optimised combination of human and machine contributions.
The integration of AI into organisational processes has shifted job tasks by automating repetitive and mechanical functions, enabling employees to focus on more strategic activities [9]. As noted by the interviewees:
“[…] Now that AI handles repetitive validation tasks, we have more time to refine processes and improve decision-making criteria.”
(E02)
“[…] Instead of performing manual verifications, we can now focus on investigating discrepancies and improving system efficiency.”
(E07)
“[…] AI not only alleviates monotony but also enhances productivity by reallocating resources to more meaningful work.”
(E08)
“[…] The introduction of AI significantly reduced the mechanical workload of my team, enabling us to dedicate more time to tasks requiring critical thinking and creativity.”
(E10)
This shift in job tasks aligns with empirical findings from von Richthofen et al. [35], who studied AI integration in German companies and identified three key transformations: (1) the transition from repetitive manual work to tasks requiring reasoning and empathy, (2) the emergence of novel responsibilities, and (3) the evolution of required skills and qualifications.
However, these transitions are not without challenges. Employees are often required to assist in training AI systems during the initial implementation phase. Interviewees described this role as follows:
“[…] We continuously adjust AI parameters and refine decision rules to improve system efficiency.”
(E02)
“[…] The AI requires frequent supervision, especially in the first months, to ensure that its learning process aligns with operational needs.”
(E05)
“[…] feeding the system data so that it can build memory and gradually operate independently.”
(E09)
This result aligns with findings from Jaiswal et al. [36], who emphasise that AI does not replace jobs outright but shifts the skill requirements, necessitating continuous learning and adaptation. Additionally, Vorobeva et al. [6] argue that framing AI as an augmentative tool rather than a replacement increases acceptance among employees, fostering collaboration rather than resistance.
In summary, the reconfiguration of job tasks demonstrates the transformative power of AI. It enables a shift from operational execution to strategic oversight while reshaping the workforce’s role in achieving organisational goals by allowing employees to engage in tasks that add greater value to the organisation. This transition aligns with the broader theoretical framework of the Feeling Economy [7,8], which predicts an increasing focus on emotional and strategic tasks as AI capabilities evolve.

4.2. Enhancement of Efficiency and Work Quality

AI adoption is strongly linked to improvements in efficiency and work quality across organisations. By automating repetitive processes and enhancing decision-making accuracy, AI enables organisations to deliver faster and more reliable outcomes [8]. Studies indicate that AI reduces human error by improving data processing accuracy and optimising workflows, ensuring more consistent and efficient operations [9]. Interviewees demonstrated the technology’s ability to minimise errors:
“[…] Now, mistakes are rare because AI validates data before we even review it.”
(E02)
“[…] The implementation of AI drastically reduced the errors in documentation processing, making our workflow much smoother.”
(E04)
“[…] automated decisions have increased the accuracy of routine tasks, […]”
(E06)
“[…] The speed with which we can complete projects today is remarkable thanks to intelligent systems.”
(E08)
These findings align with previous research emphasising that AI excels in handling mechanical intelligence tasks, which are characterised by standardisation and repetition, thereby reducing human error and improving precision [9,35].
These efficiencies have dual benefits, positively impacting both the organisation and its employees. By handling routine processes, AI liberates workers from monotonous, time-intensive tasks, allowing them to concentrate on higher-order contributions. This result reflects Rust and Huang’s [8] observation that AI enables humans to redirect their efforts toward activities requiring critical thinking, creativity, and emotional intelligence.
Interviewees emphasised how automation supports consistent, high-quality output:
“[…] From a management perspective, AI has allowed us to streamline key processes without compromising service quality.”
(E01)
“[…] Before, we would spend hours verifying cases manually. Now, AI does the first check, and we just confirm the critical points.”
(E05)
“[…] AI allows us to complete tasks more efficiently, reducing backlog and enhancing overall service quality.”
(E07)
“[…] With AI, we can maintain higher quality standards within shorter deadlines, […]”
(E10)
Employees gain additional time for detailed analysis and creative problem-solving, fostering a more engaged and productive workforce. This shift echoes Huang and Rust’s [7] concept of the Feeling Economy, where AI handles mechanical and analytical tasks, allowing humans to focus on empathetic and value-driven roles. Moreover, the benefits of enhanced efficiency extend to customer interactions. Research suggests that automating routine customer service tasks enhances response times and reduces customer frustration, contributing to greater satisfaction and loyalty [36]. Interviewees highlighted that minimising errors through automation reduces customer complaints and increases satisfaction:
“[…] Customers appreciate the speed of automated responses, but they still expect human support for complex issues.”
(E03)
“[…] If we have a machine that helps us to be more efficient, faster, and more effective in our processes […], this will generate […] a greater level of satisfaction among our customers.”
(E05)
“[…] AI now resolves simple customer inquiries instantly, reducing wait times and frustration.”
(E07)
This improvement not only strengthens customer relationships but also contributes to organisational reputation and loyalty. Huang et al. [9] emphasise that integrating AI into customer-facing roles can enhance perceived service quality, driving trust and long-term engagement. The transition to AI-supported service models is especially evident in industries such as finance, healthcare, and retail, as well as in the company studied, where real-time responses and accuracy are crucial [34].
Despite these benefits, achieving optimal efficiency requires continuous refinement of AI systems. Research suggests that while AI systems improve accuracy and speed, their long-term effectiveness depends on their adaptability and on periodic adjustments to decision-making algorithms [11]. Interviewees emphasised the importance of revising AI parameters:
“[…] We continuously analyse AI outputs to fine-tune the system, ensuring it remains aligned with real-world requirements.”
(E02)
“[…] AI is effective, but it needs human intervention to optimise and adjust to evolving conditions.”
(E06)
“[…] initial rules often require adjustments to ensure optimal functionality.”
(E07)
This iterative process reflects the modularisation framework described by Tschang and Almirall [11], where workflows are continuously optimised to align AI functionalities with organisational objectives. Such adjustments ensure that AI systems deliver measurable gains in productivity and quality while maintaining alignment with strategic goals.

4.3. Psychological Challenges and Adaptation

The integration of AI into the workplace often triggers psychological challenges for employees, including fears of job displacement and resistance to change [6]. Research indicates that technological unemployment has been a concern since the Industrial Revolution, with ongoing debates about whether AI will replace human labour or create new job opportunities [37]. Participants frequently expressed concerns about the potential for AI to replace human roles. For example, interviewees reflected:
“[…] At first, there was widespread fear that robots would take our jobs.”
(E07)
“[…] Acceptance of technological changes was difficult for many on the team.”
(E09)
“[…] AI is good for the company, but I hope it does not harm workers.”
(E09)
“[…] It is scary. AI keeps taking jobs, fewer people are needed now, and we have already lost colleagues due to automation.”
(E09)
These concerns align with Rust and Huang’s [8] observation that as AI increasingly handles mechanical and thinking tasks, employees often fear becoming obsolete. This result reflects a broader sentiment described by Huang et al. [9], who note that psychological resistance is a common obstacle in the transition to AI-driven workplaces.
Adaptation to AI requires a significant shift in perspective. Employees must learn to view AI as a collaborative tool rather than a threat to their professional identity. Interviewees explained:
“[…] At the beginning, people thought robots would replace us, but now we see they are here to help.”
(E02)
“[…] People feared AI would replace them, but we have realised that it actually assists us in routine tasks.”
(E06)
“[…] The initial months were challenging, with resistance to the idea of working alongside AI systems.”
(E10)
This resistance underscores the need for organisational strategies to build trust and foster acceptance among the workforce. Vorobeva et al. [6] emphasise that framing AI as augmentation rather than substitution improves employee receptiveness by enhancing the perceived value and usability of the technology. Studies also indicate that leadership plays a crucial role in managing AI-related change, with support from management being essential for successful implementation [35].
Effective communication and inclusive implementation processes are critical to overcoming psychological barriers. Interviewees emphasised the importance of early involvement, stating:
“[…] AI implementation must be accompanied by training; otherwise, employees will reject it.”
(E04)
“[…] Demonstrating the benefits of technology to employees reduces resistance.”
(E05)
“[…] We need to reassure employees that AI is not here to eliminate jobs but to improve our work.”
(E06)
This result aligns with research on AI adaptation, which emphasises the necessity of transparent communication and training programmes to help employees transition smoothly [36]. Studies suggest that companies should focus on reskilling workers to enhance their analytical and interpersonal skills, which will become increasingly valuable as AI assumes routine tasks [34].
Additionally, AI’s impact on professional roles requires employees to redefine their value within the organisation. As tasks become increasingly automated, workers may need to focus on areas where human insight remains irreplaceable, such as critical thinking, interpersonal communication, and problem-solving. Research supports this shift, noting that the future of work will likely prioritise “feeling intelligence” skills, including emotional intelligence and adaptability, as AI takes on mechanical and cognitive functions [5,8,9].
This shift highlights the importance of leadership in guiding employees through the transition and ensuring their continued engagement and contribution. Leaders play a crucial role in fostering a culture of innovation, encouraging reskilling initiatives, and providing employees with the tools to thrive alongside AI systems. This process echoes Tschang and Almirall’s [11] modularisation framework, which suggests that successful AI integration relies on reimagining workflows while maintaining a human-centric approach. Additionally, studies indicate that while AI can enhance workplace efficiency, its implementation must be handled with care to prevent employee disengagement and resistance [38].
By adopting strategic approaches to AI integration, organisations can mitigate psychological resistance and foster a collaborative environment where employees view AI as a valuable tool rather than a disruptive force.

4.4. Need for Skills and AI Competence

The adoption of AI in the workplace necessitates new skills and competencies for employees to interact with and leverage these technologies effectively [35]. As interviewees observed:
“[…] Fundamentally, it is about having computer skills, that is, people having the appetite to use these tools.”
(E01)
“[…] Working with AI requires both technical knowledge and the ability to adapt to new digital processes quickly.”
(E04)
“[…] We had to learn how to configure and supervise the use of AI systems, […]”
(E06)
The ability to understand and manage AI tools has become a critical competency for modern workers. This result aligns with Jaiswal et al. [36], who found that as AI takes over mechanical intelligence tasks, employees must enhance their analytical and problem-solving abilities to remain competitive in the workforce.
Training and skill development are central to successful AI implementation [35]. Interviewees emphasised the role of structured training, stating:
“[…] Training is essential; otherwise, employees struggle to see the advantages of AI and resist using it effectively.”
(E05)
“[…] There has to be much training, because people are not used to working with artificial intelligence.”
(E05)
“[…] Specific training programmes have been instrumental in helping the team adapt to technological tools.”
(E08)
These programmes equip employees with the knowledge needed to maximise the benefits of AI systems while reducing the anxiety associated with learning new technologies. Vorobeva et al. [6] note that training and transparent communication are critical to increasing acceptance of AI by demonstrating its potential as a collaborative tool. Research by Jaiswal et al. [36] highlights that organisations must invest in upskilling initiatives to prepare employees for evolving AI-driven roles, particularly focusing on intuitive and empathetic skills.
In addition to technical proficiency, soft skills such as adaptability, critical thinking, and effective communication are becoming increasingly valuable. Interviewees illustrated the importance of an agile mindset:
“[…] Employees who are open-minded and flexible adapt better to AI-driven changes in the workplace.”
(E03)
“[…] I think there should be training, essential, in relation to AI, because I think that at the moment, we do not have specific training on AI.”
(E03)
“[…] Learning how to use the new tools was essential to maintaining productivity, […]”
(E09)
These skills reflect the human capabilities highlighted by Huang and Rust [7] as essential for success in the Feeling Economy, where employees must excel in areas requiring empathy, creativity, and interpersonal communication.
Employees must be prepared to continuously evolve alongside technological advancements to remain relevant in an AI-driven environment. This dynamic reflects Rust and Huang [8], who assert that organisations must encourage the development of “feeling” and “thinking” intelligences to ensure a human-centric approach to AI integration.
Organisations play a crucial role in fostering this growth by creating opportunities for continuous learning [35,36]. Interviewees stressed the importance of ongoing education, stating:
“[…] Training ensures that employees are equipped to use AI effectively and benefit from the technology.”
(E05)
“[…] I think that for a person to progress in the company, it starts with having more training and also having a little understanding of the introduction of AI at work.”
(E09)
“[…] Companies should invest in AI education to ensure that employees feel confident and empowered, rather than threatened, by new technologies.”
(E09)
This perspective aligns with Tschang and Almirall’s [11] modularisation framework, which suggests that employees should be empowered to manage increasingly complex workflows that integrate human and machine capabilities. By investing in their workforce, companies can build a skilled and confident team capable of navigating the complexities of AI integration. Von Richthofen et al. [35] also emphasise that AI is reshaping knowledge work by requiring employees to engage in reasoning- and empathy-driven tasks rather than routine, repetitive duties.
In summary, the need for skills and AI competence underscores the transformative impact of technology on workforce development. Through training and a focus on both technical and soft skills, organisations can ensure their employees remain adaptable, engaged, and equipped to maximise the potential of AI-driven systems.

4.5. Dehumanisation of Interactions with AI

One of the most nuanced challenges of AI integration is the risk of dehumanising workplace interactions [35]. Participants expressed concerns about the inability of AI systems to replicate the empathy and interpersonal connection provided by human employees. These concerns align with Huang and Rust [7], who argue that while AI has made significant progress in mechanical and analytical intelligence, it struggles with feeling intelligence: the ability to understand, respond to, and express emotions effectively. The following examples highlight the limitations of AI in contexts requiring emotional intelligence:
“[…] I do not think it will be the same. It will be colder […] while we are people full of feelings.”
(E03)
“[…] There are times when people just want to talk to another human, even if it is a simple request.”
(E03)
“[…] The robot will not be able to, perhaps, make the answer I want […] it will not have empathy with my case.”
(E04)
“[…] When a customer is emotional, the AI does not react appropriately; it just provides a standard response.”
(E04)
“[…] Although efficient, AI cannot convey human care.”
(E06)
“[…] Automated interactions are often perceived as cold and impersonal.”
(E10)
This issue is particularly evident in customer-facing roles, where personal connection is critical to service quality [39]. AI-driven customer service systems, while efficient, often fail to recognise context-dependent emotional cues, leading to frustration among users [8]. This result aligns with the concerns raised by interviewees:
“[…] The absence of empathy in automated systems can lead to dissatisfaction, especially in customer-facing roles.”
(E07)
“[…] Customers often get frustrated because they expect to be understood, not just given a generic response.”
(E08)
“[…] People sometimes just want reassurance from a human being, not a machine following a script.”
(E08)
Despite these limitations, AI holds the potential to improve in this area. Research suggests that emotion-recognition algorithms, coupled with natural language processing advancements, could enable AI to detect and respond to human emotions more effectively [9]. However, current implementations remain inadequate in replicating human-like empathy, making human oversight essential in AI-mediated interactions [35]. Interviewees speculated on potential improvements:
“[…] Maybe one day AI will be able to recognise frustration and adjust its responses, but right now, it just follows pre-set rules.”
(E04)
“[…] In the future, AI might detect stress levels in a person’s voice and adapt its responses accordingly.”
(E05)
“[…] If developed further, AI could learn to adjust its tone based on emotional cues, bridging the gap in interpersonal communication.”
(E09)
This perspective aligns with Rust and Huang [8], who argue that advancements in AI could eventually extend its capabilities into emotional intelligence, paving the way for systems that better emulate human understanding. However, participants widely acknowledged that human oversight would remain essential to ensure that AI systems meet the expectations of empathy and understanding.
The dehumanisation concern extends beyond customer interactions to internal team dynamics [39]. Employees value the emotional support and camaraderie provided by their colleagues and leaders, which cannot be replaced by AI systems [6]. Interviewees highlighted the importance of preserving a human-centric workplace culture, stating:
“[…] AI may optimise workflows, but it cannot replace the human bonds that define a positive work culture.”
(E02)
“[…] Balancing AI efficiency with human values is crucial for fostering a supportive workplace environment.”
(E05)
“[…] There is a difference between getting the job done and feeling valued as a person. AI cannot replace that sense of belonging.”
(E08)
This result reflects the broader theoretical framework of the Feeling Economy, as discussed by Huang and Rust [7], which emphasises the growing importance of human emotional intelligence in an AI-driven world. By maintaining a balance between efficiency and empathy, organisations can ensure that AI enhances, rather than detracts from, meaningful interactions in the workplace.
In summary, while AI offers remarkable efficiency and precision, its limitations in replicating human empathy and emotional connection highlight the need for a complementary relationship between humans and machines. Organisations must strategically integrate AI to enhance productivity while preserving the human elements that define meaningful workplace and customer interactions.

5. Discussion

5.1. Theoretical Implications

This study contributes to the evolving theoretical landscape on artificial intelligence and organisational transformation, particularly through the lens of the Feeling Economy [7,8]. Our findings empirically support the theory’s proposition that AI displaces mechanical and cognitive tasks, thereby reconfiguring job roles and amplifying the importance of emotional and social intelligence in the workplace. The empirical evidence from the case study reinforces this shift, as employees increasingly engage in tasks requiring empathy, contextual judgement, and interpersonal interaction. This reallocation aligns with the broader theoretical discourse suggesting that human comparative advantage is gradually concentrating in domains that AI cannot easily replicate [5,38].
Moreover, this study extends the Feeling Economy theory by integrating the modularisation perspective advanced by Tschang and Almirall [11], who argue that AI drives the decomposition of work into discrete, automatable modules. The findings illustrate how AI applications in infrastructure management support this modularisation by handling clearly defined, interdependent tasks while humans focus on managing exceptions and emotionally charged situations. This result suggests a symbiotic human–AI work dynamic, which adds depth to the concept of hybrid intelligence systems [35].
The results also suggest that the Feeling Economy should evolve to account not only for the shift in task types but also for the cultural and affective dimensions of work in technologically mediated environments. Emotional labour, trust-building, and team cohesion emerge as critical theoretical constructs in AI-augmented workplaces. Further integration of psychological and sociological theories, such as affective organisational behaviour and sociotechnical systems theory [14,34], may enrich the explanatory power of the Feeling Economy framework.
To push the theoretical boundaries further, this study engages with recent critical literature that interrogates the epistemic and ethical implications of AI systems. In this context, Shin [12] offers a compelling perspective on the concept of “artificial misinformation,” arguing that algorithmic systems do not merely automate tasks but actively participate in shaping knowledge, perception, and organisational understanding. This perspective is especially relevant in contexts where algorithmic decisions—though efficient—can inadvertently distort truths, amplify confirmation bias, or erode trust. Shin [12] emphasises human–algorithm interaction as a site of misinformation production and invites us to consider how AI’s representations of reality are co-produced with users and shaped by system design, data infrastructure, and broader power dynamics. These insights challenge the often-implicit assumption that AI is a neutral executor and open avenues for reconceptualising human–machine collaboration as inherently sociotechnical and ideologically mediated [12].
Additionally, Shin’s [13] work on “Debiasing AI” provides a theoretically rich and ethically grounded framework to address the structural inequities encoded in AI design and deployment. Shin [13] advances a pluralistic and interdisciplinary approach that confronts AI’s ontological and epistemological biases and reimagines algorithmic governance through a sustainability lens. By drawing on communication theory, cognitive science, and moral philosophy, the framework underscores that effective AI integration must be grounded not only in technical accuracy but also in cultural sensitivity, social justice, and contextual fairness [13]. Rather than advocating a purely technical fix, Shin [13] demonstrates the necessity of iterative, adaptive, and inclusive strategies to mitigate bias and ensure that AI systems are robust, trustworthy, and human-centred. These contributions enrich the Feeling Economy framework by highlighting that the future of human value creation lies not only in emotional labour, but also in cultivating ethically resilient ecosystems of human–AI interaction.
While our findings align with previous theoretical propositions, particularly the Feeling Economy framework [10], they do more than merely confirm existing models. They extend and nuance them in several important ways. First, the results empirically demonstrate that emotional tasks are not exclusive to sectors traditionally perceived as emotion intensive. Although infrastructure management is not typically an emotion-intensive domain, the findings in Section 4.5 reveal how emotional labour manifests in subtle but significant ways—such as user interaction, decision mediation, and internal coordination under digital transformation. These observations suggest that the Feeling Economy may have broader applicability beyond service or care-related professions, requiring an expansion of its scope to include emotional microtasks embedded in technical roles.
Second, the study offers a more dynamic interpretation of task modularisation. As illustrated in Section 4.1, our data show how modularisation occurs unevenly across departments and functions, producing hybrid configurations where emotional, cognitive, and mechanical elements coexist and shift in response to AI integration. This result challenges the assumption of a linear task migration and instead supports a view of modularisation as a negotiated and context-dependent process.
Third, the study enriches the debate on epistemic risks by highlighting how professional accountability is being redefined not simply through the use of opaque AI systems, but through the redistribution of interpretive responsibility among human actors. Engineers in the case study expressed the need to “contextualise” algorithmic outputs, effectively becoming curators of machine-generated knowledge. This approach extends Shin’s [12] warnings about algorithmic opacity by showing how epistemic vigilance must be operationalised within real daily work routines.
Taken together, these contributions illustrate that our study not only empirically grounds but also theoretically expands existing frameworks. It invites a rethinking of how emotional labour, modularisation, and epistemic integrity are practiced and shaped in contemporary workplaces augmented by AI.
Moreover, while this study focuses on AI-driven transformations, it is important to acknowledge the historical continuity between previous waves of automation and current developments in artificial intelligence. The concerns expressed by E09—“It is scary. AI keeps taking jobs, fewer people are needed now, and we have already lost colleagues due to automation.”—echo long-standing anxieties associated with technological displacement, from mechanised assembly lines to robotic process automation. However, AI differs in that it increasingly targets cognitive and emotional labour, rather than merely mechanical functions. This distinction is essential for refining theoretical frameworks such as the Feeling Economy and modularisation, which must account for not only the novel capabilities of AI but also the layered history of task specialisation and labour restructuring.

5.2. Managerial Implications

From a managerial standpoint, the integration of AI requires a holistic rethinking of leadership strategies, human resource development, and organisational design. This study underscores the need for managers to anticipate and mitigate the psychological impacts of automation, including fear of redundancy, resistance to change, and uncertainty about future roles [6,37]. Effective AI integration must be framed as augmentation rather than substitution to maintain employee engagement and psychological safety [8].
Managers should prioritise the development of reskilling and upskilling programmes that encompass both technical AI literacy and soft skills, such as adaptability, emotional regulation, and interpersonal communication. Empirical evidence indicates that employees equipped with these competencies are more resilient and collaborative in AI-mediated environments [36]. Training must be proactive and inclusive, ensuring that all employees have access to continuous learning opportunities tailored to evolving job profiles.
The study also suggests that performance metrics should be re-evaluated. Traditional key performance indicators may no longer capture the qualitative contributions of employees in AI-augmented workflows, particularly those related to empathy, innovation, and decision-making in ambiguous contexts [21]. Managers must develop new evaluative tools that reflect the emotional and cognitive demands of modern roles.
To responsibly navigate the implementation of AI in the workplace, managerial practices must internalise the lessons from Shin [12,13]. From Shin’s [12] perspective on artificial misinformation, managers should be acutely aware of how algorithmic systems may propagate distorted information, whether through feedback loops, opaque logic, or misrepresentations of employee performance. Human-in-the-loop protocols and transparency tools must be adopted, not merely as safeguards, but as structural components of trustworthy AI governance [12]. Similarly, Shin’s [13] framework for debiasing AI urges managers to move beyond compliance checklists toward a culture of ethical reflection and participatory design. This approach involves embedding fairness metrics, diversity-sensitive architectures, and ethical nudges into algorithmic workflows while remaining vigilant about the unintended consequences of even well-intentioned interventions [13]. In sum, managerial engagement with AI must transcend operational efficiency to become an exercise in moral stewardship, reinforcing the organisation’s accountability, inclusivity, and epistemic integrity.

5.3. Divergent Perceptions and Nuanced Contributions

Although the five themes identified in this study provide a robust analytical foundation, some divergent perspectives from participants warrant further attention. While these opinions did not occur with sufficient frequency or conceptual consistency to be elevated to the level of standalone themes or subthemes, they nonetheless introduce critical nuance into the analysis. For instance, although the Feeling Economy framework of Huang and Rust [10] suggests a migration of human labour toward emotional and interpersonal tasks, some respondents questioned whether this shift translates into increased value or autonomy. E02 remarked, “I don’t feel more productive; I feel more monitored,” while E09 noted, “It doesn’t free up my time. It adds more work, because I have to validate everything.” This scepticism suggests that, in some cases, modularisation of tasks may increase complexity rather than reduce cognitive or emotional burden, complicating the optimistic trajectory posited by models of task augmentation [11].
Likewise, while the epistemic risk literature [12,13] emphasises the dangers of algorithmic opacity, some participants expressed more pragmatic, trust-based approaches. E06 stated, “AI has helped streamline reporting and decision-making processes, reducing time spent on manual data input,” suggesting a relatively high level of trust in AI-driven systems. These varied perceptions point to a contested terrain of digital transition in which workers are not passive recipients of automation but active interpreters, sometimes resisting or reconfiguring AI’s role in their workflows.
These contradictory insights do not discredit the main findings; instead, they highlight the complexity and multiplicity of experiences with AI. They signal that job reconfiguration is not uniformly empowering nor disempowering, but shaped by situated negotiations, task-specific dynamics, and individual adaptation strategies. Incorporating these nuances strengthens the study’s theoretical contribution by advancing a more pluralistic and grounded understanding of the Feeling Economy and the modularisation of labour. Future research should explicitly investigate these contested interpretations, particularly with respect to emotional labour, perceptions of autonomy, and epistemic trust in AI systems.

5.4. Limitations and Future Research

While this study offers significant insights, it is limited by its reliance on a single case study within the European infrastructure sector. Although the findings offer deep contextual understanding, they may not generalise to other sectors or geographical regions where AI adoption, work cultures, and labour regulations differ substantially. Future research should incorporate multi-case and cross-national comparative studies to examine sectoral and cultural variations in AI integration.
Another limitation concerns the study’s cross-sectional nature. Longitudinal studies are necessary to assess the long-term impacts of AI on job satisfaction, career progression, organisational culture, and employee well-being. Tracking these dimensions over time will provide a more comprehensive understanding of the sustainability of AI-driven job transformations [35].
One methodological limitation of this study concerns the potential for survivorship bias, as the data collection did not include individuals who may have exited the organisation due to automation-related restructuring. While no formal dismissals occurred during the implementation of AI technologies in the company under study, task reallocation and role transformation did affect certain employees. Two of the interviewees reported having changed roles in response to technological shifts, offering partial insights into the organisational handling of such transitions. However, the absence of direct perspectives from displaced workers constrains the study’s ability to fully capture the impact of AI on employment security. Future research would benefit from including those who experienced job loss or contract termination, thereby enriching the understanding of the broader socio-economic consequences of job reconfiguration resulting from AI integration.
Although the five themes identified in the findings are well supported by recurring patterns in the data, a few interview excerpts revealed opinions that were not aligned with the dominant narratives. These views did not recur across the sample and therefore did not form new themes or subthemes. However, they do not undermine the thematic structure of the analysis. Instead, they introduce nuances that add complexity and suggest avenues for future research, as discussed in Section 5.3.
Further research is also needed to investigate the role of emotional intelligence in AI-mediated work. As AI encroaches upon increasingly complex tasks, the boundary between cognitive and emotional labour becomes blurred. Future theoretical and empirical work should explore how feeling intelligence can be systematically developed, evaluated, and integrated into human resource practices.
Additionally, interdisciplinary approaches are encouraged to bridge technological, ethical, and humanistic perspectives. Research that combines insights from AI ethics, organisational psychology, and sociology can inform more inclusive and responsible AI adoption strategies. For example, exploring how AI systems impact diversity, equity, and inclusion in the workplace could lead to important advancements in fair algorithm design and usage [11,39].
The incorporation of insights from Shin [12,13] also opens fruitful pathways for future scholarship. Researchers should explore how algorithmic misinformation affects organisational knowledge production and employee perception, especially in sectors with high informational asymmetry [12]. Moreover, Shin [12] emphasises cultural and contextual adaptability, offering a roadmap for evaluating AI’s real-world use across varied geopolitical and institutional environments. Future work could build on Shin’s [13] actionable strategies for mitigating bias—such as the deployment of diversity-aware system architectures, nudges, and heuristics—and examine their actual effectiveness in promoting fairness. Such inquiry would enhance both scholarly and applied understanding of how ethical AI can be designed, implemented, and governed in a manner that genuinely respects human dignity and social justice [13].
Ultimately, a richer research agenda must treat AI not only as a technical artefact but as a social actor entangled in networks of meaning, power, and responsibility. Addressing algorithmic bias, misinformation, and human–machine trust is therefore not peripheral but central to building resilient, human-centred organisations in the AI era.

Author Contributions

Conceptualization, P.O. and J.M.S.C.; methodology, P.O., J.M.S.C. and S.F.; investigation, P.O.; formal analysis, P.O. and J.M.S.C.; writing—original draft preparation, P.O. and J.M.S.C.; writing—review and editing, J.M.S.C. and S.F.; funding acquisition, J.M.S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work is funded by national funds through FCT—Fundação para a Ciência e a Tecnologia, I.P., under project UID/05105: REMIT—Investigação em Economia, Gestão e Tecnologias da Informação.

Institutional Review Board Statement

This study involved interviews with adult professionals in their workplace context. The Ethics Committee of Universidade Portucalense reviewed the project and concluded that specific ethical approval was not required, given that the participants were not vulnerable individuals and the study did not involve sensitive personal data or interventions.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. The study was conducted in accordance with the Declaration of Helsinki. The Ethics Committee of the authors’ affiliated institution reviewed the project and determined that it raised no ethical issues beyond the requirement of informed consent, which the authors obtained from all participants.

Data Availability Statement

The data that support the findings of this study are not publicly available due to confidentiality agreements with the participating organisation and to protect the anonymity of interviewees. Access to the data is therefore restricted but may be available from the corresponding author upon reasonable request and subject to ethical approval.

Acknowledgments

The authors are grateful for the support from all interview participants who provided their valuable time to enable this research. Moreover, the authors appreciate the time and effort that the three anonymous reviewers and the editor invested in providing feedback and valuable improvements to the paper.

Conflicts of Interest

The authors declare no conflicts of interest. There is no financial/personal interest or belief that impacts the authors’ objectivity in the creation of this work. There are also no actual or potential competing interests in the creation of this work.

Abbreviations

The following abbreviations were used in this manuscript:
AI: Artificial intelligence
DAS: Distributed Acoustic Sensing
DTS: Distributed Temperature Sensing
ALPR: Automatic Licence Plate Reading
RPA: Robotic Process Automation

Appendix A. Interview Guide Questions

[English version]
  • What is your role (profession) within the company?
  • Have your responsibilities (tasks) changed following the introduction of AI?
  • If so, what were your previous responsibilities before AI adoption? What kind of tasks did you perform?
  • How would you characterise these tasks? Were they mechanical, cognitive, or emotional? Could you provide examples?
  • Could you describe the main tasks that have been integrated into AI systems, which were previously performed by you?
  • As a result of AI implementation, have you taken on new responsibilities or tasks? If so, what are they?
  • In your role, do you perceive AI implementation in the company as positive or negative? Why?
  • Overall, do you consider AI implementation in the company to be positive or negative? Why?
  • With AI integration in the company, what skills do you think are necessary for employees to progress?
  • Do you believe that interpersonal communication (e.g., active listening, defending one’s point of view, negotiation) can be carried out with the same quality through AI?
  • Have your interactions with subordinates changed in any way after AI was introduced?
  • Do you consider the division of work between humans and AI to be beneficial? Why?
  • What other relevant aspects, not previously mentioned, do you consider important regarding the distribution of work between humans and AI?

Appendix B. AI Impact Table

Participant | Perception of AI | Task Change | AI Influence
E01 | Positive | Yes | Indirect
E02 | Negative | Yes | Direct
E03 | Positive | Yes | Direct
E04 | Positive | Yes | Direct
E05 | Positive | Yes | Direct
E06 | Positive | Yes | Direct
E07 | Positive | Yes | Direct
E08 | Positive | Yes | Direct
E09 | Negative | Yes | Direct
E10 | Positive | Yes | Direct

References

  1. Butler, D. A world where everyone has a robot: Why 2040 could blow your mind. Nature 2016, 530, 398–401. [Google Scholar] [CrossRef]
  2. Davenport, T.H.; Kirby, J. Just how smart are smart machines? MIT Sloan Manag. Rev. 2016, 57, 21. [Google Scholar]
  3. Belk, R.W.; Belanche, D.; Flavián, C. Key concepts in artificial intelligence and technologies 4.0 in services. Serv. Bus. 2023, 17, 1–9. [Google Scholar] [CrossRef]
  4. Holm, J.R.; Lorenz, E. The impact of artificial intelligence on skills at work in Denmark. New Technol. Work Employ. 2022, 37, 79–101. [Google Scholar] [CrossRef]
  5. Oliveira, P.M.A.; Carvalho, J.M.S.; Faria, S. Feeling Economy, Artificial Intelligence, and Future Jobs—A Systematic Literature Review. Interciencia 2024, 49, 2–35. [Google Scholar] [CrossRef]
  6. Vorobeva, D.; El Fassi, Y.; Pinto, D.C.; Hildebrand, D.; Herter, M.M.; Mattila, A.S. Thinking skills don’t protect service workers from replacement by artificial intelligence. J. Serv. Res. 2022, 25, 601–613. [Google Scholar] [CrossRef]
  7. Huang, M.-H.; Rust, R.T. Artificial intelligence in service. J. Serv. Res. 2018, 21, 155–172. [Google Scholar] [CrossRef]
  8. Rust, R.T.; Huang, M.-H. The Feeling Economy, 1st ed.; Palgrave Macmillan: Cham, Switzerland, 2021. [Google Scholar]
  9. Huang, M.-H.; Rust, R.; Maksimovic, V. The feeling economy: Managing in the next generation of artificial intelligence (AI). Calif. Manag. Rev. 2019, 61, 43–65. [Google Scholar] [CrossRef]
  10. Huang, M.-H.; Rust, R.T. Engaged to a robot? The role of AI in service. J. Serv. Res. 2021, 24, 30–41. [Google Scholar] [CrossRef]
  11. Tschang, F.T.; Almirall, E. Artificial intelligence as augmenting automation: Implications for employment. Acad. Manag. Perspect. 2021, 35, 642–659. [Google Scholar] [CrossRef]
  12. Shin, D. Artificial Misinformation, 1st ed.; Palgrave Macmillan: Cham, Switzerland, 2024. [Google Scholar]
  13. Shin, D. Debiasing AI: Rethinking the Intersection of Innovation and Sustainability, 1st ed.; Routledge: London, UK, 2025. [Google Scholar]
  14. Dolev, N.; Itzkovich, Y. In the AI era soft skills are the new hard skills. In Artificial Intelligence and its Impact on Business; Information Age Publishing: Charlotte, NC, USA, 2020; p. 55. [Google Scholar]
  15. Russell, S.J.; Norvig, P. Artificial Intelligence a Modern Approach, 3rd ed.; Pearson Education, Inc.: London, UK, 2010. [Google Scholar]
  16. Leidner, D.S. Cognitive Reasoning for Compliant Robot Manipulation, 1st ed.; Springer: Cham, Switzerland, 2019. [Google Scholar]
  17. Ibarz, J.; Tan, J.; Finn, C.; Kalakrishnan, M.; Pastor, P.; Levine, S. How to train your robot with deep reinforcement learning: Lessons we have learned. Int. J. Robot. Res. 2021, 40, 698–721. [Google Scholar] [CrossRef]
  18. Richards, L.E.; Matuszek, C. Learning to understand non-categorical physical language for human robot interactions. In UMBC Student Collection, Proceedings of the RSS 2019 Workshop on AI and Its Alternatives in Assistive and Collaborative Robotics (RSS: AI+ACR), Freiburg, Germany, 23 June 2019; University of Maryland, Baltimore County: Baltimore, MD, USA, 2019. [Google Scholar]
  19. Huang, M.-H.; Rust, R.T. A strategic framework for artificial intelligence in marketing. J. Acad. Mark. Sci. 2021, 49, 30–50. [Google Scholar] [CrossRef]
  20. Asian Development Bank. Asian Development Outlook (ADO) 2018: How Technology Affects Jobs; Working Papers from eSocialSciences; Asian Development Bank: Mandaluyong, Philippines, 2018. [Google Scholar]
  21. Lagorio, A.; Cimini, C.; Gaiardelli, P. Reshaping the concepts of job enrichment and job enlargement: The impacts of Lean and Industry 4.0. In Proceedings of the IFIP International Conference on Advances in Production Management Systems, Nantes, France, 31 August 2021; Springer: Cham, Switzerland, 2021; pp. 721–729. [Google Scholar]
  22. Qin, S.; Jia, N.; Luo, X.; Liao, C.; Huang, Z. Perceived fairness of human managers compared with artificial intelligence in employee performance evaluation. J. Manag. Inf. Syst. 2023, 40, 1039–1070. [Google Scholar] [CrossRef]
  23. Abdullah, R.; Fakieh, B. Health care employees’ perceptions of the use of artificial intelligence applications: Survey study. J. Med. Internet Res. 2020, 22, e17620. [Google Scholar] [CrossRef]
  24. Kelley, S. Employee perceptions of the effective adoption of AI principles. J. Bus. Ethics 2022, 178, 871–893. [Google Scholar] [CrossRef]
  25. Zhu, Y.-Q.; Corbett, J.U.; Chiu, Y.-T. Understanding employees’ responses to artificial intelligence. Organ. Dyn. 2021, 50, 100786. [Google Scholar] [CrossRef]
  26. Dyer, W.G., Jr.; Wilkins, A.L. Better stories, not better constructs, to generate better theory: A rejoinder to Eisenhardt. Acad. Manag. Rev. 1991, 16, 613–619. [Google Scholar] [CrossRef]
  27. Yin, R.K. Case Study Research and Applications, 6th ed.; Sage Publications, Inc.: Thousand Oaks, CA, USA, 2018. [Google Scholar]
  28. Eriksson, P.; Kovalainen, A. Qualitative Methods in Business Research: A Practical Guide to Social Research, 2nd ed.; SAGE Publications Ltd.: London, UK, 2015. [Google Scholar]
  29. Eisenhardt, K.M. Building theories from case study research. Acad. Manag. Rev. 1989, 14, 532–550. [Google Scholar] [CrossRef]
  30. Croucher, S.M.; Cronn-Mills, D. Understanding Communication Research Methods: A Theoretical and Practical Approach, 4th ed.; Routledge: New York, NY, USA, 2014. [Google Scholar]
  31. Tong, A.; Sainsbury, P.; Craig, J. Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. Int. J. Qual. Health Care 2007, 19, 349–357. [Google Scholar] [CrossRef]
  32. Ritchie, J.; Lewis, J.; Nicholls, C.M.; Ormston, R. (Eds.) Qualitative Research Practice: A Guide for Social Science Students and Researchers, 1st ed.; SAGE Publications Ltd.: London, UK, 2013. [Google Scholar]
  33. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
  34. Kalateh, S.; Estrada-Jimenez, L.A.; Pulikottil, T.; Hojjati, S.N.; Barata, J. Feeling smart industry. In Proceedings of the 2021 62nd International Scientific Conference on Information Technology and Management Science of Riga Technical University (ITMS), Riga, Latvia, 14 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6. [Google Scholar]
  35. von Richthofen, G.; Ogolla, S.; Send, H. Adopting AI in the context of knowledge work: Empirical insights from German organizations. Information 2022, 13, 199. [Google Scholar] [CrossRef]
  36. Jaiswal, A.; Arun, C.J.; Varma, A. Rebooting employees: Upskilling for artificial intelligence in multinational corporations. Int. J. Hum. Resour. Manag. 2022, 33, 1179–1208. [Google Scholar] [CrossRef]
  37. Acemoglu, D.; Restrepo, P. Robots and jobs: Evidence from US labor markets. J. Political Econ. 2020, 128, 2188–2244. [Google Scholar] [CrossRef]
  38. Patulny, R.; Lazarevic, N.; Smith, V. ‘Once more, with feeling,’ said the robot: AI, the end of work and the rise of emotional economies. Emot. Soc. 2020, 2, 79–97. [Google Scholar] [CrossRef]
  39. Vorobeva, D.; Pinto, D.C.; António, N.; Mattila, A.S. The augmentation effect of artificial intelligence: Can AI framing shape customer acceptance of AI-based services? Curr. Issues Tour. 2023, 27, 1551–1571. [Google Scholar] [CrossRef]
Table 1. Results of the thematic analysis. Source: authors.
Theme | Description
Reconfiguration of Job Tasks | Highlights the replacement or complementation of human tasks by AI, with a focus on automating mechanical and repetitive processes; transferring repetitive tasks to AI systems allows the optimisation of human functions.
Enhancement of Efficiency and Work Quality | Refers to increased productivity and reduced errors through AI, which is perceived as a tool that enhances operational processes.
Psychological Challenges and Adaptation | Explores employees’ concerns about task substitution by AI and the need to adapt to the technology.
Need for Skills and AI Competence | Reflects the growing demand for training and specific skills to work effectively with AI technologies.
Dehumanisation of Interactions with AI | Addresses the impact of AI on interpersonal interactions and the perceived decline of empathy and human contact in automated processes.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Oliveira, P.; Carvalho, J.M.S.; Faria, S. AI Integration in Organisational Workflows: A Case Study on Job Reconfiguration, Efficiency, and Workforce Adaptation. Information 2025, 16, 764. https://doi.org/10.3390/info16090764


