
Artificial Intelligence and Emerging Risks in Occupational Safety and Health

by Xavier Baraza 1,2,* and Joan Torrent-Sellens 1,2,*
1 Faculty of Economics and Business, Universitat Oberta de Catalunya, 08014 Barcelona, Spain
2 i2TIC-IA Lab and UOC-DIGIT, Universitat Oberta de Catalunya, 08014 Barcelona, Spain
* Authors to whom correspondence should be addressed.
Encyclopedia 2026, 6(1), 25; https://doi.org/10.3390/encyclopedia6010025
Submission received: 1 December 2025 / Revised: 9 January 2026 / Accepted: 15 January 2026 / Published: 19 January 2026
(This article belongs to the Collection Encyclopedia of Social Sciences)

Definition

Artificial intelligence (AI) refers to autonomous or semi-autonomous systems capable of interpreting data, generating inferences, and guiding decisions, thereby reshaping the foundations of work and organizational processes. Its rapid integration into productive settings gives rise to emerging risks, understood as new or evolving hazards that stem from human–machine interaction, algorithmic decision-making, and shifting sociotechnical conditions. Within occupational safety and health (OSH), these risks encompass novel cognitive, psychosocial, organizational, and ethical challenges, making it necessary to develop preventive frameworks that align technological innovation with human well-being, transparency, and responsible governance.

1. Introduction

The concept of “artificial intelligence” (AI) was introduced by John McCarthy in 1955 [1]. Since then, the term has referred to the ability of machines to replicate human cognitive processes such as reasoning, learning, and problem-solving. AI is grounded in algorithms and computational models that enable systems to perform tasks previously dependent on human intelligence, including language comprehension, visual or auditory pattern recognition, complex decision-making, and language translation [2].
AI is a dynamic concept whose definition has been debated across international organizations. The Organisation for Economic Co-operation and Development (OECD) [3] and the European Union in the AI Act [4] converge in describing AI as a “machine-based system capable of operating with varying levels of autonomy and exhibiting post-deployment adaptability, which, with explicit or implicit objectives, infers from input data to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. UNESCO highlights the functional nature of AI, noting that systems built on data, hardware, and connectivity enable machines to emulate human abilities such as perception, problem-solving, linguistic interaction, and creativity [5]. These definitions, summarized in Table 1, reveal relevant nuances: while the OECD and the European Union emphasize autonomy, adaptability, and input–output relations, UNESCO stresses imitative capacities and the constitutive role of technological resources.
From an economic and business perspective, AI can be understood as a technological innovation process [6]. It refers to the social stock of knowledge used to create digital artifacts that, when applied to economic activity, emulate and may enhance or replace human cognitive capacities [7]. These techniques rely on AI’s ability to generate value through prediction. Predictive AI (PAI) encompasses computational systems and machine and deep learning algorithms designed to interpret and anticipate events, support or automate decisions, and execute actions in controlled contexts. PAI is a higher-order technology, a driver of radical innovation, and a general-purpose technology [8]. It also fosters technological convergence, derivative innovations, complementarities with economic assets, particularly intangible assets and human capital, new business models, productivity and employment gains, and a long-term economic cycle [9].
The rapid emergence of generative AI (GAI), with revolutionary “killer apps” such as ChatGPT and Gemini, has created a new and clearly disruptive inflection point, that is, a moment when gradual technological developments trigger major changes in how work is organized and risks emerge, shaping new trajectories for occupational safety and health [10]. GAI is also a general-purpose technology and is extending a key new value: the value of creation [11]. This value, driven by transformer-based machine and deep learning algorithms that generate digitalized artifacts, enhances AI’s performance and is profoundly transforming production [12] and work [13,14]. Progress in connectionist AI will not stop here: future algorithmic generations will produce far more advanced systems, with more agents, greater power, and enhanced capacities for learning, replication, and resource acquisition [15].
Transformative AI (TAI) refers to highly capable systems that operate as independent and autonomous agents pursuing their objectives, and whose performance far exceeds that of human labour across a wide range of tasks, including many that are essential to the economy, work, and society [16,17]. The value associated with TAI is the value of transformation.
As their capabilities expand, predictive, generative, and transformative AIs emulate an increasing number of human skills, accelerating their potential to replace non-routine cognitive work. Technically, the emergence of TAI capable of producing ideas, generating innovations, and decoupling economic growth from human labor may occur in the coming decades [18]. Such TAI poses an existential risk: its superiority in prediction, creation, and transformation would grant it an economic advantage that could render humans redundant in many social domains, particularly work [19]. This would misalign AI with human progress, exacerbating automation challenges, insufficient control, polarization, and inequality already observed with PAI and GAI. However, if TAI were aligned and directed toward human and organizational well-being, it could foster a new era of growth and social prosperity through its capacity to enhance productivity, economic expansion, social welfare, and environmental protection [20].
In occupational risk prevention, these conceptual differences are not merely semantic but have significant practical implications. Autonomy and adaptiveness call for anticipating risks from systems that, once deployed, may behave in ways that are not fully predictable, creating uncertainty regarding supervision and control [21,22]. The reference to explicit or implicit objectives raises questions about responsibility allocation and failure management [23], directly influencing organizational prevention culture. The distinction between physical and virtual environments indicates that emerging risks also include digital dimensions such as surveillance, automated decision-making, and the management of workers’ data [24]. UNESCO’s focus on imitative human capacities introduces psychosocial and ethical risks related to human–machine interaction, potential substitution of cognitive functions, and associated tensions in work organization [1]. Overall, these perspectives show that defining AI and its dimensions is not only a conceptual task but an essential first step for identifying and managing emerging risks in workplace settings.
In the workplace, AI plays a dual role that combines opportunities and challenges for occupational risk prevention. From a positive perspective, it can reduce workers’ exposure to hazardous environments through collaborative robots, drones, or intelligent monitoring systems [25]. AI also enhances ergonomic workstation design, anticipates physical overloads, and strengthens preventive management through predictive models capable of identifying accident patterns before they occur [21,22]. Moreover, it enables more personalized preventive training by tailoring content to worker characteristics, thereby improving the effectiveness of occupational safety and health education [22].
However, alongside these advantages, new categories of risk arise that must be considered in preventive planning. Surveillance algorithms may create psychosocial tensions linked to perceptions of excessive control, affecting emotional well-being and organizational trust [1,23]. Human–machine interaction in digitalized environments can lead to additional cognitive load, misinterpretation of automated recommendations, or excessive reliance on technological systems [26]. Ethical and legal dilemmas also emerge regarding algorithmic transparency, biases in automated decision-making, and the allocation of responsibility in the event of failure [24]. These issues add to the broader impact of AI on employment, including skill polarization and work reorganization, which directly influence workers’ safety and health conditions [2].
Concrete examples of these challenges can already be observed in practice. In large-scale warehouse and logistics operations, the use of algorithmic management systems for task allocation and performance monitoring has been associated with work intensification, reduced autonomy, and increased musculoskeletal and psychosocial risks. Similar issues have been reported in road transport and delivery sectors, where AI-driven scheduling and monitoring systems have contributed to time pressure, fatigue, and elevated safety risks among drivers in several national contexts.
In this context, the connection between AI and emerging risks places occupational risk prevention in a strategic position. It must anticipate both the benefits and the threats that technology introduces into workplace environments [27]. This requires developing adaptive regulatory frameworks, innovative assessment methodologies, and intervention strategies that integrate not only the technical dimension but also the organizational, psychosocial, and ethical aspects of digital transformation.
The aim of this entry is to provide a preventive perspective on the intersection between AI and emerging risks in workplace environments. In this entry, a preventive perspective refers to an approach that explicitly prioritizes the early identification, anticipation, and mitigation of potential risks associated with AI adoption, rather than assuming that technological innovation will automatically lead to harm reduction. This perspective emphasizes the need for governance, human oversight, and precautionary design choices throughout the lifecycle of AI systems. Beyond approaches focused solely on technological innovation or AI’s economic potential, this analysis centers on implications for occupational safety and health. Occupational risk prevention (ORP) is presented as a framework capable of anticipating, interpreting, and managing the uncertainties arising from the adoption of intelligent systems at work [21,23,24]. This perspective helps identify benefits, such as reduced physical exposure, ergonomic improvements, and early incident detection, while also addressing emerging threats, including technostress and ethical dilemmas related to algorithmic transparency and responsibility allocation [1,23].
This entry aims to provide a synthetic resource, understood as a digitally generated source of information derived from data integration, modelling, and algorithmic inference, rather than from direct human observation alone, for both the academic community and prevention professionals, offering an updated overview of the opportunities and risks associated with AI in the workplace. It also seeks to foster interdisciplinary debate on the need for more flexible regulatory frameworks, innovative assessment methodologies, and proactive preventive strategies capable of addressing rapid technological change with significant social implications [26]. Ultimately, it aspires to support the safe, ethical, and sustainable integration of AI in work environments, ensuring that technological advances translate into real improvements in workers’ health, safety, and well-being [24,25,26].

2. Artificial Intelligence in the Workplace

2.1. Current Applications of AI in the Labor Context

In the workplace, AI has progressively integrated into multiple processes, reshaping how work is organized and performed [28]. One of its most widespread applications is task automation across industrial and service sectors, where intelligent systems take over repetitive or high-precision activities, increasing efficiency and reducing human error [29,30]. This is evident in assembly lines, logistics operations, and customer service through chatbots [31], where AI handles routine functions and enables workers to focus on higher value-added tasks [30,32,33].
AI is increasingly used to support decision-making through algorithms capable of analysing large amounts of data in real time. In business management, such systems assist in human resource planning, shift organization, and forecasting production needs [34,35]. In occupational risk prevention, AI-based predictive models already analyse accident data, identifying patterns that enable organizations to implement more effective preventive measures [21,24,26]. Similarly, in healthcare and occupational health, AI is applied to monitor worker well-being and to detect early physiological changes or fatigue using sensors and wearable devices [36,37,38].
In advanced industrial settings, collaborative robotics represents another area of AI expansion. Cobots are robots designed to interact physically with humans safely within a shared workspace. They are used in tasks such as assembly, material handling, or internal transport and integrate computer vision and sensors that detect worker proximity, automatically stopping movement to prevent collisions [39]. This technology not only enhances efficiency but also redefines job roles, creating new challenges in ergonomics and in the skills required to work alongside intelligent systems [40].
AI-based surveillance and monitoring systems are expanding across sectors. They include applications for controlling environmental variables such as noise and vibration [41], temperature [42], or contaminant concentrations, as well as real-time analysis of smart-camera images to detect risky behaviours or safety protocol violations [43]. Some systems can even recognize awkward postures and alert workers before an injury occurs, opening new possibilities for proactive prevention [44]. However, these applications also raise concerns related to privacy, control, and trust in the worker–employer relationship.
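To make the posture-alert idea concrete, the minimal Python sketch below assumes that a pose-estimation model behind a smart camera has already returned 2D keypoints, and checks a single trunk-flexion angle against a threshold; the keypoint values, the geometry, and the 20-degree limit are illustrative assumptions, not a normative ergonomic criterion.

```python
import math

# Hypothetical 2D keypoints (pixel coordinates), as a pose-estimation
# model behind a smart camera might return them; values are invented.
keypoints = {
    "shoulder": (312.0, 180.0),
    "hip": (305.0, 330.0),
}

def trunk_flexion_deg(shoulder, hip):
    """Angle of the hip->shoulder segment relative to vertical."""
    dx = shoulder[0] - hip[0]
    dy = hip[1] - shoulder[1]   # image y grows downward, so invert
    return abs(math.degrees(math.atan2(dx, dy)))

FLEXION_ALERT_DEG = 20.0  # illustrative threshold, not a normative limit

angle = trunk_flexion_deg(keypoints["shoulder"], keypoints["hip"])
if angle > FLEXION_ALERT_DEG:
    print(f"Ergonomic alert: trunk flexion of {angle:.1f} deg detected")
else:
    print(f"Posture within threshold ({angle:.1f} deg)")
```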
Taken together, these applications demonstrate that AI is no longer a future technology, but a tool already integrated into many productive [22] and service sectors. Its impact is evident across fields such as construction [43,45], mining [44], logistics [46,47,48], the energy sector [25], healthcare [36,38], and digital services, shaping a scenario in which interaction between workers and intelligent systems becomes a routine component of work organization.

2.2. Potential Benefits of AI for Occupational Risk Prevention

The deployment of AI in work environments offers several potential benefits for occupational safety and health, provided its implementation is responsible and ethically guided. One of the most significant contributions is the reduction of exposure to physical hazards, as automated systems and collaborative robots can replace workers in particularly dangerous tasks. Examples include handling toxic substances, performing interventions in confined spaces with hazardous atmospheres [49,50], or conducting work at height, where robots already undertake inspection and maintenance activities [51]. These applications help minimize severe accidents while enabling personnel to focus on safer supervision and control tasks.
Another significant benefit relates to improved ergonomics. AI is used in predictive workstation design through simulations and digital modeling that help anticipate physical overload, identify awkward postures, and optimize workspace layout before musculoskeletal injuries occur [52,53,54]. Additionally, computer-vision systems and smart sensors now monitor workers’ movements in real time, detecting deviations from safe ergonomic patterns and issuing preventive alerts [55,56]. These developments reinforce a shift from reactive ergonomics, focused on correcting problems after they appear, to more proactive approaches centred on anticipation and continuous improvement. This orientation is not new within ergonomics, but builds on long-standing traditions of corrective, design, and prospective ergonomics, which have historically aimed to shape future work situations and integrate technologies from early design stages [57,58]. In this context, AI-based tools can be understood as extending and strengthening these established ergonomic approaches rather than replacing them.
Finally, AI could become a strategic ally in preventive management. Algorithms capable of analysing large datasets can identify accident patterns, hidden correlations among risk variables, and hazard scenarios that may not be evident to human teams [21,22]. These insights help organizations prioritize interventions, allocate resources more efficiently, and design evidence-based preventive strategies [21,26]. Some risk-management platforms already integrate AI to generate automatic recommendations or simulate the impact of preventive measures across different scenarios, supporting proactive decision-making in occupational risk prevention [59,60].
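As a hedged illustration of what such scenario simulation can look like, the following Python sketch uses a simple Monte Carlo comparison of yearly incident counts with and without a preventive measure; the baseline rate, effect size, and workforce figures are invented for the example.

```python
import numpy as np

# Invented parameters: baseline incident rate per worker-month and the
# assumed relative risk after the preventive measure is introduced.
BASELINE_RATE = 0.08
MEASURE_EFFECT = 0.6
WORKERS, MONTHS, RUNS = 120, 12, 10_000

rng = np.random.default_rng(42)

def simulate_incidents(rate):
    """Total incidents over one year, for each Monte Carlo run."""
    return rng.binomial(WORKERS * MONTHS, rate, size=RUNS)

before = simulate_incidents(BASELINE_RATE)
after = simulate_incidents(BASELINE_RATE * MEASURE_EFFECT)
print(f"Median incidents/year: {np.median(before):.0f} -> {np.median(after):.0f}")
```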
In sum, these benefits show that, when properly implemented, AI can become a key tool for advancing toward safer, healthier, and more sustainable workplaces. However, the realization of these advantages depends on human-centered design, system transparency, and the active involvement of workers and prevention specialists in technological adoption processes [61,62].

2.3. Key Sectors for AI Application in the Workplace

The deployment of AI does not currently occur uniformly but shows particular concentration in specific productive and service sectors. Analysing these areas allows for a better understanding of both the potential benefits and the challenges associated with occupational safety and health.
In the manufacturing and automotive industries, AI is applied mainly through advanced automation and collaborative robotics. Intelligent systems control assembly lines, manage inventories, and conduct real-time quality inspections [63,64]. Cobots enable safe interaction with workers in assembly tasks, reducing exposure to repetitive physical strain and improving precision in critical operations [65,66].
In the logistics and transport sector, AI is used for route optimization, predictive supply chain management [67], and the development of autonomous vehicles and drones for goods distribution [68]. These applications help reduce accidents related to fatigue and human workload [21,69], although they also raise questions regarding responsibility in the event of incidents [70].
The healthcare sector represents another major area of application. Diagnostic AI systems support the interpretation of medical images, patient monitoring, and the prediction of clinical complications [71,72]. In preventive practice, AI manages large volumes of occupational health data [73], identifies psychosocial risks [74], and improves resource planning in hospitals. Surgical [75] and assistive robotics also relieve staff from physically demanding tasks, although they introduce new requirements for technological supervision.
Lastly, in digital and financial services, AI is applied to large-scale data processing, fraud detection, risk analysis, and automated customer service through chatbots [76]. Although these environments involve lower exposure to physical risks, intensive digitalization can heighten psychosocial risks such as technostress, algorithmic surveillance, and work intensification [77].
Taken together, these examples show that the key sectors applying AI exhibit distinct risk profiles, requiring occupational risk prevention to adapt its analytical methodologies and intervention strategies to each productive context.
Beyond its role in risk reduction, the application of AI in healthcare-related work environments is increasingly contributing to a broader transformation of human health, occupational well-being, and professional practice. Advanced diagnostic systems, assistive and surgical robotics, digital twins, and data-driven personalization of interventions are progressively blurring the traditional boundaries between occupational health, clinical medicine, ergonomics, and preventive care [78].
In this evolving context, AI does not merely function as a tool for preventing harm or detecting adverse events, but also as an enabler of proactive health promotion, early intervention, and the continuous maintenance of well-being throughout working life. Predictive analytics and intelligent monitoring systems support the anticipation of functional decline, fatigue, or psychosocial strain, while AI-assisted decision tools increasingly inform individualized prevention and recovery strategies [10,24].
This convergence of disciplines reflects a shift toward more integrated models of worker health, where prevention, care, and performance are addressed simultaneously. As recent scholarship on AI-driven healing and future-oriented medicine emphasizes, such integration requires interdisciplinary collaboration, robust ethical governance, and careful attention to privacy, transparency, and human oversight [4,5,78]. From an occupational safety and health perspective, these developments reinforce the need to understand healthcare-related AI not only as a risk-management instrument, but also as a catalyst for redefining how work, health, and well-being are jointly sustained in digitalized environments.

2.4. Limitations and Conditions for the Realization of Benefits

Although AI offers a wide range of potential benefits in the workplace, these do not materialize automatically. Evidence from various sectors shows that its impact on safety and health depends on organizational, technical, and social conditions that shape its acceptance and effectiveness.
First, a human-centered design is essential. Implementing intelligent systems without considering workers’ abilities, limitations, and needs can produce the opposite effect, creating new physical or cognitive burdens instead of reducing them [61]. An example is ergonomic monitoring systems: while they can help prevent musculoskeletal injuries, if perceived as tools of excessive control, they may provoke rejection and psychosocial stress [79].
Another critical factor is algorithmic transparency and explainability. The benefits of AI in preventive management are realized only when workers and safety professionals understand how these systems generate decisions and recommendations [62,80]. Algorithmic opacity undermines trust, increases uncertainty regarding responsibility in the event of failures, and hinders integration into the organization’s prevention culture.
Likewise, worker training and capacity building are essential conditions. The introduction of cobots, predictive analytics platforms, or intelligent monitoring systems requires personnel to acquire new technical and digital competencies [81]. Without such training, AI may become a source of exclusion and inequality rather than a tool for support and improved working conditions.
Finally, the active participation of workers and prevention professionals in technological adoption is essential to ensure acceptance and long-term sustainability. The introduction of AI must be accompanied by social dialogue, impact assessment, and updated preventive regulations [4,23,82]. Only under these conditions can the potential benefits of AI be fully realized, preventing technological innovation from becoming an additional source of emerging risks.

3. Emerging Occupational Risks Associated with Artificial Intelligence

3.1. Psychosocial Risks

The introduction of AI in workplace environments generates emerging psychosocial risks that affect both emotional well-being and work organization. One key risk is technostress, understood as the tension arising when workers must adapt to advanced digital tools without adequate resources, training, or time [83]. This stress may appear as techno-anxiety (fear of making technological mistakes) or techno-overload linked to excessive information and the need to constantly supervise intelligent systems [84,85].
Another risk is the perception of excessive surveillance derived from AI-based monitoring systems. Smart cameras, biometric sensors, or productivity algorithms may heighten feelings of reduced privacy, foster distrust, and deteriorate the work climate [86]. In some cases, such technologies prompt self-censorship or behavioural changes due to fear of continuous evaluation [87].
A third psychosocial risk involves distrust toward opaque systems. When workers do not understand how algorithms make decisions affecting their performance, schedules, or evaluation, uncertainty and loss of control emerge [87,88]. Algorithmic opacity can undermine motivation and commitment, particularly when automated decisions are perceived as unfair [89].
Work intensification mediated by AI is another concern. By enabling stricter supervision and task optimization, algorithms may pressure workers to maintain high and continuous work rhythms, reducing breaks and limiting autonomy over time management [1,90]. Without proper governance, AI may contribute to burnout and other mental health issues.
In this context, AI-driven reductions in worker autonomy resulting from algorithmic task allocation, continuous monitoring, and performance optimization should be considered a core psychosocial risk factor. Such reductions directly affect job control, emotional well-being, and perceived fairness at work, reinforcing stress, disengagement, and psychological strain in AI-mediated work environments (see Section 3.4).
Overall, these psychosocial risks show that advanced digitalization transforms not only technical processes but also emotional and relational dynamics at work, requiring new preventive analysis and management tools.

3.2. Ergonomic and Organizational Risks

In addition to psychosocial risks, the introduction of AI in the workplace poses new ergonomic and organizational challenges that can affect both workers’ physical health and their performance.
A first risk is the cognitive overload that arises from the need to interpret and supervise multiple intelligent systems. Workers must process real-time information from predictive algorithms, control panels, and automated alerts, which can increase mental fatigue and the likelihood of human error [91]. This phenomenon is particularly evident in sectors such as logistics and healthcare, where AI is used to coordinate complex task flows.
New physical and postural demands also emerge from interacting with collaborative robots or automated systems not designed according to ergonomic principles. If the workstation is not properly adapted, workers may face repetitive movements, awkward postures, or unexpected exertion caused by shared manipulation with intelligent machines [39,65]. These subtler risks can accumulate over time and lead to musculoskeletal disorders in the medium and long term.
Another relevant aspect is technological dependence, which can alter work organization. When critical tasks rely almost entirely on automated systems, the loss of manual skills and the reduction in workers’ practical experience can increase vulnerability in the event of technological failure [1,90]. This phenomenon, known as “deskilling” [92], reduces the workforce’s autonomous response capacity and heightens exposure to risk when unexpected events occur.
Finally, AI systems can introduce changes in work organization that affect occupational safety and health. Algorithms for shift scheduling or task allocation may intensify workloads for certain groups, create imbalances in work–life balance, or reduce workers’ autonomy in managing their schedules [87]. These structural changes require adaptations in preventive methodologies to anticipate and mitigate their negative effects.
In summary, the ergonomic and organizational risks associated with AI show that advanced digitalization not only transforms tasks but also reshapes how work is organized and distributed, making it necessary to integrate ergonomics and organizational management into the analysis of emerging risks.

3.3. Ethical and Legal Risks

The incorporation of AI into workplace environments also introduces significant ethical and legal risks, stemming from the ways in which algorithms process information and make decisions that directly affect workers.
One of the main concerns is algorithmic bias, which can reproduce or even amplify existing inequalities. When AI systems are trained on historical data that reflect sex, age, or ethnic discrimination, these patterns may persist in processes such as recruitment, performance evaluation, or task allocation [93,94]. In this sense, AI may reinforce structural inequalities if appropriate oversight and correction mechanisms are not implemented [95].
Another critical issue is the lack of transparency in automated decision-making. Many systems operate as “black boxes,” making it difficult to understand how recommendations or evaluations affecting workers are generated [62,80]. This opacity raises legal dilemmas related to the right to information and the need to provide understandable explanations. It also limits the ability of occupational risk prevention professionals to identify and anticipate potential failures or biases in intelligent systems.
Liability in the event of a workplace accident or harm is another emerging challenge. When an AI system is involved in work organization or machinery operation, it becomes difficult to determine whether responsibility lies with the software developer, the company implementing it, or the worker interacting with it [23,80,82]. This legal uncertainty complicates the application of preventive frameworks that require a clear allocation of responsibility.
Finally, the ethical and legal risks associated with AI in the workplace extend to personal data protection and privacy. The use of intelligent surveillance systems, biometric sensors, or behavioural analysis may infringe fundamental rights if their scope and purpose are not properly regulated [23,82]. The challenge lies in balancing AI’s use to improve occupational safety and health with the need to uphold workers’ dignity, fairness, and protection.
Taken together, these risks show that the ethical and legal dimension of AI is not an accessory aspect but a central component for its responsible integration into the workplace.

3.4. Social and Labour Risks

Beyond psychosocial, ergonomic, and legal risks, AI also raises social and labour risks that directly affect employment structures and organizational dynamics. One of the most debated issues is its impact on employment and labour polarization. The automation of routine tasks may displace jobs in sectors such as manufacturing, logistics, and administrative services [96]. At the same time, demand increases for highly skilled profiles in data science, AI engineering, and cybersecurity, widening the gap between workers with differing levels of training and digital competencies [97,98].
A related risk is inequality in access to employment opportunities. Workers with low digital literacy or those employed in small and medium-sized enterprises may fall behind compared to those with continuous training and more advanced technological environments [99]. This inequality may also have a territorial dimension, widening the gap between highly digitalized regions and those with lower technological investment capacity [100].
AI can also generate changes in organizational culture and worker autonomy. Algorithms that assign tasks, control timing, or monitor productivity may reduce individual decision-making capacity, affecting perceptions of autonomy and motivation [1,87,90]. While these dynamics have clear social and labour implications, the reduction of worker autonomy and decision latitude is also closely connected to psychosocial risk factors, particularly those related to job control, perceived fairness, and emotional strain in AI-mediated work environments. For this reason, these aspects should be interpreted in close connection with psychosocial risks, rather than as purely labour-related issues. In this regard, work intensification and reduced personal control over the workday can deteriorate job quality.
Finally, there are risks to social cohesion and trust in labour institutions. The perception that AI is used mainly to reduce costs or monitor workers can fuel resistance, labour conflicts, and lower acceptance of digitalization [101,102]. Without supporting policies, training, and active participation, the benefits of AI may concentrate among a few, while social costs are unevenly distributed.
Taken together, the social and labour risks associated with AI require rethinking the role of prevention in a broader sense, incorporating not only physical health and safety but also equity, inclusion, and the sustainability of employment in a context of rapid technological transformation.

3.5. Technological, Environmental, and Infrastructure Risks

Although technological, environmental, and infrastructure risks associated with AI may appear to extend beyond immediate workplace boundaries, they have significant indirect implications for occupational safety and health [59,60]. Failures related to cybersecurity, energy dependence, or critical digital infrastructures can directly affect work organization, exposure scenarios, operational continuity, and workers’ safety, particularly in highly digitalized and automated work environments.
One of the main challenges is cybersecurity. AI systems, which rely on large volumes of data and are connected to digital networks, are vulnerable to cyberattacks that may alter their functioning [103]. An attack on a collaborative robot, an intelligent surveillance system, or a risk-management platform could trigger incidents that endanger both worker safety and operational continuity. These scenarios make it essential to integrate cybersecurity into preventive strategies.
Algorithmic reliability is another critical issue. Models trained on incomplete or low-quality data may produce prediction errors, incorrect diagnoses, or false alarms. In high-criticality sectors such as healthcare or the chemical industry, such failures can have serious consequences for worker safety. Moreover, the interoperability of multiple intelligent systems in complex environments, such as smart factories or automated logistics centers, may lead to cascading failures when algorithms are not properly coordinated [104].
Environmental and infrastructure impacts add to these technological risks. Training advanced models requires high energy consumption, resulting in a significant carbon footprint [105,106]. Moreover, reliance on critical infrastructures, such as electrical grids, telecommunications, or transport systems, means that technical failures or cyberattacks may trigger cascading effects that compromise productivity and worker safety [107]. Additionally, hardware manufacturing and the growing volume of electronic waste create risks in extractive and recycling sectors, where workers may be exposed to highly hazardous conditions [108].
Taken together, technological, environmental, and infrastructure risks show that occupational risk prevention must adopt a broader and more systemic approach, integrating digital security, sustainability, and the resilience of critical infrastructures as inseparable components of protecting health and safety at work in the era of AI, particularly in sectors where AI-driven systems are critical for daily operations.

4. Occupational Risk Prevention in the Face of AI

4.1. Adaptation of the Regulatory Framework and Preventive Policies

The rapid incorporation of AI in workplaces requires adapting regulatory frameworks and preventive policies to adequately address emerging risks. Traditionally, occupational risk prevention regulations have focused on physical, chemical, or ergonomic factors; however, advanced digitalization demands a broader approach that integrates technological safety, ethics, and sustainability [4,23,82].
In the European context, the EU Occupational Safety and Health Strategy 2021–2027 already highlights the need to anticipate risks arising from digitalization and automation. Likewise, EU-OSHA has published foresight reports emphasizing that AI-based systems can generate both preventive benefits and new hazards, urging Member States to review their regulatory frameworks [109].
A recent milestone in this direction is the approval of the European Artificial Intelligence Regulation, the AI Act [4], which establishes specific obligations for high-risk systems, including those that affect human resource management or workplace safety. This regulatory framework introduces principles of transparency, traceability, and human oversight that are directly applicable to the field of prevention.
At the international level, organizations such as the International Labour Organization (ILO) have emphasized the importance of integrating AI into global occupational safety and health strategies, stressing the need for social dialogue and the active participation of workers in policy development [110,111]. Complementarily, UNESCO has promoted ethical guidelines on AI that can serve as a reference for shaping regulation in workplace environments [5].
In this regard, regulatory adaptation must be accompanied by a preventive paradigm that prioritizes proactive and anticipatory action over reactive responses that occur only after harm has taken place. Such anticipation, supported by a solid and flexible legal framework, will enable AI to serve as a tool for improving workplace safety rather than a source of uncertainty or reduced protection.

4.2. Methodologies for Identifying and Assessing Risks in AI-Enabled Work Environments

The incorporation of AI in workplaces requires a substantial update of risk identification and assessment methodologies, as traditional prevention tools may fall short in capturing the new scenarios generated by advanced digitalization.
A first challenge is the adaptation of classical risk assessment methods. Probability–severity matrices, checklists, and ergonomic evaluations must be expanded to include emerging factors such as cognitive load resulting from supervising intelligent systems, loss of autonomy caused by algorithmic management, or stress linked to digital surveillance [1,86,87,91]. Incorporating these elements makes it possible to identify risks that would otherwise remain hidden.
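The following Python sketch shows one possible way to extend a classical probability–severity score with such emerging factors; the scales, factor names, and weights are illustrative assumptions rather than values prescribed by any standard or by the cited studies.

```python
# Illustrative scales and weights; none of these values come from a
# standard or from the cited literature.
PROBABILITY = {"unlikely": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "serious": 2, "critical": 3}

EMERGING_WEIGHTS = {
    "cognitive_load": 1.0,          # supervising intelligent systems
    "algorithmic_management": 1.5,  # loss of autonomy
    "digital_surveillance": 1.0,    # surveillance-related stress
}

def extended_risk_score(probability, severity, emerging_factors):
    """Classical probability x severity score plus emerging-factor terms."""
    base = PROBABILITY[probability] * SEVERITY[severity]
    extra = sum(EMERGING_WEIGHTS[f] for f in emerging_factors)
    return base + extra

score = extended_risk_score(
    "possible", "serious", ["cognitive_load", "digital_surveillance"]
)
print(f"Extended risk score: {score}")  # 2 * 2 + 1.0 + 1.0 = 6.0
```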
Second, algorithmic audits are becoming increasingly relevant, as they assess the transparency, fairness, and reliability of AI systems used in the workplace. These audits help identify biases in recruitment algorithms, failures in task-recommendation systems, or limitations in the explainability of results [62,87,89,93,94]. Approaches such as ethics by design and safety by design are useful because they integrate preventive principles during algorithm development rather than correcting issues after deployment [61,62].
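As one concrete check an algorithmic audit might run, the sketch below computes the disparate impact ratio of a recruitment algorithm’s recommendations across two groups of workers; the data are synthetic, and the 0.8 threshold follows the common “four-fifths” rule of thumb rather than any requirement from the references above.

```python
# Synthetic outcomes: 1 = recommended by the algorithm, 0 = not.
group_a = [1, 0, 1, 1, 0, 1, 0, 1]  # selection rate 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # selection rate 0.250

def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")

# The 0.8 "four-fifths" rule of thumb is a common audit red flag,
# not a requirement of the frameworks cited above.
if ratio < 0.8:
    print("Potential adverse impact: flag for human review")
```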
A third key aspect is the development of hybrid assessment models that combine approaches from different disciplines, including digital ergonomics, cybersecurity, and organizational analysis. These evaluations enable an integrated analysis of risks related to human–machine interaction, technical reliability, and intensive data use [59,61,62,103,104]. Worker participation is essential to ensure that the assessment accurately reflects workplace realities.
Finally, international guidelines and best practices are emerging to support the adaptation of evaluation methodologies. EU-OSHA recommends integrating digital risks into the overall prevention matrix [109], while NIOSH has proposed specific frameworks for assessing risks in advanced automation environments [112]. These guidelines promote a more proactive approach aimed at anticipating and minimizing AI-related risks before they materialize.
In summary, risk assessment in AI-enabled environments requires moving from a classical, reactive approach to a proactive, multidimensional, and participatory model that combines technical analysis of algorithms with workers’ experience, ensuring that digitalization translates into real improvements in occupational safety and health.

4.3. Evidence-Based Management as a Useful Approach

The introduction of AI in workplace environments heightens the need for approaches that ensure robust, well-grounded preventive decisions. In this context, evidence-based management is particularly useful for occupational risk prevention. This approach combines the best available scientific evidence, workplace data, and professional expertise to guide decision-making [21,22,26]. Given the complexity of AI-related emerging risks, where information may be limited or rapidly evolving, evidence-based management enables the prioritization of well-supported, context-specific interventions.
One of its key contributions is the ability to integrate diverse sources of knowledge. Academic and technical literature provides guidance on best practices in digital ergonomics [61], cybersecurity [103], and algorithmic management [87,88]. In addition, data generated by AI itself, such as exposure records, cognitive load indicators, or performance metrics, can be used to enhance risk assessment. Finally, the expertise and judgment of prevention professionals ensure that decisions incorporate both technical criteria and the organizational context.
Moreover, evidence-based management promotes a proactive and reflective preventive culture by requiring that each decision be grounded in evidence rather than intuition or production pressures [36,37,38,55,56]. This approach also enhances transparency and accountability, which are essential in contexts where the introduction of AI may generate distrust among workers.
There is already evidence demonstrating the usefulness of this approach in the prevention field. Recent studies show that applying evidence-based practice to ergonomics and psychosocial risk management enhances professionals’ argumentative capacity and supports the design of more effective interventions [21,26,61,91]. Likewise, in educational and training settings, this approach strengthens critical thinking and autonomy among future prevention technicians, skills that are essential for addressing the challenges posed by advanced digitalization [21,22,26,61].
In short, evidence-based management is not only a robust methodology for assessing traditional risks but also an essential tool for addressing the unprecedented challenges posed by AI in the workplace, ensuring that preventive responses are consistent, adaptable, and socially responsible.

5. Good Practices in the Use of AI

5.1. AI for the Early Detection of Ergonomic Risks

One of the most promising applications of AI in occupational risk prevention is the early detection of ergonomic risks. Through computer vision algorithms, wearable sensors, and motion analysis, AI can identify awkward postures, repetitive strain, or abrupt movements that may lead to musculoskeletal disorders (MSDs) [52,55,56].
In industrial environments, computer vision systems analyse workers’ gestures and postures in real time, generating alerts when deviations from safe parameters are detected. These tools support preventive action by adjusting workstation design or break frequency before injuries occur [44,55,56]. In the healthcare sector, AI models have been developed to identify high-risk patient-handling tasks, helping to reduce lower-back and upper-limb injuries [36,37,38].
Another emerging line involves wearable devices, such as inertial sensors and smart vests, which capture movement and posture data. Using machine learning, these systems identify individualized risk patterns and provide personalized recommendations [36,37,38]. In some cases, aggregated data make it possible to redesign processes or identify critical tasks across the organization, driving continuous improvement in ergonomic management.
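A minimal sketch of the wearable-data idea follows, assuming a tri-axial accelerometer sampled at 50 Hz; the synthetic signal, the five-second windows, and the two-standard-deviation flagging rule are illustrative choices, not a validated ergonomic method.

```python
import numpy as np

# A minimal sketch, assuming a wearable streams tri-axial accelerometer
# samples at 50 Hz; the signal here is synthetic noise standing in for
# real movement data.
rng = np.random.default_rng(0)
fs = 50                                       # sampling rate (Hz)
signal = rng.normal(0.0, 0.3, (fs * 60, 3))   # one minute of x/y/z data

def window_intensity(acc, fs, window_s=5):
    """RMS acceleration magnitude per non-overlapping window."""
    n = fs * window_s
    windows = acc[: len(acc) // n * n].reshape(-1, n, acc.shape[1])
    magnitude = np.linalg.norm(windows, axis=2)
    return np.sqrt((magnitude ** 2).mean(axis=1))

rms = window_intensity(signal, fs)
# Flag windows whose intensity departs strongly from this worker's baseline.
threshold = rms.mean() + 2 * rms.std()
risky = np.where(rms > threshold)[0]
print(f"Windows flagged for ergonomic review: {risky.tolist()}")
```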
The application of AI in ergonomics also has strong training potential. AI-based simulations can be used in training programs, showing workers in real time how their movements compare to ergonomic standards and how to correct them [113,114]. This approach reinforces visual learning and fosters a participatory prevention culture.
However, the introduction of these technologies raises ethical and organizational challenges. It is essential to ensure that such systems are not used for surveillance or control purposes but rather for prevention and the improvement of occupational health. Worker participation in the design and implementation of these tools is key to ensuring their acceptance and effectiveness [86,87].
Taken together, AI applied to the early detection of ergonomic risks represents a clear example of how digitalization can contribute to more proactive, personalized, and efficient prevention, provided it is accompanied by appropriate governance and a human-centred approach.
Recent developments indicate that AI-based ergonomic assessment is progressively evolving from a purely preventive function toward a broader role in supporting recovery, rehabilitation, and active health promotion. Machine learning–based ergonomic models, combined with computer vision and wearable technologies, increasingly support adaptive interventions that integrate prevention, learning, and functional preservation over time [52,55].
In healthcare and occupational health settings, AI-driven ergonomic tools increasingly intersect with clinical and rehabilitative practices. Digital twins of workers or tasks allow the simulation of ergonomic adaptations, workload redistribution, and recovery scenarios, facilitating individualized and preventive decision-making. When combined with assistive robotics or intelligent exoskeletons, such systems contribute to reducing physical strain while supporting healing-oriented approaches that extend beyond traditional risk avoidance [39,62].
This evolution raises important ethical and organizational considerations. The same technologies that enable personalized prevention and health promotion may also intensify concerns related to privacy, surveillance, and data governance if their purpose and limits are not clearly defined [4,23]. As emphasized in recent work on AI-enabled healing and the medicine of the future, the responsible integration of these tools requires explicit ethical frameworks, transparency regarding data use, and a clear distinction between supportive health technologies and instruments of control [78]. Within occupational risk prevention, acknowledging this continuum between prevention and healing helps clarify how AI can contribute to safer, healthier, and more sustainable work without undermining workers’ autonomy or dignity.

5.2. AI for Accident and Injury Prediction

Another high-potential area for applying AI in occupational risk prevention is accident prediction. Using machine learning techniques and predictive analytics, AI can identify hidden patterns in large accident datasets, anticipate the likelihood of incidents, and support the design of more precise preventive measures [21,22,26,115].
These models draw on historical accident data, safety observations, performance indicators, and contextual variables (environmental conditions, shifts, workload, worker experience, etc.). Using classification algorithms and neural networks, AI identifies correlations that are not detectable through traditional statistical methods [21,22,26]. For example, a predictive system may indicate that the combination of accumulated fatigue, high temperatures, and elevated production pace significantly increases accident probability on an assembly line.
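The hypothetical example above can be sketched in a few lines of Python with a logistic regression classifier; the shift records, feature choices, and resulting probability below are invented for illustration and do not represent a validated predictive model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented shift records. Columns: fatigue index (0-1),
# temperature (deg C), production pace (0-1).
X = np.array([
    [0.2, 20, 0.4], [0.8, 33, 0.9], [0.5, 25, 0.6],
    [0.9, 35, 0.8], [0.1, 18, 0.3], [0.7, 30, 0.9],
    [0.3, 22, 0.5], [0.6, 31, 0.7],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = incident occurred on shift

model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimated incident probability for a hot shift with a fatigued crew
# running at high pace.
p = model.predict_proba([[0.85, 34, 0.9]])[0, 1]
print(f"Estimated incident probability: {p:.2f}")
```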
In high-risk sectors such as construction [43,47], mining [44,115], or transportation [67,68], AI models are already used to prioritize inspections, plan tasks, or adjust safety protocols. In underground mining, predictive algorithms integrate gas sensors, vibration data, and human-behaviour indicators to anticipate equipment failures or unsafe actions. In transportation, AI-based vision systems detect driver distraction or drowsiness, generating automatic alerts that help reduce accident rates.
Moreover, AI enables dynamic risk management: models are continuously updated with new information, increasing their accuracy as the dataset grows. This continuous learning capability supports a transition from reactive to predictive prevention, where measures are implemented before the risk materializes [21,22,26].
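The continuous-learning idea can likewise be sketched with an incrementally trained classifier, here scikit-learn’s SGDClassifier updated via partial_fit; the batch contents are hypothetical and reuse the feature layout of the previous sketch.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Same hypothetical feature layout as the previous sketch:
# fatigue index, temperature, production pace.
model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])

def fold_in_new_records(model, X_new, y_new):
    """Update the running model as fresh shift records arrive."""
    model.partial_fit(X_new, y_new, classes=classes)
    return model

X_batch = np.array([[0.4, 24, 0.5], [0.9, 33, 0.8]])
y_batch = np.array([0, 1])
fold_in_new_records(model, X_batch, y_batch)
```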
However, the use of AI for accident prediction raises ethical and methodological challenges. Model reliability depends on data quality and algorithmic transparency. A biased or poorly trained model may generate false alarms or, conversely, overlook real risks [22,24,62,80]. Therefore, ensuring decision traceability, validating models with safety experts, and maintaining human oversight in preventive decisions are essential.
In sum, AI applied to accident prediction represents one of the most mature and transformative digitalization applications in OSH, enabling the identification of latent causes, prioritization of interventions, and optimization of resources. However, its implementation must be supported by clear policies on data quality, transparency, and ethical oversight to ensure that prevention remains human-centred rather than purely algorithmic.

5.3. Experiences of Responsible and Safe Integration

Recent experiences with AI in occupational risk prevention show that the benefits largely depend on how these technologies are implemented and managed. The most successful projects are those that integrate AI within a responsible, participatory, and human-centred prevention strategy [87,116].
In Europe, several pilot initiatives within the Industry 4.0 framework stand out, where AI has been used to monitor working conditions, enhance ergonomics, and optimize operational safety without compromising worker autonomy. Programs in Germany, Sweden, and the Netherlands show that AI is most effective when accompanied by training, involvement of safety committees, and continuous assessment of human impact [1,59,87,117].
A relevant example is the European “AI@Work” project, designed to develop AI tools that improve workplace well-being and safety through predictive and adaptive systems. In these initiatives, AI serves as support for preventive decision-making rather than a replacement for professional judgment [26,62]. Similarly, in the healthcare and logistics sectors, several companies have integrated intelligent systems to detect overloads or unsafe movements, complementing the traditional observations of prevention specialists [36,37,44,65].
These experiences have also highlighted the importance of establishing a technological governance framework. Creating multidisciplinary committees, including experts in OSH, computer science, ethics, and human resources, helps assess the suitability and impact of AI systems in the workplace [87]. Likewise, transparency in data use, respect for privacy, and clear communication of preventive objectives are essential conditions for building trust and preventing perceptions of monitoring or excessive control.
Finally, responsible integration projects show that AI can serve as a catalyst for organizational learning. Data generated by intelligent systems help identify trends, prioritize actions, and strengthen the prevention culture. However, this potential is only realized when organizations adopt a continuous-improvement mindset that combines technological innovation with ethics, evidence, and the active participation of all stakeholders [21,22,87].
In summary, experiences of responsible and safe integration show that the success of AI in prevention depends not only on its technical capabilities, but also on its alignment with the core values of occupational risk prevention: participation, transparency, and the comprehensive protection of workers.

6. Debates and Future Perspectives

6.1. Balancing Innovation and Worker Protection

The advance of AI in the workplace presents a central challenge for prevention: balancing technological innovation with the protection of workers. Digitalization can reduce risks and improve efficiency, but it may also introduce new vulnerabilities if productivity is prioritized over safety and health [1,23,87].
Achieving this balance requires acknowledging that AI is not neutral: its design and use reflect human decisions about what to optimize, measure, or overlook. When algorithms focus solely on efficiency, they may create production pressures [1], intensify work [90], or exclude certain groups [94]. Conversely, AI designed with sustainability and social justice criteria can support prevention by identifying overloads, automating hazardous tasks, and anticipating risks before they materialize [118].
Several international experiences show that technological innovation and worker protection are compatible, provided that appropriate governance is in place. The European Industry 5.0 Strategy promotes a “human-centred” approach to digitalization, in which technology is subordinated to well-being and safety. Likewise, reports from the ILO and EU-OSHA emphasize the need to systematically assess the human impact of these technologies before their implementation.
Achieving this balance requires strengthening three key pillars: first, preventive leadership capable of integrating innovation into a safety-oriented organizational culture; second, worker participation in the design, validation, and oversight of intelligent systems, which reinforces acceptance and trust; and finally, algorithmic transparency, enabling a clear understanding of how automated decisions influence working conditions and task allocation [62,86,87].
In summary, the challenge is not to halt innovation but to steer it toward human well-being, ensuring that AI enhances autonomy, health, and safety rather than undermining them. The prevention of the future must act as a bridge between technology and ethics, ensuring that each digital advance translates into a real improvement in working life quality.

6.2. The Challenge of Preventive Digital Literacy

The expansion of AI in the workplace requires not only new technologies but also new competencies. One of the major challenges for 21st-century occupational risk prevention is promoting preventive digital literacy, that is, the ability to understand, use, and oversee digital technologies from a safety, health, and ethical perspective [1,62,87,119].
This literacy goes beyond technical proficiency: it requires training workers and prevention professionals in critical thinking about technology, enabling them to question algorithmic biases, interpret data generated by intelligent systems, and recognize when automation may create new risks [62,87,88]. Accordingly, preventive education should include basic concepts on how AI models work, how data are processed, and what ethical boundaries must be respected when applying these technologies to the workplace.
Leading organizations are incorporating hybrid training programs that combine digital competencies with social and ethical skills. For example, workshops increasingly teach data-analysis tools alongside “safety by design” or “ethics by design” principles [62], fostering a multidisciplinary approach [61,87]. In countries such as Finland and Germany, national AI-training initiatives for industrial workers integrate the preventive dimension into continuous learning [120].
Likewise, prevention services must adapt to include professional profiles with competencies in data science, digital ergonomics, and technological ethics: professionals capable of collaborating with engineers, computer scientists, and managers in assessing emerging risks. This convergence of knowledge is essential to ensure that AI is used as a tool for improvement rather than for surveillance or control [62,87].
Preventive digital literacy also has a social dimension: reducing the technological gap across generations, genders, and educational levels. If only a few understand AI, there is a risk of reproducing labour inequalities and excluding workers less familiar with digital technologies [81,87,94]. Therefore, training must be inclusive, accessible, and empowerment-oriented, ensuring that everyone understands the functioning and implications of the intelligent tools they interact with.
In sum, preventive digital literacy is a strategic pillar for a safe, fair, and sustainable digitalization. It is not only about learning to use AI, but about learning to engage with it critically, integrating its potential into a preventive culture that strengthens autonomy, professional competence, and the protection of health at work.

6.3. The Need for an Ethical and Governance Framework for AI in Occupational Risk Prevention

The rapid incorporation of AI into the world of work has often outpaced the adaptation of regulatory and ethical frameworks. Consequently, there is an urgent need to establish robust AI governance systems that ensure its responsible, transparent, and human-centred use within the field of occupational risk prevention [23,24,82].
The ethical framework should not be seen as a limitation on technological development but as a prerequisite for its social and preventive legitimacy. In the workplace, AI directly affects workers’ health, autonomy, and fundamental rights; therefore, its use must be guided by principles of justice, explainability, privacy, and meaningful human control [23,82]. These principles, embedded in international instruments such as the OECD AI Principles and the European Union’s AI Act, must be reinterpreted from a preventive perspective to ensure safe and equitable working environments.
Ethical AI governance in occupational risk prevention requires action across three complementary levels. First, the normative level, which must establish clear standards for risk assessment, algorithmic oversight, and legal responsibility in cases of system failure or bias [4,23,82]. Second, the organizational level, where companies should implement policies on transparency, participation, and ethical training, including digital ethics committees and periodic audits of intelligent systems [86,87,101]. Finally, the technical level, which requires designing algorithms with guarantees of safety, interpretability, and traceability, following “safety by design” principles [62,80].
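As a concrete illustration of the technical level, the following minimal sketch shows one possible “traceability by design” pattern: every automated decision affecting working conditions is appended to an audit log together with the model version, the inputs actually used, the output, and any human override. The field names and the JSON-lines format are illustrative assumptions, not a prescribed standard:

```python
# Minimal sketch of "traceability by design": every automated decision that
# affects working conditions is logged with enough context for later audit.
# Field names and the JSON-lines format are illustrative assumptions.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_id: str          # which model/version produced the output
    inputs: dict           # features the system actually used
    output: str            # the automated recommendation or decision
    confidence: float      # model-reported confidence, if available
    human_override: bool = False   # did a supervisor change the outcome?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, path: str = "audit_log.jsonl") -> None:
    """Append one decision to an append-only audit trail."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    model_id="shift-allocator-v1.3",
    inputs={"worker_id": "anon-042", "fatigue_index": 0.71},
    output="reassign_to_light_duty",
    confidence=0.83,
))
```

An append-only trail of this kind is what would make the periodic audits and legal-responsibility assessments mentioned above feasible in practice.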
The European Commission, through the AI Act [4], has taken a decisive step by classifying high-risk AI systems, including those affecting people management and occupational safety, and requiring prior conformity assessments, data traceability, and human oversight. This regulation, together with the recommendations of EU-OSHA [109] and the ILO [110], marks a paradigm shift: occupational risk prevention cannot be separated from the ethical design of technology.
However, beyond regulation, the success of this framework depends on the preventive and ethical culture within organizations. The responsible adoption of AI requires multidisciplinary teams capable of integrating technical, social, and occupational health perspectives [26,61]. It is also essential to foster an inclusive digital culture that reduces inequalities and enables workers to understand the technological and ethical foundations of intelligent systems [81]. Only through a cooperative, evidence-based approach can AI truly contribute to safer, fairer, and more human-centred work.
From a global perspective, international comparative analyses highlight significant differences in how countries approach AI governance. The World Bank has documented evolving national strategies that combine regulatory instruments, ethical guidelines, institutional capacity-building, and risk-based approaches to manage the societal and labour implications of artificial intelligence across different economic and regulatory contexts [121].
In sum, developing an ethical and governance framework for AI in occupational risk prevention is not an optional add-on but an essential requirement to ensure that digital innovation progresses in alignment with the core values of prevention: protecting the life, dignity, and well-being of workers.

6.4. AI, Frontiers of Substitution Possibilities, and New Existential Risks

As the different families of AI consolidate their evolutionary trajectories and interactions, new psychosocial and occupational risks emerge, expanding as these technologies increase their ability to emulate workers’ cognitive capacities. The remainder of this section briefly reviews this new risk landscape across the three families of AI that currently shape labour markets.
In this context, it is useful to distinguish between different levels of artificial intelligence development. Artificial Narrow Intelligence (ANI) refers to AI systems designed to perform specific tasks within a limited domain, operating under predefined objectives and constraints. Most AI applications currently deployed in workplaces fall within this category. Artificial General Intelligence (AGI), by contrast, describes hypothetical AI systems capable of understanding, learning, and applying knowledge across a wide range of tasks at a level comparable to human intelligence. While AGI has not yet been realized, its potential emergence raises profound ethical, social, and occupational implications.
The link between predictive AI (PAI), automation, and labour control raises two major social challenges. PAI is typically used to increase average productivity but not necessarily marginal productivity. For PAI to create more and better jobs, new tasks must emerge that drive improvements in both types of productivity and justify new hiring. Moreover, for PAI to move beyond standard automation and control processes and enhance work at scale, it requires approaches grounded in machine usefulness and the complementarity of intelligences [122]. In other words, PAI must be directed toward human benefit, while workers’ skills must be aligned with its uses. Meeting this challenge is essential if PAI is to generate enough creative destruction to sustain economic dynamism [123].
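In standard notation, with labour input L and output Y(L), the distinction between the two productivities can be written as follows:

```latex
% Average vs. marginal productivity of labour:
\[
  AP_L = \frac{Y(L)}{L}, \qquad MP_L = \frac{\partial Y}{\partial L}
\]
% Automation-centred PAI can raise AP_L (the same output with fewer workers)
% while leaving MP_L, and hence the incentive to hire, unchanged or lower;
% only new, labour-complementary tasks raise MP_L and justify new hiring.
```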
Furthermore, it is essential to consider the effects of PAI on polarization and inequality among individuals and firms. Owing to economies of scale and scope, as well as the network, platform, bias, and polarization effects embedded in its economic deployment, organizations can leverage PAI to automate prediction and build digital business models that are efficient, low-cost (with near-zero marginal reproduction costs), and highly scalable. This efficiency and scalability are far more attainable for large firms than for small ones, and they tend to displace workers with low and medium skill levels, including cognitive skills [124]. Such polarization is currently the main existential risk linked to PAI. Although it can be mitigated through public policies that protect and upskill workers, it remains a significant concern [125]. Table 2 summarizes the key characteristics of the three AI families and the associated existential and extreme risks.
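A stylized cost function, offered here only to illustrate the argument, makes the scalability point explicit: with a fixed development cost F and a near-zero marginal reproduction cost c, average cost falls continuously with output q, so advantages compound for the firms able to operate at the largest scale:

```latex
\[
  C(q) = F + c\,q, \qquad
  AC(q) = \frac{F}{q} + c, \qquad
  \lim_{q \to \infty} AC(q) = c \approx 0
\]
```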
Generative AI (GAI), with its creative value, could enhance productivity and employment by automating routine cognitive tasks and enabling workers to shift toward more creative, higher-value activities [11]. In principle, GAI could support effective human–machine collaboration if workers retain the ability to decide which tasks to share. However, evidence of its labour effects remains limited. Further research is needed, especially as major providers standardize more powerful paid GAI models and shift adoption decisions from individuals to corporations. Uncertainty also persists regarding its impact on job quality, wages, and occupational health. There is still little evidence on the nature of new GAI-related jobs or on how skills, work intensity, or autonomy may change once this technology is fully integrated into workplaces [126].
For now, we know that the occupation making the greatest use of GAI, office work, is also the one facing the highest risk of substitution and the most significant issues in job quality and occupational health. Despite the appearance of widespread AI adoption, many workers still perform routine, repetitive, or unpleasant tasks under precarious or tightly controlled conditions, as seen in cognitive jobs such as monitoring or content moderation [127]. Labour-market polarization at regional and local levels is also increasing, as GAI particularly affects non-routine cognitive work and knowledge-intensive sectors, with young workers at the start of their careers being especially vulnerable [128].
Given the rapid productive and labour transformation driven by GAI, worker training and changes in cognitive work represent its main social challenge. In the coming years, the need for education, reskilling, and new cognitive capabilities to interact with GAI will be substantial. Failure to meet this demand may increase the risks of widespread deskilling, rising inequality, and social polarization, heightening the likelihood of conflict and existential risks. Research also points to major risks linked to unequal access to GAI in education and health, as well as its use for disinformation or the erosion of individual, civil, and political rights [129]. In this context, restoring the public value of GAI, through co-designed open generative algorithms and public ownership of certain datasets, will be essential for building shared social progress.
The emergence of transformative AI (TAI) capable of driving decoupled economic growth and socially misaligned progress significantly increases the likelihood of extreme risks for humanity. There is broad agreement among researchers that a highly effective multi-agent TAI, able to emulate and perform most human skills essential for economic dynamics, including idea generation and innovation, is technically feasible. With the rise of large language models and generative AI, estimates for the arrival of TAI have shifted forward by about a decade, now projected for the 2030s to early 2040s [130]. According to the scaling hypothesis, massive increases in data, layers, and parameters in current deep-learning models, particularly convolutional and generative adversarial networks, will enable sustained meaningful learning and the emergence of new transformative capacities [131].
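For reference, one common formalization of this hypothesis, drawn from the empirical scaling-law literature rather than from reference [131] itself, expresses test loss as a power law in parameters N, data D, and compute C, with empirically fitted constants and exponents:

```latex
\[
  L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
  L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
  L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
\]
% Sustained declines in loss with scale are, on this view, what would
% underwrite the emergence of new transformative capacities.
```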
Another key factor in the emergence of TAI and its existential risks is the lack of alignment with human work, well-being, and social flourishing [132]. Indeed, if the alignment problem remains unsolved, the appearance of extreme risks can be taken almost for granted [16]. Predictive and generative AIs are largely developed following technical and optimization-driven criteria by computer science and data science professionals. This technological-solutionist approach generates numerous biases and incorporates limited social ethics. These systems are deployed with logics of automation and control, through business models that create wealth but also extract rents, concentrated in a handful of superstar tech firms. They replace an increasing number of human skills and exhibit highly uneven uses across individuals, firms, groups, and territories [17,133].

7. Conclusions

7.1. Synthesis of the Main Contributions

The integration of AI into the world of work is deeply transforming the paradigms of occupational risk prevention (ORP). This entry has shown that AI is a double-edged tool: it can strengthen safety, anticipate risks, and improve decision-making, but it can also amplify inequalities, create new cognitive burdens, and erode professional autonomy if not properly managed.
The most significant advances emerge in early detection of ergonomic risks, accident prediction, and assessment of psychosocial factors through machine learning, smart sensors, and large-scale data analytics. When implemented within ethical and participatory frameworks, these technologies demonstrate the potential for more proactive, personalized, and evidence-based prevention.
However, the analysis also highlights the need to reinforce regulatory, ethical, and competency frameworks. Digitalization must align with the core values of ORP: dignity, participation, and holistic health. Developments such as Industry 5.0 and the European AI Act show that innovation and well-being are compatible when technology is designed and applied with responsibility and transparency.
In sum, the future of prevention in the AI era will depend on our collective ability to align technological intelligence with ethical and organizational intelligence, ensuring that each digital advance translates into safer, fairer, and more human-centred workplaces.

7.2. Call for Interdisciplinary Research and Continuous Updating

The responsible deployment of AI in occupational risk prevention requires an interdisciplinary scientific agenda that integrates engineering, ergonomics, psychology, ethics, law, and organizational management. Understanding emerging risks cannot be limited to technical analysis; it must also address the social, cognitive, and cultural dimensions of digitalized work. It is essential to promote applied research that assesses the real impact of intelligent systems on safety, mental health, and workplace equity. Prospective studies should address issues such as algorithmic governance, digital surveillance, the cognitive load associated with AI interaction, and the implications of autonomous systems for legal and preventive responsibility.
Likewise, the continuous updating of training in occupational risk prevention becomes a strategic pillar. Preventive digital literacy must extend across all organizational levels, incorporating critical thinking, evidence-based management, and an ethical understanding of technology. Only then will prevention professionals be able to anticipate and manage future risks with technical competence and strong human commitment.
In conclusion, AI will not replace the preventive function, but it will redefine its methods and responsibilities. Prevention in the twenty-first century will need to be more interdisciplinary, more ethical, and more adaptive, focused on accompanying technological change without abandoning its essential mission: protecting the life, health, and well-being of people at work.

7.3. The Growing Need to Manage the Emerging Risks of Transformative AI

If, to all the dysfunctions and issues linked to the emergence of ANI and AGI, we add the technical possibility of the rise of an artificial superintelligence (ASI) capable of surpassing humanity in idea generation and innovation, thereby decoupling economic growth from human contribution, the risks for humanity would be immense. The emergence of an economy and society driven by intelligent machines, based on their superior capacity to predict, create, and transform wealth, would not only constitute an existential risk but would also have unprecedented labour and social consequences.
It is necessary to redefine what work is, how tasks are allocated between humans and machines, the associated incentives and outcomes, and how work is socially organized. It is also crucial to address individual and social life in a transhumanist context of human–machine fusion, or in post-Anthropocene scenarios in which humans lose privileges to superior machines. Even without singularity or artificial-consciousness scenarios, the economy could render humanity redundant in many dimensions, particularly labour, with unprecedented impact. These extreme risks are no longer science fiction, as growing concern in the scientific community, including among technological and labour economists, makes clear; managing them must begin now.

Author Contributions

Conceptualization, X.B. and J.T.-S.; methodology, X.B. and J.T.-S.; validation, X.B. and J.T.-S.; formal analysis, X.B. and J.T.-S.; investigation, X.B. and J.T.-S.; resources, X.B. and J.T.-S.; data curation, X.B. and J.T.-S.; writing—original draft preparation, X.B. and J.T.-S.; writing—review and editing, X.B. and J.T.-S.; visualization, X.B. and J.T.-S.; supervision, X.B. and J.T.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Howard, J. Algorithms and the future of work. Am. J. Ind. Med. 2022, 65, 943–952. [Google Scholar] [CrossRef]
  2. Georgieff, A.; Hyee, R. Artificial intelligence and employment: New cross-country evidence. Front. Artif. Intell. 2022, 5, 832736. [Google Scholar] [CrossRef]
  3. OECD. Explanatory Memorandum on the Updated OECD Definition of an AI System; OECD Artificial Intelligence Papers No. 8; OECD Publishing: Paris, France, 2024. Available online: https://www.oecd.org/en/publications/explanatory-memorandum-on-the-updated-oecd-definition-of-an-ai-system_623da898-en.html (accessed on 15 November 2025).
  4. European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on Artificial Intelligence (AI Act); Official Journal of the European Union; European Union: Brussels, Belgium, 2024. Available online: https://artificialintelligenceact.eu/ai-act-explorer/ (accessed on 15 November 2025).
  5. UNESCO. Recommendation on the Ethics of Artificial Intelligence; UNESCO: Paris, France, 2021. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000381137 (accessed on 15 November 2025).
  6. Chhillar, D.; Aguilera, R.V. An eye for artificial intelligence: Insights into the governance of artificial intelligence and vision for future research. Bus. Soc. 2022, 61, 1197–1241. [Google Scholar] [CrossRef]
  7. Torrent-Sellens, J. Homo digitalis: Narrative for a new political economy of digital transformation and transition. New Political Econ. 2024, 29, 125–143. [Google Scholar] [CrossRef]
  8. Goldfarb, A.; Taska, B.; Teodoridis, F. Could machine learning be a general-purpose technology? A comparison of emerging technologies using data from online job postings. Res. Policy 2023, 52, 104653. [Google Scholar] [CrossRef]
  9. Torrent-Sellens, J. Digital transition, data-and-tasks crowd-based economy, and the shared social progress: Unveiling a new political economy from a European perspective. Technol. Soc. 2024, 79, 102739. [Google Scholar] [CrossRef]
  10. Wang, S.; Cooper, N.; Eby, M. From human-centered to social-centered artificial intelligence: Assessing ChatGPT’s impact through disruptive events. Big Data Soc. 2024, 11, 20539517241290220. [Google Scholar] [CrossRef]
  11. Eloundou, T.; Manning, S.; Mishkin, P.; Rock, D. GPTs are GPTs: Labor market impact potential of LLMs. Science 2024, 384, 1306–1308. [Google Scholar] [CrossRef]
  12. Erdil, E.; Besiroglu, T. Explosive growth from AI automation: A review of the arguments. arXiv 2023, arXiv:2309.11690. [Google Scholar] [CrossRef]
  13. Gmyrek, P.; Berg, J.; Bescond, D. Generative AI and Jobs: A Global Analysis of Potential Effects on Job Quantity and Quality; ILO Working Paper No. 96; International Labour Organization: Geneva, Switzerland, 2023. [Google Scholar]
  14. Autor, D.; Chin, C.; Salomons, A.; Seegmiller, B. New frontiers: The origins and content of new work, 1940–2018. Q. J. Econ. 2024, 139, 1399–1465. [Google Scholar] [CrossRef]
  15. Agrawal, A.K.; Gans, J.S.; Goldfarb, A. Genius on Demand: The Value of Transformative Artificial Intelligence; NBER Working Paper No. w34316; National Bureau of Economic Research: Cambridge, MA, USA, 2025. [Google Scholar]
  16. Growiec, J. Existential risk from a transformative AI: An economic perspective. Technol. Econ. Dev. Econ. 2024, 30, 1682–1708. [Google Scholar] [CrossRef]
  17. Jones, C.I. The A.I. dilemma: Growth versus existential risk. Am. Econ. Rev. Insights 2024, 6, 575–590. [Google Scholar] [CrossRef]
  18. Trammell, P.; Korinek, A. Economic Growth Under Transformative AI; NBER Working Paper No. 31815; National Bureau of Economic Research: Cambridge, MA, USA, 2023. [Google Scholar]
  19. Bengio, Y.; Hinton, G.; Yao, A.; Song, D.; Abbeel, P.; Darrell, T.; Harari, Y.N.; Zhang, Y.-Q.; Xue, L.; Shalev-Shwartz, S.; et al. Managing extreme AI risks amid rapid progress. Science 2024, 384, 842–845. [Google Scholar] [CrossRef] [PubMed]
  20. Acemoglu, D.; Lensman, T. Regulating transformative technologies. Am. Econ. Rev. Insights 2024, 6, 359–376. [Google Scholar] [CrossRef]
  21. Pishgar, M.; Issa, S.F.; Sietsema, M.; Pratap, P.; Darabi, H. REDECA: A novel framework to review artificial intelligence and its applications in occupational safety and health. Int. J. Environ. Res. Public Health 2021, 18, 6705. [Google Scholar] [CrossRef]
  22. Tang, K.H.D. Artificial intelligence in occupational health and safety risk management of construction, mining, and oil and gas sectors: Advances and prospects. J. Eng. Res. Rep. 2024, 26, 241–253. [Google Scholar] [CrossRef]
  23. Todolí-Signes, A. Making algorithms safe for workers: Occupational risks associated with work managed by artificial intelligence. Transf. Eur. Rev. Labour Res. 2021, 27, 433–452. [Google Scholar] [CrossRef]
  24. El-Helaly, M. Artificial intelligence and occupational health and safety: Benefits and drawbacks. Med. Lav. 2024, 115, e2024014. [Google Scholar] [CrossRef]
  25. Thangamani, R.; Suguna, R.K.; Kamalam, G.K. Drones and autonomous robotics incorporating computational intelligence. In Computational Intelligent Techniques in Mechatronics; Wiley: Hoboken, NJ, USA, 2024; pp. 243–296. [Google Scholar] [CrossRef]
  26. Jetha, A.; Bakhtari, H.; Irvin, E.; Biswas, A.; Smith, M.J.; Mustard, C.; Arrandale, V.H.; Dennerlein, J.T.; Smith, P.M. Do occupational health and safety tools that utilize artificial intelligence have a measurable impact on worker injury or illness? Findings from a systematic review. Syst. Rev. 2025, 14, 146. [Google Scholar] [CrossRef]
  27. Castillo, C.; Shahriari, M.; Casarejos, F.; Arezes, P. Prioritization of leading operational indicators in occupational safety and health. Int. J. Occup. Saf. Ergon. 2023, 29, 806–814. [Google Scholar] [CrossRef]
  28. Gallego, A.; Kurer, T. Automation, digitalization, and artificial intelligence in the workplace: Implications for political behavior. Annu. Rev. Political Sci. 2022, 25, 463–484. [Google Scholar] [CrossRef]
  29. Mathew, D.; Brintha, N.C.; Jappes, J.W. Artificial intelligence powered automation for Industry 4.0. In New Horizons for Industry 4.0 in Modern Business; Springer International Publishing: Cham, Switzerland, 2023; pp. 1–28. [Google Scholar] [CrossRef]
  30. Spring, M.; Faulconbridge, J.; Sarwar, A. How information technology automates and augments processes: Insights from artificial-intelligence-based systems in professional service operations. J. Oper. Manag. 2022, 68, 592–618. [Google Scholar] [CrossRef]
  31. Nicolescu, L.; Tudorache, M.T. Human–computer interaction in customer service: The experience with AI chatbots—A systematic literature review. Electronics 2022, 11, 1579. [Google Scholar] [CrossRef]
  32. Tschang, F.T.; Almirall, E. Artificial intelligence as augmenting automation: Implications for employment. Acad. Manag. Perspect. 2021, 35, 642–659. [Google Scholar] [CrossRef]
  33. Leyer, M.; Schneider, S. Decision augmentation and automation with artificial intelligence: Threat or opportunity for managers? Bus. Horiz. 2021, 64, 711–724. [Google Scholar] [CrossRef]
  34. Porkodi, S.; Cedro, T.L. The ethical role of generative artificial intelligence in modern HR decision-making: A systematic literature review. Eur. J. Bus. Manag. Res. 2025, 10, 44–55. [Google Scholar] [CrossRef]
  35. Bankins, S. The ethical use of artificial intelligence in human resource management: A decision-making framework. Ethics Inf. Technol. 2021, 23, 841–854. [Google Scholar] [CrossRef]
  36. Alves, M.; Seringa, J.; Silvestre, T.; Magalhães, T. Use of artificial intelligence tools in supporting decision-making in hospital management. BMC Health Serv. Res. 2024, 24, 1282. [Google Scholar] [CrossRef]
  37. Khosravi, M.; Zare, Z.; Mojtabaeian, S.M.; Izadi, R. Artificial intelligence and decision-making in healthcare: A thematic analysis of a systematic review of reviews. Health Serv. Res. Manag. Epidemiol. 2024, 11, 23333928241234863. [Google Scholar] [CrossRef]
  38. Alowais, S.A.; Alghamdi, S.S.; Alsuhebany, N.; Alqahtani, T.; Alshaya, A.I.; Almohareb, S.N.; Aldairem, A.; Alrashed, M.; Bin Saleh, K.; Badreldin, H.A.; et al. Revolutionizing healthcare: The role of artificial intelligence in clinical practice. BMC Med. Educ. 2023, 23, 689. [Google Scholar] [CrossRef]
  39. Patalas-Maliszewska, J.; Dudek, A.; Pajak, G.; Pajak, I. Working toward solving safety issues in human–robot collaboration: A case study for recognising collisions using machine learning algorithms. Electronics 2024, 13, 731. [Google Scholar] [CrossRef]
  40. Jung, K.; Yang, J.S. Mitigating safety challenges in human–robot collaboration: The role of human competence. Technol. Forecast. Soc. Change 2025, 213, 124022. [Google Scholar] [CrossRef]
  41. Jiang, Z.; Xue, H.; Yue, H.; Bao, X.; Zhu, J.; Wang, X.; Zhang, L. A review of artificial intelligence–driven active vibration and noise control. Machines 2025, 13, 946. [Google Scholar] [CrossRef]
  42. Adamopoulos, I.; Valamontes, A.; Tsirkas, P.; Dounias, G. Predicting workplace hazard, stress and burnout among public health inspectors: An AI-driven analysis in the context of climate change. Eur. J. Investig. Health Psychol. Educ. 2025, 15, 65. [Google Scholar] [CrossRef] [PubMed]
  43. Deng, S.; Ni, P.; Zhu, H.; Cai, Y.; Pan, Y. Artificial cognition to predict and explain the potential unsafe behaviors of construction workers. J. Constr. Eng. Manag. 2024, 150, 04024074. [Google Scholar] [CrossRef]
  44. Arthur, A.A.; Annankra, J.A.; Yakin, Z. Examining the role of AI and machine learning in improving hazard detection and predictive analytics for accident prevention in mining operations. World J. Adv. Eng. Technol. Sci. 2025, 15, 640–646. [Google Scholar] [CrossRef]
  45. Abioye, S.O.; Oyedele, L.O.; Akanbi, L.; Ajayi, A.; Delgado, J.M.D.; Bilal, M.; Akinade, O.O.; Ahmed, A. Artificial intelligence in the construction industry: A review of present status, opportunities and future challenges. J. Build. Eng. 2021, 44, 103299. [Google Scholar] [CrossRef]
  46. Klumpp, M. Automation and artificial intelligence in business logistics systems: Human reactions and collaboration requirements. Int. J. Logist. Res. Appl. 2018, 21, 224–242. [Google Scholar] [CrossRef]
  47. Richey, R.G., Jr.; Chowdhury, S.; Davis-Sramek, B.; Giannakis, M.; Dwivedi, Y.K. Artificial intelligence in logistics and supply chain management: A primer and roadmap for research. J. Bus. Logist. 2023, 44, 532–549. [Google Scholar] [CrossRef]
  48. Kediya, S.; Mohanty, V.; Saifee, M.; Kumar, R.; Agrawal, L.; Kulkarni, A. AI and the future of work in logistics: A Delphi study on workforce transformation. In Proceedings of the 2024 2nd DMIHER International Conference on Artificial Intelligence in Healthcare, Education and Industry (IDICAIEI), Wardha, India, 29–30 November 2024; IEEE: Piscataway, NJ, USA; pp. 1–6. [Google Scholar] [CrossRef]
  49. Daher, E.; Schoeib, S. Integrated, remote, digital–confined space monitoring in IR4. In Proceedings of the SPE International Conference and Exhibition on Health, Safety, Environment, and Sustainability, Abu Dhabi, United Arab Emirates, 10–12 September 2024; Paper D021S015R002. Society of Petroleum Engineers: Richardson, TX, USA. [Google Scholar] [CrossRef]
  50. Moura, D.R.; da Silva, P.D.; Gomes, R.C.; Alberto, P.; Siviero, F.M.; Calabria, L.; Hoentsch, K. Artificial intelligence platform for monitoring and risk prevention in confined spaces. In Proceedings of the Offshore Technology Conference, Houston, TX, USA, 5–8 May 2025; Paper D041S054R008. OTC: Houston, TX, USA. [Google Scholar] [CrossRef]
  51. Ollero, A.; Suarez, A.; Marredo, J.M.; Cioffi, G.; Penicka, R.; Vasiljevic, G.; Viguria, A. Application of intelligent aerial robots to the inspection and maintenance of electrical power lines. In Robotics and Automation Solutions for Inspection and Maintenance in Critical Infrastructures; Now Publishers: Norwell, MA, USA, 2024. [Google Scholar]
  52. Chan, V.C.; Ross, G.B.; Clouthier, A.L.; Fischer, S.L.; Graham, R.B. The role of machine learning in the primary prevention of work-related musculoskeletal disorders: A scoping review. Appl. Ergon. 2022, 98, 103574. [Google Scholar] [CrossRef]
  53. Jung, S.; Kim, B.; Kim, Y.J.; Lee, E.S.; Kang, D.; Kim, Y. Prediction of work-relatedness of shoulder musculoskeletal disorders using machine learning. Saf. Health Work 2025, 16, 113–121. [Google Scholar] [CrossRef] [PubMed]
  54. Shakerian, M.; Barakat, S.; Saber, E. Risk management of work-related musculoskeletal disorders using an artificial intelligence approach (Narrative review). J. Occup. Health Epidemiol. 2025, 14, 214–225. [Google Scholar] [CrossRef]
  55. Svertoka, E.; Saafi, S.; Rusu-Casandra, A.; Burget, R.; Marghescu, I.; Hosek, J.; Ometov, A. Wearables for industrial work safety: A survey. Sensors 2021, 21, 3844. [Google Scholar] [CrossRef] [PubMed]
  56. Naranjo, J.E.; Mora, C.A.; Bustamante Villagómez, D.F.; Mancheno Falconi, M.G.; Garcia, M.V. Wearable sensors in industrial ergonomics: Enhancing safety and productivity in Industry 4.0. Sensors 2025, 25, 1526. [Google Scholar] [CrossRef]
  57. Garrigou, A.; Daniellou, F.; Carballeda, G.; Ruaud, S. Activity analysis in participatory design and analysis of participatory design activity. Int. J. Ind. Ergon. 1995, 15, 311–327. [Google Scholar] [CrossRef]
  58. Barcellini, F.; Van Belleghem, L.; Daniellou, F. Design projects as opportunities for the development of activities. Constr. Ergon. 2014, 2014, 150–163. [Google Scholar]
  59. Koutroumpinas, P.; Zhang, Y.; Wallis, S.; Chang, E. An artificial intelligence empowered cyber physical ecosystem for energy efficiency and occupational health and safety. Energies 2021, 14, 4214. [Google Scholar] [CrossRef]
  60. Khurram, M.; Zhang, C.; Muhammad, S.; Kishnani, H.; An, K.; Abeywardena, K.; Chadha, U.; Behdinan, K. Artificial intelligence in manufacturing industry worker safety: A new paradigm for hazard prevention and mitigation. Processes 2025, 13, 1312. [Google Scholar] [CrossRef]
  61. Sawyer, B.D.; Miller, D.B.; Canham, M.; Karwowski, W. Human factors and ergonomics in design of A3: Automation, autonomy, and artificial intelligence. In Handbook of Human Factors and Ergonomics; Wiley: Hoboken, NJ, USA, 2021; pp. 1385–1416. [Google Scholar] [CrossRef]
  62. Mollaei, N.; Fujao, C.; Silva, L.; Rodrigues, J.; Cepeda, C.; Gamboa, H. Human-centered explainable artificial intelligence: Automotive occupational health protection profiles in prevention of musculoskeletal symptoms. Int. J. Environ. Res. Public Health 2022, 19, 9552. [Google Scholar] [CrossRef]
  63. Sundaram, S.; Zeid, A. Artificial intelligence-based smart quality inspection for manufacturing. Micromachines 2023, 14, 570. [Google Scholar] [CrossRef]
  64. Archana, T.; Stephen, R.K. The future of artificial intelligence in manufacturing industries. In Industry Applications of Thrust Manufacturing: Convergence with Real-Time Data and AI; IGI Global Scientific Publishing: Hershey, PA, USA, 2024; pp. 98–117. [Google Scholar] [CrossRef]
  65. Colim, A.; Faria, C.; Cunha, J.; Oliveira, J.; Sousa, N.; Rocha, L.A. Physical ergonomic improvement and safe design of an assembly workstation through collaborative robotics. Safety 2021, 7, 14. [Google Scholar] [CrossRef]
  66. Patil, S.; Vasu, V.; Srinadh, K.V.S. Advances and perspectives in collaborative robotics: A review of key technologies and emerging trends. Discov. Mech. Eng. 2023, 2, 13. [Google Scholar] [CrossRef]
  67. Mediavilla, M.A.; Dietrich, F.; Palm, D. Review and analysis of artificial intelligence methods for demand forecasting in supply chain management. Procedia CIRP 2022, 107, 1126–1131. [Google Scholar] [CrossRef]
  68. Bathla, G.; Bhadane, K.; Singh, R.K.; Kumar, R.; Aluvalu, R.; Krishnamurthi, R.; Kumar, A.; Thakur, R.N.; Basheer, S. Autonomous vehicles and intelligent automation: Applications, challenges, and opportunities. Mob. Inf. Syst. 2022, 2022, 7632892. [Google Scholar] [CrossRef]
  69. Li, Y.; He, J. A review of strategies to detect fatigue and sleep problems in aviation: Insights from artificial intelligence. Arch. Comput. Methods Eng. 2024, 31, 4655–4672. [Google Scholar] [CrossRef]
  70. Mehra, I.; Samuel, A.J. AI-driven autonomous vehicles: Safety, ethics, and regulatory challenges. J. Sci. Technol. Eng. Res. 2024, 2, 18–31. [Google Scholar] [CrossRef]
  71. Chauhan, A.S.; Singh, R.; Priyadarshi, N.; Twala, B.; Suthar, S.; Swami, S. Unleashing the power of advanced technologies for revolutionary medical imaging: Pioneering the healthcare frontier with artificial intelligence. Discov. Artif. Intell. 2024, 4, 58. [Google Scholar] [CrossRef]
  72. Khalifa, M.; Albadawy, M. Artificial intelligence for clinical prediction: Exploring key domains and essential functions. Comput. Methods Programs Biomed. Update 2024, 5, 100148. [Google Scholar] [CrossRef]
  73. Fagundes, T.P.; Wichmann, R.M.; Oliveira, T.A.D. Big data on occupational health: How far are we? Rev. Bras. Saúde Ocup. 2024, 49, edcinq11. [Google Scholar] [CrossRef]
  74. Popa, M.V.; Buzea, C.G.; Gurzu, I.L.; Salim, C.; Gurzu, B.; Rusu, D.I.; Ochiuz, L.; Duceac, L.D. An integrated AI framework for occupational health: Predicting burnout, long COVID, and extended sick leave in healthcare workers. Healthcare 2025, 13, 2266. [Google Scholar] [CrossRef]
  75. Heinold, E.; Rosen, P.H.; Wischniewski, S. Advanced robots in healthcare and their impact on the health and safety of medical workers. In Proceedings of the 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Pasadena, CA, USA, 26–30 August 2024; IEEE: Piscataway, NJ, USA; pp. 258–263. [Google Scholar] [CrossRef]
  76. Ahmadi, S. A comprehensive study on integration of big data and AI in financial industry and its effect on present and future opportunities. Int. J. Curr. Sci. Res. Rev. 2024, 7, 66–74. [Google Scholar] [CrossRef]
  77. Fettahoglu, S.; Yikilmaz, I. Reframing technostress for organizational resilience: The mediating role of techno-eustress in the performance of accounting and financial reporting professionals. Systems 2025, 13, 550. [Google Scholar] [CrossRef]
  78. Caligiore, D. Healing with Artificial Intelligence, 1st ed.; CRC Press: Boca Raton, FL, USA, 2025. [Google Scholar] [CrossRef]
  79. Tao, Y.; Hu, H.; Xue, J.; Zhang, Z.; Xu, F. Evaluation of ergonomic risks for construction workers based on multicriteria decision framework with the integration of spherical fuzzy set and alternative queuing method. Sustainability 2024, 16, 3950. [Google Scholar] [CrossRef]
  80. Berezutskyi, V. Assessing the risks of applying artificial intelligence to occupational safety. Technol. Audit Prod. Reserves 2025, 5, 26–32. [Google Scholar] [CrossRef]
  81. Farahani, M.; Ghasemi, G. Artificial intelligence and inequality: Challenges and opportunities. Int. J. Innov. Educ. 2024, 9, 78–99. [Google Scholar] [CrossRef]
  82. Faioli, M. Assessing risks and liabilities of AI-powered robots in the workplace: An EU–US comparison. Dirit. Sicur. Lav. 2025, 1, 79–113. [Google Scholar] [CrossRef]
  83. Bail, C.; Harth, V.; Mache, S. Digitalization in urology—A multimethod study of the relationships between physicians’ technostress, burnout, work engagement and job satisfaction. Healthcare 2023, 11, 2255. [Google Scholar] [CrossRef]
  84. Routray, R.; Choudhary, P.; Sinha, V. Intelligent technology and enhanced well-being: Can artificial intelligence mitigate digital overload? Future Bus. J. 2025, 11, 268. [Google Scholar] [CrossRef]
  85. Zhang, S.; Guo, P.; Yuan, Y.; Ji, Y. Anxiety or engaged? Research on the impact of technostress on employees’ innovative behavior in the era of artificial intelligence. Acta Psychol. 2025, 259, 105442. [Google Scholar] [CrossRef]
  86. Van Zoonen, W.; von Bonsdorff, M.E.; van der Heijden, B.I. Algorithmic surveillance and workers’ compliance: The role of trust, privacy concerns, and fairness in online crowdwork. Hum. Relat. 2025. OnlineFirst. [Google Scholar] [CrossRef]
  87. Benlian, A.; Wiener, M.; Cram, W.A.; Krasnova, H.; Maedche, A.; Möhlmann, M.; Recker, J.; Remus, U. Algorithmic management: Bright and dark sides, practical implications, and research opportunities. Bus. Inf. Syst. Eng. 2022, 64, 825–839. [Google Scholar] [CrossRef]
  88. Jarrahi, M.H.; Newlands, G.; Lee, M.K.; Wolf, C.T.; Kinder, E.; Sutherland, W. Algorithmic management in a work context. Big Data Soc. 2021, 8, 20539517211020332. [Google Scholar] [CrossRef]
  89. Jabagi, N.; Croteau, A.M.; Audebrand, L.K.; Marsan, J. Do algorithms play fair? Analysing the perceived fairness of HR decisions made by algorithms and their impacts on gig workers. Int. J. Hum. Resour. Manag. 2025, 36, 235–274. [Google Scholar] [CrossRef]
  90. Christenko, A. The complex relationship between automation and work intensity: Evidence from selected EU countries. Int. Rev. Appl. Econ. 2024, 38, 438–454. [Google Scholar] [CrossRef]
  91. Mojumder, M.U.; Ruddro, R.A. Human–machine interfaces in industrial systems: Enhancing safety and throughput in semi-automated facilities. Am. J. Interdiscip. Stud. 2023, 4, 1–26. [Google Scholar] [CrossRef]
  92. Natali, C.; Marconi, L.; Dias Duran, L.D.; Cabitza, F. AI-induced deskilling in medicine: A mixed-method review and research agenda for healthcare and beyond. Artif. Intell. Rev. 2025, 58, 356. [Google Scholar] [CrossRef]
  93. Khan, V. Artificial intelligence and gender bias: Analyzing algorithmic discrimination in language models. J. Gend. Power Soc. Transform. 2024, 1, 31–40. [Google Scholar]
  94. Kyriakidou, O. Algorithms and global diversity management. In Research Handbook on Global Diversity Management; Edward Elgar Publishing: Cheltenham, UK, 2025; pp. 148–163. [Google Scholar] [CrossRef]
  95. Nazer, L.H.; Zatarah, R.; Waldrip, S.; Ke, J.X.C.; Moukheiber, M.; Khanna, A.K.; Hicklen, R.S.; Moukheiber, L.; Moukheiber, D.; Ma, H.; et al. Bias in artificial intelligence algorithms and recommendations for mitigation. PLoS Digit. Health 2023, 2, e0000278. [Google Scholar] [CrossRef]
  96. Chhibber, S.; Rajkumar, S.R.; Dassanayake, S. Will artificial intelligence reshape the global workforce by 2030? A cross-sectoral analysis of job displacement and transformation. Blockchain Artif. Intell. Future Res. 2025, 1, 35–51. [Google Scholar] [CrossRef]
  97. Graham, C.M. AI skills in cybersecurity: Global job trends analysis. Inf. Comput. Secur. 2025, 33, 673–689. [Google Scholar] [CrossRef]
  98. Ersanlı, C.Y.; Çelik, F.; Barjesteh, H.; Duran, V.; Manoochehrzadeh, M. A review of global reskilling and upskilling initiatives in the age of AI. AI Ethics 2025, 5, 5719–5728. [Google Scholar] [CrossRef]
  99. Andino-González, P.; Vega-Muñoz, A.; Salazar-Sepúlveda, G.; Contreras-Barraza, N.; Lay, N.; Gil-Marín, M. Systematic review of studies using confirmatory factor analysis for measuring management skills in sustainable organizational development. Sustainability 2025, 17, 2373. [Google Scholar] [CrossRef]
  100. Cuadrado-Roura, J.R.; Kourtit, K.; Nijkamp, P. Spatial disparities, convergence and economic development: A global and local orientation. Ann. Reg. Sci. 2025, 74, 83. [Google Scholar] [CrossRef]
  101. Hasija, A.; Esper, T.L. In artificial intelligence (AI) we trust: A qualitative investigation of AI technology acceptance. J. Bus. Logist. 2022, 43, 388–412. [Google Scholar] [CrossRef]
  102. Rane, N.; Choudhary, S.P.; Rane, J. Acceptance of artificial intelligence: Key factors, challenges, and implementation strategies. J. Appl. Artif. Intell. 2024, 5, 50–70. [Google Scholar] [CrossRef]
  103. Zeb, S.; Lodhi, S.K. AI and cybersecurity in smart manufacturing: Protecting industrial systems. Am. J. Artif. Intell. Comput. 2025, 1, 1–23. [Google Scholar]
  104. Ogunmolu, A.M.; Olaniyi, O.O.; Popoola, A.D.; Olisa, A.O.; Bamigbade, O. Autonomous artificial intelligence agents for fault detection and self-healing in smart manufacturing systems. J. Energy Res. Rev. 2025, 17, 20–37. [Google Scholar] [CrossRef]
  105. Delanoë, P.; Tchuente, D.; Colin, G. Method and evaluations of the effective gain of artificial intelligence models for reducing CO2 emissions. J. Environ. Manag. 2023, 331, 117261. [Google Scholar] [CrossRef]
  106. Pimenow, S.; Pimenowa, O.; Prus, P. Challenges of artificial intelligence development in the context of energy consumption and impact on climate change. Energies 2024, 17, 5965. [Google Scholar] [CrossRef]
  107. Abisoye, A.; Akerele, J.I.; Odio, P.E.; Collins, A.; Babatunde, G.O.; Mustapha, S.D. Using AI and machine learning to predict and mitigate cybersecurity risks in critical infrastructure. Int. J. Eng. Res. Dev. 2025, 21, 205–224. [Google Scholar]
  108. Côté, D.; Gravel, S.; Gladu, S.; Bakhiyi, B.; Gravel, S. Worker health in formal electronic waste recycling plants. Int. J. Workplace Health Manag. 2021, 14, 292–309. [Google Scholar] [CrossRef]
  109. EU-OSHA. Foresight on New and Emerging Occupational Safety and Health Risks Associated with Digitalisation and Artificial Intelligence; European Agency for Safety and Health at Work: Bilbao, Spain.
  110. ILO. AI and Digitalization Are Transforming Safety and Health at Work; International Labour Organization, News/Policy Resource: Geneva, Switzerland, 2025.
  111. Mishiba, T. Transforming occupational health and safety regulation: Strategic pathways in the era of Industry 4.0. J. Work Health Saf. Regul. 2024, 3, 150–168. [Google Scholar] [CrossRef]
  112. Centers for Disease Control and Prevention. Exploring Approaches to Keep an AI-Enabled Workplace Safe for Workers; by John Howard and Paul A. Schulte; NIOSH Science Blog, 9 September 2024. Available online: https://www.cdc.gov/niosh/blogs/2024/ai-risk-management.html (accessed on 20 November 2025).
  113. Eom, T.; Im, S.; Lee, E.H.; Kim, R.J.; Ihm, J. Effectiveness of virtual reality-based real-time ergonomics training on dental posture improvement. Int. Dent. J. 2025, 75, 103908. [Google Scholar] [CrossRef]
  114. Hamilton, B.C.; Dairywala, M.I.; Highet, A.; Nguyen, T.C.; O’Sullivan, P.; Chern, H.; Soriano, I.S. Artificial intelligence–based real-time video ergonomic assessment and training improves resident ergonomics. Am. J. Surg. 2023, 226, 741–746. [Google Scholar] [CrossRef] [PubMed]
  115. Vinay, L.S.; Bhattacharjee, R.M.; Ghosh, N.; Kumar, S. Machine learning approach for the prediction of mining-induced stress in underground mines to mitigate ground control disasters and accidents. Geomech. Geophys. Geo Energy Geo Resour. 2023, 9, 159. [Google Scholar] [CrossRef]
  116. Mishiba, T.; Brun, E.; Anyfantis, I.; McGarry, F.; Kort, J.; Suzuki, K.; Furukawa, K.; Yamagiwa, K. International online conference on occupational health and safety policy in the artificial intelligence era. J. Work. Health Saf. Regul. 2025, 4, cor.25-008. [Google Scholar] [CrossRef]
  117. Lombardi, I.; Monaco, M.G.L.; Capece, S. European data and framework analysis of human–machine interaction in Manufacturing 4.0: An update. Chem. Eng. Trans. 2024, 111, 199–204. [Google Scholar] [CrossRef]
  118. Naidoo, C.M.; Obi, C.L.; Mkolo, N.M. Future trends and innovations: Exploring the future potential of AI in occupational health and safety. In Cases on AI Innovations in Occupational Health and Safety; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 115–140. [Google Scholar] [CrossRef]
  119. El Bouchikhi, M.; Weerts, S.; Clavien, C. Behind the good of digital tools for occupational safety and health: A scoping review of ethical issues surrounding the use of the Internet of Things. Front. Public Health 2024, 12, 1468646. [Google Scholar] [CrossRef]
  120. Qawqzeh, Y.; Shraah, A.A.; Rizwan, A.; Sánchez-Chero, M.; More, L.A.V.; Shabaz, M. Exploring the effectiveness of virtual reality-based training for sustainable health and occupational safety in Industry 4.0. Sci. Rep. 2025, 15, 28930. [Google Scholar] [CrossRef]
  121. Sharmista, A.; Jeremy, N. Global Trends in AI Governance: Evolving Country Approaches (English); World Bank Group: Washington, DC, USA. Available online: http://documents.worldbank.org/curated/en/099120224205026271 (accessed on 20 November 2025).
  122. Acemoglu, D.; Johnson, S. Poder y Progreso: Nuestra Lucha Milenaria Por la Tecnología y la Prosperidad; Deusto/Planeta: Barcelona, Spain, 2023. [Google Scholar]
  123. Aghion, P.; Antonin, C.; Bunel, S. El Poder de la Destrucción Creativa: ¿Qué Impulsa el Crecimiento Económico? Deusto/Planeta: Barcelona, Spain, 2021. [Google Scholar]
  124. Torrent-Sellens, J.; Díaz-Chao, A.; Miró-Pérez, A.P.; Sainz, J. Towards the Tyrell corporation? Digitisation, firm size and productivity divergence in Spain. J. Innov. Knowl. 2022, 7, 100185. [Google Scholar] [CrossRef]
  125. Castellani, D.; Lamperti, F. Aggregate Megatrends and the Risk of Labour Market Exclusion Across Europe; European Union: Brussels, Belgium; European Research Executive Agency: Brussels, Belgium, 2024. [Google Scholar]
  126. Margaryan, A. Artificial intelligence and skills in the workplace: An integrative research agenda. Big Data Soc. 2023, 10, 20539517231206804. [Google Scholar] [CrossRef]
  127. Muldoon, J.; Graham, M.; Cant, C. Feeding the Machine: The Hidden Human Labour Powering AI; Canongate Books: Edinburgh, UK, 2024. [Google Scholar]
  128. OECD. Job Creation and Local Economic Development 2024: The Geography of Generative AI; OECD Publishing: Paris, France, 2024. [CrossRef]
  129. Capraro, V.; Lentsch, A.; Acemoglu, D.; Akgun, S.; Akhmedova, A.; Bilanchini, E.; Bonnefon, J.-F.; Brañas-Garza, P.; Butera, L.; Douglas, K.M.; et al. The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS Nexus 2024, 3, pgae191. [Google Scholar] [CrossRef]
  130. Grace, K.; Stewart, H.; Sandkühler, J.F.; Thomas, S.; Weinstein-Raun, B.; Brauner, J. Thousands of AI authors on the future of AI. arXiv 2024, arXiv:2401.02843. [Google Scholar] [CrossRef]
  131. Bubeck, S.; Chandrasekaran, V.; Eldan, R.; Gehrke, J.; Horvitz, E.; Kamar, E.; Lee, P.; Lee, Y.T.; Li, Y.; Lundberg, S.; et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv 2023, arXiv:2303.12712. [Google Scholar] [CrossRef]
  132. Korinek, A.; Juelfs, M. Preparing for the (non-existent?) future of work. In The Oxford Handbook of AI Governance; Bullock, J.B., Ed.; Oxford University Press: Oxford, UK, 2022; pp. 746–776. [Google Scholar] [CrossRef]
  133. Belk, R.W.; Humayum, M.; Gopaldas, A. Artificial life. J. Macromark. 2020, 40, 221–236. [Google Scholar] [CrossRef]
Table 1. Definitions of artificial intelligence according to leading international organizations and institutions.

Institution: OECD (p. 3) [3]
Main definition: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
Key elements/nuances:
  • “Explicit” or “implicit” objectives.
  • Produces outputs such as predictions, recommendations, decisions, or content.
  • Influences physical or virtual environments.
  • Varies in autonomy and in its ability to adapt after deployment (“autonomy” and “adaptiveness”).
  • Is “machine-based,” that is, systems grounded in machines/computation.

Institution: European Union (Chapter 1, Article 3) [4]
Main definition: “An AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
Key elements/nuances: very similar to the OECD definition (co-aligned). It adds:
  • “Designed to operate with various levels of autonomy.”
  • Adaptability (“adaptiveness”) after deployment.
  • Explicit recognition of input and output, and their effect on physical or virtual environments.
  • Clear indication that objectives may be explicit or implicit.

Institution: UNESCO [5]
Main definition: “Built from data, hardware and connectivity, AI allows machines to mimic human intelligence such as perception, problem-solving, linguistic interaction or creativity.”
Key elements/nuances: this definition places emphasis on:
  • Imitation of human intelligence functions: perception, problem-solving, linguistic interaction, creativity.
  • Constitutive elements: data, hardware, connectivity.
  • A more descriptive focus on capabilities (what AI does) rather than on internal functioning or the degree of autonomy/adaptability.
Table 2. AIs, Firms, and Employment: Characteristics, Social Challenges, and Extreme Risks.

Type of AI: PAI (Predictive AI)
Characteristics:
  • Adoption through corporate decision
  • Predictive value
  • Complementarities between people, organizations, and PAI
  • Uses in automation and control
  • Increases in average productivity and skilled employment
Social challenges:
  • Sustained increases in marginal productivity
  • Ability to create new tasks and jobs
  • Redirecting PAI toward the needs of people and companies
  • Redirecting people and companies toward hybrid employment with PAI
Extreme risks:
  • Massive substitution of routine tasks
  • Increase in labour-market polarization and wage inequality
  • Corporate polarization: political and market power of superstar firms

Type of AI: GAI (Generative AI)
Characteristics:
  • Adoption through individual decision
  • Creative value
  • Complementarities of intelligences between humans and AI (centaurs vs. cyborgs)
  • Uses for employment augmentation
  • Increases in the speed, quality, and productivity of tasks
Social challenges:
  • Redirecting the highly unequal distribution of GAI benefits
  • Undertaking the large-scale reconversion of cognitive work
  • Increasing people’s STEM and social skills to interact with the entire GAI chain
  • Building GAI for public value
Extreme risks:
  • Decline in job quality
  • Polarization and inequality between individuals and companies
  • Expansion of inequality of access to other social sectors (education and health)
  • Political polarization and democratic deterioration

Type of AI: TAI (Transformative (Agentic) AI)
Characteristics:
  • Adoption through individual, corporate, and societal decision
  • Transformative value
  • Complementarities between people, organizations, society, and AI
  • Uses for social transformation
Social challenges:
  • Ethical, inclusive, and responsible TAI at individual and societal levels
  • Political economy of TAI production and adoption
  • Large benefits with extreme risks
Extreme risks:
  • Managing decoupled growth between people and machines
  • Redefining work and its economic and social role
  • Managing the loss of economic and labour value of people
  • Controlling the social and political power of TAI
  • Preparing humanity for the society of intelligent machines and the post-Anthropocene
