Article

ROS-Compatible Robotics Simulators for Industry 4.0 and Industry 5.0: A Systematic Review of Trends and Technologies

by Jose M. Flores Gonzalez, Enrique Coronado * and Natsuki Yamanobe
National Institute of Advanced Industrial Science and Technology (AIST), Tokyo 135-0064, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(15), 8637; https://doi.org/10.3390/app15158637
Submission received: 18 June 2025 / Revised: 28 July 2025 / Accepted: 29 July 2025 / Published: 4 August 2025
(This article belongs to the Special Issue Intelligent Robotics in the Era of Industry 5.0)

Abstract

Simulators play a critical role in the development and testing of Industry 4.0 and Industry 5.0 applications. However, few studies have examined their capabilities beyond physics modeling, particularly in terms of connectivity and integration within broader robotic ecosystems. This review addresses this gap by focusing on ROS-compatible simulators. Using the SEGRESS methodology in combination with the PICOC framework, this study systematically analyzes 65 peer-reviewed articles published between 2021 and 2025 to identify key trends, capabilities, and application domains of ROS-integrated robotic simulators in industrial and manufacturing contexts. Our findings indicate that Gazebo is the most commonly used simulator in Industry 4.0, primarily due to its strong compatibility with ROS, while Unity is most prevalent in Industry 5.0 for its advanced visualization, support for human interaction, and extended reality (XR) features. Additionally, the study examines the adoption of ROS and ROS 2, and identifies complementary communication and integration technologies that help address the current interoperability challenges of ROS. These insights are intended to inform researchers and practitioners about the current landscape of simulation platforms and the core technologies frequently incorporated into robotics research.

1. Introduction

Industry 4.0, formally defined in 2011, transformed manufacturing and society by integrating emergent technologies [1]. This paradigm prioritized efficiency and high-quality production through the adoption of Artificial Intelligence (AI), Big Data Analytics, the Internet of Things (IoT), edge computing, cloud computing, and cyber–physical systems (CPSs) [2,3,4]. However, Industry 4.0’s strong emphasis on automation and optimization has raised ethical and sustainability concerns, potentially threatening the future of human labor and the environment [5]. In response to these challenges, the concept of Industry 5.0 has gained significant attention in recent years [6]. Industry 5.0 builds upon Industry 4.0 but shifts the focus from pure automation to human well-being, resilience, and sustainability, emphasizing human roles, skills, and rights in the workplace [5,6,7]. Driven by emerging applications such as assistive technologies, human–robot collaboration (HRC), and human digital twins [4,8,9], Industry 5.0 aims to augment rather than replace human capabilities with intelligent robotic systems [7].
A key challenge in developing Industry 4.0/5.0 applications is the safe and seamless integration of intelligent robotic systems into real-world environments [5,9]. As robots become increasingly embedded in social, service, and industrial settings, their capabilities must extend beyond basic automation to enable advanced perception, decision-making, and adaptability. In this context, the Robot Operating System (ROS) has emerged as a fundamental framework for robotics development, providing a modular, open-source architecture that facilitates communication, visualization, and algorithm integration. However, deploying intelligent robotic systems requires rigorous development, testing, and validation of complex algorithms for vision, navigation, localization, mapping, and control. Moreover, early-stage research and real-world deployment of intelligent robotic systems can be risky and resource-intensive. To mitigate these challenges, robotic simulators play a crucial role by offering a safe and cost-effective environment for refining and validating algorithms before physical implementation [3]. Additionally, simulators enhance collaboration among research groups, accelerating development timelines and fostering innovation [6,10,11].
Given the complexity and multifaceted nature of Industry 4.0/5.0 applications, selecting the appropriate simulators requires a comprehensive approach. Therefore, this article aims to systematically examine the role of robotic simulators in the context of Industry 4.0/5.0. This review specifically focuses on ROS-enabled simulation platforms due to ROS’s critical role in modern robotics research and deployment.

Related Work

Few recent studies have thoroughly analyzed and compared robotic simulation platforms from diverse perspectives. For example, Liu et al. [12] discussed how simulation systems can be used to model robotic actuators, sensors, environments, and human interactions. They emphasized that effective simulation requires accurately representing four key physical elements: rigid bodies, deformable bodies, fluids, and granular materials. However, their study primarily focused on the physics of simulation, overlooking critical aspects for building Industry 4.0/5.0 applications, such as connectivity and interoperability. Similarly, Collins et al. [13] analyzed physics-based simulators across various autonomous robotics sub-domains, including mobile ground robotics, soft robotics, medical robotics, marine robotics, and aerial robotics, as well as applications such as robotic manipulation and learning. Their study provided a comparative analysis of key simulator capabilities, with a strong emphasis on sensors, actuators, and environmental modeling. Baratta et al. [10] reviewed commercial simulation software for digital twin development in manufacturing, analyzing eight simulation tools. They identified five key requirements for simulation in HRC: human operator simulation, robot agent simulation, ergonomic analysis, time measurement, and data exchange/interoperability. Unlike their study, which focused solely on commercial simulation software, our analysis includes commercial, free-to-use, and open-source platforms. Furthermore, we emphasize simulators that support ROS, enabling seamless connectivity with robotic modules developed by the research community, industry, and broader innovation ecosystems. Kargar et al. [11] conducted a systematic review of ROS-enabled simulation platforms, focusing on AI-driven wheeled mobile robots (WMRs). Their analysis covered WMR applications, perception and control tasks, commonly used platforms, AI-enhanced simulation tools, and human-in-the-loop (HITL) testing. Their comparison between simulators emphasized physics engines, programming language support, and open-source availability. In contrast, our study extends this evaluation to Industry 4.0/5.0 applications.
Table 1 summarizes the key differences between previous surveys in robotics simulation and this study. As shown in the table, there is a clear need for an updated systematic review that holistically analyzes the use of robotics simulators for the development and testing of Industry 4.0/5.0 applications. Therefore, this article aims to fill this research gap.

2. Methodology

As reported in [14], most secondary studies in software engineering are qualitative in nature. In contrast, PRISMA, one of the most widely known frameworks for conducting systematic reviews, is primarily oriented toward quantitative reviews, mixed-methods syntheses, and meta-analyses, particularly in health and clinical research domains. To address the unique needs of engineering-focused reviews, this study adopts the SEGRESS methodology (Software Engineering Guidelines for REporting Secondary Studies) [15], a framework specifically developed for systematic reviews in software engineering and technology-oriented disciplines. SEGRESS builds upon and adapts many of the core principles of PRISMA, including transparent study selection, clearly defined inclusion and exclusion criteria, and the use of the PRISMA-style flow diagram to report the different stages of the literature search process. SEGRESS provides tailored guidance for analyzing studies that focus on tools, platforms, architectures, and integration strategies, making it particularly well-suited for domains such as robotics simulation, where traditional meta-analytic approaches are often infeasible due to the diversity of study designs, evaluation methods, and reporting practices.

2.1. Research Questions

To ensure a systematic and transparent scope, this review integrates the SEGRESS methodology with the PICOC framework. SEGRESS provides the overall structure for conducting systematic reviews in software engineering, including the stages of planning, execution, and reporting. Within this structure, PICOC (Population, Intervention, Comparison, Outcome, and Context) is applied during the planning stage to guide the formulation of research questions and the definition of inclusion and exclusion criteria.
In this study, the Population consists of ROS-compatible robotics simulators used in the development of applications related to Industry 4.0 and Industry 5.0. The Intervention involves evaluating these simulation tools by analyzing their technical features, support for various robot configurations, and integration with key enabling technologies. The Comparison focuses on identifying the advantages and limitations of different simulators in relation to interoperability, technical requirements, and suitability for specific use cases within industrial contexts. The expected Outcome is a structured understanding of the most frequently used ROS-compatible simulators, the technologies they enable or integrate, and their role in advancing Industry 4.0 and Industry 5.0 applications. The Context of the review includes industrial and manufacturing environments, where simulation is a critical component in the development, testing, and deployment of robotic systems.
By combining SEGRESS and PICOC, this review ensures that the research questions are both methodologically rigorous and practically relevant. Based on this structured approach, the following research questions were formulated:
RQ1: What ROS-compatible simulator platforms have been used for developing Industry 4.0/5.0 applications? RQ1 aims to identify the most commonly used ROS-compatible simulation platforms employed in the development of Industry 4.0/5.0 applications. Additionally, it seeks to present the reported advantages and disadvantages of each platform.
RQ2: What are the most common applications in the Industry 4.0/5.0 domains developed using these simulators? RQ2 aims to identify and categorize the most frequently developed Industry 4.0/5.0 applications that leverage ROS-compatible simulation platforms. This research question seeks to identify the primary use cases in different robotics domains.
RQ3: What are the commonly used robot configurations and key technologies that are integrated in these applications? RQ3 aims to identify the predominant robot configurations employed in Industry 4.0/5.0 applications. Additionally, this research question investigates the core technologies that enable these robotic systems, specifically focusing on sensing, perception, planning, mapping and control.
RQ4: How have ROS 1 and ROS 2 been adopted in Industry 4.0/5.0 applications? RQ4 aims to examine the adoption trends of ROS 1 and ROS 2 in Industry 4.0/5.0 applications, identifying key differences and limitations of ROS 1 and ROS 2.
RQ5: What are the additional communication and integration technologies that enable compatibility between simulators and modules used for developing Industry 4.0/5.0 applications? RQ5 investigates methods that facilitate efficient, scalable, and interoperable robotic systems. This research question aims to identify how state-of-the-art technologies have addressed the limitations of ROS and ROS 2 regarding connectivity.

2.2. Search Strategy and Study Selection

The search terms were carefully crafted to align with the research questions, aiming to strike a balance between broad retrieval and precise filtering to ensure the inclusion of the most relevant literature. Adjustments to the search terms were made iteratively. Table 2 presents a summary of the keyword combinations and specific filters applied to each database. The search for relevant peer-reviewed journal articles was conducted in January 2025 using the IEEE Xplore, ACM Digital Library, SpringerLink, and ScienceDirect databases. Because of the high concentration of relevant articles initially found in ScienceDirect, a second round of searches was conducted; in this phase, the MDPI database, which was not included in the initial search, was also incorporated to improve coverage. This second round identified 226 additional candidate articles in ScienceDirect and 9 in MDPI. Since the initial search string yielded only a small number of results in MDPI, a broader query was also run for that database, returning 126 results. The search window begins in 2021, aligning with the formal introduction and growing academic focus on the Industry 5.0 paradigm [5]. The literature prior to 2021 was excluded to maintain alignment with the specific objectives of this review, which aims to capture recent trends, technologies, and use cases that reflect the shift from Industry 4.0 to Industry 5.0.
The proposed inclusion and exclusion criteria are outlined in Table 3. The article selection process is illustrated in Figure 1. We screened the titles, abstracts, and keywords of the articles to identify those containing relevant terms related to robotics and simulation, in accordance with the proposed inclusion criteria. Subsequently, the exclusion criteria were applied after obtaining and reviewing the full articles to eliminate those that were out of scope or lacked sufficient technical details and quality. During the full-text review, a quality assessment was conducted based on the following questions:
  • Was the article published in a reputable, peer-reviewed journal (preferably indexed and with an impact factor)?
  • Does the study present a clearly described system architecture or implementation?
  • Is the research grounded in applied science and relevant to real-world applications?
To mitigate potential biases, the screening process was conducted in multiple rounds by multiple reviewers. Any disagreements between reviewers were resolved through discussion.
Table 3. Inclusion and exclusion criteria.
Inclusion Criteria
(1) Studies proposing robotics simulation frameworks for intelligent robotics, HRI, human–robot collaboration, or industrial automation.
(2) Research integrating simulation-related technologies such as digital twins, cyber–physical systems, or extended reality.
Exclusion Criteria
(1) Articles that lack a clear application within Industry 4.0 (automation and smart manufacturing) or Industry 5.0 (human-centric collaboration).
(2) Studies focusing primarily on robot learning or data augmentation without practical validation in industrial settings.
(3) Articles focusing solely on unrelated domains, including entertainment, education, search and rescue, aerial robotics, or medical robotics.
(4) Studies lacking explicit details about the used simulator, its integration with robotic systems, or its role in an industrial workflow.
(5) Research that does not use or integrate ROS, ROS 2, or other widely adopted robotics middleware for industrial applications.
(6) Studies relying primarily on 3D modeling tools (e.g., Blender) without actual simulation or control of robotic systems.
(7) Articles unavailable in full text or published in languages other than English, conference articles, book chapters, technical reports, and short articles (fewer than six pages).
Figure 1. PRISMA flow diagram of the article selection process.

2.3. Data Extraction and Synthesis

We ultimately identified a total of 65 primary studies for in-depth analysis. Of these 65 articles, 27 were classified under Industry 5.0 purposes, while 38 were categorized under Industry 4.0. Most of the identified articles were published in the Journal of Robotics and Computer-Integrated Manufacturing (n = 10), the Journal of Manufacturing Systems (n = 9), and Procedia CIRP (n = 7). During the full-text review, we conducted a systematic data extraction process to gather all relevant information necessary for a comprehensive and insightful analysis of our research questions. The extracted data included key attributes such as article ID, title, publication year, venue, and publisher. Additionally, we collected details on the study’s objectives, contributions, and the simulators used. Other extracted information encompassed the integration of ROS/ROS 2, the types of robots and applications examined, and aspects related to sensing, perception, mapping, decision making, and control. We also documented the communication methods used (wired or wireless) and the software tools implemented to address limitations of ROS/ROS 2.

3. RQ1: What ROS-Compatible Simulator Platforms Have Been Used for Developing Industry 4.0/5.0 Applications?

The results shown in Figure 2 and Figure 3 indicate that Gazebo is the most commonly used simulator platform for developing Industry 4.0 applications, with 18 studies implementing this tool. The next most frequently used simulators are Unity (n = 10), PyBullet (n = 2), MuJoCo (n = 2), and Isaac Sim (n = 1). For Industry 5.0 applications, Unity was by far the most widely used simulator, implemented in 18 articles. Other notable simulators implemented for building Industry 5.0 applications include Gazebo, Visual Components, and Unreal Engine 4, which were used in [16], [17], and [18], respectively.

3.1. Description of Common Simulation Platforms

A comparison of the technical and usability features of common simulation platforms is provided in Table 4 and Table 5. Moreover, Table 6 summarizes the advantages and disadvantages of some of the simulators reported in the reviewed literature. We find that each simulator offers unique strengths (for instance, Gazebo’s strong ROS integration, Unity’s immersive visualization, or Isaac Sim’s high-fidelity physics), but also presents notable limitations, such as high computational demands, reduced accessibility, or limited realism in certain scenarios.
Gazebo is one of the most widely used robotics simulators, mainly due to its deep integration with ROS and its versatility [19]. However, its high computational demands can be a limiting factor, particularly for real-time applications requiring high-resolution sensor models and large-scale environments. Multiple studies reported challenges related to real-time performance, sim-to-real transfer fidelity, and computational resource constraints when using Gazebo [16,25,26,27,28,29,30,31,32]. Moreover, Terei et al. [33] identified several limitations of Gazebo for micro-assembly applications, including challenges in simulating realistic textures, telecentric camera imagery, and micro-scale physical phenomena.
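As a minimal illustration of the ROS integration mentioned above, the following sketch subscribes to the model states that the gazebo_ros plugin publishes during a simulation. It assumes a running ROS 1 system with Gazebo launched through gazebo_ros; the node name is arbitrary and the snippet is illustrative rather than a reproduction of any reviewed system.

```python
# Minimal sketch: reading simulated model poses from a running Gazebo/ROS 1 setup.
# Assumes the gazebo_ros plugin is active so that /gazebo/model_states is published.
import rospy
from gazebo_msgs.msg import ModelStates

def on_model_states(msg):
    # msg.name and msg.pose are parallel lists of model names and poses
    for name, pose in zip(msg.name, msg.pose):
        rospy.loginfo_throttle(
            5.0, "%s at x=%.2f y=%.2f" % (name, pose.position.x, pose.position.y))

if __name__ == "__main__":
    rospy.init_node("gazebo_state_listener")
    rospy.Subscriber("/gazebo/model_states", ModelStates, on_model_states)
    rospy.spin()
```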
Unity has gained popularity for its high-quality visualization capabilities and applications in human–robot interaction (HRI) and extended reality (XR). It has been widely used in collaborative robotic training and immersive simulation environments [21]. However, its primary focus on visualization rather than physics simulation makes it less suitable for precise robotic modeling. Several studies have identified issues with calibration drift in AR tracking, synchronization inconsistencies in VR-based simulations, and high computational demands when integrating Unity into real-time or multi-agent setups [9,20,21,34,35,36,37,38,39,40,41].
Unreal Engine is more commonly used for video game creation; however, its graphics and physics engines have proven useful in robotics applications. Additionally, it offers a vast array of tools for developing interactive systems [24]. Nonetheless, several limitations have been observed in robotics contexts: Unreal Engine requires precise calibration with real-world systems, offers low adaptability to dynamic environments, and can be sensitive to changes in the physical setup, which may hinder long-term or reconfigurable experiments [18].
Visual Components is widely used in industrial robotics for factory automation and digital twin modeling. Its flexible workflow supports dynamic task allocation and robotic layout optimization [24]. However, its adoption outside of industrial automation remains limited, reducing its applicability to broader robotics research. Additionally, it is often criticized for its restricted extensibility, particularly in applications requiring deep customization or integration with open-source control frameworks [17].
NVIDIA’s Isaac Sim is highly regarded for its use of the PhysX physics engine and its compatibility with AI and reinforcement learning frameworks. It excels in large-scale industrial robotics applications, particularly those requiring GPU acceleration for high-performance simulation [22]. Nevertheless, its reliance on high-end computing hardware can be a barrier for smaller research teams, and scalability issues have been reported when applied to large and complex simulation environments [42].
MuJoCo is a physics-based simulator designed for reinforcement learning and complex robotic dynamics modeling. It offers high-precision collision detection and motion planning capabilities [22]. However, it has been noted to struggle with performance in large-scale simulations, especially under dynamic environments. Additionally, limitations have been observed in sim-to-real transfer, which can reduce its effectiveness for real-world deployment in some contexts [43].
A novel solution observed in the surveyed articles is the integration of multiple simulators to overcome individual limitations. For instance, Gazebo and Unity have been combined to take advantage of Gazebo’s high physics accuracy while utilizing Unity’s advanced visualization capabilities for robotic training [44]. This hybrid approach enables researchers to leverage the strengths of different platforms, enhancing both simulation realism and ease of deployment. Box 1 summarizes the findings of RQ1.
Box 1. Summary of Findings for RQ1.
Among the ROS-compatible simulators identified in our review, Gazebo emerged as the most widely used simulator for Industry 4.0 applications due to its strong ROS integration. In contrast, Unity was preferred for building Industry 5.0 applications due to its visualization capabilities in HRI and XR. Other simulators such as PyBullet, MuJoCo, Isaac Sim, Unreal Engine, and Visual Components were used less frequently, each offering specific strengths (e.g., physics realism, GPU acceleration, or industrial layout modeling) alongside trade-offs like limited extensibility, scalability issues, or hardware requirements. A strategy to overcome these limitations is the combined use of multiple simulators (e.g., Gazebo + Unity), enabling researchers to leverage the strengths of each platform.

3.2. Domain-Specific Robotic Simulation Tools

In addition to the commonly used simulation environments identified in our review, we found several specialized tools designed for specific roles in robotics and industrial applications. For example, MSC ADAMS is a multi-body dynamics simulator that models systems with many degrees of freedom and complex contacts. Zhou et al. [45] used it to develop a coupled dynamic model of a high DOF collaborative mobile manipulator, which improved the accuracy of physical interactions between humans and robots. EMA Work Designer is a digital factory and ergonomic analysis tool that can perform certified MTM evaluations. Glogowski et al. [46] employed it to integrate adaptive robot speed planning in all trajectories, ensuring safe collaboration by continuously monitoring speed and separation. AnyLogic is a multi-method simulation platform that combines discrete events, system dynamics, and agent-based techniques. Cimino et al. [47] leveraged it to build and validate a digital twin that helps make decisions in automotive assembly.
Other notable tools include an AI-supported digital representation of the shop floor, used by Katsampiris et al. [48] to predict human trajectories through infrared sensing for safer robot path planning. MATLAB’s Simscape Multibody was used by Almaghout et al. [49] to control large deformations in linear deformable objects during collaborative manipulation. Klampt is an open-source planning toolkit, integrated by Winter et al. [50] with reinforcement learning and a digital twin to optimize assembly sequences and reduce manual programming. Rhino Grasshopper, combined with NVIDIA Flex, was utilized by Felbrich et al. [51] for GPU-accelerated particle simulations that support autonomous additive manufacturing. Fanuc ROBOGUIDE was used by Touzani et al. [52] for offline collision-aware task scheduling in multi-robot industrial cells. MATLAB Simulink with Simscape and RViz/Gazebo was employed by Antonelli et al. [53] to evaluate the fidelity and limitations of digital twins for mobile manipulators. Lastly, ABB’s RobotStudio was used by Loppenberg et al. [54] to speed up reinforcement learning training for dynamic routing problems through state space decomposition.
DhaibaWorks is a digital human modeling and ergonomic assessment platform developed by AIST, primarily used for virtual ergonomic analysis, full-body kinematic and dynamic simulation, and human-centered design in industrial environments (e.g., [55]). Maruyama et al. [55] integrated DhaibaWorks with a ROS-based virtual robot to create a digital twin capable of real-time joint torque estimation, ergonomic evaluation, and dynamic task scheduling in a collaborative parts-picking scenario, effectively bridging physical human workload sensing with virtual robot control. Shopfloor is an AI-enhanced shop-floor simulation tool focused on 3D factory layout and process modeling, integrating data from infrared or depth sensors to monitor human motion and support virtual commissioning. Katsampiris et al. [48] leveraged Shopfloor alongside AI and infrared imaging to predict operator trajectories in human–robot collaborative environments, enabling dynamic robot path adjustments that enhance both safety and productivity.

4. RQ2: What Are the Most Common Industry 4.0/5.0 Applications That Have Been Developed Using These Simulators?

In the context of Industry 4.0, the results highlight the diverse applications and tasks addressed using ROS-enabled simulators across the reviewed articles, as summarized in Table 7. Among these, autonomous assembly stands out as one of the most frequently simulated tasks, appearing prominently across multiple studies. For instance, Niermann et al. [56] introduced a digital twin model that enables accurate object pose estimation, supporting the intelligent and precise assembly of printed circuit boards (PCBs). Zhang et al. [57] proposed a residual reinforcement learning policy that integrates visual and force feedback to improve robotic assembly performance, validated in both real-world and simulated environments using PyBullet. Furthermore, Winter et al. [50] employed a digital twin in simulation to autonomously learn efficient, collision-free assembly plans from demonstrations, aiming to reduce programming complexity.
Beyond assembly, pick-and-place operations are also heavily represented due to their relevance in packaging, part handling, and inspection. In this context, Antonelli et al. [53] used MATLAB/Simscape to simulate force-based grasping in semi-structured environments, while Li et al. [32] integrated Gazebo and ROS for executing autonomous grasping pipelines that combine perception with adaptive manipulation. Autonomous navigation and path planning were similarly prominent, particularly in logistics and warehouse contexts. Horelican et al. [19] used Gazebo to simulate obstacle-aware multi-robot exploration, and Mi et al. [27] validated adaptive path recovery strategies through the ROS Navigation Stack. These efforts extend naturally to multi-robot coordination, as illustrated by Keung et al. [58], who proposed an edge-based system for real-time task distribution and synchronization across multiple agents.
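For reference, the sketch below shows how a navigation goal is typically sent to the ROS Navigation Stack (move_base) used in studies such as [27]. It is an illustrative snippet, assuming a ROS 1 setup with move_base already running and a map frame available; the target coordinates are placeholders.

```python
# Minimal sketch: sending a navigation goal through the ROS Navigation Stack (move_base).
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("send_nav_goal")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0   # hypothetical target coordinates
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("Navigation finished with state %d", client.get_state())
```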
Table 7. Summary of Industry 4.0 articles with simulators and applications.
Reference | Simulator | Application
[59] | PyBullet | Assembly, force-based control, vision-guided manipulation
[19] | Gazebo | Multi-robot navigation, autonomous path planning, obstacle avoidance
[25] | Gazebo | Monitoring, inspection, mapping
[22] | MuJoCo | Assembly
[57] | PyBullet | Assembly, force-based control, vision-guided manipulation
[49] | MATLAB | Shape control, large deformation, co-manipulation
[60] | Visual Components | Reconfiguration, assembly, layout planning, digital twin simulation
[61] | Gazebo | Object detection, quality inspection, process automation
[34] | Unity | Robot teaching, motion planning, AR-assisted control
[26] | Gazebo | Reconfigurable cell programming, skill modification, automated layout adaptation
[58] | Gazebo | Multi-robot collaboration, resource synchronization, edge-based control, conflict resolution
[27] | Gazebo | Path planning, obstacle avoidance, maintenance automation
[50] | Klampt | Assembly planning, collision avoidance, autonomous sequence optimization
[36] | Unity | Pick-and-place, positioning optimization, reinforcement learning-based control
[51] | Rhino-GH, NVIDIA Flex | Additive manufacturing, block stacking, tool path planning, autonomous construction
[35] | Unity3D | Object detection, camera pose optimization, collision avoidance
[52] | Fanuc ROBOGUIDE | Welding, task sequencing, collision avoidance
[62] | Gazebo | Localization, tracking
[28] | Gazebo | Exploration, surveillance, path planning
[29] | Gazebo | Collision avoidance, pick-and-place
[30] | Gazebo | Pickup, delivery, navigation
[63] | Gazebo, Unity3D | Containerization, task orchestration, resource management in edge-cloud architecture, mission programming
[64] | Unity | Digital twin analysis
[65] | Unity, Gazebo | Digital twin analysis
[31] | Gazebo | Deployment of smart sensors in a confined space
[43] | MuJoCo | Screw-tightening and screw-loosening
[56] | Gazebo | Pick-and-place
[53] | MATLAB, RViz/Gazebo | Pick-and-place
[32] | Gazebo | Pick-and-place
[66] | Gazebo | Robotic prefabrication tasks such as milling, gluing, and nailing
[42] | Isaac Sim | Pick-and-place
[33] | Unity | Assembly
[67] | Gazebo | Navigation, assistive robotics
[68] | Gazebo | Pick-and-place, transportation, inspection
[69] | Unity | Logistics/material handling
[42] | Isaac Sim | Sorting and manipulation
[54] | RobotStudio | Welding
[70] | Unity | Pick-and-place, HRI
[71] | Unity | Pick-and-place, digital twin, human-in-the-loop
Deep and reinforcement learning models are often trained in simulation or with a combination of simulated and real data [72]. For instance, Liu et al. [34] developed a Unity-based augmented reality (AR) interface integrated with deep reinforcement learning algorithms to simulate adaptive control in dynamic environments. Digital twin applications that enable virtual testing and validation also continue to gain relevance, with Pascher et al. [24] using Visual Components to model and evaluate production cell layouts before deployment, significantly improving commissioning workflows.
Intra-logistics and material handling tasks were addressed through realistic simulations of mobile robots in industrial environments. In this context, Lumpp et al. [63] employed a hybrid Gazebo and Unity setup to test distributed task orchestration within edge-cloud architectures, while Zhang et al. [69] simulated AI-enhanced sorting pipelines to support smart manufacturing. In more specialized scenarios, simulators have also been used for welding and torque-based tasks: Loppenberg et al. [54] employed RobotStudio for high-precision welding path control, and Xu et al. [43] used MuJoCo to simulate torque-sensitive screw tightening in collaborative settings.
Within the domain of Industry 5.0, a shift towards human-centric applications is evident, as shown in Table 8. Chan et al. [40] leveraged Unity to simulate ergonomic HRC scenarios, including real-time adaptation to human proximity. Xie et al. [21] focused on task coordination and mission programming in hybrid robot teams, using simulation to balance flexibility and performance. Immersive technologies were also employed to bridge the gap between operators and robots: Pang et al. [73] developed a VR-based programming interface to teach collaborative tasks, while Dianatfar et al. [74] used Unity for immersive assistive robotics focused on user personalization.
Remote operation and safety monitoring are additional Industry 5.0 applications where simulators play a vital role. Mourtzis et al. [41] tested latency mitigation and real-time feedback loops in telemanipulation scenarios using Unity, and Yun et al. [37] implemented a camera-based system for workplace safety inspection in collaboration with assistive robots. Finally, collaborative robotic assembly was explored in Dimosthenopoulos et al. [75], where simulation was used to evaluate co-manipulation routines and human–robot workspace sharing under dynamic conditions. Box 2 summarizes the findings of RQ2.
Table 8. Summary of Industry 5.0 articles with simulators and applications.
Reference | Simulator | Application
[37] | Unity | Inspection, spraying
[18] | Unreal Engine 4 | Enhancing safety in manufacturing, human–robot collaboration
[21] | Unity | Collaborative robotic assembly, task coordination
[16] | Gazebo | Cooperative tele-recovery during manufacturing failure, task coordination
[76] | Unity3D | Assembly, human–robot collaboration
[17] | Visual Components | Resource sharing, feasibility testing, flexible automation
[38] | Unity | Inspection, remote handling
[45] | MSC ADAMS | Human–robot interaction
[46] | ema Work Designer (EMA) | Human–robot interaction
[77] | Unity3D | Collaborative robotic assembly and surface following
[20] | Unity | Task distribution, on-site assembly, collaborative task execution, human–robot interaction
[39] | Unity | Latency mitigation, teleoperation, predictive motion modeling, and remote welding
[40] | Unity | Task guidance, physical collaboration, workspace sharing, human–robot interaction
[41] | Unity | Remote manipulation
[9] | Unity | Telemanipulation, human-in-the-loop, peg-in-hole
[73] | Unity | Assembly assisted by AR interface, human–robot collaboration
[78] | Unity | Human–robot collaboration, natural language commands
[75] | Unity | Collaborative robotic assembly
[79] | Unity | Exoskeleton-based teleoperation for pick-and-place
[74] | Unity | VR-based simulation for human–robot collaboration
[47] | AnyLogic | Human–robot collaboration in assembly
[80] | Unity | Programming assistance and visualization
[48] | Shopfloor Digital Representation | Collaborative robotic assembly
[81] | CoppeliaSim | Human–robot collaboration
[82] | Unity | Teleoperation
[83] | Unity | Digital twin
[55] | DhaibaWorks | Ergonomic evaluation
Box 2. Summary of Findings for RQ2.
The most common Industry 4.0 applications developed using ROS-compatible simulation platforms include autonomous assembly, pick-and-place operations, autonomous navigation, virtual testing, welding, and intra-logistics and material handling. In contrast, Industry 5.0 emphasizes human-centric applications, where simulation platforms are primarily used for ergonomic and safety-enhancing human–robot collaboration, immersive teleoperation, and user-personalized assistive robotics.

5. RQ3: What Are the Most Commonly Used Robot Configurations and Key Algorithms That Are Integrated in These Applications?

This section describes the most common robot configurations and the algorithms and tools used in the development of the Industry 4.0/5.0 applications identified in this article. These algorithms and tools are categorized into sensing devices, perception, mapping, cognition, and control.

5.1. Robot Configurations

Figure 4 illustrates the distribution of robot types used in the reviewed articles, distinguishing between Industry 4.0 (in red) and Industry 5.0 (in blue). The results indicate that robotic arms are the most frequently used configuration, followed by mobile robots and mobile manipulators (i.e., robotic arms mounted on mobile platforms). Robot arms have been widely used for tasks such as assembly, force-based control, and vision-guided manipulation [57,59,60]. Among the various models, Universal Robots’ UR series (UR3, UR5, UR10, and their extended versions UR3e, UR5e, UR10e) are the most commonly used, likely due to their extensive documentation and seamless integration with simulators such as Gazebo and Unity. Mobile robots enable advanced navigation, autonomous path planning, obstacle avoidance, mapping, monitoring, and inspection [19,22,25]. Our review indicates that TurtleBot 3 is the most frequently used mobile platform, likely due to its modular design, simplicity, and native compatibility with ROS-based simulators. Finally, mobile manipulators have been employed for task orchestration, mission programming, collaborative robotic assembly, and human–robot interaction [45,53,63,75]. However, no specific recurring models were identified in this category.

5.2. Sensing

In this article, we use the notation I4.0 to indicate the number of articles related to Industry 4.0 that use a specific technology. Similarly, I5.0 is used to represent the number of articles related to Industry 5.0. Among the various types of sensors discussed in the reviewed literature, 3D cameras emerged as the most frequently used, appearing in 12 Industry 4.0 articles (I4.0 = 12) and 4 Industry 5.0 articles (I5.0 = 4). The most commonly used models were from the Intel RealSense family, particularly the D400 series, including the D415 [35,63], D435 [49,51,68], and D435i [67], with the latter incorporating an IMU sensor for enhanced motion tracking. Encoders were the second most utilized sensors across both Industry 4.0 and Industry 5.0 articles (I4.0 = 6, I5.0 = 6) due to their crucial role in providing real-time feedback on the robot’s state, enabling precise trajectory planning in automation and collaborative scenarios [9,59]. Notably, VR headsets (I5.0 = 7, I4.0 = 0) were predominantly used in Industry 5.0 studies, serving as intuitive human–robot interfaces that enhance collaboration through immersive environments, with Microsoft HoloLens [40] and HTC Vive [20] being the most referenced devices. 2D cameras remained widely used (I4.0 = 5, I5.0 = 2) [26,27], primarily for object recognition tasks, often paired with depth-sensing devices such as Kinect (I5.0 = 4, I4.0 = 0) [48], LiDAR (I4.0 = 4, I5.0 = 0) [28], and laser scanners (I4.0 = 3, I5.0 = 0) [62], which are essential for perceiving spatial structures in the environment. IMU sensors were also common (I4.0 = 4, I5.0 = 2), supporting motion estimation and stability control in mobile robots. Lastly, RFID systems (I4.0 = 2, I5.0 = 0) were employed in navigation tasks, assisting robots in spatial localization through signal-based tracking [31,62].
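As a concrete illustration of how the frequently cited RealSense D400-series cameras are typically accessed, the following sketch reads depth frames with the pyrealsense2 SDK. It assumes a connected device; the resolution, frame rate, and sampled pixel are illustrative choices rather than values taken from the reviewed studies.

```python
# Minimal sketch: reading depth frames from an Intel RealSense D400-series camera.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    for _ in range(30):                      # grab a short burst of frames
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        # distance (in meters) to the pixel at the image center
        print("center depth:", depth.get_distance(320, 240))
finally:
    pipeline.stop()
```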

5.3. Perception

Robotic perception refers to a robot’s ability to interpret and understand data from its environment and the various components within it [84], relying on a range of computational techniques to support key industrial tasks. For instance, ref. [57] presents an end-to-end training approach for a peg-in-hole assembly policy that integrates visual and force feedback, using convolutional neural networks (CNNs) to extract the peg’s relative pose to the hole directly from RGB images. A multimodal fusion framework designed to improve generalization in cluttered scenes by combining local visual and haptic inputs was introduced in [59]. In this framework, visual features are extracted from point clouds using a PointNet network [85], and tactile features are derived from GelSight sensor images using a ResNet-18 model [86]. To further enhance robotic perception, ref. [49] proposes a virtual feature points (VFPs) algorithm for tracking deformable linear objects such as cables and sutures by representing them as virtual points, thereby removing the need for physical markers and reducing the computational cost of contour processing. As noted in [49], deformable linear objects are prevalent in industrial, medical, and everyday contexts, yet their manipulation remains a significant bottleneck in automation and robotics [87]. To address the need for more accurate perception in complex industrial environments, ref. [43] employs a multimodal Transformer architecture combined with a deep autoencoder model. In addition to AI-driven methods, some studies leverage popular vision frameworks to support robotic perception; for instance, ref. [34] utilizes Vuforia AR Tracking to create an intuitive robot teaching system that integrates AR, deep reinforcement learning, and cloud-edge orchestration for streamlined robot motion planning. Meanwhile, MVTec HALCON is applied in [35] to enable object detection tasks, and the OpenCV open-source library has been adopted in studies such as [48,54,67] to support advanced visual processing techniques. For Industry 5.0 applications, ref. [18] proposes a digital twin framework that utilizes Unreal Engine 4 to create a synthetic, annotated environment for training and validating a Faster R-CNN model, enabling accurate detection of humans and robots to assess compliance with safety standards. Finally, ref. [21] proposes a human–robot collaboration assembly system that integrates XR and blockchain technologies using Unity3D. The system enables part recognition and environmental perception using the Vuforia plug-in to scan component surface features and create interactive virtual scenes.
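To make the classical vision pipelines mentioned above (e.g., the OpenCV-based processing in [48,54,67]) more tangible, the sketch below performs a simple color segmentation and contour-based detection step. It is only an illustrative example; the input file name and HSV thresholds are placeholders, not parameters reported in the reviewed articles.

```python
# Minimal sketch: classical OpenCV perception step (color segmentation + contours).
import cv2
import numpy as np

image = cv2.imread("workcell_frame.png")          # hypothetical input frame
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# segment a red-ish part (hue/saturation thresholds are placeholder values)
mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    if cv2.contourArea(cnt) > 500:                # ignore small blobs
        x, y, w, h = cv2.boundingRect(cnt)
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detections.png", image)
```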

5.4. Mapping

The mapping strategies identified in the reviewed articles were grouped into six distinct categories. The most frequently used approach was SLAM-based algorithms (I4.0 = 6, I5.0 = 1), which simultaneously perform mapping and localization, typically relying on sensors such as LiDAR or cameras [19,21]. The most commonly adopted implementation was SLAM Toolbox [88], which offers compatibility with both ROS and ROS 2. The second most utilized approach was OctoMap mapping (I4.0 = 4, I5.0 = 0), a volumetric representation based on octrees, particularly suited for 3D environments and collision detection [25]. The third and fourth mapping techniques were grid-based mapping (I4.0 = 0, I5.0 = 1), which relies on classic two-dimensional cell-based representations [48], and point cloud methods (I4.0 = 1, I5.0 = 0), which build maps directly from raw 3D point data without the use of complex SLAM algorithms [32].
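As a minimal sketch of the grid-based representations referred to above, the following NumPy snippet marks occupied cells from a single 2D range scan. The robot pose, scan values, map size, and resolution are all assumed values chosen for illustration; a real system would obtain them from localization and sensor drivers.

```python
# Minimal sketch of grid-based mapping: marking occupied cells from a 2D range scan.
import numpy as np

resolution = 0.05                              # meters per cell
grid = np.zeros((200, 200), dtype=np.int8)     # 10 m x 10 m map, 0 = free/unknown
origin = np.array([5.0, 5.0])                  # map origin offset (m)

robot_xy, robot_yaw = np.array([0.0, 0.0]), 0.0
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)   # synthetic 180-degree scan
ranges = np.full_like(angles, 2.0)                 # constant 2 m returns

for a, r in zip(angles, ranges):
    hit = robot_xy + r * np.array([np.cos(robot_yaw + a), np.sin(robot_yaw + a)])
    cell = np.floor((hit + origin) / resolution).astype(int)
    if 0 <= cell[0] < grid.shape[1] and 0 <= cell[1] < grid.shape[0]:
        grid[cell[1], cell[0]] = 100           # 100 = occupied (ROS occupancy grid convention)
```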
Additionally, two alternative methods for map generation were identified. The first is digital twin mapping (I4.0 = 1, I5.0 = 1), in which the map is generated by synchronizing known reference points between the physical and simulated environments, effectively producing a mirrored representation between the two domains [39]. The second is AR-based mapping (I4.0 = 0, I5.0 = 2), in which physical and virtual elements are aligned using known landmarks, similarly enabling the creation of a shared, synchronized map [80].

5.5. Cognition and Control

Cognition and control functionality in robotics is critical for enabling robots to make decisions based on sensor input and to interact with the physical world and its components. These capabilities are achieved through a wide range of algorithms and frameworks, many of which were reported in the reviewed articles. For example, ref. [57] uses the MoveIt motion planning library to find feasible paths for robotic manipulators, taking inverse kinematics, collision checking, and joint bounds into account. Complementary tools such as TRAC-IK are employed to compute inverse kinematics solutions with greater speed and accuracy [45], while RoboTSPlanner is presented in [52] as a proprietary planner integrated with MoveIt for motion planning under various geometric and task-dependent constraints. For trajectory execution and robust path following, studies such as [27] integrate the ROS Navigation Stack, which provides modules for global and local planning, obstacle detection, and path recovery. Dynamic robot behavior adaptation is also enabled through real-time sensor feedback, as seen in [62], which uses the AMCL particle filter algorithm for probabilistic localization. Advanced robot controllers like ABB’s Externally Guided Motion (EGM) and Robot Web Services (RWSs) are utilized in [16] to enable remote manipulation and web-based control of motion parameters, while Riemannian Motion Policies (RMPs) are explored in [59] to generate smooth and adaptive motion under geometric constraints.
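The snippet below is a minimal sketch of the kind of MoveIt usage described above, sending a pose goal through the moveit_commander Python interface. It assumes a ROS 1 installation with a MoveIt configuration that exposes a planning group named "manipulator"; the target pose values are placeholders.

```python
# Minimal sketch: pose-goal planning with MoveIt via the moveit_commander Python API.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("moveit_pose_goal")

group = moveit_commander.MoveGroupCommander("manipulator")  # assumed planning group name

target = Pose()
target.position.x, target.position.y, target.position.z = 0.4, 0.1, 0.4
target.orientation.w = 1.0

group.set_pose_target(target)
success = group.go(wait=True)        # plan and execute
group.stop()
group.clear_pose_targets()
rospy.loginfo("Motion succeeded: %s", success)
```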
At the higher level of control, learning-based methods are increasingly used for improving decision making and adaptability. Deep reinforcement learning, particularly Proximal Policy Optimization (PPO), is employed in [36] to train agents for adaptive motion planning and skill acquisition, while Twin Delayed Deep Deterministic Policy Gradient (TD3) is applied in [61] for high-precision positioning in mobile manipulation tasks. Q-learning and transfer learning techniques are also explored in [26,43] to accelerate policy convergence and enable knowledge transfer between similar tasks. Behavior-based control structures are realized using finite state machines (FSMs), as in [58], behavior trees (BTs), as in [78], or more advanced representations such as Robust Logical Dynamic Systems (RLDSs), which are used with NVIDIA Isaac Cortex for modular task modeling [51]. For planning and task allocation, several works apply Distributed Model Predictive Control (DMPC) for coordinating multiple robots [29], while others employ evolutionary methods like genetic algorithms for sequencing and optimizing task assignment [52]. Hybrid simulation techniques that combine discrete-event simulation and 3D physics are adopted in [59] to manage task orchestration and resource allocation in agile manufacturing systems.
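As a hedged illustration of the PPO-style training mentioned above, the following sketch trains a policy with Stable-Baselines3 on a toy Gymnasium task. The environment is a stand-in; in the reviewed works the environment would wrap a simulator such as Gazebo or Unity behind a Gym-style interface, and the training budget shown here is purely illustrative.

```python
# Minimal sketch: training a PPO policy with Stable-Baselines3 on a toy Gymnasium task.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")               # placeholder environment
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)         # short illustrative training run

# roll out the learned policy for a few steps
obs, _ = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```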
Meanwhile, AI-based decision-making paradigms are incorporated into robotic workflows to enhance perception and grasp strategies. CNN- and LSTM-based models are utilized in [49,59] to extract temporal and spatial features from RGB-D and tactile data for applications such as deformable object tracking and multimodal sensor fusion. Large Language Models (LLMs) are introduced in [75] to support natural language interfaces and provide contextual task guidance. Deep neural networks are used for zone classification and safety compliance in collaborative settings, such as in [18], where a deep learning system distinguishes between safe and unsafe zones during human–robot interactions. Grasping strategies are also improved through AI-based models, like the one proposed in [76], which accounts for occlusions and stacking constraints in multi-object-grasping scenarios. Box 3 summarizes the findings of RQ3.
Box 3. Summary of Findings for RQ3.
Key sensing technologies include 3D cameras (notably the Intel RealSense D400 series), encoders, 2D cameras, LiDAR, IMUs, and VR/AR systems. Perception systems integrate AI-driven methods such as CNNs, PointNet, and ResNet for tasks like object detection and multimodal sensor fusion. Control and cognition components involve motion planning libraries (e.g., MoveIt, TRAC-IK), Navigation Stacks, deep reinforcement learning, and behavior modeling techniques such as FSMs, BTs, and RLDS. Additional technologies include AR tracking and LLMs for human–robot collaboration and contextual task guidance. The most commonly used robot configurations are manipulators and mobile robots, with Universal Robots’ UR series and the TurtleBot 3 platform being frequently employed.

6. RQ4: How Have ROS 1 and ROS 2 Been Adopted in Industry 4.0/5.0 Applications? What Are Their Comparative Advantages and Limitations?

ROS has become a fundamental framework for robotic software development, providing essential tools and libraries for designing, testing, and deploying robotic systems. Originally introduced in 2007 and officially released in 2010, it rapidly gained widespread adoption due to its extensive package ecosystem and user-friendly architecture [89]. However, ROS 1 faced limitations in real-time communication, security, and scalability, particularly in complex, multi-robot environments. To overcome these challenges, ROS 2 was introduced in 2017, incorporating key improvements such as real-time system support, enhanced modularity, and improved security.
As illustrated in Figure 5, ROS 1 remains the most widely used version, appearing in 32 Industry 4.0 and 24 Industry 5.0 studies (I4.0 = 32, I5.0 = 24). Its dominance can be attributed to its maturity, extensive documentation, and well-established ecosystem. In contrast, ROS 2, adopted in five Industry 4.0 and three Industry 5.0 studies (I4.0 = 5, I5.0 = 3), is gaining traction for its advanced features, including multi-robot coordination and low-latency communication. Additionally, hybrid approaches that integrate both ROS 1 and ROS 2 (I4.0 = 1, I5.0 = 0) allow developers to leverage the stability of ROS 1 while gradually adopting ROS 2’s modern capabilities [25].
Table 9 presents a comparative analysis of ROS 1 and ROS 2, outlining key differences in architecture, communication models, and system performance. This comparison highlights the technological advancements that motivate the transition toward ROS 2 and the potential benefits of hybrid implementations in bridging legacy and next-generation robotic applications.
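To ground one of the architectural differences summarized in Table 9, the sketch below shows a minimal ROS 2 (rclpy) publisher with an explicit Quality of Service profile, a feature not exposed by ROS 1's TCPROS transport. The node name, topic, timer period, and QoS settings are illustrative assumptions.

```python
# Minimal sketch: a ROS 2 publisher with an explicit QoS profile (rclpy).
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy
from std_msgs.msg import String

class StatusPublisher(Node):
    def __init__(self):
        super().__init__("status_publisher")
        # best-effort delivery with a history depth of 10 messages
        qos = QoSProfile(depth=10, reliability=ReliabilityPolicy.BEST_EFFORT)
        self.pub = self.create_publisher(String, "cell_status", qos)
        self.timer = self.create_timer(0.5, self.tick)

    def tick(self):
        msg = String()
        msg.data = "cell OK"
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(StatusPublisher())
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```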
Hybrid implementations of ROS 1 and ROS 2 enable seamless integration of legacy systems with modern capabilities, offering significant advantages in various applications. For instance, Kıvrak et al. [25] developed a cyber–physical system that utilizes ROS 2 for real-time multi-robot coordination while retaining ROS 1 for legacy modules. A ROS bridge facilitates data exchange between the two frameworks, ensuring interoperability and stability. Similarly, Wang et al. [95] employed a hybrid ROS architecture for digital twin (DT) integration, where ROS 2 handles low-latency communication for real-time synchronization, while ROS 1 is used for visualization and compatibility with existing tools. These hybrid approaches highlight the potential of combining ROS versions to enhance system performance, adaptability, and continuity in evolving robotic applications. Box 4 summarizes the findings of RQ4.
Box 4. Summary of Findings for RQ4.
ROS 1 remains the dominant choice in Industry 4.0/5.0 applications due to its maturity and extensive ecosystem, while ROS 2 adoption is growing thanks to its real-time support, security, and scalability. Despite the advantages of ROS 2, migration challenges remain, making hybrid solutions a practical alternative.

7. RQ5: What Are the Additional Communication and Integration Technologies That Enable Compatibility Between Simulators and Modules Used for Developing Industry 4.0/5.0 Applications?

Our review revealed several approaches to establish communication between systems, ranging from basic physical connections to advanced software-based integration frameworks. In terms of physical connectivity, three primary strategies were identified across the reviewed articles: wired (I4.0 = 16, I5.0 = 6), wireless (I4.0 = 13, I5.0 = 10), and hybrid combinations (I4.0 = 5, I5.0 = 3). One notable study employed 5G wireless communication [28], taking advantage of its high bandwidth, low latency, and increased network capacity to support real-time orchestration of collaborative robotic workflows.
Beyond the connection medium, we identified a wide range of tools that enable interoperability between simulators, robots, and external systems not natively supported by ROS. These tools include communication protocols, bridge frameworks, and middleware platforms. Among communication protocols, MQTT is commonly used in IoT applications for low-overhead message exchange [20]. WebSocket offers persistent, bidirectional communication channels suitable for interactive or teleoperation tasks [58]. TCP/IP, the foundational protocol on which ROS is built, supports general-purpose networking across various devices [32], while UDP provides a faster but less reliable alternative useful for time-sensitive tasks [29]. Other identified protocols include OPC UA, an open industrial communication standard designed for secure and reliable machine-to-machine communication [60], and EtherCAT, which was employed in scenarios requiring high-speed communication between controllers and actuators, thanks to its low-latency Ethernet-based architecture [22]. In some cases, researchers utilized REST APIs [33] to enable simple, HTTP-based interactions between robotic systems and web-based clients, facilitating cloud integration and web interface communication.
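As a brief illustration of the MQTT-based integration pattern mentioned above, the sketch below publishes a JSON-encoded cell status with the paho-mqtt helper. The broker hostname, topic name, and payload fields are placeholders rather than values from any reviewed system.

```python
# Minimal sketch: publishing a robot cell status over MQTT with paho-mqtt.
import json
import paho.mqtt.publish as publish

# hypothetical status payload for a manufacturing cell
payload = json.dumps({"cell": "A1", "state": "running", "cycle_time_s": 12.4})

# QoS 1 requests at-least-once delivery acknowledged by the broker
publish.single("factory/cell_a1/status", payload, qos=1, hostname="broker.example.local")
```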
Integration frameworks for connecting ROS with non-natively supported simulators, such as Unity, often rely on bridge solutions. For example, ROS# [20] enables Unity applications to access ROS functionality through C#. Similarly, TCP-Connector [41] leverages ROS’s underlying TCP/IP protocol to enable communication between ROS and Unity. Another widely used, general-purpose solution is the rosbridge suite [37,39], which facilitates communication between non-ROS clients and ROS systems using JSON-formatted messages over WebSocket or TCP/IP. However, performance issues have been reported when transmitting large messages [9].
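For context, the sketch below shows how a non-ROS Python client typically talks to a ROS system through the rosbridge suite using the roslibpy library over WebSocket. The host address, port, and topic are placeholders, and the snippet assumes a rosbridge server is already running on the robot side.

```python
# Minimal sketch: publishing to a ROS topic from a non-ROS client via rosbridge.
import roslibpy

ros = roslibpy.Ros(host="192.168.0.10", port=9090)   # placeholder rosbridge endpoint
ros.run()                                             # blocks until connected

talker = roslibpy.Topic(ros, "/cell_status", "std_msgs/String")
talker.publish(roslibpy.Message({"data": "cell OK"}))

talker.unadvertise()
ros.terminate()
```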
In addition to these bridge capabilities, some studies incorporated complete frameworks to manage distributed robotic applications. Django was used as a lightweight web framework to support communication between edge devices, cloud platforms, and AR interfaces [34]. Other solutions, such as NEP+ [9], served as cross-platform middleware layers to coordinate data exchange across multiple modules and systems, including both ROS 1 and ROS 2.
Finally, several studies explored emerging and specialized technologies aimed at enhancing security, scalability, and adaptability in robotic communication. Blockchain was applied to ensure data integrity and prevent tampering in collaborative human–robot interaction scenarios [21]. Hidden Markov Models (HMMs) were employed to mitigate latency by predicting network traffic patterns and dynamically adjusting communication resources [39]. Lastly, Kubernetes [63] was utilized to orchestrate containerized robotic components in scalable and modular deployments. Box 5 summarizes the findings of RQ5.
Box 5. Summary of Findings for RQ5.
Our review highlights a broad spectrum of communication and integration technologies that enable interoperability between simulators and the modules integrated into Industry 4.0 and Industry 5.0 applications. Communication protocols such as MQTT, WebSocket, TCP/IP, UDP, OPC UA, and EtherCAT are used to address different needs in terms of latency, reliability, and compatibility. Bridge frameworks such as ROS#, TCP-Connector, and the rosbridge suite facilitate the connection between ROS and non-natively supported simulators like Unity, while middleware solutions such as Django and NEP+ extend integration across distributed systems and XR interfaces. Additionally, novel technologies like blockchain, Hidden Markov Models, and Kubernetes enhance the security, adaptability, and scalability of robotic communication infrastructures. Together, these technologies establish a robust foundation for dealing with the limitations of ROS and ROS 2.

8. Challenges and Opportunities

The reviewed articles highlight several persistent challenges in robotics simulation (summarized in Table 10). Computational efficiency and scalability remain a concern, as high-fidelity simulators demand significant resources [25], limiting real-time performance. Sim-to-real transfer limitations pose another challenge, as ensuring seamless deployment from simulation to physical systems remains complex [36]. Interoperability and standardization issues arise due to the lack of unified APIs and middleware, complicating system integration [38]. Additionally, latency and synchronization issues impact real-time control in distributed environments [39]. HRI modeling remains an open problem, as accurately simulating both physical and cognitive interactions is challenging [68]. Other concerns include learning curve and accessibility [65], where complex configurations hinder usability, and physics fidelity, where trade-offs exist between computational efficiency and physical accuracy. Finally, integration with robotics frameworks varies across simulators, requiring additional middleware for compatibility with established robotics ecosystems [65].
Despite these challenges, several advances present opportunities for improving robotics simulation. The integration of machine learning offers new possibilities for enhancing robotic adaptability, predictive capabilities, and real-time monitoring [22,59]. AI-driven learning frameworks can improve decision making and perception in simulation environments, bridging the sim-to-real gap. Cloud and edge computing provide scalable solutions by offloading computational tasks, reducing hardware constraints, and enhancing real-time execution [96]. Additionally, high-performance simulators such as Unreal Engine and Isaac Sim support large-scale simulations with AI-driven frameworks and cloud integration, enabling more complex robotic applications [24]. Improved multi-robot and HRI modeling in simulators like Unity and Unreal Engine enhances the realism of dynamic multi-agent scenarios, making them particularly valuable for XR applications [21]. Finally, the development of more user-friendly interfaces and modular frameworks can lower the barrier to entry for researchers and developers, improving accessibility and accelerating the adoption of robotics simulation technologies.

9. Limitations

While this study provides a comprehensive overview of robotics simulators in industrial and service domains, several limitations must be acknowledged. These limitations fall into three main categories: methodological, technical, and contextual.

9.1. Methodological Limitations

The review relied exclusively on articles published in English, which may have excluded valuable contributions from non-English research. Although the search strategy included major academic databases such as IEEE Xplore, MDPI, ACM Digital Library, ScienceDirect, and SpringerLink, some relevant studies might not have been captured due to variations in terminology and indexing inconsistencies across publications. Furthermore, the inclusion and exclusion criteria, while systematically applied, could have introduced unintended biases.

9.2. Technical Limitations

The analysis of robotics simulators was constrained by the availability of detailed technical documentation. Many simulators lacked comprehensive evaluations of fidelity, computational efficiency, and usability, making direct comparisons challenging. The absence of universal benchmarks further complicated efforts to assess simulator accuracy and scalability. Additionally, emerging simulators with limited adoption or insufficient documentation were excluded, potentially overlooking innovative developments in the field.

9.3. Contextual Limitations

This review focused primarily on simulators used in Industry 4.0 and Industry 5.0 applications, which limits the generalizability of the findings to other domains.

10. Conclusions

This systematic review provides an overview of robotics simulators and their applications within the context of Industry 4.0 and Industry 5.0. It also highlights complementary tools and technologies integrated into these simulation environments. Our findings indicate that Gazebo remains the most widely used simulator for Industry 4.0, while Unity has gained traction in Industry 5.0 applications due to its advanced visualization and XR capabilities. ROS continues to play a central role in simulation-based research, although the transition to ROS 2 has been slow, largely due to the extensive existing infrastructure built around ROS 1. Based on these results, we recommend that future research explore alternative simulators better suited for Industry 5.0, such as Unreal Engine and Isaac Sim. These platforms offer high-fidelity physics, realistic rendering, and robust support for human-centric simulations, digital twins, and intelligent agent interactions. We also encourage the robotics community to accelerate the adoption of ROS 2, which provides enhanced Quality of Service (QoS) policies and improved support for real-time and distributed systems, features increasingly essential for complex industrial applications. Furthermore, the integration of blockchain technologies could enhance system security.
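As a concrete illustration of the QoS policies mentioned above, the following minimal rclpy sketch configures a reliable, transient-local publisher so that late-joining subscribers (for example, a monitoring dashboard started after the cell) still receive the most recent status message. The node and topic names are hypothetical, and the sketch assumes a working ROS 2 installation.

```python
# Minimal rclpy sketch of ROS 2 QoS configuration; node and topic names are illustrative.
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, DurabilityPolicy, HistoryPolicy
from std_msgs.msg import String


class CellStatusPublisher(Node):
    def __init__(self):
        super().__init__('cell_status_publisher')
        # Reliable delivery with transient-local durability: late-joining subscribers
        # still receive the latest published sample.
        qos = QoSProfile(
            history=HistoryPolicy.KEEP_LAST,
            depth=1,
            reliability=ReliabilityPolicy.RELIABLE,
            durability=DurabilityPolicy.TRANSIENT_LOCAL,
        )
        self.publisher = self.create_publisher(String, 'cell_status', qos)
        self.timer = self.create_timer(1.0, self.publish_status)

    def publish_status(self):
        msg = String()
        msg.data = 'cell operational'
        self.publisher.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(CellStatusPublisher())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Such per-topic QoS tuning has no direct equivalent in ROS 1, which is one reason the transition to ROS 2 matters for real-time and distributed industrial deployments.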
Improving reproducibility, reusability, and comparability in simulation-based robotics research is essential. However, our review revealed that few studies shared their source code or simulation assets, limiting opportunities for replication and collaborative progress. Future research should promote open science practices, such as publishing in long-term maintained public repositories with clear plug-and-play documentation and streamlined installation processes.
The combined use of multiple simulators can help meet the diverse requirements of complex industrial applications. However, many existing systems are tightly coupled to specific simulation environments and lack broad interoperability. To support modularity and enable meaningful cross-platform comparisons, future research should focus on developing flexible and interoperable bridging solutions that connect ROS-native simulators such as Gazebo with non-ROS-native platforms like Unity and Unreal Engine. These solutions should emphasize usability, offer comprehensive documentation, and support seamless integration across different simulation environments. In this context, platforms that leverage cross-platform communication tools such as MQTT, ZeroMQ, and Zenoh may provide effective and scalable alternatives.
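As one possible bridging pattern, the sketch below relays ROS 2 joint states to an MQTT topic as JSON so that a non-ROS client, such as a Unity or Unreal Engine visualization, can consume them. The topic names and broker address are assumptions for illustration, and the sketch relies on the rclpy and paho-mqtt packages.

```python
# Hypothetical relay: forwards a ROS 2 joint-state topic to an MQTT broker as JSON,
# so a non-ROS client (e.g., a Unity visualization) can subscribe to it.
import json
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState
import paho.mqtt.client as mqtt  # paho-mqtt 1.x constructor; 2.x also needs a callback API version


class RosToMqttRelay(Node):
    def __init__(self):
        super().__init__('ros_to_mqtt_relay')
        self.mqtt = mqtt.Client()
        self.mqtt.connect('localhost', 1883)  # broker address/port are assumptions
        self.mqtt.loop_start()                # background thread for MQTT network traffic
        self.create_subscription(JointState, 'joint_states', self.on_joints, 10)

    def on_joints(self, msg):
        # Convert the ROS message into a plain JSON payload for non-ROS consumers.
        payload = {'name': list(msg.name), 'position': list(msg.position)}
        self.mqtt.publish('cell/joint_states', json.dumps(payload))


def main():
    rclpy.init()
    rclpy.spin(RosToMqttRelay())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Analogous relays can be written with ZeroMQ or Zenoh; the key design choice is keeping the bridge stateless and message-oriented so simulators can be swapped without changing the rest of the system.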
The development of an open benchmarking suite could enable more consistent and objective evaluation of simulators across tasks such as sim-to-real transfer, robotic perception, and human–robot collaboration. Standardized benchmarks and metrics would help the community identify which platforms are most appropriate for specific applications, improving transparency and reproducibility. Moreover, to tackle the accessibility barrier posed by hardware and technical complexity, simulator developers could offer cloud-based access through low-cost subscription models tailored for students and small research teams. This would facilitate early-stage experimentation and learning, while also encouraging the growth of community-generated documentation and support resources. Both strategies would help democratize access to simulation tools and accelerate the adoption of robotics technologies across diverse contexts.

Author Contributions

Conceptualization, J.M.F.G. and E.C.; methodology, E.C. and J.M.F.G.; validation, E.C. and N.Y.; formal analysis, J.M.F.G.; investigation, J.M.F.G.; resources, E.C. and N.Y.; data curation, J.M.F.G.; writing—original draft preparation, E.C. and J.M.F.G.; writing—review and editing, N.Y.; visualization, J.M.F.G.; supervision, E.C.; project administration, N.Y.; funding acquisition, N.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This paper includes the results of Cross-ministerial Strategic Innovation Promotion Program (SIP) 3rd Phase, “Expansion of fundamental technologies and development of rules promoting social implementation to expand HCPS Human-Collaborative Robotics” promoted by Council for Science, Technology, and Innovation (CSTI), Cabinet Office, Government of Japan (Project Management Agency: New Energy and Industrial Technology Development Organization (NEDO), Project Code: JPJ012494, HCPS: Human–Cyber–Physical Space).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhong, R.Y.; Xu, X.; Klotz, E.; Newman, S.T. Intelligent Manufacturing in the Context of Industry 4.0: A Review. Engineering 2017, 3, 616–630. [Google Scholar] [CrossRef]
  2. Yang, T.; Yi, X.; Lu, S.; Johansson, K.H.; Chai, T. Intelligent manufacturing for the process industry driven by industrial artificial intelligence. Engineering 2021, 7, 1224–1230. [Google Scholar] [CrossRef]
  3. Huang, S.; Wang, G.; Lei, D.; Yan, Y. Toward digital validation for rapid product development based on digital twin: A framework. Int. J. Adv. Manuf. Technol. 2022, 119, 2509–2523. [Google Scholar] [CrossRef]
  4. Maddikunta, P.K.R.; Pham, Q.V.; Prabadevi, B.; Deepa, N.; Dev, K.; Gadekallu, T.R.; Ruby, R.; Liyanage, M. Industry 5.0: A survey on enabling technologies and potential applications. J. Ind. Inf. Integr. 2022, 26, 100257. [Google Scholar] [CrossRef]
  5. Xu, X.; Lu, Y.; Vogel-Heuser, B.; Wang, L. Industry 4.0 and Industry 5.0—Inception, conception and perception. J. Manuf. Syst. 2021, 61, 530–535. [Google Scholar] [CrossRef]
  6. Huang, S.; Wang, B.; Li, X.; Zheng, P.; Mourtzis, D.; Wang, L. Industry 5.0 and Society 5.0—Comparison, complementation and co-evolution. J. Manuf. Syst. 2022, 64, 424–428. [Google Scholar] [CrossRef]
  7. Coronado, E.; Kiyokawa, T.; Ricardez, G.A.G.; Ramirez-Alpizar, I.G.; Venture, G.; Yamanobe, N. Evaluating quality in human–robot interaction: A systematic search and classification of performance and human-centered factors, measures and metrics towards an industry 5.0. J. Manuf. Syst. 2022, 63, 392–410. [Google Scholar] [CrossRef]
  8. Wang, B.; Zhou, H.; Li, X.; Yang, G.; Zheng, P.; Song, C.; Yuan, Y.; Wuest, T.; Yang, H.; Wang, L. Human Digital Twin in the context of Industry 5.0. Robot. Comput.-Integr. Manuf. 2024, 85, 102626. [Google Scholar] [CrossRef]
  9. Coronado, E.; Ueshiba, T.; Ramirez-Alpizar, I.G. A Path to Industry 5.0 Digital Twins for Human–Robot Collaboration by Bridging NEP+ and ROS. Robotics 2024, 13, 28. [Google Scholar] [CrossRef]
  10. Baratta, A.; Cimino, A.; Longo, F.; Nicoletti, L. Digital twin for human–robot collaboration enhancement in manufacturing systems: Literature review and direction for future developments. Comput. Ind. Eng. 2024, 187, 109764. [Google Scholar] [CrossRef]
  11. Kargar, S.; Yordanov, B.; Harvey, C.; Asadipour, A. Emerging Trends in Realistic Robotic Simulations: A Comprehensive Systematic Literature Review. IEEE Access 2023, 11, 1–25. [Google Scholar] [CrossRef]
  12. Liu, C.K.; Negrut, D. The role of physics-based simulators in robotics. Annu. Rev. Control. Robot. Auton. Syst. 2021, 4, 35–58. [Google Scholar] [CrossRef]
  13. Collins, J.; Chand, S.; Vanderkop, A.; Howard, D. A Review of Physics Simulators for Robotic Applications. IEEE Access 2021, 9, 51416–51431. [Google Scholar] [CrossRef]
  14. Budgen, D.; Brereton, P. Evolution of secondary studies in software engineering. Inf. Softw. Technol. 2022, 145, 106840. [Google Scholar] [CrossRef]
  15. Kitchenham, B.; Madeyski, L.; Budgen, D. SEGRESS: Software Engineering Guidelines for REporting Secondary Studies. IEEE Trans. Softw. Eng. 2023, 49, 1273–1298. [Google Scholar] [CrossRef]
  16. Itadera, S.; Domae, Y. Motion priority optimization framework toward automated and teleoperated robot cooperation in industrial recovery scenarios. Robot. Auton. Syst. 2024, 184, 104833. [Google Scholar] [CrossRef]
  17. Ribeiro da Silva, E.; Schou, C.; Hjorth, S.; Tryggvason, F.; Sørensen, M.S. Plug & Produce robot assistants as shared resources: A simulation approach. J. Manuf. Syst. 2022, 63, 107–117. [Google Scholar] [CrossRef]
  18. Wang, S.; Zhang, J.; Wang, P.; Law, J.; Calinescu, R.; Mihaylova, L. A deep learning-enhanced Digital Twin framework for improving safety and reliability in human–robot collaborative manufacturing. Robot. Comput.-Integr. Manuf. 2024, 85, 102608. [Google Scholar] [CrossRef]
  19. Horelican, T. Utilizability of navigation2/ros2 in highly automated and distributed multi-robotic systems for industrial facilities. IFAC-PapersOnLine 2022, 55, 109–114. [Google Scholar] [CrossRef]
  20. Alexi, E.V.; Kenny, J.C.; Atanasova, L.; Casas, G.; Dörfler, K.; Mitterberger, D. Cooperative augmented assembly (CAA): Augmented reality for on-site cooperative robotic fabrication. Constr. Robot. 2024, 8, 28. [Google Scholar] [CrossRef]
  21. Xie, J.; Liu, Y.; Wang, X.; Fang, S.; Liu, S. A new XR-based human–robot collaboration assembly system based on industrial metaverse. J. Manuf. Syst. 2024, 74, 949–964. [Google Scholar] [CrossRef]
  22. Ehrmann, C.; Min, J.; Zhang, W. Highly flexible robotic manufacturing cell based on holistic real-time model-based control. Procedia CIRP 2024, 127, 20–25. [Google Scholar] [CrossRef]
  23. Castellini, A.; Marchesini, E.; Farinelli, A. Partially Observable Monte Carlo Planning with state variable constraints for mobile robot navigation. Eng. Appl. Artif. Intell. 2021, 104, 104382. [Google Scholar] [CrossRef]
  24. Pascher, M.; Goldau, F.F.; Kronhardt, K.; Frese, U.; Gerken, J. AdaptiX-A Transitional XR Framework for Development and Evaluation of Shared Control Applications in Assistive Robotics. Proc. ACM Hum.-Comput. Interact. 2024, 8, 1–28. [Google Scholar] [CrossRef]
  25. Kivrak, H.; Karakusak, M.Z.; Watson, S.; Lennox, B. Cyber–physical system architecture of autonomous robot ecosystem for industrial asset monitoring. Comput. Commun. 2024, 218, 72–84. [Google Scholar] [CrossRef]
  26. Asif, S.; Bueno, M.; Ferreira, P.; Anandan, P.; Zhang, Z.; Yao, Y.; Ragunathan, G.; Tinkler, L.; Sotoodeh-Bahraini, M.; Lohse, N.; et al. Rapid and automated configuration of robot manufacturing cells. Robot. Comput.-Integr. Manuf. 2025, 92, 102862. [Google Scholar] [CrossRef]
  27. Mi, K.; Fu, Y.; Zhou, C.; Ji, W.; Fu, M.; Liang, R. Research on path planning of intelligent maintenance robotic arm for distribution lines under complex environment. Comput. Electr. Eng. 2024, 120, 109711. [Google Scholar] [CrossRef]
  28. Romero, A.; Delgado, C.; Zanzi, L.; Li, X.; Costa-Pérez, X. OROS: Online Operation and Orchestration of Collaborative Robots using 5G. IEEE Trans. Netw. Serv. Manag. 2023, 20, 4216–4230. [Google Scholar] [CrossRef]
  29. Gafur, N.; Kanagalingam, G.; Wagner, A.; Ruskowski, M. Dynamic collision and deadlock avoidance for multiple robotic manipulators. IEEE Access 2022, 10, 55766–55781. [Google Scholar] [CrossRef]
  30. Camisa, A.; Testa, A.; Notarstefano, G. Multi-robot pickup and delivery via distributed resource allocation. IEEE Trans. Robot. 2022, 39, 1106–1118. [Google Scholar] [CrossRef]
  31. Putranto, A.; Lin, T.H.; Tsai, P.T. Digital twin-enabled robotics for smart tag deployment and sensing in confined space. Robot. Comput.-Integr. Manuf. 2025, 95, 102993. [Google Scholar] [CrossRef]
  32. Li, X.; Liu, G.; Sun, S.; Yi, W.; Li, B. Digital twin model-based smart assembly strategy design and precision evaluation for PCB kit-box build. J. Manuf. Syst. 2023, 71, 206–223. [Google Scholar] [CrossRef]
  33. Terei, N.; Wiemann, R.; Raatz, A. ROS-Based Control of an Industrial Micro-Assembly Robot. Procedia CIRP 2024, 130, 909–914. [Google Scholar] [CrossRef]
  34. Liu, C.; Tang, D.; Zhu, H.; Nie, Q.; Chen, W.; Zhao, Z. An augmented reality-assisted interaction approach using deep reinforcement learning and cloud-edge orchestration for user-friendly robot teaching. Robot. Comput.-Integr. Manuf. 2024, 85, 102638. [Google Scholar] [CrossRef]
  35. Roveda, L.; Maroni, M.; Mazzuchelli, L.; Praolini, L.; Shahid, A.A.; Bucca, G.; Piga, D. Robot end-effector mounted camera pose optimization in object detection-based tasks. J. Intell. Robot. Syst. 2022, 104, 16. [Google Scholar] [CrossRef]
  36. Iriondo, A.; Lazkano, E.; Ansuategi, A.; Rivera, A.; Lluvia, I.; Tubío, C. Learning positioning policies for mobile manipulation operations with deep reinforcement learning. Int. J. Mach. Learn. Cyber 2023, 14, 3003–3023. [Google Scholar] [CrossRef]
  37. Yun, H.; Jun, M.B. Immersive and interactive cyber-physical system (I2CPS) and virtual reality interface for human involved robotic manufacturing. J. Manuf. Syst. 2022, 62, 234–248. [Google Scholar] [CrossRef]
  38. Fontanelli, G.A.; Sofia, A.; Fusco, S.; Grazioso, S.; Di Gironimo, G. Preliminary architecture design for human-in-the-loop control of robotic equipment in remote handling tasks: Case study on the NEFERTARI project. Fusion Eng. Des. 2024, 206, 114586. [Google Scholar] [CrossRef]
  39. Su, Y.; Lloyd, L.; Chen, X.; Chase, J.G. Latency mitigation using applied HMMs for mixed reality-enhanced intuitive teleoperation in intelligent robotic welding. Int. J. Adv. Manuf. Technol. 2023, 126, 2233–2248. [Google Scholar] [CrossRef]
  40. Chan, W.P.; Hanks, G.; Sakr, M.; Zhang, H.; Zuo, T.; Van der Loos, H.M.; Croft, E. Design and evaluation of an augmented reality head-mounted display interface for human robot teams collaborating in physically shared manufacturing tasks. ACM Trans. Hum.-Robot Interact. (THRI) 2022, 11, 1–19. [Google Scholar] [CrossRef]
  41. Mourtzis, D.; Angelopoulos, J.; Panopoulos, N. Closed-loop robotic arm manipulation based on mixed reality. Appl. Sci. 2022, 12, 2972. [Google Scholar] [CrossRef]
  42. Nambiar, S.; Jonsson, M.; Tarkian, M. Automation in Unstructured Production Environments Using Isaac Sim: A Flexible Framework for Dynamic Robot Adaptability. Procedia CIRP 2024, 130, 837–846. [Google Scholar] [CrossRef]
  43. Xu, W.; Yang, H.; Ji, Z.; Ba, M. Cognitive digital twin-enabled multi-robot collaborative manufacturing: Framework and approaches. Comput. Ind. Eng. 2024, 194, 110418. [Google Scholar] [CrossRef]
  44. Ostanin, M.; Zaitsev, S.; Sabirova, A.; Klimchik, A. Interactive Industrial Robot Programming based on Mixed Reality and Full Hand Tracking. IFAC-PapersOnLine 2022, 55, 2791–2796. [Google Scholar] [CrossRef]
  45. Zhou, Z.; Yang, X.; Wang, H.; Zhang, X. Coupled dynamic modeling and experimental validation of a collaborative industrial mobile manipulator with human–robot interaction. Mech. Mach. Theory 2022, 176, 105025. [Google Scholar] [CrossRef]
  46. Glogowski, P.; Böhmer, A.; Hypki, A.; Kuhlenkötter, B. Robot speed adaption in multiple trajectory planning and integration in a simulation tool for human–robot interaction. J. Intell. Robot. Syst. 2021, 102, 25. [Google Scholar] [CrossRef]
  47. Cimino, A.; Longo, F.; Nicoletti, L.; Solina, V. Simulation-based Digital Twin for enhancing human–robot collaboration in assembly systems. J. Manuf. Syst. 2024, 77, 903–918. [Google Scholar] [CrossRef]
  48. Katsampiris-Salgado, K.; Dimitropoulos, N.; Gkrizis, C.; Michalos, G.; Makris, S. Advancing human–robot collaboration: Predicting operator trajectories through AI and infrared imaging. J. Manuf. Syst. 2024, 74, 980–994. [Google Scholar] [CrossRef]
  49. Almaghout, K.; Cherubini, A.; Klimchik, A. Robotic co-manipulation of deformable linear objects for large deformation tasks. Robot. Auton. Syst. 2024, 175, 104652. [Google Scholar] [CrossRef]
  50. de Winter, J.; El Makrini, I.; van de Perre, G.; Nowé, A.; Verstraten, T.; Vanderborght, B. Autonomous assembly planning of demonstrated skills with reinforcement learning in simulation. Auton. Robot. 2021, 45, 1097–1110. [Google Scholar] [CrossRef]
  51. Felbrich, B.; Schork, T.; Menges, A. Autonomous robotic additive manufacturing through distributed model-free deep reinforcement learning in computational design environments. Constr. Robot. 2022, 6, 15–37. [Google Scholar] [CrossRef]
  52. Touzani, H.; Séguy, N.; Hadj-Abdelkader, H.; Suárez, R.; Rosell, J.; Palomo-Avellaneda, L.; Bouchafa, S. Efficient industrial solution for robotic task sequencing problem with mutual collision avoidance & cycle time optimization. IEEE Robot. Autom. Lett. 2022, 7, 2597–2604. [Google Scholar] [CrossRef]
  53. Antonelli, D.; Aliev, K.; Soriano, M.; Samir, K.; Monetti, F.M.; Maffei, A. Exploring the limitations and potential of digital twins for mobile manipulators in industry. Procedia Comput. Sci. 2024, 232, 1121–1130. [Google Scholar] [CrossRef]
  54. Löppenberg, M.; Yuwono, S.; Diprasetya, M.R.; Schwung, A. Dynamic robot routing optimization: State–space decomposition for operations research-informed reinforcement learning. Robot. Comput.-Integr. Manuf. 2024, 90, 102812. [Google Scholar] [CrossRef]
  55. Maruyama, K.; Yamaoka, M.; Yamazaki, Y.; Hoshi, Y. Digital Twin-Driven Human Robot Collaboration Using a Digital Human. Sensors 2021, 21, 8266. [Google Scholar] [CrossRef]
  56. Niermann, D.; Doernbach, T.; Petzoldt, C.; Isken, M.; Freitag, M. Software framework concept with visual programming and digital twin for intuitive process creation with multiple robotic systems. Robot. Comput.-Integr. Manuf. 2023, 82, 102536. [Google Scholar] [CrossRef]
  57. Zhang, Z.; Wang, Y.; Zhang, Z.; Wang, L.; Huang, H.; Cao, Q. A residual reinforcement learning method for robotic assembly using visual and force information. J. Manuf. Syst. 2024, 72, 245–262. [Google Scholar] [CrossRef]
  58. Keung, K.L.; Chan, Y.; Ng, K.K.; Mak, S.L.; Li, C.H.; Qin, Y.; Yu, C. Edge intelligence and agnostic robotic paradigm in resource synchronisation and sharing in flexible robotic and facility control system. Adv. Eng. Inform. 2022, 52, 101530. [Google Scholar] [CrossRef]
  59. Zhang, Z.; Zhang, Z.; Wang, L.; Zhu, X.; Huang, H.; Cao, Q. Digital twin-enabled grasp outcomes assessment for unknown objects using visual-tactile fusion perception. Robot. Comput.-Integr. Manuf. 2023, 84, 102601. [Google Scholar] [CrossRef]
  60. Arnarson, H.; Mahdi, H.; Solvang, B.; Bremdal, B.A. Towards automatic configuration and programming of a manufacturing cell. J. Manuf. Syst. 2022, 64, 225–235. [Google Scholar] [CrossRef]
  61. Yang, X.; Zhou, Z.; Sørensen, J.H.; Christensen, C.B.; Ünalan, M.; Zhang, X. Automation of SME production with a Cobot system powered by learning-based vision. Robot. Comput.-Integr. Manuf. 2023, 83, 102564. [Google Scholar] [CrossRef]
  62. D’Avella, S.; Unetti, M.; Tripicchio, P. RFID gazebo-based simulator with RSSI and phase signals for UHF tags localization and tracking. IEEE Access 2022, 10, 22150–22160. [Google Scholar] [CrossRef]
  63. Lumpp, F.; Panato, M.; Bombieri, N.; Fummi, F. A design flow based on Docker and Kubernetes for ROS-based robotic software applications. ACM Trans. Embed. Comput. Syst. 2024, 23, 1–24. [Google Scholar] [CrossRef]
  64. Singh, M.; Kapukotuwa, J.; Gouveia, E.L.S.; Fuenmayor, E.; Qiao, Y.; Murray, N.; Devine, D. Unity and ROS as a Digital and Communication Layer for Digital Twin Application: Case Study of Robotic Arm in a Smart Manufacturing Cell. Sensors 2024, 24, 5680. [Google Scholar] [CrossRef]
  65. Singh, M.; Kapukotuwa, J.; Gouveia, E.L.S.; Fuenmayor, E.; Qiao, Y.; Murray, N.; Devine, D. Comparative Study of Digital Twin Developed in Unity and Gazebo. Electronics 2025, 14, 276. [Google Scholar] [CrossRef]
  66. Kaiser, B.; Reichle, A.; Verl, A. Model-based automatic generation of digital twin models for the simulation of reconfigurable manufacturing systems for timber construction. Procedia CIRP 2022, 107, 387–392. [Google Scholar] [CrossRef]
  67. Szabó, G.; Peto, J. Intelligent wireless resource management in industrial camera systems: Reinforcement Learning-based AI-extension for efficient network utilization. Comput. Commun. 2024, 216, 68–85. [Google Scholar] [CrossRef]
  68. Wang, Y.; Xie, Y.; Xu, D.; Shi, J.; Fang, S.; Gui, W. Heuristic dense reward shaping for learning-based map-free navigation of industrial automatic mobile robots. ISA Trans. 2025, 156, 579–596. [Google Scholar] [CrossRef]
  69. Zhang, L.; Yang, C.; Yan, Y.; Cai, Z.; Hu, Y. Automated guided vehicle dispatching and routing integration via digital twin with deep reinforcement learning. J. Manuf. Syst. 2024, 72, 492–503. [Google Scholar] [CrossRef]
  70. Gallala, A.; Kumar, A.A.; Hichri, B.; Plapper, P. Digital Twin for Human–Robot Interactions by Means of Industry 4.0 Enabling Technologies. Sensors 2022, 22, 4950. [Google Scholar] [CrossRef]
  71. Llopart, A.; Bou, M.; Ribas-Xirgo, J. Unity and ROS as a Digital and Communication Layer for the Implementation of a Cyber-Physical System for Industry 4.0. In Proceedings of the 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Hong Kong, China, 20–21 August 2020; pp. 1069–1074. [Google Scholar] [CrossRef]
  72. Tsuji, C.; Coronado, E.; Osorio, P.; Venture, G. Adaptive contact-rich manipulation through few-shot imitation learning with Force-Torque feedback and pre-trained object representations. IEEE Robot. Autom. Lett. 2024, 10, 240–247. [Google Scholar] [CrossRef]
  73. Pang, J.; Zheng, P. ProjecTwin: A digital twin-based projection framework for flexible spatial augmented reality in adaptive assistance. J. Manuf. Syst. 2025, 78, 213–225. [Google Scholar] [CrossRef]
  74. Dianatfar, M.; Järvenpää, E.; Siltala, N.; Lanz, M. Template concept for VR environments: A case study in VR-based safety training for human–robot collaboration. Robot. Comput.-Integr. Manuf. 2025, 94, 102973. [Google Scholar] [CrossRef]
  75. Dimosthenopoulos, D.; Basamakis, F.P.; Mountzouridis, G.; Papadopoulos, G.; Michalos, G.; Makris, S. Towards utilising Artificial Intelligence for advanced reasoning and adaptability in human–robot collaborative workstations. Procedia CIRP 2024, 127, 147–152. [Google Scholar] [CrossRef]
  76. Koukas, S.; Kousi, N.; Aivaliotis, S.; Michalos, G.; Bröchler, R.; Makris, S. ODIN architecture enabling reconfigurable human–robot based production lines. Procedia CIRP 2022, 107, 1403–1408. [Google Scholar] [CrossRef]
  77. Tuli, T.; Manns, M.; Zeller, S. Human motion quality and accuracy measuring method for human–robot physical interactions. Intell. Serv. Robot. 2022, 15, 503–512. [Google Scholar] [CrossRef]
  78. Konstantinou, C.; Antonarakos, D.; Angelakis, P.; Gkournelos, C.; Michalos, G.; Makris, S. Leveraging Generative AI Prompt Programming for Human-Robot Collaborative Assembly. Procedia CIRP 2024, 128, 621–626. [Google Scholar] [CrossRef]
  79. Park, H.; Shin, M.; Choi, G.; Sim, Y.; Lee, J.; Yun, H.; Jun, M.B.G.; Kim, G.; Jeong, Y.; Yi, H. Integration of an exoskeleton robotic system into a digital twin for industrial manufacturing applications. Robot. Comput.-Integr. Manuf. 2024, 89, 102746. [Google Scholar] [CrossRef]
  80. Adler, F.; Gusenburger, D.; Blum, A.; Müller, R. Conception of a Robotic Digital Shadow in Augmented Reality for Enhanced Human-Robot Interaction. Procedia CIRP 2024, 130, 407–412. [Google Scholar] [CrossRef]
  81. Fellin, T.; Pagetti, C.; Beschi, M.; Caselli, S. A Framework for Enhanced Human–Robot Collaboration during Aircraft Fuselage Assembly. Machines 2022, 10, 390. [Google Scholar] [CrossRef]
  82. Mueller, J.; Georgilas, I.; Visala, A.; Hentout, A.; Thomas, U. A Robotic Teleoperation System with Integrated Augmented Reality and Digital Twin Capabilities for Enhanced Human-Robot Interaction. In Proceedings of the 2021 IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Vasteras, Sweden, 7–10 September 2021; pp. 1–8. [Google Scholar] [CrossRef]
  83. Erdei, T.I.; Krakó, R.; Husi, G. Design of a Digital Twin Training Centre for an Industrial Robot Arm. Appl. Sci. 2022, 12, 8862. [Google Scholar] [CrossRef]
  84. Chen, S.C.; Pamungkas, R.S.; Schmidt, D. The Role of Machine Learning in Improving Robotic Perception and Decision Making. Int. Trans. Artif. Intell. 2024, 3, 32–43. [Google Scholar] [CrossRef]
  85. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
  86. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  87. Trommnau, J.; Kühnle, J.; Siegert, J.; Inderka, R.; Bauernhansl, T. Overview of the state of the art in the production process of automotive wire harnesses, current research and future trends. Procedia CIRP 2019, 81, 387–392. [Google Scholar] [CrossRef]
  88. Macenski, S.; Jambrecic, I. SLAM Toolbox: SLAM for the dynamic world. J. Open Source Softw. 2021, 6, 2783. [Google Scholar] [CrossRef]
  89. Quigley, M.; Gerkey, B.; Smart, W.D. Programming Robots with ROS: A Practical Introduction to the Robot Operating System; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2015. [Google Scholar]
  90. Maruyama, Y.; Kato, S.; Azumi, T. Exploring the Performance of ROS2. In Proceedings of the 13th International Workshop on Embedded Multicore Systems (MCSoC), Lyon, France, 21–23 September 2016; pp. 1–6. [Google Scholar] [CrossRef]
  91. Open Robotics. ROS 2 Documentation. 2024. Available online: https://docs.ros.org/en/ros2_documentation/ (accessed on 27 January 2025).
  92. Klotzbucher, B.; Klotzbuecher, D.; Bruyninckx, H. ROS 2 for Real-Time Applications: Performance and Security Considerations. IEEE Robot. Autom. Mag. 2019, 26, 49–60. [Google Scholar] [CrossRef]
  93. Open Robotics. ROS 1: Architecture and Concepts. 2024. Available online: http://wiki.ros.org/ROS/Introduction (accessed on 27 January 2025).
  94. AWS Robomaker Team. Migrating from ROS 1 to ROS 2: A Technical Guide. 2023. Available online: https://docs.ros.org/en/foxy/The-ROS2-Project/Contributing/Migration-Guide.html#migrating-to-ros-2 (accessed on 27 January 2025).
  95. Wang, T.; Tan, C.; Huang, L.; Shi, Y.; Yue, T.; Huang, Z. Simplexity testbed: A model-based digital twin testbed. Comput. Ind. 2023, 145, 103804. [Google Scholar] [CrossRef]
  96. Yun, J.; Li, G.; Jiang, D.; Xu, M.; Xiang, F.; Huang, L.; Jiang, G.; Liu, X.; Xie, Y.; Tao, B.; et al. Digital twin model construction of robot and multi-object under stacking environment for grasping planning. Appl. Soft Comput. 2023, 149, 111005. [Google Scholar] [CrossRef]
Figure 2. Robotics simulators used in Industry 4.0 articles (individual tools counted separately).
Figure 3. Robotics simulators used in Industry 5.0 articles.
Figure 4. Comparison of robot types used in Industry 5.0 and 4.0 articles.
Figure 5. Comparison of ROS versions used in Industry 5.0 and 4.0 articles.
Table 1. Comparison of related surveys on robotic simulation platforms.
Year | Reference | Scope | Simulator Description or Comparison | Research Question Focus | Methodology
2021 | Liu et al. [12] | Physics-based simulation | Dynamics engines and platforms described | Limitation of physics-based simulation in robotic systems (implicit) | Narrative review
2021 | Collins et al. [13] | Field, soft, and medical robotics, manipulation and learning | Comparison of sensors, actuators, fluid/soft-body dynamics, IK, ROS, rendering, VR | Capabilities, limitations, and application areas of physics simulators in robotics (implicit) | Narrative review
2024 | Kargar et al. [11] | AI-driven perception/control for wheeled mobile robots | Comparison of physics engines, languages, open-source availability | Identifies WMR applications, tasks, commonly used ROS-compatible simulators | PRISMA (systematic)
2024 | Baratta et al. [10] | Digital-twin in manufacturing | Comparison of HRC simulation factors: human operator, robot agent, ergonomics, timing, interoperability | Identifies digital twin applications, tools, barriers in HRC | PRISMA (systematic)
2025 | This study | Industry 4.0/5.0 technologies | Technical comparison: requirements, documentation, pricing, physics engines, learning curve, connectivity, scalability, advantages/limitations of simulators | Identifies ROS-compatible simulators, applications, robots, sensing/perception/control technologies | SEGRESS (systematic)
Table 2. Keywords and filters applied per database.
Database | Search Terms and Filters
IEEE Xplore | robot AND (simulation OR simulators OR “digital twin”) AND ROS AND industry. Filters: 2021–2025.
ScienceDirect (first search) | robot AND (simulation OR simulators OR “digital twin”) AND ROS AND (industrial OR service OR assistive) AND human robot interaction. Filters: 2021–2025, Research Articles, Engineering.
ScienceDirect (second search) | robot AND (simulation OR simulators OR “digital twin”) AND “Robot Operating System” AND (“industry 5.0” OR “industry 4.0”). Filters: 2021–2025, Research Articles, Engineering.
SpringerLink | robot AND (simulation OR simulators OR “digital twin”) AND ROS AND (“industry 5.0” OR “industry 4.0”). Filters: Article, 2021–2025, English.
ACM Digital Library | robot AND (simulation OR simulators OR “digital twin”) AND ROS AND (“industry 5.0” OR “industry 4.0”). Filters: 2021–2025.
MDPI (first search) | robot AND (simulation OR simulators OR “digital twin”) AND ROS AND (“industry 5.0” OR “industry 4.0”). Filters: 2021–2025.
MDPI (second search) | robot AND (simulation OR simulators OR “digital twin”) AND (“industry 5.0” OR “industry 4.0”). Filters: 2021–2025.
Table 4. Technical comparison of robotics simulators (part 1).
Feature | Isaac Sim | Gazebo | Unity
OS (Minimum) | Ubuntu 20.04/22.04, Windows 10/11 | Ubuntu | Windows, macOS, Ubuntu
RAM (Minimum) | 32 GB | – | –
VRAM (Minimum) | 8 GB | – | –
Supported Languages | C++, Python | C++, Python | C#, UnityScript
ROS Compatibility | Yes | Yes | Yes (via ROS-TCP)
Docker Compatibility | Yes | Yes | No
Main Characteristics | Multi-robot simulation with AI | Realistic control simulation | Multi-platform support
Price | Free/Business Version | Free | Free/Pro: USD 2200
Physics Accuracy | High | Varies (engine dependent) | Medium (extra config)
Learning Curve | Advanced | Moderate | Easy
Scalability | Highly scalable | Moderate scalability | Limited scalability
A dash (–) indicates that the information was not explicitly reported or is unavailable.
Table 5. Technical comparison of robotics simulators (part 2).
Feature | MuJoCo | Unreal Engine | PyBullet
OS (Minimum) | Windows, macOS, Ubuntu | Windows, Linux, macOS, Android | –
RAM (Minimum) | – | 16 GB | 2 GB
VRAM (Minimum) | – | 8 GB | 512 MB
Supported Languages | Python, API: C++ | C++, Python, Lua, JavaScript | Python
ROS Compatibility | Yes | Investigate | Yes
Docker Compatibility | Yes | Yes | Yes
Main Characteristics | Advanced physics simulation | Advanced graphics for XR, PC | Physics simulation for learning
Price | Free | Free/Business Version | Free
Physics Accuracy | High (advanced dynamics) | Chaos Physics (advanced setup) | High
Learning Curve | Difficult | Difficult | Moderate
Scalability | Highly scalable | Highly scalable | Moderate scalability
A dash (–) indicates that the information was not explicitly reported or is unavailable. The term “Investigate” implies a lack of clear documentation or integration reports in the reviewed literature.
Table 6. Comparison of advantages and disadvantages of popular robotics simulators.
Simulator | Advantages | Disadvantages
Gazebo | High flexibility, ROS integration, multi-robot support, and extensive plugin ecosystem [19]. | High computational demands, real-time performance challenges, steep learning curve [19].
Unity 3D | High-fidelity visualization, support for XR applications, modular simulation capabilities [20]. | Limited physics accuracy for robotics, high computational requirements for XR, synchronization issues in VR/AR setups [21].
Isaac Sim | Advanced physics via NVIDIA PhysX, GPU-accelerated simulation, reinforcement learning support [22]. | Requires high-end hardware, proprietary platform limits accessibility [22].
CoppeliaSim | Intuitive interface, real-time simulation capabilities, multi-robot interaction support [23]. | Limited physics realism, scalability constraints in complex environments [23].
MuJoCo | High-precision multi-body physics, efficient for reinforcement learning [22]. | Slow performance for large-scale simulations, stability issues in long-running tasks [22].
Visual Components | Optimized for industrial automation, supports flexible DT modeling [24]. | Limited adoption outside industrial manufacturing, restricted extensibility [24].
Table 9. Comparison of ROS 1 and ROS 2.
Feature | ROS 1 | ROS 2
Communication Middleware | Uses a custom TCP/UDP-based transport layer (ros_comm) [89]. | Uses the Data Distribution Service (DDS) for improved real-time performance, scalability, and security [90].
Real-Time Support | Limited real-time capabilities; requires external modifications (e.g., Orocos) [90]. | Built-in real-time support with execution management, priority scheduling, and deterministic behavior [91].
Multi-Robot Support | Limited multi-robot capabilities, requiring additional workarounds for managing distributed systems [89]. | Native support for multi-robot applications with improved node discovery and communication [91].
Security | No built-in security features; security relies on external tools and configurations [90]. | Built-in security features (authentication, encryption, access control) following SROS 2 (Secure ROS 2) [91,92].
Middleware Flexibility | Uses a single communication layer (ros_comm), making it less adaptable to different network environments [89]. | DDS abstraction allows selection of different middleware implementations based on application needs [91].
Modularity and Scalability | Designed primarily for single-system robots; lacks flexibility for distributed systems [93]. | Modular architecture enabling distributed systems, allowing cloud-based and EC applications [91].
API and Node Management | Uses a centralized master node (roscore) for service discovery [89]. | Decentralized node discovery and communication, eliminating the need for a master node [91].
Compatibility with ROS 1 | Fully self-contained; does not support ROS 2 natively [89]. | Supports ROS 1 via the ROS 1 bridge, enabling hybrid deployments [91].
Best Use Cases | Suitable for research, prototyping, and single-robot applications [89]. | Suitable for industrial, large-scale, real-time, and multi-robot applications [92,94].
Table 10. Challenges in Robotics Simulation.
Challenge | Description
Computational efficiency and scalability | High-fidelity simulators like Isaac Sim and Gazebo require significant computational resources, limiting real-time performance and accessibility [19,22].
Sim-to-real transfer limitations | Ensuring that behaviors simulated in environments like Gazebo and Isaac Sim translate effectively to real-world deployment remains a key challenge [95].
Interoperability and standardization | The lack of standardized APIs and middleware complicates system integration across different hardware and software platforms, increasing development overhead [17,59,96].
Latency and synchronization issues | High-frequency data exchange in distributed environments introduces delays, impacting real-time control and sensor-actuator synchronization [17,19].
Human–robot interaction (HRI) modeling | Simulating both the physical and cognitive aspects of human–robot interactions in real-time remains an open problem [20].
Learning curve and accessibility | Simplifying simulator setup and configuration can lower the entry barrier for researchers and developers without extensive robotics expertise. Some simulators, such as Unity and CoppeliaSim, offer more accessible learning environments, while Isaac Sim and MuJoCo require advanced configuration knowledge [19,40].
Physics fidelity | Simulators like Isaac Sim and MuJoCo provide high-fidelity physics modeling for robotic manipulation and dynamic interaction tasks. In contrast, lighter simulators like CoppeliaSim prioritize efficiency at the cost of reduced physical accuracy [19,22].
Integration with robotics frameworks | Gazebo and Isaac Sim are deeply integrated with ROS, enabling streamlined transitions from simulation to real-world deployment. Unity and Unreal Engine, while powerful in visualization, require additional middleware to interface effectively with robotics frameworks [20,89].
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
