Digital Twin and Virtual Reality Based Methodology for Multi-Robot Manufacturing Cell Commissioning

Featured Application: The results of the work may find applications in process automation design, implementation, and commissioning.

Abstract: Intelligent automation, including robotics, is one of the current trends in the manufacturing industry in the context of "Industry 4.0", where cyber-physical systems control the production at automated or semi-automated factories. Robots are perfect substitutes for a skilled workforce for some repeatable, general, and strategically-important tasks. However, this transformation is not always feasible and immediate, since certain technologies do not provide the required degree of flexibility. The introduction of collaborative robots in the industry permits the combination of the advantages of manual and automated production. In some processes, it is necessary to incorporate robots from different manufacturers, thus the design of these multi-robot systems is crucial to guarantee the maximum quality and efficiency. In this context, this paper presents a novel methodology for process automation design, enhanced implementation, and real-time monitoring in operation based on creating a digital twin of the manufacturing process with an immersive virtual reality interface to be used as a virtual testbed before the physical implementation. Moreover, it can be efficiently used for operator training, real-time monitoring, and feasibility studies of future optimizations. It has been validated in a use case which provides a solution for an assembly manufacturing process.


Introduction
Lean manufacturing is a collection of synchronized methods and principles for organizing and controlling production sites in a technology-independent way to reach the shortest lead time with minimum costs and the highest quality [1]. Due to the evolution of technology and its introduction into factories, industry and manufacturing processes have changed and evolved throughout the industrial revolutions, from the introduction of mechanical production facilities powered by water and steam to the current cyber-physical production systems (CPPSs) and intelligent automation, key elements of the Fourth Industrial Revolution, also known as "Industry 4.0" [2]. The integration of automation technologies and lean manufacturing is called "lean automation" [3]. These CPPSs monitor the physical processes, make decentralized decisions, and trigger actions, communicating and cooperating with each other and with humans in real time. Networked machines perform more efficiently, collaboratively, and resiliently [4].
The rest of the paper is organized as follows: Section 2 reviews the state of the art and previous work, Section 3 presents the proposed approach, Section 4 describes the use case, and Section 5 presents the main conclusions.

The Concept of the Digital Twin
The concept of the "digital twin" (DT) comes from NASA. In the early days of space exploration, NASA pioneered the study of what were called "pairing technologies". Maintaining, repairing, and operating systems without physical access to them were the challenges at that time. Indeed, the first twin was a hardware twin: it consisted of two identical space vehicles, one in space and the other on the ground, enabling engineers to better assist the astronauts in orbit [15]. The use of digital twins is now very common at NASA, which uses a virtual environment to build and test its equipment, including space robots. Only after total approval in the virtual environment does the physical construction begin. The final result and the virtual twin are then linked through sensors for a continuous improvement process. The general digital twin model of a product consists of the physical entities, the virtual models, and the connected data which tie the physical and virtual worlds together [16]. However, current research on product lifecycle data still focuses mainly on physical products rather than virtual models. The connection between physical and virtual product data is needed to support product design, manufacturing, and service [17].
Similarly to equipment or products, manufacturing systems are becoming more autonomous. They need access to realistic models and real-time information about the processes for smart production management and control [18]. The use of model-based simulation is necessary not only during design and planning, but also during the production phases for such purposes as diagnosis, control, and optimization [10]. Given the uncertainty involved during the process of machinery degradation, proper design and adaptability of a digital twin model remain challenges [19]. Digital twins can be applied from initial factory planning and design to commissioning and maintenance, giving them value throughout the production lifecycle [20,21]. The digital twin can be also used for risk prediction and prevention pertaining to operators in processing plants [22]. In this sense, robots are perfect candidates for digital twin applications.

Robot Simulators
The first developments were oriented towards the simulation of the real robot or the real cell environment, mainly for robot programming and operator training, but these first virtual environments were disconnected from the real movements. They were isolated tools provided by robot manufacturers which made it possible to foresee the manipulator behavior in a simulated environment. Each manufacturer provided its own solution for its robots; thus, it was not possible to combine robots from different manufacturers. Moreover, as robotic languages are dependent on each manipulator, the simulation faces the same complexity as the real programming, which remains one of the major hurdles preventing automation using industrial robots [23]. Benchmarking of multi-robot systems is crucial for comparing the different existing solutions; however, there are only a few, limited tools to support it. For instance, Yan in [14] presented a simulation tool based on the robot operating system (ROS).
Recently, these robot simulators have evolved by including new technologies, such as virtual and augmented reality, or new features, such as human-robot collaboration. The collaboration between humans and robots is necessary to increase industrial competitiveness, and the application of virtual and augmented reality is essential to enable smoother collaboration with 3D immersive visualization [24]. Virtual reality (VR) offers a way to simulate reality. Originally, it was mainly used for entertainment purposes, but nowadays the evolution of the technologies, the appearance of multiple applications, and the reduction of costs have extended it to the manufacturing industry for safer human-machine interaction [25]. Several commercial simulation tools with VR visualization are available, such as Visual Components [26], Robotics and Automation [27], and RoboDK [28]. These tools are also used for the virtual commissioning of a robotic cell, which involves creating a digital twin and then testing and verifying the model in a simulated virtual environment [20]. ROS is also combined with virtual reality to create human interfaces [29].

Digital Twins and Robots
Several works relate the use of digital twins with robots. Kousi in [30] used the digital twin to adapt a robot's behavior in assembly tasks of the automobile industry. Malik in [31] presented a digital twin framework to support human-robot collaboration. Ma in [32] proposed a digital twin for enhanced human-machine interaction. Bilberg in [33] also combined the digital twin with human-robot collaboration but added task allocation. Aivaliotis in [34] applied the digital twin of a robot for predictive maintenance. In the literature, the digital twin concept is applied not only to a single robot but also to the whole manufacturing cell [35,36].
Combining virtual reality and digital twin technologies is an interesting option for training people in virtual environments with systems that behave realistically [11]. Burghardt in [37] and Kuts in [38] presented different methods for programming and controlling robots using virtual reality and digital twins, confirming that this combination facilitates human-robot interaction in terms of collaborative work, telecontrol, and programming.

Proposed Approach
Robots are perfect substitutes for a skilled workforce for some repeatable, general, and strategically-important tasks, but this substitution is not straightforward [5]. The automation of an industrial manufacturing process raises several preliminary questions:
1. What are the costs, in terms of money, time, safety, etc., of the current manual process?
2. Is the use of robots technically feasible for the tasks?
3. Will the robot work in isolation or collaboratively with humans?
4. What are the costs of the new automated or semi-automated process? Which costs are reduced and which ones are increased?
5. Is the new process cost-effective?
6. Does the automation reduce risks and enhance safety?
In order to answer the questions related to costs and safety, it is necessary to analyze the current (manual) situation and to compare it with the new one using robots (totally or partially automated). Thus, a sequential methodology with feedback loops has been defined to design, validate, implement, and operate the new robotized process. This methodology is based on using the digital twin of the new process as a virtual testbed to simulate and analyze the layout and the suitability of the selected robots and the other components. An immersive VR-based interface permits a better visualization and understanding of the digital twin. The proposed approach permits the detection of design mistakes during the virtual commissioning, before the real implementation, preventing costly and potentially unmanageable consequences. In addition, after the implementation, the digital twin can be used for operator training, thanks to the virtual reality interface; for real-time process monitoring, thanks to the real-time information received from sensors; and for testing future changes. All types of robots can be introduced in the digital twin framework, as it is independent of the manufacturer. Figure 1 illustrates the methodology to design the robotized process and its digital twin according to the proposed approach. As shown, this methodology is a sequential cascade process with feedback loops for redesign and verification. The overall process is detailed step by step below; at any point where redesign is necessary, the process goes back to step (1b).
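The cascade-with-feedback structure of the methodology can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: the step names are hypothetical labels summarizing the phases of Figure 1, and the feedback loop is reduced to a single verification gate at the simulation step.

```python
# Hypothetical step labels summarizing the phases of the methodology
# (Figure 1 defines the authoritative sequence).
STEPS = [
    "analysis",        # analyze the current manual process
    "design",          # step (1b): layout, robots, and components
    "vr_model",        # build the VR model of the cell
    "simulation",      # simulate and verify in the virtual testbed
    "implementation",  # physical implementation
    "digital_twin",    # connect real and virtual worlds
]

def run_methodology(verify):
    """Run the steps in order; a failed verification at 'simulation'
    loops back to 'design', as the feedback loop prescribes."""
    i = 0
    history = []
    while i < len(STEPS):
        step = STEPS[i]
        history.append(step)
        if step == "simulation" and not verify(step):
            i = STEPS.index("design")  # feedback loop: redesign
            continue
        i += 1
    return history
```

A verification function that fails once forces exactly one pass back through the design and VR modeling steps before the process can proceed to implementation.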

Implementation:
(a) Real implementation. Once the process and the robot-based automation solution have been virtually tested, it is time for the real implementation.
(b) VR model update (mirror). If during the real implementation there is any change from the original design, the virtual model should be updated in order to keep the mirror, and the simulation should be repeated by going back to step (2d).

Digital twin:
(a) Connection between worlds. Sensor installation for real-time data communication between the real world and the virtual model to create the digital twin.
(b) Digital twin with VR visualization. Visualization of the real actions and events in the digital twin, and operator training.
(c) Real commissioning. Real functioning of the manufacturing process mirrored in its digital twin.

System Architecture
In general, a traditional robotic cell is composed of one or more robots, conveyor belts, the cell controller, physical safety systems, the human-machine interface (HMI), etc. If the cell is collaborative, more human factors need to be considered: safety, optimized task distribution, and human-robot interaction/adaptive control [39]. This is where the digital twin becomes a key element for process automation design, enhanced implementation, and real-time monitoring in operation. As the digital twin mirrors the real behavior, it should receive information about the movements of the robots and the other elements of the cell, including people. Therefore, additional sensors and a real-time connection between the real cell and the virtual one are necessary. Although the digital twin framework could afterwards be used to control the real manufacturing cell, this has not been considered in this work; it would only be necessary to add certain actuators in the cell to receive the commands from the digital twin.
These components are structured and connected according to the architecture presented in Figure 2, which groups the hardware into seven subsystems:

• Control system. The cell is managed by the control system with a graphical user interface.
• Robots and other cell components. The cell may be composed of one or several robots and other elements: conveyor belts, automatic tools, etc. As in a standard solution, the robots are controlled and commanded by their controllers.
• Safety system. As the operators can enter the working area of the robots, the safety system is aimed at avoiding collisions between them. When there is any potential risk of collision, the robot controller receives an alert signal to reduce the speed or to stop, depending on the distance between the human and the robot.
• HMI. During operation, the users interact with the system using an HMI connected to the cell control system instead of dealing with the specific console of each robot.
• Digital twin. The real cell is mirrored in a virtual space.
• Sensors. In order to feed the digital twin with real-time information, additional sensors are installed in the cell.
• Virtual reality interface. The VR interface permits an immersive visualization of the digital twin.
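The distance-based behavior of the safety system can be sketched as a simple mapping from human-robot distance to a speed command. This is a conceptual sketch only: the zone thresholds and speed factors are illustrative assumptions, not values from the paper or from any safety standard.

```python
# Assumed, illustrative safety zone thresholds (not from the paper).
COLLAB_ZONE_M = 1.5  # inside this distance -> reduced speed
STOP_ZONE_M = 0.5    # closer than this -> protective stop

def speed_command(distance_m, nominal_speed=1.0, reduced_factor=0.25):
    """Map the human-robot distance (meters) to a robot speed command,
    mirroring the alert signals sent to the robot controller."""
    if distance_m < STOP_ZONE_M:
        return 0.0                             # protective stop
    if distance_m < COLLAB_ZONE_M:
        return nominal_speed * reduced_factor  # collaborative speed
    return nominal_speed                       # full speed
```

In a real cell these thresholds and the monitoring itself are handled by a certified safety scanner and the robot controller, not by application code.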
The main advantage of this architecture is its modularity. If, for example, a future application does not require the VR visualization, this module can simply be omitted. If additional capabilities are necessary, new modules can be added to increase comprehension, intelligence, and services. Moreover, these subsystems can be developed independently and permit the integration of robots from different manufacturers in the living digital twin of the whole cell, which results in a novel and intelligent tool for design, simulation, real-time monitoring, training, and safe human-robot collaboration. These processes have traditionally been separated, meaning that there was no single framework covering all the steps; thus, different applications were necessary for each step and for each robot. The proposed approach covers all the steps with a single application, increasing the efficiency of the automation of industrial manufacturing processes.

The Virtual Interface
Virtual reality replaces real sense perceptions with computer-generated ones describing a 3D scene and animations of objects within the scene. The user needs to feel a totally immersive and authentic experience in the VR application. This is achieved by realistic behavior of the elements, by avoiding latencies between actions and feedback, and by creating a high-quality 3D reconstruction that transmits the sense of presence to the user [25]. In order to avoid latencies and permit real-time interaction, the VR system requires a powerful dedicated computer [40] (this computer also hosts the digital twin of the cell). Moreover, computer graphics techniques and algorithms can be used to improve the rendering process. The high-quality 3D reconstruction for the virtual interface can be achieved following the procedure described in [25]:
1. Scan the real scenario. First, using a 3D reconstruction scanner, a dense 3D point cloud is obtained. The scanner of the mentioned reference can be used, but others are also possible.
2. Process the resulting 3D point cloud and apply filters. The point cloud is processed and filtered with the software of the 3D scanner in order to reduce noise and the number of points, guaranteeing a continuous density of points to facilitate the next step.
3. Model the point cloud to render the virtual environment. The point cloud is modeled to create the 3D reconstruction with the real dimensions of the real environment and the immersive effect for the user.
4. Implement the elements' behavior and the human interaction. Finally, the virtual scene is completed with the configuration of the physical behavior of the elements (animations, movements, events, actions, etc., so that virtual elements act similarly to the real ones), a set of virtual buttons, floating text charts, etc., to permit immersive user interaction and data visualization.
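Step 2 of the procedure reduces the point count while keeping a roughly uniform density. One common technique for this is voxel-grid downsampling, sketched below in plain Python for illustration; in practice this filtering is performed with the 3D scanner's own software, so this is a conceptual example rather than the actual pipeline.

```python
def voxel_downsample(points, voxel_size):
    """Voxel-grid downsampling: partition space into cubic voxels of side
    voxel_size and keep one averaged point (centroid) per occupied voxel.
    points: iterable of (x, y, z) tuples."""
    voxels = {}
    for x, y, z in points:
        # Integer voxel coordinates of this point.
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        sx, sy, sz, n = voxels.get(key, (0.0, 0.0, 0.0, 0))
        voxels[key] = (sx + x, sy + y, sz + z, n + 1)
    # One centroid per voxel: the reduced, density-regularized cloud.
    return [(sx / n, sy / n, sz / n) for sx, sy, sz, n in voxels.values()]
```

Because every voxel contributes at most one output point, the result has a bounded spatial density, which is exactly the "continuous density of points" the modeling step requires.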

Use Case
A good assembly process plan can increase the efficiency and quality, and decrease the cost and time, of the whole manufacturing process [41]; thus, the design step is critical to achieving a successful implementation. For this reason, the proposed approach is applied to the design, implementation, and operation of an assembly manufacturing process where humans and robots work collaboratively. This is a very representative case for different manufacturing industries wherein the final product is the result of the integration of several parts, such as the aerospace, automotive, pharmaceutical, food and beverage, and electronics industries. Thus, potential applications are very likely to be found in the creation of flexible and easily reconfigurable production lines.

Design
The aim of this process is to classify the different manufactured parts (Figure 3), to assemble the parts and the covers, and to put them on a tray for inspection and delivery. For this purpose, the parts and the covers are spread among different containers. The operator prepares the batches according to the manufacturing orders, extracting the parts from the containers, placing them on the trays, and assembling the covers. In traditional industry, productivity and safety were limited by manual processes. The introduction of automation and intelligent collaborative robots in industrial manufacturing processes is resulting in a rapid increase in productivity, major material and energy savings, and safer working conditions [42]. Thus, in this use case robots assist the operator during the tasks, and inspection systems verify whether parts, covers, and batches are correct.
The overall assembly process is composed of the following steps (Figure 4 shows the flowchart of the process):

Batch preparation and individual inspection:
(a) Once the operator and the robot are at the assembly table, the system indicates to the operator the batch and the first part to take.
(b) Following the instructions, the operator puts the part in the buffer.
(c) The robot verifies whether the part is correct with an on-board camera [43]. If the part is correct (type, dimensions, and color), it is picked and placed in the tray. If not, the robot puts away the wrong part.
(d) The process is repeated for all the parts of the batch. Regarding wrong parts, the program keeps a list of pending parts; as long as a part remains on this list, it is requested again, and it is only removed once it is correctly detected in the buffer.
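The pending-parts bookkeeping of step (d) can be sketched as a small retry loop. This is an illustrative sketch only: the inspection is abstracted as a callable (the real cell uses the robot's on-board camera), and the function names are hypothetical.

```python
def assemble_batch(batch, inspect):
    """batch: list of part ids; inspect(part) -> True if the part is correct.
    A part stays on the pending list until it passes inspection, so wrong
    parts are requested again after the rest of the batch.
    Returns the order in which parts were accepted onto the tray."""
    pending = list(batch)
    tray = []
    while pending:
        part = pending.pop(0)
        if inspect(part):
            tray.append(part)     # correct: pick and place on the tray
        else:
            pending.append(part)  # wrong: discard it and request it again
    return tray
```

For example, if part "B" fails its first inspection, it is requested again after "C", so the tray order becomes A, C, B; this mirrors the flexibility described in the process, where a wrong part does not block the rest of the batch.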

Covers assembly and batch inspection:
(a) When all the positions of the template of the tray are completed, the robot verifies again that all the parts are the required ones and that they are in the right position.

Batch distribution
To carry out this process, two different collaborative robots (cobots) and a conveyor belt have been selected, instead of an autonomous mobile platform, due to the spatial limitations and in order to save costs. Thus, the following components are necessary:

• A table for the containers of parts and trays.
• The assembly robot.
• An assembly table with the containers of covers and discarded parts, as well as a buffer where the operator puts the parts which the robot will move to the tray.
• A vision system for part location and inspection.
• A conveyor belt where the robot will place the tray once completed.
• The delivery robot, which waits for the tray at the other side of the conveyor.

Figure 5 shows the particularized architecture for the use case process:

• Control system. The unit controller is a Raspberry Pi 3 B+.
• Robots and other cell components. Two lightweight collaborative robots with grippers have been selected: the Omron TM5-900 and the Universal Robots UR5e. The first one is used in the assembly operation, as it has an on-board vision system which is used for the inspection; its tool is the Robotiq 2F-140 gripper. The second robot is used for the delivery of the trays, with the Robotiq Hand-e gripper as tool. The conveyor belt is manufactured in-house and is controlled by a Siemens S7-1200 PLC. Moreover, it contains presence sensors in order to track the position of the trays.
• Safety system. The movements of the operators inside the working area of the robots are monitored by the Sick microScan3 Core scanner. If the operator is in the collaborative area, the robot controller receives an alert signal to reduce the speed, and if he/she is too close to the robot, the robot controller receives an alert signal to stop.
• HMI. A touch screen is connected to the control system to permit human interaction, so that the operator knows which operations and actions must be performed and gets feedback on his/her performance from the system.
• Digital twin. The PC which hosts the virtual mirror of the real cell is connected to the cell network to receive the real-time information.
• Sensors. The real-time information about movements, events, etc., is provided by the safety system, the presence sensors, the robots, the inspection systems, etc.
• Virtual reality interface. Among the different commercially available hardware, HTC Vive glasses were selected. Unity3D is used as the VR engine.

Figure 5. System architecture particularized to the use case: subsystems and components.

Figure 6 shows the layout that has been designed according to the previous steps.

VR Model
According to the proposed approach, the real scenario is recreated in virtual reality to visualize the designed process and to evaluate the layout, for instance, in terms of the robots' working ranges. Figure 7 shows the creation of the virtual reality environment: Figure 7a is the original area of the facilities, Figure 7b is the 3D point cloud resulting from scanning this area with a FARO Focus3D X130 HDR, and Figure 7c is the resulting virtual environment, modeled in Blender, ready for use in virtual reality. The VR model is completed by adding the virtual models of all the components of the cell (robots, conveyor belt, tables, parts, etc.). Finally, the physical behavior of the elements and the human interaction are implemented using Unity3D. The final result is shown in Figures 8 and 9, where the high quality of the 3D reconstruction creating the immersive effect in the VR interface can be noticed.

Using this virtual scenario, the process has been simulated and evaluated for the virtual commissioning, applying lean automation concepts and verifying that the design fulfills the requirements. Different work models were defined, assuming a distribution of tasks between the operator and the collaborative robots. Each of these models was mainly characterized by the layout, the logic of behavior, and the parameters associated with production management. The manual process without robots was also evaluated. After analyzing and comparing the models in terms of (1) efficiency and optimization, (2) reduction of movements and transportation, and (3) possibility of future extensions, the following are the main conclusions which validate the designed process (including elements, layout, flowchart, etc.):

Efficiency and optimization:
• The parts are inspected prior to the assembly of the covers, avoiding spending time and wasting materials on wrong parts.
• The operator receives instructions to continue his/her tasks in parallel with the robot's tasks; it is not necessary to wait.
• The buffer avoids bottlenecks.
• The robot automatically discards the wrong parts; the operator does not have to wait for the inspection result.
• If a part is pending, it is requested at the end of the batch, increasing process flexibility and avoiding confusion for the operator.
• If all the tasks were done manually without any automation, the operator would spend time on repetitive tasks, the inspection would be subjective, and materials would be wasted, among other inefficiencies; thus, it would not be the optimal situation.

Reduction of movements and transportation:
• Containers and trays are close to the operator in order to reduce movements.
• The conveyor belt permits the transportation of the completed batches from the assembly area to the delivery area, avoiding the use of a mobile platform or an automatic guided vehicle (AGV).

Possibility of future extensions:
• New workstations can be added.
• Augmented reality can be used as the HMI instead of the current screen. For this purpose, it is necessary to place QR codes in the real scenario; their possible locations have been studied in the virtual scenario, as can be appreciated in Figure 9a.

Implementation
Once the process and the layout have been validated in the virtual space, they are physically implemented in the real scenario, as shown in Figure 10. Comparing Figures 8 and 10, the high accuracy of the 3D reconstruction creating the immersive effect in the VR interface can be appreciated. The virtual environment faithfully reproduces the real one, down to the smallest details (real dimensions, textures, colors, lighting effects, etc.), transmitting the sense of presence to the user. Furthermore, Figures 11 and 12 show the elements in detail.

Digital Twin
Finally, it is necessary to connect the real process with the virtual one in order to create the digital twin and feed it with real-time information. For this purpose, the virtual reality controller communicates with the cell control system to receive information related to the movements of the humans, the robots, the parts, the trays, etc.:

• Manufacturing orders. They are directly transmitted to the digital twin at the beginning of the assembly process.
• Operators. Their positions are tracked by the safety system and transmitted to the digital twin.
• Robots. The control system executes predetermined (previously programmed) movements, which are communicated to the digital twin.
• Inspection results. The control system receives the results of the part and batch inspections, so they can be communicated and replicated in the digital twin.
• Conveyor belt. The PLC which controls the movement of the conveyor is connected to the cell control system. It captures the data from the presence sensors, the speed, etc., and this information is transmitted to the digital twin.
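One way to picture how the digital twin consumes these real-time feeds is as a state object updated from incoming messages. The following is a hedged sketch under stated assumptions: the message kinds and field names are illustrative inventions, not the actual protocol between the cell control system and the digital twin PC.

```python
class DigitalTwinState:
    """Virtual mirror of the cell, updated from incoming real-time messages.
    Message schema is hypothetical, for illustration only."""

    def __init__(self):
        self.order = None          # current manufacturing order
        self.operator_pos = None   # operator position from the safety system
        self.robot_joints = {}     # latest joint values per robot
        self.inspections = []      # (part, ok) inspection results
        self.conveyor = {"running": False, "tray_present": False}

    def update(self, msg):
        kind = msg["kind"]
        if kind == "order":         # manufacturing order at batch start
            self.order = msg["order_id"]
        elif kind == "operator":    # position tracked by the safety system
            self.operator_pos = msg["position"]
        elif kind == "robot":       # pre-programmed movement being executed
            self.robot_joints[msg["robot"]] = msg["joints"]
        elif kind == "inspection":  # part/batch inspection result
            self.inspections.append((msg["part"], msg["ok"]))
        elif kind == "conveyor":    # PLC data: presence sensors, speed, etc.
            self.conveyor.update(msg["data"])
```

Each bullet above maps to one message kind, and the twin's state after a sequence of updates is what the VR interface renders; the modularity of the architecture means new message kinds can be added without touching the existing handlers.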

Results and Discussion
This work presents a novel methodology for multi-robot manufacturing cell design and operation, combining digital twin and virtual reality technologies. The proposed framework and the modular architecture permit simulation and real-time monitoring. The fulfillment of the requirements is verified in a digital twin framework based on virtual reality, which permits the immersive visualization of the design and the simulation of possible modifications to find the optimal solution during the virtual commissioning. Once the automated process is implemented in the real world, it is mirrored and linked to its digital twin in the virtual world, which permits real-time monitoring and continuous training and improvement. Thus, this work implies a theoretical outcome, which is the proposed methodology for robot-based automation, and a practical one, which is the digital twin framework with VR visualization used as a testbed environment. Results show that the proposed methodology permits the efficient design and real commissioning of multi-robot manufacturing processes, including human-robot collaborative cells, which implies an intelligent, efficient, and unique work environment with high-potential applications for process design, implementation, and control. Moreover, digital twins with VR visualization allow humans to work in totally safe environments with robots. Table 1 shows the comparison between simulation tools from robot manufacturers, commercial simulation tools with virtual reality, and the digital twin based on virtual reality, in terms of low acquisition costs (labeled as "Low investment" in the table), integration of robots from different manufacturers ("Multi-robot"), orientation to human-robot collaboration ("Human-robot collab."), immersive effect and virtual reality ("Immersive"), environment customization ("Customization"), usability for training ("Training"), and versatility to include new functionalities ("Versatility").
The scale 1-3 represents a relative comparison between the tools, where "1" means the worst, or not supported, and "3" means the best. Many companies cannot purchase a specific simulation software package for each type of robot when they are studying the introduction of robots in their manufacturing processes. The proposed methodology based on the digital twin is totally affordable, as it only requires the VR system as an additional component, which is a mass consumer product. Although the methodology can be extended to the automation of other manufacturing processes, its disadvantage is that it requires an expert developer for the creation of the customized digital twin model and the immersive virtual environment. However, this fact provides great versatility to add new features and functionalities according to the needs of the company.

Table 1. Comparison between simulation tools and the proposed approach.

Criterion             Manufacturer Tools   Commercial VR Tools   Digital Twin + VR
Low investment        2                    1                     3
Multi-robot           1                    3                     3
Human-robot collab.   1                    1                     3
Immersive             1                    3                     3
Customization         1                    2                     3
Training              1                    2                     3
Versatility           1                    2                     3

The proposed approach has been validated in the real commissioning of a representative use case of an assembly manufacturing process, where humans and robots from different manufacturers work collaboratively in the classification, assembly, inspection, and delivery of batches of parts. Results show that the presented combination of the digital twin concept with virtual reality permits the design, simulation, training, and real-time monitoring of the manufacturing process. The digital twin of the robotic cell permits an efficient and optimized design, evaluating different options for the layout, the use and the number of robots, and other parameters to find the best solution according to lean automation concepts. All of them are validated in the virtual commissioning before the physical implementation.
After the implementation, the cell is mirrored in the digital twin, monitoring productivity and safety for the real commissioning, which are key issues for industrial leadership. Figure 13 shows the tests in the virtual and real scenarios. Testers point out that the proposed methodology increases efficiency, as the same tool includes all the necessary steps for the real commissioning of the cell, integrating all types of robots and collaborative applications. The intermediate virtual commissioning includes even the smallest details, and the immersive VR visualization gives a sense of total realism (sense of presence). Moreover, this solution is safe, dynamic, and cost-effective. Potential applications can be found in different industries; thus, a very likely outcome is the extension of these results to introduce robots in the manufacturing processes of multiple industries and to increase their efficiency. In this sense, the next steps of this work will focus on more complex manufacturing processes and on extending the capabilities to conduct data analysis. In the current Industry 4.0 revolution, where manufacturing technologies are continuously changing in order to achieve personalized products, in contrast with traditional serial production, intelligent automation is a core element to increase productivity and to improve the competitiveness of the industry. Robots are the future of the industry, and thus the design, the commissioning, and the operation of robotized cells are critical to achieving success. The proposed approach demonstrates that the synergies between Industry 4.0 technologies, such as digital twins, virtual reality, and collaborative robotics, enable working at new levels and in parallel environments which had not been accomplished before. The future of manufacturing requires interaction between humans and multiple types of robots, and between physical and virtual scenarios.
Each manufacturing process will have its digital twin, not only for visualization and control, but also for continuous improvement.