1. Introduction
Digitalization shapes how we learn, work, and entertain ourselves by providing the tools for location- and time-independent presence and control of things. Gaining new knowledge, learning new skills, playing games, socializing with others, and controlling production systems are all possible from anywhere by using everyday mobile devices. Advancements in digitalization and the availability of mobile devices have given rise to a new generation of digital natives [1,2]. Socializing, learning, and working in virtual environments are natural for the digital native generation, who have grown up using mobile devices. The metaverse is one of the latest implementations of evolving digitalization, enabling the aforementioned activities in a parallel digital version of our reality [3].
In the metaverse, we can exist as avatars and learn or work much as we do in the real world, making it a natural virtual environment for the digital native generation. Gaming and entertainment are the most popular ways to experience the metaverse; multiplayer gaming can be an immersive social event [4]. In addition to entertainment and gaming, industry has recognized the potential of the metaverse as an enabler of the digital workplace, and the first steps have already been taken to create the industrial metaverse [5,6]. The metaverse can increase production efficiency by enabling location-independent control of the physical machinery that is required to manufacture everyday products for the consumer market.
The underwater environment is unnatural for humans, and working in the deep sea requires diving gear or remotely operated vehicles (ROVs). Due to the cold temperatures and human physiology, working in the deep sea is only possible for relatively short periods to avoid hypothermia and decompression sickness [7]. In addition to the harsh physical conditions, the limited underwater visibility is challenging for divers and the teleoperators of ROVs. By combining extended reality (XR) interfaces, the metaverse, and digital twins (DTs), it is possible to remove the user from harsh underwater conditions. In addition, XR enables the provision of an enhanced and processed virtual view of the underwater environment instead of a blurry live video stream that lacks visual cues for the teleoperator.
The benefits of the location and time independence enabled by the metaverse are obvious to academia [8]. In addition to learning not being bound to a specific time and place, the digital native generation that is being educated today processes information differently from previous generations. Instead of reading books and writing essays, digital natives prefer to use social forums, online videos, and Google searches to gain knowledge [9]. Virtual environments (VEs) are a way for academia to attract students’ interest in engineering topics, such as robotics and automation. The metaverse enables the digital native generation to train basic robotic skills, such as controlling the movements of robots and creating programs for them, in a natural way. DTs, in turn, bridge physical robot systems and virtual training environments, closing the reality gap left by fully virtual experiences.
A bridge enabling bi-directional interaction is required for a metaverse user to control and monitor physical systems. The bridge is a middleware between a virtual environment and the controller of a physical system, and it enables the user to control the actuators and monitor the sensor information of the remote system. In addition, the physical characteristics of virtual and real-world systems must match to bridge the reality gap between the two [10]. DTs [11] are suitable middleware, and they enable interactions between and merging of the virtual and physical worlds. DTs enable synchronized bi-directional communication between a physical system and a virtual user interface. In addition, a DT entity describes the physical characteristics of a physical system, enabling identical twins.
XR is an essential enabler of the metaverse, as it provides a high-level user interface [12] for experiencing the post-reality universe. XR enables fully virtual and mixed-reality experiences by blending reality and synthetic virtual elements [13]. While the foundation for XR was laid decades ago by pioneers in computing [14] and entertainment [15], everyday applications have had to wait for the evolution of computing, display technologies [16], and optics [17] to make user devices affordable and comfortable enough for experiencing virtual worlds.
The research questions of this publication are the following: Can DTs and XR enable the digital workplaces required by the industrial metaverse? Are virtual learning environments enabled by utilizing the aforementioned combination of technologies? This publication presents four use cases that take steps towards the metaverse by utilizing XR and DTs to control robotic systems:
An XR interface for future industrial robotics;
An interface for enhancing the cognitive capabilities of the teleoperator of an underwater vehicle;
DTs that are used as robotic training tools;
Movement toward a maritime metaverse with social communication, hands-on experiences, and DTs.
The rest of this paper is organized as follows: Section 2 reviews research on the metaverse, XR, DTs, and teleoperation; Section 3 describes the methods used to implement the presented use cases; Section 4 presents the implemented use cases; and Section 5 discusses the results and concludes the paper.
2. Related Research
2.1. Teleoperation
Teleoperation has been an active research topic for decades, with the aim of removing barriers between operators and machines [18]. For example, a barrier preventing on-site operation can be a hazardous environment or a large distance between an operator and a machine. The first teleoperation applications were unmanned torpedoes [19,20]; lives could be saved by guiding the torpedoes to their targets from a safe location.
Since then, teleoperation has been applied to various robotic applications, such as surgery, space exploration, and the handling of hazardous materials [21,22,23,24,25,26]. González et al. [27] proposed a robot teleoperation system that enabled the operator to perform industrial finishing processes by using an industrial robot. Duan et al. [28] proposed a teleoperation system for ultrasound scanning in the healthcare sector, and the proposed solution was proven safe and effective. Caiza et al. [29] introduced a teleoperated robot for inspection tasks in oil fields; the proposed system utilized the lightweight MQTT data transfer protocol, which is described in detail in [30].
Underwater robotics is an efficient tool for studying and monitoring the ocean and for performing coastal conservation, coral restoration, and oil rig maintenance [31,32,33]. The teleoperation of underwater vehicles presents various communication difficulties because of the environment’s harsh and constantly changing conditions. Among these difficulties are constraints on communication bandwidth and signal quality, packet losses, propagation delay, environmental variability, and security concerns [34,35]. As described in [36], the difficulties related to human performance when controlling teleoperated systems can be divided into two categories. The first is remote perception, which is challenging because natural perception processing is separated from the physical environment. The second is remote manipulation, which suffers from the limitations of the operator’s motor skills and their capacity to maintain situational awareness. Factors affecting remote perception and manipulation are commonly listed as a limited field of view (FOV) and camera viewpoint, degraded orientation and depth perception, and time delays [37].
Another key challenge in developing virtual- and augmented-reality teleoperation applications is the optimal design of human–robot interfaces. In other words, given a physical system and a user input device, how should a human–robot interface translate the configuration and action spaces between the user and the physical system for teleoperation? It is worth noting that the term “optimal” implies that such an interface complies with certain constraints related to user comfort, smoothness, efficiency, continuity, consistency, and the controllability and reachability of physical systems [38].
2.2. Digital Twin Concept
The concept of the DT has been an active research topic since it was introduced by Grieves [11]. DTs are digital models of physical devices or systems featuring bi-directional communication and algorithms to match the configurations of the physical and digital counterparts [10,39]. DTs enable a user to interact with the low-level functions of a physical twin. Single devices, such as an industrial robot, or larger entities, such as smart cities or digital factories, can be twinned [40,41,42]. Different approaches to categorizing DTs exist: Grieves divided DTs into DT prototypes (DTPs) and DT instances (DTIs), whereas Kritzinger et al. [10] divided DTs into three levels according to the level of integration: the digital model (DM), the digital shadow (DS), and the digital twin (DT). In this paper, we follow Kritzinger’s method for categorizing DTs. Misinterpretations and misconceptions of the evolving DT concept have existed since it was presented by Grieves [43].
The glossary of the Digital Twin Consortium defines a DT as a virtual representation of real-world entities and processes synchronized at a specified frequency and fidelity [44]. The DT is a mature concept that was standardized by the International Organization for Standardization (ISO) in 2021 [45].
2.3. Extended Reality
XR is an umbrella term covering virtual reality (VR) and mixed reality (MR) [13]. MR can be further divided into the augmented reality (AR) and augmented virtuality (AV) subcategories. XR has been researched for decades [14,15,46], and recent advancements in computing, optics, and electronics [16,17] have enabled immersive and augmented virtual experiences by using affordable stand-alone head-mounted displays (HMDs) or everyday mobile devices [47,48]. In addition to HMDs, a user can interact with virtual objects by using handheld controllers, which enable grabbing, pointing, and touching [49]. The latest improvements in HMD sensor technology have enabled hand-tracking, thus allowing users to interact with virtual objects by using simple gestures, such as pinching and pointing [50].
The Unity game engine, originally launched as a game engine for MacOS, has become one of the most popular game engines for desktop and mobile applications [51]. The Godot engine was released under an MIT License in 2014 as an open-source alternative to Unity [52,53]. Both Unity and Godot include rendering and physics engines, installable assets, and a graphical editor. In Unity, the functionality of game objects is programmed in C# by using the Visual Studio integrated development environment (IDE). In addition to C#, Godot supports GDScript, a Python-like scripting language. The Godot engine and Unity enable XR applications to be compiled as WebXR runtimes, which combine WebGL, HTML5, and WebAssembly [54,55,56]. WebXR runtimes can be distributed on the Internet, are cybersecure and accessible, and support cross-platform devices.
Epic’s Unreal Engine (UE) is a popular game engine that is widely utilized in game programming and industrial applications [57]. Functionality in UE is programmed by using C++ or Blueprints, and Epic provides a content store for purchasing additional assets. While Blueprints are an easy-to-use visual programming tool, C++ enables the programming of more complex functionalities. UE has supported the compilation of WebXR binaries since version 4.24, which was released on GitHub [58].
2.4. Extended Reality in Programming and Control of Robots
Since the introduction of industrial robots, the teach pendant has been and remains the most popular programming method; over 90% of industrial robots are programmed by using a teach pendant [59,60]. Programming by utilizing a teach pendant is not an intuitive way to program an industrial robot, and researchers have studied XR as an alternative programming method for industrial robots [12,61,62,63]. In addition to programming, XR has been studied as an interface for the teleoperation of robots. Recent cross-scientific research has utilized XR as a high-level human–machine interface for teleoperation and a DT as a middleware that enables the teleoperator to control a physical system [64]. In addition to twinning a robot arm, González et al. and Li et al. twinned the robot’s surroundings by utilizing point clouds from three-dimensional cameras [27,65].
2.5. Communication Layer
An effective communication protocol is required to synchronize the states of digital and physical twins. MQTT [30] enables efficient communication over the Internet and local networks. MQTT is one of the most popular IoT and IIoT communication protocols since it is a lightweight and efficient publisher–subscriber protocol [66]. Messaging involves three participants: the publisher, the subscriber, and a broker between the two. MQTT was originally developed for resource-constrained communications, and the latest revisions enable cybersecure communications, including encryption of the data and authentication of the users.
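To make the publisher–subscriber pattern concrete, the following minimal Python sketch uses the paho-mqtt library; the broker address, topic name, and payload are illustrative assumptions rather than details of the use cases presented later.

```python
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # hypothetical broker address
TOPIC = "robots/cobot1/state"   # hypothetical topic

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)     # (re)subscribe whenever the connection opens

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)    # 1883 = default unencrypted MQTT port
client.publish(TOPIC, '{"j1": 0.0, "j2": 1.57}')  # act as a publisher, too
client.loop_forever()           # process network traffic and callbacks
```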
2.6. Real-Time Video
Web Real-Time Communication (WebRTC) is a web-based technology for real-time video transfer [67]. WebRTC is an open-source protocol that is implemented on top of the User Datagram Protocol (UDP) to enable low-latency video streaming. Since UDP does not natively support congestion control, Google developed a dedicated congestion control mechanism (GCC) for WebRTC. GCC scales the resolution of a video stream in proportion to the available bandwidth, thus maintaining low latency at the cost of resolution. Video streaming latencies of 80 to 100 milliseconds have been measured on mobile platforms [68].
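As a rough illustration of how a WebRTC video sender is set up, the following Python sketch uses the aiortc library; the camera device path assumes a Linux V4L2 device, and the signaling exchange that WebRTC requires is omitted. This is a sketch under those assumptions, not the implementation used in the use cases below.

```python
import asyncio
from aiortc import RTCPeerConnection
from aiortc.contrib.media import MediaPlayer

async def run():
    pc = RTCPeerConnection()
    # Capture a local camera; "/dev/video0" assumes a Linux V4L2 device.
    player = MediaPlayer("/dev/video0", format="v4l2")
    pc.addTrack(player.video)
    # Create an SDP offer; a real system exchanges this with the remote
    # peer over a signaling channel (omitted here).
    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    print(pc.localDescription.sdp)

asyncio.run(run())
```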
2.7. Metaverse
The metaverse, as a post-reality universe, merges the physical and virtual worlds [3]. The metaverse started as a web of virtual worlds that enabled users to teleport from one virtual world to another. The evolution of virtual multi-user environments has progressed from multi-user role-playing games and gaming platforms [69] to the virtual social platforms of today [70], and the ability to socialize is one of the metaverse’s key strengths. Based on the seven rules defined by Parisi [71], the one and only metaverse is a free cross-platform network that is accessible to anyone. Industry 4.0 and Education 4.0 are ongoing parallel evolutions that are enabled by digitalization. As a VE, the metaverse is essential to both movements [2,8]. VEs enable location- and time-independent training and education for industrial companies and educational institutions. In addition, VEs are risk-free and do not have the physical limitations of classrooms [72]. Industries are adopting the metaverse by utilizing DTs as core components to connect physical and virtual systems [73]. Industries apply the metaverse for training, engineering, working, and socializing [42].
The industrial metaverse enables physical interaction in real time, improves the visualization of cyber–physical systems (CPSs), and can be seen as a DT of the workspace [5]. According to Kang et al. [6], the industrial metaverse is still in its infancy; in particular, privacy protection issues and the design of incentive mechanisms need more attention. Nokia’s CEO Pekka Lundmark has stated, “The future of the metaverse is not for consumers” [74]. Nokia has classified metaverse business into three categories: the consumer, enterprise, and industrial metaverse. In fact, Nokia expects the industrial metaverse to lead the commercialization of the metaverse [75]. Siemens and Nvidia have expanded their partnership to enable the industrial metaverse by connecting the Xcelerator and Omniverse platforms [76]. In addition, technology companies such as Lenovo, Huawei, HTC, Tencent, and Alibaba, as well as numerous startups, are exploring how to apply the industrial metaverse in their businesses [77].
2.8. Extended Reality, Metaverse, and Digital Twins to Reality
The combination of the industrial metaverse, DTs, and robotics is still quite a new research area, and the main focus of research has been driven more by VR than by the metaverse. The metaverse received much publicity during and after the pandemic, and as shown above, it has received much visibility in business forecasts. The combination of VR, DTs, and robotics has been studied, for example, in welding as a platform for interactive human–robot welding and welder behavior analysis [78] and in brain–computer interface (BCI) research as a brain-controlled robotic arm system for achieving tasks in three-dimensional environments [79].
The metaverse has enabled the co-design of real-world robotic arms and their digital models while reducing the communication load [80]. In addition, among metaverse-, DT-, and robotics-related studies, a basic metaverse simulation method for an industrial robot scene was introduced in [81], and a multi-agent reinforcement learning solution was defined to bridge the reality gap in dynamic and uncertain metaverse systems [82]. Recent advances in artificial intelligence (AI), computing, and sensing technologies have also enabled the development of DT applications in the underwater domain, such as intelligent path planning and autonomous vehicle prototyping [83,84,85].
3. Materials and Methods
This section presents the research approach and the methods used. The use cases that are presented follow a constructive research approach, which aims to solve practical problems by developing constructions [86,87]. Since the research questions are practically relevant and the aim is to develop prototypes to create a foundation for the use cases, the constructive approach supports answering the research questions.
The main phases were conceptualization, development, and preliminary validation. The research questions and the review of previous research led to the following conceptualization: to enable the control of production systems by using the metaverse, DTs and XR are required. In the development phase, prototypes based on this conceptualization were created; they are presented in Section 4.
Figure 1 presents the main phases and methods utilized.
3.1. Requirement Specifications
The requirement specifications for the prototypes were drafted according to the IEEE Recommended Practice for Software Requirements Specifications [88] to define their functional and non-functional requirements. The functional requirements were an essential guideline during the implementation and preliminary validation phases of the prototypes. The non-functional requirements were divided into the usability, security, and performance subcategories and are presented in Table 1.
3.2. Modeling of the Virtual Environment
An industrial-grade three-dimensional scanner was used to create three-dimensional models of larger entities, such as Centria’s Robo3D Lab in Ylivieska and the harbor environment in Turku, Finland. The point clouds were imported into Blender to create digital copies of the environments by extruding features such as walls and by shaping the terrain by using the point clouds as templates. Digital versions of robots, forklifts, and other production machinery were acquired from the ROS [89] and OEM manufacturer libraries. Missing items, such as custom gripper jaws and pick-and-place objects, were manually modeled by using Blender. The models contained the kinematic information of the robots’ mechanical structure, and inverse kinematics was used to create virtual robot assemblies with physical constraints equivalent to those of the physical robots. The models were also textured in Blender before being exported to Unity. The final WebXR binaries were compiled by using Unity.
Regarding the prototype developed in Section 4.2, the BlueSim [90] simulator was utilized as the virtual environment to control the robot. The software-in-the-loop (SIL) approach was used to simulate the BlueROV2 hardware. A prototype was created to convert a commercial diving mask and a smartphone into an HMD. The design process of the specially designed control device involved three-dimensional laser scanning, and the casing was designed by utilizing Blender.
3.3. Validation of the Prototypes
The prototypes were validated by defining high-level tasks for the robot stations. The high-level tasks were specific to the robot type in question; if a task could not be completed, it was considered failed, and the prototype was redesigned and developed until the task could be completed. Small groups of developers and students performed preliminary validations of the use cases. The high-level tasks that were defined are described in Section 4.
After the preliminary validation, piloting and a user survey were conducted to collect user feedback. The feedback collection was conducted as an online survey since it was easy to distribute to the students who participated in the lectures online. In addition, online surveys are easy to complete, and their results can be summarized automatically [91].
3.4. Cybersecurity Assessment
In the use cases that are presented, the methods for assessing cybersecurity were authorization, authentication, encryption, and vulnerability scans [92]. Authentication and authorization services were implemented on a cloud server. Encryption was implemented by using CA certificates for data transfers between the robot, the cloud server, and the client device. The cloud server was periodically scanned to detect vulnerable software on the server. The detected cybersecurity issues were solved by reconfiguring and updating the vulnerable software components.
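As an example of how encryption and authentication of the kind described above can be applied at the client end, the following sketch configures TLS and user credentials for an MQTT connection by using the paho-mqtt library; the certificate path, credentials, and host are placeholders, not the projects’ actual configuration.

```python
import ssl
import paho.mqtt.client as mqtt

client = mqtt.Client()
# Encrypt the connection and authenticate the server against a CA certificate;
# "ca.crt" is a placeholder path.
client.tls_set(ca_certs="ca.crt", tls_version=ssl.PROTOCOL_TLSv1_2)
# Authenticate the client; the credentials are placeholders.
client.username_pw_set("robot-client", "secret")
client.connect("cloud.example.com", 8883)  # 8883 = conventional MQTT-over-TLS port
```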
4. Implementation and Validation of Use Cases
In this section, the implementations of the use cases are presented. All of the presented use cases provide additional functionality over traditional virtual representations. First, we present Probot’s implementation of DTs and XR to teleoperate an arm robot installed on a mobile platform. The second use case presents FIU’s testbed for controlling and twinning an ROV; furthermore, a unique method for translating a teleoperator’s body movements into control commands for an ROV is presented. Centria’s implementation of a virtual robotic training platform is presented in the third use case; in addition to enabling the user to teleoperate connected physical robots, multi-user capability and the social aspects of the metaverse are presented. The fourth use case presents the TUAS VR Social Platform, which enables training and education in robotics in the maritime sector. In addition to twinning the robots, harbor machines, such as forklifts, are included.
4.1. Probot’s Extended-Reality Interface for Future Industrial Robotics
Over the last decade, the amount of data collected in industrial processes has significantly increased. Furthermore, the application of new technologies, such as drones, mobile robots, and service robots, requires advanced user interfaces for control. Probot developed advanced user interfaces to provide solutions for presenting data from industrial processes and to control advanced robots by utilizing XR and DTs.
In the MIMIC project [93], an eight-month project funded by RIMA [94], a DT of an arm robot installed on a mobile robot was created. The DT enabled the user to teleoperate the mobile manipulator by using a VR user interface and a specially developed glove that tracked the position of the user’s hand and sensed the positions of the user’s fingers. These data were translated into control commands for the manipulator. The VE was implemented by using the Unreal Engine, and the communication layer was based on a custom socket-based protocol.
Figure 2 presents the prototype’s setup.
The focus during research and development was on user comfort and the efficiency of teleoperation. Probot developed two control methods for teleoperating the robot: ghost control and direct control. In ghost control, the user set the target position for a ghost robot and enabled movement with the controller button; after the target was validated as collision-free and within the joint limits, it was committed to the physical robot. In direct control, the physical robot instantly followed the DT’s movements in near-real time.
Figure 3 shows a view of the VR user interface for teleoperation.
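The following Python sketch illustrates the validate-then-commit logic of the ghost control method; the robot interface and its helper methods are hypothetical placeholders, as the actual implementation was built in the Unreal Engine.

```python
def ghost_control_step(ghost_pose, robot):
    """Validate the ghost robot's target pose, then commit it if safe."""
    joints = robot.inverse_kinematics(ghost_pose)  # hypothetical helper
    if joints is None:
        return False                      # target pose is unreachable
    if not robot.within_joint_limits(joints):
        return False                      # joint limits would be violated
    if robot.trajectory_collides(joints):
        return False                      # simulated trajectory collides
    robot.commit(joints)                  # send the validated target to hardware
    return True
```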
To validate the system and to compare the two control methods, Probot arranged testing sessions. During the testing, employees who were familiar with robot teleoperation and XR utilized the developed prototype to grasp and manipulate objects. During the preliminary validation, it was noted that the direct control method posed latency issues related to constantly sending position messages between the DT and the physical robot. The latency caused delays in the control and had a negative impact on the user experience. Furthermore, if the user controlled the DT at a high velocity, the positions of the digital and physical twins did not match.
The ghost control method minimized the data transfers between the DT and the physical robot. Furthermore, the ghost control mode enabled the DT to validate the trajectories before they were committed to the physical robot. While the direct control method enabled a natural way to move the robot by using the DT, the aforementioned latency and in-position control loop issues resulted in a poor user experience. The participants in the testing demonstration agreed that the ability to accurately set the robot’s position before committing it to the physical robot in the ghost control mode enabled more accurate and comfortable control of the manipulator than direct control.
4.2. FIU’s Robotics Testbed for Teleoperation in Environments with Sensing and Communication Challenges
In our recent work [95], we introduced an optimization-based framework for designing human–robot interfaces that comply with user comfort and efficiency constraints. Additionally, we proposed a new approach to teleoperating a remotely operated underwater vehicle, which involved capturing movements of the human body and translating them into control commands for the ROV.
The VE in this use case was the BlueSim [90] simulator, which was compiled by using the Godot engine [52,53]. The communication layer between the simulator and the real environment enabled the connection of the virtual and physical ROVs. A software-in-the-loop (SIL) approach was used to simulate the BlueROV2 hardware for testing and refinement during our development work.
Figure 4 presents the virtual and physical environments and the communication layer between the DT and the physical ROV. A customized smartphone case was created to turn a commercial diving mask and a smartphone into an HMD for capturing and translating body movements into ROV control commands. The design process involved a three-dimensional laser scanner that extracted the mask’s point cloud data, which were then used to design the casing in the SolidWorks 3D CAD software.
To access orientation and pressure data from the smartphone’s inertial measurement unit (IMU), we utilized the Sensorstream IMU+GPS application [96], which streamed the data to a UDP port on the teleoperator’s workstation. In addition, an application based on OpenCV was created to process the virtual underwater video stream and send only black-and-white images to the teleoperator to save bandwidth. Finally, a Python script was developed to receive the sensor data stream and translate it into directional commands for the ROV and up-and-down commands for the ROV camera.
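The following sketch illustrates the receiving end of such a pipeline: a UDP socket reads the streamed sensor values and maps head-motion estimates to commands. The field layout of the Sensorstream IMU+GPS messages, the port, the thresholds, and the send_to_rov() helper are assumptions for illustration, not the exact implementation.

```python
import socket

def send_to_rov(cmd: str) -> None:
    # Placeholder: the real script translated commands into ROV/camera actions.
    print("ROV command:", cmd)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5555))  # port configured in the smartphone app (assumed)

while True:
    data, _ = sock.recvfrom(1024)
    fields = data.decode().split(",")
    # Assumed message layout: timestamp, sensor id, accel x, accel y, accel z, ...
    ax, ay, _ = (float(v) for v in fields[2:5])
    # Map head motion to commands; the +/- 2.0 m/s^2 thresholds are illustrative.
    if ay > 2.0:
        send_to_rov("yaw_right")
    elif ay < -2.0:
        send_to_rov("yaw_left")
    if ax > 2.0:
        send_to_rov("camera_up")
    elif ax < -2.0:
        send_to_rov("camera_down")
```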
Figure 4 presents the current development of a DT for the BlueROV2 [97], an open-source ROV platform for underwater navigation and exploration. On the virtual side, we used a high-fidelity model of a simulated BlueROV2 in a simulated pool. On the physical side, we will use the real BlueROV2 in FIU’s testbed.
To validate our system’s feasibility, usability, and performance, we created a prototype as a proof of concept; we conducted a study with human subjects by using the prototype to send commands to an ROV that was simulated in a virtual environment.
Figure 5 presents the experimental procedure, which involved three tasks. First, users were given a simulated scenario of an empty pool and were allowed 3 min to become accustomed to the HMD and the simulator. Second, users were presented with an RGB video stream from the front camera of the simulated robot and had to locate a cubic shape in the pool by directing the camera toward it. Third, because the underwater environment imposes significant limitations on data communication, and because humans possess an innate skill for interpreting meaning and shapes even in low-quality images, users were provided with a black-and-white video stream of the pool and asked to locate an oval shape. This time, the oval shape was located in one corner of the pool, while the cubic shape was located in another.
User feedback was collected after the experiments. The users reported that it would be more comfortable to use the custom HMD if the distance between the eyes and the smartphone was increased. This would allow for a better field of view and reduce eye strain. Furthermore, our experiments indicated that slow communication, which resulted in laggy image updates on the smartphone screen, could significantly increase the time it took to complete tasks. This was because users needed to wait for the images to be updated before proceeding with their tasks.
4.3. Centria’s Extended Reality as a Robotic Training Tool
Centria created an online platform for education and training in robotics. The platform enables students to learn robotics by reading online materials, watching instructional videos, and conducting practical exercises. Currently, collaborative and industrial robot types are available on the platform for students to practice their robotic skills. Students conduct exercises by reserving a free time slot for a specific robot type and then opening the provided link at the reserved time to access the virtual user interface and the DT of the robot. The XR training scenarios are implemented by using the Unity game engine, and the digital models of the robots and environments were created by using Blender. To enable cross-platform compatibility, the runtimes are compiled as WebXR binaries that are available online and accessible from mobile or desktop devices.
The bi-directional communication layer that synchronizes the states of the digital and physical twins is based on the MQTT communication protocol running on WebSockets [98]. The XR web applications utilize a WebSocket to publish and subscribe to MQTT topics on the cloud server’s MQTT broker. The communication layer enables the monitoring and control of the robots that are connected to the platform. The joint and Cartesian positions of each articulated robot and the mobile robot’s spatial x, y, and z locations are published on the message broker on the cloud server. The method for controlling the robots is similar to the ghost control described earlier in Section 4.1.
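For illustration, the following sketch publishes a robot’s joint positions over the MQTT-over-WebSockets transport by using the paho-mqtt library; the broker host, port, topic scheme, and payload format are assumptions, not the platform’s actual configuration.

```python
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(transport="websockets")  # MQTT over a WebSocket connection
client.connect("cloud.example.com", 9001)     # 9001 = common WebSocket listener port

# Publish the articulated robot's joint positions as a retained message so
# that newly joining clients immediately receive the latest state.
joints = {"j1": 0.0, "j2": -1.2, "j3": 0.8, "j4": 0.0, "j5": 1.5, "j6": 0.0}
client.publish("robots/cobot1/joint_states", json.dumps(joints), retain=True)
```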
WebRTC provides a near-real-time video stream of the physical robots to the user. The cloud server manages congestion control to maintain the low latency of the video stream, while the actual video data are cast directly to the clients. In the Robo3D Lab, a local server hosts the WebRTC clients that stream the video and connects the cameras to the platform.
Figure 6 presents the architecture of the cloud-based platform.
The DTs on the platform enable the user to interact with the physical robots by moving a robot’s tool center point (TCP). The DTs are based on three-dimensional models of the robots, including kinematic models created by using Blender. In Unity, inverse and forward kinematic algorithms based on the Jacobian matrix and the Denavit–Hartenberg convention are used to calculate the joint positions from a requested TCP position and, conversely, the TCP position from the joint values [100,101]. The algorithm for validating the trajectories of the articulated arm robots is presented in pseudocode in Algorithm 1. In addition to the kinematic limitations, the DTs utilize the Unity physics engine to calculate and validate only collision-free trajectories.
A user can join the platform in immersive mode by using a VR headset or in desktop mode by using a desktop computer or a mobile device. In VR mode, handheld controllers are used to interact with the VE; in desktop mode, two virtual joysticks are provided for interaction. Each user is represented as an avatar in the virtual environment. To enable a multi-user system, the spatial locations of the avatars are centrally synchronized among the clients by utilizing the communication layer described in the previous section.
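The following sketch outlines how such avatar synchronization can be realized over the same MQTT layer: each client publishes its own pose at a fixed rate and subscribes to the poses of all others. The topic scheme, update rate, and payload are illustrative assumptions.

```python
import json, time, uuid
import paho.mqtt.client as mqtt

me = uuid.uuid4().hex[:8]  # unique avatar id for this client
client = mqtt.Client(transport="websockets")
client.on_message = lambda c, u, m: print(m.topic, m.payload.decode())
client.connect("cloud.example.com", 9001)
client.subscribe("metaverse/avatars/#")  # all avatar poses (a real client
client.loop_start()                      # would filter out its own id)

while True:
    pose = {"x": 1.0, "y": 0.0, "z": 2.5, "yaw": 90.0}  # from the local avatar
    client.publish(f"metaverse/avatars/{me}", json.dumps(pose))
    time.sleep(0.1)  # 10 Hz update rate (assumed)
```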
The feedback collection was conducted during a “construction robotics” course. The participants were ten students from the University of Oulu with no prior experience in controlling or programming robots. Feedback on pre-defined yes-or-no questions was collected after the students had attended online lectures and teleoperated the robots on the platform. Sixty percent of the students considered the platform suitable for learning the basics and programming of robots. Seventy percent considered the platform suitable for monitoring remote robot cells, and all considered the platform suitable for debugging existing robot programs.
Figure 7 presents the XR robotic training environment.
Algorithm 1 Inverse Kinematics
```
Require: requested position, requested pose, and the lower and higher
         thresholds for the joint values, q_min and q_max, respectively
counter = 0
while counter < 100 do
    calculate Jacobian inverse matrix
    calculate joint angles and position deviation
    if joint angles < q_min OR joint angles > q_max OR position deviation > 1e-3 then
        counter++
    else
        move physical twin
        break
    end if
end while
```
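For readers who prefer an executable form, the following Python sketch implements the same iteration numerically with a Jacobian-pseudoinverse update. The jacobian() and forward_kinematics() functions are hypothetical stand-ins for the robot model derived from the Denavit–Hartenberg parameters, and the loop differs slightly from Algorithm 1 in that it refines the joint estimate on each iteration.

```python
import numpy as np

def inverse_kinematics(q, target, q_min, q_max, jacobian, forward_kinematics,
                       tol=1e-3, max_iter=100):
    """Iterate a Jacobian-pseudoinverse update toward the requested position."""
    for _ in range(max_iter):
        deviation = target - forward_kinematics(q)     # position deviation
        if (np.linalg.norm(deviation) <= tol
                and np.all(q >= q_min) and np.all(q <= q_max)):
            return q                                   # valid: move physical twin
        J = jacobian(q)                                # position Jacobian at q
        q = q + np.linalg.pinv(J) @ deviation          # pseudoinverse update step
    return None                                        # no valid solution found
```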
4.4. TUAS VR Social Platform
Turku University of Applied Sciences developed a metaverse technology called the TUAS VR Social Platform, which combines several multi-user environments into a unified, seamless platform that enables the visualization of big data and remote control solutions. The platform consists of features for social communication, hands-on experience, and DT integration [102]. Social communication and hands-on experience enable collaborative training in the maritime sector, such as in operating forklifts in harbor environments, as presented in Figure 8.
The multi-user environments that have been developed can also contain DTs with specific functionalities that can be integrated into the platform. The customizable application programming interface (API) support of the platform enables the coupling of the DTs with physical machinery and equipment, thus providing access to the functionality of the physical systems. In the design of the TUAS VR Social Platform, the data privacy protection concerns highlighted by Kang et al. [6] were taken into account. The platform provides identity, authenticity, and authority services for industrial training, planning, and operations, such as the teleoperation of systems.
Figure 8 presents the log-in screen and the VE.
In the teleoperation prototype, we aimed to create a self-contained system for teleoperating robots by using VR as the user interface, building on previous work on this type of system [103]. Creating a DT of a physical robot is essential for providing a realistic user experience. A low-latency communication layer between the digital and physical twins was implemented by using the MQTT protocol. WebRTC enabled the streaming of video of the physical robot to the VE with low latency.
To create the VE, the Turku University of Applied Sciences robotics laboratory was scanned by utilizing a laser scanner to enable a matching virtual environment for an unmanned ground vehicle (UGV). The system’s controls were developed to support cross-platform VR controllers and position updating of the DT by using an accelerometer. The digital models and texturing of the DT were improved. In addition, object distance detection was implemented in this phase by utilizing the robot’s camera to pick up an object with the robot arm during teleoperation. The simplified architecture of the TUAS platform is presented in Figure 9.
The implemented teleoperation application comprises a physical robot, a VR user interface, and a DT connecting the two. The application was designed to be modular, thus enabling parts of the solution to be merged into existing projects. The features described above were merged into the existing industrial metaverse environment by adding the DTs of the robots and the laboratory.
This implementation allows any user of this collaborative training environment to take control of a physical robot and to move the robot around the laboratory by utilizing the feedback and control of the DT and the live video stream from the physical environment. At the end of this first pilot case, we validated the system by utilizing high-level gripping and moving tasks to prove the low latency of teleoperation and video streaming.
Figure 10 presents the virtual environment for teleoperation and the VR devices utilized.
To ascertain the system’s usability, usefulness, and effectiveness, staff members of the research group piloted the developed system and provided feedback. A high-level task was defined for the teleoperated robot to validate the DT: the user had to pick up a cardboard cube and place it in a different location by using the robot arm.
The test was considered successful if the users could complete the task, which would indicate that the system was accurate, effective, and usable. In addition to the efficiency and latency of the control, the latency of the video stream was evaluated. According to the user feedback, the observed latencies did not significantly affect the user experience. Approximately seventy percent of the users reported that the system was usable and precise enough for the teleoperation of a UGV and robot arm. The piloting users suggested adding a diagram of each controller’s button functions to the VE and improving the DT’s accuracy for larger movements. The feedback will be taken into account to improve the DT in the future.
5. Discussion
This paper presented the implementations of four DTs that enable the control of physical systems by using XR technologies. All use cases were extended reality–digital twin-to-real implementations in which data validation and two-way data transmission between the twins were implemented as part of the process. Our study showed that remotely connected systems that enable an industrial metaverse can be built by using currently available tools.
The first use case showed an implementation of robot trajectory programming that utilized a combination of VR and a haptic glove. A DT of a mobile robot equipped with a robot arm was controlled in a VR environment. The second use case was an implementation of a DT for an ROV controlled in VR. The VR interface was implemented by using a custom-designed and manufactured smartphone-based HMD. The ROV could be controlled by following the person’s head movements, and an enhanced live stream from the ROV was displayed on the HMD. Both use cases enabled a realistic and natural way for the teleoperator to control the devices by using head movements or finger tracking. By using these high-level control interfaces, users can interact with the environment by utilizing DTs.
The third and fourth use cases were implementations of a multi-agent VR environment with multiple DTs connected to mobile, collaborative, and industrial robots. Users could teleoperate physical robots in a VR environment. In addition, the use cases enabled social interaction and collaborative training. These use cases enabled location- and time-independent training and collaboration, making them more realistic and natural than single-user virtual environments.
The research questions set for the work are the following: Can DTs and XR enable the digital workplaces required by the industrial metaverse? Are virtual learning environments enabled by utilizing the aforementioned combinations of technologies?
With the achieved results, it was shown that the digital workplaces required by the industrial metaverse can, to some extent, be implemented by using DTs and XR. Depending on the nature of the work and the operator’s experience level, other ways of working, such as audio control, may be more flexible and faster. However, it must be noted that different ways of working have pros and cons; e.g., audio control is vulnerable to noise and raises severe concerns related to safety and cybersecurity. Additional haptic devices can make the experience more realistic by providing feedback from the real world.
At its current stage, the proposed system is suitable for virtual learning and training. Based on the experience that has been gained, the proposed systems may also be used in industrial use cases, assuming that the users are comfortable and familiar with the systems and know their restrictions. The new generation of digital natives might find these systems to be natural ways of interacting with the physical world.
6. Conclusions
In this publication, four use cases that utilized XR and DTs to control robotic systems were presented and discussed to answer the research questions. In conclusion, the usefulness of DTs and XR depends greatly on the nature of the work and the operator’s experience level. It must be noted that virtual learning by utilizing DTs is a natural learning method, especially for the digital native generation. In some cases, other methods, such as voice control, are more feasible for controlling systems.
The proposed approach can be further enhanced for user-assisted autonomous systems. With additional modules that enable learning, such as reinforcement learning, the proposed systems can acquire various skills, such as robotized assembly or machining. Such systems can offer alternative methods for operators to perform tasks more efficiently. Although the presented use cases were small-scale demonstrations, they can be replicated and scaled for more complex and physically larger systems.
In the future, more comprehensive feedback from users is required; in this paper, only small groups of developers and students participated in the piloting and surveys. By exposing the systems to larger audiences, they can be improved to meet the demanding requirements of real industrial applications. Studies that differentiate among utility, usability, and user experience in industrial environments would provide feedback on the systems’ feasibility. In addition, an automated online feedback system integrated into the platform could enable comprehensive feedback collection.
Author Contributions
Conceptualization, T.K., P.P., V.B.B., and M.T.; methodology, T.K. and V.B.B.; software, T.K., P.P., V.B.B., and M.T.; validation, T.K., P.P., V.B.B., and M.T.; investigation, T.K.; data curation, T.K. and T.P.; writing—original draft preparation, T.K., N.L., T.H., V.B.B., and M.T.; writing—review and editing, T.P. and S.P.; visualization, T.K., V.B.B., N.L., and M.T.; supervision, S.P., L.B., and M.L.; project administration, T.K. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation Program (grant agreements n° 825196 and n° 824990). Support was also received through the NSF grants IIS-2034123 and IIS-2024733, the U.S. Department of Homeland Security grant 2017-ST-062000002, and the Finnish Ministry of Education and Culture under the research profile for funding for the project “Applied Research Platform for Autonomous Systems” (diary number OKM/8/524/2020).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Rideout, V.; Foehr, U.; Roberts, D. GENERATION M2 Media in the Lives of 8- to 18-Year-Olds. 2010. Available online: https://files.eric.ed.gov/fulltext/ED527859.pdf (accessed on 13 April 2023).
- Lee, H.J.; Gu, H.H. Empirical Research on the Metaverse User Experience of Digital Natives. Sustainability 2022, 14, 14747. [Google Scholar] [CrossRef]
- Mystakidis, S. Metaverse. Encyclopedia 2022, 2, 486–497. [Google Scholar] [CrossRef]
- Barnes, A. Metaverse in Gaming: Revolution in Gaming Industry with Next-Generation Experience. 2023. Available online: https://www.datasciencecentral.com/metaverse-in-gaming-revolution-in-gaming-industry-with-next-generation-experience/ (accessed on 13 April 2023).
- Lee, J.; Kundu, P. Integrated cyber-physical systems and industrial metaverse for remote manufacturing. Manuf. Lett. 2022, 34, 12–15. [Google Scholar] [CrossRef]
- Kang, J.; Ye, D.; Nie, J.; Xiao, J.; Deng, X.; Wang, S.; Xiong, Z.; Yu, R.; Niyato, D. Blockchain-based Federated Learning for Industrial Metaverses: Incentive Scheme with Optimal AoI. In Proceedings of the 2022 IEEE International Conference on Blockchain (Blockchain), Espoo, Finland, 22–25 August 2022; pp. 71–78. [Google Scholar] [CrossRef]
- Francis, T.; Pearson, R.; Robertson, A.; Hodgson, M.; Dutka, A.; Flynn, E. Central nervous system decompression sickness: Latency of 1070 human cases. Undersea Biomed. Res. 1988, 15, 403–417. [Google Scholar] [PubMed]
- Miranda, J.; Navarrete, C.; Noguez, J.; Molina-Espinosa, J.M.; Ramírez-Montoya, M.S.; Navarro-Tuch, S.A.; Bustamante-Bello, M.R.; Rosas-Fernández, J.B.; Molina, A. The core components of education 4.0 in higher education: Three case studies in engineering education. Comput. Electr. Eng. 2021, 93, 107278. [Google Scholar] [CrossRef]
- Prensky, M. Digital Natives, Digital Immigrants. 2001. Available online: https://www.marcprensky.com/writing/Prensky%20-%20Digital%20Natives,%20Digital%20Immigrants%20-%20Part1.pdf (accessed on 15 April 2023).
- Kritzinger, W.; Karner, M.; Traar, G.; Henjes, J.; Sihn, W. Digital Twin in manufacturing: A categorical literature review and classification. IFAC-PapersOnLine 2018, 51, 1016–1022. [Google Scholar] [CrossRef]
- Grieves, M. Origins of the Digital Twin Concept; Florida Institute of Technology: Melbourne, FL, USA, 2016; Volume 8. [Google Scholar] [CrossRef]
- Burdea, G. Invited review: The synergy between virtual reality and robotics. IEEE Trans. Robot. Autom. 1999, 15, 400–410. [Google Scholar] [CrossRef]
- Milgram, P.; Kishino, F. A Taxonomy of Mixed Reality Visual Displays. IEICE Trans. Inf. Syst. 1994, E77-D, 1321–1329. [Google Scholar]
- Sutherland, I.E. A Head-Mounted Three Dimensional Display. In Proceedings of the Fall Joint Computer Conference, Part I, New York, NY, USA, 9–11 December 1968; AFIPS’68 (Fall, part I). pp. 757–764. [Google Scholar] [CrossRef]
- Heilig, M.L. Sensorama Simulator. U.S. Patent 3050870A, 10 January 1961. [Google Scholar]
- Kawamoto, H. The history of liquid-crystal display and its industry. In Proceedings of the 2012 Third IEEE History of Electro-Technology Conference (HISTELCON), Pavia, Italy, 5–7 September 2012; pp. 1–6. [Google Scholar] [CrossRef]
- Howlett, E.M. Wide Angle Color Photography Method and System. U.S. Patent 4406532A, 27 September 1983. [Google Scholar]
- Siciliano, B.; Khatib, O. Springer Handbook of Robotics; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
- Wilson, E.; Evans, C.J. Method of Controlling Mechanism by Means of Electric or Electromagnetic Waves of High Frequency. U.S. Patent 663400A, 4 December 1900. [Google Scholar]
- Hammond, J.H.; Purington, E.S. A History of Some Foundations of Modern Radio-Electronic Technology. Proc. IRE 1957, 45, 1191–1208. [Google Scholar] [CrossRef]
- Ferrell, W.R.; Sheridan, T.B. Supervisory control of remote manipulation. IEEE Spectr. 1967, 4, 81–88. [Google Scholar] [CrossRef]
- Sheridan, T. Teleoperation, telerobotics and telepresence: A progress report. Control. Eng. Pract. 1995, 3, 205–214. [Google Scholar] [CrossRef]
- Goertz, R.C. Fundamentals of general-purpose remote manipulators. Nucleon. (U.S.) Ceased Publ. 1952, 10, 36–42. [Google Scholar]
- Kim, W.; Liu, A.; Matsunaga, K.; Stark, L. A helmet mounted display for telerobotics. In Proceedings of the Digest of Papers. COMPCON Spring 88 Thirty-Third IEEE Computer Society International Conference, San Francisco, CA, USA, 29 February–3 March 1988; pp. 543–547. [Google Scholar] [CrossRef]
- Marescaux, J. Nom de code: « Opération Lindbergh ». Ann. Chir. 2002, 127, 2–4. [Google Scholar] [CrossRef] [PubMed]
- Laaki, H.; Miche, Y.; Tammi, K. Prototyping a Digital Twin for Real Time Remote Control Over Mobile Networks: Application of Remote Surgery. IEEE Access 2019, 7, 20325–20336. [Google Scholar] [CrossRef]
- González, C.; Solanes, J.E.; Muñoz, A.; Gracia, L.; Girbés-Juan, V.; Tornero, J. Advanced teleoperation and control system for industrial robots based on augmented virtuality and haptic feedback. J. Manuf. Syst. 2021, 59, 283–298. [Google Scholar] [CrossRef]
- Duan, B.; Xiong, L.; Guan, X.; Fu, Y.; Zhang, Y. Tele-operated robotic ultrasound system for medical diagnosis. Biomed. Signal Process. Control 2021, 70, 102900. [Google Scholar] [CrossRef]
- Caiza, G.; Garcia, C.A.; Naranjo, J.E.; Garcia, M.V. Flexible robotic teleoperation architecture for intelligent oil fields. Heliyon 2020, 6, e03833. [Google Scholar] [CrossRef] [PubMed]
- IBM. Transcript of IBM Podcast. 2011. Available online: https://www.ibm.com/podcasts/software/websphere/connectivity/piper_diaz_nipper_mq_tt_11182011.pdf (accessed on 22 February 2022).
- Terracciano, D.S.; Bazzarello, L.; Caiti, A.; Costanzi, R.; Manzari, V. Marine Robots for Underwater Surveillance. Curr. Robot. Rep. 2020, 1, 159–167. [Google Scholar] [CrossRef]
- Quattrini Li, A.; Rekleitis, I.; Manjanna, S.; Kakodkar, N.; Hansen, J.; Dudek, G.; Bobadilla, L.; Anderson, J.; Smith, R.N. Data correlation and comparison from multiple sensors over a coral reef with a team of heterogeneous aquatic robots. In Proceedings of the International Symposium on Experimental Robotics, Nagasaki, Japan, 3–8 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 717–728. [Google Scholar]
- Shukla, A.; Karki, H. Application of robotics in offshore oil and gas industry—A review Part II. Robot. Auton. Syst. 2015, 75, 508–524. [Google Scholar] [CrossRef]
- Zereik, E.; Bibuli, M.; Mišković, N.; Ridao, P.; Pascoal, A. Challenges and future trends in marine robotics. Annu. Rev. Control 2018, 46, 350–368. [Google Scholar] [CrossRef]
- Domingues, C.; Essabbah, M.; Cheaib, N.; Otmane, S.; Dinis, A. Human-robot-interfaces based on mixed reality for underwater robot teleoperation. IFAC Proc. Vol. 2012, 45, 212–215. [Google Scholar] [CrossRef]
- Chen, J.Y.C.; Haas, E.C.; Barnes, M.J. Human Performance Issues and User Interface Design for Teleoperated Robots. IEEE Trans. Syst. Man, Cybern. Part C (Appl. Rev.) 2007, 37, 1231–1245. [Google Scholar] [CrossRef]
- Becerra, I.; Suomalainen, M.; Lozano, E.; Mimnaugh, K.J.; Murrieta-Cid, R.; LaValle, S.M. Human Perception-Optimized Planning for Comfortable VR-Based Telepresence. IEEE Robot. Autom. Lett. 2020, 5, 6489–6496. [Google Scholar] [CrossRef]
- Hauser, K. Design of Optimal Robot User Interfaces. In Proceedings of the Workshop on Progress and Open Problems in Motion Planning, 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014. [Google Scholar]
- Rosen, R.; von Wichert, G.; Lo, G.; Bettenhausen, K.D. About The Importance of Autonomy and Digital Twins for the Future of Manufacturing. IFAC-PapersOnLine 2015, 48, 567–572. [Google Scholar] [CrossRef]
- Gomez, F. AI-Driven Digital Twins and the Future of Smart Manufacturing. 2021. Available online: https://www.machinedesign.com/automation-iiot/article/21170513/aidriven-digital-twins-and-the-future-of-smart-manufacturing (accessed on 8 April 2023).
- Lv, Z.; Qiao, L.; Mardani, A.; Lv, H. Digital Twins on the Resilience of Supply Chain Under COVID-19 Pandemic. IEEE Trans. Eng. Manag. 2022, 1–12. [Google Scholar] [CrossRef]
- Nokia Oyj. How Digital Twins Are Driving the Future of Engineering. 2021. Available online: https://www.nokia.com/networks/insights/technology/how-digital-twins-driving-future-of-engineering/ (accessed on 8 April 2023).
- Grieves, M. Excerpt from Forthcoming Paper Intelligent Digital Twins and the Development and Management of Complex Systems the “Digital Twin Exists ONLY after There Is A Physical Product” Fallacy. 2021. Available online: https://www.researchgate.net/publication/350822924_Excerpt_From_Forthcoming_Paper_Intelligent_Digital_Twins_and_the_Development_and_Management_of_Complex_Systems_The_Digital_Twin_Exists_ONLY_After_There_Is_A_Physical_Product_Fallacy (accessed on 15 April 2023).
- Digital Twin Consortium. Glossary of Digital Twins. 2021. Available online: https://www.digitaltwinconsortium.org/glossary/glossary.html#digital-twin (accessed on 15 April 2023).
- ISO 23247-1:2021; Automation Systems and Integration—Digital Twin Framework for Manufacturing—Part 1: Overview and General Principles. International Organization for Standardization: Geneva, Switzerland, 2021.
- Krueger, M.W. Responsive Environments. In Proceedings of the National Computer Conference, New York, NY, USA, 13–16 June 1977; AFIPS’77. pp. 423–433. [Google Scholar] [CrossRef]
- Qadri, M.; Hussain, M.; Jawed, S.; Iftikhar, S.A. Virtual Tourism Using Samsung Gear VR Headset. In Proceedings of the 2019 International Conference on Information Science and Communication Technology (ICISCT), Karachi, Pakistan, 9–10 March 2019; pp. 1–10. [Google Scholar] [CrossRef]
- Perla, R.; Gupta, G.; Hebbalaguppe, R.; Hassan, E. InspectAR: An Augmented Reality Inspection Framework for Industry. In Proceedings of the 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), Merida, Mexico, 19–23 September 2016; pp. 355–356. [Google Scholar] [CrossRef]
- Maereg, A.; Atulya, N.; David, R.; Emanuele, S. Wearable Vibrotactile Haptic Device for Stiffness Discrimination during Virtual Interactions. Front. Robot. AI 2017, 4, 42. [Google Scholar] [CrossRef]
- Neamoniti, S.; Kasapakis, V. Hand Tracking vs. Motion Controllers: The effects on Immersive Virtual Reality Game Experience. In Proceedings of the 2022 IEEE International Symposium on Multimedia (ISM), Naples, Italy, 5–7 December 2022; pp. 206–207. [Google Scholar] [CrossRef]
- Datta, S. Top Game Engines To Learn in 2022. 2022. Available online: https://blog.cloudthat.com/top-game-engines-learn-in-2022/ (accessed on 8 April 2023).
- Linietsky, J.; Manzur, A. Godot Engine. 2022. Available online: https://github.com/godotengine/godot (accessed on 13 April 2023).
- Thorn, A. Moving from Unity to Godot; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
- Hickson, I.; Hyatt, D. HTML 5. 2008. Available online: https://www.w3.org/TR/2008/WD-html5-20080122/ (accessed on 13 April 2023).
- Khan, M.Z.; Hashem, M.M.A. A Comparison between HTML5 and OpenGL in Rendering Fractal. In Proceedings of the 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox’sBazar, Bangladesh, 7–9 February 2019; pp. 1–6. [Google Scholar] [CrossRef]
- Unity Technologies. WebAssembly Is Here! Available online: https://blog.unity.com/technology/webassembly-is-here (accessed on 13 April 2023).
- Peters, E.; Heijligers, B.; Kievith, J.; Razafindrakoto, X.; Oosterhout, R.; Santos, C.; Mayer, I.; Louwerse, M. Design for Collaboration in Mixed Reality Technical Challenges and Solutions. In Proceedings of the 2016 8th International Conference on Games and Virtual Worlds for Serious Applications (VS-GAMES), Barcelona, Spain, 7–9 September 2016. [Google Scholar] [CrossRef]
- Epic Games Inc. Accessing Unreal Engine Source Code on GitHub. 2022. Available online: https://www.unrealengine.com/en-US/ue-on-github (accessed on 16 April 2023).
- Zhang, F.; Lai, C.Y.; Simic, M.; Ding, S. Augmented reality in robot programming. Procedia Comput. Sci. 2020, 176, 1221–1230. [Google Scholar] [CrossRef]
- Abbas, S.M.; Hassan, S.; Yun, J. Augmented reality based teaching pendant for industrial robot. In Proceedings of the 2012 12th International Conference on Control, Automation and Systems, Jeju, Republic of Korea, 17–21 October 2012; pp. 2210–2213. [Google Scholar]
- Shu, B.; Arnarson, H.; Solvang, B.; Kaarlela, T.; Pieskä, S. Platform independent interface for programming of industrial robots. In Proceedings of the 2022 IEEE/SICE International Symposium on System Integration (SII), Narvik, Norway, 9–12 January 2022; pp. 797–802. [Google Scholar] [CrossRef]
- Togias, T.; Gkournelos, C.; Angelakis, P.; Michalos, G.; Makris, S. Virtual reality environment for industrial robot control and path design. Procedia CIRP 2021, 100, 133–138. [Google Scholar] [CrossRef]
- Lotsaris, K.; Gkournelos, C.; Fousekis, N.; Kousi, N.; Makris, S. AR based robot programming using teaching by demonstration techniques. Procedia CIRP 2021, 97, 459–463. [Google Scholar] [CrossRef]
- Kaarlela, T.; Padrao, P.; Pitkäaho, T.; Pieskä, S.; Bobadilla, L. Digital Twins Utilizing XR-Technology as Robotic Training Tools. Machines 2023, 11, 13. [Google Scholar] [CrossRef]
- Li, X.; He, B.; Wang, Z.; Zhou, Y.; Li, G.; Jiang, R. Semantic-Enhanced Digital Twin System for Robot–Environment Interaction Monitoring. IEEE Trans. Instrum. Meas. 2021, 70, 7502113. [Google Scholar] [CrossRef]
- Alkhafajee, A.R.; Al-Muqarm, A.M.A.; Alwan, A.H.; Mohammed, Z.R. Security and Performance Analysis of MQTT Protocol with TLS in IoT Networks. In Proceedings of the 2021 4th International Iraqi Conference on Engineering Technology and Their Applications (IICETA), Najaf, Iraq, 21–22 September 2021; pp. 206–211. [Google Scholar] [CrossRef]
- Jansen, B.; Goodwin, T.; Gupta, V.; Kuipers, F.; Zussman, G. Performance Evaluation of WebRTC-based Video Conferencing. ACM SIGMETRICS Perform. Eval. Rev. 2018, 45, 56–68. [Google Scholar] [CrossRef]
- Eltenahy, S.; Fayez, N.; Obayya, M.; Khalifa, F. Comparative Analysis of Resources Utilization in Some Open-Source Videoconferencing Applications based on WebRTC. In Proceedings of the 2021 International Telecommunications Conference (ITC-Egypt), Alexandria, Egypt, 13–15 July 2021; pp. 1–4. [Google Scholar] [CrossRef]
- Dionisio, J.D.N.; Burns, W.G., III; Gilbert, R. 3D Virtual Worlds and the Metaverse: Current Status and Future Possibilities. ACM Comput. Surv. 2013, 45, 1–38. [Google Scholar] [CrossRef]
- Mystakidis, S.; Berki, E.; Valtanen, J.P. Deep and Meaningful E-Learning with Social Virtual Reality Environments in Higher Education: A Systematic Literature Review. Appl. Sci. 2021, 11, 2412. [Google Scholar] [CrossRef]
- Parisi, T. The Seven Rules of the Metaverse. 2021. Available online: https://medium.com/meta-verses/the-seven-rules-of-the-metaverse-7d4e06fa864c (accessed on 7 April 2023).
- Kaarlela, T.; Pieskä, S.; Pitkäaho, T. Digital Twin and Virtual Reality for Safety Training. In Proceedings of the 2020 11th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Mariehamn, Finland, 23–25 September 2020; pp. 115–120. [Google Scholar] [CrossRef]
- Siemens. What Is the Industrial Metaverse—And Why Should I Care? 2022. Available online: https://new.siemens.com/global/en/company/insights/what-is-the-industrial-metaverse-and-why-should-i-care.html (accessed on 7 April 2023).
- Lundmark, P. The Real Future of the Metaverse Is Not for Consumers. 2022. Available online: https://www.ft.com/content/af0c9de8-d36e-485b-9db5-5ee1e57716cb (accessed on 8 April 2023).
- Nokia Oyj. Metaverse Explained. 2023. Available online: https://www.nokia.com/about-us/newsroom/articles/metaverse-explained/ (accessed on 9 April 2023).
- Siemens. Siemens and NVIDIA to Enable Industrial Metaverse. 2022. Available online: https://press.siemens.com/global/en/pressrelease/siemens-and-nvidia-partner-enable-industrial-metaverse (accessed on 9 April 2023).
- Ma, S. Metaverse to Usher in Golden Era of Innovation. 2022. Available online: http://www.chinadaily.com.cn/a/202212/19/WS639fba52a31057c47eba5047.html (accessed on 9 April 2023).
- Wang, Q.; Jiao, W.; Wang, P.; Zhang, Y. Digital Twin for Human-Robot Interactive Welding and Welder Behavior Analysis. IEEE/CAA J. Autom. Sin. 2021, 8, 334–343. [Google Scholar] [CrossRef]
- Jeong, J.H.; Shim, K.H.; Kim, D.J.; Lee, S.W. Brain-Controlled Robotic Arm System Based on Multi-Directional CNN-BiLSTM Network Using EEG Signals. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 1226–1238. [Google Scholar] [CrossRef]
- Han, D.; Mulyana, B.; Stankovic, V.; Cheng, S. A Survey on Deep Reinforcement Learning Algorithms for Robotic Manipulation. Sensors 2023, 23, 3762. [Google Scholar] [CrossRef]
- Wang, Y.; Wang, C.; Zhang, H. Industrial Robotic Intelligence Simulation in Metaverse Scenes. In Proceedings of the 2022 China Automation Congress (CAC), Xiamen, China, 25–27 November 2022; pp. 1196–1201. [Google Scholar] [CrossRef]
- Shi, H.; Liu, G.; Zhang, K.; Zhou, Z.; Wang, J. MARL Sim2real Transfer: Merging Physical Reality with Digital Virtuality in Metaverse. IEEE Trans. Syst. Man, Cybern. Syst. 2023, 53, 2107–2117. [Google Scholar] [CrossRef]
- Yang, J.; Xi, M.; Wen, J.; Li, Y.; Song, H.H. A digital twins enabled underwater intelligent internet vehicle path planning system via reinforcement learning and edge computing. Digit. Commun. Netw. 2022, in press. [CrossRef]
- Yang, M.; Wang, Y.; Wang, C.; Liang, Y.; Yang, S.; Wang, L.; Wang, S. Digital twin-driven industrialization development of underwater gliders. IEEE Trans. Ind. Inform. 2023, 1–11. [Google Scholar] [CrossRef]
- Barbie, A.; Pech, N.; Hasselbring, W.; Flögel, S.; Wenzhöfer, F.; Walter, M.; Shchekinova, E.; Busse, M.; Türk, M.; Hofbauer, M.; et al. Developing an Underwater Network of Ocean Observation Systems with Digital Twin Prototypes—A Field Report From the Baltic Sea. IEEE Internet Comput. 2022, 26, 33–42. [Google Scholar] [CrossRef]
- Lukka, K. The Constructive Research Approach; Publications of the Turku School of Economics and Business Administration: Turku, Finland, 2003; pp. 83–101. [Google Scholar]
- Kasanen, E.; Lukka, K.; Siitonen, A. The constructive approach in management accounting research. J. Manag. Account. Res. 1993, 5, 243–264. [Google Scholar]
- IEEE Std 830-1998; IEEE Recommended Practice for Software Requirements Specifications. IEEE: Piscataway, NJ, USA, 1998; pp. 1–40. [CrossRef]
- Messmer, F.; Hawkins, K.; Edwards, S.; Glaser, S.; Meeussen, W. Universal Robot. 2019. Available online: https://github.com/ros-industrial/universal_robot (accessed on 8 April 2023).
- Galvani, W.; Pereira, P. Bluesim. 2023. Available online: https://github.com/bluerobotics/bluesim (accessed on 13 April 2023).
- Singh, A.; Taneja, A.; Mangalaraj, G. Creating online surveys: Some wisdom from the trenches tutorial. IEEE Trans. Prof. Commun. 2009, 52, 197–212. [Google Scholar] [CrossRef]
- NIST. Guide to Enterprise Telework, Remote Access, and Bring Your Own Device (BYOD) Security. 2016. Available online: https://csrc.nist.gov/publications/detail/sp/800-46/rev-2/final (accessed on 15 April 2023).
- RIMA. MIMIC (2OC). 2022. Available online: https://community.rimanetwork.eu/6682/MIMIC-2OC (accessed on 17 April 2023).
- RIMA. RIMA Robotics for Inspection and Maintenance. 2022. Available online: https://rimanetwork.eu/ (accessed on 17 April 2023).
- Padrao, P.; Fuentes, J.; Kaarlela, T.; Bayuelo, A.; Bobadilla, L. Towards Optimal Human-Robot Interface Design Applied to Underwater Robotics Teleoperation. arXiv 2023, arXiv:2304.02002. [Google Scholar]
- Lorenz, A. Sensorstream IMU+GPS. 2023. Available online: https://play.google.com/store/apps/details?id=de.lorenz_fenster.sensorstreamgps (accessed on 21 February 2023).
- Blue Robotics Inc. BlueROV2. 2023. Available online: https://bluerobotics.com/store/rov/bluerov2/ (accessed on 2 February 2023).
- Fette, I.; Melnikov, A. The WebSocket Protocol; RFC 6455; IETF, 2011. Available online: https://www.rfc-editor.org/rfc/rfc6455.html (accessed on 8 April 2023).
- Kaarlela, T.; Arnarson, H.; Pitkäaho, T.; Shu, B.; Solvang, B.; Pieskä, S. Common Educational Teleoperation Platform for Robotics Utilizing Digital Twins. Machines 2022, 10, 577. [Google Scholar] [CrossRef]
- Craig, J.J. Introduction to Robotics: Mechanics and Control; Pearson: London, UK, 2022. [Google Scholar]
- Nicolescu, A.; Ilie, F.M.; Alexandru, T.G. Forward and inverse kinematics study of industrial robots taking into account constructive and functional parameter’s modeling. Proc. Manuf. Syst. 2015, 10, 157. [Google Scholar]
- Luimula, M.; Haavisto, T.; Pham, D.; Markopoulos, P.; Aho, J.; Markopoulos, E.; Saarinen, J. The use of metaverse in maritime sector—A combination of social communication, hands on experiencing and digital twins. In Proceedings of the AHFE (2022) International Conference, New York, NY, USA, 24–28 July 2022; AHFE Open Access. Volume 31. [Google Scholar] [CrossRef]
- Blanco Bataller, V. Control Remoto de un Brazo Robótico Utilizando Realidad Virtual [Remote Control of a Robotic Arm Using Virtual Reality]. Bachelor’s Thesis, Universitat Politècnica de València, Valencia, Spain, 2021. Available online: https://riunet.upv.es/handle/10251/176778 (accessed on 17 April 2023).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).