Article

A Novel Method for Tunnel Digital Twin Construction and Virtual-Real Fusion Application

1
China Academy of Transportation Sciences, Basic Research and Applied Innovation Center, Beijing 100029, China
2
Shanghai Urban Operation (Group) Co., Ltd., Shanghai 200023, China
3
Shanghai Municipal Road Transport Administrative Bureau, Shanghai 200125, China
*
Author to whom correspondence should be addressed.
Electronics 2022, 11(9), 1413; https://doi.org/10.3390/electronics11091413
Submission received: 31 March 2022 / Revised: 22 April 2022 / Accepted: 24 April 2022 / Published: 28 April 2022
(This article belongs to the Special Issue New Advances in Visual Computing and Virtual Reality)

Abstract:
Tunnels play important roles in integrated transport infrastructure. A digital twin reproduces a real tunnel scene in virtual space and provides new means for tunnel digital maintenance. Aiming at the existing problems of video fragmentation, separation of video and business data, and lack of two- and three-dimensional linkage response methods in tunnel digital operation, in this paper, we propose a novel method for tunnel digital twin construction and virtual-real integration operation. Firstly, the digital management requirements of tunnel operations are systematically analyzed to clarify the purpose of digital twin construction. Secondly, BIM technology is used to construct a static model of the tunnel scene that conforms to the real tunnel main structure. Thirdly, a three-dimensional registration and projection calculation method is proposed to integrate tunnel surveillance video into a three-dimensional virtual scene in real time. Fourthly, multi-source sensing data are gathered and fused to form a digital twin scene that is basically the same as the real tunnel traffic operations scene. Finally, a management model suitable for digital twins is discussed to improve the efficiency of tunnel operations and management, and a tunnel in China is selected to verify this method. The results show that the proposed method is helpful to realize the application of two- and three-dimensional linkages for tunnel traffic flow, accident rescue, facility management, and emergency response, and to improve the efficiency of tunnel digital management.

1. Introduction

Tunnels are important components of traffic infrastructure. Tunnel traffic has high requirements for safe operations and high pressure to keep traffic flowing, due to typical characteristics such as linear engineering, semi-enclosed space, concealed structures, and a tendency to become bottlenecks. Digital and intelligent means [1] can be used to provide insight into changes in tunnel traffic elements, to master the rules of traffic operation, and to predict the development trend of traffic, which can produce great social value and economic benefits when applied to all links of tunnel traffic operations management and service.
The rapid development of information technologies such as the Internet of Things, cloud computing, and high-definition video monitoring has presented new opportunities for the digital upgrading of tunnel operations management. The United States, Canada, Germany, and other developed countries have long studied and applied tunnel monitoring systems, in which vehicle detectors, photocell detectors, and other means are usually used to realize accurate traffic and environmental information monitoring and management in tunnels. In 2004, the European Union (EU) issued the Directive on Minimum Safety Requirements for Tunnels on the Trans-European Road Network (2004/54/EC) [2]; the EU countries all assess the safety risk of tunnel operations in the region based on this directive. In 2008, the World Road Association also proposed a monitoring and safety management method for highway tunnel operations [3]. In the "Code for Design of Urban Road Traffic Facilities", the "Code for Design of Highway Tunnels", and the "Code for Design of Highway Tunnel Traffic Engineering", China has clearly put forward requirements for the construction of tunnel mechanical and electrical systems, including tunnel monitoring systems, tunnel ventilation and lighting systems, tunnel power supply and distribution systems, tunnel fire alarm systems, etc. These documents also classify field control networks, traffic monitoring systems, closed-circuit television systems, emergency telephone systems, wired broadcasting systems, and environmental monitoring systems as tunnel monitoring systems.
Researchers and engineers at home and abroad have carried out extensive exploration of the digital management and innovation of tunnels, and have improved operations management capabilities in different respects. However, some problems remain to be solved: (1) Video surveillance for tunnel holographic perception is presented in fragmented form. Tunnel surveillance videos are strongly similar to one another and massive numbers of videos are presented in fragments; therefore, daily managers experience significant cognitive pressure when viewing real-time videos. (2) The video data and service data are separated from each other in tunnel operations management. The video monitoring system is separated from the asset, emergency, electromechanical, and other subsystems of tunnel operations, which makes it difficult for the systems to communicate with each other and leads to high costs for cross-system integration. (3) There is a lack of means for automatic association among two-dimensional maps, surveillance videos, and three-dimensional scenes in digital management. A traditional business subsystem is usually presented in 2D mode, and lacks a means of 3D scene monitoring that conforms to the spatial cognition of managers, as well as emergency response methods and tools that can quickly link 2D signal monitoring, 3D scenes, and local video.
A digital twin provides new ideas for the digital transformation and high-quality development of tunnel operations. It creates a virtual model of a physical entity in a digital way [4], simulates the behavior of the physical entity in the real environment with the help of data, and adds new capabilities to the physical entity by means of virtual-real interactive feedback, data fusion analysis, and iterative decision-making optimization. Aiming at the current problems of video fragmentation, separation of video and business data, and lack of 2D and 3D linkage response methods in tunnel digital operations, in this paper, we propose a novel tunnel digital twin construction and virtual-real integration operations application method. Based on an analysis of the characteristics and needs of the tunnel operations business, the scene is built from the four elements of "person, vehicle, road (facilities), and environment": BIM technology is used to build a basic model of tunnel facilities, and real-time tunnel surveillance video is fused into the 3D virtual scene. Multi-source Internet of Things (IoT) sensing data are gathered and fused to form a digital twin scene that is basically the same as the real tunnel traffic operations scene. The constructed digital twin scenarios are applied to tunnel traffic flow, accident rescue, facility management, emergency response, and other aspects to achieve tunnel digital twin perception and intelligent operations and maintenance management applications with virtual-real integration.
The main contributions of this paper are as follows: Firstly, a video fusion method based on 3D registration and projection calculation is proposed to establish the spatio-temporal relationship between discrete surveillance videos and the 3D scene, and to solve the problem of video fragmentation in holographic perception of the tunnel scene. Secondly, a tunnel digital twin construction and application method based on three-dimensional panoramic video fusion is proposed to reduce the pressure of all-element modeling and to realize two- and three-dimensional linkage in tunnel digital management.
The remainder of this paper is organized as follows: The related work and a general overview of the proposed method are presented in Sections 2 and 3. In Section 4, we describe the demand analysis of tunnel operations management based on video surveillance. Tunnel main structure modeling based on BIM, video fusion, and IoT data aggregation and traffic operations simulation are presented in Sections 5, 6, and 7, respectively. In Section 8, we analyze the management application based on digital twins. The demonstration application and effect analysis are discussed in Section 9, and the conclusions are drawn in Section 10.

2. Related Works

This section analyzes the key technologies and application status of tunnel video surveillance, digital twin, and tunnel digital twin involved in this paper.
With respect to tunnel video surveillance, the LonWorks distributed monitoring technology launched in the United States, the German INTERBUS fieldbus technology, and the Japanese Controller-Link ring network control technology [5,6,7] are widely used in distributed traffic safety monitoring of highway tunnels. The extra-long Mont Blanc Tunnel in France adopted a pre-monitoring system outside the tunnel and a radar vehicle detection system inside the tunnel to prevent overloaded or dangerous vehicles from entering the tunnel and to manage the traffic inside it. Video monitoring and detection sensors are set up in the mountain ring highway tunnel passing through the St. Gotthard peak in the Swiss Alps to monitor air quality, illumination, and fire in the tunnel, which is managed and controlled by control equipment. Helmut Schwabach et al. [8] from the Technical University of Graz, Austria, carried out special research on tunnel video surveillance named "VitUS-1" to realize automatic identification of alarm locations, notification of tunnel managers and road users, and automatic storage of traffic accident video sequences. Reyes Rios Cabrera [9] of Belgium proposed a comprehensive solution for tunnel vehicle detection, tracking, and recognition, in which vehicles are identified and tracked through a non-overlapping camera network in a tunnel; the solution takes into consideration practical limitations such as real-time performance, poor imaging conditions, and decentralized architecture. Wang Xuyao [10] integrated a measuring robot, static leveling, joint gauges, and other equipment to form a remote automatic monitoring system for tunnel structure deformation and ballast bed settlement monitoring, which has been applied on Qingdao Rail Transit Line 13. Li Feng [11] combined image stitching and distorted image correction, selected a fisheye camera to monitor the tunnel road conditions, and achieved tunnel scene coverage with small distortion and a large picture.
Liu Zhihui et al. [12] developed a smart tunnel operations management and control system based on the IoT, which integrated four subsystems: comprehensive monitoring, emergency rescue, maintenance management, and data analysis and auxiliary decision making, to monitor the traffic and environmental data in a tunnel, to anticipate dangerous situations and accidents, and to ensure the safety of traffic operations in the area affected by the tunnel. Chen Hao et al. [13] proposed an intelligent monitoring platform for an expressway tunnel based on BIM (building information modeling) + GIS (geographic information system), with 2D and 3D integration and dynamic and static data fusion to realize digital, 3D, and accurate monitoring and management of a tunnel. Li Jianli et al. [14] designed an “Internet plus intelligent high-speed” BIM tunnel intelligent monitor system for the complex and diverse monitoring data of tunnels, with data dynamic demand and low modeling efficiency. With the BIM model as the center of electromechanical information flow, the “six layers and two wings” structure has been adopted to realize information coordination among systems and intelligent monitoring of tunnel life cycle. Liu Pengfei et al. [15] used high-definition video to transform and upgrade the analog video monitoring system of the Qiujiayakou Tunnel, which solved the problems of low video definition, high failure rate, and inability to realize secondary video application of the original monitoring system.
Video virtual-reality fusion methods [16,17] use virtual reality technology to fuse discrete surveillance videos taken from different angles with a 3D model of the surveillance scene in real time, and therefore establish the spatial correlation between different video pictures in a scene. At present, the mainstream methods of video fusion are divided into four categories: video label map, video image mosaic, video overlay transition in 3D scene, and fusion of video and 3D scene. The video label map method [18,19,20] places the video on the two-dimensional base map in the form of a label. This is a 2D method with a simple implementation; however, it is difficult to truly reflect the perceptual effect of fusing video with virtual reality in three-dimensional space, because the video is always displayed separately from the base map. The video image mosaic method [21,22,23,24] forms a panoramic image with a wide or 360-degree viewing angle after image transformation, resampling, and image fusion. This is a 2D+ method, but the fixed virtual model of the sphere restricts the viewpoint to the vicinity of the shooting viewpoint, and the degree of freedom of interaction is limited. The method of video overlay transition in 3D scene [25,26,27] is a pseudo-3D method. It realizes the overlay and fusion display of multiple videos based on 2D and 3D feature registration, but the viewing angle is limited to the camera path. The fusion method of video and 3D scene [28,29,30,31] captures the video image of a real object by camera and registers the video image in the virtual environment in real time in the form of texture. This method allows users to observe the fusion results from any virtual viewpoint, and is better than the other three methods in display effect and interaction. The method proposed in this paper belongs to the fusion method of video and 3D scene.
A digital twin [32,33] is a technical means to create a virtual entity of a physical entity in a digital way, and to simulate, verify, predict, and control the whole life cycle process of the physical entity with the help of historical data, real-time data, and an algorithm model. The digital twin was first proposed by Professor Michael Grieves when he taught at the University of Michigan in 2003, and has since been widely adopted and practiced by industries and enterprises. A digital twin virtual world describes the real physical world and realizes the continuous iteration of diagnosis, prediction, and decision making based on data, models, and software, which helps to optimize production or enterprise processes and to reduce trial-and-error costs. Professor Tao Fei's team at Beihang University put forward the concept of a "digital twin workshop" for the first time in the world, and published a review article entitled "Make More Digital Twins" online in Nature [34]. Lim et al. [35] reviewed 123 papers on digital twinning, analyzed the tools and models involved, and proposed an architecture to maximize the interoperability between subsystems. The authors' team [4] also studied traffic scene digital twins, put forward five key technologies that need further development, and explored the application prospects of digital twin technology in vehicle development; intelligent operations and maintenance of important traffic infrastructure; unmanned virtual testing, analysis, and experiments on traffic problems; traffic science popularization; and traffic civilization education. Weifei Hu et al. [36] provided a state-of-the-art review of DT history, different definitions and models, and six key enabling technologies. Ozturk, G. B. et al. [37] discussed the current patterns, gaps, and trends in digital twin research in the architectural, engineering, construction, operations, and facility management (AECO-FM) industry and proposed future directions for industry stakeholders. Gao, Y. et al. [38] reviewed recent applications for four types of transportation infrastructure: railways, highways, bridges, and tunnels, and identified existing research gaps. Jiang, F. et al. [39] reviewed 468 articles related to DTs (digital twins), BIM, and cyber-physical systems (CPS); proposed a DT definition and its constituents in civil engineering; and compared DTs with BIM and CPS. Bao, L. et al. [40] proposed a new digital twin (DT) concept for traffic based on the characteristics of DTs and the connotation of traffic, and proposed a three-layer technical architecture including a data access layer, a calculation and simulation layer, and a management and application layer.
Regarding tunnel digital twins, Kasper Tijs [41], from the Delft University of Technology in the Netherlands, analyzed the requirements and architecture of a tunnel digital twin and found, based on the results of a demand analysis, that a digital twin should be considered in tunnel operations and maintenance and should enhance the operations and maintenance of tunnels in the Netherlands. Chen Yifei [42] designed and implemented a 3D panoramic tunnel monitoring system using the 3ds Max modeling software, the Unity3D virtual reality engine, and the MySQL database to integrate the design, construction, survey, and monitoring information of tunnel engineering with 3D panoramic images; however, the system supported only preset panoramic monitoring and data integration at fixed points. Zhu Hehua et al. [43] from Tongji University proposed a design for automatic identification of a tunnel's surrounding rock grade and digital dynamic support based on a digital twin, which was demonstrated and applied in the Grand Canyon Tunnel of bid sections 2-6 of the Sichuan Ehan High-Speed Project, and solved the problems of serious disconnection, insufficient timeliness, and insufficient refinement of support design and construction in rock tunnel engineering practice. The Wuxi Qingqi Tunnel [44] built a tunnel electromechanical "digital twin" system using the Internet of Things (IoT) and BIM technology. With the system, through interactive operation, the 3D perspective view and real-time monitoring images of a given area can be mastered, real-time operations data of each piece of equipment in the area can be obtained, and remote "simulation inspection" of all parts and components can be carried out by combining the virtual and the real, which can effectively overcome the deficiencies of the human supervision mode and ensure that the tunnel is not flooded. Tomar, R. et al. [45] proposed a digital twin method of tunnel construction for safety and efficiency, which can both monitor the construction process in real time and virtually visualize the performance of the tunnel boring machine (TBM) in advance. Nohut, B. K. [46], in his Master's thesis, focused on a road tunnel use case for a digital twin and created a tunnel twin framework for a given tunnel. Van Hegelsom, J. et al. [47] developed a digital twin of the Swalmen Tunnel, which was used to aid the supervisory controller design process in its renovation projects. Yu, G. et al. [48] proposed a highway tunnel pavement performance prediction approach based on a digital twin and multiple time series stacking (MTSS), which has been applied in the life cycle management of the river-crossing Dalian Tunnel in Shanghai.

3. Method Overview

Referring to the digital twin five-dimensional structure model [49] proposed by Tao Fei, a digital twin structure reference model suitable for tunnel scenes was designed, as shown in Figure 1, including a real tunnel scene, a virtual tunnel scene model, twin data, service systems, and the interconnections among these four. Twin data are the basis of digital twin construction, the virtual digital twin scene model is the core of digital twin construction, and the service system is the carrier of digital twin implementation. It supports the loop iteration of condition monitoring, digital experience, auxiliary decision making, and optimal control between the real scene and the virtual model.
Combining the tunnel digital twin structure reference model with the technical advantages of 3D video fusion, a tunnel digital twin construction and application method based on 3D video fusion is proposed, as shown in Figure 2. The main steps include: (1) analyze the tunnel scene characteristics and the business requirements of video monitoring, and clarify the objectives of digital twin construction and application; (2) reconstruct the tunnel scene in three dimensions based on BIM and point cloud scanning to quickly generate the tunnel digital twin basic model; (3) perform projection calculations and 3D real-time fusion of multiple videos in the tunnel to establish the spatial correlation between discrete videos and 3D scenes; (4) collect the IoT monitoring data in tunnel scenes and enhance them in 3D scenes; (5) carry out tunnel traffic simulation and prediction analysis based on management demand and historical data; (6) carry out the tunnel operations management application demonstration of virtual-real integration based on the constructed digital twin scene. The details of each step are discussed in Sections 3-8.

4. Analysis of Tunnel Scenarios and Monitoring Requirements

The main work objective of a tunnel operations unit is to "ensure smooth traffic", which involves various business requirements including operations monitoring, facility maintenance, civil maintenance, and emergency disposal. Operations monitoring involves the timely identification and warning of traffic congestion, accidents, car breakdowns, illegal intrusions, facility and equipment failures, emergencies, and other events, and then the timely handling of problems affecting smooth driving in the tunnel, thus maintaining safe and smooth traffic operations. Facility maintenance involves maintaining the facilities in the tunnel by introducing the maintenance design concept and carrying out active and preventive maintenance, effectively reducing maintenance costs, prolonging the overhaul cycle, improving the service quality of facilities, significantly reducing the impact on urban operation, improving urban operations efficiency, and enhancing the image of the city. Civil maintenance refers to routine cleaning and maintenance of the tunnel civil structure. Emergency disposal refers to timely early warning, response, disposal, aftermath, and recovery for emergencies related to "fire, water, and electricity" in the tunnel, and the establishment of a complete emergency disposal plan system and team. In addition, business requirements related to tunnel operations also include early warning patrol, breakdown towing, alarm reception and handling, electromechanical fault disposal, etc.
The demand for video monitoring in tunnel operations management mainly includes four aspects: tunnel traffic flow, accident rescue, facility management, and emergency response. Briefly, these four aspects involve: (1) tunnel traffic flow, i.e., patrolling the entire tunnel regularly, timely finding and responding to smooth operations of the tunnel through the monitoring video, and maintaining the tunnel traffic flow in combination with an analysis of traffic detection facilities in the tunnel, early warning of congestion events, traffic signal control, broadcasting, on-site disposal, and other measures; (2) accident rescues, for car breakdowns, traffic accidents, emergencies, facility failures, and other problems transferred to traffic police by phone, where monitoring cameras can be used to quickly confirm the location of a situation, and quickly dispatch a trailer for accident rescue and emergency treatment; (3) facility management using monitoring videos to patrol the important facilities and equipment in a tunnel, combined with the data and analysis of relevant IoT sensors, to judge the status of tunnel electromechanical facilities and equipment, and timely find and deal with facility faults; (4) emergency response to provide timely responses to emergencies, severe weather, and other events occurring during the operation of a tunnel, to guarantee the emergency process by using video monitoring and related IoT facilities, and to support emergency early warning, disposal, response, aftermath, and recovery.

5. Rapid Construction of the Basic Model of the Tunnel Digital Twin Based on BIM

BIM technology is used to construct a parametric model of the tunnel body, lining, tunnel portal, road, concealed works, auxiliary facilities, electromechanical facilities, and other objects, thereby forming an initial model of the digital twin scene that is basically consistent with the actual tunnel scene. Using BIM tools such as Revit and Civil 3D, the basic wall model, road model, anti-collision wall, drainage ditch, and decorative board of tunnel sections such as the circular tunnel section, rectangular tunnel section, and approach section are modeled in three dimensions. At the same time, the substation, ventilation shaft, crossing duty booth, and other facilities are modeled and rendered to form a realistic overall tunnel engineering model, as shown in Figure 3.
The BIM model built from the design drawings has an accuracy of LOD (level of detail) 400 and integrates multi-stage information covering construction, management, maintenance, and transportation. The model has a large volume, high accuracy, and abundant attribute information. For 3D real-scene video fusion, the geometric apparent model of the tunnel is more important, and high-precision model details seriously affect the efficiency of multi-channel video fusion and real-time rendering. Therefore, the original BIM model needs to be lightweighted to meet the needs of real-time 3D virtual-real scene fusion. Professional lightweighting software is used to process the original BIM model, and the number of triangular patches and the file size are reduced by merging, extracting, reorganizing, reducing, and repairing the data hierarchy. The number of triangular patches of the original BIM model of a tunnel was 8.35 million (Figure 4a), and the number after lightweight processing was 3.33 million (Figure 4b); the number of model patches was reduced by 60%. The original model file size was 28.5 MB, and the model size after lightweight treatment was 12.5 MB, a reduction of 56%. This lightweight processing method can meet the needs of real-time 3D scene fusion and joint rendering while maintaining the integrity of the model.
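As a quick check of the reported reductions, the percentages follow directly from the patch counts and file sizes (the helper below is illustrative, not part of the authors' toolchain):

```python
def reduction_pct(before: float, after: float) -> float:
    """Percentage reduction from `before` to `after`."""
    return (before - after) / before * 100.0

# Triangular patches: 8.35 million -> 3.33 million, roughly a 60% reduction.
patch_drop = reduction_pct(8.35e6, 3.33e6)

# File size: 28.5 MB -> 12.5 MB, roughly a 56% reduction.
size_drop = reduction_pct(28.5, 12.5)
```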

6. 3D Real-Time Fusion of Tunnel Multi-Channel Surveillance Video

Based on the monitoring video acquisition and point calibration, the multi-channel monitoring video in the actual tunnel scene and the tunnel geometric model are fused in real time according to the topology through 3D registration and projection calculation. A 3D real-time fusion method of tunnel multi-channel video is proposed, as shown in Figure 5, including six steps.
Step 1: The point position, orientation, height, focal length, camera type, and other information of the camera were calibrated. The corresponding relationship between the real longitude and latitude coordinates and 3D space coordinates was established by combining the topological structure and important landmarks of the scene. The position and attitude information of the camera in the real environment was converted to the position and attitude value in 3D space.
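Step 1's conversion from real longitude/latitude coordinates to 3D scene coordinates can be illustrated with a simple flat-earth approximation around an anchor point. The paper establishes this correspondence from the scene topology and landmarks, so the particular projection below is only an assumption of the sketch:

```python
import math

def lla_to_local(lat, lon, h, lat0, lon0, h0):
    """Convert longitude/latitude/height to a local 3D scene coordinate using a
    flat-earth (equirectangular) approximation around an anchor point
    (lat0, lon0, h0). Adequate only for small extents such as a single tunnel."""
    R = 6378137.0  # WGS-84 equatorial radius, metres
    x = math.radians(lon - lon0) * R * math.cos(math.radians(lat0))  # east
    y = math.radians(lat - lat0) * R                                  # north
    z = h - h0                                                        # up
    return x, y, z
```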
Step 2: Use the converted position and attitude to calculate the model view matrix, M_mv, and projection matrix, M_p, of the camera in three-dimensional space. The calculation method of M_mv is shown in Formulas (1) and (2):

$$M_{mv}=\begin{pmatrix} s_c & u_c & f_c & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \tag{1}$$

$$\begin{cases} f_c = (t - l) \times M_s \times M_r \times M_t \\ s_c = f_c \times \big((l + u) \times M_s \times M_r \times M_t\big) \\ u_c = s_c \times f_c \end{cases} \tag{2}$$

where f_c is the orientation vector of the camera viewing cone, s_c is the side vector, u_c is the positive up vector, l is the viewpoint position, t is the observation target position, u is the position directly above the camera during observation, M_s is the scaling matrix, M_r is the rotation matrix, and M_t is the displacement matrix.
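The construction in Formulas (1) and (2) is essentially a look-at matrix. The NumPy sketch below shows the standard form: M_s, M_r, and M_t are taken as identity, the vector normalisation is the sketch's own addition, and the sign convention follows common graphics practice rather than the paper's exact layout:

```python
import numpy as np

def look_at(eye, target, up):
    """Build a model-view matrix from viewpoint l (eye), target t, and up
    direction u, mirroring the cross-product chain of Formula (2)."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f_c = target - eye                 # viewing-cone orientation vector
    f_c /= np.linalg.norm(f_c)
    s_c = np.cross(f_c, up)            # side vector
    s_c /= np.linalg.norm(s_c)
    u_c = np.cross(s_c, f_c)           # positive up vector
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s_c, u_c, -f_c
    m[:3, 3] = -m[:3, :3] @ eye        # translate the eye to the origin
    return m

# Camera 5 units in front of the origin, looking at it.
M_mv = look_at(eye=[0, 0, 5], target=[0, 0, 0], up=[0, 1, 0])
```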
The calculation method of the camera projection matrix M_p is shown in Formulas (3) and (4):

$$M_p=\begin{pmatrix} \dfrac{n}{r} & 0 & 0 & 0 \\ 0 & \dfrac{n}{t} & 0 & 0 \\ 0 & 0 & \dfrac{n+f}{n-f} & 1 \\ 0 & 0 & \dfrac{2n \times f}{n-f} & 0 \end{pmatrix} \tag{3}$$

$$\begin{cases} t = n / f \\ r = n \times r_a / f_o \end{cases} \tag{4}$$

where (r, t) is the upper right position of the camera projection frustum, a is the field angle of view of the camera, r_a is the aspect ratio of the taken picture, f is the far clipping plane coefficient, n is the near clipping plane coefficient, and f_o is the focal length.
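The layout of Formula (3) can be transcribed directly. The parameter values below are arbitrary examples, and the row-vector convention (clip coordinates obtained as p · M_p) is inferred from the placement of the 1 in the third row:

```python
import numpy as np

def projection_matrix(n, f, r, t):
    """Perspective projection matrix following the layout of Formula (3),
    with near plane n, far plane f, and frustum upper-right corner (r, t)."""
    return np.array([
        [n / r, 0.0,   0.0,                 0.0],
        [0.0,   n / t, 0.0,                 0.0],
        [0.0,   0.0,   (n + f) / (n - f),   1.0],
        [0.0,   0.0,   2 * n * f / (n - f), 0.0],
    ])

# Example: near plane 0.1, far plane 1000, symmetric frustum half-extents 0.05.
M_p = projection_matrix(n=0.1, f=1000.0, r=0.05, t=0.05)
```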
Step 3: Use the model view matrix and projection matrix to calculate the view cone structure of the camera in 3D space, take the far clipping plane of the view cone as the supplementary scene structure of the far plane, and project the video image without corresponding scene model onto the far plane.
According to the width w and height h of the video picture, the viewing cone of the camera is constructed in the device space. Then, the far clipping plane depth coefficient d_f and near clipping plane depth coefficient d_n of the camera viewing cone are set. The near clipping plane of the viewing cone is p_n1 p_n2 p_n3 p_n4, where the coordinates of p_n1 are (0, h, d_n), the coordinates of p_n2 are (w, h, d_n), the coordinates of p_n3 are (0, 0, d_n), and the coordinates of p_n4 are (w, 0, d_n). The far clipping plane of the viewing cone is p_f1 p_f2 p_f3 p_f4, where the coordinates of p_f1 are (0, h, d_f), the coordinates of p_f2 are (w, h, d_f), the coordinates of p_f3 are (0, 0, d_f), and the coordinates of p_f4 are (w, 0, d_f). For each vertex on the far and near clipping planes of the viewing cone, the camera model view matrix (M_mv) and projection matrix (M_p) are used to project it back into the 3D scene to obtain the coordinates of the 8 vertices of the camera viewing cone in 3D space, as shown in Formula (5):
$$P = p_i \cdot (M_{mv} \cdot M_p \cdot M_w)^{-1} \tag{5}$$

where p_i is each vertex of the far and near clipping planes of the camera viewing cone in device space and M_w is the window matrix corresponding to the camera video picture.
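The back-projection of Formula (5) can be sketched as follows. The row-vector multiplication order and the final de-homogenisation are assumptions of the sketch, and the demo call uses identity matrices so the corners simply equal the device-space vertices:

```python
import numpy as np

def frustum_corners(w, h, d_n, d_f, M_mv, M_p, M_w):
    """Project the 8 device-space frustum vertices back into the 3D scene,
    following Formula (5): P = p_i * (M_mv * M_p * M_w)^(-1)."""
    inv = np.linalg.inv(M_mv @ M_p @ M_w)
    corners = []
    for d in (d_n, d_f):                          # near plane, then far plane
        for x, y in ((0, h), (w, h), (0, 0), (w, 0)):
            P = np.array([x, y, d, 1.0]) @ inv    # row-vector convention
            corners.append(P[:3] / P[3])          # de-homogenise
    return np.array(corners)

# With identity matrices the corners equal the device-space vertices.
demo = frustum_corners(1920, 1080, 0.1, 1.0, np.eye(4), np.eye(4), np.eye(4))
```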
Step 4: Filter the model set visible to the camera according to the viewing cone to reduce the amount of calculation and accelerate the fusion process. The view cone bounding box of the camera is calculated and intersected with the bounding box of each model in the scene; the fusion calculation is only carried out on models intersecting the view cone bounding box of the camera, so as to avoid traversing all vertices of every model in the scene each time the video of a camera is fused with the scene.
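The culling test in Step 4 reduces to axis-aligned bounding-box (AABB) overlap checks; a minimal sketch, where the model list format is illustrative:

```python
def aabb_intersects(a_min, a_max, b_min, b_max):
    """Axis-aligned bounding-box overlap test: the boxes intersect iff their
    intervals overlap on every axis."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def visible_models(frustum_box, models):
    """Keep only models whose bounding box intersects the view-cone bounding
    box. `models` is a list of (name, box_min, box_max) tuples."""
    f_min, f_max = frustum_box
    return [name for name, b_min, b_max in models
            if aabb_intersects(f_min, f_max, b_min, b_max)]

# A model near the camera is kept; one far outside the frustum box is culled.
boxes = [("near", (0, 0, 0), (1, 1, 1)), ("far", (10, 10, 10), (11, 11, 11))]
in_view = visible_models(((0, 0, 0), (2, 2, 2)), boxes)
```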
Step 5: Use the model view matrix and projection matrix to render the scene depth information under the camera viewpoint, and detect the occlusion of the vertices of the model. The occluded part adopts the original texture of the model, and the non-occluded part is fused with the video image.
Step 6: Perform fragment texturing and coloring on the graphics card. After rasterization, the fragments are finally converted into the pixels seen on the screen, as shown in Formulas (6) and (7):
t = M_p × M_mv × p   (6)
c = f(T, 1 − t)   (7)
where t is the texture coordinate of the fragment corresponding to the surface vertex p, c is the texture color of the fragment, T is the video image of the camera, and f is the texture sampling function. After the texture color of the fragment is calculated, the video image is drawn onto the scene model to realize the fusion of video and 3D scene. Examples of 3D fusion of gun and fisheye surveillance cameras in the tunnel scene are shown in Figure 6.
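Formulas (6) and (7) amount to projective texture mapping; the following is a CPU-side sketch (nearest-neighbour sampling for brevity, whereas a real fragment shader interpolates; function and argument names are illustrative):

```python
import numpy as np

def project_and_sample(p, M_mv, M_p, video):
    """Formulas (6)-(7) sketch: project a scene vertex into the camera's
    picture space and sample the video frame. `video` is an HxWx3 array;
    projected coordinates are assumed normalized to [0, 1]."""
    t = M_p @ M_mv @ np.append(p, 1.0)         # Formula (6)
    t = t[:2] / t[3]                           # perspective divide
    u, v = t[0], 1.0 - t[1]                    # Formula (7): vertical flip
    h_img, w_img = video.shape[:2]
    x = int(np.clip(u * (w_img - 1), 0, w_img - 1))
    y = int(np.clip(v * (h_img - 1), 0, h_img - 1))
    return video[y, x]                         # sampled texture color c
```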

7. Tunnel Monitoring Data Aggregation and Scene Update Based on the IoT

Tunnel scenes include temperature, humidity, wind speed, height limit detection, variable information board, ventilation, lighting, drainage, firefighting, fire alarm, CO/VI (carbon monoxide/visibility) detector, broadcasting system, speed limiting board, vehicle detection system, emergency telephone, signal lamp, illuminance meter, and other IoT facilities and control systems. The digital twin system can label and manage the digital assets of facilities and equipment in the fused 3D real scene, access real-time IoT sensor data in message format, and carry out alarm linkage based on business requirements. The IoT sensing data include static data and dynamic data: static data are mainly used for asset ledgers and equipment attributes, while dynamic data mainly refer to real-time monitored traffic, facility status, the environment, and so on. Figure 7 shows an example of tunnel data aggregation and fusion.
As shown in Figure 7, the scene formed by real-time fusion of multi-channel video and the BIM model is further labeled in real time with information such as camera point locations, concealed works, and auxiliary annotations, which helps managers understand the scene. In the digital twin scene, point marking, status monitoring, and enhanced display of broadcasting, fans, and other tunnel facilities can be realized. In addition, traffic detection, structural deformation monitoring, environmental monitoring, lighting, electric power, fire prevention, ventilation, water supply and drainage, traffic signal, variable information, broadcasting and telephone, and other IoT sensing facilities are associated with virtual devices in the digital twin scene and updated to the 3D scene at a certain frequency. Users can directly view device attributes and status information in 3D space; dynamic annotation of abnormal data and emergency response feedback control in the 3D scene are also supported. In order not to affect the normal use of the professional subsystems, when uplink data are reported between a third-party application and the digital twin service system, standard gateway Hypertext Transfer Protocol (HTTP) access is provided: the third party sends messages in a standard BV JavaScript Object Notation (JSON) format to the standard access gateway service, which performs data parsing, authentication, parameter identification, and other operations. Users can also define and create various 3D labels in the 3D space and directly associate important service management information with them to aid scene cognition and service management.
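A JSON uplink message to the standard access gateway might look as follows (a hedged sketch: the field names and helper are hypothetical, since the paper only specifies that data are reported as JSON over HTTP):

```python
import json

def build_sensor_message(device_id, metric, value, timestamp):
    """Build a hypothetical uplink message for the standard access
    gateway. In deployment this string would be POSTed over HTTP to
    the gateway, which parses, authenticates, and identifies it."""
    return json.dumps({
        "deviceId": device_id,    # illustrative field names
        "metric": metric,
        "value": value,
        "timestamp": timestamp,
    })
```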

8. Data-Driven Tunnel Traffic Operations Simulation and Prediction

Based on historical traffic volume monitoring data and management requirements, a data channel is opened between the digital twin scene and professional traffic simulation software, enabling 3D replay, predictive deduction, and auxiliary decision making for traffic operations scenes within the digital twin. Statistics and analysis of structured monitoring video data and traffic monitoring data from the actual tunnel are used to update the hourly origin-destination (OD) matrix, vehicle type ratio, and other parameters of the traffic simulation model; to fit the variation laws of the OD and vehicle type ratio over the same time period each week, month, and year; and to update the recommended OD and vehicle type ratio values for different periods on different days, such as working days, weekends, holidays, and major activities, to support 3D replay and pattern analysis of tunnel traffic scenes. A tunnel simulation example is shown in Figure 8.
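The fitting of recommended OD volumes per day type and hour can be sketched as a simple aggregation (the record schema is assumed for illustration; a production model would use the actual detector and structured-video outputs):

```python
from collections import defaultdict
from statistics import mean

def recommend_od(records):
    """Sketch of the OD parameter update: `records` is a list of
    (day_type, hour, od_volume) tuples extracted from traffic
    monitoring data. Returns {(day_type, hour): mean volume}, i.e.
    the recommended OD value for each period on each day type."""
    buckets = defaultdict(list)
    for day_type, hour, volume in records:
        buckets[(day_type, hour)].append(volume)
    return {key: mean(vals) for key, vals in buckets.items()}
```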
As shown in Figure 8, according to a user's needs or problems in tunnel traffic management, it is possible to identify the tunnel digital twin scene elements involved; to describe the demand or problem in the tunnel traffic digital twin scene; to adjust the parameters that affect it and analyze the evolution results under different parameter values; to compare the results of different parameters through repeatable simulation analysis; and to form auxiliary decision-making suggestions. Tunnel traffic simulation can be used to predict and deduce changes in vehicle flow, to regulate traffic signals, to respond to traffic incidents, and to test vehicle restrictions with respect to truck limits, traffic congestion, and other vehicle-related factors, and therefore to find problems and give early and timely warnings.

9. Tunnel Digital Twin Operations Management Application Based on Virtual-Real Fusion

Using the constructed tunnel digital twin scene, combined with the actual business requirements of tunnel traffic flow, accident rescue, facility management, and emergency response in tunnel operations management, in this paper, we explored a tunnel operations management method based on the digital twin scene. Compared with the traditional video wall monitoring method, the digital twin scene adds spatial 3D information, allowing the tunnel scene to be controlled in a more intuitive way. The reference application functions are shown in Figure 9. The main application innovations are described below.
(1) Virtual top view global monitoring: By fusing the discrete surveillance videos distributed in the tunnel area into the virtual reality scene model, the spatial relationships among the videos are established in virtual space to provide continuous and intuitive monitoring of multiple areas, making it convenient for monitoring managers to obtain an overview of the overall situation. Combined with the real-time association of 2D pictures, global top-view monitoring of the tunnel across cameras is realized.
(2) Tunnel automatic video patrol: Set key patrol points and patrol lines in the digital twin scene to realize automatic 3D video patrol along the driving direction, in line with the cognitive habits of the monitoring personnel. This reduces the frequency of manual field patrols and the workload of ball camera operation and inspection, improves the pertinence of daily tunnel monitoring and patrol efficiency, and improves the ability to keep tunnel traffic unimpeded.
(3) Two- and three-dimensional linkage emergency response: When an abnormality such as congestion, an accident, or a vehicle breakdown occurs in a tunnel, the videos relevant to rapid dispatching can be viewed and confirmed in 3D space according to the location of the alarm or abnormal condition; the status of upstream and downstream vehicles at congestion points can be viewed; the causes of congestion can be found and analyzed; and more intuitive and convenient mechanisms for abnormal event discovery, response, and disposal are realized.
(4) Three-dimensional integration and linkage supervision of tunnel facilities: By effectively fusing fragmented video and multi-source IoT sensing information, various types of dynamic IoT sensing information can be accessed in real time through 3D dynamic tags. The surveillance video can be used to patrol important facilities in the tunnel and, combined with relevant IoT sensor data and analysis, to determine the status of tunnel facility equipment, enabling timely discovery and disposal of facility failures.
The application of new technologies will inevitably lead to changes in the corresponding management modes. Combined with the characteristics of digital twin technology, the business requirements of tunnel operations monitoring, and the usage habits of information platforms, it is necessary to constantly innovate the management mode and improve management efficiency.

10. Demonstration Application and Effect Analysis

Considering a tunnel scene in China as an example, in this paper, we launch a demonstration application of tunnel digital twin construction and operations management based on 3D video fusion, and analyze the application effect of the demonstration.

10.1. Selection and Implementation of the Demonstration

The demonstration tunnel is divided into east and west lines. The east tunnel is 2.56 km long and the west tunnel is 2.55 km long; in the middle, two cross passages connect the two tunnels. The design speed of the tunnel is 40 km/h, the tunnel cross section is two-way four lanes, the lane width is 3.75 m, the height limit is 4.2 m, and the maximum capacity of the two-way four lanes is 5248 vehicles/h. There are 48 surveillance cameras on the east and west lines of the tunnel, including gun and ball cameras. They are installed at intervals of about 100 m at a height of 4.5-5 m. The video is stored and managed using a digital video recorder (DVR). The tunnel operations center has an integrated broadcasting system, variable information board, speed limiting board, vehicle detection system, emergency telephone, signal lamp, illuminance meter, intelligent control terminal, and other professional subsystems, providing good scene perception and control conditions. The tunnel applies new information technologies such as BIM, 5G, and the Internet of Things, and is committed to becoming a demonstration tunnel for full life cycle management and a model of new technology application, with a good data and network foundation.
Through data acquisition, scene modeling, camera calibration, scene fusion, system design, function development, and other processes, the above key technologies are integrated to carry out the digital twin construction and operations management application of 3D video fusion for the demonstration tunnel. The demonstration process is shown in Figure 10.
Figure 10a shows an example of the BIM modeling result of the demonstration application tunnel. It can be seen that the internal structure of the model maintains geometric integrity after the lightweight treatment. Figure 10b shows an example of the effect of monitoring video fusion within the tunnel; the scene and video picture have a high matching degree thanks to the calibration of camera positions and orientations and the projection calculation. Figure 10c shows an example of the fusion of IoT sensor data for tunnel services. In addition to the direct annotation of dynamic and static labels in 3D scenes, it is also possible to display real-time and historical data such as temperature and humidity, wind speed, and traffic volume in the form of charts. Figure 10d shows the demonstration tunnel digital twin scene constructed by integrating the above data and key technologies. It can be seen that the tunnel scene and data associations are more in line with managers' visual cognition.

10.2. Demonstration Application Effect Analysis

Combined with the construction and operations status of the demonstration tunnel, we carry out the tunnel digital twin demonstration application verification based on 3D video fusion, develop a tunnel digital twin intelligent operations system, and carry out management applications such as tunnel traffic flow, accident rescue, facility management, and emergency response based on the digital twin.
(1) The tunnel traffic flow global monitoring reduces cognitive pressures experienced by monitoring personnel. The tunnel digital twin intelligent operations system establishes a spatial relationship between 2D video and 3D scene. The global traffic status monitoring of the tunnel is provided with a virtual top view, and the user can visually view the running status of the tunnel, as shown in Figure 11. It supports independent roaming viewing and analysis in 3D space, reducing the cognitive pressures experienced by monitoring personnel.
(2) The full-line intelligent video patrol reduces manual patrol costs. With the constructed tunnel digital twin scene and system, users can customize patrol lines, patrol points, patrol frequencies, and patrol times in the scene. According to the set intelligent patrol lines, the system can carry out full-line automatic video patrols in line with driving cognition, as displayed in Figure 12. This reduces the number of manual field patrols and ball camera inspection operations, lowering the cost of manual patrol.
(3) The historical video panoramic replay improves incident traceability efficiency. By using the constructed tunnel digital twin scene and system, all the surveillance videos in the scene are placed under unified clock control. The clock can be uniformly rolled back to a certain time in the past according to the user's requirements for tracing or evidence collection over the whole process of the occurrence, development, and disposal of an abnormal event, as displayed in Figure 13, which improves the efficiency of abnormal event tracing and tracking.
(4) The converged fusion of tunnel IoT facilities realizes 3D digital asset management. The constructed tunnel digital twin scene and system support real-time access to the states and data of multi-source IoT sensing facilities, as displayed in Figure 14, and therefore realize 3D digital asset management of the tunnel civil structure, electromechanical facilities, and special equipment.
(5) The two- and three-dimensional linkage alarm response improves accident rescue and emergency response capabilities. The tunnel digital twin system includes a 2D map and a 3D scene. Users can quickly jump, confirm, roam, and view between 2D and 3D when receiving an alarm or discovering abnormalities, as displayed in Figure 15, which improves the operation unit's capabilities of alarm confirmation, anomaly location, accident rescue, and emergency response.
In addition, the tunnel digital twin intelligent monitoring system with 3D live video fusion also supports multi-level control, space tracking, camera relay, magnifying glass, and other operations, which revolutionizes the 2D matrix monitoring scheme and realizes management services in 3D space.

10.3. Comparison of Effect before and after Demonstration

Through the construction of a digital twin scene, discrete 2D surveillance videos and the 3D tunnel scene are fused to realize virtual-real monitoring, and the monitoring application effects before and after the tunnel pilot are compared. Figure 16a shows the current matrix monitoring scheme of the tunnel. It can be seen that multi-channel gun camera feeds are arranged on the large screen and that the videos of the channels are highly similar; it is difficult to quickly locate a video to a specific location in the real tunnel when an abnormal problem is discovered. Figure 16b shows the global top view monitoring effect after fusion of the real videos. It can be seen that the spatial correlation between the 2D videos and the 3D scene is directly established in the fused scene. Combined with auxiliary location label information, users can quickly locate, roam, interact, and view in 3D space. The 3D live video fusion tunnel scene can also be directly linked to the 2D videos to achieve direct association and interaction between two- and three-dimensional operation.
Figure 17a shows tunnel monitoring personnel performing manual ball camera inspection. In the demonstration tunnel, only four ball cameras are installed at the entrances and exits of the tunnel, focusing on monitoring the traffic conditions there; inspection inside the tunnel is still dominated by patrol vehicles. Figure 17b shows the effect of automated 3D video patrol: the monitoring personnel can customize patrol lines, points, and frequencies, and the system can carry out automatic video rotation along the east and west lines of the tunnel in line with the spatial cognition of the monitoring personnel.
Figure 18a shows the 2D interfaces of subsystems such as signal, ventilation, and broadcasting in the tunnel monitoring center. It can be seen that the professional subsystems and the video system are separated from each other, with independent data and complex linkages. In Figure 18b, the fans and broadcasts in the tunnel are registered in 3D real space and displayed in real time, realizing the 3D integration of tunnel IoT sensing data.
Through the demonstration application, it can be seen that the problem of video fragmentation is solved by the tunnel digital twin intelligent operations technology and system based on 3D video fusion. It helps to resolve the separation of video and business data, and provides tool support for the 2D and 3D linkage response to emergency events. It also improves the efficiency of managing business requirements such as tunnel traffic flow, accident rescue, facility management, and emergency response.
In fact, construction and application cases of tunnel digital twins are still relatively rare at present; in this paper, we have outlined a beneficial exploration in this direction. The virtual-real fusion of multi-channel surveillance videos is used to replace all-element object modeling of tunnel scenes, which increases the realism of scene perception and reduces the workload of complex element modeling and simulation.

10.4. Demonstration Optimization Suggestions

Through the pilot application in the demonstration tunnel, some deficiencies were found, mainly including the insufficient resolution of the analog cameras, the failure of the existing cameras to achieve full coverage of the tunnel scene, and the reliance on other subsystems for access to IoT data. In view of these problems, the following suggestions are given for further optimization of the intelligent management of the demonstration tunnel scene:
(1) Improve the resolution of field monitoring cameras. The real scene fusion technology cannot compensate for insufficient field monitoring resolution. It is suggested to improve the camera resolution of field monitoring terminals when conditions permit, replacing the analog cameras currently in service with high-definition digital cameras, and thereby improve the basic conditions of field monitoring.
(2) Improve the coverage of field cameras. At present, the coverage area of the in-service cameras of the demonstration tunnel is limited, and full coverage of the tunnel scene cannot be realized. It is suggested to increase the coverage of field cameras during the later field terminal upgrade. The main methods are as follows: First, reduce the distance between cameras and deploy cameras at intervals of 50 or 70 m. Second, increase the coverage and flexibility of scene monitoring using gun, ball, and fisheye cameras; for example, use fisheye or wide-angle cameras in curved areas, or install ball cameras in key monitoring areas for space tracking, ball camera linkage, and so on.
(3) Form dynamic IoT sensing data access, analysis, and application standards. The method of transmitting data in message mode and associating systems in control mode is in line with the reality of tunnel information management. For various types of IoT sensing data such as traffic detection, environmental detection, and structure detection, it is necessary to form standard modes and data standards for data access, analysis, and application in combination with the actual business management requirements.
(4) Deepen the innovation of the tunnel intelligent management mode based on 3D live video fusion. Based on the pilot application experience, continue to deepen the integration of business requirements with virtual-real fusion technology, and promote management mode innovation based on 3D real-time video fusion technology and systems, such as panoramic tunnel emergency command and tunnel subdivision and hierarchical management.

11. Conclusions and Future Works

In this study, a digital twin scene of tunnel traffic was constructed and applied to tunnel traffic flow, accident rescue, facility management, emergency response, and other aspects, providing new ideas and means for analyzing and solving complex tunnel traffic problems. In response to the problems existing in tunnel digital operation, such as video fragmentation, separation of video and business data, and lack of 2D and 3D linkage response means, in this paper, we focused on an application method for tunnel digital twin construction and virtual-real fusion operations based on 3D video fusion. On the basis of analyzing the demands and characteristics of operational business requirements such as tunnel traffic flow, accident rescue, facility management, and emergency response, BIM technology was used to form a 3D model of the tunnel body, lining, tunnel door, road, concealed works, ancillary facilities, electrical facilities, and other physical objects consistent with the actual tunnel scene. Through 3D registration and projection calculation, the multi-channel surveillance videos in the tunnel scene and the tunnel geometry model were combined in real time according to their topology to form a panoramic digital twin scene of the tunnel with 3D real scene fusion. We integrated multi-source IoT sensing data; associated the traffic detection, structural deformation monitoring, environmental monitoring, lighting, power, fire protection, ventilation, water supply and drainage, traffic signal, variable information board, and broadcasting and telephone facilities and equipment in the actual tunnel scene with virtual devices in the digital twin; and realized scene registration, data access, dynamic annotation, and real-time updating.
Considering a tunnel in China as the example, we performed demonstration applications of tunnel operations management based on the digital twin, such as virtual top view global monitoring, automatic tunnel video patrol, two- and three-dimensional linkage emergency response, and 3D integration and linkage supervision of tunnel facilities. The application results showed that the tunnel digital twin intelligent operations technology and system based on 3D video fusion solved the problem of video fragmentation, addressed the separation of video data and business data, and provided tool support for the 2D and 3D linkage response to emergencies. It can help to improve the efficiency of managing business requirements such as tunnel traffic flow, accident rescue, facility management, and emergency response, and can support the accurate perception and fine management of tunnel infrastructure and traffic operations.
The disadvantages of the system and areas for improvement include the following: the camera positions and attitudes need to be calibrated one by one, two-way interaction between the virtual and real models has yet to be realized, the video data structuring algorithms lack integration, and so on. Future work should include multi-source online data-driven 3D scene updates; data-driven tunnel traffic operations simulation and predictive deduction; and tunnel digital twin perception and application scenario analysis based on virtual-real fusion.

Author Contributions

Z.W. offered the idea and controlled the progress and quality of the present work; Q.L. undertook the management requirements analysis and digital twin application evaluation; Y.C. and R.C. undertook the case verification. All authors were involved in the writing, editing, and revising of the present paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Basal Research Fund of the Central Public Research Institute of China (grant no. 20212701).

Conflicts of Interest

The authors declare no conflict of interest.

  49. Tao, F.; Liu, W.; Liu, J.; Liu, X.; Qu, T.; Hu, T.; Zhang, Z.; Xiang, F.; Xu, W.; Wang, J.; et al. Digital Twin and its Potential Application Exploration. Comput. Integr. Manuf. Syst. 2018, 24, 4–21. [Google Scholar]
Figure 1. Reference model of the tunnel digital twin structure.
Figure 2. Method followed in this paper.
Figure 3. Example of the tunnel BIM modeling: (a) BIM modeling example of the tunnel entrance; (b) BIM modeling example of the tunnel exit.
Figure 4. Example of the lightweight processing method and results of the tunnel model: (a) Original BIM model; (b) BIM model after lightweight treatment.
Figure 5. 3D real-time fusion method of tunnel multi-channel video.
Figure 6. Example of multi-channel video fusion results: (a) Example of bullet camera fusion; (b) Example of fisheye camera fusion.
Figure 7. An example of tunnel asset and annotation data aggregation.
Figure 8. Example of tunnel traffic simulation results.
Figure 9. Application reference of tunnel digital twin operations management based on virtual-real fusion.
Figure 10. Schematic of the tunnel digital twin demonstration implementation process: (a) Demonstration scene BIM modeling; (b) multi-channel video real-time fusion; (c) IoT sensor data aggregation and fusion; (d) digital twin scene construction.
Figure 11. Tunnel virtual top view display.
Figure 12. Intelligent patrol along the full length of the tunnel.
Figure 13. Unified 3D backtracking and forensics of historical video.
Figure 14. 3D fusion and enhanced display of IoT facilities.
Figure 15. Two-way linkage response operation between the 2D and 3D views.
Figure 16. Comparison of tunnel global monitoring effect before and after demonstration: (a) Fragmented 2D matrix monitoring; (b) global top view monitoring with spatial relevance.
Figure 17. Comparison of tunnel inspection effect before and after demonstration: (a) Manual inspection with dome cameras; (b) automated 3D video patrol.
Figure 18. Comparison of tunnel service data integration before and after demonstration: (a) Video is separated from the professional subsystems; (b) video and service data convergence and enhanced display.
Wu, Z.; Chang, Y.; Li, Q.; Cai, R. A Novel Method for Tunnel Digital Twin Construction and Virtual-Real Fusion Application. Electronics 2022, 11, 1413. https://doi.org/10.3390/electronics11091413