Development of Augmented Reality System for Productivity Enhancement in Offshore Plant Construction

Abstract: As the scale of offshore plants has gradually increased, the number of management points has significantly increased. Therefore, there is a need for innovative process control, quality management, and an installation support system to improve productivity and efficiency for timely construction. In this paper, we introduce a novel approach that addresses these issues using augmented reality (AR) technology. The core of successful AR implementation is accurate scene matching through precise pose (position and orientation) estimation of the AR camera. To achieve this, this paper first introduces an accurate marker registration technique that can be used in huge structures. To improve the precision of marker registration, we propose a method that simultaneously utilizes natural feature points and marker corner points in the optimization step. Subsequently, a method of precisely generating AR scenes by utilizing these registered markers is described. Finally, to validate the proposed method, best practices and their effects are introduced. Based on the proposed AR system, construction workers are now able to quickly navigate to onboard destinations by themselves. In addition, they are able to intuitively install and inspect outfitting parts without paper drawings. Through field tests and surveys, we confirm that AR-based inspection has a significant time-saving effect compared to conventional drawing-based inspection.


Introduction
Recently, as the scale of offshore plants has tremendously increased, the scope of process management has become wider and more complicated. As a result, various Information & Communications Technology (ICT)-based studies have been introduced to improve productivity and efficiency in offshore plant construction. In general, paper drawings are still used in the shipbuilding field for production and inspection. However, as the computation performance of mobile phones has dramatically improved along with 3D visualization technology, various attempts have been made to utilize them for digital drawings. Beyond simply viewing 3D Computer Aided Design (CAD) models, more intuitive and field-oriented ICT technologies are actually needed. Augmented reality (AR) has recently been in the spotlight as one of the innovative technologies that meet these needs [1].
The origins of AR can be traced back to the simple visual experiment performed by Sutherland [2] in 1965. Later, in the 1990s, the effects of AR in manufacturing fields became widely known to academia and industry through the research of Caudell [3]. Subsequently, similar AR studies have been published by Airbus, BMW, and Ford, which have led to an exponential increase in industrial AR research [4].
There have also been many attempts to apply AR in the shipbuilding industry. Fraga-Lamas [5] classified the use of AR in shipbuilding into six major categories: quality control, assistance in the manufacturing process, visualization of the location of products and tools, warehouse management, predictive maintenance, and augmented communication.
In this study, Fraga-Lamas predicted that AR technology would be one of the leading technologies that realize Industry 4.0 shipyards. In addition, ABI Research anticipates that AR device shipments will increase by more than 460 million units by 2021 [6]. Therefore, the development of various AR applications and their success stories are expected to continue for a while [7].
Many shipbuilding companies are still struggling with discrepancies between design and actual construction. Therefore, various research cases have been reported to solve these discrepancy problems using AR technology. In [8], Caricato et al. proposed 3D visualization tools that are useful for engineers to plan and create products using AR. Wuest [9] proposed an on-site CAD model correction tool that can immediately correct design errors when they are discovered in the construction field. Olbrich et al. [10] introduced an AR application that changes the layout of pipes when interference occurs inside an offshore plant. Nee et al. [11] demonstrated through experiments that a CAD model visualized by AR is very useful for engineers while creating and evaluating designs and products.
Assembly is also one of the known processes that can be dramatically improved in terms of productivity through AR [12]. Leu et al. [13] suggested an innovative approach to developing CAD model-based AR systems for assembly simulation. Recently, a research case was reported to improve the efficiency of assembly operations by utilizing a Head Mounted Display (HMD) device [14]. More comprehensive reviews of AR-based assembly processes can be found in [15].
In general, the most important key to successful AR system implementation is realizing precise scene alignment between the 3D CAD coordinate system and the world coordinate system. In computer vision theory, this corresponds to a tracking problem. Typically, a positional error of a few centimeters is the maximum allowable camera tracking error before a user perceives misalignment in an AR system [16]. However, the scale of offshore plants, such as the Shell Prelude FLNG, is usually six times larger than that of the largest aircraft carrier. Thus, keeping the positional tracking error within this bound is the key technical challenge addressed in this paper.
Most of the previously known successful AR systems have been restricted to local areas (e.g., within a block or a sub-system) due to limitations of marker registration and management [17][18][19]. In contrast, a major contribution of this paper is expanding the scope of AR operation to the entire area of an offshore plant by registering and managing markers more practically. Based on the proposed AR system, field workers are now able to quickly navigate to onboard destinations by themselves. In addition, they are able to intuitively install and inspect outfitting parts without paper drawings. These results can eventually lead to increased productivity and efficiency. Figure 1 shows the AR system implementation proposed in this paper.

Recently, as computer vision technology has advanced, various studies on pose tracking using image sensors have been reported [20][21][22]. However, these approaches are difficult to apply directly to the production environment of offshore plants.
Most image sensor-based simultaneous localization and mapping (SLAM) technologies use natural feature points traced in image sequences as landmarks for map generation. This map is used again to perform global pose recovery when a revisiting situation occurs. If the map is sufficiently well organized, the SLAM algorithm guarantees pose recovery accuracy within a few millimeters upon revisiting. However, as the range of movement over which SLAM is performed becomes wider, the accuracy of the map is significantly reduced. This eventually increases the uncertainty of the pose estimation. The left side of Figure 2 illustrates the problem of the SLAM approach under long-distance movement.
In addition, the computational complexity increases exponentially with map optimization. According to the state-of-the-art SLAM algorithm Oriented FAST and Rotated BRIEF (ORB)-SLAM2 [21], about 100 MB is required for map generation when a closed-loop test is performed in a 15 × 15 m² area, and it takes about 12 min to optimize the map on a Cortex-A53 CPU with 3 GB RAM. Considering the size of the Shell Prelude offshore plant, which measures 489 × 74 × 105 m³, we can easily see that SLAM approaches are very time-consuming. Furthermore, we also confirmed through field testing that SLAM approaches in offshore plants waste storage: even when a sufficient number of markers (more than 100) are used for localization in the 15 × 15 m² area, less than 20 KB of storage space is required. Above all, the manufacturing environment of offshore plants changes very frequently.
In this case, the SLAM approaches inevitably accelerate storage waste because the existing feature points that cannot be used for re-localization are increasingly accumulated. In order to solve this problem, the characteristics of the feature points should be explicitly static so that they are not affected by environmental changes, or they must be easily updated frequently. In this paper, we define this problem as the limitation of landmark management.
It is also important to consider the AR environment when initializing the pose of the camera in the global coordinate system or when immediate correction of the camera pose is required. In general, localization technologies such as Bluetooth beacons or Wi-Fi positioning systems are often used to support global pose estimation in large-scale structures. However, in a shipyard environment consisting of many steel plates, these localization approaches cannot be used due to signal interference and distortion. GPS is also not free from this signal problem and cannot be used indoors. In this paper, we define this problem as the limitation of instant global localization.
It is very important to solve the above two problems for successful AR execution. In the following Section 2, we first deal with a realistic and practical AR tracking approach that can be used for offshore plant construction. Section 3 presents an overview of the proposed AR system, and Section 4 addresses the hardware configuration for this purpose. In Section 5, we introduce an automatic marker registration technique, which is the most important part of this paper for practical large-scale AR services. Then, in Section 6, we explain how to stably generate AR scenes for long-distance movement in offshore plants using the precisely registered markers. In Section 7, we explain what best practices can be found by solving the global tracking issues and how they lead to productivity improvement.

Solution
In order to overcome the tracking issues, in this study, we explicitly install artificial landmarks, which are used as markers for specific areas in the compartments of the offshore plant. We utilize them to correct the pose of the AR camera at application runtime. Through this approach, the pose drift caused by long-distance movement is suppressed as much as possible. In addition, this approach solves the problem of instant global localization by quickly recognizing a marker observed at close range from any position in the offshore plant. It also reduces data management points by maintaining maps only for explicit landmarks and preemptively avoids the map update issues caused by frequent environmental changes. The right side of Figure 2 schematically shows the advantages of this approach.
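The marker-based correction above can be sketched as a simple rigid-transform composition (an illustrative sketch, not the system's actual implementation; the helper names are hypothetical): given the registered marker pose T_cad_marker in the CAD frame and the marker pose T_cam_marker estimated in the camera frame when the marker is recognized, the corrected camera pose in the CAD frame is T_cad_cam = T_cad_marker · T_cam_marker⁻¹.

```python
def mat_mul(A, B):
    # product of two 4x4 homogeneous transform matrices
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inverse(T):
    # inverse of a rigid transform: [R t; 0 1]^-1 = [R^T, -R^T t; 0 1]
    R = [[T[j][i] for j in range(3)] for i in range(3)]      # R^T
    t = [-sum(R[i][k] * T[k][3] for k in range(3)) for i in range(3)]
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0, 0, 0, 1]]

def camera_pose_in_cad(T_cad_marker, T_cam_marker):
    # correct the drifting camera pose from an observed, registered marker
    return mat_mul(T_cad_marker, rigid_inverse(T_cam_marker))
```

Recognizing any nearby marker thus resets accumulated drift in one step, which is why registration accuracy of T_cad_marker is critical.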
The strategy of suppressing error propagation using artificial landmarks is very simple and straightforward. Further, the achievable precision of camera pose recovery is promising. However, this is only possible under the assumption that the geometric information of the markers is registered very precisely with respect to the CAD coordinate system. In other words, if the accuracy of the marker registration is poor, the result of the pose correction also becomes incorrect.
In general, most enterprise AR solutions precisely register the pose of the marker in a 3D CAD model at design time [23]. Then, referring to the marker's coordinates registered in the 3D CAD model, a field worker attaches the real marker to the designed position on the offshore plant, and finally the AR application is started. In practice, however, this approach suffers from several limitations.
First, if the person who attaches the markers is not a skilled worker with more than 10 years of field experience, it takes too much time to attach them precisely as instructed in the drawing. Second, CAD designers usually register the geometric information of the markers without a deep understanding of field conditions, so the markers may be registered at locations that are not actually accessible. Finally, AR users usually want to place the marker in the center of a hall where it is easily visible. However, if a CAD designer registers the marker in such places, a field worker essentially has to measure additional geometric information from the pre-installed parts to ensure proper marker installation. Figure 3 clearly shows these problems caused by the pre-registration and post-installation approach.
In this paper, we propose an intuitive but effective marker registration method using a photogrammetric approach to solve these issues. Unlike the previous method, the marker installation is performed first, taking into account the convenience of AR users and the environmental conditions at the construction field, and then registration follows. As shown in Figure 4, the user only needs to take enough images to register the marker. Then, the images sent to the server are automatically registered through the method described in Section 5.

System Overview
Figure 5 shows an overview of the proposed AR system. The AR system is divided into an online and an offline process mode according to the operating scenario. The offline mode is summarized as a step of installing markers, recovering their 3D coordinates, and then registering them to the CAD coordinate system. This process is repeated for all sectors in the offshore plant. More detailed explanations are given in Section 5.

The online mode is defined as utilizing Android-based AR services, such as navigation and process management of installation parts, in all areas within the offshore plant using the registered marker information. In the online mode, the user first recognizes the nearest observed marker. After that, the AR system precisely overlaps the 3D CAD scene with respect to the current camera's view. As the mobile camera moves, the 3D CAD scene is also changed and synchronized properly. The AR user can also change the operation mode of the app at any time as needed.

The user may sometimes feel that the pose of the mobile camera is inaccurate while using the AR service. In this case, the user can correct the camera pose immediately and precisely by recognizing a nearby marker at any time. This is discussed in more detail in Section 6.


AR Platform
Figure 6 shows the AR instrument, the Project Tango Development Kit (PTDK), used in this study. The PTDK is a mobile-based AR platform developed by Google's Advanced Technology and Projects (ATAP) [24]. It runs the Android KitKat OS and includes NVIDIA's Tegra K1 CPU and 4 GB of memory.

As shown in Figure 6, the PTDK supports various sensors for AR implementation. The fisheye motion camera acquires wide-angle images at 120 FPS and has about a 180° field of view. This wide-angle camera is used to track the pose of the mobile device in real time via a monocular SLAM algorithm. By fusing the IMU inside the mobile phone with the fisheye camera, Tango prevents the pose drift caused by low-quality imaging and, at the same time, overcomes the scale ambiguity that is one of the critical limitations of monocular SLAM [25].

The RGB camera is used at the application level for AR services. The intrinsic parameters of the RGB camera are provided by the manufacturer. However, since the RGB image contains radial lens distortion, a simple un-distortion step is applied before starting the AR app:

x_c = x_o (1 + k_1 r^2 + k_2 r^4 + k_3 r^6), y_c = y_o (1 + k_1 r^2 + k_2 r^4 + k_3 r^6), with r^2 = x_o^2 + y_o^2,

where (x_o, y_o) and (x_c, y_c) are the image coordinates before and after correction, respectively, and k_1, k_2, and k_3 denote the radial distortion coefficients.

Figure 7 shows the schematic concept of the proposed marker. The overall size of the marker is 150 × 150 mm² and the size of the inner binary codeword is 100 × 100 mm². The production environment of offshore plants is very hazardous, so the marker can easily be damaged by scratches, cracks, and thermal deformation due to welding. Considering these constraints, we propose a new marker design well optimized for the environment of offshore plant construction.
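The radial un-distortion step described above for the RGB camera can be sketched in a few lines (a minimal illustration assuming normalized image coordinates and the radial polynomial applied directly; the PTDK's internal correction model may differ):

```python
def undistort_point(x_o, y_o, k1, k2, k3):
    # Radial correction with coefficients k1, k2, k3:
    # (x_o, y_o) -> (x_c, y_c) = (x_o, y_o) * (1 + k1 r^2 + k2 r^4 + k3 r^6)
    r2 = x_o * x_o + y_o * y_o
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x_o * scale, y_o * scale
```

With all coefficients zero the mapping is the identity, and points at the image center (r = 0) are unaffected regardless of the coefficients.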

Marker Design
As shown in Figure 7, the marker consists of four layers. In order to protect the printing surface, a polycarbonate layer is used on top, and an alumite panel under the printing surface provides resistance to external impact and scratches. Below that, a stainless steel plate is used to minimize deformation of the marker. As an option, a magnet sheet is used to attach the marker to a wall.

Marker Detection
The codeword inside the marker consists of an 8 × 8 binary pattern. The outer cells are all black, and only the internal 6 × 6 pattern changes according to the ID of each marker. We used the Aruco [26] library to generate unique codeword patterns. In particular, we used the ARUCO_MIP_36h12 dictionary, which is robust to rotation and has excellent error detection performance. This dictionary supports a total of 2320 unique IDs.
Aruco is one of the most widely used libraries for marker generation and recognition, but its computational complexity makes it difficult to use comfortably on mobile platforms. In this paper, we perform several image processing steps to detect marker candidate regions more quickly. The details are as follows. First, the captured image of the marker is converted to grayscale and then binarized. For image binarization, the Otsu [27] algorithm, a statistical and adaptive thresholding technique robust to image noise, is applied. The binarization threshold t which maximizes an energy coefficient γ is determined as follows:

γ(t) = α(t) β(t) (µ₁(t) − µ₂(t))², t* = argmax γ(t),

where α denotes the ratio of pixel intensities darker than t, and µ₁ denotes the average intensity of these pixels. Similarly, β denotes the ratio of pixel intensities equal to or brighter than t, and µ₂ denotes their average intensity.
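A direct (unoptimized) sketch of this thresholding rule, operating on a 256-bin grayscale histogram, assuming the between-class-variance form of γ given above:

```python
def otsu_threshold(hist):
    # hist: list of 256 pixel counts. Returns the threshold t that
    # maximizes gamma(t) = alpha * beta * (mu1 - mu2)^2.
    total = sum(hist)
    best_t, best_gamma = 0, -1.0
    for t in range(1, 256):
        w1 = sum(hist[:t])            # pixels darker than t
        w2 = total - w1               # pixels >= t
        if w1 == 0 or w2 == 0:
            continue                  # degenerate split, skip
        mu1 = sum(i * hist[i] for i in range(t)) / w1
        mu2 = sum(i * hist[i] for i in range(t, 256)) / w2
        alpha, beta = w1 / total, w2 / total
        gamma = alpha * beta * (mu1 - mu2) ** 2
        if gamma > best_gamma:
            best_gamma, best_t = gamma, t
    return best_t
```

For a clearly bimodal histogram the maximizing threshold falls between the two intensity modes, which is exactly the behavior the marker binarization relies on.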

After image binarization, the contour detection step follows. We implemented contour detection using the Teh algorithm [28]. When the Teh algorithm is performed, various contours including outliers are detected, as shown in Figure 8c. In order to obtain the candidate region of the marker, a line approximation to the contours is performed and then the following filtering rules are applied:
a. The shape of the approximated contour must have four corner points;
b. The shape of the approximated contour must be convex;
c. The area of the approximated contour must be at least d pixels (in general, 500 ≤ d ≤ 1000).
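The three filtering rules can be sketched as follows (an illustrative helper, not the system's actual code, operating on the corner points of an approximated contour):

```python
def is_marker_candidate(poly, min_area=500):
    # poly: list of (x, y) corner points from the contour approximation.
    if len(poly) != 4:                        # rule (a): four corner points
        return False
    # rule (b): convexity -- all turning cross products share one sign
    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])
    signs = [cross(poly[i], poly[(i+1) % 4], poly[(i+2) % 4]) for i in range(4)]
    if not (all(s > 0 for s in signs) or all(s < 0 for s in signs)):
        return False
    # rule (c): shoelace area must be at least min_area pixels
    area = abs(sum(poly[i][0] * poly[(i+1) % 4][1]
                   - poly[(i+1) % 4][0] * poly[i][1]
                   for i in range(4))) / 2.0
    return area >= min_area
```

Only contours passing all three tests proceed to the perspective rectification step.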
After performing the above filtering, the refined detection result is obtained, as shown in Figure 8d. Suppose that N convex contours are selected by the filtering. Then, a perspective transformation is performed to generate N orthoimages corresponding to the N contours. Figure 8e shows an example list of the generated orthoimages.
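The perspective (homography) mapping used to rectify each contour region into an orthoimage has the following generic form (a sketch only; estimating the 3 × 3 matrix H from the four detected corners is not shown):

```python
def apply_homography(H, x, y):
    # Map pixel (x, y) through a 3x3 homography H in homogeneous
    # coordinates: (x', y', w) = H * (x, y, 1), then divide by w.
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Sampling the source image through this mapping for every pixel of the target grid yields the fronto-parallel orthoimage used in the candidate test below.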
Once the orthoimages are created, a test is performed to determine whether each orthoimage belongs to a marker candidate. The procedure is as follows. First, an orthoimage is divided into an 8 × 8 grid. For each cell located in the boundary area, a vote is applied to determine whether the total intensity of the pixels in the cell is closer to black or white. If all boundary cells vote black, the orthoimage is finally selected as a marker. Figure 8e shows an example of such selected markers; orthoimages rendered in green denote the finally selected markers. Once this process is complete, the corner points of each marker are optimized again in sub-pixel space to improve detection accuracy.
Once the orthoimages are created, a test is performed to determine whether these orthoimages are marker candidates or not. The procedure for the test is as follows. First, an orthoimage is divided into an 8 × 8 grid. For each cell located in the boundary area, a vote is applied to determine whether the total intensity of the pixels in the cell belongs to the black or the white color. If the intensity of all boundary cells is black, the orthoimage is finally selected as a marker. Figure 8e shows an example of such selected markers; the orthoimages rendered in green denote the finally selected markers. Once this process is complete, the corner points of each marker are optimized again in the sub-pixel space to improve the detection accuracy.
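The boundary-cell vote can be sketched as follows; the grid size comes from the text, while the majority-vote threshold and function name are illustrative assumptions:

```python
# Minimal sketch of the candidate test: divide a binarized orthoimage into
# an 8x8 grid and require every boundary cell to vote "black". The 127
# intensity threshold is an assumed majority-vote cutoff.

import numpy as np

def passes_border_test(ortho, grid=8):
    """ortho: square binarized image (0 = black, 255 = white)."""
    cell = ortho.shape[0] // grid
    for r in range(grid):
        for c in range(grid):
            if r in (0, grid - 1) or c in (0, grid - 1):   # boundary cell
                block = ortho[r*cell:(r+1)*cell, c*cell:(c+1)*cell]
                # vote on mean intensity: a "white" boundary cell rejects
                if block.mean() > 127:
                    return False
    return True
```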
The last step is to decode the codeword to obtain the marker's ID and its rotation status. In the 6 × 6 grid inside the marker, binary values are collected while scanning the cells from coordinate (1,1) to (6,6). The value of a white cell is 0 and the value of a black cell is 1. Through this scanning, a binary word composed of 40 digits is generated as shown in Figure 9. Note that the 33rd through 36th digits of the binary word are set to zero. By dividing this binary word into 8-bit groups from the front and converting each group to a decimal number, the binary word is converted to a decimal word. This decimal word is finally matched against the ARUCO_MIP_36h12 dictionary, and then the ID value and the rotation angle of the marker can be determined. Figure 8f shows an example of the final marker detection.

Figure 9. Example of codeword decoding to determine marker ID.
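The decoding step can be sketched as follows. The two-entry dictionary in the test is a toy stand-in for ARUCO_MIP_36h12, and the bit-packing details are our illustrative reading of the text:

```python
# Illustrative decoding sketch: read the inner 6x6 cells into a bit string
# (white = 0, black = 1), pad the 36 bits to 40, pack into 8-bit groups,
# and look the word up in a dictionary under all four rotations.

import numpy as np

def cells_to_word(cells):
    bits = "".join("1" if b else "0" for b in cells.flatten())
    bits = bits + "0" * (40 - len(bits))           # pad 36 bits to 40
    return tuple(int(bits[i:i+8], 2) for i in range(0, 40, 8))

def decode(cells, dictionary):
    for rot in range(4):                           # try 0/90/180/270 degrees
        word = cells_to_word(np.rot90(cells, rot))
        if word in dictionary:
            return dictionary[word], rot * 90      # marker ID, rotation
    return None, None
```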


Marker Reconstruction
The marker detected in the camera image can be reconstructed. Here, reconstruction means finding the 3D coordinate values of the marker in the camera coordinate system. As shown in Figure 10, the four corner points {^M P_i} (i = 1, ..., 4) of the marker can be transformed into four 3D points {^C P_i} (i = 1, ..., 4) in the camera coordinate system by applying the transformation matrix ^C_M T, following Equation (3).
Since the real size of the marker is already given, the coordinate value of each corner point ^M P_i is also determined directly. What remains is how to determine the transformation matrix ^C_M T. Let {p_i} (i = 1, ..., 4) be the set of corresponding corner points on a camera image; {p_i} is obtained by the marker detection algorithm introduced in Section 5.1. For simple matrix operations, ^M P_i and p_i are represented in homogeneous coordinates. If at least three pairs of 3D-to-2D matching relationships are given, the matrix ^C_M T = [R|t] (R ∈ ℝ^{3×3}, t ∈ ℝ^{3×1}) that satisfies Equation (4) can be calculated with the Perspective-n-Point [29] algorithm, where T is the rigid transformation that moves the 3D corner points {^M P_i} from the marker's local coordinate system to the camera coordinate system, K is the 3 × 3 matrix of camera intrinsic parameters, and Π is the 3 × 4 matrix performing the perspective projection. To obtain precise results, we also applied the Levenberg-Marquardt [30] algorithm to minimize the energy function of Equation (4).

Figure 10. Scheme of marker pose recovery in camera coordinate system using camera geometry.

Each reconstructed 3D marker in the camera coordinate system has to be converted to the world coordinate system to maintain global consistency. In this study, we used Google's Tango library to acquire the tracking pose of the mobile phone and used it for the initial registration of the marker in the world coordinate system. The device coordinate system of the Android platform follows the OpenGL coordinate system as a world frame. Therefore, in order to fuse the pose information of the Tango tracker, it is necessary to convert the four corner points of the marker from the camera coordinate system to the OpenGL coordinate system. Combining these constraints, a corner point ^M P_ij in the marker coordinate system is transformed into a point ^W P_ij in the world coordinate system by Equation (5), where ^W_D T is the camera pose in the world coordinate system estimated by the Tango tracker, ^D_C T is the axis transformation converting the camera coordinate system into the device coordinate system, and ^C_M T_j is the transformation matrix of the j-th marker recovery. Figure 11 briefly shows this global reconstruction process conceptually.
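The chain of Equations (3) and (5) can be sketched with homogeneous 4 × 4 matrices. The matrix names follow the text (^C_M T, ^D_C T, ^W_D T), but the numeric poses below are invented for illustration:

```python
# Sketch of Equations (3) and (5): homogeneous transforms chain the marker
# corners from the marker frame to the camera frame and on to the world
# frame. All numeric values are made-up examples.

import numpy as np

def make_T(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# 4 corner points of a 0.2 m marker in its own coordinate system (z = 0)
M_P = np.array([[-0.1, -0.1, 0, 1],
                [ 0.1, -0.1, 0, 1],
                [ 0.1,  0.1, 0, 1],
                [-0.1,  0.1, 0, 1]]).T

T_CM = make_T(np.eye(3), [0, 0, 2.0])            # marker 2 m ahead of camera
T_DC = make_T(np.diag([1, -1, -1]), [0, 0, 0])   # camera-to-device axis flip
T_WD = make_T(np.eye(3), [5.0, 0, 1.5])          # Tango pose of the device

C_P = T_CM @ M_P                 # Equation (3): marker -> camera frame
W_P = T_WD @ T_DC @ C_P          # Equation (5): camera -> world frame
```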

Reconstruction Refinement
The Tango tracker typically shows very accurate localization performance in a small area (less than 5 square meters). However, as the moving distance increases, the accumulated positional error also increases exponentially. This may seriously affect the accuracy of the marker registration, which is why an additional optimization technique is needed to minimize the accumulated positional error. To solve this problem, we applied bundle adjustment (BA) [31], a well-known photogrammetric optimization algorithm. In this study, we also utilized the constraints that the marker is a rigid body and that the size of the marker is given.

Figure 11. Concept of marker recovery in world coordinate system using Tango tracker.

The distance between the markers is usually at least 5 m. Therefore, there is no case in which two markers are simultaneously photographed in one image and, consequently, it is impossible to perform BA using only the marker scenes. In this paper, we propose a hybrid BA optimization using both natural feature points and marker corner points. For a scene in which the marker is explicitly shown, the 2D points of the recognized marker corners and their corresponding 3D points are used directly. For a scene in which the marker is not visible, natural feature points are extracted and tracked for the BA optimization. Figure 12 briefly shows the concept of the proposed method.
BA is only executed for the camera views selected as keyframes. Here, a keyframe denotes an image frame whose image features are prominent and helpful for tracking. To extract robust features, the Accelerated-KAZE (AKAZE) [32] algorithm is used for each keyframe. The features between the reference frame and the target frame are matched by measuring the descriptor similarity using brute-force scoring. At this time, the epipolar geometry constraint is applied to reject the matching outliers [33]. Once the good features are determined, the tracking state of the feature points is updated globally.
According to the epipolar geometry, good feature matching between two keyframes means that the pose between the keyframes is correctly estimated. In other words, if the Tango tracker returns a wrong pose, the estimation error can be detected by the epipolar geometry constraint. We preferentially used this constraint to determine the time of keyframe selection.
In this study, the image frame immediately preceding the moment at which the pose error exceeds a threshold value was selected as a new keyframe for explicit error correction. In addition, the following conditions were also considered as optimization points: a. When a marker is detected in the image sequence; b. When the number of feature points newly started to be tracked exceeds 30% of the number of feature points tracked from the reference keyframe; c. When the ratio of the number of feature points tracked from the reference frame falls to 50% or less; d. When the distance from the reference frame exceeds 2 m.
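The keyframe conditions (a)–(d) above can be combined into a single predicate. The thresholds come from the text; the function and parameter names are illustrative:

```python
# Toy predicate combining the keyframe-selection conditions (a)-(d).
# Thresholds (30%, 50%, 2 m) come from the text; state fields are assumed.

def need_new_keyframe(marker_detected, n_new_tracks, n_ref_tracks,
                      n_tracked_from_ref, dist_from_ref_m):
    if marker_detected:                                   # condition (a)
        return True
    if n_new_tracks > 0.30 * n_ref_tracks:                # condition (b)
        return True
    if n_tracked_from_ref <= 0.50 * n_ref_tracks:         # condition (c)
        return True
    if dist_from_ref_m > 2.0:                             # condition (d)
        return True
    return False
```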
Once the 2D-to-3D feature point sets are extracted from the keyframes through the conditions above, BA is performed for optimization. Suppose that m 3D feature points and n 3D markers are observed in N keyframes. Let p_il and q_jkl, respectively, be a tracked 2D feature point and a detected 2D marker point associated with the i-th 3D feature point and the k-th 3D corner point of the j-th marker on the l-th keyframe. Let u_il be the weight variable that equals 1 if the 3D feature point P_i is visible in the l-th keyframe and 0 otherwise, and similarly, let v_jl be the weight variable that equals 1 if the j-th 3D marker is visible in the l-th keyframe and 0 otherwise. BA minimizes the total re-projection error with respect to all 3D feature points and camera extrinsic parameters, as in Equation (6), where K is the matrix of camera intrinsic parameters, Π is the matrix of the perspective projection, and ^C_W T_l is the camera extrinsic parameter of the l-th keyframe. Note that the condition of Equation (7) ensures that the size of the marker remains constant during the BA optimization. P_i, Q_jk, and ^C_W T_l can finally be optimized through BA using Equation (6). In this study, only one type of mobile phone was used to register the markers, and its camera was already calibrated precisely. Therefore, we assume that the value of K is fixed and does not change during the BA optimization.
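As a toy evaluation of the Equation (6) objective, the sketch below sums the weighted squared re-projection errors; a real BA would minimize this over the 3D points and keyframe poses (e.g., with Levenberg-Marquardt). The containers and values are invented for illustration:

```python
# Sketch of the Equation (6) objective: the visibility-weighted sum of
# squared re-projection errors. Only the evaluation is shown, not the
# minimization.

import numpy as np

def project(K, T_cw, P_world):
    """Project a 3D world point into pixels with extrinsics T_cw (^C_W T)."""
    p = T_cw @ np.append(P_world, 1.0)         # world -> camera frame
    return K[:2, :2] @ (p[:2] / p[2]) + K[:2, 2]

def reprojection_error(K, T_cw_list, P3d, obs, vis):
    """obs[i][l]: observed pixel of point i in keyframe l; vis: 0/1 weights."""
    total = 0.0
    for i, P in enumerate(P3d):
        for l, T_cw in enumerate(T_cw_list):
            if vis[i][l]:
                total += np.sum((project(K, T_cw, P) - obs[i][l]) ** 2)
    return total
```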

Coordinate System Conversion
Once the poses of the markers are precisely refined, these markers have to be transformed into the CAD coordinate system for final registration. Suppose that a total of N markers are recovered by the reconstruction method described in the previous section. Among them, k markers are assumed to be pre-designated with known marker IDs in the CAD coordinate system. Let the world coordinate system in which the markers are recovered be the reference frame and the CAD coordinate system be the target frame. Then, k × 4 3D-to-3D marker point pairs (^W P_ij, ^CAD P_ij) (i = 1, ..., k, j = 1, ..., 4) can be established between the two coordinate systems. Using the matching pairs, a transformation matrix ^CAD_W T can be derived to minimize the registration error of Equation (8), where ^CAD n_ij denotes the normal vector at the point ^CAD P_ij, and ω_ij is a weight value that can be set between 0 and 1 according to the matching distance. To minimize the energy function of Equation (8), we applied a least squares-based fitting optimization. Since errors may occur when the markers are recovered, corner points with poor matching quality are excluded from the ^CAD_W T estimation using the Random Sample Consensus (RANSAC) algorithm. According to the above assumption, at least k markers must be designated in the CAD coordinate system. In this paper, we set k to 3 based on various experimental results. This means that once three markers are matched, the remaining markers can be registered automatically. Figure 13 shows the concept of registering the markers in the CAD coordinate system.

Figure 13. Concept of transforming global optimized markers into CAD coordinate system.
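A least-squares rigid fit in the spirit of Equation (8) can be sketched with the weighted Kabsch algorithm. The RANSAC outlier-rejection loop and the normal-vector term are omitted, and the function names are ours:

```python
# Weighted least-squares rigid registration (Kabsch): finds R, t such that
# dst ≈ R @ src + t. A sketch of the fitting step only; RANSAC omitted.

import numpy as np

def fit_rigid(src, dst, w=None):
    w = np.ones(len(src)) if w is None else np.asarray(w, float)
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(0)           # weighted centroids
    mu_d = (w[:, None] * dst).sum(0)
    H = (src - mu_s).T @ np.diag(w) @ (dst - mu_d)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```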

Experimental Results
An experiment was conducted to verify the precision of the proposed method. Table 1 shows the results of the quantitative error measurement. Among the three experiments, the largest re-projection error is about 1.06 pixels; at this level, the registration error is difficult for the user to perceive, which is sufficient for AR. This precision can also be checked in Figure 14, which confirms that the recovered markers are registered to the CAD model very well. The three pairs of markers selected for the coordinate conversion are marked with yellow boxes.

Table 1. Accuracy evaluation using re-projection error check. For each experiment, the number of images, the number of markers, the moving distance (in meters), and the minimum, maximum, mean, and standard deviation of the re-projection error (in pixels) are reported.

Figure 14. A result of marker registration using the proposed method.

AR Scene Generation
In this study, an AR rendering engine was developed using an open-source graphics library for mobile-optimized visualization. The rendering engine supports the JT CAD format, and more than eight large assembly blocks can be drawn without a frame drop on the Project Tango Development Kit. The proposed AR system starts its service by recognizing the markers installed at the construction site. At this point, proper execution of the app is guaranteed only if the recognized marker information is registered in the CAD system. If the registration information is missing, the AR system requests a marker registration from the user and exits. If the registration information exists, the app initializes the world coordinate system in which the AR services are to be executed. When the user inputs a project number and a block name, the 3D assets are projected and rendered on the image plane of the mobile camera according to the Model-View-Projection (MVP) pipeline pattern of OpenGL to generate the AR scene. The matrices for the MVP pipelining are defined as follows, where M_model, M_view, and M_proj represent the model transformation, the view transformation, and the projection, respectively. In Equation (10), ^C_CAD T_m is the transformation matrix determined when scene matching is performed with the marker of ID m; it transforms points from the CAD coordinate system to the camera coordinate system. ^W_D T_m denotes the pose of the mobile phone acquired at the time when scene matching with the marker of ID m is performed, and ^W_D T_u represents the updated pose of the mobile camera whenever the motion changes after scene matching. Both matrices are obtained by the Tango tracker. ^D_C T is the matrix that transforms the axes of the camera coordinate system into the device coordinate system. proj( ) in Equation (10) represents a function that generates a perspective projection matrix, where w, h, n, and f denote the width and height of the camera image and the near and far distances of the view frustum for the perspective projection, respectively. Figure 15 shows two examples of creating AR scenes. We can see that the pipe and support parts are overlapped very precisely in the camera images.
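The proj(w, h, n, f) function can be sketched as an OpenGL-style projection matrix. The intrinsics f_x, f_y, c_x, c_y used below are illustrative defaults, since the actual implementation would fold in the calibrated camera parameters:

```python
# Sketch of proj(w, h, n, f): an OpenGL-style perspective projection matrix
# built from (assumed) camera intrinsics. Maps camera-space points to clip
# space, with z = -n on the near plane and z = -f on the far plane.

import numpy as np

def proj(w, h, n, f, fx=500.0, fy=500.0, cx=None, cy=None):
    cx = w / 2 if cx is None else cx
    cy = h / 2 if cy is None else cy
    return np.array([
        [2 * fx / w, 0,          1 - 2 * cx / w,     0],
        [0,          2 * fy / h, 2 * cy / h - 1,     0],
        [0,          0,          -(f + n) / (f - n), -2 * f * n / (f - n)],
        [0,          0,          -1,                 0]])
```

After the perspective divide, a point on the near plane lands at normalized depth -1 and a point on the far plane at +1, which is how the view-clipping range is controlled.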

Figure 15. Examples of accurate AR scene generation. The pipe and pipe support CAD models are augmented very precisely on the real objects.

Figure 16 shows the useful functions of the proposed AR system. The AR system not only stably augments the 3D CAD on the screen but also supports intuitive understanding through the following functions for productivity improvement.

Useful Functions


a. Transparency Control
One of the main advantages of the proposed AR system is its ability to directly compare the 3D CAD design with the parts being installed. To support an easy and practical comparison, we implemented a function to adjust the transparency of the 3D rendering by moving a slide bar.

b. Discipline Filtering
In general, most field workers are only interested in the discipline they are responsible for. Therefore, the AR system was developed to selectively visualize only the disciplines of interest, such as pipes, supports, and equipment. With this function, the user is protected from confusion caused by complex visualization and can focus more effectively on the target.

c. View Clipping
In complex workplaces, some outfitting parts to be inspected are often hidden by other installed parts. In this case, the clipping function can be very helpful by adjusting the near and far distances of the view frustum. Using this clipping feature, it is possible to perform an AR inspection only for the range desired by the user.

d. Drawing Linkage
Although various CAD information can be intuitively monitored by the AR system, drawings still need to be checked for some critical information such as installation orders and dimensions. To relax this constraint, we extracted part names from the drawings and the CAD data and then matched them to establish an information linkage between the two. With this approach, users can now access the associated drawings very easily by simply selecting the part they are working on in the AR screen. This function makes it very easy to compare the physical target, the CAD data, and the drawing information, enabling effective installation and inspection.

Validation
The proposed AR system is actually being applied to the following offshore plant projects: Petronas FLNG2, BP Argos FPU, and ENI Coral FLNG, and is currently in active use for productivity enhancement. More than 100 field workers associated with the departments of quality control, pre-outfitting, and electrical installation are using the proposed system. Table 2 shows how the field workers utilize the developed AR functions.

Table 2. List of supported AR services by work scenario.

When a worker first arrives at the working area with an AR device, the worker enters the project name and the block name for system initialization. If the part names defined in the work orders are given, they can also be used as an initialization option to make the AR viewpoint clearer. During the system initialization, the 3D models for AR rendering and the production metadata for information linkage are downloaded. After initialization, the worker finally recognizes a marker and starts the AR service. In the following sections, we deal with the details of each AR service.

Self-Navigation
As mentioned in Section 1, the size of offshore plant structures is very large, and more than 120,000 outfitting parts usually require construction management. Therefore, it takes a lot of effort for workers who are not familiar with the construction environment (e.g., design staff, quality management staff, production support staff) to reach the working area in a timely manner. In this study, an AR approach is applied to overcome these problems, and Figure 17 shows an example of the approach.
The field worker first inputs the codename of the inspection part that needs to be searched into the AR system. Then, the app draws the trajectory route on the current camera image using AR rendering. Even if the mobile phone's pose changes, the trajectory route remains properly on the screen, keeping global consistency. The full map view also makes it easy to see where the worker is heading. When reaching the destination along the route, the worker can accurately recognize the pose of the searched target, as shown in Figure 18. This function is implemented by constructing a topology map only for the paths along which the field workers can move. Highlighting the target parts is now available for all sectors of the offshore plant.
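Routing on such a topology map can be sketched as a shortest-path search over walkable waypoints. The graph, node names, and BFS choice below are ours for illustration; the actual system may use a different search:

```python
# Toy sketch of routing on a topology map: nodes are walkable waypoints and
# breadth-first search returns the waypoint sequence to the destination.
# Node names are invented.

from collections import deque

def shortest_route(graph, start, goal):
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                      # destination unreachable

deck = {"stair_A": ["corridor_1"],
        "corridor_1": ["stair_A", "corridor_2", "pump_room"],
        "corridor_2": ["corridor_1", "valve_V123"],
        "pump_room": ["corridor_1"],
        "valve_V123": ["corridor_2"]}
```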


Fabrication Support
The main advantage of the proposed AR system is that it is more intuitive than using drawings for manufacturing. Until recently, field workers mainly used drawings for installing outfitting parts; as a result, frequent installation errors and time delays occurred due to the misinterpretation of the drawings, causing serious productivity degradation. With the AR system, however, workers can more easily confirm the installation goals, as shown in Figure 19. Workers intuitively identify the outfitting parts that need to be installed that day and also easily understand the direction and location in which they need to be installed. In addition, as shown in Figure 20, production information such as the joint location, fluid flow, and painting code necessary for fabrication can be intuitively confirmed, thereby dramatically increasing the production efficiency of the workers.

Figure 18. AR-based outfitting detection. Intuitive confirmation is possible with the highlighted box and distance and direction information.

Fabrication Support
The main advantage of the proposed AR system is that it is more intuitive than using drawings for manufacturing. Until recently, field workers mainly used drawings to install outfitting parts, so installation errors and time delays frequently occurred due to misinterpretation of the drawings. These problems cause serious productivity degradation. With the AR system, however, workers can more easily confirm installation goals, as shown in Figure 19. Workers intuitively identify the outfitting parts that need to be installed that day and easily understand the direction and location in which they need to be installed. In addition, as shown in Figure 20, production information necessary for fabrication, such as the joint location, fluid flow, and painting code, can be intuitively confirmed, dramatically increasing the production efficiency of workers.
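Overlaying installation targets and production information on the live image depends on re-projecting 3D CAD anchor points into the camera frame using the estimated pose. A minimal pinhole-projection sketch follows; the variable names and intrinsic values are illustrative, not the paper's actual implementation:

```python
def project_point(point_w, cam_pose, fx, fy, cx, cy):
    """Project a 3D world point into pixel coordinates (pinhole model).

    `cam_pose` is (R, t): a 3x3 world-to-camera rotation and a
    translation vector. fx, fy, cx, cy are calibration intrinsics.
    Returns (u, v), or None if the point is behind the camera.
    """
    R, t = cam_pose
    # Transform into the camera frame: X_c = R @ X_w + t
    xc = [sum(R[i][j] * point_w[j] for j in range(3)) + t[i] for i in range(3)]
    if xc[2] <= 0:
        return None
    u = fx * xc[0] / xc[2] + cx
    v = fy * xc[1] / xc[2] + cy
    return (u, v)

# Identity pose, a point 2 m straight ahead: it lands on the principal point.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.0]
print(project_point([0.0, 0.0, 2.0], (R, t), 1000, 1000, 640, 360))
# → (640.0, 360.0)
```

Because the pose (R, t) is updated every frame, the projected overlay stays attached to the physical part as the device moves, which is what keeps the highlighted boxes globally consistent.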

Fabrication Support
The main advantage of the proposed AR system is that it is more intuitive than using the drawings for manufacturing. In the near past, field workers mainly used drawings for installing outfitting parts. Therefore, frequent installation errors and time delays have often occurred due to the misinterpretation of drawings by the field workers. These problems cause serious productivity degradation. However, with the AR system, workers can more easily confirm installation goals as shown in Figure 19. Workers intuitively identify the outfitting parts that need to be installed that day and also easily understand the direction and location where they are needed to be installed. In addition, as shown in Figure  20, production information such as the joint location, fluid flow, and painting code necessary for fabrication can be intuitively confirmed, thereby dramatically increasing the production efficiency of workers.    Figure 21 shows some examples of how the proposed AR system supports intuitive inspection. This feature allows workers to quickly and accurately recognize installation errors that are visible through inspection. In addition, it is possible to utilize this function for blocks manufactured by a third party company, so that it is possible to more effectively manage the quality control of outsourced products. Table 3 shows the effectiveness of performing inspection with the proposed AR system. Currently, more than 100 field workers are using our AR system. Among them, we received official comments on the effectiveness from three advanced managers. Through the survey, we confirmed that ARbased inspection has a significant time-saving effect compared to conventional drawingbased inspection.

Inspection Support
for blocks manufactured by a third party company, so that it is possible to more effectively manage the quality control of outsourced products. Table 3 shows the effectiveness of performing inspection with the proposed AR system. Currently, more than 100 field workers are using our AR system. Among them, we received official comments on the effectiveness from three advanced managers. Through the survey, we confirmed that ARbased inspection has a significant time-saving effect compared to conventional drawingbased inspection.

Process Management
Timely production and delivery are very important in the shipbuilding industry. However, as the design of offshore plants has become highly complex in recent years, timely production is also becoming increasingly difficult. To overcome this problem, an innovative process management approach that leads to productivity improvement is required. Traditionally, production managers have checked outfitting progress by manually comparing work orders, drawings, and real objects. Unfortunately, this is a very time-consuming task and often causes process management mistakes. However, as shown in Figure 22, the proposed AR system makes it very easy to intuitively check the current process situation, and any problem that is found can be shared quickly. This has the effect of preemptively preventing process delays that could occur due to missed error detection.

Figure 22. Intuitive process information inquiry and management through 4D-based AR visualization.
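A 4D (3D plus schedule) progress check amounts to comparing each part's planned installation date with its actual state and color-coding the AR overlay accordingly. The record fields and color scheme below are an illustrative sketch only:

```python
from datetime import date

def progress_status(part, today):
    """Classify an outfitting part for 4D AR color coding.

    `part` is a hypothetical record with a planned install date and an
    `installed` flag; the colors here are illustrative only.
    """
    if part["installed"]:
        return "green"          # installed: on record as done
    if part["planned"] < today:
        return "red"            # overdue: candidate for issue sharing
    return "yellow"             # scheduled, not yet due

parts = [
    {"id": "PIPE-001", "planned": date(2020, 5, 1), "installed": True},
    {"id": "PIPE-002", "planned": date(2020, 5, 1), "installed": False},
    {"id": "PIPE-003", "planned": date(2020, 6, 1), "installed": False},
]
today = date(2020, 5, 10)
print({p["id"]: progress_status(p, today) for p in parts})
# → {'PIPE-001': 'green', 'PIPE-002': 'red', 'PIPE-003': 'yellow'}
```

Rendering each part in its status color lets a manager spot overdue work at a glance instead of cross-checking work orders against drawings by hand.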

Issue Sharing
If installation errors are detected or design errors are suspected, a quick way to share them is needed for fast troubleshooting. To address this, we developed a method to quickly and effectively share issues using AR technology. Figure 23 shows an example of the issue sharing proposed in this study. If a problem is detected while using the AR system, the user captures the current AR scene as a screenshot. Next, the user records the issue by drawing or entering text on the screenshot. Finally, the screenshot is sent to all design, procurement, and production personnel associated with the issue through the mailing function of the AR system, which is linked to the mailing system. This issue-sharing approach has two main advantages. First, intuitively recognizing on-the-spot issues makes it possible to share the problem situation clearly and accurately. Second, by visualizing field scenes and 3D CAD simultaneously, the relevant staff can communicate via e-mail from remote locations. In particular, CAD designers currently must visit the construction site if design errors are suspected. Owing to such design questions, there are about 3000 field visits per month by CAD designers across all offshore plant constructions, which is a waste of time. By utilizing AR technology, we can dramatically reduce this wasted time.
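Packaging an annotated screenshot for the mailing step can be sketched with the standard library. The addresses and part identifier are hypothetical, and the actual SMTP relay is site-specific, so sending (via smtplib) is left out of this sketch:

```python
from email.message import EmailMessage

def build_issue_mail(sender, recipients, part_id, note, screenshot_png):
    """Package an annotated AR screenshot as an e-mail message.

    `screenshot_png` is the captured AR scene as PNG bytes; the message
    can then be handed to an SMTP client for delivery.
    """
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    msg["Subject"] = f"[AR issue] {part_id}"
    msg.set_content(note)
    msg.add_attachment(screenshot_png, maintype="image",
                       subtype="png", filename=f"{part_id}.png")
    return msg

msg = build_issue_mail("worker@yard.example", ["design@yard.example"],
                       "PIPE-002", "Flange orientation differs from CAD.",
                       b"\x89PNG...")  # placeholder bytes for the screenshot
print(msg["Subject"])
# → [AR issue] PIPE-002
```

Because the screenshot already contains both the field scene and the overlaid 3D CAD, the recipients see the discrepancy in context without visiting the site.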

Conclusions
In this paper, we introduced the development of an AR system for productivity improvement in offshore plant construction. The main contribution is a practical AR implementation that can be used effectively in construction fields, achieved by solving the problems of instant global localization and landmark management, the biggest obstacles to successful adoption of field-oriented AR technology. In particular, thanks to the proposed automatic marker registration method, which fuses natural feature points and marker corner points, markers can now be managed easily, reflecting the conditions of the construction field and the needs of AR users.
With stable AR pose tracking across all areas of the offshore plant construction field, innovative use cases that increase manufacturing productivity were also derived and implemented. The proposed AR system allows users to reach their targets by themselves and supports very intuitive installation and inspection through useful AR visualization functions. Through 4D visualization, users can control the fabrication process very effectively and, when issues are found in the field, respond by sharing AR scenes through the e-mailing system.
In the near future, we plan to expand our work toward realizing the smart factory. As of 2020, most mobile vendors are scrambling to release new devices with a built-in LiDAR sensor, so immediate 3D reconstruction of real objects is now possible. If this 3D sensing capability and our proposed AR technology are well combined, even more innovative smart production is expected to become possible. At construction sites, we will be able not only to review CAD with AR services, but also to compute quantitative differences between real objects and designs in real time via immediate 3D scanning. Moreover, instant interference checks and high-precision measurements between CAD and real objects would be possible. We also plan to actively port our AR technology to HMD equipment such as HoloLens to diversify the AR interface. Through this, we intend to develop our research results into a smart assistant system that steadily raises the job skills of field workers.
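The scan-versus-design comparison mentioned above can be illustrated as a nearest-neighbor deviation between scanned points and CAD sample points. This is a brute-force sketch under simplifying assumptions; a real system would use a k-d tree and the full CAD surface rather than sampled points:

```python
def max_deviation(scan_points, cad_points):
    """Largest nearest-neighbor distance from scanned points to CAD points.

    Each point is an (x, y, z) tuple in the same (registered) frame.
    """
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    # For each scanned point, find its closest CAD point; report the worst case.
    return max(min(dist(s, c) for c in cad_points) for s in scan_points)

# Illustrative data: the third scanned point deviates 0.3 m from the design.
cad = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
scan = [(0.0, 0.1, 0.0), (1.0, 0.0, 0.0), (2.0, 0.3, 0.0)]
print(round(max_deviation(scan, cad), 3))
# → 0.3
```

A threshold on this deviation would flag as-built geometry that drifts from the design, supporting the interference checks and measurements described above.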