Article

Mobile Tunnel Lining Measurable Image Scanning Assisted by Collimated Lasers

School of Automobile, Chang’an University, Xi’an 710064, China
*
Author to whom correspondence should be addressed.
Sensors 2025, 25(13), 4177; https://doi.org/10.3390/s25134177
Submission received: 15 May 2025 / Revised: 1 July 2025 / Accepted: 2 July 2025 / Published: 4 July 2025
(This article belongs to the Section Remote Sensors)

Highlights

What are the main findings?
  • A novel mobile tunnel lining scanning method aided by collimated lasers is presented, significantly improving image-stitching accuracy.
  • A complete measurement system was built, and a Laplace kernel, maximum correntropy criterion (MCC)-based camera-pose calibration algorithm was introduced to further enhance calibration precision.
What is the implication of the main finding?
  • The proposed approach yields near-seamless stitched images of tunnel linings.
  • Using the new calibration algorithm, when outliers increase from 0% to 25%, the Euler-angle error grows by about 44%, and the translation error by roughly 45%, outperforming comparable benchmark algorithms.

Abstract

The health of road tunnel linings directly impacts traffic safety and requires regular inspection. Appearance defects on tunnel linings can be measured through images scanned by cameras mounted on a car to avoid disrupting traffic. Existing tunnel lining mobile scanning methods often fail in image stitching due to the lack of corresponding feature points in the lining images, or require complex, time-consuming algorithms to eliminate stitching seams caused by the same issue. This paper proposes a mobile scanning method aided by collimated lasers, which uses lasers as corresponding points to assist with image stitching to address the problems. Additionally, the lasers serve as structured light, enabling the measurement of image projection relationships. An inspection car was developed based on this method for the experiment. To ensure operational flexibility, a single checkerboard was used to calibrate the system, including estimating the poses of lasers and cameras, and a Laplace kernel-based algorithm was developed to guarantee the calibration accuracy. Experiments show that the performance of this algorithm exceeds that of other benchmark algorithms, and the proposed method produces nearly seamless, measurable tunnel lining images, demonstrating its feasibility.

1. Introduction

The Road Tunnel Lining (RTL) is an essential component of road tunnels. It supports the surrounding rock to prevent collapse, inhibits groundwater infiltration, and enhances the tunnel’s appearance [1,2]. The technical condition of the tunnel lining directly affects the health of the tunnel and is closely related to traffic safety [3,4]. Owing to factors such as natural disasters, pressure [1,5,6], and groundwater erosion [7], RTLs inevitably develop defects such as cracking, leakage, spalling, and erosion. If these defects are not detected, evaluated, or repaired promptly, they may gradually worsen and threaten traffic safety. RTL inspection is a prerequisite for maintenance operations, requiring inspectors to work on-site with detection equipment. Inefficient inspection equipment means that inspectors must occupy the tunnel for extended periods, leading to traffic disruptions. Furthermore, low-precision equipment can result in distorted results, adversely affecting maintenance decisions.
The inspection of RTL mainly includes the detection of RTL deformation (RTL-D), RTL appearance defects (RTL-ADs), and RTL internal defects (RTL-IDs). RTL-D refers to the measurement of the deviation of the actual cross and longitudinal-sectional dimensions of the tunnel lining relative to the standard dimensions. RTL-ADs detection refers to the measurement of the distribution, length, width, and area of surface defects such as cracks and leakage on the lining. RTL-IDs detection refers to the detection of internal defects such as voids within the lining. In practice, inspectors typically perform a preliminary evaluation of the tunnel based on RTL-ADs detection results and then decide whether to conduct more resource-intensive RTL-D or RTL-IDs detections. Therefore, improving the efficiency of RTL-ADs detection is crucial for improving operational efficiency. RTL-ADs can be measured by capturing images with cameras and performing analysis.
Equipped with onboard cameras, a Road Tunnel Lining Inspection Car (RIC) captures continuous, high-resolution, measurable panoramic images of the tunnel lining while driving through the tunnel without stopping. The captured RTL images allow inspectors to assess the tunnel condition remotely from an office [8]. Although the RIC has significantly improved field inspection efficiency, there is still room for improvement to further enhance efficiency and reduce costs. In recent years, many researchers have exploited deep learning techniques to perform automatic RTL-AD recognition on RIC image data, achieving high detection accuracies [9,10,11]. Enhancing the scanning precision of the RIC and lowering the overall hardware cost would complete this technology chain, substantially improving the efficiency and scalability of tunnel inspection and maintenance operations.
Owing to the large size of the tunnel lining, a single camera cannot cover the entire cross-section of the lining while maintaining high image resolution. As a result, current RICs rely on camera arrays for inspection. With camera arrays in use, the RIC’s data processing system must employ image stitching techniques to merge multiple images with overlapping areas into a single, complete image [12]. The image stitching process consists of the following three main stages: image registration, image reprojection, and image blending.
Key challenges for tunnel defect detection arise during the image registration stage. The goal of image registration is to estimate the geometric relationship between two images with overlapping areas, using models such as translation, affine transformation, or perspective transformation. Standard image registration techniques typically rely on extracting the corresponding features from overlapping regions to estimate the model parameters [13]. However, these methods often fail in scenarios such as tunnel linings, where few features are available for registration [14], especially when the overlapping fields of view (FOVs) between cameras are small.
In this paper, an RTL scanning method based on a collimated laser array is proposed to improve the stitching accuracy of lining images. An RIC based on the proposed method was developed, and a Laplace kernel, maximum correntropy criterion (MCC)-based algorithm was developed to enhance the calibration accuracy of the system. The experimental results demonstrate that the algorithm can accurately estimate the pose of adjacent cameras. With the assistance of collimated laser spots, the system can generate near-seamless, high-quality RTL panoramic images.
The remainder of this paper is organized as follows: Section 2 introduces related research. Section 3 presents the schematic of the developed system. Section 4 describes the methodology. Section 5 presents experiments and discussions. Finally, Section 6 provides the conclusions.

2. Background

Because the FOV of a single camera is insufficient to encompass an entire tunnel lining cross-section at the resolution required for metric analysis, state-of-the-art RICs employ camera arrays. The fundamental technical problems in mobile RTL imaging are as follows:
(1) Image reprojection—establishing accurate mapping between each camera’s image and the three-dimensional lining surface. (2) Image registration—fusing the individual views into a seamless, metrically consistent panoramic image of the full lining.
Existing solutions can be classified into the following three methodological families: (1) Pure photogrammetric stitching, which relies solely on inter-image features; (2) LiDAR-assisted stitching, which incorporates point cloud geometry as an external constraint; (3) Collimated laser-assisted stitching, which exploits structured-light references to refine both projection and alignment. Table 1 lists the principal sensors adopted in commercial and research-grade RICs worldwide. As the table shows, most systems implement either the pure-photogrammetric or the LiDAR-assisted strategy, whereas only a few employ the more recent collimated laser approach.
Pure-photogrammetric stitching estimates the image-to-lining projection for each camera by combining the pre-calibrated camera extrinsics with an approximate camera-to-lining distance. Subsequently, adjacent camera images are registered with a simple similarity (translation) operator and concatenated. References [28,29] describe an RTL image stitching method based on corresponding region matching. This method first computes the translation between images based on the similarity of overlapping regions, stitches the images into long-strip images, and finally assembles the long-strip images into a complete panoramic image of the lining. The method, however, is highly sensitive to vehicle pitch, bounce, or yaw; to minimize such disturbances, the vehicle must travel at low speed, which severely degrades inspection efficiency. Pahwa et al. [33,34] introduced a cylindrical-tunnel image stitching strategy in which an initial mosaic is generated from the known tunnel cross-section to produce a coarse projection, after which a bundle adjustment refines the alignment using feature points. Their prototype system demonstrated that panoramic stitching is achievable, yet its robustness still depends on the quality of feature extraction. Jiang et al. [35] developed an RTL scanning system based on LSCs. Common features in the overlap of two adjacent LSC images are matched to accomplish lateral stitching, while the native LSC acquisition sequence provides longitudinal concatenation automatically, noticeably simplifying the pipeline. Nevertheless, the approach presupposes a feature-rich lining surface; in low-texture areas, scenarios frequently encountered when the camera FOV is narrow, reliable corresponding points cannot be obtained, causing the fine registration to fail.
LiDAR-assisted stitching adopts a coarse-to-fine strategy for image stitching [17,36,37]. First, rough stitching is performed based on the calibration pose between the cameras and the LiDAR. The corresponding feature-based image registration techniques are then used for fine stitching. Wang et al. [14] use LiDAR measurements of the camera-to-lining pose for coarse alignment and refine the mosaic with SURF feature points extracted from the overlapping regions. Their experiments report a SURF success rate of only about 66%, reflecting the general paucity of salient features in tunnel lining imagery. Du et al. [16] propose a “seam-driven” strategy. LiDAR again supplies the coarse match, whilst the final panorama is generated by optimizing the stitching seams with a graph cut blender, thus avoiding strict point-to-point registration. Although robust, the graph cut is computationally expensive—0.52 s per image for coarse alignment versus 42.7 s per image for seam optimization. Zou et al. [31] calibrate LSC-to-LiDAR extrinsics via a pyramidal calibration block and a large striped board. The intrinsic projection is first used for coarse alignment, a Bayesian scheme then increases pairwise registration accuracy in low-texture or corroded regions, and a graph cut refines the seam. Nevertheless, the entire pipeline remains computationally intensive. Overall, LiDAR assistance delivers substantially better accuracy than pure-photogrammetric stitching methods but it still incurs high computational cost for feature/region matching, and, in feature-deficient scenes, the fine stage may fail, leading to visible seams.
Collimated laser-assisted stitching projects a laser array into the overlapping FOVs of adjacent cameras, creating artificial tie points that guarantee reliable correspondences even when the lining offers little native texture. Some researchers have explored the use of collimated lasers to assist with stitching ASC RTL images in a static environment. Wang et al. [38,39] proposed a collimated laser-assisted method that projects laser spots as auxiliary control points and solves the P4P problem to establish the pose relationship between the lining surface and the camera, thus facilitating image stitching; their results demonstrated the feasibility of this approach, showing that the laser array provides sufficient constraints for accurate alignment. However, their experiments were limited to stationary setups and have not yet been extended to mobile tunnel lining inspection, where vehicle motion, vibration, and exposure timing introduce additional challenges. The commercial system FOCUSα-T [30] likewise employs a collimated laser array and publishes impressive panoramic results, but no technical details of its calibration or stitching algorithms have been disclosed.
The accurate calibration of the structural parameters of the measurement system, including the poses of the cameras and lasers, is essential for the development of such equipment. The intrinsic parameters of a single camera and the poses of the lasers in a camera coordinate frame can be calibrated using Zhang’s method with a checkerboard [40]. However, for the tunnel lining scanning systems shown in Figure 1, the overlapping FOVs between cameras are limited, presenting challenges for the pose calibration between cameras. Furthermore, some of the cameras and lasers face the sky in the system, making it necessary to suspend the calibration boards or instruments in the air for calibration. Therefore, to reduce the operational difficulty, the calibration device should be sufficiently lightweight and compact.
A substantial body of work has been devoted to calibrating camera arrays whose FOVs are small or entirely non-overlapping. Li et al. [41] employ a dual-ring circular-coded target, and by translating this large marker to multiple locations and exploiting the uniqueness of each coded dot they solve each camera’s pose with respect to the board and, in turn, recover inter-camera extrinsics. Although the method is accurate, the target itself is bulky and difficult to deploy in constrained spaces such as tunnels. Crombrugge et al. [42] replace the physical board with a projector that casts an encoded stripe pattern onto a planar screen; the approach inherits the same drawback—namely, the requirement for a large projection surface—rendering it equally unsuitable for tunnel lining scanners. Yang et al. [43,44] mount two chessboards on a rigid bar and apply Zhang’s technique to determine the relative pose, yet the rigid target is heavy and unwieldy in confined environments. Liu et al. [45] adopt a lightweight alternative in which a collimated laser beam is intercepted by a planar plate, and each camera observes the resulting spot to derive control points. While the procedure is operationally convenient, its accuracy is ultimately constrained by the precision of the laser rangefinder.
To address these limitations, we propose a single-checkerboard, line feature-based calibration framework tailored to mobile tunnel lining scanners, since line features provide more stable and reliable characteristics. The collimated laser is modeled with dual quaternions (DQ) [46,47,48], providing a compact algebraic representation that accelerates pose computation and improves numerical stability. Conventional DQ solvers rely on least squares optimization [49] and are therefore vulnerable to outliers arising from specular reflections or defocused images of the checkerboard. Inspired by the method in reference [50], a Laplace kernel-based algorithm is developed to improve the pose estimation accuracy of cameras.

3. Schematic of the System

A schematic of the proposed method is shown in Figure 1, which integrates an ASC array with a collimated laser array to capture tunnel lining images. Two laser beams are projected onto the overlapping FOVs of adjacent cameras. These lasers, along with the ASCs, form a triangulation measurement unit, allowing for the simultaneous acquisition of tunnel images and measurement of the image projection relationship. The projected laser spots provide auxiliary reference points for stitching the lining images, thereby addressing the problem of image registration difficulties owing to the lack of features.
An RIC was developed based on this method, as illustrated in Figure 2a. The system comprised a top RTL photogrammetry setup mounted on the roof of the vehicle and a rotatable setup mounted at the rear (see Figure 2c). LED strobe lights served as the light source for image capture. As shown in Figure 2b, the top-mounted system is used to inspect large-span tunnel linings (with three or more lanes), whereas the rotatable rear system measures the side linings. The top system consisted of 11 narrow-FOV ASCs and 1 wide-FOV ASC, and the side system contained 21 narrow-FOV ASCs and 3 wide-FOV ASCs. All cameras are Basler acA2440-75 um/uc sensors (2440 × 2048 px; mono/color camera; Basler AG, Ahrensburg, Germany) fitted with C-mount fixed-focus lenses. The narrow-FOV cameras capture fine surface texture, whereas the wide-FOV cameras provide macroscopic context. The system includes 68 collimated lasers, with 2 lasers shared between adjacent cameras. Each ASC, together with the four lasers, forms a triangulation measurement unit. To prevent defocus, the system is equipped with a focus-adjustment mechanism for each camera lens, driven by a stepper motor, to ensure sharp images of the linings in the varied tunnel shapes.
For highway operation the inspection vehicle must maintain a cruising speed of 60 to 80 km/h. To suppress the motion-induced smear generated during exposure, the exposure time of each ASC is restricted to ≤10 µs. Because tunnel linings are generally gray-to-black and ambient lighting is extremely weak, high-intensity artificial illumination is mandatory. The developed lighting system adopts a modular LED strobe design. The side tunnel lining photography system is equipped with 160 LED modules, and the top one is assembled with 120 LED modules: each module contains 18 LED chips rated at 18 W. The collimated laser emitters deliver 70 mW at 520 nm. Given the large aggregate power of the lasers and LEDs, continuous operation would generate excessive heat, risking hardware damage, accelerating optical degradation, and increasing energy consumption.
The hardware architecture, as shown in Figure 3a, is proposed to mitigate these issues, with a pulse-distribution controller centrally scheduling the on–off timing of the LEDs, lasers, and cameras. Both lasers and LEDs are fired slightly before the start of camera exposure to ensure maximum irradiance during the exposure window. Each laser pulse lasts 1 ms, each LED pulse 40 µs, and the maximum repetition rate is about 72 Hz.
As shown in Figure 3b, every server computer controls 3 ASCs. Each server (Model AIIS-3410U, Advantech Co., Suzhou, China; Intel i7-6700 CPU, 8 GB RAM, Intel, Santa Clara, CA, USA) runs an independent acquisition service, which is orchestrated by a supervisory workstation. The data pipeline of the acquisition service is implemented in C++; message queues connect independent threads for each functional stage, converting raw image streams to JPEG streams and writing them to solid-state drives (SSDs).
The models, key specifications, and quantities of all major sensor components are summarized in Table 2.

4. Methodology

4.1. RTL Scanning Schematic

The principle of collimated laser-assisted photogrammetry is illustrated in Figure 4b. The optical axes of the cameras converge at the origin $O_s$ of the measurement system, and the collimated laser beams are parallel and intersect the $Y_s$ axis of the coordinate system. According to the principles of triangular geometry, the relative distance of the laser spots in the object space from the camera is determined by the following relationship:
$$B = \frac{A\,p\tan\theta}{x - p\tan\theta}, \qquad x > p\tan\theta$$
Equation (1) can be used to design the structural parameters of the system. In practice, the laser spot position is given by the following:
$$z_c\tilde{u} = z_c\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} k_x & 0 & u_0 \\ 0 & k_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = KX^{(c)}$$
where $k_x = p/d_x$ and $k_y = p/d_y$, $p$ is the principal distance of the lens, $d_x$ and $d_y$ are the pixel dimensions of the camera sensor, $K$ is the camera intrinsic matrix, and the superscript on $X^{(c)}$ indicates that the point coordinate is expressed in the camera coordinate system ($c$-frame). $u$ and $v$ are the coordinates of point $X$ in the image, and $u_0$ and $v_0$ are the image coordinates of the intersection of the optical axis with the imaging plane.
Before using Equation (2), the lens distortion values must be calculated and corrected. The lens distortion is described using the second-order Brown model, expressed as follows:
$$x_{c1}^{d} = x_{c1}\left(1 + k_1r^2 + k_2r^4\right)$$
$$y_{c1}^{d} = y_{c1}\left(1 + k_1r^2 + k_2r^4\right)$$
where $k_1$ and $k_2$ are the radial distortion coefficients, $x_{c1}$ and $y_{c1}$ are the undistorted normalized image plane coordinates, $x_{c1}^{d}$ and $y_{c1}^{d}$ are the distorted normalized image plane coordinates, and $r^2 = x_{c1}^2 + y_{c1}^2$.
According to Equation (2), once the collimated laser position is given, the three-dimensional position of the laser spot in the $c$-frame can be computed. Following [47], let the Plücker coordinates of the laser beam be as follows:
$$\mathcal{L} = \begin{bmatrix} L^T & L_O^T \end{bmatrix}^T$$
where $L$ and $L_O$ are both $3\times1$ matrices, $L$ is the direction vector of the line, $L_O$ is the moment of the line, and $L^TL_O = 0$. Given two points $X_1$ and $X_2$ on the three-dimensional line, the Plücker coordinates are given by the following:
$$L = X_2 - X_1$$
$$L_O = \left[X_1\right]_\times X_2$$
where $\left[\,\cdot\,\right]_\times$ is the following skew-symmetric matrix operator:
$$\left[X\right]_\times = \begin{bmatrix} 0 & -X_3 & X_2 \\ X_3 & 0 & -X_1 \\ -X_2 & X_1 & 0 \end{bmatrix}$$
When the laser line direction is normalized, i.e., $\left\|L\right\| = 1$, the coordinates of any point on the collimated laser beam can be written as follows:
$$X = \lambda L + \left[L\right]_\times L_O$$
where $\lambda \in \mathbb{R}$ is an arbitrary scalar.
Substituting Equation (7) into Equation (2) yields the following laser spot localization equation:
$$K^{-1}\tilde{u} = \lambda L + \eta\left[L\right]_\times L_O$$
where $\lambda, \eta \in \mathbb{R}$ are unknown scalars. Equation (8) can be solved via Singular Value Decomposition (SVD), thereby recovering the three-dimensional position of the laser spot in the object space.
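As an illustration, the following minimal sketch solves the reconstructed Equation (8) with a least-squares (SVD-based) step; the function name, the example intrinsics, and the laser pose are hypothetical values, not those of the real system.

```python
import numpy as np

def locate_laser_spot(u, K, L, L_O):
    """Recover the 3-D laser spot in the camera frame from one pixel observation (sketch of Eq. (8)).

    u   : (2,) pixel coordinates of the detected, undistorted laser spot
    K   : (3,3) camera intrinsic matrix
    L   : (3,) unit direction of the collimated laser (Pluecker part L)
    L_O : (3,) moment of the laser line (Pluecker part L_O)
    """
    u_tilde = np.array([u[0], u[1], 1.0])
    ray = np.linalg.solve(K, u_tilde)            # K^{-1} u~, the viewing ray direction
    m = np.cross(L, L_O)                         # [L]_x L_O, point of the beam closest to the origin
    A = np.column_stack([L, m])                  # ray = lambda*L + eta*m  (Eq. (8))
    (lam, eta), *_ = np.linalg.lstsq(A, ray, rcond=None)   # SVD-based least squares
    return ray / eta                             # 3-D spot: z_c * K^{-1}u~ with z_c = 1/eta

# Hypothetical example: a beam parallel to the optical axis, offset 100 mm along X
K = np.array([[2500.0, 0.0, 1220.0], [0.0, 2500.0, 1024.0], [0.0, 0.0, 1.0]])
L = np.array([0.0, 0.0, 1.0])
P0 = np.array([100.0, 0.0, 0.0])                 # an arbitrary point on the beam
L_O = np.cross(P0, P0 + L)                       # Eq. (6)
spot = locate_laser_spot([1470.0, 1024.0], K, L, L_O)   # -> approx. [100, 0, 1000] mm
```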

4.2. Reprojection and Stitching of Tunnel Lining Images

The tunnel lining profile can be treated as a curve consisting of multiple arc segments. The tunnel lining surface within the FOV of a camera can be approximated as a plane owing to the small curvature of the tunnel profile and limited FOV of a single camera. Based on this assumption, the projection relationship between the image and the tunnel lining surface can be determined using the coordinates of the laser spots in the image.
As shown in Figure 4a, there are 4 laser spots in the FOV of each camera, which can be used to determine the projection plane. For readability, the superscripts of the coordinate frames are temporarily omitted. According to Equation (8), the object-side coordinates of the 4 laser spots, $\tilde{X}_1$, $\tilde{X}_2$, $\tilde{X}_3$, $\tilde{X}_4$, are obtained, and these points are coplanar according to the assumption. The normal of this plane is $\varepsilon = \begin{bmatrix}\varepsilon_x & \varepsilon_y & \varepsilon_z\end{bmatrix}^T$, and the following relation holds:
$$\begin{bmatrix} \tilde{X}_1 & \tilde{X}_2 & \tilde{X}_3 & \tilde{X}_4 \end{bmatrix}^T\varpi = 0_4$$
where $\varpi = \begin{bmatrix}\varepsilon^T & \xi\end{bmatrix}^T$ is the projection plane.
Solving Equation (9) with SVD, the projection relationship between the tunnel lining surface and the camera image is given by the following:
$$z_c\begin{bmatrix}\tilde{u} \\ 0\end{bmatrix} = \begin{bmatrix} K & 0_{3\times1} \\ \varepsilon^T & \xi \end{bmatrix}\tilde{X}^{(c)} = M\tilde{X}^{(c)}$$
where $\xi \ne 0$ to ensure that $M$ is invertible. This formula can be used to calculate the coordinates in the $c$-frame from $\tilde{u}$ through the following:
$$X^{(c)} = -\frac{\xi K^{-1}\tilde{u}}{\varepsilon^TK^{-1}\tilde{u}}$$
The pixel-scale normalized RTL image can be obtained using Equations (10) and (11). The steps are as follows (a minimal code sketch follows the list):
(1)
Equation (10) is used to project the border pixels of the image to determine the boundaries of the new image;
(2)
Equation (11) is used to calculate the backward interpolation mapping table of the new image, and interpolation is then performed to generate the new image.
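A minimal sketch of the two steps above, assuming the four laser spots have already been localized with Equation (8); the helper names are hypothetical, and the interpolation step itself (e.g., with a remap table) is omitted.

```python
import numpy as np

def fit_projection_plane(spots_3d):
    """Eq. (9): plane through the (approximately coplanar) laser spots, solved with SVD."""
    A = np.hstack([spots_3d, np.ones((len(spots_3d), 1))])   # rows are homogeneous spot coordinates
    _, _, Vt = np.linalg.svd(A)
    varpi = Vt[-1]                          # right singular vector of the smallest singular value
    return varpi[:3], varpi[3]              # plane normal eps and offset xi

def backproject_pixels(pixels, K, eps, xi):
    """Eq. (11): map image pixels onto the fitted lining plane, expressed in the camera frame."""
    uv1 = np.hstack([pixels, np.ones((len(pixels), 1))])     # homogeneous pixel coordinates
    rays = np.linalg.solve(K, uv1.T).T                       # K^{-1} u~ for every pixel
    z = -xi / (rays @ eps)                                   # plane depth along each viewing ray
    return rays * z[:, None]                                 # 3-D points on the projection plane

# Usage idea: backproject the image border pixels to bound the normalized image (step 1),
# then build a backward interpolation mapping table over the new grid and resample (step 2).
```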
The pixel-scale normalized RTL images must be stitched. The following stitching process is adopted (as shown in Figure 5):
(1)
The two-dimensional affine transformation parameters between two adjacent camera images are calculated based on the corresponding laser spots. Using these parameters, backward warping is applied to the benchmark camera images according to Equation (12). The overlapping region of the images is then obtained, the grayscale of the images is adjusted, and pixel fusion is finally performed within the overlapping region to generate the RTL profile images.
(2)
The overlapping region of the adjacent RTL profile images is roughly calculated based on the camera acquisition interval. Wavelet decomposition is then performed on the images in this region to separate the high- and low-frequency images. Next, Equation (13) is used to calculate the normalized cross-correlation (NCC) of the overlapping region between the two high-frequency images and to find its maximum position to achieve precise registration of adjacent profile images. Finally, pixel fusion is performed in the overlapping region to obtain a panoramic RTL image, using the following relations (this registration step is sketched in code after the equations):
$$u_2 = Hu_1 + h$$
$$NCC\left(I_1, I_2\right) = \frac{\sum_{u,v}\left(I_1\left(u,v\right) - \bar{I}_1\right)\left(I_2\left(u,v\right) - \bar{I}_2\right)}{\left[\sum_{u,v}\left(I_1\left(u,v\right) - \bar{I}_1\right)^2\sum_{u,v}\left(I_2\left(u,v\right) - \bar{I}_2\right)^2\right]^{1/2}}$$
where $H$ is the affine transformation matrix, $s$ is the scaling parameter, $\theta$ is the rotation angle, $h$ is the two-dimensional translation vector, $I_1$ and $I_2$ are the pixel grayscale values of the two images, and
$$H = s\begin{bmatrix} 1 & -\theta \\ \theta & 1 \end{bmatrix}$$
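The following sketch illustrates the two registration stages under stated assumptions: the laser-spot correspondences are already available in pixel coordinates, a Haar wavelet is used for the decomposition, and the 32-pixel template margin is an arbitrary illustrative choice; the function names are hypothetical.

```python
import numpy as np
import cv2
import pywt

def similarity_from_spots(pts_mov, pts_ref):
    """Eqs. (12)/(14): fit u_ref = s*R(theta)*u_mov + h from corresponding laser spots."""
    M, _ = cv2.estimateAffinePartial2D(pts_mov.astype(np.float32),
                                       pts_ref.astype(np.float32))
    return M                                                  # 2x3 matrix [s*R | h]

def ncc_peak(overlap_ref, overlap_mov):
    """Eq. (13): NCC peak between the high-frequency wavelet bands of the overlap region."""
    _, (hr, vr, dr) = pywt.dwt2(overlap_ref.astype(np.float32), 'haar')
    _, (hm, vm, dm) = pywt.dwt2(overlap_mov.astype(np.float32), 'haar')
    high_ref = (np.abs(hr) + np.abs(vr) + np.abs(dr)).astype(np.float32)
    high_mov = (np.abs(hm) + np.abs(vm) + np.abs(dm)).astype(np.float32)
    res = cv2.matchTemplate(high_ref, high_mov[32:-32, 32:-32], cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    return max_loc                                            # offset of the NCC maximum (half resolution)
```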

4.3. Fast Search for Laser Spot in Image

Before image projection, it is necessary to obtain the image coordinates of the laser spots. Because the laser spot occupies a very small proportion of the camera image, applying a global image search algorithm would result in inefficient system operation. By utilizing the projection constraint of the laser beams and the epipolar constraint between the cameras, the laser spot can be found quickly.
Let the coordinates of the corresponding point $X$ in the two camera coordinate frames be $X^{(c_1)}$ and $X^{(c_2)}$; then the following can be obtained:
$$X^{(c_2)} = RX^{(c_1)} + t$$
The epipolar constraint is described as follows [51]:
$$\tilde{u}_2^TF\tilde{u}_1 = 0$$
where $F = K_2^{-T}EK_1^{-1}$ is the fundamental matrix between the cameras and $E = \left[t\right]_\times R$ is the essential matrix.
Let the Plücker coordinates of the laser beam in the $c_1$-frame be $\mathcal{L}^{(c_1)}$; then its projection in the image of camera 2 is as follows [52]:
$$l_2 \sim K_2^{-T}\begin{bmatrix} \left[t\right]_\times R & R \end{bmatrix}\mathcal{L}^{(c_1)}$$
According to Equation (17), the projection of $\mathcal{L}^{(c_1)}$ in the image coordinate frame of camera 1 is as follows:
$$l_1 \sim K_1^{-T}\begin{bmatrix} 0_{3\times3} & I_3 \end{bmatrix}\mathcal{L}^{(c_1)} = K_1^{-T}L_O$$
Using Equations (16)–(18), a set of constraint lines for the corresponding point $X$ can be obtained, as shown in Figure 6. Based on these lines, the following coarse-to-fine strategy can be used to quickly locate $u_1$ and $u_2$ (a code sketch follows the list):
(1)
Search along the line $l_1$ to find $\tilde{u}_1$, and then calculate $\tilde{u}_{2,\mathrm{rough}} = l_{ep2}\times l_2$ for a coarse location of $u_2$, where $l_{ep2} = F\tilde{u}_1$ is the epipolar line of $\tilde{u}_1$ in the image of camera 2;
(2)
In the vicinity of $\tilde{u}_{2,\mathrm{rough}}$, methods such as the grayscale centroid are used to estimate the precise value of $\tilde{u}_2$.
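A minimal sketch of this coarse-to-fine search, assuming the fundamental matrix $F$ follows the convention of Equation (16) and the projected laser line $l_2$ of Equation (17) is given in homogeneous form; the window size and function names are illustrative.

```python
import numpy as np

def coarse_spot_in_cam2(u1, F, l2):
    """Step (1): intersect the epipolar line of u1 with the projected laser line l2."""
    l_ep2 = F @ np.array([u1[0], u1[1], 1.0])    # epipolar line of u1 in image 2
    p = np.cross(l_ep2, l2)                      # homogeneous intersection of the two constraint lines
    return p[:2] / p[2]

def refine_by_centroid(img, u_rough, win=15):
    """Step (2): grayscale-centroid refinement in a small window around the coarse location."""
    x0, y0 = int(round(u_rough[0])), int(round(u_rough[1]))
    patch = img[y0 - win:y0 + win + 1, x0 - win:x0 + win + 1].astype(np.float64)
    ys, xs = np.mgrid[-win:win + 1, -win:win + 1]
    w = patch - patch.min() + 1e-12              # background-suppressed weights
    return np.array([x0 + (w * xs).sum() / w.sum(),
                     y0 + (w * ys).sum() / w.sum()])
```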

4.4. System Calibration

A two-step calibration method is adopted, as illustrated in Figure 7. Before applying this method, the intrinsic parameters of the cameras are calibrated using Zhang’s method. The two steps of the method are as follows:
Step 1: Collect the coordinate set of points on the laser, which includes the following sub-steps:
(1)
A checkerboard and the Perspective-n-Point (PNP) algorithm are used to independently sample the spatial points on the laser within the FOV of each camera, obtaining a non-corresponding control point (NCCP) coordinate set $\left\{NCCP_{ki}^{(c_n)}\right\}$ under the $c_1$, $c_2$ frames. Here, the subscript $k$ represents the index of the laser, $i$ represents the index of the coordinate in the set, and $n = 1, 2$. Using these NCCP sets, the camera–laser triangulation unit can be calibrated based on Equations (7) and (8).
(2)
Using a flat plate, the corresponding control point (CCP) set $\left\{CCP_{ki}^{(c_n)}\right\}$ is obtained under the $c_1$, $c_2$ frames based on Equation (8).
Step 2: Estimate the camera pose, which includes the following sub-steps:
(1)
According to Equation (5), a Plücker coordinate can be given by two three-dimensional points; thus, $N$ three-dimensional points give $N\left(N-1\right)/2$ Plücker coordinates. The NCCP coordinate set is used to obtain the NCCP–Plücker coordinate set $\left\{\mathcal{L}_{ki}^{NCCP(c_n)}\right\}$ for the $k$-th laser beam, where $i$ is the index of the Plücker coordinate.
(2)
The CCP coordinate set is used to obtain the Plücker coordinates of several spatial lines that are not parallel to the lasers. These are called CCP–Plücker coordinates $\left\{\mathcal{L}_{i}^{CCP(c_n)}\right\}$, where each coordinate in this set corresponds to a common line in the object space.
(3)
The $\left\{\mathcal{L}_{ki}^{NCCP(c_n)}\right\}$ and $\left\{\mathcal{L}_{i}^{CCP(c_n)}\right\}$ datasets are merged and input into the developed DQ-Laplacian maximum correntropy criterion (DLM) algorithm program to calculate the pose parameters of the two cameras.
This method requires both NCCP and CCP sets because the DLM algorithm requires the Plücker coordinates of at least two non-parallel lines, as explained in Section 4.5.2.
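As an illustration of Step 2(1), the sketch below builds Plücker coordinates from an NCCP point set according to Equations (5) and (6); the minimum pair distance mirrors the pre-filter later described in Section 5.2, and the function names are hypothetical.

```python
import numpy as np
from itertools import combinations

def plucker_from_points(X1, X2):
    """Eqs. (5)-(6): Pluecker coordinates of the line through two 3-D points, with unit direction."""
    d = X2 - X1
    L = d / np.linalg.norm(d)                    # normalized direction
    L_O = np.cross(X1, L)                        # moment with respect to the unit direction
    return L, L_O

def plucker_set(points, min_gap=500.0):
    """All pairwise Pluecker coordinates of one NCCP set, skipping closely spaced point pairs."""
    out = []
    for i, j in combinations(range(len(points)), 2):
        if np.linalg.norm(points[j] - points[i]) >= min_gap:
            out.append(plucker_from_points(points[i], points[j]))
    return out
```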

4.5. DLM Algorithm

The DLM algorithm uses the Laplace–MCC to model the fitting residuals and estimate the camera pose based on the Plücker coordinates of the corresponding lines in the two camera coordinate frames. In this section, the Laplace–MCC algorithm is introduced and the DLM algorithm is developed.

4.5.1. Introduction of Laplace–MCC

To make this study self-contained, an overview of the Laplace–MCC algorithm is provided [53]. The correntropy of two random variables $Y$ and $Z$ is given by the following:
$$V_\sigma\left(Y, Z\right) = E\left[\kappa_\sigma\left(Y - Z\right)\right]$$
where $\kappa_\sigma$ is the kernel function and $\sigma$ is the size of the kernel function used to control the radius of influence.
For a discrete sample set, there is the following:
$$\hat{V}_\sigma\left(Y, Z\right) = \frac{1}{N}\sum_{i=1}^{N}\kappa_\sigma\left(Y_i - Z_i\right)$$
where $\kappa_\sigma\left(Y_i - Z_i\right) = \exp\left(-\left|Y_i - Z_i\right|/\sigma\right)$ is the Laplacian kernel.
As illustrated in Figure 8, the Laplacian kernel has a sharper central cusp and much heavier tail than the Gaussian kernel. This shape allows it to down-weight extreme outliers while still retaining useful information carried by moderate residuals. Consequently, the Laplacian kernel is well suited to heavy-tailed error distributions and datasets containing many moderate outliers—situations in which the Gaussian often over-penalizes residuals and degrades estimation accuracy. In our application, laser spots captured with a checkerboard typically exhibit this error pattern; numerous mid-level deviations persist after pre-processing, whereas gross outliers are largely removed, making the Laplacian kernel the more effective choice.
According to Equation (20), the optimization problem is given by the following:
$$\max_a\ \frac{1}{N}\sum_{i=1}^{N}\exp\left(-\left|Y_i - aZ_i\right|/\sigma\right)$$
Note that $f\left(y\right) = e^{-y}$ has the following conjugate convex function:
$$f\left(y\right) = \sup_\omega\left(-\omega y - \phi\left(\omega\right)\right)$$
where the equation describes the maximum difference between the linear function $-\omega y$ and $\phi\left(\omega\right)$, and this difference is maximized when $\omega = f\left(y\right)$.
Substituting $y_i = \left|Y_i - aZ_i\right|/\sigma$ into Equation (22) transforms the raw optimization problem into the following:
$$\max_{a,\omega}\ \sum_{i=1}^{N}\left(-\omega_i\left|Y_i - aZ_i\right|/\sigma - \phi\left(\omega_i\right)\right)$$
This problem can be solved using an alternating iterative approach by breaking it down into the following two subproblems:
Subproblem 1:
$$\max_a\ \sum_{i=1}^{N}\left(-\bar{\omega}_i\left|Y_i - aZ_i\right|/\sigma\right)$$
Subproblem 2:
$$\max_\omega\ \sum_{i=1}^{N}\left(-\omega_i\left|Y_i - \bar{a}Z_i\right|/\sigma - \phi\left(\omega_i\right)\right)$$
where Equation (24a) updates the model, and Equation (24b) updates the weight. The solution to Subproblem 2 is given by the following:
$$\bar{\omega}_i = \exp\left(-\left|Y_i - \bar{a}Z_i\right|/\sigma\right)$$
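To make the alternating scheme concrete, the following sketch applies Equations (24a), (24b), and (25) to a scalar model $Y = aZ$; the weighted least-absolute-deviations step is solved with a weighted median, which is one possible choice and not the solver used in the paper, and the data are synthetic.

```python
import numpy as np

def weighted_median(values, weights):
    order = np.argsort(values)
    cdf = np.cumsum(weights[order])
    return values[order][np.searchsorted(cdf, 0.5 * cdf[-1])]

def laplace_mcc_fit(Y, Z, sigma=1.0, iters=20):
    """Alternating solution of Subproblems 1 and 2 for the scalar model Y = a*Z."""
    w = np.ones_like(Y)                                  # weights initialized to 1
    a = 0.0
    for _ in range(iters):
        # Subproblem 1: weighted least-absolute-deviations update of a (Eq. (24a))
        a = weighted_median(Y / Z, w * np.abs(Z))
        # Subproblem 2: closed-form weight update (Eq. (25))
        w = np.exp(-np.abs(Y - a * Z) / sigma)
    return a, w

# Synthetic example: true slope 2 with Laplacian noise and 20% gross outliers
rng = np.random.default_rng(0)
Z = rng.uniform(1.0, 10.0, 200)
Y = 2.0 * Z + rng.laplace(0.0, 0.05, 200)
Y[:40] += rng.normal(0.0, 5.0, 40)
a_hat, _ = laplace_mcc_fit(Y, Z)                          # close to 2 despite the outliers
```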

4.5.2. DLM Derivation

The Plücker coordinates of the line L can be described using the following dual quaternions [47]:
$$\hat{\dot{L}} = \dot{L} + \epsilon\dot{L}_O = \left(L_1 + \epsilon L_{O1}\right)i + \left(L_2 + \epsilon L_{O2}\right)j + \left(L_3 + \epsilon L_{O3}\right)k$$
where $\epsilon$ is the dual operator, $\epsilon^2 = 0$, $\epsilon \ne 0$, and $i, j, k$ are the basis elements of the quaternion; $\dot{L}$ is the real part of $\hat{\dot{L}}$, and $\dot{L}_O$ is the dual part. Both $\dot{L}$ and $\dot{L}_O$ are pure quaternions. The Euclidean transformation of the Plücker coordinates is given by the following:
$$\hat{\dot{L}}' = \hat{\dot{q}}\,\hat{\dot{L}}\,\hat{\dot{q}}^{-1} = \dot{L}' + \epsilon\dot{L}_O'$$
where $\left\|\hat{\dot{q}}\right\| = 1$, and the following:
$$\hat{\dot{q}} = \cos\left(\hat{\theta}/2\right) + \sin\left(\hat{\theta}/2\right)\hat{n}$$
$$\hat{\dot{q}}^{-1} = \cos\left(\hat{\theta}/2\right) - \sin\left(\hat{\theta}/2\right)\hat{n}$$
$$\hat{\theta} = \theta + \epsilon S$$
$$\hat{n} = \left(L_1 + \epsilon L_{O1}\right)i + \left(L_2 + \epsilon L_{O2}\right)j + \left(L_3 + \epsilon L_{O3}\right)k$$
By separating the dual and non-dual parts, Equation (27) can be expanded as follows:
$$\dot{L}' = \dot{q}\dot{L}\dot{q}^{-1}, \qquad \dot{L}_O' = \dot{q}\dot{L}\dot{q}_O^{-1} + \dot{q}\dot{L}_O\dot{q}^{-1} + \dot{q}_O\dot{L}\dot{q}^{-1}$$
By right multiplying both sides by $\dot{q}$, the first sub-equation of Equation (32) becomes the following:
$$\dot{L}'\dot{q} - \dot{q}\dot{L} = 0$$
By right multiplying both sides by $\dot{q}$, and utilizing $\dot{q}^{-1}\dot{q}_O + \dot{q}_O^{-1}\dot{q} = 0$, the second sub-equation becomes the following:
$$\dot{L}_O'\dot{q} - \dot{q}\dot{L}_O + \dot{L}'\dot{q}_O - \dot{q}_O\dot{L} = 0$$
The computation of the quaternions can be represented by matrices. Equations (33) and (34) can be rewritten as follows:
$$\left[H\left(\dot{L}'\right) - M\left(\dot{L}\right)\right]\dot{q} = 0$$
$$\left[H\left(\dot{L}_O'\right) - M\left(\dot{L}_O\right)\right]\dot{q} + \left[H\left(\dot{L}'\right) - M\left(\dot{L}\right)\right]\dot{q}_O = 0$$
where
$$H\left(\dot{q}\right) = \begin{bmatrix} q_0 & -q^T \\ q & q_0I_3 + \left[q\right]_\times \end{bmatrix}, \qquad M\left(\dot{q}\right) = \begin{bmatrix} q_0 & -q^T \\ q & q_0I_3 - \left[q\right]_\times \end{bmatrix}$$
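A short sketch of how the quaternion multiplication matrices and the blocks $\Gamma_i$ of Equation (37) can be assembled from one pair of corresponding line directions; the helper names are hypothetical.

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])

def H_mat(q):
    """Left quaternion-multiplication matrix of q = [q0, qx, qy, qz]."""
    q0, qv = q[0], q[1:]
    out = np.zeros((4, 4))
    out[0, 0], out[0, 1:] = q0, -qv
    out[1:, 0], out[1:, 1:] = qv, q0 * np.eye(3) + skew(qv)
    return out

def M_mat(q):
    """Right quaternion-multiplication matrix of q."""
    q0, qv = q[0], q[1:]
    out = np.zeros((4, 4))
    out[0, 0], out[0, 1:] = q0, -qv
    out[1:, 0], out[1:, 1:] = qv, q0 * np.eye(3) - skew(qv)
    return out

def gamma_block(L_prime, L):
    """Eq. (37): Gamma_i from the directions of a corresponding line pair, treated as pure quaternions."""
    return H_mat(np.r_[0.0, L_prime]) - M_mat(np.r_[0.0, L])
```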
Based on these formulas, the coordinate parameter transformation problem can be decomposed into the following two sub-optimization problems according to Equation (21):
$$\max_{\dot{q}}\ J_1\left(\dot{q}\right) = \sum_{i=1}^{N}\exp\left(-\left\|\Gamma_i\dot{q}\right\|_1/\sigma\right)$$
$$\max_{\dot{q}_O}\ J_2\left(\dot{q}_O\right) = \sum_{i=1}^{N}\exp\left(-\left\|\Gamma_i\dot{q}_O + \gamma_i\right\|_1/\sigma\right)$$
$$\mathrm{s.t.}\quad \dot{q}^T\dot{q} = 1, \qquad \dot{q}^T\dot{q}_O = 0$$
where the subscript $i$ represents the index of the Plücker coordinate, and the other parts are as follows:
$$\Gamma_i = H\left(\dot{L}_i'\right) - M\left(\dot{L}_i\right)$$
$$\gamma_i = \left[H\left(\dot{L}_{Oi}'\right) - M\left(\dot{L}_{Oi}\right)\right]\dot{q}$$
Equations (36a,b) can be transformed into an epigraph form according to Equation (24a), with the constraints in Equation (36c) resulting in the following two linear programming subproblems:
Subproblem 1:
$$\min_x\ J_1\left(x, \bar{\omega}_1\right) = c^T\left(\bar{\omega}_1\right)x \qquad \mathrm{s.t.}\quad \left|\tilde{\Gamma}x\right| \le \Lambda x, \quad a^Tx = 1$$
Subproblem 2:
$$\min_y\ J_2\left(y, \bar{\dot{q}}, \bar{\omega}_2\right) = c^T\left(\bar{\omega}_2\right)y \qquad \mathrm{s.t.}\quad \left|\tilde{\Gamma}y + \gamma\right| \le \Lambda y, \quad b^T\left(\bar{\dot{q}}\right)y = 0$$
where $\bar{\omega}_1$ and $\bar{\omega}_2$ are weight vectors of length $4N$, $\tau$ and $\eta$ are slack vectors of length $4N$, $\mathrm{vstack}\left(\cdot\right)$ represents the vertical stacking operator for matrices, and the other symbols are as follows:
$$x = \begin{bmatrix}\dot{q}^T & \tau^T\end{bmatrix}^T, \qquad y = \begin{bmatrix}\dot{q}_O^T & \eta^T\end{bmatrix}^T$$
$$\tilde{\Gamma} = \begin{bmatrix}\Gamma & 0_{4N\times4N}\end{bmatrix}, \qquad \Gamma = \mathrm{vstack}\left(\Gamma_1, \dots, \Gamma_N\right)$$
$$\Lambda = \begin{bmatrix}0_{4N\times4} & I_{4N}\end{bmatrix}, \qquad a = \begin{bmatrix}1 & 0_{1\times\left(3+4N\right)}\end{bmatrix}^T$$
$$b^T\left(\bar{\dot{q}}\right) = \begin{bmatrix}\bar{\dot{q}}^T & 0_{1\times4N}\end{bmatrix}, \qquad \gamma = \begin{bmatrix}\gamma_1^T & \cdots & \gamma_N^T\end{bmatrix}^T$$
$$c\left(\bar{\omega}\right) = \begin{bmatrix}0_4^T & \bar{\omega}^T\end{bmatrix}^T$$
The second constraint $a^Tx = 1$ in Equation (39) plays a role equivalent to $\dot{q}^T\dot{q} = 1$ in Equation (36c), thus avoiding the need to solve a non-convex optimization problem. This equivalence follows from Theorem 1 (see Appendix A for the proof). According to the theorem, and noticing that $\dot{q}^T\dot{q} = 1$, a unique solution of $\Gamma\dot{q} = 0$ exists. Given that Equation (39) essentially solves $\Gamma\dot{q} = 0$, Equations (39) and (40) have a unique solution only when the condition in Theorem 1 is satisfied.
Theorem 1.
For matrix $\Gamma$, if the number of lines involved in the calculation is no less than 2 and the directions of these lines are not all the same, then the rank of $\Gamma$ is 3, indicating that the corresponding null-space dimension of $\Gamma$ is 1. Otherwise, the rank of $\Gamma$ is 2, and the null-space dimension is 2.
After calculating $x$ and $y$ according to Equations (39) and (40), the estimated values of $\dot{q}$ and $\dot{q}_O$ are given by the following:
$$\hat{\dot{q}} = \frac{\eta x}{\left\|\eta x\right\|_2}$$
$$\hat{\dot{q}}_O = \eta y$$
where $\eta = \begin{bmatrix}I_4 & 0_{4\times4N}\end{bmatrix}$ extracts the first four components of a vector.
Based on the derivation above, the DLM algorithm is presented in Algorithm 1. To improve the stability and accuracy of the results, the algorithm uses a modified Silverman’s rule (MSR) to update the Laplace kernel size σ , given by the following:
$$\sigma = \begin{cases} 0.8\times E_{max}\times E_{IQR}\times D_S^{-1}, & \text{if } D_S \ge 1.5\times E_{IQR} \\ 1.06\times\min\left(\sigma_E,\ E_{IQR}/1.34\right)\times N^{-0.2}, & \text{otherwise} \end{cases}$$
where $E_{max}$ is the maximum fitting residual, $E_{IQR}$ is the interquartile range of the fitting residuals, $\sigma_E$ is the standard deviation of the fitting residuals, $D_S = \sigma_E/E_{median}$ represents the asymmetry of the fitting residual distribution, and $E_{median}$ is the median of the fitting residuals. The MSR prevents the value of $\sigma$ from becoming too small, which would distort the results.
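The MSR rule, as reconstructed above, can be sketched as follows; the percentile-based quantities are computed from the absolute fitting residuals, and the function name is hypothetical.

```python
import numpy as np

def msr_kernel_size(residuals):
    """Kernel size update of Eq. (42) (modified Silverman's rule), as reconstructed in the text."""
    e = np.abs(residuals)
    e_iqr = np.subtract(*np.percentile(e, [75, 25]))     # interquartile range E_IQR
    d_s = np.std(e) / np.median(e)                       # asymmetry indicator D_S
    if d_s >= 1.5 * e_iqr:
        return 0.8 * e.max() * e_iqr / d_s
    return 1.06 * min(np.std(e), e_iqr / 1.34) * len(e) ** -0.2
```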
$\dot{q}$ and $\dot{q}_O$ can be converted into a rotation matrix and translation vector, respectively, which can be substituted into Equations (15) and (16) for laser spot searching. The conversion formulas are as follows [54]:
$$R = \begin{bmatrix} 2\left(q_0^2 + q_1^2\right) - 1 & 2\left(q_1q_2 - q_0q_3\right) & 2\left(q_1q_3 + q_0q_2\right) \\ 2\left(q_1q_2 + q_0q_3\right) & 2\left(q_0^2 + q_2^2\right) - 1 & 2\left(q_2q_3 - q_0q_1\right) \\ 2\left(q_1q_3 - q_0q_2\right) & 2\left(q_2q_3 + q_0q_1\right) & 2\left(q_0^2 + q_3^2\right) - 1 \end{bmatrix}$$
$$t = 2\dot{q}_O\dot{q}^{-1} = 2M^T\left(\dot{q}\right)\dot{q}_O$$
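The conversion of Equations (43) and (44) can be sketched as below; the quaternion product is written out explicitly so that the snippet is self-contained, and the function names are illustrative.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of two quaternions stored as [w, x, y, z]."""
    p0, pv = p[0], p[1:]
    q0, qv = q[0], q[1:]
    return np.r_[p0 * q0 - pv @ qv, p0 * qv + q0 * pv + np.cross(pv, qv)]

def dq_to_rt(q, q_o):
    """Eqs. (43)-(44): rotation matrix and translation vector from the unit dual quaternion (q, q_o)."""
    q0, q1, q2, q3 = q
    R = np.array([
        [2 * (q0**2 + q1**2) - 1, 2 * (q1*q2 - q0*q3),     2 * (q1*q3 + q0*q2)],
        [2 * (q1*q2 + q0*q3),     2 * (q0**2 + q2**2) - 1, 2 * (q2*q3 - q0*q1)],
        [2 * (q1*q3 - q0*q2),     2 * (q2*q3 + q0*q1),     2 * (q0**2 + q3**2) - 1],
    ])
    q_conj = np.r_[q[0], -q[1:]]                 # inverse of a unit quaternion
    t = 2.0 * quat_mul(q_o, q_conj)[1:]          # vector part of 2 * q_o * q^{-1}
    return R, t
```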
Algorithm 1. DLM-MSR Algorithm
Input:
L i ( c 1 ) : A set of Plücker coordinates of at least 2 non-parallel lines in the c 1 -Frame;
L i ( c 2 ) : The corresponding set of Plücker coordinates in the c 2 -Frame;
K : The maximum number of iterations for the solver;
E : The minimum update step size;
Process:
0: Initialize $\omega^{(1)} = \begin{bmatrix}1 & 1 & \cdots & 1\end{bmatrix}^T$ and $z^{(0)} = 0_8$;
for   k = 1 : K
1:
Set $\bar{\omega} = \omega^{(k)}$, compute $x^{(k)}$ according to Equation (39) and $\hat{\dot{q}}^{(k)}$ according to Equation (41a);
2:
Set $\bar{\dot{q}} = \hat{\dot{q}}^{(k)}$ and substitute it into Equation (40) to compute $y^{(k)}$, then compute $\hat{\dot{q}}_O^{(k)}$ according to Equation (41b);
3:
Set $z^{(k)} = \begin{bmatrix}\hat{\dot{q}}^{(k)T} & \hat{\dot{q}}_O^{(k)T}\end{bmatrix}^T$ to update the result;
4:
If $\left\|z^{(k)} - z^{(k-1)}\right\| < E$ or $k = K$, then break;
5:
Update the kernel size $\sigma$ using Equation (42), and compute $\omega^{(k+1)}$ using Equation (25);
end for
Return: $\hat{\dot{q}}$ and $\hat{\dot{q}}_O$
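Step 1 of Algorithm 1 reduces to the linear program of Equation (39); the sketch below expresses it with CVXPY, the library used for the experiments in Section 5, under the assumption that the stacked matrix $\Gamma$ and the positive weight vector are already available. The function name is hypothetical, and Subproblem 2 is analogous with the additional equality constraint $b^T(\bar{\dot{q}})y = 0$.

```python
import numpy as np
import cvxpy as cp

def dlm_subproblem1(Gamma, w, sigma):
    """Weighted-L1 update of the real quaternion part (sketch of Eq. (39))."""
    n = Gamma.shape[0]                            # 4N rows of vstack(Gamma_1, ..., Gamma_N)
    q = cp.Variable(4)                            # real part of the dual quaternion
    tau = cp.Variable(n)                          # slack variables bounding |Gamma @ q| elementwise
    objective = cp.Minimize(w @ tau / sigma)      # c(w)^T x with c(w) = [0_4; w]
    constraints = [Gamma @ q <= tau,              # epigraph form of the weighted L1 norm
                   -tau <= Gamma @ q,
                   q[0] == 1]                     # convex surrogate of the a^T x = 1 constraint
    cp.Problem(objective, constraints).solve()
    return q.value / np.linalg.norm(q.value)      # re-normalization as in Eq. (41a)
```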

5. Experiment and Discussion

The content of this section is as follows:
(1)
Section 5.1, Section 5.2 and Section 5.3 present a numerical simulation, an indoor test, and an outdoor field test that collectively evaluate the performance of the DLM algorithm. The simulation was implemented in Python 3.10, and the optimization problems in Equations (39) and (40) were solved with the CVXPY library (ver. 1.5.2).
(2)
In Section 5.4, the DLM algorithm was used to calibrate the RIC, and actual RTL images were collected to verify the feasibility of the proposed laser-assisted image stitching method. The experimental data were processed with custom Python scripts, OpenCV 4.9.0 was used for fundamental image operations, and the Pywt library (ver. 1.7.0) was employed for wavelet analysis.

5.1. DLM Simulation

As shown in Figure 9, the simulation model consists of two cameras and two lasers. The lines representing the laser beams are labeled as $L_1$ and $L_2$, and their directions are parallel. The optical axes of the lasers and cameras are both perpendicular to the $X_W$-axis, and the angles between the optical axes of Cameras 1 and 2 and the laser beams are $\alpha_1$ and $\alpha_2$, respectively. In the rectangular “Sampling Area” shown in the figure, the points on the lasers are evenly sampled at intervals of $\Delta d_1$. The sampling noise is assumed to follow a Gaussian distribution, with the standard deviation of the sampling noise being $\sigma_1$ at the point nearest to the camera and $\sigma_2$ at the farthest point. The standard deviation of the sampling noise between the nearest and farthest points is given by $\sigma\left(d\right) = \sigma_1 + d\left(\sigma_2 - \sigma_1\right)/d_1$, where $d$ is the distance from the sampling location to the nearest point.
In the simulation, set $d_1 = 1000$ mm, $d_2 = 300$ mm, $\Delta d_1 = 50$ mm, $\alpha_1 = \alpha_2 = 8^\circ$, $\sigma_1 = 1$ mm, and $\sigma_2 = 2$ mm. A total of 42 points were sampled from $L_1$ and $L_2$ in each camera’s coordinate frame. Two NCCP–CCP mixed Plücker coordinate sets $S^{(c_k)} = \left\{\mathcal{L}_{ij}\ \middle|\ i < j;\ i, j = 1, \dots, 42\right\}^{(c_k)}$, $k = 1, 2$, were obtained, and $\left|S^{(c_1)}\right| = \left|S^{(c_2)}\right| = 861$, where $\left|S\right|$ denotes the number of members in the set $S$.
To simulate the deviation caused by ambient light, some of the samples were randomly selected and larger Gaussian noise was added to make them outliers, with a standard deviation of $\sigma_3 = 4.58$ mm. The experiment was divided into six groups, simulating calibration results when outliers accounted for 0%, 5%, 10%, 15%, 20%, and 25% of the sample set. Each experiment was repeated 300 times. During the experiment, the maximum number of iterations for the DLM algorithm was set to $K = 100$, and the minimum update step was $E = 10^{-10}$.
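The noise model described above can be sampled as in the following sketch, which uses the reconstructed distance-dependent standard deviation and the stated outlier level; the function name and the random seed are arbitrary.

```python
import numpy as np

def sample_laser_points(P0, L, d1, step, s1, s2, outlier_ratio=0.0, s3=4.58, seed=0):
    """Noisy samples along one simulated laser beam (Section 5.1 noise model)."""
    rng = np.random.default_rng(seed)
    d = np.arange(0.0, d1 + step, step)               # sampling distances from the near end
    pts = P0 + np.outer(d, L)                         # ideal points on the beam
    sigma = s1 + d * (s2 - s1) / d1                   # distance-dependent noise level
    noisy = pts + rng.normal(0.0, sigma[:, None], pts.shape)
    n_out = int(outlier_ratio * len(d))
    idx = rng.choice(len(d), n_out, replace=False)    # randomly selected outlier samples
    noisy[idx] += rng.normal(0.0, s3, (n_out, 3))
    return noisy
```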
The DQ-LS algorithm [49] was used as a comparison method, and the following three variants of the DLM algorithm were tested: DLM-MSR (DLM with Modified Silverman’s rule), DLM-SR (DLM with raw Silverman’s rule), and DM (DLM without the weight update step). Among these, DLM-MSR is the algorithm described in Algorithm 1, DLM-SR uses the original Silverman’s rule to update the kernel size, and the DM algorithm eliminates the weight update step.
The experimental results are listed in Table 3. The table shows the average L2 norm of the estimation error of Euler angle (EEA) and the estimation error of translate (ET). In addition, it shows the average number of iterations required for DLM-MSR and DLM-SR to satisfy the convergence conditions.
The results indicated that the DQ-LS method is extremely sensitive to noise and outliers. As the proportion of outliers increased from 0% to 25%, the EEA of the DQ-LS algorithm increased by 0.0062 (about 98%) and ET increased by about 21.85 (about 177%). The performance of the DLM-SR algorithm is inferior to that of the DM algorithm, suggesting that the raw Silverman’s rule prevents the algorithm from converging. In contrast, the DLM-MSR algorithm had the best estimation accuracy. As the proportion of outliers increased from 0% to 25%, the EEA increased by only 0.0011 (about 44%) and the ET increased by only 1.63 (about 45%). Furthermore, the average number of iterations required for convergence was about 4, significantly fewer than the iterations required by the DLM-SR, demonstrating that the modified Silverman’s rule can significantly enhance the accuracy and efficiency of the algorithm.

5.2. Indoor Experiment

An indoor experiment was carried out to validate the proposed method. The experiment platform was equipped with two Basler acA2440-75um cameras, each with an $f = 25$ mm lens, as shown in Figure 10. The lasers were adjusted to be nearly parallel to simulate the worst condition. During the experiment, the common FOV of the cameras was only minimally restricted, allowing the use of the EPNP algorithm [55] to estimate the camera pose parameters, which were then compared with the results estimated by other algorithms. The experiment followed the calibration method described in Section 4.4, and the camera pose parameters were estimated using DQ-LS, DLM-MSR, DLM-SR, DM, and EPNP.
During the experiment, the NCCP sets collected from lasers 1 and 2 under the $c_1$-frame and $c_2$-frame were denoted as $NCCP_{ij}$, and the CCP set as $CCP_i$, where $i, j = 1, 2$, with $i$ representing the camera coordinate system index and $j$ representing the collimated laser index. The numbers of collected NCCPs were as follows:
$$\left|NCCP_{11}\right| = 480, \qquad \left|NCCP_{12}\right| = 549$$
$$\left|NCCP_{21}\right| = 887, \qquad \left|NCCP_{22}\right| = 954$$
The corresponding numbers of NCCP–Plücker coordinates were 229,920, 300,852, 785,882, and 909,162, respectively. The number of collected CCP pairs was four, corresponding to four CCP–Plücker coordinates. The number of CCPs involved in this experiment was relatively small because, for the real equipment, collecting CCPs requires a large flat plate that must be suspended in the air, making it difficult to move. Excessive CCPs would pose additional challenges for the production of the equipment.
To ensure balanced sample numbers, the following preprocessing was applied:
(1)
A minimum point pair distance of 500 mm was used to eliminate Plücker coordinates generated by closely spaced point pairs in the NCCP–Plücker set;
(2)
A total of 500 Plücker coordinates were randomly selected from the filtered NCCP–Plücker set for calculation;
(3)
The chosen NCCP–Plücker coordinates were merged with the CCP–Plücker coordinates to obtain the set used for pose estimation.
The distributions of NCCPs and CCPs in the two camera coordinate frames after the above processing are shown in Figure 11.
Table 4 lists the estimated camera pose parameters from different methods, as well as the mean error from the reprojection of the CCP set. It also provides the root mean square (RMS) of the epipolar constraint deviation $e_{ep} = l_{ep}^T\tilde{u}$, calculated using the CCP coordinate set according to Equation (16). The RMS is given by the following:
$$\mathrm{RMS}\left(X\right) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}x_i^2}$$
According to the reprojection error of the CCPs and the epipolar lines, it is evident that the performances of DQ-LS, DLM-MSR, and DM are similar and better than those of EPNP. Furthermore, the results obtained using the DLM-SR method are severely distorted, indicating that the original Silverman’s rule fails to provide a reasonable Laplace kernel size.
Figure 12 shows the projection of the constrained epipolar lines from the different algorithms. It can be observed that $l_{ep,\mathrm{LS}}$, $l_{ep,\mathrm{DLM\text{-}MSR}}$, and $l_{ep,\mathrm{DM}}$ are very similar, and their RMS values are also very similar, whereas $l_{ep,\mathrm{DLM\text{-}SR}}$ significantly deviates from the laser spot.
The camera pose estimation results of each algorithm were further analyzed using the reprojection error of the NCCP–Plücker coordinate set. Because the complete N C C P i j –Plücker coordinate set is large, the mean of this set is statistically close to the true value, which can be used as a reference benchmark. Figure 13 shows a histogram of the L2 norm of the reprojection error. It can be seen that the reprojection errors of the DQ-LS, DLM-MSR, and DM methods were small and similar.
These comparison results indicate that DQ-LS, DLM-MSR, and DM have similar estimation accuracies in this case, but in terms of pose angle estimation DLM-MSR and DM can provide more accurate results. The performances of the DLM-MSR and DM algorithms were similar, likely because of the low number of outliers in the experimental sample.

5.3. Outdoor Experiment

The proposed method was applied to calibrate the side wall RTL subsystem. The camera highlighted in Figure 14 was selected, and several representative images are shown in Figure 15. During calibration the lens was focused on the mock tunnel lining to help the operator locate the laser spots. As a consequence, the checkerboard appears slightly out of focus in Figure 15c, which inevitably reduces the accuracy of spot localization.
During the experiment each camera operated at 33 fps while the operator moved the calibration board to intercept the laser spots. Extrinsic parameters were estimated for the three adjacent side wall cameras (IDs 29, 30, 31). The following four calibration strategies were evaluated: DLM-MSR, DM, DQ-LS, and EPNP.
For DLM-MSR, DM, and LS, the numbers of NCCPs entering the computation were 140, 131, and 157, respectively, plus a single pair of CCPs. For the EPNP route, 703, 514, and 855 checkerboard image pairs were available; a pose was first computed for each pair and the mean pose was taken as the final estimate. As indicated in Figure 15c,d, point sets 1/4 constitute one set of conjugate features for adjacent cameras, whereas point sets 2/3 form the second set.
The results are summarized in Table 5 and the nominal installation angles are listed for reference. Convergence is assessed by comparing to the design angles. EPNP diverges markedly, showing that the wide-FOV-based strategy is unsuitable—most likely because the lower resolution of the wide-FOV images inhibits accurate corner localization. In contrast, DLM-MSR, DM, and LS all return angles close to the design values, with DLM-MSR and DM virtually identical. The three-dimensional reprojection plots in Figure 16 corroborate these findings. With DLM-MSR or DM the re-projected NCCP and CCP laser spots fall on a common straight line, whereas the LS solution fails to align the four laser tracks, indicating that LS breaks down under the present imaging conditions.
The numerical differences between DLM-MSR and DM are negligible. This is most likely because the mean shift pre-filter applied during calibration had already removed the majority of outliers, leaving both solvers with virtually the same inlier set. To magnify any residual discrepancy, the laser spot coordinates obtained from each solution were re-projected into three-dimensional space, a best-fit Plücker line was computed, and the perpendicular distance of every point to that line was evaluated as follows:
$$d = \left(r^Tr - \left(r^TL\right)^2\right)^{0.5}$$
where $L$ and $L_O$ are the Plücker coordinates of the fitted line, $\left\|L\right\| = 1$, $r = P - \left[L\right]_\times L_O$, and $P$ is the coordinate of a three-dimensional point.
For each camera pair the RMS and mean absolute error (MAE, Equation (46)) of the distances are listed in Table 6. The 29/30 pair exhibits larger residuals than the 30/31 pair because its NCCP set is smaller and therefore has a higher noise fraction. Within each pair, RMS and MAE are almost identical, indicating an outlier-free dataset.
For cameras 29/30, DLM-MSR reduces the projection error by ≈0.9 mm with respect to DM, whereas for cameras 30/31 the two algorithms are statistically indistinguishable. Hence the MSR weight-updating mechanism does not provide a meaningful accuracy gain in this scenario; the non-iterative DM variant achieves comparable accuracy with lower computational cost. In practice, whether to activate the MSR re-weighting step should be decided adaptively according to the noise characteristics of the dataset.
$$\mathrm{MAE}\left(X\right) = \frac{1}{N}\sum_{i=1}^{N}\left|x_i\right|$$

5.4. Image Stitching Experiment Based on Real RIC Data

5.4.1. Selected Cameras

Cameras 5 and 6 of the top lining measurement system mounted on the RIC were selected for the experiment.
First, the intrinsic parameters of the cameras were calibrated using Zhang’s method. Then, the cameras and laser emitters were mounted on the vehicle. Finally, the calibration boards were suspended on a long pole to calibrate the camera–laser triangulation measurement units and camera pose parameters. The calibration results are listed in Table 7, where the Plücker coordinates of each laser were estimated using the least squares method based on the NCCP datasets. The intrinsic parameters of cameras were obtained using Zhang’s method, and the camera pose parameters were calculated using the DLM-MSR algorithm.
In the experiment, four pairs of CCPs were collected, and the numbers of NCCP samples were as follows:
$$\left|NCCP_{11}\right| = 417, \qquad \left|NCCP_{12}\right| = 328$$
$$\left|NCCP_{13}\right| = 371, \qquad \left|NCCP_{14}\right| = 484$$
$$\left|NCCP_{21}\right| = 234, \qquad \left|NCCP_{22}\right| = 445$$
$$\left|NCCP_{23}\right| = 543, \qquad \left|NCCP_{24}\right| = 453$$
The subscripts of the NCCP datasets correspond to the laser indices shown in Figure 17.
The calibration results are shown in Figure 17. From Figure 17b, it can be seen that owing to vibration or calibration errors, some laser spots may deviate slightly from the expected positions; therefore, Equations (12) and (13) are needed for precise image alignment.

5.4.2. RTL Image Mosaicking with Laser Aid

A tunnel located on the G30 National Highway in Xi’an, Shaanxi Province, China, was selected for on-site testing. The site images are shown in Figure 18. The collected images were stitched using the method described in Section 4.2, and the stitching results are shown in Figure 19. According to Figure 19b, there are no significant stitching seams in the stitched images at the macro level. However, some faint shadow areas can be observed near the stitching seams, caused by the attenuation of camera illumination, which leads to a decrease in brightness on both sides of the image and can be corrected through flat-field correction.
From Figure 19c,d, in the locally magnified images it is difficult to observe obvious stitching seams, and the image texture near the seams maintains good continuity even though there are few corresponding features on the tunnel lining image. This shows that the laser spot-assisted registration method is efficient for obtaining high-quality panoramic images of tunnel linings.

5.4.3. Comparison of RTL Image-Stitching Methods

To evaluate whether the laser-assisted stitching method proposed in this study genuinely improves registration accuracy, we benchmarked it against two widely used baselines: Windows Image Composite Editor (ICE 2.0.3) and the SIFT + RANSAC method [56].
After undistortion and scale normalization with the calibration parameters, the RTL image pairs shown in Figure 20 were processed. The SIFT + RANSAC method produced reliable transformations only when a conspicuous crack spanned the overlap region (Figure 20(a2,b2)); for the remaining low-texture images, removing laser spot correspondences left either too few inliers to support a model or produced large geometric errors. Figure 21 contrasts the RTL mosaics generated by ICE and by our method. The ICE result exhibits conspicuous misalignment near the lining cracks, and the central laser spot trace deviates markedly from a straight line, indicating an unstable registration, whereas the mosaic produced by the proposed method preserves an almost perfectly straight laser line that coincides with the vehicle trajectory. These findings demonstrate that the laser-assisted method delivers a significant improvement in stitching accuracy for RTL imagery, a scenario in which the scarcity of natural texture limits conventional feature-based approaches.

6. Conclusions

This work addresses the dual goals of enhancing the quality and efficiency of RTL image acquisition. We introduce a collimated laser-assisted mobile scanning framework in which laser arrays supply artificial tie lines that (i) stabilize image stitching and (ii) provide an explicit mapping between individual camera images and the tunnel lining surface. To calibrate the camera array, we propose a single-checkerboard procedure that treats the laser lines as line correspondences, thereby determining the pose of each camera. To guarantee robustness, we develop a dual-quaternion Laplacian kernel-based algorithm (DLM-MSR) and establish the theoretical conditions under which this algorithm converges. Extensive simulations—as well as controlled indoor and field experiments—demonstrate that DLM-MSR achieves rapid convergence, delivers higher calibration accuracy than conventional methods, and remains stable even in the presence of substantial outlier noise.
A tunnel inspection prototype vehicle has been built on the basis of the proposed methodology. The paper presents a detailed hardware framework of this platform, including its principal sensors and computers together with their key performance specifications. These disclosures are intended to serve as a practical reference for the design and construction of similar inspection systems.
The performance of the proposed DLM-MSR algorithm was validated through both numerical simulation and physical experiments. Empirically, the solver converged in only 3~4 iterations on average and exhibited noticeably greater robustness. In Monte Carlo simulation, increasing the outlier ratio from 0% to 25% caused the DLM-MSR solution to degrade by only ~44% in Euler-angle error and ~45% in translation error, whereas competing algorithms deteriorated far more severely. Consistent results were obtained in laboratory and field trials: DLM-MSR delivered significantly higher calibration accuracy for the tunnel lining scanning system and completely avoided the divergence occasionally observed with the benchmark techniques.
An experiment using real RIC data confirmed the feasibility of the proposed RTL scanning method, and a comparative evaluation against conventional image-stitching techniques showed that the RTL mosaics produced by our approach achieve markedly higher registration accuracy than those obtained with traditional methods.
This study also has the following limitations:
(1) It assumes that the projection relationship between the camera images and the tunnel lining is a planar projection, ignoring the curvature of the tunnel cross-section. The laser array-assisted image stitching method proposed here is therefore only applicable to tunnels with smooth cross-sectional profiles and may fail for tunnels with non-smooth profiles, such as immersed tube tunnels.
(2) It does not address image stitching when laser points are missing.
(3) Because of the limitations of the current experimental conditions, the pixel-scale error of the stitched lining images was not analyzed, and a full panoramic RTL image was not provided.
(4) Because the set of real-world RTL images collected so far is insufficiently diverse, quantitative stitching-error statistics are not yet available.
We will investigate these issues in future research.

Author Contributions

Conceptualization, X.W. and H.S.; methodology, X.W.; software, X.W.; validation, X.W., J.W. and J.X.; formal analysis, X.W.; investigation, X.W.; resources, J.M. and H.S.; writing—original draft preparation, X.W.; writing—review and editing, X.W. and J.X.; visualization, X.W. and J.X.; supervision, J.M.; project administration, J.M. and H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Fundamental Research Funds for the Central Universities under Grant CHD300102223203 and the Key Research and Development Project in Shaanxi Province under Grant 2024CY2-GJHX-70.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
RTL: Road Tunnel Lining
RTL-D: RTL Deformation
RTL-ADs: RTL Appearance Defects
RTL-IDs: RTL Internal Defects
RIC: Road Tunnel Lining Inspection Car
FOVs: Fields of View
ASC: Area-scanning Camera
LSC: Line-scanning Camera
LSL: Line-scanning Laser
SVD: Singular Value Decomposition
NCC: Normalized Cross-correlation
PNP: Perspective-n-Point
NCCP: Non-corresponding Control Point
CCP: Corresponding Control Point
DQ: Dual Quaternions
MCC: Maximum Correntropy Criterion
DLM: DQ Laplace–MCC algorithm
RMS: Root Mean Square
MAE: Mean Absolute Error
MSR: Modified Silverman’s Rule
EEA: Estimation Error of Euler Angle
ET: Estimation Error of Translation
CT: Computation Time
ICE: Windows Image Composite Editor

Appendix A

Proof of Theorem 1. Case 1:
When only one line is involved in the calculation, from Equation (35a), we have the following:
$$\Gamma = \begin{bmatrix} 0 & (L' - L)^{T} \\ L' - L & [\,L' + L\,]_{\times} \end{bmatrix} = \begin{bmatrix} 0 & a^{T} \\ a & [\,b\,]_{\times} \end{bmatrix} \tag{A1}$$

where $a^{T} = [a_1\ a_2\ a_3]$, $b^{T} = [b_1\ b_2\ b_3]$, $a_i = L'_i - L_i$, and $b_i = L'_i + L_i$, with $L$ and $L'$ denoting the direction of the line in the two camera frames ($L' = RL$). It is well known that the rank of a nonzero $3 \times 3$ skew-symmetric matrix is 2. Therefore, when the rotation matrix $R = I_3$, we have $\operatorname{rank}(\Gamma) = \operatorname{rank}([\,b\,]_{\times}) = 2$; and not all $a_i$ are 0 if $R \neq I_3$. Assume $a_1 \neq 0$; after elementary row transformations, $\Gamma$ becomes the following:

$$\Gamma \rightarrow \begin{bmatrix} a_1 & 0 & -b_3 & b_2 \\ 0 & 1 & a_2/a_1 & a_3/a_1 \\ 0 & 0 & \gamma & 0 \\ 0 & 0 & 0 & \gamma \end{bmatrix} \tag{A2}$$

where $\gamma = b_1 + (a_3/a_1)\,b_3 + (a_2/a_1)\,b_2$. Substituting $a_i = L'_i - L_i$ and $b_i = L'_i + L_i$ into Equation (A2), we obtain the following:

$$\gamma = \frac{\lVert L' \rVert_2^2 - \lVert L \rVert_2^2}{L'_1 - L_1} = 0, \qquad \text{since } \lVert L' \rVert_2 = \lVert L \rVert_2 \tag{A3}$$

Therefore, if only one line is involved in the calculation, $\operatorname{rank}(\Gamma) = 2$.
Case 2: When two or more lines are involved in the calculation and they are all parallel, their blocks in $\Gamma$ coincide up to sign; hence, by the properties of the matrix rank, $\operatorname{rank}(\Gamma) = 2$.
Case 3: When two or more lines with different directions are involved, considering the case where $R = I_3$, it is clear that $\operatorname{rank}(\Gamma) = 3$.
Case 4: When two or more lines with different directions are involved and $R \neq I_3$, assuming $a_{i1} \neq 0$ and $a_{j1} \neq 0$, we have the following:

$$\Gamma \rightarrow \begin{bmatrix} 1 & 0 & -b_{i3}/a_{i1} & b_{i2}/a_{i1} \\ 0 & 1 & a_{i2}/a_{i1} & a_{i3}/a_{i1} \\ 1 & 0 & -b_{j3}/a_{j1} & b_{j2}/a_{j1} \\ 0 & 1 & a_{j2}/a_{j1} & a_{j3}/a_{j1} \\ & \mathbf{0}_{4\times4} & & \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 0 & -b_{i3}/a_{i1} & b_{i2}/a_{i1} \\ 0 & 1 & a_{i2}/a_{i1} & a_{i3}/a_{i1} \\ 0 & 0 & \psi_1 & \psi_2 \\ 0 & 0 & \phi_1 & \phi_2 \\ & \mathbf{0}_{4\times4} & & \end{bmatrix} \tag{A4}$$

where $\psi_1 = -b_{j3}/a_{j1} + b_{i3}/a_{i1}$, $\psi_2 = b_{j2}/a_{j1} - b_{i2}/a_{i1}$, $\phi_1 = a_{j2}/a_{j1} - a_{i2}/a_{i1}$, and $\phi_2 = a_{j3}/a_{j1} - a_{i3}/a_{i1}$. Because the lines involved are not parallel, $\phi_1$ and $\phi_2$ cannot both be zero. Setting $\psi_1 = 0$ and $\psi_2 = 0$, we have $a_{i1}/a_{j1} = b_{i2}/b_{j2} = b_{i3}/b_{j3} = \lambda$; expanding this yields the following:

$$\begin{cases} (\bar{r}_1 - e_1)^{T}\,(L_i - \lambda L_j) = 0 \\ (\bar{r}_2 + e_2)^{T}\,(L_i - \lambda L_j) = 0 \\ (\bar{r}_3 + e_3)^{T}\,(L_i - \lambda L_j) = 0 \end{cases} \tag{A5}$$

where $\bar{r}_i$ is the $i$-th row of $R$, and $e_i = [\delta_{i1}\ \delta_{i2}\ \delta_{i3}]^{T}$, with the following:

$$\delta_{ij} = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \neq j \end{cases} \tag{A6}$$

Equation (A5) can be written as follows:

$$R_e L_i = \lambda R_e L_j \tag{A7}$$

where $R_e = R - \operatorname{diag}(1, -1, -1)$. Clearly, if $R_{0,0} \neq 1$, the Euler angle about the $X$-axis is not 0 and $R_e$ is invertible, which implies $L_i = \lambda L_j$, meaning that $L_i$ and $L_j$ are parallel and contradicting the assumption. If the Euler angle about the $X$-axis is 0, it must hold that $a_{i1} = 0$ and $a_{j1} = 0$, which also contradicts the assumption. Therefore, $\psi_1$ and $\psi_2$ cannot both be 0, and thus $\operatorname{rank}(\Gamma) \geq 3$.
To prove that $\operatorname{rank}(\Gamma) \leq 3$, we need to show that $\det[\psi\ \ \phi] = 0$, that is, $\psi = A\phi$. Given $a_i = L'_i - L_i = (R - I)L_i$ and $b_i = L'_i + L_i = (R + I)L_i$, we have $b_i = M a_i$ and $b_j = M a_j$, where $M = (R + I)(R - I)^{-1} = [\bar{m}_1\ \bar{m}_2\ \bar{m}_3]^{T}$, with $R \neq I_3$. Thus, we obtain the following:

$$\psi = \begin{bmatrix} -\dfrac{b_{j3}}{a_{j1}} + \dfrac{b_{i3}}{a_{i1}} \\[4pt] \dfrac{b_{j2}}{a_{j1}} - \dfrac{b_{i2}}{a_{i1}} \end{bmatrix} = \frac{1}{a_{i1}a_{j1}} \begin{bmatrix} -a_{i1}b_{j3} + b_{i3}a_{j1} \\ a_{i1}b_{j2} - b_{i2}a_{j1} \end{bmatrix} = \frac{1}{a_{i1}a_{j1}} \begin{bmatrix} -a_{i1}\bar{m}_3^{T}a_j + a_{j1}\bar{m}_3^{T}a_i \\ a_{i1}\bar{m}_2^{T}a_j - a_{j1}\bar{m}_2^{T}a_i \end{bmatrix} = \frac{1}{a_{i1}a_{j1}} \begin{bmatrix} -m_{32} & -m_{33} \\ m_{22} & m_{23} \end{bmatrix} \begin{bmatrix} a_{i1}a_{j2} - a_{i2}a_{j1} \\ a_{i1}a_{j3} - a_{i3}a_{j1} \end{bmatrix} = A\phi \tag{A8}$$

Therefore, under the conditions of the problem, $\operatorname{rank}(\Gamma) = 3$. Because $L_i$ and $L_j$ are unit vectors, their elements cannot all be zero, and it can be verified that when $a_{im} \neq 0$ and $a_{jn} \neq 0$, we still have $\operatorname{rank}(\Gamma) = 3$. □
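As a numerical sanity check on the rank conditions above, Γ can be assembled by stacking one 4 × 4 block per line correspondence, using a = L′ − L and b = L′ + L as in Equation (A1), and its rank inspected for the configurations of Cases 1–4. The script below is a hypothetical verification sketch and is not part of the reported experiments.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def gamma_block(L, Lp):
    """One 4x4 block of Gamma for a line with direction L in the first frame
    and L' = R @ L in the second frame (a = L' - L, b = L' + L)."""
    a, b = Lp - L, Lp + L
    G = np.zeros((4, 4))
    G[0, 1:] = a
    G[1:, 0] = a
    G[1:, 1:] = skew(b)
    return G

def rank_gamma(R, lines):
    """Stack the blocks of all line correspondences and return rank(Gamma)."""
    lines = [l / np.linalg.norm(l) for l in lines]
    G = np.vstack([gamma_block(L, R @ L) for L in lines])
    return np.linalg.matrix_rank(G, tol=1e-9)

# Rotation of 30 degrees about the Z axis (R != I3).
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

print(rank_gamma(R, [np.array([1.0, 0.0, 0.0])]))                             # one line: 2
print(rank_gamma(R, [np.array([1.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]))  # parallel lines: 2
print(rank_gamma(R, [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]))  # non-parallel lines: 3
```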

References

  1. Xu, G.; He, C.; Chen, Z.; Liu, C.; Wang, B.; Zou, Y. Mechanical behavior of secondary tunnel lining with longitudinal crack. Eng. Fail. Anal. 2020, 113, 104543. [Google Scholar] [CrossRef]
  2. Zhang, N.; Zhu, X.; Ren, Y. Analysis and Study on Crack Characteristics of Highway Tunnel Lining. Civ. Eng. J. 2019, 5, 1119–1123. [Google Scholar] [CrossRef]
  3. 2015-16896; National Tunnel Inspection Standards. Federal Highway Administration: Washington, DC, USA, 2015.
  4. FHWA-HIF-15-005; Tunnel Operations, Maintenance, Inspection, and Evaluation (TOMIE) Manual. Federal Highway Administration: Washington, DC, USA, 2015.
  5. Xiao, J.Z.; Dai, F.C.; Wei, Y.Q.; Min, H.; Xu, C.; Tu, X.B.; Wang, M.L. Cracking mechanism of secondary lining for a shallow and asymmetrically-loaded tunnel in loose deposits. Tunn. Undergr. Space Technol. 2014, 43, 232–240. [Google Scholar] [CrossRef]
  6. Tuchiya, Y.; Kurakawa, T.; Matsunaga, T.; Kudo, T. Research on the Long-Term Behaviour and Evaluation of Lining Concrete of the Seikan Tunnel. Soils Found. 2009, 49, 969–980. [Google Scholar] [CrossRef]
  7. Lei, M.; Peng, L.; Shi, C.; Wang, S. Experimental study on the damage mechanism of tunnel structure suffering from sulfate attack. Tunn. Undergr. Space Technol. 2013, 36, 5–13. [Google Scholar] [CrossRef]
  8. Kaise, S.; Maegawa, K.; Ito, T.; Yagi, H.; Shigeta, Y.; Maeda, K.; Shinji, M. Study of the image photographing of the tunnel lining as an alternative method to proximity visual inspection. In Tunnels and Underground Cities: Engineering and Innovation Meet Archaeology, Architecture and Art; CRC Press: Boca Raton, FL, USA, 2019; pp. 2325–2334. [Google Scholar]
  9. Rosso, M.M.; Marasco, G.; Aiello, S.; Aloisio, A.; Chiaia, B.; Marano, G.C. Convolutional networks and transformers for intelligent road tunnel investigations. Comput. Struct. 2023, 275, 106918. [Google Scholar] [CrossRef]
  10. Zhou, Z.; Yan, L.; Zhang, J.; Yang, H. Real-time tunnel lining crack detection based on an improved You Only Look Once version X algorithm. Georisk Assess. Manag. Risk Eng. Syst. Geohazards 2023, 17, 181–195. [Google Scholar] [CrossRef]
  11. Zhou, Z.; Zhang, J.; Gong, C.; Wu, W. Automatic tunnel lining crack detection via deep learning with generative adversarial network-based data augmentation. Undergr. Space 2023, 9, 140–154. [Google Scholar] [CrossRef]
  12. Pandey, A.; Pati, U.C. Image mosaicing: A deeper insight. Image Vis. Comput. 2019, 89, 236–257. [Google Scholar] [CrossRef]
  13. Ghosh, D.; Kaabouch, N. A survey on image mosaicing techniques. J. Vis. Commun. Image Represent. 2016, 34, 1–11. [Google Scholar] [CrossRef]
  14. Wang, Z.; He, L.; Li, T.; Tao, J.; Hu, C.; Wang, M. Tunnel Image Stitching Based on Geometry and Features; Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2020; p. 012013. [Google Scholar]
  15. ZOYON. ZOYON-TFS. Available online: http://en.zoyon.com.cn/index.php/list/49.html (accessed on 25 June 2025).
  16. Du, M.; Fan, J.; Huang, Y.; Cao, M. Mosaicking of mountain tunnel images guided by laser rangefinder. Autom. Constr. 2021, 127, 103708. [Google Scholar] [CrossRef]
  17. Liao, J.; Yue, Y.; Zhang, D.; Tu, W.; Cao, R.; Zou, Q.; Li, Q. Automatic tunnel crack inspection using an efficient mobile imaging module and a lightweight CNN. IEEE Trans. Intell. Transp. Syst. 2022, 23, 15190–15203. [Google Scholar] [CrossRef]
  18. Tian, L.; Li, Q.; He, L.; Zhang, D. Image-Range Stitching and Semantic-Based Crack Detection Methods for Tunnel Inspection Vehicles. Remote Sens. 2023, 15, 5158. [Google Scholar] [CrossRef]
  19. Tjgeo. Tjgeo Tunnel Inspection Vehicle TDV-H. Available online: https://www.cnssce.org/29/201901/2199.html (accessed on 25 June 2025).
  20. Liu, X.; Li, Y.; Xue, C.; Liu, B.; Duan, Y. Optimal modeling and parameter identification for visual system of the road tunnel detection vehicle. Chin. J. Sci. Instrum. 2018, 39, 152–160. [Google Scholar]
  21. Keisokukensa Co., Ltd. MIMM. Available online: https://www.keisokukensa.co.jp/english#ttl-navi05 (accessed on 25 June 2025).
  22. Tonox Co., Ltd. Tonox TC2. Available online: http://www.tonox.com/tunnel.html (accessed on 25 June 2025).
  23. Ricoh. Tunnel Monitoring System Visualizing the Condition of Tunnels in Order to Keep Such Social Infrastructure Safe. Available online: https://www.ricoh.com/technology/tech/087_tunnel_monitoring (accessed on 25 June 2025).
  24. NEXCO. Smart-EAGLE Type-T (Tunnel). Available online: https://www.w-e-shikoku.co.jp/product/product-429/ (accessed on 25 June 2025).
  25. NEXCO. Road L&L System. Available online: https://www.w-e-shikoku.co.jp/product/product-424/ (accessed on 25 June 2025).
  26. Takahiro Osaki, T.K. Tomohiko Masuda, Development of Run Type High-resolution Image Measurement System (Tunnel Tracer). Robot. Soc. Jpn. 2016, 34, 591–592. [Google Scholar] [CrossRef]
  27. Kim, I.; Lee, C. Development of video shooting system and technique enabling detection of micro cracks in the tunnel lining while driving. J. Korean Soc. Hazard Mitig. 2018, 18, 217–229. [Google Scholar] [CrossRef]
  28. Kim, C.N.; Kawamura, K.; Shiozaki, M.; Tarighat, A. An image-matching method based on the curvature of cost curve for producing tunnel lining panorama. J. JSCE 2018, 6, 78–90. [Google Scholar] [CrossRef]
  29. Nguyen, C.; Kawamura, K.; Shiozaki, M.; Tarighat, A. Development of an Automatic Crack Inspection System for Concrete Tunnel Lining Based on Computer Vision Technologies; IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2018; p. 012015. [Google Scholar]
  30. Alpha-Product. FOCUSα-T for Tunnel. Available online: https://www.alpha-product.co.jp/focus (accessed on 25 June 2025).
  31. Zou, L.; Huang, Y.; Li, Y.; Chen, Y. Tunnel linings inspection using Bayesian-Optimized (BO) calibration of multiple Line-Scan Cameras (LSCs) and a Laser Range Finder (LRF). Tunn. Undergr. Space Technol. 2024, 147, 105653. [Google Scholar] [CrossRef]
  32. Wang, H.; Wang, Q.; Zhai, J.; Yuan, D.; Zhang, W.; Xie, X.; Zhou, B.; Cai, J.; Lei, Y. Design of Fast Acquisition System and Analysis of Geometric Feature for Highway Tunnel Lining Cracks Based on Machine Vision. Appl. Sci. 2022, 12, 2516. [Google Scholar] [CrossRef]
  33. Pahwa, R.S.; Leong, W.K.; Foong, S.; Leman, K.; Do, M.N. Feature-less stitching of cylindrical tunnel. arXiv 2018, arXiv:1806.10278. [Google Scholar]
  34. Pahwa, R.S.; Chan, K.Y.; Bai, J.; Saputra, V.B.; Do, M.N.; Foong, S. Dense 3D reconstruction for visual tunnel inspection using unmanned aerial vehicle. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 7025–7032. [Google Scholar]
  35. Jiang, Y.; Zhang, X.; Taniguchi, T. Quantitative condition inspection and assessment of tunnel lining. Autom. Constr. 2019, 102, 258–269. [Google Scholar] [CrossRef]
  36. Toru Yasuda, H.Y. Yoshiyuki Shigeta, Tunnel Inspection System by using High-speed Mobile 3D Survey Vehicle: MIMM-R. J. Robot. Soc. Jpn. 2016, 34, 589–590. [Google Scholar] [CrossRef]
  37. Yasuda, T.; Yamamoto, H.; Enomoto, M.; Nitta, Y. Smart Tunnel Inspection and Assessment using Mobile Inspection Vehicle, Non-Contact Radar and AI. In Proceedings of the International Symposium on Automation and Robotics in Construction, Kitakyushu, Japan, 27–28 October 2020; (ISARC 2020). IAARC Publications: Oulu, Finland, 2020; pp. 1373–1379. [Google Scholar]
  38. Guo, H.; Mohanty, A.; Ding, Q.; Wang, T.T. Image mosaicking of a section of a tunnel lining and the detection of cracks through the frequency histogram of connected elements concept. In Proceedings of the 2012 International Workshop on Image Processing and Optical Engineering, Harbin, China, 9–10 January 2012; SPIE: Bellingham, WA, USA, 2012. [Google Scholar]
  39. Lee, C.-H.; Chiu, Y.-C.; Wang, T.-T.; Huang, T.-H. Application and validation of simple image-mosaic technology for interpreting cracks on tunnel lining. Tunn. Undergr. Space Technol. 2013, 34, 61–72. [Google Scholar] [CrossRef]
  40. Tu, Y.; Song, Y.; Liu, F.; Zhou, Y.; Li, T.; Zhi, S.; Wang, Y. An Accurate and Stable Extrinsic Calibration for a Camera and a 1D Laser Range Finder. IEEE Sens. J. 2022, 22, 9832–9842. [Google Scholar] [CrossRef]
  41. Li, Y.; Ja, W.; Chen, P.; Wang, X.; Xu, M.; Xie, Z. Extrinsic calibration of non-overlapping multi-camera system with high precision using circular encoded point ruler. Opt. Lasers Eng. 2024, 174, 107927. [Google Scholar] [CrossRef]
  42. Van Crombrugge, I.; Penne, R.; Vanlanduit, S. Extrinsic camera calibration for non-overlapping cameras with Gray code projection. Opt. Lasers Eng. 2020, 134, 106305. [Google Scholar] [CrossRef]
  43. Yang, T.; Zhao, Q.; Zhou, Q.; Huang, D. Global Calibration of Multi-camera Measurement System from Non-overlapping Views. In Artificial Intelligence and Robotics; Springer: Cham, Switzerland, 2018; pp. 183–191. [Google Scholar]
  44. Yang, T.; Zhao, Q.; Wang, X.; Huang, D. Accurate calibration approach for non-overlapping multi-camera system. Opt. Laser Technol. 2019, 110, 78–86. [Google Scholar] [CrossRef]
  45. Liu, Z.; Li, F.; Zhang, G. An external parameter calibration method for multiple cameras based on laser rangefinder. Measurement 2014, 47, 954–962. [Google Scholar] [CrossRef]
  46. Farias, J.G.; De Pieri, E.D.; Martins, D. A Review on the Applications of Dual Quaternions. Machines 2024, 12, 402. [Google Scholar] [CrossRef]
  47. Rooney, J. A comparison of representations of general spatial screw displacement. Environ. Plan. B Plan. Des. 1978, 5, 45–88. [Google Scholar] [CrossRef]
  48. Pennestrì, E.; Stefanelli, R. Linear algebra and numerical algorithms using dual numbers. Multibody Syst. Dyn. 2007, 18, 323–344. [Google Scholar] [CrossRef]
  49. Dekel, A.; Harenstam-Nielsen, L.; Caccamo, S. Optimal least-squares solution to the hand-eye calibration problem. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; IEEE: Seattle, WA, USA, 2020; pp. 13598–13606. [Google Scholar]
  50. Yu, Q.; Xu, G.; Cheng, Y. An efficient and globally optimal method for camera pose estimation using line features. Mach. Vis. Appl. 2020, 31, 48. [Google Scholar] [CrossRef]
  51. Faugeras, O. Three-Dimensional Computer Vision, a Geometric Viewpoint; MIT Press: Cambridge, MA, USA, 1993. [Google Scholar]
  52. Přibyl, I.B. Camera Pose Estimation from Lines Using Direct Linear Transformation. Ph.D. Thesis, Faculty of Information Technology, Brno University of Technology, Brno, Czech Republic, 2017. [Google Scholar]
  53. Hu, C.; Wang, G.; Ho, K.; Liang, J. Robust ellipse fitting with Laplacian kernel based maximum correntropy criterion. IEEE Trans. Image Process. 2021, 30, 3127–3141. [Google Scholar] [CrossRef] [PubMed]
  54. Jia, Y.-B. Dual Quaternions; Iowa State University: Ames, IA, USA, 2013. [Google Scholar]
  55. Lepetit, V.; Moreno-Noguer, F.; Fua, P. EPnP: An accurate O (n) solution to the PnP problem. Int. J. Comput. Vis. 2009, 81, 155–166. [Google Scholar] [CrossRef]
  56. Brown, M.; Lowe, D.G. Automatic Panoramic Image Stitching using Invariant Features. Int. J. Comput. Vis. 2006, 74, 59–73. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the working principle of the RTL measurement system (using the side lining measurement system for illustration). O s X s Y s Z s represents the coordinate system of the measurement system and O c i X c i Y c i Z c i represents the coordinate frame of the i -th camera.
Figure 2. External views of the tunnel lining measurement system. (a) Overall appearance of the measurement system; (b) schematic diagram of the measurement system’s operation process; (c) side view of the measurement system; (d) rear view of the measurement system.
Figure 3. System architecture. (a) Key assemblies; (b) sensor-data acquisition framework.
Figure 4. Ideal measurement model. (a) Schematic diagram of the system. (b) Side view of the system. O s X s Y s Z s is the coordinate frame of the measurement system, X 1 and X 2 are the coordinates of the laser spots in object space, x is the longitudinal coordinate of the spot in the retina plane, O c is the optical center of the camera, θ is the angle between the collimated laser and the camera’s optical axis, p is the principal distance of the lens, A is the baseline distance of the system, and B is the distance from O c to the objective surface.
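Given the calibrated laser geometry, the three-dimensional position of a laser spot, and hence the distance B in this model, can be recovered by intersecting the camera ray through the detected pixel with the known laser line. The sketch below is a minimal illustration under a pinhole model; the function name, the parameter layout, and the assumption that the laser line is supplied as a point plus a unit direction in the camera frame are ours rather than the paper's exact formulation.

```python
import numpy as np

def spot_3d_from_pixel(u, v, fx, fy, cx, cy, q_l, d_l):
    """Back-project pixel (u, v) through a pinhole camera and intersect the
    resulting viewing ray with a collimated-laser line given by a point q_l
    and a unit direction d_l, both expressed in the camera frame. Returns the
    midpoint of the common perpendicular, which coincides with the exact
    intersection when the spot detection is noise-free."""
    d_r = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    d_r /= np.linalg.norm(d_r)      # viewing-ray direction
    q_r = np.zeros(3)               # the ray starts at the optical center

    n = np.cross(d_r, d_l)          # direction of the common perpendicular
    if np.linalg.norm(n) < 1e-12:
        raise ValueError("laser line is parallel to the viewing ray")
    t = np.dot(np.cross(q_l - q_r, d_l), n) / np.dot(n, n)
    s = np.dot(np.cross(q_l - q_r, d_r), n) / np.dot(n, n)
    return 0.5 * ((q_r + t * d_r) + (q_l + s * d_l))
```

For a noise-free detection the two lines intersect exactly, and the Z component of the returned point corresponds to the object distance B in the model above.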
Figure 5. Flowchart of tunnel lining image stitching.
Figure 6. Fast search for laser spots based on imaging geometric constraints. L is the Plücker coordinate of the laser, X is a point on L , Π 1 and Π 2 are the image planes of cameras c 1 and c 2 , u 1 and u 2 are the projections of X on Π 1 and Π 2 , l 1 and l 2 are the projections of L on Π 1 and Π 2 , e 1 and e 2 are the epipoles, and l e p 1 and l e p 2 are the epipolar lines.
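The constraint illustrated in Figure 6 reduces the search for the corresponding laser spot to a small window around a single predicted point: the epipolar line of u 1 in the second image is intersected with the image projection l 2 of the laser. The sketch below assumes that the fundamental matrix F and the line l 2 are available from the system calibration; the function name and input layout are illustrative.

```python
import numpy as np

def predict_spot_in_second_image(u1, F, l2):
    """Predict where the laser spot detected at pixel u1 in camera 1 should
    appear in camera 2: intersect the epipolar line l_ep2 = F @ u1 with the
    projection l2 of the laser line in image 2 (both in homogeneous line
    coordinates), so only a small window around this point must be searched."""
    u1_h = np.array([u1[0], u1[1], 1.0])
    l_ep2 = F @ u1_h                 # epipolar line of u1 in image 2
    x = np.cross(l_ep2, l2)          # intersection of the two image lines
    if abs(x[2]) < 1e-12:
        raise ValueError("epipolar line and laser projection are parallel")
    return x[:2] / x[2]              # pixel coordinates of the predicted spot
```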
Figure 7. The two-step camera calibration method.
Figure 8. Gaussian vs. Laplacian kernels ( σ = 1 ).
Figure 9. DLM simulation diagram.
Figure 10. Picture of experimental platform.
Figure 11. Distribution of NCCPs and CCPs involved in calculation. (a) Control points in c1-Frame; (b) control points in c2-Frame (The red dashed lines in the figure represent the connections between the NCCPs).
Figure 12. Epipolar projection on image.
Figure 13. Histogram of reprojection errors for NCCP–Plücker coordinates. (a) EPNP; (b) DQ-LS; (c) DLM-MSR; (d) DLM-SR; (e) DM. In every sub-panel, the left histogram shows the reprojection error of the line direction vectors, while the right histogram shows the reprojection error of the line moment vectors.
Figure 14. Outdoor experiment site.
Figure 15. Typical experiment images. (a) Frame captured by the wide-FOV camera; (b,c) frames captured by two narrow-FOV cameras; (d) projection of the laser spot trajectories in image space; (e) three-dimensional positions of the laser spots in the corresponding camera frame. In panels (c,d) the spot sets are numbered clockwise, starting from the upper-left laser, and are color-coded red → green → blue → orange.
Figure 16. Re-projection of control points in three-dimensional space. (a) Result obtained with the DLM-MSR/DLM algorithms; (b) result obtained with the LS algorithm. The color of each laser spot cluster is identical to that used in Figure 15.
Figure 17. Object–space projection of laser and camera image. (a) Camera 1 image and the projection of the laser in the c 1 -Frame; (b) Camera 2 image and the projection of the laser in the c 2 -Frame; (c) the projection of both camera images and lasers in the c 2 -Frame (In the figure, the green lines denote the laser beams expressed in the c 1 -frame, whereas the red lines denote the beams expressed in the c 2 -frame).
Figure 18. Experimental site.
Figure 19. Tunnel lining image stitching results. (a) Raw images; (b) stitched image; (c) magnified view of Rect 1; (d) magnified view of Rect 2.
Figure 20. SIFT + RANSAC image registration results. (a1–a4) Images captured by Camera 5; (b1–b4) images captured by Camera 6.
Figure 21. Comparison between ICE and the proposed method. (a) Mosaic produced by ICE; (b) mosaic produced by the proposed laser-assisted approach.
Table 1. Representative RICs.

| RIC System | Camera Type | Illumination | Auxiliary Sensor | Manufacturer Location |
|---|---|---|---|---|
| ZOYON TFS [14,15,16,17,18] | ASC | LED | Lidar | China |
| tjgeo TDV-H [19] | ASC | LED | Lidar | China |
| TiDS [20] | LSC | LSL | Lidar | China |
| Keisokukensa Co., MIMM-R [21] | ASC | LED | Lidar | Japan |
| Tonox TC-2 [22] | LSC | LSL | \ | Japan |
| Ricoh TMS [23] | LSC | LSL | \ | Japan |
| NEXCO Smart-EAGLE [24,25] | LSC | LED | \ | Japan |
| Tunnel Tracer [26] | ASC | LED | \ | Japan |
| Kim’s [27] | ASC | LED | \ | South Korea |
| Nguyen’s [28,29] | ASC | LED | \ | Japan |
| Alpha-product FOCUSα-T [30] | ASC | LED | Collimated lasers | Japan |
| Zou’s [31] | LSC | LED | Lidar | China |
| Tongji University’s [32] | ASC | LED | Lidar | China |

Remark: ASC represents area-scanning camera, LSC represents line-scanning camera, LSL represents line-scanning laser, LED represents light-emitting diode. The symbol “\” indicates that the device is not equipped with the corresponding component.
Table 2. Main component specifications.

| Component Type | Model | Key Parameters | Quantity |
|---|---|---|---|
| Collimated laser | 520 nm collimated laser | Output power 70 mW | Top: 22; Side: 44 |
| LED strobe module | In-house design | 18 × 18 W LED chips per module | Top: 120; Side: 160 |
| Frequency divider | In-house design | FPGA: Altera EPF10K20TC144-4 | Single |
| Server computer | Advantech AIIS-3410U | Intel i7-6700 CPU, 8 GB RAM | Top: 3; Side: 8 |
| Narrow-FOV camera | Basler acA2440-75 um/uc | 2440 × 2048 px, 3.45 µm pixel; lens focal length f = 50 mm (side), f = 75 mm (top) | Top: 11 (Mono); Side: 21 (Color) |
| Wide-FOV camera | Basler acA2440-75 uc | Same sensor as above; lens focal length f = 8 mm | Top: 1; Side: 3 |
Table 3. Simulation result.

| Algorithm | Metric | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Group 6 |
|---|---|---|---|---|---|---|---|
| DQ-LS | EEA | 0.0063 | 0.0063 | 0.0080 | 0.0093 | 0.0112 | 0.0125 |
| | ET | 12.28 | 16.61 | 21.32 | 25.22 | 30.60 | 34.13 |
| | CT | 0.67 | 0.76 | 0.69 | 0.70 | 0.69 | 0.67 |
| DLM-MSR | EEA | 0.0025 | 0.0026 | 0.0030 | 0.0031 | 0.0035 | 0.0036 |
| | ET | 3.55 | 3.71 | 4.11 | 4.38 | 4.83 | 5.18 |
| | IC | 3.39 | 3.57 | 3.76 | 3.88 | 3.99 | 3.84 |
| | CT | 4.88 | 5.42 | 5.42 | 5.52 | 4.93 | 4.34 |
| DLM-SR | EEA | 0.0029 | 0.0030 | 0.0031 | 0.0034 | 0.0038 | 0.0040 |
| | ET | 3.99 | 4.2203 | 4.5065 | 4.7390 | 5.2032 | 5.6193 |
| | IC | 10.71 | 10.33 | 10.27 | 10.03 | 10.73 | 10.12 |
| | CT | 10.86 | 12.56 | 11.34 | 12.35 | 11.16 | 11.62 |
| DM | EEA | 0.0025 | 0.0026 | 0.0030 | 0.0032 | 0.0035 | 0.0037 |
| | ET | 3.56 | 3.74 | 4.18 | 4.51 | 4.97 | 5.30 |
| | CT | 2.12 | 2.34 | 2.11 | 2.18 | 2.03 | 2.03 |

Remark: EEA denotes estimation error of Euler angle (L2-norm, expressed in radians); ET denotes estimation error of translation (L2-norm); IC denotes iteration count; CT denotes computation time (expressed in seconds). Each cell reports the average over all trials.
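For reference, the EEA and ET metrics reported above can be computed as follows under one plausible reading of their definitions; the Euler-angle convention ("zyx") and the use of SciPy are assumptions made for this sketch only.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def eea_et(R_est, t_est, R_true, t_true):
    """EEA: L2 norm of the Euler-angle difference in radians (convention assumed);
    ET: L2 norm of the translation difference."""
    e_est = Rotation.from_matrix(R_est).as_euler("zyx")
    e_true = Rotation.from_matrix(R_true).as_euler("zyx")
    eea = np.linalg.norm(e_est - e_true)
    et = np.linalg.norm(np.asarray(t_est) - np.asarray(t_true))
    return eea, et
```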
Table 4. Camera pose estimation results obtained by different methods.

| Algorithm | Euler Angle/[10⁻⁵ rad] | Displacement/[mm] | Mean CCP Reprojection Error/[mm] | Epipolar Constraint Error RMS/[pix] |
|---|---|---|---|---|
| EPNP | [91.09, −1628.02, −7701.98] | [315.96, −7.48, 77.16] | 20.79 | 0.770 |
| DQ-LS | [54.38, −9.426, −6792.29] | [271.49, 19.70, 58.32] | 10.13 | 0.943 |
| DLM-MSR | [174.278, −69.579, −6781.41] | [271.24, 20.04, 62.71] | 10.24 | 0.944 |
| DLM-SR | [4515.932, −0.606, −6808.39] | [310.26, −63.43, −5777.59] | 5835.96 | 6.29 |
| DM | [222.435, −69.184, −6781.95] | [271.26, 20.13, 62.70] | 10.24 | 0.944 |
| Design Value | [0, 0, 0] | [280, 0, 0] | | |

Remark: The CCP reprojection error is computed using the L2 norm.
Table 5. Pose estimation results.

| Algorithm | Euler Angles (Yaw, Pitch, Roll)/[degree] | Translation/[mm] |
|---|---|---|
| DLM-MSR | (−2.309, 0.140, −3.514); (3.131, −0.272, −4.981) | (21.82, 120.50, 258.43); (−1.01, 117.72, −373.73) |
| DM | (−2.309, 0.143, −3.514); (3.131, −0.272, −4.981) | (21.82, 120.50, 258.43); (−1.01, 117.72, −373.73) |
| DQ-LS | (−2.486, 4.345, −3.530); (3.288, −4.677, −5.113) | (32.15, 121.58, 318.20); (15.97, 117.78, −410.87) |
| EPNP | (1.769, −11.624, 3.805); (−14.490, −22.829, 40.962) | (888.51, 96.96, −312.61); (1370.40, 1412.53, 1317.17) |
| Design value | (0, 0, −4.89); (0, 0, −4.95) | |

Remark: In each cell, the first tuple corresponds to the 29/30 camera pair and the second tuple to the 30/31 pair.
Table 6. Comparison of re-projection errors.

| Algorithm | Laser Point Sets 1/4 MAE/[mm] | Laser Point Sets 1/4 RMS/[mm] | Laser Point Sets 2/3 MAE/[mm] | Laser Point Sets 2/3 RMS/[mm] |
|---|---|---|---|---|
| DLM-MSR | (36.46, 19.85) | (36.50, 19.92) | (19.91, 8.03) | (19.92, 8.10) |
| DM | (37.36, 19.87) | (37.40, 19.94) | (20.98, 8.17) | (20.99, 8.18) |

Remark: In each cell the first value corresponds to the 29/30 camera pair; the second to the 30/31 pair.
Table 7. Camera and laser parameters.

| Camera | Intrinsic Parameters (k_x, k_y, u_0, v_0, k_1, k_2) | Laser Plücker Coordinates | Camera Pose (Euler Angle and Translation) |
|---|---|---|---|
| Camera 1 | (21,907.45, 21,862.22, 1242.68, 1042.23, 0.090, 1.028) | L11 = (0.017, 0.022, 0.99, 80.39, 216.73, 6.38); L12 = (0.0029, 0.030, 0.99, 75.70, 162.56, 5.18); L13 = (0.029, 0.017, 0.99, 76.68, 201.32, 7.99); L14 = (0.036, 0.021, 0.99, 80.27, 178.50, 6.47) | θ = (0.0465, 0.0209, 0.0056); t = (1.66, 159.90, 1.43) |
| Camera 2 | (21,964.88, 21,919.94, 1242.97, 1009.57, 0.0150, 0.606) | L21 = (0.0019, 0.018, 0.99, 83.21, 202.03, 5.07); L22 = (0.051, 0.069, 0.99, 80.50, 177.54, 7.80); L23 = (0.0030, 0.027, 0.99, 79.42, 215.73, 6.19); L24 = (0.0027, 0.020, 0.99, 85.21, 165.07, 3.11) | |