Article

Research on a Rapid Image Stitching Method for Tunneling Front Based on Navigation and Positioning Information

1 School of Mechanical and Electrical Engineering, China University of Mining & Technology-Beijing, Beijing 100083, China
2 Huadian Coal Industry Group Digital Intelligence Technology Co., Ltd., Beijing 102488, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(10), 3023; https://doi.org/10.3390/s25103023
Submission received: 11 April 2025 / Revised: 5 May 2025 / Accepted: 8 May 2025 / Published: 10 May 2025
(This article belongs to the Section Intelligent Sensors)

Abstract

To address the challenges posed by significant parallax, dynamic changes in monitoring camera positions, and the need for rapid wide-field image stitching in underground coal mine tunneling faces, this paper proposes a fast image stitching method for tunneling face images based on navigation and positioning data. First, using a pixel-based calculation approach, the tunneling face scene is partitioned into the cutting section and the ground, enhancing the reliability of scene segmentation. Then, the spatial distance between the camera and the cutting plane is computed based on the tunneling machine’s navigation and positioning data, and a plane-induced homography model is employed to efficiently determine the dynamic transformation matrix of the cutting section. Finally, the Dual-Homography Warping (DHW) method is applied to achieve fast panoramic image stitching of the tunneling face. Comparative experiments with three classical stitching methods, SURF, SIFT, and BRISK, demonstrate that the proposed method reduces stitching time by 60%. Field experiments in underground environments verify that this method can generate a complete panoramic stitched image of the tunneling face, providing an unobstructed perspective beyond the machine body and cutting head to clearly observe the shovel plate and surrounding ground conditions, significantly enhancing the visibility and convenience of remote operation.

1. Introduction

As the front line of coal mining operations, tunneling faces present significant safety risks. The implementation of intelligent remote control systems has become crucial for enhancing operational safety [1,2]. These systems rely heavily on visual data, where images and videos serve as core perception tools, enabling remote monitoring and control in coal mines. A critical challenge lies in monitoring key areas such as the cutting cross-section and the ground near the shovel plate. Traditional camera setups, dispersed across machinery [3,4,5], only capture fragmented views, failing to provide a comprehensive visual representation of these regions. This limitation hampers intuitive remote operation and impedes the practical deployment of intelligent systems [6]. Stitching together the images captured by multiple cameras, in particular merging the cutting-face image with the ground image of the shovel (scoop) board area, allows workers to observe not only the cutting face but also, as if looking through the obstructions of the machine body and the cutting head, the shovel board and the surrounding ground. This capability provides powerful support for the intelligentization of roadheading (tunneling) operations.
Traditional image stitching methods like SURF, SIFT, and BRISK all rely on detecting key points in an image, extracting distinctive features from them, and matching these features across images to align and stitch them together. SURF (Speeded-Up Robust Features) and SIFT (Scale-Invariant Feature Transform) both focus on detecting key points that are invariant to scale, rotation, and illumination changes, making them robust for matching features in various conditions. BRISK (Binary Robust Invariant Scalable Keypoints), on the other hand, is a faster alternative that uses binary descriptors for feature matching, offering efficient performance while maintaining robustness in feature detection. These methods all aim to find overlapping areas between images and seamlessly combine them into a single, larger image by using matched key points and geometric transformations.
Building on these classic algorithms, many research groups have extended work on image stitching. Ren Wei proposed installing a multi-lens panoramic camera at the front end of a roadheader in an underground coal mine to acquire large-field-of-view and high-resolution images of the roadheading face. However, due to obstructions from the machine body and the cutting head, the camera is unable to observe the ground area near the scoop board, resulting in blind spots in the field of view [7]. To obtain complete images of both the cutting face and the ground, it is necessary to install cameras on both sides and in the middle of the roadheader for image stitching. Nevertheless, significant installation position deviations among these three cameras make it difficult to avoid large parallax issues during stitching [8]. To address the problem of large-parallax image stitching, Gao et al. proposed the DHW (Dual-Homography Warping) method, which segments the scene image into front and back planes and employs separate transformation matrices for each plane, combined with weighting factors for stitching alignment [9]. However, this method relies on scenarios where the camera and scene positions are relatively fixed. During coal mine tunneling, the operational equipment carrying the camera is constantly moving, causing the distance between the front and back planes in the scene image to change dynamically, making the traditional DHW method difficult to apply. For dynamic scenes within coal mine roadways, Zhang Kailong proposed online calibration of camera extrinsic parameters based on image feature points, combined with a 3D model of the equipment, to dynamically segment the front and back planes of the image [10]. However, the actual working conditions in coal mine roadways are complex, with poor lighting and severe dust interference, making it difficult to obtain sufficient high-quality feature points and thereby reducing the reliability of plane segmentation. Zhang Xuhui et al. proposed using image enhancement, feature point matching, and optimal seamline methods to calculate the transformation matrices for the front and back planes and to address misalignment between them, overcoming the impact of low lighting and heavy dust underground. However, their stitching process takes nearly one second, which makes it difficult to meet the dynamic demands of practical applications [11].
Currently popular stitching algorithms often face issues like slow feature point matching and limitations in fixed scene applications. However, during the tunneling process, both the scene and depth of field change in real time, which makes these traditional algorithms unsuitable. To address the specific requirements of large parallax, dynamic changes in the positions of monitoring cameras, and rapid stitching of large-field-of-view images at tunneling faces in underground coal mines, this paper proposes a rapid stitching method for tunneling face images based on navigation and positioning information. Initially, based on the known coordinates of the cutting face and the ground demarcation line within the tunnel coordinate system, their corresponding pixel positions in the camera image are calculated, and the tunneling face image is subsequently divided into two planes accordingly. Furthermore, utilizing the navigation and positioning information of the roadheader, the distance between the camera and the cutting face is derived, and a dynamic transformation matrix for the cutting face is computed based on a plane-induced homography model. Finally, combined with the DHW method, rapid stitching of the cutting face images is achieved.

2. Segmentation and Registration of Tunneling Face

2.1. Calculation of Image Dynamic Segmentation Line

The navigation and positioning device for the roadheader consists of two parts: a laser guidance device suspended from the tunnel roof at the rear end of the roadheader and a pose measurement device installed on the roadheader. The origin OL of the tunnel coordinate system is located on the laser guidance device, with the X, Y, and Z directions of the coordinate system representing the lateral, tunneling, and height directions of the tunnel, respectively. The entire navigation and positioning device provides the heading angle, pitch angle, and roll angle of the roadheader body, as well as the coordinates of the roadheader in the tunnel coordinate system [12].
To obtain surveillance images of the tunneling face, three cameras are installed on the left and right sides at the front end of the roadheader and near the driver’s position in the middle, respectively. As shown in Figure 1, the multi-camera system covers the entire cutting face and ground area through its spatially distributed fields of view.
In an actual tunnel, the boundary between the ground and the cutting face approximates an ideal straight line, and the coordinates of the endpoints P1 and P2 of this segmentation line are known in the tunnel coordinate system OL. If we calculate the imaging pixels of P1 and P2 in the cameras, the line connecting these pixels can serve as the segmentation line between the cutting face and the ground in the image.
As shown in Figure 2, taking the middle camera as an example, a calibration board is installed at the tunnel heading, and the coordinates of each corner point on the calibration board in the tunnel coordinate system can be obtained in advance through measurement. Let the coordinate of any corner point in the tunnel coordinate system be $P_{tar\_in\_O_L}$. Meanwhile, based on the camera’s intrinsic parameters, the geometric parameters of the calibration board itself, and the pixel coordinates corresponding to the image of this corner point captured by the middle camera, its coordinate in the middle camera coordinate system can be calculated as $P_{tar\_in\_O_{Cm}}$.
$$P_{tar\_in\_O_{Cm}} = \left(R_{Cm}^{O_t}\right)^{-1}\left[\left(R_{O_t}^{O_L}\right)^{-1}\left(P_{tar\_in\_O_L} - T_{O_t}^{O_L}\right) - T_{Cm}^{O_t}\right] \quad (1)$$
where $R_{O_t}^{O_L}$ and $T_{O_t}^{O_L}$ represent the rotation matrix and translation vector from the pose measurement device coordinate system $O_t$ to the tunnel coordinate system $O_L$, which can be obtained by inverse calculation using the heading angle, roll angle, pitch angle, and spatial position coordinates provided by the roadheader navigation and positioning system. $R_{Cm}^{O_t}$ and $T_{Cm}^{O_t}$ represent the rotation matrix and translation vector from the middle camera coordinate system $O_{Cm}$ to the pose measurement device coordinate system $O_t$, which can be calibrated by combining multiple corner point coordinates and Equation (1).
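For illustration, a minimal NumPy sketch of Equation (1) is given below; the function and variable names are assumptions for this example and are not taken from the authors’ implementation.

```python
# Sketch of Equation (1): map a calibration-board corner from the tunnel
# frame O_L into the middle-camera frame O_Cm. Inputs are 3x3 rotation
# matrices and length-3 translation vectors (illustrative names).
import numpy as np

def tunnel_to_middle_camera(P_tar_in_OL, R_Ot_OL, T_Ot_OL, R_Cm_Ot, T_Cm_Ot):
    # Undo the O_t -> O_L transform provided by the navigation system ...
    P_in_Ot = np.linalg.inv(R_Ot_OL) @ (P_tar_in_OL - T_Ot_OL)
    # ... then undo the calibrated O_Cm -> O_t transform.
    return np.linalg.inv(R_Cm_Ot) @ (P_in_Ot - T_Cm_Ot)
```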
For the endpoints P1 and P2 of the segmentation line between the cutting face and the ground, their coordinates in the tunnel coordinate system are $P_{L\_1}$ and $P_{L\_2}$, respectively, while their coordinates $P_{Cm\_1}$ and $P_{Cm\_2}$ in the middle camera coordinate system can be obtained using Equation (2).
$$\begin{cases} P_{Cm\_1} = R_{O_t}^{Cm}\left(R_{O_L}^{O_t} P_{L\_1} + T_{O_L}^{O_t}\right) + T_{O_t}^{Cm} \\ P_{Cm\_2} = R_{O_t}^{Cm}\left(R_{O_L}^{O_t} P_{L\_2} + T_{O_L}^{O_t}\right) + T_{O_t}^{Cm} \end{cases} \quad (2)$$
By combining the camera’s intrinsic parameters, we can further obtain the corresponding pixel coordinates $p_{m1}$ and $p_{m2}$ of these two points in the image captured by the middle camera with the following equation.
$$\begin{cases} p_{m1} = \dfrac{1}{P_{Cm\_1\_Z}}\, K_{Cm}\, P_{Cm\_1} \\[4pt] p_{m2} = \dfrac{1}{P_{Cm\_2\_Z}}\, K_{Cm}\, P_{Cm\_2} \end{cases} \quad (3)$$
where $K_{Cm}$ is the intrinsic matrix of the middle camera, while $P_{Cm\_1\_Z}$ and $P_{Cm\_2\_Z}$ represent the distances of points P1 and P2, respectively, along the Z-axis direction of the middle camera. The line connecting the pixel points $p_{m1}$ and $p_{m2}$ serves as the segmentation line between the cutting face and the ground in the tunneling heading image.
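The projection of the two endpoints can be sketched as follows, assuming the calibrated poses and intrinsics are already available as NumPy arrays (names are illustrative, not from the original implementation):

```python
# Sketch of Equations (2)-(3): project a segmentation-line endpoint from the
# tunnel frame O_L into middle-camera pixel coordinates.
import numpy as np

def project_endpoint(P_L, R_OL_Ot, T_OL_Ot, R_Ot_Cm, T_Ot_Cm, K_Cm):
    # Equation (2): tunnel frame -> pose-measurement frame -> camera frame.
    P_Cm = R_Ot_Cm @ (R_OL_Ot @ P_L + T_OL_Ot) + T_Ot_Cm
    # Equation (3): perspective projection, dividing by the camera-frame Z value.
    p = (K_Cm @ P_Cm) / P_Cm[2]
    return p[:2]   # pixel coordinates (u, v)

# The image line through project_endpoint(P_L_1, ...) and project_endpoint(P_L_2, ...)
# is the segmentation line between the cutting face and the ground.
```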
This method is also applicable to the images captured by the left and right cameras. On the one hand, it can obtain the rotation matrix $R_{Lm}^{O_t}$ and translation vector $T_{Lm}^{O_t}$ from the left camera coordinate system $O_{Lm}$ to the pose measurement device coordinate system $O_t$, as well as the rotation matrix $R_{Rm}^{O_t}$ and translation vector $T_{Rm}^{O_t}$ from the right camera coordinate system $O_{Rm}$ to the pose measurement device coordinate system $O_t$. On the other hand, it can also calculate the segmentation lines between the cutting face and the ground in the left and right camera images, as shown in Figure 2. $p_{r1}$, $p_{r2}$ and $p_{l1}$, $p_{l2}$ are the corresponding pixel points of points P1 and P2 in the right and left camera images, respectively, and their connecting lines represent the segmentation lines between the cutting face and the ground in the corresponding images.
Compared to other methods that utilize image feature points to divide the front and back planes, this method directly calculates and generates the segmentation lines between the cutting face and the ground based on the coordinates of the segmentation line endpoints, thereby avoiding reliance on image quality and significantly improving the reliability of the segmentation of the tunneling heading image.

2.2. Stitching of Segmented Images

After completing the segmentation of the cutting face and the ground within the front-facing images, this section applies the DHW method to these two planes: the images captured by the left and middle cameras are separately transformed into the perspective of the right camera through matrix transformation and then stitched with the image captured by the right camera.

2.2.1. Stitching of Ground Plane

As shown in Figure 1, the heights of the left, middle, and right cameras remain essentially unchanged relative to the roadway ground. Therefore, once the camera installation positions are fixed, the homography matrices $H_{Lm\_ground}^{Rm\_ground}$ and $H_{Cm\_ground}^{Rm\_ground}$ for the perspective transformation from the left and middle cameras to the right camera are also fixed. These matrices can be pre-calibrated by setting multiple feature points in the common ground area of the cameras and performing feature point matching.
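As a sketch of this one-off calibration, the fixed ground homography can be estimated with OpenCV from at least four matched ground points; the coordinates below are illustrative placeholders, not measured values.

```python
# Estimate the fixed ground-plane homography from corresponding ground feature
# points marked in the left and right camera images.
import cv2
import numpy as np

pts_left = np.array([[412, 655], [530, 640], [611, 700], [702, 668], [498, 712]],
                    dtype=np.float32)
pts_right = np.array([[118, 648], [236, 636], [319, 695], [410, 661], [205, 708]],
                     dtype=np.float32)

# RANSAC rejects occasional mismatches; the resulting matrix is stored and reused online.
H_ground_l2r, inlier_mask = cv2.findHomography(pts_left, pts_right, cv2.RANSAC, 3.0)
```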

2.2.2. Stitching of Cutting Face

Unlike the ground, the distance between the cutting face and the cameras varies with the movement of the roadheader, so the transformation matrices between the images captured by different cameras must be adjusted dynamically according to this distance. To handle these dynamically changing transformation matrices, this paper adopts a plane-induced homography model, which is computed rapidly by combining the internal and external parameters of the cameras, the pose transformation matrices between the coordinate systems of each camera, the normal vector of the cutting face in the camera coordinate system, and the distance from the camera to the cutting face [13].
Taking the left camera as an example, the homography matrix $H_{Lm\_wall}^{Rm\_wall}$ relating the cutting face portion within its image to the cutting face portion in the right camera’s image can be calculated by Equation (4).
$$H_{Lm\_wall}^{Rm\_wall} = K_{Lm} \cdot R_{Lm}^{Rm}\left(I + \frac{T_{Lm}^{Rm}\cdot\left(n_{wall}^{Rm}\right)^{T}}{d_{wall}^{Rm}}\right)\cdot K_{Rm}^{-1} \quad (4)$$
where $K_{Lm}$ and $K_{Rm}$ are the intrinsic matrices of the left and right cameras, respectively; $R_{Lm}^{Rm}$ and $T_{Lm}^{Rm}$ are the rotation matrix and translation vector from the left camera to the right camera, respectively; $I$ is the identity matrix; $n_{wall}^{Rm}$ is the normal vector of the cutting face in the right camera coordinate system; and $d_{wall}^{Rm}$ is the distance from the cutting face to the origin of the right camera coordinate system.
Based on the coordinate system transformation relationship, $R_{Lm}^{Rm}$ and $T_{Lm}^{Rm}$ can be obtained according to the positional relationships between the left and right camera coordinate systems and the pose measurement device coordinate system provided in Section 2.1.
$$\begin{cases} R_{Lm}^{Rm} = \left(R_{Rm}^{O_t}\right)^{-1}\cdot R_{Lm}^{O_t} \\[4pt] T_{Lm}^{Rm} = \left(R_{Rm}^{O_t}\right)^{-1}\cdot T_{Lm}^{O_t} + T_{O_t}^{Rm} \end{cases} \quad (5)$$
$d_{wall}^{Rm}$ and $n_{wall}^{Rm}$ can be determined based on several feature points (such as P1, P2, etc.) on the cutting face and the coordinate values of the right camera in the roadway coordinate system $O_L$ [13,14]. The specific calculation steps are as follows: as shown in Figure 2, four feature points P1, P2, P3, and P4 are selected on the cutting face (P1 and P2 are the endpoints of the intersection line between the cutting face and the ground, and P3 and P4 are two feature points on the cutting face directly above P1 and P2 at a height h). Their coordinates in the roadway coordinate system $O_L$ are known, denoted as $P_{L\_1}$, $P_{L\_2}$, $P_{L\_3}$, and $P_{L\_4}$, respectively. By combining the rotation matrix $R_{O_t}^{Rm}$ and translation vector $T_{O_t}^{Rm}$ between the pose measurement device and the right camera obtained through calibration in Section 2.1, as well as the pose parameters provided by the navigation and positioning system, their coordinates in the right camera coordinate system can be calculated.
$$\begin{cases} P_{Rm\_1}(x_1, y_1, z_1) = R_{O_t}^{Rm}\left(R_{O_L}^{O_t} P_{L\_1} + T_{O_L}^{O_t}\right) + T_{O_t}^{Rm} \\[2pt] P_{Rm\_2}(x_2, y_2, z_2) = R_{O_t}^{Rm}\left(R_{O_L}^{O_t} P_{L\_2} + T_{O_L}^{O_t}\right) + T_{O_t}^{Rm} \\[2pt] P_{Rm\_3}(x_3, y_3, z_3) = R_{O_t}^{Rm}\left(R_{O_L}^{O_t} P_{L\_3} + T_{O_L}^{O_t}\right) + T_{O_t}^{Rm} \\[2pt] P_{Rm\_4}(x_4, y_4, z_4) = R_{O_t}^{Rm}\left(R_{O_L}^{O_t} P_{L\_4} + T_{O_L}^{O_t}\right) + T_{O_t}^{Rm} \end{cases} \quad (6)$$
Subsequently, the normal vector $n_{wall}^{Rm}$ of the cutting face formed by the four points P1, P2, P3, and P4 in the right camera coordinate system, and the distance $d_{wall}^{Rm}$ from the cutting face to the origin of the right camera coordinate system, can be obtained.
$$n_{wall}^{Rm} = (x, y, z): \quad \begin{cases} x(x_1 - x_2) + y(y_1 - y_2) + z(z_1 - z_2) = 0 \\ x(x_1 - x_3) + y(y_1 - y_3) + z(z_1 - z_3) = 0 \\ x(x_1 - x_4) + y(y_1 - y_4) + z(z_1 - z_4) = 0 \end{cases} \quad (7)$$
$$d_{wall}^{Rm} = \frac{\left|\overrightarrow{O_{Rm} P_{Rm\_1}} \cdot n_{wall}^{Rm}\right|}{\left\| n_{wall}^{Rm} \right\|} \quad (8)$$
By substituting the results from Equations (5)–(8) into Equation (4), the dynamic homography matrix $H_{Lm\_wall}^{Rm\_wall}$ of the left camera relative to the right camera can be calculated. Similarly, the homography matrix $H_{Cm\_wall}^{Rm\_wall}$ of the cutting face portion within the middle camera image relative to the right camera can also be determined.
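A compact NumPy sketch of this computation is shown below, assuming the calibrated poses, intrinsics, and the four cutting-face points already expressed in the right-camera frame are available; the names are illustrative, and the plane normal is obtained with a cross product, which is equivalent to solving the linear system in Equation (7).

```python
# Sketch of Equations (4)-(8): dynamic plane-induced homography of the
# cutting face between the left and right cameras.
import numpy as np

def cutting_face_homography(K_Lm, K_Rm, R_Rm_Ot, R_Lm_Ot, T_Lm_Ot, T_Ot_Rm, P_Rm_pts):
    # Equation (5): relative pose of the left camera with respect to the right camera.
    R_Lm_Rm = np.linalg.inv(R_Rm_Ot) @ R_Lm_Ot
    T_Lm_Rm = np.linalg.inv(R_Rm_Ot) @ T_Lm_Ot + T_Ot_Rm

    # Equations (7)-(8): unit normal of the cutting-face plane and its distance
    # from the right-camera origin, from points P1..P4 in the right-camera frame.
    P1, P2, P3, _ = P_Rm_pts
    n = np.cross(P2 - P1, P3 - P1)
    n = n / np.linalg.norm(n)
    d = abs(P1 @ n)

    # Equation (4): plane-induced homography of the cutting face.
    H = K_Lm @ R_Lm_Rm @ (np.eye(3) + np.outer(T_Lm_Rm, n) / d) @ np.linalg.inv(K_Rm)
    return H / H[2, 2]   # normalize the scale
```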
The aforementioned process demonstrates that, given the camera parameters, the proposed method can quickly solve for the transformation matrix of the cutting face by combining the coordinate information of the cutting face provided by the navigation and positioning system with the distance from the camera to the cutting face. This approach avoids the complex process in traditional algorithms of first selecting feature points, performing registration, and then calculating the homography matrix through singular value decomposition, thereby significantly improving real-time performance.

2.2.3. Stitching of the Overall Image

After completing image segmentation and the calculation of transformation matrices, when employing the DHW method for final stitching, it is necessary to calculate the weight of each pixel based on its position in the image and perform overlay fusion. The process is illustrated in Figure 3.
Assuming there exists a pixel point p in the left image, the transformation matrix $H_p$ used to convert it to the right camera’s perspective can be calculated according to the following formula.
$$H_p = \left(1 - \omega_p\right)\cdot H_{Lm\_wall}^{Rm\_wall} + \omega_p \cdot H_{Lm\_ground}^{Rm\_ground} \quad (9)$$
where $\omega_p$ is the weight corresponding to the position of point p in the image, and the calculation method is as follows.
$$\omega_p = \frac{L_{p\_wall}}{L_{p\_wall} + L_{p\_ground}} \quad (10)$$
$L_{p\_wall}$ represents the distance from pixel point p to the nearest feature point on the cutting face, and $L_{p\_ground}$ represents the distance from pixel point p to the nearest feature point on the ground. Since the segmentation line between the cutting face and the ground has been clearly marked in this paper, when point p is on the cutting face, $L_{p\_wall} = 0$ and thus $\omega_p = 0$; similarly, when point p is on the ground, $L_{p\_ground} = 0$ and thus $\omega_p = 1$.
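Because the weight is binary on either side of the segmentation line, the per-pixel homography of Equation (9) reduces to selecting one of the two matrices. A minimal sketch is given below; the side-of-line test is illustrative and its sign convention depends on the image layout.

```python
# Sketch of Equations (9)-(10): choose the homography for a pixel of the left
# image based on which side of the segmentation line it falls on.
import numpy as np

def pixel_homography(p, seg_p1, seg_p2, H_wall, H_ground):
    """Return H_p for pixel p = (u, v); seg_p1/seg_p2 are the segmentation-line endpoints."""
    line_dir = np.subtract(seg_p2, seg_p1)
    to_pixel = np.subtract(p, seg_p1)
    cross = line_dir[0] * to_pixel[1] - line_dir[1] * to_pixel[0]
    w_p = 1.0 if cross > 0 else 0.0                 # 1 on the ground side, 0 on the cutting face
    return (1.0 - w_p) * H_wall + w_p * H_ground    # Equation (9)
```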
After performing perspective transformation on the left camera image using Equation (9), it is stitched and fused with the right camera image. A smooth transition across the image stitching seam (the annotated stitching transition region in the figure) is achieved through linear weight blending [15].
After completing the stitching of the left and right camera images, the stitched result is fused with the middle camera image. In practical scenarios, the middle camera is usually installed higher above the ground than the left and right cameras, so the cutting face occupies a larger proportion of its image. Therefore, only the cutting face portion of the middle camera image (above the dynamic segmentation line) is selected, subjected to perspective transformation using the dynamic homography matrix $H_{Cm\_wall}^{Rm\_wall}$, and then fused and stitched a second time with the previously stitched image of the left and right cameras, ultimately yielding a complete stitched image of the three cameras, as shown in Figure 4.
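A simplified sketch of the final fusion step is given below; it assumes the warped images have already been placed on a common color canvas and uses an illustrative seam position and blend width rather than values from the paper.

```python
# Linear weight blending across the stitching seam, plus the second-pass fusion
# of the middle camera's cutting-face region (Section 2.2.3).
import cv2
import numpy as np

def blend_linear(img_a, img_b, seam_x, width=60):
    """Blend two same-sized color canvases across a vertical seam at column seam_x."""
    out = img_b.copy()
    out[:, :seam_x - width] = img_a[:, :seam_x - width]
    alpha = np.linspace(1.0, 0.0, 2 * width)[None, :, None]   # weight of img_a: 1 -> 0
    band = slice(seam_x - width, seam_x + width)
    out[:, band] = (alpha * img_a[:, band] + (1 - alpha) * img_b[:, band]).astype(img_a.dtype)
    return out

# Second pass (illustrative): warp the cutting-face part of the middle image
# with H_Cm_wall_Rm_wall and blend it onto the left/right panorama.
# warped_mid = cv2.warpPerspective(mid_cutting_face, H_Cm_wall_Rm_wall, (pano_w, pano_h))
# panorama   = blend_linear(warped_mid, lr_panorama, seam_x=1280)
```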

3. Experimental Verification

3.1. Introduction to the Simulation Experimental System

To verify the stitching effectiveness and efficiency of the proposed method, an indoor simulated roadway environment was established with reference to actual underground tunneling face scenarios, as shown in Figure 5.
As shown in Figure 5a, the corridor section is treated as a tunneling roadway, with the wall directly ahead (enclosed by the red line segment) serving as the cutting face. A trolley in the middle of the roadway simulates the tunneling machine and is equipped with a navigation and positioning system that provides its real-time location and attitude within the roadway. With reference to actual working conditions, the laser guidance device (i.e., the origin of the roadway coordinate system OL) of the navigation and positioning system is located approximately 18 m behind the machine in the middle of the roadway. The coordinates of endpoints P1 and P2, the intersection points between the simulated cutting face and the ground plane, are known in OL.
In Figure 5b, the left and right cameras are mounted on the left and right sides of the simulated tunneling machine, respectively, tilted downward toward the ground at approximately 25° and installed approximately 50 cm above the ground. The middle camera is located in the middle of the machine body, facing the cutting face horizontally at a height of approximately 100 cm above the ground. The simulated roadway images captured by the three cameras are shown in Figure 6. A4 paper sheets were placed on the low-texture floor as artificial reference markers to help assess the quality of the image stitching result.

3.2. Three-Camera Image Stitching Process

Following the method described in Section 2, the specific stitching process is as follows:
(1) Calculation of Image Segmentation Lines
First, the rotation matrices and translation vectors from the pose measurement device coordinate system Ot to each camera coordinate system are calibrated using the method in Section 2.1. Simultaneously, given the coordinates of the two endpoints P1 and P2 of the boundary line between the cutting face and the ground plane in OL, the segmentation lines between the cutting face and the ground in each camera image are calculated and displayed in the corresponding images, as shown in Figure 6.
(2) Image Fusion and Stitching of Left and Right Cameras
As described in Section 2.2, the transformation matrix $H_{Lm\_ground}^{Rm\_ground}$ from the left camera’s ground portion to the right camera’s ground portion is calculated using common feature points in the left and right camera images, such as the marked points shown in Figure 7.
Four feature points, P1, P2, P3, and P4, are selected on the cutting face. Among them, P1 and P2 are points on the segmentation line between the cutting face and the ground, while P3 and P4 are two feature points on the cutting face located 1 m directly above P1 and P2, respectively. The coordinates of these four points are substituted into Equations (6)–(8), and the results into Equation (4), to calculate the dynamic transformation matrix $H_{Lm\_wall}^{Rm\_wall}$ of the left camera relative to the right camera and the dynamic transformation matrix $H_{Cm\_wall}^{Rm\_wall}$ of the middle camera relative to the right camera. After applying the weighted homography transformation of Equation (9) to the left image, it is stitched with the right camera image. Figure 8 compares the results of transforming the left camera image with only the cutting-face homography matrix, with only the ground homography matrix, and with the method proposed in this paper, in each case stitched with the right camera image. As shown in Figure 8, if only the homography matrix corresponding to the cutting face is used, misalignment occurs in the ground part of the stitched image; if only the homography matrix corresponding to the ground is used, large-scale deformation occurs in the cutting face part; the segmentation and weighted transformation method proposed in this paper significantly improves the stitching result.
(3) Three-camera image stitching
The dynamic homography matrix $H_{Cm\_wall}^{Rm\_wall}$ from the middle camera to the right camera is used to perform perspective transformation on the cutting face section, which is then stitched with the fused result of the left and right camera images described above. Ultimately, a complete stitched image from the three-camera fusion is obtained, as shown in Figure 9. It can be observed that the stitched image eliminates the obstruction of the roadheader’s own components to the line of sight, presenting a complete view of the heading face scene.

3.3. Time Consumption Performance Analysis

To verify the effectiveness of the proposed algorithm in terms of time efficiency, three classic stitching algorithms—SURF [16], SIFT [17], and BRISK [18]—were employed to perform full-image stitching based on the left and right camera images in Figure 8. The computational time of each method was compared (using the average time taken over 10 executions of each method), as shown in Table 1 below.
The stitching quality of the three traditional methods (SURF, SIFT, and BRISK) is comparable to the method proposed in this paper, and all of them can meet the requirements for tunneling monitoring. However, the algorithm presented in this paper has a clear advantage in terms of time consumption. Compared with classic stitching algorithms, the computational time is reduced by over 60%. Under the same hardware acceleration conditions, the proposed algorithm is more capable of meeting real-time monitoring requirements.
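For reference, a rough timing harness in the spirit of Table 1 is sketched below; it times a generic feature-based pipeline (detector, brute-force matching, RANSAC homography, warp) with OpenCV. It is an illustrative approximation rather than the authors’ benchmark code, and SURF is omitted because it requires a nonfree opencv-contrib build.

```python
# Time a feature-based homography pipeline, averaged over repeated runs,
# for comparison with the direct navigation-based computation.
import time
import cv2
import numpy as np

def time_feature_stitch(img_left, img_right, detector):
    t0 = time.perf_counter()
    kp1, des1 = detector.detectAndCompute(img_left, None)
    kp2, des2 = detector.detectAndCompute(img_right, None)
    norm = cv2.NORM_HAMMING if des1.dtype == np.uint8 else cv2.NORM_L2
    matches = cv2.BFMatcher(norm, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    cv2.warpPerspective(img_left, H, img_right.shape[1::-1])
    return (time.perf_counter() - t0) * 1000.0   # milliseconds

# Average over 10 runs, as in Table 1:
# for det in (cv2.SIFT_create(), cv2.BRISK_create()):
#     print(np.mean([time_feature_stitch(left, right, det) for _ in range(10)]))
```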

4. Verification of Real Coal Mine Tunneling Scenario

The proposed method in this paper was practically validated in a real coal mine tunneling scenario, as shown in Figure 10 below. The left and right cameras were installed on the left and right sides of the front end of the roadheader body, approximately 1 m above the ground and tilted downward at an angle of approximately 25° to monitor the scraper board and the ground in front of it. The middle camera was installed near the driver’s position, approximately 2 m above the ground, to monitor the cutting face section.
After pre-calibrating the transformation relationship between the camera and the inertial navigation system on the roadheader body, the perspective of the middle camera is used as the stitching reference. The images captured by the three cameras, as well as the segmentation lines between the cutting cross-section and the ground in each camera image, are shown in Figure 11 below.
The stitching process of images from the three cameras is illustrated in Figure 12. It can be observed that in the original, unstitched images, the field of view of the middle camera captures only partial information of the cutting cross-section, while the ground information below the cutting head is completely obscured. After stitching, not only can the complete image information of the cross-section be observed, but it is also possible to “see through” the cutting head to view the full scoop plate and ground information beneath it. We also conducted experiments on continuous image stitching during the underground tunneling process. The experimental results show that both the stitching quality and speed meet the monitoring requirements of the underground tunneling process. Because navigation and positioning information is employed, increased interference such as dust and low lighting does not reduce the processing speed of the proposed algorithm. These features significantly enhance the visibility and operational precision of remote operations, particularly in assisting with machine relocation and clearing loose coal on the floor, and hold important application value.

5. Conclusions

In response to the special requirements of large parallax at underground coal mine tunneling faces, dynamic changes in the positions of surveillance cameras, and rapid stitching of large-field-of-view images, this paper proposes a rapid stitching method for tunneling face images based on navigation and positioning information. The method first utilizes tunnel coordinate information to divide the tunneling face scene into the cutting cross-section and the ground through pixel calculations, enhancing the reliability of image segmentation. It then leverages the roadheader’s navigation and positioning information to calculate the spatial distance between the camera and the cutting plane and employs a plane-induced homography model to efficiently solve for the dynamic homography matrix of the cutting cross-section. Finally, the DHW method is adopted to achieve rapid stitching of tunneling face images.
Comparison with three classical stitching methods (SURF, SIFT, and BRISK) demonstrates a reduction in stitching time of more than 60%. Application validation under actual working conditions in coal mines yielded complete stitched images of the tunneling face. This not only allows workers to observe the entire working face area from a unified perspective but also enables them to “see through” the machine body and cutting head to observe the scoop plate and the nearby ground, greatly facilitating remote machine relocation and float coal cleanup operations. This method therefore holds high engineering application value.

Author Contributions

Conceptualization, H.Z. and S.Z.; methodology, S.Z.; software, H.Z.; validation, H.Z. and S.Z.; formal analysis, H.Z. and S.Z.; investigation, H.Z. and S.Z.; data curation, H.Z.; writing—original draft preparation, H.Z.; writing—review and editing, S.Z.; visualization, H.Z.; supervision, S.Z.; funding acquisition, S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 52474187.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data and code of this work will be available from the corresponding author upon reasonable request.

Conflicts of Interest

Author Hongda Zhu was employed by the company Huadian Coal Industry Group Digital Intelligence Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Wang, G.F.; Zhang, L.; Li, S.B.; Li, S.; Feng, Y.H.; Meng, L.Y.; Nan, B.F.; Du, M.; Fu, Z.; Li, R.; et al. Progress in theory and technology research of unmanned intelligent mining systems in coal mines. J. China Coal Soc. 2023, 48, 34–53. [Google Scholar]
  2. Wang, H.; Wang, B.K.; Zhang, X.F.; Li, F.Q. Key technology and engineering practice of intelligent rapid heading in coal mine. J. China Coal Soc. 2021, 46, 2068–2083. [Google Scholar]
  3. Zhang, X.H.; Yang, H.Q.; Bai, L.N.; Shi, S.; Du, Y.Y.; Zhang, C.; Wan, J.C.; Yang, W.J.; Mao, Q.H. Research on low illumination video enhancement technology in coal mine heading face. J. Coal Geol. Explor. 2023, 51, 309–316. [Google Scholar]
  4. Cheng, J.; Li, H.; Ma, K.; Liu, B.; Sun, D.Z.; Ma, Y.Z.; Yin, G.; Wang, G.F.; Li, H.P. Architecture and key technologies of coalmine underground vision computing. Coal Sci. Technol. 2023, 51, 202–218. [Google Scholar]
  5. Wang, J.C.; Pan, W.D.; Zhang, G.Y.; Yang, S.L.; Yang, K.H.; Li, L.H. Principles and applications of image-based recognition of withdrawn coal and intelligent control of draw opening in longwall top coal caving face. J. China Coal Soc. 2022, 47, 87–101. [Google Scholar]
  6. Gao, X.B. Research on key technology of remote visual control in fully-mechanized heading face. Coal Sci. Technol. 2019, 47, 17–22. [Google Scholar]
  7. Ren, W. Development and application of multi view panoramic camera in fully mechanized mining face. Coal Eng. 2022, 54, 102–108. [Google Scholar]
  8. Xia, D.; Zhou, R. Survey of Parallax Image Registration Technology. Comput. Eng. Appl. 2021, 57, 18–27. [Google Scholar]
  9. Gao, J.; Kim, S.J.; Brown, M.S. Constructing image panoramas using dual-homography warping. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 49–56. [Google Scholar]
  10. Zhang, K.L. Research and application of panoramic video remote control technology for intelligent fully mechanized mining face. China Coal 2023, 49, 70–81. [Google Scholar] [CrossRef]
  11. Zhang, X.H.; Wang, Y.; Yang, W.J.; Chen, X.; Zhang, C.; Huang, M.; Liu, Y.H.; Yang, J.H. A mine image stitching method based on improved best seam-line. Ind. Min. Autom. 2024, 50, 9–17. [Google Scholar]
  12. Wang, P.P.; Li, R.; Liu, X.; Li, X.; Fu, C.L. A positioning solution method for roadheader under optical target occlusion conditions. Ind. Min. Autom. 2024, 50, 118–124. [Google Scholar]
  13. Sun, H.X.; Luo, J.X.; Pan, Z.S.; Zhang, Y.Y.; Zheng, Y.J. A Method for Solving Homography Matrix Based on Constrained Total Least Squares. Comput. Technol. Dev. 2022, 32, 50–56. [Google Scholar]
  14. Deng, S.C.; Jiang, Y.L.; Gao, X.Y. Parameter Calibration of Line Structured Light Vision Sensor Based on Plane Normal Vectors. Mod. Mach. Tool Autom. Manuf. Technol. 2023, 7, 69–72. [Google Scholar]
  15. He, J.H.; Wu, B.; Zhang, H.Y. Fast image stitching based on similarity of invariant moments. Microcomput. Appl. 2017, 36, 50–53. [Google Scholar]
  16. Wang, Z.J.; Chao, Y.F. Image registration algorithm using SURF feature and local cross correlation information. Infrared Laser Eng. 2022, 51, 492–497. [Google Scholar]
  17. Xia, X.H.; Zhao, Q.; Xiang, H.T.; Qin, X.F.; Yue, J.P. SIFT feature extraction method for the defocused blurred area of multi-focus images. Opt. Precis. Eng. 2023, 31, 3630–3639. [Google Scholar] [CrossRef]
  18. Du, G.; Hou, L.Y.; Tong, Q.; Yang, D.L. Image mosaicing based on BRISK and improved RANSAC algorithm. Liq. Cryst. Disp. 2022, 37, 758–767. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the roadheader working face and camera view.
Figure 2. Calibration method for camera coordinate system and tunnel coordinate system.
Figure 3. Schematic diagram of the stitching principle for left and right camera images.
Figure 4. Schematic diagram of stitching for the fusion image of the middle camera with the left and right cameras.
Figure 5. Construction of a simulation experiment system. (a) Schematic diagram of the simulated roadway. (b) Installation of simulated tunneling machine and sensors.
Figure 6. Simulated roadway images captured by the three cameras with segmentation lines marked. (a) The image captured by the left camera. (b) The image captured by the middle camera. (c) The image captured by the right camera. (d) Final stitched image.
Figure 7. Selection of matching feature points in the left and right camera images.
Figure 8. Stitching of left and right camera images.
Figure 9. Stitching of fused images from the middle camera and the left and right cameras.
Figure 10. Camera position layout at the heading face.
Figure 11. Preset feature points of the cutting cross-section and plane segmentation lines.
Figure 12. Overall stitching result in a mine.
Table 1. Time performance analysis of different algorithms.

Algorithm      Proposed Algorithm    SURF     SIFT     BRISK
Time (ms)      129.1                 389.8    845.3    349.5
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
