Article

Automated Detection of Methane Leaks by Combining Infrared Imaging and a Gas-Faster Region-Based Convolutional Neural Network Technique

1. Sinopec Research Institute of Petroleum Engineering Co., Ltd., Beijing 102206, China
2. Chinese Academy of Sciences, Aerospace Information Research Institute, State Environmental Protection Key Laboratory of Satellite Remote Sensing & State Key Laboratory of Remote Sensing Science, Beijing 100864, China
3. Beijing Institute of Environmental Characteristics, Science and Technology on Optical Radiation Laboratory, Beijing 100854, China
4. National Engineering Research Center of Disaster Backup and Recovery, School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing 100876, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(18), 5714; https://doi.org/10.3390/s25185714
Submission received: 26 July 2025 / Revised: 9 September 2025 / Accepted: 10 September 2025 / Published: 12 September 2025
(This article belongs to the Section Optical Sensors)

Abstract

Gas leaks threaten ecological and social safety. Non-contact infrared imaging enables large-scale, real-time measurements; however, in complex environments, weak signals from small leaks can hinder reliable detection. This study proposes a novel automated methane leak detection method based on infrared imaging and a Gas-Faster Region-based convolutional neural network (Gas R-CNN) to classify leakage amounts (≥30 mL/min). An uncooled infrared imaging system was employed to capture gas leak images containing leak volume features. We developed the Gas R-CNN model for gas leakage detection. This model introduces a multiscale feature network to improve leak feature extraction and enhancement, and it incorporates region-of-interest alignment to address the mismatch caused by double quantization. Feature extraction was enhanced by integrating ResNet50 with an efficient channel attention mechanism. Image enhancement techniques were applied to expand the dataset diversity. Leak detection capabilities were validated using the IOD-Video dataset, while the constructed gas dataset enabled the first quantitative leak assessment. The experimental results demonstrated that the model can accurately detect the leakage area and classify leakage amounts, enabling the quantitative analysis of infrared images. The proposed method achieved average precisions of 0.9599, 0.9647, and 0.9833 for leak rates of 30, 100, and 300 mL/min, respectively.

1. Introduction

Methane is one of the predominant greenhouse gases and is widely distributed in nature. Its atmospheric concentration has increased gradually at a rate of 0.5% per year over the last decade [1], making methane the largest source of radiative forcing after CO2 (0.97 W/m2) [2] and causing serious environmental impacts. Methane is a colorless, odorless, highly flammable gas that forms explosive mixtures with air, most readily at a concentration of about 9.5%. Poisoning and explosions caused by leaks from natural gas pipelines (whose main component is methane), petrochemical parks, and related equipment in industrial and everyday settings have seriously jeopardized the safety of human life and property [3]. To avoid such hazards, leaks must be detected in time and early emergency response programs implemented to keep the risk at a manageable level. Reliable detection of gas leaks is therefore the key issue.
Conventional gas leak detection methods rely mainly on the manual inspection of pipelines and equipment, which requires substantial manpower and material resources and is time-consuming. As the number of devices, pipe lengths, and the complexity of plant sites increase, work efficiency decreases further. Gas leak detection methods based on specific sensors [4] (optical, electrochemical, and acoustic) have therefore been developed to monitor process parameters. For example, the negative pressure wave method [5], the acoustic method [6], and the volume/mass balance method [7] assess leakage by acquiring pressure, acoustic signals, and flow rate parameters. However, most of these methods depend on the sensitivity of the sensors, proper data acquisition, and the accuracy of the mathematical model used for data processing [8], and sensor-based fixed-point monitoring cannot meet the requirements of large-scale dynamic measurements.
With the development of infrared radiation technology, infrared optical gas imaging (OGI) has become an effective method for detecting gas leaks. The passive infrared imaging detection method requires no radiation source or background reflection and relies on the difference in radiation between the gas and background regions. Owing to the infrared absorption spectrum of the gas, the leak appears in the infrared image as a gas plume (generally, an absorption plume appears black and an emission plume appears white). Many researchers have studied infrared imaging for gas leak detection and have made significant progress. Li et al. [9] used a self-developed wideband infrared imaging system to detect CO2 leaks; image noise was reduced using an anisotropic diffusion filter, and the frame difference method was then used to mark the leakage area. Lu et al. [10] identified leaks using optical gas imaging infrared thermography combined with an improved Gaussian mixture background model. Zheng et al. [11] applied a four-dimensional parametric model to compensate for jitter in the OGI system and then combined cumulative integration of multiple frames with the high-order statistics (HOS) method to identify gas leakage areas. Weng et al. [12] segmented the gas region in an infrared image using the frame difference method, extracted scale-invariant feature transform (SIFT) features in the region, and used a support vector machine (SVM) for classification. Wang et al. [13] developed the first deep learning model for infrared gas detection by combining a convolutional neural network (CNN) with an OGI system for binary classification of methane leakage. Shi et al. [14] combined a Faster Region-based CNN (Faster R-CNN) model with the OGI technique to detect hydrocarbon leaks. Zhang et al. [15] proposed a method that uses deep learning and CNNs to detect VOC gas leakage from a single-frame mid-wave infrared image. Zhou et al. [16] proposed a gas plume-constrained YOLOv11 model based on infrared imaging detection technology, named YPCN (YOLO-Plume Classification Network).
Although OGI offers significant advantages, its detection efficiency is strongly affected by factors such as environmental conditions, operators, gas composition, leakage area, and detection distance. As a selective absorber, the gas target cannot fully absorb the background radiation, so some of the background radiation enters the detection system. The absorption of background radiation by the gas is further reduced when the gas concentration or leakage is low, which in turn reduces the accuracy of leakage detection. Even for gas compressed in a pipe, the cooling produced upon release is limited. In addition, the diffusion of the gas and its irregular shape make it difficult to detect leaks against complex backgrounds. In 2018, Ravikumar et al. [17] analyzed the detection probability profiles of OGI-based CH4 leak detection in real-world scenarios and showed that the median and 90% detection probability limits follow a power-law relationship with detection distance. Moreover, the sensitivity of the OGI system directly determines the quality of the gas infrared image and influences the extraction and classification of gas features. Therefore, effective extraction of gas features is the key to gas leak detection. Conventional feature extraction methods, such as SIFT [18] and histograms of oriented gradients (HOG) [19], require users to specify the feature extraction region of the detection target in advance and generally rely on fixed feature extraction templates designed for specific targets. These approaches depend on strong a priori knowledge, are designed for a single task, and require tedious preprocessing operations. Efficiently locating gas regions and extracting gas features in complex and changing environments are therefore the major problems faced by conventional feature extraction methods.
Computer vision has shown great potential through its wide application in face recognition [20], autonomous driving [21], energy and environment prediction [22,23], and other fields. As a mainstream machine learning approach, CNNs offer significant advantages in feature extraction and pattern recognition. Because of weight sharing, CNNs effectively simplify the model and reduce the number of weights, thereby reducing cumbersome preprocessing steps. To overcome the challenges described above, this study investigates a gas leak detection algorithm based on a convolutional neural network. In related work, Song et al. [24] detected gas leaks in galvanized steel pipes using a CNN in combination with acoustic data. Ning et al. [25] implemented leak detection in natural-gas pipelines by combining sensor signals with a CNN. Wang et al. [13] realized methane leak detection for the first time by combining a CNN with an infrared optical gas imaging system. With the further development of CNNs, object detection models have emerged, such as the Region-based CNN (R-CNN) [26], Fast R-CNN [27], and Faster R-CNN [28]. Shi et al. [14] combined the conventional Faster R-CNN with the OGI technique to locate hydrocarbon leaks in infrared images, enabling the detection of gas targets. However, because gas infrared images have low contrast and a low signal-to-noise ratio, a dedicated Gas R-CNN model is needed to improve the detection performance for gas targets and better meet the requirements of complex environments.
To address the above problems and improve the detection performance for gas targets, an automated methane gas leak detection method based on infrared images and Gas R-CNN is proposed in this study to classify the leakage amount (≥30 mL/min) for the first time. Within the OGI imaging framework, leak detection is thus systematically expanded from the existing “binary qualitative visual detection” to “quantitative visual detection of multi-level leakage”. First, gas leak detection experiments were conducted using an uncooled infrared imaging gas leak detection system for two scenarios and three leak rates (30, 100, and 300 mL/min) to obtain infrared gas images. Second, a Gas R-CNN model was developed for methane gas leak detection. The model introduces a multiscale feature network structure for the extraction and enhancement of leak gas features and uses region of interest (RoI) alignment to solve the problem of region mismatch caused by double quantization. In the feature extraction network, features are extracted by ResNet50 augmented with an efficient channel attention (ECA) mechanism, and the final gas features are obtained using a feature pyramid network (FPN) that fuses the semantic information from the different levels of ResNet50+ECA. Finally, the diversity of the infrared gas images was enriched using image enhancement techniques, and the data were fed into the Gas R-CNN model to evaluate its effectiveness in gas leak detection.
The remainder of this paper is organized as follows. Faster R-CNN and Gas R-CNN models are presented in Section 2. The uncooled infrared imaging gas leakage detection system and the corresponding experimental setup are explained in Section 3. The detection process is explained in detail and the results are discussed in Section 4. Finally, the conclusions are presented in Section 5.

2. Methodology

2.1. Faster R-CNN

Faster R-CNN was developed from R-CNN [26] and Fast R-CNN [27]. It achieved the best results in the ILSVRC and COCO competitions of that year and became one of the leading target detection algorithms [28]. The main innovation of this algorithm is the use of a region proposal network (RPN) to generate RoIs instead of the selective search (SS) method, which overcomes the bottleneck in detection efficiency. Faster R-CNN consists of three modules: a feature extraction network, an RPN, and a detection network, as shown in Figure 1.
The feature extraction network of Faster R-CNN scales the infrared image of the gas leakage from size P × M to X × Y and then inputs the image into a CNN (typically VGG16) to extract the feature map. Next, the feature map is fed into the RPN, which generates rectangular region proposals on the feature map. The RPN uses nine anchors, with width:height ratios ∈ {1:1, 1:2, 2:1} and sizes ∈ {128 × 128, 256 × 256, 512 × 512}, as the initial detection boxes for bounding-box regression. The box classification layer (cls) and the box regression layer (reg) provide the categories of the gas objects and the corresponding coordinates of the proposal boxes: the cls layer outputs the probability of each object class, and the reg layer outputs the parameters (x, y, w, h) of the proposal box, namely the center coordinates (x, y), width w, and height h. The RoI pooling layer combines the feature maps and region proposals to map proposal boxes of different sizes to fixed-scale feature vectors (7 × 7). Finally, the classification and regression results are fed into the fully connected layers for more accurate target detection.
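For readers who wish to prototype this pipeline, the sketch below assembles the configuration described above (nine anchors from three scales and three aspect ratios, proposals pooled to a 7 × 7 grid, and four output classes for background plus the three leak rates) from standard torchvision components. It is a minimal illustration under these assumptions, not the authors' implementation; note that torchvision's FasterRCNN applies RoI Align internally, whereas the classic design used RoI pooling.

```python
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

# VGG16 feature extractor, as in the conventional Faster R-CNN described above
backbone = torchvision.models.vgg16(weights=None).features
backbone.out_channels = 512

# 9 anchors per location: sizes {128, 256, 512} x aspect ratios {1:2, 1:1, 2:1}
anchor_generator = AnchorGenerator(sizes=((128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))

# Map each proposal to a fixed 7 x 7 feature grid
roi_pooler = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)

# 4 classes: background + three leak rates (30, 100, 300 mL/min)
model = FasterRCNN(backbone, num_classes=4,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)

model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 480, 640)])  # one dummy 640 x 480 infrared frame
# detections is a list of dicts with "boxes", "labels", and "scores"
```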

2.2. Gas R-CNN Detection Model

In this section, an automated gas leak detection model, Gas R-CNN, is proposed, as shown in Figure 2. A multiscale gas feature extraction network replaces VGG16 in the conventional Faster R-CNN, and RoI Align replaces RoI pooling, which makes the model more effective at detecting small gas leaks.

2.2.1. Multiscale Network for Gas Feature Extraction

The receptive field of an image can be effectively extended using convolutional networks, and features at different depths correspond to different levels of semantics. Low-level features provide rich detail and location information but contain more noise and weaker semantics. As the network depth increases to acquire higher-level semantic features, the feature map becomes increasingly abstract, resulting in poorer detail perception but richer semantic information [14,29,30]. Feature extraction is the key to gas leak detection. The infrared image of a gas leak is characterized by low contrast, fuzzy edges, and a low signal-to-noise ratio. It is also strongly influenced by the gas concentration and the external environment, so the gas leakage area is not clearly recognizable in some cases. When the gas region passes from a shallow layer to a deep layer of the network, it may be misinterpreted as noise or background, resulting in a loss of gas information in the high-level semantics.
To solve these problems, a multiscale feature network structure for leakage gas feature extraction and enhancement was developed, as shown in the feature extraction network in Figure 2. First, feature extraction is performed by ResNet50 equipped with the ECA mechanism to suppress useless information; the FPN is then used to fuse the semantic information from the different layers of ResNet50+ECA to obtain the final gas features.
(1) ResNet50 combined with the ECA mechanism
To better capture gas information, a residual network (ResNet) was used in the Gas R-CNN instead of VGG16. Using the ImageNet dataset, He et al. [30] demonstrated that a ResNet with a depth of up to 152 layers still has lower complexity than the VGG16 network, effectively keeping the network complexity manageable while increasing its depth. The residual block effectively alleviates the vanishing and exploding gradient problems caused by deep networks. Considering the model complexity and the type of gas leakage identification in this study, ResNet50, which consists of 49 convolutional layers and 1 fully connected layer, was chosen to extract the feature information.
Relevant studies have shown that channel attention mechanisms have great potential to improve the performance of deep CNNs [31,32,33], but complex attention modules increase model complexity while improving network performance. The ECA mechanism, proposed in 2020 [34], efficiently achieves local cross-channel interaction without dimensionality reduction through a one-dimensional convolution and uses a nonlinear mapping of the channel dimension to determine the convolution kernel size, achieving adaptive coverage of channel interactions. This strikes a better balance between model performance and complexity, enhances the fine-grained feature response in low-contrast scenes with diffuse boundaries, and avoids the loss of detail that can accompany channel compression. For low-contrast gas infrared images, the ECA mechanism helps to extract gas information, suppress useless background and noise information, and improve the performance of the Gas R-CNN.
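As a concrete reference, the following is a minimal PyTorch sketch of an ECA block following the description in [34]; the adaptive kernel-size constants (gamma = 2, b = 1) are the commonly cited defaults and are assumed here rather than taken from this article.

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention: 1D convolution across channels, no dimensionality reduction."""
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Adaptive 1D kernel size derived from the channel dimension (forced to be odd)
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.avg_pool(x)                       # (B, C, H, W) -> (B, C, 1, 1)
        # Local cross-channel interaction via a 1D convolution over the channel axis
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        return x * self.sigmoid(y)                 # re-weight each channel of the input

# Example: re-weight a ResNet50 stage output (e.g., C4 with 1024 channels)
feat = torch.rand(1, 1024, 40, 30)
print(ECA(1024)(feat).shape)                       # torch.Size([1, 1024, 40, 30])
```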
(2) Multiscale feature fusion based on the FPN
In the conventional Faster R-CNN, the RPN input is only the last feature map of the feature extraction network, which carries single-scale feature information. The multiscale fusion approach of the FPN [35], proposed in 2017, has proved to be both accurate and fast in target detection. In the feature extraction stage, the multiscale pyramid provides a general structure for producing enhanced feature representations; with the feature pyramid network, the feature maps at each scale have strong semantic information while the amount of computation is greatly reduced [36]. The FPN structure consists of three main parts: a bottom-up pathway, a top-down pathway, and lateral connections. First, the bottom-up pathway is the feature extraction process of the ResNet50+ECA network, which divides the feature maps into five levels, C1 to C5. Second, the top-down pathway upsamples the feature maps obtained at a higher level and passes them downward, so that high-level features with rich semantic information are propagated to the low-level features. As shown in Figure 3, the lateral connection consists of three main steps: (1) the C2–C5 feature maps are passed through a 1 × 1 convolution to reduce the channel dimensionality and add nonlinearity; (2) each lateral feature Cn is fused with the upsampled feature Pn+1; and (3) to eliminate the aliasing effect caused by upsampling, a 3 × 3 convolution is applied to output the feature maps P2–P5, with P6 obtained by downsampling P5. To improve the detection speed, the high-resolution C1 feature maps were not fed into the FPN. The feature input of the RPN is therefore changed from a single scale to multiple scales (P2–P6), which effectively fuses deep and shallow gas information and improves the accuracy of gas leak detection.
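The sketch below illustrates this fusion scheme (1 × 1 lateral convolutions, top-down upsampling with element-wise addition, 3 × 3 smoothing to P2–P5, and P6 obtained by downsampling P5) for feature maps with ResNet50's channel counts. It is a simplified illustration under these assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFPN(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        # 1 x 1 lateral convolutions reduce C2-C5 to a common channel width
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        # 3 x 3 convolutions smooth the fused maps to suppress upsampling aliasing
        self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                    for _ in in_channels)

    def forward(self, c2, c3, c4, c5):
        laterals = [l(c) for l, c in zip(self.lateral, (c2, c3, c4, c5))]
        # Top-down pathway: upsample the higher level and add it to the lateral map
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
        p2, p3, p4, p5 = [s(l) for s, l in zip(self.smooth, laterals)]
        p6 = F.max_pool2d(p5, kernel_size=1, stride=2)   # P6 by downsampling P5
        return p2, p3, p4, p5, p6

# Example with feature maps shaped like ResNet50 outputs for a 640 x 480 input
c2, c3, c4, c5 = (torch.rand(1, c, 480 // s, 640 // s)
                  for c, s in zip((256, 512, 1024, 2048), (4, 8, 16, 32)))
print([p.shape[-2:] for p in SimpleFPN()(c2, c3, c4, c5)])
```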

2.2.2. Gas Detection Networks with RoI Align

Detecting tiny leaks is a major challenge in practical gas leak detection. The RoI pooling layer maps each proposed region generated by the RPN to the corresponding location of the feature map to obtain the RoI. Because the proposed regions have different sizes and aspect ratios, the mapped RoI regions have floating-point boundaries, so a first quantization operation is performed to align them with the pixel grid. Second, the fully connected (FC) layer in the detection network requires a fixed-size input, so the RoI feature maps of different sizes are quantized again to a fixed size (7 × 7). These two quantization operations cause the RoI feature map to deviate from the original image, which affects the detection of the gas leakage area and the leakage source.
RoI Align [37] effectively solves the mapping deviation caused by the quantization in RoI pooling and improves the recognition accuracy of the model. As shown in Figure 4, RoI Align uses bilinear interpolation to compute the exact values of multiple sampling points and max pooling to take the maximum of these sampling points as the final value. This replaces the quantization operations of RoI pooling and preserves the floating-point coordinates. For gas leak detection tasks that require precise bounding-box localization, this spatial alignment mechanism can significantly enhance localization accuracy. Therefore, RoI Align was used in this study to achieve more accurate detection and identification results.
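The short example below contrasts RoI pooling and RoI Align on the same floating-point proposal using torchvision.ops; the feature-map size and box coordinates are arbitrary values chosen only to show that RoI Align keeps the floating-point geometry instead of rounding it twice.

```python
import torch
from torchvision.ops import roi_align, roi_pool

feature_map = torch.rand(1, 256, 60, 80)               # (B, C, H, W) feature map
# One proposal in (batch_index, x1, y1, x2, y2) format, in feature-map coordinates
boxes = torch.tensor([[0, 12.3, 7.8, 45.6, 31.9]])

pooled = roi_pool(feature_map, boxes, output_size=(7, 7), spatial_scale=1.0)
aligned = roi_align(feature_map, boxes, output_size=(7, 7), spatial_scale=1.0,
                    sampling_ratio=2, aligned=True)     # bilinear sampling, no rounding
print(pooled.shape, aligned.shape)                      # both: torch.Size([1, 256, 7, 7])
```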

3. Experimental Investigation

3.1. Uncooled Infrared Imaging Gas Leak Detection System

The infrared absorption spectral characteristics of the gas result in different radiation levels compared to the background, which is displayed as a moving gas cloud in the infrared image. CH4 is a colorless, odorless, flammable, and explosive gas. Its infrared absorption spectrum is shown in Figure 5a (infrared spectrum data from the HITRAN database). Two strong infrared absorption peaks were found, at 3.31 μm and 7.669 μm. Although the infrared absorption peak at 3.31 μm is significantly higher than that at 7.669 μm, most volatile organic compounds (VOCs) (alkanes, olefins, alkynes, etc.) have absorption peaks between 3.2 μm and 3.4 μm, and there are no significant absorption peaks for the other VOCs near 7.669 μm. In this study, the infrared spectral absorption characteristics of 7.669 μm were used for leak detection.
The in-house-developed, uncooled infrared imaging camera for detecting gas leaks is shown in Figure 5b [38]. It uses a domestic VOx uncooled infrared focal plane detector (NETD: 50 mK, resolution: 640 × 480) with a filter range of 7–8 μm and a frame rate of 20 frames/s. To keep the instrument in a stable working state over long periods, it must be regularly maintained and calibrated. The transmission path from the background to the imaging system was divided into three layers (A, B, and C), and the radiation values of the gaseous and non-gaseous paths received along the line-of-sight direction of the system are given by Equations (1) and (2). Leakage detection was performed based on the difference between these radiation values.

3.2. Experimental Setup

The experimental setup is illustrated in Figure 6 [39] and comprises a 40 L CH4 gas cylinder (room temperature, concentration: 1000 ppm), a black body, a gas cell, a flowmeter, several conduits, and fixed fittings. Image acquisition was performed using the uncooled infrared imaging gas leak detection system described in Section 3.1, and the images were saved in JPG format. The software environment was Visual Studio 2015 + OpenCV 3.2.0, and the operating system was Windows 10. For this experiment, a black body and a wall were used as backgrounds. The indoor temperature was 26 °C, and the relative humidity was 40–50%. Two leakage sources were simulated using a conduit and a gas cell, and the leakage volume was controlled with a flowmeter to simulate the leakage scenarios. Scenario 1 involved a conduit source leakage with a wall as the background; Scenario 2 involved a gas cell source leak with a black body background. The experiments were performed at leak rates of 300, 100, and 30 mL/min, with a distance of 0.8 m between the system and the leak source. Each case was recorded at a frame rate of 20 fps for 4 min and repeated five times. Because the leaked gas may be unstable at the beginning and end of the process, the first 30 s and last 30 s of each video were removed.

3.3. Typical Infrared Image of a CH4 Gas Leak

The infrared images of the gas leakage in the different scenarios of this experiment are shown in Figure 7, where the boxed regions indicate the gas leakage source and part of the leakage area. The specific characteristics of the gas target result in low contrast in the infrared image, making accurate localization of the leakage source difficult under the experimental conditions. Gas leak detection has previously been realized using gas motion characteristics; however, this requires many preprocessing steps and cannot perform large-scale detection in complex situations in a timely manner. Figure 7b,d show the pseudo-color images obtained using the motion detection method [38]. The conventional approach of relying on trained professionals to locate the leakage source makes it difficult to ensure efficiency and accuracy at the same time. Therefore, a gas leak detection method based on infrared images and Gas R-CNN is proposed, which improves the efficiency and accuracy of detection.

4. Detection of Gas Leaks Using Infrared Images and Gas R-CNN

4.1. Gas Dataset

4.1.1. Data Augmentation

In this section, the gas dataset is created using the infrared images of the gas leak obtained in Section 3. One image was extracted every five frames to account for the high similarity of consecutive frames in the video. To better meet practical requirements and improve the training of the gas leakage model, data augmentation (DA) was used to expand the gas dataset. Each leakage image was transformed using four methods: brightness transformation, Gaussian blur, horizontal flip, and random rotation (−30° to 30°).
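A hedged sketch of this preprocessing step is given below: frames are saved every five frames with OpenCV, and each saved frame is expanded into the four augmented variants. Parameter values such as the brightness factor and blur kernel, as well as the file paths, are illustrative assumptions rather than settings reported in this paper (apart from the ±30° rotation range).

```python
import cv2
from PIL import Image
import torchvision.transforms as T

def extract_frames(video_path: str, out_dir: str, step: int = 5) -> int:
    """Save every `step`-th frame of a leak video as a JPG and return the count."""
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(f"{out_dir}/frame_{idx:06d}.jpg", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# The four augmentations applied to each extracted frame
augmentations = {
    "brightness": T.ColorJitter(brightness=0.4),
    "gaussian_blur": T.GaussianBlur(kernel_size=5, sigma=(0.5, 2.0)),
    "horizontal_flip": T.RandomHorizontalFlip(p=1.0),
    "rotation": T.RandomRotation(degrees=30),            # random angle in [-30, 30] degrees
}

def augment_frame(image_path: str) -> dict:
    """Return the four augmented variants of one infrared frame."""
    img = Image.open(image_path).convert("RGB")
    return {name: aug(img) for name, aug in augmentations.items()}
```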

4.1.2. Data Labeling

Through the augmentation process described above, 9000 gas leakage images covering the two scenarios and three leakage volumes were obtained. All images were uniformly named, and the leakage volume and location in each image were labeled using the LabelImg tool. The gas dataset follows the format of the PASCAL VOC2012 dataset.
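For illustration, the snippet below reads one LabelImg/PASCAL VOC annotation of the kind described above. The class-name string and file path are hypothetical; only the XML structure follows the VOC2012 convention.

```python
import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path: str):
    """Return the image filename and the labeled leak boxes from one VOC XML file."""
    root = ET.parse(xml_path).getroot()
    filename = root.findtext("filename")
    objects = []
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        objects.append({
            "label": obj.findtext("name"),   # leak class, e.g. "30mL_min" (hypothetical name)
            "bbox": [int(float(box.findtext(k))) for k in ("xmin", "ymin", "xmax", "ymax")],
        })
    return filename, objects

# Example: fname, objs = read_voc_annotation("Annotations/scene1_000123.xml")
```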

4.1.3. Dataset Partition

Shanmugam et al. [38] have shown that test-time augmentation (TTA) improves the predictive ability of a model. Because an original image and its augmented versions are highly similar, they were assigned to the same subset to improve the training effect and generalization ability of the model. The gas dataset was divided into training, validation, and test sets at a ratio of 6:2:2.
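A minimal sketch of such a grouped 6:2:2 split is shown below; the file-naming convention used to associate an original frame with its augmented variants (a double-underscore suffix) is an assumption made only for illustration.

```python
import random
from collections import defaultdict

def grouped_split(image_names, ratios=(0.6, 0.2, 0.2), seed=0):
    """Split images 6:2:2 so that an original and its augmented variants share a subset."""
    # Group "scene1_000123__blur.jpg" etc. under the key "scene1_000123"
    groups = defaultdict(list)
    for name in image_names:
        groups[name.split("__")[0]].append(name)

    keys = sorted(groups)
    random.Random(seed).shuffle(keys)
    n_train = int(ratios[0] * len(keys))
    n_val = int(ratios[1] * len(keys))

    train = [img for k in keys[:n_train] for img in groups[k]]
    val = [img for k in keys[n_train:n_train + n_val] for img in groups[k]]
    test = [img for k in keys[n_train + n_val:] for img in groups[k]]
    return train, val, test
```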

4.2. Implementation Details

4.2.1. Leak Detection

The infrared gas image was input into the Gas R-CNN model, and the processing flow was as follows. (1) The features of the gas and the background were extracted using the improved multiscale feature extraction network in the model, and the feature maps were fed into the RPN and the detection network. (2) Based on the specified intersection over union (IOU), the RPN classifies positive and negative samples and obtains region proposals and associated parameters, with positive samples indicating gas leaks (30, 100, or 300 mL/min). (3) Leaks are identified using a detection network that outputs the probability of the category while fine-tuning the region proposal box.

4.2.2. Model Training

In this study, a workstation with the following configuration was used for model training: an Intel Xeon Gold 6330 CPU @ 2.00 GHz (Intel Corporation, Santa Clara, CA, USA) and a GeForce RTX 3090 GPU (NVIDIA Corporation, Santa Clara, CA, USA) with 24 GB of video memory and 60 GB of RAM. The model was trained in a Linux environment based on PyTorch, CUDA 11.3, and Python 3.8 with the corresponding libraries. To improve the training performance and minimize the oscillation of the loss curve, the learning rate was set to 0.01, and the stochastic gradient descent (SGD) method was used with a decay value of 0.0001 and a momentum factor of 0.9. Because the gas does not have a fixed shape, both the confidence and IOU thresholds were set to 0.5, a classic setting for balancing positive and negative samples in object detection tasks. The number of training epochs was set to 600, and the model was saved every five epochs.
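The snippet below sketches this training configuration (SGD with a learning rate of 0.01, momentum 0.9, the reported decay value of 0.0001 interpreted here as weight decay, and a checkpoint every five epochs) for a torchvision-style detection model. `model` and `train_loader` are placeholders, and the loop is reduced to the essential optimizer calls rather than the authors' full training script.

```python
import torch

def train(model, train_loader, device="cuda", epochs=600):
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                                momentum=0.9, weight_decay=1e-4)
    for epoch in range(1, epochs + 1):
        for images, targets in train_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            # torchvision detection models return a dict of losses in training mode
            loss = sum(model(images, targets).values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        if epoch % 5 == 0:                                  # save a checkpoint every 5 epochs
            torch.save(model.state_dict(), f"gas_rcnn_epoch{epoch:03d}.pth")
```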

4.2.3. Evaluation Indicators

In this study, the performance of the gas leak detection model was evaluated using a test set. In this case, detecting a leak is a classification task, and evaluating the deviation between the predicted leak location and the original location is a regression task. Precision, recall, F1 score, average precision (AP), and mAP were used for the model performance evaluation.
Taking an image with a leak volume of 30 mL/min as an example, the recall is the proportion of samples with a leak volume of 30 mL/min that are correctly predicted. Precision is the proportion of all samples predicted to have a volume of 30 mL/min that are actually 30 mL/min. Recall and precision are defined in Equations (1) and (2),
$$\mathrm{Recall} = \frac{TP}{TP + FN} \qquad (1)$$
$$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (2)$$
where TP denotes the number of correctly identified 30 mL/min gas leaks, TN denotes the number of correctly identified non-30 mL/min gas leaks, FP denotes the number of non-30 mL/min gas leaks identified as 30 mL/min gas leaks, and FN denotes the number of unidentified 30 mL/min gas leaks.
As shown in Equation (3), the F1 score is the harmonic mean of precision and recall. When the test set contains multiple categories, the mean F1 (mF1) is calculated using Equation (4).
$$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \qquad (3)$$
$$mF1 = \left( \frac{1}{n} \sum_{i=1}^{n} F1_i \right)^2 \qquad (4)$$
Precision–recall (PR) curves were used to assess the differences between the performances of the different models. IOU is the degree of overlap between the manually labeled ground-truth bounding box and the model-predicted bounding box. For a specified IOU threshold, the AP is the area under the PR curve, as shown in Equation (5), and the mAP in Equation (6) denotes the average AP over the different categories in the test set.
$$AP = \int_{0}^{1} \mathrm{Precision}(\mathrm{Recall}) \, d(\mathrm{Recall}) \qquad (5)$$
$$mAP = \frac{1}{n} \sum_{i=1}^{n} AP_i \qquad (6)$$
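The following NumPy sketch implements Equations (1)–(3) and (5) for a single class; the AP integral is approximated by trapezoidal integration over the sorted recall values, since the exact interpolation scheme is not specified in the paper.

```python
import numpy as np

def precision_recall_f1(tp: int, fp: int, fn: int):
    """Equations (1)-(3) for one class, e.g. the 30 mL/min leaks."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def average_precision(recalls: np.ndarray, precisions: np.ndarray) -> float:
    """Equation (5): area under the precision-recall curve."""
    order = np.argsort(recalls)
    return float(np.trapz(precisions[order], recalls[order]))

# Example with a toy PR curve for one leak class
r = np.array([0.0, 0.5, 0.8, 1.0])
p = np.array([1.0, 0.95, 0.90, 0.70])
print(average_precision(r, p))
```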

4.3. Results and Analysis

4.3.1. Performance Evaluation of Gas R-CNN Model

To evaluate the performance of the Gas R-CNN model, it was compared with the conventional Faster R-CNN model from different perspectives. Figure 8 shows the loss curves of the two models over 600 training epochs [40]. The loss of the Gas R-CNN model drops quickly to 0.03 within the first 10 epochs and continues to decrease as training progresses, eventually stabilizing at 0.0262 ± 0.0001. The loss of the conventional Faster R-CNN model drops to 0.0527 within the first 10 epochs but only stabilizes at approximately 0.0509 as the number of epochs increases. The Gas R-CNN model therefore exhibits better convergence and higher robustness.
To evaluate the contributions of ResNet50, ECA, FPN, and RoIAlign to the performance of the gas leakage detection model, ablation experiments were performed on the Gas R-CNN model. Figure 9 compares the PR curves of the four models. The curves show that each successive modification improves the detection performance at the different leakage volumes, indicating the effectiveness of the improvements.
The curves of the Faster R-CNN (ResNet50+FPN+RoIAlign) model were significantly higher than those of the two baseline models, indicating superior detection performance; however, its performance decreases more noticeably as the gas leakage volume decreases. The proposed model performed significantly better than the other three models, and its detection performance remained stable as the leakage volume decreased, further demonstrating its effectiveness in detecting gas leaks.
Table 1 shows the performance comparison in the ablation experiments of the Gas R-CNN model. The results show that successively adding ResNet50, ECA, FPN, and RoIAlign effectively improves the AP and mAP of gas leak detection. Compared with the Faster R-CNN (VGG16) baseline, the AP increased by 0.2340, 0.1503, and 0.2064 for leaks of 30, 100, and 300 mL/min, respectively. The detection of low-volume leaks is improved the most, indicating that the model can effectively extract the gas information and accurately identify the location of the gas leakage. The F1 and mF1 values of the Gas R-CNN model were also higher than those of the other models.
Figure 10 shows the detection results of the Gas R-CNN model, including the predicted leakage type and location, under four conditions in each scenario at a leakage volume of 100 mL/min. In Scenario 1, the prediction score for the original image is 99.77%, whereas it is 97.72% for the image after Gaussian blurring; in Scenario 2, the prediction score of the original image is 99.93%, and that of the Gaussian-blurred image is 99.94%.
These results show that background and environmental disturbances can degrade the detection performance to some extent. As shown in Table 1, the AP of the proposed model is 96.47% at a leakage volume of 100 mL/min, whereas that of the original Faster R-CNN model is 81.44%. The proposed model therefore shows better detection performance and high generalization ability in the overall testing environment.

4.3.2. Comparison with Prevalent Models

To further evaluate the performance of the Gas R-CNN model, it was compared with four typical target detection algorithms: YOLOv3 [41], SSD [42], YOLOX [43], and YOLOv7 [44]. The proposed model exhibited superior performance across the different tasks and effectively extracted weak gas information, making it more suitable for practical applications. A quantitative analysis of the detection performance of the different models was also carried out, and their AP and mAP values are listed in Table 2. Table 2 confirms the superiority of the proposed model for gas detection, especially at 30 mL/min. The model effectively handles the fact that infrared images of gas leaks have low contrast and a low signal-to-noise ratio, with gas features often obscured by complex backgrounds; the gas information was effectively extracted, and the different leak rates were correctly classified.

4.3.3. Generalization Ability of the Gas R-CNN Model

In this section, the IOD-Video dataset (leakage/non-leakage classification only) from the Caoxun team at Nanjing University is used to evaluate the generalization ability of the Gas R-CNN model. Figure 11 shows the detection results of the Gas R-CNN model in different environments. Owing to the limitations of the IOD-Video dataset, the exact leak volumes could not be labeled; nevertheless, our model accurately identified leakage areas against different backgrounds. Table 3 lists the results of the quantitative analysis of the Gas R-CNN model in different environments; six video streams recorded under different environmental conditions were used for testing. The results show that the maximum AP value is 0.9920, the minimum is 0.9591, and the mAP is 0.9742, indicating that the method performs well in different environments. The highest recall is 0.9994, the lowest 0.9292, and the average recall 0.9735, indicating that the method can detect most leaks.
Detection on the IOD-Video dataset, based on the leak/non-leak classification, had a minimum AP value of 0.9591, whereas detection on the gas dataset built in this study, based on the leak-quantity classification, had a minimum AP value of 0.9599. This shows that the model maintains a reliable transfer capability even when there are differences between domains.

5. Conclusions

The gas detection performance is closely related to the environmental conditions, gas composition, leakage area, detection distance, and other factors. When the gas leakage is small, the leakage features are too weak to be detected and identified effectively. To solve these problems and improve the detection performance for gas targets, an automated methane gas leak detection method based on infrared images and a Gas R-CNN was proposed, enabling gas leak classification for the first time. The Gas R-CNN model introduces a multiscale feature network structure for gas leak feature extraction and enhancement and uses region of interest (RoI) alignment to solve the problem of region mismatch caused by double quantization. The experimental results show that the model successfully detects the gas leakage area and classifies the leakage amount, enabling the quantitative analysis of infrared images. The APs for gas leak detection were 0.9599 at 30 mL/min, 0.9647 at 100 mL/min, and 0.9833 at 300 mL/min, with an mAP of 0.9693, indicating that the proposed model has good detection capability. Detection on the IOD-Video dataset, based on the leak/non-leak classification, had a minimum AP value of 0.9591.
The basic leak detection capabilities were validated in this study using the IOD-Video dataset, and we performed the first quantitative assessment of leaks using the Gas dataset, which was built in a laboratory. Within the framework of OGI imaging, leak detection is systematically extended from the existing “binary qualitative visual detection” to “quantitative visual detection across multi-level leakage.” The methodology used in this study is also applicable to other gases with infrared absorption characteristics.
However, this study has some limitations. First, more variables, such as temperature, detection distance, and gas concentration, should be included in the experiments to evaluate the effectiveness of the model more comprehensively. Second, laboratory conditions differ in some respects from actual leak scenarios, and the influence of external environmental factors on gas leak detection needs further investigation. In general, this study focuses on verifying methane leak detection and leakage-rate classification under controllable conditions and does not assess generalization to real environmental conditions, which involve diverse factors such as weather (wind, rain, fog, and extreme weather) and complex backgrounds. Future work will focus on addressing these challenges.

Author Contributions

Conceptualization, J.Z. (Jinhui Zuo) and Z.L.; methodology, J.Z. (Jinhui Zuo); software, J.Z. (Jinhui Zuo); validation, J.Z. (Jinhui Zuo), Z.R. and J.Z. (Jinxin Zuo); formal analysis, J.Z. (Jinhui Zuo); investigation, W.X.; resources, W.X.; data curation, J.Z. (Jinxin Zuo); writing—original draft preparation, J.Z. (Jinhui Zuo); writing—review and editing, Z.L.; visualization, J.Z. (Jinhui Zuo); supervision, J.Z. (Jinxin Zuo); project administration, W.X.; funding acquisition, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Plan (2023YFB3907405) and the Li Zhengqiang Expert Workstation of Yunnan Province (202205AF150031).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data generated and/or analyzed during the current study are not publicly available.

Acknowledgments

All authors gratefully acknowledge the contributions of other researchers working in this area.

Conflicts of Interest

Author J.Z. (Jinhui Zuo) was employed by the company Sinopec Research Institute of Petroleum Engineering Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Gas R-CNN: Gas-Faster Region-based Convolutional Neural Network
ECA: Efficient Channel Attention
OGI: Optical Gas Imaging
HOS: High-Order Statistics
SVM: Support Vector Machine
CNN: Convolutional Neural Network
Faster R-CNN: Faster Region-based Convolutional Neural Network
R-CNN: Region-based Convolutional Neural Network
RoI: Region of Interest
FPN: Feature Pyramid Network
FC: Fully Connected
VOCs: Volatile Organic Compounds
SGD: Stochastic Gradient Descent
AP: Average Precision
PR: Precision–Recall

References

  1. Dlugokencky, E. Global CH4 Monthly Means; Earth System Research Laboratories: Boulder, CO, USA, 2021. [Google Scholar]
  2. O’Connor, F.M.; Abraham, N.L.; Dalvi, M.; Folberth, G.A.; Griffiths, P.T.; Hardacre, C. Assessment of pre-industrial to present-day anthropogenic climate forcing in UKESM1. Atmos. Chem. Phys. 2021, 21, 1211–1243. [Google Scholar] [CrossRef]
  3. Krasner, A.; Jones, T.S.; La Rocque, R. Cooking with gas can harm children: Cooking with gas stoves is associated with increased risk of childhood respiratory illnesses including asthma. J. Environ. Health. 2019, 83, 14–18. [Google Scholar]
  4. Meribout, M.; Khezzar, L.; Azzi, A.; Ghendour, N. Leak detection systems in oil and gas fields: Present trends and future prospects. Flow. Meas. Instrum. 2020, 75, 101772. [Google Scholar] [CrossRef]
  5. Lu, W.; Liang, W.; Zhang, L.; Liu, W. A novel noise reduction method applied in negative pressure wave for pipeline leakage localization. Process Saf. Environ. Prot. 2016, 104, 142–149. [Google Scholar] [CrossRef]
  6. Liu, C.W.; Li, Y.X.; Yan, Y.K.; Fu, J.T.; Zhang, Y.Q. A new leak location method based on leakage acoustic waves for oil and gas pipelines. J. Loss Prev. Process Ind. 2015, 35, 236–246. [Google Scholar] [CrossRef]
  7. Lu, H.; Iseley, T.; Behbahani, S.; Fu, L. Leakage detection techniques for oil and gas pipelines: State-of-the-art. Tunn. Undergr. Space Technol. 2020, 98, 103249. [Google Scholar] [CrossRef]
  8. Doshmanziari, R.; Khaloozadeh, H.; Nikoofard, A. Gas pipeline leakage detection based on sensor fusion under model-based fault detection framework. J. Pet. Sci. Eng. 2020, 184, 106581. [Google Scholar] [CrossRef]
  9. Li, J.; Wang, L.; Zhang, C.; Long, Y.; Zhang, B. Gas cloud infrared image enhancement based on anisotropic diffusion. In Proceedings of the Advanced Environmental, Chemical, and Biological Sensing Technologies VIII, Orlando, FL, USA, 26 May 2011; pp. 197–204. [Google Scholar]
  10. Lu, Q.; Li, Q.; Hu, L.; Huang, L. An effective low-contrast SF6 gas leakage detection method for infrared imaging. IEEE Trans. Instrum. Meas. 2021, 70, 1–9. [Google Scholar] [CrossRef]
  11. Zhengzheng, T.; Bin, L.; Yongjian, S.; Qing, F.; Jianlin, L. A new method for SF6 gas leakage detection. In Proceedings of the International Conference on Computer Science & Education, Hefei, China, 24–27 August 2010; pp. 31–34. [Google Scholar]
  12. Yuan, P.; Wang, M. Thermal imaging detection method of leak gas clouds based on support vector machine. Acta Opt. Sinica. 2022, 42, 104–111. [Google Scholar]
  13. Wang, J.; Ji, J.; Ravikumar, A.P.; Savarese, S.; Brandt, A.R. VideoGasNet: Deep learning for natural gas methane leak classification using an infrared camera. Energy 2022, 238, 121516. [Google Scholar] [CrossRef]
  14. Shi, J.; Chang, Y.; Xu, C.; Khan, F.; Chen, G.; Li, C. Real-time leak detection using an infrared camera and faster R-CNN technique. Comput. Chem. Eng. 2020, 135, 106780. [Google Scholar] [CrossRef]
  15. Zhang, C.; Chen, F.; Su, L.; Yang, B.; Hu, Z.; Hong, W. VOC gas leakage detection using infrared image and convolutional neural networks. In Proceedings of the AOPC 2022: Infrared Devices and Infrared Technology; and Terahertz Technology and Applications, Beijing, China, 18–20 December 2022. [Google Scholar]
  16. Zhou, J.; Liu, Y.; Zhang, Y.; Hu, H.; Leng, Z.; Sun, F.; Chen, C. High-accuracy combustible gas cloud imaging system using YOLO-plume classification network. Front. Phys. 2025, 13, 1603047. [Google Scholar] [CrossRef]
  17. Ravikumar, A.P.; Wang, J.; McGuire, M.; Bell, C.S.; Zimmerle, D.; Brandt, A.R. Good versus good enough? empirical tests of methane leak detection sensitivity of a commercial infrared camera. Environ. Sci. Technol. 2018, 52, 2368–2374. [Google Scholar] [CrossRef] [PubMed]
  18. Huang, H.; Guo, W.; Zhang, Y. Detection of copy-move forgery in digital images using SIFT algorithm. In Proceedings of the IEEE Pacific-Asia Workshop on Computational Intelligence and Industrial Application, Wuhan, China, 19–20 December 2008; pp. 272–276. [Google Scholar]
  19. Xiang, Z.; Tan, H.; Ye, W. The excellent properties of a dense grid-based HOG feature on face recognition compared to gabor and LBP. IEEE Access. 2018, 6, 29306–29319. [Google Scholar] [CrossRef]
  20. Zeng, D.; Veldhuis, R.; Spreeuwers, L. A survey of face recognition techniques under occlusion. IET Biometrics. 2021, 10, 581–606. [Google Scholar] [CrossRef]
  21. Janai, J.; Güney, F.; Behl, A.; Geiger, A. Computer vision for autonomous vehicles: Problems, datasets and state of the art. Found. Trends Comput. Graph. Vis. 2021, 12, 1–308. [Google Scholar]
  22. Yu, R.; Liu, Z.; Li, X.; Lu, W.; Ma, D.; Yu, M.; Wang, J.; Li, B. Scene learning: Deep convolutional networks for wind power prediction by embedding turbines into grid space. Appl. Energy 2019, 238, 249–257. [Google Scholar] [CrossRef]
  23. Lago, J.; De Ridder, F.; De Schutter, B. Forecasting spot electricity prices: Deep learning approaches and empirical comparison of traditional algorithms. Appl. Energy 2018, 221, 386–405. [Google Scholar] [CrossRef]
  24. Song, Y.; Li, S. Gas leak detection in galvanised steel pipe with internal flow noise using convolutional neural network. Process Saf. Environ. 2020, 146, 736–744. [Google Scholar] [CrossRef]
  25. Ning, F.; Cheng, Z.; Meng, D.; Duan, S.; Wei, J. Enhanced spectrum convolutional neural architecture: An intelligent leak detection method for gas pipeline. Process Saf. Environ. Prot. 2021, 146, 726–735. [Google Scholar] [CrossRef]
  26. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Region-based convolutional networks for accurate object detection and segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 142–158. [Google Scholar] [CrossRef]
  27. Girshick, R. Fast R-CNN. arXiv 2015, arXiv:1504.08083. [Google Scholar] [CrossRef]
  28. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
  29. Poudel, R.P.; Bonde, U.; Liwicki, S.; Zach, C. Contextnet: Exploring context and detail for semantic segmentation in real-time. arXiv 2018, arXiv:1805.04554. [Google Scholar] [CrossRef]
  30. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  31. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
  32. Li, X.; Wang, W.; Hu, X.; Yang, J. Selective kernel networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 510–519. [Google Scholar]
  33. Zhang, H.; Wu, C.; Zhang, Z.; Zhu, Y.; Lin, H.; Zhang, Z.; Sun, Y.; He, T.; Mueller, J.; Manmatha, R.; et al. ResNeSt: Split-attention networks. arXiv 2020, arXiv:2004.08955. [Google Scholar]
  34. Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11531–11539. [Google Scholar]
  35. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 936–944. [Google Scholar]
  36. Xiang, X.; Zhang, Y.; Saddik, A.E. Pavement crack detection network based on pyramid structure and attention mechanism. IET Image Process. 2020, 14, 1580–1586. [Google Scholar] [CrossRef]
  37. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. arXiv 2017, arXiv:1703.06870. [Google Scholar]
  38. Shanmugam, D.; Blalock, D.; Balakrishnan, G.; Guttag, J. Better aggregation in test-time augmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 1194–1203. [Google Scholar]
  39. Zuo, J.; Hu, X.; Xu, L.; Xu, W.; Han, Y.; Li, Z. CH4 gas leakage detection method for low contrast infrared images. Infrared Phys. Technol. 2022, 127, 104473. [Google Scholar] [CrossRef]
  40. Zheng, H.; Liu, J.; Ren, X. Dim target detection method based on deep learning in complex traffic environment. J. Grid Comput. 2022, 20, 8. [Google Scholar] [CrossRef]
  41. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
  42. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. arXiv 2016, arXiv:1512.02325. [Google Scholar] [CrossRef]
  43. Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO Series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar] [CrossRef]
  44. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475. [Google Scholar]
Figure 1. The structure of the Faster Region-based convolutional neural network (Faster R-CNN).
Figure 2. Structure of the Gas R-CNN.
Figure 3. Schematic representation of the lateral connection structure.
Figure 4. Schematic representation of RoI Align.
Figure 5. Uncooled infrared imaging gas leak detection system. (a) CH4 infrared absorption spectrum. (b) Uncooled infrared imaging camera for gas leak detection.
Figure 6. Experimental setup for indoor gas detection. (a) Scenario 1; (b) Scenario 2.
Figure 7. Typical infrared image of a gas leak. (a) Scenario 1 (100 mL/min); (b) Scenario 1 pseudo-color image (100 mL/min); (c) Scenario 2 (100 mL/min); (d) Scenario 2 pseudo-color image (100 mL/min).
Figure 8. Loss curves of the conventional Faster R-CNN and the Gas R-CNN.
Figure 9. Curves for the Gas R-CNN model ablation experiments. (a) Leakage of 30 mL/min; (b) leakage of 100 mL/min; (c) leakage of 300 mL/min.
Figure 10. Detection results of the Gas R-CNN model: (a) Scenario 1 (original image); (b) Scenario 1 (brightness transformation); (c) Scenario 1 (Gaussian blur); (d) Scenario 1 (horizontal flip); (e) Scenario 2 (original image); (f) Scenario 2 (brightness transformation); (g) Scenario 2 (Gaussian blur); (h) Scenario 2 (horizontal flip).
Figure 11. Detection results of the Gas R-CNN model on the IOD-Video dataset. (a) Images of different brightness in dynamic scenarios (brighter, normal, darker). (b) Images of different brightness in static scenarios (brighter, normal, darker).
Table 1. Performance comparison in ablation experiments of Gas R-CNN.

| Models | AP (30 mL/min) | AP (100 mL/min) | AP (300 mL/min) | mAP | F1 (30 mL/min) | F1 (100 mL/min) | F1 (300 mL/min) | mF1 |
|---|---|---|---|---|---|---|---|---|
| Faster R-CNN (VGG16) | 0.7259 | 0.8144 | 0.7769 | 0.7724 | 0.5511 | 0.6041 | 0.5576 | 0.3259 |
| Faster R-CNN (ResNet50) | 0.7249 | 0.7628 | 0.8344 | 0.7740 | 0.6375 | 0.6905 | 0.6241 | 0.4234 |
| Faster R-CNN (ResNet50+FPN+RoIAlign) | 0.9424 | 0.9548 | 0.9594 | 0.9522 | 0.9131 | 0.8930 | 0.9172 | 0.8241 |
| Proposed | 0.9599 | 0.9647 | 0.9833 | 0.9693 | 0.9358 | 0.9125 | 0.9755 | 0.8860 |
Table 2. Comparison of the results of the different detection models.

| Models | AP (30 mL/min) | AP (100 mL/min) | AP (300 mL/min) | mAP |
|---|---|---|---|---|
| YOLOv3 | 0.8261 | 0.9417 | 0.9543 | 0.9074 |
| SSD | 0.8585 | 0.8693 | 0.8888 | 0.8722 |
| Faster R-CNN (EfficientNetB7) | 0.6997 | 0.8531 | 0.9179 | 0.8236 |
| YOLOX | 0.8758 | 0.9483 | 0.9579 | 0.9273 |
| YOLOv7 | 0.8961 | 0.9506 | 0.9694 | 0.9387 |
| Proposed | 0.9599 | 0.9647 | 0.9833 | 0.9693 |
Table 3. Gas R-CNN model detection results for different environments.

| Condition | AP | SD | Recall | SD |
|---|---|---|---|---|
| Dynamic + Darker | 0.9591 | 0.0410 | 0.9292 | 0.1113 |
| Dynamic + Normal | 0.9785 | 0.0220 | 0.9771 | 0.0476 |
| Dynamic + Brighter | 0.9609 | 0.0325 | 0.9526 | 0.0746 |
| Static + Darker | 0.9834 | 0.0031 | 0.9939 | 0.0063 |
| Static + Normal | 0.9920 | 0.0042 | 0.9994 | 0.0013 |
| Static + Brighter | 0.9711 | 0.0156 | 0.9887 | 0.0198 |