Article

Corrosion Damage Detection in Headrace Tunnel Using YOLOv7 with Continuous Wall Images

by Shiori Kubo, Nobuhiro Nakayama, Sadanori Matsuda and Pang-jo Chun *

1 Institute of Education, Research and Regional Cooperation for Crisis Management Shikoku, Kagawa University, Kagawa 760-8521, Japan
2 Nippon Koei Co., Ltd., Tsukuba-shi 300-1259, Japan
3 Department of Civil Engineering, The University of Tokyo, Tokyo 113-8656, Japan
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2023, 13(16), 9388; https://doi.org/10.3390/app13169388
Submission received: 7 June 2023 / Revised: 8 August 2023 / Accepted: 16 August 2023 / Published: 18 August 2023
(This article belongs to the Special Issue Advances in Image and Video Processing: Techniques and Applications)

Abstract

Infrastructure that was constructed during the high economic growth period of Japan is starting to deteriorate; thus, there is a need for the maintenance and management of these structures. The basis of maintenance and management is the inspection process, which involves finding and recording damage. However, in headrace tunnels, the water supply must be interrupted during inspection; thus, it is desirable to comprehensively photograph and record the tunnel wall and detect damage from the captured images so as to significantly reduce the water supply interruption time. Given this background, the aim of this study is to establish an investigation and assessment system for deformation points on the inner walls of headrace tunnels and to enable efficient maintenance and management of the tunnels. First, we develop a mobile headrace photography device that photographs the walls of the headrace tunnel with a charge-coupled device (CCD) line camera. Next, we develop a YOLOv7-based method for detecting the chalk marks made at damage locations during cleaning of the tunnel walls, using the wall images captured by the imaging system, and on this basis we develop a system that automatically accumulates and plots damage locations and distributions. For chalking detection using continuous wall surface images, a high accuracy of 99.02% is achieved. Furthermore, the system can evaluate the total number and distribution of deteriorated areas, which can be used to identify changes over time and the causes of deterioration phenomena. The developed system can significantly reduce the duration and cost of inspections and surveys, and the results can be used to select priority repair areas and to predict deterioration through data accumulation, contributing to the appropriate management of headrace tunnels.

1. Introduction

Headrace tunnels have various functions, such as flood control, securing urban and industrial water, and water purification. However, in recent years, similar to other types of infrastructure such as bridges, headrace tunnels have steadily deteriorated, and there have been concerns about safety, water quality, and adverse effects on the surrounding ground. Therefore, considerable effort is directed toward extending the life of headrace tunnels, including periodic inspections and repairs. Since 2014, the Ministry of Land, Infrastructure, Transport and Tourism of Japan has implemented a periodic inspection guideline requiring a detailed visual inspection once every five years for all bridges and tunnels longer than 2 m, and inspections are currently conducted at this frequency for the subject infrastructure throughout Japan [1].

However, inspections of headrace tunnels currently rely mainly on visual inspection by inspection engineers, which requires a large amount of labor. There are also concerns that such visual inspections may lead to individual differences in judging the state of deterioration or to deterioration being overlooked. Additionally, a situation unique to headrace tunnels is that the water supply must be interrupted for surveys; because this cannot be done frequently, it is difficult to confirm inspection results onsite. Therefore, the current approach is limited to low-cost, simple inspections during normal times, and in cases of severe deformities or progressing deterioration, the process is shifted in stages to more accurate inspections and investigations. Given these issues, for inspecting headrace tunnels, it is desirable to use photography devices that are easy to operate at various sites and can acquire high-quality wall images in a short period, enabling highly accurate inspections even in normal times, and to conduct damage detection with artificial intelligence (AI) using the captured images so that the results are not influenced by individual differences among inspection engineers.

Currently, the information is managed via inspection reports after visual inspections and investigations. The locations of deteriorated areas, the deterioration phenomena, and their conditions are recorded in the report as text and listed in a table. There are typically no photographs of deteriorated areas, and even when there are, they are stored as individual files; thus, they must be searched for and confirmed by the number in the inspection report. Hence, it is impossible to comprehensively grasp the deterioration status and its distribution from the previous survey. Therefore, in this study, we developed a photography device and damage detection method for a headrace tunnel whose walls are made of steel, and we conducted an experiment involving an actual headrace tunnel to validate the proposed method.

2. Literature Review

In many recent studies, including those conducted by the authors, specific objects have been automatically detected by combining AI with image-processing technology. For example, the authors have conducted research on detecting slope failure regions from aerial photographs taken during landslides [2], detecting buried pipes from ground-penetrating radar measurements [3], and detecting bridge damage and integrating it into three-dimensional data using Structure from Motion (SfM) [4,5]. In the field of civil engineering, there have been numerous studies on infrastructure, especially bridges, many of which involve crack detection. For example, Xu et al. [6] conducted crack detection for concrete bridges with semantic segmentation using atrous spatial pyramid pooling (ASPP), an extended convolutional layer, and Chun et al. [7,8] conducted crack detection for concrete structures using LightGBM, a machine-learning method, and crack detection for roads using ResNet [9], a convolutional neural network (CNN). There have been many other studies on deformation detection in concrete [10,11,12,13,14], but relatively few on steel damage. Nonetheless, there has been research [15,16,17,18] on the detection of corrosion points in steel, including the study of Shi et al. using fully convolutional networks [19]. As mentioned above, various methods have been discussed and actively researched for deterioration/damage detection in bridges. However, for headrace tunnels, which were the focus of the present study, effective maintenance and management methods have not been developed, despite their importance, and little research has been performed on the subject [20]. As a method for conducting inspection surveys, Otsu et al. [21] constructed a tunnel inspection system that uses a Mixed Reality device and a three-dimensional channel model, generated from design and construction data and overlaid with plan views recorded in past inspections, to visualize piping facilities that are difficult to check visually, such as those hidden in walls. Mori et al. developed a float-type image-capturing device equipped with a CCD camera for agricultural canal tunnels and detected cracks in the captured images [22]. In addition, they constructed a functional diagnosis system using electromagnetic radar that can be used when the water is cut off [23]. Chen et al. conducted a safety risk assessment in tunnels using a robot that can operate in underwater environments [24]. Although inspection and survey systems have been developed, there have been few studies on comprehensive headrace-tunnel maintenance and management systems, e.g., the construction of databases of inspection and survey results and analyses based on the obtained data. Thus, both the image-acquisition and analysis methods are still in early stages.

3. Research Purpose

Given this background, in the present study, we sought to efficiently survey and diagnose headrace tunnels by first developing a continuous nondestructive survey system that can acquire high-quality wall surface images in a short period. The developed system adopts a CCD line camera for wall image acquisition, which allows continuous images of the wall to be captured at 1.0 km/h. The time required for visual inspection depends on the number of deformed areas, but with this photography equipment, measurements can be performed in a short, fixed period and recorded. This is useful for headrace tunnels, where the survey time is often limited owing to restrictions on the water supply interruption time. Furthermore, the system is compact, can be easily disassembled and assembled inside a tunnel, and can be applied to headrace tunnels of various shapes and sizes, and we confirmed that it provides sufficient accuracy for the purposes of this research. Next, the positions of wall damage were detected using the continuous wall images captured by the aforementioned system. Currently, chalk is used to mark damage that is found during the removal and cleaning of shells stuck to the wall, but the chalk mark positions are difficult to identify, record, and tally because the marks are made ad hoc at each location. Therefore, in this study, we constructed a system that detects chalking positions using YOLOv7, an object-detection method based on deep learning, and records and plots the results. The corroded areas surrounded by chalk are the detection targets; the features of the corroded areas themselves are not included in the training, because these areas are small and their features are difficult to capture in the obtained images. By identifying locations where severe damage has occurred and tracking the increase in damage over time, the results clarify deterioration trends and can contribute to the advancement of headrace-tunnel asset management.

4. Detecting Chalking Position by Photographing and Analyzing Wall Surfaces

4.1. Development of Nondestructive Survey System Using CCD Line Camera for Capturing Continuous Wall Images

In this study, we developed a device that can comprehensively capture wall images in headrace tunnels, such as that shown in Figure 1, to record chalking positions, which are damage markers, and detect them using AI. The device is equipped with a charge-coupled device (CCD) line camera, and it constructs a continuous wall image by connecting the long, narrow images captured by the camera at regular intervals, recording and saving them as high-precision digitally developed images. Figure 2 presents an overview of the equipment used for continuous nondestructive surveys of the headrace tunnel in this study, a photograph of the actual equipment, and the state of the survey using this equipment. The device was designed with a focus on miniaturization and unitization so that it could be applied to headrace tunnels with a diameter of ≥1.3 m. Furthermore, the measurement speed is 1.0 km/h, allowing efficient operation, and the device can be used even during short water supply interruption periods. For example, if the images are captured from a distance of 2.5 m, the size of one pixel is 1 mm × 1 mm, which is sufficient for capturing chalking positions. The images are 8-bit for each RGB color, for a total of 16,777,216 colors.
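As a quick plausibility check of these specifications, the stated survey speed and pixel size together imply the line rate the camera must sustain. The sketch below is our own arithmetic; only the 1.0 km/h speed and 1 mm pixel size come from the paper.

```python
# Back-of-the-envelope check of the line-scan rate implied by the stated
# survey speed (1.0 km/h) and along-track pixel size (1 mm at a 2.5 m
# standoff). Only those two figures come from the paper; the rest is
# straightforward arithmetic.

speed_kmh = 1.0                                  # stated measurement speed
pixel_mm = 1.0                                   # stated along-track pixel size

speed_mm_per_s = speed_kmh * 1_000_000 / 3600.0  # 1 km/h ~= 277.8 mm/s
line_rate_hz = speed_mm_per_s / pixel_mm         # lines/s needed for 1 mm pixels

print(f"required line rate: {line_rate_hz:.0f} lines/s")  # -> ~278 lines/s
```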
An innovation of this device is the identification of the camera shooting position. The system collects acceleration data while driving, converts them into a displacement, and corrects the camera shooting position. However, errors can accumulate over long shooting distances when only the acceleration-based correction is used; thus, the system was designed to apply an additional software correction using the tunnel joint positions, which are known in advance. Aperture adjustment is also required: an illuminometer was installed on the upper part of the CCD line camera, and a function was incorporated to automatically adjust the aperture according to the illuminometer reading. The left side of Figure 3 presents an example of a continuous wall surface image used in this study. From top to bottom, the upper left, lower left, lower right, and upper right are shown as seen from the direction of travel in the tunnel. As the figure indicates, the entire wall surface was photographed.
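A minimal sketch of this two-stage position correction is given below, assuming uniformly sampled along-track acceleration and joint chainages known from design drawings. The linear distribution of the residual drift between joints is our assumption, not a detail reported by the authors.

```python
import numpy as np

def position_from_acceleration(acc, dt):
    """Double-integrate along-track acceleration (m/s^2) sampled every dt
    seconds into displacement (m). Integration errors accumulate with
    distance, which is why the joint-based correction below is applied."""
    vel = np.cumsum(acc) * dt
    return np.cumsum(vel) * dt

def correct_with_joints(pos, joint_idx, joint_chainage):
    """Re-anchor the integrated position at tunnel joints whose true
    chainage (m) is known in advance, ramping the correction linearly
    between consecutive joints so the corrected track stays continuous."""
    pos = pos.copy()
    prev_i, prev_corr = 0, 0.0
    for i, true_s in zip(joint_idx, joint_chainage):
        corr = pos[i] - true_s                        # accumulated drift here
        ramp = np.linspace(prev_corr, corr, i - prev_i + 1)[1:]
        pos[prev_i + 1:i + 1] -= ramp                 # drift is zeroed at joint i
        prev_i, prev_corr = i, corr
    pos[prev_i + 1:] -= prev_corr                     # hold the last correction
    return pos
```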

4.2. Chalking-Position Detection Model

4.2.1. Object Detection Using Deep Learning

In this study, we developed a method to detect the positions marked with chalk in the headrace tunnel using deep learning. An early example of deep-learning-based object-detection algorithms is the region-based convolutional neural network (R-CNN) proposed by Girshick et al. [25]. R-CNN uses processes such as selective search [26] to detect regions containing object candidates (region proposals) and classifies these regions using a CNN [27]. However, classifying the proposed regions one-by-one with the CNN takes a long time; thus, Fast R-CNN [28], which classifies the proposed regions using the features obtained when analyzing the input image with the CNN, was proposed. To further accelerate the process, Faster R-CNN [29], which uses a region proposal network that classifies rectangular areas in an image as objects or background, was proposed. However, Faster R-CNN has a limited processing speed because the algorithm is divided into two stages: object detection and classification of the detected objects. For damage detection in a headrace tunnel, as in the present study, the number of images to be analyzed inevitably increases because the entire tunnel is photographed comprehensively. Therefore, in addition to a high detection accuracy, a high detection speed is desirable. You Only Look Once (YOLO), an object-detection algorithm using deep learning [30], achieves fast processing by conducting object detection and classification in parallel, making it suitable for the present study. YOLO divides the input image into S × S grid cells, estimates object regions, and classifies each grid cell; the classification result of each detected area is then determined from the detected area and the per-cell classification results. A confidence score is calculated for each detection result, and the results whose scores exceed a threshold are output as the final results. Algorithms such as YOLOv2 [31] and YOLOv3 [32], which are based on YOLO and have higher processing speeds and detection accuracies, have also been proposed. YOLO evolves quickly; the current version is v8 (https://github.com/ultralytics/ultralytics, accessed on 18 August 2023), and further accuracy improvements are expected in the future. However, the accuracy improvement is likely only a few percentage points [33]; thus, in this paper, we report the results of YOLOv7, which was the latest version at the time of analysis. YOLOv7 is a supervised machine-learning model: input data prepared in advance and the corresponding output data are used as training data to learn the relationship between input and output. The training data generated in this study for model training and the training flow are discussed below.
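As a concrete illustration of the confidence-score filtering described above, the following sketch applies a threshold to raw detections. The (x1, y1, x2, y2, confidence, class) tuple layout is our assumption for illustration, not the actual YOLOv7 output format.

```python
# Minimal sketch of confidence-score filtering. Each detection is assumed
# to be (x1, y1, x2, y2, confidence, class_id); the real YOLOv7 output
# layout may differ.

CONF_THRESHOLD = 0.25  # the threshold adopted later in this paper

def filter_by_confidence(detections, threshold=CONF_THRESHOLD):
    """Keep only detections whose confidence score meets the threshold."""
    return [d for d in detections if d[4] >= threshold]

detections = [
    (120, 340, 210, 430, 0.91, 0),  # a clearly chalked rectangle
    (500, 100, 560, 150, 0.17, 0),  # likely a stray number or symbol
]
print(filter_by_confidence(detections))  # only the 0.91 detection survives
```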

4.2.2. Training

As mentioned previously, in the present study, we aimed to detect the chalked areas marking damage. Figure 3 shows an example of the image data used for training. We created a dataset for training and validation using a total of 3125 chalking positions from images of actual tunnels captured in 2016, 2017, and 2020. As shown in Figure 4, photographs were taken of the Fusa shield section No. 1 route (first headrace) in 2016, the underground pipe section (second headrace) in 2017, and the Fusa shield section No. 2 route (first headrace) in 2020; the images were thus captured at different locations. Additionally, as shown in Figure 3, some images contained multiple chalking positions. The total number of data for each type of label used as training and validation data is presented in Table 1. No previous studies have involved the detection of chalked areas in headrace tunnels; however, in previous studies on the detection of cracks in concrete structures, the number of data used for learning and validation ranged from several hundred to several tens of thousands [11,12,14,15], and a wide variety of detection methods were employed, such as CNNs, encoder–decoder networks, segmentation, and object detection [7,8,11]. In this study, YOLOv7—the object-detection method described above—was employed because the area marked with chalk is rectangular and, in contrast to cracks, the object to be detected is not continuous.
The data generated in this manner were used to train YOLOv7. Of the 3125 chalking positions, 2479 positions (80% of the total, excluding the test data) were used as training data, and the remaining 20% (614 positions) were used as validation data. The corresponding numbers of images were 415 and 145, respectively. The image size was 2528 pixels × 2528 pixels. An Nvidia GeForce RTX 3060 graphics processing unit was used for training. The learning rate was set to 0.01, and stochastic gradient descent was adopted as the optimization method. The batch size was set to 2, and each epoch used all of the prepared training data. A total of 1000 training epochs were conducted. Figure 5 presents the training and validation losses for each epoch. As shown, the loss decreased as the training progressed. If the amount of training data is too small or the number of epochs is too large, overtraining may occur, in which case the validation loss often increases. As Figure 5 indicates, the training conducted in this study showed no signs of overtraining and was conducted appropriately.
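The sketch below reproduces an image-level 80/20 split of the kind described above. The file names and fixed seed are illustrative assumptions; the authors' actual split was by chalking position, yielding 415 training and 145 validation images.

```python
import random

def split_dataset(image_paths, train_frac=0.8, seed=0):
    """Shuffle image paths with a fixed seed for reproducibility and
    split them 80/20 into training and validation subsets."""
    rng = random.Random(seed)
    paths = list(image_paths)
    rng.shuffle(paths)
    n_train = int(len(paths) * train_frac)
    return paths[:n_train], paths[n_train:]

# Hypothetical file names for illustration only.
all_images = [f"wall_{i:04d}.png" for i in range(560)]
train_imgs, val_imgs = split_dataset(all_images)
print(len(train_imgs), len(val_imgs))  # 448 112 for this toy list
```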
The accuracy of the trained model was tested using 415 images not present in the training data. The threshold for the confidence score, which indicates whether the bounding box used for detection contains an object, was set to 0.25. Mean average precision (mAP) is commonly used as an accuracy evaluation index in object detection [30,34]. The mAP depends on the intersection over union (IoU) threshold set at validation time, and the value corresponding to IoU = 0.5 is often used. The mAP is the average of the average precision (AP) calculated for each object class, but the only class targeted for detection in this study was the corrosion (chalking) position; thus, the calculated AP value was used for evaluation as-is. We obtained AP = 0.9353 at the commonly used IoU threshold of 0.5. Figure 6 shows an example of the detection results for the test data. Here, the blue frames indicate the annotation results, the pink frames indicate the detection results, and the numbers above the pink frames indicate the confidence scores. As shown in Figure 6, the damage positions in the image were detected using the trained model. However, when the bounding boxes indicating damage overlapped, as shown in the yellow frame in Figure 6, the confidence score was low. Although such cases reduced the detection accuracy, the damage was generally detected well.
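For readers unfamiliar with how AP at IoU = 0.5 is grounded in box matching, the sketch below shows the two core steps: a box IoU and a greedy, confidence-ordered matching of predictions to ground truth. This is a simplified single-class version written for illustration, not the authors' evaluation code.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def match_detections(preds, gt_boxes, iou_thr=0.5):
    """Greedily match predictions (highest confidence first) to unmatched
    ground-truth boxes at IoU >= iou_thr; returns box-level TP, FP, FN.
    Sweeping the confidence threshold and repeating this matching yields
    the precision-recall curve from which AP is computed."""
    preds = sorted(preds, key=lambda p: p["conf"], reverse=True)
    matched = set()
    tp = 0
    for p in preds:
        best_j, best_iou = None, iou_thr
        for j, g in enumerate(gt_boxes):
            if j in matched:
                continue
            iou = box_iou(p["box"], g)
            if iou >= best_iou:
                best_j, best_iou = j, iou
        if best_j is not None:
            matched.add(best_j)
            tp += 1
    return tp, len(preds) - tp, len(gt_boxes) - tp
```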

4.2.3. Novelty and Effectiveness of Method

In this study, a nondestructive investigation system using a CCD line camera was constructed, along with a model for detecting the chalked areas surrounding deteriorated areas. As mentioned previously, the survey system employs two corrections to identify the camera shooting position: a correction using acceleration data and a software correction using the tunnel joints, which together allow continuous image capture with little distortion, camera shake, or overlap. This is a unique feature of the system. Furthermore, the system is novel in that it not only employs a high-sensitivity CCD camera with a low minimum object illuminance, similar to previous studies [22], but also employs an illuminance meter to automatically adjust the camera's aperture according to the measured illuminance. The chalked-area detection model analyzes all the captured images of the entire tunnel; thus, the number of images and the data size are large. For this reason, YOLOv7, which has both high detection accuracy and high detection speed, is used. There are no previous examples of object detection of chalked areas in images of tunnel walls, and the detection of chalked areas is novel because the characteristics of the objects differ from those of cracks.
The use of the proposed system in headrace tunnels can reduce costs by approximately JPY 13.74 million per year and reduce the survey time to approximately one-fifth of that required for visual inspection and investigation. Because the water supply must be cut off during the inspection and investigation of headrace tunnels, shortening the survey time leads to a significant cost reduction. As infrastructure facilities age, this system is effective for reducing maintenance and management costs.

5. Demonstration Experiment and Position Identification in Actual Headrace Tunnel

5.1. North Chiba Headrace

The target of this study was the North Chiba Headrace (Figure 4), which is being inspected and repaired as part of a life-extension plan; at this headrace, we examined the validity and applicability of the proposed method. The North Chiba Headrace is approximately 28.5 km long and plays roles in removing inland water during floods, supplying city water to Tokyo, and purifying nearby rivers and swamps such as the Saka River and Tega Swamp. More than 20 years have passed since the start of its operation in 2000, and the deterioration of the water pipes has become apparent, necessitating periodic inspection and repair. Therefore, a maintenance plan was formulated for the headrace, and inspection and repair were started in each section. The first round of inspection and repair was conducted from 2013 to 2017, and the second round of inspection started in 2018. However, owing to the role played by headrace tunnels, a long water supply interruption period is impossible, and this applies to the North Chiba Headrace. Therefore, when deformities that need to be addressed are found while removing mussels clinging to the wall (Figure 7), they are circled with chalk. The chalk color depends on the type of deterioration: white chalk is used for corrosion, and red chalk for blistering (Figure 3, right).

5.2. Analysis Results

Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 show the results of the trained model for detecting chalking positions. In each figure, the blue frames indicate the annotation results, the pink frames indicate the detection results, and the numbers shown above the pink frames indicate the confidence scores of the detection results.
Figure 8 shows an example of detection near a ladder. Almost all the chalking positions were detected, and the confidence score was high, with an average value of 0.845. The detection results (pink frames) had almost the same shape as the blue frames indicating the annotation positions, and both the rectangular areas surrounding the deterioration positions and the characters indicating related information were detected. In the upper part of Figure 8, there is a ladder on the wall, and only half of the chalked area is visible to its upper left; nevertheless, the confidence score was as high as 0.803, and the IoU was as high as 0.866. The IoU, described in detail later in the paper, indicates the degree of overlap of regions and is expressed as a value between 0 and 1, with values closer to 1 indicating higher accuracy. As shown in Figure 8, almost all chalked areas were detected without significant deviations, which was confirmed by the mean IoU (mIoU), i.e., the average value of the IoU in Figure 8, being 0.840.
Similarly, in Figure 9, most of the damage positions were detected, and the confidence score was high, with an average value of 0.871. However, at one position near the bottom of Figure 9, the confidence score was 0.149. Most of the chalking positions in this study had square-like shapes with similar aspect ratios, but there were a few chalking positions with a vertically long rectangular shape, such as that located at the center of Figure 9. This chalking point had not only a high confidence score of 0.817 but also a high IoU of 0.826; that is, the developed model detected even these few characteristic points with a high confidence score and IoU. The mIoU in Figure 9 was also high, at 0.739. In Figure 9, there are two yellow chalking points slightly above the aforementioned vertical chalking point. Because only white chalked areas were targeted in this study, these yellow chalked areas were correctly not detected, which increased the accuracy.
In the case of Figure 10, similar to Figure 8 and Figure 9, the positions were detected with high accuracy, but there were positions where only the numbers and symbols that were chalked were detected incorrectly. These numbers and symbols were written using the same chalk as those in the annotations, and their sizes were very similar; thus, it is possible that such misdetection occurred. However, these detection results had very low confidence scores, with values of 0.173 and 0.179, as shown in Figure 10. Herein, all the results are presented, regardless of the confidence score; thus, there are false positives such as this. We consider that setting a threshold for the confidence score will increase the accuracy. However, even with these false detections, the mIoU was as high as 0.756, indicating that the detection was highly accurate.
Figure 11 shows a very large amount of chalking, with many symbols and characters around it. Although these positions were detected, the confidence scores were low, or only the positions with characters were detected, and the detection was not very accurate: the mIoU was 0.611 for this case, where the chalking was dense and there were many letters and numbers outside the rectangular areas. Distinguishing such characters and symbols from annotation positions is difficult even for humans; thus, when chalking, the corrosion positions to be detected should probably be color-coded separately from the accompanying symbols and characters.
Furthermore, Figure 12 contains both areas that were detected accurately and areas that were not, in the same region. The large chalked area in the center of the image was detected accurately, with a confidence score of 0.739 and an IoU of 0.816. The small chalked areas to its left and right were also generally detected accurately, because the chalked rectangles were clear. However, the fading chalked area overlapping the right side of the large central chalked area was detected incorrectly. Because this chalking point marks deterioration that appeared in the past, it is necessary to consider whether such a chalked area should be detected at all. The chalked area in the lower left of Figure 12 was detected accurately but with a low confidence score (0.207). This is because the image is a planar rendering of a curved tunnel wall, which distorted the chalk markings. Although there were many such distorted areas, only a few were not detected. The confidence scores of the fading chalked areas ranged from approximately 0.132 to 0.175, whereas the confidence score of the distorted area was 0.207. Therefore, it is better to discard detections with confidence scores below 0.200. For the densely chalked areas shown in Figure 11, some false detections had confidence scores of 0.300–0.500; however, the confidence score of the false-detection area in Figure 10 was 0.179. Thus, setting a threshold value is effective for increasing the detection accuracy.
Finally, Figure 13 shows the detection results for chalking points larger than those in Figure 12. In this case, two large chalking points were annotated (the left and right areas), whereas the detection results indicated two chalking points in the left area and three in the right area. For the two areas with confidence scores exceeding 0.800, the rectangular area indicated by the chalk was detected, but for the other areas, only the text was detected. Such large areas include many deteriorated spots (Figure 13, circled areas within chalked areas), and whether such an area should be marked as one rectangle or two depends on the judgment of the individual worker. Therefore, the distribution of the chalked areas should not be judged solely from their number; rather, their size should also be considered to obtain an accurate grasp of the deterioration status.
Furthermore, we evaluated all the detection results using the IoU, which is expressed by Equation (1) and indicates the degree of overlapping of regions.
$$\mathrm{IoU} = \frac{TP}{TP + FP + FN} \qquad (1)$$
Here, TP, FP, and FN are as shown in Table 2. All these values have units of pixels. This index allows the accuracy of the detection results to be evaluated in pixel units. The IoU for the proposed method was 0.777. Additionally, three accuracy indices—the precision, recall, and F-score—were calculated using the following equations (Equations (2)–(4)). Their values were 0.8066, 0.9902, and 0.8891, respectively. As shown in Figure 6, the overlapping of bounding boxes that indicate damage reduces the value of the above indices, but it is not considered to be a major problem for broadly determining the distribution of damage.
$$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (2)$$

$$\mathrm{Recall} = \frac{TP}{TP + FN} \qquad (3)$$

$$F\text{-score} = \frac{TP}{TP + \frac{1}{2}(FP + FN)} \qquad (4)$$
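Equations (1)–(4) translate directly into code. The sketch below implements them and, as a sanity check, plugs in the box-level counts from Table 3; note that the paper computes these indices pixel-wise, so the box counts reproduce the reported values only approximately.

```python
def iou(tp, fp, fn):
    return tp / (tp + fp + fn)            # Equation (1)

def precision(tp, fp):
    return tp / (tp + fp)                 # Equation (2)

def recall(tp, fn):
    return tp / (tp + fn)                 # Equation (3)

def f_score(tp, fp, fn):
    return tp / (tp + 0.5 * (fp + fn))    # Equation (4)

# Box-level counts from Table 3 (the paper's reported indices use pixel counts).
tp, fp, fn = 608, 146, 6
print(round(precision(tp, fp), 4))        # ~0.8064 (paper: 0.8066)
print(round(recall(tp, fn), 4))           # ~0.9902
print(round(f_score(tp, fp, fn), 4))      # ~0.8889 (paper: 0.8891)
```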
Table 3 presents a breakdown of the detection results, and Table 4 presents examples of detected and undetected cases. As shown in Table 3, there were 608 positions where chalking was detected accurately. As indicated by the first row of Table 4, there were cases of single and multiple corrosion positions, and these were detected even when the shapes differed slightly, e.g., when the surrounding rectangular area was larger than usual. Only six positions were not detected; as indicated by the second row of Table 4, these included cases where the chalking was applied at seams or dents and where the chalk color was faded. Thus, the results indicated that the proposed model can generally detect marks that are clearly chalked on a flat surface. The number of incorrectly detected positions was 146, and as indicated by the third row of Table 4, these included positions chalked in red or yellow and positions where only numbers or symbols were detected. However, approximately 86% of these were positions where parts of chalking positions or numbers/symbols were detected; thus, it can be said that "locations with chalking" were generally captured. The results suggest that the damage positions were generally detected well. Excluding the false detections, 608 of the 614 chalking positions were detected, indicating an extremely high accuracy.

5.3. Plot of Chalking Positions in Headrace Tunnel

To efficiently detect deformation positions on the inner wall of the headrace tunnel, the proposed system visualizes the damage positions throughout the tunnel, as shown in Figure 14a, using the positional information provided by the captured continuous wall images and the chalking-position detection results. This is possible because YOLO outputs the bounding box of each detected damage instance; here, the central point of the bounding box was used as the representative point. As shown in Figure 14a, there were many detections near the entrance (0–20 m in the direction of travel) but very few in the range of approximately 40–80 m. Figure 14b,c present continuous images of the wall 5–10 m and 60–65 m from the entrance, respectively. In Figure 14b, there are many chalking positions, whereas in Figure 14c, there are hardly any; thus, the detection result is considered valid. Additionally, there was considerable damage at approximately 10 m; this was a joint part, where the scattering of spatter during fabrication made leakage more likely, promoting corrosion. Thus, detecting the chalking positions throughout the tunnel made it possible to determine the distribution of damage locations and their tendencies, which will be useful for future maintenance and management.
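A minimal plotting sketch in the spirit of Figure 14a is shown below; the detection dictionary keys and the pixel-to-metre conversion are our assumptions, since the paper does not describe its plotting code.

```python
import matplotlib.pyplot as plt

def plot_chalk_positions(detections, tunnel_length_m=80.0):
    """Scatter plot of detected chalking positions along the tunnel, in
    the spirit of Figure 14a. Each detection is assumed to carry the
    centre of its bounding box already converted from image pixels to
    (metres along the tunnel, metres around the circumference) using the
    corrected camera shooting position."""
    xs = [d["along_m"] for d in detections]
    ys = [d["around_m"] for d in detections]
    plt.figure(figsize=(10, 2.5))
    plt.scatter(xs, ys, s=8, c="crimson")
    plt.xlim(0, tunnel_length_m)
    plt.xlabel("Distance along tunnel [m]")
    plt.ylabel("Circumferential position [m]")
    plt.title("Detected chalking positions")
    plt.tight_layout()
    plt.savefig("chalk_positions.png", dpi=200)

# Two hypothetical detections for illustration.
plot_chalk_positions([{"along_m": 7.2, "around_m": 1.1},
                      {"along_m": 9.8, "around_m": 3.4}])
```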

6. Discussion

In this study, chalking points were detected by YOLOv7 using continuous wall images. The analysis results indicated that when the chalking position was clear, it was detected and classified with high accuracy. However, detection was difficult when marks other than the targeted rectangular areas, such as numbers and symbols that were not subject to detection, were concentrated around the chalking. Nonetheless, the presence or absence of chalking was detected reliably, and we developed a model with few detection omissions; the experimental results indicated no omissions of non-rectangular chalk marks. However, chalking that differs from the normal pattern, e.g., marks located on a step or unusually large chalked areas, may not be detected, depending on the shooting conditions and the chalked area. Although there were only a few such cases in this study, they are likely to be more prevalent in tunnels with many uneven surfaces or deteriorated areas, and we believe that training data should be added to handle such special cases.

Currently, headrace tunnels are inspected visually by engineers, who take photographs as needed to record the extent of deterioration. This not only causes individual differences in the judgment of deterioration and oversights but also makes it difficult to create a database for efficient maintenance and management. Capturing continuous developed images of the inner walls, as in this study, makes it possible to grasp the condition of the wall surfaces comprehensively and to build a database with data stored at each inspection. Furthermore, deteriorated areas are detected by AI using the captured images, and monitoring their number and distribution over time can reveal deterioration trends, leading to more efficient maintenance and advanced asset management of headrace tunnels.

7. Conclusions

We sought to increase the efficiency of detecting deformation positions on the inner walls of headrace tunnels by developing a continuous nondestructive survey system that captures the wall surfaces, and we used the continuous wall images captured with this device to detect chalking positions on the inner wall of the North Chiba Headrace using YOLOv7. Our findings are summarized below.
  • The model developed in this study allowed us to determine the location of deterioration with an accuracy as high as 99.02%.
  • Chalking detection from continuous wall images allows quantitative and qualitative evaluation of the total number and distribution of deteriorated areas, facilitating the identification of changes over time and the factors that cause deterioration phenomena.
  • The cost and time associated with investigation and diagnosis are reduced by approximately JPY 13.74 million per year and to approximately one-fifth, respectively, by using the developed continuous nondestructive survey system.
  • Effective maintenance and management can be achieved through the acquisition of data that can be easily stored in a database and the development of a series of systems to monitor the deterioration status.
  • The continuous wall surface images and the chalking locations detected using the images are recorded, plotted, and stored in a database, leading to an advanced asset-management system for headrace tunnels.
We were able to significantly reduce the time required for inspection by using this device to photograph the inside of the headrace tunnel, making efficient maintenance and management possible. Additionally, we recorded and saved the state of the wall surface as continuous images, which made it easier to identify and tabulate the deterioration positions and to follow changes over time. By capturing continuous images of the headrace tunnel wall and determining the chalking positions, we significantly reduced the cost of inspection and recording, which previously relied on visual inspection. Furthermore, it is expected that the distribution and characteristics of the deterioration positions will clarify the causes of deterioration and facilitate the proposal of repair plans that match these characteristics and trends. Additionally, creating a database of the location, type, and extent of the deterioration discovered at each inspection will lead to the establishment of a deterioration prediction method that can be utilized for future inspection and repair planning, as well as a reduction in lifecycle costs.

Although researchers have conducted inspections and surveys of the inner walls of headrace tunnels using robots, no previous study has combined the detection of chalked areas by AI, using continuous wall surface images captured by a nondestructive survey system based on a CCD line camera, with quantitative and qualitative evaluation of the inspection and survey results; this makes the present study unique. The proposed system allows efficient maintenance and management of headrace tunnels.
As discussed in a previous section, the number and distribution of chalking points (which corresponded to corrosion points in this study) in the headrace tunnel were determined using the captured images. From the high detection accuracy and recall, we concluded that the actual number and distribution of chalking points were similar to the detection results, indicating that the developed system can be used to maintain and manage headrace tunnels. The data used for training and validation in this study were obtained from different areas of the tunnel, with wall surfaces in different conditions; thus, the developed model was validated under varied conditions. However, we believe that it is necessary to check whether similar detection performance can be achieved for headrace tunnels with different characteristics. Additionally, the damage severity was not discussed in this study, and we hope to determine it from distribution trends and other factors. Another task for future research is the development of a model that is more robust against non-target marks and conditions, e.g., the color of the headrace wall and stray numbers and symbols; for example, many images containing only numbers and symbols could be prepared to increase the amount of training data. Furthermore, if a method can be developed that detects damage without chalk marks, inspection costs will be reduced significantly; hence, this is one of our research goals. Corrosion damage in a headrace tunnel is difficult to detect because it is very small relative to the tunnel as a whole, but we aim to achieve this by improving the photography methods and increasing the amount of data.

Author Contributions

Conceptualization, S.K. and P.-j.C.; Methodology, S.K. and P.-j.C.; Software, S.K., N.N. and S.M.; Validation, S.K.; Formal analysis, S.K.; Investigation, S.K., N.N., S.M. and P.-j.C.; Resources, P.-j.C.; Data curation, P.-j.C.; Writing—original draft, S.K.; Writing—review & editing, S.K., N.N., S.M. and P.-j.C.; Visualization, S.K., N.N. and S.M.; Supervision, P.-j.C.; Project administration, P.-j.C.; Funding acquisition, P.-j.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by JSPS KAKENHI Grant Number 21H01417. Additionally, part of this research was conducted as commissioned research with the Kanto Regional Development Bureau of the Ministry of Land, Infrastructure, Transport and Tourism of Japan. We express our gratitude for this support.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are used with permission from the Ministry of Land, Infrastructure, Transport and Tourism of Japan, and thus not available to the public.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ministry of Land, Infrastructure, Transport and Tourism; Roads Bureau. Bridge Periodic Inspection Guidelines. Available online: https://www.mlit.go.jp/road/sisaku/yobohozen/tenken/yobo4_1.pdf (accessed on 1 August 2023).
  2. Kubo, S.; Yamane, T.; Chun, P.J. Study on Accuracy Improvement of Slope Failure Region Detection Using Mask R-CNN with Augmentation Method. Sensors 2022, 22, 6412. [Google Scholar] [CrossRef] [PubMed]
  3. Chun, P.J.; Suzuki, M.; Kato, Y. Iterative application of generative adversarial networks for improved buried pipe detection from images obtained by ground-penetrating radar. Comput. Civ. Infrastruct. Eng. 2023. early view. [Google Scholar] [CrossRef]
  4. Yamane, T.; Chun, P.J.; Dang, J.; Honda, R. Recording of bridge damage areas by 3D integration of multiple images and reduction of the variability in detected results. Comput. Civ. Infrastruct. Eng. 2023. early view. [Google Scholar] [CrossRef]
  5. Yamane, T.; Chun, P.J.; Honda, R. Detecting and localising damage based on image recognition and structure from motion, and reflecting it in a 3D bridge model. Struct. Infrastruct. Eng. 2022. [Google Scholar] [CrossRef]
  6. Xu, H.; Su, X.; Wang, Y.; Cai, H.; Cui, K.; Chen, X. Automatic Bridge Crack Detection Using a Convolutional Neural Network. Appl. Sci. 2019, 9, 2867. [Google Scholar] [CrossRef]
  7. Chun, P.; Izumi, S.; Yamane, T. Automatic detection method of cracks from concrete surface imagery using two-step light gradient boosting machine. Comput. Civ. Infrastruct. Eng. 2020, 36, 61–72. [Google Scholar] [CrossRef]
  8. Chun, P.; Yamane, T.; Tsuzuki, Y. Automatic Detection of Cracks in Asphalt Pavement Using Deep Learning to Overcome Weaknesses in Images and GIS Visualization. Appl. Sci. 2021, 11, 892. [Google Scholar] [CrossRef]
  9. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. Available online: https://arxiv.org/abs/1512.03385 (accessed on 18 August 2023).
  10. Koch, C.; Georgieva, K.; Kasireddy, V.; Akinci, B.; Fieguth, P. A review on computer vision based defect detection and condition assessment of concrete and asphalt civil infrastructure. Adv. Eng. Inform. 2015, 29, 196–210. [Google Scholar] [CrossRef]
  11. Pu, R.; Ren, G.; Li, H.; Jiang, W.; Zhang, J.; Qin, H. Autonomous Concrete Crack Semantic Segmentation Using Deep Fully Convolutional Encoder-Decoder Network in Concrete Structures Inspection. Buildings 2022, 12, 2019. [Google Scholar] [CrossRef]
  12. Zhang, Q.; Barri, K.; Babanajad, S.K.; Alavi, A.H. Real-Time Detection of Cracks on Concrete Bridge Decks Using Deep Learning in the Frequency Domain. Engineering 2021, 7, 1786–1796. [Google Scholar] [CrossRef]
  13. Mohan, A.; Poobal, S. Crack detection using image processing: A critical review and analysis. Alex. Eng. J. 2018, 57, 787–798. [Google Scholar] [CrossRef]
  14. Cha, Y.J.; Choi, W.; Büyüköztürk, O. Deep Learning-Based Crack Damage Detection Using Convolutional Neural Networks. Comput.-Aided Civ. Infrastruct. Eng. 2017, 32, 361–378. [Google Scholar] [CrossRef]
  15. Zhang, G.; Wang, B.; Yan, Z.; Li, Y.; Yang, H. Rust Detection of Steel Structure via One-Class Classification and L2 Sparse Representation with Decision Fusion. IEICE Trans. Inf. Syst. 2019, 103, 450–453. [Google Scholar] [CrossRef]
  16. Petricca, L.; Moss, T.; Figueroa, G.; Broen, S. Corrosion Detection Using A.I.: Comparison of Standard Computer Vision Techniques And Deep Learning Model. In Proceedings of the Sixth International Conference on Computer Science, Engineering and Information Technology, Vienna, Austria, 21–22 May 2016; Volume 6, pp. 91–99. [Google Scholar] [CrossRef]
  17. Forkan, A.R.M.; Kang, Y.B.; Jayaraman, P.P.; Liao, K.; Kaul, R.; Morgan, G.; Ranjan, R.; Sinha, S. CorrDetector: A Framework for Structural Corrosion Detection from Drone Images using Ensemble Deep Learning. Expert Syst. Appl. 2022, 193, 116461. [Google Scholar] [CrossRef]
  18. Hoang, N.D.; Duc, T.V. Image Processing-Based Detection of Pipe Corrosion Using Texture Analysis and Metaheuristic-Optimized Machine Learning Approach. Comput. Intell. Neurosci. 2019, 2019, 8097213. [Google Scholar] [CrossRef] [PubMed]
  19. Shi, J.; Dang, J.; Zuo, R. Bridge damage cropping-and-stitching segmentation using fully convolutional network based on images from UAVs. In Bridge Maintenance, Safety, Management, Life-Cycle Sustainability and Innovations; CRC Press: Boca Raton, FL, USA, 2021; pp. 264–270. [Google Scholar] [CrossRef]
  20. Wang, H.; Wu, X.; Chen, Y.; Liu, Z. Diversion Tunnel Structural Inspection and Assessment Using a Robotic System. In Proceedings of the 38th IAHR World Congress, Panama City, Panama, 1–6 September 2019; Volume 9, 14p. [Google Scholar] [CrossRef]
  21. Otsu, S. Survey and Inspection Methods for Waterway Tunnels Using MR Devices and Efficiency Improvement of Maintenance and Management. J. JCMA 2021, 73, 64–68. [Google Scholar]
  22. Mori, M.; Mori, T.; Tokashiki, M.; Nakaya, T.; Fujiwara, T.; Saito, Y. Development of Diagnosis System of Irrigation Tunnel under Water Servicing. Trans. Jpn. Soc. Irrig. Drain. Rural. Eng. 2012, 80, 87–95. [Google Scholar] [CrossRef]
  23. Mori, M.; Saito, Y.; Takaiwa, T.; Inagaki, M. Application of Ground Penetrating Radar for Diagnosis of Agricultural Irrigation and Drainage Tunnels. Water Land Environ. Eng. 2008, 76, 809–812. [Google Scholar] [CrossRef]
  24. Chen, Y.; Chen, J.; Wang, H.; Gong, Y.; Feng, Y.; Liu, Z.; Qi, N.; Liu, M.; Li, Y.; Xie, H. Key technology of underwater inspection robot system for large diameter and long headrace tunnel. J. Tsinghua Univ. (Sci. Technol.) 2023, 63, 1015–1031. [Google Scholar]
  25. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 580–587. [Google Scholar]
  26. Uijlings, J.R.; Van De Sande, K.E.; Gevers, T.; Smeulders, A.W. Selective Search for Object Recognition. Int. J. Comput. Vis. 2013, 104, 154–171. [Google Scholar] [CrossRef]
  27. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  28. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448. Available online: https://arxiv.org/abs/1504.08083 (accessed on 18 August 2023).
  29. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; Curran Associates, Inc.: Red Hook, NY, USA, 2015. 14p. Available online: https://arxiv.org/abs/1506.01497 (accessed on 18 August 2023).
  30. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788. Available online: https://arxiv.org/abs/1506.02640 (accessed on 18 August 2023).
  31. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. Available online: https://arxiv.org/abs/1612.08242 (accessed on 18 August 2023).
  32. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. In Proceedings of the Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; 6p. Available online: https://arxiv.org/abs/1804.02767 (accessed on 18 August 2023).
  33. Terven, J.; Cordova-Esparza, D.M. A Comprehensive Review of YOLO: From YOLOv1 to YOLOv8 and Beyond. arXiv 2023, arXiv:2304.00501. [Google Scholar] [CrossRef]
  34. Ren, S.; He, K.; Girshick, R.; Zhang, X.; Sun, J. Object Detection Networks on Convolutional Feature Maps. In Proceedings of the Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; 8p. Available online: https://arxiv.org/abs/1504.06066 (accessed on 18 August 2023).
Figure 1. Nondestructive continuous digital scanning system for water canals.
Figure 2. Photography equipment used (left) and shooting conditions inside the headrace tunnel (right).
Figure 3. Example of the images used as training and validation data (inner wall).
Figure 4. Outline of the plan for extending the service life of the Kita–Chiba headrace tunnel.
Figure 5. Losses for the training and validation of the chalking detection model.
Figure 6. Example of detection results. In the lower part of the image, the chalking positions are separated from each other; thus, many positions were detected with high confidence. Meanwhile, in the upper center part of the image, there is a large amount of chalking overlap, and although some of the positions had low confidence, most of them were detected.
Figure 7. Removing mussels.
Figure 8. Example of chalking detection results. The blue frames indicate the annotation results, the pink frames indicate the detection results, and the pink numbers indicate the confidence scores at the time of each detection. Almost all the chalking positions were detected with high confidence scores.
Figure 9. Example of chalking detection results. The blue frames indicate the annotation results, the pink frames indicate the detection results, and the pink numbers indicate the confidence scores at the time of each detection. Detection was achieved for not only chalking positions with square-shaped areas having similar aspect ratios but also shapes that did not exist in large numbers, such as vertically or horizontally long shapes. The yellow chalked area in the upper center part was not detected, and it can be said that the color was recognized.
Figure 10. Example of chalking detection results. The blue frames indicate the annotation results, the pink frames indicate the detection results, and the pink numbers indicate the confidence scores at the time of each detection. As in Figure 9, even vertical chalking positions were detected. However, some of the positions where numbers and symbols were written in white chalk were incorrectly detected.
Figure 11. Example of chalking detection results. The blue frames indicate the annotation results, the pink frames indicate the detection results, and the pink numbers indicate the confidence scores at the time of each detection. When there were not only many rectangular chalking positions but also many cases with notes such as numbers or symbols, multiple detections were conducted for a single annotation position. Most annotation areas contained both rectangles and numbers; thus, locations with only numbers and symbols written were also detected.
Figure 12. Example of chalking detection results. The blue frames indicate the annotation results, the pink frames indicate the detection results, and the pink numbers indicate the confidence scores at the time of each detection. The large chalked area in the center of this figure was detected accurately as well, although it is different from the usual chalked area. The lower left detection area in this figure has a high IoU, although the image is distorted, resulting in a low confidence score.
Figure 13. Example of chalking detection results. The blue frames indicate the annotation results, the pink frames indicate the detection results, and the pink numbers indicate the confidence scores at the time of each detection. Although only two large areas on the left and right were considered as chalking points during annotation, two chalked areas were detected in the left area and three chalked areas were detected in the right area. Since these large areas contain many deteriorated areas (circled areas within a rectangular area), it is not necessarily a mistake to detect more than the annotated data.
Figure 14. Plot of chalking positions in the headrace tunnel. (a) Plot of positions where chalking was detected from the analysis results; there were many detections near the entrance (positions of 0–20 m in direction of travel). Meanwhile, there were few detections near 40–80 m. (b) Image of the wall 5–10 m from the entrance. As indicated by the detection results in (a), there were many chalking positions here. (c) Image of the wall 60–65 m from the entrance. As indicated by the detection results in (a), there were relatively few chalking positions here.
Table 1. Training and validation data.

Class                                                Training Data                 Validation Data               Example Images
White chalking position (pitting corrosion: <3 mm)   2479 positions (415 images)   614 positions (145 images)    [image]
Table 2. Confusion matrix for classification.

                            True Class: Positive     True Class: Negative
Predicted Class: Positive   TP (True Positive)       FP (False Positive)
Predicted Class: Negative   FN (False Negative)      TN (True Negative)
Table 3. Breakdown of the detection results.

Total number of chalking locations: 614
Total number of detections: 754
Correctly detected (TP): 608 (99.02% of all chalking locations)
False detections (FP): 146
     Part or multiple parts of a white chalk area (e.g., numbers and symbols): 126 (86.30% of false detections)
     Background: 14 (9.59% of false detections)
     Other colors (bright yellow, white and yellow, etc.): 6 (4.11% of false detections)
     Repair sheet: 0 (0% of false detections)
No detection (FN): 6 (0.98% of all chalking locations)
Table 4. Example detection results for each class.

True Positive (TP): [example images]
False Negative (FN): [example images]
False Positive (FP): [example images]
