# Application of Closed-Circuit Television Image Segmentation for Irrigation Channel Water Level Measurement


## Abstract


The proposed approach achieved an R^{2} (Coefficient of Determination) of 0.99, an MAE (Mean Absolute Error) of 0.01 m, and an ME (Maximum Error) of 0.05 m. The F1 score over the 313 test datasets was 0.99, indicating that the water surface was sufficiently segmented and that the water level measurement errors were within the irrigation system's acceptable range. Although this methodology requires initial work to build the datasets and the model, it enables accurate and low-cost water level measurement.

## 1. Introduction

## 2. Materials and Methods

#### 2.1. Dataset

#### 2.2. Hardware and Software

#### 2.3. Segmentation Model Construction (U-Net and Link-Net)

#### 2.4. Water Level Estimation

First, a linear line was applied. The formula for the full-resolution datasets was [y = x × 3 × 10^{−6} − 1.782], and the formula for the ROI datasets was [y = x × 3 × 10^{−5} + 0.197]. Secondly, a quadratic line was applied to the ROI datasets. The quadratic line was derived considering the relationship between pixels and levels in the train datasets, and the formula for the ROI datasets was [y = x^{2} × 2 × 10^{−9} − x × 3 × 10^{−5} + 0.785].
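The three conversion lines above can be written as small helper functions (a minimal sketch; `pixels` stands for the water-pixel count of a segmented mask, and the coefficients are the rounded values reported in the text):

```python
# Pixel-count -> water level (m) conversion lines from Section 2.4.
# Coefficients are the rounded values reported in the text.

def level_full_linear(pixels: float) -> float:
    """Linear line fitted on full-resolution mask images."""
    return pixels * 3e-6 - 1.782

def level_roi_linear(pixels: float) -> float:
    """Linear line fitted on ROI mask images."""
    return pixels * 3e-5 + 0.197

def level_roi_quadratic(pixels: float) -> float:
    """Quadratic line fitted on ROI mask images."""
    return pixels ** 2 * 2e-9 - pixels * 3e-5 + 0.785
```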

#### 2.5. Performance Evaluation

R^{2} (Coefficient of Determination), MAE (Mean Absolute Error), RMSE (Root Mean Squared Error), and ME (Maximum Error) metrics were used to evaluate the predicted water levels. The R^{2} measures how well the estimations replicate the observations, based on the proportion of total variation that remains unexplained. The MAE quantifies how large the error is on average, while the RMSE is sensitive to outliers. The ME represents the largest error and serves as a check on robustness. In Table 4, ${y}_{i}$ denotes the water levels measured from the ultrasonic sensor, $\overline{y}$ denotes the average water level, and ${\widehat{y}}_{i}$ denotes the estimated water levels from this study.
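The four metrics can be computed directly from the observed and estimated series; a minimal NumPy sketch consistent with the formulas in Table 4:

```python
import numpy as np

# Evaluation metrics from Table 4 (y: observed levels from the ultrasonic
# sensor, y_hat: levels estimated from the segmented water pixels).

def r2(y, y_hat):
    """Coefficient of determination: 1 - unexplained / total variation."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)     # unexplained variation
    ss_tot = np.sum((y - y.mean()) ** 2)  # total variation
    return 1.0 - ss_res / ss_tot

def mae(y, y_hat):
    """Mean absolute error (average error magnitude)."""
    return float(np.mean(np.abs(np.asarray(y, float) - np.asarray(y_hat, float))))

def rmse(y, y_hat):
    """Root mean squared error (sensitive to outliers)."""
    return float(np.sqrt(np.mean((np.asarray(y, float) - np.asarray(y_hat, float)) ** 2)))

def me(y, y_hat):
    """Maximum absolute error (robustness check)."""
    return float(np.max(np.abs(np.asarray(y, float) - np.asarray(y_hat, float))))
```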

## 3. Results and Discussion

#### 3.1. Semantic Segmentation

#### 3.1.1. Optimal Epoch Decision

#### 3.1.2. Segmented Results with 313 Test Datasets

#### 3.2. Water Level Estimation

#### 3.2.1. Full-Resolution Image and Linear Line for Conversion

The R^{2} of the linear line was calculated to be 0.76, and the time-series water level was calculated using the generated mask images and this linear line. Figure 7b shows the scatter plot between the predicted and observed water levels for the 313 test datasets. The MAE was calculated to be 0.03 m, the RMSE 0.06 m, the ME 0.25 m, and the R^{2} 0.84. Considering that the maximum water level in the channel was 1.10 m, the magnitude of the deviation was relatively large. Figure 7c shows the time-series data of the predicted and observed water levels. Obs, Pred, and Diff indicate the water levels measured using the ultrasonic sensor, the water levels simulated using a full-resolution image and a linear line, and the difference between Obs and Pred, respectively. In the time-series graph, Obs shows all the values used for training, validation, and testing. As the test images were segmented almost identically to the ground-truth mask images with an F1 score of 0.997, the segmentation process was outstanding, whereas the process of converting water pixels into water levels performed poorly. This was because the water pixels of the full-resolution image were not sufficiently correlated with the water level. Therefore, ROIs that were expected to correlate more strongly with the water level were selected from the full-resolution image.
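The conversion line is a least-squares fit between water-pixel counts and observed levels in the train datasets; a minimal sketch with NumPy (the pixel/level pairs below are synthetic stand-ins, not the study's data):

```python
import numpy as np

# Hypothetical (pixel count, level) pairs standing in for the train datasets.
pixels = np.array([500_000, 700_000, 900_000, 1_100_000], dtype=float)
levels = pixels * 3e-6 - 1.782 + np.array([0.01, -0.02, 0.015, -0.005])  # noisy line

# Degree-1 least-squares fit: level = slope * pixels + intercept.
slope, intercept = np.polyfit(pixels, levels, deg=1)

# Convert pixel counts to levels and inspect the residual deviations.
predicted = slope * pixels + intercept
deviation = levels - predicted
```

Replacing `deg=1` with `deg=2` gives the quadratic conversion line used in Section 3.2.3.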

#### 3.2.2. ROI Image and Linear Line for Conversion

The R^{2} of the linear line was calculated to be 0.94, and Figure 8b shows the scatter plot of the predicted and observed values for the 313 test datasets. The MAE was calculated to be 0.05 m, the RMSE 0.06 m, the ME 0.13 m, and the R^{2} 0.94. Compared to the case without an ROI, the metrics improved, with deviations not exceeding 0.1 m for most of the test datasets. The time-series water levels were calculated as in Figure 8c. The test images were segmented almost identically to the ground-truth images with an F1 score of 0.999, which was higher than in the case without ROIs. However, the water level deviations were still too large for practical application, owing to the scatter around the linear line in Figure 8b. Therefore, after selecting ROIs to segment images, this study applied a quadratic line for the pixel conversion instead of a linear line.
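Selecting an ROI amounts to cropping a fixed window out of each full-resolution frame, using the resolutions from Table 1 (a minimal sketch; the crop origin below is a hypothetical value, not the study's):

```python
import numpy as np

# A stand-in for one 1280 x 720 RGB CCTV frame (Table 1: 1280 x 720 x 3).
frame = np.zeros((720, 1280, 3), dtype=np.uint8)

# Hypothetical top-left corner of the 256 x 256 region of interest.
ROI_TOP, ROI_LEFT, ROI_SIZE = 300, 600, 256

# Crop the same fixed window from every frame before segmentation.
roi = frame[ROI_TOP:ROI_TOP + ROI_SIZE, ROI_LEFT:ROI_LEFT + ROI_SIZE, :]
```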

#### 3.2.3. ROI Image and Quadratic Line for Conversion

The R^{2} of the quadratic line was calculated to be 0.99, and Figure 9b shows the scatter plot of the observed and predicted values with ROI test images and a quadratic conversion line. The MAE was calculated to be 0.01 m, the RMSE 0.01 m, the ME 0.06 m, and the R^{2} 0.99. The time-series water levels were calculated as in Figure 9c. In the graph, most of the predicted levels followed the observed levels closely. Compared to the previous cases, the metrics improved significantly. In particular, the deviations did not exceed 0.05 m for most of the predicted levels, and the predicted water levels were highly correlated with the observed values.

#### 3.2.4. Overall Comparisons for Three Approaches

Table 7 compares the three approaches, including the counts of gross errors (N_{E>0.02} and N_{E>0.03}), and the ROI image with a quadratic line showed the best metrics compared to the other cases. The datasets with a constant 10 min interval were selected for comparison and showed a maximum deviation of 0.04 m for the method of an ROI image with a quadratic line.
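The gross error counts N_{E>t} in Table 7 are the number of test samples whose absolute deviation exceeds a threshold t (in meters); a minimal sketch:

```python
import numpy as np

def gross_error_count(y, y_hat, threshold):
    """Number of samples with |y - y_hat| > threshold (N_E>t in Table 7)."""
    errors = np.abs(np.asarray(y, float) - np.asarray(y_hat, float))
    return int(np.sum(errors > threshold))
```

For the 313 test datasets, this is applied with thresholds of 0.05, 0.03, 0.02, and 0.01 m.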

Since the segmentation showed high performance, potential errors from the ultrasonic sensor need to be considered for the application of this methodology. Also, to improve the accuracy, it is necessary to verify the results with at least three different measurements, including staff gauges.

## 4. Discussion

A previous study reported errors exceeding 0.01 m (N_{E>0.01}) in 46 out of 1152 trials, and Akbari [2] reported an RMSE of 0.01 m. Previous studies have shown that higher accuracy can be expected if a staff gauge is applied from a fixed viewpoint. The application at the irrigation channel would be easier and more accurate if a staff gauge were installed.

## 5. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## References

- Koech, R.; Langat, P. Improving irrigation water use efficiency: A review of advances, challenges and opportunities in the Australian context. Water
**2018**, 10, 1771. [Google Scholar] [CrossRef] - Akbari, M.; Gheysari, M.; Mostafazadeh-Fard, B.; Shayannejad, M. Surface irrigation simulation-optimization model based on meta-heuristic algorithms. Agric. Water Manag.
**2018**, 201, 46–57. [Google Scholar] [CrossRef] - Costabile, P.; Costanzo, C.; Gangi, F.; De Gaetani, C.I.; Rossi, L.; Gandolfi, C.; Masseroni, D. High-resolution 2D modelling for simulating and improving the management of border irrigation. Agric. Water Manag.
**2023**, 275, 108042. [Google Scholar] [CrossRef] - Conde, G.; Quijano, N.; Ocampo-Martinez, C. Modeling and control in open-channel irrigation systems: A review. Annu. Rev. Control
**2021**, 51, 153–171. [Google Scholar] [CrossRef] - Koech, R.; Rod, S.; Malcolm, G. Automation and control in surface irrigation systems: Current status and expected future trends. In Proceedings of the Southern Region Engineering Conference, Toowoomba, Australia, 11–12 November 2010. [Google Scholar]
- Weyer, E. Control of irrigation channels. IEEE Trans. Control Syst. Technol.
**2008**, 16, 664–675. [Google Scholar] [CrossRef] - Lee, J. Evaluation of automatic irrigation system for rice cultivation and sustainable agriculture water management. Sustainability
**2022**, 14, 11044. [Google Scholar] [CrossRef] - Kuswidiyanto, L.W.; Nugroho, A.P.; Jati, A.W.; Wismoyo, G.W.; Murtiningrum; Arif, S.S. Automatic water level monitoring system based on computer vision technology for supporting the irrigation modernization. IOP Conf. Ser. Earth Environ. Sci.
**2021**, 686, 012055. [Google Scholar] [CrossRef] - Lozano, D.; Arranja, C.; Rijo, M.; Mateos, L. Simulation of automatic control of an irrigation canal. Agric. Water Manag.
**2010**, 97, 91–100. [Google Scholar] [CrossRef] - Masseroni, D.; Moller, P.; Tyrell, R.; Romani, M.; Lasagna, A.; Sali, G.; Facchi, A.; Gandolfia, C. Evaluating performances of the first automatic system for paddy irrigation in Europe. Agric. Water Manag.
**2018**, 201, 58–69. [Google Scholar] [CrossRef] - Hamdi, M.; Rehman, A.; Alghamdi, A.; Nizamani, M.A.; Missen, M.M.S.; Memon, M.A. Internet of Things (IoT) Based Water Irrigation System. Int. J. Online Biomed. Eng.
**2021**, 17, 69–80. [Google Scholar] [CrossRef] - Park, C.E.; Kim, J.T.; Oh, S.T. Analysis of stage-discharge relationships in the irrigation canal with auto-measuring system. J. Korean Soc. Agric. Eng.
**2012**, 54, 109–114. [Google Scholar] - Hong, E.M.; Nam, W.-H.; Choi, J.-Y.; Kim, J.-T. Evaluation of water supply adequacy using real-time water level monitoring system in paddy irrigation canals. J. Korean Soc. Agric. Eng.
**2014**, 56, 1–8. [Google Scholar] - Lee, J.; Noh, J.; Kang, M.; Shin, H. Evaluation of the irrigation water supply of agricultural reservoir based on measurement information from irrigation canal. J. Korean Soc. Agric. Eng.
**2020**, 62, 63–72. [Google Scholar] - Seibert, J.; Strobi, B.; Etter, S.; Hummer, P.; van Meerveld, H.J. Virtual staff gauges for crowd-based stream level observations. Front. Earth Sci.
**2019**, 7, 70. [Google Scholar] [CrossRef] - Kuo, L.-C.; Tai, C.-C. Automatic water-level measurement system for confined-space applications. Rev. Sci. Instrum.
**2021**, 92, 085001. [Google Scholar] [CrossRef] - Liu, W.-C.; Chung, C.-K.; Huang, W.-C. Image-based recognition and processing system for monitoring water levels in an irrigation and drainage channel. Paddy Water Environ.
**2023**, 21, 417–431. [Google Scholar] [CrossRef] - Sharma, N.; Sheifali, G.; Deepika, K.; Sultan, A.; Hani, A.; Yousef, A.; Asadullah, S. U-Net model with transfer learning model as a backbone for segmentation of Gastrointestinal tract. Bioengineering
**2023**, 10, 119. [Google Scholar] [CrossRef] - Long, J.; Evan, S.; Trevor, D. Fully convolutional networks for semantic segmentation. arXiv
**2015**, arXiv:1411.4038. [Google Scholar] - Zhou, L.; Zhang, C.; Wu, M. D-LinkNet: LinkNet with pretrained encoder and dilated convolution for high resolution satellite imagery road extraction. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
- Muhadi, N.A.; Abdullah, A.F.; Bejo, S.K.; Mahadi, M.R.; Mijic, A. Deep Learning semantic segmentation for water level estimation using surveillance camera. Appl. Sci.
**2021**, 11, 9691. [Google Scholar] [CrossRef] - Vianna, P.; Farias, R.; de Albuquerque Pereira, W.C. U-Net and SegNet performances on lesion segmentation of breast ultrasonography images. Res. Biomed. Eng.
**2021**, 37, 171–179. [Google Scholar] [CrossRef] - Chang, R.; Hou, D.; Chen, Z.; Chen, L. Automatic extraction of urban impervious surface based on SAH-Unet. Remote Sens.
**2023**, 15, 1042. [Google Scholar] [CrossRef] - Abdollahi, A.; Pradhan, B.; Alamri, A.M. An ensemble architecture of deep convolutional Segnet and Unet networks for building semantic segmentation from high-resolution aerial images. Geocarto Int.
**2022**, 37, 3355–3370. [Google Scholar] [CrossRef] - Kim, J.; Jeon, H.; Kim, D.-J. Extracting flooded areas in Southeast Asia using SegNet and U-Net. Korean J. Remote Sens.
**2020**, 36, 1095–1107. [Google Scholar] - Hies, T.; Parasuraman, S.B.; Wang, Y.; Duester, R.; Eikaas, H.S.; Tan, K.M. Enhanced water-level detection by image processing. In Proceedings of the 10th International Conference on Hydroinformatics, Hamburg, Germany, 14–18 July 2012. [Google Scholar]
- Lin, Y.-T.; Lin, Y.-C.; Han, J.-Y. Automatic water-level detection using single-camera images with varied poses. Measurement
**2018**, 127, 167–174. [Google Scholar] [CrossRef] - De Vitry, M.M.; Kramer, S.; Wegner, J.D.; Leitão, J.P. Scalable flood level trend monitoring with surveillance cameras using a deep convolutional neural network. Hydrol. Earth Syst. Sci.
**2019**, 23, 4621–4634. [Google Scholar] [CrossRef] - Zaffaroni, M.; Rossi, C. Water segmentation with deep learning models for flood detection and monitoring. In Proceedings of the 17th Information Systems for Crisis Response and Management, Blacksburg, VA, USA, 24–27 May 2020. [Google Scholar]
- Lopez-Fuentes, L.; Rossi, C.; Skinnemoen, H. River segmentation for flood monitoring. In Proceedings of the 2017 IEEE International Conference on Big Data, Boston, MA, USA, 11–14 December 2017. [Google Scholar]
- Akiyama, T.S.; Junior, J.M.; Gonçalves, W.N.; Bressan, P.O.; Eltner, A.; Binder, F.; Singer, T. Deep learning applied to water segmentation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. (ISPRS Arch.)
**2020**, 43, 1189–1193. [Google Scholar] [CrossRef] - Vandaele, R.; Dance, S.L.; Ojha, V. Deep learning for automated river-level monitoring through river-camera images: An approach based on water segmentation and transfer learning. Hydrol. Earth Syst. Sci.
**2021**, 25, 4435–4453. [Google Scholar] [CrossRef] - Bai, G.; Hou, J.; Zhang, T.; Li, B.; Han, H.; Wang, T.; Hinkelmann, R.; Zhang, D.; Guo, L. An intelligent water level monitoring method based on SSD algorithm. Measurement
**2021**, 185, 110047. [Google Scholar] [CrossRef] - Kim, K.; Kim, M.; Yoon, P.; Bang, J.; Myoung, W.-H.; Choi, J.-Y.; Choi, G.-H. Application of CCTV images and semantic segmentation model for water level estimation of irrigation channel. J. Korean Soc. Agric. Eng.
**2022**, 64, 63–73. [Google Scholar] - Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. arXiv
**2015**, arXiv:1505.04597v1. [Google Scholar] - Chaurasia, A.; Culurciello, E. LinkNet: Exploiting encoder representations for efficient semantic segmentation. arXiv
**2017**, arXiv:1707.03718v1. [Google Scholar] - Chaudhary, P.; D’Aronco, S.; Leitão, J.P.; Schindler, K.; Wegner, J.D. Water level prediction from social media images with a multi-task ranking approach. ISPRS J. Photogramm. Remote Sens.
**2020**, 167, 252–262. [Google Scholar] [CrossRef] - Zhang, Z.; Zhou, Y.; Liu, H.; Gao, H. In-situ water level measurement using NIR-imaging video camera. Flow Meas. Instrum.
**2019**, 67, 95–106. [Google Scholar] [CrossRef] - Lee, Y.-J.; Kim, P.-S.; Kim, S.J.; Jee, Y.K.; Joo, U.J. Estimation of water loss in irrigation canals through field measurement. J. Korean Soc. Agric. Eng.
**2008**, 50, 13–21. [Google Scholar] - Mohammadi, A.; Rizi, A.P.; Abbasi, N. Field measurement and analysis of water losses at the main and tertiary levels of irrigation canals: Varamin Irrigation Scheme, Iran. Glob. Ecol. Conserv.
**2019**, 18, e00646. [Google Scholar] [CrossRef] - Sultan, T.; Latif, A.; Shakir, A.S.; Kheder, K.; Rashid, M.U. Comparison of water conveyance losses in unlined and lined watercourses in developing countries. Tech. J. Univ. Eng. Technol. Taxila
**2014**, 19, 23. [Google Scholar]

**Figure 1.** Flow chart of this study (data preprocessing, water segmentation, and water level estimation).

**Figure 2.** Example images of this study: (**a**) original images; (**b**) associated manually annotated ground-truth images for full resolution and ROI.

**Figure 3.** Results of backbone model comparison using ResNet-18, ResNet-50, VGGNet-16, and VGGNet-19 (U-Net segmentation): (**a**) train and validation losses with ResNet-18 and ResNet-50 for 500 epochs; (**b**) train and validation losses with VGGNet-16 and VGGNet-19 for 500 epochs.

**Figure 4.** Segmentation model comparison using U-Net and Link-Net (ResNet-50 as backbone model): train and validation losses for 500 epochs.

**Figure 5.** Box plots of the eight configuration results using 313 test datasets: (**a**) U-Net; (**b**) Link-Net.

**Figure 6.** Comparison among different combinations: (**a**) the first three rows show the full-resolution images and (**b**) the other three rows show the ROI images. Original images and their associated ground-truth images are presented in the first two columns. Subsequent columns show the segmentation output of four combinations (U-Net or Link-Net with ResNet-50 or VGGNet-16).

**Figure 7.** Overall water level estimation results with full-resolution images and a linear line for conversion: (**a**) scatter plot between the number of water pixels and water levels; (**b**) scatter plot of observed and predicted water levels; and (**c**) time-series water levels of observation, prediction, and difference with randomly selected test data.

**Figure 8.** Overall water level estimation results with ROI images and a linear line for conversion: (**a**) scatter plot between the number of water pixels and water levels; (**b**) scatter plot of observed and predicted water levels; and (**c**) time-series water levels of observation, prediction, and difference with randomly selected test data.

**Figure 9.** Overall water level estimation results with ROI images and a quadratic line for conversion: (**a**) scatter plot between the number of water pixels and water levels; (**b**) scatter plot of observed and predicted water levels; and (**c**) time-series water levels of observation, prediction, and difference with randomly selected test data.

**Figure 10.** Time-series water levels of observation, prediction, and difference with a constant 10 min interval from 3 June 16:40 to 4 June 17:00: (**a**) full-resolution image with a linear line; (**b**) ROI image with a linear line; and (**c**) ROI image with a quadratic line for conversion.

| Image Type | File Format | Resolution (Full-Resolution) | Resolution (ROI) | Train | Validation | Test | Water Level Range (m) |
|---|---|---|---|---|---|---|---|
| Raw image | PNG | 1280 × 720 × 3 | 256 × 256 × 3 | 1125 | 126 | 313 | 0.63~1.10 |
| Mask image | TIFF | 1280 × 720 × 1 | 256 × 256 × 1 | 1125 | 126 | 313 | 0.63~1.10 |

**Encoder**

| Stage | ResNet-18 | ResNet-50 | VGGNet-16 | VGGNet-19 |
|---|---|---|---|---|
| Conv1 | $\left[7\times 7,64\right]$ × 1 | $\left[7\times 7,64\right]$ × 1 | $\left[3\times 3,64\right]$ × 2 | $\left[3\times 3,64\right]$ × 2 |
| Conv2 | $\left[\begin{array}{c}3\times 3,64\\ 3\times 3,64\end{array}\right]$ × 2 | $\left[\begin{array}{c}1\times 1,64\\ 3\times 3,64\\ 1\times 1,256\end{array}\right]$ × 3 | $\left[3\times 3,128\right]$ × 2 | $\left[3\times 3,128\right]$ × 2 |
| Conv3 | $\left[\begin{array}{c}3\times 3,128\\ 3\times 3,128\end{array}\right]$ × 2 | $\left[\begin{array}{c}1\times 1,128\\ 3\times 3,128\\ 1\times 1,512\end{array}\right]$ × 4 | $\left[3\times 3,256\right]$ × 3 | $\left[3\times 3,256\right]$ × 4 |
| Conv4 | $\left[\begin{array}{c}3\times 3,256\\ 3\times 3,256\end{array}\right]$ × 2 | $\left[\begin{array}{c}1\times 1,256\\ 3\times 3,256\\ 1\times 1,1024\end{array}\right]$ × 6 | $\left[3\times 3,512\right]$ × 3 | $\left[3\times 3,512\right]$ × 4 |
| Conv5 | $\left[\begin{array}{c}3\times 3,512\\ 3\times 3,512\end{array}\right]$ × 2 | $\left[\begin{array}{c}1\times 1,512\\ 3\times 3,512\\ 1\times 1,2048\end{array}\right]$ × 3 | $\left[3\times 3,512\right]$ × 3 | $\left[3\times 3,512\right]$ × 4 |

Each encoder block follows Convolution → BatchNorm → ReLU Activation → Zero Padding.

**Decoder**

| Stage | U-Net (ResNet-18) | U-Net (ResNet-50) | U-Net (VGGNet-16 and VGGNet-19) | Link-Net |
|---|---|---|---|---|
| Conv1 | $\left[3\times 3,256\right]$ × 2 | $\left[1\times 1,128\right]$ × 1 $\left[3\times 3,128\right]$ × 1 $\left[1\times 1,256\right]$ × 1 | $\left[1\times 1,512\right]$ × 1 $\left[3\times 3,512\right]$ × 1 $\left[1\times 1,1024\right]$ × 1 | $\left[1\times 1,128\right]$ × 1 $\left[3\times 3,128\right]$ × 1 $\left[1\times 1,512\right]$ × 1 |
| Conv2 | $\left[3\times 3,128\right]$ × 2 | $\left[1\times 1,64\right]$ × 1 $\left[3\times 3,64\right]$ × 1 $\left[1\times 1,128\right]$ × 1 | $\left[1\times 1,256\right]$ × 1 $\left[3\times 3,256\right]$ × 1 $\left[1\times 1,512\right]$ × 1 | $\left[1\times 1,128\right]$ × 1 $\left[3\times 3,128\right]$ × 1 $\left[1\times 1,512\right]$ × 1 |
| Conv3 | $\left[3\times 3,64\right]$ × 2 | $\left[1\times 1,32\right]$ × 1 $\left[3\times 3,32\right]$ × 1 $\left[1\times 1,64\right]$ × 1 | $\left[1\times 1,128\right]$ × 1 $\left[3\times 3,128\right]$ × 1 $\left[1\times 1,256\right]$ × 1 | $\left[1\times 1,128\right]$ × 1 $\left[3\times 3,128\right]$ × 1 $\left[1\times 1,512\right]$ × 1 |
| Conv4 | $\left[3\times 3,32\right]$ × 2 | $\left[1\times 1,16\right]$ × 1 $\left[3\times 3,16\right]$ × 1 $\left[1\times 1,64\right]$ × 1 | $\left[1\times 1,64\right]$ × 1 $\left[3\times 3,64\right]$ × 1 $\left[1\times 1,64\right]$ × 1 | $\left[1\times 1,64\right]$ × 1 $\left[3\times 3,64\right]$ × 1 $\left[1\times 1,128\right]$ × 1 |
| Conv5 | $\left[3\times 3,16\right]$ × 2 | $\left[1\times 1,16\right]$ × 1 $\left[3\times 3,16\right]$ × 1 $\left[1\times 1,16\right]$ × 1 | $\left[1\times 1,16\right]$ × 1 $\left[3\times 3,16\right]$ × 1 $\left[1\times 1,16\right]$ × 1 | $\left[1\times 1,32\right]$ × 1 $\left[3\times 3,32\right]$ × 1 $\left[1\times 1,16\right]$ × 1 |

Each U-Net decoder block follows Up-sampling → Concatenation → [Convolution → BatchNorm → ReLU Activation] × 2. Each Link-Net decoder block follows [Convolution → BatchNorm → ReLU Activation] × 1 → Up-sampling → [Convolution → BatchNorm → ReLU Activation] × 2 → Add.

| Model | Optimizer | Batch Size | Loss Function | Evaluation |
|---|---|---|---|---|
| U-Net | Adam | 8 | Binary cross-entropy | F1 score |
| Link-Net | Adam | 8 | Binary cross-entropy | F1 score |

| Metric | Description | Formula | Value Range | Unit |
|---|---|---|---|---|
| True Positive | Sum of correctly identified water pixels | TP | 0~No. of pixels | ea |
| True Negative | Sum of correctly identified non-water pixels | TN | 0~No. of pixels | ea |
| False Positive | Sum of pixels incorrectly identified as water | FP | 0~No. of pixels | ea |
| False Negative | Sum of pixels incorrectly identified as non-water | FN | 0~No. of pixels | ea |
| Precision (P) | Proportion of pixels detected as water that are truly water | $\frac{TP}{TP+FP}$ | 0~1 | - |
| Recall (R) | Proportion of ground-truth water pixels detected | $\frac{TP}{TP+FN}$ | 0~1 | - |
| F1 Score | Harmonic mean of precision and recall | $2\times \frac{R\times P}{R+P}$ | 0~1 | - |
| Water level values | Observed water level at time $i$ | ${y}_{i}$ | 0~water level | m |
| | Estimated water level at time $i$ | ${\widehat{y}}_{i}$ | 0~water level | m |
| | Average of observed water levels | $\overline{y}$ | 0~water level | m |
| R^{2} | Coefficient of Determination | $1-\frac{\sum _{i=1}^{n}{\left({y}_{i}-{\widehat{y}}_{i}\right)}^{2}}{\sum _{i=1}^{n}{\left({y}_{i}-\overline{y}\right)}^{2}}$ | 0~1 | - |
| MAE | Mean Absolute Error | $\frac{1}{n}\sum _{i=1}^{n}\lvert {y}_{i}-{\widehat{y}}_{i}\rvert $ | 0~∞ | m |
| RMSE | Root Mean Squared Error | $\sqrt{\frac{\sum _{i=1}^{n}{\left({y}_{i}-{\widehat{y}}_{i}\right)}^{2}}{n}}$ | 0~∞ | m |
| ME | Maximum Error | - | 0~∞ | m |
| N_{E>0.05} | Number of gross errors (>0.05 m) | - | 0~313 | ea |
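The pixelwise precision, recall, and F1 score in Table 4 can be computed directly from a predicted mask and its ground-truth mask; a minimal sketch:

```python
import numpy as np

def f1_score(pred_mask, true_mask):
    """Pixelwise F1 between binary masks (1 = water, 0 = non-water)."""
    pred = np.asarray(pred_mask).astype(bool)
    true = np.asarray(true_mask).astype(bool)
    tp = np.sum(pred & true)    # correctly identified water pixels
    fp = np.sum(pred & ~true)   # pixels incorrectly labelled as water
    fn = np.sum(~pred & true)   # water pixels that were missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```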

| | Segmentation Model | ResNet-18 | ResNet-50 | VGGNet-16 | VGGNet-19 |
|---|---|---|---|---|---|
| Number of parameters | U-Net | 14,340,570 | 32,561,114 | 23,752,273 | 29,061,969 |
| | Link-Net | 11,521,690 | 28,783,386 | 20,325,137 | 25,634,833 |
| Time to train (h) | U-Net | 146 | 261 | 221 | 248 |
| | Link-Net | 144 | 256 | 226 | 252 |

| | Segmentation Model | ResNet-18 | ResNet-50 | VGGNet-16 | VGGNet-19 |
|---|---|---|---|---|---|
| Train loss | U-Net | 0.00242 | 0.00293 | 0.00238 | 0.00389 |
| | Link-Net | 0.00116 | 0.00357 | 0.00205 | 0.00270 |
| Validation loss | U-Net | 0.00517 | 0.00553 | 0.00866 | 0.00865 |
| | Link-Net | 0.00572 | 0.00521 | 0.00771 | 0.00969 |
| Epoch | U-Net | 57 | 55 | 57 | 53 |
| | Link-Net | 76 | 72 | 72 | 77 |

| Dataset | Image and Conversion Line | R^{2} | MAE (m) | RMSE (m) | Maximum Error (m) | N_{E>0.05} | N_{E>0.03} | N_{E>0.02} | N_{E>0.01} |
|---|---|---|---|---|---|---|---|---|---|
| Dataset selected randomly | Full-resolution with linear | 0.84 | 0.03 | 0.06 | 0.25 | 36 | 67 | 141 | 286 |
| | ROI with linear | 0.94 | 0.05 | 0.06 | 0.13 | 1 | 208 | 222 | 236 |
| | ROI with quadratic | 0.99 | 0.01 | 0.01 | 0.06 | 1 | 7 | 33 | 136 |
| Dataset selected with constant 10 min interval | Full-resolution with linear | 0.05 | 0.04 | 0.05 | 0.10 | 39 | 83 | 102 | 111 |
| | ROI with linear | 0.86 | 0.05 | 0.05 | 0.08 | 59 | 114 | 129 | 135 |
| | ROI with quadratic | 0.86 | 0.01 | 0.01 | 0.04 | 0 | 9 | 29 | 65 |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Kim, K.; Choi, J.-Y.
Application of Closed-Circuit Television Image Segmentation for Irrigation Channel Water Level Measurement. *Water* **2023**, *15*, 3308.
https://doi.org/10.3390/w15183308
