2.1. Measurement System
Flash floods have given rise to the greatest natural disasters on Madeira Island, with significant loss of human life. Given the orography of the island, with the highest point at 1862 m, heavy rains have caused strong water flows in the streams of the city of Funchal. From the beginning of the 19th century to the end of 2010, 38 flash floods were recorded on Madeira Island [32]. About 1000 people died in the flash flood of 1803, mostly in Funchal. More recently, the flash flood of 20 February 2010 resulted in more than 45 deaths. The weather station near Funchal recorded an accumulated rainfall above 4000 mm between October 2009 and February 2010, with some days recording a precipitation above 100 mm [33].
Figure 1 shows an image of the channel used in the experimental setup to support the development of the proposed technique. This is one of the three main water streams of Funchal, with a high potential risk of flooding. The figure also illustrates the region of Madeira where the study took place. Stone or concrete walls typically flank these urban streams. Installing a staff gauge to provide a reference system for waterline detection proved difficult or impossible because the water flow is too strong during heavy rain events. Thus, we built a reference system from naturally existing control points on the channel wall.
We developed a low-cost image acquisition system based on a Raspberry Pi 3 model B and a Pi NoIR camera V1 [34]. This infrared camera allows for daytime as well as nighttime operation under different luminosity conditions. The camera was installed on the ceiling of a balcony in a building facing the stream. This method of installing the camera has several advantages. As the camera is under a balcony, and therefore protected from the rain, the lens is sheltered from raindrops. This installation also protects the camera from direct sunlight, which would saturate the image, and eliminates the need for a mast to suspend the camera, minimizing the environmental impact. The system used the Wi-Fi network of the house, avoiding the installation of a dedicated communication system. For the camera power supply, the options were not limited to the house’s electrical system; a renewable energy system could also be used. For this study, we installed an 80 W solar panel on the building terrace for power supply and a 100 Ah @ 12 V battery for energy storage. This solution makes the system autonomous in terms of power consumption.
2.2. Camera Calibration
The main parameters specified by the manufacturer for the Pi NoIR camera V1 are the resolution of 2592 × 1944 pixels, the pixel size of 1.4 μm × 1.4 μm, and the focal length of 3.6 mm. For camera calibration, we evaluated the focal length from experimental data. The dimension of an object in a plane parallel to the camera plane was given by

w = ws d/f = P T d/f, (1)

where P is the dimension of the object in pixels, T is the pixel size in mm, d is the distance between the camera lens and the plane of the object in meters, w is the dimension of the object in meters, ws = P T is the image dimension of the object in mm, and f is the focal length in mm. We determined the focal length by applying (1) to known distances, which yielded a result very close to the value provided by the manufacturer.
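The relation implied by these definitions, w = P T d/f, can be checked numerically. The following sketch uses the Pi NoIR V1 pixel size and focal length with a hypothetical object size and distance; it is an illustration of the similar-triangles relation, not the deployed calibration code:

```python
def object_dimension(P, T, d, f):
    """Dimension (m) of an object in a plane parallel to the camera plane.

    P: object size in pixels, T: pixel size in mm,
    d: lens-to-object distance in m, f: focal length in mm.
    """
    ws = P * T          # image dimension of the object in mm
    return ws * d / f   # similar triangles: w / d = ws / f

# Pi NoIR V1 parameters (T = 1.4 um = 0.0014 mm, f = 3.6 mm) with a
# hypothetical object 100 px wide at 23.68 m:
w = object_dimension(100, 0.0014, 23.68, 3.6)   # about 0.92 m
```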
We found it necessary to determine some local parameters to define the region of interest (ROI) used to measure the water level.
Figure 2 shows an image taken by the camera, with a resolution of 1280 × 720. In the Pi camera V1, the change in aspect ratio to achieve this resolution corresponds to 75% of the full sensor size in the vertical dimension. For data processing, the image should include the entire vertical region of the wall and part of the water zone. From initial experiments, we found that the wall was nine meters high and had an inclination of 5.8° with respect to the vertical. As the water stream runs between two streets, and for privacy reasons, an additional 15% cut at the top of the images became necessary to avoid capturing vehicles. The images were then converted to the resolution of 1280 × 720. The region of interest used in the water level measurements is marked in Figure 2 by a red rectangle. The figure also includes the control points used in this work, defined by the lines and the green rectangle. The region of interest included the entire vertical zone of the concrete face. From measurements, we determined that the concrete was two meters high.
Direct measurements proved to be a difficult task due to the harsh access to the water stream. Thus, we determined the distance between the camera and the stream wall by an indirect method.
Figure 3 shows the reference system used in the calculations. The plane XY is the object plane and xy is the image plane. The goal was to obtain the distance d, the horizontal angle αx, and the vertical angle αy that characterize the wall plane. These angles were defined in the xz and yz planes, respectively.
The horizontal and vertical distances were obtained from (1), giving Equations (2) and (3), where SH is the horizontal sensor size (2592 pixels), SV is the vertical sensor size (1944 pixels), RH is the horizontal image resolution (1280 pixels), RV is the vertical image resolution (720 pixels), C1 is the image cut due to the aspect ratio change (0.75), and C2 is the second image cut (0.85). The development of a new approach proved necessary to relate the image plane to the object plane. The goal was to simplify the parametrization required to obtain the water level, given by the distance from the stream wall to the camera, the horizontal angle between the two planes, and the vertical angle between them. Using the geometric representation of
Figure 3 and considering x’/(d − z’) = x/d, the distance X can be obtained from x through expression (4).
For calibration purposes, when performing actual measurements in the ROI, we placed a ten-meter graduated strip vertically on the wall at different horizontal positions. These measurements also provided values to assess the error made by the proposed technique in measuring the water level. From two distances obtained in the image in opposite directions around the origin, two vertical values ya and yb were defined using (2). The graduated strip made it possible to obtain the corresponding actual distances Ya and Yb. With (5) and these two distances, the unknowns d and αy were determined by solving a system of two equations, giving d = 23.68 m and αy = 27.2°. Substituting d into (4), αx was determined from known values of x and X, giving 7.4°.
To relate a point in the xy plane with a point in the XY plane, we derived the equation of the XY plane using the general form ax + by + cz + d = 0. The constants a, b, c, and d were obtained using three points of the plane: (0, 0, 0), (0, y’, z’), and (x’, 0, z’). Since the plane passes through the origin, the constant d is zero, resulting in the following plane equation in the xyz reference system:

y’z’ x + x’z’ y − x’y’ z = 0. (6)
For a point (X, Y) in the object plane, the corresponding point (x1, y1) in the image plane is given by Equations (7) and (8). A point in the object plane can be obtained from a point in the image plane by solving these equations for X and Y.
2.3. Camera Motion Compensation
Strong winds can cause small camera movements. In addition, the camera position may vary over time due to reinstallation, which results in minor changes to the calibration parameters determined in the previous section. To obtain a stable ROI, it was necessary to apply a camera motion compensation before any measurement of the water level. The motion compensation included image rotation and translation. We considered the line defined by the upper edge of the stream wall as a set of control points to determine the camera rotation relative to the initial conditions. Another control point was the coordinates of a template used to compensate for the translation motion.
Figure 4 shows an example of a template applied in the compensation procedure; this template is represented by the green rectangle in Figure 2. A template must have appropriate characteristics to be detectable. To keep the procedure proposed in this work general, we excluded the metallic tubes observed in Figure 2 from the template options. In any case, using this type of object increases the success rate of the template matching procedure.
The flowchart in Figure 5 describes the camera motion compensation procedure. Data processing was implemented in Python with the support of the OpenCV library [35]. The images acquired by the camera were converted to grayscale, and we applied the Contrast Limited Adaptive Histogram Equalization (CLAHE) method [36] to highlight the wall features. The edge detection procedure started by applying a Gaussian filter to reduce noise and then binarizing the grayscale image. We applied this procedure to the area around the top of the wall containing the desired edges. For edge detection, we chose the Canny method because it provides the best performance among edge detectors [37]. The Hough transform proved to be the most effective method for identifying the straight lines of the edges within the area of interest.
The edge detection procedure aims to detect the yellow or red line represented in Figure 2. In many situations, both edges were detected; in this case, the compensation procedure used the upper edge. In some cases, such as at night, this edge was not detected, and the second edge was used instead. As the two lines are parallel, either one allows the image rotation due to camera motion to be determined. Next, we applied a template matching method to detect the coordinates of the image given in Figure 4. This procedure allowed us to obtain the translation of the image caused by the camera movement. Six matching methods are available in the OpenCV library to search for the template in the input image; the best results were obtained with the Normalized Correlation Coefficient Matching method. Finally, the coordinates of the template allowed for defining the ROI in the input image.
In most cases, the algorithm detected the correct wall edge and the correct template position. However, for images taken under very difficult lighting conditions, one or both of these parameters may be detected incorrectly, and the error made in detecting the waterline with an incorrect ROI setting can be high. To avoid this situation, we determined the distance between the edge and the template position (TP) and compared it with the expected template position (ETP). When this difference exceeded a certain limit, we applied the last successful compensation to the ROI. As the algorithm does not know which edge it has detected, the comparison represented in the flowchart is performed against two values of the limit parameter, one per edge. We determined this parameter by measuring the distance between the edge and the template position for various images acquired in different situations. The algorithm may also detect the template position correctly but the edge with a small error in slope, which can lead to errors in setting the ROI. The application of a second template proved useful to minimize this effect.
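The validity check can be sketched as a simple comparison; the expected distances and tolerance below are hypothetical placeholders, not the measured values:

```python
def compensation_ok(edge_y, template_y, expected=(112, 131), tol=6):
    """Accept the motion compensation only if the edge-to-template distance
    (TP) is close to one of the two expected values (ETP), one per
    detectable wall edge. All numeric values here are hypothetical."""
    tp = abs(template_y - edge_y)
    return any(abs(tp - etp) <= tol for etp in expected)
```

When the check fails, the last successful compensation is applied to the ROI instead of the new one.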
2.4. Waterline Detection
The waterline detection procedure started by defining a ROI around the water boundary, as shown in Figure 2. We initially set this region at a fixed distance from the center of the image. However, small camera movements can make the image center differ from the one obtained in the calibration process, which can cause large errors in waterline detection. An alternative was to define the ROI using a reference point on the stream wall near the center; this point can be the coordinates of the template used for camera motion compensation.
Figure 6 shows the image reference system defined to support the waterline detection. A vertical line in the object plane is seen in perspective in the image plane. We used the line within the ROI to detect the waterline position, defined by the point (x1, y1).
Figure 7 shows the ROI for various images taken under different conditions. The stream flow is characterized by shallow water most of the time, resulting in images like the one shown in Figure 7a, taken during the day. Figure 7b shows a typical image taken at night. Rain events affect image quality, as shown in Figure 7c. Figure 7d illustrates a typical situation that occurs during periods of rain, with water undulation. Another situation is the existence of debris on the water surface, as shown in Figure 7e. Figure 7f shows an example with a shadow created within the ROI by buildings on sunny days. As can be seen, edge detection methods are not suitable for obtaining the waterline because of the image and water quality.
To minimize some of the effects observed in the images of Figure 7, we considered T images captured with a time difference of three seconds between them to obtain an average image. Converting the image to grayscale and applying histogram equalization made it possible to highlight the waterline. This line was determined by detecting the transition between the water and the stream wall. We also defined a reference system for the ROI to support the waterline detection procedure, where (x’, y’) is a point in the ROI image with the origin at the lower-left pixel. A moving average filter reduced noise effects on the image. For a position x’, represented in Figure 6 by a red line, the grayscale profile was given by (9), where P(x’, y’) is the pixel at position (x’, y’), Nx is the number of horizontal pixels, Ny is the number of vertical pixels, and Py is the number of pixels in the vertical dimension of the ROI. In addition, we determined the gradient of the grayscale profile and detected the water boundary at the maximum absolute value of the gradient.
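The profile-and-gradient step can be sketched in NumPy as follows; the synthetic ROI stands in for a real image and the window sizes are illustrative assumptions:

```python
import numpy as np

def grayscale_profile(roi, x, win=3, smooth=3):
    """Vertical grayscale profile at column x, averaged over a small
    horizontal window and smoothed with a moving average filter."""
    col = roi[:, max(0, x - win):x + win + 1].mean(axis=1)
    kernel = np.ones(smooth) / smooth
    padded = np.pad(col, smooth // 2, mode="edge")  # avoid edge artifacts
    return np.convolve(padded, kernel, mode="valid")

def waterline_row(roi, x):
    """Water boundary = maximum absolute gradient of the profile."""
    g = np.gradient(grayscale_profile(roi, x))
    return int(np.argmax(np.abs(g)))

# Synthetic ROI: bright wall (rows 0-39) above darker water (rows 40-79)
roi = np.full((80, 120), 60.0)
roi[:40, :] = 180.0
row = waterline_row(roi, 60)   # near row 40
```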
The problem with using a single detection position was that, in many situations, the maximum absolute value of the gradient did not match the position of the waterline. Irregularities in the wall, debris on the water surface, traces of rain captured by the camera, water undulation, and other effects can create a maximum gradient at the wrong position. Using a larger waterline zone solved this problem and improved detection. For the experimental setup, we surveyed the waterline at S equidistant positions (detection positions) over a span of about two meters. We added the gradients of the grayscale profiles, taking into account the slope of the waterline, to enhance its detection (Figure 6). For this, we measured the slope m of this line, giving a relationship between y’ and x’ of the form Δy’ = m Δx’. The gradients of the grayscale profiles were determined at the S positions of x’, and their sum considered the slope in y’ to highlight the waterline values and to minimize the effects that degrade the waterline detection. In other urban stream locations, the waterline inside the ROI may have a different shape; in that case, the procedure uses detection positions defined along the curve created by the waterline.
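The slope-compensated summation can be sketched as follows; the synthetic slanted waterline and all parameters are illustrative:

```python
import numpy as np

def detect_waterline(roi, positions, m, x_ref, win=3):
    """Sum the gradient profiles at several detection positions, shifting
    each one by the waterline slope (dy' = m dx') so the peaks align at
    the reference position x_ref."""
    total = np.zeros(roi.shape[0])
    for x in positions:
        col = roi[:, max(0, x - win):x + win + 1].mean(axis=1)
        g = np.gradient(col)
        shift = int(round(m * (x - x_ref)))   # vertical offset of the line
        total += np.roll(g, -shift)           # align this profile with x_ref
    return int(np.argmax(np.abs(total)))

# Synthetic ROI: slanted waterline y(x) = 30 + 0.2*(x - 50),
# bright wall above, darker water below
H, W, m, x_ref = 60, 100, 0.2, 50
roi = np.full((H, W), 60.0)
for x in range(W):
    roi[: 30 + int(round(m * (x - x_ref))), x] = 180.0
row = detect_waterline(roi, range(5, 100, 10), m, x_ref)   # near row 30
```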
Figure 8 shows the result obtained with one detection position (Figure 8a) and ten detection positions (Figure 8b) for an image with debris on the water surface. For S = 1, the maximum absolute value of the gradient was drastically affected by the water quality. As shown in Figure 8b, with several detection positions it was possible to detect the waterline despite the floating debris around the sensing zone.
For images taken at night, it became necessary to consider the ROI lighting conditions. As the water stream is in an urban environment, the street lighting system may be sufficient to illuminate the area of interest; otherwise, infrared lighting can be employed. In this work, we did not install any equipment to light the water zone, avoiding the costs of a dedicated system that requires a power supply. With the streetlights facing the street, shadow zones created by the stream wall might be visible within the ROI. Thus, it was necessary to distinguish the procedure for obtaining the waterline in images taken at night from that used for images taken during the day. Since night image acquisition requires different camera parameters, the two cases are easily distinguished.
Figure 9 shows the grayscale profile and the gradient for two images taken at night. Figure 9a illustrates an image with a detectable waterline. As can be observed, the maximum absolute value of the gradient occurred on the line created by the wall shadow over the water, not on the waterline. However, this position can aid in the detection of the water level, since a variation in the water level produces a corresponding variation in the shadow edge. The distance between the waterline and the shadow line was practically constant and is defined by the parameter PW, measured along the vertical dimension of the image. Figure 9b shows a case where the waterline was not detected and the shadow line was necessary to detect the water boundary.
Figure 10 shows the flowchart of the procedure to extract the waterline. The initial operations were the acquisition of T images to obtain the average image, conversion to grayscale, and histogram equalization. We defined S positions in the horizontal dimension of the ROI to detect the waterline. The grayscale profile was determined using (9) for each of the S values of x’, as well as the corresponding gradient functions. Then, we summed the gradients and obtained the maximum absolute value.
Two situations arose for images taken during the day. In the first, the maximum absolute value of the gradient corresponded to the waterline. This happened most of the time, and the algorithm searched for this maximum on the concrete face. We determined the parameter PH (in pixels), shown in Figure 6, from this maximum. However, for a short period during the day, shadows of buildings may appear in the ROI. In this case, the maximum absolute value of the gradient can occur at the transition between the sunlit area and the shaded area. In the flowchart of Figure 10, the “Shadow Period” defines the time interval in which this situation can happen; we determined this period from initial measurements. To determine the waterline position, the algorithm searched for the two peaks corresponding to the highest absolute values of the gradient. To assess whether the shadow of buildings affected the waterline detection procedure, we employed a technique to verify the conditions for the existence of a shadow episode within the ROI. As sunlight produces a bright image in the sunlit zone, we determined, by image processing, the brightness of a small band above each gradient peak and the brightness of a band below it. This operation allowed us to compare the brightness of the sunlit area with that of the shaded area. If a peak produced a difference in brightness above a threshold, it corresponded to a shadow transition; in that case, the other peak defined the position of the waterline. The threshold was obtained from measurements made on several captured images. Otherwise, the waterline was defined by the maximum absolute value of the gradient.
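The two-peak disambiguation can be sketched on a one-dimensional profile; the band size, peak separation, and brightness threshold below are illustrative values, not the calibrated ones:

```python
import numpy as np

def waterline_with_shadow_check(profile, band=8, sep=10, thresh=80):
    """Pick the waterline among the two strongest gradient peaks. A peak
    whose upper band is much brighter than its lower band is treated as a
    sunlit-to-shade transition, so the other peak is the waterline.
    band/sep/thresh are illustrative, not the measured values."""
    g = np.abs(np.gradient(profile))
    p1 = int(np.argmax(g))
    g2 = g.copy()
    g2[max(0, p1 - sep):p1 + sep] = 0          # suppress the first peak
    p2 = int(np.argmax(g2))
    def bright_drop(p):
        above = profile[max(0, p - band):p].mean()
        below = profile[p + 1:p + 1 + band].mean()
        return above - below
    for peak, other in ((p1, p2), (p2, p1)):
        if bright_drop(peak) > thresh:         # shadow transition detected
            return other
    return p1

# Sunlit zone (220), shaded wall (120), water (60):
# shadow edge near row 20, waterline near row 40
profile = np.concatenate([np.full(20, 220.0),
                          np.full(20, 120.0),
                          np.full(20, 60.0)])
row = waterline_with_shadow_check(profile)     # near row 40
```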
2.5. Water Level Estimation
To estimate the water level, we used Equations (7) and (8) to obtain X and Y, with the parameters of the wall plane in the concrete zone. Three parameters were needed for this procedure: the distance d, the horizontal angle αx, and the vertical angle αy. Through measurements and simulation, we confirmed that the plane of the concrete zone differs from the plane of the stone zone. Following the procedure applied in the calibration section, the concrete plane had the following parameters: d = 24.16 m, αx = 7.4°, and αy = 22.2°.
With the PH parameter determined in the previous section, we obtained the position of the waterline in the reference system represented in Figure 6. The vertical distance is Py = PH − Py0, where Py0 is the origin of the ROI in the vertical dimension of the image. Px was determined from the calibration line, where ml and K are the calibration line parameters. Knowing Px and Py, x1 and y1 were calculated, respectively. The distances in the wall plane then follow from Equations (7) and (8), solved for X and Y.
Finally, we determined the water level through the difference between Y and Y0, where Y0 is the distance considered for the zero-water level.
For some images, measuring the water level was very difficult or impossible, and large errors could occur. We used data filtering to minimize these error effects. One procedure was to remove values that, compared with previous results, exceeded a certain threshold; this decision was supported by the observation that water flow can increase suddenly but decreases more slowly. When a value is rejected, the second peak of the gradient can replace it, provided it does not exceed the defined threshold, since there is a high probability of the waterline being there.
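This filtering rule can be sketched as follows; the drop limit is an illustrative value, not the threshold used in the experiments:

```python
def filter_level(new, prev, second_peak=None, drop_limit=0.5):
    """Reject a physically implausible sudden drop in the measured level
    (flow rises fast but recedes slowly). drop_limit (m) is illustrative.
    Falls back to the level at the second gradient peak, then to the
    previous accepted value."""
    if prev is not None and (prev - new) > drop_limit:
        if second_peak is not None and (prev - second_peak) <= drop_limit:
            return second_peak   # the waterline is likely at the second peak
        return prev              # keep the last accepted value
    return new
```

A sudden rise passes the filter unchanged, since flash floods can raise the level quickly.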