Figure 1.
On the first row, the left image shows the orthophoto map of Switzerland (source: Swisstopo [44]). Regions around the four target lakes (shown as blue and yellow rectangles on the map) are zoomed in and shown to the right of the map (lake Sihl on the left; the region around lakes Sils, Silvaplana, and St. Moritz on the right). On the second row, the image footprints of two webcams monitoring lake St. Moritz are displayed (the camera 0 and camera 1 images were captured on 14 December 2016 and 13 December 2016, respectively, when the lake was partially frozen). Best viewed on screen.
Figure 2.
Bar graphs of mean monthly air temperature 2 m above ground (top) and total monthly precipitation (bottom) in winter 2016–2017, recorded at the meteorological stations closest to the respective lakes. Data courtesy of MeteoSwiss.
Figure 3.
Spectral range of MODIS (left) and VIIRS (right) bands used in our analysis. The start and end wavelengths are shown for each band.
Figure 4.
Example images that were discarded from the dataset due to bad illumination (left), sun over-exposure (middle), and thick fog (right).
Figure 5.
Photi-LakeIce dataset. Rows 1 and 2 display sample images from cameras 0 and 1 (St. Moritz), respectively. Row 3 shows example images from camera 2 (Sihl); this camera is non-stationary, so some rotated views are also included. The state of the lake is shown in brackets: water (w), ice (i), snow (s), and clutter (c).
Figure 6.
Two webcams monitoring lake St. Moritz along with their approximate coverage. Image courtesy of Google.
Figure 7.
Bar graphs displaying class imbalance (including the class background) in our dataset. Ice and clutter are the under-represented classes.
Figure 8.
Inter-class similarities and intra-class differences of states snow (s), ice (i), and water (w) in our webcam data.
Figure 9.
Block diagram of the proposed lake ice detection approach using satellite data.
Figure 10.
Bar graphs for MODIS (left) and VIIRS (right) showing the significance of each of the selected bands (12 for MODIS, 5 for VIIRS) for frozen vs. non-frozen pixel separation using the XGBoost algorithm [41]. All non-transition days from winter 16–17 are included in the analysis.
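The band significances in Figure 10 come from XGBoost [41]. Below is a minimal sketch of how such per-band importances can be obtained with the xgboost package; the synthetic placeholder pixels stand in for the actual MODIS/VIIRS reflectances, and the data generation is an assumption, not the authors' code.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Placeholder data: rows are cloud-free lake pixels, columns the 12 MODIS
# bands (5 I-bands for VIIRS); y is the frozen (1) vs. non-frozen (0) label.
X = rng.random((2000, 12))
y = (X[:, 0] + 0.1 * rng.standard_normal(2000) > 0.5).astype(int)

model = XGBClassifier(n_estimators=100, max_depth=4)
model.fit(X, y)

# feature_importances_ yields one score per band; such scores correspond
# to the bar heights plotted in Figure 10.
for band, score in enumerate(model.feature_importances_, start=1):
    print(f"band {band}: {score:.3f}")
```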
Figure 11.
VIIRS grey-value histograms for sanity check (Bands I1, I2, I3, I4, and I5 are shown from left to right).
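Such per-band grey-value histograms are straightforward to reproduce. A minimal sketch with synthetic 12-bit rasters standing in for the VIIRS bands (array shapes and values are placeholders):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder: five 2-D grey-value rasters, one per VIIRS I-band (12-bit range).
bands = [np.random.default_rng(i).integers(0, 4096, size=(64, 64)) for i in range(5)]

fig, axes = plt.subplots(1, len(bands), figsize=(15, 3))
for i, (ax, band) in enumerate(zip(axes, bands), start=1):
    ax.hist(band.ravel(), bins=64)  # one grey-value histogram per band
    ax.set_title(f"band I{i}")
    ax.set_xlabel("grey value")
plt.show()
```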
Figure 12.
Deeplab v3+ (left) and Deep-U-Lab (right) architectures. The “⋆” symbol indicates the additional skip connections for Deep-U-Lab.
Figure 13.
MODIS (top) and VIIRS (middle) timeline results of lake Sihl for the full winter 16–17 using the linear kernel. A timeline of the MODIS snow and ice product (bottom) is also plotted for comparison with our results and the webcam-based ground truth. In all timelines, the x-axis shows all dates that are at least 30% cloud-free in chronological order, and the respective results [% of Non-Frozen (NF) pixels] are plotted on the y-axis.
Figure 14.
MODIS qualitative results using the linear kernel. Top and bottom rows show classification results and corresponding confidence respectively. Results of cloudy pixels are not displayed. First, second, and third columns show success cases while the fourth column displays a failure case. In the second row, more red means more non-frozen and more blue means more frozen. The dates and ground truth labels are shown below each sub-figure in the first and second rows respectively.
Figure 15.
Lake detection results. Both success (rows 1,2,3) and failure (row 4) cases are shown. The colour code used to visualise the results is also displayed. The first column shows the lakes being monitored, along with the approximate location (latitude, longitude) of the webcam.
Figure 16.
Evolution of mean IoU (mIoU) against the number of training steps (camera 0, St. Moritz, winter 2016–2017). The dark red curve is a smoothed version of the original (light red) curve.
Figure 17.
Precision-recall plots (St. Moritz) of cross-winter experiments. Best if viewed on screen.
Figure 18.
Qualitative lake ice segmentation results on webcam images. The colour code is also shown.
Figure 19.
Cross-camera time series results (winter 17–18) of lake St. Moritz using Deep-U-Lab. Results for camera 1 (when trained on camera 0 data) are displayed. All dates are shown in chronological order on the x-axis and the respective results (percentage of frozen pixels) are plotted on the y-axis. Data lost due to technical failures are shown as red bars.
Table 1.
Comparison of the specifications of our machine learning-based product with operational products: CCI Lake Ice Cover (CCI LIC), Lake Ice Extent (LIE), MODIS Snow Product (MODIS SP), and VIIRS Snow Product (VIIRS SP).
| | CCI LIC [18] | LIE [19] | MODIS SP [15] | VIIRS SP [21] | Ours |
|---|---|---|---|---|---|
| Temporal resolution | 1 day | 1 day | 1 day | 1 day | 1 day |
| Spatial resolution (GSD) | 250 m | 250 m | 500 m | 375 m | 250 m |
| Input data | MODIS | MODIS | MODIS | VIIRS | MODIS, VIIRS |
Table 2.
Characteristics of the target lakes (data mainly from Wikipedia). Latitude (lat, North), longitude (lon, East), altitude (alt, m), area (km²), volume (vol, Mm³), and the maximum and average depth [depth (M, A)] in m are shown. Additionally, for each lake, the nearest meteorological station (MS) is shown together with the corresponding latitude, longitude, and altitude.
| Lake | Lat, Lon, Alt | Area, Vol | Depth (M, A) | Remarks | MS, Lat, Lon, Alt |
|---|---|---|---|---|---|
| Sihl | 47.14, 8.78, 889 | 11.3, 96 | 23, 17 | frozen most years | Einsiedeln, 47.13, 8.75, 910 |
| Sils | 46.42, 9.74, 1797 | 4.1, 137 | 71, 35 | freezes every year | Segl-Maria, 46.43, 9.77, 1804 |
| Silvaplana | 46.45, 9.79, 1791 | 2.7, 140 | 77, 48 | freezes every year | Segl-Maria, 46.43, 9.77, 1804 |
| St. Moritz | 46.49, 9.85, 1768 | 0.78, 20 | 42, 26 | freezes every year | Samedan, 46.53, 9.88, 1708 |
Table 3.
Parameters of the used data (GSD = Ground Sampling Distance).
| | MODIS | VIIRS | Webcams |
|---|---|---|---|
| Temporal resolution | 1 day | 1 day | 1 hour (typically) |
| Spatial resolution (GSD) | 250–1000 m | 375–750 m | ca. 4 mm to 4 m |
| Spectral resolution | 36 bands (0.41–14.24 μm) | 22 bands (0.41–12.01 μm) | RGB |
| Radiometric resolution | 12 bits | 12 bits | 8 bits |
| Costs | free | free | free |
| Availability | very good | very good | depending on location |
| Cloud mask issues | slight | slight | NA |
| Cloud problems | severe | severe | negligible |
Table 4.
Total number of clean, cloud-free pixels on non-transition dates from the MODIS (M) and VIIRS (V) sensors (only days at least 30% cloud-free are included) used in our experiments.
| | Winter | Sihl (M) | Sihl (V) | Sils (M) | Sils (V) | Silvaplana (M) | Silvaplana (V) | St. Moritz (M) | St. Moritz (V) | Total (M) | Total (V) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Frozen | 16–17 | 4137 | 1919 | 2345 | 894 | 1736 | 739 | 157 | — | 8375 | 3552 |
| Non-Frozen | 16–17 | 13,568 | 4598 | 3019 | 1051 | 1965 | 765 | 191 | — | 18,743 | 6414 |
| Frozen | 17–18 | 1005 | 198 | 1858 | 722 | 1169 | 591 | 124 | — | 4156 | 1511 |
| Non-Frozen | 17–18 | 11,804 | 4311 | 2435 | 784 | 1574 | 621 | 140 | — | 15,953 | 5716 |
| Total | | 30,514 | 11,026 | 9657 | 3451 | 6444 | 2716 | 612 | — | 47,227 | 17,193 |
Table 5.
Dataset statistics. The M/V format displays the respective numbers for MODIS/VIIRS. The effective temporal resolution (shown as ‘resolution’) and the fraction of transition dates over all processed dates (Trans fraction) are also shown. #Pixels (clean) displays the number of clean pixels per acquisition.
| Lake | #Pixels (Clean) | Winter | Non-Frozen Days (Non-Trans.) | Frozen Days (Non-Trans.) | Transition Days | Resolution (Days) | Trans Fraction |
|---|---|---|---|---|---|---|---|
| Sihl | 115/45 | 16–17 | 98/87 | 32/33 | 12/11 | 1.9/2.1 | 0.09/0.08 |
| | | 17–18 | 90/88 | 8/6 | 24/22 | 2.2/2.4 | 0.20/0.19 |
| Sils | 33/11 | 16–17 | 70/73 | 57/59 | 33/30 | 1.7/1.7 | 0.21/0.19 |
| | | 17–18 | 60/57 | 49/48 | 25/32 | 2.0/2.0 | 0.19/0.23 |
| Silvaplana | 21/9 | 16–17 | 66/66 | 63/59 | 33/34 | 1.7/1.7 | 0.20/0.21 |
| | | 17–18 | 58/58 | 43/54 | 27/31 | 2.1/1.9 | 0.21/0.22 |
| St. Moritz | 4/0 | 16–17 | 79/— | 65/— | 14/— | 1.7/— | 0.09/— |
| | | 17–18 | 64/— | 58/— | 16/— | 2.0/— | 0.12/— |
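The ‘Trans fraction’ column can be reproduced from the day counts in the table: it matches the share of transition dates among all processed (at least 30% cloud-free) dates. A worked check against the Sils 16–17 MODIS row:

```latex
\[
\text{Trans fraction}
  = \frac{N_{\text{trans}}}{N_{\text{non-frozen}} + N_{\text{frozen}} + N_{\text{trans}}}
  = \frac{33}{70 + 57 + 33}
  = \frac{33}{160}
  \approx 0.21 .
\]
```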
Table 6.
Details of the Photi-LakeIce dataset. Lat and Long respectively denote latitude (North) and longitude (East) of the approximate camera location. Res stands for resolution and H and W represent height and width of the image in pixels (after cropping).
| Name | Lake (Lat, Long) | Camera Model | #Images 16–17 | #Images 17–18 | Res (H × W) |
|---|---|---|---|---|---|
| Camera 0 | St. Moritz (46.50, 9.84) | AXIS Q6128-E | 820 | 474 | 324 × 1209 |
| Camera 1 | St. Moritz (46.50, 9.84) | AXIS Q6128-E | 1180 | 443 | 324 × 1209 |
| Camera 2 | Sihl (47.13, 8.74) | unknown | 500 | 600 | 344 × 420 |
Table 7.
The 4-fold cross validation results on MODIS and VIIRS data from two winters (16–17 and 17–18). For the same SVM setup, results without and with multi-temporal analysis (MTA) are shown.
| Sensor | Feature Vector | SVM Kernel | With MTA | Overall Accuracy | mIoU |
|---|---|---|---|---|---|
| MODIS | | Linear | No | 0.91 | 0.78 |
| MODIS | Top 5 bands | Linear | No | 0.93 | 0.83 |
| MODIS | All 12 bands | Linear | No | 0.93 | 0.84 |
| MODIS | All 12 bands | Linear | Yes | 0.93 | 0.84 |
| MODIS | Top 5 bands | RBF | No | 0.96 | 0.90 |
| MODIS | All 12 bands | RBF | No | 0.99 | 0.98 |
| MODIS | All 12 bands | RBF | Yes | 0.99 | 0.99 |
| VIIRS | | Linear | No | 0.93 | 0.84 |
| VIIRS | All 5 I-bands | Linear | No | 0.95 | 0.88 |
| VIIRS | All 5 I-bands | Linear | Yes | 0.95 | 0.88 |
| VIIRS | All 5 I-bands | RBF | No | 0.97 | 0.93 |
| VIIRS | All 5 I-bands | RBF | Yes | 0.97 | 0.93 |
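As a rough illustration of the experimental setup behind Table 7, the following scikit-learn sketch evaluates linear and RBF SVMs with 4-fold cross validation. The synthetic data and the standardisation step are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder pixels: 12 band values per sample, binary frozen label.
X = rng.random((2000, 12))
y = (X.mean(axis=1) > 0.5).astype(int)

for kernel in ("linear", "rbf"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    acc = cross_val_score(clf, X, y, cv=4)  # 4-fold cross validation
    print(f"{kernel}: mean accuracy {acc.mean():.3f}")
```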
Table 8.
Detailed results on MODIS (left) and VIIRS (right) data for the best cases of 4-fold cross validation on combined data from two winters.
| MODIS | Recall | Precision | IoU | VIIRS | Recall | Precision | IoU |
|---|---|---|---|---|---|---|---|
| Frozen | 0.99 | 0.99 | 0.98 | Frozen | 0.93 | 0.97 | 0.90 |
| Non-Frozen | 0.99 | 0.99 | 0.99 | Non-Frozen | 0.99 | 0.97 | 0.96 |
| Accuracy/mIoU | 0.99/0.99 | | | Accuracy/mIoU | 0.97/0.93 | | |
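The Recall, Precision, and IoU values in Tables 8–12 follow the standard per-class definitions. A small sketch computing them from a binary confusion matrix (the counts below are illustrative only, not results from the paper):

```python
import numpy as np

def per_class_metrics(conf):
    """conf[i, j] = number of pixels of true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp           # false positives per class
    fn = conf.sum(axis=1) - tp           # false negatives per class
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    iou = tp / (tp + fp + fn)            # intersection over union
    return recall, precision, iou

# Two classes: frozen, non-frozen (illustrative counts only).
conf = np.array([[93, 7],
                 [3, 97]])
recall, precision, iou = per_class_metrics(conf)
print(recall, precision, iou, iou.mean())  # last value is the mIoU
```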
Table 9.
MODIS leave one lake out results. Numbers are in A/B format where A and B represent the results using Radial Basis Function (RBF) and linear kernels, respectively. The better kernel for a given experiment is shown in black, worse kernel in grey.
| Lake Sihl | Recall | Precision | IoU | Lake Sils | Recall | Precision | IoU |
|---|---|---|---|---|---|---|---|
| Frozen | 0.82/0.79 | 0.63/0.78 | 0.55/0.65 | Frozen | 0.89/0.88 | 0.97/0.95 | 0.86/0.85 |
| Non-Frozen | 0.90/0.95 | 0.96/0.96 | 0.87/0.92 | Non-Frozen | 0.98/0.97 | 0.92/0.92 | 0.90/0.89 |
| Accuracy | 0.89/0.93 | | | Accuracy | 0.94/0.93 | | |
| mIoU | 0.71/0.78 | | | mIoU | 0.88/0.87 | | |

| Lake Silvaplana | Recall | Precision | IoU | Lake St. Moritz | Recall | Precision | IoU |
|---|---|---|---|---|---|---|---|
| Frozen | 0.91/0.81 | 0.96/0.97 | 0.88/0.79 | Frozen | 0.85/0.64 | 0.93/0.96 | 0.80/0.63 |
| Non-Frozen | 0.97/0.98 | 0.93/0.86 | 0.90/0.85 | Non-Frozen | 0.95/0.98 | 0.88/0.76 | 0.84/0.75 |
| Accuracy | 0.94/0.91 | | | Accuracy | 0.90/0.83 | | |
| mIoU | 0.89/0.82 | | | mIoU | 0.82/0.69 | | |
Table 10.
VIIRS leave one lake out results. Numbers are in A/B format where A and B represent the results using RBF and linear kernels, respectively. The better kernel for a given experiment is shown in black, worse kernel in grey.
| Lake Sihl | Recall | Precision | IoU | Lake Sils | Recall | Precision | IoU |
|---|---|---|---|---|---|---|---|
| Frozen | 0.87/0.87 | 0.73/0.85 | 0.66/0.76 | Frozen | 0.93/0.89 | 0.97/0.99 | 0.90/0.88 |
| Non-Frozen | 0.92/0.97 | 0.97/0.97 | 0.90/0.94 | Non-Frozen | 0.98/0.99 | 0.94/0.91 | 0.92/0.90 |
| Accuracy | 0.91/0.95 | | | Accuracy | 0.95/0.94 | | |
| mIoU | 0.78/0.85 | | | mIoU | 0.91/0.89 | | |

| Lake Silvaplana | Recall | Precision | IoU |
|---|---|---|---|
| Frozen | 0.91/0.87 | 0.97/0.98 | 0.88/0.86 |
| Non-Frozen | 0.97/0.98 | 0.92/0.89 | 0.90/0.88 |
| Accuracy | 0.94/0.93 | | |
| mIoU | 0.90/0.87 | | |
Table 11.
MODIS leave one winter out results. The numbers are shown in A/B format where A and B represent the outcomes using RBF and linear kernels, respectively. The better kernel for a given experiment is shown in black, worse kernel in grey. Left is winter 16–17, right is winter 17–18.
| Winter 16–17 | Recall | Precision | IoU | Winter 17–18 | Recall | Precision | IoU |
|---|---|---|---|---|---|---|---|
| Frozen | 0.72/0.77 | 0.90/0.91 | 0.67/0.72 | Frozen | 0.73/0.84 | 0.64/0.85 | 0.52/0.73 |
| Non-Frozen | 0.96/0.97 | 0.89/0.91 | 0.86/0.88 | Non-Frozen | 0.89/0.96 | 0.93/0.96 | 0.83/0.92 |
| Accuracy | 0.89/0.91 | | | Accuracy | 0.86/0.94 | | |
| mIoU | 0.76/0.80 | | | mIoU | 0.68/0.83 | | |
Table 12.
VIIRS leave one winter out results. The numbers are shown in A/B format where A and B represent the outcomes using RBF and linear kernels, respectively. The better kernel for a given experiment is shown in black, worse kernel in grey. Left is winter 16–17, right is winter 17–18.
| Winter 16–17 | Recall | Precision | IoU | Winter 17–18 | Recall | Precision | IoU |
|---|---|---|---|---|---|---|---|
| Frozen | 0.83/0.79 | 0.92/0.99 | 0.77/0.78 | Frozen | 0.87/0.90 | 0.77/0.79 | 0.69/0.72 |
| Non-Frozen | 0.96/0.99 | 0.91/0.89 | 0.88/0.89 | Non-Frozen | 0.93/0.94 | 0.97/0.97 | 0.90/0.91 |
| Accuracy | 0.91/0.92 | | | Accuracy | 0.92/0.93 | | |
| mIoU | 0.82/0.84 | | | mIoU | 0.79/0.82 | | |
Table 13.
Hyperparameters for the Deep-U-Lab model.
| Name | Lake Detection | Lake Ice Segmentation |
|---|---|---|
| Crop size | 500, 500 | 321, 321 |
| Optimiser | stochastic gradient descent | stochastic gradient descent |
| Atrous rates (dilation) | 6, 12, 18 | 6, 12, 18 |
| Output stride | 16 | 16 (training), 8 (testing) |
| Base learning rate | 1 × 10 | 1 × 10 |
| Batch size | 4 | 8 |
| Epochs | 100 | 100 |
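For readers unfamiliar with the ‘atrous rates’ row: Deeplab v3+ (and hence Deep-U-Lab) applies parallel dilated convolutions at these rates in its ASPP module. The PyTorch sketch below only illustrates that idea; it is a simplification, not the authors' TensorFlow implementation, and the feature sizes are made up.

```python
import torch
import torch.nn as nn

class ASPPSketch(nn.Module):
    """Simplified atrous spatial pyramid pooling with the rates from Table 13."""
    def __init__(self, in_ch=256, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        # padding = dilation keeps the spatial size for 3x3 kernels.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )

    def forward(self, x):
        # Concatenate the three dilated views of the same feature map.
        return torch.cat([branch(x) for branch in self.branches], dim=1)

features = torch.randn(1, 256, 32, 32)  # e.g., a 512-px input at output stride 16
print(ASPPSketch()(features).shape)     # torch.Size([1, 768, 32, 32])
```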
Table 14.
Lake detection results (mIoU). The four cameras that monitor lake Silvaplana are indicated as , , , and . BG and FG denote background and foreground (lake area) respectively.
| Training Lakes | #Images | Test Lake | #Images | IoU (BG) | IoU (FG) | mIoU |
|---|---|---|---|---|---|---|
| , , , , Sils, St. Moritz | 9104 | Sihl | 448 | 0.95 | 0.60 | 0.93 |
| , , , , St. Moritz, Sihl | 7477 | Sils | 2075 | 0.95 | 0.60 | 0.93 |
| , , , Sils, St. Moritz, Sihl | 8041 | | 1511 | 0.96 | 0.59 | 0.94 |
| , , , Sils, St. Moritz, Sihl | 8676 | | 876 | 0.92 | 0.58 | 0.90 |
| , , , Sils, St. Moritz, Sihl | 7906 | | 1646 | 0.98 | 0.44 | 0.95 |
| , , , Sils, St. Moritz, Sihl | 7652 | | 1900 | 0.98 | 0.55 | 0.95 |
| , , , , Sils, Sihl | 8456 | St. Moritz | 1096 | 0.93 | 0.80 | 0.92 |
Table 15.
Results (IoU) of same-camera train/test experiments. We compare our results with the Tiramisu Network [34] (the second value in each pair, shown in grey). Cameras 0 and 1 monitor lake St. Moritz while camera 2 captures lake Sihl.
| Train Camera | Train Winter | Test Camera | Test Winter | Water | Ice | Snow | Clutter | mIoU |
|---|---|---|---|---|---|---|---|---|
| Camera 0 | 16–17 | Camera 0 | 16–17 | 0.98/0.70 | 0.95/0.87 | 0.95/0.89 | 0.97/0.63 | 0.96/0.77 |
| Camera 0 | 17–18 | Camera 0 | 17–18 | 0.97 | 0.88 | 0.96 | 0.87 | 0.93 |
| Camera 1 | 16–17 | Camera 1 | 16–17 | 0.99/0.90 | 0.96/0.92 | 0.95/0.94 | 0.79/0.62 | 0.92/0.85 |
| Camera 1 | 17–18 | Camera 1 | 17–18 | 0.93 | 0.84 | 0.92 | 0.84 | 0.89 |
| Camera 2 | 16–17 | Camera 2 | 16–17 | 0.79 | 0.62 | 0.81 | — | 0.74 |
| Camera 2 | 17–18 | Camera 2 | 17–18 | 0.81 | 0.69 | 0.86 | — | 0.79 |
Table 16.
Effect of data augmentation (IoU values) on the same camera train/test experiment (camera 0).
| Experiment | Water | Ice | Snow | Clutter | mIoU |
|---|---|---|---|---|---|
| Without augmentation | 0.97 | 0.93 | 0.91 | 0.96 | 0.94 |
| With augmentation | 0.98 | 0.95 | 0.95 | 0.97 | 0.96 |
Table 17.
Results (IoU) of cross-camera experiments. We compare our results with the Tiramisu Network [34] (the second value in each pair, shown in grey). Both cameras 0 and 1 monitor lake St. Moritz.
| Train Camera | Train Winter | Test Camera | Test Winter | Water | Ice | Snow | Clutter | mIoU |
|---|---|---|---|---|---|---|---|---|
| Camera 0 | 16–17 | Camera 1 | 16–17 | 0.76/0.36 | 0.75/0.57 | 0.84/0.37 | 0.61/0.27 | 0.74/0.39 |
| Camera 0 | 17–18 | Camera 1 | 17–18 | 0.62 | 0.66 | 0.89 | 0.42 | 0.64 |
| Camera 1 | 16–17 | Camera 0 | 16–17 | 0.94/0.32 | 0.75/0.41 | 0.92/0.33 | 0.48/0.43 | 0.77/0.37 |
| Camera 1 | 17–18 | Camera 0 | 17–18 | 0.59 | 0.67 | 0.91 | 0.51 | 0.67 |
Table 18.
Results (IoU) of cross-winter experiments. We compare our results with the Tiramisu Network [34] (the second value in each pair, shown in grey). Cameras 0 and 1 monitor lake St. Moritz while camera 2 captures lake Sihl.
| Train Camera | Train Winter | Test Camera | Test Winter | Water | Ice | Snow | Clutter | mIoU |
|---|---|---|---|---|---|---|---|---|
| Camera 0 | 16–17 | Camera 0 | 17–18 | 0.64/0.45 | 0.58/0.44 | 0.87/0.83 | 0.59/0.40 | 0.67/0.53 |
| Camera 0 | 17–18 | Camera 0 | 16–17 | 0.98 | 0.91 | 0.94 | 0.58 | 0.87 |
| Camera 1 | 16–17 | Camera 1 | 17–18 | 0.86/0.80 | 0.71/0.58 | 0.93/0.92 | 0.57/0.33 | 0.77/0.57 |
| Camera 1 | 17–18 | Camera 1 | 16–17 | 0.93 | 0.76 | 0.86 | 0.65 | 0.80 |
| Camera 2 | 16–17 | Camera 2 | 17–18 | 0.61 | 0.14 | 0.35 | — | 0.51 |
| Camera 2 | 17–18 | Camera 2 | 16–17 | 0.41 | 0.18 | 0.45 | — | 0.50 |
Table 19.
Results (IoU) of cross-lake experiments. Cameras 0 and 2 monitor lakes St. Moritz and Sihl respectively.
| Training Set | Test Set | Water | Ice | Snow | mIoU |
|---|---|---|---|---|---|
| Camera 0 (16–17) | Camera 2 (16–17) | 0.40 | 0.23 | 0.42 | 0.35 |
| Camera 2 (16–17) | Camera 0 (16–17) | 0.85 | 0.25 | 0.68 | 0.60 |
Table 20.
Results (IoU) of leave one dataset out experiments. Cameras 0 and 1 monitor lake St. Moritz while camera 2 captures lake Sihl.
| Training Set | Test Set | Water | Ice | Snow | Clutter | mIoU |
|---|---|---|---|---|---|---|
| Camera 0 (17–18), Cameras 1 and 2 (2 winters) | Camera 0 (16–17) | 0.98 | 0.90 | 0.96 | 0.62 | 0.86 |
| Camera 0 (16–17), Cameras 1 and 2 (2 winters) | Camera 0 (17–18) | 0.83 | 0.78 | 0.95 | 0.59 | 0.78 |
| Camera 1 (17–18), Cameras 0 and 2 (2 winters) | Camera 1 (16–17) | 0.99 | 0.92 | 0.91 | 0.69 | 0.87 |
| Camera 1 (16–17), Cameras 0 and 2 (2 winters) | Camera 1 (17–18) | 0.92 | 0.81 | 0.96 | 0.55 | 0.81 |
| Camera 2 (17–18), Cameras 0 and 1 (2 winters) | Camera 2 (16–17) | 0.35 | 0.25 | 0.46 | — | 0.35 |
| Camera 2 (16–17), Cameras 0 and 1 (2 winters) | Camera 2 (17–18) | 0.66 | 0.30 | 0.36 | — | 0.44 |
Table 21.
Ice-on/off dates (winter 16–17). When more than one candidate exists, ground truth dates are listed in order of confidence.
| Dates | Ground Truth | Satellite Approach | Webcam Approach | In-Situ (T) [34] |
|---|---|---|---|---|
| ice-on (Sihl) | 1 January 2017 | 3 January 2017 | 4 January 2017 | 28–29 December 2016 |
| ice-off (Sihl) | 14 March 2017, 15 March 2017 | 10 March 2017 | 14 February 2017 | 16 March 2017 |
| ice-on (Sils) | 2 January 2017, 5 January 2017 | 6 January 2017 | — | 31 December 2016 |
| ice-off (Sils) | 8 April 2017, 11 April 2017 | 31 March 2017 | — | 10 April 2017 |
| ice-on (Silvaplana) | 12 January 2017 | 15 January 2017 | — | 14 January 2017 |
| ice-off (Silvaplana) | 11 April 2017 | 30 March 2017 | — | 14 April 2017 |
| ice-on (St. Moritz) | 15–17 December 2016 | 1 January 2017 | 14 December 2016 | 17 December 2016 |
| ice-off (St. Moritz) | 30 March–6 April 2017 | 7 April 2017 | 18 March–26 April 2017 | 5–8 April 2017 |
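For context on how the ‘Satellite Approach’ dates relate to timelines such as Figure 13: one plausible rule derives ice-on/off from the per-date frozen fraction. The sketch below is a hypothetical simplification, not the paper's exact criterion (which may, e.g., require consecutive frozen dates); the function name and threshold are assumptions.

```python
def ice_on_off(dates, frozen_fraction, threshold=0.5):
    """Hypothetical rule: ice-on is the first date whose frozen fraction
    exceeds `threshold`; ice-off is the last such date. The paper's exact
    criterion may differ."""
    frozen_dates = [d for d, f in zip(dates, frozen_fraction) if f > threshold]
    return (frozen_dates[0], frozen_dates[-1]) if frozen_dates else (None, None)

# Toy example with three cloud-free dates and their frozen fractions.
print(ice_on_off(["2017-01-01", "2017-01-03", "2017-01-05"], [0.2, 0.8, 0.9]))
# ('2017-01-03', '2017-01-05')
```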