Automated Webcam Monitoring of Fractional Snow Cover in Northern Boreal Conditions

Fractional snow cover (FSC) is an important parameter for estimating snow water equivalent (SWE) and surface albedo, both of which matter to climatic and hydrological applications. The presence of forest makes accurate retrieval of FSC from satellite data challenging, as the canopy can block the sensor's view of the snow cover. In addition, the in situ FSC data needed for algorithm development and validation are very limited. This paper investigates the estimation of FSC from digital imagery as a way around the obstacle posed by the forest canopy, and the possibility of using such imagery in the validation of FSC derived from satellite data. FSC is calculated here with an algorithm that derives a threshold value from the histogram of an image and classifies each pixel as snow-covered or snow-free. Images from the MONIMET camera network, which produces continuous image series in Finland, are used in the analysis. The results obtained from the automated image analysis of snow cover are compared with reference data estimated by visual inspection of the same images. The results show the applicability and usefulness of digital imagery for estimating fractional snow cover in forested areas, with a root mean squared error (RMSE) in the range of 0.1-0.3 (on the full scale of 0-1).


Introduction
Snow cover is an essential climate variable directly affecting the Earth's energy balance, due to its high albedo. Snow cover has a number of important physical properties that exert an influence on global and regional energy, water supply, and carbon cycles. Its quantification in a changing climate is thus important for various environmental and economic impact assessments. Proper description and assimilation of snow cover information into hydrological, land surface, meteorological, and climate models are critical to address the impact of snow on various phenomena, to predict local snow water resources, and to warn about snow-related natural hazards.
Several methods for retrieving fractional snow cover (FSC) from remote sensing data have been developed [1][2][3][4][5][6][7]. Boreal forest occupies about 17 percent of the Earth's land surface area in a circumpolar belt in the far northern hemisphere. The presence of forest in seasonally snow-covered regions, especially in the northern hemisphere, creates great challenges for accurate FSC retrieval from satellites, as the forest canopy can block the sensor's view of snow cover, either almost totally or at least partially. Many studies have been conducted to overcome the effect of forest cover [8][9][10][11][12][13].
In addition to the challenges of retrieval methodologies of remote sensing over forested areas, in situ data of FSC are rarely available, or at least temporally very limited. Ground reference data feasible for the evaluation of FSC retrievals are often relatively difficult to obtain, because FSC is typically registered by human observers. The use of very high resolution satellite images is also complicated, as there are no feasible algorithms available to create a reliable and accurate FSC map for forested areas [14]. Moreover, fractional snow cover typically varies rather widely in space and time, so single point observations are not necessarily representative of the local spatial variation. This representativeness also depends on the landscape characteristics and other prevailing conditions. The observations should be conducted over an area corresponding to the pixel size of the applied satellite sensor, and the timing should match the satellite overpass, at least closely enough that no major changes in snow cover occur.
Several webcam networks intended for scientific monitoring of ecosystems have been established lately. The European Phenology Camera Network (EUROPhen) is a collection of cameras used for phenology across Europe [15]. In a similar way, the PhenoCam Network has more than 80 cameras across the US [16]. Time-lapse digital camera images of Australian biomes at different locations are archived and distributed by the Australian Phenocam Network [17]. The Phenological Eyes Network (PEN) is another phenological camera network, in Asia [18]. The MONIMET camera network was established to provide time series of field observations, and consists of 27 cameras over Finland, presently distributed at 14 sites [19]. Digital images and other phenological data from such camera networks are used in various studies [15,16,18,20-22]. The use of digital repeat photography at a daily resolution can aid the automatic identification of interannual variations in vegetation status, and the capture of agricultural practices [15]. Digital photos can be used for detecting phenological patterns quantitatively in various types of vegetation over a longer period [20]. Digital repeat photography can also be used to assess the link between vegetation phenology and CO2 exchange, as shown for two high-latitude ecosystems [22].
Detection of the amount of snow cover from digital imagery has been studied in non-forested environments, e.g., mountains and glaciers [23][24][25][26].
For our study, we selected an algorithm that has been tested on images from different cameras overlooking mountains in the Alps and southern Italy [26]. The algorithm is based on the blue channel histogram, and showed good performance for detecting snow cover on the ground. In this study, we test the performance of the suggested algorithm [26] in conditions typical of boreal regions, where open wetlands and sparse tree canopies are frequent, sun angles are low, and the ground may contain lichen that could potentially be misclassified as snow.

Study Sites
We mounted four typical outdoor surveillance cameras at three different sites. These cameras are part of the phenological camera network deployed in the frame of the MONIMET project, to establish knowledge about how low-cost cameras function in the monitoring of the seasonal development of different types of ecosystems. In this project, we made a feasibility study of how the entire network, which consists of 14 sites and 27 cameras over Finland [19], could serve snow detection purposes in Finland, by using cameras from three northern sites.
The deployed cameras were Stardot NetCam 5MP cameras, with charge coupled device (CCD) sensors producing images in the visible range (IR filtered). All selected cameras are set to automatic exposure mode. Images produced by the cameras are in JPEG format at 2592 by 1944 pixel resolution, with 8 bits of information (values 0-255) per channel. The cameras are connected to the internet either by ethernet cables through the infrastructure at the sites, or by cellular modems. The images are uploaded to a server by file transfer protocol (FTP) every 30 or 60 min, depending on the camera. Four cameras were selected for the analyses presented in this paper. Two of these cameras are located in Kenttärova, and the other two in Sodankylä. Both sites are located in Northern Finland. In Kenttärova, the cameras view a mature Norway spruce (Picea abies) forest and are mounted on a mast at a height of 21 m. One of them looks over a large area of the canopy, with the horizon and two hills visible in the distance. The other one faces down towards the ground, and part of its view overlaps with the first camera. At higher altitudes, also visible in the camera views, the forest becomes sparser, and the mountains and higher hills are treeless. The canopy in Kenttärova is dominated by evergreen spruce trees (Figure 1).
Geosciences 2017, 7, 55

In Sodankylä, one of the cameras is located in a Scots pine ecosystem, viewing towards the ground below the canopy. The camera is installed at 2 m elevation. The imaged area is small and flat, which results in a high spatial resolution of ground details, and a lower relative rectification error. The other camera is located at the Sodankylä wetland site. The visible area is quite large and mostly open (Figure 2).

Regions and Times of Interest
Regions of interest (ROI) were selected for the analysis of snow cover at the camera sites. Examples are given for the Sodankylä wetland site and the Kenttärova forest site (Figures 3 and 4), for which the union of the polygons (drawn by cyan lines) describes the ROI. Only images taken between 10:45 and 12:45 (local time) are used, in order to minimize illumination change effects, to ensure sufficient available light for the camera, and to avoid direct light against the camera lens during summer.


Validation Data
Results of the automated image analysis are evaluated against a subjective visual inspection of the same images. The data are also compared to snow depth measured on the ground at the nearby weather station. We visually inspected all midday images, and the ground snow cover was subjectively classified into categories between 0 and 100%. Information obtained from the image time series by visual inspection represents the mean conditions in the camera view, and cannot be solely attributed to the defined ROI. The observer did not take into account the contamination of pure snow cover by forest litter, which reduces the reflectance of snow under canopies during the snowmelt period in spring. A set of selected images was analyzed by 7 interpreters, to give an idea of the subjective error (bias) introduced by the interpreters when the snow cover is partial. We selected 33 images from 11 days, representing the 11 categories (0, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, and 100%), with 3 images per day. We calculated RMSE values between the visual interpretation used in the paper and the interpretations by 3, 5, and 7 interpreters; in all three cases the RMSE was 0.1.

Digital Elevation Model (DEM)
The digital elevation model (DEM) data used in the study are produced by the National Land Survey of Finland (2016). The DEM data provided by the institute are DEM2 (2 m grid), based on airborne laser scanning, and DEM10 (10 m grid). The DEM2 data are available only for the cameras at Sodankylä; the DEM10 data are used at Kenttärova.

Orthorectification Parameters
Measurement of the location of the camera target has a relatively larger error compared to the measurement of the camera orientation parameters, because it is challenging to find the exact point that the camera is targeting, and the spatial accuracy of GPS receivers is generally much lower than the size of that point. Thus, for this study, camera orientation parameters are obtained. These parameters can be given as one of the following sets: (1) the roll angle of the camera, the camera position and the camera target position in real world coordinates, the focal length of the camera, and the scaling factor of the projection; or (2) the camera position in real world coordinates, the roll, yaw, and pitch angles of the camera, the focal length of the camera, and the scaling factor of the projection.
The focal length of the camera is specific to the design of the camera. The scaling factor of the projection also depends on the design of the camera, but changes with the zoom, since it is relative to the spatial resolution of the image. The yaw angle of the camera is defined as the angle between geographic north and the projection, on the ground, of the image axis from the camera target to the camera. The pitch angle of the camera is defined as the angle between the horizontal and the image axis from the camera target to the camera. The roll angle of the camera is defined as the angle between the horizontal and the left-to-right axis of the image. The camera position is defined as the position of the camera in real world coordinates: the latitude (Y axis), the longitude (X axis), and the height (Z axis). The camera target position is defined as the position of the point to which the center of the camera image corresponds in real world coordinates: the latitude (Y axis) and the longitude (X axis) (Figure 5).
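As an illustration of how parameter set (1) relates to set (2), the yaw and pitch angles can be derived from the camera and target positions. This is a minimal sketch, not code from the study: it assumes a local metric (east/north/up) grid, a ground-level target, and defines positive pitch as looking down along the camera-to-target axis.

```python
import math

def yaw_pitch_from_target(cam, target):
    """Derive yaw and pitch from camera position cam = (x_east, y_north, z)
    and ground target position target = (x_east, y_north), target at z = 0.
    Yaw is measured clockwise from geographic north; pitch is the angle
    below the horizontal (positive = camera looking down)."""
    dx = target[0] - cam[0]                 # east component of view axis
    dy = target[1] - cam[1]                 # north component of view axis
    yaw = math.degrees(math.atan2(dx, dy)) % 360.0
    pitch = math.degrees(math.atan2(cam[2], math.hypot(dx, dy)))
    return yaw, pitch
```

For example, a camera 10 m up looking at a target 100 m due east views with a yaw of 90 degrees and a pitch of about 5.7 degrees below the horizontal.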
The algorithm by [27] is used for generation of a viewshed, which is used for orthorectification. The algorithm uses reference planes to determine if a location is visible from another location. For the processing, along with the DEM data, the camera location is used as input.
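The reference-plane method of [27] itself is more elaborate and operates on a 2D grid, but the underlying line-of-sight idea can be illustrated with a one-dimensional sketch (our own simplification, not the actual algorithm): a cell is visible if its elevation angle from the camera exceeds the maximum angle to every nearer cell.

```python
import numpy as np

def viewshed_profile(elev, cam_idx, cam_height):
    """Line-of-sight visibility along a 1-D elevation profile.
    A cell is visible if the tangent of its elevation angle from the
    camera exceeds that of all nearer cells in the same direction."""
    z_cam = elev[cam_idx] + cam_height
    visible = np.zeros(len(elev), dtype=bool)
    visible[cam_idx] = True
    for direction in (-1, 1):               # sweep left and right of the camera
        max_tan = -np.inf
        i = cam_idx + direction
        while 0 <= i < len(elev):
            tan = (elev[i] - z_cam) / abs(i - cam_idx)
            if tan > max_tan:               # nothing nearer blocks this cell
                visible[i] = True
                max_tan = tan
            i += direction
    return visible
```

For example, with the camera mounted 2 m above flat ground, a 5 m hill two cells away hides everything behind it.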
The location and orientation parameters were measured by the authors using a measuring tape, an inclinometer, a GPS receiver, and a compass. The focal length of the cameras and the scaling factor for the images were found empirically by using the projection algorithm and the measurements (Table 1).

Snow Detection Algorithm
We used an algorithm based on a threshold value, which is defined according to the histogram of an image to classify the pixels in the image as snow-covered or snow-free [26]. In the algorithm, a threshold value for the image is chosen by finding the first local minimum higher than digital number (DN) 127 in the histogram of the blue channel. This situation occurs ideally in the case of partial snow coverage. If no local minimum is found, DN 127 is selected as the threshold. This situation occurs ideally in cases of both full and zero snow coverage. If the blue channel value of a pixel is higher than the threshold obtained from the histogram of the image, it is considered a snow-covered pixel (Figure 6). The histogram is extracted only for the ROI, and it is also smoothed by averaging the 5 nearest points to each data point.
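The thresholding steps can be sketched as follows. This is an illustrative implementation of the histogram rule described above, not the code of [26]; in particular, the strictness of the local-minimum test and the boundary handling of the smoothing are our own assumptions.

```python
import numpy as np

def snow_threshold(blue_channel, roi_mask, dn_fallback=127, smooth_k=5):
    """Pick a snow/no-snow threshold from the blue-channel histogram of
    the ROI: the first local minimum above DN 127, falling back to
    DN 127 when no such minimum exists (full or zero snow cover)."""
    values = blue_channel[roi_mask]                  # ROI pixels only
    hist, _ = np.histogram(values, bins=256, range=(0, 256))
    # Smooth by averaging each bin with its nearest neighbours.
    kernel = np.ones(smooth_k) / smooth_k
    hist = np.convolve(hist, kernel, mode="same")
    # First local minimum above the fallback DN (strictness is a choice).
    for dn in range(dn_fallback + 1, 255):
        if hist[dn] < hist[dn - 1] and hist[dn] <= hist[dn + 1]:
            return dn
    return dn_fallback

def snow_mask(blue_channel, roi_mask):
    """Classify ROI pixels as snow-covered (True) or snow-free (False)."""
    t = snow_threshold(blue_channel, roi_mask)
    return (blue_channel > t) & roi_mask
```

The fraction of True pixels in the returned mask, relative to the ROI size, gives the scene FSC (before orthorectification).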

The algorithm was developed and tested on images from various cameras in the mountains of the Alps and southern Italy, where it performed well at detecting snow cover on the ground [26]. In this study, the algorithm is used to detect snow cover in conditions typical of boreal regions, where the monitored areas include wetlands and forests.
From the camera images, it is seen that the dominant lens distortion is radial. Radial lens distortion causes the actual image point to be displaced radially in the image plane. An approximation with a single coefficient is used to correct the radial distortion [28]. The coefficients are determined empirically by visually checking the objects and the horizon line in the field of view.
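A single-coefficient correction of this kind can be sketched with the common one-term polynomial model; the coefficient, principal point, and focal length below are illustrative values, not those of [28] or of the study cameras.

```python
import numpy as np

def undistort_points(xy, k1, center, f):
    """Correct radial lens distortion with a single coefficient k1.
    xy: (N, 2) pixel coordinates; center: principal point (cx, cy);
    f: focal length in pixels. Applies x_u = x_d * (1 + k1 * r^2),
    with r the radius in normalised image coordinates."""
    p = (np.asarray(xy, float) - center) / f        # normalise to the image plane
    r2 = np.sum(p ** 2, axis=1, keepdims=True)
    p_u = p * (1.0 + k1 * r2)                       # radial correction
    return p_u * f + center                         # back to pixel coordinates
```

With k1 = 0 the mapping is the identity; a positive k1 pushes points outward from the principal point, which is how barrel distortion is compensated.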
After classifying pixels as snow-covered or snow-free, the image can be orthorectified to a grid to obtain a snow cover map of the area [25,26,28,29]. This snow cover map can be used to count pixels with and without snow to obtain the scene-specific fractional snow cover (FSC). An algorithm is used for orthorectification [29]. This technique first converts the DEM points in real world coordinates to the camera coordinate system. Then, by applying perspective projection, the corresponding 2D coordinates for the perspective of the camera are calculated. Finally, the coordinates are scaled to fit the size of the image.
The real world coordinates used in the orthorectification process are provided by the DEM data. In addition to the DEM data, the unit vectors defining the viewing geometry are needed for orthorectification. These unit vectors can be calculated from the orientation parameters described above.
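The three projection steps can be sketched with a generic pinhole-camera formulation; the rotation matrix R (built from yaw, pitch, and roll), the focal length, and the scaling factor here are schematic assumptions rather than the parameters of [29].

```python
import numpy as np

def project_dem_points(points_world, cam_pos, R, f, scale, img_size):
    """Project DEM points in world coordinates into pixel coordinates:
    (1) shift and rotate into the camera coordinate system,
    (2) apply the perspective projection,
    (3) scale and shift onto the image grid (origin at image centre)."""
    p_cam = (np.asarray(points_world, float) - cam_pos) @ R.T
    in_front = p_cam[:, 2] > 0                  # discard points behind the camera
    p_cam = p_cam[in_front]
    x = f * p_cam[:, 0] / p_cam[:, 2]           # perspective divide
    y = f * p_cam[:, 1] / p_cam[:, 2]
    u = scale * x + img_size[0] / 2.0           # pixel coordinates
    v = scale * y + img_size[1] / 2.0
    return np.column_stack([u, v]), in_front
```

Inverting this mapping per image pixel (or rasterising the projected DEM cells) yields the orthorectified snow cover map from which snow and snow-free pixels are counted.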
An automatic digital image processing system for multiple camera networks was developed. The system processes webcam images for environmental data from multiple camera networks in a user-friendly and automated way. The toolbox for the system is called the Finnish Meteorological Institute Image Processing Tool (FMIPROT). For the study, the snow cover algorithm is implemented in the toolbox with georeferencing (orthorectification, viewshed, and automatic downloading and handling of DEM data for Finland). If lens distortion is present, correction is applied after creating a ROI mask and before snow detection (Figure 7).
Using FMIPROT, the study can be extended to other cameras and camera networks. One can also add a newly established camera network, or even a time series of images from a single camera, to FMIPROT to apply the algorithm used in the study. If the camera is located in Finland, the DEM data will also be downloaded and handled automatically. The software is free to use, and available from the MONIMET website [30].

Statistical Analyses
Comparison of the estimated FSC and the reference FSC was conducted using the original continuous values, and also by category, in order to present the success rate of the algorithm in classifying the images. For the continuous observations, we calculated the root mean squared error (RMSE) between camera-estimated FSC and FSC estimated by an observer:

RMSE = sqrt( (1/N) * Σ (FSC_estimated − FSC_reference)² )
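As a minimal sketch, the RMSE between the camera-estimated and reference FSC series can be computed as:

```python
import numpy as np

def fsc_rmse(fsc_estimated, fsc_reference):
    """Root mean squared error between camera-estimated and reference
    FSC, both given on the 0-1 scale."""
    est = np.asarray(fsc_estimated, float)
    ref = np.asarray(fsc_reference, float)
    return float(np.sqrt(np.mean((est - ref) ** 2)))
```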

where FSC_reference refers to observations made by a person who classified the FSC of each image (ROI) into 10% categories, FSC_estimated refers to the FSC estimated from the digital image, and N refers to the total number of data pairs. Although the reference observations are made subjectively by an expert, we assume that they are most likely correct when there is full or no snow cover. Accuracy is expected to be lower during partial snow cover, but even in these cases the visual observation is unlikely to give unrealistic or implausible information about the true snow cover.

In order to estimate the start and end dates of the early and melting seasons, we used the visual observations. We defined the start dates of the early and melting seasons as 3 days before the first snow and the start of melting, respectively, and the end dates as 3 days after full snow cover (100%) and no snow, respectively. The definition of the seasons varies between the sites and years (Table 2).

When using categorized data, as from weather stations, the FSC estimates are also categorized accordingly. We attributed the FSC observations to the following categories: 0-10% (Class A), 10-50% (Class B), 50-90% (Class C), and 90-100% (Class D) (Table 3). Producer's accuracy for a class describes the proportion of correctly classified reference cases to all reference cases of that class. User's accuracy for a class describes the proportion of cases correctly placed into that class to all cases placed into that class. Commission error for a class describes the proportion of estimates incorrectly placed into the class to the total number of cases placed into that class (falsely committed). Omission error for a class describes the proportion of cases of that class erroneously placed into another class to the number of cases actually belonging to that class (falsely omitted).
For a class X, these measures are computed from the corresponding row and column of the confusion matrix. The total accuracy is defined as the number of correct matches as a proportion of the total number of cases.
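The categorical measures defined above can be computed from a confusion matrix as sketched below. The handling of class boundaries (e.g., whether FSC = 10% falls into Class A or B) is our own choice, as the paper does not specify it.

```python
import numpy as np

def fsc_class(fsc):
    """Map an FSC value (0-1) to Class A (0-10%), B (10-50%),
    C (50-90%), or D (90-100%); upper boundaries go to the next class."""
    return "ABCD"[int(np.searchsorted([0.10, 0.50, 0.90], fsc, side="right"))]

def class_accuracies(conf):
    """Per-class producer's and user's accuracy, plus total accuracy,
    from a confusion matrix with rows = reference class and
    columns = estimated class. Omission and commission errors are
    1 minus producer's and user's accuracy, respectively."""
    conf = np.asarray(conf, float)
    correct = np.diag(conf)
    producers = correct / conf.sum(axis=1)   # correct / all reference cases of class
    users = correct / conf.sum(axis=0)       # correct / all cases placed into class
    total = correct.sum() / conf.sum()       # overall accuracy
    return producers, users, total
```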

Snow Cover Analyzed as a Continuous Variable
The two cameras at the Kenttärova site produced similar FSC estimates for both the early and melting seasons in 2015 and 2016 (Figure 8). Consequently, the RMSEs of the cameras were also very similar (Table 4). The results also showed that the FSC estimates were good in both the early and melting seasons for both the Sodankylä ground and wetland cameras (Figures 9 and 10). For all sites, the late winter results include a large number of days with high error. From the figures, it seems that these days cover a large portion of the data, but the days with low error are actually much more numerous throughout the year; since the markers in the graphs are drawn on top of each other, the number of data points is less visible (Figures 11-14). The seasonal RMSE results show that the errors from winter days are still in a range comparable with the others (Table 4). The FSC estimates for all sites from image processing are in reasonably good agreement with the visual observations, with R-squared values above 0.65. The slopes of the regressions are mostly between 0.7 and 0.9, meaning the algorithm consistently underestimated snow cover relative to the observer (Figures 11-14).
The images for which the fractional snow cover results have large errors were further inspected to understand the reasons for failure. The factors that cause failures were divided into four groups: (1) changes in the camera view, (2) environmental components that are classified as snow, (3) environmental components that hide the snow cover, and (4) phenomena that disturb the histogram. These factors occur in different circumstances, and their effects on the results differ.
Changes in the camera view occurred on two of the four cameras. The view direction of the Kenttärova canopy camera moved to the right by about 5-10 degrees in the winter of 2015-2016. The movement not only changed the ROI, but also caused the reference plate in front of the camera to cover most of the ROI (Figure 15a). In addition, in late winter, the accumulation of snow on the reference plate masked the field of view almost completely (Figure 15b). Later in the same winter, the camera moved again, rotating 90 degrees to the right, as if it had fallen down (Figure 15c). Movements of this camera changed the ROI completely, so the images from that situation were discarded from the analyses. Changes in zoom level and focus occurred on the Sodankylä ground camera. In this case, the ROI did not change as much and still covered the same area (Figure 15d,e), so the images were not discarded from the analysis.
Environmental components that were classified as snow are objects or vegetation that simply look like snow at the pixel level, even to the human eye. An example is the lichen on the ground, visible from the Sodankylä ground camera in the summer seasons (Figure 15f). The high reflectance of lichen in the blue channel [31], next to soil and green vegetation, causes it to be detected as snow. The error in fractional snow cover caused by lichen is relatively low: the summer RMSE for Sodankylä is higher than for the other cameras, but still only 3.6% (Table 4). Another example is water on the ground. The high reflectance of water accumulated on bare soil in the Kenttärova field of view after rain, and on the wetland in the Sodankylä field of view during the melting season, produces high values in the blue channel, depending on the direction of the incoming light (Figure 15g). This effect causes the wet area to be classified as snow. Objects with high blue channel reflectance (e.g., reference plates, snow sticks, masts) were also classified as snow (Figure 15g). Thus, such objects should not be included in ROIs, and should be stabilized so that they do not fall into ROIs if they become loose.
Environmental components that hide snow cover are objects and vegetation that block the field of view at the pixel level. Litter from trees and dirt are the most common examples, and the effect is most visible when full snow cover is present (Figure 15h). Another example is long branches, either from the ground or from trees, which change position under the weight of accumulating snow. Even though the ROIs are selected so that this situation does not disturb the analyses, some images have branches in the field of view, for example when a branch breaks and falls onto another branch.
Phenomena that disturb the histogram include shades in the field of view cast by objects, vegetation, clouds, and snow properties (e.g., roughness, irregularities) (Figure 15i-l). Under full cloud cover, illumination of the field of view is almost uniform. The same holds when there is no cloud cover and the ROI is selected such that it contains no shade, either because there is no object to cast a shadow, or because the direction of the incoming light casts the shadows elsewhere. Under uniform illumination, the histogram of the ROI can have two different signatures, as explained in the methods section. When shading occurs, different parts of the ROI have different levels of illumination: the parts in shadow are much darker than the rest. This doubles the number of distribution components (peaks) in the histogram (Figure 16). In that case, the threshold selected automatically by the algorithm causes the shaded areas to be classified as snow-free and the non-shaded areas as snow-covered, regardless of whether the pixels actually correspond to snow cover. The error caused by this phenomenon can be up to 99%, and it is observed in almost all images with an error larger than 50%.

Histogram disturbance by shade is the most significant failure mode, as it causes the largest errors. Failures caused by environmental components occurred mostly in summer, and can be discarded from the analyses. Changes in camera view are also easier to spot, because they generally persist over an interval of time. The shade phenomenon, however, depends on cloud cover, the environment, and the direction of sunlight, and may change within minutes. One could inspect all the images and discard the problematic ones, but such manual intervention runs counter to the idea of automated processing, and would also mean losing a large amount of data. Instead, the algorithm should be developed or trained with information about histograms under different light conditions, possibly by supervising the algorithm training with visual inspection and classification of sun/shade images.
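As a sketch of the histogram-thresholding idea and its shade failure mode, the following Python snippet smooths a 256-bin grayscale histogram, finds the two dominant, well-separated peaks (dark = snow-free, bright = snow), and places the threshold at the deepest valley between them. This is a hypothetical illustration; the function and parameter names are not the exact MONIMET implementation.

```python
import numpy as np

def snow_threshold(gray_values, smooth_win=11, min_peak_dist=30):
    """Pick a snow / snow-free threshold from a smoothed grayscale histogram.

    Illustrative sketch (not the paper's exact algorithm): smooth the
    256-bin histogram, find the two dominant well-separated peaks, and
    place the threshold at the deepest valley between them. Shade in the
    ROI doubles the peaks and can pull this valley to the wrong place,
    which is the failure mode discussed in the text.
    """
    hist, _ = np.histogram(gray_values, bins=256, range=(0, 256))
    h = np.convolve(hist, np.ones(smooth_win) / smooth_win, mode="same")
    # local maxima of the smoothed histogram, sorted by height
    peaks = [i for i in range(1, 255) if h[i] >= h[i - 1] and h[i] > h[i + 1]]
    peaks.sort(key=lambda i: h[i], reverse=True)
    if not peaks:
        return 128  # fallback: flat histogram
    p1 = peaks[0]
    far = [i for i in peaks[1:] if abs(i - p1) > min_peak_dist]
    if not far:
        return 128  # single-peak histogram: no clear bimodal structure
    lo, hi = sorted((p1, far[0]))
    return lo + int(np.argmin(h[lo:hi + 1]))  # deepest valley between peaks

# Synthetic bimodal ROI: dark ground (~40) and bright snow (~220)
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(40, 10, 5000),
                              rng.normal(220, 10, 5000)]), 0, 255)
t = snow_threshold(img)
fsc = float(np.mean(img > t))  # fraction of pixels classified as snow
```

Under uniform illumination the two peaks are clean and the valley sits between ground and snow brightness; with partial shade, each peak splits in two, and a valley-based threshold can land between the sunlit and shaded snow peaks instead.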

Categorical Snow Cover Analysis
We prepared confusion matrices with accuracies and errors for all four selected regions (Tables 5-8). Producer accuracy was highest for Class A (FSC 0-10%), around 0.92, and reasonably good for the other classes (B: 10-50%; C: 50-90%; D: 90-100%), between 0.6 and 0.7 at all sites, except for Class C at the Sodankylä wetland and ground sites (0.21 and 0.32, respectively).
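To make the accuracy measures concrete, the sketch below computes producer accuracy per class, and the total accuracy and error, from a 4-class confusion matrix. The counts are invented for illustration and are not taken from Tables 5-8.

```python
import numpy as np

# Hypothetical confusion matrix (rows = visual reference classes A-D,
# columns = automated classes); counts are illustrative only.
conf = np.array([
    [92,  5,  2,  1],   # A: FSC 0-10%
    [ 8, 65, 20,  7],   # B: FSC 10-50%
    [ 3, 25, 30, 42],   # C: FSC 50-90%
    [ 1,  4, 25, 70],   # D: FSC 90-100%
])

# Producer accuracy: fraction of reference samples of each class that the
# automated analysis assigned to the same class (diagonal / row sum).
producer_acc = np.diag(conf) / conf.sum(axis=1)

# Total accuracy and error: sum of the diagonal over the sum of all
# sixteen entries, as in the acc_tot / err_tot formula of the methods.
acc_tot = np.trace(conf) / conf.sum()
err_tot = 1 - acc_tot
```

With these illustrative counts, Class A reaches a producer accuracy of 0.92 while Class C drops to 0.30, mirroring the pattern reported above where the mixed-cover class is hardest to classify.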

Conclusions
Estimation of fractional snow cover (FSC) is critical to water management, and important for meteorological, climatological, and hydrological applications. Retrieving FSC by satellite remote sensing, particularly over forested areas, is challenging; furthermore, the development and validation of new algorithms, along with the new generation of satellite sensors, creates a need for good-quality validation data. However, both in situ FSC measurements and high-resolution satellite-based reference snow maps for the evaluation of moderate-resolution FSC retrievals are difficult to obtain. In particular, no properly working algorithms are available for creating accurate high-resolution FSC reference maps for boreal forests. Our earlier studies show that validation results using high-resolution FSC maps as reference may differ substantially, depending on the applied FSC algorithm [14].
Due to the above-mentioned problems, we applied a technique for retrieving temporally frequent information on local, site-specific FSC using a network of digital cameras. Based on our results, we conclude that snow cover can be analyzed with consumer-grade cameras. The tested snow algorithm is able to estimate fractional snow cover with high R-squared and low RMSE values. The RMSEs varied between 12% and 30% (FSC %-units), excluding the summer season, which produced very low RMSEs due to the lack of snow. We analyzed the reasons for large estimation errors in the automatic snow cover classification in particular cases; the main error source was the occurrence of shaded areas in the region of interest. We showed that cameras can be used to monitor snow status with reasonable accuracy, and could thus be used to improve FSC retrieval algorithms from remote sensing data and/or to validate Earth-observed FSC. Therefore, camera-based algorithms should be further developed, especially for varying light conditions in the field of view, to obtain better accuracy in FSC retrieval. Another way to improve FSC retrieval accuracy is to implement a balanced enhancement technique that improves the visual quality of both highlights and dark areas [32,33].
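As a minimal illustration of the evaluation metric used throughout, the snippet below computes the RMSE between an automated FSC series and visual observations of the same images. The numbers are invented for demonstration and are not the study's data.

```python
import numpy as np

def rmse(estimated, reference):
    """Root mean squared error between automated and visual FSC (range 0-1)."""
    e = np.asarray(estimated, dtype=float)
    r = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((e - r) ** 2)))

# Illustrative daily FSC series during a melting period (invented values):
auto   = [1.00, 0.95, 0.80, 0.55, 0.30, 0.10, 0.00]  # image processing
visual = [1.00, 1.00, 0.70, 0.60, 0.20, 0.05, 0.00]  # visual inspection
err = rmse(auto, visual)   # about 0.06, i.e., 6 FSC %-units
```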

Figure 5. Camera orientation parameters defining the viewing geometry.

Figure 6. Threshold selection for snow classification for two different types of histogram distributions. The histograms are extracted from real images and smoothed in the same way as in the algorithm.

Figure 7. Main steps of calculating fractional snow cover for a single image: (a) Original image; (b) Corresponding y coordinates of the image on the spatial grid; (c) Corresponding x coordinates of the image on the spatial grid; (d) Georeferenced image on the spatial grid (for (b-d), the spatial area is cropped according to the ROI); (e) The mask to be applied for the ROI; (f) Snow-covered and snow-free pixels in the ROI; and (g) Weight of the surface area for each pixel in the ROI.
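The final weighting step can be sketched as follows. After georeferencing, each image pixel corresponds to a different ground-surface area (distant pixels cover more ground), so FSC is an area-weighted rather than a plain pixel mean. This is a minimal illustration under assumed array names, not the paper's code.

```python
import numpy as np

def fractional_snow_cover(snow_mask, roi_mask, pixel_area):
    """Area-weighted FSC over a region of interest (ROI).

    Sketch of steps (e)-(g) above: snow_mask marks snow-covered pixels,
    roi_mask marks pixels inside the ROI, and pixel_area holds each
    pixel's ground-surface footprint from the georeferencing step.
    All array names are illustrative.
    """
    w = pixel_area * roi_mask               # weights, zero outside the ROI
    return float((w * snow_mask).sum() / w.sum())

# Toy 3x3 ROI: the top (distant) row is snow-covered and weighted heavier
snow = np.array([[1, 1, 1],
                 [0, 0, 1],
                 [0, 0, 0]], dtype=float)
roi = np.ones((3, 3))
area = np.array([[4, 4, 4],                 # distant pixels: larger footprint
                 [2, 2, 2],
                 [1, 1, 1]], dtype=float)
fsc = fractional_snow_cover(snow, roi, area)   # (12 + 2) / 21 = 2/3
# an unweighted pixel mean over the same ROI would give only 4/9
```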

Figure 8. Kenttärova ground and canopy cameras, early season (top) and melting season (bottom); comparisons of image processing results and visual observations.

Figure 9. Sodankylä ground camera, early season (top) and melting season (bottom); comparisons of image processing results and visual observations.

Figure 10. Sodankylä wetland camera, early season (top) and melting season (bottom); comparison of image processing results and visual observations.

Figure 11. Kenttärova canopy camera: (a) 3-day moving-window averaged fractional snow cover over time from image processing and daily visual observations, and daily averaged snow depth over time from automatic ground measurements. Distribution and regression of fractional snow cover from image processing versus visual observations for (b) all data; (c) early season; and (d) melting season.

Figure 12. Kenttärova ground camera: (a) 3-day moving-window averaged fractional snow cover over time from image processing and daily visual observations, and daily averaged snow depth over time from automatic ground measurements. Distribution and regression of fractional snow cover from image processing versus visual observations for (b) all data; (c) early season; and (d) melting season.

Figure 13. Sodankylä wetland camera: (a) 3-day moving-window averaged fractional snow cover over time from image processing and daily visual observations, and daily averaged snow depth over time from automatic ground measurements. Distribution and regression of fractional snow cover from image processing versus visual observations for (b) all data; (c) early season; and (d) melting season.

Figure 14. Sodankylä ground camera: (a) 3-day moving-window averaged fractional snow cover over time from image processing and daily visual observations, and daily averaged snow depth over time from automatic ground measurements. Distribution and regression of fractional snow cover from image processing versus visual observations for (b) all data; (c) early season; and (d) melting season.

Figure 15. Problems in the images that cause failures in the detection of fractional snow cover: (a) Field of view blocked by the reference plate after camera movement; (b) Field of view blocked by accumulation of snow on the reference plate; (c) Field of view loss after camera movement; (d,e) Before and after minimal camera movement and loss of focus; (f) Lichen on the ground; (g) Water accumulation reflecting the bright sky, and fallen snow sticks; (h) Litter and dirt on the ground; (i) Shadows of trees in the Kenttärova canopy camera field of view; (j) Shadows of trees in the Kenttärova ground camera field of view; (k) Shadows of trees and snow sticks in the Sodankylä ground camera field of view; (l) Shadows from snow surface irregularities, snow roughness, snow sticks, and the camera mast.

Figure 16. Examples of histogram disturbance by shade: ideal histogram and disturbed histogram for full snow cover (left) and for partial snow cover (right).

Table 1. Orthorectification parameters for the cameras.

Table 2. Definition of seasons.
$acc_{tot} = \frac{n_{AA} + n_{BB} + n_{CC} + n_{DD}}{n_{AA} + n_{AB} + n_{AC} + n_{AD} + n_{BA} + n_{BB} + n_{BC} + n_{BD} + n_{CA} + n_{CB} + n_{CC} + n_{CD} + n_{DA} + n_{DB} + n_{DC} + n_{DD}}, \qquad err_{tot} = 1 - acc_{tot}$

Table 4. RMSE for the fractional snow cover (FSC) from image processing for all seasons.

Table 6. Confusion matrices for the Kenttärova ground camera.

Table 7. Confusion matrices for the Sodankylä ground camera.