Article

Introducing GEOBIA to Landscape Imageability Assessment: A Multi-Temporal Case Study of the Nature Reserve “Kózki”, Poland

1 Department of Grassland and Landscape Shaping, University of Life Sciences in Lublin, 20-950 Lublin, Poland
2 Department of Applied Mathematics and Computer Science, University of Life Sciences in Lublin, 20-950 Lublin, Poland
3 School of Architecture, Building and Civil Engineering, Loughborough University, Leicestershire LE11 3TU, UK
4 Department of Forest Resource Management, Faculty of Forestry, University of Agriculture in Krakow, 31-425 Krakow, Poland
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(17), 2792; https://doi.org/10.3390/rs12172792
Submission received: 10 August 2020 / Revised: 24 August 2020 / Accepted: 25 August 2020 / Published: 27 August 2020
(This article belongs to the Special Issue Object Based Image Analysis for Remote Sensing)

Abstract:
Geographic object-based image analysis (GEOBIA) is a primary remote sensing tool utilized in land-cover mapping and change detection. Land-cover patches are the primary data source for landscape metrics and ecological indicator calculations; however, their application to visual landscape character (VLC) indicators has received little attention to date. To bridge the knowledge gap between GEOBIA and VLC, this paper puts the theoretical concept of using viewpoints as a landscape imageability indicator into the practice of a multi-temporal land-cover case study and explains how to interpret the indicator. The study extends the application of GEOBIA to visual landscape indicator calculations. In doing so, eight remote sensing datasets are the object of GEOBIA, from a historical aerial photograph (1957) and a CORONA declassified scene (1965) to contemporary (2018) UAV-delivered imagery. The multi-temporal GEOBIA-delivered land-cover patches are utilized to find the minimal isovist set of viewpoints and to calculate three imageability indicators: the number, density, and spacing of viewpoints. The calculated indicator values, viewpoint ranks, and spatial arrangements allow us to describe the scale, direction, rate, and reasons for VLC changes over the analyzed 60 years of landscape evolution. We found that the landscape imageability of the case study nature reserve (“Kózki”, Poland) transformed from a visually impressive openness to an imageability built from several landscape rooms enclosed by forest walls. Our results provide proof that the number, rank, and spatial arrangement of viewpoints constitute landscape imageability measured with the proposed indicators. In discussing the method’s technical limitations, we believe that our findings contribute to a better understanding of the impact of land-cover change on visual landscape structure dynamics and to further VLC indicator development.

1. Introduction

The movement of remote sensing software from pixel-based approaches to geographic object-based image analysis (GEOBIA) [1] led to improved workflows for imagery processing, especially land-cover classification [2] and land-cover change detection [3]. GEOBIA has a diverse range of applications [4], including, for example, soil science [5], archaeological research [6], geomorphology [7], forestry [8,9,10], and agriculture [11]. Because of its reduced spectral resolution, historical grayscale imagery is usually the object of visual interpretation techniques [12]. Individual classes [13,14], and sometimes even multiple classes [15], can be digitized on screen by GIS operators. However, these time-consuming tasks can be improved by image segmentation techniques so that image objects are classified more efficiently. Furthermore, textural analysis enables the supervised classification of single-channel imagery [16] with accuracy comparable to that of on-screen digitization.
More recently, GEOBIA has also become a prerequisite stage for visual landscape indicator development [17,18]. However, the use of GEOBIA for landscape imageability—the ability of a view to make a lasting impression on an observer [19]—and for the assessment of visual landscape character (VLC) indicators [19,20] has received little investigation to date. Although the contribution of land-use patches to landscape indicator calculation is already a well-recognized topic in landscape ecology [21], landscape quality assessment [22,23], and cultural ecosystem services [24], so far, no comprehensive long-term study exists on the complex relationships between land-cover patch changes and landscape visual structure.
To close this knowledge gap, we put the theoretical concept of using viewpoints as imageability indicators into practice through a multi-temporal land-use case study with three main parts: (i) an investigation of land-cover change over six decades using GEOBIA and high-resolution remote sensing imagery; (ii) deriving imageability indicators using viewpoints and the isovist algorithm; (iii) exploring the relationship between changing land cover and its impacts on the visual landscape. Taking visual landscape [25,26] characteristics as our focus, we interpret the isovist results to determine what they mean for landscape imageability. We hypothesize that the rank and spatial arrangement of viewpoints constitute landscape imageability, while the number and density of viewpoints reveal more about how the structure of a landscape imparts a unique character and even a strong visual image to the landscape user (the observer).

1.1. The Importance of Image Segmentation Quality as a Prerequisite for Imageability Indicator Calculations

Land-cover patterns, vegetation, and topography are regarded as key landscape features that create a “view” from the perspective of the human observer. Land-cover patterns and landscape features play two main roles in view generation: visual barriers and aesthetic components. We use landscape features as visual barriers to assess VLC, derived through GEOBIA, leaving the more subjective assessment of aesthetic components aside.
Segmentation, as the first stage of GEOBIA [27], is the task of grouping together adjacent pixels that have similar spectral characteristics to define radiometrically homogeneous segments [28]. Each segment represents the core of object-based analysis [1]. As described in Section 3.3, the segments’ borders, classified as visual curtains, can be used for imageability indicator calculations.
Although there are several different kinds of segmentation algorithms [29], we chose to use the multi-resolution segmentation (MRS) algorithm [30], which is one of the most widely used and successful [31]. This color-, texture-, shape-, and noise-sensitive algorithm [32] is associated with the difficulty of choosing the optimal scale parameter (SP), as well as the optimal shape and compactness settings. Of the three, however, it is SP which provides the most control of the segmentation process.
The accuracy of automated viewpoint calculations depends on segmentation quality; therefore, finding optimal SP thresholds for each time period of our analysis is essential. However, this can be a challenging task, solved either through a trial-and-error approach [33,34,35] or through a more automated method such as the estimation of scale parameter (ESP) tool [36]. The trial-and-error approach generally omits parameter evaluation and focuses on image objects and their classification for accuracy assessment. In the context of our study, however, where finding an optimal SP for both small and large co-existing landscape features was likely to be challenging and time-consuming, we opted for the more automated approach provided by ESP.
ESP software [37] iteratively generates image objects (segments) at multiple scale levels and calculates the local variance (LV) [38,39] for each of them [40]. The resulting plot of LV change rates (ROC-LV) enables the end-user to select the optimal SP for MRS in the most appropriate manner [41]. The automated scale parameter approach, as an unsupervised method of segmentation accuracy assessment [42,43,44], also provides a more objective basis on which to set SP, a key factor for image object classification accuracy [45,46]. To further improve the process of selecting optimal SP values, an enhancement to ESP software was released as ESP2, an e-Cognition plug-in [36]. The plug-in was successfully used in several studies and is regarded as a credible tool [47,48,49,50] for automating image segmentation at three levels of detail.
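The ROC-LV step described above can be sketched in a few lines. This is an illustrative sketch, not the ESP2 plug-in itself: it assumes the local variance (LV) has already been computed for each scale level and applies the percentage rate-of-change formula, shortlisting scales where ROC-LV peaks.

```python
def roc_lv(lv_series):
    """Percentage rate of change of local variance (ROC-LV) between
    consecutive scale levels; prominent peaks suggest SP candidates."""
    return [(curr - prev) / prev * 100.0
            for prev, curr in zip(lv_series, lv_series[1:])]

def peak_scales(scales, roc):
    """Scale parameters at which ROC-LV forms a simple local maximum."""
    return [scales[i + 1] for i in range(1, len(roc) - 1)
            if roc[i] > roc[i - 1] and roc[i] > roc[i + 1]]
```

In practice the ROC-LV plot is also inspected visually, as in Section 4.1.1; automatic peak picking only shortlists candidates for that inspection.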

1.2. Measuring Imageability with the Use of Viewpoints

The VLC indicator theoretical framework, proposed by Ode et al. [19], suggests that “the number and density of viewpoints as imageability indicator(s), could be calculated through visibility analysis using orthophotos or land-cover data and terrain data”. Imageability reflects the ability of landscape to create a strong visual image for the observer, thereby making it distinguishable and memorable [19]. Tveit et al. [20] and Ode et al. [51] used imageability to describe a VLC, while Lynch [52] and other urban planners reviewed by Shanken [53] focused on the similar concept of urban legibility.
As reviewed by Ode et al. [19], memorable visual images can be created by iconic elements, landmarks, or land-cover patterns, as well as the number of viewpoints. This theoretical concept does not prejudge whether high or low viewpoint density characterizes imageability. The Ode et al. [19] classification also distinguishes the viewshed size and depth of view as visual scale indicators. Specifically, the visual scale refers to perceptual units of open land delimited by taller vegetation, topography, or human-made objects viewed from a single viewpoint, termed the “landscape room” [20] or “landscape interiors” [54].
Our method uses viewpoints and their properties informed by the VLC imageability indicator classification [19]. We extend the concept of using a viewpoint to measure imageability by distinguishing viewpoint rank and spatial arrangement to describe imageability. To derive viewpoints, we use the isovist algorithm, from which we obtain both primary and secondary viewpoints.
First introduced by Benedikt [55], a single isovist is the volume of space visible from a given point, together with a specification of the location of that point (the viewpoint). This behaviorally and perceptually oriented methodology measures what area of space is visible from a predefined viewpoint. One can also think of the isovist as the volume of space illuminated by a point source of light. Every point in physical space has an isovist associated with it. This gives rise to formulating a geometrical model of a minimal isovist set (MIS) covering visible space. In short, the MIS of a simple polygonal region P is the smallest set of viewpoints in P whose union of isovists is equal to P [56]. The two-dimensional MIS is constructed as shown in Figure 1.
Viewpoints in the MIS with a large field of view are termed primary viewpoints, whereas secondary viewpoints have relatively small fields of view and are usually more numerous. In the context of VLC, specifically imageability, the primary viewpoints create a sense of the infinite and mystery of what lies in the landscape beyond the vantage point [57]. In general, long-distance views relate to our ancestors’ experiences of open landscapes, for whom the ability to look into the distance determined safety and survival in the living environment. On this basis, a long-distance view provided by the primary viewpoints contributes to landscape users’ psychological comfort, sense of personal freedom, security, and mental pleasure [58], as well as contemplative experience of the space [59]. According to the most recent contemplative landscape research findings that used neuroscience tools to evaluate the health impact of landscape exposure [60], the depth of the view and the visibility of landscape layers (fore-, middle-, and background) are among the key factors contributing to the restorative effect in the human brain [61]. Secondary viewpoints, meanwhile, tend to entice an observer toward deeper fields of view. Their contribution to imageability is less important unless they form a characteristic system (e.g., a maze), constituting a specific landscape legibility. Therefore, the minimal set of viewpoints and their density are usually used to describe landscape imageability, with their rank and location based on the degree of landscape openness or visual scale. This is achieved with the use of two-dimensional (2D) visibility modelling.
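Computing an exact minimal isovist set is an art-gallery-type covering problem and is computationally hard; the exact procedure of the Isovists software is not reproduced here. A common approximation, sketched below under the assumption that space is discretized into cells and each candidate viewpoint's isovist (the set of cells it sees) is precomputed, is a greedy set cover:

```python
def greedy_min_isovist_set(visible):
    """Greedy set-cover approximation of the minimal isovist set (MIS).
    'visible' maps each candidate viewpoint to the set of space cells
    its isovist covers; viewpoints are picked until every cell is seen."""
    uncovered = set().union(*visible.values())
    chosen = []
    while uncovered:
        best = max(visible, key=lambda v: len(visible[v] & uncovered))
        gained = visible[best] & uncovered
        if not gained:  # remaining cells are visible from no candidate
            break
        chosen.append(best)
        uncovered -= gained
    return chosen
```

Viewpoints chosen early, with the largest visibility gains, play the role of primary viewpoints in this sketch, while later picks with small gains correspond to secondary viewpoints.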

2. Study Area

Our case study high natural value (HNV) grassland landscape belongs to two Natura 2000 sites: the habitat refuge PLH14001 and the bird refuge PLB140001, located in the eastern part of Poland. The central part of this research area (52°21′32″ north (N), 22°51′40″ east (E)) is the Nature Reserve “Kózki” (86.1 ha), located in the Bug River valley, which is also part of the Podlasie Bug Gorge Landscape Park. This relatively flat area lies at a mean elevation of 117 m above sea level, 2 km from the town of Siemiatycze (Figure 2). Its high natural values are created by expanses of gray and white dunes overgrown by xeric sand calcareous grassland communities [62]. Further details of the grassland communities of the “Kózki” Nature Reserve were described by Warda et al. [63] and Kulik et al. [64].

3. Materials and Methods

Our methodology links GEOBIA with VLC assessment procedures. The GEOBIA stages include image preprocessing, segmentation, classification, and accuracy assessment. The GEOBIA classification result is then input as a two-dimensional landscape model for viewpoint calculation using the MIS algorithm implemented in the Isovists (version 2.3) [65] software. We use the derived viewpoints to estimate the minimum set and measure their density, spacing, and hierarchy—all core indicators of landscape imageability [19].

3.1. Remote Sensing Imagery Pre-Processing

The relatively small, low-contrast land-cover patches of our case study site make very high-resolution (VHR) images the most suitable for the case-study purposes. For this reason, global Earth observation (EO) programs with coarser pixels (such as NASA Landsat or ESA Sentinel-2) were excluded in favor of historical aerial, VHR satellite, and UAV imagery. An exception was CORONA KH-4 declassified imagery, which, despite its relatively coarse spatial resolution, allowed us to include a time period from the 1960s. Historical imagery was initially sourced from the Polish National Geodetic and Cartography Repository (PNGCR). Only imagery from the growing season and without cloud cover was considered. Off-nadir imagery was included to avoid the exclusion of potentially useful imagery.
To create a more complete time series, we also obtained more recent imagery from satellite (Pleiades-1B) and UAV platforms. The resulting collection of imagery had varied characteristics, including differing scale or ground sample distance (GSD) and spectral resolution (from grayscale to four-band compositions). The complete list of remote sensing datasets used in this study, along with basic technical characteristics, is listed in Table 1 and presented in Figure 3.

3.2. Imagery Processing Method

Grayscale images (1957, 1965, 1973) were retrieved as raw 8-bit raster files that needed to be georeferenced and mosaicked into a single image. As in other cases of historical imagery pre-processing [66], metadata such as interior and exterior camera orientation (applied to aerial imagery) were unknown. Therefore, we applied an image-to-image co-registration technique [14,67,68,69], an alternative method known to yield satisfactory results, especially in low-relief terrains [70]. A high-resolution (GSD 0.25 m, horizontal accuracy 0.15 m) contemporary orthoimage [71] was used as the reference for co-registration.
For each georeferenced image, at least 10 evenly distributed ground control points (GCPs) were manually selected through careful inspection of the reference and archival images. The task of identifying the ground features was conducted by an analyst with good knowledge of the study area from field surveys. As GCPs, we used identifiable, stable ground features such as road intersections, buildings, and bridges, as well as, where necessary, natural features (e.g., single trees), which can also serve as GCPs in the absence of anthropogenic features [72]. The height (z-value) of each GCP was read from the SRTM elevation model (DEM Version 4) to support the polynomial transformation interpolator [73,74]. Furthermore, because we used imagery spanning 61 years of landscape change, we selected imagery-specific GCPs for each period.
Image co-registration was done to single-pixel accuracy. A third-order polynomial transformation that resulted in the lowest RMSE and least visible image distortion was selected for image co-registration (reported in Table 1 as metadata). All datasets were registered in the EPSG 2180 coordinate system. Co-registered images were then mosaicked into single raster datasets for each time period and clipped to the extent of the 2018 UAV imagery. The UAV imagery encompassed the extent of the “Kózki” nature reserve, but was otherwise the imagery with the smallest spatial extent of our study (Figure 3H); thus, it became our study area boundary. Finally, all archival imagery (except CORONA) was down-sampled to the same GSD (0.5 m) to make the comparison among the imagery easier in subsequent processing steps.
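The order-selection step above can be illustrated with a generic least-squares sketch (this is not the GIS package actually used): GCP pairs are fitted with a 2D polynomial transform, and the returned RMSE is the score used to compare candidate orders. Note that an order-3 fit has 10 coefficients per axis, which is why at least 10 GCPs per image, as used above, are required.

```python
import numpy as np

def poly_terms(x, y, order):
    """Monomials x**i * y**j with i + j <= order (10 terms for order 3)."""
    return [x ** i * y ** j for i in range(order + 1) for j in range(order + 1 - i)]

def fit_polynomial_warp(src, dst, order):
    """Least-squares polynomial transform mapping source GCPs onto the
    reference image; the RMSE is the score used to compare candidate orders."""
    A = np.array([poly_terms(x, y, order) for x, y in src])
    cx, *_ = np.linalg.lstsq(A, np.array([p[0] for p in dst]), rcond=None)
    cy, *_ = np.linalg.lstsq(A, np.array([p[1] for p in dst]), rcond=None)
    pred = np.column_stack([A @ cx, A @ cy])
    rmse = float(np.sqrt(np.mean(np.sum((pred - np.asarray(dst)) ** 2, axis=1))))
    return cx, cy, rmse
```

Fitting orders 1 through 3 and keeping the order with the lowest RMSE (and least visible distortion) mirrors the selection described above.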
For the imagery retrieved from the PNGCR as orthoimages, no pre-processing was done except for clipping to the research area boundary and resampling to 0.5 m. The Pleiades orthoimages were delivered by Airbus Space as two (pan, multispectral) Tiff files after atmospheric correction (scene ID: DS_PHR1B_201507040930594_FR1_PX_E022N52_1008_03164). To achieve the GSD adopted in our study, the Pleiades imagery was pansharpened using the Gram–Schmidt method [75] with band weights dedicated to this particular sensor (R 0.166, B 0.167, G 0.167, IR 0.5) [76]. The most recent orthophoto was acquired on 26 June 2018 from a UAV platform provided by the University of Wroclaw (Department of Geoinformatics and Cartography, Poland). According to the UAV service provider report [77], the flight was conducted with an eBee drone equipped with a Canon S100 camera at a height of 148 m above ground level (AGL). Orthoimagery was produced by the UAV contractor using Metashape (Agisoft). In total, 36 signalized and 23 natural GCPs were used, with horizontal and vertical accuracies of 0.05 m. Finally, the delivered RGB and CIR imagery was merged into four-band Tiff files and resampled from a GSD of 0.39 m to 0.5 m, as with all other source data.
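The Gram–Schmidt step can be illustrated with a simplified sketch. The actual pansharpening was done in commercial software; the variant below is a generic weighted-synthetic-pan scheme, not that implementation. The sensor-specific Pleiades band weights quoted above would be passed as `weights`.

```python
import numpy as np

def gram_schmidt_pansharpen(ms, pan, weights):
    """Simplified Gram-Schmidt-style pansharpening: a synthetic low-resolution
    pan band is built as the weighted sum of the multispectral (MS) bands, and
    the spatial detail (pan - synthetic pan) is injected into each band with a
    covariance-based gain. ms: (bands, H, W) array already resampled to the
    pan grid; pan: (H, W); weights: one weight per MS band."""
    pan_sim = np.tensordot(np.asarray(weights), ms, axes=1)
    detail = pan - pan_sim
    var = pan_sim.var(ddof=1)
    sharp = np.empty_like(ms, dtype=float)
    for b in range(ms.shape[0]):
        gain = np.cov(ms[b].ravel(), pan_sim.ravel())[0, 1] / var if var else 0.0
        sharp[b] = ms[b] + gain * detail
    return sharp
```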

3.3. Segmentation and Segment Evaluation Method

The multi-resolution segmentation (MRS) algorithm [30] implemented in e-Cognition Developer (Trimble Geospatial) software was applied for imagery segmentation. For the shape and compactness segmentation parameters, the default values of 0.1 and 0.5, respectively, were retained. However, for the key factor of segmentation, the scale parameter (SP) derived from ROC-LV analysis was used.
The ROC-LV graph was generated as a bottom-up process of 150 loops with the use of ESP2 software. For each set of imagery, the ROC-LV was analyzed to select five SP candidates. Using this unsupervised method helped to avoid inefficient, step-by-step SP candidate testing [78]. To decide which among the five SP candidates was the most appropriate, a supervised method of assessing segmentation accuracy was applied [79]. Lucieer and Stein [80,81] pioneered the method of using topological object metrics to assess under- and over-segmentation; thus, segmentation accuracy metrics are now well documented [33,79,82]. Specifically, GEOBIA segments (GSs) are compared to ground-truth reference segments (RSs). Segmentation accuracy metrics can be calculated when several RSs match with one or more GS in terms of shape and size [78]. RSs are usually digitized with the use of orthoimagery [83]. In GEOBIA, the well-defined shape of arable fields [33,84], water bodies [85], trees [36], and some anthropogenic structures (e.g., building footprints) are preferred as RSs.
All RSs were digitized manually (on-screen vectorization) by two independent GIS operators. Next, in accordance with Zhang et al. [86], each RS was discussed by the GIS operators, and only those agreed upon were used for segmentation accuracy metric calculation. We also digitized each time period’s RS segments separately. By doing this, the land-cover changes, vegetation phenology, and spectral properties of each imagery dataset were captured by each RS set. Furthermore, as recommended by Moller et al. [33] and Marup et al. [87], the RS count and area class proportions [79] were also restricted. We used a set of 50 RSs at area class proportions set to 3:1:1 for small (≤0.09 ha), medium (0.1–0.2 ha), and large (>0.2 ha) RS area classes, respectively. An exception was made for the 1965 CORONA imagery, where its lower spatial resolution limited the number of small RSs we were able to confidently digitize.
We chose a higher proportion of small RSs due to the fact that the optimal SP was expected to create segments small enough to catch homogeneous patches of shrubs growing over the open sands of the study area, but that also resulted in over-segmentation of the neighboring grassland patches. Because of strong shadow effects (especially in 2010 and 2018), mixed forest areas could not be represented as a single segment, with pre-testing showing that homogeneous segmentation of these areas was difficult even at SPs greater than 150, as well as at ESP2 Level 3.
To assess segmentation accuracy, we used four common metrics: (1) the F-score (FS), combined from precision (p) and recall (r) measures [42]; (2) the segmentation error (SE) [88]; (3) the Jaccard index (JI) [89,90]; and (4) accuracy (AC). The metrics’ mathematical formulae are presented in Equations (1)–(4). The metrics range in value from 0 to 1, with lower SE values indicating higher RS-to-GS similarity and, thus, the optimal SP; the inverse is true for FS, JI, and AC. The SPs with the best metric scores were selected for each imagery date.
$$FS = \frac{1}{n}\sum_{i=1}^{n}\frac{2\,p_i r_i}{p_i + r_i},\tag{1}$$

$$SE = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|\mathrm{area}(RS) - \mathrm{area}(GS_i)\right|}{\mathrm{area}(RS) + \mathrm{area}(GS_i)},\tag{2}$$

$$JI = \frac{1}{n}\sum_{i=1}^{n}\frac{\mathrm{area}(RS \cap GS_i)}{\mathrm{area}(RS \cup GS_i)},\tag{3}$$

$$AC = \frac{1}{n}\sum_{i=1}^{n}\frac{\mathrm{area}(RS \cap GS_i) + \mathrm{area}\big(E_i \setminus (RS \cup GS_i)\big)}{\mathrm{area}(E_i)},\tag{4}$$

where $p_i = \frac{\mathrm{area}(RS \cap GS_i)}{\mathrm{area}(GS_i)}$ (precision), $r_i = \frac{\mathrm{area}(RS \cap GS_i)}{\mathrm{area}(RS)}$ (recall), $n$ is the number of GS segments, $RS$ is the reference segment, $GS_i$ is the $i$-th GEOBIA segment, and $E_i = \mathrm{envelope}(RS \cup GS_i)$.
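Equations (1)–(4) can be computed directly once segment geometries are available. The sketch below simplifies segments to axis-aligned boxes `(x0, y0, x1, y1)` so that the intersection, union, and envelope areas stay in pure Python; real GEOBIA segments would need a polygon library.

```python
def box_area(b):
    x0, y0, x1, y1 = b
    return max(0.0, x1 - x0) * max(0.0, y1 - y0)

def box_intersection(a, b):
    return (max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3]))

def box_envelope(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def segmentation_metrics(rs, gs_list):
    """FS, SE, JI, and AC (Equations (1)-(4)) averaged over the n GEOBIA
    segments matched to one reference segment, with segments simplified
    to axis-aligned boxes for this sketch."""
    n = len(gs_list)
    a_rs = box_area(rs)
    fs = se = ji = ac = 0.0
    for gs in gs_list:
        a_gs = box_area(gs)
        inter = box_area(box_intersection(rs, gs))
        union = a_rs + a_gs - inter
        p, r = inter / a_gs, inter / a_rs
        fs += 2 * p * r / (p + r) if p + r else 0.0
        se += abs(a_rs - a_gs) / (a_rs + a_gs)
        ji += inter / union
        env = box_area(box_envelope(rs, gs))
        ac += (inter + (env - union)) / env  # env - union = area(E_i \ (RS u GS_i))
    return tuple(v / n for v in (fs, se, ji, ac))
```

A perfectly matching segment scores FS = JI = AC = 1 and SE = 0, which is the behavior described after the equations.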

3.4. Image Classifications

3.4.1. Land-Cover Class Nomenclature

For our multi-temporal analysis, five land-cover classes were used. According to Anderson’s [91] land-cover classification levels (LCCL) used by the USGS, panchromatic images are only suitable for level I (LCCL-1) classification because of their low spectral resolution. However, from the perspective of landscape visual property analysis, only two of the nine LCCL-1 classes (built-up and forest) can be used as visual curtains. On the other hand, the sub-meter spatial resolution of archival RS imagery allows for the extraction of other important land-cover classes at LCCL-2 (e.g., shrubs, orchards, vineyards, and bare exposed rock). Furthermore, Corine Land Cover (CLC) nomenclature provides additional class description details that can affect imageability indicator calculation. For imageability analysis, the adopted LCCL should include compulsory land-cover classes (c-classes), which create visual curtains, and, optionally, case-study-specific classes (s-classes), including, in this case, xeric sandy grassland, open sand, and water. The adopted classification nomenclature is presented in Table 2, along with references to USGS [91] and CLC nomenclature.

3.4.2. GEOBIA Classification Methodology

Taking advantage of image object geometry, texture, and brightness, we adopted the framework of Reference [67], combining manual and supervised image object classification along with post-processing procedures. Firstly, water bodies and building footprints were manually digitized (on-screen visual interpretation) and incorporated as classified segments into the MRS workflow. This helped to solve the problems of water body misclassification caused by reflection and the presence of sandbanks, as well as misclassification of buildings with gabled roofs due to high contrast among roof surfaces. All other segments generated at the optimal SP (Section 3.3) were classified by supervised methods incorporating training (ts) and validation (vs) ground-truth samples.
For each imagery period, a set of 250 land-cover samples was created by a GIS operator, with 50 in each land-cover class. Only well-recognized samples were selected with the use of time series orthoimages, as well as knowledge from previous field work and information on the vegetative community structure of the case study area [62]. For this reason, a random sampling method for ground-truth sample selection was not applied. Ground-truth samples were divided into two equal sets of ts used for classifier training and vs used for accuracy assessment.
The classification procedure was conducted with the use of e-Cognition Developer as an automated process, along with a classification error matrix [92] and accuracy assessment summary including overall, user, and producer accuracy and Kappa measure [93]. Automation of the classification process allowed us to test several of the most popular classifiers implemented in e-Cognition: random forest (RF) [94], support vector machine (SVM) [95,96], and K-nearest neighbor (KNN) [97]. Each classifier received the same input parameters: brightness, degree of skeleton branching, curvature, and textures after Haralick [98] (GLCM homogeneity, entropy, mean, dissimilarity). The best classifier was chosen based on confusion matrix results [92].
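As a minimal illustration of one classifier family tested above, the sketch below implements a plain K-nearest-neighbor vote over segment feature vectors. The feature values and class labels are hypothetical, and e-Cognition's KNN implementation is not reproduced here; only the voting principle is shown.

```python
from collections import Counter

def knn_classify(sample, training, k=3):
    """Classify a segment's feature vector by a K-nearest-neighbour vote.
    'training' is a list of (feature_vector, class_label) pairs; features
    (e.g., brightness, GLCM texture values) are assumed pre-computed."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    nearest = sorted(training, key=lambda tv: dist(sample, tv[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

Running each candidate classifier on the same ts samples and comparing vs-based confusion matrices, as described above, is what decides which classifier is kept.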
Classification post-processing included identification of misclassified image objects and manual correction. The main focus of post-classification correction was for c-class objects, as they were the most important for imageability analysis. Finally, the c-class GEOBIA results were converted to CAD format (.dxf) to create land-cover edges (LCEs) that became visual barriers for MIS analysis (Section 3.5).

3.5. Isovist and Imageability Indicator Method

To assess the visual consequences of landscape change, three imageability indicators were calculated: number of viewpoints (Vn), viewpoint density (VD), and viewpoint spacing (VS) (Equations (5) and (6)). The role of these metrics is to objectively describe VLC changes over the 61 years of our study, rather than to subjectively assess landscape aesthetics. Therefore, the meaning of the imageability indicator output is descriptive, rather than normative, and it is not meant to comment on landscape quality. In general, low indicator values characterize landscapes with a strong visual impression created by openness. Gradual increases in VLC metric values indicate a shift to a more closed and less visually connected landscape, with visual impressions resulting more from landscape rooms (interiors) than from a deep field of view.
$$VD = \frac{Vn}{A},\tag{5}$$

$$VS = \frac{1}{\sqrt{VD}},\tag{6}$$

where $Vn$ is the number of viewpoints, and $A$ is the research area.
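Read literally, the two indicator formulas reduce to a few lines. The sketch below assumes Equation (6) is the regular-grid spacing 1/√VD (the standard conversion from point density to mean point spacing) and that the area is given in hectares:

```python
import math

def imageability_indicators(vn, area_ha):
    """Viewpoint number (Vn), viewpoint density (VD = Vn / A, viewpoints
    per ha), and viewpoint spacing (VS = 1 / sqrt(VD)), assuming the
    regular-grid reading of the spacing formula."""
    vd = vn / area_ha
    vs = 1.0 / math.sqrt(vd)
    return vn, vd, vs
```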
The indicator values rely on the viewpoint number (Vn) calculated during MIS analysis implemented in the Isovists software [65]. To initiate the MIS analysis, a starting point common to the LCEs generated for each time period of our analysis was placed; in our case, this was the middle of the Bug River bridge in the northeastern part of the study area. Next, a series of minimally overlapping isovist fields was generated until the entire analysis area was covered. For each isovist, the points of maximum permitted field and minimum occlusivity are marked as viewpoints, connected by at least one uninterrupted line of sight. The algorithm marked the viewpoints of the dominant isovists [99] as rings proportional in size to the area of each viewpoint’s isovist field. To better describe the imageability of each landscape, we then classified viewpoints into primary (the most prominent) and secondary viewpoints, and we cartographically represented their fields of view. Additional conclusions characterizing the changing VLC were formulated based on the spatial arrangement of viewpoints.

4. Results

The study returned three types of multi-temporal results: land-cover maps, viewpoint maps, and imageability indicators. The accuracy of the land-cover segmentation and classification is reported in Section 4.1, Section 4.2 and Section 4.3. Section 4.4 presents the results of the imageability analysis, including the number, density, spatial distribution, and hierarchy of viewpoints regarded as imageability indicators. This is followed by our discussion of how changes in land cover affect landscape imageability, and how imageability can be described using viewpoint properties as core indicators.

4.1. Obtaining Optimal Segmentation Parameters

4.1.1. SP Candidate Results

Scale signatures are presented as ROC-LV graphs (Figure 3) with scale ranges from 1–150. Over the study periods, the maximum value of LV ranged from 15.05 (1973) to 72.43 (2015), with an average across most years closer to 20. This indicates a degree of similarity for object heterogeneity, as well as a similar spatial dimension of the most characteristic objects across time periods. The LV values in 2015 are outliers to this trend, with high LV values possibly the result of the pan-sharpening procedure used in that year. All LV graphs show a rising trend, which confirms the similarity of the compared areas.
Because of the smoothed shape of the LV line, ROC lines were used to pick SP candidates. Local variation in the ROC line made simple selection of thresholds difficult; thus, the vertical ROC axes were rescaled to a value of 2 to enable better interpretation of prominent peaks in the ROC line against the x-axis. The most significant scale levels were identified for the grayscale CORONA imagery, which provides a good example of our selection method. While an SP value of 107 was identified as optimal by the automatic ESP2 analysis at Level 1, a more significant threshold was identifiable in the ROC-LV graph at SP 120, which was our final selection for segmentation (Figure 4). The following sets of SP were selected for segmentation using MRS in each year: 45, 75, 91, 135, 149 (1957); 52, 72, 81, 120, 139 (1965); 30, 54, 72, 115, 135 (1973); 34, 62, 88, 108, 147 (1997); 42, 57, 97, 107, 122 (2006); 31, 41, 68, 108, 120 (2010); 34, 77, 88, 104, 118 (2015); 23, 34, 67, 92, 125 (2018). Note that SP values below 20 were omitted, as they would result in over-segmentation.
The ROC-LV graph analysis resulted in 40 SP candidates with scale ranges from 23 to 149 to be used as MRS input parameters. The comparison of subsequent segmentation results revealed that the relationships between SP values and segment properties differed depending on the spectral properties and quality of imagery used. The same SP value applied in segmentation of two different images of the same research area results in a different image object count, shape, and size. In our case study, five of the 40 SPs were repeated for at least two segmentation procedures. On the one hand, repeated SPs reveal some common geoprocessing properties (e.g., comparable minimum segment size); however, on the other hand, different segmentation results are due to differences in image spectral characteristics and quality, and, above all, in land-cover changes between compared landscapes. The summary statistics presented in Appendix A (Table A1) confirm the need for individual selection of SPs for each imagery dataset, even for the same research area.

4.1.2. The Reference Segment Digitization Results

The variability of MRS results also implies that their evaluation should be carried out with the use of RSs and shape similarity metrics. Examples of RS polygon digitization are presented in Figure 5B–D, along with the overlapping RS polygon results (A). The overlap count identifies zones of clearly distinguishable homogeneous patches. In the southern part of the study area, these are arable field shapes; in the central part, water bodies (or patches of aquatic vegetation); while, in the north-central section, RS shapes correspond more to the open sand or groups of trees characteristic of the case study area. The mean size of each set of 50 RSs was 0.13, 0.41, 0.12, 0.11, 0.10, 0.13, 0.12, and 0.09 ha for 1957, 1965, 1973, 1997, 2006, 2010, 2015, and 2018, respectively. The 1965 outlier results from the low resolution of the CORONA imagery, which was also the reason for the methodological choice of using equal proportions of small, medium, and large RSs for this year.

4.1.3. Segmentation Accuracy Results

SP is the main parameter that controls MRS and determines the size and shape of the resulting segments. To determine which SP value provides the greatest similarity between GSs and RSs, we used shape similarity metrics, as well as precision and recall, to compare segmentation quality. The precision and recall curves calculated for five SP candidates are shown in Figure 6, while extensive details on segmentation goodness metrics are provided in Appendix A (Table A2). Each curve in Figure 6 falls from the upper left to the lower right, typical of these opposing indicators, with curves located in the upper-right part of the graph indicating high segmentation quality [86]. The worst segmentation performance is for the CORONA imagery (1965); however, given that imagery's low GSD, these results are not unexpected. At the same time, overall image quality is important, with the historical 1957 aerial imagery also achieving poor segmentation quality. The curves of 2006 (green) and 2010 (light blue), and, to a lesser extent, those of 2018 (purple), 1997 (light green), and 1973 (yellow), show higher segmentation quality for the first three SPs. The central location of the 2015 curve (blue) highlights the differences between MRS performed on satellite imagery and on lower-altitude imagery.
Although informative, changes in segmentation quality rates reflected by precision and recall curves cannot indicate optimal SP values. Therefore, additional segmentation quality metrics were used (FS, SE, JI, and AC), the results of which are reported in detail in Appendix A (Table A2), along with the selected optimal SP values. In the case of 1957, an SP of 75 was chosen due to its highest values of FS, JI, and AC, as well as the lowest SE value. Furthermore, the SP of 75 turned out to be the only SP value also confirmed by the automated ESP2 results as optimal for Level 1 segmentation (Figure 4a). In other cases, the choice of an optimal SP was confirmed by the compliance of all (2006, 2010, and 2018) or at least three of four (1965, 1973, and 1997) calculated metrics (details in Table A2, Appendix A). Finally, the following SPs were selected as optimal for the case study MRS: 75, 72, 72, 62, 88, 108, 97, and 92 for 1957, 1965, 1973, 1997, 2006, 2010, 2015, and 2018, respectively.
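For illustration, the area-overlap metrics discussed above can be computed for a single generated/reference segment pair as sketched below. This is a simplified sketch (segments represented as sets of pixel coordinates, a one-to-one match assumed), not the exact matching procedure used in the study.

```python
def seg_quality(segment, reference):
    """Area-overlap precision, recall, F-score, and Jaccard index for one
    generated segment (G) against one reference segment (R), both given as
    sets of (x, y) pixel coordinates."""
    inter = len(segment & reference)
    union = len(segment | reference)
    precision = inter / len(segment)    # share of G that falls inside R
    recall = inter / len(reference)     # share of R that is covered by G
    f_score = 2 * precision * recall / (precision + recall) if inter else 0.0
    jaccard = inter / union
    return precision, recall, f_score, jaccard
```

Two half-overlapping 4x4 squares, for instance, yield precision = recall = F-score = 0.5 and a Jaccard index of 1/3, illustrating how the precision-recall trade-off in Figure 6 arises from segment over- and under-growth.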

4.2. Segment Classification Accuracy Results

The MRS calculated for each of the eight analyzed time-frame imageries resulted in individual image object segments, which were then the subject of supervised classification (excluding water and building footprints, which were processed manually). We were interested in merging all segments belonging to particular land-cover classes and then extracting LCE as input for imageability indicator calculation. The results of the imageability analysis depend on object classification accuracy; however, since the research focuses on imageability rather than GEOBIA classification efficiency, the details on the contributions of the different variables to classification accuracy are limited to summary statistics. Using 125 training samples (25 per class), various levels of segment classification accuracy were obtained depending on the classifier used. The accuracy measures (overall accuracy, as well as user and producer accuracy) resulting from 24 confusion matrix calculations, along with additional Kappa statistics, are presented in Appendix B (Table A3). The resulting overall accuracy (OA) ranged from 46% to 92%. In general, results with an OA above 85% were accepted as sufficient for further imageability analysis. The highest accuracy was obtained for the Pleiades image classification, where RF resulted in an OA of 92%. Furthermore, RF proved to be an effective algorithm for most classified segments. In two of the 24 classifications, KNN (1965) and SVM (1973) resulted in a higher OA than RF (73% and 85% for 1965 and 1973, respectively). Furthermore, the forest and shrub c-classes were classified at a producer accuracy of at least 88% and 76%, respectively. Moreover, segments of sand (an r-class), as the brightest part of the imagery, were correctly classified at a producer accuracy between 84% and 100%, except for 1997, where the producer accuracy was 60%.
Xeric sand grassland was the most often misclassified community, usually confused with the agricultural background, especially in the case of SVM. The producer accuracy of the best Xeric sand grassland classification ranged from 40% (1965) to 100% (2015). Despite this, Xeric sand grassland, as an r-class, does not impact further visibility analysis. From the perspective of the case study, the locations of Xeric sand grassland patches relative to viewpoints are of key importance for analyzing HNV. Finally, the misclassified segments were the subject of manual post-classification to provide reliable land-cover results. The final land-cover maps, along with the segmentation results, are presented in Figure 7.
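As a reference for the accuracy figures reported above, overall accuracy, Cohen's kappa, and per-class producer/user accuracy can all be derived from a confusion matrix as sketched below. The row/column convention (rows = reference, columns = map) is an assumption for illustration; it does not reproduce the study's actual matrices.

```python
def accuracy_stats(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    oa = sum(cm[i][i] for i in range(k)) / n
    # chance agreement expected from the row/column marginals
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(k)) / n ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa


def producer_user(cm, i):
    """Producer and user accuracy for class i under the same convention."""
    ref_total = sum(cm[i])                    # reference pixels of class i
    map_total = sum(row[i] for row in cm)     # pixels mapped to class i
    return cm[i][i] / ref_total, cm[i][i] / map_total
```

For a hypothetical two-class matrix `[[45, 5], [10, 40]]`, this gives OA = 0.85 and kappa = 0.70, with a producer accuracy of 0.90 for the first class.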

4.3. Land Cover Changes

The main changes within the r-class concerned open sand patches, characteristic of this open landscape, whose area decreased by 79.6% (from 20.23 ha to 4.13 ha). The results of land-use change dynamics, presented as a percentage of land-cover class area, revealed an increase in forest area from 8.97 ha (1957) to 80.07 ha (2018) and a relatively stable shrub community area ranging from 3.44 to 7.70 ha, most recently 3.89 ha in 2018 (Figure 8). The only noticeable increase in the shrub community area, between 1997 and 2006, was the result of natural succession, which was eliminated in subsequent years under the active sheep grazing program introduced in the "Kózki" nature reserve in 2010 [64]. This trend is also reflected in the patch number (NP) and patch density (PD) analysis results (Table 3). Taking into account only c-class patches, as the most relevant for VLC, the landscape pattern changed from small to large patches while maintaining NP at a relatively unchanged level (NP oscillating between 465 and 598, excluding the coarse 1965 imagery). These land-cover change directions raise the question of how the VLC was transformed. To answer this, the results on the imageability indicators are presented in the next section.
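The NP values discussed above come from patch-based analysis. A minimal sketch of counting 4-connected patches of one class on a rasterized land-cover grid (a stand-in for the landscape-metrics software actually used, not a reproduction of it) could look like this:

```python
def count_patches(grid, cls):
    """Number of 4-connected patches of class `cls` in a 2D land-cover grid
    (list of rows). Patch density PD is then NP divided by the area."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    patches = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == cls and (r, c) not in seen:
                patches += 1
                stack = [(r, c)]          # flood-fill one connected component
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == cls):
                            stack.append((ny, nx))
    return patches
```

On a toy 3x3 grid with class codes "f" (forest), "s" (shrub), and "a" (agriculture), `count_patches` separates three disjoint forest patches from one connected shrub patch.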

4.4. The Imageability of Changing Landscape Interpretation

The MIS calculations resulted in unique viewpoint arrangements (Figure 9A–H) and indicator values (Table 4) that reflect the VLC changes. Specifically, 1957, the starting point of the 61 years of transformation, displayed openness limited only by distant forest patches. Three primary visibility points created a 1.32-km route from which the open landscape, whose character is created by open sands and unobstructed water bodies, could be admired (Figure 9A). These characteristics are also confirmed by the lowest imageability indicator values, VD = 62 points/km2 and VS = 0.13 km. In 1965, because no shrubs were detected as view curtains, only two primary viewpoints provided an overview of the central section of the research area. Over time, this unique visual character changed irreversibly. In 1973, due to afforestation, a new primary viewpoint was created within the northern section of the Bug River and maintained until 2018. Two more primary viewpoints were delimited in the central zone; however, their location favors close-range observation of the open sands and Xeric sand grassland that constitute the HNV of the study area. In 1997, the value of VS fell below 0.1 km (0.09), which, together with viewpoint location analysis, can be interpreted as imageability created more by sequences of landscape interiors and view panoramas [54] than by stunning openness. Importantly, the 1997 and 2006 time-frames show a gradual narrowing of the visual connectivity with the central zone of the research area. This visual connectivity was maintained until 2010, whereas, in 2015, the central section became visually separated (Figure 9G,H). The closing of visual connectivity was accompanied by an increase in viewpoint number and a gradual predominance of secondary viewpoints. Furthermore, as the number of viewpoints rises, their field of view is reduced, which underlines the role of close-range visual impressions in imageability creation.
Patches of open sands and Xeric sand grassland, as well as the natural Bug River valley, constitute the HNV of the contemporary "Kózki" reserve area. From 2006 until 2018, grassland communities were the location of primary viewpoints. Importantly, the viewpoint sequences created a visibility route from the east (the road) to the west, guiding the landscape user through the sand and grassland landscape to the border river. The imageability of the contemporary "Kózki" reserve is created by a sequence of landscape interiors enclosed by forest walls and sliced by the spared shrub communities. In conclusion, interpreting the indicator values captures only a very general VLC (open, closed, visually connected/disconnected), which provides a basis for describing changing landscape imageability only in combination with the viewpoint spatial distribution.
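For readers who wish to reproduce indicator values of this kind, a minimal sketch follows. VD is the viewpoint count divided by the study area; VS is computed here as the mean nearest-neighbour distance between viewpoints, which is an assumption for illustration, since the indicator formulas are defined earlier in the paper and not restated in this section.

```python
from math import hypot


def imageability(viewpoints, area_km2):
    """Vn (count), VD (points/km^2), and VS (km) for viewpoints given as
    (x, y) tuples in km. VS is taken as the mean nearest-neighbour distance,
    an illustrative assumption rather than the paper's exact definition."""
    vn = len(viewpoints)
    vd = vn / area_km2
    if vn < 2:
        return vn, vd, 0.0          # spacing is undefined for a single point
    nearest = [
        min(hypot(px - qx, py - qy)
            for j, (qx, qy) in enumerate(viewpoints) if j != i)
        for i, (px, py) in enumerate(viewpoints)
    ]
    return vn, vd, sum(nearest) / vn
```

For example, three viewpoints spaced along 0.3 km within a 0.05-km2 area yield Vn = 3 and VD = 60 points/km2, close in magnitude to the 1957 values reported above.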

5. Discussion

This study extends the application of GEOBIA by utilizing landscape imageability indicators to assess VLC, both qualitatively (VLC description) and quantitatively (indicator values). We show that GEOBIA results—specifically, the use of classified objects (forest, shrubs, and building edges) as visibility curtains—can support VLC assessment, along with additional landscape imageability indicators. To the best of our knowledge, the idea of using viewpoints as imageability indicators was, to date, only presented as a theoretical concept [19]. Therefore, a discussion of indicator value results cannot be conducted with respect to other studies. Our segmentation accuracy results, meanwhile, do correspond with similar studies [100,101], although it is often a challenge to compare segmentation accuracies among GEOBIA case studies due to unique methods, remote sensing imagery, and landscape types.
Although there are no studies to compare this one to directly, we still identified some important limitations. The use of 2D isovist software for visibility calculations means that only quite flat landscapes can be analyzed using the methods shown here. To overcome this, 2D isovist could be replaced by 2D viewshed analysis, which takes the height of a surface model into account; however, as noted by Weitkamp [102], viewshed does not calculate visible space, but visible landscape elements. An algorithm for deriving a viewshed minimal viewpoint set (equivalent to MIS) was proposed by Wang and Dou [103], as well as Shi and Xue [104]. The viewshed approach uses a candidate viewpoint filtering strategy to avoid a rapid increase in the number of viewpoints as the digital surface model resolution increases. Candidate viewpoints with relatively low viewshed contribution rates are iteratively filtered out to derive a minimum set that covers a given viewshed. However, a key limitation of this method is that the VLC of less-open landscapes can be underestimated because the algorithm rejects the candidate viewpoints with a relatively low viewshed contribution rate. Furthermore, depending on the case-study vegetation structure, the 2D viewshed may deviate from reality by up to 45% [105]. This is why, in our case, we preferred to use MIS, despite the fact that it finds a sufficient rather than a minimal viewpoint covering set [56,106]. Even more accurate visibility models can be obtained with the use of 3D-LoS, which uses a similar ray-tracing technique to that used by isovist, or by developing the MIS algorithm to be implemented in 3D isovist software [107,108].
The discussion of the technical limitations of MIS is also relevant for distance decay visibility, view angle, and human eye mimicry. Our methodology uses 360 degrees of observer visibility, assuming that a landscape user (the observer) can look freely in every direction. However, only at the most prominent viewpoints do human observers enjoy 360-degree visibility. The phenomenon of visibility arises in motion, and moving people tend to look predominantly in one direction. Thus, one could argue that an assumption of 360-degree visibility is not particularly natural or realistic for human observers. Agent-based isovist modeling can overcome this limitation. It enables the adjustment of the horizontal view angle (span), as well as the minimum and maximum view range, for simulated observers that are in motion [65]. Therefore, applying an agent-based visibility approach may add additional insights into imageability calculations, expressed not only as a minimum set of vantage points but also as a minimum time needed to overview the landscape. Moreover, because near things are more related than distant things [109], the distance decay effect along an uninterrupted line of sight may also be taken into account during viewpoint creation. To account for this in the field of viewshed modeling, Fisher proposed “fuzzy viewshed” analyses, also called “probable viewshed” analyses [110]. Bartie et al. [111] improved upon this further by taking into account the semi-transparent nature of vegetation; however, neither of these methods were tested in conjunction with MIS analysis.
Another limitation of the methodology presented here is imageability indicator value interpretation. Specifically, Vn, VD, and VS did not allow us to capture the process of gradual visual isolation, because these metrics operate only over the entire study area. The increase in Vn and VD values up to 2006, followed by their decrease in subsequent years (an 18% reduction in Vn and VD by 2018), suggests that the MIS algorithm attempts to set up additional viewpoints to maintain visual connectivity. When this fails (e.g., in 2015), two visually separated landscape interiors begin to form within the VLC. However, these results are derived from the viewpoint spatial distribution. Similar to the landscape metric biases caused by the land-cover patch scale effect [112], imageability indicators can also be biased by an overproduction of secondary viewpoints, depending on the adopted scale and LCE smoothing factor. In our case, many small, irregularly shaped patches (e.g., shrubs) may have caused the MIS algorithm to overdraw secondary viewpoints.
Our MIS results, supported by the information on land-cover changes from our multi-temporal GEOBIA analysis, show a significant VLC transformation from imageability created by unique landscape openness to imageability as the impression of several landscape rooms enclosed by forest walls. This VLC evolution can be discussed with respect to neuroscience research on the psychological effects of landscape views, providing an example of how imageability indicators can be used beyond merely describing landscape structure. The psychological comfort, pleasure, and stress reduction provided by a long-distance view [58] suggests that similar mental health benefits can be potentially associated with the imageability impression created by primary viewpoints’ high field of view. Although our case study site is an uninhabited nature reserve, the calculation of imageability indicators for inhabited areas, as well as associating those indicator values with the quality of a view, may contribute to a better understanding of how to improve subjective well-being [113] concerning urban viewsheds, as well as to better urban planning results. Importantly, further studies on the environmental context in which people are exposed and the mental health implications of those environments [114] could be extended to space–time visual exposure and imageability levels. The landscape users’ daily exposure sequences and relative proportions of exposure to primary or secondary viewpoints, recorded by personal tracking devices, could extend knowledge on health-supporting and health-deteriorating factors. Further studies, focusing on people’s mobility through a landscape, can address the hypothesis of the mental health benefits of VLC. In order to extend the knowledge of landscape imageability attributes (e.g., landscape layers, color and light, adjacent scenery, visible archetypal elements, landmarks), the artificial expert evaluation tool [115] can be used.
Our findings emphasize the application of GEOBIA to VLC indicator calculations; however, other possible practical applications of the three imageability indicators used here (Vn, VD, VS) can be discussed in the field of landscape quality assessment. The European Landscape Convention [116] obliges signatory countries to assess and monitor landscape quality using landscape indicators derived from quantitative methods [117]. The most common use of landscape quality indicators is based on land-cover patches [118,119,120], which, despite modifiable areal unit biases [21,121], are commonly used in landscape quality studies [122]. Contrary to these studies, we do not evaluate the quality of spatial pattern, but we use land-cover data to conduct visibility analysis (specifically MIS), because it allows us to infer visual landscape structure. Importantly, any quality assessment must be done against pre-established standards and criteria, another area of exploration for landscape imageability analysis. As this pilot study shows, there is strong potential for imageability indicators to find new practical applications, especially in the areas of regional planning, resource management, tourism, or real estate. Another advantage of the analysis shown here is that many aspects of it can be automated. Therefore, it is possible that, in conjunction with Earth observation imagery streams, imageability indicators can be provided as a cloud service similar to other contemporary online land-cover mapping services [123,124].

6. Conclusions

Despite the technical limitations of using a 2D isovist approach, which can be overcome by 3D isovist software development, we implement in practice, for the first time, the concept of using viewpoints for VLC descriptions. The imageability analysis was itself informed by a multi-temporal GEOBIA land-cover classification process that extends the usefulness of GEOBIA results from ecological indicators and landscape metrics [125] to imageability indicator calculation and derivation. This work contributes to a better understanding of the impact of land-cover change on visual landscape structure dynamics. Our multi-temporal case study reveals considerable changes in case-study VLC resulting from changes in vegetation structure, primarily due to afforestation. Since people identify with landscapes, and since landscapes contribute to psychological well-being [126], VLC is not solely an ecological issue, but also a social one. Thus, lessons learned from how past landscapes changed through time can help to protect and enhance HNV in a more comprehensive way in the future.

Author Contributions

Conceptualization, S.C.; methodology, S.C.; software, S.C., A.B.; validation, S.C. and A.N.; formal analysis, S.C. and A.B.; investigation, S.C. and P.W.; resources, S.C.; data curation, S.C. and A.B.; writing—original draft preparation, S.C.; writing—review and editing, S.C.; visualization, S.C. and A.N.; supervision, P.W.; project administration, S.C.; funding acquisition, S.C. All authors have read and agreed to the published version of the manuscript.

Funding

The research was funded by the Ministry of Science and Higher Education (Poland) for the dissemination of science (766/P-DUN/2019).

Acknowledgments

We would like to thank Lucian Dragut, Milja-Miroslav Vernica (Department of Geography, West University of Timisoara, Romania), and Alexandru Hegyi (Applied Geomorphology and Interdisciplinary Research Centre, West University of Timisoara, Romania) for sharing their knowledge and experience in the segmentation accuracy assessment task and for fruitful discussion. We also thank Alessandro Montaghi (Department of Geography, University of Calgary, Calgary, Canada) for insight into .py scripting and Dafna Fisher-Gewirtzman (Faculty of Architecture and Town Planning, Technion-Israel Institute of Technology, Haifa, Israel) for collaboration on isovist analysis. We especially thank Halina Lipińska (Department of Grassland and Landscape Studies, University of Life Sciences in Lublin, Poland) for setting up the Remote Sensing laboratory at UP-Lublin in which this work was created. We would like to thank all the anonymous reviewers for their helpful comments that guided manuscript improvement.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. The Segmentation Results

Table A1. The relationships between the SP candidates and the resulting segmented image object size and count—the example of repeating SP candidates in at least two time-frames.
SP Value | Time-Frame | Number of Segments | Mean Segment Size (ha) | Min Segment Size (ha) | Max Segment Size (ha)
34 | 2015 | 33,275 | 0.0060 | 0.0001 | 0.3204
34 | 2018 | 15,058 | 0.0132 | 0.0001 | 0.2327
72 | 1965 | 1240 | 0.1610 | 0.0035 | 2.7375
72 | 1973 | 3077 | 0.0649 | 0.0005 | 1.7865
88 | 1997 | 2622 | 0.0761 | 0.0005 | 3.1234
88 | 2006 | 2909 | 0.0686 | 0.0005 | 2.4855
108 | 1997 | 1813 | 0.1101 | 0.0009 | 3.3752
108 | 2010 | 1512 | 0.1321 | 0.0013 | 2.2697
135 | 1957 | 792 | 0.2520 | 0.0020 | 2.1776
135 | 1973 | 1048 | 0.1906 | 0.0025 | 2.0154
Table A2. The segmentation goodness metrics (the optimal SP is underlined, the best results of segmentation goodness metrics are in bold, the SP values proposed by ESP2 at level 1 are marked with an asterisk).
Time-Frame | SP Candidate | Precision | Recall | F-Score (Zhang 2015) | Jaccard Index | Accuracy (e) | Segmentation Error
1957 | SP 45 | 0.89 | 0.62 | 0.67 | 0.58 | 0.80 | 0.28
1957 | SP 75 * | 0.79 | 0.71 | 0.68 | 0.60 | 0.83 | 0.23
1957 | SP 91 | 0.69 | 0.72 | 0.60 | 0.52 | 0.81 | 0.31
1957 | SP 135 | 0.49 | 0.82 | 0.48 | 0.42 | 0.78 | 0.44
1957 | SP 149 | 0.38 | 0.83 | 0.38 | 0.32 | 0.75 | 0.55
1965 | SP 52 | 0.67 | 0.48 | 0.51 | 0.40 | 0.73 | 0.28
1965 | SP 72 | 0.58 | 0.66 | 0.57 | 0.46 | 0.78 | 0.26
1965 | SP 81 | 0.59 | 0.62 | 0.56 | 0.45 | 0.78 | 0.26
1965 | SP 120 | 0.49 | 0.70 | 0.51 | 0.40 | 0.79 | 0.38
1965 | SP 139 | 0.43 | 0.76 | 0.45 | 0.36 | 0.79 | 0.47
1973 | SP 30 | 0.93 | 0.36 | 0.47 | 0.34 | 0.68 | 0.49
1973 | SP 54 | 0.87 | 0.62 | 0.68 | 0.58 | 0.80 | 0.24
1973 | SP 72 | 0.84 | 0.73 | 0.76 | 0.67 | 0.85 | 0.14
1973 | SP 115 | 0.75 | 0.81 | 0.74 | 0.66 | 0.86 | 0.18
1973 | SP 135 * | 0.68 | 0.85 | 0.71 | 0.62 | 0.86 | 0.22
1997 | SP 34 | 0.92 | 0.46 | 0.56 | 0.44 | 0.74 | 0.41
1997 | SP 62 | 0.89 | 0.70 | 0.75 | 0.66 | 0.85 | 0.19
1997 | SP 88 | 0.82 | 0.75 | 0.74 | 0.65 | 0.86 | 0.18
1997 | SP 108 * | 0.76 | 0.81 | 0.74 | 0.65 | 0.86 | 0.19
1997 | SP 147 | 0.64 | 0.87 | 0.68 | 0.60 | 0.86 | 0.24
2006 | SP 42 * | 0.92 | 0.40 | 0.51 | 0.38 | 0.69 | 0.44
2006 | SP 57 | 0.91 | 0.58 | 0.65 | 0.55 | 0.77 | 0.29
2006 | SP 88 | 0.88 | 0.86 | 0.86 | 0.78 | 0.90 | 0.08
2006 | SP 107 | 0.83 | 0.88 | 0.84 | 0.76 | 0.89 | 0.11
2006 | SP 122 | 0.77 | 0.89 | 0.80 | 0.71 | 0.88 | 0.14
2010 | SP 31 | 0.92 | 0.44 | 0.54 | 0.42 | 0.72 | 0.42
2010 | SP 41 | 0.90 | 0.64 | 0.70 | 0.60 | 0.82 | 0.24
2010 | SP 68 | 0.86 | 0.80 | 0.80 | 0.72 | 0.88 | 0.15
2010 | SP 108 | 0.79 | 0.88 | 0.80 | 0.72 | 0.89 | 0.14
2010 | SP 120 | 0.68 | 0.91 | 0.72 | 0.64 | 0.87 | 0.23
2015 | SP 34 | 0.86 | 0.39 | 0.50 | 0.36 | 0.64 | 0.42
2015 | SP 77 | 0.75 | 0.68 | 0.68 | 0.58 | 0.79 | 0.21
2015 | SP 97 | 0.69 | 0.76 | 0.68 | 0.59 | 0.82 | 0.22
2015 | SP 104 | 0.66 | 0.80 | 0.66 | 0.58 | 0.81 | 0.25
2015 | SP 118 | 0.62 | 0.85 | 0.64 | 0.56 | 0.81 | 0.29
2018 | SP 23 * | 0.95 | 0.30 | 0.42 | 0.29 | 0.62 | 0.56
2018 | SP 34 | 0.94 | 0.43 | 0.52 | 0.42 | 0.70 | 0.44
2018 | SP 67 | 0.87 | 0.73 | 0.75 | 0.67 | 0.84 | 0.18
2018 | SP 92 | 0.78 | 0.82 | 0.75 | 0.67 | 0.87 | 0.18
2018 | SP 125 | 0.65 | 0.84 | 0.67 | 0.57 | 0.84 | 0.27

Appendix B. The Segment Classification Results

Table A3. The accuracy assessment table. The best performing classifier overall for each time period is in bold. RF—random forest; SVM—support vector machine; KNN—K-nearest neighbor.
Per-class values are producer/user/kappa accuracy; forest, shrubs, and c-grass are c-classes, while open sand and other are r-classes.
Date (SP) | Classifier | Overall Accuracy | Kappa | Forest | Shrubs | C-Grass | Open Sand | Other
1957 (SP 75) | RF | 0.91 | 0.89 | 0.91/0.95/0.89 | 0.96/0.96/0.94 | 0.88/0.91/0.85 | 0.96/0.96/0.94 | 0.88/0.81/0.84
1957 (SP 75) | SVM | 0.77 | 0.71 | 0.79/1/0.75 | 0.96/0.92/0.94 | 0.4/0.83/0.33 | 0.8/0.95/0.75 | 0.92/0.50/0.87
1957 (SP 75) | KNN | 0.79 | 0.73 | 0.83/0.90/0.79 | 0.88/0.95/0.85 | 0.56/0.77/0.48 | 0.80/0.80/0.74 | 0.88/0.61/0.83
1965 (SP 71) | RF | 0.67 | 0.56 | 0.8/1/0.75 | excluded | 0.2/0.57/0.12 | 1/0.54/1 | 0.7/0.7/0.6
1965 (SP 71) | SVM | 0.58 | 0.45 | 0.8/0.88/0.74 | excluded | 0.1/0.16/0.05 | 1/0.64/1 | 0.45/0.47/0.27
1965 (SP 71) | KNN | 0.73 | 0.65 | 0.85/0.94/0.80 | excluded | 0.4/0.72/0.30 | 0.95/0.70/0.92 | 0.75/0.62/0.64
1973 (SP 71) | RF | 0.78 | 0.73 | 1/0.92/1 | 0.96/1/0.95 | 0.36/0.52/0.25 | 0.8/1/0.76 | 0.8/0.54/0.71
1973 (SP 71) | SVM | 0.85 | 0.82 | 1/0.92/1 | 0.96/0.96/0.95 | 0.84/0.65/0.78 | 0.6/1/0.54 | 0.88/0.84/0.84
1973 (SP 71) | KNN | 0.68 | 0.61 | 1/0.78/1 | 0.88/1/0.85 | 0.28/0.36/0.15 | 0.68/0.94/0.62 | 0.6/0.44/0.45
1997 (SP 62) | RF | 0.89 | 0.87 | 0.92/0.79/0.89 | 0.76/1/0.71 | 0.92/0.92/0.9 | 0.92/1/0.90 | 0.96/0.82/0.94
1997 (SP 62) | SVM | 0.66 | 0.58 | 0.4/0.58/0.30 | 0.8/1/0.76 | 0.24/1/0.20 | 0.96/0.96/0.95 | 0.92/0.40/0.85
1997 (SP 62) | KNN | 0.52 | 0.40 | 0.32/0.36/0.17 | 0.56/1/0.50 | 0.08/1/0.06 | 0.76/0.73/0.69 | 0.88/0.36/0.76
2006 (SP 97) | RF | 0.86 | 0.83 | 0.88/0.81/0.84 | 0.84/0.87/0.80 | 0.72/0.9/0.66 | 0.96/0.96/0.95 | 0.92/0.79/0.89
2006 (SP 97) | SVM | 0.55 | 0.44 | 0.84/0.84/0.79 | 0.12/0.37/0.05 | 0.8/0.48/0.69 | 0.86/1/0.84 | 0.16/0.14/0.08
2006 (SP 97) | KNN | 0.46 | 0.33 | 0.56/0.29/0.27 | 0.56/1/0.50 | 0.36/0.56/0.26 | 0.69/0.94/0.64 | 0.16/0.14/0.07
2010 (SP 65) | RF_3b | 0.85 | 0.82 | 0.96/0.75/0.94 | 0.72/1/0.67 | 0.76/1/0.71 | 1/0.96/1 | 0.84/0.70/0.78
2010 (SP 65) | SVM_5b | 0.60 | 0.51 | 0.52/0.76/0.44 | 0.62/1/0.57 | 0.41/0.47/0.29 | 0.48/1/0.42 | 1/0.43/1
2010 (SP 65) | KNN_5b | 0.51 | 0.38 | 0.24/0.75/0.18 | 0.62/1/0.57 | 0.29/0.46/0.19 | 0.4/1/0.34 | 1/0.33/1
2015 (SP 97) | RF | 0.92 | 0.91 | 0.96/0.92/0.94 | 0.88/1/0.85 | 1/0.83/1 | 1/1/1 | 0.8/0.90/0.75
2015 (SP 97) | SVM | 0.80 | 0.75 | 1/1/1 | 0.84/1/0.80 | 0.48/1/0.42 | 0.68/1/0.62 | 1/0.5/1
2015 (SP 97) | KNN | 0.58 | 0.48 | 0.88/0.5/0.81 | 0.72/1/0.67 | 0.4/0.76/0.33 | 0.64/1/0.58 | 0.28/0.20/0.01
2018 (SP 70) | RF | 0.91 | 0.89 | 1/0.89/1 | 0.88/1/0.85 | 0.92/0.88/0.89 | 0.84/1/0.80 | 0.92/0.85/0.89
2018 (SP 70) | SVM (DT) | 0.84 | 0.81 | 0.96/0.8/0.94 | 0.72/1/0.67 | 0.88/0.81/0.84 | 0.84/1/0.80 | 0.84/0.75/0.79
2018 (SP 70) | KNN (svm) | 0.63 | 0.54 | 0.28/0.41/0.16 | 0.48/1/0.42 | 0.72/0.78/0.65 | 0.72/1/0.67 | 0.96/0.44/0.92

References

  1. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef] [Green Version]
  2. Arvor, D.; Durieux, L.; Andrés, S.; Laporte, M.-A. Advances in geographic object-based image analysis with ontologies: A review of main contributions and limitations from a remote sensing perspective. ISPRS J. Photogramm. Remote Sens. 2013, 82, 125–137. [Google Scholar] [CrossRef]
  3. Souza-Filho, P.W.M.; Nascimento, W.R.; Santos, D.C.; Weber, E.J.; Silva, R.O.; Siqueira, J.O. A GEOBIA Approach for Multitemporal Land-Cover and Land-Use Change Analysis in a Tropical Watershed in the Southeastern Amazon. Remote Sens. 2018, 10, 1683. [Google Scholar] [CrossRef] [Green Version]
  4. Chen, G.; Weng, Q.; Hay, G.J.; He, Y. Geographic object-based image analysis (GEOBIA): Emerging trends and future opportunities. GISci. Remote Sens. 2018, 55, 159–182. [Google Scholar] [CrossRef]
  5. Dornik, A.; Drăguţ, L.; Urdea, P. Classification of Soil Types Using Geographic Object-Based Image Analysis and Random Forests. Pedosphere 2018, 28, 913–925. [Google Scholar] [CrossRef]
  6. Hegyi, A.; Vernica, M.-M.; Drăguţ, L. An object-based approach to support the automatic delineation of magnetic anomalies. Archaeol. Prospect. 2019, 27, 1–10. [Google Scholar] [CrossRef]
  7. Drăguţ, L.; Eisank, C.; Strasser, T. Local variance for multi-scale analysis in geomorphometry. Geomorphology 2011, 130, 162–172. [Google Scholar] [CrossRef] [Green Version]
  8. Hay, G.J.; Niemann, K.O.; McLean, G.F. An object-specific image-texture analysis of H-resolution forest imagery. Remote Sens. Environ. 1996, 55, 108–122. [Google Scholar] [CrossRef]
  9. Wężyk, P.; Hawryło, P.; Janus, B.; Weidenbach, M.; Szostak, M. Forest cover changes in Gorce NP (Poland) using photointerpretation of analogue photographs and GEOBIA of orthophotos and nDSM based on image-matching based approach. Eur. J. Remote Sens. 2018, 51, 501–510. [Google Scholar] [CrossRef]
  10. Chen, G.; Hay, G.J. An airborne lidar sampling strategy to model forest canopy height from Quickbird imagery and GEOBIA. Remote Sens. Environ. 2011, 115, 1532–1542. [Google Scholar] [CrossRef]
  11. Vogels, M.F.A.; de Jong, S.M.; Sterk, G.; Addink, E.A. Agricultural cropland mapping using black-and-white aerial photography, Object-Based Image Analysis and Random Forests. Int. J. Appl. Earth Obs. Geoinf. 2017, 54, 114–123. [Google Scholar] [CrossRef]
  12. Pinto, A.T.; Gonçalves, J.A.; Beja, P.; Honrado, J.P. From archived historical aerial imagery to informative orthophotos: A framework for retrieving the past in long-term socioecological research. Remote Sens. 2019, 11, 1388. [Google Scholar] [CrossRef] [Green Version]
  13. Nita, M.D.; Munteanu, C.; Gutman, G.; Abrudan, I.V.; Radeloff, V.C. Widespread forest cutting in the aftermath of World War II captured by broad-scale historical Corona spy satellite photography. Remote Sens. Environ. 2018, 204, 322–332. [Google Scholar] [CrossRef]
  14. Kadmon, R.; Harari-Kremer, R. Studying long-term vegetation dynamics using digital processing of historical aerial photographs. Remote Sens. Environ. 1999, 68, 164–176. [Google Scholar] [CrossRef]
  15. Ellis, E.C.; Wang, H.; Xiao, H.S.; Peng, K.; Liu, X.P.; Li, S.C.; Ouyang, H.; Cheng, X.; Yang, L.Z. Measuring long-term ecological changes in densely populated landscapes using current and historical high resolution imagery. Remote Sens. Environ. 2006, 100, 457–473. [Google Scholar] [CrossRef]
  16. Kupidura, P. The Comparison of Different Methods of Texture Analysis for Their Efficacy for Land Use Classification in Satellite Imagery. Remote Sens. 2019, 11, 1233. [Google Scholar] [CrossRef] [Green Version]
  17. Sertel, E.; Topaloğlu, R.H.; Şallı, B.; Yay Algan, I.; Aksu, G.A. Comparison of Landscape Metrics for Three Different Level Land Cover/Land Use Maps. ISPRS Int. J. Geo. Inf. 2018, 7, 408. [Google Scholar] [CrossRef] [Green Version]
  18. Riedler, B.; Lang, S. A spatially explicit patch model of habitat quality, integrating spatio-structural indicators. Ecol. Indic. 2018, 94, 128–141. [Google Scholar] [CrossRef]
  19. Ode, Å.; Tveit, M.; Fry, G. Capturing landscape visual character using indicators: Touching base with landscape aesthetic theory. Landsc. Res. 2008, 33, 89–117. [Google Scholar] [CrossRef]
  20. Tveit, M.; Ode, A.; Fry, G. Key visual concepts in a framework for analyzing visual landscape character. Landsc. Res. 2006, 31, 229–255. [Google Scholar] [CrossRef]
  21. Frazier, A.E.; Kedron, P. Landscape Metrics: Past Progress and Future Directions. Curr. Landsc. Ecol. Rep. 2017, 2, 63–72. [Google Scholar] [CrossRef] [Green Version]
  22. Schirpke, U.; Tasser, E.; Tappeiner, U. Predicting scenic beauty of mountain regions. Landsc. Urban. Plan. 2013, 111, 1–12. [Google Scholar] [CrossRef]
  23. Gong, L.; Zhang, Z.; Xu, C. Developing a Quality Assessment Index System for Scenic Forest Management: A Case Study from Xishan Mountain, Suburban Beijing. Forests 2015, 6, 225–243. [Google Scholar] [CrossRef] [Green Version]
  24. Hermes, J.; Albert, C.; von Haaren, C. Assessing the aesthetic quality of landscapes in Germany. Ecosyst. Serv. 2018, 31, 296–307. [Google Scholar] [CrossRef]
  25. Fry, G.; Tveit, M.S.; Ode, Å.; Velarde, M.D. The ecology of visual landscapes: Exploring the conceptual common ground of visual and ecological landscape indicators. Ecol. Indic. 2009, 9, 933–947. [Google Scholar] [CrossRef]
  26. Robert, S. Assessing the visual landscape potential of coastal territories for spatial planning. A case study in the French Mediterranean. Land Use Policy 2018, 72, 138–151. [Google Scholar] [CrossRef]
  27. Castilla, G.; Hay, G.J. Image objects and geographic objects. In Object-Based Image Analysis; Blaschke, T., Lang, S., Hay, G.J., Eds.; Springer: Heidelberg/Berlin, Germany, 2008; pp. 91–110. [Google Scholar]
  28. Bock, M.; Xofis, P.; Mitchley, J.; Rossner, G.; Wissen, M. Object-Oriented Methods for Habitat Mapping at Multiple Scales—Case Studies from Northern Germany and Wye Downs, UK. J. Nat. Conserv. 2005, 13, 75–89. [Google Scholar] [CrossRef]
  29. Hossain, M.D.; Chen, D. Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective. ISPRS J. Photogramm. Remote Sens. 2019, 150, 115–134. [Google Scholar] [CrossRef]
  30. Baatz, M.; Schäpe, A. Multiresolution segmentation: An optimization approach for high quality multi-scale image segmentation. In Angewandte Geographische Informations-Verarbeitung; Strobl, J., Blaschke, T., Griesebner, G., Eds.; Herbert Wichmann-Verlag: Heidelberg, Germany, 2000; Volume XII, pp. 12–23. [Google Scholar]
31. Munyati, C. Optimising multiresolution segmentation: Delineating savannah vegetation boundaries in the Kruger National Park, South Africa, using Sentinel-2 MSI imagery. Int. J. Remote Sens. 2018, 39, 5997–6019. [Google Scholar] [CrossRef]
  32. Fu, Z.; Sun, Y.; Fan, L.; Han, Y. Multiscale and multifeature segmentation of high-spatial resolution remote sensing images using superpixels with mutual optimal strategy. Remote Sens. 2018, 10, 1289. [Google Scholar] [CrossRef] [Green Version]
  33. Möller, M.; Lymburner, L.; Volk, M. The comparison index: A tool for assessing the accuracy of image segmentation. Int. J. Appl. Earth Obs. Geoinf. 2007, 9, 311–321. [Google Scholar] [CrossRef]
34. Duro, D.C.; Franklin, S.E.; Dubé, M. A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sens. Environ. 2012, 118, 259–272. [Google Scholar] [CrossRef]
  35. Jhonnerie, R.; Siregar, V.P.; Nababan, B.; Prasetyo, L.B.; Wouthuyzen, S. Random Forest Classification for Mangrove Land Cover Mapping Using Landsat 5 TM and Alos Palsar Imageries. Procedia Environ. Sci. 2015, 24, 215–221. [Google Scholar] [CrossRef] [Green Version]
  36. Drǎguţ, L.; Csillik, O.; Eisank, C.; Tiede, D. Automated parameterisation for multi-scale image segmentation on multiple layers. ISPRS J. Photogramm. Remote Sens. 2014, 88, 119–127. [Google Scholar] [CrossRef] [Green Version]
  37. The ESP Software Repository. Available online: http://research.enjoymaps.ro/downloads/ (accessed on 4 May 2020).
  38. Woodcock, C.E.; Strahler, A.H. The factor of scale in remote sensing. Remote Sens. Environ. 1987, 21, 311–332. [Google Scholar] [CrossRef]
  39. Kim, M.; Madden, M.; Warner, T. Estimation of optimal image object size for the segmentation of forest stands with multispectral IKONOS imagery. In Object-Based Image Analysis-Spatial Concepts for Knowledge Driven Remote Sensing Applications; Blaschke, T., Lang, S., Hay, G.J., Eds.; Springer: Heidelberg/Berlin, Germany, 2008; pp. 291–307. [Google Scholar]
  40. Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H. A region-based multi-scale approach for object-based image analysis. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 241–247. [Google Scholar] [CrossRef]
  41. Drǎguţ, L.; Tiede, D.; Levick, S.R. ESP: A tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data. Int. J. Geogr. Inf. Sci. 2010, 24, 859–871. [Google Scholar] [CrossRef]
  42. Zhang, X.; Fritts, J.E.; Goldman, S.A. Image segmentation evaluation: A survey of unsupervised methods. Comput. Vis. Image Underst. 2008, 110, 260–280. [Google Scholar] [CrossRef] [Green Version]
  43. Yang, J.; He, Y.; Weng, Q. An Automated Method to Parameterize Segmentation Scale by Enhancing Intrasegment Homogeneity and Intersegment Heterogeneity. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1282–1286. [Google Scholar] [CrossRef]
  44. Yang, L.; Mansaray, L.R.; Huang, J.; Wang, L. Optimal segmentation scale parameter, feature subset and classification algorithm for geographic object-based crop recognition using multisource satellite imagery. Remote Sens. 2019, 11, 514. [Google Scholar] [CrossRef] [Green Version]
  45. Dorren, L.; Maier, B.; Seijmonsbergen, A. Improved Landsat-based forest mapping in steep mountainous terrain using object-based classification. Forest Ecol. Manag. 2003, 183, 31–46. [Google Scholar] [CrossRef]
46. Kim, M.; Madden, M.; Warner, T. Forest type mapping using object-specific texture measures from multispectral IKONOS imagery: Segmentation quality and image classification issues. Photogramm. Eng. Remote Sens. 2009, 75, 819–830. [Google Scholar] [CrossRef] [Green Version]
  47. Csillik, O. Fast segmentation and classification of very high resolution remote sensing data using SLIC superpixels. Remote Sens. 2017, 9, 243. [Google Scholar] [CrossRef] [Green Version]
  48. Lu, L.; Tao, Y.; Di, L. Object-Based Plastic-Mulched Landcover Extraction Using Integrated Sentinel-1 and Sentinel-2 Data. Remote Sens. 2018, 10, 1820. [Google Scholar] [CrossRef] [Green Version]
  49. Wang, B.; Zhang, Z.; Wang, X.; Zhao, X.; Yi, L.; Hu, S. Object-based mapping of gullies using optical images: A case study in the black soil region, Northeast of China. Remote Sens. 2020, 12, 487. [Google Scholar] [CrossRef] [Green Version]
50. Belgiu, M.; Drǎguţ, L. Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery. ISPRS J. Photogramm. Remote Sens. 2014, 96, 67–75. [Google Scholar] [CrossRef] [Green Version]
  51. Ode, A.; Miller, D. Analysing the relationship between indicators of landscape complexity and preference. Environ. Plan. B Plan. Des. 2011, 38, 24–40. [Google Scholar] [CrossRef]
  52. Lynch, K. The Image of the City; The MIT Press: Cambridge, MA, USA, 1960. [Google Scholar]
  53. Shanken, A.M. The visual culture of planning. J. Plan. Hist. 2018, 17, 300–319. [Google Scholar] [CrossRef]
  54. Chmielewski, T.J.; Kułak, A.; Michalik-Snieżek, M.; Lorens, B. Physiognomic structure of agro-forestry landscapes: Method of evaluation and guidelines for design, on the example of the West Polesie Biosphere Reserve. Int. Agrophys. 2016, 30, 415–429. [Google Scholar] [CrossRef]
55. Benedikt, M.L. To Take Hold of Space: Isovists and Isovist Fields. Environ. Plan. B Plann. Des. 1979, 6, 47–65. [Google Scholar] [CrossRef]
56. Doherty, M.F. Computation of Minimal Isovist Sets. Technical Report; University of Maryland, College Park, Center for Automation Research (ADA157624), 1984. Available online: https://apps.dtic.mil/sti/citations/ADA157624 (accessed on 1 July 2020).
  57. Gobster, P.H. Visions of nature: Conflict and compatibility in urban park restoration. Landsc. Urban. Plan. 2001, 56, 35–51. [Google Scholar] [CrossRef]
  58. Skalski, J. Komfort Dalekiego Patrzenia a Krajobraz Dolin Rzecznych W Miastach Rzecznych Na Nizinach. Teka Komisji Architektury, Urbanistyki i Studiów Krajobrazowych 2005, 1, 44–52. [Google Scholar]
  59. Smardon, R.C.; Palmer, J.F.; Felleman, J.P. Foundations for Visual Project Analysis; Wiley: New York, NY, USA, 1986. [Google Scholar]
  60. Olszewska, A.; Marques, P.F.; Barbosa, F. Enhancing Urban Landscape with Neuroscience Tools: Lessons from the Human Brain. Citygreen 2015, 1, 60. [Google Scholar] [CrossRef]
61. Olszewska, A.; Marques, P.F.; Ryan, R.L.; Barbosa, F. What makes a landscape contemplative? Environ. Plan. B Urban Anal. City Sci. 2018, 45, 7–25. [Google Scholar] [CrossRef]
  62. Kulik, M.; Warda, M.; Leśniewska, P. Monitoring the diversity of psammophilous grassland communities in the Kózki Nature Reserve under grazing and non-grazing conditions. J. Water Land Dev. 2013, 19, 59–67. [Google Scholar] [CrossRef]
  63. Warda, M.; Kulik, M.; Gruszecki, T. Description of selected grass communities in the “Kozki” nature reserve and a test of their active protection through the grazing of sheep of the Świniarka race. Ann. UMCS Agric. 2011, 66, 1–8. [Google Scholar] [CrossRef]
  64. Kulik, M.; Patkowski, K.; Warda, M.; Lipiec, A.; Bojar, W.; Gruszecki, T.M. Assessment of biomass nutritive value in the context of animal welfare and conservation of selected Natura 2000 habitats (4030, 6120 and 6210) in eastern Poland. Glob. Ecol. Conserv. 2019, 19, e00675. [Google Scholar] [CrossRef]
65. Benedikt, M.L.; Mcelhinney, S. Isovists and the Metrics of Architectural Space. In Proceedings of the 107th ACSA Annual Meeting; Ficca, J., Kulper, A., Eds.; ACSA Press: Pittsburgh, PA, USA, 2019; pp. 1–10. [Google Scholar]
  66. Nagarajan, S.; Schenk, T. Feature-based registration of historical aerial images by Area Minimization. ISPRS J. Photogramm. Remote Sens. 2016, 116, 15–23. [Google Scholar] [CrossRef]
  67. Brinkmann, K.; Hoffmann, E.; Buerkert, A. Spatial and temporal dynamics of Urban Wetlands in an Indian Megacity over the past 50 years. Remote Sens. 2020, 12, 662. [Google Scholar] [CrossRef] [Green Version]
  68. Galiatsatos, N.; Donoghue, D.N.M.; Philip, G. High resolution elevation data derived from stereoscopic CORONA imagery with minimal ground control: An approach using Ikonos and SRTM data. Photogramm. Eng. Remote Sens. 2008, 74, 1093–1106. [Google Scholar] [CrossRef]
  69. Gheyle, W.; Bourgeois, J.; Goossens, R.; Jacobsen, K. Scan Problems in Digital CORONA Satellite Images from USGS Archives. Photogramm. Eng. Remote Sens. 2011, 77, 1257–1264. [Google Scholar] [CrossRef]
  70. Luman, D.E.; Stohr, C.; Hunt, L. Digital reproduction of historical aerial photographic prints for preserving a deteriorating archive. Photogramm. Eng. Remote Sens. 1997, 63, 1171–1179. [Google Scholar]
71. The Web Mapping Services (WMS) of Polish National Geoportal. Available online: https://mapy.geoportal.gov.pl/wss/service/img/guest/ORTO/MapServer/WMSServer (accessed on 10 August 2020).
  72. Ford, M. Shoreline changes interpreted from multi-temporal aerial photographs and high resolution satellite images: Wotje Atoll, Marshall Islands. Remote Sens. Environ. 2013, 135, 130–140. [Google Scholar] [CrossRef]
  73. Casana, J.; Cothren, J. Stereo analysis, DEM extraction and orthorectification of CORONA satellite imagery: Archaeological applications from the Near East. Antiquity 2008, 82, 732–749. [Google Scholar] [CrossRef] [Green Version]
  74. Ma, R. Rational function model in processing historical aerial photographs. Photogramm. Eng. Remote Sens. 2013, 79, 337. [Google Scholar] [CrossRef]
75. Laben, C.A.; Brower, B.V. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent 6,011,875, 4 January 2000. [Google Scholar]
  76. Coeurdevey, L.; Fernandez, K. Pleiades Imagery User Guide. Report No. USRPHR-DT-125-SPOT-2.0. Airbus Defense and Space Intelligence, 2012, France CNES, 106. Available online: https://www.intelligence-airbusds.com/en/8718-user-guides (accessed on 4 May 2020).
  77. Niedzielski, T.; Witek, M.; Miziński, B.; Remisz, J. The Description of UAV Campaign in Kózki Nature Reserve. Biuletyn Informacyjny Instytutu Geografii i Rozwoju Regionalnego 2017, 7–10, 17–18. Available online: http://www.geogr.uni.wroc.pl/data/files/lib-biuletyn_igrr_2017_07_08_09_10.pdf (accessed on 13 April 2020).
  78. Liu, D.; Xia, F. Assessing object-based classification: Advantages and limitations. Remote Sens. Lett. 2010, 1, 187–194. [Google Scholar] [CrossRef]
  79. Su, T.; Zhang, S. Local and global evaluation for remote sensing image segmentation. ISPRS J. Photogramm. Remote Sens. 2017, 130, 256–276. [Google Scholar] [CrossRef]
  80. Lucieer, A.; Stein, A. Existential uncertainty of spatial objects segmented from satellite sensor imagery. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2518–2521. [Google Scholar] [CrossRef] [Green Version]
  81. Lucieer, A. Uncertainties in segmentation and their visualisation. Ph.D. Thesis, International Institute for Geo-Information Science and Earth Observation (ITC) and the University of Utrecht, Utrecht, The Netherlands, 2004. [Google Scholar]
  82. Clinton, N.; Holt, A.; Scarborough, J.; Yan, L.I.; Gong, P. Accuracy assessment measures for object-based image segmentation goodness. Photogramm. Eng. Remote Sens. 2010, 76, 289–299. [Google Scholar] [CrossRef]
  83. Whiteside, T.G.; Maier, S.W.; Boggs, G.S. Area-based and location-based validation of classified image objects. Int. J. Appl. Earth Obs. Geoinf. 2014, 28, 117–130. [Google Scholar] [CrossRef]
  84. Sicre, M.; Fieuzal, R.; Baup, F. Contribution of multispectral (optical and radar) satellite images to the classification of agricultural surfaces. Int. J. App. Earth Observ. Geoinf. 2020, 84, 101972. [Google Scholar] [CrossRef]
  85. Cai, L.; Shi, W.; Miao, Z.; Hao, M. Accuracy assessment measures for object extraction from remote sensing images. Remote Sens. 2018, 10, 303. [Google Scholar] [CrossRef] [Green Version]
  86. Zhang, X.; Feng, X.; Xiao, P.; He, G.; Zhu, L. Segmentation quality evaluation using region-based precision and recall measures for remote sensing images. ISPRS J. Photogramm. Remote Sens. 2015, 102, 73–84. [Google Scholar] [CrossRef]
  87. Marpu, P.R.; Neubert, M.; Herold, H.; Niemeyer, I. Enhanced evaluation of image segmentation results. J. Spat. Sci. 2010, 55, 55–68. [Google Scholar] [CrossRef]
  88. Montaghi, A.; Larsen, R.; Greve, M.H. Accuracy assessment measures for image segmentation goodness of the land parcel identification system (LPIS) in Denmark. Remote Sens. Lett. 2013, 4, 946–955. [Google Scholar] [CrossRef]
  89. Dubes, R.C. How many clusters are best?—An experiment. Pattern Recognit. 1987, 20, 645–663. [Google Scholar] [CrossRef]
  90. Polak, M.; Zhang, H.; Pi, M. An evaluation metric for image segmentation of multiple objects. Image Vis. Comput. 2009, 27, 1223–1227. [Google Scholar] [CrossRef]
  91. Anderson, J.R. A Land Use and Land Cover Classification System for Use with Remote Sensor Data; U.S. Government Printing Office: Washington, DC, USA, 1976; 28p.
  92. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46. [Google Scholar] [CrossRef]
  93. Foody, G.M. Explaining the unsuitability of the kappa coefficient in the assessment and comparison of the accuracy of thematic maps obtained by image classification. Remote Sens. Environ. 2020, 239, 111630. [Google Scholar] [CrossRef]
94. Pal, M. Random Forest classifier for remote sensing classification. Int. J. Remote Sens. 2005, 26, 217–222. [Google Scholar]
  95. Cortes, C.; Vapnik, V. Support-Vector Networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  96. Sabat-Tomala, A.; Raczko, E.; Zagajewski, B. Comparison of support vector machine and random forest algorithms for invasive and expansive species classification using airborne hyperspectral data. Remote Sens. 2020, 12, 516. [Google Scholar] [CrossRef] [Green Version]
  97. Mallinis, G.; Koutsias, N.; Tsakiri-Strati, M.; Karteris, M. Object-Based Classification Using Quickbird Imagery for Delineating Forest Vegetation Polygons in a Mediterranean Test Site. ISPRS J. Photogramm. Remote Sens. 2008, 63, 237–250. [Google Scholar] [CrossRef]
98. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  99. Batty, M.; Rana, S. The Automatic Definition and Generation of Axial Lines and Axial Maps. Environ. Plan. B: Plan. Des. 2004, 31, 615–640. [Google Scholar] [CrossRef]
  100. Jones, E.G.; Wong, S.; Milton, A.; Sclauzero, J.; Whittenbury, H.; McDonnell, M.D. The impact of pan-sharpening and spectral resolution on vineyard segmentation through machine learning. Remote Sens. 2020, 12, 934. [Google Scholar] [CrossRef] [Green Version]
  101. Xiao, P.; Zhang, X.; Zhang, H.; Hu, R.; Feng, X. Multiscale optimized segmentation of urban green cover in high resolution remote sensing image. Remote Sens. 2018, 10, 1813. [Google Scholar] [CrossRef] [Green Version]
  102. Weitkamp, G.; Lammeren, R.; van Bregt, A. Validation of isovist variables as predictors of perceived landscape openness. Landsc. Urban. Plan. 2014, 125, 140–145. [Google Scholar] [CrossRef]
  103. Wang, Y.; Dou, W. A fast candidate viewpoints filtering algorithm for multiple viewshed site planning. Int. J. Geogr. Inf. Sci. 2020, 34, 448–463. [Google Scholar] [CrossRef]
  104. Shi, X.; Xue, B. Deriving a minimum set of viewpoints for maximum coverage over any given digital elevation model data. Int. J. Digit. Earth 2016, 9, 1153–1167. [Google Scholar] [CrossRef]
  105. Chmielewski, S.; Lee, D. GIS-Based 3D visibility modeling of outdoor advertising in urban areas. In Proceedings of the 15th International Multidisciplinary Scientific GeoConference SGEM, Albena, Bulgaria, 18–24 June 2015; pp. 923–930. [Google Scholar]
  106. Turner, A.; Doxa, M.; O’Sullivan, D.; Penn, A. From isovists to visibility graphs: A methodology for the analysis of architectural space. Environ. Plan. B Plan. Des. 2001, 28, 103–121. [Google Scholar] [CrossRef] [Green Version]
  107. Suleiman, W.; Joliveau, T.; Favier, E. A New Algorithm for 3D Isovists. In Advances in Spatial Data Handling; Timpf, S., Laube, P., Eds.; Springer: Heidelberg/Berlin, Germany, 2013; pp. 157–173. [Google Scholar]
  108. Varoudis, T.; Psarra, S. Beyond two dimensions: Architecture through three-dimensional visibility graph analysis. J. Space Syntax 2014, 5, 91–108. [Google Scholar]
  109. Tobler, W.R. A computer model simulation of urban growth in the Detroit region. Econ. Geogr. 1970, 46, 234–240. [Google Scholar] [CrossRef]
  110. Fisher, P.F. An Exploration of Probable Viewsheds in Landscape Planning. Environ. Plan. B Plan. Des. 1995, 22, 527–546. [Google Scholar] [CrossRef]
  111. Bartie, P.; Reitsma, F.; Kingham, S.; Mills, S. Incorporating vegetation into visual exposure modelling in urban environments. Int. J. Geogr. Inf. Sci. 2011, 5, 851–868. [Google Scholar] [CrossRef]
  112. O’Neill, R.V.; Hunsaker, C.T.; Jackson, B.L.; Jones, K.B.; Riiters, K.H.; Wickham, J.D. Scale problems in reporting landscape pattern at regional scale. Landsc. Ecol. 1996, 11, 169–180. [Google Scholar] [CrossRef]
  113. Kim-Prieto, C.; Diener, E.; Tamir, M.; Scollon, C.; Diener, M. Integrating the Diverse Definitions of Happiness: A Time-Sequential Framework of Subjective Well-Being. J. Happiness Stud. 2005, 6, 261–300. [Google Scholar] [CrossRef]
  114. Helbich, M. Toward dynamic urban environmental exposure assessments in mental health research. Environ. Res. 2018, 161, 129–135. [Google Scholar] [CrossRef]
  115. Navickas, L.; Olszewska, A.; Mantadelis, T. CLASS: Contemplative landscape automated scoring system. In Proceedings of the 24th Mediterranean Conference on Control and Automation (MED), Athens, Greece, 21–24 June 2016; pp. 1180–1185. [Google Scholar]
  116. Council of Europe. The European Landscape Convention Text. 2007. Available online: https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/176 (accessed on 20 August 2020).
117. Cassatella, C. Landscape Indicators. Assessing and Monitoring Landscape Quality; Cassatella, C., Peano, A., Eds.; Springer: Dordrecht, The Netherlands, 2011. [Google Scholar] [CrossRef]
  118. Dramstad, W.E.; Tveit, M.S.; Fjellstad, W.J.; Fry, G.L. Relationships between visual landscape preferences and map-based indicators of landscape structure. Landsc. Urban. Plan. 2006, 78, 465–474. [Google Scholar] [CrossRef]
  119. Uuemaa, E.; Roosaare, J.; Mander, U. Landscape metrics as indicators of river water quality at catchment scale. Nord. Hydrol. 2007, 38, 125–138. [Google Scholar] [CrossRef]
  120. Walz, U.; Stein, C. Indicator for a monitoring of Germany’s landscape attractiveness. Ecol. Indic. 2018, 94, 64–73. [Google Scholar] [CrossRef]
  121. Lustig, A.; Stouffer, D.B.; Roigé, M.; Worner, S.P. Towards more predictable and consistent landscape metrics across spatial scales. Ecol. Indic. 2015, 57, 11–21. [Google Scholar] [CrossRef]
  122. Herbst, H.; Förster, M.; Kleinschmidt, B. Contribution of landscape metrics to the assessment of scenic quality—the example of the landscape structure plan Havelland/Germany. Landscape Online 2009, 10, 1–17. [Google Scholar] [CrossRef]
  123. Xiong, J.; Thenkabail, P.S.; Gumma, M.K.; Teluguntla, P.; Poehnelt, J.; Congalton, R.G.; Yadav, K.; Thau, D. Automated cropland mapping of continental Africa using Google Earth Engine cloud computing. ISPRS J. Photogramm. Remote Sens. 2017, 126, 225–244. [Google Scholar] [CrossRef] [Green Version]
  124. Midekisa, A.; Holl, F.; Savory, D.J.; Andrade-Pacheco, R.; Gething, P.W.; Bennett, A.; Sturrock, H.J. Mapping land cover change over continental Africa using Landsat and Google Earth Engine cloud computing. PLoS ONE 2017, 12, e0184926. [Google Scholar] [CrossRef] [PubMed]
  125. Cushman, S.A.; McGarigal, K.; Neel, M.C. Parsimony in landscape metrics: Strength, universality, and consistency. Ecol. Indic. 2008, 8, 691–703. [Google Scholar] [CrossRef]
  126. Fry, G. Culture and nature versus culture or nature. In The New Dimensions of the European Landscape; Jongman, R., Ed.; Springer: Dordrecht, The Netherlands, 2002; pp. 75–81. [Google Scholar]
Figure 1. Isovist minimum viewpoint set (MIS) construction: a single viewpoint (A) gives an overview of most of the landscape and is, therefore, regarded as a primary viewpoint. To see obstructed areas (shaded in red), a secondary viewpoint is necessary (B). However, two small obscured areas remain in red; thus, two more viewpoints (C) are positioned by the MIS algorithm. Full visual coverage of the landscape, therefore, is provided by four viewpoints.
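The MIS construction described in the caption is, in effect, a greedy set cover: at each step, the viewpoint whose isovist reveals the largest still-obscured area is added, until every part of the landscape is seen. A minimal sketch under that assumption, using hypothetical toy visibility sets rather than the authors' implementation:

```python
# Greedy minimal isovist set (MIS): each candidate viewpoint maps to the
# set of landscape cells its isovist covers; viewpoints are added in
# order of remaining coverage until every cell is seen.
def minimal_isovist_set(isovists, area):
    uncovered, order = set(area), []
    while uncovered:
        # pick the viewpoint that reveals the largest still-obscured area
        best = max(isovists, key=lambda vp: len(isovists[vp] & uncovered))
        gained = isovists[best] & uncovered
        if not gained:  # remaining cells are visible from no candidate
            break
        order.append(best)  # rank follows selection order (primary first)
        uncovered -= gained
    return order

# Toy landscape of six cells: A is the primary viewpoint, C alone covers
# the remainder, so candidate B is never needed.
isovists = {"A": {1, 2, 3, 4}, "B": {4, 5}, "C": {5, 6}}
print(minimal_isovist_set(isovists, range(1, 7)))  # ['A', 'C']
```

The selection order doubles as the viewpoint rank used throughout the paper: the first viewpoint chosen is the primary one, later picks are secondary.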
Figure 2. The location of the “Kózki” Nature Reserve area in East Poland (upper left) near Siemiatycze (bottom left) as a central part of the research area delineated by the dashed line (right).
Figure 3. Imagery used for geographic object-based image analysis (GEOBIA) covering the case study location: (A) 1957, (B) 1965, (C) 1973, (D) 1997, (E) 2006, (F) 2010, (G) 2015, and (H) 2018.
Figure 4. The enhanced estimation of scale parameter (ESP2) plug-in output: rate of change (ROC) of local variance (LV) graphs for (A) 1957, (B) 1965, (C) 1973, (D) 1997, (E) 2006, (F) 2010, (G) 2015, and (H) 2018; the SP values proposed by ESP2 at level 1 are marked with an asterisk.
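The ESP approach [41] flags scale parameter (SP) candidates at peaks in the rate of change (ROC) of local variance (LV) across segmentation levels, ROC(L) = (LV_L − LV_{L−1})/LV_{L−1} × 100. A sketch of that peak-picking logic with illustrative LV values (not the curves of Figure 4):

```python
# Rate of change (ROC) of local variance (LV) across segmentation
# scales, as used by ESP/ESP2 [41]; ROC peaks flag SP candidates.
def roc_curve(lv):
    # ROC(L) = (LV_L - LV_{L-1}) / LV_{L-1} * 100
    return [(b - a) / a * 100 for a, b in zip(lv, lv[1:])]

def sp_candidates(scales, lv):
    roc = roc_curve(lv)
    # a local ROC peak (higher than both neighbours) marks a candidate SP
    return [scales[i + 1] for i in range(1, len(roc) - 1)
            if roc[i] > roc[i - 1] and roc[i] > roc[i + 1]]

# Illustrative LV series only: LV rises with scale, and abrupt jumps
# (here at SP 30 and 50) mark scales where object heterogeneity changes.
scales = [10, 20, 30, 40, 50, 60]
lv = [5.0, 6.0, 7.5, 7.6, 8.8, 8.9]
print(sp_candidates(scales, lv))  # [30, 50]
```

In practice the analyst still inspects the candidate segmentations visually, which is why Figure 4 marks the proposed SP values rather than adopting them blindly.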
Figure 5. Digitized reference segments (RSs) and their overlapping count across the whole research area (A), and RS examples for (B) 1965, (C) 2006, and (D) 2015.
Figure 6. The precision–recall curves covering segmentation for five scale parameter (SP) candidates. A curve located in the upper-right corner indicates high segmentation quality [86].
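Region-based precision and recall [86] compare the overlap of a segmentation S with the reference segments R: precision = area(S∩R)/area(S), recall = area(S∩R)/area(R). A simplified sketch that models both as sets of raster cells (the study itself overlays polygons):

```python
# Area-based precision/recall between a segmentation S and reference
# segments R, in the spirit of [86]; segments are modelled as sets of
# raster cells, a simplification of a polygon overlay.
def precision_recall(segments, references):
    seg = set().union(*segments)   # all cells claimed by the segmentation
    ref = set().union(*references) # all cells in the reference segments
    inter = len(seg & ref)
    return inter / len(seg), inter / len(ref)

# Toy example: an over-grown segment lowers precision,
# a missed sliver lowers recall.
S = [{(0, 0), (0, 1), (1, 0), (1, 1), (2, 0)}]  # 5 cells, one outside R
R = [{(0, 0), (0, 1), (1, 0), (1, 1), (0, 2)}]  # 5 cells, one missed by S
p, r = precision_recall(S, R)
print(round(p, 2), round(r, 2))  # 0.8 0.8
```

A curve is obtained by repeating this computation for each SP candidate's segmentation, which is how the five curves of Figure 6 arise.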
Figure 7. The segments (round frames) and the segment classification results (land-cover map): (A) 1957, (B) 1965, (C) 1973, (D) 1997, (E) 2006, (F) 2010, (G) 2015, and (H) 2018.
Figure 8. Land-cover change dynamics from 1957 to 2018 (1965 was excluded because of the low classification accuracy and the relatively higher GSD of CORONA imagery, which limited the possibility of shrub community identification).
Figure 9. The viewpoint spatial arrangement resulting from MIS analysis, with viewpoint size indicating rank (as explained in Figure 1): (A) an open riverside landscape with a linear arrangement of three main viewpoints; (B) expanded in 1965 by an additional viewpoint on the river in the northern part of the research area; (C) maintained practically unchanged until 1973; (D,E) then, with the onset of vegetation succession, gradually fragmented into separate landscape interiors in the center of the research area; (F) these interiors remained visually connected with the entire area until 2010; (G,H) after which they became visually isolated (marked in blue).
Table 1. The list of remote sensing imagery used for a multi-temporal case study of “Kózki” nature reserve. GSD—ground sample distance.
| Date | Type | Scale/GSD | Spectral resolution | Camera/Project info/RMSE |
| --- | --- | --- | --- | --- |
| 1957 (August) | Archival aerial imagery, mono coverage | 1:18,000/0.2 m | Grayscale | RC5/unknown/3.6 * |
| 1965 (26 September) | CORONA KH-4A | 2.8 m | Grayscale | 70 mm panoramic, forward, stereo medium/Mission 1024-1/3.3 m * |
| 1973 (July) | Archival aerial imagery, mono coverage | 1:17,000/0.18 m | Grayscale | RC8/unknown/2.1 * |
| 1997 (month unknown) | Orthophoto | 0.5 m | RGB | RC20/PHARE LPIS48/1.5 m |
| 2006 (month unknown) | Orthophoto | 0.5 m | RGB | RC20/LPIS_Centrum/1.5 m |
| 2010 (month unknown) | Orthophoto | 0.25 m | RGB + CIR | Unknown/LPIS40/0.75 m |
| 2015 (4 July) | Pleiades-1B Ortho (Level 3) after radiometric and geometric correction | 0.5 m (pansharpened) | RGB + NIR | Digital 12 bits/not specified/0.35 m |
| 2018 (26 June) | UAV eBee flight campaign (orthophoto) | 0.04 m | RGB + CIR | Canon S100/BIOSTRATEG2/297267/14/NCBR/2016/RMSE 0.39 m |
* Values after orthorectification.
Table 2. Land cover classes used in the study. Descriptions are provided, along with references to USGS and Corine Land Cover (CLC) nomenclature. Only references that most accurately describe the case-study land cover are listed (e.g., CLC Level 1 forest is not reported because, at Level 1, forest is merged with semi-natural areas). LCCL—land-cover classification level.
| Class name | Status | USGS (LCCL-1) | CLC (LCCL-1) | USGS (LCCL-2) | CLC (LCCL-2) | CLC (LCCL-3) |
| --- | --- | --- | --- | --- | --- | --- |
| Built-up areas | c-class | - | Artificial surfaces (1) | Residential | - | Road and rail networks |
| Forest | c-class | Forest land | - | Deciduous, evergreen, mixed forest land, forest wetland, orchards | Forests (31) | Broad-leaved forest (311), coniferous forest (312), mixed forest (313), fruit trees and berry plantations (222), agro-forestry areas (244) |
| Shrubs | c-class | - | - | Shrub | Scrub (32) | - |
| Xeric sandy grassland | s-class | - | - | Herbaceous | Herbaceous (32) | Moors and heathland (322) |
| Open sands | s-class | - | - | Sandy areas other than beaches | Open spaces with little or no vegetation (33) | Sparsely vegetated areas (333) |
| Water | s-class | Water | Water bodies (5) | Lakes | Inland waters (51) | Water bodies (512) |
| Other (agricultural background) | s-class | - | Agricultural area (2) | Cropland and pastures, bare ground | Arable land (21), pasture (23) | Non-irrigated arable land (211), pasture (231), natural grasslands (321) |
Table 3. Number and patch size metrics for merged c-classes (NP—number of patches, PD—patch density).
| Year | 1957 | 1965 | 1973 | 1997 | 2006 | 2010 | 2015 | 2018 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NP | 479 | 261 | 541 | 598 | 561 | 572 | 465 | 510 |
| PD | 1328.2 | 1348.5 | 1317 | 981 | 682.9 | 643.6 | 538.1 | 576.6 |
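NP counts connected same-class patches and PD normalizes that count by the analysed area. A toy sketch of patch counting on a class raster via 4-connected flood fill (the study derives its patches from GEOBIA segments, not from a raster; the grid and area below are illustrative):

```python
# Number of patches (NP) via 4-connected component counting on a class
# raster, and patch density PD = NP / area.
def count_patches(grid, cls):
    rows, cols, seen, n_patches = len(grid), len(grid[0]), set(), 0
    for r0 in range(rows):
        for c0 in range(cols):
            if grid[r0][c0] == cls and (r0, c0) not in seen:
                n_patches += 1          # new, previously unvisited patch
                stack = [(r0, c0)]
                while stack:            # flood-fill the whole patch
                    r, c = stack.pop()
                    if (r, c) in seen:
                        continue
                    seen.add((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols \
                                and grid[rr][cc] == cls:
                            stack.append((rr, cc))
    return n_patches

grid = ["FF.F",
        "F..F",
        "..FF"]
np_forest = count_patches(grid, "F")
print(np_forest, np_forest / 2.0)  # 2 patches; PD = 1.0 for a 2 km^2 area
```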
Table 4. The results of imageability indicator value calculations. VD—viewpoint density; VS—viewpoint spacing.
| Year | Vn primary | Vn secondary | Vn total | VD primary | VD secondary | VD total | VS primary | VS secondary | VS total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1957 | 3 | 123 | 126 | 1.5 | 61.5 | 63 | 0.82 | 0.13 | 0.13 |
| 1965 | 2 | 136 | 138 | 1 | 68 | 69 | 1.00 | 0.12 | 0.12 |
| 1973 | 6 | 147 | 153 | 3 | 73.5 | 76.5 | 0.58 | 0.12 | 0.11 |
| 1997 | 8 | 229 | 237 | 4 | 114.5 | 118.5 | 0.50 | 0.09 | 0.09 |
| 2006 | 11 | 239 | 250 | 5.5 | 119.5 | 125 | 0.43 | 0.09 | 0.09 |
| 2010 | 11 | 223 | 234 | 5.5 | 111.5 | 117 | 0.43 | 0.09 | 0.09 |
| 2015 | 10 | 234 | 244 | 5 | 117 | 122 | 0.45 | 0.09 | 0.09 |
| 2018 | 9 | 216 | 225 | 4.5 | 108 | 112.5 | 0.47 | 0.10 | 0.09 |
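The tabulated indicator values are mutually consistent with VD = Vn/A for a research area A of about 2 km², and with VS = 1/√VD, i.e., the spacing of an equivalent regular grid of viewpoints (for the 1957 totals: 126/2 = 63 viewpoints/km² and 1/√63 ≈ 0.13 km). A sketch under those assumptions:

```python
from math import sqrt

# Imageability indicators reconstructed from Table 4, assuming a 2 km^2
# research area: viewpoint density VD = Vn / A and viewpoint spacing
# VS = 1 / sqrt(VD), the spacing of an equivalent regular viewpoint grid.
def imageability(vn, area_km2=2.0):
    vd = vn / area_km2
    vs = 1.0 / sqrt(vd)
    return round(vd, 1), round(vs, 2)

print(imageability(126))  # 1957 totals   -> (63.0, 0.13)
print(imageability(3))    # 1957 primary  -> (1.5, 0.82)
print(imageability(153))  # 1973 totals   -> (76.5, 0.11)
```

The derived VD and VS columns thus carry no information beyond Vn and A; their value is interpretive, translating viewpoint counts into densities and distances that can be compared across years.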

Chmielewski, S.; Bochniak, A.; Natapov, A.; Wężyk, P. Introducing GEOBIA to Landscape Imageability Assessment: A Multi-Temporal Case Study of the Nature Reserve “Kózki”, Poland. Remote Sens. 2020, 12, 2792. https://doi.org/10.3390/rs12172792
