Quantitative Landscape Assessment Using LiDAR and Rendered 360° Panoramic Images

The study presents a new method for quantitative landscape assessment. The method uses LiDAR data and combines the potential of GIS (ArcGIS) and 3D graphics software (Blender). The developed method allows one to create Classified Digital Surface Models (CDSM), which are then used to create 360° panoramic images from the point of view of the observer. In order to quantify the landscape, the 360° panoramic images were transformed to the Interrupted Sinusoidal Projection using G.Projector software. A quantitative landscape assessment is carried out automatically with the following landscape classes: ground, low, medium, and high vegetation, buildings, water, and sky according to the LAS 1.2 standard. The results of the analysis are presented quantitatively as the percentage distribution of landscape classes in the 360° field of view. In order to fully describe the landscape around the observer, little planet charts have been proposed to interpret the obtained results. The usefulness of the developed methodology, together with examples of its application and the way of presenting the results, is described. The proposed Quantitative Landscape Assessment method (QLA360) allows quantitative landscape assessment to be performed in the 360° field of view without the need to carry out field surveys. The QLA360 uses the LiDAR classification standards of the American Society for Photogrammetry and Remote Sensing (ASPRS), which allows one to avoid differences resulting from the use of different algorithms for classifying images in semantic segmentation. The most important advantages of the method are as follows: observer-independence, a 360° field of view simulating the human perspective, automatic operation, scalability, and easy presentation and interpretation of results.


Introduction
The landscape plays an important role in the cultural, ecological, environmental, and social fields. Landscape is an important element of the quality of life of people all over the world, both for urban and countryside inhabitants [1]. Landscape analysis plays a key role in research on urban ecology [2], urban planning [3], urban heat island studies [4], and monitoring of landscape changes [5]. The procedures for landscape identification and assessment should be based on the exchange of experience and methodology.
The interest in landscape research increased significantly at the beginning of the 21st century. Dynamic progress of Geographic Information Systems (GIS) and widespread availability of high-resolution data contributed to the development of many methods used for landscape assessment. Among the methods developed so far, two are the most frequently used. The first one is based on spatial data analyses, which describe the landscape compositions and patterns in the form of 2D categorical maps [6,7]. The second one is based on eye-level photographs, and the landscape is characterized either by qualitative questionnaire surveys [8][9][10][11] or quantitative computer vision and machine learning algorithms [12,13].
The 2D categorical maps can be made with a series of developed metrics that can effectively characterize landscape structures, but they can be misleading when correlated with people's 3D visual landscape preferences [14] because of the lack of quantitative information in the vertical direction [15]. The availability of high-resolution Digital Elevation Models (DEMs), Digital Surface Models (DSMs), and LiDAR point clouds allowed for the generation of 3D models [16][17][18] and the implementation of the third dimension into the analysis workflow in environmental modeling [19][20][21][22]. Although this greatly improved the quality of landscape classifications, it does not solve the biggest issue of landscape mapping, which is the potential discrepancy between the landscape classification and the actual view from the observer location. The main problem with landscape classification mapping is the strict borders of adjacent classes. Zheng et al. [23] pointed out that the two most commonly used approaches, the moving window and grid methods, take groups of contiguous pixels as neighborhoods while overlooking the fact that ecological processes do not occur within fixed boundaries. In the moving window method, the results of landscape metrics are summarized in the central pixel of the window to construct a new metric map. The grid method, on the other hand, subdivides a map into equal-size blocks. The same applies to the viewshed area of the observer, which varies between different, sometimes very close locations (just around the corner). The fact that the observer sees only the first reflection of the surrounding environment makes the practical usefulness of landscape classification maps questionable.
From the observer's point of view, the methods of landscape characterization using eye-level photographs seem more useful, as they try to simulate the perspective of the human eye instead of 2D maps. Visual assessment of landscapes and landscape elements has relied heavily on the use of photographs since the 1960s [24]. However, the lack of a uniform methodology for taking photos and conducting questionnaire surveys makes it difficult to compare the results of different studies. The basic factor, the camera focal length, varies between studies and takes values of, for example, 38 [25], 50 [26], and 55 mm [27]. The sensor size is often undefined despite the fact that it is, together with the focal length, crucial for defining the field of view (FOV). There are many factors that may influence the perception of landscape photographs during questionnaire surveys, such as image exposure, highlights, black point, hue, and saturation [28]. The composition of a landscape photo may unintentionally have a direct impact on perception, as it is used by professional photographers to direct viewers' attention to the intended focal point and yield a visually appealing image [29]. The issues related to composition and unstandardized FOV can be resolved with the use of 360° panoramic images, which capture the whole scene surrounding the observer. Recent rapid progress in computer vision and machine learning algorithms has allowed for observer-independent image classification, which can be applied to landscape description [13,30,31]. Panoramic images from the Google Street View (GSV) database combined with machine learning have proved useful for quantification of the landscape in an urban environment [32][33][34][35]; however, GSV image locations are biased towards streets and do not offer complete coverage of open areas at this point.
Simensen et al. [36], on the basis of a review of 54 contemporary landscape characterization approaches, stated that no single method can address all dimensions of the landscape without important trade-offs. Thus, there is still a need for quantitative, observer-independent methods for landscape characterization.
In this work, a landscape is defined as any permanent object in the human 360° field of view at a given location. This assumption meets the definition of the European Landscape Convention, which defines a landscape as an area, as perceived by people, whose character is the result of the action and interaction of natural and/or human factors [1].
The aims of the study were to (1) develop a methodology of Quantitative Landscape Assessment (QLA360) from the perspective of an observer in the full 360° panoramic field of view and (2) demonstrate the practical applicability of the methodology. The following requirements were assumed at the stage of developing the method concept: observer-independent, 360° field of view, automatic operation, scalability, and easy presentation of results. The methodology is based on the data obtained from airborne laser scanning (LiDAR) and integrates the use of GIS and 3D graphics software.

Data
The QLA360 is based on point clouds obtained from airborne laser scanning (LiDAR). The basic information contained in the point cloud is the set of coordinates locating each point in 3D space. Moreover, RGB values, intensity, number of returns, and classification are attached to each point. The classes of the point cloud in the LAS 1.2 format are shown in Table 1. The LiDAR data used in this study were obtained from the Polish Head Office of Land Surveying and Cartography. The surveying was done in July 2012. The mean spatial error of the data was 0.08 m horizontal and 0.1 m vertical. The average density of the point cloud was 6.8 points per square meter. According to the LiDAR data provider, the point cloud classification was carried out in the TerraScan and TerraModeler modules of TerraSolid software according to the LAS 1.2 standard of the American Society for Photogrammetry and Remote Sensing (ASPRS) (Table 1). The classification was carried out in two stages. First, automatic classification was performed using TerraSolid algorithms. Then a manual correction was made, which is a process of checking the effectiveness of the automatic classification and completing it in places where the automatic filtration process did not bring satisfactory results. The classification process is considered complete when the classification error is less than 5%. Six classes were used for the landscape analysis: ground, low vegetation (height < 0.4 m), medium vegetation (height 0.4-2 m), high vegetation (height > 2 m), building, and water.
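As an illustration, the class codes above can be used to split a point cloud into per-class layers. The following is a minimal, pure-Python sketch with synthetic points; a real workflow would read the points from a .las file (e.g., with the laspy library) rather than from an inline list.

```python
# Hypothetical sketch: splitting LiDAR points by the ASPRS LAS 1.2 class codes
# used in this study. The point data below is synthetic, for illustration only.

LAS12_CLASSES = {
    2: "ground",
    3: "low vegetation",     # height < 0.4 m
    4: "medium vegetation",  # height 0.4-2 m
    5: "high vegetation",    # height > 2 m
    6: "building",
    9: "water",
}

def split_by_class(points):
    """Group (x, y, z, class_code) tuples into per-class coordinate lists."""
    layers = {name: [] for name in LAS12_CLASSES.values()}
    for x, y, z, code in points:
        name = LAS12_CLASSES.get(code)
        if name is not None:  # skip class codes not used in the analysis
            layers[name].append((x, y, z))
    return layers

# Synthetic example points: (x, y, z, class_code)
cloud = [(0.0, 0.0, 0.1, 2), (1.0, 0.5, 0.3, 3),
         (2.0, 1.0, 5.2, 5), (3.0, 1.5, 8.0, 6)]
layers = split_by_class(cloud)
print(len(layers["ground"]), len(layers["high vegetation"]))  # -> 1 1
```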

Classified Digital Surface Model (CDSM) Development
The QLA360 methodology assumes the inclusion of all objects within the sight range of the observer. The input data are the LiDAR point cloud and the observer's locations (Figure 1).
The DSM in raster format was developed in 1 m resolution on the basis of the LiDAR point cloud. The observer location is a vector shapefile layer containing one or multiple points with X and Y coordinates and a height of 1.7 m above the ground level to simulate the line of sight [12]. The Viewshed tool identifies the areas that can be seen from one or multiple observer points. The result of the analysis was a raster map with a resolution of 1 m, where 0 means no visibility and 1 means visibility. Based on the results of the Viewshed analysis and by using the Minimum Bounding Geometry tool, an area of further analysis was determined (vector shapefile, AoI.shp). Using the AoI.shp file, the point cloud was clipped to the area subject to further detailed analysis. From the clipped point cloud, points representing ground (2), low vegetation (3), medium vegetation (4), high vegetation (5), buildings (6), and water (9) were selected into separate layers. Then, based on the point clouds, mesh files were created in the form of TINs (Triangulated Irregular Networks). The mesh files allowed us to reconstruct the topography, the low, medium, and high vegetation, and the buildings. The conversion of point clouds to meshes was conducted in CloudCompare v.2.1 (https://www.danielgm.net/cc/), an open-source project for 3D point cloud processing that provides a set of tools for managing and editing point clouds and triangular meshes [38]. The process consists of rasterization of the point cloud at a specified resolution and conversion to a mesh in the form of a TIN. The process is relatively easy with the continuous ground mesh, but the automatic transformation of relatively sparse (4-12 points/m²) point clouds of objects above the ground level, which lack points on the sides, is a difficult task and can introduce errors, especially in the case of vegetation.
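The rasterization step that precedes meshing can be sketched in pure Python: points are binned into grid cells at a chosen resolution and each cell keeps its highest z value (a surrogate for the first-return surface). This is only a minimal illustration of the idea; the study itself used CloudCompare's rasterize-and-mesh tools, and the sample points here are synthetic.

```python
import math

def rasterize_max_z(points, cell_size=1.0):
    """Bin (x, y, z) points into square cells; keep the maximum z per cell."""
    grid = {}
    for x, y, z in points:
        key = (math.floor(x / cell_size), math.floor(y / cell_size))
        if key not in grid or z > grid[key]:
            grid[key] = z
    return grid

# Two points fall into cell (0, 0); the higher one wins.
pts = [(0.2, 0.3, 1.0), (0.8, 0.1, 2.5), (1.4, 0.2, 0.7)]
print(rasterize_max_z(pts))  # -> {(0, 0): 2.5, (1, 0): 0.7}
```

The resulting heightfield per class can then be triangulated into a TIN and, as described below, extruded to the ground to recover vertical faces.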
In this study, a simplified method was used in which the meshes were created on the basis of the first returns of the points in each class and then extruded along the z-axis to the ground. This allowed for automation of the task and, moreover, for the correct representation of vertical features (building elevations) in the 3D model. All of the models were exported separately in the .fbx format.
In the next step, all of the developed models were imported into Blender software v.2.79. Blender is a free and open-source 3D creation suite. It supports the entirety of the 3D pipeline: modeling, rigging, animation, simulation, rendering, compositing and motion tracking, video editing, and game creation. The great versatility of the program allows for performing a 3D visual impact analysis [39].
Blender's scene consisting of a complete, natural scale, 3D model of the study site can be used to perform renders from any location.

Presentation of the Outputs
The output of the QLA360 method is a set of equirectangular 360° panoramic images, presenting separately the ground, low, medium, and high vegetation, buildings, water, and sky visible from the perspective of observers at known locations. This gives a wide variety of result presentation possibilities. The equirectangular images can be directly used in Virtual Reality devices, which can be useful for subjective visual impact assessment by the observers. Moreover, the equirectangular renders can be easily converted to stereographic (little planet) projections, which can help in understanding the direction and spatial distribution of landscape features from the human perspective. In order to quantify the amount of visible landscape features, the 360° panoramic images were transformed to the Interrupted Sinusoidal Projection, which is a type of Equal-Area projection [40]. For the transformation, the G.Projector software developed by the National Aeronautics and Space Administration (NASA) was used. In order to reduce the distortion to a minimum, the 10° Gores Interruption was applied. Next, the pixels of the colors representing each class were counted in every image. As the different classes are represented with different colors (RGB values), the amount of landscape features can be quantified as the percentage of pixels representing the different colors (classes). In this study, in order to automate the task, the pixels were counted using Python and the PIL library (Appendix B), but this can be done in most graphics editors, such as Photoshop or GIMP. The percentage quantities of landscape features can be joined to the observer location vector points as attributes. The results can be presented as charts with a percentage distribution of landscape features or as thematic maps showing the classes separately.
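The per-class pixel counting can be sketched as follows. The paper's Appendix B script uses Python with PIL; here the pixel list is built inline so the example is self-contained, but with Pillow it would come from `Image.open(path).convert("RGB").getdata()`. The RGB values assigned to each class are hypothetical, standing in for the materials assigned in Blender.

```python
from collections import Counter

CLASS_COLORS = {  # hypothetical class colors (the study's actual RGB values may differ)
    (120, 120, 120): "ground",
    (0, 200, 0): "high vegetation",
    (200, 0, 0): "building",
    (135, 206, 235): "sky",
}

def class_percentages(pixels):
    """Return the percentage of pixels belonging to each landscape class."""
    counts = Counter(CLASS_COLORS.get(px, "other") for px in pixels)
    total = sum(counts.values())
    return {cls: 100.0 * n / total for cls, n in counts.items()}

# A toy 4-pixel "image": three vegetation pixels and one building pixel.
pixels = [(0, 200, 0)] * 3 + [(200, 0, 0)]
print(class_percentages(pixels))  # -> {'high vegetation': 75.0, 'building': 25.0}
```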

Results
The QLA360 methodology was tested on the example of the City of Poznań, Poland. In the first stage, a DSM was developed on the basis of the LiDAR point cloud. Then, the viewshed analysis was carried out for the six example observer locations with the observer height set to 1.7 m above the ground (Figure 2a). On the basis of the viewshed analysis results, the Minimum Bounding Geometry tool was used to create an AoI polygon. It has an area of 1.71 km² and was used to limit the range of further analysis. The point cloud with assigned RGB color values (Figure 2b) and class values according to the LAS 1.2 standard (Figure 2c) was clipped to the AoI extent. Next, the points were extracted separately by their class values, converted to meshes in CloudCompare software, and imported into Blender software (Figure 2d). In Blender, a material was assigned to each mesh class (Figure 2e). Next, the CDSM created in Blender and the observer locations were used for rendering the 360° panoramic images. Renders were made as equirectangular images with a resolution of 10,000 × 5000 pixels for all analyzed observer locations. This resolution was selected on the basis of an analysis performed on a set of images rendered at different resolutions for a single location (Figure 3): at this resolution, the difference in the number of pixels representing particular classes was less than 0.005%, and the render time was 129 s. All renders were made on a PC with the following parameters: Intel(R) Core(TM) i7-8700K CPU with 48 GB RAM and NVIDIA GeForce GTX 1080Ti GPU. The rendered panoramic images are shown in Figure 4a-f (right top corner). In order to perform visual verification, GSV panoramas were downloaded using the methodology presented by Gong et al. [32] (Figure 4a-f, left top corner). The rendered and GSV panoramic images were then overlaid with 50% transparency, and lines marking the north, south, east, and west directions were added for easier interpretation (Figure 4, bottom).
The obtained results indicate that the CDSM developed on the basis of the LiDAR point cloud in Blender represents the landscape elements visible from the perspective of the observer well. The discrepancies between the rendered and GSV panoramic images are mainly temporary elements, such as cars and people. There are also some differences in landscape features, particularly with regard to low vegetation and the lack of billboards (Figure 4c) and street lamps (Figure 4a,c,d). Individual inconsistencies in the classification of objects were also identified; for example, a street lamp was classified as high vegetation (Figure 4f). Differences between the panoramic images may also result from the different dates of the LiDAR and GSV surveys. Moreover, the classification of the LiDAR point cloud in the LAS standard guarantees a classification accuracy of no less than 95%. The analysis shows that 360° panoramic images created with the QLA360 method can be used as the basis for further landscape assessment. For the rendered panoramic images to be useful for further quantitative analysis, it is necessary to transform them because of the large distortion in the upper and lower parts of panoramic images in the equirectangular projection. The panoramic images were transformed to the Interrupted Sinusoidal Projection, from the Equal-Area projection group, using the G.Projector software developed by NASA. The results of the 360° panoramic image transformation are shown in Figure 5. The transformed panoramic images were used for further calculations to determine the percentage distribution of each landscape feature in the entire 360° panoramic image. In order to automate the calculation process, a Python script was developed for counting pixels of different classes. It allowed us to calculate the percentage values of the separated classes in accordance with the LAS 1.2 standard, which were shown as pie charts (Figure 6a).
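The need for the equal-area transformation can be made concrete with a short calculation (this cross-check is not part of the paper's workflow): in an equirectangular panorama of H rows, row r maps to latitude π(0.5 − (r + 0.5)/H), and the solid angle covered by each pixel scales with the cosine of that latitude. Counting raw equirectangular pixels therefore overweights the sky and ground near the poles, which the sinusoidal reprojection corrects.

```python
import math

def row_weights(height):
    """Relative solid-angle weight of each pixel row in an equirectangular image."""
    return [math.cos(math.pi * (0.5 - (r + 0.5) / height)) for r in range(height)]

w = row_weights(1000)
# Rows at the horizon (image middle) carry nearly full weight;
# rows at the zenith/nadir (top/bottom) carry almost none.
print(round(w[500], 3), round(w[0], 4))
```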
Additionally, the panoramic images were transformed into a stereographic projection, which enabled presentation of the results as so-called little planets (Figure 6b). In addition, the results can be presented independently for each landscape feature, such as low vegetation, medium vegetation, high vegetation (Figure 6c), buildings (Figure 6d), and sky. Pie charts quantify the contribution of individual landscape elements. However, they do not fully show which elements of the landscape are in the immediate vicinity of the observer and to what extent they may dominate the 360° FOV. The little planet charts, although not allowing quantitative landscape assessment on their own, enable one to determine the impact of particular landscape elements on the observer depending on their size and distance. A comparison of little planet charts from GSV images and renders from the Blender environment is shown in Figure 7. The QLA360 method allows for automatic visibility analysis for any number of locations in an area where LiDAR data are available. An exemplary analysis was carried out for 113 observer points located manually on streets and sidewalks, spaced approximately 25 m apart. The results can be presented in the GIS environment for all separated landscape elements (Figure 8). The results can be useful for comparing the visual aspects of different areas and for complementing the spatial planning process.
Imperfections resulting from the LiDAR point cloud can be corrected in the Blender environment. The next example shows the analysis carried out for an observer located near a lake (Figure 9a). A common problem with LiDAR is its poor representation of water (Figure 9b). This problem can be easily resolved by supplementing the CDSM with a polygon representing the water surface (Figure 9c).

Working in the 3D environment of the Blender software makes it easy to add new objects that can be considered in the visual impact analysis. It can be very helpful for planning new infrastructure, expanding green infrastructure [41], planning decision support systems for smart cities [42,43], and performing an environmental impact assessment of potential investments. For example, three potential buildings were added to the CDSM (Figure 10a) and assigned a different material of known RGB values (Figure 10c). The analysis shows the visual impact of the new buildings on the observers located down the road. The mean visibility of the new buildings for eight observer locations equals 5.77% of the 360° field of view, with a maximum of 9.71% for observer 3 (before (Figure 10b) and after (Figure 10e) building construction). A slightly different view is obtained by observer 7, where the buildings are visible from the side and mutually obscure each other (before (Figure 10c) and after (Figure 10f) building construction). The new buildings are not visible to observer 8, as they are obstructed by high vegetation. The average total buildings fraction increased from 7.09% to 12.8%.
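The visual-impact aggregation described above can be sketched as follows; given each observer's visible fraction of the new buildings (as a percentage of the 360° field of view), the summary reports the mean and the worst-affected location. The numbers below are illustrative, not the study's measured values.

```python
def impact_summary(visibility):
    """visibility: {observer_id: percent of FOV occupied by the new buildings}."""
    mean = sum(visibility.values()) / len(visibility)
    worst = max(visibility, key=visibility.get)
    return mean, worst, visibility[worst]

# Hypothetical per-observer visibility percentages.
vis = {1: 4.2, 2: 6.1, 3: 9.7, 4: 5.0}
mean, worst, pct = impact_summary(vis)
print(f"mean {mean:.2f}%, max {pct:.1f}% at observer {worst}")
# -> mean 6.25%, max 9.7% at observer 3
```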

Discussion
The QLA360 method is based entirely on the classified LiDAR point cloud and does not require any additional data or surveys. As it is highly observer-independent, it can be used in any geographic location in the world where LiDAR data are available and gives comparable results. The application of the LiDAR ASPRS classification standards as landscape classes allows one to avoid differences resulting from the application of different algorithms for classifying images in semantic segmentation. The methodology can be applied in any location, which enables a quantitative comparison of the landscapes of various areas. To date, LiDAR mapping remains an emerging and expensive technology, and most large-scale scans have been performed only once. In the near future, when more LiDAR scans are available, landscape changes will be easy to track using the proposed methodology.
Many recently developed landscape assessment methods use semantic segmentation [34,44-46], also called scene labeling, which refers to the process of assigning a semantic label (e.g., vegetation, buildings, and cars) to each pixel of an image. The QLA360 methodology yields the same kind of result in the form of classified images. The use of standardized classification allows one to avoid differences resulting from the use of different algorithms for classifying images. Image segmentation methods are able to define landscape features on photographs with high accuracy, but their wide applicability to landscape assessment can be limited by the high cost of field work for taking the photographs. The objective comparison of results can also be an issue, as these methods use different classification algorithms and there is no standard methodology for obtaining the photographs. The use of 360° panoramic images can eliminate the problem of subjective framing of photographs and could be a way of standardizing the input data for semantic segmentation algorithms. The GSV is a database that can be successfully used in landscape assessment, allowing a great number of input data to be obtained for analysis and eliminating the issue of taking photographs manually [34]. GSV image locations, however, do not cover every location, as they are biased toward streets at the moment. Our method allows one to generate one's own panoramic images, so it can be applied in any location regardless of the availability of GSV. Another problem to confront in semantic segmentation is the elimination of temporary objects (cars, people). The QLA360 method is based on the classified LiDAR point cloud, which is already filtered of temporary objects. Many different classifications of landscape features can be found in the literature (Table 3). The definition of classes depends on the input data used.
GSV images and photographs may contain more detailed classes, but their analysis is more difficult and time consuming. Our methodology, entirely based on LiDAR, will evolve along with the standards of point cloud classification. The methodology can be easily adapted to different LAS standards. The LAS 1.4 version additionally includes rail, road surface, wire, transmission tower, and bridge deck classes [50]. All these classes can be included in the analysis simply by assigning a new, unique RGB material to them and slightly modifying the script (Appendix B) to include the new classes in the pixel counting process. In addition to the LAS classification, the QLA360 allows one to implement additional classes by assigning new RGB materials to them. This can be used, for example, to differentiate buildings as public, private, historic, etc.
The proposed methodology can be useful for urban planners. Quantified results of visible landscape features can help maintain or restore the balance between natural and anthropogenic features and can help adapt the urban environment to people's landscape preferences [14]. The presentation of the results in the form of stereographic projections (little planet diagrams) can be useful in urban planning, as it shows the accumulation and direction of landscape features. Unlike with methods based on image manipulation and photomontage, a visual impact assessment can be performed for many observer locations without additional effort. The implementation of planned infrastructure into photographs is time-consuming and requires appropriate skills, so in practice no more than a few photomontages are performed from different perspectives [51]. The QLA360 method allows for integration of 3D models into the CDSM, so it is possible to automatically generate a render with the appropriate scale and perspective from any number of observation sites. The result of the visual impact assessment is automatically generated in the quantified form of percentages of the observer's field of view. This paper presents a simple example of implementing new buildings as basic 3D shapes, but it is possible to perform the analysis for highly detailed 3D models of designed structures. It can also be used to assess the impact on the landscape by implementing new 3D models of vegetation. The generated equirectangular panoramic renders can be directly displayed with a virtual reality headset, which can be helpful for subjective evaluation of visual impact [52].
The use of LiDAR data also has certain limitations. The method of constructing the CDSM, which consists of meshing and extruding the first returns of objects, is simple, fast, and user-independent and does not require any additional data, but it can generate geometry errors and inaccuracies, especially in the case of buildings and trees. The QLA360 method can be directly applied to already existing 3D city models with different Levels of Detail (LOD), which can improve the analysis results due to better building geometry representation [53,54], and to virtual environments made with game engines [55]. The reconstruction of individual trees is possible but requires very high-density LiDAR data collected from UAV platforms [56,57] or by Terrestrial Laser Scanning [58,59], or manual modeling of the vegetation in 3D modeling software. The versatility of the proposed methodology allows every object of the CDSM to be replaced with more detailed geometry when necessary.

Conclusions
The study shows great potential in extending GIS features with 3D graphics software and remotely sensed data for landscape assessment. The presented methodology uses a LiDAR point cloud as the only input data. It shows the great versatility of LiDAR technology, as it is used for generating 3D models and, at the same time, divides the generated objects by class. The classification of landscape features is directly adopted from the ASPRS LAS classification, which makes the method observer-independent. As presented, the QLA360 method can be easily applied in the landscape impact assessment of new investments. The following detailed conclusions were reached:
• The main advantages of the QLA360 method are observer-independence, a 360° field of view, automatic operation, scalability, and easy presentation and interpretation of results.
• The QLA360 method allows for quantitative analysis of landscape elements from the perspective of an observer in the 360° field of view based on classified LiDAR point clouds.
• A quantitative assessment of landscape features can be performed for any location without additional field studies.
• The use of GIS tools and 3D graphics software in the QLA360 method allows one to assess changes in the landscape caused by the introduction of new elements such as trees, buildings, and infrastructure.
• The method is based on processed LiDAR point clouds developed in accordance with ASPRS standards; therefore, it allows for standardization of the classification of landscape features, gives comparable results, and can be easily applied in practice.

Funding:
The publication was co-financed within the framework of the Ministry of Science and Higher Education programme as "Regional Initiative Excellence" in years 2019-2022, Project No. 005/RID/2018/19.

Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.