3D Point Cloud Data in Conveying Information for Local Green Factor Assessment

The importance of ensuring the adequacy of urban ecosystem services and green infrastructure has been widely highlighted in multidisciplinary research. Meanwhile, the consolidation of cities has been a dominant trend in urban development and has led to the development and implementation of the green factor tool in cities such as Berlin, Melbourne, and Helsinki. In this study, elements of the green factor tool were monitored with laser-scanned and photogrammetrically derived point cloud datasets encompassing a yard in Espoo, Finland. The results show that 3D point clouds can support the monitoring of local green infrastructure, including smaller elements in green areas and yards. However, point clouds generated by distinct means differ in their ability to convey information on green elements, and canopy cover, for example, can hinder this ability. Additionally, some green factor elements, such as those with a clear geometrical form, are more promising for 3D measurement-based monitoring than others. The results encourage the involvement of 3D measuring technologies for monitoring local urban green infrastructure (UGI), including at small scales.


Introduction
Ensuring the adequacy and quality of urban ecosystem services and green infrastructure has been widely highlighted in the urban land use and planning literature in recent years. In an urban setting, ecosystem services distinguish between nature's functions in production, regulation, support, and cultural services, and also recognize nature's intrinsic function [1]. Like ecosystem services, urban green infrastructure (UGI) has become a central concept in land-use planning and policy [2]. It refers to both natural and artificial elements of nature that are designed or managed to provide ecosystem services [3]. UGI covers, for example, parks, public green space, allotments, green corridors, street trees, urban forests, roof and vertical greening, and private yards [4].
Urbanization and densification of housing are global phenomena [5]. Apart from the structural change, urban living also seems to be a matter of dwelling preferences. At the same time, observation of and movement in nature have been shown to play a part in housing desires and proven to enhance human well-being and health [6][7][8], which the global circumstances under COVID-19 continue to underline [9][10][11]. This poses a challenge for the densification of cities and puts pressure on preserving and promoting the natural environment as much as possible in densely populated areas. Thus, recent studies on urban greening have pointed out the environmental justice and importance of small-scale solutions, as they enable access to nature in cities more widely than large-scale and more concentrated urban green projects, while most likely being easier to implement [12,13].
Yards have, until recently, played a minor role in the scope of UGI assessments, even if their importance is similar to that of other urban green areas [14,15]. Their proportion in the urban morphology is usually not insignificant, either; according to the survey of Loram et al. [16], the urban area covered by domestic yards ranged from 21.8% to 26.8% in six studied cities in the UK. According to Cameron et al. [4], there are significant differences in both the form and management of yards which radically influence their benefits; that is, their quality affects their impact on ecosystem services, such as carbon sequestration and storage potential [15]. According to Clark et al. [17], in many cities, private trees dominate tree canopy cover. As densification often means fewer private trees, it might lead to diminishing urban tree canopy cover. By acknowledging the role of private land and yards, the discussion on UGI expands into the private realm [18].
Awareness of the importance of local UGI and the requirement for its comprehensive planning has led to the implementation of the green factor (i.e., green area ratio, green space factor) tool in cities such as Berlin in 1997 [19], Helsinki in 2014 [20], and Melbourne in 2020 [18]. The purpose of the green factor is usually to ensure a sufficient amount of total green [21], as well as quality, in the planning of a new district [22] by generating a numeric value for the planned and remaining green elements of the area. In the case of Helsinki's green factor, for example, each element is given a multiplier, which is then used to calculate the value of the plan. This way, it is possible to compare distinct plans and to assess how sustainability goals and targets are achieved. The green factor has so far received limited attention in research, and only a few practical application experiences are described in the literature [20,21]. However, in Helsinki, for example, recent political debate has pointed out the necessity of extending the use of the green factor tool to urban infill projects, instead of limiting its use to new area development [23]. This puts pressure on developing the tool further, and on evaluating its possibilities for assessing already existing vegetation, instead of using it only as a regulative tool in the planning phase.
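The multiplier principle described above can be sketched as a simple computation. The element classes, multipliers, and areas below are illustrative placeholders, not values from Helsinki's or any other city's actual tool.

```python
# Sketch of a green factor calculation following the general principle
# described above: each green element type has a multiplier, and the
# weighted element areas are summed and divided by the plot area.
# All multipliers and areas are hypothetical examples.

def green_factor(elements, plot_area_m2):
    """elements: list of (area_m2, multiplier) pairs for the plan."""
    weighted = sum(area * mult for area, mult in elements)
    return weighted / plot_area_m2

plan = [
    (120.0, 1.0),   # lawn
    (40.0, 1.5),    # shrubs
    (25.0, 2.0),    # preserved trees
]
score = green_factor(plan, plot_area_m2=500.0)
print(round(score, 3))  # 0.46
```

Because the result is normalized by plot area, plans of different sizes can be compared against a common target value.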
Three-dimensional point clouds generated with laser scanning and photogrammetry allow monitoring of physical properties and visually detectable elements of the environment. 3D point clouds are applied in natural resource management and forestry [24][25][26], disaster management [27,28], landscape monitoring and planning [29,30], as well as in the monitoring of individual urban buildings and urban scenery [31], urban trees [32,33], and streetscapes [34,35]. However, especially in 3D city modeling, the emphasis has traditionally been on buildings, rather than on small-scale natural environments, yards, and their elements. Applications in forestry research have been widely studied from both structural [36][37][38] and individual tree points of view [39][40][41][42][43], including forest inventory and change prediction [44][45][46]. Studies in forestry have also specified levels of detail for a single tree model [47]; however, methods in the field mainly concentrate on tree attributes. According to Casalegno et al. [48], Alavipanah et al. [49], and Feltynowski et al. [50], until recent years, laser scanning and photogrammetry-aided methods had been implemented in surprisingly few UGI assessment applications, even though 2D-based applications, such as satellite data and mapping, are diverse.
Existing point cloud-based applications for UGI include, for example, quantitative metrics to estimate its overall volume and to demonstrate its spatial and volumetric heterogeneity. Casalegno et al. [48] demonstrated a voxel (volumetric pixel)-based assessment of UGI from waveform airborne lidar, covering three different structures: grass, shrubs, and trees. The study's outcomes differed from evaluations based on other remote sensing data. A similar result has been achieved with the so-called green view index (GVI), a method usually utilizing panoramic photographs to assess the greenery of urban views, enabling local vertical assessment of the views. Larkin and Hystad [51] noted that the green views did not always correlate with the satellite-based normalized difference vegetation index (NDVI). Hence, UGI research could benefit from digital vertical (3D) data to supplement the hegemonic role of horizontal (2D) data.
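The voxel-based volume idea can be illustrated with a minimal sketch: points classified as vegetation are binned into a regular 3D grid, and the occupied voxels approximate green volume. The point coordinates and voxel size here are illustrative; the cited study's waveform lidar processing is considerably more involved.

```python
# Minimal voxel-based green volume estimate: bin vegetation points into
# a regular grid and count occupied cells. Coordinates are in meters.
from math import floor

def voxel_volume(points, voxel_size):
    """points: iterable of (x, y, z); returns occupied-voxel volume in m^3."""
    occupied = {
        (floor(x / voxel_size), floor(y / voxel_size), floor(z / voxel_size))
        for x, y, z in points
    }
    return len(occupied) * voxel_size ** 3

pts = [(0.1, 0.2, 0.3), (0.15, 0.22, 0.31), (1.2, 0.1, 0.4)]
print(voxel_volume(pts, voxel_size=1.0))  # 2.0 -> two occupied 1 m voxels
```

The voxel size trades off detail against robustness to point density variation, which matters when comparing clouds from different sensors.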
Our aim is to develop a 3D point cloud-based assessment of local UGI. We assess how well the green elements central to green factor assessment are visible and detectable in 3D point cloud data, focusing on the local scale. The idea is therefore not to include the existing criteria of the green factor in the study set as such, as they are, in many ways, bound to the planning phase, as well as to the two-dimensional information used in planning documents. Instead, the idea is to point out aspects and possibilities that could be useful in future 3D measuring-assisted assessment of local green infrastructure. Hence, our approach advocates the use of digital 3D data for built and existing local green elements, in contrast to the typical approach in which the green factor tool is used mainly for planning and as 2D information.
More specifically, our objective is to explore the distinct point cloud data sets' ability to convey information on green elements, comparing them qualitatively in terms of geometry and appearance, and further with regard to the details and completeness of various green elements. Finally, the results are discussed in terms of the development of 3D point cloud-based assessments for local UGI.

Study Site, Measurements, and Data Sets
The study field encompasses the yard of Träskända, an 1890s manor located in southern Finland (60.2370° N, 24.7090° E), 18 km from the Helsinki city center. Currently owned by the city of Espoo, the Träskända manor and its yard are part of a nature reserve and park [52]. The diversity of its green elements makes the manor yard a practical study field for monitoring the use of point clouds for the purposes of the green factor, since many of the green factor elements can be found in the well-managed park area.
Reference data were gathered during two field inspections in August 2020. Photographs taken with an iPhone 6, together with field notes, were utilized as reference material and in the study design. These were also used for including and excluding elements from the analysis (see Section 2.2). The characteristics of the point cloud data sets were explored visually, acknowledging the special characteristics of the study field, that is, the elements that were located only under the canopy.

Tarot T960
The Tarot T960 hexacopter is an unmanned aircraft system (UAS). In its basic configuration, the UAS is equipped with a 3-axis gimbal-stabilized Sony A7R 36.3-megapixel digital single-lens mirrorless (DSLM) camera fitted with a Zeiss Loxia 21 mm f/2.8 lens, resulting in a field of view (FoV) of 91° [53]. The drone system was configured for a lightweight survey mission with a half-capacity battery, the simulated hover flight time being 19 min.
The flight was planned with Mission Planner (version 1.3.68, build 1.3.7105.26478) as a cross-grid oblique imaging survey, and the flight path length was 2771 m. The survey flight altitude was 61 m according to Agisoft Metashape Professional (version 1.6.5), and the ground sample distance (GSD) was 12 mm/px. The take-off and landing were operated manually, while the rest of the flight was controlled by the flight control unit (FCU). The survey was conducted as a single flight with a 12 min flight time. The resulting 354 images were processed with Agisoft Metashape Professional to form a georeferenced point cloud with a control point root mean square error (RMSE) of 12 mm. Georeferencing was done using five ground survey global navigation satellite system (GNSS) control points. The survey area was 3.45 hectares. Figure 1 shows the Tarot T960 survey flight plan and the resulting flight path. The planned path is shown in yellow, while the red path illustrates the realized flight path.
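The reported GSD follows from flight altitude, focal length, and sensor pixel pitch. A rough nadir-case check can be sketched as below; the Sony A7R sensor dimensions are assumed from public specifications, and terrain height and oblique imaging explain why the result only approximates the reported 12 mm/px.

```python
# Nadir-case GSD: GSD = altitude * pixel_pitch / focal_length.
# Sensor width and image width for the Sony A7R are assumed values
# (~35.9 mm, 7360 px); real surveys deviate from this approximation.

def gsd_mm(altitude_m, focal_length_mm, sensor_width_mm, image_width_px):
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return altitude_m * 1000 * pixel_pitch_mm / focal_length_mm

# Tarot T960 payload: 21 mm lens at a 61 m survey altitude
print(round(gsd_mm(61, 21.0, 35.9, 7360), 1))  # 14.2 (~14 mm at nadir)
```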

DJI Phantom 4 Pro+
DJI Phantom 4 Pro+ is an entry-level professional quadcopter equipped with a 3-axis gimbal-stabilized 20-megapixel FC6310 camera. The camera is equipped with a fixed 8.8 mm lens, resulting in a FoV of 84° [54]. The drone system was flown in a standard configuration with a hover time of approximately 30 min. The flight was conducted in three manually operated parts with a total flight time of 61 min (75 min including take-offs, landings, and changing the battery). The first two flights were done using oblique imaging, while the third flight was done using nadir images. The survey flight altitude was 32 m according to Agisoft Metashape Professional (version 1.6.5), and the GSD was 7.8 mm/px. The resulting 503 images were processed in Agisoft Metashape Professional to form a georeferenced point cloud with a control point RMSE of 18 mm. Georeferencing was done using five ground survey GNSS control points. The survey area was 0.94 hectares. Figure 2 shows the DJI Phantom 4 Pro+ orthophoto with camera stations illustrated in white. Data are combined from three manual survey flights.


Leica RTC360
Leica RTC360 is a time-of-flight-based terrestrial laser scanner. It has a FoV of 360° × 300° and a range accuracy of 1.0 mm + 10 ppm. The maximum scanning range of the sensor is 130 m, and the data acquisition rate is 2,000,000 points/s. Additionally, the scanner has three body-mounted, high dynamic range (HDR) cameras with a resolution of 4000 × 3000 px for colorization of the point cloud and a visual inertial system for real-time registration purposes. The FoV of the camera system matches that of the scanner [55]. The test site was measured with two RTC360 laser scanners simultaneously, totaling 77 scans during a 5 h period. A scanning resolution of 6 mm at a 10 m distance was used with the "double scan" option enabled to reduce the level of noise in the measurements. The scans were processed, registered, and georeferenced with Leica Cyclone Register 360 (version 2020.1.0, build R17509) software, resulting in an absolute mean error of 12 mm. Figure 3 shows the Leica RTC360 point cloud with the scanner stations visualized in red.
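The scanning resolution of 6 mm at 10 m is an angular setting, so the spacing between neighboring points grows roughly linearly with range. A small sketch of the implied spacing (geometry only; incidence angle and beam divergence are ignored):

```python
# Point spacing implied by an angular scanning resolution specified as
# a spacing at a reference range: spacing scales linearly with distance.

def point_spacing_mm(range_m, spacing_at_ref_mm=6.0, ref_range_m=10.0):
    return spacing_at_ref_mm * range_m / ref_range_m

for r in (5, 10, 30, 130):
    print(r, "m ->", point_spacing_mm(r), "mm")
# At the 130 m maximum range, neighboring points are ~78 mm apart.
```

This is one reason nearby small elements (e.g., perennials) resolve much better than distant ones in terrestrial scans.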


GeoSLAM ZEB Revo RT
GeoSLAM ZEB Revo RT is a hand-held mobile laser scanner that uses simultaneous localization and mapping (SLAM) to locate itself in the environment in real time. The ZEB Revo RT uses a Hokuyo UTM-30LX laser sensor, which rotates continuously around the front-pointing axis. The FoV of the sensor is 360° × 270° [56]. The relative accuracy of the scanner is 1-3 cm, the maximum range is 30 m, and the data acquisition rate is 43,200 points/s [57]. The colorization of the point cloud is executed with an integrated camera that has a FoV of 120° × 90° [58].
The study field was measured with five independent measurements in 30 min (Figure 4). These were processed in GeoSLAM Hub (version 6.1) with default settings and colorized with the video of the integrated camera. The separate measurements were first merged in GeoSLAM Hub without colors using the merge tool. After merging, the colorized point clouds were registered based on the GeoSLAM Hub registration in CloudCompare with an average error of 1.28 cm. Then, the point cloud of each test site was independently matched to the same coordinate system as the Tarot T960 point cloud with an iterative closest point (ICP) calculation in CloudCompare (version 2.11).


Green Factor Elements
The inspected green factor elements were derived from the concept of the Finnish green factor [21], also applied later in international cooperation [59]. The green factor concept demonstrates the complexity of urban natural environments, as it includes a listing of green elements which contribute to and are essential for the quality of the UGI. We selected the suitable parts of the green element listing presented by the iWater project [60] (iWater project. Green Factor tool. Available online: https://www.integratedstormwater.eu/sites/www.integratedstormwater.eu/files/final_outputs/green_factor_tool_protected.xlsm, accessed on 1 September 2021). We started by dividing the green elements into visible and non-visible (or intangible) elements. Subsequently, we included the elements whose above-ground visibility makes it theoretically possible for them to be detected via point clouds. The elements that require both information from above and underground characteristics (e.g., soil) to be identified in a proper manner were excluded (i.e., some of the stormwater management solutions). Further, according to the observations during the field visits and photographs taken in August 2020, the elements that were not found in the study area were excluded from the element list. Thus, the visually detectable elements existing in the study area were included in the analysis. We also needed to make some additional adaptations to the green element listing, as the original green factor tool distinguishes between preserved vegetation and soil and planted/new vegetation. As we did not assess the elements in plans but as already existing vegetation, we merged these two classes in the final analysis. The included green elements are described in Table 1, and the excluded elements in Appendix A.


Study Design
In this study, we assessed how well the elements central to green factor assessment were visible and detectable via 3D point cloud data. We combined the concept of the green factor and the means of 3D measuring, namely, photogrammetry and laser scanning. In our approach, we examined monitoring the existing local green infrastructure with semi-automated digital means, focusing on the green elements that are not usually included in urban assessment with 3D point clouds, but which could benefit green factor assessment.
Based on a qualitative inspection, the point clouds of the different sensing methods were compared in terms of their ability to convey information on the green elements and their characteristics. In 3D visualization and modeling studies, alongside geometric representativeness, appearance has long been recognized as an important variable for the qualification of a 3D visualization [61][62][63]. Appearance is non-geometric information, defined here as the visual comprehensiveness and informativeness bound to the interplay of the colors and surface of the object. Since it was possible that an element had an exact geometry but non-informative coloring, or vice versa, the geometry and appearance of the element were distinguished. Appearance is represented by the RGB color information captured from the surface of the objects in the scene by camera sensors.
In addition, we found it essential to evaluate the capability of the point cloud data to convey information both on quality and details, as well as on the volume and/or amount of green elements. For this, we analyzed green elements by rating their (1) details (i.e., especially the green elements' characteristics and distinctiveness) and (2) completeness (i.e., especially the green elements' volume and/or amount). As with geometry and appearance, details and completeness were distinguished; elements such as the flowers of a shrub may have been well-identifiable from the point cloud while the size of the shrub was still difficult to determine, or vice versa.
We analyzed the quality of geometry by rating each data set according to the height ramp colored point cloud. This way, the geometric differences between the point clouds could be highlighted. In turn, we analyzed the appearance by rating each data set according to its RGB colored point cloud (i.e., the RGB colors retrieved from the photographs generated with the respective measuring system). To conclude, we ended up with four categories (Figure 5), scored according to a four-point (0-3) grading scale (Table 2). By identifying these parameters, the study seeks to broaden the possibilities of 3D-based assessment of UGI.

Table 2. The criteria for the ability of the point cloud data to convey information on the tested parameters.

No ability (rating 0): The geometry of the element, or visual information, such as the color of the element, is missing or not detectable in the data. Data allows no evaluation of the element.

Low ability (rating 1): Traces of the element's form, or minimal visual information, are detectable. Other sources are needed to monitor the element.

Moderate ability (rating 2): The limits of the element are somewhat detectable, or visual information exists but is at least partly incomplete. Data allows moderate but no proper evaluation of the element.

Good ability (rating 3): The limits of the element are mostly clear, or visual information is mostly comprehensive. Data allows the monitoring of the element.
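How such a grading might be tabulated can be sketched as follows: each green element receives a 0-3 rating for each of the four categories (details and completeness, each for geometry and appearance). The element names and scores below are illustrative, not values from the actual Tables 3 and 4.

```python
# Illustrative tabulation of the four-point grading scale across the
# four rated categories. All scores here are hypothetical examples.

RATINGS = {0: "No ability", 1: "Low ability",
           2: "Moderate ability", 3: "Good ability"}

# (geometry details, geometry completeness,
#  appearance details, appearance completeness)
element_scores = {
    "large tree": (3, 3, 3, 2),
    "lawn": (1, 2, 2, 2),
}

for element, scores in element_scores.items():
    labels = [RATINGS[s] for s in scores]
    print(element, "->", labels, "mean:", sum(scores) / len(scores))
```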
The point clouds were visualized and compared with CloudCompare [64] (CloudCompare. 3D point cloud and mesh processing software. 2021. Available online: http://www.cloudcompare.org/, accessed on 1 September 2021). To enhance the perception of depth in assessing the point cloud data, the height ramp colored point cloud was visualized with an eye-dome lighting (EDL) shader filter, a real-time non-photorealistic shading technique that enhances very small features on blank clouds [65]. The point clouds were visualized with a fixed point size of 2.
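The height ramp coloring used for the geometry inspection maps each point's elevation linearly onto a color ramp. A minimal sketch with a simple blue-to-red ramp (CloudCompare's ramps and its EDL shader are more sophisticated):

```python
# Height-ramp coloring: map z in [z_min, z_max] linearly to an RGB
# triple running from blue (low) to red (high).

def height_ramp_rgb(z, z_min, z_max):
    t = (z - z_min) / (z_max - z_min) if z_max > z_min else 0.0
    t = min(1.0, max(0.0, t))  # clamp points outside the range
    return (int(255 * t), 0, int(255 * (1 - t)))

points_z = [0.0, 5.0, 10.0]
print([height_ramp_rgb(z, 0.0, 10.0) for z in points_z])
# [(0, 0, 255), (127, 0, 127), (255, 0, 0)]
```

Decoupling this elevation-based coloring from the captured RGB colors is what allows geometry and appearance to be rated separately.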


Results
The quality of the parameters, that is, details in the context of appearance and of geometry, and completeness in the context of appearance and geometry, was tested for all the included green elements with all the point cloud data sets. The results of the ratings are given in Table 3 for the parameters on geometry and in Table 4 for the parameters on appearance. The results are further explained in the text. The visual comparisons for the green elements are presented in Figures 6-14. In all the figures, the point clouds are denoted as follows: (a) Tarot T960 for geometry, (b) DJI Phantom 4 Pro+ for geometry, (c) Leica RTC360 for geometry, (d) GeoSLAM ZEB Revo RT for geometry, (e) Tarot T960 for appearance, (f) DJI Phantom 4 Pro+ for appearance, (g) Leica RTC360 for appearance, and (h) GeoSLAM ZEB Revo RT for appearance.
Trees of different sizes were generally well-visible in all the point clouds (Figures 6-8); however, the appearance of the trees was of somewhat poor quality in the RGB colored point cloud generated by GeoSLAM ZEB Revo RT. For the very small tree, the point cloud generated with the Tarot T960 could not provide a complete form (as the trunk was missing), which eventually also affected the appearance.
The location of the investigated green elements affected the results; the elements located under the tree canopy were generally less visible in the data than elements located in the open area, as shown in Figures 8,10 and 11.Apart from the location in the study field (under the canopy vs. open area), the form of the element affected the results; sand surfaces and lawns were given lower scores than elements with a more geometric form.For those elements, laser scanning-based solutions could more likely provide a well-presented geometry; however, the results show the strengths in appearance of the UAV-based solutions in conveying information on distinct visual details in elements, such as perennials (Figures 12 and 13).
As shown for blooming shrubs (Figure 14), the laser scanning-based data were more likely to cover the geometry of the element as a whole.However, for the appearance, the RGB visualization of the colors was not as informative with GeoSLAM ZEB Revo RT (Figure 14d,h).RGB visualization would be essential in defining the blooming element of the shrub.

Discussion
By implementing a case study in Espoo, Finland, our aim was to support the monitoring of existing UGI at a local scale. We tested the suitability of distinct 3D point cloud data sets by exploring the detectability of visible green elements. In the following, we summarize the most interesting results for all the tested green elements.
In the case of large trees, the appearance of the top canopy was of somewhat lower quality in the laser scanning-derived point cloud data due to the perspective of the terrestrial sensors (Figure 6). The small trees were captured almost equally well with all the tested sensing methods; however, the appearance rating was slightly lower for the laser scanning-derived point cloud data (Figure 7). In the case of very small trees, the higher flight altitude reduced the Tarot T960's capacity to capture the minor geometries of the elements, leaving the trunks of the trees missing (Figure 8). The large shrub was located under a large tree canopy in the test area, which negatively affected both the appearance and geometry of the element in the UAV-based point cloud data sets. However, the appearance of the large shrub was also generally low in the laser scanning-derived point cloud data sets, while its geometry was generally good (Figure 9). The results with natural vegetation (Figure 10) and dead wood (Figure 11) were similar to those for the large shrub due to a similar location under a large tree canopy, even though the elements themselves were of different size and geometry.
The tested pavements, including grass stones and sand surfaces (Figure 12), were not detectable in terms of geometry in any of the point cloud data and were thus rated with the lowest possible scores. However, from the appearance point of view, and due to the RGB coloring, these elements were detectable to varying degrees. The geometry of the lawn showed slightly better results in all the point cloud data sets (Figure 12). The geometry of the perennials, perennial vines (Figure 12), and plants with impressive blooming (Figure 13) received moderate to good ratings in the photogrammetrically derived point clouds, and good ratings in the laser scanning-derived point clouds. For these elements, the appearance was somewhat better with the photogrammetrically derived point clouds, except for the plants with impressive blooming, for which the Leica RTC360 produced equally good appearance ratings. The UAV-based point clouds had issues with the flowering shrub's geometry (Figure 14). Further, the flowering shrub was the only green element for which a laser scanning-derived point cloud data set showed the best appearance; however, the differences in the results were small.

Suitability of Point Cloud-Based Information for the Purposes of Monitoring Local Green Elements
The results show that the point clouds originating from different systems have differing abilities to convey information on green elements. Looking at the mean results, there are clear differences in how well the point clouds were able to convey information; in some cases, there were remarkable differences even within single elements, as shown in the results for natural ground vegetation, semipermeable surfaces, and perennials (Figure 15). To highlight the best observed ability of the point clouds to convey information on green elements, the top results achieved with any of the point cloud data sets are shown in Figure 16. For this, it was sufficient that a green element quality parameter received the top score of 3 with at least one of the point cloud data sets. We can conclude that the differences between the green elements were relatively small when looking at the best-performing results from any sensor system. Five of the thirteen green elements could be assessed with the point clouds, as they were evaluated to have a good ability to convey information on them (full scores). Five elements received less than a full score but at least 2.5 points in the mean, indicating that point clouds have a moderate ability to convey information on them. Surface-like elements with textures similar to one another, that is, grass stones and sand surfaces, as well as the dead wood located within the natural ground vegetation and under the canopy, were given less than 2 points in the mean, meaning that point clouds have only a low ability to convey information on them.
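The two summary views described above, the mean over all sensor systems (Figure 15) and the best result from any single system (Figure 16), can be sketched in a few lines. The sensor names follow the study, but the scores below are hypothetical placeholders, not the measured results:

```python
# Illustrative aggregation of element-quality scores across sensor systems,
# mirroring the mean (cf. Figure 15) and best-of-any-sensor (cf. Figure 16)
# summaries. Scores are hypothetical, not the study's measured values.

SCALE_MAX = 3  # 3 = good, 2 = moderate, 1 = low, 0 = no ability

# element -> {sensor system: score on the 0-3 scale}
scores = {
    "large tree":   {"RTC360": 3, "ZEB Revo RT": 2, "Phantom 4 Pro+": 3, "Tarot T960": 3},
    "sand surface": {"RTC360": 1, "ZEB Revo RT": 0, "Phantom 4 Pro+": 1, "Tarot T960": 0},
}

def mean_score(per_sensor):
    """Mean ability over all sensor systems for one element."""
    return sum(per_sensor.values()) / len(per_sensor)

def best_score(per_sensor):
    """Top score achieved by any single sensor system for one element."""
    return max(per_sensor.values())

for element, per_sensor in scores.items():
    print(element, round(mean_score(per_sensor), 2), best_score(per_sensor))
```

Note how the best-of-any-sensor view compresses differences between elements, as observed in the study: an element only needs one well-suited sensor to reach the top score.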

Differences of Data Acquisition Methods
The results with the point cloud data sets were compared in terms of conveying information on geometry and appearance (Figure 17). According to the results, differences are found not only between UAV photogrammetry and laser scanning-based solutions but also between the UAV photogrammetry methods, as shown in the results for very small trees, and between the laser scanning methods, as shown in the results for flowering shrubs and lawns. For geometry, the Leica RTC360 generated the best mean results, and for appearance, the DJI Phantom 4 Pro+ did. The differences in the latter are explained by the generally low performance of the GeoSLAM ZEB Revo RT in the appearance qualities.

The DJI Phantom 4 Pro+ generated more detailed results than the Tarot T960. This difference in accuracy is explained by the flight altitude, as the GSD of the DJI Phantom 4 Pro+ was 7.8 mm/px and that of the Tarot T960 was 12 mm/px. Similarly, the Leica RTC360 generally resulted in more detailed and comprehensive point clouds than the GeoSLAM ZEB Revo RT. This was expected, as terrestrial laser scanning has been shown to enable high-quality point clouds with high accuracy (0.1-5 mm), high precision (0.6-4 mm), and a high level of detail [66]. Prior studies show that the accuracy of SLAM techniques is at the 1-3 cm level [67-70]. The strengths of UAV photogrammetry-derived point clouds are related to the appearance parameters, and a lower flight altitude generally seems to produce better results when many of the targets are small-scale elements, as many of the green factor elements are. The strengths of laser scanning-generated point clouds are related to the quality of geometry [66]. In previous studies, the GeoSLAM ZEB Revo RT has been noted to produce lower-quality RGB-colored point clouds [71], which is seen in the results as weaknesses in terms of appearance. To conclude, the accuracy of measuring local and small-scale UGI can be improved by utilizing terrestrial laser scanners and UAV data from lower flight altitudes.
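The altitude dependence of photogrammetric detail can be made concrete with the standard pinhole ground sample distance (GSD) relation. The camera parameters and flight altitudes below are illustrative assumptions, not the exact survey settings of this study; they only demonstrate that GSD scales linearly with flight height:

```python
def ground_sample_distance(pixel_pitch_m, focal_length_m, altitude_m):
    """GSD: ground footprint of one image pixel, in metres per pixel.
    Standard pinhole relation: GSD = pixel_pitch * altitude / focal_length."""
    return pixel_pitch_m * altitude_m / focal_length_m

# Hypothetical camera: 2.4 um pixel pitch, 8.8 mm focal length; altitudes
# of 30 m and 45 m are likewise illustrative, not the study's flight plans.
low = ground_sample_distance(2.4e-6, 8.8e-3, 30.0)   # lower flight
high = ground_sample_distance(2.4e-6, 8.8e-3, 45.0)  # higher flight

# Lower altitude -> proportionally smaller GSD, hence finer ground detail.
assert low < high
print(f"{low * 1000:.1f} mm/px vs {high * 1000:.1f} mm/px")
```

Because the relation is linear in altitude, halving the flight height halves the GSD, which is consistent with the finer detail observed in the lower-flying platform's data.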

Prospects for the Point Cloud-Based Evaluation of the Local Existing Green Factor
Finally, we applied the test results to estimate the future capability of point cloud-based evaluation of existing green infrastructure through the green factor. The tentative estimations are presented in Appendix B. According to the results, we argue that elements with a clear geometric form have good potential to be assessed with the support of point cloud data [72]. The underground (non-visible) and surface-like elements, such as pavements, are likely to be assessable only together with additional information sources. However, yards are individually structured natural environments, which might pose challenges for semi-automated assessments [73]. For future use, it is important to note that, in some cases, point cloud data sets can offer even more detailed and comprehensive information on the elements than is currently defined in the green factor tool. The green factor tool used as a reference in this study defines the quantity of elements mostly in square meters or pieces. This is logical, as the actual green factor tool is intended to be used in the planning phase. However, for the assessment of the existing green factor, the tool could be developed to include the qualities of the existing elements in vertical strata and in volume [42]. In such a case, point cloud-based evaluation of green effectiveness can enable a geometrically comprehensive assessment of UGI, including the vertical dimension.
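As a rough illustration of how a green factor score aggregates element quantities, the sketch below divides weighted element areas by the site area. The weights, areas, and the single-ratio formula are hypothetical simplifications for illustration only; the actual Helsinki tool defines its own element factors and calculation rules:

```python
# Simplified green factor sketch: weighted element areas divided by total
# site area. Weights and areas are hypothetical; real tools (e.g., the
# Helsinki green factor) define their own factors and rules.

def green_factor(elements, site_area_m2):
    """elements: list of (area_m2, weight) pairs for the green elements."""
    weighted = sum(area * weight for area, weight in elements)
    return weighted / site_area_m2

yard = [
    (120.0, 1.0),  # lawn
    (40.0, 1.5),   # large trees (crown projection area)
    (25.0, 0.3),   # semipermeable pavement
]
score = green_factor(yard, site_area_m2=300.0)
print(score)
```

A point cloud-based workflow would supply the per-element quantities (and potentially volumes or vertical strata, as discussed above) that feed such a calculation.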

Remarks on the Study Design and Future Research
The chosen perspective plays a major role in the differences between aerial and terrestrial data capture. Terrestrial sensors are usually utilized to capture features located on, or near, the ground surface, while UAV footage tends to be used for bird's-eye purposes, such as capturing tall elements like large trees [73]. Another aspect bound to the chosen perspective is the distance from the sensor to the subject; laser scanners are designed to operate within a certain range window that needs to be accounted for. Generally, in surveys of built environments, UAVs are applied to capture an overview of the area and terrestrial methods are used for close-range targets. However, at least in theory, both methods can be used for both close- and long-range purposes. With terrestrial methods, it is easier to cover the area under the canopy, but it is also possible to fly a UAV under the canopy [74]. This is not yet a typical way to conduct UAV-based surveying, and in our study, we chose distinct methods that highlight their respective strengths and demonstrate how these methods complement each other. To conclude, for monitoring elements of differing sizes, both terrestrial and aerial perspectives are beneficial.
Even though comparing the time consumption of the different methods was not an objective of our study, it is worth noting that it took five hours to gather the terrestrial laser scanning data, while the Tarot T960 survey mission took 12 min. The ZEB Revo RT offered relatively fast data capture, covering the test areas in 30 min. Finally, the manually operated DJI Phantom 4 Pro+ flights took 75 min combined. It should be noted that the DJI Phantom 4 Pro+ could have been flown higher, like the Tarot T960, to cover the area more quickly but at a lower resolution. This is an important aspect to consider in operative surveying; higher-resolution data capture generally takes more time but might be purposeful in some cases. Aerial methods can be scaled easily, and as we have shown, flight distance had a relatively large impact on the results [75]. In terrestrial methods, on the other hand, scaling is more limited because of occlusion induced by the perspective [76]. The final aspect to consider is the amount of detail needed for the given task, as the sensor should be chosen accordingly; it might not be cost-efficient to gather high-resolution data if a lower resolution can provide the information needed. Thus, for careful method design based on the individual characteristics of the targeted yard or similar environment, prior field inquiries are advised where possible.
In our study, we analyzed the green elements manually from the point clouds. In the future, the possibility of using machine learning to classify the distinct green elements, or even species, is of great interest. The fully automated analysis of point clouds has been studied in remote sensing and computer vision research for numerous years [77]. Both deep learning and machine learning techniques [78] have been tested and deployed in point cloud data analysis, leading to promising results in urban point cloud classification via algorithms such as random forest [79] and presence and background learning [80], and also via deep-learning architectures such as SPGraph [81]. Quantitative surveying of tree attributes, such as canopies and stems, has already been widely studied for forestry, e.g., [35-37,82,83]. For green factor-like evaluations, questions arise around quality as well as the variety of objects and species. In addition to green effectiveness evaluations, there are also other opportunities for the use of point cloud-based information in the management and planning of local UGI, including private and semi-private entities such as yards [14-18]. One possibility is to manipulate point cloud-based models to represent future landscape scenarios in design and planning [28].
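Before a full machine-learning pipeline, the vertical-strata idea can be illustrated with a rule-based stand-in: stratifying points by their height above ground into coarse vegetation layers. The thresholds and layer names below are hypothetical; in practice, a trained classifier (e.g., a random forest on geometric features, as cited above) would replace these hand-set rules:

```python
# Rule-based stand-in for point-cloud classification: stratify points by
# height above local ground into coarse vegetation layers. Thresholds are
# hypothetical; an ML classifier would replace these hand-set rules.

def stratify(height_above_ground_m):
    if height_above_ground_m < 0.05:
        return "ground/surface"
    if height_above_ground_m < 0.5:
        return "low vegetation"  # e.g., lawns, perennials
    if height_above_ground_m < 3.0:
        return "shrub layer"
    return "canopy"

heights = [0.01, 0.3, 1.8, 7.5]  # heights above ground (m) of sample points
layers = [stratify(h) for h in heights]
print(layers)
```

Even this crude stratification shows how point clouds expose the vertical dimension that 2D green factor tools currently omit; per-layer point counts or volumes could then feed a strata-aware assessment.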
The literature on point cloud-based assessment of single urban green elements, apart from trees, is still limited [42-44]. Therefore, the criteria for the qualities of the point clouds, as well as the respective methods, should be further studied in terms of UGI assessment. In this study, we utilized geometry and appearance, with a distinction between details and volume/amount. As the monitoring of UGI is broadened to detailed vertical strata, the criteria and targets of environmental assessment tools also need to be adjusted and further discussed. Originally, the green factor is a tool intended for evaluating 2D plans, which does not yet open up all the possibilities that a three-dimensional approach offers to the management of UGI. Consequently, as stated in recent research [42-44], in addition to technical development, a shift in thinking is required. In the sustainable planning of UGI, the vertical aspect of local geospatial information offers many possibilities that have not yet been utilized to their full potential. Based on the results of this study, a point cloud-based green factor calculation is a promising approach that could be reinforced with machine- and deep-learning techniques in future studies.

Conclusions
Detailed evaluation of local UGI is a potential tool for maintaining sustainable urban environments in the era of consolidating cities. With remote sensing methods, prior research has been oriented toward large-scale estimation of UGI. Studying the environment at a high level of detail with photogrammetry and laser scanning is a well-established research practice; however, apart from trees, it is currently less applied in estimations that include single urban green elements, such as the green factor. Terrestrial laser scanning, mobile laser scanning, and UAV photogrammetry were applied for 3D mapping of a yard environment in high detail. The resulting point clouds were compared in their ability to convey information on urban green elements, concerning both their geometry and appearance. While there were differences in how successfully the distinct sensor methods presented different green elements, the green elements themselves were also not equally well captured in the data. This was seen especially with the surface-like elements and with the elements located under the canopy or on the ground within natural vegetation. Thus, while point clouds appear to be a potential tool for future estimations of the existing green factor of a single area, the individual characteristics of the study site may play a great role in how successful the monitoring of green elements will be. Therefore, we suggest that the implementation of point cloud-based methods should be designed according to the desired level of detail, the division between tall and small-scale elements, and the number of green elements located under the canopy. Additionally, we encourage the further development of the green factor and similar tools.

Figure 1 .
Figure 1. Tarot T960 survey mission. The flight plan is illustrated in yellow, and the actual flight path in red.

Figure 2 .
Figure 2. DJI Phantom 4 Pro+ survey flights. Camera stations are illustrated in white.

Figure 4 .
Figure 4. GeoSLAM ZEB Revo RT point cloud with independent measurement trajectories, which are colored individually as blue, red, yellow, green, and purple. The point cloud is colored with the data of the integrated camera system, and the black points are laser scanning points that do not have any color value from the camera data.

Figure 5 .
Figure 5. Parameters for the comparative analysis. The parameters of geometry were tested with height ramp-colored point clouds and the parameters of appearance with RGB-colored point clouds.

Figure 6 .
Figure 6. Comparisons of point clouds for large trees. In the top row, the point clouds are colored with the height ramp, and in the bottom row with RGB colorization.

Figure 7 .
Figure 7. Comparisons of point clouds for small trees. In the top row, the point clouds are colored with the height ramp, and in the bottom row with RGB colorization.

Figure 8 .
Figure 8. Comparisons of point clouds for very small trees. In the top row, the point clouds are colored with the height ramp, and in the bottom row with RGB colorization.

Figure 9 .
Figure 9. Comparison of point clouds for a large shrub. In the top row, the point clouds are colored with the height ramp, and in the bottom row with RGB colorization.

Figure 10 .
Figure 10. Comparison of point clouds for natural ground vegetation. In the top row, the point clouds are colored with the height ramp, and in the bottom row with RGB colorization.

Figure 11 .
Figure 11. Comparison of point clouds for dead wood/stumps. In the top row, the point clouds are colored with the height ramp, and in the bottom row with RGB colorization.

Figure 12 .
Figure 12. Comparisons of point clouds for perennials, lawns, perennial vines, semipermeable surfaces (grass stones), and permeable pavements (sand surfaces). In the top row, the point clouds are colored with the height ramp, and in the bottom row with RGB colorization.

Figure 13 .
Figure 13. Comparisons of point clouds for plants with impressive blooming. In the top row, the point clouds are colored with the height ramp, and in the bottom row with RGB colorization.

Figure 14 .
Figure 14. Comparison of point clouds for a blooming shrub. In the top row, the point clouds are colored with the height ramp, and in the bottom row with RGB colorization.

Figure 15 .
Figure 15. The mean results of all the point cloud data sets in conveying information on green elements, with 3 being the top score (good ability to convey information) and 0 being the lowest score (no ability to convey information).

Figure 16 .
Figure 16. The best-performing results of all the point cloud data sets in conveying information on green elements, with 3 being the top score (good ability to convey information) and 0 being the lowest score (no ability to convey information).

Figure 17 .
Figure 17. The mean results of all the point cloud data sets for geometry and appearance.

Table 1 .
Tested green factor elements according to, and adapted from, the Helsinki green factor tool.

Table 3 .
The point cloud data sets' ability to convey information on the green elements' geometry, with 3 denoting good ability, 2 denoting moderate ability, 1 denoting low ability, and 0 denoting no ability to convey information on the given parameter. Geometry:

Point Cloud Data Sets' Ability to Convey Information
1 Located in an open area; 2 located under the canopy.

Table 4 .
The point cloud data sets' ability to convey information on the green elements' appearance, with 3 denoting good ability, 2 denoting moderate ability, 1 denoting low ability, and 0 denoting no ability to convey information on the given parameter. Appearance:

Point Cloud Data Sets' Ability to Convey Information
1 Located in an open area; 2 located under the canopy.

Table A2 .
Tentative estimation based on the study results; possibility to utilize point clouds in green factor-based assessment of existing vegetation.