Exploring Landscape Composition Using 2D and 3D Open Urban Vectorial Data

Abstract: Methods and tools for assessing the visual impact of objects such as high-rises are rarely used in planning, despite the increase in opportunities to develop automated visual assessments now that 3D urban data are acquired and used by municipalities as well as made available through open data portals. This paper presents a new method for assessing city visibility using a 3D model on a metropolitan scale. The method measures the view composition in terms of the city objects visible from a given viewpoint and produces a georeferenced and semantically rich database of those visible objects in order to propose a thematic vision of the city and its urban landscape. As far as computational efficiency is concerned, and considering the large amount of data needed, the method relies on a dedicated system of automatic data organization for analyzing visibility over vast areas (hundreds of square kilometers), offering various possibilities for use on different scales. In terms of operational uses, as shown in our paper, the various results produced by the method (quantitative data, georeferenced databases and 3D schematic images) allow for a wide spectrum of applications in urban planning.


Introduction
Obtaining the best 2D view or an image from a 3D structure [1] has been studied in different domains. The best view of a city is an important topic that interests not only tourists but also urban planners. Though such a view is very subjective, the urban skyline, the visibility of landmarks from a given point and the count of visible landmarks are a few criteria that have been identified to qualify views. Two-dimensional, 2.5D and 3D [2] data models of the city may help in qualifying such viewpoints. A 2D data model gives only the localization of objects in 2D space and is commonly used in classical geographical information systems (GIS). A 2.5D data model gives a single elevation for a spatial object along with its localization and is frequently used for studying terrain elevations. Urban data may need to represent more complex objects, such as buildings; in this case, a 3D urban data model stores three coordinates for every point of an object. These data models are being used for visual 3D analyses of landscapes and related applications [3,4], eye-level greenness visibility [5], etc. Urban studies [6] have even incorporated the use of crowd-sourced imagery [7] and object-based image analysis of satellite images [8] for identifying such viewpoints.
Consequently, in the quest for a sustainable city, one must also consider the possible alteration of relationships between urban societies and their landscape(s) in the context of the verticalization of urbanization [9]. The conflicts that have emerged from the return of high-rise buildings have called into question the very principles and tools of urban planning. In cities such as London, viewing corridors have been set up and then modified to accommodate high-rise pressure, whereas, in Paris, longstanding height limits have been removed altogether in outlying areas of the metropolis. In many cities, the visual assessment of high-rises has become strategic in determining tower planning applications. The tools often used to evaluate the visual impact of high-rises rely on renderings based on a selection of georeferenced photographs. The approach is rarely systematic, however, with the exception of research by Nijhuis [10,11] and Cassatella and Voghera [12], who have demonstrated the benefits of using GIS and spatial analysis tools based on simplified elevation models.
Based on the expertise of both computer scientists and geographers, the aim of this paper is to provide a precise quantification and qualification of skyline views for planning, as well as of view composition, by using 3D geometry and semantics, as opposed to the usual analysis conducted over a short perimeter with 2.5D data and without semantics. Using open data in a standardized format allows us to provide a generic method usable on datasets available in other cities. Our method offers analytical tools for quantitatively and semantically assessing the visual impact of city objects, taking into account not only topography but also building types and forms, vegetation, hydrology and any other types of city features included in the 3D model, over large areas (several hundred square kilometers). It measures the visual composition of a landscape from a given viewpoint: using the 3D model composed of different types of city objects, we detect those that are visible and store their identification number and geographic coordinates in a 3D georeferenced database. We can then provide the list of visible objects and the semantic information linked in the 3D city model (name, address of a building, botanical species, etc.) to generate a database that can be queried and used for further analyses beyond a visual representation of the view in question. Moreover, thanks to the 3D coordinates of the visible objects, these results can be combined with external georeferenced data, such as socio-economic data describing the corresponding area, to perform specific analyses corresponding to user needs.
Thanks to geometry and semantics, city view composition studies can be conducted. Figure 1 illustrates this purpose with a view of Lyon (France). Landscape analysis can provide additional information, such as vegetation composition [5] and the location of significant buildings or roads [13-15]. Visual analysis can be carried out according to urban planners' needs [16,17] (Figures 2 and 3). This paper focuses on the production of new methods to assess the visibility and visual impact of existing or planned features of the city described in a 3D geometrical and semantic city model, in order to provide local authorities and associations with quantitative and qualitative data (through georeferenced geometrical and semantic data) for a fairer debate between stakeholders and decision-makers. Our argument is based on the shortcomings of the data currently available to the public during consultations regarding high-rise building projects or, more generally, urban view composition. We propose a generic approach and provide open-source tools for processing various city models with different types of city objects. We also ensure the replicability of our work through the use of standards. In order to present our interdisciplinary work, its implications and its limitations, we begin by offering insights into both previous scientific works and the needs expressed by practitioners that were taken into account in the development of our work in Section 2. We then present the developed method in Section 3, explain how it may be used with 3D city models to assess skylines and present some of its operational uses in Section 4. Finally, we conclude by showing how our tool opens up new possibilities for practitioners and examining how this work may be improved in Section 5.

Previous Scientific Works with Respect to Users' Needs
Cities are evolving at a rapid rate, both horizontally and vertically. Because of their prominence in wide-angle and long-range panoramas, debates surrounding high-rises among governing authorities and the wider public have focused on skylines. Conflicts have emerged, especially in Paris, London [18-20], Barcelona, Turin [21], Vienna and even St. Petersburg [22], where tall building projects have revived the debate between advocates of modernity and those wanting to preserve the character of European cities. Hence, analyses of the impact of tall buildings on the skyline [23] have become a major part of urban planning. Several tools are currently being explored [24] to study the visibility of objects in different orientations. Recently, LiDAR data have been used for visibility analysis [25]. One of the main difficulties when working on visibility analyses in operational contexts is the gap between scientific works and the tools and methods used in practice. The SKYLINE research project (2013-2016) aimed to counter the existing lack of research on the skyline as a contested dimension of the urban landscape at a time when skyscrapers are rapidly spreading across the globe (http://recherche.univ-lyon2.fr/skyline/wordpress/?page_id=452, (accessed on 7 September 2022)). We identified a lack of use of the existing visibility analysis tools. Although basic 2.5D visibility analysis tools have been available in GIS software for many years, they are rarely used in planning assessments [10,11]. More sophisticated methods are almost never used in practice, and practitioners are mostly unaware of their existence and potential. Nevertheless, practitioners listed many different contexts requiring visibility assessments, with various objectives depending on the institutional context, the actors involved, the spatial configuration, etc., so that various outputs may be sought or required depending on the context. An analysis of existing scientific works conducted at the beginning of our project shed light on the shortcomings of most methods proposed by scientists, particularly regarding the opportunities offered by the rapid development of the production of 3D city models, allowing for more precise and semantically richer analyses. During the SKYLINE project, we therefore worked alongside practitioners in order to determine their needs, as well as with computer scientists, who were able to develop new, efficiently tailored tools allowing for more precise and complete visibility analyses. The following paragraphs present a summarized cross-analysis between the state of the art of scientific developments and the tools and methods used in practice.
The vast majority of scientific and operational studies that have actually been produced and used by practitioners rely on 2D or 2.5D analysis. For example, 2D models can be used to classify the surroundings of a pedestrian pathway [26] or to measure landscape openness [27] by using the isovist (a vector approach), defined by Benedikt [28] as "the set of all points visible from a given vantage point in space and with respect to an environment". Bartie et al. [29] focus on specific landmark features in order to quantify visibility according to the current viewpoint by defining new metrics extracted from a 2.5D city model. Yang et al. [30] also use 2.5D models to develop their concept of "viewsphere analysis", which consists of estimating the visibility of a scene, defined by a viewpoint and its spherical surroundings, by computing volume-based indices. Visibility analyses have also been performed for smaller cities based on 2.5D data using tools such as the Line of Sight Analyst toolbox [31] and the Viewshed [32] tool in ArcGIS. We will also compare the results obtained from our proposed approach to those obtained from the ArcGIS tool in Section 4. The isovist approach [26,27] is a vector method based on ray casting strategies (GearScape Geoprocessing Language (GGL) [33], Isovist Analyst [34]).
While 2.5D raster analyses do provide information on the distance up to which a building can be seen and indications as to which areas may be affected by its visual impact, they do not address vertical surfaces, especially facades, as noted by Suleiman et al. [35]. In other words, such analyses mostly provide information about the visual impact of a landmark on a building as seen from its higher vantage point. In this respect, the resulting images give little insight with regard to inhabitants whose views may be affected by a given building or landmark, even if methods that aim to be more realistic from a human perspective are emerging [14]. In the case of the visibility of a landmark or project from a public space, the main limitation of 2.5D raster analysis lies in the precision of the raster used, which cannot properly describe building shapes, especially where the study area is large, due to computing limits.
Although rarely used for visibility analyses in practice, 3D city models are increasingly used in a wide spectrum of applications [36] and are becoming more widely available in open data formats for many cities. While the use of 3D models can improve the precision and relevance of the analyses produced, it is also important to note that they can provide more immersive visibility analyses in which non-expert users can easily identify their surroundings and understand the corresponding results.
Practitioners stated that 3D visualizations are often used in association with 2.5D raster analysis, especially through photomontages. The vast majority of such photomontages are not produced from accurate spatial data, and some have been accused of conveying a misleading image of a project, spoiling public debate and sometimes giving rise to legal action [37]. Other practitioners use 3D models provided by Google by inserting models representing projects into Google Earth (Figure 3). In this case, users have no control over the 3D data available in their area of interest, little control over what can be displayed in the Google Earth viewer and no way to perform analysis beyond mere visualization.
In the scientific field, however, multiple works use 3D models to perform visibility analyses. For example, Morello and Ratti [38] and Suleiman et al. [39] propose 3D extensions of the isovist concept on Digital Elevation Models (DEM), defined in a raster voxel model and a polygonal model, respectively. Caldwell et al. [40] chose a different approach, precomputing a Complete Intervisibility Database on a Digital Elevation Model in order to directly compute metrics supporting specific decision-making needs. For each point of the DEM, the viewshed (a raster approach), a concept close to that of the isovist (a vector approach) [41], is computed and stored for rapid access when needed by specific processes. These visibility results can, for example, be used to compute the least and most visible paths between two positions on the ground. Choudhury et al. [42] and Rabban et al. [43] propose methods to compute a Visibility Color Map from a specific viewpoint: a color is assigned to each point of the viewpoint's surroundings according to the visibility of the viewpoint from that point. Rosa [13] links a viewshed analysis, which detects the most visible areas of a Digital Elevation Model, with landscape values based on experts' assessments in order to generate an index of visible landscape values: this makes it possible to detect which areas represent an important landscape with high visibility.
While these visibility analyses offer tools to qualify urban and natural landscapes, they can also be used to measure the visual impact of a specific city feature, such as a building (existing or planned), on such landscapes, as in the works of Hernández et al. [15] and Czyńska [44], which propose methods to measure the visual impact of rural buildings on natural landscapes and compare the results to a public survey. Similarly, Danese et al. [45] examine the Visual Impact Assessment of new projects in various Italian cities by computing viewshed analyses before and after the addition of buildings and comparing the results in order to evaluate the impact on the visibility of landmarks.
Most works propose visibility studies of 3D models but do not address the scaling problem inherent to applications using such data, or acknowledge that their process can only handle a limited area. We thus opt for a generic method that can be used on the large 3D vector datasets available for numerous open data cities. As the visibility analysis is stored as 3D georeferenced vector entities, any necessary post-processing of the generated data can be conducted, for instance, to include visibility distances/ranges, meteorological conditions, etc.
These works also focus on analyzing the impact of buildings but rarely address other types of city features, such as vegetation, landmarks or other urban features, that may have a certain impact in terms of visibility. Since these kinds of data are now properly managed by international standards such as CityGML, it is necessary to enhance methods to take them into account both semantically and geometrically.
In order to overcome the shortcomings of available tools and provide practitioners with precise and useful tools, we worked with practitioners to draw up a list of requirements a visibility tool should meet and then developed a method that could address different practitioners' needs. This approach is intended to be more general than scientific works proposing one tool or one visibility indicator to answer a particular need [44]. We thus aim to produce precise and complete results that can be stored as 3D georeferenced vector data with linked attributes, allowing for a wide range of GIS-based analyses to extract the necessary information, depending on the context of use.
Based on the needs expressed, a few requirements were listed:
• the need for precise geometrical analysis, which requires the use of a 3D vector city model instead of simplified DEMs or rasters;
• the need for semantic data to identify as precisely as possible which feature is seen from a vantage point, e.g., whether a building or landmark is affected by the visual impact of a proposed building (only by its roof or its facades, which floors may be impacted, etc.);
• the need to be able to process large amounts of data, especially rich 3D vector data, so as to obtain precise results on any area (a whole metropolis or region if needed);
• the need for numerous outputs that can be used to generate multiple results (images, georeferenced databases and data quantification stored in spreadsheets), some of which may be used as is and others opening possibilities in terms of spatial analysis (i.e., interaction with other georeferenced data in GIS tools), depending on the end users' objectives and their technical capabilities;
• the need for a generic approach for processing various city models with different types of city objects;
• the need for open-source tools that can be widely used by any stakeholder;
• the need to ensure replicability through the use of standards.
These requirements emerged from interactions between urban project practitioners and computer scientists aimed at identifying the scientific and technical challenges.
The next section presents the new methods developed. Various uses of the results are then presented, demonstrating the extent of the opportunities offered by the tool.

Measuring the Visual Composition of an Urban Landscape
In this section, we will describe the various steps comprising our method, depicted in Figure 4 and described in the following subsections.

Field of View Description (Step A)
The input parameters are the studied viewpoint and the Field Of View (FOV), defined by a given 3D position and an angular aperture, respectively. The FOV may be human, i.e., 200 degrees (FoVx, the field of view in the horizontal direction) × 135 degrees (FoVy, the field of view in the vertical direction) according to Wandell and Thomas [46], to study the view from the window of an apartment, for example; or it may be a complete 360 × 180-degree full panoramic field (the full 4π sr solid angle), for example, to compute the view from a rooftop terrace at the top of a tower. In order to partition the FOV into a set of rays (Figure 5), it is also necessary to define a resolution, L × H (where L is the width and H is the height of the screen), which represents the number of rays to be generated.
Currently, 3D city models may cover several hundred square kilometers, and numerous objects and polygons must, therefore, be tested in the intersection computations. For example, the Lyon open data portal in France (https://data.grandlyon.com/, (accessed on 7 September 2022)) provides a territory of 550 sq km (an example of a 1 × 1 km tile of Lyon is shown in Figure 6). Table 1 shows the number of triangles for four different kinds of objects (buildings, roads, terrain and vegetation). Depending on density (urban, peri-urban or rural areas), the number of triangles may also vary for these kinds of objects.
Table 1. Different types of objects shown in Figure 6 and the associated number of triangles for a 1 × 1 km tile of Lyon.
Given the limits of visual accuracy, computing all the ray intersections in the 3D space would be irrelevant and time-consuming. In our method, for a given viewpoint, each ray must be followed until it intersects its first object. Parameters of human vision are used to decrease the number of rays to be computed: the average angular resolution of the human eye is between 0.02 and 0.03 degrees [47], which makes it possible to distinguish an object with a diameter of around 40 cm at a distance of 1 km. At the very least, an object becomes less relevant the further one moves away from the viewpoint. The 3D space is therefore decomposed into angular partitions: two neighboring rays are separated by an angular step of FoVx/L in the horizontal direction and FoVy/H in the vertical direction.

A panoramic analysis can be carried out if FoVx equals 360°. Increasing L or H yields a higher resolution but increases the number of rays and hence the computation cost.
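The angular partitioning described above can be sketched as follows. This is an illustrative reconstruction, not the 3DUSE implementation: the constant angular step (FoVx/L horizontally, FoVy/H vertically) and the axis convention (y forward, z up) are assumptions drawn from the description.

```python
import math

def generate_rays(fov_x_deg, fov_y_deg, width, height):
    """Partition a FoVx x FoVy field of view into width x height rays.

    Returns unit direction vectors (x, y, z): x = right, y = forward,
    z = up. Two neighbouring rays are separated by fov_x/width
    horizontally and fov_y/height vertically, as in the text.
    """
    rays = []
    step_x = math.radians(fov_x_deg) / width   # horizontal angular step
    step_y = math.radians(fov_y_deg) / height  # vertical angular step
    for j in range(height):
        # elevation angle, centred on the horizon
        phi = (j + 0.5) * step_y - math.radians(fov_y_deg) / 2
        for i in range(width):
            # azimuth angle, centred on the viewing direction
            theta = (i + 0.5) * step_x - math.radians(fov_x_deg) / 2
            rays.append((math.cos(phi) * math.sin(theta),
                         math.cos(phi) * math.cos(theta),
                         math.sin(phi)))
    return rays

# A full panorama at the ~0.025-degree acuity quoted above would need
# 360/0.025 x 180/0.025 = 14,400 x 7200 rays; a coarser grid is used here.
rays = generate_rays(360, 180, 144, 72)
print(len(rays))  # 10368
```

Raising `width` or `height` directly multiplies the number of rays, which is the resolution/cost trade-off mentioned above.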

Intersecting Objects in the 3D Scene (Step B)
The discretization process described above yields a set of rays. Each ray is propagated from the viewpoint into the 3D scene and retains the RGB value of the intersected object (this is a well-known process used in rendering pipelines, commonly called ray tracing). In our case, however, it is the semantic part of the data and the distance between the viewpoint and the object that are of interest. The diagram in Figure 7 illustrates this: three rays intersect different kinds of objects in a simplified view. For instance, the red ray intersects a building; the distance or color of the object, along with the linked information given by its attributes, is stored for this ray. Rays with no intersection are classified as out of scope (two cases remain: the ray leaves the territory bounds, which is detected later, or it reaches the sky).
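A minimal sketch of this first-hit step, assuming triangle soup input as in Table 1, is given below. It uses the standard Möller–Trumbore ray/triangle test and a brute-force scene traversal; the actual tool relies on the tiling and quadtree structures for efficiency, and the `object_id` tag standing in for the CityGML link is a simplification.

```python
def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore ray/triangle intersection.
    Returns the distance t along the ray, or None if there is no hit."""
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to the triangle
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv
    if u < 0 or u > 1:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None        # hits behind the viewpoint are ignored

def first_hit(origin, direction, triangles):
    """Keep only the nearest intersected object; each triangle carries a
    (vertices, object_id) pair so the hit stays linked to its semantics."""
    best = None
    for (v0, v1, v2), obj_id in triangles:
        t = ray_triangle(origin, direction, v0, v1, v2)
        if t is not None and (best is None or t < best[0]):
            best = (t, obj_id)
    return best  # None: the ray only reaches the sky (or leaves the territory)

scene = [(((-1, 5, -1), (1, 5, -1), (0, 5, 1)), "building_42"),
         (((-1, 8, -1), (1, 8, -1), (0, 8, 1)), "tree_7")]
print(first_hit((0, 0, 0), (0, 1, 0), scene))  # (5.0, 'building_42')
```

The nearer building occludes the tree behind it, exactly the masking effect the method needs to capture.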

Proposed Data Structure for a Large-Scale Study
Managing a large dataset with numerous rays is only possible with a dedicated data structure, since the entire dataset cannot be loaded at once. A tiling decomposition is therefore necessary: the city model is decomposed into a regular, configurable grid (Figure 8) by an automatic process. Thanks to this organization, only the necessary tiles are loaded, which makes it possible to manage a large area. An acceleration structure is then provided in the form of a bounding-box hierarchy based on a quadtree; more information can be found in [48]. Each ray is sorted according to the bounding boxes it intersects so that each tile can be loaded one at a time.
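The ray-sorting idea can be sketched in 2D as below: rays are grouped per tile using a slab-based ray/box test, so each tile is loaded once and processed against only the rays that may traverse it. This is a simplified illustration under assumed conventions (a square grid anchored at the origin, plan-view boxes only); the quadtree refinement inside each tile is omitted.

```python
def ray_aabb_2d(ox, oy, dx, dy, xmin, ymin, xmax, ymax):
    """2D slab test: does the forward half of the ray cross the tile's
    bounding box? Heights are handled later, when the tile is loaded."""
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in ((ox, dx, xmin, xmax), (oy, dy, ymin, ymax)):
        if abs(d) < 1e-12:
            if o < lo or o > hi:
                return False
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
            if tmin > tmax:
                return False
    return True

def sort_rays_by_tile(viewpoint, rays, tile_size, n_tiles):
    """Group ray ids by the regular-grid tiles they may traverse, so
    that each tile of the city model needs to be loaded only once."""
    ox, oy = viewpoint
    buckets = {}
    for ray_id, (dx, dy) in enumerate(rays):
        for i in range(n_tiles):
            for j in range(n_tiles):
                box = (i * tile_size, j * tile_size,
                       (i + 1) * tile_size, (j + 1) * tile_size)
                if ray_aabb_2d(ox, oy, dx, dy, *box):
                    buckets.setdefault((i, j), []).append(ray_id)
    return buckets

# A ray heading east from the middle of tile (0, 0) touches tiles
# (0, 0) and (1, 0) but not the northern row.
buckets = sort_rays_by_tile((500, 500), [(1.0, 0.0)], 1000, 2)
print(sorted(buckets))  # [(0, 0), (1, 0)]
```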

Generating the Database (Step C and D)
The intersection found for each ray is the starting point for producing a database composed of georeferenced 3D points, each with a link to the intersected object and its attributes. It is then easy to answer simple queries for a given viewpoint, such as listing the visible buildings, or to provide advanced information through the linked attributes (tree species, number of road lanes, etc.).
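Such queries can be illustrated with a relational sketch. The schema below (table and column names included) is hypothetical, chosen only to show how a per-ray point table linked to city objects supports both simple and attribute-based queries; the actual database layout is not specified in this section.

```python
import sqlite3

# Hypothetical schema: one row per ray hit, linking the georeferenced
# 3D point to the intersected object and its semantic attributes.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE visible_point (
    ray_id    INTEGER PRIMARY KEY,
    x REAL, y REAL, z REAL,     -- georeferenced 3D coordinates
    distance  REAL,             -- viewpoint-to-object distance
    object_id TEXT,             -- link to the city object
    category  TEXT              -- building, road, terrain, vegetation...
);
CREATE TABLE city_object (
    object_id TEXT PRIMARY KEY,
    name TEXT, attributes TEXT  -- e.g. address, botanical species
);
""")
db.execute("INSERT INTO visible_point VALUES "
           "(1, 842.1, 519.3, 24.0, 310.5, 'BU_1', 'building')")
db.execute("INSERT INTO city_object VALUES "
           "('BU_1', 'Fourviere Basilica', 'landmark')")

# Simple query: which named buildings are visible from the viewpoint?
rows = db.execute("""
    SELECT DISTINCT o.name
    FROM visible_point p JOIN city_object o USING (object_id)
    WHERE p.category = 'building'
""").fetchall()
print(rows)  # [('Fourviere Basilica',)]
```

Because the points are georeferenced, the same table can be joined with external data (socio-economic layers, for example) in any GIS or SQL tool.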
Another application is to compute the view composition in terms of percentages for a given viewpoint, as well as the sky view factor (the sky view factor describes the delimitation between the sky and the other city objects for a given viewpoint [49,50]), with additional semantic information. An example of a synthetic view is provided in Figures 9 and 10. To summarize this section, the method provides a rich panel of output data. All the results are reproducible, as our method is delivered via an open-source platform called 3DUSE (3D Urban Scene Editor) (https://github.com/VCityTeam/3DUSE (accessed on 7 September 2022)), which also contains a collection of other components for manipulating, processing and visualizing urban data. 3DUSE is a desktop solution, and some of the developed components are now available as services in a web-based solution called UD-SV (Urban Data Services and Visualization) (https://github.com/VCityTeam/UD-SV (accessed on 7 September 2022)). Our goal is to incorporate the visibility analysis component into UD-SV in the near future. This work is developed in the context of the VCity project (https://projet.liris.cnrs.fr/vcity/ (accessed on 7 September 2022)) of the LIRIS laboratory (https://liris.cnrs.fr/ (accessed on 7 September 2022)). The genericity of the approach proposed in this article is ensured by the use of a standard and the large amount of open data already available (for instance, in CityGML format). This 3D geometrical and semantic information, often provided on a large scale, makes it possible to deliver numerous outputs with our method that are valuable for the applications presented in the next section.
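Given the per-ray categories, the view composition reduces to counting. The sketch below approximates each category's share of the field of view (including the sky) by its ray count; note that a strict sky view factor would weight rays by solid angle rather than counting them equally, so this is a simplification.

```python
from collections import Counter

def view_composition(hits):
    """Per-category share of the field of view, counted over the rays.
    `hits` maps each ray to the category it intersects, or None when
    the ray reaches the sky."""
    counts = Counter("sky" if c is None else c for c in hits)
    total = len(hits)
    return {cat: n / total for cat, n in counts.items()}

# 10 rays: 4 reach the sky, 3 hit buildings, 2 vegetation, 1 terrain
hits = [None] * 4 + ["building"] * 3 + ["vegetation"] * 2 + ["terrain"]
comp = view_composition(hits)
print(comp["sky"])  # 0.4 -- the sky share seen from this viewpoint
```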

Data Used for Our Study
In order to test the tools presented in the previous section, we worked with practitioners from the Lyon Metropolis, using the freely available 3D data covering the entire metropolitan territory (more than 550 sq km of data are available). These data are composed of 3D models and semantic information provided in CityGML format (with buildings, terrain, roads and watercourses). Additional data, such as vectorial 2D databases, describe the territory (more than 1000 different datasets are available in the Lyon open data). For instance, LiDAR data and orthophotos can be used to generate an additional layer for a 3D vegetal canopy over the entire territory (a description of this work on vegetation is beyond our scope). Figure 11 shows how the urban landscape is represented through the CityGML files available on the open data portal of the Lyon Metropolis. Such data are used as input for the visibility analysis tools. Other inputs can easily be computed, for instance, terrain areas, additional city furniture (for example, park benches, street lights, etc.), 3D roads, etc.

Geometrical Accuracy for a More Precise Description of the Skyline
The geometrical accuracy of the analysis is the primary advantage of our method, which provides accurate results regarding the masking effects of the geometry of the terrain, buildings and vegetation on the visibility of a given building or landmark (i.e., the results can be as precise as the original data). It is also possible to consider the terrain as discontinuity jumps while studying the skyline.
This gain in precision in terms of geometrical analysis was identified as a key point in the examination of high-rise construction projects, which can have impacts on places far from the building. Indeed, practitioners working in London explained to us that despite all the visibility analysis carried out for the presentation and discussion of the "Shard" Tower project [51], its visibility from several boroughs and their public areas had not been anticipated due to a lack of accurate analyses. As a result, some district councils were not contacted to discuss the project, and the councilors and residents of those areas discovered the impact of the Shard on their daily landscape once the building was built, which sparked both dissatisfaction and mistrust in consultation processes. In the case of high-rise buildings such as The Shard (https://en.wikipedia.org/wiki/The_Shard (accessed on 7 September 2022)), whose iconic aspect was put forward by its promoters, a lack of precision in the visibility analysis can therefore lead to errors in governance (the absence of prior consultation of people affected by a project), which can negatively impact relationships between local governments and inhabitants and introduce long-term mistrust.
This gain in precision can be seen visually when exploring the results of the analysis, but we also wanted to quantify this performance by comparing our results with a 2.5D analysis.

Characterization of Geometrical Accuracy for the Visual Impact Assessment of a Specific Building
In our case study on the metropolis of Lyon (>550 sq km), using the visibility analysis tool of a common GIS software program (ESRI ArcGIS (http://www.esri.com/arcgis/about-arcgis (accessed on 7 September 2022))), we could only perform a 2.5D visibility analysis on a 1-m-resolution DEM covering 48 sq km (only the City of Lyon). Figure 12 shows how the urban morphology is rendered through a 1-m-resolution raster DEM, while Figure 13 makes it possible to visually compare the representation of a given area through a 1-m raster DEM and a 3D vector model. This sheds light on the crucial importance of processing 3D vector data, as offered by our tool, so as to use the most precise description of building shapes as input and to be able to take into account the masking effects of vegetation. Currently, existing 2.5D raster analysis, performed on raster data limited in size (by machine and/or software limitations), simplifies the geometry of vegetation and buildings when carried out over large areas (districts, cities, metropolises, etc.). On very large scales (metropolis, region, etc.), 2.5D raster data can hardly even take into account the geometry of built structures, let alone the masking effects of vegetation. In our work on the landscape of the metropolis of Lyon with ArcGIS, we had to downsize our DEM to a resolution of 25 m to work on an area larger than the city itself (70,000 sq km); the analysis of such a large area was of interest because the work on landscape composition included distant mountains visible in specific meteorological conditions (in our case, the visibility of Mont Blanc from the city center of Lyon). In order to assess the benefit of our tool in terms of precision compared to raster analysis, we studied the visibility of a well-known Lyon monument, the Fourvière Basilica. Located on a hill in the city, it is visible from many places, including from a distance. This allowed us to test the method and demonstrate its potential to our practitioner partners. We thus generated an analysis using a classic GIS tool (ArcGIS) with the raster Digital Elevation Model (1-m resolution) mentioned above, as well as an analysis using our tool on a 3D vector dataset in CityGML. The comparison of the results was carried out only on the area containing the city of Lyon (48 sq km), as it was not possible to produce precise results over a wider area with the ArcGIS raster analysis.
At first glance, a 2D visual comparison shows relatively similar results, with the spatial definition of the visible areas appearing rather close. However, the 3D visualization of the results clearly shows a difference in geometrical precision (Figure 14). The images on the right of Figure 14, which show the results of the visibility analysis using our tool, illustrate the geometrical precision of the results produced, which are well superimposed on the facades of the 3D buildings. Each point depicted on the facades, roofs, vegetation and terrain is visible from the Fourvière Basilica.

Quantitative Analysis of the Gain in Geometrical Accuracy
A more quantitative analysis of the results helps to highlight the strengths of our tool in terms of precision compared to the raster analysis. Table 2 shows the total number of visible points from a chosen viewpoint generated by our tool, the number of these points which are common to the raster analysis and those which were "missed" by the raster analysis. The first column gives the gross comparison of the number of points. The second shows the same comparison after adding a buffer of one meter around the points resulting from our tool (a 3D point becomes a 1-m-diameter disk), in order to bring the results of our tool back to the resolution of the digital elevation model used for the raster analysis. The third gives the comparison for the results of the visibility analysis beyond 1 km around the observation point.
Table 2. Difference between the results of a raster visibility analysis and those from our tool for the viewpoint from the Basilica of Fourvière.
Table 2 shows that the difference in precision between our results and those acquired from a raster analysis is greater for areas close to the chosen viewpoint. This is explained by the method used, as the results of the ray tracing give a more precise meshing over short distances. Thus, for near distances (less than 1 km from the observation point), the precision gain is obvious, confirming the previous visual analysis of the results. For far-off distances, the precision enhancement is also not negligible and confirms the advantage of using our method for analyses of both close and far-off areas. Although the data are less precise over long distances, their accuracy remains higher than that of the raster analyses, which are, as already mentioned, limited in resolution for analyses over large areas. For this particular viewpoint, our tool makes it possible to carry out a more precise analysis with reduced calculation times: a raster analysis over a large area takes several hours, whereas our tool takes a few minutes, or at most a few tens of minutes (30 min maximum for the analyses presented here).
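The buffered comparison described above can be sketched as follows: a vector-analysis point counts as "common" when a raster-analysis point lies within the 1-m-diameter disk around it, and as "missed" otherwise. This is an illustrative brute-force version (a spatial index would be used in practice), working in plan coordinates only.

```python
def compare_with_buffer(vector_pts, raster_pts, radius=0.5):
    """Count vector-analysis points that have a raster-analysis match
    within `radius` metres in plan (0.5 m = a 1-m-diameter disk).
    Returns (common, missed-by-raster)."""
    r2 = radius * radius
    common = 0
    for x, y in vector_pts:
        if any((x - rx) ** 2 + (y - ry) ** 2 <= r2 for rx, ry in raster_pts):
            common += 1
    return common, len(vector_pts) - common

vector = [(0.0, 0.0), (10.0, 10.0), (20.3, 5.1)]   # from the 3D vector tool
raster = [(0.2, 0.1), (20.0, 5.0)]                 # from the raster analysis
print(compare_with_buffer(vector, raster))  # (2, 1)
```

The "missed" count is precisely the precision gain reported in Table 2: points our tool finds visible that the raster analysis cannot resolve.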

Comparison for Other Landmarks
To confirm our first results, we repeated these comparisons for visibility analyses from the top of four other landmarks in the urban Lyon landscape. The following tables (Tables 3-5) show these results in full. There is a more notable difference for some monuments in terms of distant data (beyond 1 km). This is due to the urban morphology of Lyon and the positioning of these monuments, which are not visible from areas far from the center of the city because they are surrounded by several hills. Moreover, for the same monuments integrated into a dense urban fabric, we note that the results of our tool are much more precise for close areas (less than 1 km from the monument). We can thus show the advantage of our tool, which is more precise than the analysis derived from raster data and suitable both for generating analyses in dense urban centers and for obtaining reliable data over long distances. The tests were limited here to the city of Lyon (a territory of 48 sq km) because of the intrinsic limitations of the raster mode (which induces large data volumes for precise resolutions). We were able to carry out analyses over the whole territory of the metropolis (>500 sq km), integrating all buildings and vegetation, with the same computer used for the raster analysis limited to 48 sq km.
We observe a more noticeable difference for some monuments in terms of long-distance data (beyond one km). This is due to the position of some buildings that are not visible from distant areas because they are hidden by tall buildings or by the relief. Nevertheless, for these buildings, in a nearby environment, our method remains more accurate in all cases.

Example of Uses of the Data Produced by Our Tool
Our tool generates multiple results for each viewpoint: quantitative data, 3D georeferenced data and several images. Regarding quantitative data, it is possible to obtain dashboards, which can be constructed on demand thanks to the data stored in the database. For instance, for a given area, the attributes linked to each point allow for the differentiation of the building, noticeable building, vegetation and terrain categories. Regarding 3D georeferenced data, they may be used as colored 3D points with a list of attributes that can be highlighted in a 3D view (for instance, Figure 13). They can also be mixed with polygonal (CityGML) information. Images can also be produced with ray tracing: it is possible to obtain depth maps with dedicated colors, images with building categorization, etc. All these data can be used to provide visual support for collaborative discussion and decision-making. Some examples are detailed in the next sections.
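A dashboard of the kind mentioned above can be derived directly from the per-point category attribute. The sketch below assumes a hypothetical flat list of category labels, one per resulting 3D point, and computes the view composition as percentages; the counts are invented for illustration.

```python
from collections import Counter

# Hypothetical attribute values for the 3D points of one viewpoint:
# each point carries the CityGML category of the object it hit.
point_categories = (["building"] * 540 + ["vegetation"] * 260
                    + ["terrain"] * 180 + ["water"] * 20)

composition = Counter(point_categories)
total = sum(composition.values())

# Share of each category in the view, as would feed a dashboard widget.
shares = {cat: round(100 * n / total, 1) for cat, n in composition.items()}
for category, pct in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{category:10s} {pct:5.1f} %")
```

Because the categories are stored as attributes in the database, the same aggregation can be restricted to any area of interest with a spatial filter before counting.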

GIS Analysis of 3D Georeferenced Results
In this section, we present two possible uses of the 3D georeferenced results produced by our tool as illustrations of some of the possibilities offered by our method.In fact, many analyses can be performed in GIS tools using 3D georeferenced data, a complete overview of which is out of the scope of this study.

Visual Impact of Buildings in Context
Our first illustrative example shows how selection based on semantic data and basic statistics can provide useful insights into the visual impact of a building on a district scale. This is useful for practitioners who need to verify that particularly significant viewpoints from public spaces (views of monuments accessible to all citizens) are preserved.
Our tool provides 3D points indicating a 3D location from which the target of our analysis is visible. Each point contains, in its attribute table, the nature of the corresponding CityGML element (terrain, vegetation, building, etc.) and its original CityGML identifier, so that any data from the original 3D model can be linked to it (through a simple attribute join). For instance, in the case of the visibility analysis of the Fourvière Basilica presented above (see Section 4.2.1), we can derive precise results about its visibility in a given district from the data provided on the city of Lyon as a whole. The 2.5D raster visibility analysis provides a binary result (whether or not the target is seen) but does not provide data on what the resulting pixel entails (whether it is a public space, a private building, etc.). Practitioners often use a spatial join to try to create such information (Apur 2014), but the accuracy of the result depends on the size of the pixel and the density of the urban fabric. In our case, each resulting 3D point corresponds to a unique element from the CityGML database (terrain, vegetation, building, etc.). It is then easy to count, for example, from how many buildings the target is seen and to precisely determine from which location the target is seen (which floor of a building, or which area of a public space such as a square or a park).
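The attribute join described above amounts to a simple lookup on the CityGML identifier. In this minimal sketch, the point records, identifiers and attribute values are all hypothetical placeholders standing in for the real database contents.

```python
# Hypothetical result points from a visibility analysis, each carrying the
# identifier of the CityGML object it corresponds to.
result_points = [
    {"x": 842100.0, "y": 6519300.0, "z": 172.5, "gml_id": "BLDG_0042"},
    {"x": 842130.0, "y": 6519280.0, "z": 168.0, "gml_id": "TER_0007"},
]

# Hypothetical attributes from the original CityGML model, keyed by identifier.
citygml_attributes = {
    "BLDG_0042": {"type": "building", "usage": "residential"},
    "TER_0007": {"type": "terrain"},
}

# Attribute join: enrich each point with the model data for its object.
joined = [{**p, **citygml_attributes[p["gml_id"]]} for p in result_points]
print(joined[0]["usage"])  # model attributes are now carried by the point
```

This is why a unique identifier per element matters: unlike a spatial join on raster pixels, the lookup is exact regardless of pixel size or urban density.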
Figure 15 shows the results of the visibility analysis from the Fourvière Basilica, with a different color attached to each resulting 3D point according to its type. Green points indicate the presence of vegetation in the landscape seen from the Basilica (our tool also provides a quantitative result of the percentages of vegetation, building and terrain in the view). However, those points would not generally indicate a location from which the Basilica can be seen by inhabitants (since few of them are likely to climb trees to view it). Yellow points indicate public spaces from which the Basilica can be seen, and using them together with grey points, representing buildings, we can draw up an analysis of the Fourvière Basilica's visibility on the scale of a large square (Bellecour Square, 62,000 sq m). Figure 16 shows the results of the raster analysis of the Fourvière Basilica's visibility from this square, and Figure 17 adds the results from our tool. The main difference between our tool and the raster analysis lies in the better precision of both the building and vegetation masking effects (thanks to the use of more precise 3D geometry) and in the ability to easily distinguish which results correspond to viewpoints accessible to citizens.
Thanks to our data, we can determine more precisely from which buildings the Fourvière Basilica can be seen. In fact, results from most raster analyses, being 2.5D, only provide information from the highest point of a building, which is often its rooftop. Since it is difficult to know whether rooftops are accessible to citizens, this does not give a clear picture of the viewpoints of the Basilica that are actually accessible. Our tool allows for more precise analyses, thus enabling practitioners to tackle the issue of the extent of public access to remarkable views and to quantify the privatization of remarkable views in the case of the construction of high-rises.
Figure 18 shows the results from the raster analysis. For most of the buildings around the square, the raster results show both green and red pixels. A simple GIS request makes it possible to count the number of green and red pixels in each building, but this only concerns building rooftops. Our tool allows us to pinpoint exactly which building is concerned and to distinguish between the visibility of the Basilica from facades and from roofs. We can thus select buildings that have a facade from which the Basilica can be seen (see Figure 19). Quantitatively speaking, results from the raster analysis indicate that the Basilica is visible from 119 buildings around the square (be that from their facades or roofs). Our analysis shows that the Basilica is only seen from the facades of 56 buildings. We can even calculate, through SQL queries, the percentage of the facades of those buildings that are oriented towards the Basilica, or the floors from which the Basilica can be seen.
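The facade-versus-roof distinction above boils down to counting distinct buildings under a surface-type filter. The following sketch shows the general shape of such an SQL query on a hypothetical `visible_points` table (the schema, identifiers and row values are invented; the real database stores richer CityGML attributes).

```python
import sqlite3

# In-memory stand-in for the georeferenced results database.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE visible_points (
                   building_id TEXT,   -- CityGML identifier of the hit object
                   surface     TEXT    -- 'facade' or 'roof'
               )""")
con.executemany(
    "INSERT INTO visible_points VALUES (?, ?)",
    [("B1", "facade"), ("B1", "roof"), ("B2", "roof"), ("B3", "facade")],
)

# Buildings from which the target is seen at all (facade or roof)...
total = con.execute(
    "SELECT COUNT(DISTINCT building_id) FROM visible_points").fetchone()[0]

# ...versus buildings with at least one facade point, i.e. viewpoints
# plausibly accessible to occupants rather than rooftops only.
facades = con.execute(
    "SELECT COUNT(DISTINCT building_id) FROM visible_points "
    "WHERE surface = 'facade'").fetchone()[0]

print(total, facades)  # 3 buildings in total, 2 with a visible facade
```

In the Bellecour Square case reported above, the analogous pair of queries yields 119 buildings in total against 56 with a visible facade.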
Finally, our method offers more precise results than raster analysis and opens up possibilities for city planners to quantify and qualify public and private access to remarkable views in each district (even on the scale of each building). It can also be used to quantify the privatization of those views by a given project, by performing the analysis on the existing dataset and then with the project inserted into the CityGML model, enhancing public debate.

Detecting the Visibility of a Building from Vantage Points
Our second illustrative example, developed with practitioners, is a work on the qualification of public belvederes, i.e., public spaces that offer vantage points with views of remarkable buildings and landscapes. This is of interest to practitioners since belvederes enable everyone to enjoy the urban scenery. As far as planning is concerned, identifying possible locations for public vantage points or belvederes and working on their accessibility and equipment (public benches, for instance) are ways to grant every citizen access to the urban scenery.
In the case of Lyon, we used the visibility analysis from several landmarks in a GIS tool, along with other data (public transport, roads, topography and orientation), in order to create a database of existing belvederes and provide a diagnosis of their accessibility. For each vantage point, we computed the number of landmarks seen, its altitude, its accessibility from the public transport network (distance by foot from the nearest public transport), the effort required to walk up to it given the slope, and its exposure to the sun according to its orientation. This database also contains the equipment of each belvedere (drawn from existing data) and can be used to develop a public strategy to enhance citizens' access to the skyline. Figure 20 shows visualizations of the data in a GIS tool: belvederes classified by the number of landmarks that can be seen and by the effort required to access them on foot (calculated in calories). Our data were also used to work on pedestrian itineraries on the hills of Lyon in order to automatically determine how many landmarks can be seen from a tourist path and where particular vantage points are situated. These analyses require only an SQL query once all the necessary data are available in a georeferenced format. As such, we thought it particularly useful to produce some of our results in a 3D georeferenced vector format.
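Once each belvedere carries these computed attributes, classifications such as those in Figure 20 reduce to a sort. This sketch uses invented names and values; it ranks belvederes by the number of landmarks seen, breaking ties by the walking effort needed to reach them.

```python
# Hypothetical belvedere records with the attributes described above.
belvederes = [
    {"name": "A", "landmarks_seen": 5, "effort_kcal": 68},
    {"name": "B", "landmarks_seen": 3, "effort_kcal": 15},
    {"name": "C", "landmarks_seen": 5, "effort_kcal": 22},
]

# Most landmarks first; among equals, the easiest to reach first.
ranked = sorted(belvederes,
                key=lambda b: (-b["landmarks_seen"], b["effort_kcal"]))
print([b["name"] for b in ranked])  # ['C', 'A', 'B']
```

The same ordering can be expressed as an `ORDER BY` clause when the belvedere attributes live in the georeferenced database rather than in memory.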

Outlook: Use of Images Produced by Our Tools for the Visual Analysis of High-Rise Projects and Their Impact on the Skyline
While our 3D vector results offer a great number of possibilities in terms of analysis, our tool also produces images for each analysis performed. According to the practitioners working with us on the project, images are very useful for visual analysis by many actors (practitioners, elected representatives and citizens) when working on a construction project or the design of a district, especially where it contains high-rises. These images can be particularly useful for testing various hypotheses and evaluating the impact of a new building on the skyline when designing the project.
Today, in professional practice, many projects are evaluated through photomontages, which are difficult to produce with accuracy. Accounts of several cases of such images misleading a public debate already exist [37]. For this reason, practitioners involved in the SKYLINE project expressed their wish for images that are geometrically reliable and graphically restrained (in particular without textures, so that the shape of the building is actually what is visually analyzed). Figure 21 shows an example of an imaginary project in a central business district of Lyon, seen from a popular public vantage point. Such images can help practitioners and elected representatives work on designs that fit in with the existing skyline and present their proposals to a larger audience. Along with the images, quantitative data produced by our tool enabled us to draw up a list of 70 buildings that would lose their view of the Fourvière Basilica if the project were constructed.

Discussion
The study of the visibility of landmarks from a given viewpoint, or from different viewpoints, is important. The former ensures that certain landmarks remain visible from a given viewpoint, and the latter, as illustrated in Figure 20, helps to identify key vantage points that could be used by urban planners and even proposed to tourists. The use of 3D data, compared to 2D or 2.5D data, helped us to obtain a much finer visibility analysis. In addition, by making use of standards such as CityGML, our approach can be extended to any other city in a reproducible manner, ensuring that the entire computational process is repeatable. Delivering data and code along with complete documentation also helps towards reproducibility. Compared to traditional tools, our approach can also be used to study the visibility of far-away objects. The visibility indicators can be used to qualify certain areas and can be grouped with other indicators to propose a measure of the visual ambiance at these locations.
However, such analyses are sometimes cumbersome, especially when they involve objects that are separated by kilometers. Furthermore, they require complete 3D models, since multiple objects may play a role in the overall visual impact of a particular object of interest in large-scale analyses. With the development of versioned city models that can propose competing versions of an urban area, as illustrated in Chaturvedi et al. [52] and Samuel et al. [53], visibility analyses can also be used to compare the visual impact of multiple competing urban projects. Furthermore, we have not considered the density of obstacles in this work.

Conclusions
Using 3D CityGML vector data, our tool provides accurate and reliable results in terms of visibility analysis and skyline assessment and can be used over very large areas. In this article, we demonstrated our proposed method of visibility analysis for the city of Lyon, especially for some of its major historical monuments, by making use of 3D data models. Our analyses show a significant improvement compared to tools based on 2.5D raster data. Our tool, called 3DUSE, an open-source solution, also allows for quantitative and qualitative (through images) analyses of the skyline on various scales, enabling users to adapt the ray tracing resolution to their needs. The originality of our method lies in its versatility, the accuracy and diversity of the analysis results, and the optimization of the processing time. Our method opens up new opportunities for skyline assessment and can help practitioners in various areas (identifying accessible public spaces with remarkable views, quantifying the privatization of access to remarkable views, producing quantitative and qualitative data for public debates on landscape projects affecting the skyline, etc.).
The ability of the visibility analysis to cover very large areas is particularly interesting for work on areas sharing a common landscape. It can, however, lead to time-consuming processing and large resulting datasets, which need to be explored, sorted and analyzed through specific queries, requiring some GIS technical knowledge. The images and tables produced by our tool for every analysis can be used directly in discussions with a wider audience.
The accuracy of the results produced by our tool depends directly on the 3D vector data used as input (its geometrical precision and available semantic data).Nevertheless, our process could be improved in order to provide additional results to complete the visibility analysis.For instance, complementary processes could be developed in order to take into account meteorological data or human vision limitations [10] affecting visibility at a given time and place.Working with a temporal 3D model would thus be relevant in order to produce variable analyses according to the time of day/year.This would add further complexity with regard to handling the results, while practitioners and citizens have expressed a great need for accessible data.For this reason, subsequent work may also focus on providing results structured in a database with predefined queries adapted to the user's objectives.

Figure 1. View composition regarding city features.

Figure 2. Viewpoint from south of Lyon.

Figure 3. Three-dimensional visualization of buildings and associated documents (see more examples in [17]).

Figure 4. Composition of the 3D view in four steps: field of view description (Step A), intersecting objects in the 3D scene (Step B), storing intersected objects (Step C) and storing results in the database (Step D).

Figure 5. Discretization of the 3D space according to rays generated from the viewpoint of interest.

Figure 6. A 1 × 1 km tile of Lyon composed of four types of objects: buildings, terrain, roads and vegetation.

Figure 7. Three rays generated from the viewpoint towards different kinds of objects.

Figure 8. Organization of a model city using a regular grid.

Figure 9. (Left): top view of the 3D skyline for a given point of view (purple). (Right): composition of this skyline.

Figure 10. View composition analysis. (Left): decomposition of the skyline according to the intersected object. (Right): the view composition.

Figure 11. Three-dimensional visualization of a tile (1 × 1 km) of the CityGML model of Lyon.

Figure 12. One-meter-resolution DEM used for a visibility analysis for the Lyon Metropolis: view of the whole DEM covering the city of Lyon (48 sq km) (left) and zoom on a specific part of the city (right).

Figure 14. Three-dimensional visualizations to compare the results of 2.5D raster (left) and 3D vector (right) visibility analyses from the vantage point of the Fourvière Basilica. On the raster analysis, the visible areas are in green, while on the vector one, we only see the visible 3D points colored according to their type. The 3D model of Saint Jean's Cathedral has been added to the bottom visualizations, which zoom in on the panoramic view of the top images.

Figure 15. Three-dimensional visualization using our tools, with each color corresponding to the CityGML category of the resulting 3D points (green: vegetation, grey: buildings, yellow: terrain, blue: water).

Figure 16. Raster analysis results regarding the visibility of the Fourvière Basilica (Bellecour Square). Green pixels indicate that the Basilica is seen, and red pixels that it is not seen. The results are displayed on an aerial image of the square; hence transparency is used.

Figure 17. Same as Figure 16, with the addition of the visibility analysis from our tool (in green, vegetation; in yellow, terrain; in red, roofs; in white, buildings' walls).

Figure 18. Raster analysis results regarding the visibility of the Fourvière Basilica (Bellecour Square). Green pixels indicate areas where the Basilica can be seen, and red pixels indicate places from which it is not visible. The results are displayed on an aerial image with a little transparency. Building footprints are represented in black.

Figure 19. Buildings that have a facade from which the Fourvière Basilica can be seen are shown in white. Buildings from which the Basilica is only visible from the rooftop are excluded.

Figure 20. Visualization of vantage points (each point is a vantage point). On the (left), the number of landmarks seen from the vantage points (from red, five landmarks seen, to yellow, zero landmarks seen). On the (right), the effort needed to access the vantage point on foot, in calories (from light blue, less than 20 calories, to dark blue, more than 70 calories).

Figure 21. Visualization of an imaginary high-rise project (on the right) in the existing business district of La Part-Dieu from the belvedere of Fourvière.

Table 3. Results from the gross comparison between our analysis and a raster analysis from different landmarks in the Lyon area.

Table 4. Results from the comparison between our analysis with a 1 m buffer and a raster analysis from different landmarks in the Lyon area.

Table 5. Results from the comparison for points beyond 1 km between our analysis and a raster analysis from different landmarks in the Lyon area.