Article

Canopy Assessment of Cycling Routes: Comparison of Videos from a Bicycle-Mounted Camera and GPS and Satellite Imagery

by Albert Bourassa ¹,², Philippe Apparicio ¹,*, Jérémy Gelb ¹ and Geneviève Boisjoly ³

¹ Institut National de la Recherche Scientifique, Centre Urbanisation Culture Société, 385 Sherbrooke E, Montréal, QC H2X 1E3, Canada
² Département d’études Urbaines et Touristiques, Université du Québec à Montréal, 315 rue Sainte-Catherine E, Montréal, QC H2X 3X2, Canada
³ Département des Génies Civil, Géologique et des Mines, Polytechnique Montréal, 2500 Chem. de Polytechnique, Montréal, QC H3T 1J4, Canada
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2023, 12(1), 6; https://doi.org/10.3390/ijgi12010006
Submission received: 7 October 2022 / Revised: 7 December 2022 / Accepted: 23 December 2022 / Published: 27 December 2022

Abstract

Many studies have shown that urban greenness is an important factor when cyclists choose a route. Detecting trees along a cycling route is therefore key to assessing the quality of cycling routes and provides further arguments for improving ridership and the design of cycling routes. The rise of video recordings in data collection provides access to a new point of view of a city, with data recorded at eye level. This method may be preferable to the commonly used normalized difference vegetation index (NDVI) from satellite imagery because satellite images are costly to obtain and cloud cover sometimes obscures the view. This study has two objectives: (1) to assess the number of trees along a cycling route using software object detection on videos, specifically the Detectron2 library, and (2) to compare the canopy detected in the videos with other canopy data to determine whether the two are comparable. Using bicycles equipped with cameras and GPS devices, four participants cycled on 141 predefined routes in Montréal over 87 h, for a total of 1199 km. More than 300,000 images were extracted and analyzed using Detectron2. The results show that the detection of trees using the software is accurate. Moreover, the comparison reveals a strong correlation (>0.75) between the two datasets. This means that canopy data could be replaced by video-detected trees, which is particularly relevant in cities where open GIS data on street vegetation are not available.

1. Introduction

Connecting with nature provides many health benefits [1,2,3]. A great way of improving mental and physical health is by using a bicycle to travel and commute [4]. However, many factors such as the built environment affect the decision to use a bicycle as a mode of transportation [5,6]. According to Winters et al. [7], when choosing a route to cycle, beautiful scenery is the second most important motivator in a person’s decision after routes that minimize exposure to noise and air pollution and before cycling paths that are separated from road traffic. Determining the greening of cities, particularly the abundance of street trees, can therefore be a tool for understanding travel behavior or assessing the quality of bicycle routes in a city. Indeed, the tree canopy is an element of nature that contributes to positive emotions [8] and nicer scenery [9]. Routes with trees are also preferred by pedestrians and cyclists [10].
A popular method of determining greenness in cities is to calculate the normalized difference vegetation index (NDVI) from satellite imagery [11,12]. Although interesting, this approach has two significant drawbacks: (1) high-resolution imagery can be costly to obtain, and (2) sometimes, parts of the images are not usable due to cloud cover.
An alternative is to analyze street view images—obtained from Google Street View (GSV) or BMap—which provide a visualization of vegetation as seen by an individual on the street [13,14,15,16]. For example, based on GSV, Li et al. [14] proposed a green view index (GVI) that varies from 0 to 100 for the percentage of street vegetation in city scenery. In this respect, a recent Canadian study based on a survey of 282 adults found a significant positive association between the GVI and participation in recreational activities during the summer, whereas no significant association was found with the NDVI [15]. However, the use of GSV has two important limitations: (1) the street data collection of a whole city can be carried out over several seasons, including winter (Figure 1), or even years, and (2) GSV images have not been captured everywhere in the world, especially in cities in the Global South.
Over the last two decades, video cameras have become a popular tool for recording and analyzing data in the field [17,18], including in transportation studies [19,20,21,22,23]. New algorithms using artificial intelligence can detect the components of an image and identify objects, including features of the built environment. The most popular libraries for this purpose are Detectron2, EfficientDet, YOLO, and Faster R-CNN, with Detectron2 being the most accurate [24]. Furthermore, Detectron2 has already been used in transportation studies to count the number of cars on a highway [25]. To the best of our knowledge, however, no studies currently use such algorithms to determine whether videos taken in a city can accurately show the amount of greenness on different routes and road types. Videos taken at eye level with a camera mounted on a bicycle handlebar could also provide new information thanks to their perspective: unlike satellite imagery, which captures the city from above, street videos capture every obstacle in a city, such as motorized vehicles or construction sites, and could therefore offer a different measure of greenness.

Research Objectives

Previous studies have shown the importance of scenic routes and trees in choosing a cycle path [10,13,26,27,28]. The goal of this paper is twofold. First, we aim to determine if a cyclist’s video footage taken just under eye level can be used to determine the greenness level—which, in this study, is measured as the percentage of street trees—of a route using software object detection, in this case, the Detectron2 library. Second, we want to know if video data might be an alternative to canopy data, especially when data are collected while riding a bicycle. Therefore, we compare eye-level video data with canopy data derived from NDVI data for the same year in Montréal—a city where a large dataset is already available. This allows us to determine the possibility of quantifying street trees with video images and use these data in other studies.

2. Materials and Methods

2.1. Study Area and Primary Data Collection

This study is based on a primary data collection using instrumented bicycles conducted on the island of Montréal (2 million inhabitants in 2020) in June 2019. This extensive mobile data collection has previously been used in recent works to analyze cycling safety, particularly dangerous overtaking [19] and conflict occurrence with motorized vehicles and pedestrians [20]; the reader can refer to these two studies for a detailed description of this primary data collection. Briefly, 4 participants cycled on 141 predefined routes for 87 h and 1199 km. These routes were chosen to maximize the coverage of the road and cycling networks while also taking the diversity of urban micro-environments into consideration [20]. All the subjects gave their informed consent for inclusion before participating in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the Institut National de la Recherche Scientifique (project No. CER 19-509). Each participant was equipped with (1) a GPS watch (Garmin Forerunner 920 XT, Olathe, KS, USA) to record GPS points every second, and (2) an action camera (Garmin VIRB XE, Olathe, KS, USA) mounted on the handlebar of the bicycle to record a video of each route.

2.2. GIS Secondary Data on Road Network and Canopy

As described in previous studies [19,20], all GPS points were map-matched on the OpenStreetMap (OSM) [29] network data and manually validated to extract the type of road (primary, secondary, tertiary, service, residential, etc.) using the highway key from OSM [30] (Table 1).
The canopy data were downloaded from the Montreal Urban Community website [31]. Built from NDVI data and a digital height model (DHM), this open dataset contains four cover categories: low mineral, high mineral, low vegetal, and high vegetal (canopy), where low covers are below three meters above ground level and high covers are above three meters. Mineral covers are distinguished from vegetal covers by the NDVI value (which ranges from −1 to 1), with cells below 0.3 classified as mineral and the rest as vegetal. From these categories, we isolated the high vegetal cover, creating a map of the canopy on the Island of Montréal.
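Under this description, the cover classification reduces to a simple threshold rule. The sketch below illustrates it; the function name and the exact behavior at the threshold values are assumptions for illustration, not the dataset's documented implementation:

```python
def classify_cover(ndvi, height_m, ndvi_min=0.3, height_min=3.0):
    """Classify a raster cell into one of the four cover categories.

    Assumed thresholds follow the dataset description: NDVI >= 0.3
    marks vegetal cover, and a height of 3 m or more marks "high" cover.
    """
    kind = "vegetal" if ndvi >= ndvi_min else "mineral"
    level = "high" if height_m >= height_min else "low"
    return f"{level} {kind}"

# The canopy layer keeps only the "high vegetal" cells.
print(classify_cover(0.62, 11.0))  # high vegetal (canopy)
print(classify_cover(0.12, 1.0))   # low mineral
```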

2.3. Data Processing

The data processing was conducted entirely in Python and is illustrated in Figure 2.
First, one image per second was extracted from each video using the OpenCV library [32]. In total, 311,446 images were generated. Second, each video image was analyzed using the Detectron2 [33] library, which is implemented in PyTorch [34], in order to calculate the number of trees each frame contained. It should be noted that although this study focused on the trees, the Detectron2 algorithm can also identify flowers, grass, and other types of vegetation. The configuration used for Detectron2 was the COCO-PanopticSegmentation file provided by the library [35]. During this process, three other features were extracted: buildings, roads, and sky. As an example, 17.1%, 32.7%, 21.8%, and 25.1% of trees, roads/pavement, sky, and buildings, respectively, were detected in the image in Figure 3. Note that these percentages were annotated on each frame to verify whether they made sense. These percentages were also saved to a text file and their univariate statistics are reported in Table 2.
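Computing the per-image percentages from a segmentation result amounts to counting pixels per class. The pure-Python sketch below illustrates the idea on a per-pixel label mask; the labels and function are illustrative and do not reproduce Detectron2's actual output format:

```python
from collections import Counter

def category_percentages(mask):
    """Percentage of pixels per semantic class in a segmentation mask.

    `mask` is a 2D iterable of class labels (e.g. "tree", "road",
    "sky", "building") such as the per-pixel labels produced by a
    panoptic segmentation model; labels here are illustrative.
    """
    counts = Counter(label for row in mask for label in row)
    total = sum(counts.values())
    return {label: 100.0 * n / total for label, n in counts.items()}

# A toy 2 x 4 mask: 2 tree, 2 sky, 3 road, 1 building pixels.
mask = [["tree", "tree", "sky", "sky"],
        ["road", "road", "road", "building"]]
print(category_percentages(mask))  # tree 25.0, sky 25.0, road 37.5, building 12.5
```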
Third, the results obtained from the Detectron2 analysis were then merged with the GPS points collected on each route and saved in a geopackage file (gpkg). The values were associated with each point using the route filename, as well as the timestamp (DD:HH:MM:SS).
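This merge step is a key-based join on (route filename, timestamp). A minimal sketch, with hypothetical field names rather than the study's actual schema:

```python
def merge_detections(gps_points, frames):
    """Attach per-frame tree percentages to GPS points, joining on
    the (route filename, timestamp) pair. Field names are illustrative."""
    by_key = {(f["file"], f["time"]): f["tree_pct"] for f in frames}
    return [
        {**p, "tree_pct": by_key[(p["file"], p["time"])]}
        for p in gps_points
        if (p["file"], p["time"]) in by_key
    ]

points = [{"file": "route_001.mp4", "time": "01:09:15:02",
           "lat": 45.51, "lon": -73.56}]
frames = [{"file": "route_001.mp4", "time": "01:09:15:02", "tree_pct": 17.1}]
print(merge_detections(points, frames))
```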
Fourth, each route was split into segments of lengths ranging from 100 to 400 m, with a step of 50 m. The different lengths were compared to see if a specific length was more efficient at predicting the canopy at the route level (i.e., a sensitivity analysis). These segments were created from the GPS coordinates of each route using the GeoPandas library [36]. Each segment also received a greenness level (g_S): the weighted mean of the tree percentages (g_i) of the p_S points on the segment, where the weight w_i is the distance between point i and the next point divided by the segment length (l_S):

g_S = ( Σ_{i=1}^{p_S} g_i w_i ) / ( Σ_{i=1}^{p_S} w_i ),  with  w_i = d(i, i+1) / l_S    (1)
Equation (1) addresses the following problem: because we had one point per second, data points accumulated where the cyclist was stopped, while the urban features in the image hardly varied (Figure 4). By weighting each point by the distance to the next point, we smoothed out any accumulation of points that did not provide new information.
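The weighted mean above can be sketched in Python as follows; the point representation (a list of tree-percentage and distance-to-next-point pairs) is an assumption for illustration:

```python
def segment_greenness(points, segment_length_m):
    """Distance-weighted mean tree percentage over a segment (Eq. (1)).

    `points` is an ordered list of (tree_pct, distance_to_next_m)
    pairs for the GPS points on the segment; stationary points get
    near-zero weights, so an idling cyclist does not inflate the value.
    """
    weights = [d / segment_length_m for _, d in points]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(g * w for (g, _), w in zip(points, weights)) / total

# Two points 50 m apart each: a plain average.
print(segment_greenness([(10.0, 50.0), (30.0, 50.0)], 100.0))  # 20.0
# A stopped cyclist (0 m to the next point) contributes nothing.
print(segment_greenness([(10.0, 0.0), (30.0, 100.0)], 100.0))  # 30.0
```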
We then added a buffer of 15 m on each side of each point (Figure 5a). This buffer allowed us to create polygons for each segment representing the field of view. Because we wanted to compare the greenness detected in the videos with the canopy data, we limited the observable area to that seen by the camera: for example, the camera cannot see behind buildings or very far to each side. Manually entering a distance for each point based on the built environment and visual observations would be time-consuming and difficult, so a fixed buffer of 15 m on each side was chosen. We selected this 15 m threshold because, when observed in GIS software, it reflected the average road width for the routes in Montréal. The polygons therefore represented the field of view on each side of the video for most of the city. Because there were points where the buffer was too narrow or too wide, the final results might have been affected.
Finally, it was possible to determine the parts of the canopy derived from NDVI data that intersected with the route segments and calculate a canopy area percentage ( C S , Equation (2)) for each segment (Figure 5b). The univariate statistics of these C S indicators are reported in Table 3.
C_S = ( Area of canopy in segment / Total area of segment ) × 100    (2)
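A minimal sketch of Equation (2), under the simplifying assumption that the buffered segment is a rectangle of width 2 × 15 m (the real polygons follow the GPS trace and have end caps, so this is only an approximation):

```python
BUFFER_M = 15.0  # buffer width on each side of the route

def canopy_percentage(canopy_area_m2, segment_length_m, buffer_m=BUFFER_M):
    """C_S from Eq. (2): share of the buffered segment covered by
    canopy, approximating the segment as a rectangle of width
    2 * buffer_m (end caps and GPS-trace curvature ignored)."""
    segment_area = segment_length_m * 2.0 * buffer_m
    return 100.0 * canopy_area_m2 / segment_area

# 600 m^2 of canopy over a 100 m segment (3000 m^2 of buffer area).
print(canopy_percentage(600.0, 100.0))  # 20.0
```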

2.4. Data Analysis

All the statistical analyses were conducted using R (version 4.0.5) [37]. Following image segmentation (using Detectron2), two types of analyses were performed. First, a Pearson correlation matrix was built to explore the associations between the four categories (tree, road, sky, building). Second, box plots, analysis of variance (ANOVA), and the Kruskal–Wallis test by ranks were used to test whether the percentage of greenness varied with the road types identified by OpenStreetMap. This allowed us to determine if a certain road type had more greenness than others.
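Although the statistical analyses were run in R, the Pearson coefficient used throughout is straightforward to sketch in the project's own language, Python:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

print(pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # ≈ 1.0
print(pearson_r([1.0, 2.0, 3.0], [6.0, 4.0, 2.0]))  # ≈ -1.0
```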
In line with the second objective—to verify whether video data might be an interesting alternative to canopy data for quantifying the vegetation on a given route—bivariate analyses (simple regression and correlation analyses) were performed with the greenness indicators obtained using Detectron2 (g_S) and the canopy indicators (C_S) for the buffered segments from 100 to 400 m.

3. Results

3.1. Tree Detection

The first results were obtained after the videos were processed to detect the four components (i.e., tree, road, sky, building). The algorithm detected the trees in each image quite accurately. Not all of the >300,000 images were reviewed by humans, but every image that was inspected had a plausible percentage of trees identified (Figure 6). These results are in line with the literature, which shows that the Detectron2 library is accurate [24]. In some cases, such as the second picture in the set, small shrubs were classified as trees, which might have affected the results.
The correlation matrix between the proportions of the four categories detected in the 311,446 images is reported in Table 4. The more roads, sky, or buildings detected in the picture, the less greenness there was. The strongest negative correlation was observed between the proportions of the tree and building categories (r = −0.452, p < 0.001). There was also a positive correlation between the road and building categories (r = 0.113, p < 0.001). These results might seem obvious, but they indicate that the image segmentation behaves as expected: the Detectron2 algorithm can correctly determine the parts of the image that correspond to each category.
Once the greenness indicators at the GPS points and buffered segments were obtained, they could be mapped with GIS software such as QGIS [38], as shown for a portion of a route in Figure 7.
Unsurprisingly, the percentage of greenness (trees) varied according to the road type, as illustrated by the box plots in Figure 8. Pedestrian streets had the most greenness, most of them being located in large parks (e.g., Mont-Royal Park). Primary and secondary roads had the least greenness: in the OpenStreetMap classification, these are the most important streets in the road network, most of them wide and used mainly by cars. This also reflects design choices by the City of Montréal, so the results might differ in another city. Conversely, trees were more present on residential streets, cycleways, and pedestrian streets.

3.2. Correlation between Detected Greenness and Canopy Data

Simple linear regression and Pearson correlation coefficients were calculated to assess the relationship between the percentage of trees detected in the images and the percentage of canopy, with segments of 100, 150, 200, 250, 300, and 400 m (Figure 9). All the correlation coefficient values were significant (p < 0.001) and varied slightly between 0.77 and 0.82. The sensitivity analysis showed that the segment length did not have a significant effect on the correlation between the video greenness (tree) and canopy coverage.

4. Discussion

4.1. Using Detectron2 to Detect Greenness

Using Detectron2 to detect trees works well according to the pictures created during the generation of the greenness index. This indicates that it is possible to use videos collected while riding a bicycle in cities to determine the number of street trees on a route. This approach is particularly relevant for cities and towns that do not possess canopy data—data that are often used in scientific research to determine greenness. This approach could also be used to detect other urban features (e.g., roads, buildings, sidewalks), urban objects (traffic lights, fire hydrants, street signs, stop signs, parking meters, benches), and street users (motorized vehicles, pedestrians, cyclists). This is particularly relevant for cities where Open GIS Data, Google Street View images, or satellite imagery are not available. Moreover, because the videos are taken roughly at eye level, they better represent the scenery observed while riding a bicycle in the city. It is easier to see the number of cars on the street, which might hide some of the vegetation, rendering the route less green and therefore less scenic (Figure 10).
The greenness index created from the videos represents a more natural way of determining greenness because the videos are taken at eye level, making it more representative of the scenery experienced by cyclists than satellite imagery. However, the videos were recorded on a specific day and time, so the same route would likely return a different value on another day. Future work could therefore explore how the amount of vegetation visible to a cyclist varies with the time and day (e.g., rush hour versus the rest of the day, weekdays versus weekends) or the season for the same route.

4.2. Comparing Canopy Data to Video Greenness

The strong correlation values (>0.75) obtained between the canopy data indicators and video greenness demonstrate that the proposed approach to evaluating trees using software object detection in videos is relevant. This finding should be validated in other cities, particularly European cities and cities in the Global South, where the amount and species of street trees could be very different.
However, two principal elements could explain the discrepancy between the two datasets. The first is related to the chosen buffer width (15 m) representing the field of view. Ideally, we would have the exact width of each street. Unfortunately, open GIS data on street widths are not available for the study area, which is also true for many cities around the world. The second is that the two indicators measured two different features: (1) the canopy as seen from the sky, and (2) trees as seen from the street, with different obstacles obstructing the view (e.g., motorized vehicles). Although there was a strong correlation between the two, both could be used as distinctive variables in studies observing the different impacts of greenness.

5. Conclusions

To conclude, this paper had two goals: (1) to determine whether videos recorded by a camera fixed on a bicycle’s handlebar can be used to determine greenness, and (2) to determine if videos can replace canopy data. For the first goal, we found that by using Detectron2, we could accurately detect trees in images taken at eye level. The percentage of trees (and other categories such as flowers and grass [35]) offers a more realistic view since it records the same view as the cyclist including road obstacles. This may help in calculating a more representative greenness level in different cities, especially when including all types of greenness in the detection algorithm. For the second goal, strong correlations were found between the two types of vegetation indicators. This means that canopy data could be replaced by video-detected greenness. This finding can, therefore, be useful for future urban mobility studies that already use cameras and take into consideration a user’s perspective of vegetation. It could also be applied to other aspects of the built environment such as buildings and roads.

Author Contributions

Conceptualization, Albert Bourassa, Philippe Apparicio, Jérémy Gelb, and Geneviève Boisjoly; methodology, Albert Bourassa, Philippe Apparicio, and Jérémy Gelb; software, Albert Bourassa; data validation, Albert Bourassa, and Philippe Apparicio; statistical analyses, Albert Bourassa, and Philippe Apparicio; writing—original draft preparation, Albert Bourassa and Philippe Apparicio; writing—review and editing, Albert Bourassa, Philippe Apparicio, Jérémy Gelb, and Geneviève Boisjoly; supervision, project administration, funding acquisition, Philippe Apparicio. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financially supported by the Social Sciences and Humanities Research Council of Canada (SSHRC) through an Insight Grant (435-2019-0796).

Institutional Review Board Statement

This study has been approved by the Research Ethics Board of the Institut National de la Recherche Scientifique (project No. CER 19-509, date of approval: 28 May 2019).

Informed Consent Statement

Informed consent was obtained from all the subjects involved in the study.

Data Availability Statement

The Python code to evaluate the canopy detection with Detectron2 is available at: https://gitlab.com/albert.bourassa/detectron2_canopy (accessed on 24 August 2022).

Acknowledgments

Special thanks to the four cyclists who were involved in the data collection.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bratman, G.N.; Anderson, C.B.; Berman, M.G.; Cochran, B.; de Vries, S.; Flanders, J.; Folke, C.; Frumkin, H.; Gross, J.J.; Hartig, T.; et al. Nature and mental health: An ecosystem service perspective. Sci. Adv. 2019, 5, eaax0903.
  2. Coventry, P.A.; Brown, J.E.; Pervin, J.; Brabyn, S.; Pateman, R.; Breedvelt, J.; Gilbody, S.; Stancliffe, R.; McEachan, R.; White, P.L. Nature-based outdoor activities for mental and physical health: Systematic review and meta-analysis. SSM-Popul. Health 2021, 16, 100934.
  3. Bowler, D.E.; Buyung-Ali, L.M.; Knight, T.M.; Pullin, A.S. A systematic review of evidence for the added benefits to health of exposure to natural environments. BMC Public Health 2010, 10, 456.
  4. Group, B.M.J.P. The health risks and benefits of cycling in urban environments compared with car use: Health impact assessment study. BMJ 2011, 343, d5306.
  5. Majumdar, B.B.; Mitra, S.; Pareekh, P. On identification and prioritization of motivators and deterrents of bicycling. Transp. Lett. 2020, 12, 591–603.
  6. Cervero, R.; Caldwell, B.; Cuellar, J. Bike-and-ride: Build it and they will come. J. Public Transp. 2013, 16, 5.
  7. Winters, M.; Brauer, M.; Setton, E.M.; Teschke, K. Built environment influences on healthy transportation choices: Bicycling versus driving. J. Urban Health 2010, 87, 969–993.
  8. Stefansdottir, H. A theoretical perspective on how bicycle commuters might experience aesthetic features of urban space. J. Urban Des. 2014, 19, 496–510.
  9. Parsons, R.; Tassinary, L.G.; Ulrich, R.S.; Hebl, M.R.; Grossman-Alexander, M. The view from the road: Implications for stress recovery and immunization. J. Environ. Psychol. 1998, 18, 113–140.
  10. Lusk, A.C.; Da Silva Filho, D.F.; Dobbert, L. Pedestrian and cyclist preferences for tree locations by sidewalks and cycle tracks and associated benefits: Worldwide implications from a study in Boston, MA. Cities 2020, 106, 102111.
  11. Reid, C.E.; Kubzansky, L.D.; Li, J.; Shmool, J.L.; Clougherty, J.E. It’s not easy assessing greenness: A comparison of NDVI datasets and neighborhood types and their associations with self-rated health in New York City. Health Place 2018, 54, 92–101.
  12. Rhew, I.C.; Vander Stoep, A.; Kearney, A.; Smith, N.L.; Dunbar, M.D. Validation of the Normalized Difference Vegetation Index as a measure of neighborhood greenness. Ann. Epidemiol. 2011, 21, 946–952.
  13. Lu, Y.; Yang, Y.; Sun, G.; Gou, Z. Associations between overhead-view and eye-level urban greenness and cycling behaviors. Cities 2019, 88, 10–18.
  14. Li, X.; Zhang, C.; Li, W.; Ricard, R.; Meng, Q.; Zhang, W. Assessing street-level urban greenery using Google Street View and a modified green view index. Urban For. Urban Green. 2015, 14, 675–685.
  15. Villeneuve, P.J.; Ysseldyk, R.L.; Root, A.; Ambrose, S.; DiMuzio, J.; Kumar, N.; Shehata, M.; Xi, M.; Seed, E.; Li, X.; et al. Comparing the Normalized Difference Vegetation Index with the Google Street View measure of vegetation to assess associations between greenness, walkability, recreational physical activity, and health in Ottawa, Canada. Int. J. Environ. Res. Public Health 2018, 15, 1719.
  16. Gao, F.; Li, S.; Tan, Z.; Zhang, X.; Lai, Z.; Tan, Z. How is urban greenness spatially associated with dockless bike sharing usage on weekdays, weekends, and holidays? ISPRS Int. J. Geo-Inf. 2021, 10, 238.
  17. Garrett, B.L. Videographic geographies: Using digital video for geographic research. Prog. Hum. Geogr. 2011, 35, 521–541.
  18. Büscher, M.; Urry, J. Mobile methods and the empirical. Eur. J. Soc. Theory 2009, 12, 99–116.
  19. Henao, A.; Apparicio, P. Dangerous overtaking of cyclists in Montréal. Safety 2022, 8, 16.
  20. Jarry, V.; Apparicio, P. Ride in peace: How cycling infrastructure types affect traffic conflict occurrence in Montréal, Canada. Safety 2021, 7, 63.
  21. Ismail, K.; Sayed, T.; Saunier, N.; Lim, C. Automated analysis of pedestrian–vehicle conflicts using video data. Transp. Res. Rec. 2009, 2140, 44–54.
  22. Jackson, S.; Miranda-Moreno, L.F.; St-Aubin, P.; Saunier, N. Flexible, mobile video camera system and open source video analysis software for road safety and behavioral analysis. Transp. Res. Rec. 2013, 2365, 90–98.
  23. Saunier, N.; Sayed, T. Automated analysis of road safety with video data. Transp. Res. Rec. 2007, 2019, 57–64.
  24. Jabir, B.; Noureddine, F.; Rahmani, K. Accuracy and efficiency comparison of object detection open-source models. Int. J. Online Biomed. Eng. (iJOE) 2021, 17, 165.
  25. Mandal, V.; Adu-Gyamfi, Y. Object detection and tracking algorithms for vehicle counting: A comparative analysis. J. Big Data Anal. Transp. 2020, 2, 251–261.
  26. Winters, M.; Davidson, G.; Kao, D.; Teschke, K. Motivators and deterrents of bicycling: Comparing influences on decisions to ride. Transportation 2011, 38, 153–168.
  27. Mertens, L.; Van Dyck, D.; Ghekiere, A.; De Bourdeaudhuij, I.; Deforche, B.; Van de Weghe, N.; Van Cauwenberg, J. Which environmental factors most strongly influence a street’s appeal for bicycle transport among adults? A conjoint study using manipulated photographs. Int. J. Health Geogr. 2016, 15, 31.
  28. Wang, R.; Lu, Y.; Wu, X.; Liu, Y.; Yao, Y. Relationship between eye-level greenness and cycling frequency around metro stations in Shenzhen, China: A big data approach. Sustain. Cities Soc. 2020, 59, 102201.
  29. OpenStreetMap Contributors. Planet Dump Retrieved. 2017. Available online: https://planet.osm.org (accessed on 26 December 2022).
  30. OpenStreetMap. Key:highway. 2022. Available online: https://wiki.openstreetmap.org/wiki/Key:highway (accessed on 26 December 2022).
  31. Communauté métropolitaine de Montréal. Données géoréférencées de l’Observatoire du Grand Montréal: Indice de canopée métropolitain. 2022. Available online: https://observatoire.cmm.qc.ca/produits/donnees-georeferencees/#indice_canopee (accessed on 26 December 2022).
  32. Bradski, G. The openCV library. Dr. Dobb’s J. Softw. Tools Prof. Program. 2000, 25, 120–123.
  33. Wu, Y.; Kirillov, A.; Massa, F.; Lo, W.Y.; Girshick, R. Detectron2. 2019. Available online: https://github.com/facebookresearch/detectron2 (accessed on 26 December 2022).
  34. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32; Wallach, H., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., Fox, E., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2019; pp. 8024–8035.
  35. Lin, T.Y.; Maire, M.; Belongie, S.; Bourdev, L.; Girshick, R.; Hays, J.; Perona, P.; Ramanan, D.; Zitnick, C.L.; Dollár, P. Microsoft COCO: Common objects in context. arXiv 2015, arXiv:1405.0312.
  36. Jordahl, K.; Bossche, J.V.d.; Fleischmann, M.; Wasserman, J.; McBride, J.; Gerard, J.; Tratner, J.; Perry, M.; Badaracco, A.G.; Farmer, C.; et al. geopandas/geopandas: V0.8.1. Zenodo. 2020.
  37. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021.
  38. QGIS Development Team. QGIS Geographic Information System. 2022. Available online: https://www.qgis.org (accessed on 26 December 2022).
Figure 1. Google Street View image for a road in Montreal in 2020.
Figure 2. Data processing in Python.
Figure 3. Segmented image; 17.08% of image pixels were classified as trees.
Figure 4. GPS points on a route with different distances between them.
Figure 5. Example of road segments. (a) Road segment polygons. (b) Road segments with overlapping canopy.
Figure 6. Detection of trees in different settings.
Figure 7. Mapping results along a route. (a) Google Maps imagery and 100 m buffered segment in red. (b) GPS points and 100 m buffered segments. The greenness of the GPS points is defined as the percentage of trees detected in the image using Detectron2.
Figure 8. Greenness per road type (i.e., percentage of trees). ANOVA: Welch’s F(8, 311,437) = 6842, p < 0.001, Eta² = 0.15. Kruskal–Wallis test: χ²(8) = 39,112, p < 0.001.
Figure 9. Greenness and canopy correlation.
Figure 10. Motorized vehicle blocking trees.
Table 1. GPS points and road types.

Type of Road ¹ | N       | %     | HH:MM:SS
Primary        | 14,650  | 4.70  | 04:04:10
Secondary      | 70,968  | 22.79 | 09:42:48
Tertiary       | 69,996  | 22.47 | 19:26:36
Service        | 2233    | 0.72  | 00:37:13
Residential    | 96,636  | 31.03 | 26:50:36
Unclassified   | 5938    | 1.91  | 01:38:58
Cycleway       | 40,587  | 13.03 | 11:16:27
Footway        | 8765    | 2.81  | 02:26:05
Pedestrian     | 1673    | 0.54  | 00:27:53
Total          | 311,446 | 100.0 | 86:30:46
¹ Based on the highway key from OpenStreetMap [30].
Table 2. Univariate statistics for the four detected categories in the images (n = 311,446).

Percentile | Tree | Road | Sky  | Building
1          | 0.0  | 0.5  | 0.0  | 0.0
5          | 0.8  | 10.6 | 1.5  | 0.0
10         | 2.3  | 18.0 | 3.2  | 0.0
25         | 7.3  | 28.6 | 8.1  | 0.0
50         | 17.5 | 39.3 | 16.3 | 2.7
75         | 29.9 | 48.0 | 26.7 | 11.2
90         | 41.8 | 55.3 | 36.9 | 22.9
95         | 49.4 | 58.9 | 42.9 | 30.1
99         | 65.8 | 65.6 | 54.2 | 42.4
Mean       | 20.2 | 37.8 | 18.5 | 7.5
SD ¹       | 15.7 | 14.4 | 13.0 | 10.3
¹ SD: standard deviation.
Table 3. Univariate statistics for the percentages of the canopy within the buffered segments.

Segment Length | 100 m  | 150 m | 200 m | 250 m | 300 m | 400 m
n ¹            | 13,978 | 9309  | 6972  | 5569  | 4625  | 3444
Percentile 1   | 0.0    | 0.0   | 0.0   | 0.0   | 0.0   | 0.0
Percentile 5   | 0.0    | 0.0   | 0.1   | 0.2   | 0.4   | 0.8
Percentile 10  | 0.2    | 0.7   | 1.1   | 1.7   | 1.7   | 2.2
Percentile 25  | 4.2    | 5.1   | 5.5   | 6.6   | 6.6   | 7.0
Percentile 50  | 14.5   | 15.0  | 15.3  | 15.9  | 15.9  | 16.3
Percentile 75  | 29.6   | 28.9  | 28.9  | 28.6  | 28.6  | 28.0
Percentile 90  | 46.7   | 45.3  | 43.6  | 42.6  | 42.0  | 40.8
Percentile 95  | 57.4   | 55.5  | 53.6  | 52.8  | 51.6  | 49.4
Percentile 99  | 82.9   | 77.8  | 77.5  | 76.3  | 73.5  | 72.3
Mean           | 19.6   | 19.6  | 19.6  | 19.6  | 19.6  | 19.5
SD ²           | 19.1   | 18.2  | 17.6  | 17.2  | 16.7  | 16.2
¹ n: number of buffered segments. ² SD: standard deviation.
Table 4. Pearson correlation matrix between the proportions of the four detected categories ¹.

         | Tree   | Road   | Sky    | Building
Tree     | —      | −0.364 | −0.416 | −0.452
Road     | −0.364 | —      | 0.200  | 0.113
Sky      | −0.416 | 0.200  | —      | −0.151
Building | −0.452 | 0.113  | −0.151 | —
¹ All correlation values were significant at p < 0.001.