# HybVOR: A Voronoi-Based 3D GIS Approach for Camera Surveillance Network Placement


## Abstract


## 1. Introduction

## 2. Related Works

#### 2.1. Target-Based Approaches

#### 2.2. Landscape-Based Approaches

#### 2.3. Discussion

## 3. Coverage Calculation Based on the Line of Sight

**Figure 2.** Visibility based on the line of sight. The target T1 is visible from O, but the target T2 is not visible from O (gray line: visible; dashed line: invisible).

#### 3.1. Construction of “Lines of Sight” for Viewshed Analysis

**Figure 3.** The parameters used to perform viewshed analysis in ArcGIS 10.1 [29].

- **SPOT**: corresponds to the ground elevation at the observer point (i.e., the camera).
- **OF1** and **OF2**: these two parameters define the vertical elevation to be added to the ground elevation of the observer, **OF1** (OFFSETA), and of the target, **OF2** (OFFSETB).
- **AZ1** and **AZ2**: these two values define the range of the horizontal angle that characterizes the scan from the observer (i.e., the camera). **AZ1** (AZIMUTH1) corresponds to the starting angle of the scan range and **AZ2** (AZIMUTH2) corresponds to the ending angle. Note that the values of **AZ1** and **AZ2** may range from 0° to 360°, where 0° is defined by the north direction.
- **V1** and **V2**: these two elements delimit the vertical range of the scan from the observer. **V1** (VERT1) defines the upper limit of the vertical angle and **V2** (VERT2) defines the lower limit. Note that **V1** and **V2** may vary from −90° to 90°.
- **R1** and **R2**: these parameters determine the distance that may be covered by the observer (i.e., the camera). **R1** (RADIUS1) corresponds to the starting distance from which a target may be visible (points closer than **R1** are surely not visible). **R2** (RADIUS2) corresponds to the ending distance; any point beyond this distance is surely not visible.

#### 3.2. Calculation of the Viewshed

$O_i(x_i, y_i, z_i)$ is the observation point, where $i$ is the index of each observer point, $i = 1, 2, \dots, n$. Note that $z_i = SPOT_i + OF1_i$. $T_j(x_j, y_j, z_j)$ is the target point, where $j$ is the index of each target, $j = 1, 2, \dots, m$. Note that $z_j = H_j + OF2_i$, where $H_j$ is the elevation of the surface that corresponds to the target $T_j$. $\overrightarrow{OT_{ij}} = (x_j - x_i,\; y_j - y_i,\; z_j - z_i)$ is the line of sight constructed from the observer $O_i$ with index $i$ to the target $T_j$ with index $j$. The line of sight distance is written $\|OT_{ij}\|$.

The target $T_j$ is **not visible** from the observer $O_i$ if one of the following conditions is satisfied:

1. The line of sight $OT_{ij}$ between the observer $O_i$ and the target $T_j$ is obscured by one or more obstacles (Figure 3).
2. The target $T_j$ is outside the distance range defined by $R1_i$ and $R2_i$: $\|OT_{ij}\| < R1_i$ or $\|OT_{ij}\| > R2_i$.
3. The target $T_j$ is outside the horizontal angle range defined by $AZ1_i$ and $AZ2_i$: $\arctan\!\big((y_j - y_i)/(x_j - x_i)\big) < AZ1_i$ or $\arctan\!\big((y_j - y_i)/(x_j - x_i)\big) > AZ2_i$.
4. The target $T_j$ is outside the vertical angle range defined by $V1_i$ and $V2_i$: $\arcsin\!\big((z_j - z_i)/\|OT_{ij}\|\big) < V2_i$ or $\arcsin\!\big((z_j - z_i)/\|OT_{ij}\|\big) > V1_i$.

If none of conditions (1) to (4) is satisfied, then the target $T_j$ **is visible** from the observer $O_i$.
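Conditions (2) to (4) can be sketched as a small function. This is a minimal illustration, not the paper's implementation: it assumes Cartesian coordinates in metres with x pointing east and y pointing north, compass azimuths measured clockwise from north with AZ1 ≤ AZ2 (no wrap-around), and it omits condition (1), the obstacle test, which requires a surface model.

```python
import math

def is_visible(obs, tgt, R1, R2, AZ1, AZ2, V1, V2):
    """Check conditions (2)-(4): distance, azimuth, and vertical-angle ranges.

    obs, tgt: (x, y, z) tuples in metres (x east, y north).
    Angles are in degrees; V1 is the upper and V2 the lower vertical limit.
    Condition (1), the obstacle test, needs a terrain model and is omitted.
    """
    dx, dy, dz = tgt[0] - obs[0], tgt[1] - obs[1], tgt[2] - obs[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)

    # Condition (2): the target must lie within the radial range [R1, R2].
    if dist < R1 or dist > R2:
        return False

    # Condition (3): compass bearing (0 deg = north, clockwise), assuming
    # a non-wrapping scan range AZ1 <= AZ2.
    azimuth = math.degrees(math.atan2(dx, dy)) % 360.0
    if not (AZ1 <= azimuth <= AZ2):
        return False

    # Condition (4): vertical angle within [V2, V1].
    vert = math.degrees(math.asin(dz / dist))
    if not (V2 <= vert <= V1):
        return False

    return True
```

For example, a target 70 m away at the same height as an omnidirectional camera with a 200 m range passes all three checks, while a target 500 m away fails the distance check.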

## 4. Basics of Voronoi Diagram

#### Formal Definition of Voronoi Diagram

Consider $S = \{s_1, s_2, \dots, s_n\}$ as a set of points, called "sites" or "generators", in a space $E$ ($\mathbb{R}^2$ or $\mathbb{R}^3$). Also, consider $s_i$ and $s_j$ as two sites that belong to $S$. The dominance ($DOM$) of $s_i$ over $s_j$ may be formulated as the subset of the space $E$ being at least as close to $s_i$ as to $s_j$ [32]:

$$DOM(s_i, s_j) = \{\, p \in E \mid Dist(p, s_i) \le Dist(p, s_j) \,\}$$

where $Dist(a, b)$ is the Euclidean distance between points $a$ and $b$, and $p$ is a point that belongs to the space $E$.

The Voronoi region ($VOR$) generated by a site $s_i$ is the result of the intersection of all the dominances of $s_i$ over the remaining sites in $S$. This means [32]:

$$VOR(s_i) = \bigcap_{j \ne i} DOM(s_i, s_j)$$

The Voronoi diagram ($DIAVOR$) generated from a set of sites $S = \{s_1, s_2, \dots, s_n\}$ is the collection of all Voronoi regions, which divides the entire space:

$$DIAVOR(S) = \{\, VOR(s_1), VOR(s_2), \dots, VOR(s_n) \,\}$$

**Figure 5.** Voronoi diagrams generated from lines and polygons. (**A**) Unweighted lines, (**B**) weighted lines, (**C**) unweighted polygons, and (**D**) weighted polygons (adapted from [33]).
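As a minimal illustration of these definitions, a point p belongs to the Voronoi region of a site exactly when that site dominates every other site at p. The brute-force membership test below is a sketch for 2D point sites only (the helper names are ours, not from the paper):

```python
import math

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def dominates(p, si, sj):
    """DOM(si, sj): is p at least as close to si as to sj?"""
    return dist(p, si) <= dist(p, sj)

def in_voronoi_region(p, i, sites):
    """VOR(si): p lies in the region of sites[i] iff sites[i]
    dominates every other site of S at p."""
    return all(dominates(p, sites[i], s)
               for k, s in enumerate(sites) if k != i)
```

With sites at (0, 0) and (10, 0), the point (2, 0) falls in the first site's region: the first site dominates the second there, and the converse test fails.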

## 5. Camera Placement based on HybVOR Approach

#### 5.1. Generating Voronoi Diagram from Buildings

#### 5.1.1. Minimizing the Number of Cameras for a Maximum Possible Coverage

**Figure 6.** Advantage of placing a camera on a Voronoi edge. (**A**) The camera is placed on the Voronoi edge; (**B**,**C**) the camera is placed farther from the Voronoi edge.

#### 5.1.2. Providing a Better Observability for Buildings Facades and Entrances

**Figure 7.** The impact of the foreshortening effect. (**A**) Image without the foreshortening effect and (**B**) image with the foreshortening effect.

#### 5.1.3. Ensuring a Maximum Coverage for Roads

#### 5.2. Deploying Camera Network

- **OFFSETB = 0** (there is no artificial target on the field).
- **AZIMUTH1 = 0°** (the starting horizontal angle of the scan range).
- **AZIMUTH2 = 360°** (the ending horizontal angle of the scan range).
- **VERT1 = 90°** (the upper limit of the vertical angle).
- **VERT2 = −90°** (the lower limit of the vertical angle).
- **RADIUS1 = 0** (the starting distance from which a point may be visible).
- **RADIUS2 = constant** (the ending distance beyond which any point is surely not visible; this parameter represents the camera range).
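These fixed settings can be gathered in a small configuration mapping shared by every camera. This is only a sketch; the 2000 m value for RADIUS2 is an arbitrary placeholder for the camera-specific range constant, which the paper leaves as a parameter.

```python
# Viewshed parameters that are constant for every camera in the network.
# RADIUS2 stands in for the camera range; 2000 m is a placeholder value.
VIEWSHED_DEFAULTS = {
    "OFFSETB": 0,      # no artificial target height
    "AZIMUTH1": 0,     # start of the horizontal scan (degrees from north)
    "AZIMUTH2": 360,   # end of the horizontal scan
    "VERT1": 90,       # upper limit of the vertical angle
    "VERT2": -90,      # lower limit of the vertical angle
    "RADIUS1": 0,      # minimum distance from which a point may be visible
    "RADIUS2": 2000,   # camera range in metres (placeholder constant)
}
```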

The two remaining parameters, **SPOT** and **OFFSETA**, vary per camera:

- **SPOT** corresponds to the ground elevation where the camera will be placed.
- **OFFSETA** is related to the height of the buildings to be monitored from each camera.

- If the length of a Voronoi edge is less than the range of the camera, then this edge will not be divided.
- If the length of a Voronoi edge is more than the range of the camera, then this edge will be divided into equal segments. The number of these new segments is calculated based on the following procedure:

- $N_s$: the number of new segments generated from splitting a Voronoi edge.
- **Ceil**: the ceiling function, which rounds a number up to the nearest integer (for example, Ceil(1.45) = 2).
- **Mod**: the function that calculates the remainder of a division.
- $L_{VE}$: the original length of the Voronoi edge.
- $R_{Cam}$: the range of the camera.

$L_{SpVE}$ is the length of each new segment that is produced from splitting the corresponding original Voronoi edge.

**Figure 9.** Placing cameras on a Voronoi edge. (**A**) The Voronoi edge length is less than the camera range; (**B**) the Voronoi edge length is more than the camera range.
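The splitting rule above can be sketched as follows. This is our reading of the procedure (an edge no longer than the camera range is kept whole; otherwise it is cut into the smallest number of equal segments that each fit within the range), not the paper's exact code:

```python
import math

def split_voronoi_edge(l_ve, r_cam):
    """Split a Voronoi edge of length l_ve into equal segments that are
    each no longer than the camera range r_cam.

    Returns (ns, l_spve): the number of segments and the length of each.
    An edge shorter than (or equal to) the camera range stays whole.
    """
    if l_ve <= r_cam:
        return 1, l_ve
    ns = math.ceil(l_ve / r_cam)   # Ns: smallest sufficient segment count
    return ns, l_ve / ns           # L_SpVE: length of each new segment
```

For instance, a 250 m edge with a 100 m camera range is split into three segments of about 83.3 m each, while an 80 m edge is left undivided.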

$D_{max}$ is the maximum allowed distance between a building and a Voronoi edge. Based on Figure 11, the distance $D_{max}$ is given by Equation (7).

Note that $D_{max}$ in Equation (7) is an approximation under a 2D assumption. A calculation under a 3D assumption gives values that are almost equal for cameras with a height of about 10 m.

- The distance between a duplicated Voronoi edge and the corresponding building should not exceed the distance $D_{max}$ given in Equation (7).
- Cameras placed on a duplicated Voronoi edge should ensure a full coverage of the origin Voronoi edge.
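The two constraints above amount to a simple acceptance test for a candidate duplicated edge. The sketch below assumes the edge-to-building distance, the $D_{max}$ value from Equation (7), and the covered fraction of the origin edge have already been computed elsewhere; the function and argument names are ours.

```python
def duplicated_edge_ok(dist_to_building, d_max, covered_fraction):
    """Accept a duplicated Voronoi edge only if both constraints hold.

    dist_to_building: distance from the duplicated edge to its building (m).
    d_max: maximum allowed distance, from Equation (7) (supplied, not derived).
    covered_fraction: share of the origin Voronoi edge covered by the cameras
    on the duplicated edge; 1.0 means full coverage.
    """
    return dist_to_building <= d_max and covered_fraction >= 1.0
```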

#### 5.3. Hybrid Assessing of the Camera Network Coverage

#### 5.3.1. Raster-Based Viewshed Calculation

- For each pixel that belongs to the resulting DSM:
  - If the corresponding value in the buildings' roof raster file is null, then the pixel value is taken from the raster DEM.
  - Otherwise, the pixel value is taken from the corresponding pixel in the buildings' roof raster file.
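The per-pixel merge described above can be sketched with NumPy, assuming the DEM and the buildings' roof raster are pre-aligned arrays of equal shape with NaN marking null roof pixels (the paper does not prescribe a particular raster library):

```python
import numpy as np

def build_dsm(dem, roofs):
    """Merge a DEM with a buildings' roof raster into a DSM.

    Where the roof raster is null (NaN), keep the ground elevation from
    the DEM; elsewhere take the roof elevation.
    """
    return np.where(np.isnan(roofs), dem, roofs)
```

On a 2 x 2 example where only one pixel carries a 12 m roof, the DSM keeps the DEM's ground elevations everywhere else.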

**OFFSETA** (cf. Section 3.1). This offset value will be chosen according to the height of the two buildings that are considered as generators of the Voronoi edge where the camera is placed. The camera height is mainly related to the possibility of constructing poles to place cameras. Two situations occur:

The parameters **OFFSETB**, **AZIMUTH1**, **AZIMUTH2**, **VERT1**, **VERT2**, **RADIUS1**, and **RADIUS2** are also constant. However, **SPOT** and **OFFSETA** may vary depending on the location of each camera. The attribute **ID_Camera** represents the primary key of the class "Camera", i.e., each camera has a unique identifier.

**Figure 14.** Unified Modeling Language notation of the many-to-many relationship between the classes "Camera" and "Entrance".

- If one or many objects in the environment intersect all lines of sight between the camera and the target, then this target is completely invisible from the camera.
- If one or many objects in the environment intersect some lines of sight (but not all of them) between the camera and the target, then this target is partially visible from the camera.
- If no object intersects any line of sight between the camera and the target, then this target is fully visible from the camera.
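This three-way classification can be sketched directly from the intersection results. The sketch assumes each line of sight has already been tested against the 3D environment, yielding one boolean per line that is True when some object blocks it:

```python
def classify_target(blocked):
    """Classify a target from its lines of sight to a camera.

    blocked: list of booleans, one per line of sight, True if some
    object in the environment intersects that line.
    """
    if all(blocked):
        return "invisible"          # every line of sight is obscured
    if any(blocked):
        return "partially visible"  # some, but not all, lines obscured
    return "fully visible"          # no line of sight is obscured
```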

**Figure 15.** Three possible cases that may result from 3D vector-based analysis of visibility. Case 1: the target is partially visible; Case 2: the target is completely invisible; Case 3: the target is fully visible.

## 6. Results and Discussion

#### 6.1. Case Study: Jeddah Sea Port

#### 6.2. Results

**Figure 18.** The Digital Elevation Model (DEM) and Digital Surface Model (DSM) used for the raster-based viewshed calculation.

#### 6.3. Discussion

A surface coverage of almost 100% is obtained from the raster-based viewshed calculation (surface of visible zones = 0.13 km², surface of non-visible zones = 0.036 km², where the non-visible zones mainly correspond to some buildings' surfaces). Then, the 3D vector-based analysis of visibility is used to ensure that the main buildings' entrances are fully covered by the camera network.

## 7. Conclusion and Future Works

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References

1. Fehr, D.; Fiore, L.; Papanikolopoulos, N. Issues and solutions in surveillance camera placement. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), St. Louis, MO, USA, 10–15 October 2009; pp. 3780–3785.
2. Jacobs, W.; Ducruet, C.; De Langen, P. Integrating world cities into production networks: The case of port cities. Glob. Netw. **2010**, 10, 92–113.
3. Peckham, C. An overview of maritime and port security. In Proceedings of the 2012 IEEE Conference on Technologies for Homeland Security (HST), Waltham, MA, USA, 13–15 November 2012; pp. 260–265.
4. Bocca, E.; Viazzo, S.; Longo, F.; Mirabelli, G. Developing data fusion systems devoted to security control in port facilities. In Proceedings of the 2005 Winter Simulation Conference, Orlando, FL, USA, 4–7 December 2005; pp. 445–449.
5. Ercan, A.O.; Yang, D.B.; El Gamal, A.; Guibas, L.J. Optimal placement and selection of camera network nodes for target localization. Distrib. Comput. Sens. Syst. **2006**, 4026, 389–404.
6. Bodor, R.; Drenner, A.; Schrater, P.; Papanikolopoulos, N. Optimal camera placement for automated surveillance tasks. J. Intell. Robot. Syst. **2007**, 50, 257–295.
7. Chen, S.Y.; Li, Y.F. Automatic sensor placement for model-based robot vision. IEEE Trans. Syst. Man Cybern. Part B: Cybern. **2004**, 34, 393–408.
8. Ghosh, S.K. Approximation algorithms for art gallery problems in polygons. Discret. Appl. Math. **2010**, 158, 718–722.
9. Carranza, J.; Theobalt, C.; Magnor, M.A.; Seidel, H.P. Free-viewpoint video of human actors. ACM Trans. Graph. **2003**, 22, 569–577.
10. Cheung, K.M.G.; Baker, S.; Kanade, T. Shape-from-silhouette of articulated objects and its use for human body kinematics estimation and motion capture. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. **2003**, 1, 77–84.
11. Weik, S.; Liedtke, C.E. Hierarchical 3D pose estimation for articulated human body models from a sequence of volume data. In Robot Vision, Proceedings of the 2001 International Workshop RobVis, Auckland, New Zealand, 16–18 February 2001; Springer: Berlin, Germany, 2001; pp. 27–34.
12. Ahmed, N.; Kanhere, S.S.; Jha, S. The holes problem in wireless sensor networks: A survey. ACM SIGMOBILE Mob. Comput. Commun. Rev. **2005**, 9, 4–18.
13. Liu, B.; Towsley, D. A study of the coverage of large-scale sensor networks. In Proceedings of the 2004 IEEE International Conference on Mobile Ad-hoc and Sensor Systems, Fort Lauderdale, FL, USA, 25–27 October 2004; pp. 475–483.
14. O'Rourke, J. Art Gallery Theorems and Algorithms; Oxford University Press: Oxford, UK, 1987.
15. Wang, Y.C.; Tseng, Y.C. Distributed deployment schemes for mobile wireless sensor networks to ensure multilevel coverage. IEEE Trans. Parallel Distrib. Syst. **2008**, 19, 1280–1294.
16. Adickes, M.D.; Billo, R.E.; Norman, B.; Banerjee, S.; Nnaji, B.O.; Rajgopal, J. Optimization of indoor wireless communication network layouts. IIE Trans. **2002**, 34, 823–836.
17. Raisanen, L.; Whitaker, R.M.; Hurley, S. A comparison of randomized and evolutionary approaches for optimizing base station site selection. In Proceedings of the 2004 ACM Symposium on Applied Computing, Nicosia, Cyprus, 14–17 March 2004; pp. 1159–1165.
18. Anderson, H.R.; McGeehan, J.P. Optimizing microcell base station locations using simulated annealing techniques. In Proceedings of the 1994 IEEE 44th Vehicular Technology Conference, Stockholm, Sweden, 8–10 June 1994; pp. 858–862.
19. Argany, M.; Mostafavi, M.A.; Akbarzadeh, V.; Gagné, C.; Yaagoubi, R. Impact of the quality of spatial 3D city models on sensor networks placement optimization. Geomatica **2012**, 66, 291–305.
20. Wang, G.; Cao, G.; La Porta, T. Movement-assisted sensor deployment. IEEE Trans. Mob. Comput. **2006**, 5, 640–652.
21. Akbarzadeh, V.; Gagné, C.; Parizeau, M.; Argany, M.; Mostafavi, M.A. Probabilistic sensing model for sensor placement optimization based on line-of-sight coverage. IEEE Trans. Instrum. Meas. **2013**, 62, 293–303.
22. Huang, C.F.; Tseng, Y.C. The coverage problem in a wireless sensor network. Mob. Netw. Appl. **2005**, 10, 519–528.
23. Argany, M.; Mostafavi, M.A.; Karimipour, F.; Gagné, C. A GIS based wireless sensor network coverage estimation and optimization: A Voronoi approach. Lect. Notes Comput. Sci. **2011**, 6970, 151–172.
24. Slijepcevic, S.; Potkonjak, M. Power efficient organization of wireless sensor networks. IEEE Int. Conf. Commun. **2001**, 2, 472–476.
25. Tian, D.; Georganas, N.D. A coverage-preserving node scheduling scheme for large wireless sensor networks. In Proceedings of the 1st ACM International Workshop on Wireless Sensor Networks and Applications (WSNA 2002), Atlanta, GA, USA, 28 September 2002; pp. 32–41.
26. Ye, F.; Zhong, G.; Lu, S.; Zhang, L. PEAS: A robust energy conserving protocol for long-lived sensor networks. In Proceedings of the 23rd International Conference on Distributed Computing Systems, Providence, RI, USA, 19–22 May 2003.
27. De Floriani, L.; Magillo, P. Algorithms for visibility computation on terrains: A survey. Environ. Plan. B Plan. Des. **2003**, 30, 709–728.
28. Yang, P.P.; Putra, S.Y.; Li, W. Viewsphere: A GIS-based 3D visibility analysis for urban design evaluation. Environ. Plan. B Plan. Des. **2007**, 34, 971–992.
29. ArcGIS Help 10.1. Using Viewshed and Observer Points for Visibility Analysis. Available online: http://resources.arcgis.com/en/help/main/10.1/index.html#//00q90000008n000000 (accessed on 27 April 2015).
30. Okabe, A.; Boots, B.; Sugihara, K.; Chiu, S.N. Spatial Tessellations: Concepts and Applications of Voronoi Diagrams, 2nd ed.; Wiley: Chichester, UK, 2000; p. 671.
31. Fortune, S. Voronoi diagrams and Delaunay triangulations. Comput. Euclidean Geom. **1992**, 1, 193–233.
32. Aurenhammer, F. Voronoi diagrams—A survey of a fundamental geometric data structure. ACM Comput. Surv. **1991**, 23, 345–405.
33. Dong, P. Generating and updating multiplicatively weighted Voronoi diagrams for point, line and polygon features in GIS. Comput. Geosci. **2008**, 34, 411–421.
34. Jeddah Islamic Port—Information & Service, Ports Authority, Kingdom of Saudi Arabia. Available online: http://www.ports.gov.sa/English/SAPorts/Jeddah/Pages/Services.aspx (accessed on 11 June 2013).
35. Gong, Y.; Li, G.; Tian, Y.; Lin, Y.; Liu, Y. A vector-based algorithm to generate and update multiplicatively weighted Voronoi diagrams for points, polylines, and polygons. Comput. Geosci. **2012**, 42, 118–125.

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Yaagoubi, R.; Yarmani, M.E.; Kamel, A.; Khemiri, W.
HybVOR: A Voronoi-Based 3D GIS Approach for Camera Surveillance Network Placement. *ISPRS Int. J. Geo-Inf.* **2015**, *4*, 754-782.
https://doi.org/10.3390/ijgi4020754
