1. Introduction
2. Related Work
3. System Model
3.1. Statistical Models for Parking Data
3.2. Event-Based Simulator
4. A Simple Anomaly Detection Approach
 (1)
 a threshold $\xi \in (0,1)$ is set, and
 (2)
 a parking event with duration $\tau$ is flagged as anomalous if $P({t}_{\mathrm{ON}}>\tau )<\xi $.
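This rule can be sketched in a few lines of Python. The sketch below is illustrative (not the authors' code) and assumes the Weibull model of Section 3.1, whose survival function is $P({t}_{\mathrm{ON}}>\tau )={e}^{-{(\tau /\lambda )}^{\kappa}}$; the example parameters are the weekday "Average" fit reported later ($\lambda = 45.7422$, $\kappa = 0.6039$).

```python
import math

def is_anomalous(tau, lam, kappa, xi):
    """Flag a parking event of duration tau (minutes) as anomalous when the
    Weibull survival probability P(t_ON > tau) falls below the threshold xi."""
    survival = math.exp(-((tau / lam) ** kappa))  # P(t_ON > tau) for Weibull(lam, kappa)
    return survival < xi

# A 10-hour parking event is very unlikely under the weekday average model:
long_event = is_anomalous(600.0, 45.7422, 0.6039, 0.01)   # True
short_event = is_anomalous(30.0, 45.7422, 0.6039, 0.01)   # False
```

Note that smaller values of $\xi$ make the detector more conservative: only events in the far tail of the fitted distribution are flagged.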
5. Advanced Classification Techniques
5.1. Discussion on Outliers
5.2. Features for Parking Analysis
 (1)
 Sensor occupation (SO): accounts for the amount of time during which ${s}_{i}(t)=1$, i.e., the corresponding parking space is occupied.
 (2)
 Event frequency (EF): accounts for the number of parking events per unit time.
 (3)
 Parking event duration (PD): measures the duration of parking events.
 (4)
 Vacancy duration (VD): measures the duration of vacancies.
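The four features above can be extracted from a sensor's event record as follows. This is a minimal Python sketch (not the authors' implementation), assuming the hypothetical input format of sorted, non-overlapping (start, end) pairs in minutes over an observation window [0, horizon].

```python
def parking_features(events, horizon):
    """Compute (SO, EF, PD, VD) for one sensor from its parking events,
    given as sorted, non-overlapping (start, end) pairs over [0, horizon]."""
    occupied = sum(end - start for start, end in events)
    so = occupied / horizon                         # sensor occupation: fraction of time s_i(t) = 1
    ef = len(events) / horizon                      # event frequency: events per unit time
    pd = occupied / len(events) if events else 0.0  # mean parking event duration
    # vacancies: gaps between consecutive events, plus leading/trailing idle time
    gaps, prev_end = [], 0.0
    for start, end in events:
        if start > prev_end:
            gaps.append(start - prev_end)
        prev_end = end
    if horizon > prev_end:
        gaps.append(horizon - prev_end)
    vd = sum(gaps) / len(gaps) if gaps else 0.0     # mean vacancy duration
    return so, ef, pd, vd

# Two 30-minute events in a 2-hour window:
so, ef, pd, vd = parking_features([(10, 40), (60, 90)], 120)  # so=0.5, pd=30.0, vd=20.0
```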
5.3. Selected Clustering Techniques from the Literature
 (1)
 Compute the distances between each input data vector and each cluster centroid.
 (2)
 Assign each input vector to the cluster associated with the closest centroid.
 (3)
 Compute the new average of the points (data vectors) in each cluster to obtain the new cluster centroids.
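The three steps above can be sketched as a minimal, dependency-free k-means loop. This is an illustrative Python sketch (not the authors' implementation), with initial centroids drawn at random from the data.

```python
import random

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means on a list of feature vectors (lists of floats):
    iterate assignment and centroid-update steps until centroids stabilise."""
    rng = random.Random(seed)
    centroids = [list(x) for x in rng.sample(X, k)]
    labels = []
    for _ in range(iters):
        # Steps 1-2: compute distances and assign each vector to the closest centroid
        labels = [min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))
                  for x in X]
        # Step 3: recompute each centroid as the mean of its assigned vectors
        new = []
        for c in range(k):
            members = [x for x, l in zip(X, labels) if l == c]
            new.append([sum(col) / len(members) for col in zip(*members)] if members else centroids[c])
        if new == centroids:
            break
        centroids = new
    return labels, centroids

labels, cents = kmeans([[0.0], [0.1], [5.0], [5.2]], k=2)  # two well-separated clusters
```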
 (1)
 Initial values of the normal distribution model (mean and standard deviation) are arbitrarily assigned.
 (2)
 Mean and standard deviations are iteratively refined through the expectation and maximization steps of the EM algorithm. The algorithm terminates when the distribution parameters converge or a maximum number of iterations is reached.
 (3)
 Data vectors are assigned to the cluster with the maximum membership probability.
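The three EM steps above can be illustrated on a one-dimensional, two-component Gaussian mixture. This is a deliberately simplified sketch (fixed iteration budget, no convergence test), not the authors' implementation.

```python
import math

def em_gmm_1d(data, k=2, iters=100):
    """Sketch of the EM loop: the E-step computes membership probabilities,
    the M-step re-estimates means/stds/weights; points are finally assigned
    to the component with the maximum membership probability."""
    # Step 1: arbitrary initialisation of the normal-distribution parameters
    mus = [min(data), max(data)]
    sigmas = [1.0] * k
    weights = [1.0 / k] * k
    resp = []
    for _ in range(iters):
        # E-step: membership probability of each point under each component
        resp = []
        for x in data:
            p = [w * math.exp(-((x - m) ** 2) / (2 * s ** 2)) / (s * math.sqrt(2 * math.pi))
                 for w, m, s in zip(weights, mus, sigmas)]
            tot = sum(p)
            resp.append([pi / tot for pi in p])
        # M-step: refine means, standard deviations and mixing weights
        for c in range(k):
            nc = sum(r[c] for r in resp)
            mus[c] = sum(r[c] * x for r, x in zip(resp, data)) / nc
            var = sum(r[c] * (x - mus[c]) ** 2 for r, x in zip(resp, data)) / nc
            sigmas[c] = max(math.sqrt(var), 1e-6)   # floor to avoid degenerate components
            weights[c] = nc / len(data)
    # Step 3: hard assignment by maximum membership probability
    return [max(range(k), key=lambda c: r[c]) for r in resp]

labels = em_gmm_1d([0.0, 0.2, 0.1, 9.8, 10.0, 10.2])  # two clear groups
```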
 (1)
 ε: used to define the ε-neighborhood of any input vector $\mathit{x}$, which corresponds to the region of space whose distance from $\mathit{x}$ is smaller than or equal to ε.
 (2)
 MinPts: representing the minimum number of points needed to form a so-called dense region.
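The role of these two parameters can be seen in a compact DBSCAN sketch. This is an illustrative Python implementation (not the authors' code): points with at least MinPts neighbors inside their ε-neighborhood are core points, clusters grow from core points, and unreachable points are labelled -1 (noise).

```python
def dbscan(X, eps, min_pts):
    """Compact DBSCAN: grow clusters from core points (points whose
    eps-neighborhood contains at least min_pts points); label noise as -1."""
    def neighbours(i):
        return [j for j in range(len(X))
                if sum((a - b) ** 2 for a, b in zip(X[i], X[j])) <= eps ** 2]
    labels = [None] * len(X)
    cid = 0
    for i in range(len(X)):
        if labels[i] is not None:
            continue
        nb = neighbours(i)
        if len(nb) < min_pts:
            labels[i] = -1          # provisionally noise (may become a border point later)
            continue
        labels[i] = cid
        seeds = list(nb)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid     # former noise point becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nb_j = neighbours(j)
            if len(nb_j) >= min_pts:
                seeds.extend(nb_j)  # j is a core point: expand the dense region through it
        cid += 1
    return labels

labels = dbscan([[0.0], [0.1], [0.2], [5.0]], eps=0.3, min_pts=3)  # [0, 0, 0, -1]
```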
5.4. Classification Based on Self-Organizing Maps
Algorithm 1 (SOM): 

 Data point: The input dataset is composed of N data points, where “data point” i is the feature column vector ${\mathit{x}}_{i}\in \mathcal{X}$ associated with the parking sensor $i=1,\cdots ,N$. These vectors are conveniently represented through the full feature matrix $\mathit{X}=[{\mathit{x}}_{1},\cdots ,{\mathit{x}}_{N}]$. By ${\mathit{X}}_{p}$, we denote a submatrix of $\mathit{X}$ obtained by collecting p columns (feature vectors), not necessarily the first p. A generic cluster $\mathcal{C}$ containing p elements is then uniquely identified by a collection of p sensors and by the corresponding feature matrix ${\mathit{X}}_{p}$.
 Cluster cohesiveness: Consider a cluster $\mathcal{C}$ with p elements, and let ${\mathit{X}}_{p}$ be the corresponding feature matrix. We use a scatter function as a measure of its cohesiveness, i.e., to gauge the distance between the cluster elements and their mean (centroid). The centroid of ${\mathit{X}}_{p}=[{\mathit{x}}_{1},\cdots ,{\mathit{x}}_{p}]$ is computed as: ${\mathit{\mu}}_{p}=({\sum}_{j=1}^{p}{\mathit{x}}_{j})/p$. The dispersion of the cluster members around ${\mathit{\mu}}_{p}$ is assessed through the sample standard deviation:$$\sigma ({\mathit{X}}_{p})=\sqrt{\frac{1}{p-1}\sum _{j=1}^{p}{\parallel {\mathit{x}}_{j}-{\mathit{\mu}}_{p}\parallel}^{2}}\,,$$
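The centroid and scatter computations can be sketched as follows (a plain-Python illustration, assuming each cluster member is given as a list of feature values; note that the sample standard deviation is undefined for $p=1$).

```python
import math

def centroid(Xp):
    """Centroid (mean vector) of the p cluster members in X_p."""
    p = len(Xp)
    return [sum(col) / p for col in zip(*Xp)]

def scatter(Xp):
    """Sample-standard-deviation scatter of the p members around the
    centroid: sqrt( sum_j ||x_j - mu_p||^2 / (p - 1) ). Requires p >= 2."""
    p = len(Xp)
    mu = centroid(Xp)
    sq = sum(sum((a - b) ** 2 for a, b in zip(x, mu)) for x in Xp)
    return math.sqrt(sq / (p - 1))

# Three 2-D members with centroid (1, 1); squared distances 2, 2, 4 -> sqrt(8/2) = 2.0
s = scatter([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]])
```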
 Global vs. local clustering metrics: In our tests, we experimented with different metrics, and the best results were obtained by tracking the correlation among features, as we now detail. We proceed by computing two statistical measures: (1) a first metric, referred to as global, is obtained for the entire feature matrix $\mathit{X}$; (2) a local metric is computed for the smaller clusters (matrix ${\mathit{X}}_{p}$).
 (1)
 Global metric: Let $\mathit{X}$ be the full feature matrix. From $\mathit{X}$, we obtain the $N\times N$ correlation matrix $\mathit{C}=\{{c}_{ij}\}$, where ${c}_{ij}=\mathrm{corr}({\mathit{x}}_{i},{\mathit{x}}_{j})$. Then, we average $\mathit{C}$ by row, obtaining the $N$-sized vector $\overline{\mathit{c}}={[{\overline{c}}_{1},\cdots ,{\overline{c}}_{N}]}^{T}$, with ${\overline{c}}_{i}=({\sum}_{j=1}^{N}{c}_{ij})/N$. We define $\mathrm{stdev}(\overline{\mathit{c}})$ and $\mathrm{mean}(\overline{\mathit{c}})$ as the sample standard deviation and the mean of $\overline{\mathit{c}}$, respectively. We finally compute two global measures for matrix $\mathit{X}$ as:$$\begin{array}{ccc}\mathtt{meas}\mathtt{1}(\mathit{X})& =& \mathrm{stdev}(\overline{\mathit{c}})\\ \mathtt{meas}\mathtt{2}(\mathit{X})& =& \mathrm{mean}(\overline{\mathit{c}})\,.\end{array}$$
 (2)
 Local metric: The local metric is computed on a subsection of the entire dataset, namely on the clusters that are obtained at runtime. Let us focus on one such cluster, say cluster $\mathcal{C}$ with $|\mathcal{C}|=p$. We build the corresponding feature matrix ${\mathit{X}}_{p}$ by selecting the p columns of $\mathit{X}$ associated with the elements in $\mathcal{C}$. We then compute the correlation matrix of ${\mathit{X}}_{p}$, which we call ${\mathit{C}}_{p}$, and the $p$-sized vector ${\overline{\mathit{c}}}^{\prime}={[{\overline{c}}_{1}^{\prime},\cdots ,{\overline{c}}_{p}^{\prime}]}^{T}$, obtained by averaging ${\mathit{C}}_{p}$ by row as above. The local measures associated with matrix ${\mathit{X}}_{p}$ are:$$\begin{array}{ccc}\mathtt{meas}\mathtt{1}({\mathit{X}}_{p})& =& \mathrm{stdev}({\overline{\mathit{c}}}^{\prime})\\ \mathtt{meas}\mathtt{2}({\mathit{X}}_{p})& =& \mathrm{min}({\overline{\mathit{c}}}^{\prime})\,,\end{array}$$
 Global vs. local dominance: We now elaborate on the comparison of global and local metrics. Let $\mathit{X}$ and ${\mathit{X}}_{p}$ respectively be the full feature matrix and that of a cluster obtained at runtime by our algorithm. Global and local metrics are respectively computed using Equations (6) and (7) and are compared in a Pareto [37] sense as follows. We say that the global metric (matrix $\mathit{X}$) dominates the local one (matrix ${\mathit{X}}_{p}$) if the following inequalities are jointly verified:$$\begin{array}{ccc}\mathbf{dominance}& & \\ \mathtt{meas}\mathtt{1}(\mathit{X})& >& \mathtt{meas}\mathtt{1}({\mathit{X}}_{p})\\ \mathtt{meas}\mathtt{2}(\mathit{X})& <& \mathtt{meas}\mathtt{2}({\mathit{X}}_{p})\,.\end{array}$$
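The global and local measures and the dominance test can be sketched together in plain Python (an illustration, not the authors' code), where a feature matrix is represented as a list of its column vectors.

```python
import math

def corr(x, y):
    """Pearson correlation between two feature vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def row_avg_corr(cols):
    """Row-averaged correlation matrix of the given columns (vector c-bar)."""
    n = len(cols)
    return [sum(corr(cols[i], cols[j]) for j in range(n)) / n for i in range(n)]

def _stats(cbar):
    m = sum(cbar) / len(cbar)
    sd = math.sqrt(sum((c - m) ** 2 for c in cbar) / (len(cbar) - 1))
    return sd, m

def global_measures(X):
    sd, m = _stats(row_avg_corr(X))
    return sd, m                      # meas1 = stdev(c-bar), meas2 = mean(c-bar)

def local_measures(Xp):
    cbar = row_avg_corr(Xp)
    sd, _ = _stats(cbar)
    return sd, min(cbar)              # meas1 = stdev(c-bar'), meas2 = min(c-bar')

def dominates(X, Xp):
    """True when the global metric Pareto-dominates the local one:
    meas1(X) > meas1(Xp) and meas2(X) < meas2(Xp)."""
    g1, g2 = global_measures(X)
    l1, l2 = local_measures(Xp)
    return g1 > l1 and g2 < l2

# A cohesive sub-cluster (two perfectly correlated columns) against a mixed
# full matrix that also contains an anti-correlated column:
ok = dominates([[1, 2, 3], [2, 4, 6], [3, 2, 1]], [[1, 2, 3], [2, 4, 6]])
```

In this example the cluster's row-averaged correlations are all close to 1, so its stdev is near zero and its minimum near one, which the heterogeneous full matrix cannot match: the global metric dominates.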
Algorithm 2 Unsupervised SOM clustering: 

6. Numerical Results
6.1. Synthetic Data: Classification Performance with Varying Number of Clusters
6.2. Synthetic Data: Classification Performance with Outliers and Complex Statistics
 k-means: ${w}_{1}=0.06$ and ${w}_{2}={w}_{3}=0.3$.
 EM: ${w}_{1}=0.35$, ${w}_{2}=0.06$ and ${w}_{3}=0.26$.
 DBSCAN: ${w}_{1}=0.2$, ${w}_{2}=0.3$, ${w}_{3}=0.02$, $\epsilon =0.21$ and MinPts $=5$.
 SOM: ${w}_{1}=0.1$, ${w}_{2}=0.34$, ${w}_{3}=0.04$ and $\gamma =0.7$ ($\gamma ={\sigma}_{\mathrm{th}}/\sigma (\mathit{X})$).
6.3. Classification Performance on Real Data
7. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
Abbreviations
DBSCAN  Density-Based Spatial Clustering of Applications with Noise 
EF  Event Frequency 
EM  Expectation Maximization 
GMM  Gaussian Mixture Model 
IoT  Internet of Things 
PD  Parking Duration 
PGI  Parking Guidance and Information 
SO  Sensor Occupation 
SOM  Self-Organizing Maps 
SVDD  Support Vector Data Description 
VD  Vacancy Duration 
WSN  Wireless Sensor Networks 
References
 Zanella, A.; Bui, N.; Castellani, A.; Vangelista, L.; Zorzi, M. Internet of things for smart cities. IEEE Internet Things J. 2014, 1, 22–32.
 Jog, Y.; Sajeev, A.; Vidwans, S.; Mallick, C. Understanding smart and automated parking technology. Int. J. u- and e-Serv. Sci. Technol. 2015, 8, 251–262.
 Rathore, M.M.; Ahmad, A.; Paul, A.; Rho, S. Urban planning and building smart cities based on the internet of things using big data analytics. Comput. Netw. 2016, 101, 63–80.
 Jain, A.K.; Murty, M.N.; Flynn, P.J. Data clustering: A review. ACM Comput. Surv. 1999, 31, 264–323.
 Sander, J.; Ester, M.; Kriegel, H.P.; Xu, X. Density-based clustering in spatial databases: The algorithm GDBSCAN and its applications. Data Min. Knowl. Discov. 1998, 2, 169–194.
 McLachlan, G.; Krishnan, T. The EM Algorithm and Extensions, 2nd ed.; Wiley-Interscience: Hoboken, NJ, USA, 2008.
 Yanxu, Z.; Rajasegarar, S.; Leckie, C.; Palaniswami, M. Smart car parking: Temporal clustering and anomaly detection in urban car parking. In Proceedings of the IEEE Ninth International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), Singapore, 21–24 April 2014.
 Kohonen, T. Self-Organization and Associative Memory; Springer: Berlin, Germany, 1984.
 Kohonen, T. Self-Organizing Maps; Springer: Berlin, Germany, 2001.
 Vesanto, J.; Alhoniemi, E. Clustering of the self-organizing map. IEEE Trans. Neural Netw. 2000, 11, 586–600.
 Polycarpou, E.; Lambrinos, L.; Protopapadakis, E. Smart parking solutions for urban areas. In Proceedings of the IEEE International Symposium and Workshops on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), Madrid, Spain, 4–7 June 2013.
 Dance, C. Lean smart parking. Park. Prof. 2014, 30, 26–29.
 Pierce, G.; Shoup, D. Getting the prices right. J. Am. Plan. Assoc. 2013, 79, 67–81.
 Worldsensing. Smartprk—Making Smart Cities Happen. Available online: http://www.fastprk.com/ (accessed on 21 September 2016).
 Yang, J.; Portilla, J.; Riesgo, T. Smart parking service based on wireless sensor networks. In Proceedings of the Annual Conference on IEEE Industrial Electronics Society (IECON), Montreal, QC, Canada, 25–28 October 2012.
 Shoup, D.C. Cruising for parking. Transp. Policy 2006, 13, 479–486.
 Wang, H.; He, W. A reservation-based smart parking system. In Proceedings of the IEEE Conference on Computer Communications Workshops, Shanghai, China, 10–15 April 2011; pp. 690–695.
 Geng, Y.; Cassandras, C. New “Smart Parking” system based on resource allocation and reservations. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1129–1139.
 Khan, Z.; Anjum, A.; Kiani, S.L. Cloud based big data analytics for smart future cities. In Proceedings of the IEEE/ACM International Conference on Utility and Cloud Computing, Dresden, Germany, 9–12 December 2013.
 Anastasi, G.; Antonelli, M.; Bechini, A.; Brienza, S.; de Andrea, E.; de Guglielmo, D.; Ducange, P.; Lazzerini, B.; Marcelloni, F.; Segatori, A. Urban and social sensing for sustainable mobility in smart cities. In Proceedings of the 2013 Sustainable Internet and ICT for Sustainability (SustainIT), Palermo, Italy, 30–31 October 2013.
 Barone, R.E.; Giuffrè, T.; Siniscalchi, S.M.; Morgano, M.A.; Tesoriere, G. Architecture for parking management in smart cities. IET Intell. Transp. Syst. 2014, 8, 445–452.
 Gupta, A.; Sharma, V.; Ruparam, N.K.; Jain, S.; Alhammad, A.; Ripon, M.A.K. Integrating pervasive computing, InfoStations and swarm intelligence to design intelligent context-aware parking-space location mechanism. In Proceedings of the International Conference on Advances in Computing, Communications and Informatics (ICACCI), Delhi, India, 24–27 September 2014.
 He, W.; Yan, G.; Xu, L.D. Developing vehicular data cloud services in the IoT environment. IEEE Trans. Ind. Inform. 2014, 10, 1587–1595.
 Vlahogianni, E.I.; Kepaptsoglou, K.; Tsetsos, V.; Karlaftis, M.G. A real-time parking prediction system for smart cities. J. Intell. Transp. Syst. Technol. Plan. Oper. 2016, 20, 192–204.
 Martinez, B.; Vilajosana, X.; Vilajosana, I.; Dohler, M. Lean sensing: Exploiting contextual information for most energy-efficient sensing. IEEE Trans. Ind. Inform. 2016, 11, 1156–1165.
 Lin, T.; Rivano, H.; Le Mouël, F. How to choose the relevant MAC protocol for wireless smart parking urban networks? In Proceedings of the ACM International Symposium on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks (PE-WASUN), Montreal, QC, Canada, 21–26 September 2014.
 Bishop, C. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2007.
 Jain, A.K. Data clustering: 50 years beyond k-means. Pattern Recognit. Lett. 2010, 38, 651–666.
 Lloyd, S. Least squares quantization in PCM. IEEE Trans. Inf. Theory 1982, 28, 129–137.
 Arthur, D.; Vassilvitskii, S. k-means++: The advantages of careful seeding. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA), New Orleans, LA, USA, 7–9 January 2007.
 Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I.H. The WEKA data mining software: An update. ACM SIGKDD Explor. Newsl. 2009, 11, 10–18.
 Haykin, S. Neural Networks and Learning Machines, 3rd ed.; Pearson Education; Prentice Hall: Upper Saddle River, NJ, USA, 2001.
 Boley, D.L. Principal direction divisive partitioning. Data Min. Knowl. Discov. 1998, 2, 325–344.
 Savaresi, S.M.; Boley, D.L.; Bittanti, S.; Gazzaniga, G. Cluster selection in divisive clustering algorithms. In Proceedings of the International Conference on Data Mining (SIAM), Arlington, VA, USA, 11–13 April 2002.
 Hofmeyr, D.P.; Pavlidis, N.G.; Eckley, I.A. Divisive clustering of high dimensional data streams. Stat. Comput. 2016, 26, 1101–1120.
 Qu, B.; Zhang, Y.; Yang, T. Local-global joint decision based clustering for airport recognition. In Intelligence Science and Big Data Engineering; Sun, C., Fang, F., Zhou, Z.H., Yang, W., Liu, Z.Y., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013.
 Pareto, V. Cours d’Economie Politique; Librairie Droz: Lausanne, Switzerland, 1896; Volume 1.
 Karypis, G.; Han, E.H.; Kumar, V. Chameleon: Hierarchical clustering using dynamic modeling. IEEE Comput. 1999, 32, 68–75.
 Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437.
 Musicant, D.R.; Kumar, V.; Ozgur, A. Optimizing F-measure with support vector machines. In Proceedings of the International FLAIRS Conference, St. Augustine, FL, USA, 20–23 October 2003; pp. 356–360.
 Tsai, C.F.; Lin, W.C.; Ke, S.W. Big data mining with parallel computing: A comparison of distributed and MapReduce methodologies. J. Syst. Softw. 2016, 122, 83–92.
Parking  Weekday (wd)  Weekend (we)  Mean (in Minutes)
Average  $\lambda = 45.7422$, $\kappa = 0.6039$  $\lambda = 58.9885$, $\kappa = 0.6313$  ${T}_{\mathrm{ON}} = 68.2438$ (wd), $83.3360$ (we)
Max  $\lambda = 124.8911$, $\kappa = 0.8137$  $\lambda = 121.0529$, $\kappa = 0.8445$  ${T}_{\mathrm{ON}} = 139.8266$ (wd), $132.2398$ (we)
Min  $\lambda = 17.8723$, $\kappa = 0.4245$  $\lambda = 10.1799$, $\kappa = 0.4119$  ${T}_{\mathrm{ON}} = 50.8284$ (wd), $31.2628$ (we)
Vacancies  Weekday (wd)  Weekend (we)  Mean (in Minutes)
Average  $\lambda = 112.4832$, $\kappa = 0.8448$  $\lambda = 101.3203$, $\kappa = 0.7480$  ${T}_{\mathrm{OFF}} = 122.8511$ (wd), $120.9045$ (we)
Max  $\lambda = 417.8844$, $\kappa = 2.0947$  $\lambda = 355.2186$, $\kappa = 1.8366$  ${T}_{\mathrm{OFF}} = 370.1241$ (wd), $315.6035$ (we)
Min  $\lambda = 15.2319$, $\kappa = 0.4727$  $\lambda = 9.5868$, $\kappa = 0.4429$  ${T}_{\mathrm{OFF}} = 33.9791$ (wd), $24.6376$ (we)
Weibull  Weekday (wd)  Weekend (we)  Mean (in Minutes)
Cluster 1 (Min)  $\lambda = 2.8830$, $\kappa = 4.9033$  $\lambda = 4.7391$, $\kappa = 3.8346$  ${T}_{\mathrm{ON}} = 2.6441$ (wd), $4.2853$ (we)
Cluster 2  $\lambda = 33.9250$, $\kappa = 1.2681$  $\lambda = 41.5004$, $\kappa = 3.8024$  ${T}_{\mathrm{ON}} = 31.4959$ (wd), $37.5088$ (we)
Cluster 3 (Average)  $\lambda = 45.7422$, $\kappa = 0.6039$  $\lambda = 58.9885$, $\kappa = 0.6313$  ${T}_{\mathrm{ON}} = 68.2438$ (wd), $83.3360$ (we)
Cluster 4  $\lambda = 109.0669$, $\kappa = 1.1866$  $\lambda = 102.8083$, $\kappa = 1.6052$  ${T}_{\mathrm{ON}} = 102.8975$ (wd), $92.1482$ (we)
Cluster 5 (Max)  $\lambda = 390.601$, $\kappa = 4.9137$  $\lambda = 644.1756$, $\kappa = 1.2876$  ${T}_{\mathrm{ON}} = 358.2768$ (wd), $596.1100$ (we)
Occupancy Stat.  December 2014  January 2015  February 2015
Avg/Hour  44.49%  40.02%  42.04%
Max/Hour  91.22% (24 December 2014, 23:00)  86.61% (23 January 2015, 20:00)  87.22% (7 February 2015, 19:00)
Occupancy Stat.  March 2015  April 2015  May 2015
Avg/Hour  43.41%  40.04%  39.65%
Max/Hour  83.45% (21 March 2015, 19:00)  88.60% (4 April 2015, 19:00)  85.10% (9 May 2015, 19:00)
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).