Article

Loitering Detection Based on Pedestrian Activity Area Classification

1 School of Information Engineering, Nanchang University, Nanchang 330031, China
2 School of Software, Nanchang University, Nanchang 330047, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(9), 1866; https://doi.org/10.3390/app9091866
Submission received: 26 February 2019 / Revised: 21 April 2019 / Accepted: 24 April 2019 / Published: 7 May 2019


Featured Application

This paper proposes a loitering detection method based on pedestrian activity area classification. Automatic loitering detection can help recognize vulnerable people who need attention and potential suspects that might be harmful to public security. It can be widely used in many important research fields and industries such as public security and video surveillance.

Abstract

Loitering detection can help recognize vulnerable people who need attention and potential suspects who may be harmful to public security. Existing loitering detection methods use time or target trajectories as assessment criteria and only handle simple loitering circumstances, because of the complexity of target tracks. To solve these problems, this paper proposes a loitering detection method based on pedestrian activity area classification. The paper first gives a definition of loitering from a new perspective, using the size of the pedestrian activity area, and divides pedestrian loitering behaviors into three categories. The proposed algorithms dynamically calculate the enclosing rectangle, ellipse, or sector of the pedestrian activity area through curve fitting based on the trajectory coordinates within a given staying time threshold. Loitering is recognized if the pedestrian's activity is detected to be constrained in an area within a certain period of time; the algorithm does not need to analyze complex trajectories. The method was tested on the PETS2007 dataset and our own self-collected simulated test videos. The experimental results show that the proposed method accurately detects pedestrian loitering: it not only detects some loitering that the existing methods could not detect, but also distinguishes different types of loitering. The proposed method is effective, robust, and simple to implement.

1. Introduction

With the advance of artificial intelligence, pedestrian loitering detection has been applied more and more widely in many fields. As one case of abnormal behavior detection, it plays an important role in urban video surveillance systems. According to a prediction by IMS Research, the market demand for domestic security software will grow at 30% to 50% per year over the next five years, which shows that today's video surveillance and management software platforms have a huge potential market [1,2,3]. The demand for video surveillance systems in public places is large [4,5]. Pedestrian loitering detection is mainly applied to safety monitoring in public places, such as public transportation areas, to ensure urban safety [6].
Automatic loitering detection can help recognize vulnerable people who need attention and potential suspects who might be harmful to public security [7]. For example, older patients with Alzheimer's disease are prone to becoming victims of unexpected security events [8,9], and such patients are often in a state of loitering when they need attention; this issue is receiving more and more public attention. In addition, public safety has always been a matter of great concern, and the presence of dangerous, suspicious individuals in public places poses a great threat [10,11,12]. A study of human behavior detection in transit scenes showed that detecting specific activities such as loitering requires scarce human resources [13]. Even with costly human resources, when facing such a mass of data, operators may become unfocused because of fatigue, distraction, or interruption [14]. Traditional loitering detection is achieved by manual monitoring, through direct observation or with the assistance of videos, which is not only time-consuming but also hard to operate.
In recent years, different kinds of approaches have been proposed to detect loitering behavior, but there is no explicit definition of loitering in the existing research literature. Some researchers define loitering by time; others define it by classifying trajectories by curvature or grid [15,16]. These studies consider a single scenario, without considering the different needs of multiple scenarios and external environments. Patino [17] compared the stationary time of boats in a given area with the time that boats normally stay in order to determine whether the boats were loitering. Our paper detects pedestrian loitering in different situations, using the change in the size of the pedestrian activity area. The common loitering detection methods mainly include the feature optical flow method and the motion trajectory method [18]. Trajectory detection is one of the most common and accurate methods, and there are several motion trajectory-based detection methods [19,20]. Abnormal behavior has been analyzed by combining motion tracking and shape aspect ratio [21]. Kang [22] used the trajectory distribution and the probability of trajectory features to determine loitering. The loitering detection methods in [23,24] were performed by capturing the trajectories of the pedestrian area. Although there is a large amount of research on trajectory analysis, using trajectory analysis to detect loitering is still a challenge. Authors such as Ko [25] detected loitering by analyzing simple trajectories, but ignored the reality that loitering is usually random and arbitrary. Another loitering detection method based on trajectory tracking [26] needed historical traces as an assistive model. The method in [27] analyzed the trajectory of an object through the duration and the angle variation between the initial point and the remaining points on the trajectory, but it is not suitable for practical, complex scenarios.
Missing targets, occlusion, detection errors, and trajectory terminations all belong to complex scenarios. In order to achieve satisfactory trajectory detection results, such complex scenarios must be handled [28].
In recent years, deep learning has been applied in many fields and has obtained remarkable achievements. Lim [29] presented an intelligent framework for the detection of multiple events, but it relies on robust known approaches and the video content needs to be partitioned by attributes. Commonly used deep learning algorithms are complex and their models are difficult to build, which affects real-time performance. Due to the large amount of data and auxiliary equipment required, deep learning also faces challenges in a wider range of applications. In this paper, loitering behavior is judged by analyzing the size of the activity area. Some scholars have proposed the use of sparse methods to detect abnormal events [30,31,32]; these methods are not yet mature, since the algorithm operations, calculation of regularization parameters, and training of models all require high-performance computers as well as long running times. Zin [33] proposed a monitoring system for automatic detection of wanderers based on a two-dimensional Markov random field. Artificial intelligence technology has been used to detect loitering of the elderly by tracking and analyzing mobile targets [34]. Dawn [35] and Hassan [36] analyzed several simple pedestrian activities by computer vision. Some pedestrian abnormal behaviors have been analyzed with auxiliary equipment using a human torso motion model [37]. Zhao [38] used wavelet transformation to detect loitering behavior in crowds. Some scholars have applied dynamic texture blending theory to the spatio-temporal analysis of video [39,40] to detect abnormal pedestrian behavior. N. Kumar chose Bayesian probability and probabilistic graphical models for video surveillance and detection [41]. A likelihood function was used to estimate the probability of each individual [42,43]. Some methods use auxiliary equipment to complete the loitering detection.
The most common such method detects pedestrian loitering with the help of GPS [44,45]; the target needs to wear a wristband device, so this kind of detection works only for specified targets and cannot monitor the general public. A method using a stereo camera to analyze abnormal behavior [46] needs stereo cameras to track people in dense environments such as queues, although the loitering detection technique itself works in a generic manner on any trajectory data. Our method has no special requirements for equipment.
The existing loitering detection methods described above used time or target trajectories as assessment criteria and only handled simple loitering circumstances, because of the complexity of target tracks. In order to solve the problems of the lacking theoretical foundation for loitering detection and the failure to detect loitering in complicated situations, this paper proposes a loitering detection method based on pedestrian activity area classification. First, Gaussian mixture models (GMMs) and MeanShift are used to detect and track the target. The residence time of the target is used as a prejudgment for loitering detection; the complex trajectory is transformed into an area, using the coordinate points of the trajectory to describe a geometric figure. Pedestrian loitering behaviors are divided into three categories, i.e., rectangle loitering, ellipse loitering, and sector loitering. Loitering is recognized if the pedestrian's activity is detected to be constrained in an area within a certain period of time. The algorithm does not need to consider complex trajectories, and it is effective and efficient in detecting loitering behaviors.

2. The Overview of Our Proposed Loitering Detection Method

Figure 1 shows the overall framework of our proposed loitering detection method. According to different scenarios, the method analyzes and calculates the size of the target activity area using the best adaptive fitting curve for loitering detection. An automatic alarm will be performed when loitering behavior occurs.
Previous research was based on simple scenarios and lacked reliability. This paper redefines and classifies loitering from the new perspective of the pedestrian activity area. In the analysis of loitering in various situations, pedestrian activity areas are used to divide loitering behavior into three categories.
In a spacious, wide scene, normal pedestrians usually walk in a straight line with a strict goal [47], whereas suspects are aimless: they keep loitering with a complex and disordered trajectory in order not to be noticed. Figure 2 shows rectangle loitering in a real scene. Through the trajectory map and the analysis of the suspicious target's trajectory, the target's area of motion lies completely within the red rectangle. We therefore call this type of loitering, confined within a rectangular area, rectangle loitering.
In some small spaces, suspects with abnormal behavior loiter in a narrow area with back-and-forth movement. When the area of activity decreases, the distribution of the target trajectory becomes dispersed, and the trajectory coordinates change greatly and discretely. The rectangular analysis method cannot handle this kind of small-area movement and will miss the judgment of loitering behavior. Figure 3 shows different detection methods used for fitting the loitering in the same situation: a rectangle frame and an ellipse frame were used to fit the same area of activity, and the shaded region is the difference between the two areas. It is therefore necessary to introduce ellipse detection in order to improve the accuracy of detection. Aiming at narrow moving scenes with large return distances of the target trajectory, an ellipse area fitting method is proposed. Since the trajectory of the suspect constantly changes, we first detect the convex hull of all the trajectory points, and then use ellipse fitting to solve the problems of uncertainty in the active area and irregularity of the trajectory.
On some occasions, suspects such as thieves or drug dealers purposefully loiter around points of interest. Candamo [13] claims that the suspect's motion is purposeful; for example, drug dealers focus on the goals of interest and make purposeful round trips while loitering in order to deal drugs or steal. This is very similar to the detection of dropped objects [48]. From the perspective of a mathematical model, this kind of activity area is similar to a sector. A sector loitering example is shown in Figure 4: the pedestrian's activity area is fitted with a red sector that contains the motions of the pedestrian. We call this type of loitering, around a point within a sector area, sector loitering.

3. The Loitering Detection Methods

3.1. Target Detection and Tracking

Loitering detection video is usually shot with a static camera, and the background changes slowly; we mainly consider the impact of illumination, occlusion, and so on. In this paper, in order to detect moving objects, a GMM [49] is used. The probability density of the image gray level is estimated from the gray-level difference between target and background; this method has high accuracy and objectivity. Multiple Gaussian models are selected to represent the feature distribution and then compared with the GMM. The EM iterative algorithm is used to solve for the parameters of the GMM. The algorithm is implemented on top of an existing library, with some parameter values left at their defaults, to complete the classification of target and background.
We establish a fusion model of k Gaussians at each pixel; the probability density function of a pixel X at time t is shown in Equation (1).
P(X_t) = ∑_{n=1}^{k} W_{n,t} η(X_t, μ_{n,t}, Σ_{n,t})
where W_{n,t} is the weight of the n-th Gaussian component, μ_{n,t} is its mean vector, Σ_{n,t} is its covariance matrix, and η(·) is the Gaussian density function. We use the EM algorithm to calculate the parameters of the GMM and the posterior probability: we introduce a latent variable Z and transform Equation (1) into Equation (2).
P(X_t) = ∑_z p(z) p(x|z) = ∑_z ∏_{n=1}^{k} (W_n η(X, μ_n, Σ_n))^{z_n}
The posterior probability δ(z_n) is obtained from the prior probability P(z) and the likelihood P(x|z), as shown in Equation (3).
δ(z_n) = W_n η(x, μ_n, Σ_n) / ∑_{m=1}^{k} W_m η(x, μ_m, Σ_m)
From Equation (3), the update expressions for the three parameters are shown in Equation (4).
π_k = ∑_{q=1}^{Q} δ(z_{q,k}) / Q,  μ_k = ∑_{q=1}^{Q} δ(z_{q,k}) x^{(q)} / Q_k,  Σ_k = ∑_{q=1}^{Q} δ(z_{q,k}) (x^{(q)} − μ_k)(x^{(q)} − μ_k)^T / Q_k
where Q is the total number of pixels and Q_k = ∑_{q=1}^{Q} δ(z_{q,k}). We update the parameters using Equations (3) and (4) until convergence, so as to detect the moving objects.
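As a minimal illustration (not the paper's implementation), the E-step of Equation (3) and the M-step of Equation (4) can be sketched for a one-dimensional intensity model; the component count, iteration count, and toy pixel data below are assumptions made only for this example.

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def em_gmm_1d(x, k=2, iters=50):
    """Fit a k-component 1D GMM with EM: E-step is Equation (3), M-step is Equation (4)."""
    w = np.full(k, 1.0 / k)                   # component weights W_n
    mu = np.linspace(x.min(), x.max(), k)     # spread initial means over the data range
    var = np.full(k, x.var())                 # initial variances
    for _ in range(iters):
        # E-step: responsibilities delta(z_n) for every sample (Equation (3))
        dens = np.stack([w[n] * gaussian_pdf(x, mu[n], var[n]) for n in range(k)])
        resp = dens / dens.sum(axis=0)
        # M-step: re-estimate weights, means, and variances (Equation (4))
        Qk = resp.sum(axis=1)                 # effective sample count per component
        w = Qk / len(x)
        mu = (resp @ x) / Qk
        var = np.array([(resp[n] * (x - mu[n]) ** 2).sum() / Qk[n] for n in range(k)])
    return w, mu, var

# Toy pixel intensities: a dark background mode near 40 and a bright foreground mode near 200
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(40, 5, 500), rng.normal(200, 8, 100)])
w, mu, var = em_gmm_1d(x)
```

After convergence, the dominant component models the background and the minority component the moving foreground, which is the separation the per-pixel GMM exploits.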
After targets are detected, we use the MeanShift method to find the most likely target location in the current video frame [50]. Object tracking seeks the target position that maximizes the Bhattacharyya coefficient, defined in Equation (5).
ρ = ∫ √(p(x) q(x)) dx
p(x) and q(x) represent the density functions of the candidate position and the target model, respectively. The specific flowchart of target tracking is shown in Figure 5.
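A discrete version of the Bhattacharyya coefficient in Equation (5) — the quantity MeanShift maximizes over candidate positions — can be sketched as follows; the histograms are toy values, not real color models:

```python
import numpy as np

def bhattacharyya(p, q):
    """Discrete Bhattacharyya coefficient between two histograms (Equation (5)).
    Both histograms are normalized to sum to 1 before comparison."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    return float(np.sum(np.sqrt(p * q)))

# Identical distributions reach the maximum value 1; disjoint ones give 0.
target   = [4, 2, 2]   # toy target model q(x)
same     = [8, 4, 4]   # same distribution, different scale
disjoint = [0, 0, 8]
```

In tracking, the candidate position whose histogram scores highest against the target model is taken as the new target location.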
Through the above method, we can track the pedestrian and get a series of tracking coordinates defined in Equation (6).
s i   = { ( x 1 , y 1 ) , ( x 2 , y 2 ) , , ( x n , y n ) }
where (x_1, y_1), (x_2, y_2), …, (x_n, y_n) represent the position coordinates (x, y) at the corresponding times. Figure 6 shows the motion trajectory; s_i represents the trajectory of the i-th target. We use Equation (7) to calculate the change of the pedestrian's direction angle and to select feature points:
v = arccos( (v_t · v_m) / (|v_t| |v_m|) ) > θ
where v is the angle between the target's direction vectors at two different moments, and θ is the angle threshold, set to 50° in our experiments. If a point satisfies the condition in Equation (7), we record it as a feature point v_f.
The initial direction vector is taken at moment m. Once a feature point is found, the direction vector v_m is replaced by the direction vector v_f, and the feature vector of the trajectory is marked. When the number of feature vectors appearing in the time interval (0, …, t) exceeds 4, the corresponding loitering detection is selected based on the characteristics of the activity area.
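The feature-point test of Equation (7), including the replacement of the reference direction v_m once a sharp turn is found, can be sketched as follows; the trajectories below are made-up examples:

```python
import math

THETA = math.radians(50)  # angle threshold used in the paper's experiments

def angle_between(v_m, v_t):
    """Angle between two 2D direction vectors, as in Equation (7)."""
    dot = v_m[0] * v_t[0] + v_m[1] * v_t[1]
    norm = math.hypot(*v_m) * math.hypot(*v_t)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def count_feature_points(track, theta=THETA):
    """Scan a trajectory; whenever the turn relative to the reference direction
    exceeds theta, mark a feature point and replace v_m with the new direction v_f."""
    dirs = [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(track, track[1:])]
    v_m, count = dirs[0], 0
    for v_t in dirs[1:]:
        if angle_between(v_m, v_t) > theta:
            count += 1
            v_m = v_t  # the feature direction v_f becomes the new reference
    return count

# A zig-zag track with three 90-degree turns versus a straight walk
zigzag = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]
straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
```

A straight walk accumulates no feature points, while a wandering track quickly crosses the count of 4 that triggers the area-based analysis.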

3.2. The Rectangle Loitering

We use Equation (8) to establish an active rectangle frame to determine whether the pedestrian’s activity area is in a rectangle frame. A rectangle trajectory is defined as a trajectory contained within a rectangle, as shown in Figure 7.
( x i , y i ) ϵ   r e c t { ( x 1 , y 1 ) , ( x 2 , y 2 ) }
In Equation (8), ( x i , y i ) represents any coordinate in the ( t 1 , , t 2 ) period, ( x 1 , y 1 ) represents the coordinates of t 1 moments, ( x 2 , y 2 ) represents the coordinates of t 2 moments, and rect { ( x 1 , y 1 ) , ( x 2 , y 2 ) } represents all coordinate points in the rectangular domain with ( x 1 , y 1 ) and ( x 2 , y 2 ) as diagonals.
We propose a dynamic rectangular frame to judge loitering, which avoids judging the motion trajectory in every frame of the video. The details of our algorithm are described in Algorithm 1, where T_0 represents the initial time threshold. In our experiments with the PETS2007 dataset, T_0 is set to 60 s; the time threshold should be set according to the scenario.
Algorithm 1 Rectangle loitering detection
Input: the trajectory coordinates P ( x , y ) = { ( x 1 , y 1 ) , ( x 2 , y 2 ) , ( x n , y n ) }
Output: loitering detection result
  • Extract the trajectory coordinates P_i(x_i, y_i)
  • When T_i > T_0, calculate the coordinate differences between P_i and P_{i−T}, i > 0, and record their extrema:
    x_{P_i} − x_{P_{i−T}} ∈ [D_{x min}, D_{x max}],  y_{p_i} − y_{p_{i−T}} ∈ [D_{y min}, D_{y max}]
  • Update P_i and calculate the size of the activity area: S = |D_{x max} − D_{x min}| · |D_{y max} − D_{y min}|
  • Compare the area: if S < S_threshold, a rectangle loitering is detected.
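A minimal sketch of Algorithm 1 in Python; the area threshold below is a hypothetical value, since in practice it is scene-dependent:

```python
def rectangle_loitering(track, area_threshold):
    """Algorithm 1 sketch: bound the trajectory points with an axis-aligned
    rectangle and flag loitering when its area stays below the threshold."""
    xs = [x for x, _ in track]
    ys = [y for _, y in track]
    area = abs(max(xs) - min(xs)) * abs(max(ys) - min(ys))
    return area < area_threshold, area

# A trajectory confined to a 40 x 30 pixel box versus one crossing the frame
confined = [(10, 10), (50, 10), (50, 40), (10, 40), (30, 25)]
crossing = [(0, 0), (300, 5), (600, 10)]
is_loitering, area = rectangle_loitering(confined, area_threshold=2000)
```

In the full algorithm this test only runs after the residence time exceeds T_0, and the rectangle is updated dynamically as new trajectory points arrive.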

3.3. The Ellipse Loitering

An ellipse trajectory is defined as a trajectory contained within an ellipse. We use the following steps to determine whether the pedestrian's active area lies within an ellipse. First, the Graham scan [51] method is used to find the convex hull of all track points within time T_i; then ellipse fitting is performed. Figure 8 shows the process of finding the convex hull of a set of points.
Algorithm 2 Ellipse loitering detection
Input: the trajectory coordinates P ( x , y ) = { ( x 1 , y 1 ) , ( x 2 , y 2 ) , ( x n , y n ) }
Output: loitering detection result
  • Extract the trajectory coordinates P i ( x i , y i ) .
  • When T_i > T_0, find the convex hull: input the point set P = (P_1, P_2, …, P_n) with |Q| ≥ 3; calculate the distance between P_min and each P_i.
  • Sort the points clockwise by polar angle θ and distance D_i, then traverse the points; select six points from the coordinates calculated in T_i − t < T < T_i; calculate the parameters:
    f(A, B, C, D, E) = ∑_{i=0}^{n} (A x_i² + B x_i y_i + C y_i² + D x_i + E y_i + F)²
  • Record num-match and num-index; when num-match > num-index, update the ellipse parameters.
  • Calculate the size of the activity area S = π A C (A > 0, C > 0); compare the area, and if S < S_threshold, an ellipse loitering is detected.
In order to fit the ellipse, we use the general elliptic equation in Equation (9).
A x 2 + B x y + C y 2 + D x + E y + F = 0
We apply some constraints [52] so that the coordinates of the trajectory can be used directly in a least-squares fit. The coefficients of the equation can be obtained by minimizing Equation (10).
f = ∑_{i=0}^{n} (A x_i² + B x_i y_i + C y_i² + D x_i + E y_i + F)²
Each coefficient is determined by its minimum value. Equation (11) can be obtained by the principle of extreme value.
∂f/∂A = ∂f/∂B = ∂f/∂C = ∂f/∂D = ∂f/∂E = 0
Thus we obtain the values of A, B, C, D, and E, perform ellipse fitting on the given coordinates, and calculate the optimal elliptic equation with the least-squares method. The detailed algorithm is described in Algorithm 2.
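The least-squares fit of Equations (9)-(11) can be sketched by normalizing the free coefficient (here F = -1, one common choice; the paper's exact constraints from [52] may differ), which makes the system linear in the remaining coefficients:

```python
import numpy as np

def fit_conic(pts):
    """Least-squares conic fit: solve A x^2 + B xy + C y^2 + D x + E y = 1,
    i.e. Equation (9) with the normalization F = -1, minimizing Equation (10)."""
    x, y = np.asarray(pts, float).T
    M = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
    return coef  # A, B, C, D, E with F fixed to -1

# Points on a circle of radius 2 centered at the origin satisfy x^2/4 + y^2/4 = 1
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
pts = np.column_stack([2.0 * np.cos(t), 2.0 * np.sin(t)])
A, B, C, D, E = fit_conic(pts)
```

Setting the partial derivatives of Equation (10) to zero, as in Equation (11), is exactly what the normal equations solved by `lstsq` express.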

3.4. The Sector Loitering

When a suspicious person loiters around a point of interest, the suspect's activity area revolves around one point, and calculating displacement alone cannot accurately determine whether the pedestrian's behavior is abnormal. Therefore, a sector trajectory is defined as a trajectory contained within a sector, as shown in Figure 9. Equation (12) gives the loitering definition based on the sector.
D_t ∈ {D_min, …, D_max},  M_1 < |D_max − D_min| < M_2
D_t is the displacement between the initial detection coordinate and the current coordinate, and M_1 and M_2 are set thresholds. D_max is the maximum coordinate-point displacement and D_min the minimum in the time interval (t_i, …, t). The detailed algorithm is described in Algorithm 3.
Algorithm 3 Sector loitering detection
Input: the trajectory coordinates P ( x , y ) = { ( x 1 , y 1 ) , ( x 2 , y 2 ) , ( x n , y n ) }
Output: loitering detection result
  • Record the initial position P 0 , extract the trajectory coordinates P i ( x i , y i )
  • Calculate the distance D_i between P_i and P_0; when T_i > T_0, update D_i into the range [D_min, D_max]
  • Calculate the size of the activity area. The sector angle is α = (180°/π) · arctan((y_{p_0} − y_{p_i}) / (x_{p_0} − x_{p_i})),
    so the area is S = π (D_max² − D_min²)(α_max − α_min) / 360°.
  • Compare the area: when S < S_threshold, a sector loitering is detected.
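A minimal sketch of Algorithm 3; the threshold is hypothetical, and for simplicity the bearing computation does not handle angle wrap-around at ±180°:

```python
import math

def sector_loitering(track, area_threshold):
    """Algorithm 3 sketch: measure each point's distance and bearing from the
    initial position P0, then approximate the activity area by the annular
    sector spanned by the min/max radius and the min/max angle."""
    x0, y0 = track[0]
    radii, angles = [], []
    for x, y in track[1:]:
        radii.append(math.hypot(x - x0, y - y0))
        angles.append(math.degrees(math.atan2(y - y0, x - x0)))
    d_min, d_max = min(radii), max(radii)
    alpha = max(angles) - min(angles)          # sector opening in degrees
    area = math.pi * (d_max ** 2 - d_min ** 2) * alpha / 360.0
    return area < area_threshold, area

# Movement that circles a point of interest at a nearly constant radius
around_point = [(0, 0), (10, 0), (0, 10), (7, 7)]
is_loitering, area = sector_loitering(around_point, area_threshold=100)
```

The small annular-sector area signals that the pedestrian keeps returning to roughly the same distance from the point of interest, the signature of sector loitering.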

4. Experimental Results and Analysis

Most public abnormal behavior datasets concern behaviors relevant to public safety, and only a few datasets target loitering. The most classic is PETS2007 [53], which was captured by four cameras at four different angles; the scenes are changeable and complex. The PETS2007 dataset provides dedicated experiments and tests for loitering, where loitering is defined as a person who enters the scene and remains within it for more than 60 s.
We also collected our own dataset using video cameras. We used a HIKVISION DS-2CD5152F-IZ camera and chose different scenarios for the experiments. The video was saved in AVI format and captured at 15 frames per second; to ensure the quality of the experiment, each video has at least 800 frames. We collected 120 videos (90 of loitering and 30 of normal walking), covering loitering detection in many scenarios, both indoor and outdoor.
To validate the loitering experiments, we tested PETS2007 and our collected dataset, which contain different angles, different illumination, target occlusion, multiple targets, and so on. Figure 10 shows the results of testing the PETS2007 dataset and our self-collected dataset using the GMM fusion model. The background of the PETS2007 dataset is complex, and many challenges exist in detecting objects, such as a dynamic environment, illumination, and occlusion of objects. Figure 10a,b shows the detection results on the PETS2007 dataset: the scene is a complex public-place background with many external factors such as illumination change and occlusion. Although the detection results on the PETS2007 dataset were inevitably somewhat incoherent and left some residue, the overall effect was still relatively accurate. Figure 10c,d shows the results on our collected dataset, including outdoor and indoor scenes: the detection results were accurate and distinct human contours were extracted. Figure 11 shows the experimental results of tracking and detecting pedestrians. The trajectory of each pedestrian was plotted; the green frame is used as the detection frame, and the center of the frame as the trajectory plotting point. For the PETS2007 dataset, moving pedestrians could be detected, and the test results are also valid for our collected dataset. Table 1 shows the accuracy of target detection and tracking on our collected dataset and the PETS2007 dataset.
After the detection and tracking algorithm achieved high accuracy and good robustness, the loitering detection method was added to the framework. Through analysis and testing of the dataset, we selected 30 videos for each algorithm for area threshold testing; these videos are all from the loitering dataset we collected. In order to be applicable to video surveillance scenarios, all videos were converted to the same resolution, 704 × 576, which reduces the influence of different resolutions on the detection results. Figure 12 shows the accuracy of the loitering detection algorithms with different area thresholds. The abscissa is the number of pixels representing the area of the geometric figure, while the ordinate represents the number of correctly detected videos containing loitering instances. The curves represent each algorithm's ability to detect correctly under different area thresholds: the higher the curve, the better the detection effect.
Figure 13 shows the rectangle loitering detection results and the corresponding change of the fitted area. The horizontal coordinate represents the video frame index, and the vertical coordinate is the size of the fitted rectangle area; the curve corresponds to the change of the fitted area. When the area exceeds the area threshold, the alarm stops until the pedestrian leaves the camera's field of view, which reduces misjudgment. The rectangle loitering detection algorithm was accurate and stable throughout the fitting process.
Figure 14 shows the detection results of ellipse loitering in different scenarios together with the activity area fitting curve. Once the time threshold is exceeded, the system immediately alarms. The entire fitting process is continuous without interruption, so the method has good robustness and real-time performance.
Figure 15 shows the experimental results of sector loitering detection in three different scenarios and the corresponding activity area fitting. From these results, sector loitering detection fitted the suspect's activity area well, both indoors and outdoors, and the whole sector fitting process was smooth and stable.
In order to directly demonstrate the advantages of the algorithms presented in this paper, Table 2 shows the experimental records and result analysis. Our collected dataset was classified according to the different scenarios, and each scenario includes loitering and normal walking. Here, TP (True Positive) means true loitering correctly detected; TN (True Negative) means normal walking correctly detected; FP (False Positive) means normal behavior detected as abnormal; and FN (False Negative) means loitering detected as normal. In Table 2, P stands for precision (P = TP / (TP + FP)), R for recall (R = TP / (TP + FN)), and A for accuracy (A = (TP + TN) / (TP + TN + FP + FN)).
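The metrics of Table 2 can be computed directly from the four counts; the counts below are hypothetical, not the paper's actual results:

```python
def detection_metrics(tp, tn, fp, fn):
    """Precision, recall, and accuracy as defined for Table 2."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, accuracy

# Hypothetical confusion-matrix counts for one scenario
p, r, a = detection_metrics(tp=28, tn=29, fp=1, fn=2)
```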
Figure 16 shows the advantages and differences of the three algorithms. From the results, an unsuitable algorithm not only consumes time but also fails to detect loitering correctly; the choice of algorithm should therefore be based on the scene and the target's motivation. Table 3 shows the results of the three algorithms applied to our collected dataset.
Figure 17 shows the detection results on the PETS2007 dataset (S00, S01, S03, S05, and S08) using our method. The experimental results show that our proposed algorithms can also detect suspects in crowded scenarios. Our method was compared with the method of Nam 2015 [54]: our method can detect loitering in videos from multiple angles, while Nam 2015 only detected loitering in a video from one angle with a small number of pedestrians. For dataset S00, which contains no loitering behavior, our algorithms correctly classified the videos as normal. The result on dataset S01 shows that the rectangle algorithm is a good choice for detecting suspects in open, high-activity areas. Dataset S05 is a video with a thief in public: the suspect loitered around the interest point of a package, so we used the sector loitering detection algorithm. For the suspicious loitering in a small area in dataset S03, we chose the ellipse loitering algorithm. All the loitering detection algorithms detected the suspects effectively and accurately.
Table 4 compares the loitering detection results on the PETS2007 dataset with Dalley's method [55]. Our algorithms accurately detect loitering in the PETS2007 dataset; all three performed much better and achieved much higher detection accuracy than Dalley's algorithm, and they successfully detected loitering that Dalley's method could not, such as in datasets S05 and S08.

5. Conclusions

In this paper, we first gave a definition of loitering from a new perspective, using pedestrian activity areas, and proposed a loitering detection method based on pedestrian activity area classification. Pedestrian loitering behaviors are divided into three categories, i.e., rectangle loitering, ellipse loitering, and sector loitering. Unlike other loitering detection algorithms that judge loitering only by trajectory angle or time, this paper converts complex trajectory processing into activity area fitting: loitering is recognized if the pedestrian's activity is detected to be constrained in an area within a certain period of time. The experimental results show that the proposed three algorithms are very effective for detecting various kinds of loitering, with good accuracy and robustness both indoors and outdoors, on both the PETS2007 dataset and our self-collected dataset. The proposed method can detect loitering in videos captured from different angles and can also be applied to crowded scenes. Compared with the existing methods, the proposed method not only detects some loitering that the existing methods could not, but is also practical in complicated situations, efficient, robust, and simple to implement. Self-adaptive time threshold setting for loitering detection in different scenarios is worthy of further research, which we will pursue in the future.

Author Contributions

All authors of the paper have made significant contributions to this work. T.H. and Q.H. contributed equally to the paper, conceived the idea of work, wrote the manuscript, and analyzed the experiment data. W.M. led the project, directed, and revised the paper writing. X.L., Y.Y., and Y.Z. collected the original data of the experiment and participated in programming.

Funding

This work was supported by the National Natural Science Foundation of China, under Grant 61762061, and the Natural Science Foundation of Jiangxi Province, China under Grant 20161ACB20004.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, G.; Jiang, C.; Zhi, C.; Guosong, L.; Jun, L. Research on technical requirements of digital terrestrial television broadcasting monitoring equipment. Radio TV Broadcast Eng. 2017. [Google Scholar] [CrossRef]
  2. Mishra, M.S.K.; Bhagat, K.S. A survey on human motion detection and surveillance. Int. J. Adv. Res. Electr. Commun. Eng. 2015, 4, 1004–1048. [Google Scholar]
  3. Bird, N.D.; Masoud, O.; Papanikolopoulos, N.P. Detection of loitering individuals in public transportation areas. IEEE Trans. Intell. Transp. Syst. 2005, 6, 167–177. [Google Scholar] [CrossRef]
  4. Hamida, A.B.; Koubaa, M.; Nicolas, H. Video surveillance system based on a scalable application-oriented architecture. Multimed. Tools Appl. 2016, 75, 17187–17213. [Google Scholar] [CrossRef]
  5. Asakura, S.; Shitomi, T.; Saito, S. Technologies for the next generation of digital terrestrial television broadcasting. IEEE Int. Symp. Broadband Multimed. Syst. Broadcast. 2016, 62, 306–315. [Google Scholar]
  6. Min, C.B.; Zhang, J.J.; Xu, H. A Method of Video Loitering Detection Based on Dynamic Programming. In Proceedings of the IEEE Symposium on Photonics and Optoelectronics, Shanghai, China, 21–23 May 2012. [Google Scholar]
  7. Elhamod, M.; Levine, M.D. Automated real-time detection of potentially suspicious behavior in public transport areas. IEEE Trans. Intell. Transp. Syst. 2013, 14, 688–699. [Google Scholar] [CrossRef]
  8. Young, Y.; Papenkov, M.; Nakashima, T. Who is responsible? A man with dementia wanders from home, is hit by a train. J. Am. Med. Dir. Assoc. 2018, 19, 563–567. [Google Scholar] [CrossRef]
  9. Lin, Q.; Zhang, D.; Chen, L. Managing elders’ wandering behavior using sensors-based solutions: A survey. Int. J. Gerontol. 2014, 8, 49–55. [Google Scholar] [CrossRef]
  10. Zhang, X.; Zhang, Q.; Hu, S. Energy level-based abnormal crowd behavior detection. Sensors 2018, 18, 423. [Google Scholar] [CrossRef]
  11. Van, K.; Van, K.; Vennekens, J. Abnormal behavior detection in LWIR surveillance of railway platforms. In Proceedings of the 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 29 August–1 September 2017. [Google Scholar] [CrossRef]
  12. Ko, K.; Sim, K. Deep convolutional framework for abnormal behavior detection in a smart surveillance system. Eng. Appl. Artif. Intell. 2018, 67, 226–234. [Google Scholar] [CrossRef]
  13. Candamo, J.; Shreve, M.; Goldgof, D.B. Understanding transit scenes: A survey on human behavior-recognition algorithms. IEEE Trans. Intell. Transp. Syst. 2010, 11, 206–224. [Google Scholar] [CrossRef]
  14. Makris, D.; Ellis, T. Learning semantic scene models from observing activity in visual surveillance. IEEE Trans. Syst. Man Cybern. 2005, 35, 397–408. [Google Scholar] [CrossRef] [Green Version]
  15. Liu, Q.; Luo, B.; Zhai, S.L. Loitering detection based on discrete curvature entropy. Comput. Eng. Appl. 2013, 49, 164–166. [Google Scholar]
  16. Zhong, Z.; Zhang, N. Analysis of moving object trajectory gridded and hovering behavior detection research. Microelectron. Comput. 2014, 31, 60–63. [Google Scholar]
  17. Patino, L.; Ferryman, J. Loitering Behaviour Detection of Boats at Sea. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 2169–2175. [Google Scholar]
  18. Li, C.; Han, Z.; Ye, Q.; Jiao, J. Visual abnormal behavior detection based on trajectory sparse reconstruction analysis. Neurocomputing 2013, 119, 94–100. [Google Scholar] [CrossRef]
  19. Rodriguez, M.; Navarro, R.; Favela, J. An ontological representation model to tailor ambient assisted interventions for wandering. AAAI Fall Symp. 2012, 15, 245–246. [Google Scholar]
  20. Leach, M.J.V.; Sparks, E.P.; Robertson, N.M. Contextual anomaly detection in crowded surveillance scenes. Pattern Recognit. Lett. 2014, 44, 71–79. [Google Scholar] [CrossRef] [Green Version]
  21. Weidong, M.; Longshu, W.; Qing, H.; Yongzhen, K. Human fall detection based on motion tracking and shape aspect ratio. Int. J. Multimed. Ubiquitous Eng. 2016, 11, 1–14. [Google Scholar]
  22. Kang, J.; Kwak, S. Loitering detection solution for CCTV security system. J. Korea Multimed. Soc. 2014, 17, 15–25. [Google Scholar] [CrossRef]
  23. Mo, X.; Monga, V.; Bala, R.; Fan, Z. Adaptive sparse representations for video anomaly detection. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 631–645. [Google Scholar]
  24. Hampapur, A.; Brown, L.; Connell, J. Smart video surveillance: Exploring the concept of multiscale spatiotemporal tracking. IEEE Signal Process. Mag. 2005, 22, 38–51. [Google Scholar] [CrossRef]
  25. Ko, J.G.; Yoo, J.H. Rectified trajectory analysis based abnormal loitering detection for video surveillance. In Proceedings of the 2013 1st International Conference on Artificial Intelligence, Modelling and Simulation, Kota Kinabalu, Malaysia, 3–5 December 2013; pp. 289–293. [Google Scholar] [CrossRef]
  26. Lin, Q.; Zhang, D.; Connelly, K. Disorientation detection by mining GPS trajectories for cognitively-impaired elders. Pervasive Mob. Comput. 2015, 19, 71–85. [Google Scholar] [CrossRef]
  27. Li, W.; Zhang, D.; Sun, M. Loitering Detection Based on Trajectory Analysis. In Proceedings of the International Conference on Intelligent Computation Technology and Automation, Sofia, Bulgaria, 4–6 September 2016; pp. 530–533. [Google Scholar] [CrossRef]
  28. Jiang, H.; Wang, J.; Gong, Y.; Rong, N.; Chai, Z. Online multi-target tracking with unified handling of complex scenarios. IEEE Trans. Image Process. 2015, 24, 3464. [Google Scholar] [CrossRef]
  29. Lim, M.K.; Tang, S.; Chan, C.S. iSurveillance: Intelligent framework for multiple events detection in surveillance videos. Expert Syst. Appl. 2014, 41, 4704–4715. [Google Scholar] [CrossRef]
  30. Adler, A.; Elad, M.; Hel-Or, Y.; Rivlin, E. Sparse coding with anomaly detection. J. Signal Process. Syst. 2015, 79, 179–188. [Google Scholar] [CrossRef]
  31. Li, C.; Han, Z.; Ye, Q.; Jiao, J. Abnormal behavior detection via sparse reconstruction analysis of trajectory. In Proceedings of the 2011 Sixth International Conference on Image and Graphics, Hefei, China, 12–15 August 2011; pp. 807–810. [Google Scholar] [CrossRef]
  32. Xu, J.; Denman, S.; Sridharan, S.; Fookes, C.; Rana, R. Dynamic Texture Reconstruction from Sparse Codes for Unusual Event Detection in Crowded Scenes. In Proceedings of the Joint ACM Workshop on Modeling and Representing Events, Scottsdale, AZ, USA, 28 November–1 December 2011; pp. 25–30. [Google Scholar]
  33. Zin, T.T.; Tin, P.; Toriu, T. A markov random walk model for loitering people detection. In Proceedings of the 2010 Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Darmstadt, Germany, 15–17 October 2010; pp. 680–683. [Google Scholar] [CrossRef]
  34. Héctor, F.G.A.; Tomás, R.M.; Tapia, S.A. Identification of Loitering Human Behaviour in Video Surveillance Environments; Springer International Publishing: Berlin, Germany, 2015; Volume 9107, pp. 516–525. [Google Scholar]
  35. Dawn, D.D.; Shaikh, S.H. A comprehensive survey of human action recognition with spatio-temporal interest point (STIP) detector. Vis. Comput. 2016, 32, 289–306. [Google Scholar] [CrossRef]
  36. Hassan, M.; Ahmad, T.; Farooq, A. A review on human actions recognition using vision based techniques. J. Image Graph. 2014, 2, 28–32. [Google Scholar] [CrossRef]
  37. Leiyue, Y.; Weidong, M.; Keqiang, L. A new approach to fall detection based on the human torso motion model. Appl. Sci. 2017, 7, 993. [Google Scholar] [CrossRef]
  38. Zhao, Y.; Qiao, Y.; Yang, J. Abnormal Activity Detection Using Spatio-Temporal Feature and Laplacian Sparse Representation; Springer International Publishing: Berlin, Germany, 2015; pp. 410–418. [Google Scholar] [CrossRef]
  39. Zhu, S.; Hu, J.; Shi, Z. Local abnormal behavior detection based on optical flow and spatio-temporal gradient. Multimed. Tools Appl. 2016, 75, 9445–9459. [Google Scholar] [CrossRef]
  40. Li, W.; Mahadevan, V.; Vasconcelos, N. Anomaly detection and localization in crowded scenes. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 18–32. [Google Scholar] [CrossRef]
  41. Kumar, N.; Lee, J.H.; Rodrigues, J.J.P.C. Intelligent mobile video surveillance system as a bayesian coalition game in vehicular sensor networks: Learning automata approach. IEEE Trans. Intell. Transp. Syst. 2015, 16, 1148–1161. [Google Scholar] [CrossRef]
  42. Li, S.; Liu, C.; Yang, Y. Anomaly detection based on maximum a posteriori. Pattern Recognit. Lett. 2018, 107, 91–97. [Google Scholar] [CrossRef]
  43. Saligrama, V.; Chen, Z. Video anomaly detection based on local statistical aggregates. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; Volume 157, pp. 2112–2119. [Google Scholar]
  44. Lin, Q.; Zhang, D.; Huang, X. Detecting wandering behavior based on GPS traces for elders with dementia. In Proceedings of the 2012 12th International Conference on Control Automation Robotics and Vision (ICARCV), Guangzhou, China, 5–7 December 2012; Volume 43, pp. 672–677. [Google Scholar]
  45. Hadwen, T.; Smallbon, V.; Zhang, Q. Energy Efficient LoRa GPS Tracker for Dementia Patients. In Proceedings of the International Conference of the IEEE Engineering in Medicine and Biology Society, Honolulu, HI, USA, 11–15 July 2017; pp. 771–774. [Google Scholar]
  46. Patino, L.; Ferryman, J.; Beleznai, C. Abnormal behaviour detection on queue analysis from stereo cameras. In Proceedings of the IEEE International Conference on Advanced Video and Signal Based Surveillance, Taipei, Taiwan, 18–21 September 2015; pp. 1–6. [Google Scholar]
  47. Wang, C.Y.; Liu, H.; Zhang, H.Q. Anti-occluding tracking algorithm based on grey prediction and Mean-Shift. Control Eng. China 2017, 24. [Google Scholar] [CrossRef]
  48. Weidong, M.; Zhang, Y.; Jing, L.; Shaoping, X. Recognition of pedestrian activity based on dropped-object detection. Signal Process. 2018, 144, 238–252. [Google Scholar] [CrossRef]
  49. Permuter, H.; Francos, J.; Jermyn, I. A study of Gaussian mixture models of color and texture features for image classification and segmentation. Pattern Recognit. 2006, 39, 695–706. [Google Scholar] [CrossRef] [Green Version]
  50. Min, W.; Fan, M.; Guo, X.; Han, Q. A new approach to track multiple vehicles with the combination of robust detection and two classifiers. IEEE Trans. Intell. Transp. Syst. 2018, 19, 174–186. [Google Scholar] [CrossRef]
  51. Pramudya, G. Introduction to Algorithm; China Machine Press: Cambridge, MA, USA; London, UK, 2013. [Google Scholar]
  52. Prasad, D.K.; Leung, M.K.H.; Quek, C. ElliFit: An unconstrained, non-iterative, least squares based geometric ellipse fitting method. Pattern Recognit. 2013, 46, 1449–1465. [Google Scholar] [CrossRef]
  53. Duan, G.; Ai, H.; Lao, S. Human detection in video over large viewpoint changes. In Asian Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2010; pp. 683–696. [Google Scholar]
  54. Nam, Y. Loitering detection using an associating pedestrian tracker in crowded scenes. Multimed. Tools Appl. 2015, 74, 2939–2961. [Google Scholar] [CrossRef]
  55. Dalley, G.; Wang, X.; Grimsin, W.E.L. Event detection using an attention-based tracker. In Proceedings of the 10th IEEE International Workshop on PETS, Miami, FL, USA, 15–17 January 2009; pp. 71–79. [Google Scholar]
Figure 1. The overall framework of loitering detection method.
Applsci 09 01866 g001
Figure 2. Rectangle loitering. (a) Loitering in a real scene. (b) The corresponding trajectory simulation map.
Applsci 09 01866 g002
Figure 3. Ellipse loitering. (a) A simulated trajectory fitted by the ellipse and rectangle detection methods. (b) Ellipse loitering in a real scene.
Applsci 09 01866 g003
Figure 4. Sector loitering. (a) Loitering in a real scene. (b) The sector loitering trajectory simulation map.
Applsci 09 01866 g004
Figure 5. The scheme of target detection and tracking.
Applsci 09 01866 g005
Figure 6. Motion trajectory. (a) Trajectory models and (b) a real trajectory.
Applsci 09 01866 g006
Figure 7. The rectangle trajectory and corresponding calculation.
Applsci 09 01866 g007
Figure 8. Finding a convex hull of a set of points.
Applsci 09 01866 g008
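The convex-hull construction illustrated in Figure 8 can be sketched with Andrew's monotone chain algorithm, with the hull area then obtained by the shoelace formula. This is a generic implementation of the technique, not the authors' code:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                     # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):           # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]    # drop duplicated endpoints

def polygon_area(vertices):
    """Polygon area via the shoelace formula."""
    s = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

Interior trajectory points are discarded by the hull construction, so only the outline of the activity area contributes to the fitted area.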
Figure 9. A sector trajectory and its corresponding calculation.
Applsci 09 01866 g009
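The sector calculation of Figure 9 can be approximated by fixing an apex, taking the maximum distance from it as the radius, and measuring the angular span of the points; the sector area is then r²θ/2. The sketch below is illustrative: `fit_sector` and its apex choice are assumptions, not the paper's exact procedure.

```python
import math

def fit_sector(points, apex):
    """Approximate the smallest sector with the given apex that covers
    the (x, y) points; returns (radius, angular span, area)."""
    radius = max(math.hypot(x - apex[0], y - apex[1]) for x, y in points)
    angles = sorted(math.atan2(y - apex[1], x - apex[0]) for x, y in points)
    # The smallest span is 2*pi minus the largest angular gap between
    # consecutive points (including the wrap-around gap).
    n = len(angles)
    gaps = [angles[(i + 1) % n] - angles[i] for i in range(n)]
    gaps[-1] += 2 * math.pi  # close the circle
    span = 2 * math.pi - max(gaps)
    area = 0.5 * radius ** 2 * span
    return radius, span, area
```

For example, two points at unit distance along the positive x- and y-axes from the apex give a quarter circle: radius 1, span π/2, area π/4.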
Figure 10. Gaussian mixture model (GMM) foreground detection. (a,b) Detection results on the PETS2007 dataset. (c,d) Results on our self-collected dataset, including outdoor and indoor scenes.
Applsci 09 01866 g010
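The per-pixel GMM used for the foreground detection in Figure 10 is involved; as a simplified stand-in that captures the same idea, a running-average background model can be maintained and pixels deviating from it flagged as foreground. All names and parameters below are illustrative, not the paper's implementation:

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background update (simplified
    stand-in for a per-pixel Gaussian mixture model)."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Mark pixels whose intensity deviates from the background model."""
    return [[1 if abs(f - b) > thresh else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]
```

A full GMM additionally keeps several Gaussians per pixel with adaptive weights, which makes it robust to multimodal backgrounds (e.g., swaying trees) that a single running average cannot model.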
Figure 11. Target tracking and detection.
Applsci 09 01866 g011
Figure 12. Activity area threshold setting.
Applsci 09 01866 g012
Figure 13. The results of rectangle loitering. (a–c) The rectangle loitering detection results for scenarios 1, 2, and 3, respectively. (d) The area fitting curve for rectangle loitering; the red line indicates the area threshold.
Applsci 09 01866 g013
Figure 14. The results of ellipse loitering. (a–c) The activity area fitting process from scenario 1 to scenario 2. (d) The area fitting curve for ellipse loitering; the red line indicates the area threshold.
Applsci 09 01866 g014
Figure 15. The results of sector loitering. (a–c) The sector loitering detection results for video sequences in scenarios 1 to 3. (d) The area fitting curve for sector loitering; the red line indicates the area threshold.
Applsci 09 01866 g015
Figure 16. Testing the same scenarios with the three algorithms to distinguish their advantages and differences.
Applsci 09 01866 g016
Figure 17. The detected results of the PETS2007 dataset. (a,b) The rectangle loiterer detection results in the same scene S01 but from different shooting angles. (c) The detected result for the S05 dataset using the sector loitering detection algorithm. (d) The results for the S03 dataset using the ellipse loitering detection algorithm.
Applsci 09 01866 g017
Table 1. The accuracy of target detection and tracking.

Dataset         Accuracy
PETS2007        97.3%
Our collected   99%
Table 2. The experimental results of our proposed method.

Categories   Samples   Successful Detections
Rectangle    30        29
Sector       30        28
Ellipse      30        30
Table 3. The results of the three-algorithm comparison.

                       Successful Detections by Algorithm
Categories   Samples   Rectangle   Sector   Ellipse
Rectangle    30        29          1        5
Sector       30        2           28       3
Ellipse      30        3           1        30
Table 4. Comparison of the detection accuracy (%) using the PETS2007 dataset.

                                               Our Method
PETS2007 Dataset   Dalley's Method   Rectangle Loitering   Sector Loitering   Ellipse Loitering
S01                94.2              97.64                 97.94              97.74
S03                67.5              90.77                 89.12              90.94
S05                /                 91.83                 91.51              90.77
S08                /                 98.07                 98.17              97.63

Share and Cite

Huang, T.; Han, Q.; Min, W.; Li, X.; Yu, Y.; Zhang, Y. Loitering Detection Based on Pedestrian Activity Area Classification. Appl. Sci. 2019, 9, 1866. https://doi.org/10.3390/app9091866
