
Symmetry 2019, 11(9), 1154; https://doi.org/10.3390/sym11091154

Article
ILRA: Novelty Detection in Face-Based Intervener Re-Identification
1 Instituto Universitario SIANI, Universidad de Las Palmas de Gran Canaria (ULPGC), 35017 Las Palmas, Spain
2 Department of Computer Science and Artificial Intelligence, UPV-EHU, 20018 Gipuzkoa, Spain
3 Statistics Section: Department of Genetics, Microbiology and Statistics, Universitat de Barcelona, 08028 Barcelona, Spain
* Correspondence: [email protected]
These authors contributed equally to this work.
Received: 22 July 2019 / Accepted: 8 September 2019 / Published: 11 September 2019

Abstract:
Transparency laws allow citizens to monitor the activities of their political representatives. To this end, diarization of parliamentary sessions is required, either automatic or manual, the latter being time consuming. In the present work, this problem is addressed as a person re-identification problem. Re-identification is defined as the process of matching individuals under different camera views. This paper, in particular, deals with open world person re-identification scenarios, where the probe captured in one camera is not always present in the gallery collected in another one, i.e., it must be determined whether the probe belongs to a novel identity or not. This procedure is mandatory before matching the identity. In most cases, novelty detection is tackled by applying a threshold based on a linear separation of the identities. We propose a threshold-less approach to solve the novelty detection problem, which is based on a one-class classifier and therefore does not need any user defined threshold. Unlike other approaches that combine audio-visual features, an Isometric LogRatio transformation of a posteriori (ILRA) probabilities is applied to local and deep descriptors computed from the face, which exhibits symmetry and, unlike audio streams, can be exploited in the re-identification process. These features are used to train the one-class classifier to detect the novelty of the individual. The proposal is evaluated on real parliamentary session recordings that exhibit challenging variations in terms of pose and location of the interveners. The experimental evaluation explores different configuration sets in which our system achieves a significant improvement on the given scenario, obtaining an average F measure of 71.29% for online analyzed videos. In addition, ILRA performs better than the face descriptors used in recent face-based closed world recognition approaches, achieving an average improvement of 1.6% with respect to a deep descriptor.
Keywords:
re-identification; open world scenario; novelty detection; one-class classification; ILR transformation; local descriptors; deep descriptor

1. Introduction

Person re-identification is the process of recognizing an individual over different non-overlapping camera views [1,2,3,4,5,6]. Usually, probe is used to refer to the image of the individual to be recognized and gallery to the set of images of known people in which the probe has to be recognized. Re-identification problems can be classified into different categories depending on the considered dimension [2]: sample set, body model, etc. Bedagkar-Gala and Shah [5] propose a wider taxonomy based on the mandatory presence or not of the probe in the gallery. Thus, a closed world, or closed set, scenario is similar to the classic matching problem with a fixed size gallery. In an open world, or open set, scenario, the probe does not necessarily belong to the gallery, which evolves dynamically, adding new identities as the re-identification process takes place.
In the open world re-identification scenario, firstly, it is necessary to decide whether the probe belongs to the gallery or not. If the probe belongs to the gallery, a matching process is carried out; otherwise, the probe is added to the gallery as a new identity. The first stage in an open world re-identification scenario is very similar to the problem of novelty detection [7,8,9], which refers to the identification of new or unknown individuals, who were not previously registered in the system. Those individuals are denominated atypicals in opposition to those registered, who are referred to as typicals.
Speaker diarization [10] can be considered a problem similar to person re-identification. In the former, systems try to answer the question of "who spoke when". The difference lies in the scenarios where they are applied. Person re-identification is considered mostly in video surveillance scenarios where there is no audio and only coarse views of the people are obtained, so appearance based methods are widely used [2]. On the contrary, speaker diarization is carried out on video recordings (news, talk shows or television debates) where audio and close views of the participants are available. The availability of audio and images allows the application of techniques that combine both information sources [10,11]. In addition, the intervener views are normally close frontal views that allow information to be extracted from the face, instead of the general appearance of the intervener, allowing the exploitation of facial features, which are almost symmetrical and uniform [12,13].
In this paper, a face based open world re-identification approach is presented in a parliamentary debate scenario. This is a challenging scenario because deputies can participate in the debate from different locations: speaker platform (top row in Figure 1), seats (second and third row in Figure 1) and presidential table (bottom row in Figure 1). These locations impose appearance variations in terms of pose and distance to the camera; therefore, a frontal face is not always available for each intervener during the debate. Thus, the main difference between usual speaker diarization scenarios, e.g., TV talk shows, and parliamentary debates, which makes the latter a challenging problem, is that there exists a higher variability in poses, from closeup intervener frontal views, to a general view where not only the intervener appears, but other deputies that are close to her/him (first image of the bottom row in Figure 1). In order to provide a solution to these situations, the contributions of this paper are threefold:
  • We present a contextualization of open world re-identification problems.
  • We propose a feature vector based on Isometric LogRatio (ILR) transformation of a posteriori probabilities of belonging to a known intervener, applying a previous descriptor calculated only over the intervener face.
  • A threshold-less approach is used to solve the novelty detection problem in an open world scenario; thus, there is no need for any user defined threshold.
The remainder of this paper is organized as follows: Section 2 presents a review of recent literature in both re-identification and speaker diarization. Section 3 describes our methodology. Section 4 contains the experiment designs to evaluate our proposal and includes the achievements of the experiments. Section 5 deals with the advantages and disadvantages of the proposal, and, finally, conclusions are drawn in Section 6.

2. Related Work

In recent years, a dual, i.e., audio-visual, methodology in diarization has become popular. Bredin and Gelly [14] use television series to evaluate their diarization method. Their proposal is based on applying a clustering technique over the face images to assign the most co-occurring face cluster to the corresponding audio cluster. The latter is extracted from the linear Bayesian Information Criterion (BIC) clustering of the audio stream. Lastly, regular BIC clustering is used to obtain the final diarization. Unlike the previous authors, a multiple speaker detection approach that uses the position of the audio signal sources was proposed in [15]. Other authors [16] use the LIUM system to extract the audio diarization and a deformable part-based model (DPM) to detect faces. Later, a conditional random field based multi-target tracking is adopted to track the interveners. Subsequently, a clustering technique based on similarity distances and biometric measures is applied. To assign the names, One-to-One Speaker Tagging is computed to maximize the co-occurrence duration between clusters and the names provided by an Optical Character Recognition (OCR) system. As opposed to previous works, in [17], the authors do not detect the faces. Instead, skin blocks are detected using the chrominance coefficients of the skin-tone in the YUV color space, from which motion vectors are obtained. The Mel Frequency Cepstral Coefficients (MFCCs) of the audio stream are combined with the visual representation using a log-likelihood from two Gaussian Mixture Models (GMM).
Given that our proposal is based on a re-identification approach, we summarize some related works. The approach by Bazzani et al. [18] consists of splitting the individual body parts of the pedestrians. Features are extracted from the HSV color space using weighted histograms. Other features are extracted using an agglomerative clustering of the image pixels and the computation of texture patches. Moreover, in recent years, some researchers have introduced the use of metric learning techniques in the field of people re-identification. The aim of these techniques is to project the representation of the individuals into a feature space where those of the same individual are closer and those of different individuals are further apart. The authors in [19] propose the Keep It Simple and Straightforward (KISS) learning method, improving it with a regularization that suppresses the effect of larger eigenvalues in the covariance matrices. Moreover, in [20], the authors describe a technique to find a common space across different camera views in an unsupervised context; thus, k-means is used to cluster the person images from different views. Neural networks are also commonly used to project the samples into a new sample space. In this sense, the authors in [21] split the image into three grids and use this representation as input to a bilinear network that aggregates it into a feature vector. These vectors are used to obtain a new embedding feature space using a Siamese network, an architecture commonly used to verify the input samples. In [22], the authors also add an identification stage to the model.
As mentioned above, recent challenging scenarios in the re-identification field are those related to open world problems, where novelty detection is a must (Figure 2). Novelty detection is used in a wide range of contexts, such as wildlife scenes [23], temporal series of vital signs after gastrointestinal cancer surgery [24], the diagnosis of dermal diseases and the analysis of lymphatic cancer [25], or robotics scenarios [26]. More related to person re-identification, but using audio cues, the authors in [27] propose a novelty detection approach in a speaker diarization system. A likelihood ratio thresholding is applied depending on the speaker gender, normalized using the mean and standard deviation. This thresholding determines typical/atypical speakers. Unlike that approach, we focus on visual based re-identification problems. The authors in [28] propose a novel transfer ranking approach for two types of verification, multi-shot and one-shot, formulated as a bipartite ranking problem. They apply RankSVM and probabilistic relative distance comparison to obtain a model that optimizes a margin parameter based on the typical intra-class and inter-class variations, and the inter-class variations between typical and atypical images. The authors in [29] present a supervised subspace learning approach where a linear transformation of the features is learnt by optimizing a cost function related to the proportion of positive and negative misclassified pairs. In order to determine the presence of a probe person in a gallery, they introduce a margin parameter such that pairs whose distance is lower than the threshold are considered as belonging to the gallery, and as not belonging to it otherwise. The authors in [30] introduce a new person re-identification search setting whose main features are: a vast probe search population, fast disjoint-view search and sparse training person identities.
Over this setting, they obtain a set of features from the cross-view identity correlation and identity discrimination verification. In the same way as previous authors, the novelty detection is based on a threshold over the distance between individual representations.
Open world re-identification problems have been dealt with using deep learning in recent years; in particular, generative networks are used. For instance, an unsupervised domain adaptation approach that generates samples for effective target-domain learning is presented in [31]. This is done under the assumption that datasets in different re-identification domains have entirely different sets of identities; thus, a translated image should be of a different identity from any target image. In this way, a Cycle Generative Adversarial Network (CycleGAN) [32] is used to translate images from a source to a target domain. Then, a Siamese network pushes two dissimilar images away and brings similar ones closer, with the aim of classifying a sample as typical or atypical. In addition, the authors in [33] take advantage of integrating generated people images. On the one hand, they use a person discriminator to verify whether the generated image is a person or not. On the other hand, a target discriminator identifies whether a person belongs to the dataset or not. The feature vector is extracted from the last fully connected layer of the target discriminator, and a threshold is used to determine the novelty of the person.
Unlike the previous approaches in which most of them use a margin parameter to detect the novelty of an individual, our approach applies a one-class classifier [34] to determine the novelty of a person, without the need of tuning a threshold. The advantage of this classifier is that only positive samples are needed to train it, unlike other classifiers that make use of positive and negative samples in the training process. Furthermore, we propose the use of a feature vector based on ILR transformation of a posteriori probabilities of belonging to a known intervener, applying a descriptor calculated only over the intervener face that fits with the one-class classifier.

3. Method

In this section, we first outline the proposed approach, and then we explain in detail its two stages: initialization and ILR transformation of a posteriori (ILRA) probabilities (see Figure 3). Prior to the initialization stage, the video is pre-processed, keeping only frames that contain frontal faces.
A video is composed of a sequence of $I$ shots $(S_1, \ldots, S_I)$, where a shot is defined as a sequence of frames with a single intervener (see Figure 3). At the initialization stage, the system assigns an identity $ID_1$ to the first shot ($K = 1$). Next, shots are processed for novelty detection one by one, as a single intervener is assumed in each shot. Therefore, the system has to recognize whether the current shot intervener has been seen in previous shots (typical) or not (atypical). This stage finishes when an atypical shot is detected; at that point, the system knows two interveners ($K = 2$).
Once the system has registered two interveners, the next shots are processed to solve a new atypical detection problem. To this end, a novel modelling based on the a posteriori probability of individuals is proposed. This modelling cannot be implemented in the previous stage because the system needs to have registered at least two interveners. If the new shot is typical, a $K$-label classification is used to recognize which of the known interveners corresponds to the current shot. Otherwise, a new identity is assigned to the current shot. This procedure is repeated until no shots are left in the sequence $S_1, \ldots, S_I$. In the following subsections, the details of the different stages are described.
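The shot-by-shot procedure above can be sketched as a small driver loop. This is an illustrative skeleton only: `is_novel` and `classify` are hypothetical stand-ins for the novelty detection and $K$-label classification stages, and the toy run uses the true speaker id as a fake "shot".

```python
def diarize(shots, is_novel, classify):
    """Open-world loop from this section: the first shot founds ID 1; each
    later shot is either matched to a known identity or registered as new.
    `is_novel` and `classify` receive the already-labelled history."""
    labels = [1]                 # initialization: id(S_1) = ID_1
    K = 1
    for shot in shots[1:]:
        history = list(zip(shots[:len(labels)], labels))
        if is_novel(shot, history):
            K += 1               # atypical: register a new identity
            labels.append(K)
        else:                    # typical: K-label classification
            labels.append(classify(shot, history))
    return labels

# Toy run where each "shot" is just its true speaker id:
novel = lambda s, h: s not in [x for x, _ in h]
match = lambda s, h: dict(h)[s]
print(diarize([0, 1, 0, 2], novel, match))   # [1, 2, 1, 3]
```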

3.1. Video Pre-Processing

A video is a sequence of shots $S_1, \ldots, S_I$, and each shot $S_i$ is composed of frames $fr_1^i, \ldots, fr_{n_i}^i$ with a detected face; in the case of multiple detected faces, the largest one is selected. Prior to face detection, each frame is converted to grayscale because color information is not used by face descriptors [35]. For each shot $S_i$ of the video, a matrix $X^i = [x_1^i, \ldots, x_{n_i}^i]$ is obtained, $n_i$ being the number of frames of the $i$-th shot. The detected face of each frame, $fr_j^i$, is represented by a descriptor computed on the face region as proposed by [36]. Thus, each row $x_j^i$ of matrix $X^i$ corresponds to the descriptor of dimension $D$, $x_j^i = desc(fr_j^i) \in \mathbb{R}^D$ ($j = 1, \ldots, n_i$, $i = 1, \ldots, I$), resulting in a matrix of dimension $n_i \times D$:
$$X^i = \begin{pmatrix} x_{11}^i & \cdots & x_{1D}^i \\ \vdots & \ddots & \vdots \\ x_{n_i 1}^i & \cdots & x_{n_i D}^i \end{pmatrix}.$$

3.2. Initialization Stage

Firstly, the system assigns identity $ID_1$ to the first shot $S_1$, obtaining the extended matrix, which includes the label of the shot intervener:
$$X_1^e = \begin{pmatrix} x_{11}^1 & \cdots & x_{1D}^1 & ID_1 \\ \vdots & \ddots & \vdots & \vdots \\ x_{n_1 1}^1 & \cdots & x_{n_1 D}^1 & ID_1 \end{pmatrix}.$$
From now on, we refer as identity to the label ($ID_x$) given to each registered individual. Later, the system has to determine the identity of the intervener in the following shots until the first atypical shot is found. This stage has similarities with a One Vs. One (OVO) strategy because, so far, the system knows just one intervener. Therefore, the procedure has to detect whether the intervener in the next shot is the same intervener $ID_1$ (typical) or a different one (atypical). In terms of a classification problem, a one-class Support Vector Machine (SVM) [37] classifier is trained with the extended matrices $X_1^e, \ldots, X_{i-1}^e$, and predictions are obtained for the input matrix $X^i$. In this way, for each frame in $X^i$, a prediction in terms of typical/atypical is obtained. However, not all frames necessarily receive the same predicted label, and it is reasonable to consider the whole shot $S_i$ as typical ($id(S_i) = ID_1$) if most of the $n_i$ frames in shot $S_i$ are predicted as typical, and as atypical ($id(S_i) = ID_2$) otherwise, increasing the number of interveners $K$. Thus, we have decided to use the Winner-Takes-All (WTA) principle for this purpose.
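As an illustration, the per-frame one-class decision followed by the WTA rule could be sketched as below. This is a minimal sketch, not the paper's configuration: the `nu` and `gamma` values and the toy Gaussian data are assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def shot_is_typical(X_train, X_shot, nu=0.1, gamma="scale"):
    """Train a one-class SVM on frames of the known identity, predict each
    frame of the new shot, and apply the Winner-Takes-All majority rule."""
    clf = OneClassSVM(nu=nu, gamma=gamma).fit(X_train)
    preds = clf.predict(X_shot)          # +1 = typical frame, -1 = atypical
    return bool(np.sum(preds == 1) > len(preds) / 2)

# Toy usage: frames of one identity vs. frames drawn far away in feature space.
rng = np.random.default_rng(0)
known = rng.normal(0.0, 1.0, size=(200, 8))
same = rng.normal(0.0, 1.0, size=(50, 8))
other = rng.normal(8.0, 1.0, size=(50, 8))
print(shot_is_typical(known, same))    # expected: True
print(shot_is_typical(known, other))   # expected: False
```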

3.3. ILRA Stage

Once the system has registered at least two individuals ($K \geq 2$), it is necessary to determine whether the individual of the next shot $S_i$ is registered or not. For this purpose, this stage comprises three main processes: modelling, novelty detection and, if the current shot is typical, classification. This stage has similarities with a One Vs. All (OVA) strategy. The available data at this stage are, on the one hand, the extended matrices $X_1^e, \ldots, X_{i-1}^e$, which contain the descriptors of the frames of each previous shot plus the label of their respective associated identities, and, on the other hand, the descriptors of the frames in shot $S_i$, i.e., $X^i$.
The aim of the modelling stage is to obtain the a posteriori probability, $p_{jk}^i = \mathrm{Prob}(ID_k \mid x_j^i)$, of each frame $j$ in shot $S_i$ belonging to each registered identity $k$. Thus, for the $i$-th shot, a matrix $P^i$ is computed:
$$P^i = \begin{pmatrix} p_{11}^i & \cdots & p_{1K}^i \\ \vdots & \ddots & \vdots \\ p_{n_i 1}^i & \cdots & p_{n_i K}^i \end{pmatrix},$$
where $\sum_{k=1}^{K} p_{jk}^i = 1$. On the one hand, for shots $S_1, \ldots, S_{i-1}$, where an identity has already been assigned, the a posteriori probability is estimated using a leave-one-out strategy: for each frame $fr_j \in \{S_1, \ldots, S_{i-1}\}$, the a posteriori probabilities are computed using a Naïve Bayes classifier trained with all the frames except frame $fr_j$, i.e., $\{S_1, \ldots, S_{i-1}\} \setminus fr_j$. On the other hand, for each frame $fr_j \in S_i$, the a posteriori probabilities are computed using a Naïve Bayes classifier trained with all the frames of the previous shots, $\{S_1, \ldots, S_{i-1}\}$.
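The two posterior estimates can be sketched as follows. `GaussianNB` is an assumption on our part, since the text only specifies a Naïve Bayes classifier; the leave-one-out loop also assumes every identity has at least two frames, so that no class disappears when one frame is held out.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def posteriors_loo(X, y):
    """Leave-one-out a posteriori probabilities for already-labelled frames:
    for each frame j, train on all other frames and predict P(ID_k | x_j)."""
    P = np.zeros((len(X), len(np.unique(y))))
    for j in range(len(X)):
        mask = np.arange(len(X)) != j
        P[j] = GaussianNB().fit(X[mask], y[mask]).predict_proba(X[j:j + 1])[0]
    return P

def posteriors_new_shot(X_prev, y_prev, X_shot):
    """A posteriori probabilities for the frames of the incoming shot,
    trained on all frames of the previous shots."""
    return GaussianNB().fit(X_prev, y_prev).predict_proba(X_shot)
```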
Once the a posteriori probabilities are computed, the second step of the modelling process is carried out: the ILR transformation is applied to the matrices $P^1, \ldots, P^i$. This is a well-known transformation in the field of Compositional Data, which obtains a real coordinate representation preserving the Aitchison metric in the original space of the a posteriori probabilities [38]. It is formally defined as:
$$Z^i = ilr_V(P^i) = clr(P^i) \cdot V,$$
where $clr$ is the Centered Log Ratio (CLR) transformation and $V$ is a matrix whose columns form an orthonormal basis of the CLR plane [38]. In summary, each $j$-th frame is normalized as follows:
$$x_j^i \in \mathbb{R}^D \rightarrow p_j^i \in \mathbb{R}^K \rightarrow z_j^i \in \mathbb{R}^{K-1}.$$
Then, all transformed vectors are organized by rows in a matrix $Z^i$, and this is the matrix that characterizes shot $S_i$ to determine the identity of the intervener. A similar transformation procedure is followed for all frames in shots $S_1, \ldots, S_{i-1}$, obtaining matrices $Z^1, \ldots, Z^{i-1}$. To determine the novelty in shot $S_i$, a one-class SVM classifier is trained with the extended matrices $Z_1^e, \ldots, Z_{i-1}^e$, and, similarly to the novelty detection approach of the initialization stage, predictions are obtained for the input matrix $Z^i$. Again, WTA is used to determine whether $S_i$ is atypical or typical. In the first situation, $id(S_i) = ID_{K+1}$ is assigned and the number of identities known by the system increases. In the other case, when $S_i$ is considered typical, a classifier is used to identify which of the known identities it belongs to. The classification module could be performed by any classifier trained with the extended matrices $Z_1^e, \ldots, Z_{i-1}^e$ to determine $id(S_i)$. Moreover, a WTA strategy is chosen to determine the identity that characterizes shot $S_i$.
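The CLR/ILR mapping can be sketched in a few lines. The Helmert-type basis below is one possible choice of $V$ (an assumption on our part: the text only requires an orthonormal basis of the CLR plane), and a small `eps` guards against zero probabilities.

```python
import numpy as np

def clr(P, eps=1e-12):
    """Centered log-ratio: log of each component minus the row-wise mean log."""
    L = np.log(P + eps)
    return L - L.mean(axis=1, keepdims=True)

def helmert_basis(K):
    """K-1 orthonormal columns spanning the zero-sum (CLR) plane in R^K."""
    V = np.zeros((K, K - 1))
    for j in range(1, K):
        V[:j, j - 1] = 1.0 / j
        V[j, j - 1] = -1.0
        V[:, j - 1] /= np.linalg.norm(V[:, j - 1])
    return V

def ilr(P):
    """Isometric log-ratio transform: rows of P (probabilities summing to 1)
    are mapped from R^K to R^(K-1), preserving the Aitchison metric."""
    return clr(P) @ helmert_basis(P.shape[1])
```

Because the columns of $V$ are orthonormal and span the plane containing the CLR rows, the transform is an isometry: row norms are preserved while the dimension drops from $K$ to $K-1$.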

3.4. ILRA Time Complexity

The time complexity of computing ILRA comprises the a posteriori probability computation, the ILR transformation, the novelty detection stage and, in some cases, a classification. The time complexity of the a posteriori probability computation (Naïve Bayes classifier) is $O(N \times D)$, where $N$ is the number of frames and $D$ is the number of attributes. The time complexity of the ILR transformation is $O(N \times K)$, where $K$ is the number of interveners. Finally, the time complexity of the novelty detection and classification approach (SVM) is $O((K-1) \times N^2)$. Thus, the overall time complexity for each shot is $O(N(D + K(N+1) - N))$ in the case of atypical detection; otherwise, $O(N(D + K(2N+1) - 2N))$, since a typical shot additionally requires the classification step.

4. Experimental Evaluation and Results

In order to evaluate our proposal, recordings from the Canary Islands Parliament (Santa Cruz de Tenerife, Canary Islands, Spain), which are publicly available on the Parliament web site [39], were processed on a workstation with an Intel Core i7-2600 at 3.40 GHz and 16 GB of RAM. The source code is available on GitHub [40]. For the experiments, we chose six videos with different characteristics, which are summarized in Table 1. The selected videos cover a wide range of interveners (5 to 21) and shots, so the influence of the number of interveners could be evaluated. Shots shorter than 30 s were skipped, as they were considered not relevant for the diarization. In addition, frames without a detected face are discarded. To this end, a face detector based on Histogram of Oriented Gradients features and an SVM classifier is applied [41], and the face is normalized by establishing a vertical symmetry axis through the center of the eye positions in the image, which are estimated by the face detector. Attending to the number of interveners, the videos can be classified as short, with fewer than ten interveners (video identifiers 2771, 2918, 3015), and large, with more than ten (video identifiers 2792, 2907, 3011).
First, a set of offline experiments were carried out to focus and to evaluate different situations involved in the proposed approach. The evaluation comprised three main experiments: (1) novelty detection in the initialization; (2) novelty detection and (3) classification in the ILRA stage. In this way, the performance of the different stages of our approach can be evaluated. With this objective, the shots of the same I D were reorganized to carry out the experiments properly, as shown in Figure 4.
As a result of the rearrangement of the samples, the training sets are unbalanced because some $ID$s are more present than others; to avoid this, 500 frames were randomly chosen per identity. When the number of frames for an identity was lower, all shot frames were used. To validate the process, we carried out 100 repetitions.
The dimensionality of the individuals was reduced, $\mathbb{R}^{w \times h} \rightarrow \mathbb{R}^D$, as mentioned in Section 3. This reduction is based on applying a descriptor to the intervener face area ($w \times h$), where $w$ and $h$ represent the width and height, respectively. Two descriptor types have been evaluated: local descriptors and a deep descriptor. The former type used a grid of 3 × 3 cells over an aligned image of 59 × 65. The following local descriptors were evaluated: Histogram of Oriented Gradients (HOG) [42], Local Binary Patterns (LBP) [43], LBP Uniform (LBPu2) [44], Neighborhood Intensity based LBP (NILBP) [45], and Weber Local Descriptor (WLD) [46], with dimensionalities of 81, 2304, 531, 531, and 2304, respectively. The latter type corresponds to a feature vector extracted from a deep network. In this case, a triplet network based on an Inception Resnet backbone (Resnet T) [47,48] is used. A triplet network embeds the samples in a new feature space, where samples that belong to the same identity are close and samples from different identities are far apart. Thus, three instances of Inception Resnet are used that share the same weight matrix. The embedded space is represented by the last fully connected layer, with a dimensionality of 128 in our experiments. Resnet T is used due to its excellent scores in different kinds of problems in recent years. The network was trained on Ms-celeb-1m [49] because the dataset consists of 1 million identities, so we obtained a generalized model to extract the feature vectors from the faces. The network was initialized with the following parameters: mini-batches of size 90 along 500 epochs; the initial learning rate was 0.1, decreased by a factor of 10 after every 100 epochs; and the margin between positive and negative pairs ($\alpha$) was set to 0.2. Multiple descriptors were considered because of the importance of evaluating the influence of different feature vectors on both stages of the algorithm.
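To illustrate the grid-of-cells layout used by the local descriptors, a basic LBP over a 3 × 3 grid with 256-bin histograms (matching the 2304 dimensionality reported for LBP) might look as follows. This is a simplified sketch, not the exact LBP/LBPu2/NILBP/WLD implementations cited above.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP code per interior pixel (illustrative variant)."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)   # set bit if neighbour >= center
    return code

def grid_lbp_descriptor(img, grid=(3, 3)):
    """Concatenate normalized 256-bin LBP histograms over a grid of cells,
    mirroring the 3x3 cell layout: 9 cells x 256 bins = 2304 dimensions."""
    codes = lbp_image(img)
    h, w = codes.shape
    hists = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            cell = codes[gy * h // grid[0]:(gy + 1) * h // grid[0],
                         gx * w // grid[1]:(gx + 1) * w // grid[1]]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            hists.append(hist / max(cell.size, 1))
    return np.concatenate(hists)

# Example on a random crop of the aligned face size used above (65 x 59):
rng = np.random.default_rng(1)
face = rng.integers(0, 256, size=(65, 59), dtype=np.uint8)
print(grid_lbp_descriptor(face).shape)   # (2304,)
```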
Once the experimental setup is defined, it is necessary to adopt a metric. The accuracy (Acc.) is used with the purpose of evaluating the offline experiments, being formally defined as
$$Acc = \frac{TP + TN}{TP + FP + TN + FN},$$
where $TP$ and $FP$ are the number of true and false positives, respectively, and $TN$ and $FN$ are the number of true and false negatives, respectively. Accuracy is used to measure typical and atypical detections. Instead of calculating the mean of the typical and atypical values, the F measure is adopted to obtain a single measure providing a trade-off between both accuracies. Its formal definition is presented in the following equation:
$$F = \frac{2 \cdot precision \times recall}{precision + recall},$$
where
$$precision = \frac{TP}{TP + FP}$$
and
$$recall = \frac{TP}{TP + FN},$$
where $precision$ is the fraction of relevant samples among the retrieved samples, and $recall$ is the fraction of relevant samples that have been retrieved over the total amount of relevant samples. Below, we present and discuss the results obtained in the experiments.
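The metrics above translate directly into code; the counts in the example are made-up numbers for illustration.

```python
def accuracy(tp, fp, tn, fn):
    """Acc = (TP + TN) / (TP + FP + TN + FN)."""
    return (tp + tn) / (tp + fp + tn + fn)

def f_measure(tp, fp, fn):
    """F = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(accuracy(8, 2, 8, 2))   # 0.8
print(f_measure(8, 2, 2))     # ~0.8 (precision = recall = 0.8)
```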

4.1. Evaluation of Novelty Detection in the Initialization Stage

The purpose of this first experiment is to evaluate the ability of the system to detect a novel identity when a single identity is known, i.e., $K = 1$. The typical or atypical detection was performed as follows: for each identity $ID_k$, we considered its corresponding samples as a test set and, to form the training set, we considered two different situations.
In the first case, the training set was composed of those samples with identity $ID_j \neq ID_k$. In such a situation, the tested identity should be labelled as atypical (Figure 5a), and the number of different comparisons is $K^2 - K$. Note that, for each comparison, the detection of the individuals has to be atypical to be a success.
In the other case, the training set was composed of those samples with the same identity $ID_k$. To avoid having identical training and test sets, one third of the original samples of identity $ID_k$ is used as a test set and the remaining two thirds as a training set. In this situation, the detection of the individuals has to be typical to be a success (Figure 5b). We performed this experiment for all $K$ identities in the video.
The novelty detection in initialization stage columns of Table 2 summarize the results of the initialization stage experiments. It can be observed that, in all videos, the best F measure is obtained using Resnet T, with an average value of 97.66%. In general, the atypical detection results are greater than or equal to 90% in 30 of 36 settings.

4.2. Evaluation of Novelty Detection in the ILRA Stage

The experiments related to the ILRA stage in the offline scope are motivated by the need to evaluate the capacity of the approach to detect the novel identity of a new shot when several identities are known. Therefore, two evaluations are considered for each identity: atypical and typical. The former comprises all $ID_k$ identity samples in the test set, while the samples of the remaining identities, $ID_{j \neq k}$, are used for training (Figure 6a). This experiment evidences the approach behaviour for atypical identity detection, as the tested identity $ID_k$ should be labelled as atypical. The latter comprises all identities in both training and test sets, splitting their respective samples randomly and in a balanced way, using one third for testing and the rest for training (Figure 6b). This experiment evidences the approach behaviour for typical identity detection, as the tested identity $ID_k$ should be labelled as typical.
The novelty detection in the ILRA stage columns of Table 2 show that the descriptor with the highest F measure is HOG, reporting 78.14%. It is also observed that, when the number of interveners is low, the best descriptor is HOG, while for a large number of interveners WLD apparently performs better than the remaining descriptors.

4.3. Evaluation of Intervener Classification in the ILRA Stage

The purpose of this experiment is to evaluate the capacity of the approach to correctly assign the identity of a new intervener shot when multiple identities are known. That is, when the identity of the new shot ($id(S_i)$) is present among the known identities, the intervener has been considered typical in the ILRA stage, and the approach should match it to the $ID$ it belongs to. Two classifiers are considered: the Maximum A Posteriori (MAP) probability extracted from the samples (see Figure 7a), and an SVM classifier, to continue using the same typology of classifiers used throughout this proposal (see Figure 7b). In the case of the SVM, a Radial Basis Function (RBF) kernel is selected with main parameters $\nu = 0.1$, $\gamma = 0.1$ and $C = 1$. A repeated holdout validation is carried out using 100 repetitions with re-sampling of the individuals, one third of the samples for testing and the remainder for training.
The results are summarized in the intervener classification in the ILRA stage columns of Table 2. Among the six descriptors, Resnet T yields the best accuracy in seven of the twelve experiments, giving average values for the MAP and SVM classifiers of 88.69% and 97.36%, respectively.

4.4. Evaluation of the Proposed Online System

After evaluating the different offline stages, we carried out an online experiment. The number of frames per shot was modified with respect to the offline configuration: 200 frames per shot were used because the experiment comprises a larger number of shots, some of them containing a reduced number of frames, a situation that brings about unbalanced shots that affect the performance of the algorithm. Given the better performance provided by the SVM classifier in the previous offline experiments, SVM is adopted to identify the interveners in the case of typical individuals.
To evaluate the online system, we adopted from [50] the True Re-identification Rate (TRR) and True Distinction Rate (TDR) measures. TRR evaluates how well the method re-identifies interveners, while TDR evaluates how well it distinguishes among them. Both measures are formulated as follows:
TRR = tr(score) / N,

TDR = 1 - ((score 1_N)^T 1_N - tr(score)) / (N (N - 1)),
where 1_N is a vector of dimension N with all elements equal to one, and tr(score) is the trace of score, an N × N matrix containing the result of comparing each proposed intervener shot identity with every other proposed intervener shot identity: 1 is assigned to equal identities and 0 to different ones. The numerator of the TDR fraction, (score 1_N)^T 1_N - tr(score), is therefore the sum of the off-diagonal entries of score, normalized by the N(N - 1) off-diagonal elements. Thus, 1 in the diagonal elements and 0 in the off-diagonal elements compose a perfect score. To obtain a single measure, the F measure is adopted, relating TRR (considered as recall) and TDR (considered as precision).
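In numpy form, the two measures and their F combination read as follows (a direct transcription of the formulas above; the function name is ours):

```python
import numpy as np

def trr_tdr_f(score: np.ndarray):
    """Compute TRR, TDR and F from an N x N binary score matrix, where
    score[i, j] = 1 if shots i and j were assigned the same identity
    and 0 otherwise (the diagonal compares each shot with itself)."""
    n = score.shape[0]
    ones = np.ones(n)
    trr = np.trace(score) / n                           # correct diagonal matches
    off_diag = (score @ ones) @ ones - np.trace(score)  # sum of off-diagonal entries
    tdr = 1.0 - off_diag / (n * (n - 1))                # penalize spurious matches
    f = 2 * trr * tdr / (trr + tdr)                     # harmonic mean (F measure)
    return trr, tdr, f
```

A perfect score matrix (the identity matrix) yields TRR = TDR = F = 1, while a matrix that matches every shot with every other one drives TDR, and hence F, to zero.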
The last experiment covers the online process, where real online video processing is performed, using the same descriptor in every stage of the algorithm. The results are summarized in Table 3. In most of the processed videos one descriptor beats the others, but there is no common behaviour across the entire video collection; here the choice of descriptor depends on the video, not on the number of interveners. On the one hand, we highlight the F measure obtained in video 3011, 88.25%, covering a population of 21 interveners and two hours of recording in an open world problem, i.e., a genuinely complex scenario. On the other hand, the result achieved for recording 2907 is interesting because it exposes a deficiency of traditional feature vectors caused by occlusion: most of the interveners put their glasses on or take them off during the intervention. In this situation, Resnet T improves by at least 44.69% over the other descriptors, reaching 76.51% in recording 2907.
Furthermore, our system is compared with our previous work [51], as far as we know the only existing approach in this scenario, i.e., face-based intervener re-identification in open world parliamentary debate sessions. Additionally, face recognition approaches focused on the closed world are used to extend the comparison with the proposed ILRA approach. In particular, HOG, LBP, LBPu2, NILBP, WLD and Resnet T are used as feature vectors. In order to detect atypical samples, we use a threshold with a value of 0.5, a sample being atypical when its value is larger than the threshold. When the sample is typical, a distance vector is calculated from the previously analyzed samples with respect to the current sample, and the identity with the minimum distance is assigned to the current sample.
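The baseline decision rule just described can be sketched as follows. The Euclidean distance and the assumption that descriptors are normalized so that 0.5 is a meaningful cutoff are ours; only the 0.5 threshold and the minimum-distance assignment come from the text.

```python
import numpy as np

def baseline_reid(sample, gallery, gallery_ids, threshold=0.5):
    """Threshold-based baseline: a sample whose minimum distance to the
    gallery exceeds `threshold` is declared atypical (a new identity);
    otherwise it takes the identity of the nearest gallery sample."""
    if len(gallery) == 0:
        return None  # empty gallery: the first intervener is necessarily new
    dists = np.linalg.norm(np.asarray(gallery) - sample, axis=1)
    i = int(np.argmin(dists))
    if dists[i] > threshold:
        return None          # atypical: register as a new identity
    return gallery_ids[i]    # typical: nearest-neighbour identity
```

For example, with a gallery of two descriptors near [0, 0] and [1, 1], a probe at [0, 0.1] is matched to the first identity, while a probe at [5, 5] is declared atypical.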
Our method obtains the best F measure in most of the experiments for the different videos, compared with the above methods. These results are summarized in Table 4. On the one hand, the highest increase in performance occurs in video 3015, where there is an improvement of 63.80% with respect to our previous work, ILRA being widely superior to the traditional methods of face recognition. On the other hand, the recent technique, Resnet T, achieves a significant increase in results compared to the techniques mentioned above. However, it does not beat the proposed method, which reaches an average advantage of 1.12% for the analyzed videos.

5. Discussion

In this paper, we analyzed the ILRA approach in offline and online contexts. On the one hand, offline experiments were carried out to evaluate the method in a controlled scenario. In this way, we could analyze each stage of the approach. On the other hand, online experiments allowed us to test the method in real conditions, where the system starts without any registered person.
A feature of the proposed method is the need for an initialization stage, because it is not possible to compute the ILRA with fewer than three registered identities (Section 3). The performance in detecting the second identity, which starts the ILRA process, affects the rest of the system. For this reason, we evaluated the initialization stage in an offline context (Section 4.1), where the Resnet T descriptor achieved a better score than the local descriptors.
The modelling process is evaluated in the offline ILRA stage, which is split into two processes: novelty detection (Section 4.2) and classification of identities (Section 4.3). Firstly, the Resnet T descriptor is not the best descriptor for novelty detection in the ILRA stage; in this case a local descriptor, HOG, obtains the best average performance. Secondly, the Resnet T descriptor is better than the local descriptors for classification, whether using a MAP or an SVM classifier.
This disaggregated analysis of the offline experiments shows that there is no single best descriptor for every stage. This issue carries over to the online experiments, where a decrease in the average score is observed for each descriptor. This is due to recognition failures in the first stage, which generate more false positive identifications. A way to alleviate this issue is to choose a specific descriptor for each stage; as shown in Section 4, no single descriptor stands out in all stages. Selecting one descriptor for every stage affects the system performance, making it less robust: the system is certainly simpler, but the use of a single descriptor in all system stages does not seem to be the ideal approach. A further observation suggests that Resnet T is well suited to detecting outliers in the one-class problem, HOG fits novelty detection in the ILRA stage, and Resnet T performs better for classification in the ILRA stage.
The existence of short videos with very few detected faces favours SVM over MAP, as can be observed in video 3011 (intervener classification in the ILRA stage, Table 2). This is due to the estimation of the Naïve Bayes parameters used in the MAP, such as the class means, which are strongly affected by unbalanced classes. Several authors have verified that SVM performs better than Naïve Bayes when dealing with unbalanced classes [52,53,54]. Moreover, the feature vector transformation using ILR alleviates the unbalance problem, as suggested in [55].

6. Conclusions

A feasible face-based intervener re-identification approach for open world scenarios has been presented, intended for application to diarization problems. We have evaluated the approach on parliamentary debate sessions, a challenging scenario where people vary their pose and appearance, and do not necessarily appear on camera while speaking.
In this scenario, the detection of novel interveners is relevant, as those identities must be properly registered. If novelty detection fails to detect a new intervener, s/he will be incorrectly assigned to a previously detected intervener; conversely, if a previously detected intervener is considered new, the number of interveners is erroneously increased. We have used and evaluated several descriptors for identity registration. In the one-class problem, Resnet T has shown good performance in novelty detection. HOG yields the highest accuracy for a low number of interveners; however, when the number of interveners is larger, WLD achieves the best results. The best configuration for the classification stage is Resnet T with an SVM classifier.
Our experiments with the proposed system have exhibited good results, with an average F measure of 71.29% for the best overall descriptor. In addition, we have compared ILRA with different techniques used in closed world face recognition, exhibiting an increase of 1.6% with respect to the deep descriptor extracted from a triplet network based on an Inception Resnet backbone. In the offline experiments, novelty detection in the initialization stage reaches an average accuracy of 97.66% with the Resnet T descriptor. In the ILRA stage, novelty detection obtains an average accuracy of 78.14% with the HOG descriptor, and the intervener classification experiments reach an average accuracy of 97.36% using the Resnet T descriptor with our method.
As future work, we plan to apply this approach using only audio features, and then the fusion of audio and video features. In this way, we could determine the influence of the audio with respect to the image representation and verify whether a better feature vector is obtained. Moreover, we intend to use deep learning techniques to replace the one-class SVM in the novelty detection module.

Author Contributions

Data curation, P.A.M.-R., J.L.-N. and M.C.-S.; formal analysis, P.A.M.-R., I.I. and C.A.; funding acquisition, B.S., I.I., J.L.-N., M.C.-S. and C.A.; investigation, P.A.M.-R., B.S. and I.I.; methodology, P.A.M.-R., B.S., I.I. and C.A.; project administration, B.S. and I.I.; resources, J.L.-N. and M.C.-S.; software, P.A.M.-R., J.L.-N. and I.I.; supervision, J.L.-N., M.C.-S., B.S., I.I. and C.A.; validation, P.A.M.-R., B.S., I.I. and C.A.; visualization, P.A.M.-R., B.S., I.I. and C.A.; writing - original draft, P.A.M.-R., J.L.-N. and M.C.-S.; writing - review and editing, P.A.M.-R., J.L.-N., M.C.-S., B.S., I.I. and C.A.

Funding

This research was funded by the Spanish Ministry of Economy and Competitiveness, Spain RTI2018-093337-B-I00, by AGAUR 2014SGR464, by the Office of Economy, Industry, Commerce and Knowledge of the Canary Islands Government (CEI2018-4), and the Computer Science Department at the Universidad de Las Palmas de Gran Canaria.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gheissari, N.; Sebastian, T.B.; Hartley, R. Person Reidentification Using Spatiotemporal Appearance. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), New York, NY, USA, 17–22 June 2006; Volume 2, pp. 1528–1535. [Google Scholar]
  2. Vezzani, R.; Baltieri, D.; Cucchiara, R. People reidentification in surveillance and forensics: A survey. ACM Comput. Surv. 2013, 46, 29:1–29:37. [Google Scholar] [CrossRef]
  3. Prosser, B.; Zheng, W.S.; Gong, S.; Xiang, T. Person Re-Identification by Support Vector Ranking. In Proceedings of the British Machine Vision Conference (BMVC), Aberystwyth, UK, 31 August–3 September 2010; pp. 21.1–21.11. [Google Scholar] [CrossRef]
  4. Roth, P.M.; Hirzer, M.; Köstinger, M.; Beleznai, C.; Bischof, H. Mahalanobis distance learning for person re-identification. In Person Re-Identification; Gong, S., Cristani, M., Yan, S., Loy, C.C., Eds.; Springer: London, UK, 2014; pp. 247–267. [Google Scholar]
  5. Bedagkar-Gala, A.; Shah, S.K. A survey of approaches and trends in person re-identification. Image Vis. Comput. 2014, 32, 270–286. [Google Scholar] [CrossRef]
  6. Liao, S.; Hu, Y.; Zhu, X.; Li, S.Z. Person Re-Identification by Local Maximal Occurrence Representation and Metric Learning. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 2197–2206. [Google Scholar]
  7. Markou, M.; Singh, S. Novelty detection: a review-part 1: Statistical approaches. Signal Process. 2003, 83, 2481–2497. [Google Scholar] [CrossRef]
  8. Chandola, V.; Banerjee, A.; Kumar, V. Anomaly detection: A survey. ACM Comput. Surv. 2009, 41, 1–58. [Google Scholar] [CrossRef]
  9. Pimentel, M.A.; Clifton, D.A.; Clifton, L.; Tarassenko, L. A review of novelty detection. Signal Process. 2014, 99, 215–249. [Google Scholar] [CrossRef]
  10. Anguera, X.; Bozonnet, S.; Evans, N.; Fredouille, C.; Friedland, G.; Vinyals, O. Speaker diarization: A review of recent research. IEEE Trans. Audio Speech Lang. Process. 2012, 20, 356–370. [Google Scholar] [CrossRef]
  11. El Khoury, E.; Sénac, C.; Joly, P. Audiovisual diarization of people in video content. Multimed. Tools Appl. 2014, 68, 747–775. [Google Scholar] [CrossRef]
  12. Liu, K.; Chen, J.H.; Chang, K.M. A Study of Facial Features of American and Japanese Cartoon Characters. Symmetry 2019, 11, 664. [Google Scholar] [CrossRef]
  13. Kamachi, M.G.; Chiba, T.; Kurosumi, M.; Mizukoshi, K. Perception of Human Age from Faces: Symmetric Versus Asymmetric Movement. Symmetry 2019, 11, 650. [Google Scholar] [CrossRef]
  14. Bredin, H.; Gelly, G. Improving Speaker Diarization of TV Series Using Talking-Face Detection and Clustering. In Proceedings of the ACM International Conference on Multimedia (ACMMM), Amsterdam, The Netherlands, 15–19 October 2016; pp. 157–161. [Google Scholar]
  15. Gebru, I.; Ba, S.; Li, X.; Horaud, R. Audio-visual speaker diarization based on spatiotemporal bayesian fusion. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39. in press. [Google Scholar] [CrossRef] [PubMed]
  16. Le, N.; Wu, D.; Meignier, S.; Odobez, J.M. EUMSSI Team at the Mediaeval Person Discovery Challenge. In Proceedings of the Working Notes Proceedings of the MediaEval 2015 Workshop, Wurzen, Germany, 14–15 September 2015. [Google Scholar]
  17. Friedland, G.; Hung, H.; Yeo, C. Multi-Modal Speaker Diarization of Real-World Meetings Using Compressed-Domain Video Features. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), Taipei, Taiwan, 19–24 April 2009; pp. 4069–4072. [Google Scholar]
  18. Bazzani, L.; Cristani, M.; Murino, V. Symmetry driven accumulation of local features for human characterization and re-identification. Comput. Vis. Image Underst. 2013, 117, 130–144. [Google Scholar] [CrossRef]
  19. Tao, D.; Guo, Y.; Song, M.; Li, Y.; Yu, Z.; Tang, Y.Y. Person re-identification by dual-regularized kiss metric learning. IEEE Trans. Image Process. 2016, 25, 2726–2738. [Google Scholar] [CrossRef]
  20. Yu, H.X.; Wu, A.; Zheng, W.S. Cross-View Asymmetric Metric Learning for Unsupervised Person Re-Identification. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017. [Google Scholar]
  21. Ustinova, E.; Ganin, Y.; Lempitsky, V. Multi-Region Bilinear Convolutional Neural Networks for Person Re-Identification. In Proceedings of the 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Lecce, Italy, 29 August–1 September 2017; pp. 1–6. [Google Scholar]
  22. Zheng, Z.; Zheng, L.; Yang, Y. A Discriminatively Learned CNN Embedding for Person Reidentification. ACM Trans. Multimed. Comput. Commun. Appl. 2017, 14, 13. [Google Scholar] [CrossRef]
  23. Yong, S.P.; Deng, J.D.; Purvis, M.K. Novelty detection in wildlife scenes through semantic context modelling. Pattern Recognit. 2012, 45, 3439–3450. [Google Scholar] [CrossRef]
  24. Clifton, D.A.; Clifton, L.; Hugueny, S.; Wong, D.; Tarassenko, L. An extreme function theory for novelty detection. IEEE J. Sel. Top. Signal Process. 2013, 7, 28–37. [Google Scholar] [CrossRef]
  25. Irigoien, I.; Arenas, C. INCA: New statistic for estimating the number of clusters and identifying atypical units. Stat. Med. 2008, 27, 2948–2973. [Google Scholar] [CrossRef] [PubMed]
  26. Boucenna, S.; Cohen, D.; Meltzoff, A.N.; Gaussier, P.; Chetouani, M. Robots learn to recognize individuals from imitative encounters with people and avatars. Sci. Rep. 2016, 6. in press. [Google Scholar] [CrossRef]
  27. Markov, K.; Nakamura, S. Improved Novelty Detection for Online GMM Based Speaker Diarization. In Proceedings of the Annual Conference of the International Speech Communication Association (Interspeech), Brisbane, Australia, 22–26 September 2008; pp. 363–366. [Google Scholar]
  28. Zheng, W.S.; Gong, S.; Xiang, T. Transfer Re-Identification: From Person to Set-Based Verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 2650–2657. [Google Scholar]
  29. Chan-Lang, S.; Pham, Q.C.; Achard, C. Closed and Open-World Person Re-Identification and Verification. In Proceedings of the International Conference on Digital Image Computing: Techniques and Applications (DICTA), Sydney, Australia, 29 November–1 December 2017; pp. 1–8. [Google Scholar]
  30. Zhu, X.; Wu, B.; Huang, D.; Zheng, W.S. Fast open-world person re-identification. IEEE Trans. Image Process. 2018, 27, 2286–2300. [Google Scholar] [CrossRef]
  31. Deng, W.; Zheng, L.; Ye, Q.; Kang, G.; Yang, Y.; Jiao, J. Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-dissimilarity for Person Re-Identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 994–1003. [Google Scholar]
  32. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE international conference on computer vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
  33. Li, X.; Wu, A.; Zheng, W.S. Adversarial Open-World Person Re-Identification. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 280–296. [Google Scholar]
  34. Khan, S.S.; Madden, M.G. One-class classification: Taxonomy of study and review of techniques. Knowl. Eng. Rev. 2014, 29, 345–374. [Google Scholar] [CrossRef]
  35. Castrillón-Santana, M.; Lorenzo-Navarro, J.; Ramón-Balmaseda, E. Descriptors and regions of interest fusion for in- and cross-database gender classification in the wild. Image Vis. Comput. 2017, 57, 15–24. [Google Scholar] [CrossRef]
  36. Castrillón-Santana, M.; Lorenzo-Navarro, J.; Travieso-González, C.M.; Freire-Obregón, D.; Alonso-Hernández, J.B. Evaluation of local descriptors and CNNs for non-adult detection in visual content. Pattern Recognit. Lett. 2017, in press. [Google Scholar] [CrossRef]
  37. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  38. Egozcue, J.J.; Pawlowsky-Glahn, V.; Mateu-Figueras, G.; Barceló-Vidal, C. Isometric logratio transformations for compositional data analysis. Math. Geol. 2003, 35, 279–300. [Google Scholar] [CrossRef]
  39. Parlamento de Canarias. Web Site of the Canary Islands Parliament. 2018. Available online: http://www.parcan.es/ (accessed on 7 June 2018).
  40. Marín-Reyes, P.A. ILRA Source Code. 2019. Available online: https://github.com/foumacray/ILRA (accessed on 12 August 2019).
  41. Kazemi, V.; Sullivan, J. One Millisecond Face Alignment with an Ensemble of Regression Trees. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1867–1874. [Google Scholar]
  42. Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–26 June 2005; Volume 1, pp. 886–893. [Google Scholar] [CrossRef]
  43. Ojala, T.; Pietikainen, M.; Harwood, D. Performance Evaluation of Texture Measures with Classification Based on Kullback Discrimination of Distributions. In Proceedings of the International Conference on Pattern Recognition (ICPR), Jerusalem, Israel, 9–13 October 1994; Volume 1, pp. 582–585. [Google Scholar] [CrossRef]
  44. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  45. Liu, L.; Zhao, L.; Long, Y.; Kuang, G.; Fieguth, P. Extended local binary patterns for texture classification. Image Vis. Comput. 2012, 30, 86–99. [Google Scholar] [CrossRef]
  46. Chen, J.; Shan, S.; He, C.; Zhao, G.; Pietikainen, M.; Chen, X.; Gao, W. WLD: A robust local image descriptor. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1705–1720. [Google Scholar] [CrossRef]
  47. Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A Unified Embedding for Face Recognition and Clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 815–823. [Google Scholar] [CrossRef]
  48. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, Inception-resnet and the Impact of Residual Connections on Learning. In Proceedings of the Conference on Artificial Intelligence (AAAI), San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
  49. Guo, Y.; Zhang, L.; Hu, Y.; He, X.; Gao, J. Ms-celeb-1m: A Dataset and Benchmark for Large-Scale Face Recognition. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 87–102. [Google Scholar]
  50. Cong, D.N.T.; Khoudour, L.; Achard, C.; Meurie, C.; Lezoray, O. People re-identification by spectral classification of silhouettes. Signal Process. 2010, 90, 2362–2374. [Google Scholar] [CrossRef]
  51. Sánchez-Nielsen, E.; Chávez-Gutiérrez, F.; Lorenzo-Navarro, J.; Castrillón-Santana, M. A multimedia system to produce and deliver video fragments on demand on parliamentary websites. Multimed. Tools Appl. 2016, 76, 6281–6307. [Google Scholar] [CrossRef]
  52. Liu, Y.; Loh, H.T.; Sun, A. Imbalanced text classification: A term weighting approach. Expert Syst. Appl. 2009, 36, 690–701. [Google Scholar] [CrossRef]
  53. Zhang, S.; Sadaoui, S.; Mouhoub, M. An empirical analysis of imbalanced data classification. Comput. Inf. Sci. 2015, 8, 151. [Google Scholar] [CrossRef]
  54. Zhuang, L.; Dai, H. Parameter Estimation of One-Class SVM on Imbalance Text Classification. In Proceedings of the Conference of the Canadian Society for Computational Studies of Intelligence, Québec City, QC, Canada, 7–9 June 2006; pp. 538–549. [Google Scholar]
  55. de Deus, J.L.; Neves, J.C.L.; Corrêa, M.C.d.M.; Parent, S.É.; Natale, W.; Parent, L.E. Balance design for robust foliar nutrient diagnosis of “Prata” banana (Musa spp.). Sci. Rep. 2018, 8, 15040. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Deputy captures of the Canary Islands Parliament. These images show different problematic situations where correct (green) and incorrect (red) intervener matches are presented.
Figure 2. An overview of an open world re-identification system.
Figure 3. A video is divided into shots, S_i, which are composed of frames, fr_i. Each shot contains a single intervener and is the input to our proposed system. The system is mainly divided into two stages; unlike the ILRA stage, the initialization stage is carried out without the modelling approach.
Figure 4. The original shots are reorganized with the purpose of grouping by ID for the novelty detection (initialization and ILRA stages) and classification (ILRA stage) experiments.
Figure 5. Initialization stage. (a) atypical experimental evaluation, where each ID is matched individually with the remaining IDs (colored arrows); (b) typical experimental evaluation, where each ID is matched with itself (colored arrows).
Figure 6. ILRA stage. (a) atypical experimental evaluation, where each ID is matched with the remaining IDs; (b) typical experimental evaluation, where each set is split into one third for testing and the rest for training.
Figure 7. Procedure to determine id(S_i). (a) the process to extract the ID of the intervener, applying the Maximum A Posteriori (MAP) probability to each sample; (b) the use of an SVM to obtain the intervener ID.
Table 1. Description of the analyzed videos. The columns "Shots" and "Frames" indicate the number of shots and frames, respectively.
Video Identifier | Interveners | Shots | Frames | Duration
2771 | 5  | 13 | 2440   | 0:33:23
2918 | 7  | 33 | 7142   | 1:21:23
3015 | 8  | 52 | 22,088 | 3:02:44
2792 | 11 | 55 | 13,956 | 1:48:00
2907 | 12 | 57 | 9542   | 2:20:20
3011 | 21 | 73 | 6525   | 2:01:42
Table 2. Results of the offline experiments in terms of accuracy measure for novelty detection in initialization stage, novelty detection in the ILRA stage, and intervener classification in the ILRA stage. The results comprise the evaluation for different descriptors. The highest score is in bold.
Id | K | Descriptor | Novelty Detection in Initialization Stage (Typical / Atypical / F) | Novelty Detection in ILRA Stage (Typical / Atypical / F) | Intervener Classification in ILRA Stage (MAP Acc. / SVM Acc.)
2771 | 5 | HOG | 100.0 / 90.00 / 94.74 | 80.00 / 60.00 / 68.57 | 96.52 / 96.92
     |   | LBP | 80.00 / 90.00 / 84.71 | 40.00 / 60.00 / 48.00 | 62.15 / 64.07
     |   | LBPu2 | 80.00 / 90.00 / 84.71 | 80.00 / 60.00 / 68.57 | 96.72 / 96.89
     |   | NILBP | 100.0 / 90.00 / 94.74 | 20.00 / 80.00 / 32.00 | 72.45 / 81.13
     |   | Resnet T | 100.0 / 100.0 / 100.0 | 60.00 / 80.00 / 68.57 | 98.51 / 98.05
     |   | WLD | 80.00 / 100.0 / 88.89 | 40.00 / 80.00 / 53.33 | 94.17 / 94.17
2918 | 7 | HOG | 85.71 / 52.38 / 65.02 | 100.0 / 85.71 / 92.31 | 92.15 / 91.12
     |   | LBP | 85.71 / 85.71 / 85.71 | 85.71 / 57.14 / 68.57 | 44.34 / 47.57
     |   | LBPu2 | 85.71 / 90.48 / 88.03 | 28.57 / 100.0 / 44.44 | 98.63 / 98.03
     |   | NILBP | 100.0 / 85.71 / 92.31 | 100.0 / 0.00 / 0.00 | 41.05 / 50.20
     |   | Resnet T | 100.0 / 100.0 / 100.0 | 29.57 / 86.71 / 42.86 | 97.49 / 97.16
     |   | WLD | 85.71 / 80.95 / 83.27 | 42.85 / 85.71 / 57.14 | 92.89 / 92.99
3015 | 8 | HOG | 100.0 / 85.71 / 92.31 | 100.0 / 100.0 / 100.0 | 95.30 / 94.24
     |   | LBP | 87.50 / 100.0 / 93.33 | 25.00 / 37.50 / 30.00 | 54.92 / 56.47
     |   | LBPu2 | 87.50 / 100.0 / 93.33 | 50.00 / 100.0 / 66.67 | 97.93 / 98.01
     |   | NILBP | 100.0 / 96.43 / 98.18 | 87.50 / 12.50 / 21.88 | 63.00 / 67.58
     |   | Resnet T | 100.0 / 100.0 / 100.0 | 27.27 / 100.0 / 42.85 | 97.78 / 97.52
     |   | WLD | 100.0 / 96.43 / 98.18 | 87.50 / 0.00 / 0.00 | 63.57 / 68.68
2792 | 11 | HOG | 81.82 / 89.09 / 85.30 | 90.91 / 100.0 / 95.24 | 92.60 / 91.26
     |    | LBP | 90.91 / 98.18 / 94.41 | 18.18 / 72.72 / 29.09 | 51.84 / 53.42
     |    | LBPu2 | 81.82 / 96.36 / 88.50 | 45.45 / 90.91 / 60.61 | 97.12 / 96.84
     |    | NILBP | 81.82 / 96.36 / 88.50 | 100.0 / 0.00 / 0.00 | 45.16 / 55.76
     |    | Resnet T | 100.0 / 100.0 / 100.0 | 36.00 / 100.0 / 52.94 | 97.94 / 97.83
     |    | WLD | 81.82 / 98.18 / 89.26 | 9.09 / 90.91 / 16.53 | 85.13 / 85.31
2907 | 12 | HOG | 75.00 / 90.91 / 82.19 | 41.66 / 83.33 / 55.56 | 96.42 / 96.02
     |    | LBP | 75.00 / 96.97 / 84.58 | 66.67 / 75.00 / 70.59 | 64.30 / 64.47
     |    | LBPu2 | 66.67 / 98.48 / 79.51 | 25.00 / 83.33 / 38.46 | 98.11 / 98.19
     |    | NILBP | 83.33 / 100.0 / 90.91 | 50.00 / 25.00 / 33.33 | 76.19 / 79.07
     |    | Resnet T | 100.0 / 100.0 / 100.0 | 41.67 / 100.0 / 58.83 | 98.90 / 98.98
     |    | WLD | 58.33 / 100.0 / 73.68 | 75.00 / 91.67 / 82.50 | 92.25 / 91.87
3011 | 21 | HOG | 52.38 / 94.76 / 67.47 | 47.62 / 71.43 / 57.14 | 41.29 / 96.55
     |    | LBP | 42.86 / 97.14 / 59.48 | 61.90 / 61.90 / 61.90 | 20.18 / 49.65
     |    | LBPu2 | 42.86 / 96.19 / 59.30 | 42.86 / 76.19 / 54.86 | 40.85 / 94.92
     |    | NILBP | 57.14 / 96.19 / 71.69 | 61.90 / 52.38 / 56.75 | 36.98 / 84.26
     |    | Resnet T | 76.19 / 98.57 / 85.95 | 23.81 / 90.48 / 37.70 | 41.49 / 94.64
     |    | WLD | 28.57 / 99.05 / 44.35 | 85.71 / 80.95 / 83.27 | 36.70 / 86.09
Mean |    | HOG | 82.49 / 83.81 / 81.17 | 76.70 / 83.41 / 78.14 | 85.71 / 94.35
     |    | LBP | 77.00 / 94.67 / 83.70 | 49.58 / 60.71 / 51.36 | 49.62 / 55.94
     |    | LBPu2 | 74.09 / 95.25 / 82.23 | 45.31 / 85.07 / 55.60 | 88.23 / 97.15
     |    | NILBP | 87.05 / 94.12 / 89.39 | 69.90 / 28.31 / 23.99 | 55.81 / 69.67
     |    | Resnet T | 96.03 / 99.76 / 97.66 | 36.22 / 92.70 / 50.62 | 88.69 / 97.36
     |    | WLD | 63.51 / 95.77 / 79.60 | 56.69 / 71.54 / 48.80 | 77.45 / 86.52
Table 3. Results of the online experiments in terms of TRR, TDR, and F. The highest F is in bold.
Video ID | Descriptor | TRR | TDR | F
2771 | HOG | 83.33 | 74.07 | 78.43
     | LBP | 25.00 | 94.44 | 39.53
     | LBPu2 | 16.67 | 96.30 | 28.42
     | NILBP | 16.67 | 88.89 | 28.07
     | Resnet T | 58.33 | 90.74 | 71.01
     | WLD | 8.33 | 96.30 | 15.34
2918 | HOG | 38.50 | 99.08 | 55.45
     | LBP | 95.72 | 11.63 | 20.75
     | LBPu2 | 40.11 | 83.28 | 54.14
     | NILBP | 56.68 | 76.89 | 65.26
     | Resnet T | 59.15 | 97.65 | 73.68
     | WLD | 31.55 | 93.76 | 47.21
3015 | HOG | 71.05 | 95.96 | 81.65
     | LBP | 29.47 | 75.76 | 42.44
     | LBPu2 | 34.21 | 96.80 | 50.55
     | NILBP | 40.53 | 88.89 | 55.67
     | Resnet T | 56.83 | 99.58 | 72.37
     | WLD | 36.84 | 96.46 | 53.32
2792 | HOG | 71.83 | 69.18 | 70.48
     | LBP | 28.17 | 85.18 | 42.34
     | LBPu2 | 70.42 | 94.12 | 80.56
     | NILBP | 59.15 | 83.76 | 69.34
     | Resnet T | 47.59 | 97.69 | 64.00
     | WLD | 54.93 | 82.82 | 66.05
2907 | HOG | 52.27 | 48.49 | 50.31
     | LBP | 15.91 | 87.09 | 26.90
     | LBPu2 | 31.82 | 95.12 | 47.69
     | NILBP | 27.27 | 92.54 | 42.13
     | Resnet T | 65.79 | 91.41 | 76.51
     | WLD | 40.91 | 74.75 | 52.88
3011 | HOG | 82.08 | 95.43 | 88.25
     | LBP | 55.66 | 91.34 | 69.17
     | LBPu2 | 49.06 | 99.69 | 65.75
     | NILBP | 73.58 | 65.98 | 69.58
     | Resnet T | 54.68 | 97.85 | 70.15
     | WLD | 71.70 | 89.29 | 79.53
Mean | HOG | 66.51 | 80.37 | 70.76
     | LBP | 41.66 | 74.24 | 40.19
     | LBPu2 | 40.38 | 94.22 | 54.52
     | NILBP | 45.65 | 82.83 | 55.01
     | Resnet T | 57.06 | 95.82 | 71.29
     | WLD | 40.71 | 88.90 | 52.39
Table 4. Results of the online experiments compared with other approaches in terms of TRR, TDR, and F. The highest F is in bold.
Video ID | Method | TRR | TDR | F
2771 | [42] | 58.33 | 61.11 | 59.69
     | [43] | 41.67 | 70.37 | 52.34
     | [44] | 41.67 | 70.37 | 52.34
     | [45] | 33.33 | 79.63 | 46.99
     | [48] | 79.33 | 64.52 | 71.16
     | [46] | 41.67 | 70.37 | 52.34
     | [51] | 53.91 | 75.36 | 62.86
     | Ours (Resnet T) | 58.33 | 90.74 | 71.01
2918 | [42] | 49.41 | 79.85 | 61.05
     | [43] | 42.35 | 95.82 | 58.74
     | [44] | 48.82 | 97.18 | 64.99
     | [45] | 57.65 | 85.07 | 68.72
     | [48] | 96.00 | 44.71 | 61.01
     | [46] | 43.53 | 94.78 | 59.66
     | [51] | 50.59 | 75.16 | 60.47
     | Ours (Resnet T) | 59.15 | 97.65 | 73.68
3015 | [42] | 85.81 | 12.56 | 21.91
     | [43] | 43.02 | 58.85 | 49.71
     | [44] | 45.49 | 57.84 | 50.93
     | [45] | 48.18 | 51.49 | 49.78
     | [48] | 80.17 | 47.98 | 60.03
     | [46] | 69.52 | 38.58 | 49.62
     | [51] | 85.93 | 9.96 | 17.85
     | Ours (Resnet T) | 56.83 | 99.58 | 72.37
2792 | [42] | 20.33 | 96.28 | 33.57
     | [43] | 31.17 | 94.87 | 46.92
     | [44] | 31.05 | 95.64 | 46.88
     | [45] | 48.18 | 51.49 | 49.78
     | [48] | 89.05 | 58.17 | 70.37
     | [46] | 31.17 | 91.56 | 57.27
     | [51] | 23.85 | 93.58 | 38.01
     | Ours (Resnet T) | 47.59 | 97.69 | 64.00
2907 | [42] | 23.49 | 88.23 | 37.10
     | [43] | 33.73 | 87.79 | 48.74
     | [44] | 28.31 | 89.67 | 43.03
     | [45] | 26.20 | 88.58 | 40.44
     | [48] | 91.26 | 88.68 | 89.95
     | [46] | 34.94 | 82.92 | 49.16
     | [51] | 21.99 | 84.91 | 34.93
     | Ours (Resnet T) | 65.79 | 91.41 | 76.51
3011 | [42] | 57.61 | 77.38 | 66.05
     | [43] | 51.78 | 70.62 | 59.75
     | [44] | 53.41 | 70.55 | 60.80
     | [45] | 58.38 | 73.59 | 65.11
     | [48] | 66.45 | 70.61 | 68.47
     | [46] | 50.76 | 78.68 | 61.71
     | [51] | 58.12 | 79.36 | 67.10
     | Ours (Resnet T) | 54.68 | 97.85 | 70.15
Mean | [42] | 49.16 | 69.24 | 46.56
     | [43] | 40.62 | 79.72 | 52.70
     | [44] | 41.46 | 80.21 | 53.16
     | [45] | 45.32 | 71.64 | 53.47
     | [48] | 83.71 | 62.44 | 70.17
     | [46] | 45.27 | 76.15 | 54.96
     | [51] | 49.07 | 69.72 | 46.87
     | Ours (Resnet T) | 57.06 | 95.82 | 71.29

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).