Article

Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study †

by Cristina Tîrnăucă 1,*, José L. Montaña 1, Santiago Ontañón 2, Avelino J. González 3 and Luis M. Pardo 1

1 Departamento de Matemáticas, Estadística y Computación, Universidad de Cantabria, Santander 39005, Spain
2 Department of Computer Science, Drexel University, Philadelphia, PA 19104, USA
3 Department of Computer Science, University of Central Florida, Orlando, FL 32816, USA
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper published in Ubiquitous Computing and Ambient Intelligence. Sensing, Processing, and Using Environmental Information, Volume 9454 of the series Lecture Notes in Computer Science, 2015; pp. 103–115.
Sensors 2016, 16(7), 958; https://doi.org/10.3390/s16070958
Submission received: 28 April 2016 / Revised: 18 June 2016 / Accepted: 21 June 2016 / Published: 24 June 2016
(This article belongs to the Special Issue Selected Papers from UCAmI, IWAAL and AmIHEALTH 2015)

Abstract

Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies the agent is using, simply by observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious: in that case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches.

1. Introduction

Modern training, entertainment and education applications make extensive use of autonomously controlled virtual agents or physical robots. In these applications, the agents must display complex intelligent behaviors that, until recently, were only shown by humans. Driving simulations, for example, require having vehicles moving in a realistic way in the simulation, while interacting with other virtual agents as well as humans. Likewise, computer games require artificial characters or opponents that display complex intelligent behaviors to enhance the entertainment factor of the games. Manually creating those complex behaviors is usually expensive. For example, the artificial intelligence for the two virtual characters in the game Façade [1] took more than five person-years to develop. Additionally, the knowledge required to create these behaviors is often tacit. In other words, humans who are proficient in the task at hand have difficulties articulating this knowledge in an effective manner so that it can be incorporated into the agent's behavior. For example, when asked how hard to apply the brakes in a car when approaching a traffic light, most human practitioners would be unable to find appropriate words to describe the experience. On the other hand, showing how to do it is much easier.
Another example comes from the United States Department of Defense, which was one of the first official institutions to recognize the value of intelligent virtual agents that could reliably act as intelligent opponents, friendly forces and neutral bystanders in training simulations. The availability of such agents made it possible to avoid relying on human experts, a scarce and expensive resource, who had previously been used to manually control such entities in training sessions. Semi-Automated Forces (SAF) were the first attempt at developing such software agents, where simulated enemy and friendly entities inhabit the virtual training environment and act to combat or support war-fighters in training sessions. Recent advances in the closely related area of Computer-Generated Forces (CGF) indicate important progress in simulating behaviors that are more complex, but at the cost of operational efficiency. However, CGF models are still very difficult to build, validate and maintain.
An attractive and promising alternative to handcrafting behaviors is to automatically generate them through machine learning techniques. The problem of automatically generating behaviors has been studied in artificial intelligence from at least two different perspectives. The first one is Reinforcement Learning (RL), which focuses on learning from experimentation. The second one is Learning from Observation (LfO), which focuses on learning by observing sample traces of the behavior to be learned. Reinforcement Learning is by far the better known and studied of the two, but it presents several open problems such as scalability and generalization. Furthermore, RL does not lend itself to learning human-like behaviors, given its inherent orientation toward optimization. We believe that LfO offers a promising and computationally tractable approach to achieve human-like behaviors with high fidelity, at an affordable cost, and with a reasonable level of generalization. Although LfO has already shown some success in learning behaviors requiring tacit knowledge, many open problems remain in the field.
Some of the early work in the field refers to a technique called programming by demonstration. For example, Bauer [2] showed how to make use of knowledge about variables, inputs, instructions and procedures in order to learn programs, which basically amounts to learning strategies to perform abstract computations by demonstration. Programming by demonstration has also been especially popular in robotics [3]. Another early mention of LfO comes from Michalski et al. [4], who define it merely as unsupervised learning. Gonzalez et al. [5] discussed LfO at length, but provided neither a formalization nor an approach to realize it algorithmically. More extensive work on the more general LfO subject came nearly simultaneously but independently from Sammut et al. [6] and Sidani [7]. Fernlund et al. [8] used LfO to build agents capable of driving a simulated automobile in a city environment. Pomerleau [9] developed the Autonomous Land Vehicle in a Neural Network (ALVINN) system that trained neural networks from observations of a road-following automobile in the real world. Moriarty and Gonzalez [10] used neural networks to carry out LfO for computer games. Könik and Laird [11] introduced LfO in complex domains with the State, Operator and Result (SOAR) system, by using inductive logic programming techniques. Other significant work done under the label of learning from demonstration has emerged recently in the Case-Based Reasoning community. For example, Floyd et al. [12] present an approach to learn how to play RoboSoccer by observing the play of other teams. Ontañón et al. [13] use learning from demonstration for real-time strategy games in the context of Case-Based Planning. Another related area is that of Inverse Reinforcement Learning [14], where the focus is on reconstructing the reward function given optimal behavior (i.e., given a policy, or a set of trajectories). One of the main problems here is that different reward functions may correspond to the observed behavior, and heuristics need to be devised to consider only those families of reward functions that are interesting.

In this paper, we present an approach to LfO based on Probabilistic Finite Automata (PFAs). PFAs are interesting because, in addition to learning a model of the desired behavior, they can also assess the probability that a certain behavior was generated by a given PFA. For that reason, given a training set consisting of traces with observations that come from an agent exhibiting different behaviors, PFAs can be used for two different tasks. The first one is behavioral recognition: a PFA can be trained from the traces of each one of these behaviors (one PFA per behavior). Then, given a new, unseen behavioral trace from another agent, these PFAs can be used to assess the probability that this new behavior was produced by each one of them. Assuming that each PFA is a good model of each of the behaviors of interest, this effectively corresponds to identifying which of the initial set of behaviors was exhibited by the new agent. The second one is behavioral cloning: by training a PFA with the traces corresponding to the desired behavior, such a PFA can be used to recreate this behavior in a new, unseen situation. The notion of BC was first introduced in [15] to refer to a form of imitation learning whose motivation is to build a model of the behavior of a human.
The perspective provided by our work is general enough to deal with significant applications such as masquerade detection in computer intrusion, analysis of the task performed by the user in some e-learning activity, classification and prediction of user behavior in a web interaction process and, more generally, Activity Recognition (AR). The aim of AR is to recognize the actions and tasks of one or several agents, taking as input a sequence of observations of their actions and the state of the environment. Most research in AR concentrates on the recognition of human activities. One goal of Human Activity Recognition (HAR) is to provide information on a user's behavior that allows computing systems to proactively assist users with their tasks (see [16] for a detailed overview of the subject). Hidden Markov Models are widely used tools for prediction in the context of HAR, with successful applications including speech recognition [17] and DNA sequence alignment [18].
The remainder of this paper is organized as follows. In Section 2, we introduce our proposed LfO framework, we explain how this framework can be used for BC and BR, and we propose a set of evaluation metrics to assess the performance of the approach. Then, in Section 3, we report on the experiments conducted in the context of a simulated learning environment (a virtual Roomba vacuum cleaner robot). Finally, concluding remarks and future work ideas are presented in Section 4.

2. Methodology

The key idea in Learning from Observation is that there is a learner that observes one or several agents performing a task in a given environment, and records the agent’s behavior in the form of traces. Then, those traces are used by the learner to generalize the observed behavior and replicate it in other similar situations. Most LfO work assumes that the learner does not have access to a description of the task during learning, and thus, the features of the task and the way it is achieved must be learned purely through unobtrusive observation of the behavior of the agent.
Let B be the behavior (by behavior, we mean the control mechanism, policy, or algorithm that an agent or a learner uses to determine which actions to execute over time) of an agent A. Our formalization is founded on the principle that behavior can be modeled as a stochastic process, and its elements as random variables dependent on time. Our model includes the following variables (we use the following convention: if X is a variable, then we use a calligraphic $\mathcal{X}$ to denote the set of values it can take, and a lower case $x \in \mathcal{X}$ to denote specific values it takes): the state $X \in \mathcal{X}$ of the environment, the unobservable internal state $C \in \mathcal{C}$ of the agent and the perceptible action $Y \in \mathcal{Y}$ that the agent executes. We interpret the agent's behavior as a discrete-time process $Z = \{Z_1, \ldots, Z_k, \ldots\}$ (which can be either deterministic or stochastic), with state space $I = \mathcal{X} \times \mathcal{C} \times \mathcal{Y}$. $Z_t = (X_t, C_t, Y_t)$ is a multidimensional variable that captures the state of the agent at time t, i.e., a description of the environment, the internal state and the action performed at time t.
The observed behavior of an agent in a particular execution defines a trace T of observations: $T = [(x_1, y_1), \ldots, (x_m, y_m)]$, where $x_t$ and $y_t$ represent the specific perception of the environment and action of the agent at time t. The pair of variables $X_t$ and $Y_t$ represents the observation of the agent A. We assume that the random variables $X_t$ and $Y_t$ are multidimensional discrete variables. Under this statistical model, we distinguish three types of behaviors [19]: Type 1 (which includes strict imitation behavior), corresponding to a process that only depends on time (independent of previous states and actions); Type 2 (reactive behavior), where $Y_t$ may only depend on the time t, the present state $X_t$ and the non-observable internal state $C_t$; and Type 3 (planned behavior), for the case in which the action $Y_t$ depends on the time t, on the non-observable internal state $C_t$, and on any of the previous states $X_1, \ldots, X_t$ and actions $Y_1, \ldots, Y_{t-1}$.
When a behavior does not explicitly depend on time, we say that it is a stationary behavior. In addition, we distinguish between deterministic and stochastic behavior.

2.1. LfO Models

In this article, we model only stationary behaviors of Types 2 and 3 that do not explicitly depend on the internal state. Moreover, we limit the “window” of knowledge in the case of planned behavior to one previous observation (our methodology could be easily extended to allow a larger memory, with the obvious drawback of an increased number of features). This gives rise to three possibilities for the current action $Y_t$:
  • $Y_t$ depends only on the current state $X_t$ (Model 1);
  • $Y_t$ depends on the previous and current states $X_{t-1}$ and $X_t$ (Model 2);
  • $Y_t$ depends on the previous and current states $X_{t-1}$ and $X_t$, and on the previous action $Y_{t-1}$ (Model 3).
An example of the kind of information available for each of the three models is presented in Table 1, where each row represents a training example, and columns represent features. The last column (Class) is the action to be performed, which is what our models try to predict.
Note that some of the actual strategies employed by the agents in our experiments exhibit more complex dependencies (see Section 3.2), but the model of the behavior learned by our approach is restricted to one of the three models presented above.
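To make the construction of these training examples concrete, the following short Python sketch (our illustration; it is not code from the paper, and all names are assumptions) builds the feature/class rows of Table 1 from a trace of (state, action) pairs for a chosen model.

```python
# Illustrative sketch (not the authors' code): build the Table 1 training rows
# from a trace of (state, action) pairs, for Model 1, 2 or 3.

def build_examples(trace, model):
    """trace: list of (x, y) pairs; model: 1, 2 or 3.
    Returns a list of (features, class) rows as in Table 1."""
    rows = []
    for i, (x, y) in enumerate(trace):
        if model == 1:
            rows.append(((x,), y))                      # features: current state only
        elif i > 0:                                     # Models 2 and 3 need one step of history
            x_prev, y_prev = trace[i - 1]
            if model == 2:
                rows.append(((x_prev, x), y))           # previous + current state
            else:
                rows.append(((x_prev, y_prev, x), y))   # previous state and action + current state
    return rows

# hypothetical states: four-direction obstacle readings, as in Section 3.2
trace = [(("free", "free", "obst", "free"), "right"),
         (("free", "obst", "free", "free"), "up")]
print(build_examples(trace, 3))
```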

2.2. Machine Learning Tools

We describe in this section the kind of learning machines that we propose for modeling reactive and planned behaviors. Note that the only information we have is a trace with pairs (state, action): we do not know if the trace was produced by a deterministic or a stochastic agent, or whether it uses an internal state. However, we would like to have a mechanism that predicts, in each state, the action to perform.
If the learned strategy is a deterministic one, this can be done via a classifier, using more or fewer features depending on the model (see Table 1). We experimented with a decision tree (DT) algorithm [20], a probabilistic neural network (PNN) [21], the k-Nearest Neighbour (kNN) algorithm, the RProp algorithm for multilayer feedforward networks [22] and the Naive Bayes (NB) algorithm. On the other hand, for training stochastic models, we propose the use of PFAs, which we describe below.
PFAs were introduced in the 1960s by Rabin (see [23]) and are still used in several fields of science and technology for modeling stochastic processes in applications such as DNA sequencing analysis, image and speech recognition, human activity recognition and environmental problems, among others. The reader is referred to the work of Dupont et al. [24] for an overview of the basic properties of PFAs and a presentation of their relation with other Markovian models.
Formally, a PFA is a 5-tuple $A = (\Sigma, Q, \phi, \iota, \gamma)$, where Σ is a finite alphabet (that is, a discrete set of symbols), Q is a finite collection of states, $\phi: Q \times \Sigma \times Q \to [0,1]$ is a function defining the transition probability (i.e., $\phi(q, a, q')$ is the probability of emitting symbol a while transitioning from state q to state $q'$), $\iota: Q \to [0,1]$ is the initial state probability function and $\gamma: Q \to [0,1]$ is the final state probability function. In addition, the following functions, defined over words $\alpha = (a_1, \ldots, a_m) \in \Sigma^*$ and state paths $\pi = (q_1, \ldots, q_m) \in Q^*$, must be probability distributions (Equation (1) when using final probabilities and Equation (2) otherwise):

$$P_A(\alpha, \pi) = \iota(q_1) \prod_{i=1}^{m-1} \phi(q_i, a_i, q_{i+1}) \, \gamma(q_m) \qquad (1)$$

$$\hat{P}_A(\alpha, \pi) = \iota(q_1) \prod_{i=1}^{m-1} \phi(q_i, a_i, q_{i+1}) \qquad (2)$$

This implies in particular that the two following functions are probability distributions over $\Sigma^*$:

$$P_A(\alpha) = \sum_{q, q'} \iota(q) \, \phi(q, \alpha, q') \, \gamma(q') \qquad (3)$$

$$\hat{P}_A(\alpha) = \sum_{q, q'} \iota(q) \, \phi(q, \alpha, q') \qquad (4)$$

Note that $P_A(\alpha)$ is the probability of generating word α and $\hat{P}_A(\alpha)$ is the probability of generating a word with prefix α. Here, $\phi(q, \alpha, q')$ is the extension of the function ϕ to words, with the obvious meaning: the probability of reaching state $q'$ from state q while generating word α (the reader is referred to the work of Dupont et al. [24] for a detailed explanation of Equations (3) and (4)). In many real situations, we are interested in PFAs with no final probabilities, and in this case we use Equation (4).
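As an illustration of the definition above, the following minimal Python sketch (ours, not the authors' implementation; the dictionary-based representation and the toy automaton are assumptions) stores ι and ϕ explicitly and computes the prefix probability of Equation (4) with a standard forward pass over the states.

```python
# Minimal PFA sketch (ours): dictionary-based ι and ϕ, no final probabilities.
# prefix_probability computes the P̂_A(α) of Equation (4) with a forward pass.
from collections import defaultdict

class PFA:
    def __init__(self, states, alphabet, iota, phi):
        self.Q = list(states)      # states
        self.S = list(alphabet)    # alphabet Σ
        self.iota = iota           # dict: state -> initial probability
        self.phi = phi             # dict: (q, a, q') -> transition probability

    def prefix_probability(self, word):
        # forward[q] = probability of emitting the prefix read so far and ending in q
        forward = {q: self.iota.get(q, 0.0) for q in self.Q}
        for a in word:
            nxt = defaultdict(float)
            for q, p in forward.items():
                if p > 0.0:
                    for q2 in self.Q:
                        nxt[q2] += p * self.phi.get((q, a, q2), 0.0)
            forward = dict(nxt)
        return sum(forward.values())

# toy automaton (an assumption for the example): two states, one symbol
A = PFA(["q0", "q1"], ["a"], {"q0": 1.0},
        {("q0", "a", "q0"): 0.3, ("q0", "a", "q1"): 0.7, ("q1", "a", "q1"): 1.0})
print(A.prefix_probability(["a", "a"]))  # ~1.0: every path emits this prefix
```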

2.3. Training a Probabilistic Finite Automaton

We propose to train a PFA $A = (\Sigma, Q, \phi, \iota, \gamma)$ to model an unknown behavior by observing its trace T. To this end, we define the alphabet Σ of automaton A to be the set $\mathcal{Y}$ of all actions that the agent can perform. The state space Q depends on the model: it is either $\mathcal{X}$ (Model 1), $\mathcal{X} \times \mathcal{X}$ (Model 2) or $\mathcal{X} \times \mathcal{Y} \times \mathcal{X}$ (Model 3).
Training the automaton A from a trace $[(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)]$ consists of determining the transition probability values $\{\phi(q, a, q') \mid q, q' \in Q, a \in \Sigma\}$ and the initial probability values $\{\iota(q) \mid q \in Q\}$ (we opted for a model with no final probabilities). For any state $q \in Q$, let $\sharp q$ be the number of occurrences of q in the trace T. In the case of Model 1, $\sharp q = |\{i \in \overline{1,m} \mid x_i = q\}|$; for Model 2, if $q = (x, x')$, then $\sharp q = |\{i \in \overline{2,m} \mid x_{i-1} = x, x_i = x'\}|$; and for Model 3, if $q = (x, y, x')$, then $\sharp q = |\{i \in \overline{2,m} \mid x_{i-1} = x, y_{i-1} = y, x_i = x'\}|$. Similarly, we define $\sharp(q, a)$ and $\sharp(q, a, q')$ for $a \in \Sigma$ and $q, q' \in Q$ as follows:
Model 1:
$q = x$, $a = y$, $q' = x'$,
$\sharp(q, a) = |\{i \in \overline{1,m} \mid x_i = x, y_i = y\}|$,
$\sharp(q, a, q') = |\{i \in \overline{1,m-1} \mid x_i = x, y_i = y, x_{i+1} = x'\}|$.
Model 2:
$q = (x, x')$, $a = y$, $q' = (x', x'')$,
$\sharp(q, a) = |\{i \in \overline{2,m} \mid x_{i-1} = x, x_i = x', y_i = y\}|$,
$\sharp(q, a, q') = |\{i \in \overline{2,m-1} \mid x_{i-1} = x, x_i = x', x_{i+1} = x'', y_i = y\}|$.
Model 3:
$q = (x, y, x')$, $a = y'$, $q' = (x', y', x'')$,
$\sharp(q, a) = |\{i \in \overline{2,m} \mid x_{i-1} = x, x_i = x', y_{i-1} = y, y_i = y'\}|$,
$\sharp(q, a, q') = |\{i \in \overline{2,m-1} \mid x_{i-1} = x, x_i = x', x_{i+1} = x'', y_{i-1} = y, y_i = y'\}|$.
Note that, in the case of Models 2 and 3, $\sharp(q, a, q')$ is zero by definition if the last element of state q is different from the first element of state $q'$. Next, we estimate the values of ι and ϕ with the following formulas (we use Laplace smoothing to avoid zero values in the testing phase for elements that never appeared in training):

$$\iota(q) := \frac{\sharp q + 1}{m + |Q|}, \qquad \phi(q, a, q') := \frac{\sharp(q, a, q') + 1}{\sharp q + |Q| \times |\Sigma|}$$

It is easy to see that $P(a \mid q)$, the probability of performing action a when in state q, defined as $\sum_{q' \in Q} \phi(q, a, q')$, becomes $(\sharp(q, a) + |Q|) / (\sharp q + |Q| \times |\Sigma|)$ (see [24] for a survey on learning PFAs).
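The counting and smoothing scheme above can be summarized in a short Python sketch (ours, restricted to Model 1 for brevity; function and variable names are illustrative), assuming the trace is a list of (state, action) pairs and that the state space and action set are known in advance.

```python
# Sketch of the counting-and-smoothing estimation above (Model 1 only; ours).
from collections import Counter

def train_pfa_model1(trace, states, actions):
    """trace: list of (state, action) pairs; states, actions: the full sets X and Y."""
    m = len(trace)
    count_q = Counter(x for x, _ in trace)                    # ♯q
    count_qaq = Counter()                                     # ♯(q, a, q')
    for (x, y), (x_next, _) in zip(trace, trace[1:]):
        count_qaq[(x, y, x_next)] += 1
    Q, S = len(states), len(actions)
    iota = {q: (count_q[q] + 1) / (m + Q) for q in states}    # Laplace-smoothed ι(q)
    phi = {(q, a, q2): (count_qaq[(q, a, q2)] + 1) / (count_q[q] + Q * S)
           for q in states for a in actions for q2 in states} # Laplace-smoothed ϕ(q, a, q')
    return iota, phi

# P(a | q) then follows by summing phi[(q, a, q2)] over the destination states q2.
```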

2.4. Evaluation Metrics

In the case of BR, the goal is to identify the strategy employed by an agent A using the learning trace $T = [(x_1, y_1), \ldots, (x_m, y_m)]$ that the agent produced. To this end, we train a PFA $A_k$ for each available strategy, compute the value $P_{A_k}(\alpha, \pi^M)$ and return $\arg\max_k \{P_{A_k}(\alpha, \pi^M)\}$, where $\alpha = (y_1, \ldots, y_m)$ and $\pi^M = (q^M_1, \ldots, q^M_m)$. The value of $q^M_i$ depends on the amount M of memory used: $q^1_i = x_i$ for Model 1, $q^2_i = (x_{i-1}, x_i)$ for Model 2 and $q^3_i = (x_{i-1}, y_{i-1}, x_i)$ for Model 3.
In practice, we use the values $P_{A_k}(\alpha, \pi^M)$ to measure the distance between the behavior B exhibited by agent A (through the learning trace T) and the behavior $B'$ of the strategy modeled by automaton $A_k$ by computing the negative log-probability:

$$R^M_{A_k}(T) = -\frac{1}{m} \log P_{A_k}(\alpha, \pi^M) \qquad (5)$$

This value can be interpreted as a Monte Carlo approximation of the cross entropy between behaviors B and $B'$, known in the literature as Vapnik's risk (see [19]). Obviously, maximizing $P_{A_k}(\alpha, \pi^M)$ is the same as minimizing $R^M_{A_k}(T)$, and for practical reasons (in order to avoid underflow, and because adding is faster than multiplying), we use distances instead of probabilities.
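A minimal Python sketch of this recognition rule is given below (our illustration, not the paper's code); it assumes each candidate automaton is available as a pair (ι, ϕ) of dictionaries, for instance as produced by the training sketch in Section 2.3, so that Laplace smoothing guarantees nonzero probabilities.

```python
# Sketch of the recognition rule (ours): compute the log-normalized distance of
# Equation (5) for every candidate automaton and return the closest strategy.
import math

def distance(iota, phi, path, word):
    """R(T) = -(1/m) * log( ι(q_1) · Π_i ϕ(q_i, a_i, q_{i+1}) )."""
    log_p = math.log(iota[path[0]])
    for q, a, q_next in zip(path, word, path[1:]):
        log_p += math.log(phi[(q, a, q_next)])   # smoothing keeps these nonzero
    return -log_p / len(word)

def recognize(automata, path, word):
    """automata: dict strategy_name -> (iota, phi); returns the arg-min strategy."""
    return min(automata, key=lambda k: distance(*automata[k], path, word))
```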
In the case of BC, we are interested in assessing the quality of the proposed models (that is, we would like to know how well the cloned agent behaves on previously unseen data). For this purpose, we use two different metrics, which we detail below.
Predictive Accuracy. This is a standard measure for classification tasks. Let M be the model trained by one of the learning algorithms using the trace obtained by observing an agent A (that follows a certain strategy) on a fixed set of maps. This model can be either deterministic (in which case, there is only one possible action at any point in time) or stochastic (the action is chosen randomly according to some probability distribution).
Now, let $T = [(x_1, y_1), \ldots, (x_m, y_m)]$ be the trace of the agent A on a different, previously unseen map. The predictive accuracy $Acc_M(T)$ is measured as follows:

$$Acc_M(T) = \frac{1}{m} \sum_{i=1}^{m} \delta(M, T, i), \qquad \delta(M, T, i) = \begin{cases} 1, & \text{if } M([x_{i-1}, y_{i-1},]\, x_i) = y_i \\ 0, & \text{otherwise} \end{cases} \qquad (6)$$

where $M([x_{i-1}, y_{i-1},]\, x_i)$ represents the action predicted by the model M for the state $x_i$, possibly knowing the previous state and action. If M is a stochastic model, $M([x_{i-1}, y_{i-1},]\, x_i)$ is a random variable over $\mathcal{Y}$ with the probability distribution $\{P(a \mid q)\}_{a \in \mathcal{Y}}$, where $q = x_i$ for Model 1, $q = (x_{i-1}, x_i)$ for Model 2 and $q = (x_{i-1}, y_{i-1}, x_i)$ for Model 3.
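The following Python sketch (ours; the `predict` callable is a stand-in for whichever trained model is being evaluated, a classifier's prediction or an action sampled from a PFA) illustrates how Equation (6) can be computed from a test trace; for simplicity, it skips the first observation for Models 2 and 3, which need one step of history.

```python
# Sketch of the predictive accuracy of Equation (6); `predict` is a stand-in for
# the trained model under evaluation.

def predictive_accuracy(trace, predict, model=1):
    hits, total = 0, 0
    for i, (x, y) in enumerate(trace):
        if model == 1:
            q = x
        elif i == 0:
            continue                                   # no history for the first step
        elif model == 2:
            q = (trace[i - 1][0], x)                   # (x_{i-1}, x_i)
        else:
            q = (trace[i - 1][0], trace[i - 1][1], x)  # (x_{i-1}, y_{i-1}, x_i)
        hits += int(predict(q) == y)
        total += 1
    return hits / total
```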
Monte Carlo Distance. To assess the adequacy of a model M in reproducing the behavior of an agent A, we propose a Monte Carlo-like measure based on estimating the cross entropy between the probability distributions associated with the model M and the agent A. More concretely, let $T = [(x_1, y_1), \ldots, (x_n, y_n)]$ be a trace generated according to model M on a fixed map different from the one used in training (the model predicts the next action, but the next state is given by the actual configuration of the map; if it is impossible to perform a certain action because of an obstacle, the agent does not change its location), and let $T' = [(x'_1, y'_1), \ldots, (x'_m, y'_m)]$ be the trace generated by the agent on the same map. We define the Monte Carlo distance between model M and agent A as follows (estimated through traces T and $T'$):

$$H(T, T') = -\frac{1}{n} \sum_{i=1}^{n} \log \frac{1}{m} \sum_{j=1}^{m} I_{\{o'_j\}}(o_i) \qquad (7)$$

where $o_i = ([x_{i-1}, y_{i-1},]\, x_i, y_i)$ and $o'_j = ([x'_{j-1}, y'_{j-1},]\, x'_j, y'_j)$ (depending on the model used, we may store in our observations information about the previous state and action). Here, $I_{\{o'_j\}}$ denotes the indicator function of the set $\{o'_j\}$. The measure in Equation (7) is obviously empirical: for large enough traces, it approximates the true cross entropy between the behavior corresponding to model M and the behavior exhibited by agent A. Using Laplace smoothing, the previous formula becomes:

$$H(T, T') = -\frac{1}{n} \sum_{i=1}^{n} \log \frac{\sum_{j=1}^{m} I_{\{o'_j\}}(o_i) + 1}{m + |Q| \times |\mathcal{Y}|} \qquad (8)$$
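The smoothed distance of Equation (8) reduces to counting how often each observation of the model's trace appears in the agent's trace; the following Python sketch (ours, with illustrative names) assumes the observations have already been packed into hashable tuples.

```python
# Sketch of the smoothed Monte Carlo distance of Equation (8) between the trace
# generated by the model (obs_model) and the trace of the original agent (obs_agent).
import math
from collections import Counter

def monte_carlo_distance(obs_model, obs_agent, n_states, n_actions):
    """Observations are hashable tuples o = ([x_prev, y_prev,] x, y)."""
    m = len(obs_agent)
    counts = Counter(obs_agent)              # how often each o'_j occurs in T'
    denom = m + n_states * n_actions         # m + |Q| × |Y|
    return -sum(math.log((counts[o] + 1) / denom) for o in obs_model) / len(obs_model)
```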

3. Experiments

We have run our experiments with a simulator of a simplified version of a Roomba (iRobot, Bedford, MA, USA), which is a series of autonomous robotic vacuum cleaners sold by iRobot (According to the company’s website (http://www.irobot.com), iRobot Corporation is an American advanced technology company founded in 1990 by Massachusetts Institute of Technology roboticists. More than 14 million home robots have been sold worldwide. Roomba was introduced in 2002). The original Roomba vacuum cleaner uses a set of basic sensors in order to perform its tasks. For instance, it is able to change direction whenever it encounters an obstacle. It uses two independently operating wheels that allow 360 degree turns in place. Additionally, it can adapt to perform other more creative tasks using an embedded computer in conjunction with the Roomba Open Interface.

3.1. Training Maps

The environment in which the agent moves is a 40 × 60 rectangle surrounded by walls, which may contain all sorts of obstacles. For testing, we have randomly generated obstacles on an empty map. Below, we briefly explain the six maps used in the training phase (they are visually represented in Figure 1). Each of them is meant to represent a real-life situation, as indicated by their title.
  • Empty Map. The empty map consists of a big empty room with no obstacles.
  • Messy Room. The messy room simulates an untidy teenager's bedroom, with all sorts of obstacles on the floor and a narrow entry corridor that makes access to the room even more challenging for any “hostile intruder”.
  • The Office. The office map simulates a space in which several rooms are connected to each other by small passages. In this case, the obstacles represent large pieces of furniture such as office desks or storage cabinets.
  • The Passage. The highlight of this map is an intricate pathway that leads to a small room. The main room is big and does not have any obstacle in it.
  • The Museum. This map is intended to simulate a room from a museum, with the main sculpture in the center, and with several other sculptures on the four sides of the room, separated by small spaces.
  • The Maze. In this map, there are many narrow pathways with the same width as the agent. It also contains a little room, which is difficult to find.

3.2. Agent Strategies

In our experiments, the simulation time is discrete, and, at each time step, the agent can take one of four actions: up, down, left and right, with their intuitive effect. The agent perceives the world through the input variable X, which has four binary components (up, down, left, right), each of them indicating what the vacuum cleaner can see in that direction (obstacle, no obstacle). We have designed a series of strategies with different complexities. When describing a strategy, we must define the behavior of the agent in a certain situation (its state $X_t$) that depends on the configuration of obstacles in its vicinity (the prefix Rnd is used for stochastic strategies); a minimal code sketch of one of these strategies is given after the list below.
  • Walk. The agent always performs the same action in a given state. As an example, a possible strategy could be to go Right whenever there are no obstacles, and Up whenever there is only one obstacle to the right (stationary deterministic behavior of Type 2; it only depends on the current state $X_t$).
  • Rnd_Walk. In this strategy, the next move is selected randomly from the set of available moves. For example, an agent that has obstacles to the right and to the left can only move Up or Down, but there is no predefined choice between those two (stationary stochastic behavior of Type 2; it only depends on the current state $X_t$).
  • Crash. In this strategy, the agent should perform the same action as in the previous time step (if possible). Whenever it encounters a new obstacle in its way, the agent must choose a certain predefined action. Therefore, it needs information about its previous action in order to know where to move (stationary deterministic behavior of Type 3; it depends on the current state $X_t$ and the previous action $Y_{t-1}$).
  • Rnd_Crash. This strategy allows the agent to take a random direction when it crashes into an obstacle. The main difference with Rnd_Walk is that in Rnd_Crash the agent does not change direction if it does not encounter an obstacle in its way (stationary stochastic behavior of Type 3; it depends on the current state $X_t$ and the previous action $Y_{t-1}$).
  • ZigZag. This strategy consists of vertical movements in two possible directions, avoiding the obstacles. It has an internal state that tells the robot whether it should advance towards the left or the right side with these vertical movements: it initially goes towards the right side, and once it reaches one of the right corners, the internal state changes so that the robot starts moving toward the left side (stationary deterministic behavior of Type 3; it depends on the current state $X_t$, the previous action $Y_{t-1}$ and the internal state $C_t$).
  • Rnd_ZigZag. This strategy is similar to the previous one, with the only difference that, once the agent reaches a corner, the internal state may or may not change its value, and this is randomly decided (stationary stochastic behavior of Type 3; it depends on the current state $X_t$, the previous action $Y_{t-1}$ and the internal state $C_t$).
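To make the state and action encoding concrete, the following Python sketch (our illustration, not the simulator's code) implements the Rnd_Crash strategy on top of the four-component obstacle reading described above.

```python
# Illustrative sketch (not the simulator's code) of the Rnd_Crash strategy:
# keep the previous action while it is feasible, otherwise choose a random
# feasible direction. The state is the four-direction obstacle reading.
import random

ACTIONS = ["up", "down", "left", "right"]

def rnd_crash(state, prev_action):
    """state: dict mapping each direction to True if there is an obstacle there."""
    free = [a for a in ACTIONS if not state[a]]
    if not free:
        return prev_action                 # completely blocked: stay put
    if prev_action in free:
        return prev_action                 # nothing in the way: keep going
    return random.choice(free)             # crashed: pick a new random direction

# example step: obstacle to the right, previously moving right
print(rnd_crash({"up": False, "down": False, "left": False, "right": True}, "right"))
```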

3.3. Trace and Performance Evaluation

We work with an agent represented by a simplified version of a Roomba robot. In our implementation, although it is possible for the agent to start anywhere, the traces we generate always start with the agent in the top-left corner of the map. We use the strategies explained in Section 3.2 and the maps described in Section 3.1. For each of the six strategies, we generated six traces of 1500 time steps (one for each map) and merged them together into one single trace, which was used as training data.

3.4. Behavioral Recognition Experimentation

In order to determine whether our approach leads to a correct identification of the agent's strategy, we performed the following experiment. First, we generated 100 random maps (each of them having a total of 150 obstacles). Then, we generated a family of traces (each of them of 1500 time steps) for each strategy/map pair, $(T^i_n)_{i \in \{1, \ldots, 100\},\, n \in \{1, \ldots, 6\}}$, and we computed the log-normalized distance $R^M_{A_m}(T^i_n)$ between the observation trace $T^i_n$ and the automaton $A_m$ (see Equation (5)). The average value $R^M_{m,n} = \sum_{i=1}^{100} R^M_{A_m}(T^i_n)/100$ of these distances is reported in the (m, n)-th cell of Table 2.
Our system classifies the testing task represented by column n as being generated by the automaton $A_k$ such that $k = \arg\min_m R^M_{m,n}$ (minimizing the distance maximizes the trace probability); the smallest value in each column of Table 2 thus identifies the recognized strategy.
Analysis of the results indicates that the PFA recognition system is able to correctly identify the three random strategies (Rnd_Walk, Rnd_Crash and Rnd_ZigZag). However, the system most often fails when recognizing the corresponding underlying deterministic strategies (Walk, Crash and ZigZag). In addition, note that the deterministic versions of the random behaviors Crash and ZigZag are not confused with each other, but each of them is most of the time classified by the system as its corresponding non-deterministic version (Crash is classified as Rnd_Crash and ZigZag as Rnd_ZigZag for all three models). Moreover, Walk is, on average, correctly classified by Models 2 and 3.
In Table 3, we present the confusion matrix. This is a specific table layout that allows the visualization of the performance of a supervised learning algorithm (see [16]). The numerical value $C^M_{m,n}$ placed in the (m, n)-th cell of this matrix is the empirical probability of the n-th task being classified as the m-th task, that is, the percentage of the learning traces produced using strategy n that are recognized as being produced by strategy m. More precisely,

$$C^M_{m,n} = \frac{|\{i \in \{1, \ldots, n_{tests}\} \mid m = \arg\min_k R^M_{A_k}(T^i_n)\}|}{n_{tests}}$$
The diagonal of this matrix contains the empirical probabilities of correct classification, and, for each column, the sum of the off-diagonal entries is the probability of error.
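A small Python sketch of how such a confusion matrix can be assembled is given below (ours, with illustrative names); `classify` stands for the arg-min rule of Section 2.4 applied to a single test trace.

```python
# Sketch (ours) of assembling the empirical confusion matrix: classify every test
# trace of strategy n and record which strategy m it is attributed to.

def confusion_matrix(strategies, test_traces, classify):
    """test_traces: dict name -> list of (path, word); classify: (path, word) -> name."""
    C = {m: {n: 0.0 for n in strategies} for m in strategies}
    for n in strategies:
        for path, word in test_traces[n]:
            C[classify(path, word)][n] += 1.0 / len(test_traces[n])
    return C
```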
We observe that this second table confirms the conclusions of the first one, with one notable exception: Models 2 and 3 seem to be more prone to confusing the Walk strategy with Rnd_Crash, while Model 1 is the one with the highest rate of success. Note that the percentage of correct classification is not negligible for deterministic strategies of Type 3 (around 0.20 in the case of Crash for all three models, and between 0.16 and 0.39 for ZigZag).

3.5. Behavioral Cloning Experimentation

As in the case of BR, we used a single trace containing 9000 time steps for each strategy to train our models. For testing, we used the same set of 100 randomly generated maps.

3.5.1. Predictive Accuracy

The numbers in the three sub-tables of Table 4 represent average values of the predictive accuracy (see Equation (6)), computed over the randomly generated maps. Note that the only stochastic model is the PFA.
Analyzing the results, we can see that our hierarchy of models behaves as expected: Type 3 behavior is very well captured by Model 3 (the one that uses information about both the previous state and the previous action), while Type 2 deterministic behavior is better explained by Model 1 (in which we only take into account the current state). Note that, even though, intuitively, the more information we have the better we can predict, in the case of Type 2 behavior using this extra information can do more harm than good. Another anticipated result that was experimentally confirmed is that Model 1 is very good at predicting the Walk strategy, because the agent always performs the same action in a given state. A surprising conclusion that can be drawn is that, while Rnd_Walk should be the most unpredictable strategy of all, in the case of Models 2 and 3 most of our classifiers have even worse accuracy for the Walk strategy. It is worth noting the high accuracy rates of Model 3 for Rnd_Crash, Rnd_ZigZag and ZigZag, all Type 3 strategies. Furthermore, this model gives somewhat lower but still satisfactory prediction rates for the Crash strategy.
A PFA is the best option when predicting the behavior of the Walk and ZigZag strategies (both in their deterministic and stochastic versions) using a minimal amount of memory. In addition, although, intuitively, a stochastic model should be better than a deterministic one at describing the behavior of a random process, according to the predictive accuracy metric the advantage of using PFAs appears to have more to do with the amount of memory used than with the nature of the underlying process.

3.5.2. Monte Carlo Distance

Predictive accuracy, however, is not a very good metric for BC when behaviors are non-deterministic. Consider, for example, the extreme case of cloning a random agent. Using the predictive accuracy metric, the highest accuracy a learning agent could expect is 0.25 (if there are four possible actions), even when the behavior is perfectly cloned. Thus, the Monte Carlo distance metric is a more adequate metric when comparing stochastic behaviors.
We used the same set of 100 randomly generated maps. For each of the classifiers (DT, PNN, kNN, RProp, NB) and for each of the six predefined strategies (Walk, Rnd_Walk, Crash, Rnd_Crash, ZigZag, Rnd_ZigZag), we generated the trace T produced by an agent whose next action is dictated by the classifier on each of the randomly generated maps (each trace contains at most 1500 observations). Note that, since the strategy is a learned one, it could happen that the action suggested by the model is not feasible (for example, the model says to go up even if there is an obstacle in that direction). Therefore, one may end up having empty traces (marked by a dash in Table 5). On the other hand, we generated the trace $T'$ of each of the six predefined strategies on those maps (in this case, all traces have 1500 observations). In Table 5, we present the average of $H(T, T')$ for each classifier/strategy pair (see Equation (8)). For stochastic models, the next action, instead of being predefined, is obtained by sampling according to the probability distribution given by the trained model.
According to this metric, PFAs are globally the best tool when less information is used (Models 1 and 2), being outperformed by DTs in the case in which the model of the learner also takes into consideration the previous action (Model 3). Note that PFAs are mostly better than the other classifiers when learning random strategies. One surprising conclusion that can be drawn is that NB seems to be the best tool for the Walk strategy when the learner is allowed to use knowledge about the past (Models 2 and 3).
Finally, for each model, each strategy and each machine learning tool, we report the number of times the given tool was the best one. Since we have 100 different maps, this number is always between 0 and 100. In the case of equal Monte Carlo distance scores, we assigned a corresponding fraction to each of the ‘winning’ models. For example, if there are three tools with the same score, each of them receives 1/3 of a point for that particular map (therefore, we may end up with non-integer values).
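The tie-splitting tally described above can be sketched in a few lines of Python (ours; names are illustrative), assuming that for every map we have the Monte Carlo distance obtained by each tool.

```python
# Sketch (ours) of the tie-splitting tally: on each map, the tools that share the
# smallest Monte Carlo distance split one point equally among themselves.

def tally_wins(scores_per_map, tools):
    """scores_per_map: one dict per map, mapping tool -> Monte Carlo distance."""
    wins = {t: 0.0 for t in tools}
    for scores in scores_per_map:
        best = min(scores.values())
        winners = [t for t in tools if scores[t] == best]
        for t in winners:
            wins[t] += 1.0 / len(winners)
    return wins
```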
Table 6 confirms PFAs as being the best tool in the case of Model 1, this time even for the Crash strategy. In the case of Model 2, the PFA is the best overall tool and the most reliable one for random strategies. In addition, note that the frequency with which it returns the best score is very close to that of NB for Walk and to that of DT for ZigZag. Again, it is in the third model where its use does not pay off, being outperformed by DTs when the strategy to be cloned is of Type 3.

4. Conclusions

4.1. Discussion

Three of the major open challenges in Learning from Observation are (1) devising training regimes that address the particularities that make LfO different from supervised learning (namely, that LfO violates the i.i.d. assumption) [25]; (2) devising approaches to handle long-range variable interactions (actions depending on past states); and (3) devising performance metrics that adequately characterize the performance of LfO agents. The work presented in this paper (in addition to providing a general probabilistic framework that can handle both Behavioral Recognition and Behavioral Cloning) constitutes a contribution to several of those challenges.
Specifically, we presented an empirical study showing the learning performance of PFAs under different assumptions over the amount of past memory used for learning, showing that considering past states can significantly improve learning performance when long-range dependencies exist. However, when learning behaviors where such long-range dependencies do not exist, considering past states actually hinders performance. Additionally, building on our previous work [19], we showed how traditional supervised performance metrics, such as classification accuracy, are not enough to capture the performance of LfO agents, and proposed a metric based on Vapnik’s risk.
Concerning the experimental results for Behavioral Recognition, we have shown that our approach using PFAs correctly identifies tasks performed by the agent whenever those tasks have a certain random component. The inference technique is based on selecting the PFA of the model that assigns the greatest likelihood to the observed trace. One of the strengths of our approach is that it captures the stochastic aspect of behaviors. On the other hand, we observed that this technique has difficulties in distinguishing between a certain strategy and a similar strategy perturbed with some degree of randomness.
Concerning Behavioral Cloning, we compared our approach against supervised learning classifiers that predict the action of the agent in a given state (deterministic models). Then, we trained a PFA to estimate the probabilities of the agent's actions in a given state (a stochastic model). In both cases, we measured the predictive accuracy on new unseen data: supervised learning approaches seemed to perform better according to classification accuracy. However, when comparing the trace of the learned strategy with the trace of the original agent on a randomly generated map using the Monte Carlo distance metric, we could verify experimentally that PFAs had actually learned a better model of the agent's behavior in general. This highlights both the need to use appropriate models to learn certain behaviors (e.g., stochastic models for stochastic behaviors) and the need to use appropriate performance metrics for the evaluation of LfO algorithms.

4.2. Future Work

A major challenge in both behavior recognition and behavioral cloning approaches, such as the ones explored in this paper, is determining the amount of ‘memory’ that the learning agent should have access to in order to model the behavior at hand. In our experimental results, we have seen that, for behaviors that only require considering the current state, using Model 1 (which only considers the current state) results in better learning performance. More complex models (like Models 2 and 3 considered above) allow learning more complex behaviors, but they should be used only if necessary. Devising strategies for determining the amount of past memory required to learn a task remains an open problem, which we plan to address in our future work.
Concerning scalability, the major computational limitation of the proposed learning machines is the amount of memory required by the trained automaton. A possible solution we plan to investigate in our future work is to employ only a small number of non-observable internal states. Another possible approach is the use of some notion of context learning [26] in order to reduce the number of possible states.
We also plan to extend this methodology to handle continuous state and action spaces. This would allow modeling more realistic types of robots, which can perform rotations of different angles and whose position is represented by real values on the map. It would also allow us to consider behaviors such as the outward-moving spiral of the Roomba robot. Future work also contemplates the usage of probabilistic transducers to take into account the input-output (state-action) nature of the observations composing the learning traces.

Acknowledgments

This work was partially supported by project PAC::LFO (MTM2014-55262-P) of Programa Estatal de Fomento de la Investigación Científica y Técnica de Excelencia, Ministerio de Ciencia e Innovación (MICINN), Spain, and by the National Science Foundation (NSF) project SCH-1521943, USA.

Author Contributions

Cristina Tîrnăucă and José L. Montaña conceived and designed the paper structure and the experiments; Cristina Tîrnăucă performed the experiments; Santiago Ontañón, Avelino J. González and Luis M. Pardo contributed with materials and analysis tools.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
PFA      Probabilistic Finite Automaton
SAF      Semi-Automated Forces
CGF      Computer-Generated Forces
RL       Reinforcement Learning
LfO      Learning from Observation
ALVINN   Autonomous Land Vehicle In a Neural Network
SOAR     State, Operator And Result
DNA      DeoxyriboNucleic Acid
AR       Activity Recognition
HAR      Human Activity Recognition
BR       Behavioral Recognition
BC       Behavioral Cloning
DT       Decision Tree
kNN      k-Nearest Neighbor
NB       Naive Bayes
PNN      Probabilistic Neural Network
MICINN   Ministerio de Ciencia e Innovación
NSF      National Science Foundation

References

  1. Mateas, M. Expressive AI: Games and Artificial Intelligence. In Proceedings of the Digital Games Research Conference, Utrecht, The Netherlands, 4–6 November 2003.
  2. Bauer, M.A. Programming by Examples. Artif. Intell. 1979, 12, 1–21. [Google Scholar] [CrossRef]
  3. Lozano-Pérez, T. Robot Programming. Proc. IEEE 1983, 71, 821–841. [Google Scholar] [CrossRef]
  4. Michalski, R.S.; Stepp, R.E. Learning From Observation: Conceptual Clustering. In Machine Learning: An Artificial Intelligence Approach; Michalski, R.S., Carbonell, J.G., Mitchell, T.M., Eds.; Springer: Berlin/Heidelberg, Germany, 1983; pp. 331–364. [Google Scholar]
  5. Gonzalez, A.J.; Georgiopoulos, M.; DeMara, R.F.; Henninger, A.; Gerber, W. Automating the CGF Model Development and Refinement Process by Observing Expert Behavior in a Simulation. In Proceedings of the 7th Conference on Computer Generated Forces and Behavioral Representation, Orlando, FL, USA, 12–14 May 1998.
  6. Sammut, C.; Hurst, S.; Kedzier, D.; Michie, D. Learning to Fly. In Proceedings of the 9th International Workshop on Machine Learning, Aberdeen, UK, 1–3 July 1992; Sleeman, D.H., Edwards, P., Eds.; Morgan Kaufmann: San Francisco, CA, USA, 1992; pp. 385–393. [Google Scholar]
  7. Sidani, T. Automated Machine Learning from Observation of Simulation. Ph.D. Thesis, University of Central Florida, Orlando, FL, USA, 1994. [Google Scholar]
  8. Fernlund, H.K.G.; Gonzalez, A.J.; Georgiopoulos, M.; DeMara, R.F. Learning tactical human behavior through observation of human performance. IEEE Trans. Syst. Man Cybern. B 2006, 36, 128–140. [Google Scholar] [CrossRef]
  9. Pomerleau, D. ALVINN: An Autonomous Land Vehicle in a Neural Network. In Proceedings of the Advances in Neural Information Processing Systems 1 (NIPS 1988), Denver, CO, USA, 1988; Touretzky, D.S., Ed.; Morgan Kaufmann: San Francisco, CA, USA, 1989; pp. 305–313. [Google Scholar]
  10. Moriarty, C.L.; Gonzalez, A.J. Learning Human Behavior from Observation for Gaming Applications. In Proceedings of the 22nd International Florida Artificial Intelligence Research Society Conference, Sanibel Island, FL, USA, 19–21 May 2009; Lane, H.C., Guesgen, H.W., Eds.; AAAI Press: Menlo Park, CA, USA, 2009. [Google Scholar]
  11. Könik, T.; Laird, J.E. Learning goal hierarchies from structured observations and expert annotations. Mach. Learn. 2006, 64, 263–287. [Google Scholar] [CrossRef]
  12. Floyd, M.W.; Esfandiari, B.; Lam, K. A Case-Based Reasoning Approach to Imitating RoboCup Players. In Proceedings of the 21st International Florida Artificial Intelligence Research Society Conference, Coconut Grove, FL, USA, 15–17 May 2008; Wilson, D., Lane, H.C., Eds.; AAAI Press: Menlo Park, CA, USA, 2008; pp. 251–256. [Google Scholar]
  13. Ontañón, S.; Mishra, K.; Sugandh, N.; Ram, A. On-Line Case-Based Planning. Comput. Intell. 2010, 26, 84–119. [Google Scholar] [CrossRef]
  14. Ng, A.Y.; Russell, S. Algorithms for Inverse Reinforcement Learning. In Proceedings of the 17th International Conference on Machine Learning, Stanford, CA, USA, 29 June–2 July 2000; Langley, P., Ed.; Morgan Kaufmann: San Francisco, CA, USA, 2000; pp. 663–670. [Google Scholar]
  15. Michie, D.; Bain, M.; Hayes-Michie, J. Cognitive models from subcognitive skills. In Knowledge-Based Systems for Industrial Control; McGhee, J., Grimble, M.J., Mowforth, P., Eds.; P. Peregrinus Ltd. on behalf of the Institution of Electrical Engineers: London, UK, 1990; pp. 71–99. [Google Scholar]
  16. Fatima, I.; Fahim, M.; Lee, Y.K.; Lee, S. A Unified Framework for Activity Recognition-Based Behavior Analysis and Action Prediction in Smart Homes. Sensors 2013, 13, 2682–2699. [Google Scholar] [CrossRef] [PubMed]
  17. Rabiner, L.R. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proc. IEEE 1989, 77, 257–286. [Google Scholar] [CrossRef]
  18. Abecasis, G.R.; Altshuler, D.; Auton, A.; Brooks, L.D.; Durbin, R.M.; Gibbs, R.A.; Hurles, M.E.; McVean, G.A.; Bentley, D.R.; et al. A Map of Human Genome Variation from Population-Scale Sequencing. Nature 2010, 467, 1061–1073. [Google Scholar] [PubMed]
  19. Ontañón, S.; Montaña, J.L.; Gonzalez, A.J. A Dynamic-Bayesian Network framework for modeling and evaluating learning from observation. Expert Syst. Appl. 2014, 41, 5212–5226. [Google Scholar] [CrossRef]
  20. Shafer, J.C.; Agrawal, R.; Mehta, M. SPRINT: A Scalable Parallel Classifier for Data Mining. In Proceedings of the 22nd International Conference on Very Large Data Bases, Mumbai, India, 3–6 September 1996; Vijayaraman, T.M., Buchmann, A.P., Mohan, C., Sarda, N.L., Eds.; Morgan Kaufmann: San Francisco, CA, USA, 1996; pp. 544–555. [Google Scholar]
  21. Berthold, M.R.; Diamond, J. Constructive training of probabilistic neural networks. Neurocomputing 1998, 19, 167–183. [Google Scholar] [CrossRef]
  22. Riedmiller, M.; Braun, H. A direct adaptive method for faster backpropagation learning: The RProp algorithm. In Proceedings of the IEEE International Conference on Neural Networks, San Francisco, CA, USA, 28 March–1 April 1993; Volume 1, pp. 586–591.
  23. Rabin, M.O. Probabilistic Automata. Inf. Control 1963, 6, 230–245. [Google Scholar] [CrossRef]
  24. Dupont, P.; Denis, F.; Esposito, Y. Links between probabilistic automata and hidden Markov models: Probability distributions, learning models and induction algorithms. Pattern Recognit. 2005, 38, 1349–1371. [Google Scholar] [CrossRef]
  25. Ross, S.; Bagnell, D. Efficient reductions for imitation learning. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 13–15 May 2010; pp. 661–668.
  26. Trinh, V.C.; Gonzalez, A.J. Discovering contexts from observed human performance. IEEE Trans. Hum. Mach. Syst. 2013, 43, 359–370. [Google Scholar] [CrossRef]
Figure 1. Training Maps, where obstacles are shown in black. The starting position of the vacuum cleaner is shown in red.
Table 1. Training examples for LfO.

Model 1            Model 2                       Model 3
1st      Class     1st       2nd      Class      1st       2nd       3rd      Class
x_1      y_1       -         x_1      y_1        -         -         x_1      y_1
x_2      y_2       x_1       x_2      y_2        x_1       y_1       x_2      y_2
...      ...       ...       ...      ...        ...       ...       ...      ...
x_m      y_m       x_{m-1}   x_m      y_m        x_{m-1}   y_{m-1}   x_m      y_m
Table 2. Distance matrix.

$R^1_{m,n}$ values (Model 1)

9000/1500 obs   Walk      Rnd_Walk   Crash     Rnd_Crash   ZigZag    Rnd_ZigZag
Walk            8.4267    9.2504     7.3178    7.8728      8.9871    9.0486
Rnd_Walk        8.9414    3.9751     4.3265    3.8846      4.6463    4.6070
Crash           8.4996    6.5079     4.7726    5.8376      4.3072    4.3743
Rnd_Crash       8.4245    4.1331     3.9854    3.4244      4.3935    4.3924
ZigZag          9.1459    7.1087     5.3849    5.8159      3.9715    3.9815
Rnd_ZigZag      9.1683    7.1035     5.4043    5.7853      3.9563    3.9591

$R^2_{m,n}$ values (Model 2)

9000/1500 obs   Walk      Rnd_Walk   Crash     Rnd_Crash   ZigZag    Rnd_ZigZag
Walk            9.7283    9.8276     8.5274    8.7760      9.7278    9.7654
Rnd_Walk        9.8016    4.9923     6.4832    5.8042      6.0120    6.0783
Crash           9.8502    8.0886     6.8461    7.4848      6.4688    6.5510
Rnd_Crash       9.8487    5.6960     6.1228    5.2204      6.3857    6.4331
ZigZag          9.8663    8.1203     7.8902    8.4213      5.5873    5.6445
Rnd_ZigZag      9.8681    8.1002     7.8278    8.3599      5.5506    5.5981

$R^3_{m,n}$ values (Model 3)

9000/1500 obs   Walk      Rnd_Walk   Crash     Rnd_Crash   ZigZag    Rnd_ZigZag
Walk            11.7101   11.8912    10.5335   10.7006     11.8687   11.8792
Rnd_Walk        11.8415   7.1282     8.8885    8.2754      8.2794    8.3277
Crash           11.8359   11.4252    8.4840    9.0659      8.1677    8.2454
Rnd_Crash       11.8261   10.7019    7.5571    6.5445      7.8168    7.8871
ZigZag          11.8808   11.2667    9.2245    9.6458      6.7169    6.8174
Rnd_ZigZag      11.8824   11.2563    9.1614    9.5850      6.6728    6.7602
Table 3. Confusion matrix.

$C^1_{m,n}$ values (Model 1)

9000/1500 obs   Walk   Rnd_Walk   Crash   Rnd_Crash   ZigZag   Rnd_ZigZag
Walk            0.71   0.00       0.00    0.00        0.00     0.00
Rnd_Walk        0.12   0.99       0.10    0.00        0.00     0.01
Crash           0.03   0.00       0.20    0.00        0.03     0.04
Rnd_Crash       0.01   0.01       0.63    0.99        0.10     0.09
ZigZag          0.13   0.00       0.03    0.01        0.39     0.24
Rnd_ZigZag      0.00   0.00       0.04    0.00        0.48     0.62

$C^2_{m,n}$ values (Model 2)

9000/1500 obs   Walk   Rnd_Walk   Crash   Rnd_Crash   ZigZag   Rnd_ZigZag
Walk            0.28   0.00       0.04    0.01        0.01     0.01
Rnd_Walk        0.05   1.00       0.24    0.01        0.00     0.00
Crash           0.24   0.00       0.21    0.01        0.03     0.03
Rnd_Crash       0.42   0.00       0.43    0.97        0.03     0.01
ZigZag          0.01   0.00       0.01    0.00        0.18     0.05
Rnd_ZigZag      0.00   0.00       0.07    0.00        0.75     0.90

$C^3_{m,n}$ values (Model 3)

9000/1500 obs   Walk   Rnd_Walk   Crash   Rnd_Crash   ZigZag   Rnd_ZigZag
Walk            0.27   0.00       0.03    0.01        0.01     0.01
Rnd_Walk        0.05   1.00       0.01    0.00        0.00     0.00
Crash           0.07   0.00       0.20    0.01        0.03     0.02
Rnd_Crash       0.60   0.00       0.64    0.98        0.01     0.01
ZigZag          0.01   0.00       0.02    0.00        0.16     0.03
Rnd_ZigZag      0.00   0.00       0.10    0.00        0.79     0.93
Table 4. Predictive accuracy for the generated maps (higher is better).

Model 1      PFA     DT      PNN     kNN     RProp   NB
Walk         0.977   0.925   0.924   0.925   0.452   0.504
Rnd_Walk     0.276   0.276   0.275   0.276   0.270   0.253
Crash        0.571   0.629   0.629   0.626   0.561   0.336
Rnd_Crash    0.388   0.398   0.392   0.394   0.381   0.381
ZigZag       0.485   0.478   0.478   0.475   0.454   0.456
Rnd_ZigZag   0.485   0.467   0.468   0.464   0.442   0.447
Average      0.530   0.529   0.528   0.527   0.427   0.396

Model 2      PFA     DT      PNN     kNN     RProp   NB
Walk         0.288   0.516   0.079   0.124   0.448   0.078
Rnd_Walk     0.270   0.276   0.269   0.272   0.267   0.267
Crash        0.509   0.612   0.570   0.578   0.547   0.328
Rnd_Crash    0.391   0.419   0.411   0.421   0.392   0.370
ZigZag       0.522   0.554   0.556   0.543   0.504   0.460
Rnd_ZigZag   0.528   0.584   0.558   0.574   0.513   0.451
Average      0.418   0.493   0.407   0.419   0.445   0.326

Model 3      PFA     DT      PNN     kNN     RProp   NB
Walk         0.288   0.079   0.080   0.119   0.080   0.071
Rnd_Walk     0.267   0.276   0.267   0.271   0.278   0.271
Crash        0.702   0.815   0.705   0.725   0.758   0.563
Rnd_Crash    0.923   0.945   0.909   0.929   0.930   0.929
ZigZag       0.881   0.960   0.885   0.910   0.871   0.836
Rnd_ZigZag   0.880   0.961   0.882   0.930   0.876   0.815
Average      0.657   0.673   0.621   0.647   0.632   0.581
Table 5. Monte Carlo distance between original and cloned behavior (lower is better).

Model 1      PFA     DT      PNN     kNN     RProp   NB
Walk         0.854   0.866   0.962   0.866   4.836   4.797
Rnd_Walk     2.487   5.130   5.062   5.062   -       -
Crash        3.012   2.599   2.599   2.574   3.119   5.572
Rnd_Crash    3.067   3.853   3.762   3.823   3.343   3.767
ZigZag       2.261   3.497   3.608   3.497   3.316   4.324
Rnd_ZigZag   2.357   3.506   3.591   3.506   4.354   4.354
Average      2.340   3.242   3.264   3.221   3.794   4.563

Model 2      PFA     DT      PNN     kNN     RProp   NB
Walk         6.165   7.072   7.274   6.718   6.159   6.033
Rnd_Walk     5.385   6.886   6.958   7.189   7.538   7.575
Crash        5.538   6.460   5.408   5.760   4.817   -
Rnd_Crash    4.750   5.157   5.602   5.757   4.912   -
ZigZag       4.818   4.191   4.501   4.336   -       7.663
Rnd_ZigZag   4.759   4.502   4.727   4.544   4.524   7.683
Average      5.236   5.711   5.745   5.718   5.590   7.239

Model 3      PFA     DT      PNN     kNN     RProp   NB
Walk         6.961   8.082   7.953   7.524   6.955   6.830
Rnd_Walk     6.940   7.894   7.941   7.983   7.978   8.348
Crash        6.000   4.797   6.924   6.685   5.613   5.613
Rnd_Crash    5.470   4.986   6.324   5.325   5.708   -
ZigZag       5.914   4.561   5.324   4.903   5.871   8.460
Rnd_ZigZag   5.845   4.220   5.416   4.934   5.234   8.459
Average      6.188   5.757   6.647   6.226   6.227   7.542
Table 6. Empirical distribution of the winning tool (smallest Monte Carlo distance).

Model 1      PFA     DT      PNN     kNN     RProp   NB
Walk         31.50   22.50   21.50   22.50   1.00    1.00
Rnd_Walk     99.00   1.00    0.00    0.00    0.00    0.00
Crash        43.30   10.30   10.30   10.30   24.30   1.50
Rnd_Crash    63.33   3.00    5.00    4.33    21.00   3.33
ZigZag       95.17   0.92    0.92    0.92    0.92    1.17
Rnd_ZigZag   93.17   1.50    1.50    1.50    1.17    1.17
Average      70.91   6.54    6.54    6.59    8.06    1.36

Model 2      PFA     DT      PNN     kNN     RProp   NB
Walk         29.95   21.25   4.70    5.12    7.95    31.03
Rnd_Walk     58.67   16.50   16.00   4.50    2.17    2.17
Crash        9.50    22.00   22.33   8.83    37.17   0.17
Rnd_Crash    31.00   24.00   7.50    4.50    33.00   0.00
ZigZag       26.50   30.50   17.33   22.33   1.67    1.67
Rnd_ZigZag   34.33   15.92   8.92    17.08   22.08   1.67
Average      31.66   21.69   12.80   10.39   17.34   6.12

Model 3      PFA     DT      PNN     kNN     RProp   NB
Walk         31.12   16.83   8.28    5.45    8.37    29.95
Rnd_Walk     49.00   15.50   10.00   5.50    15.00   5.00
Crash        13.00   50.00   12.33   10.33   7.17    7.17
Rnd_Crash    11.67   37.17   10.67   20.67   19.83   0.00
ZigZag       7.42    39.67   11.42   28.42   11.42   1.67
Rnd_ZigZag   8.67    47.67   9.17    22.17   10.67   1.67
Average      20.15   34.47   10.31   15.42   12.08   7.58
