Article

Goal-Directed Planning for Habituated Agents by Active Inference Using a Variational Recurrent Neural Network

Okinawa Institute of Science and Technology, Okinawa 904-0495, Japan
* Author to whom correspondence should be addressed.
Entropy 2020, 22(5), 564; https://doi.org/10.3390/e22050564
Received: 20 April 2020 / Revised: 12 May 2020 / Accepted: 15 May 2020 / Published: 18 May 2020
Abstract: It is crucial to ask how agents can achieve goals by generating action plans using only partial models of the world acquired through habituated sensory-motor experiences. Although many existing robotics studies use a forward model framework, generalization becomes difficult when the system has many degrees of freedom. The current study shows that the predictive coding (PC) and active inference (AIF) frameworks, which employ a generative model, can develop better generalization by learning a prior distribution in a low-dimensional latent state space representing probabilistic structures extracted from well-habituated sensory-motor trajectories. In our proposed model, learning is carried out by inferring optimal latent variables as well as synaptic weights for maximizing the evidence lower bound, while goal-directed planning is accomplished by inferring latent variables for maximizing the estimated lower bound. Our proposed model was evaluated with both simple and complex robotic tasks in simulation; these experiments demonstrated sufficient generalization in learning with limited training data when an intermediate value was set for the regularization coefficient. Furthermore, comparative simulation results show that the proposed model outperforms a conventional forward model in goal-directed planning, because the learned prior confines the search for motor plans within the range of habituated trajectories.
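As a rough sketch of the two inference modes summarized above (our notation, assuming a weighted evidence lower bound of the kind used in variational recurrent neural networks; the paper's exact formulation may differ), learning maximizes a lower bound whose Kullback–Leibler term is scaled by the regularization coefficient w, while planning fixes the learned weights and infers the latent variables that best account for a given goal observation under the learned prior:

% Schematic only; z_{1:T} are latent variables, x_{1:T} observations,
% theta generative parameters, phi inference parameters, w the regularization coefficient.
\begin{align}
\mathcal{L}(\theta,\phi) &= \mathbb{E}_{q_\phi(z_{1:T}\mid x_{1:T})}\!\left[\log p_\theta(x_{1:T}\mid z_{1:T})\right]
  - w\, D_{\mathrm{KL}}\!\left(q_\phi(z_{1:T}\mid x_{1:T}) \,\|\, p_\theta(z_{1:T})\right),\\
\text{learning: } &\ \max_{\theta,\phi}\ \mathcal{L}(\theta,\phi), \qquad
\text{planning: }\ \max_{z_{1:T}}\ \log p_\theta\!\left(x^{\mathrm{goal}} \mid z_{1:T}\right)
  - w\, D_{\mathrm{KL}}\!\left(q(z_{1:T}) \,\|\, p_\theta(z_{1:T})\right).
\end{align}

The KL term in the planning objective is what keeps inferred plans close to the prior learned from habituated trajectories, which is the mechanism the abstract credits for outperforming the forward-model baseline.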
Keywords: goal-directed planning; active inference; predictive coding; variational Bayes; recurrent neural network
Graphical abstract

MDPI and ACS Style

Matsumoto, T.; Tani, J. Goal-Directed Planning for Habituated Agents by Active Inference Using a Variational Recurrent Neural Network. Entropy 2020, 22, 564. https://doi.org/10.3390/e22050564

