Special Issue "Probabilistic Causal Modelling in Intelligent Systems"

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (1 August 2018)

Special Issue Editors

Guest Editor
Dr. Kevin B Korb

Faculty of Information Technology, Monash University, Clayton, Victoria 3800, Australia
Interests: causal models; causal discovery; Bayesian networks
Guest Editor
Prof. Ann E Nicholson

Clayton School of Information Technology, Monash University, Clayton, Victoria 3800, Australia
Interests: artificial intelligence; Bayesian networks; data mining; evolutionary ethics; intelligent agents; knowledge engineering; plan recognition; probabilistic reasoning; user modelling
Guest Editor
Mr. Erik Nyberg

Clayton School of Information Technology, Monash University, Clayton, Victoria 3800, Australia
Interests: artificial intelligence; Bayesian networks

Special Issue Information

Dear Colleagues,

Probabilistic Causality—the idea that causality is stochastic and that probabilistic dependencies reveal their causal foundations—has come a long way since its origins with the work of Hans Reichenbach in the 1950s. After losing its reductionist pretensions in the 1970s and 1980s in debates within Philosophy of Science, it crashed headlong into the Bayesian network technology emerging from Statistics and Artificial Intelligence (AI). Out of that collision, in the late 1980s, grew some remarkable innovations, including the Causal Discovery programs of Clark Glymour and collaborators in Philosophy and of Judea Pearl and others in AI. The technology and new ideas have continued to flow at a great pace. This Special Issue on “Probabilistic Causal Modelling” will provide a view of where we have come from, a snapshot of where we are, and a probabilistic prediction of where we are headed.

Dr. Kevin B Korb
Prof. Ann E Nicholson
Mr. Erik Nyberg
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 850 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • causal Bayesian networks
  • probabilistic graphical models
  • probabilistic causality
  • information theory
  • causal power

Published Papers (2 papers)


Research

Open Access Article: A Theory of Physically Embodied and Causally Effective Agency
Information 2018, 9(10), 249; https://doi.org/10.3390/info9100249
Received: 31 July 2018 / Revised: 18 September 2018 / Accepted: 28 September 2018 / Published: 6 October 2018
Abstract
Causality is fundamental to agency. Intelligent agents learn about causal relationships by interacting with their environments and use their causal knowledge to choose actions intended to bring about desired outcomes. This paper considers a causal question that is central to the very meaning of agency, that of how a physically embodied agent effects intentional action in the world. The prevailing assumption is that both biological and computer agents are automatons whose decisions are determined by the physical processes operating in their information processing apparatus. As an alternative hypothesis, this paper presents a mathematical model of causally efficacious agency. The model is based on Stapp’s theory of efficacious choice in physically embodied agents. Stapp’s theory builds on a realistic interpretation of von Neumann’s mathematical formalization of quantum theory. Because it is consistent with the well-established precepts of quantum theory, Stapp’s theory has been dismissed as metaphysical and unfalsifiable. However, if taken seriously as a model of efficacious choice in biological agents, the theory does have empirically testable implications. This paper formulates Stapp’s theory as an interventionist causal theory in which interventions are ascribed to agents and can have macroscopically distinguishable effects in the world. Empirically testable implications of the theory are discussed and a path toward scientific evaluation is proposed. Implications for artificial intelligence are considered.
(This article belongs to the Special Issue Probabilistic Causal Modelling in Intelligent Systems)
Open Access Article: Imprecise Bayesian Networks as Causal Models
Information 2018, 9(9), 211; https://doi.org/10.3390/info9090211
Received: 12 July 2018 / Revised: 15 August 2018 / Accepted: 20 August 2018 / Published: 23 August 2018
Abstract
This article considers the extent to which Bayesian networks with imprecise probabilities, which are used in statistics and computer science for predictive purposes, can be used to represent causal structure. It is argued that the adequacy conditions for causal representation in the precise context—the Causal Markov Condition and Minimality—do not readily translate into the imprecise context. Crucial to this argument is the fact that the independence relation between random variables can be understood in several different ways when the joint probability distribution over those variables is imprecise, none of which provides a compelling basis for the causal interpretation of imprecise Bayes nets. I conclude that there are serious limits to the use of imprecise Bayesian networks to represent causal structure.
(This article belongs to the Special Issue Probabilistic Causal Modelling in Intelligent Systems)
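
The abstract above turns on the observation that "independence" admits several non-equivalent readings once the joint distribution is imprecise. As a rough illustrative sketch only (not drawn from the paper), the Python snippet below builds a toy credal set over two binary variables X and Y and compares two standard readings from the imprecise-probability literature: element-wise stochastic independence (every precise distribution in the set factorizes) and a simplified, single-event check of epistemic irrelevance (observing X leaves the probability interval for Y = 1 unchanged). The variable names, the particular numbers, and the restriction to one event are all assumptions made for illustration.

    # A toy credal set (a finite set of precise joint distributions) over two
    # binary variables X and Y. All names and numbers are illustrative
    # assumptions, not taken from the paper.
    from itertools import product

    def joint(p_x1, p_y1_given_x1, p_y1_given_x0):
        """Return P(X=x, Y=y) as a dict, given P(X=1) and P(Y=1 | X=x)."""
        return {
            (1, 1): p_x1 * p_y1_given_x1,
            (1, 0): p_x1 * (1 - p_y1_given_x1),
            (0, 1): (1 - p_x1) * p_y1_given_x0,
            (0, 0): (1 - p_x1) * (1 - p_y1_given_x0),
        }

    credal_set = [
        joint(0.5, 0.3, 0.3),  # X and Y independent, P(Y=1) = 0.3
        joint(0.5, 0.7, 0.7),  # X and Y independent, P(Y=1) = 0.7
        joint(0.5, 0.7, 0.3),  # X and Y dependent,   P(Y=1) = 0.5
    ]

    def marginal_y1(P):
        return P[(0, 1)] + P[(1, 1)]

    def conditional_y1(P, x):
        return P[(x, 1)] / (P[(x, 0)] + P[(x, 1)])

    def factorizes(P, tol=1e-9):
        """Element-wise (stochastic) independence: P(x, y) = P(x) * P(y)."""
        px = {x: P[(x, 0)] + P[(x, 1)] for x in (0, 1)}
        py = {y: P[(0, y)] + P[(1, y)] for y in (0, 1)}
        return all(abs(P[(x, y)] - px[x] * py[y]) < tol
                   for x, y in product((0, 1), repeat=2))

    def interval(values):
        """Lower/upper probability over the credal set, rounded for comparison."""
        return (round(min(values), 9), round(max(values), 9))

    # Reading 1: every precise model in the credal set factorizes.
    elementwise = all(factorizes(P) for P in credal_set)

    # Reading 2: observing X does not change the probability interval for Y = 1
    # (a simplified, single-event stand-in for epistemic irrelevance).
    prior = [marginal_y1(P) for P in credal_set]
    posterior = {x: [conditional_y1(P, x) for P in credal_set] for x in (0, 1)}
    irrelevant = all(interval(posterior[x]) == interval(prior) for x in (0, 1))

    print("independent in every precise model:", elementwise)  # False
    print("X epistemically irrelevant to Y:", irrelevant)      # True

In this toy set the interval for P(Y = 1) is [0.3, 0.7] whether or not X is observed, yet one precise model in the set makes X and Y dependent, so the two readings deliver different verdicts about independence.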