Can a Robot Have Free Will?
Abstract

Using insights from cybernetics and an information-based understanding of biological systems, a precise, scientifically inspired definition of free will is offered, and the essential requirements for an agent to possess it in principle are set out. These are: (a) there must be a self to self-determine; (b) there must be a non-zero probability of more than one option being enacted; (c) there must be an internal means of choosing among options (which is not merely random, since randomness is not a choice). For (a) to be fulfilled, the agent of self-determination must be organisationally closed (a "Kantian whole"). For (c) to be fulfilled: (d) options must be generated from an internal model of the self which can calculate future states contingent on possible responses; (e) choosing among these options requires their evaluation using an internally generated goal defined on an objective function representing the overall "master function" of the agent; and (f) for "deep free will", at least two nested levels of choice and goal (d–e) must be enacted by the agent. The agent must also be able to enact its choice in physical reality. The only systems known to meet all these criteria are living organisms — not just humans, but a wide range of organisms. The main impediment to free will in present-day artificial robots is that they are not Kantian wholes. Consciousness does not seem to be a requirement, and the minimum complexity for a free-will system may be quite low, including relatively simple life-forms that are at least able to learn.
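The decision architecture described by criteria (b)–(e) can be illustrated with a deliberately minimal toy agent. This is an illustrative sketch only, not anything specified in the paper; all function names and the numeric "state" representation are invented for the example:

```python
# Toy sketch (illustrative, not from the paper) of an agent meeting
# criteria (b)-(e): it generates options from an internal model of
# itself (d), evaluates their predicted outcomes against an internally
# held goal via an objective function (e), and enacts the best option --
# a choice that is neither externally forced nor random (c).

def internal_model(state, action):
    """Predict the agent's future state contingent on a possible action (d)."""
    return state + action

def objective(state, goal):
    """Objective function on which the internally generated goal is defined (e)."""
    return -abs(goal - state)

def choose(state, actions, goal):
    """Select the action whose predicted future state best serves the goal (c)."""
    return max(actions, key=lambda a: objective(internal_model(state, a), goal))

# More than one option could in principle be enacted (b); which one is
# enacted depends on the agent's own model and goal, not on chance:
best = choose(0, [-1, 0, 1, 2], goal=3)
print(best)  # → 2
```

For "deep free will" (f), a second, nested level would choose among candidate goals themselves using a higher-level master function; the sketch above implements only the single lower level.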
Cite This Article
Farnsworth, K.D. Can a Robot Have Free Will? Entropy 2017, 19(5), 237.
Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.