The paper examines today's debate on the legal status of AI robots, and how often scholars and policy makers confuse the legal agenthood of these artificial agents with the status of legal personhood. Taking into account current trends in the field, the paper suggests a twofold stance. First, policy makers should seriously consider establishing novel forms of accountability and liability for the activities of AI robots in contracts and business law, e.g., new forms of legal agenthood in cases of complex distributed responsibility. Second, any hypothesis of granting AI robots full legal personhood should be discarded for the foreseeable future. How, then, should we deal with Sophia, which in October 2017 became the first AI application to receive citizenship of any country, namely Saudi Arabia? Admittedly, granting someone, or something, legal personhood is, as it always has been, a highly sensitive political issue that does not hinge simply on rational choices and empirical evidence. Discretion, arbitrariness, and even bizarre decisions play a role in this context. Yet the normative reasons why legal systems grant human and artificial entities, such as corporations, their status help us take sides in today's quest for the legal personhood of AI robots. Is citizen Sophia really conscious, or capable of suffering the slings and arrows of outrageous scholars?
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.