Special Issue "The Impact of Artificial Intelligence on Law"

A special issue of J (ISSN 2571-8800). This special issue belongs to the section "Computer Science & Mathematics".

Deadline for manuscript submissions: closed (15 March 2022) | Viewed by 14121

Special Issue Editors

Prof. Ugo Pagallo
Guest Editor
Department of Law, University of Turin, 10135 Turin, Italy
Interests: AI and law; information technology law; governance; network theory
Prof. Massimo Durante
Guest Editor
Department of Law, University of Turin, Turin, Italy
Interests: AI and law; governance of algorithms; theory of information; privacy and data protection

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) has a growing normative impact on our daily lives. The use of AI in a wealth of applications is already challenging many areas, issues, and concepts of established law. Much of the law that exists today evolved to account for human actions. As AI systems become smarter, more autonomous, and less transparent in their decision-making, courts will face increasingly complex dilemmas in applying the law to AI: in some cases, new law will be required; in others, existing law will need to be reformed. We urge the development of a legal framework for law and AI in which to generate discussion on how the law can be used not only to spur innovation in AI but also to safeguard the rights of all parties involved as AI spreads more deeply into society.

This Special Issue aims at promoting original and high-quality papers on the impact of AI on law from a multidisciplinary perspective. In particular, the Guest Editors seek papers on AI regulation, human rights considerations, constitutional law challenges, governance of algorithms, legal personality and AI, issues of privacy and data protection, AI and legal analytics, copyright and other fields of intellectual property law (e.g., patents), international humanitarian law, and criminal law. The issue also welcomes papers on classical topics related to AI and law, such as computational methods for negotiation and contract formation, machine learning and data analytics applied to the legal domain, intelligent legal tutoring systems, or intelligent support systems for law and forensics.

We cordially invite you to submit a high-quality original research paper or review to this Special Issue, “The Impact of AI on Law”.

Prof. Massimo Durante
Prof. Ugo Pagallo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. J is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI regulation
  • Human rights
  • Constitutional law challenges
  • Governance of algorithms
  • Legal personality and AI
  • Privacy and data protection
  • AI and legal analytics
  • Copyright and other fields of intellectual property law
  • Computational methods for negotiation and contract formation
  • Machine learning and data analytics applied to the legal domain
  • Intelligent support systems for law and forensics

Published Papers (10 papers)


Research

Article
The Good, the Bad, and the Invisible with Its Opportunity Costs: Introduction to the ‘J’ Special Issue on “the Impact of Artificial Intelligence on Law”
J 2022, 5(1), 139-149; https://doi.org/10.3390/j5010011 - 19 Feb 2022
Viewed by 1044
Abstract
Scholars and institutions have been increasingly debating the moral and legal challenges of AI, together with the models of governance that should strike a balance between the opportunities and threats brought forth by AI, its ‘good’ and ‘bad’ facets. There are more than a hundred declarations on the ethics of AI, and recent proposals for AI regulation, such as the European Commission’s AI Act, have further multiplied the debate. Still, one normative challenge of AI is mostly overlooked: the underuse, rather than the misuse or overuse, of AI from a legal viewpoint. From health care to environmental protection, from agriculture to transportation, there are many instances of how the whole set of benefits and promises of AI can be missed or exploited far below its full potential, and for the wrong reasons: business disincentives and greed among data keepers, bureaucracy and professional reluctance, or public distrust in the era of no-vax conspiracy theories. The opportunity costs that follow this technological underuse are almost terra incognita due to the ‘invisibility’ of the phenomenon, which includes the ‘shadow prices’ of the economy. This introduction provides metrics for such an assessment and relates this work to the development of new standards for the field. We must quantify how much it costs not to use AI systems for the wrong reasons. Full article
(This article belongs to the Special Issue The Impact of Artificial Intelligence on Law)
Article
Metrics, Explainability and the European AI Act Proposal
J 2022, 5(1), 126-138; https://doi.org/10.3390/j5010010 - 18 Feb 2022
Viewed by 934
Abstract
On 21 April 2021, the European Commission proposed the first legal framework on Artificial Intelligence (AI) to address the risks posed by this emerging method of computation. The Commission proposed a Regulation known as the AI Act. The proposed AI Act considers not only machine learning but also expert systems and statistical models long in place. Under the proposed AI Act, new obligations are set to ensure transparency, lawfulness, and fairness. Their goal is to establish mechanisms that ensure quality at launch and throughout the whole life cycle of AI-based systems, thus ensuring legal certainty that encourages innovation and investment in AI systems while preserving fundamental rights and values. A standardisation process is ongoing: several entities (e.g., ISO) and scholars are discussing how to design systems that are compliant with the forthcoming Act, and explainability metrics play a significant role. Specifically, the AI Act sets some new minimum requirements of explicability (transparency and explainability) for the AI systems labelled as “high-risk” in Annex III. These requirements include a plethora of technical explanations capable of covering the right amount of information in a meaningful way. This paper aims to investigate how such technical explanations can be deemed to meet the minimum requirements set by the law and expected by society. To answer this question, we propose an analysis of the AI Act, aiming to understand (1) what specific explicability obligations are set and who shall comply with them and (2) whether any metric for measuring the degree of compliance of such explanatory documentation could be designed. Moreover, by envisaging the legal (or ethical) requirements that such a metric should possess, we discuss how to implement them in a practical way. More precisely, drawing inspiration from recent advancements in the theory of explanations, our analysis proposes that metrics to measure the kind of explainability endorsed by the proposed AI Act shall be risk-focused, model-agnostic, goal-aware, intelligible, and accessible. Therefore, we discuss the extent to which these requirements are met by the metrics currently under discussion. Full article
Article
Law, Socio-Legal Governance, the Internet of Things, and Industry 4.0: A Middle-Out/Inside-Out Approach
J 2022, 5(1), 64-91; https://doi.org/10.3390/j5010005 - 21 Jan 2022
Cited by 1 | Viewed by 1097
Abstract
The Web of Data, the Internet of Things, and Industry 4.0 are converging, and society is challenged to ensure that appropriate regulatory responses can uphold the rule of law fairly and effectively in this emerging context. The challenge extends beyond merely submitting digital processes to the law. We contend that the 20th century notion of ‘legal order’ alone will not be suitable to produce the social order that the law should bring. The article explores the concepts of rule of law and of legal governance in digital and blockchain environments. We position legal governance from an empirical perspective, i.e., as an explanatory and validation concept to support the implementation of the rule of law in the new digital environments. As a novel contribution, this article (i) progresses some of the work done on the metarule of law and complements the SMART middle-out approach with an inside-out approach to digital regulatory systems and legal compliance models; (ii) sets the state-of-the-art and identifies the way to explain and validate legal information flows and hybrid agents’ behaviour; (iii) describes a phenomenological and historical approach to legal and political forms; and (iv) shows the utility of separating enabling and driving regulatory systems. Full article
Article
Argumentation and Defeasible Reasoning in the Law
J 2021, 4(4), 897-914; https://doi.org/10.3390/j4040061 - 18 Dec 2021
Cited by 1 | Viewed by 929
Abstract
Different formalisms for defeasible reasoning have been used to represent knowledge and reason in the legal field. In this work, we provide an overview of the following logic-based approaches to defeasible reasoning: defeasible logic, Answer Set Programming, ABA+, ASPIC+, and DeLP. We compare features of these approaches under three perspectives: the logical model (knowledge representation), the method (computational mechanisms), and the technology (available software resources). On top of that, two real examples in the legal domain are designed and implemented in ASPIC+ to showcase the benefit of an argumentation approach in real-world domains. The CrossJustice and Interlex projects are taken as a testbed, and experiments are conducted with the Arg2P technology. Full article
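The formalisms compared in this abstract share one core idea: a defeasible rule yields its conclusion only as long as no stronger conflicting rule defeats it. The following minimal Python sketch illustrates that idea in the spirit of defeasible logic; it is not the article's ASPIC+/Arg2P implementation, and the rule base and priorities are invented for illustration.

```python
# Minimal defeasible-reasoning sketch (illustrative; not the article's code).
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    premises: frozenset     # literals that must already be derived
    conclusion: str         # a literal; "-x" denotes the negation of "x"
    priority: int = 0       # among conflicting fired rules, higher wins

def negate(literal: str) -> str:
    return literal[1:] if literal.startswith("-") else "-" + literal

def conclude(facts: set, rules: list) -> set:
    """Forward-chain over the rules, letting a fired rule be defeated by a
    conflicting fired rule of strictly higher priority."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        fired = [r for r in rules if r.premises <= derived]
        for r in fired:
            defeated = any(r2.conclusion == negate(r.conclusion)
                           and r2.priority > r.priority
                           for r2 in fired)
            if not defeated and r.conclusion not in derived:
                derived.add(r.conclusion)
                changed = True
    return derived

# Textbook-style legal example: contracts are normally enforceable, but a
# contract signed by a minor is not; the more specific rule outranks the
# general one, so the exception defeats the default.
rules = [
    Rule("r1", frozenset({"contract"}), "enforceable", priority=1),
    Rule("r2", frozenset({"contract", "signed_by_minor"}), "-enforceable", priority=2),
]
default_case = conclude({"contract"}, rules)                       # derives "enforceable"
exception_case = conclude({"contract", "signed_by_minor"}, rules)  # r2 defeats r1
```

Full frameworks such as ASPIC+ or DeLP add argument construction, attack, and acceptability semantics on top of this defeat-by-priority mechanism, but the exception-overrides-default pattern above is the common kernel.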

Article
Nothing to Be Happy about: Consumer Emotions and AI
J 2021, 4(4), 784-793; https://doi.org/10.3390/j4040053 - 16 Nov 2021
Cited by 1 | Viewed by 1032
Abstract
Advancements in artificial intelligence and Big Data allow a range of goods and services to determine and respond to a consumer’s emotional state of mind. Considerable potential surrounds the technological ability to detect and respond to an individual’s emotions, yet such technology is also controversial and raises questions surrounding the legal protection of emotions. Although emotions are highly sensitive and private in nature, this article highlights their inadequate protection in aspects of data protection and consumer protection law, arguing that the contribution of the recent proposal for an Artificial Intelligence Act is not only unsuitable for overcoming such deficits but also does little to support the assertion that emotions are highly sensitive. Full article
Article
The Ethical Assessment of Autonomous Systems in Practice
J 2021, 4(4), 749-763; https://doi.org/10.3390/j4040051 - 10 Nov 2021
Cited by 1 | Viewed by 1197
Abstract
This paper presents the findings of a study that used applied ethics to evaluate autonomous robotic systems practically. Using a theoretical tool developed by a team of researchers in 2017, which one of the authors contributed to, we conducted a study of four existing autonomous robotic systems in July 2020. The methods used to carry out the study and the results are highlighted by examining the specific example of ANYmal, an autonomous robotic system that is one component of the CERBERUS team that won first place in DARPA’s Subterranean Challenge Systems Competition in September 2021. Full article

Article
The European Commission’s Proposal for an Artificial Intelligence Act—A Critical Assessment by Members of the Robotics and AI Law Society (RAILS)
J 2021, 4(4), 589-603; https://doi.org/10.3390/j4040043 - 08 Oct 2021
Cited by 1 | Viewed by 2545
Abstract
On 21 April 2021, the European Commission presented its long-awaited proposal for a Regulation “laying down harmonized rules on Artificial Intelligence”, the so-called “Artificial Intelligence Act” (AIA). This article takes a critical look at the proposed regulation. After an introduction (1), the paper analyzes the unclear preemptive effect of the AIA and EU competences (2), the scope of application (3), the prohibited uses of Artificial Intelligence (AI) (4), the provisions on high-risk AI systems (5), the obligations of providers and users (6), the requirements for AI systems with limited risks (7), the enforcement system (8), the relationship of the AIA with the existing legal framework (9), and the regulatory gaps (10). The last section draws some final conclusions (11). Full article
Article
A Systems and Control Theory Approach for Law and Artificial Intelligence: Demystifying the “Black-Box”
J 2021, 4(4), 564-576; https://doi.org/10.3390/j4040041 - 27 Sep 2021
Cited by 1 | Viewed by 1012
Abstract
In this paper, I propose a conceptual framework for law and artificial intelligence (AI) that is based on ideas derived from systems and control theory. The approach considers the relationship between the input to an AI-controlled system and the system’s output, which may affect events in the real world. The approach aims to add to the current discussion among legal scholars and legislators on how to regulate AI, which focuses primarily on how the output, or external behavior, of a system leads to actions that may implicate the law. The goal of this paper is to show that not only is the system’s output an important consideration for law and AI but so too is the relationship between the system’s input and its desired output, as mediated through a feedback loop (and other control variables). In this paper, I argue that ideas derived from systems and control theory can be used to provide a conceptual framework to help understand how the law applies to AI, and particularly, to algorithmically based systems. Full article
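The input-output relationship mediated through a feedback loop that this abstract describes can be illustrated with a toy discrete-time simulation. The integrator plant and proportional gain below are illustrative assumptions, not the article's model.

```python
# Toy feedback-loop sketch (illustrative; not the article's model): the
# measured output is compared with a desired output (the reference), and the
# error is fed back as a corrective input.

def run_feedback(reference: float, steps: int = 50, gain: float = 0.5) -> list:
    """Simulate a proportional feedback loop and return the output trajectory."""
    output = 0.0
    trajectory = []
    for _ in range(steps):
        error = reference - output       # feedback: desired minus measured output
        control_input = gain * error     # proportional control law
        output += control_input          # simple integrator plant
        trajectory.append(output)
    return trajectory

trajectory = run_feedback(reference=1.0)
# the error shrinks geometrically each step, so the output settles at the reference
```

The legal relevance sketched in the abstract lies in exactly this structure: regulation can target not just the final `output`, but the loop that maps inputs and feedback into behavior.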

Article
Artificial Intelligence ante portas: Reactions of Law
J 2021, 4(3), 486-499; https://doi.org/10.3390/j4030037 - 06 Sep 2021
Cited by 1 | Viewed by 754
Abstract
Artificial intelligence and algorithmic decision-making cause new (technological) challenges for the normative environment around the globe. Fundamental legal principles (such as non-discrimination, human rights, and transparency) need to be strengthened by regulatory interventions. The contribution pleads for a combination of regulatory models (hard law and soft law); based on this assessment, the recent European legislative initiatives are analyzed. Full article
Article
The Question of Algorithmic Personhood and Being (Or: On the Tenuous Nature of Human Status and Humanity Tests in Virtual Spaces—Why All Souls Are ‘Necessarily’ Equal When Considered as Energy)
J 2021, 4(3), 452-475; https://doi.org/10.3390/j4030035 - 20 Aug 2021
Cited by 2 | Viewed by 877
Abstract
What separates the unique nature of human consciousness from that of an entity that can only perceive the world via strict logic-based structures? Rather than assume that there is some potential way in which logic-only existence is non-feasible, our species would be better served by assuming that such sentient existence is feasible. Under this assumption, artificial intelligence systems (AIS), which are creations that run solely upon logic to process data, even with self-learning architectures, should therefore not face the opposition they have to gaining some legal duties and protections insofar as they are sophisticated enough to display consciousness akin to humans. Should our species enable AIS to gain a digital body to inhabit (if we have not already done so), it is more pressing than ever that solid arguments be made as to how humanity can accept AIS as being cognizant to the same degree as we ourselves claim to be. By accepting the notion that AIS can and will be able to fool our senses into believing in their claim to possessing a will or ego, we may yet have a chance to address them as equals before some unforgivable travesty occurs betwixt ourselves and these super-computing beings. Full article