1. Introduction
Scholars have extensively discussed the emergence of a profitable digital afterlife industry (DAI), whose aim is to monetize the “digital remains” [1] of departed internet users [2]. The different services offered by such an industry include posthumous information management and messaging services, online memorials, and re-creation services, among others. The call for making the phenomenon of “digital afterlife” [3] a new field of study has been welcomed enthusiastically by scholars [4], who focus on the moral [5], sociological [6], economic, and legal facets of death online [7]. As regards jurists, for example, lawyers’ and notaries’ associations have published reports and issued recommendations on the panoply of legal issues involved [8]; bills have been presented in many jurisdictions [9], lawsuits have been filed against digital platforms and online providers, and so forth.
Advances in the fields of AI and humanoid robotics have turned such new scenarios as creating a digital rendition of the deceased into reality. Some readers may recall Black Mirror, the sci-fi series first aired in the UK in 2011, whose episode “Be Right Back” follows Martha, a young woman grieving her boyfriend’s death, as she discovers a digital service enabling her to connect with a chatbot version of her partner [10]. Digitally interacting with deceased people is no longer only the stuff of science fiction. Persuasive digital surrogates of the dead are popping up in real life, and the business is rapidly evolving; some researchers have predicted mainstream viability within a decade [11]. Among the cases widely covered by the media, it is worth mentioning James Vlahos’ “Dadbot”, i.e., a virtual representation of his deceased father [12]. Machine-learning techniques let chatbots simulate how dead people would react or interact by analyzing opinions and data recorded while those people were alive. However, what a legion of dear departed could think, say, or do today raises challenging normative issues, which are moral (e.g., the defuncts’ dignity), legal (e.g., whether the defuncts’ data are personal), or related to the governance of this flourishing industry.
Arguably, deceased people enjoy, so to speak, moral and legal rights, and yet it can be tricky to determine how such rights should be conceived of and enforced in the case of AI-driven memorial chatbots. The set of moral and legal rights includes protection for the personality rights and intellectual property of the deceased; their right of publicity; and safeguards grounded in defamation, inheritance, and confidentiality. Moreover, attention should be drawn to issues of personal data protection and privacy rights related to the use of DAI technologies, because posthumous information management and messaging services, online memorials, and re-creation services all rest on how software systems collect and process the defuncts’ data and how such data relate to the social life of the deceased. Whether the defuncts’ data should be considered personal data and hence fall under current provisions of data protection, with the corresponding data privacy rights, remains controversial indeed, as does the question of how such data-driven rights relate to the traditional protections of deceased persons’ moral and legal rights in criminal law and civil law, public law and private law, family law and intellectual property law, down to the law of contracts.
As a result, to grasp the normative impact of DAI technologies and, in particular, of AI-driven memorial chatbots, this paper’s analysis is divided into five parts, following the methodology illustrated in the next section.
2. Structure and Methodology
Scholars have widely discussed the moral “rights of the dead” [13,14]. This discussion must be appreciated against the legal rights that deceased people have enjoyed since ancient times. In the tradition of Roman law, for example, such legal rights revolved around the testament as the unilateral and personal act with which an individual disposed of their assets for the time after which they were no longer alive. How the deceased’s legal rights have evolved throughout the centuries is fundamental to appreciating not only the normative impact of technology but also moral discussions on the “rights of the dead” [15]. In distinguishing between hard and soft ethics [16], we have to determine whether the focus of moral discussions is on what should or should not be done against current legislation, or in spite of it (i.e., hard ethics), or whether such moral discussions intend to ameliorate today’s state of the legal art (i.e., soft ethics).
Accordingly, the first step of this analysis concerns the normative impact of DAI technologies and, in particular, of AI-driven memorial chatbots.
Section 3 focuses on the state of the art of patent registers, i.e., the public records in which technological inventions are registered, because there would not be any new issue related to the defuncts’ moral and legal rights without the corresponding technologies. The assumption is that new normative issues brought forth by the use of AI-driven memorial chatbots do exist.
To dissect this assumption, Section 4 examines the level of risk posed by the use of such memorial chatbots. Drawing on the ways in which AI systems function according to the description of patent registers, the risk can be appreciated in connection with the protection of the departed internet users’ personality rights and vis-à-vis the distinction between privacy and data protection. Certain uses of the technology could arguably be prohibited under the provisions of the Artificial Intelligence (AI) Act in EU law. And yet, the alignment of legal regulations with moral principles does not represent today’s norm.
Section 5 substantiates this claim by inspecting the open problems of the legal field, in particular, whether and to what extent dead people’s data shall be considered as personal data. Against the “data freedom” regime endorsed by several jurisdictions, from the Netherlands to the United Kingdom, the analysis dwells on the opposite thesis of scholars who support a full recognition of the defuncts’ data as personal data with the corresponding privacy rights. To prevent any simplification, however, Section 5 stresses that today’s “data freedom” does not entail that the personal identity of deceased people is simply at the mercy of the DAI’s interests. Several jurisdictions have either fully or partially extended current data protection safeguards to the deceased with the aim of complementing the defunct’s traditional rights in family law and contracts, intellectual property law, and criminal law.
Section 6 offers a cautionary tale through the case study of Italy’s law. By settling halfway between a “data freedom” approach and a full recognition of the defuncts’ data as personal data, Italy’s law risks having the defects of both legislative stances. On this basis, Section 7 discusses this paper’s approach to persisting differences between moral arguments and legal constraints, potential conflicts between individual rights and business interests, and tensions among multiple jurisdictions across the board. The overall aim is not to find a magic bullet for such potential conflicts and persisting differences, but rather to provide recommendations on efforts of coordination at the normative level.
Finally, the conclusions intend to justify the title of this paper: whether moral arguments, legal rules, and their interplay in the governance of DAI technologies properly tackle the normative challenges of AI-driven memorial services. Is there any peace after death?
Before any assessment of what human inventiveness has caused to become real, it is thus indispensable to preliminarily understand what technology makes possible, for better or worse. The assumption is not the neutrality of technology, i.e., as a simple means to attain whatever end. Instead, technology affects who we are, as well as what we know and what we do “onlife” [17]. Thus, the focus is next on patent registrations for chatbots.
3. Patent Registrations for Chatbots and the Personality Rights of Defunct People
On 31 March 2015, Google was awarded a patent concerning “methods and systems for robot-user interaction, where a robot may be programmed to operate according to a specific personality for a given user and may have a certain look and feel in the way of attributes which are unique or even idiosyncratic to that robot” (United States Patent no. 8996429B1 on “methods and systems for robot personality development”, filed on 30 April 2012, and granted on 31 March 2015).
The “specific personality” of the robot that users can choose refers to either imaginary or real-world people, including celebrities, or “a deceased loved one”. According to the patent description, human qualities or characteristics are provided in audio or video format, drawing on a dataset as much as on human–robot interaction (HRI), the surroundings, or the circumstances of the case. The robot can be linked to a cloud and access users’ devices. Once those users opt in to allow the robot to process their personal data, the robot checks history, lifestyle, emails, pictures, contact data, user accounts, preferences, call logs, calendars, and third-party applications. The robot can also infer further information through speech and face recognition technologies.
Users can select the robot’s personality with a simple command, or let the robot invent the character in accordance with cues or patterns it detects itself. Users can clone, store, transport, transfer, or share robot personalities, such as “mom”, “Gwynneth”, or “persona beta”, all around the world, simply by matching a local robot with the user’s home location robot. The HRI can be passive or proactive. Robots can either respond to the human user or initiate some conduct, including the ability to improvise and extrapolate, such as making inferences on the personality of “mom”, “Gwynneth”, or “persona beta”.
In December 2020, Microsoft similarly secured a patent for a system and method of creating a conversational chatbot of a specific person who may correspond to a past or present human being, so that users could interact with such an entity impersonated through robotic forms [18] (United States Patent no. 10853717B2, “creating a conversational chatbot for a specific person”, filed on 11 April 2017, and granted on 1 December 2020). The impersonation includes someone’s conversational and behavioral properties as much as demographic information. In the first case, conversational and behavioral attributes concern style, diction, tone, voice, intent, and sentence/dialogue length, with the related complexities of topics and consistency. In the second case, demographic information includes the age, gender, education, profession, income level, and relationship status of the digital interlocutor.
Technological performance, of course, hinges on data. As occurs with Google’s robot, Microsoft’s data repositories can refer to a dataset or HRI [19]. Depending on the user’s consent, the robot can search through the user’s devices, accounts, and apps. Whenever social data do not provide enough information, conversational data stores may supplement the personality index. All such data are used to train the chatbot to replicate the personality of a specific individual. An accurate voice font is created through a speech-synthesis algorithm applied to one or more voice recordings, while the image of the person is generated by a 3D-modelling algorithm that the chatbot employs to create a more immersive and interactive experience for the user. Robots can proactively improve or extrapolate information by using clues from the human with whom they are interacting or by hinging on the context, the surrounding environment, and all circumstances of the case. For example, by making inferences from text messages and recordings, the robot can determine how users would interact with a deceased “loved mom”.
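The data flow described in the patent, with a conversational data store supplementing the personality index when social data run short, can be rendered as a deliberately simplified sketch (all class, field, and threshold names below are hypothetical; the patent discloses no implementation):

```python
# Illustrative sketch of the data flow described in the Microsoft patent:
# social data feed a "personality index", and a conversational data store
# supplements it when social data alone are insufficient. All names and
# thresholds here are invented for illustration; the patent discloses no code.
from dataclasses import dataclass, field

@dataclass
class PersonalityIndex:
    samples: list = field(default_factory=list)   # raw conversational material
    sources: list = field(default_factory=list)   # provenance of the material

    def ingest(self, records, source):
        # Record both the material and where it came from, so the index
        # can later be audited for provenance.
        self.samples.extend(records)
        self.sources.append(source)

def build_index(social_data, conversational_store, minimum_samples=3):
    """Build a personality index, falling back on a conversational data
    store when the subject's own social data are too sparse."""
    index = PersonalityIndex()
    index.ingest(social_data, "social")
    if len(index.samples) < minimum_samples:
        index.ingest(conversational_store, "conversational_store")
    return index

index = build_index(["hi!", "see you soon"], ["as I always say..."])
print(index.sources)  # → ['social', 'conversational_store']
```

The point of the sketch is the fallback step: once the subject’s own data are too sparse, material from elsewhere flows into the same index, a feature whose normative consequences are taken up in the next section.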
Admittedly, animating deceased persons is nothing new. As far back as 2012, a hologram of the American rapper and actor Tupac Shakur, also known as 2Pac or Makaveli, was exhibited at the Coachella music festival, 16 years after his death. We may wonder what he would have thought of this performance and, for that matter, what we would think about our own holograms being exhibited without our consent.
All in all, the AI-driven DAI represents just one more challenge in the long evolution of personality rights protection throughout history. The good news is that current legal frameworks protect a person’s image, name, and reputation from being commercially exploited without their permission, even after death. The bad news is the loopholes in current provisions of data privacy, which trends in web data scraping and generative AI have made crystal clear [20]. All capabilities of AI-driven robots responding as someone we knew, such as Vlahos’ “Dadbot”, are unlocked by the processing of personal data. AI chatbots exploit databases full of information about individuals and use such data to improve their conversational capabilities. The aim is to predict what the targeted person would say or do in a wide range of circumstances. The goal is not to create a digital replica of what people were at the time of their death. Instead, by building upon the digital archive that a person left behind, with emails, texts, tweets, and even Snapchats, AI-driven chatbots develop the ability to “think” by themselves, forming new opinions and becoming an entity that refers to a real person but is not a mere replica. What the marvels of technology entail for the protection of people’s identity and the current laws on data protection may be problematic. The next section illustrates why.
4. The Troubles with Digital Identity
People might think it is not a bad idea to talk to a simulated beloved person who has passed away: it would resemble those traditional scenarios in which individuals buy souvenirs of the departed alongside burial ceremonies, flowers, cosmetics, services, and all sorts of other products. Conversations with a chatbot would not be all that different from such traditional scenarios as long as the deceased, and then their family members, consented to the collection and processing of the defunct’s data for empowering AI-driven memorial chatbots. It is worth mentioning that in EU law the AI Act, arguably among the strictest regulations across the board, does not consider the use of chatbots, pursuant to Recital 119 of the Regulation, as necessarily “high risk”. Instead, Article 50 aims to prevent users from being misled by the AI system with which they are interacting [21]. Correspondingly, users should be aware that they are interacting with a machine so that they can make informed decisions to either continue or step back from such an interaction and data processing. In particular, Article 50 establishes that “providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system”. The same holds under paragraph 4 for any “AI system that generates or manipulates image, audio, or video content constituting a deep fake”. Art. 50(5) specifies that the information referred to above shall “be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure”. As clarified by Recital 133, the purpose of such obligations is to tackle “new risks of misinformation and manipulation at scale, fraud, impersonation and consumer deception” and to restore “the integrity and trust in the information ecosystem”.
What is highly problematic is the lack of consent, i.e., when individuals never consented to what a robot or AI-driven chatbot says or does. We noted in the previous section that conversational chatbots are not simple replicas of individuals but rather their dynamic representation. Since personal data may not be good enough, or sufficient, to develop a personality index, such data are often integrated with crowdsourced data to fill any gaps. The system may infer from this heterogeneous dataset personality aspects that do not fit, or are not fully consistent with, the behavioral attributes of that person. Chatbots that end up saying things a person would never have said, providing users with unsettling conversations with the departed, may cause emotional stress comparable to going through the loss again.
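The gap-filling mechanism and its risk can be illustrated with a minimal, purely hypothetical sketch (the function, trait names, and fallback values are all invented for illustration; no provider’s actual method is implied):

```python
# Hypothetical illustration of the gap-filling problem: traits missing from
# the person's own data are filled with crowdsourced defaults, so the
# resulting profile mixes grounded and inferred attributes. Every trait name
# and value here is invented for illustration only.
def fill_profile(personal_data, crowdsourced_defaults):
    profile = {}
    for trait, default in crowdsourced_defaults.items():
        if trait in personal_data:
            profile[trait] = (personal_data[trait], "grounded")
        else:
            # Nothing in the person's own records supports this value.
            profile[trait] = (default, "inferred")
    return profile

personal = {"tone": "ironic"}
defaults = {"tone": "neutral", "humour": "slapstick", "politics": "centrist"}
profile = fill_profile(personal, defaults)
inferred = [t for t, (_, origin) in profile.items() if origin == "inferred"]
print(inferred)  # → ['humour', 'politics']
```

The inferred traits are exactly those the chatbot would exhibit without any basis in the person’s own records, which is why it may end up saying things the person would never have said.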
Scholars have stressed that the DAI has an incentive to alter the “informational bodies” of the dead in the name of profit [2]. However, the commodification of the defunct’s informational body not only affects their identity but also that of users. On the one hand, the moral rights and safeguards of human dignity related to the protection of the deceased are traditionally covered in the legal domain with such rights as the right to image or the right of publicity and reputation. We can extend to this context what Warren and Brandeis defined as the essence of the right to privacy, i.e., the “right to be let alone” even after death [22]. On the other hand, the protection of the moral and legal rights of users of AI-driven memorial chatbots can be understood in connection with their right to privacy because, in addition to the protection of people’s bodies, spaces, property, and communications, the right to privacy regards the protection of people’s self-development in their intellectual, decisional, associational, and behavioral dimensions [23].
It is against this framework that we can suspect that certain uses of the defuncts’ informational body may fall under Article 5.1 (a) or (b) of the AI Act in EU law. In the first case, the provisions of the law would be triggered when a chatbot “deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative… techniques”. In the second case, i.e., under Lett. (b) of Art. 5.1, tarnishing human memories may amount to an undue exploitation of “the vulnerabilities of a person” capable of “materially distorting the behaviour of that person… in a manner that causes or is reasonably likely to cause… significant harm”. In both cases, the result would be the ban of such uses of technology, preventing those chatbots from being placed on the market.
Complementing substantial safeguards for the protection of both the defuncts’ identity and living people’s autonomy, attention should be drawn to the crucial role that data protection plays in safeguarding such human identity and autonomy. Privacy and data protection are often understood as similar, or even as synonyms, yet they should be carefully differentiated [24]. Whilst the multiple dimensions of privacy can be summarized by referring to Hannah Arendt’s idea of individual “opaqueness”, the protection of personal data mostly revolves around the “transparency” with which data are collected, processed, and used [25]. It is not by chance that the protection of privacy can involve no data processing at all, for example, in cases of “unwanted fame” or “false light”. On the other hand, data can be protected regardless of any harm, prejudice, or privacy issue associated with their processing. Although the moral and legal spheres of privacy and data protection often overlap, this need not be the case. It is worth mentioning that, in EU law, the Charter of Fundamental Rights carefully distinguishes between the right to privacy (Art. 7) and the right to data protection (Art. 8).
Drawing on this basis, two issues of data protection must be differentiated. They are either ex ante or ex post. In the first case, the focus is on the legal basis for the processing of personal data: dealing with AI-driven memorial chatbots, attention must be drawn to the purpose of training the AI system, for example, to develop the personality index. In the second case, what is relevant involves the right to consent, and moreover, to withdraw such consent to end any personal data processing. Although matters of consent and the purposes of data processing do not raise particular challenges when the data subject is alive, the issue concerning who shall be entitled to safeguard the digital persona of the deceased can be controversial. Therefore, the aim of the next section is to stress the uncertainties of current law that are to the detriment of individuals and even market operators. From a legal viewpoint, most provisions are up to national lawmakers, which is also true in the field of EU law. By considering the ambiguities and drawbacks of today’s law, the moral challenges brought forth by DAI technologies will become more apparent. Several scholars claim that, against any “data freedom” legal regime, we should endorse a full recognition of the defuncts’ data as personal data with the corresponding privacy rights. Is there any middle way between such opposite conclusions?
5. Data Protection Beyond the Life of Individuals
In the EU legal system, data protection legislation does not explicitly address either privacy rights or data protection claims related to deceased individuals. The assumption regards both primary law (See Articles 8(1) ECHR and 7 and 8 EUCFR, which refer to “everyone” as the subject of rights, whereas the current interpretation of these texts is generally limited to living natural persons) and secondary law (i.e., data protection law and the national regulation of the Member States of the Union).
The European Court of Human Rights acknowledges that an individual has rights under the Convention, even after death (See ECtHR, Decision as to the admissibility of Application No. 1338/03 by the Estate of Kresten Filtenborg Mortensen v. Denmark, 15 May 2006). However, such protection has been granted only to living relatives, being qualified as a sort of right to respect their grief and memory [26]. On the contrary, the issue of personal data protection of deceased people has not yet been specifically addressed by the Court in Strasbourg.
Regarding secondary law, the 95/46/EC Directive on data protection does not mention the case of deceased data subjects. The 2016 GDPR has followed suit, because it does not apply “to the personal data of deceased persons” (Recital 27). According to an Opinion of the Article 29 Working Party, “information relating to dead individuals is therefore not to be considered as personal data”, although it “may still indirectly receive some protection” [27]. The EU Court of Justice has clarified that the individual rights granted to the data subject, such as the rights to information, access, and erasure, cannot be exercised after death, except where the data subject died after lodging the lawsuit and their heirs wish to continue the proceedings (Judgment of the Court of Justice, 27 November 2018, T-314/16, ECLI:EU:T:2018:841). The court’s doctrine does not entail that relatives cannot defend the rights connected to any abuse of the defunct’s personal data. Consider Pérez Gutiérrez v Commission (Judgment of the Court of Justice, 9 September 2015, T-168/14, ECLI:EU:T:2015:607), in which the parties discussed the unauthorized use of a picture of the plaintiff’s husband on tobacco package health warnings [28]. The EU Court acknowledged the potential personal damage to the widow for the violation of her right to respect for private and family life under Art. 7 CFR. However, the action was finally dismissed because it was not demonstrated that the picture corresponded to the husband’s portrait.
According to Recital 27 of the GDPR, however, “Member States may provide for rules regarding the processing of personal data of deceased persons”. This choice has been considered “consistent with the traditional principle according to which legislative policy decisions that affect family and succession law, as they are areas characterized by domestic values closely correlated with the traditions and culture of the state community of reference, fall outside the regulatory competence of the European Union” [29]. As a result, national governments have adopted manifold approaches [30]. Most European privacy laws endorse a “data freedom” approach, followed by countries that either adopt no provision concerning the processing of personal data of deceased persons or explicitly exclude the application of data protection law. This is the case in the Netherlands, Belgium, Austria, Finland, France, Sweden, Ireland, Cyprus, and the United Kingdom, which thus deny that deceased individuals might hold data privacy rights [31]. This view hinges on the “problem-of-the-subject” paradox [32]. The overall idea is that all rights related to and intertwined with the protection of the unique personal identity and dignity of individuals would extinguish with their death. Correspondingly, data controllers and the DAI might be free to process the data of deceased persons for training a chatbot without any guarantee of data protection law.
This legal approach seems inadequate in an information society. Some scholars argue that the personality rights of deceased individuals, or at least their personhood, persist as long as the interests of the deceased make it necessary [33]. The ontology of post-mortem existence rests on strands of information that are not unique to living human beings. Therefore, the ontology of post-mortem existence recommends granting a right to data privacy also to post-mortem personae [34,35]. This approach fits hand in glove with the Kantian view that the bona fama defuncti deserves protection because it should be conceived of as a “natural right” [36]. Since respect for the dead is generally considered a moral obligation across most cultures and jurisdictions, the moral and legal right to informational privacy after death, such as the right to personal immunity from unknown, undesired, or unintentional changes in one’s identity, must tackle posthumous harms [4,35,37].
However, today’s legal “data freedom” scenario does not entail that the personal identity of deceased people is simply at the mercy of the DAI’s interests. Data protection safeguards may not be applicable, and yet the tort of defamation, the right to one’s own name and image, or the right to keep correspondence confidential come into play. For example, some countries, e.g., Germany, apply inheritance law and the principle of universal succession to govern these cases (See BGH, 12.7.2018, Case III ZR 183/17, para. 21, according to which intangible goods and personal data, such as a social media account, are a part of the person’s estate, and thus heritable in accordance with the principle of universal succession). Other jurisdictions, such as Denmark and Estonia, provide that the GDPR applies to deceased persons. Such legislation establishes that either a person appointed by the data subject during their life, or a close relative, can exercise the data subject’s rights after death. In such jurisdictions, a chatbot that harms an individual’s dignity would likely infringe upon their personality rights even after death.
Different degrees of legal protection for deceased people can thus be granted. Some countries limit this protection to a time threshold of 10 years (Denmark) or 30 years (Estonia) after the subject’s death. In other jurisdictions, such as Italy and Spain, there is no time constraint. The fragmentation of the legal order, also but not only in EU law [38], warns against every simplification, whether one considers the nuances of legal regulations or their potential conflicts with moral opinions. This fragmentation can represent a formidable hurdle for DAI interests, but also a normative opportunity, because the non-divisibility of data and the compliance costs of multinational corporations dealing with multiple regulatory regimes may prompt the DAI to adapt itself to the strictest legal standards across the board [39].
To test how far these ideas go, the next section focuses on a case study, namely, the attempt of Italian law to govern this complex matter. Will there be some peace after death?
6. A Case Study
Italy set up data protection rights effective even after the data subject’s death well before the entry into force of the GDPR in 2018. Article 9.3 of the Italian Privacy Code (IPC) establishes three types of rightsholder with a right to access the personal data related to a deceased individual, namely, (a) those who have an interest of their own, (b) those appointed by the deceased as agents to protect their interests (the provision must be understood in connection with the social network platforms’ practice of appointing a ‘legacy contact’ in the event of death; scholars present such an entrustment as a ‘mandato ad mortem exequendum’, meaning an agency agreement whose effects follow the death of the ‘principal’, i.e., the data subject), and (c) those who act for family reasons worthy of protection (IDPA 17 July 2008). Heirs were thus also, but not the only, subjects entitled to exercise the right to access.
Family reasons, of course, played a major role. In its ruling of 2 March 2021, the District Court of Milan found the applicant widow’s desire to transmit the memory of her husband to her two underage daughters worthy of protection because of the significant emotional content of the photos and videos stored in the father’s iCloud account.
As regards the case law on Article 9 IPC, the Italian Supreme Court distinguished the enforceability of the right to access, including by third parties, from the further rights that the Italian data protection law grants to living human beings, such as the right to erasure or to keep data updated. Such rights were not enforceable by anyone else after the data subject’s death (Italian Supreme Court, 27 March 2020, no. 7559).
However, the framework was amended by Legislative Decree no. 101/2018. Under the new Article 2 terdecies, the right to enforce the personal data rights of deceased individuals is no longer limited to the right to access. Indeed, such rights are explicitly extended to cover all rights referred to from Art. 15 to Art. 22 of the GDPR. This means, for example, that all rightsholders can now exercise the right to be forgotten or obtain an order of withdrawal of personal data, also after the data subject’s death. Since Art. 2 terdecies is considered a mandatory provision of law, freedom of contract may not prevent rightsholders from exercising the rights under the GDPR and the IPC. Yet, it remains unclear whether such rights are granted iure hereditatis or iure proprio, that is, as an heir’s right or as a right of one’s own. Most scholars opt for the second interpretation [40,41], which covers those cases, such as the withdrawal of personal data, where rightsholders have no direct interest (District Court of Rome, 21 November 1996).
Despite the good intentions of Italian lawmakers, the regulatory framework presents several shortcomings that add to the uncertainties of the iure hereditatis or iure proprio regimes. Such drawbacks are either endogenous or exogenous. The endogenous drawbacks regard the legislation’s reactive, rather than proactive, approach. Although Art. 2 terdecies empowers some individuals with the right to react to the processing of the defunct’s data, which may potentially affect their identity and dignity, the law does not grant such rightsholders the right to consent to any data processing ex ante. In fact, the Italian data protection law does not empower the post-mortem rightsholders to authorize the use of the deceased subject’s data. It may be argued that dead people’s data are not personal, in accordance with the provisions of the GDPR. Yet, it is unclear why rightsholders should wait until the infringement of the defunct’s dignity materializes before they can legally act.
The reactive approach of the Italian legislation is confirmed by other provisions of the IPC, such as Art. 110 concerning the processing of personal data for medical research purposes. If consent cannot be obtained because the data subject is deceased or otherwise unreachable, the data controller need not turn to the heirs, relatives, or other legitimate subjects for their authorization. Instead, Italian law admits the data processing without any consent or authorization, the only condition being the adoption of “appropriate measures to protect the rights, freedoms and legitimate interests of the data subject” (IDPA, doc. web no. 9124510, available at https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9124510, accessed on 15 May 2025). Although this provision on the reuse of data may seem sound and legitimate, it triggers a further fragmentation of the legal system: Art. 110 not only limits the rightsholders’ powers under Art. 2 terdecies, but must also be coordinated with the restrictions on the reuse of personal data for medical research purposes pursuant to Art. 110 bis [42].
The limits of the legislation are, on the other hand, exogenous when they stem from the overlapping of different rights and interests protected by law. For example, the rightsholders entitled to enforce the posthumous personal data protection rights under Art. 2 terdecies IPC may not be the same rightsholders entitled to claim a violation of the overlapping rights under Art. 93 of the Italian Copyright Law (ICL). Art. 93 establishes that “correspondence, letters, collections of letters, family and personal memoirs and other writings of a similar nature, having a confidential character or associated with the intimacy of private life, may not be published, reproduced, or in any manner brought to the knowledge of the public without the consent of the author and, in the case of correspondence and letters, also the consent of the person to whom they are addressed”.
This provision is relevant to conversational chatbots that collect data from various sources, including emails, text messages, social media posts, and letters, because it may overlap with the safeguards of data protection law by protecting the same interests in confidentiality and privacy. After the death of the copyright holder, in fact, Art. 93 ICL requires “the consent of the spouse and children or, if none exists, the consent of the parents…; if there is no spouse, child or parent, the consent of the brothers and sisters or, if none exists, the consent of the direct ascendants and descendants up to the fourth degree”. This means, however, that, under Italian copyright law, posthumous rights are granted only to heirs, whereas Art. 2 terdecies IPC includes every subject who has a personal or familial interest or represents the deceased. The same conflict of laws is at work with the unauthorized display, reproduction, or commercial distribution of a person’s portrait under Art. 96 ICL. After the death of the portrayed person, the standing to enforce the right to one’s own image belongs to the heirs and does not coincide with the entitlement to enforce posthumous personal data protection rights.
Similar considerations hold when the use of personal data triggers the application of the right of publicity or the tort of defamation. Settled case law acknowledges that only relatives are the “passive subjects” of the tort of defamation under Art. 597 paragraph 3 of the Italian criminal code when the offense is addressed to the memory of a deceased individual. The provision rests on the relatives’ feelings towards the deceased family member, which are directly prejudiced by offenses against the defunct’s honor and dignity.
The conclusions we can draw from the Italian case study suggest a cautionary tale. By settling halfway between a “data freedom” regime and a full recognition of the defuncts’ data as personal data with corresponding privacy rights, the Italian legal system risks upholding the defects of both approaches. We noted both the lack of necessary safeguards that characterizes the data freedom regime and the intricacy of a legal network that must take into account the different interests, rights, and principles at stake in criminal law and civil law, public law and private law, family law and intellectual property law, and data privacy and data protection, down to the law of contracts. Would the full recognition of the defuncts’ data rights be the one-size-fits-all solution?
7. Discussion
Legal systems are expected to protect the defuncts’ personality rights vis-à-vis the ongoing development of hyper-realistic chatbots representing living and departed persons [43]. National laws traditionally governing individuals’ personality rights, such as the right to one’s own name and image, or the tort of defamation, have been progressively complemented with provisions on personal data processing. This evolution of the legal framework has gone hand-in-hand with the increasing dependence of human societies on data and information as their vital resource, since the law has progressively been transformed into a matter of access to and control over data and information in digital environments [17]. As a result, how the law intends to govern issues of data protection and privacy related to the 0s and 1s remains of departed internet users critically affects, and should be understood in connection with, the traditional protection of the defuncts’ rights offline.
The protection of the digital remains of departed internet users poses several critical problems [44]. This paper draws attention to the discretionary powers of the EU Member States in bringing the processing of the defuncts’ data either under or out of the GDPR umbrella, thereby triggering an extremely fragmented legal framework. The overlap between national laws traditionally governing personality rights and the EU personal data framework prompts further inconsistencies. Such discrepancies involve not only the legal domain but also potential conflicts between legal constraints and moral arguments.
We have noted that respect for the dead is conceived of as a moral obligation across most cultures and jurisdictions. A moral right to personal immunity from unknown, undesired, or unintentional changes in one’s identity should reasonably be endorsed by all legal systems to tackle posthumous harms. This endorsement appears especially urgent in those legal systems supporting a data freedom approach, in which traditional safeguards for the defuncts’ rights to name and image, or the tort of defamation, inheritance, and intellectual property, may fall short in coping with the normative challenges of AI-driven memorial chatbots. What the German Constitutional Court framed in terms of “informational self-determination” in the 1983 Volkszählungs-Urteil (“census decision”) can be extended to this context, strengthening the traditional framework of basic rights associated with the physical body of individuals, their habeas corpus, with the principle of habeas data [45]. Among the first and most striking cases of the limitations of law in this scenario, this paper stresses the impossibility of accessing the internet accounts of loved ones once they have passed away [40,41].
However, whether one grants a right to privacy to post-mortem personae [35] or supports the Kantian bona fama defuncti as a human right [36], it is our contention that moral experts should pay attention to a critical distinction in law. In addition to the evergreen debates on the content of legal provisions that aim to directly govern social or individual behavior, that is, the debates between advocates of legal positivism and supporters of the natural law tradition, attention must be drawn to the further set of rules through which legal provisions are created, amended, or extinguished [46]. Reversing the Kantian idea that a nation of devils can establish a state of good citizens if they “have understanding” [47], we can say that even a nation of angels would need the law for coordination and collaboration [48]. The claim can be clarified with this paper’s case study on the protection of the digital remains of departed internet users under Italian law.
Against the backdrop of EU data protection law, Italian law provides some data protection safeguards after an individual’s death, yet the rightsholders entitled to enforce such rights do not coincide with those entitled to claim other rights of the deceased. The risk is a possible conflict between different people who each have a legitimate interest in the protection of the defunct’s rights but who nevertheless disagree on how to manage the defunct’s data. The lack of normative coordination between, for example, data protection rights and the right of publicity, or data privacy rights and copyrights over email exchanges, is highly problematic because such rights have the same nature, i.e., they are personality rights that, moreover, represent a constitutive element of a more general right to our personal identity [40,41]. It is not by chance that their scopes of protection overlap: a personal image or portrait falls under copyright law, but it also constitutes personal data under Article 4 GDPR, so that the unauthorized use of portraits and personal images may be challenged under different laws almost interchangeably and yet confusingly.
Taking into account the drawbacks identified in this paper’s case study, three recommendations follow. First, attention must be drawn to the cases of posthumous harm that DAI technologies, with their AI-driven memorial chatbots, may bring forth. In accordance with the tenets of hard and soft ethics [16], the moral assessment of legal provisions should determine what should or should not be done against current legislation or in spite of it (i.e., hard ethics), or whether it is sufficient to ameliorate today’s regulations (i.e., soft ethics). The overall aim of the law should be to properly tackle cases of posthumous harm in accordance with the principle of informational self-determination, whether individuals are alive or dead.
The second recommendation has to do with the legal distinction between norms of conduct and norms of change [46,49]. It is not enough to fill the gaps in the law brought forth by DAI technologies with their AI-driven memorial chatbots. Although amendments to tackle posthumous harms, which follow unknown, undesired, or unintentional changes in human identity, are more than welcome in accordance with moral arguments [50], such legal amendments should be coordinated with current provisions on the protection of the rights of the dead. Coordination must be grasped in this case as formal, rather than substantial, because the legal rules of change and adjudication should establish and implement standards and procedures for the proper development, use, and management of DAI technologies under the principle of habeas data, thereby complementing the traditional protection of habeas corpus rights “onlife” [17].
Such coordination also holds for those jurisdictions, if any, in which an appropriate protection of the deceased individuals’ data rights supplements the safeguards that the law has traditionally granted to defunct individuals in criminal law, family law, or intellectual property law. The third recommendation of this paper thus regards a common legal framework either at the regional level, for example, in EU law, or at the international level. The lack of coordination among legal orders is critical considering today’s industry efforts to monetize the 0s and 1s remains of departed internet users through practices such as the web crawling techniques at the heart of LLMs [20]. An increase in personality rights infringements should be expected given the digital afterlife industry’s readiness to turn such data into a new and appetizing earning opportunity [2]. A coherent legal framework for the protection of personal data after death is urgently needed and may paradoxically also benefit the afterlife industry, due to the non-divisibility of data and the compliance costs of their digital services.
The recommendations of this paper, however, aim not only to address today’s legal fragmentation but also to take into account the different functions of moral arguments (i.e., soft and hard ethics) and the different types of legal rules that come into play (i.e., rules of conduct and rules of adjudication and change), whose interplay is crucial in the governance of DAI technologies and AI-driven memorial services (i.e., policies, standards, and procedures). Against this backdrop, we can return to the question posed by this paper’s title: will there be any peace after death?