AI, Authorship, Copyright, and Human Originality
Definition
1. Introduction
2. The MATH-COPE Framework
- Moral rights: including integrity, attribution, and posthumous exploitation.
- Authorship and originality: understanding the role of human creativity in the age of AI tools.
- Training data and copyright: determining whether large-scale text and data mining constitutes infringement or falls within fair use and exception doctrines.
- Human originality: the protection of the semantic and stylistic identity of authors in the context of artificial intelligence reproduction.
- Commercialisation: the market structures and remuneration mechanisms for human work in AI-based economies.
- Organisational practice: how industry actors, publishers, and platforms are integrating AI into their operational workflows.
- Policy and governance: the national and international regulatory frameworks, the Berne Convention being taken as the reference standard.
- Ethical technology: accountability, transparency, and the developer’s responsibilities in AI design.
3. The Legal Frameworks of Germany, the UK, and the USA
3.1. United Kingdom
3.1.1. Statutory Basis
3.1.2. Doctrinal Stress Points
3.2. United States
3.2.1. Statutory Basis
3.2.2. Doctrinal Stress Points (US)
3.3. Germany
3.3.1. Statutory Basis
3.3.2. Doctrinal Stress Points (DE)
4. The Relevant Literature and Emergent Issues
4.1. Moral Rights
4.2. Authorship and Originality
4.3. Training Data and Copyright
4.4. Human Originality
4.5. Synthesis and Gap Analysis
5. Stakeholder Perspectives
6. Integrating Evidence: Towards a Global Copyright Framework
- Moral Rights: Moral rights are not adequately protected, especially in the posthumous context. UK and US law offer little protection against style or voice imitation. German law is somewhat better, but it is still human-centric and analogue in design.
- Authorship and Originality: Courts and legislators require human authorship but offer no guidance for works that involve human and AI collaboration. This puts creators and publishers in a state of uncertainty.
- Training Data: There is no agreement across jurisdictions on whether large-scale scraping for model training is allowed. Fair use in the US, TDM in the UK, and statutory exceptions in Germany remain ambiguous, especially now that AI tools replicate semantics as well as syntax.
- Human Originality: Interviewees emphasised the threat of semantic replication to the originality of artistic voice, and the literature emphasised the loss of cultural and economic value.
- Commercialisation: Market players need clear licensing and remuneration mechanisms to avoid undermining human creators.
- Organisational Practice: Publishers and platforms are using AI tools without regulatory certainty, creating competitive asymmetries.
- Policy and Governance: Policymakers are not well equipped to apply Berne principles to an AI-enabled environment, and the result is fragmented approaches.
- Ethical Technology: Developers are asking for collective ownership and open standards to reduce risks of style and identity copying.
7. The Draft Global Copyright Framework
1. Definition of “Work”: A work is defined as the combination of semantic and syntactic elements created through human originality, thereby precluding AI outputs from claiming authorship in their own right.
2. Authorship Attribution: AI-assisted works remain protected where human authorship is retained; AI-only works are not.
3. Training Data Licensing: A licensing regime for AI training data is introduced, including provisions for the remuneration of rights holders.
4. Revenue Sharing: Compensation models are proposed for creators whose works form training datasets and for works that replicate protected styles.
5. Moral Rights: Attribution and integrity rights are extended to explicitly include AI-generated copies of style, voice, and likeness.
6. Transparency Obligations: Disclosure of the use of AI in creative works is required, including the datasets and models used.
7. Posthumous Protection: The framework protects against the use of a deceased person’s likeness, voice, or style without the permission of heirs or estates.
8. Educational Exceptions: Limited exceptions for academic and non-commercial research are subject to strict transparency and non-replication guarantees.
9. Technological Measures: Watermarking, attribution tools, and audit systems are promoted to make these rights enforceable.
10. International Harmonisation: The framework conforms with Berne principles while introducing AI-specific standards that can be implemented multilaterally.
11. Implementation and Enforcement: Cross-border cooperation, dispute resolution, and sanctions are provided to ensure compliance.
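The technological measures envisaged in Article 9 presuppose some machine-verifiable link between a work and its attribution metadata. As a purely illustrative sketch (not part of the framework text, and using a hypothetical shared registry key), a minimal signed provenance record might look like this in Python:

```python
import hashlib
import hmac
import json

# Hypothetical key held by a registry or rights-management body (illustrative only).
SECRET_KEY = b"registry-demo-key"

def make_provenance_record(work_bytes: bytes, author: str, ai_assisted: bool) -> dict:
    """Bind a work's content hash to attribution metadata and sign the result."""
    payload = {
        "sha256": hashlib.sha256(work_bytes).hexdigest(),
        "author": author,
        "ai_assisted": ai_assisted,
    }
    canonical = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()
    return payload

def verify_provenance_record(work_bytes: bytes, record: dict) -> bool:
    """Check that the work matches the recorded hash and the signature is intact."""
    payload = {k: v for k, v in record.items() if k != "signature"}
    canonical = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()
    return (hashlib.sha256(work_bytes).hexdigest() == record["sha256"]
            and hmac.compare_digest(expected, record["signature"]))

work = b"An original human-authored manuscript."
record = make_provenance_record(work, author="Example Author", ai_assisted=False)
assert verify_provenance_record(work, record)          # untampered work verifies
assert not verify_provenance_record(b"Altered copy.", record)  # tampering is detected
```

Real deployments would use public-key signatures and standardised provenance formats (for example, C2PA-style manifests) rather than a shared secret; the sketch only shows that attribution claims can be bound verifiably to content hashes, which is what audit and enforcement mechanisms would inspect.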
8. Implementation Pathways: Registries, Provenance, and Redress
8.1. Registries and Permissions
8.2. Provenance and Labelling
8.3. Audits and Access
8.4. Redress and Remedies
8.5. Protecting SMEs and Non-Profits
9. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Turvill, W. Elton and Paul in harmony over the dangers of AI. Sunday Times Business and Money. 26 January 2025. Available online: https://www.thetimes.com/business-money (accessed on 27 January 2025).
- Neubauer, A. AI and Authorship Redefined—Towards a Global Copyright Framework for Commerce and Human Originality. Ph.D. Thesis, University of Gloucestershire, Cheltenham, UK, 2025. [Google Scholar] [CrossRef]
- United States Court of Appeals, Ninth Circuit. Naruto v. Slater, 888 F.3d 418. 2018. Available online: https://casetext.com/case/naruto-v-slater-2 (accessed on 3 November 2025).
- United States Court of Appeals for the District of Columbia. THALER v. PERLMUTTER, No. 23-5233. 2025. Available online: https://media.cadc.uscourts.gov/opinions/docs/2025/03/23-5233.pdf (accessed on 3 November 2025).
- US Copyright Office. Zarya of the Dawn Letter. 2023. Available online: https://www.copyright.gov/docs/zarya-of-the-dawn.pdf (accessed on 29 October 2025).
- Ginsburg, J.C.; Kernochan, J. One Hundred and Two Years Later: The U.S. Joins the Berne Convention. Columbia Law School Scholarship Archive. 13 Colum.-VLA J. L. & Arts 1. 1988. Available online: https://scholarship.law.columbia.edu/faculty_scholarship/379 (accessed on 29 October 2025).
- Burk, D.L. Thirty-Six Views of Copyright Authorship, by Jackson Pollock. Hou. L. Rev. 2020, 58, 263. [Google Scholar]
- Sundara Rajan, M.T. Moral rights: The future of copyright law? J. Intellect. Prop. Law Pract. 2019, 14, 257–258. [Google Scholar] [CrossRef]
- Hook, S. Moral Rights, Creativity, and Copyright Law: The Death of the Transformative Author; Routledge: Abingdon, UK, 2024. [Google Scholar] [CrossRef]
- Abbott, R. We, the Robots? Regulating Artificial Intelligence and the Limits of the Law. Int. Comp. Law Q. 2023, 72, 272–273. [Google Scholar] [CrossRef]
- Boyden, B.E. Emergent Works. Columbia J. Law Arts 2016, 39, 377. [Google Scholar]
- US Supreme Court. Feist Publications, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340. 1991. Available online: https://supreme.justia.com/cases/federal/us/499/340/ (accessed on 25 October 2025).
- Bently, L. The UK’s Provisions on Computer-Generated Works: A Solution for AI Creations? (ECS International Conference). 2018. Available online: https://europeancopyrightsocietydotorg.files.wordpress.com/2018/06/lionel-the-uk-provisions-on-computer-generated-works.pdf (accessed on 15 October 2025).
- Dornis, T.W.; Stober, S. Urheberrecht und Training generativer KI-Modelle—Technologische und juristische Grundlagen. In Recht und Digitalisierung. Digitalization and the Law; Nomos Verlag: Baden-Baden, Germany, 2024; Volume 19. [Google Scholar] [CrossRef]
- Bäcker, K.; Feindor-Schmidt, U. The destruction of copyright—Are jurisprudence and legislators throwing fundamental principles of copyright under the bus? J. Intellect. Prop. Law Pract. 2021, 16, 41–55. [Google Scholar] [CrossRef]
- OLG Hamburg, Judgment of 5 November 1998 (Case No. 3 U 212/97): Attribution-of-Authorship Case (“Verletzerzuschlag”, infringer’s surcharge). Available online: https://dejure.org/dienste/vernetzung/rechtsprechung?Gericht=OLG%20Hamburg&Datum=05.11.1998&Aktenzeichen=3%20U%20212%2F97 (accessed on 27 September 2025).
- Lucchi, N. Generative AI and Copyright: Training, Creation, Regulation; European Parliament, Policy Department for Justice, Civil Liberties and Institutional Affairs: Brussels, Belgium, 2025; ISBN 9789284828395. [Google Scholar]
- Zurth, P. Artificial Creativity? A Case Against Copyright Protection for AI Generated Works. UCLA J. Law Technol. 2021, 25. Available online: https://ssrn.com/abstract=3707651 (accessed on 12 October 2025).
- Bonadio, E.; McDonagh, L. Artificial intelligence as producer and consumer of copyright works: Evaluating the consequences of algorithmic creativity. Intellect. Prop. Q 2020, 2, 112–116. [Google Scholar]
- Shakir, U. Google’s Invisible AI Watermark Will Help Identify Generative Text and Video. The Verge. 2024. Available online: https://www.theverge.com/2024/5/14/24155927/google-ai-synthid-watermark-text-video-io (accessed on 15 November 2025).
- Ahuja, V. Artificial Intelligence and Copyright Issues and Challenges. 2023. Available online: https://www.researchgate.net/publication/372418545_ARTIFICIAL_INTELLIGENCE_AND_COPYRIGHT_ISSUES_AND_CHALLENGES (accessed on 12 November 2025).
- Craig, C.J.; Kerr, I.R. The Death of the AI Author. Ottawa Law Review 2021, 52, 33–86. Available online: https://digitalcommons.osgoode.yorku.ca/cgi/viewcontent.cgi?article=3984&context=scholarly_works (accessed on 23 October 2025).
- Ramalho, A. Will Robots Rule the (Artistic) World? A Proposed Model for the Legal Status of Creations by Artificial Intelligence Systems. Maastricht Univ. Fac. Law 2017. Available online: https://ssrn.com/abstract=2987757 (accessed on 9 October 2025).
- Schere, E. Where is the Morality? Moral Rights in International Intellectual Property and Trade Law. 2018. Available online: https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=2703&context=ilj (accessed on 17 September 2025).
- Gervais, D. Is Intellectual Property Law Ready for Artificial Intelligence? GRUR Int. 2020, 69, 117–118. [Google Scholar] [CrossRef]
- Bridy, A. Coding Creativity: Copyright and the Artificially Intelligent Author. Stanf. Technol. Law Rev. 2012, 5, 1–28. [Google Scholar]
- Deltorn, J.-M.; Macrez, F. Authorship in the Age of Machine learning and Artificial Intelligence. In The Oxford Handbook of Music Law and Policy; O’Connor, S.M., Ed.; Oxford University Press: Oxford, UK, 2019; Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3261329 (accessed on 9 October 2025).
- Aziz, A. Artificial Intelligence Produced Original Work: A New Approach to Copyright Protection and Ownership. Eur. J. Artif. Intell. Mach. Learn. 2023, 2, 9–16. [Google Scholar] [CrossRef]
- Guadamuz, A. Artificial Intelligence and Copyright: The rise of the Machines Is Here, but They Do not Come as Conquerors, they Come as Creators. 2017. Available online: https://www.wipo.int/wipo_magazine/en/2017/05/article_0003.html (accessed on 12 October 2025).
- Germany, Hamburg District Court, Case No. 310 O 227/23, Kneschke v. LAION. 27 September 2024. Available online: https://openjur.de/u/2495651.html (accessed on 9 September 2025).
- Samuelson, P. Allocating Ownership Rights in Computer-Generated Works. 1986. Available online: https://www.semanticscholar.org/paper/Allocating-Ownership-Rights-in-Computer-Generated-Samuelson/e9da9f9f15834d3149ea46fc4d929c193b9e2442 (accessed on 12 October 2025).
- Getty Images v. Stability AI. High Court of Justice (London). 2023. Available online: https://www.judiciary.uk/wp-content/uploads/2025/01/Getty-Images-and-others-v-Stability-AI-14.01.25.pdf (accessed on 15 November 2025).
- Bartz v. Anthropic. U.S. District Court, Northern District of California. 2025. Available online: https://copyrightalliance.org/wp-content/uploads/2025/06/Bartz-v.-Anthropic-Order.pdf (accessed on 30 August 2025).
- Kretschmer, M.; Meletti, B.; Bently, L.; Cifrodell, G.; Eben, M.; Erickson, K.; Iramina, A.; Li, Z.; Mcdonagh, L.; Perot, E.; et al. Copyright and AI in the UK: Opting-In or Opting-Out? GRUR Int. 2025, 74, 1055–1070. [Google Scholar] [CrossRef]
The MATH-COPE analytical grid (columns: MATH themes, i.e., the legal issues; rows: COPE dimensions, i.e., the social and economic context):

| MATH themes / COPE dimensions | Moral Rights | Authorship & Originality | Training Data & Copyright | Human Originality |
|---|---|---|---|---|
| Commercialisation | | | | |
| Organisational Practice | | | | |
| Policy & Governance | | | | |
| Ethical Technology | | | | |
| MATH/ COPE | Moral Rights (M) | Authorship and Originality (A) | Training Data (T) | Human Originality (H) |
|---|---|---|---|---|
| Commercialisation (C) | UK/US: Weak moral rights; no robust posthumous safeguards. DE: Stronger, but analogue and not tailored to AI imitation. | UK: Section 9(3) allows “computer-generated works,” bypassing originality. US: Strict human authorship baseline. DE: Only “personal intellectual creation”. | UK: Limited TDM exception, uncertain for semantic training. US: Fair use unresolved for large-scale scraping. DE: Exceptions not designed for AI scale. | Market risk: semantic imitation undermines distinctiveness; economic erosion across all three systems. |
| Organisational Practice (O) | Attribution gaps for reused styles/voices across jurisdictions. | Hybrid authorship thresholds unclear; no guidance for “meaningful human control”. | Dataset opacity; lack of provenance across publishers and platforms. | Style/voice replication adopted in workflows without safeguards; creators displaced. |
| Policy and Governance (P) | Fragmented: UK/US offer minimal posthumous protection; DE more robust but nationally limited. | No harmonised doctrine on AI-assisted works; Berne is silent on hybrids (i.e., no explicit rule). | No cross-border clarity: US (fair use), UK (TDM), and DE (UrhG exceptions) diverge. | No international enforcement standard for semantic/style replication; forum-shopping risk. |
| Ethical Technology (E) | Persona/voice can be cloned without consent. | AI outputs may simulate creativity without human authorship. | Risks of memorisation and leakage from opaque models. | Semantic mimicry threatens authenticity and cultural trust in creative expression. |
| MATH/ COPE | Moral Rights (M) | Authorship and Originality (A) | Training Data (T) | Human Originality (H) |
|---|---|---|---|---|
| Commercialisation (C) | Lack of enforceable attribution in AI-generated media ([6,8,14,16,21]). | Unclear ownership of commercially valuable AI outputs ([3,4,6,9,22,23,24,25,26,27]). Cases: Naruto v. Slater (2018) [3]; Zarya of the Dawn (2023) [5]. | Unlicensed monetisation of large-scale training datasets ([13,16,24,26,33]). Cases: Getty v. Stability AI (pending) [32]; Kneschke v. LAION (2024) [30]; Bartz v. Anthropic (2025) [33]. | Market displacement of human creators through stylistic mimicry ([13,14,27,33]). |
| Organisational Practice (O) | Moral rights are ignored in AI production workflows ([6,7,8,16]). | Authorship attribution bypassed by automated pipelines ([4,22,23,24,26]). Case: Zarya of the Dawn (2023) [5]. | No metadata standards for source traceability ([13,16,24,33]). | Style extraction decouples originality from authorship ([13,14,27]). |
| Policy and Governance (P) | Legal uncertainty around rights to control AI-driven imitations ([11,14,16]). | No unified definition of AI-assisted or AI-generated authorship (Naruto v. Slater, 2018 [3]; Zarya of the Dawn (2023) [5]). | Fragmented national rules on TDM and fair use ([13,16,24,26]). Cases: Getty v. Stability AI [32]; Kneschke v. LAION [30]; Bartz v. Anthropic [33]. | Style and voice are excluded from originality criteria ([11,13,14,16]). |
| Ethical Technology (E) | AI emulates reputational identity without creator consent ([6,8,14,16]). | No accountability for fabricated authorial intent ([22,23,24]). | Semantic learning lacks safeguards for sensitive data ([13,16,24,26]). | Human uniqueness blurred by high-fidelity simulation ([13,14,27]). |
| MATH/ COPE | Moral Rights (M) | Authorship and Originality (A) | Training Data (T) | Human Originality (H) |
|---|---|---|---|---|
| Commercialisation (C) | Creators fear erosion of attribution and integrity rights; industry wants clarity to protect markets. | Legal experts reject AI as a co-author to preserve market certainty; industry demands predictable attribution. | Stakeholders demand licensing rails with remuneration to prevent unfair competition. | All groups warn of market dilution and loss of cultural value through AI-driven imitation. |
| Organisational Practice (O) | No attribution safeguards in AI workflows; moral identity sidelined in production. | Authorship unverifiable in automated pipelines; creators demand human-in-the-loop criteria. | Developers admit opacity in data provenance; industry seeks standardised licensing mechanisms. | Creators report displacement through style/voice replication adopted in workflows. |
| Policy and Governance (P) | Lawyers stress the lack of harmonised safeguards for moral/personality rights. | Legal experts call for doctrinal clarity: AI = tool, not author. | Policymakers lag behind: fragmented exceptions (fair use, TDM) leave gaps. | No enforcement standards for semantic originality; all groups stress governance deficit. |
| Ethical Technology (E) | Voice/style imitation is seen as unethical exploitation; posthumous misuse is especially criticised. | Stakeholders oppose “co-authorship” fictions; creators stress human originality as an ethical baseline. | Developers acknowledge the need for transparency and liability frameworks. | AI mimicry undermines authenticity and trust; strong ethical concern across all groups. |
| MATH/ COPE | Moral Rights | Authorship | Training Data | Human Originality |
|---|---|---|---|---|
| Commercialisation | posthumous rights gap | AI ≠ author (market clarity) | unlicensed dataset inputs | market dilution of creators |
| Organisational Practice | attribution missing in workflows | unclear thresholds for human control | opaque provenance/no metadata | style/voice replication adopted in pipelines |
| Policy and Governance | fragmented protection regimes | Berne is silent on hybrids (i.e., no explicit rule). | TDM vs. fair use ambiguity | no international harmonisation |
| Ethical Technology | ethical misuse of persona/voice | deceptive representation risks | semantic copying without consent | identity mimicry/erosion of trust |
| MATH/ COPE | Moral Rights (M) | Authorship and Originality (A) | Training Data (T) | Human Originality (H) |
|---|---|---|---|---|
| Commercialisation (C) | Art. 4 Revenue Sharing—ensures creators are remunerated when AI outputs replicate protected styles/voices. | Art. 2 Authorship Attribution—keeps market clarity: only humans are rights holders, AI remains a tool. | Art. 3 Training Data Licensing—mandatory licenses stabilise markets, prevent free-riding. | Art. 1 Definition of “Work”—preserves economic value of human originality by excluding AI-only outputs. |
| Organisational Practice (O) | Art. 5 Moral Rights—obliges platforms to respect attribution/integrity, even in AI-assisted outputs. | Art. 2 Authorship Attribution—sets “human-in-the-loop” criteria for industry workflows. | Art. 6 Transparency Obligations—requires disclosure of datasets/models in outputs. | Art. 9 Technological Measures—watermarking/provenance tools support compliance in publishing pipelines. |
| Policy and Governance (P) | Art. 7 Posthumous Protection—fills doctrinal gaps in Berne by safeguarding style/likeness after death. | Art. 10 International Harmonisation—aligns national law with Berne, introduces AI-specific clauses. | Art. 11 Implementation and Enforcement—cross-border dispute resolution, sanctions for dataset misuse. | Art. 8 Educational Exceptions—balances copyright with research policy, limits scope of replication. |
| Ethical Technology (E) | Art. 5 Moral Rights—closes the ethical gap of style/voice imitation, prevents deceptive AI use. | Art. 2 Authorship Attribution—prevents false attribution of creativity to machines. | Art. 6 Transparency Obligations—audit trails, regulator access, accountability of developers. | Art. 7 Posthumous Protection + Art. 9 Technological Measures—ethical guardrails against exploitative mimicry, support trust in originality. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Neubauer, A.; Wynn, M.; Bown, R. AI, Authorship, Copyright, and Human Originality. Encyclopedia 2026, 6, 9. https://doi.org/10.3390/encyclopedia6010009

