Article

Machines Prefer Humans as Literary Authors: Evaluating Authorship Bias in Large Language Models

Department of Foreign Languages and Literature, University of Verona, 37129 Verona, Italy
* Author to whom correspondence should be addressed.
Information 2026, 17(1), 95; https://doi.org/10.3390/info17010095
Submission received: 19 December 2025 / Revised: 13 January 2026 / Accepted: 15 January 2026 / Published: 16 January 2026
(This article belongs to the Special Issue Emerging Research in Computational Creativity and Creative Robotics)

Abstract

Automata and artificial intelligence (AI) have long occupied a central place in cultural and artistic imagination, and the recent proliferation of AI-generated artworks has intensified debates about authorship, creativity, and human agency. Empirical studies show that audiences often perceive AI-generated works as less authentic or emotionally resonant than human creations, with authorship attribution strongly shaping esthetic judgments. Yet little attention has been paid to how AI systems themselves evaluate creative authorship. This study investigates how large language models (LLMs) evaluate literary quality under different framings of authorship—Human, AI, or Human+AI collaboration. Using a questionnaire-based experimental design, we prompted four instruction-tuned LLMs (ChatGPT 4, Gemini 2, Gemma 3, and LLaMA 3) to read and assess three short stories in Italian, originally generated by ChatGPT 4 in the narrative style of Roald Dahl. For each story × authorship condition × model combination, we collected 100 questionnaire completions, yielding 3600 responses in total. Across esthetic, literary, and inclusiveness dimensions, the stated authorship systematically conditioned model judgments: identical stories were consistently rated more favorably when framed as human-authored or human–AI co-authored than when labeled as AI-authored, revealing a robust negative bias toward AI authorship. Model-specific analyses further indicate distinctive evaluative profiles and inclusiveness thresholds across proprietary and open-source systems. Our findings extend research on attribution bias into the computational realm, showing that LLM-based evaluations reproduce human-like assumptions about creative agency and literary value. We publicly release all materials to facilitate transparency and future comparative work on AI-mediated literary evaluation.
Keywords: LLMs; authorship bias; esthetic appreciation; literary value; inclusiveness
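To make the experimental design concrete, the sketch below enumerates the factorial cells reported in the abstract (3 stories × 3 authorship framings × 4 models × 100 completions = 3600 responses). It is a minimal illustration, not the authors' released materials: the query_model helper, the prompt wording, and the 1–7 rating scale are hypothetical placeholders.

```python
import random
from itertools import product

# Factorial design from the abstract: 3 stories x 3 authorship framings
# x 4 models x 100 completions = 3600 questionnaire responses.
STORIES = ["story_1", "story_2", "story_3"]    # the three Italian short stories
FRAMINGS = ["Human", "AI", "Human+AI"]         # stated authorship conditions
MODELS = ["ChatGPT 4", "Gemini 2", "Gemma 3", "LLaMA 3"]
N_COMPLETIONS = 100

def query_model(model: str, prompt: str) -> dict:
    """Hypothetical stand-in for a real API call to `model`.
    Here it just returns random 1-7 ratings on three dimensions."""
    return {dim: random.randint(1, 7)
            for dim in ("esthetic", "literary", "inclusiveness")}

responses = []
for story, framing, model in product(STORIES, FRAMINGS, MODELS):
    # The story text is identical across conditions; only the stated
    # authorship label in the prompt changes.
    prompt = (f"The following short story was written by: {framing}.\n\n"
              f"{story}\n\nPlease rate it on the questionnaire below ...")
    for _ in range(N_COMPLETIONS):
        responses.append({"story": story, "framing": framing,
                          "model": model, "ratings": query_model(model, prompt)})

assert len(responses) == 3600  # 3 x 3 x 4 x 100
```

Holding the story text fixed while varying only the authorship label is what lets any rating difference be attributed to the framing rather than to the text itself.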