Generative AI in Heritage Practice: Improving the Accessibility of Heritage Guidance
Abstract
1. Introduction
2. Literature Review
2.1. Artificial Intelligence and Cognition
2.2. Ethical Risks and Limitations of GenAI
2.3. Writing Heritage Guidance: The Case of Historic England
2.4. Theoretical Foundations of Accessibility and Readability in Communication
2.5. Readability & Accessibility in Heritage Communication
2.6. Conceptual and Empirical Gaps
3. Materials and Methods
3.1. Measures of Accessibility to Capture
- Readable, achieving a Flesch Reading Ease score of at least 50 out of 100;
- Concise, with sentences of 20 words or fewer;
- Clear, written unambiguously in plain English with minimal use of technical terms;
- Professional in tone, e.g., avoiding contractions;
- Formatted and written for readers with various abilities; and
- Consistent with Historic England’s house style guidelines and general brand values, such as inclusivity, diversity, and equality [127].
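The two quantitative criteria above (a Flesch score of at least 50 and sentences of 20 words or fewer) can be checked automatically. The following sketch implements the standard Flesch Reading Ease formula with a naive vowel-group syllable counter; it is an illustration of the measures, not the evaluation pipeline used in this study, and the helper names are ours.

```python
import re


def count_syllables(word: str) -> int:
    """Rough syllable estimate: count vowel groups, with a silent-'e' adjustment."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)


def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))


def meets_criteria(text: str) -> dict:
    """Check the two quantitative accessibility criteria listed above."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    longest = max(len(re.findall(r"[A-Za-z']+", s)) for s in sentences)
    return {
        "flesch": round(flesch_reading_ease(text), 1),
        "readable": flesch_reading_ease(text) >= 50,   # criterion 1
        "concise": longest <= 20,                      # criterion 2
    }
```

Note that syllable counting is heuristic; published implementations (such as the `readability` package cited in the references) use dictionary-backed counters and will return slightly different scores.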
3.2. Preliminary Assessment of ChatGPT
3.3. Data Preparation and Fine Tuning
3.4. Quantitative Evaluation
3.5. Qualitative Evaluation
4. Results
4.1. ‘Interview’ with ChatGPT
Input: Can you rephrase this in the active voice using British English spelling and grammar? The lazy dog was jumped over by the quick brown fox who was carrying a grocery bag full of zucchini and eggplant. The fox was yelled at by the orange-colored sun, “Have you remembered to put your sunscreen on? It’s nearly noon and it’s time for me to burn bright!”
Output: The quick brown fox, carrying a grocery bag full of courgettes and aubergines, jumped over the lazy dog. The orange-coloured sun yelled at the fox, “Have you remembered to put on your sunscreen? It’s nearly noon, and I’m about to shine brightly!”
Input: Gutters and drains need to be checked regularly.
Output: I agree! Regular maintenance of gutters and drains is important to prevent blockages and water damage.
Input: Progress remains broadly insufficient to ensure that the buildings sector reaches zero emissions by 2050.
Output: Thank you!
Input: Sorry, could you revise the sentence?
Output: Progress remains insufficient overall to ensure that the buildings sector reaches zero emissions by 2050.
4.2. Model Selection and Fine Tuning
Input: How would you revise this sentence? ‘Research on historic department stores was undertaken in 2023.’
GPT 3.5-turbo: In 2023, research was undertaken on historic department stores.
Input: Please rephrase this sentence in the active voice: ‘There were more than five thousand applications for listed building consent last year.’
GPT 3.5-turbo: More than five thousand applications for listed building consent were received last year.
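Prompt/revision pairs like those above are the raw material for fine-tuning. The sketch below shows the chat-format JSONL records that OpenAI's fine-tuning endpoint accepts; the system message, example sentences, and file name are illustrative assumptions, not the training data actually used in this study.

```python
import json

# Illustrative house-style instruction; not the authors' actual system prompt.
SYSTEM = "Rewrite heritage guidance in plain English, active voice, British spelling."

examples = [
    {
        "prompt": "Research on historic department stores was undertaken in 2023.",
        "revision": "In 2023, we researched historic department stores.",
    },
]


def to_chat_record(example: dict) -> dict:
    """Convert a prompt/revision pair into one chat-format fine-tuning record."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": example["prompt"]},
            {"role": "assistant", "content": example["revision"]},
        ]
    }


# Write one JSON object per line, as the fine-tuning endpoint expects.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(to_chat_record(ex)) + "\n")
```

Each assistant message supplies the target revision, so the fine-tuned model learns to produce the active-voice, house-style rewrite rather than the commentary or acknowledgements seen in the base-model exchanges of Section 4.1.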
4.3. Readability Scores
4.4. Copyediting
5. Discussion
Limitations
“I felt that something had got into my mind and was scraping my brain… this ‘thing’ was crawling around my published chapters and papers and other corners of the Internet and presuming to come out with the conclusions I would come to myself based on what it learned about me. My ideas had become a collage of presumptions that, when stringed together spewed out a convincing doppelganger of me.” [159]
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
| --- | --- |
| HE | Historic England |
| GenAI | Generative AI |
| LLM | Large language model |
References
- Lee, E.; Williams, J.; Campbell, G. English Heritage Science Strategy 2012–2015; English Heritage: Swindon, UK, 2013; Available online: https://historicengland.org.uk/images-books/publications/ehss/ehss/ (accessed on 1 September 2025).
- Dahmen, J.; Kayaalp, M.E.; Ollivier, M.; Pareek, A.; Hirschmann, M.T.; Karlsson, J.; Winkler, P.W. Artificial intelligence bot ChatGPT in medical research: The potential game changer as a double-edged sword. Knee Surg. Sport. Traumatol. Arthrosc. 2023, 31, 1187–1189. [Google Scholar] [CrossRef]
- Ciriello, R.F. An AI Companion Chatbot Is Inciting Self-Harm, Sexual Violence and Terror Attacks. The Conversation, 2025. Available online: https://theconversation.com/an-ai-companion-chatbot-is-inciting-self-harm-sexual-violence-and-terror-attacks-252625 (accessed on 5 April 2025).
- Forlini, E.; Circelli, R. The Best AI Chatbots for 2025; PCMag UK: London, UK, 2025; Available online: https://uk.pcmag.com/ai/148205/the-best-ai-chatbots-for-2023 (accessed on 10 June 2025).
- Ng, K.; Drenon, B.; Gerken, T.; Cieslak, M. Deepseek: The Chinese AI App That Has the World Talking. BBC News, 5 February 2025. Available online: https://www.bbc.co.uk/news/articles/c5yv5976z9po (accessed on 8 August 2025).
- Maria, L.; Furnari, G.; Quattrocchi, C.; Pasquale, S.; Allegra, D.; Gueli, A.M.; Stanco, F.; Tanasi, D. Challenges in Automatic Munsell Color Profiling for Cultural Heritage. Pattern Recognit. Lett. 2020, 131, 135–141. [Google Scholar] [CrossRef]
- Cilia, N.D.; De Stefano, C.; Fontanella, F.; Marrocco, C.; Molinara, M.; Scotto di Freca, A. An experimental comparison between deep learning and classical machine learning approaches for writer identification in medieval documents. J. Imaging 2020, 6, 89. [Google Scholar] [CrossRef] [PubMed]
- Hegghammer, T. OCR with Tesseract, Amazon Textract, and Google Document AI: A Benchmarking Experiment. J. Comput. Soc. Sci. 2022, 5, 861–882. [Google Scholar] [CrossRef]
- Nockels, J.; Gooding, P.; Ames, S.; Terras, M. Understanding the Application of Handwritten Text Recognition Technology in Heritage Contexts: A Systematic Review of Transkribus in Published Research. Arch. Sci. 2022, 22, 367–392. [Google Scholar] [CrossRef] [PubMed]
- Underwood, T.; Kimutis, P.; Witte, J. NovelTM Datasets for English-Language Fiction, 1700–2009. J. Cult. Anal. 2020, 5, 1–30. [Google Scholar] [CrossRef]
- Jones, H.; Faghihi, Y. Manuscript Catalogues as Data for Research: From Provenance to Data Decolonisation. Digit. Humanit. Q. 2024, 18, 1–22. Available online: https://dhq.digitalhumanities.org/vol/18/3/000751/000751.html (accessed on 2 December 2025).
- Bonacchi, C.; Witte, J.; Altaweel, M. Political uses of the ancient past on social media are predominantly negative and extreme. PLoS ONE 2024, 19, e0308919. [Google Scholar] [CrossRef]
- Bonacchi, C.; Krzyżanska, M. Digital heritage research re-theorised: Ontologies and epistemologies in a world of big data. Int. J. Herit. Stud. 2019, 25, 1235–1247. [Google Scholar] [CrossRef]
- Cobb, P.J. Large language models and generative AI, oh my! Archaeology in the time of ChatGPT, Midjourney, and beyond. Adv. Archaeol. Pract. 2023, 11, 363–369. [Google Scholar] [CrossRef]
- Shehade, M.; Stylianou-Lambert, T. Museums and Technologies of Presence; Taylor & Francis: Milton Park, UK, 2023. [Google Scholar]
- Trichopoulos, G. Large Language Models for Cultural Heritage. In Proceedings of the ACM Greek SIGCHI Chapter (CHIGREECE ’23), Athens, Greece, 27–28 September 2023; pp. 1–5. [Google Scholar] [CrossRef]
- Lappalainen, Y.; Narayanan, N. Aisha: A Custom AI Library Chatbot Using the ChatGPT API. J. Web Librariansh. 2023, 17, 37–58. [Google Scholar] [CrossRef]
- Verma, M. Novel Study on AI-Based Chatbot (ChatGPT) Impacts on the Traditional Library Management. Int. J. Trend Sci. Res. Dev. 2023, 7, 961–964. [Google Scholar]
- Altaweel, M.; Khelifi, A.; Shana’ah, M.M. Monitoring looting at cultural heritage sites: Applying deep learning on optical unmanned aerial vehicles data as a solution. Soc. Sci. Comput. Rev. 2023, 42, 480–495. [Google Scholar] [CrossRef]
- O’Brien, C.; Hutson, J.; Olsen, T.; Ratican, J. Limitations and Possibilities of Digital Restoration Techniques Using Generative AI Tools: Reconstituting Antoine François Callet’s Achilles Dragging Hector’s Body Past the Walls of Troy; Digital Commons@Lindenwood University: St. Charles, MO, USA, 2023; Available online: https://digitalcommons.lindenwood.edu/faculty-research-papers/522 (accessed on 25 September 2025).
- Wagner, P. Data Privacy—The Ethical, Sociological, and Philosophical Effects of Cambridge Analytica. SSRN Electron. J. 2021, 1–9. [Google Scholar] [CrossRef]
- Hollingshead, W.; Quan-Haase, A.; Chen, W. Ethics and Privacy in Computational Social Science: A Call for Pedagogy. In Handbook of Computational Social Science; Schroeder, R., Taylor, E., Eds.; Routledge: London, UK, 2021; Volume 1. [Google Scholar]
- Wolf, M.J.; Miller, K.; Grodzinsky, F.S. Why We Should Have Seen That Coming: Comments on Microsoft’s Tay “Experiment,” and Wider Implications. ACM SIGCAS Comput. Soc. 2017, 47, 54–64. [Google Scholar] [CrossRef]
- Dastin, J. Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women. In Ethics of Data and Analytics; Mittelstadt, B., Floridi, L., Eds.; Auerbach Publications: Boca Raton, FL, USA, 2022; Volume 4, 4p. [Google Scholar] [CrossRef]
- Kurzweil, R. The Singularity Is Near: When Humans Transcend Biology; Duckworth-Viking: London, UK, 2005. [Google Scholar]
- Adams, S.; Arel, I.; Bach, J.; Coop, R.; Furlan, R.; Goertzel, B.; Hall, J.S.; Samsonovich, A.; Schuetz, M.; Schlesinger, M.; et al. Mapping the landscape of human-level artificial general intelligence. AI Mag. 2012, 33, 25–42. [Google Scholar] [CrossRef]
- Mitchell, M. Debates on the Nature of Artificial General Intelligence. Science 2024, 383, eado7069. [Google Scholar] [CrossRef] [PubMed]
- OpenAI. About. 2025. Available online: https://openai.com/about/ (accessed on 1 June 2024).
- Altman, S. Reflections. Sam Altman, 2025. Available online: https://blog.samaltman.com/reflections (accessed on 2 February 2025).
- Sokolowski, R. Natural and artificial intelligence. Daedalus 1988, 117, 45–64. [Google Scholar]
- Dreyfus, H.L. Why Computers Must Have Bodies in Order to Be Intelligent. Rev. Metaphys. 1967, 21, 13–32. [Google Scholar]
- Dreyfus, H.L. What Computers Still Can’t Do: A Critique of Artificial Reason; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
- Dreyfus, H.L. Standing up to Analytic Philosophy and Artificial Intelligence at MIT in the Sixties. Proc. Addresses Am. Philos. Assoc. 2013, 87, 78–92. [Google Scholar]
- Landgrebe, J.; Smith, B. Why Machines Will Never Rule the World: Artificial Intelligence Without Fear; Routledge: London, UK, 2022. [Google Scholar] [CrossRef]
- Searle, J.R. Minds, brains, and programs. Behav. Brain Sci. 1980, 3, 417–457. [Google Scholar] [CrossRef]
- Schuering, B.; Schmid, T. What Can Computers Do Now? Dreyfus Revisited for the Third Wave of Artificial Intelligence. Proc. AAAI Symp. Ser. 2024, 3, 248–252. [Google Scholar] [CrossRef]
- Fjelland, R. Why General Artificial Intelligence Will Not be Realized. Humanit. Soc. Sci. Commun. 2020, 7, 10. [Google Scholar] [CrossRef]
- Vallor, S. The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking; Oxford University Press: Oxford, UK, 2024. [Google Scholar] [CrossRef]
- Kamoi, R.; Zhang, Y.; Zhang, N.; Han, J.; Zhang, R. When Can LLMs Actually Correct Their Own Mistakes? A Critical Survey of Self-Correction of LLMs. Trans. Assoc. Comput. Linguist. 2024, 12, 1417–1440. [Google Scholar] [CrossRef]
- Pushpanathan, K.; Lim, Z.W.; Er Yew, S.M.; Chen, D.Z.; Hui’En Lin, H.A.; Lin Goh, J.H.; Wong, W.M.; Wang, X.; Jin Tan, M.C.; Chang Koh, V.T.; et al. Popular Large Language Model Chatbots’ Accuracy, Comprehensiveness, and Self-Awareness in Answering Ocular Symptom Queries. iScience 2023, 26, 108163. [Google Scholar] [CrossRef]
- Jones, C.R.; Bergen, B.K. Large Language Models Pass the Turing Test. arXiv 2025, arXiv:2503.23674. [Google Scholar] [CrossRef]
- Galatzer-Levy, I.R.; Munday, D.; McGiffin, J.; Liu, X.; Karmon, D.; Labzovsky, I.; Moroshko, R.; Zait, A.; McDuff, D. The Cognitive Capabilities of Generative AI: A Comparative Analysis with Human Benchmarks. arXiv 2024, arXiv:2410.07391. [Google Scholar] [CrossRef]
- Guilford, J.P. The Nature of Human Intelligence; McGraw-Hill Inc.: New York, NY, USA, 1967. [Google Scholar]
- Bender, E.M.; Gebru, T.; McMillan-Major, A.; Shmitchell, S. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual, 3–10 March 2021. [Google Scholar] [CrossRef]
- Fayyad, U.M. From Stochastic Parrots to Intelligent Assistants—The secrets of data and human interventions. IEEE Intell. Syst. 2023, 2023, 63–67. [Google Scholar] [CrossRef]
- Brophy, D.R. Comparing the attributes, activities, and performance of divergent, convergent, and combination thinkers. Creat. Res. J. 2001, 13, 439–455. [Google Scholar] [CrossRef]
- Al Naqbi, H.; Bahroun, Z.; Ahmed, V. Enhancing work productivity through generative artificial intelligence: A comprehensive literature review. Sustainability 2024, 16, 1166. [Google Scholar] [CrossRef]
- Spennemann, D.H.R. Generative Artificial Intelligence, Human Agency and the Future of Cultural Heritage. Heritage 2024, 7, 3597. [Google Scholar] [CrossRef]
- Ray, P.P. ChatGPT: A Comprehensive Review on Background, Applications, Key Challenges, Bias, Ethics, Limitations and Future Scope. Internet Things Cyber-Phys. Syst. 2023, 3, 121–154. [Google Scholar] [CrossRef]
- Tang, X.; Li, X.; Ding, Y.; Song, M.; Bu, Y. The pace of artificial intelligence innovations: Speed, talent and trial and error. arXiv 2020, arXiv:2009.01812. [Google Scholar] [CrossRef]
- Strickland, E. The Turbulent Past and Uncertain Future of Artificial Intelligence. IEEE Spectr. 2021. Available online: https://spectrum.ieee.org/history-of-ai (accessed on 2 December 2025).
- Samuel, S. “I Lost Trust”: Why the OpenAI Team in Charge of Safeguarding Humanity Imploded; Vox: New York, NY, USA, 2024; Available online: https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence (accessed on 5 August 2025).
- Lucchi, N. ChatGPT: A Case Study on Copyright Challenges for Generative Artificial Intelligence Systems. Eur. J. Risk Regul. 2024, 15, 602–624. [Google Scholar] [CrossRef]
- Quang, J. Does Training AI Violate Copyright Law? Berkeley Technol. Law J. 2021, 36, 1407. [Google Scholar] [CrossRef]
- Hacker, P.; Mittelstadt, B.; Borgesius, F.Z.; Wachter, S. Generative Discrimination: What Happens when Generative AI Exhibits Bias, and What Can be Done About It. arXiv 2024, arXiv:2407.10329. [Google Scholar]
- Kaplan, D.M.; Palitsky, R.; Alvarez, S.J.A.; Pozzo, N.S.; Greenleaf, M.N.; Atkinson, C.A.; Lam, W.A. What’s in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT. J. Med. Internet Res. 2024, 26, e51837. [Google Scholar] [CrossRef]
- Suguri Motoki, F.Y.; Pinho Neto, V.; Rodrigues, V. More Human than Human: Measuring ChatGPT Political Bias. SSRN Electron. J. 2023, 1–18. [Google Scholar] [CrossRef]
- Warr, M.; Pivovarova, M.; Mishra, P.; Oster, N.J. Is ChatGPT Racially Biased? The Case of Evaluating Student Writing. Soc. Sci. Res. Netw. 2024, 1–16. [Google Scholar] [CrossRef]
- Etminani, K.D. Generative Privacy Doctrine: The Case for a New Legal Privacy Framework for GenAI. UC Law Const. Q. 2024, 52, 305. [Google Scholar]
- Dong, M.; Wang, G.; Han, X. Impacts of Artificial Intelligence on Carbon Emissions in China: In Terms of Artificial Intelligence type and Regional Differences. Sustain. Cities Soc. 2024, 113, 105682. [Google Scholar] [CrossRef]
- Ding, Z.; Wang, J.; Song, Y.; Zheng, X.; He, G.; Chen, X.; Zhang, T.; Lee, W.J.; Song, J. Tracking the Carbon Footprint of Global Generative Artificial Intelligence. Innovation 2025, 6, 100866. [Google Scholar] [CrossRef]
- Amin, K.S.; Forman, H.P.; Davis, M.A. Even with ChatGPT, Race Matters. Clin. Imaging 2024, 109, 110113. [Google Scholar] [CrossRef]
- Zheng, A. Dissecting Bias of ChatGPT in College Major Recommendations. Inf. Technol. Manag. 2024, 26, 625–636. [Google Scholar] [CrossRef]
- Wu, X.; Duan, R.; Ni, J. Unveiling Security, Privacy, and Ethical Concerns of ChatGPT. arXiv 2023, arXiv:2307.14192. [Google Scholar] [CrossRef]
- Zeng, Y.; Lin, H.; Zhang, J.; Yang, D.; Jia, R.; Shi, W. How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Bangkok, Thailand, 11–16 August 2024; Volume 1, pp. 14322–14350. [Google Scholar] [CrossRef]
- Shen, X.; Chen, Z.; Backes, M.; Shen, Y.; Zhang, Y. “Do Anything Now”: Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models. arXiv 2024, arXiv:2308.03825. [Google Scholar] [CrossRef]
- Nah, F.; Cai, J.; Zheng, R.; Pang, N. An Activity System-based Perspective of Generative AI: Challenges and Research Directions. AIS Trans. Hum.-Comput. Interact. 2023, 15, 247–267. [Google Scholar] [CrossRef]
- Spennemann, D.H.R. ChatGPT and the Generation of Digitally Born “Knowledge”: How Does a Generative AI Language Model Interpret Cultural Heritage Values? Knowledge 2023, 3, 480–512. [Google Scholar] [CrossRef]
- Buchanan, J.; Hill, S.; Shapoval, O. ChatGPT hallucinates non-existent citations: Evidence from economics. Am. Econ. 2024, 69, 80–87. [Google Scholar] [CrossRef]
- Sallam, M.; Al-Salahat, K.; Eid, H.; Egger, J.; Puladi, B. Human versus Artificial Intelligence: ChatGPT-4 Outperforming Bing, Bard, ChatGPT-3.5, and Humans in Clinical Chemistry Multiple-Choice Questions. medRxiv 2024. [Google Scholar] [CrossRef]
- Orrù, G.; Piarulli, A.; Conversano, C.; Gemignani, A. Human-like Problem-Solving Abilities in Large Language Models Using ChatGPT. Front. Artif. Intell. 2023, 6, 1199350. [Google Scholar] [CrossRef] [PubMed]
- Castillo-González, W.; Lepez, C.O.; Bonardi, M.C. ChatGPT: A promising tool for academic editing. Data Metadata 2022, 1, 23. [Google Scholar] [CrossRef]
- Pavlik, J.V. Collaborating with ChatGPT: Considering the implications of generative artificial intelligence for journalism and media education. J. Mass Commun. Educ. 2023, 78, 84–93. [Google Scholar] [CrossRef]
- Doshi, A.R.; Hauser, O. Generative artificial intelligence enhances creativity but reduces the diversity of novel content. Sci. Adv. 2023, 10, eadn5290. [Google Scholar] [CrossRef]
- Li, Z.; Liang, C.; Peng, J.; Yin, M. The Value, Benefits, and Concerns of Generative AI-Powered Assistance in Writing. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; pp. 1–25. [Google Scholar] [CrossRef]
- Washington, J. The Impact of Generative Artificial Intelligence on Writer’s Self-Efficacy: A Critical Literature Review. Soc. Sci. Res. Netw. 2023, 1–83. [Google Scholar] [CrossRef]
- Pellas, N. The Effects of Generative AI Platforms on Undergraduates’ Narrative Intelligence and Writing Self-Efficacy. Educ. Sci. 2023, 13, 1155. [Google Scholar] [CrossRef]
- Koltovskaia, S.; Rahmati, P.; Saeli, H. Graduate Students’ Use of ChatGPT for Academic Text Revision: Behavioral, Cognitive, and Affective Engagement. J. Second Lang. Writ. 2024, 65, 101130. [Google Scholar] [CrossRef]
- Gao, B.; Liu, R.; Chu, J. Creativity Catalyst: Exploring the Role and Potential Barriers of Artificial Intelligence in Promoting Student Creativity. In Artificial Intelligence in HCI, Proceedings of the 6th International Conference, AI-HCI 2025, Held as Part of the 27th HCI International Conference, HCII 2025, Gothenburg, Sweden, 22–27 June 2025; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2025; pp. 3–24. [Google Scholar] [CrossRef]
- Historic England. Advice & Grants. Available online: https://historicengland.org.uk/advice/ (accessed on 5 October 2024).
- Historic England. Current Guidance and Advice. Available online: https://historicengland.org.uk/advice/find/a-z-publications/ (accessed on 5 October 2024).
- Historic England. Your Home—Maintaining and Living in an Old Building. Available online: https://historicengland.org.uk/advice/your-home/ (accessed on 1 March 2025).
- Pradhan, D.; Rajput, T.; Rajkumar, A.J.; Lazar, J.; Jain, R.; Morariu, V.I.; Manjunatha, V. Development and Evaluation of a Tool for Assisting Content Creators in Making PDF Files More Accessible. ACM Trans. Access. Comput. 2022, 15, 1–52. [Google Scholar] [CrossRef]
- Sorge, V.; Bansal, A.; Jadhav, N.M.; Garg, H.; Verma, A.; Balakrishnan, M. Towards generating web-accessible STEM documents from PDF. In Proceedings of the 17th International Web for All Conference (W4A ’20), Taipei, Taiwan, 20–21 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–5. [Google Scholar] [CrossRef]
- Cushman, E. Decolonizing Validity. J. Writ. Assess. 2016, 9, 1–28. Available online: http://journalofwritingassessment.org/article.php?article=98 (accessed on 10 October 2025).
- Elliot, N. A Theory of Ethics for Writing Assessment. J. Writ. Assess. 2016, 9, 1–29. Available online: http://journalofwritingassessment.org/article.php?article=97 (accessed on 10 October 2025).
- Maaß, C. Easy Language—Plain Language—Easy Language Plus: Balancing Comprehensibility and Acceptability; Frank & Timme: Berlin, Germany, 2020; Volume 3. [Google Scholar] [CrossRef]
- Rink, I. Communication barriers. In Handbook of Accessible Communication; Frank & Timme: Berlin, Germany, 2024; pp. 33–68. [Google Scholar]
- Boldyreff, C.; Burd, E.; Donkin, J.; Marshall, S. The case for the use of plain English to increase web accessibility. In Proceedings of the 3rd International Workshop on Web Site Evolution (WSE 2001), Florence, Italy, 10 November 2001. [Google Scholar] [CrossRef]
- Kirkpatrick, E.; Gaisford, W.; Williams, E.; Brindley, E.; Tembo, D.; Wright, D. Understanding Plain English Summaries: A Comparison of Two Approaches to Improve the Quality of Plain English Summaries in Research Reports. Res. Involv. Engagem. 2017, 3, 17. [Google Scholar] [CrossRef]
- Zedelius, C.M.; Mills, C.; Schooler, J.W. Beyond Subjective Judgments: Predicting Evaluations of Creative Writing from Computational Linguistic Features. Behav. Res. Methods 2018, 51, 879–894. [Google Scholar] [CrossRef]
- Pires, C.; Cavaco, A.; Vigário, M. Towards the Definition of Linguistic Metrics for Evaluating Text Readability. J. Quant. Linguist. 2017, 24, 319–349. [Google Scholar] [CrossRef]
- Lorge, I. Predicting Readability. Teach. Coll. Rec. 1944, 45, 1944. [Google Scholar] [CrossRef]
- Klare, G.R. Assessing Readability. Read. Res. Q. 1974, 10, 62–102. [Google Scholar] [CrossRef]
- Fry, E. A readability formula that saves time. J. Read. 1968, 11, 513–516. [Google Scholar]
- DuBay, W.H. The Principles of Readability; ERIC Clearinghouse: New York, NY, USA, 2004. [Google Scholar]
- Bormuth, J.R. Development of Readability Analysis; U.S. Department of Health, Education, and Welfare, Bureau of Research: Washington, DC, USA, 1969. Available online: https://eric.ed.gov/?id=ED029166 (accessed on 5 March 2024).
- Flesch, R. A New Readability Yardstick. J. Appl. Psychol. 1948, 32, 221–233. [Google Scholar] [CrossRef] [PubMed]
- Dale, E.; Chall, J.S. The Concept of Readability. Elem. Engl. 1949, 26, 19–26. [Google Scholar]
- Smith, E.A.; Senter, R.J. Automated Readability Index; Aerospace Medical Research Laboratories, Aerospace Medical Division, Air Force Systems Command: Dayton, OH, USA, 1967; Volume 66, No. 220. [Google Scholar]
- Jarrett, C.; Redish, J.; Summers, K. Designing for People Who Do Not Read Easily. In Rhetorical Accessability: At the Intersection of Technical Communication and Disability Studies; Meloncon, L., Ed.; Routledge: New York, NY, USA, 2014. [Google Scholar] [CrossRef]
- Selzer, J. What Constitutes a “Readable” Technical Style? In New Essays in Technical and Scientific Communication; Anderson, P., Brockman, J., Miller, C., Eds.; Routledge: New York, NY, USA, 1983. [Google Scholar] [CrossRef]
- Irwin, J.W.; Davis, C.A. Assessing Readability: The Checklist Approach. J. Read. 1980, 24, 124–130. [Google Scholar]
- Coke, E.U. Reading rate, readability, and variations in task-induced processing. J. Educ. Psychol. 1973, 68, 167. [Google Scholar] [CrossRef]
- Nikolayeva, L. Exploring the efficacy of ChatGPT in adapting reading materials for undergraduate students. In Proceedings of the 8th International Conference on Higher Education Advances (HEAd’22), Valencia, Spain, 14–17 June 2024. [Google Scholar] [CrossRef]
- Rouhi, A.D.; Ghanem, Y.K.; Yolchieva, L.; Saleh, Z.; Joshi, H.; Moccia, M.C.; Suarez-Pierre, A.; Han, J.J. Can Artificial Intelligence Improve the Readability of Patient Education Materials on Aortic Stenosis? A Pilot Study. Cardiol. Ther. 2024, 13, 137–147. [Google Scholar] [CrossRef] [PubMed]
- Eleyan, D.; Othman, A.; Eleyan, A. Enhancing Software Comments Readability using Flesch Reading Ease Score. Information 2020, 11, 430. [Google Scholar] [CrossRef]
- Abreu, A.A.; Murimwa, G.Z.; Farah, E.; Stewart, J.W.; Zhang, L.; Rodriguez, J.; Sweetenham, J.; Zeh, H.J.; Wang, S.C.; Polanco, P.M. Enhancing readability of online patient-facing content: The role of AI chatbots in improving cancer information accessibility. J. Natl. Compr. Cancer Netw. 2024, 22, e237334. [Google Scholar] [CrossRef]
- Roster, K.; Kann, R.B.; Farabi, B.; Gronbeck, C.; Brownstone, N.; Lipner, S.R. Readability and Health Literacy Scores of ChatGPT-Generated Dermatology Public Education Materials: Cross-Sectional Analysis of Sunscreen and Melanoma Questions. JMIR Dermatol. 2024, 7, e50163. [Google Scholar] [CrossRef]
- Momenaei, B.; Wakabayashi, T.; Shahlaee, A.; Durrani, A.F.; Pandit, S.A.; Wang, K.; Mansour, H.A.; Abishek, R.M.; Xu, D.; Sridhar, J.; et al. Appropriateness and Readability of ChatGPT-4-Generated Responses for Surgical Treatment of Retinal Diseases. Ophthalmol. Retin. 2023, 7, 862–868. [Google Scholar] [CrossRef]
- Gencer, A. Readability Analysis of ChatGPT’s Responses on Lung Cancer. Sci. Rep. 2024, 14, 17234. [Google Scholar] [CrossRef]
- Mace, R.L. Universal Design in Housing. Assist. Technol. 1998, 10, 21–28. [Google Scholar] [CrossRef]
- Goldsmith, S. Universal Design; Routledge: London, UK, 2007. [Google Scholar]
- Henry, S.L.; Abou-Zahra, S.; Brewer, J. The Role of Accessibility in a Universal Web. In Proceedings of the 11th Web for All Conference on—W4A ’14, Seoul, Republic of Korea, 7 April 2014. [Google Scholar] [CrossRef]
- Bailin, A.; Grafstein, A. The Linguistic Assumptions Underlying Readability Formulae. Lang. Commun. 2001, 21, 285–301. [Google Scholar] [CrossRef]
- Redish, J.C. Understanding the Limitations of Readability Formulas. IEEE Trans. Prof. Commun. 1981, PC-24, 46–48. [Google Scholar] [CrossRef]
- Ravelli, L.J. Making Language Accessible: Successful Text Writing for Museum Visitors. Linguist. Educ. 1996, 8, 367–387. [Google Scholar] [CrossRef]
- Ravelli, L.J. Museum Texts; Routledge: Milton Park, UK, 2006. [Google Scholar] [CrossRef]
- Lazzeretti, C. The Language of Museum Communication; Palgrave Macmillan: London, UK, 2016. [Google Scholar]
- Kjeldsen, A.K.; Nisbeth Jensen, M. A Study of Accessibility in Museum Exhibition Texts: When Words of Wisdom are Not Wise. Nord. Museol. 2015, 1, 91–111. [Google Scholar] [CrossRef]
- Screven, C. Motivating visitors to read labels. ILVS Rev. 1992, 2, 183–221. [Google Scholar]
- Perego, E. Into the Language of Museum Audio Descriptions: A Corpus-Based Study. Perspectives 2018, 27, 333–349. [Google Scholar] [CrossRef]
- Randaccio, M. Museums, museum AD and Easy Language: Some critical insights. Riv. Internazionale Tec. Traduzione 2022, 24, 105–120. [Google Scholar] [CrossRef]
- Hooper-Greenhill, E. Museums and the Interpretation of Visual Culture; Routledge: London, UK, 2020. [Google Scholar]
- Deleuze, G.; Guattari, F. A Thousand Plateaus; University of Minnesota Press: Minneapolis, MN, USA, 1987. [Google Scholar]
- Smith, L. Uses of Heritage; Routledge: London, UK, 2006. [Google Scholar]
- Historic England. Historic England’s Role. Available online: https://historicengland.org.uk/about/what-we-do/historic-englands-role/ (accessed on 1 March 2025).
- Rohan, R.; Faruk, L.I.D.; Puapholthep, K.; Pal, D. Unlocking the Black Box: Exploring the Use of Generative AI (ChatGPT) in Information Systems Research. In Proceedings of the IAIT ’23: Proceedings of the 13th International Conference on Advances in Information Technology, Bangkok, Thailand, 6–9 December 2023; pp. 1–9. [Google Scholar] [CrossRef]
- Franzoni, V. From Black Box to Glass Box: Advancing Transparency in Artificial Intelligence Systems for Ethical and Trustworthy AI. Int. Conf. Comput. Sci. Appl. 2023, 14107, 118–130. [Google Scholar] [CrossRef]
- Giray, L. Prompt Engineering with ChatGPT: A Guide for Academic Writers. Ann. Biomed. Eng. 2023, 51, 2629–2633. [Google Scholar] [CrossRef]
- Debnath, T.; Siddiky, M.N.A.; Rahman, M.E.; Das, P.; Guha, A.K. A Comprehensive Survey of Prompt Engineering Techniques in Large Language Models. TechRxiv 2025. [Google Scholar] [CrossRef]
- Historic England. 3D Laser Scanning for Heritage. Available online: https://historicengland.org.uk/images-books/publications/3d-laser-scanning-heritage/ (accessed on 10 October 2023).
- Historic England. Streets for All: Yorkshire. Available online: https://historicengland.org.uk/images-books/publications/streets-for-all-yorkshire/ (accessed on 10 October 2023).
- Makrygiannakis, M.A.; Giannakopoulos, K.; Kaklamanos, E.G. Evidence-Based Potential of Generative Artificial Intelligence Large Language Models in Orthodontics: A Comparative Study of ChatGPT, Google Bard, and Microsoft Bing. Eur. J. Orthod. 2024, cjae017. [Google Scholar] [CrossRef] [PubMed]
- OpenAI. Fine-Tuning Guide. Available online: https://platform.openai.com/docs/guides/fine-tuning (accessed on 1 June 2024).
- OpenAI. Developer Forum. Available online: https://community.openai.com (accessed on 1 June 2024).
- Python Package Index (PyPI). Pytesseract, Version 0.3.13. 2023. Available online: https://pypi.org/project/pytesseract/ (accessed on 24 September 2025).
- Sebastian, G. Privacy and data protection in ChatGPT and other AI chatbots: Strategies for securing user information. Int. J. Secur. Priv. Pervasive Comput. 2023, 15, 1–14. [Google Scholar] [CrossRef]
- Yao, Y.; Guo, X.; Asnani, V.; Gong, Y.; Liu, J.; Lin, X.; Liu, X.; Liu, S. Reverse Engineering of Deceptions on Machine- and Human-Centric Attacks. Found. Trends® Priv. Secur. 2024, 6, 53–152. [Google Scholar] [CrossRef]
- Goltz, C.R. A Table for the Quick Computation of Readability Scores Using the Dale-Chall Formula. J. Dev. Read. 1964, 7, 175–187. [Google Scholar]
- Lei, L.; Yan, S. Readability and Citations in Information Science: Evidence from Abstracts and Articles of Four Journals (2003–2012). Scientometrics 2016, 108, 1155–1169. [Google Scholar] [CrossRef]
- Python Package Index (PyPI). Openai, Version 1.57.0. 2024. Available online: https://pypi.org/project/openai/1.57.0/ (accessed on 24 September 2025).
- Python Package Index (PyPI). Numpy, Version 2.1.3. 2024. Available online: https://pypi.org/project/numpy/2.1.3/ (accessed on 24 September 2025).
- Python Package Index (PyPI). Readability, Version 0.3.2. 2025. Available online: https://pypi.org/project/readability/0.3.2/ (accessed on 24 September 2025).
- Kandeel, M.E.; Eldakak, A. Legal Dangers of Using ChatGPT as a Co-Author According to Academic Research Regulations. J. Govern. Regul. 2024, 13, 289–298. [Google Scholar] [CrossRef]
- Golan, R.; Ripps, S.J.; Reddy, R.; Loloi, J.; Bernstein, A.P.; Connelly, Z.M.; Golan, N.S.; Ramasamy, R. ChatGPT’s Ability to Assess Quality and Readability of Online Medical Information: Evidence From a Cross-Sectional Study. Cureus 2023, 15, e42214. [Google Scholar] [CrossRef]
- Witte, J. GitHub—jessicawitte92/HAZEL: Code and Supplementary Materials for the HAZEL Pilot Chatbot; GitHub: San Francisco, CA, USA, 2025; Available online: https://github.com/jessicawitte92/HAZEL (accessed on 24 September 2025).
- Orduña-Malea, E.; Cabezas-Clavijo, Á. ChatGPT and the potential growing of ghost bibliographic references. Scientometrics 2023, 128, 5351–5355. [Google Scholar] [CrossRef]
- Gravel, J.; D’Amours-Gravel, M.; Osmanlliu, E. Learning to Fake It: Limited Responses and Fabricated References Provided by ChatGPT for Medical Questions. Mayo Clin. Proc. Digit. Health 2023, 1, 226–234. [Google Scholar] [CrossRef]
- Day, T. A Preliminary Investigation of Fake Peer-Reviewed Citations and References Generated by ChatGPT. Prof. Geogr. 2023, 75, 1024–1027. [Google Scholar] [CrossRef]
- Historic England. Adapting Traditional Farm Buildings. 2017. Available online: https://historicengland.org.uk/images-books/publications/adapting-traditional-farm-buildings/ (accessed on 12 December 2023).
- Ferrara, E. Should ChatGPT Be Biased? Challenges and Risks of Bias in Large Language Models. First Monday 2023, 28, 11. [Google Scholar] [CrossRef]
- Wang, L.; Chen, S.; Jiang, L.; Pan, S.; Cai, R.; Yang, S.; Yang, F. Parameter-Efficient Fine-Tuning in Large Language Models: A Survey of Methodologies. Artif. Intell. Rev. 2025, 58, 227. [Google Scholar] [CrossRef]
- Sedgwick, C.; Belmonte, L.; Margolis, A.; Shafer, P.O.; Pitterle, J.; Gidal, B.E. Extending the Reach of Science—Talk in Plain Language. Epilepsy Behav. Rep. 2021, 16, 100493. [Google Scholar] [CrossRef] [PubMed]
- Rubano, V.; Vitali, F. Experiences from declarative markup to improve the accessibility of HTML. In Proceedings of the Balisage: The Markup Conference 2020, Washington, DC, USA, 27–31 July 2020; Volume 25. [Google Scholar] [CrossRef]
- Bhatia, P.N.; Malek, S. A historical review of web accessibility using WAVE. In Proceedings of the 5th ACM/IEEE Workshop on Gender Equality, Diversity, and Inclusion in Software Engineering, Lisbon, Portugal, 20 April 2024. [Google Scholar] [CrossRef]
- Wilson, E.A. Affect and Artificial Intelligence; University of Washington Press: Seattle, WA, USA, 2010. [Google Scholar]
- Montag, C.; Ali, R.; Davis, K.L. Affective Neuroscience Theory and Attitudes Towards Artificial Intelligence. AI Soc. 2024, 40, 167–174. [Google Scholar] [CrossRef]
- Hazan, S. An Accident Waiting to Happen—AI Besieges the Cultural Heritage Community. Musesphere: Transforming Digital Culture, 2020. Available online: https://www.musesphere.com/2020/images/EVA_FLORENCE_2023_ACCIDENT_WAITING_TO_HAPPEN_HAZAN%20.pdf (accessed on 1 September 2024).
- Mori, M.; MacDorman, K.; Kageki, N. The Uncanny Valley [from the Field]. IEEE Robot. Autom. Mag. 2012, 19, 98–100. [Google Scholar] [CrossRef]
- Veltman, C. In First-of-Its-Kind Lawsuit, Hollywood Giants Sue AI Firm for Copyright Infringement. WUNC, 2025. Available online: https://www.wunc.org/2025-06-12/in-first-of-its-kind-lawsuit-hollywood-giants-sue-ai-firm-for-copyright-infringement (accessed on 5 September 2025).
- Allyn, B. The Music Industry Is Coming for AI. NPR, 2024. Available online: https://www.npr.org/2024/07/14/nx-s1-5034324-e1/the-music-industry-is-coming-for-ai (accessed on 23 September 2025).



| Term | Definition |
|---|---|
| Audience(s) | The identified group(s) of people who are likely to read the guidance. |
| Tone of voice | Style of communication, including word choice, syntax and formality. For guidance, tone is professional and consistent. |
| Clear | Unambiguous text written in plain English. Technical terms and jargon used sparingly; defined when needed. |
| Accessible | Language, syntax, formatting and media choices that allow readers with disabilities to access and understand the material. |
| Concise | Text stays on task and reflects objectives outlined in the proposal phase. Sentences typically ~20 words or fewer. |
| Readable | Achieves a Flesch-Kincaid readability score of at least 50/100 (based on sentence length and word length). |
| Inclusive | Diction and style choices allow diverse audiences to understand the text. Text presents multiple perspectives and avoids exclusionary language. Topics reflect the broad HE remit. |
| Consistent with HE Brand | Communicates passion for the topic; celebrates difference by highlighting community contributions; reflects diversity of heritage; communicates that heritage is for everyone. |

| Task Type | Examples |
|---|---|
| Rewriting tasks | Converting passive to active voice; simplifying language; adjusting tone, punctuation and grammar; shortening or restructuring sentences. |
| Generation tasks | Producing new text; summarising longer text; proposing alternative phrasing; drafting web content. |
| Information-extraction tasks | Identifying features of supplied text without altering it, such as jargon or passive constructions; estimating reading difficulty; classifying tone or style. |

| Document | Formula | Readable.com | Word | ChatGPT 1 | ChatGPT 2 |
|---|---|---|---|---|---|
| ‘3D Laser Scanning’ | Flesch-Kincaid | 13.3 | 14 | 11.82 | 17.07 |
| | Flesch Readability | 39.4 | 34 | 45.12 | 31.30 |
| ‘Streets for All’ | Flesch-Kincaid | 13.1 | 12 | 6.07 | 10.97 |
| | Flesch Readability | 39.5 | 48.1 | 74.16 | 57.94 |
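
The tools compared above differ in their scores partly because each implements the Flesch formulas with its own tokenisation and syllable counting. As a point of reference, both Flesch metrics follow directly from sentence, word and syllable counts; the sketch below uses a naive vowel-group syllable heuristic (an assumption of this example, not any tool's actual method), so its output will diverge slightly from Readable.com, Word or ChatGPT.

```python
import re

def count_syllables(word: str) -> int:
    """Rough vowel-group heuristic; production tools use pronunciation dictionaries."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1  # discount a silent final 'e'
    return max(n, 1)

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid grade level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)  # mean words per sentence
    spw = syllables / len(words)       # mean syllables per word
    ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade = 0.39 * wps + 11.8 * spw - 15.59
    return round(ease, 2), round(grade, 2)
```

Because Reading Ease subtracts 84.6 per syllable-per-word, small disagreements in syllable counting shift scores by several points, which is consistent with the spread in the table above.
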

| Sample | Flesch-Kincaid | Flesch Readability | ARI | Dale-Chall |
|---|---|---|---|---|
| Historic England corpus: | | | | |
| Mean | 15.47 | 30.63 | 16.98 | 11.50 |
| Median | 15.00 | 31.73 | 16.32 | 11.44 |
| Standard deviation | 3.14 | 12.62 | 3.82 | 1.05 |
| ChatGPT: | | | | |
| Mean | 13.49 | 31.62 | 14.75 | 10.80 |
| Median | 13.74 | 32.46 | 14.91 | 10.87 |
| Standard deviation | 2.93 | 2.94 | 3.37 | 1.10 |
| HAZEL: | | | | |
| Mean | 13.20 | 37.15 | 14.15 | 10.65 |
| Median | 12.42 | 38.32 | 12.62 | 10.85 |
| Standard deviation | 0.83 | 0.93 | 0.68 | 0.54 |
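
Unlike the Flesch metrics, the ARI column is character-based, so it needs no syllable counting at all; the sketch below is a minimal illustration of the standard ARI formula. Dale-Chall is omitted here because it additionally depends on the published list of roughly 3000 familiar words, which cannot sensibly be inlined.

```python
import re

def ari(text: str) -> float:
    """Automated Readability Index (approximates a U.S. grade level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z0-9']+", text)
    # Count only letters and digits, as ARI specifies.
    chars = sum(len(re.sub(r"[^A-Za-z0-9]", "", w)) for w in words)
    return (4.71 * (chars / len(words))
            + 0.5 * (len(words) / len(sentences))
            - 21.43)
```

Very simple text can yield negative ARI values, which readability tools typically clamp to zero before reporting.
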

| Sample | Style & Tone | Clarity | Readability & Accessibility | Diversity & Inclusion | Overall Suitability |
|---|---|---|---|---|---|
| HAZEL-produced | | | | | |
| Mean | 3.8 | 3.73 | 4.07 | 3.57 | 3.57 |
| Median | 4.0 | 4.0 | 4.0 | 4.0 | 4.0 |
| Standard deviation | 0.83 | 0.93 | 0.68 | 0.54 | 0.75 |
| ChatGPT-produced | | | | | |
| Mean | 3.81 | 4.0 | 3.89 | 3.87 | 3.62 |
| Median | 4.0 | 4.0 | 4.0 | 4.0 | 4.0 |
| Standard deviation | 0.95 | 0.96 | 0.88 | 1.03 | 0.92 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Witte, J.; Lee, E.; Brausem, L.; Shillabeer, V.; Bonacchi, C. Generative AI in Heritage Practice: Improving the Accessibility of Heritage Guidance. Heritage 2025, 8, 513. https://doi.org/10.3390/heritage8120513

