Perspective

The Affordances of AI-Powered, Deepfake, Avatar Creator Systems in Archaeological Facial Depiction and the Related Changes in the Cultural Heritage Sector

Face Lab, Forensic Research Institute (FORRI), Liverpool John Moores University, Liverpool L1 9DE, UK
*
Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(2), 1023; https://doi.org/10.3390/app16021023
Submission received: 1 December 2025 / Revised: 13 January 2026 / Accepted: 15 January 2026 / Published: 20 January 2026
(This article belongs to the Special Issue Application of Digital Technology in Cultural Heritage)

Abstract

Technological advances have influenced and changed cultural heritage in the galleries, libraries, archives, and museums (GLAM) sector by facilitating new forms of experimentation and knowledge exchange. In this context, this paper explores the evolving practice of archaeological facial depiction using AI-powered deepfake avatar creator software programs, such as Epic Games’ MetaHuman Creator (MHC), which offer new affordances in terms of agility, realism, and engagement, and build upon traditional workflows involving the physical sculpting or digital modelling of faces from the past. Through a case-based approach, we illustrate these affordances via real-world applications, including four-dimensional portraits, multi-platform presentations, Augmented Reality (AR), and enhanced audience interaction. We consider the limitations and challenges of these digital avatar systems, such as misrepresentation or cultural insensitivity, and we position this advanced technology within the broader context of digital heritage, considering both the technical possibilities and ethical concerns around synthetic representations of individuals from the past. Finally, we propose that the use of MHC is not a replacement for current practice, but rather an augmentation, expanding the potential for storytelling and public learning outcomes in the GLAM sector, as a result of increased efficiency and new forms of public engagement.

1. Context

A deepfake avatar is a digital entity with an anthropomorphic (i.e., human-like) appearance that is controlled and operated by a software program to inform, interact, and communicate with users of information systems, apps, or services [1]; MetaHumans are one class of deepfake avatar [1]. This paper examines the affordances of AI-powered deepfake avatar creator systems (specifically MetaHuman Creator and Unreal Engine) for the facial depiction of individuals within archaeological and museological practice [2]. The concept of affordance, as developed in digital heritage and visualization scholarship, refers to the range of actions, perceptions, and interpretive possibilities that a given technology makes available to its users [3]. In the context of the digital facial depiction of an individual from the past, affordances describe how tools not only determine what can be visualised but also how such visualisations are produced, interacted with, and understood. We approach ‘affordance’ not merely as the set of technological properties available to the creator of the facial depiction, but as the ways in which the resulting depiction enables or constrains public interaction with individuals from the past in specific cultural and historical contexts and display environments.

1.1. Conceptual Background

Facial depiction from skeletal remains is used within heritage communication to visualise an individual identity and personal narrative; something otherwise absent from skeletal remains alone [4,5,6]. It is well documented that a facial depiction of an ancient individual can allow the public to form an emotional connection with a person from the past, encouraging re-personification and re-humanisation, rather than the exhibition of a scientific artefact. By reimagining the visual appearance of people from history, a facial depiction can act as a tangible bridge between the material past and the public imagination [7,8]. In a cultural heritage context, facial depiction can form part of an exhibition or display in the galleries, libraries, archives, and museums (GLAM) sector or part of television documentaries and media communications to introduce and contextualise an individual within a particular time in history, location, or collection [9]. However, as visualisations, these depictions are not neutral; they are interpretations shaped by scientific evidence, technological affordance, disciplinary convention, and cultural expectation [6]. Facial depictions are often developed through multi-disciplinary teams involving experts from the fields of history, genomics, armoury, arts, anthropology, and archaeology, and the resulting face is a testament to the multidisciplinary nature of such projects [10,11,12].

1.2. Technological Background

In the contemporary context, facial depictions from skeletal remains, also known as facial reconstructions, approximations, or estimations, are produced digitally by a practitioner using digitised human remains in three-dimensional (3D) software, such as ZBrush (https://www.maxon.net/en/zbrush accessed on 14 January 2026), FreeForm Modelling (https://support.geomagic.com/s/article/Freeform-Sculpt-Documentation?language=en_US accessed on 14 January 2026), and Blender (https://www.blender.org/ accessed on 14 January 2026). There have been recent developments that also utilise algorithmic processes or the genetic analysis of body fluids recovered from a scene to create facial representations of people, and there are a large number of publications that describe, compare, and evaluate these methods [13,14,15,16]. Whichever process is utilised to create the 3D facial morphology, the final step is to add textural information, such as skin details, colour, hair, clothing, and additional accessories.
We have previously described [14] ‘texturing’ as a distinct post-reconstruction phase that moves a facial reconstruction (skull-to-face model) toward a facial depiction (public-facing image). The textural processes and techniques involved vary widely, from photo-editing software for two-dimensional presentations to 3D digital software, such as Substance Painter and ZBrush, and the final facial depiction can be presented as a 2D image or 3D render or with facial movement that ranges from simple blinking to speech using Autodesk Maya (https://www.autodesk.com/uk/products/maya/overview accessed on 14 January 2026) or Blender [8,14]. The face can also be presented in AR, where the viewer can interact with and view it from different angles [17].
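As an illustration of this texturing step, the following minimal sketch uses Blender’s Python API (bpy) to load a skin albedo image and connect it to a face mesh’s material. The object name ‘FacialReconstruction’ and the texture path are hypothetical; the sketch stands in for the broader texturing workflow rather than any specific project pipeline described above.

```python
import bpy

# Create a node-based material for the skin and fetch its shader node.
mat = bpy.data.materials.new(name="SkinBase")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]

# Load a (hypothetical) skin albedo texture and wire it into Base Color.
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("/path/to/skin_albedo.png")
mat.node_tree.links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])

# Assign the material to the (hypothetical) reconstructed head mesh.
obj = bpy.data.objects["FacialReconstruction"]
obj.data.materials.append(mat)
```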
The process of digitally sculpting and texturing a facial depiction requires highly skilled creatives and is labour-intensive [14]. Recent advances in real-time, AI-powered, deepfake creator software have transformed the practical and aesthetic possibilities for the representation of and interaction with people from the past [8].
Recently, Yuan and colleagues [16] proposed a digital workflow for the creation of lifelike, interactive virtual characters that enhance the learning experience in relation to historically accurate scenes, ranging from generalised medieval citizens to historical figures. These researchers found that, by being placed alongside everyday people and objects from the past, learners developed a deeper, more nuanced appreciation of history, fostering critical thinking and empathy. This experiential learning approach enhanced retention and comprehension by making abstract historical concepts tangible and relatable.
Originally developed for the gaming and film industries, MetaHumans are high-fidelity, photo-realistically rendered, procedurally animated deepfake avatars that can include AI-driven performance capture, all within an accessible and interoperable workflow. MHC draws from an extensive library of real 3D scans of people and allows the creation of unique, photorealistic, fully rigged digital humans by blending together parts of different real people while changing facial textures and geometries and updating the underlying rig. The tool allows visual artists to rapidly and seamlessly manipulate a character’s facial features, adjust skin complexion, and select from a range of preset body types. Compared to traditional 3D workflows, which can take weeks or months, MHC significantly reduces production time, and real-time rendering in Unreal Engine enables immediate visual feedback and integration into AR, VR, and video outputs [18]. Using head blending controls, MHC allows users to create and customise the faces of avatars by blending features from up to three preset characters [18]. While this offers great flexibility in creating unique avatars, it has limitations: if specific facial features are not represented among the available presets, it can be difficult to create a face that truly matches the intended appearance. This challenge can be addressed through Unreal Engine’s ‘Mesh to MetaHuman’ plugin, which allows users to convert a custom mesh created through 3D scanning, sculpting, or standard modelling into a fully rigged MetaHuman, producing a digital doppelganger [2,19]. By using this method with 3D scans of a real individual, unique facial features can be incorporated into the MetaHuman model, resulting in a highly personalised avatar that meets specific aesthetic requirements. Details of the methodology utilised for this process are described in previous publications [8,19,20].
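The head blending principle can be illustrated conceptually as a weighted combination of preset head geometries sharing the same topology. The numpy sketch below is illustrative only: MHC’s actual blending operates on its proprietary rig and template library and is not exposed through such an interface, and the file names and weights are hypothetical.

```python
import numpy as np

# Hypothetical preset head meshes: (n_vertices, 3) arrays with shared topology.
preset_a = np.load("preset_a_verts.npy")
preset_b = np.load("preset_b_verts.npy")
preset_c = np.load("preset_c_verts.npy")

def blend_heads(presets, weights):
    """Linear blend of up to three preset heads; weights are normalised to sum to 1."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * verts for w, verts in zip(weights, presets))

# A face drawing half its shape from preset A, and the rest from B and C.
blended = blend_heads([preset_a, preset_b, preset_c], [0.5, 0.3, 0.2])
```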

1.3. Ethical Background

AI-powered deepfake avatar creator systems can create realistic, moving images of people who do not exist; these deepfakes are now widespread in everyday culture, and people are becoming aware of their use in marketing, entertainment, and social media, in addition to malicious uses, such as political propaganda and information warfare [21]. However, research suggests that fake images may erode our trust in others [22], as people perceive faces produced by Generative Adversarial Networks (GANs) as more real-looking, and more trustworthy, than genuine photos.
Early research [23] studying avatars produced using MetaHuman Creator (MHC) shows that MetaHuman faces engage most of the major habits related to face detection, identification, reading, and agency in a significantly different way from previous CG faces; despite this, experiential realism still depends on external factors related to intentionality and cultural context. Bae and colleagues [24] suggested that MetaHumans perform better, with greater user acceptance, than avatars created with traditional CGI technology, and that MetaHumans have surpassed the uncanny valley in appearance. However, this research also found that MetaHumans can still lack familiarity in terms of behaviour. In addition, Liu and colleagues [25] found that although participants identified their digital doubles as their own, they consistently did not like their avatars, especially those of realistic appearance. They were, however, less critical and more forgiving about the digital doubles of acquaintances or unknown identities.
When integrated into cultural heritage applications, MHC allows for the depiction of ancient faces not merely as static reconstructions, but as expressive, dynamic, post-mortem avatars [8], capable of interaction with the public and fostering emotive connections between audiences and the past [17,26]. In addition, this method is more efficient than traditional digital sculpting and texturing.
MHC is situated within a wider movement of AI-enhanced, XR-compatible technologies that are transforming the cultural heritage landscape. Its adoption represents a shift toward more agile, platform-based production workflows in museums and galleries, supporting digital strategies that prioritise accessibility, responsiveness, and public participation. Its synthetic aesthetic aligns with contemporary digital expectations shaped by gaming, film, and social media; though MetaHuman faces are not truly realistic, their plausibility contributes to audience immersion. This responsiveness supports collaborative storytelling and iterative design, which is crucial in the increasingly participatory GLAM sector.
In the following section, using a case-based approach, we examine how the use of AI-powered deepfake creator systems, specifically MHC and Unreal Engine, has altered existing workflows, facilitated new methods for the public presentation of historic faces, and changed cultural heritage engagement. These are illustrated by real-world applications in the GLAM sector, including hybrid methods to create 3D facial depictions, the production of multiple presentations (static, animated, Augmented Reality, interactive), and deployment for increased affordances, such as hyperrealism, agility, efficiency, and interactivity. In each case, facial reconstructions were produced digitally in 3D from digitised skeletal remains (either from 3D surface scans or Computed Tomography data) in Geomagic Freeform (version 2025.1.20, https://uk.3dsystems.com/software/geomagic-freeform accessed on 14 January 2026) using a Touch™ haptic device (https://www.3dsystems.com/haptics-devices/touch accessed on 14 January 2026). The faces were reconstructed following established, tested methods [27] by adding facial muscles from a database, which were adjusted to correspond to the underlying skeletal morphology following tissue-depth markers [28], before facial features and fat layers were sculpted according to established guidelines and combined to form a 3D mesh of the skin surface [29,30,31]. The 3D skin meshes were then exported and added to MHC using the Unreal Engine 5.6 ‘Mesh to MetaHuman’ auto-mapping plugin, where the 3D model of the head is transformed into a fully rigged facial mesh. Digital texturing was then selected and adjusted on the MetaHuman interface (see Figure 1).
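The tissue-depth marker step can be sketched as offsetting each skull landmark outward along its surface normal by a population-average soft-tissue depth. The numpy fragment below is a simplified illustration of that principle, not the Freeform workflow itself; all input files are hypothetical.

```python
import numpy as np

# Hypothetical inputs: skull landmark positions, outward surface normals at
# those landmarks, and population-average soft-tissue depths (mm) per landmark.
landmarks = np.load("skull_landmarks.npy")   # (n, 3)
normals   = np.load("landmark_normals.npy")  # (n, 3)
depths_mm = np.load("tissue_depths.npy")     # (n,)

# Normalise the normals, then offset each landmark outward by its tissue
# depth to estimate a point on the skin surface, as in peg-based methods.
unit_normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
skin_points = landmarks + unit_normals * depths_mm[:, None]
```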
Our discussion demonstrates the potential impact of technological advancements on public engagement with faces from the past in cultural heritage spaces.

2. Affordances and Accessibility in Cultural Heritage Applications

2.1. Representational Affordances

AI-powered deepfake creator systems offer opportunities for the addition of realistic human textures to 3D facial reconstructions, and a 2025 ‘Pre-Columbian Funerary Masked Skulls’ project undertaken by researchers at Academia Colombiana de Historia (Colombia) and Face Lab LJMU (UK) demonstrates the ability of MHC to produce a variety of textures that can represent globally diverse populations.
The project team studied four funerary masked skulls (dated 1216–1600 AD) from the collection at the Colombian Institute of Anthropology and History [32] and produced facial reconstructions using CT scan data. These skulls originate from pre-Hispanic Andean populations, which have been historically marginalised and misunderstood. Whilst this funerary practice was rare in Colombia, its exact cultural meaning remains unclear. These masked skulls demonstrate extraordinary workmanship and, since they are the only known examples of their kind in Colombia, they are considered of significant historical importance [32]. The masked skulls represented a range of human ages: a child (6–7 years), a young male (18–21 years), an adult male (>21 years), and an elderly woman (>60 years). MHC textures appropriate to these ages were added to the final 3D reconstructions to create realistic facial depictions (see Figure 2).
From a practical perspective, MHC can produce a digital double of a real person from a 3D scan of their face, or of an ancient face from a sculpted 3D facial reconstruction, with an anatomically accurate face shape [19]. In reality, this process is not completely accurate: the MHC algorithm introduces forced symmetry and feature smoothing, producing a MetaHuman digital double with a symmetrical and rounded head shape; this suppresses natural cranial anomalies, reducing recognisability and reliability. For example, the Colombian child demonstrated a dolichocephalic skull (long anterior-to-posterior distance relative to height), and the adult male skull demonstrated cranial asymmetry; neither of these characteristic features was maintained through the MHC process. In this case, the final depictions were presented as frontal-view images, so this did not affect the visual outcome, but 3D heads would require additional manual manipulation to correct the forced symmetry and smoothing. This may be particularly important for cases from ancient populations where deliberate cranial deformation was more common, such as the Mayan, Andean, and Mangbetu cultures [33,34,35,36].
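A simple diagnostic can quantify how much natural asymmetry the auto-mapping removes. The sketch below, assuming head meshes aligned so that the sagittal plane lies at x = 0, compares a mesh with its own mirror image; it is a hypothetical check, not part of the MHC pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def sagittal_asymmetry(verts):
    """Mean left-right asymmetry of head-mesh vertices centred on the x = 0 plane.

    Mirrors every vertex across the sagittal plane and measures the mean
    distance to the nearest original vertex; a perfectly symmetric mesh
    scores ~0, and higher values indicate preserved natural asymmetry.
    """
    mirrored = verts * np.array([-1.0, 1.0, 1.0])
    dists, _ = cKDTree(verts).query(mirrored)
    return dists.mean()

# Hypothetical usage: compare the hand-sculpted reconstruction with the
# auto-mapped MetaHuman to quantify how much asymmetry was lost.
# before = sagittal_asymmetry(np.load("freeform_head_verts.npy"))
# after  = sagittal_asymmetry(np.load("metahuman_head_verts.npy"))
```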
The MHC algorithm also struggles with features that are outside the ‘average’ range (for example, especially protruding eyes or large noses), restricting its use for the depiction of people whose features differ significantly in shape or size from the population that the algorithm was trained on. MHC cannot effectively texture 3D models that display less common structural creases/features (such as a bifid nasal tip, cleft chin, or adherent ear), and this creates a challenge for the facial reconstruction process, as some of these details are directly related to skeletal morphology and are therefore important to maintain in the resulting facial depiction. Whilst this has a minimal effect in gaming applications, it could significantly harm archaeological (or indeed forensic) depictions, where the inclusion of characteristic features is integral to recognition.
This also applies to some extent to 3D-modelled signs of ageing, such as wrinkles, hooded eyes, sagging skin, and nasolabial folds, which are smoothed during MetaHuman creation. MetaHuman age variation can be controlled through sliders in MHC, which adjust features like wrinkle density to make characters look older or younger. However, there are fewer elderly and child templates, and the AI-powered ageing variation is not linear, producing random patterns of wrinkles and age-related changes. This makes the addition of ageing signs, and the creation of very young faces, less effective. Kaate and colleagues [1] demonstrated that AI-powered deepfake avatar creator systems contain significant demographic disparities, with older age groups (>60 years) underrepresented by 64–76% relative to the average number of templates across all groups. This constrains the ability to represent accurate likenesses of older adults, particularly post-menopausal women, whose likenesses may therefore be less recognisable [37]. For example, the older Colombian individual, a female estimated to be more than 60 years old, had no dentition, a condition that causes changes to the structure of the face. MHC does not have the ability to remove some or all of the teeth or to alter the position of the mandible to represent the resting position of an edentulous person.
The same study [1] also found that, in general, deepfake avatar creator systems contain significant demographic disparities, with eighteen out of forty-eight possible demographic groups unrepresented. This research indicates that current deepfake technology lacks diversity, primarily favouring young white individuals and neglecting Asian and Middle Eastern populations. Another study [38] found that dark-skinned deepfake avatars were perceived as less realistic than pale-skinned deepfake avatars. Kaate and colleagues [1] nevertheless support MHC, stating that, “These software tools cater to demographic diversity and mitigate the monotonous nature of DAs”. However, the hair options available through MHC are less representative of diverse populations in both style and texture, with only seven of the thirty-seven options currently available depicting non-straight hair.
These process limitations risk the homogenisation of diversity and the erasure of distinctive heritage features and could lead to the misrepresentation of vulnerable or historically marginalised populations.

2.2. Technical Affordances

MetaHuman data is structured in such a way that individual component parts (assets) can be edited and utilised independently, which offers practitioners the opportunity to make selective extractions. One such example is the facial depiction of Rathlin Man, a case that highlights the use of facial hair assets. In 2024, a team from Oilean Beag Productions and Queen’s University Belfast (UK), in collaboration with Face Lab LJMU (UK), worked on a project involving skeletal remains discovered in 2006 within an undisturbed cist during excavations on Rathlin Island, Co. Antrim (Ireland).
Rathlin Man was estimated to be approximately 40–60 years of age and dated to the early Bronze Age. Phenotypic DNA analysis suggested dark brown hair, pale-intermediate skin, and hazel eyes. To align with historical records, Rathlin Man was depicted with shoulder-length, straight/wavy hair with grey flecks, and a beard. Working from a 3D surface scan of the skull, a facial depiction was produced, with skin textures and eye colour added using MHC.
The head hair was produced using traditional digital grooming techniques (XGen in Autodesk Maya), following the workflow outlined in a previous publication [14]. This method, whilst effective, requires a high level of practitioner skill and training, and was time-consuming and labour-intensive. MHC hair assets were considered as a more efficient option, but the presets, most of which are contemporary in style, did not provide the necessary style, texture, and length. In MHC, facial hair assets can be found in the preset library, including variation in hair distribution patterns, length, and style, with settings available to change the hair colour. In this case, the presets yielded eyebrow, eyelash, and beard options that were appropriate for Rathlin Man. These assets were downloaded from MHC into Unreal Engine 5.6 and aligned and edited to fit the facial reconstruction (see Figure 3). Given the intricate nature of eyebrows, eyelashes, and beards, utilising appropriate MHC assets saved considerable time on the project compared to manual methods. Whilst MHC offers preset assets that can be efficiently utilised in facial depiction, they are not always appropriate, and adopting a hybrid approach, combining manually created digital assets with MHC, produces depictions that are age-sensitive and authentic.
MHC can be used in combination with motion capture and animation technology [39] to create realistic movements for video games, movies, television, and other standard human–computer interfaces [40]. This technology can also be utilised to create 3D digital avatars directly from 3D scan models of living people or 3D digital post-mortem avatars [8] from 3D facial depictions of deceased people, using the ‘Mesh to MetaHuman’ plugin for Unreal Engine. In addition, the motion of these avatars can be driven in real time using ‘MetaHuman Animator’ in Unreal Engine 5.6.
In 2024, a post-mortem digital avatar of King Richard III was revealed at York Theatre Royal, UK (a collaboration between Face Lab LJMU and the Voice for Richard III team led by Yvonne Morley-Chisholm, expert voice teacher and vocal coach, with guidance from Professor David Crystal OBE, linguist and specialist in Original Pronunciation), and the audience viewed a movie of the medieval king presenting an address relating to the investiture of his son as the Prince of Wales [41]. The post-mortem avatar utilised the digital facial reconstruction created from his skull, and he was depicted in representative medieval clothing, wearing an ornate gown and golden circlet crown. The avatar spoke the king’s own words, with his most likely accent, vocal tone, and pronunciation (see Figure 4). A second version was created in 2025 and launched at the King Richard III Visitors Centre in Leicester (UK); this movie presented King Richard reciting a prayer. Both versions were created using the voice and performance of actor Thomas Dennis.
This project was achieved using MetaHuman Creator and Unreal Engine 5.6, along with additional CGI techniques in ZBrush (version 2022.0.5), XGen, Autodesk Maya, and Adobe Substance Painter (version 11) to produce his medieval hairstyle, clothing, and crown. The Live Link Face (LLF) application for Apple’s operating system (iOS), along with high-quality sound equipment, produced a performance capture of the actor, and ‘MetaHuman Animator’ was utilised to store and transfer the raw video and depth data into Unreal Engine with the MetaHuman plugin to produce an animation performance for application onto the digital avatar of Richard III [42]. A control rig was used in an animation blueprint that was set to the positions of the facial features for each frame of the animation (see Figure 5). To adjust the animation for selected areas at specified times, an additive control rig was then added to the Richard III MetaHuman face, as opposed to an absolute control rig, where the weight of adjustments cannot be altered [42], which would make for a less seamless outcome. This allowed the adjustment of features such as eye gaze, head position, and unilateral muscle movements for frames where the animation did not match the performance of the actor.
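The distinction between absolute and additive control rig layers can be sketched as follows: an absolute layer replaces the underlying animation curve, whereas an additive layer offsets it with an adjustable weight. This conceptual Python fragment (with hypothetical curves and weight) illustrates why the additive approach blends more seamlessly; Unreal Engine’s actual control rig evaluation is considerably more sophisticated.

```python
import numpy as np

# Hypothetical per-frame curves for one facial control (e.g. jaw open), 0..1.
base_curve     = np.load("performance_capture_curve.npy")  # from the actor
additive_curve = np.load("manual_correction_curve.npy")    # artist fix-ups

def apply_absolute(base, absolute):
    # An absolute layer simply replaces the base wherever it is keyed.
    return absolute

def apply_additive(base, additive, weight=1.0):
    # An additive layer offsets the base, and its influence can be faded
    # per frame, which is why it produces a more seamless correction.
    return np.clip(base + weight * additive, 0.0, 1.0)

corrected = apply_additive(base_curve, additive_curve, weight=0.6)
```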
An additional feature in MetaHuman Animator called Audio Driven Animation (ADA, https://dev.epicgames.com/documentation/en-us/metahuman/audio-driven-animation accessed on 14 January 2026) was used for the second performance of King Richard III. This feature became available once Unreal Engine was updated to Version 5.6 and involves using an audio file in place of performance capture footage in the ‘MetaHuman Performance’ workflow to produce an Animation Sequence. The control rig from the Audio Driven Animation was copied onto the animation produced from the original performance footage as a new additive control rig. Adjustments were made to correct any exaggeration of movement caused by the use of two control rigs. This streamlined the adjustment of the control rig to match the actor’s facial movement.
The capture of facial movements using LLF on iOS was ineffective in accurately capturing tongue movements and specific vocal sounds, such as alveolar and dental sounds. The new ADA feature significantly reduces the manual adjustments required to accurately visualise the shapes of the lips, mouth, and tongue in relation to sounds [42] (see Figure 6). This advancement greatly enhances the efficiency of integrating the actor’s facial and audio performance with the AI-driven animation of the oral region. It specifically targets the oral region while preserving other facial movements, thereby enhancing the actor’s performance and making the lip movements more believable. To produce a video output, Unreal Engine rendered 2D still images of the scene across the duration of the performance at 24 frames per second, which were then collated and visualised as motion using Adobe Premiere Pro 2024.
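The frame arithmetic and collation step can be illustrated outside Premiere Pro: a 60-second performance at 24 frames per second yields 1440 stills (60 s × 24 fps). The sketch below collates such a sequence with the widely available ffmpeg tool via Python’s subprocess module; the file paths, output name, and performance length are hypothetical, and the project’s own pipeline used Adobe Premiere Pro.

```python
import subprocess

FPS = 24
# Unreal Engine wrote one still per frame, e.g. frame_0000.png ... frame_1439.png
# for a hypothetical 60-second performance. Collate them into a video:
subprocess.run([
    "ffmpeg",
    "-framerate", str(FPS),           # interpret stills at 24 fps
    "-i", "renders/frame_%04d.png",   # numbered frame sequence
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",            # broad player compatibility
    "richard_iii_address.mp4",
], check=True)
```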
To showcase a change in the emotional state of the actor from sad to crying, the performance was rendered twice, and the two different presentations of the avatar face were blended in Adobe Premiere Pro 2024 to transition between the two emotions. The facial texture map was adjusted in Adobe Photoshop 2024, and the vein power of the eyes was adjusted in the default MetaHuman material of the eye in Unreal Engine 5.6 (see Figure 7). These adjustments allowed the avatar to appear more flushed as a proxy for crying.
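The blend between the two emotional renders amounts to a per-frame linear cross-fade. This numpy sketch illustrates the compositing principle under the assumption of two equal-length sequences of float images; the production blend itself was performed in Adobe Premiere Pro, not in code.

```python
import numpy as np

def crossfade(frames_sad, frames_crying, fade_frames):
    """Linear cross-fade between two renders of the same performance.

    Both inputs are equal-length lists of float images (H, W, 3) in [0, 1];
    the last `fade_frames` frames blend from the first render into the second.
    """
    out = list(frames_sad)
    n = len(out)
    for i in range(fade_frames):
        t = (i + 1) / fade_frames                  # ramps 0 -> 1 over the fade
        k = n - fade_frames + i
        out[k] = (1.0 - t) * frames_sad[k] + t * frames_crying[k]
    return out
```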
This project demonstrates the affordances offered by MetaHuman Creator and Unreal Engine for the creation of animated characters driven by individual human performance. This affordance transforms the opportunities in the GLAM sector, making complex animation more affordable, less time-consuming, and more accessible. Previous film-based motion/performance capture and transfer have required expensive and complex face rigs to capture facial expression, along with time-consuming animation expertise to transfer the performance onto digital characters. Examples of this can be seen in the film industry, where pioneers, such as Robert Zemeckis and Andy Serkis [43], have created animated characters using blend-shape animation, a technique that creates facial expression by animating between a series of sculpted templates. Whilst characters are fully digital creations, the animators use video footage of the actor’s performances as reference for facial expression [44,45] and design the character to closely resemble the actor in facial structure so computer-generated muscles can replicate facial expressions realistically.

2.3. Interpretative Affordances

Studies of digital replicas and avatars in virtual heritage contexts [17,46] emphasise how realism increases emotional engagement, but also raises expectations of accuracy, which can be misleading when not contextualised. While advanced digital tools such as MHC can bring ancient figures to life, enhancing public engagement with archaeology, challenges can arise in the stylistic aspects of projects, especially relating to authentic hairstyles, clothing, and accessories. A recent historical project between Ghent University (Belgium) and Face Lab LJMU (UK) in 2025 to depict remains thought to be Judith of Flanders (born 843/844 AD), the eldest daughter of the Carolingian King Charles the Bald (823 AD–877 AD) and Queen Ermentrude [47], showcases the innovative potential of technologies like MHC in the cultural heritage sector, whilst highlighting where additional CGI expertise may be necessary.
The 3D facial reconstruction of Judith of Flanders was converted into a MetaHuman using the ‘Mesh to MetaHuman’ workflow; however, obstacles in relation to Judith’s appearance were encountered, as there were no written accounts that defined her features, nor was any phenotypic DNA data available. The team adopted a creative approach by illustrating variation using multiple colour presentations for her eyes, skin, and hair. Using the material controls in Unreal Engine, elements such as eye colour, eyebrow shape, and even intricate details like vein patterns visible on the eyes were efficiently adjusted to produce a range of depictions with multiple appearances (nine in total—see Figure 8). As in the Rathlin Man case, no suitable historical hairstyles were available in the existing MetaHuman asset library; therefore, the hair was modelled manually using XGen in Autodesk Maya 2022 [14].
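Enumerating such colour variants is straightforward to script. The sketch below produces nine parameter sets from three skin tones and three eye/hair pairings; the specific colour values are hypothetical, as the actual trait combinations used for the Judith depictions are not detailed here.

```python
from itertools import product

# Hypothetical preset values; the study produced nine depictions in total.
skin_tones    = ["pale", "intermediate", "olive"]
eye_hair_sets = [("blue", "blonde"), ("hazel", "brown"), ("brown", "dark brown")]

variants = [
    {"skin": skin, "eyes": eyes, "hair": hair}
    for skin, (eyes, hair) in product(skin_tones, eye_hair_sets)
]
assert len(variants) == 9  # 3 skin tones x 3 eye/hair pairings

for v in variants:
    print(v)  # each dict parameterises one colourway of the depiction
```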
To further enrich the depiction and emphasise the status of Judith as a medieval noble woman, clothing and a headpiece were created. Since MHC does not include a library of historical clothing or accessories, these were modelled using Autodesk Maya and ZBrush 2022.0.5 and painted using Adobe Substance Painter 11. The clothing was informed by historical records provided by the research team at Ghent University, ensuring that the visualisation remained authentic to the time period and that it was engaging for a public audience (see Figure 9).
This diversity in visual appearance, afforded by the tools in MHC and Unreal Engine, served not only to fill gaps in historical knowledge but also to invite the public to engage with the uncertainties inherent in archaeological research. However, it also denotes the limitations of the software in producing very specific clothing, hairstyles, and accessories, which still need to be modelled by an artist.

2.4. Engagement Affordances

AI-powered deepfake creator systems like MHC enable users with limited or advanced artistic experience to quickly build fully rigged, photorealistic, high-fidelity digital human avatars from scratch, complete with hair and clothing, and these can easily be deployed via Unreal Engine into virtual worlds, including the metaverse, for human interaction. The technological affordances of these systems mean that similar workflows can be applied to 3D digital facial depictions of people from the past to allow them to be presented in Virtual and Augmented Reality (VR/AR) environments. This offers a range of interactive expositions for a digitally expectant public audience and can bring notable historical figures to life in a range of real-world settings. A collaborative project between Dunkers Kulturhus in Helsingborg (Sweden) and Face Lab LJMU (UK) in 2025, which created a facial depiction of medieval figure Peter Karlsson, a historic Mayor of Helsingborg [48], and presented it in AR within a museum exhibit, illustrates this.
A 3D facial reconstruction of the young mayor was created using a surface scan of the skull, and the final depiction was textured in MHC. As in the previous example, three different colourations were produced (Figure 10) and medieval clothing, including a chaperon, was created by 3D scanning real-world items on a human model using the iOS app Abound, and then further modelling using Autodesk Maya 2022 and ZBrush 2022.0.5 before being painted using Adobe Substance Painter 11. Again, historical records guided this process (see Figure 11), as these bespoke items were not available in the MHC asset library. The curatorial team at Dunkers Kulturhus also expressed an interest in the inclusion of a visualisation of a visible pathology at his lower jaw, as it was believed to be a likely cause of death; however, MHC does not facilitate the depiction of pathology and trauma, meaning it was not possible to add this to the 3D facial depiction. Instead, a 2D photo-edited version of the facial depiction was produced in Adobe Photoshop 2024 to provide a visual explanation of the condition, while complementing a 3D representation shown on a screen in the gallery.
The Dunkers Kulturhus curatorial team requested an AR version of the 3D facial depiction of Peter Karlsson to extend public interaction with his skeletal remains displayed in the exhibit. In comparison to VR, AR is a more common tool used to support learning and enhance visitor experience in museum spaces because many visitors already carry mobile devices with cameras, and the device can overlay contextual content on real-world artefacts [49,50]. The facial depiction of Peter Karlsson from MHC, together with the clothing assets, was combined into one model. The clothing and the skin were assigned two different shaders to reflect the materials, and then the assets were exported as a compressed GLB file for AR presentation using a platform developed by EC solutions. Audiences scanned a QR code using their mobile device to activate the AR experience adjacent to the exhibit (see Figure 12).
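The final delivery step, from exported GLB to a scannable code, can be sketched with the third-party Python qrcode library; the viewer URL below is hypothetical, and the exhibit itself used a platform developed by EC solutions.

```python
import qrcode  # third-party: pip install qrcode[pil]

# Hypothetical viewer URL serving the compressed GLB; visitors scan the code
# next to the exhibit to launch the AR overlay on their own device.
AR_VIEWER_URL = "https://example.org/ar/peter-karlsson"

img = qrcode.make(AR_VIEWER_URL)
img.save("peter_karlsson_ar_qr.png")
```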
Within Unreal Engine, an AI-powered MetaHuman can inhabit richer virtual scenes, making the digital avatar part of a specific environment rather than an isolated object. This enhances the affective viewer response by placing the reconstructed face in a contextual ‘world’ [8,51]. Placing a realistic-looking 3D facial depiction created in MHC ‘in situ’, near or next to the real skeletal remains using AR, can also help bridge the gap between abstract historical context and real-world artefact, developing a humanised, affective encounter that makes the historical figure more relatable and memorable [52]. This AR presentation could also attract more interest from museum visitors, especially among digitally literate audiences [53]. The presentation of 3D facial depictions as Augmented Reality avatars or overlays in close proximity to the human remains increases the interpretive value compared with decontextualised, isolated presentations [54], ultimately allowing the AR-presented avatar to become a mediator that enables discussions around facial depiction methods and stimulates conversation relating to any scientific uncertainty in the process. This transparency can help visitors understand and appreciate both the scientific and historical aspects of the individual case.
In this example, the technological affordances, specifically the ability to create high-fidelity, realistic digital avatars in MHC and Unreal Engine, and to be able to export them easily for AR presentation in the real-world, encourage museum visitors to reflect on the displayed human remains as a once living person, not a museum object. This approach also connects traditional static displays with more immersive storytelling, which can drive footfall and deeper visitor engagement [52].
AI-powered deepfake creator systems enable a more streamlined and less resource-heavy process for the creation of responsive avatars and, when utilised in combination with 3D digital facial depiction from human remains, offer unlimited opportunities for the depiction of, and interaction with, historical figures and ancient peoples, including in the metaverse. However, the use of MetaHumans for the depiction of archaeological individuals is complicated by the accessibility of the technologies; currently, they are freely available, and the ease with which an avatar can be created and then powered by an AI in the metaverse presents unknown complexities and potential misuses. The full impact of these advances on the GLAM sector is currently unknown, but the production of AI-enabled digital post-mortem avatars is likely to be favoured over other modalities due to its effectiveness and financial efficiency. The widened opportunities for digitally interactive and voice-enabled human avatars may also present critical challenges around authenticity, specifically within the spheres of history and archaeology.

3. Discussion

The adoption of AI-powered deepfake avatar creator systems in facial depiction contexts, specifically the depiction of ancient faces, has been widely praised, with multiple presentations deemed successful by both the media and the public [47,55]. Their popularity stems in part from the significant affordances these tools offer. The main affordances include realism, efficiency, agility, and interactivity.
The increasing realism of digital human avatars challenges the boundaries between reconstruction, speculation, and reanimation [8]. While the technical capabilities afforded by MHC and Unreal Engine offer unprecedented fidelity, they also bring ethical issues to the forefront. There is also a concern that the AI-powered facial expression templates in MHC may further enhance the uncanny valley effect, and if not used cautiously or in a rigorous manner, could undermine public trust in the depiction of an individual from history. These affordances are relational and contextual as they both enable and delimit interpretation [3]. However, misinterpretation could have wide-reaching implications, such as cultural insensitivity towards indigenous or colonised communities, confirmation bias in relation to ancient populations, or algorithmic bias in historical representation.
MHC’s predefined morphologies and aesthetic defaults also risk imposing normative facial templates that may unintentionally homogenise diversity. It is also far less flexible when dealing with pathological or atypical features that require more fundamental changes to craniofacial structures. While MHC streamlines workflow and enhances synthetic realism, it cannot fully replace traditional methods. Its current version lacks the flexibility to represent non-normative or historically diverse facial features, such as those of elderly individuals, people with unusual features, or members of underrepresented populations. MHC’s multiethnic, diverse, non-gendered library is a design tool, not a representative global archive, and the traces of gender and ethnic identities are simply part of a larger template palette. Research [56] suggests that MHC tools representing trans and gender-nonconforming identities and invisible disabilities have limitations and risk replicating inaccessibility and excluding persons with disabilities and gender differences.
Museums and heritage professionals can prototype multiple versions of a face quickly and respond to curatorial input more dynamically. MHC also supports non-destructive updates, allowing practitioners to refine reconstructions when new data, such as aDNA findings, become available. This increases the potential for reuse, enabling a single face to be recontextualised across future exhibits and interactive experiences, which is crucial in the increasingly responsive GLAM sector.
Additionally, Unreal Engine’s black-boxed shaders and physics systems can obscure the interpretive mechanics behind surface realism, potentially diminishing transparency in relation to decisions made by the creator. There is a danger that facial depictions created using a library of templates for skin texture, hairstyle, and expression might drift toward ‘believable defaults’ rather than reflecting interpretive uncertainty, and the public audience may not grasp the informed interpretive steps taken to achieve the polished veneer. Moreover, the public expectation of ‘realism’ in digital heritage must be balanced against ethical and epistemological concerns. Digital avatars can appear ‘clean’ and ‘perfect’, supporting the claim that the non-linear approach to MetaHuman textural selection is limiting; for example, it is not possible to define the specific location or degree of age-related changes (e.g., wrinkles) visualised on a face. Furthermore, the very features that enhance expressive fidelity, such as smooth animation and lifelike lighting, can provoke uncanny effects or overstate evidential certainty. The uncanny valley effect, where faces appear almost but not quite real, can undermine trust or evoke discomfort, and while synthetic faces are becoming more accepted, critical discourse is still necessary around their use. Recognising these tensions is integral to responsible implementation [8].
MetaHuman faces are synthetic, yet their familiarity within popular media contexts supports more intuitive learning and engagement, especially for digitally native audiences. As demonstrated by the AR presentation of the 3D facial depiction of Peter Karlsson, this potential offers an opportunity for increased accessibility and educational interaction, aligning with broader cultural heritage goals of social inclusion and cultural participation [57].
However, not all institutions can support high-tech implementations, and some curators prefer printed images due to recurring technical failures in small museums [58]. This introduces a paradox: while MHC enables dynamic, interactive experiences, it may exceed the technical infrastructure of many GLAM institutions.
In summary, although AI-powered deepfake avatar creator systems offer unprecedented levels of fidelity and creative flexibility in the production and presentation of faces from the past, they also introduce significant challenges in the heritage sector, including cultural misrepresentation, historical misinformation, and tensions around consent and authenticity. In the GLAM sector, realism is often confused with authenticity, creating the potential for misinformation and misrepresentation. The implications are significant: global homogenisation, historical marginalisation, and cultural harm.
In addition, without careful methodological transparency and critical engagement, AI-powered facial expression movement presets and synthetic facial appearance defaults may ultimately undermine public trust in historical facial depiction. We propose that MHC is not a replacement for current practice, but rather an augmentation, expanding the potential for storytelling and public engagement in the GLAM sector. In addition, the GLAM sector requires standards relating to synthetic avatars to ensure provenance documentation of interpretative choices and uncertainty disclosure.

Author Contributions

Conceptualization, C.M.W., C.Y.J.L., M.R. and S.S.; methodology, C.M.W., C.Y.J.L., M.R., C.D., T.D. and S.S.; formal analysis, C.M.W., C.Y.J.L., M.R. and S.S.; writing—original draft preparation, all authors; writing—review and editing, C.M.W., C.Y.J.L. and M.R.; visualisation, all authors; funding acquisition, C.M.W., C.Y.J.L., M.R. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

Funding for the projects included in this paper was received from the following sources: Colombian Masked Skulls—British Academy/Leverhulme Small Research Grant (SRG2223\230412); Rathlin Man—Oilean Beag Productions and Queen’s University Belfast; Judith of Flanders—Ghent University; Peter Karlsson—Dunkers Kulturhus in Helsingborg, Sweden; King Richard III—The Voice for Richard III Project and Liverpool John Moores University.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analysed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank the GLAM sector partners and Liverpool John Moores University for their continued support.

Conflicts of Interest

The authors declare no conflicts of interest. The sponsors had no role in the design, execution, interpretation, or writing of the paper.

References

  1. Kaate, I.; Salminen, J.; Al Tamine, R.; Jung, S.G.; Jansen, B.J. Is deepfake diversity real? Analysing the diversity of deepfake avatars. Expert Syst. Appl. 2025, 269, 126382. [Google Scholar] [CrossRef]
  2. Epic Games. MetaHuman Technology Brings a 10,000-Year-Old Shaman Back to Life. 16 June 2022. Available online: https://www.unrealengine.com/en-US/spotlights/MetaHuman-technology-brings-a-10-000-year-old-shaman-back-to-life (accessed on 12 November 2024).
  3. Roughley, M.A.; Wilkinson, C.M. The Affordances of 3D and 4D Digital Technologies for Computerized Facial Depiction. In Biomedical Visualisation; Rea, P., Ed.; Advances in Experimental Medicine and Biology; Springer: Cham, Switzerland, 2019; Volume 1138. [Google Scholar] [CrossRef]
  4. Smith, K.; Roughley, M.A.; Harris, S.; Wilkinson, C.M.; Palmer, E. From Ta-kesh to Ta-kush: The affordances of digital, haptic visualisation for heritage accessibility. Digit. Appl. Archaeol. Cult. Herit. 2020, 19, e00159. [Google Scholar] [CrossRef]
  5. Roughley, M.; Liu, C.Y.J.; Wilkinson, C.M.; Saleem, S.N. Using a morph-based animation to visualise the face of Pharaoh Ramesses II ageing from middle to old age. Digit. Appl. Archaeol. Cult. Herit. 2024, 35, e00377. [Google Scholar] [CrossRef]
  6. Vanni, A.; Licata, M.; Fusco, R.; Rossetti, N.; Picozzi, M. Humanizing the past: A review on the role of facial approximation in museums and its public perception. Front. Environ. Archaeol. 2025, 4, 1591662. [Google Scholar] [CrossRef]
  7. Buti, L.; Gruppioni, G.; Benazzi, S. Facial Reconstruction of Famous Historical Figures: Between Science and Art. In Studies in Forensic Biohistory: Anthropological Perspectives; Stojanowski, C.M., Duncan, W.N., Eds.; Cambridge Studies in Biological and Evolutionary Anthropology; Cambridge University Press: Cambridge, UK, 2017; Chapter 9, Volume 75; pp. 191–212. ISBN 131694302X/9781316943021. [Google Scholar]
  8. Wilkinson, C.M.; Roughley, M.A.; Shrimpton, S.L. Digital Immortality in Palaeoanthropology and Archaeology: The Rise of the Postmortem Avatar. Heritage 2024, 7, 7188–7209. [Google Scholar] [CrossRef]
  9. Appleton, A. Faces from 2400 years ago: Archaeological Museum exhibit focuses on reconstructing the faces and dignity of the Goucher Mummy (left) and the Cohen Mummy. Johns Hopkins Magazine, Fall 2018. Available online: https://hub.jhu.edu/magazine/2018/fall/mummy-facial-reconstruction/ (accessed on 4 January 2026).
  10. Suppersberger Hamre, S.; Ersland, G.A.; Daux, V.; Parson, W.; Wilkinson, C.M. Three individuals, three stories, three burials from medieval Trondheim, Norway. PLoS ONE 2017, 12, e0180277. [Google Scholar] [CrossRef]
  11. Wilkinson, C.M.; Roughley, M.A.; Moffat, R.D.; Monckton, D.G.; MacGregor, M. In search of Robert Bruce, part I: Craniofacial analysis of the skull excavated at Dunfermline in 1819. J. Archaeol. Sci. Rep. 2019, 24, 556–564. [Google Scholar] [CrossRef]
  12. Jacobs, A.M.; Irish, J.D.; Cooke, A.; Anastasiadou, K.; Barrington, C.; Gilardet, A.; Kelly, M.; Silva, M.; Speidel, L.; Tait, F.; et al. Whole-genome ancestry of an Old Kingdom Egyptian. Nature 2025, 644, 714–721. [Google Scholar] [CrossRef]
  13. Baldasso, R.P.; Moraes, C.; Gallardo, E.; Stumvoll, M.B.; Crespo, K.C.; Strapasson, R.A.P.; de Oliveira, R.N. 3D forensic facial approximation: Implementation protocol in a forensic activity. J. Forensic Sci. 2021, 66, 383–388. [Google Scholar] [CrossRef] [PubMed]
  14. Roughley, M.A.; Liu, C.Y.J. Digital 2D, 2.5D and 3D Methods for Adding Photo-Realistic Textures to 3D Facial Depictions of People from the Past. In Biomedical Visualisation; Rea, P.M., Ed.; Advances in Experimental Medicine and Biology; Springer: Cham, Switzerland, 2022; Volume 1356. [Google Scholar] [CrossRef]
  15. Navic, P.; Inthasan, C.; Chaimongkhol, T.; Mahakkanukrauh, P. Facial reconstruction using 3-D computerized method: A scoping review of Methods, current Status, and future developments. Leg. Med. 2023, 62, 102239. [Google Scholar] [CrossRef]
  16. Yuan, M.; Goovaerts, S.; Vanneste, M.; Matthews, H.; Hoskens, H.; Richmond, S.; Klein, O.D.; Spritz, R.A.; Hallgrimsson, B.; Walsh, S.; et al. Mapping genes for human face shape: Exploration of univariate phenotyping strategies. PLoS Comput. Biol. 2024, 20, e1012617. [Google Scholar] [CrossRef]
  17. Sylaiou, S.; Dafiotis, P.; Fidas, C.; Vlachou, E.; Nomikou, V. Evaluating the impact of XR on user experience in the Tomato Industrial Museum “D. Nomikos”. Heritage 2024, 7, 1754–1768. [Google Scholar] [CrossRef]
  18. Epic Games; MetaHuman. Head Blend Controls|MetaHuman Documentation|Epic Developer Community. 2025. Available online: https://dev.epicgames.com/documentation/en-us/metahuman/head-blend-controls (accessed on 4 January 2026).
  19. Lăzăroiu, G.; Gedeon, T.; Szpilko, D.; Halicka, K. Digital twin-based alternate ego modeling and simulation: Eva Herzigová as a 3D MetaHuman avatar. Eng. Manag. Prod. Serv. 2024, 16, 1–14. [Google Scholar] [CrossRef]
  20. Epic Games. Mesh to MetaHuman|MetaHuman Documentation|Epic Developer Community. 2025. Available online: https://dev.epicgames.com/documentation/en-us/metahuman/mesh-to-metahuman (accessed on 4 January 2026).
  21. Scorzin, P.C. AI Body Images and the Meta-Human: On the Rise of AI-generated Avatars for Mixed Realities and the Metaverse. IMAGE Z. Für Interdiszip. Bild. 2023, 19, 179–194. [Google Scholar] [CrossRef]
  22. Tsakiris, M. Deepfakes: Faces Created by AI Now Look More Real than Genuine Photos. The Conversation. Science & Tech. Published: 23 January 2023 12.26pm GMT. Available online: https://theconversation.com/deepfakes-faces-created-by-ai-now-look-more-real-than-genuine-photos-197521 (accessed on 12 January 2023).
  23. Giuliana, G.T. What is So Special About Contemporary CG Faces? Semiotics of MetaHumans. Topoi 2022, 41, 821–834. [Google Scholar] [CrossRef]
  24. Bae, S.; Jung, T.; Cho, J.; Kwon, O. Effects of meta-human characteristics on user acceptance: From the perspective of uncanny valley theory. Behav. Inf. Technol. 2025, 44, 731–748. [Google Scholar] [CrossRef]
  25. Liu, S.; Haque, K.I.; Yumak, Z. “I don’t like my avatar”: Investigating Human Digital Doubles. arXiv 2025, arXiv:2509.17748. [Google Scholar] [CrossRef]
  26. Lorenzoni, G.; Iacono, S.; Martini, L.; Zolezzi, D.; Vercelli, G.V. Virtual Reality and Conversational Agents for Cultural Heritage Engagement. In Conference Proceedings, The Future of Education 2025; ISBN 979-12-80225-85-6. Available online: https://unige.iris.cineca.it/handle/11567/1257160 (accessed on 14 January 2026).
  27. Wilkinson, C.M.; Liu, C.Y.J.; Shrimpton, S.; Greenway, E. Craniofacial identification standards: A review of reliability, reproducibility, and implementation. Forensic Sci. Int. 2024, 359, 111993. [Google Scholar] [CrossRef] [PubMed]
  28. Mahoney, G.; Wilkinson, C.M. Computer-generated facial depiction. In Craniofacial Identification; Wilkinson, C.M., Rynn, C., Eds.; Cambridge University Press: Cambridge, UK, 2012; Chapter 18; pp. 222–237. ISBN 9780521768627. [Google Scholar]
Figure 1. Stages showing the ‘Mesh to MetaHuman’ process: 3D model in Geomagic Freeform (left); rigged mesh in MetaHuman (middle); hairless MetaHuman (right).
Figure 2. Facial depiction process for Colombian masked skulls (1216–1600 AD).
Figure 3. Facial depiction of 4000-year-old Rathlin Man.
Figure 4. Still from the King Richard III performance; an address relating to the investiture of his son as Prince of Wales. Available online: https://youtu.be/FbH_KLMs6aY?si=QBBxElDavh65OdNW (accessed on 14 January 2026).
Figure 5. Manipulation of facial movement using the facial control rig for the post-mortem avatar of King Richard III in Unreal Engine 5.6.
Figure 6. Audio Driven Animation of the facial avatar in Unreal Engine 5.6.
Figure 7. The facial avatar of King Richard III with two different presentations of facial texture (top). Adjustment parameters in Unreal Engine (bottom left). Colour layers of the facial texture in Adobe Photoshop (bottom right).
Figure 8. The nine facial depictions of Judith of Flanders with colour variations.
Figure 9. Historical reference material (left); digital clothing and accessories modelled in Autodesk Maya and ZBrush and rendered in Unreal Engine (right).
Figure 10. Facial depiction of Peter Karlsson, including three colour variations.
Figure 11. Historical clothing reference for digital production of a chaperon in the Peter Karlsson case.
Figure 12. Augmented Reality (AR) version of the facial depiction of Peter Karlsson shown on a smartphone in the museum gallery.