1. Introduction
The protection of cultural heritage is no longer limited to physical entities but has extended into the digital realm, injecting new vitality into cultural dissemination and public engagement. However, practical challenges remain: in current digital practice, technical precision has become the primary consideration in cultural heritage restoration. Such approaches can accurately reproduce the physical attributes of artifacts but struggle to convey their underlying cultural character and emotional resonance [1]. This model not only hinders the continuous accumulation of historical knowledge but also risks placing users in a passive, closed psychological state, deepening the alienation of diverse groups, a problem most evident in the weak emotional connection among younger generations [2]. With the emergence of virtual reality and augmented reality, participants can feel as though they are interacting within historical scenes, which significantly enhances the emotional experience [3]. However, achieving both content quality and cultural suitability on these platforms remains challenging, requiring substantial time, material resources, and specialized human capital [4].
The development of Artificial Intelligence-Generated Content (AIGC) is reshaping content production paradigms, building on deep learning systems such as Generative Adversarial Networks (GANs) and diffusion models [5,6,7]. This technology can automatically generate a wide variety of content, including text, images, and 3D models, from user instructions, significantly shortening the design cycle [8,9] and greatly improving efficiency. In the field of cultural heritage, AIGC has enabled the digital restoration of the Dunhuang murals, including the creation of digital assets for virtual reality environments [10]. By improving efficiency and lowering the usability threshold, the technology makes the creation of digital cultural heritage experiences more accessible, allowing the public to participate with ease [11]. Recent surveys indicate rapid adoption of generative AI tools across creative and cultural sectors, underscoring the timeliness of this work [12].
Second, while the potential of AIGC is widely acknowledged, there is still a lack of mature practical pathways that effectively combine the efficiency of AI with the subtle expertise of human professionals [13]. How to balance automated generation with expert-guided refinement, ensuring cultural accuracy, historical fidelity, and artistic integrity, remains a core open issue [14]. Third, existing digital cultural heritage applications mostly remain at the level of passive viewing. How to systematically integrate AIGC with gamification and interactive storytelling principles to create deep, immersive, educational, and personalized experiences for diverse audiences is still a field in need of breakthroughs [15,16].
The aim of this research is to address these long-standing, unresolved issues directly. It develops and evaluates a new human–AI collaboration framework, exploring a new path for the stylized, immersive, and interactive revitalization of architectural cultural heritage. Taking the Kaiping Diaolou and surrounding villages in Guangdong Province, China, a UNESCO World Heritage site with a rich but fragmented history, as the case study, this research pioneers a new method for digital preservation. The main goal is to establish a workflow that uses AIGC for artistic style generation while relying on human expertise for cultural and historical validation.
The study pursues three specific objectives:
Design and implement a hybrid workflow that combines AIGC tools with traditional 3D modeling to produce stylized assets with cultural resonance;
Use these assets to develop a stylized VR/AR demonstrator that reconstructs historical narratives; a fully evaluated gamified system is beyond the present scope;
Conduct an empirical evaluation of the framework’s effectiveness by comparing it with traditional methods across multiple dimensions, including efficiency, esthetic quality, cultural authenticity, and user experience.
In this study, we extend digital heritage research toward interpretive and educational engagement by combining AI-generated stylization with conventional 3D modeling. This hybrid workflow aims to enhance how audiences perceive, learn, and interact with cultural heritage beyond the traditional scope of conservation and restoration.
2. Literature Review
2.1. Overview of AIGC Technology
As an important breakthrough in artificial intelligence, AIGC refers to a family of algorithms for generating various forms of content, including text, images, audio, and 3D assets [5,17]. With the development of foundational models such as Generative Adversarial Networks (GANs) [18], Variational Autoencoders (VAEs) [19], and, more recently, diffusion models [7,20], the quality and controllability of generated content have been significantly enhanced. These models synthesize outputs that are both coherent and diverse by learning the underlying distributions of vast datasets [6]. The transformative impact of AIGC is reshaping multiple industries: it not only automates complex creative tasks but also augments human creative capabilities [8,9,21]. Comprehensive research in this field indicates significant progress in generating content from two-dimensional images to complex three-dimensional outputs, which is crucial for constructing immersive virtual worlds [4,22]. This technological innovation has fundamentally altered the design process, shifting it from traditional manual creation to a human–AI collaboration model that is expected to yield higher efficiency and groundbreaking results [13,23]. See Table 1 for a comparative overview of tools.
2.2. Application of AIGC in Cultural Heritage
Beyond conservation workflows, digital heritage has progressed from early virtual-heritage exemplars [24] and large-scale 3D scanning of monumental artifacts [25] to more principled visualization and presentation guidelines [26,27,28], which collectively contextualize our focus on AI-assisted, stylized models for public interpretation. Early 2000s efforts established core capture and representation methods (e.g., reflectance transformation imaging [29] and photo-based 3D exploration [30]), while the 2010s consolidated structure-from-motion and multiview stereo [31,32], heritage/BIM-oriented modeling [33,34], and VR/AR visitor engagement research [35,36], with survey and guidance works standardizing documentation pipelines. More recently, neural scene representations have been adopted for heritage digitization [37,38]. These strands frame our AI-generated stylization pipeline and its evaluation focus.
In the field of cultural heritage preservation, AIGC provides powerful new tools for artifact preservation, interpretation, and dissemination [10]. Early applications primarily focused on digital restoration and reconstruction, using AI to repair damaged artifacts or generate high-precision 3D models for archival preservation [39,40]. For example, the digital preservation project of the Dunhuang Mogao Caves used AI to develop digital restoration solutions, aiding the sustainable protection of the site [41]. The primary goal of such projects is photorealistic accuracy, striving to create digital twins as close to reality as possible [42,43].
However, there is growing awareness that digital heritage should not merely be a faithful replica. Artistic representation can convey emotion, historical atmosphere, and cultural identity in ways that photorealism cannot [44,45]. An emerging methodology injects specific artistic styles into digital assets through AIGC: tools such as MidJourney can transform abstract design concepts into tangible visual forms for products or architectural spaces [12,46,47]. This capability is significant for cultural heritage preservation; the esthetic qualities of ink-wash painting, for instance, can evoke specific cultural contexts and artistic traditions. Additionally, AIGC based on large language models (LLMs) provides powerful tools for narrative construction [48]. LLMs can generate dynamic scripts, character dialog, and informational content, weaving fragmented historical elements into cohesive and engaging storylines and transforming passive viewing into active participation [49,50]. The challenge, however, lies in ensuring that these AI-generated styles and narratives maintain cultural and historical authenticity, which requires a robust human validation mechanism [14,51].
2.3. Immersive Technologies and Gamification for Audience Engagement
The true value of digital heritage ultimately depends on its ability to engage audiences effectively. Immersive technologies such as virtual reality and augmented reality are designed for exactly this goal, offering users an immersive historical experience that traditional media cannot match [1,3,52]. VR allows users to travel through time and space, immersing them in carefully restored historical sites, while AR adds further dimensions to on-site visits by overlaying digital information and interactive elements on real-world spaces [37,53]. The synergy between AIGC and these immersive platforms is particularly striking: AIGC can rapidly inject dynamic, responsive, and personalized content into virtual worlds, making the user experience more vivid and engaging [11,54,55].
To further enhance engagement, especially among young audiences, many digital heritage projects have incorporated gamification principles [15,56]. This includes game elements such as interactive tasks and reward mechanisms that motivate users and promote a more active, exploration-based learning process [2,16]. Studies consistently show that, compared with passive information delivery, interactive digital storytelling and gamified tasks significantly enhance user engagement and knowledge retention [57,58]. A common pitfall, however, is that game mechanics are applied superficially, with little relevance to the heritage themes [59]. The key challenge therefore lies in designing meaningful interactions that are intrinsically linked to heritage narratives, stimulating critical thinking and emotional resonance rather than mere task completion [60,61]. This requires a deep and thoughtful integration of narrative design, interaction design, and the specific cultural context of the heritage itself [62,63,64].
In this regard, although AIGC holds great promise, its deep application in cultural heritage, especially for architecture with both complex spatial structures and profound cultural symbolism, still presents several research gaps. First, existing digital practices lean heavily toward “technological restorationism”, with insufficient exploration of “stylized artistic expression” beyond physical realism [46]. For example, the esthetic style of traditional Chinese ink painting, expressive rather than realistic, may evoke cultural resonance more effectively than high-precision models, conveying the spirit and meaning of Eastern philosophy and achieving a transcendence from “representation” to “expression” [44].
3. Methodology
3.1. Research Design
This study uses a mixed-methods design combining quasi-experimental comparison with qualitative analysis to implement and validate a human–AI collaboration system for stylizing cultural heritage models. The analysis proceeds in three stages: framework development, framework implementation, and evaluation of the framework's effectiveness.
The overall research framework is shown in Figure 1. The framework begins with the collection and preliminary processing of design data, followed by the main production phase, which uses a dual-track system combining a traditional design pathway with an innovative AIGC hybrid pathway. The outputs of both pathways then enter the validation phase, where they are assessed through quantitative indicators and qualitative feedback from different user groups. Based on the validation results, the framework is adjusted dynamically to support future application.
3.2. Design Participants
This study combines qualitative and quantitative assessment to explore the application of AIGC technology in immersive cultural heritage design. We recruited six design participants and divided them into three groups for comparative testing of the different workflows, recording modeling time for each. Group A, the AIGC professional group, consisted of two professional designers with over five years of 3D modeling experience who adopted the proposed AIGC hybrid workflow. Group B, the traditional-tool professional group, comprised two professional designers with similar experience who completed the same design challenge using only traditional modeling software. Group C, the non-professional AIGC group, consisted of two users with no professional design experience who received brief AIGC tool training before performing the same tasks.
Participant information and the relevant tools are detailed in Table 2, and the modeling experiment results are shown in Table 3. Workflow evaluation combined direct observation with post-task interviews, focusing on user–tool interaction and the cognitive experience of the creative tools.
After the experiment, the design participants completed a detailed experience feedback questionnaire (Table 4). The questionnaire covered multiple dimensions, including workflow efficiency, tool usability, and overall satisfaction, with workflow efficiency and usability rated on a 10-point scale. Subsequently, we invited all design participants to focused interviews exploring the challenges they encountered with the new technologies versus traditional methods. The interviews centered on three themes: the impact on creativity, improvements in work efficiency, and suggestions for future tool enhancements. We conducted semi-structured interviews following a predefined guide (available on request); sessions lasted 20–30 min and were audio-recorded and transcribed. Two coders performed thematic analysis with an initial codebook and reconciled differences by discussion to improve transparency and rigor.
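The two-coder reconciliation step can be audited with a standard agreement statistic before discussion. A minimal sketch of Cohen's kappa is shown below; the theme labels and coded excerpts are hypothetical illustrations, not data from the study:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Inter-coder agreement for two coders labeling the same excerpts."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    labels = set(codes_a) | set(codes_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two coders labeling eight interview excerpts
a = ["efficiency", "control", "control", "creativity",
     "efficiency", "barrier", "control", "creativity"]
b = ["efficiency", "control", "creativity", "creativity",
     "efficiency", "barrier", "control", "barrier"]
kappa = cohens_kappa(a, b)
```

Values above roughly 0.6 are conventionally read as substantial agreement; disagreements flagged this way are natural candidates for the reconciliation discussion.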
3.3. Technical Implementation and Tools
This study employed a combination of AIGC and traditional software tools, detailed in Table 5, selected for their functionality, accessibility, and relevance to the workflow objectives.
3.4. Human–AI Collaborative Workflow
The core of this study is to construct a structured, multi-step workflow aimed at creating a stylized, immersive model representing the Ruishi Tower of the Kaiping Diaolou. The process begins with the data collection and digital twin creation phase: high-resolution images of the Ruishi Tower were captured using a DJI Mavic 3 drone and processed through Agisoft Metashape software to generate a geometrically accurate 3D model.
The next phase involves generating narrative and style concepts: after analyzing the historical context, GPT-4.5 generates initial story concepts and descriptive keywords such as “Frosted Ink” and “Scattered Memories,” which serve as creative prompts for MidJourney, ultimately establishing the target ink-wash esthetic style. In the subsequent AIGC-driven stylization phase, the 3D model is rendered from multiple perspectives, and the renders are fed as structural inputs to a ControlNet-conditioned diffusion model to generate stylized elevation renderings. These renderings are then processed by TripoAI to form an initial stylized 3D mesh.
Finally, expert-level optimization is performed: the original stylized mesh is imported into Blender software, where professional designers carry out key manual refinements, including mesh cleaning, correcting AI-generated errors, and adjusting details to enhance cultural authenticity. The final step is the development of immersive scenes and gamified interactions: the refined model is imported into Unity to design a gamified AR experience, allowing users to interact with a physical miniature model of the tower and trigger animations of historical events, with narrative elements generated by GPT-4.5.
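The hand-off from GPT-generated keywords to the image-stylization stage can be sketched as a small prompt-assembly helper. The template, helper name, and default strength below are illustrative assumptions rather than the authors' code, although `controlnet_conditioning_scale` mirrors the parameter name used by common ControlNet diffusion pipelines:

```python
def build_stylization_prompt(subject, keywords, strength=0.8):
    """Assemble a text prompt plus conditioning settings for a
    ControlNet-guided stylization step (illustrative sketch only)."""
    style = ", ".join(keywords)
    prompt = (f"{subject}, {style}, traditional Chinese ink-wash painting, "
              f"soft gradients, expressive brushwork")
    # The conditioning scale trades structural fidelity to the rendered
    # elevation against stylistic freedom; values near 1.0 keep geometry dominant.
    assert 0.0 <= strength <= 1.0
    return {"prompt": prompt, "controlnet_conditioning_scale": strength}

# Keywords mirror those reported in the study; the subject string is assumed
cfg = build_stylization_prompt(
    "Ruishi Tower, Kaiping Diaolou",
    ["Frosted Ink", "Scattered Memories"],
)
```

In a full pipeline, `cfg` would parameterize each per-view generation call, so that all elevations share one consistent prompt while the rendered view supplies the structural control image.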
3.5. Evaluation Methods
To thoroughly assess the feasibility of the framework, this study adopts a multidimensional assessment approach. The expert panel consists of six specialists, three male and three female, all of whom have worked at the front lines of architectural design, cultural heritage, and AIGC generative design research. The panel systematically evaluated the final models submitted by the three design participant groups against the weighted scoring criteria detailed in Table 6.
The five dimensions of the evaluation system (cultural authenticity, esthetic quality, technical accuracy, innovation, and usability) were based on relevant studies in digital cultural heritage assessment [11]. The weights were determined through internal discussion within the expert panel, which unanimously agreed that cultural authenticity is the most critical factor in cultural heritage projects (weight 30%). Each criterion uses a five-point scale, while time efficiency is recorded as a separate performance metric. For clarity and simplicity of calculation, the weights are set as integers. Given the small expert panel (n = 6), formal reliability statistics would not be meaningful, so the reliability of the evaluation rests primarily on clear scoring criteria and expert consensus. Weights for Table 6 were established via a structured two-round expert consensus: panelists independently proposed pairwise importance rankings, then discussed them to reach agreement; integer weights were set to preserve the agreed proportions.
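Applying the weighting scheme is then mechanical. In the sketch below, only the 30% weight for cultural authenticity comes from the text; the remaining integer weights are illustrative placeholders standing in for the values in Table 6:

```python
# Hypothetical weights: only cultural_authenticity = 30 is stated in the
# text; the other four integers are illustrative assumptions.
WEIGHTS = {
    "cultural_authenticity": 30,
    "esthetic_quality": 20,
    "technical_accuracy": 20,
    "innovation": 15,
    "usability": 15,
}

def weighted_score(ratings):
    """Combine five-point-scale ratings into one weighted score out of 5."""
    assert sum(WEIGHTS.values()) == 100
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS) / 100

score = weighted_score({
    "cultural_authenticity": 4.0,
    "esthetic_quality": 4.5,
    "technical_accuracy": 3.5,
    "innovation": 4.0,
    "usability": 4.0,
})
```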
This study also distributed questionnaires to 122 general survey respondents to understand the public's views on AIGC-generated ink-wash style models in cultural heritage displays. To ensure consistency with the expert evaluation, the survey adopted the same five dimensions: cultural authenticity, visual appeal, technical details, innovation, and usability, each measured on a five-point scale. Reliability analysis showed high internal consistency across all items (Cronbach's α = 0.87). Validity testing through factor analysis showed strong convergent validity, with factor loadings of all major constructs on their corresponding dimensions exceeding 0.65. Bartlett's test of sphericity confirmed significant correlation patterns (p < 0.001), and the Kaiser-Meyer-Olkin measure of 0.82 confirmed sampling adequacy.
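For reference, Cronbach's α can be computed directly from the item-response matrix. The sketch below uses a small synthetic matrix to illustrate the calculation; the scores are not the survey data:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    # Sum of per-item variances vs. variance of respondents' total scores
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Five synthetic respondents rating three items on a five-point scale
scores = [[4, 4, 5], [3, 3, 3], [5, 5, 5], [2, 3, 2], [4, 5, 4]]
alpha = cronbach_alpha(scores)
```

Values of α above 0.8, as reported here (0.87), are conventionally taken to indicate good internal consistency for survey scales.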
Quantitative data from expert scoring and public surveys were processed using descriptive statistics and analysis of variance (ANOVA). For the ANOVA, we treated the respondent's age group as the independent variable and the scores on the five evaluation dimensions as dependent variables, with a significance level of p < 0.05 for testing perceptual differences between age groups. Open-ended questions and qualitative feedback from post-task interviews with design participants were analyzed using thematic coding to provide deeper insights.
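The age-group comparison is a standard one-way ANOVA. A self-contained sketch of the F statistic is shown below; the group ratings are synthetic values chosen only to illustrate the computation (in practice the p-value would come from the F distribution, e.g., `scipy.stats.f.sf`):

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across independent groups."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    k, n = len(groups), all_vals.size
    # Between-group and within-group sums of squares
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic authenticity ratings for three hypothetical age groups
f_stat = one_way_anova_f([[5, 4, 4, 5], [4, 4, 3, 4], [3, 3, 4, 3]])
```

A large F indicates that between-group variation dominates within-group variation; the associated p-value is then compared against the 0.05 threshold used in the study.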
The same instrument was also used to compare the ink-wash style models produced by the three design participant groups (A, B, and C) and to capture respondents' needs and preferences for digital heritage displays.
4. Results
This section presents a horizontal comparison of the outputs of each design workflow, followed by a detailed analysis of the full immersive system experience. The results are organized in three parts: a quantitative comparison of the design solutions from each group, qualitative findings from the design implementation process, and a quantitative analysis of the feedback results.
4.1. Workflow Performance: Efficiency and Quality
A horizontal comparison of the outputs of each workflow reveals significant differences in production efficiency and output quality. As shown in Table 7, the AIGC hybrid workflow adopted by Group A significantly improved efficiency, with an average completion time of only 68.5 min, compared with 274.5 min for the traditional workflow of Group B, a reduction in production time of approximately 75%. Even the non-professional users in Group C, using AIGC tools, averaged only 137.5 min per task, significantly faster than the professionals using traditional methods (Table 6).
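The reported reduction of roughly 75% follows directly from the group means; a quick arithmetic check:

```python
def percent_reduction(baseline_min, new_min):
    """Relative time saving versus the traditional baseline, in percent."""
    return (baseline_min - new_min) / baseline_min * 100

# Mean completion times (minutes) reported for the three groups
group_a, group_b, group_c = 68.5, 274.5, 137.5
saving_a = percent_reduction(group_b, group_a)  # AIGC professionals vs. traditional
saving_c = percent_reduction(group_b, group_c)  # AIGC non-professionals vs. traditional
```

This gives a saving of about 75% for Group A and about 50% for the non-professional Group C relative to the traditional workflow.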
In terms of quality, Group A leads with an average score of 7.86, slightly ahead of the traditional group's 7.65. Group B's traditional workflow performed best on cultural authenticity and technical accuracy, reflecting the fine-control advantage of manual modeling, whereas the AIGC hybrid workflow outperformed on esthetic quality and innovation. This indicates that human–AI collaboration successfully balanced creative expression and technical implementation. Although Group C had the lowest overall score (6.60), it still achieved notable results in innovation, demonstrating that AIGC tools enable users with limited technical ability to produce highly creative work.
Figure 2 presents a visual comparison of the scores across five quality dimensions, comparing the core strengths and weaknesses of the two workflows: the workflow using Group B’s standard method showed better accuracy and robustness, while the workflow combining AIGC technology from Group A demonstrated clear advantages in visual effects and innovation value.
4.2. Qualitative Insights into the Creative Process
After completing the interviews with design participants, we collected substantial qualitative data regarding their experiences with the different workflows. Thematic analysis of the interview transcripts identified four core themes, detailed in Table 8.
Participants in Group A generally praised the efficiency of the AIGC workflow; some designers specifically noted that the ability to generate dozens of style variations in minutes completely transformed the conceptualization process, allowing the team to focus on artistic direction rather than tedious manual texture work. However, they also emphasized a trade-off in fine control: the AI struggled with specific historical textures on architectural facades, making manual intervention in Blender an essential part of the process.
In contrast, design participants in Group B felt they had complete control over every detail, which allowed them to focus more on historical authenticity. However, they also acknowledged that the process was time-consuming and labor-intensive, limiting their ability to explore different creative ideas.
The AIGC workflow was also viewed as a powerful tool for breaking through creative bottlenecks and stimulating innovation. Group A design participants mentioned that AIGC acted like a creative partner, sometimes generating unexpected results that pushed designers in new directions. This aligns with their high scores in innovation. Design participants in Group C, consisting of non-professionals, also expressed similar feelings. One member mentioned that although they did not understand modeling techniques, as long as they described the desired effect, AI could transform it into a concrete form, which they found incredibly empowering.
Despite their enthusiasm, Group C participants faced many challenges. The biggest was generating effective text prompts, i.e., writing instructions that precisely guide the AI. One design participant admitted that getting the AI to understand the ink-wash style required repeated trial and error, as the results were often too generic or misinterpreted cultural symbols. This indicates that while AIGC lowers the technical barrier, it demands new skills in semantic communication with AI and still requires professional supervision to ensure cultural appropriateness and output quality.
4.3. Participant Reception and Immersive Experience Evaluation
This study collected data through an online questionnaire after obtaining informed consent, receiving 122 valid responses. The respondents fell into three age groups: 68% were aged 18–25, 26% were aged 26–35, and less than 6% were over 36 years old.
The survey used a five-point rating scale to evaluate the five core dimensions of the AI-generated ink-wash models: cultural authenticity, visual appeal, technical details, innovation, and usability. Overall, the user experience received positive feedback, with an average satisfaction score of 4.12. The Cronbach's α coefficient across all survey items was 0.87, indicating good internal consistency and confirming that the items consistently measure the targeted constructs. Factor analysis showed strong convergent validity, with all major constructs loading above 0.65 on their corresponding dimensions; Bartlett's test of sphericity confirmed significant correlation patterns (p < 0.001), and the Kaiser-Meyer-Olkin measure of 0.82 confirmed sampling adequacy.
The analysis of variance (ANOVA) revealed significant differences in the perception of cultural heritage across age groups. As shown in Table 7, users aged 18–25 rated cultural authenticity the highest, with an average score of 4.21, indicating that they place more importance on the narrative expression and stylistic representation of cultural heritage. Qualitative feedback corroborated this: this age group ranked “historical accuracy” as the highest priority, criticized the current “style appears too uniform,” and suggested enhancing cultural depth by “adding era-specific design elements” (Table 9).
In contrast, the middle-aged group (26–35 years old) placed more emphasis on technical accuracy, with an average score of 4.05, and held higher standards for visual presentation. Their feedback centered on “technical refinement,” with major criticisms pointing to rendering flaws such as “edges looking pixelated” and suggestions to improve the rendering algorithm to enhance visual quality.
The most significant difference appeared in usability: users aged 36 and above reported noticeably greater difficulty, with an average score of 3.29, highlighting the urgent need for improved interface intuitiveness for this group. Their core demand was “simplified controls,” as they generally found the current version to have “too many steps,” indicating a need for process optimization (Table 10).
5. Discussion
The findings of this study provide empirical evidence for the transformative potential of human–AI collaboration frameworks in the field of cultural heritage digital preservation. This paper, in conjunction with existing literature, discusses the core research findings, focusing on the dynamic balance between efficiency and control, the tension between authenticity and esthetic innovation, and the socio-technical impacts of AIGC applications.
5.1. Hybrid Workflow: Balancing Efficiency, Control, and Quality
The most striking result is the significant efficiency gain of the AIGC hybrid workflow, which reduced modeling time by approximately 75% compared with traditional methods. This aligns with broader findings that AIGC accelerates design and innovation cycles [8,9]. However, this study empirically demonstrates that the highest overall quality did not come from pure automation but from the hybrid approach of Group A, which rapidly generated ideas and textures with AI and then refined them manually with human expertise, thereby surpassing the traditional Group B. This finding empirically supports the necessity of the human–AI collaboration model [14].
Qualitative data revealed the core of this dynamic: the trade-off between AI efficiency and the precise control of traditional tools. Professionals using the traditional workflow (Group B) scored highest on cultural authenticity and technical accuracy, thanks to meticulous manual control. This finding is crucial: it suggests that in cultural heritage preservation, where fidelity is paramount, current AIGC tools cannot fully replace expert craftsmanship. Survey results confirmed that the hybrid AIGC group achieved significant esthetic innovation. The study also reveals a paradigm shift: the procedural operations of automated AI workflows reduce designers' cognitive load, allowing them to focus on strategic creative work. This form of collaboration elevates AI from a passive tool to an active co-creator: AI systems generate vast combinations of ideas, which experts then precisely adjust and professionally revise. The mutual benefit of this collaboration builds a scalable digital practice framework for cultural heritage preservation.
5.2. Stylization, Authenticity, and Interpretation
This exploration differs distinctly from the realism strategies common in digital heritage preservation. This study focuses on the ink-wash artistic style, which was consistently praised in esthetic evaluations, and the younger generation expressed strong recognition of this narrative-driven style. The study shows that stylized artistic presentation can create engaging, heartfelt expression, supporting the popularization and emotional transmission of cultural heritage. However, the AIGC model scored lower on cultural authenticity than the traditional model, and designers noted that the AI could not accurately interpret specific cultural symbols, highlighting a key challenge: AIGC models trained on vast generic datasets often lack the specific cultural literacy needed to understand historical details [4,23]. This further confirms that complete reliance on automation is not feasible if stylized cultural heritage projects are to be both successful and responsible. Human experts serve not only to perfect technical details but also to act as cultural translators, ensuring that AI's creative outputs remain consistent with authentic historical and cultural knowledge. This positioning makes digital representations not perfect replicas but effective and valuable interpretations of cultural heritage.
5.3. Democratized Design and the Digital Divide
The performance of the non-professional group (Group C) strongly validates the democratizing potential of AIGC. Despite having no modeling experience, these users produced results faster than professionals using traditional tools, and their innovation was highly praised. This indicates that AIGC can effectively lower the technical barriers to creative work, allowing more people to participate in the preservation and reinterpretation of cultural heritage [48]. However, this democratization is not without challenges. The difficulties non-professionals faced with prompt engineering indicate that the required skills have shifted from technical operation to effective semantic communication with AI. Additionally, users aged 36 and above reported significantly different user experience and satisfaction, revealing a key digital divide. The preference for technical accuracy among the 26–35 age group and the younger generation’s focus on narrative authenticity suggest that users’ expectations for digital experiences are deeply shaped by demographic characteristics and digital literacy. This finding aligns with broader conclusions in human–computer interaction research regarding generational differences in technology adoption and use [65]. Designing inclusive digital heritage experiences therefore requires not only accessible content but also adaptive interfaces and interaction models that meet the diverse needs and preferences of multi-generational audiences.
5.4. Practical Implications and Improvement Pathways
From an applied perspective, this study outlines several key directions for enhancing the effectiveness of AI ink-wash painting generation tools. User evaluations revealed a widespread divergence in how participants weighed the cultural expression of digital heritage art against its technical performance. While the esthetic value and technical accuracy of the generated works were well received, with mean scores of 4.0 and 3.9, respectively, the significant divergence in evaluations of creative expression and historical accuracy pinpoints the core areas for future improvement.
Furthermore, users’ evaluation criteria were significantly influenced by age differences, providing refined user profiles for product iteration. The younger user group was highly focused on cultural authenticity, with many believing that the generated results lacked easily recognizable specific historical elements. In contrast, users aged 26–35 were more focused on the precision of technical implementation, advocating for higher-resolution rendering effects and efforts to eliminate image artifacts. For older users, ease of use was a significant barrier, with many participants in this age group reporting that the menu system was too complex and the operating instructions unclear.
Based on these findings, the data strongly support a phased, group-specific improvement strategy to systematically enhance the overall performance of the tool. To address younger users’ concerns regarding cultural authenticity, priority should be given to expanding the training database with rich cultural symbols and patterns. To meet middle-aged users’ demand for technical detail, more advanced customization features should be introduced, such as adjustable brush textures or collaborative editing tools. To solve usability issues for older users, the interface should be radically simplified. Specific measures could include larger interactive buttons, voice command input, or intelligent guided tutorials, creating a more inclusive design environment.
To translate these strategic directions into actionable steps while balancing technical feasibility, a clear improvement roadmap was formulated. High-priority action items address the most common and critical issues, including a “simplified navigation menu” for all users and a “historical reference mode” for the 18–35 age group to address their concerns about historical authenticity. Medium-priority items target more specific needs, such as “material customization” features for tech-oriented users aged 26–35 and “voice-guided tutorials” for users aged 36 and above to reduce their operational barriers (Table 11).
5.5. Limitations of the Study
This study has several limitations. First, the comparative workflow experiment involved only six participants, which is relatively small and limits the generalizability of the performance metrics; future research should include a larger and more diverse group of designers. Second, although the user survey had a larger sample, its age distribution was unbalanced: participants over 36 years old were underrepresented, making up less than 6% of the total, which may have biased the overall satisfaction ratings. The geographic focus was also centered on urban areas in China, with limited consideration of accessibility in low-tech environments. Future research should therefore expand the sample of older participants and include rural user groups, and longitudinal studies tracking tool updates could assess whether interface optimization effectively narrows usage disparities between age groups. Third, this study focused on a single case, the Kaiping Diaolou, with its specific esthetic style; while the framework is designed to be scalable, applying it to other types of cultural heritage or cultural contexts may yield different results. Finally, the field of AIGC is evolving rapidly, and the specific models and tools used here will eventually be surpassed, necessitating ongoing research with more advanced technologies to reassess these findings. In particular, the public user survey did not include a control condition using traditionally modeled assets; future studies will add such a baseline to enable direct comparisons of AIGC versus conventional methods.
6. Conclusions
The core contribution of this study lies in constructing and validating an AIGC-enabled framework aimed at surpassing “photorealistic” reproduction, thereby advancing digital practices in architectural heritage from static, expert-driven “technical restoration” to dynamic, public-participation-based “value co-creation.” By integrating AIGC tools, expert knowledge, and gamified experiences, this study provides an empirical and innovative pathway for achieving the efficient, deep, and sustainable revitalization of cultural heritage. The main findings and theoretical contributions can be summarized in the following three points:
First, the study confirms the superiority of the “human–AI hybrid” workflow in balancing efficiency and quality. The study establishes an efficient AI collaboration workflow that, while maintaining high-quality standards, reduces production time by approximately 75%. This framework successfully integrates AI-driven creative generation with manual expert revision, ensuring cultural and historical accuracy, and its overall quality outperforms both purely traditional methods and non-professional AIGC approaches. This not only provides quantitative evidence for AI applications in design but, more importantly, reveals that the optimal role of AI is not to replace humans, but to serve as a creative catalyst and efficiency amplifier, forming a symbiotic relationship with human experts.
Second, the study establishes the unique value of “stylized expression” in cultural heritage communication. The results confirm that stylized expression, exemplified by ink-wash painting, can create highly attractive and visually pleasing cultural heritage experiences, especially favored by younger audiences. This artistic approach emphasizes emotional resonance and cultural identity rather than mere technical precision, offering a promising interpretive path in a digital heritage field long dominated by realism. It challenges the assumption that photorealistic “digital twins” are the only legitimate approach and advocates for diversified esthetic strategies in digital preservation, chosen according to heritage characteristics and communication goals.
Third, the study reveals the “democratization of design” driven by AIGC and the accompanying issue of the “digital divide.” The findings show that AIGC effectively lowers the technical barriers for non-professionals in creative heritage production. However, this also highlights the new skill requirements, shifting from “technical skills” to “semantic communication abilities” (prompt engineering), and reveals significant differences in technology acceptance and user experience across different age groups. This finding has important socio-technical implications, reminding us that the widespread adoption of technology does not automatically bridge the digital divide, and inclusive design must fully account for generational differences, developing adaptive interaction interfaces.
Future research should prioritize the following directions. First, the framework should be tested at more cultural heritage sites to further validate its scalability and adaptability. Second, more intuitive user interfaces and interaction methods for AIGC tools should be developed, particularly for non-professional users and older age groups, to address the observed digital divide. Third, future work should focus on training AI models with localized, culturally specific datasets, which would enhance their ability to understand and accurately present subtle cultural symbols, reducing the need for manual correction. Finally, longitudinal studies are needed to assess the long-term impact of immersive gamified experiences on cultural knowledge retention and the formation of cultural identity across demographic groups. In addition, future work will extend the evaluation through structured working groups and larger surveys to test cross-generational and cross-disciplinary usability; participants will be recruited according to both age and general technology literacy, enabling comparisons of interpretive outcomes across varied user profiles. By continuously exploring the deep integration of technology and cultural preservation, this field can fully unleash the potential of AI, preserving and revitalizing shared cultural heritage for future generations.
Author Contributions
Conceptualization: C.L., W.W. and Z.Y.; Methodology: C.L., W.W., Z.Y. and L.L.; Software: Z.Y. and L.L.; Validation: C.L., Z.Y. and L.L.; Formal analysis: C.L. and W.W.; Investigation: Z.Y., L.L. and C.L.; Resources: C.L. and W.W.; Data curation: C.L. and W.W.; Writing—original draft preparation: C.L. and W.W.; Writing—review and editing: C.L. and W.W.; Visualization: Z.Y. and L.L.; Supervision: J.S.; Project administration: C.L.; Funding acquisition: C.L. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
The study was conducted in accordance with the Declaration of Helsinki. Ethical review and approval were waived for this study because the research involved anonymous questionnaires and interviews on non-medical design activities, collected no identifiable personal data, posed minimal risk, and reports only aggregated findings.
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study. No identifiable images or personal data of participants are included in this article.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Mortara, M.; Catalano, C.E.; Bellotti, F.; Fiucci, G.; Houry-Panchetti, M.; Petridis, P. Learning cultural heritage by playing digital games. J. Cult. Herit. 2014, 15, 318–325. [Google Scholar] [CrossRef]
- Champion, E. The role of ‘aesthetics’ in creating engaging and meaningful digital heritage. J. Cult. Herit. 2021, 52, 147–157. [Google Scholar]
- Banfi, F. The Evolution of Interactivity, Immersion and Interoperability in HBIM: Digital Model Uses, VR and AR for Built Cultural Heritage. ISPRS Int. J. Geo Inf. 2021, 10, 685. [Google Scholar] [CrossRef]
- Liu, J.; Huang, X.; Huang, T.; Chen, L.; Hou, Y.; Tang, S.; Liu, Z.; Ouyang, W.; Zuo, W.; Jiang, J.; et al. A Comprehensive Survey on 3D Content Generation. arXiv 2024, arXiv:2402.01166. [Google Scholar] [CrossRef]
- Foo, L.G.; Rahmani, H.; Liu, J. AI-Generated Content (AIGC) for Various Data Modalities: A Survey. arXiv 2023, arXiv:2308.14256. [Google Scholar] [CrossRef]
- Karras, T.; Laine, S.; Aila, T. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4401–4410. [Google Scholar]
- Ho, J.; Jain, A.; Abbeel, P. Denoising diffusion probabilistic models. Adv. Neural Inf. Process. Syst. 2020, 33, 6840–6851. [Google Scholar]
- Verganti, R.; Vendraminelli, L.; Iansiti, M. Innovation and design in the age of artificial intelligence. J. Prod. Innov. Manag. 2020, 37, 212–227. [Google Scholar] [CrossRef]
- Wu, J.; Cai, Y.; Sun, T.; Ma, K.; Lu, C. Integrating AIGC with design: Dependence, application, and evolution-a systematic literature review. J. Eng. Des. 2024, 36, 758–796. [Google Scholar] [CrossRef]
- Wang, X.; Zeng, M.L.; Gao, J.; Zhao, K. Intelligent Computing for Cultural Heritage: Global Achievements and China’s Innovations; Routledge: London, UK, 2024. [Google Scholar]
- Innocente, C.; Ulrich, L.; Moos, S.; Vezzetti, E. A framework study on the use of immersive XR technologies in the cultural heritage domain. J. Cult. Herit. 2023, 62, 268–283. [Google Scholar] [CrossRef]
- Yoo, S.; Lee, S.; Kim, S.; Hwang, K.H.; Park, J.H.; Kang, N. Integrating deep learning into CAD/CAE system: Generative design and evaluation of 3D conceptual wheel. Struct. Multidiscip. Optim. 2021, 64, 2725–2747. [Google Scholar] [CrossRef]
- Tao, W.; Gao, S.; Yuan, Y. Boundary crossing: An experimental study of individual perceptions toward AIGC. Front. Psychol. 2023, 14, 1185880. [Google Scholar] [CrossRef]
- Dell’Unto, N. Experiential and experimental archaeology in the digital age: A reflection on the role of the human-in-the-loop. J. Archaeol. Method Theory 2021, 28, 837–857. [Google Scholar]
- Arnab, S.; Lim, T.; Carvalho, M.B.; Bellotti, F.; de Freitas, S.; Louchart, S.; Suttie, N.; Berta, R.; De Gloria, A. Mapping learning and game mechanics for serious games analysis. Br. J. Educ. Technol. 2015, 46, 391–411. [Google Scholar] [CrossRef]
- Anderson, E.F.; McLoughlin, L.; Liarokapis, F.; Peters, C.; Petridis, P.; de Freitas, S. Developing serious games for cultural heritage: A state-of-the-art review. Virtual Real. 2010, 14, 255–275. [Google Scholar] [CrossRef]
- Bommasani, R.; Hudson, D.A.; Adeli, E.; Altman, R.; Arora, S.; von Arx, S.; Bernstein, M.S.; Bohg, J.; Bosselut, A.; Brunskill, E.; et al. On the opportunities and risks of foundation models. arXiv 2021, arXiv:2108.07258. [Google Scholar] [CrossRef]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 2672–2680. [Google Scholar]
- Kingma, D.P.; Welling, M. Auto-encoding variational bayes. arXiv 2013, arXiv:1312.6114. [Google Scholar]
- Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; Ommer, B. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 10684–10695. [Google Scholar]
- Brynjolfsson, E.; McAfee, A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies; WW Norton & Company: New York, NY, USA, 2014. [Google Scholar]
- Cao, M.; Wang, P.; Li, Y.; Zhang, Z.; Wang, H.; Wang, Y.; Gao, S. A survey on 3D AIGC: A technical overview and analysis of challenges. arXiv 2023, arXiv:2310.19834. [Google Scholar]
- Wu, Y.; Ma, L.; Yuan, X.; Li, Q. Human–machine hybrid intelligence for the generation of car frontal forms. Adv. Eng. Inform. 2023, 55, 101906. [Google Scholar] [CrossRef]
- Levoy, M.; Pulli, K.; Curless, B.; Rusinkiewicz, S.; Koller, D.; Pereira, L.; Ginzton, M.; Anderson, S.; Davis, J.; Ginsberg, J.; et al. The Digital Michelangelo Project: 3D Scanning of Large Statues. In Proceedings of the SIGGRAPH 2000, New Orleans, LA, USA, 23–28 July 2000; ACM: New York, NY, USA, 2000; pp. 131–144. [Google Scholar] [CrossRef]
- Denard, H. A New Introduction to the London Charter for the Computer-Based Visualization of Cultural Heritage. In Paradata and Transparency in Virtual Heritage; Bentkowska-Kafel, A., Denard, H., Baker, D., Eds.; Ashgate: Farnham, UK, 2012; pp. 57–71. [Google Scholar]
- ICOMOS. Charter for the Interpretation and Presentation of Cultural Heritage Sites (Ename Charter). Available online: https://icip.icomos.org/wp-content/uploads/2025/03/ICIP-ICOMOS-Charter-full-text.pdf (accessed on 11 October 2025).
- UNESCO. Convention for the Safeguarding of the Intangible Cultural Heritage. Available online: https://ich.unesco.org/en/convention (accessed on 11 October 2025).
- Malzbender, T.; Gelb, D.; Wolters, H. Polynomial Texture Maps. In Proceedings of the SIGGRAPH 2001, Los Angeles, CA, USA, 12–17 August 2001; ACM: New York, NY, USA, 2001; pp. 519–528. [Google Scholar] [CrossRef]
- Snavely, N.; Seitz, S.M.; Szeliski, R. Photo Tourism: Exploring Photo Collections in 3D. ACM Trans. Graph. 2006, 25, 835–846. [Google Scholar] [CrossRef]
- Schönberger, J.L.; Frahm, J.-M. Structure-from-Motion Revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: London, UK, 2016; pp. 4104–4113. [Google Scholar] [CrossRef]
- Furukawa, Y.; Ponce, J. Accurate, Dense, and Robust Multiview Stereopsis. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1362–1376. [Google Scholar] [CrossRef]
- Murphy, M.; McGovern, E.; Pavia, S. Historic Building Information Modelling—Adding Intelligence to Laser and Image Based Surveys of European Classical Architecture. ISPRS J. Photogramm. Remote Sens. 2013, 76, 89–102. [Google Scholar] [CrossRef]
- Dore, C.; Murphy, M. Current State of the Art Historic Building Information Modelling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 185–192. [Google Scholar] [CrossRef]
- Styliani, S.; Fotis, L.; Kostas, K.; Petros, P. Virtual Museums, a Survey and Some Issues for Consideration. J. Cult. Herit. 2009, 10, 520–528. [Google Scholar] [CrossRef]
- Perry, S.; Roussou, M.; Economou, M.; Young, H.; Pujol, L. Moving Beyond the Virtual Museum: Engaging Visitors Emotionally. In Proceedings of the 2017 23rd International Conference on Virtual Systems & Multimedia (VSMM), Dublin, Ireland, 31 October–4 November 2017; IEEE: London, UK, 2017; pp. 1–8. [Google Scholar] [CrossRef]
- Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In Computer Vision—ECCV 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M., Eds.; Springer: Cham, Switzerland, 2020; pp. 405–421. [Google Scholar] [CrossRef]
- Bekele, M.K.; Pierdicca, R.; Frontoni, E.; Malinverni, E.S.; Gain, J. A Survey of Augmented, Virtual, and Mixed Reality for Cultural Heritage. J. Comput. Cult. Herit. 2018, 11, 1–36. [Google Scholar] [CrossRef]
- Remondino, F. Heritage Recording and 3D Modelling with Photogrammetry and Laser Scanning. Remote Sens. 2011, 3, 1104–1138. [Google Scholar] [CrossRef]
- Agathos, A.; D’Agnano, F.; Doulamis, N.; Doulamis, A. AI for digital restoration of cultural heritage. IEEE Signal Process. Mag. 2022, 39, 107–118. [Google Scholar]
- Piotrowski, M. Digital humanities and cultural heritage: A case for collaboration. Lit. Linguist. Comput. 2011, 26, 153–165. [Google Scholar]
- Yu, T.; Lin, C.; Zhang, S.; Wang, C.; Ding, X.; An, H.; Liu, X.; Qu, T.; Wan, L.; You, S.; et al. Artificial intelligence for Dunhuang cultural heritage protection: The project and the dataset. Int. J. Comput. Vis. 2022, 130, 2646–2673. [Google Scholar] [CrossRef]
- Bruno, F.; Bruno, S.; De Sensi, G.; Luchi, M.L.; Mancuso, S.; Muzzupappa, M. From 3D reconstruction to virtual reality: A complete methodology for digital archaeological exhibition. J. Cult. Herit. 2010, 11, 42–49. [Google Scholar] [CrossRef]
- Apollonio, F.I.; Gaiani, M.; Sun, Z. 3D reality-based modeling for the management of cultural heritage. Int. J. Digit. Era 2012, 1, 47–64. [Google Scholar]
- Deng, M. AI-driven innovation in ethnic clothing design: An intersection of machine learning and cultural heritage. Heliyon 2023, 9, e19434. [Google Scholar] [CrossRef]
- Hsiao, S.W.; Tsai, H.C. Applying a hybrid approach based on fuzzy neural network and genetic algorithm to product form design. Int. J. Ind. Ergon. 2005, 35, 411–428. [Google Scholar] [CrossRef]
- Wu, F.; Hsiao, S.W.; Lu, P. An AIGC-empowered methodology to product color matching design. Displays 2024, 81, 102623. [Google Scholar] [CrossRef]
- Cheng, K.; Neisch, P.; Cui, T. From concept to space: A new perspective on AIGC-involved attribute translation. Digit. Creat. 2023, 34, 211–229. [Google Scholar] [CrossRef]
- Wang, Y.; Pan, Y.; Yan, M.; Su, Z.; Luan, T.H. A survey on ChatGPT: AI-generated contents, challenges, and solutions. IEEE Open J. Comput. Soc. 2023, 4, 280–302. [Google Scholar] [CrossRef]
- Gatti, E.; Giunchi, D.; Numan, N.; Steed, A. Aisop: Exploring immersive vr storytelling leveraging generative AI. In Proceedings of the 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Orlando, FL, USA, 16–21 March 2024; pp. 865–866. [Google Scholar]
- Wu, Y. Application of artificial intelligence within virtual reality for production of digital media art. Comput. Intell. Neurosci. 2022, 2022, 3781750. [Google Scholar] [CrossRef] [PubMed]
- Wang, Y.; Yang, W.; Xiong, Z.; Zhao, Y.; Quek, T.Q.; Han, Z. Harnessing the power of AI-generated content for semantic communication. IEEE Netw. 2024, 38, 102–111. [Google Scholar] [CrossRef]
- Slater, M. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philos. Trans. R. Soc. B Biol. Sci. 2009, 364, 3549–3557. [Google Scholar] [CrossRef]
- Okanovic, V.; Ivkovic-Kihic, I.; Boskovic, D.; Mijatovic, B.; Prazina, I.; Skaljo, E.; Rizvic, S. Interaction in eXtended Reality Applications for Cultural Heritage. Appl. Sci. 2022, 12, 1241. [Google Scholar] [CrossRef]
- Wang, X.; Hong, Y.; He, X. Exploring artificial intelligence generated content (AIGC) applications in the metaverse: Challenges, solutions, and future directions. IET Blockchain 2024, 4, 365–378. [Google Scholar] [CrossRef]
- Hu, Y.; Zhang, D.; Yuan, M.; Xian, K.; Elvitigala, D.S.; Kim, J.; Mohammadi, G.; Xing, Z.; Xu, X.; Quigley, A. Investigating the Design Considerations for Integrating Text-to-Image Generative AI within Augmented Reality Environments. arXiv 2024, arXiv:2303.16593. [Google Scholar]
- Deterding, S.; Dixon, D.; Khaled, R.; Nacke, L. From game design elements to gamefulness: Defining “gamification”. In Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments, Tampere, Finland, 28–30 September 2011; pp. 9–15. [Google Scholar]
- Li, X.; Xie, C.; Sha, Z. A predictive and generative design approach for three-dimensional mesh shapes using target-embedding variational autoencoder. J. Mech. Des. 2022, 144, 114501. [Google Scholar] [CrossRef]
- Lo, C.H.; Ko, Y.C.; Hsiao, S.W. A study that applies aesthetic theory and genetic algorithms to product form optimization. Adv. Eng. Inform. 2015, 29, 662–679. [Google Scholar] [CrossRef]
- Nicholson, S. A recipe for meaningful gamification. In Gamification in Education and Business; Springer: Cham, Switzerland, 2015; pp. 1–20. [Google Scholar]
- Roose, K. An AI-Generated Picture Won an Art Prize. Artists Aren’t Happy. The New York Times, 2 September 2022. [Google Scholar]
- Hsiao, S.W.; Hsu, C.F.; Tang, K.W. A consultation and simulation system for product color planning based on interactive genetic algorithms. Color Res. Appl. 2013, 38, 375–390. [Google Scholar] [CrossRef]
- Ribeiro de Oliveira, T.; Biancardi Rodrigues, B.; Moura da Silva, M.; Antonio, N.; Spinassé, R.; Giesen Ludke, G.; Ruy Soares Gaudio, M.; Cotini, L.G.; da Silva Vargens, D.; Schimidt, M.Q.; et al. Virtual reality solutions employing artificial intelligence methods: A systematic literature review. ACM Comput. Surv. 2023, 55, 1–29. [Google Scholar] [CrossRef]
- Perry, B.; Geva, A. Serious games for cultural heritage. In Serious Games and Entertainment Applications; Springer: London, UK, 2011; pp. 271–291. [Google Scholar]
- Champion, E.M. Playing with the Past; Springer Science & Business Media: Karlsruhe, Germany, 2011. [Google Scholar]
- Addison, A.C. Emerging Trends in Virtual Heritage. IEEE Multimed. 2000, 7, 22–25. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).