Article

Reshaping Museum Experiences with AI: The ReInHerit Toolkit

by Paolo Mazzanti *, Andrea Ferracani, Marco Bertini * and Filippo Principi

Media Integration and Communication Center (MICC), University of Florence, Viale Morgagni 65, 50134 Firenze, Italy

* Authors to whom correspondence should be addressed.
Heritage 2025, 8(7), 277; https://doi.org/10.3390/heritage8070277
Submission received: 27 May 2025 / Revised: 4 July 2025 / Accepted: 10 July 2025 / Published: 14 July 2025

Abstract

This paper presents the ReInHerit Toolkit, a collection of open-source interactive applications developed as part of the H2020 ReInHerit project. Informed by extensive surveys and focus groups with cultural professionals across Europe, the toolkit addresses key needs in the heritage sector by leveraging computer vision and artificial intelligence to enrich museum experiences through engaging, personalized interactions that enhance visitor learning. Designed to bridge the technology gap between larger institutions and smaller organizations, the ReInHerit Toolkit also promotes a sustainable, people-centered approach to digital innovation, supported by shared resources, training, and collaborative development opportunities accessible through the project’s Digital Hub.

1. Introduction

The ReInHerit Toolkit, developed within the Horizon 2020 ReInHerit Project, is a set of open-source applications based on computer vision (CV) and artificial intelligence (AI) that provide diverse types of experiences, exploiting interactions and tasks of interest to museums and cultural institutions, with the goal of engaging visitors with cultural content and improving learning. The ReInHerit project aims to propose an innovative model of sustainable heritage management through a dynamic and collaborative network comprising cultural heritage (CH) professionals, innovation and CH solution tech experts, researchers, creatives, museums, and managers of heritage sites.
The conceptual model pioneered by the project focuses on creating a digital ecosystem dedicated to CH. This ecosystem fosters cooperation and knowledge sharing between stakeholders, offering a shared, experimental environment. Central to this model is the Digital Hub,1 an interactive platform that collects and disseminates all project-related content aimed at supporting heritage professionals. Within this space, users can freely access and download the Toolkit and all associated documentation for the technological applications, together with a number of related resources: webinars, reuse guidelines, training material,2 a collection of reference components, and the source code, freely available on ReInHerit’s GitHub page (https://github.com/ReInHerit, accessed on 9 July 2025).
Initial studies conducted within ReInHerit provided an overview of research in the CH field. These investigations mapped current models, institutional needs, visitor expectations, and contemporary trends influencing museums and cultural sites. This investigation was key in shaping the project’s approach to technological innovation and digital transformation. It informed the integration of new technologies designed to make CH more accessible, interactive, and sustainable, while responding to current demands and developments in the field. It also provided valuable insight into the motivations behind the types of applications to be developed for the toolkit, forming a foundational conceptual framework (Figure 1).
In the following sections, we will outline the motivations and conceptual framework that prompted the development of the tools, focusing in particular on the results of the research conducted with museum visitors and professionals, which provided relevant insights on needs and preferences. Based on these outcomes, the Toolkit Strategy was designed, with particular emphasis on both the opportunities and the challenges linked to the use of AI within the museum context. The final part of the paper describes how an interdisciplinary and collaborative methodology guided the creation of the technological tools.

1.1. Results from ReInHerit Research on Digital Tools

The ReInHerit study carried out focus groups with professionals working in museums and cultural sites, as well as with experts in digital technologies. Additionally, online surveys were launched for both professionals and visitors. The objective was to identify the needs of cultural professionals and institutions, while also aligning them with the preferences of users with diverse backgrounds. The research aimed to explore the digital capabilities of heritage organizations in Europe, identifying best practices for CH management and the most useful types of ICT (Information and Communication Technology) tools.3 This analysis indicated whether organizations have made significant progress towards digital transformation. It also explored the availability of human resources in the heritage sector, offering a clear picture of whether organizations have the capacity to implement and sustain digital innovation [1]. Focus group interviews were conducted with a total of 34 professionals working in the cultural heritage management sector from 12 European countries, mapping the current state of practice in CH management. Additional focus groups on current ICT tools in CH were conducted online in March 2022 with museum/heritage site professionals, academic researchers, officers from public authorities/NGOs, and ICT professionals from 10 European countries (Austria, Croatia, Cyprus, Finland, Greece, Italy, Spain, Sweden, Switzerland, and the Netherlands). The online survey, conducted from October to December 2021, collected 1746 responses from visitors and 506 responses from cultural heritage professionals representing institutions in 37 European countries. Ethical approval for the survey and focus groups was obtained, and participants provided informed consent to be included.4 The results are summarized below, with a focus on the key findings.

1.2. Demographics

The demographics of the survey participants provide valuable insights into the audience profile engaged in the ReInHerit project. Most respondents (76.42%) were in the 30–64 age group. Younger users, aged between 18 and 29, accounted for 19.32%, while the oldest group, those aged 65 and over, accounted for 4.26%. Regarding education, the most common qualification was a Master’s degree, held by 48.74% of respondents, followed by a Bachelor’s degree (23.58%) and a PhD (13.48%). Only a small fraction (0.61%) reported having completed only primary education. In terms of employment, a significant majority (60.61%) were employees; 14.56% were self-employed, while 4.83% were unemployed. Participants came from a diverse array of European countries. The largest groups were from Spain (23.98%), Italy (20.84%), Austria (14.32%), Finland (11.01%), Greece (10.71%), and Cyprus (6.52%). Other represented nations included Belgium, Bulgaria, Croatia, France, Germany, Romania, and several others. A small number of participants came from countries such as Andorra (0.17%), Denmark (0.06%), Kosovo (0.06%), and Slovenia (0.06%), highlighting the wide geographic reach of the survey. Lastly, the types of organizations represented in the survey were varied. Public museums and cultural heritage sites comprised the largest group at 36.10%, followed by universities and research institutes (16.39%) and private museums or cultural heritage sites (14.32%). Smaller proportions included creative industries (9.75%), NGOs (9.13%), and public authorities (8.09%). These demographic insights lay the foundation for understanding the preferences and needs of museum visitors and cultural heritage professionals, crucial for tailoring digital tools effectively. The key insights derived from the analysis of digital tool usage by both museum visitors and professionals provided essential input for shaping the toolkit’s design and development.

1.3. Visitor Preferences

These insights reflect visitor preferences, challenges, and interests related to digital engagement during museum visits.
Reasons for not using mobile applications: The primary reason for not using mobile applications during museum visits was that respondents found them distracting. Over 50% of respondents considered mobile applications either distracting or uninteresting. This tendency became more prominent with age, as older users (65+) were particularly likely to find mobile applications distracting. Younger users (18–29), on the other hand, expressed concerns about insufficient storage space on their devices. This trend is consistent with existing research, showing that younger visitors are reluctant to download museum apps due to limited smartphone memory. This pattern was also observed across different educational levels (Figure 2).
Digital tools for improving the visitor experience: No single digital tool was considered to be the most or least useful; preferences varied by age group. Younger and middle-aged visitors preferred interactive technologies such as multi-touch tables and immersive environments, while older visitors (65+) preferred classic audio guides. This reflects the tendency of older users to find advanced technologies more difficult to navigate (Figure 3). In general, all visitors preferred to use their smartphones or personal devices (Figure 4).
Interest in interaction with exhibits: Across all age groups, a significant percentage of visitors expressed interest in direct interaction with exhibits. Visitors of all ages found this to be very interesting, with no significant differences between age groups (Figure 5).
Digital games: A significant portion of older users (65+) found digital games not interesting, while younger and middle-aged visitors showed more interest, mainly motivated by curiosity, entertainment, improving knowledge, and social connections (Figure 6).
Mobile applications: Interest varied by age. Younger respondents (18–29) found mobile apps “interesting” or “very interesting”, whereas older users (65+) tended to remain neutral, finding mobile apps neither particularly interesting nor boring (Figure 7). Younger visitors are more likely to engage with mobile apps depending on the apps’ perceived benefits and features, with a preference for web apps. This aligns with existing research on teenagers’ behavior in museums, which shows that younger visitors are often reluctant to download museum-specific apps, as they prefer not to use up memory on their smartphones, highlighting the potential to engage museum visitors with more seamless, personalized, AI-driven experiences instead [2].
Digital tools thus emerge as crucial mediators of the museum experience, especially for younger visitors, fundamentally shaping how they perceive, learn from, and emotionally connect with CH as evidenced by recent socio-material studies on youth–technology interaction [3].

1.4. Heritage Professionals’ Needs

Heritage professionals were asked about the technological services and systems available in the organizations where they work (Figure 8).
Responses were divided into two main groups: (1) emerging and advanced ICT solutions, including artificial intelligence applications, chatbots, games and/or gamification, digital storytelling tools, and digital tools for exhibition planning; and (2) conventional and standard digital technologies, such as video and audio recording equipment, web applications, mobile applications, online exhibitions, digitalization systems, analytics and feedback tools, social media management tools, ticketing systems, and e-shops. What follows is a synthesis of the most relevant outcomes concerning the digital systems currently employed by CH institutions. These findings highlight the current digital landscape of the industry, underscore the disparity between conventional and cutting-edge technologies, and identify the major obstacles that organizations face in integrating innovative digital tools.
  • In total, 67.33% of museums and CH sites rely on standard ICT tools, while only 33% use innovative ICT tools. This highlights a need for integrating more innovative tools into the sector to improve visitor engagement and experience.
  • The analysis revealed that smaller organizations are more likely to rely on standard ICT tools and face greater challenges in adopting innovative solutions. This underscores the need for shared digital platforms that can support museums of all sizes, offering tailored solutions to their specific needs.
  • AI and gamification tools (e.g., chatbots and digital storytelling) were identified as important but rarely used. These tools can enhance visitor interaction and engagement, making them crucial for future development.
  • Human resources: Most organizations do not employ dedicated professionals for technological implementation. Instead, they rely on third-party consultants or lack the resources to develop digital tools internally. This indicates a need for training and upskilling heritage professionals to become active agents in the digital transformation of cultural heritage institutions (Figure 9).
The results of the National Survey highlighted the primary technological requirements in the CH field, emphasizing both the integration of digital solutions and the availability of skilled human resources. While all organizations showed interest in adopting technologies like AI, CV, digital games, and immersive experiences, smaller institutions emphasized the need for more technical support to implement them effectively: despite increasing digital adoption, they struggle to integrate these technologies and require upskilling. The survey also revealed a heavy reliance on outsourcing for digital tasks, contributing to a knowledge gap. There is strong demand for technologies such as AI, gamification, and interactive solutions to enhance visitor engagement, together with the accessible, tailored solutions and staff upskilling needed to drive digital transformation in smaller organizations. Tools for “phygital” interaction—such as participative storytelling, gamification, and multisensory engagement—are seen as key to enriching the visitor experience, fostering emotional involvement before, during, and after the visit, and contributing to the design of authentic, meaningful experiences that enhance the sense of presence and connection with CH, in line with recent studies [4]. AI can also personalize content and create dynamic exhibitions that connect audiences at a distance, extending the experience beyond the visit through shared memories.
ReInHerit Focus Groups explored how digital tools can enhance visitor engagement in the CH sector. The research focused on identifying the primary needs and challenges encountered by CH professionals and examining how interactive technologies—such as AI and CV—can be implemented in ways that are sustainable and user-centered. Participants were recruited by consortium partners following a standardized selection protocol. Criteria required professional involvement in the CH sector—museum staff, academic researchers, public officers, and NGO representatives—with attention to institutional diversity (e.g., art, archaeological, and science museums) and geographic spread (local to European level). ICT professionals and digital tool developers active in CH were also included to provide technological perspectives. Five focus groups were conducted with 34 participants from at least seven countries (Cyprus, Greece, Italy, Austria, Spain, Finland, and Belgium), representing both junior and senior roles—directors, curators, educators, consultants, and ICT experts. A mixed-methods design combined qualitative data from these focus groups with survey feedback and application testing to inform the iterative development of the ReInHerit Toolkit. Sessions followed a semi-structured protocol and emphasized co-creation: CH and ICT professionals jointly identified user needs, barriers, and enabling conditions, resulting in three key parameters for digital management in small- and medium-sized institutions. Additional input from academic and technical experts provided concrete examples of the existing tools and implementation practices. Qualitative data were transcribed and thematically coded and cross-validated for consistency. Quantitative survey results were analyzed using descriptive statistics, while open-ended responses were inductively categorized. This triangulated methodology ensured that the Toolkit’s design was firmly grounded in real-world insights, addressing both technical feasibility and long-term sustainability.5
Discussions with professionals highlighted the need for tools that foster dialogue between professionals and visitors. Community-driven initiatives like hackathons and workshops ensure that tools remain user-centered and sustainable. Digital platforms can connect museums globally, promoting best practices and a people-centered approach to CH. These insights align with ReInHerit’s goal of enhancing engagement through interactive digital tools. Digital technologies can expand cultural offerings and create personalized experiences [5]. The discussions also emphasize the importance of collaboration among museum experts, developers, and visitors. However, several challenges remain, including the high costs of developing and maintaining customized apps, which quickly become outdated due to rapid technological advancements, as well as the technology gap between large and smaller institutions. ReInHerit’s Toolkit aims to address these issues by using AI and CV to create interactive, gamified experiences that deepen visitor engagement and encourage user-generated content.

2. Materials and Methods

This section explores the integration of AI and CV in enhancing visitor engagement in museums, highlighting the key insights from recent research and the development of the ReInHerit strategy6. In particular, we examine how AI technologies are revolutionizing museum operations and visitor experiences, including opportunities for personalization and interactive engagement. In recent years, the concept of the ‘smart museum’ has emerged as an innovative model, where AI-based technologies—from facial recognition to real-time translation—animate otherwise static collections, creating an experience that connects people, education, culture, and art [6]. Recent studies in Human–Computer Interaction (HCI) and museum studies have further emphasized the importance of emotional interaction [7] and embodied engagement [8] in digital cultural heritage experiences [9,10]. Researchers argue that digital systems should be designed not only for usability but also to support affective interaction and participatory meaning-making, aligning with the growing discourse on “affective heritage” and experience-centered design [11]. In parallel, the increasing use of AI in the cultural heritage sector has raised a number of critical concerns, particularly regarding transparency, algorithmic bias, privacy, and data ownership. The current debate emphasizes the importance of incorporating ethical frameworks from the earliest stages of technology design. These theoretical and ethical perspectives formed the conceptual basis of the ReInHerit strategy, which translates them into practice through user-centered, playful, and emotionally engaging digital tools. The ReInHerit Toolkit was developed through a co-creative and interdisciplinary methodology involving heritage professionals, ICT experts, and end users. This approach ensured that the tools are adaptable to diverse institutional contexts and aligned with the principles of sustainable innovation.

2.1. Insights on AI and Museums

The recent literature on museums underscores both the challenges and opportunities presented by AI-driven digital innovation. For example, the 2021 Museum Innovation Barometer revealed that fewer than 20% of museums worldwide had adopted AI tools for collections management, administration, education, or financial operations [12]. However, the COVID-19 pandemic acted as a catalyst for digital transformation, prompting institutions to accelerate digitization and adopt new technologies while exploring alternative revenue streams [13]. CV and AI have emerged as transformative technologies in this evolving landscape [14], with applications extending well beyond operational efficiency. In the broader creative industries, AI has already demonstrated its ability to increase productivity by automating processes and optimizing workflows [15]. In the museum context, these technologies offer potential for more personalized and engaging visitor experiences. AI and CV can analyze behavior and preferences, enabling museums to tailor narratives, content, and interfaces to individual users [16]. This level of personalization makes digital tools not only more relevant and appealing but also more effective in enhancing cultural experiences. Beyond user interaction, CV allows museums to derive insights from visual data that would be difficult or impossible to obtain manually [17]. Combined with AI, it enables curators to identify patterns, anomalies, and relationships within collections, helping overcome limitations in time, staffing, and resources. As Villaespesa and Murphy note, AI and CV are becoming essential for enriching collections and enhancing engagement through more interactive and customized experiences [18].
However, the rapid adoption of algorithmic systems raises ethical and operational concerns, including bias, error, job displacement, and privacy. These issues underscore the need for critical, transparent frameworks, particularly in sensitive public domains such as cultural heritage [19]. In this regard, initiatives like the Museums + AI Network and its associated guide, “AI: A Museum Planning Toolkit”, offer valuable models for sustainable, ethically grounded AI adoption, especially for small- and medium-sized museums [20]. Efforts toward standardization are also underway. Organizations like ISO are developing standards for AI design and deployment, though progress is challenged by the field’s rapid evolution and ongoing research gaps [21]. At the policy level, the European Commission and the European Parliamentary Research Service have identified various opportunities for AI in cultural institutions, from improving cataloging and archival practices to enhancing audience engagement and operational planning [22]. Techniques such as sentiment analysis, attendance tracking, and forecasting can deliver real-time insights to support decision-making. Yet these benefits often remain inaccessible to smaller institutions due to the high costs, limited funding, and the perception of AI as being non-essential. To address this disparity, recent recommendations from the EU Committee of Ministers promote the use of AI to foster emotional engagement and social interaction in cultural contexts [23], and advocate for collaboration and training to build institutional capacity [24]. The ethical deployment of AI in museums depends on principles such as transparency, data integrity, and scientific accuracy, essential for public trust and for ensuring that AI supports rather than distorts cultural interpretation [25]. Practical resources like “AI FAQ Artificial Intelligence” by Orlandi et al. (2025) provide concrete guidance on copyright, data governance, and evolving technology regulation [26].
Finally, museums can themselves shape public understanding of AI. As institutions dedicated to knowledge and reflection, they are uniquely positioned to host critical dialogue around technological change. This role was affirmed by the three strategic recommendations presented by NEMO at the 2024 conference “Innovation and Integrity: Museums Paving the Way in an AI-driven Society”, which call for (1) integrating museums into AI regulatory development; (2) investing in infrastructure, training, and data management; and (3) establishing a European AI Innovation Hub for CH.7

2.2. The ReInHerit Toolkit Method

The ReInHerit strategy was shaped by extensive research aimed at identifying best practices for the use of digital tools in the context of digital transformation [27]. This approach emphasizes digital interactivity as a core element in designing user-centered exhibitions. Digital tools are seen not as ends in themselves but as means to access content, engage audiences, and support inclusive, participatory learning experiences. Adopting an interdisciplinary approach, the ReInHerit Toolkit has been presented to ICT experts at major international conferences and workshops, including ACM Multimedia 2022 in Lisbon, Portugal; ACM Multimedia 2023 in Ottawa, Canada; ACM Multimedia Systems 2024 in Bari, Italy; the IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR) 2022 in New Orleans, USA; the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2024 in Waikoloa, Hawaii, USA; and the Human–Computer Interaction International Conference 2024 in Washington, DC, USA. It has also been successfully showcased at museum-focused events such as “ExICE—Extended Intelligence for Cultural Engagement” in Bologna, Italy (H2020 SPICE Project), the AVICOM (ICOM International Committee for Audiovisual, New Technologies and Social Media) 2024 Annual Conference “News from the Digital Museum World”, and the NEMO Members’ Meetup.8 Specifically, ReInHerit applications were developed with a user-centered approach, aligned with recent research by NEMO, including the 2023 report “Digital Learning and Education in Museums” [28]. This report highlights how digital tools such as AI and CV can strengthen emotional connections between museums and visitors. By incorporating playful elements, museums encourage active participation, allowing visitors to engage with collections rather than being passive observers. This approach addresses the growing demand for digital innovation and audience inclusion and reflects the key topics identified through research conducted by the ReInHerit project (Table 1).
Tools were designed to foster an interactive museum experience across personal, social, and physical contexts—dimensions that, as Falk and Dierking argue, critically shape perception, comprehension, and learning outcomes [29,30]. Emotional engagement plays a key role in this process [31]. As Falk underscores in the NEMO report “Emotions and Learning in Museums” [32], the museum experience is not confined to the time spent within its physical space. Instead, emotions influence all stages, from pre-visit anticipation to post-visit memory. Visitor satisfaction and the likelihood of returning are largely determined by affective responses rather than dispassionate reasoning. Neuroscientific research reinforces this perspective [33,34]. Damasio [35] demonstrated that emotions are not separate from rational cognition but are enmeshed in the brain’s decision-making and memory systems. Barrett [36] further elaborates that emotions are not universal reactions but context-dependent constructions shaped by past experiences and cultural expectations [37]. These insights clarify why individuals choose to visit museums, how they engage, and what they retain [38]—placing emotions at the heart of curatorial and design strategies [39].
Motivated by this body of research, the ReInHerit Toolkit embraced a dynamic and affective approach to digital engagement, seeking to activate multisensory and socially meaningful encounters. Increasingly, studies emphasize that sensory immersion and emotional resonance are key drivers of user engagement in digital and virtual museum environments [40]. To this end, the Toolkit employs playful, gamified elements to inspire curiosity, enjoyment, and personalized interaction, particularly targeting younger and non-expert audiences [41]. This approach blends social, relational, and digital dimensions, supporting a more inclusive and user-driven experience [42]. The Toolkit also draws from research on citizen curation and participatory storytelling, which shift the interpretive focus from institutional authority to the audience’s emotional and narrative contributions. For example, user-centered applications have been developed to facilitate the emotional interpretation and creation of art-related narratives, encouraging reflection and affective sharing [43]. Other digital tools explore how interactive performances can deepen audience participation through computationally mediated experiences [44]. Gamification, in particular, has been shown to shape users’ perceptions and behaviors, increasing engagement with cultural content through affective triggers [45], and to create immersive and memorable experiences that improve users’ learning capabilities [46,47]; a recent review of gamification approaches is presented in [48]. A notable source of inspiration for ReInHerit was the integration of new visual technologies by institutions such as the Cleveland Museum of Art, whose “ArtLens Gallery One” application enables visitors to explore the permanent collection via a tablet, enhancing the museum experience with personalized digital interaction [49,50]. In the toolkit, this type of visual interaction is enabled through more advanced AI-based technologies that extract human skeleton data from a standard camera, rather than relying on dedicated devices such as Microsoft Kinect. We implemented a BYOD strategy offering visitors a smarter, more dynamic, and enriched interactive experience, with the aim of enhancing the educational and learning process [51].
Building on these insights and survey data, the ReInHerit Toolkit strategy (Figure 10) focused on developing open-source, mobile-first web applications that promote hybrid interaction across physical and digital spaces. AI and machine learning techniques support innovative forms of collection interpretation and data visualization, enhancing accessibility and user engagement [52]. These developments are consistent with recent scholarship that explores algorithmic advancements in CH data analysis [53], as well as the importance of ethical frameworks tailored to the sector’s specific risks and values [54].
The toolkit features interactive tools aimed at motivating learning and deepening connections between artworks and users. Through the Digital Hub, it seeks to bridge the technological divide between large and small institutions by offering accessible training materials, open-source software, and adaptable resources. A bottom–up, community-driven approach guides its development, with needs assessment and feedback gathered via hackathons, workshops, and webinars.
A core aspect of the strategy is the responsible use of AI, guided by UNESCO’s “Guidance for Generative AI in Education and Research” [55] and the “Recommendation on the Ethics of Artificial Intelligence” [56]. Additionally, computer vision and face recognition systems are central to the ethical discussions surrounding AI, in line with the “AI Act”, the European Union’s first regulation on artificial intelligence. These frameworks ensure transparency, fairness, and user privacy. ReInHerit tools prioritize data privacy: no personal information is stored, and all generated media are provided solely to the user. Ethical considerations are addressed through the ReInHerit Ethics Card, which assesses aspects like training data and security, copyright, and scientific accuracy. Specifically, the neural networks used in applications such as “Strike-a-Pose” (Section 3.1) and “Face-Fit” (Section 3.2) have been demonstrated to function equitably with users from diverse global regions and across various attributes (e.g., gender, age, and skin color), as evidenced by the model cards of the respective models9. Moreover, “Strike-a-Pose” has been shown to work effectively with users who have various disabilities, such as those using wheelchairs. In this case, the curator can choose to replicate only specific parts of the artwork, for example, the torso, in order to better accommodate users’ needs. By promoting open-source code and providing related resources via the Digital Hub, ReInHerit empowers smaller organizations to implement and adapt these tools, ensuring sustainable digital practices across the cultural heritage sector.

3. Results

The following sections describe selected results from the CV/AI-based applications developed within the toolkit10 and their role in enhancing visitor engagement, accessibility, and interaction with cultural heritage: Strike-a-Pose (Section 3.1) and Face-Fit (Section 3.2), two applications designed around the principles of gamification and interactive engagement with artworks, aiming to increase visitor participation in museum experiences; Smart Retrieval, which leverages CLIP for artwork recognition [57] and image retrieval (Section 3.3); Smart Lens, a web app that transforms mobile devices into magnifying tools for detailed artwork observation and contextual information (Section 3.4); the VIOLA multimedia chatbot, which integrates CV and Natural Language Processing (NLP) to provide a conversational interface for web and mobile platforms, utilizing speech-to-text for seamless, natural interaction (Section 3.5); and finally Photo Video Restorer, a cutting-edge tool for AI-driven restoration of digital heritage video and photo archives, showcasing the potential of AI in preserving CH (Section 3.6) [58].

3.1. Strike-a-Pose

Strike-a-Pose (Figure 11) is an interactive online tool designed to let users mirror the body positions seen in artworks, including paintings and sculptures. It can be accessed directly from visitors’ personal devices (following a BYOD model) or via a fixed installation equipped with a display and integrated camera. The platform leverages game-like elements to turn the exploration of art into an enjoyable and participatory experience. Users are encouraged to physically imitate poses from the museum’s collection [59]. To assist with alignment, skeleton outlines of both the user and the selected artwork are shown side by side, helping to match body points and movements. The application includes a variety of pose challenges, some of which are adapted to be inclusive—such as those focusing on torso movements only—making them accessible for users with limited mobility. After completing all the pose-matching tasks, users can create a customized video of their session, which they have the option to download or share on social media. Users receive the results of their interaction via e-mail with additional material for further study and learning. The conceptual metaphor behind the activity is to mirror the pose of a character in an artwork, fostering shared experiences and connecting people and stories.
In Figure 12, the workflow for pose detection and matching is depicted. The system takes as input a reference image and a camera frame.
Using pose landmark detection, the skeleton structure of the individual in both inputs is analyzed. The primary focus is on detecting the orientation and position of joints. The system compares the extracted joint orientations of the reference pose and the camera input. If the orientations match, the system proceeds to generate the output video showing the alignment of the camera input with the reference pose. If the orientations do not match, the process iterates until the correct alignment is achieved. This ensures precise tracking and pose replication.
The system was built using JavaScript for the frontend interface and Python 3.x on the server side. Human pose recognition is powered by the TensorFlow.js library [60], which integrates the pre-trained MoveNet pose estimation model.11 MoveNet identifies 17 key body landmarks and is optimized for speed and precision—especially the “Lightning” version, ideal for applications requiring low latency, as it runs in real time on most standard desktop and mobile devices. All computations for pose detection are performed client-side in the browser, ensuring responsiveness. On the server side, a SQLite database manages the collection of artworks, challenge parameters, and metadata. Communication between the frontend and the database is facilitated through REST APIs developed with Flask [61]. Video compilation is handled by the server. The user interface is designed responsively: it adjusts to a vertical layout for smartphones and a horizontal one for larger screens in fixed setups. To isolate the pose data from background context, the coordinates of each keypoint are normalized. Pose similarity is assessed through a calculation of the Euclidean distance between the user’s pose and the reference pose, with a match being validated when distances remain within a set threshold over a brief time interval.
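As a minimal sketch of this normalization-and-matching step (written in Python for illustration, whereas the production code runs client-side in TensorFlow.js; the keypoint indices follow MoveNet’s 17-landmark layout, and the threshold value is hypothetical):

```python
import numpy as np

def normalize_pose(keypoints):
    """Normalize 17 (x, y) MoveNet keypoints: translate to the hip midpoint
    and scale by torso length, so matching ignores position and body size."""
    kp = np.asarray(keypoints, dtype=np.float32)  # shape (17, 2)
    hip_mid = (kp[11] + kp[12]) / 2.0             # left/right hip indices
    shoulder_mid = (kp[5] + kp[6]) / 2.0          # left/right shoulder indices
    torso = np.linalg.norm(shoulder_mid - hip_mid)
    return (kp - hip_mid) / max(float(torso), 1e-6)

def poses_match(user_kp, ref_kp, threshold=0.25):
    """Mean Euclidean distance between normalized keypoints; a match is
    declared when the distance stays below the (illustrative) threshold."""
    dists = np.linalg.norm(normalize_pose(user_kp) - normalize_pose(ref_kp), axis=1)
    return float(dists.mean()) < threshold
```

In the app, this check is repeated over a brief time window, so a match is only confirmed when the distance stays under the threshold across consecutive frames.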
ReInHerit Consortium museums contributed to the development of the image gallery. As part of the dissemination activities, the app was successfully tested at international workshops in the tech and cultural sectors, involving diverse users (Figure 13). The participatory model was strengthened through the organization of dedicated co-creation events, such as the ReInHerit hackathon, where a prototype was tested and improved with direct input from users, creatives, and museum professionals. This process led to new technological developments for applications and interaction scenarios, enhancing engagement, inclusivity, and design features. One of the outcomes of this collaborative experimentation was the proposal of Strike-a-Pose 2.0, an evolved version of the original app that integrates these advancements and continues the project’s vision of ethical, playful, and personalized digital experiences in CH contexts.12

3.2. Face-Fit

Face-Fit (see Figure 14) is an interactive software created with a combination of JavaScript and Python, designed to allow users to engage creatively with portrait paintings by customizing and animating artworks. Optimized for both smartphones and larger museum installations, and inspired by the ‘share-your-expression’ concept, the activity challenges participants to find the perfect match by aligning their facial orientation and expressions with those depicted in historical portraits. After successfully replicating the pose, users can blend their own facial image with the artwork, generating a new visual. Upon completion of the interaction, a personalized email is sent to users containing the final image ready for social media sharing, along with supplementary curated content (textual descriptions, audio guides, or video clips) selected by the museum.
The development process relied on an iterative usability testing cycle involving three separate groups of five participants each [62], following best practices in user-centered design. To begin, users position themselves in front of a device with a camera and select a portrait from a vertical scrollable menu. A semi-transparent overlay of their live image is superimposed onto the chosen painting, helping them align their face accurately with the original. This ‘ghost image’ design choice was implemented to maintain user attention and support engagement without detracting from the overall enjoyment of the game. Earlier prototypes used visual cues to guide alignment, but these were eventually removed due to their tendency to divert focus from the artwork itself. For face landmark detection, the app leverages MediaPipe’s pre-trained Face Mesh module [63,64], which uses TensorFlow Lite to perform lightweight, real-time facial landmark detection even on mobile devices [65]. The detected keypoints are analyzed to extract head rotation angles in three dimensions. These angles are compared with those of the artwork using Euclidean distance calculations, confirming a match when a preset threshold is met. An additional verification step checks the resemblance of key facial regions—specifically the eyes, eyebrows, and mouth—since these convey universal emotions as identified in Ekman’s facial expression theory [66]. Face-swapping is achieved by applying affine transformations between the corresponding facial mesh triangles of the user and the portrait. To match the user’s photo stylistically with the artwork, a color correction algorithm based on Reinhard’s statistical method is used [67]. Figure 15 illustrates the pipeline for facial landmark detection and matching, followed by a face-swapping process.
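To illustrate the final color correction step, Reinhard’s statistical method matches the per-channel statistics of the user’s photo to those of the painting. A minimal sketch, assuming OpenCV and the common CIELAB approximation of Reinhard’s original lαβ color space (the production implementation may differ in detail):

```python
import cv2
import numpy as np

def reinhard_color_transfer(user_bgr, painting_bgr):
    """Shift the user's photo toward the painting's palette by matching
    per-channel mean and standard deviation in Lab color space."""
    src = cv2.cvtColor(user_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(painting_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        # Rescale deviations to the painting's spread, then recenter.
        src[..., c] = (src[..., c] - s_mean) * (r_std / max(s_std, 1e-6)) + r_mean
    return cv2.cvtColor(np.clip(src, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)
```

Applied after the affine face-swap, this makes the inserted face adopt the painting’s overall tonality rather than the lighting of the visitor’s camera feed.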
Gallery images in the demo test version are part of WikiArt—Visual Art Encyclopedia. This version also includes a limited selection of artworks provided by museums participating in the ReInHerit project. In addition, the gallery has been populated with images of artworks and additional content provided by small- and medium-sized museums in Italy. As part of dissemination activities, the web app was successfully tested in workshops within museums and the cultural sector, with the enthusiastic participation of diverse users (young people, adults, families, researchers, museum experts, etc.) from different backgrounds (Figure 16).
These two computer vision applications aim to engage visitors with artworks by detecting and interpreting body movements and facial expressions. Both are based on a “play and engage” approach that encourages active visitor engagement with cultural content, moving beyond passive observation. This model is also inspired by the neurological concept of mirror neurons, which are associated with understanding others’ actions and emotions through internal simulation. By inviting users to replicate poses and expressions found in art, the applications foster an empathic and embodied interaction with cultural heritage [68]. Gamification, understood as the application of game-like dynamics in non-game contexts, has proven effective in enhancing museum sustainability and visitor engagement [69]. Within this framework, Strike-a-Pose and Face-Fit demonstrate how gamified digital tools can transform heritage experiences into more playful, social, and personalized journeys of discovery—fully embracing the “I play, I learn” principle [70]. They support a transition from the traditional “look but don’t touch” museum experience to one that invites visitors to “play and engage”, making cultural learning both memorable and meaningful. Accordingly, the objectives of these two applications are as follows:
  • To implement these experiences as challenges that enhance visitor engagement and provide personalized takeaways of the visit, encouraging post-visit exploration.
  • To generate user-created content that can amplify engagement on social media platforms.
  • To employ advanced AI methods optimized for mobile execution, supporting a BYOD strategy for widespread accessibility.
As a result of close collaboration with experts and professionals from various sectors, particular attention was devoted to ethical considerations in the development of the toolkit’s applications. In particular, the data generated during interaction with Strike-a-Pose and Face-Fit are managed with respect for the user’s privacy. Ethical issues concerning the use of personal data for interaction with users have been defined and specified. It should be emphasized that personal data are never stored, shared with third parties, or used to train systems. Facial and body images are not stored, to respect privacy, and a consent checkbox is included in the application’s privacy policy. Guidance is included to inform museums on the use of copyright-free museum images and validated data to be provided as additional information for users. The applications developed within ReInHerit do not store users’ personal information; the generated media are not stored and are provided only to the user. The neural networks used in Strike-a-Pose and Face-Fit have been shown to work fairly with users sampled in diverse world regions (Face-Fit) and with users with different attributes (i.e., gender, age, skin color—Strike-a-Pose), as shown in the model cards of the respective models.

3.3. Using CLIP for Artwork Recognition and Image Retrieval

In recent years, multimodal neural models—particularly CLIP [71]—have demonstrated superior performance compared to traditional models based on hand-crafted features, especially in the field of CV. CLIP, trained on 400 million image–text pairs, has shown excellent zero-shot generalization capabilities, meaning it can perform well on tasks without requiring additional training data. This makes it particularly suitable for domains where annotated datasets are scarce, such as cultural heritage. In our study, we explored the applicability of CLIP in recognizing artworks using the NoisyArt dataset [72], a collection of 89,095 images divided into 3120 classes, enriched with metadata and textual descriptions from sources such as DBpedia and Flickr. CLIP was tested in three main tasks: supervised classification, zero-shot classification, and image retrieval. In each of these scenarios, CLIP significantly outperformed unimodal pre-trained models such as ResNet-50, confirming the effectiveness of the multimodal approach even when working with noisy and imperfectly annotated datasets. For an accurate description of the results, please refer to [73].
In the supervised classification task, we trained a simple classifier on top of CLIP’s visual backbone using labeled examples from NoisyArt. Despite the presence of noise and class imbalance in the dataset, the model achieved an accuracy of 86.63% on the test set, demonstrating the robustness of CLIP, even with minimal additional architecture and training. In the zero-shot classification task, we directly leveraged the similarity between images and textual descriptions as learned by CLIP, without performing any fine-tuning on the dataset. By comparing image embeddings with textual labels, we obtained a remarkable improvement of over 20 percentage points compared to the state-of-the-art methods, highlighting the ability of CLIP to generalize and recognize concepts based on prior vision–language training alone. In the image retrieval task, we experimented with multiple configurations. We developed both a baseline retrieval system using the CLIP pre-trained image embeddings and more advanced variants. One of these incorporated a visual similarity-based re-ranking phase, improving result relevance by reordering retrieved items according to fine-grained visual closeness. Another version involved fine-tuning the CLIP network for retrieval-specific objectives, further enhancing performance in both image-to-image and description-to-image search tasks. These configurations allowed us to assess the flexibility and extensibility of CLIP for retrieval applications in cultural datasets.
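To make the zero-shot mechanism concrete, the sketch below scores a query image against per-class textual descriptions using OpenAI’s CLIP package; the file name and class texts are hypothetical stand-ins for NoisyArt’s DBpedia-derived descriptions:

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical stand-ins for NoisyArt's per-class textual descriptions.
class_texts = [
    "a photo of the painting Primavera by Sandro Botticelli",
    "a photo of the sculpture David by Michelangelo",
]

image = preprocess(Image.open("query.jpg")).unsqueeze(0).to(device)
tokens = clip.tokenize(class_texts).to(device)

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(tokens)
    # Cosine similarity on L2-normalized embeddings, softmaxed into class scores.
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)

print(class_texts[int(probs.argmax())])  # predicted class description
```

No fine-tuning is involved: classification reduces to nearest-description search in CLIP’s joint embedding space, which is also the basis of the retrieval configurations described next.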
These experimental results laid the foundation for the development of the Smart Retrieval application, designed to address real-world search and annotation needs in museum and archival settings. In particular, the application implements an innovative approach known as Combined Image Retrieval (CIR), which enables advanced multimodal search. Users can issue queries by combining a reference image with a textual description that modifies or refines the intended visual content. This functionality—highly powerful and rarely found in practical systems—was developed and tested as part of the ReInHerit project, with scientific demonstrations presented at ACM Multimedia 2023 and ICCV 2023. The app was designed to be accessible through a web interface and usable even on low-spec hardware, making it particularly suitable for small cultural institutions. It is currently being tested in collaboration with the Europeana Fashion Heritage Association, which provided a dataset of historical images for system evaluation. The entire architecture, based on CLIP, supports not only image retrieval but also zero-shot tagging, enabling automatic image annotation, even in the absence of labeled datasets. This approach extends the capabilities for the access, exploration, and valorization of visual heritage, especially for cultural organizations with limited resources.
As a concrete outcome of this research, the ReInHerit project integrated Smart Retrieval into its Toolkit to provide advanced multimedia search capabilities. The platform employs Content-Based Image Retrieval (CBIR) in two distinct modes: users may search by entering a textual description, or by combining a reference image with a natural-language prompt that specifies or modifies certain attributes (Figure 17). What sets Smart Retrieval apart is its conditional retrieval mechanism: an advanced neural model fuses text and visual inputs to enable searches based on both similarity and semantic modifications. Additionally, the system offers automatic zero-shot tagging, eliminating the need for pre-annotated datasets—a crucial feature for smaller museums with limited resources. The web application’s frontend is built in JavaScript and HTML5, while the backend is implemented in Python, leveraging PyTorch 2.3 for the computer vision components and Flask to expose RESTful endpoints that connect the UI to the vision engine. A demonstrator showcases both text-to-image and image-to-image retrieval on the NoisyArt corpus, achieving state-of-the-art performance. The conditional image retrieval feature is under evaluation with the Europeana Fashion Heritage Association’s newly provided fashion image dataset.
To address the cost and effort of creating labeled training data, we extended our CIR method to a zero-shot setting. This approach maps reference image features into a pseudo-word token in the CLIP embedding space and combines it with the text caption—allowing conditioned retrieval without any dataset-specific training. Our zero-shot CIR method surpasses previous benchmarks on both the FashionIQ and CIRR datasets. By integrating these innovations, Smart Retrieval transforms static image archives into interactive, semantically aware platforms, enhancing digital access, curation, and interpretation in the cultural heritage domain.
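The pseudo-word mechanism relies on a learned inversion network that maps CLIP image features into the token-embedding space, which cannot be reproduced in a few lines. As a rough stand-in, the sketch below uses a simple late-fusion baseline, a convex combination of the normalized image and text embeddings, to convey the composed retrieval step; this baseline is our illustrative assumption, not the zero-shot CIR method itself:

```python
import torch
import torch.nn.functional as F

def composed_query(image_feat, text_feat, alpha=0.5):
    """Late-fusion stand-in for composed retrieval: blend the normalized CLIP
    image and text embeddings. The actual zero-shot CIR method instead maps
    image_feat to a pseudo-word token fed through CLIP's text encoder."""
    q = alpha * F.normalize(image_feat, dim=-1) + (1 - alpha) * F.normalize(text_feat, dim=-1)
    return F.normalize(q, dim=-1)                         # shape (1, D)

def retrieve(query, gallery_feats, k=5):
    """Rank gallery items by cosine similarity to the composed query."""
    sims = F.normalize(gallery_feats, dim=-1) @ query.T   # shape (N, 1)
    return sims.squeeze(-1).topk(k).indices
```

Whatever the composition strategy, retrieval itself stays the same: a single nearest-neighbor search over precomputed gallery embeddings, which is what keeps the system usable on low-spec hardware.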

3.4. Smart Lens

Smart Lens (see Figure 18) is a mobile-friendly web application that turns smartphones and tablets into interactive visual tools for artwork exploration. It allows users to analyze specific details within artworks by framing them through the device’s camera. Recognition is performed in real time using a combination of computer vision techniques—namely image similarity search (CBIR), visual classification, and object detection. As soon as a visual element is identified, the app delivers curated multimedia content, such as text, images, audio, or video, linked to that detail. This transforms the viewing experience into a more dynamic and personalized interaction with the art.
The app differs from traditional guide systems based on static elements like QR codes by encouraging users to actively investigate and interpret what they see. This type of visual engagement transforms passive observation into a form of discovery, promoting a deeper connection with the artwork. At the same time, the application collects anonymized usage data—such as the details most frequently recognized or explored—offering curators a new way to understand visitor behavior and improve exhibit design. Smart Lens supports three distinct recognition modes:
  • Content-based Image Retrieval (CBIR)—This method compares visual descriptors from the user’s live camera feed with those extracted from a curated image dataset. Each artwork is not only represented as a whole but also partitioned into segments so that fine-grained features can be detected and matched efficiently.
  • Classification—A neural network model, specifically fine-tuned for the collection, assigns a class label to the input frame based on overall appearance. The recognition result is accepted only if the confidence score exceeds a designated threshold. This lightweight solution is ideal for running directly on mobile devices.
  • Object Detection—In this mode, the system pinpoints and labels multiple details within a single artwork using bounding boxes. The underlying model, optimized for detecting artwork-specific elements, selects only those regions whose confidence level meets the predefined criteria. This approach is particularly suited for complex works with multiple visual components.
The frontend is implemented in responsive HTML5 and CSS, dynamically updated via JavaScript to support real-time feedback. As the user navigates through the artwork with their camera, the app continuously compares live input to the database and presents contextual information on-screen. TensorFlow [74] models are employed for classification and retrieval, while SSD/MobileNetV3 powers the detection functionality. For every recognized element, users receive access to an image, descriptive metadata, and any associated media content. If no audio file is available, an integrated text-to-speech engine ensures that the user can still experience the application as a full-featured audio guide. Additionally, the app can operate in a simplified mode that focuses solely on identifying whole artworks, bypassing the detail-oriented recognition pipeline. The web app and its backend are hosted on a remote server, accessible through a simple QR code to streamline entry and avoid manual URL input. By promoting interactive exploration and supporting fine-grained analysis of art details, Smart Lens offers a smart, engaging alternative to static museum guides. It invites visitors to become active participants in the exhibition experience while empowering institutions with new tools for data collection and visitor engagement.
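As a sketch of the confidence filtering used in the detection mode (the TF Hub handle and threshold are illustrative assumptions; the production model is a fine-tuned SSD/MobileNetV3):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Illustrative public detector; Smart Lens uses its own fine-tuned model.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

def detect_details(frame_rgb, score_threshold=0.6):
    """Run the detector on one camera frame (H, W, 3, uint8) and keep only
    regions whose confidence meets the threshold, as the detection mode does."""
    batch = tf.expand_dims(tf.convert_to_tensor(frame_rgb, dtype=tf.uint8), 0)
    out = detector(batch)
    scores = out["detection_scores"][0].numpy()
    keep = scores >= score_threshold
    boxes = out["detection_boxes"][0].numpy()[keep]   # normalized [ymin, xmin, ymax, xmax]
    labels = out["detection_classes"][0].numpy()[keep].astype(int)
    return list(zip(labels, boxes, scores[keep]))
```

Each surviving box is then mapped to its curated multimedia content, so only confidently recognized details trigger the contextual overlay.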
The Smart Lens app was tested during the ReInHerit ReThinking Exhibitions13 held at the GrazMuseum (Austria), the Museum of Cycladic Art (Athens, Greece), and the Bank of Cyprus Cultural Foundation (Nicosia, Cyprus) (Figure 19).

3.5. Multimedia Chatbot: VIOLA

VIOLA (Visual Intelligence OnLine Art-assistant)14 is a web and mobile application designed to enhance interaction with cultural heritage content through a chatbot interface powered by AI. Users can engage with the system using natural language to ask questions about artworks, including both visual elements and historical or contextual information. The chatbot employs a hybrid architecture that merges CV and NLP techniques. This enables it to interpret and respond to different types of queries, offering a more intuitive and flexible user experience. The design is inspired by the growing popularity of conversational AI platforms like ChatGPT (4o model), now widely adopted across various domains.
The backend is developed in Python, utilizing Flask to provide a REST API that connects with the frontend. Two versions of the backend are available, one of which incorporates three neural networks to handle different functionalities (a minimal routing sketch follows the list):
  • A neural network classifies the user’s query, determining whether it pertains to the visual content or the contextual aspects of the artwork.
  • A question-answering (QA) neural network uses contextual information about the artwork, stored in JSON format, to address questions related to its context.
  • A visual question-answering (VQA) neural network processes the visual data and the visual description of the artwork, stored in JSON format, to answer questions about the content of the artwork [75].
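A minimal sketch of this routing logic, using generic Hugging Face pipelines as stand-ins for VIOLA’s project-specific classifier, QA, and VQA networks (the model choices and the artwork record structure are illustrative assumptions):

```python
from transformers import pipeline

# Stand-in models; VIOLA's actual three networks are project-specific.
router = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
qa = pipeline("question-answering")           # contextual QA over JSON metadata
vqa = pipeline("visual-question-answering")   # VQA over the artwork image

def answer(question: str, artwork: dict) -> str:
    """Classify the query as visual or contextual, then route it to the
    matching network, mirroring VIOLA's three-network backend."""
    labels = ["visual content of the artwork", "context and history of the artwork"]
    route = router(question, candidate_labels=labels)["labels"][0]
    if route == labels[0]:
        return vqa(image=artwork["image_path"], question=question)[0]["answer"]
    return qa(question=question, context=artwork["description"])["answer"]
```

The artwork dict here stands in for the JSON records mentioned above, which hold both the visual description and the contextual information for each piece.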
To further enhance accuracy and dialogue capabilities, large language models (LLMs), including GPT-based architectures, are employed for more complex question-answering tasks that require deeper contextual understanding. An example of the app is shown in the figure below (Figure 20).
The application is entirely web-based, combining a backend system that runs the visual question answering engine with a frontend interface designed for user interaction. The interface, developed using modern web technologies, is fully responsive and adapts smoothly to both desktop and mobile environments. This flexibility allows VIOLA to be embedded into existing museum websites or used as a standalone mobile solution, offering an innovative and intelligent visitor guide. To improve usability on smartphones and tablets, the system includes voice input through speech recognition, reducing the need for manual typing. Its modular design makes it easily customizable—museums can update visual assets and metadata to reflect their own collections. Additionally, VIOLA provides a scalable framework for more advanced features, including interactive learning tools, gamified experiences, and natural language-driven retrieval systems. The integration of large-scale multimodal language models (MLLMs) enhances its ability to handle complex visual-contextual questions, combining image understanding with contextual interpretation for richer, more informative responses [76].
Chatbots, thanks to recent technological advances, create new opportunities for museums and galleries to engage younger audiences through innovative narrative visualization [77]; many recent approaches have studied how to exploit external knowledge sources to improve accuracy in the CH domain [78,79,80]. Building on this approach, the VIOLA web app has been implemented in small- and medium-sized museums as a BYOD tool, enabling new audiences and young visitors to access detailed information about artworks while prioritizing ethics and transparency, particularly in terms of privacy, data accuracy, and training data sources. Initially, VIOLA was tested on the ArtPedia dataset, a standard benchmark for VQA systems, but it has since evolved to incorporate a curated selection of artworks from partner museums, including the GrazMuseum, the Museum of Cycladic Art, and the Bank of Cyprus Cultural Foundation. This transition to museum-provided content ensures a higher degree of accuracy and scientific reliability, allowing the chatbot to deliver responses grounded in expert knowledge. In addition to the first VIOLA Gallery, featuring artworks from the ReInHerit Consortium’s museums, an extended version of VIOLA was implemented in small- and medium-sized museums in Italy, following a BYOD approach that allows visitors to access the chatbot online using their smartphones. A case study was conducted at the Gipsoteca di Arte Antica (GiArA) of the University of Pisa (see Figure 21).
A dedicated VIOLA gallery was created featuring artworks from the GiArA Museum, with all content validated by the museum’s curators. To minimize errors, the chatbot draws on this curator-validated content, but continuous updates are necessary to address uncertainties, improve language processing, and personalize user interactions. A central feature of the VIOLA chatbot is its “prompt engineering”, designed to prevent incorrect responses, known as “hallucinations”; when testing the system prompt, it is important to take into account what has been observed in [80], i.e., that safeguarding accuracy may reduce the liveliness of the interactions. Curated textual data on the artworks ensure that the chatbot’s answers are accurate and aligned with validated datasets. Prompt engineering also helps minimize unanswered questions by enabling the chatbot to address a wide range of queries, from general curiosities to more detailed historical inquiries. During the ReInHerit project testing workshops, users often began with curiosity-driven questions about the stories behind artworks and gradually explored more specific historical and scientific topics. The curator-provided VIOLA content often lacked information addressing these curiosities, so the system was enhanced with an Admin interface that allows curators to update content, as well as a feature that tracks unanswered questions, helping to fill information gaps. This dynamic and speech-based approach to the use of chatbots in museums [81] improves the chatbot’s ability to respond both to general, curiosity-driven questions and to more scientific or historical ones, avoiding incorrect answers while providing comprehensive responses to visitors’ diverse questions. This user-oriented approach (Figure 22) promotes continuous collaboration between developers and curators: the chatbot does not replace curators but serves as an integrative tool supporting their digitization efforts, relying on them to update and validate artwork descriptions so that it remains accurate, relevant, and engaging for visitors.
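As an illustration of this grounding strategy, the sketch below shows how a system prompt could constrain an LLM to curator-validated content; the prompt wording, model name, and client usage are assumptions for illustration, not the exact ReInHerit prompt.

```python
# Illustrative grounding sketch: constrain an LLM to curator-validated text.
# The prompt wording, model name, and parameters are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def grounded_answer(question: str, curated_text: str) -> str:
    system_prompt = (
        "You are a museum assistant. Answer ONLY from the curated description "
        "below, which has been validated by the museum's curators. If the "
        "answer is not in the description, say that you do not know.\n\n"
        f"Curated description:\n{curated_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # a low temperature reduces invented detail
    )
    return response.choices[0].message.content
```

Unanswered questions caught by the “I do not know” branch can then be logged for curators, matching the gap-tracking feature described above.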
During the museum testing and co-creation processes conducted as part of the project hackathons15, VIOLA proved particularly effective for younger audiences, both in providing quick, dialogic access to content and in motivating further development aimed at layered multimedia content (images, video, and sound), in line with the AI-powered multimodal interaction approach [82]. Unlike more rigid guided tours, the conversational format appears to encourage exploration and autonomy; evaluation data showed an increase in time spent with the artworks, indicating improved visitor retention and engagement.

3.6. Smart Video and Photo Restorer

Smart Video and Photo Restorer is an advanced system developed to recover analog videos affected by substantial visual degradation due to tape deterioration, as well as aged photographs from historical archives. Traditionally, the restoration of such content is carried out manually by expert archivists using commercial tools, editing frame by frame—a process that is labor-intensive and costly. The solution we propose adopts a multi-frame restoration strategy, which is particularly effective in handling severe tape-mistracking artifacts that often result in visually corrupted or scrambled frames. To train the model, we constructed a synthetic dataset that closely emulates the types of degradation observed in real-world analog footage provided by Archivio Storico Luce, the largest historical video archive in Italy; these actual analog recordings guided the creation of high-fidelity simulations of degradation. The synthetic dataset was generated by applying various distortions—such as Gaussian noise, white speckles, chroma bleeding, and horizontal displacements—to high-quality digital video, using Adobe After Effects to replicate the effects of analog tape damage. The degraded sequences were then paired with their clean versions to form ground-truth training data. The final dataset comprised 26,392 frames, divided into training and validation subsets.
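As a rough illustration of this degradation pipeline, the following Python sketch applies comparable distortions with NumPy instead of Adobe After Effects; all distortion parameters are illustrative assumptions.

```python
# Rough NumPy sketch of tape-like synthetic degradation (the dataset described
# above was produced with Adobe After Effects); all parameters are assumptions.
import numpy as np

def degrade_frame(frame: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply tape-like artifacts to a clean uint8 frame of shape (H, W, 3)."""
    out = frame.astype(np.float32)

    # Gaussian noise over the whole frame.
    out += rng.normal(0.0, 8.0, size=out.shape)

    # White speckles: a sparse mask of saturated pixels.
    speckles = rng.random(out.shape[:2]) < 0.001
    out[speckles] = 255.0

    # Horizontal displacement of a few random scanlines (tape mistracking).
    for row in rng.integers(0, out.shape[0], size=5):
        out[row] = np.roll(out[row], int(rng.integers(-20, 21)), axis=0)

    # Crude chroma bleeding: shift one color channel sideways.
    out[..., 2] = np.roll(out[..., 2], 2, axis=1)

    return np.clip(out, 0, 255).astype(np.uint8)
```

Pairing each degraded frame with its untouched source yields the kind of ground-truth supervision described above.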
The restoration architecture we developed is based on a Swin-UNet (see Figure 23) that works on videos, using a multi-frame approach to enhance T frames at once and exploit spatio-temporal information. We employed 3D convolutions to partition the input into patches and pixel shuffle for the patch-expanding layer, allowing the model to learn the residual difference between degraded and restored frames, thereby stabilizing training and speeding up convergence [83].
The training procedure used a weighted combination of pixel-wise loss and a perceptual loss calculated in the VGG-19 feature space [84]. The model processes patches of size 256 × 256, cropped randomly from video frames. During both training and inference, the number of consecutive frames T was set to 5. To evaluate the effectiveness of our method, we used three standard full-reference image quality metrics: PSNR, SSIM [85], and LPIPS [86]. On the synthetic dataset, our model outperformed DeOldify [87], a well-known image and video restoration framework, as shown in Table 2. We then applied the trained model to real analog recordings from Archivio Storico Luce, where it demonstrated excellent generalization and superior restoration quality—establishing it as a state-of-the-art solution for analog video enhancement.
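A minimal PyTorch-style sketch of such a combined objective is shown below, assuming an L1 pixel term and VGG-19 features truncated at an intermediate block; the weighting, layer choice, and omitted input normalization are illustrative assumptions, not the exact configuration used in training.

```python
# Hedged PyTorch sketch of the training objective: weighted sum of an L1
# pixel loss and a VGG-19 perceptual loss. Weight, layer cut, and the omitted
# ImageNet normalization are simplifying assumptions.
import torch
from torch import nn
from torchvision.models import vgg19, VGG19_Weights

class RestorationLoss(nn.Module):
    def __init__(self, perceptual_weight: float = 0.1):
        super().__init__()
        # Truncate VGG-19 at an intermediate block (assumed relu4_4) and
        # freeze it, so it only provides features for the perceptual term.
        features = vgg19(weights=VGG19_Weights.DEFAULT).features[:27].eval()
        for p in features.parameters():
            p.requires_grad_(False)
        self.vgg = features
        self.pixel_loss = nn.L1Loss()
        self.w = perceptual_weight

    def forward(self, restored: torch.Tensor, target: torch.Tensor):
        # restored/target: (B, 3, 256, 256) patches, matching the random
        # 256 x 256 crops above (the T = 5 frame stack is omitted for brevity).
        l_pix = self.pixel_loss(restored, target)
        l_perc = self.pixel_loss(self.vgg(restored), self.vgg(target))
        return l_pix + self.w * l_perc
```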
Finally, to make this restoration technology accessible, we developed a Flask-based demo web app that users can access through a web browser (Figure 24). The platform supports the upload of photo and video files and returns a downloadable restored result, together with a comparison against the original; alternatively, the user can choose one of the example videos or photos to see what the model is capable of. Designed as an intuitive tool for archivists, curators, and cultural institutions working with degraded analog footage, the app allows professionals to upload old photos or analog videos directly through a browser and receive high-quality restored versions, eliminating the need for costly and time-intensive manual frame-by-frame editing. Leveraging our neural network and multi-frame approach, the application can handle even severe degradations such as frame fragmentation due to tape mistracking. By restoring visibility to rare and fragile visual materials, Smart Video and Photo Restorer offers a practical solution for safeguarding audiovisual heritage and ensuring broader public access to the cultural memory preserved in historical footage.
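A minimal sketch of such a Flask upload-and-restore endpoint follows; the route name, form field, and the restore_video() stub are hypothetical simplifications of the demo app.

```python
# Minimal Flask sketch of an upload-and-restore endpoint like the demo app;
# the route, form field, and restore_video() stub are hypothetical.
from pathlib import Path

from flask import Flask, request, send_file

app = Flask(__name__)
UPLOADS = Path("uploads")
UPLOADS.mkdir(exist_ok=True)

def restore_video(src: Path, dst: Path) -> None:
    """Placeholder for the trained multi-frame restoration pipeline."""
    raise NotImplementedError("plug the restoration model in here")

@app.route("/restore", methods=["POST"])
def restore():
    upload = request.files["media"]          # degraded photo or video
    src = UPLOADS / upload.filename
    upload.save(src)

    dst = src.with_name(f"restored_{src.name}")
    restore_video(src, dst)                  # run the restoration model

    # Return the restored result so the browser can download and compare it.
    return send_file(dst, as_attachment=True)

if __name__ == "__main__":
    app.run(debug=True)
```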

4. Discussion

The findings of the ReInHerit project and its Toolkit validate the evolving concept of the interactive museum, which leverages digital tools across the entire visitor journey, as discussed in Section 1. This approach positions museums as spaces for unique, informal learning experiences that attract diverse audiences and deepen visitor engagement, as shown in our survey analysis (Section 1.1). Recent museum studies, discussed in Section 2.1, emphasize the importance of digital tools, in particular those based on AI and CV, in enriching visitor interactions, providing educational content through storytelling and personalized engagement, enhancing museum collections, and ultimately making the user experience more dynamic, informative, and tailored to individual interests. The research conducted by ReInHerit highlights the necessity of such digital tools for enhancing museum experiences, particularly for younger, digitally savvy audiences, who prefer interactive, BYOD-friendly tools over native apps, as shown in the ReInHerit Strategy (Section 2.2). Here, CV and AI have proven invaluable in personalizing interactions: they require active, visual engagement from users, making the experience more interactive, engaging, and smarter (Section 3.4), and they foster a participatory, user-oriented process (Section 3.5). The ReInHerit Toolkit thus focuses on visitor-centered experiences that prioritize BYOD web applications for seamless interaction, designed to heighten emotional and playful engagement, which is crucial for effective and memorable digital learning. With a commitment to open-source development (Section 3), the ReInHerit project offers adaptable and sustainable digital solutions tailored to museums of all sizes, especially small- to medium-sized institutions (Section 2). By fostering an open-source approach, the toolkit enhances maintainability and reuse, empowering organizations with limited resources to implement and sustain digital innovations.
The project also advocates a multidisciplinary, collaborative approach—integral to the Digital Hub model, where resources and best practices are shared, fostering community-wide advancement in digital heritage management. According to the focus groups conducted with museum professionals described in Section 1.4, it is useful to overcome traditional boundaries between disciplines, and it is crucial to develop tools in dialogue with visitors and to invite communities into the creation process. The main objective of digital innovation, as described in Section 2.2, is to provide not only a tool as an end product but a transdisciplinary development process, promoting collaboration and the integration of knowledge from different areas and creating dialogue and mediation between disciplinary fields. Following this approach, the ReInHerit Toolkit was designed and tested with a bottom–up approach, inviting communities to participate in the creation process through workshops and hackathons. Toolkit apps were tested and studied during interdisciplinary hackathons at the AI/XR Summer School held in Matera in July 2023, where international speakers and experts discussed and engaged with students and researchers from different backgrounds and skill sets. Multidisciplinary groups of young Ph.D. students, supported by experts, worked on two main themes, “Gamification and playful engagement” and “Smart interaction and digital contents”, using the open-source code shared through the Digital Hub and adapting it to artworks from local museums. This co-creation process added new technological developments for the apps and new user interaction scenarios, improving engagement, inclusivity, and design features16. ReInHerit’s impact has been recognized through presentations and demonstrations at national and international events, including the ACM Multimedia 2022 Conference, where the applications Strike-a-Pose and Face-Fit (Section 3.1 and Section 3.2) received the Best Demo Honourable Mention Award for Engaging Museum Experience. The web app used to test the retrieval system (Section 3.3) received the Best Demo Award Honorable Mention at the Computer Vision and Pattern Recognition (CVPR) 2022 Conference. Further, the toolkit has been included as a case study in the European Heritage Hub,17 has been presented at several national and international conferences, and is gaining visibility as part of Italy’s “Museums + AI Toolkit” [88] initiative, exemplifying how ReInHerit bridges the technology gap and promotes equitable access to cultural heritage innovation.
A distinctive aspect of ReInHerit’s approach is that it provides a diverse, open-source, AI-powered toolkit. Platforms such as Google Arts and Culture, Smartify, and Amazon Rekognition, as well as AI-enhanced services developed through digitization and innovation initiatives, often offer advanced features such as metadata enrichment, image recognition, recommender systems, or immersive storytelling. However, these solutions are often based on proprietary architectures and centralized infrastructures, with limited transparency regarding algorithmic decision making. Similarly, well-known AI-based tools developed by large, well-resourced museums (e.g., the Cleveland Museum of Art’s ArtLens, the Rijksmuseum’s AI-powered Art Explorer, and the Metropolitan Museum of Art’s collaboration with Microsoft AI) are highly refined and impactful but often tailored to specific institutional needs and large-scale infrastructures. These tools usually follow top–down models and require significant technical and financial capacity for implementation and long-term maintenance. ReInHerit, on the other hand, emphasizes modularity, offering a scalable alternative based on ethical and participatory values. The Toolkit offers not just a set of technical solutions but a model of cultural innovation based on transparency, co-creation, and local digital empowerment. Unlike platforms that operate as “black boxes”, the toolkit is fully open-source and transparent in its architecture and logic, enabling institutions to understand, adapt, and evolve the tools autonomously. The development process itself reflects a shift from top–down delivery to participatory co-design, and the model allows museums to customize tools in line with their missions and needs, enabling culturally relevant and diverse experiences. In this way, it helps redefine what responsible innovation looks like in the CH sector, especially for smaller institutions facing digital transformation with limited resources.

5. Challenges and Future Directions

The ReInHerit Toolkit represents a significant, original, and future-oriented contribution to digital museology. Rooted in an interdisciplinary and participatory design approach, it addresses a key challenge for the sector: how to enable sustainable and inclusive digital innovation that is accessible to all institutions, not only those with substantial resources or technical capacity. By leveraging a modular, open-source architecture, the Toolkit empowers small- and medium-sized museums in particular to experiment with advanced technologies like AI and computer vision. Its design facilitates ethical, BYOD-compatible interactions that can be easily customized and maintained locally, reducing dependency on costly proprietary platforms. The emphasis on personalization, gamification, and playful learning enhances emotional engagement and makes digital cultural experiences more inclusive and impactful. A major strength of the Toolkit lies in the structural support provided by the Horizon 2020 program, which enabled a center of academic excellence in computer vision and artificial intelligence to co-develop tools in close collaboration with museum professionals, creatives, and heritage experts. This sustained, iterative dialogue between technical and cultural domains proved to be as valuable as the resulting applications themselves. The co-creation process fostered an interdisciplinary working culture and produced a transferable model of collaboration that supports sustainable digital innovation. Importantly, it contributed to the training and capacity building of all participants, especially early-career researchers and practitioners, by generating curricular resources, syllabi, and best practices that bridge the gap between AI research and cultural mediation.
While user engagement, participation rates, and feedback from museum visitors and professionals have been important indicators of success, ReInHerit has adopted a broader, multi-dimensional framework to evaluate its impact. Success has been measured through the following:
  • Transferability and reuse: Several applications from the toolkit (e.g., Strike-a-Pose, Face-Fit, SmartLens, VIOLA Chatbot, and Smart Retrieval) have been adapted and reused by institutions within and beyond the original consortium. Such reusability and portability are strong indicators of technical and conceptual robustness.
  • Capacity building and skill development: The toolkit has contributed to the training of museum professionals and early-career researchers. Success was measured through the creation and adoption of syllabi, workshops, and curricular resources, as well as through qualitative feedback and reflection activities conducted during and after piloting.
  • Scientific recognition and peer-reviewed contributions: Components of the toolkit were recognized at major international conferences (e.g., ACM Multimedia and CVPR) and contributed to scholarly publications in the fields of HCI, computer vision, and digital museology.
  • Open-source impact: Public repositories on platforms like GitHub have shown sustained interest, as reflected in downloads, stars, forks, and issues raised or resolved by external contributors. However, it must be noted that adapting an open-source application still requires a certain investment of effort and technical capability from the adopting organization: open-source systems greatly lower the costs of implementing innovative applications but cannot fully eliminate them.
  • Institutional integration: Success was also measured by the degree of integration of ReInHerit tools within the curatorial or educational strategies of participating museums, demonstrating long-term adoption potential.
To support long-term adoption and sustainability, the project established a Digital Hub, a collaborative infrastructure that aggregates applications, source code, technical webinars, and documentation. This platform enables small institutions to access, adapt, and extend digital solutions without relying on proprietary infrastructures. Several applications, such as Strike-a-Pose, Face-Fit, and SmartLens, have already been reused or adapted by new projects and institutions, demonstrating the Toolkit’s portability, scalability, and technical legacy. Moreover, the open-source repositories have attracted interest beyond the original consortium, as reflected in their activity (downloads, stars, and forks) on platforms like GitHub. Importantly, ReInHerit contributes not only applied solutions but also theoretical and scientific insights to the evolving discourse on the digital museum. It operationalizes key museological concepts, such as co-creation, affective engagement, hybrid participation, and ethical innovation, demonstrating how they can be translated into tangible practices and technologies. At the same time, it identifies and addresses critical tensions in the field, including gaps in digital literacy, the sustainability of innovation, and the ethical implications of personalization and data use. On a scientific level, ReInHerit has also contributed original research to the multimedia, Human–Computer Interaction (HCI), and computer vision communities, particularly on themes such as Smart Retrieval, image restoration, and interaction design. These results are being further explored through academic collaborations, open research projects, and doctoral dissertations, reflecting the project’s relevance to both technical and heritage-oriented disciplines.
Looking ahead, further research should focus on evaluating the long-term impact and adoption of the toolkit across diverse institutional and geographic contexts. In particular, there is a need to assess how curatorial strategies and visitor behaviors evolve with the integration of such tools, and how personalization through AI can be implemented responsibly to avoid bias and build institutional trust. Furthermore, ReInHerit highlights the potential of bottom–up policymaking and international collaboration to shape a more equitable shared digital heritage ecosystem. The toolkit is not simply a collection of digital applications—it is a blueprint for future museum innovation. It illustrates how technology can be ethically designed, collaboratively developed, and sustainably maintained to support a new paradigm of participatory, inclusive, and emotionally engaging cultural experiences. It invites ongoing reflection on how digital tools can be used not only to preserve heritage but also to activate it, making museums more relevant, dynamic, and meaningful in the digital age.

6. Conclusions

Critically reflecting on the role of museums in the age of artificial intelligence means confronting complex challenges related to technological accessibility, the sustainability of digital solutions, and the ethical coherence of innovation processes. This paper aimed to highlight how the ReInHerit project has addressed these challenges not only through advanced technological tools but also through a deeply collaborative, interdisciplinary, and capacity-building approach. The value of the toolkit lies as much in its tools as in the method that generated them: a co-creation process involving researchers, heritage experts, museum professionals, and digital creatives, made possible by the support of the Horizon 2020 program. This collaboration has produced best practices, educational resources, and sustainable models, with a tangible training impact for participants and a replicable vision for the sector’s future. The true innovation and core challenge of this digital transformation approach lies in rethinking the relationship between audiences and collections. We have shown how the toolkit introduces interactive and personalized digital tools that enhance storytelling, emotional engagement, and informal learning. We have also demonstrated how these developments are grounded in cross-disciplinary research that addresses crucial themes such as emotional museums, playful experiences and gamification, audience diversification, sustainability, and bottom–up co-creation. Equally important is the foundation ReInHerit provides for the ethical and sustainable integration of AI into cultural heritage practices, supporting museums in producing new knowledge and enriching collection data in ways that expand interpretive potential and improve user experience. Ultimately, the ReInHerit Toolkit offers a future-oriented vision for innovation in the cultural sector, reshaping museums as dynamic, participatory, and inclusive spaces for cultural interpretation, knowledge production, and creative engagement.

Author Contributions

Conceptualization, P.M. and M.B.; methodology, M.B., P.M. and A.F.; software, M.B., A.F. and F.P.; validation, all authors; investigation, P.M. and M.B.; resources, P.M. and M.B.; data curation, M.B.; writing—original draft preparation, P.M.; writing—review and editing, P.M. and A.F.; visualization, F.P. and P.M.; supervision, M.B.; project administration, P.M.; funding acquisition, M.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the European Commission under the European Horizon 2020 Programme, grant number 101004545—ReInHerit.

Data Availability Statement

The data presented in this study were produced within the framework of the ReInHerit project and are available in project deliverables with restricted access. Access to these data can be granted upon request from the corresponding author, subject to the conditions and restrictions outlined by the project’s data management and dissemination policies. All the code of the described applications is available on the ReInHerit GitHub: https://github.com/ReInHerit.

Acknowledgments

The authors would like to thank all colleagues from the ReInHerit project and the MICC researchers who contributed to the outcomes presented in the Results section. The authors gratefully acknowledge the ReInHerit Consortium for granting permission to reproduce some of the images used in this article (see copyright acknowledgments in captions) © 2023, ReInHerit.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI	Artificial Intelligence
API	Application Programming Interface
BYOD	Bring Your Own Device
CBIR	Content-Based Image Retrieval
CH	Cultural Heritage
CHAT	Conversational Human Agent Training
CIR	Combined Image Retrieval
CLIP	Contrastive Language–Image Pre-Training
CSS	Cascading Style Sheets
CV	Computer Vision
GDPR	General Data Protection Regulation
GPT	Generative Pre-trained Transformer
GPU	Graphics Processing Unit
HCI	Human–Computer Interaction
HTML	HyperText Markup Language
ICT	Information and Communication Technologies
ISO	International Organization for Standardization
LLMs	Large Language Models
LPIPS	Learned Perceptual Image Patch Similarity
ML	Machine Learning
MLLMs	Multimodal Language Models
NGOs	Non-Governmental Organizations
NLP	Natural Language Processing
PSNR	Peak Signal-to-Noise Ratio
QA	Question Answering
REST	Representational State Transfer
SQL	Structured Query Language
SSD	Single Shot Detector
SSIM	Structural Similarity Index
UX	User Experience
VGG	Visual Geometry Group
VQA	Visual Question Answering

Notes

1. The Hub collects resources and training material to foster and support cultural tourism in museums and heritage sites, and serves as a networking platform to connect and exchange experiences. Website: https://reinherit-hub.eu/resources, accessed on 9 July 2025.
2. ReInHerit deliv. D3.9—Training Curriculum and Syllabi—https://ucarecdn.com/095df394-fad6-4f35-bdcc-09931d0b8dd2/, accessed on 9 July 2025.
3. ReInHerit deliv. D3.1—National Surveys Report—https://ucarecdn.com/54faa991-1570-4a53-9e8a-c1dea0a33110/, accessed on 9 July 2025.
4. The national surveys complied with GDPR, ensuring full anonymization of data by not requesting personal details. Participants were also informed about the survey’s purpose and how the data would be used. Similarly, the focus groups followed strict ethical guidelines, with informed consent procedures and data anonymization carefully planned according to the project’s Ethics and Data Management Plan, ensuring full adherence to GDPR standards. See ReInHerit Deliverables D2.1, D2.4, and D3.1: https://reinherit-hub.eu/deliverables, accessed on 9 July 2025.
5. For more details on the selection process and the qualitative and quantitative analytical methods, see ReInHerit deliv. D2.4—Focus Groups Phase II Report—https://ucarecdn.com/4966fc16-784a-4473-b457-4ba6ba458c13/, accessed on 9 July 2025.
6. ReInHerit deliv. D3.2—Toolkit Strategy Report—https://ucarecdn.com/71ffe888-3c0d-470d-962d-ab145edcff3f/, accessed on 9 July 2025.
7.
8. ReInHerit news: https://reinherit-hub.eu/news, accessed on 9 July 2025.
9. Ethical Aspects and Scientific Accuracy of AI/CV-based tools: https://reinherit-hub.eu/bestpractices/db1bd5ab-218f-480b-b709-06ac9ab72b33, accessed on 9 July 2025.
10. ReInHerit Applications: https://reinherit-hub.eu/applications/, accessed on 9 July 2025.
11.
12. Strike-a-Pose: Co-creation process at the ReInHerit Hackathon: https://reinherit-hub.eu/summerschool/6205f8e2-60aa-46d2-bca3-bc46c9283029, accessed on 9 July 2025.
13. ReInHerit travelling exhibitions: https://reinherit-hub.eu/travellingexhibitions, accessed on 9 July 2025.
14. Available on the ReInHerit Digital Hub: https://reinherit-hub.eu/tools/apps/543b2b77-35f1-41b5-b06e-3a355f2a1c6b, accessed on 9 July 2025.
15. VIOLA Chatbot: Co-creation process at the ReInHerit Hackathon: https://reinherit-hub.eu/summerschool/314ed627-6f30-4d43-9428-4e55aee28066, accessed on 9 July 2025.
16. Students and researchers from different international academic backgrounds participated in the international XR/AI Summer School 2023, held from 17 to 22 July 2023 in Matera, Italy, working on the topics of extended reality and artificial intelligence. More information on the ReInHerit Hackathon and project proposals: https://reinherit-hub.eu/summerschool/, accessed on 9 July 2025.
17. AI-Based Toolkit for Museums and Cultural Heritage Sites: https://www.europeanheritagehub.eu/document/ai-based-toolkit-for-museums-and-cultural-heritage-sites/, accessed on 9 July 2025.

References

  1. Stamatoudi, I.; Roussos, K. A Sustainable Model of Cultural Heritage Management for Museums and Cultural Heritage Institutions. ACM J. Comput. Cult. Herit. 2024. [Google Scholar] [CrossRef]
  2. Gaia, G.; Boiano, S.; Borda, A. Engaging Museum Visitors with AI: The Case of Chatbots. In Museums and Digital Culture: New Perspectives and Research; Giannini, T., Bowen, J.P., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 309–329. [Google Scholar] [CrossRef]
  3. Adahl, S.; Träskman, T. Archeologies of Future Heritage: Cultural Heritage, Research Creations, and Youth. Mimesis J. 2024, 13, 487–497. [Google Scholar] [CrossRef]
  4. Pietroni, E. Multisensory Museums, Hybrid Realities, Narration, and Technological Innovation: A Discussion Around New Perspectives in Experience Design and Sense of Authenticity. Heritage 2025, 8, 130. [Google Scholar] [CrossRef]
  5. Ivanov, R.; Velkova, V. Analyzing Visitor Behavior to Enhance Personalized Experiences in Smart Museums: A Systematic Literature Review. Computers 2025, 14, 191. [Google Scholar] [CrossRef]
  6. Wang, B. Digital Design of Smart Museum Based on Artificial Intelligence. Mob. Inf. Syst. 2021, 2021, 4894131. [Google Scholar] [CrossRef]
  7. Perakyla, A.; Sorjonen, M.L. Emotion in Interaction; Oxford University Press: Oxford, UK, 2012. [Google Scholar] [CrossRef]
  8. Ciolfi, L. Embodiment and Place Experience in Heritage Technology Design. In The International Handbooks of Museum Studies; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2015; Chapter 19; pp. 419–445. [Google Scholar] [CrossRef]
  9. Parry, R. Museums in a Digital Age; Leicester Readers in Museum Studies; Routledge: London, UK, 2010. [Google Scholar]
  10. Walhimer, M. Designing Museum Experiences; Rowman & Littlefield: Lanham, MD, USA, 2021. [Google Scholar]
  11. Cucchiara, R.; Del Bimbo, A. Visions for Augmented Cultural Heritage Experience. IEEE Multimed. 2014, 21, 74–82. [Google Scholar] [CrossRef]
  12. MuseumBooster. Museum Innovation Barometer. 2022. Available online: https://www.museumbooster.com/mib (accessed on 7 May 2025).
  13. Giannini, T.; Bowen, J. Museums and Digital Culture: From Reality to Digitality in the Age of COVID-19. Heritage 2022, 5, 192–214. [Google Scholar] [CrossRef]
  14. Thiel, S.; Bernhardt, J.C. (Eds.) AI in Museums: Reflections, Perspectives and Applications; Transcript Verlag: Bielefeld, Germany, 2023; pp. 117–130. [Google Scholar] [CrossRef]
  15. Clarencia, E.; Tiranda, T.G.; Achmad, S.; Sutoyo, R. The Impact of Artificial Intelligence in the Creative Industries: Design and Editing. In Proceedings of the International Seminar on Application for Technology of Information and Communication (iSemantic), Semarang, Indonesia, 21–22 September 2024; pp. 440–444. [Google Scholar] [CrossRef]
  16. Muto, V.; Luongo, S.; Sepe, F.; Prisco, A. Enhancing Visitors’ Digital Experience in Museums through Artificial Intelligence. In Proceedings of the Business Systems Laboratory International Symposium, Palermo, Italy, 11–12 January 2024. [Google Scholar]
  17. Furferi, R.; Di Angelo, L.; Bertini, M.; Mazzanti, P.; De Vecchis, K.; Biffi, M. Enhancing traditional museum fruition: Current state and emerging tendencies. Herit. Sci. 2024, 12, 20. [Google Scholar] [CrossRef]
  18. Villaespesa, E.; Murphy, O. This is not an apple! Benefits and challenges of applying computer vision to museum collections. Mus. Manag. Curatorship 2021, 36, 362–383. [Google Scholar] [CrossRef]
  19. Osoba, O.A.; Welser, W., IV. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence; RAND Corporation: Santa Monica, CA, USA, 2017. [Google Scholar]
  20. Villaespesa, E.; Murphy, O. THE MUSEUMS + AI NETWORK—AI: A Museum Planning Toolkit. 2020. Available online: https://themuseumsai.network/toolkit/ (accessed on 7 May 2025).
  21. Zielke, T. Is Artificial Intelligence Ready for Standardization? In Proceedings of the Systems, Software and Services Process Improvement; Yilmaz, M., Niemann, J., Clarke, P., Messnarz, R., Eds.; Springer: Cham, Switzerland, 2020; pp. 259–274. [Google Scholar]
  22. Pasikowska-Schnass, M.; Young-Shin, L. Members’ Research Service, “Artificial Intelligence in the Context of Cultural Heritage and Museums: Complex Challenges and New Opportunities” EPRS|European Parliamentary Research PE 747.120—May 2023. Available online: https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/747120/EPRS_BRI(2023)747120_EN.pdf (accessed on 7 May 2025).
  23. CM. Recommendation CM/Rec(2022)15 Adopted by the Committee of Ministers on 20 May 2022 at the 132nd Session of the Committee of Ministers. 2022. Available online: https://search.coe.int/cm?i=0900001680a67952 (accessed on 9 July 2025).
  24. Furferi, R.; Colombini, M.P.; Seymour, K.; Pelagotti, A.; Gherardini, F. The Future of Heritage Science and Technologies: Papers from Florence Heri-Tech 2022. Herit. Sci. 2024, 12, 155. [Google Scholar] [CrossRef]
  25. Boiano, S.; Borda, A.; Gaia, G.; Di Fraia, G. Ethical AI and Museums: Challenges and New Directions. In Proceedings of the EVA London 2024, London, UK, 8–12 July 2024. [Google Scholar] [CrossRef]
  26. Orlandi, S.D.; De Angelis, D.; Giardini, G.; Manasse, C.; Marras, A.M.; Bolioli, A.; Rota, M. IA FAQ Intelligenza Artificiale—AI FAQ Artificial Intelligence. Zenodo. 2025. Available online: https://zenodo.org/records/15069460 (accessed on 9 July 2025).
  27. Nikolaou, P. Museums and the Post-Digital: Revisiting Challenges in the Digital Transformation of Museums. Heritage 2024, 7, 1784–1800. [Google Scholar] [CrossRef]
  28. Barekyan, K.; Peter, L. Digital Learning and Education in Museums: Innovative Approaches and Insights. NEMO. 2023. Available online: https://www.ne-mo.org/fileadmin/Dateien/public/Publications/NEMO_Working_Group_LEM_Report_Digital_Learning_and_Education_in_Museums_12.2022.pdf (accessed on 9 July 2025).
  29. Falk, J.H.; Dierking, L.D. The Museum Experience, 1st ed.; Routledge: London, UK, 2011. [Google Scholar] [CrossRef]
  30. Falk, J.; Dierking, L. The Museum Experience Revisited; Routledge: London, UK, 2016. [Google Scholar] [CrossRef]
  31. Falk, J.H. Identity and the Museum Visitor Experience; Wiley: Hoboken, NJ, USA, 2010. [Google Scholar]
  32. Mazzanti, P.; Sani, M. Emotions and Learning in Museums. NEMO. 2021. Available online: https://www.ne-mo.org/fileadmin/Dateien/public/Publications/NEMO_Emotions_and_Learning_in_Museums_WG-LEM_02.2021.pdf (accessed on 9 July 2025).
  33. Ekman, P.; Davidson, R.J. (Eds.) The Nature of Emotion: Fundamental Questions; Oxford University Press: New York, NY, USA, 1994. [Google Scholar]
  34. Panksepp, J. Affective Neuroscience: The Foundations of Human and Animal Emotions; Oxford University Press: Oxford, UK, 2005. [Google Scholar]
  35. Damásio, A.R. Descartes’ Error: Emotion, Reason, and the Human Brain; Avon Books: New York, NY, USA, 1994. [Google Scholar]
  36. Barrett, L.F. How Emotions Are Made; Houghton Mifflin Harcourt: New York, NY, USA, 2017. [Google Scholar]
  37. Barrett, L.F.; Mesquita, B.; Gendron, M. Context in Emotion Perception. Curr. Dir. Psychol. Sci. 2011, 20, 286–290. [Google Scholar] [CrossRef]
  38. Immordino-Yang, M.; Damasio, A. We Feel, Therefore We Learn: The Relevance of Affective and Social Neuroscience to Education. Mind Brain Educ. 2007, 1, 3–10. [Google Scholar] [CrossRef]
  39. Wood, E.; Latham, K.F. The Objects of Experience: Transforming Visitor-Object Encounters in Museums, 1st ed.; Routledge: London, UK, 2014. [Google Scholar] [CrossRef]
  40. Pietroni, E.; Pagano, A.; Fanini, B. UX Designer and Software Developer at the Mirror: Assessing Sensory Immersion and Emotional Involvement in Virtual Museums. Stud. Digit. Herit. 2018, 2, 13–41. [Google Scholar] [CrossRef]
  41. Hohenstein, J.; Moussouri, T. Museum Learning: Theory and Research as Tools for Enhancing Practice, 1st ed.; Routledge: London, UK, 2017. [Google Scholar] [CrossRef]
  42. Pescarin, S.; Città, G.; Spotti, S. Authenticity in Interactive Experiences. Heritage 2024, 7, 6213–6242. [Google Scholar] [CrossRef]
  43. Lieto, A.; Striani, M.; Gena, C.; Dolza, E.; Anna Maria, M.; Pozzato, G.L.; Damiano, R. A sensemaking system for grouping and suggesting stories from multiple affective viewpoints in museums. Hum.-Comput. Interact. 2024, 39, 109–143. [Google Scholar] [CrossRef]
  44. Damiano, R.; Lombardo, V.; Monticone, G.; Pizzo, A. Studying and designing emotions in live interactions with the audience. Multimed. Tools Appl. 2021, 80, 6711–6736. [Google Scholar] [CrossRef]
  45. Khan, I.; Melro, A.; Amaro, A.C.; Oliveira, L. Role of Gamification in Cultural Heritage Dissemination: A Systematic Review. In Proceedings of the Sixth International Congress on Information and Communication Technology (ICICT), London, UK, 25–26 February 2021; pp. 393–400. [Google Scholar] [CrossRef]
  46. Casillo, M.; Colace, F.; Marongiu, F.; Santaniello, D.; Valentino, C. Gamification in Cultural Heritage: When History Becomes SmART. In Proceedings of the Image Analysis and Processing—ICIAP 2023 Workshops; Foresti, G.L., Fusiello, A., Hancock, E., Eds.; Springer: Cham, Switzerland, 2024; pp. 387–397. [Google Scholar]
  47. Galindo-Durán, A. Enhancing Artistic Heritage Education through Gamification: A Comparative Study of Engagement and Learning Outcomes in Local Museums. Nusant. J. Behav. Soc. Sci. 2025, 4, 51–58. [Google Scholar] [CrossRef]
  48. Liao, Y.; Jin, G. Design, Technology, and Applications of Gamified Exhibitions: A Review. In Proceedings of the HCI International 2025 Posters; Stephanidis, C., Antona, M., Ntoa, S., Salvendy, G., Eds.; Springer: Cham, Switzerland, 2025; pp. 106–116. [Google Scholar]
  49. Alexander, J. Gallery One at the Cleveland Museum of Art. Curator Mus. J. 2014, 57, 347–362. [Google Scholar] [CrossRef]
  50. Alexander, J.; Barton, J.; Goeser, C. Transforming the Art Museum Experience: Gallery One. In Proceedings of the Museums and the Web 2013; Proctor, N., Cherry, R., Eds.; Museums and the Web LLC: Silver Spring, MD, USA, 2013; Available online: https://mw2013.museumsandtheweb.com/paper/transforming-the-art-museum-experience-gallery-one-2 (accessed on 9 July 2025).
  51. Fan, S.; Wei, J. Enhancing Art History Education Through the Application of Multimedia Devices. In Proceedings of the International Conference on Internet, Education and Information Technology (IEIT 2024), Tianjin, China, 31 May–2 June 2024; Atlantis Press: Dordrecht, The Netherlands, 2024; pp. 425–436. [Google Scholar] [CrossRef]
  52. Münster, S.; Maiwald, F.; di Lenardo, I.; Henriksson, J.; Isaac, A.; Graf, M.M.; Beck, C.; Oomen, J. Artificial Intelligence for Digital Heritage Innovation: Setting up a R&D Agenda for Europe. Heritage 2024, 7, 794–816. [Google Scholar] [CrossRef]
  53. Fiorucci, M.; Khoroshiltseva, M.; Pontil, M.; Traviglia, A.; Del Bue, A.; James, S. Machine Learning for Cultural Heritage: A Survey. Pattern Recognit. Lett. 2020, 133, 102–108. [Google Scholar] [CrossRef]
  54. Pansoni, S.; Tiribelli, S.; Paolanti, M.; Di Stefano, F.; Frontoni, E.; Malinverni, E.S.; Giovanola, B. Artificial intelligence and cultural heritage: Design and assessment of an ethical framework. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 1149–1155. [Google Scholar] [CrossRef]
  55. UNESCO. Guidance for Generative AI in Education and Research. 2023. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000386693?locale=en (accessed on 7 May 2025).
  56. UNESCO. Recommendation on the Ethics of Artificial Intelligence SHS/BIO/PI/2021/1. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000381137. (accessed on 7 May 2025).
  57. Ranjgar, B.; Sadeghi-Niaraki, A.; Shakeri, M.; Rahimi, F.; Choi, S.M. Cultural Heritage Information Retrieval: Past, Present, and Future Trends. IEEE Access 2024, 12, 42992–43026. [Google Scholar] [CrossRef]
  58. Rachabathuni, P.K.; Mazzanti, P.; Principi, F.; Ferracani, A.; Bertini, M. Computer Vision and AI Tools for Enhancing User Experience in the Cultural Heritage Domain. In Proceedings of the HCI International 2024—Late Breaking Papers; Zaphiris, P., Ioannou, A., Sottilare, R.A., Schwarz, J., Rauterberg, M., Eds.; Springer: Cham, Switzerland, 2025; pp. 345–354. [Google Scholar] [CrossRef]
  59. Donadio, M.G.; Principi, F.; Ferracani, A.; Bertini, M.; Del Bimbo, A. Engaging Museum Visitors with Gamification of Body and Facial Expressions. In Proceedings of the ACM International Conference on Multimedia, Lisboa, Portugal, 10–14 October 2022; MM ’22. pp. 7000–7002. [Google Scholar] [CrossRef]
  60. TensorFlow.js Library. Available online: https://www.tensorflow.org/js (accessed on 9 July 2025).
  61. Flask Framework. Available online: https://flask.palletsprojects.com/ (accessed on 9 July 2025).
  62. Nielsen, J. Iterative user-interface design. Computer 1993, 26, 32–41. [Google Scholar] [CrossRef]
  63. MediaPipe Face Mesh Model Card. Available online: https://drive.google.com/file/d/1VFC_wIpw4O7xBOiTgUldl79d9LA-LsnA/view (accessed on 9 July 2025).
  64. MediaPipe Library. Available online: https://ai.google.dev/edge/mediapipe/solutions/guide (accessed on 9 July 2025).
  65. Kartynnik, Y.; Ablavatski, A.; Grishchenko, I.; Grundmann, M. Real-time Facial Surface Geometry from Monocular Video on Mobile GPUs. arXiv 2019, arXiv:1907.06724. [Google Scholar]
  66. Ekman, P.; Friesen, W.V.; O’sullivan, M.; Chan, A.; Diacoyanni-Tarlatzis, I.; Heider, K.; Krause, R.; LeCompte, W.A.; Pitcairn, T.; Ricci-Bitti, P.E.; et al. Universals and cultural differences in the judgments of facial expressions of emotion. J. Personal. Soc. Psychol. 1987, 53, 712. [Google Scholar] [CrossRef]
  67. Reinhard, E.; Adhikhmin, M.; Gooch, B.; Shirley, P. Color transfer between images. IEEE Comput. Graph. Appl. 2001, 21, 34–41. [Google Scholar] [CrossRef]
  68. Gallese, V. Embodied Simulation. Its Bearing on Aesthetic Experience and the Dialogue Between Neuroscience and the Humanities. Gestalt Theory 2019, 41, 113–127. [Google Scholar] [CrossRef]
  69. Karahan, S.; Gül, L.F. Mapping Current Trends on Gamification of Cultural Heritage. In Proceedings of the Game + Design Education; Cordan, Ö., Dinçay, D.A., Yurdakul Toker, Ç., Öksüz, E.B., Semizoğlu, S., Eds.; Springer: Cham, Switzerland, 2021; pp. 281–293. [Google Scholar]
  70. Bonacini, E.; Giaccone, S.C. Gamification and cultural institutions in cultural heritage promotion: A successful example from Italy. Cult. Trends 2022, 31, 3–22. [Google Scholar] [CrossRef]
  71. OPEN-AI. CLIP: Connecting Text and Images. 2021. Available online: https://openai.com/research/clip (accessed on 7 May 2025).
  72. Del Chiaro, R.; Bagdanov, A.D.; Del Bimbo, A. NoisyArt: A Dataset for Webly-supervised Artwork Recognition. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019), Prague, Czech Republic, 25–27 February 2019; INSTICC, SciTePress: Lisboa, Portugal, 2019; pp. 467–475. [Google Scholar] [CrossRef]
  73. Baldrati, A.; Bertini, M.; Uricchio, T.; Del Bimbo, A. Exploiting CLIP-Based Multi-modal Approach for Artwork Classification and Retrieval. In The Future of Heritage Science and Technologies: ICT and Digital Heritage; Springer International Publishing: Berlin/Heidelberg, Germany, 2022; pp. 140–149. [Google Scholar] [CrossRef]
  74. TensorFlow Framework. Available online: https://www.tensorflow.org/ (accessed on 9 July 2025).
  75. Bongini, P.; Becattini, F.; Del Bimbo, A. Is GPT-3 all you need for Visual Question Answering in Cultural Heritage? arXiv 2023, arXiv:2207.12101. [Google Scholar]
  76. Rachabatuni, P.K.; Principi, F.; Mazzanti, P.; Bertini, M. Context-aware chatbot using MLLMs for Cultural Heritage. In Proceedings of the ACM Multimedia Systems Conference, Bari, Italy, 15–18 April 2024; MMSys ’24. pp. 459–463. [Google Scholar] [CrossRef]
  77. Boiano, S.; Borda, A.; Gaia, G.; Rossi, S.; Cuomo, P. Chatbots and new audience opportunities for museums and heritage organisations. In Proceedings of the Conference on Electronic Visualisation and the Arts (EVA ’18), London, UK, 9–13 July 2018; pp. 164–171. [Google Scholar] [CrossRef]
  78. Mountantonakis, M.; Koumakis, M.; Tzitzikas, Y. Combining LLMs and Hundreds of Knowledge Graphs for Data Enrichment, Validation and Integration Case Study: Cultural Heritage Domain. In Proceedings of the International Conference On Museum Big Data (MBD), Athens, Greece, 18–19 November 2024. [Google Scholar]
  79. Ferrato, A.; Gena, C.; Limongelli, C.; Sansonetti, G. Multimodal LLM Question Generation for Children’s Art Engagement via Museum Social Robots. In Adjunct Proceedings of the 33rd ACM Conference on User Modeling, Adaptation and Personalization (UMAP Adjunct ’25), New York, NY, USA, 16–19 June 2025; Association for Computing Machinery: New York, NY, USA, 2025; pp. 144–150. [Google Scholar] [CrossRef]
  80. Engstrøm, M.P.; Løvlie, A.S. Using a Large Language Model as Design Material for an Interactive Museum Installation. arXiv 2025, arXiv:2503.22345. [Google Scholar]
  81. Gustke, O.; Schaffer, S.; Ruß, A. CHIM—Chatbot in the Museum. Exploring and Explaining Museum Objects with Speech-Based AI. In AI in Museums; Thiel, S., Bernhardt, J.C., Eds.; Transcript Verlag: Bielefeld, Germany, 2023; pp. 257–264. ISBN 978-3-8394-6710-7. [Google Scholar] [CrossRef]
  82. Ferracani, A.; Ricci, S.; Principi, F.; Becchi, G.; Biondi, N.; Del Bimbo, A.; Bertini, M.; Pala, P. An AI-Powered Multimodal Interaction System for Engaging with Digital Art: A Human-Centered Approach to HCI. In Proceedings of the Artificial Intelligence in HCI; Degen, H., Ntoa, S., Eds.; Springer: Cham, Switzerland, 2025; pp. 281–294. [Google Scholar]
  83. Galteri, L.; Bertini, M.; Seidenari, L.; Uricchio, T.; Del Bimbo, A. Increasing Video Perceptual Quality with GANs and Semantic Coding. In Proceedings of the ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; MM ’20. pp. 862–870. [Google Scholar] [CrossRef]
  84. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015; Bengio, Y., LeCun, Y., Eds.; ICLR: Singapore, 2015. [Google Scholar]
  85. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  86. Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Los Alamitos, CA, USA, 18–22 June 2018; pp. 586–595. [Google Scholar] [CrossRef]
  87. Antic, J. DeOldify. Available online: https://github.com/jantic/DeOldify (accessed on 7 May 2025).
  88. Murphy, O.; Villaespesa, E.; Gaia, G.; Boiano, S.; Elliott, L. Musei e Intelligenza Artificiale: Un Toolkit di Progettazione; Goldsmiths, University of London: London, UK, 2024. [Google Scholar] [CrossRef]
Figure 1. Digital transformation and innovation process in ReInHerit—© 2023, ReInHerit.
Figure 2. Visitor survey: possible reasons for not using a mobile application during a visit to a museum or a cultural heritage site. Using web-based apps helps address the “Not enough space on device” concern—© 2023, ReInHerit.
Figure 3. Visitor survey: which digital tools help improve the visit experience—© 2023, ReInHerit.
Figure 4. Visitor survey: the devices people prefer to use during a visit to a museum or a cultural heritage site. The possibility to use their own device is very relevant for users—© 2023, ReInHerit.
Figure 5. Visitor survey: how interesting the direct interaction with the exhibits was during the last visit. Interaction is valued in the vast majority of responses—© 2023, ReInHerit.
Figure 6. Visitor survey: digital game motivations. Increasing curiosity and knowledge are among the leading motivations—© 2023, ReInHerit.
Figure 7. Visitor survey: how interesting the use of mobile applications was during the last visit—© 2023, ReInHerit.
Figure 8. Survey of professionals: types of technological services and systems in heritage organizations. AI-based technologies are extremely rare; the most common are standard technologies such as social media management or ticketing systems—© 2023, ReInHerit.
Figure 9. Survey of professionals: available human resources for implementing technological services and systems in heritage organizations. Only a very small percentage of the organizations have an internal development team—© 2023, ReInHerit.
Figure 10. The ReInHerit Toolkit Strategy is based on developing open-source interactive tools, designed for a mobile-first and web-first environment and based on bleeding-edge AI and computer vision technologies—© 2023, ReInHerit.
Figure 11. Strike-a-Pose matching: The user’s pose is detected by the system, which checks the correspondence with the artwork’s pose. Success in replicating a pose leads to the next step of the challenge. Museum curators can create different types of challenges by selecting the artworks of their museum. As feedback to the user, the detected skeletons are shown superimposed on the captured video stream. (Pictured are MB and PM, authors of this paper—© 2023, ReInHerit.)
Figure 12. Processing flow for pose detection and matching. The reference image and the camera frame input undergo pose landmark detection. If the joints’ orientations match, an output video is generated; otherwise, the flow loops back for further comparison.
Figure 13. Strike-a-Pose testing at “ACM Multimedia 2022”, Lisbon, Portugal, and at the “Research Fair”, Arcada University of Applied Sciences, Finland, 2023—© 2023, ReInHerit.
Figure 14. Face-Fit: Match the expression. The “ghost” image helps the user focus on the task without distracting from the painting and thus the game. (Pictured is the result of the interaction of FP, author of this article—© 2023, ReInHerit).
Figure 15. Processing flow for facial landmark matching and face swapping. The reference image and the camera frame input undergo facial landmark detection. If the landmarks and orientations match, a face swap is performed to generate the output image.
Figure 16. Face-Fit testing during workshops at Museo Capitolare Diocesano, “CREA Cultura Festival 2024”, Foligno (IT); “Humanities Festival 2023”, Macerata (IT); and GiArA Gipsoteca di Arte Antica, Pisa (IT), 2024—© 2023, ReInHerit.
Figure 17. Example of the Smart Retrieval web interface: (left) text-to-image search and (right) image-to-image search—© 2023, ReInHerit.
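As a sketch of how text-to-image and image-to-image search can share a single index, the following assumes a CLIP-style joint embedding via the Hugging Face transformers library; the model name, helper functions, and top-k value are illustrative and do not describe the Smart Retrieval implementation.

```python
# Sketch of dual-mode retrieval over a collection, assuming a CLIP joint
# embedding space: text queries and image queries rank the same index.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(paths):
    """Precompute normalized embeddings for the collection images."""
    inputs = processor(images=[Image.open(p) for p in paths], return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def search(query, collection_feats, paths, top_k=5):
    """Query is a text string or a PIL image; both map to the same space."""
    if isinstance(query, str):
        inputs = processor(text=[query], return_tensors="pt")
        with torch.no_grad():
            q = model.get_text_features(**inputs)
    else:
        inputs = processor(images=[query], return_tensors="pt")
        with torch.no_grad():
            q = model.get_image_features(**inputs)
    q = q / q.norm(dim=-1, keepdim=True)
    scores = (q @ collection_feats.T).squeeze(0)
    return [paths[i] for i in scores.topk(top_k).indices]
```

Because both modalities are normalized into one embedding space, cosine similarity (the dot product above) serves both panels of Figure 17 with the same precomputed index.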
Figure 18. Smart Lens App for smartphone. (left) Home. (center) The user frames the artwork with the device camera; bounding boxes and previews of the detected details are shown. (right) After selecting a detail, the user can view insights about it and listen to the audio guide—© 2023, ReInHerit.
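As one way to localize a known artwork detail in the camera view and derive bounding boxes like those in Figure 18, the sketch below uses classical ORB feature matching with OpenCV. The actual Smart Lens recognition pipeline may differ, and all thresholds here are illustrative.

```python
# Hypothetical sketch of detail localization: match local features of a known
# artwork detail against the camera frame and derive a bounding box.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def locate_detail(detail_gray, frame_gray, min_matches=15):
    """Return the detail's bounding box in the frame, or None if not found."""
    kp1, des1 = orb.detectAndCompute(detail_gray, None)
    kp2, des2 = orb.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]
    if len(matches) < min_matches:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    h, w = detail_gray.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    projected = cv2.perspectiveTransform(corners, H)
    return cv2.boundingRect(np.int32(projected))  # (x, y, width, height)
```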
Figure 19. Examples of the use of the Smart Lens app during the ReThinking Exhibition at the Cycladic Museum, Athens, Greece. Left image: the user is shown the “hotspots” recognized on the artwork. Second and third images: the content associated with the hotspots. Two rightmost images: other examples of hotspots for different artworks—© 2023, ReInHerit.
Figure 20. VIOLA multimedia chatbot answering (blue bubbles) some questions (yellow bubbles) about a Cycladic figurine (Museum of Cycladic Art), an artwork included in the VIOLA Gallery—© 2023, ReInHerit.
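To give a flavor of the question-answering interaction in Figure 20 (VIOLA’s own pipeline is not reproduced here), the following sketch substitutes an off-the-shelf BLIP visual question answering model from Hugging Face transformers; the model choice and helper name are assumptions.

```python
# Illustrative stand-in for a question-about-an-artwork interaction, using a
# generic BLIP VQA model; not the VIOLA chatbot's actual architecture.
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

def ask_about_artwork(image_path, question):
    """Answer a free-form question about an artwork image."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(image, question, return_tensors="pt")
    output_ids = model.generate(**inputs)
    return processor.decode(output_ids[0], skip_special_tokens=True)

# e.g., ask_about_artwork("cycladic_figurine.jpg", "What material is it made of?")
```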
Figure 21. Testing the multimedia chatbot VIOLA at GiArA Gipsoteca di Arte Antica, Pisa (IT)—© 2023, ReInHerit.
Figure 22. VIOLA: participatory process to generate “user-oriented” and quality content.
Figure 23. Video restorer network architecture, based on SWIN Transformer modules. Frames of the video to be restored are on the left, restored frames on the right. Using a temporal approach, in which nearby frames help the network restore the central frame, improves the quality of the reconstruction.
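The temporal approach described in the caption can be sketched as a sliding window over the frame sequence, so that each frame is restored together with its neighbors. The window radius and the stand-in restorer call below are illustrative.

```python
# Minimal sketch of the temporal windowing in Figure 23: each frame to be
# restored is stacked with its neighbours so the network can exploit them.
import torch

def temporal_windows(frames, radius=2):
    """frames: (T, C, H, W) tensor -> (T, 2*radius+1, C, H, W) windows.

    Border frames are handled by clamping indices, so the first and last
    frames reuse their nearest available neighbours.
    """
    T = frames.shape[0]
    windows = []
    for t in range(T):
        idx = [min(max(t + d, 0), T - 1) for d in range(-radius, radius + 1)]
        windows.append(frames[idx])
    return torch.stack(windows)

# restored_t = restorer(windows[t])  # hypothetical SWIN-based restorer call
```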
Figure 24. ReInHerit demo web app Old Photo’s Restorer. The system shows the input (left), the restored photo (right), and a mask highlighting where the image was modified (middle).
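The middle panel of Figure 24 can be approximated by thresholding the pixel-wise difference between input and output. A minimal sketch assuming OpenCV follows, with an illustrative threshold value.

```python
# Sketch of the middle panel in Figure 24: a binary mask highlighting where
# the restored photo differs from the input. The threshold is illustrative.
import cv2

def change_mask(input_bgr, restored_bgr, threshold=15):
    diff = cv2.absdiff(input_bgr, restored_bgr)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    return mask  # white where the restorer modified the image
```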
Table 1. Key topics of the people-centered approach in the ReInHerit project.

Playful Experience: AI and CV tools are applied to foster learning and build a deeper connection between visitors and artworks. Interactions and gamified experiences are designed to trigger emotion, encourage creativity, and support participatory engagement.
New Audience: Younger audiences, who tend to be more familiar with digital technologies, are a key target of the ReInHerit Toolkit, which aims to increase their active participation in museum experiences.
Sustainability: Smaller museums often lack the resources and skills to adopt digital tools, making training and capacity-building crucial for effective heritage innovation.
Bottom–Up: The development process follows a community-driven model, where local participants are actively involved through workshops and hackathons. This inclusive method ensures that the tools reflect the needs and insights of the users themselves.
Co-Creation: The innovative goal is to offer not just a tool as a final product but a collaborative development process that fosters mediation between different disciplinary sectors.
Table 2. Evaluation results using synthetic data with reference-based visual quality metrics. Higher values (↑) for PSNR and SSIM, and lower values (↓) for LPIPS are better; best results in bold. The proposed approach outperforms the previous state of the art by a large margin on all the metrics.

Method          PSNR ↑    SSIM ↑    LPIPS ↓
DeOldify [87]   11.56     0.451     0.671
Our method      34.78     0.939     0.063
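For reference, the Table 2 metrics can be computed per image pair as sketched below, assuming scikit-image for PSNR/SSIM and the lpips package for LPIPS; function and variable names are illustrative of the procedure, not the paper’s evaluation code.

```python
# Sketch of computing the Table 2 metrics for one reference/restored pair.
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(reference_rgb, restored_rgb):
    """Both inputs: uint8 RGB arrays of shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(reference_rgb, restored_rgb)
    ssim = structural_similarity(reference_rgb, restored_rgb, channel_axis=-1)
    # LPIPS expects NCHW float tensors scaled to [-1, 1].
    to_tensor = lambda a: torch.from_numpy(
        a.astype(np.float32) / 127.5 - 1.0).permute(2, 0, 1).unsqueeze(0)
    loss_fn = lpips.LPIPS(net="alex")
    lpips_score = loss_fn(to_tensor(reference_rgb), to_tensor(restored_rgb)).item()
    return psnr, ssim, lpips_score
```

Averaging these three values over the synthetic test set yields per-method scores of the kind reported in Table 2.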
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
