Reviving Antiquity in the Digital Era: Digitization, Semantic Curation, and VR Exhibition of Contemporary Dresses

Abstract: In this paper, we present a comprehensive methodology to support the multifaceted process involved in the digitization, curation, and virtual exhibition of cultural heritage artifacts. The proposed methodology is applied in the context of a unique collection of contemporary dresses inspired by antiquity. Leveraging advanced 3D technologies, including lidar scanning and photogrammetry, we meticulously captured and transformed physical garments into highly detailed digital models. The postprocessing phase refined these models, ensuring an accurate representation of the intricate details and nuances inherent in each dress. Our collaborative efforts extended to the dissemination of this digital cultural heritage, as we partnered with the national aggregator in Greece, SearchCulture, to facilitate widespread access. The aggregation process streamlined the integration of our digitized content into a centralized repository, fostering cultural preservation and accessibility. Furthermore, we harnessed the power of these 3D models to transcend traditional exhibition boundaries, crafting a virtual experience that transcends geographical constraints. This virtual exhibition not only enables online exploration but also invites participants to immerse themselves in a captivating virtual reality environment. The synthesis of cutting-edge digitization techniques, cultural aggregation, and immersive exhibition design not only contributes to the preservation of contemporary cultural artifacts but also redefines the ways in which audiences engage with and experience cultural heritage in the digital age.


Introduction
In this paper, a methodology for the digitization, curation, and virtual exhibition of heritage artifacts is provided. For the use case, a distinct collection of contemporary dresses inspired by antiquity is showcased. The provided methodology can be traced back to the evolution of technologies, including the recent advancements in 3D technologies, specifically laser scanning and photogrammetry, that have significantly reshaped the landscape of cultural preservation. These methods, in contrast to conventional approaches, enable the detailed capture and transformation of physical garments into digital models.
The current limits of technological advancements still require that these digital models be post-processed to create the final 3D models. The postprocessing phase is required to further refine these digital replicas by removing digitization faults and merging partial scans of the digitized artifacts. Furthermore, it must be ensured that the high level of accuracy in representing the intricate details and nuanced characteristics inherent in each dress is maintained while fusing the individual datasets.
Apart from digitization, which is an important part of preservation, effort should be invested into making the preserved artifact findable, accessible, interoperable, and reusable (FAIR) [1]. By following established standards in the cultural heritage (CH) sector and by facilitating open data infrastructures and content aggregators, this vision today can be transformed into reality [2,3]. In this use case, we present how our collaboration with the Branding Heritage organization [4] facilitates widespread access and streamlines the integration of our digitized collection of contemporary dresses into a centralized repository, fostering cultural preservation and accessibility.
However, the significance of the exploration presented in this work extends beyond digitizing artifacts. Building on the potential of making these 3D models widely accessible, our approach encompasses virtual exhibition design and implementation. This virtual experience is crafted not only to enable online exploration but, more ambitiously, to invite participants to immerse themselves in a captivating virtual reality (VR) environment.
In this synthesis of cutting-edge digitization techniques, cultural aggregation, and immersive exhibition design and implementation, we contribute to the preservation and presentation of contemporary cultural artifacts. Moreover, we redefine how audiences engage with and experience CH in the digital age.

Background and Related Work

3D Digitization
Over the past decades, the evolution of 3D digitization technologies has seen significant advancements, and its adoption has been wide in several application domains, including civil engineering [5], indoor environments [6], archaeology [7], underwater structures [8,9], geography [10], health [11,12], etc. Early methods, such as structured light scanners, employed projected patterns to capture object geometry [13,14]. Subsequent advancements introduced laser scanners, offering enhanced accuracy and speed in capturing intricate details, particularly in controlled environments [15,16]. Moving forward, the integration of lidar technology revolutionized large-scale 3D scanning, providing rapid and precise data acquisition, especially outdoors. Photogrammetry, leveraging computer vision algorithms, emerged as a powerful tool, reconstructing 3D models from overlapping images with increasing accuracy [17,18]. The progression culminated in the democratization of 3D scanning through handheld devices, exemplified by smartphone apps like Trnio [19] and Poly.Cam [20], allowing users to generate detailed models conveniently on the go.

Application of 3D Digitization in Cultural Heritage
The application of digitization technologies to cultural heritage has reshaped the preservation and accessibility of historical artifacts [21][22][23]. Early on, structured light scanners found utility in capturing the details of objects like sculptures and artifacts [24,25]. As laser scanning evolved, its precision became instrumental in the preservation of architectural wonders, as exemplified by the comprehensive digitization of historical structures such as the Palace of Knossos [26,27]. Lidar technology has contributed to large-scale cultural heritage documentation, enabling the creation of detailed 3D maps for archaeological sites such as the ancient city of Petra in Jordan [28]. Photogrammetry has proven invaluable in reconstructing artifacts with high accuracy, notably in the preservation of CH artifacts [29]. In recent years, handheld devices and smartphone apps like Trnio and Poly.Cam [19,20] have empowered museums and cultural institutions to engage in on-the-spot digitization, offering immersive virtual experiences and expanding public accessibility [30,31]. These examples illustrate how the evolution of digitization technologies has diversified their applications, ranging from detailed object capture to the preservation of entire historical landscapes [32], revolutionizing the field of cultural heritage. At the same time, advances in photonics hold promise for more experiential technologies in the future, including see-through head-mounted displays and advanced AR optics [33][34][35].

Virtual Clothing
The need to realistically represent clothing in virtual environments has led to numerous techniques for virtual cloth simulation. This discipline integrates mechanics, numerical methods, and garment design principles [36]. Recent advancements have led to the development of sophisticated simulation engines capable of accurately representing the complex behavior of cloth materials [37]. Techniques such as particle system models have evolved to incorporate nonlinear properties of cloth elasticity, streamlining computations and improving efficiency in simulating anisotropic tensile behavior [38]. Additionally, approaches like MIRACloth draw inspiration from traditional garment construction methods, utilizing 2D patterns and seam assembly to create virtual garments that can be animated on 3D models [37,39]. These methods ensure precise representation and measurement of cloth surfaces, crucial for achieving high-quality animations. Furthermore, enhancements in collision detection and resolution algorithms enable simulations to handle irregular meshes, high deformations, and complex collisions, thus expanding the scope of possible scenarios for cloth simulation [40]. The work presented in this paper can be perceived as complementary to the above-mentioned advancements. The accurate 3D reconstruction of garments discussed in this work can enhance the capacity of these methods to deliver animated virtual clothing of extreme realism and quality.

Semantic Knowledge Representation and Presentation
Semantic knowledge representation has received increased attention in various application domains, including e-health [41,42], education [43,44], commerce [45,46], automotive [47][48][49], etc. Among these domains, semantic knowledge representation and presentation have an active role in CH preservation [50][51][52], particularly with the adoption of open data (OD) and linked open data (LOD) principles [53][54][55]. OD initiatives involve making cultural heritage information findable and interoperable, fostering collaboration between CH institutions and researchers [56,57]. LOD takes this a step further by establishing standardized, interlinked connections between disparate datasets [58,59]. The integration of ontologies, like the CIDOC Conceptual Reference Model (CRM) [60], provides a common semantic framework for CH data, enabling more coherent and interconnected representations of artifacts, events, historical contexts, and spatiotemporal dimensions [61]. This ensures a consistent and standardized approach to data description, facilitating interoperability across diverse cultural heritage collections. The Europeana project [62,63] is a notable example where LOD principles have been employed [64], allowing users to seamlessly navigate and explore a vast repository of cultural heritage artifacts from various institutions.

Virtual Exhibitions of Cultural Heritage Collections
Advances in virtual exhibitions and virtual museums within the CH sector have significantly transformed the way audiences engage with and experience historical artifacts and artworks [65,66]. Virtual exhibitions leverage digital technologies to create immersive and interactive online environments, providing a dynamic platform for the presentation of cultural heritage content. These exhibitions go beyond the constraints of physical spaces, allowing for the inclusion of a broader range of artifacts, contextual information, and multimedia elements. Institutions worldwide, from museums to galleries, have embraced virtual exhibitions as a means to reach global audiences, especially during times when physical visits may be restricted [67,68]. Cutting-edge technologies like augmented reality (AR) [69][70][71][72] and virtual reality (VR) [73][74][75][76][77] contribute to more engaging and authentic experiences, enabling users to virtually explore exhibitions as if they were physically present [78]. Notable examples include virtual tours of renowned museums, historical sites, and events [79][80][81], offering users the ability to navigate through exhibitions, zoom in on artifacts, and access additional information at their own pace. These advances in virtual exhibitions enhance accessibility, including for people with disabilities [82]. At the same time, recent developments redefine the traditional boundaries of cultural heritage presentation, fostering a more inclusive and immersive way for individuals worldwide to connect with and appreciate our shared cultural legacy.

Contribution of This Research Work
While significant progress has been achieved in the domains of 3D digitization technologies, semantic knowledge representation, and virtual exhibitions, especially concerning the CH sector, several research gaps persist. In this work, we propose a concrete methodology that builds on these advancements and, in combination, can support bridging these gaps.
Seamless interoperability and standardization across diverse cultural heritage datasets have not yet been achieved. The proposed methodology utilizes standard domain ontologies, such as the CIDOC CRM [60] and the Europeana Data Model (EDM) [83], for semantic knowledge representation. Thus, more precise descriptions of artifacts and events are supported while maintaining interoperability with standards-compliant CH knowledge.
In the domain of digitization, there is still no single methodology capable of achieving adequate results in all cases (e.g., indoors, outdoors, small scale, medium scale, etc.). As such, some form of fusion will always be required to achieve optimal results [84][85][86][87]. To this end, combining different technologies based on their strengths and weaknesses can make a difference. At the same time, in the proposed approach, we also lay the foundation for postprocessing [88] the results of the technologies to get the most out of lidar scanning, laser scanning, and photogrammetry, especially in the case of dresses, where time-dependent variations in their structure make the registration of individual scans challenging.
Despite the growth of virtual exhibitions, ensuring the accessible dissemination of digitized content remains a concern. Our methodology addresses this by exporting curated data in RDF/XML [89] format and ingesting them into national aggregators like SearchCulture [90] and European platforms like Europeana [91], enhancing accessibility and exposure at national and European scales. At the same time, the raw data and the digitization outcomes become available as open datasets through Zenodo [92] to foster data reuse for scientific purposes.
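To illustrate what such an RDF/XML export can look like, the following sketch builds a minimal EDM-style record for a single digitized dress using only the Python standard library. This is not the project's actual export pipeline (a dedicated RDF library would normally be used); the element names follow the Europeana Data Model, but the IRI, title, and values are hypothetical placeholders, and a real record would carry many more properties.

```python
import xml.etree.ElementTree as ET

# EDM records mix several vocabularies; register readable prefixes.
NS = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "dc": "http://purl.org/dc/elements/1.1/",
    "edm": "http://www.europeana.eu/schemas/edm/",
}
for prefix, uri in NS.items():
    ET.register_namespace(prefix, uri)

def q(prefix: str, tag: str) -> str:
    """Build a namespace-qualified tag name for ElementTree."""
    return f"{{{NS[prefix]}}}{tag}"

# One edm:ProvidedCHO per digitized artifact; the IRI is a placeholder.
root = ET.Element(q("rdf", "RDF"))
cho = ET.SubElement(root, q("edm", "ProvidedCHO"))
cho.set(q("rdf", "about"), "https://example.org/dress/001")  # hypothetical IRI
ET.SubElement(cho, q("dc", "title")).text = "Contemporary dress inspired by antiquity"
ET.SubElement(cho, q("dc", "type")).text = "3D model"

xml_bytes = ET.tostring(root, encoding="utf-8", xml_declaration=True)
print(xml_bytes.decode("utf-8"))
```

The serialized record is what an aggregator such as SearchCulture would harvest; in practice, each item is exported in bulk alongside links to its media files and rights statements.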
User engagement and interaction are considered important parts of immersion in virtual exhibitions. Our methodology incorporates innovative platforms like the Invisible Museum [93], which allow the creation of virtual spaces and the definition of the rendering characteristics of artifacts. These enhance the visual appeal and lifelike representation of artifacts while simplifying immersion through versatile support for web-based or VR-based interaction.

Method
The proposed methodology outlines a systematic approach for the digitization, curation, and exhibition of a diverse collection, employing a multimodal strategy as presented in Figure 1.
In the initial phase, the items undergo a comprehensive digitization process incorporating various techniques. Detailed images are captured from multiple angles through photographic documentation, serving as the foundational dataset for subsequent procedures. Concurrently, geometric data are recorded using laser scanning equipment, capturing intricate details of the materials and embellishments with an operating accuracy of 0.1 mm. Finally, a mobile app can be employed as a good all-around solution for validating the reconstruction outcomes and as a data source in the case that a partial scan fails to synthesize the entire model.
In the case of this work, for the acquisition of images, a Nikon D850 was used. For laser scanning, the FARO Focus Laser Scanner was used, which is capable of creating accurate, photorealistic 3D representations of any environment or object in just a few minutes [94]. Due to the selection of the scanning equipment, the data acquired have the pitfall that the heritage artifact cannot be covered in a single scan, and thus multiple scans are required per artifact. Finally, the Trnio mobile app [19] was employed for on-the-go 3D model generation through mobile phones, fusing lidar, depth, and photogrammetric methods. Data processing in Trnio happens on the cloud and requires no additional resources, which makes it ideal for our proposed multimodal approach. Unfortunately, Trnio was discontinued at the time of writing this paper. For consistency in our methodology, we have performed tests with alternative software, and we propose the use of Poly.Cam [20] as an alternative to Trnio.
In the next stage, the collected data are used to perform 3D reconstruction. To this end, three processes are proposed. The first is the photogrammetric reconstruction of the collected image datasets. This results in a mesh structure and a texture for each scanned artifact. The method and software characteristics are capable of producing an ultra-high-quality texture and a lower-quality mesh structure. The second method is the post-processing of the lidar data, which results in a point cloud (directly through the measurements taken by the lidar scanner), an ultra-high-quality mesh structure (accuracy ~0.1 mm), and a lower-quality texture (the texture is synthesized by combining the colors of the individual measured points). The third is the post-processing of the mobile device data on the cloud. This method produces a medium-quality mesh and texture that, in the proposed method, is used as a reference and fallback dataset. In the use case of this work, PIX4Dmatic from Pix4D [95] was used for the reconstruction. For the creation of the 3D point cloud from the laser scanner data, FARO SCENE was used [96].
Following 3D reconstruction, the digitization process continues by curating the data in Blender [97]. Blender is a versatile tool that refines and transforms the collected data into high-fidelity 3D models. This phase is dedicated to preserving the essence of each item while ensuring accuracy in the digital representation. The main activities here involve the application of modifiers to individual scans to achieve alignment, followed by their merging and simplification to produce the final mesh structure. Then come the projection of textures from the individual scans onto the combined mesh and the application of an averaged image stacking methodology to combine the multiple textures into a uniform result.
The curation phase focuses on enhancing the accessibility and discoverability of the digitized collection. Collections are methodically organized in an open data repository, with each item assigned a unique Internationalized Resource Identifier (IRI). Simultaneously, to further enhance the FAIR qualities of the produced data, an online platform adhering to CH standards on knowledge representation is important to enrich the metadata associated with each item and to document detailed historical context, materials, and cultural significance. This documentation provides a comprehensive digital resource for each artifact. Then, to broaden dissemination, the curated data are exported in RDF/XML format for ingestion into a CH aggregator, adhering to LOD principles.
The methodology concludes by making data experienceable through the creation of a virtual exhibition using a digital authoring platform. This facilitates the design of virtual spaces, replicating the ambiance of a traditional museum setting while leveraging digital technologies. Rendering characteristics are configured to enhance visual appeal, ensuring a lifelike representation of each artifact.
Upon completion, the virtual exhibition is published, making it accessible online through standard web browsers and providing an immersive experience for users with VR headsets. This approach serves to preserve the collection in a digital format while establishing an interactive platform for the exploration and appreciation of cultural heritage.
An overview of the technologies employed in each step of the methodology and their functions is presented in Table 1.

Virtual Exhibition
•	Linking with the open repository to access both the 3D models and their metadata.
•	Creation of virtual exhibits. The collection of 3D models is transformed into a collection of virtual exhibits, i.e., objects that can be placed as interactable items within a virtual exhibition.
•	Authoring of the digital space. In this step, the digital space where the virtual exhibition will be hosted is authored.
•	Setup of the rendering and spatial parameters of the exhibits.
•	Baking and publication. The final scene is baked and published.
In the following sections, each step of the aforementioned methodology is presented as applied in the context of the contemporary collection of dresses of the Branding Heritage organization.

Digitization

Multimodal Data Acquisition
To capture detailed and accurate representations of the dress collection, a comprehensive scanning methodology combining various techniques was employed. Initially, photographic documentation served as a foundational element, with photographs taken around the object from multiple angles with regard to the z-axis and at a fixed distance from the artifact, with an overlap ratio of approximately 50%. The careful acquisition of the dataset is essential to facilitate the subsequent photogrammetric reconstruction process. Next, to enhance the three-dimensional fidelity of the data acquired, a FARO Focus laser scanner was utilized to capture precise geometric data, and thus intricate details of the dresses, including texture and surface features. The laser scanner was positioned at four locations around the artifact with a 45-degree separation between them and calibrated to scan only the part of the artifact visible to the scanner. Partial scanning rather than 360-degree scanning was selected to reduce the amount of unusable data acquired and to reduce both processing and scanning time. As part of the scanning methodology used, Trnio played a crucial role as a versatile and accessible fallback solution for 3D model generation. Recognizing the need for flexibility and convenience, especially in environments where extensive scanning equipment might be impractical, Trnio emerged as an invaluable tool, allowing for the swift capture of 3D data through a user-friendly interface. While the primary data acquisition involved more sophisticated techniques such as laser scanning and photogrammetric reconstruction, Trnio served as a practical alternative, since it supports on-the-go 3D model generation and can be useful for the validation of more detailed 3D scans later on.
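The ~50% overlap requirement can be turned into a rough shot-count plan before the photo session. The helper below is a simplified planning heuristic, not the authors' documented procedure: it treats each frame's angular coverage around the turntable axis as roughly the camera's horizontal field of view, which only holds when the artifact fills most of the frame.

```python
import math

def shots_for_overlap(h_fov_deg: float, overlap: float) -> int:
    """Estimate how many photographs are needed for a full orbit of the
    artifact, given the camera's horizontal field of view (degrees) and
    the desired overlap ratio between consecutive frames (0..1)."""
    step = h_fov_deg * (1.0 - overlap)   # fresh angle contributed per shot
    return math.ceil(360.0 / step)

# A full-frame camera with a 50 mm lens has a horizontal FOV near 40 degrees;
# at ~50% overlap this suggests on the order of 18 stations per orbit.
print(shots_for_overlap(40.0, 0.5))  # -> 18
```

Tighter overlaps (e.g., 70–80%) increase the count quickly, which is why photogrammetric datasets of small artifacts often run to dozens of images per height level.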

3D Models Reconstruction
All 3D modeling was carried out using Blender versions 2.93 for texture extraction/synthesis and 3.5.1 for the Geometry Node Shader capabilities. Final models were simplified using Instant Field-Aligned Meshes, a method that automatically re-topologizes complex meshes [98]. Photogrammetry is capable of generating a single mesh for the entire artifact because the acquired datasets cover multiple angles and provide nearly complete coverage of the subject. The resulting mesh is imperfect; the dimensions require a reference for scale; structural errors accumulate in fine geometric details; and texture quality is also inconsistent. In contrast, the FARO lidar scans are almost stationary, with large gaps between scans, and deliver constrained but high-quality color and geometric structure from their point of view. For the project, both scanning techniques have been used, as each has its own advantages and disadvantages. As a result, due to the nature of the above-discussed results, a merging methodology is required to combine the multiple scans, together with laborious retopology efforts. Another important consideration that should be taken into account is that clothing is a malleable subject, prone to shape deformations from minute external factors that worsen with time. Capturing multiple photographs is fast, but the resulting details can be disjointed, and while lidar is precise, its methodical nature is slow, introducing various warps between scans.
To address this issue, a forced manual registration process was required for the majority of the scans, using their common texture features as reference points. The multitude of scans compounds into complicated, partial mesh overlaps consisting of millions of polygons. Their one-sided, flat structure is detrimental to Boolean operations, making them almost impossible to use reliably as a means to unify the scans.
To get around this issue, a series of modifiers and geometry nodes is applied to each scan that subdivide and remove irrelevant geometry using an alpha texture. The alpha is generated for angles higher than 25 degrees because lidar's geometric and image quality are best at near-perpendicular angles and rapidly decline at steeper angles. The process retains only the most accurate parts from each scan. Within the Geometry Node, the alpha is the deciding factor that keeps or deletes geometry; thus, artistically editing the alpha is a powerful process that allows fine structure to arise. For example, painting away the mannequin while keeping thin lines enables the inclusion of strings, braces, and other delicate features of a dress as real geometry, visibly determining the prospective outcome. This methodology is graphically represented using an exemplar dress in Figure 2. The first part of the figure (a) presents an original scan mesh as acquired through the scanning methodology, and then in the second part (b), the same scan mesh is shown with the modifiers applied. The unification of all meshes is presented in the third part of the figure, where the outcome is a single mesh structure acquired by combining 9 individual modified meshes. Finally, the fourth part presents the merged mesh as simplified following Instant Field-Aligned Meshes [98].
Diving into more detail on the aforementioned process: in the first step, all individual FARO scans have the following modifiers applied: (a) the geometry node "cut to alpha", (b) planar decimation at 0.1 degrees, and (c) triangulate. The modifier is presented in Figure 3a. Subsequently, the modified scans are merged into a new target model. The structure is complicated, with overlapping polygons that oftentimes have missing parts due to a lack of scanned information. These can be quickly covered by projecting simple polygons around them. The unification transformation happens by applying a set of modifiers. The process converts the mesh into a volume encompassing the intended model that is thick enough to fill small structural gaps. Then the volume is converted back to a mesh. The remesh modifier is applied to smooth the geometry (see Figure 3b), and finally, a shrink modifier pushes back the surface, restoring its original, intended form (see Figure 3c). The resulting mesh is a single unified mesh and can be further simplified using Instant Field-Aligned Meshes [98] to reduce the polygon count of the model.
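The 25-degree alpha cutoff amounts to a per-face visibility test against the scanner's viewing direction. The sketch below expresses that test in plain Python; it is a geometric simplification of the Blender geometry-node setup (which operates on a texture rather than explicit vectors), and the 25-degree threshold follows the text.

```python
import math

def incidence_deg(normal, view_dir):
    """Angle in degrees between a unit face normal and the unit direction
    from the face towards the scanner."""
    dot = sum(n * v for n, v in zip(normal, view_dir))
    dot = max(-1.0, min(1.0, dot))  # guard acos against rounding
    return math.degrees(math.acos(dot))

def keep_face(normal, view_dir, cutoff_deg=25.0):
    """Alpha decision for one face: keep geometry that the scanner sees
    near-perpendicularly; cull steeply viewed faces whose color and
    geometry are unreliable."""
    return incidence_deg(normal, view_dir) <= cutoff_deg

# A face squarely facing the scanner is kept...
print(keep_face((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))   # True
# ...while one viewed at ~45 degrees is culled.
s = 1.0 / math.sqrt(2.0)
print(keep_face((0.0, 0.0, 1.0), (s, 0.0, s)))       # False
```

Applying this test to every face of every scan retains only the near-perpendicular patches, which is why the nine modified meshes can later be merged without steep-angle artifacts dominating the result.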
The individual modified scans are geometrically accurate and are used to transfer their equivalent textures to the final model. For every scan, a 16 k RGBA texture is extracted by projecting the respective panoramic onto it. The alpha is auto-generated by the overlapping geometry, benefiting greatly from the fine structure. The textures are baked with Selected to Active and the Extrusion/Ray Distance set to 0.01. Finally, an average image stacking approach is used to synthesize all textures into one detailed albedo texture.
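At its core, average image stacking is a per-pixel, per-channel mean across the per-scan textures. The toy implementation below shows the principle on nested lists of RGB tuples; it is a deliberate simplification of the production step, which would work on full-resolution images and weight each pixel by its alpha coverage.

```python
def average_stack(textures):
    """Combine equally sized per-scan textures into one albedo texture by
    averaging each pixel channel across all scans. Each texture is a list
    of rows of (R, G, B) tuples. Alpha-aware weighting is omitted here."""
    n = len(textures)
    height, width = len(textures[0]), len(textures[0][0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            pixels = [tex[y][x] for tex in textures]
            row.append(tuple(sum(p[i] for p in pixels) / n for i in range(3)))
        out.append(row)
    return out

# Two 1x2 textures average channel-wise into a uniform mid-gray result.
a = [[(0, 0, 0), (255, 255, 255)]]
b = [[(255, 255, 255), (0, 0, 0)]]
print(average_stack([a, b]))
```

Averaging suppresses per-scan noise and exposure differences, which is what yields the uniform albedo mentioned above.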
To acquire the final 3D models of the dress collection, we applied the aforementioned methodology to all the individual datasets per dress.The final collection of dresses was subsequently digitally curated.

Digital Curation
The digital curation part of the methodology, as applied in the presented use case, is a complex procedure that transforms the collection of media files produced by the digitization phase into data that adhere to the FAIR principles. This process is initiated by transforming the data into an open dataset. For this purpose, we use the Zenodo [92] platform. The process of creating these datasets includes uploading and documenting all the source data and connecting them to the authors, the project's community, and the source of funding. The result is a fully discoverable dataset assigned a DOI [99-103].
Each data item receives, through this integration, a unique IRI that can be reused across the web. At this stage, there is also the option of depositing the raw data used for digitization for their long-term preservation. The reasons for doing so are twofold. The first is that, in the future, this data can be reused with more advanced digitization methods without the need to recapture everything. The second is that having such datasets freely available enhances the availability of data for researchers and scientists working on the improvement of digitization methods.
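The deposition step can be sketched against Zenodo's public REST API. The metadata fields, access token, and upload type below are illustrative placeholders; the actual datasets [99-103] were documented through Zenodo's own interfaces and may use richer metadata.

```python
# Hedged sketch of creating a Zenodo deposition for a digitization dataset.
# The payload shape follows Zenodo's documented deposit API; no network call
# is made unless deposit() is invoked with a real token.
import json

def build_deposit_metadata(title, creators, description, keywords):
    """Assemble the metadata body Zenodo expects for a new deposition."""
    return {
        "metadata": {
            "title": title,
            "upload_type": "dataset",
            "description": description,
            "creators": [{"name": c} for c in creators],
            "keywords": keywords,
        }
    }

def deposit(metadata, token, base="https://zenodo.org/api"):
    """Create a deposition and return Zenodo's JSON response (sketch only)."""
    import requests  # network-dependent; illustrative
    r = requests.post(f"{base}/deposit/depositions",
                      params={"access_token": token},
                      data=json.dumps(metadata),
                      headers={"Content-Type": "application/json"})
    r.raise_for_status()
    return r.json()
```

Publishing the deposition is what mints the DOI, after which the record (and each file in it) is addressable by a stable identifier.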
With the unique IRIs available, it is time to start building the semantic information for the digital assets. For this purpose, we use two levels of information. The first level regards the metadata assigned to the files (i.e., 3D models and photographs), and the second level regards the semantic representation of the artifact in the form of a CH object. For the representation in this work, we propose the usage of the Mingei Online Platform (MOP) [104], implemented in the context of the Mingei Horizon 2020 project [105] and updated and enhanced in the context of the Craeft Horizon Europe project [106].
Examples of first-level documentation of an image and a 3D object are presented in Figure 4. Section (a) presents the metadata associated with the image, which, in addition to image file characteristics, includes semantic annotations with external vocabularies and internal links to the object representing the dress as a heritage object. Section (c) presents the metadata for a 3D model; here, information about the creators of the digital files can also be seen, along with semantic annotations with external vocabularies and internal semantic links to the object representing the dress as a heritage object. Sections (b) and (d) present the online previews supported by the MOP for the image and the 3D model.
The documentation of the heritage object is more complex, since it combines all the associated media assets with further social and historical information. Each object may have multiple descriptions, each one associated with a language. Furthermore, it integrates information regarding the event of its creation, the materials used, and its creator. Each of these is represented by a separate semantic instance. A rich set of semantic annotations to external vocabularies is also used to further represent the heritage object. A graphical representation of the heritage object, including its major associations, as represented in the MOP, is presented in Figure 5.
Computers 2024, 13, x FOR PEER REVIEW 14 of 24
The aforementioned documentation is sufficient to present a heritage object online in the MOP. Further dissemination is needed to make the resource globally accessible through different dissemination channels following the LOD principles. To this end, MOP provides an exporting facility that exports the contents of its knowledge base in the form of RDF/XML. An example of such an export for the knowledge object under discussion is presented in Figure 6. Using this export functionality, the knowledge base can be ingested into content aggregators such as SearchCulture. SearchCulture is the Greek National Aggregator of Digital Cultural Content that offers access to digital collections of cultural heritage provided by institutions from all over Greece. Currently, it aggregates information about 813,269 items, including photographs, artworks, monuments, maps, folklore artifacts, and intangible cultural heritage in image, text, video, audio, and 3D form [90]. For our use case, a public API was implemented in MOP to list all the dresses as a collection of metadata in the aforementioned format. Through this public API [107], SearchCulture performed the ingestion of the knowledge entities to be aggregated through its online services [108].
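The flavor of such an RDF/XML export can be illustrated with a much-simplified record. The IRI, the Dublin Core properties, and the overall shape below are assumptions for illustration only; the MOP's actual export follows its own CIDOC-CRM-based schema (see Figure 6).

```python
# Illustrative (heavily simplified) RDF/XML serialization of one heritage
# object, built with the standard library. Not the MOP's actual schema.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC = "http://purl.org/dc/elements/1.1/"

def export_object(iri, title, creator, model_iri):
    """Serialize a minimal rdf:Description linking a dress to its 3D model."""
    ET.register_namespace("rdf", RDF)
    ET.register_namespace("dc", DC)
    root = ET.Element(f"{{{RDF}}}RDF")
    desc = ET.SubElement(root, f"{{{RDF}}}Description", {f"{{{RDF}}}about": iri})
    ET.SubElement(desc, f"{{{DC}}}title").text = title
    ET.SubElement(desc, f"{{{DC}}}creator").text = creator
    # Link to the media object (e.g., the Zenodo-deposited 3D model).
    ET.SubElement(desc, f"{{{DC}}}relation", {f"{{{RDF}}}resource": model_iri})
    return ET.tostring(root, encoding="unicode")
```

An aggregator harvesting such records resolves the IRIs to connect the metadata back to the media assets and their previews.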

Virtual Reality Exhibition
The VR exhibition was built on top of the Invisible Museum Platform (IMP) [93,109]. For the front end, A-Frame [110] and Three.js [111] are employed. A-Frame, a web framework designed for creating web-based virtual reality applications, was chosen as the foundation because it builds on web technologies such as WebGL 1.0 [112] and WebVR 1.5.0 [113]. This allows compatibility with modern browsers without the need for additional plugins or installations and offers the flexibility to develop applications accessible across various devices, including desktops, mobile devices, and VR headsets, using their built-in browsers.
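The plugin-free delivery model can be made concrete with a toy example: an exhibition page is, in the end, plain HTML. The generator below emits a minimal A-Frame scene embedding one GLB exhibit; the CDN version, model URL, and markup are illustrative and are not the IMP's actual viewer output.

```python
# Toy generator for a minimal A-Frame exhibition page with one GLB exhibit.
# Any WebVR/WebGL-capable browser can open the result without plugins.
def aframe_page(model_url: str, position: str = "0 0 -3") -> str:
    return f"""<!DOCTYPE html>
<html>
  <head>
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-assets>
        <a-asset-item id="exhibit" src="{model_url}"></a-asset-item>
      </a-assets>
      <a-entity gltf-model="#exhibit" position="{position}"></a-entity>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>"""
```

Writing the returned string to a file and opening it in a browser yields an explorable scene, which is the property the text attributes to the A-Frame/Three.js stack.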

Exhibition Scene Design
The virtual 3D exhibition scene is designed using a web-based tool called the "Exhibition Designer" that is offered by the IMP. This web-based designer is the first step in the exhibition-generation pipeline, and its purpose is to speed up and simplify the exhibition setup process. By minimizing the challenge of creating a 3D exhibition, creators are not required to be familiar with complex 3D modeling software, allowing them to rapidly explore exhibition setup concepts. To initiate the process, the creator of the exhibition must first draw a top-down view of the exhibition on a tile-based canvas (see Figure 7), which is then translated into a 3D building. The tool also supports importing entire 3D models of scanned buildings. This step can be skipped by selecting one of the various preset buildings provided, which can also be edited at any time during the process.
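The tile-to-building translation can be sketched as follows: boundary edges between filled and empty tiles become wall segments for the 3D building. The grid encoding ('#' for floor tiles) and the wall representation are assumptions, not the Exhibition Designer's actual algorithm.

```python
# Sketch of translating a top-down tile canvas into wall segments.
def grid_to_walls(grid):
    """grid: list of equal-length strings, '#' = interior tile, '.' = outside.
    Returns wall segments as ((x0, y0), (x1, y1)) pairs in tile-edge units."""
    h, w = len(grid), len(grid[0])
    filled = lambda x, y: 0 <= x < w and 0 <= y < h and grid[y][x] == "#"
    walls = []
    for y in range(h):
        for x in range(w):
            if not filled(x, y):
                continue
            # Emit a wall on each side that borders an empty tile.
            if not filled(x, y - 1): walls.append(((x, y), (x + 1, y)))
            if not filled(x, y + 1): walls.append(((x, y + 1), (x + 1, y + 1)))
            if not filled(x - 1, y): walls.append(((x, y), (x, y + 1)))
            if not filled(x + 1, y): walls.append(((x + 1, y), (x + 1, y + 1)))
    return walls
```

Extruding each returned edge vertically then yields the walls of the generated building, with doors and presets handled separately by the tool.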
Once the space where the 3D exhibit models will be hosted is ready, it is time to import them. The tool facilitates the quick import of GLB format 3D models and allows for adjustments to their position, rotation, and scale (see Figure 8a). The reasons GLB was selected as the sole format for the 3D objects are its single-file format, the native support by modern browsers (without the need for external plugins or additional libraries), and efficient rendering due to GPU rendering optimization. After the exhibit models are placed within the exhibition, lighting sources, such as spotlights and point lights, are integrated to improve visibility and highlight specific models. Well-positioned lights can be strategically utilized to guide the viewer's focus within the scene, drawing attention to particular objects (see Figure 8b). Additionally, the designer enables the import of decorative elements (images, videos, 3D models, and music) to further shape the tone of the exhibition and influence the mood and atmosphere (see Figure 8c). The editing of the attributes of the above 3D objects is enabled through an
on-screen inspector (see Figure 8d). This inspector provides a UI environment to edit the values of the 3D objects directly from the Three.js layer. This low-level access to the objects makes it possible to view the changes in real time without impacting the performance of the designer. Gizmos are also provided for less precise but faster changes.

Baking the Scene
Upon completing the exhibition setup, a JSON format file is generated containing the scene specification as rendered by the designer. This file contains information such as the exhibits, lights, and decorations present within the scene, as well as details about the transformations applied to them. The scene specification is directly editable in Blender (see Figure 9).
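A hypothetical shape for such a scene specification is sketched below: exhibits, lights, and decorations, each carrying a transform. The field names are illustrative; the IMP's actual schema is not published here.

```python
# Sketch of assembling a designer-style JSON scene specification.
import json

def make_scene(exhibits, lights, decorations):
    """exhibits/decorations: (name, position, src) tuples;
    lights: (name, position, light_type) tuples. Returns a JSON string."""
    def node(kind, name, pos=(0, 0, 0), rot=(0, 0, 0), scale=(1, 1, 1), **extra):
        return {"type": kind, "name": name,
                "transform": {"position": list(pos),
                              "rotation": list(rot),
                              "scale": list(scale)},
                **extra}
    return json.dumps({
        "exhibits": [node("exhibit", n, pos=p, src=s) for n, p, s in exhibits],
        "lights": [node("light", n, pos=p, light_type=t) for n, p, t in lights],
        "decorations": [node("decoration", n, pos=p, src=s) for n, p, s in decorations],
    }, indent=2)
```

Because the specification is plain JSON, it is easy for a downstream service (here, the Blender-based baking pipeline) to parse and recreate the scene node by node.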

The scene specification generated by the exhibition designer is subsequently utilized within a Blender-based service responsible for recreating the entire exhibition in Blender. This pipeline includes functionalities such as lightmap baking and the merging of all the geometries of the 3D objects into one. Lightmap baking plays a crucial role in the outcome. Instead of computing lighting in real time, this process pre-calculates how light interacts with surfaces in a 3D scene. The significant advantage lies in improved performance during user interaction with the 3D exhibition. By pre-computing lighting information, real-time rendering becomes less resource-intensive, leading to smoother user experiences. Additionally, this technique ensures consistent and predictable results across various devices and platforms.
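The pre-computation idea can be illustrated with a toy baker: for each texel of a floor plane, diffuse lighting from static point lights is evaluated once, so the viewer later samples the map instead of evaluating lights every frame. The units, falloff, and geometry are simplified assumptions, far from Blender's actual baking.

```python
# Toy lightmap baker: Lambertian response of a unit floor at y = 0 to static
# point lights, pre-computed per texel.
import math

def bake_floor_lightmap(size, lights):
    """size: texel resolution along each axis of a 1x1 floor.
    lights: list of ((x, y, z), intensity). Returns a size x size grid."""
    lm = [[0.0] * size for _ in range(size)]
    for j in range(size):
        for i in range(size):
            px, pz = (i + 0.5) / size, (j + 0.5) / size  # texel center
            for (lx, ly, lz), power in lights:
                d2 = (lx - px) ** 2 + ly ** 2 + (lz - pz) ** 2
                cos_theta = ly / math.sqrt(d2)  # floor normal is +Y
                lm[j][i] += power * max(cos_theta, 0.0) / d2  # inverse-square
    return lm
```

The cost of the nested loops is paid once at bake time; at run time, lighting reduces to a texture lookup, which is exactly the performance trade-off described above.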
Geometry merging also contributes to performance enhancements. By reducing the number of individual objects and consolidating them into a single entity, rendering performance is optimized, particularly in exhibitions with numerous smaller objects. Moreover, merging these 3D objects simplifies management, easing the overall workflow. The scene before and after baking can be seen in Figure 10.
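At its core, geometry merging concatenates vertex and index buffers so the viewer can issue a single draw call. The sketch below assumes a positions-only vertex layout; real GLB meshes also carry normals, UVs, and materials, which complicate the merge.

```python
# Minimal sketch of geometry merging: concatenate several (vertices, indices)
# mesh pairs into one, re-basing the indices of each appended mesh.
def merge_meshes(meshes):
    """meshes: list of (vertices, indices), vertices = [(x, y, z), ...].
    Returns a single (vertices, indices) pair."""
    verts, idx = [], []
    for v, ind in meshes:
        base = len(verts)  # offset by the vertices emitted so far
        verts.extend(v)
        idx.extend(base + i for i in ind)
    return verts, idx
```

After merging, the renderer treats the whole exhibition shell as one object, which is why the technique pays off most in scenes with many small decorations.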
Once the 3D exhibition is baked, it becomes accessible for use within the 'Exhibition Viewer', which directly engages users with the final result, allowing them to explore the 3D exhibition and access information about the showcased exhibits. Moreover, the Exhibition Viewer includes a tour management section, enabling tour creators to customize the experience. Within this section, creators can determine the included exhibits, establish the starting point for the tour, and even select background music. Exhibits can also be accompanied by sound. With this extra flexibility, tours can feel more dynamic, since voiced-over narrations can be played when the viewer approaches an exhibit. Notably, a single exhibition can support multiple tours in multiple languages and for different age groups. Currently, the virtual exhibition is available for demonstration purposes [114] and is planned to be released to the public in the first quarter of 2024.

Discussion and Conclusions
In conclusion, in this paper, we provide a comprehensive methodology for the digitization, curation, and virtual exhibition of CH artifacts. Furthermore, we demonstrated this methodology in the digitization, curation, and virtual exhibition of a unique collection of contemporary dresses inspired by antiquity. The evolution of 3D digitization technologies, including lidar scanning, laser scanning, and photogrammetry, has been employed, combined with intelligent post-processing methodologies, to mitigate the challenges associated with multimodal digitization.
A notable insight from the post-processing phase regards the individualities of CH artifacts that may affect the technical methodology to be followed. In the 3D reconstruction of CH objects, there is a tendency to assume that the subjects remain still during digitization, which is the case for most subjects, such as sculptures, ancient artifacts, tools, and machinery. In the use case presented, we realized that dresses are not such a case, since minor changes in their geometry during data acquisition greatly affected the registration of partial scans. As a result, several modifiers had to be applied to the partial scans to perform the registration. Furthermore, due to the fusion of data from several scanning modalities, the synthesized mesh was of increased size and complexity, and thus simplification was required. We followed the Instant Field-Aligned Meshes [98] methodology to achieve a simplified mesh. Finally, due to the need to visually combine textures from multiple scans, an average image stacking approach was used to synthesize all textures into one detailed albedo texture. As a lesson learned from this digitization effort, we can propose two mitigation measures. The first involves controlling the digitization setup and placement of the dresses to ensure the least possible changes in geometry. The second is to shorten the acquisition phase, since all the phenomena observed during our experiments were time-dependent, with the error rate increasing during long scans.

The proposed approach to digital preservation is quite straightforward and can be applied to any form of digital artifact. By building on semantic knowledge representation standards such as the CIDOC CRM, linked open data repositories, and content aggregators, the widest possible dissemination of digital assets is supported. Moreover, the strategic dissemination of digitized content through national and European aggregators ensures wider accessibility.
In the use case, we learned that, thanks to the digital curation, the effort needed to integrate the collection into the virtual exhibition was greatly reduced, since the digital curation had already solved the issues of storing, retrieving, and accessing metadata for each digital object. This is hard evidence of how FAIR data can simplify the reusability of media assets and, through this simplicity of integration, enhance their value. Further validation of these open data sources by third parties will be needed in the future to ensure that the principles employed in this work indeed make data reusable, since, in our case, both the provider and the consumer of the data were the same organization.
Using a mature platform for the implementation of the virtual exhibition was a wise decision that allowed us to greatly compress the development time. Of course, there are some limitations on the type of platform that can be employed. Careful consideration should be given to semantic data interoperability to ensure that open data are directly exploitable by the target platform. Furthermore, compatibility with the data format of the 3D models is essential. In this work, we employ the GLB format, known for its wide compatibility and integration efficiency. In the use case, the employed Invisible Museum platform offered both forms of compatibility, since it is compatible with CIDOC-CRM-based knowledge representations and has off-the-shelf compatibility with GLB files. Based on these facilities, it was possible to simplify the creation of the virtual exhibition without compromising quality or interaction.
In summary, we are confident that the provided methodology represents a holistic and innovative response to the multifaceted challenges in the preservation and presentation of cultural artifacts, contributing to the evolving landscape of cultural heritage in the digital age. The following table (see Table 2) summarizes the technical outcomes of this work, providing references to the locations where data and content can be accessed, previewed, and experienced.

Collections ingested to aggregators: SearchCulture, "Digitization of Contemporary Works By Young Artists Inspired By Textile Heritage" [108].

Data access: Branding Heritage collection public API [107].

Experiential access: Branding Heritage virtual exhibition [114].

Figure 1.
Figure 1. Graphical representation of the methodology proposed by this research work. The method used is encoded in black and its output in red.


Figure 2.
Figure 2. FARO example with 9 scans from different angles. (a) An original scan mesh; (b) the scan mesh with modifiers applied; (c) the resulting unified mesh: all 9 modified scans are merged, and modifiers and geometry nodes convert them into a single mesh (6 million triangles); (d) InstaMesh simplification.
Diving into more details on the aforementioned process: in the first step, all individual FARO scans have the following modifiers applied: (a) the "cut to alpha" geometry node, (b) planar decimation at 0.1 degrees, and (c) triangulate. The modifier is presented in Figure 3a. Subsequently, the modified scans are merged into a new target model. The structure is complicated, with overlapping polygons that oftentimes have missing parts due to


Figure 4.
Figure 4. (a) Semantic metadata for an image; (b) a web-based preview of the image; (c) semantic metadata for a 3D object; and (d) a web-based preview of the 3D object.


Figure 5.
Figure 5. A representation of a heritage object and its associations with its creator, the materials used, the event of its creation, and the media object representing its reconstruction in 3D. Associations are marked with arrows.


Figure 6.
Figure 6. Export of the heritage object's metadata in RDF/XML format.


Figure 7.
Figure 7. Top-down drawing of the exhibition building, using the built-in tool.


Figure 8.
Figure 8. (a) An imported GLB format 3D model in the exhibition; (b) with a ceiling for the exhibition and a spotlight over the model; (c) imported decorative elements to complement the 3D model; (d) the inspector (left modal) and the scale gizmos attached to the model.


Figure 9.
Figure 9. The scene specification in Blender.

Figure 10.
Figure 10. (a) Before baking; (b) after baking the scene in Blender.


Table 1 .
Overview of the proposed technologies.
Acquisition
• Points and their RGB values: The FARO Focus Laser Scanner acquires 3D points and their RGB values for a segment of the space occupied by the artifact.
• Mobile depth-enhanced photogrammetry: A mobile device together with the Trnio app is used to acquire a 360-degree view from various heights.
Reconstruction
• Photogrammetric reconstruction: PIX4Dmatic from Pix4D [95] is used with input from the acquired photographic documentation.
• Reconstruction based on lidar data: FARO Scene is used to produce the point clouds and to translate the point clouds into textured 3D meshes.
• Cloud-based reconstruction: Data acquired using Trnio are post-processed in the Trnio cloud to create the reconstruction.
Post-processing
• Application of modifiers: Partial scans are modified in Blender.
• Mesh unification and refactoring: The modified partial scans are registered in Blender, merged, and refactored.
• Mesh simplification: The combined 3D mesh is simplified to reduce the number of polygons.
• Projection of scan textures: The individual scan textures are projected onto the simplified mesh.
• Averaged image stacking: The combined texture is created, and color and lighting are adjusted and calibrated.
Digital Curation
• Deposit of 3D models as linked open data: The collection of final 3D models is deposited in Zenodo and receives a URI.
• Curation of 3D models: 3D models and their metadata are curated in a semantic repository.
• Curation of artifacts: Semantic representations of the artifacts are authored that enrich the representation through events, materials, places, and links to open vocabularies.
• Export for ingestion in open repositories: The semantically rich representations are exported and ingested in open repositories.

Table 2 .
Summary of the outcomes of this work.