
Smart Sensors Data Processing and Visualization in Cultural Heritage

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 8349

Special Issue Editors

Dr. Andrea Giachetti
Department of Computer Science, University of Verona, 37134 Verona, Italy
Interests: computer vision; computer graphics; visual computing; medical applications; cultural heritage

Dr. Enrico Gobbetti
CRS4 Visual Computing Group, 09010 Pula, Italy
Interests: computer graphics; visualization; cultural heritage; visual computing

Special Issue Information

Dear Colleagues,

Smart sensors and digital image processing tools are nowadays fundamental to documenting, preserving, and making cultural heritage accessible at a variety of scales, from the structure of very large sites to the tiniest microscopic details of cultural artifacts. Processing the acquired digital data enables many applications, from stylistic and historical analysis to restoration, virtual access, and visualization.

In this Special Issue, you are invited to submit original research contributions and innovative reports on advancements, developments, and experiments pertaining to the exploitation of sensor data in cultural heritage applications. In particular, the Special Issue welcomes contributions on newly developed methods and ideas that combine data obtained from various sensors in the following fields (but not limited to them):

  • 3D scanning, remote sensing, multispectral and multi-light imaging, X-ray, terahertz imaging, motion capture, and other techniques allowing cultural heritage to be recorded;
  • Visual data processing, visualization, and multimodal data fusion;
  • Image and 3D data processing, classification, and retrieval;
  • Re-colorization and art restoration from incomplete data;
  • Material characterization, cracks and feature detection, and ageing estimation and prediction;
  • Acquisition protocols, digital libraries, archiving, and long-term preservation of 3D documents;
  • Pattern recognition and neural networks for automatic annotation of cultural heritage objects;
  • Sensors for monitoring visits to and use of cultural heritage sites and analyses of visitors’ experiences.

Dr. Andrea Giachetti
Dr. Enrico Gobbetti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cultural heritage
  • sensors
  • sensor fusion
  • visualization
  • shape and material reconstruction
  • semantic annotation
  • conservation
  • pattern analysis
  • digital libraries
  • virtual museums

Published Papers (3 papers)


Research

21 pages, 13319 KiB  
Article
Colour-Balanced Edge-Guided Digital Inpainting: Applications on Artworks
by Irina-Mihaela Ciortan, Sony George and Jon Yngve Hardeberg
Sensors 2021, 21(6), 2091; https://doi.org/10.3390/s21062091 - 17 Mar 2021
Cited by 10 | Viewed by 2208
Abstract
The virtual inpainting of artworks provides a nondestructive mode of hypothesis visualization, and it is especially attractive when physical restoration raises too many methodological and ethical concerns. At the same time, in Cultural Heritage applications, the level of detail in the virtual reconstruction and its accuracy are crucial. We propose an inpainting algorithm based on a generative adversarial network with two generators: one for edges and another for colors. The color generator chromatically rebalances the result by enforcing a loss in the discretized gamut space of the dataset. This way, our method follows the modus operandi of an artist: edges first, then color palette, and, at last, color tones. Moreover, we simulate the stochasticity of the lacunae in artworks with morphological variations of a random-walk mask that recreate various degradations, including craquelure. We showcase the performance of our model on a dataset of digital images of wall paintings from the Dunhuang UNESCO heritage site. Our proposals of restored images are visually satisfactory and quantitatively comparable to state-of-the-art approaches.
(This article belongs to the Special Issue Smart Sensors Data Processing and Visualization in Cultural Heritage)
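
As a rough illustration of the lacuna simulation described in this abstract, the sketch below (not the authors' code; the walk length, dilation scheme, and image size are assumptions) draws a random walk on a binary mask and dilates it morphologically to mimic crack-like degradations such as craquelure:

```python
# Illustrative sketch only: a crack-like lacuna mask built from a random walk,
# with morphological dilation controlling the apparent width of the damage.
import numpy as np
from scipy.ndimage import binary_dilation

def random_walk_mask(height=256, width=256, steps=4000, thickness=3, seed=0):
    """Return a boolean mask simulating a craquelure-like lacuna."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((height, width), dtype=bool)
    y, x = height // 2, width // 2            # start in the image centre
    for _ in range(steps):
        mask[y, x] = True
        dy, dx = rng.integers(-1, 2, size=2)  # random step in {-1, 0, 1}
        y = np.clip(y + dy, 0, height - 1)
        x = np.clip(x + dx, 0, width - 1)
    structure = np.ones((thickness, thickness), dtype=bool)
    return binary_dilation(mask, structure=structure)

if __name__ == "__main__":
    mask = random_walk_mask()
    print(f"masked pixels: {mask.sum()} of {mask.size}")
```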

29 pages, 10280 KiB  
Article
Protocol Development for Point Clouds, Triangulated Meshes and Parametric Model Acquisition and Integration in an HBIM Workflow for Change Control and Management in a UNESCO’s World Heritage Site
by Adela Rueda Márquez de la Plata, Pablo Alejandro Cruz Franco, Jesús Cruz Franco and Victor Gibello Bravo
Sensors 2021, 21(4), 1083; https://doi.org/10.3390/s21041083 - 04 Feb 2021
Cited by 20 | Viewed by 3041
Abstract
This article illustrates a methodological process for data acquisition based on Structure from Motion (SfM), compared with terrestrial laser scanning (TLS) and integrated into a Historic Building Information Model (HBIM) for architectural heritage management. The process was developed for the documentation of the areas bordering Cáceres’ Almohad wall, a UNESCO World Heritage Site. The aim of the case study was the analysis, management, and control of a large urban area in which urban growth had absorbed the wall, making it physically inaccessible. The methodology combined point clouds and meshes obtained by SfM from images acquired with an Unmanned Aerial Vehicle (UAV) and a Single Lens Reflex (SLR) camera (terrestrial photogrammetry) with point clouds obtained by TLS. The outcome was a smart, high-quality three-dimensional study model of the inaccessible urban area. The final result was two-fold. On the one hand, there was a methodological result: a low-cost, accurate work procedure to obtain a three-dimensional parametric HBIM model that integrates models obtained by remote sensing. On the other hand, there was a patrimonial result: the discovery of a section of the 12th-century wall, supposedly lost, hidden among the residential buildings. The article covers the survey campaign carried out by the research team and the techniques applied.
(This article belongs to the Special Issue Smart Sensors Data Processing and Visualization in Cultural Heritage)
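
As a simplified illustration of one step in this kind of workflow, the sketch below co-registers a photogrammetric (SfM) point cloud with a TLS reference cloud via ICP using the open3d library; the file names, voxel size, and distance threshold are hypothetical, and the paper's full HBIM integration goes well beyond this:

```python
# Illustrative sketch only: aligning an SfM cloud to a TLS reference cloud.
import numpy as np
import open3d as o3d

def register_sfm_to_tls(sfm_path="sfm_cloud.ply", tls_path="tls_cloud.ply",
                        voxel=0.05, max_dist=0.2):
    sfm = o3d.io.read_point_cloud(sfm_path)   # cloud from UAV/SLR photogrammetry
    tls = o3d.io.read_point_cloud(tls_path)   # reference cloud from laser scanning
    # Downsample both clouds so ICP runs at a manageable resolution.
    sfm_d = sfm.voxel_down_sample(voxel)
    tls_d = tls.voxel_down_sample(voxel)
    # Point-to-point ICP refines an (assumed roughly correct) initial alignment.
    result = o3d.pipelines.registration.registration_icp(
        sfm_d, tls_d, max_dist, np.identity(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    print(f"ICP fitness: {result.fitness:.3f}")
    return sfm.transform(result.transformation)  # SfM cloud in the TLS frame
```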

14 pages, 12792 KiB  
Article
A Visual Attentive Model for Discovering Patterns in Eye-Tracking Data—A Proposal in Cultural Heritage
by Roberto Pierdicca, Marina Paolanti, Ramona Quattrini, Marco Mameli and Emanuele Frontoni
Sensors 2020, 20(7), 2101; https://doi.org/10.3390/s20072101 - 08 Apr 2020
Cited by 5 | Viewed by 2578
Abstract
In the Cultural Heritage (CH) context, art galleries and museums employ technology devices to enhance and personalise the museum visit experience. However, the most challenging aspect is to determine what the visitor is interested in. In this work, a novel Visual Attentive Model (VAM), learned from eye-tracking data, is proposed. In particular, eye-tracking data of adults and children observing five paintings with similar characteristics were collected. The images were selected by CH experts: the three “Ideal Cities” (Urbino, Baltimore, and Berlin), the inlaid chest in the National Gallery of the Marche, and the wooden panel with a view of the Marche in the “Studiolo del Duca”. These pictures have been recognized by experts as having analogous features, thus providing coherent visual stimuli. Our proposed method combines a new coordinate representation of eye sequences, obtained using Geometric Algebra, with a deep learning model that recognizes people (identifying and differentiating them) by the attention focus of their distinctive eye-movement patterns. The experiments compared five Deep Convolutional Neural Networks (DCNNs) and yielded high accuracy (more than 80%), demonstrating the effectiveness and suitability of the proposed approach for distinguishing adult and child museum visitors.
(This article belongs to the Special Issue Smart Sensors Data Processing and Visualization in Cultural Heritage)
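
As a toy illustration of the classification task (not the authors' pipeline; the Geometric Algebra encoding is replaced here by a simple dwell-time heat map, and the network is much smaller than the DCNNs compared in the paper), the sketch below rasterizes a fixation sequence and classifies it with a small CNN in PyTorch:

```python
# Illustrative sketch only: fixation sequence -> heat map -> adult/child classifier.
import torch
import torch.nn as nn

def fixations_to_heatmap(fixations, size=64):
    """fixations: (N, 3) tensor of (x, y, duration) with x, y in [0, 1]."""
    heatmap = torch.zeros(1, size, size)
    for x, y, dur in fixations:
        col = int(torch.clamp(x * (size - 1), 0, size - 1))
        row = int(torch.clamp(y * (size - 1), 0, size - 1))
        heatmap[0, row, col] += float(dur)    # accumulate dwell time per cell
    return heatmap / heatmap.max().clamp(min=1e-6)

class GazeCNN(nn.Module):
    def __init__(self, n_classes=2):          # adult vs. child
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example with random data standing in for a real recording session.
fix = torch.rand(50, 3)                        # 50 synthetic fixations
logits = GazeCNN()(fixations_to_heatmap(fix).unsqueeze(0))
print(logits.shape)                            # torch.Size([1, 2])
```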
