Special Issue "Machine Learning and Vision for Cultural Heritage"

A special issue of Sci (ISSN 2413-4155).

Deadline for manuscript submissions: closed (29 February 2020).

Special Issue Editor

Dr. Ognjen Arandjelović
Guest Editor
School of Computer Science, University of St Andrews, St Andrews, Fife KY16 9SX, Scotland, UK
Interests: computer vision; machine learning; pattern recognition; data mining; bioinformatics; medicine

Special Issue Information

Dear Colleagues,

It is hardly an exaggeration to say that recent advances in machine learning have already demonstrably had an enormous impact on everyday life for billions of people across the world, generating febrile excitement both in the research community and amongst the general public. This technological progress coupled with the broader enthusiasm for the field have created an environment which is hungry for novel ideas and applications, including in realms previously not thought of as naturally linked with computer science.

One such domain concerns cultural heritage. Rather than alienating us from culture and the arts, machine learning and artificial intelligence provide an excellent opportunity to bring the former closer to the modern world, make them more exciting, and more readily understood and explored. This Special Issue aims at helping to facilitate exactly that.

We welcome high-quality submissions on any topic falling under the broad umbrella of cultural heritage, and we particularly encourage those which describe work in novel application domains. Domains of particular interest include numismatics, the visual arts, musicology-related research, archaeological object reconstruction and visualization, and interactive tools.

Dr. Ognjen Arandjelović
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sci is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research

Open Access Article
Visual Reconstruction of Ancient Coins Using Cycle-Consistent Generative Adversarial Networks
Sci 2020, 2(3), 52; https://doi.org/10.3390/sci2030052 - 07 Jul 2020
Abstract
In this paper, our goal is to perform a virtual restoration of an ancient coin from its image. The present work is the first to propose this problem, and it is motivated by two key promising applications. The first of these emerges from the recently recognised dependence of automatic image-based coin type matching on the condition of the imaged coins; the algorithm introduced herein could be used as a pre-processing step aimed at overcoming this weakness. The second application concerns the utility, both to professional and hobby numismatists, of being able to visualise and study an ancient coin in a state closer to its original (minted) appearance. To address the problem at hand, we introduce a framework which comprises a deep learning based method using Generative Adversarial Networks, capable of learning the range of appearance variation of different semantic elements artistically depicted on coins, and a complementary algorithm used to collect, correctly label, and prepare for processing a large number of images (here 100,000) of ancient coins needed to facilitate the training of the aforementioned learning method. Empirical evaluation performed on a withheld subset of the data demonstrates extremely promising performance of the proposed methodology, and shows that our algorithm correctly learns the spectra of appearance variation across different semantic elements and, despite the enormous variability present, reconstructs the missing (damaged) detail while matching the surrounding semantic content and artistic style.
(This article belongs to the Special Issue Machine Learning and Vision for Cultural Heritage)
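The cycle-consistency principle underpinning the GAN framework described in this abstract can be illustrated with a minimal numerical sketch. The functions G and F below are hypothetical stand-ins for the two learned generators (worn-to-restored and restored-to-worn mappings), not the authors' actual models:

```python
import numpy as np

# Hypothetical stand-ins for the two learned generators:
# G maps a worn coin image to its restored appearance,
# F maps a restored image back to the worn domain.
def G(x):
    return x * 1.1 + 0.05  # placeholder "restoration" mapping

def F(y):
    return (y - 0.05) / 1.1  # placeholder inverse mapping

def cycle_consistency_loss(x, y):
    """L1 cycle loss of the kind used in CycleGAN-style training:
    F(G(x)) should recover x, and G(F(y)) should recover y."""
    forward_cycle = np.mean(np.abs(F(G(x)) - x))
    backward_cycle = np.mean(np.abs(G(F(y)) - y))
    return forward_cycle + backward_cycle

rng = np.random.default_rng(0)
worn = rng.random((8, 8))      # toy stand-in for a worn coin image
restored = rng.random((8, 8))  # toy stand-in for a restored image
loss = cycle_consistency_loss(worn, restored)
```

Because the placeholder F is the exact inverse of G, the loss here is essentially zero; in actual training, minimising this term (alongside adversarial losses) constrains the generators to preserve the coin's semantic content while changing its condition.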

Open Access Article
Making Japanese Ukiyo-e Art 3D in Real-Time
Sci 2020, 2(2), 32; https://doi.org/10.3390/sci2020032 - 11 May 2020
Abstract
Ukiyo-e is a traditional Japanese painting style most commonly printed using wood blocks. Ukiyo-e prints feature distinct line work, bright colours, and a non-perspective projection. Most previous research on ukiyo-e-styled computer graphics has focused on the creation of 2D images. In this paper, we propose a framework for rendering interactive 3D scenes in a ukiyo-e style. The rendering techniques use standard 3D models as input and require minimal additional information to automatically render scenes in a ukiyo-e style. The described techniques are evaluated based on their ability to emulate ukiyo-e prints, their performance, and their temporal coherence.
(This article belongs to the Special Issue Machine Learning and Vision for Cultural Heritage)
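The "non-perspective projection" this abstract mentions can be sketched with a simple parallel (oblique) projection, in which depth is sheared onto the image plane rather than foreshortened, producing the characteristically flat look of ukiyo-e prints. This is a generic illustration of parallel projection, not the paper's actual rendering pipeline:

```python
import numpy as np

def oblique_project(points, alpha=0.5, angle=np.radians(45)):
    """Project 3D points to 2D with an oblique parallel projection:
    depth (z) is sheared onto the image plane at a fixed angle
    instead of being foreshortened, so distant objects do not
    shrink as they would under perspective projection."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    u = x + alpha * z * np.cos(angle)
    v = y + alpha * z * np.sin(angle)
    return np.stack([u, v], axis=1)

# Four corners of a unit cube as a toy scene.
cube = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
projected = oblique_project(cube)
```

Points in the z = 0 plane project to themselves; points at depth are offset diagonally by a constant amount, independent of how far away they are.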

Open Access Article
Learning to Describe: A New Approach to Computer Vision Based Ancient Coin Analysis
Sci 2020, 2(2), 27; https://doi.org/10.3390/sci2020027 - 17 Apr 2020
Abstract
In recent years, a range of problems under the broad umbrella of computer vision based analysis of ancient coins has been attracting an increasing amount of attention. Notwithstanding this research effort, the results achieved by the state of the art in the published literature remain poor and fall far short of the performance needed for any practical purpose. In the present paper, we make a series of contributions which we believe will benefit the interested community. We explain that the approach of visual matching of coins, universally adopted in existing published work on the topic, is not of practical interest because the number of ancient coin types far exceeds the number of types which have been imaged, be it in digital form (e.g., online) or otherwise (traditional film, in print, etc.). Rather, we argue that the focus should be on understanding the semantic content of coins. Hence, we describe a novel approach: first, to extract semantic concepts from real-world multimodal input and associate them with their corresponding coin images, and then to train a convolutional neural network to learn the appearance of these concepts. On a real-world data set, we demonstrate highly promising results, correctly identifying a range of visual elements on unseen coins with up to 84% accuracy.
(This article belongs to the Special Issue Machine Learning and Vision for Cultural Heritage)
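Identifying independent visual elements on a coin, as this abstract describes, is naturally framed as multi-label classification: each network output corresponds to one semantic concept, and a sigmoid turns it into a per-concept probability. The concept names and logit values below are illustrative placeholders, not the paper's actual data:

```python
import numpy as np

def predict_concepts(logits, concept_names, threshold=0.5):
    """Multi-label prediction: each output unit corresponds to one
    semantic element depicted on a coin, and a sigmoid converts its
    logit into an independent per-concept probability. Concepts whose
    probability clears the threshold are reported as present."""
    probs = 1.0 / (1.0 + np.exp(-logits))  # element-wise sigmoid
    return [name for name, p in zip(concept_names, probs) if p >= threshold]

# Hypothetical semantic elements and raw network outputs for one coin.
concepts = ["cornucopia", "patera", "eagle", "shield"]
logits = np.array([2.1, -1.3, 0.4, -0.2])
present = predict_concepts(logits, concepts)
```

Unlike the softmax used for single-label type matching, independent sigmoids allow any number of concepts to co-occur on the same coin, which is what a semantic description of its imagery requires.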
