Machine Learning and Vision for Cultural Heritage

A special issue of Sci (ISSN 2413-4155).

Deadline for manuscript submissions: closed (29 February 2020) | Viewed by 39,248

Special Issue Editor


Dr. Ognjen Arandjelović
Guest Editor
School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
Interests: computer vision; machine learning; pattern recognition; data mining; bioinformatics; medicine

Special Issue Information

Dear Colleagues,

It is hardly an exaggeration to say that recent advances in machine learning have already had a demonstrable and enormous impact on everyday life for billions of people across the world, generating febrile excitement both in the research community and amongst the general public. This technological progress, coupled with the broader enthusiasm for the field, has created an environment hungry for novel ideas and applications, including in realms not previously thought of as naturally linked with computer science.

One such domain is cultural heritage. Rather than alienating us from culture and the arts, machine learning and artificial intelligence offer an excellent opportunity to bring them closer to the modern world, making them more exciting, more readily understood, and easier to explore. This Special Issue aims to help facilitate exactly that.

We welcome high-quality submissions on any topic falling under the broad umbrella of cultural heritage, and particularly encourage those describing work in novel application domains. Domains of particular interest include numismatics, the visual arts, musicology-related research, archaeological object reconstruction and visualization, and interactive tools.

Dr. Ognjen Arandjelović
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sci is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

17 pages, 4059 KiB  
Article
Images of Roman Imperial Denarii: A Curated Data Set for the Evaluation of Computer Vision Algorithms Applied to Ancient Numismatics, and an Overview of Challenges in the Field
by Ognjen Arandjelović and Marios Zachariou
Sci 2020, 2(4), 91; https://doi.org/10.3390/sci2040091 - 7 Dec 2020
Cited by 3 | Viewed by 4,966
Abstract
Automatic ancient Roman coin analysis only recently emerged as a topic of computer science research. Nevertheless, owing to its ever-increasing popularity, the field is already reaching a certain degree of maturity, as witnessed by a substantial publication output in the last decade. At the same time, it is becoming evident that research progress is being limited by a somewhat veering direction of effort and the lack of a coherent framework which facilitates the acquisition and dissemination of robust, repeatable, and rigorous evidence. Thus, in the present article, we seek to address several associated challenges. To start with, (i) we provide a first overview and discussion of different challenges in the field, some of which have been scarcely investigated to date, and others which have hitherto been unrecognized and unaddressed. Secondly, (ii) we introduce the first data set, carefully curated and collected for the purpose of facilitating methodological evaluation of algorithms and, specifically, the effects of coin preservation grades on the performance of automatic methods. Indeed, until now, only one published work at all recognized the need for this kind of analysis, which, to any numismatist, would be a trivially obvious fact. We also discuss a wide range of considerations which had to be taken into account in collecting this corpus, explain our decisions, and describe its content in detail. Briefly, the data set comprises 100 different coin issues, all with multiple examples in Fine, Very Fine, and Extremely Fine conditions, giving a total of over 650 different specimens. These correspond to 44 issuing authorities and span the time period of approximately 300 years (from 27 BC until 244 AD). In summary, the present article should be an invaluable resource to researchers in the field, and we encourage the community to adopt the collected corpus, freely available for research purposes, as a standard evaluation benchmark.
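As a purely illustrative aside (not code from the article), the kind of grade-stratified evaluation this data set is designed to support could be sketched in Python roughly as follows; the directory layout, file naming, and the classifier stub are assumptions made for the example.

    # Illustrative sketch only: grade-stratified evaluation of a coin-type
    # classifier, in the spirit of the evaluation the data set is meant to
    # support. The directory layout, file naming, and the predict() stub are
    # assumptions made for this example, not details taken from the article.
    from collections import defaultdict
    from pathlib import Path

    GRADES = ("fine", "very_fine", "extremely_fine")  # assumed grade labels

    def predict(image_path: Path) -> str:
        """Placeholder for whichever coin-type classifier is being evaluated."""
        raise NotImplementedError

    def grade_stratified_accuracy(root: Path) -> dict:
        # Assumed layout: root/<grade>/<issue_id>/<specimen>.jpg
        correct, total = defaultdict(int), defaultdict(int)
        for grade in GRADES:
            for image in (root / grade).rglob("*.jpg"):
                issue_id = image.parent.name          # ground-truth coin issue
                total[grade] += 1
                if predict(image) == issue_id:
                    correct[grade] += 1
        return {g: correct[g] / total[g] for g in GRADES if total[g]}

Reporting accuracy per preservation grade, rather than a single pooled figure, is exactly the kind of analysis the abstract argues has so far been largely missing.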

18 pages, 2926 KiB  
Article
Visual Reconstruction of Ancient Coins Using Cycle-Consistent Generative Adversarial Networks
by Marios Zachariou, Neofytos Dimitriou and Ognjen Arandjelović
Sci 2020, 2(3), 52; https://doi.org/10.3390/sci2030052 - 7 Jul 2020
Cited by 8 | Viewed by 11,214
Abstract
In this paper, our goal is to perform a virtual restoration of an ancient coin from its image. The present work is the first one to propose this problem, and it is motivated by two key promising applications. The first of these emerges from the recently recognised dependence of automatic image based coin type matching on the condition of the imaged coins; the algorithm introduced herein could be used as a pre-processing step, aimed at overcoming the aforementioned weakness. The second application concerns the utility both to professional and hobby numismatists of being able to visualise and study an ancient coin in a state closer to its original (minted) appearance. To address the conceptual problem at hand, we introduce a framework which comprises a deep learning based method using Generative Adversarial Networks, capable of learning the range of appearance variation of different semantic elements artistically depicted on coins, and a complementary algorithm used to collect, correctly label, and prepare for processing a large number of images (here 100,000) of ancient coins needed to facilitate the training of the aforementioned learning method. Empirical evaluation performed on a withheld subset of the data demonstrates extremely promising performance of the proposed methodology and shows that our algorithm correctly learns the spectra of appearance variation across different semantic elements, and despite the enormous variability present reconstructs the missing (damaged) detail while matching the surrounding semantic content and artistic style.
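The framework itself is described in the paper; purely as a generic illustration of the cycle-consistency idea that underpins CycleGAN-style training (and not the authors' actual architecture, losses, or hyper-parameters), a minimal PyTorch sketch might look as follows.

    # Generic illustration of cycle-consistent training between a "worn" and a
    # "well-preserved" coin image domain. The tiny generators, the single loss
    # term shown, and all shapes are placeholders for this sketch, not the
    # models used in the article (adversarial and identity terms are omitted).
    import torch
    import torch.nn as nn

    def tiny_generator() -> nn.Sequential:
        # Stand-in for a proper encoder-decoder generator.
        return nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    G_worn2fine = tiny_generator()   # worn coin -> restored appearance
    G_fine2worn = tiny_generator()   # restored  -> worn appearance
    l1 = nn.L1Loss()

    def cycle_loss(worn: torch.Tensor, fine: torch.Tensor) -> torch.Tensor:
        # Translate to the other domain and back again; the reconstructions
        # should match the originals (the cycle-consistency constraint).
        worn_rec = G_fine2worn(G_worn2fine(worn))
        fine_rec = G_worn2fine(G_fine2worn(fine))
        return l1(worn_rec, worn) + l1(fine_rec, fine)

    # Usage with random stand-in batches of 64x64 RGB images.
    loss = cycle_loss(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))

The appeal of a cycle-consistent formulation for this task is that it does not require paired images of the same coin in worn and well-preserved states, only two unpaired collections.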

24 pages, 5429 KiB  
Article
Making Japanese Ukiyo-e Art 3D in Real-Time
by Innes Brown and Ognjen Arandjelović
Sci 2020, 2(2), 32; https://doi.org/10.3390/sci2020032 - 11 May 2020
Cited by 1 | Viewed by 5,685
Abstract
Ukiyo-e is a traditional Japanese painting style most commonly printed using wood blocks. Ukiyo-e prints feature distinct line work, bright colours, and a non-perspective projection. Most previous research on ukiyo-e styled computer graphics has been focused on the creation of 2D images. In this paper we propose a framework for rendering interactive 3D scenes in a ukiyo-e style. The rendering techniques use standard 3D models as input and require minimal additional information to automatically render scenes in a ukiyo-e style. The described techniques are evaluated based on their ability to emulate ukiyo-e prints, performance, and temporal coherence.
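The article's real-time rendering techniques are described in the paper itself; as a loose, offline illustration of one common ingredient of such non-photorealistic rendering, namely quantising shading into flat colour bands reminiscent of woodblock prints, consider the following NumPy sketch (all names and parameters are invented for the example).

    # Toy offline illustration of cel-style shading: Lambertian lighting is
    # quantised into a few flat tones, loosely evoking the flat colour regions
    # of a woodblock print. This is not the article's rendering pipeline.
    import numpy as np

    def quantised_diffuse(normals, light_dir, bands=3):
        # normals: (H, W, 3) unit surface normals; light_dir: (3,) unit vector.
        diffuse = np.clip(normals @ light_dir, 0.0, 1.0)        # (H, W) shading
        levels = np.minimum(np.floor(diffuse * bands), bands - 1)
        return levels / (bands - 1)                              # flat tone per pixel

    # Usage with a stand-in normal buffer (all normals facing the viewer).
    normals = np.zeros((4, 4, 3)); normals[..., 2] = 1.0
    tones = quantised_diffuse(normals, np.array([0.0, 0.0, 1.0]))

In a real-time setting the same quantisation would typically live in a fragment shader; the NumPy version above is only meant to make the banding idea concrete.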

15 pages, 4653 KiB  
Article
Learning to Describe: A New Approach to Computer Vision Based Ancient Coin Analysis
by Jessica Cooper and Ognjen Arandjelović
Sci 2020, 2(2), 27; https://doi.org/10.3390/sci2020027 - 17 Apr 2020
Cited by 8 | Viewed by 5,798
Abstract
In recent years, a range of problems under the broad umbrella of computer vision based analysis of ancient coins has been attracting an increasing amount of attention. Notwithstanding this research effort, the results achieved by the state of the art in published literature remain poor and far from sufficiently well performing for any practical purpose. In the present paper we present a series of contributions which we believe will benefit the interested community. We explain that the approach of visual matching of coins, universally adopted in existing published papers on the topic, is not of practical interest because the number of ancient coin types exceeds by far the number of those types which have been imaged, be it in digital form (e.g., online) or otherwise (traditional film, in print, etc.). Rather, we argue that the focus should be on understanding the semantic content of coins. Hence, we describe a novel approach: to first extract semantic concepts from real-world multimodal input and associate them with their corresponding coin images, and then to train a convolutional neural network to learn the appearance of these concepts. On a real-world data set, we demonstrate highly promising results, correctly identifying a range of visual elements on unseen coins with up to 84% accuracy.
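As a rough, generic sketch of how such multi-label prediction of semantic elements can be set up (the concept list, the toy network, and all hyper-parameters below are invented for illustration and are not the authors' model), consider the following.

    # Generic multi-label set-up: each coin image is tagged with the semantic
    # elements it depicts, and a network outputs one logit per concept. The
    # concepts, architecture, and data below are placeholders for this sketch.
    import torch
    import torch.nn as nn

    CONCEPTS = ["cornucopia", "patera", "shield", "spear"]      # hypothetical list

    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, len(CONCEPTS)),                           # one logit per concept
    )
    criterion = nn.BCEWithLogitsLoss()                          # standard multi-label loss
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

    images = torch.randn(8, 3, 128, 128)                        # stand-in image batch
    labels = torch.randint(0, 2, (8, len(CONCEPTS))).float()    # which concepts appear

    optimiser.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimiser.step()

Framing the task as independent per-concept predictions, rather than a single type classification, is what allows the system to describe coin types it has never seen imaged before.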
