Special Issue "Fine Art Pattern Extraction and Recognition"

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Computer Vision and Pattern Recognition".

Deadline for manuscript submissions: closed (31 May 2021) | Viewed by 14607

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors

Dr. Giovanna Castellano
Guest Editor
Department of Computer Science, University of Bari Aldo Moro, Via Orabona 4, 70125 Bari, Italy
Interests: image processing; computer vision; fuzzy systems; fuzzy clustering; image retrieval; neural networks; neuro-fuzzy modeling; granular computing; recommender systems
Dr. Gennaro Vessio
Guest Editor
Department of Computer Science, University of Bari, Bari, Italy
Interests: machine learning; deep learning; pattern recognition; computer vision; health informatics; biometrics
Dr. Fabio Bellavia
Guest Editor
Department of Mathematics and Computer Science, University of Palermo, Palermo, Italy
Interests: computer vision; image processing; image matching; 3D reconstruction; image mosaicing; color correction

Special Issue Information

Dear Colleagues,

Cultural heritage, in particular fine art, has invaluable importance for the cultural, historic, and economic growth of our societies. Fine art is developed primarily for aesthetic purposes and is mainly concerned with paintings, sculpture, and architecture. In the last few years, thanks to technological improvements and drastically declining costs, a large-scale digitization effort has been made, leading to the growing availability of large digitized fine art collections. This availability, together with recent advances in pattern recognition and computer vision, has opened new opportunities for computer science researchers to assist the art community with automatic tools to analyse and further understand fine arts. Among other benefits, a deeper understanding of fine arts has the potential to make them more accessible to a wider population, both in terms of fruition and creation, thus supporting the spread of culture.

The ability to recognize meaningful patterns in fine art inherently falls within the domain of human perception, and this perception can be extremely hard to conceptualize. Thus, visual features, such as those automatically learned by deep learning models, can be key to extracting useful representations from low-level colour and texture information. These representations can assist in various art-related tasks, ranging from object detection in paintings to artistic style categorization, useful, for example, in museum and art gallery websites.

The aim of the International Workshop on Fine Art Pattern Extraction and Recognition (FAPER 2020 @ ICPR 2020) is to provide an international forum for those who wish to present advancements in the state of the art, innovative research, ongoing projects, and academic and industrial reports on the application of visual pattern extraction and recognition for the better understanding and fruition of fine arts. The workshop solicits contributions from diverse areas such as pattern recognition, computer vision, artificial intelligence, and image processing.

This Special Issue welcomes selected papers from FAPER 2020. In addition, we solicit contributions from scholars involved and interested in this research area.

Prof. Giovanna Castellano
Dr. Gennaro Vessio
Dr. Fabio Bellavia
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Application of machine learning and deep learning to cultural heritage
  • Computer vision and multimedia data
  • Generative adversarial networks for artistic data
  • Augmented and virtual reality for cultural heritage
  • 3D reconstruction of historical artifacts
  • Historical document analysis
  • Content-based retrieval in the art domain
  • Speech, audio, and music analysis from historical archives
  • Digitally enriched museum visits
  • Smart interactive experiences in cultural sites
  • Projects, products, or prototypes for cultural heritage restoration, preservation, and fruition

Published Papers (12 papers)


Editorial


Editorial
Editorial for Special Issue “Fine Art Pattern Extraction and Recognition”
J. Imaging 2021, 7(10), 195; https://doi.org/10.3390/jimaging7100195 - 29 Sep 2021
Viewed by 706
Abstract
Cultural heritage, especially the fine arts, plays an invaluable role in the cultural, historical, and economic growth of our societies [...] Full article
(This article belongs to the Special Issue Fine Art Pattern Extraction and Recognition)

Research


Article
Multimodal Emotion Recognition from Art Using Sequential Co-Attention
J. Imaging 2021, 7(8), 157; https://doi.org/10.3390/jimaging7080157 - 21 Aug 2021
Cited by 1 | Viewed by 882
Abstract
In this study, we present a multimodal emotion recognition architecture that uses both feature-level attention (sequential co-attention) and modality attention (weighted modality fusion) to classify emotion in art. The proposed architecture helps the model to focus on learning informative and refined representations for both feature extraction and modality fusion. The resulting system can be used to categorize artworks according to the emotions they evoke; recommend paintings that accentuate or balance a particular mood; and search for paintings of a particular style or genre that represent custom content with a custom emotional impact. Experimental results on the WikiArt emotion dataset demonstrated the effectiveness of the proposed approach and the usefulness of the three modalities in emotion recognition. Full article
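The weighted modality fusion step described above can be sketched with a minimal numpy example (the function name, dimensions, and softmax weighting scheme are illustrative assumptions, not code from the paper):

```python
import numpy as np

def weighted_modality_fusion(modalities, scores):
    """Fuse per-modality feature vectors with softmax attention weights.

    modalities: list of 1-D feature vectors of equal length, one per modality
    scores: raw relevance score per modality (higher = more informative)
    """
    scores = np.asarray(scores, dtype=float)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()            # softmax over modalities
    stacked = np.stack(modalities)      # (n_modalities, dim)
    return weights @ stacked            # weighted sum -> (dim,)

# Three toy modalities (e.g., image, title, viewer comments) with 4-dim
# features, scored as equally informative, so each contributes 1/3.
fused = weighted_modality_fusion(
    [np.ones(4), np.zeros(4), np.full(4, 2.0)],
    scores=[1.0, 1.0, 1.0],
)
```

In the actual architecture, the relevance scores would themselves be produced by an attention sub-network conditioned on the co-attended features rather than supplied by hand.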

Article
Classification of Geometric Forms in Mosaics Using Deep Neural Network
J. Imaging 2021, 7(8), 149; https://doi.org/10.3390/jimaging7080149 - 18 Aug 2021
Cited by 2 | Viewed by 687
Abstract
The paper addresses an image processing problem in the field of fine arts. In particular, a deep learning-based technique to classify geometric forms of artworks, such as paintings and mosaics, is presented. We proposed and tested a convolutional neural network (CNN)-based framework that autonomously quantifies the feature map and classifies it. Convolution, pooling, and dense layers are three distinct categories of layers that generate attributes from the dataset images by applying certain specified filters. As a case study, a Roman mosaic is considered, which is digitally reconstructed by close-range photogrammetry based on standard photos. During the digital transformation from a 2D perspective view of the mosaic into an orthophoto, each photo is rectified (i.e., it is an orthogonal projection of the real photo on the plane of the mosaic). Image samples of the geometric forms, e.g., triangles, squares, circles, octagons and leaves, even if they are partially deformed, were extracted from both the original and the rectified photos and formed the dataset for testing the CNN-based approach. The proposed method has proved to be robust enough to analyze the mosaic geometric forms, with an accuracy higher than 97%. Furthermore, the performance of the proposed method was compared with standard deep learning frameworks. Due to the promising results, this method can be applied to many other pattern identification problems related to artworks. Full article
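As a rough, self-contained illustration of what the convolution and pooling layers mentioned above compute (a naive sketch, not the paper's framework, and with a hand-set filter where a trained CNN would learn its weights):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D cross-correlation, as computed by a convolutional layer."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2x2(fmap):
    """2x2 max pooling with stride 2."""
    h, w = fmap.shape[0] // 2, fmap.shape[1] // 2
    return fmap[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

# A vertical step edge and a horizontal-gradient filter: the feature map
# responds only along the edge, and pooling keeps the strongest response.
img = np.zeros((4, 4))
img[:, 2:] = 1.0
fm = conv2d_valid(img, np.array([[-1.0, 1.0]]))  # shape (4, 3)
pooled = max_pool2x2(fm)                          # shape (2, 1)
```

Dense layers would then map such pooled features to class scores for the geometric forms (triangles, squares, circles, and so on).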

Article
Towards Generating and Evaluating Iconographic Image Captions of Artworks
J. Imaging 2021, 7(8), 123; https://doi.org/10.3390/jimaging7080123 - 23 Jul 2021
Cited by 3 | Viewed by 832
Abstract
Automatically generating accurate and meaningful textual descriptions of images is an ongoing research challenge. Recently, much progress has been made by adopting multimodal deep learning approaches for integrating vision and language. However, the task of developing image captioning models is most commonly addressed using datasets of natural images, while few contributions have been made in the domain of artwork images. One of the main reasons for this is the lack of large-scale art datasets of adequate image-text pairs. Another reason is the fact that generating accurate descriptions of artwork images is particularly challenging because such descriptions are more complex and can include multiple levels of interpretation. It is therefore also especially difficult to effectively evaluate generated captions of artwork images. The aim of this work is to address some of those challenges by utilizing a large-scale dataset of artwork images annotated with concepts from the Iconclass classification system. Using this dataset, a captioning model is developed by fine-tuning a transformer-based vision-language pretrained model. Due to the complex relations between image and text pairs in the domain of artwork images, the generated captions are evaluated using several quantitative and qualitative approaches. The performance is assessed using standard image captioning metrics and a recently introduced reference-free metric. The quality of the generated captions and the model’s capacity to generalize to new data are explored by applying the model to another art dataset to compare the relation between commonly generated captions and the genre of artworks. The overall results suggest that the model can generate meaningful captions that indicate a stronger relevance to the art historical context, particularly in comparison to captions obtained from models trained only on natural image datasets. Full article

Article
A Methodology for Semantic Enrichment of Cultural Heritage Images Using Artificial Intelligence Technologies
J. Imaging 2021, 7(8), 121; https://doi.org/10.3390/jimaging7080121 - 22 Jul 2021
Cited by 1 | Viewed by 1551
Abstract
Cultural heritage images are among the primary media for communicating and preserving the cultural values of a society. The images represent concrete and abstract content and symbolise the social, economic, political, and cultural values of the society. However, an enormous amount of such values embedded in the images is left unexploited, partly due to the absence of methodological and technical solutions to capture, represent, and exploit the latent information. With the emergence of new technologies and the availability of cultural heritage images in digital formats, the methodology followed to semantically enrich and utilise such resources becomes a vital factor in supporting users' needs. This paper presents a methodology proposed to unearth the cultural information communicated via cultural digital images by applying Artificial Intelligence (AI) technologies (such as Computer Vision (CV) and semantic web technologies). To this end, the paper presents a methodology that enables efficient analysis and enrichment of a large collection of cultural images, covering all the major phases and tasks. The proposed method is applied and tested using a case study on cultural image collections from the Europeana platform. The paper further presents the analysis of the case study, the challenges, the lessons learned, and promising future research areas on the topic. Full article

Article
ChainLineNet: Deep-Learning-Based Segmentation and Parameterization of Chain Lines in Historical Prints
J. Imaging 2021, 7(7), 120; https://doi.org/10.3390/jimaging7070120 - 19 Jul 2021
Cited by 1 | Viewed by 1002
Abstract
The paper structure of historical prints acts as a unique fingerprint: paper with the same origin shows similar chain line distances. As the manual measurement of chain line distances is time-consuming, the automatic detection of chain lines is beneficial. We propose an end-to-end trainable deep learning method for the segmentation and parameterization of chain lines in transmitted light images of German prints from the 16th century. We trained a conditional generative adversarial network with a multitask loss for line segmentation and line parameterization. We formulated a fully differentiable pipeline for line coordinate estimation that consists of line segmentation, horizontal line alignment, 2D Fourier filtering of line segments, line region proposals, and differentiable line fitting. We created a dataset of high-resolution transmitted light images of historical prints with manual line coordinate annotations. Our method shows superior qualitative and quantitative chain line detection results, with high accuracy and reliability on our historical dataset in comparison to competing methods. Further, we demonstrated that our method achieves a low error of less than 0.7 mm in comparison to manually measured chain line distances. Full article
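Once line coordinates are estimated, turning them into the chain line distances used for paper matching is straightforward; a minimal sketch (the function, scan resolution, and coordinates are illustrative assumptions, not from the paper):

```python
import numpy as np

def chain_line_distances(x_positions, dpi=300):
    """Distances (mm) between consecutive vertical chain lines.

    x_positions: x-coordinates (pixels) where fitted lines cross one image row
    dpi: scan resolution, used to convert pixels to millimetres
    """
    xs = np.sort(np.asarray(x_positions, dtype=float))
    px_per_mm = dpi / 25.4
    return np.diff(xs) / px_per_mm

# Four detected chain lines in a hypothetical 300 dpi transmitted light image
d = chain_line_distances([100, 400, 710, 1015], dpi=300)
```

Comparing such distance sequences across sheets is what allows paper with the same origin to be matched.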

Article
Computer Vision Meets Image Processing and UAS PhotoGrammetric Data Integration: From HBIM to the eXtended Reality Project of Arco della Pace in Milan and Its Decorative Complexity
J. Imaging 2021, 7(7), 118; https://doi.org/10.3390/jimaging7070118 - 16 Jul 2021
Cited by 6 | Viewed by 1511
Abstract
This study aims to enrich the knowledge of the monument Arco della Pace in Milan by surveying and modelling the sculpture that crowns the upper part of the building. The statues and the decorative apparatus are recorded with the photogrammetric technique using both a terrestrial camera and an Unmanned Aerial Vehicle (UAV). Research results and performance are oriented to improve the integration of computer vision and image processing with Unmanned Aerial System (UAS) photogrammetric data to enhance interactivity and information sharing between users and digital heritage models. The vast number of images captured from terrestrial and aerial photogrammetry will also permit the use of the Historic Building Information Modelling (HBIM) model in an eXtended Reality (XR) project developed ad hoc, allowing different types of users (professionals, non-expert users, virtual tourists, and students) and devices (mobile phones, tablets, PCs, VR headsets) to access details and information that are not visible from the ground. Full article

Article
Camera Color Correction for Cultural Heritage Preservation Based on Clustered Data
J. Imaging 2021, 7(7), 115; https://doi.org/10.3390/jimaging7070115 - 13 Jul 2021
Cited by 1 | Viewed by 733
Abstract
Cultural heritage preservation is a crucial topic for our society. When dealing with fine art, color is a primary feature that encompasses much information related to the artwork’s conservation status and to the pigments’ composition. As an alternative to more sophisticated devices, the analysis and identification of color pigments may be addressed via a digital camera, i.e., a non-invasive, inexpensive, and portable tool for studying large surfaces. In the present study, we propose a new supervised approach to camera characterization based on clustered data in order to address the homoscedasticity of the acquired data. The experimental phase is conducted on a real pictorial dataset, where pigments are grouped according to their chromatic or chemical properties. The results show that such a procedure leads to better characterization with respect to state-of-the-art methods. In addition, the present study introduces a method to deal with organic pigments in a quantitative visual approach. Full article
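A minimal sketch of clustered camera characterization via per-cluster linear correction (a plain least-squares 3x3 stand-in; the names and the linear model are assumptions for illustration, not the authors' exact method):

```python
import numpy as np

def fit_color_correction(rgb_cam, rgb_ref):
    """Least-squares 3x3 matrix mapping camera RGB to reference RGB."""
    M, *_ = np.linalg.lstsq(rgb_cam, rgb_ref, rcond=None)
    return M

def characterize_by_cluster(samples_cam, samples_ref, labels):
    """Fit one correction matrix per pigment cluster."""
    return {c: fit_color_correction(samples_cam[labels == c],
                                    samples_ref[labels == c])
            for c in np.unique(labels)}

# Synthetic check: two pigment clusters whose reference colors are a known
# linear transform A of the camera responses; the fit should recover A.
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 1.1, 0.0],
              [0.05, 0.0, 0.95]])
cam = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 1, 0], [0, 1, 1], [1, 0, 1]], dtype=float)
ref = cam @ A
labels = np.array([0, 0, 0, 1, 1, 1])
mats = characterize_by_cluster(cam, ref, labels)
```

Grouping the training samples by chromatic or chemical cluster, as in the paper, lets each fit work on more homogeneous (homoscedastic) data than a single global fit would.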

Article
Comparing CAM Algorithms for the Identification of Salient Image Features in Iconography Artwork Analysis
J. Imaging 2021, 7(7), 106; https://doi.org/10.3390/jimaging7070106 - 29 Jun 2021
Cited by 2 | Viewed by 1202
Abstract
Iconography studies the visual content of artworks by considering the themes portrayed in them and their representation. Computer Vision has been used to identify iconographic subjects in paintings and Convolutional Neural Networks enabled the effective classification of characters in Christian art paintings. However, it still has to be demonstrated if the classification results obtained by CNNs rely on the same iconographic properties that human experts exploit when studying iconography and if the architecture of a classifier trained on whole artwork images can be exploited to support the much harder task of object detection. A suitable approach for exposing the process of classification by neural models relies on Class Activation Maps, which emphasize the areas of an image contributing the most to the classification. This work compares state-of-the-art algorithms (CAM, Grad-CAM, Grad-CAM++, and Smooth Grad-CAM++) in terms of their capacity to identify the iconographic attributes that determine the classification of characters in Christian art paintings. Quantitative and qualitative analyses show that Grad-CAM, Grad-CAM++, and Smooth Grad-CAM++ have similar performances while CAM has lower efficacy. Smooth Grad-CAM++ isolates multiple disconnected image regions that identify small iconographic symbols well. Grad-CAM produces wider and more contiguous areas that cover large iconographic symbols better. The salient image areas computed by the CAM algorithms have been used to estimate object-level bounding boxes, and a quantitative analysis shows that the boxes estimated with Grad-CAM reach 55% average IoU, 61% GT-known localization, and 31% mAP. The obtained results are a step towards the computer-aided study of the variations of iconographic elements positioning and mutual relations in artworks and open the way to the automatic creation of bounding boxes for training detectors of iconographic symbols in Christian art images. Full article
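The original CAM computation that the compared algorithms build on is simple enough to sketch (illustrative numpy, not the paper's code; the Grad-CAM variants replace the fully connected class weights with gradient-derived ones):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM: class-weighted sum of the last convolutional layer's activations.

    feature_maps: (channels, H, W) activations
    class_weights: (channels,) final-layer weights of the target class
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)                                 # keep positive evidence
    return cam / cam.max() if cam.max() > 0 else cam         # normalise to [0, 1]

# Channel 0 fires at the top-left (say, a small iconographic symbol),
# channel 1 elsewhere; a class that weights only channel 0 yields a map
# highlighting the top-left region.
fmaps = np.zeros((2, 4, 4))
fmaps[0, 0, 0] = 2.0
fmaps[1, 3, 3] = 1.0
heat = class_activation_map(fmaps, np.array([1.0, 0.0]))
```

Thresholding such a map is also how the object-level bounding boxes mentioned above can be estimated from a classifier trained only on whole images.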

Article
Restoration and Enhancement of Historical Stereo Photos
J. Imaging 2021, 7(7), 103; https://doi.org/10.3390/jimaging7070103 - 24 Jun 2021
Cited by 1 | Viewed by 652
Abstract
Restoration of digital visual media acquired from repositories of historical photographic and cinematographic material is of key importance for the preservation, study and transmission of the legacy of past cultures to the coming generations. In this paper, a fully automatic approach to the digital restoration of historical stereo photographs is proposed, referred to as Stacked Median Restoration plus (SMR+). The approach exploits the content redundancy in stereo pairs for detecting and fixing scratches, dust, dirt spots and many other defects in the original images, as well as improving contrast and illumination. This is done by estimating the optical flow between the images, and using it to register one view onto the other both geometrically and photometrically. Restoration is then accomplished in three steps: (1) image fusion according to the stacked median operator, (2) low-resolution detail enhancement by guided supersampling, and (3) iterative visual consistency checking and refinement. Each step implements an original algorithm specifically designed for this work. The restored image is fully consistent with the original content, thus improving over the methods based on image hallucination. Comparative results on three different datasets of historical stereograms show the effectiveness of the proposed approach, and its superiority over single-image denoising and super-resolution methods. Results also show that the performance of the state-of-the-art single-image deep restoration network Bringing Old Photo Back to Life (BOPBtL) can be strongly improved when the input image is pre-processed by SMR+. Full article
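The fusion at the heart of step (1) can be sketched as a pixel-wise median over the registered views (a simplified stand-in; the exact stack composition is an assumption for illustration):

```python
import numpy as np

def stacked_median_fuse(images):
    """Pixel-wise median over a stack of registered, photometrically aligned views."""
    return np.median(np.stack(images).astype(float), axis=0)

# A scratch (zeroed pixel) appears only in the left view; fusing the left
# view, the registered right view, and their average attenuates the defect
# while leaving clean pixels untouched.
left = np.full((3, 3), 100.0)
left[1, 1] = 0.0                      # defect present in one view only
right = np.full((3, 3), 100.0)
fused = stacked_median_fuse([left, right, (left + right) / 2])
```

In the full pipeline, the right view would first be warped onto the left via the estimated optical flow, and the fused result would then be refined by steps (2) and (3).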

Article
Analysis of Diagnostic Images of Artworks and Feature Extraction: Design of a Methodology
J. Imaging 2021, 7(3), 53; https://doi.org/10.3390/jimaging7030053 - 12 Mar 2021
Cited by 4 | Viewed by 937
Abstract
Digital images represent the primary tool for diagnostics and documentation of the state of preservation of artifacts. Today, the interpretive filters that allow one to characterize information and communicate it are extremely subjective. Our research goal is to study a quantitative analysis methodology to facilitate and semi-automate the recognition and polygonization of areas corresponding to the characteristics sought. To this end, several algorithms have been tested that allow for separating the characteristics and creating binary masks to be statistically analyzed and polygonized. Since our methodology aims to offer the conservator-restorer a model to obtain useful graphic documentation in a short time, usable for design and statistical purposes, this process has been implemented in a single Geographic Information Systems (GIS) application. Full article

Article
A Portable Compact System for Laser Speckle Correlation Imaging of Artworks Using Projected Speckle Pattern
J. Imaging 2020, 6(11), 119; https://doi.org/10.3390/jimaging6110119 - 6 Nov 2020
Cited by 2 | Viewed by 1291
Abstract
Artworks have a layered structure subject to alterations caused by various factors. The monitoring of defects at the sub-millimeter scale may be performed by laser interferometric techniques. The aim of this work was to develop a compact system to perform laser speckle imaging in situ for the effective mapping of subsurface defects in paintings. The device was designed to be versatile, with the possibility of optimizing performance through easy parameter adjustment. The system exploits a laser speckle pattern, generated through an optical diffuser and projected onto the artwork, and image correlation techniques for the analysis of the speckle intensity pattern. A protocol for optimal measurement was suggested, based on calibration curves for tuning the mean speckle size in the acquired intensity pattern. The system was validated in the analysis of detachments in an ancient painting model, using a short pulse thermal stimulus to induce a surface deformation field and standard decorrelation algorithms for speckle pattern matching. The device is equipped with a compact thermal camera to prevent any overheating effects during the stimulus phase. The developed system represents a valuable nondestructive tool for artwork diagnostics, allowing the monitoring of subsurface defects in paintings in an out-of-laboratory environment. Full article
