Artificial Intelligence for Digital Humanities (AI4DH)

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (30 June 2021) | Viewed by 27358

Special Issue Editor


Dr. Francesca Fallucchi
Guest Editor
1. Department of Innovation and Information Engineering, Università degli Studi "Guglielmo Marconi", 00193 Roma, Italy
2. Leibniz Institute for Educational Media | Georg Eckert Institute, 38118 Braunschweig, Germany
Interests: digital analysis and digital humanities; machine learning; natural language processing; knowledge discovery and data linking of structured and unstructured open linked data; semantic technology and knowledge management of big data

Special Issue Information

Dear Colleagues,

The aim of this Special Issue on Artificial Intelligence for Digital Humanities (AI4DH) is to gather results from the interdisciplinary area where innovative information technologies—specifically the methodologies, techniques, and tools of Artificial Intelligence—are applied to digital humanities research.

This Special Issue calls for submissions of novel and innovative research results on the development of digital research tools with a clear reference to artificial intelligence for knowledge extraction: the use of natural language processing, the extraction of high-quality knowledge from textual resources, the automatic analysis of visual and multimedia data, and effective information and document retrieval that can be exploited in the fields of digital humanities and cultural heritage.

The Special Issue also invites submissions which concentrate on a well-founded representation of extracted knowledge and automatic reasoning to derive new knowledge; development of resources and applications according to Semantic Web and Linked Data best practices; and, in general, on the integration of metadata and semantic research in the digital humanities domain.

In addition, this Special Issue aims to emphasize the role of humanists in the learning loop, demonstrating the successes and challenges of human interaction in a virtuous circle in which humanists train, tune, and test machine learning models.

The Special Issue thus intends to focus on any aspect of Artificial Intelligence for digital humanities.

Dr. Francesca Fallucchi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial Intelligence/data mining/big data/machine learning/natural language processing
  • Digital humanities
  • Knowledge organization and knowledge management
  • Human-in-the-loop machine learning

Published Papers (5 papers)


Research


13 pages, 1048 KiB  
Article
Teaching an Algorithm How to Catalog a Book
by Ernesto William De Luca, Francesca Fallucchi and Roberto Morelato
Computers 2021, 10(11), 155; https://doi.org/10.3390/computers10110155 - 18 Nov 2021
Cited by 1 | Viewed by 3257
Abstract
This paper presents a study of a strategy for automated cataloging within an OPAC or, more generally, for online bibliographic catalogs. The aim of the analysis is to offer a set of results, when searching library catalogs, that goes beyond the expected one-to-one term correspondence. The goal is to understand how ontological structures can affect query search results. This analysis can also be applied to search functions outside the library context; in this case, however, cataloging relies on predefined rules and non-controlled dictionary terms, which means that the results are meaningful in terms of knowledge organization. The approach was tested on an Edisco database, and we measured the system’s ability to detect whether a new incoming record belonged to a specific set of textbooks.
(This article belongs to the Special Issue Artificial Intelligence for Digital Humanities (AI4DH))

26 pages, 536 KiB  
Article
Dynamic Privacy-Preserving Recommendations on Academic Graph Data
by Erasmo Purificato, Sabine Wehnert and Ernesto William De Luca
Computers 2021, 10(9), 107; https://doi.org/10.3390/computers10090107 - 25 Aug 2021
Cited by 6 | Viewed by 2853
Abstract
In the age of digital information, where the internet, social networks, and personalised systems have become an integral part of everyone’s life, it is often challenging to be aware of the amount of data produced daily and, unfortunately, of the potential risks caused by the indiscriminate sharing of personal data. Recently, attention to privacy has grown thanks to the introduction of specific regulations such as the European GDPR. In some fields, including recommender systems, this has inevitably led to a decrease in the amount of usable data and, occasionally, to significant degradation in performance, mainly because information can no longer be attributed to specific individuals. In this article, we present a dynamic privacy-preserving approach for recommendations in an academic context. We aim to implement a personalised system capable of protecting personal data while at the same time allowing sensible and meaningful use of the available data. The proposed approach introduces several pseudonymisation procedures based on the design goals described by the European Union Agency for Cybersecurity in its guidelines, in order to dynamically transform entities (e.g., persons) and attributes (e.g., authored papers and research interests) in such a way that no user processing the data is able to identify individuals. We present a case study using data from researchers of the Georg Eckert Institute for International Textbook Research (Brunswick, Germany). Building a knowledge graph and exploiting a Neo4j database for data management, we first generate several pseudoN-graphs, i.e., graphs with different rates of pseudonymised persons. Then, we evaluate our approach by leveraging the graph embedding algorithm node2vec to produce recommendations through node relatedness. The recommendations provided by the graphs in the different privacy-preserving scenarios are compared with those provided by the fully non-pseudonymised graph, considered as the baseline of our evaluation. The experimental results show that, despite the structural modifications to the knowledge graph caused by the de-identification processes, the proposed approach preserves significant performance in terms of precision.
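The node-relatedness step that the abstract describes can be sketched in a few lines. Everything below is an illustrative assumption—the node names, the toy vectors, and the cosine/top-k rule; in the article itself the embeddings come from running node2vec over the (pseudonymised) Neo4j knowledge graph.

```python
from math import sqrt

# Toy, precomputed node embeddings. In the article these vectors would be
# produced by node2vec over the knowledge graph; the names and values here
# are purely illustrative.
embeddings = {
    "researcher_A": [0.9, 0.1, 0.3],
    "paper_1":      [0.8, 0.2, 0.4],
    "paper_2":      [0.1, 0.9, 0.2],
    "topic_nlp":    [0.7, 0.3, 0.5],
}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def recommend(node, k=2):
    """Rank all other nodes by embedding relatedness to `node`."""
    query = embeddings[node]
    scored = [(other, cosine(query, vec))
              for other, vec in embeddings.items() if other != node]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]
```

Because only node relatedness is compared, the same ranking procedure can be run unchanged on a pseudonymised graph, which is what makes the baseline comparison in the article possible.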

20 pages, 1855 KiB  
Article
Fine-Grained Cross-Modal Retrieval for Cultural Items with Focal Attention and Hierarchical Encodings
by Shurong Sheng, Katrien Laenen, Luc Van Gool and Marie-Francine Moens
Computers 2021, 10(9), 105; https://doi.org/10.3390/computers10090105 - 25 Aug 2021
Cited by 2 | Viewed by 2042
Abstract
In this paper, we target the tasks of fine-grained image–text alignment and cross-modal retrieval in the cultural heritage domain: (1) given an image fragment of an artwork, we retrieve the noun phrases that describe it; (2) given a noun phrase denoting an artifact attribute, we retrieve the image fragment it specifies. To this end, we propose a weakly supervised alignment model in which the correspondence between the training visual and textual fragments is not known, but fragments referring to the same artwork are treated as a positive pair. The model exploits the latent alignment between fragments across modalities using attention mechanisms, first projecting them into a shared semantic space; the model is then trained by increasing the image–text similarity of the positive pair in that common space. During this process, we encode the inputs of our model with hierarchical encodings and remove irrelevant fragments with different indicator functions. We also study techniques to augment the limited training data with synthetic relevant textual fragments and transformed image fragments. The model is later fine-tuned on a limited set of small-scale image–text fragment pairs. We rank the test image fragments and noun phrases by their intermodal similarity in the learned common space. Extensive experiments demonstrate that our proposed models outperform two state-of-the-art methods adapted to fine-grained cross-modal retrieval of cultural items on two benchmark datasets.

14 pages, 3507 KiB  
Article
Knowledge Graph Embedding-Based Domain Adaptation for Musical Instrument Recognition
by Victoria Eyharabide, Imad Eddine Ibrahim Bekkouch and Nicolae Dragoș Constantin
Computers 2021, 10(8), 94; https://doi.org/10.3390/computers10080094 - 3 Aug 2021
Cited by 9 | Viewed by 3247
Abstract
Convolutional neural networks raised the bar for machine learning and artificial intelligence applications, mainly due to the abundance of data and computation. However, there is not always enough data for training, especially for historical collections of cultural heritage, where the original artworks have been destroyed or damaged over time. Transfer learning and domain adaptation techniques are possible solutions to the issue of data scarcity. This article presents a new method for domain adaptation based on knowledge graph embeddings. A knowledge graph embedding projects a knowledge graph into a lower-dimensional space in which entities and relations are represented as continuous vectors. Our method incorporates these semantic vector spaces as a key ingredient to guide the domain adaptation process. We combined knowledge graph embeddings with visual embeddings from the images and trained a neural network with the combined embeddings as anchors, using an extension of Fisher’s linear discriminant. We evaluated our approach on two cultural heritage datasets of images containing medieval and Renaissance musical instruments. The experimental results showed a significant improvement over the baselines and over state-of-the-art domain adaptation methods.
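The anchor idea in the abstract—pairing each image's visual embedding with the knowledge graph embedding of its entity—can be illustrated with a minimal nearest-anchor sketch. All names, toy vectors, and the squared-distance rule below are assumptions for illustration; the article itself trains a neural network with an extension of Fisher's linear discriminant rather than this simplified mean-anchor classifier.

```python
# Toy per-class samples: (visual embedding, KG embedding) pairs.
# Class labels and vectors are purely illustrative.
samples = {
    "lute": [([1.0, 0.0], [0.2]), ([0.9, 0.1], [0.3])],
    "harp": [([0.0, 1.0], [0.8]), ([0.1, 0.9], [0.7])],
}

def combine(visual_vec, kg_vec):
    """Concatenate a visual embedding with the KG embedding of its entity."""
    return list(visual_vec) + list(kg_vec)

def class_anchors(data):
    """Mean combined embedding per class, used as the class anchor."""
    anchors = {}
    for label, pairs in data.items():
        combined = [combine(v, k) for v, k in pairs]
        dim = len(combined[0])
        anchors[label] = [sum(c[i] for c in combined) / len(combined)
                          for i in range(dim)]
    return anchors

def nearest_anchor(visual_vec, kg_vec, anchors):
    """Assign a new sample to the class with the closest anchor."""
    x = combine(visual_vec, kg_vec)
    def dist(a):
        return sum((xi - ai) ** 2 for xi, ai in zip(x, a))
    return min(anchors, key=lambda label: dist(anchors[label]))
```

The point of the concatenation is that two visually similar instruments can still be pulled apart (or together) by the semantic relations their entities have in the knowledge graph.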

Review


22 pages, 1818 KiB  
Review
Fraud Detection Using the Fraud Triangle Theory and Data Mining Techniques: A Literature Review
by Marco Sánchez-Aguayo, Luis Urquiza-Aguiar and José Estrada-Jiménez
Computers 2021, 10(10), 121; https://doi.org/10.3390/computers10100121 - 30 Sep 2021
Cited by 13 | Viewed by 14352
Abstract
Fraud entails deception in order to obtain illegal gains; it is mainly evidenced within financial institutions and is a matter of general interest. The problem is particularly complex, since perpetrators of fraud may hold any position, from top managers to payroll employees. Fraud detection has traditionally been performed by auditors, who mainly employ manual techniques that can take too long to process fraud-related evidence. Data mining, machine learning and, more recently, deep learning strategies are being used to automate this type of processing. Many related techniques have been developed to analyze, detect, and prevent fraud-related behavior, with the fraud triangle, associated with the classic auditing model, being one of the most important. This work reviews current research on fraud detection that uses the fraud triangle together with machine learning and deep learning techniques. We used the Kitchenham methodology to analyze research on fraud detection from the last decade. This review provides evidence that fraud is an area of active investigation: several works on fraud detection using machine learning techniques were identified, but without evidence that they incorporated the fraud triangle as a method for more efficient analysis.
