Article

Improving Performance of Automatic Keyword Extraction (AKE) Methods Using PoS Tagging and Enhanced Semantic-Awareness

1 Institute of Cyber Security for Society (iCSS) & School of Computing, University of Kent, Canterbury CT2 7NP, UK
2 School of Cyber Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
* Author to whom correspondence should be addressed.
Information 2025, 16(7), 601; https://doi.org/10.3390/info16070601
Submission received: 19 May 2025 / Revised: 5 July 2025 / Accepted: 9 July 2025 / Published: 13 July 2025
(This article belongs to the Special Issue Information Extraction and Language Discourse Processing)

Abstract

Automatic keyword extraction (AKE) has gained importance with the increasing amount of digital textual data that modern computing systems process. It has various applications in information retrieval (IR) and natural language processing (NLP), including text summarisation, topic analysis and document indexing. This paper proposes a simple but effective post-processing-based universal approach to improving the performance of any AKE method, via an enhanced level of semantic-awareness supported by PoS tagging. To demonstrate the performance of the proposed approach, we considered word types retrieved from a PoS tagging step and two representative sources of semantic information—specialised terms defined in one or more context-dependent thesauri, and named entities in Wikipedia. The above three steps can simply be added to the end of any AKE method as part of a post-processor, which re-evaluates all candidate keywords following some context-specific and semantic-aware criteria. For five state-of-the-art (SOTA) AKE methods, our experimental results with 17 selected datasets showed that the proposed approach improved their performance both consistently (up to 100% in terms of improved cases) and significantly (between 10.2% and 53.8%, with an average of 25.8%, in terms of F1-score across all five methods), especially when all three enhancement steps are used. Our results have profound implications, considering that our proposed approach can easily be applied to any AKE method producing the standard output (candidate keywords and scores) and can easily be extended further.

1. Introduction

Keyword extraction (KE), also known as keyphrase or key term extraction, is an information extraction task that aims to identify a number of words/phrases that best summarise the nature or the context of a piece of text. It has several applications in information retrieval (IR) and natural language processing (NLP), including text summarisation, topic analysis, and document indexing [1,2]. Considering the vast amount of text-based documents online in today’s digital society, it is very useful to be able to extract keywords from online documents automatically to support large-scale textual analysis. Therefore, for many years, the research community has been investigating automatic keyword extraction (AKE) methods, especially with the recent advancements in artificial intelligence (AI) and NLP. Despite these efforts, however, AKE has been shown to be a challenging task, and AKE methods with very high performance are still to be found [3]. Two main challenges are the lack of a precise definition of the AKE task and the lack of consistent performance evaluation metrics and benchmarks [1]. Since there is no consensus on the definition and characteristics of a keyword, KE datasets created by researchers have different characteristics. Examples include the minimum/average/maximum numbers of keywords, whether absent keywords (human-labelled keywords that do not appear in the text) are allowed, and which part-of-speech (PoS) tags, such as verbs, are accepted as valid keywords. This makes the performance evaluation and comparison of AKE methods more difficult.
Based on whether a labelled training set is used, AKE methods reported in the literature can be grouped into unsupervised and supervised methods. Unsupervised methods include statistical, graph-based, embedding-based and/or language model-based methods, while supervised ones use either traditional or deep machine learning models [3]. Surprisingly, most AKE methods have either not considered semantic information at all or considered it only insufficiently for aligning the returned keywords with the semantic context of the input document [4].
In this work, to fill the above-mentioned gap, i.e., the absent or insufficient use of semantic information in state-of-the-art (SOTA) AKE methods, we propose a universal performance improvement approach for any AKE method. This approach serves as a post-processor that considers semantic information more explicitly, with the support of PoS tagging. To start with, we conducted an analysis of human-annotated ‘gold standard’ keywords in 17 KE datasets to better understand some relevant characteristics of such keywords. In particular, this analysis focuses on PoS tag patterns, n-gram sizes, and the possible consideration of semantic information by human labellers when extracting keywords.
Our proposed approach is demonstrated using the following three post-processing steps, which can be freely combined: (1) keeping only candidate keywords with a desired PoS tag; (2) matching candidate keywords against one or more context-specific thesauri containing semantically relevant terms; and (3) prioritising candidate keywords that appear as a valid Wikipedia named entity. We applied different combinations of the above three post-processing steps to five SOTA AKE methods, YAKE! [5], KP-Miner [6], RaKUn [7], LexRank [8], and SIFRank+ [9], and compared the performances of the original methods with those of the enhanced versions. The experimental results with the 17 KE datasets showed that our proposed post-processing steps helped improve the performances of all five SOTA AKE methods both consistently (up to 100% in terms of improved cases) and significantly (between 10.2% and 53.8%, with an average of 25.8%, in terms of F1-score across all five methods), particularly when all three steps are combined. Our work validates the possibility of using easy-to-apply post-processing steps to enhance the semantic awareness of AKE methods and to improve their performance in real-world applications, which, to the best of our knowledge, has not been reported before. The main contributions of this paper are as follows:
  • We propose a modular and universal post-processing pipeline that enhances existing AKE methods using part-of-speech filtering and external knowledge sources.
  • We provide a comprehensive analysis of 17 AKE datasets to empirically justify our design choices.
  • We conduct extensive experiments to demonstrate that the proposed pipeline improves the performance of multiple state-of-the-art AKE methods across diverse evaluation settings.
The rest of the paper is organised as follows. Section 2 briefly surveys AKE methods in the literature. The analysis of the human-annotated keywords in 17 KE datasets is given in Section 3. In Section 4, we present the methodology of our study. Section 5 explains the experimental setup for evaluation as well as the results. Finally, the paper is concluded with some further discussions in Section 6, and an overall summary in Section 7.

2. Related Work

2.1. Unsupervised AKE Methods

A variety of unsupervised AKE methods have been proposed, including statistical, graph-based, embedding-based and language model-based methods [3]. Statistical AKE methods rely on selected statistical metrics, e.g., term frequency, relevance to context, and co-occurrences, for ranking candidate keywords. One of the most widely used metrics is TF-IDF [10], which combines two aspects of a term: its frequency within the input article, and its inverse document frequency across a reference corpus of documents. One AKE method using TF-IDF is KP-Miner [6], which also considers other metrics such as word length and word position. A more recent method in this category is YAKE! [5]. It leverages a range of statistical metrics, such as casing, word position, word frequency, word relatedness to context, and how often a term appears in different sentences. Finally, LexSpec [8] makes use of lexical specificity, a statistical metric based on the hypergeometric distribution, to select the most representative keywords from a given text. While statistical AKE methods are easy to compute and language-independent, they mostly fail to capture contextual or semantic significance, leading to poor performance on nuanced texts.
Graph-based AKE methods consider candidate keywords as nodes in a directed graph, often with weighted edges reflecting the syntactic/semantic relatedness of different keywords. They leverage graph-based methods, such as PageRank [11], for ranking the nodes of the graph in terms of their overall importance. The earliest AKE method in this category is TextRank [12]. It uses an unweighted graph of candidate keywords after filtering out those that are not nouns or adjectives, and uses PageRank for ranking the nodes. As an extension to TextRank, SingleRank [13] adds edge weights to the graph, reflecting the number of co-occurrences of the candidate keywords represented by each pair of connected nodes. Another graph-based AKE method is RAKE [14], which builds a word-word co-occurrence graph and assigns a score to each candidate using word frequency and word degree. A more recent graph-based AKE method is RaKUn [7], which introduces meta-vertices by aggregating similar vertices and employs load centrality metrics for candidate ranking. Finally, LexRank [8] and TFIDFRank [8] are two different enhanced versions of SingleRank, which use lexical specificity and TF-IDF, respectively. Graph-based AKE approaches improve upon statistical approaches by modelling term connectivity, yet still depend heavily on co-occurrence patterns.
Embedding-based AKE methods utilise word representation techniques, such as word2vec [15] and GloVe [16]. An example method in this category is EmbedRank [17], which uses sentence embeddings and ranks candidate keywords in terms of cosine similarity. A more recent method is SIFRank [9], which combines the sentence embedding model SIF with the autoregressive pre-trained language model ELMo; it was later upgraded to SIFRank+ by adding position-biased weights to improve its performance on long documents. Lastly, MDERank [18] considers the similarity between the embeddings of the source document and its masked version for candidate ranking. Embedding-based AKE methods can better capture context and semantics, but often come with a higher computational cost. Furthermore, many embedding-based approaches depend on pre-trained language models, which may require fine-tuning or adaptation to specific domains. Their reliance on large-scale models also reduces interpretability and makes integration into lightweight systems more challenging.
Apart from the AKE methods mentioned above, there also exist a number of AKE methods based on other techniques. Rabby et al. [19] proposed TeKET, a domain- and language-independent AKE technique utilising a binary tree for extracting final keywords from candidate ones. As another example, Liu et al. [20] introduced an AKE algorithm based on term clustering considering semantic relatedness to identify the exemplar terms. The identified exemplar terms are then used to extract keywords.

2.2. Supervised AKE Methods

Although unsupervised methods are preferred for AKE, supervised methods have also been proposed. One of the earliest methods is KEA [21], which calculates TF-IDF scores and the position of the first occurrence of each candidate, and employs the Naive Bayes learning algorithm to decide if a candidate should be selected. More recently, there has been a growing interest in using deep learning for AKE. For example, Basaldella et al. [22] proposed an AKE method based on Bi-LSTM, which is capable of exploiting the context of each candidate word. Another AKE method, TNT-KID [23], leverages transformers and allows users to train their own language model on a domain-specific corpus. A third example is TANN [24], an AKE method based on a topic-based artificial neural network model. It aims to improve the performance of AKE by transferring knowledge from a resource-rich source domain to an unlabelled or insufficiently labelled target domain. Finally, Bordoloi et al. [25] proposed a supervised variant of TextRank, leveraging a statistical supervised weighting scheme for terms to employ both global and local weights during keyword extraction. Supervised AKE methods can be promising when trained on large annotated corpora. However, their reliance on labelled data limits their applicability in low-resource domains or languages, and their generalisation to unseen domains can be inconsistent. Additionally, such methods tend to require significant computational resources during both training and inference.

2.3. PoS Tagging and Semantics in AKE

Many AKE methods have considered how to extract more semantically meaningful keywords. For this purpose, PoS tagging has been used so that extracted keywords are restricted to a pre-defined set of PoS tag patterns, e.g., noun phrases only [26,27,28]. Some methods utilise external knowledge to provide useful contextual information for extracting more semantically sensible keywords. For instance, Li and Wang [29] proposed a TextRank-based AKE method that benefits from domain knowledge by using author-assigned keywords of scientific publications, and Gazendam et al. [30] proposed to use semantic relations between thesaurus terms for ranking candidate keywords without a reference corpus. Thesaurus relations have also been combined with machine learning techniques to improve the performance of AKE methods [31,32]. More recently, Sheoran et al. [33] leveraged domain-specific ontologies for aspect assignment of candidate keywords extracted from opinionated texts so that the selected candidates cover a maximum number of aspects.
Some AKE methods also make use of Wikipedia, a useful source of semantic information. Shi et al. [34] utilised Wikipedia to extract semantic features of candidate keywords. Their method constructs a semantic graph connecting candidate keywords to document topics based on the hierarchical relations extracted from Wikipedia, and semantic feature weights are assigned to candidate keywords with a link analysis algorithm. WikiRank is another AKE method leveraging Wikipedia [35]. It employs the TAGME annotator [36] to link meaningful word sequences in the input document to concepts in Wikipedia and constructs a semantic graph. Then, it transforms the KE task to an optimisation problem on the graph and tries to obtain the optimal keyword set that has the best coverage of the identified concepts. Finally, several embedding-based AKE methods utilise Wikipedia for pre-training and/or fine-tuning their underlying embedding methods [17,37].
Compared with existing AKE methods that have considered PoS tagging or semantic information more explicitly, our proposed approach is more universal and can be applied to any AKE method as a post-processor, which simply re-evaluates candidate keywords generated by an AKE method before the top n keywords are returned. Our approach is easily generalisable and can be used flexibly to eliminate candidate keywords that are unlikely to be keywords and prioritise those that are more likely to be keywords. Furthermore, our approach addresses the limitations of existing AKE approaches regarding semantic-awareness, mentioned in the previous subsections, without adding significant computational cost.

3. Analysis of Human-Annotated Keywords

Our proposed approach was motivated by some of our observations regarding how human labellers extracted “golden” (i.e., ground truth) keywords in 17 KE datasets. Such observations also helped us to determine some specific details, such as the parameters used in our proposed approach. In the following, we describe the 17 datasets we used and the key observations.

3.1. Datasets Inspected

Given the subjectivity of the keyword extraction task, no standard approach has been established for constructing keyword extraction datasets [38]. As a result, the datasets constructed so far are extremely diverse, which makes comprehensive testing of keyword extraction algorithms harder. Therefore, to achieve a better understanding of human-annotated keywords, we aimed to collect a wide range of representative KE datasets used in the literature. To this end, we searched multiple research databases, including Google Scholar and Scopus, for research papers corresponding to SOTA AKE methods and relevant surveys, using the keywords “automatic keyword extraction” and “automatic keyphrase extraction”. Then, the collected papers were reviewed to identify datasets used by other researchers, and the publicly available datasets were downloaded. Multiple collections of AKE datasets were found through different GitHub repositories (examples include https://github.com/LIAAD/KeywordExtractor-Datasets, https://github.com/boudinfl/ake-datasets, and https://github.com/SDuari/Keyword-Extraction-Datasets, all accessed on 8 July 2025). In total, we were able to collect 17 datasets covering multiple contexts, including agriculture, computer science and health, and several types of documents, such as scientific papers, news, theses and abstracts. However, we excluded datasets containing short-text documents, such as tweets, since they tend to contain fewer candidate keywords, which could negatively impact the informativeness of our analysis. In addition, we only selected English datasets, limiting the scope of our study to the English language due to the lack of a sufficient number of non-English datasets and English being the only language shared by the authors of this paper. Further details regarding the datasets can be seen in Table 1.

3.2. Observations: PoS Tag Patterns

There has been a lot of research on the linguistic properties of different multi-word expression types, such as collocations [51] and technical terms [52]. In addition, various PoS tag patterns have been proposed in the literature to identify noun phrases, which have commonly been considered a major indicator of keyword candidates [53]. However, these are unable to properly explain the linguistic properties of keywords used in AKE research because of the lack of linguistic standards for human-annotated keywords. Therefore, we first reviewed the structure of human-annotated keywords in the 17 datasets in terms of the PoS tag patterns used. For this purpose, we used the NLTK [54] library’s PoS tagger and computed the distribution of different PoS tag patterns. As shown in Table 2, nine of the top ten PoS tag patterns correspond to either noun or gerund phrases. The only non-noun/gerund pattern in the top ten is a single adjective (JJ), with an average percentage of 6.85%. The top ten PoS tag patterns account for 80% of all patterns. These observations imply that leveraging knowledge about how human labellers define keywords based on PoS tag patterns for a specific domain can potentially help improve the performance of any AKE method for that domain.
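As an illustration, the distribution can be computed with NLTK’s PoS tagger as in the following minimal sketch; the three-keyword list is a hypothetical stand-in for the gold keywords of a real dataset.

```python
from collections import Counter

import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def pos_pattern(keyword: str) -> str:
    """Map a keyword to its PoS tag pattern, e.g. 'machine learning' -> 'NN NN'."""
    tokens = nltk.word_tokenize(keyword)
    return " ".join(tag for _, tag in nltk.pos_tag(tokens))

def pattern_distribution(golden_keywords):
    """Relative frequency of each PoS tag pattern across a list of gold keywords."""
    counts = Counter(pos_pattern(kw) for kw in golden_keywords)
    total = sum(counts.values())
    return {pattern: count / total for pattern, count in counts.most_common()}

# Hypothetical example input; the real inputs are the gold keywords of each dataset.
print(pattern_distribution(["machine learning", "neural network", "robust"]))
```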

3.3. Observations: n-Gram Size

AKE methods generally include a parameter for the maximum n-gram size, corresponding to the maximum number of words a keyword is allowed to contain. Although it is well known that multi-word expressions (MWEs) are more likely to be of length two to three in English [55], it is less clear how human labellers of the 17 datasets were instructed to consider the n-gram size. Therefore, we analysed the golden keywords across the 17 datasets to see how human labellers decided on the n-gram sizes. On average, bigrams (n = 2) constitute 45.55% of the golden keywords in the 17 datasets, while this rate is 36.45% for unigrams (n = 1) and 12.73% for trigrams (n = 3). In addition, the percentages for keywords with n ≥ 4 are considerably low—5.12% on average. More detailed statistics can be seen in Table 3. These results show that human labellers largely used two or three as the maximum n-gram size, covering 82.01% and 94.74% of the golden keywords across the different datasets, respectively. The results are aligned with those in the research literature on MWEs. Based on these observations, we can see that AKE methods could benefit from focusing more on keywords with a shorter word length.

3.4. Observations: Semantic Information

Finally, we analysed the human-annotated keywords to see if human labellers explicitly or implicitly relied on semantic information to select keywords. We first calculated the percentage of golden keywords that are covered by Wikipedia across all the datasets. This quantitative analysis indicated that, on average, 64.39% of the golden keywords are Wikipedia named entities, i.e., titles of Wikipedia articles. This interesting (previously unreported) finding justifies the use of Wikipedia as a knowledge base for AKE algorithms, as it covers a large share of the golden keywords chosen by human labellers across all 17 datasets we chose. Although unexpected, this finding can be explained by the diversity and richness of the content of Wikipedia. More detailed results of the analysis can be seen in Table 4.
In addition to Wikipedia named entities, we also manually inspected many golden keywords and observed that many collected datasets contain domain-specific golden keywords. This observation indicates that considering domain-specific terms can potentially help improve the performance of AKE methods, too.

4. Methodology

4.1. Problem Definition and Our Proposed Approach

Suppose $W_C(D) = \{w_i^C\}_{i=1}^{m}$ denotes the $m$ candidate keywords generated from a document $D$ by an AKE method. In addition, let $W_S(D) \subseteq W_C(D)$ denote the $n \leq m$ keywords produced by the AKE method. Finally, let $W(D) = \{w_i\}_{i=1}^{t}$ be the set of ground truth keywords an ideal AKE method should extract from $D$. Given the above notations, our goal is to find post-processing methods that can minimise $|W(D) \setminus W_S(D)|$ (false negatives) and $|W_S(D) \setminus W(D)|$ (false positives). Of the two types of errors, reducing false negatives is more important than reducing false positives; however, since $n$ cannot be too large if the results are to remain manageable, balancing both types of errors is still very important. Typically, AKE methods select keywords by assigning a numerical score $s_i$ to each candidate keyword $w_i$, and then returning the top $n$ keywords with the highest scores (although some AKE methods, e.g., YAKE!, use smaller scores for better keywords, here, for the sake of simplicity, we assume that a higher score means a more preferred keyword). Our proposed approach can work with any AKE method with such a scoring system, and it aims to re-adjust the scores so that true positive keywords’ scores will more likely increase and true negative keywords’ scores will more likely decrease.
Informed by the findings presented in Section 3, our proposed approach is based on three general post-processing steps that can be applied to any baseline AKE method, as shown in Figure 1: (1) removing candidate keywords with an unlikely PoS tag pattern by zeroing their scores ($s_i = 0$); (2) using one or more context-aware (i.e., domain-specific) thesauri to prioritise important candidate keywords for the target domain ($s_i \leftarrow c_i \cdot s_i$, where $c_i$ is an amplifying factor larger than 1); and (3) prioritising candidate keywords that are Wikipedia named entities ($s_i \leftarrow w_i \cdot s_i$, where $w_i$ is another amplifying factor larger than 1). Note that the amplifying factors $c_i$ and $w_i$ can be a static value for all prioritised keywords (i.e., independent of $i$) or a keyword-dependent factor based on an importance score of each candidate keyword in the thesauri and Wikipedia, e.g., $c_i$ can be proportional to the word frequency in the thesauri and $w_i$ can be proportional to the size of the Wikipedia entry or the number of references to the entry.
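To make the score re-adjustment concrete, the following minimal sketch chains the three steps as a post-processor over the standard output of any AKE method. The predicate names, the static amplifying factors, and the default of returning the top ten keywords are illustrative assumptions, not our exact implementation.

```python
def post_process(candidates, is_valid_pattern, in_thesaurus, is_wiki_entity,
                 c=2.0, w=2.0, n=10):
    """Re-score candidate keywords produced by any AKE method and return the top n.

    candidates: list of (keyword, score) pairs, where a higher score means a
    more preferred keyword. The three predicates implement the post-processing
    steps described above; their names and signatures are illustrative only.
    """
    rescored = []
    for keyword, score in candidates:
        if not is_valid_pattern(keyword):   # step 1: PoS tag pattern filtering
            score = 0.0
        if in_thesaurus(keyword):           # step 2: context-aware thesaurus boost
            score *= c
        if is_wiki_entity(keyword):         # step 3: Wikipedia named-entity boost
            score *= w
        rescored.append((keyword, score))
    rescored.sort(key=lambda pair: pair[1], reverse=True)
    return rescored[:n]
```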
In the following subsections, we explain the three steps in more detail.

4.2. Filtering Specific PoS Tag Patterns

As mentioned in Section 2.3, PoS tagging has been extensively used in AKE methods to consider morpho-syntactic features. Motivated by the observations in Section 3.2, we attempted to leverage a PoS tagger to filter out candidate keywords labelled with unlikely PoS tag patterns. More precisely, candidate keywords that do not conform with any of the following PoS tag patterns were discarded: (i) simple nouns and noun phrases—one or more nouns/gerunds (optionally with one or more adjectives appearing before the first noun); (ii) two or more simple nouns and/or noun phrases connected by one or more prepositions or conjunctions (Examples include “quality of service” and “buyer and seller” from the SemEval2010 dataset. Although none of the possible PoS tag patterns conforming to this criterion are among the most common patterns presented in Table 2 individually, they collectively constitute 1.3% of all patterns across the 17 datasets.); and (iii) a single adjective.
In the PoS tag patterns mentioned above, nouns and adjectives mean any PoS tags that can provide the corresponding functionality in a sentence. Therefore, nouns also include gerunds, and adjectives also include past participle verbs. Considering the most common PoS tag patterns mentioned in Section 3.2, our proposed PoS tag patterns correspond to over 90% of the patterns observed across all the 17 datasets. We used NLTK to extract PoS tags for each term in the input documents. For pattern matching, we took advantage of regular expressions. Since regular expressions return the longest possible matches, we extracted the shorter matches from the longest ones separately.
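The following sketch illustrates this tag-and-match procedure; the regular expression is a simplified approximation of patterns (i)-(iii) above over the Penn Treebank tagset, not the exact pattern set used in our implementation.

```python
import re

import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

# Simplified approximation over Penn Treebank tags: optional adjectives followed
# by one or more nouns/gerunds, such phrases optionally joined by prepositions
# or conjunctions, or a single adjective.
NP = r"(JJ[RS]? )*(NN[A-Z]*|VBG)( (NN[A-Z]*|VBG))*"
VALID = re.compile(rf"^({NP}( (IN|CC) {NP})*|JJ)$")

def is_valid_pattern(keyword: str) -> bool:
    """Return True if the keyword's PoS tag sequence matches an accepted pattern."""
    tags = " ".join(tag for _, tag in nltk.pos_tag(nltk.word_tokenize(keyword)))
    return VALID.match(tags) is not None

print(is_valid_pattern("quality of service"))  # NN IN NN -> expected True
print(is_valid_pattern("very quickly"))        # RB RB    -> expected False
```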
Note that the proposed PoS tag patterns can be further changed to reflect any domain-specific needs, e.g., we observed that gerunds are quite uncommon in the health domain, so they can be removed if preferred.

4.3. Context-Aware Thesauri

Context means any kind of domain, topic or field that has its own set of terms semantically specific to itself. While the set of terms specific to a context can be covered by a more structured vocabulary, such as a thesaurus or an ontology, a simple word list can often be sufficient for the purpose of AKE. As reported in Section 3.4, many keywords are related to the context of the input text, and contextual consideration can be quite useful for AKE. Therefore, we propose to make use of external resources to inform AKE methods more about semantically useful keywords for the relevant domain. More specifically, we propose to integrate one or more domain-specific thesauri, which contain terms specific to a target context, and to prioritise candidate keywords included in such thesauri. At the implementation level, we introduce a weight for each candidate keyword and increase the weight of any candidate keyword appearing in one of the thesauri. In our experiments, we doubled the weights of candidate keywords appearing in a thesaurus. However, the actual weight increase can be a parameter that is empirically determined based on some training data or qualitative evidence observed. To determine if a candidate keyword exists in a given thesaurus, we applied exact matching with lemmatisation (a minimal sketch of this matching step is given after the two integration approaches below). Although using stemming with exact matching is a more common practice in AKE [3], we preferred lemmatisation due to its context-awareness. In our experiments, we focused on thesauri with a single context, but using multiple contexts in a single thesaurus is of course also possible. Regarding the integration of relevant thesauri, we considered the two different approaches explained below.
  • Manual Context Consideration:
This approach is more useful when documents processed by an AKE method are known to belong to a specific context. It utilises one or more thesauri containing a list of terms relevant to the context, which are given a higher weight for prioritisation by the AKE method. In our experiments, we assigned a single domain-specific thesaurus to each of the datasets to represent the relevant context. Note that it is possible that multiple contexts and multiple thesauri are used in some applications of AKE.
  • Automatic Context Identification:
Considering the wide range of applications in which AKE methods can be utilised, manually providing a thesaurus for each input document may not be very usable. Therefore, we also studied how to identify the context of an input document automatically, which can allow assigning a different context and a corresponding thesaurus automatically. This can be achieved by building a machine learning-based classifier, which produces a class label representing the context or a context-specific thesaurus of a given document or its abstract.
Once the classifier predicts the context of an input abstract, we identify a thesaurus corresponding to the context, as defined in a context-to-thesaurus look-up table, to inform the AKE method. Unlike the manual approach, automatic identification allows us to use a different thesaurus for each document in the dataset; therefore, it can be applied to many real-world scenarios where the documents processed can belong to multiple contexts.
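The sketch below illustrates the lemmatisation-based exact matching described earlier in this subsection; the two-term word list is a toy stand-in for a real resource such as AGROVOC, MeSH, CSO, or STW, whose loading is outside the scope of the example.

```python
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download("wordnet", quiet=True)

lemmatizer = WordNetLemmatizer()

def normalise(term: str) -> str:
    """Lower-case and lemmatise a term, word by word."""
    return " ".join(lemmatizer.lemmatize(w) for w in term.lower().split())

def make_thesaurus_check(terms):
    """Build an exact-matching predicate from a plain list of thesaurus terms."""
    normalised = {normalise(t) for t in terms}
    return lambda keyword: normalise(keyword) in normalised

# Toy thesaurus; in practice the terms come from a context-specific resource.
in_thesaurus = make_thesaurus_check(["neural networks", "support vector machine"])
print(in_thesaurus("neural network"))  # True: 'networks' lemmatises to 'network'
```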

4.4. Wikipedia Named Entities

Based on the Wikipedia-related observations reported in Section 3.4, we propose to use Wikipedia as a context-independent thesaurus to improve the performance of any AKE method working in any context(s). Similar to how a thesaurus is used, we prioritise candidate keywords covered by Wikipedia as an entry by increasing their weight, applying exact matching with lemmatisation to identify whether a candidate keyword is a Wikipedia named entity. Since Wikipedia also contains a vast number of entries with overly general semantic meanings, e.g., unigrams such as ‘father’, ‘school’, and ‘table’ that are normally already well covered by most AKE methods, we utilised the NLTK words corpus (i.e., a wordlist including common English dictionary words) to identify such unigrams and remove them from the set of Wikipedia entities to be prioritised in our post-processing step. For the Wikipedia named entities, we used the 2021-10-01 version of the English Wikipedia dump (https://archive.org/download/enwiki-20211001, accessed on 8 July 2025), containing only page titles. We first cleaned the dump data by removing the disambiguation tags (https://en.wikipedia.org/wiki/Wikipedia:Disambiguation#Naming_the_disambiguation_page, accessed on 8 July 2025) that Wikipedia adds next to titles. Then, we normalised the data with lemmatisation and lower-casing, following common practice.
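A minimal sketch of this title preparation is given below; the regular expression for disambiguation tags and the unigram filter are simplified approximations of the cleaning procedure described above.

```python
import re

import nltk
from nltk.corpus import words
from nltk.stem import WordNetLemmatizer

nltk.download("words", quiet=True)
nltk.download("wordnet", quiet=True)

lemmatizer = WordNetLemmatizer()
COMMON_WORDS = {w.lower() for w in words.words()}

def normalise(term: str) -> str:
    """Lower-case and lemmatise a term, as in the thesaurus matching step."""
    return " ".join(lemmatizer.lemmatize(w) for w in term.lower().split())

def clean_title(title: str) -> str:
    """Strip a trailing disambiguation tag, e.g. 'Mercury (element)' -> 'Mercury'."""
    return re.sub(r"\s*\([^)]*\)$", "", title).strip()

def build_wiki_entities(titles):
    """Normalised page titles, excluding overly general dictionary unigrams."""
    entities = set()
    for title in titles:
        entity = normalise(clean_title(title))
        if " " not in entity and entity in COMMON_WORDS:
            continue  # drop general unigrams such as 'father' or 'table'
        entities.add(entity)
    return entities

wiki = build_wiki_entities(["Support vector machine", "Mercury (element)", "Table"])
print("support vector machine" in wiki)  # True: multi-word titles are kept
```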

5. Experiments and Results

5.1. Evaluation Metrics

As the evaluation metrics, we used precision, recall and F1 score at the top ten keywords, which have been commonly used in AKE evaluation [3]. Furthermore, we adopted micro-averaging and exact matching with stemming when calculating the scores.
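For concreteness, the following sketch shows one way to compute these micro-averaged scores at k = 10 using stemmed exact matching; the Porter stemmer and the list-of-lists input format are assumptions made for illustration.

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def stem_phrase(phrase: str) -> str:
    """Lower-case and stem a phrase word by word for exact matching."""
    return " ".join(stemmer.stem(w) for w in phrase.lower().split())

def micro_prf_at_k(predictions, references, k=10):
    """Micro-averaged precision/recall/F1 at k with stemmed exact matching.

    predictions: per-document ranked keyword lists;
    references: per-document gold keyword collections.
    """
    tp = fp = fn = 0
    for predicted, gold in zip(predictions, references):
        pred_set = {stem_phrase(p) for p in predicted[:k]}
        gold_set = {stem_phrase(g) for g in gold}
        tp += len(pred_set & gold_set)
        fp += len(pred_set - gold_set)
        fn += len(gold_set - pred_set)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```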

5.2. Selecting Baseline Methods

To show the effectiveness and generalisability of the proposed methods, we first attempted to identify some representative AKE algorithms with different key characteristics for our experiments. We reviewed existing AKE algorithms in terms of multiple aspects, e.g., recency, ease of reconfiguration, and whether they already use one or more of our proposed methods by any means, as shown in Table 5. These methods are considered more representative because they have open-source implementations, are applicable to any document type, were validated on a number of datasets, and do not require training, i.e., they are unsupervised and therefore easier to use and less likely to have generalisation problems (unsupervised AKE methods have become more popular for this reason, and most implementations of supervised methods are also harder to reconfigure). Among these methods, we selected two statistical methods, i.e., KP-Miner and YAKE!, two graph-based methods, i.e., RaKUn and LexRank, and an embedding-based method, i.e., SIFRank+, as baseline methods for our experiments. Since SIFRank+ is very computationally costly, we used only seven of the datasets, containing shorter documents (i.e., KPCrowd, DUC-2001, Inspec, KDD, KPTimes, SemEval2017 and WWW), for its evaluation when our methods were applied.
For the implementations of the selected AKE methods, we utilised the PKE [56] library for KP-Miner and the original implementations of the other four. We used the default parameters for all the methods, except for the maximum n-gram size parameter. Since the n-gram size across the datasets is mostly limited to 3, as mentioned in Section 3.3, we set the maximum n-gram size to 3.

5.3. PoS Tag Patterns

As the first step, we applied our PoS tagging-based post-processing approach to the selected AKE methods and evaluated them on all the datasets. The results show that the proposed approach improved all the methods except SIFRank+ on average, in terms of precision, recall, and F1 score. While KP-Miner achieved better performance on 14 of the 17 datasets, with an average improvement of 6.08% in F1 score, RaKUn was improved by a cross-dataset average of 4.46% on 14 of the 17 datasets. The largest change was observed in the performance of YAKE!: it was improved on 16 of the 17 datasets by a cross-dataset average of 18.05%. We believe this is because YAKE!, as a more language-independent (multilingual) approach, does not otherwise benefit from linguistic features. Finally, we observed a limited improvement in the scores of LexRank (0.84% on average) on 12 of the 17 datasets, and a slight decrease in the performance of SIFRank+, which is likely due to the fact that these two methods already use PoS tagging-based filtering. The scores obtained for YAKE! and SIFRank+ are shown in Table 6 and Table 7, respectively, as examples. These results provide new evidence for the effectiveness of PoS tagging in AKE algorithms and imply that there is still room to improve the use of PoS tagging in many AKE methods.
Finally, we studied how tailoring the selected PoS tag patterns according to domain-specific needs may affect the performance of AKE methods. To this end, we considered the example given in Section 4.2, i.e., the observation that gerunds are rarely seen as keywords in the health domain. We selected the health datasets (i.e., PubMed and Schutz2008) from our collection and applied the tailored PoS tag-based filtering that disregards gerunds. For this experiment, we used YAKE!, since, as a language-independent algorithm, it is more sensitive to linguistic-based improvements. As shown in Table 8, the tailored filtering approach provided some small improvements over our original filtering proposal in terms of precision, recall, and F1 score. The limited improvement is likely due to the small percentage of gerunds among candidate keywords.

5.4. Context-Aware Thesauri

For this step, we selected the 10 datasets mentioned in Section 3.1 that have a particular context. The included contexts (and datasets) are agriculture (fao30 and fao780), health (PubMed) and computer science (Inspec, Krapivin2009, Nguyen2007, SemEval2010, KDD, Wiki20 and WWW). In addition, we constructed another context-specific dataset, KPTimes-Econ, comprising 3258 economy-related news articles extracted from the KPTimes dataset. To extract economy-related news articles, we looked for records containing the term “economy” in the keyword and/or categories field(s). For these 11 datasets, we collected a thesaurus (or a similar resource, e.g., a dictionary, ontology, or wordlist) for each context. More specifically, we used the following thesauri: (i) AGROVOC 2021-07 [57]—a multilingual controlled vocabulary constructed by the Food and Agriculture Organization of the United Nations (FAO), with 844,000 agriculture-related terms including 50,163 English ones; (ii) Medical Subject Headings (MeSH) 2021 [58]—a thesaurus covering biomedical and health-related terms produced by the National Library of Medicine (NLM), with over 1.4 million terms in English; (iii) Computer Science Ontology (CSO) v3.3 [59]—a large-scale computer science ontology automatically produced by the Klink-2 [60] algorithm from 16 million computer science publications, with 14,000 terms; and (iv) STW v9.10 [61]—a bilingual thesaurus (in English and German) for economics produced by the Leibniz Information Center for Economics (ZBW), with over 20,000 terms including 6217 English ones.
For the initial step, aiming to experiment with manual integration, we fed each of the baseline methods with each of the datasets and its corresponding thesaurus, depending on the context. As in the previous experiment, SIFRank+ was evaluated only on the datasets with shorter documents, i.e., Inspec, KDD, WWW, and KPTimes-Econ in this case. The experiments showed that the manual integration of context-aware thesauri significantly improved all five AKE methods in terms of precision, recall, and F1 score for all the datasets. The improvement in F1 score was observed to be 29.03%, 23.88%, 12.85%, 13.19%, and 7.09% for RaKUn, LexRank, YAKE!, KP-Miner, and SIFRank+, respectively. Table 9 and Table 10 show more detailed results of the experiment for LexRank and SIFRank+, respectively. The results of this experiment produced solid evidence of the effectiveness of using context-aware thesauri to improve the performance of AKE methods.
For the next step, we experimented with the automated thesauri integration process. In our experiments, especially for datasets covering mainly scientific papers, we built a classifier for classifying a given article’s title and abstract into the main discipline the article belongs to. The classifier was trained on samples extracted from the arXiv.org dataset (https://www.kaggle.com/Cornell-University/arxiv, accessed on 8 July 2025) containing metadata of over 1.7M preprints in multiple disciplines. Before the training process, we filtered the arXiv.org dataset by the main discipline reflected by its categories field so as to include the following three disciplines: (1) cs (Computer Science, e.g., cs.AI), (2) bio (Biology, e.g., q-bio), and (3) fin (Finance, e.g., q-fin.CP) and econ (Economics, e.g., econ.EM). After this filtering process, we obtained a dataset of 583,796 samples (551,443 computer science, 20,110 biology, and 12,243 finance/economics samples). Since the resulting dataset is highly imbalanced, we applied random downsampling to equate the number of samples from each discipline to the size of the smallest class, 12,243, which made the final size of our training set 36,729. In our classifier, we utilised the TF-IDF vectoriser for feature extraction. We chose to use the calibrated linear support vector classifier (SVC) with the default parameters and the one-vs-rest setting, rather than a multi-class classification method or more advanced feature extraction methods such as BERT, to show that even a lightweight classifier is sufficient for the task of automatic context detection. The classifier was evaluated with a stratified 5-fold cross-validation. The testing accuracies (i.e., the fraction of the number of correct predictions with respect to the total number of predictions) of computer science, biology and finance/economics models were 93.2%, 94.9%, and 97.0%, respectively. The classifier can also be extended to support multiple contexts for a single article, although in our experiments, we considered the case of a single context per article for the sake of simplicity and clarity. We used the Scikit-learn library [62] to implement all of the mentioned components.
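The following sketch shows one possible composition of such a classifier with Scikit-learn, together with the context-to-thesaurus look-up; the six-document toy corpus stands in for the 36,729 balanced arXiv abstracts, and nesting the calibrated SVC inside a one-vs-rest wrapper is one of several equivalent arrangements.

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy corpus standing in for the balanced arXiv training set.
texts = [
    "deep learning for image recognition",
    "graph algorithms for network routing",
    "gene expression in cancer cells",
    "protein folding dynamics in the cell",
    "stock market volatility forecasting",
    "monetary policy and inflation expectations",
]
labels = ["cs", "cs", "bio", "bio", "fin", "fin"]

# One-vs-rest calibrated linear SVC over TF-IDF features; cv=2 only because
# the toy corpus is tiny (the experiments used stratified 5-fold cross-validation).
clf = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(CalibratedClassifierCV(LinearSVC(), cv=2)),
)
clf.fit(texts, labels)

# Map the predicted context to a thesaurus via a look-up table.
CONTEXT_TO_THESAURUS = {"cs": "CSO", "bio": "MeSH", "fin": "STW"}
context = clf.predict(["transformer models for code generation"])[0]
print(CONTEXT_TO_THESAURUS[context])
```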
Since the training set of the classifier does not cover agriculture preprints, and we were unable to find a proper agriculture dataset for training, we excluded the agriculture context and the corresponding datasets, fao30 and fao780, from this part of the experiments. The results of the experiments performed with our classifier indicated that the automatic thesaurus integration approach performed almost as well as the manual integration approach, with only a negligible performance decrease. More precisely, the F1 score was improved by an average of 23.23%, 18.07%, 9.60%, 11.27%, and 5.92% for RaKUn, LexRank, YAKE!, KP-Miner, and SIFRank+, respectively, compared to the baseline scores. Table 9 and Table 10 show more detailed results of the experiment for LexRank and SIFRank+, respectively. The obtained results imply that automatic integration can be generalised to cover more contexts and thesauri, which can be quite useful in real-world AKE applications.

5.5. Wikipedia Named Entities

For this part of the experiments, we used the entire set of datasets, as in Section 5.3. The results of the experiment indicated that leveraging Wikipedia named entities improved the performance of KP-Miner and RaKUn on 16 of the datasets, and the performance of YAKE! and LexRank on all the datasets, in terms of all the evaluation metrics. Furthermore, the average improvement rates of the F1 score were 18.83%, 11.11%, 10.96%, and 10.11% for RaKUn, LexRank, YAKE!, and KP-Miner, respectively. However, we observed a slight decrease in the average F1 score of SIFRank+, although it improved on most (5 out of 7) of the datasets; this may be explained by its underlying sentence embedding approach, SIF [63], which already leverages Wikipedia for pre-training and fine-tuning. Table 11 and Table 12 show more detailed results for RaKUn and SIFRank+ as examples.

5.6. Combining Post-Processing Steps

In the final part of our experiments, we combined multiple post-processing steps to improve the performance further. To this end, we applied all combinations of the three proposed enhancements. The heatmaps generated from the F1@10 scores and the percentages of improved cases with different combinations for each baseline method can be seen in Figure 2. The results show that the best F1 scores for YAKE!, RaKUn, and KP-Miner were obtained when all the proposed post-processing steps were applied. For LexRank and SIFRank+, however, the best combination was integrating context-aware thesauri and Wikipedia, since they already benefit from PoS tagging-based filtering. In addition, the applied post-processing steps improved the baselines significantly: the improvement rate reached up to 23.7% for YAKE!, 21.3% for KP-Miner, 53.8% for RaKUn, 20.1% for LexRank, and 10.2% for SIFRank+. Finally, the improvements were consistent: for each method, at least one combination of the post-processing steps resulted in higher performance across all the datasets. These results show that, even for more modern AKE methods, there is still room for improvement using simple post-processing steps like those proposed in this paper.

6. Further Discussions

The proposed post-processing steps in this study were applied to five representative SOTA AKE methods, showing their universality in improving the performance of many different AKE methods. The universality of the post-processing steps is rooted in the fact that they only rely on access to the list of candidate keywords and their scores, which is the standard output of most (if not all) AKE methods. The performance improvements can be explained by two main reasons: (i) utilising PoS tagging prevents AKE methods, especially those benefiting less from linguistic features, from generating keywords that are unlikely to be meaningful, such as conjunctions, determiners, and adverbs; and (ii) the thesauri- and Wikipedia-based enhancements allow more domain-specific and context-specific keywords to be prioritised and returned by AKE methods.
Although PoS tagging can be easily integrated into AKE methods to implement a filtering mechanism, it should be considered separately for each dataset, since AKE datasets lack linguistic standards for golden keywords. Doing so can significantly increase the accuracy of AKE methods that benefit from PoS tagging. Thesaurus and Wikipedia integration can also be applied to AKE methods without much effort. Considering that a text document can cover multiple contexts, the results we reported can be further improved by integrating multiple contexts. This can be achieved by utilising a multi-label classifier; since one-vs-rest classifiers can be used for multi-label classification, our classifier can be refined to cover multiple contexts. In addition, more advanced models, such as BERT, can be utilised to develop a more accurate classifier. It is also worth noting that two of the proposed post-processing steps in this study were selected as representative examples of semantic elements; other semantic elements can also be used to further improve the performance of AKE methods.
Although our experiments on the proposed post-processing steps are based on English NLP tools and datasets, the steps can also be applied to multilingual AKE methods, e.g., YAKE!, for any language. The language of input documents can be identified automatically with a language identifier, which can achieve high accuracy for many languages [64]. Then, the corresponding PoS tagger and Wikipedia data can be utilised, although the set of acceptable PoS tag patterns will need updating according to the identified language. Nevertheless, utilising a context-aware thesaurus could be tricky for some languages, especially less-resourced ones, as there might be no thesaurus relevant to the context of the document in the identified language.
This study has a number of limitations that can be addressed in future work. Firstly, the selected baseline AKE methods are just examples of SOTA methods, so they may not be sufficiently representative. As our focus was on improving AKE methods in general, we did not aim to achieve the best scores among studies on AKE. As a result, this study is limited to open-source, unsupervised, and general-purpose AKE methods. In addition, this study leveraged multiple elements of the English language and used English datasets for evaluation; it therefore disregarded non-English settings, which are needed especially for multilingual AKE methods, such as YAKE!. Moreover, the proposed mechanisms were applied independently of each other throughout the experiments, so the results could be improved further by letting different mechanisms benefit from each other (e.g., applying PoS tag-based filtering to the Wikipedia integration mechanism to disregard Wikipedia named entities that cannot be keywords). Finally, a better matching strategy considering word ambiguities can be developed for checking if a candidate keyword appears in a thesaurus or Wikipedia, with the help of techniques such as word sense disambiguation.
Furthermore, while the proposed post-processing approaches are designed to be applicable across a wide range of AKE methods, their effectiveness inherently depends on the availability and quality of external knowledge sources. Specifically, the performance of the pipeline relies on the following: (1) accurate part-of-speech (PoS) tagging, (2) comprehensive and context-relevant thesauri, and (3) sufficient coverage in Wikipedia. These dependencies introduce certain robustness constraints. For example, in low-resource or emerging domains where structured thesauri are not available, or in informal text genres such as social media that include many novel or slang expressions not covered by Wikipedia, the performance gains from our approach may be limited. Similarly, PoS tagging tools may be less reliable on noisy or non-standard input. While our modular design allows selective activation of individual steps depending on the context, future work can explore adaptive strategies and fallback mechanisms to improve robustness under such conditions.

7. Conclusions

AKE plays an increasingly important role in IR and NLP, given the vast amount of digital textual data that modern systems process. In this paper, we aimed to show that an enhanced level of semantic-awareness supported by PoS tagging can improve AKE algorithms. We selected five algorithms as baseline methods after comparing several state-of-the-art AKE methods. Then, we used PoS tagging, integrated thesauri, and Wikipedia named entities to improve the baselines. Our experiments on 17 English datasets indicated that the three proposed mechanisms improved the baseline algorithms significantly and consistently.

Author Contributions

Conceptualization, E.A. and S.L.; methodology, E.A., J.R.C.N. and S.L.; software, E.A.; validation, E.A., Y.X. and J.G.; formal analysis, E.A.; investigation, E.A.; writing—original draft preparation, E.A.; writing—review and editing, J.R.C.N. and S.L.; supervision, J.R.C.N. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The source code and the complete experimental results of our work are publicly available at https://github.com/altuncu/AKE (accessed on 8 July 2025).

Acknowledgments

We would like to thank Ricardo Campos for clarification and additional information about the YAKE! algorithm. The first author E. Altuncu was supported by funding from the Ministry of National Education, Republic of Türkiye, under grant number MoNE-YLSY-2018.

Conflicts of Interest

The authors declare no competing interest.

References

  1. Merrouni, Z.A.; Frikh, B.; Ouhbi, B. Automatic Keyphrase Extraction: A Survey and Trends. J. Intell. Inf. Syst. 2020, 54, 391–424.
  2. Gavrilescu, M.; Leon, F.; Minea, A.A. Techniques for Transversal Skill Classification and Relevant Keyword Extraction from Job Advertisements. Information 2025, 16, 167.
  3. Papagiannopoulou, E.; Tsoumakas, G. A review of keyphrase extraction. WIREs Data Min. Knowl. Discov. 2020, 10, e1339.
  4. Firoozeh, N.; Nazarenko, A.; Alizon, F.; Daille, B. Keyword extraction: Issues and methods. Nat. Lang. Eng. 2020, 26, 259–291.
  5. Campos, R.; Mangaravite, V.; Pasquali, A.; Jorge, A.; Nunes, C.; Jatowt, A. YAKE! Keyword extraction from single documents using multiple local features. Inf. Sci. 2020, 509, 257–289.
  6. El-Beltagy, S.R.; Rafea, A. KP-Miner: A keyphrase extraction system for English and Arabic documents. Inf. Syst. 2009, 34, 132–144.
  7. Škrlj, B.; Repar, A.; Pollak, S. RaKUn: Rank-based Keyword Extraction via Unsupervised Learning and Meta Vertex Aggregation. In Proceedings of the 7th International Conference on Statistical Language and Speech Processing (SLSP’19), Ljubljana, Slovenia, 14–16 October 2019; Volume 11816, pp. 311–323.
  8. Ushio, A.; Liberatore, F.; Camacho-Collados, J. Back to the Basics: A Quantitative Analysis of Statistical and Graph-Based Term Weighting Schemes for Keyword Extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP ’21), Online, 7–11 November 2021; pp. 8089–8103.
  9. Sun, Y.; Qiu, H.; Zheng, Y.; Wang, Z.; Zhang, C. SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-Trained Language Model. IEEE Access 2020, 8, 10896–10906.
  10. Jones, K.S. A Statistical Interpretation of Term Specificity and Its Application in Retrieval. J. Doc. 1972, 28, 11–21.
  11. Brin, S.; Page, L. The Anatomy of a Large-Scale Hypertextual Web Search Engine. In Proceedings of the Seventh International World Wide Web Conference (WWW ’98), Brisbane, Australia, 14–18 April 1998; Volume 30, pp. 107–117.
  12. Mihalcea, R.; Tarau, P. TextRank: Bringing Order into Text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, Barcelona, Spain, 25–26 July 2004; pp. 404–411.
  13. Wan, X.; Xiao, J. Single Document Keyphrase Extraction Using Neighborhood Knowledge. In Proceedings of the 23rd National Conference on Artificial Intelligence, Chicago, IL, USA, 13–17 July 2008; Volume 2, pp. 855–860.
  14. Rose, S.; Engel, D.; Cramer, N.; Cowley, W. Automatic Keyword Extraction from Individual Documents. In Text Mining: Applications and Theory; Wiley: Hoboken, NJ, USA, 2010; Chapter 1; pp. 1–20.
  15. Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient Estimation of Word Representations in Vector Space. arXiv 2013.
  16. Pennington, J.; Socher, R.; Manning, C.D. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP ’14), Doha, Qatar, 25–29 October 2014; pp. 1532–1543.
  17. Bennani-Smires, K.; Musat, C.; Hossmann, A.; Baeriswyl, M.; Jaggi, M. Simple Unsupervised Keyphrase Extraction using Sentence Embeddings. In Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL’18), Brussels, Belgium, 31 October–1 November 2018; pp. 221–229.
  18. Zhang, L.; Chen, Q.; Wang, W.; Deng, C.; Zhang, S.; Li, B.; Wang, W.; Cao, X. MDERank: A Masked Document Embedding Rank Approach for Unsupervised Keyphrase Extraction. In Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, 22–27 May 2022; pp. 396–409.
  19. Rabby, G.; Azad, S.; Mahmud, M.; Zamli, K.Z.; Rahman, M.M. TeKET: A Tree-based Unsupervised Keyphrase Extraction Technique. Cogn. Comput. 2020, 12, 811–833.
  20. Liu, Z.; Li, P.; Zheng, Y.; Sun, M. Clustering to Find Exemplar Terms for Keyphrase Extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP’09), Singapore, 6–7 August 2009; pp. 257–266.
  21. Witten, I.H.; Paynter, G.W.; Frank, E.; Gutwin, C.; Nevill-Manning, C.G. KEA: Practical Automated Keyphrase Extraction. In Design and Usability of Digital Libraries: Case Studies in the Asia Pacific; IGI Global: Hershey, PA, USA, 2005; pp. 129–152.
  22. Basaldella, M.; Antolli, E.; Serra, G.; Tasso, C. Bidirectional LSTM Recurrent Neural Network for Keyphrase Extraction. In Proceedings of the Digital Libraries and Multimedia Archives, Udine, Italy, 25–26 January 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 180–187.
  23. Martinc, M.; Škrlj, B.; Pollak, S. TNT-KID: Transformer-based neural tagger for keyword identification. Nat. Lang. Eng. 2022, 28, 409–448.
  24. Wang, Y.; Liu, Q.; Qin, C.; Xu, T.; Wang, Y.; Chen, E.; Xiong, H. Exploiting Topic-Based Adversarial Neural Network for Cross-Domain Keyphrase Extraction. In Proceedings of the 2018 IEEE International Conference on Data Mining (ICDM’18), Singapore, 17–20 November 2018; pp. 597–606.
  25. Bordoloi, M.; Chatterjee, P.C.; Biswas, S.K.; Purkayastha, B. Keyword extraction using supervised cumulative TextRank. Multimed. Tools Appl. 2020, 79, 31467–31496.
  26. Hulth, A. Improved Automatic Keyword Extraction Given More Linguistic Knowledge. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, Sapporo, Japan, 11–12 July 2003; pp. 216–223.
  27. Pay, T. Totally automated keyword extraction. In Proceedings of the 2016 IEEE International Conference on Big Data, Washington, DC, USA, 5–8 December 2016; pp. 3859–3863.
  28. Zervanou, K. UvT: The UvT Term Extraction System in the Keyphrase Extraction task. In Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), Uppsala, Sweden, 15–16 July 2010; pp. 194–197.
  29. Li, G.; Wang, H. Improved Automatic Keyword Extraction Based on TextRank Using Domain Knowledge. In Proceedings of the Third CCF International Conference on Natural Language Processing and Chinese Computing (NLPCC’14), Shenzhen, China, 5–9 December 2014; Volume 496, pp. 403–413.
  30. Gazendam, L.; Wartena, C.; Brussee, R. Thesaurus Based Term Ranking for Keyword Extraction. In Proceedings of the 2010 Workshops on Database and Expert Systems Applications, Bilbao, Spain, 30 August–3 September 2010; pp. 49–53.
  31. Hulth, A.; Karlgren, J.; Jonsson, A.; Boström, H.; Asker, L. Automatic Keyword Extraction Using Domain Knowledge. In Computational Linguistics and Intelligent Text Processing: Proceedings of the Second International Conference on Intelligent Text Processing and Computational Linguistics (CICLing’01), Hanoi, Vietnam, 18–24 March 2001; Volume 2004, pp. 472–482.
  32. Medelyan, O.; Witten, I.H. Thesaurus Based Automatic Keyphrase Indexing. In Proceedings of the 6th ACM/IEEE-CS Joint Conference on Digital Libraries, Chapel Hill, NC, USA, 11–15 June 2006; pp. 296–297.
  33. Sheoran, A.; Jadhav, G.V.; Sarkar, A. SubModRank: Monotone Submodularity for Opinionated Key-phrase Extraction. In Proceedings of the IEEE 16th International Conference on Semantic Computing (ICSC’22), Laguna Hills, CA, USA, 26–28 January 2022; pp. 159–166.
  34. Shi, T.; Jiao, S.; Hou, J.; Li, M. Improving Keyphrase Extraction Using Wikipedia Semantics. In Proceedings of the 2nd International Symposium on Intelligent Information Technology Application, Shanghai, China, 21–22 December 2008; Volume 2, pp. 42–46.
  35. Yu, Y.; Ng, V. WikiRank: Improving Unsupervised Keyphrase Extraction using Background Knowledge. In Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC’18), Miyazaki, Japan, 7–12 May 2018; pp. 3723–3727.
  36. Ferragina, P.; Scaiella, U. TAGME: On-the-Fly Annotation of Short Text Fragments (by Wikipedia Entities). In Proceedings of the 19th ACM International Conference on Information and Knowledge Management (CIKM’10), Toronto, ON, Canada, 26–30 October 2010; pp. 1625–1628.
  37. Papagiannopoulou, E.; Tsoumakas, G. Local Word Vectors Guiding Keyphrase Extraction. Inf. Process. Manag. 2018, 54, 888–902.
  38. Zesch, T.; Gurevych, I. Approximate Matching for Evaluating Keyphrase Extraction. In Proceedings of the International Conference RANLP-2009, Borovets, Bulgaria, 14–16 September 2009; pp. 484–489.
  39. Marujo, L.; Gershman, A.; Carbonell, J.; Frederking, R.; Neto, J.P. Supervised Topical Key Phrase Extraction of News Stories using Crowdsourcing, Light Filtering and Co-reference Normalization. arXiv 2013.
  40. Medelyan, O.; Frank, E.; Witten, I.H. Human-Competitive Tagging Using Automatic Keyphrase Extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP’09), Singapore, 6–7 August 2009; pp. 1318–1327.
  41. Medelyan, O.; Witten, I.H. Domain-Independent Automatic Keyphrase Indexing with Small Training Sets. J. Am. Soc. Inf. Sci. Technol. 2008, 59, 1026–1040.
  42. Das Gollapalli, S.; Caragea, C. Extracting Keyphrases from Research Papers Using Citation Networks. Proc. AAAI Conf. Artif. Intell. 2014, 28, 1629–1635.
  43. Gallina, Y.; Boudin, F.; Daille, B. KPTimes: A Large-Scale Dataset for Keyphrase Generation on News Documents. In Proceedings of the 12th International Conference on Natural Language Generation (INLG’19), Tokyo, Japan, 29 October–1 November 2019; pp. 130–135.
  44. Krapivin, M.; Autaeu, A.; Marchese, M. Large Dataset for Keyphrases Extraction; Departmental Technical Report DISI-09-055; University of Trento: Trento, Italy, 2009. [Google Scholar]
  45. Nguyen, T.D.; Kan, M.Y. Keyphrase Extraction in Scientific Publications. In Proceedings of the Asian Digital Libraries. Looking Back 10 Years and Forging New Frontiers: Proceedings of the 10th International Conference on Asian Digital Libraries (ICADL’07), Hanoi, Vietnam, 10–13 December 2007; Volume 4822, pp. 317–326. [Google Scholar] [CrossRef]
  46. Gay, C.W.; Kayaalp, M.; Aronson, A.R. Semi-Automatic Indexing of Full Text Biomedical Articles. In Proceedings of the 2005 AMIA Symposium. American Medical Informatics Association (AMIA), Washington, DC, USA, 22–26 October 2005; pp. 271–275. [Google Scholar]
  47. Schutz, A.T. Keyphrase Extraction from Single Documents in the Open Domain Exploiting Linguistic and Statistical Methods. Master’s Thesis, National University of Ireland, Galway, Ireland, 2008. [Google Scholar]
  48. Kim, S.N.; Medelyan, O.; Kan, M.Y.; Baldwin, T. SemEval-2010 Task 5: Automatic Keyphrase Extraction from Scientific Articles. In Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval-2010), Los Angeles, CA, USA, 15–16 July 2010; pp. 21–26. [Google Scholar]
  49. Augenstein, I.; Das, M.; Riedel, S.; Vikraman, L.; McCallum, A. SemEval 2017 Task 10: ScienceIE-Extracting Keyphrases and Relations from Scientific Publications. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), Vancouver, BC, Canada, 3–4 August 2017; pp. 546–555. [Google Scholar] [CrossRef]
  50. Medelyan, O.; Witten, I.H.; Milne, D. Topic Indexing with Wikipedia. In Proceedings of the 2008 AAAI Workshop on Wikipedia and Artificial Intelligence: An Evolving Synergy, Chicago, IL, USA, 13 July 2008; pp. 19–24. [Google Scholar]
  51. Smadja, F. Retrieving Collocations from Text: Xtract. Comput. Linguist. 1993, 19, 143–178. [Google Scholar]
  52. Justeson, J.S.; Katz, S.M. Technical Terminology: Some Linguistic Properties and an Algorithm for Identification in Text. Nat. Lang. Eng. 1995, 1, 9–27. [Google Scholar] [CrossRef]
  53. Ajallouda, L.; Fagroud, F.Z.; Zellou, A.; Benlahmar, E.H. A Systematic Literature Review of Keyphrases Extraction Approaches. Int. J. Interact. Mob. Technol. (iJIM) 2022, 16, 31–58. [Google Scholar] [CrossRef]
  54. Bird, S. NLTK: The Natural Language Toolkit. In Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions, Sydney, Australia, 17–18 July 2006; pp. 69–72. [Google Scholar] [CrossRef]
  55. Choueka, Y. Looking for Needles in a Haystack or Locating Interesting Collocational Expressions in Large Textual Databases. In Proceedings of the RIAO Conference on User-Oriented Content-Based Text and Image Handling, Cambridge, MA, USA, 21–24 March 1988; pp. 609–623. [Google Scholar]
  56. Boudin, F. PKE: An Open Source Python-Based Keyphrase Extraction Toolkit. In Proceedings of the 26th International Conference on Computational Linguistics: System Demonstrations (COLING’16), Osaka, Japan, 13–16 December 2016; pp. 69–73. [Google Scholar]
  57. Caracciolo, C.; Stellato, A.; Morshed, A.; Johannsen, G.; Rajbhandari, S.; Jaques, Y.; Keizer, J. The AGROVOC Linked Dataset. Semant. Web 2013, 4, 341–348. [Google Scholar] [CrossRef]
  58. Lipscomb, C.E. Medical Subject Headings (MeSH). Bull. Med. Libr. Assoc. 2000, 88, 265–266. [Google Scholar] [PubMed]
  59. Salatino, A.A.; Thanapalasingam, T.; Mannocci, A.; Osborne, F.; Motta, E. The Computer Science Ontology: A Large-Scale Taxonomy of Research Areas. In Proceedings of the Semantic Web: Proceedings of the 17th International Semantic Web Conference (ISWC’18), Part II, Monterey, CA, USA, 8–12 October 2018; Volume 11137, pp. 187–205. [Google Scholar] [CrossRef]
  60. Osborne, F.; Motta, E. Klink-2: Integrating Multiple Web Sources to Generate Semantic Topic Networks. In Proceedings of the Semantic Web: Proceedings of the 14th International Semantic Web Conference (ISWC’15), Part I, Bethlehem, PA, USA, 11–15 October 2015; Volume 9366, pp. 408–424. [Google Scholar] [CrossRef]
  61. Kempf, A.O.; Neubert, J. The Role of Thesauri in an Open Web: A Case Study of the STW Thesaurus for Economics. Knowl. Organ. 2016, 43, 160–173. [Google Scholar] [CrossRef]
  62. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  63. Arora, S.; Liang, Y.; Ma, T. A Simple but Tough-to-Beat Baseline for Sentence Embeddings. In Proceedings of the 2017 International Conference on Learning Representations (ICLR ’17), Toulon, France, 24–26 April 2017; pp. 1–16. [Google Scholar]
  64. Jauhiainen, T.; Lui, M.; Zampieri, M.; Baldwin, T.; Lindén, K. Automatic Language Identification in Texts: A Survey. J. Artif. Intell. Res. 2019, 65, 675–782. [Google Scholar] [CrossRef]
Figure 1. The overview of the proposed post-processing approach.
Figure 2. Average improvements in F1 scores across all the datasets (top), and percentages of the improved cases across all the datasets (bottom), for different AKE methods (B: Baseline, P: PoS tagging, T: Thesaurus integration, W: Wikipedia integration).
Table 1. Basic information about the 17 datasets.

| Dataset | Content | Context | Size | Avg. # Keys | Abs. Keys | Annotators ¹ |
|---|---|---|---|---|---|---|
| KPCrowd [39] | News | Misc. | 500 | 48.92 | 13.5% | Readers |
| citeulike180 [40] | Paper | Misc. | 183 | 18.42 | 32.2% | Readers |
| DUC-2001 [13] | News | Misc. | 308 | 8.1 | 3.7% | Readers |
| fao30 [41] | Paper | Agr. | 30 | 33.23 | 41.7% | Experts |
| fao780 [41] | Paper | Agr. | 779 | 8.97 | 36.1% | Experts |
| Inspec [26] | Abstract | CS | 2000 | 14.62 | 37.7% | Experts |
| KDD [42] | Abstract | CS | 755 | 5.07 | 53.2% | Authors |
| KPTimes (test) [43] | News | Misc. | 20,000 | 5.05 | 4.7% | Editors |
| Krapivin2009 [44] | Paper | CS | 2304 | 6.34 | 15.3% | Authors |
| Nguyen2007 [45] | Paper | CS | 209 | 11.33 | 17.8% | Authors & Readers |
| PubMed [46] | Paper | Health | 500 | 15.24 | 60.2% | Authors |
| Schutz2008 [47] | Paper | Health | 1231 | 44.69 | 13.6% | Authors |
| SemEval2010 [48] | Paper | CS | 243 | 16.47 | 11.3% | Authors & Readers |
| SemEval2017 [49] | Paragraph | Misc. | 493 | 18.19 | 0.0% | Experts & Readers |
| theses100 ² | Thesis | Misc. | 100 | 7.67 | 47.6% | Unknown |
| wiki20 [50] | Report | CS | 20 | 36.50 | 51.2% | Readers |
| WWW [42] | Abstract | CS | 1330 | 5.80 | 55.0% | Authors |

¹ Experts: professional indexers assigned for annotation; Readers: people recruited for annotation regardless of their expertise; Authors: the authors of the annotated document. ² https://github.com/LIAAD/KeywordExtractor-Datasets#theses100 (accessed on 8 July 2025).
Table 2. Percentages of top 10 PoS tag patterns across 17 datasets. PoS tags: NN—noun (singular), NNS—noun (plural), JJ—adjective, VBG—verb gerund.

| Dataset | NN | NN NN | JJ NN | NNS | JJ | JJ NNS | NN NNS | JJ NN NN | VBG | NN NN NN |
|---|---|---|---|---|---|---|---|---|---|---|
| KPCrowd | 31.38 | 2.18 | 3.29 | 11.65 | 10.13 | 0.95 | 0.95 | 0.26 | 5.27 | 0.17 |
| citeulike180 | 48.71 | 7.03 | 4.78 | 12.93 | 12.74 | 1.61 | 1.56 | 0.15 | 1.95 | 0.05 |
| DUC-2001 | 19.13 | 15.90 | 15.28 | 10.49 | 1.80 | 8.73 | 10.16 | 3.65 | 0.28 | 1.52 |
| fao30 | 32.60 | 14.68 | 7.92 | 15.84 | 5.06 | 6.62 | 9.35 | 0.00 | 0.78 | 0.26 |
| fao780 | 29.56 | 14.11 | 9.11 | 15.18 | 3.78 | 6.02 | 10.88 | 0.06 | 1.21 | 0.04 |
| Inspec | 19.05 | 12.57 | 12.49 | 6.64 | 3.85 | 8.11 | 5.95 | 4.35 | 1.11 | 2.50 |
| KDD | 27.93 | 13.49 | 9.06 | 5.89 | 9.25 | 5.13 | 3.55 | 2.22 | 4.81 | 0.76 |
| KPTimes | 15.32 | 16.65 | 15.67 | 4.27 | 2.83 | 8.62 | 6.26 | 2.92 | 1.76 | 1.51 |
| Krapivin2009 | 35.15 | 4.70 | 4.06 | 14.14 | 5.67 | 2.17 | 1.47 | 0.27 | 0.95 | 0.17 |
| Nguyen2007 | 20.85 | 19.83 | 11.31 | 4.84 | 2.53 | 4.79 | 3.37 | 3.06 | 1.51 | 2.66 |
| PubMed | 30.88 | 9.23 | 3.87 | 15.43 | 12.01 | 3.51 | 5.50 | 0.77 | 0.56 | 2.03 |
| Schutz2008 | 30.15 | 6.20 | 10.61 | 18.63 | 10.91 | 5.04 | 3.19 | 1.61 | 0.31 | 0.66 |
| SemEval2010 | 19.45 | 21.74 | 21.54 | 0.08 | 3.20 | 0.17 | 0.06 | 6.40 | 0.42 | 3.15 |
| SemEval2017 | 14.57 | 8.73 | 9.00 | 7.23 | 2.12 | 5.95 | 4.46 | 3.31 | 0.66 | 1.62 |
| theses100 | 27.88 | 8.55 | 5.39 | 9.48 | 15.24 | 6.13 | 4.28 | 0.00 | 1.30 | 0.19 |
| wiki20 | 41.91 | 18.65 | 11.06 | 1.49 | 6.60 | 0.50 | 1.82 | 2.81 | 2.81 | 0.99 |
| WWW | 32.33 | 13.44 | 8.98 | 5.41 | 8.74 | 3.88 | 3.88 | 1.63 | 2.86 | 1.05 |
| Average (%) | 28.05 | 12.22 | 9.61 | 9.39 | 6.85 | 4.59 | 4.51 | 1.97 | 1.68 | 1.13 |
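Statistics of this kind can be reproduced on any annotated dataset; the minimal sketch below, using NLTK [54], is only an illustration and not the authors' exact pipeline. The `gold_keywords` list is a hypothetical placeholder, and the resulting percentages depend on the tokeniser and tagger used (NLTK resource names may also differ across versions).

```python
# A minimal sketch: tally the Penn Treebank PoS tag patterns of a
# dataset's gold keywords with NLTK's default tagger.
from collections import Counter

import nltk
from nltk import pos_tag, word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

# Hypothetical placeholder; load a real dataset's gold keywords here.
gold_keywords = ["keyword extraction", "semantic analysis", "thesaurus"]

patterns = Counter(
    " ".join(tag for _, tag in pos_tag(word_tokenize(kw)))
    for kw in gold_keywords
)
total = sum(patterns.values())
for pattern, count in patterns.most_common(10):
    print(f"{pattern}: {100 * count / total:.2f}%")
```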
Table 3. n-gram distributions of the 17 datasets. Bold values indicate the proportion of the most frequently observed n-gram length in the corresponding dataset.

| Dataset | n = 1 | n = 2 | n = 3 | n ≥ 4 | n = 1, 2 | 1 ≤ n ≤ 3 |
|---|---|---|---|---|---|---|
| KPCrowd | **73.78** | 18.47 | 4.90 | 2.83 | 92.25 | 97.15 |
| citeulike180 | **77.10** | 19.98 | 2.79 | 0.09 | 97.08 | 99.87 |
| DUC-2001 | 17.32 | **61.29** | 17.73 | 3.66 | 78.61 | 96.34 |
| fao30 | 43.02 | **52.74** | 3.41 | 0.83 | 95.76 | 99.17 |
| fao780 | 42.32 | **53.72** | 3.62 | 0.34 | 96.04 | 99.66 |
| Inspec | 16.44 | **53.68** | 23.05 | 6.84 | 70.12 | 93.17 |
| KDD | 25.48 | **56.32** | 13.97 | 4.24 | 81.80 | 95.77 |
| KPTimes | **46.68** | 34.39 | 12.55 | 6.38 | 81.07 | 93.62 |
| Krapivin2009 | 18.95 | **61.61** | 15.74 | 3.70 | 80.56 | 96.30 |
| Nguyen2007 | 27.53 | **49.96** | 15.42 | 6.97 | 77.49 | 92.91 |
| PubMed | 35.79 | **43.74** | 15.90 | 4.58 | 79.53 | 95.43 |
| Schutz2008 | **57.83** | 30.22 | 8.15 | 1.67 | 88.05 | 96.20 |
| SemEval2010 | 20.05 | **52.97** | 20.66 | 6.31 | 73.02 | 93.68 |
| SemEval2017 | 25.23 | **33.74** | 17.19 | 23.84 | 58.97 | 76.16 |
| theses100 | 31.63 | **50.37** | 11.09 | 6.90 | 82.00 | 93.09 |
| wiki20 | 26.20 | **53.52** | 18.17 | 2.11 | 79.72 | 97.89 |
| WWW | 34.36 | **47.71** | 12.15 | 5.78 | 82.07 | 94.22 |
| Average (%) | 36.45 | 45.55 | 12.73 | 5.12 | 82.01 | 94.74 |
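The same kind of tally yields the n-gram length buckets of Table 3; the sketch below is again illustrative, with `gold_keywords` as a placeholder.

```python
# A minimal sketch: bucket gold keywords by token length, as in Table 3
# (lengths of four or more are pooled into a single "n >= 4" bucket).
from collections import Counter

# Hypothetical placeholder data.
gold_keywords = ["retrieval", "keyword extraction", "automatic keyword extraction"]

buckets = Counter(min(len(kw.split()), 4) for kw in gold_keywords)
total = sum(buckets.values())
for n in (1, 2, 3, 4):
    label = "n >= 4" if n == 4 else f"n = {n}"
    print(f"{label}: {100 * buckets[n] / total:.2f}%")
```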
Table 4. The percentages of golden keywords covered by Wikipedia.

| Dataset | % | Dataset | % |
|---|---|---|---|
| KPCrowd | 71.77 | Nguyen2007 | 52.19 |
| citeulike180 | 83.78 | PubMed | 81.28 |
| DUC-2001 | 51.05 | Schutz2008 | 67.43 |
| fao30 | 80.97 | SemEval2010 | 41.27 |
| fao780 | 79.00 | SemEval2017 | 31.02 |
| Inspec | 39.08 | theses100 | 68.82 |
| KDD | 62.92 | wiki20 | 89.01 |
| KPTimes | 79.09 | WWW | 63.83 |
| Krapivin2009 | 52.12 | | |
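A coverage figure of this kind can be approximated as in the sketch below, assuming a locally available collection of Wikipedia page titles (e.g., extracted from a dump). The matching rule here (case-insensitive exact match) is a simplifying assumption, not necessarily the matching used in the paper.

```python
# A minimal sketch: percentage of gold keywords that exactly match a
# Wikipedia page title, ignoring case. `wiki_titles` would in practice
# be loaded from a Wikipedia dump; here it is a toy placeholder.
def wikipedia_coverage(gold_keywords, wiki_titles):
    titles = {t.lower() for t in wiki_titles}
    hits = sum(1 for kw in gold_keywords if kw.lower() in titles)
    return 100 * hits / len(gold_keywords) if gold_keywords else 0.0

print(wikipedia_coverage(
    ["machine learning", "qwertyuiop"],
    ["Machine learning", "Information retrieval"],
))  # -> 50.0
```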
Table 5. An overview of some existing open-source unsupervised AKE methods, showing a number of key characteristics (ease of reconfiguration, and support for PoS tagging, thesaurus integration, and Wikipedia integration).

- Statistical methods: KP-Miner [6], YAKE! [5], LexSpec [8]
- Graph-based methods: TextRank [12], SingleRank [13], RAKE [14], RaKUn [7], LexRank [8], TFIDFRank [8]
- Embeddings-based methods: EmbedRank [17], SIFRank [9], SIFRank+ [9], MDERank [18]
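To make the role of these three signals concrete, the sketch below shows one way such a post-processor could re-score the standard (candidate, score) output of an arbitrary AKE method. The boost weights, the helper predicates, and the assumption that higher scores are better are all illustrative placeholders rather than the paper's tuned configuration; for methods such as YAKE!, where lower scores indicate better candidates, the weights would be applied as divisors (or the ranking inverted).

```python
# A minimal sketch of a semantic-aware post-processor: re-weight the
# (candidate, score) pairs produced by any AKE method using three signals
# (PoS pattern, thesaurus membership, Wikipedia match). All weights and
# predicates below are hypothetical placeholders.
ALLOWED_PATTERNS = {"NN", "NN NN", "JJ NN"}  # e.g., frequent patterns from Table 2

def rescore(candidates, pos_pattern, in_thesaurus, in_wikipedia,
            w_pos=1.5, w_thes=1.5, w_wiki=1.5):
    """Boost candidates that satisfy each criterion; assumes higher = better."""
    rescored = []
    for term, score in candidates:
        if pos_pattern(term) in ALLOWED_PATTERNS:
            score *= w_pos
        if in_thesaurus(term):
            score *= w_thes
        if in_wikipedia(term):
            score *= w_wiki
        rescored.append((term, score))
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

# Toy usage with placeholder predicates:
candidates = [("keyword extraction", 0.42), ("the", 0.40)]
print(rescore(candidates,
              pos_pattern=lambda t: "NN NN" if " " in t else "NN",
              in_thesaurus=lambda t: t == "keyword extraction",
              in_wikipedia=lambda t: t == "keyword extraction"))
```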
Table 6. Comparison of the precision, recall, and F1 score of the original YAKE! and the one utilising PoS tagging, at 10 extracted keywords. Bold values indicate the best scores obtained for each dataset.

| Dataset | YAKE! P% | YAKE! R% | YAKE! F1% | +PoS P% | +PoS R% | +PoS F1% |
|---|---|---|---|---|---|---|
| KPCrowd | 24.20 | 4.92 | 8.17 | **33.98** | **6.90** | **11.47** |
| citeulike180 | 23.11 | 13.27 | 16.86 | **25.68** | **14.74** | **18.73** |
| DUC-2001 | 12.01 | 14.87 | 13.29 | **17.44** | **21.58** | **19.29** |
| fao30 | 22.00 | 6.83 | 10.42 | **25.33** | **7.86** | **12.00** |
| fao780 | 11.93 | 14.95 | 13.27 | **13.18** | **16.52** | **14.67** |
| Inspec | 19.82 | 14.05 | 16.44 | **24.57** | **17.41** | **20.38** |
| KDD | **6.01** | **14.68** | **8.53** | 5.83 | 14.23 | 8.27 |
| KPTimes | 7.97 | 15.83 | 10.61 | **11.37** | **22.58** | **15.12** |
| Krapivin2009 | 9.54 | 17.88 | 12.44 | **9.93** | **18.61** | **12.95** |
| Nguyen2007 | 19.00 | 15.82 | 17.26 | **19.19** | **15.98** | **17.43** |
| PubMed | 7.28 | 5.11 | 6.01 | **8.66** | **6.08** | **7.15** |
| Schutz2008 | 37.29 | 8.06 | 13.26 | **47.63** | **10.30** | **16.93** |
| SemEval2010 | 20.37 | 13.08 | 15.93 | **20.82** | **13.37** | **16.28** |
| SemEval2017 | 20.61 | 11.91 | 15.10 | **29.41** | **17.00** | **21.55** |
| theses100 | 9.40 | 14.09 | 11.28 | **10.50** | **15.74** | **12.60** |
| wiki20 | 19.50 | 5.49 | 8.57 | **22.00** | **6.20** | **9.67** |
| WWW | 6.49 | 13.47 | 8.76 | **6.58** | **13.66** | **8.88** |
| Avg. Score (%) | 16.27 | 12.02 | 12.13 | 19.54 | 14.04 | 14.32 |
| Improvement (%) | | | | 20.10 | 16.81 | 18.05 |
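The scores in this and the following tables can be reproduced with standard set-based metrics at a fixed cutoff. The sketch below assumes exact matching after lowercasing, whereas published evaluations often also apply stemming to both sides before matching.

```python
# A minimal sketch of precision/recall/F1 at k extracted keywords, plus
# the relative improvement figure reported in the tables' last rows.
def prf_at_k(extracted, gold, k=10):
    top_k = {t.lower() for t in extracted[:k]}
    gold_set = {g.lower() for g in gold}
    tp = len(top_k & gold_set)
    p = tp / len(top_k) if top_k else 0.0
    r = tp / len(gold_set) if gold_set else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

def improvement(new, old):
    return 100 * (new - old) / old  # e.g., improvement(14.32, 12.13) -> ~18.05
```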
Table 7. Comparison of the precision, recall, and F1 score of the original SIFRank+ and the one utilising PoS tagging, at 10 extracted keywords. Bold values indicate the best scores obtained for each dataset.

| Dataset | SIFRank+ P% | SIFRank+ R% | SIFRank+ F1% | +PoS P% | +PoS R% | +PoS F1% |
|---|---|---|---|---|---|---|
| KPCrowd | 26.08 | 5.30 | 8.81 | **26.20** | **5.32** | **8.85** |
| DUC-2001 | **28.34** | **35.09** | **31.36** | 27.86 | 34.49 | 30.82 |
| Inspec | **35.68** | **25.29** | **29.60** | 35.10 | 24.88 | 29.12 |
| KDD | **5.68** | **13.87** | **8.06** | 4.42 | 10.80 | 6.28 |
| KPTimes | **7.92** | **15.74** | **10.54** | 7.74 | 15.37 | 10.30 |
| SemEval2017 | **41.66** | **24.08** | **30.52** | 40.16 | 23.21 | 29.42 |
| WWW | **6.59** | **13.69** | **8.90** | 5.26 | 10.93 | 7.10 |
| Avg. Score (%) | 21.71 | 19.01 | 18.26 | 20.96 | 17.86 | 17.41 |
| Improvement (%) | | | | −3.45 | −6.05 | −4.65 |
Table 8. Comparison of the precision, recall, and F1 score of YAKE! when the original (PoS) and the tailored (PoS*) filtering approaches are used, at 10 extracted keywords. Bold values indicate the best scores obtained for each dataset.

| Dataset | +PoS P% | +PoS R% | +PoS F1% | +PoS* P% | +PoS* R% | +PoS* F1% |
|---|---|---|---|---|---|---|
| PubMed | 8.66 | 6.08 | 7.15 | **8.70** | **6.11** | **7.18** |
| Schutz2008 | 47.63 | 10.30 | 16.93 | **47.80** | **10.34** | **17.00** |
| Avg. Score (%) | 28.15 | 8.19 | 12.04 | 28.25 | 8.23 | 12.09 |
| Improvement (%) | | | | 0.36 | 0.49 | 0.42 |
Table 9. Comparison of precision, recall, and F1 score of the original LexRank and its enhanced versions with manual (M) and automatic (A) thesaurus integration, at 10 extracted keywords. Bold values indicate the best scores obtained for each dataset.

| Dataset | Context | LexRank P% | R% | F1% | +T (M) P% | R% | F1% | +T (A) P% | R% | F1% |
|---|---|---|---|---|---|---|---|---|---|---|
| fao30 | Agr. | 20.33 | 6.31 | 9.63 | **30.33** | **9.41** | **14.36** | – | – | – |
| fao780 | Agr. | 8.55 | 10.72 | 9.51 | **13.04** | **16.35** | **14.51** | – | – | – |
| Inspec | CS | 30.49 | 21.61 | 25.29 | **31.10** | **22.04** | **25.79** | 30.97 | 21.95 | 25.69 |
| KDD | CS | 6.07 | 14.81 | 8.61 | 6.23 | 15.20 | 8.83 | **6.25** | **15.26** | **8.87** |
| Krapivin2009 | CS | 7.01 | 13.14 | 9.15 | **8.79** | **16.48** | **11.47** | 8.74 | 16.37 | 11.39 |
| Nguyen2007 | CS | 13.25 | 11.04 | 12.04 | **15.69** | **13.07** | **14.26** | 15.45 | 12.87 | 14.04 |
| SemEval2010 | CS | 13.13 | 8.43 | 10.27 | **15.10** | **9.70** | **11.81** | **15.10** | **9.70** | **11.81** |
| wiki20 | CS | 14.00 | 3.94 | 6.15 | **23.00** | **6.48** | **10.11** | **23.00** | **6.48** | **10.11** |
| WWW | CS | 6.66 | 13.83 | 8.99 | **6.95** | **14.43** | **9.38** | 6.93 | 14.40 | 9.36 |
| PubMed | Health | 4.22 | 2.96 | 3.48 | **8.98** | **6.31** | **7.41** | 8.92 | 6.26 | 7.36 |
| Schutz2008 | Health | 28.32 | 6.12 | 10.07 | **34.35** | **7.43** | **12.21** | 34.00 | 7.35 | 12.09 |
| KPTimes-Econ | Econ. | 3.27 | 7.03 | 4.46 | **4.09** | **8.80** | **5.59** | **4.09** | 8.79 | 5.58 |
| Avg. Score (%) | | 12.94 | 9.99 | 9.80 | 16.47 | 12.14 | 12.14 | 15.35 | 11.94 | 11.63 |
| Improvement (%) | | | | | 27.28 | 21.52 | 23.88 | 21.44 | 16.03 | 18.07 |
Table 10. Comparison of precision, recall, and F1 score of the original SIFRank+ and its enhanced versions with manual (M) and automatic (A) thesaurus integration, at 10 extracted keywords. Bold values indicate the best scores obtained for each dataset.

| Dataset | Context | SIFRank+ P% | R% | F1% | +T (M) P% | R% | F1% | +T (A) P% | R% | F1% |
|---|---|---|---|---|---|---|---|---|---|---|
| Inspec | CS | 35.68 | 25.29 | 29.60 | **36.62** | **25.95** | **30.37** | 36.03 | 25.53 | 29.88 |
| KDD | CS | 5.68 | 13.87 | 8.06 | **5.97** | **14.58** | **8.48** | 5.95 | 14.52 | 8.44 |
| WWW | CS | 6.59 | 13.69 | 8.90 | **7.32** | **15.19** | **9.88** | 7.27 | 15.10 | 9.81 |
| KPTimes-Econ | Econ. | 3.49 | 7.50 | 4.76 | **4.56** | **9.81** | **6.23** | **4.56** | **9.81** | **6.23** |
| Avg. Score (%) | | 12.86 | 15.09 | 12.83 | 13.62 | 16.38 | 13.74 | 13.45 | 16.24 | 13.59 |
| Improvement (%) | | | | | 5.91 | 8.55 | 7.09 | 4.59 | 7.62 | 5.92 |
Table 11. Comparison of precision, recall, and F1 score of the original RaKUn and its enhanced version with Wikipedia, at 10 extracted keywords. Bold values indicate the best scores obtained for each dataset.

| Dataset | RaKUn P% | RaKUn R% | RaKUn F1% | +Wiki P% | +Wiki R% | +Wiki F1% |
|---|---|---|---|---|---|---|
| KPCrowd | 42.52 | 8.64 | 14.36 | **42.64** | **8.66** | **14.40** |
| citeulike180 | 16.56 | 9.50 | 12.08 | **17.92** | **10.29** | **13.07** |
| DUC-2001 | 5.68 | 7.03 | 6.29 | **6.17** | **7.64** | **6.82** |
| fao30 | 15.00 | 4.65 | 7.10 | **18.67** | **5.79** | **8.84** |
| fao780 | 6.50 | 8.14 | 7.23 | **7.64** | **9.57** | **8.50** |
| Inspec | 6.54 | 4.64 | 5.43 | **6.74** | **4.77** | **5.59** |
| KDD | **3.66** | **8.92** | **5.19** | 3.63 | 8.86 | 5.15 |
| KPTimes | 8.07 | 16.03 | 10.74 | **8.15** | **16.18** | **10.84** |
| Krapivin2009 | 2.77 | 5.20 | 3.62 | **4.94** | **9.26** | **6.44** |
| Nguyen2007 | 6.79 | 5.66 | 6.17 | **9.67** | **8.05** | **8.78** |
| PubMed | 4.30 | 3.02 | 3.55 | **6.58** | **4.62** | **5.43** |
| Schutz2008 | 33.14 | 7.16 | 11.78 | **40.09** | **8.67** | **14.25** |
| SemEval2010 | 6.75 | 4.33 | 5.28 | **10.04** | **6.45** | **7.85** |
| SemEval2017 | 11.42 | 6.60 | 8.37 | **11.74** | **6.79** | **8.60** |
| theses100 | 3.90 | 5.85 | 4.68 | **4.80** | **7.20** | **5.76** |
| wiki20 | 9.50 | 2.68 | 4.18 | **19.50** | **5.49** | **8.57** |
| WWW | 4.32 | 8.98 | 5.84 | **4.39** | **9.12** | **5.93** |
| Avg. Score (%) | 11.02 | 6.88 | 7.17 | 13.14 | 8.08 | 8.52 |
| Improvement (%) | | | | 19.24 | 17.44 | 18.83 |
Table 12. Comparison of the precision, recall, and F1 score of the original SIFRank+ and the one utilising Wikipedia named entities, at 10 extracted keywords. Bold values indicate the best scores obtained for each dataset.

| Dataset | SIFRank+ P% | SIFRank+ R% | SIFRank+ F1% | +Wiki P% | +Wiki R% | +Wiki F1% |
|---|---|---|---|---|---|---|
| KPCrowd | 26.08 | 5.30 | 8.81 | **27.46** | **5.58** | **9.27** |
| DUC-2001 | **28.34** | **35.09** | **31.36** | 22.82 | 28.26 | 25.25 |
| Inspec | 35.68 | 25.29 | 29.60 | **36.60** | **25.94** | **30.36** |
| KDD | 5.68 | 13.87 | 8.06 | **6.11** | **14.90** | **8.66** |
| KPTimes | 7.92 | 15.74 | 10.54 | **9.22** | **18.31** | **12.26** |
| SemEval2017 | **41.66** | **24.08** | **30.52** | 41.34 | 23.89 | 30.28 |
| WWW | 6.59 | 13.69 | 8.90 | **7.50** | **15.57** | **10.12** |
| Avg. Score (%) | 21.71 | 19.01 | 18.26 | 21.58 | 18.92 | 18.03 |
| Improvement (%) | | | | −0.60 | −0.47 | −1.26 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
