Search Results (11)

Search Parameters:
Keywords = textual reuse

24 pages, 34043 KB  
Article
Toward the Adaptive Reuse of Vernacular Architecture: Practices from the School of Porto
by David Ordóñez-Castañón and Teresa Cunha Ferreira
Heritage 2024, 7(3), 1826-1849; https://doi.org/10.3390/heritage7030087 - 21 Mar 2024
Cited by 4 | Viewed by 6025
Abstract
Strategies for the adaptive reuse of vernacular architecture are of utmost importance in the current context of social, economic, and environmental vulnerability. This article examines the design strategies of adaptive reuse in three cases by renowned architects of the so-called School of Porto, developed across the second half of the 20th century, specifically between 1956 and 1991. The paper aims to introduce new and deeper knowledge of the selected practices by critically documenting the whole process of the intervention (before, during, after) and not only the final result, as is common practice in specialized publications. The research methodology combines bibliographical and archival research and the interpretation of diverse graphic, photographic, and textual documentation with the production of analytical drawings. The demolitions/additions color code (black/yellow/red) is applied to plans, sections, and elevations as an essential tool for understanding and communicating the transformations undertaken. The selected case studies are Além House (1956–1967) by Fernando Távora, Alcino Cardoso House (1971–1973; 1988–1991) by Álvaro Siza, and the House in Gerês (1980–1982) by Eduardo Souto de Moura. These projects show different strategies of intervention in built heritage, providing lessons on the reactivation of obsolete or abandoned rural constructions with new functions that are compatible with the preservation of their values (historical, landscape, constructive, social, and aesthetic), as well as guidelines for sustainable reuse.
(This article belongs to the Special Issue Adaptive Reuse of Heritage Buildings)

17 pages, 3238 KB  
Article
An Algorithm for Finding Optimal k-Core in Attribute Networks
by Jing Liu and Yong Zhong
Appl. Sci. 2024, 14(3), 1256; https://doi.org/10.3390/app14031256 - 2 Feb 2024
Cited by 2 | Viewed by 2139
Abstract
As a structural indicator of dense subgraphs, the k-core has been widely used in community search due to its concise and efficient calculation, and many community search algorithms have been developed on its basis. However, these algorithms often set k based on empirical analysis of datasets or require users to input it manually. If users are not familiar with the graph structure, they may miss the optimal solution due to an improper setting of k. Especially in attribute social networks, characterizing communities with only k-cores may leave the communities without semantic interpretability. Consequently, this article proposes a method for identifying the optimal k-core with the greatest attribute score in an attribute social network as the target community. The difficulty of the problem is that the query needs to integrate both structural and textual indicators of the community while fully accounting for the diversity of attribute scoring functions. To effectively reduce computational costs, we combine the topological characteristics of the k-core with the attribute characteristics of entities to construct a hierarchical forest. Notably, we name tree nodes in a way similar to pre-order traversal, which maintains the order of all tree nodes during forest creation. In such an attribute forest, it is possible to quickly locate the initial solution containing all query vertices and to reuse intermediate results while expanding queries. We conducted effectiveness and performance experiments on multiple real datasets. As the results show, attribute scoring functions are not monotonic, and the proposed algorithm can avoid scores falling into local optima. With the help of the attribute k-core forest, the actual query time of the Advanced algorithm improves by two orders of magnitude over the BaseLine algorithm. In addition, the average F1 score of our target community is 2.04 times higher than that of ACQ and 26.57% higher than that of SFEG.
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
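
The core mechanic described here, scanning candidate values of k and scoring each k-core by a textual attribute function, can be illustrated with a short sketch. The snippet below is not the paper's Advanced algorithm or its attribute k-core forest; the toy graph, attributes, and mean-overlap scoring function are invented for illustration, using networkx.

```python
# Illustrative sketch only: for each feasible k, take the k-core component
# containing the query vertex and score it with a toy attribute function
# (mean keyword overlap). Graph, attributes, and scoring are invented.
import networkx as nx

def best_k_core(G, query, attrs, keywords):
    """Scan k upward and keep the connected k-core around `query`
    with the highest attribute score."""
    core = nx.core_number(G)              # max k for which each node survives
    best, best_score = None, float("-inf")
    for k in range(1, core[query] + 1):
        kcore = nx.k_core(G, k=k)
        # restrict to the connected component of the query vertex
        comp = nx.node_connected_component(kcore, query)
        score = sum(len(attrs[v] & keywords) for v in comp) / len(comp)
        if score > best_score:            # scores need not be monotone in k
            best, best_score = comp, score
    return best, best_score

G = nx.karate_club_graph()
attrs = {v: {"sports", "club"} if v % 2 else {"club"} for v in G}  # toy attributes
print(best_k_core(G, query=0, attrs=attrs, keywords={"sports", "club"}))
```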

15 pages, 6723 KB  
Article
Observations on the Intertextuality of Selected Abhidharma Texts Preserved in Chinese Translation
by Sebastian Nehrdich
Religions 2023, 14(7), 911; https://doi.org/10.3390/rel14070911 - 14 Jul 2023
Cited by 2 | Viewed by 2586
Abstract
Textual reuse is a fundamental characteristic of traditional Buddhist literature preserved in various languages. Given the sheer volume of preserved Buddhist literature and the often-unmarked instances of textual reuse, the thorough analysis and evaluation of this material without computational assistance are virtually impossible. This study investigates the application of computer-aided methods for detecting approximately similar passages within Xuanzang’s translation corpus and a selection of Abhidharma treatises preserved in Chinese translation. It presents visualizations of the generated network graphs and conducts a detailed examination of patterns of textual reuse among selected works within the Abhidharma tradition. This study demonstrates that the general picture of textual reuse within Xuanzang’s translation corpus and the selected Abhidharma texts, based on computational analysis, aligns well with established scholarship. Thus, it provides a robust foundation for conducting more detailed studies on individual text sets. The methods employed in this study to create and analyze citation network graphs can also be applied to other texts preserved in Chinese and, with some modifications, to texts in other languages.
(This article belongs to the Special Issue Historical Network Analysis in the Study of Chinese Religion)
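
As a rough illustration of what "detecting approximately similar passages" can mean computationally, the sketch below compares two Chinese passages by character n-gram shingles and Jaccard similarity. The paper's actual method is more sophisticated; the sample passages (from the Diamond Sutra) and the 0.3 threshold are illustrative only.

```python
# Minimal sketch of approximate-reuse detection between two Chinese passages
# using character n-gram shingles and Jaccard similarity.
def shingles(text, n=4):
    """Set of overlapping character n-grams (avoids word segmentation,
    which suits Classical Chinese)."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

passage_a = "一切有為法如夢幻泡影如露亦如電應作如是觀"
passage_b = "一切有為法如夢幻泡影應作如是觀"  # partial reuse of passage_a
sim = jaccard(shingles(passage_a), shingles(passage_b))
print(f"similarity = {sim:.2f}", "-> likely reuse" if sim > 0.3 else "")
```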

28 pages, 1948 KB  
Review
A Contemporary Review on Utilizing Semantic Web Technologies in Healthcare, Virtual Communities, and Ontology-Based Information Processing Systems
by Senthil Kumar Narayanasamy, Kathiravan Srinivasan, Yuh-Chung Hu, Satish Kumar Masilamani and Kuo-Yi Huang
Electronics 2022, 11(3), 453; https://doi.org/10.3390/electronics11030453 - 3 Feb 2022
Cited by 33 | Viewed by 10087
Abstract
The semantic web is an emerging technology that helps different users connect and create content, and it facilitates representing information in a manner that computers can understand. As the world heads toward the fourth industrial revolution, the use of artificial-intelligence-enabled semantic web technologies paves the way for many real-time application developments. The fundamental building blocks of semantic web technologies are ontologies, which allow concepts to be shared and reused in a standardized way, so that data gathered from heterogeneous sources receive a common nomenclature and duplicates can be disambiguated easily. In this context, the right use of ontology capabilities would further strengthen their presence in many web-based applications such as e-learning, virtual communities, social media sites, healthcare, and agriculture. In this paper, we give a comprehensive review of the use of the semantic web in healthcare, virtual communities, and other information retrieval projects. As the semantic web becomes pervasive across domains, the demand for it in healthcare, virtual communities, and information retrieval has gained considerable momentum in recent years. To obtain the correct sense of the words or terms in textual content, it is necessary to apply the right ontology to resolve ambiguity and avoid deviations in the concepts. This review highlights the information necessary for a good understanding of the semantic web and its ontological frameworks.
(This article belongs to the Special Issue New Trends in Deep Learning for Computer Vision)
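
To make the "common nomenclature" point concrete, here is a minimal sketch using the rdflib library: two records that name the same concept differently are typed against one ontology class, so duplicates become trivially identifiable. The EX namespace and all terms are invented for illustration and are not drawn from the reviewed works.

```python
# Toy example: a shared ontology class gives records from heterogeneous
# sources a common nomenclature, disambiguating duplicates.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

EX = Namespace("http://example.org/onto/")  # invented namespace
g = Graph()
g.bind("ex", EX)

# One canonical class carrying the labels used by different sources
g.add((EX.MyocardialInfarction, RDF.type, RDFS.Class))
g.add((EX.MyocardialInfarction, RDFS.label, Literal("MI")))
g.add((EX.MyocardialInfarction, RDFS.label, Literal("heart attack")))

# Records from two sources now point at the same class
g.add((EX.record1, RDF.type, EX.MyocardialInfarction))
g.add((EX.record2, RDF.type, EX.MyocardialInfarction))

print(g.serialize(format="turtle"))
```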

25 pages, 6926 KB  
Article
A Semantic Annotation Pipeline towards the Generation of Knowledge Graphs in Tribology
by Patricia Kügler, Max Marian, Rene Dorsch, Benjamin Schleich and Sandro Wartzack
Lubricants 2022, 10(2), 18; https://doi.org/10.3390/lubricants10020018 - 25 Jan 2022
Cited by 8 | Viewed by 5365
Abstract
Within the domain of tribology, enterprises and research institutions are constantly working on new concepts, materials, lubricants, or surface technologies for a wide range of applications. This is also reflected in the continuously growing number of publications, which in turn serve as guidance and benchmarks for researchers and developers. Due to the lack of suitable data and knowledge bases, knowledge acquisition and aggregation is still a manual process involving the time-consuming review of literature. Semantic annotation and natural language processing (NLP) techniques can decrease this manual effort by providing semi-automatic support for knowledge acquisition. The generation of knowledge graphs as a structured information format from textual sources promises improved reuse and retrieval of information acquired from scientific literature. Motivated by this, the contribution introduces a novel semantic annotation pipeline for generating knowledge in the domain of tribology. The pipeline is built on Bidirectional Encoder Representations from Transformers (BERT), a state-of-the-art language model, and involves classic NLP tasks like information extraction, named entity recognition, and question answering. Within this contribution, the three modules of the pipeline for document extraction, annotation, and analysis are introduced. Based on a comparison with a manual annotation of publications on tribological model testing, satisfactory performance is verified.
(This article belongs to the Special Issue Machine Learning in Tribology)
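
A hedged sketch of the kind of BERT-based annotation step such a pipeline rests on, using the Hugging Face transformers library: extractive question answering over a sentence from a publication. The model name and the tribology sentence are placeholder assumptions, not the authors' trained model or data.

```python
# Minimal question-answering annotation step; model and text are placeholders.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/bert-base-cased-squad2")
context = ("The pin-on-disc tests were run with PAO-8 oil at 80 °C, "
           "yielding a coefficient of friction of 0.08.")
for question in ["Which lubricant was used?",
                 "What was the coefficient of friction?"]:
    answer = qa(question=question, context=context)
    print(f"{question} -> {answer['answer']} (score {answer['score']:.2f})")
```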

11 pages, 3175 KB  
Article
Automatic Extraction of Adverse Drug Reactions from Summary of Product Characteristics
by Zhengru Shen and Marco Spruit
Appl. Sci. 2021, 11(6), 2663; https://doi.org/10.3390/app11062663 - 17 Mar 2021
Cited by 7 | Viewed by 3459
Abstract
The summary of product characteristics from the European Medicines Agency is a reference document on medicines in the EU. It contains textual information for clinical experts on how to use medicines safely, including adverse drug reactions. Using natural language processing (NLP) techniques to automatically extract adverse drug reactions from such unstructured textual information helps clinical experts use this information effectively and efficiently in daily practice. Such techniques have been developed for Structured Product Labels from the Food and Drug Administration (FDA), but no research has focused on extraction from the Summary of Product Characteristics. In this work, we built an NLP pipeline that automatically scrapes summaries of product characteristics online and then extracts adverse drug reactions from them. In addition, we have made the method and its output publicly available so that they can be reused and further evaluated in clinical practice. In total, we extracted 32,797 common adverse drug reactions for 647 common medicines scraped from the Electronic Medicines Compendium. A manual review of 37 commonly used medicines indicated good performance, with recall of 0.99 and precision of 0.934.
(This article belongs to the Special Issue Applications of Artificial Intelligence in Pharmaceutics)
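
A minimal sketch of the baseline extraction idea: by the EU SmPC template, adverse reactions are listed in section 4.8 ("Undesirable effects"), so one can isolate that section and match a term dictionary against it. The document snippet and mini-dictionary below are invented; this is not the authors' pipeline.

```python
# Cut out SmPC section 4.8 and match a toy ADR dictionary against it.
import re

SMPC = """4.7 Effects on ability to drive ...
4.8 Undesirable effects
Very common: headache, nausea. Common: dizziness, rash.
4.9 Overdose ..."""

ADR_TERMS = {"headache", "nausea", "dizziness", "rash", "fatigue"}  # toy list

def extract_adrs(smpc_text):
    # isolate section 4.8, which by the SmPC template holds undesirable effects
    match = re.search(r"4\.8.*?(?=\n4\.9|\Z)", smpc_text, re.S)
    section = match.group(0).lower() if match else ""
    return sorted(t for t in ADR_TERMS if t in section)

print(extract_adrs(SMPC))  # ['dizziness', 'headache', 'nausea', 'rash']
```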

21 pages, 1048 KB  
Article
TagML—An Implementation Specific Model to Generate Tag-Based Documents
by Ricardo Tesoriero, Gabriel Sebastian and Jose A. Gallud
Electronics 2020, 9(7), 1097; https://doi.org/10.3390/electronics9071097 - 5 Jul 2020
Cited by 2 | Viewed by 3464
Abstract
This article describes TagML, a method to generate collections of XML documents using model-to-model (M2M) transformations. To accomplish this goal, we define the TagML meta-model and the TagML-to-XML model-to-text transformation. While TagML models represent the essential characteristics of collections of XML documents, the TagML-to-XML transformation generates the textual representation of collections of XML documents from TagML models. This approach enables developers to define model-to-model transformations that generate TagML models, which are then turned into text by applying the TagML-to-XML transformation. Consequently, developers can use declarative languages to define model-to-text transformations that generate XML documents, instead of traditional archetype-based languages. The TagML model editor as well as the TagML-to-XML transformation were developed as Eclipse plugins using the Eclipse Modeling Framework. The plugin follows the Object Management Group (OMG) standards to ensure compatibility with legacy tools. Using TagML, unlike previous proposals, implies generating XML documents through model-to-model transformations instead of model-to-text transformations, which results in an improvement of transformation readability and reliability, as well as a reduction in transformation maintenance costs. The proposed approach helps developers define transformations that are less prone to errors than with the traditional approach. Moreover, the simplicity of the approach enables the generation of XML documents without any transformation configuration, which does not penalize model reuse. To illustrate the features of the proposal, we present the generation of XHTML documents using UML class diagrams as input models. The evaluation section demonstrates that the proposed method is less error-prone than the traditional one.
(This article belongs to the Section Computer Science & Engineering)
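
The following toy sketch (in Python rather than the authors' Eclipse/EMF stack) illustrates the underlying idea: represent a tag-based document as an in-memory model and serialize any such model to XML with one generic rule, so developers only write model-to-model transformations. The TagNode class is invented for illustration and is not the TagML meta-model.

```python
# Model a tag-based document, then serialize the model to XML generically.
import xml.etree.ElementTree as ET
from dataclasses import dataclass, field

@dataclass
class TagNode:                      # minimal stand-in for a tag-model element
    name: str
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)
    text: str = ""

def to_xml(node: TagNode) -> ET.Element:
    """Generic 'model-to-text' step: one rule serializes any TagNode tree."""
    elem = ET.Element(node.name, node.attributes)
    elem.text = node.text or None
    elem.extend(to_xml(child) for child in node.children)
    return elem

# A tiny XHTML-like document expressed as a model, then rendered to text
page = TagNode("html", children=[
    TagNode("body", children=[TagNode("h1", text="Person")]),
])
print(ET.tostring(to_xml(page), encoding="unicode"))
```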

20 pages, 1863 KB  
Article
Software Support for Discourse-Based Textual Information Analysis: A Systematic Literature Review and Software Guidelines in Practice
by Patricia Martin-Rodilla and Miguel Sánchez
Information 2020, 11(5), 256; https://doi.org/10.3390/info11050256 - 7 May 2020
Cited by 6 | Viewed by 5270
Abstract
The intrinsic characteristics of humanities research require technological support and software assistance that necessarily involves the analysis of textual narratives. When these narratives become increasingly complex, software-assisted pragmatics analysis (i.e., at the discourse or argumentation level) is a great ally in the digital humanities. In recent years, solutions from the information visualization domain have been developed to support the discourse or argumentation analysis of textual sources via software, with applications to political speeches, debates, and online forums, but also to written narratives, literature, and historical sources. This paper presents a wide and interdisciplinary systematic literature review (SLR), covering both software-related and humanities areas, of the information visualization and software solutions adopted to support pragmatic textual analysis. As a result of this review, the paper identifies weaknesses in existing work in the field, especially regarding the availability of solutions, dependence on particular pragmatic frameworks, and the lack of software mechanisms for information sharing and reuse. The paper also provides software guidelines for addressing the detected weaknesses, exemplifying some of them in practice through their implementation in a new web tool, Viscourse. Viscourse is conceived as a complementary tool to assist textual analysis and to facilitate the reuse of informational pieces from discourse and argumentation analysis tasks.
(This article belongs to the Special Issue Digital Humanities)

24 pages, 3107 KB  
Article
Automatic Classification of Web Images as UML Static Diagrams Using Machine Learning Techniques
by Valentín Moreno, Gonzalo Génova, Manuela Alejandres and Anabel Fraga
Appl. Sci. 2020, 10(7), 2406; https://doi.org/10.3390/app10072406 - 1 Apr 2020
Cited by 13 | Viewed by 5306
Abstract
Our purpose in this research is to develop a method to automatically and efficiently classify web images as Unified Modeling Language (UML) static diagrams, and to produce a computer tool that implements this function. The tool receives a bitmap file (in various formats) as input and reports whether the image corresponds to a diagram. For pragmatic reasons, we restricted ourselves to the simplest kinds of diagrams that are most useful for automated software reuse: computer-edited 2D representations of static diagrams. The tool does not require the images to be explicitly or implicitly tagged as UML diagrams. It extracts graphical characteristics from each image (such as the grayscale histogram, color histogram, and elementary geometric forms) and uses a combination of rules to classify it. The rules are obtained with machine learning techniques (rule induction) from a sample of 19,000 web images manually classified by experts. In this work, we do not consider the textual contents of the images. Our tool reaches nearly 95% agreement with manually classified instances, improving on the effectiveness of related research works. Moreover, despite using a training dataset 15 times bigger, the time required to process each image and extract its graphical features (0.680 s) is seven times lower.
(This article belongs to the Special Issue Knowledge Retrieval and Reuse)
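
As a hedged sketch of the feature-extraction side, the snippet below computes grayscale-histogram features with Pillow and applies one hand-written rule; the thresholds are invented and stand in for the rule set the authors induce from their 19,000-image sample.

```python
# Extract grayscale-histogram features and apply a toy classification rule.
from PIL import Image

def histogram_features(path):
    img = Image.open(path).convert("L")          # grayscale
    hist = img.histogram()                        # 256 bins
    total = sum(hist)
    white = sum(hist[240:]) / total               # near-white background share
    black = sum(hist[:16]) / total                # dark line-work share
    return white, black

def looks_like_uml_diagram(path, white_min=0.6, black_max=0.2):
    # computer-edited static diagrams tend to be mostly white with sparse dark lines
    white, black = histogram_features(path)
    return white >= white_min and black <= black_max

print(looks_like_uml_diagram("candidate.png"))   # hypothetical input file
```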

22 pages, 3213 KB  
Article
‘Examining Religion’ through Generations of Jain Audiences: The Circulation of the Dharmaparīkṣā
by Heleen De Jonckheere
Religions 2019, 10(5), 308; https://doi.org/10.3390/rel10050308 - 7 May 2019
Cited by 5 | Viewed by 6839
Abstract
Indian literary traditions, both religious and non-religious, have dealt with literature in a fluid way, repeating and reusing narrative motifs, stories, and characters over and over again. In recognition of this, the current paper focuses on one particular textual tradition within Jainism, of works titled Dharmaparīkṣā, and traces its circulation. This didactic narrative, designed to convince a Jain audience of the correctness of Jainism over other traditions, was first composed in the tenth century in Apabhraṃśa and is best known in its eleventh-century Sanskrit version by the Digambara author Amitagati. Tracing it from a tenth-century context into modernity, across both classical and vernacular languages, demonstrates the popularity of this narrative genre within Jain circles. The paper focuses on the materiality of manuscripts, looking at language and form, place of preservation, affiliation of the authors and/or scribes, and patronage. In addition to highlighting a previously underestimated category of texts, such a historical overview of a particular literary circulation proves illuminating on broader levels: it shows networks of transmission within the Jain community, illustrates different types of mediation of one literary tradition, and, overall, enriches our knowledge of Jain literary culture.
(This article belongs to the Special Issue Jainism Studies)

13 pages, 751 KB  
Article
A Recommendation System for Execution Plans Using Machine Learning
by Jihad Zahir and Abderrahim El Qadi
Math. Comput. Appl. 2016, 21(2), 23; https://doi.org/10.3390/mca21020023 - 15 Jun 2016
Cited by 13 | Viewed by 5619
Abstract
Generating execution plans is a costly operation for a DataBase Management System (DBMS). An interesting alternative is to reuse old execution plans, already generated by the optimizer for past queries, to execute new queries. In this paper, we present an approach for execution plan recommendation in two phases. We first propose a textual representation of SQL queries and use it to build a Features Extractor module. We then present a straightforward solution to identify query similarity, which relies only on the comparison of SQL statements. Next, we show how to build an improved solution enabled by machine learning techniques that takes into account the features of the queries' execution plans. Comparing three machine learning algorithms, we find that the improved solution using Classification Based on Associative Rules (CAR) identifies similarity in 91% of cases.
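
A minimal sketch of the first-phase idea under stated assumptions: represent each SQL statement as a bag of tokens and recommend the cached execution plan of the most similar past query. The queries, plans, and the Jaccard measure below are illustrative, not the paper's CAR-based solution.

```python
# Recommend the cached plan of the most lexically similar past SQL query.
import re

def tokens(sql):
    # bag of identifiers, minus a few SQL keywords
    return set(re.findall(r"[a-z_]+", sql.lower())) - {"select", "from", "where", "and"}

def recommend_plan(new_query, plan_cache):
    """plan_cache maps past SQL text -> its stored execution plan."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    new = tokens(new_query)
    best = max(plan_cache, key=lambda q: jaccard(tokens(q), new))
    return plan_cache[best]

plan_cache = {
    "SELECT name FROM employees WHERE dept_id = 3": "IndexScan(employees.dept_id)",
    "SELECT total FROM orders WHERE year = 2015":   "SeqScan(orders)",
}
print(recommend_plan("SELECT name FROM employees WHERE dept_id = 7", plan_cache))
```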
