Search Results (25)

Search Parameters:
Keywords = RDF triples

7 pages, 707 KB  
Proceeding Paper
Enhancing Text-to-SPARQL Generation via In-Context Learning with Example Selection Strategies
by Eric Jui-Lin Lu and Zi-Ting Su
Eng. Proc. 2026, 134(1), 36; https://doi.org/10.3390/engproc2026134036 - 9 Apr 2026
Abstract
Large language models demonstrate strong in-context learning (ICL) capabilities, allowing them to perform diverse tasks without fine-tuning. In knowledge graph question answering (KGQA), natural language questions are translated into SPARQL queries. Existing ICL approaches mainly rely on semantic similarity, often neglecting structural features. To address this limitation, we developed a structure-aware example selection strategy that integrates both semantic and structural patterns by abstracting Resource Description Framework (RDF) triples. We compare four strategies: (1) fully random, (2) semantic similarity, (3) same-type random, and (4) same-type semantic similarity. Experiments on LC-QuAD 1.0 using FLAN-T5 show that in non-fine-tuned settings, structure-aware semantic selection achieves the best results, highlighting the importance of structural congruence, while after fine-tuning, differences between strategies converge but diversity and semantic relevance remain beneficial. These findings demonstrate the critical role of example quality in ICL and provide empirical insights for KGQA design. Full article
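As a hedged illustration of the same-type (structure-aware) selection idea described above, the toy sketch below abstracts SPARQL queries into coarse structural signatures and ranks same-signature examples by a crude token-overlap similarity. All names, queries, and the signature definition are invented for illustration; they are not the paper's actual method.

```python
# Hypothetical sketch of structure-aware example selection for text-to-SPARQL
# in-context learning. The signature extraction and the similarity measure are
# toy stand-ins for the paper's abstraction of RDF triple patterns.

def structural_signature(sparql: str) -> tuple:
    """Abstract a SPARQL query into a coarse structural pattern."""
    form = "ASK" if sparql.lstrip().upper().startswith("ASK") else "SELECT"
    n_patterns = sparql.count(" .")          # crude triple-pattern count
    has_count = "COUNT(" in sparql.upper()
    return (form, n_patterns, has_count)

def token_overlap(a: str, b: str) -> float:
    """Toy semantic similarity: Jaccard overlap of lowercased tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def select_examples(question, target_sig, pool, k=2):
    """Same-type semantic selection: filter by structural signature,
    then rank the survivors by similarity to the input question."""
    same_type = [ex for ex in pool if structural_signature(ex["sparql"]) == target_sig]
    same_type.sort(key=lambda ex: token_overlap(question, ex["question"]), reverse=True)
    return same_type[:k]

pool = [
    {"question": "Who wrote Dune?",
     "sparql": "SELECT ?a WHERE { dbr:Dune dbo:author ?a . }"},
    {"question": "How many films did Kubrick direct?",
     "sparql": "SELECT (COUNT(?f) AS ?n) WHERE { ?f dbo:director dbr:Stanley_Kubrick . }"},
    {"question": "Who painted the Mona Lisa?",
     "sparql": "SELECT ?p WHERE { dbr:Mona_Lisa dbo:author ?p . }"},
]
target = structural_signature("SELECT ?a WHERE { dbr:Hamlet dbo:author ?a . }")
picked = select_examples("Who wrote Hamlet?", target, pool, k=2)
```

Here the COUNT aggregation question is filtered out because its signature differs, mirroring how structural congruence constrains the candidate pool before semantic ranking.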

23 pages, 1156 KB  
Article
An Industry-Ready Machine Learning Ontology
by Bernhard G. Humm
Appl. Sci. 2026, 16(2), 843; https://doi.org/10.3390/app16020843 - 14 Jan 2026
Viewed by 672
Abstract
This article presents an industry-ready ontology for the machine learning domain, named “ML Ontology”. While based on lightweight modelling languages, ML Ontology provides novel features including built-in queries and quality assurance, as well as sophisticated reasoning. With ca. 700 individuals that define key ML concepts and ca. 5000 RDF triples, ML Ontology ranks among the largest domain-specific ontologies for ML. An experiment to estimate the correctness and completeness of ML terminology included in ML Ontology indicates an F1-score of 0.83. A benchmark evaluating query performance reveals query response times far below 100 ms even for complex queries and memory consumption below 3.5 MB. Its industry-readiness is demonstrated by benchmarks as well as two use case implementations within a data science platform. ML Ontology is open source and published under an MIT license. Full article
(This article belongs to the Special Issue Current Advances in Intelligent Semantic Technologies)

25 pages, 5679 KB  
Article
Mine Emergency Rescue Capability Assessment Integrating Sustainable Development: A Combined Model Using Triple Bottom Line and Relative Difference Function
by Lu Feng, Jing Xie and Yuxian Ke
Sustainability 2025, 17(22), 9948; https://doi.org/10.3390/su17229948 - 7 Nov 2025
Cited by 1 | Viewed by 779
Abstract
Assessing Mine Emergency Rescue Capability (MERC) is critical for ensuring mining safety and advancing sustainable development. However, existing MERC assessments often lack a holistic sustainability perspective. To bridge this gap, this study develops a MERC assessment model grounded in the Triple Bottom Line (TBL) framework, integrating the relative difference function (RDF) to address the fuzziness and subjectivity in evaluation processes. A hierarchical indicator system is constructed, comprising 5 primary factors and 25 sub-indicators across environmental, economic, and social dimensions, reflecting both immediate rescue effectiveness and long-term sustainability performance. Indicator weights are derived from a hybrid approach that combines the subjective G1 method with the objective entropy weight method. RDF is employed to compute membership degrees, and the final MERC level is determined by level characteristic values. The model is validated through an empirical study of six green mines in China. Results demonstrate robust performance and consistency with alternative methods and reveal the environmental dimension as the dominant driver within the TBL framework. This finding supports the ecology-first principle of green mining and underscores the alignment of high-level emergency preparedness with sustainable development objectives. By explicitly embedding sustainability principles into safety assessment, the proposed model provides a scientifically grounded tool to guide the green transformation of the mining industry. Future work will adapt the model to diverse mining contexts and refine the indicators to better support global sustainability goals. Full article

25 pages, 4531 KB  
Article
Interoperable Knowledge Graphs for Localized Supply Chains: Leveraging Graph Databases and RDF Standards
by Vishnu Kumar
Logistics 2025, 9(4), 144; https://doi.org/10.3390/logistics9040144 - 13 Oct 2025
Cited by 2 | Viewed by 3581
Abstract
Background: Ongoing challenges such as geopolitical conflicts, trade disruptions, economic sanctions, and political instability have underscored the urgent need for large manufacturing enterprises to improve resilience and reduce dependence on global supply chains. Integrating regional and local Small- and Medium-Sized Enterprises (SMEs) has been proposed as a strategic approach to enhance supply chain localization, yet barriers such as limited visibility, qualification hurdles, and integration difficulties persist. Methods: This study proposes a comprehensive knowledge graph driven framework for representing and discovering SMEs, implemented as a proof-of-concept in the U.S. BioPharma sector. The framework constructs a curated knowledge graph in Neo4j, converts it to Resource Description Framework (RDF) format, and aligns it with the Schema.org vocabulary to enable semantic interoperability and enhance the discoverability of SMEs. Results: The developed knowledge graph, consisting of 488 nodes and 11,520 edges, enabled accurate multi-hop SME discovery with query response times under 10 milliseconds. RDF serialization produced 16,086 triples, validated across platforms to confirm interoperability and semantic consistency. Conclusions: The proposed framework provides a scalable, adaptable, and generalizable solution for SME discovery and supply chain localization, offering a practical pathway to strengthen resilience in diverse manufacturing industries. Full article
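The conversion step described above (property graph to RDF aligned with Schema.org) can be sketched as follows. This is a minimal, hedged illustration: the node/edge shapes, the base IRI, and the label-to-type mapping are assumptions for the example, not the paper's actual schema or export format.

```python
# Hypothetical sketch of converting property-graph nodes/edges (as exported
# from a graph database such as Neo4j) into RDF triples aligned with
# Schema.org vocabulary terms.

SCHEMA = "https://schema.org/"
BASE = "https://example.org/sme/"          # assumed base IRI for minted subjects

LABEL_TO_TYPE = {"SME": SCHEMA + "Organization", "Product": SCHEMA + "Product"}

def node_to_triples(node):
    """Emit (subject, predicate, object) triples for one exported node."""
    s = BASE + str(node["id"])
    triples = [(s, "rdf:type", LABEL_TO_TYPE[node["label"]])]
    for key, value in node["properties"].items():
        triples.append((s, SCHEMA + key, value))   # align property keys with Schema.org
    return triples

def edge_to_triple(edge):
    """Map a relationship to a single triple using a Schema.org property."""
    return (BASE + str(edge["start"]), SCHEMA + edge["type"], BASE + str(edge["end"]))

nodes = [{"id": 1, "label": "SME", "properties": {"name": "Acme Biologics"}},
         {"id": 2, "label": "Product", "properties": {"name": "Buffer Kit"}}]
edges = [{"start": 1, "type": "makesOffer", "end": 2}]

graph = [t for n in nodes for t in node_to_triples(n)] + [edge_to_triple(e) for e in edges]
```

In practice a library such as rdflib would serialize these tuples to Turtle or N-Triples; the point here is only the mapping from graph elements to triples.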

19 pages, 2689 KB  
Article
A Multi-Temporal Knowledge Graph Framework for Landslide Monitoring and Hazard Assessment
by Runze Wu, Min Huang, Haishan Ma, Jicai Huang, Zhenhua Li, Hongbo Mei and Chengbin Wang
GeoHazards 2025, 6(3), 39; https://doi.org/10.3390/geohazards6030039 - 23 Jul 2025
Cited by 2 | Viewed by 1670
Abstract
In the landslide chain from pre-disaster conditions to landslide mitigation and recovery, time is an important factor in understanding the geological hazard process and managing landslides. Static knowledge graphs are unable to capture the temporal dynamics of landslide events. To address this limitation, we propose a systematic framework for constructing a multi-temporal knowledge graph of landslides that integrates multi-source temporal data, enabling the dynamic tracking of landslide processes. Our approach comprises three key steps. First, we summarize domain knowledge and develop a temporal ontology model based on the disaster chain management system. Second, we map heterogeneous datasets (both tabular and textual data) into triples/quadruples and represent them based on the RDF (Resource Description Framework) and quadruple approaches. Finally, we validate the utility of multi-temporal knowledge graphs through multidimensional queries and develop a web interface that allows users to input landslide names to retrieve location and time-axis information. A case study of the Zhangjiawan landslide in the Three Gorges Reservoir Area demonstrates the multi-temporal knowledge graph’s capability to track temporal updates effectively. The query results show that multi-temporal knowledge graphs effectively support multi-temporal queries. This study advances landslide research by combining static knowledge representation with the dynamic evolution of landslides, laying the foundation for hazard forecasting and intelligent early-warning systems. Full article
(This article belongs to the Special Issue Landslide Research: State of the Art and Innovations)

22 pages, 631 KB  
Article
Time Travel with the BiTemporal RDF Model
by Abdullah Uz Tansel, Di Wu and Hsien-Tseng Wang
Mathematics 2025, 13(13), 2109; https://doi.org/10.3390/math13132109 - 27 Jun 2025
Viewed by 2126
Abstract
The Internet is not just used for communication, transactions, and cloud storage; it also serves as a massive knowledge store where both people and machines can create, analyze, and use data and information. The Semantic Web was designed to enable machines to interpret the meaning of data, facilitating more informed and autonomous decision-making. The foundation of the Semantic Web is the Resource Description Framework (RDF). The standard RDF is limited to representing simple binary relationships in the form of the <subject, predicate, object> triple. In this paper, we present a new data model called BiTemporal RDF (BiTRDF), which adds valid time and transaction time to the standard RDF. Our approach treats temporal information as references instead of attributes, simplifying the semantics while enhancing the model’s expressiveness and consistency. BiTRDF treats all resources and relationships as inherently bitemporal, enabling the representation and reasoning of complex temporal relationships in RDF. Illustrative examples demonstrate the model’s support for type propagation, domain-range inference, and transitive relationships in a temporal setting. While this work lays a theoretical foundation, future research will address implementation, query language support, and compatibility with RDF streams and legacy systems. Full article
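The distinction between valid time (when a fact holds in the world) and transaction time (when the database recorded it) can be made concrete with a small sketch. The tuple layout and the `as_of` query below are illustrative assumptions in the spirit of bitemporal models, not BiTRDF's actual representation.

```python
# Minimal sketch of bitemporal triples: each fact carries a valid-time and a
# transaction-time interval, and a snapshot query fixes both dimensions.

INF = float("inf")

# (subject, predicate, object, valid_from, valid_to, tx_from, tx_to)
facts = [
    ("ex:Alice", "ex:worksFor", "ex:AcmeCorp", 2015, 2020, 2016, INF),
    ("ex:Alice", "ex:worksFor", "ex:BetaLtd",  2020, INF,  2020, INF),
]

def as_of(facts, valid_at, tx_at):
    """Return plain RDF triples that were valid at `valid_at`, according to
    what the store believed at transaction time `tx_at`."""
    return [(s, p, o) for (s, p, o, vf, vt, tf, tt) in facts
            if vf <= valid_at < vt and tf <= tx_at < tt]

snapshot = as_of(facts, valid_at=2018, tx_at=2019)
```

Note that `as_of(facts, valid_at=2018, tx_at=2015)` returns nothing: the 2015 employment was true in the world then, but the store had not yet recorded it, which is exactly the distinction the two time dimensions capture.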

18 pages, 3526 KB  
Article
Smart Data-Enabled Conservation and Knowledge Generation for Architectural Heritage System
by Ziyuan Rao and Guoguang Wang
Buildings 2025, 15(12), 2122; https://doi.org/10.3390/buildings15122122 - 18 Jun 2025
Cited by 3 | Viewed by 1135
Abstract
In architectural heritage conservation, fragmented data practices and heterogeneous formats hinder knowledge extraction, limiting the translation of raw data into actionable conservation insights. This study proposes a knowledge-centric framework integrating smart data methodologies to bridge this gap. The framework synergizes Heritage Building Information Modeling (HBIM), semantic knowledge graphs, and knowledge bases, prioritizing three interconnected dimensions: geometric digitization through 3D laser scanning and parametric HBIM reconstruction, semantic enrichment of historical texts via NLP and rule-based entity extraction, and knowledge graph-driven discovery of spatiotemporal patterns using Neo4j and ontology mapping. Validated through dual case studies—the Historical Educational Sites in South China (humanistic narratives) and the Dong ethnic drum towers (structural logic)—the framework demonstrates its capacity to automate knowledge generation, converting 20.5 GB of multi-source data into 2652 RDF triples that interconnect 1701 nodes across HBIM models and archival records. By enabling real-time visualization of semantic relationships (e.g., educator networks, mortise-and-tenon typologies) through graph queries, the system enhances interdisciplinary collaboration. Furthermore, the proposed smart data framework facilitated the generation of domain-specific knowledge through systematic data valorization, yielding actionable insights for architectural conservation practice. This research redefines conservation as a knowledge-to-action paradigm, where smart data methodologies unify tangible and intangible heritage values, fostering data-driven stewardship across cultural, historical, and technical domains. Full article
(This article belongs to the Special Issue Advanced Research on Cultural Heritage)

28 pages, 8885 KB  
Article
The Development of a Water Resource Monitoring Ontology as a Research Tool for Sustainable Regional Development
by Assel Ospan, Madina Mansurova, Vladimir Barakhnin, Aliya Nugumanova and Roman Titkov
Data 2023, 8(11), 162; https://doi.org/10.3390/data8110162 - 26 Oct 2023
Cited by 1 | Viewed by 3510
Abstract
The development of knowledge graphs about water resources as a tool for studying the sustainable development of a region is currently an urgent task, because the growing deterioration of the state of water bodies affects the ecology, economy, and health of the population of the region. This study presents a new ontological approach to water resource monitoring in Kazakhstan, providing data integration from heterogeneous sources, semantic analysis, decision support, and the querying, searching, and presentation of new knowledge in the field of water monitoring. The contribution of this work is the integration of table extraction and understanding, semantic web rule language, semantic sensor network, and time ontology methods, and the inclusion of a module of socioeconomic indicators that reveal the impact of water quality on the quality of life of the population. Using machine learning methods, the study derived six ontological rules to establish new knowledge about water resource monitoring. The query results demonstrate the effectiveness of the proposed method and its potential to improve water monitoring practices, promote sustainable resource management, and support decision-making processes in Kazakhstan; the ontology can also be integrated into a water resources ontology at the scale of Central Asia. Full article

28 pages, 8117 KB  
Article
Chatbots for Cultural Venues: A Topic-Based Approach
by Vasilis Bouras, Dimitris Spiliotopoulos, Dionisis Margaris, Costas Vassilakis, Konstantinos Kotis, Angeliki Antoniou, George Lepouras, Manolis Wallace and Vassilis Poulopoulos
Algorithms 2023, 16(7), 339; https://doi.org/10.3390/a16070339 - 14 Jul 2023
Cited by 8 | Viewed by 4826
Abstract
Digital assistants—such as chatbots—facilitate the interaction between persons and machines and are increasingly used in web pages of enterprises and organizations. This paper presents a methodology for the creation of chatbots that offer access to museum information. The paper introduces an information model that is offered through the chatbot, which subsequently maps the museum’s modeled information to structures of DialogFlow, Google’s chatbot engine. Means for automating the chatbot generation process are also presented. The evaluation of the methodology is illustrated through the application of a real case, wherein we developed a chatbot for the Archaeological Museum of Tripolis, Greece. Full article
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)

14 pages, 363 KB  
Article
[k]-Roman Domination in Digraphs
by Xinhong Zhang, Xin Song and Ruijuan Li
Symmetry 2023, 15(3), 743; https://doi.org/10.3390/sym15030743 - 17 Mar 2023
Cited by 1 | Viewed by 2086
Abstract
Let D = (V(D), A(D)) be a finite, simple digraph and k a positive integer. A function f : V(D) → {0, 1, 2, …, k+1} is called a [k]-Roman dominating function (for short, [k]-RDF) if f(AN[v]) ≥ |AN(v)| + k for any vertex v ∈ V(D), where AN(v) = {u ∈ N⁻(v) : f(u) ≥ 1} and AN[v] = AN(v) ∪ {v}. The weight of a [k]-RDF f is ω(f) = Σ_{v∈V(D)} f(v). The minimum weight of any [k]-RDF on D is the [k]-Roman domination number, denoted by γ_{[kR]}(D). For k = 2 and k = 3, these are the double Roman domination number and the triple Roman domination number, respectively. In this paper, we present some general bounds and the Nordhaus–Gaddum bound on the [k]-Roman domination number, and we determine bounds on the [k]-Roman domination number related to other domination parameters, such as the domination number and the signed domination number. Additionally, we give the exact values of γ_{[kR]}(P_n) and γ_{[kR]}(C_n) for the directed path P_n and directed cycle C_n. Full article
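The defining condition above can be checked mechanically on a small digraph. The sketch below is a hedged illustration: it assumes the in-neighborhood reading of AN(v) (the superscripts in the extracted abstract are mangled) and uses a fabricated labeling on the directed cycle C3, without claiming minimality.

```python
# Verify the [k]-Roman domination condition f(AN[v]) >= |AN(v)| + k on a
# small digraph, and compute the weight w(f). AN(v) is taken over
# in-neighbors, an assumption about the definition in the abstract.

def is_k_rdf(vertices, arcs, f, k):
    """f maps each vertex to {0, ..., k+1}; arcs are directed (u, v) pairs."""
    for v in vertices:
        in_nbrs = [u for (u, w) in arcs if w == v]
        active = [u for u in in_nbrs if f[u] >= 1]        # AN(v)
        closed = active + [v]                             # AN[v] = AN(v) + {v}
        if sum(f[u] for u in closed) < len(active) + k:
            return False
    return True

def weight(f):
    return sum(f.values())

# Directed cycle C3 with k = 2 (the double Roman domination case).
V = [0, 1, 2]
A = [(0, 1), (1, 2), (2, 0)]
f_good = {0: 3, 1: 0, 2: 2}   # satisfies the condition; weight 5
f_bad = {0: 1, 1: 1, 2: 1}    # all-ones fails: f(AN[v]) = 2 < 1 + 2
```

For instance, vertex 1 has in-neighbor 0 with f(0) = 3, so f(AN[1]) = 3 + 0 = 3 ≥ |AN(1)| + 2 = 3, and the other two vertices check out similarly.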
(This article belongs to the Special Issue Advances in Combinatorics and Graph Theory)

20 pages, 2643 KB  
Article
JQPro:Join Query Processing in a Distributed System for Big RDF Data Using the Hash-Merge Join Technique
by Nahla Mohammed Elzein, Mazlina Abdul Majid, Ibrahim Abaker Targio Hashem, Ashraf Osman Ibrahim, Anas W. Abulfaraj and Faisal Binzagr
Mathematics 2023, 11(5), 1275; https://doi.org/10.3390/math11051275 - 6 Mar 2023
Cited by 2 | Viewed by 3136
Abstract
In the last decade, the volume of semantic data has increased exponentially, with Resource Description Framework (RDF) repositories now holding trillions of triples, and RDF datasets continue to grow. With this growth, complex RDF queries are in increasing demand. Such queries often produce many common sub-expressions within a single query or across multiple queries running as a batch, and it is difficult to minimize both the number of RDF queries and the processing time for large amounts of related data in a typical distributed environment. To address this, we introduce JQPro, a join query processing model for big RDF data. Adopting a MapReduce framework, we developed three new algorithms for join query processing of RDF data: hash-join, sort-merge, and enhanced MapReduce-join. In our experiments, the JQPro model outperformed two popular systems, gStore and RDF-3X, with respect to average execution time. The JQPro model was also tested against RDF-3X, RDFox, and PARJs using the LUBM benchmark and performed better than the other models, improving execution time by 87.77% in comparison with the selected models. Full article
(This article belongs to the Special Issue Machine Learning, Statistics and Big Data)

16 pages, 2367 KB  
Article
Medical Knowledge Graph Completion Based on Word Embeddings
by Mingxia Gao, Jianguo Lu and Furong Chen
Information 2022, 13(4), 205; https://doi.org/10.3390/info13040205 - 18 Apr 2022
Cited by 11 | Viewed by 5316
Abstract
The aim of Medical Knowledge Graph Completion is to automatically predict one of three parts (head entity, relationship, and tail entity) in RDF triples from medical data, mainly text data. Following their introduction, the use of pretrained language models, such as Word2vec, BERT, and XLNET, to complete Medical Knowledge Graphs has become a popular research topic. Existing work focuses mainly on relationship completion and rarely addresses the completion of entities and whole triples. In this paper, a framework to predict RDF triples for Medical Knowledge Graphs based on word embeddings (named PTMKG-WE) is proposed, specifically for the completion of entities and triples. The framework first formalizes existing samples for a given relationship from the Medical Knowledge Graph as prior knowledge. Second, it trains word embeddings from big medical data according to prior knowledge through Word2vec. Third, it acquires candidate triples from word embeddings based on analogies from existing samples. Within this framework, the paper proposes two strategies to improve the relation features: one refines the relational semantics by clustering existing triple samples, and the other embeds the relationship more accurately by taking the mean of existing samples. These two strategies can be used separately (called PTMKG-WE-C and PTMKG-WE-M, respectively) and can also be superimposed (called PTMKG-WE-C-M). Finally, in the current study, PubMed data and the National Drug File-Reference Terminology (NDF-RT) were collected, and a series of experiments was conducted. The experimental results show that the proposed framework and the two improvement strategies can predict new triples for Medical Knowledge Graphs when medical data are sufficiently abundant and the Knowledge Graph has appropriate prior knowledge. The two strategies designed to improve the relation features significantly improve precision, and their effect is more pronounced when superimposed. Another conclusion is that, under the same parameter settings, the semantic precision of word embeddings can be improved by extending the breadth and depth of the data, which in most cases further improves the precision of the proposed prediction framework. Thus, collecting and training on big medical data is a viable method for learning more useful knowledge. Full article
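The analogy-plus-mean idea behind the approach described above can be sketched in a few lines. This is a hedged toy illustration: the tiny 2-D vectors, entity names, and relation name are fabricated, and the mean-offset step only loosely mirrors the paper's "-M" strategy.

```python
# Toy sketch of analogy-based triple prediction: the relation is embedded as
# the mean offset between tail and head vectors of existing (head, tail)
# samples, then applied to a new head entity to rank candidate tails.

def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def dist(a, b): return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

emb = {
    "aspirin":   [1.0, 0.0], "pain":        [1.0, 1.0],
    "metformin": [2.0, 0.0], "diabetes":    [2.0, 1.0],
    "statin":    [3.0, 0.0], "cholesterol": [3.0, 1.05],
}

# Existing (head, tail) samples for a hypothetical relation "may_treat".
samples = [("aspirin", "pain"), ("metformin", "diabetes")]

# Mean-offset relation embedding computed from the existing samples.
offsets = [sub(emb[t], emb[h]) for h, t in samples]
relation = [sum(c) / len(c) for c in zip(*offsets)]

def predict_tail(head, exclude):
    """Rank candidate entities by distance to head + relation offset."""
    target = add(emb[head], relation)
    candidates = [e for e in emb if e not in exclude and e != head]
    return min(candidates, key=lambda e: dist(emb[e], target))

tail = predict_tail("statin", exclude={"aspirin", "metformin"})
```

With real Word2vec embeddings the same arithmetic applies in a few hundred dimensions; the quality of the prediction then hinges on how well the existing samples pin down the relation offset, which is what the clustering and mean strategies refine.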
(This article belongs to the Special Issue Knowledge Graph Technology and Its Applications)

17 pages, 1332 KB  
Article
gRDF: An Efficient Compressor with Reduced Structural Regularities That Utilizes gRePair
by Tangina Sultana and Young-Koo Lee
Sensors 2022, 22(7), 2545; https://doi.org/10.3390/s22072545 - 26 Mar 2022
Cited by 5 | Viewed by 2723
Abstract
The explosive volume of semantic data published in the Resource Description Framework (RDF) data model demands efficient management and compression with a better compression ratio and runtime. Although extensive work has been carried out on compressing RDF datasets, existing compressors do not perform well in all dimensions, and they rarely exploit the graph patterns and structural regularities of real-world datasets, even though a variety of approaches reduce the size of a graph using grammar-based graph compression algorithms. In this study, we introduce a novel approach named gRDF (graph repair for RDF) that uses gRePair, one of the most efficient grammar-based graph compression schemes, to compress RDF datasets. In addition, we improve the performance of HDT (header-dictionary-triples), an efficient approach for compressing RDF datasets based on structural properties, by introducing modified HDT (M-HDT), which detects frequent graph patterns in a single pass over the dataset using a data-structure-oriented approach. In our proposed system, we use M-HDT to index the nodes and edge labels, then employ the gRePair algorithm to identify the grammar of the RDF graph. Afterward, the system improves the performance of k2-trees by introducing a more efficient algorithm to create the trees and serialize the RDF datasets. Our experiments affirm that the proposed gRDF scheme achieves approximately 26.12%, 13.68%, 6.81%, 2.38%, and 12.76% better compression ratios than the most prominent state-of-the-art schemes (HDT, HDT++, k2-trees, RDF-TR, and gRePair, respectively) on real-world datasets. Moreover, the processing efficiency of our proposed scheme also outperforms the others. Full article
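The dictionary-plus-ID-triples idea that the HDT family (and the indexing step of M-HDT) builds on can be shown in a short sketch. This is a hedged simplification: real HDT partitions the dictionary by term role and bit-packs the ID triples, none of which is modeled here.

```python
# Sketch of dictionary encoding for RDF triples: every distinct term is
# assigned an integer ID once, so repeated IRIs are stored a single time and
# the triples themselves shrink to small integer tuples.

def encode(triples):
    """Return a term dictionary and the triples rewritten as ID tuples."""
    ids = {}
    def lookup(term):
        if term not in ids:
            ids[term] = len(ids) + 1   # IDs assigned in order of first use
        return ids[term]
    id_triples = [(lookup(s), lookup(p), lookup(o)) for s, p, o in triples]
    return ids, id_triples

triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:knows", "ex:carol"),
    ("ex:bob",   "foaf:knows", "ex:carol"),
]
ids, id_triples = encode(triples)
```

The repeated predicate `foaf:knows` and the repeated subjects are each stored once in the dictionary; grammar-based schemes like gRePair then go further by also factoring out repeated graph substructures among the ID triples.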
(This article belongs to the Special Issue VOICE Sensors with Deep Learning)

26 pages, 2786 KB  
Article
QB4MobOLAP: A Vocabulary Extension for Mobility OLAP on the Semantic Web
by Irya Wisnubhadra, Safiza Kamal Baharin, Nurul A. Emran and Djoko Budiyanto Setyohadi
Algorithms 2021, 14(9), 265; https://doi.org/10.3390/a14090265 - 13 Sep 2021
Cited by 2 | Viewed by 2815
Abstract
The accessibility of devices that track the positions of moving objects has attracted many researchers in Mobility Online Analytical Processing (Mobility OLAP). Mobility OLAP makes use of trajectory data warehousing techniques, which typically include a path of moving objects at a particular point in time. Semantic Web (SW) users have published a large number of moving object datasets that include spatial and non-spatial data. These data are available as open data and require advanced analysis to aid in decision making. However, current SW technologies support advanced analysis only for multidimensional data warehouses and Online Analytical Processing (OLAP) over static spatial and non-spatial SW data. The existing technology does not support the modeling of moving object facts, the creation of basic mobility analytical queries, or the definition of fundamental operators and functions for moving object types. This article introduces the QB4MobOLAP vocabulary, which enables the analysis of mobility data stored in RDF cubes, and defines Mobility OLAP operators and SPARQL user-defined functions. QB4MobOLAP and the Mobility OLAP operators are evaluated by applying them to a practical use case of transportation analysis involving 8826 triples, consisting of approximately 7000 fact triples, each containing nearly 1000 temporal data points (equivalent to 7 million records in conventional databases). The execution of six pertinent spatiotemporal analytics query samples results in a practical, simple model with expressive performance, enabling executive decisions on transportation analysis. Full article

24 pages, 1008 KB  
Article
SPARQL2Flink: Evaluation of SPARQL Queries on Apache Flink
by Oscar Ceballos, Carlos Alberto Ramírez Restrepo, María Constanza Pabón, Andres M. Castillo and Oscar Corcho
Appl. Sci. 2021, 11(15), 7033; https://doi.org/10.3390/app11157033 - 30 Jul 2021
Cited by 6 | Viewed by 3929
Abstract
Existing SPARQL query engines and triple stores are continuously improved to handle more massive datasets. Several approaches have been developed in this context proposing the storage and querying of RDF data in a distributed fashion, mainly using the MapReduce Programming Model and Hadoop-based ecosystems. New trends in Big Data technologies have also emerged (e.g., Apache Spark, Apache Flink); they use distributed in-memory processing and promise to deliver higher data processing performance. In this paper, we present a formal interpretation of some PACT transformations implemented in the Apache Flink DataSet API. We use this formalization to provide a mapping to translate a SPARQL query to a Flink program. The mapping was implemented in a prototype used to determine the correctness and performance of the solution. The source code of the project is available in Github under the MIT license. Full article
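The core of a SPARQL-to-dataflow translation like the one described above is that each triple pattern becomes a filter transformation and each shared variable becomes a join key. The sketch below emulates this with plain Python over an in-memory triple list; the dataset, variable names, and two-pattern query are illustrative assumptions, not the paper's formalization of PACT transformations.

```python
# Toy emulation of translating a SPARQL basic graph pattern into dataflow
# transformations: match() plays the role of a filter that produces variable
# bindings, and join() combines bindings on their shared variables.

triples = [
    ("ex:alice", "ex:worksAt", "ex:upm"),
    ("ex:bob",   "ex:worksAt", "ex:upm"),
    ("ex:upm",   "ex:locatedIn", "ex:madrid"),
]

def match(pattern, triples):
    """Filter: bind variables (terms starting with '?') against each triple."""
    out = []
    for t in triples:
        binding, ok = {}, True
        for term, value in zip(pattern, t):
            if term.startswith("?"):
                if binding.setdefault(term, value) != value:
                    ok = False        # same variable bound to two values
            elif term != value:
                ok = False            # constant term does not match
        if ok:
            out.append(binding)
    return out

def join(left, right):
    """Join two binding sets on their shared variables."""
    out = []
    for a in left:
        for b in right:
            if all(a[k] == b[k] for k in a.keys() & b.keys()):
                out.append({**a, **b})
    return out

# SELECT ?p ?c WHERE { ?p ex:worksAt ?org . ?org ex:locatedIn ?c . }
result = join(match(("?p", "ex:worksAt", "?org"), triples),
              match(("?org", "ex:locatedIn", "?c"), triples))
```

In an actual Flink DataSet program the two `match` calls would be filter/map transformations over a distributed triple dataset and `join` a keyed join on `?org`; the binding semantics are the same.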
(This article belongs to the Special Issue Big Data Management and Analysis with Distributed or Cloud Computing)
