Special Issue "Sense Making in the Digital World"

A special issue of Future Internet (ISSN 1999-5903).

Deadline for manuscript submissions: closed (31 March 2020).

Special Issue Editors

Prof. Dr. Wolf-Tilo Balke
Guest Editor
Institute for Information Systems (IFIS), Technische Universität Braunschweig, Mühlenpfordtstr. 23, 38106 Braunschweig, Germany
Interests: databases and information systems; information services; digital libraries
Prof. Dr. Ina Schaefer
Guest Editor
Institute of Software Engineering and Automotive Informatics (ISF), Technische Universität Braunschweig, Mühlenpfordtstr. 23, 38106 Braunschweig, Germany
Interests: formal methods; verification; integration of formal methods into software development processes

Special Issue Information

Dear Colleagues,

Sense making is a process in which large volumes of basic data, usually collected from different, heterogeneous sources, are set into perspective and yield new insights or a deeper understanding of some topic. In particular, sense making is a way to look at data in light of a certain application domain or an application-specific question. This includes integrating data from different, possibly noisy sources; selecting relevant data and excluding irrelevant data; making different pieces of information plausible with respect to prior hypotheses; and empirically testing whether conjectures derived from initial models over samples of data sources (usually of limited size and varying quality) also hold for real-world data. Basic techniques used for sense making thus include, but are not limited to, knowledge representation and meta-data enhancement, big data analytics, uncertainty and ambiguity management and resolution, and a variety of reasoning techniques from the field of the Semantic Web.

Consequently, modern intelligent and data-driven applications will always have to rely on components that make sense of large sets of heterogeneous, diverse, and possibly noisy data from different sources. In order to actually build such applications and at the same time understand their behavior, on the one hand we need rigorous foundations for sense making on the data side, including means for the semantic integration of data, the assessment of data quality, etc. On the other hand, intelligent applications need engineering principles and methods that consider the specific challenges of data-driven sense making components. In particular, test and verification techniques are required to ensure the quality of system results influenced by sense making.

This Special Issue is intended to bring together researchers in all fields of sense making, from foundations, via engineering and analysis, to applications.

Prof. Dr. Wolf-Tilo Balke
Prof. Dr. Ina Schaefer
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • knowledge representation
  • meta-data enhancement
  • big data analytics
  • semantic web
  • software testing
  • software verification

Published Papers (1 paper)


Research

Open Access Article
Suitability of Graph Database Technology for the Analysis of Spatio-Temporal Data
Future Internet 2020, 12(5), 78; https://doi.org/10.3390/fi12050078 - 26 Apr 2020
Abstract
Every day, large quantities of spatio-temporal data are captured, whether by Web-based companies for social data mining or by other industries for a variety of applications ranging from disaster relief to marine data analysis. Making sense of all this data dramatically increases the need for intelligent backend systems that provide real-time query response times while scaling well (in terms of storage and performance) with increasing quantities of structured or semi-structured, multi-dimensional data. Currently, relational database solutions with spatial extensions, such as PostGIS, seem to be reaching their limits. However, graph database technology has been rising in popularity and has been found to handle graph-like spatio-temporal data much more effectively. Motivated by the need to effectively store multi-dimensional, interconnected data, this paper investigates whether or not graph database technology is better suited than the extended relational approach. Three database technologies are investigated using real-world datasets, namely PostgreSQL, JanusGraph, and TigerGraph. The datasets used are the Yelp challenge dataset and an ambulance response simulation dataset, thus combining real-world spatial data with realistic simulations that offer more control over the dataset. Our extensive evaluation is based on how each database performs under practical data analysis scenarios similar to those found at the enterprise level.
(This article belongs to the Special Issue Sense Making in the Digital World)
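As a minimal illustrative sketch (not the paper's actual benchmark, and with a toy schema and data invented for illustration), the contrast between the relational and the graph view of interconnected review data can be shown with Python's built-in sqlite3 module and a plain adjacency dictionary: the question "which other businesses were reviewed by anyone who reviewed a given business?" is a self-join on the relational side, but a simple two-hop traversal on the graph side:

```python
import sqlite3

# Toy "Yelp-like" data (invented): pairs of (user, business) reviews.
reviews = [("alice", "cafe_a"), ("alice", "bar_b"),
           ("bob", "cafe_a"), ("bob", "deli_c"),
           ("carol", "deli_c")]

# Relational approach: a self-join over the review table to find
# businesses reviewed by anyone who also reviewed 'cafe_a'.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE review (user TEXT, business TEXT)")
conn.executemany("INSERT INTO review VALUES (?, ?)", reviews)
rows = conn.execute("""
    SELECT DISTINCT r2.business
    FROM review r1 JOIN review r2 ON r1.user = r2.user
    WHERE r1.business = 'cafe_a' AND r2.business != 'cafe_a'
""").fetchall()
relational_result = {b for (b,) in rows}

# Graph approach: model users and businesses as nodes with review
# edges; the same question becomes a two-hop neighborhood traversal.
adjacency = {}
for user, business in reviews:
    adjacency.setdefault(user, set()).add(business)
    adjacency.setdefault(business, set()).add(user)

graph_result = set()
for user in adjacency["cafe_a"]:      # hop 1: reviewers of cafe_a
    graph_result |= adjacency[user]   # hop 2: businesses they reviewed
graph_result.discard("cafe_a")

# Both approaches answer the same question.
assert relational_result == graph_result == {"bar_b", "deli_c"}
```

Each additional hop adds another join to the SQL query, whereas graph query languages such as Gremlin (JanusGraph) or GSQL (TigerGraph) express deeper traversals directly, which is one intuition behind the comparison the paper evaluates at scale.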
