Systems Simulation and Modelling for IoT Data Processing Applications

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (30 August 2022) | Viewed by 6780

Special Issue Editors


Guest Editor
Karunya Institute of Technology and Sciences, Tamil Nadu, India
Interests: medical imaging; healthcare; AI; deep learning

Guest Editor
Informatics Building, School of Informatics, University of Leicester, University Road, Leicester LE1 7RH, UK
Interests: artificial intelligence; medical sensors; image processing; deep learning


Special Issue Information

Dear Colleagues,

System simulation and modeling (SSM) focuses on solving problems through the use of models and simulations. SSM is applied in almost every science and engineering discipline, given its multidisciplinary nature. It provides frameworks that are applicable across disciplines and benchmark tools that are useful for developing IoT data processing applications. Modeling theories need to be transformed into consistent frameworks, which in turn are implemented as consistent benchmarks. The world is clearly in the era of internet data processing (e.g., the Internet of Things). The challenge for IoT data processing systems is balancing operation and cost tradeoffs by optimizing the configuration of both the IoT hardware and software layers to accommodate users’ constraints. Such platforms allow internet data processing application developers and researchers to perform tests in a controllable and repeatable manner. Propelled by the need to analyze the performance of different IoT data processing and Industry 4.0 frameworks, researchers have introduced several IoT simulation and modeling benchmarks. Despite the substantial progress achieved, however, the research community still needs a holistic, comprehensive simulation and modeling platform for IoT data processing. This Special Issue aspires to furnish a compendium of high-quality research articles on the following areas of interest, including but not limited to:

  • Theoretical aspects of modeling and simulation, including formal modeling, model-checking, sensitivity analysis, Monte Carlo methods, variance reduction techniques, experimental design, meta-modeling, methods and algorithms for validation and verification, and selection and comparison procedures;
  • Development of discrete event simulation benchmarks to evaluate workloads in heterogeneous state-of-the-art hardware platforms;
  • Advances in modeling and simulation tools for performance evaluation, security problems, and scalable IoT data processing environments;
  • Cognitive combined discrete and continuous simulations based on both IoT hardware and software;
  • Cognitive interactive modeling for internet data processing;
  • Methodology and requirements of benchmarking IoT data processing and Industry 4.0;
  • Data-driven cognitive computing for industrial IoT;
  • Advances in Industry 4.0 frameworks and benchmarks for large-scale data analytics;
  • IoT data processing frameworks for industrial predictive analytics.

Dr. J Dinesh Peter
Dr. Yu-Dong Zhang
Dr. Steven L. Fernandes
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Simulation and modeling
  • Internet of Things
  • Data processing
  • Cognitive computing
  • Industry 4.0

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

18 pages, 2614 KiB  
Article
A Record Linkage-Based Data Deduplication Framework with DataCleaner Extension
by Otmane Azeroual, Meena Jha, Anastasija Nikiforova, Kewei Sha, Mohammad Alsmirat and Sanjay Jha
Multimodal Technol. Interact. 2022, 6(4), 27; https://doi.org/10.3390/mti6040027 - 11 Apr 2022
Cited by 15 | Viewed by 5691
Abstract
The data management process is characterised by a set of tasks in which data quality management (DQM) is one of the core components. Data quality, however, is a multidimensional concept, and the nature of data quality issues is very diverse. One of the most widely anticipated data quality challenges, which becomes particularly vital when data come from multiple sources, as is typical in the current data-driven world, is duplicates, or non-uniqueness. Moreover, duplicates have been recognised as one of the key domain-specific data quality dimensions in Internet of Things (IoT) application domains, where smart grids and health dominate. Duplicate data lead to inaccurate analyses and thus to wrong decisions; negatively affect data-driven and data processing activities such as the development of models, forecasts, and simulations; harm customer service, risk and crisis management, and service personalisation in terms of both accuracy and trustworthiness; and decrease user adoption and satisfaction. The process of determining and eliminating duplicates is known as deduplication, while the process of finding duplicates in one or more databases that refer to the same entities is known as record linkage. To find duplicates, the data sets are compared with each other using similarity functions, usually applied to pairs of input strings, which requires quadratic time complexity. To defuse the quadratic complexity of the problem, especially in large data sources, record linkage methods such as blocking and sorted neighbourhood are used. In this paper, we propose a six-step record linkage deduplication framework. The operation of the framework is demonstrated on a simplified example of research data artifacts, such as publications and research projects, of a real-world research institution representing the Research Information Systems (RIS) domain.
To make the proposed framework usable, we integrated it into a tool that is already used in practice by developing a prototype extension for the well-known DataCleaner. The framework detects and visualises duplicates, identifying redundancies and presenting them to the user in a user-friendly manner that allows their further elimination. By removing the redundancies, the quality of the data is improved, thereby improving analyses and decision-making. This study calls on other researchers to take a step towards the “golden record” that can be achieved when all data quality issues are recognised and resolved, thus moving towards absolute data quality.
(This article belongs to the Special Issue Systems Simulation and Modelling for IoT Data Processing Applications)
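The sorted-neighbourhood technique mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the sorting key, similarity function (`difflib.SequenceMatcher`), window size, and threshold are illustrative assumptions. The idea is to sort records by a blocking key and compare each record only with its few neighbours, avoiding the quadratic all-pairs comparison:

```python
from difflib import SequenceMatcher

def sorted_neighbourhood_dedup(records, key, window=3, threshold=0.85):
    """Find likely duplicate pairs without comparing every pair.

    Records are sorted by a blocking key; each record is then compared
    only with its `window - 1` successors, reducing the naive O(n^2)
    comparison count to roughly O(n * window).
    """
    ordered = sorted(records, key=key)
    pairs = []
    for i, rec in enumerate(ordered):
        for other in ordered[i + 1 : i + window]:
            # String similarity in [0, 1]; pairs above the threshold
            # are flagged as candidate duplicates.
            sim = SequenceMatcher(None, key(rec), key(other)).ratio()
            if sim >= threshold:
                pairs.append((rec, other, round(sim, 2)))
    return pairs

# Hypothetical records: the two "Schmidt" entries sort next to each
# other and are flagged as a likely duplicate pair.
people = [
    {"name": "Anna Schmidt"},
    {"name": "Ana Schmidt"},
    {"name": "Bert Meyer"},
    {"name": "Carla Jones"},
]
dupes = sorted_neighbourhood_dedup(people, key=lambda r: r["name"])
print(dupes)
```

In a real pipeline the blocking key would typically be a normalised or phonetic encoding of a field rather than the raw string, so that near-duplicates reliably land within the same window.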
