Privacy and Security in Computing Continuum and Data-Driven Workflows

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Cybersecurity".

Deadline for manuscript submissions: 30 June 2025 | Viewed by 23844

Special Issue Editors


Dr. Thomas Loruenser
Guest Editor
AIT Austrian Institute of Technology, Giefinggasse 4, 1210 Vienna, Austria
Interests: (quantum) cryptography; network security; distributed systems; cloud security; IoT security

Dr. Stephan Krenn
Guest Editor
AIT Austrian Institute of Technology, Giefinggasse 4, 1210 Vienna, Austria
Interests: IT security; encryption; authentication; applied cryptography; information privacy; network security

Special Issue Information

Dear Colleagues,

Current technological progress in ICT, together with the demand for digital transformation, is driving the development of novel networks, systems, and platforms, which in turn enable collaboration and data-driven workflows on a large scale and beyond existing trust boundaries.

The ongoing seamless integration of cloud computing, edge computing, and the Internet of Things (IoT) drives this change towards the so-called computing continuum, which forms the basis for all emerging digital data and collaboration spaces.

However, such systems introduce many new and significant security and privacy risks. Given the vast amount of potentially sensitive information being processed and the large number of involved entities and devices with differing profiles (e.g., available resources and capabilities), protecting confidentiality throughout the entire data life cycle in the computing continuum is a major challenge. Similarly, as ever more services and devices are added, protecting authenticity from end to end is equally important. Novel solutions and approaches for secure and privacy-preserving data sharing and processing will therefore be key to unleashing the potential of the computing continuum while still ensuring data sovereignty.

This Special Issue is dedicated to research on methods, technologies, and novel approaches for increasing security and privacy protection for data and users. We invite cutting-edge contributions ranging from fundamental theoretical research to practical applications.

Dr. Thomas Loruenser
Dr. Stephan Krenn
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • security and privacy for data spaces
  • privacy-friendly data markets and platforms
  • secure data-driven workflows
  • IoT and cyber-physical security and/or privacy
  • online privacy and anonymous authentication
  • data sovereignty
  • cloud and edge computing security
  • privacy-enhancing technologies
  • secure computing technologies
  • privacy-preserving machine learning
  • computing on encrypted data
  • hardware security and attestation
  • end-to-end security in IoT and cloud computing
  • verifiable computing
  • end-to-end authenticity

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research


61 pages, 10098 KiB  
Article
Segmentation and Filtering Are Still the Gold Standard for Privacy in IoT—An In-Depth STRIDE and LINDDUN Analysis of Smart Homes
by Henrich C. Pöhls, Fabian Kügler, Emiliia Geloczi and Felix Klement
Future Internet 2025, 17(2), 77; https://doi.org/10.3390/fi17020077 - 10 Feb 2025
Viewed by 666
Abstract
Every year, more and more electronic devices are used in households, which certainly leads to an increase in the total number of communications between devices. During communication, a huge amount of information is transmitted, which can be critical or even malicious. To avoid the transmission of unnecessary information, a filtering mechanism can be applied. Filtering is a long-standing method used by network engineers to segregate and thus block unwanted traffic from reaching certain devices. In this work, we show how to apply this to the Internet of Things (IoT) Smart Home domain as it introduces numerous networked devices into our daily lives. To analyse the positive influence of filtering on security and privacy, we offer the results from our in-depth STRIDE and LINDDUN analysis of several Smart Home scenarios before and after the application. To show that filtering can be applied to other IoT domains, we offer a brief glimpse into the domain of smart cars. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
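To make the filtering idea concrete, here is a minimal default-deny allowlist sketch in Python (device names, ports, and rules are hypothetical illustrations, not the rule set analysed in the paper): traffic between two smart-home devices is forwarded only if the flow appears on an explicit allowlist.

```python
# Minimal illustration of segmentation by allowlist filtering (hypothetical rules,
# not the rule set used in the paper): only explicitly permitted device-to-device
# flows are forwarded; all other traffic is dropped by default.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str   # sending device, e.g. "motion_sensor"
    dst: str   # receiving device, e.g. "hub"
    port: int  # destination port

# Default-deny allowlist: (src, dst, port) tuples that may communicate.
ALLOWLIST = {
    ("motion_sensor", "hub", 8883),  # sensor publishes via MQTT/TLS to the hub only
    ("hub", "light_bulb", 443),      # hub controls the bulb; bulb never talks to the sensor
}

def filter_flow(flow: Flow) -> bool:
    """Return True if the flow is forwarded, False if it is dropped."""
    return (flow.src, flow.dst, flow.port) in ALLOWLIST

if __name__ == "__main__":
    print(filter_flow(Flow("motion_sensor", "hub", 8883)))       # True  (allowed)
    print(filter_flow(Flow("motion_sensor", "light_bulb", 80)))  # False (segmented off)
```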

29 pages, 6269 KiB  
Article
Malware Detection Based on API Call Sequence Analysis: A Gated Recurrent Unit–Generative Adversarial Network Model Approach
by Nsikak Owoh, John Adejoh, Salaheddin Hosseinzadeh, Moses Ashawa, Jude Osamor and Ayyaz Qureshi
Future Internet 2024, 16(10), 369; https://doi.org/10.3390/fi16100369 - 13 Oct 2024
Cited by 1 | Viewed by 3384
Abstract
Malware remains a major threat to computer systems, with a vast number of new samples being identified and documented regularly. Windows systems are particularly vulnerable to malicious programs like viruses, worms, and trojans. Dynamic analysis, which involves observing malware behavior during execution in a controlled environment, has emerged as a powerful technique for detection. This approach often focuses on analyzing Application Programming Interface (API) calls, which represent the interactions between the malware and the operating system. Recent advances in deep learning have shown promise in improving malware detection accuracy using API call sequence data. However, the potential of Generative Adversarial Networks (GANs) for this purpose remains largely unexplored. This paper proposes a novel hybrid deep learning model combining Gated Recurrent Units (GRUs) and GANs to enhance malware detection based on API call sequences from Windows portable executable files. We evaluate our GRU–GAN model against other approaches like Bidirectional Long Short-Term Memory (BiLSTM) and Bidirectional Gated Recurrent Unit (BiGRU) on multiple datasets. Results demonstrated the superior performance of our hybrid model, achieving 98.9% accuracy on the most challenging dataset. It outperformed existing models in resource utilization, with faster training and testing times and low memory usage. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
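As a rough sketch of the recurrent part of such a detector (the GAN component and the paper's datasets are omitted; the vocabulary size and layer dimensions are assumptions), a GRU can classify an integer-encoded API call sequence as benign or malicious:

```python
# Minimal GRU-based classifier over encoded API call sequences (a sketch only;
# vocabulary size, embedding/hidden dimensions, and the GAN component from the
# paper are not reproduced here).
import torch
import torch.nn as nn

class ApiCallGRU(nn.Module):
    def __init__(self, vocab_size: int = 300, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # malicious vs. benign logit

    def forward(self, api_ids: torch.Tensor) -> torch.Tensor:
        # api_ids: (batch, seq_len) integer-encoded API call names
        emb = self.embed(api_ids)
        _, last_hidden = self.gru(emb)             # last_hidden: (1, batch, hidden_dim)
        return self.head(last_hidden.squeeze(0))   # logits, shape (batch, 1)

if __name__ == "__main__":
    model = ApiCallGRU()
    batch = torch.randint(1, 300, (8, 100))        # 8 sequences of 100 API calls
    logits = model(batch)
    loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(8, 1))
    loss.backward()                                # a training step would follow
```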

36 pages, 3662 KiB  
Article
Enhancing Network Slicing Security: Machine Learning, Software-Defined Networking, and Network Functions Virtualization-Driven Strategies
by José Cunha, Pedro Ferreira, Eva M. Castro, Paula Cristina Oliveira, Maria João Nicolau, Iván Núñez, Xosé Ramon Sousa and Carlos Serôdio
Future Internet 2024, 16(7), 226; https://doi.org/10.3390/fi16070226 - 27 Jun 2024
Cited by 7 | Viewed by 6162
Abstract
The rapid development of 5G networks and the anticipation of 6G technologies have ushered in an era of highly customizable network environments facilitated by the innovative concept of network slicing. This technology allows the creation of multiple virtual networks on the same physical infrastructure, each optimized for specific service requirements. Despite its numerous benefits, network slicing introduces significant security vulnerabilities that must be addressed to prevent exploitation by increasingly sophisticated cyber threats. This review explores the application of cutting-edge technologies—Artificial Intelligence (AI), specifically Machine Learning (ML), Software-Defined Networking (SDN), and Network Functions Virtualization (NFV)—in crafting advanced security solutions tailored for network slicing. AI’s predictive threat detection and automated response capabilities are analysed, highlighting its role in maintaining service integrity and resilience. Meanwhile, SDN and NFV are scrutinized for their ability to enforce flexible security policies and manage network functionalities dynamically, thereby enhancing the adaptability of security measures to meet evolving network demands. Thoroughly examining the current literature and industry practices, this paper identifies critical research gaps in security frameworks and proposes innovative solutions. We advocate for a holistic security strategy integrating ML, SDN, and NFV to enhance data confidentiality, integrity, and availability across network slices. The paper concludes with future research directions to develop robust, scalable, and efficient security frameworks capable of supporting the safe deployment of network slicing in next-generation networks. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
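One of the ML building blocks discussed in such architectures, per-slice anomaly detection on traffic statistics, can be sketched as follows (the features, values, and contamination rate are illustrative assumptions, not taken from the review):

```python
# Illustrative per-slice anomaly detection on simple traffic features
# (packet rate, mean packet size); feature choice and contamination rate
# are assumptions for the sketch, not values from the review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline traffic observed on one network slice: [packets/s, mean packet size]
baseline = rng.normal(loc=[1000, 800], scale=[50, 40], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: one normal sample and one volumetric burst on the slice
new = np.array([[1020, 790], [9000, 1400]])
print(detector.predict(new))  # 1 = normal, -1 = anomalous -> e.g. [ 1 -1]
```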

17 pages, 1065 KiB  
Article
Evaluating Quantized Llama 2 Models for IoT Privacy Policy Language Generation
by Bhavani Malisetty and Alfredo J. Perez
Future Internet 2024, 16(7), 224; https://doi.org/10.3390/fi16070224 - 26 Jun 2024
Cited by 1 | Viewed by 2370
Abstract
Quantized large language models are large language models (LLMs) optimized for model size while preserving their efficacy. They can be executed on consumer-grade computers without the powerful features of dedicated servers needed to execute regular (non-quantized) LLMs. Because of their ability to summarize, answer questions, and provide insights, LLMs are being used to analyze large texts/documents. One of these types of large texts/documents are Internet of Things (IoT) privacy policies, which are documents specifying how smart home gadgets, health-monitoring wearables, and personal voice assistants (among others) collect and manage consumer/user data on behalf of Internet companies providing services. Even though privacy policies are important, they are difficult to comprehend due to their length and how they are written, which makes them attractive for analysis using LLMs. This study evaluates how quantized LLMs are modeling the language of privacy policies to be potentially used to transform IoT privacy policies into simpler, more usable formats, thus aiding comprehension. While the long-term goal is to achieve this usable transformation, our work focuses on evaluating quantized LLM models used for IoT privacy policy language. Particularly, we study 4-bit, 5-bit, and 8-bit quantized versions of the large language model Meta AI version 2 (Llama 2) and the base Llama 2 model (zero-shot, without fine-tuning) under different metrics and prompts to determine how well these quantized versions model the language of IoT privacy policy documents by completing and generating privacy policy text. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
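A local zero-shot run of the kind evaluated here typically looks like the following sketch, using the llama-cpp-python bindings with a GGUF-quantized checkpoint (the file name, prompt, and sampling parameters are placeholders, not the paper's exact configuration):

```python
# Sketch of zero-shot completion of privacy policy text with a quantized Llama 2
# model via llama-cpp-python; the model file name, prompt, and sampling parameters
# are placeholders, not the configuration used in the paper.
from llama_cpp import Llama

llm = Llama(model_path="llama-2-7b.Q4_K_M.gguf", n_ctx=2048)  # 4-bit quantized checkpoint

prompt = "We collect the following categories of data from your smart thermostat: "
result = llm(prompt, max_tokens=128, temperature=0.2)
print(result["choices"][0]["text"])  # model-generated continuation of the policy text
```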

16 pages, 743 KiB  
Article
FREDY: Federated Resilience Enhanced with Differential Privacy
by Zacharias Anastasakis, Terpsichori-Helen Velivassaki, Artemis Voulkidis, Stavroula Bourou, Konstantinos Psychogyios, Dimitrios Skias and Theodore Zahariadis
Future Internet 2023, 15(9), 296; https://doi.org/10.3390/fi15090296 - 1 Sep 2023
Cited by 2 | Viewed by 1687
Abstract
Federated Learning is identified as a reliable technique for distributed training of ML models. Specifically, a set of dispersed nodes may collaborate through a federation in producing a jointly trained ML model without disclosing their data to each other. Each node performs local model training and then shares its trained model weights with a server node, usually called Aggregator in federated learning, as it aggregates the trained weights and then sends them back to its clients for another round of local training. Despite the data protection and security that FL provides to each client, there are still well-studied attacks such as membership inference attacks that can detect potential vulnerabilities of the FL system and thus expose sensitive data. In this paper, in order to prevent this kind of attack and address private data leakage, we introduce FREDY, a differential private federated learning framework that enables knowledge transfer from private data. Particularly, our approach has a teachers–student scheme. Each teacher model is trained on sensitive, disjoint data in a federated manner, and the student model is trained on the most voted predictions of the teachers on public unlabeled data which are noisy aggregated in order to guarantee the privacy of each teacher’s sensitive data. Only the student model is publicly accessible as the teacher models contain sensitive information. We show that our proposed approach guarantees the privacy of sensitive data against model inference attacks while it combines the federated learning settings for the model training procedures. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
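The noisy aggregation of teacher votes that the student learns from can be sketched as follows (a PATE-style noisy argmax; the noise scale and class count are illustrative assumptions, not FREDY's actual parameters):

```python
# Sketch of differentially private aggregation of teacher predictions on a public,
# unlabeled sample: count votes per class, add Laplace noise, return the noisy argmax.
# The noise scale (controlled by epsilon) and number of classes are illustrative only.
import numpy as np

def noisy_vote(teacher_predictions: np.ndarray, num_classes: int, epsilon: float) -> int:
    """teacher_predictions: array of class labels, one per teacher model."""
    counts = np.bincount(teacher_predictions, minlength=num_classes).astype(float)
    counts += np.random.laplace(loc=0.0, scale=1.0 / epsilon, size=num_classes)
    return int(np.argmax(counts))  # label used to train the public student model

if __name__ == "__main__":
    votes = np.array([2, 2, 1, 2, 0, 2, 2, 1])  # 8 teachers, 3 classes
    print(noisy_vote(votes, num_classes=3, epsilon=0.5))
```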

Review


25 pages, 374 KiB  
Review
Dynamic Risk Assessment in Cybersecurity: A Systematic Literature Review
by Pavlos Cheimonidis and Konstantinos Rantos
Future Internet 2023, 15(10), 324; https://doi.org/10.3390/fi15100324 - 28 Sep 2023
Cited by 16 | Viewed by 6824
Abstract
Traditional information security risk assessment (RA) methodologies and standards, adopted by information security management systems and frameworks as a foundation stone towards robust environments, face many difficulties in modern environments where the threat landscape changes rapidly and new vulnerabilities are being discovered. In order to overcome this problem, dynamic risk assessment (DRA) models have been proposed to continuously and dynamically assess risks to organisational operations in (near) real time. The aim of this work is to analyse the current state of DRA models that have been proposed for cybersecurity, through a systematic literature review. The screening process led us to study 50 DRA models, categorised based on the respective primary analysis methods they used. The study provides insights into the key characteristics of these models, including the maturity level of the examined models, the domain or application area in which these models flourish, and the information they utilise in order to produce results. The aim of this work is to answer critical research questions regarding the development of dynamic risk assessment methodologies and provide insights on the already developed methods as well as future research directions. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
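As a toy illustration of what "dynamic" means here, a risk score can be recomputed whenever new threat or vulnerability information arrives rather than being fixed at assessment time (the multiplicative model and scaling below are illustrative assumptions, not a method from the surveyed literature):

```python
# Toy dynamic risk score on a 0-100 scale: threat likelihood x vulnerability severity
# x asset impact, recomputed whenever any input changes. The multiplicative form and
# scaling are illustrative assumptions, not a model from the surveyed literature.
from dataclasses import dataclass

@dataclass
class RiskInputs:
    threat_likelihood: float   # 0..1, e.g. derived from live threat intelligence feeds
    vuln_severity: float       # 0..10, e.g. CVSS base score of the relevant vulnerability
    asset_impact: float        # 0..1, business impact of the affected asset

def risk_score(inputs: RiskInputs) -> float:
    return inputs.threat_likelihood * inputs.vuln_severity * inputs.asset_impact * 10

state = RiskInputs(threat_likelihood=0.2, vuln_severity=7.5, asset_impact=0.8)
print(round(risk_score(state), 1))   # baseline score

state.threat_likelihood = 0.9        # new intel: active exploitation observed
print(round(risk_score(state), 1))   # score rises immediately, no re-assessment cycle
```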
