Search Results (40)

Search Parameters:
Keywords = containerized microservice

36 pages, 1061 KB  
Article
On Optimized Scheduling Scheme for Rapid Pod Autoscaling in Kubernetes
by Bowen Zhou, Subrota Kumar Mondal, Yuning Cheng and H. M. Dipu Kabir
Appl. Sci. 2026, 16(5), 2481; https://doi.org/10.3390/app16052481 - 4 Mar 2026
Viewed by 112
Abstract
Kubernetes, an open-source project initiated by Google for managing and organizing containers in cloud platforms, has become the preferred choice for deploying large-scale containerized microservice architectures. Kubernetes employs a scheduler that considers constraints defined by workload owners and cluster managers to identify the most suitable node to host a given task. Although it can be configured in a multitude of ways, the default scheduler that comes with Kubernetes is not fully capable of efficiently handling the demands of Horizontal Pod Autoscaling (HPA), particularly when deploying a large number of similar pods simultaneously. This article focuses on the optimization of the Kubernetes scheduler to allocate and manage resources more efficiently in rapid Pod autoscaling scenarios. The scheduling mechanisms of Kubernetes offer considerable potential for improvement. This article introduces a custom scheduler that reduces redundant scoring steps using a caching mechanism, thereby accelerating the scheduling process for horizontal scaling of pods. The article begins with an in-depth literature review, followed by the development of novel algorithms to address existing gaps in the default scheduler. The custom scheduler is then subjected to rigorous simulation and testing phases to ensure its robustness and efficiency. Experimental results demonstrate the effectiveness of the proposed approach in improving the scheduling performance for HPA in Kubernetes.
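The caching idea in this abstract lends itself to a compact sketch. Everything below is an illustrative assumption rather than the paper's implementation: a `PodSpec` shared by all replicas of a Deployment, a least-allocated scoring rule, and a `CachingScheduler` that ranks nodes once per spec and reuses the ranking during a scale-out burst.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PodSpec:
    # Replicas of one Deployment share a spec, so node scores computed
    # for the first replica are valid for its siblings (hypothetical model).
    cpu_request: float
    mem_request: float

def score_node(node: dict, pod: PodSpec) -> float:
    # Least-allocated style score: prefer nodes with more remaining capacity.
    return (node["cpu_free"] - pod.cpu_request) + (node["mem_free"] - pod.mem_request)

class CachingScheduler:
    """Caches per-spec node rankings to skip redundant scoring during HPA bursts."""

    def __init__(self, nodes: dict):
        self.nodes = nodes
        self.score_calls = 0  # how many node scorings actually ran
        self._cache = {}      # PodSpec -> ranked node names

    def schedule(self, pod: PodSpec) -> str:
        if pod not in self._cache:
            self.score_calls += len(self.nodes)
            self._cache[pod] = sorted(
                self.nodes, key=lambda n: score_node(self.nodes[n], pod),
                reverse=True)
        chosen = self._cache[pod][0]
        # A real scheduler would invalidate the cache once capacity drifts.
        self.nodes[chosen]["cpu_free"] -= pod.cpu_request
        self.nodes[chosen]["mem_free"] -= pod.mem_request
        return chosen

nodes = {"node-a": {"cpu_free": 4.0, "mem_free": 8.0},
         "node-b": {"cpu_free": 2.0, "mem_free": 4.0}}
scheduler = CachingScheduler(nodes)
placements = [scheduler.schedule(PodSpec(0.5, 0.5)) for _ in range(3)]
```

With two nodes and three identical pods, scoring runs only once (two node scorings total) instead of three times.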

22 pages, 10518 KB  
Article
A Scalable Microservices Architecture for Condition Monitoring and State-of-Health Tracking in Power Conversion Systems
by José M. García-Campos, Abraham M. Alcaide, A. Letrado-Castellanos, Ramon Portillo and Jose I. Leon
Sensors 2026, 26(4), 1282; https://doi.org/10.3390/s26041282 - 16 Feb 2026
Viewed by 349
Abstract
The role of power converters in modern electrical infrastructure (such as electric vehicle charging stations, battery energy storage systems and photovoltaic energy systems) has become critical. Given the high reliability required by these converters, continuous condition monitoring for predictive maintenance is mandatory. Traditional SCADA and HMI systems often face scalability bottlenecks and lack the flexibility in data aggregation and storage scalability required for long-term predictive maintenance. This paper proposes a scalable, containerized microservices-based architecture for degradation tracking and State-of-Health (SoH) monitoring in power conversion systems. The architecture features a decoupled four-layer structure, utilizing dedicated UDP servers for low-latency data ingestion, RabbitMQ (AMQP) for robust message routing, and a NoSQL (MongoDB) storage layer with a FastAPI interface. The proposed system was validated using a Hardware-in-the-Loop (HiL) setup with a Typhoon HIL606 simulator monitoring an Active Neutral Point Clamped (ANPC) power converter. Experimental stress tests demonstrated a Packet Delivery Ratio (PDR) of 1.0 at ingestion rates up to 100 messages per second (msgs/s) per node. The system exhibits transmission and processing overheads consistently below 5 ms, ensuring timely data availability for tracking thermal dynamics and parametric aging trends. This operational performance significantly exceeds the nominal requirement of 2 msgs/s for condition monitoring, ensuring robust data integrity. Finally, this modular approach provides the horizontal scalability necessary for Industry 4.0 integration, offering a high-performance framework for long-term health monitoring in modern power electronics.
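The ingestion edge of such an architecture is easy to sketch. The code below is a minimal stand-in, not the authors' system: a UDP socket receives JSON telemetry from a converter node and hands it to a `queue.Queue`, which plays the role RabbitMQ plays in the paper; the field names (`node`, `temp_c`) are invented.

```python
import json
import queue
import socket

def make_ingest_socket(host="127.0.0.1"):
    # Port 0 lets the OS choose a free port; UDP keeps per-message
    # overhead low, matching a low-latency ingestion layer.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, 0))
    return sock

def ingest_one(sock, out_queue):
    # Parse one datagram and hand it to the message-routing layer
    # (queue.Queue stands in for RabbitMQ/AMQP here).
    data, _addr = sock.recvfrom(65535)
    msg = json.loads(data.decode())
    out_queue.put(msg)
    return msg

# Demo: one converter node sends a temperature sample over loopback.
rx = make_ingest_socket()
q = queue.Queue()
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(json.dumps({"node": "anpc-1", "temp_c": 61.4}).encode(),
          rx.getsockname())
sample = ingest_one(rx, q)
tx.close()
rx.close()
```

UDP offers no delivery guarantee, which is why a Packet Delivery Ratio is a meaningful metric for this layer in the first place.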
(This article belongs to the Special Issue Condition Monitoring of Electrical Equipment Within Power Systems)

24 pages, 894 KB  
Article
Integrating Continuous Compliance into DevSecOps Pipelines: A Data Engineering Perspective
by Aleksandr Zakharchenko
Software 2026, 5(1), 6; https://doi.org/10.3390/software5010006 - 10 Feb 2026
Viewed by 489
Abstract
Modern DevSecOps environments face a persistent tension between accelerating deployment velocity and maintaining verifiable compliance with regulatory, security, and internal governance standards. Traditional snapshot-in-time audits and fragmented compliance tooling struggle to capture the dynamic nature of containerized, continuous delivery, often resulting in compliance drift and delayed remediation. This paper introduces the Continuous Compliance Framework (CCF), a data-centric reference architecture that embeds compliance validation directly into CI/CD pipelines. The framework treats compliance as a first-class, computable system property by combining declarative policies-as-code, standardized evidence collection, and cryptographically verifiable attestations. Central to the approach is a Compliance Data Lakehouse that transforms heterogeneous pipeline artifacts into a queryable, time-indexed compliance data product, enabling audit-ready evidence generation and continuous assurance. The proposed architecture is validated through an end-to-end synthetic microservice implementation. Experimental results demonstrate full policy lifecycle enforcement with a minimal pipeline overhead and sub-second policy evaluation latency. These findings indicate that compliance can be shifted from a post hoc audit activity to an intrinsic, verifiable property of the software delivery process without materially degrading deployment velocity.
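The policies-as-code idea can be illustrated with a toy evaluator. The policy shapes and field names below are hypothetical (real frameworks such as Open Policy Agent use a dedicated policy language, Rego); the point is that each policy is declarative data, and evaluation yields a per-policy verdict that could be persisted as time-indexed compliance evidence.

```python
# Declarative policies as plain data; all ids and fields are invented.
POLICIES = [
    {"id": "no-critical-vulns", "field": "critical_vulns", "op": "eq", "value": 0},
    {"id": "image-signed",      "field": "image_signed",   "op": "eq", "value": True},
    {"id": "max-image-age",     "field": "image_age_days", "op": "le", "value": 30},
]

OPS = {"eq": lambda a, b: a == b, "le": lambda a, b: a <= b}

def evaluate(evidence: dict) -> dict:
    # One verdict per policy plus an overall flag: the kind of record a
    # compliance data lakehouse would store alongside a timestamp.
    results = {p["id"]: OPS[p["op"]](evidence[p["field"]], p["value"])
               for p in POLICIES}
    return {"policies": results, "compliant": all(results.values())}

verdict = evaluate({"critical_vulns": 0, "image_signed": True, "image_age_days": 12})
```

Because evaluation is a pure function of evidence, it can run as a pipeline gate on every commit rather than as a periodic audit.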
(This article belongs to the Special Issue Software Reliability, Security and Quality Assurance)

32 pages, 4251 KB  
Article
Context-Aware ML/NLP Pipeline for Real-Time Anomaly Detection and Risk Assessment in Cloud API Traffic
by Aziz Abibulaiev, Petro Pukach and Myroslava Vovk
Mach. Learn. Knowl. Extr. 2026, 8(1), 25; https://doi.org/10.3390/make8010025 - 22 Jan 2026
Viewed by 772
Abstract
We present a combined ML/NLP (Machine Learning, Natural Language Processing) pipeline for protecting cloud-based APIs (Application Programming Interfaces), which operates both on individual HTTP (Hypertext Transfer Protocol) requests and in an access-log file reading mode, explicitly linking technical anomalies with business risks. The system processes each event/access log through parallel numerical and textual branches: a set of anomaly detectors trained on traffic engineering characteristics and a hybrid NLP stack that combines rules, TF-IDF (Term Frequency-Inverse Document Frequency), and character-level models trained on enriched security datasets. Their results are integrated using a risk-aware policy that takes into account endpoint type, data sensitivity, exposure, and authentication status, and produces a discrete risk level with human-readable explanations and recommended SOC (Security Operations Center) actions. We implement this design as a containerized microservice pipeline (input, preprocessing, ML, NLP, merging, alerting, and retraining services), orchestrated using Docker Compose and instrumented using OpenSearch Dashboards. Experiments with OWASP-like (Open Worldwide Application Security Project) attack scenarios show a high detection rate for injections, SSRF (Server-Side Request Forgery), Data Exposure, and Business Logic Abuse, while the processing time for each request remains within real-time limits even in sequential testing mode. Thus, the pipeline bridges the gap between ML/NLP research for security and practical API protection channels that can evolve over time through feedback and retraining.
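The risk-aware merging step described here can be sketched as a small scoring function. The weights, thresholds, and field names below are invented for illustration: the detector's anomaly score is amplified by endpoint context (sensitivity, exposure, authentication) and then bucketed into a discrete level.

```python
def risk_level(anomaly_score: float, endpoint: dict) -> str:
    # Invented weights: context makes the same detector score more or
    # less alarming depending on where the anomaly occurred.
    weight = 1.0
    if endpoint.get("data_sensitivity") == "high":
        weight += 0.5
    if endpoint.get("public"):
        weight += 0.3
    if not endpoint.get("authenticated", True):
        weight += 0.2
    score = anomaly_score * weight
    # Bucket the weighted score into a discrete, SOC-friendly level.
    if score >= 0.8:
        return "critical"
    if score >= 0.5:
        return "high"
    if score >= 0.25:
        return "medium"
    return "low"
```

The same detector output of 0.6 maps to "critical" on a public, unauthenticated, high-sensitivity endpoint but to "high" on a plain internal one, which is the behaviour the abstract's policy layer is after.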
(This article belongs to the Section Safety, Security, Privacy, and Cyber Resilience)

29 pages, 2803 KB  
Article
Benchmarking SQL and NoSQL Persistence in Microservices Under Variable Workloads
by Nenad Pantelic, Ljiljana Matic, Lazar Jakovljevic, Stefan Eric, Milan Eric, Miladin Stefanović and Aleksandar Djordjevic
Future Internet 2026, 18(1), 53; https://doi.org/10.3390/fi18010053 - 15 Jan 2026
Viewed by 583
Abstract
This paper presents a controlled comparative evaluation of SQL and NoSQL persistence mechanisms in containerized microservice architectures under variable workload conditions. Three persistence configurations—SQL with indexing, SQL without indexing, and a document-oriented NoSQL database, including supplementary hybrid SQL variants used for robustness analysis—are assessed across read-dominant, write-dominant, and mixed workloads, with concurrency levels ranging from low to high contention. The experimental setup is fully containerized and executed in a single-node environment to isolate persistence-layer behavior and ensure reproducibility. System performance is evaluated using multiple metrics, including percentile-based latency (p95), throughput, CPU utilization, and memory consumption. The results reveal distinct performance trade-offs among the evaluated configurations, highlighting the sensitivity of persistence mechanisms to workload composition and concurrency intensity. In particular, indexing strategies significantly affect read-heavy scenarios, while document-oriented persistence demonstrates advantages under write-intensive workloads. The findings emphasize the importance of workload-aware persistence selection in microservice-based systems and support the adoption of polyglot persistence strategies. Rather than providing absolute performance benchmarks, the study focuses on comparative behavioral trends that can inform architectural decision-making in practical microservice deployments.
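For readers reproducing this kind of benchmark, the p95 metric used above is straightforward to compute. The sketch below uses the nearest-rank convention; other tools interpolate between samples, so reported numbers can differ slightly.

```python
import math

def p95(latencies_ms):
    # Nearest-rank 95th percentile: the smallest sample at or above
    # which 95% of observations fall.
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(ordered)) - 1
    return ordered[rank]

def throughput(completed_ops, duration_s):
    # Operations per second over the measurement window.
    return completed_ops / duration_s
```

Percentile latency is preferred over the mean for persistence benchmarks because a handful of slow queries (lock contention, compaction pauses) dominates user-visible tail behaviour without moving the average much.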

21 pages, 4706 KB  
Article
Near-Real-Time Integration of Multi-Source Seismic Data
by José Melgarejo-Hernández, Paula García-Tapia-Mateo, Juan Morales-García and Jose-Norberto Mazón
Sensors 2026, 26(2), 451; https://doi.org/10.3390/s26020451 - 9 Jan 2026
Viewed by 369
Abstract
The reliable and continuous acquisition of seismic data from multiple open sources is essential for real-time monitoring, hazard assessment, and early-warning systems. However, the heterogeneity among existing data providers such as the United States Geological Survey, the European-Mediterranean Seismological Centre, and the Spanish National Geographic Institute creates significant challenges due to differences in formats, update frequencies, and access methods. To overcome these limitations, this paper presents a modular and automated framework for the scheduled near-real-time ingestion of global seismic data using open APIs and semi-structured web data. The system, implemented using a Docker-based architecture, automatically retrieves, harmonizes, and stores seismic information from heterogeneous sources at regular intervals using a cron-based scheduler. Data are standardized into a unified schema, validated to remove duplicates, and persisted in a relational database for downstream analytics and visualization. The proposed framework adheres to the FAIR data principles by ensuring that all seismic events are uniquely identifiable, source-traceable, and stored in interoperable formats. Its lightweight and containerized design enables deployment as a microservice within emerging data spaces and open environmental data infrastructures. Experimental validation was conducted using a two-phase evaluation. This evaluation consisted of a high-frequency 24 h stress test and a subsequent seven-day continuous deployment under steady-state conditions. The system maintained stable operation with 100% availability across all sources, successfully integrating 4533 newly published seismic events during the seven-day period and identifying 595 duplicated detections across providers. These results demonstrate that the framework provides a robust foundation for the automated integration of multi-source seismic catalogs. By enabling automated and interoperable integration of seismic information from diverse providers, this approach supports the construction of more comprehensive and globally accessible earthquake catalogs for research and near-real-time applications, strengthening data-driven research and situational awareness across regions and institutions worldwide.
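The cross-provider duplicate detection described above can be approximated with a simple time-and-distance rule. The 30 s window, 0.5° box, and field names below are illustrative assumptions, not the paper's actual matching criteria.

```python
from datetime import datetime, timedelta

def is_duplicate(a, b, max_dt=timedelta(seconds=30), max_deg=0.5):
    # Two catalog entries close in both time and epicentre location are
    # treated as one physical earthquake reported by two providers.
    return (abs(a["time"] - b["time"]) <= max_dt
            and abs(a["lat"] - b["lat"]) <= max_deg
            and abs(a["lon"] - b["lon"]) <= max_deg)

def merge_catalogs(events):
    # Keep the first report of each event, in chronological order.
    unique = []
    for ev in sorted(events, key=lambda e: e["time"]):
        if not any(is_duplicate(ev, kept) for kept in unique):
            unique.append(ev)
    return unique

t0 = datetime(2026, 1, 9, 12, 0, 0)
events = [
    {"src": "USGS", "time": t0, "lat": 38.1, "lon": -1.2},
    {"src": "EMSC", "time": t0 + timedelta(seconds=12), "lat": 38.2, "lon": -1.1},
    {"src": "IGN",  "time": t0 + timedelta(hours=2), "lat": 36.5, "lon": -7.9},
]
catalog = merge_catalogs(events)
```

The USGS and EMSC entries collapse into one event; the later, distant IGN detection survives as a separate record.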
(This article belongs to the Special Issue Advances in Seismic Sensing and Monitoring)

25 pages, 2150 KB  
Article
Architecting Multi-Cluster Layer-2 Connectivity for Cloud-Native Network Slicing
by Alex T. de Cock Buning, Ivan Vidal and Francisco Valera
Future Internet 2026, 18(1), 39; https://doi.org/10.3390/fi18010039 - 8 Jan 2026
Viewed by 425
Abstract
Connecting distributed applications across multiple cloud-native domains is growing in complexity. Applications have become containerized and fragmented across heterogeneous infrastructures, such as public clouds, edge nodes, and private data centers, including emerging IoT-driven environments. Existing networking solutions like CNI plugins and service meshes have proven insufficient for providing isolated, low-latency and secure multi-cluster communication. By combining SDN control with Kubernetes abstractions, we present L2S-CES, a Kubernetes-native solution for multi-cluster layer-2 network slicing that offers flexible isolated connectivity for microservices while maintaining performance and automation. In this work, we detail the design and implementation of L2S-CES, outlining its architecture and operational workflow. We experimentally validate against state-of-the-art alternatives and show superior isolation, reduced setup time, native support for broadcast and multicast, and minimal performance overhead. By addressing the current lack of native link-layer networking capabilities across multiple Kubernetes domains, L2S-CES provides a unified and practical foundation for deploying scalable, multi-tenant, and latency-sensitive cloud-native applications.

30 pages, 10600 KB  
Article
Edge-to-Cloud Continuum Orchestrator Based on Heterogeneous Nodes for Urban Traffic Monitoring
by Pietro Ruiu, Andrea Lagorio, Claudio Rubattu, Matteo Anedda, Michele Sanna and Mauro Fadda
Future Internet 2025, 17(12), 574; https://doi.org/10.3390/fi17120574 - 13 Dec 2025
Viewed by 888
Abstract
This paper presents an edge-to-cloud orchestrator capable of supporting services running at the edge on heterogeneous nodes based on general-purpose processing units and a Field Programmable Gate Array (FPGA) platform (i.e., the AMD Kria K26 SoM) in an urban environment, integrated with a series of cloud-based services and capable of minimizing energy consumption. A use case of vehicle traffic monitoring is considered in a mobility scenario involving computing nodes equipped with video acquisition systems to evaluate the feasibility of the system. Since the use case concerns the monitoring of vehicular traffic by AI-based image and video processing, specific support for application orchestration in the form of containers was required. The development concerned the feasibility of managing containers with hardware acceleration derived from the Vitis AI design flow, leveraged to accelerate AI inference on the AMD Kria K26 SoM. A Kubernetes-based controller node was designed to facilitate the tracking and monitoring of specific vehicles. These vehicles may either be flagged by law enforcement authorities due to legal concerns or identified by the system itself through detection mechanisms deployed in computing nodes. Strategically distributed across the city, these nodes continuously analyze traffic, identifying vehicles that match the search criteria. Using containerized microservices and Kubernetes orchestration, the infrastructure ensures that tracking operations remain uninterrupted even in high-traffic scenarios.
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)

16 pages, 11523 KB  
Article
MAGI: A Low-Cost IoT Architecture for Distributed AIS-Based Vessel Monitoring and Maritime Emissions Assessment in Panama
by Miguel Hidalgo-Rodriguez, Edmanuel Cruz, Cesar Pinzon-Acosta, Franchesca Gonzalez-Olivardia and José Carlos Rangel
Appl. Syst. Innov. 2025, 8(6), 177; https://doi.org/10.3390/asi8060177 - 24 Nov 2025
Viewed by 1054
Abstract
Real-time vessel tracking and environmental assessment in developing regions face significant challenges due to the high cost and proprietary constraints of commercial Automatic Identification System (AIS) services. We introduce MAGI, an open-source, low-cost, IoT-distributed architecture that integrates Orange Pi 5 edge nodes with software-defined radio (SDR) AIS receivers and containerized microservices to capture, preprocess, and stream AIS messages. During a ten-day field campaign in Panama, our decentralized deployment processed over 500,000 AIS transmissions, achieving 99% uptime and delivering vessel position and speed updates with sub-second latency. Based on the collected data, we also evaluated system scalability, energy consumption, and per-node cost, demonstrating that a complete coastal network can be deployed for under USD 1200 per site. These results confirm that MAGI is a scalable, secure, and affordable IoT solution for AIS-based vessel tracking and environmental monitoring in resource-constrained settings.
(This article belongs to the Special Issue Recent Advances in Internet of Things and Its Applications)

43 pages, 2371 KB  
Review
SHEAB: A Novel Automated Benchmarking Framework for Edge AI
by Mustafa Abdulkadhim and Sandor R. Repas
Technologies 2025, 13(11), 515; https://doi.org/10.3390/technologies13110515 - 11 Nov 2025
Cited by 2 | Viewed by 1925
Abstract
Edge computing is characterized by heterogeneous hardware, distributed deployment, and a need for on-site processing, which makes performance benchmarking challenging. This paper presents SHEAB (Scalable Heterogeneous Edge Automation Benchmarking), a novel framework designed to securely automate the benchmarking of Edge AI devices at scale. The proposed framework enables concurrent performance evaluation of multiple edge nodes, drastically reducing the time-to-deploy (TTD) for benchmarking tasks compared to traditional sequential methods. SHEAB’s architecture leverages containerized microservices for orchestration and result aggregation, integrated with multi-layer security (firewalls, VPN tunneling, and SSH) to ensure safe operation in untrusted network environments. We provide a detailed system design and workflow, including algorithmic pseudocode for the SHEAB process. A comprehensive comparative review of related work highlights how SHEAB advances the state-of-the-art in edge benchmarking through its combination of secure automation and scalability. We detail a real-world implementation on eleven heterogeneous edge devices, using a centralized 48-core server to coordinate benchmarks. Statistical analysis of the experimental results demonstrates a 43.74% reduction in total benchmarking time and a 1.78× speedup in benchmarking throughput using SHEAB, relative to conventional one-by-one benchmarking. We also present mathematical formulations for performance gain and discuss the implications of our results. The framework’s effectiveness is validated through the concurrent execution of standard benchmarking workloads on distributed edge nodes, with results stored in a central database for analysis. SHEAB thus represents a significant step toward efficient and reproducible Edge AI performance evaluation. Future work will extend the framework to broader workloads and further improve parallel efficiency.
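The sequential-versus-concurrent comparison at the heart of SHEAB can be reproduced in miniature with a thread pool. `run_benchmark` below is a sleep-based stand-in for a real workload, and the worker count is arbitrary; the speedup ratio t_sequential / t_concurrent mirrors the paper's performance-gain formulation.

```python
import concurrent.futures
import time

def run_benchmark(device_id):
    # Stand-in for a real benchmarking workload on one edge node.
    time.sleep(0.05)
    return f"device-{device_id}: ok"

def sequential(devices):
    # One-by-one benchmarking: total time grows linearly with device count.
    start = time.perf_counter()
    results = [run_benchmark(d) for d in devices]
    return results, time.perf_counter() - start

def concurrent_batch(devices, workers=4):
    # Concurrent benchmarking: devices run in parallel batches.
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as ex:
        results = list(ex.map(run_benchmark, devices))
    return results, time.perf_counter() - start

devices = range(8)
seq_results, t_seq = sequential(devices)
con_results, t_con = concurrent_batch(devices)
speedup = t_seq / t_con
```

With eight devices and four workers the concurrent run finishes in roughly two batches, so the speedup sits well above the 1.78× the paper reports for its heavier real workloads.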
(This article belongs to the Section Information and Communication Technologies)

32 pages, 8611 KB  
Article
Softwarized Edge Intelligence for Advanced IIoT Ecosystems: A Data-Driven Architecture Across the Cloud/Edge Continuum
by David Carrascal, Javier Díaz-Fuentes, Nicolas Manso, Diego Lopez-Pajares, Elisa Rojas, Marco Savi and Jose M. Arco
Appl. Sci. 2025, 15(19), 10829; https://doi.org/10.3390/app151910829 - 9 Oct 2025
Viewed by 1499
Abstract
The evolution of Industrial Internet of Things (IIoT) systems demands flexible and intelligent architectures capable of addressing low-latency requirements, real-time analytics, and adaptive resource management. In this context, softwarized edge computing emerges as a key enabler, supporting advanced IoT deployments through programmable infrastructures, distributed intelligence, and seamless integration with cloud environments. This paper presents an extended and publicly available proof of concept (PoC) for a softwarized, data-driven architecture designed to operate across the cloud/edge/IoT continuum. The proposed architecture incorporates containerized microservices, open standards, and ML-based inference services to enable runtime decision-making and on-the-fly network reconfiguration based on real-time telemetry from IIoT nodes. Unlike traditional solutions, our approach leverages a modular control plane capable of triggering dynamic adaptations in the system through RESTful communication with a cloud-hosted inference engine, thus enhancing responsiveness and autonomy. We evaluate the system in representative IIoT scenarios involving multi-agent collaboration, showcasing its ability to process data at the edge, minimize latency, and support real-time decision-making. This work contributes to the ongoing efforts toward building advanced IoT ecosystems by bridging conceptual designs and practical implementations, offering a robust foundation for future research and deployment in intelligent, software-defined industrial environments.

12 pages, 284 KB  
Article
AI-Enabled Secure and Scalable Distributed Web Architecture for Medical Informatics
by Marian Ileana, Pavel Petrov and Vassil Milev
Appl. Sci. 2025, 15(19), 10710; https://doi.org/10.3390/app151910710 - 4 Oct 2025
Viewed by 1332
Abstract
Current medical informatics systems face critical challenges, including limited scalability across distributed institutions, insufficient real-time AI-driven decision support, and lack of standardized interoperability for heterogeneous medical data exchange. To address these challenges, this paper proposes a novel distributed web system architecture for medical informatics, integrating artificial intelligence techniques and cloud-based services. The system ensures interoperability via HL7 FHIR standards and preserves data privacy and fault tolerance across interconnected medical institutions. A hybrid AI pipeline combining principal component analysis (PCA), K-Means clustering, and convolutional neural networks (CNNs) is applied to diffusion tensor imaging (DTI) data for early detection of neurological anomalies. The architecture leverages containerized microservices orchestrated with Docker Swarm, enabling adaptive resource management and high availability. Experimental validation confirms reduced latency, improved system reliability, and enhanced compliance with medical data exchange protocols. Results demonstrate superior performance with an average latency of 94 ms, a diagnostic accuracy of 91.3%, and enhanced clinical workflow efficiency compared to traditional monolithic architectures. The proposed solution successfully addresses scalability limitations while maintaining data security and regulatory compliance across multi-institutional deployments. This work contributes to the advancement of intelligent, interoperable, and scalable e-health infrastructures aligned with the evolution of digital healthcare ecosystems.
(This article belongs to the Special Issue Data Science and Medical Informatics)

23 pages, 1262 KB  
Article
Confidential Kubernetes Deployment Models: Architecture, Security, and Performance Trade-Offs
by Eduardo Falcão, Fernando Silva, Carlos Pamplona, Anderson Melo, A S M Asadujjaman and Andrey Brito
Appl. Sci. 2025, 15(18), 10160; https://doi.org/10.3390/app151810160 - 17 Sep 2025
Cited by 1 | Viewed by 3299
Abstract
Cloud computing brings numerous advantages that can be leveraged through containerized workloads to deliver agile, dependable, and cost-effective microservices. However, the security of such cloud-based services depends on the assumption of trusting potentially vulnerable components, such as code installed on the host. The addition of confidential computing technology to the cloud computing landscape brings the possibility of stronger security guarantees by removing such assumptions. Nevertheless, the merger of containerization and confidential computing technologies creates a complex ecosystem. In this work, we show how Kubernetes workloads can be secured despite these challenges. In addition, we design, analyze, and evaluate five different Kubernetes deployment models using the infrastructure of three of the most popular cloud providers with CPUs from two major vendors. Our evaluation shows that performance can vary significantly across the possible deployment models while remaining similar across CPU vendors and cloud providers. Our security analysis highlights the trade-offs between different workload isolation levels, trusted computing base size, and measurement reproducibility. Through a comprehensive performance, security, and financial analysis, we identify the deployment models best suited to different scenarios.
(This article belongs to the Special Issue Secure Cloud Computing Infrastructures)

23 pages, 1875 KB  
Article
U-SCAD: An Unsupervised Method of System Call-Driven Anomaly Detection for Containerized Edge Clouds
by Jiawei Ye, Ming Yan, Shenglin Wu, Jingxuan Tan and Jie Wu
Future Internet 2025, 17(5), 218; https://doi.org/10.3390/fi17050218 - 14 May 2025
Viewed by 1798
Abstract
Container technology is currently one of the mainstream technologies in the field of cloud computing, yet its adoption in resource-constrained, latency-sensitive edge environments introduces unique security challenges. While existing system call-based anomaly-detection methods partially address these issues, they suffer from high false positive rates and excessive computational overhead. To achieve security and observability in edge-native containerized environments while lowering the cost of computing resources, we propose an unsupervised anomaly-detection method based on system calls. This method filters out unnecessary system call data through automatic rule generation and an unsupervised classification model. To increase the accuracy of anomaly detection and reduce false positive rates, it embeds system calls into sequences using the proposed Syscall2vec and processes the remaining sequences to support the anomaly-detection model's analysis. We conduct experiments using our method against a background of modern containerized cloud microservices. The results show that the detection part of our method improves the F1 score by 23.88% and 41.31% compared to HIDS and LSTM-VAE, respectively. Moreover, our method effectively reduces the original data to 13% of its volume, significantly lowering the cost of computing resources.
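The filtering and embedding steps can be sketched as follows. The `COMMON` set and the bag-of-syscalls vector are illustrative simplifications: the paper generates its filtering rules automatically and uses its Syscall2vec embedding, neither of which is reproduced here.

```python
from collections import Counter

# High-volume, low-signal calls to drop; a real system would derive
# such rules automatically from benign traces rather than hard-code them.
COMMON = {"futex", "epoll_wait", "clock_gettime"}

def filter_trace(trace):
    # Rule-based reduction: keep only the calls the detector needs,
    # shrinking the data volume fed to the model.
    return [call for call in trace if call not in COMMON]

def embed(trace, vocab):
    # Bag-of-syscalls frequencies: a crude stand-in for Syscall2vec.
    counts = Counter(trace)
    total = len(trace) or 1
    return [counts[call] / total for call in vocab]

filtered = filter_trace(["openat", "futex", "read", "futex", "execve"])
vector = embed(filtered, ["openat", "read", "execve", "connect"])
```

Dropping the two `futex` calls alone cuts this toy trace by 40%, which is the same lever the paper pulls to reduce its input to 13% of the original volume.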

29 pages, 7086 KB  
Article
A Dockerized Approach to Dynamic Endpoint Management for RESTful Application Programming Interfaces in Internet of Things Ecosystems
by Ebenhezer Mabotha, Nkateko E. Mabunda and Ahmed Ali
Sensors 2025, 25(10), 2993; https://doi.org/10.3390/s25102993 - 9 May 2025
Cited by 1 | Viewed by 2229
Abstract
The growth of IoT devices has generated an increasing demand for effective, agile, and scalable deployment frameworks. Traditional IoT architectures are generally strained by interoperability, real-time responsiveness, and resource optimization due to inherent complexity in managing heterogeneous devices and large-scale deployments. While containerization and dynamic API frameworks are seen as solutions, current methodologies are founded primarily on static API architectures that cannot be adapted in real time with evolving data structures and communication needs. Dynamic routing has been explored, but current solutions lack database schema flexibility and endpoint management. This work presents a Dockerized framework that integrates Dynamic RESTful APIs with containerization to achieve maximum flexibility and performance in IoT configurations. With the use of FastAPI for asynchronous processing, the framework dynamically scales API schemas as per real-time conditions, achieving maximum device interaction efficiency. Docker provides guaranteed consistent, portable deployment across different environments. An emulated IoT environment was used to measure significant performance parameters, including functionality, throughput, response time, and scalability. The evaluation shows that the framework maintains high throughput, with an error rate of 3.11% under heavy loads and negligible latency across varying traffic conditions, ensuring fast response times without compromising system integrity. The framework demonstrates significant advantages in IoT scenarios requiring the addition of new parameters or I/O components where dynamic endpoint generation enables immediate monitoring without core application changes. Architectural decisions involving RESTful paradigms, microservices, and containerization are also discussed in this paper to ensure enhanced flexibility, modularity, and performance. The findings provide a valuable addition to dynamic IoT API framework design, illustrating how dynamic, Dockerized RESTful APIs can improve the efficiency and flexibility of IoT systems.
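Dynamic endpoint generation, minus FastAPI and Docker, reduces to registering handlers in a routing table at runtime. The sketch below is a pure-Python stand-in with invented paths: adding a new sensor type creates its endpoint on the fly, with no change to the core application.

```python
class DynamicRouter:
    """Minimal stand-in for framework-level dynamic endpoints: routes are
    plain dict entries that can be added (or removed) while running."""

    def __init__(self):
        self.routes = {}

    def register(self, path, handler):
        # Creating an endpoint is just a table update, no redeploy needed.
        self.routes[path] = handler

    def handle(self, path, payload):
        if path not in self.routes:
            return {"status": 404}
        return {"status": 200, "body": self.routes[path](payload)}

router = DynamicRouter()
readings = []
# A new sensor type appears at runtime: attach an endpoint for it on the fly.
router.register("/sensors/humidity",
                lambda p: readings.append(p) or {"stored": True})
resp = router.handle("/sensors/humidity", {"value": 41.2})
missing = router.handle("/sensors/pressure", {})
```

In the paper's FastAPI setting the same idea applies at the ASGI layer, where routes and schemas are generated from live device metadata instead of being compiled into the application.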
