Search Results (70)

Search Parameters:
Keywords = Service-Level Agreement (SLA)

17 pages, 1301 KiB  
Article
Carbon-Aware, Energy-Efficient, and SLA-Compliant Virtual Machine Placement in Cloud Data Centers Using Deep Q-Networks and Agglomerative Clustering
by Maraga Alex, Sunday O. Ojo and Fred Mzee Awuor
Computers 2025, 14(7), 280; https://doi.org/10.3390/computers14070280 - 15 Jul 2025
Viewed by 303
Abstract
The rapid expansion of cloud computing has increased carbon emissions and energy usage in cloud data centers, making innovative solutions for sustainable resource management increasingly necessary. This work presents a new algorithm—Carbon-Aware, Energy-Efficient, and SLA-Compliant Virtual Machine Placement using Deep Q-Networks (DQNs) and Agglomerative Clustering (CARBON-DQN)—that intelligently balances environmental sustainability, service level agreement (SLA) compliance, and energy efficiency. The method combines carbon-aware data center profiling, hierarchical clustering of virtual machines (VMs) according to their resource constraints, and a deep reinforcement learning model that learns optimal placement strategies over time. Extensive simulations show that CARBON-DQN substantially outperforms conventional and state-of-the-art algorithms such as GRVMP, NSGA-II, RLVMP, GMPR, and MORLVMP. Across many virtual machine configurations—including micro, small, high-CPU, and extra-large instances—it delivers the lowest carbon emissions, the fewest SLA violations, and the lowest energy usage. Driven by real-time input, the algorithm's adaptive decision-making allows it to react dynamically to changing data center conditions and workloads. These findings establish CARBON-DQN as a sustainable and intelligent VM placement approach for cloud systems. To further improve scalability, environmental impact, and practical applicability, future work will investigate the integration of renewable energy forecasts and dynamic pricing models, as well as deployment across multi-cloud and edge computing environments. Full article
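The abstract only sketches the pipeline, so the following is a minimal illustrative sketch (not the authors' code) of its two main ingredients: agglomerative clustering of VMs by resource demand, and a greedy carbon/energy/SLA scoring rule standing in for the learned DQN policy. Host names, capacities, and all weights are hypothetical.

```python
# Illustrative sketch: cluster VMs by resource demand, then score hosts by a
# weighted mix of carbon intensity, power, and SLA risk. Weights are assumed.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
vms = rng.uniform(0.1, 1.0, size=(40, 2))          # (CPU, RAM) demand per VM, normalized
clusters = AgglomerativeClustering(n_clusters=4).fit_predict(vms)

hosts = {
    "dc-green": {"free": np.array([8.0, 8.0]), "carbon": 0.12, "power": 0.9},
    "dc-mixed": {"free": np.array([8.0, 8.0]), "carbon": 0.45, "power": 1.0},
}

def place(vm, w_carbon=0.5, w_energy=0.3, w_sla=0.2):
    """Greedy scoring: lower is better; SLA risk grows as a host fills up."""
    best, best_score = None, float("inf")
    for name, h in hosts.items():
        if np.any(h["free"] < vm):
            continue                                # host cannot fit this VM
        util_after = 1.0 - (h["free"] - vm).min() / 8.0
        score = w_carbon * h["carbon"] + w_energy * h["power"] + w_sla * util_after
        if score < best_score:
            best, best_score = name, score
    return best

for cluster_id in range(4):                         # place one cluster at a time
    for vm in vms[clusters == cluster_id]:
        host = place(vm)
        if host:
            hosts[host]["free"] -= vm
```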

30 pages, 1687 KiB  
Article
Network-, Cost-, and Renewable-Aware Ant Colony Optimization for Energy-Efficient Virtual Machine Placement in Cloud Datacenters
by Ali Mohammad Baydoun and Ahmed Sherif Zekri
Future Internet 2025, 17(6), 261; https://doi.org/10.3390/fi17060261 - 14 Jun 2025
Viewed by 473
Abstract
Virtual machine (VM) placement in cloud datacenters is a complex multi-objective challenge involving trade-offs among energy efficiency, carbon emissions, and network performance. This paper proposes NCRA-DP-ACO (Network-, Cost-, and Renewable-Aware Ant Colony Optimization with Dynamic Power Usage Effectiveness (PUE)), a bio-inspired metaheuristic that optimizes VM placement across geographically distributed datacenters. The approach integrates real-time solar energy availability, dynamic PUE modeling, and multi-criteria decision-making to enable environmentally and cost-efficient resource allocation. The experimental results show that NCRA-DP-ACO reduces power consumption by 13.7%, carbon emissions by 6.9%, and live VM migrations by 48.2% compared to state-of-the-art methods while maintaining Service Level Agreement (SLA) compliance. These results indicate the algorithm’s potential to support more environmentally and cost-efficient cloud management across dynamic infrastructure scenarios. Full article
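As a rough illustration of the ACO machinery the paper builds on (not NCRA-DP-ACO itself), the sketch below lets ants pick hosts probabilistically from pheromone and a heuristic desirability that folds in assumed solar availability and dynamic PUE values; the best ant per iteration reinforces the trail.

```python
# Minimal ACO placement sketch; solar shares and PUE estimates are assumed.
import numpy as np

rng = np.random.default_rng(1)
n_vms, n_hosts = 10, 4
pheromone = np.ones((n_vms, n_hosts))
solar = np.array([0.9, 0.2, 0.6, 0.4])      # fraction of demand covered by solar (assumed)
pue = np.array([1.2, 1.6, 1.3, 1.5])        # dynamic PUE estimates (assumed)
eta = solar / pue                            # heuristic desirability per host

def ant_solution(alpha=1.0, beta=2.0):
    placement = []
    for v in range(n_vms):
        p = (pheromone[v] ** alpha) * (eta ** beta)
        placement.append(rng.choice(n_hosts, p=p / p.sum()))
    return placement

def cost(placement):                         # toy objective: grid energy drawn
    return sum(pue[h] * (1 - solar[h]) for h in placement)

for _ in range(50):                          # evaporation + reinforcement
    best = min((ant_solution() for _ in range(20)), key=cost)
    pheromone *= 0.9
    for v, h in enumerate(best):
        pheromone[v, h] += 1.0 / cost(best)
```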

24 pages, 2188 KiB  
Article
Optimizing Energy Efficiency in Cloud Data Centers: A Reinforcement Learning-Based Virtual Machine Placement Strategy
by Abdelhadi Amahrouch, Youssef Saadi and Said El Kafhali
Network 2025, 5(2), 17; https://doi.org/10.3390/network5020017 - 27 May 2025
Viewed by 876
Abstract
Cloud computing faces growing challenges in energy consumption due to the increasing demand for services and resource usage in data centers. To address this issue, we propose a novel energy-efficient virtual machine (VM) placement strategy that integrates reinforcement learning (Q-learning), a Firefly optimization algorithm, and a VM sensitivity classification model based on random forest and self-organizing map. The proposed method, RLVMP, classifies VMs as sensitive or insensitive and dynamically allocates resources to minimize energy consumption while ensuring compliance with service level agreements (SLAs). Experimental results using the CloudSim simulator, adapted with data from Microsoft Azure, show that our model significantly reduces energy consumption. Specifically, under the lr_1.2_mmt strategy, our model achieves a 5.4% reduction in energy consumption compared to PABFD, 12.8% compared to PSO, and 12% compared to genetic algorithms. Under the iqr_1.5_mc strategy, the reductions are even more significant: 12.11% compared to PABFD, 15.6% compared to PSO, and 18.67% compared to genetic algorithms. Furthermore, our model reduces the number of live migrations, which helps minimize SLA violations. Overall, the combination of Q-learning and the Firefly algorithm enables adaptive, SLA-compliant VM placement with improved energy efficiency. Full article
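The Q-learning component can be pictured with a tabular sketch like the one below (illustrative only; RLVMP's Firefly stage and VM-sensitivity classifier are omitted). The reward trades energy against an SLA-violation penalty, and both the state discretization and the transition model are toy assumptions.

```python
# Tabular Q-learning for host selection; reward and transitions are toy stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n_states, n_hosts = 16, 4                  # state = discretized datacenter load
Q = np.zeros((n_states, n_hosts))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, host):
    energy = rng.uniform(0.2, 1.0) * (host + 1) / n_hosts
    sla_violation = rng.random() < 0.05 * (host + 1)
    reward = -energy - (5.0 if sla_violation else 0.0)
    return reward, rng.integers(n_states)  # toy transition

state = 0
for _ in range(10_000):
    host = rng.integers(n_hosts) if rng.random() < eps else int(Q[state].argmax())
    reward, nxt = step(state, host)
    Q[state, host] += alpha * (reward + gamma * Q[nxt].max() - Q[state, host])
    state = nxt
```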

17 pages, 1634 KiB  
Article
Optimizing Service Level Agreement Tier Selection in Online Services Through Legacy Lifecycle Profile and Support Analysis: A Quantitative Approach
by Geza Lucz and Bertalan Forstner
Mathematics 2025, 13(11), 1743; https://doi.org/10.3390/math13111743 - 24 May 2025
Viewed by 486
Abstract
This study introduces a novel approach to optimal Service Level Agreement (SLA) tier selection in online services by incorporating client-side obsolescence factors into effective SLA planning. We analyze a comprehensive dataset of 600 million records collected over four years, focusing on the lifecycle patterns of browsers published into the iPhone and Samsung ecosystems. Using Gaussian Process Regression with a Matérn kernel and exponential decay models, we model browser version adoption and decline rates, accounting for data sparsity and noise. Our methodology includes a centroid-based filtering technique and a quadratic decay term to mitigate bot-related anomalies. Results indicate distinct browser delivery refresh cycles for both ecosystems, with iPhone browsers showing peaks at 22 and 42 days, while Samsung devices exhibit peaks at 44 and 70 days. We quantify the support duration required to achieve various SLA tiers as follows: for 99.9% coverage, iPhone and Samsung browsers require 254 and 255 days of support, respectively; for 99.99%, 360 and 556 days; and for 99.999%, 471 and 672 days. These findings enable more accurate and effective SLA calculations, facilitating cost-efficient service planning considering the full service delivery and consumption pipeline. Our approach provides a data-driven framework for balancing aggressive upgrade requirements against generous legacy support, optimizing both security and performance within given cost boundaries. Full article
(This article belongs to the Special Issue New Advances in Mathematical Applications for Reliability Analysis)
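The core modeling step is standard enough to sketch with scikit-learn (synthetic data; kernel hyperparameters are assumptions, not the paper's fitted values): fit a Gaussian Process with a Matérn kernel to an adoption-decline curve, then read off the support horizon that covers a target share of traffic.

```python
# GPR with a Matern kernel on a synthetic browser-version decline curve.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

days = np.arange(0, 365, 7, dtype=float).reshape(-1, 1)
share = np.exp(-days.ravel() / 120)                  # synthetic exponential decay
share += np.random.default_rng(3).normal(0, 0.01, share.size)

gpr = GaussianProcessRegressor(kernel=Matern(length_scale=30.0, nu=1.5),
                               alpha=1e-4, normalize_y=True).fit(days, share)

grid = np.arange(0, 800, dtype=float).reshape(-1, 1)
pred = np.clip(gpr.predict(grid), 0, None)
cum = np.cumsum(pred) / pred.sum()                   # cumulative traffic covered
print("days of support for 99.9% coverage:", int(np.argmax(cum >= 0.999)))
```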

29 pages, 9831 KiB  
Article
Quality of Experience (QoE) in Cloud Gaming: A Comparative Analysis of Deep Learning Techniques via Facial Emotions in a Virtual Reality Environment
by Awais Khan Jumani, Jinglun Shi, Asif Ali Laghari, Muhammad Ahmad Amin, Aftab ul Nabi, Kamlesh Narwani and Yi Zhang
Sensors 2025, 25(5), 1594; https://doi.org/10.3390/s25051594 - 5 Mar 2025
Cited by 1 | Viewed by 1184
Abstract
Cloud gaming has rapidly transformed the gaming industry, allowing users to play games on demand from anywhere without the need for powerful hardware. Cloud service providers are striving to enhance user Quality of Experience (QoE) using traditional assessment methods. However, these traditional methods often fail to capture actual user QoE, because some users do not provide serious feedback about cloud services. Additionally, some players claim they are not receiving the promised service even after being served in accordance with the Service Level Agreement (SLA). This makes it difficult for cloud service providers to accurately identify QoE and improve their services. In this paper, we evaluate our previously proposed technique, which uses a deep learning (DL) model to assess QoE from players' facial expressions during cloud gaming sessions in a virtual reality (VR) environment. The EmotionNET model is based on a convolutional neural network (CNN) architecture. We compare EmotionNET with three other DL techniques, namely ConvoNEXT, EfficientNET, and Vision Transformer (ViT), training all four models on our custom-developed dataset and achieving 98.9% training accuracy and 87.8% validation accuracy with EmotionNET. The training and comparison results show that EmotionNET predicts and performs better than the other models. Finally, we compare EmotionNET's results on two network datasets (WiFi and mobile data). Our findings indicate that facial expressions are strongly correlated with QoE. Full article
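For readers unfamiliar with the setup, a minimal CNN classifier of this kind looks like the PyTorch sketch below; EmotionNET's actual architecture is not given in the abstract, so the layer sizes, 48x48 grayscale input, and 7-class output are assumptions.

```python
# Tiny CNN emotion classifier sketch (architecture details are assumed).
import torch
import torch.nn as nn

class TinyEmotionCNN(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 12 * 12, n_classes)  # for 48x48 grayscale input

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyEmotionCNN()
logits = model(torch.randn(8, 1, 48, 48))            # batch of 8 face crops
print(logits.shape)                                   # torch.Size([8, 7])
```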

28 pages, 6813 KiB  
Article
ZSM Framework for Autonomous Security Service Level Agreement Life-Cycle Management in B5G Networks
by Rodrigo Asensio-Garriga, Alejandro Molina Zarca, Jordi Ortiz, Ana Hermosilla, Hugo Ramón Pascual, Antonio Pastor and Antonio Skarmeta
Future Internet 2025, 17(2), 86; https://doi.org/10.3390/fi17020086 - 12 Feb 2025
Cited by 1 | Viewed by 1105
Abstract
In the rapidly evolving landscape of telecommunications, the integration of commercial 5G solutions and the rise of edge computing have reshaped service delivery, emphasizing the customization of requirements through network slices. However, the heterogeneity of devices and technologies in 5G and beyond networks poses significant challenges, particularly in terms of security management. Addressing this complexity, our work adopts the Zero-touch network and Service Management (ZSM) reference architecture to enable end-to-end automation of security and service management in Beyond 5G networks. This paper introduces the ZSM-based framework, which harnesses software-defined networking, network function virtualization, end-to-end slicing, and orchestration paradigms to autonomously enforce and preserve security service level agreements (SSLAs) across multiple domains that make up a 5G network. The framework autonomously manages end-to-end security slices through intent-driven closed loops at various logical levels, ensuring compliance with ETSI end-to-end network slice management standards for 5G communication services. The paper elaborates with an SSLA-triggered use case comprising two phases: proactive, wherein the framework deploys and configures an end-to-end security slice tailored to the security service level agreement specifications, and reactive, where machine learning-trained security mechanisms autonomously detect and mitigate novel beyond 5G attacks exploiting open-sourced 5G core threat vectors. Finally, the results of the implementation and validation are presented, demonstrating the practical application of this research. Interestingly, these research results have been integrated into the ETSI ZSM Proof of Concept #6: ’Security SLA Assurance in 5G Network Slices’, highlighting the relevance and impact of the study in the real world. Full article
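At its core, each intent-driven closed loop follows a monitor-detect-react cycle; the toy sketch below illustrates the shape of such a loop (metric names, thresholds, and the mitigation hook are placeholders, not the framework's API).

```python
# Toy SSLA closed loop: monitor a metric, detect deviation, trigger remediation.
import random

random.seed(5)
SSLA = {"detection_latency_ms": 500}                 # assumed SSLA target

def monitor():                                        # stand-in telemetry source
    return {"detection_latency_ms": random.choice([320, 480, 900])}

def mitigate(metric, value):                          # placeholder reactive action
    print(f"reconfiguring security slice: {metric}={value} exceeds {SSLA[metric]}")

for _ in range(5):                                    # one closed-loop pass per tick
    sample = monitor()
    for metric, target in SSLA.items():
        if sample[metric] > target:
            mitigate(metric, sample[metric])
```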

25 pages, 4492 KiB  
Article
Resource Allocation Optimization Model for Computing Continuum
by Mihaela Mihaiu, Bogdan-Costel Mocanu, Cătălin Negru, Alina Petrescu-Niță and Florin Pop
Mathematics 2025, 13(3), 431; https://doi.org/10.3390/math13030431 - 27 Jan 2025
Cited by 1 | Viewed by 1281
Abstract
The exponential growth of Internet of Things (IoT) devices has led to massive volumes of data, challenging traditional centralized processing paradigms. The cloud–edge continuum computing model has emerged as a promising solution, offering a distributed approach to data processing and management and improved performance in terms of communication overhead and latency. In this paper, we present a novel resource allocation optimization solution for cloud–edge continuum architectures designed to support multiple heterogeneous mobile clients running a set of applications in a 5G-enabled environment. Our approach is structured across three layers (mist, edge, and cloud) and introduces a set of resource allocation models that address the limitations of the traditional bin-packing optimization problem in IoT systems. The proposed solution integrates task offloading and resource allocation strategies designed to optimize energy consumption while ensuring compliance with Service Level Agreements (SLAs) by minimizing resource consumption. Our evaluation shows longer active periods for edge servers owing to lower energy consumption. These results indicate that the proposed solution is viable and offers a sustainable model that prioritizes energy efficiency, in alignment with current climate concerns. Full article
(This article belongs to the Special Issue Distributed Systems: Methods and Applications)
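The bin-packing flavor of the allocation problem can be illustrated with a first-fit-decreasing sketch over the three tiers (illustrative only, not the paper's exact model; capacities and per-unit energy costs are assumed values): each task goes to the cheapest-energy tier that still fits it.

```python
# First-fit decreasing across mist/edge/cloud, preferring low-energy tiers.
tiers = [                                   # ordered by assumed energy cost per unit
    {"name": "mist",  "capacity": 4.0,  "energy": 0.2, "used": 0.0},
    {"name": "edge",  "capacity": 16.0, "energy": 0.5, "used": 0.0},
    {"name": "cloud", "capacity": 64.0, "energy": 1.0, "used": 0.0},
]

def allocate(tasks):
    plan, energy = [], 0.0
    for demand in sorted(tasks, reverse=True):        # first-fit decreasing
        for tier in tiers:                            # cheapest-energy tier first
            if tier["used"] + demand <= tier["capacity"]:
                tier["used"] += demand
                plan.append((demand, tier["name"]))
                energy += demand * tier["energy"]
                break
        else:
            plan.append((demand, "rejected"))         # SLA breach: no capacity left
    return plan, energy

plan, energy = allocate([2.0, 7.5, 1.0, 3.2, 12.0, 0.5])
print(plan, round(energy, 2))
```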

21 pages, 1011 KiB  
Article
TADocs: Teacher–Assistant Distillation for Improved Policy Transfer in 6G RAN Slicing
by Xian Mu, Yao Xu, Dagang Li and Mingzhu Liu
Mathematics 2024, 12(18), 2934; https://doi.org/10.3390/math12182934 - 20 Sep 2024
Viewed by 1110
Abstract
Network slicing is an advanced technology that significantly enhances network flexibility and efficiency. Recently, reinforcement learning (RL) has been applied to solve resource management challenges in 6G networks. However, RL-based network slicing solutions have not been widely adopted. One of the primary reasons for this is the slow convergence of agents when the Service Level Agreement (SLA) weight parameters in Radio Access Network (RAN) slices change. Therefore, a solution is needed that can achieve rapid convergence while maintaining high accuracy. To address this, we propose a Teacher and Assistant Distillation method based on cosine similarity (TADocs). This method utilizes cosine similarity to precisely match the most suitable teacher and assistant models, enabling rapid policy transfer through policy distillation to adapt to the changing SLA weight parameters. The cosine similarity matching mechanism ensures that the student model learns from the appropriate teacher and assistant models, thereby maintaining high performance. Thanks to this efficient matching mechanism, the number of models that need to be maintained is greatly reduced, resulting in lower computational resource consumption. TADocs improves convergence speed by 81% while achieving an average accuracy of 98%. Full article
(This article belongs to the Section E: Applied Mathematics)
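The matching step itself is compact enough to sketch (the weight vectors are hypothetical latency/throughput/reliability triples): pick the teacher and assistant whose stored SLA weight vectors are most cosine-similar to the new slice configuration.

```python
# Cosine-similarity matching of teacher/assistant policies to new SLA weights.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

teachers = {                                 # previously trained policies (assumed)
    "t1": np.array([0.7, 0.2, 0.1]),
    "t2": np.array([0.2, 0.6, 0.2]),
    "t3": np.array([0.1, 0.2, 0.7]),
}
new_sla = np.array([0.6, 0.3, 0.1])          # incoming slice's SLA weights

ranked = sorted(teachers, key=lambda k: cosine(teachers[k], new_sla), reverse=True)
teacher, assistant = ranked[0], ranked[1]    # distill from both during transfer
print(teacher, assistant)
```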

21 pages, 431 KiB  
Article
Application of Proximal Policy Optimization for Resource Orchestration in Serverless Edge Computing
by Mauro Femminella and Gianluca Reali
Computers 2024, 13(9), 224; https://doi.org/10.3390/computers13090224 - 6 Sep 2024
Viewed by 2104
Abstract
Serverless computing is a new cloud computing model suitable for providing services in both large cloud and edge clusters. In edge clusters, the autoscaling functions play a key role on serverless platforms as the dynamic scaling of function instances can lead to reduced latency and efficient resource usage, both typical requirements of edge-hosted services. However, a badly configured scaling function can introduce unexpected latency due to so-called “cold start” events or service request losses. In this work, we focus on the optimization of resource-based autoscaling on OpenFaaS, the most-adopted open-source Kubernetes-based serverless platform, leveraging real-world serverless traffic traces. We resort to the reinforcement learning algorithm named Proximal Policy Optimization to dynamically configure the value of the Kubernetes Horizontal Pod Autoscaler, trained on real traffic. This was accomplished via a state space model able to take into account resource consumption, performance values, and time of day. In addition, the reward function definition promotes Service-Level Agreement (SLA) compliance. We evaluate the proposed agent, comparing its performance in terms of average latency, CPU usage, memory usage, and loss percentage with respect to the baseline system. The experimental results show the benefits provided by the proposed agent, obtaining a service time within the SLA while limiting resource consumption and service loss. Full article
(This article belongs to the Special Issue Advances in High-Performance Switching and Routing)
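A sketch of the agent's interface is given below (the state features, reward weights, and action mapping are assumptions, not the paper's exact formulation): the action sets the HPA's CPU-utilization target, and the reward penalizes SLA misses, resource use, and request loss.

```python
# Assumed state/reward/action shapes for an RL-tuned Horizontal Pod Autoscaler.
import numpy as np

SLA_LATENCY_MS = 200.0                              # assumed SLA target

def state(cpu_util, mem_util, hour_of_day):
    return np.array([cpu_util, mem_util, hour_of_day / 23.0], dtype=np.float32)

def reward(p95_latency_ms, cpu_util, loss_rate,
           w_sla=1.0, w_cpu=0.3, w_loss=2.0):       # weights are assumptions
    sla_penalty = max(0.0, p95_latency_ms / SLA_LATENCY_MS - 1.0)
    return -(w_sla * sla_penalty + w_cpu * cpu_util + w_loss * loss_rate)

def apply_action(action):                           # action in [0, 1]
    return int(30 + 60 * action)                    # HPA CPU target in 30..90%

print(apply_action(0.5))                            # -> 60 (% CPU target)
print(round(reward(p95_latency_ms=180, cpu_util=0.55, loss_rate=0.0), 3))
```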

20 pages, 2522 KiB  
Article
Application of Fuzzy Logic for Horizontal Scaling in Kubernetes Environments within the Context of Edge Computing
by Sérgio N. Silva, Mateus A. S. de S. Goldbarg, Lucileide M. D. da Silva and Marcelo A. C. Fernandes
Future Internet 2024, 16(9), 316; https://doi.org/10.3390/fi16090316 - 2 Sep 2024
Cited by 2 | Viewed by 4719
Abstract
This paper presents a fuzzy logic-based approach for replica scaling in a Kubernetes environment, focusing on integrating Edge Computing. The proposed FHS (Fuzzy-based Horizontal Scaling) system was compared to the standard Kubernetes scaling mechanism, HPA (Horizontal Pod Autoscaler). The comparison considered resource consumption, the number of replicas used, and adherence to latency Service-Level Agreements (SLAs). The experiments were conducted in an environment simulating Edge Computing infrastructure, with virtual machines used to represent edge nodes and traffic generated via JMeter. The results demonstrate that FHS achieves a reduction in CPU consumption, uses fewer replicas under the same stress conditions, and exhibits more distributed SLA latency violation rates compared to HPA. These results indicate that FHS offers a more efficient and customizable solution for replica scaling in Kubernetes within Edge Computing environments, contributing to both operational efficiency and service quality. Full article
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
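In the spirit of FHS, the hand-rolled sketch below maps CPU load and an SLA latency ratio to a replica delta via triangular memberships and centroid defuzzification; the membership shapes and rules are illustrative, not the paper's.

```python
# Fuzzy replica-scaling sketch: triangular memberships + centroid defuzzification.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with peak at b."""
    return max(0.0, min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)))

def replica_delta(cpu, latency_ratio):
    low, high = tri(cpu, 0.0, 0.2, 0.5), tri(cpu, 0.5, 0.8, 1.0)
    breach = tri(latency_ratio, 0.8, 1.2, 2.0)      # latency near/over the SLA
    # Rules: high CPU or SLA breach -> scale out; low CPU and no breach -> scale in
    out_strength = max(high, breach)
    in_strength = min(low, 1.0 - breach)
    deltas = np.array([+2, -1])
    weights = np.array([out_strength, in_strength])
    if weights.sum() == 0:
        return 0
    return int(round((deltas * weights).sum() / weights.sum()))  # centroid

print(replica_delta(cpu=0.85, latency_ratio=1.3))   # -> +2 (scale out)
print(replica_delta(cpu=0.10, latency_ratio=0.4))   # -> -1 (scale in)
```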

19 pages, 9250 KiB  
Article
Multi-Agent Deep Reinforcement Learning Based Dynamic Task Offloading in a Device-to-Device Mobile-Edge Computing Network to Minimize Average Task Delay with Deadline Constraints
by Huaiwen He, Xiangdong Yang, Xin Mi, Hong Shen and Xuefeng Liao
Sensors 2024, 24(16), 5141; https://doi.org/10.3390/s24165141 - 8 Aug 2024
Cited by 4 | Viewed by 3091
Abstract
Device-to-device (D2D) is a pivotal technology in the next generation of communication, allowing for direct task offloading between mobile devices (MDs) to improve the efficient utilization of idle resources. This paper proposes a novel algorithm for dynamic task offloading between the active MDs and the idle MDs in a D2D–MEC (mobile edge computing) system by deploying multi-agent deep reinforcement learning (DRL) to minimize the long-term average delay of delay-sensitive tasks under deadline constraints. Our core innovation is a dynamic partitioning scheme for idle and active devices in the D2D–MEC system, accounting for stochastic task arrivals and multi-time-slot task execution, which has been insufficiently explored in the existing literature. We adopt a queue-based system to formulate a dynamic task offloading optimization problem. To address the challenges of large action space and the coupling of actions across time slots, we model the problem as a Markov decision process (MDP) and perform multi-agent DRL through multi-agent proximal policy optimization (MAPPO). We employ a centralized training with decentralized execution (CTDE) framework to enable each MD to make offloading decisions solely based on its local system state. Extensive simulations demonstrate the efficiency and fast convergence of our algorithm. In comparison to the existing sub-optimal results deploying single-agent DRL, our algorithm reduces the average task completion delay by 11.0% and the ratio of dropped tasks by 17.0%. Our proposed algorithm is particularly pertinent to sensor networks, where mobile devices equipped with sensors generate a substantial volume of data that requires timely processing to ensure quality of experience (QoE) and meet the service-level agreements (SLAs) of delay-sensitive applications. Full article
(This article belongs to the Section Communications)
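The queue-based objective can be pictured with a single-queue toy simulation (illustrative; arrival rate, service times, and deadline are assumed values) that reports the same two metrics the paper optimizes: average completion delay and the ratio of dropped tasks.

```python
# Slot-based queue simulation: stochastic arrivals, multi-slot service, deadlines.
import random

random.seed(4)
SLOTS, RATE, DEADLINE, SERVICE = 1000, 0.6, 8, (1, 4)
queue, delays, dropped, done = [], [], 0, 0

for t in range(SLOTS):
    if random.random() < RATE:                      # stochastic task arrival
        queue.append({"arrive": t, "work": random.randint(*SERVICE)})
    if queue:
        queue[0]["work"] -= 1                       # one slot of service at the head
        if queue[0]["work"] == 0:
            delays.append(t - queue.pop(0)["arrive"] + 1)
            done += 1
    expired = [x for x in queue if t - x["arrive"] >= DEADLINE]
    dropped += len(expired)                         # deadline hit -> task dropped
    queue = [x for x in queue if t - x["arrive"] < DEADLINE]

print(f"avg delay {sum(delays)/len(delays):.2f} slots, "
      f"drop ratio {dropped/(done+dropped):.2%}")
```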

39 pages, 2723 KiB  
Article
Electrifying the Last-Mile Logistics (LML) in Intensive B2B Operations—An European Perspective on Integrating Innovative Platforms
by Alejandro Sanz and Peter Meyer
Logistics 2024, 8(2), 45; https://doi.org/10.3390/logistics8020045 - 17 Apr 2024
Cited by 2 | Viewed by 4434
Abstract
Background: literature on last-mile logistics electrification has primarily focused either on the stakeholder interactions defining urban rules and policies for urban freight or on the technical aspects of logistics EVs. Methods: the article considers energy sourcing, vehicles, logistics operations, and the digital cloud environment together, aiming at economic and functional viability, using a combination of engineering and business modeling together with first-hand insights from Europe's largest tender in automotive aftermarket electrification. Results: Last-Mile Logistics (LML) electrification is possible and profitable without jeopardizing high-tempo deliveries. Identifying the assets critical to a viable transition to EVs opens new lines of research into future logistics dynamics made possible by the digital dimensions of the logistics ecosystem. Conclusions: beyond the unquestionable benefits for the environment, the electrification of the LML constitutes an opportunity to enhance revenue and diversify income. Full article

30 pages, 7913 KiB  
Article
Evaluation of the Omni-Secure Firewall System in a Private Cloud Environment
by Salman Mahmood, Raza Hasan, Nor Adnan Yahaya, Saqib Hussain and Muzammil Hussain
Knowledge 2024, 4(2), 141-170; https://doi.org/10.3390/knowledge4020008 - 2 Apr 2024
Cited by 2 | Viewed by 2708
Abstract
This research explores the optimization of firewall systems within private cloud environments, specifically focusing on a 30-day evaluation of the Omni-Secure Firewall. Employing a multi-metric approach, the study introduces an innovative effectiveness metric (E) that amalgamates precision, recall, and redundancy considerations. The evaluation spans various machine learning models, including random forest, support vector machines, neural networks, k-nearest neighbors, decision tree, stochastic gradient descent, naive Bayes, logistic regression, gradient boosting, and AdaBoost. Benchmarking against service level agreement (SLA) metrics showcases the Omni-Secure Firewall’s commendable performance in meeting predefined targets. Noteworthy metrics include acceptable availability, target response time, efficient incident resolution, robust event detection, a low false-positive rate, and zero data-loss incidents, enhancing the system’s reliability and security, as well as user satisfaction. Performance metrics such as prediction latency, CPU usage, and memory consumption further highlight the system’s functionality, efficiency, and scalability within private cloud environments. The introduction of the effectiveness metric (E) provides a holistic assessment based on organizational priorities, considering precision, recall, F1 score, throughput, mitigation time, rule latency, and redundancy. Evaluation across machine learning models reveals variations, with random forest and support vector machines exhibiting notably high accuracy and balanced precision and recall. In conclusion, while the Omni-Secure Firewall System demonstrates potential, inconsistencies across machine learning models underscore the need for optimization. The dynamic nature of private cloud environments necessitates continuous monitoring and adjustment of security systems to fully realize benefits while safeguarding sensitive data and applications. The significance of this study lies in providing insights into optimizing firewall systems for private cloud environments, offering a framework for holistic security assessment and emphasizing the need for robust, reliable firewall systems in the dynamic landscape of private clouds. Study limitations, including the need for real-world validation and exploration of advanced machine learning models, set the stage for future research directions. Full article
(This article belongs to the Special Issue New Trends in Knowledge Creation and Retention)
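The abstract does not give the formula for the effectiveness metric E, so the sketch below shows one plausible weighted form (the weights and the redundancy term are assumptions): reward precision and recall, penalize redundant rules, and report F1 alongside.

```python
# Hypothetical weighted effectiveness score in the spirit of the paper's metric E.
def effectiveness(precision, recall, redundancy,
                  w_p=0.4, w_r=0.4, w_red=0.2):      # weights are assumptions
    f1 = 2 * precision * recall / (precision + recall + 1e-9)
    return w_p * precision + w_r * recall + w_red * (1 - redundancy), f1

models = {                                           # illustrative figures only
    "random_forest": (0.97, 0.95, 0.10),
    "naive_bayes":   (0.88, 0.91, 0.25),
}
for name, (p, r, red) in models.items():
    e, f1 = effectiveness(p, r, red)
    print(f"{name}: E={e:.3f}, F1={f1:.3f}")
```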

23 pages, 560 KiB  
Article
Comparison of Cloud-Computing Providers for Deployment of Object-Detection Deep Learning Models
by Prem Rajendran, Sarthak Maloo, Rohan Mitra, Akchunya Chanchal and Raafat Aburukba
Appl. Sci. 2023, 13(23), 12577; https://doi.org/10.3390/app132312577 - 22 Nov 2023
Cited by 8 | Viewed by 3975
Abstract
As cloud computing rises in popularity across diverse industries, the necessity to compare and select the most appropriate cloud provider for specific use cases becomes imperative. This research conducts an in-depth comparative analysis of two prominent cloud platforms, Microsoft Azure and Amazon Web Services (AWS), with a specific focus on their suitability for deploying object-detection algorithms. The analysis covers both quantitative metrics—encompassing upload and download times, throughput, and inference time—and qualitative assessments like cost effectiveness, machine learning resource availability, deployment ease, and service-level agreement (SLA). Through the deployment of the YOLOv8 object-detection model, this study measures these metrics on both platforms, providing empirical evidence for platform evaluation. Furthermore, this research examines general platform availability and information accessibility to highlight differences in qualitative aspects. This paper concludes that Azure excels in download time (average 0.49 s/MB), inference time (average 0.60 s/MB), and throughput (1145.78 MB/s), while AWS excels in upload time (average 1.84 s/MB), cost effectiveness, ease of deployment, a wider ML service catalog, and superior SLA. Ultimately, the choice between the two platforms depends on how each performance dimension is weighted against business-specific requirements; the paper therefore closes with a comprehensive, requirements-driven comparison, aiding stakeholders in making informed decisions when selecting a cloud platform for their machine learning projects. Full article
(This article belongs to the Special Issue Edge and Cloud Computing Systems and Applications)
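The quantitative metrics quoted above reduce to simple per-MB timings; a minimal harness of that kind is sketched below (the transfer callable is a placeholder for a real upload, download, or inference call against a cloud endpoint).

```python
# Per-MB timing harness; the transfer callable is a stand-in, not a cloud API.
import time

def measure(transfer, payload_mb):
    """Times a transfer callable and returns (s/MB, MB/s)."""
    start = time.perf_counter()
    transfer()                                    # e.g. an upload/download/inference call
    elapsed = time.perf_counter() - start
    return elapsed / payload_mb, payload_mb / elapsed

# Example with a stand-in workload instead of a real cloud endpoint:
secs_per_mb, throughput = measure(lambda: time.sleep(0.05), payload_mb=10)
print(f"{secs_per_mb:.4f} s/MB, {throughput:.1f} MB/s")
```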

39 pages, 1887 KiB  
Article
Efficient Resource Utilization in IoT and Cloud Computing
by Vivek Kumar Prasad, Debabrata Dansana, Madhuri D. Bhavsar, Biswaranjan Acharya, Vassilis C. Gerogiannis and Andreas Kanavos
Information 2023, 14(11), 619; https://doi.org/10.3390/info14110619 - 19 Nov 2023
Cited by 12 | Viewed by 6638
Abstract
With the proliferation of IoT devices, there has been exponential growth in data generation, placing substantial demands on both cloud computing (CC) and internet infrastructure. CC, renowned for its scalability and virtual resource provisioning, is of paramount importance in e-commerce applications. However, the dynamic nature of IoT and cloud services introduces unique challenges, notably in the establishment of service-level agreements (SLAs) and the continuous monitoring of compliance. This paper presents a versatile framework for the adaptation of e-commerce applications to IoT and CC environments. It introduces a comprehensive set of metrics designed to support SLAs by enabling periodic resource assessments, ensuring alignment with service-level objectives (SLOs). This policy-driven approach seeks to automate resource management in the era of CC, thereby reducing the dependency on extensive human intervention in e-commerce applications. This paper culminates with a case study that demonstrates the practical utilization of metrics and policies in the management of cloud resources. Furthermore, it provides valuable insights into the resource requisites for deploying e-commerce applications within the realms of the IoT and CC. This holistic approach holds the potential to streamline the monitoring and administration of CC services, ultimately enhancing their efficiency and reliability. Full article
(This article belongs to the Special Issue Systems Engineering and Knowledge Management)
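A policy-driven SLO check of the kind the framework automates can be sketched in a few lines (metric names and thresholds are illustrative): periodically compare sampled resource metrics against SLO targets and flag breaches for remediation.

```python
# Periodic SLO compliance check; metric names and thresholds are assumed.
slos = {"cpu_util": 0.80, "p95_latency_ms": 250, "error_rate": 0.01}

def assess(sample):
    breaches = {k: v for k, v in sample.items() if k in slos and v > slos[k]}
    return "compliant" if not breaches else f"breach: {breaches}"

print(assess({"cpu_util": 0.72, "p95_latency_ms": 180, "error_rate": 0.002}))
print(assess({"cpu_util": 0.91, "p95_latency_ms": 310, "error_rate": 0.002}))
```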
