Search Results (297)

Search Parameters:
Keywords = cloud virtual machines

18 pages, 3987 KB  
Article
Interactive Application with Virtual Reality and Artificial Intelligence for Improving Pronunciation in English Learning
by Gustavo Caiza, Carlos Villafuerte and Adriana Guanuche
Appl. Sci. 2025, 15(17), 9270; https://doi.org/10.3390/app15179270 - 23 Aug 2025
Viewed by 113
Abstract
Technological advances have enabled the development of innovative educational tools, particularly those aimed at supporting English as a Second Language (ESL) learning, with a specific focus on oral skills. However, pronunciation remains a significant challenge due to the limited availability of personalized learning opportunities that offer immediate feedback and contextualized practice. In this context, the present research proposes the design, implementation, and validation of an immersive application that leverages virtual reality (VR) and artificial intelligence (AI) to enhance English pronunciation. The proposed system integrates a 3D interactive environment developed in Unity, voice classification models trained using Teachable Machine, and real-time communication with Firebase, allowing users to practice and assess their pronunciation in a simulated library-like virtual setting. Through its integrated AI module, the application can analyze the pronunciation of each word in real time, detecting correct and incorrect utterances, and then providing immediate feedback to help users identify and correct their mistakes. The virtual environment was designed to be welcoming and user-friendly, promoting active engagement with the learning process. The application’s distributed architecture enables automated feedback generation via data flow between the cloud-based AI, the database, and the visualization interface. Results demonstrate that using 400 samples per class and a confidence threshold of 99.99% for training the AI model effectively eliminated false positives, significantly increasing system accuracy and providing users with more reliable feedback. This directly contributes to enhanced learner autonomy and improved ESL acquisition outcomes. Furthermore, user surveys conducted to gauge perceptions of the application’s usefulness as a support tool for English learning yielded an average acceptance rate of 93%.
This reflects the acceptance of these immersive technologies in educational contexts, as the combination of these technologies offers a realistic and user-friendly simulation environment, in addition to detailed word analysis, facilitating self-assessment and independent learning among students. Full article
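The abstract reports that gating recognition on a 99.99% confidence threshold eliminated false positives. A minimal Python sketch of such threshold-gated feedback; the `feedback` helper, class names, and score values are illustrative assumptions, not the paper's Teachable Machine API.

```python
# Hedged sketch: accept a pronunciation label only when the classifier is
# near-certain, otherwise ask the learner to retry. The 0.9999 threshold
# mirrors the 99.99% figure in the abstract; scores here are made up.
CONFIDENCE_THRESHOLD = 0.9999

def feedback(class_scores):
    """Return a label only when the top class clears the threshold,
    otherwise request a repeat (this is what suppresses false positives)."""
    label, score = max(class_scores.items(), key=lambda kv: kv[1])
    if score >= CONFIDENCE_THRESHOLD:
        return f"recognized: {label}"
    return "unclear - please repeat the word"

ok = feedback({"correct": 0.99995, "incorrect": 0.00005})   # clears threshold
retry = feedback({"correct": 0.97, "incorrect": 0.03})      # rejected
```

A high threshold trades recall for precision: the learner repeats more often, but confirmed feedback is rarely wrong.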

23 pages, 783 KB  
Article
An Effective QoS-Aware Hybrid Optimization Approach for Workflow Scheduling in Cloud Computing
by Min Cui and Yipeng Wang
Sensors 2025, 25(15), 4705; https://doi.org/10.3390/s25154705 - 30 Jul 2025
Viewed by 318
Abstract
Workflow scheduling in cloud computing is attracting increasing attention. Cloud computing can assign tasks to available virtual machine resources in cloud data centers according to scheduling strategies, providing a powerful computing platform for the execution of workflow tasks. However, developing effective workflow scheduling algorithms to find optimal or near-optimal task-to-VM allocation solutions that meet users’ specific QoS requirements still remains an open area of research. In this paper, we propose a hybrid QoS-aware workflow scheduling algorithm named HLWOA to address the problem of simultaneously minimizing the completion time and execution cost of workflow scheduling in cloud computing. First, the workflow scheduling problem in cloud computing is modeled as a multi-objective optimization problem. Then, based on the heterogeneous earliest finish time (HEFT) heuristic optimization algorithm, tasks are reverse topologically sorted and assigned to virtual machines with the earliest finish time to construct an initial workflow task scheduling sequence. Furthermore, an improved Whale Optimization Algorithm (WOA) based on Lévy flight is proposed. The output solution of HEFT is used as one of the initial population solutions in WOA to accelerate the convergence speed of the algorithm. Subsequently, a Lévy flight search strategy is introduced in the iterative optimization phase to avoid the algorithm falling into local optimal solutions. The proposed HLWOA is evaluated on the WorkflowSim platform using real-world scientific workflows (Cybershake and Montage) with different task scales (100 and 1000). Experimental results demonstrate that HLWOA outperforms HEFT, HEPGA, and standard WOA in both makespan and cost, with normalized fitness values consistently ranking first. Full article
(This article belongs to the Section Internet of Things)
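The HLWOA abstract names two ingredients: an earliest-finish-time list-scheduling pass (HEFT-style) to seed the population, and a Lévy-flight step inside the whale optimizer. A hedged Python sketch of both under simplifying assumptions (unit-work tasks, a made-up DAG, Mantegna's approximation for the Lévy step); `eft_schedule` and `levy_step` are illustrative names, not the paper's code.

```python
import math
import random

def eft_schedule(tasks, deps, vm_speed):
    """Greedy list scheduling: walk tasks in topological order (assumed
    given) and put each on the VM with the earliest finish time."""
    finish = {}                      # task -> finish time
    vm_free = [0.0] * len(vm_speed)  # per-VM next free instant
    assign = {}
    for t in tasks:
        ready = max((finish[d] for d in deps.get(t, ())), default=0.0)
        best_vm = min(range(len(vm_speed)),
                      key=lambda v: max(vm_free[v], ready) + 1.0 / vm_speed[v])
        start = max(vm_free[best_vm], ready)
        finish[t] = start + 1.0 / vm_speed[best_vm]   # unit-work tasks
        vm_free[best_vm] = finish[t]
        assign[t] = best_vm
    return assign, max(finish.values())

def levy_step(beta=1.5):
    """Mantegna's algorithm: heavy-tailed step length for the WOA search,
    letting occasional long jumps escape local optima."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

# Toy DAG: c depends on a and b; VM 1 is twice as fast as VM 0.
assign, makespan = eft_schedule(["a", "b", "c"], {"c": ["a", "b"]}, [1.0, 2.0])
```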

20 pages, 2776 KB  
Article
Automatic 3D Reconstruction: Mesh Extraction Based on Gaussian Splatting from Romanesque–Mudéjar Churches
by Nelson Montas-Laracuente, Emilio Delgado Martos, Carlos Pesqueira-Calvo, Giovanni Intra Sidola, Ana Maitín, Alberto Nogales and Álvaro José García-Tejedor
Appl. Sci. 2025, 15(15), 8379; https://doi.org/10.3390/app15158379 - 28 Jul 2025
Viewed by 660
Abstract
This research introduces an automated 3D virtual reconstruction system tailored for architectural heritage (AH) applications, contributing to the ongoing paradigm shift from traditional CAD-based workflows to artificial intelligence-driven methodologies. It reviews recent advancements in machine learning and deep learning—particularly neural radiance fields (NeRFs) and their successor, Gaussian splatting (GS)—as state-of-the-art techniques in the domain. The study advocates for replacing point cloud data in heritage building information modeling workflows with image-based inputs, proposing a novel “photo-to-BIM” pipeline. A proof-of-concept system is presented, capable of processing photographs or video footage of ancient ruins—specifically, Romanesque–Mudéjar churches—to automatically generate 3D mesh reconstructions. The system’s performance is assessed using both objective metrics and subjective evaluations of mesh quality. The results confirm the feasibility and promise of image-based reconstruction as a viable alternative to conventional methods. The study successfully developed a system for automated 3D mesh reconstruction of AH from images, applying GS and Mip-splatting, which proved superior in noise reduction, followed by surface-aligned Gaussian splatting for efficient mesh extraction. This photo-to-mesh pipeline signifies a viable step towards HBIM. Full article

23 pages, 650 KB  
Article
Exercise-Specific YANG Profile for AI-Assisted Network Security Labs: Bidirectional Configuration Exchange with Large Language Models
by Yuichiro Tateiwa
Information 2025, 16(8), 631; https://doi.org/10.3390/info16080631 - 24 Jul 2025
Viewed by 284
Abstract
Network security courses rely on hands-on labs where students configure virtual Linux networks to practice attack and defense. Automated feedback is scarce because no standard exists for exchanging detailed configurations—interfaces, bridging, routing tables, iptables policies—between exercise software and large language models (LLMs) that could serve as tutors. We address this interoperability gap with an exercise-oriented YANG profile that augments the Internet Engineering Task Force (IETF) ietf-network module with a new network-devices module. The profile expresses Linux interface settings, routing, and firewall rules, and tags each node with roles such as linux-server or linux-firewall. Integrated into our LiNeS Cloud platform, it enables LLMs to both parse and generate machine-readable network states. We evaluated the profile on four topologies—from a simple client–server pair to multi-subnet scenarios with dedicated security devices—using ChatGPT-4o, Claude 3.7 Sonnet, and Gemini 2.0 Flash. Across 1050 evaluation tasks covering profile understanding (n = 180), instance analysis (n = 750), and instance generation (n = 120), the three LLMs answered correctly in 1028 cases, yielding an overall accuracy of 97.9%. Even with only minimal follow-up cues (≤3 turns), rather than handcrafted prompt chains, analysis tasks reached 98.1% accuracy and generation tasks 93.3%. To our knowledge, this is the first exercise-focused YANG profile that simultaneously captures Linux/iptables semantics and is empirically validated across three proprietary LLMs, attaining 97.9% overall task accuracy. These results lay a practical foundation for artificial intelligence (AI)-assisted security labs where real-time feedback and scenario generation must scale beyond human instructor capacity. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
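Since the profile augments ietf-network and tags each node with a role such as linux-server or linux-firewall, exercise software can sanity-check instances before handing them to an LLM. A hedged Python sketch of such a check; the JSON layout and role vocabulary below are illustrative assumptions, not the published module.

```python
# Hedged sketch: validate role tags in a hypothetical JSON encoding of an
# ietf-network-style node list. Only the two roles named in the abstract
# are certain; the rest of this document shape is made up for illustration.
VALID_ROLES = {"linux-server", "linux-client", "linux-firewall", "linux-router"}

topology = {
    "ietf-network:networks": {
        "network": [{
            "network-id": "lab-1",
            "node": [
                {"node-id": "srv1", "role": "linux-server"},
                {"node-id": "fw1", "role": "linux-firewall"},
            ],
        }]
    }
}

def invalid_roles(doc):
    """Return node-ids whose role tag falls outside the allowed vocabulary."""
    bad = []
    for net in doc["ietf-network:networks"]["network"]:
        for node in net["node"]:
            if node.get("role") not in VALID_ROLES:
                bad.append(node["node-id"])
    return bad
```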

17 pages, 1301 KB  
Article
Carbon-Aware, Energy-Efficient, and SLA-Compliant Virtual Machine Placement in Cloud Data Centers Using Deep Q-Networks and Agglomerative Clustering
by Maraga Alex, Sunday O. Ojo and Fred Mzee Awuor
Computers 2025, 14(7), 280; https://doi.org/10.3390/computers14070280 - 15 Jul 2025
Viewed by 589
Abstract
The fast expansion of cloud computing has raised carbon emissions and energy usage in cloud data centers, making creative solutions for sustainable resource management increasingly necessary. This work presents a new algorithm—Carbon-Aware, Energy-Efficient, and SLA-Compliant Virtual Machine Placement using Deep Q-Networks (DQNs) and Agglomerative Clustering (CARBON-DQN)—that intelligently balances environmental sustainability, service level agreement (SLA) compliance, and energy efficiency. The method combines a deep reinforcement learning model that learns optimum placement methods over time, carbon-aware data center profiling, and the hierarchical clustering of virtual machines (VMs) depending on resource constraints. Extensive simulations show that CARBON-DQN substantially outperforms conventional and state-of-the-art algorithms such as GRVMP, NSGA-II, RLVMP, GMPR, and MORLVMP. Across many virtual machine configurations—including micro, small, high-CPU, and extra-large instances—it delivers the lowest carbon emissions, the fewest SLA violations, and the lowest energy usage. Driven by real-time input, the adaptive decision-making capacity of the algorithm allows it to react dynamically to changing data center circumstances and workloads. These findings highlight CARBON-DQN's effectiveness as a sustainable and intelligent virtual machine deployment system for cloud platforms. To further improve scalability, environmental impact, and practical applicability, future work will investigate the integration of renewable energy forecasts, dynamic pricing models, and deployment across multi-cloud and edge computing environments. Full article
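The placement agent described above learns from a reward that must trade off energy, carbon, and SLA compliance. The paper uses a deep Q-network; as a hedged stand-in, a tiny tabular Q-learning update with an assumed weighted-cost reward (weights, state names, and values are illustrative, not the published model):

```python
# Hedged sketch: negative weighted cost as the placement reward, plus the
# standard one-step Q-learning backup on a dict-of-dicts table.
def reward(energy_kwh, carbon_kg, sla_violated, w_e=0.4, w_c=0.4, w_s=0.2):
    """Higher is better: the agent is penalized for energy, carbon, SLA."""
    return -(w_e * energy_kwh + w_c * carbon_kg + w_s * (1.0 if sla_violated else 0.0))

def q_update(q, state, action, r, next_state, alpha=0.1, gamma=0.9):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get(next_state, {0: 0.0}).values())
    q.setdefault(state, {})
    old = q[state].get(action, 0.0)
    q[state][action] = old + alpha * (r + gamma * best_next - old)
    return q

q = {}
r = reward(energy_kwh=2.0, carbon_kg=1.0, sla_violated=False)
q = q_update(q, state="vm-small", action=0, r=r, next_state="vm-small")
```

A DQN replaces the table with a neural network over a continuous state, but the backup target has the same shape.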

46 pages, 1709 KB  
Article
Federated Learning-Driven IoT Request Scheduling for Fault Tolerance in Cloud Data Centers
by Sheeja Rani S and Raafat Aburukba
Mathematics 2025, 13(13), 2198; https://doi.org/10.3390/math13132198 - 5 Jul 2025
Viewed by 587
Abstract
Cloud computing is a virtualized and distributed computing model that provides resources and services based on demand and self-service. Resource failure is one of the major challenges in cloud computing, and there is a need for fault tolerance mechanisms. This paper addresses the issue by proposing a multi-objective radial kernelized federated learning-based fault-tolerant scheduling (MRKFL-FTS) technique for allocating multiple IoT requests or user tasks to virtual machines in cloud IoT-based environments. The MRKFL-FTS technique includes Cloud RAN (C-RAN) and Virtual RAN (V-RAN). The proposed MRKFL-FTS technique comprises four entities, namely, IoT devices, cloud servers, task assigners, and virtual machines. Each IoT device generates several service requests and sends them to the control server. At first, radial kernelized support vector regression is applied in the local training model to identify resource-efficient virtual machines. After that, locally trained models are combined, and the resulting model is fed into the global aggregation model. Finally, using a weighted round-robin method, the task assigner allocates incoming IoT service requests to virtual machines. This approach improves resource awareness and fault tolerance in scheduling. The quantitatively analyzed results show that the MRKFL-FTS technique achieved an 8% improvement in task scheduling efficiency and fault prediction accuracy, a 36% improvement in throughput, and a 14% reduction in makespan and time complexity. In addition, the MRKFL-FTS technique resulted in a 13% reduction in response time. Compared with conventional scheduling techniques, MRKFL-FTS also reduces energy consumption by 17% and increases scalability by 8%. Full article
(This article belongs to the Special Issue Advanced Information and Signal Processing: Models and Algorithms)
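The final dispatch step above is a weighted round-robin: VMs judged more resource-efficient receive proportionally more of the incoming IoT requests. A minimal Python sketch under assumed weights (the VM names and ratios are illustrative, not the paper's):

```python
# Hedged sketch: expand each VM id by its weight into a dispatch ring,
# then deal incoming requests out cyclically. With weights 2:1, vm-a
# receives two of every three requests.
def weighted_round_robin(requests, vm_weights):
    ring = [vm for vm, w in vm_weights for _ in range(w)]
    return {req: ring[i % len(ring)] for i, req in enumerate(requests)}

plan = weighted_round_robin(["r1", "r2", "r3", "r4", "r5", "r6"],
                            [("vm-a", 2), ("vm-b", 1)])
```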

17 pages, 4077 KB  
Article
An Evolutionary Algorithm for Multi-Objective Workflow Scheduling with Adaptive Dynamic Grouping
by Guochen Zhang, Aolong Zhang, Chaoli Sun and Qing Ye
Electronics 2025, 14(13), 2586; https://doi.org/10.3390/electronics14132586 - 26 Jun 2025
Cited by 1 | Viewed by 342
Abstract
For workflow scheduling with complex dependencies in cloud computing environments, existing research predominantly focuses on multi-objective algorithm optimization while neglecting the critical factor of workflow topological structure. The proposed Adaptive Dynamic Grouping (ADG) strategy overcomes this limitation via two mechanisms: first, it constructs a dynamic variable grouping model based on task dependencies to compress the decision space and reduce global search overhead; second, it introduces an adaptive resource allocation strategy that dynamically distributes execution opportunities according to each variable group’s contribution to optimization, accelerating convergence toward the Pareto frontier. The experimental results on five real-world workflows across three major cloud providers’ virtual machines demonstrate ADG’s superior performance in multi-objective optimization, including execution time, cost, and energy consumption, providing an efficient solution for cloud-based workflow scheduling. Full article
(This article belongs to the Section Computer Science & Engineering)
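Grouping decision variables by task dependencies can be illustrated by bucketing a workflow's tasks by their topological depth: tasks at the same depth share no ordering constraint and can be optimized together, shrinking the search space. A hedged sketch with a made-up DAG; the paper's actual grouping model may differ.

```python
from collections import defaultdict

# Hedged sketch: group tasks by longest-path depth from any entry task.
# deps maps task -> list of tasks it depends on.
def group_by_level(deps, tasks):
    level = {}
    def depth(t):
        if t not in level:
            level[t] = 1 + max((depth(d) for d in deps.get(t, ())), default=-1)
        return level[t]
    groups = defaultdict(list)
    for t in tasks:
        groups[depth(t)].append(t)
    return dict(groups)

# Toy workflow: c needs a; d needs a and b; e needs c and d.
groups = group_by_level({"c": ["a"], "d": ["a", "b"], "e": ["c", "d"]},
                        ["a", "b", "c", "d", "e"])
```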

35 pages, 9294 KB  
Article
Evaluation of Simulation Framework for Detecting the Quality of Forest Tree Stems
by Anwar Sagar, Kalle Kärhä, Kalervo Järvelin and Reza Ghabcheloo
Forests 2025, 16(6), 1023; https://doi.org/10.3390/f16061023 - 18 Jun 2025
Viewed by 443
Abstract
The advancement of harvester technology increasingly relies on automated forest analysis within machine operational ranges. However, real-world testing remains costly and time-consuming. To address this, we introduced the Tree Classification Framework (TCF), a simulation platform for the cost-effective testing of harvester technologies. TCF accelerates technology development by simulating forest environments and machine operations, leveraging machine-learning and computer vision models. TCF has four components: Synthetic Forest Creation, which generates diverse virtual forests; Point Cloud Generation, which simulates LiDAR scanning; Stem Identification and Classification, which detects and characterises tree stems; and Experimental Evaluation, which assesses algorithm performance under varying conditions. We tested TCF across ten forest scenarios with different tree densities and morphologies, using two point-cloud generation methods: fixed points per stem and LiDAR scanning at three resolutions. Performance was evaluated against ground-truth data using quantitative metrics and heatmaps. TCF bridges the gap between simulation and real-world forestry, enhancing harvester technology by improving efficiency, accuracy, and sustainability in automated tree assessment. This paper presents a framework built from affordable, standard components for stem identification and classification. TCF enables the systematic testing of classification algorithms against known ground truth under controlled, repeatable conditions. Through diverse evaluations, the framework demonstrates its utility by providing the necessary components, representations, and procedures for reliable stem classification. Full article
(This article belongs to the Section Forest Operations and Engineering)

22 pages, 2229 KB  
Article
A Structured Data Model for Asset Health Index Integration in Digital Twins of Energy Converters
by Juan F. Gómez Fernández, Eduardo Candón Fernández and Adolfo Crespo Márquez
Energies 2025, 18(12), 3148; https://doi.org/10.3390/en18123148 - 16 Jun 2025
Viewed by 576
Abstract
A persistent challenge in digital asset management is the lack of standardized models for integrating health assessment—such as the Asset Health Index (AHI)—into Digital Twins, limiting their extended implementation beyond individual projects. Asset managers in the energy sector face digitalization challenges such as selecting a digital environment, choosing digital modules (in the absence of an architecture guide) and interconnecting them, identifying data sources, and automating the assessment so that results reach a user-friendly decision support system. Thus, for energy systems, integrating asset assessment into virtual replicas via Digital Twins offers a comprehensive approach to asset management, enabling real-time monitoring, predictive maintenance, and lifecycle optimization. Another challenge in this context is how to compose a structured assessment of asset condition, where the Asset Health Index (AHI) plays a critical role by consolidating heterogeneous data into a single, actionable indicator that is easy to interpret as a level of risk. This paper serves as a guide to these digital and structured assessments, showing how to integrate AHI methodologies into Digital Twins for energy converters. First, the proposed AHI methodology is introduced, followed by a structured data model specifically designed for a basic and economical cloud implementation architecture. The model has been developed following standardized asset digitalization practices such as the Reference Architecture Model for Industry 4.0 (RAMI 4.0), organizing asset-related information into interoperable domains including physical hierarchy, operational monitoring, reliability assessment, and risk-based decision-making. A Unified Modeling Language (UML) class diagram formalizes the data model for cloud Digital Twin implementation, which is deployed on Microsoft Azure Architecture using native Internet of Things (IoT) and analytics services to enable automated, real-time AHI calculation. The design is scalable and allows for the future integration of machine-learning improvements. The proposed approach is validated through a case study involving three high-capacity converters in distinct operating environments, showing the model’s effective assistance in anticipating failures, optimizing maintenance strategies, and improving asset resilience. In the case study, AHI-based monitoring reduced unplanned failures by 43% and improved maintenance planning accuracy by over 30%. Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)
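An AHI consolidates heterogeneous condition data into one risk indicator. A hedged Python sketch of the general pattern, a weighted average of normalized sub-indicators mapped to a risk band; the sub-indicator names, weights, and band cut-offs are illustrative assumptions, not the published methodology.

```python
# Hedged sketch: each sub-indicator is assumed pre-normalized to 0..100
# (100 = perfect health); the AHI is their weighted average.
def asset_health_index(indicators, weights):
    total = sum(weights.values())
    return sum(indicators[k] * w for k, w in weights.items()) / total

def risk_band(ahi):
    """Map the scalar index to an actionable risk level (illustrative cuts)."""
    if ahi >= 85:
        return "low risk"
    if ahi >= 60:
        return "monitor"
    return "intervene"

ahi = asset_health_index(
    {"thermal": 90, "electrical": 70, "maintenance-history": 80},
    {"thermal": 0.5, "electrical": 0.3, "maintenance-history": 0.2})
```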

30 pages, 1687 KB  
Article
Network-, Cost-, and Renewable-Aware Ant Colony Optimization for Energy-Efficient Virtual Machine Placement in Cloud Datacenters
by Ali Mohammad Baydoun and Ahmed Sherif Zekri
Future Internet 2025, 17(6), 261; https://doi.org/10.3390/fi17060261 - 14 Jun 2025
Viewed by 552
Abstract
Virtual machine (VM) placement in cloud datacenters is a complex multi-objective challenge involving trade-offs among energy efficiency, carbon emissions, and network performance. This paper proposes NCRA-DP-ACO (Network-, Cost-, and Renewable-Aware Ant Colony Optimization with Dynamic Power Usage Effectiveness (PUE)), a bio-inspired metaheuristic that optimizes VM placement across geographically distributed datacenters. The approach integrates real-time solar energy availability, dynamic PUE modeling, and multi-criteria decision-making to enable environmentally and cost-efficient resource allocation. The experimental results show that NCRA-DP-ACO reduces power consumption by 13.7%, carbon emissions by 6.9%, and live VM migrations by 48.2% compared to state-of-the-art methods while maintaining Service Level Agreement (SLA) compliance. These results indicate the algorithm’s potential to support more environmentally and cost-efficient cloud management across dynamic infrastructure scenarios. Full article
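Dynamic PUE modeling and solar availability combine in a simple accounting identity: facility energy is IT energy scaled by PUE, and only the demand not covered by on-site solar draws grid carbon. A hedged Python sketch; the numbers and the single-term carbon model are illustrative, the paper's models are richer.

```python
# Hedged sketch: PUE-scaled energy and solar-offset grid carbon for one
# candidate placement. All inputs are made-up example values.
def facility_energy_kwh(it_energy_kwh, pue):
    """PUE = total facility energy / IT energy, so total = IT * PUE."""
    return it_energy_kwh * pue

def grid_carbon_kg(facility_kwh, solar_kwh, grid_intensity_kg_per_kwh):
    """Only demand not covered by on-site solar incurs grid carbon."""
    grid_kwh = max(facility_kwh - solar_kwh, 0.0)
    return grid_kwh * grid_intensity_kg_per_kwh

e = facility_energy_kwh(100.0, pue=1.4)   # 140 kWh at the meter
c = grid_carbon_kg(e, solar_kwh=60.0, grid_intensity_kg_per_kwh=0.4)
```

In an ACO search, quantities like `e` and `c` would feed the heuristic desirability of each candidate host, steering ants toward datacenters that are currently solar-rich or low-PUE.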

24 pages, 2188 KB  
Article
Optimizing Energy Efficiency in Cloud Data Centers: A Reinforcement Learning-Based Virtual Machine Placement Strategy
by Abdelhadi Amahrouch, Youssef Saadi and Said El Kafhali
Network 2025, 5(2), 17; https://doi.org/10.3390/network5020017 - 27 May 2025
Viewed by 1144
Abstract
Cloud computing faces growing challenges in energy consumption due to the increasing demand for services and resource usage in data centers. To address this issue, we propose a novel energy-efficient virtual machine (VM) placement strategy that integrates reinforcement learning (Q-learning), a Firefly optimization algorithm, and a VM sensitivity classification model based on random forest and self-organizing map. The proposed method, RLVMP, classifies VMs as sensitive or insensitive and dynamically allocates resources to minimize energy consumption while ensuring compliance with service level agreements (SLAs). Experimental results using the CloudSim simulator, adapted with data from Microsoft Azure, show that our model significantly reduces energy consumption. Specifically, under the lr_1.2_mmt strategy, our model achieves a 5.4% reduction in energy consumption compared to PABFD, 12.8% compared to PSO, and 12% compared to genetic algorithms. Under the iqr_1.5_mc strategy, the reductions are even more significant: 12.11% compared to PABFD, 15.6% compared to PSO, and 18.67% compared to genetic algorithms. Furthermore, our model reduces the number of live migrations, which helps minimize SLA violations. Overall, the combination of Q-learning and the Firefly algorithm enables adaptive, SLA-compliant VM placement with improved energy efficiency. Full article
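The Firefly component of RLVMP refines candidate placements by moving dimmer (worse) solutions toward brighter (better) ones with distance-decayed attractiveness plus a small random kick. A hedged Python sketch of the canonical firefly move; parameters and positions are illustrative, not the paper's configuration.

```python
import math
import random

# Hedged sketch: standard firefly update. beta0 is attractiveness at zero
# distance, gamma the light-absorption coefficient, alpha the random kick.
def firefly_move(x, brighter, beta0=1.0, gamma=1.0, alpha=0.0, rng=random):
    r2 = sum((a - b) ** 2 for a, b in zip(x, brighter))
    beta = beta0 * math.exp(-gamma * r2)      # attractiveness decays with distance
    return [a + beta * (b - a) + alpha * (rng.random() - 0.5)
            for a, b in zip(x, brighter)]

moved = firefly_move([0.0, 0.0], [1.0, 1.0])  # alpha=0: deterministic drift
```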

32 pages, 2174 KB  
Article
Performance Analysis of Cloud Computing Task Scheduling Using Metaheuristic Algorithms in DDoS and Normal Environments
by Fatih Kaplan and Ahmet Babalik
Electronics 2025, 14(10), 1988; https://doi.org/10.3390/electronics14101988 - 13 May 2025
Cited by 1 | Viewed by 803
Abstract
Cloud computing has emerged as a fundamental pillar of modern technology, enabling large-scale data management, computational efficiency, and operational flexibility. However, critical challenges persist, particularly concerning security and performance. DDoS attacks severely impact cloud infrastructure by degrading system performance and causing service disruptions. These persistent threats raise concerns about cloud system reliability and underscore the necessity for advanced security measures. This study investigates the cloud computing task scheduling problem, recognized as NP-hard, and explores the impact of adversarial conditions such as DDoS attacks on system performance. To address this challenge, metaheuristic algorithms are employed. The research evaluates the effectiveness of traditional approaches, including genetic algorithms (GAs), particle swarm optimization (PSO), and artificial bee colony (ABC), while also introducing a GA–PSO algorithm designed to enhance task scheduling efficiency. The proposed method aims to minimize makespan by optimizing task allocation across virtual machines (VMs) within cloud environments. A comparative analysis of scheduling performance under both normal and DDoS-affected conditions reveals that metaheuristic techniques contribute significantly to system resilience. Furthermore, the GA–PSO algorithm demonstrates notable improvements at specific iteration levels. The findings underscore the potential of advanced scheduling methods to enhance cloud computing sustainability while offering practical solutions to mitigate real-world security threats. Full article
(This article belongs to the Section Computer Science & Engineering)
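The fitness that the GA, PSO, ABC, and hybrid GA-PSO schedulers above all minimize is the makespan of a task-to-VM assignment. For independent tasks this reduces to the heaviest per-VM load, as in this hedged Python sketch (task lengths and VM speeds are illustrative):

```python
# Hedged sketch: makespan of an independent-task assignment.
# assign[i] gives the VM index for task i; each VM's completion time is
# the sum of its tasks' lengths divided by its speed.
def makespan(task_len, assign, vm_speed):
    load = [0.0] * len(vm_speed)
    for length, vm in zip(task_len, assign):
        load[vm] += length / vm_speed[vm]
    return max(load)

m = makespan([4.0, 2.0, 6.0], [0, 1, 1], [2.0, 4.0])
```

Under a DDoS scenario, effective VM speeds drop, so the same assignment yields a larger makespan, which is what lets the metaheuristics be compared across normal and attacked conditions.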

21 pages, 1103 KB  
Article
Multi-Objective Cauchy Particle Swarm Optimization for Energy-Aware Virtual Machine Placement in Cloud Datacenters
by Xuan Liu, Chenyan Wang, Shan Jiang, Yutong Gao, Chaomurilige and Bo Cheng
Symmetry 2025, 17(5), 742; https://doi.org/10.3390/sym17050742 - 13 May 2025
Viewed by 488
Abstract
With the continuous expansion of application scenarios for cloud computing, large-scale service deployments in cloud data centers are accompanied by a significant increase in resource consumption. Virtual machines (VMs) in data centers are allocated to physical machines (PMs) and require the resources provided by PMs to run various services. Intuitively, a simple solution to minimize energy consumption is to allocate VMs as compactly as possible. However, the above virtual machine placement (VMP) strategy may lead to system performance degradation and service failures due to imbalanced resource load, thereby reducing the robustness of the cloud data center. Therefore, an effective VMP solution that comprehensively considers both energy consumption and other performance metrics in data centers is urgently needed. In this paper, we first construct a multi-objective VMP model aiming to simultaneously optimize energy consumption, resource utilization, load balancing, and system robustness, and we then build a joint optimization function with resource constraints. Subsequently, a novel energy-aware Cauchy particle swarm optimization (EA-CPSO) algorithm is proposed, which implements particle asymmetric disturbances and an energy-efficient population iteration strategy, aiming to minimize the value of the joint optimization function. Finally, our extensive experiments demonstrated that EA-CPSO outperforms existing methods. Full article
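The Cauchy disturbance named in EA-CPSO is heavy-tailed, so particles occasionally take long jumps that help escape local optima. A hedged Python sketch of sampling a Cauchy variate by inverse-CDF and perturbing a particle position; the scale and positions are illustrative, not the paper's parameters.

```python
import math
import random

# Hedged sketch: inverse-CDF sampling of a Cauchy variate and its use as
# an additive perturbation on a particle position.
def cauchy_sample(scale=1.0, rng=random):
    """tan(pi*(U - 1/2)) is standard Cauchy when U ~ Uniform(0, 1)."""
    return scale * math.tan(math.pi * (rng.random() - 0.5))

def perturb(position, scale=0.1, rng=random):
    return [x + cauchy_sample(scale, rng) for x in position]

random.seed(0)
new_pos = perturb([0.5, 0.5])
```

Compared with a Gaussian kick, the Cauchy tail makes large displacements far more likely, which is the escape mechanism the abstract alludes to.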

23 pages, 3481 KB  
Article
Evaluating QoS in Dynamic Virtual Machine Migration: A Multi-Class Queuing Model for Edge-Cloud Systems
by Anna Kushchazli, Kseniia Leonteva, Irina Kochetkova and Abdukodir Khakimov
J. Sens. Actuator Netw. 2025, 14(3), 47; https://doi.org/10.3390/jsan14030047 - 25 Apr 2025
Viewed by 1002
Abstract
The efficient migration of virtual machines (VMs) is critical for optimizing resource management, ensuring service continuity, and enhancing resiliency in cloud and edge computing environments, particularly as 6G networks demand higher reliability and lower latency. This study addresses the challenges of dynamically balancing server loads while minimizing downtime and migration costs under stochastic task arrivals and variable processing times. We propose a queuing theory-based model employing continuous-time Markov chains (CTMCs) to capture the interplay between VM migration decisions, server resource constraints, and task processing dynamics. The model incorporates two migration policies—one minimizing projected post-migration server utilization and another prioritizing current utilization—to evaluate their impact on system performance. The numerical results show that under Policy 1 the blocking probability for the first VM is 2.1% lower than under Policy 2, and 4.7% lower for the second VM. Average server resource utilization increased by up to 11.96%. The framework’s adaptability to diverse server–VM configurations and stochastic demands demonstrates its applicability to real-world cloud systems. These results highlight predictive resource allocation’s role in dynamic environments. Furthermore, the study lays the groundwork for extending this framework to multi-access edge computing (MEC) environments, which are integral to 6G networks. Full article
(This article belongs to the Section Communications and Networking)
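Blocking probabilities like those reported above are read off a CTMC's stationary distribution; for a server with c parallel slots and no queue, the birth-death chain collapses to the Erlang-B formula. A hedged Python sketch of that special case (the load values are illustrative, not the paper's multi-class model):

```python
# Hedged sketch: P(block) for an M/M/c/c loss system, computed with the
# numerically stable Erlang-B recurrence B(0) = 1,
# B(k) = a*B(k-1) / (k + a*B(k-1)), where a is the offered load in Erlangs.
def erlang_b(offered_load, servers):
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

p_block = erlang_b(offered_load=2.0, servers=3)
```

The paper's multi-class, migration-aware chain needs a full stationary-distribution solve, but the blocking metric has the same meaning: the stationary mass of states where an arriving request finds no capacity.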

26 pages, 6409 KB  
Article
Design of Rotors in Centrifugal Pumps Using the Topology Optimization Method and Parallel Computing in the Cloud
by Xavier Andrés Arcentales, Danilo Andrés Arcentales and Wilfredo Montealegre
Machines 2025, 13(4), 307; https://doi.org/10.3390/machines13040307 - 10 Apr 2025
Cited by 1 | Viewed by 474
Abstract
Designing flow machines is challenging due to numerous free geometrical parameters. This work aims to develop a parallelized computational algorithm in MATLAB version R2020a to design the rotor of a radial-flow centrifugal pump using the finite-element method (FEM), topology optimization method (TOM), and parallel cloud computing (bare-metal vs. virtual machine). The goal is to minimize a bi-objective function comprising energy dissipation and vorticity within half a rotor circumference. When only minimizing energy dissipation (wd = 1, wr = 0), the performance achieved is 5.88 Watts. Considering both energy dissipation and vorticity (wd = 0.8, wr = 0.2), the performance is 5.94 Watts. These topology results are then extended to a full 3D model using Ansys Fluent version 18.2 to validate the objective functions minimized by TOM. The algorithm is parallelized and executed on multiple CPU cores in the cloud on two different platforms, Amazon Web Services (virtual machine) and Equinix (bare-metal machine), to accelerate the blade design process. In conclusion, mathematical optimization tools aid engineering designers in achieving non-intuitive designs and enhancing results. Full article
(This article belongs to the Section Machine Design and Theory)
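The bi-objective function above is a convex combination of energy dissipation and vorticity weighted by (wd, wr). A hedged Python sketch; the weight pairs match those quoted in the abstract, while the raw dissipation and vorticity values are illustrative placeholders:

```python
# Hedged sketch: weighted-sum scalarization of the two objectives.
# J = wd * dissipation + wr * vorticity, with wd + wr = 1.
def bi_objective(dissipation, vorticity, wd, wr):
    assert abs(wd + wr - 1.0) < 1e-12, "weights form a convex combination"
    return wd * dissipation + wr * vorticity

only_dissipation = bi_objective(5.88, 30.0, wd=1.0, wr=0.0)  # pure-dissipation case
blended = bi_objective(5.94, 30.0, wd=0.8, wr=0.2)           # blended case
```

Sweeping (wd, wr) traces different compromise designs, which is why the paper reports results for both weightings.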
