Search Results (33)

Search Parameters:
Keywords = server consolidation

17 pages, 2040 KiB  
Article
Intelligent Virtual Machine Scheduling Based on CPU Temperature-Involved Server Load Model
by Huan Zhou, Jiebei Zhu, Binbin Chen, Lujie Yu and Heyu Luo
Energies 2025, 18(14), 3611; https://doi.org/10.3390/en18143611 - 8 Jul 2025
Viewed by 240
Abstract
To reduce the significant energy consumption of data centers, virtual machine scheduling optimization and server consolidation are deployed. However, existing server power load (SPL) models typically adopt linear approximations, which deviate from actual SPL characteristics and hinder optimal virtual machine scheduling. Therefore, intelligent virtual machine scheduling (IVMS) based on a CPU temperature-involved server load model is proposed for data center energy conservation. IVMS establishes a novel server power load model that accounts for the influence of CPU temperature in order to capture actual server load characteristics. Based on this model, the Q-learning method, with its global-optimization capability, is used to obtain a scheduling solution with improved accuracy. The performance of the proposed IVMS is evaluated against existing methods through both simulations and data-center experiments, which show that IVMS better predicts SPL characteristics and further reduces server energy consumption. Full article
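The paper's temperature-involved SPL model and Q-learning formulation are not reproduced in the abstract, but the general idea can be sketched. Below is a toy tabular Q-learning placement loop in Python; the nonlinear power model `server_power`, the state/action encoding, and all constants are illustrative assumptions, not the authors' method.

```python
import random

def server_power(u, temp_c):
    """Hypothetical nonlinear server power load (SPL) in watts: linear in CPU
    utilisation u plus a temperature surcharge above 60 degrees C (illustrative
    only, mirroring the claim that linear models miss temperature effects)."""
    idle, peak = 100.0, 250.0
    return idle + (peak - idle) * u + 0.5 * max(0.0, temp_c - 60.0) * u

def schedule_q_learning(vm_loads, n_hosts, episodes=300,
                        alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning: state = index of the VM being placed,
    action = destination host, reward = negative marginal power."""
    rng = random.Random(seed)
    q = [[0.0] * n_hosts for _ in vm_loads]
    best_plan, best_cost = None, float("inf")
    for _ in range(episodes):
        util = [0.0] * n_hosts
        plan, cost = [], 0.0
        for i, load in enumerate(vm_loads):
            if rng.random() < eps:                     # explore
                a = rng.randrange(n_hosts)
            else:                                      # exploit
                a = max(range(n_hosts), key=lambda h: q[i][h])
            # assume CPU temperature rises with utilisation: T = 50 + 40u
            before = sum(server_power(u, 50 + 40 * u) for u in util)
            util[a] = min(1.0, util[a] + load)
            after = sum(server_power(u, 50 + 40 * u) for u in util)
            reward = before - after                    # negative marginal power
            nxt = max(q[i + 1]) if i + 1 < len(vm_loads) else 0.0
            q[i][a] += alpha * (reward + gamma * nxt - q[i][a])
            plan.append(a)
            cost += after - before
        if cost < best_cost:
            best_plan, best_cost = plan, cost
    return best_plan, best_cost
```

Running `schedule_q_learning([0.3, 0.2, 0.4, 0.1], n_hosts=2)` returns a host assignment per VM and the total marginal power of the best episode found.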

24 pages, 2188 KiB  
Article
Optimizing Energy Efficiency in Cloud Data Centers: A Reinforcement Learning-Based Virtual Machine Placement Strategy
by Abdelhadi Amahrouch, Youssef Saadi and Said El Kafhali
Network 2025, 5(2), 17; https://doi.org/10.3390/network5020017 - 27 May 2025
Viewed by 862
Abstract
Cloud computing faces growing challenges in energy consumption due to the increasing demand for services and resource usage in data centers. To address this issue, we propose a novel energy-efficient virtual machine (VM) placement strategy that integrates reinforcement learning (Q-learning), a Firefly optimization algorithm, and a VM sensitivity classification model based on random forest and self-organizing map. The proposed method, RLVMP, classifies VMs as sensitive or insensitive and dynamically allocates resources to minimize energy consumption while ensuring compliance with service level agreements (SLAs). Experimental results using the CloudSim simulator, adapted with data from Microsoft Azure, show that our model significantly reduces energy consumption. Specifically, under the lr_1.2_mmt strategy, our model achieves a 5.4% reduction in energy consumption compared to PABFD, 12.8% compared to PSO, and 12% compared to genetic algorithms. Under the iqr_1.5_mc strategy, the reductions are even more significant: 12.11% compared to PABFD, 15.6% compared to PSO, and 18.67% compared to genetic algorithms. Furthermore, our model reduces the number of live migrations, which helps minimize SLA violations. Overall, the combination of Q-learning and the Firefly algorithm enables adaptive, SLA-compliant VM placement with improved energy efficiency. Full article
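The abstract names a Firefly optimization component without giving details; a minimal generic firefly-minimization sketch (not the paper's RLVMP pipeline) looks like the following, where the attraction constants `beta0`, `gamma`, and `alpha` are assumed values:

```python
import math, random

def firefly_minimize(f, dim, n=15, iters=80, beta0=1.0, gamma=0.01,
                     alpha=0.2, seed=7):
    """Minimal firefly optimization sketch: brighter (lower-cost) fireflies
    attract dimmer ones; attractiveness decays with squared distance."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    costs = [f(x) for x in xs]
    best_i = min(range(n), key=lambda k: costs[k])
    best_x, best_v = list(xs[best_i]), costs[best_i]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if costs[j] < costs[i]:  # j is brighter: i moves toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]
                    costs[i] = f(xs[i])
                    if costs[i] < best_v:              # keep best-so-far
                        best_x, best_v = list(xs[i]), costs[i]
        alpha *= 0.97  # cool the random walk over time
    return best_x, best_v
```

In the paper this style of search would score candidate placements by an energy/SLA objective rather than the toy continuous function used here.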

28 pages, 739 KiB  
Article
Cooperative Overbooking-Based Resource Allocation and Application Placement in UAV-Mounted Edge Computing for Internet of Forestry Things
by Xiaoyu Li, Long Suo, Wanguo Jiao, Xiaoming Liu and Yunfei Liu
Drones 2025, 9(1), 22; https://doi.org/10.3390/drones9010022 - 29 Dec 2024
Viewed by 883
Abstract
Due to their high mobility and low cost, unmanned aerial vehicle (UAV)-mounted edge computing (UMEC) provides an efficient way to provision computation offloading services for Internet of Forestry Things (IoFT) applications in forest areas without sufficient infrastructure. Multiple IoFT applications can be consolidated onto fewer UAV-mounted servers to improve resource utilization and reduce deployment costs, provided that every application's Quality of Service (QoS) can be met. However, most existing application placement schemes in UMEC do not consider the dynamic nature of the aggregated computing resource demand. In this paper, the resource allocation and application placement problem based on fine-grained cooperative overbooking in UMEC is studied. First, for the two-tenant overbooking case, a Two-tenant Cooperative Resource Overbooking (2CROB) scheme is designed, which allows tenants to share resource demand violations (RDVs) in the cooperative overbooking region. In 2CROB, an aggregated-resource-demand minimization problem is modeled, and a bisection search algorithm is designed to obtain the minimized aggregated resource demand. Second, for the multiple-tenant overbooking case, a Proportional Fairness-based Cooperative Resource Overbooking (PF-MCROB) scheme is designed, together with a corresponding bisection search algorithm. Then, on the basis of PF-MCROB, a First Fit Decreasing-based Cooperative Application Placement (FFD-CAP) scheme is proposed to accommodate applications in as few servers as possible. Simulation results verify that the proposed cooperative resource overbooking schemes save more computing resources in cases with more tenants and higher or more differentiated resource demand violation ratio (RDVR) thresholds, and that the FFD-CAP scheme reduces the number of UAVs that must be deployed by about one third compared with traditional overbooking. Thus, applying efficient cooperative overbooking to application placement can considerably reduce deployment and maintenance costs and improve onboard computing resource utilization and operating revenues in UMEC-aided IoFT applications. Full article
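The abstract describes a bisection search over aggregated resource demand subject to a violation-ratio threshold. The idea can be sketched under a simplifying assumption of independent Gaussian per-tenant demands (our assumption; the paper's demand model is not given in the abstract):

```python
import math

def gaussian_tail(c, mu, sigma):
    """P(aggregate demand > c) for demand ~ Normal(mu, sigma)."""
    return 0.5 * math.erfc((c - mu) / (sigma * math.sqrt(2.0)))

def min_overbooked_capacity(means, stds, rdvr, tol=1e-6):
    """Bisection search, in the spirit of the paper's 2CROB/PF-MCROB schemes,
    for the smallest shared capacity whose resource-demand-violation
    probability stays below the threshold `rdvr`."""
    mu = sum(means)
    sigma = math.sqrt(sum(s * s for s in stds))
    lo, hi = mu, mu + 10 * sigma   # violation prob is 0.5 at lo, ~0 at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gaussian_tail(mid, mu, sigma) > rdvr:
            lo = mid               # still violating too often: need more
        else:
            hi = mid               # feasible: try less capacity
    return hi
```

For three tenants with mean demand 10 and standard deviation 2 each, a 5% RDVR yields a shared capacity of about 35.7, well below the 42 units that three independent mean-plus-two-sigma reservations would require, which is the savings cooperative overbooking exploits.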

30 pages, 4245 KiB  
Article
Evolving High-Performance Computing Data Centers with Kubernetes, Performance Analysis, and Dynamic Workload Placement Based on Machine Learning Scheduling
by Vedran Dakić, Mario Kovač and Jurica Slovinac
Electronics 2024, 13(13), 2651; https://doi.org/10.3390/electronics13132651 - 5 Jul 2024
Cited by 14 | Viewed by 115838
Abstract
In the past twenty years, the IT industry has moved away from using physical servers for workload management: workloads were consolidated via virtualization and, in the next iteration, further consolidated into containers. Later, container workloads based on Docker and Podman were orchestrated via Kubernetes or OpenShift. High-performance computing (HPC) environments, on the other hand, have been lagging in this process, as much work is still needed to figure out how to apply containerization platforms to HPC. Containers have many advantages: they tend to have less overhead while providing flexibility, modularity, and maintenance benefits. This makes them well-suited for compute-intensive tasks that are latency- or bandwidth-sensitive. But they are complex to manage, and many daily operations are based on command-line procedures that take years to master. This paper proposes a different architecture based on seamless hardware integration and a user-friendly UI (User Interface). It also offers dynamic workload placement based on real-time performance analysis and prediction, with Machine Learning-based scheduling. This addresses a prevalent issue in Kubernetes, the suboptimal placement of workloads, without needing individual workload schedulers, which are challenging to write and require much time to debug and test properly. It also enables us to focus on one of the key HPC issues: energy efficiency. Furthermore, the application we developed to implement this architecture fully automates the Kubernetes installation process, no matter which hardware platform we use: x86, ARM, and soon, RISC-V. The results we achieved using this architecture and application are very promising in two areas: the speed of workload scheduling and the placement of workloads on the correct node. Full article
(This article belongs to the Section Computer Science & Engineering)
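The scheduler's actual model is not described in the abstract; as a stand-in, here is a minimal predict-then-place sketch with a naive linear-trend forecaster and hypothetical node names (the real system would use a trained ML model and richer node state):

```python
def predict_next(history):
    """One-step linear-trend forecast of node CPU utilisation, clamped to
    [0, 1] (a deliberately simple stand-in for an ML predictor)."""
    if len(history) < 2:
        return history[-1]
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return min(1.0, max(0.0, history[-1] + slope))

def pick_node(node_histories, demand):
    """Score each node by its predicted utilisation after adding `demand`;
    return the feasible node with the lowest predicted post-placement load."""
    scored = []
    for name, hist in node_histories.items():
        load = predict_next(hist) + demand
        if load <= 1.0:
            scored.append((load, name))
    if not scored:
        raise RuntimeError("no node can fit the workload")
    return min(scored)[1]
```

Note how prediction changes the decision: a node that is currently busier but trending down can beat a currently idle node that is trending up, which is exactly the kind of placement a purely instantaneous scheduler gets wrong.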

26 pages, 676 KiB  
Article
A Usable Encryption Solution for File-Based Geospatial Data within a Database File System
by Pankajeshwara Sharma, Michael Govorov and Michael Martin
J. Cybersecur. Priv. 2024, 4(2), 298-323; https://doi.org/10.3390/jcp4020015 - 9 May 2024
Cited by 2 | Viewed by 2287
Abstract
Developing a security solution for spatial files within today's enterprise Geographical Information System (GIS) that is also usable presents a multifaceted challenge. These files exist in "data silos" on different file server types, resulting in limited collaboration and increased vulnerability. While cloud-based data storage offers many benefits, the associated security concerns have limited its uptake in GIS, making it crucial to explore comparable alternative security solutions that can be deployed on-premises and are also usable. This paper introduces a reasonably usable security solution for spatial files within collaborative enterprise GIS. We explore a Database File System (DBFS) as a potential repository to consolidate and manage spatial files, based on its enterprise document management capabilities and the security features inherited from the underlying legacy DBMS. Files are protected using the Advanced Encryption Standard (AES) algorithm at a practical encryption throughput of 8 MB per second. The final part focuses on an automated encryption solution, with schemes for single- and multi-user files, that is compatible with various GIS programs and protocol services. Usability testing, focused on effectiveness, efficiency, and user satisfaction, demonstrates the solution's usability, based on the minimal changes it makes to how users work in a collaborative enterprise GIS environment. The solution furnishes a viable means of consolidating and protecting spatial files of various formats at the storage layer within enterprise GIS. Full article
(This article belongs to the Special Issue Usable Security)

6 pages, 979 KiB  
Proceeding Paper
Multi-Level Cloud Datacenter Security Using Efficient Hybrid Algorithm
by Koushik Chakraborty, Amrita Parashar, Pawan Bhambu, Durga Prasad Tripathi, Pratap Patil and Gaurav Kumar Srivastav
Eng. Proc. 2023, 59(1), 50; https://doi.org/10.3390/engproc2023059050 - 14 Dec 2023
Cited by 1 | Viewed by 1026
Abstract
Security is currently the main barrier for cloud-based services. It is not sufficient to simply adopt the cloud by adding a few extra controls or point solutions to an organization's existing network security software. Businesses must use both virtual and physical data center security frameworks to stay secure. The objective is to defend against threats that may jeopardize the confidentiality, integrity, or availability of intellectual property or business data assets. These are the principal targets of all directed attacks, and they therefore require a high degree of security. Hundreds to thousands of physical and virtual servers are partitioned into data centers according to application type, data classification zone, and other criteria. To protect applications, systems, data, and users, data center security spans workloads across physical data centers and multi-cloud environments, including public cloud data centers. All server farms should protect their applications and data from a rising number of sophisticated threats and worldwide attacks. Every organization is at risk of attack, and many organizations have been compromised without being aware of it. An assessment of assets and business requirements is needed to develop a clean approach to a cloud security strategy. To run a strong hybrid multi-cloud security program, an organization must establish visibility and control. With the assistance of security products and experts, it can combine strong controls, organize responsibility distribution, and manage risk. Full article
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)

7 pages, 1151 KiB  
Proceeding Paper
A Futuristic Approach to Security in Cloud Data Centers Using a Hybrid Algorithm
by Dipankar Chatterjee, Mostaque Md. Morshedur Hassan, Nazrul Islam, Asmita Ray and Munsifa Firdaus Khan Barbhuyan
Eng. Proc. 2023, 59(1), 47; https://doi.org/10.3390/engproc2023059047 - 14 Dec 2023
Cited by 1 | Viewed by 1554
Abstract
Many organizations rely on an on-premises data center, meaning the organization maintains all required IT systems locally. An on-premises data center consolidates everything from the servers that support web and email access to the equipment that provides features such as uninterruptible power. Data center management is not confined to keeping infrastructure and software policies operational; data center managers are also responsible for the security of their environments. Data center facilities are built with security in mind: most have no outside windows and, in general, only a few entrances. Security staff monitor the inside of the building, screening for suspicious activity using footage from surveillance cameras positioned along the perimeter. This includes the use of strong security measures, such as two-factor authentication, for all clients. It is also recommended to encrypt all data in motion, both inside the data center and between the data center and any external systems. The components of data centers must be safeguarded against physical threats. A data center's physical security controls include a secure location, physical access controls for the building, and monitoring systems. As organizations relocate on-premises IT frameworks to cloud service providers, cloud storage, cloud infrastructures, and cloud applications, it is vital to understand the security strategies those providers implement and the service-level agreements they have in place. Full article
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)

29 pages, 573 KiB  
Article
Deploying Secure Distributed Systems: Comparative Analysis of GNS3 and SEED Internet Emulator
by Lewis Golightly, Paolo Modesti and Victor Chang
J. Cybersecur. Priv. 2023, 3(3), 464-492; https://doi.org/10.3390/jcp3030024 - 3 Aug 2023
Cited by 8 | Viewed by 5243
Abstract
Network emulation offers a flexible solution for network deployment and operations, leveraging software to consolidate all nodes in a topology and utilizing the resources of a single host system server. This research paper investigated the state of cybersecurity in virtualized systems, covering vulnerabilities, exploitation techniques, remediation methods, and deployment strategies, based on an extensive review of the related literature. We conducted a comprehensive performance evaluation and comparison of two network-emulation platforms: Graphical Network Simulator-3 (GNS3), an established open-source platform, and the SEED Internet Emulator, an emerging platform, alongside physical Cisco routers. Additionally, we present a Distributed System that seamlessly integrates network architecture and emulation capabilities. Empirical experiments assessed various performance criteria, including the bandwidth, throughput, latency, and jitter. Insights into the advantages, challenges, and limitations of each platform are provided based on the performance evaluation. Furthermore, we analyzed the deployment costs and energy consumption, focusing on the economic aspects of the proposed application. Full article
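Of the metrics measured (bandwidth, throughput, latency, jitter), jitter is the least obvious to compute. One common approach is the RFC 3550 smoothed estimator; this is our choice of estimator for illustration, as the paper does not state which one it used:

```python
def rtp_jitter(latencies_ms):
    """Interarrival jitter as a smoothed mean deviation of successive latency
    differences, in the style of RFC 3550: J += (|D| - J) / 16."""
    j = 0.0
    for prev, cur in zip(latencies_ms, latencies_ms[1:]):
        d = abs(cur - prev)
        j += (d - j) / 16.0
    return j
```

A perfectly steady link reports zero jitter regardless of its absolute latency, which is why jitter is reported alongside latency rather than instead of it when comparing emulated links with physical routers.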

13 pages, 6290 KiB  
Article
Cloud Servers: Resource Optimization Using Different Energy Saving Techniques
by Mohammad Hijji, Bilal Ahmad, Gulzar Alam, Ahmed Alwakeel, Mohammed Alwakeel, Lubna Abdulaziz Alharbi, Ahd Aljarf and Muhammad Umair Khan
Sensors 2022, 22(21), 8384; https://doi.org/10.3390/s22218384 - 1 Nov 2022
Cited by 11 | Viewed by 3701
Abstract
Currently, researchers are working to contribute to the emerging fields of cloud computing, edge computing, and distributed systems, and a major area of interest is examining and understanding their performance. Globally leading companies, such as Google, Amazon, ONLIVE, Giaki, and eBay, are deeply concerned about the impact of energy consumption. These cloud computing companies use huge data centers, consisting of virtual computers positioned worldwide, that require exceptionally high power costs to maintain. The increasing energy requirements of IT firms have posed many challenges for cloud computing companies pertinent to power expenses. Energy utilization depends on numerous aspects, for example, the service level agreement, the technique for choosing virtual machines, the applied optimization strategies and policies, and the kind of workload. The present paper addresses energy-saving challenges with the assistance of dynamic voltage and frequency scaling (DVFS) techniques for gaming data centers, and evaluates DVFS against non-power-aware and static threshold detection techniques. The findings will help service providers meet quality-of-service and quality-of-experience constraints while fulfilling service level agreements. For this purpose, the CloudSim platform is applied to a scenario in which game traces are employed as the workload. The findings evidenced that well-chosen techniques can help gaming servers conserve energy expenditure and sustain the best quality of service for consumers located worldwide. The originality of this research lies in examining which procedure performs best (dynamic, static, or non-power-aware). The findings validate that less energy is utilized by applying the dynamic voltage and frequency method, with fewer service level agreement violations and better quality of service and experience, in contrast with the static threshold consolidation or non-power-aware techniques. Full article
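The DVFS technique evaluated here rests on the classic CMOS dynamic-power relation P ≈ C·V²·f. A small sketch of the resulting energy arithmetic, with illustrative voltage values:

```python
def dynamic_power(c_eff, voltage, freq_ghz):
    """Classic CMOS dynamic power model underlying DVFS: P = C * V^2 * f."""
    return c_eff * voltage ** 2 * freq_ghz

def dvfs_energy_ratio(v_hi, v_lo):
    """For a fixed number of cycles N, E = P * t = (C * V^2 * f) * (N / f)
    = C * N * V^2: frequency cancels, so lowering the voltage (which a lower
    frequency permits) scales energy by (V_lo / V_hi)^2."""
    return (v_lo / v_hi) ** 2
```

For example, dropping the core voltage from 1.2 V to 0.9 V cuts the dynamic energy of a fixed workload to about 56% of the original, at the price of the workload taking longer at the lower frequency, which is the trade-off a DVFS governor manages against SLA constraints.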

19 pages, 3712 KiB  
Article
An Effective Secured Dynamic Network-Aware Multi-Objective Cuckoo Search Optimization for Live VM Migration in Sustainable Data Centers
by N. Venkata Subramanian and V. S. Shankar Sriram
Sustainability 2022, 14(20), 13670; https://doi.org/10.3390/su142013670 - 21 Oct 2022
Cited by 13 | Viewed by 2136
Abstract
With the increasing use of cloud computing by organizations, cloud data centers are proliferating to meet customers' demands and host various applications using virtual machines installed on physical servers. Through Live Virtual Machine Migration (LVMM) methods, cloud service providers can provide improved computing capabilities for server consolidation and system maintenance, with potential power savings. However, Live Virtual Machine Migration has its challenges when choosing the best network path for maximizing the efficiency of resources, reducing energy consumption, and providing security. Most research has focused on load balancing of resources and the reduction in energy consumption, but could not provide secure and optimal resource utilization. A framework has been created for sustainable data centers that picks the most secure and optimal dynamic network path using an intelligent metaheuristic algorithm, namely, the Network-aware Dynamic multi-objective Cuckoo Search algorithm (NDCS). The developed hybrid movement strategy enhances the search capability by expanding the search space and adopting the combined risk score estimation of each physical machine (PM) as a fitness criterion, ensuring security with rapid convergence compared to existing strategies. The proposed method was assessed using the Google cluster dataset to ascertain its worthiness. The experimental results show the supremacy of the proposed method over existing methods by ensuring services with lower total migration time, lower energy consumption, less makespan time, and secure optimum resource utilization. Full article
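The NDCS algorithm itself (with its combined risk-score fitness) is not specified in the abstract; a minimal generic cuckoo-search-with-Lévy-flights sketch, using Mantegna's step generator and illustrative constants, looks like this:

```python
import math, random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for a heavy-tailed Levy-stable step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n=12, iters=200, pa=0.25, seed=3):
    """Minimize f over [-5, 5]^dim: Levy flights around the best nest,
    greedy acceptance, and abandonment of nests with probability pa."""
    rng = random.Random(seed)
    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    gbest = min(nests, key=f)
    gval = f(gbest)
    for _ in range(iters):
        for i in range(n):
            # Levy-flight move scaled by distance to the global best
            cand = [x + 0.1 * levy_step(rng) * (x - g)
                    for x, g in zip(nests[i], gbest)]
            if f(cand) < f(nests[i]):
                nests[i] = cand
            if rng.random() < pa:   # host discovers the alien egg: new nest
                nests[i] = [rng.uniform(-5, 5) for _ in range(dim)]
            if f(nests[i]) < gval:
                gbest, gval = list(nests[i]), f(nests[i])
    return gbest, gval
```

In the paper's setting, the decision variables would encode migration paths and the fitness would combine migration time, energy, and each PM's risk score rather than a continuous benchmark function.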

25 pages, 6313 KiB  
Article
More than Meets One Core: An Energy-Aware Cost Optimization in Dynamic Multi-Core Processor Server Consolidation for Cloud Data Center
by Huixi Li, Langyi Wen, Yinghui Liu and Yongluo Shen
Electronics 2022, 11(20), 3377; https://doi.org/10.3390/electronics11203377 - 19 Oct 2022
Cited by 2 | Viewed by 1896
Abstract
The massive number of users has brought severe challenges in managing the cloud data centers (CDCs), composed of multi-core processors, that host cloud services. Guaranteeing the quality of service (QoS) of multiple users while reducing the operating costs of CDCs is a major problem that needs to be solved. To this end, this paper establishes a cost model based on multi-core hosts in CDCs, which comprehensively considers the hosts' energy costs, virtual machine (VM) migration costs, and service level agreement violation (SLAV) penalty costs. To optimize this objective, we design the following solution. We employ a DAE-based filter to preprocess the VM historical workload and use an SRU-based method to predict the computing resource usage of the VMs in future periods. Based on the predicted results, we trigger VM migrations before hosts move into the overloaded state, to reduce the occurrence of SLAV. A multi-core-aware heuristic algorithm is proposed to solve the placement problem. Simulations driven by a real VM workload dataset validate the effectiveness of our proposed method. Compared with existing baseline methods, our proposed method reduces the total operating cost by 20.9~34.4%. Full article
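The three-term cost structure described above can be sketched directly; the coefficients below are placeholders, not the paper's calibrated values:

```python
def operating_cost(energy_kwh, price_per_kwh,
                   migrations, cost_per_migration,
                   slav_seconds, penalty_per_second):
    """Total-cost shape from the paper: energy cost + VM migration cost
    + SLA-violation (SLAV) penalty, all in the same currency unit."""
    return (energy_kwh * price_per_kwh
            + migrations * cost_per_migration
            + slav_seconds * penalty_per_second)

def should_migrate(predicted_util, threshold=0.9):
    """Trigger migration *before* a host overloads, based on the predicted
    utilisation (the SRU predictor's output in the paper's pipeline)."""
    return predicted_util > threshold
```

The tension the optimizer balances is visible in the model: every proactive migration adds `cost_per_migration` now in exchange for avoiding `slav_seconds * penalty_per_second` later, so over-eager migration is as costly as over-consolidation.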

29 pages, 16346 KiB  
Article
Complementary in Time and Space: Optimization on Cost and Performance with Multiple Resources Usage by Server Consolidation in Cloud Data Center
by Huixi Li, Yongluo Shen, Huidan Xi and Yinhao Xiao
Appl. Sci. 2022, 12(19), 9654; https://doi.org/10.3390/app12199654 - 26 Sep 2022
Viewed by 1527
Abstract
The recent COVID-19 pandemic has accelerated the use of cloud computing. The surge in the number of users presents cloud service providers with severe challenges in managing computing resources. Guaranteeing the QoS of multiple users while reducing the operating cost of the cloud data center (CDC) is a major problem that needs to be solved urgently. To solve this problem, this paper establishes a cost model based on multiple computing resources in the CDC, which comprehensively considers the hosts' energy cost, virtual machine (VM) migration cost, and SLAV penalty cost. To minimize this cost, we design the following solution. We employ a convolutional autoencoder-based filter to preprocess the VM historical workload and use an attention-based RNN method to predict the computing resource usage of the VMs in future periods. Based on the predicted results, we trigger VM migration before a host enters an overloaded state, to reduce the occurrence of SLAV. A heuristic algorithm based on the complementary use of multiple resources in space and time is proposed to solve the placement problem. Simulations driven by a real VM workload dataset validate the effectiveness of our proposed method. Compared with existing methods, our proposed method reduces the hosts' energy consumption and SLAV, and lowers the total cost by 26.1~39.3%. Full article
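One simple way to operationalize "complementary in time" is anti-correlation of VM usage traces: two VMs whose peaks are anti-correlated produce a flatter combined load than either peak alone. This is our illustration of the idea; the paper's heuristic is more elaborate and also considers complementarity across resource types.

```python
import math

def correlation(a, b):
    """Pearson correlation of two equal-length usage traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def complementary_in_time(vm1_usage, vm2_usage, threshold=-0.3):
    """Flag two VMs as good co-location candidates when their usage peaks
    are anti-correlated (threshold is an illustrative cutoff)."""
    return correlation(vm1_usage, vm2_usage) < threshold
```

A day-shift web server and a night-shift batch job are the canonical complementary pair: each peaks at 80% alone, but co-located they keep the host near a steady, safe utilisation.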

27 pages, 6047 KiB  
Article
A VPN Performances Analysis of Constrained Hardware Open Source Infrastructure Deploy in IoT Environment
by Antonio Francesco Gentile, Davide Macrì, Floriano De Rango, Mauro Tropea and Emilio Greco
Future Internet 2022, 14(9), 264; https://doi.org/10.3390/fi14090264 - 13 Sep 2022
Cited by 18 | Viewed by 6592
Abstract
A virtual private network (VPN) is an HW/SW infrastructure that implements private and confidential communication channels, usually traveling over the Internet. VPN is currently one of the most reliable technologies for this goal, partly because, as a consolidated technology, appropriate patches can be applied to remedy any security holes. In this paper we analyze the performance of the open source firmware OpenWrt 21.x compared with a server-side operating system (Debian 11 x64) and Mikrotik 7.x, also virtualized, with different types of clients (Windows 10/11, iOS 15, Android 11, OpenWrt 21.x, Debian 11 x64 and Mikrotik 7.x). We observe network performance under the current implementations of the various VPN tunnel protocols and algorithms, on the most recent hardware and software for deployment in outdoor locations with poor network connectivity. Notably, operating systems yield different performance metric values for various combinations of configuration variables. The first goal is to find the algorithms that guarantee as efficient a data transmission/encryption ratio as possible. The second goal is to find the algorithms capable of guaranteeing the widest compatibility with current infrastructures that support VPN technology, to obtain a secure connection system for geographically scattered IoT networks spread over difficult-to-manage areas such as suburban or rural environments. The third goal is to be able to use open firmware on constrained routers that provides compatibility with different VPN protocols. Full article
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT II)

37 pages, 2272 KiB  
Review
Review on Compressive Sensing Algorithms for ECG Signal for IoT Based Deep Learning Framework
by Subramanyam Shashi Kumar and Prakash Ramachandran
Appl. Sci. 2022, 12(16), 8368; https://doi.org/10.3390/app12168368 - 21 Aug 2022
Cited by 14 | Viewed by 6092
Abstract
Nowadays, healthcare is becoming very modern, and the support of Internet of Things (IoT) is inevitable in a personal healthcare system. A typical personal healthcare system acquires vital parameters from human users and stores them in a cloud platform for further analysis. Acquiring fundamental biomedical signal, such as with the Electrocardiograph (ECG), is also considered for specific disease analysis in personal healthcare systems. When such systems are scaled up, there is a heavy demand for internet channel capacity to accommodate real time seamless flow of discrete samples of biomedical signals. So, there is a keen need for real time data compression of biomedical signals. Compressive Sensing (CS) has recently attracted more interest due to its compactness and its feature of the faithful reconstruction of signals from fewer linear measurements, which facilitates less than Shannon’s sampling rate by exploiting the signal sparsity. The most common biomedical signal that is to be analyzed is the ECG signal, as the prediction of heart failure at an early stage can save a human life. This review is for a vast use-case of IoT framework in which CS measurements of ECG are acquired, communicated through Internet to a server, and the arrhythmia are analyzed using Machine learning (ML). Assuming this use-case specific for ECG, in this review many technical aspects are considered regarding various research components. The key aspect is on the investigation of the best sensing method, and to address this, various sensing matrices are reviewed, analyzed and recommended. The next aspect is the selection of the optimal sparsifying method, and the review recommends unexplored ECG compression algorithms as sparsifying methods. The other aspects are optimum reconstruction algorithms, best hardware implementations, suitable ML methods and effective modality of IoT. 
In this review, all these components are considered, and a detailed survey is presented that enables the orchestration of the use case specified above. The review focuses on current trends in CS algorithms for ECG signal compression and their hardware implementation. The key to successful CS reconstruction is the right choice of sensing and sparsifying matrices, and many sparsifying methods remain unexplored for the ECG signal; this review sheds light on possible new sparsifying techniques. A detailed comparison table of CS algorithms, sensing matrices, and sparsifying techniques across different ECG datasets quantifies the capability of CS in terms of appropriate performance metrics. Since, in the use case above, the CS-reconstructed ECG signals are subjected to ML analysis, the compressive-domain inference approach is also discussed. The various datasets, methodologies, and ML models for ECG applications are studied, and their model accuracies are tabulated. Most previous CS research evaluated performance using numerical simulation, whereas there are some good attempts at hardware implementation for ECG applications; we study the uniqueness of each method and support the study with a comparison table. As a consolidation, we recommend new possibilities for the research components in terms of new transforms, new sparsifying methods, suggested ML approaches, and hardware implementation. Full article
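The CS pipeline the review describes (a few linear measurements y = Φx of a sparse signal, followed by reconstruction) can be sketched numerically. The sizes, the Gaussian sensing matrix, and the orthogonal matching pursuit (OMP) reconstruction below are illustrative choices for a synthetic sparse signal, not parameters or code taken from the review:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4  # signal length, number of measurements (m << n), sparsity

# Synthetic k-sparse signal x, standing in for a sparsified ECG segment
x = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x[idx] = rng.standard_normal(k)

# Gaussian sensing matrix Phi: one of the sensing-matrix families such reviews compare
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x  # the m linear measurements actually transmitted

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily add the atom most
    correlated with the residual, then re-fit on the selected support."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("relative reconstruction error:",
      np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With m well above the sparsity level, OMP recovers the support exactly and the error is near machine precision; shrinking m toward k shows the compression/fidelity trade-off that the review's performance-metric tables quantify.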
(This article belongs to the Special Issue Research on Biomedical Signal Processing)
24 pages, 927 KiB  
Review
Ransomware Detection, Avoidance, and Mitigation Scheme: A Review and Future Directions
by Adhirath Kapoor, Ankur Gupta, Rajesh Gupta, Sudeep Tanwar, Gulshan Sharma and Innocent E. Davidson
Sustainability 2022, 14(1), 8; https://doi.org/10.3390/su14010008 - 21 Dec 2021
Cited by 63 | Viewed by 23976
Abstract
Ransomware attacks have emerged as a major cyber-security threat in which user data is encrypted upon system infection. The latest ransomware strains, using advanced obfuscation techniques along with offline C2-server capabilities, are hitting individual users and large corporations alike, causing business disruption and financial loss. Since no consolidated framework exists that can classify, detect, and mitigate ransomware attacks in one go, we are motivated to present Detection Avoidance Mitigation (DAM), a theoretical framework to review and classify techniques, tools, and strategies to detect, avoid, and mitigate ransomware. We thoroughly investigate different scenarios and compare existing state-of-the-art reviews against ours. A case study of the infamous Djvu ransomware illustrates the modus operandi of the latest ransomware strains, including some suggestions to contain its spread. Full article