Search Results (8)

Search Parameters:
Authors = Abdukodir Khakimov

23 pages, 3481 KiB  
Article
Evaluating QoS in Dynamic Virtual Machine Migration: A Multi-Class Queuing Model for Edge-Cloud Systems
by Anna Kushchazli, Kseniia Leonteva, Irina Kochetkova and Abdukodir Khakimov
J. Sens. Actuator Netw. 2025, 14(3), 47; https://doi.org/10.3390/jsan14030047 - 25 Apr 2025
Viewed by 894
Abstract
The efficient migration of virtual machines (VMs) is critical for optimizing resource management, ensuring service continuity, and enhancing resiliency in cloud and edge computing environments, particularly as 6G networks demand higher reliability and lower latency. This study addresses the challenges of dynamically balancing server loads while minimizing downtime and migration costs under stochastic task arrivals and variable processing times. We propose a queuing theory-based model employing continuous-time Markov chains (CTMCs) to capture the interplay between VM migration decisions, server resource constraints, and task processing dynamics. The model incorporates two migration policies—one minimizing projected post-migration server utilization and another prioritizing current utilization—to evaluate their impact on system performance. The numerical results show that the blocking probability for the first VM is 2.1% lower under Policy 1 than under Policy 2, and 4.7% lower for the second VM. The average server resource utilization increased by up to 11.96%. The framework's adaptability to diverse server–VM configurations and stochastic demands demonstrates its applicability to real-world cloud systems. These results highlight predictive resource allocation's role in dynamic environments. Furthermore, the study lays the groundwork for extending this framework to multi-access edge computing (MEC) environments, which are integral to 6G networks. Full article
(This article belongs to the Section Communications and Networking)
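The abstract above centers on blocking probabilities obtained from a CTMC. As an illustration only — a single-class M/M/c/c loss system, far simpler than the paper's multi-class migration model — such a blocking probability can be computed by solving the chain's balance equations numerically:

```python
import numpy as np

def blocking_probability(lam, mu, capacity):
    """Stationary blocking probability of an M/M/c/c loss system,
    solved from the CTMC balance equations (illustrative single-class
    analogue, not the paper's multi-class model)."""
    n = capacity + 1
    Q = np.zeros((n, n))  # generator over states 0..capacity (busy units)
    for k in range(capacity):
        Q[k, k + 1] = lam           # arrival occupies one more unit
        Q[k + 1, k] = (k + 1) * mu  # each busy unit completes at rate mu
    np.fill_diagonal(Q, -Q.sum(axis=1))
    # Solve pi Q = 0 together with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi[capacity]  # probability all units are busy -> arrival blocked

print(blocking_probability(lam=4.0, mu=1.0, capacity=5))
```

For these parameters the result matches the classical Erlang-B formula, which is a useful sanity check on the CTMC solver.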

23 pages, 5616 KiB  
Article
Dynamic Offloading in Flying Fog Computing: Optimizing IoT Network Performance with Mobile Drones
by Wei Min, Abdukodir Khakimov, Abdelhamied A. Ateya, Mohammed ElAffendi, Ammar Muthanna, Ahmed A. Abd El-Latif and Mohammed Saleh Ali Muthanna
Drones 2023, 7(10), 622; https://doi.org/10.3390/drones7100622 - 5 Oct 2023
Cited by 10 | Viewed by 3669
Abstract
The rapid growth of Internet of Things (IoT) devices and the increasing need for low-latency and high-throughput applications have led to the introduction of distributed edge computing. Flying fog computing is a promising solution that can be used to assist IoT networks. It leverages drones with computing capabilities (e.g., fog nodes), enabling data processing and storage closer to the network edge. This introduces various benefits to IoT networks compared to deploying traditional static edge computing paradigms, including coverage improvement, enabling dense deployment, and increasing availability and reliability. However, drones’ dynamic and mobile nature poses significant challenges in task offloading decisions to optimize resource utilization and overall network performance. This work presents a novel offloading model based on dynamic programming explicitly tailored for flying fog-based IoT networks. The proposed algorithm aims to intelligently determine the optimal task assignment strategy by considering the mobility patterns of drones, the computational capacity of fog nodes, the communication constraints of the IoT devices, and the latency requirements. Extensive simulations and experiments were conducted to test the proposed approach. Our results revealed significant improvements in latency, availability, and the cost of resources. Full article
(This article belongs to the Special Issue Edge Computing and IoT Technologies for Drones)
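The paper's contribution is a dynamic-programming offloading model. As a minimal sketch (the task and cost model below are assumptions, not the paper's formulation): each task either runs on the IoT device or is offloaded to a drone-mounted fog node that can accept at most `capacity` tasks, and the recursion picks the split minimizing total latency.

```python
from functools import lru_cache

def min_total_latency(local_cost, fog_cost, capacity):
    """Dynamic program over (task index, remaining fog capacity):
    choose local vs. offloaded execution for each task."""
    n = len(local_cost)

    @lru_cache(maxsize=None)
    def dp(i, k):
        if i == n:
            return 0.0
        best = local_cost[i] + dp(i + 1, k)  # process on the device
        if k > 0:                            # offload if the fog node has room
            best = min(best, fog_cost[i] + dp(i + 1, k - 1))
        return best

    return dp(0, capacity)

# With room for two offloads, the DP offloads the two tasks that gain most.
print(min_total_latency(local_cost=[5, 3, 8], fog_cost=[2, 2, 2], capacity=2))
```

The memoized recursion visits each (task, capacity) state once, so the cost is O(n × capacity) rather than the 2^n of enumerating all assignments.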

28 pages, 814 KiB  
Review
The Age of Information in Wireless Cellular Systems: Gaps, Open Problems, and Research Challenges
by Elena Zhbankova, Abdukodir Khakimov, Ekaterina Markova and Yuliya Gaidamaka
Sensors 2023, 23(19), 8238; https://doi.org/10.3390/s23198238 - 3 Oct 2023
Cited by 5 | Viewed by 2626
Abstract
One of the critical use cases for prospective fifth generation (5G) cellular systems is the delivery of the state of remote systems to the control center. Such services are relevant for both massive machine-type communications (mMTC) and ultra-reliable low-latency communications (URLLC) services that need to be supported by 5G systems. The recently introduced age of information (AoI) metric, representing the timeliness of the reception of an update at the receiver, is nowadays commonly utilized to quantify the performance of such services. However, the metric itself is closely related to queueing theory, which conventionally requires strict assumptions for analytical tractability. This review paper aims to: (i) identify the gaps between technical wireless systems and the queueing models utilized for analysis of the AoI metric; (ii) provide a detailed review of studies that have addressed the AoI metric; and (iii) establish future research challenges in this area. Our major outcome is that the models proposed to date for AoI performance evaluation and optimization deviate drastically from the technical specifics of modern and future wireless cellular systems, including those proposed for URLLC and mMTC services. Specifically, we identify that the majority of the models considered to date: (i) do not account for service processes of wireless channels that utilize orthogonal frequency division multiple access (OFDMA) technology and are able to serve more than a single packet in a time slot; (ii) neglect the specifics of the multiple access schemes utilized for mMTC communications, specifically multi-channel random access followed by data transmission; (iii) do not consider spatial and temporal correlation properties in the set of end systems that may arise naturally in state monitoring applications; and finally, (iv) only a few studies have assessed those practical use cases where queuing may happen at more than a single node along the route. Each of these areas requires further advances for performance optimization and integration of modern and future wireless provisioning technologies with mMTC and URLLC services. Full article
(This article belongs to the Special Issue 5G/6G Networks for Wireless Communication and IoT)
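The baseline result that the AoI literature surveyed here builds on is the closed-form average age for an M/M/1 FCFS status-update queue: with arrival rate λ, service rate μ, and ρ = λ/μ, the average AoI is (1/μ)(1 + 1/ρ + ρ²/(1−ρ)). A minimal sketch:

```python
def avg_aoi_mm1_fcfs(lam, mu):
    """Average age of information for an M/M/1 FCFS status-update queue
    (classic closed form; the review surveys far richer models)."""
    rho = lam / mu
    assert 0 < rho < 1, "queue must be stable"
    return (1.0 / mu) * (1.0 + 1.0 / rho + rho ** 2 / (1.0 - rho))

# AoI is minimized at an interior utilization (rho around 0.53):
# too few updates make information stale, too many make updates queue up.
for rho in (0.2, 0.53, 0.9):
    print(f"rho={rho:.2f} -> average AoI {avg_aoi_mm1_fcfs(rho, 1.0):.3f}")
```

This interior optimum is exactly why AoI differs from delay or throughput as a performance metric: both under- and over-sampling the source hurt freshness.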

18 pages, 2785 KiB  
Article
Spatio-Temporal Coherence of mmWave/THz Channel Characteristics and Their Forecasting Using Video Frame Prediction Techniques
by Vladislav Prosvirov, Amjad Ali, Abdukodir Khakimov and Yevgeni Koucheryavy
Mathematics 2023, 11(17), 3634; https://doi.org/10.3390/math11173634 - 23 Aug 2023
Cited by 2 | Viewed by 2095
Abstract
Channel state information in millimeter wave (mmWave) and terahertz (THz) communications systems is vital for various tasks ranging from planning the optimal locations of BSs to efficient beam tracking mechanisms to handover design. Due to the use of large-scale phased antenna arrays and high sensitivity to environmental geometry and materials, precise propagation models for these bands are obtained via ray-tracing modeling. However, the propagation conditions in mmWave/THz systems may theoretically change at very small distances, that is, 1 mm–1 μm, which requires extreme computational effort for modeling. In this paper, we will first assess the effective correlation distances in mmWave/THz systems for different outdoor scenarios, user mobility patterns, and line-of-sight (LoS) and non-LoS (nLoS) conditions. As the metrics of interest, we utilize the angle of arrival/departure (AoA/AoD) and path loss of the first few strongest rays. Then, to reduce the computational efforts required for the ray-tracing procedure, we propose a methodology for the extrapolation and interpolation of these metrics based on the convolutional long short-term memory (ConvLSTM) model. The proposed methodology is based on a special representation of the channel state information in a form suitable for state-of-the-art video enhancement machine learning (ML) techniques, which allows for the use of their powerful prediction capabilities. To assess the prediction performance of the ConvLSTM model, we utilize precision and recall as the main metrics of interest. Our numerical results demonstrate that the channel state correlation in AoA/AoD parameters is preserved up until approximately 0.3–0.6 m, which is 300–600 times larger than the wavelength at 300 GHz. The use of a ConvLSTM model allows us to accurately predict AoA and AoD angles up to the 0.6 m distance, with AoA being characterized by a higher mean squared error (MSE). Our results can be utilized to speed up ray-tracing simulations by selecting the grid step size, resulting in the desired trade-off between modeling accuracy and computational time. Additionally, they can also be utilized to improve beam tracking in mmWave/THz systems via a selection of the time step between beam realignment procedures. Full article
(This article belongs to the Special Issue Applications of Mathematical Analysis in Telecommunications-II)
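The notion of an "effective correlation distance" for a channel metric can be illustrated with a hypothetical helper (the function and thresholds below are assumptions, not the paper's method): sample the metric along a trajectory and find the smallest spatial lag at which its normalized autocorrelation drops below a threshold.

```python
import numpy as np

def correlation_distance(samples, step_m, threshold=0.5):
    """Smallest spatial lag (in meters) at which the normalized
    autocorrelation of a sampled channel metric falls below `threshold`.
    Illustrative helper; not the paper's estimator."""
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    for lag in range(1, len(x)):
        r = np.dot(x[:-lag], x[lag:]) / denom
        if r < threshold:
            return lag * step_m
    return len(x) * step_m  # never decorrelated over the record

# A slowly drifting "AoA" track stays correlated over much longer
# distances than a memoryless one sampled on the same 1 mm grid.
rng = np.random.default_rng(0)
smooth = np.cumsum(rng.normal(0.0, 0.1, 2000))  # random-walk trace
noisy = rng.normal(0.0, 1.0, 2000)              # white-noise trace
print(correlation_distance(smooth, step_m=0.001),
      correlation_distance(noisy, step_m=0.001))
```

The same idea, applied to ray-traced AoA/AoD samples, motivates picking a ray-tracing grid step just below the correlation distance rather than at the wavelength scale.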

15 pages, 3542 KiB  
Article
Enhanced Slime Mould Optimization with Deep-Learning-Based Resource Allocation in UAV-Enabled Wireless Networks
by Reem Alkanhel, Ahsan Rafiq, Evgeny Mokrov, Abdukodir Khakimov, Mohammed Saleh Ali Muthanna and Ammar Muthanna
Sensors 2023, 23(16), 7083; https://doi.org/10.3390/s23167083 - 10 Aug 2023
Cited by 1 | Viewed by 2080
Abstract
Unmanned aerial vehicle (UAV) networks offer a wide range of applications in an overload situation, broadcasting and advertising, public safety, disaster management, etc. Providing robust communication services to mobile users (MUs) is a challenging task because of the dynamic characteristics of MUs. Resource allocation, including subchannels, transmit power, and serving users, is a critical transmission problem; further, it is also crucial to improve the coverage and energy efficiency of UAV-assisted transmission networks. This paper presents an Enhanced Slime Mould Optimization with Deep-Learning-based Resource Allocation Approach (ESMOML-RAA) in UAV-enabled wireless networks. The presented ESMOML-RAA technique aims to efficiently accomplish computationally and energy-effective decisions. In addition, the ESMOML-RAA technique considers a UAV as a learning agent, with the formation of a resource assignment decision as an action, and designs a reward function with the intention of minimizing weighted resource consumption. For resource allocation, the presented ESMOML-RAA technique employs a highly parallelized long short-term memory (HP-LSTM) model with an ESMO algorithm as a hyperparameter optimizer. Using the ESMO algorithm helps properly tune the hyperparameters related to the HP-LSTM model. The performance validation of the ESMOML-RAA technique is tested using a series of simulations. This comparison study reports the enhanced performance of the ESMOML-RAA technique over other ML models. Full article
(This article belongs to the Special Issue Resource Allocation for Cooperative Communications)
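To show the role a metaheuristic plays in this pipeline — tuning model hyperparameters against a validation score — here is a minimal random-search stand-in (explicitly not the ESMO algorithm; the objective and parameter ranges are stubs):

```python
import random

def validation_loss(params):
    """Stub objective standing in for HP-LSTM validation loss.
    Pretends the best settings are lr=0.01, hidden=128."""
    return (params["lr"] - 0.01) ** 2 + (params["hidden"] - 128) ** 2 / 1e4

def tune(n_trials=200, seed=1):
    """Sample hyperparameter candidates and keep the best-scoring one.
    ESMO replaces this blind sampling with guided slime-mould updates."""
    rng = random.Random(seed)
    best, best_loss = None, float("inf")
    for _ in range(n_trials):
        params = {
            "lr": rng.uniform(1e-4, 0.1),
            "hidden": rng.choice([32, 64, 128, 256]),
        }
        loss = validation_loss(params)
        if loss < best_loss:
            best, best_loss = params, loss
    return best

print(tune())
```

Any hyperparameter optimizer with this interface (propose candidates, score them, keep the best) can be swapped in, which is what makes metaheuristics like ESMO easy to pair with a fixed model family.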

16 pages, 14909 KiB  
Article
Evaluating the Quality of Experience Performance Metric for UAV-Based Networks
by Abdukodir Khakimov, Evgeny Mokrov, Dmitry Poluektov, Konstantin Samouylov and Andrey Koucheryavy
Sensors 2021, 21(17), 5689; https://doi.org/10.3390/s21175689 - 24 Aug 2021
Cited by 4 | Viewed by 2487
Abstract
In this work, we consider a UAV-assisted cell in a single user scenario. We consider the Quality of Experience (QoE) performance metric calculating it as a function of the packet loss ratio. In order to acquire this metric, a radio-channel emulation system was developed and tested under different conditions. The system consists of two independent blocks, separately emulating connections between the User Equipment (UE) and unmanned aerial vehicle (UAV) and between the UAV and Base station (BS). In order to estimate scenario usage constraints, an analytical model was developed. The results show that, in the described scenario, cell coverage can be enhanced with minimal impact on QoE. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles for Future Networking Applications)
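The abstract derives QoE from the measured packet loss ratio. As an illustrative mapping only (the exponential form and the constant k = 10 are assumptions, not the paper's emulation results), a common simple model maps loss onto a 1–5 mean opinion score (MOS):

```python
import math

def mos_from_loss(loss_ratio, k=10.0):
    """Map a packet loss ratio in [0, 1] to a 1-5 MOS via exponential
    decay. Illustrative model; k is an assumed sensitivity constant."""
    mos = 1.0 + 4.0 * math.exp(-k * loss_ratio)
    return max(1.0, min(5.0, mos))  # clamp to the MOS scale

for loss in (0.0, 0.05, 0.2):
    print(f"loss={loss:.2f} -> MOS={mos_from_loss(loss):.2f}")
```

The exponential shape captures the qualitative finding such studies rely on: small increases in loss near zero degrade perceived quality much faster than the same increase at already-high loss.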

22 pages, 8056 KiB  
Article
Distributed Edge Computing to Assist Ultra-Low-Latency VANET Applications
by Andrei Vladyko, Abdukodir Khakimov, Ammar Muthanna, Abdelhamied A. Ateya and Andrey Koucheryavy
Future Internet 2019, 11(6), 128; https://doi.org/10.3390/fi11060128 - 4 Jun 2019
Cited by 40 | Viewed by 6598
Abstract
Vehicular ad hoc networks (VANETs) are a recent class of peer-to-peer wireless networks that are used to organize the communication and interaction between cars (V2V), between cars and infrastructure (V2I), and between cars and other types of nodes (V2X). These networks are based on the dedicated short-range communication (DSRC) IEEE 802.11 standards and are mainly intended to organize the exchange of various types of messages, mainly emergency ones, to prevent road accidents, alert when a road accident occurs, or control the priority of the roadway. Initially, it was assumed that cars would only interact with each other, but later, with the advent of the concept of the Internet of things (IoT), interactions with surrounding devices became a demand. However, there are many challenges associated with the interaction of vehicles and the interaction with the road infrastructure. Among the main challenges are the high density and the dramatic increase in vehicle traffic. To this end, this work provides a novel system based on mobile edge computing (MEC) to solve the problem of high traffic density and provides an offloading path for vehicle traffic. The proposed system also reduces the total latency of data communicated between vehicles and stationary roadside units (RSUs). Moreover, a latency-aware offloading algorithm is developed for managing and controlling data offloading from vehicles to edge servers. The system was simulated over a reliable environment for performance evaluation, and a real experiment was conducted to validate the proposed system and the developed offloading method. Full article
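A latency-aware offloading rule in the spirit of the one described above can be sketched as follows (the delay model and field names are assumptions, not the paper's algorithm): a vehicle offloads a task to the RSU's edge server only when the predicted end-to-end edge latency, including the server's current queue, beats local processing.

```python
from dataclasses import dataclass

@dataclass
class EdgeServer:
    uplink_ms: float     # vehicle -> RSU transmission delay per task
    service_ms: float    # per-task processing time at the edge
    queue_len: int = 0   # tasks already waiting at the server

    def predicted_latency(self):
        # Transmission plus waiting behind the queue plus own service.
        return self.uplink_ms + (self.queue_len + 1) * self.service_ms

def offload_decision(local_ms, server):
    """Offload only if the edge is predicted to finish sooner."""
    if server.predicted_latency() < local_ms:
        server.queue_len += 1  # the accepted task joins the edge queue
        return "edge"
    return "local"

# Successive tasks fill the edge queue until local processing wins again.
rsu = EdgeServer(uplink_ms=5.0, service_ms=10.0)
print([offload_decision(40.0, rsu) for _ in range(5)])
```

Because each accepted task lengthens the predicted queue, the rule self-limits: once the edge is saturated, traffic naturally falls back to local execution, which is the load-balancing behavior the system aims for.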

17 pages, 4769 KiB  
Article
Secure and Reliable IoT Networks Using Fog Computing with Software-Defined Networking and Blockchain
by Ammar Muthanna, Abdelhamied A. Ateya, Abdukodir Khakimov, Irina Gudkova, Abdelrahman Abuarqoub, Konstantin Samouylov and Andrey Koucheryavy
J. Sens. Actuator Netw. 2019, 8(1), 15; https://doi.org/10.3390/jsan8010015 - 18 Feb 2019
Cited by 208 | Viewed by 13480
Abstract
Designing Internet of Things (IoT) applications faces many challenges including security, massive traffic, high availability, high reliability and energy constraints. Recent distributed computing paradigms, such as Fog and multi-access edge computing (MEC), software-defined networking (SDN), network virtualization and blockchain can be exploited in IoT networks, either combined or individually, to overcome the aforementioned challenges while maintaining system performance. In this paper, we present a framework for IoT that employs an edge computing layer of Fog nodes controlled and managed by an SDN network to achieve high reliability and availability for latency-sensitive IoT applications. The SDN network is equipped with distributed controllers and distributed, resource-constrained OpenFlow switches. Blockchain is used to ensure decentralization in a trustful manner. Additionally, a data offloading algorithm is developed to allocate various processing and computing tasks to the OpenFlow switches based on their current workload. Moreover, a traffic model is proposed to model and analyze the traffic in different parts of the network. The proposed algorithm is evaluated in simulation and in a testbed. Experimental results show that the proposed framework achieves higher efficiency in terms of latency and resource utilization. Full article
(This article belongs to the Special Issue Sensors and Actuators: Security Threats and Countermeasures)
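The workload-based allocation idea — assign each incoming task to the switch with the lowest current load — can be sketched in its simplest greedy form (names and the scalar load model are assumptions, not the paper's algorithm):

```python
def assign_tasks(task_costs, switch_loads):
    """Greedy least-loaded placement: each task goes to the switch with
    the smallest current load, and that switch's load grows by the
    task's cost. Returns (placement indices, final loads)."""
    placement = []
    loads = list(switch_loads)  # copy: do not mutate the caller's state
    for cost in task_costs:
        i = min(range(len(loads)), key=loads.__getitem__)
        loads[i] += cost
        placement.append(i)
    return placement, loads

# Four tasks land on three switches that start with uneven workloads.
placement, final = assign_tasks([4, 2, 3, 1], [5, 0, 2])
print(placement, final)
```

Greedy least-loaded placement keeps the maximum switch load low without any global optimization, which fits the resource-constrained OpenFlow switches the framework targets.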
