Search Results (52)

Search Parameters:
Keywords = two-tier network approach

32 pages, 2917 KiB  
Article
Self-Adapting CPU Scheduling for Mixed Database Workloads via Hierarchical Deep Reinforcement Learning
by Suchuan Xing, Yihan Wang and Wenhe Liu
Symmetry 2025, 17(7), 1109; https://doi.org/10.3390/sym17071109 - 10 Jul 2025
Viewed by 322
Abstract
Modern database systems require autonomous CPU scheduling frameworks that dynamically optimize resource allocation across heterogeneous workloads while maintaining strict performance guarantees. We present a novel hierarchical deep reinforcement learning framework augmented with graph neural networks to address CPU scheduling challenges in mixed database environments comprising Online Transaction Processing (OLTP), Online Analytical Processing (OLAP), vector processing, and background maintenance workloads. Our approach introduces three key innovations: first, a symmetric two-tier control architecture in which a meta-controller allocates CPU budgets across workload categories using policy gradient methods while specialized sub-controllers optimize process-level resource allocation through continuous action spaces; second, graph neural network-based dependency modeling that captures complex inter-process relationships and communication patterns while preserving inherent symmetries in database architectures; and third, meta-learning integration with curiosity-driven exploration, enabling rapid adaptation to previously unseen workload patterns without extensive retraining. The framework incorporates a multi-objective reward function balancing Service Level Objective (SLO) adherence, resource efficiency, symmetric fairness metrics, and system stability. Experimental evaluation through high-fidelity digital twin simulation and production deployment demonstrates substantial performance improvements: a 43.5% reduction in p99 latency violations for OLTP workloads and a 27.6% improvement in overall CPU utilization, with successful scaling to 10,000 concurrent processes while maintaining sub-3% scheduling overhead. This work represents a significant advance toward truly autonomous database resource management, establishing a foundation for next-generation self-optimizing database systems with implications extending to broader orchestration challenges in cloud-native architectures.
(This article belongs to the Section Computer)
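
As a rough illustration of the two-tier control split described above, the sketch below wires a toy meta-controller (budget allocation across workload classes) to per-class sub-controllers (process-level shares). Everything here, including the random placeholder policy, the state features, and the four-process classes, is a hypothetical stand-in, not the authors' framework.

```python
# Illustrative sketch only: a minimal two-tier control loop in the spirit of the
# meta-controller/sub-controller split. Names, state shapes, and the random
# "policy" are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
WORKLOADS = ["oltp", "olap", "vector", "maintenance"]

def meta_controller(workload_state: np.ndarray) -> np.ndarray:
    """Tier 1: map per-class load features to CPU budget fractions (softmax)."""
    logits = workload_state @ rng.normal(size=(workload_state.shape[1],))  # placeholder policy
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sub_controller(budget: float, proc_load: np.ndarray) -> np.ndarray:
    """Tier 2: split one class's budget across its processes, proportional to load."""
    return budget * proc_load / proc_load.sum()

# One scheduling epoch: fabricate per-class features and per-process loads.
state = rng.random((len(WORKLOADS), 3))      # e.g. [queue depth, p99 slack, utilization]
budgets = meta_controller(state)
for name, b in zip(WORKLOADS, budgets):
    procs = rng.random(4) + 0.1              # four processes per class, dummy loads
    print(f"{name}: budget={b:.2f} per-process={np.round(sub_controller(b, procs), 3)}")
```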

25 pages, 3233 KiB  
Article
Multi-Domain Controversial Text Detection Based on a Machine Learning and Deep Learning Stacked Ensemble
by Jiadi Liu, Zhuodong Liu, Qiaoqi Li, Weihao Kong and Xiangyu Li
Mathematics 2025, 13(9), 1529; https://doi.org/10.3390/math13091529 - 6 May 2025
Cited by 2 | Viewed by 675
Abstract
Due to the rapid proliferation of social media and online reviews, the accurate identification and classification of controversial texts has emerged as a significant challenge in the field of natural language processing. However, traditional text-classification methodologies frequently encounter critical limitations, such as feature sensitivity and inadequate generalization capabilities, resulting in notably suboptimal performance when confronted with diverse controversial content. To address these limitations, this paper proposes a novel controversial text-detection framework based on stacked ensemble learning to enhance the accuracy and robustness of text classification. Firstly, considering the multidimensional complexity of textual features, we integrate comprehensive feature engineering encompassing word frequency, statistical metrics, sentiment analysis, and comment tree structure features, as well as advanced feature selection methodologies, particularly LassoNet, a neural network with feature sparsity, to effectively address dimensionality challenges while enhancing model interpretability and computational efficiency. Secondly, we design a two-tier stacked ensemble architecture that combines the strengths of multiple machine learning algorithms, e.g., gradient-boosted decision trees (GBDT), random forests (RF), and extreme gradient boosting (XGBoost), with deep learning models, e.g., gated recurrent units (GRU) and long short-term memory (LSTM), and employs a support vector machine (SVM) as an efficient meta-learner. Furthermore, we systematically compare three hyperparameter optimization algorithms: the sparrow search algorithm (SSA), particle swarm optimization (PSO), and Bayesian optimization (BO). The experimental results demonstrate that the SSA exhibits superior performance in exploring high-dimensional parameter spaces. Extensive experimentation across diverse topics and domains also confirms that our proposed methodology significantly outperforms state-of-the-art approaches.
(This article belongs to the Special Issue Machine Learning Methods and Mathematical Modeling with Applications)
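
The two-tier stacking design is easy to express with off-the-shelf tooling. The sketch below, assuming scikit-learn and a synthetic dataset, keeps only the structural idea, tree-based base learners whose out-of-fold predictions feed an SVM meta-learner, and omits the deep (GRU/LSTM) branch, XGBoost, and the metaheuristic hyperparameter search.

```python
# Minimal sketch of a two-tier stacking architecture using scikit-learn only.
# The exact model roster from the paper is not reproduced; the structure is the point.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, StackingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=40, random_state=0)  # stand-in features

stack = StackingClassifier(
    estimators=[
        ("gbdt", GradientBoostingClassifier(random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    final_estimator=SVC(),   # tier-2 meta-learner, as in the paper
    cv=5,                    # out-of-fold predictions guard against leakage
)
stack.fit(X, y)
print("stacked accuracy on training split:", stack.score(X, y))
```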

20 pages, 7507 KiB  
Article
Sliding-Window Dissimilarity Cross-Attention for Near-Real-Time Building Change Detection
by Wen Lu and Minh Nguyen
Remote Sens. 2025, 17(1), 135; https://doi.org/10.3390/rs17010135 - 2 Jan 2025
Viewed by 1429
Abstract
A near-real-time change detection network can consistently identify unauthorized construction activities over a wide area, empowering authorities to enforce regulations efficiently. It can also promptly assess building damage, enabling expedited rescue efforts. The extensive adoption of deep learning in change detection has prompted a predominant emphasis on enhancing detection performance, primarily by expanding the depth and width of networks, while overlooking inference time and computational cost. To accurately represent the spatio-temporal semantic correlations between pre-change and post-change images, we create an innovative transformer attention mechanism named Sliding-Window Dissimilarity Cross-Attention (SWDCA), which detects spatio-temporal semantic discrepancies by explicitly modeling the dissimilarity of bi-temporal tokens, departing from the mono-temporal similarity attention typically used in conventional transformers. To fulfill the near-real-time requirement, SWDCA employs a sliding-window scheme that limits the range of the cross-attention mechanism to a predetermined window or dilated window size. This approach not only excludes distant and irrelevant information but also reduces computational cost. Furthermore, we develop a lightweight Siamese backbone for extracting building and environmental features and integrate an SWDCA module into this backbone, forming an efficient change detection network. Quantitative evaluations and visual analyses of thorough experiments verify that our method achieves top-tier accuracy on two building change detection datasets of remote sensing imagery, while achieving a real-time inference speed of 33.2 FPS on a mobile GPU.
(This article belongs to the Special Issue Remote Sensing and SAR for Building Monitoring)
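
The core trick, scoring bi-temporal token pairs by dissimilarity rather than dot-product similarity, can be sketched in a few lines of PyTorch. This is a stripped-down illustration under invented shapes, without the window partitioning, heads, or projections of the actual SWDCA module.

```python
# Hedged sketch: attend to the tokens of the other time step that DIFFER the most,
# instead of the ones that match. Not the authors' implementation.
import torch
import torch.nn.functional as F

def dissimilarity_cross_attention(pre: torch.Tensor, post: torch.Tensor) -> torch.Tensor:
    """pre, post: (N, C) token sets from one local window of each image."""
    diss = torch.cdist(pre, post, p=1)   # (N, N) pairwise L1 dissimilarity; larger = more changed
    attn = F.softmax(diss, dim=-1)       # weight the most dissimilar tokens highest
    return attn @ post                   # aggregate change-relevant evidence

tokens_t0 = torch.randn(16, 32)   # 16 tokens, 32 channels (window of pre-change image)
tokens_t1 = torch.randn(16, 32)
out = dissimilarity_cross_attention(tokens_t0, tokens_t1)
print(out.shape)  # torch.Size([16, 32])
```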

10 pages, 3856 KiB  
Case Report
Novel LYST Variants Lead to Aberrant Splicing in a Patient with Chediak–Higashi Syndrome
by Maxim Aleksenko, Elena Vlasova, Amina Kieva, Ruslan Abasov, Yulia Rodina, Michael Maschan, Anna Shcherbina and Elena Raykina
Genes 2025, 16(1), 18; https://doi.org/10.3390/genes16010018 - 26 Dec 2024
Viewed by 1135
Abstract
Background: The advent of next-generation sequencing (NGS) has revolutionized the analysis of genetic data, enabling rapid identification of pathogenic variants in patients with inborn errors of immunity (IEI). However, the use of NGS-based technologies is sometimes associated with challenges in evaluating the clinical significance of novel genetic variants. Methods: In silico prediction tools, such as the SpliceAI neural network, are often used as a first-tier approach for the primary examination of genetic variants of uncertain clinical significance. Such tools allow us to parse genetic data and flag potential splice-altering variants. Further variant assessment requires precise RNA analysis by agarose gel electrophoresis and/or cDNA Sanger sequencing. Results: We found two novel heterozygous variants in the coding region of the LYST gene (c.10104G>T, c.10894A>G) in an individual with a typical clinical presentation of Chediak–Higashi syndrome (CHS). The SpliceAI neural network predicted both variants to be probably splice-altering. cDNA assessment by agarose gel electrophoresis revealed abnormally shortened splicing products for each variant, and cDNA Sanger sequencing demonstrated that the c.10104G>T and c.10894A>G substitutions resulted in shortening of exons 44 and 49 by 41 and 47 bp, respectively. Both mutations probably lead to a frameshift and the formation of a premature termination codon, which may in turn disrupt the structure and/or function of the LYST protein. Conclusions: We identified two novel variants in the LYST gene, predicted to be deleterious by the SpliceAI neural network. Agarose gel cDNA electrophoresis and cDNA Sanger sequencing allowed us to verify the aberrant splicing patterns and establish these variants as disease-causing.
(This article belongs to the Section Molecular Genetics and Genomics)
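
The first-tier triage the authors describe amounts to ranking variants by their maximum SpliceAI delta score and sending hits to wet-lab verification. A minimal sketch, with invented placeholder scores (not values from the paper):

```python
# Illustrative triage: flag variants whose precomputed SpliceAI delta scores suggest
# splice disruption, then queue them for cDNA confirmation. Scores are invented.
SPLICE_ALTERING_THRESHOLD = 0.5  # a commonly used SpliceAI cutoff; lab-specific in practice

variants = {
    "LYST:c.10104G>T": {"DS_AG": 0.02, "DS_AL": 0.71, "DS_DG": 0.05, "DS_DL": 0.09},
    "LYST:c.10894A>G": {"DS_AG": 0.64, "DS_AL": 0.03, "DS_DG": 0.01, "DS_DL": 0.02},
    "LYST:c.5000C>T":  {"DS_AG": 0.01, "DS_AL": 0.02, "DS_DG": 0.00, "DS_DL": 0.01},
}

for name, scores in variants.items():
    top = max(scores.values())
    if top >= SPLICE_ALTERING_THRESHOLD:
        print(f"{name}: probable splice-altering (max delta {top:.2f}) -> verify on cDNA")
    else:
        print(f"{name}: below threshold, deprioritize")
```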

21 pages, 526 KiB  
Article
Collaborative Caching for Implementing a Location-Privacy Aware LBS on a MANET
by Rudyard Fuster, Patricio Galdames and Claudio Gutierréz-Soto
Appl. Sci. 2024, 14(22), 10480; https://doi.org/10.3390/app142210480 - 14 Nov 2024
Viewed by 973
Abstract
This paper addresses the challenge of preserving user privacy in location-based services (LBSs) by proposing a novel approach that complements existing privacy-preserving techniques such as k-anonymity and l-diversity. Our approach implements collaborative caching strategies within a mobile ad hoc network (MANET), exploiting the geographic locality of location-based queries (LBQs) to reduce data exposure to untrusted LBS servers. Unlike existing approaches that rely on centralized servers or stationary infrastructure, our solution facilitates direct data exchange between users' devices, providing an additional layer of privacy protection. We introduce a new entropy-based privacy metric called accumulated privacy loss (APL) to quantify the privacy loss incurred when accessing either the LBS or our proposed system. Our approach implements a two-tier caching strategy: a local cache maintained by each user and a neighbor cache based on proximity. This strategy not only reduces the number of queries to the LBS server but also significantly enhances user privacy by minimizing the exposure of location data to centralized entities. Empirical results demonstrate that while our collaborative caching system incurs some communication costs, it significantly mitigates redundant data among user caches and reduces the need to access potentially privacy-compromising LBS servers. Our findings show a 40% reduction in LBS queries, a 64% decrease in data redundancy within cells, and a 31% reduction in accumulated privacy loss compared to baseline methods. In addition, we analyze the impact of data obsolescence on cache performance and privacy loss, proposing mechanisms for maintaining the relevance and accuracy of cached data. This work contributes to the field of privacy-preserving LBSs by providing a decentralized, user-centric approach that reduces cache redundancy and strengthens privacy protection, particularly in scenarios where central infrastructure is unreachable or untrusted.
(This article belongs to the Special Issue New Advances in Computer Security and Cybersecurity)
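
The two-tier lookup order is straightforward to sketch: local cache, then neighbor caches over the MANET, and only then the untrusted LBS server. The toy counter below stands in for the paper's entropy-based APL metric; names and data are invented.

```python
# Minimal sketch of the two-tier lookup order; the privacy-loss bookkeeping is a
# toy counter, not the paper's APL formulation.
class Node:
    def __init__(self, name):
        self.name = name
        self.cache = {}           # query -> answer
        self.privacy_loss = 0.0

    def query(self, q, neighbors, lbs_lookup):
        if q in self.cache:                        # tier 1: local hit, no exposure
            return self.cache[q]
        for peer in neighbors:                     # tier 2: D2D neighbor caches
            if q in peer.cache:
                self.cache[q] = peer.cache[q]
                return self.cache[q]
        self.privacy_loss += 1.0                   # fallback: location leaks to the LBS
        self.cache[q] = lbs_lookup(q)
        return self.cache[q]

lbs = lambda q: f"answer({q})"
a, b = Node("a"), Node("b")
b.cache["cafes near cell 7"] = "cached answer"
print(a.query("cafes near cell 7", [b], lbs), "| loss:", a.privacy_loss)  # served by peer
print(a.query("fuel near cell 9", [b], lbs), "| loss:", a.privacy_loss)   # had to hit LBS
```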

46 pages, 2062 KiB  
Article
Exploring Metaheuristic Optimized Machine Learning for Software Defect Detection on Natural Language and Classical Datasets
by Aleksandar Petrovic, Luka Jovanovic, Nebojsa Bacanin, Milos Antonijevic, Nikola Savanovic, Miodrag Zivkovic, Marina Milovanovic and Vuk Gajic
Mathematics 2024, 12(18), 2918; https://doi.org/10.3390/math12182918 - 19 Sep 2024
Cited by 15 | Viewed by 2156
Abstract
Software is increasingly vital, with automated systems regulating critical functions. As development demands grow, manual code review becomes more challenging, often making testing more time-consuming than development itself. A promising approach to improving defect detection at the source code level is the use of artificial intelligence combined with natural language processing (NLP). Source code analysis, leveraging machine-readable instructions, is an effective method for enhancing defect detection and error prevention. This work explores source code analysis through NLP and machine learning, comparing classical and emerging error detection methods. To optimize classifier performance, metaheuristic optimizers are used, and algorithm modifications are introduced to meet the study's specific needs. The proposed two-tier framework uses a convolutional neural network (CNN) in the first layer to handle large feature spaces, with AdaBoost and XGBoost classifiers in the second layer to improve error identification. Additional experiments using term frequency–inverse document frequency (TF-IDF) encoding in the second layer demonstrate the framework's versatility. Across five experiments with public datasets, the accuracy of the CNN was 0.768799. The second layer, using AdaBoost and XGBoost, further improved these results to 0.772166 and 0.771044, respectively. Applying NLP techniques yielded exceptional accuracies of 0.979781 and 0.983893 from the metaheuristic-optimized AdaBoost and XGBoost classifiers.
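
The TF-IDF variant of the second layer can be approximated with a small scikit-learn pipeline. The corpus and labels below are invented, and the first-layer CNN plus the metaheuristic optimizer tuning are omitted:

```python
# Sketch of the NLP branch: TF-IDF token features feeding a boosted classifier.
# Tiny toy corpus; not the paper's datasets or tuned hyperparameters.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

snippets = ["strcpy(buf, input)", "len = strnlen(s, MAX)", "free(p); free(p)", "x = x + 1"]
labels = [1, 0, 1, 0]   # 1 = defect-prone, invented toy labels

clf = make_pipeline(TfidfVectorizer(token_pattern=r"\w+"), AdaBoostClassifier(random_state=0))
clf.fit(snippets, labels)
print(clf.predict(["memcpy(dst, src, n)"]))
```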

19 pages, 10232 KiB  
Article
Energy Efficiency and Load Optimization in Heterogeneous Networks through Dynamic Sleep Strategies: A Constraint-Based Optimization Approach
by Amna Shabbir, Muhammad Faizan Shirazi, Safdar Rizvi, Sadique Ahmad and Abdelhamied A. Ateya
Future Internet 2024, 16(8), 262; https://doi.org/10.3390/fi16080262 - 25 Jul 2024
Cited by 4 | Viewed by 4282
Abstract
This research endeavors to advance energy efficiency (EE) within heterogeneous networks (HetNets) through a comprehensive approach. Initially, we establish a foundational framework by implementing a two-tier network architecture based on a Poisson point process deployment from stochastic geometry. Through this deployment, we develop a tailored EE model, meticulously analyzing the implications of random base station and user distributions on energy efficiency. We formulate joint base station and user densities that are optimized for EE while adhering to stringent quality-of-service (QoS) requirements. Subsequently, we introduce a novel dynamically distributed opportunistic sleep strategy (D-DOSS) to optimize EE. This strategy clusters base stations throughout the network and dynamically adjusts their sleep patterns based on real-time traffic load thresholds. Employing Monte Carlo simulations in MATLAB, we rigorously evaluate the efficacy of the D-DOSS approach, quantifying improvements in critical QoS parameters such as coverage probability, energy utilization efficiency (EUE), success probability, and data throughput. In conclusion, our research represents a significant step toward optimizing EE in HetNets, simultaneously addressing network architecture optimization and proposing an innovative sleep management strategy, offering practical solutions to maximize energy efficiency in future wireless networks.
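
A toy Monte Carlo snapshot of the setup, with base stations and users dropped as independent Poisson point processes and lightly loaded cells put to sleep below a load threshold, can be written in a few lines of NumPy. Densities and the threshold are illustrative, not the paper's optimized values.

```python
# One illustrative snapshot: PPP deployment on the unit square, nearest-BS
# association, then a load-threshold sleep decision per base station.
import numpy as np

rng = np.random.default_rng(1)
AREA, LAMBDA_BS, LAMBDA_UE, SLEEP_THRESHOLD = 1.0, 30, 200, 3   # invented parameters

n_bs = rng.poisson(LAMBDA_BS * AREA)
n_ue = rng.poisson(LAMBDA_UE * AREA)
bs = rng.random((n_bs, 2))
ue = rng.random((n_ue, 2))

# Nearest-BS association, then count attached users per base station.
dists = np.linalg.norm(ue[:, None, :] - bs[None, :, :], axis=2)
load = np.bincount(dists.argmin(axis=1), minlength=n_bs)

asleep = load < SLEEP_THRESHOLD
print(f"{n_bs} BSs deployed, {asleep.sum()} put to sleep "
      f"({asleep.mean():.0%} energy-saving candidates this snapshot)")
```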

40 pages, 5898 KiB  
Article
Authentication and Key Agreement Protocol in Hybrid Edge–Fog–Cloud Computing Enhanced by 5G Networks
by Jiayi Zhang, Abdelkader Ouda and Raafat Abu-Rukba
Future Internet 2024, 16(6), 209; https://doi.org/10.3390/fi16060209 - 14 Jun 2024
Cited by 10 | Viewed by 2259
Abstract
The Internet of Things (IoT) has revolutionized connected devices, with applications in healthcare, data analytics, and smart cities. For time-sensitive applications, 5G wireless networks provide ultra-reliable low-latency communication (URLLC), and fog computing offloads IoT processing. Integrating 5G and fog computing can address cloud computing's deficiencies, but security challenges remain, especially in Authentication and Key Agreement, due to the distributed and dynamic nature of fog computing. This study presents an innovative mutual Authentication and Key Agreement protocol specifically tailored to the security needs of fog computing in the edge–fog–cloud three-tier architecture, enhanced by the incorporation of the 5G network. The study improves security in the edge–fog–cloud context by introducing a stateless authentication mechanism and conducting a comparative analysis of the proposed protocol against well-known alternatives, such as TLS 1.3, 5G-AKA, and various handover protocols. The suggested approach has a total transmission cost of only 1280 bits in the authentication phase, approximately 30% lower than other protocols. In addition, the suggested handover protocol involves only two signaling costs. The computational cost of handover authentication for the edge user is significantly low, measuring 0.243 ms, under 10% of the computational cost of other authentication protocols.
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks)
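
To make "stateless authentication" concrete, here is a generic sketch in which the fog node verifies a self-contained HMAC-sealed token instead of keeping per-user session state. This illustrates the general idea only; it is not the paper's protocol and omits key agreement, nonces, and the 5G specifics.

```python
# Generic stateless-token sketch: the verifier needs only the shared key, not a
# session table. Hypothetical construction, not the paper's protocol.
import base64, hashlib, hmac, json, time

SHARED_KEY = b"provisioned-at-registration"   # hypothetical pre-shared key

def issue_token(user_id: str) -> str:
    claims = base64.b64encode(json.dumps({"sub": user_id, "iat": int(time.time())}).encode())
    tag = hmac.new(SHARED_KEY, claims, hashlib.sha256).hexdigest().encode()
    return (claims + b"." + tag).decode()

def verify_token(token: str) -> bool:
    claims, _, tag = token.encode().rpartition(b".")
    expected = hmac.new(SHARED_KEY, claims, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(expected, tag)

token = issue_token("edge-user-42")
print("fog node accepts token without stored session state:", verify_token(token))
```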

42 pages, 1593 KiB  
Article
Higher-Order Convolutional Neural Networks for Essential Climate Variables Forecasting
by Michalis Giannopoulos, Grigorios Tsagkatakis and Panagiotis Tsakalides
Remote Sens. 2024, 16(11), 2020; https://doi.org/10.3390/rs16112020 - 4 Jun 2024
Cited by 3 | Viewed by 1348
Abstract
Earth observation imaging technologies, particularly multispectral sensors, produce extensive high-dimensional data over time, offering a wealth of information on global dynamics. These data encapsulate crucial information in essential climate variables, such as varying levels of soil moisture and temperature. However, current cutting-edge machine learning models, including deep learning ones, often overlook this multidimensional structure, analyzing each variable in isolation and losing critical interconnected information. In our study, we enhance conventional convolutional neural network models, specifically those based on the embedded temporal convolutional network framework, transforming them into models that inherently understand and interpret multidimensional correlations and dependencies. This transformation involves recasting the existing problem as a generalized case of N-dimensional observation analysis, followed by deriving the essential forward- and backward-pass equations through tensor decompositions and compounded convolutions. Consequently, we adapt integral components of established embedded temporal convolutional network models, such as the encoder and decoder networks, enabling them to process 4D spatial time series that encompass all essential climate variables concurrently. Through a rigorous exploration of diverse model architectures and an extensive evaluation of their forecasting performance against top-tier methods, we utilize two new long-term essential climate variables datasets with monthly intervals extending over four decades. Our empirical analysis, focusing particularly on soil temperature data, shows that the high-dimensional embedded temporal convolutional network models markedly excel in forecasting, surpassing their low-dimensional counterparts even under the most challenging conditions, characterized by a notable paucity of training data.
(This article belongs to the Section Environmental Remote Sensing)
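
The contrast the authors draw, processing the climate variables jointly versus in isolation, can be shown with tensor shapes alone. The sketch below, under invented dimensions, compares one 3D convolution over time × lat × lon (variables as channels) with per-variable 2D convolutions; the paper's tensor decompositions are not reproduced.

```python
# Shape-level illustration only: joint vs. isolated treatment of climate variables.
import torch
import torch.nn as nn

B, V, T, H, W = 2, 4, 12, 32, 32       # batch, climate variables, months, grid (invented)
x = torch.randn(B, V, T, H, W)

joint = nn.Conv3d(in_channels=V, out_channels=8, kernel_size=3, padding=1)
print("joint spatio-temporal features:", joint(x).shape)    # (2, 8, 12, 32, 32)

per_var = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
iso = torch.stack([per_var(x[:, v].reshape(B * T, 1, H, W)) for v in range(V)], dim=1)
print("isolated per-variable features:", iso.shape)         # (24, 4, 8, 32, 32): batch/time flattened
```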

15 pages, 5200 KiB  
Article
Few-Shot Image Classification Based on Swin Transformer + CSAM + EMD
by Huadong Sun, Pengyi Zhang, Xu Zhang and Xiaowei Han
Electronics 2024, 13(11), 2121; https://doi.org/10.3390/electronics13112121 - 29 May 2024
Cited by 2 | Viewed by 1439
Abstract
In few-shot image classification (FSIC), the feature extraction module of traditional convolutional neural networks is often constrained by the local nature of the convolutional kernel, making it challenging to handle global information and long-distance dependencies effectively. To address this problem, an innovative FSIC method is proposed in this paper that integrates the Swin Transformer, CSAM, and Earth Mover's Distance (EMD) (STCE). We utilize the Swin Transformer network for image feature extraction and apply CSAM attention weighting to the output feature map, while adopting the EMD algorithm to generate the optimal matching flow between structural units, minimizing the matching cost. This approach allows for a more precise representation of the classification distance between images. We have conducted numerous experiments to validate the effectiveness of our algorithm. On three commonly used few-shot datasets, namely mini-ImageNet, tiered-ImageNet, and FC100, the one-shot and five-shot accuracies reach the state of the art (SOTA) in FSIC: mini-ImageNet achieves an accuracy of 98.65 ± 0.1% for one-shot and 99.6 ± 0.2% for five-shot tasks, while tiered-ImageNet achieves 91.6 ± 0.1% for one-shot and 96.55 ± 0.27% for five-shot tasks. For FC100, the accuracy is 64.1 ± 0.3% for one-shot and 79.8 ± 0.69% for five-shot tasks. On two further commonly used few-shot datasets, CUB and CIFAR-FS, CUB achieves an accuracy of 83.1 ± 0.4% for one-shot and 92.88 ± 0.4% for five-shot tasks, while CIFAR-FS achieves 86.95 ± 0.2% for one-shot and 94 ± 0.4% for five-shot tasks.
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)
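
The EMD step matches local feature units of two images at minimum cost. With equal unit weights this reduces to an assignment problem, so SciPy's linear_sum_assignment can stand in for a full EMD solver in this sketch; random vectors replace the Swin/CSAM features.

```python
# Simplified matching sketch: assignment as a special case of EMD under equal
# unit weights. Random descriptors stand in for the learned features.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
support_units = rng.random((9, 64))   # e.g. a 3x3 grid of local descriptors, image A
query_units = rng.random((9, 64))     # same layout, image B

cost = cdist(support_units, query_units, metric="cosine")   # unit-to-unit matching cost
rows, cols = linear_sum_assignment(cost)
distance = cost[rows, cols].sum()
print(f"structural distance between images: {distance:.3f}")  # lower = more likely same class
```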

29 pages, 4351 KiB  
Article
A Two-Stage Data Envelopment Analysis Approach Incorporating the Global Bounded Adjustment Measure to Evaluate the Efficiency of Medical Waste Recycling Systems with Undesirable Inputs and Outputs
by Wen-Jing Song, Jian-Wei Ren, Chun-Hua Chen, Chen-Xi Feng, Lin-Qiang Li and Chong-Yu Ma
Sustainability 2024, 16(10), 4023; https://doi.org/10.3390/su16104023 - 11 May 2024
Cited by 2 | Viewed by 1907
Abstract
With the ever-increasing focus on sustainable development, recycling waste and the renewable use of waste products have earned immense consideration from academics and policy makers. The serious pollution, complex types, and strong infectivity of medical waste have brought serious challenges to its management. Although several researchers have addressed the issue by optimizing medical waste management networks and systems, there is still a significant gap in systematically evaluating the efficiency of medical waste recycling systems. Therefore, this paper proposes a two-stage data envelopment analysis (DEA) approach that combines the virtual frontier and the global bounded adjustment measure (BAM-VF-G), considering both undesirable inputs and outputs. In the first stage, the BAM-G model is used to evaluate the efficiency of medical waste recycling systems, and the BAM-VF-G model is used to further rank super-efficient systems. In the second stage, two types of efficiency decomposition models are proposed. The first type decomposes unified efficiency into production efficiency (PE) and environment efficiency (EE). Depending on the system structure, the second type decomposes unified efficiency into the efficiency of the medical waste collection and transport subsystem (MWCS) and the efficiency of the medical waste treatment subsystem (MWTS). Applying the novel approach to measure the efficiency of medical waste recycling systems in China's new first-tier cities, we find that (1) Foshan ranks highest in efficiency, followed by Tianjin and Qingdao, with efficiency values of 0.386, 0.180, and 0.130, respectively; (2) the EE lacks resilience and fluctuated the most from 2017 to 2022; and (3) the efficiency of MWCSs has consistently been lower than that of MWTSs and is a critical factor inhibiting the overall efficiency of medical waste recycling systems.
(This article belongs to the Section Waste and Recycling)
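
For readers unfamiliar with DEA, the sketch below solves a textbook input-oriented CCR model with SciPy on toy data. It is deliberately simpler than the paper's BAM-VF-G model, with no bounds, virtual frontier, or undesirable inputs/outputs, and is included only to show what an efficiency-score computation looks like.

```python
# Textbook input-oriented CCR DEA, not the BAM-VF-G model. Toy data throughout.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0, 5.0]])   # 1 input,  4 decision-making units (DMUs)
Y = np.array([[1.0, 2.0, 1.5, 2.5]])   # 1 output, 4 DMUs

def ccr_efficiency(o: int) -> float:
    """min theta s.t. X@lam <= theta*x_o, Y@lam >= y_o, lam >= 0."""
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                          # minimize theta
    A_in = np.hstack([-X[:, [o]], X])                    # sum(lam*x) - theta*x_o <= 0
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])   # -sum(lam*y) <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for dmu in range(4):
    print(f"DMU {dmu}: efficiency = {ccr_efficiency(dmu):.3f}")
```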

19 pages, 7067 KiB  
Article
Deep Reinforcement Learning-Empowered Cost-Effective Federated Video Surveillance Management Framework
by Dilshod Bazarov Ravshan Ugli, Alaelddin F. Y. Mohammed, Taeheum Na and Joohyung Lee
Sensors 2024, 24(7), 2158; https://doi.org/10.3390/s24072158 - 27 Mar 2024
Cited by 1 | Viewed by 1960
Abstract
Video surveillance systems are integral to bolstering safety and security across multiple settings. With the advent of deep learning (DL), a specialization within machine learning (ML), these systems have been significantly augmented to facilitate DL-based video surveillance services with notable precision. Nevertheless, DL-based video surveillance services, which necessitate tracking object movement and motion (e.g., to identify unusual object behaviors), can demand a significant portion of computational and memory resources, including GPU computing power for model inference and GPU memory for model loading. To tackle these computational demands, this study introduces a novel video surveillance management system designed to optimize operational efficiency. At its core, the system is built on a two-tiered edge computing architecture (i.e., client and server communicating through socket transmission). In this architecture, the primary edge (i.e., the client side) handles initial processing tasks, such as object detection, and is connected via a Universal Serial Bus (USB) cable to the Closed-Circuit Television (CCTV) camera, directly at the source of the video feed. This immediate processing reduces data transfer latency by detecting objects in real time. Meanwhile, the secondary edge (i.e., the server side) hosts a threshold-control module that dynamically adjusts the threshold time value required to release DL models, reducing needless GPU usage. By dynamically optimizing this threshold, the system can manage GPU usage effectively, ensuring resources are allocated efficiently. Moreover, we utilize federated learning (FL) to streamline the training of a Long Short-Term Memory (LSTM) network for predicting imminent object appearances by amalgamating data from diverse camera sources while ensuring data privacy and optimized resource allocation. Furthermore, in contrast to the static threshold values or moving-average techniques used in previous approaches, we employ a Deep Q-Network (DQN) methodology to manage threshold values dynamically. This approach efficiently balances the trade-off between GPU memory conservation and DL model reloading latency, enabled by incorporating LSTM-derived predictions as inputs to determine the optimal timing for releasing the DL model. The results highlight the potential of our approach to significantly improve the efficiency and effective usage of computational resources in video surveillance systems, opening the door to enhanced security in various domains.
(This article belongs to the Special Issue Artificial Intelligence Methods for Smart Cities—2nd Edition)
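
The threshold controller's learning loop can be sketched compactly if a tabular Q-learner is substituted for the paper's DQN. The environment below, with exponentially distributed idle gaps, a memory cost for holding the model, and a reload penalty for releasing it too eagerly, is invented for illustration.

```python
# Tabular Q-learning stand-in for the DQN-based threshold controller. Reward shape
# and the toy environment are invented; only the trade-off structure is the point.
import random
random.seed(0)

ACTIONS = (-1.0, 0.0, 1.0)      # shrink / keep / grow the release threshold (seconds)
Q = {}                           # (discretized threshold, action) -> value
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

def reward(threshold, idle_gap):
    """Holding the model while idle wastes GPU memory (cost grows with the
    threshold); releasing too eagerly pays a reload-latency penalty."""
    return -0.01 * threshold if idle_gap < threshold else -1.0

threshold = 5.0
for _ in range(5000):
    s = round(threshold)
    a = random.choice(ACTIONS) if random.random() < EPS \
        else max(ACTIONS, key=lambda x: Q.get((s, x), 0.0))
    idle_gap = random.expovariate(1 / 4.0)       # stand-in for LSTM-predicted idle gaps
    r = reward(threshold, idle_gap)
    threshold = min(max(threshold + a, 1.0), 20.0)
    s2 = round(threshold)
    best_next = max(Q.get((s2, x), 0.0) for x in ACTIONS)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + ALPHA * (r + GAMMA * best_next - old)

print(f"learned release threshold: {threshold:.1f} s")
```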

17 pages, 608 KiB  
Article
Optimized Two-Tier Caching with Hybrid Millimeter-Wave and Microwave Communications for 6G Networks
by Muhammad Sheraz, Teong Chee Chuah, Mardeni Bin Roslee, Manzoor Ahmed, Amjad Iqbal and Ala’a Al-Habashna
Appl. Sci. 2024, 14(6), 2589; https://doi.org/10.3390/app14062589 - 20 Mar 2024
Cited by 4 | Viewed by 1413
Abstract
Data caching is a promising technique to alleviate the data traffic burden on the backhaul and minimize data access delay. However, the cache capacity constraint poses a significant challenge to obtaining content through cache resources, degrading caching performance. In this paper, we propose a novel two-tier caching mechanism for data caching at the mobile user equipment (UE) and small base station (SBS) levels in ultra-dense 6G heterogeneous networks, reducing data access failure via cache resources. The two-tier caching enables users to retrieve their desired content from cache resources through device-to-device (D2D) communications with neighboring users or from the serving SBS. The cache-enabled UE exploits millimeter-wave (mmWave)-based D2D communications, utilizing line-of-sight (LoS) links for high-speed data transmission to content-demanding mobile UE within a limited connection time. In the event of D2D communication failures, a dual-mode hybrid system combining mmWave and microwave (μWave) technologies is utilized to ensure effective data transmission between the SBS and UE to fulfill users' data demands. In the proposed framework, the data transmission speed is optimized through mmWave signals in LoS conditions; in non-LoS scenarios, the system switches to μWave mode for obstacle-penetrating signal transmission. Subsequently, we propose a reinforcement learning (RL) approach to optimize cache decisions through approximation of the Q action-value function. The proposed technique undergoes iterative learning, adapting to dynamic network conditions to enhance the content placement policy and minimize delay. Extensive simulations demonstrate the efficiency of our proposed approach in significantly reducing network delay compared with benchmark schemes.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
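
The retrieval order the abstract lays out, a neighbor cache over mmWave D2D first, then the SBS, with a μWave fallback when line of sight is lost, sketches as follows. Link behavior and contents are toy placeholders, and the RL placement policy is omitted.

```python
# Toy sketch of the two-tier retrieval path with a LoS-dependent mode switch.
import random
random.seed(3)

ue_caches = {"u1": {"clipA"}, "u2": {"clipB"}}     # tier 1: per-device caches
sbs_cache = {"clipA", "clipC"}                      # tier 2: serving-SBS cache

def serve(user, item):
    has_los = random.random() < 0.6                 # toy line-of-sight probability
    if has_los:                                     # mmWave D2D needs a LoS link
        for peer, cache in ue_caches.items():
            if peer != user and item in cache:
                return f"{item}: tier-1 D2D mmWave hit at {peer}"
    if item in sbs_cache:
        mode = "mmWave (LoS)" if has_los else "μWave (NLoS fallback)"
        return f"{item}: tier-2 SBS hit over {mode}"
    return f"{item}: miss at both tiers -> backhaul fetch"

for item in ("clipB", "clipC", "clipD"):
    print(serve("u1", item))
```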

7 pages, 1771 KiB  
Proceeding Paper
QoS Performance Evaluation for Wireless Sensor Networks: The AQUASENSE Approach
by Sofia Batsi and Stefano Tennina
Eng. Proc. 2023, 58(1), 113; https://doi.org/10.3390/ecsa-10-16181 - 15 Nov 2023
Viewed by 809
Abstract
The AQUASENSE project is a multi-site Innovative Training Network (ITN) that focuses on water and food quality monitoring using Internet of Things (IoT) technologies. This paper presents a communication system suitable for supporting the pollution scenarios examined in the AQUASENSE project. The proposed system is designed and developed in the SimuLTE/OMNeT++ simulation framework, modeling an LTE network infrastructure that connects the Wireless Sensor Network (WSN) with a remote server where data are collected. Two network topologies are studied. In Scenario A, a single-hop (one-tier) network, multiple sensors in a multi-cell network are associated with different base stations and send water measurements to the remote server through them. In Scenario B, a two-tier network, the sensors in a multi-cell network are instead associated with local aggregators, which first collect and aggregate the measurements and then send them to the remote server through the LTE base stations. For these topologies, delay and goodput are studied as representative network performance indices under two conditions: (i) periodic monitoring, where data are transmitted to the server at larger intervals (every 1 or 2 s), and (ii) alarm monitoring, where data are transmitted more often (every 0.5 or 1 s). The number of sensors is varied to demonstrate the scalability of the different approaches.
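
A back-of-the-envelope comparison of the two topologies' uplink load, under the reporting intervals quoted above, shows why the two-tier layout scales; the payload size and aggregation ratio are invented parameters.

```python
# Rough signaling-load arithmetic for Scenario A (direct) vs. Scenario B (aggregated).
PAYLOAD_BITS = 1000          # hypothetical size of one water-quality report

def uplink_messages_per_s(n_sensors, interval_s, aggregated=False, group=10):
    reports = n_sensors / interval_s
    return reports / group if aggregated else reports   # aggregators batch `group` reports

for n in (50, 200, 800):
    a = uplink_messages_per_s(n, interval_s=1.0)                    # Scenario A, periodic
    b = uplink_messages_per_s(n, interval_s=1.0, aggregated=True)   # Scenario B, two-tier
    print(f"{n:4d} sensors: A={a:6.0f} msg/s ({a * PAYLOAD_BITS / 1e3:.0f} kbit/s), "
          f"B={b:6.0f} msg/s to the LTE cells")
```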

15 pages, 5649 KiB  
Article
Revolutionizing Urban Mobility: IoT-Enhanced Autonomous Parking Solutions with Transfer Learning for Smart Cities
by Qaiser Abbas, Gulzar Ahmad, Tahir Alyas, Turki Alghamdi, Yazed Alsaawy and Ali Alzahrani
Sensors 2023, 23(21), 8753; https://doi.org/10.3390/s23218753 - 27 Oct 2023
Cited by 13 | Viewed by 7184
Abstract
Smart cities have emerged as a specialized domain encompassing various technologies, transitioning from civil engineering to technology-driven solutions. The accelerated development of technologies such as the Internet of Things (IoT), software-defined networks (SDN), 5G, artificial intelligence, cognitive science, and analytics has played a crucial role in providing solutions for smart cities. Smart cities rely heavily on devices, ad hoc networks, and cloud computing to integrate and streamline various activities toward common goals. However, the complexity arising from multiple cloud service providers offering myriad services necessitates a stable and coherent platform for sustainable operations. The Smart City Operational Platform Ecology (SCOPE) model has been developed to address these growing demands, incorporating machine learning, cognitive correlates, ecosystem management, and security. SCOPE provides an ecosystem that establishes a balance for achieving sustainability and progress. In the context of smart cities, IoT devices play a significant role in enabling automation and data capture. This research paper focuses on a specific module of SCOPE that deals with data processing and learning mechanisms for object identification in smart cities. Specifically, it presents a car parking system that utilizes smart identification techniques to identify vacant slots. The learning controller in SCOPE employs a two-tier approach, utilizing two different models, namely AlexNet and YOLO, to ensure procedural stability and improvement.
(This article belongs to the Special Issue AI-IoT for New Challenges in Smart Cities)
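
The two-tier identification pass, a detector proposing regions and a classifier labeling each crop, might look like the following, where detect_regions is a hypothetical stub standing in for the YOLO stage and the AlexNet head is untrained; no claim is made about SCOPE's actual pipeline.

```python
# Conceptual two-tier sketch: detector proposals -> AlexNet-style vacant/occupied head.
import torch
from torchvision.models import alexnet

classifier = alexnet(num_classes=2)    # tier 2: vacant vs. occupied head (untrained here)
classifier.eval()

def detect_regions(frame: torch.Tensor):
    """Tier-1 stand-in: pretend the detector returned two fixed 224x224 crops."""
    return [frame[:, :, :224, :224], frame[:, :, -224:, -224:]]

frame = torch.randn(1, 3, 480, 640)    # one fabricated CCTV frame
with torch.no_grad():
    for i, crop in enumerate(detect_regions(frame)):
        label = classifier(crop).argmax(dim=1).item()
        print(f"slot {i}: {'occupied' if label else 'vacant'}")
```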
