Search Results (1,360)

Search Parameters:
Keywords = cloud service provider

19 pages, 650 KiB  
Article
LEMAD: LLM-Empowered Multi-Agent System for Anomaly Detection in Power Grid Services
by Xin Ji, Le Zhang, Wenya Zhang, Fang Peng, Yifan Mao, Xingchuang Liao and Kui Zhang
Electronics 2025, 14(15), 3008; https://doi.org/10.3390/electronics14153008 - 28 Jul 2025
Abstract
With the accelerated digital transformation of the power industry, critical infrastructures such as power grids are increasingly migrating to cloud-native architectures, leading to unprecedented growth in service scale and complexity. Traditional operation and maintenance (O&M) methods struggle to meet the demands for real-time monitoring, accuracy, and scalability in such environments. This paper proposes a novel service performance anomaly detection system based on large language models (LLMs) and multi-agent systems (MAS). By integrating the semantic understanding capabilities of LLMs with the distributed collaboration advantages of MAS, we construct a high-precision and robust anomaly detection framework. The system adopts a hierarchical architecture, where lower-layer agents are responsible for tasks such as log parsing and metric monitoring, while an upper-layer coordinating agent performs multimodal feature fusion and global anomaly decision-making. Additionally, the LLM enhances the semantic analysis and causal reasoning capabilities for logs. Experiments conducted on real-world data from the State Grid Corporation of China (SGCC), covering 1289 service combinations, demonstrate that the proposed system significantly outperforms traditional methods in terms of F1-score across four platforms, including customer services and grid resources, with improvements of up to 10.3%; it achieves a maximum F1-score of 88.78%, with a precision of 92.16% and a recall of 85.63%, outperforming five baseline methods. Notably, the system excels in composite anomaly detection and root cause analysis. This study provides an industrial-grade, scalable, and interpretable solution for intelligent power grid O&M, offering a valuable reference for the practical implementation of AIOps in critical infrastructures. Full article
(This article belongs to the Special Issue Advanced Techniques for Multi-Agent Systems)
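The abstract gives only the architectural outline, so the following minimal Python sketch illustrates the hierarchical idea in its simplest form: hypothetical lower-layer agents emit normalized anomaly scores, and an upper-layer coordinator fuses them with a weighted rule. The agent names, weights, and threshold are illustrative assumptions, not the paper's LLM-based components.

```python
from dataclasses import dataclass

@dataclass
class AgentReport:
    """Output of a lower-layer monitoring agent (hypothetical schema)."""
    source: str           # e.g. "log_parser", "metric_monitor"
    anomaly_score: float  # normalized to [0, 1]

def coordinate(reports, weights, threshold=0.6):
    """Upper-layer coordinator: weighted fusion of per-agent scores.

    The real LEMAD system performs LLM-driven multimodal feature fusion;
    a weighted average stands in here purely for illustration.
    """
    total_w = sum(weights.get(r.source, 1.0) for r in reports)
    fused = sum(weights.get(r.source, 1.0) * r.anomaly_score for r in reports) / total_w
    return {"fused_score": fused, "anomaly": fused >= threshold}

if __name__ == "__main__":
    reports = [AgentReport("log_parser", 0.82), AgentReport("metric_monitor", 0.55)]
    print(coordinate(reports, weights={"log_parser": 2.0, "metric_monitor": 1.0}))
```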

30 pages, 3451 KiB  
Article
Integrating Google Maps and Smooth Street View Videos for Route Planning
by Federica Massimi, Antonio Tedeschi, Kalapraveen Bagadi and Francesco Benedetto
J. Imaging 2025, 11(8), 251; https://doi.org/10.3390/jimaging11080251 - 25 Jul 2025
Viewed by 202
Abstract
This research addresses the long-standing dependence on printed maps for navigation and highlights the limitations of existing digital services like Google Street View and Google Street View Player in providing comprehensive solutions for route analysis and understanding. The absence of a systematic approach to route analysis, issues related to insufficient street view images, and the lack of proper image mapping for desired roads remain unaddressed by current applications, which are predominantly client-based. In response, we propose an innovative automatic system designed to generate videos depicting road routes between two geographic locations. The system calculates and presents the route conventionally, emphasizing the path on a two-dimensional representation, and in a multimedia format. A prototype is developed based on a cloud-based client–server architecture, featuring three core modules: frames acquisition, frames analysis and elaboration, and the persistence of metadata information and computed videos. The tests, encompassing both real-world and synthetic scenarios, have produced promising results, showcasing the efficiency of our system. By providing users with a real and immersive understanding of requested routes, our approach fills a crucial gap in existing navigation solutions. This research contributes to the advancement of route planning technologies, offering a comprehensive and user-friendly system that leverages cloud computing and multimedia visualization for an enhanced navigation experience. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
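As a rough illustration of the final composition step of such a system, the sketch below assembles already-acquired street-view frames into a route video with OpenCV. It assumes the frames have been downloaded and ordered along the route; the paper's cloud-based acquisition and analysis modules are not reproduced here.

```python
import cv2  # pip install opencv-python

def frames_to_route_video(frame_paths, out_path="route.mp4", fps=10):
    """Assemble pre-acquired street-view frames into a route video.

    Illustrative only: the paper describes a cloud-based client-server
    pipeline; this sketch covers just the video-composition step and
    assumes the frames were already downloaded and ordered.
    """
    first = cv2.imread(frame_paths[0])
    height, width = first.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    for path in frame_paths:
        frame = cv2.imread(path)
        writer.write(cv2.resize(frame, (width, height)))  # keep a uniform frame size
    writer.release()
    return out_path
```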

31 pages, 528 KiB  
Article
An Exploratory Factor Analysis Approach on Challenging Factors for Government Cloud Service Adoption Intention
by Ndukwe Ukeje, Jairo A. Gutierrez, Krassie Petrova and Ugochukwu Chinonso Okolie
Future Internet 2025, 17(8), 326; https://doi.org/10.3390/fi17080326 - 23 Jul 2025
Viewed by 219
Abstract
This study explores the challenges hindering the government’s adoption of cloud computing despite its benefits in improving services, reducing costs, and enhancing collaboration. Key barriers include information security, privacy, compliance, and perceived risks. Using the Unified Theory of Acceptance and Use of Technology (UTAUT) model, the study conceptualises a model incorporating privacy, governance framework, performance expectancy, and information security as independent variables, with perceived risk as a moderator and government intention as the dependent variable. The study employs exploratory factor analysis (EFA) based on survey data from 71 participants in Nigerian government organisations to validate the measurement scale for these factors. The analysis evaluates variable validity, factor relationships, and measurement reliability. Cronbach’s alpha values range from 0.807 to 0.950, confirming high reliability. Measurement items with a common variance above 0.40 were retained, explaining 70.079% of the total variance on the measurement items, demonstrating reliability and accuracy in evaluating the challenging factors. These findings establish a validated scale for assessing government cloud adoption challenges and highlight complex relationships among influencing factors. This study provides a reliable measurement scale and model for future research and policymakers on the government’s intention to adopt cloud services. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
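The reliability statistic quoted above (Cronbach's alpha between 0.807 and 0.950) can be computed for any item-response matrix with the standard formula; the sketch below uses NumPy and a small block of hypothetical Likert responses, not the study's survey data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

if __name__ == "__main__":
    # Hypothetical 5-point Likert responses from 6 participants on 4 items.
    demo = np.array([[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
                     [2, 3, 2, 3], [4, 4, 5, 5], [3, 4, 3, 3]])
    print(round(cronbach_alpha(demo), 3))
```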

25 pages, 1842 KiB  
Article
Optimizing Cybersecurity Education: A Comparative Study of On-Premises and Cloud-Based Lab Environments Using AWS EC2
by Adil Khan and Azza Mohamed
Computers 2025, 14(8), 297; https://doi.org/10.3390/computers14080297 - 22 Jul 2025
Viewed by 207
Abstract
The increasing complexity of cybersecurity risks highlights the critical need for novel teaching techniques that provide students with the necessary skills and information. Traditional on-premises laboratory setups frequently lack the scalability, flexibility, and accessibility necessary for efficient training in today’s dynamic world. This study compares the efficacy of cloud-based solutions—specifically, Amazon Web Services (AWS) Elastic Compute Cloud (EC2)—against traditional settings like VirtualBox, with the goal of determining their potential to improve cybersecurity education. The study conducts systematic experimentation to compare lab environments based on parameters such as lab completion time, CPU and RAM use, and ease of access. The results show that AWS EC2 outperforms VirtualBox by shortening lab completion times, optimizing resource usage, and providing greater remote accessibility. Additionally, the cloud-based strategy enables scalable, cost-effective deployment via a pay-per-use model, serving a wide range of pedagogical needs. These findings show that incorporating cloud technology into cybersecurity curricula can lead to more efficient, adaptable, and inclusive learning experiences, thereby strengthening pedagogical practice in the field. Full article
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)
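For readers unfamiliar with how such a cloud lab is provisioned programmatically, the hedged sketch below launches a single EC2 lab instance with boto3. The AMI ID, key pair, and security group are placeholders; the paper does not prescribe a specific image or instance configuration.

```python
import boto3  # pip install boto3; AWS credentials must be configured

def launch_lab_instance(ami_id: str, key_name: str, security_group_id: str) -> str:
    """Provision a single cybersecurity-lab VM on EC2 (pay-per-use).

    Sketch only: the AMI, key pair, and security group are placeholders
    for whatever lab image and network rules a course actually uses.
    """
    ec2 = boto3.client("ec2")
    response = ec2.run_instances(
        ImageId=ami_id,                      # e.g. a pre-built lab image
        InstanceType="t3.micro",             # small, inexpensive instance class
        MinCount=1,
        MaxCount=1,
        KeyName=key_name,
        SecurityGroupIds=[security_group_id],
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "cybersecurity-lab"}],
        }],
    )
    return response["Instances"][0]["InstanceId"]
```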

10 pages, 915 KiB  
Article
Power Estimation and Energy Efficiency of AI Accelerators on Embedded Systems
by Minseon Kang and Moonju Park
Energies 2025, 18(14), 3840; https://doi.org/10.3390/en18143840 - 19 Jul 2025
Viewed by 273
Abstract
The rapid expansion of IoT devices poses new challenges for AI-driven services, particularly in terms of energy consumption. Although cloud-based AI processing has been the dominant approach, its high energy consumption calls for more energy-efficient alternatives. Edge computing offers an approach for reducing both latency and energy consumption. In this paper, we propose a methodology for estimating the power consumption of AI accelerators on an embedded edge device. Through experimental evaluations involving GPU- and Edge TPU-based platforms, the proposed method demonstrated estimation errors below 8%. The estimation errors were partly due to unaccounted power consumption from main memory and storage access. The proposed approach provides a foundation for more reliable energy management in AI-powered edge computing systems. Full article
(This article belongs to the Special Issue Energy, Electrical and Power Engineering: 4th Edition)
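The abstract does not spell out the estimation model, so the sketch below shows only a generic, utilization-based approach to the same kind of problem: a linear power model integrated over sampled utilization with the trapezoidal rule. The idle and peak power figures are placeholder values, not calibration results from the paper.

```python
def estimate_power(utilization: float, p_idle_w: float, p_max_w: float) -> float:
    """Linear utilization-based power model (illustrative, not the paper's model)."""
    return p_idle_w + utilization * (p_max_w - p_idle_w)

def estimate_energy_joules(samples, p_idle_w=2.0, p_max_w=10.0) -> float:
    """Integrate estimated power over time with the trapezoidal rule.

    `samples` is a list of (timestamp_s, accelerator_utilization) pairs;
    the idle/peak power numbers stand in for values that would be
    calibrated per device (GPU, Edge TPU, ...).
    """
    energy = 0.0
    for (t0, u0), (t1, u1) in zip(samples, samples[1:]):
        p0 = estimate_power(u0, p_idle_w, p_max_w)
        p1 = estimate_power(u1, p_idle_w, p_max_w)
        energy += 0.5 * (p0 + p1) * (t1 - t0)
    return energy

if __name__ == "__main__":
    trace = [(0.0, 0.1), (1.0, 0.8), (2.0, 0.9), (3.0, 0.2)]
    print(f"{estimate_energy_joules(trace):.1f} J over {trace[-1][0]} s")
```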

32 pages, 2529 KiB  
Article
Cloud Adoption in the Digital Era: An Interpretable Machine Learning Analysis of National Readiness and Structural Disparities Across the EU
by Cristiana Tudor, Margareta Florescu, Persefoni Polychronidou, Pavlos Stamatiou, Vasileios Vlachos and Konstadina Kasabali
Appl. Sci. 2025, 15(14), 8019; https://doi.org/10.3390/app15148019 - 18 Jul 2025
Viewed by 195
Abstract
As digital transformation accelerates across Europe, cloud computing plays an increasingly central role in modernizing public services and private enterprises. Yet adoption rates vary markedly among EU member states, reflecting deeper structural differences in digital capacity. This study employs explainable machine learning to uncover the drivers of national cloud adoption across 27 EU countries using harmonized panel datasets spanning 2014–2021 and 2014–2024. A methodological pipeline combining Random Forests (RF), XGBoost, Support Vector Machines (SVM), and Elastic Net regression is implemented, with model tuning conducted via nested cross-validation. Among individual models, Elastic Net and SVM delivered superior predictive performance, while a stacked ensemble achieved the best overall accuracy (MAE = 0.214, R² = 0.948). The most interpretable model, a standardized RF with country fixed effects, attained MAE = 0.321 and R² = 0.864, making it well-suited for policy analysis. Variable importance analysis reveals that the density of ICT specialists is the strongest predictor of adoption, followed by broadband access and higher education. Fixed-effect modeling confirms significant national heterogeneity, with countries like Finland and Luxembourg consistently leading adoption, while Bulgaria and Romania exhibit structural barriers. Partial dependence and SHAP analyses reveal nonlinear complementarities between digital skills and infrastructure. A hierarchical clustering of countries reveals three distinct digital maturity profiles, offering tailored policy pathways. These results directly support the EU Digital Decade’s strategic targets and provide actionable insights for advancing inclusive and resilient digital transformation across the Union. Full article
(This article belongs to the Special Issue Advanced Technologies Applied in Digital Media Era)
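A minimal reproduction of the stacking idea described above can be written with scikit-learn; the sketch below combines Random Forest, SVM, and Elastic Net base learners under a Ridge meta-learner on synthetic data. The study's nested cross-validation, XGBoost learner, country fixed effects, and SHAP analysis are omitted, and all hyperparameters here are arbitrary.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import ElasticNet, Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for harmonized panel features (ICT specialists,
# broadband access, education, ...); not the study's EU dataset.
X, y = make_regression(n_samples=200, n_features=8, noise=5.0, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=300, random_state=0)),
        ("svm", make_pipeline(StandardScaler(), SVR(C=10.0))),
        ("enet", make_pipeline(StandardScaler(), ElasticNet(alpha=0.1))),
    ],
    final_estimator=Ridge(),
)

mae = -cross_val_score(stack, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
print(f"5-fold CV MAE of the stacked ensemble: {mae:.3f}")
```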

17 pages, 2769 KiB  
Article
Service-Based Architecture for 6G RAN: A Cloud Native Platform That Provides Everything as a Service
by Guangyi Liu, Na Li, Chunjing Yuan, Siqi Chen and Xuan Liu
Sensors 2025, 25(14), 4428; https://doi.org/10.3390/s25144428 - 16 Jul 2025
Viewed by 241
Abstract
The commercialization of 5G networks has revealed challenges in providing customized and personalized deployment and services for diverse vertical industrial use cases, leading to high costs, low resource and management efficiency, and long time to market. Although the 5G core network (CN) has adopted a service-based architecture (SBA) to enhance agility and elasticity, the radio access network (RAN) retains a traditional, integrated, and rigid architecture that makes its functions and capabilities difficult to customize and personalize. Open RAN attempted to introduce cloudification, openness, and intelligence to the RAN but faced limitations due to 5G RAN specifications. To address this, this paper draws on the experience and insights from the 5G SBA and conducts a systematic study of a service-based RAN, covering service definition, interface protocol stacks, impact analysis on the air interface, radio capability exposure, and joint optimization with the CN. Performance verification shows that the service-based user plane design yields significant improvements in resource utilization and scalability. Full article
(This article belongs to the Special Issue Future Horizons in Networking: Exploring the Potential of 6G)

27 pages, 1889 KiB  
Article
Advancing Smart City Sustainability Through Artificial Intelligence, Digital Twin and Blockchain Solutions
by Ivica Lukić, Mirko Köhler, Zdravko Krpić and Miljenko Švarcmajer
Technologies 2025, 13(7), 300; https://doi.org/10.3390/technologies13070300 - 11 Jul 2025
Viewed by 538
Abstract
This paper presents an integrated Smart City platform that combines digital twin technology, advanced machine learning, and a private blockchain network to enhance data-driven decision making and operational efficiency in both public enterprises and small and medium-sized enterprises (SMEs). The proposed cloud-based business intelligence model automates Extract, Transform, Load (ETL) processes, enables real-time analytics, and secures data integrity and transparency through blockchain-enabled audit trails. By implementing the proposed solution, Smart City and public service providers can significantly improve operational efficiency, including a 15% reduction in costs and a 12% decrease in fuel consumption for waste management, as well as increased citizen engagement and transparency in Smart City governance. The digital twin component facilitated scenario simulations and proactive resource management, while the participatory governance module empowered citizens through transparent, immutable records of proposals and voting. This study also discusses technical, organizational, and regulatory challenges, such as data integration, scalability, and privacy compliance. The results indicate that the proposed approach offers a scalable and sustainable model for Smart City transformation, fostering citizen trust, regulatory compliance, and measurable environmental and social benefits. Full article
(This article belongs to the Section Information and Communication Technologies)
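As an illustration of why a blockchain-backed audit trail makes ETL and voting records tamper-evident, the sketch below maintains a simple hash-linked log in pure Python. It stands in for the platform's private blockchain only conceptually; there is no consensus or distribution layer.

```python
import hashlib
import json
import time

def _digest(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, payload: dict) -> dict:
    """Append an ETL or voting event to a hash-linked audit trail."""
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "payload": payload,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = _digest({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)
    return block

def verify(chain: list) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != expected_prev or block["hash"] != _digest(body):
            return False
    return True

if __name__ == "__main__":
    trail = []
    append_record(trail, {"event": "etl_load", "rows": 1200})
    append_record(trail, {"event": "citizen_vote", "proposal": 42})
    print(verify(trail))  # True until any record is modified
```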

32 pages, 6788 KiB  
Article
Knee Osteoarthritis Detection and Classification Using Autoencoders and Extreme Learning Machines
by Jarrar Amjad, Muhammad Zaheer Sajid, Ammar Amjad, Muhammad Fareed Hamid, Ayman Youssef and Muhammad Irfan Sharif
AI 2025, 6(7), 151; https://doi.org/10.3390/ai6070151 - 8 Jul 2025
Viewed by 521
Abstract
Background/Objectives: Knee osteoarthritis (KOA) is a prevalent disorder affecting both older adults and younger individuals, leading to compromised joint function and mobility. Early and accurate detection is critical for effective intervention, as treatment options become increasingly limited as the disease progresses. Traditional diagnostic methods rely heavily on the expertise of physicians and are susceptible to errors. The demand for utilizing deep learning models in order to automate and improve the accuracy of KOA image classification has been increasing. In this research, a unique deep learning model is presented that employs autoencoders as the primary mechanism for feature extraction, providing a robust solution for KOA classification. Methods: The proposed model differentiates between KOA-positive and KOA-negative images and categorizes the disease into its primary severity levels. Levels of severity range from “healthy knees” (0) to “severe KOA” (4). Symptoms range from typical joint structures to significant joint damage, such as bone spur growth, joint space narrowing, and bone deformation. Two experiments were conducted using different datasets to validate the efficacy of the proposed model. Results: The first experiment used the autoencoder for feature extraction and classification, which reported an accuracy of 96.68%. Another experiment using autoencoders for feature extraction and Extreme Learning Machines for actual classification resulted in an even higher accuracy value of 98.6%. To test the generalizability of the Knee-DNS system, we utilized the Butterfly iQ+ IoT device for image acquisition and Google Colab’s cloud computing services for data processing. Conclusions: This work represents a pioneering application of autoencoder-based deep learning models in the domain of KOA classification, achieving remarkable accuracy and robustness. Full article
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
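The Extreme Learning Machine used in the second experiment is simple enough to sketch directly: hidden-layer weights stay random and only the output weights are solved with a pseudo-inverse. The sketch below runs on random stand-in features (where the autoencoder bottleneck output would go) and is not the Knee-DNS implementation.

```python
import numpy as np

class SimpleELM:
    """Single-hidden-layer Extreme Learning Machine (illustrative).

    Hidden weights are random and fixed; only the output weights are
    solved in closed form, which is what makes ELM training fast on top
    of autoencoder-extracted features.
    """
    def __init__(self, n_hidden=256, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y_onehot):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        self.beta = np.linalg.pinv(self._hidden(X)) @ y_onehot
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta  # argmax over columns gives the KOA grade

if __name__ == "__main__":
    # Hypothetical bottleneck features for 100 knee X-rays, 5 severity grades (0-4).
    X = np.random.default_rng(1).normal(size=(100, 64))
    y = np.eye(5)[np.random.default_rng(2).integers(0, 5, size=100)]
    preds = SimpleELM().fit(X, y).predict(X)
    print("train accuracy:", (preds.argmax(1) == y.argmax(1)).mean())
```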

46 pages, 1709 KiB  
Article
Federated Learning-Driven IoT Request Scheduling for Fault Tolerance in Cloud Data Centers
by Sheeja Rani S and Raafat Aburukba
Mathematics 2025, 13(13), 2198; https://doi.org/10.3390/math13132198 - 5 Jul 2025
Viewed by 372
Abstract
Cloud computing is a virtualized and distributed computing model that provides resources and services on demand and via self-service. Resource failure is one of the major challenges in cloud computing, and there is a need for fault tolerance mechanisms. This paper addresses the issue by proposing a multi-objective radial kernelized federated learning-based fault-tolerant scheduling (MRKFL-FTS) technique for allocating multiple IoT requests or user tasks to virtual machines in cloud IoT-based environments. The MRKFL-FTS technique includes Cloud RAN (C-RAN) and Virtual RAN (V-RAN). The proposed MRKFL-FTS technique comprises four entities, namely, IoT devices, cloud servers, task assigners, and virtual machines. Each IoT device generates several service requests and sends them to the control server. At first, radial kernelized support vector regression is applied in the local training model to identify resource-efficient virtual machines. After that, the locally trained models are combined, and the resulting model is fed into the global aggregation model. Finally, using a weighted round-robin method, the task assigner allocates incoming IoT service requests to virtual machines. This approach improves resource awareness and fault tolerance in scheduling. Quantitative analysis shows that the MRKFL-FTS technique achieved an 8% improvement in task scheduling efficiency and fault prediction accuracy, a 36% improvement in throughput, and a 14% reduction in makespan and time complexity. In addition, the MRKFL-FTS technique reduces response time by 13% and energy consumption by 17%, and it increases scalability by 8% compared to conventional scheduling techniques. Full article
(This article belongs to the Special Issue Advanced Information and Signal Processing: Models and Algorithms)
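The weighted round-robin allocation step is easy to illustrate; in the sketch below, VM weights stand in for the resource-efficiency scores that the federated regression model would supply, and all identifiers are hypothetical.

```python
from itertools import cycle

def weighted_round_robin(requests, vm_weights):
    """Assign IoT service requests to VMs in proportion to their weights.

    `vm_weights` maps a VM id to an integer weight (e.g. derived from a
    resource-efficiency score); a higher weight means more consecutive
    slots per cycle in this simple block-wise variant.
    """
    schedule = [vm for vm, w in vm_weights.items() for _ in range(w)]
    slots = cycle(schedule)
    return {req: next(slots) for req in requests}

if __name__ == "__main__":
    assignments = weighted_round_robin(
        requests=[f"req-{i}" for i in range(7)],
        vm_weights={"vm-a": 3, "vm-b": 2, "vm-c": 1},
    )
    for req, vm in assignments.items():
        print(req, "->", vm)
```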

11 pages, 1695 KiB  
Article
Mathematical Modeling and Statistical Evaluation of the Security–Performance Trade-Off in IoT Cloud Architectures: A Case Study of UBT Smart City
by Besnik Qehaja, Edmond Hajrizi, Behar Haxhismajli, Lavdim Menxhiqi, Galia Marinova and Elissa Mollakuqe
Appl. Sci. 2025, 15(13), 7518; https://doi.org/10.3390/app15137518 - 4 Jul 2025
Viewed by 191
Abstract
This paper presents a mathematical and statistical analysis of the security–performance trade-off in the context of the IoT Cloud architecture implemented at UBT Smart City. Through detailed modeling and real-world measurement data collected before and after the deployment of advanced security measures—such as VPN configuration, Network Security Groups (NSGs), Route Tables, and DDoS Protection—we quantify the impact of security on system performance. We propose a mathematical framework to evaluate the propagation delay of telemetry data through the system and employ queueing theory (M/M/1 model) to simulate the behavior of critical data processing services. Additionally, we perform hypothesis testing and statistical comparison to validate the significance of the observed performance changes. The results show an average delay increase of approximately 19% following the implementation of security mechanisms, highlighting the inevitable trade-off between enhanced security and operational speed. Finally, we introduce a multi-objective cost-delay function that can guide the selection of optimal security configurations by balancing latency and cost, providing a valuable tool for the future optimization of secure IoT infrastructures. Full article
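The M/M/1 model mentioned above has closed-form results, so the delay impact of a security-induced drop in service rate can be sketched directly; the arrival and service rates below are hypothetical, not the UBT Smart City measurements.

```python
def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Standard M/M/1 results: utilization, queue length, and delays.

    rho = lambda/mu, L = rho/(1-rho), W = 1/(mu - lambda), Wq = rho/(mu - lambda).
    Requires lambda < mu for a stable queue.
    """
    if arrival_rate >= service_rate:
        raise ValueError("M/M/1 is unstable when the arrival rate reaches the service rate")
    rho = arrival_rate / service_rate
    return {
        "utilization": rho,
        "mean_jobs_in_system": rho / (1 - rho),
        "mean_time_in_system_s": 1 / (service_rate - arrival_rate),
        "mean_wait_in_queue_s": rho / (service_rate - arrival_rate),
    }

if __name__ == "__main__":
    # Hypothetical telemetry service: 80 msg/s arriving; added security
    # processing lowers the effective service rate from 120 to 105 msg/s.
    before = mm1_metrics(80, 120)
    after = mm1_metrics(80, 105)
    increase = after["mean_time_in_system_s"] / before["mean_time_in_system_s"] - 1
    print(f"delay increase: {increase:.0%}")
```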

47 pages, 2595 KiB  
Article
Advancing Data Privacy in Cloud Storage: A Novel Multi-Layer Encoding Framework
by Kamta Nath Mishra, Rajesh Kumar Lal, Paras Nath Barwal and Alok Mishra
Appl. Sci. 2025, 15(13), 7485; https://doi.org/10.3390/app15137485 - 3 Jul 2025
Viewed by 487
Abstract
Data privacy is a crucial concern for individuals using cloud storage services, and cloud service providers are increasingly focused on meeting this demand. However, privacy breaches in the ever-evolving cyber landscape remain a significant threat to cloud storage infrastructures. Previous studies have aimed to address this issue but have often lacked comprehensive coverage of privacy attributes. In response to the identified gap in privacy-preserving techniques for cloud computing, this research paper presents a novel and adaptable framework. This approach introduces a multi-layer encoding storage arrangement combined with the implementation of a one-time password authorization approach. By integrating these elements, the proposed approach aims to enhance both the flexibility and efficiency of data protection in cloud environments. The findings of this study are anticipated to have significant implications, contributing to the advancement of existing techniques and inspiring the development of innovative research-driven solutions. Continuous research efforts are required to validate the effectiveness of the proposed framework across diverse contexts and assess its performance against evolving privacy vulnerabilities in cloud computing. Full article
(This article belongs to the Special Issue Cybersecurity: Advances in Security and Privacy Enhancing Technology)
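Of the two ingredients named above, only the one-time password authorization step is generic enough to sketch without the paper's details; the toy HMAC-based OTP below is a simplified stand-in (production systems should use a vetted HOTP/TOTP library), and the multi-layer encoding arrangement itself is not reproduced.

```python
import hashlib
import hmac
import secrets

def generate_otp(shared_key: bytes, counter: int, digits: int = 6) -> str:
    """Simplified counter-based one-time password (HMAC-SHA256 truncation).

    A toy stand-in for an OTP authorization step, not the paper's framework.
    """
    mac = hmac.new(shared_key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    code = int.from_bytes(mac[:4], "big") % (10 ** digits)
    return str(code).zfill(digits)

def verify_otp(shared_key: bytes, counter: int, candidate: str) -> bool:
    return hmac.compare_digest(generate_otp(shared_key, counter), candidate)

if __name__ == "__main__":
    key = secrets.token_bytes(32)      # provisioned to the user out of band
    otp = generate_otp(key, counter=1)
    print(verify_otp(key, 1, otp), verify_otp(key, 2, otp))  # True False
```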

28 pages, 1056 KiB  
Review
SDI-Enabled Smart Governance: A Review (2015–2025) of IoT, AI and Geospatial Technologies—Applications and Challenges
by Sofianos Sofianopoulos, Antigoni Faka and Christos Chalkias
Land 2025, 14(7), 1399; https://doi.org/10.3390/land14071399 - 3 Jul 2025
Viewed by 602
Abstract
This paper presents a systematic, narrative review of 62 academic publications (2015–2025) that explore the integration of spatial data infrastructures (SDIs) with emerging smart city technologies to improve local governance. SDIs provide a structured framework for managing geospatial data and, in combination with IoT sensors, geospatial and 3D platforms, cloud computing and AI-powered analytics, enable real-time data-driven decision-making. The review identifies four key technology areas: IoT and sensor technologies, geospatial and 3D mapping platforms, cloud-based data infrastructures, and AI analytics that uniquely contribute to smart governance through improved monitoring, prediction, visualization, and automation. Opportunities include improved urban resilience, public service delivery, environmental monitoring and citizen engagement. However, challenges remain in terms of interoperability, data protection, institutional barriers and unequal access to technologies. To fully realize the potential of integrated SDIs in smart government, the report highlights the need for open standards, ethical frameworks, cross-sector collaboration and citizen-centric design. Ultimately, this synthesis provides a comprehensive basis for promoting inclusive, adaptive and accountable local governance systems through spatially enabled smart technologies. Full article

32 pages, 1517 KiB  
Article
A Proposed Deep Learning Framework for Air Quality Forecasts, Combining Localized Particle Concentration Measurements and Meteorological Data
by Maria X. Psaropa, Sotirios Kontogiannis, Christos J. Lolis, Nikolaos Hatzianastassiou and Christos Pikridas
Appl. Sci. 2025, 15(13), 7432; https://doi.org/10.3390/app15137432 - 2 Jul 2025
Viewed by 289
Abstract
Air pollution in urban areas has increased significantly over the past few years due to industrialization and population growth. Accurate forecasts are therefore needed to minimize its impact. This paper presents a neural network-based approach for forecasting Air Quality Index (AQI) values, employing two different models: a variable-depth neural network (NN) called slideNN, and a Gated Recurrent Unit (GRU) model. Both models use past particulate matter measurements alongside local meteorological data as inputs. The slideNN variable-depth architecture consists of a set of independent neural network models, referred to as strands. Similarly, the GRU model comprises a set of independent GRU models with varying numbers of cells. Finally, both models are combined into a hybrid cloud-based model. This research examines the practical application of multi-strand neural networks and multi-cell recurrent neural networks in air quality forecasting, offering a hands-on case study and model evaluation for the city of Ioannina, Greece. Experimental results show that the GRU model consistently outperforms the slideNN model in terms of forecasting losses, while the hybrid GRU-NN model outperforms both GRU and slideNN, capturing additional localized information that can be exploited by combining particle concentration and microclimate monitoring services. Full article
(This article belongs to the Special Issue Innovations in Artificial Neural Network Applications)
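One of the GRU variants in such a sweep can be sketched with Keras; the window length, feature count, and cell count below are illustrative assumptions rather than the configuration used for the Ioannina dataset.

```python
import numpy as np
import tensorflow as tf  # pip install tensorflow

TIMESTEPS, FEATURES = 24, 6  # e.g. 24 hourly steps of particulate and meteorological inputs

def build_gru(units: int = 64) -> tf.keras.Model:
    """One GRU cell-count variant; the paper evaluates several such models."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
        tf.keras.layers.GRU(units),
        tf.keras.layers.Dense(1),  # next-step AQI value
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    # Random stand-in data; real inputs would be past particle concentrations
    # plus local meteorological measurements.
    X = np.random.rand(128, TIMESTEPS, FEATURES).astype("float32")
    y = np.random.rand(128, 1).astype("float32")
    build_gru().fit(X, y, epochs=2, batch_size=32, verbose=0)
```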

22 pages, 551 KiB  
Article
Multi-Area, Multi-Service and Multi-Tier Edge-Cloud Continuum Planning
by Anargyros J. Roumeliotis, Efstratios Myritzis, Evangelos Kosmatos, Konstantinos V. Katsaros and Angelos J. Amditis
Sensors 2025, 25(13), 3949; https://doi.org/10.3390/s25133949 - 25 Jun 2025
Viewed by 279
Abstract
This paper presents the optimal planning of multi-area, multi-service, and multi-tier edge–cloud environments. The goal is to evaluate the regional deployment of the compute continuum, i.e., the type and number of processing devices, their pairing with a specific tier and task among different [...] Read more.
This paper presents the optimal planning of multi-area, multi-service, and multi-tier edge–cloud environments. The goal is to evaluate the regional deployment of the compute continuum, i.e., the type and number of processing devices and their pairing with a specific tier and task across different areas, subject to processing, rate, and latency requirements. Different offline compute continuum planning approaches are investigated, and a detailed analysis of various design choices is presented. We study one scheme that considers all tasks at once and two iterative schemes that work on smaller task batches and finish once all task groups have been traversed. The group-based approaches are introduced to deal with the potentially excessive execution times of real-world-sized problems. Solutions are provided for continuum planning using both direct, complex methods and simpler, faster ones. Results show that processing all tasks simultaneously yields better performance but requires longer execution, while medium-sized batches achieve good performance faster; the batch-oriented schemes can therefore handle larger problem sizes. Moreover, the task selection strategy in group-based schemes influences performance, so a more detailed analysis is performed for this case and different clustering methods are also considered. Based on our simulations, random selection of tasks in group-based approaches achieves better performance in most cases. Full article
(This article belongs to the Section Sensor Networks)
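To make the batch-oriented idea concrete, the sketch below assigns fixed-size batches of tasks to the device with the most spare capacity. This greedy rule is only a stand-in for the paper's optimization-based planning, but it shows why batching keeps each planning step small.

```python
def plan_in_batches(tasks, devices, batch_size=4):
    """Greedy, batch-oriented sketch of edge-cloud continuum planning.

    `tasks` is a list of (task_id, demand) pairs and `devices` maps a
    device_id to its remaining capacity. Tasks are handled in fixed-size
    batches and each task goes to the feasible device with the most spare
    capacity; a real planner would solve each batch with proper
    optimization under rate and latency constraints.
    """
    placement = {}
    for start in range(0, len(tasks), batch_size):
        for task_id, demand in tasks[start:start + batch_size]:
            candidates = [d for d, cap in devices.items() if cap >= demand]
            if not candidates:
                placement[task_id] = None  # would trigger adding a device in real planning
                continue
            best = max(candidates, key=lambda d: devices[d])
            devices[best] -= demand
            placement[task_id] = best
    return placement

if __name__ == "__main__":
    tasks = [("t1", 2), ("t2", 5), ("t3", 1), ("t4", 4), ("t5", 3)]
    devices = {"edge-1": 6, "edge-2": 4, "cloud": 20}
    print(plan_in_batches(tasks, devices))
```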
