Search Results (230)

Search Parameters:
Keywords = web services and cloud

34 pages, 14464 KB  
Article
Modular IoT Architecture for Monitoring and Control of Office Environments Based on Home Assistant
by Yevheniy Khomenko and Sergii Babichev
IoT 2025, 6(4), 69; https://doi.org/10.3390/iot6040069 - 17 Nov 2025
Viewed by 968
Abstract
Cloud-centric IoT frameworks remain dominant; however, they introduce major challenges related to data privacy, latency, and system resilience. Existing open-source solutions often lack standardized principles for scalable, local-first deployment and do not adequately integrate fault tolerance with hybrid automation logic. This study presents a practical and extensible local-first IoT architecture designed for full operational autonomy using open-source components. The proposed system features a modular, layered design that includes device, communication, data, management, service, security, and presentation layers. It integrates MQTT, Zigbee, REST, and WebSocket protocols to enable reliable publish–subscribe and request–response communication among heterogeneous devices. A hybrid automation model combines rule-based logic with lightweight data-driven routines for context-aware decision-making. The implementation uses Proxmox-based virtualization with Home Assistant as the core automation engine and operates entirely offline, ensuring privacy and continuity without cloud dependency. The architecture was deployed in a real-world office environment and evaluated under workload and fault-injection scenarios. Results demonstrate stable operation with MQTT throughput exceeding 360,000 messages without packet loss, automatic recovery from simulated failures within three minutes, and energy savings of approximately 28% compared to baseline manual control. Compared to established frameworks such as FIWARE and IoT-A, the proposed approach achieves enhanced modularity, local autonomy, and hybrid control capabilities, offering a reproducible model for privacy-sensitive smart environments.
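
The local publish–subscribe backbone this abstract describes can be sketched in a few lines. The broker address and topic names below are hypothetical, and the snippet assumes paho-mqtt 2.x against a local Mosquitto broker rather than the authors' exact configuration.

```python
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # A Home Assistant automation would react to readings like this one.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("192.168.1.10", 1883)            # hypothetical local broker, no cloud hop
client.subscribe("office/+/temperature")        # wildcard across office rooms
client.publish("office/room1/temperature", "22.5")
client.loop_forever()                           # blocking network loop
```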

11 pages, 3043 KB  
Proceeding Paper
IoT System for Catering Service in Hospitals
by Marcos Erazo-Perez, Juan Escobar-Naranjo and Ana Pamela Castro-Martin
Eng. Proc. 2025, 115(1), 18; https://doi.org/10.3390/engproc2025115018 - 15 Nov 2025
Viewed by 366
Abstract
In hospitals, IoT has facilitated connectivity between patients and medical services using historical health data. However, its adoption in hospital catering services has been slower. This work describes the implementation of an IoT system with a three-layer architecture: the first layer collects data on patient diets and environmental conditions from the food warehouse; the second layer processes this information, establishing rules and converting raw data into valuable information; and the third layer stores the data in the cloud and presents it in a web application. A functional system was obtained that meets the needs of catering service personnel and the hospital in which it was implemented.
(This article belongs to the Proceedings of The XXXIII Conference on Electrical and Electronic Engineering)

22 pages, 4967 KB  
Article
TreeHelper: A Wood Transport Authorization and Monitoring System
by Alexandru-Mihai Zvîncă, Sebastian-Ioan Petruc, Razvan Bogdan, Marius Marcu and Mircea Popa
Sensors 2025, 25(21), 6713; https://doi.org/10.3390/s25216713 - 3 Nov 2025
Viewed by 534
Abstract
This paper proposes TreeHelper, an IoT solution that aims to improve authorization and monitoring practices in order to help authorities act faster and save essential elements of the environment. It is composed of two main parts: a web platform and an edge AI device placed on the routes of tree logging trucks. The web platform is built using Spring Boot for the backend, React for the frontend, and PostgreSQL as the database. It allows transporters to request wood transport authorizations in a straightforward manner, while giving authorities the chance to review and decide upon these requests. The smart monitoring device consists of a Raspberry Pi for processing, a camera for capturing live video, a Coral USB Accelerator to accelerate model inference, and a SIM7600 4G HAT for communication and GPS data acquisition. The model used is YOLOv11n, trained on a custom dataset of tree logging truck images. Model inference is run on the frames of the live camera feed and, if a truck is detected, the frame is sent to a cloud ALPR service to extract the license plate number. Then, using the 4G connection, the license plate number is sent to the backend and a check for an associated authorization is performed. If nothing is found, the authorities are alerted through an SMS message containing the license plate number and the GPS coordinates, so they can act accordingly. Edge TPU acceleration approximately doubles TreeHelper’s throughput (from around 5 FPS average to above 10 FPS) and halves its mean inference latency (from around 200 ms average to under 100 ms) compared with CPU-only execution. It also improves p95 latency and lowers CPU temperature. The YOLOv11n model, trained on 1752 images, delivers high validation performance (precision = 0.948; recall = 0.944; strong mAP: mAP50 = 0.967; mAP50-95 = 0.668), allowing for real-time monitoring.
(This article belongs to the Section Internet of Things)
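
As a rough illustration of the detect-then-check loop described above: recognize_plate and alert_authorities are hypothetical stand-ins for the cloud ALPR call and the SMS alert, and the weights file name and API endpoint are assumptions, not the paper's actual interfaces.

```python
import cv2
import requests
from ultralytics import YOLO

def recognize_plate(frame) -> str:
    """Hypothetical wrapper around the cloud ALPR service."""
    return "TM-01-ABC"  # placeholder result

def alert_authorities(plate: str) -> None:
    """Hypothetical SMS alert carrying the plate and GPS coordinates."""
    print(f"ALERT: unauthorized transport, plate {plate}")

model = YOLO("yolo11n-trucks.pt")   # assumed name for the custom-trained weights
cap = cv2.VideoCapture(0)           # live camera feed

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)
    if len(results[0].boxes) > 0:   # a logging truck is in frame
        plate = recognize_plate(frame)
        r = requests.get("https://treehelper.example/api/authorizations",
                         params={"plate": plate}, timeout=10)
        if not r.json().get("authorized", False):
            alert_authorities(plate)
```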

37 pages, 12943 KB  
Article
Natural Disaster Information System (NDIS) for RPAS Mission Planning
by Robiah Al Wardah and Alexander Braun
Drones 2025, 9(11), 734; https://doi.org/10.3390/drones9110734 - 23 Oct 2025
Viewed by 821
Abstract
Today’s rapidly increasing number and performance of Remotely Piloted Aircraft Systems (RPASs) and sensors allow for an innovative approach in monitoring, mitigating, and responding to natural disasters and risks. At present, there are hundreds of different RPAS platforms and smaller, more affordable payload sensors. As natural disasters pose ever-increasing risks to society and the environment, it is imperative that these RPASs are utilized effectively. To exploit these advances, this study presents the development and validation of a Natural Disaster Information System (NDIS), a geospatial decision-support framework for RPAS-based natural hazard missions. The system integrates a global geohazard database with specifications of geophysical sensors and RPAS platforms to automate mission planning in a generalized form. NDIS v1.0 uses decision tree algorithms to select suitable sensors and platforms based on hazard type, distance to infrastructure, and survey feasibility. NDIS v2.0 introduces a Random Forest method and a Critical Path Method (CPM) to further optimize task sequencing and mission timing. The latest version, NDIS v3.8.3, implements a staggered decision workflow that sequentially maps hazard type and disaster stage to appropriate survey methods, sensor payloads, and compatible RPAS using rule-based and threshold-based filtering. RPAS selection considers payload capacity and range thresholds, adjusted dynamically by proximity, and ranks candidate platforms using hazard- and sensor-specific endurance criteria. The system is implemented using ArcGIS Pro 3.4.0, ArcGIS Experience Builder (2025 cloud release), and Azure Web App Services (Python 3.10 runtime). NDIS supports both batch processing and interactive real-time queries through a web-based user interface. Additional features include a statistical overview dashboard to help users interpret dataset distribution, and a crowdsourced input module that enables community-contributed hazard data via ArcGIS Survey123. NDIS is presented and validated in applications such as volcanic hazard monitoring in Indonesia. These capabilities make NDIS a scalable, adaptable, and operationally meaningful tool for multi-hazard monitoring and remote sensing mission planning.
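
The payload- and range-threshold filtering step might look like the following sketch; the platform specifications and the round-trip range rule are illustrative assumptions, not NDIS's actual values.

```python
from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    payload_kg: float
    range_km: float
    endurance_min: float

# Made-up platform specifications for illustration only.
PLATFORMS = [
    Platform("QuadA", payload_kg=1.2, range_km=8, endurance_min=30),
    Platform("FixedWingB", payload_kg=4.0, range_km=60, endurance_min=90),
]

def select_rpas(sensor_mass_kg: float, distance_km: float) -> list[Platform]:
    # Threshold filtering: enough payload capacity and round-trip range.
    candidates = [p for p in PLATFORMS
                  if p.payload_kg >= sensor_mass_kg and p.range_km >= 2 * distance_km]
    # Rank the remaining candidates, here simply by endurance.
    return sorted(candidates, key=lambda p: p.endurance_min, reverse=True)

print([p.name for p in select_rpas(sensor_mass_kg=2.5, distance_km=15)])
```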

36 pages, 2937 KB  
Review
IoT, AI, and Digital Twins in Smart Cities: A Systematic Review for a Thematic Mapping and Research Agenda
by Erwin J. Sacoto-Cabrera, Antonio Perez-Torres, Luis Tello-Oquendo and Mariela Cerrada
Smart Cities 2025, 8(5), 175; https://doi.org/10.3390/smartcities8050175 - 16 Oct 2025
Cited by 1 | Viewed by 5647
Abstract
The accelerating complexity of urban environments has prompted cities to adopt digital technologies that improve efficiency, sustainability, and resilience. Among these, Urban Digital Twins (UDTw) have emerged as transformative tools for real-time representation, simulation, and management of urban systems. This Systematic Literature Review (SLR) examines the integration of Digital Twins (DTw), the Internet of Things (IoT), and Artificial Intelligence (AI) into Smart City Development (SCD). Following the PSALSAR framework and PRISMA 2020 guidelines, 64 peer-reviewed articles from the IEEE Xplore, Association for Computing Machinery (ACM), Scopus, and Web of Science (WoS) digital libraries were analyzed using bibliometric and thematic methods via the Bibliometrix package in R. The review identifies key technological trends, such as edge–cloud architectures, 3D immersive visualization, Generative AI (GenAI), and blockchain, and classifies UDTw applications into five domains: traffic management, urban planning, environmental monitoring, energy systems, and public services. Persistent challenges are also outlined, including semantic interoperability, predictive modeling, data privacy, and impact evaluation. This study synthesizes the current state of the field through a clear thematic mapping and proposes a research agenda to align technical innovation with measurable urban outcomes, offering strategic insights for researchers, policymakers, and planners.

15 pages, 2861 KB  
Article
Sustainable Real-Time NLP with Serverless Parallel Processing on AWS
by Chaitanya Kumar Mankala and Ricardo J. Silva
Information 2025, 16(10), 903; https://doi.org/10.3390/info16100903 - 15 Oct 2025
Cited by 1 | Viewed by 993
Abstract
This paper proposes a scalable serverless architecture for real-time natural language processing (NLP) on large datasets using Amazon Web Services (AWS). The framework integrates AWS Lambda, Step Functions, and S3 to enable fully parallel sentiment analysis with Transformer-based models such as DistilBERT, RoBERTa, and ClinicalBERT. By containerizing inference workloads and orchestrating parallel execution, the system eliminates the need for dedicated servers while dynamically scaling to workload demand. Experimental evaluation on the IMDb Reviews dataset demonstrates substantial efficiency gains: parallel execution achieved a 6.07× reduction in wall-clock duration, an 81.2% reduction in total computing time and energy consumption, and a 79.1% reduction in variable costs compared to sequential processing. These improvements directly translate into a smaller carbon footprint, highlighting the sustainability benefits of serverless architectures for AI workloads. The findings show that the proposed framework is model-independent and provides consistent advantages across diverse Transformer variants. This work illustrates how cloud-native, event-driven infrastructures can democratize access to large-scale NLP by reducing cost, processing time, and environmental impact while offering a reproducible pathway for real-world research and industrial applications.
(This article belongs to the Special Issue Generative AI Transformations in Industrial and Societal Applications)
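
As a hedged sketch of the inference stage, the handler below wraps a DistilBERT sentiment pipeline in a Lambda-style entry point; the event shape is an assumption rather than the paper's actual interface. Loading the model at module scope lets warm invocations reuse it.

```python
import json
from transformers import pipeline

# Loaded once per container, so warm Lambda invocations reuse the model.
classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

def handler(event, context):
    # Assumed event shape: {"texts": ["...", ...]} fanned out by Step Functions.
    texts = event.get("texts", [])
    predictions = classifier(texts, truncation=True)
    return {"statusCode": 200, "body": json.dumps(predictions)}
```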

12 pages, 284 KB  
Article
AI-Enabled Secure and Scalable Distributed Web Architecture for Medical Informatics
by Marian Ileana, Pavel Petrov and Vassil Milev
Appl. Sci. 2025, 15(19), 10710; https://doi.org/10.3390/app151910710 - 4 Oct 2025
Viewed by 908
Abstract
Current medical informatics systems face critical challenges, including limited scalability across distributed institutions, insufficient real-time AI-driven decision support, and lack of standardized interoperability for heterogeneous medical data exchange. To address these challenges, this paper proposes a novel distributed web system architecture for medical informatics, integrating artificial intelligence techniques and cloud-based services. The system ensures interoperability via HL7 FHIR standards and preserves data privacy and fault tolerance across interconnected medical institutions. A hybrid AI pipeline combining principal component analysis (PCA), K-Means clustering, and convolutional neural networks (CNNs) is applied to diffusion tensor imaging (DTI) data for early detection of neurological anomalies. The architecture leverages containerized microservices orchestrated with Docker Swarm, enabling adaptive resource management and high availability. Experimental validation confirms reduced latency, improved system reliability, and enhanced compliance with medical data exchange protocols. Results demonstrate superior performance with an average latency of 94 ms, a diagnostic accuracy of 91.3%, and enhanced clinical workflow efficiency compared to traditional monolithic architectures. The proposed solution successfully addresses scalability limitations while maintaining data security and regulatory compliance across multi-institutional deployments. This work contributes to the advancement of intelligent, interoperable, and scalable e-health infrastructures aligned with the evolution of digital healthcare ecosystems.
(This article belongs to the Special Issue Data Science and Medical Informatics)
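
The PCA and K-Means stages of the hybrid pipeline can be sketched on placeholder data; the array shapes and hyperparameters here are illustrative, and the CNN stage is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Placeholder for flattened DTI-derived feature vectors (subjects x features).
X = np.random.rand(200, 4096)

X_reduced = PCA(n_components=32).fit_transform(X)   # dimensionality reduction
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_reduced)
print(np.bincount(labels))   # cluster sizes; a CNN stage would follow in the paper
```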

19 pages, 912 KB  
Article
Lightweight Embedded IoT Gateway for Smart Homes Based on an ESP32 Microcontroller
by Filippos Serepas, Ioannis Papias, Konstantinos Christakis, Nikos Dimitropoulos and Vangelis Marinakis
Computers 2025, 14(9), 391; https://doi.org/10.3390/computers14090391 - 16 Sep 2025
Cited by 1 | Viewed by 2995
Abstract
The rapid expansion of the Internet of Things (IoT) demands scalable, efficient, and user-friendly gateway solutions that seamlessly connect resource-constrained edge devices to cloud services. Low-cost, widely available microcontrollers, such as the ESP32 and its ecosystem peers, offer integrated Wi-Fi/Bluetooth connectivity, low power consumption, and a mature developer toolchain at a bill of materials cost of only a few dollars. For smart-home deployments where budgets, energy consumption, and maintainability are critical, these characteristics make MCU-class gateways a pragmatic alternative to single-board computers, enabling always-on local control with minimal overhead. This paper presents the design and implementation of an embedded IoT gateway powered by the ESP32 microcontroller. By using lightweight communication protocols such as Message Queuing Telemetry Transport (MQTT) and REST APIs, the proposed architecture supports local control, distributed intelligence, and secure on-site data storage, all while minimizing dependence on cloud infrastructure. A real-world deployment in an educational building demonstrates the gateway’s capability to monitor energy consumption, execute control commands, and provide an intuitive web-based dashboard with minimal resource overhead. Experimental results confirm that the solution offers strong performance, with RAM usage ranging between 3.6% and 6.8% of available memory (approximately 8.92 KB to 16.9 KB). The initial loading of the single-page application (SPA) results in a temporary RAM spike to 52.4%, which later stabilizes at 50.8%. These findings highlight the ESP32’s ability to serve as a functional IoT gateway with minimal resource demands. Areas for future optimization include improved device discovery mechanisms and enhanced resource management to prolong device longevity. Overall, the gateway represents a cost-effective and vendor-agnostic platform for building resilient and scalable IoT ecosystems.
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
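
A MicroPython sketch of the gateway's MQTT side is shown below; the Wi-Fi credentials, broker address, and topic names are placeholders, and the paper's firmware is not necessarily MicroPython-based.

```python
import network
import time
from umqtt.simple import MQTTClient

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("ssid", "password")        # placeholder credentials
while not wlan.isconnected():
    time.sleep(0.5)

def on_message(topic, msg):
    print(topic, msg)                   # e.g., dispatch a relay command here

client = MQTTClient("esp32-gateway", "192.168.1.10")  # hypothetical local broker
client.set_callback(on_message)
client.connect()
client.subscribe(b"home/+/command")

while True:
    client.check_msg()                  # non-blocking poll for incoming commands
    client.publish(b"home/gateway/heartbeat", b"ok")
    time.sleep(5)
```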

18 pages, 3408 KB  
Article
Enhancing Traditional Reactive Digital Forensics to a Proactive Digital Forensics Standard Operating Procedure (P-DEFSOP): A Case Study of DEFSOP and ISO 27035
by Hung-Cheng Yang, I-Long Lin and Yung-Hung Chao
Appl. Sci. 2025, 15(18), 9922; https://doi.org/10.3390/app15189922 - 10 Sep 2025
Cited by 1 | Viewed by 2582
Abstract
With the growing intensity of global cybersecurity threats and the rapid advancement of attack techniques, strengthening enterprise information and communication technology (ICT) infrastructures and enhancing digital forensics have become critical imperatives. Cloud environments, in particular, present substantial challenges due to the limited availability of effective forensic tools and the pressing demand for impartial and legally admissible digital evidence. To address these challenges, we propose a proactive digital forensics mechanism (P-DFM) designed for emergency incident management in enterprise settings. This mechanism integrates a range of forensic tools to identify and preserve critical digital evidence. It also incorporates the MITRE ATT&CK framework with Security Information and Event Management (SIEM) and Managed Detection and Response (MDR) systems to enable comprehensive and timely threat detection and analysis. The principal contribution of this study is the formulation of a novel Proactive Digital Evidence Forensics Standard Operating Procedure (P-DEFSOP), which enhances the accuracy and efficiency of security threat detection and forensic analysis while ensuring that digital evidence remains legally admissible. This advancement significantly reinforces the cybersecurity posture of enterprise networks. Our approach is systematically grounded in the Digital Evidence Forensics Standard Operating Procedure (DEFSOP) framework and complies with internationally recognized digital forensic standards, including ISO/IEC 27035 and ISO/IEC 27037, to ensure the integrity, reliability, validity, and legal admissibility of digital evidence throughout the forensic process. Given the complexity of cloud computing infrastructures—such as Chunghwa Telecom HiCloud, Amazon Web Services (AWS), Google Cloud, and Microsoft Azure—we underscore the critical importance of impartial and standardized digital forensic services in cloud-based environments.
(This article belongs to the Section Computing and Artificial Intelligence)

17 pages, 2598 KB  
Article
Evaluating the Performance Impact of Data Sovereignty Features on Data Spaces
by Stanisław Galij, Grzegorz Pawlak and Sławomir Grzyb
Appl. Sci. 2025, 15(17), 9841; https://doi.org/10.3390/app15179841 - 8 Sep 2025
Viewed by 940
Abstract
Data Spaces appear to offer a solution to data sovereignty concerns in public cloud environments, which are managed by third parties and must therefore be considered potentially untrusted. The IDS Connector, a key component of Data Space architecture, acts as a secure gateway, enforcing data sovereignty by controlling data usage and ensuring that data processing occurs within a trusted and verifiable environment. This study compares the performance of cloud-native data sharing services offered by major cloud providers—Amazon, Microsoft, and Google—with Data Space services delivered via two connector implementations: the Dataspace Connector and the Prometheus-X Dataspace Connector. An extensive set of experiments reveals significant differences in the performance of cloud-native managed services, as well as between connector implementations and hosting methods. The results indicate that the differences in the performance of data sharing services are unexpectedly substantial between providers, reaching up to 187%, and that the performance of different connector implementations also varies considerably, with an average difference of 56%. Consequently, the choice of cloud provider and Data Space Connector implementation has a major impact on the performance of the designed solution.
(This article belongs to the Section Computing and Artificial Intelligence)

10 pages, 724 KB  
Article
Real-Time Speech-to-Text on Edge: A Prototype System for Ultra-Low Latency Communication with AI-Powered NLP
by Stefano Di Leo, Luca De Cicco and Saverio Mascolo
Information 2025, 16(8), 685; https://doi.org/10.3390/info16080685 - 11 Aug 2025
Viewed by 6805
Abstract
This paper presents a real-time speech-to-text (STT) system designed for edge computing environments requiring ultra-low latency and local processing. Unlike cloud-based STT services, the proposed solution runs entirely on local infrastructure, which preserves user privacy and provides high performance in bandwidth-limited or offline scenarios. The designed system is based on browser-native audio capture through WebRTC, real-time streaming over WebSocket, and offline automatic speech recognition (ASR) using the Vosk engine. A natural language processing (NLP) component, implemented as a microservice, improves transcription results for spelling accuracy and clarity. Our prototype achieves sub-second end-to-end latency and strong transcription performance under realistic conditions. Furthermore, the modular architecture allows extensibility, integration of advanced AI models, and domain-specific adaptations.
(This article belongs to the Section Information Applications)
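
The offline ASR stage can be sketched with Vosk's Python bindings; the model path and WAV input are placeholders, whereas the actual system streams audio over WebSocket rather than reading a file.

```python
import json
import wave
from vosk import Model, KaldiRecognizer

model = Model("model")                 # path to an unpacked Vosk model directory
wf = wave.open("speech.wav", "rb")     # placeholder: 16 kHz mono PCM audio

rec = KaldiRecognizer(model, wf.getframerate())
while True:
    data = wf.readframes(4000)
    if not data:
        break
    if rec.AcceptWaveform(data):
        print(json.loads(rec.Result())["text"])    # finalized segment
print(json.loads(rec.FinalResult())["text"])       # flush remaining audio
```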

23 pages, 2029 KB  
Systematic Review
Exploring the Role of Industry 4.0 Technologies in Smart City Evolution: A Literature-Based Study
by Nataliia Boichuk, Iwona Pisz, Anna Bruska, Sabina Kauf and Sabina Wyrwich-Płotka
Sustainability 2025, 17(15), 7024; https://doi.org/10.3390/su17157024 - 2 Aug 2025
Cited by 1 | Viewed by 1597
Abstract
Smart cities are technologically advanced urban environments where interconnected systems and data-driven technologies enhance public service delivery and quality of life. These cities rely on information and communication technologies, the Internet of Things, big data, cloud computing, and other Industry 4.0 tools to support efficient city management and foster citizen engagement. Often referred to as digital cities, they integrate intelligent infrastructures and real-time data analytics to improve mobility, security, and sustainability. Ubiquitous sensors, paired with Artificial Intelligence, enable cities to monitor infrastructure, respond to residents’ needs, and optimize urban conditions dynamically. Given the increasing significance of Industry 4.0 in urban development, this study adopts a bibliometric approach to systematically review the application of these technologies within smart cities. Utilizing major academic databases such as Scopus and Web of Science, the research aims to identify the primary Industry 4.0 technologies implemented in smart cities, assess their impact on infrastructure, economic systems, and urban communities, and explore the challenges and benefits associated with their integration. The bibliometric analysis covered publications from 2016 to 2023, the period in which urban researchers’ interest in the technologies of the new industrial revolution emerged. The aim is to contribute to a deeper understanding of how smart cities evolve through the adoption of advanced technological frameworks. Research indicates that IoT and AI are the most commonly used tools in urban spaces, particularly in smart mobility and smart environments.

25 pages, 1842 KB  
Article
Optimizing Cybersecurity Education: A Comparative Study of On-Premises and Cloud-Based Lab Environments Using AWS EC2
by Adil Khan and Azza Mohamed
Computers 2025, 14(8), 297; https://doi.org/10.3390/computers14080297 - 22 Jul 2025
Cited by 2 | Viewed by 1934
Abstract
The increasing complexity of cybersecurity risks highlights the critical need for novel teaching techniques that provide students with the necessary skills and information. Traditional on-premises laboratory setups frequently lack the scalability, flexibility, and accessibility necessary for efficient training in today’s dynamic world. This study compares the efficacy of cloud-based solutions—specifically, Amazon Web Services (AWS) Elastic Compute Cloud (EC2)—against traditional settings like VirtualBox, with the goal of determining their potential to improve cybersecurity education. The study conducts systematic experimentation to compare lab environments based on parameters such as lab completion time, CPU and RAM use, and ease of access. The results show that AWS EC2 outperforms VirtualBox by shortening lab completion times, optimizing resource usage, and providing more remote accessibility. Additionally, the cloud-based strategy provides scalable, cost-effective implementation via a pay-per-use model, serving a wide range of pedagogical needs. These findings show that incorporating cloud technology into cybersecurity curricula can lead to more efficient, adaptable, and inclusive learning experiences, thereby boosting pedagogical methods in the field.
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)
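
Provisioning such a lab instance programmatically is a single call with boto3; the AMI ID, key pair, and region below are placeholders, not the study's actual configuration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder lab AMI (e.g., a hardened Linux image)
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    KeyName="student-lab-key",         # placeholder key pair
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Purpose", "Value": "cybersecurity-lab"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```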

32 pages, 3793 KB  
Systematic Review
Systematic Review: Malware Detection and Classification in Cybersecurity
by Sebastian Berrios, Dante Leiva, Bastian Olivares, Héctor Allende-Cid and Pamela Hermosilla
Appl. Sci. 2025, 15(14), 7747; https://doi.org/10.3390/app15147747 - 10 Jul 2025
Cited by 7 | Viewed by 9258
Abstract
Malicious Software, commonly known as Malware, represents a persistent threat to cybersecurity, targeting the confidentiality, integrity, and availability of information systems. The digital era, marked by the proliferation of connected devices, cloud services, and the advancement of machine learning, has brought numerous benefits; however, it has also exacerbated exposure to cyber threats, affecting both individuals and corporations. This systematic review, which follows the PRISMA 2020 framework, aims to analyze current trends and new methods for malware detection and classification. The review was conducted using data from Web of Science and Scopus, covering publications from 2020 to 2024, with 47 key studies selected for in-depth analysis based on relevance, empirical results, and citation metrics. These studies cover a variety of detection techniques, including machine learning, deep learning, and hybrid models, with a focus on feature extraction, malware behavior analysis, and the application of advanced algorithms to improve detection accuracy. The results highlight important advances, such as the improved performance of ensemble learning and deep learning models in detecting sophisticated threats. Finally, this study identifies the main challenges and outlines opportunities for future research to improve malware detection and classification frameworks.

26 pages, 9349 KB  
Article
Optical Remote Sensing for Global Flood Disaster Mapping: A Critical Review Towards Operational Readiness
by Molan Zhang, Zhiqiang Chen, Jun Wang, Bandana Kar, Marlon Pierce, Kristy Tiampo, Ronald Eguchi and Margaret Glasscoe
Remote Sens. 2025, 17(11), 1886; https://doi.org/10.3390/rs17111886 - 29 May 2025
Cited by 3 | Viewed by 3483
Abstract
Flood hazards and their disastrous consequences disrupt economic activity and threaten human lives globally. From a remote sensing perspective, since floods are often triggered by extreme climatic events, such as heavy rainstorms or tropical cyclones, the efficacy of using optical remote sensing data for disaster and damage mapping is significantly compromised. In many flood events, obtaining cloud-free images covering the affected area remains challenging. Nonetheless, considering that floods are the most frequent type of natural disaster on Earth, optical remote sensing data should be fully exploited. In this article, firstly, we will present a critical review of remote sensing data and machine learning methods for global flood-induced damage detection and mapping. We will primarily consider two types of remote sensing data: moderate-resolution multi-spectral data and high-resolution true-color or panchromatic data. Big and semantic databases available for advanced machine learning to date will be introduced. We will develop a set of best-use case scenarios for using these two data types to conduct water-body and built-up area mapping with no to moderate cloud coverage. We will cross-verify traditional machine learning and current deep learning methods and provide both benchmark databases and algorithms for the research community. Last, with this suite of data and algorithms, we will demonstrate the development of a cloud-computing-supported computing gateway, which houses the services of both our remote-sensing-based machine learning engine and a web-based user interface. Under this gateway, optical satellite data will be retrieved based on a global flood alerting system. Near-real-time pre- and post-event flood analytics are then showcased for end-user decision-making, providing insights such as the extent of severely flooded areas, an estimated number of affected buildings, and spatial trends of damage. In summary, this paper’s novel contributions include (1) a critical synthesis of operational readiness in flood mapping, (2) a multi-sensor-aware review of optical limitations, (3) the deployment of a lightweight ML pipeline for near-real-time mapping, and (4) a proposal of the GloFIM platform for field-level disaster support.
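
As one common index-based baseline for the water-body mapping step discussed above, an NDWI threshold over green and near-infrared reflectance takes only a few lines; the arrays and threshold here are placeholders.

```python
import numpy as np

# Placeholder reflectance bands from a moderate-resolution multispectral scene.
green = np.random.rand(512, 512).astype(np.float32)
nir = np.random.rand(512, 512).astype(np.float32)

# McFeeters NDWI = (Green - NIR) / (Green + NIR); epsilon avoids division by zero.
ndwi = (green - nir) / (green + nir + 1e-6)
water_mask = ndwi > 0.0        # simple global threshold, tuned per scene in practice
print(f"water fraction: {water_mask.mean():.2%}")
```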
