Search Results (89)

Search Parameters:
Keywords = Amazon web service

18 pages, 920 KB  
Article
A Controlled Comparative Evaluation of Infrastructure as Code Tools: Deployment Performance and Maintainability Across Terraform, Pulumi, and AWS CloudFormation
by Damir Regvart, Ivan Vlahović and Mislav Balković
Appl. Sci. 2026, 16(6), 2971; https://doi.org/10.3390/app16062971 - 19 Mar 2026
Viewed by 462
Abstract
Infrastructure as Code (IaC) underpins automated cloud provisioning in modern DevOps environments; however, controlled comparative evaluations of leading IaC tools under identical conditions remain limited. This study presents a controlled comparative evaluation of Terraform, Pulumi, and AWS CloudFormation within a standardized Amazon Web Services environment. An identical multi-tier architecture was implemented using each tool, and repeated deployment cycles were conducted to observe differences in provisioning duration, removal time, structural maintainability, and operational characteristics. Descriptive statistical analysis across 30 controlled repetitions indicates that Terraform and Pulumi achieve comparable deployment performance, whereas CloudFormation requires more than twice the average provisioning time under the conditions evaluated. Removal durations were similar across tools but remained longest for CloudFormation. Structural analysis reveals trade-offs between declarative modular design, programmatic flexibility, and native cloud integration. The study provides a controlled, comparative framework to support evidence-based selection of IaC tools in production-oriented cloud environments. Full article
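The headline finding above rests on descriptive statistics over repeated timed deployment cycles. A minimal sketch of that kind of analysis, with invented timing samples (illustrative values only, not the study's 30-repetition measurements):

```python
import statistics

# Hypothetical provisioning times in seconds over repeated runs.
# These numbers are made up for illustration, not the paper's data.
timings = {
    "terraform": [212, 208, 215, 210, 209],
    "pulumi": [218, 214, 220, 216, 215],
    "cloudformation": [455, 448, 460, 452, 450],
}

def summarize(samples):
    """Return mean and sample standard deviation for one tool's runs."""
    return statistics.mean(samples), statistics.stdev(samples)

for tool, samples in timings.items():
    mean, sd = summarize(samples)
    print(f"{tool}: mean={mean:.1f}s sd={sd:.1f}s")

# The abstract's "more than twice the average provisioning time" claim is
# a ratio of means like this one.
ratio = statistics.mean(timings["cloudformation"]) / statistics.mean(timings["terraform"])
print(f"CloudFormation / Terraform mean ratio: {ratio:.2f}")
```

With the sample values above, the ratio exceeds 2, mirroring the shape of the reported result.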

23 pages, 937 KB  
Systematic Review
Barriers to Accessing Cervical Cancer Screening and Treatment in the Amazon Region—A Systematic Review
by Marcia Helena Ribeiro de Oliveira, Sandra Lopes Aparício, José Antônio Cordero da Silva, Domingos Aires Leitão Neto, Sofia B. Nunes and Guilhermina Rêgo
J. Clin. Med. 2026, 15(3), 1206; https://doi.org/10.3390/jcm15031206 - 3 Feb 2026
Viewed by 627
Abstract
Background/Objectives: Unequal access to cervical cancer screening and treatment remains a significant contributor to preventable morbidity and mortality for women in the Amazon Basin, compounded by geographic, social and infrastructural barriers. Given the fragmented nature of the existing evidence, this systematic review aims to synthesize available findings on barriers to cervical cancer screening and treatment for this region. The implications of these findings are examined to inform the development of actionable strategies to improve equity in prevention and care. Methods: This systematic review was conducted in accordance with PRISMA 2020. Searches were conducted on 7 November 2025, in PubMed/MEDLINE, Web of Science, and SciELO, utilizing combinations of MeSH terms, keywords, and free-text expressions. Studies were considered eligible if they addressed barriers to cervical cancer screening or treatment among women living in the Amazon Region. Two reviewers independently screened the studies and extracted the relevant data. The risk of bias was assessed using the JBI checklists, the Newcastle–Ottawa Scale, and the MMAT. A narrative synthesis summarized the results. Results: Of 57 studies identified, 11 were included. Organizational and health-system barriers were reported most frequently, including scheduling difficulties, long wait times, a shortage of professionals, and equipment unavailability. Socioeconomic barriers were most often related to younger age, low income, limited schooling, and care-related expenses. Cultural factors were frequently linked to fear of the procedure and insufficient knowledge about cervical cancer. Geographic barriers included rural residence and travel difficulties. Conclusions: This systematic review indicates that disparities in cervical cancer screening in the Amazon region are primarily associated with organizational and health-system-related barriers, together with socioeconomic, cultural, and geographic factors. These findings highlight the need for equitable, multisectoral interventions to strengthen service organization, improve health literacy, and expand timely access to screening and treatment for underserved women. Full article

22 pages, 840 KB  
Article
A Comparative Evaluation of Snort and Suricata for Detecting Data Exfiltration Tunnels in Cloud Environments
by Mahmoud H. Qutqut, Ali Ahmed, Mustafa K. Taqi, Jordan Abimanyu, Erika Thea Ajes and Fatima Alhaj
J. Cybersecur. Priv. 2026, 6(1), 17; https://doi.org/10.3390/jcp6010017 - 8 Jan 2026
Viewed by 1822
Abstract
Data exfiltration poses a major cybersecurity challenge because it involves the unauthorized transfer of sensitive information. Intrusion Detection Systems (IDSs) are vital security controls in identifying such attacks; however, their effectiveness in cloud computing environments remains limited, particularly against covert channels such as Internet Control Message Protocol (ICMP) and Domain Name System (DNS) tunneling. This study compares two widely used IDSs, Snort and Suricata, in a controlled cloud computing environment. The assessment focuses on their ability to detect data exfiltration techniques implemented via ICMP and DNS tunneling, using DNSCat2 and Iodine. We evaluate detection performance using standard classification metrics, including Recall, Precision, Accuracy, and F1-Score. Our experiments were conducted on Amazon Web Services (AWS) Elastic Compute Cloud (EC2) instances, where IDS instances monitored simulated exfiltration traffic generated by DNSCat2, Iodine, and Metasploit. Network traffic was mirrored via AWS Virtual Private Cloud (VPC) Traffic Mirroring, with the ELK Stack integrated for centralized logging and visual analysis. The findings indicate that Suricata outperformed Snort in detecting DNS-based exfiltration, underscoring the advantages of multi-threaded architectures for managing high-volume cloud traffic. For DNS tunneling, Suricata achieved 100% detection (recall) for both DNSCat2 and Iodine, whereas Snort achieved 85.7% and 66.7%, respectively. Neither IDS detected ICMP tunneling using Metasploit, with both recording 0% recall. It is worth noting that both IDSs failed to detect ICMP tunneling under default configurations, highlighting the limitations of signature-based detection in isolation. These results emphasize the need to combine signature-based and behavior-based analytics, supported by centralized logging frameworks, to strengthen cloud-based intrusion detection and enhance forensic visibility. Full article
(This article belongs to the Special Issue Cloud Security and Privacy)
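The Recall, Precision, Accuracy, and F1 figures quoted above all follow from standard confusion-matrix counts. A small sketch with illustrative counts (not the paper's data — 6 of 7 tunnelling sessions flagged roughly matches the ~85.7% recall reported for Snort on DNSCat2):

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard classification metrics used to score IDS alerts."""
    recall = tp / (tp + fn)            # fraction of attacks caught
    precision = tp / (tp + fp)         # fraction of alerts that were real
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "precision": precision,
            "accuracy": accuracy, "f1": f1}

# Illustrative counts: 6 of 7 exfiltration sessions detected, no false
# alarms across 10 benign sessions.
m = detection_metrics(tp=6, fp=0, tn=10, fn=1)
print({k: round(v, 3) for k, v in m.items()})
```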

24 pages, 4230 KB  
Article
Cloud-Based sEMG Segmentation for Muscle Fatigue Monitoring: A Wavelet–Quantile Approach with Computational Cost Assessment
by Aura Polo, Mario Callejas Cabarcas, Lácides Antonio Ripoll Solano, Carlos Robles-Algarín and Omar Rodríguez-Álvarez
Technologies 2026, 14(1), 16; https://doi.org/10.3390/technologies14010016 - 25 Dec 2025
Viewed by 1126
Abstract
This paper presents the development and cloud deployment of a system for the segmentation of electromyographic (EMG) signals oriented toward muscle fatigue monitoring in the biceps and triceps. A dataset of 30 subjects was used, resulting in 120 EMG and gyroscope files containing between four and six strength exercise series each. After a quality assessment, approximately 80% of the signals (95 files) were classified as level 1 or 2 and considered suitable for segmentation and subsequent analysis. A near real-time segmentation algorithm was designed based on signal envelopes, sliding windows, and quantile thresholds, complemented with discrete wavelet transform (DWT) filtering. Using EMG alone, segmentation accuracy reached 83% for biceps and 54% for triceps; after incorporating DWT preprocessing, accuracy increased to 87.5% and 71%, respectively. By exploiting the gyroscope’s X-axis signal as a low-noise reference, the optimal configuration achieved an overall accuracy of 80%, with 83.3% for biceps and 76.2% for triceps. The prototype was deployed on Amazon Web Services (AWS) using EC2 instances and SQS queues, and its computational cost was evaluated across four server types. On a t2.micro instance, the maximum memory usage was approximately 219 MB with a dedicated CPU and a maximum processing time of 0.98 s per signal, demonstrating the feasibility of near real-time operation under conditions with limited resources. Full article
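The segmentation recipe described above (signal envelope, sliding windows, quantile thresholds) can be sketched in a few lines. Everything below — the RMS envelope, the window length, the chosen quantile, and the synthetic "EMG" signal — is an illustrative stand-in, not the authors' implementation:

```python
import math
import statistics

def moving_rms(signal, window):
    """Crude envelope: RMS over a trailing sliding window."""
    env = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - window + 1): i + 1]
        env.append(math.sqrt(sum(x * x for x in chunk) / len(chunk)))
    return env

def segment(envelope):
    """Mark samples whose envelope exceeds a quantile threshold as active,
    then collapse runs of active samples into (start, end) segments."""
    threshold = statistics.quantiles(envelope, n=4)[1]  # median as threshold
    active = [e > threshold for e in envelope]
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments

# Synthetic signal: quiet baseline with two bursts of alternating activity.
signal = [0.0] * 50 + [1.0, -1.0] * 25 + [0.0] * 50 + [0.8, -0.8] * 25 + [0.0] * 50
print(segment(moving_rms(signal, 10)))
```

The real pipeline additionally applies discrete wavelet transform filtering before thresholding, which is what lifted the reported accuracy.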

15 pages, 2861 KB  
Article
Sustainable Real-Time NLP with Serverless Parallel Processing on AWS
by Chaitanya Kumar Mankala and Ricardo J. Silva
Information 2025, 16(10), 903; https://doi.org/10.3390/info16100903 - 15 Oct 2025
Cited by 1 | Viewed by 1855
Abstract
This paper proposes a scalable serverless architecture for real-time natural language processing (NLP) on large datasets using Amazon Web Services (AWS). The framework integrates AWS Lambda, Step Functions, and S3 to enable fully parallel sentiment analysis with Transformer-based models such as DistilBERT, RoBERTa, and ClinicalBERT. By containerizing inference workloads and orchestrating parallel execution, the system eliminates the need for dedicated servers while dynamically scaling to workload demand. Experimental evaluation on the IMDb Reviews dataset demonstrates substantial efficiency gains: parallel execution achieved a 6.07× reduction in wall-clock duration, an 81.2% reduction in total computing time and energy consumption, and a 79.1% reduction in variable costs compared to sequential processing. These improvements directly translate into a smaller carbon footprint, highlighting the sustainability benefits of serverless architectures for AI workloads. The findings show that the proposed framework is model-independent and provides consistent advantages across diverse Transformer variants. This work illustrates how cloud-native, event-driven infrastructures can democratize access to large-scale NLP by reducing cost, processing time, and environmental impact while offering a reproducible pathway for real-world research and industrial applications. Full article
(This article belongs to the Special Issue Generative AI Transformations in Industrial and Societal Applications)
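The fan-out pattern described above — shard the dataset, score shards in parallel workers, merge the results — can be mimicked locally. The snippet below is a sketch under stated assumptions: the thread pool stands in for Lambda invocations, the keyword-based `score` function is a placeholder rather than a Transformer model, and the timings used for the speedup arithmetic are invented to match the shape of the reported 6.07× figure:

```python
from concurrent.futures import ThreadPoolExecutor

def score(text):
    """Placeholder sentiment scorer (a real deployment would call a model)."""
    return {"text": text, "label": "POSITIVE" if "good" in text else "NEGATIVE"}

def shard(items, n):
    """Split items into n roughly equal shards, one per parallel worker."""
    return [items[i::n] for i in range(n)]

reviews = ["good movie", "bad plot", "good cast", "dull pacing"]
with ThreadPoolExecutor(max_workers=2) as pool:
    batches = pool.map(lambda s: [score(t) for t in s], shard(reviews, 2))
    results = [r for batch in batches for r in batch]

# Wall-clock speedup is the ratio of sequential to parallel duration;
# the durations below are illustrative, not the paper's measurements.
sequential_s, parallel_s = 3642.0, 600.0
print(f"speedup: {sequential_s / parallel_s:.2f}x")
```

In the actual architecture the sharding and merging are orchestrated by Step Functions over S3 objects rather than an in-process thread pool.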

18 pages, 3408 KB  
Article
Enhancing Traditional Reactive Digital Forensics to a Proactive Digital Forensics Standard Operating Procedure (P-DEFSOP): A Case Study of DEFSOP and ISO 27035
by Hung-Cheng Yang, I-Long Lin and Yung-Hung Chao
Appl. Sci. 2025, 15(18), 9922; https://doi.org/10.3390/app15189922 - 10 Sep 2025
Cited by 2 | Viewed by 3739
Abstract
With the growing intensity of global cybersecurity threats and the rapid advancement of attack techniques, strengthening enterprise information and communication technology (ICT) infrastructures and enhancing digital forensics have become critical imperatives. Cloud environments, in particular, present substantial challenges due to the limited availability of effective forensic tools and the pressing demand for impartial and legally admissible digital evidence. To address these challenges, we propose a proactive digital forensics mechanism (P-DFM) designed for emergency incident management in enterprise settings. This mechanism integrates a range of forensic tools to identify and preserve critical digital evidence. It also incorporates the MITRE ATT&CK framework with Security Information and Event Management (SIEM) and Managed Detection and Response (MDR) systems to enable comprehensive and timely threat detection and analysis. The principal contribution of this study is the formulation of a novel Proactive Digital Evidence Forensics Standard Operating Procedure (P-DEFSOP), which enhances the accuracy and efficiency of security threat detection and forensic analysis while ensuring that digital evidence remains legally admissible. This advancement significantly reinforces the cybersecurity posture of enterprise networks. Our approach is systematically grounded in the Digital Evidence Forensics Standard Operating Procedure (DEFSOP) framework and complies with internationally recognized digital forensic standards, including ISO/IEC 27035 and ISO/IEC 27037, to ensure the integrity, reliability, validity, and legal admissibility of digital evidence throughout the forensic process. Given the complexity of cloud computing infrastructures—such as Chunghwa Telecom HiCloud, Amazon Web Services (AWS), Google Cloud, and Microsoft Azure—we underscore the critical importance of impartial and standardized digital forensic services in cloud-based environments. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

17 pages, 2598 KB  
Article
Evaluating the Performance Impact of Data Sovereignty Features on Data Spaces
by Stanisław Galij, Grzegorz Pawlak and Sławomir Grzyb
Appl. Sci. 2025, 15(17), 9841; https://doi.org/10.3390/app15179841 - 8 Sep 2025
Cited by 1 | Viewed by 1483
Abstract
Data Spaces appear to offer a solution to data sovereignty concerns in public cloud environments, which are managed by third parties and must therefore be considered potentially untrusted. The IDS Connector, a key component of Data Space architecture, acts as a secure gateway, enforcing data sovereignty by controlling data usage and ensuring that data processing occurs within a trusted and verifiable environment. This study compares the performance of cloud-native data sharing services offered by major cloud providers—Amazon, Microsoft, and Google—with Data Spaces services delivered via two connector implementations: the Dataspace Connector and the Prometheus-X Dataspace Connector. An extensive set of experiments reveals significant differences in the performance of cloud-native managed services, as well as between connector implementations and hosting methods. The results indicate that the differences in the performance of data sharing services are unexpectedly substantial between providers, reaching up to 187%, and that the performance of different connector implementations also varies considerably, with an average difference of 56%. This indicates that the choice of cloud provider and data space Connector implementation has a major impact on the performance of the designed solution. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

25 pages, 1842 KB  
Article
Optimizing Cybersecurity Education: A Comparative Study of On-Premises and Cloud-Based Lab Environments Using AWS EC2
by Adil Khan and Azza Mohamed
Computers 2025, 14(8), 297; https://doi.org/10.3390/computers14080297 - 22 Jul 2025
Cited by 2 | Viewed by 2847
Abstract
The increasing complexity of cybersecurity risks highlights the critical need for novel teaching techniques that provide students with the necessary skills and information. Traditional on-premises laboratory setups frequently lack the scalability, flexibility, and accessibility necessary for efficient training in today’s dynamic world. This study compares the efficacy of cloud-based solutions—specifically, Amazon Web Services (AWS) Elastic Compute Cloud (EC2)—against traditional settings like VirtualBox, with the goal of determining their potential to improve cybersecurity education. The study conducts systematic experimentation to compare lab environments based on parameters such as lab completion time, CPU and RAM use, and ease of access. The results show that AWS EC2 outperforms VirtualBox by shortening lab completion times, optimizing resource usage, and providing more remote accessibility. Additionally, the cloud-based strategy provides scalable, cost-effective implementation via a pay-per-use model, serving a wide range of pedagogical needs. These findings show that incorporating cloud technology into cybersecurity curricula can lead to more efficient, adaptable, and inclusive learning experiences, thereby boosting pedagogical methods in the field. Full article
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)

44 pages, 5183 KB  
Article
A Blockchain-Based Framework for Secure Data Stream Dissemination in Federated IoT Environments
by Jakub Sychowiec and Zbigniew Zieliński
Electronics 2025, 14(10), 2067; https://doi.org/10.3390/electronics14102067 - 20 May 2025
Viewed by 1577
Abstract
An industrial-scale increase in applications of the Internet of Things (IoT), a significant number of which are based on the concept of federation, presents unique security challenges due to their distributed nature and the need for secure communication between components from different administrative domains. A federation may be created for the duration of a mission, such as military operations or Humanitarian Assistance and Disaster Relief (HADR) operations. These missions often occur in very difficult or even hostile environments, posing additional challenges for ensuring reliability and security. The heterogeneity of devices, protocols, and security requirements in different domains further complicates the requirements for the secure distribution of data streams in federated IoT environments. The effective dissemination of data streams in federated environments also ensures the flexibility to filter and search for patterns in real-time to detect critical events or threats (e.g., fires and hostile objects) with changing information needs of end users. The paper presents a novel and practical framework for secure and reliable data stream dissemination in federated IoT environments, leveraging blockchain, Apache Kafka brokers, and microservices. To authenticate IoT devices and verify data streams, we have integrated a hardware and software IoT gateway with the Hyperledger Fabric (HLF) blockchain platform, which records the distinguishing features of IoT devices (fingerprints). In this paper, we analyzed our platform's security, focusing on secure data distribution. We formally discussed potential attack vectors and ways to mitigate them through the platform's design. We thoroughly assess the effectiveness of the proposed framework by conducting extensive performance tests in two setups: the Amazon Web Services (AWS) cloud-based and Raspberry Pi resource-constrained environments. Implementing our framework in the AWS cloud infrastructure has demonstrated that it is suitable for processing audiovisual streams in environments that require immediate interoperability. The results are promising, as the average time it takes for a consumer to read a verified data stream is in the order of seconds. The measured time for complete processing of an audiovisual stream corresponds to approximately 25 frames per second (fps). The results obtained also confirmed the computational stability of our framework. Furthermore, we have confirmed that our environment can be deployed on resource-constrained commercial off-the-shelf (COTS) platforms while maintaining low operational costs. Full article
(This article belongs to the Special Issue Feature Papers in "Computer Science & Engineering", 2nd Edition)

14 pages, 2383 KB  
Article
Performance Variability in Public Clouds: An Empirical Assessment
by Sanjay Ahuja, Victor H. Lopez Chalacan and Hugo Resendez
Information 2025, 16(5), 402; https://doi.org/10.3390/info16050402 - 14 May 2025
Viewed by 2153
Abstract
Cloud computing is now established as a viable alternative to on-premise infrastructure from both a system administration and cost perspective. This research provides insight into cluster computing performance and variability in cloud-provisioned infrastructure from two popular public cloud providers, Amazon Web Services (AWS) and Google Cloud Platform (GCP). In order to evaluate the performance variability between these two providers, synthetic benchmarks including Memory bandwidth (STREAM), Interleave or Random (IoR) performance, and Computational CPU performance by NAS Parallel Benchmarks-Embarrassingly Parallel (NPB-EP) were used. A comparative examination of the two cloud platforms is provided in the context of our research methodology and design. We conclude with a discussion of the results of the experiment and an assessment of the suitability of public cloud platforms for certain types of computing workloads. Both AWS and GCP have their strong points, and this study provides recommendations depending on user needs for high throughput and/or performance predictability across CPU, memory, and Input/Output (I/O). In addition, the study discusses other factors to help users decide between cloud vendors such as ease of use, documentation, and types of instances offered. Full article
(This article belongs to the Special Issue Performance Engineering in Cloud Computing)

21 pages, 8259 KB  
Article
A Cloud Computing Framework for Space Farming Data Analysis
by Adrian Genevie Janairo, Ronnie Concepcion, Marielet Guillermo and Arvin Fernando
AgriEngineering 2025, 7(5), 149; https://doi.org/10.3390/agriengineering7050149 - 8 May 2025
Cited by 2 | Viewed by 2472
Abstract
This study presents a system framework by which cloud resources are utilized to analyze crop germination status in a 2U CubeSat. This research aims to address the onboard computing constraints in nanosatellite missions to boost space agricultural practices. Through the Espressif Simple Protocol for Network-on-Wireless (ESP-NOW) technology, communication between ESP-32 modules was established. The corresponding sensor readings and image data were securely streamed through Amazon Web Service Internet of Things (AWS IoT) to an ESP-NOW receiver and Roboflow. Real-time plant growth predictor monitoring was implemented through the web application provisioned at the receiver end. On the other hand, sprouts on the germination bed were determined through the custom-trained Roboflow computer vision model. The feasibility of remote data computational analysis and monitoring for a 2U CubeSat, given its minute form factor, was successfully demonstrated through the proposed cloud framework. The germination detection model resulted in a mean average precision (mAP), precision, and recall of 99.5%, 99.9%, and 100.0%, respectively. The temperature, humidity, heat index, LED and Fogger states, and bed sprouts data were shown in real time through a web dashboard. With this use case, immediate actions can be performed accordingly when abnormalities occur. The scalable nature of the framework allows adaptation to various crops to support sustainable agricultural activities in extreme environments such as space farming. Full article

21 pages, 2085 KB  
Article
Edge vs. Cloud: Empirical Insights into Data-Driven Condition Monitoring
by Chikumbutso Christopher Walani and Wesley Doorsamy
Big Data Cogn. Comput. 2025, 9(5), 121; https://doi.org/10.3390/bdcc9050121 - 8 May 2025
Cited by 4 | Viewed by 4653
Abstract
This study evaluates edge and cloud computing paradigms in the context of data-driven condition monitoring of rotating electrical machines. Two well-known platforms, the Raspberry Pi and Amazon Web Services Elastic Compute Cloud, are used to compare and contrast these two computing paradigms in terms of different metrics associated with their application suitability. The tested induction machine fault diagnosis models are developed using popular algorithms, namely support vector machines, k-nearest neighbours, and decision trees. The findings reveal that while the cloud platform offers superior computational and memory resources, making it more suitable for complex machine learning tasks, it also incurs higher costs and latency. On the other hand, the edge platform excels in real-time processing and reduces network data burden, but its computational and memory resources are found to be a limitation with certain tasks. The study provides both quantitative and qualitative insights into the trade-offs involved in selecting the most suitable computing approach for condition monitoring applications. Although the scope of the empirical study is primarily limited to factors such as computational efficiency, scalability, and resource utilisation, particularly in the context of specific machine learning models, this paper offers broader discussion and future research directions of other key issues, including latency, network variability, and energy consumption. Full article
(This article belongs to the Special Issue Application of Cloud Computing in Industrial Internet of Things)
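Of the three algorithms the study compares, k-nearest neighbours is the simplest to illustrate. The from-scratch classifier below is a stand-in for the study's models, and the two-feature "vibration" values are invented for the sketch:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify query by majority vote among the k nearest training points."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy (rms, kurtosis) feature pairs for a machine's vibration signal --
# illustrative values only, not the study's dataset.
train = [((0.20, 3.0), "healthy"), ((0.25, 3.1), "healthy"),
         ((0.90, 7.5), "bearing_fault"), ((0.85, 7.0), "bearing_fault")]
print(knn_predict(train, (0.88, 7.2)))
```

The edge-vs-cloud question in the abstract is then about where this inference (and the heavier training step) runs: on the Raspberry Pi near the machine, or on an EC2 instance after shipping the features over the network.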

13 pages, 7086 KB  
Article
Amazon Web Service–Google Cross-Cloud Platform for Machine Learning-Based Satellite Image Detection
by David Pacios, Sara Ignacio-Cerrato, José Luis Vázquez-Poletti, Rafael Moreno-Vozmediano, Nikolaos Schetakis, Konstantinos Stavrakakis, Alessio Di Iorio, Jorge J. Gomez-Sanz and Luis Vazquez
Information 2025, 16(5), 381; https://doi.org/10.3390/info16050381 - 2 May 2025
Cited by 2 | Viewed by 1759
Abstract
Satellite image analysis is a critical component of Earth observation and satellite data analysis, providing detailed information on the effects of global events such as the COVID-19 pandemic. Cloud computing offers a flexible way to allocate resources and simplifies the management of infrastructure. In this study, we propose a cross-cloud system for ML-based satellite image detection, focusing on the financial and performance aspects of utilizing Amazon Web Service (AWS) Lambda and Amazon SageMaker for advanced machine learning tasks. Our system utilizes Google Apps Script (GAS) to create a web-based control panel, providing users with access to our AWS-hosted satellite detection models. Additionally, we utilize AWS to manage expenses through a strategic combination of Google Cloud and AWS, providing not only economic advantages, but also enhanced resilience. Furthermore, our approach capitalizes on the synergistic capabilities of AWS and Google Cloud to fortify our defenses against data loss and ensure operational resilience. Our goal is to demonstrate the effectiveness of a cloud environment in addressing complex and interdisciplinary challenges, particularly in the field of object analysis using spatial imagery. Full article
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)

26 pages, 6409 KB  
Article
Design of Rotors in Centrifugal Pumps Using the Topology Optimization Method and Parallel Computing in the Cloud
by Xavier Andrés Arcentales, Danilo Andrés Arcentales and Wilfredo Montealegre
Machines 2025, 13(4), 307; https://doi.org/10.3390/machines13040307 - 10 Apr 2025
Cited by 2 | Viewed by 1274
Abstract
Designing flow machines is challenging due to numerous free geometrical parameters. This work aims to develop a parallelized computational algorithm in MATLAB version R2020a to design the rotor of a radial flow in a centrifugal pump using the finite-element method (FEM), topology optimization method (TOM), and parallel cloud computing (bare-metal vs. virtual machine). The goal is to minimize a bi-objective function comprising energy dissipation and vorticity within half a rotor circumference. When only minimizing energy dissipation (wd = 1, wr = 0), the performance achieved is 5.88 Watts. Considering both energy dissipation and vorticity (wd = 0.8, wr = 0.2), the performance is 5.94 Watts. These topology results are then extended to a full 3D model using Ansys Fluent version 18.2 to validate the objective functions minimized by TOM. The algorithm is parallelized and executed on multiple CPU cores in the cloud on two different platforms: Amazon Web Services (virtual machine) and Equinix (bare-metal machine), to accelerate the blade design process. In conclusion, mathematical optimization tools aid engineering designers in achieving non-intuitive designs and enhancing results. Full article
(This article belongs to the Section Machine Design and Theory)

18 pages, 5165 KB  
Article
YOLOv5-Based Electric Scooter Crackdown Platform
by Seung-Hyun Lee, Sung-Hyun Oh and Jeong-Gon Kim
Appl. Sci. 2025, 15(6), 3112; https://doi.org/10.3390/app15063112 - 13 Mar 2025
Cited by 1 | Viewed by 1877
Abstract
As the use of personal mobility (PM) devices continues to rise, regulatory violations have become more frequent, highlighting the need for technological solutions to ensure efficient enforcement. This study addresses these challenges by proposing an AI-based enforcement platform. The system integrates the You Only Look Once version 5 (YOLOv5) object detection model, a deep-learning-based framework, with Global Positioning System (GPS) location data, Raspberry Pi 5, and Amazon Web Services (AWS) for data processing and web-based implementation. The YOLOv5 model was deployed in two configurations: one for detecting electric scooter usage and another for identifying legal violations. The system utilized AWS Relational Database Service (RDS), Simple Storage Service (S3), and Elastic Compute Cloud (EC2) to store violation records and host web applications. The detection performance was evaluated using mean average precision (mAP) metrics. The electric scooter detection model achieved mAP50 and mAP50-95 scores of 99.5 and 99.457, respectively. Meanwhile, the legal violation detection model attained mAP50 and mAP50-95 scores of 99.5 and 81.813, indicating relatively lower accuracy for fine-grained violation detection. This study presents a practical technological platform for monitoring regulatory compliance and automating fine enforcement for shared electric scooters. Future improvements in object detection accuracy and real-time processing capabilities are expected to enhance the system’s overall reliability. Full article
(This article belongs to the Special Issue Applied Artificial Intelligence and Data Science)
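The mAP50 scores quoted above count a detection as a true positive when its intersection-over-union (IoU) with the ground-truth box reaches 0.5, while mAP50-95 averages over thresholds from 0.5 to 0.95. A minimal sketch of the standard IoU computation (the boxes are made-up examples, not the platform's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted scooter box vs. its ground-truth box: this pair clears the
# 0.5 threshold, so mAP50 would count it as a correct detection.
pred, truth = (10, 10, 50, 50), (15, 15, 55, 55)
print(round(iou(pred, truth), 3))
```

The gap between the model's mAP50 and mAP50-95 scores (99.5 vs. 81.813 for violations) reflects how many of its boxes pass the loose 0.5 threshold but miss the tighter ones.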
