
Search Results (197)

Search Parameters:
Keywords = Docker

21 pages, 4706 KB  
Article
Near-Real-Time Integration of Multi-Source Seismic Data
by José Melgarejo-Hernández, Paula García-Tapia-Mateo, Juan Morales-García and Jose-Norberto Mazón
Sensors 2026, 26(2), 451; https://doi.org/10.3390/s26020451 - 9 Jan 2026
Viewed by 89
Abstract
The reliable and continuous acquisition of seismic data from multiple open sources is essential for real-time monitoring, hazard assessment, and early-warning systems. However, the heterogeneity among existing data providers such as the United States Geological Survey, the European-Mediterranean Seismological Centre, and the Spanish National Geographic Institute creates significant challenges due to differences in formats, update frequencies, and access methods. To overcome these limitations, this paper presents a modular and automated framework for the scheduled near-real-time ingestion of global seismic data using open APIs and semi-structured web data. The system, implemented using a Docker-based architecture, automatically retrieves, harmonizes, and stores seismic information from heterogeneous sources at regular intervals using a cron-based scheduler. Data are standardized into a unified schema, validated to remove duplicates, and persisted in a relational database for downstream analytics and visualization. The proposed framework adheres to the FAIR data principles by ensuring that all seismic events are uniquely identifiable, source-traceable, and stored in interoperable formats. Its lightweight and containerized design enables deployment as a microservice within emerging data spaces and open environmental data infrastructures. Experimental validation was conducted using a two-phase evaluation. This evaluation consisted of a high-frequency 24 h stress test and a subsequent seven-day continuous deployment under steady-state conditions. The system maintained stable operation with 100% availability across all sources, successfully integrating 4533 newly published seismic events during the seven-day period and identifying 595 duplicated detections across providers. These results demonstrate that the framework provides a robust foundation for the automated integration of multi-source seismic catalogs. 
This integration supports the construction of more comprehensive and globally accessible earthquake datasets for research and near-real-time applications. By enabling automated and interoperable integration of seismic information from diverse providers, this approach supports the construction of more comprehensive and globally accessible earthquake catalogs, strengthening data-driven research and situational awareness across regions and institutions worldwide. Full article
(This article belongs to the Special Issue Advances in Seismic Sensing and Monitoring)
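The scheduled, duplicate-aware ingestion step described in the abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions: a simplified unified schema, SQLite as the relational store, and "same event ID" as the duplicate criterion; the paper's actual schema and deduplication logic are not shown here.

```python
import sqlite3

def make_store():
    # Unified schema (illustrative): one row per uniquely identifiable,
    # source-traceable seismic event.
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE events (
        event_id TEXT PRIMARY KEY,
        source   TEXT, magnitude REAL, origin_time TEXT)""")
    return db

def ingest(db, source, records):
    """Map one provider's records onto the unified schema; skip duplicates."""
    new = 0
    for r in records:
        cur = db.execute(
            "INSERT OR IGNORE INTO events VALUES (?, ?, ?, ?)",
            (r["id"], source, r["mag"], r["time"]))
        new += cur.rowcount  # 0 when the event was already stored
    return new

db = make_store()
usgs = [{"id": "ev1", "mag": 4.2, "time": "2026-01-09T00:01Z"}]
emsc = [{"id": "ev1", "mag": 4.3, "time": "2026-01-09T00:01Z"},  # duplicate detection
        {"id": "ev2", "mag": 3.1, "time": "2026-01-09T00:05Z"}]
added = ingest(db, "USGS", usgs) + ingest(db, "EMSC", emsc)
```

In the paper's framework this step would run under a cron-based scheduler inside a Docker container; here a single in-memory call stands in for one scheduled cycle.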

25 pages, 692 KB  
Article
Decentralized Dynamic Heterogeneous Redundancy Architecture Based on Raft Consensus Algorithm
by Ke Chen and Leyi Shi
Future Internet 2026, 18(1), 20; https://doi.org/10.3390/fi18010020 - 1 Jan 2026
Viewed by 206
Abstract
Dynamic heterogeneous redundancy (DHR) architectures combine heterogeneity, redundancy, and dynamism to create security-centric frameworks that can be used to mitigate network attacks that exploit unknown vulnerabilities. However, conventional DHR architectures rely on centralized control modules for scheduling and adjudication, leading to significant single-point failure risks and trust bottlenecks that severely limit their deployment in security-critical scenarios. To address these challenges, this paper proposes a decentralized DHR architecture based on the Raft consensus algorithm. It deeply integrates the Raft consensus mechanism with the DHR execution layer to build a consensus-centric control plane and designs a dual-log pipeline to ensure all security-critical decisions are executed only after global consistency via Raft. Furthermore, we define a multi-dimensional attacker model—covering external, internal executor, internal node, and collaborative Byzantine adversaries—to analyze the security properties and explicit defense boundaries of the architecture under Raft’s crash-fault-tolerant assumptions. To assess the effectiveness of the proposed architecture, a prototype consisting of five heterogeneous nodes was developed for thorough evaluation. The experimental results show that, for non-Byzantine external and internal attacks, the architecture achieves high detection and isolation rates, maintains high availability, and ensures state consistency among non-malicious nodes. For stress tests in which a minority of nodes exhibit Byzantine-like behavior, our prototype preserves log consistency and prevents incorrect state commitments; however, we explicitly treat these as empirical observations under a restricted adversary rather than a general Byzantine fault tolerance guarantee. Performance testing revealed that the system exhibits strong security resilience in attack scenarios, with manageable performance overhead. 
Instead of turning Raft into a Byzantine-fault-tolerant consensus protocol, the proposed architecture preserves Raft’s crash-fault-tolerant guarantees at the consensus layer and achieves Byzantine-resilient behavior at the execution layer through heterogeneous redundant executors and majority-hash validation. To support evaluation during peer review, we provide a runnable prototype package containing Docker-based deployment scripts, pre-built heterogeneous executors, and Raft control-plane images, enabling reviewers to observe and assess the representative architectural behaviors of the system under controlled configurations without exposing the internal source code. The complete implementation will be made available after acceptance in accordance with institutional IP requirements, without affecting the scope or validity of the current evaluation. Full article
(This article belongs to the Section Cybersecurity)
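The majority-hash validation idea at the execution layer can be sketched as follows: each heterogeneous executor computes the same request independently, and the output whose hash wins a majority is accepted. The executor outputs and quorum rule here are toy stand-ins, not the paper's actual adjudication protocol.

```python
import hashlib
from collections import Counter

def adjudicate(outputs, quorum):
    """Accept the output whose hash is returned by at least `quorum` executors."""
    digests = [hashlib.sha256(o.encode()).hexdigest() for o in outputs]
    digest, votes = Counter(digests).most_common(1)[0]
    if votes < quorum:
        return None  # no majority: treat as suspected compromise
    return outputs[digests.index(digest)]

# Five heterogeneous executors, one behaving Byzantine (tampered result).
outs = ["ok:42", "ok:42", "ok:42", "ok:42", "ok:666"]
result = adjudicate(outs, quorum=3)
```

A majority of honest, functionally equivalent executors masks the divergent one; in the architecture above, such an accepted decision would then be committed only after Raft-level consensus.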

30 pages, 2499 KB  
Article
Enhancing IoT Common Service Functions with Blockchain: From Analysis to Standards-Based Prototype Implementation
by Jiho Lee, Jieun Lee, Zehua Wang and JaeSeung Song
Electronics 2026, 15(1), 123; https://doi.org/10.3390/electronics15010123 - 26 Dec 2025
Cited by 1 | Viewed by 242
Abstract
The proliferation of Internet of Things (IoT) applications in safety-critical domains, such as healthcare, smart transportation, and industrial automation, demands robust solutions for data integrity, traceability, and security that surpass the capabilities of centralized databases. This paper analyzes how blockchain technology can be integrated with core IoT service functions—including data management, security, device management, group coordination, and automated billing—to enhance immutability, trust, and operational efficiency. Our analysis identifies practical use cases such as consensus-driven tamper-proof storage, role-based access control, firmware integrity verification, and automated micropayments. These use cases showcase blockchain’s potential beyond traditional data storage. Building on this, we propose a novel framework that integrates a permissioned distributed ledger with a standardized IoT service layer platform through a Blockchain Interworking Proxy Entity (BlockIPE). This proxy dynamically maps IoT service functions to smart contracts, enabling flexible data routing to conventional databases or blockchains based on the application requirements. We implement a Dockerized prototype that integrates a C-based oneM2M platform with an Ethereum-compatible permissioned ledger (implemented using Hyperledger Besu) via BlockIPE, incorporating security features such as role-based access control. For performance evaluation, we use Ganache to isolate proxy-level overhead and scalability. At the proxy level, the blockchain-integrated path achieves processing latencies (≈86 ms) comparable to, and slightly faster than, the traditional database path. Although the end-to-end latency is inherently governed by on-chain confirmation (≈0.586–1.086 s), the scalability remains high (up to 100,000 TPS). This validates that the architecture secures IoT ecosystems with manageable operational overhead. Full article
(This article belongs to the Special Issue Blockchain Technologies: Emerging Trends and Real-World Applications)
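The proxy's routing decision (database path vs. blockchain path) can be sketched as below. The routing rule, field names, and the two in-memory backends are illustrative assumptions; BlockIPE's real mapping of IoT service functions to smart contracts is far richer.

```python
class BlockIPE:
    """Toy stand-in for the Blockchain Interworking Proxy Entity."""

    def __init__(self):
        self.database = []  # stands in for a conventional database
        self.ledger = []    # stands in for a permissioned blockchain

    def store(self, resource):
        # Assumed rule: integrity-critical resources go on-chain,
        # everything else to the fast database path.
        if resource.get("integrity_critical"):
            self.ledger.append(resource)
            return "ledger"
        self.database.append(resource)
        return "database"

proxy = BlockIPE()
r1 = proxy.store({"sensor": "temp", "value": 21.5})
r2 = proxy.store({"firmware_hash": "ab12", "integrity_critical": True})
```

This mirrors the abstract's point that end-to-end latency is governed by on-chain confirmation only for the requests that actually need the ledger.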

19 pages, 5507 KB  
Article
RoboDeploy: A Metamodel-Driven Framework for Automated Multi-Host Docker Deployment of ROS 2 Systems in IoRT Environments
by Miguel Ángel Barcelona, Laura García-Borgoñón, Pablo Torner and Ariadna Belén Ruiz
Software 2026, 5(1), 1; https://doi.org/10.3390/software5010001 - 19 Dec 2025
Viewed by 281
Abstract
Robotic systems increasingly operate in complex and distributed environments, where software deployment and orchestration pose major challenges. This paper presents a model-driven approach that automates the containerized deployment of robotic systems in Internet of Robotic Things (IoRT) environments. Our solution integrates Model-Driven Engineering (MDE) with containerization technologies to improve scalability, reproducibility, and maintainability. A dedicated metamodel introduces high-level abstractions for describing deployment architectures, repositories, and container configurations. A web-based tool enables collaborative model editing, while an external deployment automator generates validated Docker and Compose artifacts to support seamless multi-host orchestration. We validated the approach through real-world experiments, which show that the method effectively automates deployment workflows, ensures consistency across development and production environments, and significantly reduces configuration effort. These results demonstrate that model-driven automation can bridge the gap between Software Engineering (SE) and robotics, enabling Software-Defined Robotics (SDR) and supporting scalable IoRT applications. Full article
(This article belongs to the Topic Software Engineering and Applications)
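The model-to-artifact generation step can be sketched as a tiny renderer: a deployment model for one host is turned into a docker-compose fragment. The model fields and emitted keys are simplified assumptions, not RoboDeploy's actual metamodel or generated output.

```python
# Illustrative deployment model: two ROS 2 services on one host.
model = [
    {"name": "lidar_driver", "image": "ros:humble"},
    {"name": "planner", "image": "ros:humble"},
]

def render_compose(services):
    """Render a minimal docker-compose 'services:' fragment."""
    lines = ["services:"]
    for s in services:
        lines.append(f"  {s['name']}:")
        lines.append(f"    image: {s['image']}")
        lines.append("    network_mode: host")  # a common choice for ROS 2 DDS discovery
    return "\n".join(lines)

compose = render_compose(model)
```

The value of the metamodel-driven approach is that such artifacts are generated and validated from one high-level model per host, rather than hand-written for each machine.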

22 pages, 2236 KB  
Article
An AI-Driven System for Learning MQTT Communication Protocols with Python Programming
by Zihao Zhu, Nobuo Funabiki, Htoo Htoo Sandi Kyaw, I Nyoman Darma Kotama, Anak Agung Surya Pradhana, Alfiandi Aulia Rahmadani and Noprianto
Electronics 2025, 14(24), 4967; https://doi.org/10.3390/electronics14244967 - 18 Dec 2025
Viewed by 393
Abstract
With the rapid development of wireless communication and Internet of Things (IoT) technologies, an increasing number of devices and sensors are interconnected, generating massive amounts of data in real time. Among the underlying protocols, Message Queuing Telemetry Transport (MQTT) has become a widely adopted lightweight publish–subscribe standard due to its simplicity, minimal overhead, and scalability. Understanding such protocols is therefore essential for students and engineers engaged in IoT application system design. However, teaching and learning MQTT remains challenging. Its asynchronous architecture, hierarchical topic structure, and constituent concepts such as retained messages, Quality of Service (QoS) levels, and wildcard subscriptions are often difficult for beginners. Moreover, traditional learning resources emphasize theory and provide limited hands-on guidance, leading to a steep learning curve. To address these challenges, we propose an AI-assisted, exercise-based learning platform for MQTT. This platform provides interactive exercises with intelligent feedback to bridge the gap between theory and practice. To lower the barrier for learners, all code examples for executing MQTT communication are implemented in Python for readability, and Docker is used to ensure portable deployment of the MQTT broker and AI assistant. For evaluation, we conducted a usability study with two groups. The first group, which had no prior experience, focused on fundamental concepts with AI-guided exercises. The second group, which had a relevant background, engaged in advanced projects to apply and reinforce their knowledge. The results show that the proposed platform supports learners at different levels, reduces frustration, and improves both engagement and efficiency. Full article
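One of the concepts the abstract singles out as hard for beginners, wildcard subscriptions, fits in a few lines of Python. This is a minimal matcher for MQTT's topic-filter rules (`+` matches exactly one level, `#` matches all remaining levels), written from the protocol's published semantics rather than taken from the paper's platform.

```python
def topic_matches(pattern, topic):
    """Minimal MQTT topic-filter matcher: '+' = one level, '#' = the rest."""
    p, t = pattern.split("/"), topic.split("/")
    for i, level in enumerate(p):
        if level == "#":
            return True          # '#' absorbs all remaining levels
        if i >= len(t):
            return False         # topic ran out of levels
        if level != "+" and level != t[i]:
            return False
    return len(p) == len(t)      # no wildcard left: lengths must agree
```

For example, `home/+/temp` matches `home/kitchen/temp` but not `home/kitchen/hum`, while `home/#` matches every topic under `home`.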

20 pages, 2501 KB  
Article
Field-Deployable Kubernetes Cluster for Enhanced Computing Capabilities in Remote Environments
by Teodor-Mihail Giurgică, Annamaria Sârbu, Bernd Klauer and Liviu Găină
Appl. Sci. 2025, 15(24), 12991; https://doi.org/10.3390/app152412991 - 10 Dec 2025
Viewed by 481
Abstract
This paper presents a portable cluster architecture based on a lightweight Kubernetes distribution designed to provide enhanced computing capabilities in isolated environments. The architecture is validated in two operational scenarios: (1) machine learning operations (MLOps) for on-site learning, fine-tuning and retraining of models and (2) web hosting for isolated or resource-constrained networks, providing resilient service delivery through failover and load balancing. The cluster leverages low-cost Raspberry Pi 4B units and virtualized nodes, integrated with Docker containerization, Kubernetes orchestration, and Kubeflow-based workflow optimization. System monitoring with Prometheus and Grafana offers continuous visibility into node health, workload distribution, and resource usage, supporting early detection of operational issues within the cluster. The results show that the proposed dual-mode cluster can function as a compact, field-deployable micro-datacenter, enabling both real-time Artificial Intelligence (AI) operations and resilient web service delivery in field environments where autonomy and reliability are critical. In addition to performance and availability measurements, power consumption, scalability bottlenecks, and basic security aspects were analyzed to assess the feasibility of such a platform under constrained conditions. Limitations are discussed, and future work includes scaling the cluster, evaluating GPU/TPU-enabled nodes, and conducting field tests in realistic tactical environments. Full article
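The failover behavior in the web-hosting scenario reduces to a simple idea: route each request to the first replica that currently reports healthy. Kubernetes realizes this with readiness probes and Services; the sketch below is a deliberately naive stand-in with a synthetic node list, not the cluster's actual mechanism.

```python
def route(nodes, healthy):
    """Return the first node reporting healthy; fail loudly if none do."""
    for node in nodes:
        if healthy.get(node):
            return node
    raise RuntimeError("no healthy replica available")

nodes = ["pi-node-1", "pi-node-2", "pi-node-3"]   # hypothetical Raspberry Pi nodes
status = {"pi-node-1": False, "pi-node-2": True, "pi-node-3": True}
target = route(nodes, status)
```

In the cluster described above, Prometheus-collected health data would drive the `status` map continuously instead of a hard-coded dictionary.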

18 pages, 693 KB  
Article
A Data Rate Monitoring Approach for Cyberattack Detection in Digital Twin Communication
by Cláudio Rodrigues, Waldir S. S. Júnior, Wilson Oliveira and Isomar Lima
Sensors 2025, 25(24), 7476; https://doi.org/10.3390/s25247476 - 9 Dec 2025
Viewed by 562
Abstract
The growing integration of Digital Twins (DTs) in Industry 4.0 environments exposes the physical–virtual communication layer as a critical vector for cyber vulnerabilities; while most studies focus on complex and resource-intensive security mechanisms, this work demonstrates that the inherently predictable nature of DT communications allows simple statistical metrics—such as the μ+3σ threshold—to provide robust, interpretable, and computationally efficient anomaly detection. Using a Docker-based simulation, we emulate Denial-of-Service (DoS), Man-in-the-Middle (MiTM), and intrusion attacks, showing that each generates a distinct statistical signature (e.g., a 50-fold increase in packet rate during DoS). The results confirm that data rate monitoring offers a viable, non-intrusive, and cost-effective first line of defense, thereby enhancing the resilience of IIoT-based Digital Twins. Full article
(This article belongs to the Special Issue Reliable Autonomics and the Internet of Things)
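The μ+3σ detector from the abstract is simple enough to show directly: learn a baseline from normal packet-rate samples, then flag any rate above mean + 3·stdev. The sample values below are synthetic; only the threshold rule comes from the paper.

```python
import statistics

def fit_threshold(baseline_rates):
    """Return the mu + 3*sigma anomaly threshold for a baseline sample."""
    mu = statistics.mean(baseline_rates)
    sigma = statistics.pstdev(baseline_rates)
    return mu + 3 * sigma

# Synthetic packets/s measurements under normal digital-twin traffic.
baseline = [100, 98, 103, 101, 99, 102, 100, 97]
threshold = fit_threshold(baseline)

def is_anomalous(rate):
    return rate > threshold

dos_rate = 5000  # roughly the 50-fold increase reported for the DoS scenario
```

Because DT communication is highly regular, even this tiny model separates attack signatures cleanly: a 50-fold rate jump is orders of magnitude beyond the learned threshold.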

35 pages, 2154 KB  
Article
Real-Time Digital Twins for Building Energy Optimization Through Blind Control: Functional Mock-Up Units, Docker Container-Based Simulation, and Surrogate Models
by Cristina Nuevo-Gallardo, Iker Landa del Barrio, Markel Flores Iglesias, Juan B. Echeverría Trueba and Carlos Fernández Bandera
Appl. Sci. 2025, 15(24), 12888; https://doi.org/10.3390/app152412888 - 6 Dec 2025
Viewed by 644
Abstract
The transition toward energy-efficient and smart buildings requires Digital Twins (DTs) that can couple real-time data with physics-based Building Energy Models (BEMs) for predictive and adaptive operation. Yet, despite rapid digitalisation, there remains a lack of practical guidance and real-world implementations demonstrating how calibrated BEMs can be effectively integrated into Building Management Systems (BMSs). This study addresses that gap by presenting a complete and reproducible end-to-end framework for embedding physics-based BEMs into operational DTs using two setups: (i) encapsulation as Functional Mock-up Units (FMUs) and (ii) containerisation via Docker. Both approaches were deployed and tested in a real educational building in Cáceres (Spain), equipped with a LoRaWAN-based sensing and actuation infrastructure. A systematic comparison highlights their respective trade-offs: FMUs offer faster execution but limited weather inputs and higher implementation effort, whereas Docker-based workflows provide full portability, scalability, and native interoperability with Internet of Things (IoT) and BMS architectures. To enable real-time operation, a surrogate modelling framework was embedded within the Docker architecture to replicate the optimisation logic of the calibrated BEM and generate predictive blind control schedules in milliseconds—bypassing simulation overhead and enabling continuous actuation. The combined Docker + surrogate setup achieved 10–15% heating energy savings during winter operation without any HVAC retrofit. Beyond the case study, this work provides a step-by-step, in-depth guideline for practitioners to integrate calibrated BEMs into real-time control loops using existing toolchains. The proposed approach demonstrates how hybrid physics- and data-driven DTs can transform building management into a scalable, energy-efficient, and operationally deployable reality. Full article
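The surrogate idea is that a cheap learned mapping replaces the slow physics simulation, so blind-control decisions take milliseconds. The piecewise rule below is an invented stand-in for the trained surrogate, purely to show the interface shape: sensed inputs in, an actuation schedule value out.

```python
def surrogate_blind_position(irradiance_w_m2, indoor_temp_c):
    """Return a blind opening fraction in [0, 1] (illustrative rule only)."""
    if indoor_temp_c < 20:
        # Heating season: open blinds to harvest solar gains.
        return 1.0 if irradiance_w_m2 > 100 else 0.5
    # Warm interior: shade against overheating under strong sun.
    return 0.2 if irradiance_w_m2 > 400 else 0.8

pos = surrogate_blind_position(irradiance_w_m2=600, indoor_temp_c=18)
```

In the Docker-based setup above, such a function would be called on every control cycle with live LoRaWAN sensor readings, bypassing the BEM simulation entirely.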

15 pages, 8022 KB  
Article
Evaluation of HLA Region-Specific High-Throughput Sequencing FASTQ Reads Combined with Ensemble HLA-Typing Tools for Rapid and High-Confidence HLA Typing
by Vijay G. Padul, Mini Gill, Jesus A. Perez, Javier J. Lopez, Santosh Kesari and Shashaanka Ashili
Biology 2025, 14(12), 1717; https://doi.org/10.3390/biology14121717 - 1 Dec 2025
Viewed by 558
Abstract
Background: Accurate human leukocyte antigen (HLA) genotyping is a critical step in the implementation of neoantigen peptide-based cancer immunotherapy. Existing computational tools for HLA genotyping using high-throughput sequencing data often lack sufficient accuracy when used individually. Employing an ensemble of multiple software tools and sequencing sources can potentially address this limitation; however, this approach is hindered by increased processing time. Methods: We evaluated an ensemble method that utilizes four HLA-genotyping software tools applied to three sequencing sources—tumor exome, normal exome, and tumor RNA-seq—from the same individual to achieve high-confidence HLA genotyping. To reduce processing time, we incorporated an HLA region-specific FASTQ read-filtering strategy. With this protocol we also provide a Docker implementation of the FASTQ read-filtering pipeline and a software tool for the analysis of HLA genotypes. Results: Consensus HLA genotypes for HLA class I alleles, derived from filtered FASTQ files, showed complete concordance with those obtained from unfiltered original FASTQ files. The use of filtered FASTQ files significantly reduced the time required for HLA genotyping. Conclusions: These findings demonstrate the utility of the proposed HLA genotyping approach in achieving rapid and high-confidence two-field HLA genotyping. Full article
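The read-filtering strategy reduces to keeping only FASTQ records whose read IDs were previously flagged as mapping to the HLA region. The sketch below assumes that ID set already exists (the alignment step that produces it is not shown) and uses the standard 4-line FASTQ record layout.

```python
def filter_fastq(lines, keep_ids):
    """Keep the 4-line FASTQ records whose '@id' header is in keep_ids."""
    out = []
    for i in range(0, len(lines), 4):
        record = lines[i:i + 4]
        read_id = record[0].split()[0].lstrip("@")
        if read_id in keep_ids:
            out.extend(record)
    return out

# Two toy reads; only r2 is assumed to map to the HLA region.
fastq = ["@r1", "ACGT", "+", "IIII",
         "@r2", "TTAA", "+", "IIII"]
filtered = filter_fastq(fastq, {"r2"})
```

Shrinking the input this way is what lets each downstream HLA-typing tool in the ensemble run on a fraction of the original reads, which is the source of the reported speedup.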

17 pages, 3217 KB  
Article
Optimization of Neural Network Models of Computer Vision for Biometric Identification on Edge IoT Devices
by Bauyrzhan Belgibayev, Madina Mansurova, Ganibet Ablay, Talshyn Sarsembayeva and Zere Armankyzy
J. Imaging 2025, 11(11), 419; https://doi.org/10.3390/jimaging11110419 - 20 Nov 2025
Viewed by 578
Abstract
This research is dedicated to the development of an intelligent biometric system based on the synergy of Internet of Things (IoT) technologies and Artificial Intelligence (AI). The primary goal of this research is to explore the possibilities of personal identification using two distinct biometric traits: facial images and the venous pattern of the palm. These methods are treated as independent approaches, each relying on unique anatomical features of the human body. This study analyzes state-of-the-art methods in computer vision and neural network architectures and presents experimental results related to the extraction and comparison of biometric features. For each biometric modality, specific approaches to data collection, preprocessing, and analysis are proposed. We frame optimization in practical terms: selecting an edge-suitable backbone (ResNet-50) and employing metric learning (Triplet Loss) to improve convergence and generalization while adapting the stack for edge IoT deployment (Dockerized FastAPI with JWT). This clarifies that “optimization” in our title refers to model selection, loss design, and deployment efficiency on constrained devices. Additionally, the system’s architectural principles are described, including the design of the web interface and server infrastructure. The proposed solution demonstrates the potential of intelligent biometric technologies in applications such as automated access control systems, educational institutions, smart buildings, and other areas where high reliability and resistance to spoofing are essential. Full article
(This article belongs to the Special Issue Techniques and Applications in Face Image Analysis)
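The triplet loss named in the abstract has the standard form L = max(0, d(a, p) − d(a, n) + margin): it pushes an anchor embedding closer to a same-identity positive than to a different-identity negative by at least the margin. The 2-D vectors below are toy values, not ResNet-50 outputs.

```python
import math

def dist(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

# Easy triplet: positive is much closer than negative -> zero loss.
easy = triplet_loss((0.0, 0.0), (0.1, 0.0), (1.0, 1.0))
# Hard triplet: positive nearly as far as negative -> positive loss.
hard = triplet_loss((0.0, 0.0), (0.9, 0.0), (1.0, 0.0))
```

Training only receives gradient from triplets like `hard`, which is why triplet mining matters for convergence in biometric metric learning.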

23 pages, 663 KB  
Article
Authentication Challenges and Solutions in Microservice Architectures
by Constantin Lucian Aldea and Razvan Bocu
Appl. Sci. 2025, 15(22), 12088; https://doi.org/10.3390/app152212088 - 14 Nov 2025
Cited by 1 | Viewed by 1618
Abstract
In this paper, we examine the relevant vulnerabilities and security controls for ensuring the security of applications built on microservice architectures. Zero Trust security principles are used to conceptualize and implement a secure ecosystem using Spring Boot security components and Docker infrastructure. Relevant security controls are analyzed and a proof of concept was created. The combination of security controls in the Docker environment, the deployment of these controls, and the analysis of their impact will also be the focus of this paper. Full article
(This article belongs to the Special Issue New Advances in Cybersecurity Technology and Cybersecurity Management)
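A Zero Trust posture means every microservice re-validates the caller's token on each request instead of trusting upstream services. Below is a simplified JWT-like HS256 sketch using only the standard library; it is an illustration of the principle, not the Spring Security implementation examined in the paper, and the shared secret is a placeholder.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-shared-secret"  # illustrative only; real deployments rotate keys

def b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(claims: dict) -> str:
    """Produce '<payload>.<signature>' with an HMAC-SHA256 signature."""
    payload = b64(json.dumps(claims, sort_keys=True).encode())
    sig = b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    return f"{payload}.{sig}"

def verify(token: str) -> bool:
    """Recompute the signature and compare in constant time."""
    payload, sig = token.rsplit(".", 1)
    expected = b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign({"sub": "order-service", "role": "reader"})
```

Each containerized service running `verify` independently is what removes the implicit trust between microservices that the paper's threat analysis targets.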

9 pages, 889 KB  
Proceeding Paper
Integrating a Stereo Vision System on the F1Tenth Platform for Enhanced Perception
by Péter Farkas, Bence Török and Szilárd Aradi
Eng. Proc. 2025, 113(1), 10; https://doi.org/10.3390/engproc2025113010 - 28 Oct 2025
Viewed by 532
Abstract
During the development of vehicle control algorithms, effective real-world validation is crucial. Model vehicle platforms provide a cost-effective and accessible method for such testing. The open-source F1Tenth project is a popular choice, but its reliance on lidar sensors limits certain applications. To enable more universal environmental perception, integrating a stereo camera system could be advantageous, although existing software packages do not yet support this functionality. Therefore, our research focuses on developing a modular software architecture for the F1Tenth platform, incorporating real-time stereo vision-based environment perception, robust state representation, and clear actuator interfaces. The system simplifies the integration and testing of control algorithms, while minimizing the simulation-to-reality gap. The framework’s operation is demonstrated through a real-world control problem. Environmental sensing, representation, and the control method combine classical and deep learning techniques to ensure real-time performance and robust operation. Our platform facilitates real-world testing and is suitable for validating research projects. Full article
(This article belongs to the Proceedings of The Sustainable Mobility and Transportation Symposium 2025)

13 pages, 2071 KB  
Article
OmniCellX: A Versatile and Comprehensive Browser-Based Tool for Single-Cell RNA Sequencing Analysis
by Renwen Long, Tina Suoangbaji and Daniel Wai-Hung Ho
Biology 2025, 14(10), 1437; https://doi.org/10.3390/biology14101437 - 17 Oct 2025
Viewed by 839
Abstract
Single-cell RNA sequencing (scRNA-seq) has revolutionized genomic investigations by enabling the exploration of gene expression heterogeneity at the individual cell level. However, the complexity of scRNA-seq data analysis remains a challenge for many researchers. Here, we present OmniCellX, a browser-based tool designed to simplify and streamline scRNA-seq data analysis while addressing key challenges in accessibility, scalability, and usability. OmniCellX features a Docker-based installation, minimizing technical barriers and ensuring rapid deployment on local machines or clusters. Its dual-mode operation (analysis and visualization) integrates a comprehensive suite of analytical tools for tasks such as preprocessing, dimensionality reduction, clustering, differential expression, functional enrichment, cell–cell communication, and trajectory inference on raw data while enabling alternative interactive and publication-quality visualizations on pre-analyzed data. Supporting multiple input formats and leveraging the memory-efficient data structure for scalability, OmniCellX can efficiently handle datasets spanning millions of cells. The platform emphasizes user flexibility, offering adjustable parameters for real-time fine-tuning, alongside extensive documentation to guide users at even beginner levels. OmniCellX combines an intuitive interface with robust analytical power to perform single-cell data analysis and empower researchers to uncover biological insights with ease. Its scalability and versatility make it a valuable tool for advancing discoveries in cellular heterogeneity and biomedical research. Full article

12 pages, 284 KB  
Article
AI-Enabled Secure and Scalable Distributed Web Architecture for Medical Informatics
by Marian Ileana, Pavel Petrov and Vassil Milev
Appl. Sci. 2025, 15(19), 10710; https://doi.org/10.3390/app151910710 - 4 Oct 2025
Viewed by 1064
Abstract
Current medical informatics systems face critical challenges, including limited scalability across distributed institutions, insufficient real-time AI-driven decision support, and lack of standardized interoperability for heterogeneous medical data exchange. To address these challenges, this paper proposes a novel distributed web system architecture for medical informatics, integrating artificial intelligence techniques and cloud-based services. The system ensures interoperability via HL7 FHIR standards and preserves data privacy and fault tolerance across interconnected medical institutions. A hybrid AI pipeline combining principal component analysis (PCA), K-Means clustering, and convolutional neural networks (CNNs) is applied to diffusion tensor imaging (DTI) data for early detection of neurological anomalies. The architecture leverages containerized microservices orchestrated with Docker Swarm, enabling adaptive resource management and high availability. Experimental validation confirms reduced latency, improved system reliability, and enhanced compliance with medical data exchange protocols. Results demonstrate superior performance with an average latency of 94 ms, a diagnostic accuracy of 91.3%, and enhanced clinical workflow efficiency compared to traditional monolithic architectures. The proposed solution successfully addresses scalability limitations while maintaining data security and regulatory compliance across multi-institutional deployments. This work contributes to the advancement of intelligent, interoperable, and scalable e-health infrastructures aligned with the evolution of digital healthcare ecosystems. Full article
(This article belongs to the Special Issue Data Science and Medical Informatics)
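The K-Means stage of the hybrid pipeline can be illustrated in miniature. The sketch below runs Lloyd's algorithm in one dimension with two clusters on synthetic values; the real system applies it to PCA-reduced DTI features, and the naive initialization here is an assumption for brevity.

```python
def kmeans_1d(points, k, iters=20):
    """Lloyd's algorithm on scalars: assign to nearest center, recompute means."""
    centers = sorted(points)[:k]  # naive init: the k smallest points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two well-separated synthetic groups around 1.0 and 10.1.
data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.4]
centers = sorted(kmeans_1d(data, k=2))
```

In the full pipeline, cluster assignments like these would feed the CNN stage rather than being the final output.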

28 pages, 3252 KB  
Article
Toward Secure SDN Infrastructure in Smart Cities: Kafka-Enabled Machine Learning Framework for Anomaly Detection
by Gayathri Karthick, Glenford Mapp and Jon Crowcroft
Future Internet 2025, 17(9), 415; https://doi.org/10.3390/fi17090415 - 11 Sep 2025
Viewed by 921
Abstract
As smart cities evolve, the demand for real-time, secure, and adaptive network monitoring continues to grow. Software-Defined Networking (SDN) offers a centralized approach to managing network flows; however, anomaly detection within SDN environments remains a significant challenge, particularly at the intelligent edge. This paper presents a conceptual Kafka-enabled ML framework for scalable, real-time analytics in SDN environments, supported by offline evaluation and a prototype streaming demonstration. A range of supervised ML models covering traditional methods and ensemble approaches (Random Forest, Linear Regression, and XGBoost) were trained and validated using the InSDN intrusion detection dataset. These models were tested against multiple cyber threats, including botnets, DoS, DDoS, network reconnaissance, brute force, and web attacks, achieving up to 99% accuracy for ensemble classifiers under offline conditions. A Dockerized prototype demonstrates Kafka’s role in offline data ingestion, processing, and visualization through PostgreSQL and Grafana. While full ML pipeline integration into Kafka remains part of future work, the proposed architecture establishes a foundation for secure and intelligent Software-Defined Vehicular Networking (SDVN) infrastructure in smart cities. Full article
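The ensemble idea behind the offline evaluation can be reduced to a majority vote over per-model flow labels. The three "models" below are stand-in callables with made-up thresholds, not the trained Random Forest, Linear Regression, or XGBoost classifiers from the paper.

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most common label among the per-model predictions."""
    label, _ = Counter(predictions).most_common(1)[0]
    return label

# Stand-in classifiers labeling a flow from its packets-per-second rate.
models = [
    lambda flow: "ddos" if flow["pps"] > 1000 else "benign",
    lambda flow: "ddos" if flow["pps"] > 800 else "benign",
    lambda flow: "benign",  # deliberately weak model, outvoted by the others
]
flow = {"pps": 1500}
verdict = majority_vote([m(flow) for m in models])
```

In the proposed architecture, flows would arrive through Kafka topics and verdicts would land in PostgreSQL for Grafana dashboards; the voting step itself stays this simple.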
