Search Results (234)

Search Parameters:
Keywords = microservice systems

36 pages, 1068 KB  
Article
Service-Oriented Architecture for Decision Support in Industrial Life-Cycle Management: Design, Implementation, and Evaluation
by Rui Neves-Silva
Processes 2026, 14(7), 1088; https://doi.org/10.3390/pr14071088 - 27 Mar 2026
Viewed by 359
Abstract
Manufacturing enterprises face increasing complexity in managing the complete life cycle of production systems, requiring integration of information from diverse sources to support timely maintenance, diagnostics, and operational decisions. This paper presents a comprehensive service-oriented architecture (SOA) for decision support in industrial life-cycle management, integrating real-time monitoring, predictive maintenance, and collaborative problem-solving across extended manufacturing enterprises. The architecture implements a three-layer service model comprising eight core collaborative services, three application services, and six life-cycle management services, orchestrated through a risk assessment module that monitors life-cycle parameters and triggers appropriate maintenance, diagnostics, or hazard prevention actions. The system was developed in the context of a European research project and validated in two industrial settings: automotive assembly lines at a German SME and air conditioning manufacturing at a Portuguese company. Results demonstrated substantial operational improvements, including reduced problem resolution time, lower diagnostic travel requirements, reduced spare-parts consumption, and increased structured problem registration. The original SOAP-based web-services implementation is further contextualized within the contemporary Industry 4.0 landscape through comparison with microservices architectures and discussion of integration paths involving OPC UA, Asset Administration Shells, and digital twins. The paper contributes a validated reference architecture for service-based industrial life-cycle management and clarifies its relevance as an early precursor of contemporary smart manufacturing approaches. Full article
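The abstract's central mechanism — a risk assessment module that monitors life-cycle parameters and triggers maintenance, diagnostics, or hazard prevention — can be sketched roughly as below. All parameter names, thresholds, and the scoring rule are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of risk-triggered orchestration: a normalized risk
# score over monitored life-cycle parameters selects one of the three
# action classes named in the abstract. Thresholds are invented.

def assess_risk(readings, limits):
    """Return the worst normalized exceedance across monitored parameters."""
    return max((value / limits[name] for name, value in readings.items()
                if name in limits), default=0.0)

def trigger_action(risk):
    """Map a risk level to a life-cycle management action (cutoffs assumed)."""
    if risk >= 1.0:
        return "hazard_prevention"   # parameter outside its safe envelope
    if risk >= 0.8:
        return "diagnostics"         # approaching the limit: investigate
    if risk >= 0.6:
        return "maintenance"         # schedule preventive maintenance
    return "monitor"

readings = {"vibration_mm_s": 7.2, "bearing_temp_C": 61.0}
limits = {"vibration_mm_s": 8.0, "bearing_temp_C": 90.0}
print(trigger_action(assess_risk(readings, limits)))  # "diagnostics"
```

A real SOA implementation would expose `assess_risk` and the action dispatch as separate services; the point here is only the threshold-to-action mapping.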

21 pages, 2014 KB  
Article
A Machine Learning-Driven CRM Approach for Identifying Member Churn in a Brazilian Agro-Industrial Cooperative: A Practical Case Study
by Sergio Akio Tanaka, João Vitor da Costa Andrade, Alessandro Botelho Bovo, Attilio Converti, Danilo Sipoli Sanches and Hugo Valadares Siqueira
Algorithms 2026, 19(3), 180; https://doi.org/10.3390/a19030180 - 27 Feb 2026
Viewed by 458
Abstract
This study addresses member churn in a Brazilian agro-industrial cooperative by operationalizing a leakage-aware, governance-aligned machine-learning protocol within the organization’s Customer Relationship Management (CRM) system. Using real-world CRM data under confidentiality constraints, we followed a KDD-based workflow. This workflow includes: (i) multi-source integration; (ii) targeted preprocessing with explicit handling of severe class imbalance via undersampling; (iii) a unified validation scheme with stratified cross-validation, hyperparameter search, and controlled AutoML benchmarking; (iv) comparison of tabular learners (Random Forest, XGBoost, and Support Vector Classifier) and a voting ensemble; and (v) SHAP-based explainability to support transparent decision-making. Class rebalancing substantially improved minority-class performance; for instance, the “Inactive” recall increased from 0.27 to 0.74 with SVC. Across ten folds, AutoML achieved competitive mean ROC-AUC (0.8844), followed by XGBoost (0.8690) and Random Forest (0.8660); global metrics supported operational feasibility (accuracy 0.79–0.80; ROC-AUC up to 0.8876), while the ensemble delivered comparable discrimination (ROC-AUC 0.8845) with a modest precision gain. SHAP analyses yielded business-coherent drivers and enabled actionable, instance-level communication in the CRM. The resulting microservices-based module exposes ranked churn propensities and explanations in dashboards for risk stratification and prioritization of retention actions. Overall, the work provides an interpretable, reproducible, and production-ready methodological blueprint for predictive CRM in seasonal cooperative environments under governance and confidentiality constraints. Full article
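The class-rebalancing step the abstract credits for the recall jump (0.27 to 0.74) is random undersampling of the majority class. A minimal stdlib sketch, with an invented record layout and seed:

```python
import random

# Illustrative sketch (not the authors' code) of majority-class
# undersampling before model training.
def undersample(rows, label_key="churned", seed=42):
    """Downsample the majority class to the minority-class size."""
    pos = [r for r in rows if r[label_key]]
    neg = [r for r in rows if not r[label_key]]
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    rng = random.Random(seed)
    return minority + rng.sample(majority, len(minority))

members = [{"id": i, "churned": i % 10 == 0} for i in range(100)]  # 10% churn
balanced = undersample(members)
print(len(balanced))  # 20: all 10 churned members plus 10 sampled actives
```

In the paper's workflow this sits between preprocessing and the stratified cross-validation stage; the trade-off is discarding majority-class information in exchange for minority-class recall.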

30 pages, 2430 KB  
Article
ST-GraphRCA: A Root Cause Analysis Model for Spatio-Temporal Graph Propagation in IoT Edge Computing
by Tianyi Su, Ruibing Mo, Yanyu Gong and Haifeng Wang
Sensors 2026, 26(5), 1474; https://doi.org/10.3390/s26051474 - 26 Feb 2026
Viewed by 452
Abstract
Real-time processing demands for massive IoT sensor data necessitate reliance on distributed microservice systems within edge clusters. However, pinpointing the root cause of anomalies within these edge microservice clusters poses a critical challenge for intelligent IoT operation and maintenance. To address the issue, a spatio-temporal graph propagation model ST-GraphRCA is proposed for root cause analysis in IoT edge environments. Our approach begins by resolving the fundamental issue of time-series asynchrony across distributed multi-source metrics. A PCA-DTW hybrid feature extraction method is introduced with a dynamic alignment strategy to mitigate the effects of random network delays and data deformation without requiring prior synchronization. Subsequently, ST-GraphRCA constructs a stream-based forward propagation graph based on the flow conservation principle. By integrating dynamic edge weights with node-level input–output anomaly scores, ST-GraphRCA precisely infers fault propagation pathways and identifies potential root cause candidates through causal reasoning. Finally, a topology-constrained high-utility mining algorithm filters these candidates. Using a constraint matrix, the algorithm filters out unreachable service combinations to locate low-frequency and high-risk root causes. Experimental results indicate that ST-GraphRCA achieves an F1-Score of 0.89, outperforming existing methods. In resource-constrained edge scenarios, its average localization time is merely 238.8 ms, representing a six-fold improvement over key benchmarks. Thus, ST-GraphRCA not only provides an efficient anomaly fault tracing solution for large-scale IoT systems but also offers technical support for the intelligent operation and maintenance of distributed microservice systems. Full article
(This article belongs to the Section Industrial Sensors)
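The root-cause intuition in the abstract — trace anomaly propagation along the forward graph to the deepest anomalous service — can be toy-sketched as follows. The topology, scores, and threshold are invented; ST-GraphRCA's actual inference uses dynamic edge weights and utility mining on top of this idea.

```python
# Toy sketch of root-cause candidate selection on a forward propagation
# graph: an anomalous service whose downstream dependencies are all
# healthy is a candidate root cause. All values are illustrative.

def root_cause_candidates(calls, scores, threshold=0.5):
    """calls: {service: [services it depends on]}; scores: anomaly score per service."""
    anomalous = {s for s, v in scores.items() if v >= threshold}
    return sorted(
        s for s in anomalous
        if not any(dep in anomalous for dep in calls.get(s, ()))
    )

calls = {"frontend": ["orders"], "orders": ["db"], "db": []}
scores = {"frontend": 0.9, "orders": 0.8, "db": 0.7}
print(root_cause_candidates(calls, scores))  # ['db']: blame propagates upstream
```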

22 pages, 10518 KB  
Article
A Scalable Microservices Architecture for Condition Monitoring and State-of-Health Tracking in Power Conversion Systems
by José M. García-Campos, Abraham M. Alcaide, A. Letrado-Castellanos, Ramon Portillo and Jose I. Leon
Sensors 2026, 26(4), 1282; https://doi.org/10.3390/s26041282 - 16 Feb 2026
Viewed by 511
Abstract
The role of power converters in modern electrical infrastructure (such as electric vehicle charging stations, battery energy storage systems and photovoltaic energy systems) has become critical. Given the high reliability required by these converters, continuous condition monitoring for predictive maintenance is mandatory. Traditional SCADA and HMI systems often face scalability bottlenecks and lack the flexibility in data aggregation and storage scalability required for long-term predictive maintenance. This paper proposes a scalable, containerized microservices-based architecture for degradation tracking and State-of-Health (SoH) monitoring in power conversion systems. The architecture features a decoupled four-layer structure, utilizing dedicated UDP servers for low-latency data ingestion, RabbitMQ (AMQP) for robust message routing, and a NoSQL (MongoDB) storage layer with a FastAPI interface. The proposed system was validated using a Hardware-in-the-Loop (HiL) setup with a Typhoon HIL606 simulator monitoring an Active Neutral Point Clamped (ANPC) power converter. Experimental stress tests demonstrated a Packet Delivery Ratio (PDR) of 1.0 at ingestion rates up to 100 messages per second (msgs/s) per node. The system exhibits transmission and processing overheads consistently below 5 ms, ensuring timely data availability for tracking thermal dynamics and parametric aging trends. This operational performance significantly exceeds the nominal requirement of 2 msgs/s for condition monitoring, ensuring robust data integrity. Finally, this modular approach provides the horizontal scalability necessary for Industry 4.0 integration, offering a high-performance framework for long-term health monitoring in modern power electronics. Full article
(This article belongs to the Special Issue Condition Monitoring of Electrical Equipment Within Power Systems)
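The decoupled layering the abstract describes (UDP ingestion, AMQP routing, NoSQL storage) can be mimicked in-process with stdlib primitives. Here `queue.Queue` stands in for RabbitMQ and a dict for MongoDB; the payload shape and names are assumptions for illustration only.

```python
import json
import queue

# Minimal in-process sketch of the decoupled pipeline: ingestion layer ->
# message broker -> storage layer. Not the paper's implementation.

broker = queue.Queue()   # plays the role of the AMQP exchange/queue
store = {}               # plays the role of the NoSQL collection

def ingest(datagram: bytes):
    """Ingestion layer: parse a telemetry datagram and publish it."""
    broker.put(json.loads(datagram))

def drain():
    """Storage layer: consume routed messages and upsert latest state."""
    while not broker.empty():
        msg = broker.get()
        store[msg["node"]] = msg   # latest SoH sample per converter node

ingest(b'{"node": "converter-1", "soh": 0.97, "temp_C": 41.2}')
drain()
print(store["converter-1"]["soh"])  # 0.97
```

The decoupling matters because the ingestion layer can accept bursts (the paper's 100 msgs/s stress case) while the storage layer drains at its own pace.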

40 pages, 4792 KB  
Article
GMD-AD: A Graph Metric Dimension-Based Hybrid Framework for Privacy-Preserving Anomaly Detection in Distributed Databases
by Awad M. Awadelkarim
Math. Comput. Appl. 2026, 31(1), 28; https://doi.org/10.3390/mca31010028 - 14 Feb 2026
Viewed by 463
Abstract
Distributed databases are increasingly used in enterprise and cloud environments, but their distributed architecture introduces significant security challenges, including data leaks and insider threats. In the context of escalating cyber threats targeting large-scale distributed databases and cloud-native microservice architectures, this paper presents Graph Metric Dimension-based Anomaly Detection (GMD-AD), a novel graph-structure model designed to enhance cybersecurity in distributed databases by leveraging the metric dimension of interaction graphs; further, GMD-AD addresses the critical need for real-time, low-overhead, and privacy-aware anomaly detection mechanisms. The model introduces a compact resolving set as landmarks to detect intrusions through distance vector variations with minimal computational overhead. The proposed framework offers four major contributions, including sequential metric dimension updates to support dynamic topologies; a parallel BFS strategy to enable scalable processing; the incorporation of the k-metric anti-dimension to provide provable privacy against re-identification attacks; and a hybrid pipeline in which resolving-set subgraphs are processed by graph neural networks prior to final classification using gradient boosting. Experiments conducted on the SockShop microservices benchmark and a real MongoDB sharded cluster with injected anomalies reveal 60% reduced localization latency (1200 ms → 480 ms), stable detection accuracy (>0.997), increased noise robustness (F1 0.95 → 0.97) and a drop of re-identification success rate from the baseline by 40 percentage points (68% → 28%) when k = 3, = 2. We demonstrated up to 60% latency reduction and 40% privacy improvement over baselines, validated on real MongoDB clusters. The findings show that GMD-AD is a scalable, real-time and privacy-preserving HTTP anomaly detection solution for both distributed database systems and microservice architectures. Full article
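The landmark idea behind the metric dimension approach can be illustrated with plain BFS: each node is fingerprinted by its distance vector to a small resolving set, and a topology or routing anomaly shows up as a changed vector. The graph and landmark choice below are toy examples, not GMD-AD itself.

```python
from collections import deque

# Toy illustration of resolving-set fingerprints: BFS distances to a few
# landmark nodes uniquely identify every node in the graph.

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def fingerprints(adj, landmarks):
    """Distance vector to each landmark, per node (-1 if unreachable)."""
    tables = [bfs_dist(adj, lm) for lm in landmarks]
    return {n: tuple(t.get(n, -1) for t in tables) for n in adj}

adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
fp = fingerprints(adj, ["a", "d"])
print(fp["b"])  # (1, 2): one hop from landmark a, two from landmark d
```

On this path graph, `{a, d}` is a resolving set: every node gets a distinct vector, so a change in any vector localizes where the structure moved, which is the low-overhead detection signal the paper builds on.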

24 pages, 894 KB  
Article
Integrating Continuous Compliance into DevSecOps Pipelines: A Data Engineering Perspective
by Aleksandr Zakharchenko
Software 2026, 5(1), 6; https://doi.org/10.3390/software5010006 - 10 Feb 2026
Viewed by 1039
Abstract
Modern DevSecOps environments face a persistent tension between accelerating deployment velocity and maintaining verifiable compliance with regulatory, security, and internal governance standards. Traditional snapshot-in-time audits and fragmented compliance tooling struggle to capture the dynamic nature of containerized, continuous delivery, often resulting in compliance drift and delayed remediation. This paper introduces the Continuous Compliance Framework (CCF), a data-centric reference architecture that embeds compliance validation directly into CI/CD pipelines. The framework treats compliance as a first-class, computable system property by combining declarative policies-as-code, standardized evidence collection, and cryptographically verifiable attestations. Central to the approach is a Compliance Data Lakehouse that transforms heterogeneous pipeline artifacts into a queryable, time-indexed compliance data product, enabling audit-ready evidence generation and continuous assurance. The proposed architecture is validated through an end-to-end synthetic microservice implementation. Experimental results demonstrate full policy lifecycle enforcement with a minimal pipeline overhead and sub-second policy evaluation latency. These findings indicate that compliance can be shifted from a post hoc audit activity to an intrinsic, verifiable property of the software delivery process without materially degrading deployment velocity. Full article
(This article belongs to the Special Issue Software Reliability, Security and Quality Assurance)
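The "policies-as-code" idea at the core of the framework — declarative rules evaluated against collected pipeline evidence — reduces to something like the sketch below. The rule format and field names are invented; production policy engines (e.g., OPA/Rego) use richer languages.

```python
# Hedged sketch of declarative compliance policies evaluated against
# pipeline evidence. Policies and evidence fields are illustrative.

POLICIES = [
    {"id": "img-signed", "field": "image_signed", "equals": True},
    {"id": "no-crit-cves", "field": "critical_cves", "equals": 0},
]

def evaluate(evidence: dict):
    """Return (compliant, list of violated policy ids)."""
    violations = [p["id"] for p in POLICIES
                  if evidence.get(p["field"]) != p["equals"]]
    return (not violations, violations)

evidence = {"image_signed": True, "critical_cves": 2}
print(evaluate(evidence))  # (False, ['no-crit-cves'])
```

The paper's contribution is what sits around this check: time-indexed evidence storage (the lakehouse) and signed attestations of each evaluation result.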

20 pages, 682 KB  
Article
Semantic Search for System Dynamics Models Using Vector Embeddings in a Cloud Microservices Environment
by Pavel Kyurkchiev, Anton Iliev and Nikolay Kyurkchiev
Future Internet 2026, 18(2), 86; https://doi.org/10.3390/fi18020086 - 5 Feb 2026
Viewed by 698
Abstract
Efficient retrieval of mathematical and structural similarities in System Dynamics models remains a significant challenge for traditional lexical systems, which often fail to capture the contextual dependencies of simulation processes. This paper presents an architectural approach and implementation of a semantic search module integrated into an existing cloud-based modeling and simulation system. The proposed method employs a strategy for serializing graph structures into textual descriptions, followed by the generation of vector embeddings via local ONNX inference and indexing within a vector database (Qdrant). Experimental validation performed on a diverse corpus of complex dynamic models, compares the proposed approach against traditional information retrieval methods (Full-Text Search, Keyword Search in PostgreSQL, and Apache Lucene with Standard and BM25 scoring). The results demonstrate the distinct advantage of semantic search, achieving high precision (over 90%) within the scope of the evaluated corpus and effectively eliminating information noise. In comparison, keyword search exhibited only 24.8% precision with a significant rate of false positives, while standard full-text analysis failed to identify relevant models for complex conceptual queries (0 results). Despite a recorded increase in latency (~2 s), the study proves that the vector-based approach is a significantly more robust solution for detecting hidden semantic connections in mathematical model databases, providing a foundation for future developments toward multi-vector indexing strategies. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)
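The pipeline shape — serialize the model graph to text, embed, rank by cosine similarity — can be sketched with a bag-of-words vector standing in for the ONNX embedding model. Model structures, the serialization scheme, and the query are invented for illustration.

```python
import math
from collections import Counter

# Simplified stand-in for the semantic-search pipeline: graph -> text ->
# vector -> cosine ranking. A word-count vector replaces the real
# embedding model purely to show the flow.

def serialize(model: dict) -> str:
    """Flatten stock/flow edges into a textual description."""
    return " ".join(f"{a} flows_to {b}" for a, b in model["edges"])

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(u: Counter, v: Counter) -> float:
    dot = sum(u[w] * v[w] for w in u)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

corpus = {
    "epidemic": {"edges": [("susceptible", "infected"), ("infected", "recovered")]},
    "inventory": {"edges": [("orders", "stock"), ("stock", "shipments")]},
}
query = embed("infected flows_to recovered")
ranked = sorted(corpus,
                key=lambda k: cosine(query, embed(serialize(corpus[k]))),
                reverse=True)
print(ranked[0])  # 'epidemic'
```

With a learned embedding instead of word counts, the same ranking step also matches conceptually similar models that share no vocabulary, which is exactly where the paper reports lexical methods returning zero results.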

8 pages, 2474 KB  
Proceeding Paper
Research on Techno-Economic Restructuring of Digital Twin and Big Data in Intelligent Manufacturing
by Yiwei Qiu
Eng. Proc. 2025, 120(1), 33; https://doi.org/10.3390/engproc2025120033 - 2 Feb 2026
Viewed by 430
Abstract
To address three critical challenges in traditional digital twin applications for smart manufacturing—static mirroring, localized optimization, and economic decoupling—we propose and validate a novel paradigm: the Twin-Data Mid-End (TDME) system driven by dual technological-economic mechanisms. By integrating real-time big data from production lines, equipment, supply chains, and market terminals through unified semantic frameworks, microservices-based technical modules, and deep reinforcement learning decision engines, this system generates instant reward signals based on multi-dimensional economic metrics including marginal profits, delivery cycles, and inventory turnover. This enables seamless “hot-swappable” reconfiguration of process algorithms, equipment controls, scheduling strategies, and organizational structures without production interruption. The system concurrently facilitates technological iteration and economic restructuring while dynamically optimizing efficiency-profit Pareto frontiers. Objective validation across 12 months of closed-loop industrial trials demonstrates reduced line changeover time by 37%, decreased unit comprehensive costs by 14.6%, shortened market response cycles by 42%, and increased return on investment by 11%, highlighting the paradigm’s practical applicability and broad adaptability. Full article
(This article belongs to the Proceedings of 8th International Conference on Knowledge Innovation and Invention)

20 pages, 919 KB  
Systematic Review
The Principle of Least Privilege in Microservices: A Systematic Mapping Study
by Shouki A. Ebad and Marwa Amara
Appl. Sci. 2026, 16(3), 1495; https://doi.org/10.3390/app16031495 - 2 Feb 2026
Viewed by 528
Abstract
While Microservice Architectures (MSAs) offer enhanced scalability and maintainability, they introduce significant complexity for access control and, specifically, the rigorous enforcement of the Principle of Least Privilege (PoLP). The resulting lack of clear privilege boundaries is a major security vulnerability in microservice-based systems. To address this gap, this study conducts a systematic mapping study to provide a comprehensive guide and taxonomy on implementing PoLP in MSA. We identify and categorize existing mechanisms, best practices, and the technical and non-technical challenges encountered during implementation. The systematic search identified 25 primary studies, revealing a significant contribution from journal venues, particularly Computers & Security. Key findings detail the top technical challenges, including performance overhead, fragile container isolation, and authentication/authorization gaps inherent in service-to-service communication. Proposed mechanisms are categorized into four groups: policy and access control, code and configuration hardening, runtime/kernel-level methods, and general frameworks. Similarly, organizational challenges are grouped by people/culture, tooling/architecture, process/governance, and resource/expertise. This study provides a valuable roadmap and taxonomy for diverse security stakeholders. The identified research gaps—concerning AI integration, DevSecOps adoption, education, and dynamic analysis—underscore the need to shift from the currently predominantly theoretical approaches towards practical, experimental research to advance the real-world application of PoLP. Full article
(This article belongs to the Special Issue Trends and Prospects in Software Security)

5 pages, 173 KB  
Proceeding Paper
From Camera to Algorithm: OpenCV and AI Workshop for the Cybersecurity of the Future
by Pablo Natera-Muñoz, Fernando Broncano-Morgado and Pablo Garcia-Rodriguez
Eng. Proc. 2026, 123(1), 4; https://doi.org/10.3390/engproc2026123004 - 30 Jan 2026
Viewed by 388
Abstract
Artificial vision and artificial intelligence (AI) are increasingly interconnected in cybersecurity. This work presents an overview of OpenCV-based visual computing as a core tool for intelligent security systems that analyze real-time visual data. It includes practical exercises on face, edge, motion, and color detection, forming the basis for advanced object recognition using YOLOv10. Real applications, such as document processing and camera-based anomaly detection, are implemented in a microservice architecture with OpenCV, and deep learning frameworks. Integrating computer vision and AI is shown to be essential for developing resilient and autonomous cybersecurity infrastructures. Full article
(This article belongs to the Proceedings of First Summer School on Artificial Intelligence in Cybersecurity)
32 pages, 4251 KB  
Article
Context-Aware ML/NLP Pipeline for Real-Time Anomaly Detection and Risk Assessment in Cloud API Traffic
by Aziz Abibulaiev, Petro Pukach and Myroslava Vovk
Mach. Learn. Knowl. Extr. 2026, 8(1), 25; https://doi.org/10.3390/make8010025 - 22 Jan 2026
Cited by 1 | Viewed by 1292
Abstract
We present a combined ML/NLP (Machine Learning, Natural Language Processing) pipeline for protecting cloud-based APIs (Application Programming Interfaces), which works both at the level of individual HTTP (Hypertext Transfer Protocol) requests and at the access log file reading mode, linking explicitly technical anomalies with business risks. The system processes each event/access log through parallel numerical and textual branches: a set of anomaly detectors trained on traffic engineering characteristics and a hybrid NLP stack that combines rules, TF-IDF (Term Frequency-Inverse Document Frequency), and character-level models trained on enriched security datasets. Their results are integrated using a risk-aware policy that takes into account endpoint type, data sensitivity, exposure, and authentication status, and creates a discrete risk level with human-readable explanations and recommended SOC (Security Operations Center) actions. We implement this design as a containerized microservice pipeline (input, preprocessing, ML, NLP, merging, alerting, and retraining services), orchestrated using Docker Compose and instrumented using OpenSearch Dashboards. Experiments with OWASP-like (Open Worldwide Application Security Project) attack scenarios show a high detection rate for injections, SSRF (Server-Side Request Forgery), Data Exposure, and Business Logic Abuse, while the processing time for each request remains within real-time limits even in sequential testing mode. Thus, the pipeline bridges the gap between ML/NLP research for security and practical API protection channels that can evolve over time through feedback and retraining. Full article
(This article belongs to the Section Safety, Security, Privacy, and Cyber Resilience)
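The "risk-aware policy" merge step — escalating a technical anomaly score using business context such as data sensitivity, exposure, and authentication status — can be sketched as below. The weights and level cutoffs are assumptions for illustration, not the paper's calibrated values.

```python
# Illustrative sketch of risk-aware merging: the same detector score
# maps to different SOC priorities depending on business context.
# Multipliers and cutoffs are invented.

def risk_level(anomaly_score, sensitive, public, authenticated):
    score = anomaly_score
    score *= 1.5 if sensitive else 1.0        # data sensitivity
    score *= 1.3 if public else 1.0           # internet exposure
    score *= 1.2 if not authenticated else 1.0
    if score >= 1.2:
        return "critical"
    if score >= 0.8:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"

# Same anomaly score, different context, different discrete risk level:
print(risk_level(0.6, sensitive=True, public=True, authenticated=False))   # critical
print(risk_level(0.6, sensitive=False, public=False, authenticated=True))  # medium
```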

29 pages, 2803 KB  
Article
Benchmarking SQL and NoSQL Persistence in Microservices Under Variable Workloads
by Nenad Pantelic, Ljiljana Matic, Lazar Jakovljevic, Stefan Eric, Milan Eric, Miladin Stefanović and Aleksandar Djordjevic
Future Internet 2026, 18(1), 53; https://doi.org/10.3390/fi18010053 - 15 Jan 2026
Viewed by 953
Abstract
This paper presents a controlled comparative evaluation of SQL and NoSQL persistence mechanisms in containerized microservice architectures under variable workload conditions. Three persistence configurations—SQL with indexing, SQL without indexing, and a document-oriented NoSQL database, including supplementary hybrid SQL variants used for robustness analysis—are assessed across read-dominant, write-dominant, and mixed workloads, with concurrency levels ranging from low to high contention. The experimental setup is fully containerized and executed in a single-node environment to isolate persistence-layer behavior and ensure reproducibility. System performance is evaluated using multiple metrics, including percentile-based latency (p95), throughput, CPU utilization, and memory consumption. The results reveal distinct performance trade-offs among the evaluated configurations, highlighting the sensitivity of persistence mechanisms to workload composition and concurrency intensity. In particular, indexing strategies significantly affect read-heavy scenarios, while document-oriented persistence demonstrates advantages under write-intensive workloads. The findings emphasize the importance of workload-aware persistence selection in microservice-based systems and support the adoption of polyglot persistence strategies. Rather than providing absolute performance benchmarks, the study focuses on comparative behavioral trends that can inform architectural decision-making in practical microservice deployments. Full article
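The study's headline latency metric, p95, is the value below which 95% of sampled requests fall; unlike the mean, it surfaces tail latency under contention. A nearest-rank sketch with invented sample data:

```python
import math

# Sketch of the p95 metric used in the benchmark (nearest-rank method).
# The latency samples are invented to show why the tail dominates p95.

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

latencies_ms = [12, 14, 13, 15, 11, 210, 16, 13, 12, 14,
                15, 13, 12, 16, 14, 13, 15, 12, 14, 180]
print(percentile(latencies_ms, 95))  # 180: two slow requests set the p95
```

The mean of this sample is about 32 ms, which would hide the 180-210 ms stragglers that p95 exposes; that is why percentile latency is the standard comparison metric in workload studies like this one.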

37 pages, 653 KB  
Article
Highly Efficient Software Development Using DevOps and Microservices: A Comprehensive Framework
by David Barbosa, Vítor Santos, Maria Clara Silveira, Arnaldo Santos and Henrique S. Mamede
Future Internet 2026, 18(1), 50; https://doi.org/10.3390/fi18010050 - 14 Jan 2026
Viewed by 1226
Abstract
With the growing popularity of DevOps culture among companies and the corresponding increase in Microservices architecture development—both known to boost productivity and efficiency in software development—an increasing number of organizations are aiming to integrate them. Implementing DevOps culture and best practices can be challenging, but it is increasingly important as software applications become more robust and complex, and performance is considered essential by end users. By following the Design Science Research methodology, this paper proposes an iterative framework that closely follows the recommended DevOps practices, validated with the assistance of expert interviews, for implementing DevOps practices into Microservices architecture software development, while also offering a series of tools that serve as a base guideline for anyone following this framework, in the form of a theoretical use case. Therefore, this paper provides organizations with a guideline for adapting DevOps and offers organizations already using this methodology a framework to potentially enhance their established practices. Full article

27 pages, 1703 KB  
Article
Joint Optimization of Microservice and Database Orchestration in Edge Clouds via Multi-Stage Proximal Policy
by Xingfeng He, Mingwei Luo, Dengmu Liu, Zhenhua Wang, Yingdong Liu, Chen Zhang, Jiandong Wang, Jiaxiang Xu and Tianping Deng
Symmetry 2026, 18(1), 136; https://doi.org/10.3390/sym18010136 - 9 Jan 2026
Viewed by 463
Abstract
Microservices as an emerging architectural approach have been widely applied in the development of online applications. However, in large-scale service systems, frequent data communications, complex invocation dependencies, and strict latency requirements pose significant challenges to efficient microservice orchestration. In addition, microservices need to frequently access the database to achieve data persistence, creating a mutual dependency between the two, and this symmetry further increases the complexity of service orchestration and coordinated deployment. In this context, the strong coupling of service deployment, database layout, and request routing makes effective local optimization difficult. However, existing research often overlooks the impact of databases, fails to achieve joint optimization among databases, microservice deployments, and routing, or lacks fine-grained orchestration strategies for multi-instance models. To address the above limitations, this paper proposes a joint optimization framework based on the Database-as-a-Service (DaaS) paradigm. It performs fine-grained multi-instance queue modeling based on queuing theory to account for delays in data interaction, request queuing, and processing. Furthermore this paper proposes a proximal policy optimization algorithm based on multi-stage joint decision-making to address the orchestration problem of microservices and database instances. In this algorithm, the action space is symmetrical between microservices and database deployment, enabling the agent to leverage this characteristic and improve representation learning efficiency through shared feature extraction layers. The algorithm incorporates a two-layer agent policy stability control to accelerate convergence and a three-level experience replay mechanism to achieve efficient training on high-dimensional decision spaces. 
Experimental results demonstrate that the proposed algorithm effectively reduces service request latency under diverse workloads and network conditions, while maintaining global resource load balancing. Full article
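The joint placement idea described in this abstract can be illustrated with a toy model: microservice and database instances share a symmetric action encoding (a node index per instance), and per-instance delay is estimated with a simple M/M/1 queuing formula. This is a minimal sketch under stated assumptions; all names, rates, thresholds, and the choice of the M/M/1 model are illustrative and not taken from the paper.

```python
# Toy sketch (not the paper's model): symmetric placement actions for a
# microservice and its database instance, with M/M/1 queueing delay.

from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    arrival_rate: float   # lambda: requests/s arriving at this instance
    service_rate: float   # mu: requests/s the instance can process

def mm1_sojourn_time(inst: Instance) -> float:
    """Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda)."""
    if inst.arrival_rate >= inst.service_rate:
        raise ValueError("queue is unstable (lambda >= mu)")
    return 1.0 / (inst.service_rate - inst.arrival_rate)

def end_to_end_latency(chain: list, placement: list,
                       hop_delay: float = 0.002) -> float:
    """Sum queueing delays along an invocation chain, adding a fixed network
    hop delay whenever consecutive instances sit on different nodes."""
    total = sum(mm1_sojourn_time(i) for i in chain)
    for a, b in zip(placement, placement[1:]):
        if a != b:
            total += hop_delay
    return total

# A microservice and its database instance use the same action encoding
# (a node index) -- the symmetry a learning agent could exploit through
# shared feature-extraction layers.
svc = Instance("order-svc", arrival_rate=80.0, service_rate=100.0)
db = Instance("order-db", arrival_rate=80.0, service_rate=120.0)
co_located = end_to_end_latency([svc, db], placement=[0, 0])
split = end_to_end_latency([svc, db], placement=[0, 1])
```

Co-locating the service with its database avoids the network hop, so `co_located < split`; a joint optimizer trades such hops against node load balance.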

21 pages, 4706 KB  
Article
Near-Real-Time Integration of Multi-Source Seismic Data
by José Melgarejo-Hernández, Paula García-Tapia-Mateo, Juan Morales-García and Jose-Norberto Mazón
Sensors 2026, 26(2), 451; https://doi.org/10.3390/s26020451 - 9 Jan 2026
Viewed by 522
Abstract
The reliable and continuous acquisition of seismic data from multiple open sources is essential for real-time monitoring, hazard assessment, and early-warning systems. However, the heterogeneity among existing data providers, such as the United States Geological Survey, the European-Mediterranean Seismological Centre, and the Spanish National Geographic Institute, creates significant challenges due to differences in formats, update frequencies, and access methods. To overcome these limitations, this paper presents a modular and automated framework for the scheduled near-real-time ingestion of global seismic data using open APIs and semi-structured web data. The system, implemented on a Docker-based architecture, automatically retrieves, harmonizes, and stores seismic information from heterogeneous sources at regular intervals using a cron-based scheduler. Data are standardized into a unified schema, validated to remove duplicates, and persisted in a relational database for downstream analytics and visualization. The proposed framework adheres to the FAIR data principles by ensuring that all seismic events are uniquely identifiable, source-traceable, and stored in interoperable formats. Its lightweight, containerized design enables deployment as a microservice within emerging data spaces and open environmental data infrastructures. Experimental validation followed a two-phase evaluation: a high-frequency 24 h stress test and a subsequent seven-day continuous deployment under steady-state conditions. The system maintained stable operation with 100% availability across all sources, successfully integrating 4533 newly published seismic events during the seven-day period and identifying 595 duplicated detections across providers. These results demonstrate that the framework provides a robust foundation for the automated integration of multi-source seismic catalogs.
By enabling automated and interoperable integration of seismic information from diverse providers, this approach supports the construction of more comprehensive and globally accessible earthquake catalogs, strengthening data-driven research, situational awareness, and near-real-time applications across regions and institutions worldwide. Full article
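The harmonization-and-deduplication step described above can be sketched as follows: events arrive in provider-specific formats, are mapped onto one unified schema that preserves the provider id for source traceability, and duplicates are detected by proximity in time, epicenter, and magnitude. Field names, the USGS-GeoJSON-style input shape, and the matching thresholds are illustrative assumptions, not the paper's actual schema or rules.

```python
# Illustrative sketch (assumed schema and thresholds, not the paper's):
# unify heterogeneous seismic events and drop cross-provider duplicates.

from dataclasses import dataclass

@dataclass(frozen=True)
class SeismicEvent:
    source: str       # provider name, kept for source traceability
    event_id: str     # provider-scoped identifier
    time_s: float     # origin time as a Unix timestamp
    lat: float
    lon: float
    magnitude: float

def harmonize_usgs(raw: dict) -> SeismicEvent:
    """Map a USGS-GeoJSON-style feature onto the unified schema."""
    lon, lat, _depth = raw["geometry"]["coordinates"]
    return SeismicEvent("USGS", raw["id"], raw["properties"]["time"] / 1000.0,
                        lat, lon, raw["properties"]["mag"])

def is_duplicate(a: SeismicEvent, b: SeismicEvent,
                 dt: float = 30.0, deg: float = 0.5, dmag: float = 0.5) -> bool:
    """Treat two detections as the same physical event if they are close
    in time, epicenter, and magnitude (thresholds are assumptions)."""
    return (abs(a.time_s - b.time_s) <= dt
            and abs(a.lat - b.lat) <= deg
            and abs(a.lon - b.lon) <= deg
            and abs(a.magnitude - b.magnitude) <= dmag)

def merge_catalogs(events: list) -> list:
    """Keep the earliest detection of each physical event (first-wins)."""
    kept = []
    for ev in sorted(events, key=lambda e: e.time_s):
        if not any(is_duplicate(ev, k) for k in kept):
            kept.append(ev)
    return kept

events = [
    SeismicEvent("USGS", "u1", 1000.0, 38.0, -1.2, 4.1),
    SeismicEvent("EMSC", "e1", 1010.0, 38.1, -1.3, 4.0),  # same quake
    SeismicEvent("IGN",  "i1", 5000.0, 40.0, -3.7, 2.5),
]
merged = merge_catalogs(events)
```

A scheduler (e.g. cron inside a container) would call such a pipeline at each interval and persist `merged` to the relational store, matching the ingest-harmonize-deduplicate-persist flow the abstract outlines.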
(This article belongs to the Special Issue Advances in Seismic Sensing and Monitoring)
