1. Introduction
Smart cities have emerged as complex socio-technical systems in which urban services, infrastructure management, and citizen-centered applications, such as smart parking systems [1], smart traffic management systems [2], smart buildings [3], and smart waste management systems [4], depend critically on deep sensing, interconnected digital platforms, and timely data processing [5]. As cities evolve toward models of sustainable, resilient, and efficient urban governance, their digital infrastructures must support continuous monitoring, rapid situational awareness, and adaptive service delivery. These requirements cannot be met merely through centralized cloud-centric models. Instead, they rely on a distributed computing paradigm in which data is processed closer to its point of origin, reducing latency, alleviating backhaul congestion, and enabling real-time responsiveness across heterogeneous urban domains [6].
Within this context, cloud-edge-IoT convergence plays a decisive role [7]. Modern city services generate substantial volumes of heterogeneous data, ranging from environmental sensing to mobility flows and operational telemetry from municipal assets. Processing this information at the edge of the network enables faster reactions to local conditions, improves system resilience, and reduces the energy overhead associated with transmitting continuous data over long distances. These characteristics are essential for services that require strict responsiveness or operate in bandwidth-constrained environments, underscoring the relevance of edge-enabled architectures for smart city deployments [7].
Socio-economic and environmental factors further reinforce the need for advanced digital infrastructures [8]. Rapid urbanization increases the strain on transportation networks, waste management systems, utilities, and public services, creating a demand for data-driven solutions that enhance operational efficiency while promoting transparency, inclusion, and sustainability. Digital transformation enables new forms of collaboration across administrative boundaries, fosters innovation in city services, and contributes to an improved quality of life [7]. However, achieving these benefits requires addressing systemic challenges such as interoperability across heterogeneous platforms, data protection guarantees in multi-stakeholder environments, and consistent coordination among independent administrative domains.
The strategic relevance of smart city digitization is also reflected in international policy frameworks. The European Union’s Digital Decade strategy and the United Nations’ Sustainable Development Goals (SDGs) explicitly identify digital infrastructures as foundational enablers of environmental efficiency, economic competitiveness, and societal well-being [9,10]. Recent analysis of smart city computing ecosystems emphasizes that cloud and edge capabilities are indispensable for achieving these objectives, as they provide the scalability, interoperability, and cross-domain integration required for city-wide services [11]. Research and innovation programs further reinforce this vision by prioritizing open standards, interoperability mechanisms, and modular architectures that support heterogeneous urban deployments [12].
Advances in fifth-generation connectivity, particularly the deployment of private 5G networks, strengthen this trajectory by providing high bandwidth, low latency, and highly reliable communication substrates for urban services. Combined with edge computing, these capabilities enable new classes of applications, such as adaptive resource management, distributed decision making, and context-aware automation, where intelligence must be executed close to the data source to ensure responsiveness and resilience. Urban infrastructures, therefore, require architectural frameworks that can seamlessly integrate edge devices, communication networks, orchestration systems, and semantic data management layers while upholding security and privacy constraints [13].
Addressing these requirements calls for a collaborative and open platform that can orchestrate heterogeneous service domains end-to-end, abstract operational complexity from users, and preserve service guarantees throughout the lifecycle of deployed applications. To this end, the proposed environment introduces an interface that streamlines the onboarding of complex applications by eliminating direct interaction with low-level orchestration mechanisms. A continuous Service Level Agreement (SLA) preservation loop links service definitions to closed-loop management processes, combining forecasting, decision support, and secure automated actuation. A programmable and encrypted integration fabric interconnects distributed components, while alignment with open standards in contextual information modelling, service management, and edge computing ensures long-term interoperability and sustainability.
The objective of this work is to establish a smart city-oriented computing ecosystem that enables the coherent coordination of urban services, infrastructure operations, and cross-domain applications across heterogeneous environments. Existing solutions often remain fragmented, with limited interoperability, insufficient automation capabilities, and architectural constraints that hinder flexible multi-domain collaboration. The platform presented in this article addresses these gaps by enabling seamless coordination, distributed intelligence, and secure cross-domain orchestration.
To support this contribution, the article presents:
(i) a flexible architecture designed for multi-domain smart city environments;
(ii) the mechanisms enabling secure orchestration, semantic interoperability, and automated SLA preservation;
and (iii) a real-world validation conducted in a specific local domain, demonstrating the operational viability, efficiency gains, and broader applicability of the proposed approach.
Together, these elements position the platform as a foundation for future smart city deployments capable of integrating heterogeneous infrastructures into a unified, adaptive, and trustworthy digital ecosystem.
2. State of the Art
Smart cities have evolved into complex digital ecosystems, where urban services, infrastructure, and governance processes rely increasingly on deep sensing, ubiquitous connectivity, and distributed computational capabilities. Their evolution is closely tied to the rapid expansion of the Internet of Things (IoT), which enables continuous monitoring of urban environments and facilitates data-driven decision-making across various domains, including mobility, public safety, energy management, and environmental monitoring. As cities adopt more advanced digital services, the underlying technological landscape must accommodate unprecedented heterogeneity: diverse devices, disparate communication technologies, fragmented data platforms, and multiple administrative domains. This complexity imposes strict requirements on scalability, interoperability, resilience, and latency, which traditional cloud-centric models cannot fully satisfy [13,14].
Initial IoT architectures commonly adopted a three-layer structure comprising (i) devices, (ii) networks, and (iii) applications. For example, in order to standardize a common reference architecture for the community, the 5G Infrastructure Association (5G IA) [15] and the Alliance for Internet of Things Innovation (AIOTI) [16,17] proposed a three-layer High Level Architecture (HLA) [18]. While this model served early deployments, its limitations became evident as real-world use cases demanded cross-system data flows, richer context awareness, and cooperation among previously isolated IoT silos. Modern smart city systems increasingly resemble a Next Generation IoT (NG IoT) environment [19], a network of heterogeneous networks that integrates sensing technologies with cloud computing, edge devices, 5G connectivity, data management frameworks, and AI-based analytics. This shift underscores the need for architectural approaches that not only support large-scale data ingestion and processing but also enforce security, privacy, and governance across multiple stakeholders and administrative boundaries.
The transition towards an edge cloud continuum has been particularly important for smart cities. Edge computing reduces latency, alleviates backhaul congestion, and enables geographically localized processing, which is essential for time-critical services. Meanwhile, cloud infrastructures continue to provide the elasticity required for large-scale analytics, long-term storage, and inter-domain coordination. Recent surveys emphasize that efficient smart city deployments rely on co-designed cloud and edge capabilities to ensure scalable urban services and support interoperability among heterogeneous digital infrastructures [11]. However, despite the architectural benefits of distributed computation, the coexistence of diverse hardware, software platforms, and connectivity solutions introduces significant complexity in smart city operations.
A variety of architectural paradigms have been proposed to address these requirements. Cloud native orchestration frameworks offer scalability and robust lifecycle management, but often lack the responsiveness required for latency-sensitive urban applications. Fog computing emerged as an intermediate paradigm to extend computation towards the network edge; yet, fog deployments frequently encounter coordination difficulties and inconsistent interoperability across domains [20]. Hybrid edge cloud systems aim to combine the advantages of both approaches; however, their real-world adoption remains fragmented. Governance constraints, uneven data management capabilities, and infrastructural heterogeneity continue to limit seamless integration across city-wide environments. Recent work on fog-based deterministic architectures further highlights ongoing efforts to reinforce predictability and reliability in distributed systems, particularly in anticipation of next-generation networking requirements [21].
Standardization bodies have made notable progress toward harmonizing the distributed computing ecosystem. ETSI’s Multi-access Edge Computing (MEC) framework [22] defines mechanisms for hosting applications at the network edge, while 3GPP has introduced specifications integrating edge functions into 5G and beyond architectures. ISO/IEC [23] initiatives contribute complementary perspectives by articulating reference models and data exchange standards for smart city systems. Yet, despite these advances, full architectural alignment remains elusive. Differences in data models, control interfaces, orchestration semantics, and administrative governance hinder the creation of unified operational environments that can support cross-domain workflows at scale.
Research programs and pilot platforms have explored various strategies to overcome these challenges. Smart city testbeds have evaluated the deployment of distributed services [7], network slicing, resource allocation, and automated management across heterogeneous infrastructures. Projects investigating predictable and deterministic networking, such as PREDICT-6G [24], demonstrate growing interest in AI-enabled orchestration mechanisms for future distributed systems, emphasizing the need for coordinated control across devices, networks, and cloud platforms. Similarly, other initiatives focused on IoT federation, cross-platform data brokerage, or flexible modular architectures proposed under large-scale digital transformation programs highlight the value of interoperable, multi-domain environments. Nevertheless, existing approaches often remain confined to specific use cases or administrative settings, limiting their adaptability and widespread applicability [25].
Artificial intelligence (AI) and machine learning (ML) play an increasingly important role in orchestrating distributed infrastructures. Deep reinforcement learning, federated learning, and predictive modelling techniques have been applied to optimize resource allocation, anticipate workload fluctuations, and automate decision-making at the network edge. Despite their promise, these technologies introduce challenges related to transparency, explainability, robustness, and computational overhead. These issues are particularly acute in public-service environments such as smart cities, where accountability and reliability are essential.
The literature thus reveals substantial progress in distributed computing, IoT integration, and orchestration techniques, yet significant gaps persist. Current solutions struggle to provide unified cross-domain management, end-to-end interoperability, automated SLA-driven orchestration, and secure, trustworthy collaboration across heterogeneous infrastructures, as evidenced by recent efforts to develop multi-domain deterministic networking frameworks. These limitations underscore the need for a flexible, open, and collaborative platform architecture that can support intelligent, adaptive, and secure service delivery in complex smart city contexts.
Comparative Analysis of Multi-Domain and Fog-Based Architectures
A number of existing platforms aim to support distributed smart city and IoT deployments, including multi-domain orchestration frameworks, federated IoT testbeds, and fog or Multi-access Edge Computing (MEC)-based architectures. While these approaches address specific aspects of scalability, interoperability, or latency reduction, they exhibit limitations when evaluated against the requirements of large-scale, multi-stakeholder urban ecosystems.
Multi-domain orchestration platforms, particularly those originating from cloud-native and 5G ecosystems, typically emphasize centralized coordination and assume homogeneous administrative control. Although they provide powerful automation and service lifecycle management capabilities, their federation models often require participating domains to align with a shared governance or trust framework. This assumption limits their applicability in smart city environments, where stakeholders operate under heterogeneous regulatory, security, and operational constraints and must preserve local autonomy [26].
Federated IoT testbeds and smart city data platforms primarily focus on data sharing and semantic interoperability across heterogeneous sensing infrastructures. Approaches based on common context brokers or standardized data models enable cross-platform data access and experimentation. However, these platforms generally lack integrated mechanisms for end-to-end service orchestration and dynamic edge resource management, relying instead on static deployments or manual coordination between domains [26].
Fog computing and MEC-based architectures extend computation toward the network edge to address latency and bandwidth constraints. Standardized frameworks such as ETSI MEC define reference models for hosting applications at the edge, yet typically assume tight coupling with network operators or predefined trust relationships. Coordination across multiple administrative domains remains limited, and security mechanisms for inter-domain orchestration are often outside the architectural scope of these solutions [26].
In contrast, the architecture proposed in this work explicitly targets the gaps identified in existing approaches by combining three complementary design dimensions. First, it adopts a recursive federated domain model that enables autonomous domains to participate in city-wide services without relinquishing governance or operational sovereignty. Second, it introduces a dedicated secure integration fabric that enforces zero-trust principles for inter-domain communication, providing authenticated and encrypted control-plane connectivity independent of underlying network providers. Third, it supports SLA-driven edge resource orchestration through a split service model, allowing services to be dynamically instantiated and removed in response to contextual events, thereby optimizing resource usage in constrained edge environments.
This comparison highlights that existing solutions typically address federation, security, or edge orchestration in isolation. The proposed architecture integrates these dimensions within a unified framework, enabling secure, interoperable, and adaptive service orchestration across heterogeneous smart city domains.
3. Design of Architecture
3.1. Architectural Principles and Objectives
The proposed platform establishes a flexible, multi-domain ecosystem explicitly designed to overcome the fragmentation inherent in modern urban infrastructures. Its primary objective is to enable a collaborative “system of systems” in which municipal authorities act as the central coordinating entities, while autonomous administrative domains, such as municipal departments, universities, transportation authorities, or port operators, interoperate seamlessly while retaining full sovereignty over their internal resources and operational policies. This federated model reflects the reality that municipalities are responsible for coordinating city-wide services, yet must integrate infrastructures and services operated by independent stakeholders with distinct governance constraints. Urban stakeholders frequently employ heterogeneous security postures, governance models, or compliance requirements. For example, a municipal department may mandate strict regulatory compliance and operational continuity guarantees, while a university research network may permit the onboarding of experimental devices under controlled isolation, and a port authority may enforce certification-based authentication for all operational assets. These differences illustrate why operational sovereignty must be preserved even when domains participate in shared municipal smart-city services.
The rationale for establishing a unified orchestration environment stems from the evolution of smart-city systems toward automated, context-aware, and dynamically coordinated operations. Traditional deployments, built around vertical silos, limit the reuse of contextual information and prevent the composition of multi-domain services. The architecture, therefore, targets the three critical barriers identified in the State of the Art: interoperability, automation, and security. Interoperability is achieved through the adoption of standardized semantic data models, ensuring consistent machine-readability across heterogeneous devices, protocols, and vendor solutions. Automation is supported through policy-driven orchestration mechanisms that replace manual, error-prone provisioning with dynamic workflows capable of responding to real-time conditions. Security is strengthened by introducing a programmable overlay network that isolates inter-domain control traffic from the public internet, ensuring resilient and trustworthy management operations.
Collectively, these principles provide a foundation on which heterogeneous stakeholders can collaborate without abandoning autonomy, while maintaining compliance, operational integrity, and service continuity across smart-city environments.
3.2. Overall System Architecture
The system architecture is organized around a recursive “Federated Domain” model, composed of autonomous Local Domains operating at the edge and a Central Domain that provides cloud-level coordination. This structure is deliberately agnostic to underlying hardware or vendor-specific technologies, allowing individual sites to employ heterogeneous infrastructures ranging from bare-metal servers to virtualized clusters. By abstracting these differences, the platform enables uniform service deployment and lifecycle management across the entire smart-city environment.
The architecture is structured into three primary logical planes, as shown in Figure 1. At the foundation, the Infrastructure Layer provides a unified substrate that accommodates a diverse set of hardware resources, including IoT gateways, edge nodes, and specialized compute platforms. This layer enables the abstraction of local capabilities into a consistent environment upon which services can be deployed. Above this, the Connectivity Plane supports a hybrid mix of communication technologies tailored to the heterogeneous requirements of smart-city applications. Low-power wide-area networks, such as LoRaWAN or NB-IoT, serve massive sensor deployments, while higher-capacity systems, including dedicated Private 5G or Wi-Fi 6 networks, support bandwidth- or latency-critical tasks. These networks can also leverage capabilities such as traffic differentiation or slice-based isolation when available, aligning connectivity behavior with the needs of specific applications and domains.
The Management Layer operates hierarchically, with a Central End-to-End Orchestrator coordinating the actions of distributed Domain Orchestrators. This structure enables city-wide coherence while ensuring that each Local Domain preserves operational autonomy. Local Domain Orchestrators oversee the entire lifecycle of services within their administrative boundary, allowing for local decision-making even during connectivity disruptions. The Central Orchestrator, in turn, integrates these local decisions into a unified operational perspective, supporting coordinated service delivery across geographically distributed sites.
The overall architecture presented in this document is validated within a local domain, which is formed by a multi-site ecosystem that integrates various stakeholders. Each entity functions as a Local Domain with its own infrastructure characteristics and operational constraints. This environment reflects fundamental smart-city requirements: distributed sensing is handled through Local IoT Agents that normalize heterogeneous data sources, including radars, environmental sensors, and industrial equipment, while maintaining compatibility with the semantic data models adopted by the platform. Local autonomy is preserved through the Local Domain Orchestrator, which ensures that essential services remain operational even in the presence of upstream network partitions. This design supports resilient and context-aware service delivery across diverse urban infrastructures.
3.3. Orchestration and Control Framework
The orchestration framework relies on a domain-agnostic engine, which in this case is based on the OpenSlice platform, to deliver Network as a Service (NaaS) capabilities and ensure coherent lifecycle management across heterogeneous smart city environments. Its core function is the dynamic orchestration of software components, enabling the automated deployment, scaling, and termination of Kubernetes-based microservices in response to evolving operational conditions. OpenSlice interacts with Kubernetes through standard interfaces, typically invoking Helm charts or Custom Resource Definitions (CRDs) to manage service instantiation and configuration. This integration enables the orchestrator to standardize the service deployment workflow across Local Domains while maintaining compatibility with existing cloud-native toolchains.
A distinguishing feature of the orchestration framework is its support for a Split Service model. Under this paradigm, resource-intensive components are deployed only when needed, rather than remaining continuously active. This approach reduces baseline resource consumption within Local Domains, which are particularly valuable in constrained edge environments, while ensuring that critical functionality can be instantiated on demand. The Split Service model is governed by an SLA Preservation Closed Loop composed of three stages: continuous monitoring, decision support, and secure actuation. Monitoring modules track performance indicators and contextual variables; decision-making components evaluate SLA compliance and determine whether corrective actions are required; and secure actuation mechanisms apply these changes, such as scaling resources or activating dormant services. Together, these capabilities allow the orchestrator to maintain service guarantees even under fluctuating workload and network conditions.
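The three-stage loop described above (monitoring, decision support, secure actuation) can be sketched in a few lines of Python. This is an illustrative sketch only, not the platform's implementation: the latency metric, the SLA threshold, and the deploy/remove actions are hypothetical placeholders standing in for the real monitoring indicators and the OpenSlice/Kubernetes actuation path.

```python
# Illustrative sketch of the SLA preservation closed loop governing the
# Split Service model: monitor -> decide -> actuate. All names and
# thresholds here are assumptions, not platform constants.
from dataclasses import dataclass

SLA_LATENCY_MS = 100.0  # hypothetical SLA threshold for the example


@dataclass
class Sample:
    """One monitoring observation (decision input)."""
    latency_ms: float


def decide(sample: Sample, service_active: bool) -> str:
    """Decision-support stage: map an observation to a corrective action."""
    if sample.latency_ms > SLA_LATENCY_MS and not service_active:
        return "deploy"   # instantiate the dormant (split) service on demand
    if sample.latency_ms <= 0.5 * SLA_LATENCY_MS and service_active:
        return "remove"   # reclaim constrained edge resources when load subsides
    return "noop"


def closed_loop(samples, service_active=False):
    """Run the loop over a stream of samples and return the actions taken."""
    actions = []
    for s in samples:
        action = decide(s, service_active)
        if action == "deploy":
            service_active = True   # secure actuation would invoke Helm/CRDs here
        elif action == "remove":
            service_active = False
        actions.append(action)
    return actions
```

Running `closed_loop([Sample(120.0), Sample(110.0), Sample(30.0)])` deploys the service on the first SLA violation, holds it while latency remains high, and removes it once load subsides, mirroring the on-demand behavior of the Split Service model.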
To better understand the internal mechanics of the Edge level, Figure 2 details the Local Domain architecture, illustrating how the Local Domain Orchestrator manages the infrastructure through a closed-loop system of monitoring, decision support, and secure actuation.
3.4. Data Management and Integration
The cornerstone of inter-domain communication within the platform is the Secure Integration Fabric (SIF), which provides an encrypted and programmable overlay interconnecting the Central Orchestrator, Local Domain Orchestrators, and distributed data-management components. The SIF ensures that all management and control traffic is transmitted through authenticated tunnels, logically isolated from the public internet and shielded from untrusted transport layers. This separation creates a resilient control plane capable of supporting distributed orchestration even in environments where connectivity conditions fluctuate or where domains rely on heterogeneous network providers.
Data flows across the platform follow a structured and standardized pipeline to ensure consistency, traceability, and semantic alignment. At the IoT Device level, sensors transmit encrypted payloads using their domain-specific access technologies, such as Private 5G, LTE, LoRaWAN, or other local communication substrates, to a Local IoT Agent. This agent performs protocol normalization and preprocessing tasks, converting raw device outputs into machine-readable entities that comply with the platform’s semantic model. At the Edge Node level, these harmonized data elements are forwarded to the Local Context Broker, which maintains a comprehensive view of domain-specific contextual information and supports local applications that require low-latency access to these data streams. At the General Platform level, selected contextual entities are federated into central repositories, enabling cross-domain observability, multi-domain analytics, and city-wide situational awareness.
Interoperability across domains is enforced through the adoption of the ETSI NGSI-LD standard, which defines a common representation for entities and their relationships. This ensures that applications developed in one domain can interpret contextual data originating from another domain without requiring customized adapters or translation logic. In practical deployments, this may include NGSI-LD representations of common smart-city concepts such as WasteContainer, TrafficFlowObserved, AirQualityObserved, or domain-specific industrial assets. By aligning all subsystems with a unified information model, the architecture avoids vendor lock-in, simplifies integration of heterogeneous devices, and facilitates the composition of cross-domain services that rely on a consistent understanding of the urban environment.
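To make the NGSI-LD representation concrete, the sketch below builds a minimal `WasteContainer` entity of the kind a Local IoT Agent might publish to its Context Broker. The URN, the `fillingLevel` attribute, and the single-context form are illustrative; actual deployments would follow the full Smart Data Models definition for this entity type.

```python
# Minimal sketch of an NGSI-LD entity as exchanged between context brokers.
# The identifier and attribute choices are illustrative assumptions.
import json


def make_waste_container(container_id: str, fill_level: float) -> dict:
    """Build an NGSI-LD entity: an id (URN), a type, Property attributes,
    and an @context that resolves the terms used."""
    return {
        "id": f"urn:ngsi-ld:WasteContainer:{container_id}",
        "type": "WasteContainer",
        "fillingLevel": {"type": "Property", "value": fill_level},
        "@context": [
            "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"
        ],
    }


entity = make_waste_container("cam-001", 0.78)
payload = json.dumps(entity)  # body for a POST to the broker's /entities endpoint
```

Because every domain emits the same structure, an application in one domain can consume `entity["fillingLevel"]["value"]` from another domain without any custom translation logic, which is precisely the interoperability property the standard provides.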
To demonstrate how the system achieves cross-domain interoperability, Figure 3 illustrates the Data Ingestion and Semantic Harmonization Layer, where raw data from heterogeneous devices is normalized and mapped into the unified NGSI-LD standard.
3.5. Security Mechanisms
The architecture adopts a comprehensive security framework designed to support collaboration across heterogeneous administrative domains while maintaining strict control over trust relationships. At its core, the system follows a Zero Trust model, whereby no domain implicitly trusts another, regardless of organizational affiliation or network proximity. All cross-domain interactions are mediated through the Secure Integration Fabric (SIF), which enforces authentication, authorization, and policy validation at each ingress and egress point. This ensures that only verified entities can exchange control or data flows, thereby preventing lateral movement and reducing the attack surface associated with inter-domain communication.
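The zero-trust admission rule enforced at each SIF ingress point can be reduced to a simple invariant: a cross-domain flow is admitted only if the caller's identity has been verified and the specific source-to-destination pair is explicitly allowed by policy. The toy sketch below illustrates this invariant; the domain names and the flat policy set are hypothetical stand-ins for the platform's actual policy engine.

```python
# Toy illustration of zero-trust admission at a SIF ingress point:
# nothing is trusted by default, so a flow needs BOTH a verified
# identity and an explicit policy entry. Policy contents are assumptions.

ALLOWED_FLOWS = {
    ("university", "municipality"),  # example: research data feed
    ("port", "municipality"),        # example: operational telemetry
}


def admit(src_domain: str, dst_domain: str, identity_verified: bool) -> bool:
    """Admit a cross-domain flow only when authenticated and explicitly allowed."""
    return identity_verified and (src_domain, dst_domain) in ALLOWED_FLOWS
```

Note that the reverse direction (`municipality` to `port`) is denied unless separately listed: policy is directional, which is what prevents lateral movement between domains.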
Trust establishment among stakeholders is achieved through mutual authentication mechanisms embedded within the SIF overlay. The Central Orchestrator communicates with each Local Domain exclusively through standardized and access-controlled APIs, eliminating the need for direct exposure of internal infrastructure components. This approach enables each domain to maintain its own internal security posture while participating safely in a wider operational ecosystem. Policy enforcement can be tailored to domain-specific requirements, accommodating environments that demand stringent regulatory compliance as well as those supporting more flexible research or innovation activities.
Privacy preservation is an inherent aspect of the architectural design. High-resolution or sensitive sensor data, such as fine-grained environmental measurements or usage logs, is processed within the Local Domain and never transmitted externally in raw form. Instead, only aggregated, anonymized, or semantically enriched NGSI-LD entities are exchanged across domains or forwarded to the central platform. Privacy techniques, such as data masking, contextual aggregation, or temporal down-sampling, can be applied at the Local IoT Agent or Context Broker level, ensuring compliance with data-protection regulations, such as the GDPR, while maintaining the utility of shared information for analytics and cross-domain service orchestration. Collectively, these mechanisms ensure that the platform supports secure, trustworthy, and privacy-preserving collaboration among diverse stakeholders operating within a shared smart-city environment.
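The temporal down-sampling mentioned above can be sketched as a small aggregation step applied at the Local IoT Agent before anything leaves the domain: raw high-resolution readings stay local, and only per-window means with coarse timestamps are exported. The window size and tuple layout are assumptions for illustration.

```python
# Hedged sketch of the privacy step: raw samples remain in the Local
# Domain; only temporally down-sampled aggregates are shared externally.
from statistics import mean


def downsample(readings, window=5):
    """Aggregate consecutive readings into per-window means.

    readings: list of (timestamp, value) pairs ordered by time.
    Returns one (last_timestamp, mean_value) pair per window, so the
    exported stream carries neither fine-grained values nor timestamps.
    """
    out = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        ts = chunk[-1][0]  # keep only the coarse window-closing timestamp
        out.append((ts, mean(v for _, v in chunk)))
    return out
```

For example, four raw readings aggregated with `window=2` yield two exported points, halving temporal resolution while preserving the trend needed for cross-domain analytics.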
3.6. Alignment with Standards
The architecture has been designed to ensure long-term sustainability, interoperability, and vendor neutrality by aligning with key industry standards across edge computing, service management, and contextual information modelling. These standards provide the foundational principles that guide the design of each platform component, ensuring that the system can evolve alongside emerging technologies while remaining compatible with existing ecosystems.
The platform aligns closely with the ETSI Multi-access Edge Computing (MEC) framework, which defines mechanisms for deploying and managing applications at the network edge. The Local Domain model reflects MEC architectural principles by enabling the hosting of distributed applications, low-latency data processing, and the exposure of domain-specific services. When applicable, MEC specifications, such as ETSI MEC 003 [22], which outlines the MEC reference architecture, serve as guiding references to maintain compatibility with standardized edge environments and to support future extensibility toward additional MEC-compliant domains.
Semantic interoperability is ensured through the adoption of ETSI NGSI-LD [27], which provides the underlying data model for representing entities, their attributes, and the relationships between them. This standard forms the basis for the platform’s context management strategy, enabling heterogeneous Local Domains to share structured information without requiring custom data translation logic. By adhering to NGSI-LD conventions, the architecture prevents vendor lock-in, promotes semantic consistency, and facilitates the development of portable applications that can operate across diverse smart-city environments.
Service orchestration interfaces are built upon TM Forum Open APIs, particularly TMF633 for Service Catalog Management [28]. This interface governs how services are described, exposed, and consumed across domains. By using TMF633 as the mechanism for presenting and retrieving service descriptors, such as Helm charts or other deployment artefacts, the platform ensures that orchestration workflows remain standardized and agnostic to the underlying implementation of the orchestration engine. This also facilitates the substitution or extension of orchestration components without jeopardizing interoperability.
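As a concrete illustration of how a client might address a TMF633-style catalog, the sketch below constructs the URL for listing service specifications. The resource path follows TMF633 naming conventions; the host, API version, and query filter are assumptions for illustration and not taken from the platform described here.

```python
# Hedged sketch: building a GET URL for serviceSpecification resources
# in a TMF633-style Service Catalog Management API. Host and version
# are illustrative assumptions.
from urllib.parse import urlencode


def service_spec_url(base: str, name: str = "", version: str = "v4") -> str:
    """Return the URL for listing (or filtering by name) service specifications."""
    url = f"{base}/serviceCatalogManagement/{version}/serviceSpecification"
    if name:
        url += "?" + urlencode({"name": name})  # optional attribute filter
    return url
```

An orchestration client would issue a GET against this URL and receive service specification records whose attachments point at the actual deployment artefacts (e.g., Helm charts), keeping the catalog interface decoupled from the orchestration engine behind it.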
Taken together, these standards provide a cohesive framework that enables the architecture to interoperate with existing systems, support modular growth, and maintain compatibility with evolving industry practices. Their incorporation ensures that the proposed platform remains open, flexible, and aligned with established guidelines for distributed service management in smart-city environments.
4. Validation of Architecture
This section provides a comprehensive assessment of the architectural framework through empirical deployment testing within a smart city cluster. The evaluation is carried out in a real environment: a local domain that offers a realistic yet manageable setting reflecting the dynamics of a small-scale smart city. It involves a wide range of services, infrastructures, and user interactions, closely resembling those found in modern urban environments and making it a suitable setting for validating smart city solutions before their deployment at city scale.
4.1. Validation Methodology
The validation of the proposed architecture was conducted through a real-world deployment in a local domain, which was selected as a representative urban microenvironment that mirrors the heterogeneity, operational constraints, and multi-stakeholder characteristics of modern smart cities. The objective of the validation process was to assess the correctness, robustness, and operational viability of the architectural components under realistic conditions, focusing on their ability to support cross-domain orchestration, secure data integration, and dynamic service lifecycle management.
The evaluation followed a methodology structured around five dimensions: (i) functional correctness, assessing whether each architectural module, such as the Domain Orchestrator, Secure Integration Fabric, and NGSI-LD data pipeline, operated as designed; (ii) SLA-oriented behavior, examining whether the Split Service orchestration model responded appropriately to threshold-based triggers; (iii) resource efficiency, analyzing the reduction in baseline compute usage achieved through on-demand microservice instantiation; (iv) data integration fidelity, evaluating the consistency and completeness of harmonized context updates reproduced in the observability platform; and (v) security posture, ensuring that all cross-domain interactions occurred exclusively over authenticated, encrypted channels.
Metrics were collected from monitoring systems integrated into the test environment and from logs generated by the orchestration and data management components. The methodology emphasized empirical behavior under realistic operational conditions rather than synthetic benchmarking, ensuring that the conclusions reflect deployable smart city scenarios.
4.2. Description of the Smart City Test Environment
The validation activities were conducted within a local domain characterized by a heterogeneous infrastructure, mixed administrative responsibilities, and operational dynamics typical of contemporary smart city environments. The test environment integrates both physical and virtual resources into a unified edge cloud continuum, enabling the deployment, execution, and monitoring of the architectural components under realistic conditions.
The physical infrastructure comprises a distributed network of IoT sensing devices, focused in this deployment on a set of fill-level sensors installed across municipal waste containers. These devices communicate via a Narrowband IoT (NB-IoT) access network, chosen for its wide coverage and efficient support for low-power urban sensing. For testing and validation, data is first ingested through a commercial IoT platform before being forwarded to the local platform components, ensuring compatibility with existing sectoral systems and supporting progressive integration.
The local computational environment hosts all key platform functions required for the experiment. This includes a Local Context Broker implementing the NGSI-LD data model, the Local Domain Orchestrator based on the OpenSlice framework, the Secure Integration Fabric (SIF) Router responsible for establishing encrypted control plane tunnels, and an edge-level Kubernetes cluster that executes the microservices involved in monitoring and actuation workflows. The co-location of these components within the Local Domain ensures low-latency data handling and preserves privacy by processing high-resolution sensing information locally.
This integrated setup provides a realistic representation of a smart city operational domain, where heterogeneous devices, administrative boundaries, and connectivity substrates must interoperate seamlessly. It also allows for accurate assessment of resource usage, orchestration responsiveness, data integration fidelity, and secure coordination across the distributed architectural layers.
4.3. Deployment of the Platform Components
The deployment phase focused on instantiating the three foundational components of the platform, the Secure Integration Fabric (SIF), the Local Domain Orchestrator based on the OpenSlice framework, and the NGSI-LD compliant Orion Context Broker, along with the supporting compute environment required for executing the use-case-specific microservices. The objective of this deployment was to validate the operational behavior of the architectural stack under realistic smart city conditions and to confirm that orchestration, data integration, and secure communication workflows function consistently across heterogeneous infrastructure elements.
The Local Domain was provisioned with an edge-level Kubernetes cluster acting as the execution substrate for microservices involved in monitoring, event detection, and actuation. These workloads were deployed through OpenSlice, which interacts with the Kubernetes API using Helm charts to ensure consistent lifecycle management across all Local Domains. This mechanism provides a unified way to deploy, configure, and terminate containerized services, thereby enabling reproducible orchestration behavior across deployments.
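The Helm-driven lifecycle described above can be sketched as a thin command builder. The chart path, namespace, and value names below are placeholders, and a real orchestrator would execute the resulting command (e.g. via `subprocess.run`) rather than return it.

```python
# Illustrative wrapper mirroring how an orchestrator could drive service
# lifecycle on the edge Kubernetes cluster through Helm. All names are
# placeholders, not taken from the actual OpenSlice deployment.
def deploy_service(release, chart, namespace, values=None):
    """Build the Helm command that installs or upgrades one microservice."""
    cmd = ["helm", "upgrade", "--install", release, chart,
           "--namespace", namespace, "--create-namespace"]
    for key, val in (values or {}).items():
        cmd += ["--set", f"{key}={val}"]   # inject runtime configuration
    return cmd

def teardown_service(release, namespace):
    """Build the Helm command that removes a service to reclaim edge resources."""
    return ["helm", "uninstall", release, "--namespace", namespace]

print(deploy_service("continuous-monitoring", "charts/continuous-monitoring",
                     "waste-mgmt", {"fillLevelThreshold": 80}))
```

Expressing every lifecycle operation as the same templated command is what makes instantiation reproducible across Local Domains regardless of the underlying hardware.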
The Secure Integration Fabric (SIF) Router was deployed at the boundary of the Local Domain to provide a dedicated, encrypted control plane channel between the Central Orchestrator and the Local Domain. All management operations, including service instantiation, policy updates, and orchestration triggers, were transmitted exclusively through this authenticated overlay, ensuring isolation from untrusted transport layers and preventing exposure of internal components. The SIF also ensures that local orchestration workflows remain operational even during temporary upstream connectivity disruptions.
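The concrete tunnelling technology of the SIF is not detailed here, but its security posture, mutual authentication over an encrypted channel with legacy protocol versions refused, can be sketched with standard TLS configuration. Certificate paths are placeholders.

```python
import ssl

# Sketch of the mutual-TLS posture the SIF control plane implies; this is a
# stand-in for whatever tunnelling stack the SIF actually uses.
def sif_context(ca_cert=None, client_cert=None, client_key=None):
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_cert)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3   # refuse legacy protocol versions
    ctx.verify_mode = ssl.CERT_REQUIRED            # the peer must authenticate
    if client_cert:
        ctx.load_cert_chain(client_cert, client_key)  # our side of mutual auth
    return ctx

ctx = sif_context()
print(ctx.minimum_version, ctx.verify_mode)
```

The key property mirrored here is that there is no fallback path: a connection either satisfies the authenticated, encrypted policy or is refused outright.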
The Orion Context Broker was deployed to provide semantic data management fully aligned with the NGSI-LD standard. A Local IoT Agent was configured to ingest data from the NB-IoT sensor infrastructure, initially accessible through a commercial IoT platform, and to convert raw payloads into NGSI-LD entities. These entities were stored and maintained within the Context Broker, enabling local applications and microservices to operate on harmonized contextual information while supporting selective federation of relevant context updates toward central observability platforms.
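The agent's normalization step might look like the following sketch, in which the raw payload fields (`device_id`, `fill_pct`, `temp_c`, `timestamp`) and the target attribute names are assumptions rather than the agent's actual schema.

```python
# Sketch of the Local IoT Agent's normalization step: mapping a decoded
# NB-IoT payload to an NGSI-LD entity fragment. Field and attribute names
# are illustrative.
def to_ngsi_ld(raw):
    return {
        "id": f"urn:ngsi-ld:WasteContainer:{raw['device_id']}",
        "type": "WasteContainer",
        "fillLevel": {
            "type": "Property",
            "value": raw["fill_pct"] / 100.0,   # normalize percent to [0, 1]
            "observedAt": raw["timestamp"],
        },
        "temperature": {
            "type": "Property",
            "value": raw["temp_c"],
            "unitCode": "CEL",                  # UN/CEFACT code for Celsius
        },
    }

entity = to_ngsi_ld({"device_id": "WC-042", "fill_pct": 67,
                     "temp_c": 21.5, "timestamp": "2024-05-01T10:00:00Z"})
print(entity["id"], entity["fillLevel"]["value"])
```

In deployment the resulting fragment would be sent to the broker's NGSI-LD HTTP API; the exact endpoint and batching strategy depend on the agent configuration.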
Together, these deployed components demonstrate that the platform can be instantiated across diverse smart city infrastructures while preserving interoperability and functional coherence. The deployment confirms that the orchestration engine, secure control plane overlay, and semantic data management layer collectively support continuous monitoring, event-driven orchestration, and privacy-preserving data flows in a realistic operational domain.
4.4. Experimental Scenarios
The validation was centered on a representative municipal service scenario designed to assess the platform’s ability to support automated, event-driven orchestration across the Local Domain. The selected scenario, resource-efficient waste management, reflects a common operational challenge in smart cities, where large-scale sensor deployments, heterogeneous data sources, and dynamic service needs must be coordinated reliably and with minimal manual intervention.
The workflow implemented in the experiment evaluated the interaction of the three core platform components with the supporting microservices deployed at the edge. As illustrated in Figure 4, the scenario followed a sequential chain of operations:
1. Data Ingestion: The workflow initiates at the physical layer, where waste containers transmit encrypted fill-level data payloads via the Narrowband IoT (NB-IoT) network. These payloads are received by the Local IoT Agent, which acts as the initial gateway for protocol normalization.
2. Semantic Entity Management: The Local IoT Agent processes the raw payload and interacts with the Local Context Broker (Orion). If it is the first transmission, the agent creates a new WasteContainer entity; for subsequent transmissions, it updates the existing attributes (e.g., Fill_Level, Temperature). This ensures that the Context Broker always maintains the most current, harmonized state of the physical infrastructure.
3. Upstream Federation: While local operations remain autonomous, selected context data is federated upstream to the Central Data Repository & Analytics module in the Cloud Domain. This enables city-wide historical analysis and long-term storage without overloading the local edge resources.
4. Service Initialization: The service lifecycle is triggered from the top down. The Central E2E Orchestrator initiates the waste management service by sending a secure command to the Local Domain Orchestrator (OpenSlice). This communication occurs exclusively through the Secure Integration Fabric (SIF), ensuring that the control plane remains encrypted and isolated from public networks.
5. Monitoring Service Deployment: Upon receiving the command, the Local Domain Orchestrator instructs the edge Kubernetes Cluster to deploy the first microservice: the Continuous Monitoring application. During this instantiation, the orchestrator injects the specific operational configuration, in this case, the fill-level threshold (e.g., >80%), directly into the application runtime.
6. Context Subscription: Once running, the Continuous Monitoring application subscribes to the Local Context Broker. It continuously retrieves real-time updates for the WasteContainer entity to evaluate its Fill_Level against the injected threshold configuration.
7. Event Trigger: When the monitoring application detects that the fill level has exceeded the defined limit, it generates an internal alert. Crucially, it does not handle the resolution itself but instead transmits a specific API trigger back to the Local Domain Orchestrator. This signal indicates that a critical event has occurred requiring a dedicated response.
8. Actuation Service Deployment: In response to the trigger, the Local Domain Orchestrator dynamically instantiates the second microservice: the Secure Actuation application. This step validates the “Split Service” paradigm, where resource-intensive or specific-purpose logic is deployed only on demand rather than running continuously.
9. Entity State Update: The newly deployed Secure Actuation application immediately interacts with the Local Context Broker to update the WasteContainer entity, marking its status as “Collection Pending.” This ensures that the system’s semantic state accurately reflects the pending operational action.
10. External Notification and Teardown: Finally, the Secure Actuation application transmits an alert to the external waste collection company to schedule an urgent pickup. Once this task is successfully confirmed, the Local Domain Orchestrator automatically undeploys the Actuation service to reclaim edge computational resources, completing the efficiency loop.
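The event-driven core of this chain, steps 6 to 8, can be condensed into a notification callback. The threshold constant, the trigger payload fields, and the orchestrator hand-off shown here are illustrative, not the actual APIs of the deployed microservices.

```python
# Condensed sketch of the monitoring application's role: evaluate each
# context update against the injected threshold and, on breach, hand off
# to the orchestrator instead of resolving the event itself.
FILL_LEVEL_THRESHOLD = 0.80   # injected by the orchestrator at deploy time

def on_context_notification(notification, trigger_orchestrator):
    """Callback for NGSI-LD subscription notifications."""
    fired = []
    for entity in notification.get("data", []):
        level = entity["fillLevel"]["value"]
        if level > FILL_LEVEL_THRESHOLD:      # the event-trigger condition
            fired.append(trigger_orchestrator({
                "event": "FILL_LEVEL_EXCEEDED",
                "entityId": entity["id"],
                "observedLevel": level,
            }))
    return fired

# Example with a stubbed orchestrator endpoint:
calls = []
result = on_context_notification(
    {"data": [{"id": "urn:ngsi-ld:WasteContainer:WC-042",
               "fillLevel": {"value": 0.91}}]},
    trigger_orchestrator=lambda payload: calls.append(payload) or "actuation-deployed",
)
print(result)
```

The design point the sketch captures is the separation of concerns: the monitoring service only detects and signals, while the orchestrator owns the decision to instantiate the actuation service.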
These experimental steps collectively evaluated the system’s capability to support automated, data-driven workflows, ensuring functional alignment across the sensing, integration, orchestration, and actuation layers. The scenario also provided detailed operational traces enabling subsequent analysis of orchestration efficiency, data fidelity, and control plane resilience.
4.5. Evaluation Outcomes
The validation activities yielded a comprehensive set of observations demonstrating the operational soundness, efficiency, and resilience of the proposed architecture within the smart city domain. These findings are based on orchestration traces, system logs, NGSI-LD data flow analytics, and monitoring information collected during the execution of the experimental scenario.
From a functional perspective, all core platform components, the Secure Integration Fabric (SIF), the Local Domain Orchestrator (OpenSlice), and the NGSI-LD Context Broker, operated in accordance with the architectural specification. The SIF consistently maintained authenticated and encrypted control plane tunnels, ensuring that service lifecycle commands, orchestration triggers, and policy updates were exchanged securely and without exposure to untrusted networks. The Local Domain Orchestrator reliably instantiated and terminated the on-demand actuation service in response to threshold-based triggers, validating the correct behavior of the Split Service orchestration paradigm. The Orion Context Broker upheld a coherent semantic representation of waste container fill levels, confirming the robustness of the data normalization and context management pipeline.
The evaluation also demonstrated notable improvements in resource efficiency relative to traditional monolithic deployments. Because actuation logic was instantiated only when required, the baseline computational footprint of the Local Domain remained minimal. To empirically substantiate this, we monitored the Kubernetes cluster resources during the lifecycle of the microservices.
Table 1 details the specific resource consumption across the execution phases.
As shown in Table 1, the baseline idle state consumed only 0.21 mCPU and 14.12 MiB of memory. During the actuation phase, CPU consumption increased momentarily to 21.74 mCPU to handle the container initialization and alert dispatch, before returning immediately to baseline levels following teardown.
This on-demand approach offers quantifiable savings compared to a traditional deployment model where both monitoring and actuation microservices run continuously. In a traditional “always-on” scenario, the idle memory footprint would be approximately 29.8 MiB (combining the footprints of both services). By using the Split Service model, we achieved a 52.6% reduction in memory reservation during idle periods. Furthermore, the latency measured from the event trigger (fill-level threshold exceeded) to the full instantiation of the actuation service was approximately 6.878 s. This response time, which includes orchestration overhead, is well within the acceptable latency bounds for delay-tolerant smart city applications, such as waste management, confirming the model’s suitability for edge environments where resources are constrained and sustained workloads may not be feasible.
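The quoted idle-memory saving follows directly from the figures above:

```python
# Reproducing the idle-memory saving from the measured figures: the Split
# Service model keeps only the monitoring service resident, while an
# always-on deployment would also keep the actuation service loaded.
idle_split = 14.12       # MiB, monitoring service only
idle_always_on = 29.8    # MiB, both services resident

saving = 1 - idle_split / idle_always_on
print(f"{saving:.1%}")   # ≈ 52.6% reduction in idle memory reservation
```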
Beyond the quantitative resource efficiency, the proposed architecture specifically addresses both deployment complexity and operational overhead. Deployment complexity, which is often a barrier in heterogeneous smart city environments, is mitigated through the abstraction of the Infrastructure Layer and the use of standardized Helm charts driven by OpenSlice. This allows for uniform and reproducible service instantiation regardless of the underlying hardware, streamlining the initial onboarding process and eliminating the need for manual, site-specific configurations. Regarding operational overhead, the experimental validation of the Split Service model demonstrates a shift from static, maintenance-heavy deployments to an automated, on-demand paradigm. By coupling the SLA Preservation Closed Loop with event-driven instantiation, the system minimizes the need for continuous manual supervision and reduces the baseline maintenance surface. This is evidenced by the system’s ability to autonomously manage the lifecycle of actuation services, deploying them only when critical thresholds are reached.
The results further indicated strong fidelity in semantic data integration. All NB-IoT sensor payloads were successfully ingested, normalized, and transformed into NGSI-LD entities without semantic loss or schema inconsistency. These entities propagated correctly through the Local Context Broker and into the central observability environment when required, demonstrating that the platform’s data federation mechanisms preserve consistency across domains. Cross-checks between local and central data views confirmed that the harmonization process avoided duplication, temporal misalignment, and schema divergence.
The architecture additionally proved robust in terms of security and trust management. All orchestration interactions were confined to the encrypted control plane overlay established by the SIF, with no fallback to unsecured communication paths. Mutual authentication was successful in all observed cases, and no unauthorized ingress attempts reached internal domain components. The Local Domain continued to enforce its own security posture and access control policies independently of upstream conditions, validating the platform’s suitability for multi-stakeholder urban environments.
Finally, the system demonstrated strong resilience during temporary fluctuations in the upstream network. When connectivity to the Central Orchestrator degraded, the Local Domain Orchestrator and the Context Broker continued to operate normally, ensuring uninterrupted monitoring and local decision-making. Once connectivity was restored, the domain reintegrated into the broader orchestration environment without requiring manual intervention. This behavior is essential for maintaining operational continuity in real smart city settings, where network variability is expected.
Collectively, these results confirm that the proposed architecture fulfills its intended goals: enabling autonomous, secure, interoperable, and resource-efficient orchestration across heterogeneous smart city domains. The findings validate the architecture’s practical deployability and support its relevance as a foundation for future large-scale urban digital ecosystems.
5. Open Issues and Future Work
The validation presented in this article confirms the functional correctness of the platform’s core pillars: the Secure Integration Fabric (SIF), the OpenSlice-based orchestration engine, and the NGSI-LD semantic data pipeline. However, to fully realize the vision of a “system of systems” for smart cities, subsequent phases of the work must address the implementation of advanced human-machine interfaces and stress-test the federated architecture under complex, multi-site conditions.
5.1. Validation of Conversational Observability
While the current deployment validation relied on standard dashboard visualizations, this approach may scale poorly as the number of domains and entities increases. To address this, a primary objective for future development is the creation and integration of a Large Language Model (LLM) interface intended to function as a “conversational observability” layer. This addition is designed to reduce operational complexity by enabling operators to issue natural-language queries to retrieve status information, identify configuration inconsistencies, or trigger management workflows, rather than interacting directly with low-level orchestration APIs.
To address the challenges of reliability and “hallucinations” inherent in generative AI, the planned implementation will strictly adhere to a Retrieval Augmented Generation (RAG) process. In this proposed model, the LLM will consult an up-to-date knowledge base populated with system documentation, real-time monitoring information, and service descriptors. This design ensures that generated responses, such as answering specific questions like “What is the current fill status of this container?”, will remain aligned with the current platform state and valid operational data, avoiding inaccuracies that could compromise operational reliability. The future validation of this component will focus on two critical dimensions:
- Accuracy over Latency: Unlike the real-time control loops, the conversational interface will prioritize the correctness of the information over sub-second latency. A response time of several seconds is considered acceptable, provided the data is grounded strictly in the current NGSI-LD entities stored in the Context Broker.
- Read-Only Safety: To maintain the system’s security posture, the LLM will be restricted to an informational role. It will not be authorized to trigger orchestration commands or modify system states, ensuring that “human-in-the-loop” oversight is preserved for all actuation decisions.
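A deliberately simplified sketch of the planned read-only RAG flow, with naive keyword scoring standing in for a real embedding index and a string template standing in for the LLM call; both stand-ins, and the refusal message, are illustrative assumptions.

```python
# Read-only RAG sketch: retrieve current NGSI-LD entities relevant to the
# question, then answer only from the retrieved context, refusing when no
# grounding data is available.
def retrieve(question, entities, k=1):
    """Rank entities by naive keyword overlap with the question (a stand-in
    for semantic retrieval over the knowledge base)."""
    words = set(question.lower().split())
    scored = sorted(
        entities,
        key=lambda e: len(words & set(str(e).lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(question, entities):
    context = retrieve(question, entities)
    if not context:
        return "No grounding data available."   # refuse rather than hallucinate
    e = context[0]
    # An LLM would phrase this; here a template stands in for generation.
    return f"{e['id']} reports fill level {e['fillLevel']['value']:.0%}."

broker_state = [
    {"id": "urn:ngsi-ld:WasteContainer:WC-042",
     "fillLevel": {"value": 0.91}},
]
print(answer("What is the current fill status of WC-042?", broker_state))
```

Note that the flow exposes no write path at all, matching the read-only safety requirement: the retrieval layer can only query state, never mutate it.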
5.2. Multi-Domain Replication and Scalability
The architecture is proposed as a “recursive federation” model capable of supporting autonomous local domains. While the current study validated this within a single local domain, the next phase will expand to a full multi-domain ecosystem to validate the platform’s location independence and replicability.
This expansion will involve testing the Central Cloud Layer, which coordinates multiple Local Domains, regardless of their physical proximity. This setup aims to test two critical scalability dimensions:
- Geographical Independence via SIF: A second Local Domain will be instantiated at a distinct geographical location, connected to the Central Orchestrator via the Secure Integration Fabric. This will validate that the encrypted overlay network can maintain a stable control plane and enable seamless orchestration even when domains are separated by significant network distance and latency.
- Service Heterogeneity and Isolation: To validate the platform’s ability to handle diverse urban workloads, the new Local Domain will not be limited to a single application. Instead, it will simultaneously host three distinct smart city services on the same infrastructure:
  - Air Quality Monitoring: A latency-tolerant service processing periodic environmental data.
  - Smart Building Room Usage: An event-driven service monitoring occupancy rates.
  - Traffic Radar Monitoring: A high-bandwidth service requiring real-time processing of mobility flows.
This multi-service deployment will serve as a stress test for the Infrastructure Layer. Specifically, it will validate the use of Kubernetes Namespaces to enforce logical isolation, ensuring that the resource-intensive traffic analysis does not degrade the performance of the air quality or building monitoring services. This step is essential to confirm that the “Split Service” orchestration model remains efficient when multiple services contend for edge resources.
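The namespace-based isolation could be expressed as one Namespace plus one ResourceQuota per service. The quota figures below are illustrative assumptions, not measured requirements of the planned services.

```python
# Sketch of per-service isolation: each smart city service gets its own
# namespace with a ResourceQuota so the resource-intensive traffic workload
# cannot starve the others. Quota values are illustrative.
def namespaced_quota(service, cpu, memory):
    """Build Namespace + ResourceQuota manifests for one service."""
    ns = {"apiVersion": "v1", "kind": "Namespace",
          "metadata": {"name": service}}
    quota = {"apiVersion": "v1", "kind": "ResourceQuota",
             "metadata": {"name": f"{service}-quota", "namespace": service},
             "spec": {"hard": {"limits.cpu": cpu, "limits.memory": memory}}}
    return ns, quota

services = {
    "air-quality": ("500m", "256Mi"),    # latency-tolerant, periodic
    "room-usage": ("500m", "256Mi"),     # event-driven occupancy
    "traffic-radar": ("2", "2Gi"),       # high-bandwidth, real-time
}
manifests = [namespaced_quota(name, cpu, mem)
             for name, (cpu, mem) in services.items()]
print([ns["metadata"]["name"] for ns, _ in manifests])
```

With hard limits enforced per namespace, a burst in the traffic-analysis workload is capped by its own quota rather than contending freely for shared edge resources.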
6. Conclusions
This work introduced a flexible, multi-domain architecture designed to address the fragmentation and complexity inherent in modern smart city infrastructures. By leveraging a recursive federation model, the platform successfully reconciles the need for centralized city-wide coordination with the operational sovereignty required by independent administrative domains.
The validation conducted within the local domain of the waste management pilot demonstrated the robust performance of the core architectural pillars. The Secure Integration Fabric (SIF) proved effective in establishing a resilient, encrypted control plane, while the OpenSlice-based orchestration engine successfully executed dynamic, split-service workflows that optimize edge resource consumption. Furthermore, the adoption of the ETSI NGSI-LD standard ensured semantic interoperability, as evidenced by the high fidelity of harmonized data integration observed in the operational dashboards. The experimental results confirm that the system can reliably ingest, normalize, and visualize heterogeneous sensor data while maintaining strict security boundaries.
Looking forward, the architecture is positioned to evolve from a single-site deployment into a fully distributed “system of systems.” Future developments will focus on validating the platform’s replicability through a multi-site ecosystem that integrates a Central Cloud Layer with geographically dispersed Local Domains. This expansion will not only stress-test the network resilience of the federation but also evaluate the platform’s multi-tenancy capabilities by hosting diverse simultaneous services—including air quality monitoring, smart building usage, and traffic analysis—within isolated Kubernetes namespaces. Finally, the integration of a Retrieval Augmented Generation (RAG) based conversational layer will enhance operational observability, providing intuitive and natural language access to system insights while maintaining strict safeguards against reliability risks.