Article

Toward an Integrated IoT–Edge Computing Framework for Smart Stadium Development

by Nattawat Pattarawetwong, Charuay Savithi * and Arisaphat Suttidee
Mahasarakham Business School, Mahasarakham University, Mahasarakham 44150, Thailand
* Author to whom correspondence should be addressed.
J. Sens. Actuator Netw. 2026, 15(1), 15; https://doi.org/10.3390/jsan15010015
Submission received: 13 December 2025 / Revised: 20 January 2026 / Accepted: 21 January 2026 / Published: 1 February 2026
(This article belongs to the Section Big Data, Computing and Artificial Intelligence)

Abstract

Large sports stadiums require robust real-time monitoring due to high crowd density, complex spatial configurations, and limited network infrastructure. This research evaluates a hybrid edge–cloud architecture implemented in a national stadium in Thailand. The proposed framework integrates diverse surveillance subsystems, including automatic number plate recognition, face recognition, and panoramic cameras, with edge-based processing to enable real-time situational awareness during high-attendance events. A simulation based on the stadium’s physical layout and operational characteristics is used to analyze coverage patterns, processing locations, and network performance under realistic event scenarios. The results show that geometry-informed sensor deployment ensures continuous visual coverage and minimizes blind zones without increasing camera density. Furthermore, relocating selected video processing tasks from the cloud to the edge reduces uplink bandwidth requirements by approximately 50–75%, depending on the processing configuration, and stabilizes data transmission during peak network loads. These findings suggest that processing location should be considered a primary architectural design factor in smart stadium systems. The combination of edge-based processing with centralized cloud coordination offers a practical model for scalable, safety-oriented monitoring solutions in high-density public venues.

1. Introduction

Contemporary sports stadium management encompasses far more than routine facility operations. During large-scale events, administrators must coordinate safety, crowd movement, infrastructure performance, and incident response in rapidly evolving and unpredictable conditions. In these settings, even brief delays in data processing or information dissemination can diminish situational awareness and reduce the effectiveness of operational responses. As a result, digital technologies are now central to stadium management, supporting continuous monitoring, timely decision-making, and coordinated control across complex and heterogeneous systems. Recent smart stadium initiatives globally have increasingly adopted Internet of Things (IoT) technologies, wireless communication networks, hybrid edge–cloud architectures, and data analytics to enhance operational efficiency, safety, and service quality [1].
Research on large-scale monitoring environments demonstrates that effective real-time operation results from the coordinated integration of sensing, communication, and computation across multiple system layers, rather than from isolated technologies. In this context, hybrid edge–cloud architectures have become prominent due to their ability to support latency-sensitive analytics while maintaining centralized coordination and long-term data management. Edge-based processing is especially important for video-intensive applications, where large data volumes and strict timing requirements place continuous demands on network resources. By relocating analytical tasks closer to data sources, edge-based processing reduces latency, stabilizes data transmission, and alleviates uplink congestion, establishing it as a widely accepted paradigm for large-scale, time-critical systems [2,3,4]. Recent developments indicate a shift from basic edge-assisted acceleration to edge intelligence, where analytical and learning capabilities are deployed at the network edge to support real-time decision-making under stringent latency and bandwidth constraints in video-intensive environments [5,6].
Stadiums now serve not only as event venues but also as multifunctional public assets integrated within broader urban and social ecosystems. Despite this expanded role, many large national stadiums continue to rely on legacy infrastructure, where core subsystems such as electrical services, water and plumbing, environmental control, and security have developed independently. Rajamangala National Stadium, the largest sporting venue in Thailand, illustrates these challenges. Although it regularly hosts major national and international events, its operational subsystems remain fragmented and are often supported by outdated, non-interoperable platforms. The lack of real-time operational data limits situational awareness, delays coordinated incident response, and restricts the stadium’s ability to comply with current international standards for large-scale venue management.
Instead of recommending the complete replacement of existing infrastructure, this study adopts an integration-oriented approach to smart stadium development. The proposed framework is structured around three complementary enablers: IoT-based sensing, edge-based processing, and smart digital applications. IoT technologies enable continuous data collection from diverse sources, such as surveillance cameras, access control systems, and environmental sensors, providing a real-time representation of stadium conditions [1]. Edge-based processing addresses the challenges of high-volume data streams, particularly video, by conducting computationally intensive analysis near the data sources. Recent research in mobile edge computing highlights the importance of jointly optimizing task offloading and resource allocation under latency and security constraints, especially in complex, data-intensive environments [7]. Empirical studies in sports event and venue management further demonstrate that edge-based architectures can enhance event information management and reduce network congestion during high-attendance activities [8]. Smart digital applications leverage these technical layers by converting processed data into actionable information that supports operational coordination, system automation, and informed decision-making.
Although interest in edge- and IoT-enabled systems is increasing within smart city and smart building research, empirical studies on their integrated architectural deployment in large national stadiums remain scarce, especially in developing country contexts. Most existing research on sports and venue management has concentrated on isolated technologies or cloud-centric solutions, with limited attention to integrated IoT–edge architectures tailored for stadium-scale operations [9]. Similarly, much of the mobile edge computing literature focuses on generic urban or network-centric scenarios, leaving the adaptation of integrated edge–cloud frameworks for legacy stadium environments largely unexplored [10]. In Thailand, research seldom addresses how these technologies can be systematically aligned with the practical realities of public stadiums, which are characterized by large spatial scale, aging infrastructure, and complex operational requirements.
To address this gap, this study proposes an integrated IoT edge–cloud framework specifically designed for large, legacy stadium environments. In contrast to previous work that treats edge computing mainly as a secondary performance optimization, this framework prioritizes sensing configuration and processing location as central design principles for real-time stadium monitoring. By aligning sensing geometry, processing placement, and network behavior within a unified architectural structure, the proposed approach moves beyond technology-specific evaluations toward deployable, context-aware system design.
Accordingly, the objectives of this study are to (i) design an integrated IoT edge–cloud architecture suitable for real-time safety monitoring and operational coordination in a legacy national stadium, (ii) evaluate the impact of edge-based video processing on system responsiveness, coverage continuity, and uplink bandwidth efficiency under conditions of high crowd density and constrained network resources, and (iii) assess the feasibility of the proposed architecture as a scalable reference model for improving stadium operations and guiding future smart stadium deployments in Thailand.
Figure 1 presents the conceptual framework guiding the design and evaluation of this study. The framework demonstrates the transformation of traditionally fragmented stadium subsystems through coordinated digital integration. Central to this model, IoT-based sensing, edge-based processing, and smart digital applications constitute an integration layer that links physical infrastructure with digital operations. This integration highlights key operational outcomes, such as real-time monitoring, enhanced operational coordination, continuous coverage, and increased network efficiency. Rather than outlining a procedural workflow, the framework conceptualizes the relationships among existing systems, integration mechanisms, and operational performance, thereby providing a structured foundation for smart stadium development in large, legacy venues.
The principal contributions of this study are summarized as follows:
  • An integrated IoT edge–cloud architecture for smart stadium monitoring is presented, explicitly linking sensing geometry, processing location, and network behavior. This approach advances beyond technology-specific evaluations commonly reported in prior work.
  • A geometry-informed camera deployment strategy is demonstrated for a large national stadium, indicating that continuous visual coverage and blind-zone reduction can be achieved without increasing sensor density.
  • Comparative analysis provides quantitative evidence of uplink bandwidth reduction (approximately 50–75%) achieved by edge-based preprocessing compared to cloud-only transmission under high-attendance event conditions.
  • The proposed architecture is evaluated using a realistic stadium-scale use case that reflects complex spatial layouts, heterogeneous surveillance subsystems, and operational constraints characteristic of national sports venues.
  • The findings provide practical design guidance for deploying scalable and safety-oriented smart stadium systems, clarifying the conditions under which hybrid edge–cloud architectures are preferable to purely cloud-based or edge-only approaches.
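The bandwidth figure in the third contribution can be illustrated with a back-of-envelope uplink estimate. The camera count, per-stream bitrate, and metadata rate below are hypothetical values chosen for illustration, not measurements from the stadium deployment; the point is only that summarizing most streams at the edge lands within the reported 50–75% reduction band.

```python
# Illustrative uplink estimate: cloud-only streaming vs. edge-assisted
# summarization. All parameters are hypothetical, not measured values
# from the Rajamangala deployment.

def uplink_mbps(n_cameras, stream_mbps, edge_fraction=0.0, metadata_mbps=0.05):
    """Total uplink when a fraction of cameras is summarized at the edge.

    Cameras handled at the edge send only structured metadata (event
    descriptors) upstream instead of raw video.
    """
    edge = n_cameras * edge_fraction
    cloud = n_cameras - edge
    return cloud * stream_mbps + edge * metadata_mbps

baseline = uplink_mbps(120, 4.0)                   # cloud-only: all raw video
hybrid = uplink_mbps(120, 4.0, edge_fraction=0.6)  # 60% processed at the edge
reduction = 1 - hybrid / baseline                  # fractional uplink saving
```

Under these assumed parameters the saving is roughly 59%, inside the 50–75% band reported in the study; the exact figure depends on how many streams are summarized locally and how compact the edge-generated metadata is.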

2. Literature Review

2.1. Smart Stadium Concepts and Strategic Development

The smart stadium concept signifies a substantive transformation in the planning, operation, and governance of large sports venues. Contemporary stadiums are now viewed as complex sociotechnical systems where digital technologies influence operational practices, safety management, and stakeholder interactions. Developments in sensing technologies, high-capacity communication networks, and data analytics have allowed stadium operators to monitor conditions in near real time and coordinate activities that were previously fragmented and reliant on manual or semi-automated processes. In this context, the Internet of Things (IoT) serves as a foundational mechanism for connecting devices, systems, and users, thereby facilitating integrated security management, environmental control, and crowd monitoring [1].
The development of smart stadiums is increasingly recognized as a socio-technical transformation that extends beyond the deployment of advanced digital technologies to encompass organizational adaptation and governance-related challenges. Modern stadiums function as multifunctional public assets, hosting sporting events, large-scale gatherings, tourism activities, and a wide range of urban functions. In this context, digital transformation research on smart sports venues highlights the role of connected technologies in reshaping service delivery and stakeholder value creation, rather than merely improving technical efficiency [11]. At the same time, insights from governance-oriented studies of digitally mediated sports ecosystems emphasize that effective operation depends on cross-organizational coordination, alignment of diverse stakeholder interests, and adaptive decision-making under conditions of uncertainty [12]. Building on this strategic and institutional perspective, empirical research on connected stadium systems demonstrates that integrated digital infrastructures—such as digital twin–enabled environments—enhance stadiums’ capacity to respond to fluctuating demand, heightened safety expectations, and heterogeneous stakeholder needs, particularly during high-attendance events [13].
Operationally, the implementation of crowd analytics, automated access control, and continuous environmental monitoring has increased the ability of stadium operators to manage risk and uncertainty. From an architectural perspective, IoT-enabled sensing and data integration provide the technical foundation for faster situational awareness and more informed responses to dynamic conditions, particularly in complex environments characterized by high crowd density and stringent safety requirements [14]. Unlike conventional smart buildings, stadiums exhibit episodic usage patterns, abrupt fluctuations in operational loads, and require coordination among diverse stakeholders such as facility managers, security services, event organizers, and public authorities. These factors suggest that smart stadium initiatives must integrate architecture and organization with existing workflows and governance structures, rather than deploying isolated technologies.
In addition to short-term operational concerns, stadiums are increasingly understood as long-term public assets with enduring cultural, social, and economic significance. This perspective is consistent with corporate real estate management (CREM) frameworks, which emphasize lifecycle-oriented strategies aimed at balancing performance, safety, and stakeholder value over time, rather than optimizing facilities solely for immediate operational efficiency [15,16]. From this strategic standpoint, digital technologies assume a dual role by supporting routine daily operations while simultaneously generating actionable information for long-term planning, investment decision-making, and sustainable infrastructure management in complex venue environments [17]. Despite growing global interest in smart venue development, systematic empirical evidence addressing the implementation of smart stadiums in developing-country contexts remains relatively sparse, underscoring the need for context-sensitive frameworks that explicitly consider legacy infrastructure conditions and institutional constraints.

2.2. Internet of Things for Stadium Operations and Monitoring

The Internet of Things (IoT) serves as the foundational sensing layer in smart stadium systems by enabling continuous data acquisition and real-time information exchange among diverse subsystems. In stadium environments, IoT infrastructures commonly integrate surveillance cameras, access control devices, environmental sensors, and energy monitoring systems, thereby enhancing operational visibility and situational awareness during large-scale events [1,18]. By interconnecting previously isolated systems, IoT platforms enable more coordinated and timely operational responses during critical situations.
Empirical studies indicate that IoT-based monitoring architectures are critical for enhancing safety management in high-capacity public venues. Multisensor crowd monitoring systems provide near real-time observation of crowd density and movement patterns, enabling early detection of congestion, abnormal behavior, and potential safety risks [19,20]. Simultaneously, IoT-enabled public safety and emergency response systems support rapid alert dissemination and coordinated interventions, thereby strengthening operational resilience in densely populated and time-sensitive settings [21].
In addition to safety-related applications, IoT technologies support a broad spectrum of operational and service-oriented functions within stadium facilities. Sensor-driven systems inform smart ticketing, location-aware navigation, and personalized information services for spectators. At the facility level, IoT-enabled monitoring enables adaptive control of lighting, ventilation, and temperature based on occupancy and environmental conditions, thereby enhancing user comfort and increasing energy efficiency [14]. Collectively, these applications demonstrate that the Internet of Things contributes to risk mitigation, responsive service delivery, and improved operational efficiency.
Despite these advantages, deploying IoT systems in large and aging stadiums presents significant challenges. Legacy infrastructure, limited interoperability among subsystems, and constrained network capacity can hinder scalability and reliability, especially during peak event periods. Additionally, security and privacy concerns are heightened in environments where real-time data inform safety-critical decisions [22]. These limitations indicate that IoT alone cannot fully address the latency, bandwidth, and reliability requirements of stadium-scale operations, thereby motivating the integration of complementary computing paradigms.

2.3. Edge and Cloud Computing for Real-Time Stadium Analytics

Large sports stadiums generate substantial volumes of heterogeneous data from surveillance cameras, access control systems, environmental sensors, and digital services. During major events, both data volume and velocity increase sharply, placing significant strain on computational and network resources. Although cloud computing platforms provide scalable storage and post-event analytics, exclusive reliance on remote cloud processing is frequently inadequate for time-critical stadium operations, where even short delays can undermine situational awareness and coordinated response [3]. Prior research in mobile edge computing emphasizes that system performance in such environments depends not merely on the presence of edge resources but on how computational tasks are partitioned and orchestrated across edge and cloud layers [23].
Edge computing mitigates these limitations by enabling localized processing near data sources. Shifting latency-sensitive tasks, such as video-based situational awareness, access verification, and early anomaly detection, to edge nodes reduces raw data transmission and alleviates uplink congestion, thus improving responsiveness under peak load conditions [2,24]. Empirical evidence from large-scale event management scenarios confirms that edge-based architectures enhance system robustness and maintain service continuity during high-attendance events [9].
Recent studies go beyond simple offloading models toward coordinated cloud–edge collaboration for real-time video analytics. Rather than treating edge and cloud as independent processing domains, contemporary approaches emphasize dynamic task partitioning, bandwidth-aware resource allocation, and orchestration across system layers. Nan et al. [25] demonstrate that cloud–edge collaborative video analytics frameworks, with distributed processing pipelines across edge and cloud resources, enable real-time querying and scalable analysis over large camera networks. Subsequent work highlights the importance of adaptive configuration and joint optimization of computation and communication to maintain low latency and high throughput in large-scale deployments [10,26].
Despite the increasing role of edge processing, cloud computing remains indispensable for long-term data aggregation, cross-system integration, and strategic analysis. In integrated stadium architectures, cloud services consolidate metadata and performance indicators from distributed edge nodes, supporting historical analysis, system-wide coordination, and informed decision-making beyond immediate operational needs [8,27]. These findings collectively suggest that effective stadium-scale analytics require carefully designed hybrid architectures in which the processing location is treated as a central design consideration rather than an implementation detail.

2.4. Stadium Asset Management and CREM-Based Perspectives

Large sports stadiums are among the most complex public real estate assets, with value determined not only by event success, but also by long-term safety, reliability, and sustainability. Corporate Real Estate Management (CREM) provides a strategic framework for addressing these challenges by viewing facilities as assets whose management decisions have lasting social, cultural, and economic implications [15]. CREM research emphasizes that asset value is generated through alignment between physical infrastructure, governance structures, and information flows that support evidence-based decision-making [16].
Traditional asset management approaches are highly dependent on periodic inspections and retrospective documentation, which are insufficient in dynamic environments where asset conditions can change rapidly. Recent research shows that integrating digital information—particularly IoT-enabled data streams—supports asset intelligence by enabling real-time condition monitoring, predictive maintenance, remote inspection, and more proactive planning and intervention through integrated asset data management platforms [28]. Digital representations of physical assets, supported by real-time data streams, improve lifecycle-oriented decision-making and strengthen the alignment between asset performance and organizational objectives [29].
In stadium contexts, CREM-oriented management is closely linked to sustainability and long-term resilience. Data-driven asset management supports more efficient resource utilization, better maintenance planning, and reduced lifecycle costs, all of which are critical for publicly owned venues of national significance. However, existing studies often address digital technologies and asset management frameworks as separate domains, offering limited guidance on their integration into unified, CREM-oriented systems [9]. This separation results in a lack of practical models to link real-time operational data with strategic asset decision-making.
Therefore, the literature highlights a clear need for integrated approaches that embed IoT sensing and edge-based analytics within CREM-oriented governance structures. This integration enables stadium operators to enhance asset intelligence, support evidence-based management, and improve long-term performance, particularly in developing country contexts where legacy infrastructure and resource constraints require carefully designed, architecture-driven solutions rather than isolated technology deployments.

2.5. Comparative Summary of Related Work and Research Gap

Although prior studies have extensively explored smart stadium concepts, IoT-enabled monitoring, and edge–cloud computing architectures, existing research remains fragmented across application scope, architectural focus, and evaluation scale. A significant portion of the literature addresses isolated technologies, such as edge-assisted video analytics or IoT-based facility monitoring, often evaluated in generic smart city or small-to-medium venue contexts. Moreover, many studies emphasize computational performance metrics—such as latency or throughput—without explicitly considering sensing geometry, spatial coverage continuity, or the constraints imposed by legacy stadium infrastructures.
Table 1 presents a structured comparison between representative prior studies and the present research. As summarized, earlier works typically investigate either cloud-centric or partially edge-enabled solutions, focus on limited operational zones, or lack empirical evaluation at the scale of a national stadium. In contrast, the present study adopts an integration-oriented perspective by jointly considering IoT sensing configuration, edge-based video processing, and cloud coordination within a unified architectural framework explicitly tailored to a large, legacy stadium environment. By treating sensing geometry and processing location as primary design variables and evaluating their combined impact on coverage continuity and network behavior, this study addresses a clear gap in existing smart stadium research and provides practical, deployable guidance for stadium-scale implementations.

3. Methods

3.1. Research Design and Study Context

This study adopts a system development approach to design and evaluate an integrated IoT–edge architecture for real-time monitoring and safety-oriented operations at the Rajamangala National Stadium, Bangkok, Thailand. This research investigates how sensing technologies, communication networks, and distributed computing components can be integrated into a coherent operational architecture appropriate for stadium-scale cyber–physical systems, where situational awareness depends on coordinated interaction between physical infrastructure and digital processing capabilities [1,2].
Rajamangala National Stadium was selected as the case study site due to its scale, functional complexity, and reliance on legacy silo-based subsystems. As Thailand’s largest national stadium, the venue contains complex circulation routes, multilevel structures, and heterogeneous operational zones that impose constraints on surveillance coverage and real-time data integration. These characteristics make the stadium a representative testbed for evaluating smart stadium architectures under realistic operational constraints.
The research process began with a baseline assessment of the stadium layout and the existing monitoring conditions. The facility comprises a basement level (B1), five interior floors, and a sixth-floor roof deck. Each level exhibits distinct movement patterns and visibility conditions relevant to surveillance design. Key elements, including vehicular access points, pedestrian routes, vertical circulation zones, and existing CCTV locations, were surveyed and mapped to identify structural constraints that affect sensor placement, coverage continuity, and deployment feasibility. This baseline assessment established the spatial and operational context for designing surveillance configurations that address high-density crowd flows, congestion risks, and time-critical safety scenarios, consistent with previous studies on monitoring in large public venues [13].
Design decisions were informed by prior research on multilayer IoT frameworks and smart-venue infrastructures. In particular, canonical IoT models emphasize the integration of perception-layer sensing, reliable communication networks, and application-level analytics to support coordinated operations in complex environments [1,14,18]. Subsequent studies on edge- and cloud-enabled large-scale monitoring systems further highlight the importance of unified data pipelines, subsystem interoperability, and real-time visualization for effective venue and event management [2,8,10]. These principles guided the selection of sensing modalities (ANPR, face recognition, and panoramic cameras) and informed the allocation of processing tasks between edge and cloud environments. Figure 2 provides the baseline spatial reference used for the subsequent design and simulation.

3.2. IoT–Edge Architecture Development

The proposed edge architecture was developed to facilitate real-time monitoring and safety management within the operational constraints of the Rajamangala National Stadium. Instead of utilizing a generic smart stadium template, the architecture was tailored to the stadium’s specific physical configuration, circulation patterns, and legacy monitoring systems. This context-driven approach allows the architecture to address challenges common to large venues, such as heterogeneous subsystems, inconsistent visibility, and fluctuating crowd densities during major events.
The design process commenced with a systematic analysis of existing monitoring practices and spatial characteristics. Key elements such as vehicular gates, pedestrian pathways, vertical circulation zones, and current camera placements were evaluated to identify coverage gaps, blind spots, and potential data transmission bottlenecks during periods of high attendance. These insights guided decisions regarding sensor placement and network topology to ensure continuous situational awareness under peak operational loads [1,2].
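As a sketch of what such a geometry-informed coverage check involves, the following minimal model samples points along a hypothetical 100 m concourse segment and flags those outside every camera's idealized field of view. Camera positions, headings, range, and field-of-view angle are all illustrative assumptions, not the stadium's surveyed values.

```python
import math

# Idealized 2D coverage check: a camera covers a point if it lies within a
# fixed range and within half the horizontal field of view of the camera's
# heading. Parameters below are illustrative, not surveyed stadium geometry.

def covers(cam, point, fov_deg=90.0, max_range=40.0):
    cx, cy, heading_deg = cam
    dx, dy = point[0] - cx, point[1] - cy
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # Signed angular difference normalized to [-180, 180).
    diff = (bearing - heading_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

def blind_points(cameras, sample_points, **kw):
    """Return sample points not covered by any camera (candidate blind spots)."""
    return [p for p in sample_points
            if not any(covers(c, p, **kw) for c in cameras)]

# Sample a 100 m walkway every 5 m; three cameras set back 5 m from it.
walkway = [(x, 0.0) for x in range(0, 101, 5)]
cams = [(0, -5, 45), (50, -5, 90), (100, -5, 135)]  # (x, y, heading in degrees)
gaps = blind_points(cams, walkway)  # mid-span points the cones fail to reach
```

With these assumed placements the check exposes two uncovered points near the 40 m and 60 m marks, illustrating how a sparse configuration can leave mid-span blind spots even when endpoints are well covered; reorienting or repositioning cameras, rather than adding more, can close such gaps.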
Aligned with established multilayer IoT design principles, the final architecture adopts a layered structure in which sensing devices collect operational data at the perception layer, edge nodes perform time-sensitive processing close to data sources, and cloud services provide centralized coordination, long-term storage, and cross-domain analytics [1,2,14]. Three primary sensing modalities were chosen to meet operational needs: Automatic Number Plate Recognition (ANPR) cameras for vehicular access control, face-recognition cameras for interior circulation and restricted zones, and panoramic cameras for wide-area crowd monitoring. Task allocation between edge and cloud layers was determined by latency requirements and data volume. Time-critical video analytics and event detection are processed at the edge, while aggregated storage, reporting, and system-level analytics are managed in the cloud. This approach is supported by evidence that edge-based processing reduces latency and enhances responsiveness during periods of high operational density [2].
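The latency- and volume-based allocation described above can be sketched as a simple placement heuristic: tasks with tight deadlines or heavy raw-data inputs run at the edge, while aggregate analytics go to the cloud. Task names, deadlines, and thresholds below are hypothetical illustrations; the paper does not specify the deployed orchestration logic at this level of detail.

```python
from dataclasses import dataclass

# Hypothetical rule-of-thumb task placement following the paper's two
# criteria: latency requirements and input data volume. Thresholds are
# illustrative assumptions, not tuned deployment values.

@dataclass
class Task:
    name: str
    max_latency_ms: int      # deadline for the result to remain useful
    input_rate_mbps: float   # raw data volume the task consumes

def place(task, latency_threshold_ms=200, rate_threshold_mbps=1.0):
    if task.max_latency_ms <= latency_threshold_ms:
        return "edge"        # time-critical: process near the data source
    if task.input_rate_mbps >= rate_threshold_mbps:
        return "edge"        # too costly to ship raw data upstream
    return "cloud"           # aggregate, non-urgent workloads

tasks = [
    Task("anpr_gate_check", 150, 4.0),
    Task("crowd_density_alert", 100, 8.0),
    Task("weekly_attendance_report", 86_400_000, 0.01),
]
placements = {t.name: place(t) for t in tasks}
```

Here the gate check and congestion alert land at the edge (tight deadlines, heavy video input), while the periodic report, which tolerates long delays and consumes only pre-aggregated metadata, is placed in the cloud.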

3.3. Edge Layer and Cloud Layer

The edge layer is designed to handle latency-sensitive processing tasks in high-activity zones. Edge nodes perform localized functions such as motion detection, preliminary video filtering, metadata extraction, and congestion assessment. Processing data near its source reduces the volume of raw video transmitted over the network and enables rapid alert generation during safety-critical situations. This design improves responsiveness and operational reliability during peak events, consistent with the edge computing literature that emphasizes latency reduction and bandwidth efficiency [2,3].
The cloud layer provides centralized coordination, long-term data management, and system-wide oversight. Outputs generated at the edge—primarily structured metadata and event descriptors—are transmitted to the cloud for aggregation and visualization. Key cloud functions include long-term storage, cross-system analytics, device and configuration management, role-based access control, and unified dashboards for administrators [2,3,8,10]. Within the cloud environment, structured metadata (e.g., detected identities, vehicle identifiers, time-stamped events, and crowd density indicators) are maintained in scalable repositories, enabling both real-time monitoring and retrospective analysis.
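The structured metadata described above might take a form like the following event descriptor, emitted by an edge node upstream in place of raw frames. All field names and values are illustrative assumptions rather than the deployed schema.

```python
import json
import time

# Sketch of a compact event descriptor an edge node could send to the cloud
# instead of raw video. Field names and identifiers are hypothetical.

def make_event(camera_id, event_type, confidence, zone):
    return {
        "event_id": f"{camera_id}-{int(time.time() * 1000)}",  # simple unique key
        "timestamp": time.time(),
        "camera_id": camera_id,
        "event_type": event_type,   # e.g. "crowd_congestion", "plate_match"
        "confidence": round(confidence, 3),
        "zone": zone,               # spatial reference used by dashboards
    }

payload = json.dumps(make_event("cam-b1-03", "crowd_congestion", 0.87, "B1-west"))
```

A descriptor of a few hundred bytes replacing a continuous multi-megabit video stream is what makes the uplink savings reported in the results section possible.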
Beyond operational monitoring, the cloud layer supports higher-level analytical functions, such as historical trend analysis, incident reconstruction, and maintenance planning. These functions extend the value of operational data beyond the immediate response and enable strategic decision-making and asset management over time. Together, the perception, edge, and cloud layers form an integrated architecture that balances real-time responsiveness with centralized coordination. Figure 3 summarizes the interactions among sensing devices, edge processing nodes, and cloud services.

3.4. Back-End Information System Architecture

Within the proposed framework, the back-end information system serves as the integration and governance layer that coordinates data generated across distributed sensing devices and edge processing components. Its primary role is to ensure the reliability, traceability, and manageability of operational data flows, rather than to function as the main high-performance analytics engine.
Service and API design. The back-end is implemented as a service-oriented system that exposes RESTful APIs for interoperability between edge nodes, cloud services, and operator dashboards. Edge nodes publish detection outputs and system status updates (e.g., camera/node health, event notifications, processing summaries) to the back-end through standardized endpoints. This API-first design reduces vendor lock-in and supports incremental extension of the system without disrupting edge-level processing.
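The endpoint contract described above can be illustrated with a minimal payload validator; the field names and types below are hypothetical, standing in for the system's actual API schema.

```python
# Hypothetical contract for an edge-node detection POST; the real system's
# schema is not specified in the text.
REQUIRED_FIELDS = {
    "node_id": str,
    "camera_id": str,
    "event_type": str,
    "timestamp": float,
    "confidence": float,
}

def validate_detection_payload(payload: dict):
    """Check an edge-node detection payload against the expected contract.

    Returns (True, "") when valid, otherwise (False, reason). An API
    gateway would call this before persisting a DetectionEvent record.
    """
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            return False, f"missing field: {field}"
        if not isinstance(payload[field], expected_type):
            return False, f"bad type for {field}"
    if not 0.0 <= payload["confidence"] <= 1.0:
        return False, "confidence out of range"
    return True, ""
```

Validating at the gateway keeps edge nodes loosely coupled to back-end storage, consistent with the API-first design goal.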
Programming environment and deployment model. The back-end services are deployed in a server-side runtime appropriate for long-running, service-based operation (e.g., a containerized web service), enabling maintainable upgrades and modular service management. The design assumes typical components such as an API gateway, authentication middleware, and a dashboard service for operational supervision.
Database design and storage strategy. Data management is structured to balance auditability with storage efficiency. Core operational events are stored in relational tables to ensure consistency and traceability. At minimum, the back-end maintains the following conceptual entities:
  • Camera (camera_id, type, zone, location_reference, configuration_id);
  • EdgeNode (node_id, zone, status, heartbeat_timestamp);
  • DetectionEvent (event_id, timestamp, camera_id, node_id, event_type, confidence, spatial_reference);
  • IdentityResult (result_id, event_id, identity_token, confidence, method_type);
  • VehicleResult (result_id, event_id, plate_token, confidence);
  • SystemLog (log_id, timestamp, component, severity, message_reference).
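These conceptual entities translate directly into relational tables. The sketch below uses SQLite purely for illustration; column types and foreign-key constraints are plausible assumptions rather than the system's actual DDL.

```python
import sqlite3

# Conceptual schema only; types and constraints are illustrative.
SCHEMA = """
CREATE TABLE Camera (
    camera_id TEXT PRIMARY KEY,
    type TEXT, zone TEXT, location_reference TEXT, configuration_id TEXT
);
CREATE TABLE EdgeNode (
    node_id TEXT PRIMARY KEY,
    zone TEXT, status TEXT, heartbeat_timestamp REAL
);
CREATE TABLE DetectionEvent (
    event_id TEXT PRIMARY KEY,
    timestamp REAL, event_type TEXT, confidence REAL, spatial_reference TEXT,
    camera_id TEXT REFERENCES Camera(camera_id),
    node_id TEXT REFERENCES EdgeNode(node_id)
);
CREATE TABLE IdentityResult (
    result_id TEXT PRIMARY KEY,
    event_id TEXT REFERENCES DetectionEvent(event_id),
    identity_token TEXT, confidence REAL, method_type TEXT
);
CREATE TABLE VehicleResult (
    result_id TEXT PRIMARY KEY,
    event_id TEXT REFERENCES DetectionEvent(event_id),
    plate_token TEXT, confidence REAL
);
CREATE TABLE SystemLog (
    log_id TEXT PRIMARY KEY,
    timestamp REAL, component TEXT, severity TEXT, message_reference TEXT
);
"""

def create_backend_db(path=":memory:"):
    """Create the conceptual back-end schema in a SQLite database."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```

Keeping identity and vehicle results in separate tables linked to DetectionEvent mirrors the text's separation between recognition outputs and the auditable event record.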
High-frequency logs and telemetry are stored separately using log-optimized storage to support continuous ingestion and operational diagnostics. Raw video streams are not retained by default. Instead, the back-end stores references to relevant video segments associated with detected events (e.g., clip pointers or storage keys), enabling post-event review while minimizing storage requirements.
Integration of identification technologies. Recognition algorithms (ANPR and face recognition) are executed at the edge layer, where low latency is required. The back-end receives only the resulting structured metadata (e.g., identifiers/tokens, confidence levels, timestamps, and location references). This separation allows the back-end to consolidate results across multiple sources without being tightly coupled to specific recognition models while preserving an auditable record of safety-relevant events.
Security and access control. Operational access is governed by role-based authorization. Authorized personnel can view detection summaries, event logs, system health indicators, and network status by zone through centralized monitoring interfaces. Sensitive identity-related outputs are protected through controlled access policies consistent with the accountability requirements of large public venues.
Overall, the back-end is intentionally positioned as enabling infrastructure: it coordinates system operations, preserves auditable records, and supports centralized oversight, while leaving time-critical analytics to edge nodes. This division supports maintainability and scalability as operational requirements evolve.

3.5. Simulation and Deployment Configuration

The simulation and deployment configuration was designed to represent the physical structure and operational conditions of the Rajamangala National Stadium. Structural measurements, geometric modeling, and visibility analysis were used to evaluate surveillance coverage at all levels. The venue comprises a basement level (B1), five interior floors, and a roof-deck level, each with different circulation patterns and monitoring requirements. Surveillance configurations were therefore customized by functional zone rather than applied uniformly.
All video data considered in this study are based on fixed IP surveillance cameras configured to provide Full HD streams (1920 × 1080 pixels), reflecting a common configuration in large public venues. This resolution supports a practical balance between image detail, coverage area, and edge processing requirements under real-world operational conditions. Three categories of cameras (ANPR, face-recognition, and panoramic) were integrated into the simulation framework. Placement and configuration followed established design principles in smart stadium environments and large-scale video surveillance systems, ensuring alignment among camera modality, field of view, and installation geometry with respect to specific monitoring objectives [2,9,30].

3.5.1. ANPR Cameras

Vehicular access points are critical control zones during large-scale events where traffic volume, security requirements, and time constraints converge. ANPR cameras were assigned to entry and exit gates to enable automated vehicle identification and traffic monitoring. Installation parameters were defined using geometric relationships between lane width, camera distance to the stopping point, and vehicle approach angles to reflect realistic operating constraints rather than idealized setups.
The simulation results indicate that an approximate capture distance of 5 m provides consistent recognition conditions under typical lighting and event-related traffic speeds. Under this configuration, the blind zones were restricted to approximately 2.75 m and were primarily observed during vehicle deceleration and alignment near gate barriers. This configuration supports robust ANPR performance while remaining compatible with practical gate infrastructure constraints [8].
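The reported blind zone follows from simple camera geometry. In the sketch below, the mounting height and vertical field of view are illustrative assumptions (the text reports only the tilt, capture distance, and resulting blind zone); with a 3 m mount and a 35° vertical field of view, a 30° tilt yields a blind zone close to the reported ~2.75 m.

```python
import math

def ground_coverage(height_m, tilt_deg, vfov_deg):
    """Return (blind_zone_m, far_edge_m) for a downward-tilted camera.

    Tilt is measured from the horizontal to the optical axis; the near
    edge of the vertical field of view defines the blind zone directly
    in front of the mount, and the far edge bounds the capture zone.
    """
    near_angle = math.radians(tilt_deg + vfov_deg / 2)
    far_angle = math.radians(tilt_deg - vfov_deg / 2)
    blind = height_m / math.tan(near_angle)
    far = height_m / math.tan(far_angle) if far_angle > 0 else float("inf")
    return blind, far

# Assumed parameters for illustration: 3 m mount, 30° tilt, 35° vertical FOV.
blind, far = ground_coverage(3.0, 30.0, 35.0)
```

Raising the tilt shrinks the blind zone but also shortens the far edge of the capture zone, which is the trade-off the gate configuration must balance.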

3.5.2. Face-Recognition Cameras

Monitoring interior circulation spaces requires a different surveillance approach from that used in open spectator areas. In enclosed environments such as corridors, stairs, and access-controlled zones, the primary objective is the reliable identification of individuals as they pass through constrained spaces. For this reason, face recognition cameras were concentrated on the basement level (B1) and floors 1–5 of Rajamangala National Stadium, where pedestrian flow is channeled through defined circulation paths and access points.
Although these floors differ in function, their architectural layouts are broadly comparable, enabling a consistent deployment strategy across levels. Camera placement decisions were guided by interior circulation geometry rather than floor-specific event functions. Mounting height, viewing angle, and recognition distance were configured to reflect typical pedestrian trajectories and lighting conditions observed within the stadium.
To ensure that face recognition performance met operational requirements, visibility constraints were evaluated using the DORI (Detection, Observation, Recognition, Identification) framework as a design reference [30]. Rather than treating DORI as a prescriptive checklist, it was used to relate camera resolution and viewing distance to the level of facial detail required for operational recognition and identification in real-world indoor conditions. Particular emphasis was placed on the recognition and identification levels, which are most relevant to access control and post-event verification in interior stadium spaces.
Based on these considerations, minimum pixel per meter (PPM) thresholds were adopted to constrain blind regions and maintain consistent identification feasibility along primary pedestrian routes. Table 2 summarizes the DORI thresholds applied as configuration references. The application of these thresholds and the site-specific circulation analysis resulted in the allocation of 229 face recognition cameras across B1 and floors 1–5. This configuration was designed to balance coverage continuity with installation feasibility, supporting dense pedestrian flows typical of large-scale stadium events while maintaining identification-oriented image quality.
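The PPM screening used here can be sketched as follows. The thresholds below are the commonly cited IEC 62676-4 DORI values (25/63/125/250 px/m); the thresholds actually applied in this study are those listed in Table 2, and the camera parameters in the usage example are illustrative.

```python
import math

# Commonly cited DORI thresholds (pixels per meter); the study's applied
# configuration values are given in Table 2.
DORI_PPM = {"detection": 25, "observation": 63,
            "recognition": 125, "identification": 250}

def ppm_at_distance(h_res_px, hfov_deg, distance_m):
    """Pixels per meter on a target plane at the given distance.

    Scene width at distance d is 2*d*tan(hfov/2); PPM is the horizontal
    resolution divided by that width.
    """
    scene_width = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return h_res_px / scene_width

def dori_level(h_res_px, hfov_deg, distance_m):
    """Return the highest DORI level satisfied at this distance."""
    ppm = ppm_at_distance(h_res_px, hfov_deg, distance_m)
    best = None
    for level, threshold in sorted(DORI_PPM.items(), key=lambda kv: kv[1]):
        if ppm >= threshold:
            best = level
    return best, ppm
```

For example, a Full HD camera (1920 px) with an assumed 60° horizontal field of view sustains identification-level detail (≥250 px/m) out to roughly 6.7 m and recognition-level detail to roughly 13 m, which is the kind of distance budget that drives placement along corridors.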

3.5.3. Panoramic Cameras

The spectator bowl was identified as the most challenging area for continuous visual monitoring due to its large radius and open configuration. Line-of-sight interruptions from the seating tiers and structural elements limit effective observation from the concourse and ground levels. Therefore, elevated viewpoints are required to achieve wide-area visibility.
The sixth-floor roof deck provides a continuous perimeter above the seating area and was used as the reference level for panoramic camera placement. As shown in Figure 4a, eight installation points (C1–C8) were defined along the perimeter of the roof, each oriented towards the center of the bowl. This radial viewing concept creates overlapping fields of view, reducing blind zones between adjacent cameras.
Geometric simulation took into account viewing angles, elevation differences between the roof deck and seating rows, and possible structural occlusions. The results confirm that the eight-camera configuration provides complete bowl-level visibility with an approximate radius of 125.7 m (Figure 4b). Since panoramic cameras support crowd awareness rather than individual identification, evaluation focused on detection and observation levels to ensure adequate situational awareness while remaining compatible with edge processing constraints.
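The overlap property of the radial layout can be verified numerically. The sketch below samples a seating circle and counts how many perimeter cameras see each point; the seating radius and panoramic horizontal field of view are illustrative assumptions, not the study's measured parameters.

```python
import math

def covered(cam_angle_deg, point_xy, ring_radius, hfov_deg):
    """True if a ground point lies inside a perimeter camera's horizontal FOV.

    The camera sits on a circle of the given radius and aims at the center.
    """
    a = math.radians(cam_angle_deg)
    cam = (ring_radius * math.cos(a), ring_radius * math.sin(a))
    aim = (-math.cos(a), -math.sin(a))  # unit vector toward the bowl center
    vx, vy = point_xy[0] - cam[0], point_xy[1] - cam[1]
    norm = math.hypot(vx, vy)
    if norm == 0:
        return True
    cos_angle = (vx * aim[0] + vy * aim[1]) / norm
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle)))) <= hfov_deg / 2

def min_cover_count(n_cams, ring_radius, seat_radius, hfov_deg, samples=360):
    """Minimum number of cameras seeing any sampled seating-circle point."""
    cams = [i * 360.0 / n_cams for i in range(n_cams)]
    worst = n_cams
    for k in range(samples):
        t = math.radians(k * 360.0 / samples)
        p = (seat_radius * math.cos(t), seat_radius * math.sin(t))
        worst = min(worst, sum(covered(c, p, ring_radius, hfov_deg) for c in cams))
    return worst
```

With eight cameras on a 125.7 m ring, an assumed 90° horizontal field of view, and a 110 m seating circle, every sampled point is seen by at least two cameras, whereas a much narrower field of view leaves gaps between adjacent installation points.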

3.6. Performance Evaluation and Data Analysis

System performance was evaluated using a staged workflow designed to reflect how stadium surveillance systems operate in real-world conditions. Rather than assessing individual components in isolation, the evaluation progressed from physical sensing coverage to system-level integration, thereby capturing the interdependencies among spatial design, visual analytics, and data-processing pipelines. The overall evaluation sequence is illustrated in Figure 5.
First, spatial coverage was assessed as a prerequisite for subsequent analytical tasks. Geometric simulation was conducted to examine camera coverage at vehicular access gates, interior circulation zones (B1–5), and the spectator bowl. Key deployment parameters—including camera elevation, viewing distance, radial viewing angles, and site-specific occlusions identified during the field survey—were incorporated into the model. This analysis determined whether the sensing layout provides continuous and functionally appropriate visibility across different camera modalities.
Second, image quality and detection feasibility were evaluated based on pixel-density criteria derived from the DORI framework [30]. Pixel-per-meter values were computed under simulated deployment conditions to verify that the configured camera placements deliver sufficient visual detail for their intended operational roles. These roles include license plate recognition at vehicular gates, facial identification within interior circulation areas, and crowd observation in spectator zones.
Third, edge-processing responsiveness was assessed by simulating event-level processing sequences representative of high-attendance stadium operations. Key performance indicators included processing latency, workload distribution across edge devices, and the rate at which event metadata were generated and transmitted to the cloud. This stage reflects operational scenarios in which timely edge inference is essential for maintaining situational awareness and supporting real-time decision-making.
Finally, the reliability of edge–cloud data flows was evaluated to ensure that processed outputs remain temporally consistent and synchronized across sensing modalities. Sensor streams were examined with respect to ingestion stability, packet completeness, and metadata alignment within the cloud environment. A system-level validation then integrated the results of spatial coverage, image quality, processing responsiveness, and data-flow reliability analyses to verify cohesive system operation under simulated event conditions.

3.7. Data Collection

Data collection focused on physical, operational, and technical parameters that directly affect system configuration, simulation accuracy, and performance evaluation. Field surveys were carried out in key operational zones, including vehicular gates, concourses, corridors, stairwells, and the spectator bowl. Measurements of distances, elevations, structural widths and circulation paths were recorded to support camera placement decisions, viewing angle definition, and development of spatially accurate simulation models.
The technical specifications for ANPR, face recognition, and panoramic cameras were obtained from manufacturer documentation and evaluated against DORI-based visibility requirements. These specifications were used to estimate feasible field of view, mounting heights, and anticipated recognition performance within the site constraints documented during surveys.
Operational conditions relevant to the feasibility of deployment were also documented, including typical lighting conditions, crowd density patterns during events, and the distribution of existing electrical and network infrastructure. Existing surveillance layouts, cabling routes, and network switch locations were reviewed to assess compatibility with the proposed architecture and identify potential constraints related to data transmission and processing latency. All data collected were integrated into system design, simulation, and evaluation to ensure that feasibility and performance assessments reflect real site conditions.

4. Results

4.1. ANPR Camera Performance

The performance of the Automatic Number Plate Recognition (ANPR) subsystem was evaluated through geometric simulation of vehicle approach scenarios at the main entry points of the Rajamangala National Stadium. Figure 6 presents side, top, and front views of the ANPR camera configuration to explicitly illustrate how camera height, tilt angle, and approach geometry jointly determine license plate visibility. These multiple perspectives are provided to clarify spatial constraints that cannot be fully represented by numerical parameters alone.
Throughout the simulated scenarios, the license plates remained visible and readable within an effective distance of approximately 5 m from the camera, corresponding to the width of the traffic lanes and typical stop positions at access gates. A blind region of approximately 2.75 m was observed directly below the camera mount due to downward viewing geometry. However, this blind region lies outside the primary plate capture zone for approaching vehicles and therefore does not materially affect recognition during normal entry operations.
Camera tilt was found to influence image clarity and glare. A tilt angle close to 30° produced clearer character visibility while maintaining sufficient depth of field across typical vehicle trajectories. Under this configuration, the pixel density and alignment remained adequate for stable character extraction during the simulated event conditions. The key geometric and performance parameters derived from the ANPR simulation are summarized in Table 3.

4.2. Face Recognition Camera Performance

The face recognition subsystem was evaluated throughout the stadium, encompassing corridors, concourses, stairways, and access-controlled zones. Figure 7 provides side, top, and front views of the face recognition camera geometry to illustrate the impact of mounting height, viewing angle, and corridor width on recognition coverage. These perspectives facilitate visualization of indoor spatial constraints that are not readily captured by coverage metrics alone.
A simulated deployment of 229 face recognition cameras across the basement level (B1) and floors 1–5 ensured continuous visual coverage along primary pedestrian routes. Facial visibility suitable for recognition was maintained over an effective target width of approximately 8.6 m, enabling each camera to monitor broad concourse segments without a significant reduction in recognition feasibility. A blind region of approximately 2.6 m was identified near the camera mounting position, attributable to near-field occlusion effects from camera height and tilt. This blind region remained within acceptable parameters for identification-focused monitoring in high-density indoor environments.
Camera tilt was a critical factor influencing recognition quality along standard walking trajectories. A tilt angle of approximately 26.8° improved facial visibility and reduced occlusion from overhead structural elements. With this configuration, facial detection and recognition remained stable across varying pedestrian densities and walking speeds typical of ingress, egress, and internal circulation during large events. Overlapping fields of view between adjacent cameras enhanced continuity across stairways and inter-floor transition zones, thereby minimizing coverage gaps.
These findings indicate that the proposed face recognition configuration can maintain reliable person identification under dense pedestrian flow conditions characteristic of national stadium operations. The principal performance parameters derived from the simulation are summarized in Table 4.

4.3. Panoramic Camera Coverage Analysis

The performance of the panoramic camera was evaluated for wide-area monitoring of the stadium’s spectator bowl. Figure 8 presents side, top, and front views of the roof-level panoramic camera configuration to illustrate how elevation, radial orientation, and tilt angle allow uninterrupted bowl-level visibility. These perspectives are included to demonstrate why roof-level deployment overcomes line-of-sight obstructions caused by seating tiers and structural elements.
The simulation results show that panoramic cameras mounted along the sixth-floor roof deck achieved uninterrupted visibility across the entire spectator bowl. The effective coverage radius reached approximately 125.7 m, allowing complete observation of seating areas without requiring additional units to maintain continuity. No blind regions were observed within the seating tiers, reflecting the combined effect of elevated placement and radial viewing geometry.
The camera tilt influenced the uniformity of visibility across the upper and lower seating rows. A tilt angle close to 44.3° produced the most balanced observation across different elevations, supporting stable monitoring of crowd distribution throughout the bowl. Because panoramic cameras are intended for situational awareness rather than individual identification, the evaluation focused on detection and observation performance rather than fine-grained recognition.
These results indicate that the panoramic subsystem is well-suited for monitoring large, open environments characterized by dynamic crowd behavior. The key geometric and coverage parameters derived from the simulation are summarized in Table 5.

4.4. Bandwidth Consumption: Cloud vs. Edge Computing

Bandwidth performance was assessed by comparing uplink demand under two processing strategies: (i) continuous cloud-only video transmission and (ii) edge-based preprocessing with selective data forwarding. Given the large number of high-resolution video streams generated during stadium events, the analysis focused on uplink volume and its implications for real-time operational feasibility.

4.4.1. Cloud-Only Transmission

In the cloud-only transmission scenario, all raw video streams from the surveillance subsystems were forwarded directly to the cloud. As summarized in Table 6, the total uplink demand reached approximately 119,760 MB per minute. The face recognition subsystem accounted for the largest share of this load (109,920 MB/min), reflecting continuous interior monitoring requirements. Panoramic cameras contributed 7680 MB/min, while ANPR cameras generated 2160 MB/min.
At this scale, sustained cloud-only transmission would exceed the uplink capacity typically available for on-site backhaul during large events. Such conditions would introduce congestion and variable latency, limiting the feasibility of real-time monitoring and timely response related to safety.
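The Table 6 totals can be reproduced from per-subsystem figures. The per-camera rates below are back-calculated from the reported subsystem totals and camera counts (229 face-recognition, 8 panoramic); the ANPR camera count is not stated in the text, so its subsystem total is used directly.

```python
# Per-subsystem uplink volumes (MB per minute), reconstructed from Table 6.
FACE_MB_MIN = 229 * 480      # 229 cameras x ~480 MB/min each (back-calculated)
PANORAMIC_MB_MIN = 8 * 960   # 8 cameras x ~960 MB/min each (back-calculated)
ANPR_MB_MIN = 2160           # subsystem total as reported

total = FACE_MB_MIN + PANORAMIC_MB_MIN + ANPR_MB_MIN  # 119,760 MB/min

# Equivalent sustained uplink rate, assuming decimal megabytes (1 MB = 10^6 B):
gbps = total * 8 / (60 * 1000)  # MB/min -> Gb/s
```

At roughly 16 Gbps sustained, this load makes concrete why continuous cloud-only transmission is infeasible over typical event-time backhaul.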

4.4.2. Edge-Based Video Transmission with Local Preprocessing

To quantify the impact of edge-based preprocessing, uplink reduction was calculated by comparing cloud-only and edge-based transmission volumes using Equation (1):
$$\text{Reduction}\,(\%) = \frac{\text{Cloud-only volume} - \text{Edge-based volume}}{\text{Cloud-only volume}} \times 100 \tag{1}$$
In the cloud-only scenario, the uplink volume corresponds to the continuous transmission of raw video streams from all camera subsystems, as summarized in Table 6. In contrast, the edge-based scenario replaces continuous raw video forwarding with local preprocessing at edge nodes, where video streams are filtered and analyzed in situ, and only structured metadata and event-triggered image frames are transmitted to the cloud.
Applying Equation (1) to the simulation results indicates that the overall uplink data volume was reduced by approximately 50–75% compared to cloud-only transmission. The lower bound of this range corresponds to camera modalities with higher residual data forwarding requirements, while the upper bound reflects subsystems in which most raw video data were filtered and summarized locally prior to transmission. These findings demonstrate that edge-based preprocessing substantially reduces bandwidth demand by fundamentally altering the structure of data flow, transforming continuous high-volume video streams into compact, event-driven representations.
The observed reduction is consistent with previous studies showing that integrating deep learning–based analytics at the edge significantly decreases raw data transmission by performing inference and feature extraction near the data source before cloud forwarding [9,26,27].
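Equation (1) can be applied directly to subsystem volumes; the numbers in the example below are illustrative only, chosen to show how reductions within the reported 50–75% band arise.

```python
def uplink_reduction(cloud_only_volume, edge_based_volume):
    """Equation (1): percentage reduction in uplink data volume."""
    return (cloud_only_volume - edge_based_volume) / cloud_only_volume * 100

# Illustrative: a subsystem whose raw stream of 1000 MB/min shrinks to
# 300 MB/min of metadata and event frames achieves a 70% reduction,
# inside the 50-75% band reported in the study.
```

The bounds of the band correspond to residual volumes of 50% and 25% of the cloud-only baseline, respectively.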

4.5. Summary of Overall Smart Stadium Performance

Taken together, the results demonstrate that the proposed IoT–edge architecture achieves reliable spatial coverage, manageable network load, and operational scalability under conditions representative of a national stadium. The ANPR subsystem supported vehicle entry monitoring with limited blind regions, the face recognition deployment of 229 cameras maintained continuous coverage across multilevel interior circulation spaces, and the panoramic subsystem provided uninterrupted bowl-level visibility.
At the network level, cloud-only transmission imposes uplink demands that are unlikely to be sustainable during high-attendance events. In contrast, edge-based preprocessing reduced data volume by 50–75% and produced more stable and predictable transmission behavior. These outcomes indicate that integrating IoT sensing with edge computing enables continuous situational awareness while remaining compatible with legacy stadium infrastructure, thus providing a practical foundation for smart stadium deployment in large national venues.

5. Discussion

This study investigated the performance of an integrated edge architecture in a large national stadium characterized by a complex spatial structure, episodic crowd surges, and legacy infrastructure constraints. Rather than evaluating individual technologies in isolation, the findings illuminate how sensing configuration, processing location, and network behavior interact under realistic event conditions. This section interprets the key results, discusses their implications for smart stadium design and operations, considers architectural trade-offs and deployment scenarios, and reflects on strategic asset management relevance and study limitations.

5.1. Interpretation of Key Findings

The results indicate that effective smart stadium monitoring depends less on the absolute number of sensors deployed and more on the alignment between sensing geometry and computational architecture. Instead of relying on dense camera installations, the study demonstrates that coverage continuity and blind zone reduction can be achieved through geometry-informed deployment strategies that explicitly consider spatial layout, movement patterns, and functional zoning within the stadium. This finding challenges the implicit assumption, common in many surveillance deployments, that increasing the density is the primary means of improving monitoring performance.
The differentiated performance of camera modalities further underscores the importance of role-specific sensing. ANPR cameras proved to be effective in constrained vehicular access zones, where controlled geometry and predictable motion support reliable identification. Face recognition cameras were most effective in interior circulation spaces, where architectural confinement allows consistent visibility and identification. In contrast, panoramic cameras played a critical role in open spectator areas, where wide-area observation and early detection of crowd density changes are more valuable than individual identification. These results highlight that the effectiveness of smart stadium surveillance emerges from the complementary interaction of heterogeneous sensing modalities rather than from uniform or technology-centric deployment strategies.
System behavior under multi-target conditions reveals another defining characteristic of large stadium environments. High crowd density and simultaneous target presence are not exceptional scenarios but represent normal operating conditions during major events. The ability of the proposed video analytics pipeline to process multiple targets concurrently at the object level supports crowd-level situational awareness without relying on sequential frame-by-frame analysis. This reinforces the view that smart stadium systems must be designed around collective behavioral patterns rather than individual-centric detection alone, particularly in venues subject to episodic but extreme occupancy fluctuations.
Bandwidth analysis further demonstrates that architectural decisions regarding processing location fundamentally shape the system’s feasibility at stadium scale. The contrast between cloud-only and edge-assisted configurations shows that uplink capacity limitations are not peripheral technical issues, but structural constraints in large, high-density venues. By reducing raw video transmission and stabilizing network behavior, edge-based preprocessing enables real-time monitoring to be sustained during peak events. This finding reframes edge computing from an optional performance enhancement into an architectural requirement for smart stadium operations under realistic conditions.
Taken together, these findings suggest that smart stadium development requires a shift from technology-centric optimization toward architecture-centric design. In environments characterized by legacy infrastructure, variable network conditions, and extreme fluctuations in crowd density, overall system performance is determined by how the sensing, processing, and communication components are coordinated. This insight is particularly relevant for large public venues in developing country contexts, where incremental modernization must operate within existing constraints rather than through wholesale infrastructure replacement.

5.2. Implications for Smart Stadium Design and Operations

From a practical design perspective, the results indicate that improving monitoring performance in large stadiums does not necessarily require additional cameras or major physical modifications. In this study, coverage continuity and blind zone reduction were primarily achieved through adjustments in camera placement, viewing geometry, and processing architecture. This suggests that many existing stadium surveillance systems can be meaningfully upgraded by reconfiguring sensor deployment and data processing strategies, rather than by expanding hardware inventories.
The operational implications are particularly evident in the bandwidth results. Under cloud-only transmission, uplink demand increased rapidly and exceeded what is typically manageable during large events. When edge-based preprocessing was introduced, uplink demand decreased substantially and data flow became more stable. For stadium operators, this implies that real-time monitoring can be maintained even under high crowd density by shifting part of the analytical workload closer to data sources. In practice, this supports faster response, reduces pressure on shared network infrastructure, and allows operational teams to focus on critical events instead of continuous raw video review.
A representative use case arises during large-scale events such as national football matches or major concerts, when crowd density and network demand peak simultaneously. In such scenarios, panoramic cameras provide continuous wide-area visibility of spectator zones, while face-recognition and access control cameras operate in interior circulation areas. By performing initial video processing at the edge, the system can detect relevant events or abnormal crowd patterns in real time without transmitting full-resolution video streams to the cloud. Only selected video segments and structured metadata are forwarded for centralized coordination and post-event analysis. This operational pattern illustrates how a hybrid edge–cloud architecture supports both immediate safety monitoring and longer-term performance assessment in large stadium environments.

5.3. Architectural Trade-Offs and Deployment Scenarios

Although the proposed hybrid edge–cloud architecture delivers measurable performance benefits—particularly in reducing uplink bandwidth during high-attendance events—these benefits are accompanied by trade-offs that must be considered in deployment decisions. Shifting analytical tasks to the edge increases computational requirements at distributed nodes and introduces additional complexity in deployment, maintenance, and monitoring. Edge nodes must provide sufficient processing capacity and reliability, and managing multiple distributed components requires additional operational coordination. Consequently, the observed bandwidth reduction of approximately 50–75% should be interpreted as a context-dependent performance gain rather than a universally optimal outcome, consistent with prior observations in edge-assisted event management research [9].
In some scenarios, a cloud-centric architecture remains appropriate. When backhaul bandwidth is abundant and latency requirements are moderate, centralized cloud processing offers simplicity and flexibility. Applications such as post-event analytics, long-term trend analysis, or monitoring in smaller venues can be handled effectively without deploying on-site computational infrastructure. In these contexts, the benefits of elastic scalability and simplified system management may outweigh the advantages of localized processing.
Conversely, edge-only architectures are better suited to narrowly defined, latency-critical tasks requiring immediate response and minimal reliance on external connectivity. Basic access control or localized anomaly detection can be efficiently handled at the edge, particularly in environments with severely constrained network capacity. However, the absence of centralized aggregation limits cross-camera coordination, long-term data retention, and system-wide situational awareness, all of which are increasingly important in complex venue operations.
Therefore, the hybrid edge–cloud architecture examined in this study is particularly well suited to large, high-density venues where real-time responsiveness must be balanced with centralized coordination. In national stadiums, safety monitoring and crowd management require low-latency video analytics under conditions of highly dynamic and fluctuating network load during peak events. In such environments, edge processing enables timely detection and preliminary analysis of crowd-related events close to the data source, thereby stabilizing network utilization and supporting rapid local response. Meanwhile, the cloud layer facilitates global coordination, aggregation of analytics results, and cross-zone situational awareness across subsystems and events. This functional separation and collaboration between edge and cloud layers is consistent with recent edge–cloud collaborative video analytics frameworks developed for real-time crowd gathering detection in large public transportation hubs, which demonstrate the effectiveness of distributed processing pipelines for managing dense crowds in complex public spaces [27].

5.4. Strategic and Asset Management Implications (CREM)

Beyond immediate operational benefits, the findings highlight the strategic value of digital surveillance infrastructure within stadium asset management. Treating sensing and analytics systems as long-term assets aligns with corporate real estate management (CREM) principles, which emphasize lifecycle performance, resilience, and public value. For national venues such as Rajamangala National Stadium, the ability to enhance safety and monitoring capability without major physical reconstruction is particularly significant.
The proposed architecture demonstrates how digital systems can extend the functional life of existing facilities while supporting compliance with evolving international safety expectations. From a CREM perspective, this positions smart stadium technologies not merely as technical upgrades but as components of broader asset-governance strategies that improve institutional credibility, operational resilience, and public trust. By linking real-time operational data with long-term asset intelligence, the architecture supports evidence-based decision-making across both operational and strategic horizons.
Overall, the proposed IoT–edge–cloud framework demonstrates how technical performance improvements, such as enhanced spatial coverage and reduced bandwidth consumption, can be translated into tangible corporate real estate value. By enabling more efficient utilization of digital infrastructure and supporting data-driven governance, the framework aligns operational performance with long-term asset management and sustainability objectives in large-scale stadium environments.

5.5. Limitations and Future Research

This study evaluates an edge architecture under controlled spatial and technical conditions derived from the physical characteristics of the Rajamangala National Stadium. Although simulation enables systematic analysis of coverage geometry, visibility behavior, and network load, real-world factors such as uneven lighting, weather variation, and irregular crowd behavior may introduce additional variability not explicitly modeled.
The sensing configuration focused on visual surveillance systems, reflecting current operational priorities in stadium security. However, smart-venue environments increasingly incorporate additional sensing modalities, including thermal, acoustic, and environmental sensors. Future research could examine how integrating these data streams with visual analytics influences multisensor coordination and situational awareness.
Bandwidth reduction and processing performance were evaluated under representative encoding assumptions and hardware capabilities. Variations in device specifications or vendor implementations may affect processing efficiency and data-flow behavior. Evaluating the architecture under alternative configurations and live operational loads would provide deeper insight into scalability and robustness.
Finally, this study examines a single national stadium selected for its scale and operational complexity. Although this setting provides a rigorous test environment, differences in layout, capacity, and governance between venues may influence applicability. Extending the analysis to additional stadiums or large public facilities would help distinguish context-specific findings from those that are broadly generalizable.

6. Conclusions

This study investigated how an integrated IoT–edge architecture can support real-time monitoring and operational management in large sports venues characterized by complex spatial structure, high crowd density, and legacy infrastructure constraints. Using Rajamangala National Stadium as a national-scale case study, simulation-based evaluation demonstrates that surveillance performance is shaped primarily by the alignment between sensing geometry and processing location, rather than by sensor density alone. Geometry-informed deployment of ANPR, face recognition, and panoramic cameras enabled continuous coverage with limited blind zones at vehicular access points, interior circulation spaces, and spectator areas.
Network analysis highlights the practical necessity of edge computing in data-intensive stadium environments. Under cloud-only transmission, uplink demand increased rapidly and exceeded levels that are typically manageable during high-attendance events. By contrast, introducing edge-based preprocessing reduced uplink bandwidth demand by approximately 50–75%, depending on the camera modality and processing configuration, and produced more stable data transmission behavior. These results indicate that distributed edge processing is not merely a performance optimization but an architectural requirement for sustaining real-time monitoring and analytics in large, high-density venues.
Beyond technical performance, the findings emphasize the strategic value of digital surveillance systems as long-term operational assets. By aligning the sensing and analytics infrastructure with broader facility management objectives, the proposed architecture supports incremental modernization without requiring extensive physical reconstruction, an important consideration for national stadiums and publicly owned venues. Although the analysis is based on simulation, the results provide a clear and transferable reference for designing and deploying IoT–edge architectures in large public environments where safety, scalability, and network constraints must be addressed simultaneously.
Future research should validate the proposed architecture under live event conditions, examine its performance across multiple venues with different layouts and governance structures, and explore the integration of additional sensing modalities to enhance multisource situational awareness. Together, these directions will further clarify how edge-enabled smart stadium architectures can support resilient and sustainable venue operations at national and international scales.

Author Contributions

N.P. contributed to conceptualization, methodology, software development, investigation, data curation, visualization, and original draft preparation. C.S., the corresponding author, contributed to conceptualization, formal analysis, resource provision, funding acquisition, and validation. A.S. contributed to validation, writing (review and editing), supervision, and project administration. All authors have read and agreed to the published version of the manuscript.

Funding

Financial support for this study was provided by Mahasarakham Business School, Mahasarakham University, Thailand.

Data Availability Statement

Data supporting the findings of this study are available upon reasonable request from the corresponding author. Data are not publicly available because they contain security-sensitive information related to the surveillance layout and critical infrastructure of the Rajamangala National Stadium. Aggregated simulation parameters and anonymized configuration files can be provided upon a justified request and with permission from the relevant authorities.

Acknowledgments

The authors gratefully acknowledge Mahasarakham University for providing institutional support that facilitated this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Atzori, L.; Iera, A.; Morabito, G. The Internet of Things: A Survey. Comput. Netw. 2010, 54, 2787–2805. [Google Scholar] [CrossRef]
  2. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge Computing: Vision and Challenges. IEEE Internet Things J. 2016, 3, 637–646. [Google Scholar] [CrossRef]
  3. Satyanarayanan, M. The Emergence of Edge Computing. Computer 2017, 50, 30–39. [Google Scholar] [CrossRef]
  4. Mao, Y.; You, C.; Zhang, J.; Huang, K.; Letaief, K.B. A Survey on Mobile Edge Computing: The Communication Perspective. IEEE Commun. Surv. Tutor. 2017, 19, 2322–2358. [Google Scholar] [CrossRef]
  5. Deng, S.; Zhao, H.; Fang, W.; Yin, J.; Dustdar, S.; Zomaya, A.Y. Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence. IEEE Internet Things J. 2020, 7, 7457–7469. [Google Scholar] [CrossRef]
  6. Xu, X.; Chen, Y.; Zhang, J.; Liu, Q.; Jin, H. Edge Intelligence: Empowering Intelligence to the Edge of Network. Proc. IEEE 2021, 109, 1302–1324. [Google Scholar] [CrossRef]
  7. Zhang, S.; Tong, X.; Chi, K.; Shi, Z. Jointly Optimizing Task Offloading and Resource Allocation in MEC with Secure Data Transmission: A Multi-DNNs Approach. IEEE Trans. Mob. Comput. 2025; early access. [Google Scholar] [CrossRef]
  8. Du, Y.; Li, Y.; Chen, J.; Hao, Y.; Liu, J. Edge computing-based digital management system of game events in the era of Internet of Things. J. Cloud Comput. 2023, 12, 44. [Google Scholar] [CrossRef]
  9. Bu, Q.; Zhang, S.; Ma, N.; Luo, Q.; Sun, B. Research on Energy Management Method of Fuel Cell/Supercapacitor Hybrid Trams Based on Optimal Hydrogen Consumption. Sustainability 2023, 15, 11234. [Google Scholar] [CrossRef]
  10. Taleb, T.; Samdanis, K.; Mada, B.; Flinck, H.; Dutta, S.; Sabella, D. On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration. IEEE Commun. Surv. Tutor. 2017, 19, 1657–1681. [Google Scholar] [CrossRef]
  11. Yao, Z.; Zhang, L.; Zhao, X. The Transmission Effect of Threshold Experiences: Smart Sports Venue Consumption and Digital Transformation. Buildings 2025, 15, 3629. [Google Scholar] [CrossRef]
  12. da Silva Candeo, A.L.; Reyes, S., Jr.; Haller, N.; Richardson, A.; Preuss, H.; Souvignet, T.; Könecke, T.; Schubert, M. Governance and Integrity Challenges in Esports: A Scoping Review. Perform. Enhanc. Health 2025, 13, 100352. [Google Scholar] [CrossRef]
  13. Glebova, E.; Glebov, A.; Parry, K. Sports Venue Digital Twin Technology from a Spectator Virtual Visiting Perspective. Front. Sports Act. Living 2023, 5, 1289140. [Google Scholar] [CrossRef]
  14. Gubbi, J.; Buyya, R.; Marusic, S.; Palaniswami, M. Internet of Things (IoT): A Vision, Architectural Elements, and Future Directions. Future Gener. Comput. Syst. 2013, 29, 1645–1660. [Google Scholar] [CrossRef]
  15. Den Heijer, A.C. Managing the University Campus; Eburon: Delft, The Netherlands, 2011. [Google Scholar]
  16. Lindholm, A.-L.; Gibler, K.M.; Leväinen, K.I. Modeling the Value-Adding Attributes of Real Estate to the Wealth Maximization of the Firm. J. Real Estate Res. 2006, 28, 445–475. [Google Scholar] [CrossRef]
  17. Zhang, Y. Digital Infrastructure and Smart Venue Management. Smart Cities 2023, 6, 145–162. [Google Scholar]
  18. Al-Fuqaha, A.; Guizani, M.; Mohammadi, M.; Aledhari, M.; Ayyash, M. Internet of Things: A Survey on Enabling Technologies, Protocols, and Applications. IEEE Commun. Surv. Tutor. 2015, 17, 2347–2376. [Google Scholar] [CrossRef]
  19. Ahmed, I.; Ahmad, M.; Ahmad, A.; Jeon, G. IoT-based crowd monitoring system: Using SSD with transfer learning. Comput. Electr. Eng. 2021, 93, 107226. [Google Scholar] [CrossRef]
  20. Ahmed, E.; Rehmani, M.H.; Raza, U. The Role of Big Data Analytics in Internet of Things. Comput. Netw. 2017, 129, 459–471. [Google Scholar] [CrossRef]
  21. Zhang, H.; Zhang, R.; Sun, J. Developing real-time IoT-based public safety alert and emergency response systems. Sci. Rep. 2025, 15, 29056. [Google Scholar] [CrossRef]
  22. Sicari, S.; Rizzardi, A.; Grieco, L.A.; Coen-Porisini, A. Security, Privacy and Trust in Internet of Things: The Road Ahead. Comput. Netw. 2015, 76, 146–164. [Google Scholar] [CrossRef]
  23. Abbas, N.; Zhang, Y.; Taherkordi, A.; Skeie, T. Mobile Edge Computing: A Survey. IEEE Internet Things J. 2018, 5, 450–465. [Google Scholar] [CrossRef]
  24. Chen, J.; Ran, X. Deep Learning with Edge Computing. Proc. IEEE 2019, 107, 1655–1674. [Google Scholar] [CrossRef]
  25. Nan, Y.; Jiang, S.; Li, M. Large-Scale Video Analytics with Cloud–Edge Collaborative Continuous Learning. ACM Trans. Sens. Netw. 2023, 20, 14. [Google Scholar] [CrossRef]
  26. Chen, S.; Deng, J.; Tao, X.; Xie, X.; Tan, R.; Hong, T.; Liu, X. Bandwidth on a Budget: Real-Time Configuration for Edge Video Analysis. IEEE Trans. Comput. 2026, 75, 409–422. [Google Scholar] [CrossRef]
  27. Sun, L.; Sun, J.; Zhang, J.; Peng, X.; Zhang, F.; Zhang, D.; Ye, K.; Fan, J. Edge–Cloud Collaborative Video Analytics System for Crowd Gathering Detection in Metro Stations. Tsinghua Sci. Technol. 2025, 31, 1764–1777. [Google Scholar] [CrossRef]
  28. Gbadamosi, A.-Q.; Oyedele, L.O.; Davila Delgado, J.M.; Kusimo, H.; Akanbi, L.; Olawale, O.; Muhammed-yakubu, N. IoT for Predictive Assets Monitoring and Maintenance: An Implementation Strategy for the UK Rail Industry. Autom. Constr. 2021, 122, 103486. [Google Scholar] [CrossRef]
  29. Volk, R.; Stengel, J.; Schultmann, F. Building Information Modeling (BIM) for Existing Buildings—Literature Review and Future Needs. Autom. Constr. 2014, 38, 109–127. [Google Scholar] [CrossRef]
  30. EN 62676-4; Video Surveillance Systems for Use in Security Applications—Application Guidelines. European Committee for Standardization: Brussels, Belgium, 2015.
Figure 1. Conceptual Framework Model.
Figure 2. Baseline spatial layout used as the reference environment for system design and simulation. Label 4, marked with a star (*), represents Rajamangala National Stadium.
Figure 3. IoT–edge–cloud architecture illustrating the interaction between perception, edge, and cloud layers in the proposed smart stadium system.
Figure 4. Geometric simulation of panoramic camera deployment in the stadium: (a) installation locations of panoramic cameras at the sixth-floor roof level, and (b) simulated radial coverage illustrating bowl-level visibility. Labels C1–C8 indicate the panoramic camera installation points.
Figure 5. Evaluation workflow illustrating the sequential evaluation of spatial coverage, detection and image clarity, edge processing responsiveness, cloud integration, and system-level validation.
Figure 6. Side, top, and front views of the ANPR camera simulation model. Colors are used for visualization purposes only, where shaded regions represent the camera field of view and do not encode additional quantitative information.
Figure 7. Side, top, and front views of the face-recognition camera simulation model. Colors are used for visualization purposes only; shaded regions represent the camera field of view and do not encode additional quantitative information.
Figure 8. Side, top, and front views of the panoramic camera simulation model.
Table 1. Comparison of existing smart stadium and edge-enabled monitoring studies with the proposed work.
Study | Application Context | Architectural Focus | Edge–Cloud Integration | Stadium-Scale Deployment | Geometry-Aware Sensing | Performance Evaluation
Du et al. [8] | Sports event information management | Edge-assisted event systems | Partial | Small to medium venues | No | Latency, system responsiveness
Xiao et al. [9] | Smart venues (review) | Conceptual frameworks | Conceptual | Generic | No | Not empirical
Chen et al. [26] | Large-scale video analytics | Cloud–edge collaboration | Yes | Non-stadium environments | No | Bandwidth, processing latency
Taleb et al. [10] | MEC and 5G edge architecture | Network-centric | Yes | Generic | No | Network efficiency
Ahmed et al. [19] | IoT-based crowd monitoring | Sensor-centric systems | Limited | Event-based scenarios | No | Detection accuracy
This study | National stadium (legacy infrastructure) | Integrated IoT–edge–cloud architecture | Yes (design-driven) | Yes (full stadium scale) | Yes | Coverage continuity and uplink bandwidth (50–75% reduction)
Table 2. DORI pixel-per-meter (PPM) thresholds applied for face recognition camera configuration based on EN 62676-4.
DORI Level | Camera Resolution Requirement (pixels/m)
Detection | 25 PPM
Observation | 62.5 PPM
Recognition | 125 PPM
Identification | 250 PPM
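The DORI thresholds translate directly into coverage limits for a given sensor: the widest scene a camera can cover at a given level is its horizontal resolution divided by the required pixels per meter. The sketch below illustrates this; the 1920-pixel (Full HD) sensor width is an assumed example, not a camera specified in the study.

```python
# DORI pixel-per-meter thresholds from Table 2 (EN 62676-4).
DORI_PPM = {
    "detection": 25.0,
    "observation": 62.5,
    "recognition": 125.0,
    "identification": 250.0,
}

def max_scene_width_m(horizontal_pixels: int, level: str) -> float:
    """Widest horizontal field of view (in metres) that still meets the
    pixel density required for the given DORI level."""
    return horizontal_pixels / DORI_PPM[level]

# A 1920-pixel-wide sensor meets the recognition threshold across
# 1920 / 125 = 15.36 m of scene width, but the identification
# threshold across only 1920 / 250 = 7.68 m.
```

This is why identification-grade imaging (e.g., face recognition at entry gates) demands many more cameras per unit of covered width than detection-grade monitoring of open areas.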
Table 3. ANPR Simulation Performance Parameters.
Parameter | Value
Blind Zone | 2.75 m
Lane Width | ≤5 m
Tilt Angle | 30°
Table 4. Performance parameters of the face recognition camera simulation.
Parameter | Value
Blind Zone | 2.6 m
Target Width | 8.6 m
Tilt Angle | 26.8°
Table 5. Performance parameters of the simulation of the panoramic camera.
Parameter | Value
Blind Zone | 0.0 m
Target Width | 125.7 m
Tilt Angle | 44.3°
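The blind-zone values in Tables 3–5 follow from camera mounting geometry. The flat-ground pinhole sketch below illustrates the principle only; it omits lens distortion and mounting offsets, the height and field-of-view inputs are assumed examples, and the function is not expected to reproduce the tables' exact values, which depend on vendor optics.

```python
import math

def blind_zone_m(mount_height_m: float, tilt_deg: float, vfov_deg: float) -> float:
    """Horizontal distance from the camera mast to the nearest visible ground
    point, assuming flat ground. The blind zone ends where the lower edge of
    the vertical field of view (tilt + vfov/2 below horizontal) meets the ground."""
    lower_edge = math.radians(tilt_deg + vfov_deg / 2.0)
    if lower_edge >= math.pi / 2.0:
        return 0.0          # lens already sees straight down: no blind zone
    return mount_height_m / math.tan(lower_edge)

# A camera mounted 5 m up, tilted 30 degrees down with a 60-degree vertical FOV,
# first sees the ground 5 / tan(60 degrees), roughly 2.89 m, from the mast.
```

The model also explains the 0.0 m blind zone of the panoramic units in Table 5: with a steep tilt and a wide vertical field of view, the lower edge of the view reaches the base of the mount.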
Table 6. Uplink bandwidth consumption under cloud-only video transmission.
Camera Type | Quantity | Bandwidth (MB/min)
Face Recognition | 229 | 109,920
ANPR | 6 | 2,160
Panoramic | 8 | 7,680
Total | 243 | 119,760
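Table 6 resolves to clean per-camera rates, and the reported 50–75% edge-side reduction can be applied on top of the cloud-only total. In the sketch below, the per-camera rates are derived from the table, while the reduction fractions are the paper's reported bounds supplied as inputs rather than quantities the code derives.

```python
# (quantity, cloud-only uplink in MB/min) per camera type, from Table 6.
cameras = {
    "face_recognition": (229, 109_920),
    "anpr":             (6,   2_160),
    "panoramic":        (8,   7_680),
}

# Implied per-camera rates: 480, 360, and 960 MB/min respectively.
per_camera = {name: total / qty for name, (qty, total) in cameras.items()}

cloud_only_total = sum(total for _, total in cameras.values())   # 119,760 MB/min

def uplink_after_edge(reduction: float) -> float:
    """Remaining uplink demand when edge preprocessing removes the given
    fraction of raw video traffic before transmission to the cloud."""
    return cloud_only_total * (1.0 - reduction)

# At the reported 50% and 75% reduction bounds, uplink demand falls to
# 59,880 and 29,940 MB/min respectively.
```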
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

