Highlights
What are the main findings?
- This survey provides a systematic overview of the layer-wise design of edge AI-enabled smart cities and the four core components that support these systems, spanning smart applications, sensing data, learning models, and hardware infrastructure, with an emphasis on how these components interact in urban contexts.
- This survey synthesizes and organizes applications across multiple domains in smart cities, including manufacturing, healthcare, transportation, buildings, and environments, demonstrating the breadth of real-world deployments. Moreover, it identifies the inherent challenges and analyzes corresponding solutions from the perspectives of sensing data sources, on-device learning model optimization, and hardware infrastructure, to support applications across different domains.
What is the implication of the main finding?
- This survey provides an integrated roadmap that can support researchers, engineers, and policymakers in advancing edge AI technologies for smart cities.
- This survey highlights open challenges and identifies future research directions for advancing more efficient, resilient, and intelligent urban ecosystems.
Abstract
Smart cities seek to improve urban living by embedding advanced technologies into infrastructures, services, and governance. Edge Artificial Intelligence (Edge AI) has emerged as a critical enabler by moving computation and learning closer to data sources, enabling real-time decision-making, improving privacy, and reducing reliance on centralized cloud infrastructure. This survey provides a comprehensive review of the foundations, challenges, and opportunities of edge AI in smart cities. In particular, we begin with an overview of layer-wise designs for edge AI-enabled smart cities, followed by an introduction to the core components of edge AI systems, including applications, sensing data, models, and infrastructure. Then, we summarize domain-specific applications spanning manufacturing, healthcare, transportation, buildings, and environments, highlighting both the software aspects (e.g., AI algorithm design) and the hardware aspects (e.g., edge device selection) of heterogeneous applications. Next, we analyze the sources of sensing data generation, model design strategies, and hardware infrastructure that underpin edge AI deployment. Building on these, we finally identify several open challenges and provide future research directions in this domain. Our survey outlines a future research roadmap to advance edge AI technologies, thereby supporting the development of adaptive, harmonious, and sustainable smart cities.
1. Introduction
1.1. Background
Smart cities have emerged as a central focus with the rapid development of the Internet of Things (IoT), Cyber–Physical Systems (CPS), Artificial Intelligence (AI), and 5G wireless communication technologies [1,2,3,4,5]. Smart cities are not only a research frontier but also a transformative concept that directly affects the daily lives of citizens. They enable more reliable public transportation, faster emergency responses, cleaner air through intelligent environmental monitoring, optimized energy usage in buildings, and improved healthcare through real-time monitoring and predictive services. Every day, billions of interconnected devices and sensors in urban environments generate unprecedented volumes of data, which must be processed, analyzed, and acted upon in real time. By leveraging IoT, CPS, AI, and next-generation wireless communication technologies [6,7,8], smart cities provide a rich opportunity to design intelligent solutions that can address pressing urban challenges while enhancing the efficiency, resilience, and sustainability of city services.
1.2. Motivations
Edge Artificial Intelligence (Edge AI) has recently attracted significant attention because it brings learning and inference closer to data sources, reducing reliance on centralized cloud infrastructures [9,10,11]. This paradigm is especially important for smart city applications, where rapid response, privacy preservation, and energy efficiency are critical. Traditional cloud-centric approaches often fall short due to latency, bandwidth, privacy, and reliability concerns, particularly when crucial decisions must be made instantly for transportation safety, healthcare monitoring, or energy management. In contrast, by enabling low-latency analytics directly on edge devices, edge AI empowers domains such as traffic management, environmental monitoring, healthcare, and industrial automation to operate more effectively.
Edge AI is also a natural fit for smart city services because it aligns well with the highly distributed, resource-constrained, and dynamic nature of urban systems. City infrastructure involves thousands of heterogeneous devices deployed across diverse locations, many of which cannot afford continuous connectivity to distant data centers. By processing data locally, edge AI reduces communication overhead, increases scalability, and allows services to remain resilient even under network disruptions. Furthermore, the proximity of edge intelligence enables context-aware decision-making that can adapt to local conditions, which is essential for personalized healthcare, adaptive traffic control, and energy-efficient building management. These advantages make edge AI a compelling paradigm for building the next generation of intelligent, responsive, and sustainable urban services.
The motivation for this survey stems from the need to provide an integrated view of edge AI in applied smart city subdomains, in contrast to existing surveys that typically focus on a single and isolated subdomain. For example, prior surveys [12,13,14,15,16,17,18,19,20,21,22] each concentrate on one or a few specific application areas. More importantly, to the best of our knowledge, there is no unified perspective that systematically highlights the common aspects shared across subdomains, such as data sources, learning models, and hardware infrastructure. This survey addresses that gap by proposing a unified taxonomy that spans real-world applications, sensing data, learning models, and hardware infrastructure, thereby clarifying the role of edge AI across the smart city landscape. Our work aims to serve researchers seeking to develop algorithms and models for edge intelligence, engineers and practitioners designing edge infrastructures, and policymakers interested in this domain. By offering a structured overview and identifying open research directions, this survey aims to guide interdisciplinary communities in advancing edge AI for smart cities.
1.3. Search Strategy for Literature
The papers were retrieved primarily from four databases: IEEE Xplore, ACM Digital Library, Elsevier, and arXiv. We focused on peer-reviewed journals and conference proceedings that are publicly accessible. The search strategy employed a three-level query structure combining (i) general terms (e.g., edge AI, smart cities), (ii) domain-specific keywords (e.g., manufacturing, healthcare, transportation, buildings, environment), and (iii) technical descriptors (e.g., model compression, federated learning, edge data centers, hardware–software co-design). In addition, Boolean operators (AND, OR) and exclusion filters (-survey, -review) were applied. The final corpus comprises 241 papers and reports, with 215 works published between 2019 and 2025, reflecting the rapid growth of edge AI research in smart cities. Additionally, a smaller subset of 26 foundational works from 2011 to 2018 provides important conceptual baselines in this domain.
1.4. Related Surveys for Edge AI Empowered Smart Cities
Table 1 summarizes existing surveys on edge AI for smart cities. Earlier works focus on specific domains, such as geographic information systems (GISs) [12], microgrids [13], healthcare [15,18], public safety [16], or transportation video surveillance [17]. Others emphasize broader trends and requirements across applications [14], machine learning methods [19], or the integration of edge AI with blockchain [20]. More recent surveys review architectures and frameworks [21] or highlight TinyML for lightweight urban sensing [22]. While valuable, these studies are either domain-specific or limited in scope, underscoring the need for a comprehensive review that integrates applications, sensing data, learning models, and hardware infrastructures.
Table 1.
A Comparative Analysis of Existing Surveys Related to Edge AI for Smart Cities.
Compared with these existing works, our survey offers a more comprehensive and integrative perspective, covering the entire ecosystem of edge AI for smart cities by bringing together four complementary components: applications, sensing data, learning models, and hardware infrastructure. This holistic framing allows us to highlight cross-layer challenges, identify synergies among sensing, communication, computing, and control, and propose forward-looking research directions. Therefore, our work fills a critical gap by not only benchmarking existing studies but also providing an integrated roadmap that can support researchers, engineers, and policymakers in advancing edge AI technologies for smart cities.
1.5. Our Contributions
The contributions of this survey can be summarized as follows:
- First, this survey provides a systematic overview of the layer-wise design of edge AI-enabled smart cities and the four core components that support these systems, spanning smart applications, sensing data, learning models, and hardware infrastructure, with an emphasis on how these components interact in urban contexts.
- Second, it synthesizes and organizes applications across multiple domains in smart cities, including manufacturing, healthcare, transportation, buildings, and environments, demonstrating the breadth of real-world deployments.
- Third, it categorizes the inherent challenges and analyzes corresponding solutions from the perspectives of sensing data sources, on-device learning model optimization, and hardware infrastructure, to support applications across different domains.
- Finally, it highlights open challenges and identifies future research directions for advancing more efficient, resilient, and intelligent urban ecosystems.
1.6. Survey Road-Map
As illustrated in Figure 1, we now provide a road map of the survey, which is structured to offer a comprehensive understanding of the edge AI ecosystem for smart cities. In particular, Section 2 presents a system architecture for edge AI-empowered smart cities, outlining the envisioned layer-wise design and its core components, namely applications (Section 3), sensing data (Section 4), learning models (Section 5), and hardware infrastructure (Section 6); these sections collectively show how edge AI supports smart-city applications from data collection to computation and deployment. Specifically, Section 3 surveys recent edge AI applications across five heterogeneous domains, namely manufacturing, healthcare, transportation, buildings, and environment. Each application is analyzed with respect to its deployed edge devices, learning algorithms, main contributions, benefits, and identified gaps. Section 4 shifts the focus to the data aspect, discussing sensing data sources, challenges, and edge-side data processing solutions. Section 5 turns to edge AI learning models, covering lightweight design, model compression, dynamic models, and energy-aware optimization. Section 6 reviews the edge AI hardware infrastructures that enable these edge-intelligent systems, including data centers, chip design, and supporting resources in land, electricity, and networking connectivity. Section 7 presents several open challenges, including data heterogeneity, hardware hysteresis, the need for synergy across sensing, communication, computing, and control, as well as ethical, governance and policy, security, and privacy considerations. Section 8 explores future research directions, including heterogeneous sensing fusion, hardware–AI co-design, system-level co-design, and integrations between edge AI and digital twins. Finally, Section 9 concludes with final remarks and reflections.
Figure 1.
A Survey Road-map Highlighting the Structure of this Survey.
2. System Architecture for Edge AI-Empowered Smart Cities
This section presents the overall architecture for edge AI-empowered smart cities. We present two complementary views of the edge AI ecosystem. First, we introduce a vertical view of the architecture (Figure 2) that shows how sensors collect data, how edge and cloud systems process it, and how applications use the results to control city services. Second, we present a horizontal view organized around four core components, as shown in Figure 3: applications that define system requirements, sensing data that fuels intelligence, learning models that enable efficient on-device computation, and infrastructure that provides the hardware backbone. Together, these two perspectives, i.e., vertical layering and horizontal components, provide a complete architectural understanding of how edge AI systems are structured and deployed in smart-city contexts.
Figure 2.
The Envisioned Layer-wise Architecture for Edge AI-Empowered Smart Cities.
Figure 3.
Core Components of Edge AI for Smart Cities, Spanning Applications, Sensing Data, Learning Models, and Infrastructure.
2.1. Our Envisioned Layer-Wise Architecture for Edge AI-Empowered Smart Cities
The envisioned architecture for edge AI in smart cities follows a multi-layer design that integrates physical sensing, intelligent edge computing, networking, cloud processing, and diverse applications, as illustrated in Figure 2. This design builds on the foundations of edge intelligence paradigms and smart city architectures [11,23,24]. The hierarchical Physical–Edge–Cloud structure is adopted from the classical CPS-related literature [4,8,25,26], which provides a structural basis for sensing, computing, and control. The component definitions and inter-layer relationships extend edge intelligence frameworks [11,23], articulating how sensing, computation, and networking elements interconnect. The design also incorporates the smart-city perspective discussed in [14], thereby offering a unified model.
Although Figure 2 adopts a layered organization with features inspired by the existing IoT and CPS architectures mentioned above, two distinct aspects deserve highlighting. First, the cloud layer is explicitly incorporated rather than left implicit, as in the referenced architectures [11,23,24]. Several surveyed applications, particularly in the smart transportation, health, and environment sub-domains, rely heavily on cloud resources for analytical functions at scale that are difficult or inefficient to execute at the edge, including training models using data collected over long periods or across multiple sites, maintaining updated models, and supporting system-wide coordination. By explicitly representing this cloud–edge separation, the framework clarifies how edge intelligence and cloud computation are distributed and synergized in practice. Second, this framework delineates concrete components and responsibilities within each layer, rather than presenting them solely at an abstract level. For example, the edge layer identifies specific devices observed in the surveyed studies, such as UAVs, Raspberry Pi platforms, NVIDIA Jetson modules, and smart meters, thereby reflecting actual deployment practices.
From bottom to top: at the physical layer, urban environments are monitored through rich sensing modalities such as sound, images, biosignals, weather, and air quality, which provide the raw foundation for the intelligent services delivered at the upper layers. The sensor layer builds on this by employing heterogeneous devices, including cameras, GPS modules, wearables, environmental sensors, and biosensors, to capture domain-specific data in real time. The edge layer leverages resource-constrained but intelligent devices such as Unmanned Aerial Vehicles (UAVs), Raspberry Pi, Arduino boards, NVIDIA Jetson modules, and smart meters, which locally process sensing data and enable low-latency inference. These are supported by the networking layer, consisting of gateways, routers, access points, hubs, and base stations that interconnect distributed edge devices and transmit both raw and pre-processed data. The cloud layer provides powerful computing infrastructure such as data centers and commercial platforms (e.g., Google Cloud, Amazon Web Services (AWS), Snowflake, and BigQuery) that train large-scale models, manage city-wide data, and generate optimized control policies. At the top, the application layer enables smart city services, including predictive maintenance in manufacturing, patient monitoring in healthcare, traffic management in transportation, building monitoring, and energy optimization in urban environments. Together, these layers ensure that sensing, communication, computing, and control are integrated to provide real-time, reliable, and adaptive intelligence for urban operations.
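To make this layering concrete, the following sketch encodes the stack of Figure 2 as a simple Python data structure. It is purely illustrative: the layer names and example components are taken from the description above, while the class and variable names are our own assumptions rather than part of any cited framework.

```python
# Illustrative encoding of the layer-wise architecture in Figure 2 as a data structure.
# Layer names and example components follow the text above; names are illustrative only.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    example_components: list
    responsibility: str

EDGE_AI_SMART_CITY_STACK = [
    Layer("Physical", ["sound", "images", "biosignals", "weather", "air quality"],
          "urban phenomena to be observed"),
    Layer("Sensor", ["cameras", "GPS modules", "wearables", "environmental sensors", "biosensors"],
          "capture domain-specific data in real time"),
    Layer("Edge", ["UAVs", "Raspberry Pi", "Arduino boards", "NVIDIA Jetson modules", "smart meters"],
          "local processing and low-latency inference"),
    Layer("Networking", ["gateways", "routers", "access points", "hubs", "base stations"],
          "interconnect edge devices and transport raw and pre-processed data"),
    Layer("Cloud", ["data centers", "Google Cloud", "AWS", "Snowflake", "BigQuery"],
          "large-scale training, city-wide data management, optimized control policies"),
    Layer("Application", ["predictive maintenance", "patient monitoring", "traffic management",
                          "building monitoring", "energy optimization"],
          "deliver smart-city services to end users"),
]

if __name__ == "__main__":
    for layer in EDGE_AI_SMART_CITY_STACK:
        print(f"{layer.name:<11} -> {layer.responsibility}")
```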
2.2. Core Components for Edge AI-Empowered Smart Cities
The architecture presented in Figure 3 outlines the structural organization of this survey and presents our original framework, highlighting how edge AI interconnects four key components (applications, sensing data, learning models, and infrastructure) into a unified framework for the smart-city ecosystem. To the best of our knowledge, we are the first to provide an integrated, end-to-end design for edge AI systems, from data generation through model training, hardware deployment, and application delivery. This design enables identification of cross-component dependencies observed in the surveyed literature: sensing data characteristics influence model design choices, model requirements determine hardware platform selection, and infrastructure constraints shape feasible applications.
The first is applications, where edge AI drives innovation across domains, including smart manufacturing, healthcare, transportation, buildings, and environmental management, enabling context-aware and domain-specific services. The second is sensing data, which fuels edge AI by capturing multimodal information, yet also introduces challenges such as heterogeneity, real-time constraints, privacy concerns, and data imbalance. Addressing these requires solutions like lightweight design, adaptive sensing, federated learning, differential privacy, and secure communication. The third component is the learning model, where techniques such as lightweight architectures (e.g., compact CNNs, Tiny Transformers, graph neural networks), model compression (e.g., quantization, pruning, knowledge distillation), adaptive and dynamic models (e.g., early exit, context-aware inference), and energy- and latency-aware optimization strategies ensure efficient model training and inference at the edge. Finally, the infrastructure component provides the hardware backbone, including edge data centers with sustainable cooling, embedded AI chips with security features, and supporting facilities such as hybrid connectivity, land repurposing, and co-location. These components interact seamlessly to enable scalable, efficient, and sustainable edge AI solutions tailored for the complex demands of smart cities.
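As a small, self-contained illustration of two of the model-compression techniques named above (magnitude pruning and int8 quantization; lightweight architectures and early-exit mechanisms are not shown), the NumPy sketch below compresses a random weight matrix. The function names and the 75% sparsity setting are illustrative assumptions, not drawn from any cited work.

```python
# Toy model-compression sketch: unstructured magnitude pruning followed by symmetric
# int8 weight quantization. Values and function names are illustrative only.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_int8(weights: np.ndarray):
    """Symmetric linear quantization: int8 values plus a single float scale."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(8, 8)).astype(np.float32)
    w_pruned = magnitude_prune(w, sparsity=0.75)      # keep only the largest 25% of weights
    q, scale = quantize_int8(w_pruned)
    reconstruction_error = np.abs(q.astype(np.float32) * scale - w_pruned).max()
    print(f"max dequantization error: {reconstruction_error:.4f}")
```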
3. Edge AI Applications Across Heterogeneous Domains in Smart Cities
In this section, we systematically discuss five distinct domains in smart cities, namely smart manufacturing, smart healthcare, smart transportation, smart buildings, and smart environments, and describe how edge AI can be leveraged in each domain, as illustrated in Figure 4. These domains represent the core components of urban life: how people work, maintain health, move, inhabit spaces, and interact with their surroundings. They were selected based on prior works on edge AI and smart city architectures [14,20,27]. Each row in Table 2, Table 3, Table 4, Table 5 and Table 6 describes an application along with its algorithm, edge device, contribution, benefits, and limitations to ensure consistency. The tabular format was adapted from [28] and refined to align with the smart-city context.
Figure 4.
The Five Considered Sub-domains in Smart Cities: Smart Manufacturing, Smart Healthcare, Smart Transportation, Smart Building, and Smart Environment.
Table 2.
Edge AI Applications in Smart Manufacturing Domains.
Table 3.
Edge AI Applications in Smart Healthcare Domains.
Table 4.
Edge AI Applications in Smart Transportation Domains.
Table 5.
Edge AI Applications in Smart Building Domains.
Table 6.
Edge AI Applications for Smart Environment Domains.
Figure 5 provides an overview of the five domains, showing how algorithmic design and hardware choices meet domain-specific requirements. In particular, Smart Manufacturing focuses on automation, predictive maintenance, and industrial efficiency; Smart Healthcare enables telemedicine, remote monitoring, and data-driven diagnosis; Smart Transportation enhances mobility through traffic optimization, autonomous driving, and emergency response; Smart Building emphasizes energy management, occupant comfort, and security; and Smart Environment addresses air quality, waste management, and disaster preparedness. Although each domain pursues distinct objectives, they share common challenges, including real-time inference, limited edge resources, privacy and security concerns, and the scalability of heterogeneous systems. Thus, in what follows, we systematically summarize both the AI algorithm designs and hardware selections in each domain, as shown in Figure 5.
Figure 5.
Applications in Edge AI for Smart Cities.
3.1. Smart Manufacturing Domains
Integrating edge AI into manufacturing supports predictive maintenance, process automation, quality inspection, and supply chain optimization. Several works address the integration of edge and cloud resources to balance real-time processing with centralized coordination. Li et al. [29] developed a hybrid computing framework with two-phase resource scheduling on Raspberry Pi 3, ensuring low-latency task execution for Industry 4.0 transitions. Building on this paradigm, Yang et al. [32] proposed AI-Mfg-Ops, a software-defined edge–cloud architecture for smart monitoring, analysis, and execution. Tang et al. [33] introduced a multi-agent system with Intelligent Production Edges (IPEs) enabling dynamic reconfiguration and self-organized control in mixed-flow production. Ying et al. [34] validated an edge–cloud platform integrated with quality management systems on a semiconductor production line, achieving reductions in downtime (12%), defects (20%), and costs (up to 80%).
In the context of production scheduling and resource management, Lin et al. [30] applied Multi-class Deep Q-Network (MDQN) for job-shop scheduling across distributed edge devices, while Ing et al. [31] showed how small- and medium-sized enterprises can adopt Deep Learning (DL) and Deep Reinforcement Learning (DRL) within existing quality management systems (QMS) to support AI-driven resource allocation. For condition monitoring and fault detection, Vermesan et al. [35] implemented ML and DL models including K-Means, Random Forest, SVM, and CNN on ARM Cortex-M4 processors in a soybean processing facility, improving productivity through Industrial IoT (IIoT) enabled sensing and control.
Beyond conventional industrial AI applications, Rane et al. [36] examined applications of generative AI tools such as ChatGPT and Bard in construction, covering robotics, building information management, and AR/VR systems, and outlined a cyclical AI framework for continuous learning. Xu et al. [37] investigated embodied AI systems where robots, sensors, and actuators enable self-learning and swarm intelligence in Industry 5.0 settings, identifying opportunities for improved productivity, sustainability, and worker well-being. Addressing energy consumption in IIoT environments, Zhu et al. [38] designed a heterogeneous edge computing framework using DRL-based scheduling on NVIDIA Jetson and Raspberry Pi 4B platforms, achieving 70–80% energy reduction compared to static and FIFO scheduling approaches.
3.2. Smart Healthcare Domains
Integrating edge AI into healthcare links patient monitoring, resource management, disease prediction, emergency response, and public health. These applications enable faster diagnostics, optimized resource use, proactive prevention, and comprehensive safety monitoring.
In the patient monitoring domain, Hayyolalam et al. [28] developed a framework employing DRL across distributed edge–cloud infrastructure to facilitate early disease detection while reducing medical costs, latency, and network load. Multiple works address continuous monitoring through wearable and IoT devices. Putra et al. [39] designed a non-invasive glucose monitoring system using LeNet-5 CNN and ANN on ESP32 microcontrollers with PPG sensors, offering a pain-free alternative to invasive methods. Putra et al. [40] integrated CNN-RNN models with You Only Look Once (YOLO) to process multimodal sensor and image data at edge nodes for real-time diagnostics and personalized care. Pradeep et al. [41] combined health-monitoring sensors, Raspberry Pi, and NVIDIA Jetson Nano with Random Forest models for continuous monitoring and secure remote consultation. Akram et al. [42] surveyed ML, Federated Learning (FL), TinyML, and hardware acceleration strategies for privacy-preserving cardiac monitoring, emphasizing differential privacy, model quantization, and Field Programmable Gate Array (FPGA) acceleration to address latency, energy, and personalization challenges.
Resource management, disease prediction, and emergency response applications leverage edge AI for operational efficiency. Rathi et al. [43] developed a framework that processes patient sensor data at edge nodes, prioritizing cases using max-heap queues and allocating resources through category-specific min-heaps, achieving response times between 0.2–0.4 units in hospital network simulations. Badidi [44] reviewed edge AI for early disease detection through wearable devices and medical sensors, highlighting CNN, RNN, and collaborative training for real-time health risk analysis. Ahmed et al. [45] designed ACA-R3, an edge-enabled protocol for autonomous connected ambulances (ACAs) that monitors patient vitals and optimizes routes based on GPS and traffic data, reducing handover time and improving emergency coordination through real-time telemedicine integration.
Addressing public health and safety, Thalluri et al. [46] incorporated edge computing into environmental monitoring, processing environmental data locally on Raspberry Pi devices with decision tree models generating automated alerts for abnormal levels. Sengupta et al. [47] deployed high-resolution network (HRNET), YOLO, Region-based CNN (R-CNN), Fully-CNN (F-CNN), and Mask R-CNN techniques on heterogeneous edge devices for contact tracing, mask detection, and social distancing monitoring from surveillance feeds, enhancing workplace safety compliance during epidemics. Choudhury et al. [48] implemented YOLOv4 with composite attention and deep pre-trained models (DPTMs) on Grove AI Hat and Raspberry Pi for mask detection, contact tracing, and cyber-threat detection, reducing latency through bandwidth optimization and edge parallelism to support data-driven pandemic response.
3.3. Smart Transportation Domains
Integrating edge AI into transportation supports safety surveillance, traffic management, and intelligent transportation systems. Safety surveillance applications leverage edge AI for rapid detection of critical events while reducing latency, bandwidth consumption, and cloud dependence. Ke et al. [49] implemented Single Shot Detector (SSD) Inception on NVIDIA Jetson TX2 with dashcam, GPS, and Arduino for near-crash detection using multi-threaded linear-complexity tracking of objects. Neto et al. [50] developed MELINDA, distributing face detection and recognition tasks across hardware-accelerated edge devices using SSD MobileNet and FaceNet, processing video streams 33% faster than baseline approaches for unauthorized access alerts. Vision-based surveillance extends to diverse scenarios: Huu et al. [51] deployed YOLOv5 on Jetson Nano for abnormal activity detection, Broekman et al. [52] implemented YOLOv4-tiny on UAV-mounted edge hardware for vehicle classification, and Rahman et al. [53] combined CNNs, Long Short-Term Memory (LSTM), and YOLOv5 on Raspberry Pi and Jetson devices for real-time monitoring with zero-trust architecture. Soy et al. [54] applied kNN and Dynamic Time Warping on Raspberry Pi Pico to monitor aggressive driving on public transport, enabling early detection and proactive safety alerts.
Traffic management applications optimize signal control and network operations at the edge. Irshad et al. [56] designed IB-SEC, a secure edge platform employing African Buffalo Optimization (ABO) and distributed hash functions with Medium Access Control (MAC) protocols to adapt to network conditions and improve throughput and latency. Alkinani et al. [57] applied XGBoost to sensor data from mobile phones and vehicle-mounted devices for real-time traffic analysis. Vision-based approaches also address traffic signal optimization: Lee et al. [58] implemented a compressed video analysis model with CNNs-YOLOX and LT2 on Jetson AGX Xavier for real-time intersection monitoring in Pyeongtaek City, South Korea, reducing delays during non-peak hours, while Hazarika et al. [59] developed a dynamic traffic light system on Raspberry Pi 3 using YOLO and Rapid Automatic Keyword Extraction (RAKE) to adjust signal timing based on traffic density and inter-junction coordination. Murturi et al. [55] extended edge-based optimization to urban logistics, executing the Expressive Numeric Heuristic Search Planner (EHNSP) on distributed IoT devices for real-time waste collection route optimization in densely populated areas.
For intelligent transportation systems (ITSs), Moubayed et al. [60] formulated V2X service placement as binary integer programming and proposed G-VSPA, a low-complexity greedy heuristic for deployment across eNBs and Roadside Units (RSUs), achieving near-optimal performance with reduced computational complexity. Jeong et al. [61] implemented FPGA-based license plate recognition using kNN on Raspberry Pi for power-efficient processing. Chavhan et al. [62] developed an AI-IoT multi-agent system applying Radial Basis Function Neural Network (RBF-NN) and stochastic queuing models on RSUs to process real-time V2X data, achieving 25% CO2 emission reduction while cutting energy use and congestion. Yang et al. [63] designed a self-learning Spiking Neural Network (SNN) navigation system on neuromorphic hardware for low-power cognitive routing in IoV, while Rong et al. [64] developed STGLLM-E, combining RoadSort with spatio-temporal modules and Generative Pre-trained Transformer (GPT) for traffic flow prediction in 6G-integrated autonomous transport systems, outperforming baselines in accuracy and training efficiency. Liu et al. [65] proposed Edge-MuSE, a multi-task system on Jetson Xavier NX performing visibility estimation, dehazing, road segmentation, and surface condition classification for improved environmental perception with lower latency and enhanced privacy.
3.4. Smart Building Domains
Integrating edge AI into smart buildings enables intelligent energy management, security, maintenance, and occupant-aware services, thereby enhancing sustainability, efficiency, safety, and comfort through real-time monitoring and adaptive control. In the context of energy management, Bajwa et al. [69] reviewed AI-enabled building management systems employing DL, Hybrid-DL, Deep Belief Networks (DBNs), Variational Autoencoders (VAEs), and DRL with IoT sensors, finding that these techniques improved energy efficiency while enhancing maintenance and comfort. Chen et al. [66] developed a Smart Home Energy Management System (SHEMS) prototype using fog–cloud architecture with two-stage ANN-based non-intrusive appliance monitoring (NIALM), supporting scalable residential demand-side management with real-time alerts. Márquez-Sánchez et al. [67] combined FL and DRL in an adaptive edge framework that optimizes energy consumption while accounting for occupant comfort and preferences, enabling personalized and sustainable management. Yang et al. [75] applied spherical fuzzy CRITIC-COCOSO decision algorithms to evaluate AI-based models for low-energy buildings, improving decision-making under uncertainty for strategies balancing sustainability, economic growth, and energy efficiency.
Monitoring and detection systems address maintenance, occupancy, and security. Atanane et al. [70] implemented TinyML-based leakage detection using CNN variants (EfficientNet, ResNet, AlexNet, MobileNet) on Arduino Nano33 BLE, analyzing sensor data for deviations in flow, pressure, or vibration to enable real-time detection with minimal intervention. Ahamad et al. [71] developed a real-time people tracking and counting system using SSD, MobileNet, and novel algorithms on Intel NUC12 Pro Mini PC, achieving 97% accuracy at 20-27 frames per second (FPS) across varied lighting conditions. Vijay et al. [72] employed CNN and YOLOv8 with CCTV cameras for customer behavior monitoring, heat map generation, and anomaly detection at the edge, reducing resource use while improving experience and security. Security-focused applications include Craciun et al. [73], who designed an intrusion detection system using hybrid XGBoost models on Orange Pi for real-time threat detection against DDoS attacks, and Reis et al. [74], who implemented Isolation Forest and LSTM Auto-Encoder (LSTM-AE) on Raspberry Pi and Jetson Nano for low-latency, privacy-preserving anomaly detection. Generative AI has also been explored as a decision-support tool for smart building integration within broader smart city ecosystems. Shahrabani et al. [68] employed Google Bard and OpenAI ChatGPT-3 to evaluate smart building integration into smart cities, assessing resilience, efficiency, and sustainability across energy, transportation, water, waste, and security domains.
The surveyed smart building applications reveal two distinct edge AI deployment patterns. First, application algorithms range from classical decision trees [75] and traditional CNNs to recent explorations of large language models for building evaluation [68] and advanced deep learning frameworks [69,70,74]. The algorithms are diverse compared to manufacturing or transportation, where CNNs and YOLO variants dominate. Second, smart building works exhibit limited real-world validation: most remain conceptual frameworks [67,70], prototypes lacking large-scale testing [66], or deployments with evaluation gaps [68,73,75]. This contrasts with transportation, where real-world traffic data and deployed camera systems are more common. The gap between algorithmic sophistication and deployment maturity suggests smart building edge AI remains in an earlier development stage.
3.5. Smart Environment Domains
Integrating edge AI into environmental monitoring enables real-time sensing of air quality, climate, energy, and mobility conditions, supporting sustainable urban development through low-latency perception, efficient resource use, and proactive pollution and safety management. For example, Silva et al. [76] developed a hardware–software co-design pipeline for wearable edge AI using smart helmets with Raspberry Pi and Jetson devices. Using MLP and CNN models, the system enabled on-device ecological monitoring in environments with limited network connectivity. Almeida et al. [77] applied multiple neural network models at the edge to measure temperature, humidity, CO2 levels, and human traffic for low-cost workplace environmental monitoring. Liu et al. [65] implemented Edge-MuSE on NVIDIA Jetson Xavier NX, integrating visibility estimation, dehazing, road segmentation, and surface condition classification for low-latency, privacy-preserving environmental perception in traffic contexts.
Addressing sustainability, Chavhan et al. [62] designed RBF-NN and stochastic queuing models on RSUs to process real-time V2X data, achieving 25% CO2 emission reduction while decreasing energy use and noise pollution. Yang et al. [63] applied self-learning SNNs on neuromorphic architecture with fault-tolerant routing for energy-efficient, low-power traffic navigation. Rehman et al. [78] combined CNNs, MLPs, and A* search algorithms in a deep state-space model for urban fire surveillance, enhancing detection accuracy and emergency response optimization.
Environmental edge AI applications are closely tied to transportation infrastructure, linking sensing of emissions, air quality, and weather conditions with traffic management and vehicle navigation. This indicates that environmental monitoring often acts as a cross-cutting capability rather than a standalone domain, enhancing other smart city services: traffic systems can reduce emissions [62,63,65], and workplace monitoring can account for environmental factors [77]. These applications also use a wider variety of hardware than other domains, including wearable edge devices [76], standard embedded systems [65,77], vehicle-integrated sensors [62], and specialized neuromorphic architectures [63]. In contrast, smart building and transportation systems tend to rely on a narrower set of platforms, such as Raspberry Pi or Jetson devices, reflecting environmental monitoring’s distributed deployment across diverse infrastructures rather than centralized facilities.
3.6. Common Patterns and Limitations Across Five Sub-Domains
Edge AI implementations across each smart city domain share common patterns, yet also face domain-specific limitations that shape future research in edge deployment.
Common patterns across five sub-domains: Across smart manufacturing, healthcare, transportation, buildings, and environments, several patterns can be observed. Applications employ a wide range of edge devices, from low-power sensors and embedded processors to GPU-enabled platforms, resulting in similar trade-offs among computational capability, energy consumption, and real-time performance. To accommodate resource constraints, many applications adopt lightweight learning approaches, such as compact convolutional models, simplified temporal models, or classical machine learning techniques. Despite differences in sensing modalities, all domains face challenges in feature extraction, data quality, and robustness under real-world operating conditions. Additionally, applications handling sensitive data, particularly in healthcare, transportation, and smart buildings, favor on-device processing to minimize data transmission beyond the edge.
Common limitations and domain-specific limitations: Despite similarities, recurring limitations are evident across the five domains. Validation studies are often small-scale, including single-facility investigations in manufacturing, short-duration trials in transportation, limited participant datasets in healthcare, prototype deployments in buildings, and idealized sensor setups in environmental monitoring. Simulation-based evaluation is prevalent, particularly in transportation and healthcare, whereas smart building and environment studies often remain conceptual or are tested only under controlled conditions. Moreover, robustness to real-world variability is rarely addressed, with limited attention to adverse weather, network disruptions, hardware failures, or heterogeneous operational conditions. Each domain also exhibits unique constraints that further limit deployment. Manufacturing systems struggle with interoperability challenges and evaluation gaps across heterogeneous production setups. Healthcare models rely on simulated or limited patient data and often lack standardized datasets. Transportation applications face issues with poor video quality, network interference, and limited-duration testing. Smart building prototypes rarely undergo large-scale or stress testing and typically evaluate only a narrow range of scenarios. Environmental monitoring systems are distributed across diverse infrastructures, introducing hardware heterogeneity, sparse sensor coverage, and difficulties in aggregating local measurements into actionable city-wide insights.
Hardware trade-offs: Hardware selection further reflects these domain-specific trade-offs. NVIDIA Jetson devices (TX2, Nano, Xavier, AGX Xavier) offer high computational capability with integrated GPU acceleration, supporting real-time inference for complex models, yet their higher power consumption and cost constrain deployment in energy-limited or large-scale networks. Raspberry Pi devices (Pi 3, Pi 4, Pi Pico) offer moderate performance at lower cost and power, making them suitable for diverse applications, though they struggle with computationally intensive tasks. Microcontrollers (ESP32, ARM Cortex-M4, Arduino) enable ultra-low-power, battery-operated, or energy-harvesting deployments but are limited to simple models due to limited memory and processing capacity. FPGA platforms offer reconfigurable hardware with custom acceleration and predictable latency, but require specialized expertise and longer development cycles.
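Read as a decision aid, the trade-offs above can be summarized by a simple selection heuristic. The sketch below is only an illustration of that reasoning: the platform classes and their qualitative properties mirror the paragraph above, while the requirement fields and the priority order of the rules are our own assumptions.

```python
# Illustrative heuristic mapping application requirements to an edge platform class.
# Platform characteristics are qualitative, taken from the survey text; the requirement
# fields and rule ordering are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class Requirements:
    deterministic_latency: bool   # hard real-time pipelines with predictable timing
    heavy_vision_inference: bool  # large CNN/transformer models at high frame rates
    battery_powered: bool         # ultra-low-power or energy-harvesting deployment
    large_scale_low_cost: bool    # many nodes with a tight per-unit budget

def suggest_platform(req: Requirements) -> str:
    if req.deterministic_latency:
        return "FPGA (custom acceleration, predictable latency, longer development cycles)"
    if req.heavy_vision_inference:
        return "NVIDIA Jetson class (GPU acceleration, higher power consumption and cost)"
    if req.battery_powered:
        return "Microcontroller class (ESP32 / ARM Cortex-M / Arduino, simple models only)"
    if req.large_scale_low_cost:
        return "Raspberry Pi class (moderate performance at low cost and power)"
    return "Raspberry Pi class (a reasonable default for moderate workloads)"

# Example: a battery-powered leak detector running a tiny CNN across many nodes.
print(suggest_platform(Requirements(False, False, True, True)))
```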
In summary, edge AI applications across smart city domains exhibit technical feasibility, leveraging lightweight models, on-device processing, and a range of hardware platforms. However, the predominance of small-scale validations, simulation-based evaluations, and limited robustness to real-world variability underscores that applications remain at an experimental stage.
4. Edge AI Sensing Data for Smart Cities
Sensing data-fueled smart cities provide the foundation for diverse edge AI applications by enabling real-time, multimodal insights from urban infrastructure. In this section, we discuss heterogeneous urban sensing data sources, challenges in data collection, and corresponding solutions. Figure 6 provides a high-level view of this section, illustrating how different sensing modalities pose challenges and corresponding techniques to address them.
Figure 6.
Sensing Data Types in Edge AI for Smart Cities.
4.1. Urban Sensing Data Sources for Edge AI-Enabled Smart Cities
In smart cities, different types of sensing data, shown in Table 7, are collected across multiple domains to enable context-aware and adaptive edge AI applications:
Table 7.
Sensing Data Sources.
- Smart Manufacturing Applications: They leverage microphones, inertial measurement units (IMUs), and acoustic emission sensors to capture sound and vibration data for fault diagnosis [79,80]. Industrial and mechanical sensors measure rotation, torque, spindle speed, load, thickness, voltage, current, proximity, pressure, optical signals, and temperature, enabling anomaly detection and equipment failure prediction, thereby supporting uninterrupted production flow [81]. Time, speed, torque, and temperature measurements are further integrated with thermal models to predict thermal displacement in machining processes [82].
- Smart Healthcare Applications: They employ a variety of sensors to support continuous monitoring and early diagnosis. Electroencephalogram (EEG) sensors generate brain signals (EEG signals) that are analyzed for pathology detection [83]. Wearable devices collect physiological signals, including electrocardiogram (ECG) signals, in a non-invasive manner, enabling personalized health monitoring [84], myocardial infarction detection [85], and continuous cardiac monitoring [86]. Biosensors measure temperature, blood pressure, pulse rate, and SpO2, supporting medical diagnosis [87] and remote patient monitoring [88]. In addition, breath sensors analyze exhaled air to facilitate early detection of respiratory diseases [89].
- Smart Transportation Applications: They employ environmental sensors and cameras together to capture temperature, humidity, and image data for multi-task traffic surveillance [90]. Beyond surveillance, cameras capture image and video frames, which are analyzed for traffic monitoring [91], hazard detection [92], and vehicle detection [93]. LiDAR and radar provide point clouds and radar returns, supporting hazard detection [92] and enabling real-time decision making in dynamic traffic environments [94].
- Smart Building Applications: They deploy environmental sensors to monitor air quality, humidity, temperature, as well as smoke levels, which are analyzed to enhance energy efficiency and reduce consumption [95,96,97,98]. Motion sensors detect occupant presence, enabling reliable occupancy detection that supports adaptive control of indoor conditions [99].
- Smart Environment Applications: They rely on pollution sensors to monitor air quality, particulate matter, carbon dioxide (CO2), and nitrogen oxide (NOx) levels, supporting continuous air quality monitoring [100,101]. Environmental sensors measure temperature, humidity, pressure, weather conditions, and light intensity, which are utilized to enhance energy efficiency [102,103].
4.2. Challenges and Effective Edge Side Solutions in Smart City Sensing Data Collections
Despite the promise of pervasive sensing for smart cities, several challenges emerge when collecting, managing, and utilizing urban-scale data streams. To address these challenges, effective edge-side data processing techniques are required. We now summarize several solutions for each of the challenges considered below.
4.2.1. High Heterogeneity from Sensing
We consider two types of heterogeneity in smart city applications in what follows.
Intra-domain Heterogeneity. Smart city sensing involves integrating data collected from multiple sensing modalities within each specific domain, each with distinct data formats and characteristics. For example, in smart manufacturing, vibration sensors, infrared cameras, and industrial IoT devices generate time-series signals, thermal images, and structured machine logs for predictive maintenance and process optimization. In smart healthcare, wearable devices and ambient medical IoT sensors produce biosignals such as ECG and SpO2, continuous motion data, and patient health records in both structured and unstructured forms. In smart transportation, traffic cameras and roadside LiDAR produce high-resolution video streams and 3D point clouds, respectively, while GPS devices in connected vehicles generate continuous geospatial trajectories. In smart buildings, energy meters output structured numerical readings of power consumption, motion detectors provide binary occupancy data, and environmental sensors deliver continuous measurements of temperature, humidity, and air quality.
Thus, intra-domain heterogeneity reflects the diversity of sensing modalities and data formats within a single urban domain; it can be mitigated through the domain-specific preprocessing and harmonization strategies discussed below.
- Standardized Data Formats provide common structures for sensor description, observation encoding, and data access, simplifying integration of heterogeneous IoT devices and enabling interoperability. Without such standards, data from diverse sources would remain fragmented and difficult to analyze efficiently. To this end, Fazio et al. [104] introduced a dual abstraction layer based on Open Geospatial Consortium Sensor Web Enablement (OGC-SWE) standards, employing SensorML for sensor descriptions, O&M for encoding observations, and SOS for data access. Their four-layer, data-centric architecture used a shared database to manage asynchronous uploads and uniformly deliver sensor information. Rubí et al. [105] proposed a OneM2M-based Internet of Medical Things (IoMT) platform to address the interoperability gaps of OpenEHR for e-health devices. The platform extended OpenEHR semantics to transform SenML data into standardized formats, such as FHIR and OpenEHR, enabling big data analytics and online processing. Beyond formatting, data preprocessing is also key to managing intra-domain heterogeneity. Krishnamurthi et al. [25] reviewed approaches including wavelet-based denoising, missing value imputation with statistical and correlation-based models, outlier detection through voting mechanisms, SVM, and Principal Component Analysis (PCA), and data aggregation methods including tree-based, cluster-based, and centralized approaches to address the challenges of real-time IoT sensor data. Similarly, Kim et al. [106] introduced Thing Adaptation Layer (TAL), which uses device-specific TAS functions to translate raw sensor outputs into standardized data formats and convert control instructions into device-specific commands, enabling uniform access through REST APIs.
- Feature Extraction Pipelines use signal processing and domain-specific engineering methods, such as the FFT for vibration data or wavelet transforms, to transform raw sensor outputs into compact, comparable representations. By reducing variability across heterogeneous signals, these pipelines enable more accurate, efficient learning on edge devices. Concerning this, Alemayoh et al. [107] proposed a new data structuring approach for sensor-based HAR, in which duplicated triaxial IMU data were formatted into single and double-channel inputs to enhance temporal–spatial feature extraction. This design improved the accuracy and efficiency of lightweight neural network models for real-time motion recognition. Arunan et al. [108] designed FedMA, an FL framework for industrial health prognostics that addresses misalignment of feature extractors across heterogeneous clients. By matching neurons with similar feature extraction functions before averaging, FedMA preserved local features and improved prognostic accuracy compared with FedAvg. In the context of remote sensing, Wang et al. [109] investigated a scene classification framework that extracts heterogeneous features, including DS-SURF-LLC (dense SURF descriptors), Mean-Std-LLC (statistical features), and MO-CLBP-LLC (multi-orientation texture patterns). These features are fused using discriminant correlation analysis to generate compact representations, while decision-level fusion is performed by combining multiple SVM classifiers through majority voting, further enhancing classification performance.
- Edge Middleware provides a lightweight software layer for ingesting, processing, and distributing heterogeneous IoT data streams. By abstracting device-specific protocols and exposing uniform APIs, middleware frameworks enable real-time analytics, interoperability, and quality-of-service support at the edge. To this end, Akanbi et al. [110] developed a distributed stream processing middleware for real-time environmental monitoring. The framework was built on a publish/subscribe architecture with Apache Kafka, ingested heterogeneous data via Kafka Connect, and processed streams using Kafka Streams with numerical models. Kim et al. [106] proposed a oneM2M-based middleware platform with an open API that provides REST interfaces for interacting with WSN devices at the localhost, local area network, and global network levels. Likewise, Gomes et al. [111] proposed the M-Hub/Context Data Distribution Layer (CDDL) middleware to acquire, process, and distribute context data with QoC provisioning and monitoring. Unlike SSDL middleware that relied on different protocols for mobile and cloud communication, CDDL employs MQTT as a single protocol for both local and remote communication, ensuring that QoS policies are applied end-to-end [16].
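A minimal end-to-end sketch of the three solution directions above is given below: an FFT-based feature-extraction step reduces a (synthetic) vibration window to compact features, the result is packed into a simple standardized JSON observation, and the payload is handed off to an MQTT-style edge middleware for publication. The field names, topic string, and sampling parameters are illustrative assumptions, not part of any standard or system cited above.

```python
# Minimal sketch: FFT features from a synthetic vibration window, encoded into a simple
# standardized JSON observation that an MQTT-based edge middleware could publish.
# Field names, the topic string, and sampling parameters are illustrative assumptions.
import json
import time
import numpy as np

FS = 1000  # assumed sampling rate in Hz

def extract_fft_features(signal: np.ndarray, fs: int = FS, top_k: int = 3) -> dict:
    """Return the dominant spectral peaks and the RMS of a vibration window."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    peaks = np.argsort(spectrum)[-top_k:][::-1]
    return {
        "dominant_freqs_hz": [round(float(freqs[i]), 1) for i in peaks],
        "dominant_mags": [round(float(spectrum[i]), 2) for i in peaks],
        "rms": round(float(np.sqrt(np.mean(signal ** 2))), 4),
    }

def to_observation(sensor_id: str, features: dict) -> str:
    """Encode features in a common observation schema shared by all sensors."""
    return json.dumps({
        "sensor_id": sensor_id,
        "modality": "vibration",
        "timestamp": time.time(),
        "features": features,
    })

if __name__ == "__main__":
    t = np.arange(0, 1, 1.0 / FS)
    # Synthetic vibration: 50 Hz and 120 Hz components plus measurement noise.
    window = (np.sin(2 * np.pi * 50 * t)
              + 0.5 * np.sin(2 * np.pi * 120 * t)
              + 0.1 * np.random.default_rng(0).normal(size=len(t)))
    payload = to_observation("press-line-3/bearing-7", extract_fft_features(window))
    # An edge middleware (e.g., an MQTT client) would publish this payload on a topic
    # such as "factory/vibration/press-line-3" for downstream consumers.
    print(payload)
```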
Inter-domain Heterogeneity. The heterogeneity across multiple domains extends to data formats (video, audio, numerical, text logs, biosignals), sampling rates (milliseconds for traffic sensors vs. hours for utility meters), and quality (noisy acoustic signals vs. structured meter readings). Bringing these sources together enables holistic applications. For example, combining transportation data with environmental sensor streams allows a city to correlate traffic congestion with air pollution exposure, informing both mobility management and public health interventions. This cross-domain fusion is fundamental to smart city intelligence but requires standardized data representations and efficient multimodal sensing fusion frameworks to overcome incompatibilities.
Thus, inter-domain heterogeneity arises when integrating multimodal data across different urban sectors, demanding cross-domain interoperability standards and multimodal AI techniques to unlock system-level intelligence in smart cities. Addressing this challenge requires fusion across data types and sectors.
- Common Data Models (CDMs) standardize how data are structured and interpreted across systems, acting as a shared semantic “language” for heterogeneous sources. By defining consistent entities and relationships, such as linking sensors to devices or associating readings with locations, CDMs enable interoperability and cross-domain integration. Peng et al. [112] proposed a Semantic Web-based method that uses an OWL integration ontology to unify health and home environment data. Their method combined HL7 FHIR, Web, WoT, and Linked Data into a semantic resource graph at the resource integration layer, enabling standardized access through semantic APIs. Ali et al. [113] proposed a semantic mediation model to address interoperability in heterogeneous healthcare services. Their framework applied the Web of Objects paradigm, incorporating virtual and composite virtual objects, semantic annotation, ontology alignment, and deep representation learning, while leveraging a Common Data Model to transform diverse data into standardized formats. Likewise, Adel et al. [114] proposed a semantic ontology-based model for distributed healthcare systems to address interoperability across heterogeneous EHR sources. These sources are transformed into OWL ontologies and merged into a unified ontology, enabling unified queries through SPARQL and Description Logic. Implementing shared ontologies (e.g., CityGML, Brick schema for buildings) provided semantic consistency across domains.
- Multimodal AI Frameworks combine heterogeneous data sources such as images, sensor readings, text, and temporal signals into unified models that capture complementary information. By jointly learning from multiple modalities, they improve accuracy, robustness, and decision-making in complex edge and IoT applications. For example, Ahmed et al. [115] investigated a multimodal AI framework to address the challenge of delayed detection and response to traffic incidents, integrating YOLOv11 for real-time accident detection, Moondream2 to generate scene descriptions, and GPT-4 Turbo to produce actionable reports. Alghieth et al. [116] proposed Sustain AI, a multimodal DL framework to address the challenge of increasing energy demand and carbon emissions in industrial manufacturing. The system integrated CNNs for defect detection, RNNs for energy prediction, and reinforcement learning (RL) for dynamic energy optimization. Reis et al. [117] proposed an IoT- and AI-driven framework that fused multimodal data from traffic sensors, environmental monitors, and historical logs, employing LSTM networks for congestion prediction and DQNs for route optimization within an edge–cloud hybrid architecture. Likewise, Ranatunga et al. [118] proposed an ontology-based data access framework to integrate heterogeneous environmental geospatial data, employing Ontop for semantic translation and PostgreSQL/PostGIS for storage. A web-based SPARQL Query Interface enabled querying and visualization. The framework produced a unified semantic knowledge graph that can be used for analysis and decision-making.
- Cross-domain Data Integration addresses the challenge of integrating information from diverse application domains, such as buildings, transportation, and healthcare, into a unified framework. For example, Valtoline et al. [119] investigated an ontology-based approach for cross-domain IoT platforms that applied a multi-granular Spatio-Temporal-Thematic (STT) data model and Semantic Virtualization to annotate heterogeneous sensor schemas with domain ontology concepts. Fan et al. [120] designed BuildSenSys, a cross-domain learning system that reused building sensing data for performing traffic prediction. The system combined correlation analysis with an LSTM-based encoder–decoder, applying cross-domain attention to capture building-traffic relationships and temporal attention to model historical dependencies.
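To make the common-data-model and cross-domain fusion ideas above tangible, the sketch below maps records from two hypothetical sources, a roadside traffic counter and an air-quality station, into one shared observation schema and then aggregates them per location and hour, echoing the congestion-versus-pollution example earlier in this subsection. All source formats, field names, and values are invented for illustration.

```python
# Minimal common-data-model sketch: two heterogeneous sources are mapped into one shared
# observation schema and fused per (location, hour). All formats and values are invented.
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

@dataclass
class Observation:               # the shared "language" both domains are mapped into
    location: str
    hour: int
    metric: str
    value: float

def from_traffic(record: dict) -> Observation:
    # Traffic counter record, e.g. {"site": "junction-12", "t": 8, "vehicles_per_min": 42}
    return Observation(record["site"], record["t"], "traffic_flow", record["vehicles_per_min"])

def from_air_quality(record: dict) -> Observation:
    # Air-quality record, e.g. {"station": "junction-12", "hour": 8, "no2_ppb": 31.5}
    return Observation(record["station"], record["hour"], "no2", record["no2_ppb"])

def fuse(observations) -> dict:
    """Group observations by (location, hour) and average each metric."""
    grouped = defaultdict(lambda: defaultdict(list))
    for ob in observations:
        grouped[(ob.location, ob.hour)][ob.metric].append(ob.value)
    return {key: {m: mean(v) for m, v in metrics.items()} for key, metrics in grouped.items()}

if __name__ == "__main__":
    obs = [from_traffic({"site": "junction-12", "t": 8, "vehicles_per_min": 42}),
           from_traffic({"site": "junction-12", "t": 9, "vehicles_per_min": 55}),
           from_air_quality({"station": "junction-12", "hour": 8, "no2_ppb": 31.5}),
           from_air_quality({"station": "junction-12", "hour": 9, "no2_ppb": 40.2})]
    for key, metrics in fuse(obs).items():
        print(key, metrics)   # e.g. ('junction-12', 8) {'traffic_flow': 42, 'no2': 31.5}
```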
4.2.2. Real-Time Constraints
Many smart city applications, such as adaptive traffic light control, autonomous vehicle coordination, or emergency response systems, demand rapid processing and decision-making. Latency in collecting, transmitting, or analyzing data can compromise system effectiveness and even public safety. Designing sensing pipelines that guarantee real-time performance while operating on resource-constrained edge devices is a key challenge.
Achieving these real-time requirements necessitates low-latency data processing and communication mechanisms.
- Edge Inference Acceleration combines specialized hardware and optimized software runtimes to perform ML inference on edge devices, reducing latency, conserving bandwidth, and improving privacy. For example, Wang et al. [121] designed a cloud–edge collaborative framework for pedestrian and vehicle detection by compressing YOLOv4 models via L1-regularization-based channel pruning and accelerating inference with TensorRT quantization on the NVIDIA Jetson TX2. Zhang et al. [122] proposed edgeIS, an edge-assisted framework for mobile instance segmentation that replaced the traditional “track + detect” paradigm with a “transfer + infer” mobile-edge collaboration scheme. The framework applied mechanisms like motion-aware mask transfer, contour-instructed edge inference acceleration, and content-based RoI selection, reducing latency by 48% while preserving accuracy above 0.92. Likewise, Han et al. [123] developed SDPMP, a self-adaptive dynamic programming algorithm that accelerated CNN inference by combining pipeline parallelism with inter-layer and intra-layer partitioning across heterogeneous edge devices.
- Lightweight Design for Stream Processing aims to achieve high performance while minimizing resource consumption on edge computing devices. For example, Zhang et al. [124] investigated ECStream, a lightweight edge–cloud framework for structural health monitoring that applied fine-grained scheduling of atomic and composite stream operators. This design reduced bandwidth usage by 73.01% and end-to-end latency by 20.37% on average.
- Computation Offloading transfers computationally intensive tasks from resource-limited devices to more powerful remote nodes such as Edge computing servers, Fog computing nodes, or Cloud computing clusters. In this direction, Cheng et al. [125] proposed a Lyapunov optimization-based scheme for fog computing systems, which comprised energy-harvesting mobile devices. In their approach, computation offloading, subcarrier assignment, and power allocation are jointly optimized to minimize system cost in terms of latency, energy consumption, and device weights. Liu et al. [126] proposed a two-layer vehicular fog computing architecture and designed a real-time task offloading algorithm, which classified tasks by delay, assigned them to four offloading lists, and scheduled them based on deadlines and utilization to maximize the task service ratio. Also, Gao et al. [127] proposed PORA, a predictive offloading and resource allocation scheme for multitier fog computing systems. The system formulated the problem as a stochastic network optimization and applied Lyapunov-based decomposition to enable distributed online offloading, thereby minimizing time-average power consumption while ensuring queue stability.
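To illustrate the basic trade-off behind such offloading schemes, the sketch below compares an estimated local latency–energy cost against an estimated offloading cost and picks the cheaper option. The cost model, parameter values, and function names are illustrative assumptions and do not reproduce the Lyapunov-based formulations in [125,126,127].

```python
# Minimal offloading-decision sketch: a device decides whether to run a task
# locally or offload it to an edge server by comparing estimated latency and
# energy costs. All parameters are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # CPU cycles required
    input_bits: float    # size of data to upload if offloaded

def local_cost(task: Task, f_local_hz: float, k_energy: float, w_energy: float) -> float:
    latency = task.cycles / f_local_hz
    energy = k_energy * (f_local_hz ** 2) * task.cycles   # simple dynamic-power model
    return latency + w_energy * energy

def offload_cost(task: Task, uplink_bps: float, f_edge_hz: float,
                 tx_power_w: float, w_energy: float) -> float:
    tx_latency = task.input_bits / uplink_bps
    exec_latency = task.cycles / f_edge_hz
    tx_energy = tx_power_w * tx_latency
    return tx_latency + exec_latency + w_energy * tx_energy

def decide(task: Task) -> str:
    """Return 'local' or 'offload' for the cheaper weighted latency+energy cost."""
    c_local = local_cost(task, f_local_hz=1e9, k_energy=1e-27, w_energy=0.5)
    c_edge = offload_cost(task, uplink_bps=20e6, f_edge_hz=10e9,
                          tx_power_w=0.5, w_energy=0.5)
    return "local" if c_local <= c_edge else "offload"

print(decide(Task(cycles=5e8, input_bits=8e6)))
```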
4.2.3. Privacy and Security Concerns
Urban sensing often involves sensitive data, such as high-resolution video, geolocation traces, or health-related signals. Without robust privacy-preserving mechanisms, these data streams raise risks of surveillance, identity leakage, or malicious exploitation. Ensuring data confidentiality, secure transmission, and compliance with regulatory frameworks (e.g., GDPR, HIPAA) is critical for public trust in smart city systems. Protecting citizen data thus requires privacy-preserving and secure edge computing techniques.
- Federated Learning trains models collaboratively across multiple edge devices without sharing raw data. Each device trains a local model and sends only updates for aggregation, which preserves privacy and reduces bandwidth usage (a minimal FedAvg-style aggregation sketch is given after this list). For instance, Liu et al. [128] proposed P2FEC, which exchanged gradients instead of raw data, and applied secure multi-party aggregation during initialization, training, and updating stages to preserve privacy. Li et al. [129] proposed ADDETECTOR, an FL-based smart healthcare system for Alzheimer’s disease detection that collected user audio via IoT devices and applied topic-based linguistic features, differential privacy, and asynchronous aggregation to preserve privacy across user, client, and cloud layers. Wang et al. [130] proposed PPFLEC, a privacy-preserving FL scheme for IoMT under edge computing that used secret sharing with weight masks to protect gradients, a digital signature to ensure message integrity, and periodic local training to reduce communication overhead and accelerate convergence. Likewise, Stephanie et al. [131] designed a blockchain-supported ensemble FL framework that employed secure multi-party computation for privacy, FedAVG and weighted ensemble methods for aggregating heterogeneous models, and blockchain to guarantee data integrity, auditability, and version control.
- Secure Communication Protocols protect data exchanged between devices by ensuring confidentiality, integrity, and authenticity during transmission. To this end, Winderickx et al. [132] proposed HeComm, a fog-enabled architecture, which ensures end-to-end secure communication across heterogeneous IoT networks by establishing secret keys with the HeComm protocol and applying object security at the application layer. Swamy et al. [133] proposed Secure Vision, a layered Wireless Sensor Network (WSN) architecture that combines secure MAC and routing protocols, Transport Layer Security/Datagram Transport Layer Security (TLS/DTLS)-based transport security, and image processing techniques such as steganography and watermarking to ensure end-to-end confidentiality, integrity, and resilience.
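As a concrete illustration of the federated pattern described in the first item of this list, the following PyTorch sketch shows FedAvg-style aggregation, in which each client trains locally and the server averages parameters weighted by local dataset size. The model, data loaders, and hyperparameters are placeholders rather than any cited system.

```python
# Minimal FedAvg sketch (PyTorch): clients train locally and share only
# parameter updates, which the server averages. Model, data, and
# hyperparameters are illustrative.
import copy
import torch
import torch.nn as nn

def local_update(global_model, loader, epochs=1, lr=0.01):
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict(), len(loader.dataset)

def fed_avg(global_model, client_states, client_sizes):
    """Weighted average of client parameters, weighted by local dataset size."""
    total = sum(client_sizes)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(s[key].float() * (n / total)
                       for s, n in zip(client_states, client_sizes))
    global_model.load_state_dict(avg)
    return global_model
```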
4.2.4. Data Imbalance and Sparsity
Sensing coverage in cities is rarely uniform. Some regions, such as downtown or high-traffic zones, are saturated with redundant data from numerous sensors, while other areas suffer from sparse or unreliable coverage due to infrastructure gaps. This imbalance complicates the development of robust AI models, which must learn to operate effectively across both dense and sparse data environments. Techniques such as adaptive sampling, synthetic data generation, and cross-region model transfer are needed to mitigate these disparities. Uneven sensing coverage can be alleviated with adaptive and data-augmentation methods.
- Adaptive Sensing enables edge devices to adjust their sensing frequency, resolution, or modality in real time based on environmental conditions or workload demands, allowing them to conserve energy while maintaining data quality. Machidon et al. [134] proposed an adaptive compressive sensing–DL pipeline which dynamically adjusts sampling rates with a learned measurement matrix and entropy-based tuning. It preserved model accuracy while reducing sensor sampling and battery usage by up to 46%. Wang et al. [135] proposed a UAV-based lightweight detection algorithm, which can improve small-target recognition by adding MODConv to the detection head and using LSKAttention to adjust the sensing field adaptively. Combined with Soft-NMS, this adaptive design reduces missed detections while maintaining efficiency with FPW, thereby lowering computational cost. Likewise, Ghosh et al. [136] proposed an adaptive sensing framework for IoT nodes that combines Q-learning and LSTM to optimize energy use while maintaining sensing accuracy. Q-learning dynamically selected an optimal subset of sensors based on cross-correlation and energy constraints, while LSTM predicted missing parameters from sampled data.
- Data Augmentation expands training datasets by applying transformations to existing samples. Generative models such as GANs and diffusion models can be used to synthesize missing sensor signals or enrich sparse datasets. For instance, Li et al. [137] proposed WixUp, a generic data augmentation framework for wireless human tracking that used Gaussian mixture-based and probability-based transformations to augment diverse wireless data formats and supported unsupervised domain adaptation through self-training. Orozco et al. [138] designed FedTPS, an FL framework for traffic flow prediction that augmented each client’s dataset with synthetic traffic data generated by a diffusion model built on a UNet backbone, trained collaboratively across silos. Pal et al. [139] proposed an ensemble data augmentation model for cardiac arrhythmia detection that combined borderline undersampling of majority classes with chaos-based oversampling of minority classes to balance ECG datasets.
- Cross-region Transfer Learning applies knowledge from a data-rich source region to a data-scarce target region to improve model performance. It is beneficial for tasks where data collection is expensive, complex, or geographically limited, as it addresses distribution shifts across regions. For instance, Guo et al. [140] proposed C3DA, a universal domain adaptation method for remote sensing by combining a two-stage attention mechanism with the C3 criterion (certainty, confidence, consistency) to filter out outliers and unknown classes, improving scene classification accuracy across diverse geographic regions. Zhang et al. [141] introduced Target-Skewed Joint Training (TSJT), a one-stage transfer learning framework for cross-city spatiotemporal forecasting. The framework combined a Target-Skewed Backward (TSB) strategy, which selectively refines gradients from source-city data to benefit the target city, with a Node Prompting Module (NPM) that encoded shared spatiotemporal patterns. Likewise, Zhao et al. [142] proposed an adaptive remote sensing scene recognition network to mitigate domain shift across sensors. Their approach learned sensor-invariant representations adversarially, aligned class-conditional distributions contrastively, and transferred semantic relationship knowledge to improve cross-scene recognition.
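A common baseline for such cross-region transfer, shown in the minimal PyTorch sketch below, is to reuse a backbone trained on a data-rich source region and fine-tune only the task head on scarce target-region data. The model interface, head_name argument, and loss choice are illustrative assumptions, not the C3DA or TSJT methods cited above.

```python
# Minimal cross-region transfer sketch (PyTorch): reuse a model trained on a
# data-rich source region and fine-tune only its head on scarce target-region
# data. This is a generic baseline, not any of the cited methods.
import torch
import torch.nn as nn

def fine_tune_for_target_region(source_model: nn.Module, head_name: str,
                                target_loader, epochs=5, lr=1e-3):
    # Freeze all source-trained parameters except the task head.
    for name, p in source_model.named_parameters():
        p.requires_grad = name.startswith(head_name)
    opt = torch.optim.Adam(
        (p for p in source_model.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.MSELoss()  # e.g., traffic-volume regression
    source_model.train()
    for _ in range(epochs):
        for x, y in target_loader:
            opt.zero_grad()
            loss_fn(source_model(x), y).backward()
            opt.step()
    return source_model
```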
5. Edge AI Learning Models for Smart Cities
In this section, we discuss four key strategies for enabling efficient model deployment in edge AI for smart cities, spanning lightweight architectures, model compression, adaptive and dynamic models, and energy- and latency-aware model optimizations. Figure 7 provides a high-level view of this section illustrating how different strategies are used to achieve goals aligned with sub-domains.
Figure 7.
Learning Models in Edge AI for Smart Cities.
5.1. Model Design and Optimization Strategies for Edge AI in Smart Cities
Table 8 outlines four key strategies for enabling efficient model deployment in edge AI for smart cities. For example, lightweight architectures are designed from scratch for compactness and speed, making them suitable for latency-sensitive services like traffic monitoring, though often at the cost of reduced accuracy. Model compression techniques, such as quantization, pruning, and knowledge distillation, shrink large models while maintaining most of their accuracy, which is useful in domains like healthcare or safety analytics but may still strain very limited devices. Adaptive and dynamic models allow flexible computation through conditional execution, making them effective for fluctuating workloads such as adaptive traffic control, albeit with added complexity in stability and calibration. Finally, energy- and latency-aware optimization explicitly co-designs models with hardware to minimize power consumption and delay, which is crucial for long-term sustainability in sensor-rich or wearable applications. Together, these strategies provide complementary solutions for balancing accuracy, efficiency, flexibility, and scalability in smart city AI services. In what follows, we explain each strategy in detail.
Table 8.
Different Model Design and Optimization Strategies for Edge AI in Smart Cities.
5.2. Lightweight Model Architectures
Compact CNNs are smaller, more efficient versions of CNNs designed for resource-constrained devices like microcontrollers and FPGAs. These models are increasingly applied in smart city domains, including healthcare, surveillance, and people counting, where efficiency, low memory usage, and fast inference are critical. In healthcare, Wong et al. [143] proposed a low-complexity binarized 2D-CNN classifier on FPGA-based edge platforms. The design combined a quantized multilayer perceptron (qMLP) for ECG-to-binary image conversion with a binary CNN (bCNN) for classification, reducing multiply–accumulate operations by 5.8× compared to conventional CNNs. Aarotale et al. [144] introduced PatchBMI-Net, a lightweight facial patch-based ensemble model for body mass index (BMI) prediction on mobile devices. The model processed six facial regions independently with compact CNNs and averaged their outputs, yielding a 5.4× reduction in size and a 3× improvement in inference speed compared to heavy-weight CNN baselines. Peng et al. [145] developed a lightweight CNN-based cough detection system for FPGA edge deployment. The design used depth-wise separable convolutions and grouped point-wise convolutions, which are inspired by ShuffleNet, together with channel shuffle operations to improve efficiency. In video surveillance, Khan et al. [146] proposed LCDnet, a lightweight crowd density estimation model that integrated a compact CNN architecture with curriculum learning for improved ground-truth generation. The design produced density maps that preserved spatial details while maintaining low computational cost, memory usage, and inference time, making it suitable for drone-based surveillance. For counting the number of people in indoor spaces, Yen et al. [147] developed an adaptive system on a Raspberry Pi 4 equipped with a fisheye lens. The system employed a lightweight YOLOv4-tiny model with spatial pyramid pooling (SPP), a spatial attention mechanism (SAM), and depthwise separable convolutions to enhance accuracy while reducing computational costs.
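Many of the compact CNNs above rely on depthwise separable convolutions, which factor a standard convolution into a per-channel (depthwise) filter followed by a 1×1 (pointwise) channel mixer. The minimal PyTorch block below illustrates the idea with illustrative channel sizes; it is a generic building block, not any specific cited architecture.

```python
# Minimal depthwise separable convolution block (PyTorch), the building block
# behind many compact CNNs. Sizes are illustrative.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A standard 3x3 conv from 64 to 128 channels uses 64*128*9 = 73,728 weights;
# the separable version uses 64*9 + 64*128 = 8,768, roughly 8.4x fewer.
block = DepthwiseSeparableConv(64, 128)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 128, 32, 32])
```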
Tiny Transformers are smaller, efficient versions of Vision Transformers (ViTs), whereas hybrid CNN–Transformer models integrate the local feature extraction of CNNs with the long-range dependency modeling of Transformers. Together, these approaches balance the global context modeling capability of Transformers with the need for lightweight architectures for resource-constrained devices. In vision tasks, Wu et al. [148] introduced the Progressive Shift Ladder Transformer (PSLT), a lightweight ViT backbone. The design included ladder self-attention blocks that feature multiple branches and a progressive shifting mechanism. In this design, each branch processed a subset of the input channels, reducing the number of parameters and floating-point operations (FLOPs) while preserving the ability to capture long-range interactions. The outputs from all branches are combined using a pixel-adaptive fusion module. In environmental monitoring, Annane et al. [149] proposed a CNN–Transformer hybrid model with blockchain integration for forest fire prediction using drone-captured images. The CNN extracted spatial features; meanwhile, the Transformer captured temporal features such as smoke visibility, fire intensity, and fire direction. These features are fused to predict fire spread and extent, with secure data storage and communication ensured through blockchain. In smart home applications, Ye et al. [150] proposed Galaxy, a collaborative edge-AI system designed to accelerate Transformer inference for voice assistants. The system introduced a hybrid model parallelism to orchestrate inference across heterogeneous edge devices, a workload-planning algorithm to maximize resource utilization, and a tile-based scheme to overlap computation and communication under bandwidth constraints. Evaluations showed that Galaxy reduced end-to-end latency by up to 2.5× across multiple edge environments. In speech processing, Wahab et al. [151] proposed a lightweight encoder-decoder architecture for real-time speech enhancement on edge devices. The encoder employed adaptive, frequency-aware gated convolution to emphasize speech-relevant features. At the same time, the Ginformer-based bottleneck applied low-rank projections and Simple Recurrent Unit (SRU)-based temporal gating to reduce complexity and capture long-range dependencies.
Graph Neural Networks (GNNs) are DL models, which are designed to process graph-structured data, where nodes represent entities and edges represent their connections [152]. By iteratively aggregating information from neighbor nodes, GNNs generate richer node representations to support tasks such as classification, prediction, and detection. In vehicular edge networks, Huang et al. [153] proposed a task migration strategy using a dual-layer GNN. The vehicle-layer GNN predicted vehicular trajectories, while the RSU-layer GNN forecasted resource availability. Their outputs are combined through a task-based maximum flow algorithm (T-MFA) to guide task migration and resource allocation. For vehicular routing, Daruvuri et al. [154] introduced a data packet routing protocol within a three-layer terminal–edge–cloud architecture. GNNs are employed to capture traffic patterns and multi-hop connectivity, while Transformer-based Reinforcement Learning (TRL) adapts routing strategies through interactions with the IoV environment. In rail transit applications, Huang et al. [155] presented Rail-RadarGNN, a radar-only obstacle detection method that embedded radar points into a high-dimensional feature space. The model integrated auto-registration and graph attention mechanisms for feature extraction, while the GAEC module enhanced feature alignment and the MPNN-R module preserved feature integrity to support deeper training.
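The core operation behind these GNN systems is neighbor aggregation. The minimal PyTorch layer below averages neighbor features over a dense adjacency matrix and combines them with each node's own features; the toy graph, dimensions, and layer name are illustrative assumptions rather than the cited architectures.

```python
# Minimal mean-aggregation message-passing layer (PyTorch), illustrating the
# neighbor-aggregation idea behind GNNs. Graph and dimensions are illustrative.
import torch
import torch.nn as nn

class MeanAggregationLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)
        self.lin_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: [num_nodes, in_dim]; adj: [num_nodes, num_nodes] 0/1 adjacency.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh_mean = adj @ x / deg            # average neighbor features
        return torch.relu(self.lin_self(x) + self.lin_neigh(neigh_mean))

# Toy graph: 4 nodes (e.g., roadside units), ring topology.
adj = torch.tensor([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=torch.float32)
x = torch.randn(4, 8)
layer = MeanAggregationLayer(8, 16)
print(layer(x, adj).shape)  # torch.Size([4, 16])
```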
Automated Machine Learning (AutoML) streamlines model development via automating tasks like feature engineering, model selection, and hyperparameter tuning, which allows faster adaptation in resource-constrained environments. Neural architecture search (NAS) complements AutoML by discovering lightweight yet high-performing neural architectures. These approaches optimize models not only for accuracy but also for the limited compute, memory, and energy budgets of edge devices. For instance, Mitra et al. [156] proposed EVE, an AutoML co-exploration framework for energy-harvesting IoT devices. EVE employed an RNN-based RL controller to search for shared-weight compressed models with varying sparsity, pruning types, and patterns, allowing adaptation to changing energy levels. Similarly, Cereda et al. [157] applied NAS with pruning-in-time (PIT) to optimize CNNs for visual pose estimation on nano-UAVs. Using PULP-Frontnet and MobileNetv1 as seed models, the mask-based DNAS tool generated smaller, more efficient networks while maintaining accuracy. The resulting CNNs, deployed on the Bitcraze Crazyflie 2.1 drone, were up to 5.6× smaller and 1.5× faster than baseline models.
5.3. Model Compression
Model quantization converts a neural network’s parameters (weights and activations) from a higher-precision format (e.g., 32-bit floating-point numbers) to a lower-precision format (e.g., 8-bit integers) to reduce the model size and accelerate inference. This lowers memory usage, storage requirements, and computational costs, enabling efficient deployment on resource-constrained devices. For example, Wong et al. [143] proposed a binarized 2D-CNN classifier for wearables on FPGA fabric. The design integrated a quantized multilayer perceptron (qMLP) that converts ECG signals into binary images using 8-bit weights and a quantized ReLU, followed by a binary CNN (bCNN) that classifies them with binary activations and XNOR operations. This quantization reduced multiply–accumulate operations by 5.8× compared to existing wearable CNNs while maintaining 98.5% accuracy. Similarly, Li et al. [158] proposed a quantization algorithm for CNN-based human foot detection to support inductive electric tailgate systems. The model applied linear asymmetric quantization, converting 32-bit floating-point weights into 8-bit fixed-point values, reducing memory use by 4× and enabling efficient edge deployment. Yang et al. [159] studied quantization and acceleration of the YOLOv5 vehicle detection model on an NVIDIA Tesla T4 GPU. After training, the model was quantized and accelerated using TensorRT, which mapped floating-point activations and weights to 8-bit fixed-point values through calibration, achieving up to 4× faster inference than floating-point operations. Shah et al. [160] introduced OwlsEye, a real-time low-light video instance segmentation system implemented on the Intel Nezha Embedded platform with YOLOv8-Nano-Segmentation. With the EQyTorch framework for fixed-posit quantization, along with brightness verification and asynchronous FIFO pipelining, throughput improved from 0.6 FPS to 28 FPS, with latency and power efficiency gains of up to 9.02× and 87.91× compared to INT8 and FP32. Furthermore, Han et al. [161] designed a lightweight real-time vehicle detection system for edge devices using pruning, quantization, and knowledge distillation. In this system, quantization converted floating-point weights into low-bit integers. As a result, latency decreased from 45 ms to 32–35 ms, and memory usage dropped from 120 MB to 32 MB.
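The sketch below illustrates the linear asymmetric quantization scheme described above: a float32 tensor is mapped to 8-bit values via a scale and zero point and then dequantized to measure the approximation error. It is a NumPy toy example with illustrative values; production toolchains such as TensorRT or PyTorch quantization add calibration, per-channel scales, and fused integer kernels.

```python
# Minimal linear asymmetric INT8 quantization sketch: map float32 tensors to
# 8-bit values via a scale and zero point. Values are illustrative.
import numpy as np

def quantize_asymmetric(x: np.ndarray, num_bits: int = 8):
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(np.round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(256, 256).astype(np.float32)
q, scale, zp = quantize_asymmetric(weights)
error = np.abs(dequantize(q, scale, zp) - weights).max()
print(f"4x smaller (float32 -> uint8), max abs error {error:.4f}")
```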
Neural network pruning improves efficiency by removing unnecessary connections (weights) or structures (neurons, channels), thereby reducing model size, memory usage, and computation time while often maintaining high accuracy. This compression facilitates storage and deployment on resource-constrained devices. For example, Li et al. [162] incorporated a multistage pruning technique in CNN-based ECG classification models on edge devices. The method sequentially prunes and fine-tunes convolutional layers, reducing model complexity while maintaining performance. At 60% sparsity, the model achieved 97.7% accuracy and a 93.59% F1-score, while reducing runtime complexity by 60.4%, outperforming traditional pruning methods. Huang et al. [163] proposed a lightweight defect detection system for edge devices using pruning to compress the model and accelerate inference. Their module applied L2-norm-based filter selection and retrained with Least Absolute Shrinkage and Selection Operator (LASSO) regularization, progressively removing less important filters. On the Kneron KL520 AI dongle, the pruned model achieved 97.7% accuracy and 28.2 FPS, running 1.6× faster than the unpruned model and 2× faster than on the Jetson Nano. Li et al. [164] proposed ABM-SpConv-SIMD, an inference optimization framework that applied unstructured pruning to generate sparse CNN models for industrial IoT edge devices. The framework included a sparsity exploration stage to determine optimal pruning ratios, followed by iterative pruning and fine-tuning with weight regrowth to preserve accuracy. In smart lighting applications, Putrada et al. [165] employed cost-complexity pruning to compress AdaBoost models and introduced NoCASC (Normalized-Complemented Accuracy-Size Curve) to identify the optimal trade-off between accuracy and model size. Pruning reduced the number of AdaBoost decision tree base learners, while NoCASC guided the selection of the best-pruned model. The final AdaBoost model was 2.7× smaller than the original and 6.9× smaller than KNN.
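As a minimal illustration of unstructured magnitude pruning, the following sketch uses PyTorch's built-in pruning utilities to zero out 60% of the smallest-magnitude weights in each convolutional and linear layer and then bakes the masks into the weights. The toy model and sparsity level are illustrative and do not reproduce the multistage or filter-level schemes cited above.

```python
# Minimal pruning sketch using PyTorch's built-in utilities: unstructured L1
# pruning of individual weights, then making the mask permanent.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 28 * 28, 5),
)

for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.6)  # 60% sparsity
        prune.remove(module, "weight")  # bake the mask into the weights

zeros = sum((m.weight == 0).sum().item()
            for m in model.modules() if isinstance(m, (nn.Conv2d, nn.Linear)))
total = sum(m.weight.numel()
            for m in model.modules() if isinstance(m, (nn.Conv2d, nn.Linear)))
print(f"Global sparsity after pruning: {zeros / total:.2%}")
# In practice, pruning is followed by fine-tuning to recover accuracy.
```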
Weight sharing, or parameter sharing, reuses the same set of weights across different parts of a neural network, reducing the number of learnable parameters. By reducing memory requirements and computational costs while improving generalization, it enables efficient deployment on resource-constrained devices. To this end, Xing et al. [166] applied CNN-based face recognition to detect fatigue during driving, where convolutional layers with weight sharing and local connections reduced parameters and computational complexity compared to fully connected networks. Xu et al. [167] examined adversarial robustness in graph-based NAS in the context of edge AI-enabled transportation systems. They used a One-Shot NAS with weight sharing to train a SuperNet, where subnetworks with different compression ratios shared parameters. This design showed that adversarial attacks transfer more effectively between models of similar size. Zhang et al. [168] proposed NASRec, a weight-sharing NAS framework for recommender systems. Their approach trained a large SuperNet with heterogeneous operators and dense connectivity once. Subnetworks were then derived by zeroing out operators and connections while sharing weights, which improved training efficiency and reduced log loss by ≈0.001.
Knowledge distillation (KD) transfers knowledge from a large teacher model to a smaller student model, enabling efficient deployment on resource-constrained devices without significant loss of accuracy [169]. For example, Wang et al. [170] proposed the Heterogeneous Brain Storming (HBS) method, a bidirectional KD framework for object detection in industrial IoT systems. In this approach, cloud-to-fog distillation transfers knowledge from a large cloud model to multiple fog models, while fog-to-cloud distillation ensembles the fog models to update the cloud model. This bidirectional process improved accuracy while lowering communication and storage costs. Mudavat et al. [171] proposed LiteViT, which distilled the capabilities of a ViT teacher into a compact MobileViT-XXS model for crop-disease classification. By combining cross-entropy with Kullback–Leibler (KL) divergence losses, LiteViT achieved 99.3% accuracy, nearly matching the teacher’s 99.7% while being suitable for edge deployment. In healthcare, Peng et al. [172] proposed PADistillation, a semi-supervised dual KD framework for knee osteoarthritis diagnosis. Using a teacher-bridge-student setup with attention-guided distillation and personalized pixel shuffling for privacy protection, the method achieved 88.2% accuracy with only limited labeled data available. Likewise, Cao et al. [173] proposed an online knowledge distillation (OKD) framework for machine health prognosis at the edge. Their design integrates response-based, feature-based, and relation-based distillation modules, with an adaptive mutual-learning strategy that alternates between independent and collaborative learning modes.
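The typical KD objective, as used in several of the works above, combines cross-entropy on the labels with a temperature-scaled KL divergence between student and teacher logits. The PyTorch sketch below shows this loss; the temperature, weighting, and toy tensors are illustrative assumptions.

```python
# Minimal knowledge-distillation loss sketch (PyTorch): weighted sum of
# cross-entropy with the labels and KL divergence to the teacher's softened
# logits. Temperature and weighting are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.7):
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)  # rescale gradients to match the CE term
    return alpha * kd + (1 - alpha) * ce

# Toy usage: batch of 8 samples, 10 classes.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```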
5.4. Adaptive and Dynamic Models
Dynamic inference enables neural networks to adapt their computation during inference based on input complexity. By skipping redundant operations for simpler inputs, these models reduce computational cost and achieve faster inference without sacrificing accuracy. For instance, Zhou et al. [174] proposed DPDS, a dynamic path-based framework for accelerating DNN inference in edge environments. It used exit designators for online exit prediction, allowing easy inputs to exit the network early, while a min-cut-based strategy partitioned the multi-exit DNN across devices. This approach achieved 1.87×–6.78× faster inference. Likewise, Xia et al. [175] proposed an instance-based inference paradigm that adjusts model depth and width at runtime. The Layer-Net (L-Net) and Channel-Net (C-Net) predict which layers and channels to skip or scale for each input. The integrated LC-Net reduced FLOPs by up to 11.9× while improving accuracy on CIFAR-10 and ImageNet.
Early exit allows a neural network to stop processing and generate predictions at intermediate layers rather than always traversing the full depth. This reduces inference latency and computation for easy inputs while preserving full-depth accuracy for more complicated cases. Early-exit models are particularly effective in multi-tier edge platforms, where they support computation partitioning and efficient resource use. For instance, Rashid et al. [176] introduced TMEX CNN, a template matching-based early-exit model for myocardial infarction detection deployed on wearable devices. It combined a multi-output CNN with an output block selector that applied Pearson correlation for early exit, reducing redundant computation and improving energy efficiency by 6–8% over the baseline. Sponner et al. [177] introduced Temporal Decisions, an early-exit strategy that exploited temporal correlation in sensor data streams. Their approach integrated Difference Detection and Temporal Patience mechanisms to monitor variations in outputs across consecutive inputs, allowing inference to terminate early when changes are minimal. This reduced redundant computation, achieving up to 80% fewer operations while keeping accuracy within 5% of the original model. Ghanathe et al. [178] presented T-RecX, an early-exit framework designed for tiny-CNN models. It added a single early-exit block with pointwise and depthwise convolutions to enable confident intermediate predictions. Evaluations showed that T-RecX reduced FLOPs by an average of 31.6% with only around a 1% drop in accuracy. Likewise, She et al. [179] proposed a real-time inference scheduler that combined dynamic batching with early-exit DNNs for edge inference. Multiple DNNs with intermediate exit points are pre-installed on the server, allowing samples with confident predictions to terminate early and save computation.
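The sketch below shows the basic early-exit mechanism in PyTorch: an intermediate classifier returns its prediction when the softmax confidence exceeds a threshold, and harder inputs continue through the remaining layers. The two-stage architecture and the 0.9 threshold are illustrative assumptions, not any of the cited designs.

```python
# Minimal early-exit inference sketch (PyTorch): an intermediate classifier
# returns a prediction when its softmax confidence exceeds a threshold,
# otherwise the input continues through the full network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(8))
        self.exit1 = nn.Linear(16 * 8 * 8, num_classes)    # early classifier
        self.stage2 = nn.Sequential(nn.Conv2d(16, 64, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1))
        self.exit2 = nn.Linear(64, num_classes)             # final classifier
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, x):
        h = self.stage1(x)
        logits1 = self.exit1(h.flatten(1))
        conf, _ = F.softmax(logits1, dim=1).max(dim=1)
        if conf.item() >= self.threshold:       # easy input: exit early
            return logits1, "early"
        h = self.stage2(h)
        return self.exit2(h.flatten(1)), "full"

logits, path = EarlyExitNet()(torch.randn(1, 3, 32, 32))
print(path)
```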
Context-aware neural networks incorporate additional information such as spatio-temporal cues, user behavior, and/or cross-modal relationships beyond the immediate input. For instance, Zhou et al. [180] designed a hierarchical context-aware hand detection algorithm, which can be used for naturalistic driving. It integrated context prior estimation, context-aware detectors, and post-processing for bounding box refinement. Their framework used context cues, including common hand shapes, typical locations, driver habits, and joint spatial distributions between hands to improve detection accuracy. Xiong et al. [181] proposed DuTongChuan, a context-aware translation model for simultaneous interpreting. It employed an Information Unit (IU) boundary detector to segment streaming ASR input, and a tailored NMT model that applies partial decoding to early segments and context-aware decoding to later ones. This design balanced latency and translation quality, achieving about 86% accuracy with latency typically under 3 s. Likewise, Chindiyababy et al. [182] proposed a CNN-based facial emotion recognition framework that integrated multimodal data fusion and context-aware modeling. The system improved real-world performance by 15% with context-awareness, while a lightweight design reduced processing requirements by 25%, supporting edge deployment.
5.5. Energy- and Latency-Aware Optimization in Modeling Designs
Energy-aware training focuses on reducing the energy consumption of DL models by optimizing computation, communication, and resource usage during training. The goal is to balance accuracy with efficiency, which is especially important for resource-constrained edge and IoT environments. For instance, Singh et al. [183] proposed G-CNaaS, an energy-aware camera network architecture that reduced the carbon footprint. The system selected optimal camera sets for each virtual network based on field-of-view, angular distance, observation range, and residual energy, while edge devices handled real-time analysis to reduce latency and energy overhead. Yao et al. [184] proposed DECO, a dynamic computation offloading framework for IoV, modeled as a Markov decision process and optimized with the TD3 algorithm. DECO jointly minimized delay and energy consumption by adaptively scheduling offloading between RSUs and vehicles, achieving up to 31.6% improvement over DQN and 24.6% over Deep Deterministic Policy Gradient (DDPG) across different RSU scales. In healthcare, Singh et al. [185] proposed a hierarchical FL framework using wireless body area networks (WBANs), fog servers, and cloud resources. Training is formulated as a cost minimization problem that jointly considers latency, energy consumption, and model loss. Local CNNs are trained using SGD, and global aggregation is performed with FedAvg and clustering to reduce communication overhead, lowering total cost by 11.13% while achieving 95.85% accuracy. Also, Wen et al. [186] introduced RA-MaOVSM, a vehicle selection model that formulated FL in IoV as a many-objective optimization problem. The model jointly considered computation, communication, energy, and data resources of vehicles to minimize training delay and energy cost while improving global performance on non-IID data. Results show that while delay-based selection reduced training time, energy-based selection minimized consumption.
Hardware–software co-design optimizes both neural network (NN) architectures and the hardware they run on simultaneously to improve performance, efficiency, and latency. By exploring the joint design space of algorithms and hardware, this approach enables practical deployment of DL on edge devices with limited memory and compute resources. For example, Bui et al. [187] implemented real-time object detection and tracking on the Xilinx Zynq platform. Gaussian smoothing, background subtraction, and morphology filters were mapped to FPGA fabric, while blob detection, histogram checking, and the Kalman filter were executed on ARM cores. The system processed 1080p video at 60 FPS with less than 2 W power consumption. Ponzina et al. [188] developed a hardware–software co-designed DL framework for edge inference. On the software side, ensemble-based transformations improved robustness, enabling aggressive quantization to reduce memory and computation. On the hardware side, parallel in-memory execution using multiple independently managed memory banks exploited these optimizations for efficient inference, yielding over 70% energy and latency gains. Likewise, Hur et al. [189] proposed FlexRun, an FPGA-based accelerator for NLP. The hardware provides a reconfigurable base architecture with Gemv and Vec units, while the software uses design space exploration and automation to optimize configurations per model. This co-design adapts to diverse NLP workloads, achieving up to 2.73× speedup over FPGA baselines and 2.69× higher performance than NVIDIA V100 GPUs.
Batch-size adaptation dynamically adjusts the batch size during training or inference rather than using a fixed value, allowing models to balance throughput and latency in resource-constrained environments. For instance, Gokarn et al. [190] introduced MOSAIC, a framework that boosts edge video analytics by batching frames from multiple cameras into a single canvas for DNN inference. Its Mosaic Across Scales (MoS) pipeline identified critical regions from each frame, resized them based on importance, and bin-packed them into a fixed-size canvas. This batching method allowed simultaneous processing of multiple streams, achieving up to 4.75× higher throughput on Jetson TX2 with less than 1% accuracy loss in pedestrian detection. Also, Zhang et al. [191] proposed BCEdge, a scheduling framework that optimized batch size and the number of concurrent DNN models to deliver SLO-aware inference on edge devices. Their approach used a maximum entropy-based DRL scheduler and a lightweight interference prediction model to adapt batching and concurrency while meeting latency constraints. On NVIDIA Xavier NX with YOLOv5, BCEdge improved utility by up to 37.6% compared with prior methods.
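A minimal way to adapt batch size at the edge, sketched below, is a simple feedback controller that doubles the batch while measured latency stays well under a latency SLO and halves it when the SLO is violated. The model, thresholds, and step count are illustrative assumptions and far simpler than the DRL-based BCEdge scheduler or the MOSAIC pipeline.

```python
# Minimal batch-size adaptation sketch: grow the inference batch while the
# measured per-batch latency stays under a latency SLO, shrink it otherwise.
import time
import torch

def adapt_batch_size(model, make_batch, slo_ms=50.0,
                     batch=1, max_batch=64, steps=20):
    model.eval()
    for _ in range(steps):
        x = make_batch(batch)
        start = time.perf_counter()
        with torch.no_grad():
            model(x)
        latency_ms = (time.perf_counter() - start) * 1000
        if latency_ms < 0.8 * slo_ms and batch < max_batch:
            batch *= 2                   # headroom: raise throughput
        elif latency_ms > slo_ms and batch > 1:
            batch = max(1, batch // 2)   # SLO violated: back off
    return batch

model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU(),
                            torch.nn.Linear(512, 10))
best = adapt_batch_size(model, lambda b: torch.randn(b, 512))
print(f"Selected batch size: {best}")
```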
6. Edge AI Hardware Infrastructure for Smart Cities
In this section, we discuss edge AI hardware infrastructure for smart cities from three perspectives: data centers, chip design, and the requirements for land, electricity, and connectivity to enable facility functionality.
6.1. Edge AI Datacenter
Edge Micro Data Centers (EMDCs) enable low-latency processing near urban demand hubs such as hospitals and transit centers. These are compact, modular containers, integrated with cooling systems, built-in redundancy, and renewable energy sources to enhance reliability and sustainability. For instance, Gudepu et al. [192] proposed a three-stage architecture using EMDCs for firefighting. Sensor data is first processed locally with LSTM and Random Forest for fire prediction, followed by CNN-based confirmation from camera feeds, and finally, 360-degree video streaming to fire stations. This reduced latency and bandwidth, ensuring fast, reliable emergency response. Likewise, Szántó et al. [193] implemented Deep Simple Online Realtime Tracking (DeepSORT) within the BRAINE EMDC, integrating heterogeneous nodes orchestrated by Kubernetes. FPGA resources were exposed through Petalinux and Xilinx Runtime (XRT), enabling containerized deployment of object-tracking workloads in smart city environments. In addition, Pérez et al. [194] proposed an ANN-based model to predict power consumption in GPU-based edge data centers for Advanced Driver Assistance System (ADAS) applications. Using real traffic traces and CNN workloads, the model incorporated GPU features and Dynamic Voltage and Frequency Scaling (DVFS) settings to forecast energy usage 1 hour ahead, achieving a prediction error below 7.4%. This enabled proactive energy management to improve efficiency in federated edge deployments.
Furthermore, Manda et al. [195] explored energy-efficient cooling methods such as liquid, free-air, evaporative, and geothermal cooling to reduce power usage in telecom data centers. They also emphasized that modular designs minimize construction waste, optimize energy usage, and support scalable growth. Minazzo et al. [196] presented an EMDC design with CPUs, GPUs, FPGAs, and NVMe storage cooled by a passive two-phase thermosyphon system, achieving a Power Usage Effectiveness (PUE) of 1.034 at maximum load and 1.007 at medium load, with fully passive cooling possible below 186 W. This approach improved energy efficiency and reused waste heat.
Sustainability efforts also extend to task scheduling. For instance, Zhou et al. [197] introduced GreenEdge, a mobile offloading framework combining device-to-device (D2D) communication and energy harvesting (EH) for IoT devices. It executed tasks locally, offloaded them to energy-rich peers, or forwarded them to an edge datacenter (EDC), enabling energy-aware scheduling. Likewise, Aujla et al. [198] proposed EDCSuS, a framework for deploying geo-distributed EDCs for 5G-enabled vehicles. EDCSuS integrated renewable energy, resource sharing, and caching to reduce energy consumption, while an SDN controller managed service requests and flow paths to enhance service reliability.
6.2. AI Chips Design for Embedded AI
Heterogeneous accelerators, including Graphics Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs), are specialized processors that work alongside general-purpose Central Processing Units (CPUs). They enhance performance and energy efficiency by offloading parallel or data-intensive tasks, while serial components remain on the CPU. To this end, Lin et al. [199] surveyed Coarse-Grained Reconfigurable Architecture (CGRA) and FPGA/Application-Specific Integrated Circuit (ASIC) implementations for CNN inference on resource-constrained devices and introduced a unified evaluation metric. Zhai et al. [200] developed an FPGA-based accelerator for vehicle detection and tracking using YOLOv3, YOLOv3-tiny, and DeepSORT techniques, applying structured pruning, fixed-point quantization, and hardware optimizations to shrink model size by up to 98.2% and reach 168.72 fps on six parallel video streams. Marino et al. [201] proposed ME-ViT, a memory-efficient FPGA accelerator for ViT inference that lowered memory traffic by loading parameters once, storing intermediate results on-chip, reusing buffers, and fusing operations such as LayerNorm and Pseudo-Softmax. Building on this trend, Huang et al. [202] proposed EdgeLLM, a CPU-FPGA heterogeneous accelerator for LLM inference that applied mixed-precision systolic arrays (FP16 × FP16 for MHA, FP16 × INT4 for FFN) and log-scale structured weight sparsity to further increase computational efficiency. Also, Roa-Tort et al. [203] presented a low-cost ADAS platform that integrated FPGA-based vision with STMicroelectronics STM32 control on a 1:5 scale vehicle. The FPGA performed real-time YOLOv3-Tiny inference, while the STM32 handled sensor data and actuation, enabling synchronized lane tracking and object detection.
Low-power AI processors are specialized chips designed to perform ML and other AI tasks while consuming minimal energy. By processing AI data directly on the device instead of sending it to remote servers, these processors improve both latency and data privacy. For instance, Martin et al. [204] presented two edge-based vision systems for person detection to enhance safety. The approach integrated an NXP I.MX 8M Plus embedded platform and an NVIDIA Jetson TX2-based VAEC system using the DECIoT framework built on EdgeX Foundry. These systems enabled real-time person detection with CNNs (YOLOv5s). Likewise, Chen et al. [205] proposed Xyloni, an ultra-low-power accelerator for real-time wildfire detection on sensor nodes with reduced energy use. Their approach used low-power flash and Ferroelectric RAM (FeRAM) to store weights and activations, and time-shared a low-power FPGA across CNN layers to reduce both power consumption and data transmission.
Chip-level security embeds protection directly into hardware, safeguarding sensitive data, ensuring device integrity, and enabling secure communication. A key element is the Trusted Execution Environment (TEE), an isolated and encrypted area within a processor that securely runs sensitive code and data to preserve confidentiality and integrity. For instance, Liu et al. [206] proposed MirrorNet, which splits a DNN into a low-accuracy BackboneNet running in the normal world and a lightweight Companion Partial Monitor in the TEE that monitors and corrects intermediate outputs to preserve confidentiality. Ding et al. [207] proposed GNNVault, which secured GNN inference by running a public backbone trained on a substitute graph outside the TEE and a small private rectifier inside it that uses the real graph to correct embeddings, preventing leakage of private parameters and graph edges. Sun et al. [208] developed TensorShield, which used attention-based Explainable AI (XAI) to identify privacy-critical tensors and selectively shield them inside the TEE with latency-aware placement, mitigating model stealing and membership inference attacks while minimizing performance overhead.
6.3. The Need for Land, Electricity, and Connectivity
Limited space in dense urban areas makes it difficult to install new infrastructure, so re-purposing existing infrastructure, such as street signposts, rooftops, and bus stops, can help overcome land constraints by placing edge computing nodes closer to users and devices. Related to this direction, Adkins et al. [209] introduced Signpost, a modular energy-harvesting sensing platform for city-scale deployments that mounts on existing street signposts, using a solar-powered backplane with pluggable sensor modules and shared power, communication, and processing interfaces, enabling dense deployments without new wired infrastructure. Kamal et al. [210] proposed an IoT-based system to enhance existing bus stops in the United Arab Emirates (UAE). Their approach utilized a Raspberry Pi and a WiFi-enabled microcontroller, equipped with temperature, humidity, light, motion, and air pollution sensors, to transmit data to a Firebase cloud database. This setup enabled the display of real-time status and alerts via a mobile app. Ke et al. [211] proposed using bus stops and buses as data mules to reduce reliance on cellular networks. They applied greedy-based Maximal Sensor Coverage (MSC) and Cost-Effective Lazy Forward (CELF)-based Minimal Delivery Delay (MDD) algorithms for edge node placement, reducing the coverage budget by over 77% and delivery latency by more than 20 min.
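To illustrate the greedy maximal-coverage step behind such placement, the sketch below repeatedly selects the candidate site that covers the most still-uncovered demand points within a placement budget. The site names and coverage sets are toy assumptions, not the MSC/CELF formulation of the cited work.

```python
# Minimal greedy maximal-coverage sketch for edge node placement: given
# candidate sites (e.g., bus stops) and the demand points each would cover,
# pick the site that covers the most still-uncovered points, up to a budget.
def greedy_placement(candidate_coverage: dict, budget: int):
    covered = set()
    chosen = []
    for _ in range(budget):
        best = max(candidate_coverage,
                   key=lambda site: len(candidate_coverage[site] - covered),
                   default=None)
        if best is None or not (candidate_coverage[best] - covered):
            break  # nothing new can be covered
        chosen.append(best)
        covered |= candidate_coverage.pop(best)
    return chosen, covered

# Toy example: three bus stops covering overlapping sets of city blocks.
sites = {"stop_A": {"b1", "b2", "b3"},
         "stop_B": {"b3", "b4"},
         "stop_C": {"b4", "b5", "b6"}}
print(greedy_placement(sites, budget=2))  # (['stop_A', 'stop_C'], {...})
```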
Co-location pairs large-scale power consumers, such as Data centers, with nearby renewable sources, like solar or wind farms, to provide a high-capacity, dedicated power supply. Energy harvesting delivers low-maintenance power for distributed devices such as sensors. For instance, Vadi et al. [212] developed an off-grid smart street lighting system powered by solar panels with battery storage and controlled through LoRaWAN. They used the Perturb and Observe (P&O) Maximum Power Point Tracking (MPPT) algorithm for solar energy extraction and transmitted data via Message Queuing Telemetry Transport (MQTT), achieving 97.96% MPPT efficiency and reliable operation of four LED streetlights over 1.2 km. Biundini et al. [213] proposed LoRaCELL, an IoT-based smart lighting system that uses synchronized edge sensor nodes and a central LoRa device for data collection and transmission, reducing gateway requirements and lowering deployment costs compared to LoRaWAN. Likewise, Alsubi et al. [214] introduced Stash, a back-end energy storage design for batteryless sensors under intermittent energy harvesting that stored excess energy in an additional capacitor after the main capacitors were charged. This allowed sensors to operate longer and perform extra tasks, improving coverage by up to 15% under variable-energy conditions.
A hybrid connectivity backbone combines Fiber-optic communication, 5G/6G cellular, and low-power wide-area network (LPWAN) technologies to create a robust network that leverages the strengths of each. By offering multiple communication pathways, this hybrid approach improves resilience, provides redundancy, and ensures higher uptime. For instance, Igartua et al. [215] evaluated LoRaWAN-based LPWANs deployed in Santander for a smart parking service. They used buried radar-based parking sensors that sent status data via LoRaWAN, and found that, despite signal degradation from urban obstacles and parked vehicles, robust coverage is possible with careful gateway placement and network planning. Castellanos et al. [216] assessed Fixed wireless access (FWA) deployment in the 60 GHz band for urban and rural areas in Belgium using a 3D ray-tracing network planning tool. Their design used a hierarchical backhaul of Edge Node (EN), Points of presence (PoP), and Customer premises equipment (CPE), showing that 95% coverage is possible with lamp-post ENs, requiring 300 ENs/km2 in urban and 75 ENs/km2 in rural areas. Also, Rafi et al. [217] analyzed LoRaWAN, Narrowband Internet of Things (NB-IoT), and Sigfox LPWANs against 4G/5G for smart agriculture connectivity. They found that LoRaWAN, NB-IoT, and Sigfox enable long-range, low-power monitoring, while 4G/5G support high-throughput, low-latency tasks, and that combining LPWAN with 5G models enhances reliability and reduces connectivity costs by up to 30%.
Battery backups and small-scale energy storage provide resilience by maintaining power during grid outages and ensuring continuous operation. For instance, Narayan et al. [218] developed BOBBER, a batteryless prototyping platform for intermittent accelerators powered by Radio-frequency energy harvesting. Their design combined an MCU running intermittency-aware firmware with an FPGA for CNN kernel acceleration, enabling correct and complete LeNet-5 inference under intermittent power conditions. Song et al. [219] presented TaDA, a system architecture for batteryless IoT devices powered by energy harvesting. They used hardware interconnect with persistent storage to decouple tasks across multiple MCUs, allowing each task to run on the most efficient MCU and achieving up to 96.7% energy savings and 68.7× throughput improvement.
7. Open Challenges
We now identify several open challenges, such as heterogeneous data integration, hardware hysteresis, and the necessity of synergy among sensing, communication, computing, and control. Additionally, we discuss ethical, governance, and policy considerations, as well as security and privacy concerns, for edge AI-based smart city systems.
7.1. Consensus and Synchronization Among Heterogeneous Sensing Data
Consensus and synchronization across heterogeneous sensing data from devices such as cameras, IMUs, LiDAR, radar, microphones, WiFi/CSI, loop detectors, and environmental sensors are essential for producing reliable insights in smart city applications. Beyond differing modalities, practical deployments must contend with clock drift, transport jitter, rolling-vs.-global shutter timing, LiDAR spin-phase offsets, and per-device buffering, all of which skew timestamps and distort cross-stream causality. Hardware-assisted time synchronization (e.g., PTP/IEEE 1588 with NIC-level timestamping) and sensor-specific timing models can substantially reduce skew, but still require application-layer compensation to meet millisecond-level deadlines in closed-loop urban systems [220].
In addition, the lack of consistent time-stamping, the presence of measurement noise, and the absence of standardized data fusion protocols create further barriers, leading to mismatched or delayed decisions in real-time urban systems. Even when wall-clock time is available (e.g., via NTP), heterogeneous uncertainty representations and undocumented preprocessing (exposure control, filtering, compression) hinder principled fusion; consequently, pipelines benefit from explicitly modeling timestamp uncertainty alongside sensor noise and using metadata schemas that carry calibration provenance and effective sampling characteristics [221].
Finally, achieving cross-modal temporal and spatial alignment is a prerequisite for reliable multi-sensor fusion, but remains difficult in the wild. Robust practice combines continuous spatiotemporal self-calibration (jointly estimating extrinsics and time offsets) with graph-based estimation that treats timing, poses, and landmarks in a unified factor graph; this allows online refinement as environments, temperatures, and loads change, maintaining low reprojection error and bounded inter-stream skew during long-running city operations [222].
7.2. Hardware Hysteresis for AI Algorithms
Hardware hysteresis refers to lagged, path-dependent, or non-linear responses in physical components that bias measurements and actuator commands, ultimately degrading the reliability of AI-driven decisions. In practice, the phenomenon spans magnetic or ferroelectric effects, mechanical backlash, piezo or thermal creep, and sensor bias drift, all of which violate the i.i.d. and stationarity assumptions embedded in many models [223,224].
In sensing pipelines, hysteresis and drift manifest as input distortions, e.g., IMU bias that depends on temperature or motion trajectory, camera rolling-shutter skew that varies with exposure, and ADC nonlinearity at range extremes, so that nominally identical scenes produce different feature statistics. Without compensation, these path-dependent errors propagate into perception stacks (detection, tracking, SLAM), shifting decision boundaries and inflating uncertainty in ways conventional calibration cannot fully absorb [220].
On the computing side, accelerators introduce their own “temporal hysteresis”: dynamic voltage and frequency scaling (DVFS), shared-resource contention, and thermal throttling create load- and history-dependent inference latencies. Such variability breaks fixed-deadline assumptions in time-critical loops (e.g., traffic signal control), where missing a control interval can be more harmful than delivering a slightly less accurate estimate on time [225].
7.3. The Necessity of Synergy Among Sensing, Communication, Computing, and Control in Edge AI for Smart Cities
The integration of sensing, communication, computing, and control is essential for smart city systems to operate robustly under tight latency budgets, intermittent connectivity, and nonstationary environments. Each component is typically engineered with distinct objectives (e.g., sensing accuracy, spectral efficiency, computational throughput, and closed-loop stability), and these objectives can be mutually constraining or even conflicting [226]. If these elements are not properly synchronized, problems such as communication bottlenecks, computational overload, or unstable control mechanisms may occur, which can undermine the efficiency, resilience, and safety of urban infrastructure.
7.4. Ethical, Governance, and Policy Considerations
Ethical considerations. The large-scale use of edge AI in smart cities introduces ethical risks that extend beyond algorithmic accuracy. Continuous and multimodal sensing, such as video, audio, WiFi CSI, and mobility traces, enables fine-grained inference about individuals and communities, potentially revealing health status, mobility patterns, socio-economic conditions, or social interactions [227,228]. When these data streams are repurposed for objectives other than their original design (function creep), they can undermine citizens’ expectations of privacy and contextual integrity [229]. In addition, because many smart city applications are deployed unevenly across districts, learning models may encode spatial or demographic biases, which can lead to unequal quality of service or disproportionate surveillance. To mitigate these risks, ethical-by-design mechanisms, including purpose limitation, data minimization, privacy-preserving or federated learning, and fairness-aware model selection and aggregation, must be embedded into the system architecture.
Governance challenges. Edge AI complicates governance because decisions are produced by distributed and heterogeneous processing pipelines that may include the sensing device, an intermediate edge node, a gateway, a cloud service, and a municipal platform. In such settings, it is not straightforward to determine responsibility when an inference is incorrect, a model is outdated, or a compromised device injects falsified data [230]. Effective governance, therefore, requires explicit model provenance, auditability of edge inferences, and clearly defined human-in-the-loop or human-on-the-loop procedures for safety-critical domains [231]. Lifecycle governance is also essential because municipalities and operators must be able to mandate model updates, security patches, and decommissioning across a diverse edge fleet to prevent model drift and security degradation over time.
Policy and regulatory implications. Smart city AI deployments typically involve multiple stakeholders, including municipal governments, transportation authorities, utilities, private vendors, and sometimes academic or healthcare partners. Misaligned policies on consent, data retention, and data sharing can create fragmentation and can also hinder cross-domain analytics. Policy frameworks should therefore specify permissible data flows across domains, minimum anonymization requirements for cross-organizational sharing and retention, and deletion schedules that are consistent with local privacy regulations [232]. Public-sector procurement policies can also be used to require interoperability, standardized metadata for AI models, explainability features, and baseline security in edge AI solutions, so that performance and latency gains do not undermine civil liberties or transparency.
7.5. Security and Privacy Concerns
Security and privacy issues are central challenges in smart city edge AI because data is collected continuously from large numbers of heterogeneous devices deployed in public or semi-public spaces. Prior surveys on security and privacy in smart cities and IoT infrastructures [233] emphasize that the resulting attack surface is significantly larger than in traditional, centrally managed systems, requiring defense-in-depth across devices, networks, and cloud backends.
First, from a data security perspective, edge nodes such as roadside units, cameras, Wi-Fi access points, environmental sensors, and mobile devices can become potential attack surfaces if they are not properly authenticated, patched, and monitored. Adversaries can compromise an edge device, inject falsified sensor readings, or tamper with local models and firmware to degrade the quality of AI services or cause unsafe decisions in safety-critical applications (e.g., traffic control, public safety alerts, or remote health monitoring). As recommended by recent guidance for smart-city and critical-infrastructure deployments, secure communication channels, strong device identity management, encrypted model distribution, remote attestation, and continuous integrity checking should be treated as core architectural requirements rather than optional add-ons [232,234].
Second, model security and robustness are equally important. Federated edge learning models deployed in smart city environments are exposed to adversarial inputs [235], distribution shifts, and model poisoning attacks [236,237] that can be mounted through compromised devices or communication channels. Recent surveys on federated learning security [10] show that data poisoning attacks, backdoor attacks, and flipping attacks can substantially degrade global model performance or extract sensitive information, especially when updates are aggregated from partially trusted edge nodes. Robustness, therefore, requires both secure training pipelines (e.g., robust aggregation, anomaly detection for updates, and secure enclaves for aggregation servers) and robust inference (e.g., adversarially trained models, certified robustness techniques, or runtime detection of anomalous inputs) that are compatible with the resource constraints of edge devices.
Finally, privacy also deserves attention because many smart city applications rely on data that can be linked, directly or indirectly, to individuals or to small groups [231]. Even if the raw data are not shared outside the edge, repeated observations over time can enable re-identification or profiling, especially when multiple data sources are correlated. Location traces, mobility patterns, video analytics, and Wi-Fi CSI sensing can reveal habits, social ties, and vulnerabilities if they are stored without strict access control and retention policies. To address this problem, privacy-preserving mechanisms at the edge (e.g., federated or split learning, differential privacy for model updates, and role-based access to urban data platforms) should be explicitly described and, whenever possible, evaluated in terms of the privacy guarantees they provide.
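The sketch below illustrates one such mechanism, differentially private release of a local model update: the update is clipped to a fixed L2 norm and Gaussian noise is added before it leaves the device. The clipping norm and noise multiplier are placeholder values; a full deployment would also track the cumulative privacy budget across rounds.

```python
import numpy as np


def privatize_update(update: np.ndarray, clip_norm: float, noise_multiplier: float,
                     rng: np.random.Generator) -> np.ndarray:
    """Clip the local update to a fixed L2 norm, then add calibrated Gaussian noise
    so that a single user's contribution is bounded before it leaves the edge device."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    local_update = rng.normal(0, 1.0, size=16)
    print(privatize_update(local_update, clip_norm=1.0, noise_multiplier=1.1, rng=rng))
```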
8. Future Research Directions
In the following, we discuss several future research directions for edge AI-enabled smart cities in four areas: heterogeneous sensing fusion; hardware-AI co-design; the co-design of sensing, communication, computing, and control; and the integration of edge AI with digital twins.
8.1. Heterogeneous Sensing Fusion
Smart cities rarely rely on a single sensing modality. Video cameras, LiDAR, mmWave radar, Wi-Fi CSI, environmental sensors, vehicular telemetry, and crowdsensed mobile data all have different sampling rates, spatial granularity, privacy sensitivity, and noise characteristics. A key research direction is the design of fusion pipelines that can integrate these heterogeneous streams into a temporally aligned and semantically consistent view of the urban environment [238]. This requires unified data and metadata schemas for describing sensor capabilities, uncertainty levels, missing data patterns, and spatial references, so that edge devices from different vendors can interoperate without extensive manual configuration.
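The following sketch suggests what such a unified, vendor-neutral sensor descriptor could look like; the field names and values are illustrative assumptions rather than a proposed standard.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class SensorDescriptor:
    """Vendor-neutral description of one urban sensing stream."""
    sensor_id: str
    modality: str             # e.g., "camera", "mmwave_radar", "wifi_csi", "pm25"
    sampling_rate_hz: float
    spatial_ref: str          # coordinate reference system, e.g., "EPSG:4326"
    location: tuple[float, float]
    uncertainty: float        # vendor-reported measurement uncertainty
    missing_data_policy: str  # e.g., "interpolate", "drop", "hold_last"


if __name__ == "__main__":
    cam = SensorDescriptor("cam-042", "camera", 15.0, "EPSG:4326",
                           (39.39, -76.61), 0.05, "drop")
    # Descriptors like this could be exchanged between vendors and city platforms.
    print(json.dumps(asdict(cam), indent=2))
```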
Another open problem is adaptive and context-aware fusion. In practical deployments, some modalities may be degraded or intermittently unavailable because of weather, occlusion, device failure, or network congestion. Future methods should assign fusion weights based on current sensing quality, historical reliability, and the criticality of the application. Learning-based fusion, for example, attention-based multimodal networks or graph-based fusion over sensor topologies, can learn these weights automatically from data; however, such models must be designed to run on resource-constrained edge devices. Finally, evaluation benchmarks for urban multimodal fusion remain immature. Public datasets that cover multiple city domains and include real-world artifacts such as time skew, sensor loss, and adversarial noise would make it easier to compare fusion approaches in a fair and reproducible way.
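As a deliberately simplified example of quality-aware fusion, the sketch below weights per-modality estimates with a softmax over current quality scores, so that a degraded modality contributes little to the fused output; in a learned system, these weights would instead come from an attention module trained end to end.

```python
import numpy as np


def adaptive_fuse(estimates: list[np.ndarray], qualities: list[float]) -> np.ndarray:
    """Fuse per-modality estimates with softmax weights driven by current sensing
    quality; a degraded or missing modality (quality ~ 0) contributes almost nothing."""
    q = np.array(qualities, dtype=float)
    w = np.exp(q) / np.exp(q).sum()
    return sum(wi * e for wi, e in zip(w, np.stack(estimates)))


if __name__ == "__main__":
    camera_est = np.array([0.9, 0.1])  # e.g., scores for "congested" vs "free-flow"
    radar_est = np.array([0.6, 0.4])
    csi_est = np.array([0.2, 0.8])     # CSI link degraded by interference
    fused = adaptive_fuse([camera_est, radar_est, csi_est], qualities=[2.0, 1.5, 0.1])
    print(fused)
```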
8.2. Hardware-AI Co-Design
Edge AI in cities must operate on a very wide spectrum of devices, ranging from microcontroller-class sensor nodes to single-board computers to GPU-enabled edge servers in traffic cabinets. A promising direction is to co-design hardware and AI models so that the model architecture, operator set, and memory layout are directly matched to the capabilities of the target platform [239]. On the model side, this includes neural architecture search under device constraints, quantization-aware training, pruning, and sparsity patterns that align with the accelerator, and operator reordering to reduce memory traffic. On the hardware side, this includes domain-specific accelerators for vision, CSI-based sensing, or audio analytics, on-chip support for secure model loading and attestation, and energy-proportional designs that can scale with the current workload.
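One small, self-contained example of the model-side levers is symmetric int8 post-training quantization, sketched below with NumPy; real toolchains additionally perform quantization-aware training and calibrate activation ranges, which this sketch omits.

```python
import numpy as np


def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor int8 quantization: weights are mapped to [-127, 127]
    with a single scale, shrinking storage and enabling integer-only accelerators."""
    scale = max(np.abs(weights).max(), 1e-8) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale


def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale


if __name__ == "__main__":
    w = np.random.default_rng(0).normal(0, 0.2, size=(64, 64)).astype(np.float32)
    q, s = quantize_int8(w)
    err = np.abs(w - dequantize(q, s)).mean()
    print(f"int8: {q.nbytes} bytes vs fp32: {w.nbytes} bytes, mean abs error: {err:.5f}")
```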
There is also a need for lifecycle-aware co-design. Smart city applications evolve over time, and models must be updated to reflect new objects, traffic patterns, or anomalies. Future work should develop hardware-friendly formats for over-the-air model updates, on-device model versioning, and partial model deployment, so that edge nodes can remain online without interruption. Toolchains that provide end-to-end compilation from high-level AI frameworks to heterogeneous edge hardware, along with profiling feedback from real-world deployments, will enable rapid iteration on both hardware and model designs. Security must be integrated into this co-design process, for example, by supporting encrypted model execution or trusted execution environments on the edge, because devices deployed in public spaces are exposed to tampering.
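The sketch below illustrates one piece of such a lifecycle: a versioned over-the-air update manifest with an integrity hash that an edge node checks before swapping models. The manifest fields and version scheme are hypothetical.

```python
import hashlib
import json
from pathlib import Path


def make_manifest(model_path: str, version: str) -> dict:
    """Server-side: describe an over-the-air model update with version and content hash."""
    digest = hashlib.sha256(Path(model_path).read_bytes()).hexdigest()
    return {"version": version, "sha256": digest, "partial": False}


def should_apply(manifest: dict, installed_version: str, model_path: str) -> bool:
    """Device-side: apply only newer, integrity-checked updates; otherwise keep serving.
    Version strings are assumed zero-padded so lexicographic comparison is safe."""
    digest = hashlib.sha256(Path(model_path).read_bytes()).hexdigest()
    return manifest["version"] > installed_version and digest == manifest["sha256"]


if __name__ == "__main__":
    Path("update.bin").write_bytes(b"new model weights")
    manifest = make_manifest("update.bin", version="2025.10.1")
    print(json.dumps(manifest, indent=2))
    print(should_apply(manifest, installed_version="2025.09.3", model_path="update.bin"))
```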
8.3. Sensing, Communication, Computing, and Control Co-Design
Current systems often optimize sensing, networking, and computing in isolation. In a dense smart city, this separation leaves performance gains untapped. A more holistic direction is to jointly determine what to sense, when to communicate, where to compute, and how to control actuators, subject to latency, energy, and reliability constraints [240]. For example, if a traffic monitoring camera already detects stable conditions, the system can lower the frame rate, reduce the frequency of model inference, and defer transmission, which saves energy and bandwidth for other critical tasks. Conversely, if an anomaly or safety-critical event is detected, the system can temporarily elevate sensing and communication rates and schedule computing tasks at the nearest capable edge node.
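A minimal rule-based sketch of this joint adaptation is shown below: sensing rate, inference period, and uplink use are chosen together from an anomaly score and the current link quality. The thresholds and values are illustrative assumptions; an optimized deployment would derive them from the latency, energy, and reliability constraints discussed above.

```python
from dataclasses import dataclass


@dataclass
class SensingPolicy:
    frame_rate_hz: float
    inference_period_s: float
    transmit: bool


def adapt_policy(anomaly_score: float, link_quality: float) -> SensingPolicy:
    """Jointly tune sensing, computing, and communication: stay in a low-power regime
    under stable conditions, escalate when an anomaly is likely and the link can carry it."""
    if anomaly_score > 0.8:   # safety-critical event suspected
        return SensingPolicy(frame_rate_hz=30.0, inference_period_s=0.1,
                             transmit=link_quality > 0.2)
    if anomaly_score > 0.4:   # uncertain: raise sensing locally, defer the uplink
        return SensingPolicy(frame_rate_hz=10.0, inference_period_s=0.5, transmit=False)
    return SensingPolicy(frame_rate_hz=2.0, inference_period_s=2.0, transmit=False)


if __name__ == "__main__":
    print(adapt_policy(anomaly_score=0.9, link_quality=0.7))  # escalated regime
    print(adapt_policy(anomaly_score=0.1, link_quality=0.7))  # energy-saving regime
```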
To achieve this behavior, research is needed on hierarchical and distributed optimization frameworks that span device-level schedulers, edge cluster orchestrators, and city-level controllers. Such frameworks must take into account stochastic wireless links, time-varying workloads, mobility of users and vehicles, and the economic cost of using wide-area backhaul. They should also incorporate control-theoretic notions, such as stability and safety, so that control actions remain valid even under packet loss or delayed inference. Learning-based co-design is another promising direction: reinforcement learning or bandit formulations can learn joint sensing and offloading policies from real traffic, but such policies must be made sample-efficient and safe enough for operation in public infrastructure.
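As a toy example of the learning-based variant, the epsilon-greedy bandit below chooses among local, edge, and cloud execution and drifts toward the lowest-latency option; the latency model and reward are synthetic, and a real system would additionally need the safety and sample-efficiency safeguards noted above.

```python
import random


class OffloadingBandit:
    """Epsilon-greedy bandit over offloading targets; the reward is the negative
    observed latency, so the policy drifts toward the fastest option."""

    def __init__(self, arms: list[str], epsilon: float = 0.1) -> None:
        self.arms = arms
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}

    def select(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(self.arms)                  # explore
        return max(self.arms, key=lambda a: self.values[a])  # exploit

    def update(self, arm: str, latency_s: float) -> None:
        self.counts[arm] += 1
        reward = -latency_s
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


if __name__ == "__main__":
    random.seed(0)
    bandit = OffloadingBandit(["local", "edge", "cloud"])
    true_latency = {"local": 0.40, "edge": 0.12, "cloud": 0.25}  # synthetic environment
    for _ in range(500):
        arm = bandit.select()
        bandit.update(arm, random.gauss(true_latency[arm], 0.02))
    print(max(bandit.values, key=bandit.values.get))  # typically "edge"
```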
8.4. When Edge AI Meets Digital Twins: A New Opportunity for Smart Cities
Digital twins provide virtual replicas of physical assets, processes, or entire city districts. Their usefulness depends on how closely they track the current state of the physical world. Edge AI can provide this missing link by performing fast, in situ interpretation of sensor data and pushing state updates to the twin in near real time [241]. This creates a closed loop in which the twin has an up-to-date view of traffic, energy loads, air quality, or crowd density, and the physical system can receive optimized control policies, what-if analyses, or maintenance alerts that were tested safely in the virtual environment.
Future work in this area can proceed in several directions. First, there is a need for standard interfaces between edge analytics services and twin platforms, including schemes for time-stamped state updates, model uncertainty, and provenance. Second, twin-aware edge analytics can select which data to send with high fidelity and which to aggregate locally, ensuring the twin remains accurate without overwhelming the network. Third, digital twins can be utilized to train, test, and validate edge AI models before deployment. Synthetic or replayed urban scenarios can expose models to rare events, bias patterns, or adversarial conditions that are hard to collect in practice. This loop can significantly shorten the deployment cycle for new smart city services and support planning tasks such as large-scale traffic re-routing, energy sharing among buildings, or coordinated public safety responses, while maintaining the safety of the physical infrastructure.
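The sketch below shows what a time-stamped, provenance-carrying state update from an edge analytics service to a twin platform might look like; the message fields are illustrative and would in practice follow whatever interface standard emerges.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class TwinStateUpdate:
    """Edge-side message pushed to a digital twin: what was observed, how uncertain
    the estimate is, and which device/model produced it (provenance)."""
    entity_id: str   # e.g., "intersection-17"
    quantity: str    # e.g., "vehicle_count"
    value: float
    std_dev: float   # model or sensor uncertainty
    source: str      # provenance: device and model version
    timestamp: str


def make_update(entity_id: str, quantity: str, value: float, std_dev: float,
                source: str) -> str:
    update = TwinStateUpdate(entity_id, quantity, value, std_dev, source,
                             datetime.now(timezone.utc).isoformat())
    return json.dumps(asdict(update))


if __name__ == "__main__":
    # In a deployment, this payload would be posted to the twin platform's ingestion API.
    print(make_update("intersection-17", "vehicle_count", 42.0, 3.5,
                      "cam-042/model-v2025.10.1"))
```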
9. Conclusions
This survey has examined edge AI as a key enabler for next-generation smart cities, where large volumes of heterogeneous data must be processed close to the sources of sensing and actuation. By moving computation to the edge, smart city systems can achieve lower latency, greater context awareness, and improved data locality, which together reduce reliance on centralized cloud infrastructure. We have organized the literature around four core components of edge AI-enabled smart cities, namely applications, sensing data, learning models, and hardware infrastructure, and we have discussed how these components interact in practical urban deployments across manufacturing, healthcare, transportation, buildings, and environments. Beyond providing a structured synthesis, we have highlighted several cross-cutting challenges that limit current deployments. Looking ahead, we have identified multiple research directions that can strengthen the role of edge AI in smart cities and inspire further innovation towards more harmonious and sustainable smart cities.
Funding
This research was supported in part by the US National Science Foundation under award number 2449627. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding agency.
Data Availability Statement
Not applicable. The study does not report any data.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Liu, X.; Qian, C.; Hatcher, W.G.; Xu, H.; Liao, W.; Yu, W. Secure Internet of Things (IoT)-Based Smart-World Critical Infrastructures: Survey, Case Study and Research Opportunities. IEEE Access 2019, 7, 79523–79544. [Google Scholar] [CrossRef]
- Gharaibeh, A.; Salahuddin, M.A.; Hussini, S.J.; Khreishah, A.; Khalil, I.; Guizani, M.; Al-Fuqaha, A. Smart Cities: A Survey on Data Management, Security, and Enabling Technologies. IEEE Commun. Surv. Tutor. 2017, 19, 2456–2501. [Google Scholar] [CrossRef]
- Mallapuram, S.; Ngwum, N.; Yuan, F.; Lu, C.; Yu, W. Smart city: The state of the art, datasets, and evaluation platforms. In Proceedings of the 2017 IEEE/ACIS 16th International Conference on Computer and Information Science (ICIS), Wuhan, China, 24–26 May 2017; pp. 447–452. [Google Scholar] [CrossRef]
- Chen, Y.; Qian, C.; Hussaini, A.; Yu, W. Chapter 2—CPS foundation and principles. In Edge Intelligence in Cyber-Physical Systems; Intelligent Data-Centric Systems; Yu, W., Ed.; Academic Press: Cambridge, MA, USA, 2025; pp. 9–34. [Google Scholar] [CrossRef]
- Xu, G.; Hussaini, A.; Adetifa, O.; Yu, W. Chapter 3–Representative CPS application domains. In Edge Intelligence in Cyber-Physical Systems; Intelligent Data-Centric Systems; Yu, W., Ed.; Academic Press: Cambridge, MA, USA, 2025; pp. 35–67. [Google Scholar] [CrossRef]
- Hatcher, W.G.; Yu, W. A Survey of Deep Learning: Platforms, Applications and Emerging Research Trends. IEEE Access 2018, 6, 24411–24432. [Google Scholar] [CrossRef]
- Mohammadi, M.; Al-Fuqaha, A.; Sorour, S.; Guizani, M. Deep Learning for IoT Big Data and Streaming Analytics: A Survey. IEEE Commun. Surv. Tutor. 2018, 20, 2923–2960. [Google Scholar] [CrossRef]
- Xu, H.; Yu, W.; Griffith, D.; Golmie, N. A Survey on Industrial Internet of Things: A Cyber-Physical Systems Perspective. IEEE Access 2018, 6, 78238–78259. [Google Scholar] [CrossRef]
- Guo, Y.; Qian, C.; Song, J.; Yu, W. Chapter 6—Edge intelligence in CPS. In Edge Intelligence in Cyber-Physical Systems; Intelligent Data-Centric Systems; Yu, W., Ed.; Academic Press: Cambridge, MA, USA, 2025; pp. 141–164. [Google Scholar] [CrossRef]
- Tian, P.; Liao, W.; Qian, C.; Guo, Y.; Yu, W. Chapter 12—Foundation of secured edge intelligence in CPS. In Edge Intelligence in Cyber-Physical Systems; Intelligent Data-Centric Systems; Yu, W., Ed.; Academic Press: Cambridge, MA, USA, 2025; pp. 297–323. [Google Scholar] [CrossRef]
- Zhou, Z.; Chen, X.; Li, E.; Zeng, L.; Luo, K.; Zhang, J. Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing. arXiv 2019, arXiv:1905.10083. [Google Scholar] [CrossRef]
- Tao, W. Interdisciplinary urban GIS for smart cities: Advancements and opportunities. Geo-Spat. Inf. Sci. 2013, 16, 25–34. [Google Scholar] [CrossRef]
- Khan, S.; Paul, D.; Momtahan, P.; Aloqaily, M. Artificial Intelligence Framework for Smart City Microgrids: State of the Art, Challenges, and Opportunities. In Proceedings of the 2018 Third International Conference on Fog and Mobile Edge Computing (FMEC), Barcelona, Spain, 23–26 April 2018; pp. 283–288. [Google Scholar] [CrossRef]
- Khan, L.U.; Yaqoob, I.; Tran, N.H.; Kazmi, S.M.A.; Dang, T.N.; Hong, C.S. Edge-Computing-Enabled Smart Cities: A Comprehensive Survey. IEEE Internet Things J. 2020, 7, 10200–10232. [Google Scholar] [CrossRef]
- Kamruzzaman, M.M. New Opportunities, Challenges, and Applications of Edge-AI for Connected Healthcare in Smart Cities. In Proceedings of the 2021 IEEE Globecom Workshops (GC Wkshps), Madrid, Spain, 7–11 December 2021; pp. 1–6. [Google Scholar] [CrossRef]
- Maltezos, E.; Karagiannidis, L.; Dadoukis, A.; Petousakis, K.; Misichroni, F.; Ouzounoglou, E.; Gounaridis, L.; Gounaridis, D.; Kouloumentas, C.; Amditis, A. Public safety in smart cities under the edge computing concept. In Proceedings of the 2021 IEEE International Mediterranean Conference on Communications and Networking (MeditCom), Athens, Greece, 7–10 September 2021; pp. 88–93. [Google Scholar] [CrossRef]
- Ezzat, M.A.; Ghany, M.A.A.E.; Almotairi, S.; Salem, M.A.M. Horizontal Review on Video Surveillance for Smart Cities: Edge Devices, Applications, Datasets, and Future Trends. Sensors 2021, 21, 3222. [Google Scholar] [CrossRef] [PubMed]
- Alnoman, A. Edge Computing Services for Smart Cities: A Review and Case Study. In Proceedings of the 2021 International Symposium on Networks, Computers and Communications (ISNCC), Dubai, United Arab Emirates, 31 October 2021–2 November 2021; pp. 1–6. [Google Scholar] [CrossRef]
- Band, S.S.; Ardabili, S.; Sookhak, M.; Chronopoulos, A.T.; Elnaffar, S.; Moslehpour, M.; Csaba, M.; Torok, B.; Pai, H.T.; Mosavi, A. When Smart Cities Get Smarter via Machine Learning: An In-Depth Literature Review. IEEE Access 2022, 10, 60985–61015. [Google Scholar] [CrossRef]
- Badidi, E. Edge AI and Blockchain for Smart Sustainable Cities: Promise and Potential. Sustainability 2022, 14, 7609. [Google Scholar] [CrossRef]
- Shankar, V. Edge AI: A Comprehensive Survey of Technologies, Applications, and Challenges. In Proceedings of the 2024 1st International Conference on Advanced Computing and Emerging Technologies (ACET), Ghaziabad, India, 23–24 August 2024; pp. 1–6. [Google Scholar] [CrossRef]
- Trigkas, A.; Piromalis, D.; Papageorgas, P. Edge Intelligence in Urban Landscapes: Reviewing TinyML Applications for Connected and Sustainable Smart Cities. Electronics 2025, 14, 2890. [Google Scholar] [CrossRef]
- Xu, D.; Li, T.; Li, Y.; Su, X.; Tarkoma, S.; Jiang, T.; Crowcroft, J.; Hui, P. Edge Intelligence: Architectures, Challenges, and Applications. arXiv 2020, arXiv:2003.12172. [Google Scholar] [CrossRef]
- Bellini, P.; Nesi, P.; Pantaleo, G. IoT-Enabled Smart Cities: A Review of Concepts, Frameworks and Key Technologies. Appl. Sci. 2022, 12, 1607. [Google Scholar] [CrossRef]
- Krishnamurthi, R.; Kumar, A.; Gopinathan, D.; Nayyar, A.; Qureshi, B. An Overview of IoT Sensor Data Processing, Fusion, and Analysis Techniques. Sensors 2020, 20, 6076. [Google Scholar] [CrossRef]
- Liu, X.; Xu, H.; Liao, W.; Yu, W. Reinforcement Learning for Cyber-Physical Systems. In Proceedings of the 2019 IEEE International Conference on Industrial Internet (ICII), Orlando, FL, USA, 11–12 November 2019; pp. 318–327. [Google Scholar] [CrossRef]
- Mrabet, M.; Sliti, M. Towards Secure, Trustworthy and Sustainable Edge Computing for Smart Cities: Innovative Strategies and Future Prospects. IEEE Access 2025, 13, 174236–174253. [Google Scholar] [CrossRef]
- Hayyolalam, V.; Aloqaily, M.; Özkasap, Ö.; Guizani, M. Edge Intelligence for Empowering IoT-Based Healthcare Systems. IEEE Wirel. Commun. 2021, 28, 6–14. [Google Scholar] [CrossRef]
- Li, X.; Wan, J.; Dai, H.N.; Imran, M.; Xia, M.; Celesti, A. A Hybrid Computing Solution and Resource Scheduling Strategy for Edge Computing in Smart Manufacturing. IEEE Trans. Ind. Inform. 2018, 15, 4225–4234. [Google Scholar] [CrossRef]
- Lin, C.C.; Deng, D.J.; Chih, Y.L.; Chiu, H.T. Smart Manufacturing Scheduling with Edge Computing Using Multiclass Deep Q Network. IEEE Trans. Ind. Inform. 2019, 15, 4276–4284. [Google Scholar] [CrossRef]
- Ing, J.; Hsieh, J.; Hou, D.; Hou, J.; Liu, T.; Zhang, X.; Wang, Y.; Pan, Y.T. Edge-Cloud Collaboration Architecture for AI Transformation of SME Manufacturing Enterprises. In Proceedings of the 2020 IEEE/ITU International Conference on Artificial Intelligence for Good (AI4G), Geneva, Switzerland, 21–25 September 2020; pp. 170–175. [Google Scholar] [CrossRef]
- Yang, C.; Lan, S.; Wang, L.; Shen, W.; Huang, G.G.Q. Big Data Driven Edge-Cloud Collaboration Architecture for Cloud Manufacturing: A Software Defined Perspective. IEEE Access 2020, 8, 45938–45950. [Google Scholar] [CrossRef]
- Tang, H.; Li, D.; Wan, J.; Imran, M.; Shoaib, M. A Reconfigurable Method for Intelligent Manufacturing Based on Industrial Cloud and Edge Intelligence. IEEE Internet Things J. 2020, 7, 4248–4259. [Google Scholar] [CrossRef]
- Ying, J.; Hsieh, J.; Hou, D.; Hou, J.; Liu, T.; Zhang, X.; Wang, Y.; Pan, Y.T. Edge-enabled cloud computing management platform for smart manufacturing. In Proceedings of the 2021 IEEE International Workshop on Metrology for Industry 4.0 & IoT (MetroInd4.0&IoT), Rome, Italy, 7–9 June 2021; pp. 682–686. [Google Scholar] [CrossRef]
- Vermesan, O.; Coppola, M.; Bahr, R.; Bellmann, R.O.; Martinsen, J.E.; Kristoffersen, A.; Hjertaker, T.; Breiland, J.; Andersen, K.; Sand, H.E.; et al. An Intelligent Real-Time Edge Processing Maintenance System for Industrial Manufacturing, Control, and Diagnostic. Front. Chem. Eng. 2022, 4, 900096. [Google Scholar] [CrossRef]
- Rane, N.; Choudhary, S.; Rane, J. A New Era of Automation in the Construction Industry: Implementing Leading-Edge Generative Artificial Intelligence, such as ChatGPT or Bard. SSRN Electron. J. 2024. [Google Scholar] [CrossRef]
- Xu, J.; Sun, Q.; Han, Q.L.; Tang, Y. When Embodied AI Meets Industry 5.0: Human-Centered Smart Manufacturing. IEEE/CAA J. Autom. Sin. 2025, 12, 485–501. [Google Scholar] [CrossRef]
- Zhu, S.; Ota, K.; Dong, M. Green AI for IIoT: Energy Efficient Intelligent Edge Computing for Industrial Internet of Things. IEEE Trans. Green Commun. Netw. 2022, 6, 79–88. [Google Scholar] [CrossRef]
- Putra, K.T.; Surahmat, I.; Chamim, A.N.N.; Ramadhan, M.Z.; Wicaksana, D.; Alissa, R.A.D.N. Continuous Glucose Monitoring: A Non-Invasive Approach for Improved Daily Healthcare. In Proceedings of the 2023 3rd International Conference on Electronic and Electrical Engineering and Intelligent System (ICE3IS), Yogyakarta, Indonesia, 9–10 August 2023; pp. 395–400. [Google Scholar] [CrossRef]
- Putra, K.T.; Arrayyan, A.Z.; Hayati, N.; Firdaus; Damarjati, C.; Bakar, A.; Chen, H.C. A Review on the Application of Internet of Medical Things in Wearable Personal Health Monitoring: A Cloud-Edge Artificial Intelligence Approach. IEEE Access 2024, 12, 21437–21452. [Google Scholar] [CrossRef]
- Pradeep, M.; Jyotsna, K.; Vedantham, R.; Bolisetty, V.V.; Yadav, G.R.; Kanakaprabha, S. Next-Gen Telehealth: A Low-Latency IoT and Edge AI Framework for Personalized Remote Diagnosis. In Proceedings of the 2025 International Conference on Inventive Computation Technologies (ICICT), Kirtipur, Nepal, 23–25 April 2025; pp. 1870–1874. [Google Scholar] [CrossRef]
- Akram, M.S.; Varma, B.S.; Javed, A.; Harkin, J.; Finlay, D. Toward TinyDPFL Systems for Real-Time Cardiac Healthcare: Trends, Challenges, and System-Level Perspectives on AI Algorithms, Hardware, and Edge Intelligence. TechRxiv 2025, 168, 103587. [Google Scholar] [CrossRef]
- Rathi, V.K.; Rajput, N.K.; Mishra, S.; Grover, B.A.; Tiwari, P.; Jaiswal, A.K.; Hossain, M.S. An edge AI-enabled IoT healthcare monitoring system for smart cities. Comput. Electr. Eng. 2021, 96, 107524. [Google Scholar] [CrossRef]
- Badidi, E. Edge AI for Early Detection of Chronic Diseases and the Spread of Infectious Diseases: Opportunities, Challenges, and Future Directions. Future Internet 2023, 15, 370. [Google Scholar] [CrossRef]
- Ahmed, S.T.; Basha, S.M.; Ramachandran, M.; Daneshmand, M.; Gandomi, A.H. An Edge-AI-Enabled Autonomous Connected Ambulance-Route Resource Recommendation Protocol (ACA-R3) for eHealth in Smart Cities. IEEE Internet Things J. 2023, 10, 11497–11506. [Google Scholar] [CrossRef]
- Thalluri, L.N.; Venkat, S.N.; Prasad, C.V.V.D.; Kumar, D.V.; Kumar, K.P.; Sarma, A.V.S.Y.N.; Adapa, S.D. Artificial Intelligence Enabled Smart City IoT System using Edge Computing. In Proceedings of the 2021 2nd International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 7–9 October 2021; pp. 12–20. [Google Scholar] [CrossRef]
- Sengupta, K.; Srivastava, P.R. HRNET: AI-on-Edge for Mask Detection and Social Distancing Calculation. SN Comput. Sci. 2022, 3, 157. [Google Scholar] [CrossRef]
- Choudhury, A.; Sarma, K.K.; Misra, D.D.; Guha, K.; Iannacci, J. Edge Computing for Smart-City Human Habitat: A Pandemic-Resilient, AI-Powered Framework. J. Sens. Actuator Netw. 2024, 13, 76. [Google Scholar] [CrossRef]
- Ke, R.; Cui, Z.; Chen, Y.; Zhu, M.; Yang, H.; Wang, Y. Edge Computing for Real-Time Near-Crash Detection for Smart Transportation Applications. arXiv 2020, arXiv:2008.00549. [Google Scholar] [CrossRef]
- Neto, A.R.; Silva, T.P.; Batista, T.; Delicato, F.C.; Pires, P.F.; Lopes, F. Leveraging Edge Intelligence for Video Analytics in Smart City Applications. Information 2020, 12, 14. [Google Scholar] [CrossRef]
- Huu, N.N.T.; Mai, L.; Minh, T.V. Detecting Abnormal and Dangerous Activities Using Artificial Intelligence on The Edge for Smart City Application. In Proceedings of the 2021 15th International Conference on Advanced Computing and Applications (ACOMP), Ho Chi Minh City, Vietnam, 24–26 November 2021; pp. 85–92. [Google Scholar] [CrossRef]
- Broekman, A.; Gräbe, P.J.; Steyn, W.J. Real-time traffic quantization using a mini edge artificial intelligence platform. Transp. Eng. 2021, 4, 100068. [Google Scholar] [CrossRef]
- Rahman, M.A.; Hossain, M.S.; Showail, A.J.; Alrajeh, N.A.; Ghoneim, A. AI-Enabled IIoT for Live Smart City Event Monitoring. IEEE Internet Things J. 2023, 10, 2872–2880. [Google Scholar] [CrossRef]
- Soy, H. Edge AI-Assisted IoV Application for Aggressive Driver Monitoring: A Case Study on Public Transport Buses. Int. J. Automot. Sci. Technol. 2023, 7, 213–222. [Google Scholar] [CrossRef]
- Murturi, I.; Egyed, A.; Dustdar, S. Utilizing AI Planning on the Edge. IEEE Internet Comput. 2022, 26, 28–35. [Google Scholar] [CrossRef]
- Irshad, R.R.; Hussain, S.; Hussain, I.; Ahmad, I.; Yousif, A.; Alwayle, I.M.; Alattab, A.A.; Alalayah, K.M.; Breslin, J.G.; Badr, M.M.; et al. An Intelligent Buffalo-Based Secure Edge-Enabled Computing Platform for Heterogeneous IoT Network in Smart Cities. IEEE Access 2023, 11, 69282–69294. [Google Scholar] [CrossRef]
- Alkinani, M.H.; Almazroi, A.A.; Adhikari, M.; Menon, V.G. Artificial Intelligence-Empowered Logistic Traffic Management System Using Empirical Intelligent XGBoost Technique in Vehicular Edge Networks. IEEE Trans. Intell. Transp. Syst. 2023, 24, 4499–4508. [Google Scholar] [CrossRef]
- Lee, S.; Baek, S.; Woo, W.H.; Ahn, C.; Yoon, J. Edge AI-Based Smart Intersection and Its Application for Traffic Signal Coordination: A Case Study in Pyeongtaek City, South Korea. J. Adv. Transp. 2024, 2024, 8999086. [Google Scholar] [CrossRef]
- Hazarika, A.; Choudhury, N.; Nasralla, M.M.; Khattak, S.B.A.; Rehman, I.U. Edge ML Technique for Smart Traffic Management in Intelligent Transportation Systems. IEEE Access 2024, 12, 25443–25458. [Google Scholar] [CrossRef]
- Moubayed, A.; Shami, A.; Heidari, P.; Larabi, A.; Brunner, R. Edge-Enabled V2X Service Placement for Intelligent Transportation Systems. IEEE Trans. Mob. Comput. 2019, 20, 1380–1392. [Google Scholar] [CrossRef]
- Jeong, Y.; Oh, H.W.; Kim, S.; Lee, S.E. An Edge AI Device based Intelligent Transportation System. J. Inf. Commun. Converg. Eng. 2022, 20, 166–173. [Google Scholar] [CrossRef]
- Chavhan, S.; Gupta, D.; Gochhayat, S.P.; N, C.B.; Khanna, A.; Shankar, K.; Rodrigues, J.J.P.C. Edge Computing AI-IoT Integrated Energy-efficient Intelligent Transportation System for Smart Cities. ACM Trans. Internet Technol. 2022, 22, 3507906. [Google Scholar] [CrossRef]
- Yang, S.; Tan, J.; Lei, T.; Linares-Barranco, B. Smart Traffic Navigation System for Fault-Tolerant Edge Computing of Internet of Vehicle in Intelligent Transportation Gateway. IEEE Trans. Intell. Transp. Syst. 2023, 24, 13011–13022. [Google Scholar] [CrossRef]
- Rong, Y.; Mao, Y.; Cui, H.; He, X.; Chen, M. Edge Computing Enabled Large-Scale Traffic Flow Prediction with GPT in Intelligent Autonomous Transport System for 6G Network. IEEE Trans. Intell. Transp. Syst. 2024, 26, 17321–17338. [Google Scholar] [CrossRef]
- Liu, C.; Yang, H.; Zhu, M.; Wang, F.; Vaa, T.; Wang, Y. Real-Time Multi-Task Environmental Perception System for Traffic Safety Empowered by Edge Artificial Intelligence. IEEE Trans. Intell. Transp. Syst. 2024, 25, 517–531. [Google Scholar] [CrossRef]
- Chen, Y.Y.; Chen, M.H.; Chang, C.M.; Chang, F.S.; Lin, Y.H. A Smart Home Energy Management System Using Two-Stage Non-Intrusive Appliance Load Monitoring over Fog-Cloud Analytics Based on Tridium’s Niagara Framework for Residential Demand-Side Management. Sensors 2021, 21, 2883. [Google Scholar] [CrossRef]
- Márquez-Sánchez, S.; Calvo-Gallego, J.; Erbad, A.; Ibrar, M.; Fernandez, J.H.; Houchati, M.; Corchado, J.M. Enhancing Building Energy Management: Adaptive Edge Computing for Optimized Efficiency and Inhabitant Comfort. Electronics 2023, 12, 4179. [Google Scholar] [CrossRef]
- Shahrabani, M.M.N.; Apanaviciene, R. An AI-Based Evaluation Framework for Smart Building Integration into Smart City. Sustainability 2024, 16, 8032. [Google Scholar] [CrossRef]
- Bajwa, A.; Jahan, F.; Siddiqui, N.A.; Ahmed, I. A Systematic literature review on AI-enabled smart building management systems for energy efficiency and sustainability. SSRN Electron. J. 2024, 03, 01–27. [Google Scholar] [CrossRef]
- Atanane, O.; Mourhir, A.; Benamar, N.; Zennaro, M. Smart Buildings: Water Leakage Detection Using TinyML. Sensors 2023, 23, 9210. [Google Scholar] [CrossRef]
- Ahamed, I.; Ranathunga, C.D.; Udayantha, D.S.; Ng, B.K.K.; Yuen, C. Real-Time AI-Driven People Tracking and Counting Using Overhead Cameras. In Proceedings of the TENCON 2024—2024 IEEE Region 10 Conference (TENCON), Singapore, 1–4 December 2024; pp. 952–955. [Google Scholar] [CrossRef]
- Vijay, K.; Jeyanth, V.; Kamalesh, S.; Dheva Prasath, D. Enhancing Operational Efficiency and Security in Multi-Sectioned Businesses with Edge AI through Real-Time Analytics. In Proceedings of the 2024 International Conference on Electronic Systems and Intelligent Computing (ICESIC), Chennai, India, 22–23 November 2024; pp. 251–256. [Google Scholar] [CrossRef]
- Craciun, R.A.; Caramihai, S.I.; Mocanu, S.; Pietraru, R.N.; Moisescu, M.A. Hybrid Machine Learning for IoT-Enabled Smart Buildings. Informatics 2025, 12, 17. [Google Scholar] [CrossRef]
- Reis, M.J.C.S.; Serôdio, C. Edge AI for Real-Time Anomaly Detection in Smart Homes. Future Internet 2025, 17, 179. [Google Scholar] [CrossRef]
- Yang, L. AI-Based Classification Model for Low-Energy Buildings: Promoting Sustainable Economic Development of Smart Cities with Spherical Fuzzy Decision Algorithm. IEEE Access 2025, 13, 18386–18402. [Google Scholar] [CrossRef]
- Silva, M.C.; Silva, J.C.F.d.; Delabrida, S.; Bianchi, A.G.C.; Ribeiro, S.P.; Silva, J.S.; Oliveira, R.A.R. Wearable Edge AI Applications for Ecological Environments. Sensors 2021, 21, 5082. [Google Scholar] [CrossRef]
- Almeida, E.; Sandoval, D.; Sánchez, A.; Dávila, P.F. On the Design of an AI Enabled Edge Workplace Environment Monitoring Station. In Proceedings of the 2024 IEEE Biennial Congress of Argentina (ARGENCON), San Nicolás de los Arroyos, Argentina, 18–20 September 2024; pp. 1–7. [Google Scholar] [CrossRef]
- Rehman, A.; Saeed, F.; Rathore, M.M.; Paul, A.; Kang, J. Smart city fire surveillance: A deep state-space model with intelligent agents. IET Smart Cities 2024, 6, 199–210. [Google Scholar] [CrossRef]
- Sodhro, A.H.; Pirbhulal, S.; Albuquerque, V.H.C.d. Artificial Intelligence-Driven Mechanism for Edge Computing-Based Industrial Applications. IEEE Trans. Ind. Inform. 2018, 15, 4235–4243. [Google Scholar] [CrossRef]
- Zhang, Y.; Tang, D.; Zhu, H.; Zhou, S.; Zhao, Z. An Efficient IIoT Gateway for Cloud–Edge Collaboration in Cloud Manufacturing. Machines 2022, 10, 850. [Google Scholar] [CrossRef]
- Sathupadi, K.; Achar, S.; Bhaskaran, S.V.; Faruqui, N.; Abdullah-Al-Wadud, M.; Uddin, J. Edge-Cloud Synergy for AI-Enhanced Sensor Network Data: A Real-Time Predictive Maintenance Framework. Sensors 2024, 24, 7918. [Google Scholar] [CrossRef]
- Kristiani, E.; Wang, L.Y.; Liu, J.C.; Huang, C.K.; Wei, S.J.; Yang, C.T. An Intelligent Thermal Compensation System Using Edge Computing for Machine Tools. Sensors 2024, 24, 2531. [Google Scholar] [CrossRef]
- Muhammad, G.; Alhamid, M.F.; Long, X. Computing and Processing on the Edge: Smart Pathology Detection for Connected Healthcare. IEEE Netw. 2019, 33, 44–49. [Google Scholar] [CrossRef]
- Wang, W.H.; Hsu, W.S. Integrating Artificial Intelligence and Wearable IoT System in Long-Term Care Environments. Sensors 2023, 23, 5913. [Google Scholar] [CrossRef]
- Gragnaniello, M.; Balbi, F.; Martellotta, G.; Borghese, A.; Marrazzo, V.R.; Maresca, L.; Breglio, G.; Irace, A.; Riccio, M. Edge-AI on Wearable Devices: Myocardial Infarction Detection with Spectrogram and 1D-CNN. In Proceedings of the 2024 IEEE 22nd Mediterranean Electrotechnical Conference (MELECON), Porto, Portugal, 25–27 June 2024; pp. 485–490. [Google Scholar] [CrossRef]
- Rahman, M.; Morshed, B.I. A Smart Wearable for Real-Time Cardiac Disease Detection Using Beat-by-Beat ECG Signal Analysis with an Edge Computing AI Classifier. In Proceedings of the 2024 IEEE 20th International Conference on Body Sensor Networks (BSN), Chicago, IL, USA, 15–17 October 2024; pp. 1–4. [Google Scholar] [CrossRef]
- Izhar, M.; Naqvi, S.A.A.; Ahmed, A.; Abdullah, S.; Alturki, N.; Jamel, L. Enhancing Healthcare Efficacy Through IoT-Edge Fusion: A Novel Approach for Smart Health Monitoring and Diagnosis. IEEE Access 2023, 11, 136456–136467. [Google Scholar] [CrossRef]
- Doulani, K.; Adhikari, M.; Hazra, A. Edge-Based Smart Health Monitoring Device for Infectious Disease Prediction Using Biosensors. IEEE Sens. J. 2023, 23, 20215–20222. [Google Scholar] [CrossRef]
- Ooko, S.O.; Mukanyiligira, D.; Munyampundu, J.P.; Nsenga, J. Edge AI-based Respiratory Disease Recognition from Exhaled Breath Signatures. In Proceedings of the 2021 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT), Amman, Jordan, 16–18 November 2021; pp. 89–94. [Google Scholar] [CrossRef]
- Ke, R.; Liu, C.; Yang, H.; Sun, W.; Wang, Y. Real-Time Traffic and Road Surveillance with Parallel Edge Intelligence. IEEE J. Radio Freq. Identif. 2022, 6, 693–696. [Google Scholar] [CrossRef]
- Chen, G.; Lin, Y.; Sun, M.; İk, T. Managing Edge AI Cameras for Traffic Monitoring. In Proceedings of the 2022 23rd Asia-Pacific Network Operations and Management Symposium (APNOMS), Takamatsu, Japan, 28–30 September 2022; pp. 1–4. [Google Scholar] [CrossRef]
- Aguilar-Rivera, A.; Vilalta, R.; Parada, R.; Perez, F.M.; Vázquez-Gallego, F. Evaluation of AI-based Smart-Sensor Deployment at the Extreme Edge of a Software-Defined Network. In Proceedings of the 2022 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Phoenix, AZ, USA, 14–16 November 2022; pp. 1–5. [Google Scholar] [CrossRef]
- Chen, C.; Wang, C.; Liu, B.; He, C.; Cong, L.; Wan, S. Edge Intelligence Empowered Vehicle Detection and Image Segmentation for Autonomous Vehicles. IEEE Trans. Intell. Transp. Syst. 2023, 24, 13023–13034. [Google Scholar] [CrossRef]
- Rahmati, M. Edge AI-Powered Real-Time Decision-Making for Autonomous Vehicles in Adverse Weather Conditions. arXiv 2025, arXiv:2503.09638. [Google Scholar] [CrossRef]
- Abid, Y.M.; Jalil, M.A.; Abd, S.K.; Sabah, H.A.; Bakri, B.I.; Maktoof, M.A.J.; Mohammed, A.; Adile, M.; Ahmed, M.S.M.; Mohammed, M.A.; et al. Analysis of the Energy Efficiency of Smart City Buildings Based on Deep Learning Algorithms Using AI. In Proceedings of the 2024 International Conference on Smart Systems for Electrical, Electronics, Communication and Computer Engineering (ICSSEECC), Coimbatore, India, 28–29 June 2024; pp. 672–677. [Google Scholar] [CrossRef]
- Essa, M.E.S.M.; El-shafeey, A.M.; Omar, A.H.; Fathi, A.E.; Maref, A.S.A.E.; Lotfy, J.V.W.; El-Sayed, M.S. Reliable Integration of Neural Network and Internet of Things for Forecasting, Controlling, and Monitoring of Experimental Building Management System. Sustainability 2023, 15, 2168. [Google Scholar] [CrossRef]
- Elsisi, M.; Tran, M.Q.; Mahmoud, K.; Lehtonen, M.; Darwish, M.M.F. Deep Learning-Based Industry 4.0 and Internet of Things towards Effective Energy Management for Smart Buildings. Sensors 2021, 21, 1038. [Google Scholar] [CrossRef]
- Tushar, W.; Wijerathne, N.; Li, W.T.; Yuen, C.; Poor, H.V.; Saha, T.K.; Wood, K.L. Internet of Things for Green Building Management. IEEE Signal Process. Mag. 2018, 35, 100–110. [Google Scholar] [CrossRef]
- Sayed, A.N.; Bensaali, F.; Himeur, Y.; Houchati, M. Edge-Based Real-Time Occupancy Detection System through a Non-Intrusive Sensing System. Energies 2023, 16, 2388. [Google Scholar] [CrossRef]
- Su, X.; Liu, X.; Motlagh, N.H.; Cao, J.; Su, P.; Pellikka, P.; Liu, Y.; Petäjä, T.; Kulmala, M.; Hui, P.; et al. Intelligent and Scalable Air Quality Monitoring with 5G Edge. IEEE Internet Comput. 2021, 25, 35–44. [Google Scholar] [CrossRef]
- Ramu, V.; Naresh, U.; Prakash, P.; Shyamala, G.; Saritha, P.; Atheeswaran, A. Real-Time Air Quality Monitoring with Edge AI and Machine Learning Algorithm. In Proceedings of the 2024 8th International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 6–8 November 2024; pp. 1204–1209. [Google Scholar] [CrossRef]
- Botticini, S.; Comini, E.; Iacono, S.D.; Flammini, A.; Gaioni, L.; Galliani, A.; Ghislotti, L.; Lazzaroni, P.; Re, V.; Sisinni, E.; et al. Index Air Quality Monitoring for Light and Active Mobility. Sensors 2024, 24, 3170. [Google Scholar] [CrossRef] [PubMed]
- Saleh, A.; Donta, P.K.; Morabito, R.; Motlagh, N.H.; Tarkoma, S.; Lovén, L. Follow-Me AI: Energy-Efficient User Interaction with Smart Environments. IEEE Pervasive Comput. 2025, 24, 32–42. [Google Scholar] [CrossRef]
- Fazio, M.; Paone, M.; Puliafito, A.; Villari, M. Heterogeneous Sensors Become Homogeneous Things in Smart Cities. In Proceedings of the 2012 Sixth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing, Palermo, Italy, 4–6 July 2012; Volume 1, pp. 775–780. [Google Scholar] [CrossRef]
- Rubí, J.N.S.; Gondim, P.R.L. IoMT Platform for Pervasive Healthcare Data Aggregation, Processing, and Sharing Based on OneM2M and OpenEHR. Sensors 2019, 19, 4283. [Google Scholar] [CrossRef]
- Kim, J.; Choi, S.C.; Ahn, I.Y.; Sung, N.M.; Yun, J. From WSN towards WoT: Open API Scheme Based on oneM2M Platforms. Sensors 2016, 16, 1645. [Google Scholar] [CrossRef]
- Alemayoh, T.T.; Lee, J.H.; Okamoto, S. New Sensor Data Structuring for Deeper Feature Extraction in Human Activity Recognition. Sensors 2021, 21, 2814. [Google Scholar] [CrossRef]
- Arunan, A.; Qin, Y.; Li, X.; Yuen, C. A Federated Learning-Based Industrial Health Prognostics for Heterogeneous Edge Devices Using Matched Feature Extraction. IEEE Trans. Autom. Sci. Eng. 2024, 21, 3065–3079. [Google Scholar] [CrossRef]
- Wang, X.; Xu, M.; Xiong, X.; Ning, C. Remote Sensing Scene Classification Using Heterogeneous Feature Extraction and Multi-Level Fusion. IEEE Access 2020, 8, 217628–217641. [Google Scholar] [CrossRef]
- Akanbi, A.; Masinde, M. A Distributed Stream Processing Middleware Framework for Real-Time Analysis of Heterogeneous Data on Big Data Platform: Case of Environmental Monitoring. Sensors 2020, 20, 3166. [Google Scholar] [CrossRef]
- Gomes, B.; Muniz, L.; Silva, F.d.S.e.; Santos, D.d.; Lopes, R.; Coutinho, L.; Carvalho, F.; Endler, M. A Middleware with Comprehensive Quality of Context Support for the Internet of Things Applications. Sensors 2017, 17, 2853. [Google Scholar] [CrossRef]
- Peng, C.; Goswami, P. Meaningful Integration of Data from Heterogeneous Health Services and Home Environment Based on Ontology. Sensors 2019, 19, 1747. [Google Scholar] [CrossRef]
- Ali, S.; Chong, I. Semantic Mediation Model to Promote Improved Data Sharing Using Representation Learning in Heterogeneous Healthcare Service Environments. Appl. Sci. 2019, 9, 4175. [Google Scholar] [CrossRef]
- Adel, E.; El-Sappagh, S.; Barakat, S.; Kwak, K.S.; Elmogy, M. Semantic Architecture for Interoperability in Distributed Healthcare Systems. IEEE Access 2022, 10, 126161–126179. [Google Scholar] [CrossRef]
- Ahmed, A.; Farhan, M.; Eesaar, H.; Chong, K.T.; Tayara, H. From Detection to Action: A Multimodal AI Framework for Traffic Incident Response. Drones 2024, 8, 741. [Google Scholar] [CrossRef]
- Alghieth, M. Sustain AI: A Multi-Modal Deep Learning Framework for Carbon Footprint Reduction in Industrial Manufacturing. Sustainability 2025, 17, 4134. [Google Scholar] [CrossRef]
- Reis, M.J.C.S. Internet of Things and Artificial Intelligence for Secure and Sustainable Green Mobility: A Multimodal Data Fusion Approach to Enhance Efficiency and Security. Multimodal Technol. Interact. 2025, 9, 39. [Google Scholar] [CrossRef]
- Ranatunga, S.; Ødegård, R.S.; Jetlund, K.; Onstein, E. Use of Semantic Web Technologies to Enhance the Integration and Interoperability of Environmental Geospatial Data: A Framework Based on Ontology-Based Data Access. ISPRS Int. J. Geo-Inf. 2025, 14, 52. [Google Scholar] [CrossRef]
- Valtolina, S.; Ferrari, L.; Mesiti, M. Ontology-Based Consistent Specification of Sensor Data Acquisition Plans in Cross-Domain IoT Platforms. IEEE Access 2019, 7, 176141–176169. [Google Scholar] [CrossRef]
- Fan, X.; Xiang, C.; Chen, C.; Yang, P.; Gong, L.; Song, X.; Nanda, P.; He, X. BuildSenSys: Reusing Building Sensing Data for Traffic Prediction with Cross-Domain Learning. IEEE Trans. Mob. Comput. 2019, 20, 2154–2171. [Google Scholar] [CrossRef]
- Wang, H.; Chen, Y.; Liu, B.; Mo, R.; Lin, W. Optimization of the Pedestrian and Vehicle Detection Model based on Cloud-Edge Collaboration. In Proceedings of the 2022 IEEE Smartworld, Ubiquitous Intelligence & Computing, Scalable Computing & Communications, Digital Twin, Privacy Computing, Metaverse, Autonomous & Trusted Vehicles (SmartWorld/UIC/ScalCom/DigitalTwin/PriComp/Meta), Haikou, China, 15–18 December 2022; pp. 2076–2083. [Google Scholar] [CrossRef]
- Zhang, J.; Huang, X.; Xu, J.; Wu, Y.; Ma, Q.; Miao, X.; Zhang, L.; Chen, P.; Yang, Z. Edge Assisted Real-time Instance Segmentation on Mobile Devices. In Proceedings of the 2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS), Bologna, Italy, 10–13 July 2022; pp. 537–547. [Google Scholar] [CrossRef]
- Han, B.; Dai, P.; Li, K.; Zhao, K.; Lei, X. SDPMP: Inference Acceleration of CNN Models in Heterogeneous Edge Environment. In Proceedings of the 2024 7th World Conference on Computing and Communication Technologies (WCCCT), Chengdu, China, 12–14 April 2024; pp. 194–198. [Google Scholar] [CrossRef]
- Zhang, W.; Guo, C.; Gao, Y.; Dong, W. Edge Cloud Collaborative Stream Computing for Real-Time Structural Health Monitoring. arXiv 2023, arXiv:2310.07130. [Google Scholar] [CrossRef]
- Chang, Z.; Liu, L.; Guo, X.; Sheng, Q. Dynamic Resource Allocation and Computation Offloading for IoT Fog Computing System. IEEE Trans. Ind. Inform. 2019, 17, 3348–3357. [Google Scholar] [CrossRef]
- Liu, C.; Liu, K.; Xu, X.; Ren, H.; Jin, F.; Guo, S. Real-time Task Offloading for Data and Computation Intensive Services in Vehicular Fog Computing Environments. In Proceedings of the 2020 16th International Conference on Mobility, Sensing and Networking (MSN), Tokyo, Japan, 17–19 December 2020; pp. 360–366. [Google Scholar] [CrossRef]
- Gao, X.; Huang, X.; Bian, S.; Shao, Z.; Yang, Y. PORA: Predictive Offloading and Resource Allocation in Dynamic Fog Computing Systems. IEEE Internet Things J. 2020, 7, 72–87. [Google Scholar] [CrossRef]
- Liu, G.; Wang, C.; Ma, X.; Yang, Y. Keep Your Data Locally: Federated-Learning-Based Data Privacy Preservation in Edge Computing. IEEE Netw. 2021, 35, 60–66. [Google Scholar] [CrossRef]
- Li, J.; Meng, Y.; Ma, L.; Du, S.; Zhu, H.; Pei, Q.; Shen, X. A Federated Learning Based Privacy-Preserving Smart Healthcare System. IEEE Trans. Ind. Inform. 2021, 18, 2021–2031. [Google Scholar] [CrossRef]
- Wang, R.; Lai, J.; Zhang, Z.; Li, X.; Vijayakumar, P.; Karuppiah, M. Privacy-Preserving Federated Learning for Internet of Medical Things Under Edge Computing. IEEE J. Biomed. Health Inform. 2023, 27, 854–865. [Google Scholar] [CrossRef]
- Stephanie, V.; Khalil, I.; Atiquzzaman, M.; Yi, X. Trustworthy Privacy-Preserving Hierarchical Ensemble and Federated Learning in Healthcare 4.0 With Blockchain. IEEE Trans. Ind. Inform. 2023, 19, 7936–7945. [Google Scholar] [CrossRef]
- Winderickx, J.; Singelée, D.; Mentens, N. HeComm: End-to-End Secured Communication in a Heterogeneous IoT Environment Via Fog Computing. In Proceedings of the 2018 15th IEEE Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 12–15 January 2018; pp. 1–6. [Google Scholar] [CrossRef]
- Swamy, B.V.; Gopi, S.; Sudhakar, K.; Girish, P.; Rani, M.J.; Upadhyay, S. Secure Vision: Enhancing Data Security in Wireless Sensor Networks through Image Processing. In Proceedings of the 2024 8th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Kirtipur, Nepal, 3–5 October 2024; pp. 512–519. [Google Scholar] [CrossRef]
- Machidon, A.; Pejović, V. Enabling resource-efficient edge intelligence with compressive sensing-based deep learning. In Proceedings of the 19th ACM International Conference on Computing Frontiers, Turin, Italy, 17–22 May 2022; pp. 141–149. [Google Scholar] [CrossRef]
- Wang, Q.; Ye, G.; Chen, Q.; Zhang, S.; Wang, F. A UAV perspective based lightweight target detection and tracking algorithm for intelligent transportation. Complex Intell. Syst. 2025, 11, 73. [Google Scholar] [CrossRef]
- Ghosh, S.; Layeghy, S.; De, S.; Chatterjee, S.; Portmann, M. Leveraging LSTM and Reinforcement Learning for Adaptive Sensing in CIoT Nodes. IEEE Trans. Consum. Electron. 2025, 71, 178–188. [Google Scholar] [CrossRef]
- Li, Y.; Nandakumar, R. WixUp: A Generic Data Augmentation Framework for Wireless Human Tracking. In Proceedings of the 23rd ACM Conference on Embedded Networked Sensor Systems, Irvine, CA, USA, 6–9 May 2025; pp. 449–462. [Google Scholar] [CrossRef]
- Orozco, F.; Gusmão, P.P.B.d.; Wen, H.; Wahlström, J.; Luo, M. Federated Learning for Traffic Flow Prediction with Synthetic Data Augmentation. arXiv 2024, arXiv:2412.08460. [Google Scholar] [CrossRef]
- Pal, D.; Mukhopadhyay, S.; Gupta, R. Ensembled Data Augmentation Model for Simplified Cardiac Arrhythmia Detection Under Limited Minority Class Data. IEEE Trans. Instrum. Meas. 2024, 73, 3460887. [Google Scholar] [CrossRef]
- Guo, J.; Lai, Y.; Zhang, J.; Zheng, J.; Fu, H.; Gan, L.; Hu, L.; Xu, G.; Che, X. C3DA: A Universal Domain Adaptation Method for Scene Classification from Remote Sensing Imagery. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1–5. [Google Scholar] [CrossRef]
- Zhang, Y.; Wang, X.; Yu, X.; Sun, Z.; Wang, K.; Wang, Y. Drawing Informative Gradients from Sources: A One-stage Transfer Learning Framework for Cross-city Spatiotemporal Forecasting. Proc. AAAI Conf. Artif. Intell. 2025, 39, 1147–1155. [Google Scholar] [CrossRef]
- Zhao, Y.; Li, S.; Liu, C.H.; Han, Y.; Shi, H.; Li, W. Domain Adaptive Remote Sensing Scene Recognition via Semantic Relationship Knowledge Transfer. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–13. [Google Scholar] [CrossRef]
- Wong, D.L.T.; Li, Y.; John, D.; Ho, W.K.; Heng, C.H. Low Complexity Binarized 2D-CNN Classifier for Wearable Edge AI Devices. IEEE Trans. Biomed. Circuits Syst. 2022, 16, 822–831. [Google Scholar] [CrossRef]
- Aarotale, P.N.; Hill, T.; Rattani, A. PatchBMI-Net: Lightweight Facial Patch-based Ensemble for BMI Prediction. In Proceedings of the 2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Istanbul, Turkiye, 5–8 December 2023; pp. 4022–4028. [Google Scholar] [CrossRef]
- Peng, P.; Jiang, K.; You, M.; Xie, J.; Zhou, H.; Xu, W.; Lu, J.; Li, X.; Xu, Y. Design of an Efficient CNN-Based Cough Detection System on Lightweight FPGA. IEEE Trans. Biomed. Circuits Syst. 2023, 17, 116–128. [Google Scholar] [CrossRef]
- Khan, M.A.; Menouar, H.; Hamila, R. LCDnet: A lightweight crowd density estimation model for real-time video surveillance. J. Real-Time Image Process. 2023, 20, 29. [Google Scholar] [CrossRef]
- Yen, M.H.; Lin, B.S.; Kuo, Y.L.; Lee, I.J.; Lin, B.S. Adaptive Indoor People-Counting System Based on Edge AI Computing. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 255–263. [Google Scholar] [CrossRef]
- Wu, G.; Zheng, W.S.; Lu, Y.; Tian, Q. PSLT: A Light-Weight Vision Transformer with Ladder Self-Attention and Progressive Shift. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 11120–11135. [Google Scholar] [CrossRef]
- Annane, B.; Lakehal, A.; Alti, A. Secured Forest Fire Prediction Using Blockchain and CNN Transformers. In Proceedings of the 2024 International Conference on Information and Communication Technologies for Disaster Management (ICT-DM), Setif, Algeria, 19–21 November 2024; pp. 1–7. [Google Scholar] [CrossRef]
- Ye, S.; Du, J.; Zeng, L.; Ou, W.; Chu, X.; Lu, Y.; Chen, X. Galaxy: A Resource-Efficient Collaborative Edge AI System for In-situ Transformer Inference. In Proceedings of the IEEE INFOCOM 2024—IEEE Conference on Computer Communications, Vancouver, BC, Canada, 20–23 May 2024; pp. 1001–1010. [Google Scholar] [CrossRef]
- Wahab, F.E.; Ye, Z.; Saleem, N.; Bourouis, S.; Hussain, A. Lightweight Adaptive Deep Learning for Efficient Real-Time Speech Enhancement on Edge Devices. IEEE Trans. Consum. Electron. 2025. [Google Scholar] [CrossRef]
- Liang, F.; Qian, C.; Yu, W.; Griffith, D.; Golmie, N. Survey of graph neural networks and applications. Wirel. Commun. Mob. Comput. 2022, 2022, 9261537. [Google Scholar] [CrossRef]
- Huang, X.; Huang, T.; Cheng, P.; Yuan, J.; Zhao, S.; Zhang, G. Optimizing Task Migration for Public and Private Services in Vehicular Edge Networks: A Dual-Layer Graph Neural Network Approach. IEEE Trans. Mob. Comput. 2025, 24, 13191–13208. [Google Scholar] [CrossRef]
- Daruvuri, R. Optimized Data Packet Routing in Vehicular Networks Using Edge Intelligence and GNNs. In Proceedings of the 2025 International Conference on Engineering, Technology & Management (ICETM), Oakdale, NY, USA, 13–14 May 2025; pp. 1–6. [Google Scholar] [CrossRef]
- Huang, Y.; Xu, T.; Cai, W.; Li, J. A Graph Neural Network Based Obstacle Detection for Radar Point-Cloud in Rail Transit. In Proceedings of the 2025 8th International Conference on Advanced Algorithms and Control Engineering (ICAACE), Shanghai, China, 21–23 March 2025; pp. 986–991. [Google Scholar] [CrossRef]
- Mitra, T.; Young, E.; Xiong, J.; Islam, S.; Zhou, S.; Ran, R.; Jin, Y.F.; Wen, W.; Ding, C.; Xie, M. EVE. In Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design, San Diego, CA, USA, 26–30 October 2022; pp. 1–9. [Google Scholar] [CrossRef]
- Cereda, E.; Crupi, L.; Risso, M.; Burrello, A.; Benini, L.; Giusti, A.; Pagliari, D.J.; Palossi, D. Deep Neural Network Architecture Search for Accurate Visual Pose Estimation aboard Nano-UAVs. arXiv 2023, arXiv:2303.01931. [Google Scholar] [CrossRef]
- Li, Y.; Wang, R.; Wu, H. Neural network quantization algorithms based on weights and activation values. Int. Conf. Algorithms Microchips Netw. Appl. 2022, 12176, 7–10. [Google Scholar] [CrossRef]
- Yang, Y. Quantization and Acceleration of YOLOv5 Vehicle Detection Based on GPU Chips. In Proceedings of the 2024 International Conference on Generative Artificial Intelligence and Information Security, Kuala Lumpur, Malaysia, 10–12 May 2024; pp. 425–429. [Google Scholar] [CrossRef]
- Shah, G.; Goud, A.; Momin, Z.; Mekie, J. OwlsEye: Real-Time Low-Light Video Instance Segmentation on Edge and Exploration of Fixed-Posit Quantization. In Proceedings of the 2025 38th International Conference on VLSI Design and 2024 23rd International Conference on Embedded Systems (VLSID), Bangalore, India, 4–8 January 2025; pp. 451–456. [Google Scholar] [CrossRef]
- Han, M. Design and Optimization of Lightweight Vehicle Real-Time Detection System Based on Edge Computing and Model Compression. In Proceedings of the 2025 International Conference on Algorithm, Artificial Intelligence and Computer Vision (AAICV), London, UK, 21–23 March 2025; pp. 309–313. [Google Scholar] [CrossRef]
- Li, X.; Panicker, R.C.; Cardiff, B.; John, D. Multistage Pruning of CNN Based ECG Classifiers for Edge Devices. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Virtual, 1–5 November 2021; pp. 1965–1968. [Google Scholar] [CrossRef]
- Huang, H.T.; Chiu, T.Y.; Lin, C.Y. A Light-Weight Defect Detection System for Edge Computing. In Proceedings of the 2022 IEEE International Conference on Consumer Electronics—Taiwan, Taipei, Taiwan, 6–8 July 2022; pp. 521–522. [Google Scholar] [CrossRef]
- Li, X.; Gong, X.; Wang, D.; Zhang, J.; Baker, T.; Zhou, J.; Lu, T. ABM-SpConv-SIMD: Accelerating Convolutional Neural Network Inference for Industrial IoT Applications on Edge Devices. IEEE Trans. Netw. Sci. Eng. 2023, 10, 3071–3085. [Google Scholar] [CrossRef]
- Putrada, A.G.; Abdurohman, M.; Perdana, D.; Nuha, H.H. NoCASC: A Novel Optimized Cost-Complexity Pruning for AdaBoost Model Compression on Edge Computing-Based Smart Lighting. In Proceedings of the 2024 IEEE International Conference on Communication, Networks and Satellite (COMNETSAT), Mataram, Indonesia, 28–30 November 2024; pp. 729–734. [Google Scholar] [CrossRef]
- Xing, J.; Fang, G.; Zhong, J.; Li, J. Application of Face Recognition Based on CNN in Fatigue Driving Detection. In Proceedings of the 2019 International Conference on Artificial Intelligence and Advanced Manufacturing, Dublin, Ireland, 17–19 October 2019; pp. 1–5. [Google Scholar] [CrossRef]
- Xu, P.; Wang, K.; Hassan, M.M.; Chen, C.M.; Lin, W.; Hassan, M.R.; Fortino, G. Adversarial Robustness in Graph-Based Neural Architecture Search for Edge AI Transportation Systems. IEEE Trans. Intell. Transp. Syst. 2022, 24, 8465–8474. [Google Scholar] [CrossRef]
- Zhang, T.; Cheng, D.; He, Y.; Chen, Z.; Dai, X.; Xiong, L.; Yan, F.; Li, H.; Chen, Y.; Wen, W. NASRec: Weight Sharing Neural Architecture Search for Recommender Systems. In Proceedings of the ACM Web Conference 2023, Austin, TX, USA, 30 April–4 May 2023; pp. 1199–1207. [Google Scholar] [CrossRef]
- Chen, Z.; Tian, P.; Liao, W.; Chen, X.; Xu, G.; Yu, W. Resource-Aware Knowledge Distillation for Federated Learning. IEEE Trans. Emerg. Top. Comput. 2023, 11, 706–719. [Google Scholar] [CrossRef]
- Wang, C.; Yang, G.; Papanastasiou, G.; Zhang, H.; Rodrigues, J.J.P.C.; Albuquerque, V.H.C.d. Industrial Cyber-Physical Systems-Based Cloud IoT Edge for Federated Heterogeneous Distillation. IEEE Trans. Ind. Inform. 2020, 17, 5511–5521. [Google Scholar] [CrossRef]
- Mudavat, S.M.; Mitra, A.; Mohanty, S.P.; Kougianos, E. LiteViT: Leveraging the Power of Transformers for Edge AI in Crop Disease Classification. In Proceedings of the 2025 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), Kalamata, Greece, 6–9 July 2025; Volume 1, pp. 1–6. [Google Scholar] [CrossRef]
- Peng, L.; Xu, L.; Wang, X.; Wu, L.; Liu, J.; Zeng, W.; Piran, M.J. An Autonomous AI Framework for Knee Osteoarthritis Diagnosis via Semi-Supervised Learning and Dual Knowledge Distillation. IEEE J. Biomed. Health Inform. 2025, 1–14. [Google Scholar] [CrossRef]
- Cao, Y.; Ni, Q.; Jia, M.; Zhao, X.; Yan, X. Online Knowledge Distillation for Machine Health Prognosis Considering Edge Deployment. IEEE Internet Things J. 2024, 11, 27828–27839. [Google Scholar] [CrossRef]
- Zhou, M.; Zhou, B.; Wang, H.; Dong, F.; Zhao, W. Dynamic Path Based DNN Synergistic Inference Acceleration in Edge Computing Environment. In Proceedings of the 2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS), Beijing, China, 14–16 December 2021; pp. 567–574. [Google Scholar] [CrossRef]
- Xia, W.; Yin, H.; Dai, X.; Jha, N.K. Fully Dynamic Inference with Deep Neural Networks. IEEE Trans. Emerg. Top. Comput. 2022, 10, 962–972. [Google Scholar] [CrossRef]
- Rashid, N.; Demirel, B.U.; Odema, M.; Faruque, M.A.A. Template Matching Based Early Exit CNN for Energy-efficient Myocardial Infarction Detection on Low-power Wearable Devices. ACM Interact. Mobile Wearable Ubiquitous Technol. 2022, 6, 1–22. [Google Scholar] [CrossRef]
- Sponner, M.; Servadei, L.; Waschneck, B.; Wille, R.; Kumar, A. Temporal Decisions: Leveraging Temporal Correlation for Efficient Decisions in Early Exit Neural Networks. arXiv 2024, arXiv:2403.07958. [Google Scholar] [CrossRef]
- Ghanathe, N.P.; Wilton, S. T-recx: Tiny-resource efficient convolutional neural networks with early-exit. In Proceedings of the 20th ACM International Conference on Computing Frontiers, Bologna, Italy, 9–11 May 2023; pp. 123–133. [Google Scholar] [CrossRef]
- She, Y.; Shi, T.; Wang, J.; Liu, B. Dynamic Batching and Early-Exiting for Accurate and Timely Edge Inference. In Proceedings of the 2024 IEEE 99th Vehicular Technology Conference (VTC2024-Spring), Singapore, 24–27 June 2024; pp. 1–6. [Google Scholar] [CrossRef]
- Zhou, T.; Pillai, P.J.; Yalla, V.G. Hierarchical context-aware hand detection algorithm for naturalistic driving. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 1291–1297. [Google Scholar] [CrossRef]
- Xiong, H.; Zhang, R.; Zhang, C.; He, Z.; Wu, H.; Wang, H. DuTongChuan: Context-aware Translation Model for Simultaneous Interpreting. arXiv 2019, arXiv:1907.12984. [Google Scholar] [CrossRef]
- Chindiyababy, U.; Kakkar, P.; Vedula, J.; Yunus, J.; Umidbek, A.; Sharma, S. Deep Learning-Based Facial Emotion Recognition for Advanced Human-Computer Interaction. In Proceedings of the 2025 First International Conference on Advances in Computer Science, Electrical, Electronics, and Communication Technologies (CE2CT), Bhimtal, India, 21–22 February 2025; pp. 1247–1252. [Google Scholar] [CrossRef]
- Singh, N.A.; Roy, A.; Misra, S. Edge Intelligence for Rendering Green Camera-Network-as-a-Service. IEEE Trans. Green Commun. Netw. 2022, 6, 365–375. [Google Scholar] [CrossRef]
- Yao, L.; Xu, X.; Bilal, M.; Wang, H. Dynamic Edge Computation Offloading for Internet of Vehicles With Deep Reinforcement Learning. IEEE Trans. Intell. Transp. Syst. 2023, 24, 12991–12999. [Google Scholar] [CrossRef]
- Singh, H.; Singh, M.B.; Kagale, P.P.; Pratap, A. Cost-Aware Hierarchical Federated Learning for Smart Healthcare. In Proceedings of the 2024 16th International Conference on COMmunication Systems & NETworkS (COMSNETS), Bengaluru, India, 3–7 January 2024; pp. 141–146. [Google Scholar] [CrossRef]
- Wen, J.; Zhang, J.; Zhang, Z.; Cui, Z.; Cai, X.; Chen, J. Resource-aware multi-criteria vehicle participation for federated learning in Internet of vehicles. Inf. Sci. 2024, 664, 120344. [Google Scholar] [CrossRef]
- Bui, C.; Patel, N.; Patel, D.; Rogers, S.; Sawant, A.; Manwatkar, R.; Tabkhi, H. A Hardware/Software Co-Design Approach for Real-Time Object Detection and Tracking on Embedded Devices. In Proceedings of the SoutheastCon 2018, St. Petersburg, FL, USA, 19–22 April 2018; pp. 1–7. [Google Scholar] [CrossRef]
- Ponzina, F.; Machetti, S.; Rios, M.; Denkinger, B.W.; Levisse, A.; Ansaloni, G.; Peón-Quirós, M.; Atienza, D. A Hardware/Software Co-Design Vision for Deep Learning at the Edge. IEEE Micro 2022, 42, 48–54. [Google Scholar] [CrossRef]
- Hur, S.; Na, S.; Kwon, D.; Kim, J.; Boutros, A.; Nurvitadhi, E.; Kim, J. A Fast and Flexible FPGA-based Accelerator for Natural Language Processing Neural Networks. ACM Trans. Archit. Code Optim. 2023, 20, 1–24. [Google Scholar] [CrossRef]
- Gokarn, I.; Sabbella, H.; Hu, Y.; Abdelzaher, T.; Misra, A. MOSAIC: Spatially-Multiplexed Edge AI Optimization over Multiple Concurrent Video Sensing Streams. In Proceedings of the 14th Conference on ACM Multimedia Systems, Vancouver, BC, Canada, 7–10 June 2023; pp. 278–288. [Google Scholar] [CrossRef]
- Zhang, Z.; Li, H.; Zhao, Y.; Lin, C.; Liu, J. BCEdge: SLO-Aware DNN Inference Services with Adaptive Batching on Edge Platforms. arXiv 2023, arXiv:2305.01519. [Google Scholar] [CrossRef]
- Gudepu, V.; Pappu, B.; Javvadi, T.; Bassoli, R.; Fitzek, F.H.; Valcarenghi, L.; Devi, D.V.N.; Kondepu, K. Edge Computing in Micro Data Centers for Firefighting in Residential Areas of Future Smart Cities. In Proceedings of the 2022 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), Maldives, Maldives, 16–18 November 2022; pp. 1–6. [Google Scholar] [CrossRef]
- Szántó, P.; Kiss, T.; Sipos, K.J. FPGA accelerated DeepSORT object tracking. In Proceedings of the 2023 24th International Carpathian Control Conference (ICCC), Miskolc-Szilvásvárad, Hungary, 12–14 June 2023; pp. 423–428. [Google Scholar] [CrossRef]
- Pérez, S.; Pérez, J.; Arroba, P.; Blanco, R.; Ayala, J.L.; Moya, J.M. Predictive GPU-based ADAS Management in Energy-Conscious Smart Cities. In Proceedings of the 2019 IEEE International Smart Cities Conference (ISC2), Casablanca, Morocco, 14–17 October 2019; pp. 349–354. [Google Scholar] [CrossRef]
- Manda, J.K. Green Data Center Innovations for Telecom: Exploring Innovative Technologies and Designs for Energy-Efficient and Sustainable Data Centers. SSRN Electron. J. 2024. [Google Scholar] [CrossRef]
- Minazzo, E.M.; Rouaze, G.; Marcinichen, J.B.; Thome, J.R.; Buining, F. Thermal-Hydraulic Characterization of a Thermosyphon Cooling System for Highly Compact Edge MicroDataCenter. J. Electron. Packag. 2024, 146, 041105. [Google Scholar] [CrossRef]
- Zhou, Z. GreenEdge: Greening Edge Datacenters with Energy-Harvesting IoT Devices. In Proceedings of the 2019 IEEE 27th International Conference on Network Protocols (ICNP), Chicago, IL, USA, 8–10 October 2019; pp. 1–6. [Google Scholar] [CrossRef]
- Aujla, G.S.; Kumar, N.; Garg, S.; Kaur, K.; Ranjan, R. EDCSuS: Sustainable Edge Data Centers as a Service in SDN-Enabled Vehicular Environment. IEEE Trans. Sustain. Comput. 2022, 7, 263–276. [Google Scholar] [CrossRef]
- Lin, W.; Adetomi, A.; Arslan, T. Low-Power Ultra-Small Edge AI Accelerators for Image Recognition with Convolution Neural Networks: Analysis and Future Directions. Electronics 2021, 10, 2048. [Google Scholar] [CrossRef]
- Zhai, J.; Li, B.; Lv, S.; Zhou, Q. FPGA-Based Vehicle Detection and Tracking Accelerator. Sensors 2023, 23, 2208. [Google Scholar] [CrossRef]
- Marino, K.; Zhang, P.; Prasanna, V.K. ME-ViT: A Single-Load Memory-Efficient FPGA Accelerator for Vision Transformers. In Proceedings of the 2023 IEEE 30th International Conference on High Performance Computing, Data, and Analytics (HiPC), Goa, India, 18–21 December 2023; pp. 213–223. [Google Scholar] [CrossRef]
- Huang, M.; Shen, A.; Li, K.; Peng, H.; Li, B.; Su, Y.; Yu, H. EdgeLLM: A Highly Efficient CPU-FPGA Heterogeneous Edge Accelerator for Large Language Models. IEEE Trans. Circuits Syst. I Regul. Pap. 2025, 72, 3352–3365. [Google Scholar] [CrossRef]
- Roa-Tort, K.; Fabila-Bustos, D.A.; Hernández-Chávez, M.; León-Martínez, D.; Apolonio-Vera, A.; Ortega-Gutiérrez, E.B.; Cadena-Martínez, L.; Hernández-Lozano, C.D.; Torres-Pérez, C.; Cano-Ibarra, D.A.; et al. FPGA–STM32-Embedded Vision and Control Platform for ADAS Development on a 1:5 Scale Vehicle. Vehicles 2025, 7, 84. [Google Scholar] [CrossRef]
- Martin, J.; Cantero, D.; González, M.; Cabrera, A.; Larrañaga, M.; Maltezos, E.; Lioupis, P.; Kosyvas, D.; Karagiannidis, L.; Ouzounoglou, E.; et al. Embedded Vision Intelligence for the Safety of Smart Cities. J. Imaging 2022, 8, 326. [Google Scholar] [CrossRef] [PubMed]
- Chen, J.; Jun, S.W.; Mundra, A.; Ta, J. Xyloni: Very Low Power Neural Network Accelerator for Intermittent Remote Visual Detection of Wildfire and Beyond. In Proceedings of the 29th ACM/IEEE International Symposium on Low Power Electronics and Design, Newport Beach, CA, USA, 5–7 August 2024; pp. 1–6. [Google Scholar] [CrossRef]
- Liu, Z.; Luo, Y.; Duan, S.; Zhou, T.; Xu, X. MirrorNet: A TEE-Friendly Framework for Secure On-Device DNN Inference. In Proceedings of the 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD), San Francisco, CA, USA, 28 October–2 November 2023; pp. 1–9. [Google Scholar] [CrossRef]
- Ding, R.; Xu, T.; Ding, A.A.; Fei, Y. Graph in the Vault: Protecting Edge GNN Inference with Trusted Execution Environment. arXiv 2025, arXiv:2502.15012. [Google Scholar] [CrossRef]
- Sun, T.; Jiang, B.; Lin, H.; Li, B.; Teng, Y.; Gao, Y.; Dong, W. TensorShield: Safeguarding On-Device Inference by Shielding Critical DNN Tensors with TEE. arXiv 2025, arXiv:2505.22735. [Google Scholar] [CrossRef]
- Adkins, J.; Ghena, B.; Jackson, N.; Pannuto, P.; Rohrer, S.; Campbell, B.; Dutta, P. The Signpost Platform for City-Scale Sensing. arXiv 2018, arXiv:1802.07805. [Google Scholar] [CrossRef]
- Kamal, M.; Atif, M.; Mujahid, H.; Shanableh, T.; Al-Ali, A.R.; Nabulsi, A.A. IoT Based Smart City Bus Stops. Future Internet 2019, 11, 227. [Google Scholar] [CrossRef]
- Ke, R.; Zhuang, Y.; Pu, Z.; Wang, Y. A Smart, Efficient, and Reliable Parking Surveillance System with Edge Artificial Intelligence on IoT Devices. arXiv 2020, arXiv:2001.00269. [Google Scholar] [CrossRef]
- Vadi, S. Design and Implementation of an Off-Grid Smart Street Lighting System Using LoRaWAN and Hybrid Renewable Energy for Energy-Efficient Urban Infrastructure. Sensors 2025, 25, 5579. [Google Scholar] [CrossRef]
- Biundini, I.Z.; Pinto, M.F.; Honório, L.M.; Capretz, M.A.M.; Timotheo, A.O.; Dantas, M.A.R.; Villela, P.C. LoRaCELL-Driven IoT Smart Lighting Systems: Sustainability in Urban Infrastructure. Sensors 2024, 24, 574. [Google Scholar] [CrossRef]
- Alsubhi, A.; Babatunde, S.; Tobias, N.; Sorber, J. Stash: Flexible Energy Storage for Intermittent Sensors. ACM Trans. Embed. Comput. Syst. 2024, 23, 1–23. [Google Scholar] [CrossRef]
- Igartua, M.A.; Llopis, L.J.d.l.C.; Begin, T.; Santana, J.R.; Sotres, P.; Pérez, J.; Sánchez, L.; Lanza, J.; Muñoz, L. LoRaWAN-based Smart Parking Service: Deployment and Performance Evaluation. In Proceedings of the 19th ACM International Symposium on Performance Evaluation of Wireless Ad Hoc, Sensor, & Ubiquitous Networks, Montreal, QC, Canada, 30 October 2022; pp. 107–114. [Google Scholar] [CrossRef]
- Castellanos, G.; Beelde, B.D.; Plets, D.; Martens, L.; Joseph, W.; Deruyck, M. Evaluating 60 GHz FWA Deployments for Urban and Rural Environments in Belgium. Sensors 2023, 23, 1056. [Google Scholar] [CrossRef]
- Rafi, M.S.M.; Behjati, M.; Rafsanjani, A.S. Reliable and Cost-Efficient IoT Connectivity for Smart Agriculture: A Comparative Study of LPWAN, 5G, and Hybrid Connectivity Models. arXiv 2025. [Google Scholar] [CrossRef]
- Narayanan, V.; Sahu, R.; Sun, J.; Duwe, H. BOBBER: A Prototyping Platform for Batteryless Intermittent Accelerators. In Proceedings of the 2023 ACM/SIGDA International Symposium on Field Programmable Gate Arrays, Monterey, CA, USA, 12–14 February 2023; pp. 221–228. [Google Scholar] [CrossRef]
- Song, W.; Kaxiras, S.; Voigt, T.; Yao, Y.; Mottola, L. TaDA: Task Decoupling Architecture for the Battery-less Internet of Things. In Proceedings of the 22nd ACM Conference on Embedded Networked Sensor Systems, Hangzhou, China, 4–7 November 2024; pp. 409–421. [Google Scholar] [CrossRef]
- Kelly, J.; Sukhatme, G.S. Visual-inertial sensor fusion: Localization, mapping and sensor-to-sensor self-calibration. Int. J. Robot. Res. 2011, 30, 56–79. [Google Scholar] [CrossRef]
- Gao, J.; Li, P.; Chen, Z.; Zhang, J. A survey on deep learning for multimodal data fusion. Neural Comput. 2020, 32, 829–864. [Google Scholar] [CrossRef]
- Furgale, P.; Rehder, J.; Siegwart, R. Unified temporal and spatial calibration for multi-sensor systems. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 3–7 November 2013; pp. 1280–1286. [Google Scholar] [CrossRef]
- Macki, J.W.; Nistri, P.; Zecca, P. Mathematical models for hysteresis. SIAM Rev. 1993, 35, 94–123. [Google Scholar] [CrossRef]
- Isidori, A. Nonlinear Control Systems: An Introduction; Springer: Berlin/Heidelberg, Germany, 1985. [Google Scholar] [CrossRef]
- Fu, Y.; Kottenstette, N.; Lu, C.; Koutsoukos, X.D. Feedback thermal control of real-time systems on multicore processors. In Proceedings of the Tenth ACM International Conference on Embedded Software (EMSOFT ’12), Tampere, Finland, 7–12 October 2012; pp. 113–122. [Google Scholar] [CrossRef]
- Liu, Z.; Chen, X.; Wu, H.; Wang, Z.; Chen, X.; Niyato, D.; Huang, K. Integrated sensing and edge AI: Realizing intelligent perception in 6G. IEEE Commun. Surv. Tutor. 2025. Early Access. [Google Scholar] [CrossRef]
- Kitchin, R. The ethics of smart cities and urban science. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2016, 374, 20160115. [Google Scholar] [CrossRef]
- Van Zoonen, L. Privacy concerns in smart cities. Gov. Inf. Q. 2016, 33, 472–480. [Google Scholar] [CrossRef]
- Crawford, K.; Calo, R. There is a blind spot in AI research. Nature 2016, 538, 311–313. [Google Scholar] [CrossRef] [PubMed]
- Janssen, M.; Kuk, G. The challenges and limits of big data algorithms in technocratic governance. Gov. Inf. Q. 2016, 33, 371–377. [Google Scholar] [CrossRef]
- Tabassi, E. Artificial Intelligence Risk Management Framework (AI RMF 1.0); U.S. Department of Commerce: Washington, DC, USA, 2023. [Google Scholar] [CrossRef]
- Smuha, N.A. The EU approach to ethics guidelines for trustworthy artificial intelligence. Comput. Law Rev. Int. 2019, 20, 97–106. [Google Scholar] [CrossRef]
- Sookhak, M.; Tang, H.; He, Y.; Yu, F.R. Security and privacy of smart cities: A survey, research issues and challenges. IEEE Commun. Surv. Tutor. 2018, 21, 1718–1743. [Google Scholar] [CrossRef]
- CISA. Cybersecurity Best Practices for Smart Cities. White Paper, Cybersecurity and Infrastructure Security Agency. 2023. Available online: https://www.cisa.gov/sites/default/files/2023-04/cybersecurity-best-practices-for-smart-cities_508.pdf (accessed on 11 December 2025).
- Guo, Y.; Ji, T.; Wang, Q.; Yu, L.; Li, P. Quantized adversarial training: An iterative quantized local search approach. In Proceedings of the 2019 IEEE International Conference on Data Mining (ICDM), Beijing, China, 8–11 November 2019; pp. 1066–1071. [Google Scholar] [CrossRef]
- Guo, Y.; Wang, Q.; Ji, T.; Wang, X.; Li, P. Resisting distributed backdoor attacks in federated learning: A dynamic norm clipping approach. In Proceedings of the 2021 IEEE International Conference on Big Data (Big Data), Orlando, FL, USA, 15–18 December 2021; pp. 1172–1182. [Google Scholar] [CrossRef]
- Alharbi, S.; Guo, Y.; Yu, W. Collusive backdoor attacks in federated learning frameworks for IoT systems. IEEE Internet Things J. 2024, 11, 19694–19707. [Google Scholar] [CrossRef]
- Orchi, H.; Diallo, A.B.; Elbiaze, H.; Sabir, E.; Sadik, M. A Contemporary Survey on Multisource Information Fusion for Smart Sustainable Cities: Emerging Trends and Persistent Challenges. Inf. Fusion 2025, 114, 102667. [Google Scholar] [CrossRef]
- Bringmann, O.; Ecker, W.; Feldner, I.; Frischknecht, A.; Gerum, C.; Hämäläinen, T.; Hanif, M.A.; Klaiber, M.J.; Mueller-Gritschneder, D.; Bernardo, P.P.; et al. Automated HW/SW co-design for edge AI: State, challenges and steps ahead. In Proceedings of the 2021 International Conference on Hardware/Software Codesign and System Synthesis, Virtual, 10–13 October 2021; pp. 11–20. [Google Scholar] [CrossRef]
- Habibzadeh, H.; Soyata, T.; Kantarci, B.; Boukerche, A.; Kaptan, C. Sensing, communication and security planes: A new challenge for a smart city system design. Comput. Netw. 2018, 144, 163–200. [Google Scholar] [CrossRef]
- Bibri, S.E.; Huang, J. AI and AI-Powered Digital Twins for Smart, Green, and Zero-Energy Buildings: A Systematic Review of Leading-Edge Solutions for Advancing Environmental Sustainability Goals. Environ. Sci. Ecotechnol. 2025, 28, 100628. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).