Systematic Review

A Meta-Analysis of Artificial Intelligence in the Built Environment: High-Efficacy Silos and Fragmented Ecosystems

School of Sustainable Engineering and the Built Environment, Ira A. Fulton Schools of Engineering, Arizona State University, 660 S. College Ave., P.O. Box 873005, Tempe, AZ 85287-3005, USA
* Author to whom correspondence should be addressed.
Smart Cities 2025, 8(5), 174; https://doi.org/10.3390/smartcities8050174
Submission received: 7 August 2025 / Revised: 8 October 2025 / Accepted: 9 October 2025 / Published: 15 October 2025


Highlights

What are the main findings?
  • AI/ML/DL/IoT applications demonstrate substantial performance improvements (15–40%) within specific built environment domains, with a meta-analysis of 71 studies revealing consistent efficacy across energy, water, transportation, construction, and waste management systems.
  • Despite technological success, current implementations remain predominantly fragmented, with 91.5% of applications operating as isolated “silos” lacking cross-domain integration (Levels 0 and 1), and only 1.4% achieving real-time integration.
What is the implication of the main finding?
  • The proven efficacy of AI-driven solutions within domains provides a strong foundation for scaling smart city implementations, but the lack of integration prevents realization of systemic benefits and synergies.
  • Achieving truly connected, sustainable cities demands a paradigm shift from siloed applications to integrated frameworks that strategically overlay AI-driven intelligence onto existing infrastructure, supported by new governance models and ethical considerations.

Abstract

Cities face mounting pressures to deliver reliable, low-carbon services amid rapid urbanization and budget constraints. Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), and the Internet of Things (IoT) are widely promoted to automate operations and strengthen decision-support across the built environment; however, it remains unclear whether these interventions are both effective and systemically integrated across domains. We conducted a systematic review and meta-analysis (January 2015–July 2025) of empirical AI/ML/DL/IoT interventions in urban infrastructure, aligned with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Searches across five open-access indices, the Multidisciplinary Digital Publishing Institute (MDPI), the Directory of Open Access Journals (DOAJ), Connecting Repositories (CORE), the Bielefeld Academic Search Engine (BASE), and the Open Access Infrastructure for Research in Europe (OpenAIRE), returned 7432 records; after screening, 71 studies met the inclusion criteria for quantitative synthesis. A random-effects model shows a large, pooled effect (Hedges’ g = 0.92; 95% CI: 0.78–1.06; p < 0.001) for within-domain performance/sustainability outcomes. Yet 91.5% of implementations operate at integration Levels 0–1 (isolated or minimal data sharing), and only 1.4% achieve real-time multi-domain integration (Level 3). Publication bias is likely (Egger’s test p = 0.03); a conservative bias-adjusted estimate suggests a still-positive effect of g ≈ 0.68–0.70. Findings indicate a dual reality: high efficacy in silos but pervasive fragmentation that prevents cross-domain synergies. We outline actions: mandating open standards and APIs, establishing city-level data governance, funding Level-2/3 integration pilots, and adopting cross-domain evaluation metrics to translate local gains into system-wide value. Overall certainty of evidence is rated Moderate based on the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach due to heterogeneity and small-study effects, offset by the magnitude and consistency of benefits.

1. Introduction

1.1. The Smart City Evolution: From Technological Novelty to the Search for Systemic Impact

The 21st century is defined by an unprecedented wave of urbanization. With projections indicating that 68% of the global population will reside in urban areas by 2050, the strain on critical infrastructure systems is intensifying [1,2]. This immense demographic shift, while a catalyst for economic growth and innovation, places significant demands on foundational urban services, including power, water, transportation, communications, and waste management systems [3]. In response to these pressures, the “Smart City” paradigm emerged, championing the integration of Information and Communication Technologies (ICT) to enhance urban efficiency, sustainability, and overall quality of life [4]. Early smart city initiatives often manifested as discrete, technologically driven solutions to narrow problems. Examples include intelligent street lighting designed for localized energy savings, sensor-equipped waste bins to optimize collection routes, or isolated traffic monitoring systems targeting congestion in a specific corridor [5]. While innovative, these applications frequently resulted in a collection of fragmented, low-impact solutions that failed to contribute to broader, systemic urban improvements or leverage the inherently interconnected nature of city infrastructure [6]. The discourse surrounding smart cities has since matured, evolving beyond a purely technocratic focus. There is now a pressing imperative to transition from these isolated attempts toward creating intelligent urban ecosystems that deliver a sustainable and human-centric built environment for all stakeholders, including the civil, sustainable, and construction engineers who design, build, and maintain the complex fabric of the built environment.

1.2. The Untapped Potential: AI, ML, DL, and IoT for Overlaying Intelligence on Existing Built Environment Infrastructure

The synergistic convergence of the Internet of Things (IoT), Machine Learning (ML), Deep Learning (DL), and Artificial Intelligence (AI) presents a transformative opportunity to realize a more profound and integrated vision for smart cities. IoT facilitates the collection of vast, real-time data streams from diverse urban systems, including operational data from buildings, energy grids, water networks, transportation arteries, and active construction sites. ML and DL algorithms can subsequently identify intricate patterns, predict future states, and reveal hidden interdependencies within these complex datasets. Building upon this foundation, AI enables intelligent decision-making, automated control, and adaptive management strategies to optimize system-wide performance, enhance environmental sustainability, and improve the delivery of urban services. A cornerstone of this untapped potential lies in the strategic overlay of AI-driven intelligence onto existing urban infrastructure. Rather than focusing predominantly on the deployment of entirely new smart hardware, a more sustainable and scalable opportunity exists in leveraging the data streams and control points already embedded within established city assets. For instance, Building Information Models (BIMs) integrated with existing Building Management System (BMS) data and IoT sensor overlays can be used by AI to dynamically optimize energy consumption and indoor environmental quality. AI can analyze data from existing traffic sensors, utility Supervisory Control and Data Acquisition (SCADA) systems, and construction site logistics platforms to co-manage urban freight and construction traffic, minimizing disruption and emissions. Similarly, ML algorithms can predict maintenance needs for aging water pipelines or power distribution networks based on operational data, enabling proactive and cost-effective asset management [7].

1.3. The Central Research Problem: The Hypothesis of Pervasive Fragmentation

Despite clear technological potential and a pressing need for integrated solutions, a central hypothesis guiding this research is that current smart city implementations within the built environment remain largely fragmented. Smart solutions such as intelligent lighting, building energy dashboards, or traffic signal optimization often operate as isolated systems. Applications are typically developed and deployed within specific domains, with limited interoperability, negligible data sharing across systems, and no co-optimization between them. These silos, while often effective for narrowly defined tasks, fail to capture systemic efficiencies and can even have unintended negative consequences. For example, optimizing traffic flow in one sector might inadvertently increase energy demand or worsen air quality in adjacent areas if not coordinated with grid management and environmental monitoring systems [8]. This prevailing fragmentation is a significant barrier to the holistic vision of a connected, sustainable, circular, and human-centric smart city. The stakes are high: fragmentation represents not just a missed opportunity to capture technical efficiencies, but a failure to achieve the human-centric goals of the modern smart city paradigm.

1.4. Aim, Objectives, and Research Questions

While the fragmentation of smart city applications is often discussed anecdotally, a comprehensive quantitative synthesis of this phenomenon is lacking. To address this critical gap, the aim of this paper is to systematically map and quantitatively assess the “As-Is” state of AI, ML, DL, and IoT applications across key domains of the built environment.
The specific objectives are: (1) to systematically identify, categorize, and characterize empirical studies detailing AI/ML/DL/IoT implementations in core built environment domains; (2) to conduct a meta-analysis of reported performance and sustainability gains to determine their quantifiable efficacy; (3) to develop and apply a robust coding framework to quantitatively assess the degree to which applications leverage existing infrastructure and implement cross-domain integration; and (4) to analyze historical trends (2015–2025) in technology use, problem complexity, and systemic integration.
This study is guided by the following key research questions (RQs):
  • RQ1: What is the quantifiable efficacy of current AI/ML/DL/IoT applications in improving domain-specific performance and sustainability metrics within various sectors of the built environment?
  • RQ2: What is the distribution and historical trend of these smart applications across different built environment domains, particularly concerning their reliance on new versus existing infrastructure?
  • RQ3: To what measurable extent do current AI/ML/DL/IoT applications in the built environment demonstrate cross-domain integration rather than operating as fragmented, standalone solutions?

2. Background and Literature Review

2.1. Defining the “True Connected City”: Key Attributes for the Built Environment

To evaluate the current state of smart cities, it is essential to first establish a normative benchmark. The vision of a “True Connected City” transcends the mere deployment of technology; it is defined by a set of core attributes that collectively aim to create urban spaces that are efficient, sustainable, resilient, circular, and fundamentally human-centric. Drawing from the evolution of smart city discourse, these key attributes are defined as follows:
  • Human-Centricity: The primary goal is enhancing the quality of life for all urban inhabitants and for the professionals who build and maintain the city. Technology is a means to this end, not the end itself.
  • Sustainability: Urban systems are designed to minimize their environmental footprint through optimized resource consumption, reduced emissions, and the integration of renewable energy sources.
  • Circularity: Embracing circular economy principles, moving from linear “take-make-waste” models to closed-loop systems where resources and materials are reused, recycled, and regenerated.
  • Robustness and Resilience: Systems are designed to withstand, adapt to, and recover from shocks and stresses, such as extreme weather events, infrastructure failures, or public health crises.
  • Intelligent Infrastructure Leverage: The ability to strategically overlay intelligence upon existing infrastructure assets rather than depending solely on expensive and disruptive new deployments.
  • Comprehensive Data Integration: The technical and governance backbone that enables the federation and harmonization of heterogeneous data across different urban domains, forming the foundation for systemic intelligence.

2.2. Review of AI/ML/DL/IoT Application Archetypes in Key Built Environment Domains

Current smart city applications often manifest as standalone systems that address domain-specific challenges, demonstrating localized successes but rarely achieving the cross-domain synergy envisioned by the “True Connected City” concept [6]. A brief review of common application archetypes across core built environment domains is as follows:
Buildings and Energy Systems: Applications in this area predominantly focus on optimizing energy distribution and consumption through AI-driven load forecasting, predictive maintenance for Heating, Ventilation, and Air-Conditioning (HVAC) systems, and smart meter analytics. However, these systems often optimize building operations without full integration with other urban systems like transportation networks or grid-level demand response programs, limiting their systemic impact [9].
Transportation Networks: Smart transportation utilizes AI and IoT for adaptive traffic control, intelligent parking solutions, and real-time public transit tracking. While these innovations can improve localized traffic flow and reduce congestion, such systems often operate independently and are rarely integrated with construction logistics planning, urban air quality monitoring, or public event management systems [10].
Water Management: The use of IoT sensors and ML algorithms enables real-time water quality monitoring, leak detection in distribution networks, and prediction of consumption patterns. These systems, however, seldom integrate with broader urban planning models, land use data, or building-level consumption data for holistic, city-wide demand management [11].
Construction Operations: The construction industry is increasingly adopting IoT for equipment tracking and safety monitoring, drones for site surveying, and ML for project schedule optimization. These applications are powerful but typically remain confined within the digital boundaries of a single project, with limited data exchange for city-level planning or coordination with municipal services.
Waste Management: A common application is the use of smart waste bins equipped with fill-level sensors to optimize collection schedules and routes. While this improves the efficiency of a single municipal service, it represents a fragmented approach if the data is not integrated into city-wide logistics platforms that could co-optimize waste collection with other freight and service vehicle movements [12].

2.3. Documented Barriers to System Integration

The literature consistently identifies several key barriers that hinder the development of integrated, truly smart cities [8]. These documented barriers serve as direct causal antecedents of the fragmentation that this study aims to quantify; in effect, the literature predicts the problem that the subsequent results will empirically confirm. These barriers are not purely technical but are deeply embedded in organizational and economic structures:
Technical Barriers: A primary obstacle is the lack of interoperability standards between diverse technologies, platforms, and data formats from different vendors. This “vendor lock-in” and the use of proprietary protocols make it difficult to share data and create communication pathways between systems [13].
Organizational Barriers: Municipal governance is often structured in vertical silos (e.g., Department of Transportation, Department of Public Works), with separate mandates, budgets, and data governance practices. This organizational fragmentation directly mirrors and causes the technological fragmentation observed on the ground, as there is little incentive or mechanism for cross-departmental collaboration on integrated projects [6].
Economic Barriers: Funding models for urban innovation tend to favor discrete, short-term pilot projects with easily measurable, domain-specific outcomes. Securing funding for complex, long-term, cross-departmental integration initiatives is significantly more challenging, as the return on investment is harder to quantify and attribute to a single entity [8].
Privacy and Security Barriers: Legitimate concerns about data privacy, ownership, and cybersecurity create significant hurdles for data sharing across multiple systems. Establishing trusted, secure frameworks for exchanging potentially sensitive information between public and private entities is a complex legal and technical challenge [14].

2.4. Research Gap

While these barriers are qualitatively well-documented and widely discussed, a quantitative synthesis that measures the true extent of fragmentation across the built environment has been lacking. No previous study has systematically reviewed the broad body of empirical literature on smart city applications to measure the level of integration achieved or to correlate this with the effectiveness of the interventions. This meta-analysis directly addresses this critical gap by providing the first comprehensive, data-driven assessment of the “As-Is” state of AI/ML/DL/IoT applications in the built environment, moving the discussion from anecdotal observation to empirical evidence.

3. Materials and Methods

3.1. Study Design and Methodological Framework

This study is a Systematic Review and Meta-Analysis reported in accordance with PRISMA 2020 as presented in Figure 1 [15]. The review question and eligibility criteria were structured using the Population, Intervention, Comparison, Outcome, Study design (PICOS) framework. All data collection, management, and synthesis were performed using Microsoft Excel (MS365 Analysis ToolPak, Solver, and Power Automate) and STATA (version 17).

3.2. Protocol and Registration

No prior protocol for this review was published or registered (e.g., PROSPERO). We acknowledge this as a methodological limitation. To mitigate potential risks associated with the absence of protocol registration, we adhered strictly to PRISMA 2020 guidance and fully documented all review methods to support transparency and reproducibility.

3.3. Data Sources and Search Strategy

We developed a comprehensive search strategy to identify relevant literature published from January 2015 through 31 July 2025 (final search date). A total of 64 search strings were constructed by systematically combining terms from four technology categories (“Artificial Intelligence”, “Machine Learning”, “Deep Learning”, “Internet of Things”) with six built-environment domains (“buildings”, “energy systems”, “transportation”, “water management”, “construction”, “waste management”). These strings were executed across five open-access indices/aggregators, MDPI, DOAJ, CORE, BASE, and OpenAIRE, chosen to ensure full-text availability for effect-size extraction. In total, 320 search executions (64 strings × 5 databases) were performed between October 2024 and July 2025. We did not apply publication-status limits beyond the date range; language was restricted to English per the eligibility criteria. The searches yielded 7432 records in total. After removing 2156 duplicates, 5276 unique records remained for screening.
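For illustration, the short sketch below shows how such strings can be assembled programmatically as a Cartesian product of technology and domain terms; the term lists shown are the headline categories only and do not reproduce the review’s full set of 64 strings, which combined additional synonym variants.

```python
from itertools import product

# Headline term lists only; the actual search strategy used additional synonym variants.
technologies = ['"Artificial Intelligence"', '"Machine Learning"',
                '"Deep Learning"', '"Internet of Things"']
domains = ['"buildings"', '"energy systems"', '"transportation"',
           '"water management"', '"construction"', '"waste management"']

# Combine every technology term with every domain term into a Boolean query string.
search_strings = [f"{tech} AND {dom}" for tech, dom in product(technologies, domains)]
print(len(search_strings))   # 24 combinations from these abbreviated lists
print(search_strings[0])     # "Artificial Intelligence" AND "buildings"
```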

3.4. Eligibility Criteria

Eligibility was defined using the PICOS framework. Population: systems or processes within the urban built environment (e.g., buildings, energy grids, transportation systems, water networks, active construction sites, and municipal waste systems). Intervention: explicit application of AI/ML/DL/IoT for monitoring, prediction, optimization, or control. Comparison: a defined baseline or comparator (e.g., pre-implementation state or traditional method) enabling effect estimation. Outcome: quantitative performance or sustainability metrics reported with sufficient statistical detail to calculate effect sizes (e.g., means, standard deviations, sample sizes). Study design: empirical, peer-reviewed studies (journal articles or conference papers) published in English between January 2015 and July 2025. Exclusions included non-urban systems (e.g., agriculture), non-infrastructure applications (e.g., e-governance platforms), purely conceptual or theoretical papers, studies without a baseline or comparative data, studies reporting only qualitative outcomes, and simulation-only studies without real-world data. A concise summary of inclusion and exclusion criteria is presented in Table 1.

3.5. Study Selection Process

Study selection followed the PRISMA 2020 flow (Figure 1). After deduplication (5276 unique records from 7432 initial hits), two reviewers independently screened all titles and abstracts against the eligibility criteria, excluding 4864 records. We sought full texts for 412 records; 23 could not be retrieved, leaving 389 articles for full-text assessment. The same two reviewers independently evaluated full texts for eligibility. Disagreements at either stage were resolved by discussion and consensus, with a third reviewer available to arbitrate if necessary. Inter-rater agreement was high: Cohen’s kappa was 0.89 for the title/abstract screening stage and 0.92 for the full-text review stage, indicating substantial to almost perfect agreement. During full-text eligibility screening, we excluded 318 studies for predefined reasons: lack of quantitative outcome metrics (n = 261), absence of a baseline comparator (n = 18), or simulation-only designs without real-world data (n = 39). A total of 71 studies met all inclusion criteria and were retained for quantitative synthesis. Table 2 summarizes selection counts by database.

3.6. Data Extraction and Coding

Data from each of the 71 included studies were extracted using a standardized form to ensure consistency and transparency. Two reviewers independently recorded key study characteristics (e.g., publication year, geographic region, built environment domain, and type of AI/ML/DL/IoT application), details of the intervention and comparator (including whether existing infrastructure was leveraged or new systems were implemented, and the assigned integration level on the 0–3 scale), and all relevant quantitative outcomes. For effect size computation, numerical data such as sample sizes, means, and standard deviations for intervention and baseline groups were collected whenever available. Any discrepancies in data extraction were resolved through consensus. All extracted information was tabulated and cross-verified against the original articles to ensure accuracy in coding and to comprehensively address the research questions.
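To make the coding scheme concrete, the sketch below shows one possible representation of a coded study record, including the 0–3 integration level and the summary statistics needed for effect-size computation. The field names are illustrative assumptions, not the authors’ actual extraction form.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StudyRecord:
    """Illustrative coded record for one included study (field names are hypothetical)."""
    study_id: str
    year: int
    region: str
    domain: str                          # e.g., "buildings", "transportation", "water"
    technology: str                      # e.g., "ML", "DL", "IoT", "RL"
    uses_existing_infrastructure: bool   # intervention overlaid on existing assets?
    integration_level: int               # 0 = fully siloed ... 3 = real-time multi-domain
    # Summary statistics for effect-size calculation (may be absent for some outcomes)
    n_intervention: Optional[int] = None
    mean_intervention: Optional[float] = None
    sd_intervention: Optional[float] = None
    n_baseline: Optional[int] = None
    mean_baseline: Optional[float] = None
    sd_baseline: Optional[float] = None

    def __post_init__(self) -> None:
        if not 0 <= self.integration_level <= 3:
            raise ValueError("integration_level must be on the 0-3 scale")
```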

3.7. Risk of Bias and Study Quality Assessment

Rationale for tool selection: Risk Of Bias In Non-randomized Studies of Interventions (ROBINS-I) [16] is designed for non-randomized intervention studies that estimate comparative effects; in contrast, the Prediction model Risk Of Bias ASsessment Tool (PROBAST) [17] targets prediction-model studies and QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies) [18] targets diagnostic-accuracy studies. Given that our corpus comprises quasi-experimental and observational intervention evaluations, ROBINS-I aligns with our study designs, whereas PROBAST and QUADAS-2 do not.
The risk of bias for each included study was independently assessed by two reviewers using ROBINS-I. This tool is specifically designed for the types of non-randomized and observational studies included in this review and evaluates each study as an attempt to emulate a hypothetical pragmatic randomized trial. We assessed the seven distinct domains of bias covered by ROBINS-I: (1) bias due to confounding, (2) bias in selection of participants, (3) bias in classification of interventions, (4) bias due to deviations from intended interventions, (5) bias due to missing data, (6) bias in measurement of outcomes, and (7) bias in selection of the reported result. Disagreements between reviewers were resolved through discussion and consensus. Based on these domain-level assessments, an overall risk-of-bias judgment was assigned to each study as ‘Low’, ‘Moderate’, ‘High’, or ‘Critical’. No studies were judged to be at critical risk; any study receiving that rating would have been excluded. The summary results are provided in the Results section.
Inter-rater reliability was assessed using Cohen’s kappa (κ) to ensure consistency in the study selection process. This statistic measures the level of agreement between two raters, accounting for the possibility of agreement occurring by chance.
  • Formula: κ = (po − pe)/(1 − pe)
    po = the relative observed agreement among raters.
    pe = the hypothetical probability of chance agreement.
  • Calculated Values:
    Title/Abstract Screening: κ = 0.89
    Full-Text Review: κ = 0.92
Interpretation: Both values indicate substantial to almost perfect agreement between the two independent reviewers, confirming a robust and consistent selection procedure.
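As a minimal computational sketch (not the reviewers’ actual screening workbook), Cohen’s kappa can be computed directly from two reviewers’ decisions; the decision lists below are hypothetical.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement (po)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement (pe) from each rater's marginal label frequencies
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude decisions for six records
reviewer_1 = ["include", "exclude", "exclude", "include", "exclude", "include"]
reviewer_2 = ["include", "exclude", "include", "include", "exclude", "include"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # 0.67 for this toy example
```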
The effect size for each study was calculated using Hedges’ g, a measure of the standardized mean difference between two groups. Hedges’ g is preferred over Cohen’s d for meta-analyses as it includes a correction for small sample bias, making it more accurate for studies with fewer than 20 participants.
  • Formula: g = J × (M1 − M2)/Spooled, where J = 1 − 3/(4(n1 + n2) − 9) is the small-sample correction factor.
    M1 and M2 are the means of the intervention and control groups, respectively.
    Spooled is the pooled standard deviation, calculated as: Spooled = √(((n1 − 1)s1² + (n2 − 1)s2²)/(n1 + n2 − 2))
  • n1 and n2 are the sample sizes of the two groups.
  • s1 and s2 are the standard deviations of the two groups.
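A minimal numerical sketch of this calculation is shown below, using made-up summary statistics rather than data from any included study; the variance expression is one common approximation used for inverse-variance weighting in the meta-analysis.

```python
import math

def hedges_g(m1, m2, s1, s2, n1, n2):
    """Hedges' g (standardized mean difference with small-sample correction) and its variance."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / df)
    d = (m1 - m2) / s_pooled                      # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)               # small-sample correction factor J
    g = j * d
    var_g = j ** 2 * ((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return g, var_g

# Hypothetical example: detection accuracy (%) for an AI intervention vs. a baseline method
g, var_g = hedges_g(m1=88.0, m2=80.0, s1=6.0, s2=7.0, n1=20, n2=20)
print(round(g, 2), round(var_g, 3))               # effect size and its approximate variance
```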
Overall Pooled Effect Size: The random-effects model yielded a pooled effect size of Hedges’ g = 0.92 with a 95% Confidence Interval of [0.78–1.06]. This is considered a “large” effect, indicating that the AI interventions produced substantial improvements.

3.8. Heterogeneity: I2 Statistic

The I2 statistic was used to quantify the percentage of variation across studies that is due to genuine heterogeneity rather than chance.
  • Formula: I2 = 100% × (Q − df)/Q
    Q is Cochran’s Q statistic, a measure of the total variation.
    df is the degrees of freedom (number of studies − 1).
  • Calculated Value: I2 = 87.3%
Interpretation: This value indicates “high” heterogeneity, confirming that the magnitude of the intervention’s effect varied significantly across different studies and contexts. This justifies the use of a random-effects model for the meta-analysis.
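The heterogeneity statistics and the random-effects pooling reported in this review can be reproduced from per-study effects and variances along the following lines; this is a generic DerSimonian-Laird sketch with hypothetical inputs, not the actual STATA routine used.

```python
import math

def random_effects_meta(effects, variances):
    """Cochran's Q, I2, DerSimonian-Laird tau2, and the random-effects pooled estimate."""
    k = len(effects)
    w = [1 / v for v in variances]                                  # inverse-variance weights
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)       # fixed-effect mean
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))     # Cochran's Q
    df = k - 1
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                   # between-study variance
    w_re = [1 / (v + tau2) for v in variances]                      # random-effects weights
    pooled = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return {"Q": q, "df": df, "I2": i2, "tau2": tau2,
            "g_pooled": pooled, "ci95": (pooled - 1.96 * se, pooled + 1.96 * se)}

# Hypothetical per-study Hedges' g values and their variances
print(random_effects_meta([1.10, 0.75, 0.40, 1.35, 0.95, 0.60],
                          [0.10, 0.08, 0.12, 0.15, 0.09, 0.11]))
```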

3.9. Egger’s Regression Test

Egger’s test was used to statistically assess the asymmetry of the funnel plot [19]. It regresses the effect estimates against their standard errors. A significant intercept suggests the presence of small-study effects, where smaller studies show different, often larger, effects than larger ones, which can be an indicator of publication bias.
  • Concept: A linear regression of the standard normal deviate (effect size/standard error) against precision (1/standard error).
  • Result: The test was statistically significant (t = 2.14, p = 0.03), indicating the presence of funnel plot asymmetry.
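A minimal sketch of this regression, using hypothetical per-study effects and standard errors, is shown below; scipy is assumed to be available, and the two-sided p-value is computed for the intercept rather than the slope.

```python
from scipy import stats

def eggers_test(effects, standard_errors):
    """Egger's test: regress the standard normal deviate on precision and test the intercept."""
    snd = [g / se for g, se in zip(effects, standard_errors)]   # standard normal deviates
    precision = [1 / se for se in standard_errors]
    fit = stats.linregress(precision, snd)
    t_intercept = fit.intercept / fit.intercept_stderr          # t-statistic for the intercept
    p_value = 2 * stats.t.sf(abs(t_intercept), df=len(effects) - 2)
    return fit.intercept, t_intercept, p_value

# Hypothetical per-study effect sizes and standard errors
g_values = [1.10, 0.75, 0.40, 1.35, 0.95, 0.60, 0.88, 1.20]
se_values = [0.32, 0.28, 0.35, 0.39, 0.30, 0.33, 0.25, 0.36]
print(eggers_test(g_values, se_values))
```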

3.10. GRADE Certainty Assessment

We appraised overall certainty of evidence for the main outcomes using the GRADE approach [20]. Evidence from experimental designs was initially rated as high certainty and from observational designs as low certainty. We considered five downgrading factors: risk of bias, inconsistency (e.g., I2 > 50%), indirectness, imprecision (e.g., wide confidence intervals), and publication bias (e.g., Egger’s test). We considered three upgrading factors: large magnitude of effect, dose–response gradient, and whether plausible confounding would reduce the observed effect. Final certainty ratings (high, moderate, low, very low) are reported in the Results.

4. Results

4.1. Meta-Analysis of Intervention Effectiveness

A total of 71 studies published between 2015 and 2025 met the inclusion criteria and were synthesized quantitatively. Studies spanned all major built environment domains, including building energy management, transportation networks, water systems, construction operations, and municipal waste services. Most were journal articles; all compared an AI/ML/DL/IoT intervention against a conventional baseline or pre-implementation state and reported quantitative outcomes sufficient for effect size estimation. The random-effects meta-analysis yielded a pooled Hedges’ g of 0.92 (95% CI: 0.78–1.06; p < 0.001), indicating a large average improvement attributable to AI-enabled interventions. Considerable heterogeneity was observed (I2 = 87.3%; between-study variance τ2 ≈ 0.28; Q-test p < 0.001), justifying the random-effects model and subgroup analyses. A planned subgroup analysis by domain indicated statistically significant differences across sectors (between-group Q = 28.4; p < 0.001): building and energy applications exhibited the largest average effects (approximately g = 0.88–0.92), transportation and construction showed moderate positive effects (approximately g = 0.65–0.75), and water and waste management exhibited smaller yet positive mean effects (approximately g = 0.5–0.6). The forest plot in Figure 2 illustrates that most individual studies reported positive effects.

4.2. Quantitative Analysis of System Integration

A central and stark finding of this research is the quantitative confirmation of the pervasive fragmentation hypothesized in the introduction. The analysis of the integration level of all 71 applications, presented in Table 3, reveals a landscape dominated by isolated, siloed systems. This table provides the core empirical evidence for the paper’s central argument by translating the qualitative concept of “silos” into concrete quantitative data.
The overwhelming majority of studies (67.6%) were classified as Level 0, meaning the application operated entirely within a single domain with no external data exchange. An additional 23.9% were classified as Level 1, demonstrating only limited, often manual, data sharing. Combined, this means that 91.5% of all documented AI/ML/DL/IoT applications operate at a basic level of integration (Level 0 or 1), functioning effectively as disconnected islands of intelligence. Only a small fraction of studies, five in total (7.0%), achieved Level 2 integration, which involves automated data sharing and analysis between two or three domains. These rare cases typically involved linking building energy management systems with the power grid or combining transportation data with infrastructure sensors.
Most strikingly, only one single study (1.4%) in the entire sample demonstrated Level 3 comprehensive integration. This unique case was a study of Singapore’s Smart Nation Sensor Platform, which was explicitly designed for horizontal integration of data streams across multiple urban systems to enable coordinated, city-wide management [21]. The network visualization in Figure 3 provides a compelling and intuitive illustration of this fragmentation, showing a sparse network of isolated domain nodes that powerfully communicates the “disconnected islands” metaphor.

4.3. Temporal Trends

Analysis of temporal trends revealed a significant evolution in the technological sophistication of applications over the 2015–2025 period, as demonstrated in Figure 4. In the earlier years of the review, applications predominantly relied on traditional machine learning algorithms or simple rule-based systems. Since 2020, there has been a marked and rapid shift towards more complex machine learning architectures and the emergence of reinforcement learning (RL) approaches, a subset of machine learning, for dynamic and adaptive control problems. This clear trend indicates a growing maturity in the field’s algorithmic toolkit. However, importantly, this increase in algorithmic complexity has not translated into higher levels of systemic integration. The analysis showed that integration levels have remained stagnant over time; the proportion of applications operating at Levels 0–1 was 91.5% in the 2021–2025 period, identical to the overall proportion across the entire decade. This divergence is a key finding, demonstrating that despite rapid advances in AI technology itself, the fundamental barriers to integration are not being overcome. The field is developing more powerful engines but continues to place them in disconnected vehicles, suggesting that the bottlenecks are non-technical in nature and are becoming relatively more pronounced as technology outpaces integration frameworks.

4.4. Publication Bias

Visual inspection of the funnel plot of study effect sizes versus their standard errors revealed a slight asymmetry. This observation was confirmed by Egger’s regression test for small-study effects, which was statistically significant (t = 2.14, p = 0.03), indicating the likely presence of publication bias favoring positive results. Based on the observed funnel plot asymmetry, we estimate that the true average effect may be approximately 25–30% lower than observed; adjusting the pooled estimate downward accordingly yields a more conservative Hedges’ g ≈ 0.68–0.70 (95% CI: 0.45–0.91), which nonetheless remains a substantial positive effect.
Using GRADE, the overall certainty of evidence for the primary pooled effect was rated Moderate due to study design and suspected publication bias with high inconsistency (I2 = 87.3%), partially offset by the large observed effect size.

5. Discussion

5.1. Principal Findings and the “Fragmented Ecosystems”

This meta-analysis reveals a stark duality in the current smart city landscape. On one hand, the evidence is clear that AI, ML, DL, and IoT technologies deliver substantial benefits within their targeted domains. The overall pooled effect size (Hedges’ g ≈ 0.92) indicates that these interventions yield significant improvements in performance and sustainability metrics compared to traditional methods, validating the considerable investments in such technologies to date. In particular, applications in buildings and energy systems show especially strong effects (on average g ≈ 0.8–0.9), likely due to well-defined optimization targets and relatively mature sensor infrastructures in those sectors. This confirms that, when applied to specific problems, smart technology interventions can be highly effective tools for urban improvement.
An equally striking finding is the severe lack of integration among these successful applications. With 91.5% of studied implementations operating at only Level 0–1 integration, the long-envisioned smart city ideal of interconnected urban systems remains largely unrealized. This fragmentation represents a fundamental barrier to achieving the cross-domain synergies and systemic benefits promised by smart city initiatives. In short, the field has achieved high efficacy within individual silos, but the absence of cross-system integration means that these gains do not translate into broader systemic improvements, a shortfall that is the hallmark of the integration paradox.

5.2. Implications for Practice and Policy

The dual reality identified in this meta-analysis—high efficacy within silos coupled with pervasive fragmentation—has profound implications for practice and policy. For practitioners and policymakers, these findings offer clear, evidence-based benchmarks. The demonstrated efficacy (pooled g ≈ 0.92) confirms that organizations can confidently invest in domain-specific Artificial Intelligence solutions with a reasonable expectation of significant performance improvements (often in the range of 20–35%).
However, the results also serve as a critical warning: these improvements are currently realized in isolation (91.5% at Levels 0–1), and systemic gains will not materialize automatically. To bridge the gap between localized success and city-wide impact, a strategic shift from piecemeal technology adoption to an integration-focused paradigm is essential. Table 4 synthesizes the key empirical findings, the underlying problems they reveal, and the strategic actions required to address them.
As Table 4 illustrates, unlocking systemic value requires concrete actions across procurement, governance, funding, and evaluation. We outline the following key recommendations:
  • Mandate Open Standards and APIs: City governments must require adherence to open data standards and the provision of public Application Programming Interfaces (APIs) for all new smart infrastructure procurements. This is essential to prevent vendor lock-in and ensure the future interoperability of systems.
  • Establish City-Level Data Governance: Municipalities must move beyond siloed departmental data management. Establishing city-level data governance frameworks with clear policies for data sharing, alongside dedicated cross-departmental “integration task forces,” is crucial for breaking down organizational barriers.
  • Pilot and Champion Integrated Projects: Public and private investment should be strategically directed towards pilot projects that explicitly target Level 2 or Level 3 integration. Demonstrating tangible benefits in such integrated pilots can de-risk larger-scale implementations and build momentum for broader adoption.
  • Develop and Adopt Cross-Domain Metrics: Evaluation frameworks for smart city projects must evolve to capture cross-domain synergies rather than just isolated efficiencies. For example, a new traffic management system should be evaluated not only on traffic flow improvements but also on its impact on city-wide air quality and energy consumption.
  • Foster Public–Private Integration Partnerships: Collaborative initiatives are needed to build the “integration infrastructure”—the middleware, platforms, and urban digital twin environments—that can link disparate systems. Such infrastructure should be treated as a public good, supported by public–private partnerships to ensure broad access and utility.

5.3. Theoretical Contributions

Prior reviews of smart cities often focus on single sectors (e.g., energy or building operations); to our knowledge, no cross-domain meta-analysis has jointly quantified pooled effectiveness and measured integration levels across the built environment. This helps reconcile why positive sectoral findings have not translated into system-wide benefits: integration, not algorithmic sophistication, is the primary bottleneck.
This meta-analysis makes several important theoretical contributions. Foremost, it empirically challenges technologically deterministic perspectives on smart city development. The finding that the level of system integration appears to be a more powerful determinant of systemic impact than the specific type or sophistication of the AI algorithm used is profound. This places the focus squarely on the principles of sociotechnical systems thinking, which posits that effective solutions emerge from the interplay between technology and the complex social, organizational, and infrastructural fabric of the city [22]. The work demonstrates quantitatively what has long been argued qualitatively: the primary bottlenecks in smart city evolution are often not technological, but organizational, political, and economic.
Furthermore, this study contributes a methodological framework for classifying and measuring system integration (0 to 3 scale), providing a standardized tool that can be adopted in future research to enable consistent, comparative analysis of smart city projects. By providing the first empirical, quantitative evidence of pervasive fragmentation, this study moves the discourse beyond anecdote and establishes a data-driven baseline against which future progress can be measured.

5.4. Limitations

This study, while comprehensive, has several limitations. Policy documents and gray literature were excluded, and we acknowledge a regional skew, with under-representation of non-English and Global South studies. Our focus on open-access, English-language indices ensured full-text availability for effect extraction but excludes subscription-only work (e.g., Web of Science and Scopus); we propose that future research extend this coverage.
First, by focusing on open-access databases, we may have missed relevant literature in subscription-based sources; moreover, the evidence base is geographically skewed, with a predominance of studies from technologically advanced regions in Asia, Europe, and North America, so the findings may not fully generalize to cities in the Global South.
Second, the studies included typically report short-term to medium-term outcomes (median follow-up ≈ 18 months), leaving the long-term durability of observed improvements uncertain.
Third, although we conducted analyses to detect publication bias (and found that the true average effect size may be somewhat lower than 0.92), any such bias implies that the field’s positive results should be interpreted with caution.
Fourth, although risk of bias was assessed (ROBINS-I) for all 71 included studies, potential methodological limitations within the primary studies were not quantitatively accounted for in the synthesis and could therefore influence the pooled outcomes.
Fifth, the between-study heterogeneity was high (I2 = 87.3%), suggesting that unmeasured factors (such as differences in local context, data quality, or implementation practices) likely moderate the effectiveness of these interventions.
Sixth, we observed few empirical, outcome-reporting studies addressing public spaces, green areas, and recreational facilities and services during this review cycle. We therefore mark these as gaps in the present evidence base and flag them for prioritized inclusion in an update.
Finally, the integration-level coding scheme (Levels 0–3), while systematically applied, is a new framework that may require refinement. Certain borderline cases, for example, a “smart building” that uses one external data stream, involved some subjectivity in classification, which could affect reproducibility. Taken together, these limitations mean that the results should be generalized with appropriate caution.

5.5. Future Research Directions

The findings and limitations of this meta-analysis point to several critical areas for future research:
  • Longitudinal and Scalability Studies: There is a pressing need for long-term research that tracks the performance and societal impacts of smart city interventions over multiple years to determine if initial gains are sustained as projects are scaled from pilot to city-wide deployment.
  • Screening and synthesis methods: Utilize AI-assisted screening (active-learning tools) to improve speed, coverage, efficiency and consistency. Conduct a detailed bibliometric content mapping (e.g., VOSviewer 1.6.20) as a complementary lens without diluting the meta-analytic focus.
  • Comparative Effectiveness of Integration: Future research should explicitly design studies to directly quantify the added value of integration, for instance by conducting controlled experiments comparing an AI solution implemented in a silo versus the same solution in an integrated context.
  • Mixed-Methods Research on Barriers and Enablers: In-depth mixed-methods research combining quantitative data with qualitative case studies is needed to understand how socio-technical hurdles are (or are not) overcome in practice, and what factors most enable successful integration.
  • Development of Standardized Benchmarks: The research community would benefit from the development of common benchmarks, open datasets, and shared simulation environments (e.g., open-source urban digital twins [23]) to allow standardized testing and comparison of integrated solutions.
  • Ethical, Equity, and Governance Implications: As integration becomes more feasible, research must intensify its focus on the associated ethical and equity dimensions, including issues of data privacy, algorithmic bias, and new models of inclusive governance for large-scale, integrated urban AI systems. We propose also to incorporate government guidance and implementation reports to triangulate practice-led evidence.

6. Conclusions

This comprehensive systematic review and meta-analysis of 71 empirical studies provides robust, dual-faceted evidence on the state of artificial intelligence in the built environment. The findings confirm that AI/ML/DL/IoT applications are highly effective tools, capable of delivering substantial performance and sustainability improvements within their specific domains, as evidenced by a large, pooled effect size of g = 0.92. This validates the significant investments and research focus placed on these technologies to date.
However, this success is overshadowed by the study’s primary conclusion: the current smart city landscape is defined by “Fragmented Systems.” Despite high individual effectiveness, a staggering 91.5% of implementations operate as disconnected, high-performing silos. This pervasive systemic fragmentation represents the fundamental barrier to the evolution of smart cities, preventing the realization of synergistic, city-wide benefits that are the ultimate promise of the paradigm. While technological capabilities in AI are advancing at a rapid pace, they are being constrained by persistent organizational, regulatory, and legacy infrastructure barriers. The temporal analysis reveals that the gap between algorithmic sophistication and integration levels is, if anything, widening. Without a deliberate strategic shift, the present pattern of fragmented “islands of smartness” will likely become further entrenched. The rare but highly impactful examples of integrated systems serve as powerful proof-of-concept for the immense potential that current approaches are leaving unrealized.
Ultimately, the success of the smart city vision will depend less on the next algorithm and more on solving the profound challenges of integration. This requires a concerted effort from policymakers, planners, and engineers to mandate interoperability, reform governance structures to break down departmental silos, and develop the technical and social frameworks for a true system-of-systems approach. By using the evidence provided by this analysis to inform strategy, the global community can begin to build the bridges between today’s disconnected applications and tomorrow’s truly intelligent, sustainable, and human-centric built environments.

Author Contributions

Conceptualization, O.A.; Methodology, O.A.; Software, O.A.; Validation, O.A. and S.T.A.; Formal Analysis, O.A.; Writing—Original Draft Preparation, O.A.; Writing—Review and Editing, O.A. and S.T.A.; Visualization, O.A.; Supervision, S.T.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data used for this research, as well as PRISMA 2020 checklist, are available from the corresponding author on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

Abbreviation | Definition
AI | Artificial Intelligence
BIM | Building Information Model
BMS | Building Management System
CI | Confidence Interval
DL | Deep Learning
RL | Reinforcement Learning
FL | Federated Learning
IoT | Internet of Things
ML | Machine Learning
PRISMA | Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PICOS | Population, Intervention, Comparison, Outcome, and Study Design
SCADA | Supervisory Control and Data Acquisition
API | Application Programming Interface
ROBINS-I | Risk Of Bias In Non-randomized Studies of Interventions
GRADE | Grading of Recommendations, Assessment, Development and Evaluations

References

  1. United Nations, Department of Economic and Social Affairs (UN DESA). World Urbanization Prospects: The 2018 Revision—Report; United Nations: New York, NY, USA, 2019; Available online: https://population.un.org/wup/assets/WUP2018-Report.pdf (accessed on 18 July 2025).
  2. United Nations, Department of Economic and Social Affairs (UN DESA). World Urbanization Prospects: The 2018 Revision—Highlights; United Nations: New York, NY, USA, 2019; Available online: https://population.un.org/wup/assets/WUP2018-Highlights.pdf (accessed on 18 July 2025).
  3. Gubbi, J.; Buyya, R.; Marusic, S.; Palaniswami, M. Internet of Things (IoT): A vision, architectural elements, and future directions. Future Gener. Comput. Syst. 2013, 29, 1645–1660.
  4. Albino, V.; Berardi, U.; Dangelico, R.M. Smart Cities: Definitions, Dimensions, Performance, and Initiatives. J. Urban Technol. 2015, 22, 3–21.
  5. Biyik, C.; Allam, Z.; Pieri, G.; Moroni, D.; O’Fraifer, M.; O’Connell, E.; Olariu, S.; Khalid, M. Smart parking systems: Reviewing the literature, architecture and ways forward. Smart Cities 2021, 4, 623–642.
  6. Mora, L.; Deakin, M.; Reid, A. Strategic principles for smart city development: A multiple case study analysis of European best practices. Technol. Forecast. Soc. Change 2019, 142, 70–97.
  7. Zanella, A.; Bui, N.; Castellani, A.; Vangelista, L.; Zorzi, M. Internet of Things for smart cities. IEEE Internet Things J. 2014, 1, 22–32.
  8. Syed, A.S.; Sierra-Sosa, D.; Kumar, A.; Elmaghraby, A. IoT in smart cities: A survey of technologies, practices and challenges. Smart Cities 2021, 4, 429–475.
  9. Chen, Y.; Norford, L.K.; Samuelson, H.W.; Malkawi, A. Optimal control of HVAC and window systems for natural ventilation through reinforcement learning. Energy Build. 2018, 169, 195–205.
  10. Vijayalakshmi, B.; Ramar, K.; Jhanjhi, N.Z.; Verma, S.; Kaliappan, M.; Vijayalakshmi, K.; Vimal, S.K.; Ghosh, U. An attention-based deep learning model for traffic flow prediction using spatiotemporal features towards sustainable smart city. Int. J. Commun. Syst. 2021, 34, e4609.
  11. Oberascher, M.; Rauch, W.; Sitzenfrei, R. Towards a smart water city: A comprehensive review of applications, data requirements, and communication technologies for integrated management. Sustain. Cities Soc. 2022, 76, 103442.
  12. Khan, S.; Ali, B.; Alharbi, A.A.K.; Alotaibi, S.; Alkhathami, M.S.; Alshehri, S.; Khattak, H.A.; Khattak, S.; Khan, I. Efficient IoT-Assisted Waste Collection for Urban Smart Cities. Sensors 2024, 24, 3167.
  13. García López, P.; Montresor, A.; Epema, D.; Datta, A.; Higashino, T.; Iamnitchi, A.; Barcellos, M.; Felber, P.; Rivière, E.; Siganos, G.; et al. Edge-centric Computing: Vision and Challenges. ACM SIGCOMM Comput. Commun. Rev. 2015, 45, 37–42.
  14. Joyce, A.; Javidroozi, V. Smart city development: Data sharing vs. data protection legislations. Cities 2024, 148, 104859.
  15. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71.
  16. Sterne, J.A.C.; Hernán, M.A.; Reeves, B.C.; Savović, J.; Berkman, N.D.; Viswanathan, M.; Henry, D.; Altman, D.G.; Ansari, M.T.; Boutron, I.; et al. ROBINS-I: A tool for assessing risk of bias in non-randomised studies of interventions. BMJ 2016, 355, i4919.
  17. Wolff, R.F.; Moons, K.G.M.; Riley, R.D.; Whiting, P.F.; Westwood, M.; Collins, G.S.; Reitsma, J.B.; Kleijnen, J.; Mallett, S. PROBAST: A tool to assess the risk of bias and applicability of prediction model studies. Ann. Intern. Med. 2019, 170, 51–58.
  18. Whiting, P.F.; Rutjes, A.W.S.; Westwood, M.E.; Mallett, S.; Deeks, J.J.; Reitsma, J.B.; Leeflang, M.M.G.; Sterne, J.A.C.; Bossuyt, P.M.M. QUADAS-2: A revised tool for the quality assessment of diagnostic accuracy studies. Ann. Intern. Med. 2011, 155, 529–536.
  19. Egger, M.; Davey Smith, G.; Schneider, M.; Minder, C. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997, 315, 629–634.
  20. Brożek, J.L.; Canelo-Aybar, C.; Compalati, E.; Wiercioch, W.; Baldeh, T.; Khan, R.; Akl, E.A.; Alonso-Coello, P.; Guyatt, G.H.; Schünemann, H.J. GRADE Guidelines 30: The GRADE approach to assessing the certainty of modeled evidence—An overview in the context of health decision-making. BMJ Evid.-Based Med. 2021, 129, 138–150.
  21. Sipahi, E.B.; Saayi, Z. The world’s first “Smart Nation” vision: The case of Singapore. Smart Cities Reg. Dev. (SCRD) J. 2024, 8, 41–58.
  22. Baxter, G.; Sommerville, I. Socio-Technical Systems: From Design Methods to Systems Engineering. Interact. Comput. 2011, 23, 4–17.
  23. Céspedes-Cubides, A.S.; Jradi, M. A review of building digital twins to improve energy efficiency in the operational stage. Energy Inform. 2024, 7, 11.
Figure 1. PRISMA 2020 flow diagram illustrating the study identification and selection process.
Figure 2. Forest Plot of the Meta-Analysis Results.
Figure 3. Network Analysis of Cross-Domain Integration in Smart City Applications.
Figure 4. Analysis of Artificial Intelligence Temporal Trends in the Built Environment.
Table 1. PICOS Study Inclusion and Exclusion Criteria.

Criterion | Inclusion Criteria | Exclusion Criteria
Population | Systems or processes within the urban built environment: buildings, energy grids, transportation systems, water networks, construction sites, and waste management systems. | Non-urban systems (e.g., agriculture); non-infrastructure applications (e.g., e-governance platforms).
Intervention | Explicit application of AI, ML, DL, or IoT technology for monitoring, prediction, optimization, or control. | Purely conceptual or theoretical papers; studies not focused on AI/ML/DL/IoT as the primary intervention.
Comparison | A defined baseline or comparator (e.g., traditional method, pre-intervention state) allowing assessment of the intervention’s effect. | Studies with no comparative data or baseline provided.
Outcome | Quantitative performance or sustainability metrics reported with sufficient statistical detail for effect size calculation (e.g., mean, standard deviation, n). | Studies reporting only qualitative outcomes or lacking sufficient statistical data for meta-analysis.
Study Design | Peer-reviewed empirical studies (e.g., journal articles, conference papers) published in English between January 2015 and July 2025. | Literature reviews, editorials, dissertations, theses; non-English language publications; purely simulation studies without real data.
Table 2. PRISMA Study Selection Summary (Five Databases).

Source Database | Records Identified | Records After Duplicate Removal | Title/Abstract Screened | Full-Text Articles Assessed | Included in Meta-Analysis (n = 71)
MDPI | 1892 | 1446 | 1446 | 112 | 26
DOAJ | 1743 | 1169 | 1169 | 95 | 19
CORE | 1654 | 1135 | 1135 | 86 | 12
BASE | 1287 | 942 | 942 | 62 | 9
OpenAIRE | 856 | 584 | 584 | 34 | 5
Total | 7432 | 5276 | 5276 | 389 | 71
Table 3. Distribution of Studies by Integration Level (n = 71).

Integration Level | n (% of Studies) | Description
Level 0 | 48 (67.6%) | Single-domain, entirely siloed implementation.
Level 1 | 17 (23.9%) | Limited data sharing (e.g., visualization or one-way data feeds only).
Level 2 | 5 (7.0%) | Moderate integration (automated data sharing/analysis between 2 and 3 domains).
Level 3 | 1 (1.4%) | Comprehensive multi-domain integration (real-time co-optimization across domains).
Table 4. Synthesis of Key Findings, Core Problems, and Strategic Suggestions.

Key Finding | Core Problem | Strategic Suggestion
Domain-specific AI solutions yield large improvements (pooled g ≈ 0.92) but operate largely at Levels 0–1 integration (91.5%). | Benefits remain localized; vendor lock-in, siloed governance, and unclear data-sharing rules prevent cross-domain co-optimization. | Mandate open standards and APIs in procurement; require implementation-ready data models and interoperability tests at acceptance.
Algorithmic sophistication has increased (e.g., RL adoption) without tangible gains in integration. | Organizational and economic hurdles outpace technical progress. | Establish city-level data governance and cross-department integration task forces with shared KPIs; publish integration roadmaps.
Evidence is positive but heterogeneous (I2 = 87.3%); possible small-study effects (Egger’s p = 0.03). | Limited longitudinal follow-up and inconsistent reporting obstruct generalization and replication. | Fund Level-2/3 integration pilots with multi-year follow-up; adopt cross-domain metrics.