Systematic Review

Impact of EU Regulations on AI Adoption in Smart City Solutions: A Review of Regulatory Barriers, Technological Challenges, and Societal Benefits

by Bo Nørregaard Jørgensen * and Zheng Grace Ma
SDU Center for Energy Informatics, University of Southern Denmark, 5230 Odense, Denmark
* Author to whom correspondence should be addressed.
Information 2025, 16(7), 568; https://doi.org/10.3390/info16070568
Submission received: 21 May 2025 / Revised: 5 June 2025 / Accepted: 12 June 2025 / Published: 2 July 2025
(This article belongs to the Special Issue Artificial Intelligence and Data Science for Smart Cities)

Abstract

This review investigates the influence of European Union regulations on the adoption of artificial intelligence in smart city solutions, with a structured emphasis on regulatory barriers, technological challenges, and societal benefits. It offers a comprehensive analysis of the legal frameworks in effect by 2025, including the Artificial Intelligence Act, General Data Protection Regulation, Data Act, and sector-specific directives governing mobility, energy, and surveillance. This study critically assesses how these regulations affect the deployment of AI systems across urban domains such as traffic optimization, public safety, waste management, and energy efficiency. A comparative analysis of regulatory environments in the United States and China reveals differing governance models and their implications for innovation, safety, citizen trust, and international competitiveness. The review concludes that although the European Union’s focus on ethics and accountability establishes a solid basis for trustworthy artificial intelligence, the complexity and associated compliance costs create substantial barriers to adoption. It offers recommendations for policymakers, municipal authorities, and technology developers to align regulatory compliance with effective innovation in the context of urban digital transformation.

1. Introduction

Cities across Europe are increasingly relying on artificial intelligence to enhance urban services and infrastructure management [1,2]. Applications such as energy grid optimization, traffic management, and public safety enhancement demonstrate the potential of AI-driven smart city solutions to deliver notable improvements in efficiency, sustainability, and quality of life [2,3]. By 2025, it is anticipated that artificial intelligence will enable over thirty percent of smart city applications, particularly within the domain of urban mobility, highlighting the growing dependence on intelligent systems in municipal operations [3]. Nevertheless, the integration of AI in cities introduces significant challenges concerning ethics, security, and governance [4,5,6]. Unregulated implementation may intensify surveillance, jeopardize privacy, or institutionalize bias within public services [7,8,9]. Ensuring that AI is deployed responsibly and in accordance with European values has consequently become a priority for policymakers [10,11,12,13].
The European Union has established itself as a global leader in technological regulation through the development of a comprehensive framework to address the risks and challenges posed by artificial intelligence and related digital technologies [10,11,12]. Central to this framework is the recently adopted Artificial Intelligence Act, alongside earlier regulations concerning data protection and cybersecurity, all of which exert a direct influence on the design and deployment of AI in smart city contexts [2,11]. The Artificial Intelligence Act (Regulation (EU) 2024/1689) represents the first dedicated law on AI worldwide, introducing harmonized rules based on a risk-oriented approach [11,13,14]. The regulation classifies AI systems by risk level—unacceptable, high, limited, and minimal—and imposes stringent obligations on high-risk applications, which are particularly relevant to essential urban services [14,15,16]. Supporting this regulation are instruments such as the General Data Protection Regulation, which secures personal data in all smart city applications involving citizen information [17,18], and the NIS 2 Directive on network and information security, which mandates cybersecurity protocols for critical sectors including energy, transport, water, waste, and public administration [19]. Furthermore, domain-specific instruments such as the Network Code on Cybersecurity for the Electricity Sector address the resilience of the power grid [20,21], while the EU Cybersecurity Act introduces a certification framework for information and communication technology product security [22]. The forthcoming Cyber Resilience Act will establish binding cybersecurity standards for virtually all connected hardware and software, aiming to ensure that digital products and devices are secure by design [23,24,25].
Collectively, these instruments construct a complex regulatory environment for developers and deployers of AI in smart city ecosystems [10,11]. While the regulations are designed to mitigate risk, protect personal data and fundamental rights, enhance the security and reliability of AI systems, and build public trust in smart city innovation [1,12,13], they also introduce legal responsibilities and liabilities that may hinder innovation if not addressed effectively [26,27,28]. Municipal governments and technology providers engaged in urban AI deployment must contend with a variety of requirements, including algorithmic transparency and human oversight under the Artificial Intelligence Act [14,16,29], privacy-by-design under the General Data Protection Regulation [17,18], and cybersecurity risk management and incident reporting as prescribed by the NIS 2 Directive and relevant sectoral instruments [19,21]. The challenge of simultaneously fulfilling multiple regulatory obligations is especially acute for smaller municipalities or startups operating in partnership with local authorities [5,30].
This article synthesizes insights from regulatory texts, academic literature, industry white papers, and institutional reports to provide a comprehensive resource on the current state of European Union policy concerning AI-enabled smart cities [10,11]. It is intended to support researchers, policymakers, and practitioners in understanding the prevailing regulatory framework and its implications, as well as in formulating strategies to achieve compliance while advancing societal and technological goals [11,12]. The development of artificial intelligence within smart city ecosystems will depend substantially on the co-evolution of governance structures and technological capabilities. Within the European Union, this requires harmonizing urban innovation with the overarching regulatory ambition to foster secure, inclusive, and trustworthy artificial intelligence [13,31,32].
While the primary focus of this review is the regulatory environment of the European Union, it incorporates a structured comparative analysis of the United States and China to contextualize the EU’s regulatory approach globally. These two jurisdictions were selected due to their geopolitical significance and contrasting governance models. The intention is not to survey smart city AI applications worldwide but to provide a focused regulatory comparison relevant to transatlantic and transpacific policy dialogues.
Existing literature reviews have explored the intersection of AI and smart city governance, but they vary in regulatory scope. For example, some surveys propose broad principles or frameworks for governing AI in cities [33] or classify generic adoption barriers, with policy and regulatory hurdles being just one category among technological and organizational challenges [4]. Others zoom in on particular contexts, such as a recent review focusing on European AI governance in smart cities [34], which examines the EU’s emerging AI Act alongside privacy and ethics laws. However, none have provided a dedicated, in-depth analysis of the European Union’s regulatory framework and its impact on smart city AI adoption. This paper addresses that gap by concentrating on the EU regulatory environment, from the GDPR to the recently adopted EU AI Act, and evaluating how these regulations create unique barriers or catalysts for AI-driven smart city solutions. In doing so, our review offers a more specific and comparative perspective than prior works, delving into the EU’s approach in detail and contrasting its implications with broader international discussions on AI regulation in urban innovation.
The structure of the review is as follows. Section 2 describes the methodological approach. Section 3 offers a detailed account of the European Union’s regulatory landscape in 2025, including horizontal instruments such as the Artificial Intelligence Act, General Data Protection Regulation, Data Act, and Data Governance Act, as well as sectoral regulations relevant to domains such as energy, mobility, and public safety. Section 4 analyzes the regulatory barriers these frameworks pose for the adoption of artificial intelligence in cities, including compliance burdens, legal uncertainty, constraints on high-risk use cases, and disparities in capacity among municipalities and developers. Section 5 addresses the technological challenges intersecting with legal obligations, including data fragmentation, interoperability issues, infrastructure limitations, cybersecurity vulnerabilities, and the difficulty of implementing explainable and context-sensitive AI systems. Section 6 highlights the societal benefits of artificial intelligence in urban environments, from improved mobility and energy efficiency to enhanced public services and environmental outcomes, reinforcing the alignment between responsible governance and broader policy objectives. Section 7 provides a comparative analysis of the European Union’s rights-based regulatory model in contrast to the decentralized regulatory context of the United States and the state-led deployment strategy of China. Section 8 concludes the review by summarizing the key findings and arguing that, despite presenting barriers to innovation, the European Union’s regulatory framework lays the groundwork for a trustworthy and sustainable approach to AI adoption in cities.

2. Methodology

This scoping review was conducted following the PRISMA-ScR guidelines, which offer a structured framework for synthesizing interdisciplinary research in evolving domains such as AI regulation and smart cities. The objective was to systematically map the regulatory, technological, and societal factors influencing the adoption of artificial intelligence in European smart city contexts. A predefined protocol guided the review process, including scope definition, literature identification and selection, data extraction, and thematic synthesis.
The review was guided by three primary research questions, derived from the conceptual framing in the Introduction:
  • What are the major regulatory barriers posed by the European Union’s legal framework to the development and deployment of AI in smart city domains?
  • What are the core technological challenges that intersect with legal constraints in the implementation of AI-based smart city solutions?
  • What societal benefits can AI deliver in smart cities, and how do EU regulations align with or constrain the realization of these benefits?
To provide a structured visual overview of how these three research questions are addressed, Figure 1 presents a conceptual framework for the scoping review. It illustrates the thematic structure of the analysis, which is organized around regulatory barriers, technological challenges, and societal benefits. These domains are subsequently examined through a comparative assessment of governance approaches across the European Union, the United States, and China. This framework supports the reader’s understanding of the review’s analytical flow and transregional perspective.
Eligibility Criteria: Given the multidisciplinary scope of the topic, the review draws on a broad spectrum of sources. Eligible materials include peer-reviewed academic literature, encompassing journal articles and conference proceedings, as well as European Union regulatory texts such as regulations, directives, and official guidance. Gray literature was also considered, including policy papers, industry reports, technical documents, and strategic roadmaps. Only documents that explicitly address artificial intelligence applications in smart cities within the European Union, or that directly examine EU regulations including the Artificial Intelligence Act, General Data Protection Regulation, Data Act, NIS2 Directive, and relevant sector-specific rules, were included. Sources focusing on artificial intelligence in general or in non-urban contexts without clear relevance to European smart cities were excluded. The review covers publications from 2013 to 2025, capturing significant regulatory developments and the progression of smart city technologies over this period. Only sources published in English were considered.
Information Sources and Search Strategy: A comprehensive search strategy was adopted to ensure thorough coverage of regulatory, academic, and policy-relevant materials. Academic databases including Scopus, IEEE Xplore, and Web of Science were queried using keyword combinations such as “artificial intelligence” AND “smart cities” AND (“Europe” OR “EU”) AND (“regulation” OR “policy” OR “GDPR” OR “AI Act” OR “NIS2” OR “data governance”). In parallel, targeted searches were conducted on official European Union legal and policy repositories, including EUR-Lex, the European Commission’s artificial intelligence and digital policy portals, and the European Data Protection Board. Institutional websites such as those of ENISA, CEPS, CEER, and EU-funded projects including AI4Cities and PRELUDE were examined for relevant technical and policy documentation. Additional searches were carried out using Google Scholar and Google with domain-specific filters (e.g., site:europa.eu) to retrieve white papers, legislative drafts, and gray literature not indexed in academic databases.
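For illustration, the boolean logic of the database queries described above can be expressed programmatically. The grouping shown below follows the keyword combinations listed in the text; the database-specific field wrappers (e.g., a Scopus TITLE-ABS-KEY clause) are assumptions rather than the verbatim strings submitted to each database.

```python
# Illustrative reconstruction of the boolean search logic described above.
# Grouping and field wrappers are assumptions, not the exact submitted queries.
core = '"artificial intelligence" AND "smart cities"'
region = '("Europe" OR "EU")'
policy = ('("regulation" OR "policy" OR "GDPR" OR "AI Act" '
          'OR "NIS2" OR "data governance")')

query = f"{core} AND {region} AND {policy}"
print(query)

# Hypothetical wrapping for a specific database:
scopus_query = f"TITLE-ABS-KEY({query})"
print(scopus_query)
```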
Source Selection: After duplicate removal, all records were screened by title and abstract or executive summary against the eligibility criteria. Sources were excluded if they lacked a European scope, failed to address AI deployment in smart city solutions, or did not engage substantively with regulatory or technological issues. The remaining sources were reviewed in full to assess their inclusion. A PRISMA flow diagram (Figure 2) summarizes the number of sources identified, screened, excluded, and included in the final synthesis.
Data Extraction and Synthesis: A structured extraction form was developed to chart relevant data from each source, capturing (a) regulatory provisions and constraints (e.g., AI Act classifications, GDPR requirements, NIS2 obligations), (b) technological dependencies and challenges (e.g., data interoperability, explainability, infrastructure limitations), and (c) societal impacts and public value dimensions (e.g., improved mobility, energy efficiency, citizen trust). A thematic synthesis approach was used to categorize findings under three analytical pillars: regulatory barriers, technological challenges, and societal benefits. Within each pillar, inductive sub-categorization was applied. For instance, under regulatory barriers, subthemes included compliance complexity, high-risk use-case restrictions, and fragmentation in interpretation. Selected findings were summarized in tables to illustrate key regulatory instruments, policy developments, and implementation challenges. Narrative synthesis was then applied to align these insights with the review’s core research questions, enabling an integrated understanding of how regulation interacts with innovation and social outcomes in the European smart city landscape.
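As a sketch of how such a charting form can be structured, the example below uses illustrative field names aligned with the three analytical pillars; it is an assumption for clarity, not a reproduction of the authors’ actual instrument.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structure of the extraction form described above; all field
# names are illustrative, not the authors' actual charting instrument.
@dataclass
class ExtractionRecord:
    source_id: str                                                    # citation key or DOI
    source_type: str                                                  # "peer-reviewed", "EU legal text", "gray literature"
    regulatory_provisions: List[str] = field(default_factory=list)    # e.g., AI Act classifications, GDPR requirements
    technological_challenges: List[str] = field(default_factory=list) # e.g., data interoperability, explainability
    societal_impacts: List[str] = field(default_factory=list)         # e.g., improved mobility, citizen trust
    subthemes: List[str] = field(default_factory=list)                # inductive sub-categories within each pillar

# Example record (placeholder content):
record = ExtractionRecord(
    source_id="doi:10.0000/example",
    source_type="gray literature",
    regulatory_provisions=["GDPR Art. 35 DPIA", "AI Act high-risk obligations"],
    technological_challenges=["explainability"],
    societal_impacts=["citizen trust"],
    subthemes=["compliance complexity"],
)
print(record)
```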
Applicability of PRISMA: The PRISMA-ScR framework was selected for this review due to its capacity to structure and synthesize knowledge across diverse sources and disciplinary boundaries. Its emphasis on scoping rather than evaluating evidence is particularly well suited for transdisciplinary inquiries such as the regulation of artificial intelligence in smart city contexts, which span legal texts, technical standards, and policy reports. By enabling the systematic mapping of heterogeneous evidence, including normative EU legal instruments, technical white papers, and academic studies, the PRISMA-ScR supports the integration of insights from fields that do not share a unified methodological tradition. Nevertheless, certain limitations of the method should be acknowledged. PRISMA-ScR was originally designed for the aggregation of empirical studies and may be less equipped to fully capture interpretive dimensions present in regulatory analysis or governance studies. Furthermore, the reliance on document-based sources may underrepresent tacit or practice-based knowledge relevant to municipal decision-making. These limitations were partially mitigated by incorporating gray literature, official EU repositories, and cross-sectoral publications. However, the boundaries of the method must be recognized, particularly in relation to the epistemic diversity and normative character of the domains reviewed.

3. EU Regulatory Landscape for AI in Smart Cities

European smart city initiatives are subject to one of the most stringent legal frameworks for digital technology globally [11,13,14]. A combination of general and sector-specific European Union regulations imposes requirements on artificial intelligence systems that may act as barriers or sources of friction for their deployment in urban services [4,16,18]. This section examines the primary regulatory constraints, including data protection provisions, the forthcoming Artificial Intelligence Act and related proposals, and other European laws relevant to key smart city sectors [2,10,12]. It considers how compliance obligations may impede or delay the adoption of artificial intelligence technologies [26,27,28].

3.1. EU AI Act

A central element of the European Union’s approach to artificial intelligence governance is the Artificial Intelligence Act, the first comprehensive law on AI globally [11,13,14]. Politically agreed in late 2023 and entering into force in 2024, the regulation introduces a harmonized, risk-based framework applicable across all member states [35,36]. The Artificial Intelligence Act classifies AI systems into four categories of risk: minimal, limited, high, and unacceptable, with corresponding obligations. Most everyday applications, such as spam filters and traffic prediction tools, are considered minimal risk and are subject only to voluntary codes of conduct [16]. Systems classified as limited risk must comply with specific transparency requirements, such as the obligation for chatbots to disclose their artificial nature. High-risk AI systems, by contrast, must meet extensive regulatory requirements under the Act [14,35,36].
High-risk categories, as defined in Annex III of the Artificial Intelligence Act, include AI systems used in safety-critical infrastructure, law enforcement, the administration of justice, and other contexts with substantial implications for individuals’ lives and rights [13,14]. Many smart city applications, such as AI-based traffic control, algorithms for public service allocation, autonomous public transport, and biometric identification in public spaces, are expected to be classified as high risk due to their impact on public safety and fundamental rights [2,29]. Developers and deployers of high-risk AI must implement risk management procedures, ensure the use of high-quality datasets to minimize bias, provide clear information to users, enable human oversight, and maintain detailed technical documentation and logging to support compliance audits [35,36].
Before deployment, such systems may also be subject to conformity assessments and must be registered in the European Union database of high-risk AI systems. At the opposite end of the risk spectrum, the Act prohibits a limited number of artificial intelligence practices classified as unacceptable. For instance, systems involving the social scoring of individuals by public authorities or those employing subliminal manipulation that causes harm are explicitly banned [15,16,35]. Importantly, the Act effectively prohibits real-time remote biometric identification, such as live facial recognition in public spaces for law enforcement purposes, except in narrowly defined emergency situations [16,18]. Other forms of remote biometric identification, such as retrospective facial recognition using closed-circuit television footage, are not banned but are classified as high risk. Consequently, police forces or municipal authorities deploying such systems must comply with the stringent safeguards established by the Act [9,18]. This prohibition reflects fundamental rights considerations in the European context and has direct implications for smart surveillance initiatives in urban environments.
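To summarize the tiered structure described in this subsection, the sketch below maps risk tiers to the obligations listed above. The example classifications are illustrative assumptions only; the actual categorization of any given system is determined by the Act, Annex III, and official guidance.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Obligations paraphrased from the text above, simplified for illustration.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["deployment banned"],
    RiskTier.HIGH: [
        "risk management procedures",
        "high-quality datasets to minimize bias",
        "clear information to users",
        "human oversight",
        "technical documentation and logging",
        "conformity assessment and EU database registration",
    ],
    RiskTier.LIMITED: ["transparency disclosure (e.g., chatbot self-identification)"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

# Illustrative pre-screening of example urban systems mentioned in the text;
# real classification depends on Annex III and forthcoming guidance.
EXAMPLES = {
    "spam filter": RiskTier.MINIMAL,
    "citizen-facing chatbot": RiskTier.LIMITED,
    "AI-based traffic signal control": RiskTier.HIGH,
    "social scoring by a public authority": RiskTier.UNACCEPTABLE,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value} -> {'; '.join(OBLIGATIONS[tier])}")
```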
The implementation of the Artificial Intelligence Act is phased. It entered into force in 2024, but its provisions apply over a transition period to allow stakeholders to prepare. By early 2025, the bans on unacceptable AI practices take effect, and within approximately one to two years, that is, by 2025 to 2026, the compliance obligations for high-risk and certain other AI systems, including possibly rules on general-purpose AI systems, will become mandatory [29,36]. Cities and municipalities, as deployers or sometimes providers of AI systems, will therefore soon be legally required to ensure that any high-risk AI they use, for example, in traffic management, public safety, or administrative decisions, complies with the standards of the Act [5,14,29]. In practice, this means that local governments must assess the risk category of AI solutions they procure, request the necessary compliance documentation from vendors, and in some cases modify or forgo certain use cases that may be prohibited, such as the use of live facial recognition through street surveillance cameras [9,18].
The Artificial Intelligence Act aims to foster trustworthy AI. Its proponents argue that by mitigating risks and enhancing public trust, the Act will ultimately support wider adoption of artificial intelligence, including in smart cities, on a foundation grounded in safety and fundamental rights [12,35]. However, concerns remain that the administrative burden and uncertainty introduced by the new rules may hinder innovation at the municipal level or discourage participation by smaller providers, an issue that will be revisited in the discussion of regulatory barriers [26,27].

3.2. Data Protection (GDPR) and Privacy

Long before the adoption of the Artificial Intelligence Act, the European Union’s robust data protection framework had already shaped the development of smart city initiatives. The General Data Protection Regulation, in force since 2018, applies to all processing of personal data within smart city systems. Much of the data that support smart city functions, including video from street cameras, residents’ mobility or energy consumption data, facial imagery, and vehicle registration details, constitute personal data subject to the provisions of the General Data Protection Regulation. The regulation imposes strict requirements concerning lawfulness, purpose limitation, data minimization, storage limitation, security, and the protection of individual rights, resulting in a substantial compliance burden for municipal authorities and technology providers [18,26]. For example, the deployment of an artificial intelligence-based traffic camera system that captures identifiable individuals or vehicles requires a valid legal basis under the General Data Protection Regulation, such as the public interest or individual consent; a clearly defined purpose, such as traffic flow analysis; and safeguards, including data anonymization or deletion protocols. Special categories of personal data, including biometric identifiers used in facial recognition technologies, are subject to even stricter limitations. Owing to their sensitive nature, processing of these categories is generally prohibited unless a specific exception applies [18].
As a consequence, cities must proceed with caution when deploying artificial intelligence systems that involve the monitoring of citizens, often being required to conduct Data Protection Impact Assessments and to consult regulatory authorities for high-risk data processing activities [17,18]. Numerous European smart city initiatives have revised their scope in response to privacy constraints. For instance, some municipalities have opted not to use artificial intelligence for individualized facial recognition in public spaces or have adopted anonymized data analytics instead, citing concerns about compliance with the General Data Protection Regulation [8,9]. Researchers have observed that the European Union’s stringent personal data protection laws can present considerable obstacles to smart city development, compelling projects to allocate significant resources to compliance and, in some cases, restricting the availability of data required to support artificial intelligence applications [26,37].
This has created a context in which developers of smart city technologies in Europe incur higher initial costs to implement privacy by design. In some instances, companies have redirected their focus to regions with less stringent data protection frameworks in order to test data-intensive innovations [7,26]. In summary, the General Data Protection Regulation remains both a foundational safeguard and a constraint for artificial intelligence in smart cities. It protects citizens’ privacy rights while simultaneously acting as a regulatory barrier when artificial intelligence solutions depend on large-scale personal data collection.
In addition to the General Data Protection Regulation, the proposed ePrivacy Regulation, which remains under negotiation as of 2025, may further influence the handling of smart city data, particularly communications metadata and data from Internet of Things sensors deployed in public spaces. However, its final form remains uncertain [26]. At the same time, the Law Enforcement Directive (EU) 2016/680 [38] governs the processing of personal data by police and other public authorities for security-related purposes. This is particularly relevant where municipal police forces use artificial intelligence for surveillance or predictive policing, adding an additional layer of data protection obligations, including the need to satisfy tests of necessity and proportionality when deploying new technologies [16,18].

3.3. Data Governance and Sharing Frameworks

Recognizing that data are essential to artificial intelligence, the European Union has introduced legislation to promote data sharing and availability in a trustworthy manner. The Data Governance Act, which became applicable in September 2023, establishes mechanisms for sharing public sector and personal data for uses that serve the public interest [39,40]. It permits the reuse of certain categories of sensitive public sector data, such as anonymized health, environmental, or transport data, under strict conditions. It also promotes the creation of data intermediation services as neutral brokers and sets rules for data altruism, allowing individuals to donate their data for societal benefit [39]. For smart cities, the Data Governance Act is particularly relevant, as many valuable datasets, including traffic patterns, energy consumption, and pollution levels, are held by public authorities or generated through citizen activities. Under this framework, municipalities can share data with artificial intelligence developers more confidently. For example, a city may grant researchers access to urban mobility data through a trusted intermediary without breaching privacy or relinquishing control [39,41].
The Data Governance Act also supports the development of Common European Data Spaces in strategic domains, including domains directly relevant to smart cities such as mobility, energy, and public administration [39]. Indeed, an EU-supported initiative is underway to create a federated “Data Space for Smart and Sustainable Cities and Communities,” providing governance frameworks and standards to help cities and private partners exchange data while respecting European values [30,42]. This is expected to reduce a key barrier to AI adoption, namely the lack of accessible data, by increasing the pool of data available for training algorithms and enabling cross-city learning, all within a regulated trust framework [18].

3.4. Data Act

Complementing the Data Governance Act, the Data Act, which entered into force in January 2024, represents another critical element of the European Union’s data strategy. While the Data Governance Act emphasizes governance mechanisms and altruistic data sharing, the Data Act establishes new legal rights to access and share data generated by connected devices and services [39,43]. It is designed to dismantle data silos associated with the Internet of Things. In the context of smart cities, this means that data produced by sensors, vehicles, smart appliances, and urban infrastructure should not remain exclusively under the control of manufacturers or operators. The Data Act grants users of connected products, including municipal authorities, the right to access the data generated by those devices and to share it with third-party service providers of their choice [18,43]. For instance, if a city installs smart waste bins or traffic sensors supplied by a vendor, the Data Act ensures that the city can retrieve the sensor data and either use it directly or provide it to an artificial intelligence company to enhance route optimization, without the data being exclusively held by the vendor.
The Act also prohibits unfair contractual terms that hinder data sharing and introduces provisions for business-to-government data access in emergency situations. For example, cities may request relevant private sector data, such as telecommunications mobility data during a crisis, for uses that serve the public interest [39,43]. In addition, the Data Act contains measures to facilitate switching between cloud service providers and to mandate interoperability standards, helping cities avoid vendor lock-in when deploying artificial intelligence platforms [43]. Most of its provisions will take effect during the 2025 to 2026 period following a transition phase, with general applicability beginning in September 2025. The Act is expected to significantly increase the volume of data available to support artificial intelligence solutions in urban contexts [30,43]. In summary, the Data Governance Act and the Data Act are European Union instruments that, rather than restricting artificial intelligence, aim to enable it by expanding access to data while maintaining public trust. They form part of the enabling regulatory framework that complements protective instruments such as the General Data Protection Regulation and the Artificial Intelligence Act.

3.5. Sector-Specific Regulations and Directives

Beyond these horizontal frameworks, several domain-specific EU laws influence AI applications in particular smart city sectors.

3.5.1. Mobility and Transportation

Intelligent Transport Systems are a central element of smart city development. In 2023, the European Union revised its Intelligent Transport Systems Directive (2010/40/EU) [44] through Directive (EU) 2023/2661 [41] to accelerate the deployment of digital transport services. The updated Directive explicitly addresses connected and automated mobility and mandates the interoperability of transport data and services across the European Union [40,41]. It requires, for example, the standardization of data relating to roadworks, traffic, and multimodal travel information, and mandates that such data be made available through a common European mobility data space [40]. This regulatory initiative obliges cities to ensure that their traffic data and smart mobility services comply with European specifications. In return, they gain access to broader data integration, such as vehicles communicating real-time hazard information to traffic management centers.
European Union vehicle safety regulations have begun to incorporate artificial intelligence components. Since July 2022, the General Safety Regulation requires that all new vehicles be equipped with advanced driver assistance systems, some of which are artificial intelligence-based, such as intelligent speed assistance and lane-keeping technologies, as well as black-box data recorders [42,45]. The European Union is also developing a unified regulatory framework for autonomous vehicles. By 2024 to 2025, the European Commission aims to enable Union-wide type approval of automated vehicles, moving beyond the limited Level 3 automation rules currently governed by the United Nations Economic Commission for Europe [45]. By 2026, the European Union intends to permit the approval of higher-level automated driving systems, such as autonomous shuttles and freight vehicles, and to harmonize the regulation of cross-border autonomous vehicle operations [42,45]. This development will offer cities piloting autonomous buses or robotaxis a more coherent legal pathway at the European level, replacing the fragmented national experimental frameworks that had previously governed such initiatives. In the interim, several member states, including Germany and France, have enacted national legislation authorizing autonomous driving in controlled conditions. However, a comprehensive European Union regulation is forthcoming to standardize the approach. In addition, the planned European Connected and Autonomous Vehicle Alliance, an initiative of the European Union, is expected to develop common software and standards for vehicle artificial intelligence, potentially benefiting municipal vehicle fleets [42,45].
These transport regulations are intended to reduce legal uncertainty and support the safe deployment of artificial intelligence-driven mobility solutions. However, they also impose specific obligations, such as the mandatory inclusion of data recorders in automated vehicles and cybersecurity requirements for vehicle systems under the NIS2 Directive, which practitioners must carefully observe [19].

3.5.2. Energy and Utilities

In the energy sector, European Union regulations prioritize smart grids, data accessibility, and system security. The Electricity Directive (EU) 2019/944 [46] mandates the deployment of smart metering and grants consumers rights to access their energy consumption data, aligning with the objectives of the Data Act [39,43]. Energy network operators are required to comply with interoperability standards, such as the use of national data hubs, which support the optimization of grids through artificial intelligence [2,30]. The European Green Deal has prompted policies that promote digitalization to enhance energy efficiency. For instance, cities are expected to implement building automation and demand response systems to achieve European Union energy efficiency targets [2,47]. The Data Governance Act and the Data Act also contribute by enabling the sharing of energy data, under appropriate privacy safeguards, to support the development of artificial intelligence models for demand forecasting and improved integration of renewable energy sources [39,43].
In addition, the Critical Entities Resilience Directive of 2022 [48] and the NIS2 Directive [19] include the energy sector within their scope. This means that urban energy infrastructure employing artificial intelligence, such as automated grid management systems, must comply with cybersecurity and resilience requirements [19]. Although there is not yet a dedicated artificial intelligence law for the energy sector, European Union funding programs, including Horizon Europe and Digital Europe, actively support the use of artificial intelligence in urban energy management [42,47]. The prevailing regulatory approach in the energy domain seeks to ensure both data availability to foster innovation and the reliability of infrastructure. A practical example is found in certain member states where regulators now require utilities to offer open application programming interfaces for real-time energy consumption data. This allows third-party artificial intelligence applications to assist consumers and city managers in optimizing energy use [2,30]. While such policies are enabling, ensuring compliance, such as verifying that artificial intelligence-based forecasts do not violate the General Data Protection Regulation when household data are involved, remains a significant operational challenge [18,26].
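As a sketch of what consuming such an open consumption-data interface might look like, the example below polls a hypothetical endpoint and computes a simple summary. The URL, parameters, authentication scheme, and response schema are all assumptions for illustration; no real national data hub or utility API is referenced.

```python
import requests  # third-party HTTP library: pip install requests

# Sketch of a third-party application reading real-time consumption data from
# an open utility API of the kind described above. Endpoint and schema are
# hypothetical assumptions, not an actual data hub interface.
response = requests.get(
    "https://api.example-utility.eu/v1/consumption",     # hypothetical endpoint
    params={"meter_id": "METER-123", "resolution": "15min"},
    headers={"Authorization": "Bearer <access-token>"},   # placeholder credential
    timeout=10,
)
response.raise_for_status()
readings = response.json()["readings"]                    # e.g., [{"ts": "...", "kwh": 0.42}, ...]

average_kwh = sum(r["kwh"] for r in readings) / len(readings)
print(f"Average consumption per 15-minute interval: {average_kwh:.2f} kWh")
```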

3.5.3. Public Safety and Surveillance

Surveillance remains one of the most sensitive areas in the context of smart cities. European fundamental rights law, including the General Data Protection Regulation and the Charter of Fundamental Rights of the European Union, imposes strict limitations on surveillance practices. The Artificial Intelligence Act’s prohibition on real-time biometric identification, with exceptions limited to serious crimes or terrorist threats, effectively constitutes a ban on indiscriminate facial recognition in urban surveillance systems [15,16,18]. Several European Union institutions and member states have affirmed that mass biometric surveillance is incompatible with the concept of human-centric smart cities. Even prior to the adoption of the Artificial Intelligence Act, the European Data Protection Board had cautioned against the use of live facial recognition in public spaces [9]. As a consequence, numerous European cities have either refrained from deploying or have explicitly prohibited the use of facial recognition in public areas. For example, cities such as Amsterdam and Helsinki have committed to abstaining from facial recognition in closed-circuit television systems until a clear legal framework is in place. The Artificial Intelligence Act reinforces this cautious approach [18,29].
At the same time, less invasive artificial intelligence surveillance tools are permitted, although still subject to regulation. For example, an artificial intelligence system that analyzes closed-circuit television feeds to detect behavioral anomalies, without identifying individuals, may be allowed as a high-risk system, provided it complies with transparency obligations and is subject to human oversight [14,16]. The use of artificial intelligence by law enforcement authorities, such as predictive policing algorithms or suspect identification tools, is governed by both the Artificial Intelligence Act, where applicable, and the Law Enforcement Directive. Such systems must be demonstrably necessary and proportionate under national legislation. In addition, the Artificial Intelligence Act introduces further obligations, including requirements for accuracy testing and measures to mitigate bias [16,18].
Cities therefore encounter regulatory barriers in the adoption of certain artificial intelligence-based policing technologies. The European Union framework requires thorough justification and the implementation of safeguards, which may delay or preclude deployment [26,28]. However, these regulations play a critical role in protecting citizens from potential overreach and misuse of artificial intelligence surveillance, thereby upholding societal values [7]. A real-world example is Hungary’s attempted implementation of biometric street cameras, which attracted criticism for its likely incompatibility with the forthcoming Artificial Intelligence Act [9,18]. In summary, the European Union clearly signals that although artificial intelligence can contribute to public safety, such as by enabling faster emergency response through gunshot detection or crowd monitoring algorithms, it must not do so at the expense of fundamental rights. Cities are required to innovate within these regulatory parameters, prioritizing artificial intelligence solutions that enhance safety without relying on continuous personal identification or unwarranted tracking.

3.5.4. Other Domains

In domains such as waste management, water supply, and general municipal administration, European Union law typically applies through the horizontal regulatory frameworks already discussed, including those on data governance, privacy, and the Artificial Intelligence Act, rather than through domain-specific rules for artificial intelligence [10,11]. Waste management, for instance, can benefit from artificial intelligence applications in route optimization and the sorting of recyclable materials. There is no European legislation prohibiting the use of artificial intelligence for routing waste collection vehicles. However, if such a system employs cameras to detect when bins are full and inadvertently capture images of individuals nearby, the General Data Protection Regulation would apply [17,18]. Similarly, while the Urban Wastewater Treatment Directive (EU) 2024/3019 [49] and related instruments do not reference artificial intelligence, the use of artificial intelligence to monitor water quality in real time must be aligned with existing standards for environmental data reporting [39].
In the area of municipal administration, the European Union’s digital government initiatives encourage the adoption of artificial intelligence, such as chatbots for electronic public services, while simultaneously promoting algorithmic transparency in public sector applications [11,12,29]. Although not yet codified in law, there are soft-law instruments, including guidance issued by the European Commission on the use of artificial intelligence in public services, which are followed by many cities [10,31]. In addition, public procurement law in the European Union, particularly Directives 2014/24/EU [50] and related instruments, plays an important role. Cities procuring artificial intelligence systems are required to do so through competitive procedures and may include criteria ensuring compliance with ethical and legal standards in their tender specifications [13,14]. The European Union has issued recommendations on the procurement of artificial intelligence, advising public authorities to request algorithmic explanations and risk assessments from vendors [10,36]. While these recommendations are not legally binding, they influence municipal procurement practices and contribute to establishing expectations that artificial intelligence solutions adopted by cities meet principles of trustworthiness [12,31].

3.6. Upcoming Liability and Safety Rules

While the Artificial Intelligence Act establishes ex ante obligations, the European Union has also considered ex post regulatory measures for instances in which artificial intelligence causes harm. In late 2022, the European Commission proposed an Artificial Intelligence Liability Directive aimed at easing the process for victims to claim compensation, for example, by lowering the burden of proof [27,45]. This proposal was particularly relevant for municipalities, as liability rules would determine how individuals could seek redress if, for instance, an autonomous bus operated by a city were to cause injury. However, as of early 2025, the Commission withdrew the proposal due to the absence of political consensus [27,45]. Concerns have been raised that the directive might lead to over-regulation or result in unnecessary duplication of existing liability frameworks.
In place of the Artificial Intelligence Liability Directive, the regulatory focus has shifted to the revision of the Product Liability Directive (EU) 2024/2853 [51] in order to explicitly include software and artificial intelligence. This ensures that if a product incorporating artificial intelligence, such as a smart city sensor system or an autonomous vehicle, malfunctions and causes harm, the manufacturer or deployer can be held liable under harmonized rules across the European Union. The revised Product Liability Directive was adopted in 2024 and must be transposed by member states by the end of 2026. It will extend strict liability to producers of artificial intelligence-enabled products and, in some cases, to software providers [52]. For cities, this development provides greater legal clarity, as vendors of artificial intelligence systems will bear defined liability, facilitating the recovery of damages when malfunctions occur. However, municipalities themselves may also be held liable if found negligent in deploying artificial intelligence, although the application of sovereign immunity to public authorities continues to vary across national legal systems.
Furthermore, the European Union’s Cybersecurity Act and the new Cyber Resilience Act, enacted in 2024, require that connected devices and software, including artificial intelligence systems, comply with cybersecurity standards and bear CE markings for cyber safety between 2025 and 2027 [23,24,53]. These measures aim to ensure that smart city Internet of Things and artificial intelligence systems are protected against cyberattacks, a critical concern as cities increasingly automate essential services. While compliance with these new safety and security obligations may increase costs or delay the deployment of certain artificial intelligence solutions pending certification, the long-term objective is to prevent severe incidents, such as the malicious takeover of municipal artificial intelligence infrastructure [24,25].

3.7. A Dense Regulatory Framework

In summary, by 2025, the EU has built a dense regulatory framework affecting AI in smart cities. Protective regulations like the AI Act, the GDPR, and sectoral privacy/safety rules set boundaries and obligations that can act as barriers or slow down the adoption of certain AI technologies in cities. Meanwhile, enabling frameworks like the Data Governance Act, the Data Act, and updated ITS standards seek to provide the data and legal clarity that fuel AI innovation in cities. This combination reflects the EU’s philosophy of promoting technologically advanced smart cities that are also “lawful and ethical”. This EU legislation illustrates a dual-purpose regulatory strategy: to safeguard fundamental rights while simultaneously enabling innovation. As demonstrated, horizontal regulations such as the General Data Protection Regulation and the Artificial Intelligence Act establish essential legal boundaries for AI development and deployment in smart cities, especially concerning ethical use, transparency, and citizen trust. Concurrently, enabling instruments like the Data Governance Act and the Data Act are designed to unlock the potential of data-driven technologies by promoting interoperability, secure data access, and common standards.
Yet, this regulatory architecture can appear opaque to municipal stakeholders who must coordinate compliance across overlapping domains of data protection, cybersecurity, product safety, and sector-specific standards. To support a more integrated understanding, Table 1 presents a consolidated overview of the principal EU regulatory instruments discussed in this review, highlighting their scope, relevance to smart cities, key obligations, and implementation status. This synthesis underscores the density of the framework and the operational complexity it introduces, particularly for city-level authorities and SMEs tasked with deploying trustworthy AI systems.
The next sections delve into how these regulations translate into on-the-ground barriers and challenges for smart city stakeholders and how overcoming them can unlock significant societal benefits.

4. Regulatory Barriers to AI Adoption in Smart Cities

While the EU’s comprehensive regulatory framework provides important safeguards, it also introduces non-trivial barriers to the adoption of AI in smart city initiatives. These barriers can be legal, administrative, and financial in nature, stemming from the need to comply with multiple regulations and the uncertain interpretation of new rules. Here, we analyze the major regulatory barriers faced by cities and companies deploying AI in urban environments, as identified by studies and practical experiences.

4.1. Compliance Burden and Complexity

One of the most immediate barriers is the complexity involved in complying with the European Union’s regulatory framework for artificial intelligence and data [12,26,31]. A municipal authority seeking to implement an artificial intelligence system must navigate a dense and multifaceted body of legislation. This includes ensuring compliance with the General Data Protection Regulation for data protection, determining whether the system qualifies as high risk under the Artificial Intelligence Act and meeting the associated obligations, considering relevant sector-specific regulations in areas such as transport or health, and addressing applicable public procurement and cybersecurity requirements [11,14,19,26]. Meeting these legal obligations frequently demands specialized legal and technical expertise, which many municipalities may not possess internally.
For example, carrying out a Data Protection Impact Assessment for a smart city artificial intelligence project, as required under the General Data Protection Regulation for high-risk data processing, is a complex undertaking. It may necessitate the engagement of external data privacy consultants and can significantly delay project implementation [17,18,26,56]. The Artificial Intelligence Act will introduce similar obligations, including conformity assessments and extensive documentation for high-risk systems. These are tasks with which cities have limited experience, as safety certification has traditionally applied to physical products rather than algorithmic systems [26,35,36,57]. Smaller municipalities may find these processes particularly burdensome and difficult to manage [26,29,31].
The costs associated with compliance, including the implementation of privacy by design, the conduct of audits, and the preparation of technical documentation to satisfy the requirements of the Artificial Intelligence Act, can place considerable strain on municipal budgets and may deter innovation. A recent qualitative study of European smart city projects found that high compliance and localization costs, particularly those related to adapting solutions to meet European legal standards, often compel developers either to seek additional funding or to reduce the scale and ambition of their artificial intelligence deployments [26,37]. In some instances, promising artificial intelligence pilot projects are discontinued by city authorities who are concerned about potential legal risks or who lack the financial capacity to manage the associated compliance burden. This situation has been characterized as a chilling effect, in which well-intentioned regulations inadvertently inhibit public sector innovation in artificial intelligence due to apprehension over non-compliance [26,28,58].

4.2. Data Access Limitations Due to Privacy

Although the European Union actively promotes data sharing, privacy regulations continue to limit access to certain datasets that could support artificial intelligence development [39,43]. The General Data Protection Regulation, while essential for safeguarding individual rights, prevents the unrestricted use of many types of urban data unless they are anonymized or used with informed consent [17,18,26,59]. Effective anonymization is technically complex and can reduce the utility of data for artificial intelligence applications [18,60]. City officials have reported challenges in using historical closed-circuit television footage to train computer vision models due to privacy requirements that mandate the blurring of faces and vehicle registration plates. This process is costly and often beyond the capacity of small teams, resulting in potentially valuable training data remaining unused [7,8,26,61].
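To illustrate the kind of anonymization work this involves, the sketch below blurs detected faces in an archived frame using OpenCV. It is a minimal sketch under stated assumptions, not the method used by any of the cited projects: a production pipeline would also require licence-plate detection, tracking across frames, and documented quality assurance.

```python
import cv2  # OpenCV: pip install opencv-python

# Minimal sketch of face blurring for archived CCTV frames before reuse as
# training data. Plate blurring, multi-frame tracking, and quality checks
# would also be needed in practice; this is illustrative only.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        region = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return frame

# Hypothetical usage:
# frame = cv2.imread("cctv_frame.jpg")
# cv2.imwrite("cctv_frame_anonymized.jpg", blur_faces(frame))
```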
Similarly, real-time personalized data, such as tracking individual vehicles across a city to analyze congestion patterns, is generally prohibited unless robust pseudonymization techniques are applied [17,18,56]. The purpose limitation principle under the General Data Protection Regulation further restricts data reuse. Data collected for one purpose, such as billing for water usage, cannot typically be repurposed for a different artificial intelligence project, such as predicting neighborhood water demand, without either obtaining renewed consent from citizens or demonstrating that the new use falls under a compatible purpose [17]. Obtaining consent on a population-wide scale is often impractical for municipal initiatives, thereby creating a significant barrier to the reuse of data [26,59].
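As an illustration of the robust pseudonymization referred to above, the sketch below replaces a vehicle identifier with a keyed hash so that trajectories can be linked for congestion analysis without exposing the plate itself. This is a sketch only: key management, rotation, and a Data Protection Impact Assessment would still be required, and keyed hashing alone does not render data anonymous under the GDPR.

```python
import hmac
import hashlib
import secrets

# Illustrative pseudonymization: a keyed hash (HMAC-SHA256) maps a licence
# plate to a stable token that supports trajectory linkage without revealing
# the plate. The key must be held securely by the data controller; this alone
# does not make the data anonymous under the GDPR.
SECRET_KEY = secrets.token_bytes(32)  # rotate periodically in practice

def pseudonymize_plate(plate: str) -> str:
    digest = hmac.new(SECRET_KEY, plate.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]    # shortened token for readability

print(pseudonymize_plate("AB 12 345"))  # same plate -> same token while the key is unchanged
```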
As a consequence, developers are sometimes required to rely on synthetic data or simulations to train artificial intelligence models, which may not be as effective as real-world data [1,26,60]. Alternatively, cities may limit their use of artificial intelligence to aggregate non-personal data. For example, general traffic flow counts may be used instead of detailed vehicle trajectory data, thereby constraining the potential sophistication of the artificial intelligence solution [37,60]. One scholar has observed that smart city development in the European Union “suffers from one-sided inputs” because privacy regulations restrict access to certain data streams, which may introduce bias into artificial intelligence systems or reduce their accuracy [26].
While the Data Governance Act now offers mechanisms to enable the secure sharing of sensitive data, such as through data altruism and trusted intermediaries, these frameworks remain in their early stages as of 2025 and have not yet been widely adopted by cities. As a result, the data access barrier continues to persist in practice [30,39,41].

4.3. Restrictions on High-Risk Use-Cases

The Artificial Intelligence Act’s classification of certain smart city applications as high risk or prohibited presents a direct regulatory obstacle to their adoption. The most prominent example is the prohibition of real-time remote biometric identification in public spaces for law enforcement purposes. Even if a city’s police authorities consider facial recognition useful for locating missing persons or apprehending suspects more efficiently, European Union law prevents the broad deployment of such systems [15,16,18]. This effectively removes a technological option from the smart city repertoire, for better or for worse, and has prompted some security technology companies to redirect their focus away from such solutions in the European market [7,9,18].
Even in the absence of outright prohibitions, the obligations associated with high-risk artificial intelligence systems may significantly delay deployment [14,35,36]. For example, a city-operated artificial intelligence system used to allocate social housing, which involves decisions that affect individuals’ rights, would be classified as high risk. In such cases, the city or its vendor must implement comprehensive risk management procedures, maintain audit logs, and meet other compliance requirements. If these measures are not in place, the system cannot legally be brought into operation [14,36,57]. As a result, regulation may slow the introduction of beneficial artificial intelligence solutions. An artificial intelligence-based traffic management system that dynamically controls traffic signals could be considered sufficiently critical to qualify as high risk due to its implications for public safety. If the vendor is unable to provide the required conformity assessment, or if the designated notified bodies experience delays in certification, the city would not be permitted to activate the system until compliance is fully achieved. This has raised concerns about a potential shortage of qualified bodies capable of certifying artificial intelligence systems during the initial implementation period [29,36,57].
Such delays constitute a barrier to time-sensitive innovation. Some city officials have expressed concern that by the time compliance procedures are completed, the technology may already be outdated or the momentum of the pilot project lost [5,26,31]. Furthermore, the classification of systems as high risk may discourage smaller local developers or start-ups, which frequently serve as drivers of smart city innovation [26,28,58]. Concerns about liability also influence decision-making. Under forthcoming product liability rules, if a municipality deploys an artificial intelligence system and it causes harm, legal action may be brought against the authority with greater ease [27,45,52]. While this framework offers important protection for citizens and is beneficial in principle, it contributes to a risk-averse environment among public officials. Many may choose to adopt a cautious approach, favoring traditional technologies over the perceived legal uncertainties associated with artificial intelligence [26,28].
In summary, regulatory risk aversion constitutes a significant barrier. Municipal leaders may choose to avoid or postpone the adoption of artificial intelligence not because it is explicitly prohibited by law but because the legal framework introduces uncertainty and the possibility of legal liability [26,28,58].

4.4. Fragmentation and Uncertainty in Interpretation

In the initial phase of implementation of the Artificial Intelligence Act and related legislation, there remains uncertainty regarding the interpretation of specific provisions in the context of smart cities. Determining whether a particular municipal artificial intelligence system qualifies as high risk can be complex. Annex III of the Act lists high-risk use cases, such as traffic management systems that directly affect public safety. However, questions arise regarding systems that only provide recommendations to human operators, such as an artificial intelligence tool offering guidance to traffic managers without directly controlling traffic signals. Whether such a system qualifies as high risk remains open to interpretation. In the absence of jurisprudence or formal guidance, cities are left operating within a legal gray area. This uncertainty constitutes a barrier, as many organizations adopt a cautious stance or seek legal opinions, which requires additional time and resources. The International Association of Privacy Professionals, for example, has noted that the boundaries between prohibited and permitted uses remain unclear, such as how emotion recognition systems, which the Act disfavors, are to be distinguished from non-invasive forms of biometric analysis [16,18]. These ambiguities may deter the deployment of such technologies in Europe altogether [7,9].
Moreover, although the Artificial Intelligence Act establishes a unified regulatory framework, its implementation may differ across member states in practice. Each member state is responsible for designating enforcement authorities, which may involve existing regulatory bodies or newly created institutions and may impose varying penalties. As a result, a company supplying artificial intelligence systems to municipalities may encounter differing regulatory expectations in jurisdictions such as France and Germany. Until full harmonization is achieved, firms often adhere to the most stringent interpretation to minimize legal risk, thereby increasing development costs [36,45,58]. This situation is reminiscent of the early implementation period of the General Data Protection Regulation, when regulatory uncertainty and divergent national approaches led many organizations to adopt a cautious stance in deploying new data-driven services [18,56].
Another relevant consideration concerns public procurement. While European Union procurement directives mandate open tendering processes, some municipal procurement officials remain uncertain about their obligations under the Artificial Intelligence Act. Questions persist regarding whether procurement contracts must include specific artificial intelligence compliance clauses or whether purchasing a system that has not yet been certified under the Act could later be deemed non-compliant once the legislation is fully in force [10,13]. This uncertainty has made some procurement departments slower or more hesitant to approve artificial intelligence acquisitions in 2025, thereby creating a de facto barrier to adoption. Although anecdotal, such concerns have been reported at various smart city conferences. It is anticipated that as the regulatory environment matures, and with the publication of interpretative guidance from bodies such as the Commission’s Artificial Intelligence Office, this uncertainty will diminish. However, during the transitional period, it remains a significant obstacle [29,36].

4.5. Resource Inequalities and the Innovation Gap

Larger global technology firms typically possess the legal resources and financial capacity to ensure compliance with European Union regulations and, in some cases, to influence their formulation [26,28,31]. In contrast, smaller artificial intelligence start-ups and municipal information technology departments often lack such capabilities. This regulatory barrier disproportionately affects smaller actors, despite their frequent role in driving local innovation [5,58]. A start-up offering a promising artificial intelligence solution for urban traffic management may choose to avoid the European market or fail to scale its operations due to the cost of achieving legal compliance, resulting in cities missing out on potentially more effective or affordable technologies [26,27,58]. Similarly, wealthier municipalities, such as national capitals, are generally able to invest in pilot programs and navigate compliance obligations, whereas smaller cities may be compelled to abandon artificial intelligence initiatives due to limited resources [26,29].
This situation may result in an innovation gap in which only well-resourced cities are able to realize the benefits of artificial intelligence, contrary to the European Union’s objectives of territorial cohesion and equal opportunity [10,12]. In response to this concern, the Artificial Intelligence Act includes provisions for regulatory sandboxes, experimental frameworks that allow innovators to collaborate with regulators in testing artificial intelligence systems under conditions of regulatory flexibility [14,36,62]. These sandboxes offer the potential for cities to trial artificial intelligence solutions in a controlled setting, thereby mitigating some compliance-related barriers. However, as of 2025, most sandboxes remain in the process of being established, and it is not yet clear how effective they will be in lowering practical barriers to deployment [36,62]. In the meantime, the uneven ability of different cities to manage regulatory demands constitutes a systemic barrier at the ecosystem level [26,28,58].

4.6. Ethical and Public Acceptability Concerns Enforced by Regulation

European Union regulations often reflect underlying ethical principles such as fairness, transparency, and the mitigation of bias. In some cases, even where a smart city artificial intelligence project is legally permissible, it may encounter resistance due to public opposition grounded in these values. This dynamic can be understood as a societal regulatory effect [7,31]. For example, an artificial intelligence system used to allocate municipal services may fully comply with high-risk requirements under the Artificial Intelligence Act. Nevertheless, if the public perceives the system as a non-transparent mechanism that delivers unfair outcomes, it may provoke backlash or lead to political reluctance to proceed [31,37,61]. Public awareness and activism concerning digital rights are particularly prominent in Europe. In response to civil society pressure, numerous municipal councils have voluntarily imposed moratoria on the use of specific artificial intelligence technologies, such as facial recognition, even in the absence of formal legal prohibitions [7,9,18].
This cultural context means that, beyond compliance with formal legal requirements, cities must demonstrate algorithmic transparency and fairness to the public, which functions as a normative barrier [5,31,62]. The Artificial Intelligence Act introduces certain transparency obligations, such as user notifications and, in some cases, summary explanations for decisions made by high-risk systems. However, delivering meaningful transparency for complex artificial intelligence models is a considerable challenge [14,36,57]. Ensuring that a system is unbiased and explainable can be as demanding as fulfilling statutory obligations [16,61,62]. This frequently necessitates additional technical work, including the implementation of bias audits and the integration of explanation tools. In some instances, it may also require compromising on model accuracy to achieve greater simplicity or interpretability [18,61].
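As a concrete illustration of the kind of technical work a bias audit entails, the following minimal Python sketch computes group-wise selection rates and a demographic parity gap for a hypothetical allocation system; the group labels, decisions, and choice of metric are illustrative assumptions rather than a prescribed audit methodology.

```python
import numpy as np

# Hypothetical audit data: one row per applicant to a municipal service,
# with the group label and the system's binary decision (1 = service granted).
groups    = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])
decisions = np.array([ 1,   0,   1,   0,   0,   1,   0,   1 ])

def selection_rates(groups, decisions):
    """Share of positive decisions per group."""
    return {g: decisions[groups == g].mean() for g in np.unique(groups)}

rates = selection_rates(groups, decisions)
parity_gap = max(rates.values()) - min(rates.values())
print("Selection rates:", rates)
print(f"Demographic parity gap: {parity_gap:.2f}")
# A large gap does not prove unlawful discrimination, but it signals that
# the system's outcomes warrant closer review and documentation.
```

In practice, such checks would be run across many protected attributes and complemented by qualitative review, which illustrates why fairness validation adds measurable effort to deployment.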
City administrators must consider the risk that deploying an artificial intelligence system which produces a controversial outcome, such as a predictive policing model that disproportionately targets minority neighborhoods, could not only contravene legal standards but also erode public trust and result in legal challenges under anti-discrimination law [28,63]. The imperative to align with ethical and equality principles, which are strongly embedded in European Union law and values, constitutes a barrier in that it constrains the use of certain artificial intelligence approaches, such as opaque or black-box models, and requires additional measures to validate system behavior [16,31,61]. Nevertheless, this is a barrier that cities largely recognize as essential to address in order to prevent societal harm.

4.7. Overcoming Regulatory Barriers

In examining these regulatory barriers, it becomes evident that the European Union’s protective approach introduces short- to medium-term challenges for the adoption of artificial intelligence. Municipalities must invest in legal expertise, engage with stakeholders, and, in some cases, forgo the most advanced artificial intelligence techniques in favor of more transparent, though potentially less powerful, alternatives [31,61]. The barriers include compliance costs, limitations on data access, restrictions on use cases, legal uncertainty, disparities in institutional capacity, and ethical constraints [18,26,58].
While this may present a daunting picture, it reflects only one dimension of the broader landscape. Many of these barriers are being addressed through capacity-building efforts and the adaptation of technologies. For example, best practices and technical tools are emerging, with some supported through European Union funding, that aim to automate elements of General Data Protection Regulation compliance or facilitate the testing of artificial intelligence systems for bias. These innovations are expected to reduce the regulatory burden over time [14,36,62]. The following section explores the technological challenges that are closely linked to these regulatory issues, noting that the resolution of regulatory constraints often depends on technical advances, such as privacy-enhancing technologies and explainable artificial intelligence methods [16,61]. By addressing regulatory and technological challenges in tandem, smart cities can chart a responsible path towards realizing the benefits of artificial intelligence [11,31].

5. Technological Challenges in Implementing AI in Smart Cities

In addition to regulatory barriers, smart city stakeholders encounter a range of technological challenges when implementing artificial intelligence solutions. These challenges are often closely linked to regulatory concerns. For instance, technical limitations such as insufficient data may be compounded by legal restrictions on data usage, while, in some cases, technical tools can help alleviate regulatory constraints, such as the use of privacy-enhancing technologies to comply with the General Data Protection Regulation. This section examines the principal technical and operational challenges, including issues related to data, infrastructure, algorithms, and institutional capacity, that cities must address in order to adopt artificial intelligence effectively.

5.1. Data Quality, Silos, and Interoperability

Artificial intelligence systems are only as effective as the data on which they are trained and which they continuously receive. Although cities produce vast volumes of data, ensuring their quality and accessibility presents a fundamental challenge. Urban data are frequently siloed across various departments and systems, with traffic data stored in one database, public transport data in another, and utility usage data elsewhere, often in incompatible formats and without standardization. Integrating these datasets to enable holistic artificial intelligence analysis, such as coordinating traffic control with pollution monitoring and public transport schedules, is often extremely difficult. Hence, interoperability challenges pose a significant barrier to seamless system integration [4,37]. Legacy systems commonly used by municipal departments may not be compatible with modern artificial intelligence platforms. For instance, traffic signal control infrastructure may operate using proprietary protocols, making real-time data extraction difficult and limiting an artificial intelligence system’s capacity to achieve a comprehensive view of traffic conditions.
Data quality presents an additional challenge. Sensors may produce errors or contain data gaps. For example, a malfunctioning air quality sensor could transmit inaccurate readings, potentially misleading an artificial intelligence model. Many municipalities lack robust data governance frameworks, resulting in datasets that may contain inaccuracies, outdated information, or embedded biases, such as crime statistics that systematically underreport certain types of incidents [26,28,63]. Training artificial intelligence systems on such data can yield unreliable or distorted outcomes [61,63]. Achieving data interoperability frequently requires the adoption of common standards. The European Union’s Intelligent Transport Systems Directive and the broader push towards common European data spaces promote this objective. However, the practical implementation of these standards, such as DATEX II for traffic data or CityGML for spatial data, is technically complex and often progresses slowly [30,40,41].
Overcoming data silos may require overhauling information technology systems or investing in data middleware; until this is done, AI implementations are frequently confined to narrow vertical applications because merging data horizontally across domains is too challenging [2,4]. This limitation prevents cities from fully exploiting AI’s predictive power, which often derives from connecting diverse data sources. Empirical evidence illustrates the impact of data integration challenges on AI deployment. In Barcelona, the introduction of an AI-based smart grid platform, supported by more than 19,000 smart meters, contributed to measurable reductions in municipal energy consumption and improved HVAC performance in public buildings [64]. By contrast, in Helsinki, despite the extensive availability of open datasets through the Helsinki Region Infoshare platform, technical reports indicate persistent interoperability issues stemming from inconsistent metadata structures and varying data formats. These issues complicate the integration of heterogeneous urban datasets for advanced AI modelling [65].
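The following minimal Python sketch illustrates the kind of schema harmonization and basic validation such integration typically requires; the departmental exports, column names, and sentinel values are hypothetical and are chosen only to show mismatched formats being mapped onto a shared schema.

```python
import pandas as pd

# Hypothetical exports from two departments with inconsistent schemas.
traffic = pd.DataFrame({
    "Zeitstempel": ["2025-03-01 08:00", "2025-03-01 09:00"],
    "kreuzung": ["A1", "A1"],
    "fahrzeuge": [412, 530],
})
air_quality = pd.DataFrame({
    "time": ["01/03/2025 08:00", "01/03/2025 09:00"],
    "station": ["A1", "A1"],
    "no2_ugm3": [38.0, -999.0],   # -999 is a common sentinel for a sensor fault
})

# Map both sources onto a shared schema with parsed timestamps and explicit units.
traffic = traffic.rename(columns={"Zeitstempel": "timestamp", "kreuzung": "location",
                                  "fahrzeuge": "vehicle_count"})
traffic["timestamp"] = pd.to_datetime(traffic["timestamp"])

air_quality = air_quality.rename(columns={"time": "timestamp", "station": "location"})
air_quality["timestamp"] = pd.to_datetime(air_quality["timestamp"], dayfirst=True)
air_quality.loc[air_quality["no2_ugm3"] < 0, "no2_ugm3"] = float("nan")  # flag faulty readings

merged = traffic.merge(air_quality, on=["timestamp", "location"], how="outer")
print(merged)
```

Even this toy example shows why standardized metadata and agreed identifiers matter: without them, every cross-departmental analysis requires bespoke reconciliation work.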

5.2. Infrastructure and Connectivity Constraints

Smart city artificial intelligence solutions typically depend on a robust digital infrastructure. This includes extensive sensor networks through Internet of Things devices, high-speed connectivity such as fifth-generation or fiber-optic networks, and access to edge or cloud computing resources. Many municipalities, especially smaller ones, do not yet possess the necessary infrastructure to support such systems [26,31]. For example, operating an artificial intelligence system that manages traffic in real time may require numerous cameras or Internet of Things detectors at every intersection, along with a reliable, low-latency network to transmit the data to a centralized or edge computing unit. Where network bandwidth is insufficient or sensor coverage is incomplete, the performance of the artificial intelligence system is significantly constrained [4,37]. Certain advanced applications, including autonomous vehicles or drone-based surveillance, depend critically on fifth-generation networks to achieve the required low latency. Consequently, delays in telecommunications infrastructure deployment can postpone the implementation of artificial intelligence solutions that rely on such connectivity [42,45].
Computing infrastructure also presents a significant challenge. Artificial intelligence algorithms, particularly those based on deep learning, are often computationally demanding. Municipal information technology departments typically lack access to high-performance computing resources such as graphics processing unit clusters, leading them to depend on cloud computing services. However, reliance on the cloud introduces concerns regarding latency for real-time operations, as well as compliance issues related to the General Data Protection Regulation and national policies governing the use of cloud services by public authorities [17,18,56]. Establishing local edge computing infrastructure, whereby data are processed on servers located within the city’s network to reduce latency, poses both technical and financial challenges [2,47]. In the absence of adequate computing capacity, some cities are compelled to use simpler, less data-intensive algorithms, which may result in reduced accuracy and limited benefits.
Energy consumption and infrastructure maintenance also present notable challenges. The deployment of thousands of Internet of Things sensors, while enabling data collection at scale, imposes additional maintenance responsibilities and increases energy demands, which must be accounted for by the municipality [2]. As a result, infrastructure readiness remains uneven across Europe. Leading smart cities such as Barcelona and Amsterdam have made considerable progress in expanding Internet of Things coverage and ensuring reliable connectivity. However, many mid-sized municipalities are still in the process of establishing this foundational infrastructure, which often necessitates limiting the scope of artificial intelligence projects or delaying their implementation [5,26].

5.3. Skill and Knowledge Gaps

The implementation and management of artificial intelligence systems require specialized expertise, which many municipal administrations currently lack. The shortage of skilled personnel is a frequently cited obstacle; cities often do not employ data scientists, artificial intelligence engineers, or even information technology staff with relevant expertise [4,26]. This situation results in a reliance on external vendors or consultants, which can be financially burdensome and may lead to the formation of knowledge silos. In the absence of internal capacity, municipalities also face difficulties in critically assessing artificial intelligence proposals or in monitoring algorithmic performance and ethical compliance over time [4,61,62].
Public sector institutions often rely on traditional skill sets, and while efforts to reskill staff in artificial intelligence and data analytics are underway in some municipalities, they remain incomplete [4,31]. In addition, senior decision-makers, including city executives and policymakers, may have a limited understanding of the capabilities and limitations of artificial intelligence. These cognitive barriers can result in either overestimation, whereby the technology is embraced uncritically without sufficient planning, or underestimation, in which potentially valuable solutions are dismissed due to a lack of understanding [4,58]. A systematic literature review has emphasized that executive education and awareness are essential. Bridging this knowledge gap is necessary to provide informed leadership and to support the successful implementation of artificial intelligence initiatives [4,5].
Where such understanding is absent, artificial intelligence projects may fail to receive political support or may be implemented ineffectively. In essence, the development of human capacity must keep pace with the deployment of technology. Achieving this alignment presents a significant challenge, requiring sustained investment in training programs, long-term institutional commitment, and, in some cases, cultural transformation within bureaucratic organizations [4,12,31].

5.4. Algorithmic Limitations and Context Adaptation

AI models often struggle with the complex, dynamic, and context-specific nature of cities. A model trained in one city may perform poorly in another owing to differences in urban layout, population behavior, or climate. For example, a computer vision system for detecting potholes may achieve high accuracy in a city with particular road materials and lighting conditions, yet its accuracy may drop markedly when transferred to a different environment. Adapting algorithms to local contexts requires additional data and tuning. In some cases, a city’s data are sufficiently distinctive that off-the-shelf AI models trained on generic datasets do not suffice and custom development is needed [4,37].
Urban environments are inherently unpredictable. An artificial intelligence system controlling traffic may encounter unforeseen events, such as a spontaneous public demonstration blocking major roads or atypical traffic patterns emerging in the aftermath of the COVID-19 pandemic. Such scenarios may lie outside the system’s training data, resulting in inadequate or erroneous responses. Ensuring the robustness of artificial intelligence in the face of novel situations remains an unresolved technical challenge [16,62]. The potential for failures in critical urban functions also raises regulatory concerns. Municipalities are frequently required to maintain a human fallback mechanism or manual override, which adds complexity to system design and implementation [14,36,57].
Another significant limitation concerns the explainability of artificial intelligence algorithms. Many of the most effective techniques, such as deep learning neural networks, function as black boxes. However, as previously noted, cities require artificial intelligence systems that are transparent and accountable [7,31,61]. The development of explainable artificial intelligence that can be interpreted by municipal officials and understood by the public is an ongoing area of research [16,62]. At present, prioritizing explainability often necessitates the use of simpler models, such as decision trees or rule-based systems, which may offer reduced accuracy compared to more complex alternatives. This presents a technical trade-off between interpretability and performance [18,61].
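This trade-off can be illustrated with a minimal Python sketch that trains a black-box ensemble alongside a shallow decision tree on synthetic data; the dataset, model choices, and depth limit are illustrative assumptions, not a recommendation for any specific municipal application.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a municipal decision task (e.g., prioritising inspections).
X, y = make_classification(n_samples=1000, n_features=6, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
simple    = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("Random forest accuracy:", round(black_box.score(X_test, y_test), 3))
print("Shallow tree accuracy:  ", round(simple.score(X_test, y_test), 3))
# The shallow tree usually scores somewhat lower, but its decision logic can be
# printed, audited, and explained to officials and citizens:
print(export_text(simple, feature_names=[f"feature_{i}" for i in range(6)]))
```

The printed rules of the shallow tree are directly inspectable, whereas the ensemble’s behavior can only be approximated through post hoc explanation tools, which is precisely the tension described above.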
Finally, certain urban challenges remain at the forefront of artificial intelligence research. Examples include truly multimodal mobility optimization, which involves balancing vehicles, drones, and pedestrian flows within a single model, as well as community sentiment analysis that is free from bias. In these domains, the underlying algorithms are still in development, and technology has not yet reached full maturity. As a result, the current technical limitations of artificial intelligence represent a barrier to realizing the broader vision of fully integrated smart cities [61,62].

5.5. Cybersecurity and Reliability

As cities adopt artificial intelligence and connect devices, they also increase their exposure to cyber threats. A major challenge lies in securing artificial intelligence systems and their underlying data pipelines against tampering. If a malicious actor were to compromise a city’s artificial intelligence-based traffic control system, the result could be widespread congestion or accidents. Similarly, if data streams are manipulated through data poisoning attacks, the artificial intelligence system could be misled into making harmful decisions. Addressing cybersecurity is both a technical and organizational challenge. It requires the implementation of strong encryption, authentication protocols, and anomaly detection systems, which often rely on artificial intelligence to protect other artificial intelligence processes, as well as continuous patching to resolve vulnerabilities. The European Union’s NIS2 Directive imposes more stringent cybersecurity obligations on public administration and critical infrastructure sectors, including transport and utilities. This places a legal requirement on cities to strengthen their cyber defenses [19,66]. However, implementing advanced cybersecurity measures within legacy municipal information technology systems is both difficult and costly [26,31]. Many local authorities continue to operate outdated software in some departments, lacking the necessary security features. Bringing these systems up to a standard that supports reliable artificial intelligence deployment is a long-term undertaking [23,26,53].
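As a minimal illustration of the anomaly-screening defense mentioned above, the following Python sketch checks incoming detector readings with an isolation forest before they reach a downstream traffic model; the traffic distribution, contamination level, and quarantine action are illustrative assumptions rather than a deployed security architecture.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical minute-by-minute vehicle counts from one loop detector.
normal_traffic = rng.poisson(lam=40, size=500).reshape(-1, 1)
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# Incoming readings, including an implausible spike that could indicate a
# faulty sensor or an attempt to poison the traffic model's input data.
incoming = np.array([[38], [42], [41], [950], [39]])
flags = detector.predict(incoming)   # +1 = normal, -1 = anomalous

for reading, flag in zip(incoming.ravel(), flags):
    status = "OK" if flag == 1 else "QUARANTINED before reaching the AI controller"
    print(f"count={reading}: {status}")
```

Screening of this kind does not replace encryption, authentication, or patching, but it illustrates how one AI component can be used to protect the inputs of another.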
Reliability and resilience of artificial intelligence systems in urban contexts are also important concerns. These systems must be capable of operating effectively during power outages, network disruptions, and hardware failures. For instance, if a traffic light controller powered by artificial intelligence loses network connectivity, it should revert to a fail-safe mode, such as defaulting to fixed signal timings. Designing for such resilience presents a significant technical challenge that requires rigorous engineering and testing [20,25]. The potential for technical failure intersects with regulatory considerations, particularly regarding liability in the event that an artificial intelligence system fails during an outage and causes harm. This contributes to the reluctance of city officials to entrust artificial intelligence with mission-critical functions [45,52]. Establishing confidence in system reliability demands extensive simulation and piloting, which, while essential, introduces delays in deployment and represents an additional barrier [57,62].

5.6. Integrating AI into Legacy Processes and Culture

Beyond software and hardware, implementing artificial intelligence in a municipal context requires integration into existing workflows and institutional structures. This introduces both organizational and technical integration challenges. City employees may resist or distrust artificial intelligence systems, particularly if they perceive technology as a threat to their roles or lack understanding of how it functions [4,5]. Such resistance can result in limited cooperation and may undermine project effectiveness. For example, traffic control staff might disregard or override artificial intelligence-generated recommendations due to mistrust. Effective change management and training are essential to help staff view artificial intelligence as a tool that enhances, rather than replaces, their professional capacities. This represents a socio-technical challenge that must be addressed for successful adoption [4,12,31].
From a technical perspective, many municipal operations are governed by strict Standard Operating Procedures and, in some cases, legal mandates. Modifying these frameworks to accommodate artificial intelligence, such as permitting an algorithm to make decisions previously made by human operators, may necessitate bureaucratic or legal reforms [14,62]. This type of integration is often a slow process. For example, a city may pilot an artificial intelligence-based scheduling tool for public transport, but existing union agreements may require human dispatchers to approve any changes, prompting the need for renegotiation. This illustrates the extent to which technological, human, and regulatory systems are interdependent within city governance [31,61]. Realizing the full benefits of artificial intelligence frequently demands a re-engineering of organizational processes, which is a complex undertaking that goes far beyond simply installing new software [36,62]. It requires strong leadership and a clearly articulated vision for integrating artificial intelligence into public service delivery [4,58].

5.7. Facing Technological Challenges

The technological challenges associated with the adoption of artificial intelligence in smart cities, including data limitations, infrastructure deficits, shortages in relevant skills, algorithmic constraints, cybersecurity risks, and difficulties in system integration, are substantial. These challenges often intersect with and amplify the regulatory barriers discussed previously. For instance, inadequate data quality is not only a technical problem but also complicates regulatory compliance, such as demonstrating that an artificial intelligence system operates fairly. Similarly, the absence of skilled personnel makes it more difficult to interpret and apply legal requirements [4,18,26,63]. Nonetheless, both municipal authorities and the European Union are actively pursuing strategies to address these issues. European Union funding for urban digital innovation supports initiatives aimed at developing common data platforms and delivering training programs to equip city officials with the knowledge required to manage artificial intelligence responsibly [10,12,47].
Technology providers are advancing privacy-preserving artificial intelligence techniques, such as federated learning, in which models are trained across decentralized data sources without requiring the exchange of raw data. These methods aim to address restrictions on data sharing while maintaining system performance [18,60]. Progress is also being made in enhancing model explainability and bias detection, which will assist in meeting both ethical and legal expectations while continuing to employ advanced algorithms [16,61,62]. Addressing these technical challenges is a continuous effort. It may be argued that regulatory pressures are serving as a catalyst for innovation. For example, the requirement to comply with the General Data Protection Regulation has driven developments in anonymization techniques and the advancement of federated artificial intelligence approaches [17,28,56,60].
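A minimal Python sketch of the federated averaging idea is shown below: each city performs gradient updates on its own data, and only the resulting model weights are shared and averaged, so no raw records leave the local administration. The linear model, synthetic datasets, and hyperparameters are illustrative assumptions rather than a production federated-learning framework.

```python
import numpy as np

rng = np.random.default_rng(42)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One client's training round: gradient steps on local data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Hypothetical per-city datasets (e.g., building features -> energy demand).
# The raw rows never leave the city that collected them.
cities = []
true_w = np.array([2.0, -1.0, 0.5])
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    cities.append((X, y))

global_w = np.zeros(3)
for round_idx in range(10):
    # Each city trains locally; only model weights are shared and averaged.
    local_weights = [local_update(global_w, X, y) for X, y in cities]
    global_w = np.mean(local_weights, axis=0)

print("Learned weights:", np.round(global_w, 2))  # approaches [2.0, -1.0, 0.5]
```

The design choice is deliberate: because only parameters are exchanged, the approach can ease data-sharing restrictions, although additional safeguards (such as secure aggregation or differential privacy) are typically still required in regulated settings.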

6. Societal Benefits of AI in Smart City Domains

Despite the complexity of the regulatory environment and the significant technological challenges, European smart cities continue to adopt artificial intelligence solutions due to the substantial societal benefits they offer. In general, municipalities pursue artificial intelligence and related technologies to improve safety, sustainability, efficiency, and the overall quality of life for residents [2,3,31]. This section outlines the principal benefits observed and anticipated across key smart city domains, including smart mobility, energy management, public safety, environmental monitoring, waste management, and urban governance. These examples illustrate why such innovations are regarded as worthwhile, even when navigating complex regulatory requirements [2,47]. Understanding these benefits is essential for contextualizing the subsequent discussion on how to balance innovation with regulatory compliance.

6.1. Smart Mobility

Transport is often the flagship component of smart city initiatives. AI techniques such as machine learning and real-time analytics are used to optimize traffic flow, improve public transport systems, and enhance road safety. For instance, artificial intelligence-based traffic signal control systems can adjust in real time according to prevailing traffic conditions, thereby reducing idle times and alleviating congestion. Empirical studies suggest that such systems can significantly reduce commute durations and lower vehicle emissions [1,2,3,40]. Artificial intelligence-enabled navigation applications assist drivers in avoiding heavily congested routes and, when deployed at scale, can contribute to a more balanced distribution of traffic across urban networks. Public transport systems also benefit from artificial intelligence, as cities increasingly use it to analyze ridership patterns and adjust bus or train schedules dynamically, resulting in shorter waiting times and improved service reliability for commuters [2,3].
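To give a sense of the underlying logic, the following deliberately simplified Python sketch allocates green time across intersection approaches in proportion to observed queue lengths; actual adaptive signal controllers use far more sophisticated optimization, and the cycle length, minimum green time, and queue values here are illustrative assumptions.

```python
def allocate_green_time(queue_lengths, cycle_seconds=90, min_green=10):
    """Split a fixed signal cycle across approaches in proportion to
    their current queue lengths, subject to a minimum green time."""
    total = sum(queue_lengths.values())
    if total == 0:
        share = cycle_seconds / len(queue_lengths)
        return {approach: share for approach in queue_lengths}
    greens = {}
    flexible = cycle_seconds - min_green * len(queue_lengths)
    for approach, queue in queue_lengths.items():
        greens[approach] = min_green + flexible * queue / total
    return greens

# Hypothetical queue estimates (vehicles waiting) from detectors or cameras.
print(allocate_green_time({"north": 24, "south": 8, "east": 15, "west": 3}))
```

Even this simple proportional rule conveys why real-time sensing matters: without live queue estimates from detectors or cameras, the controller falls back to fixed timings and the efficiency gains disappear.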
Moreover, artificial intelligence plays a central role in the development of autonomous vehicles and smart logistics. Autonomous shuttles and self-driving buses, which have been piloted in several European cities, offer the potential to enhance mobility for elderly and disabled individuals and to address service gaps in public transport through on-demand solutions [3,42]. By 2030, widespread adoption of autonomous vehicles in urban areas is expected to reduce road accidents, the majority of which result from human error, while also improving overall traffic efficiency [42,45]. Artificial intelligence-enabled driver assistance technologies, such as collision avoidance, are already contributing to accident prevention and saving lives. In summary, smart mobility applications of artificial intelligence promote safer roads, reduce travel times, and support environmental objectives through decreased fuel consumption [3,40].
Globally, improvements in mobility are identified as the principal anticipated benefit of artificial intelligence, including reductions in congestion and enhancements in traffic safety [3,31]. Predictive analytics, for example, can help identify intersections with a high risk of accidents, enabling authorities to implement preventive measures such as installing improved signage or increasing speed enforcement [2,3]. The societal benefit of such interventions is a measurable reduction in collisions and injuries. In addition, artificial intelligence facilitates the integration of emerging mobility services, such as ride-sharing and micro-mobility solutions, including electric scooters and bicycle-sharing schemes, into the wider urban transport network. By coordinating these various modes of transport, for instance, by recommending an optimal combination of walking, cycling, and public transit for a specific journey, artificial intelligence contributes to more convenient and personalized mobility options. This enhances both the quality of life for residents and the overall accessibility of the city [30,40,41].

6.2. Energy Management in Smart Grids and Buildings

Artificial intelligence plays a crucial role in managing energy consumption and supporting the integration of renewable energy in smart cities. One of its primary benefits is enhanced energy efficiency. Artificial intelligence algorithms can forecast energy demand and adjust supply in real time, thereby optimizing the operation of power grids and district heating or cooling systems [2,47]. In several European cities, including Copenhagen and Barcelona, artificial intelligence is employed to manage smart energy grids in order to reduce emissions. These systems learn from historical consumption patterns to balance energy load and storage [2,3]. For example, artificial intelligence systems can automatically reduce energy usage by dimming street lighting or adjusting heating, ventilation, and air conditioning in public buildings during periods of low occupancy. Across municipal building portfolios, artificial intelligence-based analytics are used to identify retrofit opportunities and fine-tune system performance, frequently resulting in double-digit percentage reductions in energy consumption [3,30].
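The following minimal Python sketch illustrates the basic pattern of such systems: a simple demand model is fitted to historical load data and combined with occupancy-dependent setpoints. The synthetic data, linear model, and setpoint values are illustrative assumptions and are far simpler than the controls deployed in practice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hourly history for one public building.
hours = rng.integers(0, 24, size=500)
temps = rng.normal(loc=10, scale=6, size=500)
occupied = ((hours >= 7) & (hours <= 18)).astype(float)
load_kw = 20 + 30 * occupied + 1.5 * np.maximum(18 - temps, 0) + rng.normal(scale=2, size=500)

# Fit a simple linear demand model with ordinary least squares.
X = np.column_stack([np.ones_like(temps), occupied, np.maximum(18 - temps, 0)])
coef, *_ = np.linalg.lstsq(X, load_kw, rcond=None)

def forecast_load(hour, outdoor_temp):
    occ = 1.0 if 7 <= hour <= 18 else 0.0
    return coef @ np.array([1.0, occ, max(18 - outdoor_temp, 0)])

# Relax heating setpoints outside occupied hours; pre-heat only when justified.
for hour, temp in [(6, 2.0), (13, 8.0), (22, 5.0)]:
    setpoint = 21.0 if 7 <= hour <= 18 else 17.0
    print(f"{hour:02d}:00  forecast {forecast_load(hour, temp):5.1f} kW  setpoint {setpoint} C")
```

The value of such forecasts grows with the quality of the occupancy and weather data available, which links this benefit back to the data and infrastructure challenges discussed in Section 5.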
Another benefit is better integration of renewable energy. AI can predict solar and wind production and adjust grid controls to accommodate these variable sources, reducing reliance on fossil fuel plants [2,3,47]. Smart cities with microgrids use AI to switch between solar panels, batteries, and the main grid optimally, ensuring reliability while maximizing clean energy use [2,20,30]. This contributes to climate goals and can lower energy costs.
On the consumer side, smart meters integrated with artificial intelligence provide residents with detailed insights into their energy consumption and offer personalized recommendations for behavioral changes or optimal tariff selection, thereby enabling individuals to reduce their energy bills [18,60]. In some municipalities, mobile applications have been introduced that gamify energy conservation, using artificial intelligence to customize suggestions for each household. This contributes to increased public awareness and education on energy use [60]. In summary, the application of artificial intelligence in energy management supports cost savings, lowers greenhouse gas emissions, and enhances energy security by enabling more resilient grid operations [2,20,47]. During extreme weather events, such as heatwaves or cold spells, artificial intelligence systems can help manage peak demand to prevent blackouts, thereby providing direct benefits to public welfare [3,47]. As Europe advances towards carbon neutrality, these artificial intelligence-driven efficiencies in urban energy systems constitute an essential contribution to environmental sustainability and the public good [2,47].

6.3. Public Safety and Security

Artificial intelligence tools can enhance a city’s capacity to prevent crime and respond to emergencies, within the constraints established by applicable law. One of the primary benefits is faster emergency response. For instance, artificial intelligence-powered gunshot detection systems, which combine acoustic sensors with machine learning, can accurately identify the location of gunfire or explosions and notify police and emergency medical services within seconds. This capability can save lives by reducing response times. In cities where such systems have been deployed, officials have reported improvements in handling critical incidents [3]. Similarly, surveillance cameras equipped with artificial intelligence-based anomaly detection can monitor public spaces or critical infrastructure and automatically flag irregular patterns, such as a crowd suddenly dispersing, which may signal panic or an ongoing attack. This enables authorities to intervene more rapidly [3,16].
Although European privacy regulations prohibit the identification of individuals without legitimate cause, artificial intelligence can still be applied in privacy-preserving ways to monitor public safety. For example, artificial intelligence systems can be used to count people in order to prevent overcrowding at public events or to detect unattended items in locations such as underground stations [7,18]. These capabilities contribute to enhancing safety in shared urban spaces. Artificial intelligence is also employed in disaster management. Early warning systems use artificial intelligence to analyze data from environmental sensors, monitoring risks such as floods, earthquakes, and wildfires, as well as from social media platforms. This enables authorities to issue alerts and organize evacuations more effectively [3,30].
In healthcare emergencies, several cities have adopted artificial intelligence-driven dispatch systems for ambulances. These systems prioritize calls and calculate optimal routing for emergency vehicles, thereby reducing response times and improving survival rates for time-sensitive conditions such as cardiac arrest [3,47]. In addition, by analyzing historical incident data, artificial intelligence can support the strategic allocation of police and firefighting resources to locations where they are statistically most likely to be required. This enables a more efficient, data-driven deployment of first responders, contributing to improved public safety outcomes [2,3].
It is important to recognize that the societal benefit of artificial intelligence in public safety is closely linked to public trust. For this reason, cities are approaching the deployment of artificial intelligence in public safety with a focus on transparency and citizen engagement. This ensures that technologies such as surveillance cameras are used in ways that align with public expectations, for example, by prioritizing traffic safety or crowd monitoring rather than individual tracking [7,9,31]. When implemented appropriately, residents experience enhanced security without perceiving an infringement on their fundamental rights.

6.4. Environmental Monitoring and Urban Planning

Artificial intelligence provides powerful tools for addressing urban environmental challenges and supporting the development of healthier cities. Many municipalities now employ artificial intelligence models to monitor air quality and predict pollution episodes. By integrating data from air quality sensors, weather forecasts, and traffic information, artificial intelligence can identify areas at risk of elevated pollution levels hours or days in advance. This enables cities to implement temporary mitigation measures, such as traffic restrictions or public transport incentives, to reduce exposure [2,3]. Such interventions benefit public health by allowing authorities to warn vulnerable populations, for example, through smartphone alerts, advising them to remain indoors or wear protective masks when poor air quality is anticipated. In the longer term, artificial intelligence analysis of pollution data helps to identify major pollution sources, thereby informing policy interventions such as traffic optimization or industrial emissions control [2,47].
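A minimal Python sketch of the alerting logic is shown below, using a deliberately naive persistence-plus-trend forecast; the readings, forecast method, and alert threshold are illustrative assumptions and do not correspond to any statutory limit value or operational system.

```python
# Hypothetical recent NO2 readings (micrograms per cubic metre) from one station.
recent_no2 = [48, 55, 61, 70, 82, 95]

def naive_forecast(series, horizon=3):
    """Persistence-plus-trend forecast: extend the last observed change."""
    trend = series[-1] - series[-2]
    return [series[-1] + trend * step for step in range(1, horizon + 1)]

ALERT_THRESHOLD = 120  # illustrative local alert level, not a statutory limit

for step, value in enumerate(naive_forecast(recent_no2), start=1):
    if value >= ALERT_THRESHOLD:
        print(f"+{step}h: forecast {value} ug/m3 -> notify vulnerable residents, "
              f"consider temporary traffic restrictions")
    else:
        print(f"+{step}h: forecast {value} ug/m3 -> no action")
```

Operational systems replace the naive forecast with models that fuse sensor, weather, and traffic data, but the essential benefit is the same: enough lead time to warn residents and trigger mitigation measures.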
Artificial intelligence is also applied in water management. Smart water networks use artificial intelligence to detect leaks and predict pipe failures, enabling proactive maintenance that conserves water, which is an increasingly scarce resource, and prevents property damage resulting from water main ruptures [3,30]. In the field of waste management, artificial intelligence-based vision systems installed at recycling facilities can automatically sort waste streams with greater speed and accuracy than manual methods. This enhances recycling rates and reduces reliance on landfill disposal, delivering both environmental and economic benefits for society [2,3,47].
In the area of urban greenery, some cities employ artificial intelligence-driven analytics applied to satellite and drone imagery to monitor urban heat islands and determine where tree planting would deliver the greatest cooling effect. This supports greening initiatives [3,37]. Urban planning and governance also benefit from artificial intelligence through the development of digital twins, which are virtual representations of the city that simulate urban processes. For example, city planners can use an artificial intelligence-enabled digital twin to model how a proposed housing development would influence traffic patterns, pollution levels, and the demand for public services before construction begins [30,41]. This facilitates more evidence-based and sustainable planning decisions, ultimately producing urban designs that more effectively balance growth with quality of life.
Some European cities, such as Lisbon, have employed digital twin simulations to prepare for flooding scenarios and to coordinate emergency responses. This represents a clear societal benefit through improved risk reduction [3,47]. More broadly, in the domains of environmental management and urban planning, the application of artificial intelligence enables more informed decision-making and more efficient interventions. These technologies help to ensure that cities remain clean, resilient, and comfortable [2,3,47].

6.5. Governance and Public Services

Artificial intelligence is also enhancing everyday interactions between citizens and municipal administrations. Many local governments have deployed artificial intelligence-powered chatbots and virtual assistants on their websites or messaging platforms to manage common inquiries, such as questions about waste collection schedules or permit procedures. This has improved the accessibility of city services by providing residents with instant responses, in multiple languages, without requiring them to navigate complex bureaucratic processes [3,31]. The benefit is twofold: citizens enjoy greater convenience, while municipal employees experience a reduction in routine workload, allowing them to focus on more complex responsibilities. Early assessments indicate high user satisfaction with well-designed municipal chatbots, as they reduce waiting times and frequently resolve matters that would previously have necessitated an in-person visit [2,3]. In addition to enhancing service delivery, artificial intelligence can support new models of participatory governance. Civil society organizations, local advocacy groups, and digital rights coalitions increasingly contribute to the shaping of AI policies in urban settings, advocating for transparency, inclusion, and accountability in algorithmic systems. Examples include citizen assemblies deliberating on ethical AI use, community audits of algorithmic decisions, and grassroots campaigns influencing municipal procurement choices.
AI also enhances transparency and participation in governance. Some cities employ artificial intelligence to analyze citizen feedback gathered from social media platforms, surveys, and call centers in order to identify recurring concerns or gaps in service delivery. This enables city officials to respond more effectively to community needs [3,58]. Experimental artificial intelligence systems have also been developed to support budget allocation by simulating the outcomes of various spending scenarios. These tools assist city councils in making decisions that aim to maximize public welfare [3,62]. Although final decisions remain the responsibility of elected officials, such tools provide a data-driven and objective foundation for deliberation, thereby increasing the likelihood of policy choices that serve the broader interests of the community.
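As a toy illustration of grouping feedback into recurring themes, the following Python sketch clusters short feedback snippets using TF-IDF features and k-means; the texts, the number of clusters, and the monolingual vectorizer are illustrative assumptions, whereas production systems would typically rely on multilingual language models and human review of the resulting themes.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical snippets of citizen feedback from surveys and call-centre notes.
feedback = [
    "Street lights on Elm Road have been broken for weeks",
    "Broken street lighting near the school is a safety issue",
    "Bus line 14 is always late in the morning",
    "Morning buses are overcrowded and delayed",
    "Overflowing bins in the park every weekend",
    "Waste containers near the market are never emptied",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(feedback)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(f"Theme {cluster}:")
    for text, label in zip(feedback, labels):
        if label == cluster:
            print("  -", text)
```

Grouping feedback in this way helps officials see recurring concerns at a glance, although the clusters themselves still require human interpretation before they inform service decisions.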
Artificial intelligence can also support the detection of fraud and errors in municipal operations. For example, it can flag anomalous patterns in procurement data that may indicate corrupt practices or verify the consistency of benefit applications to prevent misuse of public resources. These applications help to ensure that public funds are managed appropriately, contributing indirectly but significantly to public trust [3,28].
Finally, artificial intelligence and data analytics have enabled new forms of civic engagement. Several European cities have established open data portals and organized hackathons in which artificial intelligence practitioners and civic technologists develop applications that benefit the local community. Examples include applications that help users locate the nearest available parking space or identify the safest cycling route by analyzing municipal data [2,30,47]. These initiatives promote a culture of innovation and co-creation, empowering citizens to both benefit from and contribute to the smart city ecosystem [5,30].

6.6. Benefits of AI for Smart Cities

Across a range of domains, artificial intelligence offers tangible societal benefits for cities. These include improved public safety, better environmental quality, more efficient service delivery, increased citizen engagement, and progress toward sustainability objectives. Examples include reductions in travel times, decreased energy consumption, and higher crime clearance rates, all of which are highly significant when applied at an urban scale [1,2,3]. Importantly, these benefits align with several of the European Union’s overarching policy goals, such as the European Green Deal, Vision Zero for road traffic fatalities, and the promotion of inclusive digital governance [12,47].
This alignment suggests that, in principle, European Union regulations are not intended to obstruct these benefits but rather to ensure they are realized in a responsible and ethical manner. The primary challenge lies in navigating the regulatory requirements in such a way that artificial intelligence can be deployed effectively while avoiding harm and respecting fundamental rights [11,14,31]. The following section places these benefits and constraints in a global context by comparing the European Union’s regulatory approach with those of the United States and China.

7. Global Comparison: EU vs. US vs. China

To contextualize the European Union’s approach, this section compares the regulatory frameworks and implementation strategies of the United States and China—the two most prominent non-EU actors in global AI and smart city governance. The European Union’s approach, as detailed in earlier sections, is characterized by proactive and comprehensive regulation aimed at ensuring that AI develops in line with fundamental rights and public values [11,12,14]. In contrast, the United States has, until recently, taken a more laissez-faire or piecemeal approach, relying more on market forces and existing laws, with limited central regulatory intervention [28,67]. China’s model features strong central planning and an emphasis on state control, data nationalism, and rapid deployment of AI technologies often tied closely to public security and social governance [68,69].
While these governance models are primarily distinguished by their normative orientations and institutional structures, a meaningful comparison also benefits from empirical context. Quantitative indicators such as the number of enacted AI-related regulations, public investments in smart city innovation, patent activity, and the scale of national pilot programs offer valuable insights into the operational realities and strategic priorities of each region. These metrics help clarify how different regulatory approaches shape the pace, scope, and nature of AI deployment in urban environments. As shown in Table 2, China significantly outpaces both the European Union and the United States in terms of smart city pilot coverage and AI patent filings, reflecting its emphasis on rapid adoption and technological self-reliance. Conversely, the European Union leads in regulatory breadth, with a structured legislative ecosystem encompassing data protection, AI oversight, cybersecurity, and sector-specific mandates. These figures provide a concrete basis for understanding the trade-offs between innovation speed, legal safeguards, and public trust that underpin each region’s approach to governing AI in smart cities.
These differing approaches influence not only how AI is adopted in cities but also the balance between innovation and regulation in each context. In this section, we provide a comparative analysis of the EU, US, and Chinese approaches to regulating and facilitating AI in smart cities. We highlight key differences in legal frameworks, policy priorities, and on-the-ground outcomes for smart city development.

7.1. European Union—A Regulated, Rights-Based Approach

The European Union’s approach to artificial intelligence and smart cities can be characterized by the principle of innovation enabled by trust. European policymakers contend that public acceptance of artificial intelligence and, consequently, its long-term success depend on the establishment of robust safeguards. As a result, the European Union has implemented or is in the process of finalizing a comprehensive legislative framework affecting artificial intelligence. This includes the General Data Protection Regulation for data privacy, the recently adopted Artificial Intelligence Act for artificial intelligence-specific oversight, sectoral directives such as the NIS2 Directive for cybersecurity, and revised product safety and liability legislation that accounts for the unique risks posed by artificial intelligence technologies [14,17,19,23,52].
This constitutes a broad regulatory framework governing the development and deployment of artificial intelligence systems. Within smart city contexts, this means that European municipalities must operate within clearly defined boundaries concerning the permissible uses of artificial intelligence. For instance, the deployment of real-time facial recognition technology in public spaces is effectively prohibited, either through the explicit prohibitions of the Artificial Intelligence Act or through the cautious positions adopted by data protection authorities [7,9,16,18]. Automated decision-making by municipal agencies must include human oversight, as required by Article 22 of the General Data Protection Regulation [17,18]. Furthermore, artificial intelligence systems applied in sectors such as public transport or energy are required to comply with established safety and cybersecurity standards prior to implementation.
While these regulations pose challenges, as discussed, they also reflect Europe’s normative priorities: privacy, human dignity, transparency, and fairness [11,31,63]. The EU sees these as non-negotiable foundational values, even if this means a slower or more constrained rollout of new technologies. Indeed, European institutions often emphasize the concept of “Trustworthy AI,” which implies that AI should be lawful, ethical, and robust in practice [14,31,70]. The expectation is that, by ensuring AI respects individuals’ rights (for instance, by not engaging in covert profiling or discrimination), the public will trust and embrace AI solutions, leading to sustainable adoption [28,61,63].
Europe also adopts a precautionary approach, identifying potential risks at an early stage and introducing regulation to prevent harm. This is evident in the Artificial Intelligence Act’s classification of risk levels and the explicit prohibition of certain practices [14,16]. At the same time, the European Union recognizes the importance of maintaining competitiveness in innovation. To that end, regulatory measures are complemented by substantial public investment. The European Union and its member states have made significant investments in artificial intelligence research and smart city pilot projects through funding programs such as Horizon Europe and the Digital Europe Program. Additionally, initiatives such as the European Innovation Partnership on Smart Cities and Communities have been established to facilitate the exchange of best practices [10,12,30,47].
The inclusion of regulatory sandboxes in the Artificial Intelligence Act, which allow for the temporary relaxation of certain rules under supervisory conditions to test innovative solutions, reflects an element of flexibility within an otherwise stringent framework [14,36,62]. In addition, the European Union actively promotes open data and interoperability, both through legal instruments such as the Open Data Directive and through publicly funded projects. These efforts aim to stimulate artificial intelligence innovation by making public sector data more accessible and reusable [30,39,47]. Compared with the United States and China, the European approach is more policy-driven and coordinated. Many rules originate from Brussels and are subsequently implemented across all member states with the aim of creating a harmonized digital market [11,31,58]. In theory, this reduces internal fragmentation, as companies that comply with European Union regulations can offer their smart city artificial intelligence solutions across the entire Union. However, this also entails a higher regulatory threshold than is found in other global jurisdictions [26,27,58].
European smart city projects frequently prioritize citizen engagement and ethical considerations. Cities such as Barcelona and Amsterdam have adopted digital rights frameworks and established algorithm registries, reflecting the European Union’s broader values-based approach to digital governance [5,7,71]. While critics contend that the European Union’s extensive regulatory framework may hinder innovation or delay implementation, proponents argue that it serves to prevent societal harm and public backlash, thereby establishing a more resilient foundation for the long-term adoption of artificial intelligence [16,31,58]. Importantly, the European Union’s regulatory influence extends beyond its borders. Through what is often referred to as the Brussels effect, European legislation such as the General Data Protection Regulation has prompted reforms in data governance practices worldwide [58,59,67]. As the Artificial Intelligence Act enters into application, it is likely to produce a similar global impact by shaping international standards for artificial intelligence [10,14,36].
In summary, the EU’s approach to AI in smart cities is cautiously optimistic: it recognizes substantial potential for societal benefit but holds that those benefits can be realized sustainably only if strong guardrails are in place [31,63].

7.2. United States—A Decentralized, Innovation-First Approach

In the United States, the approach to regulating artificial intelligence, including its application in smart cities, has until recently been relatively limited at the federal level. There has been no equivalent to the European Union’s comprehensive Artificial Intelligence Act or the General Data Protection Regulation. Instead, the United States has relied on a combination of existing legal frameworks, sector-specific regulations, and market-driven self-regulation. For instance, concerns related to bias in artificial intelligence systems or potential harm to consumers are typically addressed retrospectively through anti-discrimination laws or the Federal Trade Commission’s authority over deceptive business practices rather than through ex ante rules specifically designed for artificial intelligence [28,58,59].
In practice, this regulatory context meant that, over the past decade, American cities enjoyed greater flexibility to experiment with artificial intelligence technologies, owing to the relative absence of comprehensive federal legal constraints. Cities such as Pittsburgh and Phoenix became early testing grounds for autonomous vehicles, operating under local or state-level guidelines rather than national frameworks. Similarly, police departments across the United States deployed predictive policing tools and facial recognition technologies in the absence of a nationwide prohibition. Recent academic work has highlighted how AI is transforming public sector accountability and discretion in US urban governance, particularly in areas like permitting, inspections, and service dispatch [72]. The authors argue that while AI can improve bureaucratic responsiveness, it simultaneously raises complex ethical and transparency challenges, necessitating new institutional capacities and oversight mechanisms to ensure democratic legitimacy.
Notably, however, some local governments responded to civil society’s concerns by enacting their own restrictions. Cities including San Francisco and Boston chose to prohibit the use of facial recognition by public authorities through municipal legislation [7,9]. The United States approach is therefore frequently characterized as bottom-up and fragmented, with individual states and cities adopting varied measures, resulting in a patchwork of regulatory regimes [27,28].
For example, the state of Illinois has enacted legislation regulating the use of biometric data, which directly affects the deployment of facial recognition technologies in cities such as Chicago. New York City has passed a law mandating bias audits for artificial intelligence systems used in hiring practices, and California has implemented a comprehensive consumer privacy law (the California Consumer Privacy Act and its amendment, the California Privacy Rights Act), which, although less stringent than the General Data Protection Regulation, introduces certain constraints relevant to artificial intelligence data use [18,28,59]. At the same time, many jurisdictions in the United States have not established artificial intelligence-specific regulations, permitting the largely unregulated deployment of such technologies. This fragmented landscape poses challenges for companies, as vendors of smart city artificial intelligence solutions must navigate a variety of legal requirements depending on the location. However, it also enables some regions to adopt new technologies more rapidly in the absence of federal direction. In practice, American technology firms often launch services domestically under relatively permissive conditions and subsequently adapt them to comply with the more rigorous requirements of the European market [11,58,59].
In terms of policy, the federal government of the United States has traditionally prioritized investment and advisory efforts over direct regulation. Documents such as the White House’s Blueprint for an AI Bill of Rights (2022) [73] outline non-binding principles for the safe and ethical use of artificial intelligence. Similarly, agencies such as the National Institute of Standards and Technology have developed resources like the Artificial Intelligence Risk Management Framework, which provides voluntary guidance [58,67,73,74]. These initiatives promote best practices in areas such as transparency, fairness, and accountability but do not carry the force of law. The overarching federal strategy has been to support innovation through funding for artificial intelligence research and development and by promoting the adoption of artificial intelligence within government operations while largely delegating regulatory authority to sectoral agencies or state-level jurisdictions [58,67].
For example, the United States Department of Transportation has issued guidelines for the deployment of autonomous vehicles, but no binding federal legislation has been enacted. This approach reflects a deliberate effort to avoid stifling innovation in a rapidly evolving technological field [45]. The innovation-first strategy is driven by a broader objective of preserving global leadership in artificial intelligence. United States policymakers have expressed concern that excessive regulation could prompt technology development to shift to other jurisdictions [58,59]. However, this strategy has resulted in inconsistent protection against the risks associated with artificial intelligence. Only recently has there been increasing momentum within the United States to consider more comprehensive regulatory frameworks, influenced in part by European Union initiatives and several high-profile controversies surrounding artificial intelligence technologies [36,58].
As of 2025, discussions are ongoing in the United States Congress concerning the regulation of artificial intelligence, and numerous legislative proposals have been introduced. The Biden administration had also begun to exercise existing regulatory powers. For example, the Federal Trade Commission has issued warnings indicating that it may penalize companies whose artificial intelligence systems violate consumer protection laws [67]. However, any forthcoming federal legislation is expected to be more limited in scope than the European Union’s comprehensive approach. Proposed measures are likely to focus on specific high-risk applications or transparency requirements rather than establishing a single, overarching regulatory framework [28,58,73].
In the smart city context, municipalities in the United States often prioritize practical problem-solving and the implementation of pilot projects, typically without a formal requirement to conduct measures such as impact assessments unless mandated by local legislation [5,7]. This approach can facilitate rapid deployment, as seen in the adoption of artificial intelligence-powered gunshot detection systems or traffic analytics technologies. However, this accelerated pace has occasionally resulted in public backlash, particularly when implementations have raised privacy concerns or failed to perform as intended. Community protests against predictive policing systems perceived as biased are one such example [7,9]. In response, a form of self-regulation is emerging. A number of cities have voluntarily adopted guidelines or ethical principles to govern the use of artificial intelligence in public services, and civil society organizations remain actively engaged in monitoring and evaluating municipal artificial intelligence initiatives [7,28,61].
Overall, the regulatory approach in the United States is decentralized and pluralistic. Some jurisdictions adopt permissive policies, while others have introduced targeted bans or oversight mechanisms. In comparison with the European Union, the United States places greater emphasis on innovation and economic opportunity and less on uniform rights-based restrictions, although this divergence has narrowed somewhat in recent years with growing attention to artificial intelligence ethics [58,73,74]. One commonly cited advantage of the United States model is that it facilitates rapid innovation and entrepreneurship. Companies are able to iterate quickly, and cities can realize the benefits of new technologies at an early stage. The corresponding disadvantage is the potential for harmful applications or erosion of public trust if insufficient safeguards are in place. This can result in a reactive regulatory response after harm has occurred, in contrast to the European Union’s more preventive model of governance [58,59].

7.3. China—A State-Driven, High-Adoption Approach

China presents a marked contrast to both the European Union and the United States in its approach to artificial intelligence. The Chinese government has designated artificial intelligence as a national strategic priority and has integrated it extensively into its urban development agendas, frequently under the banners of smart cities and safe cities. The overarching policy orientation may be characterized as maximizing adoption and control in service of state objectives. Chinese policy is centrally directed and guided by ambitious national targets. The State Council’s New Generation Artificial Intelligence Development Plan, issued in 2017, outlined the goal of establishing China as the global leader in artificial intelligence by 2030, with smart cities serving as a prominent domain to demonstrate and apply that leadership. Consequently, artificial intelligence applications have been deployed across Chinese cities at an unprecedented scale and pace. More than 500 cities in China have participated in smart city pilot initiatives, the highest number globally by a considerable margin [75,76]. Many of these projects involve advanced artificial intelligence technologies for surveillance, traffic control, public service delivery, and related functions.
A prominent example is the City Brain project, pioneered in Hangzhou by Alibaba. This initiative involves an integrated artificial intelligence platform that optimizes traffic flow across the entire city. Reports indicate that it has significantly reduced congestion and enabled emergency vehicles to receive priority passage through dynamic control of traffic signals [76,77]. Furthermore, empirical studies of smart city data governance in China, such as in Shenzhen, reveal a strong state-led model combining administrative authority with enterprise-led technical development [78]. This approach enables rapid AI deployment at urban scale but also generates tensions between innovation and safeguards for data privacy and public accountability.
The Chinese model frequently combines public security objectives with smart city technologies. Extensive networks of surveillance cameras, equipped with facial and behavioral recognition capabilities, are deployed to monitor public spaces. Use cases include the identification of wanted individuals in crowds and the detection of minor infractions, such as jaywalking at intersections [79,80]. Pilot implementations of the social credit system in selected cities aggregate data from various sources, including artificial intelligence-driven surveillance and analytics, to assign scores to individuals or businesses. These scores aim to encourage compliance with laws and promote trustworthiness within society. However, the system has attracted international concern due to its potential implications for civil liberties and individual freedoms [80,81].
In regulatory terms, China’s approach may be described as paradoxical, exhibiting both permissive and highly interventionist characteristics. It is permissive in that personal privacy does not hold the same legal or normative status as it does in the European Union. Widespread surveillance and data sharing among government agencies are permitted with limited restriction. At the same time, the approach is interventionist, as the state exercises close oversight over the deployment of artificial intelligence. Rather than enacting laws that constrain government use of artificial intelligence, China has introduced regulations designed to align artificial intelligence with state objectives and to exert control over private technology companies. For example, regulations on algorithmic recommender systems, which came into effect in 2022, require companies to register the details of their algorithms with government authorities and to ensure that their systems promote socialist values [69,82,83]. Regulations governing deepfakes and generative artificial intelligence mandate licensing and compliance with state censorship requirements [82,84,85].
In 2021, China enacted the Personal Information Protection Law, which serves as the country’s principal privacy legislation. In principle, it grants individuals rights that are comparable to those found in the General Data Protection Regulation. However, the law includes broad exemptions for state agencies and matters related to national security, which means that while it imposes restrictions on private companies regarding the handling of personal data, it does not significantly constrain government surveillance activities [86,87]. As a result, while Chinese technology firms must exercise caution in processing user data, public authorities retain extensive freedom to utilize personal data for governance purposes, including in the context of smart city operations. In practice, the Chinese public tends to have different expectations concerning privacy and is often more willing to exchange personal data for increased security or convenience. This is influenced by specific cultural, historical, and political factors [69,80,86].
This regulatory environment has enabled Chinese cities to implement artificial intelligence projects that would likely encounter legal constraints or public resistance in the European Union or parts of the United States. As a result, many Chinese cities are at the forefront of artificial intelligence deployment. Examples include artificial intelligence-managed traffic systems in major metropolitan areas, artificial intelligence-driven utility management in so-called sponge cities designed for flood control, and the use of artificial intelligence in urban policing, where authorities reportedly employ surveillance systems to identify suspicious behavioral patterns and prevent crimes [75,77,79,88]. Facial recognition technology is also widely used for various purposes, ranging from subway fare payments to the controlled dispensing of public toilet paper in an effort to reduce waste. The perceived benefits, particularly in terms of operational efficiency and, from the government’s perspective, the maintenance of social order, are strongly emphasized.
However, the Chinese model has faced significant criticism concerning civil liberties. The extensive surveillance infrastructure and the high degree of data integration are viewed by many observers as intrusive and as enabling forms of authoritarian control [69,75,80]. In China’s artificial intelligence-driven smart cities, the distinction between enhancing public services and surveilling citizens is often unclear. Technologies introduced under the premise of traffic management, for example, have also been employed to monitor individual movements. The state maintains that the creation of a secure and well-governed society constitutes a collective benefit and that such measures are therefore justified.
On the regulatory front, China’s primary focus is not on safeguarding individual rights in relation to state authority but rather on ensuring that artificial intelligence does not threaten social stability or undermine party governance. This orientation has produced regulations such as those concerning algorithmic systems and deepfakes, which are intended to prevent misuse, including the dissemination of false information or the deployment of algorithms that may foster addiction or social unrest. These regulations also serve to ensure government oversight over influential artificial intelligence platforms [69,82,83,85]. In essence, China’s artificial intelligence governance framework is designed to preserve state control while accelerating technological development. In contrast to the European Union’s extensive deliberative process surrounding the Artificial Intelligence Act, China tends to introduce regulations more rapidly and enforce them with greater immediacy. These regulations are generally more narrowly scoped, addressing specific content or security concerns, rather than constituting a comprehensive framework grounded in human rights [77,84,89].

7.4. Key Differences and Implications

These divergent approaches, including the European Union’s principled regulation, the United States’ market-led and decentralized framework, and China’s state-driven acceleration, have several implications. One key difference concerns the balance between deployment speed and regulatory oversight. Chinese cities often prioritize rapid implementation and address consequences retrospectively or, in some cases, suppress public dissent, which enables swift progress in smart infrastructure development [75,76,77]. Cities in the United States also tend to deploy technologies relatively quickly, although they may be constrained later by public opinion, activism, or litigation in the absence of established regulatory frameworks [7,28]. By contrast, municipalities in the European Union typically experience slower deployment rates, as they are required to demonstrate regulatory compliance from the outset and often undertake extensive assessments. However, such projects may benefit from greater initial public trust and legitimacy [11,26,31].
This gives rise to another point of divergence, namely, public trust and the potential for backlash. In the European Union, citizens may place trust in the legal framework to protect their rights, although there is the possibility of frustration if technological innovations are perceived to be arriving too slowly [31]. In the United States, the public often benefits from earlier access to new technologies, but trust may be undermined if these systems lead to controversy or harm, prompting reactive regulatory or legal responses [58,59]. In China, public trust is more complex. Some evidence indicates that many citizens support the government’s use of technology for security purposes, while others privately object to pervasive surveillance. Nevertheless, open expressions of dissent are infrequent and are often subject to suppression [69,80,86].
Another important distinction lies in how companies approach innovation. European firms operate under stricter design constraints, requiring them to integrate privacy protections and regulatory compliance from the outset. While this can increase development costs and slow the pace of iteration, it may also position these firms as global leaders in ethical and trustworthy technology [14,31,61]. In contrast, American companies often innovate freely within their domestic market and subsequently adapt their products to meet European standards. For example, many United States-based applications and artificial intelligence services have had to modify features to comply with the General Data Protection Regulation when entering the European Union market [58,59]. Chinese companies, many of which operate within public–private partnerships supported by the state, benefit from a large domestic market, permissive data use policies, and strong government incentives for adoption. These conditions have enabled major firms such as Alibaba, Tencent, and Huawei, along with numerous artificial intelligence start-ups, to deploy smart city technologies extensively within China. Many of these solutions are now being exported globally, particularly to developing countries that are seeking rapid smart city modernization [69,75,80].
This situation introduces important geopolitical considerations. Chinese firms, supported by the state, export comprehensive smart city solutions that often include advanced surveillance capabilities, raising concerns about the international spread of digital authoritarianism [75,80,84]. In response, both the European Union and the United States have begun to offer alternative models. These include initiatives such as the United States Smart Cities Marketplace and various European Union international partnerships, which prioritize privacy protection and the use of open standards [11,47,58]. In the area of global standards and regulatory coordination, the three regions have initiated more structured dialogue. For instance, the European Union and the United States convene the Trade and Technology Council, where discussions on artificial intelligence governance are intended to prevent the emergence of entirely incompatible regulatory regimes [58]. However, achieving alignment remains difficult due to fundamental differences in governance philosophy. While the European Union and the United States both operate within democratic systems, the European Union is more inclined to impose regulation in order to uphold democratic values. This regulatory divergence may narrow over time, particularly as the United States recognizes the need for greater alignment to avoid trade friction, especially if the extraterritorial provisions of the European Union’s Artificial Intelligence Act begin to take effect [14,36,58]. Meanwhile, China continues to promote its own model of digital sovereignty and actively participates in international standards organizations such as the International Organization for Standardization and the International Telecommunication Union, aiming to shape technical standards to its advantage [77,87]. Institutions led by China have, at times, promoted frameworks that legitimize extensive surveillance as a component of smart city development, a position strongly opposed by the European Union [75,80,84].
The European Union prioritizes ethical alignment and the protection of fundamental rights, even if this slows the pace of technological deployment [14,31]. The United States prioritizes innovation and economic dynamism, addressing emerging challenges through a more ad hoc and decentralized approach [28,58]. China, by contrast, prioritizes rapid implementation and centralized state control, viewing technology as a strategic tool for governance and national development [75,80,82]. Each of these models has produced notable successes as well as criticism. For residents of smart cities, these differences translate into distinctly different urban experiences. A citizen in a European Union smart city may have access to advanced digital services while also encountering visible privacy notices and benefiting from strong legal protections for personal data [17,31]. A resident in a United States smart city might enjoy early access to innovative services but may also express concern over data governance and the fairness of automated systems [7,59]. In a Chinese smart city, citizens often benefit from highly efficient urban systems and enhanced public security but typically accept continuous monitoring and have limited influence over how these technologies are applied [79,80,86].
As cities around the world increasingly turn to artificial intelligence to address urban challenges, many look to the European Union, the United States, or China as reference models or sources of technological solutions. The European Union offers a human-centric, regulated smart city framework that aims to combine technological advancement with strong privacy protections. This approach appeals to other democratic nations and aligns with the United Nations Human Settlements Program’s principles of people-centered smart cities [14,31,47]. The United States model prioritizes innovation and typically offers more affordable, consumer-oriented solutions. Many Internet of Things devices and urban digital applications originate in the United States market and are adopted flexibly by cities worldwide, although this model may lack a comprehensive governance structure [28,58,59]. The Chinese model provides integrated systems and financing packages, which are particularly attractive to cities in Asia and Africa. However, it also raises critical discussions about the trade-off between service efficiency and the implications of pervasive surveillance [75,80,88].
This comparative perspective highlights that the governance of artificial intelligence is inseparable from cultural values and political systems. Each region’s approach to smart city artificial intelligence reflects its specific way of balancing individual rights, collective benefits, economic priorities, and the role of the state [31,58,69,84].
Looking ahead, there are ongoing efforts to identify areas of common ground. All three regions recognize the importance of establishing shared standards for artificial intelligence interoperability and safety, and all participate in international forums addressing the ethics of artificial intelligence. Even China has endorsed certain high-level ethical principles, although interpretations and implementations vary [31,80]. Over time, some degree of convergence may emerge. The United States may introduce more regulatory safeguards, the European Union could refine its regulatory framework to better accommodate innovation, and China might adapt its policies in response to domestic concerns about potential misuse. Global challenges such as climate change and pandemics also provide incentives for greater cooperation on artificial intelligence for public good, including the exchange of best practices in pandemic response within urban environments [10,47]. Nonetheless, in the short to medium term, the European Union, the United States, and China are expected to maintain distinct trajectories in their approaches to smart city artificial intelligence. European smart cities are likely to focus on responsible innovation grounded in rights and ethics, American smart cities on entrepreneurial and market-driven innovation, and Chinese smart cities on state-directed innovation with an emphasis on governance and efficiency. Each model represents a different expression of artificial intelligence deployment in urban contexts.
To synthesize the comparative analysis of AI governance in smart cities across the European Union, the United States, and China, Table 3 presents a structured matrix contrasting the three jurisdictions along five critical dimensions: data governance, AI risk classification, public oversight, surveillance norms, and public sector innovation policies. This matrix provides a visual summary of the regulatory philosophies and institutional practices discussed in the preceding section, highlighting how distinct governance models shape the development and deployment of AI in urban contexts.

8. Discussion

The adoption of artificial intelligence in smart city solutions sits at the intersection of technological possibility, regulatory frameworks, and societal needs. This review has examined how the European Union’s regulations impact AI deployment in key urban domains, the technical and organizational challenges that accompany those deployments, and the substantial societal benefits that motivate cities to invest in AI-enabled systems. We have also contrasted the EU’s approach with the strategies of the United States and China, highlighting fundamental differences in philosophy and practice that shape smart city trajectories around the world.
Several key themes emerge from this analysis. First, European Union regulations constitute both a barrier and a protective framework for the application of artificial intelligence in smart cities. On the one hand, instruments such as the General Data Protection Regulation, the recently adopted Artificial Intelligence Act, and various sector-specific directives impose stringent requirements. These include obligations related to privacy protection, transparency, and risk management, which can slow project timelines and necessitate substantial compliance efforts. These regulatory constraints pose particular challenges for smaller municipalities and companies, and they contribute to concerns regarding Europe’s competitiveness in rapidly evolving artificial intelligence markets. On the other hand, the European Union’s principled approach establishes a clear and consistent regulatory framework that encourages responsible innovation. By defining limits, such as prohibiting intrusive surveillance and mandating the mitigation of algorithmic bias, these regulations aim to prevent harm and to foster public trust in artificial intelligence systems. Such trust is essential for sustainable adoption, as citizens are more inclined to embrace artificial intelligence-powered services when assured of strong safeguards for privacy and ethics. In this respect, European Union regulations can be seen as a double-edged instrument, moderating unrestrained technological development while helping to ensure that innovation aligns with European social values. The principal challenge for European smart cities is to navigate these regulations efficiently, making effective use of mechanisms such as regulatory sandboxes and the harmonization of technical standards to reduce barriers to compliance. This review also identified opportunities for improvement, including the need to clarify the application of the General Data Protection Regulation to artificial intelligence systems serving the public interest and to provide more detailed technical guidance under the Artificial Intelligence Act. Ultimately, the success of the European model will depend on maintaining an ongoing dialogue among regulators, city authorities, industry stakeholders, and civil society to refine the legal framework in response to real-world implementation challenges.
Second, the technological challenges associated with implementing artificial intelligence in urban contexts are substantial but can be overcome with appropriate strategies. Key obstacles identified include data integration, interoperability, algorithmic transparency, and cybersecurity. These challenges frequently delay or complicate project execution. Addressing them will require investment in capacity-building within municipal administrations, including the modernization of information technology infrastructure, the recruitment or training of personnel with expertise in data science and artificial intelligence, and the development of partnerships with academic institutions and technology providers. It will also necessitate the advancement and deployment of new technologies that are compatible with regulatory requirements. Examples include privacy-preserving artificial intelligence methods, such as federated learning and differential privacy, which enable models to be trained on decentralized municipal datasets without compromising individual privacy, as well as explainable artificial intelligence tools that generate interpretable outputs to support transparent decision-making in public services. The European Union and national governments can play a crucial role by supporting research and pilot initiatives in these areas, a process that is already underway. Municipalities, in turn, should actively share insights through networks such as EUROCITIES or the Open and Agile Smart Cities initiative, thereby facilitating the dissemination of best practices. A central insight is that regulatory compliance and technical robustness are not mutually exclusive but can be pursued concurrently. For example, enhancing data quality and ensuring thorough documentation serve both to meet legal standards and to improve the accuracy, reliability, and usability of artificial intelligence systems. Promoting a culture of compliance as a dimension of quality may therefore transform procedural requirements into meaningful contributions to performance and public trust. Although the technological barriers remain significant, they are diminishing over time as cities continue to innovate and adapt. Technologies that once seemed speculative, such as artificial intelligence for traffic coordination or predictive infrastructure maintenance, are now being deployed in practice. This trend suggests that sustained innovation, supported by thoughtful and adaptive governance, will continue to yield effective and trustworthy smart city solutions.
Third, the societal benefits of artificial intelligence in smart cities are both compelling and wide-ranging. From reducing traffic congestion and lowering carbon emissions to improving public health outcomes and enhancing the accessibility of government services, artificial intelligence has demonstrated the capacity to address numerous urban challenges more effectively than traditional approaches. These benefits contribute directly to public welfare by saving time, improving safety, reducing environmental pollution, and enabling services to be tailored to individual needs. Many of these advantages also align with broader policy objectives, such as the European Union’s Green Deal, which promotes sustainability, and the Digital Decade targets, which aim to advance the digital transformation of public services. In this context, the responsible deployment of artificial intelligence should not be viewed as an isolated technological initiative but rather as a strategic enabler for achieving wider societal goals, including climate resilience, social inclusion, and more efficient public administration. To fully realize these benefits, it is essential that citizens remain central to the design and implementation of smart city initiatives. Public engagement and transparency are critical. While this review primarily focused on the perspectives of policymakers, regulators, and technology providers, it is increasingly recognized that citizens, communities, and civil society actors also play an essential role in shaping the governance of AI in smart cities. Their involvement—through public consultations, watchdog organizations, or community-based initiatives—can influence regulatory interpretations and implementation strategies at the municipal level. Future research and policy design should further integrate these bottom-up contributions to ensure that AI systems reflect not only institutional priorities but also the values and lived experiences of diverse urban populations. When residents understand how an artificial intelligence-based system may improve areas such as commuting, and when they see that appropriate safeguards are in place, they are more likely to support such innovations. European cities have taken a leading role in participatory governance, employing mechanisms such as citizen juries to deliberate on algorithmic use and public dashboards to visualize smart city metrics. These practices should be continued and expanded, as they not only build public trust but also generate valuable feedback to improve services, for example, by allowing residents to report issues that an artificial intelligence system may have overlooked. Moreover, the distribution of societal benefits must be equitable. A recognized risk associated with advanced technologies in urban settings is the emergence or exacerbation of a digital divide, whereby certain groups may lack access to new services. Older adults, for example, may not use smartphone applications to access municipal services. To address this, cities should adopt complementary strategies that ensure inclusivity. These may include maintaining non-digital service access points and investing in digital literacy programs. European Union frameworks support such efforts through accessibility requirements and funding for digital inclusion, reinforcing the principle that technological advancement must be accompanied by measures to ensure that no one is left behind.
Finally, the comparative analysis of the European Union, the United States, and China illustrates that there is no universal model for governing artificial intelligence in smart cities. Each jurisdiction presents distinct strengths from which others can learn as well as shortcomings to be avoided. The European Union’s emphasis on ethics, fundamental rights, and accountability has become increasingly relevant in an era marked by concerns over data misuse and algorithmic bias. These challenges are now being recognized globally, including in the United States and China, which could benefit from elements of the European regulatory model. For instance, some US states have begun to adopt privacy laws resembling the General Data Protection Regulation, and China’s Personal Information Protection Law incorporates aspects of international privacy frameworks. Conversely, the European Union is also taking note of the rapid pace of innovation in the United States and China and acknowledging the need to strengthen its own innovation ecosystem. This recognition is evident in initiatives aimed at reducing administrative burdens, expanding the use of regulatory sandboxes, and increasing investment in artificial intelligence start-ups. These measures reflect an emerging consensus that regulation must be accompanied by effective mechanisms to support growth and technological development. There is also a growing awareness that international cooperation will be essential in certain domains. Shared challenges such as the establishment of technical standards for artificial intelligence safety, the promotion of data interoperability, and the mitigation of transnational risks, including cybersecurity threats and the spread of disinformation, cannot be addressed by any single jurisdiction in isolation. Forums such as the Global Partnership on Artificial Intelligence and structured dialogues, including the European Union and United States Trade and Technology Council and European Union and China digital engagements, offer opportunities for mutual learning and coordination on areas of common concern, even where political or governance differences persist. For smart cities, such cooperation may enable the exchange of best practices. A European city could adopt a promising artificial intelligence application piloted in an American municipality, while an American city might implement transparency and accountability mechanisms inspired by European models. Engagement with China is more complex due to divergent governance norms. Nevertheless, the experience of Chinese cities in deploying artificial intelligence at scale, such as in managing large-scale urban transit systems, offers technical insights that could be abstracted and adapted to other contexts, excluding the surveillance dimensions.

9. Policy Recommendations for Facilitating AI Adoption in Smart Cities

The following recommendations respond to the documented challenges and stakeholder feedback and are supported by selected case examples that illustrate practical implementation pathways. These policy proposals aim to mitigate compliance burdens, particularly for smaller municipalities, and to optimize existing governance mechanisms such as regulatory sandboxes. They are designed to enable the responsible and effective deployment of artificial intelligence in smart city contexts, while maintaining the European Union’s commitment to legal safeguards, ethical standards, and public trust.

9.1. Tailored Support for Small and Medium-Sized Cities

Smaller municipalities face disproportionate barriers in navigating the European Union’s regulatory landscape for artificial intelligence. Limited financial and human resources often prevent them from conducting Data Protection Impact Assessments, preparing conformity assessments under the AI Act, or developing the required technical documentation. To address this, the European Commission and Member States should establish dedicated technical assistance services and co-financing schemes within programs such as Digital Europe or the Cohesion Fund. These should be explicitly designed to support regulatory compliance for public interest use-cases in smaller jurisdictions. A centralized guidance platform offering templates, toolkits, and real-world case studies tailored to municipal needs would further democratize access to trustworthy AI. For example, Limerick City and County Council leveraged Horizon 2020 funding in the +CityxChange project (grant 824260) to roll out AI-enabled positive-energy pilots while introducing a city-wide Data Management Plan, updating its Data Protection Policy to align with the GDPR and the Irish Data Protection Act 2018 [90], and appointing a dedicated Data Protection Officer to oversee compliance and supplier agreements. The Insight.Limerick.ie platform now exposes linked open-data APIs that feed municipal AI services, illustrating how medium-sized cities can pair EU co-financing with ready-made governance templates to reduce regulatory burden [91].

9.2. Decentralized and Inclusive Regulatory Sandboxes

The Artificial Intelligence Act foresees the implementation of regulatory sandboxes, yet their scope and accessibility remain limited. It is recommended that regulatory sandboxes be decentralized to regional or metropolitan levels and structured to prioritize participation by local governments and small and medium-sized enterprises (SMEs). These sandboxes should enable real-world testing of AI systems in controlled environments with regulatory oversight, particularly for applications in transport, energy management, and public administration. Public dissemination of sandbox findings, including technical configurations and compliance strategies, should be mandated to foster cross-city learning and policy harmonization. Barcelona’s participation in the AI4Cities project exemplified this approach, trialing an AI-based mobility platform within a regulatory sandbox that enabled iterative risk assessments in dialogue with city stakeholders and suppliers [92].

9.3. Streamlined Compliance Pathways for Public Interest Applications

The European Commission should issue supplementary guidance under both the AI Act and the General Data Protection Regulation to clarify how legal obligations apply to AI systems deployed in public interest contexts, especially those presenting minimal individual risk. A fast-track conformity assessment procedure should be introduced for standardized, low-risk AI applications commonly used by municipalities, such as traffic flow optimization or predictive maintenance. This would reduce administrative delays and compliance costs while preserving accountability and transparency standards. The Finnish city of Tampere piloted a fast-track AI procurement model for traffic signal optimization, supported by predefined conformity checklists aligned with AI Act draft obligations [93].

9.4. Mandates for Interoperability and Open Standards

Procurement processes across the EU should be reformed to mandate that AI systems procured by public authorities conform to established interoperability frameworks and open standards. The development and integration of compliance toolkits, such as those being piloted in the Data Space for Smart and Sustainable Cities and Communities (DS4SSCC), should be accelerated. These toolkits should include embedded mechanisms for data governance, algorithmic transparency, and cybersecurity certification, thus easing the technical burden on individual municipalities. Amsterdam’s use of the FIWARE platform in its city IoT architecture demonstrates how enforcing open standards at the procurement stage can dismantle vendor lock-in and support cross-domain interoperability [94].
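As an illustration of what such open-standard interoperability looks like at the data layer, the sketch below publishes a traffic observation as an NGSI-v2 context entity, the interface exposed by the FIWARE Orion Context Broker. The broker address, entity identifier, and attribute values are assumptions made for the example; the attribute names loosely follow publicly available smart-city data models, and a real deployment would adapt them to the relevant data model and add authentication.

```python
# Illustrative sketch: publishing a traffic observation as an NGSI-v2 context
# entity so any standards-compliant consumer can reuse it. Assumes an Orion
# Context Broker reachable at a local address; values are made up.
import requests

entity = {
    "id": "urn:ngsi-ld:TrafficFlowObserved:junction-42",   # hypothetical identifier
    "type": "TrafficFlowObserved",
    "intensity": {"value": 197, "type": "Number"},          # vehicles per interval
    "occupancy": {"value": 0.23, "type": "Number"},
    "dateObserved": {"value": "2025-05-01T08:15:00Z", "type": "DateTime"},
}

resp = requests.post(
    "http://localhost:1026/v2/entities",  # NGSI-v2 endpoint of an assumed local broker
    json=entity,
    timeout=10,
)
resp.raise_for_status()  # 201 Created means the entity is now queryable by other services
```

Because any standards-compliant consumer can query the same entity, a city can replace either the data producer or the consumer without renegotiating proprietary interfaces, which is precisely the vendor lock-in risk the procurement mandate targets.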

9.5. Capacity-Building Through Urban Peer Learning Networks

Municipal capacity for AI governance must be systematically enhanced through knowledge exchange and institutional support. The European Union should invest in strengthening peer learning platforms such as EUROCITIES, OASC, and Living-in.EU, with a focus on regulatory compliance, technical deployment, and citizen engagement strategies. These networks should serve as hubs for disseminating best practices, training modules, and interoperable software solutions that have proven effective in navigating AI regulation at the local level. EUROCITIES’ Digital Forum has facilitated peer learning on AI ethics, with Helsinki and Rotterdam co-developing a shared methodology for bias auditing in municipal decision-support systems [95].
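To make the notion of a bias audit more concrete, the sketch below computes a simple demographic-parity gap, that is, the difference in positive-outcome rates between groups, for a hypothetical decision-support system. The synthetic records and single metric are illustrative only and do not represent the Helsinki and Rotterdam methodology referenced above; a full audit would draw on logged decisions and a wider battery of metrics such as equalized odds and calibration.

```python
# Illustrative bias-audit check: demographic-parity gap between groups in the
# positive-outcome rate of a hypothetical municipal decision-support system.
# The records are synthetic and exist only to demonstrate the calculation.
from collections import defaultdict

# (protected_group, automated_decision) pairs, e.g. benefit-screening outcomes
decisions = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print("positive-outcome rate per group:", rates)
print("demographic parity gap:", round(gap, 3))  # large gaps flag the need for review
```

Publishing such audit results alongside an algorithm registry entry is one way peer-learning networks could standardize reporting across cities.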
Together, these recommendations provide a policy pathway to reduce the regulatory and operational barriers faced by smart cities, especially those with limited resources, and to accelerate the equitable and trustworthy adoption of AI across the European Union.

10. Conclusions

The landscape of artificial intelligence adoption in smart city solutions in 2025 reflects a trajectory of cautious progress under a framework of guiding principles, particularly within the European context. The European Union’s regulatory environment, while complex, offers a distinct model for integrating technological advancement with social responsibility. By identifying and addressing both regulatory and technical barriers, European stakeholders have the opportunity to accelerate the responsible deployment of artificial intelligence across urban systems. Achieving this will require sustained dialogue between regulators and innovators to refine policy frameworks, targeted investment in infrastructure and skill development, and continuous engagement with the public to maintain a social mandate for artificial intelligence adoption. The societal benefits at stake include safer transport systems, improved air quality, more efficient public services, and healthier, better-connected communities. These gains are substantial and merit the effort required to overcome existing obstacles.
The comparative analysis suggests that the European Union is well positioned to assume a leadership role in shaping a global paradigm for human-centric smart cities, offering a clear alternative to more laissez-faire or surveillance-oriented models. As cities around the world stand on the threshold of artificial intelligence-driven transformation, the European experience illustrates that a principled and citizen-focused pathway is both feasible and potentially advantageous in the long term. This path not only supports more technologically advanced cities but also fosters urban environments that are more sustainable, inclusive, and responsive to residents’ needs. The journey remains ongoing. Both artificial intelligence and urban systems will continue to evolve, and regulation must adapt accordingly. However, with careful stewardship, the vision of smart cities that enhance quality of life while upholding public trust and fundamental rights can be fully realized.
Hence, the impact of European Union regulations on artificial intelligence adoption in smart cities has been both substantial and formative. These regulations present clear challenges but also establish a foundation of trust upon which future urban innovation can be built. The coming decade will serve as a test of whether this model can deliver at scale and whether the European Union can sustain an environment in which artificial intelligence flourishes in service of societal values and democratic governance, offering a benchmark for other regions to consider.

Author Contributions

B.N.J. and Z.G.M. have contributed equally to the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

AI: Artificial intelligence
AI Act: Artificial Intelligence Act (EU Regulation 2024/1689)
CE: Conformité Européenne (“CE marking” that shows EU product conformity)
CEER: Council of European Energy Regulators
CEPS: Centre for European Policy Studies
COM: European Commission working-document code (e.g., “COM(2021) 206 final”)
COVID-19: Coronavirus Disease 2019
DATEX II: Data Exchange standard (EU road-traffic/ITS data model, 2nd edition)
DS4SSCC: Data Space for Smart and Sustainable Cities and Communities
ENISA: European Union Agency for Cybersecurity
EU: European Union
EUROCITIES: Network of major European cities
GDPR: General Data Protection Regulation (Regulation (EU) 2016/679)
IEEE: Institute of Electrical and Electronics Engineers
IT: Information technology
ITS: Intelligent Transport Systems
MIAI: Multidisciplinary Institute in Artificial Intelligence (Grenoble, FR)
NIS: Network and Information Security (EU cybersecurity framework)
NIS2: Second Directive on Security of Network and Information Systems (Directive (EU) 2022/2555)
OASC: Open & Agile Smart Cities (city innovation network)
PRELUDE: Predictive Retrospective Energy Performance to Upgrade Decarbonise EU (EU H-2020 project)
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PRISMA-ScR: PRISMA extension for Scoping Reviews
RMF: Risk Management Framework (referencing NIST AI RMF 1.0)
US: United States (of America)

References

  1. Wolniak, R.; Stecuła, K. Artificial Intelligence in Smart Cities—Applications, Barriers, and Future Directions: A Review. Smart Cities 2024, 7, 1346–1389. [CrossRef]
  2. Valerio, P. Energy Regulation and Innovation for Smart Cities. Available online: https://citiesofthefuture.eu/energy-regulation-and-innovation-for-smart-cities/ (accessed on 3 May 2025).
  3. Roth, Z.; Incera, M. The Rise of AI-Powered Smart Cities. Available online: https://www.spglobal.com/en/research-insights/special-reports/ai-smart-cities (accessed on 3 May 2025).
  4. Ben Rjab, A.; Mellouli, S.; Corbett, J. Barriers to artificial intelligence adoption in smart cities: A systematic literature review and research agenda. Gov. Inf. Q. 2023, 40, 101814. [CrossRef]
  5. Dragonetti, W.; Garcia, R. Cities Discuss Regulation of AI and Its Many Challenges. Available online: https://eurocities.eu/latest/cities-discuss-regulation-of-ai-and-its-many-challenges/ (accessed on 3 May 2025).
  6. Cities of the Future. NIS2 Directive Casts a Wider Net Over Smart City Infrastructure. Available online: https://citiesofthefuture.eu/nis2-directive-casts-a-wider-net-over-smart-city-infrastructure/ (accessed on 3 May 2025).
  7. Muggah, R. ‘Smart’ Cities Are Surveilled Cities. Available online: https://foreignpolicy.com/2021/04/17/smart-cities-surveillance-privacy-digital-threats-internet-of-things-5g/ (accessed on 1 May 2025).
  8. Smith, B. Why Smart Cities Threaten Citizens’ Right to Privacy. Available online: https://www.urbanet.info/why-smart-city-data-treatens-citizens-right-to-privacy/ (accessed on 2 May 2025).
  9. Jakubowska, E. How to fight Biometric Mass Surveillance After the AI Act. Available online: https://edri.org/our-work/how-to-fight-biometric-mass-surveillance-after-the-ai-act-a-legal-and-practical-guide/ (accessed on 2 May 2025).
  10. European Commission. AI Act (Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence)—Shaping Europe’s Digital Future. Available online: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (accessed on 3 May 2025).
  11. European Parliament, CotEU. EU AI Act: First Regulation on Artificial Intelligence. Available online: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (accessed on 3 May 2025).
  12. European Parliament Research Service. A European Approach to Artificial Intelligence (Excellence and Trust). Available online: https://www.europarl.europa.eu/RegData/etudes/BRIE/2020/649360/EPRS_BRI(2020)649360_EN.pdf (accessed on 6 May 2025).
  13. European Commission. Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 (accessed on 6 May 2025).
  14. European Parliament, CotEU. Regulation (EU) 2024/1689—Artificial Intelligence Act. Available online: http://data.europa.eu/eli/reg/2024/1689/oj (accessed on 6 May 2025).
  15. European Commission. Article 5: Prohibited AI Practices (EU AI Act). Available online: https://artificialintelligenceact.eu/article/5/ (accessed on 6 May 2025).
  16. Future of Life Institute. High-Level Summary of the AI Act (Art. 5 Prohibited AI Practices). Available online: https://artificialintelligenceact.eu/high-level-summary/ (accessed on 6 May 2025).
  17. European Parliament, CotEU. Regulation (EU) 2016/679—General Data Protection Regulation (GDPR). Available online: https://eur-lex.europa.eu/eli/reg/2016/679/oj (accessed on 6 May 2025).
  18. Lawne, R. Biometrics in the EU: Navigating the GDPR, AI Act. Available online: https://iapp.org/news/a/biometrics-in-the-eu-navigating-the-gdpr-ai-act (accessed on 2 May 2025).
  19. European Parliament, CotEU. Directive (EU) 2022/2555 on Measures for a High Common Level of Cybersecurity Across the Union (NIS2). Available online: https://eur-lex.europa.eu/eli/dir/2022/2555/oj (accessed on 6 May 2025).
  20. European Parliament, CotEU. Commission Delegated Regulation (EU) 2024/1366 of 11 March 2024 Supplementing Regulation (EU) 2019/943 by Establishing a Network Code on Sector-Specific Rules for Cybersecurity Aspects of Cross-Border Electricity Flows. Available online: http://data.europa.eu/eli/reg_del/2024/1366/oj (accessed on 6 May 2025).
  21. Directorate-General for Energy. New Network Code on Cybersecurity for EU Electricity Sector. Available online: https://energy.ec.europa.eu/news/new-network-code-cybersecurity-eu-electricity-sector-2024-03-11_en (accessed on 3 May 2025).
  22. European Parliament, CotEU. Regulation (EU) 2019/881—The Cybersecurity Act. Available online: http://data.europa.eu/eli/reg/2019/881/oj (accessed on 6 May 2025).
  23. European Parliament, CotEU. Regulation (EU) 2024/2847—Cyber Resilience Act. Available online: http://data.europa.eu/eli/reg/2024/2847/oj (accessed on 6 May 2025).
  24. European Commission. Cyber Resilience Act—Enhancing Cybersecurity of Products with Digital Elements. Available online: https://digital-strategy.ec.europa.eu/en/policies/cyber-resilience-act (accessed on 3 May 2025).
  25. European Commission. Proposal for a Regulation on Horizontal Cybersecurity Requirements for Products with Digital Elements (Cyber Resilience Act). Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52022PC0454 (accessed on 6 May 2025).
  26. Wernick, A.; Banzuzi, E.; Mörelius-Wulff, A. Do European smart city developers dream of GDPR-free countries? The pull of global megaprojects in the face of EU smart city compliance and localisation costs. Internet Policy Rev. 2023, 12, 28–45. [CrossRef]
  27. Andrews, C. European Commission withdraws AI Liability Directive from Consideration. Available online: https://iapp.org/news/a/european-commission-withdraws-ai-liability-directive-from-consideration/ (accessed on 3 May 2025).
  28. Williams, A. What Could Horizontal AI Legislation Look Like In the US? Exploring the US Algorithmic Accountability Act. Available online: https://www.holisticai.com/blog/us-algorithmic-accountability-act (accessed on 4 May 2025).
  29. Valtasaari, S. Exploring the AI Act: What It Means for the OASC Community. Available online: https://oascities.org/exploring-the-ai-act-what-it-means-for-the-oasc-community/ (accessed on 2 May 2025).
  30. European Commission. Data Space for Smart and Sustainable Cities and Communities (DS4SSCC). Available online: https://www.ds4sscc.eu/ (accessed on 3 May 2025).
  31. Laux, J.; Wachter, S.; Mittelstadt, B. Trustworthy Artificial Intelligence and the European Union AI Act: On the Conflation of Trustworthiness and the Acceptability of Risk. SSRN Electron. J. 2022. [CrossRef]
  32. European Commission. Coordinated Plan on Artificial Intelligence. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52018DC0795 (accessed on 6 May 2025).
  33. Zhou, Y.; Kankanhalli, A. AI Regulation for Smart Cities: Challenges and Principles. In Smart Cities and Smart Governance: Towards the 22nd Century Sustainable City; Estevez, E., Pardo, T.A., Scholl, H.J., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 101–118.
  34. Fabregue, B. Artificial intelligence governance in smart cities: A European regulatory perspective. J. Auton. Intell. 2024, 7, 1–17. [CrossRef]
  35. European Commission. AI Act Enters into Force. Available online: https://commission.europa.eu/news/ai-act-enters-force-2024-08-01_en (accessed on 3 May 2025).
  36. Supra, J.D. AI Watch: Global Regulatory Tracker—European Union (Feb 2025 Update). Available online: https://www.jdsupra.com/legalnews/ai-watch-global-regulatory-tracker-3466836/ (accessed on 5 May 2025).
  37. Joyce, A.; Javidroozi, V. Smart City Development: Data Sharing vs. Data Protection Legislations. Cities 2024, 148, 104859. [CrossRef]
  38. European Parliament, CotEU. Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data by Competent Authorities for the Purposes of the Prevention, Investigation, Detection or Prosecution of Criminal Offences or the Execution of Criminal Penalties, and on the Free Movement of Such Data, and Repealing Council Framework Decision 2008/977/JHA. Available online: https://eur-lex.europa.eu/eli/dir/2016/680/oj (accessed on 6 May 2025).
  39. European Commission. European Data Governance Act—Increasing Trust in Data Sharing. Available online: https://digital-strategy.ec.europa.eu/en/policies/data-governance-act (accessed on 2 May 2025).
  40. Council of the European Union. Council Adopts New Directive on Intelligent Transport Systems (ITS). Available online: https://www.consilium.europa.eu/en/press/press-releases/2023/10/23/council-adopts-new-framework-to-boost-the-roll-out-of-intelligent-transport-systems/ (accessed on 6 May 2025).
  41. European Parliament, CotEU. Directive (EU) 2023/2661 Amending Directive 2010/40/EU on Intelligent Transport Systems. Available online: https://eur-lex.europa.eu/eli/dir/2023/2661/oj/eng (accessed on 6 May 2025).
  42. Maczkovics, C.; Grelier, L.-A.; Choi, S.J.; Coget, M.; Hüsch, M. European Commission Publishes Automotive Industrial Action Plan. Available online: https://www.globalpolicywatch.com/2025/03/european-commission-publishes-automotive-industrial-action-plan/ (accessed on 2 May 2025).
  43. European Commission. Data Act Explained—A Pillar of the European Strategy for Data. Available online: https://digital-strategy.ec.europa.eu/en/policies/data-act (accessed on 3 May 2025).
  44. European Parliament, CotEU. Directive 2010/40/EU of the European Parliament and of the Council of 7 July 2010 on the Framework for the Deployment of Intelligent Transport Systems in the Field of Road Transport and for Interfaces with Other Modes of Transport. Available online: https://eur-lex.europa.eu/eli/dir/2010/40/oj (accessed on 6 May 2025).
  45. Legorburu, J. Withdrawal of the AI Liability Directive. Available online: https://byrnewallaceshields.com/news-and-recent-work/publications/withdrawal-of-the-ai-liability-directive.html (accessed on 2 May 2025).
  46. European Parliament, CotEU. Directive (EU) 2019/944 of the European Parliament and of the Council of 5 June 2019 on Common Rules for the Internal Market for Electricity and Amending Directive 2012/27/EU (Recast). Available online: https://eur-lex.europa.eu/eli/dir/2019/944/oj (accessed on 3 May 2025).
  47. European Commission. Energy and Smart Cities. Available online: https://energy.ec.europa.eu/topics/clean-energy-transition/energy-and-smart-cities_en (accessed on 3 May 2025).
  48. European Parliament, CotEU. Directive (EU) 2022/2557 of the European Parliament and of the Council of 14 December 2022 on the Resilience of Critical Entities and Repealing Council Directive 2008/114/EC. Available online: https://eur-lex.europa.eu/eli/dir/2022/2557/oj (accessed on 6 May 2025).
  49. European Parliament, CotEU. Directive (EU) 2024/3019 of the European Parliament and of the Council of 27 November 2024 Concerning Urban Wastewater Treatment (Recast). Available online: https://eur-lex.europa.eu/eli/dir/2024/3019/oj (accessed on 2 May 2025).
  50. European Parliament, CotEU. Directive 2014/24/EU of the European Parliament and of the Council of 26 February 2014 on Public Procurement and Repealing Directive 2004/18/EC. Available online: https://eur-lex.europa.eu/eli/dir/2014/24/oj (accessed on 6 May 2025).
  51. European Parliament, CotEU. Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on Liability for Defective Products and Repealing Council Directive 85/374/EEC. Available online: https://eur-lex.europa.eu/eli/dir/2024/2853/oj (accessed on 6 May 2025).
  52. European Commission. EU Adapts Product Liability Rules to the Digital Age and Circular Economy. Available online: https://commission.europa.eu/news-and-media/news/eu-adapts-product-liability-rules-digital-age-and-circular-economy-2024-12-09_en (accessed on 3 May 2025).
  53. Moille, C.; McCluskey, C.; Scott, G. EU Adopts Cyber Resilience Act for Connected Devices. Available online: https://www.goodwinlaw.com/en/insights/publications/2024/10/alerts-technology-ftec-eu-adopts-cyber-resilience-act (accessed on 2 May 2025).
  54. European Parliament, CotEU. Regulation (EU) 2022/868—Data Governance Act. Available online: https://eur-lex.europa.eu/eli/reg/2022/868/oj (accessed on 6 May 2025).
  55. European Parliament, CotEU. Regulation (EU) 2023/2854—Data Act. Available online: https://eur-lex.europa.eu/eli/reg/2023/2854/oj (accessed on 6 May 2025).
  56. Valerio, P. Europe’s GDPR Slaps Data Collected by Cities. Available online: https://citiesofthefuture.eu/europes-gdpr-slaps-data-collected-by-cities/ (accessed on 3 May 2025).
  57. Karathanasis, T. Guidance on Classification and Conformity Assessments for High-Risk AI Systems Under EU AI Act. Available online: https://ai-regulation.com/guidance-on-high-risk-ai-systems-under-eu-ai-act/ (accessed on 2 May 2025).
  58. Engler, A. The EU and U.S. Diverge on AI Regulation: A Transatlantic Comparison and Steps to Alignment. Available online: https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/ (accessed on 3 May 2025).
  59. Wallace, N.; Castro, D. The Impact of the EU’s New Data Protection Regulation (GDPR) on AI. Available online: https://www2.datainnovation.org/2018-impact-gdpr-ai.pdf (accessed on 6 May 2025).
  60. TrustArc Inc. Protecting Personal Data in Smart Cities: The Role of Privacy Tech. Available online: https://trustarc.com/resource/protecting-personal-data-in-smart-cities/ (accessed on 3 May 2025).
  61. Lane, L. Preventing Long-Term Risks to Human Rights in Smart Cities: A Critical Review of Responsibilities for Private AI Developers. Internet Policy Rev. 2023, 12. [Google Scholar] [CrossRef]
  62. Buocz, T.; Pfotenhauer, S.; Eisenberger, I. Regulatory Sandboxes in the AI Act: Reconciling Innovation and Safety? Law Innov. Technol. 2023, 15, 357–389. [Google Scholar] [CrossRef]
  63. Zuiderveen Borgesius, F. Discrimination, Artificial Intelligence, and Algorithmic Decision-Making. Available online: https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73 (accessed on 7 May 2025).
  64. Adler, L. How Smart City Barcelona Brought the Internet of Things to Life. Available online: https://datasmart.hks.harvard.edu/news/article/how-smart-city-barcelona-brought-the-internet-of-things-to-life-789 (accessed on 3 May 2025).
  65. City of Helsinki. Helsinki Region Infoshare (HRI). Available online: https://hri.fi/en_gb/ (accessed on 2 May 2025).
  66. Council of the European Union. Cyber Resilience Act: Council Adopts New Law on Security Requirements for Digital Products. Available online: https://www.consilium.europa.eu/en/press/press-releases/2024/10/10/cyber-resilience-act-council-adopts-new-law-on-security-requirements-for-digital-products/ (accessed on 3 May 2025).
  67. The White House. FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. Available online: https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ (accessed on 3 May 2025).
  68. Forbes. How Does China’s Approach to AI Regulation Differ from the US And EU? Available online: https://www.forbes.com/sites/forbeseq/2023/07/18/how-does-chinas-approach-to-ai-regulation-differ-from-the-us-and-eu/ (accessed on 3 May 2025).
  69. Gong, J.; Dorwart, H. AI Governance in China: Strategies, Initiatives, and Key Considerations. Available online: https://www.twobirds.com/en/insights/2024/china/ai-governance-in-china-strategies-initiatives-and-key-considerations (accessed on 3 May 2025).
  70. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef] [PubMed]
  71. Novet, J. Amsterdam and Helsinki Launch Algorithm Registries to Bring Transparency to AI. Available online: https://venturebeat.com/ai/amsterdam-and-helsinki-launch-algorithm-registries-to-bring-transparency-to-public-deployments-of-ai/ (accessed on 2 May 2025).
  72. Goldsmith, S.; Yang, J. AI and the Transformation of Accountability and Discretion in Urban Governance. arXiv 2025, arXiv:2502.13101. [Google Scholar]
  73. White House Office of Science and Technology Policy. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. Available online: https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/ (accessed on 6 May 2025).
  74. U.S. National Institute of Standards and Technology. AI Risk Management Framework (AI RMF) 1.0. Available online: https://www.nist.gov/itl/ai-risk-management-framework (accessed on 6 May 2025).
  75. Ekman, A. China’s Smart Cities: The New Geopolitical Battleground. Available online: https://www.ifri.org/en/publications/etudes-de-lifri/chinas-smart-cities-new-geopolitical-battleground (accessed on 3 May 2025).
  76. Memoori. Progress vs Privacy—A Tale of Smart City Development in China. Available online: https://memoori.com/progress-vs-privacy-a-tale-of-smart-city-development-in-china/ (accessed on 2 May 2025).
  77. Mercator Institute for China Studies. China’s AI Development Model in an Era of Technological Deglobalization. Available online: https://merics.org/en/report/chinas-ai-development-model-era-technological-deglobalization (accessed on 2 May 2025).
  78. Xie, S.; Luo, N.; Yarime, M. Data Governance for Smart Cities in China: The Case of Shenzhen. Policy Des. Pract. 2024, 7, 66–86. [Google Scholar] [CrossRef]
  79. Li, P.; Stubbs, J. Chinese Tech Patents Tools That Can Detect, Track Uighurs. 2021. Available online: https://www.reuters.com/article/world/chinese-tech-patents-tools-that-can-detect-track-uighurs-idUSKBN29I2ZX/ (accessed on 2 May 2025).
  80. Kiggins, R. Smart City Governance in China: Ethics, Privacy and the Social Credit System. In Towards Socially Just Urban Futures; Hollands, J.R., Ed.; Springer: Berlin/Heidelberg, Germany, 2021; pp. 233–250. [Google Scholar]
  81. Horizons. China Social Credit System Explained—How It Works. Available online: https://joinhorizons.com/china-social-credit-system-explained/ (accessed on 3 May 2025).
  82. Linklaters. China’s First Generative AI Regulation Unveiled: Are There Positive Signals for the Emerging Technology Under Global Scrutiny? Available online: https://techinsights.linklaters.com/post/102iji5/chinas-first-generative-ai-regulation-unveiled-are-there-positive-signals-for-t (accessed on 2 May 2025).
  83. Carnegie Endowment for International Peace. China’s AI Regulations and How They Get Made. 2024. Available online: https://carnegieendowment.org/2024/02/27/tracing-roots-of-china-s-ai-regulations-pub-91815 (accessed on 3 May 2025).
  84. Arcesati, R. Lofty Principles, Conflicting Incentives: AI Ethics and Governance in China. Available online: https://merics.org/en/report/lofty-principles-conflicting-incentives-ai-ethics-and-governance-china (accessed on 3 May 2025).
  85. Wikipedia. Interim Measures for the Management of Generative AI Services. Available online: https://en.wikipedia.org/wiki/Interim_Measures_for_the_Management_of_Generative_AI_Services (accessed on 6 May 2025).
  86. Pernot-Leplay, E. China’s Approach on Data Privacy Law: A Third Way Between the U.S. and EU? Penn State J. Law Int. Aff. 2020, 8, 49–112. [Google Scholar]
  87. Sheehan, M. Tracing the Roots of China’s AI Regulations. Available online: https://carnegieendowment.org/2024/02/27/tracing-roots-of-china-s-ai-regulations-pub-91815 (accessed on 3 May 2025).
  88. Mercator Institute for China Studies. China’s AI Future in a Quest for Geopolitical, Computing, and Electric Power. 2024. Available online: https://merics.org/en/comment/chinas-ai-future-quest-geopolitical-computing-and-electric-power (accessed on 3 May 2025).
  89. Shanghai Municipal People’s Congress. Shanghai Regulations on Promoting the Development of AI Industry. 2022. Available online: http://www.ssme.sh.gov.cn/fg/20221212/24735.html (accessed on 4 May 2025).
  90. Oireachtas. Data Protection Act 2018. Available online: https://www.irishstatutebook.ie/eli/2018/act/7/enacted/en/html (accessed on 2 May 2025).
  91. Ahlers, D.; Riedesel, K. D11.15 Data Management Plan 6 (+CityxChange). Available online: https://cityxchange.eu/wp-content/uploads/2023/09/D11.15-Data-Management-Plan-6-final.pdf (accessed on 3 May 2025).
  92. AI4Cities Consortium. AI4Cities: Pre-Commercial Procurement for Carbon-Neutral Mobility and Energy Solutions. Available online: https://openresearch.amsterdam/en/page/102890/ai4cities (accessed on 3 May 2025).
  93. City of Tampere. AI-IoT Test Solution for Pedestrian Traffic Safety in Tampere. Available online: https://financialit.net/news/artificial-intelligence/city-tampere-and-tieto-develop-ai-iot-test-solution-pedestrian-traffic (accessed on 3 May 2025).
  94. Fiware Foundation. FIWARE for Smart Cities and Territories: A Digital Twin Approach. Available online: https://www.fiware.org/wp-content/directories/marketing-toolbox/material/FIWAREBrochure_SmartCities.pdf (accessed on 6 May 2025).
  95. Godson, A. Ethical AI in Cities: Eurocities Data Schema for Algorithm Registers. Available online: https://eurocities.eu/latest/nine-cities-set-standards-for-the-transparent-use-of-artificial-intelligence/ (accessed on 3 May 2025).
Figure 1. Conceptual framework of the review outlining the structured analysis of EU AI regulation in smart city solutions, organized into three thematic pillars—regulatory barriers, technological challenges, and societal benefits—followed by a comparative assessment of governance approaches in the European Union, the United States, and China.
Figure 2. PRISMA flow diagram for source identification and selection.
Table 1. Chronological overview of EU regulatory instruments governing AI in smart cities.
Year | Instrument | Scope | Key Provisions Relevant to Smart Cities | Status
2016 | Public Procurement Directives (Directive 2014/24/EU) [50] | Public AI system acquisition | Mandates ethical and legal compliance in tendering; encourages inclusion of AI transparency clauses | In force since 2016
2018 | GDPR (Regulation (EU) 2016/679) [38] | All personal data processing | Privacy-by-design; data minimization; purpose limitation; impact assessments; special protection for biometric data | In force since 2018
2018 | Law Enforcement Directive (Directive (EU) 2016/680) [38] | Public safety and surveillance | Conditions for personal data use by police; proportionality and necessity assessments | In force since 2018
2019 | Cybersecurity Act (Regulation (EU) 2019/881) [22] | ICT certification | Voluntary cybersecurity certification framework; supports NIS2 and CRA compliance | In force since 2019
2021 | Electricity Directive (Directive (EU) 2019/944) [46] | Energy systems | Smart metering obligations; access to energy consumption data; support for AI-based grid optimization | In force since 2021
2023 | Data Governance Act (Regulation (EU) 2022/868) [54] | Data reuse and sharing | Enables public sector data sharing; establishes trusted intermediaries; supports data altruism and common data spaces | In force since 2023
2023 | ITS Directive (Directive (EU) 2023/2661) [41] | Transport/mobility systems | Standardization of traffic and transport data; mandates data access via mobility data space | In force since 2023
2024 | AI Act (Regulation (EU) 2024/1689) [14] | All AI systems | Risk-based classification (minimal to high risk); bans on unacceptable uses; transparency and conformity obligations for high-risk systems | In force (2024), phased application through 2026
2024 | Data Act (Regulation (EU) 2023/2854) [55] | IoT and connected device data | User access rights to machine-generated data; mandates B2G data sharing; prohibits unfair data contracts; promotes cloud switching | Enacted 2024, applicable from mid-2025
2024 | NIS2 Directive (Directive (EU) 2022/2555) [19] | Cybersecurity for essential services | Cybersecurity risk management; reporting obligations for digital infrastructure and city utilities | Transposition deadline: 17 October 2024
2024 | Cyber Resilience Act (Regulation (EU) 2024/2847) [23] | Security of digital products | Mandatory CE marking; lifecycle security requirements for AI software/hardware | Adopted 2024, fully applicable from 2027
2024 | Product Liability Directive (Directive (EU) 2024/2853) [51] | AI-related harm and defects | Extends strict liability to software and AI; clarifies municipal/public sector responsibilities | Adopted 2024, applicable from 2026
2024 | Network Code on Cybersecurity (Regulation (EU) 2024/1366) [20] | Cross-border electricity grid | Sets cybersecurity standards for power grid operations using AI | Adopted March 2024
2024 | Urban Wastewater Treatment Directive (Directive (EU) 2024/3019) [49] | Environmental monitoring | Indirectly relevant; AI systems for water monitoring must meet reporting and data quality standards | Adopted 2024
Table 2. Comparative indicators of AI regulation and smart city development in the European Union, United States, and China (as of 2025).
Indicator | European Union (EU) | United States (US) | China
Number of AI-related regulations (binding instruments) | ~12 (e.g., AI Act, GDPR, Data Act, NIS2) | None comprehensive; sectoral and state-level rules | ~6 major AI-specific regulations (e.g., PIPL, algorithm law)
Public investment in AI and smart cities (2021–2025) | ~EUR 15 billion (Horizon Europe, Digital Europe) | >USD 10 billion (federal + state programs) | >CNY 200 billion (~EUR 25 billion) via central and provincial plans
Annual AI patent filings (latest available year) | ~6000 (via EPO) | ~15,000 (USPTO) | >30,000 (via CNIPA, leading globally)
Number of smart cities or pilot programs | ~100 (e.g., Living-in.EU, EIP-SCC) | ~150 (local-level, fragmented initiatives) | >500 cities in national pilot programs
National AI strategy publication year | 2018 | 2019 | 2017
Facial recognition use in public spaces | Real-time remote biometric identification effectively prohibited (AI Act Article 5) | Permitted in some jurisdictions, banned in others | Widely deployed, including for security and payment
Table 3. Comparative governance matrix for AI in smart cities across the EU, the US, and China.
Dimension | European Union (EU) | United States (US) | China
Data Governance | GDPR enforces strict privacy, purpose limitation, and data minimization; data altruism and intermediaries promoted (Data Governance Act). | Sectoral data laws (e.g., CCPA) with limited federal protection; fragmented and state-dependent. | PIPL regulates companies; broad exemptions for state use; extensive integration of personal and public data.
AI Risk Classification | Formalized under the AI Act (2024/1689) with risk tiers: unacceptable, high, limited, minimal. | No unified risk classification; voluntary frameworks like the NIST AI RMF exist. | Sectoral laws address risk implicitly; state-led oversight defines acceptable practices (e.g., algorithm law).
Public Oversight | Independent enforcement bodies (DPAs, AI offices); formal mechanisms for public complaints and audits. | Mixed: FTC has limited authority; oversight varies by state and sector; increasing congressional interest. | Centralized supervision by state agencies; algorithm registration and pre-deployment vetting mandated.
Surveillance Norms | Real-time biometric surveillance banned (AI Act Article 5); strong rights protections limit mass surveillance. | Allowed in some cities; restrictions depend on local legislation; often contested in courts. | Pervasive facial recognition and behavioral tracking common; public surveillance integrated into governance.
Public Sector Innovation Policies | Regulatory sandboxes, Digital Europe funding, and procurement standards support ethical AI deployment. | Innovation-first approach; pilots widespread; limited ex ante constraints; self-regulation prevalent. | Strong state funding and deployment mandates; AI integrated into national development and security goals.