Review

A Review of Data Models and Frameworks in Urban Environments in the Context of AI

by
H. Patricia McKenna
AmbientEase, Victoria, BC V8V 4Y9, Canada
Urban Sci. 2025, 9(7), 239; https://doi.org/10.3390/urbansci9070239
Submission received: 6 May 2025 / Revised: 10 June 2025 / Accepted: 23 June 2025 / Published: 25 June 2025

Abstract

This paper provides a comprehensive review and analysis of the research and practice literature on data models and frameworks in urban and other AI-rich environments, extending to the planetary environment. Elements of focus include the very definition, along with the nature and stability, of the concept of AI itself; consideration of the notion of “open” in an AI context; data sharing, exchange, access, control, and use; and associated challenges and opportunities. Current gaps and problems in the literature on these data models are identified, giving rise to opportunities for research and practice going forward. One of the key gaps associated with AI models and frameworks lies in meeting the needs of the public, with the current top-down approach to AI design, development, and use emerging as a key problem. Such gaps set the stage for a number of recommendations, including human–AI collaboration; extending understanding of human–AI interactions; risk mitigation associated with artificial superintelligence and agentic approaches; and rethinking current AI models and the very definition of AI. This review paper is significant in that it integrates a SWOT (strengths, weaknesses, opportunities, threats) analysis to synthesize challenges, opportunities, gaps, and problems, offering a roadmap for human–AI interactions and collaborations in urban development.

1. Introduction

The design of digital cities and communities was introduced by Yoo et al. (2010) [1] as a socio-technical innovation with implications for infrastructure, services, applications, urban spaces, and interactions. Building on these digital capabilities, Nam and Pardo (2011) [2] provided a conceptual framework for a smart city involving humans, technology, and institutional dimensions. Conceptual relatives of the smart city were identified by Nam and Pardo (2011) [2] as intelligent cities, learning cities, and creative cities, to name a few. While acknowledging the success potential of top-down or bottom-up approaches to smart city development, Nam and Pardo (2011) [2] emphasized that “active involvement from every sector of the community is essential” in generating a “synergy” enabling “involved, informed and trained critical mass necessary for transformation.” An analysis of use cases of big data in cities around the world is provided by Lim et al. (2018) [3], because “the knowledge and framework for data use for smart cities (SCs)”, they argue, “remain relatively unknown.” From their analysis, Lim et al. (2018) [3] provide a categorization of findings from four reference models, while identifying six challenges contributing to a framework for data use for smart cities, which are said to be important for “urban planning and policy development in the modern data-rich economy.” Curry et al. (2022) [4] articulate the notion of “data spaces” in an era of data-rich environments, both organizational and personal, in a European context. For Curry et al. (2022) [4], data spaces enable the formulation of a framework in support of data ecosystems and the sharing of data. Ullah et al. (2023) [5] explore smart cities in terms of the Internet of Things (IoT) and machine learning (ML), contributing to the notion of data-centric smart environments, while Batty (2023) [6] traces the movement and evolution of AI toward the notion of urban AI. Luusua et al. (2023) [7] explore the role of AI in urban contexts in articulating a concept of urban AI involving “the built environment, infrastructure, places, people, and their practices.” With the rapid emergence of artificial intelligence (AI), Suchman (2023) [8] questions the very nature and stability of the concept of AI. Suchman (2023) [8] highlights the importance of problematizing AI and its seeming inevitability that is said to be “more symptomatic of the problems of late capitalism than promising of solutions to address them.” Dhar (2024) [9] points to the evolving nature of AI, specifically the shifting paradigms of AI, highlighting the “disconcertingly unaddressed” issues of trust and alignment. Widder et al. (2024) [10] explore the notion of “open” AI, identifying the need for “a wider scope for AI development” along with a “greater diversity of methods” and “support for technologies that more meaningfully attend to the needs of the public” as opposed to those of “commercial interests.” Addressing data models in the context of planetary computing involving geography and AI (GeoAI), Böhlen (2025) [11] notes that “the model cannot be scrutinized by all and often remains opaque to inquiry.” According to Böhlen (2025) [11], data models “reveal their peculiarities and limitations only in operation, when faced with new data and unexpected circumstances.” McKenna (2025) [12] emphasizes the need for improved awareness and the involvement of people and data interactions in AI-rich environments through mapping of data models, while contributing to the construction of a case for rethinking and evolving current data models and frameworks in an era of AI. West and Aydin (2025) [13] refer to the AI alignment paradox with “mainstream AI alignment research”, wherein “[t]he better we align AI models with our values, the easier we make it for adversaries to misalign the models”, as in, “more virtuous AI may be more easily made vicious.”
As such, this review of data models and frameworks for AI-rich environments is important in that a range of perspectives are brought together to foster an understanding of their complexities, risks, and potentials. The purpose of this review paper is to look more closely at the models and frameworks underlying AI-rich environments in the midst of the rush to develop, market, adopt, and embrace rapidly evolving and much-hyped AI technologies. The significance of this work is that it provides a space for debate, discussion, and possibly the expansion and extension of our thinking about human–AI data models and frameworks, offering a synthesis and a roadmap for urban development.
The explorations in this review paper give rise to the following research question:
RQ: What is the nature of the need for extending and/or enriching data models and frameworks in urban environments in the era of AI?
In summary, the main aim of this work is to contribute to literacies pertaining to data models and frameworks in the era of AI. The principal conclusions of this paper pertain to the need for risk mitigation for data models and frameworks, together with the need for rethinking and evolving AI data models in the rush toward artificial superintelligence.
Definitions for key terms used in this paper are provided in Section 1.1, based on the research and practice literature, followed by an overview of the objectives of this work in Section 1.2.

1.1. Definitions

Artificial Intelligence (AI). Mollick (2024) [14] refers to AI as “a General Purpose Technology” (GPT) comprising “once-in-a-generation technologies” said to “touch every industry and every aspect of life”, and states that “in some ways, generative AI might even be bigger”, as in, “a new thing in the world, a co-intelligence.”
Frameworks. Partelow (2023) [15] explores the meaning, purpose, use, and value of frameworks, finding that “using a specific framework helps in part to position the work of a researcher in a field and its related concepts, theories, and paradigms”, as in, a framework serves as a “positioning tool.” In terms of their role, frameworks emerge “as bridging tools that enable connections between levels of knowledge”, according to Partelow (2023) [15], while their value is associated with their contestability, enabling them to “motivate engagement” and stimulate debate in support of “communication and synthesis.” Partelow (2023) [15] claims that “if they were more detailed”, frameworks “would be models.”
Models. Smaldino (2023) [16] describes a model as “any physical or abstract structure that can potentially represent a real-world phenomenon.”

1.2. Objectives

The key objectives of this paper are as follows: (a) to provide a comprehensive review of the research and practice literature pertaining to models and frameworks for data in the era of AI; (b) to identify key challenges and opportunities emerging from the research and practice literature pertaining to human–AI data models and frameworks; (c) to identify gaps or problems in the research literature pertaining to models and frameworks for data in the era of AI; and (d) to provide recommendations for future research and practice opportunities pertaining to human–AI data models and frameworks.

1.3. Methodology

The approach used in preparing this review paper consisted of a research strategy to identify practice or research work pertaining to models or frameworks in data-rich urban environments and regions, and beyond. As such, this work encompasses the emerging trajectory for data models and frameworks from digital technologies (Yoo et al., 2010) [1], to smart cities (Nam and Pardo, 2011) [2], to urban AI (Luusua et al., 2023) [7].

2. Literature Review

A review of the research and practice literature is provided in this section, focusing on data models in urban environments and beyond in the context of AI (Section 2.1), and data frameworks in urban environments and beyond in the context of AI (Section 2.2).

2.1. Data Models in Urban Environments in the Context of AI

The four reference models identified by Lim et al. (2018) [3] for urban data-use cases (extending to AI) include (a) local network development, (b) local information diffusion, (c) preventive local administration, and (d) local operations management. According to Lim et al. (2018) [3], reference models (a) and (b) pertain to citizens and visitors, and (c) and (d) pertain to local government and companies. Wright and Davidson (2020) [17] explore the relationships between data models and digital twins, claiming the latter “to be associated with an object that actually exists” and that “a digital twin without a physical twin is a model.” According to Wright and Davidson (2020) [17], the use of the digital twin (DT) notion is particularly helpful “when an object is changing over time.” Given the dynamic nature of cities as high- and low-frequency (Batty, 2018) [18], as in, changing quickly or slowly over time, respectively, and given the rapidly evolving nature of AI capabilities (Yue et al., 2025) [19], we begin to see how digital twins perhaps blur the boundaries between models and frameworks. Yet, it is helpful to recall the distinction between models and frameworks in the Definitions Section 1.1 of this paper, where, according to Partelow (2023) [15], “if they were more detailed”, frameworks “would be models.” As such, frameworks are said to serve as positioning and bridging tools in support of communication, contestation, engagement, and synthesizing, while models, with their evolving detail, may, in the case of digital twins, result in “moving a digital model closer and closer to the real thing” (Batty, 2018) [18]. Le et al. (2022) [20] employ foundation models in the rethinking of data-driven networking, while identifying challenges (such as network settings) and opportunities (such as analysis and management). With regard to the European Union (EU), Klímek et al. (2023) [21] articulate the notion of model-driven data exchange, as well as multi-modal data management, in data spaces using the example of Atlas, an “extensible toolset integrating techniques and approaches.”
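To make the model/twin distinction concrete, the coupling of a digital twin to a changing physical object can be sketched in a few lines of code. This sketch is purely illustrative and is not drawn from any of the works reviewed; the class, asset identifier, and sensor fields are hypothetical.

```python
# Illustrative sketch only: a minimal digital twin that stays coupled to a
# physical counterpart by ingesting timestamped sensor readings, so that
# its state tracks an object changing over time. All names are hypothetical.

class DigitalTwin:
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}     # latest known sensor values
        self.history = []   # (timestamp, reading) log for replay and analysis

    def ingest(self, timestamp, reading):
        # Merge a new sensor reading into the twin's current state.
        self.state.update(reading)
        self.history.append((timestamp, dict(reading)))

    def snapshot_at(self, timestamp):
        # Reconstruct state as of a past moment: the time dimension is what
        # distinguishes a twin of a changing object from a static model.
        state = {}
        for ts, reading in self.history:
            if ts <= timestamp:
                state.update(reading)
        return state

bridge = DigitalTwin("bridge-42")
bridge.ingest(1, {"strain": 0.12, "temp_c": 18.0})
bridge.ingest(2, {"strain": 0.19})
```

Cut off the ingest stream and the object above is, in Wright and Davidson's terms, simply a model; the timestamped history is what lets the twin track an object that is "changing over time."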
Alsamhi et al. (2024) [22] advance models for decentralized data-sharing (DDS), such as federated learning (FL), decentralized file systems (DFS), and semantic web (SW), to name a few. According to Alsamhi et al. (2024) [22], based on survey findings, such technologies “are empowered by DDS within the Dataspace 4.0 paradigm” and, in combination, “open the door to improved security, cooperation, and creativity in data exchange and management.” In the case, for example, of urban planning and design in smart cities, Peldon et al. (2024) [23] acknowledge issues with DT implementation associated with data management complexities, interoperability, and cybersecurity, along with the need for robustness and for being “socially attuned.” Kuilman et al. (2024) [24] consider the notion of value alignment in AI models, in relation to contextual and other types of relevancy, by advancing the need for contestability “throughout the lifecycle of an AI system” in addressing design oversights and the like. In this way, individuals are meaningfully involved, according to Kuilman et al. (2024) [24], “such that they can play an active part in the use of such a system.”
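The privacy property underlying the decentralized data-sharing techniques surveyed by Alsamhi et al. (2024) [22] can be illustrated with a minimal sketch of federated averaging, in which each party trains locally and shares only model parameters, never its raw data. The sketch is illustrative only: the one-parameter linear model, learning rate, and toy datasets are assumptions, not drawn from the surveyed work.

```python
# Illustrative sketch of federated averaging: clients fit a toy
# one-parameter linear model (y = w * x) on private data and exchange
# only the parameter list with the server.

def local_update(weights, data, lr=0.05):
    # One gradient step on the client's private (x, y) pairs,
    # minimizing mean squared error.
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def federated_average(client_weights):
    # Server aggregates by simple parameter averaging.
    n = len(client_weights)
    return [sum(ws[i] for ws in client_weights) / n
            for i in range(len(client_weights[0]))]

# Two clients with private datasets drawn from roughly y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.1), (4.0, 7.9)]]
global_w = [0.0]
for _ in range(50):
    updates = [local_update(global_w, data) for data in clients]
    global_w = federated_average(updates)
```

Because only the parameter lists cross the client boundary, the raw (x, y) pairs remain local, which is the data-control property motivating DDS approaches.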
In exploring claims of openness in AI, Widder et al. (2024) [10] consider components of AI, in terms of the materials involved in the creation and use of large AI systems, to include computational power, data, frameworks, labor, and models. Key affordances of open AI systems are said to be extensibility, reusability, and transparency, while “maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models” (Widder et al., 2024) [10]. Widder et al. (2024) [10] conclude that “whether AI will be open or closed serves to distract from the overwhelmingly opaque nature of most corporate AI systems, both open and closed”, with the effect of “drawing valuable energy and initiative away from questions on the implications of AI in practice.” Instead, Widder et al. (2024) [10] encourage the “creation of meaningful alternatives to the present AI model”, but not “through the pursuit of open AI development alone”, affording, as it does, “data transparency and documentation”, which “are valuable for accountability”, while “maximally open AI projects helpfully illustrate the limits of what is possible.” Argota Sánchez-Vaquerizo (2025) [25] explores urban digital twins (UDTs) and the notion of metaverses, as in “a 3D extension of the Internet”, for exploring alternative scenarios involving participatory governance and human-centric approaches in future cities, while being attentive to the need for mitigating risks such as socially divisive urban experiences. More broadly, from a planetary computing perspective, Böhlen (2025) [11] explores GeoAI data models, finding them to be both inscrutable and opaque. Hou et al. (2025) [26] speak of the notion of next-generation urban sensing enabled through large language models (LLMs). Said to be transformative in terms of impact, Hou et al. (2025) [26] focus on human–AI knowledge transfer, urban mechanisms awareness, and automated decision-making with AI agents, with the aim of achieving “more intelligent, responsible, and sustainable urban development.”
Table 1 provides an overview, organized by author and year, of perspectives on data models in urban environments and beyond in the context of AI from 2018 to 2025.
In summary, Figure 1 provides a depiction of perspectives on data models in urban environments and beyond in the context of AI, focusing on decentralized data-sharing, digital twins, Geo-AI, mapping, model-driven data exchange, reference models, relevancy, and shifting paradigms.
Closely related to data models are data frameworks, as reviewed in Section 2.2.

2.2. Data Frameworks in Urban Environments in the Context of AI

Lane et al. (2014) [27] incorporate conceptual, practical, and statistical frameworks in their edited work in support of engagement with a focus on privacy, big data, and the public good. Cabrera-Barona and Merschdorf (2018) [28] introduce an urban quality space–place framework drawing on geographic information systems, place and social interactions, and sense of place in relation to quality of life. Cabrera-Barona and Merschdorf (2018) [28] argue that their framework accommodates bottom-up practices involving citizens in everyday life. Lim et al. (2018) [3] contribute to the formulation of data-use frameworks for smart cities encompassing “reference models, challenges, and considerations”, while emphasizing the need, going forward, for review papers to integrate the literature in the field. Arribas-Bel et al. (2021) [29] advance the notion of open data products (ODPs) in support of a framework for “creating valuable analysis-ready data.” In distinguishing between ODPs and “purely Open Data”, Arribas-Bel et al. (2021) [29] claim the “key difference” to be “the value added, which widens accessibility and use of data that would otherwise be expensive or inaccessible.” For example, Arribas-Bel et al. (2021) [29] indicate that “[c]omponents of an ODP might include sophisticated data analysis to transform input data, digital infrastructure to host generated datasets, and dashboards, interactive web mapping sites or academic papers documenting the process.” Curry et al. (2022) [4] develop a framework for sharing data in the context of data ecosystems and AI. Liu et al. (2022) [30] provide a cyberspace perspective through a conceptual framework for geographic information science (GIScience) involving geospatial big data, where cyber human activities are said to require integration with Geo-AI models. Sharma et al. (2023) [31] provide a critical evaluation of AI studies across a variety of sectors in order to create a theoretical framework to guide academics and practitioners and “define future research trends” based on an educational use case involving “Chatbots’ potential as student mentors.”
Alsamhi et al. (2024) [22] formulate a “novel framework” for the integration of decentralized data-sharing (DDS) technologies such as FL, DFS, and SW, which provides support for privacy, security, and interoperability, among other benefits. Sargiotis (2024) [32] addresses the evolving and dynamic nature of data governance at the organizational level, highlighting adaptability, best practices, and attention to ethics in AI-rich environments. Bengio et al. (2025) [33] are concerned with “unchecked AI agency” risks ranging from “public safety and security” to “potentially irreversible loss of human control”, and, in response, they propose use of the Scientist AI framework that is said to be “trustworthy and safe by design.” Such a framework is proposed to address “human-like agency in AI systems” such as artificial general intelligence (AGI) (Bengio et al., 2025) [33]. Bengio et al. (2025) [33] describe AGI as “anticipated future systems with intelligence comparable to humans or superior to humans”, as in artificial superintelligence (ASI), where use of the Scientist AI framework provides guidance on “designing for understanding rather than pursuing goals” and “making inferences based on that understanding.” From an urban informatics perspective, Yue et al. (2025) [19] propose a human–AI symbiosis framework for urban sustainable development characterized by co-creation, collaboration, and partnership in addressing complex urban issues across sectors. Stephanidis et al. (2025) [34] speak of the notion of putting “society in the loop”, citing the work of Starke et al. (2022) [35], in an effort to consider human capabilities (Nussbaum, 2007) [36], which supports a framework for assessing health, well-being, and human flourishing more generally in the context of AI.
Table 2 provides an overview, organized by author and year, of perspectives on data frameworks in urban environments and beyond in the context of AI from 2014 to 2025.
In summary, Figure 2 provides a visual rendering of perspectives on data frameworks in urban environments in the context of AI, focusing on data ecosystems, data use, data governance, decentralized data-sharing technologies, engagement, open data products, rethinking of models and frameworks, and Scientist AI.
As shown in Table 2 and Figure 2 in Section 2.2, a wide variety of frameworks emerge from the research literature specific to, or applicable to, urban environments. These frameworks pertain to engagement, data use in smart cities, data governance, data sharing, understanding and mitigating risks, and much more. As per the definition in Section 1.1, frameworks provide opportunities for engagement, contestability, debate, communication, positioning, and so on, and it is hoped that this review makes these activities possible for urban environments and beyond.
Based on this review of the research and practice literature, what follows is an overview of the challenges and opportunities for data models and frameworks in the context of AI in urban environments and beyond.

3. Challenges and Opportunities for Data Models and Frameworks in Urban Environments in the Context of AI

Based on use cases and confirmed by the research literature, Lim et al. (2018) [3] identify six challenges associated with the transformation of data into information in smart cities: managing data quality; integrating different data; addressing privacy issues; understanding the needs of people, i.e., citizens, employees, and visitors; enhancing geographic information delivery methods; and designing smart-city services. Dhar (2024) [9] identifies law and trust as key challenges for the current AI paradigm, where existing laws do not yet accommodate emergent and evolving developments. Further, knowledge representation is said to be opaque, according to Dhar (2024) [9], which “hinders human understanding” in a world where “explanation and transparency” matter, rendering trust and alignment “disconcertingly unaddressed.” In the interests of transparency, accountability, and more trustworthy systems, Patidar et al. (2024) [37] provide a comprehensive exploration of explainable artificial intelligence (XAI) methods and practices. XAI is described as “a set of techniques and methodologies aimed at making the decision-making process of AI and ML models understandable and transparent to humans” (Patidar et al., 2024) [37]. Patidar et al. (2024) [37] describe XAI as “a critical paradigm shift” in favor of transparency and accountability, where “enabling users to understand the rationale behind AI predictions and recommendations” contributes to greater safety, fairness, and trust. Kuilman et al. (2024) [24] identify the challenge of relevance in AI models in relation to alignment, context, and contestability, possibly opening up opportunities to involve people more actively throughout the system lifecycle. Kumar et al. (2024) [38] explore challenges and opportunities for AI systems in moving from a model-centric to a data-centric approach, concluding that the two are complementary and, as such, a model-data-centric AI approach is advanced in support of data quality and the dynamic nature of data. Shumailov et al. (2024) [39] address the challenge of model collapse, where “indiscriminately learning from data produced from other models” gives way to “a degenerative process whereby, over time, models forget the true underlying data distribution.” As such, Shumailov et al. (2024) [39] stress “the value of data collected about genuine human interactions with systems” and the importance of data provenance so as “to distinguish data generated by LLMs” and remain alert to the potential for “catastrophic forgetting” and “data poisoning.” Widder et al. (2024) [10], in highlighting the challenge of the opaque nature of corporate AI systems, encourage movement beyond distractions with openness and towards opportunities for developing more meaningful alternatives to existing AI models.
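The model-collapse dynamic described by Shumailov et al. (2024) [39] can be observed in a toy simulation, offered here purely for illustration: the "model" is simply a normal distribution fitted to data, and each generation is trained only on tail-truncated samples from its predecessor, so the fitted spread contracts and the true distribution is progressively forgotten. The truncation step, sample sizes, and function names are assumptions made for the sketch, not details of the cited study.

```python
import random

def fit(samples):
    # "Train" the model: fit a normal distribution (mean, sd) to the data.
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean, var ** 0.5

def generate(mean, sd, n, rng):
    # "Generate" synthetic data from the current model.
    return [rng.gauss(mean, sd) for _ in range(n)]

def truncated(samples, keep=0.9):
    # Mimic truncated sampling (low-probability tails are cut off),
    # one kind of approximation error that drives collapse.
    s = sorted(samples)
    cut = int(len(s) * (1 - keep) / 2)
    return s[cut:len(s) - cut]

rng = random.Random(0)
mean, sd = fit(generate(0.0, 1.0, 500, rng))  # fit on real data once
initial_sd = sd
for _ in range(20):
    # Each generation is trained only on its predecessor's output.
    mean, sd = fit(truncated(generate(mean, sd, 500, rng)))
```

After twenty generations the fitted standard deviation has contracted to a small fraction of its initial value, a simplified analogue of models coming to "forget the true underlying data distribution."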
Responding to challenges pertaining to trust and inscrutability issues with large language models (LLMs), Bateson et al. (2025) [40] present an early-stage method for enabling more transparency regarding the underlying workings of deep learning models in contributing to the field of “mechanistic interpretability”, while acknowledging many limitations, from incomplete explanations of model computation, to “the role of inactive features,” and to graph complexity, to name a few. Bateson et al. (2025) [41] explore the internal workings of Anthropic’s Claude 3.5 Haiku AI model in a range of contexts, aided by use of their circuit-tracing methodology. Bateson et al. (2025) [41] describe the challenge of understanding the black-box nature of AI models as akin to learning about the biology of an LLM, and, as such, seek to “reverse engineer how these models work on the inside” in order to “better understand them and assess their fitness for purpose.”
Böhlen (2025) [11] highlights the challenges of GeoAI models associated with their limitations and peculiarities in action. Alber et al. (2025) [42] address the challenge of data poisoning in the healthcare sector associated with the adoption of LLMs and the potential “to spread false medical knowledge.” As such, Alber et al. (2025) [42] shed light on “emergent risks from LLMs trained indiscriminately on web-scraped data, particularly in healthcare”, arguing that “misinformation can potentially compromise patient safety.”
As if in a plea for a new AI model, Rothman (2025) [43] articulates the need for voices outside the AI industry to assist in shaping the future of such technologies, posing many questions, such as “[w]ould most people—people who are not computer scientists, and who have not devoted their lives to the creation of A.I.—think that they might find their life’s meaning through talking to one?” Rothman (2025) [43] challenges the reader with many other questions, ranging from “what we want from A.I.”, “what we don’t want”, and “[w]hat do we value in people, and in society?”, to the determinants of the success or failure of A.I. and whether “the value of human minds and human freedom” will be undermined, while arguing for the “need to debate and assert a new set of human values.”
Anderson and Rainie (2025) [44] explore the notion of “being human” in relation to emerging AI developments through the voices of “global technology experts” from over 300 respondents in many countries, finding that “the likely magnitude of change in humans’ native capabilities and behaviors as they adapt to artificial intelligence (AI) will be ‘deep and meaningful’ or even ‘dramatic’ over the next decade.” For example, Juan Ortiz Freuler claims that “[t]he growing integration of predictive models into everyday life is challenging three core concepts of our social structure: identity, autonomy, and responsibility”, such that “[t]he individual, with all the complexity of lived experience, becomes increasingly irrelevant in the face of these algorithms” (Anderson and Rainie, 2025) [44]. And yet, in terms of potential and opportunity, Keram Malicki-Sanchez (Anderson and Rainie, 2025) [44] maintains that “LLMs can be programmed to reveal uncharted territory if we are well-versed in interacting with them effectively to harness that potential.” LLMs, according to Keram Malicki-Sanchez, “do not preclude the teaching of curiosity and fundamentals” such that “[i]nteraction with these tools—for that is what they are—can engender new energy within humans toward the exploration and iterative development of new ideas” whereby “[t]he offshoot side effect of ‘creativity’ inspired by working with AI models can increase our appreciation for the distinct beauty and value of naturally-derived human output” (Anderson and Rainie, 2025) [44].
Among the challenges identified by Hou et al. (2025) [26] for urban sensing in an era of AI are cultural adaptability, privacy, space–time cognition, and the utilization of multi-modal data. Liu et al. (2025) [45] address the “intricate, multifaceted challenges” of foundational agents based on “a modular and brain-inspired AI agent framework” by focusing on their modular foundation, self-enhancement and adaptive capabilities, and collaborative and evolutionary multi-agent systems. For Liu et al. (2025) [45], foundational agents give rise to “the critical imperative of building safe, secure, and beneficial AI systems” while being attentive to “security threats, ethical alignment, robustness, and practical mitigation strategies necessary for trustworthy real-world deployment.”
The challenges and opportunities for data models and frameworks in urban environments and beyond in the context of AI are presented in Table 3, organized by author and year for the timeframe of 2018 to 2025.
In summary, Figure 3 provides a visual rendering of challenges and opportunities pertaining to data models and frameworks in urban environments in the context of AI. Key challenges emerging from this review include alignment, data integration, privacy, quality, and trust, to name a few.
Key opportunities emerging from this review include the analysis and delivery of data, understanding the needs of people, stability, and trust, to name a few. Examples and elements associated with each are also included outside the diagram. Indeed, there seems to be an interaction between the two, where challenges give rise to the potential for opportunities. Probing further, the literature is explored for gaps in Section 3.1 and for problems in Section 3.2.

3.1. Gaps: Data Models and Frameworks in Urban Environments in the Context of AI

Some gaps requiring further exploration in regard to data models and frameworks in the era of AI are outlined below.
Dorostkar and Ziari (2025) [46] identify the need “to bridge the gap between tradition and modernity” in smart cities and in the research literature, and present a culturally sustainable urban planning framework involving the integration of smart technologies, where it is said that IoT “sensors, AI algorithms, and green infrastructure can be harmonized with feng shui principles to enhance urban efficiency and sustainability.” Gomez et al. (2025) [47] explore the nature of gaps in human–AI collaboration through a systematic review of the literature, noting the lack of “a common vocabulary for human–AI interaction protocols.” In response, Gomez et al. (2025) [47] advance a taxonomy of interaction patterns in AI-assisted decision-making as “a tool to understand interactivity with AI” in support of “designs for achieving clear communication, trustworthiness, and collaboration.” Such interaction is perhaps encompassed in the promise of what Yoo et al. (2010) [1] describe as the design of digital cities. McKenna (2025) [12] calls for the rethinking and evolving of data models and frameworks in the era of AI, offering a conceptual framework for data in the context of human–AI interactions and urban environments which features awareness of data generation, uses, and experiences, focusing on the elements of donation, ownership alternatives, sharing, and privacy. A gap in meaningfully addressing the needs of the public is identified by Widder et al. (2024) [10], while Gartrell et al. (2025) [48] introduce the Thinking Machines Lab in response to what is said to be a gap in understanding among the scientific community of frontier AI systems, given rapid advances in capabilities. Accordingly, the purpose of the Thinking Machines Lab is to “make AI systems more widely understood, customizable, and generally capable” while “building models at the frontier of capabilities” (Gartrell et al., 2025) [48].
Silver and Sutton (2025) [49] speak in terms of “streams of experience” enabling “experiential learning” for AI agents in their environments to move beyond the limits or gaps of human-centric AI systems.

3.2. Problems: Data Models and Frameworks in Urban Environments in the Context of AI

Some problems facing data models and frameworks in the era of AI, which require further exploration, are outlined below.
Russell (2022) [50] points to open research problems associated with AI, including the control problem and the question of the consequences of successful AI development, giving rise to the need for a new model for AI and for a new definition of AI. Suchman (2023) [8] identifies the need for the problematization of AI in terms of the stability of the concept, the nature of the problems that AI technologies are designed to remedy, and the planetary implications of the algorithmic intensification that characterizes the AI race. Dhar (2024) [9] points to alignment and trust (e.g., instances of seemingly credible but incorrect knowledge) as unaddressed problems associated with the evolution and shifting of the AI paradigm.
Among other potential problems is the choice between top-down and bottom-up approaches to AI model development and deployment, as identified by Anderson and Rainie (2025) [44]. Yet, from an organizational and management perspective, Kim et al. (2014) [51] find evidence of “the complementary roles of top-down and bottom-up action plans.” In considering smart-city approaches, Hendawy and da Silva (2023) [52] describe top-down as a techno-centric approach, bottom-up as a socio-centric approach, and the in-between as a socio-technical approach. As such, the recommendation of Nam and Pardo (2011) [2] for a “synergy” of active involvement from every sector is further articulated by Hendawy and da Silva (2023) [52], who advance the notion of “hybrid smartness” as an integrated alternative that balances and interweaves the top-down and bottom-up approaches in support of equity and sustainability, thus informing “the wider planning discourse.” Concerned with the problem of “superintelligent agents” and catastrophic risks, Bengio et al. (2025) [33] instead advance a safer path in the form of a world model “that generates theories to explain data and a question-answering inference machine”, incorporating “uncertainty to mitigate the risks of overconfident predictions.” Ray (2025) [53] describes AI model development problems associated with the “restricted and static” nature of training data, emerging from recent work by Silver and Sutton (2025) [49], who advocate for movement beyond an “era of human data” to an “era of experience” for AI model learning.
Table 4 summarizes the key gaps and problems associated with AI models and frameworks, organized by author.
In the interests of being critical and constructive, the research question posed in Section 1 of this paper is reformulated here as a proposition, as follows:
P1: Data models and frameworks in the context of AI in urban environments and beyond are in need of extension and enrichment in their ability to address catastrophic risks while ensuring safe, trustworthy, and non-agentic systems.
This proposition is addressed in relation to a discussion of the review findings in Section 4.

4. Discussion

The findings from this review of the research and practice literature pertaining to AI data models and frameworks are depicted in Figure 4, organized as emergent on the one hand and as matters of design and development on the other. Challenges are listed in the upper-left box, opportunities in the lower-left box, gaps in the upper-right box, and problems in the lower-right box. Indeed, this review points to the potential for a SWOT (strengths, weaknesses, opportunities, threats) analysis (Teece, 2018) [54], the elements of which are positioned in Figure 4, enabling richer considerations and interpretations.

4.1. SWOT Analysis of Review Findings for Data Models and Frameworks in the Era of AI

When applied to the academic literature in this review paper, the SWOT analysis framework offers analytical value beyond the categorization of information by providing an overlay structure for challenges, gaps, opportunities, and problems. With this overlay, an additional layer of analysis emerges for consideration, pertaining to the relationships between challenges and weaknesses, opportunities and strengths, gaps and opportunities, and problems and threats. A concrete recommendation that this structure clarifies is that of Dorostkar and Ziari (2025) [46], where a cultural gap could be perceived as an opportunity for the integration of smart technologies with feng shui principles and traditions in support of smart, sustainable cities.

4.1.1. Challenges/Weaknesses

Among the challenges for data models and frameworks in the era of AI, which could also be interpreted as potential weaknesses, are those pertaining to analysis and delivery (Arribas-Bel et al., 2021) [29]; limitations noted when AI models are in action (Böhlen, 2025) [11]; privacy (Lane et al., 2014) [27]; data quality (Lim et al., 2018; Kumar et al., 2024) [3,38]; and relevance (Kuilman et al., 2024) [24]. Perhaps important in addressing the challenges and weaknesses of privacy and accuracy in the era of AI is the work by Ebel, Garimella, and Reagen at New York University (NYU, 2025) [55] on privacy-preserving AI models that enable secure neural network computation over sensitive information by applying “fully homomorphic encryption (FHE) to deep learning”, such that decryption is not required.
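The NYU work applies fully homomorphic encryption, which supports both addition and multiplication over ciphertexts; practical FHE schemes such as CKKS are considerably more involved. The underlying idea, computing on encrypted data without ever decrypting it, can nevertheless be illustrated with the much simpler, additively homomorphic Paillier scheme. The sketch below is a toy example with small, insecure parameters, intended only to show why a server can aggregate encrypted values while only the key holder can read the result:

```python
import random
from math import gcd


def lcm(a, b):
    return a * b // gcd(a, b)


# Toy Paillier keypair with small fixed primes (illustration only;
# real privacy-preserving AI systems use lattice-based FHE schemes).
p, q = 293, 433
n = p * q            # public modulus
n2 = n * n
g = n + 1            # standard choice of generator
lam = lcm(p - 1, q - 1)  # private key component


def L(x):
    return (x - 1) // n


mu = pow(L(pow(g, lam, n2)), -1, n)  # private key component (Python 3.8+)


def encrypt(m):
    """Encrypt integer m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2


def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n


# Homomorphic property: multiplying ciphertexts adds plaintexts,
# so encrypted values can be summed without decryption.
c1, c2 = encrypt(17), encrypt(25)
c_sum = (c1 * c2) % n2
assert decrypt(c_sum) == 17 + 25
```

A server holding only `c1` and `c2` can compute the encrypted sum `c_sum`; only the holder of the private key (`lam`, `mu`) can decrypt the result, which is the property that privacy-preserving inference schemes scale up to full neural network computation.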

4.1.2. Opportunities/Strengths

Among the opportunities for data models and frameworks in the era of AI, which could also be interpreted as potential strengths, are those pertaining to awareness (McKenna, 2025) [12], data-sharing ecosystems (Curry et al., 2022) [4], improving literacies (Bateson et al., 2025; Gartrell et al., 2025) [41,48], more diverse methods (Widder et al., 2024) [10], and a rethinking of current models (McKenna, 2025; Russell, 2022) [12,50].

4.1.3. Gaps/Opportunities

Among the gaps for data models and frameworks in an era of AI, which could also be interpreted as opportunities, are those pertaining to the integration of culture (e.g., feng shui) with technologies in urban planning, in support of cultural heritage and sustainability (Dorostkar and Ziari, 2025) [46]; a taxonomy of human–AI collaboration (Gomez et al., 2025) [47]; models that learn experientially from their streams of experience in the environment (Silver and Sutton, 2025) [49]; understanding and attending to the needs of the public (Lim et al., 2018; Widder et al., 2024) [3,10]; and understanding frontier AI capabilities, in support of model building and customizability (Gartrell et al., 2025) [48]. Addressing the top-down AI model gap with frontier AI, Gartrell et al. (2025) [48] provide opportunities through creating a space in the form of the Thinking Machines Lab, involving the following beliefs: science is better when shared; AI works for everyone, as in human–AI collaboration and the importance of adaptability and the like; solid foundations matter; and learning by doing through co-design, iteration, and the like.

4.1.4. Problems/Threats

Among the problems for data models and frameworks in the era of AI, which could also be interpreted as threats, are those pertaining to AI control (Russell, 2022; Bengio et al., 2025) [33,50]; the paradoxical nature of value alignment (West and Aydin, 2025) [13] and contexts in which alignment and trust are said to be unaddressed (Dhar, 2024) [9]; catastrophic risks (Shumailov et al., 2024; Bengio et al., 2025) [33,39]; the problematization of AI in terms of the stability of the concept (Suchman, 2023) [8] and, indeed, the need for contestability (Kuilman et al., 2024) [24]; and the top-down approach to the design, development, and deployment of AI (Anderson and Rainie, 2025) [44].
Further, it is worth noting the work of Mollick (2024) [14], who speaks of the notion of co-intelligence, claiming that AI tools such as ChatGPT 3.5 and 4 “don’t act like you expect a computer to act”, but rather, “they act more like a person”, and, as such, contribute to the realization “that you are interacting with something new, something alien, and that things are about to change”, thus simultaneously providing, it would seem, the potential for opportunities as well as threats. In a McKinsey report, drawing on the work of Hoffman and Beato (2025) [56] on superagency and transformation, Mayer et al. (2025) [57] explore the challenges, opportunities, threats, and risks of harnessing AI “to amplify human agency and unlock new levels of creativity and productivity in the workplace” while automating not just tasks, but also cognitive functions.
Hoffman and Beato (2025) [56] speak of superagency, encouraging a lens of opportunity through which to view a future in which we can “actively shape a world where human ingenuity and the power of AI combine to create something extraordinary”, while offering a “roadmap for using AI inclusively and adaptively to improve our lives and create positive change.” However, also of note is the evolution of AI models toward an “agentic approach” referred to as “streams”, enabling AI models to “learn from the experience of the environment” (Ray, 2025) [53], as advanced by Google DeepMind researchers Silver and Sutton (2025) [49]; this is said to result in “superhuman capabilities”, indicating a move beyond the “era of human data” toward an “era of experience” full of challenges, risks, and opportunities. Yet, Bengio et al. (2025) [33] call instead for non-agentic systems, advancing Scientist AI to mitigate the risks and enhance the benefits of AI while contributing to the understanding of AI, with implications for improving AI literacies. Further, the human–AI symbiosis framework proposed by Yue et al. (2025) [19], which “positions AI not as a standalone fix but as a partner in navigating urban complexity”, responds to concerns with “techno-solutionism” in the urban informatics sector and seems particularly relevant to agentic and non-agentic approaches. Indeed, the need for improving AI literacies is supported by the work of Jacobs and Munoz (2025) [58] in relation to corporations and academic institutions.
Finally, taking into consideration the state of AI in 2025, Strickland (2025) [59] refers to Stanford’s AI Index, which shows that the AI models released in 2024 emerged largely from industry as opposed to government or academia, and that the number of models released declined from 2023 to 2024, possibly as a result of the “increasing complexity of the technology and the ever-rising costs of training.” Caprotti et al. (2024) [60] address the question of why AI matters for urban studies, covering many issues pertaining to interactions with data frameworks and models in the era of AI, from spatial and surveillance issues to algorithmic governance, to name a few. As such, Caprotti et al. (2024) [60] elaborate on the need to develop research directions in urban AI in support of “just and equitable futures for all.”

4.2. Theorizing and Framework Formulation for Data in the Era of AI

The data frameworks emerging from this review of the research and practice literature will, it is hoped, aid researchers by providing positioning tools (Partelow, 2023) [15] for use in the theorizing of urban AI. Indeed, theories such as ambient theory for smart cities (ATSC), extended to environments and spaces in the form of ambient theory for smart spaces (McKenna, 2024) [61], could accommodate, or be further extended to accommodate, AI and even ASI theorizing. This is because ambient theory accommodates awareness, technologies, and people in relation to smart environments and spaces involving the adaptive, dynamic, emergent, interactive, and pervasive, placing a focus on action pertaining to planning, designing, and creating (McKenna, 2024) [61].
Based on the findings of this review, and building upon the mapping of current data models in the era of AI by McKenna (2025) [12], a new framework for understanding and evolving data models and frameworks in urban environments in the era of AI, and possibly even ASI, is proposed, as shown in Figure 5.
As shown on the left in Figure 5, the framework features the broad involvement, as recommended by Nam and Pardo (2011) [2], of communities, governments, industry, researchers, technology companies, and practitioners. Ethics, policymaking, and responsible approaches form a key part of the involvement of all of these both in data generation and in tool creation, testing, and use activities. Data analysis is an important part of learning about what emerges, aiding in working with uncertainties, and identifying areas that may be underexplored. Data value creation for all involved is a key outcome, as are the monetization and valuation of the work and data of all involved. What emerges must also be planetary-centric, taking into consideration all life forms and the unique role of planet Earth in the larger scheme of things.

4.3. Limitations and Mitigations

Possible limitations of this review pertain to the rapidly evolving and emergent nature of AI technologies, systems, and ecosystems, meaning that the literature review captures a snapshot in time of data models and frameworks. This limitation is mitigated by ongoing opportunities to update and extend the literature review.

4.4. Future Directions

Going forward, opportunities for research and practice pertaining to AI data models and frameworks that would likely respond to the need for problematizing AI (Suchman, 2023) [8] include, but are not limited to, the following points.
Firstly, opportunities arise in relation to possibilities for human–AI interactions, considering the perspective of Mollick (2024) [14] on co-intelligence, that of Gomez et al. (2025) [47] on human–AI collaboration, and that of Gartrell et al. (2025) [48] on AI that works for everyone.
Secondly, the mitigation of risks associated with artificial superintelligence (Bengio et al., 2025) [33] points to areas for research and practice, even as positive avenues are navigated for superagency (Hoffman and Beato, 2025) [56]; as an “agentic approach” is advanced where AI models learn experientially from the environment (Silver and Sutton, 2025) [49]; and as changes in being human in an era of AI are discussed (Anderson and Rainie, 2025) [44].
Thirdly, like Bengio et al. (2025) [33], Russell (2022) [50] identifies the need for a new AI model (incorporating uncertainty) that is “more robust, controllable, and deferential”, since, in the face of “real-world applications”, the existing “standard model”, which is “designed to optimize a fixed, known objective”, is rendered “increasingly untenable” due to “the difficulty of specifying objectives completely and correctly.”
And, from a practice perspective, Dorostkar and Ziari (2025) [46] emphasize the importance of AI algorithms and other technologies in the context of smart cities that enable the integration of cultural-heritage elements in support of urban development that is culturally sustainable.
As such, recommendations for research and practice going forward, in response to the challenges, opportunities, gaps, and problems emerging from this review of the research and practice literature on AI data models and frameworks, are outlined in Table 5, organized by action and focus area in order to guide community members, developers and designers, policymakers, and researchers.
The action and focus areas pertain broadly to human–AI collaborations; human–AI interactions; rethinking, developing, and exploring new AI models and the definitions of AI, ASI, and being human; and risk mitigation for AI and ASI. Given the highly interwoven nature of the recommendations emerging from this review of the literature on data models and frameworks in the era of AI, all of the items are priorities that urgently require action and focus by designers and developers of AI and ASI, researchers, policymakers, and community members.

5. Conclusions

In the era of rapidly evolving AI, this paper is significant in that it provides an overview of the research and practice literature on AI data models and frameworks, capturing a snapshot in time. In terms of implications, this paper points to the possible enrichment of perspectives on challenges, opportunities, gaps, and problems in AI data models and frameworks through the use of a SWOT analysis that takes into consideration the layering of strengths, weaknesses, opportunities, and threats. This paper makes a contribution to the urban science data space through the formulation of a conceptual framework for evolving data models and frameworks in the era of AI. Key recommendations emerging from this review pertain to research and practice opportunities for human–AI collaborations and interactions; the rethinking of current AI models and frameworks with a view to developing and exploring new AI models; a new definition of AI in the movement toward artificial superintelligence; the urgency of ASI risk mitigation; and consideration of what it means to be human in this rapidly changing space.
This work will be of interest globally to researchers; practitioners; AI developers, designers, and users; educators; governments; community members; and anyone concerned with data models and frameworks in urban environments and beyond in the era of AI.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author, H. Patricia McKenna, is employed by AmbientEase, Inc. The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
AGI: Artificial General Intelligence
ASI: Artificial Superintelligence
ATSC: Ambient Theory for Smart Cities
DDS: Decentralized Data-Sharing
DFS: Decentralized File System
DT: Digital Twin
EU: European Union
FL: Federated Learning
FHE: Fully Homomorphic Encryption
GeoAI: Geographic Artificial Intelligence
GIScience: Geographic Information Science
GPT: General Purpose Technology
IoT: Internet of Things
LLMs: Large Language Models
MAS: Multi-Agent System
MIT: Massachusetts Institute of Technology
NYU: New York University
ODPs: Open Data Products
SCs: Smart Cities
SW: Semantic Web
SWOT: Strengths, Weaknesses, Opportunities, Threats
UDTs: Urban Digital Twins
XAI: Explainable Artificial Intelligence

References

  1. Yoo, Y.; Bryant, A.; Wigand, R.T. Designing digital communities that transform urban life: Introduction to the special section on digital cities. Commun. Assoc. Inf. Syst. 2010, 27, 637–640. [Google Scholar] [CrossRef]
  2. Nam, T.; Pardo, T.A. Conceptualizing smart city with dimensions of technology, people, and institutions. In Proceedings of the 12th Annual International Digital Government Research Conference: Digital Government Innovation in Challenging Times, College Park, MD, USA, 12–15 June 2011; pp. 282–291. [Google Scholar] [CrossRef]
  3. Lim, C.; Kim, K.-J.; Maglio, P.P. Smart cities with big data: Reference models, challenges, and considerations. Cities 2018, 82, 86–99. [Google Scholar] [CrossRef]
  4. Curry, E.; Scerri, S.; Tuikka, T. Data spaces: Design, deployment, and future directions. In Data Spaces; Curry, E., Scerri, S., Tuikka, T., Eds.; Springer: Cham, Switzerland, 2022. [Google Scholar] [CrossRef]
  5. Ullah, A.; Anwar, S.M.; Li, J.; Nadeem, L.; Mahmood, T.; Rehman, A.; Saba, T. Smart cities: The role of Internet of Things and machine learning in realizing a data-centric smart environment. Complex Intell. Syst. 2023, 10, 1607–1637. [Google Scholar] [CrossRef]
  6. Batty, M. The emergence and evolution of urban AI. AI Soc. 2023, 38, 1045–1048. [Google Scholar] [CrossRef]
  7. Luusua, A.; Ylipulli, J.; Foth, M.; Aurigi, A. Urban AI: Understanding the emerging role of artificial intelligence in smart cities. AI Soc. 2023, 38, 1039–1044. [Google Scholar] [CrossRef]
  8. Suchman, L. The uncontroversial ‘thingness’ of AI. Big Data Soc. 2023, 10, 4. [Google Scholar] [CrossRef]
  9. Dhar, V. The paradigm shifts in artificial intelligence: Even as we celebrate AI as a technology that will have far-reaching benefits for humanity, trust and alignment remain disconcertingly unaddressed. Commun. ACM 2024, 67, 50–59. [Google Scholar] [CrossRef]
  10. Widder, D.G.; Whittaker, M.; West, S.M. Why ‘open’ AI systems are actually closed, and why this matters. Nature 2024, 635, 827–833. [Google Scholar] [CrossRef]
  11. Böhlen, M. On the Logics of Planetary Computing: Artificial Intelligence and Geography in the Alas Mertajati; Routledge: London, UK, 2025. [Google Scholar] [CrossRef]
  12. McKenna, H.P. Improving our awareness of data generation, use, and ownership: People and data interactions in AI-rich environments. In Distributed, Ambient and Pervasive Interactions; Streitz, N.A., Konomi, S., Eds.; HCII 2025. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2025; Volume 15802. [Google Scholar] [CrossRef]
  13. West, R.; Aydin, R. The AI alignment paradox: The better we align AI models with our values, the easier we may make it to realign them with opposing values. Commun. ACM 2025, 68, 24–26. [Google Scholar] [CrossRef]
  14. Mollick, E. Co-Intelligence: Living and Working with AI; Portfolio; Penguin Random House: Westminster, MD, USA, 2024. [Google Scholar]
  15. Partelow, S. What is a framework? Understanding their purpose, value, development and use. J. Environ. Stud. Sci. 2023, 13, 510–519. [Google Scholar] [CrossRef]
  16. Smaldino, P. What are models and why should we use them to understand social behavior? Code Horiz. Blog 2023. Available online: https://codehorizons.com/what-are-models-and-why-should-we-use-them-to-understand-social-behavior/ (accessed on 5 May 2025).
  17. Wright, L.; Davidson, S. How to tell the difference between a model and a digital twin. Adv. Model. Simul. Eng. Sci. 2020, 7, 13. [Google Scholar] [CrossRef]
  18. Batty, M. Digital twins. Environ. Plan. B Urban Anal. City Sci. 2018, 45, 817–820. [Google Scholar] [CrossRef]
  19. Yue, Y.; Yan, G.; Lan, T.; Cao, R.; Gao, Q.; Gao, W.; Huang, B.; Huang, G.; Huang, Z.; Kan, Z.; et al. Shaping future sustainable cities with AI-powered urban informatics: Toward human-AI symbiosis. Comput. Urban Sci. 2025, 5, 31. [Google Scholar] [CrossRef]
  20. Le, F.; Srivatsa, M.; Ganti, R.; Sekar, V. Rethinking data-driven networking with foundation models: Challenges and opportunities. In Proceedings of the 21st ACM Workshop on Hot Topics in Networks, Austin, TX, USA, 14–15 November 2022; Association for Computing Machinery: New York, NY, USA, 2022. [Google Scholar] [CrossRef]
  21. Klîmek, J.; Koupil, P.; Škoda, P.; Bártîk, J.; Stenchlák, S.; Nečaský, M. Atlas: A toolset for efficient model-driven data exchange in data spaces. In Proceedings of the 2023 ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C), Västerås, Sweden, 1–6 October 2023; pp. 4–8. [Google Scholar] [CrossRef]
  22. Alsamhi, S.H.; Hawbani, A.; Kumar, S.; Timilsina, M.; Al-Qatf, M.; Haque, R. Empowering dataspace 4.0: Unveiling promise of decentralized data-sharing. IEEE Access 2024, 12, 112637–112658. [Google Scholar] [CrossRef]
  23. Peldon, D.; Banihashemi, S.; LeNguyen, K.; Derrible, S. Navigating urban complexity: The transformative role of digital twins in smart city development. Sustain. Cities Soc. 2024, 111, 105583. [Google Scholar] [CrossRef]
  24. Kuilman, S.K.; Siebert, L.C.; Buijsman, S.; Jonker, C.M. How to gain control and influence algorithms: Contesting AI to find relevant reasons. AI Ethics 2024, 5, 1571–1581. [Google Scholar] [CrossRef]
  25. Argota Sánchez-Vaquerizo, J. Urban Digital Twins and metaverses towards city multiplicities: Uniting or dividing urban experiences? Ethics Inf. Technol. 2025, 27, 4. [Google Scholar] [CrossRef]
  26. Hou, C.; Zhang, F.; Li, Y.; Li, H.; Mai, G.; Kang, Y.; Yao, L.; Yu, W.; Yao, Y.; Gao, S.; et al. Urban sensing in the era of large language models. Innovation 2025, 6, 100749. [Google Scholar] [CrossRef]
  27. Lane, J.; Stodden, V.; Bender, S.; Nissenbaum, H. (Eds.) Privacy, Big Data, and the Public Good: Frameworks for Engagement; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar] [CrossRef]
  28. Cabrera-Barona, P.F.; Merschdorf, H. A Conceptual Urban Quality Space-Place Framework: Linking Geo-Information and Quality of Life. Urban Sci. 2018, 2, 73. [Google Scholar] [CrossRef]
  29. Arribas-Bel, D.; Green, M.; Rowe, F.; Singleton, A. Open data products-A framework for creating valuable analysis ready data. J. Geographical Syst. 2021, 23, 497–514. [Google Scholar] [CrossRef] [PubMed]
  30. Liu, X.; Chen, M.; Claramunt, C.; Batty, M.; Kwan, M.-P.; Senousi, A.M.; Cheng, T.; Strobl, J.; Cöltekin, A.; Wilson, J.; et al. Geographic information science in the era of geospatial big data: A cyberspace perspective. Innovation 2022, 3, 100279. [Google Scholar] [CrossRef] [PubMed]
  31. Sharma, B.M.; Verma, D.K.; Raghuwanshi, K.D.; Dubey, S.; Nair, R.; Malviya, S. Generic framework of new era artificial intelligence and its applications. In International Conference on Applied Technologies; Botto-Tobar, M., Zambrano Vizuete, M., Montes León, S., Torres-Carrión, P., Durakovic, B., Eds.; ICAT 2023. Communications in Computer and Information Science; Springer: Cham, Switzerland, 2024; Volume 2049. [Google Scholar] [CrossRef]
  32. Sargiotis, D. Conclusion: The evolving landscape of data governance. In Data Governance; Springer: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
  33. Bengio, Y.; Cohen, M.; Fornasiere, D.; Ghosn, J.; Greiner, P.; MacDermott, M.; Mindermann, S.; Oberman, A.; Richardson, J.; Richardson, O.; et al. Superintelligent agents pose catastrophic risks: Can Scientist AI offer a safer path? arXiv 2025, arXiv:2502.15657. [Google Scholar]
  34. Stephanidis, C.; Salvendy, G.; Antona, M.; Duffy, V.G.; Gao, Q.; Karwowski, W.; Konomi, S.; Nah, F.; Ntoa, S.; Rau, P.-L.P.; et al. Seven HCI grand challenges revisited: Five-year progress. Int. J. Hum.-Comput. Interact. 2025, 1–49. [Google Scholar] [CrossRef]
  35. Starke, C.; Baleis, J.; Keller, B.; Marcinkowski, F. Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature. Big Data Soc. 2022, 9, 1–16. [Google Scholar] [CrossRef]
  36. Nussbaum, M. Human rights and human capabilities. Harv. Hum. Rights J. 2007, 20, 21–24. [Google Scholar]
  37. Patidar, N.; Mishra, S.; Jain, R.; Prajapati, D.; Solanki, A.; Suthar, R.; Patel, K.; Patel, H. Transparency in AI decision making: A survey of explainable AI methods and applications. Adv. Robot. Technol. 2024, 2, 1–10. [Google Scholar] [CrossRef]
  38. Kumar, S.; Datta, S.; Singh, V.; Singh, S.K.; Sharma, R. Opportunities and challenges in data-centric AI. IEEE Access 2024, 12, 33173–33189. [Google Scholar] [CrossRef]
  39. Shumailov, I.; Shumaylov, Z.; Zhao, Y.; Papernot, N.; Anderson, R.; Gal, Y. AI models collapse when trained on recursively generated data. Nature 2024, 631, 755–759. [Google Scholar] [CrossRef]
  40. Ameisen, E.; Lindsey, J.; Pearce, A.; Gurnee, W.; Turner, N.L.; Chen, B.; Citro, C.; Abrahams, D.; Carter, S.; Hosmer, B.; et al. Circuit tracing: Revealing computational graphs in language models. Transform. Circuits Thread 2025. Available online: https://transformer-circuits.pub/2025/attribution-graphs/methods.html (accessed on 31 March 2025).
  41. Lindsey, J.; Gurnee, W.; Ameisen, E.; Chen, B.; Pearce, A.; Turner, N.L.; Citro, C.; Abrahams, D.; Carter, S.; Hosmer, B.; et al. On the biology of a large language model: We investigate the internal mechanisms used by Claude 3.5 Haiku—Anthropic’s lightweight production model—In a variety of contexts, using our circuit tracing methodology. Transform. Circuits Thread 2025. Available online: https://transformer-circuits.pub/2025/attribution-graphs/biology.html (accessed on 31 March 2025).
  42. Alber, D.A.; Yang, Z.; Alyakin, A.; Yang, E.; Rai, S.; Valliani, A.A.; Zhang, J.; Rosenbaum, G.R.; Amend-Thomas, A.K.; Kurland, D.B.; et al. Medical large language models are vulnerable to data-poisoning attacks. Nat. Med. 2025, 31, 618–626. [Google Scholar] [CrossRef] [PubMed]
  43. Rothman, J. Are We Taking A.I. Seriously Enough? There’s No Longer Any Scenario in Which A.I. Fades into Irrelevance. We Urgently Need Voices from Outside the Industry to Help Shape Its Future. The New Yorker, 1 April 2025. Available online: https://www.newyorker.com/culture/open-questions/are-we-taking-ai-seriously-enough (accessed on 1 April 2025).
  44. Anderson, J.; Rainie, L. Expert Views on the Impact of AI on the Essence of Being Human; Elon University’s Imagining the Digital Future Center: Elon, NC, USA, 2025; Available online: https://imaginingthedigitalfuture.org/wp-content/uploads/2025/03/Being-Human-in-2035-ITDF-report.pdf (accessed on 4 April 2025).
  45. Liu, B.; Li, X.; Zhang, J.; Wang, J.; He, T.; Hong, S.; Liu, H.; Zhang, S.; Song, K.; Zhu, K.; et al. Advances and challenges in foundation agents: From brain-inspired intelligence to evolutionary, collaborative, and safe systems. arXiv 2025, arXiv:2504.01990. [Google Scholar]
  46. Dorostkar, E.; Ziari, K. Integrating ancient Chinese feng shui philosophy with smart city technologies: A culturally sustainable urban planning framework for contemporary China. J. Chin. Archit. Urban. 2025, 025080018. [Google Scholar] [CrossRef]
  47. Gomez, C.; Cho, S.M.; Ke, S.; Huang, C.-M.; Unberath, M. Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review. Front. Comput. Sci. 2025, 6, 2024. [Google Scholar] [CrossRef]
  48. Gartrell, A. Thinking Machines. Thinking Machines Lab Blog. 2025. Available online: https://thinkingmachines.ai (accessed on 9 April 2025).
  49. Silver, D.; Sutton, R.S. Welcome to the era of experience. In Designing an Intelligence; MIT Press: Cambridge, MA, USA, 2025; Available online: https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf (accessed on 3 May 2025).
  50. Russell, S. If We Succeed. Daedalus 2022, 151, 43–57. [Google Scholar] [CrossRef]
  51. Kim, Y.H.; Sting, F.J.; Loch, C.H. Top-down, bottom-up, or both? Toward an integrative perspective on operations strategy formation. J. Oper. Manag. 2014, 32, 462–474. [Google Scholar] [CrossRef]
  52. Hendawy, M.; da Silva, I.F.K. Hybrid smartness: Seeking a balance between top-down and bottom-up smart city approaches. In Intelligence for Future Cities; Goodspeed, R., Sengupta, R., Kyttä, M., Pettit, C., Eds.; CUPUM 2023; The Urban Book Series; Springer: Cham, Switzerland, 2023. [Google Scholar] [CrossRef]
  53. Ray, T. AI Has Grown Beyond Human Knowledge, Says Google’s DeepMind Unit: A New Agentic Approach Called ‘Streams’ Will Let AI Models Learn from the Experience of the Environment Without Human ‘Pre-Judgment’. ZDNet, 18 April 2025. Available online: https://www.zdnet.com/article/ai-has-grown-beyond-human-knowledge-says-googles-deepmind-unit/ (accessed on 26 April 2025).
  54. Teece, D.J. SWOT analysis. In The Palgrave Encyclopedia of Strategic Management; Augier, M., Teece, D.J., Eds.; Palgrave Macmillan: London, UK, 2018. [Google Scholar] [CrossRef]
  55. NYU. Encryption Breakthrough Lays Groundwork for Privacy-Preserving AI Models: New AI Framework Enables Secure Neural Network Computation Without Sacrificing Accuracy. New York University, Tandon School of Engineering News. 2025. Available online: https://engineering.nyu.edu/news/encryption-breakthrough-lays-groundwork-privacy-preserving-ai-models (accessed on 20 April 2025).
  56. Hoffman, R.; Beato, G. Superagency: What Could Possibly Go Right with Our AI Future? Authors Equity. 2025. Available online: https://www.simonandschuster.com/books/Superagency/Reid-Hoffman/9798893310108 (accessed on 18 March 2025).
  57. Mayer, H.; Yee, L.; Chui, M.; Roberts, R. Superagency in the Workplace: Empowering People to Unlock AI’s Full Potential. McKinsey Digital, 28 January 2025. Available online: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work (accessed on 5 May 2025).
  58. Jacobs, G.; Munoz, J.M. AI and Education: Strategic Imperatives for Corporations and Academic Institutions. California Management Review Insights, 2 May 2025. Available online: https://cmr.berkeley.edu/2025/05/ai-and-education-strategic-imperatives-for-corporations-and-academic-institutions/ (accessed on 5 May 2025).
  59. Strickland, E. 12 Graphs That Explain the State of AI in 2025: Stanford's AI Index Tracks Performance, Investment, Public Opinion, and More. IEEE Spectr. 2025. Available online: https://spectrum.ieee.org/ai-index-2025 (accessed on 5 May 2025).
  60. Caprotti, F.; Cugurullo, F.; Cook, M.; Karvonen, A.; Marvin, S.; McGuirk, P.; Valdez, A.M. Why does urban Artificial Intelligence (AI) matter for urban studies? Developing research directions in urban AI research. Urban Geogr. 2024, 45, 883–894. [Google Scholar] [CrossRef]
  61. McKenna, H.P. An exploration of theory for smart spaces in everyday life: Enriching ambient theory for smart cities. In Intelligent Data-Centric Systems; Lyu, Z., Ed.; Smart Spaces; Academic Press: Cambridge, MA, USA, 2024; pp. 17–46. [Google Scholar] [CrossRef]
Figure 1. Perspectives on data models.
Figure 2. Perspectives on data frameworks.
Figure 3. Challenges and opportunities for data models and frameworks.
Figure 4. Challenges, opportunities, gaps, and problems for data models and frameworks.
Figure 5. Conceptual framework for evolving data models and frameworks in the era of AI.
Table 1. Perspectives on data models in urban environments in the context of AI [3,9,10,11,12,17,20,21,22,24,25,26].
Author | Year | Data Models
Lim et al. | 2018 | Four reference models for urban data-use cases in SCs
Wright & Davidson | 2020 | Data models and digital twins
Le et al. | 2022 | Foundation models for rethinking data-driven networking
Alsamhi et al. | 2024 | Models for decentralized data-sharing (FL, DFS, SW)
Klímek et al. | 2023 | Atlas: model-driven data exchange and multi-modal data management
Dhar | 2024 | Shifting paradigms of AI: trust and alignment as unaddressed
Kuilman et al. | 2024 | Contestability for value alignment and relevancy in AI
Widder et al. | 2024 | Creation and use of meaningful alternatives to AI models
Argota Sánchez-Vaquerizo | 2025 | Urban digital twins and metaverses
Böhlen | 2025 | Geo-AI model inscrutability and opaqueness
Hou et al. | 2025 | Urban sensing and LLMs
McKenna | 2025 | Mapping of current data models for awareness in the era of AI
Table 2. Perspectives on data frameworks in urban environments in the context of AI [3,4,12,19,22,27,28,29,30,31,32,33,34].
Author | Year | Data Frameworks
Lane et al. | 2014 | Conceptual, practical, and statistical frameworks and engagement
Cabrera-Barona & Merschdorf | 2018 | Urban quality space–place framework
Lim et al. | 2018 | Data-use frameworks for SCs
Arribas-Bel et al. | 2021 | Open data products framework, widening accessibility and use
Curry et al. | 2022 | Framework for sharing data in data ecosystems
Liu et al. | 2022 | GIScience in relation to geospatial data and cyberspace
Alsamhi et al. | 2024 | Framework for integrating decentralized data-sharing tech
Sharma et al. | 2023 | Generic framework of new era AI and applications
Sargiotis | 2024 | Data governance at the organizational level
Bengio et al. | 2025 | Scientist AI framework for understanding and mitigating risks
McKenna | 2025 | Rethinking and evolving data frameworks in the AI era
Stephanidis | 2025 | Human capabilities framework in an AI context
Yue et al. | 2025 | Human–AI symbiosis framework
Table 3. Challenges and opportunities for data models and frameworks in urban environments [3,9,10,11,13,24,26,38,39,41,42,43,44,45].
Author | Year | Data Models and Frameworks: Challenges and Opportunities
Lim et al. | 2018 | Six challenges for transforming data into information in SCs
Dhar | 2024 | Alignment, trust, and legislation
Kuilman et al. | 2024 | Relevance in relation to context, alignment, and contestability
Kumar et al. | 2024 | Moving from a model-centric to a data-centric approach
Shumailov et al. | 2024 | Model collapse and the importance of data provenance
Widder et al. | 2024 | Openness: needs of public vs. commercial interests
Alber et al. | 2025 | Data poisoning and misinformation in the healthcare sector
Anderson & Rainie | 2025 | Exploration of “being human” in a world of AI
Bateson et al. | 2025 | Method to expose behaviors of large language models
Böhlen | 2025 | Geo-AI model limitations and peculiarities in action
Hou et al. | 2025 | Urban sensing and LLMs
Liu et al. | 2025 | Foundational agents and the need for safe, secure, and beneficial AI
Rothman | 2025 | Calling for voices outside of the AI industry to shape the future
West & Aydin | 2025 | AI alignment paradox
Table 4. Emergent gaps and problems with models and frameworks in the era of AI [8,9,10,33,44,46,47,48,49,50,53].
Authors | Gaps | Problems
Anderson & Rainie | – | Top-down/bottom-up approach
Bengio et al. | – | Catastrophic risks
Dhar | – | Alignment; trust
Dorostkar & Ziari | Culturally sustainable urban planning framework | –
Gomez et al. | Human–AI collaboration | –
Ray | – | AI model development
Russell | – | AI control; research
Gartrell et al. | Understanding of frontier AI among the scientific community | –
Silver & Sutton | Models that learn from experience of the environment | –
Suchman | – | Problematization of AI
Widder et al. | Meaningfully addressing the needs of the public | –
Table 5. Recommendations for research and practice: data models and frameworks in urban spaces.
Action/Focus | Community Members | Developers | Policymakers | Researchers
Human–AI collaboration
Human–AI interactions
Rethinking AI models/definitions
Risk mitigation for AI/ASI
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

McKenna, H.P. A Review of Data Models and Frameworks in Urban Environments in the Context of AI. Urban Sci. 2025, 9, 239. https://doi.org/10.3390/urbansci9070239

