Trustworthy AI for Whom? GenAI Detection Techniques of Trust Through Decentralized Web3 Ecosystems
Abstract
1. Introduction: Trustworthy AI for Whom?
- (i) Recent advances in digital watermarking present a scalable solution for distinguishing AI-generated content from human-authored material. SynthID-Text, a watermarking algorithm discussed by Dathathri et al. [142], provides an effective way to mark AI-generated text, ensuring that content remains identifiable without compromising its quality. This watermarking framework offers a pathway for managing AI’s outputs on a massive scale, potentially curbing the spread of misinformation. However, questions of accessibility and scalability remain, particularly in jurisdictions where trust infrastructures are underdeveloped. SynthID-Text’s deployment exemplifies how watermarking can help maintain trust in AI content, yet its application primarily serves contexts where technological infrastructure supports high computational demands, leaving out communities with limited resources;
- (ii) The concept of “personhood credentials” (PHCs) provides another lens for exploring trust. According to Adler et al. [143], PHCs allow users to authenticate as real individuals rather than AI agents, introducing a novel method for countering AI-powered deception. This system, based on zero-knowledge proofs, ensures privacy by verifying individuals’ authenticity without exposing personal details. While promising, PHCs may inadvertently centralize trust among issuing authorities, which could undermine local, decentralized trust systems. Additionally, the adoption of PHCs presents ethical challenges, particularly in regions where digital access is limited, raising further questions about inclusivity in digital spaces purportedly designed to be “trustworthy”;
- (iii) In the context of decentralized governance, Poblet et al. [133] highlighted the role of blockchain-based oracles as tools for digital democracy, providing external information to support decision making within blockchain networks. Oracles serve as intermediaries between real-world events and digital contracts, enabling secure, decentralized information transfer in applications like voting and community governance. Their use in digital democracy platforms has demonstrated potential for enhancing transparency and collective decision making. Yet, this approach is not without challenges; the integration of oracles requires robust governance mechanisms to address biases and inaccuracies, especially when scaling across diverse socio-political landscapes. Thus, oracles provide valuable insights into building trustworthy systems, but their implementation remains context-dependent, raising critical questions about the universality of digital trust.
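The statistical logic behind such text watermarks can be illustrated with a toy red/green-list sketch. This is a didactic simplification, not SynthID-Text's actual algorithm: the key, vocabulary, and hash-based split below are invented for demonstration. A keyed generator biases its token choices toward "green" tokens; a detector holding the same key recomputes the green fraction, which stays near 0.5 for unwatermarked text.

```python
import hashlib

def is_green(key, prev_token, token):
    """Deterministic pseudo-random green/red split, seeded by the preceding token."""
    digest = hashlib.sha256(f"{key}:{prev_token}:{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_generate(key, vocab, length, start="<s>"):
    """Toy generator: at each step, emit a 'green' candidate whenever one exists."""
    out = [start]
    for _ in range(length):
        greens = [t for t in vocab if is_green(key, out[-1], t)]
        out.append(greens[0] if greens else vocab[0])
    return out

def green_fraction(key, tokens):
    """Detector: share of adjacent token pairs that are green; ~0.5 without the mark."""
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(key, prev, tok) for prev, tok in pairs) / len(pairs)
```

Production schemes bias the sampling distribution softly (preserving text quality) and use a significance test on the green count rather than a hard fraction, but the asymmetry is the same: only the key holder can detect the statistical signal.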
2. Methods: A Systematic EU Policy Analysis of Trustworthy AI Through the AI Act and the Draghi Report
2.1. AI Act at the Crossroads of Innovation and Responsibility
2.1.1. Risk Classification: A Unified Framework with Tailored Enforcement
2.1.2. Human Oversight: Enhancing Governance in Critical Sectors
2.1.3. Innovation Sandboxes: Bridging Compliance and Creativity
2.1.4. Sector-Specific Priorities: Aligning AI with Regional Significance
2.1.5. A Unified Vision with Localized Flexibility
2.1.6. Toward a Balanced Future?
- (i) Federated Learning (Aligned with Data Governance and Privacy): Supports privacy-preserving AI governance by enabling distributed training on sensitive data without centralizing information, ensuring compliance with the GDPR and the AI Act’s high-risk AI requirements. Example: Used in healthcare applications, allowing hospitals to collaboratively train AI models while preserving patient confidentiality;
- (ii) Blockchain-Based Provenance Tracking (Aligned with Transparency Obligations and Public Sector AI Applications): Ensures immutability of AI-generated content, enabling verifiable authenticity for AI-driven decisions, which is crucial in public services and media regulation. Example: Applied in journalism and digital identity systems to authenticate content sources and prevent AI-generated misinformation;
- (iii) Zero-Knowledge Proofs (ZKPs) (Aligned with Data Governance and Compliance): Allow verification of AI interactions without exposing sensitive data, reinforcing trust in decentralized AI systems while complying with strict data protection laws. Example: Used in identity verification protocols, ensuring that AI-driven authentication mechanisms operate transparently without privacy risks;
- (iv) Decentralized Autonomous Organizations (DAOs) for Crowdsourced Verification (Aligned with Human Oversight and AI Governance): Introduce community-driven AI auditing, ensuring democratic oversight in high-risk AI applications where centralized institutions may lack credibility or impartiality. Example: Implemented in fact-checking initiatives, where DAOs enable collective content moderation and AI accountability mechanisms;
- (v) AI-Powered Digital Watermarking (Aligned with Transparency and Misinformation Regulation): Embeds traceable markers into AI-generated content, ensuring that users are informed when interacting with AI-generated media, aligning with the AI Act’s transparency provisions. Example: Used in deepfake detection and content verification systems, particularly in elections and media trust initiatives;
- (vi) Explainable AI (XAI) (Aligned with High-Risk AI Requirements and Human Oversight): Enhances interpretability of AI decisions, ensuring accountability in high-stakes AI applications where explainability is legally mandated. Example: Adopted in finance, legal, and medical AI models to provide clear justifications for algorithmic outcomes, addressing concerns over AI opacity;
- (vii) Privacy-Preserving Machine Learning (PPML) (Aligned with Compliance and Innovation Sandboxes): Facilitates secure AI model training without compromising user privacy, enabling safe AI innovation in regulatory sandboxes while ensuring alignment with compliance standards. Example: Used in cross-border AI collaborations, particularly in fintech and digital identity management, to protect personal data while enabling AI innovation.
Conclusion: Bridging Policy and Technology for Trustworthy AI
- Operationalizing risk management and compliance measures within the AI Act’s framework;
- Providing real-world applications that ensure AI technologies align with democratic values such as transparency, accountability, and human oversight;
- Addressing the limitations of centralized AI governance by introducing decentralized, privacy-preserving, and community-driven trust mechanisms.
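Item (i) above can be made concrete with a minimal federated-averaging (FedAvg-style) sketch. The linear model, learning rate, and round count below are illustrative assumptions, not a reference implementation: each client (e.g., a hospital) takes one gradient step on its private data, and only model updates — never raw records — are aggregated.

```python
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One gradient step of linear least-squares on a client's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(client_weights, client_sizes):
    """FedAvg: size-weighted mean of client models; raw data never leaves the client."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def train_federated(clients, dim, rounds=50):
    """clients: list of (X, y) pairs held locally, one per participating institution."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_step(global_w.copy(), X, y) for X, y in clients]
        global_w = federated_average(updates, [len(y) for _, y in clients])
    return global_w
```

Real deployments add secure aggregation and differential privacy on top of this loop, since plain model updates can still leak information about the training data.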
2.2. Draghi Report
2.2.1. Trustworthiness Beyond Technological Robustness
2.2.2. Economic Competitiveness vs. Ethical Equity
2.2.3. Trustworthiness in High-Stakes Sectors
2.2.4. Toward a Participatory and Inclusive Vision
- Balancing Economic Competitiveness and Ethical Integrity—AI-powered digital watermarking and blockchain-based provenance tracking ensure transparency in high-stakes sectors like journalism and finance, mitigating risks of AI-generated misinformation and algorithmic opacity without stifling innovation;
- Ensuring Trustworthiness in High-Stakes Sectors—Federated learning, PPML, and XAI provide privacy-preserving, interpretable AI governance models, essential for healthcare, law enforcement, and energy sectors, where bias mitigation and explainability are crucial;
- Advancing Participatory and Inclusive AI Governance—DAOs and ZKPs introduce decentralized verification models, shifting AI accountability from top-down regulatory enforcement to bottom-up community-driven governance, aligning with the Draghi Report’s call for inclusive AI ecosystems.
2.3. Trustworthy AI for Whom: Approaching from Decentralized Web3 Ecosystem Perspective
2.3.1. The Challenges of Detection Techniques for Trust Through Decentralized Web3 Ecosystems
2.3.2. GenAI and Disinformation/Misinformation: A Perfect Storm?
2.3.3. Ethical AI and Accountability in Decentralized Systems
2.3.4. The Role of Blockchain in AI Content Authentication
2.3.5. Transdisciplinary Approaches to AI Governance
2.3.6. Addressing the Elephant in the Room
2.4. Justification for the Relevance and Rigor of the Methodology
2.4.1. Bridging Policy and Practice for Technological Communities
2.4.2. The AI Act as a Framework for Risk Classification and Ethical Safeguards
2.4.3. The Draghi Report as a Vision for Strategic Resilience
2.4.4. Policy Relevance in Decentralized Web3 Ecosystems
2.4.5. Advancing Detection Techniques of Trust
2.4.6. A Transdisciplinary Perspective for a Complex Problem
3. Results: Seven Detection Techniques of Trust Through Decentralized Web3 Ecosystems
- Why These Seven Techniques? Selection Criteria and Justification (Table 3)
- Regulatory Alignment—They directly address trust, transparency, and accountability challenges outlined in the AI Act and Draghi Report, ensuring compliance with risk classification, data sovereignty, and explainability mandates;
- Decentralized Suitability—Each technique is designed to function within decentralized Web3 environments, overcoming the limitations of centralized AI governance mechanisms;
- Operational Feasibility—These techniques have been successfully deployed in real-world use cases, as demonstrated by European initiatives such as GAIA-X, OriginTrail, C2PA, and EBSI, which integrate AI detection mechanisms into trustworthy governance frameworks.
| Detection Technique | Why Chosen? | Key Challenge Addressed |
| --- | --- | --- |
| Federated Learning (T1) | Aligns with privacy-first AI frameworks (GDPR and AI Act) and ensures secure, decentralized AI model training. | Privacy protection and AI trust in decentralized networks. |
| Blockchain-Based Provenance Tracking (T2) | Provides immutable verification of content origin, crucial for combating misinformation. | Ensuring AI-generated content authenticity. |
| Zero-Knowledge Proofs (ZKPs) (T3) | Balances verification and privacy, crucial in decentralized AI governance. | Trust verification without compromising data privacy. |
| DAOs for Crowdsourced Verification (T4) | Enables community-driven AI content validation, reducing centralized biases. | Democratic, transparent AI oversight. |
| AI-Powered Digital Watermarking (T5) | Ensures traceability of AI-generated content, preventing deepfake and AI-driven disinformation. | Tracking AI-generated media for accountability. |
| Explainable AI (XAI) (T6) | Improves trust in AI decision making, aligning with human oversight principles in the AI Act. | Making AI decision processes understandable. |
| Privacy-Preserving Machine Learning (PPML) (T7) | Provides secure AI verification while maintaining user privacy. | Balancing AI transparency and personal data security. |
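The provenance mechanism behind T2 can be sketched as an append-only hash chain. This is a didactic simplification — real systems such as EBSI or OriginTrail add consensus, digital signatures, and distributed storage — but it shows the core property: each record commits to its predecessor, so tampering with any payload invalidates every later link.

```python
import hashlib
import json

def _digest(record):
    """Canonical SHA-256 digest of a record (sorted keys for determinism)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_block(chain, payload):
    """Add a provenance record that commits to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"prev": prev_hash, "payload": payload}
    chain.append({**record, "hash": _digest(record)})
    return chain

def verify_chain(chain):
    """Recompute every link; any tampered payload breaks all subsequent hashes."""
    prev_hash = "0" * 64
    for block in chain:
        record = {"prev": block["prev"], "payload": block["payload"]}
        if block["prev"] != prev_hash or block["hash"] != _digest(record):
            return False
        prev_hash = block["hash"]
    return True
```

A payload would typically hold a content hash plus origin metadata (author, timestamp, generating model), so the chain certifies *who registered what, when* without storing the content itself.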
- Synergistic Effects: How These Techniques Complement Each Other
- Enhancing Transparency and Provenance:
  - Blockchain-based provenance tracking (T2) and AI-powered watermarking (T5) create a dual-layer verification system—blockchain ensures immutability, while watermarking ensures content traceability at a granular level;
  - Example: In journalism and media trust, C2PA integrates blockchain and watermarking to validate the authenticity of AI-generated content.
- Strengthening Privacy and Data Sovereignty:
  - Federated learning (T1) and privacy-preserving machine learning (T7) ensure that AI models can be trained and verified without compromising personal data, reinforcing compliance with GDPR and AI Act privacy mandates;
  - Example: The GAIA-X initiative integrates federated learning and PPML to enable secure AI data sharing across European industries.
- Democratizing AI Governance:
  - DAOs (T4) and Explainable AI (T6) create transparent, participatory AI decision-making frameworks, ensuring AI accountability in decentralized ecosystems;
  - Example: The Aragon DAO model enables crowdsourced content verification, while XAI ensures decisions remain interpretable and contestable.
- Ensuring Robust AI Authentication:
  - ZKPs (T3) and blockchain-based provenance tracking (T2) create a dual-layer trust framework—ZKPs enable confidential verification, while blockchain ensures traceability;
  - Example: The European Blockchain Services Infrastructure (EBSI) integrates ZKPs and blockchain for secure and verifiable credential authentication.
- Bridging Policy and Practice: Why These Techniques Matter
- Addressing Specific Risks Identified in the AI Act and Draghi Report: They directly support risk classification, human oversight, transparency, and privacy protection;
- Ensuring AI Trustworthiness in Decentralized Governance: They prevent misinformation, verify AI-generated content authenticity, and democratize AI oversight, addressing trust deficits in decentralized AI ecosystems;
- Strengthening European Leadership in Trustworthy AI: They align with ongoing European AI initiatives (GAIA-X, EBSI, C2PA, MUSKETEER, and Trust-AI), reinforcing Europe’s commitment to ethical AI innovation.
- Operationalizing the Techniques in Decentralized AI Governance
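The confidential-verification side of T3 can be illustrated with a Schnorr identification protocol, the classic sigma-protocol underlying many ZKP credential schemes. The group parameters below are toy-sized for readability; real systems such as EBSI's verifiable credentials use far larger, standardized groups and non-interactive variants. A prover convinces a verifier that she knows the secret x behind a public key y = g^x mod p without revealing x.

```python
import secrets

# Toy group: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
P, Q, G = 2039, 1019, 4

def keygen():
    """Secret exponent x and public key y = g^x mod p."""
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

def prove_commit():
    """Prover picks a fresh nonce r and sends the commitment t = g^r mod p."""
    r = secrets.randbelow(Q)
    return r, pow(G, r, P)

def prove_respond(x, r, c):
    """Response to the verifier's challenge c; s alone reveals nothing about x."""
    return (r + c * x) % Q

def verify(y, t, c, s):
    """Accept iff g^s == t * y^c (mod p), which holds exactly when s = r + c*x."""
    return pow(G, s, P) == (t * pow(y, c, P)) % P
```

The zero-knowledge property comes from the fresh nonce r masking x in the response; applying the Fiat–Shamir heuristic (deriving c from a hash of t) turns this into the non-interactive proofs used in credential systems.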
3.1. Federated Learning for Decentralized AI Detection (T1)
3.2. Blockchain-Based Provenance Tracking (T2)
3.3. Zero-Knowledge Proofs (ZKPs) for Content Authentication (T3)
3.4. DAOs for Crowdsourced Verification (T4)
3.5. AI-Powered Digital Watermarking (T5)
3.6. Explainable AI (XAI) for Content Detection (T6)
3.7. Privacy-Preserving Machine Learning (PPML) for Secure Content Verification (T7)
- Unlike abstract AI governance models, this article systematically identifies where and how these methods are implemented;
- Example: GAIA-X’s federated learning directly translates into privacy-enhancing AI practices that ensure compliance with EU data sovereignty mandates.
- The article does not rely on theoretical speculation; rather, it systematically aligns EU regulatory imperatives (AI Act and Draghi Report) with practical technological implementations;
- Example: EBSI’s integration of ZKPs resolves AI trust dilemmas by ensuring privacy-preserving yet verifiable digital transactions, aligning directly with EU’s cross-border regulatory frameworks.
- Unlike generic AI ethics proposals, this article makes explicit that trustworthy AI must serve multiple actors, including citizens, regulators, industries, and communities;
- Example: DAOs empower communities by decentralizing AI governance, ensuring transparent, crowd-validated content oversight instead of opaque, corporate-controlled moderation.
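A DAO-style crowd-verification rule of the kind described above can be sketched as a stake-weighted vote with a participation quorum and a supermajority threshold (the member names, stakes, and thresholds below are hypothetical; production DAOs such as Aragon implement this logic in smart contracts):

```python
def dao_verdict(votes, stakes, quorum=0.5, supermajority=0.6):
    """Stake-weighted crowd verification of a piece of content.

    votes:  {member: True (authentic) / False (flagged)} for members who voted
    stakes: {member: voting weight} for all members

    Returns 'authentic', 'flagged', or 'no-quorum'. Requiring both a
    participation quorum and a supermajority limits capture by small groups.
    """
    total = sum(stakes.values())
    cast = sum(stakes[m] for m in votes)
    if total == 0 or cast / total < quorum:
        return "no-quorum"
    yes = sum(stakes[m] for m, ballot in votes.items() if ballot)
    return "authentic" if yes / cast >= supermajority else "flagged"
```

On-chain versions add token-locking, vote delegation, and dispute rounds, but the trust calculus — no single moderator can certify or suppress content alone — is the one this function encodes.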
4. Discussions and Conclusions
4.1. Discussions, Results, and Conclusions
4.2. Limitations
- (i) Technical and Operational Challenges: Many of the techniques discussed, such as federated learning and PPML, require advanced computational infrastructure and significant technical expertise. Their deployment in resource-constrained environments may be limited, perpetuating global inequalities in digital access and trust frameworks;
- (ii) Ethical and Governance Gaps: While tools like DAOs and blockchain foster transparency and decentralization, they raise ethical concerns regarding power concentration among technologically savvy elites [28]. As recently noted by Calzada [28] and supported by the AI hype approach described by Floridi [248], decentralization does not inherently equate to democratization; instead, it risks replicating hierarchical structures in digital contexts;
- (iii) Regulatory Alignment and Enforcement: The AI Act and the Draghi Report provide robust policy frameworks, but their enforcement mechanisms remain uneven across EU member states. This regulatory fragmentation may hinder the uniform implementation of the detection techniques proposed;
- (iv) Public Awareness and Engagement: A significant barrier to adoption lies in the public’s limited understanding of decentralized technologies. As Medrado and Verdegem highlighted [240], there is a need for more inclusive educational initiatives to bridge the knowledge gap and promote trust in AI governance systems;
- (v) Emergent Risks of AI: GenAI evolves rapidly, outpacing regulatory and technological safeguards. This dynamism introduces uncertainties about the long-term effectiveness of the proposed detection techniques.
4.3. Future Research Avenues
- (i) Context-Specific Adaptations: Further research is needed to tailor decentralized Web3 tools to diverse regional and cultural contexts. This involves integrating local governance norms and socio-political dynamics into the design and implementation of detection frameworks;
- (ii) Inclusive Governance Models: Building on the principles of participatory governance discussed by Mejias and Couldry [241], future studies should examine how multistakeholder frameworks can be institutionalized within decentralized ecosystems. Citizen assemblies, living labs, and co-design workshops offer promising methods for inclusive decision making;
- (iii) User-Centric Design: Enhancing UX for detection tools such as digital watermarking and blockchain provenance tracking is crucial. Future research should focus on creating user-friendly interfaces that simplify complex functionalities, fostering greater public engagement and trust;
- (iv) Ethical and Legal Frameworks: Addressing the ethical and legal challenges posed by decentralized systems requires interdisciplinary collaboration. Scholars in law, ethics, and social sciences should work alongside technologists to develop governance models that balance innovation with accountability;
- (v) AI Literacy Initiatives: Expanding on Sieber et al. [Sieber], there is a need for targeted educational programs to improve public understanding of AI technologies. These initiatives could focus on empowering marginalized communities, ensuring equitable access to the benefits of AI;
- (vi) Monitoring and Evaluation Mechanisms: Future studies should investigate robust metrics for assessing the efficacy of detection techniques in real-world scenarios. This includes longitudinal studies to monitor their impact on trust, transparency, and accountability in decentralized systems;
- (vii) Emergent Technologies and Risks: Finally, research should anticipate the future trajectories of AI and Web3 ecosystems, exploring how emerging technologies such as quantum computing or advanced neural networks may impact trust frameworks;
- (viii) Learning from Urban AI: A potentially prominent field is emerging around the concept of Urban AI, which warrants further exploration. The question of “trustworthy AI for whom?” echoes the earlier query of “smart city for whom?”, suggesting parallels between the challenges of integrating AI into urban environments and the broader quest for trustworthy AI [249,250,251,252,253,254]. Investigating the evolution of urban AI as a distinct domain could provide valuable insights into the socio-technical dynamics of trust, governance, and inclusivity within AI-driven urban systems [255,256,257].
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Alwaisi, S.; Salah Al-Radhi, M.; Németh, G. Automated child voice generation: Methodology and implementation. In Proceedings of the 2023 International Conference on Speech Technology and Human-Computer Dialogue (SpeD), Bucharest, Romania, 25–27 October 2023; pp. 48–53. [Google Scholar] [CrossRef]
- Alwaisi, S.; Németh, G. Advancements in expressive speech synthesis: A review. Infocommunications J. 2024, 16, 35–49. [Google Scholar] [CrossRef]
- European Commission. The Future of European Competitiveness: A Competitiveness Strategy for Europe. European Commission, September 2024. Available online: https://commission.europa.eu/topics/eu-competitiveness/draghi-report_en#paragraph_47059 (accessed on 18 November 2024).
- European Parliament and Council. Regulation (EU) 2024/1689 of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations and Directives. Off. J. Eur. Union. 2024, L1689, 1–144. Available online: http://data.europa.eu/eli/reg/2024/1689/oj (accessed on 18 November 2024).
- Yang, F.; Goldenfein, J.; Nickels, K. GenAI Concepts. In Melbourne: ARC Centre of Excellence for Automated Decision-Making and Society RMIT University, and OVIC; RMIT University: Melbourne, Australia, 2024. [Google Scholar] [CrossRef]
- Insight & Foresight. How Generative AI Will Transform Strategic Foresight. 2024. Available online: https://hkifoa.com/wp-content/uploads/2024/12/how-genai-transform-strategic-foresight.pdf (accessed on 1 February 2025).
- Amoore, L.; Campolo, A.; Jacobsen, B.; Rella, L. A world model: On the political logics of generative AI. Political Geogr. 2024, 113, 103134. [Google Scholar] [CrossRef]
- Chafetz, H.; Saxena, S.; Verhulst, S.G. A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI; The GovLab: New York, NY, USA, 2024; Available online: https://arxiv.org/abs/2405.04333 (accessed on 1 September 2024).
- Delacroix, S. Sustainable data rivers? Rebalancing the data ecosystem that underlies generative AI. Crit. AI 2024, 2. [Google Scholar] [CrossRef]
- Gabriel, I.; Manzini, A.; Keeling, G.; Hendricks, L.A.; Rieser, V.; Iqbal, H.; Tomašev, N.; Ktena, I.; Kenton, Z.; Rodriguez, M.; et al. The ethics of advanced AI assistants. arXiv 2024, arXiv:2404.16244. Available online: https://arxiv.org/abs/2404.16244 (accessed on 1 February 2025).
- Shin, D.; Koerber, A.; Lim, J.S. Impact of misinformation from generative AI on user information processing: How people understand misinformation from generative AI. New Media Soc. 2024, 14614448241234040. [Google Scholar] [CrossRef]
- Tsai, L.L.; Pentland, A.; Braley, A.; Chen, N.; Enríquez, J.R.; Reuel, A. An MIT Exploration of Generative AI: From Novel Chemicals to Opera; MIT Governance Lab.: Cambridge, MA, USA, 2024. [Google Scholar] [CrossRef]
- Weidinger, L.; Rauh, M.; Marchal, N.; Manzini, A.; Hendricks, L.A.; Mateos-Garcia, J.; Bergman, S.; Kay, J.; Griffin, G.; Bariach, B.; et al. Sociotechnical Safety Evaluation of Generative AI Systems. arXiv 2023, arXiv:2310.11986. Available online: https://arxiv.org/abs/2310.11986 (accessed on 1 February 2025).
- Allen, D.; Weyl, E.G. The Real Dangers of Generative AI. J. Democr. 2024, 35, 147–162. [Google Scholar] [CrossRef]
- Kitchin, R. The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences; Sage: London, UK, 2014. [Google Scholar]
- Cugurullo, F.; Caprotti, F.; Cook, M.; Karvonen, A.; McGuirk, P.; Marvin, S. (Eds.) Artificial Intelligence and the City: Urbanistic Perspectives on AI; Routledge: Abingdon, UK, 2024. [Google Scholar] [CrossRef]
- Farina, M.; Yu, X.; Lavazza, A. Ethical considerations and policy interventions concerning the impact of generative AI tools in the economy and in society. AI Ethics 2023, 5, 737–745. [Google Scholar] [CrossRef]
- Calzada, I. Smart City Citizenship; Elsevier Science Publishing Co Inc.: Cambridge, MA, USA, 2021; ISBN 978-0-12-815300-0. [Google Scholar] [CrossRef]
- Aguerre, C.; Campbell-Verduyn, M.; Scholte, J.A. Global Digital Data Governance: Polycentric Perspectives; Routledge: Abingdon, UK, 2024. [Google Scholar]
- Angelidou, M.; Sofianos, S. The Future of AI in Optimizing Urban Planning: An In-Depth Overview of Emerging Fields of Application. In Proceedings of the International Conference on Changing Cities VI: Spatial, Design, Landscape, Heritage & Socio-Economic Dimensions, Rhodes Island, Greece, 24–28 June 2024. [Google Scholar]
- Polanyi, K. The Great Transformation: The Political and Economic Origins of Our Time; Farrar & Rinehart: New York, NY, USA, 1944. [Google Scholar]
- Solaiman, I.; Brundage, M.; Clark, J.; Askell, A.; Herbert-Voss, A.; Wu, J.; Radford, A.; Krueger, G.; Kim, J.W.; Kreps, S.; et al. Release Strategies and the Social Impacts of Language Models. arXiv 2019, arXiv:1908.09203. Available online: https://arxiv.org/abs/1908.09203 (accessed on 1 February 2025).
- Calzada, I. Artificial Intelligence for Social Innovation: Beyond the Noise of Algorithms and Datafication. Sustainability 2024, 16, 8638. [Google Scholar] [CrossRef]
- Fang, R.; Bindu, R.; Gupta, A.; Zhan, Q.; Kang, D. LLM Agents can Autonomously Hack Websites. arXiv 2024, arXiv:2402.06664. Available online: https://arxiv.org/abs/2402.06664 (accessed on 1 February 2025).
- Farina, M.; Lavazza, A.; Sartori, G.; Pedrycz, W. Machine learning in human creativity: Status and perspectives. AI Soc. 2024, 39, 3017–3029. [Google Scholar] [CrossRef]
- Abdi, I.I. Digital Capital and the Territorialization of Virtual Communities: An Analysis of Web3 Governance and Network Sovereignty; Politecnico di Milano: Milan, Italy, 2024. [Google Scholar]
- Murray, A.; Kim, D.; Combs, J. The Promise of a Decentralized Internet: What is Web 3.0 and How Can Firms Prepare? Bus. Horiz. 2022, 65, 511–526. [Google Scholar] [CrossRef]
- Calzada, I. Decentralized Web3 Reshaping Internet Governance: Towards the Emergence of New Forms of Nation-Statehood? Future Internet 2024, 16, 361. [Google Scholar] [CrossRef]
- Calzada, I. From data-opolies to decentralization? The AI disruption amid the Web3 Promiseland at stake in datafied democracies. In Research and Innovation Forum; Visvizi, A., Corvello, V., Troisi, O., Eds.; Springer: Cham, Switzerland, 2024. [Google Scholar]
- Calzada, I. Democratic erosion of data-opolies: Decentralized Web3 technological paradigm shift amidst AI disruption. Big Data Cogn. Comput. 2024, 8, 26. [Google Scholar] [CrossRef]
- Calzada, I. Disruptive technologies for e-diasporas: Blockchain, DAOs, data cooperatives, metaverse, and ChatGPT. Futures 2023, 154, 103258. [Google Scholar] [CrossRef]
- Gebhardt, C.; Pique Huerta, J.M. Integrating Triple Helix and Sustainable Transition Research for Transformational Governance: Climate Change Adaptation and Climate Justice in Barcelona. Triple Helix 2024, 11, 107–130. [Google Scholar] [CrossRef]
- Allen, D.; Frankel, E.; Lim, W.; Siddarth, D.; Simons, J.; Weyl, E.G. Ethics of Decentralized Social Technologies: Lessons from Web3, the Fediverse, and Beyond, Harvard University Edmond & Lily Safra Center for Ethics. 2023. Available online: https://ash.harvard.edu/resources/ethics-of-decentralized-social-technologies-lessons-from-web3-the-fediverse-and-beyond/ (accessed on 1 September 2024).
- De Filippi, P.; Cossar, S.; Mannan, M.; Nabben, K.; Merk, T.; Kamalova, J. Report on Blockchain Governance Dynamics. Project Liberty Institute and BlockchainGov, May 2024. Available online: https://www.projectliberty.io/institute (accessed on 20 November 2024).
- Daraghmi, E.; Hamoudi, A.; Abu Helou, M. Decentralizing Democracy: Secure and Transparent E-Voting Systems with Blockchain Technology in the Context of Palestine. Future Internet 2024, 16, 388. [Google Scholar] [CrossRef]
- Liu, X.; Xu, R.; Chen, Y. A Decentralized Digital Watermarking Framework for Secure and Auditable Video Data in Smart Vehicular Networks. Future Internet 2024, 16, 390. [Google Scholar] [CrossRef]
- Moroni, S. Revisiting subsidiarity: Not only administrative decentralization but also multidimensional polycentrism. Cities 2024, 155, 105463. [Google Scholar] [CrossRef]
- Van Kerckhoven, S.; Chohan, U.W. Decentralized Autonomous Organizations: Innovation and Vulnerability in the Digital Economy; Routledge: Oxon, UK, 2024. [Google Scholar]
- Singh, A.; Lu, C.; Gupta, G.; Chopra, A.; Blanc, J.; Klinghoffer, T.; Tiwary, K.; Raskar, R. A Perspective on Decentralizing AI; MIT Media Lab.: Cambridge, MA, USA, 2024. [Google Scholar]
- Mathew, A.J. The myth of the decentralised internet. Internet Policy Rev. 2016, 9, 1–16. Available online: https://policyreview.info/articles/analysis/myth-decentralised-internet (accessed on 1 February 2025). [CrossRef]
- Zook, M. Platforms, blockchains and the challenges of decentralization. Camb. J. Reg. Econ. Soc. 2023, 16, 367–372. [Google Scholar] [CrossRef]
- Kneese, T.; Oduro, S. AI Governance Needs Sociotechnical Expertise: Why the Humanities and Social Sciences are Critical to Government Efforts. Data Soc. Policy Brief 2024, 1–10. Available online: https://datasociety.net/wp-content/uploads/2024/05/DS_AI_Governance_Policy_Brief.pdf (accessed on 1 February 2025).
- OECD. Assessing Potential Future Artificial Intelligence Risks, Benefits and Policy Imperatives. OECD Artificial Intelligence Papers. No. 27, November 2024. Available online: https://oecd.ai/site/ai-futures (accessed on 20 November 2024).
- Nabben, K.; De Filippi, P. Accountability protocols? On-chain dynamics in blockchain governance. Internet Policy Rev. 2024, 13. [Google Scholar] [CrossRef]
- Nanni, R.; Bizzaro, P.G.; Napolitano, M. The false promise of individual digital sovereignty in Europe: Comparing artificial intelligence and data regulations in China and the European Union. Policy Internet 2024, 16, 711–726. [Google Scholar] [CrossRef]
- Schroeder, R. Content moderation and the digital transformations of gatekeeping. Policy Internet 2024, 1–16. [Google Scholar] [CrossRef]
- Gray, J.E.; Hutchinson, J.; Stilinovic, M.; Tjahja, N. The pursuit of ‘good’ Internet policy. Policy Internet 2024, 16, 480–484. [Google Scholar] [CrossRef]
- Pohle, J.; Santaniello, M. From multistakeholderism to digital sovereignty: Toward a new discursive order in internet governance. Policy Internet 2024, 16, 672–691. [Google Scholar] [CrossRef]
- Viano, C.; Avanzo, S.; Cerutti, M.; Cordero, A.; Schifanella, C.; Boella, G. Blockchain tools for socio-economic interactions in local communities. Policy Soc. 2022, 41, 373–385. [Google Scholar] [CrossRef]
- Karatzogianni, A.; Tiidenberg, K.; Parsanoglou, D. The impact of technological transformations on the digital generation: Digital citizenship policy analysis (Estonia, Greece, and the UK). DigiGen Policy Brief 2022. [Google Scholar] [CrossRef]
- European Commission. Commission Guidelines on Prohibited Artificial Intelligence Practices Established by Regulation (EU) 2024/1689 (AI Act); European Commission: Brussels, Belgium, 2025; Available online: https://digital-strategy.ec.europa.eu/en/library/second-draft-general-purpose-ai-code-practice-published-written-independent-experts (accessed on 9 February 2025).
- Huang, J.; Bibri, S.E.; Keel, P. Generative Spatial Artificial Intelligence for Sustainable Smart Cities: A Pioneering Large Flow Model for Urban Digital Twin. Environ. Sci. Ecotechnol. 2025, 24, 100526. [Google Scholar] [CrossRef] [PubMed]
- European Commission. Commission Guidelines on Prohibited Artificial Intelligence Practices—ANNEX; European Commission: Brussels, Belgium, 2025; Available online: https://ec.europa.eu (accessed on 9 February 2025).
- European Commission. Regulation (EU) 2024/1689 on Harmonised Rules on Artificial Intelligence (AI Act); European Commission: Brussels, Belgium, 2025; Available online: https://ec.europa.eu (accessed on 9 February 2025).
- Petropoulos, A.; Pataki, B.; Juijn, D.; Janků, D.; Reddel, M. Building CERN for AI: An Institutional Blueprint; Centre for Future Generations: Brussels, Belgium, 2025; Available online: http://www.cfg.eu/building-cern-for-ai (accessed on 9 February 2025).
- National Technical Committee 260 on Cybersecurity of SAC. AI Safety Governance Framework; The State Council: The People’s Republic of China: Beijing, China, 2024; Available online: https://www.tc260.org.cn/upload/2024-09-09/1725849192841090989.pdf (accessed on 9 February 2025).
- Creemers, R. China’s Emerging Data Protection Framework. J. Cybersecur. 2022, 8, tyac011. [Google Scholar] [CrossRef]
- Raman, D.; Madkour, N.; Murphy, E.R.; Jackson, K.; Newman, J. Intolerable Risk Threshold Recommendations for Artificial Intelligence; Center for Long-Term Cybersecurity, UC Berkeley: Berkeley, CA, USA, 2025; Available online: https://cltc.berkeley.edu (accessed on 9 February 2025).
- Wald, B. Artificial Intelligence and First Nations: Risks and Opportunities; Ministry of Health: Toronto, ON, Canada, 2025; Available online: https://firstnations.ai/report.pdf (accessed on 9 February 2025).
- Zeng, Y. Global Index for AI Safety: AGILE Index on Global AI Safety Readiness; International Research Center for AI Ethics and Governance, Chinese Academy of Sciences: Beijing, China, 2025; Available online: https://www.agile-index.ai/Global-Index-For-AI-Safety-Report-EN.pdf (accessed on 9 February 2025).
- Iosad, A.; Railton, D.; Westgarth, T. Governing in the Age of AI: A New Model to Transform the State; Tony Blair Institute for Global Change: London, UK, 2024; Available online: https://tonyblairinstitute.org/ai-governance (accessed on 9 February 2025).
- UN-Habitat. World Smart Cities Outlook 2024; UN-Habitat: Nairobi, Kenya, 2024; Available online: https://unhabitat.org/smartcities2024 (accessed on 9 February 2025).
- Popelka, S.; Narvaez Zertuche, L.; Beroche, H. Urban AI Guide 2023; Urban AI: Paris, France, 2023; Available online: https://urbanai.org/guide2023 (accessed on 9 February 2025).
- World Economic Forum. The Global Public Impact of GovTech: A $9.8 Trillion Opportunity; WEF: Geneva, Switzerland, 2025; Available online: https://weforum.org/govtech2025 (accessed on 9 February 2025).
- Boonstra, M.; Bruneault, F.; Chakraborty, S.; Faber, T.; Gallucci, A.; Hickman, E.; Kema, G.; Kim, H.; Kooiker, J.; Hildt, E.; et al. Lessons Learned in Performing a Trustworthy AI and Fundamental Rights Assessment. arXiv 2024, arXiv:2401.12345. Available online: https://arxiv.org/abs/2401.12345 (accessed on 9 February 2025).
- UK Government. AI Opportunities Action Plan: Government Response; Department for Science, Innovation & Technology: London, UK, 2025. Available online: https://www.gov.uk/government/publications/ai-opportunities-action-plan (accessed on 9 February 2025).
- Ben Dhaou, S.; Isagah, T.; Distor, C.; Ruas, I.C. Global Assessment of Responsible Artificial Intelligence in Cities: Research and Recommendations to Leverage AI for People-Centred Smart Cities; United Nations Human Settlements Programme (UN-Habitat): Nairobi, Kenya, 2024; Available online: https://www.unhabitat.org (accessed on 9 February 2025).
- David, A.; Yigitcanlar, T.; Desouza, K.; Li, R.Y.M.; Cheong, P.H.; Mehmood, R.; Corchado, J. Understanding Local Government Responsible AI Strategy: An International Municipal Policy Document Analysis. Cities 2024, 155, 105502. [Google Scholar] [CrossRef]
- Bipartisan House AI Task Force. Leading AI Progress: Policy Insights and a U.S. Vision for AI Adoption, Responsible Innovation, and Governance; United States Congress: Washington, DC, USA, 2025. Available online: https://www.house.gov/ai-task-force (accessed on 9 February 2025).
- World Bank. Global Trends in AI Governance: Evolving Country Approaches; World Bank: Washington, DC, USA, 2024; Available online: https://www.worldbank.org/ai-governance (accessed on 9 February 2025).
- World Economic Forum. The Global Risks Report 2025; WEF: Geneva, Switzerland, 2025; Available online: https://www.weforum.org/publications/global-risks-report-2025 (accessed on 9 February 2025).
- World Economic Forum. Navigating the AI Frontier: A Primer on the Evolution and Impact of AI Agents; WEF: Geneva, Switzerland, 2024; Available online: https://www.weforum.org/ai-frontier (accessed on 9 February 2025).
- Claps, M.; Barker, L. Moving from “Why AI” to “How to AI”—A Playbook for Governments Procuring AI and GenAI; IDC Government Insights: Washington, DC, USA, 2024; Available online: https://idc.com/research/ai-procurement (accessed on 9 February 2025).
- Couture, S.; Toupin, S.; Mayoral Baños, A. Resisting and Claiming Digital Sovereignty: The Cases of Civil Society and Indigenous Groups. Policy Internet 2025, 16, 739–749. [Google Scholar] [CrossRef]
- Pohle, J.; Nanni, R.; Santaniello, M. Unthinking Digital Sovereignty: A Critical Reflection on Origins, Objectives, and Practices. Policy Internet 2025, 16, 666–671. [Google Scholar] [CrossRef]
- European Commission. The Potential of Generative AI for the Public Sector: Current Use, Key Questions, and Policy Considerations; Digital Public Governance, Joint Research Centre: Brussels, Belgium, 2025; Available online: https://ec.europa.eu/jrc-genai (accessed on 9 February 2025).
- Heeks, R.; Wall, P.J.; Graham, M. Pragmatist-Critical Realism as a Development Studies Research Paradigm. Dev. Stud. Res. 2025, 12, 2439407. [Google Scholar] [CrossRef]
- García, A.; Alarcón, Á.; Quijano, H.; Kruger, K.; Narváez, S.; Alimonti, V.; Flores, V.; Mendieta, X. Privacidad en Desplazamiento Migratorio; Coalición Latinoamericana #MigrarSinVigilancia: Mexico City, Mexico, 2024; Available online: https://migrarsinvigilancia.org (accessed on 9 February 2025).
- PwC Global. Agentic AI—The New Frontier in GenAI; PwC: London, UK, 2025; Available online: https://pwc.com/ai-strategy (accessed on 9 February 2025).
- Calzada, I. (Libertarian) Decentralized Web3 Map: In Search of a Post-Westphalian Territory. SSRN. 2024. Available online: https://globalgovernanceprogramme.eui.eu/project/new-network-sovereignties-the-rise-of-non-territorial-states/libertarian-decentralised-web3-map-in-search-of-a-post-westphalian-territory/ (accessed on 3 February 2025).
- Sharkey, A. Could a Robot Feel Pain? AI Soc. 2024. [Google Scholar] [CrossRef]
- Behuria, P. Is the Study of Development Humiliating or Emancipatory? The Case Against Universalising ‘Development’. Eur. J. Dev. Res. 2025. [Google Scholar] [CrossRef]
- World Economic Forum. The Future of Jobs Report 2025; WEF: Geneva, Switzerland, 2025; Available online: https://www.weforum.org/reports/the-future-of-jobs-report-2025 (accessed on 9 February 2025).
- Bengio, Y.; Mindermann, S.; Privitera, D. International AI Safety Report 2025; AI Safety Institute: London, UK, 2025. Available online: https://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/ (accessed on 9 February 2025).
- European Commission Joint Research Centre. Data Sovereignty for Local Governments: Enablers and Considerations; JRC Report No. 138657; European Commission: Brussels, Belgium, 2025; Available online: https://ec.europa.eu/jrc (accessed on 9 February 2025).
- OECD. AI and Governance: Regulatory Approaches to AI and Their Global Implications; OECD Publishing: Paris, France, 2024; Available online: https://www.oecd.org/ai-regulation (accessed on 9 February 2025).
- OECD. Digital Public Infrastructure for Digital Governments; OECD Public Governance Policy Papers No. 68; OECD Publishing: Paris, France, 2024; Available online: https://www.oecd.org/digitalpublic-infrastructure (accessed on 9 February 2025).
- OpenAI. AI in America: OpenAI’s Economic Blueprint; OpenAI: San Francisco, CA, USA, 2025; Available online: https://openai.com (accessed on 9 February 2025).
- Majcher, K. (Ed.) Charting the Digital and Technological Future of Europe: Priorities for the European Commission (2024–2029); European University Institute: Florence, Italy, 2024; Available online: https://www.eui.eu (accessed on 9 February 2025).
- Savova, V. Navigating Privacy in Crypto: Current Challenges and (Future) Solutions. Educ. Sci. Res. Innov. 2024, II, 71–85. [Google Scholar]
- Nicole, S.; Mishra, V.; Bell, J.; Kastrop, C.; Rodriguez, M. Digital Infrastructure Solutions to Advance Data Agency in the Age of Artificial Intelligence; Project Liberty Institute & Global Solutions Initiative: Paris, France, 2024; Available online: https://projectliberty.io (accessed on 9 February 2025).
- Nicole, S.; Vance-Law, S.; Spelliscy, C.; Bell, J. Towards Data Cooperatives for a Sustainable Digital Economy; Project Liberty Institute & Decentralization Research Center: New York, NY, USA, 2025; Available online: https://www.projectliberty.io/wp-content/uploads/2025/01/PL_Practical_Data_Governance_Solutions_Report_v4.pdf (accessed on 4 February 2025).
- Qlik. Maximizing Data Value in the Age of AI; Qlik, 2024. Available online: https://www.qlik.com/us/resource-library/maximizing-data-value-in-the-age-of-ai (accessed on 9 February 2025).
- Lauer, R.; Merkel, S.; Bosompem, J.; Langer, H.; Naeve, P.; Herten, B.; Burmann, A.; Vollmar, H.C.; Otte, I. (Data-) Cooperatives in Health and Social Care: A Scoping Review. J. Public Health 2024. [Google Scholar] [CrossRef]
- Kaal, W.A. AI Governance via Web3 Reputation System. Stanford J. Blockchain Law Policy 2025, 8, 1. Available online: https://stanford-jblp.pubpub.org/pub/aigov-via-web3 (accessed on 9 February 2025). [CrossRef]
- Roberts, T.; Oosterom, M. Digital Authoritarianism: A Systematic Literature Review. Inf. Technol. Dev. 2024. [Google Scholar] [CrossRef]
- Roberts, H.; Hine, E.; Floridi, L. Digital Sovereignty, Digital Expansionism, and the Prospects for Global AI Governance. SSRN Electron. J. 2024. Available online: https://ssrn.com/abstract=4483271 (accessed on 9 February 2025).
- European Committee of the Regions. AI and GenAI Adoption by Local and Regional Administrations; European Union: Brussels, Belgium, 2024; Available online: https://interoperable-europe.ec.europa.eu/collection/portal/news/study-ai-and-genai-adoption-local-regional-administrations (accessed on 9 February 2025).
- French Artificial Intelligence Commission. AI: Our Ambition for France; French AI Commission: Paris, France, 2024; Available online: https://gouvernement.fr (accessed on 9 February 2025).
- UK Government. Copyright and Artificial Intelligence; Intellectual Property Office: London, UK, 2024. Available online: https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence (accessed on 9 February 2025).
- Institute of Development Studies. Indigenous Knowledge and Artificial Intelligence; Navigating Data Landscapes Series; IDS: Brighton, UK, 2024; Available online: https://www.ids.ac.uk (accessed on 9 February 2025).
- Congressional Research Service. Indigenous Knowledge and Data: Overview and Issues for Congress; CRS Report No. R48317; CRS: Washington, DC, USA, 2024. Available online: https://crsreports.congress.gov (accessed on 9 February 2025).
- UK Government. AI Opportunities Action Plan; Department for Science, Innovation and Technology: London, UK, 2025. Available online: https://www.gov.uk/official-documents (accessed on 9 February 2025).
- Centre for Information Policy Leadership (CIPL). Applying Data Protection Principles to Generative AI: Practical Approaches for Organizations and Regulators; CIPL: Washington, DC, USA, 2024; Available online: https://www.informationpolicycentre.com (accessed on 9 February 2025).
- Holgersson, M.; Dahlander, L.; Chesbrough, H.; Bogers, M.L.A.M. Open Innovation in the Age of AI. Calif. Manag. Rev. 2024, 67, 5–20. [Google Scholar] [CrossRef]
- State of California. State of California Guidelines for Evaluating Impacts of Generative AI on Vulnerable and Marginalized Communities; Office of Data and Innovation: Sacramento, CA, USA, 2024. Available online: https://www.genai.ca.gov (accessed on 9 February 2025).
- Bogen, M.; Deshpande, C.; Joshi, R.; Radiya-Dixit, E.; Winecoff, A.; Bankston, K. Assessing AI: Surveying the Spectrum of Approaches to Understanding and Auditing AI Systems; Center for Democracy & Technology: Washington, DC, USA, 2025; Available online: https://cdt.org (accessed on 9 February 2025).
- Current AI. 2025. Available online: https://www.currentai.org/ (accessed on 4 February 2025).
- Mannan, M.; Schneider, N.; Merk, T. Cooperative Online Communities. In The Routledge Handbook of Cooperative Economics and Management; Routledge: London, UK, 2024; pp. 411–432. [Google Scholar] [CrossRef]
- Durmus, M. Critical Thinking is Your Superpower: Cultivating Critical Thinking in an AI-Driven World; Mindful AI Press: Frankfurt, Germany, 2024; Available online: https://mindful-ai.org (accessed on 9 February 2025).
- Lustenberger, M.; Spychiger, F.; Küng, L.; Cuadra, P. Mastering DAOs: A Practical Guidebook for Building and Managing Decentralized Autonomous Organizations; ZHAW Institute for Organizational Viability: Zurich, Switzerland, 2024; Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5001424 (accessed on 9 February 2025).
- Fritsch, R.; Müller, M.; Wattenhofer, R. Analyzing Voting Power in Decentralized Governance: Who Controls DAOs? arXiv 2024, arXiv:2204.01176. Available online: https://arxiv.org/abs/2204.01176 (accessed on 9 February 2025). [CrossRef]
- EuroHPC Joint Undertaking. Selection of the First Seven AI Factories to Drive Europe’s Leadership in AI; EuroHPC JU: Luxembourg, 2024; Available online: https://eurohpc-ju.europa.eu/index_en (accessed on 9 February 2025).
- Marchal, N.; Xu, R.; Elasmar, R.; Gabriel, I.; Goldberg, B.; Isaac, W. Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data. arXiv 2024, arXiv:2406.13843. Available online: https://arxiv.org/abs/2406.13843 (accessed on 9 February 2025).
- Davenport, T.H.; Gupta, S.; Wang, R. SuperTech Leaders and the Evolution of Technology and Data Leadership; ThoughtWorks: Chicago, IL, USA, 2024. Available online: https://www.thoughtworks.com/content/dam/thoughtworks/documents/report/tw_supertech_leaders_and_the_evolution_of_technology_and_data_leadership.pdf (accessed on 9 February 2025).
- European Securities and Markets Authority (ESMA). Final Report on the Guidelines on the Conditions and Criteria for the Qualification of Crypto-Assets as Financial Instruments; ESMA: Paris, France, 2024; Available online: https://www.esma.europa.eu (accessed on 9 February 2025).
- Shrishak, K. AI-Complex Algorithms and Effective Data Protection Supervision: Bias Evaluation; European Data Protection Board (EDPB): Brussels, Belgium, 2024; Available online: https://edpb.europa.eu (accessed on 9 February 2025).
- Ada Lovelace Institute. Buying AI: Is the Public Sector Equipped to Procure Technology in the Public Interest? Ada Lovelace Institute: London, UK, 2024; Available online: https://www.adalovelaceinstitute.org (accessed on 9 February 2025).
- European Court of Auditors (ECA). AI Auditors: Auditing AI-Based Projects, Systems, and Processes; ECA: Luxembourg, 2024; Available online: https://eca.europa.eu (accessed on 9 February 2025).
- Züger, T.; Asghari, H. Introduction to the special issue on AI systems for the public interest. Internet Policy Rev. 2024, 13. [Google Scholar] [CrossRef]
- Papadimitropoulos, V.; Perperidis, G. On the Foundations of Open Cooperativism. In The Handbook of Peer Production; Bauwens, M., Kostakis, V., Pazaitis, A., Eds.; Wiley: Hoboken, NJ, USA, 2021; pp. 398–410. [Google Scholar] [CrossRef]
- Tarkowski, A. Data Governance in Open Source AI: Enabling Responsible and Systemic Access; Open Future: Warsaw, Poland, 2025; Available online: https://opensource.org/wp-content/uploads/2025/01/2025-OSI-DataGovernanceOSAI.pdf (accessed on 1 February 2025).
- Gerlich, M. Societal Perceptions and Acceptance of Virtual Humans: Trust and Ethics across Different Contexts. Soc. Sci. 2024, 13, 516. [Google Scholar] [CrossRef]
- Waldner, D.; Lust, E. Unwelcome change: Coming to terms with democratic backsliding. Annu. Rev. Political Sci. 2018, 21, 93–113. [Google Scholar] [CrossRef]
- Roose, K. The Data That Powers A.I. Is Disappearing Fast. 2024. Available online: https://www.nytimes.com/2024/07/19/technology/ai-data-restrictions.html (accessed on 1 September 2024).
- Kolt, N. Governing AI Agents. SSRN 2024. [Google Scholar] [CrossRef]
- Calzada, I. Data (un)sustainability: Navigating utopian resistance while tracing emancipatory datafication strategies. In Digital (Un)Sustainabilities: Promises, Contradictions, and Pitfalls of the Digitalization-Sustainability Nexus; Certomá, C., Martelozzo, F., Iapaolo, F., Eds.; Routledge: Oxon, UK, 2024. [Google Scholar] [CrossRef]
- Benson, J. Intelligent Democracy: Answering the New Democratic Scepticism; Oxford University Press: Oxford, UK, 2024. [Google Scholar]
- Coeckelbergh, M. Artificial intelligence, the common good, and the democratic deficit in AI governance. AI Ethics 2024. [Google Scholar] [CrossRef]
- García-Marzá, D.; Calvo, P. Algorithmic Democracy: A Critical Perspective Based on Deliberative Democracy; Springer Nature: Cham, Switzerland, 2024. [Google Scholar]
- KT4Democracy. Available online: https://kt4democracy.eu/ (accessed on 1 January 2024).
- Levi, S. Digitalización Democrática: Soberanía Digital para las Personas; Rayo Verde: Barcelona, Spain, 2024. [Google Scholar]
- Poblet, M.; Allen, D.W.E.; Konashevych, O.; Lane, A.M.; Diaz Valdivia, C.A. From Athens to the Blockchain: Oracles for Digital Democracy. Front. Blockchain 2020, 3, 575662. [Google Scholar] [CrossRef]
- De Filippi, P.; Reijers, W.; Morshed, M. Blockchain Governance; MIT Press: Cambridge, MA, USA, 2024. [Google Scholar]
- Visvizi, A.; Malik, R.; Guazzo, G.M.; Çekani, V. The Industry 5.0 (I50) Paradigm, Blockchain-Based Applications and the Smart City. Eur. J. Innov. Manag. 2024, 28, 5–26. [Google Scholar] [CrossRef]
- Roio, D.; Selvaggini, R.; Bellini, G.; Dintino, A. SD-BLS: Privacy preserving selective disclosure of verifiable credentials with unlinkable threshold revocation. In Proceedings of the 2024 IEEE International Conference on Blockchain (Blockchain), Copenhagen, Denmark, 19–22 August 2024; pp. 505–511. [Google Scholar] [CrossRef]
- Viano, C.; Avanzo, S.; Boella, G.; Schifanella, C.; Giorgino, V. Civic blockchain: Making blockchains accessible for social collaborative economies. J. Responsible Technol. 2023, 15, 100066. [Google Scholar] [CrossRef]
- Ahmed, S.; Jaźwińska, K.; Ahlawat, A.; Winecoff, A.; Wang, M. Field-building and the epistemic culture of AI safety. First Monday 2024. [Google Scholar] [CrossRef]
- Tan, J.; Merk, T.; Hubbard, S.; Oak, E.R.; Rong, H.; Pirovich, J.; Rennie, E.; Hoefer, R.; Zargham, M.; Potts, J.; et al. Open Problems in DAOs. arXiv 2023, arXiv:2310.19201. Available online: https://arxiv.org/abs/2310.19201v2 (accessed on 1 February 2025).
- Petreski, D.; Cheong, M. Data Cooperatives: A Conceptual Review. In ICIS 2024 Proceedings; Paper 15; 2024. Available online: https://aisel.aisnet.org/icis2024/lit_review/lit_review/15 (accessed on 1 February 2025).
- Stein, J.; Fung, M.L.; Weyenbergh, G.V.; Soccorso, A. Data Cooperatives: A Framework for Collective Data Governance and Digital Justice; People-Centered Internet, 2023. Available online: https://peoplecentered.net/wp-content/uploads/2024/09/Data-Cooperatives-Report-.pdf (accessed on 1 September 2024).
- Dathathri, S.; See, A.; Ghaisas, S.; Huang, P.-S.; McAdam, R.; Welbl, J.; Bachani, V.; Kaskasoli, A.; Stanforth, R.; Matejovicova, T.; et al. Scalable watermarking for identifying large model outputs. Nature 2024, 634, 818–823. [Google Scholar] [CrossRef]
- Adler, S.; Hitzig, Z.; Jain, S.; Brewer, C.; Chang, W.; DiResta, R.; Lazzarin, E.; McGregor, S.; Seltzer, W.; Siddarth, D.; et al. Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online. arXiv 2024, arXiv:2408.07892. Available online: https://arxiv.org/abs/2408.07892 (accessed on 1 September 2024).
- Fratini, S.; Hine, E.; Novelli, C.; Roberts, H.; Floridi, L. Digital Sovereignty: A Descriptive Analysis and a Critical Evaluation of Existing Models. SSRN 2024. Available online: https://ssrn.com/abstract=4816020 (accessed on 21 April 2024).
- Hui, Y. Machine and Sovereignty: For a Planetary Thinking; University of Minnesota Press: Minneapolis, MN, USA; London, UK, 2024. [Google Scholar]
- New America. From Digital Sovereignty to Digital Agency; New America Foundation: New York, NY, USA, 2023. Available online: https://www.newamerica.org/planetary-politics/briefs/from-digital-sovereignty-to-digital-agency/ (accessed on 20 November 2024).
- Glasze, G.; Cattaruzza, A.; Douzet, F.; Dammann, F.; Bertran, M.-G.; Bômont, C.; Braun, M.; Danet, D.; Desforges, A.; Géry, A.; et al. Contested Spatialities of Digital Sovereignty. Geopolitics 2023, 28, 919–958. [Google Scholar] [CrossRef]
- The Conversation. Elon Musk’s Feud with Brazilian Judge is Much More Than a Personal Spat—It’s About National Sovereignty, Freedom of Speech, and The Rule of Law. 2024. Available online: https://theconversation.com/elon-musks-feud-with-brazilian-judge-is-much-more-than-a-personal-spat-its-about-national-sovereignty-freedom-of-speech-and-the-rule-of-law-238264 (accessed on 20 September 2024).
- The Conversation. Albanese Promises to Legislate Minimum Age for Kids’ Access to Social Media. 2024. Available online: https://theconversation.com/albanese-promises-to-legislate-minimum-age-for-kids-access-to-social-media-238586 (accessed on 20 September 2024).
- Calzada, I. Data Co-operatives through Data Sovereignty. Smart Cities 2021, 4, 1158–1172. [Google Scholar] [CrossRef]
- Belanche, D.; Belk, R.W.; Casaló, L.V.; Flavián, C. The dark side of artificial intelligence in services. Serv. Ind. J. 2024, 44, 149–172. [Google Scholar] [CrossRef]
- European Parliament. EU AI Act: First Regulation on Artificial Intelligence. 2023. Available online: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (accessed on 23 November 2024).
- Bambauer, J.R.; Zarsky, T. Fair-Enough AI. Yale J. Law Technol. 2024, forthcoming. Available online: https://ssrn.com/abstract=4924588 (accessed on 1 February 2025).
- Dennis, C. What Should Be Internationalised in AI Governance? Oxford Martin AI Gov. Initiat. 2024. [Google Scholar]
- Ghioni, R.; Taddeo, M.; Floridi, L. Open Source Intelligence and AI: A Systematic Review of the GELSI Literature. SSRN. Available online: https://ssrn.com/abstract=4272245 (accessed on 18 November 2024).
- Bullock, S.; Ajmeri, N.; Batty, M.; Black, M.; Cartlidge, J.; Challen, R.; Chen, C.; Chen, J.; Condell, J.; Danon, L.; et al. Artificial Intelligence for Collective Intelligence: A National-Scale Research Strategy. 2024. Available online: https://ai4ci.ac.uk (accessed on 20 November 2024).
- Alon, I.; Haidar, H.; Haidar, A.; Guimón, J. The future of artificial intelligence: Insights from recent Delphi studies. Futures 2024, 165, 103514. [Google Scholar] [CrossRef]
- Harris, D.E.; Shull, A. Generative AI, Democracy and Human Rights. Centre for International Governance Innovation. Available online: https://www.cigionline.org/static/documents/FoT_PB_No._12_-_Harris_and_Shull_gzjUYYD.pdf (accessed on 1 February 2025).
- Narayanan, A. Understanding Social Media Recommendation Algorithms. Knight First Amend. Inst. 2023, 9, 1–49. [Google Scholar]
- Settle, J.E. Frenemies: How Social Media Polarizes America; Cambridge University Press: Cambridge, UK, 2018. [Google Scholar]
- European Commission; Joint Research Centre; Lähteenoja, V.; Himanen, J.; Turpeinen, M.; Signorelli, S. The Landscape of Consent Management Tools—A Data Altruism Perspective; Publications Office of the European Union: Luxembourg, 2024. [Google Scholar] [CrossRef]
- Fink, A. Data cooperative. Internet Policy Rev. 2024, 13, 1–12. [Google Scholar] [CrossRef]
- Nabben, K. AI as a Constituted System: Accountability Lessons from an LLM Experiment. Data Policy 2024, 6, e57. [Google Scholar] [CrossRef]
- Von Thun, M.; Hanley, D.A. Stopping Big Tech from Becoming Big AI; Open Markets Institute and Mozilla: Washington, DC, USA; Mountain View, CA, USA, 2024. [Google Scholar]
- Rajamohan, R. Networked Cooperative Ecosystems. 2024. Available online: https://paragraph.xyz/@v6a/networked-ecosystems-2 (accessed on 1 February 2025).
- Ananthaswamy, A. Why Machines Learn: The Elegant Math Behind Modern AI; Penguin: London, UK, 2024. [Google Scholar]
- Bengio, Y. AI and catastrophic risk. J. Democr. 2023, 34, 111–121. [Google Scholar] [CrossRef]
- European Parliament. Social Approach to the Transition to Smart Cities; European Parliament: Luxembourg, 2023. [Google Scholar]
- Magro, A. Emerging Digital Technologies in the Public Sector: The Case of Virtual Worlds; Publications Office of the European Union: Luxembourg, 2024. [Google Scholar]
- Estévez Almenzar, M.; Fernández Llorca, D.; Gómez, E.; Martínez Plumed, F. Glossary of Human-Centric Artificial Intelligence; Publications Office of the European Union: Luxembourg, 2022. [Google Scholar] [CrossRef]
- Varon, J.; Costanza-Chock, S.; Tamari, M.; Taye, B.; Koetz, V. AI Commons: Nourishing Alternatives to Big Tech Monoculture; Coding Rights: Rio de Janeiro, Brazil, 2024; Available online: https://codingrights.org/docs/AICommons.pdf (accessed on 9 February 2025).
- Verhulst, S.G. Toward a Polycentric or Distributed Approach to Artificial Intelligence & Science. Front. Policy Labs. 2024, 1, 1–10. [Google Scholar] [CrossRef]
- Mitchell, M.; Palmarini, A.B.; Moskvichev, A. Comparing Humans, GPT-4, and GPT-4V on abstraction and reasoning tasks. arXiv 2023, arXiv:2311.09247. [Google Scholar]
- Gasser, U.; Mayer-Schönberger, V. Guardrails: Guiding Human Decisions in the Age of AI; Princeton University Press: Princeton, NJ, USA, 2024. [Google Scholar]
- United Nations High-level Advisory Body on Artificial Intelligence. Governing AI for Humanity: Final Report; United Nations: New York, NY, USA, 2024. [Google Scholar]
- Vallor, S. The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking; OUP: New York, NY, USA, 2024. [Google Scholar]
- Buolamwini, J. Unmasking AI: My Mission to Protect What is Human in a World of Machines; Random House: London, UK, 2023. [Google Scholar]
- McCourt, F.H. Our Biggest Fight: Reclaiming Liberty, Humanity, and Dignity in the Digital Age; Crown Publishing: London, UK, 2024. [Google Scholar]
- Muldoon, J.; Graham, M.; Cant, C. Feeding the Machine: The Hidden Human Labour Powering AI; Cannongate: Edinburgh, UK, 2024. [Google Scholar]
- Burkhardt, S.; Rieder, B. Foundation models are platform models: Prompting and the political economy of AI. Big Data Soc. 2024, 11, 20539517241247839. [Google Scholar] [CrossRef]
- Finnemore, M.; Sikkink, K. International Norm Dynamics and Political Change. Int. Organ. 1998, 52, 887–917. [Google Scholar] [CrossRef]
- Lazar, S. Connected by Code: Algorithmic Intermediaries and Political Philosophy; Oxford University Press: Oxford, UK, 2024. [Google Scholar]
- Hoeyer, K. Data Paradoxes: The Politics of Intensified Data Sourcing in Contemporary Healthcare; MIT Press: Cambridge, MA, USA, 2023. [Google Scholar]
- Hughes, T. The political theory of techno-colonialism. Eur. J. Political Theory 2024. [Google Scholar] [CrossRef]
- Srivastava, S. Algorithmic Governance and the International Politics of Big Tech; Cambridge University Press: Cambridge, UK, 2021. [Google Scholar]
- Utrata, A. Engineering territory: Space and colonies in Silicon Valley. Am. Political Sci. Rev. 2024, 118, 1097–1109. [Google Scholar] [CrossRef]
- Lehdonvirta, V.; Wú, B.; Hawkins, Z. Weaponized Interdependence in a Bipolar World: How Economic Forces and Security Interests Shape the Global Reach of U.S. and Chinese Cloud Data Centres; Oxford Internet Institute, University of Oxford & Aalto University: Oxford, UK, 2025; Accepted to Review of International Political Economy. [Google Scholar]
- Guersenzvaig, A.; Sánchez-Monedero, J. AI research assistants, intrinsic values, and the science we want. AI Soc. 2024. [Google Scholar] [CrossRef]
- Wachter-Boettcher, S. Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech; W.W. Norton & Co.: New York, NY, USA, 2018. [Google Scholar]
- D’Amato, K. ChatGPT: Towards AI subjectivity. AI Soc. 2024. [Google Scholar] [CrossRef]
- Shavit, Y.; Agarwal, S.; Brundage, M.; Adler, S.; O’Keefe, C.; Campbell, R.; Lee, T.; Mishkin, P.; Eloundou, T.; Hickey, A.; et al. Practices for Governing Agentic AI Systems; OpenAI: San Francisco, CA, USA, 2023. [Google Scholar]
- Bibri, S.E.; Allam, Z. The Metaverse as a Virtual Form of Data-Driven Smart Urbanism: On Post-Pandemic Governance through the Prism of the Logic of Surveillance Capitalism. Smart Cities 2022, 5, 715–727. [Google Scholar] [CrossRef]
- Bibri, S.E.; Visvizi, A.; Troisi, O. Advancing Smart Cities: Sustainable Practices, Digital Transformation, and IoT Innovations; Springer: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
- Sharifi, A.; Allam, Z.; Bibri, S.E.; Khavarian-Garmsir, A.R. Smart cities and sustainable development goals (SDGs): A systematic literature review of co-benefits and trade-offs. Cities 2024, 146, 104659. [Google Scholar] [CrossRef]
- Singh, A. Advances in Smart Cities: Smarter People, Governance, and Solutions. J. Urban Technol. 2019, 26, 1–4. [Google Scholar] [CrossRef]
- Reuel, A.; Bucknall, B.; Casper, S.; Fist, T.; Soder, L.; Aarne, O.; Hammond, L.; Ibrahim, L.; Chan, A.; Wills, P.; et al. Open Problems in Technical AI Governance. arXiv 2024, arXiv:2407.14981. Available online: https://arxiv.org/abs/2407.14981 (accessed on 1 February 2025).
- Aho, B. Data communism: Constructing a national data ecosystem. Big Data Soc. 2024, 11, 20539517241275888. [Google Scholar] [CrossRef]
- Valmeekam, K.; Sreedharan, S.; Marquez, M.; Olmo, A.; Kambhampati, S. On the Planning Abilities of Large Language Models—A Critical Investigation. arXiv 2023, arXiv:2305.15771. Available online: https://arxiv.org/abs/2305.15771 (accessed on 1 February 2025).
- Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, I.; Narasimhan, K.; Cao, Y. ReAct: Synergizing reasoning and acting in language models. arXiv 2022, arXiv:2210.03629. Available online: https://arxiv.org/abs/2210.03629 (accessed on 1 February 2025).
- Krause, D. Web3 and the Decentralized Future: Exploring Data Ownership, Privacy, and Blockchain Infrastructure. Preprint 2024. [Google Scholar] [CrossRef]
- Lazar, S.; Pascal, A. AGI and Democracy; Allen Lab for Democracy Renovation: Cambridge, MA, USA, 2024. [Google Scholar]
- Ovadya, A. Reimagining Democracy for AI. J. Democr. 2023, 34, 162–170. [Google Scholar] [CrossRef]
- Ovadya, A.; Thorburn, L.; Redman, K.; Devine, F.; Milli, S.; Revel, M.; Konya, A.; Kasirzadeh, A. Toward Democracy Levels for AI. Pluralistic Alignment Workshop at NeurIPS 2024. Available online: https://arxiv.org/abs/2411.09222 (accessed on 14 November 2024).
- Alnabhan, M.Q.; Branco, P. BERTGuard: Two-Tiered Multi-Domain Fake News Detection with Class Imbalance Mitigation. Big Data Cogn. Comput. 2024, 8, 93. [Google Scholar] [CrossRef]
- Gourlet, P.; Ricci, D.; Crépel, M. Reclaiming artificial intelligence accounts: A plea for a participatory turn in artificial intelligence inquiries. Big Data Soc. 2024, 11, 20539517241248093. [Google Scholar] [CrossRef]
- Spathoulas, G.; Katsika, A.; Kavallieratos, G. Privacy Preserving and Verifiable Outsourcing of AI Processing for Cyber-Physical Systems. In International Conference on Information and Communications Security; Springer: Singapore, 2024. [Google Scholar]
- Abhishek, T.; Varda, M. Data Hegemony: The Invisible War for Digital Empires. Internet Policy Rev. 2024. Available online: https://policyreview.info/articles/news/data-hegemony-digital-empires/1789 (accessed on 1 September 2024).
- Alaimo, C.; Kallinikos, J. Data Rules: Reinventing the Market Economy; MIT Press: Cambridge, MA, USA, 2024. [Google Scholar]
- OpenAI. GPT-4 Technical Report; OpenAI: San Francisco, CA, USA, 2023. [Google Scholar]
- Dobbe, R. System safety and artificial intelligence. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–22 June 2022. [Google Scholar]
- Bengio, Y.; Mindermann, S.; Privitera, D.; Besiroglu, T.; Bommasani, R.; Casper, S.; Choi, Y.; Goldfarb, D.; Heidari, H.; Khalatbari, L.; et al. International Scientific Report on the Safety of Advanced AI: Interim Report. arXiv 2024, arXiv:2412.05282. [Google Scholar]
- World Digital Technology Academy (WDTA). Large Language Model Security Requirements for Supply Chain; WDTA AI-STR-03; World Digital Technology Academy: Geneva, Switzerland, 2024. [Google Scholar]
- AI4GOV. Available online: https://ai4gov-project.eu/2023/11/14/ai4gov-d3-1/ (accessed on 1 January 2024).
- Cazzaniga, M.; Jaumotte, F.; Li, L.; Melina, G.; Panton, A.J.; Pizzinelli, C.; Rockall, E.; Tavares, M.M. Gen-AI: Artificial Intelligence and the Future of Work; IMF Staff Discussion Note SDN2024/001; International Monetary Fund: Washington, DC, USA, 2024. [Google Scholar]
- ENFIELD. 2024. Available online: https://www.enfield-project.eu/about (accessed on 1 September 2024).
- Palacios, S.; Ault, A.; Krogmeier, J.V.; Bhargava, B.; Brinton, C.G. AGAPECert: An Auditable, Generalized, Automated, Privacy-Enabling Certification Framework with Oblivious Smart Contracts. IEEE Trans. Dependable Secur. Comput. 2022, 20, 3269–3286. [Google Scholar] [CrossRef]
- GPAI. Algorithmic Transparency in the Public Sector: A State-of-the-Art Report of Algorithmic Transparency Instruments; Global Partnership on Artificial Intelligence; OECD: Paris, France, 2024; Available online: www.gpai.ai (accessed on 1 September 2024).
- Lazar, S.; Nelson, A. AI safety on whose terms? Science 2023, 381, 138. [Google Scholar] [CrossRef]
- HAI. Artificial Intelligence Index Report 2024; HAI: Palo Alto, CA, USA, 2024. [Google Scholar]
- Nagy, P.; Neff, G. Conjuring algorithms: Understanding the tech industry as stage magicians. New Media Soc. 2024, 26, 4938–4954. [Google Scholar] [CrossRef]
- Kim, E.; Jang, G.Y.; Kim, S.H. How to apply artificial intelligence for social innovations. Appl. Artif. Intell. 2022, 36, 2031819. [Google Scholar] [CrossRef]
- Calzada, I.; Cobo, C. Unplugging: Deconstructing the Smart City. J. Urban Technol. 2015, 22, 23–43. [Google Scholar] [CrossRef]
- Visvizi, A.; Godlewska-Majkowska, H. Not Only Technology: From Smart City 1.0 through Smart City 4.0 and Beyond (An Introduction). In Smart Cities: Lock-In, Path-dependence and Non-linearity of Digitalization and Smartification; Visvizi, A., Godlewska-Majkowska, H., Eds.; Routledge: London, UK, 2025; pp. 3–16. Available online: https://www.taylorfrancis.com/chapters/edit/10.1201/9781003415930-2/technology-anna-visvizi-hanna-godlewska-majkowska (accessed on 1 February 2025).
- Troisi, O.; Visvizi, A.; Grimaldi, M. The Different Shades of Innovation Emergence in Smart Service Systems: The Case of Italian Cluster for Aerospace Technology. J. Bus. Ind. Mark. 2024, 39, 1105–1129. [Google Scholar] [CrossRef]
- Visvizi, A.; Troisi, O.; Corvello, V. Research and Innovation Forum 2023: Navigating Shocks and Crises in Uncertain Times—Technology, Business, Society; Springer Nature: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
- Caprotti, F.; Cugurullo, F.; Cook, M.; Karvonen, A.; Marvin, S.; McGuirk, P.; Valdez, A.M. Why does urban Artificial Intelligence (AI) matter for urban studies? Developing research directions in urban AI research. Urban Geogr. 2024, 45, 883–894. [Google Scholar] [CrossRef]
- Caprotti, F.; Duarte, C.; Joss, S. The 15-minute city as paranoid urbanism: Ten critical reflections. Cities 2024, 155, 105497. [Google Scholar] [CrossRef]
- Cugurullo, F.; Caprotti, F.; Cook, M.; Karvonen, A.; McGuirk, P.; Marvin, S. The rise of AI urbanism in post-smart cities: A critical commentary on urban artificial intelligence. Urban Stud. 2024, 61, 1168–1182. [Google Scholar] [CrossRef]
- Sanchez, T.W.; Fu, X.; Yigitcanlar, T.; Ye, X. The Research Landscape of AI in Urban Planning: A Topic Analysis of the Literature with ChatGPT. Urban Sci. 2024, 8, 197. [Google Scholar] [CrossRef]
- Kuppler, A.; Fricke, C. Between innovative ambitions and erratic everyday practices: Urban planners’ ambivalences towards digital transformation. Town Plan. Rev. 2024, 96, 2. [Google Scholar] [CrossRef]
- Eubanks, V. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor; Picador: London, UK, 2019. [Google Scholar]
- Lorinc, J. Dream States: Smart Cities, Technology, and the Pursuit of Urban Utopias; Coach House Books: Toronto, ON, Canada, 2022. [Google Scholar]
- Leffel, B.; Derudder, B.; Acuto, M.; van der Heijden, J. Not so polycentric: The stratified structure & national drivers of transnational municipal networks. Cities 2023, 143, 104597. [Google Scholar] [CrossRef]
- Luccioni, S.; Jernite, Y.; Strubell, E. Power hungry processing: Watts driving the cost of AI deployment? In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, Rio de Janeiro, Brazil, 3–6 June 2024. [Google Scholar]
- Gohdes, A.R. Repression in the Digital Age: Surveillance, Censorship, and the Dynamics of State Violence; Oxford University Press: Oxford, UK, 2023. [Google Scholar]
- Seger, E.; Avin, S.; Pearson, G.; Briers, M.; Ó hÉigeartaigh, S.; Bacon, H.; Ajder, H.; Alderson, C.; Anderson, F.; Baddeley, J.; et al. Tackling Threats to Informed Decision-Making in Democratic Societies: Promoting Epistemic Security in a Technologically-Advanced World; The Alan Turing Institute: London, UK, 2020. [Google Scholar]
- Burton, J.W.; Lopez-Lopez, E.; Hechtlinger, S.; Rahwan, Z.; Aeschbach, S.; Bakker, M.A.; Becker, J.A.; Berditchevskaia, A.; Berger, J.; Brinkmann, L.; et al. How large language models can reshape collective intelligence. Nat. Hum. Behav. 2024, 8, 1643–1655. [Google Scholar] [CrossRef]
- Lalka, R. The Venture Alchemists: How Big Tech Turned Profits into Power; Columbia University Press: New York, NY, USA, 2024. [Google Scholar]
- Li, F.-F. The Worlds I See: Curiosity, Exploration, and Discovery and the Dawn of AI; Macmillan: London, UK, 2023. [Google Scholar]
- Medrado, A.; Verdegem, P. Participatory action research in critical data studies: Interrogating AI from a South–North approach. Big Data Soc. 2024, 11, 20539517241235869. [Google Scholar] [CrossRef]
- Mejias, U.A.; Couldry, N. Data Grab: The New Colonialism of Big Tech (and How to Fight Back); WH Allen: London, UK, 2024. [Google Scholar]
- Murgia, M. Code Dependent: Living in the Shadow of AI; Henry Holt and Co.: London, UK, 2024. [Google Scholar]
- Johnson, S.; Acemoglu, D. Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity; Basic Books: London, UK, 2023. [Google Scholar]
- Rella, L.; Hansen, K.B.; Thylstrup, N.B.; Campbell-Verduyn, M.; Preda, A.; Rodima-Taylor, D.; Xu, R.; Straube, T. Hybrid materialities, power, and expertise in the era of general purpose technologies. Distinktion J. Soc. Theory 2024. [Google Scholar] [CrossRef]
- Merchant, B. Blood in the Machine: The Origins of the Rebellion Against Big Tech; Little, Brown and Company: London, UK, 2023. [Google Scholar]
- Sieber, R.; Brandusescu, A.; Adu-Daako, A.; Sangiambut, S. Who are the publics engaging in AI? Public Underst. Sci. 2024, 33, 634–653. [Google Scholar] [CrossRef]
- Tunç, A. Can AI determine its own future? AI Soc. 2024. [Google Scholar] [CrossRef]
- Floridi, L. Why the AI Hype is Another Tech Bubble. Available online: https://ssrn.com/abstract=4960826 (accessed on 18 September 2024).
- Batty, M. The New Science of Cities; MIT Press: Cambridge, MA, USA, 2013. [Google Scholar]
- Batty, M. Inventing Future Cities; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
- Batty, M. Urban Analytics Defined. Environ. Plan. B Urban Anal. City Sci. 2019, 46, 403–405. [Google Scholar] [CrossRef]
- Marvin, S.; Luque-Ayala, A.; McFarlane, C. Smart Urbanism: Utopian Vision or False Dawn? Routledge: New York, NY, USA, 2016. [Google Scholar]
- Marvin, S.; Graham, S. Splintering Urbanism: Networked Infrastructures, Technological Mobilities, and the Urban Condition; Routledge: London, UK, 2001. [Google Scholar]
- Marvin, S.; Bulkeley, H.; Mai, L.; McCormick, K.; Palgan, Y.V. Urban Living Labs: Experimenting with City Futures. Eur. Urban Reg. Stud. 2018, 25, 317–333. [Google Scholar] [CrossRef]
- Kitchin, R. Code/Space: Software and Everyday Life; MIT Press: Cambridge, MA, USA, 2011. [Google Scholar]
- Kitchin, R.; Lauriault, T.P.; McArdle, G. Knowing and Governing Cities through Urban Indicators, City Benchmarking, and Real-Time Dashboards. Reg. Stud. Reg. Sci. 2015, 2, 6–28. [Google Scholar] [CrossRef]
- Calzada, I. Platform and data co-operatives amidst European pandemic citizenship. Sustainability 2020, 12, 8309. [Google Scholar] [CrossRef]
- Monsees, L. Crypto-Politics: Encryption and Democratic Practices in the Digital Era; Routledge: Oxon, UK, 2020. [Google Scholar]
- European Commission. Second Draft of the General Purpose AI Code of Practice; Digital Strategy; European Commission: Brussels, Belgium, 2024; Available online: https://digital-strategy.ec.europa.eu/en/library/second-draft-general-purpose-ai-code-practice-published-written-independent-experts (accessed on 10 February 2025).
- Visvizi, A.; Kozlowski, K.; Calzada, I.; Troisi, O. Multidisciplinary Movements in AI and Generative AI: Society, Business, Education; Edward Elgar: Cheltenham, UK, 2025. [Google Scholar]
- Leslie, D.; Burr, C.; Aitken, M.; Cowls, J.; Katell, M.; Briggs, M. Artificial Intelligence, Human Rights, Democracy, and the Rule of Law: A Primer; Council of Europe: Strasbourg, France, 2021; Available online: https://edoc.coe.int/en/artificial-intelligence/10206-artificial-intelligence-human-rights-democracy-and-the-rule-of-law-a-primer.html (accessed on 1 February 2025).
- Hossain, S.T.; Yigitcanlar, T. Local Governments Are Using AI without Clear Rules or Policies, and the Public Has No Idea. QUT Newsroom. Available online: https://www.qut.edu.au/news/realfocus/local-governments-are-using-ai-without-clear-rules-or-policies-and-the-public-has-no-idea (accessed on 9 January 2025).
- Gerlich, M. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies 2025, 15, 6. [Google Scholar] [CrossRef]
- Bousetouane, F. Agentic Systems: A Guide to Transforming Industries with Vertical AI Agents. arXiv 2025, arXiv:2501.00881. Available online: https://arxiv.org/abs/2501.00881 (accessed on 9 January 2025).
- Calzada, I. Generative AI and the Urban AI Policy Challenges Ahead: Trustworthy AI for Whom? Available online: https://www.emeraldgrouppublishing.com/calls-for-papers/generative-ai-and-urban-ai-policy-challenges-ahead-trustworthy-ai-whomn (accessed on 1 February 2025).
- Hossain, S.T.; Yigitcanlar, T.; Nguyen, K.; Xu, Y. Cybersecurity in Local Governments: A Systematic Review and Framework of Key Challenges. Urban Gov. 2025. [Google Scholar] [CrossRef]
- Laksito, J.; Pratiwi, B.; Ariani, W. Harmonizing Data Privacy Frameworks in Artificial Intelligence: Comparative Insights from Asia and Europe. PERKARA—J. Ilmu Huk. Dan Polit. 2024, 2, 579–588. [Google Scholar] [CrossRef]
- Nature. Science for Policy: Why Scientists and Politicians Struggle to Collaborate. Nature 2024. Available online: https://www.nature.com/articles/d41586-024-03910-4 (accessed on 9 January 2025).
Aspect | EU-Wide Application Under AI Act | Country-Specific Focus [3,4] |
---|---|---|
Risk-based classification | AI systems are classified as unacceptable, high, limited, or minimal risk. | Individual states may prioritize specific sectors (e.g., healthcare in Germany, transportation in the Netherlands) where high-risk AI applications are more prevalent. |
Requirements for high-risk systems | Mandatory requirements for data quality, transparency, robustness, and oversight. | Enforcement and oversight approaches may vary, with some countries opting for stricter testing and certification processes. |
Transparency obligations | Users must be informed when interacting with AI (e.g., chatbots and deepfakes). | Implementation might vary, with some countries adding requirements for specific sectors like finance (France) or public services (Sweden). |
Data governance | Data used by AI systems must be free from bias and respect privacy. | States with stronger data protection laws, like Germany, may adopt stricter data governance and audit practices. |
Human oversight | High-risk AI requires mechanisms for human intervention and control. | Emphasis may vary, with some states prioritizing human oversight in sectors like education (Spain) or labor (Italy). |
Penalties for non-compliance | Non-compliance can result in fines up to 6% of global turnover. | While fines are harmonized, enforcement strategies may differ based on each country’s regulatory framework. |
Regulatory sandboxes | Creation of sandboxes to promote safe innovation in AI. | Some countries, like Denmark and Finland, have existing sandbox initiatives and may expand them to further support AI development. |
National AI strategies | Member states align their AI strategies with the AI Act’s principles. | Countries may adapt strategies to their economic strengths (e.g., robotics in Czechia and AI-driven fintech in Luxembourg). |
Public sector AI | Public services using AI must comply with the Act’s requirements. | Some countries prioritize transparency and ethics in government AI applications, with additional guidelines (e.g., Estonia and digital services). |
Dimension | Key Insights | Implications |
---|---|---|
Defining trustworthy AI | Encompasses transparency, accountability, and ethical integrity. | Calls for participatory governance to ensure inclusivity and co-construction of trust. |
Innovation vs. ethics | Tension between fostering innovation and maintaining ethical standards. | Uneven playing fields for SMEs and grassroots initiatives; innovation sandboxes as a potential equalizer. |
High-risk sectors | Focus on healthcare, law enforcement, and energy; risks of bias and misuse. | Continuous monitoring and inclusive frameworks to ensure systems empower rather than oppress vulnerable populations. |
Participatory governance | Advocates for inclusion via citizen assemblies, living labs, and co-design workshops. | Encourages diverse stakeholder engagement to align technological advancements with democratic values. |
Growth and equity | Balances economic growth with societal equity. | Promotes innovation while safeguarding against tech concentration and ethical oversights. |
Decentralized ecosystems | Risks of bias, misinformation, and reduced accountability in decentralized ecosystems. | Emphasizes blockchain and other tech as solutions to enhance accountability without compromising user privacy. |
Distribution of benefits | Highlights disparities in economic benefits across industries and societal groups. | Need for policies that ensure AI benefits reach marginalized communities and foster equity. |
Robustness vs. inclusivity | Debate over prioritizing technological robustness vs. societal inclusivity in trustworthiness. | Shift required towards frameworks addressing underrepresented groups. |
Techniques | Definition |
---|---|
T1. Federated Learning for Decentralized AI Detection | Collaborative AI model training across decentralized platforms, preserving privacy without sharing raw data. |
T2. Blockchain-Based Provenance Tracking | Blockchain technology records content creation and dissemination, enabling transparent tracking of content authenticity. |
T3. Zero-Knowledge Proofs for Content Authentication | Cryptographic method to verify content authenticity without revealing underlying private data. |
T4. Decentralized Autonomous Organizations (DAOs) for Crowdsourced Verification | Crowdsourced content verification through DAOs, allowing communities to collectively vote and verify content authenticity. |
T5. AI-Powered Digital Watermarking | Embedding unique identifiers into AI-generated content to trace and authenticate its origin. |
T6. Explainable AI (XAI) for Content Detection | Provides transparency in AI model decision making [236], explaining why content was flagged as AI-generated. |
T7. Privacy-Preserving Machine Learning (PPML) for Secure Content Verification | Enables secure detection and verification of content while preserving user privacy, leveraging homomorphic encryption and other techniques. |
Technique | European Initiative | Response to the Research Question | Trustworthy AI for Whom? Who Benefits? (Stakeholder-Specific Trust Outcomes) |
---|---|---|---|
T1. Federated Learning for Decentralized AI Detection | GAIA-X initiative promoting secure and decentralized data ecosystems https://www.gaia.x.eu (accessed on 1 February 2025) | Supports user-centric data sharing and privacy compliance across Europe | End Users and Citizens: GAIA-X (federated learning) enables privacy-first AI model training, ensuring individuals retain control over their data while fostering AI transparency in federated data-sharing ecosystems. |
T2. Blockchain-Based Provenance Tracking | OriginTrail project ensuring data and product traceability https://origintrail.io/ (accessed on 1 February 2025) | Enhances product authenticity and trust in supply chains for consumers and industries | Communities and Organizations: Tools like OriginTrail (blockchain-based provenance tracking) ensure that organizations and consumers can trust the authenticity of data and products. Verifiable content provenance fosters trust in digital ecosystems, particularly in journalism, supply chains, and digital identity verification. |
T3. Zero-Knowledge Proofs (ZKPs) for Content Authentication | European Blockchain Services Infrastructure (EBSI) for credential verification https://digital-strategy.ec.europa.eu/en/policies/european-blockchain-services-infrastructure (accessed on 1 February 2025) | Ensures privacy and security for credential verification in education and public services | Regulators and Policymakers: By embedding EU principles into operational frameworks, initiatives like the European Blockchain Services Infrastructure (EBSI) demonstrate that trustworthy AI aids regulators in enforcing compliance while maintaining transparency and inclusivity across borders. ZKPs balance AI trust with privacy, ensuring secure, privacy-preserving verification—an essential feature for cross-border governance, regulatory compliance, and digital identity frameworks. |
T4. DAOs for Crowdsourced Verification | Aragon platform enabling collaborative decentralized governance https://www.aragon.org/ (accessed on 1 February 2025) | Empowers communities with participatory governance and collaborative decision making | Communities and Organizations: Tools like Aragon (DAOs) empower decentralized decision making, fostering collaborative governance among community members. This enables collective content validation, minimizing centralized control over AI governance, fostering participatory, democratic AI decision making. |
T5. AI-Powered Digital Watermarking | C2PA initiative embedding metadata and watermarks in digital media https://c2pa.org/ (accessed on 1 February 2025) | Improves traceability and content authenticity for media and journalism | Industry and Innovation Ecosystems: Projects like C2PA (digital watermarking) support industrial and media ecosystems by providing robust frameworks. These initiatives promote innovation while adhering to ethical guidelines. Essential for combatting AI-generated misinformation, C2PA watermarking ensures content authenticity, benefiting journalists, digital platforms, and content creators. |
T6. Explainable AI (XAI) for Content Detection | Horizon 2020 Trust-AI project developing explainable AI models www.trustai.eu (accessed on 1 February 2025) | Enhances transparency and trust in AI decision making for users and professionals | End Users and Citizens: Projects like Trust-AI (XAI) focus on user-centric designs that prioritize transparency and data privacy. Citizens gain trust in AI systems when these systems explain their decisions, safeguard personal data, and remain accountable. This increases AI decision-making transparency, empowering citizens to understand and contest automated decisions, particularly in finance, healthcare, and legal AI applications. |
T7. Privacy-Preserving Machine Learning (PPML) for Secure Content Verification | MUSKETEER project creating privacy-preserving machine learning frameworks https://musketeer.eu/ (accessed on 1 February 2025) | Ensures secure AI training and compliance with privacy laws for industry stakeholders | Industry and Innovation Ecosystems: Projects like MUSKETEER (PPML) support industrial ecosystems by providing robust frameworks for privacy-preserving analysis and content authentication. These initiatives promote innovation while adhering to ethical guidelines, ensuring privacy-respecting AI governance and enabling secure collaboration while maintaining GDPR compliance. |
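Several of the initiatives in the table above rest on the same underlying primitive: an append-only record in which each entry commits cryptographically to its predecessor, so altering history invalidates every later hash. The toy hash chain below illustrates the tamper-evidence property behind T2-style provenance tracking; it is a sketch of the principle only, not the OriginTrail or EBSI protocol.

```python
# Minimal hash chain: each record's hash covers its content, author, and
# the previous record's hash, making retroactive edits detectable.
import hashlib
import json

def add_record(chain, content, author):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"content": content, "author": author, "prev": prev_hash}
    # Hash is computed over the record body before the "hash" key is added.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    """Recompute every hash; True only if no record was altered."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("content", "author", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, "article draft v1", "alice")
add_record(chain, "article draft v2", "alice")
print(verify(chain))            # True
chain[0]["content"] = "forged"  # tamper with history
print(verify(chain))            # False
```

Production systems add distributed consensus and digital signatures on top of this structure, but the trust argument visible to end users — "this content's history has not been silently rewritten" — is carried by the chained hashes alone.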
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Calzada, I.; Németh, G.; Al-Radhi, M.S. Trustworthy AI for Whom? GenAI Detection Techniques of Trust Through Decentralized Web3 Ecosystems. Big Data Cogn. Comput. 2025, 9, 62. https://doi.org/10.3390/bdcc9030062