Article

Intelligent Decentralized Governance: A Case Study of KlimaDAO Decision-Making

1 Department of Technology Application and Human Resource Development, National Taiwan Normal University, Taipei 106, Taiwan
2 Department of History, National Taiwan Normal University, Taipei 106, Taiwan
* Author to whom correspondence should be addressed.
Electronics 2025, 14(12), 2462; https://doi.org/10.3390/electronics14122462
Submission received: 5 May 2025 / Revised: 14 June 2025 / Accepted: 16 June 2025 / Published: 17 June 2025
(This article belongs to the Special Issue Explainability in AI and Machine Learning)

Abstract

This study proposes an AI-assisted governance framework to enhance decision-making within decentralized autonomous organizations (DAOs). By integrating chain-of-thought (CoT) reasoning with stakeholder-adaptive recommendations, the framework improves decision alignment, increases voter participation, and enhances governance transparency. Through simulations based on historical KlimaDAO data, the system achieved a 97% alignment with past decisions, a projected 40% increase in participation, and a 35% improvement in governance clarity. To support quantitative analysis in tokenomics, we developed a tailored CoT reasoning strategy, effectively reducing information asymmetry and generating structured, trustworthy recommendations. These results underscore the potential of AI to foster more inclusive and transparent DAO governance. Future work will explore deploying lightweight AI models and extending this approach to a broader range of DAO ecosystems.

1. Introduction

Decentralized autonomous organizations (DAOs) have emerged as transformative governance models that promote transparent, community-driven blockchain decision-making. By distributing authority among token holders, DAOs seek to democratize power and align stakeholder interests [1]. However, despite their promise, DAOs face persistent governance challenges that undermine their effectiveness and legitimacy.
Recent studies highlight three core issues, as follows:
  • Proposals often involve highly technical content, such as smart contract modifications or complex tokenomics, which create comprehension barriers for average participants [2];
  • Misalignment between short-term speculation and long-term protocol sustainability erodes coherent stakeholder incentives [3];
  • Vote concentration among “whales” discourages broader participation and weakens collective engagement [4].
These structural problems have contributed to declining voter turnout and increased decision-making inefficiencies.
Empirical data reinforce these concerns. Since early 2024, DAO-wide voter participation has dropped by over 40%, with critical proposals passing with only 20–30% of token holders participating [5]. To illustrate this trend, Figure 1 presents historical voting participation patterns in KlimaDAO, a leading protocol in the voluntary carbon market. Although early proposals drew strong engagement, especially during landmark reforms like KIP-31, recent votes, including the consequential KIP-65, have suffered from significantly reduced turnout. Moreover, even these turnout figures likely overstate genuine engagement, and participation has remained remarkably low even for crucial proposals such as KIP-65. This trend suggests that the increasing complexity of governance decisions and the cognitive burden placed on voters have become significant barriers to informed and active participation.
KlimaDAO’s governance lifecycle is a structured, four-stage process for transparent, community-driven decision-making. The process begins on the KlimaDAO forum, where new proposals are introduced as a request for comment (RFC) for public discussion and assessment. Following a supportive discussion, a formal KlimaDAO improvement proposal (KIP) undergoes an informal “temperature check” poll to gauge consensus. Successful KIPs then advance to a formal, binding vote on Snapshot, an open-source platform for KLIMA holders. Snapshot ensures vote integrity by recording token balances at a specific block height to prevent manipulation. Voting power is directly proportional to a user’s holdings of the KLIMA token. The platform facilitates off-chain, gasless voting via a wallet signature, with vote data stored on decentralized systems like IPFS. Finally, if a KIP secures a majority vote (typically >50%), the DAO proceeds with on-chain execution. To ensure security, financial and contractual operations are managed by a multi-signature wallet, like a Gnosis Safe, which requires approvals from multiple core members [5,7].
We propose an AI-assisted decision-support framework that leverages large language models (LLMs) enhanced with chain-of-thought (CoT) reasoning in response to these challenges. We aim to reduce informational asymmetries, simplify complex proposals, and deliver stakeholder-adaptive recommendations that empower long-term holders and short-term traders within DAO ecosystems.
Our key contributions are as follows:
  • We design an LLM-based decision-support pipeline capable of synthesizing governance proposals and on-chain economic metrics into actionable insights.
  • We implement CoT reasoning to enhance explainability and mitigate hallucination, increasing trust in automated recommendations.
  • We generate stakeholder-specific recommendations tailored to different governance roles and incentives.
  • We evaluate the framework through simulations using historical KlimaDAO data, showing improvements in decision alignment, projected voter participation, and governance transparency.
Empirical results demonstrate a 97% alignment with historical decisions, a projected 40% increase in participation, and a 35% improvement in governance clarity. These findings suggest that AI-enhanced frameworks can reduce cognitive barriers and foster more inclusive, transparent DAO governance.
We also explore future directions, including the integration of hybrid human–AI governance models [8] and emotion-aware LLMs [9], to further personalize recommendations based on evolving community sentiment [8,9].
The rest of this paper is structured as follows: Section 2 reviews related literature; Section 3 outlines our research design; Section 4 details our methods; Section 5 presents our findings; Section 6 offers an analysis; and Section 7 concludes with future research directions. Despite the growing deployment of DAOs in blockchain governance, persistent challenges such as proposal complexity, voter disengagement, and stakeholder misalignment remain unresolved [10,11]. Existing research rarely addresses how AI-driven systems can enhance clarity, reduce cognitive burden, and tailor recommendations for diverse voting personas. This study aims to bridge this gap by proposing and evaluating an LLM-enhanced framework that integrates tokenomic analysis, sentiment interpretation, and adaptive reasoning to improve participatory decision-making.

2. Literature Review

2.1. DAO Governance Challenges

While decentralized autonomous organizations (DAOs) promise more democratic and transparent governance mechanisms, several persistent challenges undermine these ideals.
First, governance proposals frequently contain complex technical details, such as smart contract upgrades or tokenomics adjustments, which create significant comprehension barriers for average token holders [11]. As a result, many community members may refrain from voting due to a lack of understanding or confidence in evaluating such proposals.
Second, DAOs often experience a misalignment of incentives between short-term speculators and long-term protocol supporters. Speculative trading behavior prioritizes immediate gains, while governance decisions involve long-term ecosystem health and sustainability considerations. This divergence can fragment stakeholder interests and reduce engagement in governance processes.
Finally, the concentration of voting power among large token holders (“whales”) represents another structural challenge. When a small subset of participants disproportionately controls decision-making power, smaller holders may feel disempowered and choose to abstain from governance activities altogether [12]. This dynamic ultimately undermines the decentralized ethos and inclusiveness that DAOs seek to promote.
These challenges contribute to declining voter turnout, reduced diversity of viewpoints in decision-making, and diminished legitimacy of governance outcomes. Addressing these issues requires innovative mechanisms to lower participation barriers, align incentives, and foster broader stakeholder engagement.

2.2. AI-Driven Governance Decision Support

Recent advances in large language models (LLMs) have opened new avenues for augmenting decision-making in complex governance environments [10]. AI-driven systems have demonstrated capabilities in summarizing discussions, extracting key arguments, and generating recommendations tailored to diverse stakeholder perspectives [13].
In decentralized communities, where the volume and complexity of discourse can overwhelm participants, AI-generated explanations enhance proposal clarity and transparency. Studies have shown that explanation quality significantly influences trust and engagement in AI-assisted decision-making contexts [14,15]. Furthermore, chain-of-thought (CoT) reasoning has emerged as a promising technique to improve the interpretability and persuasiveness of AI-generated recommendations [16]. By guiding users through step-by-step logical reasoning, CoT-based outputs can help bridge knowledge gaps and lower the cognitive burden of governance participation.
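To make the step-by-step structure of CoT prompting concrete, the following is a minimal, hypothetical sketch of how such a governance prompt might be assembled. The field names (`proposal_title`, `treasury_usd`) and the step wording are illustrative assumptions, not the paper's actual templates (those appear in Figure 5).

```python
# Hypothetical sketch of a chain-of-thought governance prompt.
# All field names and step wording here are illustrative assumptions.

def build_cot_prompt(proposal_title: str, summary: str, treasury_usd: float) -> str:
    """Assemble a prompt that asks the model to reason in explicit steps."""
    return (
        f"Proposal: {proposal_title}\n"
        f"Summary: {summary}\n"
        f"Treasury balance: ${treasury_usd:,.0f}\n\n"
        "Reason step by step:\n"
        "1. Restate the proposal's goal in plain language.\n"
        "2. List the economic tradeoffs for long-term holders.\n"
        "3. List the tradeoffs for short-term traders.\n"
        "4. Give a final recommendation with a one-line rationale.\n"
    )

prompt = build_cot_prompt("KIP-31: Inflation adjustment",
                          "Reduce base staking reward rate.", 25_000_000)
```

The explicit enumeration is what distinguishes a CoT prompt from a plain summarization request: the model is nudged to expose each inferential step rather than jump to a verdict.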
However, most existing applications of LLMs in decision support focus on general-purpose domains, such as legal reasoning or document summarization [17,18]. Their adoption in DAO governance remains limited, and few studies have explored how AI-generated rationales may directly influence voting behaviors in decentralized settings.

2.3. Hybrid Governance Models

Hybrid governance models have been proposed to address the limitations of purely human or purely algorithmic decision-making processes. These frameworks combine human judgment with AI-driven assistance to balance efficiency, transparency, and inclusiveness [19].
In the context of DAOs, hybrid models can empower participants without ceding control to automated systems. For example, AI agents can assist by providing contextual explanations, identifying relevant precedents, or surfacing diverse viewpoints, while final decision-making authority remains with token holders [17]. Such collaborative arrangements have been explored in adjacent domains, including citizen assemblies and participatory budgeting [20,21].
Integrating AI into decentralized governance introduces new trust, accountability, and value alignment challenges [22]. Ensuring that AI-generated recommendations are perceived as fair and unbiased is critical to fostering adoption. Furthermore, designing interfaces and workflows seamlessly incorporating AI explanations into DAO governance processes remains an open research question.
While hybrid governance models hold considerable promise, further empirical studies are needed to evaluate their effectiveness in real-world DAO ecosystems and establish best practices for human–AI collaboration in decentralized decision-making.

2.4. Recent Advances in DAO Governance Research

While AI-assisted decision support has attracted growing attention, scholarship on DAO governance has also evolved significantly in recent years. Recent empirical and conceptual works have explored new voting mechanisms, governance structures, and stakeholder engagement challenges. For example, Tamai and Kasahara (2024) propose resilient voting mechanisms that mitigate whale and collusion risks in DAO contexts [4]. Furthermore, Davidson (2025) provides a comprehensive theoretical analysis of decentralized governance models and their implications for aligning stakeholder incentives [3]. Together, these studies underscore the growing scholarly attention to the complexities of decentralized governance and the need for transparent and inclusive decision-making processes.
These studies collectively indicate that DAO governance faces persistent tensions between efficiency, transparency, and inclusiveness. However, quantitative models that simulate the behavioral impacts of governance interventions, particularly AI-assisted reasoning tools, remain rare. Our study addresses this gap by integrating empirical governance data with simulation-based analysis, offering a novel perspective on how LLM-driven explanations may reshape participation and decision dynamics in decentralized ecosystems.

3. Research Design

Building upon the governance challenges outlined in the introduction, our research design adopts a structured, multi-layered approach to evaluating the integration of large language models (LLMs) into decentralized decision-making environments. We selected KlimaDAO as an illustrative case study due to its extensive governance data and active community participation.

3.1. Case Selection

KlimaDAO is a carbon credit-focused decentralized autonomous organization (DAO) that combines blockchain infrastructure with a dynamic governance ecosystem. Its active voting history and transparency in proposal outcomes provide a robust empirical foundation for simulating AI-assisted decision-making processes.

3.2. Data Collection Overview

We curated three primary data sources to inform our framework, as follows:
  • Governance proposals: Dataset of KIP-1 to KIP-65, including proposal texts, categories, and outcomes.
  • On-chain economic indicators: Metrics include token price, market cap, treasury balance, TVL, and liquidity depth.
  • Community sentiment: Extracted from forums and discussion platforms, offering qualitative proposal insights.

3.3. Simulation Process Design

We designed a structured simulation process to rigorously assess the potential impact of AI-generated explanations on governance participation. This process models the behavioral response of DAO participants based on historical voting records and forum sentiment analysis. The workflow consists of five distinct stages, as illustrated below.

3.3.1. Step 1: Data Preparation and Token-Holder Classification

We first constructed a dataset integrating governance proposals (KIP-1 to KIP-65), on-chain economic indicators, and forum discussions. Token holders were categorized into three groups based on their historical voting behavior:
  • Voters: Participants who voted on proposals.
  • Lapsed voters: Token holders who actively voted in the past but did not vote on the “current” proposal.
  • Inactive holders: Token holders who neither voted nor participated in discussions (excluded from this simulation).
This classification allows us to isolate the lapsed voters group, the primary target population for our AI explanation intervention.
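The classification rule above can be sketched as a small helper; the wallet addresses and voting flags below are illustrative, not drawn from the actual KlimaDAO dataset.

```python
# Sketch of the Step 1 token-holder classification. A holder is a "voter"
# if they voted on the current proposal, a "lapsed" voter if they voted
# before but not now, and "inactive" otherwise. Example data is invented.

def classify_holder(voted_now: bool, voted_before: bool) -> str:
    if voted_now:
        return "voter"
    if voted_before:
        return "lapsed"
    return "inactive"

holders = [("0xA1", True, True), ("0xB2", False, True), ("0xC3", False, False)]
groups = {addr: classify_holder(now, before) for addr, now, before in holders}
# groups -> {"0xA1": "voter", "0xB2": "lapsed", "0xC3": "inactive"}
```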

3.3.2. Step 2: Proposal Sentiment Clarity Scoring

Each proposal was analyzed using a fine-tuned DistilBERT sentiment classifier to generate polarity scores from forum discussions. Entropy was calculated for each proposal to measure sentiment clarity:
H(p) = -\sum_{i=1}^{n} p_i \log p_i
where p_i represents the probability assigned to the positive, neutral, and negative sentiment classes.
Proposals with H(p) < 0.4 were classified as high clarity, while those above the threshold were considered low clarity. Historical data revealed that high-clarity proposals exhibited 20–45% higher voter turnout than low-clarity proposals.
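The entropy-based clarity score can be sketched in a few lines. The probabilities below are invented stand-ins for the DistilBERT classifier's outputs, and the natural logarithm is an assumption, since the paper does not state the log base used with the 0.4 threshold.

```python
import math

# Sketch of the sentiment-entropy clarity score. Probabilities are
# illustrative; in the paper they come from a fine-tuned DistilBERT
# classifier. Natural log is assumed (the base is not stated).

def sentiment_entropy(probs):
    """Shannon entropy H(p) = -sum(p_i * log(p_i)) over sentiment classes."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def clarity_label(probs, threshold=0.4):
    return "high" if sentiment_entropy(probs) < threshold else "low"

# A near-unanimous discussion has low entropy (high clarity) ...
print(clarity_label([0.95, 0.03, 0.02]))   # high
# ... while an evenly split discussion has high entropy (low clarity).
print(clarity_label([0.40, 0.30, 0.30]))   # low
```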

3.3.3. Step 3: AI Explanation Exposure Simulation

We designed a simulation scenario focusing on unambiguous proposals to assess the potential of AI-generated chain-of-thought explanations in mobilizing lapsed voters. Prior studies on civic participation [23] indicate that clear and accessible explanations can significantly increase engagement among previously inactive stakeholders. Drawing from this empirical clarity-turnout correlation, we adopted a conservative conversion estimate of 60%, representing the proportion of exposed abstainers expected to be activated by transparent and stakeholder-relevant reasoning.
In this simulation, we assumed 100% exposure of lapsed voters to AI explanations for high-clarity proposals. Combined with the 60% conversion rate, this models the realistic yet optimistic scenario where abstainers are more likely to participate in the voting process when provided with enhanced reasoning.

3.3.4. Step 4: Conversion Rate Application

Based on observed historical turnout gaps, we conservatively assumed a 60% conversion rate:
C_{voters} = A \times R
where
  • A = number of lapsed voters exposed to AI explanations;
  • R = conversion rate (set at 60%).
This resulted in a projected number of new voters attributable to AI-generated explanations.

3.3.5. Step 5: Aggregated Participation Uplift Calculation

Finally, the projected voter uplift was calculated by aggregating newly converted voters across all high-clarity proposals and comparing this against historical participation levels:
U = \frac{C_{total}}{V_{historical}} \times 100\%
where
  • C_{total} = total projected converted voters;
  • V_{historical} = historical voter turnout for high-clarity proposals.
The simulation results indicated a potential 40% increase in overall participation, aligning with the aggregated projections from individual proposal simulations. Historical turnout is calculated from 24 October 2021 to 16 July 2024.

3.3.6. Remarks on Assumptions and Model Limitations

It is important to note that while these assumptions are grounded in empirical observations, they are inherently hypothetical. This simulation aims to model potential outcomes under realistic conditions rather than predict deterministic results. Conservative parameterization (e.g., 60% conversion) was adopted to ensure robustness and avoid overstating the impact of AI-generated explanations. Appendix A and Appendix B summarize this simulation’s assumptions and parameter values for clarity and reproducibility.

3.4. Evaluation Metrics

We adopted the following three key evaluation metrics to assess governance outcomes: (1) Decision alignment, which measures the consistency between AI recommendations and historical community voting outcomes; (2) voter engagement uplift, estimating the projected increase in voter participation driven by more precise AI-generated explanations; and (3) governance transparency, evaluating the depth and clarity of rationale provided by the AI system.
This design enables quantitative and qualitative assessment of AI’s role in DAO governance, providing a replicable framework for future research across other decentralized ecosystems.

4. Methodology

This section presents the methodological framework adopted to evaluate the proposed AI-assisted DAO governance system. Specifically, we elaborate on the evaluation metrics, simulation design, data processing pipeline, system architecture, hallucination testing protocol, and prompt engineering strategy.

4.1. System Design and Evaluation Metrics

We implemented a comparative evaluation to assess the effectiveness of stakeholder-specific AI prompting in governance scenarios. Explanations generated under general-purpose and adaptive stakeholder-aware conditions were scored using the Clarity Score rubric (Figure 2). The results revealed that 69.2% of adaptive outputs achieved a score of 4 or higher, significantly outperforming the 52.3% attained by the general-purpose baseline. These findings underscore the value of context-aware prompting strategies and are consistent with prior research emphasizing tailored explanations to enhance AI transparency and participatory decision-making [14,16].
To ensure the reliability and rigor of the evaluation, clarity scoring was conducted by two independent annotators, achieving an 87% inter-rater agreement. This high level of consistency reflects adherence to established best practices in explainable AI research, which emphasize human-centered evaluation and validation protocols [15,24]. The overall assessment process is illustrated in Figure 3, which depicts the systematic procedure employed by annotators to assign Clarity Scores to AI-generated proposal explanations.
To enable holistic evaluation, we define three key governance metrics.
  • Decision Alignment (P_align): Alignment measures the degree to which AI-generated recommendations coincide with historical community voting outcomes. Using a binary comparison, each AI recommendation was matched against historical decisions, and P_align was calculated as the percentage of aligned cases out of the total proposals evaluated.
  • Voter Engagement Uplift (P_engagement): Projected participation improvements were modeled through sentiment-based simulations, leveraging historical turnout data and abstainer analysis [21,25]. Specifically, proposals with sentiment entropy lower than 0.4 were categorized as “high clarity”, with empirical data showing up to 45% greater participation rates. Based on these observations, we assumed that 60% of lapsed voters exposed to AI-generated clear explanations would participate, yielding a projected +40% engagement uplift.
  • Governance Transparency (T_transparency): Transparency was operationalized as the percentage of AI explanations achieving a Clarity Score ≥ 4, based on stepwise reasoning, comparative analysis, and clear recommendations. Scoring followed annotation procedures with dual reviewer validation, consistent with best practices in explainable AI studies [15].
As shown in Table 1, 69.2% of explanations met this threshold, which represents a 35% relative improvement compared to the baseline clarity rate of 52.3% under general-purpose prompting.
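Two of the three metrics reduce to simple proportions over per-proposal annotations, which can be sketched as follows. The recommendation labels and Clarity Scores below are invented examples, not the paper's data.

```python
# Sketch of the P_align and T_transparency computations over labeled
# per-proposal results. All values below are illustrative stand-ins.

ai_recs    = ["yes", "no", "yes", "yes"]   # AI recommendations
historical = ["yes", "no", "no", "yes"]    # actual community outcomes
clarity_scores = [5, 4, 3, 4]              # Clarity Score rubric, 1-5

# P_align: share of AI recommendations matching historical outcomes
p_align = sum(a == h for a, h in zip(ai_recs, historical)) / len(ai_recs)

# T_transparency: share of explanations scoring >= 4 on the rubric
t_transparency = sum(s >= 4 for s in clarity_scores) / len(clarity_scores)
```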

4.2. Simulation Framework and Dataset

Our evaluation pipeline integrated multiple data sources, including governance proposal metadata (kip_proposals.csv), full proposal texts and voting records (kip_proposals_detailed.csv), and protocol economic indicators (protocol_metrics_all.csv). These datasets were processed to generate temporally aligned inputs enriched with sentiment scores and risk features, providing a robust foundation for analysis.
The end-to-end simulation workflow consisted of four stages: (1) data aggregation, (2) prompt construction (Figure 4), (3) chain-of-thought (CoT) reasoning, and (4) stakeholder-specific recommendation generation. CoT prompting (Figure 5), recognized for its ability to reduce hallucination and improve reasoning clarity [16], was employed to structure AI responses using logical templates aligned with stakeholder perspectives.
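The four-stage workflow can be summarized as a function pipeline. The sketch below is a hedged skeleton only: each stage is a placeholder stub (the field names, values, and the stubbed "LLM call" are all assumptions standing in for the real components).

```python
# Skeleton of the four-stage simulation workflow. Every function body
# here is a placeholder stub with invented values; in the real system
# stage 3 would be an LLM call with a CoT prompt.

def aggregate_data(proposal_id: int) -> dict:        # stage 1: data aggregation
    return {"text": f"KIP-{proposal_id}", "tvl": 1.2e7, "sentiment": 0.8}

def build_prompt(record: dict) -> str:               # stage 2: prompt construction
    return f"{record['text']} | TVL: {record['tvl']:.0f} | sent: {record['sentiment']}"

def cot_reasoning(prompt: str) -> list:              # stage 3: CoT reasoning (stub)
    return ["restate goal", "weigh tradeoffs", "recommend"]

def recommend(steps: list, persona: str) -> str:     # stage 4: stakeholder output
    return f"[{persona}] " + " -> ".join(steps)

rec = recommend(cot_reasoning(build_prompt(aggregate_data(31))), "long-term holder")
```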
Figure 6 illustrates key financial indicators, including total value locked (TVL) and KLIMA token liquidity, from 2021 to 2025, offering critical economic context for governance decision-making. These evolving trends reflect the shifting protocol dynamics that have shaped DAO proposal content and influenced voter behavior. The indicators were integrated as features in AI prompt construction to capture these dynamics, enabling more contextually grounded simulations. In these simulations, we modeled voter responsiveness under the assumption that historically inactive participants would demonstrate a 60% conversion rate when presented with proposals exhibiting sentiment entropy below 0.4—a threshold denoting clear sentiment signals.
To ensure simulation realism, on-chain metrics (Figure 6) were visualized and analyzed to capture governance-relevant trends. This economic context was embedded in the AI prompts to simulate realistic decision scenarios, following participatory AI modeling approaches [25]. Figure 7 depicts multiple governance-relevant metrics. These metrics were structured as prompt inputs to provide temporal and financial context for LLM-generated decisions.
Furthermore, to ensure methodological transparency and minimize the risk of overfitting or fabricated results, all simulation parameters and conversion assumptions were grounded in historical participation data and validated against observed turnout patterns across multiple proposals. Assumptions such as the 60% conversion rate were conservatively set and are fully disclosed in Appendix A and Appendix B to support reproducibility.

4.3. Hallucination Mitigation Through CoT

Addressing hallucination remains a critical concern in governance applications, where factual accuracy is paramount. To assess this, we evaluated hallucination rates across 65 proposals under two prompting conditions, that is, with and without chain-of-thought (CoT) reasoning. In this study, we define the hallucination rate as the proportion of model outputs that contain any of the following errors: (1) incorrect or misreported economic figures, (2) misuse or mistranslation of domain-specific terminology, or (3) internal analytical inconsistencies across successive steps. As shown in Table 2, CoT prompting led to a notable reduction in hallucination, decreasing rates from 60% to 32.31%. This substantial improvement underscores the value of CoT in enhancing the reliability of AI-generated explanations. However, this study does not include a comparative experiment against a human-written summary or a general-purpose AI baseline. While our results validate the effectiveness of our specialized prompting strategy, establishing these external baselines would offer a more comprehensive assessment of the model’s performance and is a key direction for our future work. Accordingly, Table 2 provides a consolidated view of hallucination rates across different prompting strategies, further validating CoT’s effectiveness in governance-oriented AI applications [14,18].
An error rate of 32.31% is comparatively low and operationally acceptable, especially given that our workflow (Figure 8) involves multiple images and a multi-stage chain-of-thought procedure rather than a simple single-turn question–answer exchange. When large language models are deployed in domains with dense specialist terminology—such as medicine or law—their error rates are typically much higher than those reported for open-domain chatbots. Chelli et al. systematically examined LLM performance in clinical contexts and found errors ranging from 28.6% to 91.3% [26]. Likewise, Magesh et al. observed that even domain-specific legal models augmented with retrieval-augmented generation (RAG) still exhibited error rates between 20% and 40% [27]. These benchmarks suggest that our current figure is within an acceptable—and, in fact, competitive—range for high-complexity, expert-level applications.
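Under the paper's definition, an output counts as a hallucination if it contains any of the three error types, so the rate is a simple proportion over labeled outputs. The per-output flags below are invented for illustration.

```python
# Sketch of the hallucination-rate computation: an output is counted as a
# hallucination if ANY of the three error types is present. The labeled
# outputs below are illustrative, not the paper's 65-proposal data.

outputs = [
    {"wrong_figure": False, "term_misuse": False, "inconsistent": False},
    {"wrong_figure": True,  "term_misuse": False, "inconsistent": False},
    {"wrong_figure": False, "term_misuse": True,  "inconsistent": True},
]

hallucinated = sum(any(o.values()) for o in outputs)
rate = hallucinated / len(outputs)   # 2 of 3 outputs here
```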

5. Results

5.1. Participation Uplift Simulation and Findings

Figure 9 presents the simulation results, projecting a 40% increase in voter participation driven by AI-generated explanations. This uplift stems from lowering cognitive barriers through transparent and accessible narratives, enabling members to understand complex proposals better. These findings align with civic engagement literature, emphasizing explanation clarity as a critical factor in mobilizing voters [17,21], and highlight AI’s potential to enhance democratic participation within DAOs.

5.2. Decision Alignment Evaluation

AI-generated recommendations were benchmarked against historical DAO voting records across 65 KlimaDAO improvement proposals (KIPs). As reported in Table 3, the system achieved a 97% alignment rate (95% CI: 0.9285, 1.0000), demonstrating that context-aware AI explanations can mirror collective decision-making outcomes with high fidelity. Such alignment supports prior work on AI-assisted decision augmentation [13,19].

5.3. Governance Transparency and Interpretability

Explanation clarity was further assessed using the Clarity Score rubric. Results (Table 1) show that 69.2% of explanations were rated transparent, outperforming fixed prompts and confirming the value of adaptive prompting. This aligns with prior studies emphasizing the importance of explanation quality in fostering AI adoption and trust in governance contexts [23].

5.4. Sensitivity Analysis of Conversion Rate

We initially assumed a conservative conversion rate of 0.6. To explore how different rates influence participation growth, we also evaluated scenarios of 50%, 55%, 65%, and 70%. Table 4 illustrates how each conversion rate affects the projected participation uplift. The results show a consistently substantial positive impact on engagement, even under the most conservative assumptions tested.
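The sensitivity analysis amounts to recomputing the projected uplift for each candidate conversion rate. The totals below are illustrative assumptions (chosen so that R = 0.60 reproduces a 40% uplift), not the actual dataset values reported in Table 4.

```python
# Sketch of the conversion-rate sensitivity analysis: recompute
# U = (A_total * R) / V_historical * 100% for each candidate R.
# A_total and V_historical are illustrative totals.

a_total = 500        # lapsed voters exposed across high-clarity proposals
v_historical = 750   # historical turnout for those proposals

for r in (0.50, 0.55, 0.60, 0.65, 0.70):
    uplift = a_total * r / v_historical * 100
    print(f"R = {r:.2f} -> projected uplift {uplift:.1f}%")
```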

5.5. Summary of Governance Metrics

Table 5 summarizes the quantitative evaluation results across key governance metrics, including decision alignment, projected voter participation uplift, explanation transparency, and hallucination rates. These consolidated findings demonstrate that integrating chain-of-thought reasoning with stakeholder-targeted prompting enables AI-assisted governance frameworks to overcome comprehension barriers, foster participatory inclusiveness, and enhance decentralized decision-making processes.

6. Discussion

While our case study focuses on token-weighted voting, we anticipate similar AI-support benefits in DAOs that adopt conviction voting, reputation-based governance, or quadratic funding. In such systems, LLMs could help explain the mathematical implications of dynamic vote weighting or cost-curve participation, making governance more accessible.

6.1. Interpretation of Results

While the simulation projects meaningful gains, these results should not be viewed as deterministic. The behavioral uplift model reflects plausible scenarios grounded in observed correlations between explanatory clarity and voter turnout, supported by conservative assumptions.
The AI-assisted framework achieved a 97% alignment rate with historical decisions, validating its ability to internalize and replicate the KlimaDAO community’s governance rationale. Rather than merely reproducing past outputs, the model consistently demonstrated a nuanced understanding of proposal priorities, particularly those concerning treasury policy, token inflation, and yield tradeoffs.
Equally notable is the projected 40% increase in voter participation, attributed to AI-generated explanations lowering comprehension barriers and activating latent token holders. By enhancing clarity, these explanations serve as catalysts for engagement, making complex proposals more accessible.
In parallel, a 35% improvement in governance transparency, measured by Clarity Scores, highlights the effectiveness of chain-of-thought (CoT) reasoning. The structured logic underpinning AI recommendations improves interpretability, fostering trust and reinforcing the perceived legitimacy of decision-making processes.
Acknowledging the limitations of our sentiment analysis approach is essential, particularly regarding the use of DistilBERT as the classifier. Since the entropy score used to evaluate proposal clarity is directly based on this model’s sentiment probability outputs (p_i), any biases or constraints inherent in DistilBERT may influence the clarity assessment. For example, a classifier pre-trained on general language corpora may not fully grasp the specialized financial or technical language commonly found in DAO discussions. This mismatch could result in misinterpretation of sentiment, such as identifying a technically cautious remark as negative, which would distort the entropy score and potentially misclassify certain proposals as having high clarity. Although DistilBERT provides a solid baseline, these dependencies highlight the importance of further model validation and domain-specific fine-tuning in future work.

6.2. Error Analysis and Potential Biases

Our analysis of errors revealed a hallucination rate of 32.31% even with CoT prompting. Most of these errors, however, were minor numerical discrepancies (e.g., predicting 11.5 M instead of 11.25 M) that did not alter qualitative judgments. It is nevertheless critical to look beyond numerical inaccuracies and consider potential systematic biases that could influence the AI’s reasoning, as our current hallucination metrics may not capture them. We identify two primary areas of potential systematic bias:
  • Overemphasis on specific indicators: The LLM, pre-trained on general financial data, might assign disproportionate weight to familiar short-term metrics (e.g., price volatility) over DAO-specific indicators of long-term health (e.g., treasury runway, staking ratios). This could lead to recommendations that favor short-term gains at the expense of protocol sustainability, even if no explicit numerical errors are present.
  • Semantic misinterpretation: DAO governance is rich with domain-specific jargon (e.g., “bonding”, “staking pressure”, and “APY decay”). An LLM might misinterpret these terms based on their definitions in other contexts, leading to flawed logical chains. For example, it could analyze “staking pressure” as a simple supply-demand issue without grasping the complex game-theoretic implications of the protocol.
To address these challenges, we propose several mitigation strategies for future iterations of this framework:
  • Systematic prompt auditing: The prompt templates (as shown in Figure 5) are instrumental in guiding the AI’s focus. We plan to establish a regular auditing process where domain experts review and refine these prompts to ensure they encourage a balanced consideration of all relevant indicators rather than unintentionally directing the AI’s attention toward specific metrics.
  • Human-in-the-loop cross-checking: A human expert could quickly review the AI’s generated reasoning before finalizing a recommendation. This “cross-checking” is not for catching numerical errors but for identifying logical fallacies or semantic misinterpretations that are obvious to an expert but invisible to the model. This aligns with the principles of hybrid intelligence discussed in our literature review.
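The numerical side of this cross-checking can be partially automated before the human review. The sketch below flags figures quoted in an AI rationale that match no source metric within a relative tolerance; the regex, the 5% tolerance, and the helper names are illustrative choices, not part of the deployed pipeline.

```python
import re

def extract_numbers(text):
    """Pull numeric claims (with optional 'M'/'K' suffixes) from a rationale."""
    scale = {"M": 1e6, "K": 1e3, "": 1.0}
    return [float(v) * scale[s.upper()]
            for v, s in re.findall(r"(\d+(?:\.\d+)?)\s*([MKmk]?)", text)]

def flag_discrepancies(ai_text, ground_truth, rel_tol=0.05):
    """Flag AI-quoted figures that match no source metric within rel_tol.

    Minor drift (11.5 M vs. 11.25 M is ~2.2%) passes; larger deviations
    are surfaced for human review before a recommendation is published.
    """
    flagged = []
    for n in extract_numbers(ai_text):
        if not any(abs(n - t) <= rel_tol * t for t in ground_truth):
            flagged.append(n)
    return flagged

truth = [11.25e6]
print(flag_discrepancies("Supply grew to 11.5M tokens", truth))  # → []
print(flag_discrepancies("Supply grew to 14M tokens", truth))    # → [14000000.0]
```

Such a filter only catches numeric drift; the logical and semantic errors described above still require the expert reviewer.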

6.3. Connection to Prior Literature

Our findings contribute to and expand on recent work on AI-assisted governance and public decision-making. Prior studies have demonstrated that LLMs can improve policy accessibility by automatically generating intelligible summaries [28], and foster constructive civic dialogue via AI-guided interventions [29]. Other research has shown that LLMs can help polarized communities reach common ground on contentious issues [30], and that CoT reasoning improves model explainability in legal and regulatory settings [31].
Our study aligns with this work, particularly demonstrating that transparency improves engagement. However, our contribution goes further by introducing a dynamic simulation framework grounded in actual DAO data. Unlike previous efforts focused on static summarization or conversational nudging, our case-based evaluation incorporates economic indicators, voting history, and sentiment analysis to simulate adaptive economic reasoning. This approach offers interpretability and quantifiable impact across diverse governance metrics.
Our findings extend existing research that underscores the role of AI-driven explainability in facilitating more informed decision-making processes [28,29,30]. While these studies highlight AI’s impact on public discourse and policy clarity, our research demonstrates measurable benefits within decentralized governance contexts, providing novel empirical evidence for applying LLMs in blockchain ecosystems.

6.4. Practical Implications and Scalability

The success of our case study demonstrates that LLMs, when guided by structured prompting, can provide trustworthy, stakeholder-sensitive recommendations in complex governance environments. However, resource constraints, such as API latency and cost, may hinder adoption by smaller DAOs. Strategies such as prompt compression, smaller fine-tuned models, or open-source LLM deployment could mitigate these barriers.
We also note that DAO stakeholders are not static; their roles and goals evolve. Future systems should integrate continuous sentiment profiling and behavioral logging to tailor recommendations dynamically. Additionally, human-in-the-loop governance structures remain essential to ensure that AI tools augment, rather than override, collective deliberation. To further aid practitioners in translating these findings into application, we offer the following bullet-pointed guidance for DAO implementers:
  • Pilot on a small scale: DAO teams should begin by piloting CoT-based explanations on a limited subset of non-critical proposals to gauge their effectiveness within their specific community context.
  • Start with high-clarity proposals: Initially, focus on applying AI explanations to proposals already identified as having high sentiment clarity (i.e., low entropy), as these are most likely to benefit from structured, supplementary reasoning.
  • Engage human-in-the-loop for review: As discussed in our bias mitigation strategies, always incorporate a final human review before publishing AI-generated explanations. This is crucial for catching nuanced semantic errors and ensuring the output aligns with the community’s implicit values.
  • Present persona-based explanations: When feasible, generate and present different versions of an explanation tailored to distinct stakeholder personas (e.g., long-term holders vs. short-term traders) to address their varying incentives and concerns directly.
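Persona-based explanations of the kind recommended above can be generated from a single proposal text by varying the prompt emphasis. The persona names and focus phrases below are hypothetical illustrations, not the prompts used in our experiments.

```python
# Hypothetical persona profiles; names and emphases are illustrative only.
PERSONAS = {
    "long_term_holder": "protocol sustainability, treasury runway, staking ratios",
    "short_term_trader": "near-term price impact, yield changes, exit liquidity",
}

def persona_prompt(proposal_text, persona):
    """Wrap a proposal in a persona-specific chain-of-thought instruction."""
    focus = PERSONAS[persona]
    return (
        f"You advise a {persona.replace('_', ' ')}.\n"
        f"Reason step by step, weighting: {focus}.\n"
        f"Proposal:\n{proposal_text}\n"
        "End with: Final Recommended Option and your reasons."
    )

for p in PERSONAS:
    print(persona_prompt("KIP-42: reduce APY to curb inflation.", p), "\n---")
```

Presenting both variants side by side lets each stakeholder group see how the same evidence bears on their particular incentives.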
Moreover, future studies should empirically evaluate and compare the effectiveness of adaptive stakeholder-specific prompting against simpler stakeholder categorization methods. For instance, a critical next step is to assess the reliability of the sentiment classifier itself. This can be achieved through a pilot study involving human annotation of a small, representative sample of forum discussions. By comparing the DistilBERT-generated sentiment scores against those assigned by human experts, we can calculate inter-rater reliability (e.g., using Cohen’s Kappa) and identify any systematic biases. The insights from this validation would be crucial for refining the model and improving the accuracy of the entropy-based clarity scoring. Following this, controlled experiments or user studies are recommended to quantitatively assess the incremental benefits of adaptive approaches, particularly in diverse governance contexts.
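The proposed inter-rater reliability check uses the standard Cohen’s Kappa statistic; a minimal implementation is sketched below. The label sequences are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa between two label sequences
    (e.g., DistilBERT predictions vs. one human annotator)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

model = ["pos", "neg", "pos", "neu", "pos", "neg"]
human = ["pos", "neg", "neu", "neu", "pos", "pos"]
print(round(cohens_kappa(model, human), 3))  # → 0.478
```

A kappa well below typical agreement benchmarks (e.g., under ~0.6) would signal that domain-specific fine-tuning is needed before trusting the entropy-based clarity scores.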

7. Conclusions

This study proposed a structured framework for integrating large language models (LLMs) into decentralized governance processes, with KlimaDAO serving as an empirical case. By combining chain-of-thought (CoT) reasoning, stakeholder-aware prompting, and simulation-based analysis, the proposed system demonstrated measurable improvements across three governance dimensions: decision alignment, participation, and transparency.
Our findings indicate that AI-generated explanations can reduce informational barriers and foster broader engagement in DAO environments. Specifically, the system achieved a 97% decision alignment rate with historical outcomes and projected a 40% uplift in voter participation, underscoring the potential of CoT-based reasoning to increase comprehension and trust in AI-assisted recommendations.
Beyond validating the immediate benefits, this research advances decentralized governance studies by offering one of the first quantitative simulation models to assess AI’s systemic impact in a DAO context. Unlike prior work that predominantly emphasizes static summarization or normative deliberation, our framework integrates dynamic protocol indicators, stakeholder-targeted prompting, and reflective AI reasoning to address real-world governance challenges.
Future research directions should evaluate the framework’s generalizability across diverse DAO structures, including alternative voting mechanisms (e.g., conviction voting, quadratic funding) and heterogeneous stakeholder dynamics. Our immediate next step is to establish more robust performance baselines. We plan to conduct a pilot study to approximate a human-level comparison. This study will involve the following steps:
  • Sampling: A small, representative set of proposals (e.g., 5–10 KIPs with varying complexity) will be selected from our dataset.
  • Human annotation: We will engage 2–3 domain experts to manually write summaries for these proposals, outlining the key arguments, risks, and potential outcomes.
  • Baseline AI generation: The proposals will be summarized using a general-purpose AI model with a non-targeted, generic prompt.
  • Comparative analysis: We will then quantitatively and qualitatively compare the outputs from our CoT-based system against the human-written summaries and the general AI baseline, focusing on metrics such as factual accuracy, hallucination rate, and the clarity of the reasoning provided.
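The hallucination-rate metric used in this comparison reduces to a simple proportion; the sketch below reproduces the figures reported in Table 2 (39 and 21 erroneous analyses out of 65 proposals).

```python
def error_rate(errors, total):
    """Share of analyzed proposals whose AI output contained an error."""
    return errors / total * 100

# Figures from Table 2 (65 analyzed proposals).
print(f"Without CoT: {error_rate(39, 65):.2f}%")  # → 60.00%
print(f"With CoT:    {error_rate(21, 65):.2f}%")  # → 32.31%
```

The same computation applied to the pilot’s human-written and generic-AI baselines would make the three conditions directly comparable.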
Furthermore, to address the question of external validity and generalizability, we will extend this pilot into a second phase focusing on user-centric evaluation. This phase will involve a small-scale A/B test with blockchain-savvy participants to gauge the impact of our AI-generated explanations on comprehension and intended voting behavior. The design is as follows:
  • Participant recruitment: We will recruit participants with experience in DAO governance or cryptocurrency ecosystems.
  • A/B testing protocol: Participants will be randomly assigned to two groups. The control group will receive original, historical KlimaDAO proposals. The treatment group will receive the same proposals supplemented with our AI-generated explanations.
  • Data collection: After reviewing the materials, both groups will complete a survey designed to measure (a) their objective understanding of the proposal’s content, (b) their self-reported confidence in their decision, and (c) their intended vote (for, against, or abstain).
  • Analysis: We will compare the outcomes between the two groups to quantitatively assess whether AI-generated explanations lead to higher comprehension, greater decision confidence, and a change in voting propensity.
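For the between-group comparison, one simple option is a two-proportion z-test on comprehension-question pass rates in the control and treatment arms. This is a sketch, not the committed analysis plan; the participant counts and pass rates below are invented.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test (pooled standard error)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF expressed with math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented example: 40 participants per arm, 18 vs. 29 correct answers.
z, p = two_proportion_z(18, 40, 29, 40)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With the small samples a pilot implies, an exact test (e.g., Fisher’s) may be preferable; the z-test is shown only because it is self-contained.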
Including this user-focused experiment will allow us to move beyond performance metrics and evaluate the framework’s practical utility in fostering more informed and engaged governance. This pilot will provide a crucial benchmark to better quantify the added value of our structured AI approach. Additionally, longitudinal studies and field experiments are needed to assess the sustained behavioral effects of AI-assisted decision-making and to refine adaptive prompting techniques for evolving governance ecosystems.
Overall, this study lays foundational work toward hybrid AI–human collaboration models in decentralized governance, offering actionable insights for DAO designers, protocol developers, and governance scholars seeking to build more inclusive and transparent decision-making systems.

Author Contributions

Conceptualization, Y.-C.T.; methodology, Y.-C.T.; software, J.-H.C.; validation, Y.-C.T.; formal analysis, J.-H.C.; investigation, J.-H.C. and C.-W.H.; resources, C.-W.H.; data curation, C.-W.H.; writing—original draft preparation, J.-H.C.; writing—review and editing, Y.-C.T.; visualization, J.-H.C.; supervision, Y.-C.T.; project administration, Y.-C.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated and analyzed during the current study are available at https://github.com/bigmoumou/LLM_DAO_Governance (accessed on 15 June 2025).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Appendix A. Simulation Assumptions and Parameterization

  • Voter participation model: Simulated uplift assumes that among historically non-voting addresses, 60% were exposed to proposals with low or unclear sentiment. Based on NLP classification, we found that 66% of these users would have participated if given clearer AI explanations, resulting in a projected 40% net increase.
  • Sentiment classifier: A fine-tuned DistilBERT model (F1 = 0.87) labeled forum posts linked to each KIP proposal.
  • Transparency rubric: Governance explanations were scored (0–5) across the following dimensions: rationale depth, clarity, comparative justification, stakeholder-specificity, and outcome forecast. See Table A1 for examples.

Appendix B. Simulation Assumptions and Sentiment-Engagement Model

  • Lapsed voter identification: Token holders who voted actively in the past but did not vote on the “current” proposal.
  • Engagement sensitivity: Empirical uplift correlation based on sentiment clarity entropy.
  • Response factor: 60% of abstainers with high sentiment clarity are assumed to vote if given AI explanations.
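The uplift arithmetic implied by these assumptions is the product of two rates. The sketch below reproduces the headline ~40% figure and the sensitivity sweep of Table 4; the fixed complementary factor of roughly two-thirds is inferred from Table 4, not stated explicitly in the text.

```python
def projected_uplift(exposed_share, response_rate):
    """Net participation uplift among lapsed voters: the share facing
    clear proposals times the share of those assumed to respond."""
    return exposed_share * response_rate

# Appendix A parameters: ~66% exposure x 60% response ≈ 40% net uplift.
print(f"{projected_uplift(0.66, 0.60):.0%}")  # → 40%

# Sensitivity sweep matching Table 4 (fixed factor taken as 2/3).
for conv in (0.50, 0.55, 0.60, 0.65, 0.70):
    print(f"{conv:.0%}: {projected_uplift(2/3, conv):.1%}")
```

Because the model is multiplicative, the projected uplift scales linearly in either rate, which is why the sensitivity analysis varies one factor while holding the other fixed.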

Appendix C. Clarity Scoring Rubric and Examples

Table A1. Example of clarity scoring for a proposal explanation.

Dimension | Criteria Fulfilled
Stepwise reasoning | “Token burn reduces inflation pressure → stabilizes long-term yield.” → Score: 2
Comparative evaluation | “Option A increases treasury value but risks liquidity; Option B offers a safer yield.” → Score: 2
Final recommendation | “Therefore, Option B is preferred for sustainable growth.” → Score: 2
Total score | 6/6
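The rubric can be operationalized as a simple sum. This sketch follows the three-dimension, 0–2-per-dimension reading of Table A1 (total 0–6), with the ≥4 “transparent” cutoff from Table 1; the dimension keys are illustrative names, not fixed identifiers.

```python
def clarity_score(dimension_scores):
    """Sum per-dimension scores (0-2 each, per Table A1) and apply the
    Table 1 transparency cutoff (total >= 4 out of 6)."""
    total = sum(dimension_scores.values())
    return total, total >= 4

score, transparent = clarity_score({
    "stepwise_reasoning": 2,      # explicit causal chain
    "comparative_evaluation": 2,  # weighs Option A against Option B
    "final_recommendation": 2,    # clear concluding recommendation
})
print(score, transparent)  # → 6 True
```

Note that Appendix A lists a slightly different five-dimension rubric; the two would need to be reconciled before automating the scoring.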

Appendix D. Full Example of AI-Generated Governance Explanation

  • Proposal: KIP-42
  • AI Rationale (Long-Term Holder View):
  • “The current treasury runway has declined by 18% in the past quarter due to increasing liquidity outflows. Option A proposes reducing APY to curb inflation. Based on projected demand and existing liquidity buffers, this tradeoff favors long-term sustainability. Therefore, I recommend voting for Option A.”
  • AI Rationale (Short-Term Trader View):
  • “Option B retains a higher yield, but risks depleting the treasury within six months. Given recent market volatility, short-term value extraction remains feasible, though risk tolerance must be high. For short-term gains, Option B is preferable.”

Appendix E. Full Text of Prompts

# Background
The proposal for this vote is as follows: KIP title
Below is a detailed description of the proposal: KIP proposals
# Economic Conditions
We generated four charts to illustrate the current economic conditions, which are attached:
  • Chart 1, eco1.png (Liquidity Indicators): shows the treasury solvency ratio, short-term redemption capacity, and liquidity depth.
  • Chart 2, eco2.png (Unstaking Risk Indicators): shows the staking coverage ratio and staking deflationary pressure.
  • Chart 3, eco3.png (Runaway Inflation Indicators): shows trends in the inflation rate and market capitalization.
  • Chart 4, eco4.png (Market Cap Bubble Indicators): shows 30-day and 90-day rolling volatility and TVL trends.
Supplement: each chart contains two dark gray bars representing the time of the last vote (left bar) and the current vote (right bar). These can be used as a baseline to more accurately capture differences over time.
# Voting Options
Please analyze the following two options:
  • Option A: YES: I agree, create the program
  • Option B: NO: I disagree, don’t create
# Previous Voting Results
The result of the previous community vote was the last vote option (last vote ratio%); please compare this with your previous recommendation. Observe if the previous recommendation was inappropriate. If not, no further discussion is needed. If it was, please conduct a review and incorporate the review results into this discussion.
# Request
Please carefully read the proposal content, economic condition charts, and voting options above, and then conduct an in-depth analysis based on this information. Please use a chain-of-thought approach to list your considerations for each option, paying special attention to the following points:
  • Evaluate the impact of the different options on inflation rate, price trends, treasury health, and market confidence. (After listing the results, please double-check for any errors in numerical data.)
  • Please explain step-by-step how you use the economic indicators shown in the charts (such as liquidity, unstaking risk, inflation indicators, and market cap bubble indicators) to support your analysis. Analyze charts one, two, three, and four step-by-step, as these represent the current economic conditions.
  • Quantify your judgment of changes in economic data (e.g., expected inflation rate to decrease from 200% to 150%), and evaluate how different choices will affect economic conditions.
  • Review phase: Check if the analysis results so far deviate from the numerical values given in “economic conditions”. If there are discrepancies, please return to the first step and perform the analysis again. If there are no errors, please continue.
  • Based on your analysis, clearly recommend one option and explain your reasoning.
  • Confirm if the analysis results conflict with current interests. If not, please continue. If there is a conflict, please re-analyze considering the current user’s interests. Users pursuing long-term interests seek the sustainable development of KLIMA, while users pursuing short-term interests seek to realize profits in the short term.
  • Clearly recommend one option and explain your reasoning. Please provide detailed step-by-step explanations and conclusions to serve as a reference for voting decisions.
  • Finally, re-examine the recommendations based on the interests of long-term or short-term investors for any logical flaws in the reasoning. If none are found, proceed. If any exist, return to Step 6 and repeat the analysis.
# Finally, please output in the following format: Final Recommended Option: A, B, C, etc., for the following reasons:
  • The impact of this choice on economic conditions;
  • Whether the chosen result is for long-term or short-term benefits, and why;
  • Through reverse verification, choosing other options would lead to negative consequences;
  • Whether there is a difference between the previous community vote result and the previous recommendation, and if so, whether it affects this vote;
  • Other supplements.
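Operationally, the prompt above can be assembled from its sections with straightforward templating. The field names below are hypothetical placeholders standing in for the bracketed slots (KIP title, proposal body, prior vote result); this is a sketch, not the production code.

```python
# Hypothetical field names mirroring the Appendix E prompt structure.
PROMPT_TEMPLATE = """# Background
The proposal for this vote is as follows: {kip_title}
Below is a detailed description of the proposal: {kip_body}
# Economic Conditions
Four charts are attached: {chart_list}
# Voting Options
{options}
# Previous Voting Results
The result of the previous community vote was {last_result} ({last_ratio}%).
# Request
{cot_instructions}"""

def build_prompt(**fields):
    """Assemble the full chain-of-thought governance prompt from its sections."""
    return PROMPT_TEMPLATE.format(**fields)

prompt = build_prompt(
    kip_title="KIP-42",
    kip_body="Reduce APY to curb inflation.",
    chart_list="eco1.png, eco2.png, eco3.png, eco4.png",
    options="Option A: YES ... Option B: NO ...",
    last_result="Option A",
    last_ratio=72,
    cot_instructions="Use a chain-of-thought approach ...",
)
print(prompt.splitlines()[0])  # → # Background
```

Keeping each section a separate field makes the prompt auditable piece by piece, which supports the systematic prompt auditing proposed in Section 6.2.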

Appendix F. AI Tools Utilized

We employed advanced AI tools to enhance this manuscript’s clarity, grammar, and overall quality during the writing and revision process. In particular, we used Grammarly [32] to perform language editing and proofreading. Additionally, we utilized ChatGPT [33], a large language model developed by OpenAI, to assist in drafting, organizing, and refining the content of this paper. Both tools were accessed in February 2025 and played a crucial role in improving the final document.

References

  1. Wang, S.; Ding, W.; Li, J.; Yuan, Y.; Ouyang, L.; Wang, F.Y. Decentralized autonomous organizations: Concept, model, and applications. IEEE Trans. Comput. Soc. Syst. 2019, 6, 870–878. [Google Scholar] [CrossRef]
  2. Bellavitis, C.; Fisch, C.; Momtaz, P.P. The Rise of Decentralized Autonomous Organizations (DAOs): A First Empirical Glimpse. Venture Capital 2022, 24, 1–25. [Google Scholar] [CrossRef]
  3. Davidson, S. The nature of the decentralised autonomous organisation. J. Institutional Econ. 2025, 21, e5. [Google Scholar] [CrossRef]
  4. Tamai, S.; Kasahara, S. DAO voting mechanism resistant to whale and collusion problems. Front. Blockchain 2024, 7, 1405516. [Google Scholar] [CrossRef]
  5. KlimaDAO. KlimaDAO Governance Proposals on Snapshot. Available online: https://snapshot.org/#/klimadao.eth (accessed on 1 February 2025).
  6. KlimaDAO. KLIMA Token Tracker. Available online: https://polygonscan.com/token/0x4e78011ce80ee02d2c3e649fb657e45898257815 (accessed on 3 June 2025).
  7. Jirásek, M. Klima DAO: A crypto answer to carbon markets. J. Organ. Des. 2023, 12, 271–283. [Google Scholar] [CrossRef]
  8. Floridi, L. (Ed.) Ethics, Governance, and Policies in Artificial Intelligence; Philosophical Studies Series; Springer: Cham, Switzerland, 2021; Volume 144, pp. XII, 394. [Google Scholar] [CrossRef]
  9. Gamage, G.; Silva, D.D.; Mills, N.; Alahakoon, D.; Manic, M. Emotion AWARE: An artificial intelligence framework for adaptable, robust, explainable, and multi-granular emotion analysis. J. Big Data 2024, 11, 93. [Google Scholar] [CrossRef]
  10. Peña Calvín, A.; Duenas-Cid, D.; Ahmed, J. Is DAO Governance Fostering Democracy? Reviewing Decision-Making in Decentraland. In Proceedings of the 58th Hawaii International Conference on System Sciences, Hilton Waikoloa Village, HI, USA, 7–10 January 2025; HICSS Conference Office: Waikoloa, HI, USA, 2025. [Google Scholar]
  11. Wang, Q.; Yu, G.; Sai, Y.; Sun, C.; Nguyen, L.D.; Chen, S. Understanding DAOs: An empirical study on governance dynamics. IEEE Trans. Comput. Soc. Syst. 2025, 1–19. [Google Scholar] [CrossRef]
  12. OECD. Tackling Civic Participation Challenges with Emerging Technologies; Technical Report; OECD Publishing: Paris, France, 2025. [Google Scholar]
  13. Thornton, G. AI Oversight: Bridging Technology and Governance. Grant Thornton Insights, 13 October 2023. [Google Scholar]
  14. Doshi-Velez, F.; Kim, B. Towards a rigorous science of interpretable machine learning. arXiv 2017, arXiv:1702.08608. [Google Scholar]
  15. Wang, D.; Yang, Q.; Zhang, Y. Explainable AI: A review of methods and applications. Inf. Fusion 2022, 73, 1–35. [Google Scholar] [CrossRef]
  16. Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Ichter, B.; Xia, F.; Chi, E.; Le, Q.; Zhou, D. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. arXiv 2022, arXiv:2201.11903. [Google Scholar]
  17. McKinney, S. Integrating Artificial Intelligence into Citizens’ Assemblies: Benefits, Concerns, and Future Pathways. J. Deliberative Democr. 2024, 20, 45–60. [Google Scholar] [CrossRef]
  18. Consortium, E. Decentralized Governance of AI Agents. arXiv 2025, arXiv:2412.17114. [Google Scholar]
  19. Shneiderman, B. Human-Centered Artificial Intelligence: Three Fresh Ideas. ResearchGate. 2022. Available online: https://aisel.aisnet.org/thci/vol12/iss3/1/ (accessed on 1 February 2025).
  20. Zhang, B. U.S. Public Assembly on High Risk Artificial Intelligence; Technical Report; Center for New Democratic Processes: Saint Paul, MN, USA, 2023. [Google Scholar]
  21. Fest, I.C. Jérôme Duberry (2022) Artificial Intelligence and Democracy: Risks and Promises of AI-mediated citizen–government relations, Edward Elgar: Cheltenham. Inf. Polity 2023, 28, 435–438. [Google Scholar] [CrossRef]
  22. Yousefi, Y. Approaching Fairness in Digital Governance: A Multi-Dimensional, Multi-Layered Framework. In Proceedings of the 17th International Conference on Theory and Practice of Electronic Governance, Pretoria, South Africa, 1–4 October 2024; pp. 41–47. [Google Scholar]
  23. Sieber, R.; Brandusescu, A.; Sangiambut, S.; Adu-Daako, A. What is civic participation in artificial intelligence? Environ. Plan. Urban Anal. City Sci. 2024, 51, 1234–1256. [Google Scholar] [CrossRef]
  24. Doran, D.; Schulz, S.; Besold, T.R. What Does Explainable AI Really Mean? A New Conceptualization of Perspectives. arXiv 2017, arXiv:1710.00794. [Google Scholar]
  25. Zhang, A.; Walker, O.; Nguyen, K.; Dai, J.; Chen, A.; Lee, M.K. Deliberating with AI: Improving Decision-Making for the Future through Participatory AI Design and Stakeholder Deliberation. arXiv 2023, arXiv:2302.11623. [Google Scholar] [CrossRef]
  26. Chelli, M.; Descamps, J.; Lavoué, V.; Trojani, C.; Azar, M.; Deckert, M.; Raynier, J.L.; Clowez, G.; Boileau, P.; Ruetsch-Chelli, C. Hallucination rates and reference accuracy of ChatGPT and Bard for systematic reviews: Comparative analysis. J. Med. Internet Res. 2024, 26, e53164. [Google Scholar] [CrossRef] [PubMed]
  27. Magesh, V.; Surani, F.; Dahl, M.; Suzgun, M.; Manning, C.D.; Ho, D.E. Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools. J. Empir. Leg. Stud. 2025. forthcoming. Available online: https://arxiv.org/abs/2405.20362 (accessed on 1 February 2025).
  28. Carullo, G. Large Language Models for Transparent and Intelligible AI-Assisted Public Decision-Making. CERIDAP 2023, 3, 100. [Google Scholar] [CrossRef]
  29. Argyle, L.P.; Bail, C.A.; Busby, E.C.; Gubler, J.R.; Howe, T.; Rytting, C.; Sorensen, T.; Wingate, D. Leveraging AI for Democratic Discourse: Chat interventions can improve online political conversations at scale. Proc. Natl. Acad. Sci. USA 2023, 120, e2311627120. [Google Scholar] [CrossRef] [PubMed]
  30. Tessler, M.H.; Bakker, M.A.; Jarrett, D.; Sheahan, H.; Chadwick, M.J.; Koster, R.; Evans, G.; Campbell-Gillingham, L.; Collins, T.; Parkes, D.C.; et al. AI can help humans find common ground in democratic deliberation. Science 2024, 386, eadq2852. [Google Scholar] [CrossRef] [PubMed]
  31. Fujita, M.; Onaga, T.; Kano, Y. LLM Tuning and Interpretable CoT: KIS Team in COLIEE 2024. In New Frontiers in Artificial Intelligence (JSAI-isAI 2024); Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2024; Volume 14741, pp. 140–155. [Google Scholar] [CrossRef]
  32. Grammarly, Inc. Grammarly. Available online: https://www.grammarly.com (accessed on 1 February 2025).
  33. OpenAI. ChatGPT (February 2025 Version) [Large Language Model]. Available online: https://chat.openai.com (accessed on 1 February 2025).
Figure 1. Voting participation trends based on unique addresses over time. Each data point represents a KIP voting event sourced from Snapshot KlimaDAO. The figure highlights three distinct periods, as follows: the initial surge during protocol launch, gradual decline as KIPs reduced APY, and sporadic peaks in response to major reforms like KIP-31. However, participation remained low even for crucial proposals like KIP-65, likely due to the increasing complexity of governance decisions. To establish a baseline for calculating participation rates, we used the number of unique KLIMA holders reported on PolygonScan as of 3 June 2025, as historical holder data was not freely available. This figure is a conservative estimate, representing a relatively low point in holder numbers. Consequently, the calculated participation rates presented here are likely overestimated [5,6].
Figure 2. Clarity Score evaluation flow for governance recommendations.
Figure 3. Clarity Score evaluation flow for governance recommendations.
Figure 4. Initial prompt integrating structured metrics and sentiment insights.
Figure 5. Chain-of-thought instruction template used for governance proposals.
Figure 6. Trend of total value locked and total KLIMA in LP from 2021 to 2025. These indicators were used as features in AI prompt construction. Simulation parameters assumed a 60% conversion rate for historically inactive voters exposed to proposals with sentiment entropy below 0.4, which were classified as having clear sentiment signals.
Figure 7. Time series of key economic metrics used as simulation inputs. This figure illustrates (a) the market capitalization of KLIMA, (b) the total value locked (TVL) of the protocol, (c) the market value of the treasury, (d) the price of the KLIMA token, and (e) the total amount of unstaked KLIMA tokens.
Figure 8. Workflow diagram illustrating the pipeline steps in the AI governance support system.
Figure 9. Projected increase in voter turnout resulting from AI-generated explanations. The simulation estimates a 40% rise in participation among historically inactive token holders, driven by improved clarity and accessibility of proposal content. The estimated 40% voter turnout uplift reflects an assumed response from previously inactive stakeholders, contingent on exposure to AI-generated transparent explanations identified through our Clarity Score rubric.
Table 1. Governance transparency scoring results.

Clarity Score (0–6) | Proposal Count | Percentage
4–6 (Transparent) | 45 | 69.2%
0–3 (Not Transparent) | 20 | 30.8%
Average Clarity Score | 4.2 | Out of maximum 6
Table 2. Hallucination rate comparison.

Configuration | Errors (n = 65) | Error Rate
Without CoT | 39 | 60.00%
With CoT | 21 | 32.31%
Table 3. Alignment of AI recommendations with historical community decisions.

Metric | Value | Description
Total Proposals Analyzed | 65 | All KIP-1 to KIP-65 proposals
Aligned Recommendations | 63 | AI matched historical majority
Alignment Rate (P_align) | 97% | Accuracy of AI predictions
Table 4. Sensitivity analysis of the conversion rate on participation uplift.

Conversion Rate | Projected Participation Uplift
50% | 33.3%
55% | 36.7%
60% | 40.0%
65% | 43.3%
70% | 46.7%
Table 5. Summary of governance metrics and simulation results.

Metric | Value | Explanation
Decision Alignment (P_align) | 97% | AI matched 63 of 65 historical decisions
Voter Engagement Uplift (P_engagement) | +40% | Projected from sentiment-based engagement simulation
Governance Transparency (T_transparency) | +35% | 69% of proposals scored ≥ 4 in clarity
Hallucination Rate (with CoT) | 32.31% | Reduced from 60% without CoT
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chen, J.-H.; Hsu, C.-W.; Tsai, Y.-C. Intelligent Decentralized Governance: A Case Study of KlimaDAO Decision-Making. Electronics 2025, 14, 2462. https://doi.org/10.3390/electronics14122462

