Sustainability
  • Review
  • Open Access

19 December 2025

Multi-Criteria Decision Analysis Framework for Evaluating Tools Supporting Renewable Energy Communities

1 Institute of Industrial Electronics, Electrical Engineering and Energy, Riga Technical University, LV-1048 Riga, Latvia
2 Electric Power Engineering Department, Sumy State University, 40007 Sumy, Ukraine
3 Energy Institute, University College Dublin, D04 V1W8 Dublin, Ireland
* Author to whom correspondence should be addressed.

Abstract

Renewable energy communities are emerging as key players in the sustainable energy transition, yet there is a lack of systematic approaches for evaluating the digital tools that support their development and operation. This study proposes a comprehensive methodology for assessing tools that support renewable energy communities, based on a system of key performance indicators and a multi-criteria decision analysis framework. Twenty-three specific sub-criteria were defined and scored for each tool, and a weighted sum model was applied to aggregate performance. To ensure robust comparison, criteria weights were derived using both expert judgement (the rank order centroid method and the analytic hierarchy process) and objective data-driven methods (the entropy-based method and the criteria importance through intercriteria correlation weighting method). The framework was applied to a diverse sample of contemporary renewable energy community tools, including open-source, commercial, and European Union project tools. Key findings indicate that some tools show noticeable rank shifts between expert-weighted and data-weighted evaluations, reflecting that expert judgement emphasizes technical and operational features while objective variability elevates environmental and economic criteria. This assessment enables stakeholders to compare energy community tools based on structured criteria, offering practical guidance for tool selection and highlighting areas for future improvement.

1. Introduction

The transition to renewable energy is a central priority of modern energy policy, driven by climate goals, energy security concerns, and declining technology costs. Renewable energy sources (RES) are now recognized as essential for building sustainable and resilient energy systems. At the European Union (EU) level, directives such as the Renewable Energy Directive (EU) 2018/2001 [1] and its amendment Directive 2023/2413 [2], together with the Internal Electricity Market Directive (EU) 2019/944 [3], set binding targets for renewable deployment and promote citizen participation in community-based initiatives.
Within this broader framework, Renewable Energy Communities (RECs) and Citizen Energy Communities (CECs) have emerged as key instruments for local empowerment, active consumer involvement, and the integration of distributed renewables. The concept of RECs represents a more participative and democratic model for organising and managing energy systems, reflecting the growing societal interest in alternative, citizen-driven arrangements. RECs are distinguished by several core principles: inclusive and voluntary governance, citizen-centric ownership and effective control, and a primary purpose oriented towards social and environmental benefits rather than financial profit [4]. Accelerating REC development is not only a policy goal but also a socio-technical necessity for achieving the EU’s 2040 climate neutrality targets, as decentralized systems enhance resilience and citizen engagement [4].
As decentralized energy production expands, demand for planning and management tools has also increased. They have become essential for efficient, optimized, and well-controlled implementation of energy communities (ECs) within energy systems, providing technological support for system optimization, energy modelling, and data management to ensure effective operation and help achieve social and economic benefits for the communities [5]. Various comparisons and evaluations of energy modelling tools (EMTs) and cases of their use can be found in the recent academic literature. Reviews are usually focused on software tools for urban energy system modelling [6], urban building energy monitoring [7], and hybrid energy system designs [8].
For instance, the authors of [9] provide a valuable initial overview of digital tools supporting RECs, although the analysis remains primarily descriptive and limited to qualitative classification across design, creation, and operation phases. The evaluation does not apply a formalized scoring system, a weighting mechanism, or a structured decision-support framework, which restricts reproducibility and comparability across tools. Additionally, the scope of assessment omits key functional areas that are increasingly critical for REC implementation, such as EV integration, demand response aggregation, user-centric design features, and support for socio-legal frameworks. No attention is given to usability aspects, participatory tools, or how tools address localized regulatory or governance contexts—all of which are essential for real-world community adoption and scalability. In this way, while the study offers a broad typological landscape of available tools, it leaves unaddressed the question of which ones perform best under varying user priorities or functional requirements.
The authors of [10] propose a qualitative evaluation approach for energy system modelling frameworks, examining properties like transparency (open-source), collaborative potential, and structural flexibility. Their approach is tailored to the structural and philosophical characteristics of modelling tools—primarily at the developer and framework level—and therefore lacks consideration of emerging themes crucial to REC tools, such as social participation, regulatory adaptation layers, and open interface usability.
A systematic review in [11] offers a timely and structured analysis of twelve EMTs relevant to RECs; the focus lies predominantly on the representation of data inputs, simulation features, and technical outputs across a general REC planning workflow. Although the study identifies which tools support techno-economic and, to a lesser extent, spatial and environmental modelling, it does not apply a formal scoring system or weighting mechanism to evaluate tool performance systematically. The assessment is qualitative in nature and does not establish the relative importance of each criterion. In addition, the review highlights—but does not deeply explore—the lack of social participation tools, EV integration, or user-oriented outputs such as dashboards and governance support. Moreover, the authors do not benchmark tools using real-world operational criteria such as interoperability, usability, or readiness for community deployment, leaving a gap in actionable decision support for creators of RECs.
Paper [12] emphasizes that most academic modelling of EC focuses primarily on technical and economic outputs, often neglecting dimensions such as co-creation, educational support, and citizen empowerment. This mirrors gaps identified in digital tool assessments, where features enabling collaborative design, shared decision-making, and local capacity-building are rarely incorporated. The authors of [13], through qualitative interviews, further illustrate the evolving role of digital mediation within RECs. Their study finds that while tools increasingly support technical functions like monitoring and demand response optimization, stakeholders express a clear demand for more participatory and relational features. These include capabilities for peer knowledge exchange, shared project initiation, and coordination with external institutional actors—functionalities that are largely absent in current tool offerings.
Article [14] offers a technically detailed review of open datasets and tools for local ECs, serving as a valuable technical catalogue. However, it lacks a systematic evaluation of platform usability and practical applicability from a stakeholder perspective. Key dimensions such as user participation, regulatory adaptability, and educational support are overlooked, and no structured comparison or decision-support framework is provided to guide tool selection based on multidimensional performance.
Across the literature, a range of methodological approaches have been used to evaluate and compare EMTs. A common approach is feature-based benchmarking: many reviews compile a checklist of functionalities and assess which tools have which features. For example, Ref. [9] maps each tool against services needed in each project phase (design, creation, operation), essentially creating a feature matrix to identify strengths and gaps. Similarly, Ref. [11] compares tools by the types of inputs they require and the outputs they produce (technical, financial, environmental, spatial), using a structured comparison framework. These studies rely on document analysis and tool documentation to score capabilities, often presented in comparative tables. Another technique employed is multi-criteria decision analysis (MCDA). Some reviews explicitly rank or score tools against multiple criteria. For instance, broader energy system model evaluations have introduced qualitative scoring on dimensions like transparency, complexity handling, and collaboration support. According to [15], stakeholder integration in energy system modelling remains limited, as most frameworks still prioritize technical and economic performance over participatory and social dimensions. Their systematic review, based on a SWOT analysis of more than eighty studies, highlights that current modelling tools rarely include mechanisms for co-design, user feedback, or stakeholder interaction throughout the modelling process. The authors argue that this technocentric orientation reduces the applicability of such tools for community-level energy planning and decision-making. They identify opportunities for improvement through the adoption of hybrid approaches—combining MCDA, participatory modelling, and digital interfaces—to better capture local priorities, social acceptance, and behavioural factors. These findings reinforce the relevance of incorporating social and participatory criteria into the KPI framework used to evaluate REC tools.
Some researchers have taken a case-study simulation approach to tool evaluation, in which tools are applied to a common scenario to compare results and user experience. For example, Ref. [16] conducted a practical comparison of an optimization model and a simulation model for the same municipal energy system, highlighting differences in outcomes and modelling effort. Though not focused on REC tools per se, this demonstrates the value of side-by-side case studies to reveal how the choice of tools can influence planning recommendations.
Another important methodological dimension concerns the integration of stakeholder perspectives and usability evaluations into tool assessment. Although relatively uncommon, some studies have included surveys or interviews to capture user feedback regarding the functionality and applicability of EMTs. For instance, the Finnish study on digital mediation within RECs collected practitioner insights on missing or underdeveloped platform features, highlighting the need for participatory design and relational functionalities [13]. Moreover, the authors of paper [15] have called for the inclusion of user-centred evaluation criteria—such as interface usability, documentation quality, and the learning curve—within tool benchmarking frameworks. Nevertheless, quantitative usability metrics, such as the time required to configure simulations, the frequency of operational errors, or user satisfaction ratings, remain largely absent from the comparative literature. This methodological gap indicates that most reviews still assess tools based on documented functionalities rather than empirical user testing, limiting the understanding of real-world performance and accessibility.
Despite the proliferation of digital tools designed to support RECs, existing evaluations remain fragmented and limited in scope. Prior studies have primarily focused on technical modelling capabilities or high-level descriptions of tool functionalities, often omitting critical dimensions such as environmental performance, social engagement, and regulatory adaptability. Key reviews in the field have revealed several recurring gaps: the limited integration of electric mobility and multi-vector energy systems, insufficient support for participatory design and citizen interaction, and an overarching emphasis on feasibility indicators at the expense of broader sustainability and usability considerations. Moreover, while some frameworks qualitatively assess platform features, they often lack reproducible scoring methodologies, structured prioritization of evaluation criteria, and actionable insights for community stakeholders. Crucially, no existing study provides a comprehensive, multi-domain comparison of REC tools using a formalized KPI and weighting framework. The absence of reproducible scoring and stakeholder-sensitive prioritization leaves a significant gap in practical decision support for real-world REC development.
To address these gaps, this study introduces a structured and reproducible evaluation framework that ranks REC tools according to multiple stakeholder-valued criteria. The framework combines key performance indicators (KPIs) with MCDA and spans six functional criteria—technical, operational, economic, environmental, social, and quality and adoption—disaggregated into 23 measurable sub-criteria. Stakeholder-weighted aggregation is performed with the weighted sum model (WSM) [17] to capture the priorities of non-expert community members. Each criterion is scored on a normalized scale, and aggregated scores are calculated using both expert-informed and data-driven weighting methods. This dual-layered approach enables sensitivity analysis to reflect varying stakeholder priorities and enhances transparency in the comparison of tools.
The main contributions of this paper are summarized as follows:
  • A comprehensive KPI–MCDA framework for evaluating REC software across technical, economic, environmental, social, usability, and governance criteria is presented;
  • Multiple weighting methods (stakeholder-derived and data-driven) are incorporated to enhance transparency and replicability;
  • The framework is demonstrated on representative tools to highlight underexplored features such as EV integration, participatory co-design modules, usability and transparency metrics, and legal adaptability.
By structuring evaluation in this way, our approach moves beyond prior studies to provide actionable, multidimensional performance scores that reflect the diverse needs of REC practitioners and community stakeholders.
The remainder of the paper is structured as follows: Section 2 presents the methodology, including tool selection criteria, the KPI calculation and the MCDA process; Section 3 reports the evaluation results across a sample of tools; the discussion and the conclusions are summarised in the last two sections.

2. Methodology

The selection of the tools was guided by two eligibility criteria designed to ensure that each entry could be evaluated transparently and on equal methodological footing. First, a platform or tool had to be documented through an official, publicly accessible source—such as a project website, product page, user or administrator manual, or open repository—detailing functionalities relevant to REC. Second, the platform had to be accessible for independent inspection, either via a public demo or sandbox, a time-limited trial, an openly available installer or codebase, or sufficiently detailed technical documentation to enable functional verification without vendor mediation. Solutions were excluded when they failed to satisfy either of these two conditions.
Using this procedure, the authors first compiled a longlist of 30 REC-related tools from peer-reviewed articles, EU project deliverables, and structured web searches. To capture practice-oriented tools under-represented in journals, the authors complemented the literature scan by systematically querying official project and vendor websites, documentation portals, user manuals, open repositories, demo or sandbox instances, and publicly available installers using controlled keywords (such as ‘REC tool/platform’, ‘energy sharing software’, ‘community microgrid’, ‘REC toolkit’, ‘EV smart charging for RECs’). Application of the two eligibility criteria described above reduced this list to 19 tools. No geographical restrictions were imposed: both European and global tools were considered and included whenever they met the documentation and accessibility requirements.
In addition to the desk-based review, the authors interacted directly with the tools that provide open-source distributions or public demo/sandbox environments. In practice, MiGRIDS Lite, OpenEMS, PROCSIM and the Rectool Simulator were installed or accessed via their publicly available repositories or web interfaces, and representative workflows were tested to validate the documented functionalities. For the remaining tools, such as Cleanwatts, Hive Power FLEXO, Powerledger, GreenPocket Energy Cockpit, BECoop, and UP-STAIRS, no direct access was available, and the assessment relied on official documentation.
Building on this corpus, the authors conduct a structured desk-based evaluation [18,19] of digital tools that support the design, creation, or operation of RECs. The objective is to compare alternatives on a common, multi-criteria basis that reflects the needs of a non-expert audience (e.g., consumers and/or community initiators), while remaining transparent and reproducible.
The methodology has two pillars:
  • A KPI framework structured across six dimensions;
  • A scoring rubric with normalized 0/0.5/1 criteria and stakeholder-weighted aggregation via MCDA, using a WSM [20,21,22] as the primary ranking method.
MCDA methodologies have been widely applied across areas including renewable energy planning, transportation, sustainability assessment, quality management, and supply chain optimization. Their adaptability allows researchers to tailor criteria and weighting systems to the specific research context [21]. MCDA provides a structured and systematic framework for managerial decision-making by explicitly incorporating multiple criteria or objectives into the evaluation and ranking of alternative options. MCDA formalizes the decision-making process, thereby enhancing both the quality of decisions and the quality of decision-making practices, by explicitly documenting the applied criteria, their relative weights, and the rationale behind scoring [22].
This KPI–MCDA decision framework is developed for structured evaluation, which combines a set of KPIs with MCDA techniques to guide tool selection. It includes four steps: (1) define evaluation criteria and sub-criteria; (2) score each alternative using a standard rubric (e.g., 0/0.5/1); (3) assign weights to criteria (via expert judgment or data-driven methods); and (4) compute overall scores using the weighted sum model. This structure can be adapted to other domains by modifying the criteria and scoring rules while retaining the same MCDA logic.
This decision framework is not limited to REC tools; the same approach of KPI selection, scoring, and weight sensitivity analysis can be applied to evaluating alternatives in other domains of sustainability or technology selection.
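To make this adaptability concrete, the following minimal sketch (Python/NumPy, not part of the original study) shows the generic shape of the four-step pipeline; the rubric scores and the equal-weights function are placeholders to be replaced with domain-specific criteria and one of the weighting methods described in Section 2.4.

```python
from typing import Callable
import numpy as np

def wsm_scores(scores: np.ndarray,
               weight_fn: Callable[[np.ndarray], np.ndarray]) -> np.ndarray:
    """scores: alternatives x criteria matrix filled from the 0/0.5/1 rubric
    (steps 1-2); weight_fn: any weighting scheme returning normalized weights."""
    w = weight_fn(scores)              # step 3: expert- or data-driven weights
    assert abs(w.sum() - 1.0) < 1e-9   # weights must sum to one
    return scores @ w                  # step 4: weighted sum model

# Placeholder example: two alternatives, three criteria, equal weights.
rubric = np.array([[1.0, 0.5, 0.0],
                   [0.5, 1.0, 1.0]])
equal_weights = lambda m: np.full(m.shape[1], 1.0 / m.shape[1])
print(wsm_scores(rubric, equal_weights))
```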
The KPI assessment methodology includes five consecutive steps for evaluation of each tool (Figure 1).
Figure 1. KPI assessment methodology.
Figure 1 summarises the KPI assessment methodology, which consists of five consecutive steps that are described in detail in Section 2.1, Section 2.2, Section 2.3 and Section 2.4. First, the KPI definition step (Section 2.1) identifies the main performance domains that describe the tools’ functional and qualitative characteristics. Second, the ranking of criteria (Section 2.2) specifies the most relevant sub-criteria within each domain and organises them according to their importance. Appendix A provides the full scoring table and the three-level (0/0.5/1) scoring rubric for each sub-criterion. Third, each tool is evaluated against all sub-criteria within every KPI dimension (Section 2.2), reflecting its actual features, functionality, and performance; the resulting sub-criterion scores are then aggregated into dimension-level KPIs, forming the input dataset for the subsequent MCDA step (Section 2.3). Fourth, the weight coefficients for the six KPI dimensions are calculated (Section 2.4), determining how significant each criterion is compared to the others by applying both expert-judgement-based and objective data-driven methods. Finally, the weighted sum model (WSM) is implemented (Section 2.3) to combine all weighted scores into a single overall index for each tool, enabling their comparative ranking and the identification of top-performing solutions.
The authors chose a simple three-level scoring scheme to balance granularity with consistency in evaluation. A binary 0/1 scale was judged too coarse to capture partial fulfilment of criteria, while a more fine-grained scale could convey a false sense of precision given the qualitative nature of many criteria. The midpoint score of 0.5 allows the authors to indicate that a tool partially meets a criterion.

2.1. KPI Definition

Based on the analysis of existing studies on KPI frameworks [23,24,25,26,27] for energy systems and community-oriented systems, six main KPI dimensions were defined to evaluate digital tools for REC: technical modelling, operational, economic, environmental, social, and quality and adoption (Figure 2).
Figure 2. KPI framework for evaluating digital tools for REC.

2.2. KPI Sub-Criteria

The second step focuses on identifying and structuring detailed criteria within each KPI dimension.
The 23 KPI sub-criteria were derived from a longlist of indicators collected from previous KPI frameworks for ECs, studies on energy system and microgrid tools, and the documentation of existing REC-related platforms [23,24,25,26,27,28,29,30,31,32,33,34]. The team of authors then refined this list by (i) retaining only those indicators that are directly relevant for evaluating digital tools supporting RECs, (ii) merging or removing redundant items with overlapping meaning, and (iii) ensuring that each remaining sub-criterion can be operationalised consistently from publicly available documentation, demos or open-source code.

2.2.1. Technical Modelling

In the technical modelling (TECH) KPI dimension, the authors identified and evaluated a set of sub-criteria that capture the platform’s ability to represent the technical complexity of RECs and their multi-energy interactions. These sub-criteria address the modelling depth, analytical scope, and realism of technical functionalities.
The ‘Energy vectors’ criterion (TECH_vec) [28] assesses the extent to which a platform models multiple energy carriers and their couplings within a community context. Multi-vector capability is essential for representing sector coupling (e.g., electricity-to-heat via heat pumps, combined heat and power, thermal storage, and electric-vehicle smart charging), thereby enabling integrated techno-economic assessment rather than electricity-only appraisals. A platform scores higher when it can model several carriers and their interactions, rather than electricity alone.
The ‘Optimisation’ criterion (TECH_opt) evaluates whether the platform goes beyond basic simulation to offer design or operational optimization, and whether the optimization objective space is single- or multi-dimensional.
The ‘Simulation capability’ (TECH_sim) criterion evaluates the temporal fidelity and breadth of the simulation engine used for techno-economic assessment and operational studies. Higher capability entails time-series simulation over full annual cycles, sub-hourly granularity where needed (e.g., EV/BESS control), and consistent mass-/energy-balance handling across coupled vectors [29].
The ‘Forecasting’ (TECH_forec) criterion assesses whether the platform provides endogenous forecasting of key time series relevant to REC planning and operation (e.g., load/consumption, renewable generation).
The ‘LV/MV grid constraints or losses’ (TECH_grid) criterion addresses the representation of distribution-network feasibility—voltage bounds, thermal limits, losses, reverse power flows, and curtailment—at low and medium voltage levels. For RECs, distribution constraints often determine admissible asset sizing and operational envelopes [30].
The ‘Spatial/GIS capabilities’ (TECH_spat) criterion evaluates geospatial awareness and place-based modelling, including building-level siting, roof orientation and shading, community perimeter rules (e.g., same-substation constraints), and proximity to thermal networks or other infrastructures. Spatialized modelling is a prerequisite for actionable planning and compliance with jurisdiction-specific REC boundaries.

2.2.2. Operations and Control

The Operations and Control (OPER) dimension assesses whether a platform is ready for day-to-day operation of an EC: running diverse assets, ingesting telemetry, producing actionable analytics and reports, settling energy sharing, exposing reliable interfaces, and coordinating flexibility.
The criterion ‘Asset classes’ (OPER_ascl) evaluates the scope and diversity of assets that the platform can natively model, monitor, or control within an energy community framework. It reflects both the breadth (the variety of asset types) and depth (the level of technical detail and control granularity) of device integration [31].
The ‘Analytics and reporting’ criterion (OPER_analyt) [32] evaluates the platform’s data analytics, performance assessment, and reporting capabilities, which are essential for operational optimization and strategic decision support. It covers the transformation of raw telemetry data into actionable insights, including statistical analysis, KPI tracking, detection of anomalies, and forecasting of trends. Advanced tools integrate predictive or prescriptive analytics, provide automated performance summaries, and enable custom report generation for various stakeholders (operators, policymakers, or community members).
‘Demand response and flexibility aggregation’ (OPER_flex) [33] assesses the platform’s ability to aggregate and activate flexibility from distributed energy resources (DERs) and controllable loads—including batteries, heat pumps, and smart appliances. It focuses on how effectively the system can enrol flexible assets, predict available flexibility, nominate resources for activation, and execute automated control strategies in response to internal signals or market events.
The ‘EV management’ sub-criterion (OPER_EV) evaluates the platform’s native capabilities for EV charging management, including smart-charging policies and schedule configuration. Emphasis is placed on the coordination of charging with price/tariff signals, renewable generation forecasts, and distribution-network constraints, as required for cost-effective and grid-compliant operation in energy communities.

2.2.3. Economic

The economic KPI dimension (ECON) evaluates a platform’s capability to support techno-economic assessment and market realism [34].
‘Financial indicators’ (ECON_fin) assess whether the platform implements standard project-finance indicators [35] for community-scale assets and portfolios, enabling rigorous techno-economic appraisal and comparability across scenarios.
The sub-criterion ‘Tariff/market models’ [12] (ECON_tar) evaluates how realistically the platform represents end-user tariffs and market price signals—from flat/static rates to time-of-use, dynamic wholesale/retail, or real-time pricing—since tariff fidelity materially affects REC economics and operational strategies.
‘Sensitivity analysis’ (ECON_sens) tests the robustness of techno-economic results to variation in key drivers (prices, load/generation, discount rate, etc.), via batch ‘what-if’ runs or formal uncertainty modules.
‘Benefit-sharing calculators’ [36] (ECON_shar) determine whether the platform provides transparent and configurable mechanisms for distributing collective benefits and costs within the REC. It focuses on how energy, financial savings, and operational costs are allocated among members based on predefined or dynamic sharing rules.

2.2.4. Environmental

The environmental (ENVIR) [26] KPI dimension evaluates the platform’s capability to quantify, monitor, and optimize the environmental performance of RECs. Its purpose is twofold: first, to assess what environmental impacts are measured, focusing on robust and transparent carbon accounting; and second, to examine how these metrics influence decision-making through the integration of environmental objectives or constraints into system design and operational optimization.
The ‘Carbon accounting’ (ENVIR_carb) sub-criterion verifies whether the tool quantifies greenhouse-gas impacts of REC designs/operations with sufficient temporal and geographic resolution.
Environmental objective support (ENVIR_obj) assesses whether environmental performance is treated as a first-class decision driver. Mature tools embed emissions (or emission intensity) as a design/operation objective alongside cost and reliability, or as binding constraints. This enables transparent trade-off analysis and policy-aligned planning.

2.2.5. Social

This KPI dimension evaluates how effectively the platform supports transparent participation, communication, and decision-making within the REC [37]. It focuses on whether members have clear visibility of data, results, and impacts, as well as access to interactive tools or dashboards that promote understanding and engagement. The dimension also examines the presence of mechanisms for capturing member preferences, feedback, and co-design inputs, ensuring that community decisions reflect user priorities rather than purely technical optimization. Additionally, it considers whether the platform provides built-in guidance, tutorials, or simplified interfaces that help users participate meaningfully without requiring advanced technical skills.
The ‘Member portals and transparency dashboards’ sub-criterion (SOC_trans) addresses user-facing transparency. It evaluates how clearly the platform communicates energy, financial, and environmental data to community members through dashboards, portals, and reports. This criterion focuses on member-level visibility and trust, ensuring that users can understand and verify community performance and decision-making outcomes.
The ‘Co-design features’ (SOC_des) sub-criterion assesses the platform’s capability to actively involve community members in planning, decision-making, and operational processes through participatory design functionalities.
The ‘Education’ (SOC_educ) sub-criterion evaluates the platform’s ability to educate, guide, and support users—both community members and administrators—through a combination of integrated assistance tools and external documentation. It measures how effectively the platform lowers technical barriers and ensures that users can understand, operate, and expand the system confidently.

2.2.6. Quality Indicators

The ‘Quality indicators’ (QUAL) KPI dimension evaluates the overall maturity, robustness, and user-readiness of the platform supporting RECs. It focuses on how well the tool performs in practical implementation, ensuring reliability, usability, and long-term sustainability [38]. The sub-criteria of this dimension are described below.
The ‘Usability’ (QUAL_us) sub-criterion assesses how intuitive and user-friendly the platform interface is for different user groups (e.g., administrators, members, operators). Tools that offer multilingual interfaces, contextual help, and user-tailored dashboards demonstrate enhanced usability.
The ‘Reliability and performance’ (QUAL_perf) sub-criterion evaluates the platform’s technical robustness, responsiveness, and stability under different operational conditions. It includes aspects such as system uptime, error handling, data integrity, and computational efficiency. Tools that maintain consistent performance during peak data loads or simulation runs, and that provide redundancy and backup mechanisms, score higher.
The ‘Openness’ (QUAL_open) sub-criterion refers to the technical and architectural openness of the platform. It evaluates whether the system provides open-source access, transparent algorithms, well-documented application programming interfaces, and compliance with interoperability standards. The focus is on developer- and integrator-level transparency—ensuring reproducibility, interoperability, and long-term vendor independence.

2.3. KPI Calculation and Tool Evaluation

In this study, each sub-criterion is scored on a three-level scale 0/0.5/1, where 0 denotes no support (or no verifiable evidence), 0.5 denotes partial support, and 1 denotes full support with documented/demonstrated evidence.
The dimension-average KPI score for tool i is computed as the arithmetic mean over its sub-criteria [23]:
$\overline{KPI}_{i,d} = \frac{1}{n_d} \sum_{j=1}^{n_d} x_{ij}^{(d)}$ (1)
where d is the dimension label (d ∈ {TECH, OPER, ECON, ENVIR, SOC, QUAL}); $n_d$ is the total number of sub-criteria of dimension d; j is the index of the sub-criterion within dimension d (j ∈ {1, …, $n_d$}); i is the index of the tool (i ∈ {1, …, I}); $x_{ij}^{(d)} \in \{0, 0.5, 1\}$ is the score of the i-th tool on sub-criterion j of dimension d. Descriptions of the scoring levels for every sub-criterion in each dimension are provided in Appendix A.
To compute the overall performance score $S_i$ for the i-th evaluated platform, the WSM [39,40] is applied. This model aggregates the normalized performance scores from each of the six defined KPI dimensions, according to their assigned weight coefficients $w_d$:
$S_i = \sum_{d \in \{TECH,\, OPER,\, ECON,\, ENVIR,\, SOC,\, QUAL\}} w_d \cdot \overline{KPI}_{i,d}$ (2)
where $w_d$ is the weight of dimension d, and $\sum_d w_d = 1$.
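As an illustration of Equations (1) and (2), the short sketch below (Python/NumPy, not the authors' code) computes dimension averages from hypothetical rubric scores and aggregates them with the ROC weight vector reported later in Table 4; all tool scores are invented for demonstration.

```python
import numpy as np

# Hypothetical sub-criterion scores (0/0.5/1 rubric) for three tools in
# one dimension (TECH, with four sub-criteria for brevity).
tech = np.array([
    [1.0, 0.5, 0.0, 1.0],
    [0.5, 0.5, 1.0, 1.0],
    [0.0, 0.0, 0.5, 0.5],
])
kpi_tech = tech.mean(axis=1)   # Equation (1): mean over sub-criteria

# Dimension-average KPI matrix (I tools x 6 dimensions), illustrative values;
# column order: TECH, OPER, ECON, ENVIR, SOC, QUAL.
kpi = np.array([
    [0.625, 0.50, 0.75, 1.00, 0.33, 0.67],
    [0.750, 0.75, 0.25, 0.50, 0.67, 0.83],
    [0.250, 0.25, 0.50, 0.00, 1.00, 0.50],
])

# Equation (2): weighted sum model; ROC weights taken from Table 4.
w = np.array([0.4083, 0.2417, 0.1028, 0.0278, 0.0611, 0.1583])
assert abs(w.sum() - 1.0) < 1e-6
overall = kpi @ w              # S_i for each tool
print(np.round(overall, 3))
```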

2.4. Determination of Weight Coefficients

In this review, the authors have employed four well-established approaches to determine the weighting coefficients [41] $w_d$ for the six proposed KPI dimensions: the rank order centroid (ROC) [42,43], the analytic hierarchy process (AHP, pairwise comparisons) [44,45,46], the entropy weight method (EWM) [47], and the criteria importance through intercriteria correlation (CRITIC) method [48]. These methods were chosen for their reproducibility, transparency, and firm grounding in the MCDA literature [49]. The ROC and AHP approaches use the authors’ expert judgment in a structured manner, while EWM and CRITIC derive weights objectively from the variability and correlation of the data. All four methods produce a normalized weight vector (summing to 1), which is then used to weight each dimension’s contribution in the overall performance evaluation by the WSM model. A comparative analysis of the different weight sets is provided in Section 3.

2.4.1. ROC

In this case, the ROC method serves as a transparent, non-compensatory weighting approach suitable in the absence of stakeholder-derived numerical weights. It relies on the assumption that decision-makers (here, the authors) can establish a rank order of KPI dimension importance (from 1 to 6), even if they cannot specify precise magnitudes of difference between them.
To determine the final ranking, the authors agreed to use the average rank method, which is widely applied in multi-criteria analysis. The formula for calculating the average rank for each criterion is as follows [50]:
$\bar{R}_c = \frac{\sum_{a=1}^{Auth} R_{ca}}{Auth}$ (3)
where $\bar{R}_c$ is the average rank of criterion c; $R_{ca}$ is the rank assigned to criterion c by author a; Auth is the number of authors.
In expression (4), $w_d^{ROC}$ denotes the weight assigned to the d-th dimension by the ROC method [42]:
$w_d^{ROC} = \frac{1}{C} \sum_{r=c}^{C} \frac{1}{r}$ (4)
The variable C is the total number of KPI dimensions, and c is the rank of the d-th dimension (with c = 1 for the highest-ranked dimension and c = C for the lowest). The summation index r runs from c to C, so $\sum_{r=c}^{C} 1/r$ sums the reciprocals of the integers from c through C. Dividing this sum by C normalizes the weights so that all C weights sum to one. Thus, a higher-ranked dimension (smaller c) has more terms in the sum and therefore receives a larger $w_d^{ROC}$, reflecting its relatively greater importance.
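A compact sketch of Equations (3) and (4) follows (Python; the author rankings are illustrative, not the values of Table 3). For C = 6 it reproduces the ROC weights reported later in Table 4 (0.4083, 0.2417, …, 0.0278), since those depend only on the rank positions.

```python
import numpy as np

# Rows: authors; columns: the six KPI dimensions; illustrative ranks.
ranks = np.array([
    [1, 2, 4, 6, 5, 3],
    [1, 3, 4, 6, 5, 2],
    [2, 1, 4, 5, 6, 3],
])
avg_rank = ranks.mean(axis=0)     # Equation (3): average rank per criterion
order = avg_rank.argsort()        # consolidated order, most important first
                                  # (ties broken by column position)
C = ranks.shape[1]
# Equation (4): ROC weight for rank position c.
roc = np.array([sum(1.0 / r for r in range(c, C + 1)) / C
                for c in range(1, C + 1)])

weights = np.empty(C)
weights[order] = roc              # map weights back to the dimensions
print(np.round(weights, 4), weights.sum())
```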

2.4.2. AHP

In this case, AHP was applied as a structured expert judgement method in the absence of stakeholder surveys. The research team constructed a 6 × 6 reciprocal pairwise comparison matrix A, covering the six KPI dimensions, using a simplified Saaty scale [51]. Each matrix element $a_{dr}$ reflects the perceived importance of dimension d relative to dimension r. The matrix is reciprocal: $a_{dd} = 1$ for all diagonal elements, and $a_{dr} = 1/a_{rd}$ when d ≠ r.
The resulting pairwise comparison matrix has the following form:
$A = \begin{pmatrix} 1 & a_{12} & a_{13} & \cdots & a_{1C} \\ 1/a_{12} & 1 & a_{23} & \cdots & a_{2C} \\ 1/a_{13} & 1/a_{23} & 1 & \cdots & a_{3C} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1/a_{1C} & 1/a_{2C} & 1/a_{3C} & \cdots & 1 \end{pmatrix}$ (5)
where A is the pairwise comparison matrix of the dimensions.
The weight coefficient by the AHP method is calculated as follows [44]:
$w_d^{AHP} = \frac{\left(\prod_{r=1}^{C} a_{dr}\right)^{1/C}}{\sum_{k=1}^{C} \left(\prod_{r=1}^{C} a_{kr}\right)^{1/C}}$ (6)
where $a_{dr}$ indicates how much more important dimension d is compared to dimension r; $\left(\prod_{r=1}^{C} a_{dr}\right)^{1/C}$ is the geometric mean of row d; index d denotes the row corresponding to the dimension whose weight is being calculated; k is the row index in the normalisation term $\sum_{k=1}^{C} \left(\prod_{r=1}^{C} a_{kr}\right)^{1/C}$; r is the column index used in the product $\prod_{r=1}^{C} a_{dr}$ to compute the geometric mean of row d.
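The sketch below illustrates Equation (6) on an invented 6 × 6 reciprocal matrix (the authors' actual matrix is given in Appendix E); the eigenvalue-based consistency check at the end is an optional addition following standard AHP practice, not a step reported in this paper.

```python
import numpy as np

# Illustrative 6x6 reciprocal comparison matrix on a 1-5 scale
# (rows/columns: TECH, OPER, ECON, ENVIR, SOC, QUAL).
A = np.array([
    [1,   2,   3,   5,   4,   2  ],
    [1/2, 1,   3,   4,   4,   2  ],
    [1/3, 1/3, 1,   3,   2,   1/2],
    [1/5, 1/4, 1/3, 1,   1/2, 1/4],
    [1/4, 1/4, 1/2, 2,   1,   1/3],
    [1/2, 1/2, 2,   4,   3,   1  ],
])

gm = A.prod(axis=1) ** (1 / A.shape[0])  # geometric mean of each row
w_ahp = gm / gm.sum()                    # Equation (6)
print(np.round(w_ahp, 4))

# Optional: Saaty consistency ratio (CR < 0.10 is conventionally acceptable).
lam_max = np.linalg.eigvals(A).real.max()
ci = (lam_max - A.shape[0]) / (A.shape[0] - 1)
print(f"CR = {ci / 1.24:.3f}")           # 1.24 = random index for n = 6
```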

2.4.3. EWM

The fundamental idea of EWM [52] is that a dimension provides more decision-making value if its values are more dispersed (less uncertain or more informative). Conversely, dimensions with uniformly distributed scores are considered less informative and are therefore assigned lower weights. Since it uses only the data (no expert judgements), EWM was selected to cross-check the subjective schemes (ROC and AHP) within the hybrid weighting approach.
The normalized weight for each KPI dimension by EWM is evaluated as follows [53]:
$w_d^{EWM} = \frac{d_d}{\sum_{k=1}^{C} d_k}$ (7)
where k is an auxiliary index, k ∈ {1, …, C}; $d_d$ is the divergence of dimension d, computed as:
$d_d = 1 - E_d$ (8)
where $E_d$ is the entropy of dimension d. Dimensions with a higher divergence are considered more informative.
The entropy is defined as follows:
$E_d = -\frac{1}{\ln I} \sum_{i=1}^{I} p_{id} \cdot \ln p_{id}$ (9)
where $p_{id}$ represents the normalized proportion of the performance of tool i on KPI dimension d, relative to all other tools for that dimension; I is the total number of tools.
Expression (9) is bounded in [0, 1], where $E_d$ = 1 implies a uniform distribution (low discriminating power) and $E_d$ = 0 implies high contrast (high information content).
The normalized proportion of the performance of tool i on KPI dimension d is calculated as follows:
$p_{id} = \frac{\overline{KPI}_{i,d}}{\sum_{i=1}^{I} \overline{KPI}_{i,d} + \varepsilon}$ (10)
where ε is a small positive constant (in this case, ε = 10⁻¹²) to prevent division by zero. This normalization ensures $\sum_{i=1}^{I} p_{id} = 1$, mimicking a probability distribution over tools for each dimension.
Entropy captures the contrast intensity or information richness of each criterion. The greater the dispersion of data, the higher the criterion’s ability to distinguish alternatives, and thus the higher its assigned weight. As a result, EWM tends to assign the highest weights to criteria that best differentiate the performance of alternatives—those that contain the most ‘information’ about the decision space.
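A minimal sketch of Equations (7)–(10) is shown below (Python; the KPI matrix is illustrative rather than the study's data); the 0·ln 0 terms that arise for zero-score dimensions are treated as zero, consistent with the entropy definition.

```python
import numpy as np

# Illustrative I x C matrix of dimension-average KPI scores.
kpi = np.array([
    [0.625, 0.50, 0.75, 1.00, 0.33, 0.67],
    [0.750, 0.75, 0.25, 0.50, 0.67, 0.83],
    [0.250, 0.25, 0.50, 0.00, 1.00, 0.50],
])
I, C = kpi.shape
eps = 1e-12

p = kpi / (kpi.sum(axis=0) + eps)            # Equation (10)
with np.errstate(divide="ignore", invalid="ignore"):
    terms = np.where(p > 0, p * np.log(p), 0.0)
E = -terms.sum(axis=0) / np.log(I)           # Equation (9): entropy
d = 1.0 - E                                  # Equation (8): divergence
w_ewm = d / d.sum()                          # Equation (7)
print(np.round(w_ewm, 4))
```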

2.4.4. CRITIC

The CRITIC method [48] is another objective approach that assigns weights by considering both the contrast intensity (standard deviation) and conflict (correlation) among criteria. It emphasizes criteria that vary strongly and are less correlated with others.
The weight for each KPI dimension by CRITIC is evaluated as follows:
$w_d^{CRITIC} = \frac{C_d}{\sum_{k=1}^{C} C_k}$ (11)
where $C_d$ is the information content (contrast intensity) of dimension d; $C_k$ is the information content of dimension k.
The $C_d$ parameter is calculated as follows:
$C_d = \sigma_d \cdot \sum_{k=1}^{C} (1 - \rho_{dk})$ (12)
where $\sigma_d$ is the standard deviation of dimension d; $\rho_{dk}$ is the Pearson correlation [54] between dimensions d and k, computed over the vectors of dimension-average scores $\overline{KPI}_{i,d}$.
$\sigma_d$ is evaluated by Equation (13):
$\sigma_d = \sqrt{\frac{1}{I-1} \sum_{i=1}^{I} \left(\overline{KPI}_{i,d} - \mu_d\right)^2}$ (13)
where
$\mu_d = \frac{1}{I} \sum_{i=1}^{I} \overline{KPI}_{i,d}$ (14)
The Pearson correlation between dimensions is estimated as:
$\rho_{dk} = \frac{\sum_{i=1}^{I} \left(\overline{KPI}_{i,d} - \mu_d\right)\left(\overline{KPI}_{i,k} - \mu_k\right)}{\sqrt{\sum_{i=1}^{I} \left(\overline{KPI}_{i,d} - \mu_d\right)^2} \cdot \sqrt{\sum_{i=1}^{I} \left(\overline{KPI}_{i,k} - \mu_k\right)^2}}$ (15)
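The following sketch implements Equations (11)–(15) with NumPy (illustrative KPI matrix, not the study's data); np.corrcoef computes the Pearson correlations of Equation (15) directly.

```python
import numpy as np

# Illustrative I x C matrix of dimension-average KPI scores.
kpi = np.array([
    [0.625, 0.50, 0.75, 1.00, 0.33, 0.67],
    [0.750, 0.75, 0.25, 0.50, 0.67, 0.83],
    [0.250, 0.25, 0.50, 0.00, 1.00, 0.50],
])

sigma = kpi.std(axis=0, ddof=1)        # Equation (13): sample std (I - 1)
rho = np.corrcoef(kpi, rowvar=False)   # Equation (15): Pearson correlations
Cd = sigma * (1.0 - rho).sum(axis=1)   # Equation (12): contrast x conflict
w_critic = Cd / Cd.sum()               # Equation (11)
print(np.round(w_critic, 4))
```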

3. Results

3.1. Overview of the Selected EMTs

The objective of this section is to present and interpret the comparative results of the evaluated software tools across the six defined KPI dimensions. Each tool was assessed based on its performance across 23 sub-criteria spanning technical, operational, economic, environmental, social, and quality-related dimensions. The scoring followed the normalized 0/0.5/1 scale described earlier, enabling consistent comparison and aggregation across functionally diverse tools.
Based on the defined selection criteria, Table 1 provides an overview of the selected software tools, including their primary goals, applicable spatial scale, and scope. The tools’ focus areas range from microgrid dispatch optimization to municipal-scale REC planning, reflecting the diversity of functionalities and use cases relevant to community energy systems.
Table 1. Overview of the selected EMTs for EC assessment, considering the main aim of the software, scale and scope, availability, integration with other software, simulation type, reference studies in which the tool is considered.
The tools span a broad spatial spectrum—from individual households and microgrids (e.g., MiGRIDS Lite, OpenEnergyMonitor) to municipal and district-level communities (e.g., LocalRES, eNeuron), and even national or cross-border implementations (e.g., Powerledger, Energy Web, REScoopVPP). Several tools, such as OpenEMS or Hive Power FLEXO, explicitly support scalability from small communities to industrial or grid-level integration. Tools such as eNeuron, LocalRES, and Cleanwatts emphasize multi-vector optimisation (electricity, heat, mobility), while others like Powerledger and Energy Web focus on market mechanisms, trading, and blockchain infrastructure. Tools like GreenPocket and REScoop Energy Community Platform emphasize community engagement, transparency, and governance, while Rectool and PROCSIM focus on planning and dataset-based modelling.

3.2. Results by Methodological Step

This section presents the outcomes of the KPI-based evaluation following the sequence of methodological steps defined in the assessment framework. Each step reflects a specific stage of the analysis—from KPI definition and ranking, through sub-criteria scoring and KPI dimension weighting, to the final evaluation of each tool.

3.3. KPI Results

According to Equation (1), the dimension-average KPI score for each tool is computed. Results of the scoring levels for every sub-criterion in each dimension are provided in Appendix B, Appendix C and Appendix D. The results of KPI scores for each considered tool are presented in Table 2.
Table 2. Average KPIs of the tools.
Table 2 presents a heatmap of average KPI scores across six evaluation dimensions for each assessed tool. Warmer colours indicate lower scores, while cooler colours highlight stronger performance.
The heatmap reveals that some tools—such as eNeuron, (+)CityxChange, Cleanwatts, Hive Power FLEXO, and Powerledger—exhibit consistently high values across all KPI categories. These platforms combine strong technical capabilities, sustainability features, and user adoption, making them well-balanced solutions for integrated REC applications.
Other tools, such as BECoop, MiGRIDS Lite, LocalRES, and PROCSIM, show high scores in only one or two categories while scoring low elsewhere. This suggests functional specialization and alignment with narrower use cases rather than comprehensive energy community support.
Meanwhile, platforms like OpenEMS, REScoopVPP, and Energy Web display more uneven performance, excelling in specific technical or infrastructure areas but lacking strength across all KPI dimensions. These patterns underscore the importance of matching tools to the intended scope and priorities of each REC initiative.

3.4. Ranking of KPI Dimension

In this paper, all authors independently proposed their rankings for six evaluation criteria that are essential for decision-making in the studied context. Each author assigned ranks from 1 (most important) to 6 (least important). The purpose of this stage was to consolidate these individual rankings into a unified order of importance, which would later serve as the basis for calculating weight coefficients for each criterion. According to the methodology described in Section 2.4.1, the authors used the average rank method to determine the final ranking. The collected data and the results of the consolidation method are summarized in Table 3 below.
Table 3. Rankings of each KPI criterion by author.
By applying a transparent mathematical approach, the team ensured that the final ranking reflects the collective expertise of all contributors.

3.5. Results of Weight Coefficients

The final results of all the weight calculations, based on Section 2, are presented in Table 4. A 6 × 6 reciprocal comparison matrix constructed by the AHP method is shown in Appendix E.
Table 4. Weight coefficients by the ROC, AHP, EWM and CRITIC methods.
In the EWM and CRITIC objective weighting methods, the input dataset is the I × C matrix of dimension-average KPI scores $\overline{KPI}_{i,d}$ defined in Section 2.3. Each element of this matrix is the aggregated score of a given tool on a given KPI dimension, obtained by averaging the tool’s sub-criterion scores in that dimension. EWM uses this matrix to compute entropy-based variability for each dimension, while CRITIC derives standard deviations and Pearson correlations from the same matrix to quantify contrast intensity and inter-dimension conflict.
The sum of the weight coefficients assigned to all KPI dimensions equals 1.0 for each of the four methods, confirming that the normalization was performed correctly, and that each method maintains a valid weight distribution across all the evaluated criteria.
Table 4 demonstrates that the ROC weights align with the established priorities. The TECH and OPER dimensions have the highest weights—0.4083 and 0.2417 respectively—confirming their critical role in the decision-making framework. Mid-ranked dimensions such as QUAL (0.1583) and ECON (0.1028) add moderate value, whereas SOC (0.0611) and ENVIR (0.0278) contribute less. This spread shows a clear focus on technical and operational factors, while environmental and social considerations, though included, have a less pronounced effect on the total performance score.
The AHP weights closely follow the consolidated ranking, with TECH (0.3735) and OPER (0.2545) emerging as the most influential dimensions. QUAL (0.1620) and ECON (0.1021) occupy intermediate positions, whereas SOC (0.0650) and ENVIR (0.0430) remain the least significant ones. This distribution confirms the strong emphasis on technical and operational dimensions, while still incorporating qualitative and sustainability considerations into the decision-making framework.
EWM produced a distribution that differs significantly from that of expert-based methods. ENVIR (0.3386) and ECON (0.2369) emerged as the most influential criteria due to their high variability, whereas TECH (0.1328) and OPER (0.1262) received moderate weights. SOC (0.1040) and QUAL (0.0614) contributed the least. This outcome highlights the value of incorporating data-driven weighting to complement subjective assessments and strengthen the robustness of the overall evaluation.
The last method, CRITIC, favours criteria with high variability and low correlation with the other criteria. ENVIR (0.2046) and SOC (0.1811) emerged as the most influential ones, reflecting their distinctive informational contribution. OPER (0.1643) and QUAL (0.1591) received moderate weights, while ECON (0.1536) and TECH (0.1373) were the least significant ones. This outcome demonstrates how CRITIC complements other methods by highlighting criteria that reduce redundancy and enhance the robustness of the overall evaluation.
The ROC and AHP methods pointed to TECH and OPER as the top priorities, which is consistent with the authors’ prior expert judgement. The EWM and CRITIC methods work differently, deriving their weights from the data itself. These data-driven techniques identified the ENVIR and ECON domains as the most significant ones, emphasizing the role of environmental and economic variability in the dataset and showing how strongly such variability can shift the results.
The differences between the weighting methods reveal that ROC and AHP reflect broader strategic or expert-defined priorities, while EWM and CRITIC uncover insights rooted in the intrinsic patterns and variability of the data.

3.6. Results of Final Score

Figure 3 represents the overall performance scores of the tools under four different weighting methods. Each line traces the normalized KPI score for a tool under one weighting scheme, allowing for a direct visual comparison across methods (see Equation (2)). This figure highlights how the choice of weighting approach—expert-based or data-driven—influences the relative assessment of each tool. Numeric results are available in Appendix F.
Figure 3. The overall performance scores of the tools.
Tools such as eNeuron, CityxChange, Cleanwatts, Hive Power FLEXO and Powerledger consistently achieve the highest scores under all four weighting schemes. This indicates that these tools deliver balanced performance across technical, operational, economic, and sustainability criteria, regardless of the weighting approach.
While the overall ranking of leading tools remains stable, differences appear among mid- and low-performing tools. For example, BECoop, MiGRIDS Lite, REScoopVPP, Compile Toolbox and Rectool Simulator show noticeable variation between the expert-based methods (ROC, AHP) and the data-driven methods (EWM, CRITIC). This suggests that expert judgment emphasizes technical and operational aspects, whereas objective methods highlight variability in environmental and economic indicators. ROC and AHP consistently assign greater weight to technical and operational KPIs, resulting in higher scores for tools with strong engineering and control capabilities.
Under the EWM and CRITIC methods, tools with strong environmental and economic performance—such as (+)CityxChange, Cleanwatts, Powerledger, Energy Web and LocalRES—gain a relative advantage compared to expert-based weighting. This is because the environmental and economic criteria show higher variation across tools and are less correlated with other dimensions. As a result, the objective methods assign them more weight, which amplifies the scores of tools that perform well in these areas.
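To reproduce this kind of rank-shift analysis, the short sketch below (Python; the KPI matrix is hypothetical) scores a set of tools under the four weight vectors reported in Table 4 and prints the resulting rankings side by side.

```python
import numpy as np

# Hypothetical I x 6 KPI matrix (columns: TECH, OPER, ECON, ENVIR, SOC, QUAL).
kpi = np.array([
    [0.625, 0.50, 0.75, 1.00, 0.33, 0.67],
    [0.750, 0.75, 0.25, 0.50, 0.67, 0.83],
    [0.250, 0.25, 0.50, 0.00, 1.00, 0.50],
])
# Weight vectors as reported in Table 4, in the column order above.
weights = {
    "ROC":    [0.4083, 0.2417, 0.1028, 0.0278, 0.0611, 0.1583],
    "AHP":    [0.3735, 0.2545, 0.1021, 0.0430, 0.0650, 0.1620],
    "EWM":    [0.1328, 0.1262, 0.2369, 0.3386, 0.1040, 0.0614],
    "CRITIC": [0.1373, 0.1643, 0.1536, 0.2046, 0.1811, 0.1591],
}
for name, w in weights.items():
    s = kpi @ np.array(w)                 # Equation (2)
    rank = (-s).argsort().argsort() + 1   # 1 = best-scoring tool
    print(f"{name:7s} scores={np.round(s, 3)} ranks={rank}")
```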

3.7. Real-World Applications of REC Tools

To support the practical relevance of the proposed evaluation framework, the authors briefly summarize three real-world implementations of tools included in the assessment.

3.7.1. Powerledger and Decentralized Market Mechanisms

The quantitative results underscore that platforms excelling in the ECON dimension (specifically the Benefit-sharing calculators, ECON_shar), and QUAL dimension (particularly Openness, QUAL_open), often integrate sophisticated market mechanisms that enforce transparency and fairness. This finding is empirically validated by the deployment of Powerledger, a tool evaluated within this framework. Powerledger leverages blockchain technology to facilitate peer-to-peer energy trading, moving beyond static optimization calculations to dynamic, market-driven energy allocation [91]. In demonstrations such as the Brooklyn Microgrid project and various deployments across Australia and Southeast Asia, the platform utilizes a dual token model (e.g., POWR, Sparkz) to allow residential and commercial participants to transparently share and transact their energy surplus at mutually desired prices [69,91].
This practical implementation directly addresses the sub-criterion of ECON_shar by providing an auditable, decentralized ledger for transactional settlements. The use of blockchain technology inherently reinforces the QUAL_open criterion, establishing a trust layer necessary for scaling community participation, which often presents a major barrier in traditional centralized energy systems. Thus, the high scoring of Powerledger in these domains is justified by its capacity to enable the scalability of trust and financial transparency through the distributed ledger technology.

3.7.2. +CityxChange and Participatory Planning for Positive Energy Blocks

The evaluation framework’s prioritization of SOC criteria is validated by large-scale EU demonstration projects focusing on co-creation, such as the +CityxChange initiative, which aims to develop positive energy blocks in urban environments [62].
While technical optimization is often a primary design goal, the +CityxChange project explicitly demonstrates that successful REC implementation requires digital platforms that actively facilitate co-design features (SOC_des) and effective member portals and transparency dashboards (SOC_trans). This platform is designed to enable citizen participation, transforming energy consumers into active prosumers and ‘positive energy champions’ through integrated physical and digital engagement strategies. This intentional focus ensures that technical optimizations are aligned with community preferences and behaviors.
The +CityxChange model confirms that achieving high ENVIR performance goals, such as carbon reduction for positive energy blocks (PEBs), depends on strong performance in the SOC dimension. The SOC_des participatory framework acts as the necessary mediation layer, converting purely technical goals into achievable behavioural outcomes within the community [64].

3.7.3. Cleanwatts Kiplo and Automated Operational Control

The consistent robustness of top-ranked platforms in the Operational (OPER) domain is driven by their capacity to manage multi-asset environments and respond dynamically to grid needs, as demonstrated by the Cleanwatts Kiplo STEP (Smart Transactive Energy Platform) [64]. Kiplo is implemented as a commercial, end-to-end management system designed to automate the coordination of diverse assets, including PV, BESS, and EV chargers, to maximize local self-consumption while ensuring regulatory compliance. Kiplo has implemented its community energy platform in Miranda do Douro, Portugal—the country’s first REC under the new RED II framework. The platform excels in demand response and flexibility aggregation (OPER_flex) by connecting local energy markets to multi-layered upstream markets. This feature allows the REC to monetize small-load flexibility while adhering to existing physical, legal, and regulatory barriers and frameworks.
This deployment provides a concrete validation of how tools can successfully execute complex EV management (OPER_EV) strategies, optimizing charging schedules against dynamic price signals and renewable energy forecasts [66]. The ability of the platform to maintain automated, compliant coordination reveals that such tools are evolving from simple optimization engines into regulatory enabling infrastructure (OPER/ECON). This functionality is crucial for achieving high scores in operational readiness and addresses the policy challenge of integrating decentralized flexibility into centralized grids, while assuring system security.

4. Discussion

This paper presents a structured KPI–MCDA evaluation of digital tools supporting RECs, aiming to close existing gaps in platform assessment methodologies. Previous literature has primarily focused on technical and economic functionalities, often omitting critical aspects such as user engagement, regulatory adaptability, EV integration, and platform usability. The results confirm that while several tools perform well across multiple functional dimensions, many others show strong performance in only a narrow set of criteria, which limits their broader applicability in real-world community contexts.
During the analysis, several methodological and practical considerations emerged that shaped the final approach and should be kept in mind when interpreting the results. First, the selection of the pairwise comparison scale for the AHP component required careful deliberation. The authors evaluated two options: the discrete odd-number scale commonly used in AHP (e.g., 1, 3, 5) and broader ranges such as 1–5 or 1–9, as proposed in the classical AHP literature. While a 1–9 scale offers more granularity, empirical observation indicated that the perceived differences in criterion importance across tools were moderate rather than extreme. To avoid overstating the influence of marginal differences, the authors therefore opted for a limited 1–5 scale. This compromise balances methodological rigour with real-world interpretability and reflects the underlying distribution of expert judgement.

Another practical issue concerned the uniqueness of pairwise ratings. After internal deliberation, the authors concluded that permitting repeated scores across criteria was more realistic: in real-world settings, multiple criteria may be viewed as equally important, and forcing strict ordinal rankings could misrepresent true expert preferences. Allowing repeated values preserved the semantic integrity of expert assessments and avoided introducing artificial precision into the weighting model.

Finally, recognizing the inherent subjectivity of expert-derived weights, the authors triangulated the results using two objective, data-driven methods, EWM and CRITIC. These methods rely solely on the observed variability and correlation within the data and therefore offer complementary perspectives.
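For readers who wish to reproduce the objective weighting step, the sketch below implements the standard EWM and CRITIC formulas on a small decision matrix; the toy matrix values are illustrative placeholders, not the study’s scoring data.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method (EWM): criteria whose scores vary more
    across tools carry more information and receive larger weights.
    X: (m tools x n criteria) matrix of non-negative scores."""
    m, _ = X.shape
    P = X / X.sum(axis=0)                       # column-wise proportions
    plogp = np.where(P > 0, P * np.log(P, where=(P > 0)), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)          # entropy per criterion
    d = 1.0 - e                                 # degree of divergence
    return d / d.sum()

def critic_weights(X):
    """CRITIC: weight = contrast (standard deviation) x conflict
    (1 - correlation with the other criteria), on min-max normalised
    scores."""
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    sigma = Z.std(axis=0, ddof=1)
    R = np.corrcoef(Z, rowvar=False)
    C = sigma * (1.0 - R).sum(axis=0)           # diagonal contributes 0
    return C / C.sum()

# Toy decision matrix: 4 tools x 3 criteria scored on {0, 0.5, 1}.
X = np.array([[1.0, 0.5, 0.0],
              [0.5, 0.5, 1.0],
              [1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])
print(entropy_weights(X).round(3))
print(critic_weights(X).round(3))
```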
When the ranking results are considered from the perspective of commercial versus open-source tools, it becomes evident that separating these categories is unnecessary. Both types of solutions exhibit strengths and weaknesses depending on the weighting method and performance criteria. Commercial tools such as Cleanwatts and Powerledger achieve high scores under the EWM and CRITIC methods, indicating strong performance under data-driven, variability-sensitive criteria. Meanwhile, open-source tools such as OpenEMS and OpenEnergyMonitor perform competitively under expert-based schemes such as ROC and AHP, showing that open-source solutions can also excel when expert- or rank-based weighting is applied. The overall scores (Appendix F) also include instances where commercial and open-source tools achieve very similar performance, for example GreenPocket Energy Cockpit (commercial) and OpenEnergyMonitor (open source) under the AHP method, indicating that neither category is inherently superior across all evaluation dimensions.
Expert-derived weighting methods such as ROC and AHP, which place the greatest importance on technical and operational criteria, tend to favor smaller-scale, household-level tools focused on technical control: under ROC/AHP weightings, tools with strong engineering and control capabilities achieve higher overall scores, reflecting how expert judgment emphasizes technical performance. In contrast, community-level planning tools and larger district-scale platforms oriented toward market integration gain an edge under the objective data-driven methods, EWM and CRITIC, which assign greater weight to environmental and economic criteria. These entropy- and correlation-based weightings amplify the scores of tools excelling in sustainability or market-trading features, since those dimensions exhibit higher variability across the tool set. Consequently, the impact of each weighting scheme is scope-dependent: expert-based weights benefit technology-centric solutions at the household/microgrid scale, whereas objective methods highlight platforms aligned with broader community and market goals. Notably, the most well-rounded tools, such as eNeuron, (+)CityxChange, and Cleanwatts, deliver balanced performance across all criteria and thus rank highly under every weighting approach. This underscores that comprehensive, multi-domain design keeps a tool among the top performers regardless of whether the evaluation emphasizes technical control or community-focused sustainability priorities.
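This scope dependence can be made concrete with a small weighted sum model example. Both “tools” and all numbers below are hypothetical, chosen only to show how a ranking reverses when weight mass shifts from technical to environmental and economic criteria, as observed between the expert-based and data-driven schemes.

```python
import numpy as np

# Hypothetical domain-level scores (rows: tools; columns: TECH, OPER,
# ECON, ENVIR), mimicking a control-centric household tool versus a
# market/sustainability-oriented community platform.
scores = np.array([
    [0.9, 0.9, 0.4, 0.3],   # "household control" tool
    [0.5, 0.6, 0.9, 0.9],   # "community market" tool
])

expert_w = np.array([0.40, 0.30, 0.20, 0.10])  # technical emphasis (ROC/AHP-like)
data_w = np.array([0.15, 0.20, 0.30, 0.35])    # variability emphasis (EWM/CRITIC-like)

for name, w in [("expert", expert_w), ("data-driven", data_w)]:
    wsm = scores @ w                           # weighted sum aggregation
    print(name, wsm.round(3), "-> winner: tool", int(wsm.argmax()))
# expert weights favour tool 0 (0.74 vs 0.65);
# data-driven weights reverse the ranking (0.54 vs 0.78)
```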
The authors intend to explore this balance further in future work, focusing on how different combinations of platform functionalities—such as real-time flexibility control, co-design interfaces, and environmental optimization—can be integrated into scalable, user-friendly digital infrastructures. Particular attention will be paid to the role of interoperability standards, open-source architectures, and modular design in enhancing platform adaptability across diverse regulatory contexts. Additionally, future studies will investigate the incorporation of user feedback and empirical testing to refine the scoring framework and better align platform evaluation with the lived experiences of REC stakeholders.

5. Conclusions

The results of this study provide valuable insights that extend beyond the numerical rankings of the evaluated platforms. The consistent top performance of eNeuron, (+)CityxChange, Cleanwatts, Hive Power FLEXO, and Powerledger across all weighting schemes does not reflect a methodological coincidence but rather the robustness and comprehensiveness of their design. These platforms simultaneously address technical, operational, economic, environmental, social, and quality domains, demonstrating that balanced multi-domain integration, rather than specialization in one area, determines overall excellence in the multi-criteria evaluation context. Meanwhile, the variability observed among mid-ranked platforms such as BECoop, MiGRIDS Lite, REScoopVPP, Rectool, and PROCSIM under different weighting scenarios reveals that the relevance and performance of each tool are highly dependent on stakeholder priorities. This indicates that platform suitability is contextual and should align with whether a project is primarily technology-driven or community-oriented. A review of the results also shows that the distinction between commercial and open-source tools does not translate into systematic performance differences. High and low scores appear in both categories depending on the weighting approach.
From a practical perspective, the findings suggest that tool selection should be guided by project-specific objectives rather than overall ranking. For energy communities that prioritise operational control, flexibility, and the integration of EVs or demand response, tools with strong operational capabilities are more appropriate. Conversely, initiatives focusing on early-stage design and planning would benefit from tools that demonstrate superior capability in feasibility and spatial analysis. The six KPI dimensions proposed in this study can serve as a comprehensive framework for structuring technical specifications, procurement processes, and evaluation protocols, helping to prevent the omission of critical functionalities such as network constraint modelling, energy-sharing mechanisms, or multi-objective optimisation. In addition, the analysis underscores the importance of modularity and interoperability: while universal tools can serve as a system’s core, the integration of specialised solutions through open interfaces and scalable architectures can improve overall system performance and adaptability.
At the policy level, the results highlight the need to embed environmental and economic sharing dimensions in national and regional regulatory frameworks. Since the rankings are most sensitive to these domains under data-driven weighting schemes, policymakers should consider incorporating requirements for carbon accounting, transparent benefit allocation, and lifecycle performance tracking into public funding programmes and pilot project evaluations. Establishing a baseline set of functional requirements for REC digital tools—covering grid constraint modelling, flexibility management, multi-objective optimisation, and user transparency—would reduce fragmentation among projects and ensure interoperability across different governance levels. Furthermore, the use of publicly available KPI reporting and disclosure of weighting schemes would enhance comparability and accountability across municipal and community-led initiatives.

Author Contributions

Conceptualization: L.P. and A.M.; methodology: L.P., S.H. and I.D.; software: S.H. and L.P.; validation: R.Z. and P.N.; formal analysis: L.P., S.H., R.Z. and P.N.; investigation: R.Z., S.H. and P.N.; resources: L.P. and I.D.; writing—original draft preparation: L.P., S.H., R.Z. and P.N.; writing—review and editing: L.P. and A.M.; visualization: S.H.; supervision: A.M.; project administration: A.M.; funding acquisition: L.P. and A.M. All authors have read and agreed to the published version of the manuscript.

Funding

The research conducted in this publication was funded by the Latvian Council of Science for the project LV_UA/2025/2 and by the Ministry of Education and Science of Ukraine under the grant number 0125U002848 ‘Development of an open-source tool to support energy communities with electric vehicles and battery energy storage’.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

| Sub-Criterion | Score: x = 0 | Score: x = 0.5 | Score: x = 1 |
|---|---|---|---|
| TECH_vec | Single vector only | Two vectors | Three or more vectors with explicit couplings |
| TECH_opt | No optimizer (scenario calculation only) | Single-objective optimization | Multi-objective optimization or equivalent explicit trade-off exploration |
| TECH_Sim | Aggregate/static or seasonal snapshot calculations without continuous time-series | Hourly time-series over representative periods or full year with limited sub-hourly support | Full-year time-series with optional sub-hourly steps, consistent multi-vector balance, and documented numerical/validation details |
| TECH_forec | No built-in forecasting | Built-in forecasting for at least one stream (e.g., PV or load) or automated import from external services without accuracy reporting | Configurable multi-stream forecasting (load and RES at minimum) with documented methods, horizon/cadence control, and accuracy/confidence reporting |
| TECH_spat | Non-spatial, aggregate inputs only | Basic GIS support (layer import, georeferencing of assets) | Advanced spatial analytics (3D/shading, geoprocessing queries, perimeter/network checks) |
| TECH_grid | Single-bus balance without grid representation | Simplified treatment (aggregate losses or transformer caps) | Explicit LV/MV network model with power-flow and constraint checking |
| ECON_fin | Costs only | Some KPIs (e.g., LCOE and payback) but not the full set or without transparent assumptions | Full KPI set (NPV, LCOE, IRR, payback) with parameterized assumptions and clear reporting |
| ECON_tar | Single flat tariff only | Multiple static/TOU tariffs or limited dynamic import | Native dynamic pricing support and per-member/asset tariff assignment with different tariff set scenarios |
| ECON_sens | No built-in sensitivity | Manual/scenario-by-scenario variation with limited aggregation | Integrated batch sensitivity or stochastic analysis with summarized robustness metrics |
| ECON_shar | No explicit sharing logic (manual spreadsheets required) | Single or hard-coded scheme with limited configurability | Multiple configurable schemes (including dynamic allocation) with transparent statements and exports |
| ENVIR_carb | No emission quantification, or only static, generic emission factors without transparency or time resolution | Basic carbon accounting using national or annual average factors, with limited spatial or temporal granularity | Comprehensive carbon accounting framework with high-resolution (hourly or regional) emission factors, transparent methodology, and automatic tracking of emissions across scenarios or operational periods |
| ENVIR_obj | Environmental indicators reported only as outputs; no influence on design, control, or optimisation decisions | Environmental parameters (e.g., CO2 intensity, renewable share) usable qualitatively or as secondary evaluation metrics, but not directly optimised or constrained | Environmental performance explicitly incorporated as an optimisation objective or constraint (e.g., CO2 minimisation, renewable penetration target, emission caps), with capability for trade-off and scenario analysis |
| SOC_trans | No member-facing interface or dashboards; information accessible only to administrators | Basic dashboards with limited visibility (e.g., simple energy or cost summaries without detailed breakdowns or role differentiation) | Comprehensive, role-based member portal with detailed visualisation of energy, cost, and environmental data; includes data export, report generation, and transparency features that support trust and engagement |
| SOC_des | No participatory or feedback mechanisms; community members have no structured way to provide input | Basic feedback options (e.g., static survey or manual preference collection) without integration into platform logic or scenario design | Fully integrated co-design environment with interactive tools (surveys, voting, preference inputs) and feedback loops that directly influence planning, optimisation, or governance decisions |
| SOC_educ | No built-in help, onboarding, or external documentation; users must rely on ad hoc support | Basic user manual or FAQ provided; limited contextual help or outdated documentation | Comprehensive learning ecosystem combining interactive onboarding, contextual help, structured documentation (user and developer), and online training resources ensuring accessibility for all user types |
| OPER_ascl | Only a single or very limited asset type supported (e.g., PV monitoring only), with no control or interoperability functions | Multiple asset types represented but with limited depth (e.g., monitoring without control, or lack of standardized integration) | Wide range of controllable and observable assets supported natively, with full data integration, real-time control capability, and interoperability across multiple device classes |
| OPER_analyt | No analytical or reporting functionality beyond raw data logs | Basic analytics and standard KPI visualisation (e.g., daily/weekly summaries) | Advanced analytics with predictive/prescriptive functions, automated KPI tracking, and multi-user report generation |
| OPER_flex | No flexibility or demand response capabilities; assets operate independently | Basic manual or schedule-based flexibility activation; limited to a single asset class (e.g., batteries or EVs) | Full flexibility aggregation with automated event handling, forecasting, multi-asset coordination (including EVs), and verification of delivery |
| OPER_EV | No EV-specific functionality beyond manual metering/logging | Support for imported charging schedules or basic rule-based charging, without optimization against price/RES signals and without explicit handling of network constraints | Integrated, policy-based smart charging with optimization against dynamic prices and RES forecasts, explicit enforcement of network/connection constraints, and provision of monitoring and compliance logs |
| QUAL_us | Complex, unintuitive interface requiring expert-level knowledge; no guidance or accessibility support | Moderately usable interface with partial structure and limited contextual help | Highly intuitive, user-centred design with clear workflows, multilingual support, and built-in interactive guidance |
| QUAL_perf | Frequent errors, crashes, or data inconsistencies; performance degrades significantly under normal load | Reliable operation under standard conditions, but occasional instability or slow performance under heavy computation or large datasets | High reliability and computational performance, with stable uptime, efficient resource management, and proven resilience during intensive simulations or multi-user operation |
| QUAL_open | Closed-source platform with proprietary data models and no public documentation or APIs | Partially open system (e.g., documented APIs or selected modules available) with limited transparency | Fully open or transparent ecosystem: open-source code, public API documentation, open data models, and community-driven development |

Appendix B

Sub-criterion scores for the evaluated tools (scale 0/0.5/1 as defined in Appendix A).

| KPI / Tool | GreenPocket Energy Cockpit | BECoop | MiGRIDS Lite | LocalRES | eNeuron | (+)CityxChange | Cleanwatts | Energy Web |
|---|---|---|---|---|---|---|---|---|
| Technical | | | | | | | | |
| Energy vector | 1 | 0.5 | 0.5 | 1 | 1 | 1 | 1 | 0 |
| Simulation capability | 0 | 0.5 | 1 | 0.5 | 1 | 0.5 | 0 | 0 |
| Forecasting | 0 | 0 | 0.5 | 0 | 1 | 0.5 | 1 | 0.5 |
| Optimization | 0 | 0 | 1 | 0.5 | 1 | 1 | 1 | 0 |
| Spatial/GIS | 0 | 0.5 | 0 | 0.5 | 1 | 1 | 0.5 | 0 |
| LV/MV grid constraints & losses | 0 | 0 | 0 | 0 | 1 | 0.5 | 0 | 0 |
| Operational | | | | | | | | |
| Asset classes | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 1 | 1 |
| EV management | 0 | 0 | 0 | 0.5 | 1 | 1 | 0.5 | 0.5 |
| Analytics and reporting | 1 | 0.5 | 0.5 | 0.5 | 1 | 1 | 1 | 1 |
| Demand response/flexibility aggregation | 0 | 0 | 0 | 0.5 | 1 | 1 | 1 | 1 |
| Economic | | | | | | | | |
| Financial KPIs | 0.5 | 1 | 0.5 | 0.5 | 1 | 0.5 | 1 | 0 |
| Tariff/market models | 1 | 0 | 0.5 | 0.5 | 1 | 1 | 1 | 1 |
| Sensitivity | 0 | 0.5 | 0.5 | 0.5 | 1 | 0.5 | 0.5 | 0 |
| Benefit-sharing calculators | 0 | 0 | 0 | 0 | 1 | 0.5 | 1 | 0 |
| Environmental | | | | | | | | |
| Carbon accounting | 0 | 0.5 | 0 | 0.5 | 1 | 1 | 1 | 1 |
| Environmental objectives | 0 | 1 | 0 | 1 | 1 | 1 | 0.5 | 0.5 |
| Social | | | | | | | | |
| Member portals & transparency | 1 | 0.5 | 0 | 0.5 | 1 | 1 | 1 | 1 |
| Co-design features | 0 | 1 | 0 | 1 | 1 | 1 | 0.5 | 0 |
| Education | 0 | 1 | 0 | 0.5 | 1 | 1 | 1 | 0.5 |
| Quality and Adoption | | | | | | | | |
| Usability | 0.5 | 0.5 | 0.5 | 0.5 | 1 | 1 | 1 | 0.5 |
| Reliability and performance | 0 | 0 | 0.5 | 0 | 1 | 0.5 | 1 | 0.5 |
| Openness | 0 | 0 | 1 | 0 | 0.5 | 0.5 | 0 | 0.5 |

Appendix C

Sub-criterion scores for the evaluated tools, continued (scale 0/0.5/1 as defined in Appendix A).

| KPI / Tool | OpenEMS | Hive Power FLEXO | Powerledger | UP-STAIRS | Energy Community Platform | Rectool Simulator | PROCSIM |
|---|---|---|---|---|---|---|---|
| Technical | | | | | | | |
| Energy vector | 1 | 1 | 1 | 0 | 0.5 | 0 | 0 |
| Simulation capability | 0.5 | 0.5 | 0 | 0 | 0.5 | 0.5 | 0.5 |
| Forecasting | 0.5 | 1 | 1 | 0 | 0 | 1 | 0 |
| Optimization | 0.5 | 1 | 1 | 0 | 0 | 0 | 0 |
| Spatial/GIS | 0 | 0.5 | 0 | 0 | 0 | 1 | 0 |
| LV/MV grid constraints & losses | 0.5 | 1 | 0.5 | 0 | 0 | 0 | 0 |
| Operational | | | | | | | |
| Asset classes | 1 | 1 | 1 | 0 | 0.5 | 0.5 | 0.5 |
| EV management | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
| Analytics and reporting | 1 | 1 | 1 | 0 | 0.5 | 0 | 0 |
| Demand response/flexibility aggregation | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
| Economic | | | | | | | |
| Financial KPIs | 0.5 | 1 | 1 | 0 | 0.5 | 0 | 0 |
| Tariff/market models | 1 | 1 | 1 | 0 | 0.5 | 0 | 0 |
| Sensitivity | 0.5 | 0.5 | 0.5 | 0 | 0.5 | 0 | 0.5 |
| Benefit-sharing calculators | 0 | 1 | 1 | 0 | 0.5 | 0 | 0 |
| Environmental | | | | | | | |
| Carbon accounting | 0.5 | 1 | 1 | 0 | 0.5 | 0 | 0 |
| Environmental objectives | 0.5 | 0.5 | 0.5 | 0 | 0.5 | 0 | 0 |
| Social | | | | | | | |
| Member portals & transparency | 0.5 | 1 | 1 | 0 | 0.5 | 0 | 0 |
| Co-design features | 0 | 0.5 | 0.5 | 0.5 | 1 | 0 | 0 |
| Education | 0.5 | 1 | 1 | 1 | 0.5 | 0 | 0.5 |
| Quality and Adoption | | | | | | | |
| Usability | 0.5 | 1 | 1 | 0.5 | 0.5 | 0.5 | 0.5 |
| Reliability and performance | 1 | 1 | 1 | 0 | 0.5 | 0 | 0.5 |
| Openness | 0.5 | 0 | 0 | 0 | 1 | 1 | 1 |

Appendix D

Sub-criterion scores for the evaluated tools, continued (scale 0/0.5/1 as defined in Appendix A).

| KPI / Tool | Compile Toolbox/ComPilot & Related Tools | REScoopVPP | Quixotic | OpenEnergyMonitor |
|---|---|---|---|---|
| Technical | | | | |
| Energy vector | 0 | 0.5 | 0 | 0.5 |
| Simulation capability | 0 | 0.5 | 0 | 0.5 |
| Forecasting | 0.5 | 1 | 0.5 | 0 |
| Optimization | 0.5 | 0.5 | 0 | 0 |
| Spatial/GIS | 1 | 0 | 0 | 0 |
| LV/MV grid constraints & losses | 1 | 0 | 0 | 0 |
| Operational | | | | |
| Asset classes | 0.5 | 1 | 0.5 | 1 |
| EV management | 0 | 1 | 0 | 0 |
| Analytics and reporting | 1 | 1 | 1 | 0.5 |
| Demand response/flexibility aggregation | 1 | 1 | 0 | 0 |
| Economic | | | | |
| Financial KPIs | 0 | 0 | 0 | 0 |
| Tariff/market models | 1 | 0.5 | 0.5 | 1 |
| Sensitivity | 0 | 0 | 0 | 0 |
| Benefit-sharing calculators | 0 | 0 | 0 | 0 |
| Environmental | | | | |
| Carbon accounting | 0 | 0 | 0 | 0 |
| Environmental objectives | 0 | 0 | 0 | 0 |
| Social | | | | |
| Member portals & transparency | 1 | 1 | 1 | 0.5 |
| Co-design features | 0 | 0.5 | 0.5 | 0 |
| Education | 0.5 | 0.5 | 0.5 | 0.5 |
| Quality and Adoption | | | | |
| Usability | 0 | 0.5 | 0.5 | 0.5 |
| Reliability and performance | 0 | 0.5 | 1 | 0 |
| Openness | 0 | 1 | 0 | 0.5 |

Appendix E

Pairwise comparison matrix of the six KPI domains used for the AHP weighting.

| | TECH | OPER | QUAL | ECON | SOC | ENVIR |
|---|---|---|---|---|---|---|
| TECH | 1 | 2 | 3 | 4 | 5 | 5 |
| OPER | 1/2 | 1 | 2 | 3 | 4 | 5 |
| QUAL | 1/3 | 1/2 | 1 | 2 | 3 | 4 |
| ECON | 1/4 | 1/3 | 1/2 | 1 | 2 | 3 |
| SOC | 1/5 | 1/4 | 1/3 | 1/2 | 1 | 2 |
| ENVIR | 1/6 | 1/5 | 1/4 | 1/3 | 1/2 | 1 |
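For transparency, the expert weights can be reproduced from this matrix. The Python sketch below applies the row geometric-mean approximation for the AHP weights with Saaty’s consistency check and, for comparison, computes ROC weights for the same criterion ranking (TECH first, ENVIR last); the matrix entries are taken as printed above, and the variable names are illustrative.

```python
import numpy as np

# Pairwise comparison matrix (order: TECH, OPER, QUAL, ECON, SOC, ENVIR),
# entries as printed above. NB: the TECH-ENVIR entry (5) and its
# counterpart (1/6) are reproduced as printed, although a strictly
# reciprocal matrix would use 6.
A = np.array([
    [1,   2,   3,   4,   5,   5  ],
    [1/2, 1,   2,   3,   4,   5  ],
    [1/3, 1/2, 1,   2,   3,   4  ],
    [1/4, 1/3, 1/2, 1,   2,   3  ],
    [1/5, 1/4, 1/3, 1/2, 1,   2  ],
    [1/6, 1/5, 1/4, 1/3, 1/2, 1  ],
])
n = A.shape[0]

# AHP weights via the row geometric-mean approximation.
gm = A.prod(axis=1) ** (1 / n)
ahp_w = gm / gm.sum()

# Consistency ratio (RI = 1.24 for n = 6, from Saaty's table).
lam = (A @ ahp_w / ahp_w).mean()
CR = ((lam - n) / (n - 1)) / 1.24

# ROC weights for the same ranking: w_i = (1/n) * sum_{k=i..n} 1/k.
roc_w = np.array([sum(1 / k for k in range(i, n + 1)) / n
                  for i in range(1, n + 1)])

print("AHP:", ahp_w.round(3), "CR:", round(CR, 3))
print("ROC:", roc_w.round(3))
```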

Appendix F

Total unweighted KPI values and overall performance scores under each weighting method (ROC, AHP, EWM, CRITIC).

| Tool | Total KPI Value Before Weighting | Overall Score (ROC) | Overall Score (AHP) | Overall Score (EWM) | Overall Score (CRITIC) |
|---|---|---|---|---|---|
| GreenPocket Energy Cockpit | 1.2917 | 0.2138 | 0.2128 | 0.1871 | 0.2087 |
| BECoop | 2.5000 | 0.2690 | 0.2769 | 0.4834 | 0.4392 |
| MiGRIDS Lite | 1.4167 | 0.3701 | 0.3584 | 0.1477 | 0.2218 |
| LocalRES | 3.0833 | 0.4063 | 0.4179 | 0.5646 | 0.5481 |
| eNeuron | 5.8333 | 0.9736 | 0.9731 | 0.9872 | 0.9713 |
| (+)CityxChange | 5.4167 | 0.8452 | 0.8527 | 0.9417 | 0.9100 |
| Cleanwatts | 5.1250 | 0.7469 | 0.7587 | 0.9044 | 0.8673 |
| Hive Power FLEXO | 5.5000 | 0.8792 | 0.8839 | 0.9526 | 0.9209 |
| Powerledger | 5.2500 | 0.7771 | 0.7905 | 0.9199 | 0.8882 |
| UP-STAIRS | 0.6667 | 0.0569 | 0.0595 | 0.0640 | 0.1197 |
| Energy Web | 2.9583 | 0.4018 | 0.4251 | 0.5171 | 0.5210 |
| OpenEMS | 3.5000 | 0.6371 | 0.6435 | 0.5578 | 0.5819 |
| Energy Community Platform | 2.7500 | 0.3400 | 0.3498 | 0.4553 | 0.4737 |
| REScoopVPP | 2.8750 | 0.5709 | 0.5742 | 0.3272 | 0.4763 |
| Quixotic | 1.7500 | 0.2574 | 0.2637 | 0.1932 | 0.2996 |
| OpenEnergyMonitor | 1.4583 | 0.2575 | 0.2589 | 0.1863 | 0.2397 |
| Compile Toolbox | 1.8750 | 0.4115 | 0.4038 | 0.2523 | 0.2980 |
| Rectool Simulator | 1.0417 | 0.2795 | 0.2684 | 0.1084 | 0.1613 |
| PROCSIM | 1.1667 | 0.1928 | 0.1945 | 0.1239 | 0.1954 |

References

  1. The European Parliament and The Council of The European Union. Directive (EU) 2018/2001 of the European Parliament and of the Council of 11 December 2018 on the Promotion of the Use of Energy from Renewable Sources (Recast) (Text with EEA Relevance.). Available online: https://eur-lex.europa.eu/eli/dir/2018/2001/oj (accessed on 14 August 2025).
  2. The European Parliament and The Council of The European Union. Directive (EU) 2023/2413 of the European Parliament and of the Council of 18 October 2023 Amending Directive (EU) 2018/2001, Regulation (EU) 2018/1999 and Directive 98/70/EC as Regards the Promotion of Energy from Renewable Sources, and Repealing Council Directive (EU) 2015/652. Available online: https://eur-lex.europa.eu/eli/dir/2023/2413/oj/eng (accessed on 14 August 2025).
  3. The European Parliament and The Council of The European Union. Directive (EU) 2019/944 of the European Parliament and of the Council of 5 June 2019 on Common Rules for the Internal Market for Electricity and Amending Directive 2012/27/EU (Recast) (Text with EEA Relevance.). Available online: https://eur-lex.europa.eu/eli/dir/2019/944/oj/eng (accessed on 16 August 2025).
  4. Caramizaru, E.; Uihlein, A. Energy Communities: An Overview of Energy and Social Innovation; JRC Publications Repository; Publications Office of the European Union: Luxembourg, 2020; ISBN 978-92-76-10713-2. [Google Scholar] [CrossRef]
  5. Arias, A. Digital Tools and Platforms for Renewable Energy Communities: A Comprehensive Literature Review; Politecnico di Milano: Milan, Italy, 2024. [Google Scholar]
  6. Yazdanie, M.; Orehounig, K. Advancing urban energy system planning and modeling approaches: Gaps and solutions in perspective. Renew. Sustain. Energy Rev. 2021, 137, 110607. [Google Scholar] [CrossRef]
  7. Ferrando, M.; Causone, F.; Hong, T.; Chen, Y. Urban building energy modeling (UBEM) tools: A state-of-the-art review of bottom-up physics-based approaches. Sustain. Cities Soc. 2020, 62, 102408. [Google Scholar] [CrossRef]
  8. Sinha, S.; Chandel, S.S. Review of software tools for hybrid renewable energy systems. Renew. Sustain. Energy Rev. 2014, 32, 192–205. [Google Scholar] [CrossRef]
  9. Giannuzzo, L.; Minuto, F.D.; Schiera, D.S.; Branchetti, S.; Petrovich, C.; Gessa, N.; Frascella, A.; Lanzini, A. Assessment of renewable energy communities: A comprehensive review of key performance indicators. Energy Rep. 2025, 13, 6609–6630. [Google Scholar] [CrossRef]
  10. Wiese, F.; Hilpert, S.; Kaldemeyer, C. A qualitative evaluation approach for energy system modelling frameworks. Energy Sustain. Soc. 2018, 8, 13. [Google Scholar] [CrossRef]
  11. Vecchi, F.; Stasi, R.; Berardi, U. Modelling tools for the assessment of renewable energy communities. Energy Rep. 2024, 11, 3941–3962. [Google Scholar] [CrossRef]
  12. Velkovski, B.; Gjorgievski, V.Z.; Kothona, D.; Bouhouras, A.S.; Cundeva, S.; Markovska, N. Impact of tariff structures on energy community and grid operational parameters. Sustain. Energy Grids Netw. 2024, 38, 101382. [Google Scholar] [CrossRef]
  13. Shahzad, K.; Tuomela, S.; Juntunen, J.K. Emergence and prospects of digital mediation in energy communities: Ecosystem actors’ perspective. Energy Sustain. Soc. 2025, 15, 35. [Google Scholar] [CrossRef]
  14. Kazmi, H.; Munné-Collado, Í.; Mehmood, F.; Syed, T.A.; Driesen, J. Towards data-driven energy communities: A review of open-source datasets, models and tools. Renew. Sustain. Energy Rev. 2021, 148, 111290. [Google Scholar] [CrossRef]
  15. Amin, R. Exploring stakeholder engagement in energy system modelling and planning: A systematic review using SWOT analysis. Energ. Sustain. Plan. J. 2025, 28, 153–178. [Google Scholar] [CrossRef]
  16. Johannsen, R.M.; Prina, M.G.; Østergaard, P.A.; Mathiesen, B.V.; Sparber, W. Municipal energy system modelling—A practical comparison of optimisation and simulation approaches. Energy 2023, 269, 126803. [Google Scholar] [CrossRef]
  17. Weighted Sum Method—An Overview. ScienceDirect Topics. Available online: https://www.sciencedirect.com/topics/computer-science/weighted-sum-method (accessed on 5 November 2025).
  18. Macgregor, G. How to Do Research: A Practical Guide to Designing and Managing Research Projects, 3rd ed.; Library Review: London, UK, 2007; pp. 337–339. [Google Scholar] [CrossRef]
  19. Bowen, G.A. Document Analysis as a Qualitative Research Method. Qual. Res. J. 2009, 9, 27–40. [Google Scholar] [CrossRef]
  20. Triantaphyllou, E. Multi-Criteria Decision Making Methods: A Comparative Study; Springer: Boston, MA, USA, 2000. [Google Scholar] [CrossRef]
  21. Mardani, A.; Zavadskas, E.K.; Khalifah, Z.; Zakuan, N.; Jusoh, A.; Nor, K.; Khoshnoudi, M. A review of multi-criteria decision-making applications to solve energy management problems: Two decades from 1995 to 2015. Renew. Sustain. Energy Rev. 2017, 71, 216–256. [Google Scholar] [CrossRef]
  22. Henderson, J.; Peeling, R. A framework for early-stage sustainability assessment of innovation projects enabled by weighted sum multi-criteria decision analysis in the presence of uncertainty. Open Res. Eur. 2024, 4, 162. [Google Scholar] [CrossRef]
  23. Jahangirian, M.; Taylor, S.J.E.; Young, T.; Robinson, S. Key performance indicators for successful simulation projects. J. Oper. Res. Soc. 2017, 68, 747–765. [Google Scholar] [CrossRef]
  24. Roubtsova, E. KPI design as a simulation project. In Proceedings of the 32nd European Modeling and Simulation Symposium, Online, 16–18 September 2020; pp. 120–129. [Google Scholar] [CrossRef]
  25. Kifor, C.V.; Olteanu, A.; Zerbes, M. Key Performance Indicators for Smart Energy Systems in Sustainable Universities. Energies 2023, 16, 1246. [Google Scholar] [CrossRef]
  26. Lamprousis, G.D.; Golfinopoulos, S.K. The Integrated Energy Community Performance Index (IECPI): A Multidimensional Tool for Evaluating Energy Communities. Urban Sci. 2025, 9, 264. [Google Scholar] [CrossRef]
  27. Bianco, G.; Bonvini, B.; Bracco, S.; Delfino, F.; Laiolo, P.; Piazza, G. Key Performance Indicators for an Energy Community Based on Sustainable Technologies. Sustainability 2021, 13, 8789. [Google Scholar] [CrossRef]
  28. Mancò, G.; Tesio, U.; Guelpa, E.; Verda, V. A review on multi energy systems modelling and optimization. Appl. Therm. Eng. 2024, 236, 121871. [Google Scholar] [CrossRef]
  29. Hao, J.; Yang, Y.; Xu, C. A comprehensive review of planning, modeling, optimization, and control of distributed energy systems. Carbon Neutrality 2022, 1, 28. [Google Scholar] [CrossRef]
  30. Taxt, H.; Bjarghov, S.; Askeland, M.; Crespo del Granado, P.; Morch, A.; Degefa, M.Z.; Rana, R. Integration of energy communities in distribution grids: Development paths for local energy coordination. Energy Strategy Rev. 2025, 58, 101668. [Google Scholar] [CrossRef]
  31. Obi, M.; Slay, T.; Bass, R. Distributed energy resource aggregation using customer-owned equipment: A review of literature and standards. Energy Rep. 2020, 6, 2358–2369. [Google Scholar] [CrossRef]
  32. Li, H.; Johra, H.; de Andrade Pereira, F.; Hong, T.; Le Dréau, J.; Maturo, A.; Wei, M.; Liu, Y.; Saberi-Derakhtenjani, A.; Nagy, Z.; et al. Data-driven key performance indicators and datasets for building energy flexibility: A review and perspectives. Appl. Energy 2023, 343, 121217. [Google Scholar] [CrossRef]
  33. Ranaboldo, M.; Aragüés-Peñalba, M.; Arica, E.; Bade, A.; Bullich-Massagué, E.; Burgio, A.; Caccamo, C.; Caprara, A.; Cimmino, D.; Domenech, B.; et al. A comprehensive overview of industrial demand response status in Europe. Renew. Sustain. Energy Rev. 2024, 203, 114797. [Google Scholar] [CrossRef]
  34. Teng, Q.; Wang, X.; Hussain, N.; Hussain, S. Maximizing economic and sustainable energy transition: An integrated framework for renewable energy communities. Energy 2025, 317, 134544. [Google Scholar] [CrossRef]
  35. Delapedra-Silva, V.; Ferreira, P.; Cunha, J.; Kimura, H. Methods for Financial Assessment of Renewable Energy Projects: A Review. Processes 2022, 10, 184. [Google Scholar] [CrossRef]
  36. Minuto, F.D.; Lanzini, A. Energy-sharing mechanisms for energy community members under different asset ownership schemes and user demand profiles. Renew. Sustain. Energy Rev. 2022, 168, 112859. [Google Scholar] [CrossRef]
  37. Ryszawska, B.; Rozwadowska, M.; Ulatowska, R.; Pierzchała, M.; Szymański, P. The Power of Co-Creation in the Energy Transition—DART Model in Citizen Energy Communities Projects. Energies 2021, 14, 5266. [Google Scholar] [CrossRef]
  38. Berendes, S.; Hilpert, S.; Günther, S.; Muschner, C.; Candas, S.; Hainsch, K.; van Ouwerkerk, J.; Buchholz, S.; Söthe, M. Evaluating the usability of open source frameworks in energy system modelling. Renew. Sustain. Energy Rev. 2022, 159, 112174. [Google Scholar] [CrossRef]
  39. Department for Energy Security & Net Zero. Use of Multi-Criteria Decision Analysis in Options Appraisal of Economic Cases. 2024. Available online: https://assets.publishing.service.gov.uk/media/6645e4b2b7249a4c6e9d3631/Use_of_MCDA_in_options_appraisal_of_economic_cases.pdf (accessed on 15 October 2025).
  40. O’Shea, R.; Deeney, P.; Triantaphyllou, E.; Diaz-Balteiro, L.; Tarim, S.A. Weight Stability Intervals for Multi-Criteria Decision Analysis Using the Weighted Sum Model. Expert Syst. Appl. 2026, 296, 128460. [Google Scholar] [CrossRef]
  41. Methods of Choosing Weights. Available online: https://ebrary.net/134839/mathematics/methods_choosing_weights (accessed on 18 October 2025).
  42. Hatefi, M.A. An Improved Rank Order Centroid Method (IROC) for Criteria Weight Estimation: An Application in the Engine/Vehicle Selection Problem. Informatica 2023, 34, 249–270. [Google Scholar] [CrossRef]
  43. Kunsch, P. A Critical Analysis on Rank-Order-Centroid (ROC) and Rank-Sum (RS) Weights in Multicriteria-Decision Analysis; Vrije Universiteit Brussel: Brussels, Belgium, 2019. [Google Scholar]
  44. Diahovchenko, I.M.; Kandaperumal, G.; Srivastava, A.K. Enabling resiliency using microgrids with dynamic boundaries. Electr. Power Syst. Res. 2023, 221, 109460. [Google Scholar] [CrossRef]
  45. Bozorg-Haddad, O.; Loáiciga, H.; Zolghadr-Asli, B. Analytic Hierarchy Process (AHP). In A Handbook on Multi-Attribute Decision-Making Methods; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2021. [Google Scholar] [CrossRef]
  46. Pascoe, S. A Simplified Algorithm for Dealing with Inconsistencies Using the Analytic Hierarchy Process. Algorithms 2022, 15, 442. [Google Scholar] [CrossRef]
  47. Zhu, Y.; Tian, D.; Yan, F. Effectiveness of Entropy Weight Method in Decision-Making. Math. Probl. Eng. 2020, 2020, 3564835. [Google Scholar] [CrossRef]
  48. Krishnan, A.R.; Kasim, M.M.; Hamid, R.; Ghazali, M.F. A Modified CRITIC Method to Estimate the Objective Weights of Decision Criteria. Symmetry 2021, 13, 973. [Google Scholar] [CrossRef]
  49. Zhang, Q.; Fan, J.; Gao, C. CRITID: Enhancing CRITIC with advanced independence testing for robust multi-criteria decision-making. Sci. Rep. 2024, 14, 25094. [Google Scholar] [CrossRef]
  50. Roszkowska, E. Rank Ordering Criteria Weighting Methods—A Comparative Overview. Optim. Econ. Stud. 2013, 5, 14–33. [Google Scholar] [CrossRef]
  51. Saaty, T.L.; Vargas, L.G. Models, Methods, Concepts & Applications of the Analytic Hierarchy Process, 2nd ed.; Springer: New York, NY, USA, 2022; ISBN 978-1-4614-3596-9. [Google Scholar]
  52. Arce, M.E.; Saavedra, Á.; Míguez, J.L.; Granada, E. The use of grey-based methods in multi-criteria decision analysis for the evaluation of sustainable energy systems: A review. Renew. Sustain. Energy Rev. 2015, 47, 924–932. [Google Scholar] [CrossRef]
  53. Gao, X.; An, R. Research on the coordinated development capacity of China’s hydrogen energy industry chain. J. Clean. Prod. 2022, 377, 134177. [Google Scholar] [CrossRef]
  54. Berman, J.J. Chapter 4—Understanding Your Data. In Data Simplification; Morgan Kaufmann Publishers: Burlington, MA, USA, 2016; pp. 135–187. ISBN 9780128037812. [Google Scholar] [CrossRef]
  55. Bult-Ito, Y. Free Tool Helps Small Communities Pick Renewable Energy Sources. UAF News and Information, University of Alaska Fairbanks, 7 August 2025. Available online: https://www.uaf.edu/news/free-tool-helps-small-communities-pick-renewable-energy-sources.php (accessed on 5 November 2025).
  56. Gilchrist, P. New Tool Looks to Make Grid Modeling More Accessible to Small Communities. KUAC. 24 August 2025. Available online: https://fm.kuac.org/2025-08-24/new-tool-looks-to-make-grid-modeling-more-accessible-to-small-communities (accessed on 5 November 2025).
  57. Localres. Available online: https://www.localres.eu/ (accessed on 5 November 2025).
  58. eNeuron. Optimising the Design and Operation of Local Energy Communities Based on Multi-Carrier Energy Systems. Available online: https://eneuron.eu/ (accessed on 5 November 2025).
  59. +CityxChange. Positive City ExChange—Enabling the Co-Creation of the Future We Want to Live in. Available online: https://cityxchange.eu/ (accessed on 6 November 2025).
  60. +CityxChange. Developing a Lighthouse Project for Positive Energy Districts. +CityxChange Project, Horizon 2020 Grant Agreement No 824260; Sustainable Places 2019: [Limerick, Ireland], 2021. Available online: https://www.sustainableplaces.eu/wp-content/uploads/2021/04/CityxChange-%E2%80%93-Developing-a-Lighthouse-Project-for-Positive-Energy-Districts.pdf (accessed on 5 November 2025).
  61. +CityxChange. CORDIS—EU Research Results. Positive City Exchange. Available online: https://cordis.europa.eu/project/id/824260/results (accessed on 15 August 2025).
  62. ABB. The Climate-Positive City. Available online: https://new.abb.com/news/detail/110049/the-climate-positive-city (accessed on 19 August 2025).
  63. Gall, T.; Carbonari, G.; Ahlers, D.; Wyckmans, A. Co-Creating Local Energy Transitions Through Smart Cities: Piloting a Prosumer-Oriented Approach. In Review of World Planning Practice; International Society of City and Regional Planners: The Hague, The Netherlands, 2020; Volume 16, pp. 112–127. Available online: https://www.institute-urbanex.org/wp-content/uploads/2020/11/Co-Creating-Local-Energy-Transitions-Through-Smart-Cities-Piloting-a-Prosumer-Oriented-Approach.pdf (accessed on 20 August 2025).
  64. Cleanwatts. Cleanwatts—Shaping the Future of Sustainable Energy. Available online: https://cleanwatts.energy/ (accessed on 21 August 2025).
  65. Cleanwatts. Cleanwatts Official Channel. YouTube Channel. Available online: https://www.youtube.com/@cleanwatts4048 (accessed on 25 August 2025).
  66. ABB. ABB-Cleanwatts Solution—Scaling Community Energy Solutions. Available online: https://new.abb.com/low-voltage/solutions/energy-efficiency/abb-cleanwatts (accessed on 26 August 2025).
  67. Hive Power SA. FLEXO. Available online: https://www.hivepower.tech/flexo (accessed on 2 September 2025).
  68. Powerledger. Powerledger—Software Solutions for Tracking, Tracing and Trading Renewable Energy. Available online: https://powerledger.io (accessed on 5 September 2025).
  69. Messari. Power Ledger (POWR)—Project Profile. Available online: https://messari.io/project/power-ledger/profile (accessed on 3 September 2025).
  70. CoinMarketCap. Powerledger (POWR)—Cryptocurrency Profile. Available online: https://coinmarketcap.com/currencies/power-ledger (accessed on 8 September 2025).
  71. eCREW. The App. Available online: https://ecrew-project.eu/the-app (accessed on 9 September 2025).
  72. GreenPocket GmbH. Residential Customers—Energy Cockpit for Residential Customers. Available online: https://www.greenpocket.com/products/residential-customers (accessed on 11 September 2025).
  73. BECoop. D2.4 BECoop Toolkit—Final. BECoop Project (Horizon 2020 Grant Agreement No 952930). October 2022. Available online: https://becoop-kep.eu/wp-content/uploads/2023/11/D2.4_BECoop_Toolkit-Final_V1.0_compressed.pdf (accessed on 15 September 2025).
  74. BECoop. Unlocking the Community Bioenergy Potential. Available online: https://ieecp.org/projects/becoop/ (accessed on 17 September 2025).
  75. UP-STAIRS. UP-Lifting Energy Communities. Available online: https://www.h2020-upstairs.eu/ (accessed on 19 September 2025).
  76. UP-STAIRS. About the UP-STAIRS. Available online: https://www.h2020-upstairs.eu/about (accessed on 23 September 2025).
  77. COMPILE. Integrating Community Power in Energy Islands. Available online: https://main.compile-project.eu/ (accessed on 25 September 2025).
  78. OpenEnergyMonitor. OpenEnergyMonitor—Open Source Energy Monitoring and Analysis Tools. Available online: https://openenergymonitor.org/ (accessed on 26 September 2025).
  79. OpenEnergyMonitor. Emoncms—User Login. Available online: https://emoncms.org/app/view?name=MySolarBattery (accessed on 29 September 2025).
  80. Joint Research Centre, European Commission. REC Tool—Renewable Energy Communities Tool. Available online: https://ses.jrc.ec.europa.eu/rectool (accessed on 30 September 2025).
  81. De Paola, A.; Musiari, E.; Andreadou, N.; Fortunati, L.; Francesco, G.; Anselmi, G.P. An Open-Source IT Tool for Energy Forecast of Renewable Energy Communities. IEEE Access 2025, 13, 69619–69630. [Google Scholar] [CrossRef]
  82. Velosa, N.; Gomes, E.; Morais, H.; Pereira, L. PROCSIM: An Open-Source Simulator to Generate Energy Community Power Demand and Generation Scenarios. Energies 2023, 16, 1611. Available online: https://www.mdpi.com/1996-1073/16/4/1611 (accessed on 3 October 2025). [CrossRef]
  83. Energy Web. Energy Web—Built, Connect, Transform. Available online: https://www.energyweb.org/ (accessed on 6 October 2025).
  84. Energy Web. Energy Web X Ecosystem. Documentation Overview. Available online: https://docs-launchpad.energyweb.org (accessed on 8 October 2025).
  85. OpenEMS Association e.V. OpenEMS—The Open Source Energy Management System. Available online: https://openems.io/ (accessed on 10 October 2025).
  86. OpenEMS Association e.V. OpenEMS—Introduction. Available online: https://openems.github.io/openems.io/openems/latest/introduction.html (accessed on 13 October 2025).
  87. REScoop.eu. Energy Community Platform—One-Stop Solution for Community Energy Projects. Available online: https://energycommunityplatform.eu/ (accessed on 15 October 2025).
  88. REScoopVPP. REScoopVPP—Community-Driven Virtual Power Plant and Smart Building Ecosystem. Available online: https://www.rescoopvpp.eu/ (accessed on 17 October 2025).
  89. European Climate, Infrastructure and Environment Executive Agency. Horizon Energy: REScoopVPP—Smart Building Ecosystem for Energy Communities. Available online: https://cinea.ec.europa.eu/featured-projects/horizon-energy-rescoopvpp-smart-building-ecosystem-energy-communities_en (accessed on 20 October 2025).
  90. Quixotic. Quixotic—Cloud Solution to Automate Energy Billing and Invoicing Operations for Energy Communities and Utilities. Available online: https://www.quixotic.energy/ (accessed on 23 October 2025).
  91. Shan, S.; Yang, S.; Becerra, V.; Deng, J.; Li, H. A Case Study of Existing Peer-to-Peer Energy Trading Platforms: Calling for Integrated Platform Features. Sustainability 2023, 15, 16284. [Google Scholar] [CrossRef]
