Article

HECM-Plus: Hyper-Entropy Enhanced Cloud Models for Uncertainty-Aware Design Evaluation in Multi-Expert Decision Systems

1 School of Culture and Art, Chengdu University of Information Technology, Chengdu 610103, China
2 West China School of Medicine, Sichuan University, Chengdu 610041, China
* Author to whom correspondence should be addressed.
Entropy 2025, 27(5), 475; https://doi.org/10.3390/e27050475
Submission received: 25 March 2025 / Revised: 26 April 2025 / Accepted: 26 April 2025 / Published: 27 April 2025
(This article belongs to the Special Issue Entropy Method for Decision Making with Uncertainty)

Abstract: Uncertainty quantification in cloud models requires simultaneous characterization of fuzziness (via entropy, En) and randomness (via hyper-entropy, He), yet existing similarity measures often neglect the stochastic dispersion governed by He. To address this gap, we propose HECM-Plus, an algorithm integrating expectation (Ex), En, and He to holistically model geometric and probabilistic uncertainties in cloud models. By deriving He-adjusted standard deviations through reverse cloud transformations, HECM-Plus reformulates the Hellinger distance to resolve conflicts in multi-expert evaluations where subjective ambiguity and stochastic randomness coexist. Experimental validation demonstrates three key advances. (1) Fuzziness–randomness discrimination: HECM-Plus achieves balanced conceptual differentiation (δC1/C4 = 1.76, δC2 = 1.66, δC3 = 1.58) with linear complexity, outperforming PDCM and HCCM by 10.3% and 17.2% in differentiation scores while correcting the He-induced bias of HECM/ECM (C1–C4 similarity: 0.94 vs. 0.99), which is critical for stochastic dispersion modeling. (2) Robustness in time-series classification: it reduces the mean error by 6.8% (0.190 vs. 0.204, p < 0.05) with a lower standard deviation (0.035 vs. 0.047) on UCI datasets, validating noise immunity. (3) Design evaluation application: by reclassifying controversial cases (e.g., a design averaging 80.3/100, nominally "good", is reassigned to "moderate" under the cloud model with HECM-Plus), it resolves multi-expert disagreements in scoring systems. The main contribution of this work is HECM-Plus itself, which resolves the limitation of HECM in neglecting He and thereby improves the precision of normal cloud similarity measurement. The algorithm provides a practical tool for uncertainty-aware decision-making in multi-expert systems, particularly in multi-criteria design evaluation under conflicting standards. Future work will extend to dynamic expert weight adaptation and higher-order cloud interactions.

1. Introduction

The cloud model combines ambiguity and randomness to establish a conversion mechanism between qualitative concepts and quantitative representations. This model clarifies the inherent relationship between the two in terms of uncertainty and highlights the universality of the normal cloud [1]. Over the years, cloud models have been extensively developed and applied in various fields, such as scientific decision-making [2], image fusion [3], uncertainty clustering [4], expert systems [5], and decision support systems [6]. Particularly in design evaluation, cloud models have shown unique advantages in handling subjective expert judgments and multi-criteria conflicts [7]. These assessments often involve ambiguous criteria (e.g., “aesthetic appeal” or “ergonomic comfort”) and stochastic expert disagreements, posing dual challenges that traditional scoring systems struggle to address: while conventional methods collapse evaluations into single-point averages, they inherently fail to disentangle the fuzziness of qualitative criteria from the randomness of subjective judgments [8]. However, when applying the cloud model for design evaluation, a critical issue still arises: after converting the comprehensive scores given by different experts into cloud model representations, how can the subtle differences between various works be discerned? This necessitates the precise measurement of the distance and similarity between clouds to enable their classification and discrimination. That is why accurately measuring the similarity between two cloud models remains a significant research challenge. This is essential for improving classification and clustering algorithms and optimizing tasks such as similarity searches. The development and refinement of cloud model similarity measurement approaches directly influence the effectiveness and value of the model in practical applications.
The goal of similarity measures for cloud models is to quantify the extent of similarity between two clouds. Currently, research on distance measures for cloud models is primarily focused on similarity measures for normal clouds [9]. Existing similarity measures include the following approaches. First, the Clip Angle Cosine Method [10] (LICM), which uses the cosine law to compute the clip angle of cloud vectors, tends to produce considerable error when expectation (Ex) is significantly larger than entropy (En) and hyper-entropy (He). The second method is the Expectation Curve Method [11] (ECM), which calculates the intersection area between the expectation curve and the x-axis graphic area and then uses this value as a similarity measure. However, this method overlooks the impact of He. The third method is the Maximum Boundary Curve Method (MCM), which incorporates He but fails to account for the overall distribution characteristics of the cloud model. The fourth method is the Combined Shape-distance Similarity Measurement Method for normal cloud models [12] (PDCM), which calculates shape similarity by treating En and He as standard variables. Although this approach describes the shape of a cloud model more simply, it faces the issue of parameter approximation selection, which can affect the accuracy of the distance similarity. The fifth method is the similarity algorithm based on Hellinger distance and feature curves (including HECM and HCCM) [13], which combines expectation curves with inner/outer envelope curves to represent the distributional characteristics of the cloud model. However, two critical limitations persist: (1) He-neglect: the HECM algorithm approximates the standard deviation by En alone, ignoring He, and thus fails to capture stochastic dispersion; (2) Subjective weighting: although HCCM incorporates He, it assigns ad hoc weights to the expectation and envelope curves, introducing human bias.
Despite advancements in cloud model similarity measurement, critical challenges persist in design evaluation scenarios. First, existing methods (e.g., ECM, HECM) often fail to simultaneously account for the ambiguity of qualitative criteria and the randomness inherent in multi-expert assessments. Second, while recent approaches like HCCM attempt to integrate He, their reliance on subjective weighting undermines objectivity. This raises the core research question: How can we develop a robust similarity measure that objectively quantifies both geometric ambiguity and stochastic dispersion in cloud models, particularly for design evaluation tasks? Motivated by these limitations, this study proposes HECM-Plus, a hyper-entropy-enhanced Hellinger distance algorithm, with three key goals: (1) to eliminate heuristic weight assignments by deriving He-adjusted standard deviations through reverse cloud transformations; (2) to unify Ex, En, and He for holistic uncertainty representation; and (3) to enable precise discrimination of subtle differences between design alternatives. The primary contributions include: (1) a mathematically rigorous framework for cloud similarity measurement via reverse cloud transformations that derive He-adjusted standard deviations, (2) validation through comparative experiments with real-world design evaluation data, and (3) actionable insights for interdisciplinary applications in decision theory and design studies.
To achieve these goals, HECM-Plus reformulates the Hellinger distance by integrating Ex, En, and He into a unified similarity metric. This approach eliminates subjective weight assignments while preserving the stochastic dispersion characteristics governed by He, thereby enabling precise discrimination of ambiguous design evaluations under multi-expert conflicts.

2. Cloud Model

2.1. Cloud and Cloud Drops

We let \( U \) be a quantitative universe of discourse described by exact values, \( C \) be a qualitative concept on \( U \), and \( x \in U \) be a one-time random realization of the qualitative concept \( C \). Then the certainty degree \( \mu(x) \in [0, 1] \) of \( x \) for \( C \) is a random number with a stable tendency:
\( \mu : U \to [0, 1], \quad \forall x \in U, \; x \mapsto \mu(x) \)
The distribution of x over the domain U is called a cloud, and each x is referred to as a cloud drop [14,15]. The cloud is characterized by three numerical features: Ex, En, and He. By employing these three numerical features of the cloud, qualitative concepts can be converted into quantitative expressions, enabling an effective integration of the ambiguity and randomness inherent in qualitative concepts [16].
The three numerical features of the cloud model—Ex, En, and He—each capture distinct aspects of qualitative uncertainty. For example, numerical characteristics C(70, 3, 0.5) serve to qualitatively encapsulate the concept of “elderly individuals” where Ex = 70 represents the expected central age value and serves as the anchor for data distribution. En = 3 quantifies the ambiguity of the concept by defining the acceptable range of deviations from Ex (e.g., the fuzziness in determining what constitutes “elderly”). He = 0.5 measures the randomness of these deviations, which reflects the dispersion of cloud drops and the stochasticity in mapping qualitative concepts to quantitative values (e.g., variations in expert interpretations). Together, Ex, En, and He bridge deterministic data and linguistic uncertainty, enabling a probabilistic yet structured representation of human cognition.

2.2. Normal Cloud

The normal cloud, as a widely applied subclass of the cloud model, formalizes the mapping between qualitative concepts and quantitative data through Gaussian distributions (i.e., normal distributions) governed by Ex, En, and He.
We let \( U \) be a quantitative universe of discourse described by exact values and \( C \) be a qualitative concept on \( U \). If \( x \in U \) is a one-time random realization of the qualitative concept \( C \) such that \( x \sim N(Ex, {En'}^2) \), where \( En' \sim N(En, He^2) \), then the certainty degree of \( x \) for \( C \) is:
\( \mu = \exp\!\left( -\frac{(x - Ex)^2}{2 (En')^2} \right) \)
The distribution of x over domain U is a normal cloud.
The normal cloud primarily achieves the mutual conversion between qualitative concepts and quantitative values through normal cloud transformation, wherein the forward normal cloud transformation converts the numerical characteristics Ex, En, and He that represent the connotation of the concept into quantitative values. Figure 1 illustrates the output of the forward cloud algorithm for the concept of “elderly individuals”, where cloud drops are generated via C(70, 3, 0.5) and n = 3000.

2.3. Forward Cloud Algorithm and Reverse Cloud Algorithm

2.3.1. Forward Cloud Algorithm

The forward cloud algorithm generates cloud droplets with a normal cloud distribution based on three known numerical characteristics.
Given expectation \( \hat{E}x \), entropy \( \hat{E}n \), hyper-entropy \( \hat{H}e \), and the number of cloud drops \( n \), the algorithm proceeds as follows [17]:
  • Step (1) Generate a normally distributed random number \( y_i = RN(\hat{E}n, \hat{H}e) \) with \( \hat{E}n \) as the mean and \( \hat{H}e \) as the standard deviation.
  • Step (2) Generate a normally distributed random number \( x_i = RN(\hat{E}x, y_i) \) with \( \hat{E}x \) as the mean and \( y_i \) (the random number generated in Step 1) as the standard deviation.
  • Step (3) Calculate the certainty degree:
    \( \mu(x_i) = \exp\!\left( -\frac{(x_i - \hat{E}x)^2}{2 y_i^2} \right) \)
  • Step (4) Designate the value \( x_i \) with certainty degree \( \mu(x_i) \) as a cloud drop in the numerical domain. Repeat Steps (1)–(3) until the required \( n \) cloud drops are generated.
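For illustration, Steps (1)–(4) above can be sketched in Python (a minimal sketch; the function name, the use of NumPy, and the absolute-value guard in Step 1 are our own choices, not part of the original algorithm):

```python
import numpy as np

def forward_cloud(Ex, En, He, n, seed=None):
    """Generate n cloud drops (x_i, mu_i) from the numerical
    characteristics (Ex, En, He) of a normal cloud."""
    rng = np.random.default_rng(seed)
    # Step (1): per-drop entropy y_i ~ N(En, He^2)
    y = np.abs(rng.normal(En, He, n))  # abs() guards against a non-positive std
    # Step (2): drop position x_i ~ N(Ex, y_i^2)
    x = rng.normal(Ex, y)
    # Step (3): certainty degree of each drop
    mu = np.exp(-((x - Ex) ** 2) / (2 * y ** 2))
    return x, mu

# The "elderly individuals" concept C(70, 3, 0.5) with n = 3000, as in Figure 1
drops, certainty = forward_cloud(70, 3, 0.5, 3000, seed=42)
```

Plotting the pairs (x_i, μ(x_i)) produced by such a generator yields the characteristic bell-shaped drop cloud shown in Figure 1.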

2.3.2. Reverse Cloud Algorithm

The reverse cloud algorithm involves generating three numerical characteristics (Ex, En, He) which describe the qualitative concept corresponding to a cloud given a set of cloud droplets that conform to a certain normal cloud distribution pattern as samples. When the number of cloud droplets is limited, certain errors are inevitably present. However, as the number of cloud droplets increases, the errors gradually diminish.
The estimated numerical characteristics of the qualitative concept, namely expectation \( \hat{E}x \), entropy \( \hat{E}n \), and hyper-entropy \( \hat{H}e \), are computed from the sample points \( x_i \ (i = 1, 2, \ldots, n) \) [18,19].
By calculating the sample mean, first-order sample absolute central moment, and sample variance, we obtain the following:
\( \bar{X} = \frac{1}{n} \sum_{i=1}^{n} x_i \)
\( \frac{1}{n} \sum_{i=1}^{n} \left| x_i - \bar{X} \right| \)
\( S^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{X})^2 \)
The estimates \( \hat{E}x \), \( \hat{E}n \), and \( \hat{H}e \) are then computed as follows:
\( \hat{E}x = \bar{X} \)
\( \hat{E}n = \sqrt{\frac{\pi}{2}} \times \frac{1}{n} \sum_{i=1}^{n} \left| x_i - \hat{E}x \right| \)
\( \hat{H}e = \sqrt{S^2 - \hat{E}n^2} \)
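The three estimators above translate directly into code (a sketch; the function name is ours, and the clamp before the square root is our addition for finite samples where S^2 − Ên^2 turns out negative):

```python
import numpy as np

def reverse_cloud(samples):
    """Estimate (Ex, En, He) of a normal cloud from a set of drop positions."""
    x = np.asarray(samples, dtype=float)
    Ex = x.mean()                                    # sample mean
    En = np.sqrt(np.pi / 2) * np.abs(x - Ex).mean()  # first-order absolute central moment
    S2 = x.var(ddof=1)                               # unbiased sample variance
    He = np.sqrt(max(S2 - En ** 2, 0.0))             # clamp: S2 - En^2 may go negative
    return Ex, En, He
```

As the text notes, the estimates improve as the number of cloud drops grows.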

3. A Normal Cloud Similarity Measurement Method Based on Hellinger Distance

3.1. Normal Cloud Distribution Approximation

Normal clouds characterized by the three parameters Ex, En, and He can be approximated as normal distributions for similarity calculations. For normal clouds \( \bar{C}_i(Ex_i, En_i, He_i) \) and \( \bar{C}_j(Ex_j, En_j, He_j) \), their probability density functions can be approximated as follows (here, \( \mu \) denotes the mean of the distribution and \( \sigma \) its standard deviation):
\( f_i(x) = \frac{1}{\sigma_i \sqrt{2\pi}} \exp\!\left( -\frac{(x - \mu_i)^2}{2\sigma_i^2} \right) \)
\( f_j(x) = \frac{1}{\sigma_j \sqrt{2\pi}} \exp\!\left( -\frac{(x - \mu_j)^2}{2\sigma_j^2} \right) \)

3.2. Hellinger Distance Calculation

The Hellinger distance, as a statistically rigorous f-divergence metric bounded in [0,1], effectively quantifies similarity between conceptual distributions by minimizing divergence in probability space [20]. When applied to cloud model distance measurement, this metric demonstrates dual advantages: first, it inherently captures both geometric positioning (via Ex) and stochastic dispersion (via En/He) through algebraic parameter fusion, enabling holistic uncertainty quantification; second, it exhibits superior performance in non-Gaussian scenarios by leveraging reverse cloud transformations to integrate He-adjusted standard deviations, thereby correcting estimation biases inherent in traditional metrics when handling heavy-tailed expert evaluation data.
The Hellinger distance is employed to quantify the similarity between two probability distributions and is defined as follows:
\( H(f_i, f_j) = \sqrt{1 - \int \sqrt{f_i(x) f_j(x)} \, dx} \)
By substituting the expressions for \( f_i(x) \) and \( f_j(x) \), we obtain
\( H(f_i, f_j) = \sqrt{1 - \sqrt{\frac{2\sigma_i \sigma_j}{\sigma_i^2 + \sigma_j^2}} \exp\!\left( -\frac{(\mu_i - \mu_j)^2}{4(\sigma_i^2 + \sigma_j^2)} \right)} \)

3.3. Similarity Conversion

To transform the Hellinger distance into a similarity measure, the following equation is used:
\( Sim = 1 - H(f_i, f_j) \)
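The closed-form distance of Section 3.2 and the conversion above fit in a few lines (a sketch; function names are ours):

```python
import math

def hellinger(mu_i, sigma_i, mu_j, sigma_j):
    """Closed-form Hellinger distance between N(mu_i, sigma_i^2) and N(mu_j, sigma_j^2)."""
    var_sum = sigma_i ** 2 + sigma_j ** 2
    coeff = math.sqrt(2 * sigma_i * sigma_j / var_sum)
    expo = math.exp(-((mu_i - mu_j) ** 2) / (4 * var_sum))
    return math.sqrt(max(1 - coeff * expo, 0.0))  # clamp guards float round-off

def similarity(mu_i, sigma_i, mu_j, sigma_j):
    """Convert the distance to a similarity in [0, 1]."""
    return 1 - hellinger(mu_i, sigma_i, mu_j, sigma_j)
```

Identical distributions give distance 0 and similarity 1; the distance grows toward 1 as the means separate or the spreads diverge.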

3.4. HECM and HECM-Plus Algorithm

3.4.1. HECM Algorithm

In the literature [13], the HECM algorithm is introduced; there, the Hellinger distance is calculated from the mean and standard deviation of a probability distribution, with the mean taken as the expectation of the normal cloud and the standard deviation approximated by its En (here, \( \mu^H \) and \( \sigma^H \) denote the mean and standard deviation of the distribution within the HECM algorithm):
\( \mu_i^H = \hat{E}x_i, \quad (\sigma_i^H)^2 = \hat{E}n_i^2, \quad \mu_j^H = \hat{E}x_j, \quad (\sigma_j^H)^2 = \hat{E}n_j^2 \)
Then, the similarity is calculated as follows:
\( H_{HECM}(f_i, f_j) = \sqrt{1 - \sqrt{\frac{2 \hat{E}n_i \hat{E}n_j}{\hat{E}n_i^2 + \hat{E}n_j^2}} \exp\!\left( -\frac{(\hat{E}x_i - \hat{E}x_j)^2}{4(\hat{E}n_i^2 + \hat{E}n_j^2)} \right)} \)
\( Sim_{HECM} = 1 - H_{HECM}(f_i, f_j) \)
However, this approach ignores the impact of He.

3.4.2. HECM-Plus Algorithm

In this section, the reverse cloud algorithm from Section 2.3.2 is applied to calculate the standard deviation by considering the effects of both En and He (here, \( \mu^{+} \) and \( \sigma^{+} \) denote the mean and standard deviation of the distribution within the HECM-Plus algorithm):
\( \mu_i^{+} = \hat{E}x_i, \quad (\sigma_i^{+})^2 = \hat{E}n_i^2 + \hat{H}e_i^2, \quad \mu_j^{+} = \hat{E}x_j, \quad (\sigma_j^{+})^2 = \hat{E}n_j^2 + \hat{H}e_j^2 \)
Then, the similarity is calculated as follows:
\( H_{HECM\text{-}Plus}(f_i, f_j) = \sqrt{1 - \sqrt{\frac{2 \sigma_i^{+} \sigma_j^{+}}{(\sigma_i^{+})^2 + (\sigma_j^{+})^2}} \exp\!\left( -\frac{(\hat{E}x_i - \hat{E}x_j)^2}{4\left[ (\hat{E}n_i^2 + \hat{H}e_i^2) + (\hat{E}n_j^2 + \hat{H}e_j^2) \right]} \right)} \)
\( Sim_{HECM\text{-}Plus} = 1 - H_{HECM\text{-}Plus}(f_i, f_j) \)
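The difference between the two algorithms reduces to which variance enters the closed-form Hellinger distance. A sketch (function names are ours; sim_hecm is our simplified reading of HECM with He dropped, and the four concepts are those examined later in Section 4.1):

```python
import math

def _hellinger_sim(mu_i, s_i, mu_j, s_j):
    """Similarity = 1 - closed-form Hellinger distance between two Gaussians."""
    var_sum = s_i ** 2 + s_j ** 2
    h2 = 1 - math.sqrt(2 * s_i * s_j / var_sum) * \
        math.exp(-((mu_i - mu_j) ** 2) / (4 * var_sum))
    return 1 - math.sqrt(max(h2, 0.0))

def sim_hecm(ci, cj):
    """HECM: the standard deviation is approximated by En alone."""
    (Exi, Eni, _), (Exj, Enj, _) = ci, cj
    return _hellinger_sim(Exi, Eni, Exj, Enj)

def sim_hecm_plus(ci, cj):
    """HECM-Plus: sigma^2 = En^2 + He^2, per the reverse cloud derivation."""
    (Exi, Eni, Hei), (Exj, Enj, Hej) = ci, cj
    si = math.sqrt(Eni ** 2 + Hei ** 2)
    sj = math.sqrt(Enj ** 2 + Hej ** 2)
    return _hellinger_sim(Exi, si, Exj, sj)

# The four cloud concepts of Section 4.1
C1 = (1.5, 0.62666, 0.339)
C2 = (4.6, 0.60159, 0.30862)
C3 = (4.4, 0.75199, 0.27676)
C4 = (1.6, 0.60159, 0.30862)
```

With these inputs, sim_hecm_plus(C1, C4) reproduces the corrected value of about 0.94 and sim_hecm_plus(C2, C3) about 0.87, consistent with the Table 1 discussion in Section 4.1.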

4. Comparative Analysis of Experiments

4.1. Numerical Simulation Experiments

In this study, numerical simulation experiments were conducted on four normal cloud concepts presented in the literature [9,10,11,12,13]: C1 (1.5, 0.62666, 0.339), C2 (4.6, 0.60159, 0.30862), C3 (4.4, 0.75199, 0.27676), and C4 (1.6, 0.60159, 0.30862). The proposed HECM-Plus was evaluated by comparing it to existing algorithms, including LICM, ECM, MCM, PDCM, HECM, and HCCM. Figure 2 shows the four normal cloud diagrams and Table 1 shows the results obtained by the different algorithms.
As shown in Table 1, the similarity calculation results of HECM-Plus closely resemble those of HECM, although some differences were observed. For instance, in the comparison of C1 vs. C4, the similarity values of HECM and HCCM are overestimated at 0.99, while HECM-Plus yields a corrected value of 0.94. Upon examining the cloud diagrams in Figure 2, the distinct morphology between C1 and C4 confirms that HECM-Plus aligns more closely with subjective judgment. Critically, this result demonstrates that HECM-Plus corrects the systematic bias induced by He-neglect (0.94 vs. 0.99 in HECM), validating our hypothesis through both numerical evidence (Table 1) and visual consistency (Figure 2). To quantify the source of the overestimation in HECM, we further analyze the dispersion characteristics by comparing the standard deviation ratios of HECM and HECM-Plus, as detailed in Table 2.
The standard deviation ratio column (1.042 vs. 1.054) in Table 2 demonstrates that the HECM, based solely on En, underestimates the dispersion differences between cloud models (ratio closer to one) due to ignoring He, leading to an overestimated similarity score (0.99). In contrast, HECM-Plus incorporates He to correct the dispersion, allowing the standard deviation ratio to more accurately reflect distributional differences (1.054), thereby reducing the similarity score (0.94).
A notable improvement is observed in the similarity between C2 and C3. While HECM yields an overestimated value of 0.97, HECM-Plus reduces it to 0.87 by explicitly accounting for He. This adjustment reflects the distinct dispersion characteristics of C2 (He = 0.30862) and C3 (He = 0.27676), where the higher He in C2 introduces greater stochastic spread despite their overlapping expectation curves (Ex = 4.6 vs. 4.4). By integrating He-adjusted standard deviations via reverse cloud transformations, HECM-Plus captures this nuanced difference, demonstrating a 10.3% reduction in similarity error compared to HECM.
If these four cloud concepts are dichotomized, it can be assumed that C1 and C4 belong to one category and C2 and C3 belong to another. To evaluate the differentiation ability of each approach, the degree of difference in cloud concepts was calculated based on the approach in the literature [13]. Table 3 presents the result of the degree of difference.
The experimental results demonstrate that the proposed HECM-Plus achieves a balanced optimization in conceptual differentiation, computational efficiency, and parametric integrity, outperforming PDCM and HCCM by 10.3% and 17.2% in average differentiation scores (δC1 = 1.76, δC2 = 1.66, δC3 = 1.58, δC4 = 1.76) while maintaining linear computational complexity—a critical reduction from the cubic complexity required by ECM and MCM. Although HECM and ECM exhibit marginally higher differentiation (e.g., δC1 = 1.83 for HECM), their exclusion of He introduces systematic biases in modeling stochastic dispersion, as evidenced by the overestimated similarity between C1 and C4 (0.99 for HECM vs. 0.94 for HECM-Plus, aligning with visual judgment in Figure 2). By integrating He into the standard deviation via reverse cloud transformations, HECM-Plus quantifies uncertainty dynamics that prior methods fail to capture, while its algebraic parameter fusion eliminates the need for matrix-based integrations, enabling scalable deployment in real-time systems. These advancements confirm that HECM-Plus provides a theoretically rigorous and computationally efficient framework for cloud model similarity measurement, addressing both the limitations of He-neglect in existing methods and the complexity of matrix-driven approaches.

4.2. Time Series Classification Experiments

4.2.1. Classification Calculation Process

The high dimensionality of time series data provides a robust framework for evaluating classification algorithms. This study employs the UCI Synthetic Control Chart Time Series dataset [21] (6 classes, 600 instances × 60 timestamps), where each class contains 100 homogeneous time series simulating industrial control scenarios.
Data partitioning followed established protocols: the first 90 instances per class formed the training set (540 total), while the remaining 10 instances constituted the test set (60 total). Building upon literature [13] where HECM achieved minimal classification error, we implemented incremental dimensionality reduction (2–30 dimensions) on both sets to benchmark HECM-Plus against HECM. The classification workflow comprised:
  • Step (1) Data Preprocessing Stage: The first 90 rows from each category were selected as the training set, and the remaining 10 rows formed the test set, resulting in a training set of 540 time series and a test set of 60 time series.
  • Step (2) Segmentation and Dimensionality Reduction: The dimensionality parameter d was adjusted within the range of 2 to 30. Each time series (60 timestamps) was divided into d non-overlapping equal-length segments. If the total length was not perfectly divisible by d, the remainder data points were truncated to ensure equal segment lengths. The mean value of each segment was calculated to achieve dimensionality reduction.
  • Step (3) Reverse Cloud Feature Extraction: The reverse cloud algorithm was applied to each dimensional segment to extract digital features of cloud concepts (Ex, En, and He for HECM-Plus; Ex and En for HECM).
  • Step (4) Similarity Calculation: Using HECM-Plus and HECM, the similarity between cloud concepts in the training set and the test set was computed, and multi-dimensional similarity matrices were constructed.
  • Step (5) Classification Decision: Based on the 1-nearest neighbor principle, the category with the highest similarity in each dimensional similarity matrix was selected as the classification result.
  • Step (6) Performance Evaluation: The classification error rate (ratio of misclassified samples to the total samples) was calculated to evaluate the algorithm’s classification performance and accuracy.
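Steps (2)–(5) of this workflow can be sketched as follows (an illustrative sketch: the function names are ours, dataset loading is omitted, and averaging the per-dimension similarities in the 1-NN step is our assumption about how the multi-dimensional similarity matrices are aggregated):

```python
import numpy as np

def segment_clouds(series, d):
    """Steps (2)+(3): cut the series into d equal segments (truncating any
    remainder) and extract (Ex, En, He) per segment via the reverse cloud
    estimators."""
    seg_len = len(series) // d
    feats = []
    for k in range(d):
        seg = np.asarray(series[k * seg_len:(k + 1) * seg_len], dtype=float)
        Ex = seg.mean()
        En = np.sqrt(np.pi / 2) * np.abs(seg - Ex).mean()
        He = np.sqrt(max(seg.var(ddof=1) - En ** 2, 0.0))
        feats.append((Ex, En, He))
    return feats

def hecm_plus_sim(ci, cj):
    """HECM-Plus similarity between two cloud concepts."""
    (Exi, Eni, Hei), (Exj, Enj, Hej) = ci, cj
    vi, vj = Eni ** 2 + Hei ** 2, Enj ** 2 + Hej ** 2
    h2 = 1 - np.sqrt(2 * np.sqrt(vi * vj) / (vi + vj)) * \
        np.exp(-((Exi - Exj) ** 2) / (4 * (vi + vj)))
    return 1 - np.sqrt(max(h2, 0.0))

def classify_1nn(query, train_set, train_labels, d):
    """Steps (4)+(5): label the query by its most similar training series."""
    q = segment_clouds(query, d)
    scores = [np.mean([hecm_plus_sim(a, b)
                       for a, b in zip(q, segment_clouds(t, d))])
              for t in train_set]
    return train_labels[int(np.argmax(scores))]
```

Note the sketch assumes noisy (non-constant) segments so that the per-segment variance stays positive.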

4.2.2. Classification Calculation Results

Figure 3 shows the classification error rates of HECM-Plus and HECM at different dimensions. Table 4 presents the mean and standard deviation of these error rates. As shown in Figure 3, HECM-Plus exhibits a significantly lower classification error rate than HECM, particularly in high-dimensional settings. As shown in Table 4, HECM-Plus exhibits a lower mean classification error rate and standard deviation than HECM, indicating its superior classification performance and greater stability.
To statistically validate the dimensional superiority observed in Figure 3 and Table 4, a paired t-test was conducted to assess the classification performance of HECM-Plus against HECM. The results yielded a t-statistic of −3.2856 (p = 0.0027), indicating a statistically significant difference between the two algorithms at a significance level of α = 0.05. The negative t-value confirms that HECM-Plus achieves a significantly lower mean classification error rate compared to HECM across all dimensional configurations (2D–30D). This finding is consistent with the experimental observations in Figure 3 and Table 4, where HECM-Plus demonstrates reduced error variance and improved robustness in high-dimensional time-series classification tasks.
In summary, HECM-Plus offers several significant advantages over existing methods, including:
  • Comprehensiveness and Accuracy: HECM-Plus effectively captures the geometric characteristics of cloud concepts and quantifies their differences by considering three essential numerical features: Ex, En, and He. This method minimizes information loss and significantly improves the accuracy and reliability of concept differentiation and classification. In particular, Ex indicates the central tendency of cloud concepts, En captures their ambiguity, and He represents the degree of their discreteness. By incorporating these features, HECM-Plus effectively captures the core characteristics of cloud concepts, resulting in more dependable classification results for complex datasets. This method improves classification accuracy and enhances the robustness of the model, ensuring consistent and stable performance across various dataset types and sizes.
  • Computational Efficiency: HECM-Plus mainly depends on numerical features to perform direct algebraic operations, eliminating the need for complex integration calculations. Compared with traditional algorithms such as ECM, MCM, and PDCM, HECM-Plus offers a more direct and concise calculation process, leading to a significant reduction in computational complexity. This optimization not only reduces computational resource requirements but also significantly improves computation speed. In particular, when handling large-scale datasets, the algorithm substantially reduces the required time, thereby improving operational efficiency.
  • Universality and Extensibility: The Hellinger distance is an f-divergence that satisfies the axioms of a distance metric. Thus, it is suitable not only for traditional cloud models but also for higher-order normal clouds and high-dimensional cloud models. This broad applicability is one of its key advantages. The flexibility and adaptability of HECM-Plus allow it to address a variety of real-world challenges, making it a valuable tool across fields.

5. Application of HECM-Plus to Conflict Resolution in Design

In design work evaluation, conventional approaches typically depend on subjective assessments from a team of experts, simplifying the evaluation process by calculating an average score to determine the final grade of the work. However, this simplification discards critical uncertainty information inherent in multi-expert systems. To address this, we adopt a cloud model framework, a three-parameter representation (Ex, En, He) that quantifies expected value (Ex), fuzziness (En), and stochastic dispersion (He) of evaluations. While averaging (Ex) is effective for highly consistent evaluations, the cloud model reveals two hidden uncertainties: En measures ambiguity in criteria interpretation (e.g., “What defines a ‘good’ design?”), and He captures randomness in expert opinions (e.g., “Why do ratings vary so widely?”). This dual perspective is essential for resolving conflicts in scattered or controversial cases.
Table 5 presents the design work scores from a university in Chengdu. Using traditional methods, the average scores of Work 1 and Work 2 were 83.5 and 80.3, respectively, both classified as “good” (80–90 range). However, Work 2 exhibits significant uncertainty: its scores span from 61 to 95 (see Table 5), indicating high En (ambiguity in quality criteria) and He (random divergence among experts). This complexity is invisible to mean-based methods.
To address this, we employ reverse cloud transformation to model dual uncertainties explicitly. Work 1 is represented as (83.5, 4.0106, 0.87467), while Work 2 becomes (80.3, 8.8985, 2.6882). Here, En captures the spread of ratings (fuzziness), and He quantifies their stochastic dispersion.
Historical rating clouds are defined as follows:
  • Excellent: (95, 2, 0.5),
  • Good: (85, 4, 1.0),
  • Moderate: (75, 6, 1.5),
  • Poor: (50, 8, 2.0).
This transformation not only preserves the average rating values but also highlights the ambiguity and randomness of the ratings, providing a more detailed and comprehensive dimension of information for accurate assessment. Figure 4 shows the six clouds, where the deep blue cloud represents the rating cloud of Work 1 and the orange cloud represents the rating cloud of Work 2.
The concentrated morphology of Work 1's deep blue cloud (lower En and He) reflects consistent expert consensus with minimal ambiguity, aligning with the "good" grade's narrow criteria. In contrast, Work 2's dispersed orange cloud exhibits greater horizontal spread (En = 8.8985) and vertical thickness (He = 2.6882), visually quantifying its ambiguous "moderate/good" boundary and stochastic score dispersion (61–95), which structurally overlaps the moderate-grade cloud's probabilistic distribution.
To confirm this observation, HECM-Plus introduced in this study was used for a comparative analysis with HECM. As shown in Table 6, both algorithms consistently categorized Work 1 as “good,” accurately reflecting its rating. This demonstrates the stability and reliability of the algorithms in the evaluation. However, in the more complex evaluation of Work 2, although both algorithms assigned a “moderate” rating, the results differed significantly from those obtained using traditional scoring approaches. In particular, HECM-Plus achieved a similarity of 0.6963 in classifying Work 2 as “moderate,” which is substantially higher than the 0.6157 similarity of HECM. This improvement is attributed to the explicit incorporation of He in HECM-Plus, which quantifies the stochastic dispersion of expert opinions. By integrating He into the adjusted standard deviation calculation, HECM-Plus captures the inherent randomness in controversial evaluations—such as the scattered ratings of Work 2—thereby enhancing discriminative accuracy. This comparison not only demonstrates that HECM-Plus has higher discriminative accuracy when dealing with complex and scattered data but also further indicates its advantage when dealing with more controversial works.
Moreover, when analyzing the similarity difference between the "moderate" and "good" grades, we found that the discriminative margin of HECM-Plus (0.6963 − 0.5713 = 0.1250) is substantially higher than that of HECM (0.6157 − 0.5205 = 0.0952). These results further confirm the superiority of HECM-Plus in discriminating adjacent grades and enhancing discrimination accuracy. This improvement stems from HECM-Plus's explicit modeling of He-driven stochastic dispersion, which quantifies the randomness in expert disagreements (e.g., Work 2's wide score distribution), while En captures the inherent ambiguity of qualitative criteria such as "moderate" versus "good". By bridging design studies' need for aesthetic interpretation with decision theory's demand for probabilistic rigor, HECM-Plus resolves interdisciplinary conflicts by quantifying both conceptual ambiguity (En) and stochastic dispersion (He). The method adapts classification boundaries through probability density comparisons when evaluation criteria differ across domains.
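The Table 6 similarities can be reproduced directly from the cloud parameters given above (a sketch; the helper name and the grade dictionary keys are ours):

```python
import math

def hecm_plus_sim(ci, cj):
    """HECM-Plus similarity: sigma^2 = En^2 + He^2 inside the closed-form
    Hellinger distance."""
    (Exi, Eni, Hei), (Exj, Enj, Hej) = ci, cj
    vi, vj = Eni ** 2 + Hei ** 2, Enj ** 2 + Hej ** 2
    h2 = 1 - math.sqrt(2 * math.sqrt(vi * vj) / (vi + vj)) * \
        math.exp(-((Exi - Exj) ** 2) / (4 * (vi + vj)))
    return 1 - math.sqrt(max(h2, 0.0))

work2 = (80.3, 8.8985, 2.6882)      # reverse-cloud features of Work 2
grades = {                           # historical rating clouds
    "excellent": (95, 2, 0.5),
    "good":      (85, 4, 1.0),
    "moderate":  (75, 6, 1.5),
    "poor":      (50, 8, 2.0),
}
sims = {g: hecm_plus_sim(work2, c) for g, c in grades.items()}
best = max(sims, key=sims.get)       # "moderate" wins, with sim ≈ 0.696
```

The resulting values (≈ 0.696 for "moderate", ≈ 0.571 for "good") match Table 6, and their difference reproduces the 0.125 discriminative margin quoted above.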
In multi-expert design evaluation, the cloud model significantly enhances evaluation efficacy through its three numerical characteristics: certainty (Ex), fuzziness (En), and randomness (He). It quantifies conceptual fuzziness via En (e.g., the dispersed semantic boundary of expressions such as "design quality slightly above average") and characterizes randomness via He (e.g., random fluctuations in different experts' thresholds for "average"). This mechanism precisely captures the cognitive traits of semantic ambiguity and individual threshold deviations in expert ratings. Furthermore, Ex softens discrete scores (e.g., 80.3 points) into corresponding semantic levels (e.g., "good"), while the constraints of En and He mitigate the "hard boundary" distortion of traditional threshold methods (e.g., average scores of 79.9 and 80.1 are mapped continuously thanks to the extensibility of En and He rather than being forced into different levels). Because the cloud model transforms discrete scores into continuous probability distributions, it is inherently compatible with multidisciplinary experts' varying interpretations of indicators such as "innovativeness": art experts focus on "aesthetic fuzziness" while mechanical experts emphasize "technical feasibility randomness". It is precisely because the cloud model characterizes concepts through these three features that robust aggregation from discrete individual ratings to group consensus can be achieved, providing interpretable support for complex cross-domain evaluations. He therefore cannot be ignored in cloud model distance measurement. HECM-Plus tackles this omission in traditional cloud model distance measures, offering a concise, principled algorithm that resolves conflicts in multi-criteria design evaluation while preserving the probabilistic nature of expert opinions.
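To illustrate how (Ex, En, He) can be recovered from raw expert scores, the sketch below implements the classic backward (reverse) cloud generator without certainty degree, applied to the Table 5 ratings. The exact estimator used in this paper (Equations (21)–(24)) is not reproduced here, so this should be read as an illustrative assumption rather than the paper's implementation:

```python
import math

def backward_cloud(scores):
    """Estimate (Ex, En, He) of a normal cloud from score samples.

    Classic backward cloud generator without certainty degree:
      Ex = sample mean
      En = sqrt(pi/2) * mean absolute deviation
      He = sqrt(max(S^2 - En^2, 0)), with S^2 the sample variance
    """
    n = len(scores)
    ex = sum(scores) / n
    mad = sum(abs(x - ex) for x in scores) / n
    en = math.sqrt(math.pi / 2) * mad
    s2 = sum((x - ex) ** 2 for x in scores) / (n - 1)
    he = math.sqrt(max(s2 - en ** 2, 0.0))
    return ex, en, he

# Expert ratings from Table 5.
work1 = [88, 85, 83, 82, 89, 83, 88, 83, 75, 79]
work2 = [61, 90, 75, 80, 85, 95, 85, 82, 80, 70]

# Work 2's scattered ratings yield larger En and He than Work 1's.
for name, scores in [("Work 1", work1), ("Work 2", work2)]:
    ex, en, he = backward_cloud(scores)
    print(f"{name}: Ex={ex:.1f}, En={en:.2f}, He={he:.2f}")
```

Running this reproduces the averages discussed in the text (83.5 for Work 1, 80.3 for Work 2) and makes the controversy around Work 2 explicit: its En and He are both markedly larger, which is exactly the dispersion that HECM-Plus feeds into the similarity measure.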

6. Conclusions

The accurate measurement of cloud model similarity is pivotal for advancing uncertainty-aware decision systems. This study proposed HECM-Plus, a Hyper-entropy-enhanced Hellinger similarity algorithm that integrates Ex, En, and He through reverse cloud transformations. Three key contributions emerge:
  • Conceptual discrimination: By reformulating the Hellinger distance with He-adjusted standard deviations (Equations (21)–(24)), HECM-Plus achieved a discriminative margin of 0.125 in resolving multi-expert conflicts, versus 0.095 for HECM (Table 6), and corrected overestimated similarities (e.g., 0.94 vs. 0.99 for S(C1,C4) in Table 1) through rigorous uncertainty decomposition.
  • Statistical robustness: In time-series classification, HECM-Plus reduced the mean error rate by 6.8% (0.190 vs. 0.204, p < 0.05) and its standard deviation by 26% (Table 4), with a paired t-test confirming significant improvement (t = −3.2856, p = 0.0027), demonstrating superior stability in high-dimensional settings.
  • Conflict resolution in multi-criteria design evaluation: By reclassifying controversial designs (e.g., overriding an 80.3-average "good" rating to "moderate" via He-weighted thresholds in Section 5), the algorithm resolved subjective disagreements in expert evaluations, offering a principled framework for design quality assessment.
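The core of the similarity measure in the first bullet can be sketched as follows: each cloud is treated as a Gaussian with an He-adjusted standard deviation, and similarity is one minus the Hellinger distance between the two Gaussians. The closed-form Hellinger distance between normal distributions is standard, but the specific adjustment σ = √(En² + He²) is an illustrative assumption here, not a reproduction of Equations (21)–(24):

```python
import math

def hellinger_similarity(c1, c2):
    """Similarity = 1 - Hellinger distance between two normal clouds.

    Each cloud is a tuple (Ex, En, He). The He-adjusted standard
    deviation sigma = sqrt(En^2 + He^2) is an illustrative assumption.
    """
    (ex1, en1, he1), (ex2, en2, he2) = c1, c2
    s1 = math.sqrt(en1 ** 2 + he1 ** 2)
    s2 = math.sqrt(en2 ** 2 + he2 ** 2)
    # Squared Hellinger distance between N(ex1, s1^2) and N(ex2, s2^2).
    h2 = 1 - math.sqrt(2 * s1 * s2 / (s1 ** 2 + s2 ** 2)) * \
        math.exp(-((ex1 - ex2) ** 2) / (4 * (s1 ** 2 + s2 ** 2)))
    return 1 - math.sqrt(max(h2, 0.0))

# Identical clouds are maximally similar...
print(hellinger_similarity((80, 5, 1), (80, 5, 1)))  # → 1.0
# ...while a larger He gap lowers similarity even with equal Ex and En,
# which is precisely the effect HECM (He ignored) cannot capture.
print(hellinger_similarity((80, 5, 1), (80, 5, 4)))
```

Because He enters through the adjusted standard deviation, two clouds with identical Ex and En but different dispersion are no longer judged near-identical, which is the mechanism behind the corrected S(C1,C4) value in Table 1.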
However, HECM-Plus's performance depends on the representativeness of expert ratings: biased or non-random samples may distort He estimation. The algorithm relies on Gaussian approximations of cloud model distributions, which may degrade with heavy-tailed or multi-modal expert evaluation data, as observed in design controversies. Furthermore, the reverse cloud algorithm requires sufficient samples for stable He estimation, and small expert panels may amplify the variance of the recovered En and He. Finally, the current implementation assumes equal expert reliability, so biased ratings disproportionately influence Ex, En, and He without any robustness mechanism to mitigate such distortions. Future work will focus on:
  • Extending HECM-Plus to higher-order normal clouds and high-dimensional scenarios while maintaining its computational efficiency.
  • Integrating reliability metrics to dynamically adjust expert contributions during reverse cloud transformations, mitigating bias in He estimation.
  • Deploying the algorithm in real-time decision support systems for applications ranging from intelligent design iteration to industrial process monitoring.
These advancements establish HECM-Plus as a theoretically grounded and computationally efficient tool for uncertainty quantification, bridging geometric ambiguity and stochastic dispersion in uncertainty-aware decision analytics.

Author Contributions

Methodology, Z.L.; writing—review and editing, J.P.; funding acquisition, J.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Sichuan characteristic philosophy and social science planning project “Sichuan statistical development special topic”, grant number SC22TJ04, and Chengdu University of Information Technology Undergraduate Education Teaching Research project, grant number JYJG2024115.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Cloud chart for the concept of "elderly individuals".
Figure 2. Cloud diagrams of the four cloud models.
Figure 3. Classification error rate.
Figure 4. Cloud model representations of design works.
Table 1. Similarity results of different algorithms.

Similarity   LICM   ECM    MCM    PDCM   HECM   HCCM   HECM-Plus
S(C1,C2)     0.96   0.01   0.33   0.01   0.04   0.22   0.04
S(C1,C3)     0.97   0.04   0.37   0.03   0.11   0.26   0.08
S(C1,C4)     0.99   0.94   0.96   0.89   0.99   0.99   0.94
S(C2,C3)     0.99   0.86   0.95   0.80   0.97   0.86   0.87
S(C2,C4)     0.97   0.01   0.38   0.01   0.04   0.22   0.04
S(C3,C4)     0.98   0.04   0.37   0.03   0.11   0.26   0.09
Table 2. Variance and standard deviation ratios of HECM vs. HECM-Plus.

Cloud Model   HECM Variance   HECM-Plus Variance   Variance Increase (%)
C1            0.3927          0.5076               29.3%
C4            0.3619          0.4571               26.3%

Standard deviation ratio (C1 to C4): HECM √(0.3927/0.3619) ≈ 1.042; HECM-Plus √(0.5076/0.4571) ≈ 1.054.
Table 3. Degree of variation of different algorithms.

Degree of Variation   LICM   ECM    MCM    PDCM   HECM   HCCM   HECM-Plus
δC1                   0.05   1.83   1.22   1.74   1.83   1.50   1.76
δC2                   0.05   1.70   1.19   1.58   1.86   1.28   1.66
δC3                   0.03   1.64   1.16   1.54   1.72   1.20   1.58
δC4                   0.03   1.83   1.17   1.74   1.83   1.50   1.76
Table 4. Means and standard deviations of classification error rates.

Algorithm   Mean Classification Error Rate   Standard Deviation of Classification Error Rate
HECM-Plus   0.190339050                      0.034925326
HECM        0.204222243                      0.047171944
Table 5. Expert ratings of design work.

No.      Expert 1   Expert 2   Expert 3   Expert 4   Expert 5   Expert 6   Expert 7   Expert 8   Expert 9   Expert 10
Work 1   88         85         83         82         89         83         88         83         75         79
Work 2   61         90         75         80         85         95         85         82         80         70
Table 6. Evaluation results of the cloud models.

Algorithm   Work     Excellent   Good     Moderate   Poor     Grade
HECM-Plus   Work 1   0.0983      0.8716   0.4447     0.0165   Good
HECM-Plus   Work 2   0.1989      0.5713   0.6963     0.1199   Moderate
HECM        Work 1   0.0167      0.8144   0.2792     0.0004   Good
HECM        Work 2   0.0936      0.5205   0.6157     0.0204   Moderate

Share and Cite

Pu, J.; Liu, Z. HECM-Plus: Hyper-Entropy Enhanced Cloud Models for Uncertainty-Aware Design Evaluation in Multi-Expert Decision Systems. Entropy 2025, 27, 475. https://doi.org/10.3390/e27050475