Article

Integrating Reliability, Uncertainty, and Subjectivity in Design Knowledge Flow: A CMZ-BENR Augmented Framework for Kansei Engineering

1 Faculty of Innovation and Design, City University of Macau, Macau 999078, China
2 Faculty of Data Science, City University of Macau, Macau 999078, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(5), 758; https://doi.org/10.3390/sym17050758
Submission received: 2 April 2025 / Revised: 8 May 2025 / Accepted: 13 May 2025 / Published: 14 May 2025
(This article belongs to the Special Issue Fuzzy Set Theory and Uncertainty Theory—3rd Edition)

Abstract

As a knowledge-intensive activity, the Kansei engineering (KE) process encounters numerous challenges in the design knowledge flow, primarily due to issues of information reliability, uncertainty, and subjectivity. Bridging this gap, this study introduces an advanced KE framework integrating a cloud model with Z-numbers (CMZ) and Bayesian elastic net regression (BENR). In stage I, data mining techniques are employed to process online user reviews, coupled with a similarity analysis of affective word clusters to identify representative emotional descriptors. In stage II, the CMZ algorithm refines K-means clustering outcomes for market-representative product forms, enabling precise feature characterization and experimental prototype development. Stage III addresses linguistic uncertainties in affective modeling through CMZ-augmented semantic differential questionnaires, achieving a multi-granular representation of subjective evaluations. Subsequently, stage IV employs BENR for automated hyperparameter optimization in design knowledge inference, eliminating manual intervention. The framework's efficacy is empirically validated through a domestic cleaning robot case study, demonstrating superior performance in resolving multiple information processing challenges via comparative experiments. The results confirm that this KE framework significantly improves uncertainty management in the design knowledge flow compared with conventional implementations. Furthermore, by leveraging the intrinsic symmetry of normal cloud distributions with Z-numbers and the balanced ℓ1/ℓ2 regularization of BENR, the CMZ-BENR framework embodies the principle of structural harmony.

1. Introduction

The design knowledge flow in product development refers to the progressive transformation of raw data into information and actionable design knowledge [1,2]. This knowledge-driven process, exemplified in Kansei engineering (KE) for emotional design, has grown increasingly complex alongside advancements in the knowledge economy, digital twin technology, and intelligent design tools [3,4]. Although managing this flow is complex, doing so enables the rapid retrieval of emotional design knowledge through KE for new product development, which is especially beneficial for small- and medium-sized enterprises (SMEs) with few employees and substantial workloads [5]. Therefore, as an innovative design knowledge flow, KE not only boosts enterprise competitiveness through its mapping model but also enhances development efficiency through knowledge management [6,7,8]. A central challenge in current KE practice is ensuring that this knowledge flow remains reliable, accommodates uncertainty, and respects the subjectivity inherent in human affective responses [9,10,11]. In practice, designers often rely on design expert systems for optimization [12,13], and must screen large volumes of user data and feedback while accounting for noisy or biased information, which makes it difficult to consistently convert heterogeneous inputs into reliable design decisions. Addressing reliability, uncertainty, and subjectivity in tandem is therefore crucial for effective knowledge-driven product design, yet it remains a thorny issue in today's KE implementations.
Existing approaches do little to improve measurement reliability in KE. To elicit user emotions (Kansei) quantitatively, designers often employ psychological, physiological, and mixed methods [14,15,16], especially well-known questionnaire scales [17]. These conventional scales are easy to administer but can introduce significant biases and inconsistencies into the measurement of affective differences: because respondents' attitudes waver, their use of the scale is inconsistent, reducing the reliability of the collected data. In recent years, some scholars have proposed improved measurement methods to enhance questionnaire development [18], but the accurate representation of the resulting data still needs to change, because real-world design information is mostly expressed with uncertainty. Even though information transparency has been improved by expanding the scoring range (e.g., 5- or 7-point Likert scales [19]), introducing neutrality and ambiguity options (e.g., VAS-RRP [20], NRS [21]), and setting multidimensional reference options (e.g., Rasch-PANAS [22]), it remains difficult to consider uncertainty and reliability together. On this basis, the Z-number can be extended to encode how sure an expert is about a given Kansei evaluation, that is, a 2-tuple comprising a definite value and the degree of certainty about that value (e.g., "I am quite sure it is very fashionable"). This is the significance of studying Z-numbers.
Existing approaches also explore, as far as possible, how to handle uncertainty and subjectivity in KE. Altering the granularity of the language is a favorable way to achieve this [23]. Regarding user preference evaluation, a KE modelling approach based on linguistic representation [24] has been proposed to deal with uncertainty and fuzziness in evaluation. Guo et al. [25] emphasized Heuristics-Kansei evaluation of user group consensus using a dominance-based rough set approach. Lou et al. [26] proposed an integrated decision-making method, consisting of a normal cloud model and EGG, to evaluate product design schemes, but it failed to predict Kansei knowledge. However, existing research on qualitative evaluation information focuses on randomness and fuzziness through rough [27], fuzzy [28], and hesitant [29] methods, without considering the implicit cognitive friction of decision-makers, such as subtle differences in the evaluation of similar figures. It is therefore important to introduce into user preference evaluation research an uncertainty transformation model that combines probability theory and fuzzy set theory, namely the normal cloud model (CM). Moreover, when facing design decision-making problems, the informational and hierarchical characteristics of evaluations are more fully reflected, and the real situation can be simulated more intuitively, through the cloud generator [30].
Existing approaches further seek to reduce deviation by improving modeling technology in KE. User preference prediction primarily employs machine learning to establish the Kansei bridge (regression and classification) and can be roughly categorized into statistical regression and artificial intelligence algorithms [31]. The first category includes linear regression, regularization regression, logistic regression, and similar techniques [32], while the second comprises artificial neural networks, decision trees, support vector machines, and related methods [33]. Compared with artificial intelligence algorithms, which are hard to interpret on small sample sizes, regularization methods in statistical regression not only highlight model explainability but also mitigate the risk of overfitting; elastic net regression (ENR) is a case in point. ENR can handle the multicollinearity induced by high-dimensional data in simple linear regression and can also screen out low-sensitivity features, giving it the characteristics of model simplicity, interpretability, and generalization. Nevertheless, ENR still depends on experts to select the corresponding norm for the penalty, and certain hyperparameter issues remain.
In summary, a variety of tools have been proposed to tackle reliability, uncertainty, and subjectivity in the design knowledge flow, but each covers only part of the problem. At the same time, some scholars are conducting in-depth research that combines multiple methods. Liu et al. [34] highlighted a design knowledge adjacency network based on multi-layer networks, intuitionistic fuzzy sets, and grey correlation to enhance design prediction. Shen et al. [35] endeavored to integrate the associative creative thinking process and the fuzzy KE process, while concurrently employing statistical and artificial intelligence algorithms to predict innovative product feature knowledge. Li et al. [36] pioneered a method combining variable precision rough sets (VPRSs) and a Bayesian regularization backpropagation neural network (BR-BPNN) to predict KE knowledge, improving prediction accuracy and design satisfaction. However, most of these works fail to further improve the reliability of information or strengthen the interpretability of the models.
Based on the above literature review, the research gap can be perceived as follows:
(1)
Existing Kansei engineering questionnaire measurements lack descriptions of reliability and uncertainty. Traditional SD, Likert, or NRS scales only obtain quantitative scores and fail to reflect respondents' confidence in their own judgments and the ambiguity of their affective responses.
(2)
Affective response prediction modeling in KE ignores uncertainty and subjectivity. Most existing regression/classification models, such as ENR and SVR, take values with implicit collinearity as input, and their hyperparameters rely on empirical tuning. It is difficult to balance a model's interpretability, sparsity, and ability to process uncertain information.
(3)
Kansei engineering lacks an overall design knowledge flow framework. Most studies focus on only a single link (questionnaire preprocessing, clustering optimization, or regression modeling) and fail to provide a full-process solution from data collection and information fusion to knowledge reasoning.
At the same time, the information acquisition and processing of evaluative questionnaires across multiple stages of KE need to be optimized. Thus, this study proposes an enhanced KE process that considers information reliability, uncertainty, and subjectivity. The proposed process aims to solve the bias problem of conventional questionnaire evaluation and the subjectivity problems of collinearity and hyperparameter setting in conventional linear regression within the design knowledge flow.
In comparison to the existing studies, this study makes three distinctive contributions:
(1)
A novel enhanced KE process based on CMZ-BENR is proposed to fully consider the reliability, uncertainty, and subjectivity of information in the design knowledge flow.
(2)
CMZ is employed in the KE to address the uncertainty and subjectivity in the clustering results of market-representative samples, and to handle the reliability and uncertainty of the semantic difference questionnaire for affective words and morphological features. By leveraging the intrinsic symmetry of normal cloud distributions and Z-number mappings, CMZ ensures a balanced and harmonious treatment of information uncertainty and subjectivity.
(3)
BENR is adopted in the KE to solve the issues of partial elasticity and subjectivity when establishing mathematical models of affective words and morphological features. The symmetric ℓ1/ℓ2 regularization within BENR achieves an equilibrium between sparsity and stability.
The remainder of this study is organized as follows. Section 2 reviews preparatory theoretical knowledge. Section 3 describes the combination of CMZ and BENR and proposes a novel CMZ-BENR enhanced Kansei engineering process. Section 4 provides a practical example of a domestic cleaning robot to demonstrate the design knowledge flow developed by the enhanced Kansei engineering process; in addition, a comparative analysis is carried out to verify the validity of the results. Finally, findings and deductions about information reliability, uncertainty, and subjectivity are further explored in Section 5 and summarized in Section 6.

2. Preliminaries

This section introduces concepts in two parts. On the one hand, linguistic term sets, the cloud model, and Z-numbers are uncertainty information processors, including multi-granularity linguistic term sets (MGLTSs), multi-granularity linguistic scale functions (MGLSFs), the cloud model (CM), and Z-numbers. On the other hand, elastic net regression and Bayesian optimization are machine learning algorithms, including ENR with partial estimation and the Bayesian method of hyperparameter optimization.

2.1. Linguistic, Cloud Model, and Z-Numbers

These concepts serve the following purposes: (1) MGLSFs capture more detailed semantic differences through diverse granularity, which strengthens the representation of user preference degree and reduces subjectivity in decisions [37]. (2) The CM represents and handles uncertainty through a 3-tuple of numerical cloud features (expectation, entropy, and hyper-entropy) and can more fully represent the probability distribution of real-world data [38]. (3) Z-numbers improve information trust through a 2-tuple (information content and information reliability), which matters especially for implicit preference information embedded in the real world, where the true (unbiased and absolute) probability distribution of the data source cannot be determined [39]. These concepts are reviewed and defined as follows.

2.1.1. Multi-Granularity Linguistic Term Sets

MGLTSs are proposed to express the uncertainty evaluations of different decision-makers [40]. Their key feature is a diverse linguistic scale: for example, 5- and 7-point Likert scales correspond to linguistic granularities of 5 and 7, respectively, describing degrees of linguistic difference in affective preference.
Definition 1.
Suppose that there exist l types of sets of linguistic terms i of different granularity, represented as s_i^l. Let S = {s_i^l | i = 0, 1, 2, …, 2t, t ∈ N*} be a finite and completely ordered linguistic term set (LTS) with odd cardinality, where t is a non-negative integer. For s_i^l and s_j^l of the same granularity, the following properties are required [41]: (1) the set is ordered: s_i^l ≤ s_j^l if and only if i ≤ j; (2) the set has a negation operator: neg(s_i^l) = s_j^l if i + j = 2t.

2.1.2. Multi-Granularity Linguistic Scale Functions

The MGLSF is an effective quantitative tool for converting language terms to numerical values. It can both express semantics more flexibly and effectively preserve the original message. The MGLSF therefore assigns different numerical values to language terms in different cases, which is particularly advantageous in the practice of linguistic evaluation scales.
Definition 2.
Let s_i^l ∈ S be a linguistic term. If θ_i^l ∈ [0, 1] is a numerical value, then the MGLSF is a mapping from s_i^l to θ_i^l (i = 0, 1, …, 2t), defined as H: s_i^l → θ_i^l. Here, H illustrates the semantics of s_i^l, and θ_i^l reflects the preference of the decision-maker when choosing s_i^l. The MGLSF is a strictly monotonically increasing function with respect to the subscript i. According to [42,43,44], three types of MGLSFs adapted to different semantic situations are as follows:
$$\text{MGLSFs-I:}\quad H_1(s_i^l)=\theta_i^l=\frac{i}{2t},\qquad 0\le i\le 2t \tag{1}$$
$$\text{MGLSFs-II:}\quad H_2(s_i^l)=\theta_i^l=\begin{cases}\dfrac{a^{t}-a^{t-i}}{2a^{t}-2}, & 0\le i\le t\\[2mm] \dfrac{a^{t}+a^{i-t}-2}{2a^{t}-2}, & t<i\le 2t\end{cases} \tag{2}$$
$$\text{MGLSFs-III:}\quad H_3(s_i^l)=\theta_i^l=\begin{cases}\dfrac{t^{b}-(t-i)^{b}}{2t^{b}}, & 0\le i\le t\\[2mm] \dfrac{t^{c}+(i-t)^{c}}{2t^{c}}, & t<i\le 2t\end{cases} \tag{3}$$
where a ∈ [1.37, 1.4], and b, c ∈ (0, 1] represent the curvature of the value function for gain and loss, respectively [44].
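The three scale functions can be sketched in Python as follows (an illustrative helper, not code from the paper; the example values a = 1.4 and b = c = 0.88 are chosen from the stated ranges):

```python
def mglsf(i, t, kind="I", a=1.4, b=0.88, c=0.88):
    """Map the subscript i of a linguistic term (granularity 2t+1) into [0, 1]."""
    if kind == "I":                       # linear scale, MGLSFs-I
        return i / (2 * t)
    if kind == "II":                      # exponential scale, MGLSFs-II
        if i <= t:
            return (a**t - a**(t - i)) / (2 * a**t - 2)
        return (a**t + a**(i - t) - 2) / (2 * a**t - 2)
    if kind == "III":                     # power (prospect-style) scale, MGLSFs-III
        if i <= t:
            return (t**b - (t - i)**b) / (2 * t**b)
        return (t**c + (i - t)**c) / (2 * t**c)
    raise ValueError("kind must be 'I', 'II', or 'III'")

# A 7-term set (t = 3): each function maps the middle term s_3 to 0.5
print([round(mglsf(i, 3, "I"), 3) for i in range(7)])
```

Note that all three variants are strictly increasing in i and symmetric about the middle term, which maps to 0.5 regardless of the chosen curvature.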

2.1.3. Normal Cloud Model

Definition 3
([45]). Suppose that there exists a universe of discourse U and a qualitative concept T in U, and let d ∈ U be a random instantiation of concept T. Let U be the effective domain U = [d_min, d_max], and denote a cloud droplet by (d, y_CM), which satisfies d ~ N(Ex, En′²) and En′ ~ N(En, He²). Then the distribution of d in U is called the CM, given as follows:
$$y_{CM}=\exp\left(-\frac{(d-Ex)^2}{2(En')^2}\right) \tag{4}$$
where the overall quantitative properties of a concept are described in the CM using three numerical features concerning the number d: the expected value (Ex), entropy (En), and hyper-entropy (He); the CM is denoted CM = (Ex, En, He). Ex is the mathematical expectation of the distribution of cloud droplets in the universe of discourse U with respect to the qualitative concept T. En embodies the fuzziness and randomness of concept T, reflecting the degree of dispersion of the cloud droplets. The uncertainty of En is measured by He, which also determines the thickness of the cloud.
In practical terms, the “young” students in a group can be denoted as a CM based on the relevant analyses (for more details, refer to [46]). Assuming the cloud CM = (22.5, 3.80, 1.36), a cloud with 1000 droplets can be presented graphically, as shown in Figure 1.
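The forward normal cloud generator implied by this definition can be sketched in Python (an illustrative reimplementation, not the authors' code), reproducing the "young" example CM = (22.5, 3.80, 1.36) with 1000 droplets:

```python
import numpy as np

def forward_cloud(Ex, En, He, n=1000, seed=0):
    """Forward normal cloud generator: sample n droplets (d, y) for CM = (Ex, En, He)."""
    rng = np.random.default_rng(seed)
    En_prime = rng.normal(En, He, n)        # En' ~ N(En, He^2)
    d = rng.normal(Ex, np.abs(En_prime))    # d ~ N(Ex, En'^2)
    y = np.exp(-(d - Ex) ** 2 / (2 * En_prime ** 2))  # membership degree, Eq. (4)
    return d, y

# The "young" example from the text
d, y = forward_cloud(22.5, 3.80, 1.36)
print(d.shape, round(float(d.mean()), 1))
```

Plotting (d, y) as a scatter reproduces the characteristic cloud shape of Figure 1: droplets concentrate around Ex, and He controls how diffuse the cloud's edge is.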

2.1.4. Transformation of Uncertainty

Transforming qualitative concepts into quantitative values necessitates the use of effective and reliable tools. The CM represents linguistic concepts through three numerical features, enabling interchangeable and objective transformations between qualitative concepts and quantitative values. Currently, there are two methods for converting linguistic values into cloud representations: the golden section (GS) [47] and the MGLSF [48]. The GS is constrained by a granularity limit of 5, while the MGLSF can accommodate the MGLTS with odd cardinality.
Definition 4
([49]). Let s_i^l be a multi-granularity linguistic term in S; then the cloud CM = (Ex_i^l, En_i^l, He_i^l) can be generated by the following procedure:
Step 1: Calculate θ i l = H( s i l ) by applying MGLSFs.
Step 2: Calculate Ex_i^l and En_i^l. Given the effective domain U = [d_min, d_max] and a cloud droplet denoted by (d, y_CM), the normal distribution of the CM follows the “3σ principle”, and the formulas are as follows:

$$Ex_i^l=\begin{cases} d_{\min}+\dfrac{\theta_i^l+\theta_{i+1}^l}{2}(d_{\max}-d_{\min}), & 0\le i<t\\[2mm] d_{\min}+\dfrac{\theta_{i-1}^l+2\theta_i^l+\theta_{i+1}^l}{4}(d_{\max}-d_{\min}), & i=t\\[2mm] d_{\min}+\dfrac{\theta_{i-1}^l+\theta_i^l}{2}(d_{\max}-d_{\min}), & t<i\le 2t \end{cases} \tag{5}$$

$$En_i^l=\begin{cases} \dfrac{\theta_{i+1}^l-\theta_i^l}{6}(d_{\max}-d_{\min}), & 0\le i<t\\[2mm] \dfrac{\theta_{i+1}^l-\theta_{i-1}^l}{12}(d_{\max}-d_{\min}), & i=t\\[2mm] \dfrac{\theta_i^l-\theta_{i-1}^l}{6}(d_{\max}-d_{\min}), & t<i\le 2t \end{cases} \tag{6}$$
Step 3: Calculate En′_i^l. Since En′ ~ N(En, He²), En′_i^l can be regarded as the mean of En_i^l and the adjacent values En_{i-1}^l and En_{i+1}^l; it is therefore calculated by En′_i^l = (En_{i-1}^l + En_i^l + En_{i+1}^l)/3.
Step 4: Calculate He_i^l. Since En′ ~ N(En, He²), He_i^l also follows the “3σ principle” of the distribution curve, giving He_i^l = (max_i En′_i^l - En_i^l)/3.
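The four-step procedure can be sketched in Python as an illustrative reimplementation; the clamping of boundary neighbours in Step 3 and the max-En′ convention in Step 4 are reconstructed assumptions rather than details stated explicitly in the text:

```python
def terms_to_clouds(thetas, d_min=0.0, d_max=10.0):
    """Definition 4: convert MGLSF values theta_0..theta_2t into clouds (Ex, En', He)."""
    n = len(thetas)              # n = 2t + 1, odd cardinality
    t = (n - 1) // 2
    span = d_max - d_min
    Ex, En = [], []
    for i in range(n):           # Step 2: Ex and En, piecewise in i
        if i < t:
            Ex.append(d_min + (thetas[i] + thetas[i + 1]) / 2 * span)
            En.append((thetas[i + 1] - thetas[i]) / 6 * span)
        elif i == t:
            Ex.append(d_min + (thetas[i - 1] + 2 * thetas[i] + thetas[i + 1]) / 4 * span)
            En.append((thetas[i + 1] - thetas[i - 1]) / 12 * span)
        else:
            Ex.append(d_min + (thetas[i - 1] + thetas[i]) / 2 * span)
            En.append((thetas[i] - thetas[i - 1]) / 6 * span)
    # Step 3: En'_i as the mean over the neighbourhood {i-1, i, i+1} (boundaries clamped)
    En_p = [(En[max(i - 1, 0)] + En[i] + En[min(i + 1, n - 1)]) / 3 for i in range(n)]
    # Step 4: He_i from the 3-sigma principle (assumed max-En' convention)
    He = [(max(En_p) - En[i]) / 3 for i in range(n)]
    return list(zip(Ex, En_p, He))

clouds = terms_to_clouds([i / 6 for i in range(7)])   # MGLSFs-I, t = 3
print([tuple(round(v, 3) for v in c) for c in clouds])
```

For the linear MGLSFs-I scale, all entropies coincide, so the middle term s_3 maps to a cloud centred at the midpoint of the effective domain.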

2.1.5. Z-Numbers

Definition 5
([50]). Z-numbers, denoted Z = (A, B), are an ordered pair of fuzzy numbers associated with a real-valued uncertain variable d, where component A is a fuzzy constraint on the values d can take, and component B is a reliability measure of A.
When a product form is evaluated with common statements such as “I likely think this product form is very good” or “I explicitly think this product form is good”, the natural linguistic terms can be distilled into components A and B of Z-numbers, such as (Very Good, Likely) and (Good, Explicitly). In this case, the MGLTSs are used as follows:
A = S = {s_0^1 = Worst, s_1^1 = Very Bad, s_2^1 = Bad, s_3^1 = Normal, s_4^1 = Good, s_5^1 = Very Good, s_6^1 = Optimal}
B = S′ = {s_0^2 = Inexplicitly, s_1^2 = Unlikely, s_2^2 = Neutral, s_3^2 = Likely, s_4^2 = Explicitly}
Therefore, the linguistic value S is a fuzzy constraint A, and the probability value S′ is a reliability measure B, from which Z-numbers can be obtained.
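As a minimal illustration, distilling a statement into the pair Z = (A, B) over these two term sets can be expressed as follows (the helper `z_number` is hypothetical, not part of any cited library):

```python
# The 7-term constraint set A and 5-term reliability set B from the text
A_terms = ["Worst", "Very Bad", "Bad", "Normal", "Good", "Very Good", "Optimal"]
B_terms = ["Inexplicitly", "Unlikely", "Neutral", "Likely", "Explicitly"]

def z_number(constraint, reliability):
    """Distil a natural-language evaluation into the ordered pair Z = (A, B)."""
    assert constraint in A_terms and reliability in B_terms
    return (constraint, reliability)

# "I likely think this product form is very good"
print(z_number("Very Good", "Likely"))
```

Each component is later mapped to a cloud via the MGLSF pipeline, so the pair of linguistic indices is all that needs to be captured at elicitation time.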

2.2. Elastic Net Regression and Bayesian Optimization

2.2.1. Elastic Net Regression

To clarify the mapping between affective words and morphological features, ordinary least squares regression (OLS) and ENR methods are used to explain the prediction model of the KE.
Definition 6
([51]). Suppose there are n product samples; the affective word evaluation value of the m-th sample, y_m, is taken as the dependent variable (m = 1, 2, …, n), and there are q morphological elements x_mp as independent variables in each sample (p = 1, 2, …, q). Then the affective word evaluation value of the m-th sample and the response of each item and category object follow the general linear regression model, expressed as
$$y_m=\beta_0+\beta_1 x_{m1}+\cdots+\beta_p x_{mp}+\cdots+\beta_q x_{mq}+\varepsilon_m \tag{7}$$
The general linear model employs OLS for Equation (7). The parameters are estimated by minimizing the residual sum of squares (RSS), so the objective function is
$$\hat{\beta}_{OLS}=\arg\min\sum_{m=1}^{n}\Big(y_m-\beta_0-\sum_{p=1}^{q}x_{mp}\beta_p\Big)^2 \tag{8}$$
Definition 7
([52]). Elastic net regression (ENR) is an improved linear regression model derived from Ridge and Lasso regression. By adding a combination of ℓ1 and ℓ2 penalties to Equation (8), it automatically selects informative features and encourages highly correlated features to be selected or removed together. With the penalty-mixing coefficient α and the penalty intensity λ, the loss function is
$$\hat{\beta}_{ENR}=\arg\min\sum_{m=1}^{n}\Big(y_m-\beta_0-\sum_{p=1}^{q}x_{mp}\beta_p\Big)^2+\lambda\Big(\alpha\sum_{p=1}^{q}|\beta_p|+\frac{1-\alpha}{2}\sum_{p=1}^{q}\beta_p^2\Big) \tag{9}$$
In the above, α ∈ [0, 1] is a higher-level hyperparameter that depends on empirical selection, and λ is a penalty parameter typically chosen via K-fold cross-validation (K-CV); setting α = 0 yields Ridge regression, and setting α = 1 yields Lasso regression.
The R package “glmnet” [53] (R version 4.2.3 (2023-03-15 ucrt)) provides efficient ENR functions, such as computing the entire path of optimal λ values across iterations, outputting several λ values via K-CV, plotting coefficient paths and errors, and predicting data. However, the adjustment coefficient α of the model must still be set manually.
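Although the text uses the R package glmnet, an equivalent workflow can be sketched in Python with scikit-learn on synthetic stand-in data; here `l1_ratio` plays the role of α and `alpha` the role of λ (scikit-learn additionally scales the RSS by 1/(2n), so coefficients match Eq. (9) only up to that convention), and, as with glmnet, α must still be chosen by hand:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

# Synthetic stand-in for Kansei data: n samples, q correlated morphological features
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=60)      # deliberate multicollinearity
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.1 * rng.normal(size=60)

# lambda (alpha_) is chosen over a path by 5-fold CV; alpha (l1_ratio) is fixed manually
model = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, y)
print(round(model.alpha_, 4), np.round(model.coef_, 2))
```

The grouped-selection behaviour of ENR is visible here: the two nearly collinear features share the signal instead of one being arbitrarily discarded, which is the property that motivates its use for correlated morphological elements.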

2.2.2. Bayesian Optimization

In ENR, the estimator suffers from the “double shrinkage problem” involving the ℓ1 and ℓ2 penalties, and there are theoretically infinitely many combinations of α and λ. Therefore, the combinations of α and λ need to be explored further by employing Bayesian optimization, which can find the optimal solution according to the model performance function over iterated combinations [54].
Bayesian optimization hinges on two core elements: the prior function (PF) and the acquisition function (ACF) [55]. The PF serves as a probabilistic surrogate model to predict the behavior of the objective function; a common choice is the Gaussian process (GP). A GP is a nonlinear random function that can represent the joint distribution of any finite set of points in a high-dimensional linear space as a multivariate normal distribution, and it is also used to characterize the uncertainty of the objective function for solving the multi-armed bandit problem in Bayesian optimization [56]. The R packages “DiceKriging” and “DiceOptim” [57] can implement GP regression as a surrogate model and EGO to explore and exploit the optimization space.
Concurrently, the ACF, typically a loss function, evaluates the optimality of the query sequence, directing the search to the most informative point; examples include the probability of improvement (PI), the expected improvement (EI), and the upper confidence bound (UCB). The UCB is a commonly used ACF that combines the predictive exploitation and uncertainty exploration of current models [58]. The GP-UCB combination replaces conventional irregular grid search for parameters and can be implemented with the R package “ParBayesianOptimization” [59].
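A from-scratch sketch of the GP-UCB loop on a one-dimensional toy objective (standing in for the real tuning objective; the kernel length scale and κ = 2 are illustrative choices, not values from the text):

```python
import numpy as np

def gp_posterior(x_obs, y_obs, x_query, ls=0.3, jitter=1e-6):
    """GP posterior mean/std with a squared-exponential kernel (the surrogate PF)."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)
    K = k(x_obs, x_obs) + jitter * np.eye(len(x_obs))
    Ks = k(x_query, x_obs)
    K_inv = np.linalg.inv(K)
    mu = Ks @ K_inv @ y_obs
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, K_inv, Ks)
    return mu, np.sqrt(np.clip(var, 0.0, None))

def objective(x):                         # hypothetical stand-in for the tuning objective
    return -(x - 0.7) ** 2                # maximum at x = 0.7

grid = np.linspace(0.0, 1.0, 201)
xs = np.array([0.1, 0.9])                 # initial design points
ys = objective(xs)
for _ in range(15):                       # UCB acquisition: argmax of mu + kappa * sigma
    mu, sd = gp_posterior(xs, ys, grid)
    x_next = grid[int(np.argmax(mu + 2.0 * sd))]
    xs = np.append(xs, x_next)
    ys = np.append(ys, objective(x_next))
best = float(xs[np.argmax(ys)])
print(round(best, 2))
```

The loop alternates between exploration (large posterior standard deviation) and exploitation (high posterior mean), and with only a handful of evaluations it concentrates samples near the true optimum.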

3. Methodology

In this section, a novel CMZ-BENR enhanced KE process is proposed, which emphasizes a design knowledge flow that considers the reliability, uncertainty, and subjectivity of information.

3.1. Cloud Model with Z-Numbers

The three approaches above (MGLSFs, the CM, and Z-numbers) all address the challenges of uncertainty and ambiguity in linguistic information within soft computing, aiming to provide mathematical methods to quantify and manipulate these uncertainties. They are closely related to the logic of fuzzy linguistics and can be effectively articulated through scoring functions, normal distributions, and confidence intervals. This is particularly valuable for enabling effective decision-making support under complete ambiguity.
In the face of increasing information uncertainty in real-world assessment, it is no longer sufficient to depend solely on Z-numbers for reliability, cloud models for representing uncertainty, and multi-granularity for minimizing subjectivity. Instead, the challenge is how to incorporate these elements into an effective information processor, referred to here as the gray box of the cloud model with Z-numbers (CMZ). Based on the three characteristics described above, they are integrated into an information uncertainty processor, where MGLSFs-CMZ form a hierarchical processor: the lower layer uses MGLSFs to understand subjective information, the middle layer uses the CM to convert uncertain information, and the top layer uses Z-numbers to enhance the output of reliable information. The gray box is shown in Figure 2. More details of the CMZ algorithm are available in Appendix A.1.
The crucial aspect is that the linguistic granularity of MGLTSs can be adaptively determined through Definition 1, allowing the θ_i^l corresponding to the specified granularity level to be derived through the nonlinear transformations of the MGLSFs, using Equations (1)–(3) in Definition 2. Subsequently, the uncertainty transformation can be executed in accordance with the θ_i^l convention established between MGLTSs and MGLSFs, ensuring that every evaluated linguistic granularity is assigned a specific θ_i^l. Ultimately, this process culminates in the generation of the 3-tuple CM information associated with θ_i^l through Equations (5)–(7) in Definition 4. At the same time, since decision-makers are usually not single individuals, CM arithmetic is needed to integrate the CMs of all decision-makers into one CM. Suppose CM_1 = (Ex_1^l, En_1^l, He_1^l) and CM_2 = (Ex_2^l, En_2^l, He_2^l); then the addition rule is CM_1 + CM_2 = (Ex_1^l + Ex_2^l, √((En_1^l)² + (En_2^l)²), √((He_1^l)² + (He_2^l)²)), and the scalar multiplication rule is τCM_1 = (τEx_1^l, τEn_1^l, τHe_1^l), so a sum-average calculation can be implemented. With these operations, the CM results of several decision-makers can be processed; the operation can be repeated to generate dual CMs for the fuzzy constraint and the reliability measure, and the dual CMs can then be merged into a CMZ through Definition 5.
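The aggregation rules for multiple decision-makers can be sketched as follows (a minimal illustration of the addition, scalar multiplication, and sum-average operations on cloud 3-tuples):

```python
from math import sqrt

def cm_add(c1, c2):
    """Addition rule: (Ex1+Ex2, sqrt(En1^2+En2^2), sqrt(He1^2+He2^2))."""
    return (c1[0] + c2[0], sqrt(c1[1]**2 + c2[1]**2), sqrt(c1[2]**2 + c2[2]**2))

def cm_scale(tau, c):
    """Scalar multiplication rule: tau*CM = (tau*Ex, tau*En, tau*He)."""
    return (tau * c[0], tau * c[1], tau * c[2])

def cm_mean(clouds):
    """Sum-average aggregation of several decision-makers' clouds into one CM."""
    total = clouds[0]
    for c in clouds[1:]:
        total = cm_add(total, c)
    return cm_scale(1 / len(clouds), total)

# Two hypothetical decision-makers' clouds for the same linguistic evaluation
print(cm_mean([(6.0, 0.4, 0.1), (8.0, 0.3, 0.2)]))
```

Running the same aggregation once on the fuzzy-constraint clouds and once on the reliability-measure clouds yields the dual CM that Definition 5 merges into a CMZ.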
Therefore, this information processor reduces the subjectivity of linguistic values, strengthens the representation of uncertainty in linguistic evaluation, and maintains a reliable measure of the source evaluation information.

3.1.1. Questionnaire Design of CMZ

This part highlights how the characteristics of CMZ can be employed to represent the reliability and uncertainty of information during the acquisition and processing of questionnaire data, so as to minimize subjectivity as far as possible. Distinct from the direct granular linguistic selection used in previous questionnaires, a description of the degree of reliability is incorporated, which can manifest as a variation in scale rather than a fixed value. For instance, in the scenario of multi-sample clustering under a specific affective word, the evaluation is constructed based on belonging and not-belonging to positive and negative words, and the description of the reliability degree of belonging and not-belonging is enhanced. In the case of multi-affective-word prediction within a certain sample, a selective evaluation of the positive antonyms of the features is conducted, and the description of the reliability of the selection is intensified. The questionnaire design of CMZ is shown in Figure 3.
In Figure 3, different scales are used depending on the situation: a 5-point Likert scale (which directly indicates the degree of agreement with a statement), a 5-point SD scale (semantic differential, suitable for complex affective or attitudinal analysis, with differential expressions of positive and negative connotations), and an 11-point NRS (numeric rating scale, a subjective intensity scale from medicine, introduced here to describe reliability); all, however, are linguistic evaluations with granular information.

3.1.2. Computation with Z-Numbers

Let the CM be substituted into Z-numbers to represent them further, so that there are two clouds: one for the linguistic value of the fuzzy constraint and one for the probability value of the reliability measure. The cloud model with Z-numbers (CMZ) can then be defined as CMZ = (A, B) = [(Ex_i^l, En_i^l, He_i^l), (Ẽx_i^l, Ẽn_i^l, H̃e_i^l)].
For example, uncertainty and reliability information can be obtained after processing the data of the above CMZ questionnaire. In the subsequent KE process, representative product images selected by clustering become more convincing: the reliability cloud in part B of the CMZ can be used to weight the eigenvalues of the uncertainty clouds, which is the improved K-means method developed in Section 3.3.2. This improves the scientific interpretation of the information. Additionally, the fixed granularity of the SD scale is improved for uncertain affective response evaluation. Therefore, in Section 3.3.3, the design knowledge prediction of CMZ can be established in the BENR, which uses the score function of the uncertainty linguistic value and the reliability probability value.

3.2. Bayesian Elastic Net Regression

Due to the diversity and complexity of design information, conventional linear regression struggles to solve the problems of overfitting and multicollinearity in knowledge prediction, while regularization regression enhances the robustness and interpretability of the model. Therefore, regularization methods such as Ridge, Lasso, and ENR have replaced stochastic methods like stepwise regression and have become the main means of model selection. Although regularization improves model performance, it introduces hyperparameter problems into model selection. Bayesian optimization can deal with model uncertainty and reduce the level of prior expert intervention [60]. It evaluates and fits the model by constructing a performance function as the optimization objective, which refers to the RMSE and R² of the iterated models over multiple penalty-coefficient combinations. The maximizing parameters are explored and exploited with the support of the PF and ACF of Bayesian optimization. Therefore, BENR is proposed to build more explanatory predictive models from the design information. The gray box of knowledge prediction is shown in Figure 4.
Relying solely on RMSE to evaluate the model may yield deceptively accurate fits owing to error averaging, while excessive pursuit of an idealized R2 is likely to overfit the model. Therefore, the model is considered to have good predictability when the RMSE is as close to 0 and the R2 as close to 1 as possible. A performance function representing this can be constructed as follows
$$\rho(\gamma) = \frac{1}{\mathrm{RMSE} + 1 - R^{2}}$$
where γ is the hyperparameter combination of α and λ, denoted γ = (α, λ). When ρ(γ) reaches its maximum stable value, the best parameter combination has been found. Generally, K-CV is used to determine the performance of the model under a parameter combination γ; however, the set of theoretically possible combinations γ is infinite, and searching a grid within the boundary constraints of α and λ is obviously inefficient and irregular.
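As a sketch of how ρ(γ) would be computed under K-CV, consider the following snippet. This is a minimal illustration, not the authors' implementation: it assumes the reconstructed form ρ = 1/(RMSE + 1 − R²) and, for self-containment, uses a closed-form ridge fit (the α = 0 limit of ENR) in place of a full elastic net solver; all function names and the synthetic data are illustrative only.

```python
import numpy as np

def performance(y_true, y_pred):
    """Performance function of Equation (10), rho = 1 / (RMSE + 1 - R^2):
    maximized as RMSE -> 0 and R^2 -> 1 simultaneously."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 / (rmse + 1.0 - r2)

def kcv_rho(X, y, lam, k=5):
    """K-CV estimate of rho(gamma) for a ridge fit (the alpha = 0 limit of
    ENR, which has a closed form); a full ENR solver would slot in the same way."""
    folds = np.array_split(np.arange(len(y)), k)
    rhos = []
    for i in range(k):
        test = folds[i]
        train = np.hstack([folds[j] for j in range(k) if j != i])
        Xtr, ytr = X[train], y[train]
        beta = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ ytr)
        rhos.append(performance(y[test], X[test] @ beta))
    return float(np.mean(rhos))

# Synthetic design data: a mild penalty should outperform heavy shrinkage
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = X @ np.array([1.5, -2.0, 0.0, 0.5]) + 0.1 * rng.normal(size=60)
rho_small, rho_big = kcv_rho(X, y, lam=0.1), kcv_rho(X, y, lam=50.0)
```

On this near-linear toy data, the lightly penalized model yields the larger ρ, matching the intuition that ρ rewards low RMSE and high R² jointly.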
Bayesian optimization is adopted to tune the parameters, making full use of the known prior information to find the optimal parameter γ that maximizes the performance function ρ(γ). The PF in Bayesian optimization is realized by the GP. Assume that the initialized t# performance values obey a joint Gaussian distribution with expectation 0, as follows
$$\boldsymbol{\rho} = \begin{bmatrix} \rho_{1} \\ \rho_{2} \\ \vdots \\ \rho_{t^{\#}} \end{bmatrix} \sim N(\mathbf{0}, K) = N\left( \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \begin{bmatrix} K_{11} & K_{12} & \cdots & K_{1 t^{\#}} \\ K_{21} & K_{22} & \cdots & K_{2 t^{\#}} \\ \vdots & \vdots & \ddots & \vdots \\ K_{t^{\#} 1} & K_{t^{\#} 2} & \cdots & K_{t^{\#} t^{\#}} \end{bmatrix} \right)$$
where K is the covariance matrix, whose entries are given by the squared exponential kernel function
$$K_{i^{\#} j^{\#}} = k(\rho_{i^{\#}}, \rho_{j^{\#}}) = \exp\left( -\frac{(\rho_{i^{\#}} - \rho_{j^{\#}})^{2}}{2} \right)$$
When a new performance value ρ* is added, the updated performance distribution still follows the joint Gaussian distribution.
$$\begin{bmatrix} \rho_{1} \\ \vdots \\ \rho_{t^{\#}} \\ \rho^{*} \end{bmatrix} \sim N\left( \mathbf{0}, \begin{bmatrix} K & K_{\rho \rho^{*}}^{T} \\ K_{\rho \rho^{*}} & K_{\rho^{*} \rho^{*}} \end{bmatrix} \right)$$
where $K_{\rho \rho^{*}}$ is composed of $K_{\rho \rho^{*}} = \left[ k(\rho_{1}, \rho^{*}), k(\rho_{2}, \rho^{*}), \ldots, k(\rho_{t^{\#}}, \rho^{*}) \right]$.
Based on the known prior information gathered by the GP, the conditional probability of ρ* can be obtained as posterior information under the premise of the preliminarily explored data combination (γ, ρ) and the new performance parameter γ*. The updated N (μ*, Σ*) can be further obtained according to Bayes’ theorem as follows
$$\mu^{*} = K_{\rho \rho^{*}} K_{\rho \rho}^{-1} \boldsymbol{\rho}, \qquad \Sigma^{*} = K_{\rho^{*} \rho^{*}} - K_{\rho \rho^{*}} K_{\rho \rho}^{-1} K_{\rho \rho^{*}}^{T}$$
where the mean μ* is the expected utility of the parameter combination; its practical significance is that the larger μ* is, the greater the probability that the combination becomes the optimal solution, that is, the more worth exploiting. The variance Σ* expresses the uncertainty about the effect of the parameter combination: the larger Σ* is, the more diverse the parameter combinations and the more possibilities exist, and hence the more worth exploring [61].
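The GP posterior update of Equations (13) and (14) can be sketched as follows. This is an illustrative implementation under assumptions: the kernel is evaluated on the hyperparameter inputs (as is standard for a GP surrogate), a small jitter term is added for numerical stability, and all names and values are hypothetical.

```python
import numpy as np

def sq_exp_kernel(a, b):
    """Squared exponential kernel of Equation (12), here evaluated on the
    hyperparameter inputs, as is standard for a GP surrogate."""
    return np.exp(-0.5 * (np.asarray(a)[:, None] - np.asarray(b)[None, :]) ** 2)

def gp_posterior(gamma_obs, rho_obs, gamma_new, jitter=1e-8):
    """Posterior mean mu* and variance Sigma* of Equation (14) at the new
    points gamma_new, given the observed (gamma, rho) pairs."""
    K = sq_exp_kernel(gamma_obs, gamma_obs) + jitter * np.eye(len(gamma_obs))
    K_s = sq_exp_kernel(gamma_new, gamma_obs)        # K_{rho rho*}
    K_ss = sq_exp_kernel(gamma_new, gamma_new)       # K_{rho* rho*}
    K_inv = np.linalg.inv(K)
    mu = K_s @ K_inv @ rho_obs                       # mu* = K_s K^-1 rho
    cov = K_ss - K_s @ K_inv @ K_s.T                 # Sigma*
    return mu, np.diag(cov)

gamma_obs = np.array([0.0, 1.0, 2.0])
rho_obs = np.array([1.0, 2.0, 1.5])
# At an observed point the posterior collapses onto the data; far away it
# reverts to the prior (mean 0, variance 1)
mu, var = gp_posterior(gamma_obs, rho_obs, np.array([1.0, 5.0]))
```

This behavior is exactly what the exploitation/exploration reading above relies on: low Σ* where data already exist, high Σ* in unexplored regions.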
Therefore, there exists an issue regarding how to effectively strike a balance between exploration and exploitation. Specifically, whether to further exploit within the currently acquired optimal region, or to attempt to explore potential optimal solutions in a new area to prevent falling into local optimality. To effectively address this balancing problem, the ACF in Bayesian optimization needs to be further defined. The advantage of UCB lies in the balance between depth (exploitation) and width (exploration) of the unknown space, so UCB will be selected as the ACF. UCB is as follows.
$$ACF_{\mathrm{UCB}}(\gamma) = \mu^{*}(\gamma) + 1.96 \sqrt{\Sigma^{*}(\gamma)}$$
where the constant 1.96 balances exploitation and exploration using the 95% confidence interval [62]. Thus, within this confidence interval, the next maximizer of ACFUCB(γ) can be expressed as
$$\gamma_{t^{\#}+1} = \arg\max_{\gamma} ACF_{\mathrm{UCB}}(\gamma) = \arg\max_{\gamma} \left[ \mu_{t^{\#}+1}(\gamma) + 1.96 \sqrt{\Sigma_{t^{\#}+1}(\gamma)} \right]$$
After t# rounds of iteration, the optimal parameter combination γoptimal can be found, which maximizes ρ(γoptimal).
To summarize, BENR may be divided into five steps: (1) construction of ENR, (2) definition of the performance function for ENR in Equation (10), (3) specification of PF in Bayesian optimization based on prior information collected by the GP in Equations (11)–(14), (4) establishment of ACF in Bayesian optimization, with determination of the optimal γ based on the trade-off between exploitation and exploration using UCB in Equation (15), and (5) calculation of ENR associated with the γoptimal in Equation (16). More details of BENR algorithm are available in Appendix A.2.
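Putting the five steps together, a minimal Bayesian optimization loop with a UCB acquisition might look like the following sketch. It is not the authors' code: the objective is a synthetic stand-in for the K-CV performance of an ENR fit, only the penalty λ is searched (on a log scale) rather than the full γ = (α, λ), and the GP uses a zero prior mean with the squared exponential kernel.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(lam):
    """Synthetic stand-in for the K-CV performance rho(gamma) of an ENR fit:
    a smooth curve with a single interior maximum at lam = 1 (illustrative)."""
    return 2.0 - np.log10(lam) ** 2

def kern(a, b):
    """Squared exponential kernel on log10(lambda)."""
    return np.exp(-0.5 * (np.log10(a)[:, None] - np.log10(b)[None, :]) ** 2)

# Step (3): initial GP design points within the lambda boundary [0.001, 10]
lams = [float(l) for l in 10 ** rng.uniform(-3, 1, size=3)]
rhos = [float(objective(l)) for l in lams]

# Steps (4)-(5): iterate the UCB acquisition with kappa = 1.96
cand = 10 ** np.linspace(-3, 1, 200)
for _ in range(15):
    L, R = np.array(lams), np.array(rhos)
    K = kern(L, L) + 1e-8 * np.eye(len(L))
    Ks = kern(cand, L)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ R                                     # posterior mean
    var = np.clip(1.0 - np.sum((Ks @ Kinv) * Ks, axis=1), 0.0, None)
    ucb = mu + 1.96 * np.sqrt(var)                         # acquisition
    nxt = float(cand[np.argmax(ucb)])
    lams.append(nxt)
    rhos.append(float(objective(nxt)))

lam_opt = lams[int(np.argmax(rhos))]
```

Because unexplored regions keep a UCB value near the prior bound while well-fit regions keep it near the observed performance, the loop alternates between probing gaps and refining around the emerging optimum.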

3.3. CMZ-BENR Enhanced KE

A novel KE process is enhanced through CMZ-BENR for information processing and knowledge transformation in the design knowledge flow. In this process, the information reliability, uncertainty, and subjectivity of affective words, morphological features, affective responses, and morphological design knowledge are fully considered, and it can be divided into four stages, as shown in Figure 5.

3.3.1. Stage-I of the Enhanced KE

In stage-I, the design knowledge flow refers to the initial processing from textual data to preference information, especially regarding the subjectivity problem of affective words in review information mining. The execution flow runs from Steps 1 to 4 in Figure 6.
Step 1: Merchandise online reviews crawling. The online reviews of the merchandise from E-commerce platforms are analyzed through web scraping enabled by the “Octoparse” software (8.6.2.803).
Step 2: Affective words collection. Descriptive adjectives are collected from the scraped data to form an information base of initial affective words. The words in this information base are then arranged in descending order of frequency and screened, and a group of prominent high-frequency words is obtained under a certain threshold setting.
Step 3: Affective words library rebuilding. Considering the similar representations of the words themselves and the subjective wording habits of different users, it is necessary to match the selected high-frequency words against Chinese and English word databases. The open-source online reverse dictionary systems "WordNet" (English) and "WantWords" (Chinese) are taken as the base thesauri, and the "GoogleNews-vectors" API is used to calculate the similarity of the matched words, so as to obtain representative words after convergence; the antonyms in the base lexicon are matched to construct positive and negative lexical pairs.
Step 4: Representative affective words focusing. The word vector model "Word2Vec" from the R-package "wordVectors" is used to calculate the pairwise semantic similarity within each affective word group, and the word with the highest accumulated similarity is taken as the representative of the group.
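Step 4 can be illustrated with a small cosine-similarity computation. The word vectors below are hypothetical stand-ins for the "GoogleNews-vectors" embeddings, not real Word2Vec outputs; the selection rule (highest accumulated similarity to the rest of the group) follows the step description.

```python
import math

# Hypothetical word vectors standing in for "GoogleNews-vectors" embeddings;
# the real step uses Word2Vec vectors of each affective word group.
vecs = {
    "simple":       [0.9, 0.1, 0.1],
    "minimalistic": [0.8, 0.2, 0.1],
    "clean":        [0.7, 0.3, 0.2],
    "fresh":        [0.1, 0.9, 0.3],
}

def cos(a, b):
    """Cosine similarity between two word vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Accumulated pairwise semantic similarity; the representative affective word
# is the one with the highest total similarity to the rest of its group.
sims = {w: sum(cos(vecs[w], vecs[v]) for v in vecs if v != w) for w in vecs}
representative = max(sims, key=sims.get)
```

With these toy vectors the semantic outlier ("fresh") accumulates the lowest similarity, so it cannot be chosen as the group representative.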

3.3.2. Stage-II of the Enhanced KE

In stage-II, this flow of design knowledge refers to the initial and secondary processing from image data to feature information, especially for the uncertainty and reliability of sample clustering. The execution flow is from Steps 5 to 9, in Figure 7.
Step 5: E-commerce platform product crawling. Product images can also be crawled on E-commerce platforms, and the principle of crawling is to select them according to the brand list and determine several images with historical significance, market representation, and independent form.
Step 6: Product images collection. Image pre-processing refers to standardizing the image size (25*25 mm), standardizing the image perspective (dimetric), removing background noise (matting), and overall desaturation processing (monochrome) to prevent the influence of irrelevant factors in the affective response test caused by non-form factors and form the initial product image information base.
Step 7: Representative product images focusing. According to the CMZ questionnaire (5-point Likert input to the CMZ processor) on whether each affective word is "belonging" or "not-belonging", spatial clustering is carried out on the uncertainty values of the linguistic evaluations. Here, the cloud similarity distances of the CMZ uncertainty value and reliability value are used to build an improved composite Euclidean distance; the composite distances for "belonging" and "not-belonging" are then calculated respectively and stored as the coordinates of the clustering space; finally, K-means clustering is completed in this two-dimensional space.
Let $\mathrm{CMZ}_1 = (A_1, B_1) = [(Ex_1^l, En_1^l, He_1^l), (\widetilde{Ex}_1^l, \widetilde{En}_1^l, \widetilde{He}_1^l)]$ and $\mathrm{CMZ}_2 = (A_2, B_2) = [(Ex_2^l, En_2^l, He_2^l), (\widetilde{Ex}_2^l, \widetilde{En}_2^l, \widetilde{He}_2^l)]$ be two arbitrary clouds with Z-numbers. According to the cloud similarity distance calculation, the distance can be divided into the uncertainty distance of CMZ part A and the reliability distance of CMZ part B, where Dis(A1, A2) and Dis(B1, B2) are calculated as follows
$$Dis(A_1, A_2) = \sqrt{(Ex_1^l - Ex_2^l)^2 + (En_1^l - En_2^l)^2 + (He_1^l - He_2^l)^2}$$
$$Dis(B_1, B_2) = \sqrt{(\widetilde{Ex}_1^l - \widetilde{Ex}_2^l)^2 + (\widetilde{En}_1^l - \widetilde{En}_2^l)^2 + (\widetilde{He}_1^l - \widetilde{He}_2^l)^2}$$
Then, the composite distance of CMZ can be further obtained as follows
$$Dis(\mathrm{CMZ}_1, \mathrm{CMZ}_2) = \sqrt{Dis(A_1, A_2) \cdot Dis(B_1, B_2)}$$
where the influence of the two kinds of information on feature similarity is comprehensively reflected in the single value Dis(CMZ1, CMZ2) by combining the uncertainty part and the reliability part in the form of the square root of their product.
Based on the above composite distance calculation, the "belonging" and "not-belonging" of the features are represented as coordinates D = (Disbelonging, Disnot-belonging) via Equations (17) and (18). This two-dimensional form is obviously convenient for K-means computation; the specific K-means steps are as follows: (1) initialize the cluster centers, (2) assign samples to the nearest cluster centers, (3) compute new cluster centers, and (4) iterate the updates in sequence. In addition, the optimal number of clusters K can be determined from the decreasing trend of the SSE in the elbow graph, and the improved K-means is thereby accomplished.
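The composite distance of Equations (17) and (18) and the K-means step on the resulting 2-D coordinates can be sketched as follows. The CMZ triples are made-up toy values, and reference "belonging"/"not-belonging" clouds are assumed for the coordinate construction; this is an interpretive sketch, not the authors' implementation.

```python
import numpy as np

def cloud_dist(c1, c2):
    """Euclidean distance between two cloud triples (Ex, En, He), Equation (17)."""
    return float(np.linalg.norm(np.array(c1) - np.array(c2)))

def cmz_dist(cmz1, cmz2):
    """Composite CMZ distance, Equation (18): square root of the product of the
    part-A (uncertainty) and part-B (reliability) cloud distances."""
    (a1, b1), (a2, b2) = cmz1, cmz2
    return float(np.sqrt(cloud_dist(a1, a2) * cloud_dist(b1, b2)))

def kmeans(points, k, iters=20):
    """Plain K-means on the 2-D (Dis_belonging, Dis_not-belonging) coordinates;
    deterministic initialization from the first k points for reproducibility."""
    centers = points[:k].astype(float).copy()
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Toy CMZs: hypothetical (Ex, En, He) triples for parts A and B, plus assumed
# reference clouds for "belonging" and "not-belonging"
ref_belong = ((1.00, 0.10, 0.01), (0.90, 0.10, 0.01))
ref_not = ((0.00, 0.10, 0.01), (0.10, 0.10, 0.01))
samples = [
    ((0.90, 0.12, 0.01), (0.85, 0.10, 0.01)),
    ((0.10, 0.10, 0.02), (0.15, 0.12, 0.01)),
    ((0.95, 0.10, 0.01), (0.90, 0.11, 0.01)),
    ((0.05, 0.11, 0.01), (0.12, 0.10, 0.02)),
]
coords = np.array([[cmz_dist(s, ref_belong), cmz_dist(s, ref_not)] for s in samples])
labels, centers = kmeans(coords, k=2)
```

Samples close to the "belonging" reference land near one axis and far from the other, so K-means separates the two groups cleanly in the composite-distance space.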
Step 8: Experimental sample images rebuilding. In general, users have mental inertia in their perception of existing market sample forms, which makes it inconvenient to establish an appropriate image-form mapping directly. Therefore, it is necessary to arrange and combine the morphological features extracted from the representative samples. The representative sample forms obtained from cluster grouping are further analyzed in terms of commonality and individuality. Based on the conventions of product form description, features can be extracted from aspects such as contour, decomposition, and structure; among these, the outer contour features are particularly prominent in form perception, enabling the description of the Top and Lateral profiles. Considering the combination of various forms under regional characteristics, the form can additionally be decomposed and described as the penetrating contour. Hence, the contour lines of the Top, Lateral, and Penetration profiles are extracted to analyze the morphological features of the samples, and the specific types of elements under each morphological feature are catalogued for rebuilding. Given the trade-off between form generation efficiency and element diversity in the design process, Taguchi orthogonal design is adopted to establish a robust experiment plan card that optimizes the product form generation process with as few experiments as possible. The unequal-level orthogonal experiment is implemented with the R-package "DoE.base". The generated plan cards meet the requirement that the experimental items be uniformly dispersed and well-comparable, and the combinations generated by the plan cards can then be modeled in Rhinoceros 3D. The 3D digitization extracts only the contour lines and part of the feature lines to simplify the morphological information and highlight the outer contour features of the morphology elements.
Step 9: Product morphological features. Dummy variable coding is carried out for the specific types of elements in the experimental samples, so that the elements under different contour features can be interpreted by binary classification. For example, if the top view contour feature is divided into three elements and samples 1 to 3 carry different morphology elements, the coding can be expressed as "100" for sample-1, "010" for sample-2, and "001" for sample-3. The encoded feature and element information can then be used as the independent variables of the morphological design knowledge prediction model in the later stage.
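The dummy coding of Step 9 can be sketched as follows; the element labels and feature level lists are hypothetical examples in the spirit of the case study, not the actual catalogue.

```python
# Hypothetical element labels per experimental sample: (Top, Lateral, Penetration)
samples = [("arch", "circle", "none"), ("circle", "trapezoid", "rectangle")]
levels = [
    ("Top", ["rectangle", "circle", "arch", "shell"]),
    ("Lateral", ["rectangle", "circle", "lower-arch", "upper-arch", "trapezoid"]),
    ("Penetration", ["none", "rectangle", "circle"]),
]

def dummy_code(sample):
    """One-hot (dummy) code each contour feature: the element present under a
    feature gets 1 and its siblings get 0; e.g. Top = "arch" among four Top
    elements becomes 0010."""
    row = []
    for value, (_feat, opts) in zip(sample, levels):
        row += [1 if value == opt else 0 for opt in opts]
    return row

# Independent-variable matrix for the later prediction model
X = [dummy_code(s) for s in samples]
```

Each row has exactly one 1 per feature block, giving the 4 + 5 + 3 = 12 binary columns that serve as the independent variables of the prediction model.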

3.3.3. Stage-III of the Enhanced KE

In stage-III, this design knowledge flow refers to the affective response analysis of affective words and morphological features, especially for the subjective evaluation information from users. The execution flow is from Steps 10 to 11, in Figure 8.
Step 10: Reliability and uncertainty affective response questionnaire. With the help of the CMZ questionnaire, the uncertainty affective response of several experimental samples under the image stimuli was obtained by the SD method, and the obtained data set could be recorded as dual 3-tuple information by the CMZ processor.
Step 11: Reliability and uncertainty affective response evaluation results. Since the information form of CMZ is a dual 3-tuple, it is difficult to input it directly as the dependent variable of the prediction model; the results of the CMZ questionnaire therefore need to be processed and transformed, and the scoring function is a tool that well reflects the accumulated uncertainty and reliability information.
In fuzzy theory, a score function is commonly constructed to transform the total score of different sets. According to the description of the CM in Equation (4) of Definition 3 above, the scoring function of the CM [63] takes the scoring value of a cloud drop $(d, y^{CM})$ to be $CS = d \cdot y^{CM}$.
Thus, CMs can be compared according to their score values. In most cases, however, the total cloud score CS cannot be measured exactly, owing to the uncertain number of cloud droplets Ndrop and the random droplet distribution. Nevertheless, an approximate value $\widehat{CS}$ of CS can be determined by Monte Carlo simulation [63], in which the total score is simulated by a forward cloud generator based on the known numerical characteristics Ex, En, and He. The estimated score $\widehat{CS}$ of a CM can be obtained as
$$\widehat{CS} = \frac{1}{N_{\mathrm{drop}}} \sum_{i_{\mathrm{drop}}}^{N_{\mathrm{drop}}} d_{i_{\mathrm{drop}}} \, y_{i_{\mathrm{drop}}}^{CM}$$
Since the data of CMZ take the form of dual 3-tuples, a generalized scoring function must be constructed to process the outcomes of the CMZ questionnaire. According to the characteristics of CMZ, combined with the coordinated contribution of the scores and the harmonic-mean treatment of entropy, the two parts of CMZ = (A, B) = $[(Ex_i^l, En_i^l, He_i^l), (\widetilde{Ex}_i^l, \widetilde{En}_i^l, \widetilde{He}_i^l)]$ are scored respectively. The score of CMZ part A is $\widehat{CS}(A)$, and that of part B is $\widehat{CS}(B)$; the CMZ score is then defined as
$$\Gamma_{\mathrm{CMZ}} = \sqrt{ \dfrac{ \widehat{CS}(A) \cdot \widehat{CS}(B) }{ \dfrac{1}{En_i^l} + \dfrac{1}{\widetilde{En}_i^l} } }$$
where the numerator $\widehat{CS}(A) \cdot \widehat{CS}(B)$ represents the product of the uncertainty and reliability scores: a higher $\widehat{CS}(A)$ implies that the linguistic value within the uncertainty range is greater; a higher $\widehat{CS}(B)$ indicates that the probability value within the reliability range is higher; and a higher level of both suggests that the joint effect is more significant. The denominator $(1/En_i^l) + (1/\widetilde{En}_i^l)$ is a harmonic-averaging form that ensures the part with greater entropy carries less weight in the result, while the part with less entropy (i.e., less uncertainty) carries more weight. This means that a CM with higher uncertainty has a moderately reduced impact on the result, while a CM with more certainty plays a larger role. Harmonic averaging was chosen so that a CM with higher entropy does not dominate the whole, as it would if the entropies were added directly. The root sign over the whole keeps the value numerically balanced and prevents extreme results.
In summary, the generalized scoring function ΓCMZ is an information cumulant that well reflects uncertainty and reliability by Equations (19) and (20).
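The estimated score of Equation (19) and the generalized scoring function of Equation (20) can be sketched as follows. This is an interpretive sketch under assumptions: the forward normal cloud generator draws each drop's entropy from N(En, He) and the drop itself from N(Ex, |En_i|), using the certainty degree as the membership, and the numeric cloud parameters are hypothetical.

```python
import numpy as np

def cs_hat(Ex, En, He, n_drop=50_000, seed=0):
    """Monte Carlo estimate of the cloud score of Equation (19): a forward
    normal cloud generator draws a per-drop entropy En_i ~ N(En, He), a drop
    d_i ~ N(Ex, |En_i|), and uses the certainty degree as the membership y_i."""
    rng = np.random.default_rng(seed)
    En_i = rng.normal(En, He, n_drop)
    d = rng.normal(Ex, np.abs(En_i))
    y = np.exp(-((d - Ex) ** 2) / (2 * En_i ** 2))
    return float(np.mean(d * y))

def gamma_cmz(A, B):
    """Generalized CMZ score of Equation (20): the square root of the product
    of the part-A and part-B scores over the harmonic entropy weight."""
    num = cs_hat(*A) * cs_hat(*B)
    den = 1.0 / A[1] + 1.0 / B[1]     # 1/En + 1/En~
    return float(np.sqrt(num / den))

A = (3.0, 0.5, 0.05)   # hypothetical uncertainty cloud (SD side)
B = (0.8, 0.1, 0.01)   # hypothetical reliability cloud (NRS side)
score = gamma_cmz(A, B)
```

Raising the reliability expectation in part B raises the combined score, reflecting the joint-effect reading of the numerator above.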

3.3.4. Stage-IV of the Enhanced KE

In stage-IV, this design knowledge flow refers to the calculation of preference information and condenses it into interpretable design knowledge. It specifically addresses the problem of subjective expert intervention in hyperparameter adjustment during calculations. The execution flow is from Steps 12 to 13, in Figure 9.
Step 12: Reliability and uncertainty affective prediction model. In linear regression based on machine learning, the total score of CMZ is taken as the dependent variable of the uncertainty affective prediction model, while the independent variable is the dummy variable of the encoded morphological feature and element information, thus completing the BENR prediction model. The correlation coefficient for BENR output is the degree of support for morphological design knowledge.
Step 13: Morphological design knowledge. In the same way, repeat the above process to establish multiple prediction models of affective response and gain valuable morphological design knowledge through affective coupling.

4. Case Study

4.1. Illustrative Example

Compared with conventional household cleaning products, domestic cleaning robots have witnessed more rapid iteration in demand, and form-stimulated demand is often more sensitive. Morphological design supported by affective information can therefore help attract consumers and enhance the market competitiveness of enterprises. Simultaneously, for efficient design and agile manufacturing, it is essential to deeply develop the performance of the knowledge flow. An illustrative example of a domestic cleaning robot is thus adopted to validate how the enhanced KE fully considers the reliability, uncertainty, and subjectivity of information in processing the design knowledge flow.
In stage-I of the enhanced KE, a total of 105 descriptive words describing the morphology of domestic cleaning robots were extracted by crawling merchandise online reviews about domestic cleaning robots on an E-commerce platform (https://www.jd.com/, accessed on 31 December 2023) through Octoparse; these form the information base of the initial affective words. The Chinese words were translated into English and recorded in Table 1.
In this study, the screening threshold was set to 4, with significant differences in meaning as the screening criterion. Thus, prominent high-frequency word groups such as "concise", "flexible", "fashionable", and "soft" were obtained. According to the base thesaurus, the four types of words were matched respectively; taking "concise" as an example, "simple", "minimalistic", "tidy", "clean", "clear", and "fresh" constitute affective word group 1. The semantic similarity is calculated by feeding each affective word group into the "Word2Vec" word vectors under the API interface, and a word group similarity matrix is constructed. The item with the highest row or column total in the similarity matrix is then selected as the representative affective word of the group. The four word groups are calculated in this way, and the results are shown in Figure 10. The resulting representative affective words are "clear", "portable", "upscale", and "friendly".
In stage-II of the enhanced KE, image crawling is carried out on the same E-commerce platform used for capturing reviews. A total of 50 product images are crawled according to the preset selection rules, and the image size, perspective, background, and color are pre-processed step by step to form the initial product image information base, as shown in Figure 11.
To find product features with more commonality and individuality among the numerous initial samples, design experts are needed to analyze the features against multiple affective words. A total of 7 design experts were invited to conduct the "belonging" and "not-belonging" evaluation on the image features of the 50 initial product images. The CMZ can be calculated from the evaluation results of the MGLTSs (5-granularity for uncertainty and 11-granularity for reliability) by the MGLSFs-II (a = 1.37). Taking "clear" as an example, the CMZs of the 50 samples are shown in Table 2. The CMZs are then compared and K-means clustering is completed according to the proposed composite distance calculation; the case of "clear" is shown in Table 3.
The K-means calculation is then performed, and the appropriate K value is determined from the SSE conditions of the elbow method; here K is essentially 4 for each affective word. The clustering of composite distances under the four affective words can thus be obtained, and the representative products of the clusters can be determined by the within-cluster distance from the cluster center to the coordinates, as shown in Figure 12. Taking "clear" as an example, the samples are divided into four classes, and the class centers are determined based on the Euclidean distance. It can thereby be concluded that the center representative of cluster 1 is sample 34, that of cluster 2 is 27, that of cluster 3 is 16, and that of cluster 4 is 42. By analogy, the class center representatives of the remaining three representative affective words can be obtained.
Based on the representative product samples identified in the cluster graphs of the four affective words, the design experts can further clarify that there are four morphological elements in the outer contour of the Top profile, five in the Lateral profile, and a relatively simple three in the Penetration profile. Thus, twelve design elements influence the general outline of the product in practice. However, there are 60 possible combinations of these design elements, and it is obviously unwise to analyze them all. Therefore, an orthogonal experimental design with three unequal-level factors is adopted, one factor at 4 levels, one at 5 levels, and one at 3 levels, yielding the experiment plan card. According to the experiment plan card, 3D modeling was carried out with only the general outline retained, and the experimental sample image rebuilding was completed, as shown in Figure 13. Drawing on the morphological element information in Figure 9, the design experts classified the Top profile into "rectangle, circle, arch, and shell", the Lateral profile into "rectangle, circle, lower-arch, upper-arch, and trapezoidal", and the Penetration profile into "none, rectangle, and circle"; the rebuilt experimental samples can then be encoded, as shown in Table 4.
In stage-III of the enhanced KE, design experts used the CMZ questionnaire to analyze the uncertain affective responses associated with the 12 morphological elements across the three types of product characteristics under the four affective descriptors. The 5-point SD and 11-point NRS methods were employed to capture the linguistic granularity of user affective responses while enhancing the reliability of the uncertainty constraints, and the output was generated through the dual 3-tuple form of the CMZ. The uncertainty and reliability score functions ($\widehat{CS}(A)$ and $\widehat{CS}(B)$) were calculated separately from the CMZ, and these were then integrated into the generalized scoring function (ΓCMZ), which encapsulates abundant user affective information by using the harmonic average. Taking "clear" as an example, the evaluation results of 20 users for the 36 experimental samples are expressed as CMZ (Cronbach's α = 0.872), and the corresponding $\widehat{CS}(A)$, $\widehat{CS}(B)$, and ΓCMZ values are presented in Table 5.
In stage-IV of the enhanced KE, uncertainty affective prediction is performed with BENR. Bayesian optimization is configured with hyperparameter exploration boundaries α∈[0, 1] and λ∈[0.001, 10]. The initial number of GP points is set to 30, with a maximum iteration limit of 120. The UCB exploration and exploitation coefficient is specified as 1.96. Taking "clear" as an example, the optimization prediction and the optimization exploration process are shown in Figure 14.
Part (a) of Figure 14 illustrates the volatility in performance function values during both initial and iterative phases under UCB, part (b) depicts convergence trends in utility values throughout ACF exploration iterations, part (c) showcases best performance function development over iteration, part (d) highlights best hyperparameter exploration within the defined space, while part (e) compares actual and predicted ENR outcomes by utilizing the best hyperparameters identified. Additionally, calculation results for three other affective words are included in Appendix B.
Similarly, the other three types of CMZ questionnaires (such as “portable”, “upscale” and “friendly”) also have good internal consistency within the scale (Cronbach’s α = 0.760, 0.825, 0.831). When the four affective words are individually predicted, the BENR coefficient can be interpreted as the knowledge pertaining to their morphological design, as illustrated in Figure 15.
However, product morphology frequently exhibits multi-affective coupling, which can be further represented as a coupling knowledge graph, as depicted in Figure 16, where the coefficients are normalized to a symmetric range (from −100% to 100%). In the design knowledge flow, design experts can intuitively distill morphological design knowledge with significant contributions from a single affective word, while simultaneously extracting high-quality morphological design insights for multiple affective words from the overlapping regions of the coupling knowledge graph.
For example, if the morphological design considers only "clear", then according to the design knowledge result in Figure 15a, the feature order and element types should be "Penetration1-Lateral1-Top2". If only "upscale" is considered, the knowledge result is "Top2-Lateral2-Penetration2" in Figure 15b, and the Penetration feature can seemingly be ignored. If the four affective words are considered simultaneously, the knowledge result is Top2, Lateral2, and Penetration1 in Figure 16.

4.2. Comparative Result

Before presenting the quantitative results, it is crucial to recognize that conventional KE, relying on fixed bipolar scales such as the SD or degree scales like the NRS, offers only narrow-granularity scoring and thus fails to capture the intrinsic randomness and fuzziness of human emotional judgments. Moreover, subjective responses often include degree adverbs and extreme ratings that skew the overall affective profile. Similarly, standard penalized regression techniques apply static regularization with manually tuned parameters, making it difficult to balance sparsity against stability and preventing the model from accurately reflecting uncertainty in affective prediction. In contrast, our CMZ-BENR framework employs cloud model and Z-numbers theory to jointly model randomness and fuzziness at multiple granularities, while leveraging symmetric ℓ1/ℓ2 regularization under Bayesian optimization to automatically trade off sparsity and robustness, thereby delivering a more reliable, uncertainty-aware prediction of user emotions. The results of CMZ and BENR in the enhanced processing of information reliability, uncertainty, and subjectivity are compared respectively, as follows.
(1)
Comparison I: SD, NRS, SD-CM and SD/NRS-CMZ.
The linguistic granularity processing of the scale is used for information processing to form valuable affective knowledge. Taking Card ID 1 under "clear" as an example, the different results are printed for comparison in Figure 17.
In Figure 17a, a density histogram is used to illustrate the attitudes of 20 users as the group toward uncertainty in these circumstances. Most users focus on the SD results of +2 and +1, yielding an expected value of 1.7, approximating a normal distribution. However, if an individual user evaluation diverges from the group, outliers such as −2 and −1 may occur. Consequently, the evaluation outcomes are prone to distortion due to inherent subjectivity.
In Figure 17b, the density histogram similarly serves to depict attitudes toward reliability. This representation benefits from NRS-11 by providing a more nuanced understanding of the reliability associated with evaluations while still being affected by subjective assessments.
In Figure 17c, MGLTSs and MGLSFs are employed to perform non-linear transformations on the linguistic granularity of SD results, thereby introducing flexibility into the evaluation process while further articulating it through the three digital features of CM to simulate group assessment through 1000 cloud droplets. This approach offers a more nuanced representation of the current group of uncertain attitudes, and the CM transformation of uncertainty enhances the capacity for expanding the qualitative concept T within the universe of discourse U. At the same time, the robustness of granularity differentiation is retained.
In Figure 17d, drawing upon the concept of Z-numbers, a collective attitude towards uncertainty and reliability is established, resulting in a CMZ that comprises both the SD-CM and the NRS-CM. Compared with the first three results (a)~(c), CMZ further strengthens the reliable description of linguistic granules, alleviates the uncertainty of the evaluation results, and weakens the subjectivity of users.
The above analysis of the affective response questionnaire, with its richer information about the raters, shows the unique advantages of the CMZ method. Relying solely on the histogram mean is not reliable for the clustering questionnaire in stage-II, where the raters are a small number of design experts rather than a large group of users. At the same time, owing to their professional background, design experts often exhibit a higher level of confidence in their evaluations. In this context, reliability measures can enhance the validity of the evaluation information, making the results derived from the CMZ methodology more robust. Therefore, applying CMZ to improve K-means in stage-II of the enhanced KE provides significant advantages.
(2)
Comparison II: Ridge, Lasso, ENR and BENR.
The current uncertainty affective response evaluation results and product morphological features, when input into the uncertainty affective prediction model, generate a singular matrix (a non-full-rank matrix) due to the non-deterministic nature of the input. This makes it impossible to obtain the standard inverse matrix required by OLS. To address this issue, alternative solutions such as the generalized inverse and penalized regression can be utilized. The generalized inverse (the "ginv" function of the R-package "MASS") can produce an inverse matrix under non-full-rank conditions, ensuring the feasibility of the regression calculation; however, it does not address multicollinearity within the data, leading to unstable regression coefficients as well as poor model interpretability and predictive performance. Conversely, penalized regression methods such as Ridge, Lasso, and ENR (R-package "glmnet") incorporate a regularization term that mitigates the effects of multicollinearity, enhancing model stability and interpretability, particularly in models with complex features. Additionally, penalized regression effectively reduces the risk of overfitting, resulting in more robust predictive performance on the test set. Therefore, this study adopts penalized regression as the primary approach to enhance the model's predictive accuracy and applicability.
The performance function ρ(γ) was used to compare the penalized regression methods, with the results reported in Table 6. According to the principle of maximizing ρ(γ), the proposed BENR achieves the best performance for all four affective word cases. BENR can autonomously determine the optimal combination of hyperparameters, demonstrating its advantage in model selection. ENR ranks next: it determines the optimal penalty intensity λ through K-CV, but because the mixing coefficient α must be set manually, its results retain some randomness. Lasso performs well in variable selection and in mitigating overfitting, but its overall performance is moderate. Ridge, by comparison, exhibits a relatively large bias due to its high penalty intensity λ, leading to suboptimal results.
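The comprehensive performance function behind this comparison, defined in Algorithm A4 as ρ(γ) = 1/(RMSE + |1 − R²|), can be computed directly; a small Python sketch (the example values are illustrative):

```python
import numpy as np

def rho(y_true, y_pred):
    """Comprehensive performance function rho(gamma) = 1 / (RMSE + |1 - R^2|).
    Larger values indicate lower error and an R^2 closer to 1."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 / (rmse + abs(1.0 - r2))

y = np.array([3.0, 1.5, 4.2, 2.8])
good = rho(y, y + 0.05)  # small errors -> large rho
poor = rho(y, y + 1.00)  # larger errors -> smaller rho
```

Maximizing ρ(γ) therefore favors models that are simultaneously low in RMSE and close to R² = 1, which is the selection rule applied in Table 6.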

5. Discussion

A novel CMZ-BENR enhanced KE process is put forward to address the limitations of reliability, uncertainty, and subjectivity in the information flow of design knowledge management. This KE process is divided into four stages: affective word extraction, morphological feature analysis, uncertainty affective response, and uncertainty affective prediction. Each stage applies methods that exploit the inherent strengths of its information. For instance, affective words are processed through mainstream online review crawling and sentiment analysis; morphological features are clustered using the compound weighted similarity distance of CMZ within the K-means algorithm; the affective response is represented by a CMZ generalized scoring function in harmonic averaging form; and knowledge prediction is achieved through the BENR model, which optimizes its hyperparameters automatically.
Additionally, in the morphological clustering and affective response stages, a CMZ processor is introduced to enhance the interpretability of the gray box. This processor transforms uncertainty from MGLTSs to MGLSFs, thereby eliminating linguistic granularity uncertainty, reducing subjective biases in individual assessments, and enhancing information reliability by integrating the single CM into a Z-number as a dual-CM representation in the form of CMZ. This dual representation is itself an expression of symmetry.
To further validate the methodological soundness, innovativeness, and generalizability of this enhanced KE in the design knowledge flow, the discussion proceeds from the following three aspects.
(1)
Discussion I: the conventional KE, partially improved KE, and enhanced KE.
The conventional KE process typically proceeds as follows. First, design experts identify the design object within the design knowledge flow, then collect affective words and product images from as many sales channels as possible, such as advertisements, shopping malls, and websites, and process them according to certain rules to build an affective word database and a sample image database. On this basis, users are invited to express their affective responses using the SD method, and the relevant design knowledge is predicted from the response results using the QT1 model. However, during crawling, focusing, responding, and predicting, the complexity of the real world introduces high uncertainty, and the subjectivity of different stakeholders further affects the accuracy and consistency of the design knowledge. From a symmetry perspective, the conventional KE workflow treats information asymmetrically: expert judgments are heavily weighted, while the balancing of uncertainty and reliability is largely neglected, leading to an imbalanced (non-symmetric) knowledge flow.
The partially improved KE refers to optimizing only certain stages or local methods to improve affective recognition or prediction accuracy. Section 4.2 compared and analyzed the KE in detail with respect to perceptual information representation methods and penalized regression prediction methods. The results show that CMZ and BENR have unique advantages in their application to the design knowledge flow; both are effective gray boxes that handle uncertain information well. In perceptual information representation, CMZ improves the uncertain affective responses of questionnaire survey users, describing affective information more densely, as shown in Figure 13. At the same time, in the laboratory environment, CMZ enhances the clustering effect by using cloud similarity to improve the reliability of information evaluation when design experts screen and evaluate product samples. In penalized regression prediction, BENR reduces the subjectivity of design expert intervention compared with the other penalized regressions in Table 6 and uses Bayesian optimization for self-supervised hyperparameter tuning while retaining a controlled degree of uncertainty, thereby improving the reliability of prediction results. Nevertheless, from a symmetry standpoint, partial improvements often rebalance one dimension (e.g., uncertainty) at the expense of another (e.g., subjectivity), resulting in a locally symmetric but globally asymmetric workflow.
The enhanced KE offers a design knowledge flow that addresses the limitations of single perceptual information representation and penalized regression in supporting high-dimensional information. In handling complex affective information and managing uncertainty, the combination of CMZ and BENR demonstrates outstanding results, as illustrated in the example in Section 4.1. Mapping from MGLTSs to MGLSFs, then converting uncertainty from MGLSFs to CM, and finally integrating CM into CMZ, with further extensions to K-Means clustering and BENR optimization—the proposed enhanced KE achieves a seamless integration across multiple methods. This cohesive approach significantly enhances the performance of design knowledge flow in managing complex emotional information. Crucially, by exploiting the inherent symmetry of normal cloud distributions in CMZ and the balanced ℓ1/ℓ2 regularization in BENR, the enhanced KE attains a structurally harmonious (fully symmetric) process, ensuring that reliability, uncertainty, and subjectivity are treated in equilibrium throughout the workflow.
Unfortunately, both the partially improved and the enhanced KE retain some subjectivity in the recognition of product morphological features, and more representative morphological features cannot yet be classified through human crowdsourcing; this can be explored further in future work.
(2)
Discussion II: different types of the enhanced KE process.
Generating Kansei profiles through a probabilistic approach is a reasonable way to model Kansei data as a probability distribution: it captures the uncertainty resulting from human subjective judgment and the ambiguity of the Kansei words themselves [24]. Fuzzy linguistic processing and linguistic probability distributions are likewise employed for perceptual information [64]. The CMZ presented in this study emphasizes multi-granularity linguistic value transformation and the characterization of the cloud droplet probability distribution. At the same time, CMZ can represent both the uncertainty and the reliability probabilities of linguistic values. Hence, in stages II and III of the design knowledge flow of the KE, the introduction of CMZ is significant for addressing information reliability, uncertainty, and subjectivity.
Combining the rough set (RS) for attribute reduction and SVR for nonlinear mapping between affective words and morphological features can enhance the KE process and effectively capture morphological design knowledge [27]. Noise handling and model robustness in SVR depend mainly on the kernel function and the penalty parameter C, which can be tuned by adjusting the tolerance for errors outside the ε range. However, SVR cannot achieve ℓ1-regularized sparse feature selection, in contrast to ENR. In addition, as a nonlinear regression model, SVR has difficulty handling multicollinearity and lacks a clear explanation of the contribution of each feature. In contrast, ENR, as a penalized linear regression, can handle multicollinearity and provide intuitive coefficient explanations, making it easy to analyze feature contributions. Furthermore, the proposed BENR is an ENR with adaptive hyperparameter setting based on Bayesian optimization, utilizing built-in probabilistic modeling to improve performance and enhance model robustness. Compared with [27], CMZ-BENR carries more information than RS-SVR, taking full account of reliability, uncertainty, and subjectivity.
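The interpretability contrast between SVR and ENR noted above can be illustrated with a minimal Python sketch using scikit-learn as a stand-in (the data and parameter values are illustrative, not those of the case study):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 4))
X[:, 3] = X[:, 2] + 0.01 * rng.normal(size=80)  # near-duplicate feature (multicollinearity)
y = 2.0 * X[:, 0] + 1.0 * X[:, 2] + 0.1 * rng.normal(size=80)

svr = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X, y)
# With a nonlinear kernel, SVR exposes no per-feature coefficients:
# accessing svr.coef_ raises AttributeError for kernel="rbf".

enr = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(X, y)
print(enr.coef_)  # direct per-feature contributions; correlated features are shrunk jointly
```

The elastic net's ℓ2 component is what lets the two nearly collinear columns share their contribution instead of producing an unstable, arbitrarily split estimate.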
The integrated application of CMZ-BENR naturally increases the operational and computational complexity of information integration, which should be addressed in future design-system development.
(3)
Discussion III: various types of design knowledge flow.
Some scholars employ a knowledge guide incorporating prior information to stimulate knowledge discovery and constitute the design knowledge flow [11], which is also an excellent approach to improving information reliability and mitigating design subjectivity; nevertheless, the extraction of morphological knowledge requires further reinforcement in future research. It is generally believed that effective knowledge management is crucial for improving design efficiency. Therefore, in current research [12], a common approach is to embed the KE process into a knowledge map and knowledge representation to build a demand-oriented knowledge management model that achieves knowledge acquisition, modeling, and prediction. However, that system architecture has not been fully explored in terms of information reliability, uncertainty, and subjectivity. In contrast, this study strengthens the information processing of the systematic knowledge management model to further enhance the effectiveness of the design knowledge flow.
It should be noted, however, that this study focuses only on the optimization of the KE process and does not explore the comprehensive framework of the knowledge-based system in depth. This limitation will be addressed in future research.
To sum up, the core of this research is optimizing the design knowledge flow by adopting the enhanced KE. The study demonstrates the superiority of the enhanced KE in theory and validates its effectiveness in practice through the case study. It not only enhances the efficiency of the design process but also boosts the innovation and adaptability of the design through better integration and utilization of information resources. However, the optimization of local details for knowledge-based systems within the design knowledge flow needs further research.

6. Conclusions

This study proposes an uncertainty treatment for knowledge acquisition and representation, termed the CMZ-BENR enhanced KE process, which enhances the reliability of representations while minimizing subjectivity as much as possible. Compared with previous studies, this study makes the following contributions.
(1)
The enhanced KE process addresses uncertainties and subjectivities and highlights reliabilities in design knowledge flow.
(2)
The CMZ improves the description of reliability and uncertainty in subjective rating values in the KE process. This is attributed to the symmetry of the normal cloud distribution and of the Z-number itself.
(3)
The BENR enhances the modelling performance concerning preference morphological uncertainty and subjectivity in the KE process. This is because the Bayesian method automatically balances the ℓ1/ℓ2 regularization.
(4)
This novel process optimizes the design knowledge flow, exemplified by the domestic cleaning robot.
This study can provide theoretical guidelines and design references for design knowledge flow. Furthermore, it may also signal the potential for more profound knowledge reasoning and interpretation, thereby establishing a design knowledge flow that is more physically realistic and warrants further investigation.
Nevertheless, the present study has some limitations to be addressed in follow-up work.
(1)
Computational complexity. The CMZ module requires repeated cloud parameter simulation and Z-number mapping, which is computationally intensive. The Bayesian optimization in BENR incurs substantial computational overhead for the ℓ1/ℓ2 hyperparameters, especially on high-dimensional feature sets.
(2)
Reliance on expert intervention. Stage-II still relies on manual morphological feature analysis, introducing subjective biases and limiting scalability.
(3)
Symmetry and extensibility of the framework. Although the way CMZ-BENR integrates knowledge was discussed, the framework only completes knowledge representation from stage-II to stage-IV and cannot yet optimize higher-level design goals.
(4)
Scope of the case verification. Although a highly demanded and rapidly iterated cleaning robot was selected for case verification, a broader range of personalized product categories should also be explored. The research is thus limited to a single product type (domestic cleaning robots) and may not extend to other fields or more complex design spaces; moreover, the user reviews were drawn from a single platform, whose emotional heterogeneity may not be representative.
To address these challenges, the following future development directions are proposed.
(1)
Algorithm acceleration and scalability. Attempt to explore approximate reasoning (e.g., random cloud sampling) and parallel/distributed implementations to reduce the running time of CMZ and BENR.
(2)
Automatic feature extraction through bioinformatics- and symmetry-driven models. This involves integrating bioinformatics-inspired processing or multi-omics data processing, treating morphological descriptors as “design genes”, and adopting symmetric autoencoder architectures or graph neural networks for end-to-end perceptual feature learning.
(3)
Symmetric parameterization and evolutionary design (stage-V). The KE framework can be expanded with evolutionary algorithms and parametric modeling that exploit structural symmetry, completing a fully symmetric reasoning loop under a unified knowledge management paradigm.
(4)
Broader verification. The CMZ-BENR process should be applied to different product categories (for example, automotive interiors and consumer electronics), with cross-domain user research conducted to evaluate its universality and robustness. In the future, review data from multiple e-commerce platforms can be selected to enable cross-platform demographic analysis.

Author Contributions

Conceptualization, H.L. and P.W.; methodology, H.L., P.W. and J.L.; software, H.L.; validation, P.W., J.L. and C.C.; formal analysis, H.L.; investigation, H.L.; resources, H.L. and P.W.; data curation, H.L.; writing—original draft preparation, H.L.; writing—review and editing, P.W., J.L. and C.C.; visualization, H.L. and J.L.; supervision, P.W., J.L. and C.C.; project administration, P.W.; funding acquisition, P.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Science and Technology Development Fund of the Macao Special Administrative Region [Grant Number 0036/2022/A].

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Faculty of Innovation and Design, City University of Macau (RE-2024-12-30).

Data Availability Statement

Data will be made available on request.

Acknowledgments

The authors would like to thank all experimenters of this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
KE	Kansei engineering
CMZ	Cloud model with Z-numbers
BENR	Bayesian elastic net regression
CM	Normal cloud model
ENR	Elastic net regression
MGLTSs	Multi-granularity linguistic term sets
MGLSFs	Multi-granularity linguistic scale functions
K-CV	K-fold cross-validation
PF	Prior function
ACF	Acquisition function
GP	Gaussian process
UCB	Upper confidence bound
SD	Semantic differential scale
NRS	Numeric rating scale

Appendix A

Appendix A.1

The procedure of the cloud model with Z-numbers is divided into three parts, as follows.
Algorithm A1: Transformation of the MGLTSs-MGLSFs
Input: A matrix of linguistic values s_i^l with granularity l and granularity size 2t (i = 0, 1, …, 2t) // Input: MGLTSs
Output: A matrix θ_i^l of H1(s_i^l), H2(s_i^l), and H3(s_i^l) according to different language scenarios // Output: MGLSFs
1:function(MGLTSs)
2:    MGLSFs <- list() // Calculate the scaling function for each term
3:   for (term in MGLTSs) do
4:      scale_function <- some_Transformation_logic(term) // Transformation logic
5:     MGLSFs[[term]] <- scale_function
6:   end for
7:   return(MGLSFs)
8:end function
Algorithm A2: Generator of the forward normal cloud model CM(Ex_i^l, En_i^l, He_i^l, Ndrop)
Input: Three parameters Ex_i^l, En_i^l, and He_i^l of the CM, and the number of cloud droplets Ndrop
Output: A group of Ndrop cloud droplets conforming to the (Ex_i^l, En_i^l, He_i^l) distribution
1:function(Ex, En, He, Ndrop)
2:   cloud_drops <- numeric(Ndrop) // Store the generated cloud droplets
3:  for (i in 1:Ndrop) do
4:     En_prime <- rnorm(1, mean = En, sd = He) // Generate random En’
5:     d_value <- rnorm(1, mean = Ex, sd = En_prime) // Generate d values
6:     cloud_drops[i] <- d_value // Store cloud drop d values
7:  end for
8:  return(cloud_drops)
9:end function
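An executable counterpart of Algorithm A2, sketched in Python with NumPy (the vectorized form and the clipping of negative standard deviations via the absolute value are implementation choices, not part of the original pseudocode):

```python
import numpy as np

def forward_normal_cloud(Ex, En, He, ndrop, seed=None):
    """Forward normal cloud generator (cf. Algorithm A2):
    for each droplet, draw En' ~ N(En, He^2), then d ~ N(Ex, En'^2)."""
    rng = np.random.default_rng(seed)
    en_prime = rng.normal(En, He, size=ndrop)  # random En' per droplet
    return rng.normal(Ex, np.abs(en_prime))    # |En'| guards against a negative sd

drops = forward_normal_cloud(Ex=5.0, En=0.8, He=0.1, ndrop=5000, seed=0)
```

Pairing two such clouds, one for the fuzzy constraint A and one for the reliability measure B, gives the dual-CM representation assembled in Algorithm A3.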
Algorithm A3: Calculator of the CM with Z-numbers
Input: Z-number with components A (fuzzy constraint on what d can take) and B (reliability measure of A)
Output: Dual CM based on Z-numbers
1:function(A, B, Ex, En, He, Ndrop)
2:  cloud_A <- Generate_normal_cloud_model(Ex_i^l, En_i^l, He_i^l, Ndrop) // Generate CMA
3:  cloud_B <- Generate_normal_cloud_model(Ẽx_i^l, Ẽn_i^l, He_i^l, Ndrop) // Generate CMB
4:  Dual_cloud_model <- list(cloud_A = cloud_A, cloud_B = cloud_B) // Combination
5:  return(Dual_cloud_model)
6:end function

Appendix A.2

The procedure for the Bayesian optimized elastic net regression is divided into three parts, as follows.
Algorithm A4: Definition of the performance function of ENR
Input: Hyperparameters λ and α of ENR, training data Xtrain and Ytrain, test data Xtest and Ytest
Output: Performance indicators RMSE and R2 of ENR
7:function objective_function(lambda, alpha, Xtrain, Ytrain, Xtest, Ytest) // Set the main function
8:    model <- train_elastic_net(Xtrain, Ytrain, alpha = α, lambda = λ) // Construct ENR
9:      Ypredict <- predict(model, Xtest) // Generate predictions on the test set
10:      RMSE <- sqrt(mean_squared_error(Ytest, Ypredict)) //Calculate RMSE
11:    R2 <- 1 − (sum((Ytest − Ypredict)^2)/sum((Ytest − mean(Ytest))^2)) // Calculate R2
12:    ρ(γ) <- 1/(RMSE + abs(1 − R2)) // Define comprehensive performance functions ρ(γ)
13:   return ρ(γ)
14:end function
Algorithm A5: Actuator of the Bayesian optimization process
Input: Hyperparameters search space about λ and α (param_space), initial number of sampling points(init_points), numbers of iterations(n_iter), Xtrain, Ytrain, Xtest, Ytest
Output: λbest, αbest, best_score
15: function bayesian_optimization(param_space, init_points, n_iter, Xtrain, Ytrain, Xtest, Ytest)
16:    history <- empty list // Construct history list
17:    // Initialization: Randomly generate several initial points
18:    For i <- 1 to init_points do // Generate hyperparameters search space
19:         λsample <- random_uniform(param_space.λmin, param_space.λmax) // Boundary of λ
20:         αsample <- random_uniform(param_space.αmin, param_space.αmax) // Boundary of α
21:         score <- objective_function(λsample, αsample, Xtrain, Ytrain, Xtest, Ytest) //Search space
22:       append history with {λsample, αsample, score} // Save to history
23:   end for
24:    // Start the Bayesian optimization iteration
25:   For iter <- 1 to n_iter do
26:       gp_model <- train_gaussian_process(history.params, history.scores) // Construct agent
27:        // Use the collection function to select the next sampling point
             next_point <- ucb_acquisition_function(gp_model, param_space)
28:           // Use the UCB function for exploitation and exploration
           function ucb_acquisition_function(gp_model, param_space, κ = 1.96) // 95% CI
29:              max_ucb <- −∞ // Explore from the minimum
30:              best_point <- None // underexploitation
31:             for λsample in param_space.lambda_range do
32:               for αsample in param_space.alpha_range do
33:                 μ, σ <- gp_model.predict([λsample, αsample])
34:                  ucb_value <- μ + κ* σ // Construct the UCB function
35:                  if ucb_value > max_ucb then // Explore the maximum of UCB
36:                   max_ucb <- ucb_value
37:                    best_point <- [λsample, αsample] // Mark the optimal point
38:                 end if
39:               end for
40:             end for
41:             return best_point
42:           end function
43:        λnext <- best_point[1] // Record new λ points separately
44:        αnext <- best_point[2] // Record new α points separately
45:         // Evaluate New point performance
             score_next <- objective_function(λnext, αnext, Xtrain, Ytrain, Xtest, Ytest)
46:       append history with {λnext, αnext, score_next} // Save to history
47:   end for
48:    // Returns the optimal combination of hyperparameters
49:    best_index <- find_max(history.scores) // Catch maximum value of history combination
50:    λbest <- history[best_index].lambda // Catch optimal λ
51:    αbest <- history[best_index].alpha // Catch optimal α
52:    best_score <- history[best_index].score // Store optimal combination
53:   return(λbest, αbest, best_score)
54:end function
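A compact, runnable sketch of Algorithm A5 in Python, using scikit-learn's Gaussian process as the surrogate and a grid-based UCB acquisition (evaluating the acquisition over a candidate grid replaces the nested loops of the pseudocode; the toy objective and all names are illustrative):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def bayes_opt_ucb(objective, lam_grid, alpha_grid,
                  init_points=5, n_iter=15, kappa=1.96, seed=0):
    """Maximize objective(lambda, alpha) over a candidate grid via GP + UCB."""
    rng = np.random.default_rng(seed)
    grid = np.array([[l, a] for l in lam_grid for a in alpha_grid])
    # Initialization: random design points
    idx = rng.choice(len(grid), size=init_points, replace=False)
    X = grid[idx]
    y = np.array([objective(l, a) for l, a in X])
    for _ in range(n_iter):
        # Surrogate model; small alpha jitter keeps the kernel matrix well-conditioned
        gp = GaussianProcessRegressor(normalize_y=True, alpha=1e-6).fit(X, y)
        mu, sigma = gp.predict(grid, return_std=True)
        nxt = grid[np.argmax(mu + kappa * sigma)]  # UCB acquisition (kappa ~ 95% CI)
        X = np.vstack([X, nxt])
        y = np.append(y, objective(*nxt))
    best = int(np.argmax(y))
    return X[best, 0], X[best, 1], y[best]  # lambda*, alpha*, best score

# Toy objective with a known optimum at (0.3, 0.5)
f = lambda lam, a: -((lam - 0.3) ** 2 + (a - 0.5) ** 2)
lam_best, alpha_best, score = bayes_opt_ucb(f, np.linspace(0, 1, 11), np.linspace(0, 1, 11))
```

In the actual framework the objective would be the ρ(γ) of Algorithm A4 evaluated on held-out data rather than this analytic toy function.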
Algorithm A6: Calculator of BENR
Input: λbest, αbest, Xtrain, Ytrain, Xtest, Ytest
Output: Predictions value, RMSE, R2
55:function calculator_of_BENR(Xtrain, Ytrain, Xtest, Ytest)
56:    // Use optimal hyperparameters to train the final ENR
      final_model <- train_elastic_net(Xtrain, Ytrain, alpha = αbest, lambda = λbest)
57:    // Generate predictions and calculate final performance on the test set
      final_predictions <- predict(final_model, Xtest)
58:      final_RMSE <- sqrt(mean((Ytest − final_predictions)^2))
59:      final_R2 <- 1 − (sum((Ytest − final_predictions)^2)/sum((Ytest − mean(Ytest))^2))
60:    // Output final model performance
61:   print("Predictions", final_predictions)
62:   print("RMSE", final_RMSE)
63:   print("R2", final_R2)
64:   return(final_model, final_RMSE, final_R2)
65:end function
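Algorithm A6 can be mirrored in Python with scikit-learn; note the naming swap relative to glmnet: sklearn's `alpha` corresponds to glmnet's penalty intensity λ, and `l1_ratio` to glmnet's mixing parameter α. The synthetic data below are illustrative only:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def calculator_of_benr(lam_best, alpha_best, Xtrain, Ytrain, Xtest, Ytest):
    """Refit the ENR at the optimized hyperparameters and report RMSE and R^2."""
    model = ElasticNet(alpha=lam_best, l1_ratio=alpha_best).fit(Xtrain, Ytrain)
    pred = model.predict(Xtest)
    rmse = float(np.sqrt(np.mean((Ytest - pred) ** 2)))
    r2 = 1.0 - np.sum((Ytest - pred) ** 2) / np.sum((Ytest - np.mean(Ytest)) ** 2)
    return model, rmse, float(r2)

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.5, 0.0, -2.0, 0.0, 0.5]) + 0.1 * rng.normal(size=100)
model, rmse, r2 = calculator_of_benr(0.05, 0.5, X[:70], y[:70], X[70:], y[70:])
```

The (λbest, αbest) pair would come from the Bayesian optimization of Algorithm A5 rather than being fixed as here.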

Appendix B

Figure A1. The BENR of the “portable”.
Figure A2. The BENR of the “upscale”.
Figure A3. The BENR of the “friendly”.

References

  1. Oygür, I. The machineries of user knowledge production. Des. Stud. 2018, 54, 23–49. [Google Scholar] [CrossRef]
  2. Benabdellah, A.C.; Zekhnini, K.; Bag, S.; Gupta, S.; Jabbour, A. Designing a collaborative product development process from a knowledge management perspective. J. Knowl. Manag. 2024, 28, 2383–2412. [Google Scholar] [CrossRef]
  3. Tošenovský, F. A Space of Apt Product Designs Based on Market Information. Appl. Sci. 2024, 14, 8771. [Google Scholar] [CrossRef]
  4. Qiu, K.; Su, J.; Zhang, X.; Yang, W. Evaluation and Balance of Cognitive Friction: Evaluation of Product Target Image Form Combining Entropy and Game Theory. Symmetry 2020, 12, 1398. [Google Scholar] [CrossRef]
  5. Yang, C.X.; Liu, F.; Ye, J.N. A product form design method integrating Kansei engineering and diffusion model. Adv. Eng. Inf. 2023, 57, 102058. [Google Scholar] [CrossRef]
  6. Liu, Z.H.; Wu, J.F.; Chen, Q.P.; Hu, T. An improved Kansei engineering method based on the mining of online product reviews. Alex. Eng. J. 2023, 65, 797–808. [Google Scholar] [CrossRef]
  7. Wang, C.C.; Yang, C.H.; Wang, C.S.; Chang, T.R.; Yang, K.J. Feature recognition and shape design in sneakers. Comput. Ind. Eng. 2016, 102, 408–422. [Google Scholar] [CrossRef]
  8. Li, Y.H.; Wang, W.W.; Yue, S.T.; Wang, J.M.; Lei, B. A new product development method to incorporating customer sensory preferences in food product design. Adv. Eng. Inf. 2024, 62, 102769. [Google Scholar] [CrossRef]
  9. Tang, W.Y.; Xiang, Z.R.; Ding, T.C.; Zhao, X.; Zhang, Q.; Zou, R. Research on multi-objective optimisation of product form design based on kansei engineering. J. Eng. Des. 2024, 35, 1023–1048. [Google Scholar] [CrossRef]
  10. Li, Z.; Tian, Z.G.; Wang, J.W.; Wang, W.M.; Huang, G.Q. Dynamic mapping of design elements and affective responses: A machine learning based method for affective design. J. Eng. Des. 2018, 29, 358–380. [Google Scholar] [CrossRef]
  11. Cok, V.; Vlah, D.; Povh, J. Methodology for mapping form design elements with user preferences using Kansei engineering and VDI. J. Eng. Des. 2022, 33, 144–170. [Google Scholar] [CrossRef]
  12. Zhong, D.; Fan, J.; Yang, G. Knowledge management of product design: A requirements-oriented knowledge management framework based on Kansei engineering and knowledge map. Adv. Eng. Inf. 2022, 52, 101541. [Google Scholar] [CrossRef]
  13. Wang, Z.; Liu, W.D.; Yang, M.L.; Han, D.J. A Multi-Objective Evolutionary Algorithm Model for Product Form Design Based on Improved SPEA2. Appl. Sci. 2019, 9, 2944. [Google Scholar] [CrossRef]
  14. Benaissa, B.; Kobayashi, M. The consumers’ response to product design: A narrative review. Ergonomics 2023, 66, 791–820. [Google Scholar] [CrossRef] [PubMed]
  15. Zhang, Q.; Liu, Z.; Yang, B.; Wang, C. Product Styling Cognition Based on Kansei Engineering Theory and Implicit Measurement. Appl. Sci. 2023, 13, 9577. [Google Scholar] [CrossRef]
  16. Lin, F.F.; Xu, W.N.; Li, Y.; Song, W. Exploring the Influence of Object, Subject, and Context on Aesthetic Evaluation through Computational Aesthetics and Neuroaesthetics. Appl. Sci. 2024, 14, 7384. [Google Scholar] [CrossRef]
  17. Acar, E.; Bayrak, G.; Jung, Y.; Lee, I.; Ramu, P.; Ravichandran, S.S. Modeling, analysis, and optimization under uncertainties: A review. Struct. Multidiscip. Optim. 2021, 64, 2909–2945. [Google Scholar] [CrossRef]
  18. Flake, J.K.; Fried, E.I. Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them. Adv. Methods Pract. Psychol. Sci. 2020, 3, 456–465. [Google Scholar] [CrossRef]
  19. Jebb, A.T.; Ng, V.; Tay, L. A Review of Key Likert Scale Development Advances: 1995–2019. Front. Psychol. 2021, 12, 637547. [Google Scholar] [CrossRef]
  20. Sung, Y.T.; Wu, J.S. The Visual Analogue Scale for Rating, Ranking and Paired-Comparison (VAS-RRP): A new technique for psychological measurement. Behav. Res. Methods 2018, 50, 1694–1715. [Google Scholar] [CrossRef]
  21. Schwarz, N.; Knäuper, B.; Hippler, H.-J.; Noelle-Neumann, E.; Clark, L. Rating scales numeric values may change the meaning of scale labels. Public Opin. Q. 1991, 55, 570–582. [Google Scholar] [CrossRef]
  22. Medvedev, O.N.; Roemer, A.; Krageloh, C.U.; Sandham, M.H.; Siegert, R.J. Enhancing the precision of the Positive and Negative Affect Schedule (PANAS) using Rasch analysis. Curr. Psychol. 2023, 42, 1554–1563. [Google Scholar] [CrossRef]
  23. Fukuda, S. Emotional Engineering: Service Development; Springer: London, UK, 2011; pp. 289–310. [Google Scholar] [CrossRef]
  24. Chanyachatchawan, S.; Yan, H.B.; Sriboonchitta, S.; Huynh, V.N. A linguistic representation based approach to modelling Kansei data and its application to consumer-oriented evaluation of traditional products. Knowl.-Based Syst. 2017, 138, 124–133. [Google Scholar] [CrossRef]
  25. Guo, F.; Hu, M.C.; Duffy, V.G.; Shao, H.; Ren, Z.G. Kansei evaluation for group of users: A data-driven approach using dominance-based rough sets. Adv. Eng. Inf. 2021, 47, 101241. [Google Scholar] [CrossRef]
  26. Lou, S.H.; Feng, Y.X.; Li, Z.W.; Zheng, H.; Tan, J.R. An integrated decision-making method for product design scheme evaluation based on cloud model and EEG data. Adv. Eng. Inf. 2020, 43, 101028. [Google Scholar] [CrossRef]
  27. Kang, X.H. Combining rough set theory and support vector regression to the sustainable form design of hybrid electric vehicle. J. Clean. Prod. 2021, 304, 127137. [Google Scholar] [CrossRef]
  28. Dong, Y.F.; Zhu, R.Z.; Peng, W.; Tian, Q.H.; Guo, G.; Liu, W.R. A fuzzy mapping method for Kansei needs interpretation considering the individual Kansei variance. Res. Eng. Des. 2021, 32, 175–187. [Google Scholar] [CrossRef]
  29. Wang, T.X.; Zhou, M.Y. A method for product form design of integrating interactive genetic algorithm with the interval hesitation time and user satisfaction. Int. J. Ind. Ergon. 2020, 76, 102901. [Google Scholar] [CrossRef]
  30. Jing, L.T.; Zhang, H.Y.; Dou, Y.B.; Feng, D.; Jia, W.Q.; Jiang, S.F. Conceptual design decision-making considering multigranularity heterogeneous evaluation semantics with uncertain beliefs. Expert Syst. Appl. 2024, 244, 122963. [Google Scholar] [CrossRef]
  31. Chan, K.Y.; Kwong, C.K.; Wongthongtham, P.; Jiang, H.M.; Fung, C.K.Y.; Abu-Salih, B.; Liu, Z.X.; Wong, T.C.; Jain, P. Affective design using machine learning: A survey and its prospect of conjoining big data. Int. J. Comput. Integr. Manuf. 2020, 33, 645–669. [Google Scholar] [CrossRef]
  32. Wang, J.Y.; Allebach, J. Automatic Assessment of Online Fashion Shopping Photo Aesthetic Quality. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 2915–2919. [Google Scholar] [CrossRef]
Figure 1. An example of cloud model generator.
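The forward normal cloud generator of Figure 1 can be sketched as follows. This is a minimal illustration of the standard algorithm; the function name and the example parameters (Ex = 78.05, En = 4.10, He = 0.24, echoing the magnitudes reported later in the case study) are assumptions for demonstration only.

```python
import math
import random

def forward_cloud_generator(ex, en, he, n, seed=None):
    """Forward normal cloud generator: produce n cloud droplets (x, membership).

    ex: expectation; en: entropy; he: hyper-entropy (dispersion of the entropy).
    """
    rng = random.Random(seed)
    droplets = []
    for _ in range(n):
        # Sample an entropy En' ~ N(En, He^2); He controls the cloud's thickness.
        en_i = rng.gauss(en, he)
        # Droplet position x ~ N(Ex, En'^2).
        x = rng.gauss(ex, abs(en_i))
        # Certainty degree of x under the sampled entropy.
        mu = math.exp(-(x - ex) ** 2 / (2 * en_i ** 2)) if en_i else 1.0
        droplets.append((x, mu))
    return droplets

# Illustrative parameters (assumed, not taken verbatim from the paper).
drops = forward_cloud_generator(78.05, 4.10, 0.24, 2000, seed=1)
```

Plotting the droplets (x, mu) reproduces the bell-shaped, fuzzy-edged cloud of Figure 1; the spread of membership values at a fixed x grows with He.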
Figure 2. The information processor of cloud model with Z-numbers.
Figure 3. Questionnaire design of CMZ.
Figure 4. Bayesian elastic net regression information processor.
Figure 5. A novel CMZ-BENR enhanced KE process. Stage-I (green) is affective words data acquisition and preprocessing, including online review crawling, affective word extraction, synonym expansion, and market-sample image collection. Stage-II (blue) is CMZ-enhanced clustering and prototype generation, where cloud model with Z-number similarity refines K-means results and produces representative product forms. Stage-III (yellow) is reliability and uncertainty affective evaluation, employing CMZ-augmented semantic differential questionnaires to capture both sentiment scores and confidence levels. Stage-IV (orange) is BENR for affective prediction, integrating multi-source CMZ data into a symmetrically regularized regression model to infer final morphological design knowledge.
Figure 6. Inputs, processes, and outputs of stage-I.
Figure 7. Inputs, processes, and outputs of stage-II.
Figure 8. Inputs, processes, and outputs of stage-III.
Figure 9. Inputs, processes, and outputs of stage-IV.
Figure 10. Focusing of representative affective words based on Word2Vec similarity distance.
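The Word2Vec-based focusing of Figure 10 reduces, within each cluster, to comparing cosine similarities between word vectors. A self-contained sketch follows; the toy 3-dimensional embeddings are hypothetical values standing in for trained Word2Vec vectors, and `most_representative` is an illustrative helper, not the paper's exact selection rule.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def most_representative(cluster_vectors):
    """Return the word with the highest mean similarity to the rest of its cluster."""
    best_word, best_score = None, -2.0
    for word, vec in cluster_vectors.items():
        sims = [cosine_similarity(vec, other)
                for name, other in cluster_vectors.items() if name != word]
        mean_sim = sum(sims) / len(sims)
        if mean_sim > best_score:
            best_word, best_score = word, mean_sim
    return best_word

# Toy embeddings for one affective-word cluster (hypothetical values).
cluster = {
    "clear": [0.90, 0.10, 0.10],
    "clean": [0.85, 0.15, 0.05],
    "tidy": [0.80, 0.20, 0.10],
}
```

With these toy vectors, the word closest on average to its cluster mates is chosen as the cluster's representative descriptor.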
Figure 11. The information base of the initial product images after pre-processing.
Figure 12. Improvement of the K-means clustering result with CMZ.
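One way to picture the CMZ refinement of Figure 12 is reassigning borderline samples by a distance between clouds. The distance below is a simple illustrative stand-in, not the paper's exact CMZ similarity measure (which also carries reliability information), and the cluster-center clouds are hypothetical.

```python
import math

def cloud_distance(c1, c2):
    """Euclidean distance over the digital features (Ex, En, He) of two clouds.

    An illustrative stand-in for the CMZ similarity used to refine the
    K-means assignment.
    """
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def refine_assignment(sample_cloud, centers):
    """Reassign a sample cloud to the nearest cluster-center cloud."""
    return min(centers, key=lambda name: cloud_distance(sample_cloud, centers[name]))

# Hypothetical cluster-center clouds for two product-form clusters.
centers = {"cluster_A": (62.0, 4.0, 0.30), "cluster_B": (30.0, 4.1, 0.25)}
sample = (58.5, 3.9, 0.31)  # a borderline sample from the initial K-means result
```

Here `refine_assignment(sample, centers)` places the borderline sample with cluster_A, the center at the smaller cloud distance.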
Figure 13. Experimental reconstruction of the morphological elements.
Figure 14. BENR optimization for the affective word “clear” as an example. (a) Score vs. optimization epoch. Red dots mark local-optimum trials, blue dots mark non-optimal trials. (b) Utility vs. optimization epoch, tracking how the acquisition function value evolves over successive trials. (c) Change in performance score over all iterations. The exploration phase (blue background) and exploitation phase (red background) are shaded, with the best iteration indicated by the dashed green line. (d) Hyperparameter space exploration in the (α, λ) plane. The point color encodes score, and the selected optimum (α = 0.682, λ = 0.032) is highlighted in green. (e) Scatter plot of actual vs. predicted values with reference line. The fitted regression reports R² = 0.763 and RMSE = 5.199.
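Figure 14 traces a Bayesian optimization of the BENR hyperparameters over the (α, λ) plane. As a simplified stand-in, the search loop can be sketched with plain random search (real Bayesian optimization would propose points via an acquisition function, as in panel (b)); `toy_score` is a hypothetical objective peaking near the reported optimum, not the paper's cross-validated score.

```python
import math
import random

def random_search(objective, n_trials=50, seed=0):
    """Maximize objective(alpha, lam) over the (alpha, lambda) plane.

    Plain random search as a simplified stand-in for the Bayesian
    optimization loop, which would propose points via an acquisition function.
    """
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        alpha = rng.uniform(0.0, 1.0)   # L1/L2 mixing ratio
        lam = 10 ** rng.uniform(-3, 0)  # penalty strength on a log scale
        score = objective(alpha, lam)
        if score > best_score:
            best_params, best_score = (alpha, lam), score
    return best_params, best_score

# Hypothetical smooth objective peaking near (0.7, 0.03), loosely echoing the
# reported optimum (alpha = 0.682, lambda = 0.032); not the real CV score.
def toy_score(alpha, lam):
    return 1.0 - (alpha - 0.7) ** 2 - 0.1 * (math.log10(lam) + 1.5) ** 2
```

Searching λ on a log scale mirrors panel (d), where penalty strengths span several orders of magnitude.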
Figure 15. Morphological design knowledge of each affective word (asterisks denote * p < 0.05, ** p < 0.01, *** p < 0.001). (a) “clear”: standardized coefficients for Top1–4, Lateral1–5 and Penetration1–3 variables; partial correlation coefficients and rank order (r = 0.800 *** rank 1; r = 0.641 *** rank 2; r = 0.515 ** rank 3); model constant = 47.272, α = 0.682, λ = 0.032, R² = 0.763, RMSE = 5.199. (b) “portable”: coefficients for the same morphological variables; partial correlations (r = 0.573 *** rank 1; r = 0.415 * rank 2; r = 0.406 * rank 3); constant = 35.273, α = 0.756, λ = 0.008, R² = 0.479, RMSE = 2.924. (c) “upscale”: variable coefficients; partial correlations (r = 0.683 *** rank 1; r = 0.665 *** rank 2; r = 0.331 rank 3); constant = 34.145, α = 0.756, λ = 0.008, R² = 0.632, RMSE = 2.019. (d) “friendly”: coefficients for Top, Lateral and Penetration dimensions; partial correlations (r = 0.698 *** rank 1; r = 0.581 *** rank 2; r = 0.400 * rank 3); constant = 36.439, α = 1.000, λ = 0.002, R² = 0.605, RMSE = 3.636.
Figure 16. Morphological design knowledge through affective coupling.
Figure 17. Representation of different evaluation information. (a) Histogram of “uncertainty” with overlaid fitted density (dashed blue) and expected value Ex = 1.7 (vertical line). (b) Histogram of “reliability” with overlaid fitted density (dashed red) and expected value Ex = 9.1 (vertical line). (c) Normal cloud model for “uncertainty” in the discourse universe U, showing cloud droplets and the membership curve (Ex = 78.049, En = 4.103). (d) Z-number cloud model combining “uncertainty” (blue: Ex = 78.049, En = 4.103, He = 0.241) and “reliability” (red: Ex = 79.219, En = 2.134, He = 0.241).
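Panels (a)–(c) of Figure 17 turn questionnaire score distributions into the cloud digital features (Ex, En, He). A minimal sketch of the standard backward normal cloud generator (the variant without certainty degree) follows; the sample data are hypothetical questionnaire scores, not the study's data.

```python
import math

def backward_cloud_generator(samples):
    """Estimate the digital features (Ex, En, He) of a normal cloud from data.

    Standard backward generator without certainty degree:
      Ex = sample mean,
      En = sqrt(pi/2) * mean absolute deviation from Ex,
      He = sqrt(max(S^2 - En^2, 0)), with S^2 the sample variance.
    """
    n = len(samples)
    ex = sum(samples) / n
    mad = sum(abs(x - ex) for x in samples) / n
    en = math.sqrt(math.pi / 2) * mad
    s2 = sum((x - ex) ** 2 for x in samples) / (n - 1)
    he = math.sqrt(max(s2 - en ** 2, 0.0))
    return ex, en, he

# Hypothetical questionnaire scores on a 0-100 discourse universe.
ex, en, he = backward_cloud_generator([76, 78, 80, 77, 79, 78, 82, 74, 78, 79])
```

Applying this to the paired “uncertainty” and “reliability” responses yields the two clouds of a Z-number, as in panel (d).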
Table 1. The information base of the initial affective words.
Affective Words Related to the Shape of Domestic Cleaning Robot
concise, simple, minimalistic, elegant, majestic, flexible, lightweight, streamlined, convenient, ergonomic, exquisite, delicate, subtle, luxury, premium, upscale, noble, textured, versatile, fashionable, unique, technological, smart, innovative, inviting, kind, friendly, welcoming, spacious, bulky, solid, rounded, full, refreshing, compact, fresh, light, soft, presentable, tidy, clean, aesthetic, smooth, …, harmonious
The initial affective words are listed in no particular order.
Table 2. The MGLTSs and MGLSFs of the affective word “clear” (example).

Sample | Expert 1: MGLTSs (s_i^l) | Expert 1: MGLSFs (θ_i^l) | Expert 7: MGLTSs (s_i^l) | Expert 7: MGLSFs (θ_i^l)
1 | [(s_3^1, s_5^2), (s_0^1, s_6^2)] | [(0.71, 0.50), (0.00, 0.55)] | [(s_3^1, s_5^2), (s_0^1, s_7^2)] | [(0.71, 0.50), (0.00, 0.61)]
2 | [(s_2^1, s_8^2), (s_1^1, s_7^2)] | [(0.50, 0.71), (0.29, 0.61)] | [(s_3^1, s_5^2), (s_0^1, s_8^2)] | [(0.71, 0.50), (0.00, 0.71)]
3 | [(s_3^1, s_7^2), (s_0^1, s_6^2)] | [(0.71, 0.61), (0.00, 0.55)] | [(s_2^1, s_5^2), (s_0^1, s_5^2)] | [(0.50, 0.50), (0.00, 0.50)]
49 | [(s_1^1, s_6^2), (s_3^1, s_7^2)] | [(0.29, 0.55), (0.71, 0.61)] | [(s_2^1, s_5^2), (s_2^1, s_7^2)] | [(0.50, 0.50), (0.50, 0.61)]
50 | [(s_1^1, s_7^2), (s_3^1, s_8^2)] | [(0.29, 0.61), (0.71, 0.71)] | [(s_2^1, s_7^2), (s_3^1, s_5^2)] | [(0.50, 0.61), (0.71, 0.50)]
According to the compound 2-tuple structure built on the concepts of “belonging” and “not-belonging” in Z-numbers, MGLTS = [(A_belonging, B_belonging), (A_not-belonging, B_not-belonging)], where A_belonging and A_not-belonging take values from the linguistic term set {s_0^1, s_1^1, s_2^1, s_3^1, s_4^1, s_5^1} describing the fuzzy constraint, while B_belonging and B_not-belonging take values from {s_0^2, s_1^2, s_2^2, s_3^2, s_4^2, s_5^2, s_6^2, s_7^2, s_8^2, s_9^2, s_10^2, s_11^2} describing the reliability measure.
Table 3. The CMZ of the affective word “clear” (example).

Sample | CMZ_belonging | CMZ_not-belonging
1 | [(62.61, 3.92, 0.30), (56.47, 1.09, 0.58)] | [(28.74, 4.04, 0.26), (51.51, 0.87, 0.66)]
2 | [(61.11, 3.86, 0.32), (60.39, 1.26, 0.53)] | [(32.31, 4.01, 0.27), (54.96, 1.02, 0.60)]
3 | [(64.68, 3.89, 0.31), (54.53, 0.99, 0.62)] | [(18.02, 4.14, 0.23), (53.84, 0.96, 0.63)]
49 | [(48.49, 3.58, 0.41), (55.35, 1.03, 0.60)] | [(53.01, 3.64, 0.39), (53.02, 0.93, 0.64)]
50 | [(45.48, 3.70, 0.37), (56.08, 1.11, 0.58)] | [(59.04, 3.89, 0.31), (57.37, 1.19, 0.55)]
Table 4. The morphological coding information of the experimental samples.

Card ID | Top (Rec, Cir, Arc, She) | Lateral (Rec, Cir, Larc, Uarc, Tra) | Penetration (Non, Rec, Cir)
ID 1 | 1 0 0 0 | 1 0 0 0 0 | 1 0 0
ID 2 | 1 0 0 0 | 0 1 0 0 0 | 1 0 0
ID 3 | 1 0 0 0 | 0 0 1 0 0 | 1 0 0
ID 35 | 0 0 0 1 | 0 0 1 0 0 | 0 0 1
ID 36 | 0 0 0 1 | 0 0 0 1 0 | 0 0 1
Morphological elements are abbreviated as follows: “Rec” rectangle, “Cir” circle, “Arc” arch, “Larc” lower arch, “Uarc” upper arch, “She” shell, “Tra” trapezoid, “Non” none. Dummy variables are coded “1” if the element is present and “0” otherwise.
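The dummy coding of Table 4 can be sketched as a one-hot encoding per morphological dimension; the 4/5/3 column grouping (Top1–4, Lateral1–5, Penetration1–3) follows the variable names used in Figure 15, and the helper names below are illustrative.

```python
# One-hot (dummy) coding of the morphological elements, following Table 4.
TOP = ["Rec", "Cir", "Arc", "She"]
LATERAL = ["Rec", "Cir", "Larc", "Uarc", "Tra"]
PENETRATION = ["Non", "Rec", "Cir"]

def one_hot(value, categories):
    """1 in the position of the matched category, 0 elsewhere."""
    return [1 if value == c else 0 for c in categories]

def encode_sample(top, lateral, penetration):
    """Return the 12-bit dummy vector for one experimental card."""
    return (one_hot(top, TOP)
            + one_hot(lateral, LATERAL)
            + one_hot(penetration, PENETRATION))

# ID 1: rectangular top, rectangular lateral, no penetration.
vec = encode_sample("Rec", "Rec", "Non")
```

Each card is thus a binary row vector, which becomes the design matrix for the regression stage.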
Table 5. The score function of the affective word “clear” (example).

Card ID | CMZ_SD | ĈS(A) | ĈS(B) | Γ_CMZ
ID 1 | [(78.05, 4.10, 0.24), (79.22, 2.13, 0.24)] | 57.75 | 57.76 | 68.43
ID 2 | [(63.44, 3.99, 0.28), (67.26, 1.70, 0.27)] | 44.07 | 46.77 | 49.54
ID 3 | [(70.02, 4.01, 0.29), (72.26, 1.83, 0.27)] | 50.20 | 51.61 | 57.74
ID 35 | [(36.17, 3.95, 0.31), (67.50, 1.73, 0.31)] | 25.77 | 46.87 | 38.07
ID 36 | [(31.89, 3.95, 0.31), (71.14, 1.88, 0.31)] | 23.23 | 50.71 | 38.72
Table 6. Performance results of different penalized regression methods (R², RMSE, ρ(γ) per method).

Affective Word | Ridge | Lasso | ENR (α = 0.5) | BENR (This Study)
clear | 0.760, 5.232, 0.183 | 0.671, 6.119, 0.155 | 0.720, 5.642, 0.169 | 0.763, 5.199, 0.184
portable | 0.394, 3.155, 0.266 | 0.431, 3.056, 0.276 | 0.453, 2.996, 0.282 | 0.479, 2.924, 0.290
upscale | 0.594, 2.123, 0.395 | 0.616, 2.063, 0.409 | 0.602, 2.100, 0.400 | 0.632, 2.019, 0.419
friendly | 0.569, 3.798, 0.236 | 0.603, 3.644, 0.247 | 0.600, 3.656, 0.247 | 0.605, 3.636, 0.248
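The ℓ1/ℓ2 mixing compared in Table 6 can be illustrated with a naive coordinate-descent elastic net. This is a didactic sketch (plain Python, no intercept, pre-centered toy data), not the BENR procedure itself, which places priors on the penalty and selects (α, λ) by Bayesian optimization; all names here are illustrative.

```python
def soft_threshold(rho, t):
    """Soft-thresholding operator contributed by the L1 part of the penalty."""
    if rho > t:
        return rho - t
    if rho < -t:
        return rho + t
    return 0.0

def elastic_net_cd(X, y, alpha, lam, n_iter=200):
    """Naive coordinate-descent elastic net (no intercept; pre-centered data).

    Minimizes (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1 + (1 - alpha)/2*||b||_2^2),
    so alpha = 1 gives the lasso and alpha = 0 gives ridge.
    """
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of feature j with the partial residual (b_j held out).
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j))
                      for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n + lam * (1 - alpha)
            b[j] = soft_threshold(rho, lam * alpha) / z
    return b

# Toy centered data where y depends only on the first feature (y = 2*x1).
X = [[1, 0], [0, 1], [1, 1], [-1, 0], [0, -1], [-1, -1]]
y = [2, 0, 2, -2, 0, -2]
coef = elastic_net_cd(X, y, alpha=0.5, lam=0.01)
```

The L1 term drives irrelevant coefficients toward zero (variable selection) while the L2 term keeps correlated coefficients stable, which is the balance the α hyperparameter tunes in Table 6.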
Share and Cite

Lin, H.; Wang, P.; Liu, J.; Chu, C. Integrating Reliability, Uncertainty, and Subjectivity in Design Knowledge Flow: A CMZ-BENR Augmented Framework for Kansei Engineering. Symmetry 2025, 17, 758. https://doi.org/10.3390/sym17050758