Article

Research on Decision Making of Design for Coupling System Based on Complexity Composite

1 School of Mechanical Engineering, Hebei University of Technology, Tianjin 300401, China
2 National Engineering Research Center for Technological Innovation Method and Tool, Hebei University of Technology, Tianjin 300401, China
3 Yueqing Institute of Technological Innovation, Yueqing 325600, China
4 Hebei Hanguang Heavy Industry Co., Ltd., Handan 056000, China
* Author to whom correspondence should be addressed.
Processes 2022, 10(3), 542; https://doi.org/10.3390/pr10030542
Submission received: 23 January 2022 / Revised: 4 March 2022 / Accepted: 7 March 2022 / Published: 11 March 2022
(This article belongs to the Special Issue Advances in Digital Design and Manufacturing)

Abstract

As user demands continuously rise and systems exhibit increasingly pronounced complexity characteristics, such as strong coupling and nonlinearity, decision making in conceptual design faces a new class of problems. Existing conceptual design decision-making methods, such as the traditional information axiom and the multi-criteria decision-making (MCDM) framework, cannot provide a good solution for strongly coupled, nonlinear systems. To fill these gaps, this paper proposes a decision-making method of design for a coupling system based on a complexity composite method (DMDCC). DMDCC is composed of a prediction model and a composite paradigm. In the early stage of design decision making, design-centric complexity (DCC) theory is used to preliminarily sort the schemes. Then, driven by the twin data of existing products, the complexity of each function unit is obtained using the prediction model. The complexities of the different function units are composited by the composite paradigm to obtain the total function realization probability of the design scheme. A more accurate calculation of the total function realization probability of new products can help designers judge schemes from the perspective of function and guide the decision making in new system development scientifically and reasonably. The engineering example of a bulk traditional Chinese medicine dispensing device is used to verify the scientific effectiveness of the research.

1. Introduction

With the rapid development of modern engineering technology, the conceptual design of a system is increasingly recognized as a decision-making process [1,2]. Conceptual design is the most important stage in system design, as it determines the cost, quality, and function of the subsequent life cycle. The main purpose of decision making in conceptual design is to obtain the best design scheme, i.e., the one with the highest realization probability of the system function, focusing mainly on the functional domain [3,4]. However, given decision makers' subjective tendencies toward particular design schemes, erroneous or unstable decision-making methods will affect subsequent development [5,6]. Existing decision-making methods often use a variety of evaluation indexes to judge design schemes, but some of these indexes may be coupled or in conflict, which degrades the accuracy of decision making. Therefore, proposing a scientific and reasonable method that avoids coupling and conflict between evaluation indexes is of great significance for decision making.
In order to replace the subjective judgment of decision makers, many researchers have proposed numerical evaluation methods based on the multi-criteria decision-making (MCDM) framework [7]. Moreover, various fuzzy algorithms [4,7,8,9] and grey decision-making methods [10,11] have been introduced to deepen research on the MCDM framework. However, almost all of these methods need weights to measure the relative importance of each criterion [12], and different weights produce different results [13], which puts the judgment of decision makers to the test. These methods fail to deal with conflicting relationships between evaluation criteria, nor can they reflect the extent to which each objective affects the overall design desirability. In addition, the literature reviewed above seldom considers, or discusses in depth, how to obtain accurate effect values [4,11]. Therefore, when the design expectations of different objectives conflict, the effectiveness of the decision-making results may be compromised, leading to deviations in those results.
The information axiom [14] avoids the conflicting relationships between evaluation criteria well. This method does not require the decision maker to assign weights to the evaluation indexes; it only requires the design range and system range of each index to be determined, after which the total information content of each scheme can be obtained and the best design scheme identified. The specific effect value assists designers in reducing the influence of subjective understanding on the evaluation results. However, the traditional information axiom still suffers from the randomness and fuzziness of evaluation indexes, and some scholars have extended it using fuzzy mathematics [15,16,17,18,19,20]. In the information axiom, the evaluation indexes are assumed to be mutually independent, so the total information content is obtained by adding the information content of each index linearly. However, there is correlation and coupling between the function units of a coupling system, which does not necessarily satisfy the independence axiom. For coupled function units, the real information content of the system cannot be obtained by linear addition alone, which affects the accuracy of decisions about the coupling system.
Therefore, in order to solve the above problems, this paper introduces design-centric complexity (DCC) theory to explore a decision-making method based on complexity, which provides a possible solution for the influence of the coupling among function units in a coupling system on the total information content.
In order to reduce the influence of coupling between function units on the total function realization probability of the system, a complexity composite method based on the principle of error composite is proposed to calculate the total function realization probability. On this basis, a decision-making method of design for a coupling system based on the complexity composite method (DMDCC) is established. The complexities of the system's function units are composited using the error composite principle to account for the influence of coupling between function units on the function realization probability. Taking the function realization probability as the criterion, the total function realization probability of a scheme assists designers in making more scientific and reasonable decisions. This helps companies concentrate their limited resources on the most scientific and reasonable ideas, so as to obtain the maximum profit in market competition.
In the framework proposed in this paper, firstly, the three-criteria method of DCC theory is used to sort the design schemes. Then, the complexity prediction model of the function unit is established by using the data of existing system digital twins. The complexity prediction model is used to calculate the function realization probability of the corresponding function units. Finally, using the established complexity composite paradigm, the realization probability of each function unit is composited to obtain the total function realization probability of the design scheme with coupled function units.
The main contributions of this research are as follows:
This study proposes a method for calculating the information content between coupled function units in a coupling system, which extends the traditional information axiom. This method solves the problem that the coupling indexes in traditional axiomatic design cannot be added linearly and improves the scientific rationality of screening schemes in the decision making of design.
Digital twin technology is introduced into the decision making to predict the realization probability of the function units combined with a neural network and to provide relevant twin data. This work combines digital twins with the decision making of design, and expands the digital twin’s application in the field of decision making of design.
The following sections of this paper are arranged as follows:
Section 2 introduces the relevant tools and theoretical basis for DMDCC. Section 3 describes the construction process of DMDCC. Section 4 introduces the design-centric complexity composite model. In Section 5, the decision making of the bulk traditional Chinese medicine (TCM) dispensing device is introduced to verify the effectiveness of the DMDCC. Section 6 discusses and evaluates the scientificity and superiority of this method. Section 7 summarizes the contributions and shortcomings of this method and puts forward the future research direction.

2. Theoretical Foundations

This section reviews the research methods of DCC and the application of digital twin technology. In Section 2.1, Suh’s DCC theory is introduced to determine the method of complexity research in this paper. Section 2.2 introduces the concept and application fields of digital twins as well as the purpose and significance of introducing digital twins into the design stage. Section 2.3 introduces data-driven modeling based on the BP neural network. This section is intended to provide a theoretical background to this study.

2.1. DCC Theory

Suh [21] put forward DCC theory in 1999. Design-centric complexity is based on the function realization probability in system design, and the complexity of a system is evaluated by this probability [22]. The higher the realization probability of the system function, the lower the complexity [23]; conversely, the lower the realization probability, the higher the complexity [23]. DCC theory focuses the complexity of the system on the function domain, which can greatly reduce the possibility of complexity in the physical domain [21]. Through the application of DCC theory, the function mapping relationships among user needs, system functions, and design parameters can be established. The realization probability of the system function reflects the complexity of the system function and can be expressed by the system information content. Compositing the complexity effectively helps to calculate the function realization probability of the system. The complexity of a system is a function determined by both the system range and the design range [24]; their common range is the information content contained in both. The probability density function of the functional requirements is shown in Figure 1 [21]. The classification of complexity in DCC theory is explained in Appendix A.
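To make the probability–complexity link concrete, recall the standard information content formula of axiomatic design (a textbook relation from Suh's framework [14,21], restated here for completeness rather than quoted from this section):

$$I=\log_2\frac{1}{p},\qquad p=\int_{\text{common range}} p_s(FR)\,\mathrm{d}FR$$

where $p$ is the realization probability of a functional requirement $FR$ and $p_s$ is its system probability density function. A lower realization probability thus corresponds directly to a higher information content and, in DCC terms, a higher complexity.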
DCC theory provides a method to analyze complexity at the functional level. The problem and the type of complexity can be determined using the three-criteria complexity method [25]; thus, the design schemes can be preliminarily sorted and evaluated in order to shorten the design cycle. At the same time, complexity is represented by the function realization probability in DCC theory. From the perspective of function, each system has a total function, which can be decomposed into multiple sub-functions and function units [26]. Similarly, the total function realization probability can be obtained by calculating the function realization probability of each function unit and then compositing them with the complexity composite method. This paper proposes a method of complexity composite and evolution to help calculate the function realization probability of the new system in the decision-making stage. Section 3 describes the rules for compositing system complexity in detail, thus providing comprehensive guidance for quantifying the complexity of new systems.

2.2. Digital Twins

DCC theory describes in detail how the system function realization probability is expressed. However, accurately predicting the function realization probability also requires a large amount of historical data and the participation of digital design. The design process is becoming more digitalized than ever before, as indicated by the universal application of digital design packages (e.g., CAD, FEA, CAE, and CAM) [27]. Digital twin technology is the latest research achievement in the field of digital design; it provides an effective way to connect physical space and virtual space and thus to address the problem of inaccurate complexity prediction [28,29,30].
The digital twin model was originally defined as a system-oriented, multi-physics, multi-scale, multi-probability, highly integrated simulation model that reflects the physical function, real-time status, and evolution trend of the model through twin data [31]. Recently, many scholars have supplemented and refined this concept, so that the digital twin can be used not only for system operation and maintenance [32,33,34,35] but also in the system design stage. For example, Tao proposed a five-dimension digital twin model [36], as shown in Figure 2. Reference [27] proposed a digital twin-driven product design framework and a bicycle redesign case, opening up a new direction for digital twin-driven product design. On this basis, reference [37] integrated digital twin technology with engineering innovation theory and knowledge reuse concepts to establish a systematic sustainable innovation pathway by inducing characteristic parameters. It can therefore be considered feasible to introduce digital twin technology into the proposed decision-making framework. Digital twin technology has gained prominence partly due to the uptake of Internet of Things technologies, which allow physical twins to be monitored at high spatial resolution almost in real time, through both miniature devices and remote sensing, producing ever-increasing data streams [38].
The currently proposed DMDCC monitors the running process of existing systems based on digital twin technology. In decision making, digital twin technology is used to collect the twin data of existing systems to predict the change in the realization probability of new system function units. The introduction of digital twin technology to monitor the changes in system parameters during the operation of existing systems will achieve this purpose.

2.3. Data-Driven Modeling Based on BP Neural Network

Data-driven modeling has attracted much attention in recent years because of its unmatched adaptability, accuracy, predictability, and simplicity [39]. The rise of artificial intelligence and cyber–physical systems has been pushing the boundaries of its possibilities [40,41]. The prediction results of a BP neural network have a high fitting degree and reliability [42,43,44]. Based on the above research, data-driven modeling has obvious advantages over traditional engineering correlation and mathematical modeling in establishing complex relationship models, and the BP neural network has made great contributions to this. Applying the BP neural network to the design-stage decision making in this paper can therefore be considered feasible.
The data-driven modeling in DMDCC proposed in this paper is based on a BP neural network for the physical parameters corresponding to function units. In the decision-making process, the BP neural network is used to establish the relationship between the output parameters and the input physical parameters of the function, so as to obtain the realization probability of the function unit to provide support for the complexity composite. The introduction of a BP neural network to model the relationship between physical parameters can achieve this purpose.
In summary, there is a close dependence between DCC theory, digital twins, and data-driven modeling. In DCC theory, the complexities of the function units need to be composited to calculate the realization probability of the total function. The complexity of a function unit is affected by changes in its input parameters. Therefore, to obtain the complexity of a function unit when improving an existing system, the relationship between its input and output parameters must be established. Data-driven modeling meets this need and thus serves the complexity composite. Since the accuracy of data-driven modeling depends on the quality of the input data [45], digital twin technology is introduced to monitor the changes in the existing system's parameters and obtain the corresponding data, thereby increasing the accuracy of the data-driven model.

3. Composite Paradigm of Complexity

Based on the theoretical research in Section 2, this paper proposes a method to evaluate the function realization probability of a design scheme through a complexity composite. The method includes the complexity prediction of the function units and the complexity composite of the total function; Figure 3 shows the process of prediction and composite. The complexity prediction model is constructed with a BP neural network, which completes the mapping from the input physical parameters of the function units to the output physical parameters and is driven by twin data from the digital twin of the existing system. In the new system, there are unchanged parameters and changed parameters. The realization probability of a function unit with unchanged parameters is obtained from the prediction model, while that of a function unit with changed parameters is obtained from a digital twin model or by experiment. The total function realization probability of the new system is then obtained by compositing the realization probabilities of all function units with the complexity composite paradigm. The specific design method and process are described in Section 3.1, Section 3.2 and Section 3.3.

3.1. Design Scheme Sorting Principle

When there are many design schemes, the efficiency of the evaluation process needs to be improved. This paper therefore proposes a scheme sorting principle to preliminarily sort the design schemes. The principle is as follows:
  • Time-dependent complexity should be excluded as much as possible in the improved design scheme.
  • The functional structure of the previous generation system should be changed as little as possible.
  • The design of the DPs must consider three aspects that affect the information content: robustness, signal-to-noise ratio, and response rate.
The first step is to discern the type of complexity present in each design scheme. The type of system complexity can be determined based on the three-criteria complexity type determination method in DCC theory. The system complexity determination process is shown in Figure 4.
$I$ and $C$ are concepts in DCC theory: $I$ stands for information content and is the primary index used to measure the complexity of the system and to distinguish its complexity types. However, a single value of $I$ cannot by itself distinguish the types of complexity contained in the system, so the function $C(t)$, which describes how system complexity varies with time, is defined as shown in Formula (1).
$$C(t)=\frac{1}{t}\sum_{t=0}^{t}I(t) \qquad (1)$$
According to the functional structure of the design scheme, it can be determined whether new functions are introduced. The introduction of new functions will lead to changes in the functional structure. Since the time-dependent complexity has a greater impact on the prediction accuracy of the system, the priority of the design scheme is as shown in Figure 5.
When multiple design schemes have the same priority, the robustness, signal-to-noise ratio, and response rate of the design parameters are considered comprehensively. In axiomatic design, robustness refers to the ability of the design parameters to continue meeting the functional requirements even when they vary randomly over a wide range. Design rigidity is an important index for measuring the robustness of design parameters. The relationship between the design rigidity of a scheme and the functional requirements (FRs) is shown in Figure 6.
From the analysis in Figure 6, it can be seen that, when there are no constraints, the smaller the rigidity, the smaller the impact on the FRs. Curve 1 shows a large design rigidity, and curve 2 a small one. However, the system sometimes needs a fast response rate, and too small a design rigidity will also degrade the response rate of the system. Therefore, when sorting design schemes, the three properties of the design parameters can be considered comprehensively according to the actual situation.

3.2. Complexity Prediction Model of Function Unit

Before establishing the complexity prediction model to calculate the function realization probability of the function units, DCC theory is used to sort the multiple design schemes by complexity. In order to reduce the number of iterations and improve evaluation efficiency, this priority order determines the sequence in which the total function realization probability is calculated. When establishing the prediction model, the digital twin model of the design scheme needs to be established to calculate the function realization probability; the digital twin model can simulate part of the scheme to improve the complexity judgment of that partial structure.
The probability of function realization is the most intuitive index to reflect design-centric complexity. According to the size and change of function realization probability, the complexity can be accurately reflected. The prediction of function realization probability also needs to apply back propagation (BP) neural network technology. Its purpose is to realize the prediction process by establishing the relationship between the physical parameters in the system and the relevant parameters in the design range. The complexity prediction model is driven by the twin data obtained from the digital twin monitoring parameters of the previous generation system.
The probability of function realization is mainly reflected in the overlap degree of system ranges and design ranges. The output of some function units indirectly affects the system ranges of the total function, and its influence can be expressed by the output of other function units. Therefore, in order to reduce the workload of analysis and calculation, it is not necessary to establish the complexity prediction model of all function units. Before the establishment of the complexity prediction model, it is necessary to screen the function units and determine the function units that directly affect the system ranges of total function.
Both system ranges and design ranges can be expressed in the form of “metric + value”. According to the expression of function, the carrier of metric belongs to one of the types of flow, such as energy flow, material flow, and information flow. Therefore, the function structure can be used to determine the function units that directly affect the system ranges of the total function. According to the type of flow, it extends from the output end to the input end of the function structure. The function units involved in this process are the system range function units that directly affect the total function.
There is a clear functional relationship between the input and output physical parameters of some function units; for example, the input and output parameters of a heat-generating function unit may satisfy Joule's law. In most cases, however, there is no explicit physical relationship between the output and input parameters of the function, so other mathematical methods are needed to establish the relationship model. BP neural network theory shows that a two-layer network can realize an arbitrary nonlinear mapping. Generally, when the number of training samples is small and few hidden nodes are needed, a two-layer network structure realizes the function mapping well. Increasing the number of hidden layers and nodes can improve the calculation accuracy at the cost of efficiency, and different choices can be made for specific application scenarios. This paper adopts a two-layer BP neural network.
The number of neurons in the hidden layer balances computational accuracy against computational efficiency. At present, there is no universally applicable determination method; most empirical formulas include some slack for flexible adjustment of the number of hidden nodes. This paper adopts the following formula:
$$k=\sqrt{m+n}+a \qquad (2)$$
where $k$ is the number of nodes in the hidden layer; $m$ and $n$ are the numbers of nodes in the input and output layers, respectively; and $a$ is the slack term, selected according to the actual situation.
Essentially, the complexity prediction model establishes, via the BP neural network algorithm, the relationship between the input parameters $x_i$ and the output parameter $s_i$ of a function unit in the system. The output parameters are the physical parameters in the system that reflect the function realization probability of the function unit. There are four steps, and a minimal code sketch follows the list:
  • Firstly, the function of the existing systems is analyzed, and the corresponding components are determined in the function model according to the function units determined by the function structure.
  • Then, the physical parameters corresponding to the system range-related function units are extracted.
  • Relevant twin data are obtained from the digital twin model of the previous generation systems, and the corresponding features are extracted from these twin data. These features are combined into the input $x_i$ as the training set, with the corresponding features of the new system serving as the test set.
  • The nonlinear relationship between $x_i$ and $s_i$ is established by the BP neural network.
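The sketch below illustrates steps 3 and 4 under stated assumptions: the twin data are taken to be already exported as plain numeric arrays, all names (`build_prediction_model`, `X_twin`, `s_twin`) are hypothetical placeholders rather than code from this work (which used the MATLAB neural network toolbox, see Section 5), and a single hidden layer sized by Formula (2) stands in for the two-layer BP structure.

```python
# Minimal sketch of a function-unit complexity prediction model.
# Assumptions: twin data already exported as arrays; all names illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def build_prediction_model(X_twin, s_twin, a=4):
    """Fit s = f(x1, ..., xn) with a single-hidden-layer BP-style network."""
    m, n = X_twin.shape[1], 1                 # input / output node counts
    k = int(np.sqrt(m + n)) + a               # hidden nodes, Formula (2)
    scaler = StandardScaler().fit(X_twin)
    model = MLPRegressor(hidden_layer_sizes=(k,), activation="tanh",
                         solver="lbfgs", max_iter=5000, random_state=0)
    model.fit(scaler.transform(X_twin), s_twin)
    return model, scaler

# Train on twin data of the existing system, then predict for the new system.
rng = np.random.default_rng(0)
X_twin = rng.normal(size=(200, 3))            # three monitored physical parameters
s_twin = 0.9 + 0.05 * np.tanh(X_twin @ np.array([0.5, -0.3, 0.2]))  # synthetic efficiency
model, scaler = build_prediction_model(X_twin, s_twin)
X_new = rng.normal(size=(5, 3))               # features of the new system
print(model.predict(scaler.transform(X_new))) # predicted output efficiencies
```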
The function unit complexity prediction model can be expressed as follows:
$$s=f(x_1,x_2,\ldots,x_n) \qquad (3)$$
According to the relationship between function and physical parameters, four types of function unit changes are determined:
  • The structure does not change;
  • The structure changes, but only the values of the physical parameters change;
  • The structure changes, and the attributes of the physical parameters also change, but the principle solution does not change;
  • The principle solution changes.
The above physical parameters refer to the parameters within the system, excluding those of other products and the supersystem. For the first two types of function units in the design scheme, the complexity prediction model can be used directly to calculate the function realization probability. The latter two types require simulation verification or experimental analysis with the digital twin model to obtain the function realization probability. The process of establishing the prediction model is shown in Figure 7.

3.3. Composite Paradigm of Total Functional Complexity

The total function realization probability is a criterion for the decision making of design. However, only the function realization probability of the function unit is obtained by the complexity prediction model. Therefore, a complexity composite method is needed to calculate the function realization probability of the total function. Based on error theory and DCC theory, this paper proposes a complexity composite paradigm.
Error can be described as the disparity between measured data and the true value, arising from imperfect experimental methods or equipment and from the influence of the surrounding environment. In error theory, errors are categorized into systematic errors and random errors. In practice, most random errors follow a normal distribution. Systematic errors are not compensatory; they are fixed or obey certain functional laws. According to their characteristics, systematic errors can be divided into five types: constant, linear, nonlinear, periodic, and complex. The relationship between the five types of systematic error Δ and the measurement process t is shown in Figure 8, where t represents time; the change in systematic error over the course of a measurement follows a certain law. Curves a–e represent the variation in the five types of systematic error in turn.
If systematic and random errors exist at the same time, the error characteristics are as shown in Figure 9, where the abscissa is the measured value and the ordinate is the distribution density of the measured value after multiple measurements. According to the above definition and characteristics of errors, there is a certain similarity between error and complexity in DCC theory; the comparison is specified in Appendix A. In error theory, error composite is defined as follows: based on the correct analysis and composite of the various error factors in a system, the comprehensive influence of these errors on the system is correctly expressed. By analogy, this paper defines complexity composite as follows: the complexities of the different function units in the system are composited into an overall complexity so as to determine its impact on the realization probability of the system's total function.
In a practical engineering system, there are many function units, each with a different type of complexity or a different function realization probability. Designers are therefore required to composite the different complexities to obtain the total function realization probability, so that decisions can be made by comparing this probability across schemes. Since error has the same characteristics as complexity in DCC theory, the complexities of the function units are composited according to the error composite method, which yields the system range of the total function. The deviation in each function unit's functional requirement can be determined by the complexity prediction model. Supposing that the system has $n$ function units and that the function requirement value of the $i$-th function unit is $z_i\ (i=1,2,\ldots,n)$, the relationship between the total function requirement value $y$ and the $z_i$ can be expressed as:
$$y=f(z_1,z_2,\ldots,z_n) \qquad (4)$$
By analyzing the characteristic values of the design scheme and using the complexity prediction model, the deviation value of each function unit can be obtained. The deviation in the total function requirement value is expressed as follows:
$$\Delta y=\frac{\partial f}{\partial z_1}\,\mathrm{d}z_1+\frac{\partial f}{\partial z_2}\,\mathrm{d}z_2+\cdots+\frac{\partial f}{\partial z_n}\,\mathrm{d}z_n \qquad (5)$$
where $\partial f/\partial z_i\ (i=1,2,\ldots,n)$ is the deviation transfer coefficient of each function unit; it reflects the degree to which the deviation $\Delta z_i$ of each function unit influences the total functional requirement deviation $\Delta y$. Letting $k_i=\partial f/\partial z_i\ (i=1,2,\ldots,n)$, the above formula simplifies to:
$$\Delta y=\sum_{i=1}^{n}k_i\,\Delta z_i \qquad (6)$$
To calculate the probability of total function realization, the distribution of the total function requirement value must be determined. In practical engineering, most functional requirement values follow a normal distribution; let the standard deviation of the $i$-th function unit be $\sigma_i\ (i=1,2,\ldots,n)$. Based on the root-sum-square composite method, the standard deviation of the total function requirement value is obtained as:
$$\sigma_y=\sqrt{\sum_{i=1}^{n}(k_i\sigma_i)^2+2\sum_{1\le i<j\le n}\rho_{ij}(k_i\sigma_i)(k_j\sigma_j)} \qquad (7)$$
where $\rho_{ij}$ is the coupling degree between two function units; in statistics, it corresponds to the correlation coefficient between the output physical quantities of the two function units. Compositing the deviation and the standard deviation yields the complexity composite paradigm of the total function:
$$U=x_0+\sum_{i=1}^{n}k_i\,\Delta z_i\pm t\sqrt{\sum_{i=1}^{n}(k_i\sigma_i)^2+2\sum_{1\le i<j\le n}\rho_{ij}(k_i\sigma_i)(k_j\sigma_j)} \qquad (8)$$
where $x_0$ is the central value of the design range of the system's total function and $t$ is the confidence coefficient. If the total function requirement value follows a normal distribution, then $t=3$ corresponds to a confidence probability of 99.7% and $t=2$ to 95%. Generally, $t=3$ (confidence 99.7%) can be considered to cover almost all values.
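To make the paradigm operational, the following sketch implements Formulas (6)–(8) directly. It is a minimal illustration assuming the transfer coefficients $k_i$, deviations $\Delta z_i$, standard deviations $\sigma_i$, and coupling matrix $\rho$ have already been estimated; the function and variable names are ours, not the paper's.

```python
# Minimal sketch of the complexity composite paradigm, Formulas (6)-(8).
import numpy as np

def composite_system_range(x0, k, dz, sigma, rho, t=3.0):
    """Return the (lower, upper) system range of the total function."""
    k, dz, sigma, rho = map(np.asarray, (k, dz, sigma, rho))
    ks = k * sigma
    var = np.sum(ks ** 2)                     # independent terms, Formula (7)
    n = len(k)
    for i in range(n):
        for j in range(i + 1, n):
            var += 2.0 * rho[i, j] * ks[i] * ks[j]   # coupling terms, Formula (7)
    center = x0 + np.sum(k * dz)              # center shifted per Formula (6)
    half_width = t * np.sqrt(var)             # t = 3 gives 99.7% confidence
    return center - half_width, center + half_width

# Illustrative call: three function units, with units 2 and 3 coupled.
rho = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.4],
                [0.0, 0.4, 1.0]])
print(composite_system_range(x0=1.0, k=[1.0, 1.0, 1.0], dz=[0.02, 0.0, 0.01],
                             sigma=[0.012, 0.033, 0.025], rho=rho))
```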
In the complexity composite paradigm, the coupling coefficient reflects the influence of the coupling among the function units' complexities on the complexity of the total function. In order to calculate the coupling coefficient, it is first necessary to determine whether a coupling relationship exists between function units. Based on axiomatic design theory, after completing the zigzag mapping between the function domain and the structure domain, if a design parameter can be adjusted without affecting other functional requirements, it can be ensured that the functions do not influence one another.
According to the design matrix, it can be determined whether a coupling relationship exists between the function units related to the design ranges; however, the value of the coupling coefficient cannot be obtained from the design matrix itself. In statistics, the following methods are usually used to estimate the coupling coefficient: direct judgment, observation, simple calculation, and direct calculation. In this paper, the direct calculation method is used to calculate the coupling between two functional requirements. Let the function requirement values of $FR_i$ and $FR_j$ be $z_i$ and $z_j$, respectively; the coupling coefficient is calculated as:
$$\rho_{ij}=\frac{\operatorname{Cov}(Z_i,Z_j)}{\sqrt{D(Z_i)}\sqrt{D(Z_j)}} \qquad (9)$$
where $\operatorname{Cov}(Z_i,Z_j)$ is the covariance of the functional requirement values of the two function units. Inserting the above formula into Formula (8) yields the system range of the total function in the new system design scheme. From the design range and the system range, the common range can be obtained, which is the integration range for the total function realization probability.
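In code, the direct calculation of Formula (9) is simply the sample Pearson correlation between the observed outputs of the two function units; the arrays below are hypothetical paired observations, not measured data from the case study.

```python
# Coupling coefficient of Formula (9) as a Pearson correlation.
import numpy as np

def coupling_coefficient(z_i, z_j):
    cov = np.cov(z_i, z_j, ddof=1)[0, 1]            # Cov(Z_i, Z_j)
    return cov / (np.std(z_i, ddof=1) * np.std(z_j, ddof=1))

rng = np.random.default_rng(0)
z_i = rng.normal(0.91, 0.033, size=100)             # outputs of function unit i
z_j = 0.5 * z_i + rng.normal(0.50, 0.02, size=100)  # partially coupled unit j
print(coupling_coefficient(z_i, z_j))               # equals np.corrcoef(z_i, z_j)[0, 1]
```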
The design matrix used in this paper is the design matrix of axiomatic design, which determines the coupling degree of the system. If a coupling relationship exists between functions, it may lead to system problems. The design matrix can identify three coupling relationships: uncoupled design, decoupled design, and coupled design.
After obtaining the system range of the total function, the probability density function of the total function requirement value needs to be calculated. Let $(Z_1,Z_2,\ldots,Z_n)$ be an $n$-dimensional random vector whose set of possible values is $D$, and let the domain of the multivariate function $y=f(z_1,z_2,\ldots,z_n)$ be $A\subseteq D$. Denote the joint distribution density of $(Z_1,Z_2,\ldots,Z_n)$ by $p(z_1,z_2,\ldots,z_n)$ (written as $p$ rather than $f$ to avoid confusion with the functional relationship above). For convenience of expression, define:
$$D(y)=\{(z_1,z_2,\ldots,z_n)\mid f(z_1,z_2,\ldots,z_n)\le y\} \qquad (10)$$
The distribution function of Y is expressed as:
$$F_Y(y)=P\{Y\le y\}=\underset{D(y)}{\int\cdots\int}p(z_1,z_2,\ldots,z_n)\,\mathrm{d}z_1\,\mathrm{d}z_2\cdots\mathrm{d}z_n \qquad (11)$$
If $F_Y(y)$ is a piecewise differentiable function, the distribution density $f_Y(y)$ of $Y$ can be obtained; this is the probability density function of the total functional requirement value. By integrating this density over the common range, the realization probability of the total function is obtained.
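When the total function requirement value is well approximated by a normal distribution, as in the case study of Section 5, this final integration reduces to a difference of normal cumulative distribution values. A minimal sketch follows (the normality assumption is ours, made to keep the example short; the numbers in the usage comment merely reuse Section 5's output TCM figures as an illustration):

```python
# Total function realization probability over the common range,
# assuming a normal requirement-value density.
from scipy.stats import norm

def realization_probability(mu, sigma, common_range):
    """Area of the requirement-value density over the common range."""
    lo, hi = common_range
    return norm.cdf(hi, loc=mu, scale=sigma) - norm.cdf(lo, loc=mu, scale=sigma)

# e.g. realization_probability(0.9685, 0.0248, (0.769, 1.0)) -> about 0.898
```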

4. A Decision-Making Method of Design for Coupling System Based on Complexity Composite

In order to accurately predict the total function realization probability of a scheme in the design stage, this paper proposes the DMDCC. The process of complexity composite and evolution is divided into three steps. The first step is the generation and sorting of the design schemes. The second step is to establish the complexity prediction model of the function units. The third step is to use the complexity composite paradigm to calculate the total function realization probability of the design scheme. By compositing the function realization probabilities of the function units into the total function realization probability, the method effectively reduces bad decisions and improves evaluation efficiency. The DMDCC is shown in Figure 10, and the key steps are as follows:
Step 1: Several new design schemes are obtained from the designer, and the three-criteria method of DCC theory is used to perform a preliminary sorting of them.
Step 2: Based on the data of the digital twins of the previous generation systems, a complexity prediction model is established. The function realization probability of the corresponding function unit is calculated using the complexity prediction model.
Step 3: Using the complexity composite paradigm, the function realization probability of the function unit is composited to obtain the total function realization probability of the design scheme.

5. Engineering Example

5.1. Case Background

As the foundation of Chinese medicine, TCM is held in high regard by Chinese people. At present, manual dispensing is the main method in the TCM market. TCM exists in a number of forms, including powders, granules, rhizomes, leaves, and filaments, and because of these different shapes there are considerable problems in dispensing it. Existing TCM devices often do not account for the cross-contamination of different medicines passing through the same channel, nor for whether each prescription can be proportioned quickly on demand. The device considered in this paper is designed for the specific needs of the hospital. The requirements are as follows:
  • The device should be simple enough to handle the proportioning of different forms of TCM;
  • The dispensing channel needs to be improved to enhance dispensing efficiency;
  • The existing device needs to be improved while reducing cost as much as possible.
The device improves dispensing efficiency by improving the dispensing channel and can store multiple prepared TCMs at the same time for rapid response. We have therefore been entrusted by a hospital to provide a scientific and reasonable decision-making method for improving the powder granular bulk TCM dispensing device, thereby developing a rhizome bulk TCM dispensing device.

5.2. Decision-Making Process of the Bulk TCM Dispensing Device

In this paper, the proposed DMDCC is used to composite the complexity of the design scheme of the bulk TCM dispensing device, and the scheme is evaluated and selected according to its function realization probability. In this paper, the rhizome bulk TCM dispensing device is improved by the digital twin model of the powder granular bulk TCM dispensing device, and then the simulation experiment is carried out. The experimental data are fed back to the detailed design of the rhizome bulk TCM dispensing device. The digital twin model of the powder granular bulk TCM dispensing device is shown in Figure 11.
Step 1: Firstly, several new design schemes are obtained from the designer.
Scheme 1: The screw blade is changed to double-screw conveying. Two spiral blades with different rotation directions are selected, and their rotation direction is opposite in the conveying process. Thus, there is no extrusion between the TCM and the spiral feeding barrel, as shown in Figure 12a.
Scheme 2: The wall of the extruded side of the spiral feeding cylinder is improved. At this time, the problem of the TCM being stuck on the blade and the wall of the spiral feeding barrel due to the gravity effect will be avoided, as shown in Figure 12b.
Scheme 3: The vibration effect can be obtained by querying the function of separating solids in the effect library. The TCM is separated by the vibration effect, as shown in Figure 12c.
All three schemes improve the structure of the function unit that separates the TCM or apply an effect solution, without introducing new functions or changing the existing functional structure. There is no obvious degradation process for the physical parameters in the structures of schemes 1 and 2, so there is no time-dependent complexity. However, there are some differences in the robustness of the two schemes' design parameters, as shown in Figure 13. Due to the single-screw conveyor used in scheme 2, there is a large angle adjustment in the design range, while that of scheme 1 is smaller; that is, the design rigidity of scheme 2 is smaller. Therefore, scheme 2 is better than scheme 1 in conveying accuracy.
According to the three-criteria method, the complexity of scheme 3 is analyzed, and the results are shown in Formula (12). It can be concluded that scheme 3 contains time-dependent combinatorial complexity.
$$C(t)=\begin{cases}\lim\limits_{t\to t_1}\dfrac{1}{t}\sum_{t=0}^{t_1}I(t)=0, & t<t_1\\[6pt]\lim\limits_{t_1<t<t_2}\dfrac{1}{t}\sum_{t=t_1}^{t_2}I(t)=n, & t_1<t<t_2\\[6pt]\lim\limits_{t\to t_2}\dfrac{1}{t_2}\sum_{t=t_2}I(t)=\infty, & t\ge t_2\end{cases} \qquad (12)$$
Therefore, the ranking of the design schemes is scheme 2, scheme 1, and scheme 3. Next, the complexity analysis of the design scheme is carried out, and the total function realization probability is calculated until the scheme that can meet the design requirements is identified.
Step 2: It is necessary to establish the complexity prediction model according to the design range of the powder granular bulk TCM dispensing device in order to predict the complexity of the rhizome bulk TCM dispensing device. First, in order to identify the function units directly related to the design range of the bulk TCM dispensing device, it is necessary to establish its functional structure, as shown in Figure 14.
According to the requirements of the national pharmacopeia, the required dispensing precision for grasping TCM is 20%. Therefore, the metric of the design range of the bulk TCM dispensing device is dispensing precision, with a value of 20%. The carrier of dispensing precision is the TCM itself, which is also the material flow. Since the output of the function structure is material flow, the relevant function units are confirmed to be output TCM, collect TCM, separate TCM, and save TCM. As there is no upstream or downstream equipment for this device, the save TCM unit is not analyzed in this paper.
After determining the function units directly related to the design ranges, the physical parameters corresponding to the function units need to be obtained according to the function analysis. The results are shown in Table 1.
The output physical parameters of the separate TCM, collect TCM, and output TCM function units are the output efficiencies $\eta_1$, $\eta_2$, and $\eta_3$, respectively. Output efficiency refers to the ratio of the actual output of TCM to the theoretical output after the operation of each function unit. According to the digital twin model of the bulk TCM dispensing device, the twin data of the corresponding physical quantities are extracted. The physical parameters extracted from the function units are combined with the output physical parameters to form the input data of the BP neural network. Since there are no series products in this paper, constant physical parameters, such as the size of the discharge port and the thickness of the spiral blade, are excluded. Then, the complexity prediction model of each function unit is established using the MATLAB neural network toolbox.
The complexity prediction model of the function unit of separate TCM is established, and the prediction results are shown in Figure 15.
The coefficient of determination of the results calculated by this training model is 0.969, so the model predicts the results well. Complexity prediction models are then established for the collect TCM and output TCM function units by the same process. Finally, the coefficient of determination for each function unit is above 0.90, which shows that the models can accurately predict the changes in the characteristic parameters.
Since scheme 2 does not change the structure of the collect TCM and output TCM function units, the complexity prediction models of these two function units are used to determine their function realization probabilities.
The powder granular bulk TCM dispensing device is the existing system, and the rhizome bulk TCM dispensing device is the new system. The digital twin model of the new system is built on the basis of the existing system. The establishment process is as follows:
  • The data corresponding to each parameter of the function unit are collected by using the digital twin model of the existing system, as shown in Figure 11.
  • SolidWorks is used to build the model of new systems.
  • The prototype model built by SolidWorks is imported into EDEM software. Then, according to the prototype data of existing systems, the constraints of each part can be set.
  • A virtual granule factory can be set with constraints according to the data statistics of the rhizome bulk TCM dispensing device.
  • The directions of gravity acceleration, dynamic constraints, the calculation domain, and other constraints are set; finally, the digital twin model of the new system is completed.
Due to software limitations, the data collected by the sensors cannot be transmitted directly to the digital prototype of the new system. Therefore, this study inputs the existing-system data collected by LabVIEW into the digital model of the new system to complete the complexity prediction.
The function realization probability of the separate TCM function unit is calculated first. In this paper, the rhizome TCM Atractylodes macrocephala was used to extract the characteristics. Actual measurement shows that the diameter distribution of Atractylodes macrocephala is approximately normal, $X\sim N(13.3, 3.12)$; the thickness is about 4.35 mm; and the density is 0.53 g/cm³. Parameters such as the size distribution and density of the TCM fed back from the physical prototype are set to fill the particles, achieving linkage between the digital prototype and the physical prototype. The rhizome TCM is filled with spherical particles, and the digital twin model is used for simulation analysis, as shown in Figure 16.
As the amount of TCM demanded by the user increases, so does the accuracy of the bulk TCM dispensing device; the experiment is therefore carried out with a small demand of rhizome TCM. Following the digital twin concept of "virtual–reality fusion, virtual controlling reality", a simulation experiment with a demand of 5 g is designed to monitor the output quality of TCM at the best rotary angle. The ratio of actual output weight to theoretical output weight (the TCM output efficiency $\eta_1$) is calculated over 100 simulation experiments with the digital twin model in Figure 16. The distribution of the separate TCM functional requirement value is then obtained, as shown in Figure 17; it roughly conforms to the normal distribution $X\sim N(1.0969, 0.0117^2)$.
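A sketch of this distribution-fitting step is given below; `eta1` is a synthetic stand-in for the 100 simulated output ratios, since the EDEM results themselves are not reproduced here.

```python
# Fit a normal law to the 100 simulated output-efficiency ratios.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
eta1 = rng.normal(1.0969, 0.0117, size=100)   # placeholder for simulated ratios
mu, sigma = np.mean(eta1), np.std(eta1, ddof=1)
print(stats.shapiro(eta1))                    # normality check before fitting
print(mu, sigma)                              # roughly N(1.0969, 0.0117^2)
```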
The collect TCM and output TCM function units use the complexity prediction model to obtain their function realization probabilities. Among their prediction model inputs, the changed physical parameter is the disintegrating slag rate, the main factor affecting the output, collection, and transportation stages of TCM. Two kilograms of Atractylodes macrocephala tablets are randomly allocated to 100 groups, and the total weight of each group and the weight of the crushed residue powder passing through the screen are measured; the ratio of the two is the disintegrating slag rate of each group. Statistical analysis gives the distribution of the disintegrating slag rate, as shown in Figure 18; it is roughly normal, $X\sim N(0.05, 0.0038)$.
In the collect TCM function unit, there are also three physical parameters. The positioning deviation of the baffle is roughly normal, $X\sim N(9.8, 5.37)$. The indoor temperature and humidity are uniformly distributed, $X\sim U(5, 25)$ and $X\sim U(0.15, 0.35)$, respectively. These parameters are inverse-transformed according to their probability distributions, and the combined data groups are substituted into the complexity prediction model of the collect TCM function unit. Finally, the distribution of the collect TCM functional requirement value is obtained, as shown in Figure 19; it roughly conforms to the normal distribution $X\sim N(0.9133, 0.033^2)$.
In the output TCM function unit, besides the parameters shared with the collect TCM function unit, there is also a controllable variable parameter: the conveyor belt speed. Considering the grasping speed and structural vibration, the belt speed of the powder granular bulk TCM dispensing device ranges from 0.3 to 0.5 m/s, so the belt speed range of the rhizome bulk TCM dispensing device is also 0.3~0.5 m/s. Finally, the distribution of the output TCM functional requirement value is obtained, as shown in Figure 20; it roughly conforms to the normal distribution $X\sim N(0.9685, 0.0248^2)$.
Step 3: Before the complexity composite of the total function, the design matrix of the three functions is established to determine the coupling among them, as shown in Formula (13). FR1, FR2, and FR3 are the separate TCM, collect TCM, and output TCM function units, respectively; DP1, DP2, and DP3 are the screw feeding mechanism, the medicine storage space formed by the baffle and conveyor belt, and the conveyor belt, respectively. There is a coupling relationship between collecting and outputting TCM, because positioning the drug storage space requires controlling the conveyor belt.
$$\begin{bmatrix}FR_1\\FR_2\\FR_3\end{bmatrix}=\begin{bmatrix}1&0&0\\0&1&0\\0&1&1\end{bmatrix}\begin{bmatrix}DP_1\\DP_2\\DP_3\end{bmatrix} \qquad (13)$$
The functional requirement values of the collect TCM and output TCM function units are combined, and the coupling coefficient is calculated using the Pearson formula. The relationship between the outputs of the three function units and the design range is $\eta=\eta_1\eta_2\eta_3$, where $\eta$ is the total function design range (0.8~1.2). According to the equal-action principle of error theory, the design ranges of the three function units are (0.885, 1.115), (0.769, 1), and (0.769, 1). From the design ranges and the functional requirement values of each function unit, the summary of characteristic coefficients in Table 2 is obtained.
Inserting the above data into the complexity composite paradigm (8) gives the system range of the total function as (0.801, 1.361), so the common range is (0.801, 1.2). According to the calculation formula for the total function realization probability, that of scheme 2 is 89.13%. Similarly, the total function realization probability of scheme 1 is 86.34%, and that of scheme 3 is 84.27%.
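As a rough cross-check of these figures (under our own simplifying assumption that the composite requirement value is treated as normal and that the system range (0.801, 1.361) corresponds to mean ± 3σ), the reported probability for scheme 2 can be approximately reproduced:

```python
# Approximate reproduction of scheme 2's total function realization probability.
from scipy.stats import norm

mu = (0.801 + 1.361) / 2        # 1.081, center of the system range
sigma = (1.361 - 0.801) / 6     # half-width divided by t = 3
p = norm.cdf(1.2, mu, sigma) - norm.cdf(0.801, mu, sigma)
print(f"{p:.4f}")               # ~0.897, close to the reported 89.13%
```

The small residual gap is consistent with the composite density not being exactly normal.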

6. Discussion

In the discussion of the schemes, the proposed method is verified in three ways. The first is a contrast experiment with the physical prototype; the specific process is shown in Appendix B. Using the physical prototype for experiments, the real function realization probability is calculated through statistical analysis. The second is to invite experts in related fields and use the analytic hierarchy process (AHP) to determine weights for evaluating the schemes in this paper; the specific process is shown in Appendix C. The third analyzes the schemes with the traditional information axiom to verify the scientific validity and superiority of the proposed method; the specific process is shown in Appendix D.
Through the comparative experiment with the prototype, it can be found that the decision-making results obtained by DMDCC are consistent with those of the actual prototype: scheme 2 > scheme 1 > scheme 3. This supports the scientific validity of the method. However, the total function realization probability obtained by the proposed method is slightly lower than that of the physical prototype. The reason is that the data used to predict the realization probability of the function units are the digital twin data of the existing system, which differ somewhat from the real data of the actual prototype.
Based on the AHP method, the ranking of the three schemes is different from that of the actual prototype. However, the best scheme selected is scheme 2, which can prove the rationality of this method. When the degree of functional realization of the scheme is relatively close, this difference may be due to the subjectivity of weight setting and the conflict between evaluation criteria in the AHP method. The personnel participating in the evaluation are the representatives of the target customers of the case, so their evaluation of the case can represent the evaluation results of the system.
It can be seen from Table A5 in Appendix D that, based on the traditional information axiom, the ranking of the three schemes is consistent with that of the actual prototype, which demonstrates the applicability of the method. However, from the perspective of function realization probability, there is a gap between the probability obtained by the traditional information axiom and that of the actual prototype, and this gap is larger than that of the method proposed in this paper. This demonstrates the superiority in accuracy of the proposed method over the traditional information axiom.
Compared with the traditional axiomatic design approach of calculating the total information content by linear addition, the proposed method solves the problem that coupling between function units affects the total function realization probability in system design. This paper innovatively integrates digital twin technology, DCC theory, and the error principle into the decision making of design. Its advantages can be seen in two respects. Firstly, traditional axiomatic design uses various unrelated indexes to calculate the information content of the system, ignoring the coupling inherent in the system, even though coupling affects the realization of system functions. The complexity composite method proposed here handles this coupling and thereby obtains a more accurate function realization probability. Secondly, this method uses the existing system's digital twin data to predict the function realization probability of the new generation system. This process determines the parameters that affect each function unit through functional analysis and then extracts the twin data of those parameters, so the important parameters affecting the total function realization probability can be identified. After a period of operation, the system can be maintained with respect to these important parameters, thus keeping the function realization probability high and extending the system's service life.
There are TCM devices with better accuracy on the market, but they are not suitable for the specific occasions considered here. The evaluated systems are specially designed for the context of the designated hospital, and the decision making in this paper is likewise based on the hospital's specific needs for device improvement.
In addition, according to the pharmacy's feedback, the device proposed in this paper meets the following economic needs.
  • The cost is lower. After introducing ten devices, the pharmacy saved about 20.4% in dispensing costs in the first year, increased dispensing efficiency by 12% over the same period, and saved about 8% in electricity costs.
  • Post-sale maintenance is simple, consisting mainly of replacing mechanical connecting parts, which saves further maintenance costs for the pharmacy.

7. Conclusions

This paper proposes a complexity composite method based on the error composite principle by combining the decision-making method of design, system complexity theory, and emerging digital twin technology. The effectiveness of the method in the decision-making stage is verified on the example of the bulk TCM dispensing device, assisting designers to make more scientific and reasonable decisions.
The main contributions of the current research can be concluded as follows:
This study proposes a method for calculating the information content between coupled function units in the coupling system, which extends the traditional information axiom. This method directly solves the problem that the coupling indexes in traditional axiomatic design cannot be added linearly, and improves the scientific rationality of screening schemes in decision making of design.
The digital twin technology is introduced into the decision making of design, which provides twin data support for the establishment of the system ranges of coupled functional units. This work is a precursor to integrating digital twin technology with traditional decision-making theories in the decision making of design.
Despite the contribution of a new decision-making method, the limitations of this study are also evident. The introduction of digital twin technology provides twin-data support for decision making and thereby improves its authenticity and reliability. However, the limitations of this emerging technology also restrict its use in the decision making of design, since it relies on a digital twin model of the previous generation of products. To date, we have established only part of the digital twin model to verify the theory, which implies that applying digital twin technology in decision making to predict the function realization probability of new systems is not yet fully developed.
These limitations indicate possible future research directions. In future work, we should establish digital twins for a large number of components, adjust parameters based on the digital twins of existing systems, and build a more complete digital twin model of new systems, so as to make the calculation of the total function realization probability more efficient and accurate. In addition, comparison with further existing research methods would better demonstrate the superiority of this method in the decision making of design.

Author Contributions

Conceptualization, P.Z. and B.Z.; project administration, P.Z.; resources, P.Z.; supervision, Z.N.; writing—original draft, S.C. and B.Z.; writing—review and editing, S.C. and Z.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 51975181).

Data Availability Statement

The data discussed in the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

No conflict of interest exists in the submission of this manuscript, and the manuscript has been approved by all authors for publication.

Appendix A

In DCC theory, complexity is divided into four different types: time-independent real complexity, time-independent imaginary complexity, time-dependent combinatorial complexity, and time-dependent periodic complexity [21].
Time-independent real complexity indicates that the system range lies partly or completely outside the system's design range [46]. Time-independent imaginary complexity is the complexity caused by the designer's ignorance of the system. Time-dependent periodic complexity refers to the uncertainty of a periodic change in the realization probability of the system function: the system range gradually deviates from the design range but eventually overlaps with it again, and the function realization probability returns to its initial state. Time-dependent combinatorial complexity means that the system range deviates from the design range over time and the function realization probability decreases until functional failure [25]. By establishing a functional periodicity [47], time-dependent combinatorial complexity can be transformed into time-dependent periodic complexity so that the system maintains a higher function realization probability.
The systematic error can be understood as the deviation between the mean of the current output value of the function and the design range, while the random error can be interpreted as the dispersion of the functional requirement value. Causes of systematic error include the measuring device, measurement method, measuring environment, and measuring personnel. In DCC theory, complexity arises for two reasons:
  • problems in the structure domain itself;
  • a shift of the system range caused by changes in the physical parameters of the structure domain over time.
Systematic errors correspond to the complexity types of DCC theory according to whether they change with measurement time and according to their trend of change:
(1) Constant systematic error and time-independent real complexity
The constant systematic error is caused by measurement factors that do not change during the measurement process; the sign and magnitude of the error are constant. Time-independent real complexity arises from an error of principle, and its degree of uncertainty does not change with time. Therefore, the constant systematic error has the same performance characteristics as time-independent real complexity.
(2) Periodic systematic error and time-dependent periodic complexity
The periodic systematic error varies according to a periodic law as time changes during the measurement process. The main reason for time-dependent periodic complexity is that the physical parameters related to the system range change periodically. Therefore, the periodic systematic error has the same characteristics as time-dependent periodic complexity.
(3) Time-dependent combinatorial complexity and systematic errors of linear, nonlinear, and complex variation laws
Systematic errors with linear, nonlinear, or complex variation laws are mainly due to variation in measuring devices, methods, personnel, or environmental factors, which causes the systematic measurement error to drift. Both such systematic errors and the system range under time-dependent combinatorial complexity deviate continuously in one direction, over the measurement process or over time, until the permissible error range is exceeded or the required function realization probability can no longer be satisfied. Therefore, systematic errors of linear, nonlinear, and complex variation laws have the same characteristics as time-dependent combinatorial complexity.
(4) Time-independent imaginary complexity
Time-independent imaginary complexity is the uncertainty caused by the designer's lack of specific design knowledge, not a real uncertainty. Different designers possess different degrees of design knowledge, so time-independent imaginary complexity is highly subjective and difficult to quantify. Therefore, it is not discussed in this article.

Appendix B. Comparative Experiment of Physical Prototype

Firstly, prototypes of the three schemes were trial-produced. Taking scheme 2 as an example, after the prototype was completed, a medicine dispensing experiment was carried out with bulk Atractylodes macrocephala tablets. The dispensing accuracy is obtained by comparing the dispensing information (the prescribed weight of TCM) with the actual output weight of the TCM. After 100 experiments, the dispensing accuracy scatter diagram of the rhizome bulk TCM dispensing device was obtained, as shown in Figure A1.
Figure A1. Scatter diagram of dispensing accuracy.
According to the statistical analysis of the dispensing accuracy distribution, the total function realization probability of the rhizome bulk TCM dispensing device is 86.67%. Similarly, the total function realization probabilities of the scheme 1 and scheme 3 prototypes were obtained, as shown in Table A1. The ranking of the three design schemes by real function realization probability is scheme 2 > scheme 1 > scheme 3.
Table A1. Comparison of Function Realization Probability of Three Physical Prototypes.

| Scheme | Scheme 1 | Scheme 2 | Scheme 3 |
|---|---|---|---|
| Total function realization probability | 83.29% | 86.67% | 81.94% |
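As a minimal illustration of this statistical analysis, the sketch below estimates a function realization probability from repeated dispensing errors by fitting a normal distribution and integrating it over an assumed design range; both the synthetic error data and the tolerance band are hypothetical, not the experimental values behind Table A1.

```python
import numpy as np
from scipy import stats

# Hypothetical data: relative dispensing errors (%) of 100 trials,
# standing in for the measured scatter of Figure A1.
rng = np.random.default_rng(0)
errors = rng.normal(loc=0.4, scale=1.2, size=100)

design_lower, design_upper = -2.0, 2.0          # assumed design range (%)
mu, sigma = errors.mean(), errors.std(ddof=1)   # fitted system range

# Probability that the output falls inside the design range,
# i.e., the overlap of the system range with the design range.
p = stats.norm.cdf(design_upper, mu, sigma) - stats.norm.cdf(design_lower, mu, sigma)
print(f"function realization probability: {p:.2%}")
```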

Appendix C. Comparison with AHP

In order to evaluate the effectiveness of the proposed method, three staff members from hospitals in different regions and two design experts in related fields were invited to evaluate the three schemes. The pharmacy staff comprised a pharmacist from a hospital in Tianjin and two staff members who assist pharmacists in dispensing TCM, with 10, 4, and 6 years of work experience, respectively. The two design experts are professors of TRIZ theory at Hebei University of Technology, each with more than ten years of product design experience.
Following the evaluation indexes for products and services in the literature [48], three comprehensive indexes ("customer perception", "customer cognitive", and "company influence") are subdivided into project indexes. The final evaluation index system is shown in Table A2.
Table A2. The evaluation index system.

| Target Layer | Comprehensive Index Layer (Weight Value) | Project Index Layer (Weight Value) |
|---|---|---|
| Product function realization degree | 1 Customer perception—tangible assets (0.31) | 11 Material of the product (0.07) |
| | | 12 Product convenience (0.62) |
| | | 13 Product durability (0.31) |
| | 2 Customer cognitive—sustainability (0.58) | 21 Energy saving (0.65) |
| | | 22 Views on waste reduction (0.22) |
| | | 23 Time saving/convenience (0.14) |
| | 3 Company influence—consumption (0.11) | 31 Consumption of manufacturing raw materials (0.07) |
| | | 32 Waste from manufacturing (0.20) |
| | | 33 Greenhouse gas emissions from manufacturing (0.72) |
First, scoring rules are formulated for each evaluation index according to its characteristics, as shown in Table A3.
Table A3. Score evaluation table.

| Description | Very Good | Good | Moderate | Not Very Good | Not Good | Very Bad |
|---|---|---|---|---|---|---|
| Score | 9–10 | 7–8 | 5–6 | 3–4 | 1–2 | 0–1 |
The weights are determined by AHP, introducing an appropriate measurement scale to represent the relative importance of each index. A judgment matrix $A$ is therefore constructed, in which the element $x_{ij}$ represents the importance of $e_i$ relative to $e_j$ and $x_{ji} = 1/x_{ij}$, and its consistency is checked. After the consistency check is passed, the eigenvector of the matrix is calculated as the corresponding weight vector. The relative importance is divided into five grades: equally important, slightly important, obviously important, strongly important, and extremely important, corresponding to scales 1, 3, 5, 7, and 9, respectively, while 2, 4, 6, and 8 lie between the corresponding adjacent grades.
For the judgment matrix, the "summation method" [49] is used to obtain its feature vector:
  • The original judgment matrix $X = (x_{ij})_{n \times k}$ is normalized by columns to obtain $X'$:
$$x'_{ij} = \frac{x_{ij}}{\sum_{i=1}^{n} x_{ij}} \quad (A1)$$
  • The vector $\omega'$ is calculated by summing each row of $X'$:
$$\omega'_i = \sum_{j=1}^{k} x'_{ij} \quad (A2)$$
  • $\omega'$ is normalized to obtain the feature vector:
$$\omega_i = \frac{\omega'_i}{\sum_{i=1}^{n} \omega'_i} \quad (A3)$$
The weight set of the comprehensive indicator layer determined by this method is $\omega = \{\omega_1, \omega_2, \omega_3\}$, and the weight set of the project indicator layer is $\omega_i = \{\omega_{i1}, \omega_{i2}, \ldots, \omega_{ij}\}$. Owing to space limitations, only the judgment matrices are reported here. The judgment matrix $A$ of the first layer is:
$$A = \begin{array}{c|ccc} & U_1 & U_2 & U_3 \\ \hline U_1 & 1 & 1/5 & 7 \\ U_2 & 5 & 1 & 3 \\ U_3 & 1/7 & 1/3 & 1 \end{array} \quad (A4)$$
The matrix satisfies the consistency test. Using Formulas (A1)–(A3), its feature vector is obtained as $\omega = \{\omega_1, \omega_2, \omega_3\} = \{0.31, 0.58, 0.11\}$.
Similarly, the second-layer judgment matrices $A_1$, $A_2$, and $A_3$ are constructed in the same way:
$$A_1 = \begin{array}{c|ccc} & U_{11} & U_{12} & U_{13} \\ \hline U_{11} & 1 & 1/7 & 1/7 \\ U_{12} & 7 & 1 & 3 \\ U_{13} & 7 & 1/3 & 1 \end{array} \quad (A5)$$
$$A_2 = \begin{array}{c|ccc} & U_{21} & U_{22} & U_{23} \\ \hline U_{21} & 1 & 7 & 3 \\ U_{22} & 1/7 & 1 & 3 \\ U_{23} & 1/3 & 1/3 & 1 \end{array} \quad (A6)$$
$$A_3 = \begin{array}{c|ccc} & U_{31} & U_{32} & U_{33} \\ \hline U_{31} & 1 & 1/5 & 1/7 \\ U_{32} & 5 & 1 & 1/7 \\ U_{33} & 7 & 7 & 1 \end{array} \quad (A7)$$
The matrices above satisfy the consistency test. Formulas (A1)–(A3) yield their feature vectors:
$$\omega_1 = \{0.07, 0.62, 0.31\} \quad (A8)$$
$$\omega_2 = \{0.65, 0.22, 0.14\} \quad (A9)$$
$$\omega_3 = \{0.07, 0.20, 0.72\} \quad (A10)$$
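For readers who wish to reproduce the weight calculation, the short Python sketch below implements the summation method of Formulas (A1)–(A3) and recovers the first-layer weights from matrix (A4). It is an illustration written for this text, not code from the paper.

```python
import numpy as np

def summation_method(X: np.ndarray) -> np.ndarray:
    """Approximate the principal eigenvector of a judgment matrix by the
    'summation method': normalize each column (A1), sum across each
    row (A2), then normalize the resulting vector (A3)."""
    col_normalized = X / X.sum(axis=0)   # (A1)
    w = col_normalized.sum(axis=1)       # (A2)
    return w / w.sum()                   # (A3)

# First-layer judgment matrix A from (A4).
A = np.array([[1,   1/5, 7],
              [5,   1,   3],
              [1/7, 1/3, 1]])
print(summation_method(A).round(2))      # [0.31, 0.58, 0.11]
```

Rounding reproduces the first-layer weights reported above.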
The weight values of each indicator in the comprehensive indicator layer and the project indicator layer are shown in Table A2.
Then, the weighted formula is used to calculate the evaluation results:
$$W = \frac{1}{m}\sum_{j=1}^{m}\left(\sum_{i=1}^{n} A_i W_i\right)_j \quad (A11)$$
where $m$ is the number of experts, $n$ the number of evaluation indexes, $A_i$ the score given to evaluation index $i$, $W_i$ the weight of evaluation index $i$, and $W$ the total score; the inner weighted sum is computed once for each expert's scores and the results are averaged. The design schemes are evaluated according to this calculation.
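To make Formula (A11) concrete, here is a small Python sketch. The expert-score matrix is entirely hypothetical (the individual expert scores are not reported), and it assumes, as is usual in AHP hierarchies, that each project index's global weight is the product of its comprehensive-layer and project-layer weights.

```python
import numpy as np

# Hypothetical scores: rows are the m = 5 evaluators, columns the
# n = 9 project indexes (11, 12, 13, 21, 22, 23, 31, 32, 33).
scores = np.array([[7, 8, 6, 7, 9, 6, 5, 7, 8],
                   [6, 7, 7, 8, 8, 7, 6, 6, 7],
                   [8, 7, 6, 7, 9, 5, 6, 7, 8],
                   [7, 6, 7, 8, 7, 6, 5, 8, 7],
                   [6, 8, 6, 7, 8, 6, 6, 7, 8]], dtype=float)

comprehensive = np.array([0.31, 0.58, 0.11])   # weights from (A4)
project = np.array([[0.07, 0.62, 0.31],        # customer perception
                    [0.65, 0.22, 0.14],        # customer cognitive
                    [0.07, 0.20, 0.72]])       # company influence

# Global weight of each project index = comprehensive weight x project weight.
global_weights = (comprehensive[:, None] * project).ravel()

# Formula (A11): weight each expert's scores, then average over experts.
W = (scores @ global_weights).mean()
print(f"final score: {W:.2f}")
```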
The evaluation results of the five experts are substituted into Formula (A11) to calculate the final evaluation results of the product function realization degree, as shown in Table A4. According to these results, the ranking of the function realization degree of the bulk TCM dispensing device is scheme 2 > scheme 3 > scheme 1.
Table A4. Final evaluation results of product function realization degree.

| Comprehensive Evaluation Indicator | Score for Scheme 1 | Score for Scheme 2 | Score for Scheme 3 |
|---|---|---|---|
| Customer cognitive | 6.71 | 6.65 | 6.97 |
| Customer perception | 6.59 | 7.35 | 6.04 |
| Company influence | 5.05 | 7.63 | 7.61 |
| Final score of product function realization degree | 6.13 | 6.72 | 6.26 |

Appendix D. Comparison with Information Axiom

In order to verify the rationality of the method proposed in this paper, the case is also studied with a classical evaluation method, the information axiom. Since the information axiom and its derivative methods all add the information content of each index linearly, the advantages of the proposed method can be shown by comparison with the traditional information axiom.
The axiomatic design theory proposed by Suh [14] uses the information axiom to evaluate design schemes. It expresses the designer's satisfaction with an evaluation index by the success probability that the design parameters meet the functional requirements, that is, the function realization probability. By calculating the information content of each design scheme, the scheme with the minimum information content is selected as the best design. The information axiom frees scheme evaluation from reliance on the designer's experience and avoids the conflict between evaluation indexes in MCDM. In this case, the information content of each design scheme is calculated according to Formulas (A12)–(A14):
$$I = \log_2 \frac{1}{p} = -\log_2 p \quad (A12)$$
$$p = \int_{d_l}^{d_u} f(FR)\, dFR = A_{cr} \quad (A13)$$
$$I_t = I_1 + I_2 + \cdots + I_m = \sum_{i=1}^{m} I_i \quad (A14)$$
where $I$ is the information content of a single index, $p$ the function realization probability, $f(FR)$ the probability density function of the functional requirement, $d_l$ and $d_u$ the lower and upper bounds of the design range, $A_{cr}$ the area of the common range between the system range and the design range, and $I_t$ the total information content of the system. When the design scheme best meets the user's needs, the information content $I$ is smallest and the function realization probability $p$ is largest; conversely, when the design scheme least meets the user's needs, $I$ is largest and $p$ is smallest.
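The following sketch reproduces the logic of Formulas (A12)–(A14) in Python. The per-index probabilities are hypothetical and chosen only to show that linearly adding information contents is equivalent to multiplying the individual realization probabilities, which is exactly the independence assumption the proposed complexity composite method relaxes.

```python
import math

def information_content(p: float) -> float:
    """Information content of a single index, I = log2(1/p), as in (A12)."""
    return math.log2(1 / p)

# Hypothetical per-index function realization probabilities for one scheme.
probabilities = {"separate TCM": 0.95, "collect TCM": 0.92, "output TCM": 0.96}

# (A14): the traditional information axiom sums the contents linearly
# and prefers the scheme with the smallest total.
I_total = sum(information_content(p) for p in probabilities.values())
p_total = 2 ** (-I_total)   # equivalent to the product of the probabilities
print(f"I_t = {I_total:.3f} bits, total probability = {p_total:.2%}")
```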
The three design schemes are sorted by the information axiom, and the result is scheme 2 > scheme 1 > scheme 3. Table A5 lists the function realization probabilities obtained through the physical prototype experiment, DMDCC, and information axiom.
Table A5. Comparison of the total function realization probability obtained by the physical prototype experiment, DMDCC, and the information axiom.

| Total Function Realization Probability | Scheme 1 | Scheme 2 | Scheme 3 |
|---|---|---|---|
| Experiment of physical prototype | 83.29% | 86.67% | 81.94% |
| DMDCC | 86.34% | 89.13% | 84.27% |
| Information axiom | 79.49% | 83.96% | 78.86% |

References

  1. Zurita, N.S.; Colby, M.K.; Tumer, I.Y.; et al. Design of Complex Engineered Systems Using Multi-Agent Coordination. J. Comput. Inf. Sci. Eng. 2017, 18, 11003.
  2. Ming, Z.; Nellippallil, A.B.; Yan, Y.; Wang, G.; Goh, C.H.; Allen, J.K.; Mistree, F. PDSIDES-A Knowledge-Based Platform for Decision Support in the Design of Engineering Systems. J. Comput. Inf. Sci. Eng. 2018, 18, 041001.
  3. Rehman, F.; Yan, X. A Case Study to Support Conceptual Design Decision Making Using Context Knowledge. In Advanced Design and Manufacture to Gain a Competitive Edge; Springer: Berlin/Heidelberg, Germany, 2008; pp. 13–22.
  4. Jing, L.; Zhan, Y.; Li, Q.; Peng, X.; Li, J.; Gao, F.; Jiang, S. An integrated product conceptual scheme decision approach based on Shapley value method and fuzzy logic for economic-technical objectives trade-off under uncertainty. Comput. Ind. Eng. 2021, 156, 107281.
  5. Girodon, J.; Monticolo, D.; Bonjour, E.; Perrier, M. An organizational approach to designing an intelligent knowledge-based system: Application to the decision-making process in design projects. Adv. Eng. Inform. 2015, 29, 696–713.
  6. Wang, Y.; Yu, S.; Ma, N.; Wang, J.; Hu, Z.; Liu, Z.; He, J. Prediction of product design decision making: An investigation of eye movements and EEG features. Adv. Eng. Inform. 2020, 45, 101095.
  7. Qi, J.; Hu, J.; Peng, Y. Integrated rough VIKOR for customer-involved design concept evaluation combining with customers' preferences and designers' perceptions. Adv. Eng. Inform. 2020, 46, 101138.
  8. Wang, H.; Duanmu, L.; Lahdelma, R.; Li, X. A fuzzy-grey multicriteria decision making model for district heating system. Appl. Therm. Eng. 2018, 128, 1051–1061.
  9. Zhang, X.; Qu, Y.; Tong, S. Evaluating New Product Alternatives in Case of Uncertain Targets and Product Attributes. Oper. Res. Manag. Sci. 2020, 29, 166–174.
  10. Li, W.; Ren, X.; Ding, S.; Dong, L. A multi-criterion decision making for sustainability assessment of hydrogen production technologies based on objective grey relational analysis. Int. J. Hydrogen Energy 2020, 45, 34385–34395.
  11. Wang, H.; Fang, Z.; Wang, D.; Liu, S. An integrated fuzzy QFD and grey decision-making approach for supply chain collaborative quality design of large complex products. Comput. Ind. Eng. 2020, 140, 106212.
  12. Wang, P.; Meng, P.; Zhai, J.; Zhu, Z. A hybrid method using experiment design and grey relational analysis for multiple criteria decision making problems. Knowl. Based Syst. 2013, 53, 100–107.
  13. Yurdakul, M.; Tansel, İ.Ç.Y. Application of correlation test to criteria selection for multi criteria decision making (MCDM) models. Int. J. Adv. Manuf. Technol. 2009, 40, 403–412.
  14. Suh, N. Axiomatic Design: Advances and Applications; Oxford University Press: New York, NY, USA, 2001.
  15. Chen, D.; Chu, X.; Feng, T.; Li, Y.; Zuo, G. Product-service system evaluation based on information axiom in hybrid uncertain environment. Comput. Integr. Manuf. Syst. 2014, 20, 2071–2078.
  16. Zhang, R.; Qiu, J.; Jia, Q.; Wang, X.; Dong, M. An improved fuzzy evaluation method for design schemes based on the information axiom. Int. J. Comput. Integr. Manuf. 2014, 27, 1067–1078.
  17. He, Q.; Zhang, R.J.; Guo, N.; Cong, D.S.; Qiu, J.W. Fuzzy Information Axiom in Conceptual Design Scheme Evaluation of Handling Equipments. Adv. Mater. Res. 2013, 605–607, 263–266.
  18. Tian, Q.; Xiao, R.; Zhong, Y.; Du, Y.; Yang, H. Evaluation Schemes Method for Design Based on Information Axiom and Fuzzy Mathematics. Trans. Chin. Soc. Agric. Mach. 2008, 39, 136–140.
  19. Wang, X.; Tang, D.; Lou, P. Multi-attribute Decision Making Method Based on Design Axioms. J. Southwest Jiaotong Univ. 2008, 43, 392–397.
  20. Xiao, R.; Cheng, X.; Liao, X. Evaluation method and its application for design scheme based on fuzzy information axiom. Comput. Integr. Manuf. Syst. 2007, 2331–2338.
  21. Suh, N.P. Complexity: Theory and Applications; Oxford University Press: Oxford, UK, 2005.
  22. Zhang, P.; Wang, Z.; Zhao, B.; Wang, T.; Tan, R. Review of the research on disruptive innovation theory. J. Mach. Des. 2018, 35, 1–6.
  23. Zhang, P.; Dong, Y.; Zhang, H.; Zhang, J.; Tan, R. Decomposition Process Model for Complex Electromechanical Based on Design-centric Complexity and TRIZ. J. Mech. Eng. 2016, 52, 17–24.
  24. Zhang, P.; Zhang, J.; Dong, Y.; Zhang, H.; Tan, R. Analysis process model of design-centric complexity based on additional effect. Comput. Integr. Manuf. Syst. 2018, 24, 444–454.
  25. Zhang, P.; Yang, B.; Zhang, H.; Tan, R. Conflict determination oriented to CAI based on design-centric complexity. Comput. Integr. Manuf. Syst. 2013, 19, 330–337.
  26. Cao, D.; Tan, R.; Yuan, C.; Zhang, J. Conceptual design of mechanical products based on function decomposition. J. Mech. Eng. 2001, 37, 13–17.
  27. Tao, F.; Sui, F.; Liu, A.; Qi, Q.; Zhang, M.; Song, B.; Guo, Z.; Lu, S.C.Y.; Nee, A.Y.C. Digital twin-driven product design framework. Int. J. Prod. Res. 2019, 57, 3935–3953.
  28. Zhang, K.; Qu, T.; Zhou, D.; Jiang, H.; Lin, Y.; Li, P.; Guo, H.; Liu, Y.; Li, C.; Huang, G.Q. Digital twin-based opti-state control method for a synchronized production operation system. Robot. Comput. Integr. Manuf. 2020, 63, 101892.
  29. Lu, Y.; Liu, C.; Wang, K.I.; Huang, H.; Xu, X. Digital Twin-driven smart manufacturing: Connotation, reference model, applications and research issues. Robot. Comput. Integr. Manuf. 2020, 61, 101837.
  30. Aivaliotis, P.; Georgoulias, K.; Chryssolouris, G. The use of Digital Twin for predictive maintenance in manufacturing. Int. J. Comput. Integr. Manuf. 2019, 32, 1067–1080.
  31. Zhang, B.; Li, X.; Wan, X. From Digital Twin to Digital Engineering Modeling and Simulation Entering a New Era. J. Syst. Simul. 2019, 31, 369–376.
  32. Karve, P.M.; Guo, Y.; Kapusuzoglu, B.; Mahadevan, S.; Haile, M.A. Digital twin approach for damage-tolerant mission planning under uncertainty. Eng. Fract. Mech. 2020, 225, 106766.
  33. Francisco, A.; Mohammadi, N.; Taylor, J.E. Smart City Digital Twin-Enabled Energy Management: Toward Real-Time Urban Building Energy Benchmarking. J. Manag. Eng. 2020, 36, 04019045.
  34. Zheng, Y.; Yang, S.; Cheng, H. An application framework of digital twin and its case study. J. Ambient Intell. Humaniz. Comput. 2019, 10, 1141–1153.
  35. Chakraborty, S.; Adhikari, S. Machine learning based digital twin for dynamical systems with multiple time-scales. Comput. Struct. 2021, 243, 106410.
  36. Tao, F.; Liu, W.; Zhang, M. Five-dimension digital twin model and its ten applications. Comput. Integr. Manuf. Syst. 2019, 25, 1–18.
  37. Nie, Z.; Zhang, P.; Wang, F.; Wang, Z. Sustainable innovation pathway for mechanical products by inducing characteristic parameters. Adv. Eng. Inform. 2021, 49, 101349.
  38. Pylianidis, C.; Osinga, S.; Athanasiadis, I.N. Introducing digital twins to agriculture. Comput. Electron. Agric. 2021, 184, 105942.
  39. Leong, W.D.; Teng, S.Y.; How, B.S.; Ngan, S.L.; Lam, H.L.; Tan, C.P.; Ponnambalam, S.G. Adaptive analytical approach to lean and green operations. J. Clean. Prod. 2019, 235, 190–209.
  40. Kusiak, A. Smart manufacturing must embrace big data. Nature 2017, 544, 23–25.
  41. Ji, W.; Yin, S.; Wang, L. A big data analytics based machining optimisation approach. J. Intell. Manuf. 2019, 30, 1483–1495.
  42. Teng, S.Y.; How, B.S.; Leong, W.D.; Teoh, J.H.; Siang Cheah, A.C.; Motavasel, Z.; Lam, H.L. Principal component analysis-aided statistical process optimisation (PASPO) for process improvement in industrial refineries. J. Clean. Prod. 2019, 225, 359–375.
  43. Wang, K.; Tan, R.; Peng, Q.; Wang, F.; Shao, P.; Gao, Z. A holistic method of complex product development based on a neural network-aided technological evolution system. Adv. Eng. Inform. 2021, 48, 101294.
  44. Kang, M.; Shao, B.; Zhu, Y.; Chen, C.; Wang, T. Comparison of multiple nonlinear regression model and BP neural network model for landslide slip distance prediction. Geol. Bull. China 2021, 1–12.
  45. Bui, D.T.; Nhu, V.H.; Hoang, N.D. Prediction of soil compression coefficient for urban housing project using novel integration machine learning approach of swarm intelligence and Multi-layer Perceptron Neural Network. Adv. Eng. Inform. 2018, 38, 593–604.
  46. Zhang, P.; Jing, S.H.; Nie, Z.F.; et al. Design and Development of Sustainable Product Service Systems Based on Design-Centric Complexity. Sustainability 2021, 13, 532.
  47. Zhang, P.; Tan, R. Method of Establish Functional Period Assisted by TRIZ. J. Mech. Eng. 2012, 48, 78–85.
  48. Chou, C.; Chen, C.; Conley, C. An approach to assessing sustainable product-service systems. J. Clean. Prod. 2015, 86, 277–284.
  49. Bertolini, M.; Braglia, M.; Carmignani, G. Application of the AHP methodology in making a proposal for a public work contract. Int. J. Proj. Manag. 2006, 24, 422–430.
Figure 1. The probability density function of functional requirements.
Figure 2. A five-dimension digital twin model.
Figure 3. Complexity composite process.
Figure 4. The system complexity determination process.
Figure 5. Design scheme ranking principle.
Figure 6. The relationship between the design rigidity of the scheme and functional requirements.
Figure 7. Complexity prediction model.
Figure 8. Different types of systematic errors.
Figure 9. Characteristics of simultaneous existence of systematic error and random error.
Figure 10. The design-centric complexity composite model.
Figure 11. The digital twin model of the powder granular bulk TCM dispensing device.
Figure 12. Improved design scheme.
Figure 13. Comparison of design rigidity between scheme 1 and scheme 2.
Figure 14. Functional structure of powder granular bulk TCM grasping machine.
Figure 15. Training results of complexity prediction model of function unit of separate TCM.
Figure 16. Simulation experiment of digital twin model for rhizome TCM particles.
Figure 17. The distribution of the functional requirement value of the separate TCM function unit.
Figure 18. The distribution of disintegrating slag rate of rhizome TCM.
Figure 19. The distribution of the functional requirement value of collecting TCM.
Figure 20. The distribution of the functional requirement value of outputting TCM.
Table 1. Corresponding relationship between function units and physical parameters.

| Function Unit | Component | Physical Parameters |
|---|---|---|
| Separate TCM | Screw feeding barrel, discharge drum, spiral blade | Outlet size, spiral blade thickness, inner diameter of spiral blade, outer diameter of spiral blade, cylinder wall finish, clearance between blade and feeding barrel, corner, speed, temperature, humidity, particle size of traditional Chinese medicine, density of traditional Chinese medicine, rated weight |
| Collect TCM | Medicine box | Height of medicine box, width of medicine box, distance between discharge port and conveyor belt, positioning accuracy of medicine box, temperature, humidity, rated weight, disintegrating slag rate |
| Output TCM | Synchronous wheel, synchronous belt, roller, conveyor belt, medicine box | Height of medicine box, width of medicine box, distance between discharge port and conveyor belt, reduction ratio, conveyor belt surface characteristic parameters, positioning accuracy of conveyor belt, speed of conveyor belt, temperature, humidity, rated weight, disintegrating slag rate |
Table 2. Characteristic coefficients of function units.

| Function Unit FR | Relative Deviation of Function Unit (%) | Standard Deviation of Function Unit (%) | Coupling Coefficient |
|---|---|---|---|
| Separate TCM | 9.69 | 1.17 | 0 |
| Collect TCM | 2.88 | 3.3 | 0.630 |
| Output TCM | 8.4 | 2.48 | — |