Some Linguistic Neutrosophic Cubic Mean Operators and Entropy with Applications in a Corporation to Choose an Area Supervisor

Abstract: In this paper, we combine entropy with linguistic neutrosophic cubic numbers and apply it to a daily-life problem in which a corporation must choose an area supervisor, which is the main target of our proposed model. For this, we first develop the theory of linguistic neutrosophic cubic numbers, which express indeterminate and incomplete information through truth, indeterminacy and falsity linguistic variables (LVs) for the past, the present and the future very effectively. After giving the definitions, we introduce some basic operations and properties of linguistic neutrosophic cubic numbers. We also define the linguistic neutrosophic cubic Hamy mean operator and the weighted linguistic neutrosophic cubic Hamy mean (WLNCHM) operator, together with some of their properties; these operators can handle multi-input agents with respect to different time frames. Finally, as an application, we give a numerical example to test the applicability of the proposed model.

Definition 10. Let g = ⟨p_(α,α̃), p_(β,β̃), p_(γ,γ̃)⟩ be an LNCN defined on the linguistic term set LTS p. Then the score function, the accuracy function and the certain function of the LNCN g are defined as follows: (i) the score function ϕ(g); (ii) the accuracy function σ(g); (iii) the certain function c(g). Now, with the help of the above-defined functions, we introduce a ranking method based on these functions.
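The ranking by score, then accuracy, then certain function can be sketched in code. The exact formulas are not legible in the extracted text, so the ones below are assumptions that extend the commonly used linguistic neutrosophic score ϕ(g) = (2t + α − β − γ)/(3t) to the interval (cubic) subscripts; they are illustrative, not the paper's Equations (i)–(iii) verbatim.

```python
def score(g, t):
    """Assumed score of an LNCN g = ((a, a2), (b, b2), (c, c2)),
    where all linguistic subscripts lie in [0, t]."""
    (a, a2), (b, b2), (c, c2) = g
    return (4 * t + a + a2 - b - b2 - c - c2) / (6 * t)

def accuracy(g, t):
    """Assumed accuracy: truth subscripts minus falsity subscripts."""
    (a, a2), (b, b2), (c, c2) = g
    return (a + a2 - c - c2) / (2 * t)

def certain(g, t):
    """Assumed certain function: normalized truth subscripts only."""
    (a, a2), (b, b2), (c, c2) = g
    return (a + a2) / (2 * t)

def rank_key(g, t):
    # Lexicographic ranking: score first, then accuracy, then certain,
    # mirroring the tie-breaking order described in the text.
    return (score(g, t), accuracy(g, t), certain(g, t))
```

With t = 6 (a seven-term LTS), an LNCN with high truth and low falsity subscripts receives a higher rank key than one with mixed subscripts, which matches the intended ordering behaviour.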

Entropy of LNCSs
Entropy is used to quantify the unpredictability in different sets such as the fuzzy set (FS), the intuitionistic fuzzy set (IFS), etc. In 1965, Zadeh [37] first defined the entropy of an FS to determine ambiguity in a quantitative manner. This notion of fuzziness plays a significant role in system optimization, pattern classification, control and several other areas; he also noted some of its implications in system theory. Later, non-probabilistic entropy was axiomatized by De Luca and Termini [38]. Intuitionistic fuzzy sets are intuitive and have been widely used in the fuzzy literature. The entropy G of a fuzzy set H satisfies a set of axioms; the differences between the fuzzy and intuitionistic cases occur in Axioms 2 and 3. Kaufmann [39] suggested a distance measure of soft entropy, and a new non-probabilistic entropy measure was introduced by Kosko [40]. In [41], Majumdar and Samanta introduced the notion of two single-valued neutrosophic sets and their properties, defined the distance between two such sets and investigated the measure of entropy of a single-valued neutrosophic set. The entropy of IFSs was introduced by Szmidt and Kacprzyk [42]. The fuzziness measure in terms of the distance between a fuzzy set and its complement was put forward by Yager [43].
The LNCS was examined for managing undetermined data through truth, indeterminacy and falsity membership functions. For the neutrosophic entropy, we follow Kosko's idea for measuring fuzziness [40]: Kosko proposed to measure this feature by a similarity function between the distance to the nearest crisp element and the distance to the farthest crisp element. For neutrosophic information, we refer to the work of Patrascu [45], where the following construction is given in Equations (30)–(33). The two crisp elements are (1, 0, 0) and (0, 0, 1). We consider the vector B = (µ − ν, µ + ν − 1, w). For (1, 0, 0) and (0, 0, 1), this yields B_Tru = (1, 0, 0) and B_Fal = (−1, 0, 0), respectively. We then compute the distances from B to B_Tru and from B to B_Fal. The neutrosophic entropy is defined by the similarity between these two distances: the similarity E_c and the neutrosophic entropy V_c are defined accordingly.

Definition 15. Suppose that H = {⟨x_î, p_(α_H,α̃_H)(x_î), p_(β_H,β̃_H)(x_î), p_(γ_H,γ̃_H)(x_î)⟩ | x_î ∈ X} is an LNCS; we define the entropy of the LNCS as a function G̊_k: LNCS(X) → [0, 1], where t + 1 is the (odd) cardinality of the underlying linguistic term set.
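The Kosko/Patrascu construction above can be sketched numerically. The choice of the Manhattan distance and of the ratio-form similarity min/max below are assumptions standing in for Equations (30)–(33), which are not legible in the extracted text; the sketch only demonstrates the qualitative behaviour (zero entropy at a crisp element, maximal entropy when B is equidistant from B_Tru and B_Fal).

```python
def neutrosophic_entropy(mu, w, nu):
    """Sketch of a Kosko/Patrascu-style entropy for a neutrosophic
    triple (mu, w, nu): similarity between the distances to the
    nearest and farthest crisp elements (assumed forms)."""
    B = (mu - nu, mu + nu - 1, w)
    B_tru = (1, 0, 0)   # image of the crisp element (1, 0, 0)
    B_fal = (-1, 0, 0)  # image of the crisp element (0, 0, 1)
    # Manhattan distances from B to the two crisp images (assumption)
    d_tru = sum(abs(x - y) for x, y in zip(B, B_tru))
    d_fal = sum(abs(x - y) for x, y in zip(B, B_fal))
    # Ratio similarity of the two distances: 0 at a crisp element,
    # 1 when B is equidistant from both crisp images (assumption)
    m = max(d_tru, d_fal)
    return min(d_tru, d_fal) / m if m else 1.0
```

For example, the crisp element (µ, w, ν) = (1, 0, 0) yields entropy 0, while the fully balanced (0.5, 0, 0.5) yields entropy 1.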
The entropy function G̊_k satisfies the following conditions: (1) G̊_k(H) = 0 if H is a crisp set; (2) G̊_k(H) attains its maximum value when H is maximally uncertain; (3) if H is less uncertain than I, then, depending on the entropy value in Equation (34), we obtain G̊_k(H) ≤ G̊_k(I); and (4) G̊_k(H) = G̊_k(H^c), where H^c is the complement of H. Let H be an LNCS in U; then the entropy of H is given by Equation (34).
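The four conditions can be checked mechanically on any candidate entropy. The toy De Luca–Termini-style measure below is an assumption used only to illustrate the axioms on plain membership grades; it is not the paper's Equation (34) for LNCSs.

```python
def fuzzy_entropy(H):
    """Toy De Luca-Termini-style entropy of a list of membership
    grades in [0, 1]; illustrative only (assumption)."""
    return sum(1 - abs(2 * m - 1) for m in H) / len(H)

crisp = [0.0, 1.0, 1.0]   # fully determined set
half  = [0.5, 0.5, 0.5]   # maximally uncertain set
sharp = [0.1, 0.9, 0.2]   # pointwise less uncertain than `blur`
blur  = [0.3, 0.7, 0.4]
```

Here `fuzzy_entropy(crisp)` is 0 (condition 1), `fuzzy_entropy(half)` is 1 (condition 2), the sharper set scores no higher than the blurrier one (condition 3), and the entropy is invariant under complementation `m ↦ 1 − m` (condition 4).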

The Method for MAGDM Based on the WLNCHM Operator
In this section, we present a MAGDM method based on the WLNCHM operator with LNCNs. Let U = {U_1, U_2, ..., U_m} be the set of alternatives, V = {V_1, V_2, ..., V_n} be the set of attributes and ẘ = (ẘ_1, ẘ_2, ..., ẘ_n)^T be the weight vector. Then, using LNCNs from the predefined linguistic term set ϕ = {ϕ_j | j ∈ [0, t]} (where t + 1 is an odd cardinality), the decision makers are invited to evaluate the alternatives U_î (î = 1, 2, ..., m) over the attributes V_j (j = 1, 2, ..., n). In each LNCN, the DMs can assign an uncertain LTS to the truth, indeterminacy and falsity linguistic terms and a certain LTS to the truth, indeterminacy and falsity linguistic terms, based on the LTS used in the linguistic evaluation of each attribute V_j on each alternative U_î. Thus, we obtain the decision matrix S = (s_îj)_{m×n} of LNCNs g_îj. Based on the above information, the MAGDM method based on the WLNCHM operator is described as follows: Step 1: Formulate the decision making problem. Step 2: Calculate g_î = WLNCHM(s_î1, s_î2, ..., s_în) to obtain the collective approximation value for each alternative U_î with respect to the attributes V_j.
Step 5: In this step, we determine the ranking order of the alternatives U_î (î = 1, 2, ..., m). According to the ranking method of Definition 8, the alternative with the greater score function ϕ(S) ranks higher. If the score functions are equal, then the alternative with the larger accuracy function ranks higher; if both the score and the accuracy functions are equal, then the alternative with the larger certain function ranks higher.
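The steps above can be sketched end to end. Since the Hamy mean details are not reproduced here, a weighted arithmetic mean of the linguistic subscripts stands in for the WLNCHM operator, and the score formula is the assumed one from the earlier sketch; both are hypothetical placeholders, not the paper's exact operators.

```python
def magdm_rank(matrix, weights, t):
    """matrix[i][j] = LNCN ((a, a2), (b, b2), (c, c2)) for alternative i
    and attribute j. A weighted arithmetic mean of subscripts stands in
    for the WLNCHM operator (assumption); ranking uses the score."""
    def score(g):
        (a, a2), (b, b2), (c, c2) = g
        return (4 * t + a + a2 - b - b2 - c - c2) / (6 * t)

    def aggregate(row):
        # Step 2 stand-in: collect truth, indeterminacy, falsity pairs
        comps = []
        for k in range(3):
            lo = sum(w * g[k][0] for w, g in zip(weights, row))
            hi = sum(w * g[k][1] for w, g in zip(weights, row))
            comps.append((lo, hi))
        return tuple(comps)

    scores = [score(aggregate(row)) for row in matrix]
    # Step 5: rank alternatives by descending score
    order = sorted(range(len(matrix)), key=lambda i: scores[i], reverse=True)
    return scores, order
```

For instance, an alternative rated with uniformly high truth and low falsity subscripts across all attributes ends up first in `order`.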

Numerical Applications
A corporation intends to choose one person as the area supervisor from four candidates (U_1–U_4), who are evaluated according to three attributes, shown as follows: ideological and moral quality (V_1), professional ability (V_2) and creative ability (V_3). The weights of the indicators are ẘ = (0.5, 0.3, 0.2).

Procedure
Case 1: If the weights of the attributes are completely unknown, then we use the suggested technique to solve the above problem; the decision making steps are as follows: Step 1: Let U = {U_1, U_2, U_3, U_4} be the set of alternatives and V = {V_1, V_2, V_3} be the set of attributes, and let S = (s_îj)_{4×3} be the decision matrix, which evaluates each alternative with respect to the given attributes. Step 2: Calculate g_î = WLNCHM(s_î1, s_î2, ..., s_în) to obtain the overall assessment value for each alternative U_î with respect to the attributes V_j.
Step 3: We utilize the entropy of LNCSs to calculate the attribute weights, i.e., we let s_j = (p_(αj,α̃j), p_(βj,β̃j), p_(γj,γ̃j)) be the LNCN and take G̊_k(s_j) as the weight of attribute V_j. Step 5: We find the values of the score function ϕ(S_î). Step 6: According to the values of the score function, the ranking of the candidates is confirmed as S_4 ≻ S_2 ≻ S_1 ≻ S_3, so S_4 is the best alternative. Case 2: If the DM gives information about the attribute weights and the weight vector is ẘ = (0.1, 0.5, 0.4), then the score functions ϕ(S_î) (î = 1, 2, 3, 4) of Case 2 are obtained as follows: ϕ(S_1) = 0.451, ϕ(S_2) = 0.435, ϕ(S_3) = 0.504, ϕ(S_4) = 0.492. The ranking of these score functions is S_3 ≻ S_4 ≻ S_1 ≻ S_2. Thus, due to the different attribute weights, the ranking of Case 2 differs from that of Case 1.
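The Case 2 ranking follows directly from the reported score values, as a short sketch confirms:

```python
# Case 2 score function values as reported in the text.
scores = {"S1": 0.451, "S2": 0.435, "S3": 0.504, "S4": 0.492}

# Rank alternatives by descending score (Step 6).
ranking = sorted(scores, key=scores.get, reverse=True)
print(" > ".join(ranking))  # S3 > S4 > S1 > S2
```

This reproduces the ranking S_3 ≻ S_4 ≻ S_1 ≻ S_2 stated above.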
In the MADM method, the attribute weights reflect the relative importance of the attributes in the decision process. However, due to issues such as data loss, time pressure and the incomplete field knowledge of the DMs, the information about attribute weights is often incompletely known or completely unknown. We therefore need methods to derive the weight vector of the attributes in order to rank the alternatives. In Case 2, the attribute weights are determined from the DMs' opinions or preferences, while Case 1 uses the entropy concept to determine the weight values of the attributes, which successfully limits the influence of subjective factors. Therefore, the entropy of the LNCS is applied in the decision process to give each attribute a more objective and reasonable weight.

Comparison Analysis
From the comparison analysis, one can see that the proposed method is more appropriate for expressing and handling indeterminate and inconsistent information in linguistic decision making problems, overcoming the shortcomings of several linguistic decision making methods in the existing work. In fact, most decision making problems based on different linguistic variables in the literature cannot express inconsistent and indeterminate linguistic results, whereas the linguistic method suggested in this study is a generalization of existing linguistic methods and can represent and handle linguistic decision making problems with LNN information. We also see that the proposed method retains much more information than the existing methods in [26,32,44]. In addition, the literature [26,32,44] agrees with our method on the best and worst alternatives but differs in the remaining order. The reason for the difference between the cited literature and our method may lie in the decision thought process: some initial information may be lost during the aggregation process, and thus the conclusions differ. Different aggregation operators may also produce different orders [32]; our methods are consistent with the aggregation operator and yet receive a different order. However, [32] may have some limitations with regard to the attributes: the weight vector is given directly, and the positive and negative ideal solutions are absolute. Beyond this, the ranking in the literature [26,32,44] differs from the proposed method; the reason may be uncertainty in the LNN membership, since the information is inevitably distorted in LIFNs. Our method develops the neutrosophic cubic theory and the decision making method under a linguistic environment and provides a new way of solving linguistic MAGDM problems with indeterminate and inconsistent information.

Conclusions
In this paper, we worked out the idea of LNCNs, their operational laws and some of their properties, and defined the score, accuracy and certain functions for ranking LNCNs. Then, we defined the LNCHM and WLNCHM operators. After that, we presented the entropy of LNCNs and used it to determine the attribute weights. Next, we developed a MAGDM method based on the WLNCHM operator to solve multi-attribute group decision making problems with LNCN information. Finally, we provided a numerical example of the developed method.