Article

Intelligent Separation and Identification of Sub-Information Based on Dynamic Mathematical Model

School of Mathematics and Statistics, Huanghuai University, Zhumadian 463000, China
*
Author to whom correspondence should be addressed.
Symmetry 2023, 15(2), 477; https://doi.org/10.3390/sym15020477
Submission received: 16 January 2023 / Revised: 4 February 2023 / Accepted: 6 February 2023 / Published: 10 February 2023
(This article belongs to the Special Issue Algebraic Systems, Models and Applications)

Abstract

P-sets (P stands for Packet) are a set pair with dynamic and law characteristics, composed of an internal P-set and an outer P-set; they are obtained by introducing dynamic characteristics into the Cantor set and thereby improving it. Based on P-sets, this paper presents the concepts of $\alpha^F$-sub-information, $\alpha^{\bar F}$-sub-information, and $(\alpha^F, \alpha^{\bar F})$-sub-information, and then studies the relationship between the generation of sub-information and its attributes, the process and structure of attribute reasoning, and the intelligent separation-acquisition of sub-information. These results are used to design a sub-information intelligent separation-identification algorithm. Finally, an application to the intelligent separation and identification of case information is given.

1. Introduction

Information $(x)$ in every information field shares the following characteristics:
(1)
Some information elements $x_i$ in $(x)$ lose their value of existence under certain conditions (the $x_i$ become redundant information of $(x)$); they are deleted from $(x)$, that is, the $x_i$ are moved out of $(x)$.
(2)
$(x)$ is incomplete information: some information elements $x_i$ are missing from $(x)$; the $x_i$ are added from outside $(x)$ to inside $(x)$, that is, the $x_i$ are moved from outside $(x)$ into $(x)$.
(3)
The information elements $x_i$ in $(x)$ have attributes (characteristics of $x_i$), and $(x)$ possesses an attribute set (attributes are divided into numeric attributes and non-numeric attributes).
Characteristics (1)–(3) are hidden in information systems and have either gone unnoticed or not attracted people's attention. In short, information $(x)$ is an attribute set with dynamic characteristics. To study the diversified recommendation problem on a real-world data set, a many-objective recommendation model and algorithm were constructed [1]. A privacy protection scheme that guarantees the safe sharing of unmanned aerial vehicle big data was designed [2]. A new method for recognizing pig face information images was proposed [3]. These references give algorithm designs for information (data) mining and recognition in different fields but do not involve characteristics (1)–(3). The focus of this research is on mathematical models and methods for investigating information $(x)$ with characteristics (1)–(3), together with their applications.
The idea of P-sets was put forth in [4]; P-sets are obtained by giving the Cantor set $X$ dynamic properties in order to improve it. Assuming that the attribute set of $X$ is $\alpha$, P-sets have the following dynamic properties:
  • Supplementing some attributes in $\alpha$, $\alpha$ becomes $\alpha^F$ and the Cantor set $X$ becomes the internal P-set $X^{\bar F}$, where $\alpha \subseteq \alpha^F$ and $X^{\bar F} \subseteq X$.
  • Deleting some attributes from $\alpha$, $\alpha$ becomes $\alpha^{\bar F}$ and the Cantor set $X$ becomes the outer P-set $X^F$, where $\alpha^{\bar F} \subseteq \alpha$ and $X \subseteq X^F$.
  • Supplementing some attributes in $\alpha$ and deleting others at the same time, $\alpha$ becomes $(\alpha^F, \alpha^{\bar F})$ and the Cantor set $X$ becomes the P-sets $(X^{\bar F}, X^F)$, where $\alpha^{\bar F} \subseteq \alpha \subseteq \alpha^F$ and $X^{\bar F} \subseteq X \subseteq X^F$.
The internal P-set matches characteristic (1), the outer P-set matches characteristic (2), and the P-sets match characteristics (1) and (2) together. Clearly, P-sets provide the theoretical foundation, method, and model preparation for this research.
Several researchers have investigated the theory and application of P-sets and have achieved notable results. The L. A. Zadeh fuzzy set is enhanced by means of P-sets, and a separated fuzzy set made up of an internal and an outer separated fuzzy set is proposed [5]. An algebraic model of P-sets has been proposed [6]. P-sets are extended to functions by adding an assistant set [7]. A new approach to studying big data with the P-set mathematical model has been proposed in [8,9]. By introducing P-sets into intuitionistic fuzzy sets, the intuitionistic fuzzy sets are improved and P-intuitionistic fuzzy sets are proposed [10]. More dynamic characteristics and applications of P-sets are discussed in [11,12,13]. P-information fusion and its applications based on P-sets are obtained in [14,15,16,17]. Inverse P-sets, the symmetric form of P-sets, are proposed in [18,19,20,21,22,23,24]. Function P-sets, the functional form of P-sets, are given in [25,26], and function inverse P-sets, the symmetric form of function P-sets, are also proposed in [27,28,29]. However, these references do not provide theoretical methods or algorithm designs for separating and identifying the information elements that change when the structural characteristic (the attribute set) of information $(x)$ changes.
Using the P-set mathematical model, this work examines the integration of P-sets with information systems and uses the attribute reasoning generated by P-sets as a way to discover fundamental facts hidden in the system. The relationship between sub-information generation and its attributes is discussed, along with the corresponding reasoning pattern and logical properties. The sub-information intelligent separation-identification theorems are proved, and a sub-information intelligent separation-identification algorithm is designed. The application to the intelligent separation and identification of case information is described at the end. The conceptual and theoretical findings presented in this study are new.
P-sets and their simple structure are introduced in the next section as theoretical background, in order to make the discussion and understanding of the conclusions easier.

2. Dynamic Model with Attribute Conjunction

A Cantor set $X = \{x_1, x_2, \ldots, x_q\} \subset U$ is given, and $\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_k\} \subset V$ is the attribute set of $X$. $X^{\bar F}$ is referred to as the internal P-set generated by $X$ (internal P-set for short) if
$X^{\bar F} = X - X^-$  (1)
where $X^-$ is referred to as the $\bar F$-element deleted set of $X$,
$X^- = \{x_i \mid x_i \in X, \ \bar f(x_i) = u_i \notin X, \ \bar f \in \bar F\}$  (2)
and the attribute set $\alpha^F$ of $X^{\bar F}$ satisfies
$\alpha^F = \alpha \cup \{\alpha_i \mid f(\beta_i) = \alpha_i \in \alpha^F, \ f \in F\}$  (3)
In Equation (3), $\beta_i \in V$, $\beta_i \notin \alpha$, and $f \in F$ changes $\beta_i$ into $f(\beta_i) = \alpha_i \in \alpha^F$; in Equation (1), $X^{\bar F} \neq \emptyset$, $X^{\bar F} = \{x_1, x_2, \ldots, x_p\}$, $p < q$; $p, q \in N^+$.
The Cantor set $X = \{x_1, x_2, \ldots, x_q\} \subset U$ is given, and $\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_k\} \subset V$ is the attribute set of $X$. $X^F$ is referred to as the outer P-set generated by $X$ (outer P-set for short) if
$X^F = X \cup X^+$  (4)
where $X^+$ is referred to as the $F$-element supplemented set of $X$,
$X^+ = \{u_i \mid u_i \in U, \ u_i \notin X, \ f(u_i) = x_i \in X^F, \ f \in F\}$  (5)
and the attribute set $\alpha^{\bar F}$ of $X^F$ satisfies
$\alpha^{\bar F} = \alpha - \{\beta_i \mid \bar f(\alpha_i) = \beta_i \notin \alpha, \ \bar f \in \bar F\}$  (6)
In Equation (6), $\alpha_i \in \alpha$, and $\bar f \in \bar F$ changes $\alpha_i$ into $\bar f(\alpha_i) = \beta_i \notin \alpha$, $\alpha^{\bar F} \neq \emptyset$; in Equation (4), $X^F = \{x_1, x_2, \ldots, x_r\}$, $q < r$; $q, r \in N^+$.
The set pair composed of the internal P-set $X^{\bar F}$ and the outer P-set $X^F$ is referred to as the P-sets generated by $X$ (P-sets for short), written as
$(X^{\bar F}, X^F)$  (7)
From Equations (1)–(7), we obtain:
$\{(X_i^{\bar F}, X_j^F) \mid i \in I, j \in J\}$  (8)
Equation (8) is referred to as the family of P-sets generated by $X$, which is the general expression of P-sets, where $I$ and $J$ are index sets.
From Equations (1)–(8), we obtain:
Proposition 1.
Under the condition of $F = \bar F = \emptyset$, the P-sets $(X^{\bar F}, X^F)$ and the Cantor set $X$ satisfy
$(X^{\bar F}, X^F)_{F = \bar F = \emptyset} = X$  (9)
Proof. 
1. If $F = \bar F = \emptyset$, then from Equation (3) we get $\alpha^F = \alpha \cup \{\alpha_i \mid f(\beta_i) = \alpha_i \in \alpha^F, f \in F\} = \alpha \cup \emptyset = \alpha$, because $\{\alpha_i \mid f(\beta_i) = \alpha_i \in \alpha^F, f \in F\} = \emptyset$; moreover, $X^- = \emptyset$ in Formula (2), so $X^{\bar F} = X - X^- = X - \emptyset = X$ in Formula (1).
2. If $F = \bar F = \emptyset$, then from Equation (6) we get $\alpha^{\bar F} = \alpha - \{\beta_i \mid \bar f(\alpha_i) = \beta_i \notin \alpha, \bar f \in \bar F\} = \alpha - \emptyset = \alpha$, because $\{\beta_i \mid \bar f(\alpha_i) = \beta_i \notin \alpha, \bar f \in \bar F\} = \emptyset$; moreover, $X^+ = \emptyset$ in Formula (5), so $X^F = X \cup X^+ = X \cup \emptyset = X$ in Formula (4).
From 1 and 2, the proposition follows. □
Proposition 2.
Under the condition of $F = \bar F = \emptyset$, the family of P-sets $\{(X_i^{\bar F}, X_j^F) \mid i \in I, j \in J\}$ and the Cantor set $X$ satisfy
$\{(X_i^{\bar F}, X_j^F) \mid i \in I, j \in J\}_{F = \bar F = \emptyset} = X$  (10)
The proof is similar to that of Proposition 1 and is omitted.
From Formulas (1)–(8) and Propositions 1 and 2, we can easily obtain the following dynamic characteristics of P-sets:
Under the condition that attributes are continuously supplemented in $\alpha$, $X$ generates internal P-sets $X_i^{\bar F}$; similarly, if attributes are continuously deleted from $\alpha$, $X$ dynamically generates outer P-sets $X_j^F$; if attributes are supplemented and deleted at the same time in $\alpha$, $X$ dynamically generates the P-sets $(X_i^{\bar F}, X_j^F)$, $i, j = 1, 2, \ldots, n$.
Remarks:
  • $U$ is a finite element domain and $V$ is a finite attribute domain.
  • $f \in F$ and $\bar f \in \bar F$ are element (attribute) transfers, $F = \{f_1, f_2, \ldots, f_n\}$ and $\bar F = \{\bar f_1, \bar f_2, \ldots, \bar f_n\}$ are families of element (attribute) transfers, and an element (attribute) transfer is a concept of function or transformation.
  • The characteristics of $f \in F$ are: for an element $u_i \in U$, $u_i \notin X$, $f \in F$ changes $u_i$ into $f(u_i) = x_i \in X^F$; for an attribute $\beta_i \in V$, $\beta_i \notin \alpha$, $f \in F$ changes $\beta_i$ into $f(\beta_i) = \alpha_i \in \alpha^F$.
  • The characteristics of $\bar f \in \bar F$ are: for an element $x_i \in X$, $\bar f \in \bar F$ changes $x_i$ into $\bar f(x_i) = u_i \notin X$; for an attribute $\alpha_i \in \alpha$, $\bar f \in \bar F$ changes $\alpha_i$ into $\bar f(\alpha_i) = \beta_i \notin \alpha$.
  • The dynamic characteristic of Formula (1) is the same as that of the down-counter $T = T - 1$.
  • The dynamic characteristic of Formula (4) is the same as that of the accumulator $T = T + 1$. For example, for Formula (4), $X_1^F = X \cup X_1^+$; letting $X = X_1^F$, we get $X_2^F = X_1^F \cup X_2^+ = (X \cup X_1^+) \cup X_2^+$, and so on.
Facts and attributes concerning the existence of P-sets:
$X = \{x_1, x_2, x_3, x_4, x_5\}$ is a finite commodity element set of five apples, and $\alpha = \{\alpha_1, \alpha_2, \alpha_3\}$ is the attribute set confined to $X$, where $\alpha_1$ denotes red color, $\alpha_2$ denotes sweet taste, and $\alpha_3$ denotes produced in Henan province of China. Obviously, each $x_i$ has the attributes $\alpha_1$, $\alpha_2$, and $\alpha_3$; the attribute $\alpha_i$ of $x_i$ satisfies the conjunctive normal form, $x_i \in X$, $i = 1, 2, \ldots, 5$; moreover
$\alpha_i = \alpha_1 \wedge \alpha_2 \wedge \alpha_3$
Let $\alpha_4$ denote a weight of 150 g. Supplementing the attribute $\alpha_4$ in $\alpha$, $\alpha$ is changed into $\alpha^F = \{\alpha_1, \alpha_2, \alpha_3\} \cup \{\alpha_4\}$, and $X$ is changed into the internal P-set $X^{\bar F} = X - \{x_4, x_5\} = \{x_1, x_2, x_3\}$. Clearly, each $x_i$ has the attributes $\alpha_1$, $\alpha_2$, $\alpha_3$, and $\alpha_4$, $x_i \in X^{\bar F}$, $i = 1, 2, 3$; moreover
$\alpha_i = (\alpha_1 \wedge \alpha_2 \wedge \alpha_3) \wedge \alpha_4 = \alpha_1 \wedge \alpha_2 \wedge \alpha_3 \wedge \alpha_4$
If the attribute $\alpha_3$ is deleted from $\alpha$, $\alpha$ is changed into $\alpha^{\bar F} = \{\alpha_1, \alpha_2, \alpha_3\} - \{\alpha_3\} = \{\alpha_1, \alpha_2\}$, and $X$ is changed into the outer P-set $X^F = X \cup \{x_6, x_7\} = \{x_1, x_2, x_3, x_4, x_5, x_6, x_7\}$. Clearly, each $x_i$ has the attributes $\alpha_1$ and $\alpha_2$, $x_i \in X^F$, $i = 1, 2, \ldots, 7$; moreover
$\alpha_i = (\alpha_1 \wedge \alpha_2 \wedge \alpha_3) - \alpha_3 = \alpha_1 \wedge \alpha_2$
This simple fact and its logical features can be accepted by anyone. The relationship among $X^{\bar F}$, $X^F$, and the finite ordinary element set $X$ is shown in Figure 1.
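To make the apple example concrete, the following Python sketch regenerates the internal and outer P-sets from an attribute table. It is an illustration only, not part of the original construction; the universe and the attribute memberships of $x_6$ and $x_7$ are assumptions chosen to reproduce the sets above.

# Python sketch of the apple example: internal and outer P-set generation.
# The universe U and the per-element attribute table are assumptions made for
# illustration (a1 = red, a2 = sweet, a3 = produced in Henan, a4 = weight 150 g).
U = {
    "x1": {"a1", "a2", "a3", "a4"},
    "x2": {"a1", "a2", "a3", "a4"},
    "x3": {"a1", "a2", "a3", "a4"},
    "x4": {"a1", "a2", "a3"},        # x4, x5 fail the supplemented attribute a4
    "x5": {"a1", "a2", "a3"},
    "x6": {"a1", "a2"},              # x6, x7 lie outside X but satisfy a1 and a2
    "x7": {"a1", "a2"},
}
X = {"x1", "x2", "x3", "x4", "x5"}
alpha = {"a1", "a2", "a3"}

def satisfying(attrs):
    # elements of U whose attribute conjunction contains every attribute in attrs
    return {x for x, held in U.items() if attrs <= held}

alpha_F = alpha | {"a4"}                 # supplement a4
X_inner = X & satisfying(alpha_F)        # internal P-set {x1, x2, x3}

alpha_Fbar = alpha - {"a3"}              # delete a3
X_outer = X | satisfying(alpha_Fbar)     # outer P-set {x1, ..., x7}

print(sorted(X_inner), sorted(X_outer))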
Agreement: $(x) = X$, $(x)^{\bar F} = X^{\bar F}$, $(x)^F = X^F$, $((x)^{\bar F}, (x)^F) = (X^{\bar F}, X^F)$.
Using the characteristics and concepts of P-sets from Section 2, we obtain the results of Section 3.

3. Sub-Information Generation and Attribute Relationships

Definition 1.
If the attribute set $\alpha$ of information $(x)$ and the attribute set $\alpha^F$ of information $(x)^{\bar F}$ satisfy
$\alpha^F = \alpha \cup \Delta\alpha$  (11)
then $(x)^{\bar F}$ is called the $\alpha^F$-sub-information dynamically generated by $(x)$.
In Equation (11), $\Delta\alpha \neq \emptyset$ and $\Delta\alpha \cap \alpha = \emptyset$; $\Delta\alpha$ is formed by attributes $\alpha_i$ outside $\alpha$ being transferred from outside $\alpha$ to inside $\alpha$, and the $\alpha^F$-sub-information $(x)^{\bar F}$ has the attribute set $\alpha^F$.
Definition 2.
If the attribute set $\alpha$ of information $(x)$ and the attribute set $\alpha^{\bar F}$ of information $(x)^F$ satisfy
$\alpha^{\bar F} = \alpha - \nabla\alpha$  (12)
then $(x)^F$ is called the $\alpha^{\bar F}$-sub-information dynamically generated by $(x)$.
In Equation (12), $\nabla\alpha \neq \emptyset$ and $\nabla\alpha \subseteq \alpha$; $\nabla\alpha$ is formed by attributes $\alpha_i$ inside $\alpha$ being transferred from inside $\alpha$ to outside $\alpha$, and the $\alpha^{\bar F}$-sub-information $(x)^F$ has the attribute set $\alpha^{\bar F}$.
Definition 3.
If the attribute set $(\alpha^F, \alpha^{\bar F})$ of $((x)^{\bar F}, (x)^F)$ satisfies
$(\alpha^F, \alpha^{\bar F}) = (\alpha \cup \Delta\alpha, \ \alpha - \nabla\alpha)$  (13)
then $((x)^{\bar F}, (x)^F)$ is called the $(\alpha^F, \alpha^{\bar F})$-sub-information dynamically generated by $(x)$.
Here, Equation (13) means $\alpha^F = \alpha \cup \Delta\alpha$ and $\alpha^{\bar F} = \alpha - \nabla\alpha$.
Proposition 3.
The information coefficient $\eta^{\bar F}$ of the $\alpha^F$-sub-information $(x)^{\bar F}$ is a point in the discrete interval $[0, 1]$, or
$\eta^{\bar F} \in [0, 1]$  (14)
Here, $\eta^{\bar F} = \mathrm{card}((x)^{\bar F}) / \mathrm{card}((x))$; $\eta = \mathrm{card}((x)) / \mathrm{card}((x)) = 1$ is the self-information coefficient of $(x)$; card = cardinal number; $[0, 1]$ is the unit discrete interval.
Proof. 
Because $0 < \mathrm{card}((x)^{\bar F}) < \mathrm{card}((x))$ and $\eta^{\bar F} = \mathrm{card}((x)^{\bar F}) / \mathrm{card}((x))$, we get $0 < \eta^{\bar F} < \eta = \mathrm{card}((x)) / \mathrm{card}((x)) = 1$; therefore $\eta^{\bar F} \in [0, 1]$, and Formula (14) is proved. □
Proposition 4.
The information coefficient $\eta^F$ of the $\alpha^{\bar F}$-sub-information $(x)^F$ is a point outside the discrete interval $[0, 1]$, or
$\eta^F \notin [0, 1]$  (15)
Similar to Proposition 3, Proposition 4 is easy to prove, and the proof is omitted.
Proposition 5.
The discrete interval $[\eta^{\bar F}, \eta^F]$ formed by the information coefficients $(\eta^{\bar F}, \eta^F)$ of the $(\alpha^F, \alpha^{\bar F})$-sub-information $((x)^{\bar F}, (x)^F)$ and the unit discrete interval $[0, 1]$ satisfy
$[\eta^{\bar F}, \eta^F] \supseteq [0, 1]$  (16)
Proof. 
Because $0 < \eta^{\bar F} = \mathrm{card}((x)^{\bar F}) / \mathrm{card}((x)) < 1$ and $\eta^F = \mathrm{card}((x)^F) / \mathrm{card}((x)) > 1$, the discrete interval $[\eta^{\bar F}, \eta^F]$ composed of $\eta^{\bar F}$ and $\eta^F$ and the unit discrete interval $[0, 1]$ satisfy $[\eta^{\bar F}, \eta^F] \supseteq [0, 1]$; Proposition 5 is obtained. □
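As a quick numerical check of Propositions 3–5, the following lines compute the information coefficients for the apple example of Section 2 (the cardinalities 5, 3, and 7 come from that example; the variable names are ours).

# Information coefficients for card((x)) = 5, card((x)^Fbar) = 3, card((x)^F) = 7.
card_x, card_inner, card_outer = 5, 3, 7
eta = card_x / card_x                  # self-information coefficient, always 1
eta_inner = card_inner / card_x        # 0.6: inside [0, 1]   (Proposition 3)
eta_outer = card_outer / card_x        # 1.4: outside [0, 1]  (Proposition 4)
assert 0 < eta_inner < 1 < eta_outer   # so [eta_inner, eta_outer] contains [0, 1] (Proposition 5)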

4. Attribute Reasoning and Sub-Information Intelligent Separation-Identification

Under the condition of supplementing attributes in $\alpha$, if
$(x)^{\bar F} = (x) - (x)^-$  (17)
then $(x)^-$ is called the $\alpha^F$-redundant set of $(x)^{\bar F}$, where $(x)^{\bar F}$ is the $\alpha^F$-sub-information generated by $(x)$ and $(x)^- \neq \emptyset$.
Under the condition of deleting attributes from $\alpha$, if
$(x)^F = (x) \cup \Delta(x)$  (18)
then $\Delta(x)$ is called the $\alpha^{\bar F}$-supplementary set of $(x)^F$, where $(x)^F$ is the $\alpha^{\bar F}$-sub-information generated by $(x)$ and $\Delta(x) \neq \emptyset$.
If $\alpha$ and $\alpha^F$ are the attribute sets of $(x)$ and $(x)^{\bar F}$, respectively, and they satisfy
if $\alpha \subseteq \alpha^F$, then $(x)^{\bar F} \subseteq (x)$  (19)
then Formula (19) is called the $\alpha^F$-attribute reasoning generated by the $\alpha^F$-sub-information $(x)^{\bar F}$; $\alpha \subseteq \alpha^F$ is called the $\alpha^F$-attribute reasoning condition, and $(x)^{\bar F} \subseteq (x)$ is called the $\alpha^F$-attribute reasoning conclusion. Here, "$\Rightarrow$" is equivalent to "if $\cdots$, then $\cdots$".
If $\alpha$ and $\alpha^{\bar F}$ are the attribute sets of $(x)$ and $(x)^F$, respectively, and they satisfy
if $\alpha^{\bar F} \subseteq \alpha$, then $(x) \subseteq (x)^F$  (20)
then Formula (20) is called the $\alpha^{\bar F}$-attribute reasoning generated by the $\alpha^{\bar F}$-sub-information $(x)^F$; $\alpha^{\bar F} \subseteq \alpha$ is called the $\alpha^{\bar F}$-attribute reasoning condition, and $(x) \subseteq (x)^F$ is called the $\alpha^{\bar F}$-attribute reasoning conclusion.
If $\alpha$ and $(\alpha^F, \alpha^{\bar F})$ are the attribute sets of $(x)$ and $((x)^{\bar F}, (x)^F)$, respectively, and they satisfy
if $(\alpha, \alpha^{\bar F}) \subseteq (\alpha^F, \alpha)$, then $((x)^{\bar F}, (x)) \subseteq ((x), (x)^F)$  (21)
then Formula (21) is called the $(\alpha^F, \alpha^{\bar F})$-attribute reasoning generated by the $(\alpha^F, \alpha^{\bar F})$-sub-information $((x)^{\bar F}, (x)^F)$; $(\alpha, \alpha^{\bar F}) \subseteq (\alpha^F, \alpha)$ is called the $(\alpha^F, \alpha^{\bar F})$-attribute reasoning condition, and $((x)^{\bar F}, (x)) \subseteq ((x), (x)^F)$ is called the $(\alpha^F, \alpha^{\bar F})$-attribute reasoning conclusion.
Formula (21) means: if $\alpha \subseteq \alpha^F$, then $(x)^{\bar F} \subseteq (x)$; and if $\alpha^{\bar F} \subseteq \alpha$, then $(x) \subseteq (x)^F$.
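The reasoning conditions of Formulas (19)–(21) are simply subset tests on attribute sets; a minimal sketch follows (the function names are assumptions for illustration).

# Attribute reasoning conditions (19)-(21) expressed as subset tests on attribute sets.
def condition_19(alpha, alpha_F):
    # alpha is a subset of alpha^F, licensing the conclusion (x)^Fbar ⊆ (x)
    return alpha <= alpha_F

def condition_20(alpha, alpha_Fbar):
    # alpha^Fbar is a subset of alpha, licensing the conclusion (x) ⊆ (x)^F
    return alpha_Fbar <= alpha

def condition_21(alpha, alpha_F, alpha_Fbar):
    # both (19) and (20) hold at once, licensing ((x)^Fbar, (x)) ⊆ ((x), (x)^F)
    return condition_19(alpha, alpha_F) and condition_20(alpha, alpha_Fbar)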
Theorem 1.
Conjunction extension theorem of the $\alpha^F$-sub-information attribute
If $\alpha_i$ is the attribute of $x_i \in (x)^{\bar F}$, then $\alpha_i$ satisfies
$\alpha_i = \left(\bigwedge_{\lambda=1}^{k} \alpha_\lambda\right) \wedge \left(\bigwedge_{\lambda=k+1}^{k+m} \alpha_\lambda\right)$  (22)
Proof. 
Let $\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_k\}$ be the attribute set of the information
$(x) = \{x_1, x_2, \ldots, x_q\}$
According to Formula (3), supplement the attributes $\alpha_{k+1}, \ldots, \alpha_{k+m}$ in $\alpha$ and delete the redundant information elements $x_i$ from $(x)$; at this time, $(x)$ generates $(x)^{\bar F} = \{x_1, x_2, \ldots, x_p\}$, $p < q$, and $(x)^{\bar F}$ has the attribute set
$\alpha^F = \{\alpha_1, \alpha_2, \ldots, \alpha_k, \alpha_{k+1}, \ldots, \alpha_{k+m}\}$
In other words, the information element $x_i \in (x)^{\bar F}$ has the attribute $\alpha_1 \wedge \alpha_2 \wedge \cdots \wedge \alpha_k \wedge \alpha_{k+1} \wedge \cdots \wedge \alpha_{k+m} = (\bigwedge_{\lambda=1}^{k} \alpha_\lambda) \wedge (\bigwedge_{\lambda=k+1}^{k+m} \alpha_\lambda)$. Therefore, the attribute $\alpha_i$ of $x_i \in (x)^{\bar F}$ satisfies $\alpha_i = (\bigwedge_{\lambda=1}^{k} \alpha_\lambda) \wedge (\bigwedge_{\lambda=k+1}^{k+m} \alpha_\lambda)$. □
Theorem 2.
Conjunction contraction theorem of the $\alpha^{\bar F}$-sub-information attribute
If $\alpha_j$ is the attribute of $x_j \in (x)^F$, then $\alpha_j$ satisfies
$\alpha_j = \left(\bigwedge_{\lambda=1}^{k} \alpha_\lambda\right) - \left(\bigwedge_{\lambda=k-n+1}^{k} \alpha_\lambda\right)$  (23)
Proof. 
If $(x)^F = \{x_1, x_2, \ldots, x_r\}$ is generated by $(x) = \{x_1, x_2, \ldots, x_q\}$, then $q < r$. $\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_k\}$ is the attribute set of the information $(x) = \{x_1, x_2, \ldots, x_q\}$, and the information element $x_j \in (x)$ has the attribute $\alpha_1 \wedge \alpha_2 \wedge \cdots \wedge \alpha_k$, or the attribute $\alpha_j$ of the information element $x_j \in (x)$ satisfies $\alpha_j = \bigwedge_{\lambda=1}^{k} \alpha_\lambda$. According to Formula (6), $(x)^F$ has the attribute set $\alpha^{\bar F} = \{\alpha_1, \alpha_2, \ldots, \alpha_{k-n}\}$, or the information element $x_j \in (x)^F$ has the attribute $\alpha_1 \wedge \alpha_2 \wedge \cdots \wedge \alpha_{k-n}$. Therefore, the attribute $\alpha_j$ of $x_j \in (x)^F$ satisfies $\alpha_j = (\bigwedge_{\lambda=1}^{k} \alpha_\lambda) - (\bigwedge_{\lambda=k-n+1}^{k} \alpha_\lambda)$. □
Theorem 3.
Separation-identification theorem of the $\alpha^F$-sub-information $(x)^{\bar F}$
If $(x)^{\bar F}$, $(x)$ and the attribute sets $\alpha^F$, $\alpha$ satisfy Formula (19), then
1. The $\alpha^F$-sub-information $(x)^{\bar F}$ is intelligently separated and identified within $(x)$, that is,
$IDE((x)^{\bar F}, (x)) \Leftrightarrow (x)^{\bar F} \subseteq (x)$  (24)
2. The information coefficient $\eta^{\bar F}$ of the $\alpha^F$-sub-information $(x)^{\bar F}$ and the information coefficient $\eta$ of $(x)$ satisfy
$\eta^{\bar F} - \eta < 0$  (25)
where in Formula (24), $IDE$ = identification.
Proof. 
From Formulas (1)–(3) in Section 2 and Formula (19) in Section 4 we obtain the following: under the condition $\alpha \subseteq \alpha^F$, the redundant information elements $x_i$ are deleted from $(x)$; hence there is a set $(x)^-$ such that $(x)^{\bar F} = (x) - (x)^-$ and $(x)^{\bar F} \subseteq (x)$, that is, $(x)^{\bar F}$ is identified with respect to $(x)$, and Formula (24) is obtained. Since $0 < \eta^{\bar F} = \mathrm{card}((x)^{\bar F}) / \mathrm{card}((x)) < 1$ and $\eta = \mathrm{card}((x)) / \mathrm{card}((x)) = 1$, we have $\eta^{\bar F} - \eta < 0$, which gives Formula (25). □
Inference 1.
The $\alpha^F$-sub-information $(x)_k^{\bar F}$ satisfying the $\alpha^F$-attribute reasoning condition $\alpha \subseteq \alpha^F$ is intelligently separated and identified within $(x)$ in the order $k = 1, 2, \ldots, n$, and
$(x)_n^{\bar F} = \min_{i=1}^{n} \{(x)_i^{\bar F}\}$  (26)
where in Formula (26), $(x)_n^{\bar F} \neq \emptyset$.
Proof. 
Given the information $(x) = \{x_1, x_2, \ldots, x_m\}$, $\alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_p\}$ is the attribute set of $(x)$, and the attribute $\alpha_j$ of the information element $x_j \in (x)$ satisfies $\alpha_j = \bigwedge_{\lambda=1}^{p} \alpha_\lambda$. Under the condition that attributes are constantly supplemented in $\alpha$, $\alpha$ generates $\alpha_1^F, \alpha_2^F, \ldots, \alpha_n^F$ in turn, with $\alpha_1^F \subseteq \alpha_2^F \subseteq \cdots \subseteq \alpha_{n-1}^F \subseteq \alpha_n^F$; at the same time, $(x)$ synchronously generates the $\alpha^F$-sub-information $(x)_k^{\bar F}$, $k = 1, 2, \ldots, n$. It is easy to see that $(x)_1^{\bar F} = (x) - (x)_1^-$, $(x)_2^{\bar F} = (x)_1^{\bar F} - (x)_2^- = ((x) - (x)_1^-) - (x)_2^-$, $\ldots$, $(x)_n^{\bar F} = (x)_{n-1}^{\bar F} - (x)_n^- = (x) - \bigcup_{t=1}^{n} (x)_t^-$; clearly, $(x)_n^{\bar F} \subseteq (x)_{n-1}^{\bar F} \subseteq \cdots \subseteq (x)_2^{\bar F} \subseteq (x)_1^{\bar F} \subseteq (x)$. That is, the $\alpha^F$-sub-information $(x)_k^{\bar F}$ is intelligently separated and identified within $(x)$ in the order $k = 1, 2, \ldots, n$, and $(x)_n^{\bar F} = \min_{i=1}^{n} \{(x)_i^{\bar F}\}$. □
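A small numerical sketch of Inference 1 follows; the element names and the deleted batches $(x)_k^-$ are arbitrary illustrative choices.

# Repeated attribute supplements delete element batches in turn, producing the
# nested chain (x) ⊇ (x)_1^Fbar ⊇ (x)_2^Fbar ⊇ ... ; the last term is the minimum.
x = {"x1", "x2", "x3", "x4", "x5", "x6"}
deleted_batches = [{"x6"}, {"x4", "x5"}, {"x2"}]    # assumed (x)_1^-, (x)_2^-, (x)_3^-

chain, current = [], set(x)
for batch in deleted_batches:
    current = current - batch            # (x)_k^Fbar = (x)_{k-1}^Fbar - (x)_k^-
    chain.append(set(current))

assert all(later <= earlier for earlier, later in zip([x] + chain, chain))
assert chain[-1] == min(chain, key=len)  # (x)_n^Fbar is the smallest sub-information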
Theorem 4.
Separation-identification theorem of the $\alpha^{\bar F}$-sub-information $(x)^F$
If $(x)^F$, $(x)$ and the attribute sets $\alpha^{\bar F}$, $\alpha$ satisfy Formula (20), then
1. The $\alpha^{\bar F}$-sub-information $(x)^F$ is intelligently separated and identified outside of $(x)$, that is,
$IDE((x)^F, (x)) \Leftrightarrow (x) \subseteq (x)^F$  (27)
2. The information coefficient $\eta^F$ of the $\alpha^{\bar F}$-sub-information $(x)^F$ and the information coefficient $\eta$ of $(x)$ satisfy
$\eta^F - \eta > 0$  (28)
Inference 2.
The $\alpha^{\bar F}$-sub-information $(x)_k^F$ satisfying the $\alpha^{\bar F}$-attribute reasoning condition $\alpha^{\bar F} \subseteq \alpha$ is intelligently separated and identified outside of $(x)$ in the order $k = 1, 2, \ldots, n$, and
$(x)_n^F = \max_{j=1}^{n} \{(x)_j^F\}$  (29)
The proofs of Theorem 4 and Inference 2 are similar to those of Theorem 3 and Inference 1, respectively, and are omitted. Inferences 3 and 4 are easily deduced from Theorems 3 and 4.
Inference 3.
The necessary and sufficient condition for the simultaneous intelligent separation and identification of the $\alpha^F$-sub-information $(x)^{\bar F}$ and the $\alpha^{\bar F}$-sub-information $(x)^F$ is that the attribute set $\alpha^F$ of the $\alpha^F$-sub-information $(x)^{\bar F}$ and the attribute set $\alpha^{\bar F}$ of the $\alpha^{\bar F}$-sub-information $(x)^F$ satisfy
$\alpha^F \supseteq \alpha^{\bar F}$  (30)
Inference 4.
The $(\alpha_k^F, \alpha_k^{\bar F})$-sub-information $((x)_k^{\bar F}, (x)_k^F)$ satisfying the $(\alpha^F, \alpha^{\bar F})$-attribute reasoning condition $(\alpha_k^F, \alpha_{k+1}^{\bar F}) \subseteq (\alpha_{k+1}^F, \alpha_k^{\bar F})$ constitutes a chain of $(\alpha^F, \alpha^{\bar F})$-sub-information $((x)^{\bar F}, (x)^F)$:
$((x)_n^{\bar F}, (x)_1^F) \subseteq ((x)_{n-1}^{\bar F}, (x)_2^F) \subseteq \cdots \subseteq ((x)_2^{\bar F}, (x)_{n-1}^F) \subseteq ((x)_1^{\bar F}, (x)_n^F)$  (31)

5. $\alpha^F$-Sub-Information $(x)^{\bar F}$ Intelligent Separation-Identification Algorithm

So that this section corresponds to the example in Section 6, only the $\alpha^F$-sub-information $(x)^{\bar F}$ intelligent separation-identification algorithm is given; it is one part of the intelligent separation-identification algorithm of the $(\alpha^F, \alpha^{\bar F})$-sub-information $((x)^{\bar F}, (x)^F)$. The complete $((x)^{\bar F}, (x)^F)$ intelligent separation-identification algorithm is composed of the $\alpha^F$-sub-information $(x)^{\bar F}$ algorithm and the $\alpha^{\bar F}$-sub-information $(x)^F$ algorithm. The $\alpha^F$-sub-information $(x)^{\bar F}$ intelligent separation-identification algorithm is shown in Figure 2.
The detailed process of the algorithm is as follows (an illustrative code sketch is given after the steps):
(1)
Algorithm preparation: the information $(x)$ and its attribute set $\alpha$ are given.
(2)
Under the condition that attributes are continuously obtained and supplemented, $(x)$ generates multiple $(x)_k^{\bar F}$ and $\alpha$ generates multiple $\alpha_k^F$, $k = 1, 2, \ldots, n$.
(3)
Under $\alpha^F$-attribute reasoning, the $\alpha_k^F$-inference base is intelligently generated.
(4)
Select attribute reasonings in the inference base to obtain the $\alpha^F$-sub-information $(x)_k^{\bar F}$ one by one, $k = 1, 2, \ldots, t$.
(5)
Define $(x)_0^{\bar F}$ as the target $\alpha^F$-sub-information and compare the $(x)_k^{\bar F}$ obtained in step (4) with the target sub-information $(x)_0^{\bar F}$; if $(x)_k^{\bar F} \neq (x)_0^{\bar F}$, return to steps (3) and (4) and repeat the algorithm cycle; if $(x)_k^{\bar F} = (x)_0^{\bar F}$, compare $(x)_k^{\bar F}$ with $\alpha_k^F$ to confirm the $\alpha_k^F$-sub-information $(x)_k^{\bar F}$, that is, $(x)_k^{\bar F}$ is identified.
(6)
The algorithm ends.
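The following Python sketch mirrors steps (1)–(6). It is one illustrative reading of Figure 2, not the authors' implementation; the representation of the inference base as a list of certified attribute batches, and all function and variable names, are assumptions.

# Sketch of the alpha^F-sub-information separation-identification algorithm.
# attribute_table maps each element of (x) to the attributes it is known to satisfy;
# attribute_batches lists the attribute groups obtained and certified in each round,
# playing the role of the alpha_k^F inference base; target is the optional (x)_0^Fbar.
def separate_identify(x, alpha, attribute_table, attribute_batches, target=None):
    alpha_k = set(alpha)                     # step (1): (x) and alpha are given
    sub_info = set(x)
    history = []
    for batch in attribute_batches:          # step (2): attributes obtained and supplemented
        alpha_k = alpha_k | set(batch)       # alpha_k^F, so the condition alpha ⊆ alpha_k^F holds
        sub_info = {xi for xi in sub_info    # steps (3)-(4): alpha_k^F-attribute reasoning deletes
                    if alpha_k <= attribute_table[xi]}  # elements failing the enlarged conjunction
        history.append((set(alpha_k), set(sub_info)))
        if target is not None and sub_info == set(target):
            break                            # step (5): target sub-information identified
    return sub_info, history                 # step (6): the algorithm ends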

6. Application of Intelligent Separation Identification of Case Information

The application example is taken from the reconnaissance and detection of the information of case $Q$; it is part of the complete example. For special reasons, the information of case $Q$ is represented by $(x)$, the information elements $x_i$ in $(x)$ represent the suspects of case $Q$, the attributes obtained at the scene of case $Q$ are denoted by $\alpha_i$, and the names of $x_i$ and $\alpha_i$ are omitted. The main idea of the example is as follows: under the condition that attributes $\alpha_i$ are gradually obtained, the $\alpha^F$-sub-information $(x)^{\bar F}$ of $(x)$ (the principal offenders of the case) is intelligently separated and identified from $(x)$. The initial information $(x)$ and attribute set $\alpha$ of case $Q$ are, respectively:
$(x) = \{x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_{10}\}$  (32)
$\alpha = \{\alpha_1, \alpha_2, \alpha_3\}$  (33)
I. If new attributes $\alpha_i$ ($i = 4, 5$) are obtained and certified for the $k$-th time, then according to Formula (3) in Section 2, the attribute set $\alpha_k^F$ is
$\alpha_k^F = \alpha \cup \{\alpha_4, \alpha_5\} = \{\alpha_1, \alpha_2, \ldots, \alpha_5\}$  (34)
Using the $\alpha^F$-attribute reasoning of Formula (19), under the condition $\alpha \subseteq \alpha_k^F$ and according to Formula (1) in Section 2, the $\alpha_k^F$-sub-information $(x)_k^{\bar F}$ is obtained:
$(x)_k^{\bar F} = (x) - \{x_2, x_4, x_5, x_8, x_9\} = \{x_1, x_3, x_6, x_7, x_{10}\}$  (35)
II. If new attributes $\alpha_i$ ($i = 6, 7, 8$) are obtained and certified for the $(k+1)$-th time, then according to Formula (3) in Section 2, the attribute set $\alpha_{k+1}^F$ is
$\alpha_{k+1}^F = \alpha_k^F \cup \{\alpha_6, \alpha_7, \alpha_8\} = \{\alpha_1, \alpha_2, \ldots, \alpha_8\}$  (36)
Using the $\alpha^F$-attribute reasoning of Formula (19), under the condition $\alpha_k^F \subseteq \alpha_{k+1}^F$ and according to Formula (1) in Section 2, the $\alpha_{k+1}^F$-sub-information $(x)_{k+1}^{\bar F}$ is obtained:
$(x)_{k+1}^{\bar F} = (x)_k^{\bar F} - \{x_1, x_6\} = \{x_3, x_7, x_{10}\}$  (37)
The elements $x_1, x_2, x_4, x_5, x_6, x_8, x_9$, which do not satisfy the supplemented attributes in Formulas (34) and (36), are deleted from $(x)$; $(x)_{k+1}^{\bar F}$ is generated by $(x)$, and $(x)_{k+1}^{\bar F}$ is intelligently separated and identified from $(x)$. Before the new attributes $\alpha_i$ were obtained, $x_3, x_7, x_{10}$ were hidden in $(x)$, and no one knew them.
Analysis and certification of case $Q$:
(1)
It can be seen from Formulas (34) and (35) that, since $\alpha_4, \alpha_5$ are supplemented into $\alpha_k^F$, Formula (33) becomes Formula (34) and Formula (32) becomes Formula (35). Each $x_i$ in $(x)_k^{\bar F}$ satisfies the attribute conjunction extension $\bigwedge_{\lambda=1}^{5} \alpha_\lambda$, or the element $x_i$ in $(x)_k^{\bar F}$ has the attribute $\alpha_i = (\bigwedge_{\lambda=1}^{3} \alpha_\lambda) \wedge (\bigwedge_{\lambda=4}^{5} \alpha_\lambda)$, satisfying Theorem 1. Hence there is $(x)^- = \{x_2, x_4, x_5, x_8, x_9\} \subset (x)$ such that $(x)_k^{\bar F} = (x) - (x)^- = \{x_1, x_3, x_6, x_7, x_{10}\}$; according to Theorem 3, $(x)_k^{\bar F}$ is separated and identified in $(x)$, and all suspects $x_1, x_3, x_6, x_7, x_{10}$ are examined.
(2)
As the investigation of the case deepens, the new attributes $\alpha_6, \alpha_7, \alpha_8$ are continuously obtained and supplemented into $\alpha_{k+1}^F$; Formula (34) becomes Formula (36) and Formula (35) becomes Formula (37). Each $x_i$ in $(x)_{k+1}^{\bar F}$ satisfies the attribute conjunction extension $(\bigwedge_{\lambda=1}^{5} \alpha_\lambda) \wedge (\bigwedge_{\lambda=6}^{8} \alpha_\lambda)$, or the element $x_i$ in $(x)_{k+1}^{\bar F}$ has the attribute $\alpha_i = (\bigwedge_{\lambda=1}^{3} \alpha_\lambda) \wedge \alpha_4 \wedge \alpha_5 \wedge (\bigwedge_{t=6}^{8} \alpha_t) = (\bigwedge_{\lambda=1}^{3} \alpha_\lambda) \wedge (\bigwedge_{t=4}^{8} \alpha_t)$, satisfying Theorem 1. Hence there is $(x)^- = \{x_1, x_6\} \subset (x)_k^{\bar F}$ such that $(x)_{k+1}^{\bar F} = (x)_k^{\bar F} - (x)^- = \{x_3, x_7, x_{10}\}$ and $(x)_{k+1}^{\bar F} \subseteq (x)_k^{\bar F}$; according to Theorem 3, $(x)_{k+1}^{\bar F} = \{x_3, x_7, x_{10}\}$ is separated and identified in $(x)_k^{\bar F}$. Case $Q$ was uncovered and five suspects were arrested, including three major suspects and two minor suspects.
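To check the arithmetic of steps I and II, the sketch from Section 5 can be run on the case-$Q$ data. The attribute table below is an assumption reverse-engineered from Formulas (35) and (37); the paper does not list which suspect satisfies which attribute.

# Assumed attribute table consistent with Formulas (35) and (37): all ten suspects
# satisfy a1-a3; only {x1, x3, x6, x7, x10} also satisfy a4, a5; of these, only
# {x3, x7, x10} also satisfy a6-a8.  Reuses separate_identify() from Section 5.
base = {"a1", "a2", "a3"}
table = {f"x{i}": set(base) for i in range(1, 11)}
for i in (1, 3, 6, 7, 10):
    table[f"x{i}"] |= {"a4", "a5"}
for i in (3, 7, 10):
    table[f"x{i}"] |= {"a6", "a7", "a8"}

result, history = separate_identify(set(table), base, table, [{"a4", "a5"}, {"a6", "a7", "a8"}])
assert history[0][1] == {"x1", "x3", "x6", "x7", "x10"}   # matches Formula (35)
assert result == {"x3", "x7", "x10"}                      # matches Formula (37)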

7. Discussion

The concept of sub-information is presented in this paper using the mathematical model of P-sets, and the dynamic change rule of the attributes of information elements is described accurately using the conjunctive expansion and contraction characteristics of attributes. By investigating the properties of the attribute conjunctions of information elements, we can discover useful information hidden in the information that is not immediately apparent. On this basis, attribute reasoning and the attribute characteristics of sub-information are used to derive the sub-information separation-identification theorems, and a specific algorithm is provided. In fact, dynamic characteristics are inherent in information systems. Compared with similar research, this paper proposes a new method for intelligent information mining and identification in dynamic information systems, and the algorithm design given is more practical.
This study combines the study of criminology in public security systems with information science; conditional information is refined and simplified, and theoretical results with significant practical value are obtained using a dynamic mathematical model. The examples in this paper are compressed and simplified versions of real cases; for certain reasons, the original details of the examples have been omitted. The mathematical model used in this study is linked to the background of the application, and the theoretical results obtained match that background. The research presented in this paper expands the theoretical and practical applications of information systems.

Author Contributions

Conceptualization, X.Z.; methodology, X.Z.; writing—original draft preparation, X.Z.; writing—review and editing, L.S.; software, L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 12171193, the Fund of Young Backbone Teachers in Henan Province under Grant 2021GGJS158, and the Key Project of the Henan Education Department under Grant 23B110012.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cao, B.; Zhao, J.W.; Lv, Z.H.; Yang, P. Diversified Personalized Recommendation Optimization Based on Mobile Data. IEEE Trans. Intell. Transp. Syst. 2021, 22, 2133–2139. [Google Scholar] [CrossRef]
  2. Lv, Z.H.; Qiao, L.; Hossain, M.S.; Choi, B.J. Analysis of Using Blockchain to Protect the Privacy of Drone Big Data. IEEE Netw. 2021, 35, 44–49. [Google Scholar] [CrossRef]
  3. Xu, S.Q.; He, Q.H.; Tao, S.B.; Chen, H.T.; Chai, Y.; Zheng, W.X. Pig Face Recognition Based on Trapezoid Normalized Pixel Difference Feature and Trimmed Mean Attention Mechanism. IEEE Trans. Instrum. Meas. 2023, 72, 3500713. [Google Scholar] [CrossRef]
  4. Shi, K.Q. P-sets and its applications. J. Adv. Syst. Sci. Appl. 2009, 9, 209–219. [Google Scholar]
  5. Shi, K.Q.; Li, S.W. Separated fuzzy set $(\tilde{A}^{\bar F}, \tilde{A}^F)$ and the intelligent fusion of fuzzy information. J. Shandong Univ. Nat. Sci. 2022, 57, 1–13. (In Chinese) [Google Scholar]
  6. Li, X.C. An algebraic model of P-sets. J. Shangqiu Norm. Univ. 2020, 36, 1–5. (In Chinese) [Google Scholar]
  7. Yu, X.Q.; Xu, F.S. Function p ( σ , τ ) -Set and Its Characteristics. J. Jilin Univ. Nat. Sci. 2018, 56, 53–59. (In Chinese) [Google Scholar]
  8. Shi, K.Q. Big data structure-logic characteristics and big data law. J. Shandong Univ. Nat. Sci. 2019, 54, 1–29. (In Chinese) [Google Scholar]
  9. Hao, X.M.; Shi, K.Q. Big data intelligent retrieval and big data block element intelligent separation. J. Comput. Sci. 2020, 47, 113–121. (In Chinese) [Google Scholar]
  10. Hao, X.M.; Chen, Y.T.; Li, M.W.; Li, X.R. P-intuitionistic fuzzy sets and its applications. J. Fuzzy Syst. Math. 2022, 36, 124–130. (In Chinese) [Google Scholar]
  11. Fan, C.X.; Lin, K.K. P-sets and the reasoning-identification of disaster information. Int. J. Converg. Inf. Technol. 2012, 7, 337–345. [Google Scholar]
  12. Lin, H.K.; Fan, C.X. The dual form of P-reasoning and identification of unknown attribute. Int. J. Digit. Content Technol. Its Appl. 2012, 6, 121–131. [Google Scholar]
  13. Zhang, X.Q. P-augmented matrix and its application in dynamic tracking recognition. J. Anhui Univ. Nat. Sci. 2022, 46, 53–58. (In Chinese) [Google Scholar]
  14. Tang, J.H.; Zhang, L.; Shi, K.Q. Outer P-information law reasoning and its application in intelligent fusion and separating of information law. J. Microsyst. Technol. 2018, 24, 4389–4398. [Google Scholar] [CrossRef]
  15. Zhang, X.Q.; Zhang, J.Y.; Shi, K.Q. Dynamic Boundary Characteristics of P-sets and Information Dynamic Fusion Generation. J. Xinyang Norm. Univ. 2022, 35, 364–368. (In Chinese) [Google Scholar]
  16. Hao, X.M.; Li, N.N. Quantitative characteristics and applications of P-information hidden mining. J. Shandong Univ. Nat. Sci. 2019, 54, 9–14. (In Chinese) [Google Scholar]
  17. Liu, J.Q.; Zhang, H.Y. Information P-dependence and P-dependence mining-sieving. J. Comput. Sci. 2018, 45, 202–206. (In Chinese) [Google Scholar]
  18. Shi, K.Q. Inverse P-sets. J. Shandong Univ. Nat. Sci. 2012, 47, 98–109. (In Chinese) [Google Scholar]
  19. Zhang, X.Q.; Shen, L.; Shi, K.Q. ( α F , α F ¯ ) -information fusion generated by information segmentation and its intelligent retrieval. J. Math. 2022, 10, 713. [Google Scholar] [CrossRef]
  20. Fan, C.X.; Huang, S.L. Inverse P-reasoning discovery identification of Inverse P-information. J. Intercont Digit. Content Technol. Its Appl. 2012, 6, 735–744. [Google Scholar]
  21. Lin, K.K.; Fan, C.X. Embedding camouflage of inverse P-information and application. Int. J. Converg. Inf. Technol. 2012, 7, 471–480. [Google Scholar]
  22. Yu, X.Q.; Xu, F.S. Random inverse packet information and its acquisition. J. Appl. Math. Nonlinear Sci. 2020, 5, 357–366. [Google Scholar] [CrossRef]
  23. Li, S.W.; Shi, K.Q. Inverse separated fuzzy set $(\bar{\tilde{A}}^F, \bar{\tilde{A}}^{\bar F})$ of fuzzy information and secure acquisition. J. Shandong Univ. Nat. Sci. 2022, 57, 1–14. (In Chinese) [Google Scholar]
  24. Zhang, L.; Ren, X.F. The relationship between abnormal information system and inverse P-augmented matrices. J. Shandong Univ. Nat. Sci. 2019, 54, 15–21. (In Chinese) [Google Scholar]
  25. Shi, K.Q. Function P-Sets. Int. J. Mach. Learn. Cybern. 2011, 2, 281–288. [Google Scholar] [CrossRef]
  26. Chen, B.H.; Zhang, L. Attribute Relations of Data Compound-decomposition and Data Intelligent Acquisition. J. Fuzzy Syst. Math. 2021, 35, 167–174. (In Chinese) [Google Scholar]
  27. Shi, K.Q. Function inverse P-sets and information law fusion. J. Shandong Univ. Nat. Sci. 2012, 47, 73–80. (In Chinese) [Google Scholar]
  28. Zhang, L.; Ren, X.F.; Shi, K.Q. Inverse p-matrix reasoning model-based the intelligent dynamic separation and acquisition of educational information. Microsyst. Technol. 2018, 24, 4415–4421. [Google Scholar] [CrossRef]
  29. Tang, J.H.; Chen, B.H.; Zhang, L.; Bai, X.R. Function inverse P-sets and the dynamic separation of inverse P-information laws. J. Shandong Univ. Nat. Sci. 2013, 48, 104–110. (In Chinese) [Google Scholar]
Figure 1. $X^{\bar F}$ and $X^F$ are the internal P-set and outer P-set generated by $X$, respectively; $X^{\bar F}$ and $X^F$ constitute the P-sets $(X^{\bar F}, X^F)$. $X^{\bar F}$ and $X^F$ are represented by solid lines, the finite ordinary element set $X$ is represented by dotted lines, and $U$ is a finite element domain.
Figure 2. The $\alpha^F$-sub-information $(x)^{\bar F}$ intelligent separation-identification algorithm.
