
The Auto-Diagnosis of Granulation of Information Retrieval on the Web

Department of Computer Science, Opole University of Technology, Proszkowska 76, 45-758 Opole, Poland
Algorithms 2020, 13(10), 264; https://doi.org/10.3390/a13100264
Received: 18 September 2020 / Revised: 9 October 2020 / Accepted: 13 October 2020 / Published: 16 October 2020
(This article belongs to the Special Issue Granular Computing: From Foundations to Applications)

Abstract

In this paper, a postulation on the relationship between the memory structure of the brain’s neural network and the representation of information granules in the semantic web is presented. In order to show this connection, abstract operations of inducing information granules are proposed to be used for the proposed logical operations systems, hereinafter referred to as: analysis, reduction, deduction and synthesis. Firstly, the searched information is compared with the information represented by the thesaurus, which is equivalent to the auto-diagnosis of this system. Secondly, triangular norm systems (information perception systems) are built for fuzzy or vague information. These are fuzzy sets. The introduced logical operations and their logical values, denoted as problematic, hypothetical, validity and decidability, are interpreted in these fuzzy sets. In this way, the granularity of the information retrieval on the Web is determined according to the type of reasoning.
Keywords: granular calculation; semantic web; triangular norms; information retrieval

1. Introduction

The simulation of conscious processes on the Web presented in [1] is an auto-diagnosis of information retrieval in this network, understood as matching retrieved information with the information represented by the thesaurus. The "auto-diagnosis agent awareness" model and its information technology (IT) interpretation, and then the logical basis for this interpretation, were described in the attribute language (AL).
The auto-diagnosis agent that searches for information on the Web processes computer-based sets of descriptions of knowledge represented by the Web. By applying the analogy to the conscious processes of humans, this knowledge is properly compared with the thesaurus: (1) concepts and roles that the agent knows, (2) assertions that the agent knows, and (3) axioms of the concepts and assertions use that the agent understands through the conceived rules. The information retrieval system that the agent knows and understands is called auto-diagnosis of information retrieval on the Web. The following problems were presented in the cited paper:
  • Determining the auto-diagnosis of the concept ‘I’ for the auto-diagnosis agent,
  • Whether the frequency of auto-diagnosis cycles (frequency of updating the auto-diagnosis history) should be consistent with the frequency of 40 Hz of the thalamus–cortex cycles,
  • Whether the frequency of synchronization of auto-diagnosis cycles with the stability of auto-diagnosis history should correspond to brain waves: alpha (8–12 Hz), beta (above 12 Hz), theta (4–8 Hz), delta (0.5–4 Hz).
It is intuitively assumed that the conceiving of information, represented by descriptions of something, is to discern, distinguish, and identify a thing. The information conceived about an object is preceded by the perception of the description of this object. It consists of references of certain information resources about the object to the degree of compliance of this information with precisely established knowledge, represented in a set of object descriptions called the thesaurus [2,3]. Thus, object perception determines the weight, rank, and importance of object descriptions representing information about this object. This also applies to sources of information about objects that point to these objects, which, in computer science, are called signs of objects and are different from their descriptions. Each such reference is called an information granule [3,4], and its occurrence is called data about the object.
The description of an information granule indicates something that the information is concerned with. On the Web, any description, and thus a description of an information granule, has an address made up of information in the memory of a computer connected to the network. This address indicates a sign available to a person, revealing what the information applies to, including a specific description of the object to which this information relates. This sign is, for example, a natural language expression describing the object or a representative of data about these objects, as well as their image or their sound characteristics. Granules are grouped into granule systems in which granular calculations are made, i.e., the information about the objects is interpreted. For precisely established knowledge, granules are data sets. When an incorrect classification of objects is used to represent knowledge, granules cannot be described by abstract data sets. In this situation, the following non-standard formalisms of set theory [5] are proposed for determining information granules: interval analysis [6], fuzzy sets [7,8,9], rough sets [10,11,12,13,14] and shadowed sets [5,15,16]. In the indicated papers, as well as in other papers on granular calculations, a theory of information granules was missing. The description of the information granule system was specified in [1,2]. The paper [2] also shows the structure of inducing the granule system by fuzzy algebra as a perception system. In this paper, a postulation of the relationship between the structure of the brain's neural network memory and the semantic web is continued [1,17].
With the perception of knowledge in the brain, i.e., human information processing, heuristics and psychology strongly connect analysis, reduction, deduction and synthesis. Classical logic has already described the deduction system well, as a bivalent propositional calculus in which statements are true (valid) or not true (not valid). A statement is true if it is compatible with what is known from the thesaurus; when there is no such compatibility, the statement is not true. Deduction, in the psychological and heuristic sense, is supported by the processes of analysis, reduction and synthesis. They are described in Section 3.1, 'Algorithm for the compatibility of ontology expressions with thesaurus expressions'. Any intelligence, human or artificial, can process information using the processes indicated above. Of course, the complexity of each process, in the context of overall processing, is different. The paper [18] presents the self-diagnosis system of searching for information by Web search engines, including automatic information search, as a model of a conscious system. Therefore, referring to the problems cited above, a hypothesis can be made:
The main frequencies (waves) of behaviour of intelligence systems in information processing are the frequencies (waves) of the processes of analysis, reduction, deduction and synthesis and result from the complexity of these processes.
This paper limits the understanding of this hypothesis only to the self-diagnosis system of searching for information by the Web search engines. Only the first stage of verification of this hypothesis has been dealt with, i.e., the precise (formal) description of the analysis, reduction, deduction and synthesis processes in the Web. Only then will it be possible to estimate the overall complexity of these processes or for specific self-diagnosis systems. In this sense, in this paper, for the first time in the scientific literature, a formal conceptual apparatus is formulated for the description of logics: analysis, reduction, deduction, synthesis and relations between them. This will enable the specification of the algorithm given in the paper. Furthermore, for the first time, a second-order semantic network [2] is presented, in which there are concepts of statements and roles between concepts, called conjunctions (Section 2). Inspired by cybernetic terms of positive and negative feedback, they were introduced to describe information processing. This made it possible for the processing schemes to take the form of logical squares.
No reference was made to the latest works on information granulation on the Web and granular calculations, as they do not describe the systems of analysis, reduction, deduction and synthesis. In order to simplify the presentation of the theory, a standard structure of fuzzy set algebras was used to construct the perception of information searched for on the Web. In the future, similarly, other information perception systems can be used for this purpose.

2. Conceiving of Assertions on the Web

In 1999, as part of the World Wide Web Consortium (W3C), the Resource Description Framework (RDF) standard was developed; it is described on the W3C pages [19,20], and its model can be found in [21]. Taking this model as standard, knowledge representation in the semantic web can be formulated in the RDF markup language, which is a resource description model. A resource is anything that can be identified by a URI. One obtains RDF documents (descriptions of resources and relationships between them) and RDF schemas (resource classes, types of relationships, and types of restrictions). The RDF model is a set of statements, identified with the semantic web, which have the schema of triples—node, edge, node—presented in Figure 1.
This triple can be written schematically: subject-predicate-object
For example: John-is_father-Natalie.
Each element of this triple is a resource identified by URI. To avoid name conflicts, the concept of namespace is introduced. There are standard namespaces for RDF: rdfs, rdf, defining the basic RDF classes:
  • rdfs: Resource—class containing all resources,
  • rdfs: Class—class containing all classes and their instances,
  • rdf: Property—class containing all properties,
  • and other rdfs: Literal, rdfs: Datatype, rdf: langString, rdf: HTML and rdf: XMLLiteral.
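The triple model above can be sketched in a few lines of Python. The namespace URI `EX` and the helper names below are illustrative assumptions, not part of the RDF standard or of the paper:

```python
# A minimal sketch of the RDF triple model described above: each statement
# is a (subject, predicate, object) triple of URI-identified resources.
# The EX namespace and helper names are hypothetical examples.

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"   # standard rdf namespace
EX = "http://example.org/"                             # hypothetical namespace

def triple(subject, predicate, obj):
    """Build one RDF statement: node -- edge --> node."""
    return (subject, predicate, obj)

# 'John-is_father-Natalie' as a triple of URIs:
statement = triple(EX + "John", EX + "is_father", EX + "Natalie")

# A tiny RDF graph is just a set of such statements:
graph = {statement}

def objects_of(graph, subject, predicate):
    """All objects o such that (subject, predicate, o) is in the graph."""
    return {o for (s, p, o) in graph if s == subject and p == predicate}

print(objects_of(graph, EX + "John", EX + "is_father"))
```

Real applications would use an RDF library instead of bare tuples, but the set-of-triples view is exactly the model the text describes.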
The standard namespace will be identified, for simplicity, with the Web thesaurus. Any classes containing instances in which subjects or objects are connected to a certain predicate are called concepts, and predicates are called roles.
For any concepts A , B and role R, the notation A . R . B means:
concept instance A-R-concept instance B.
For example, when the concept A has the name ‘father’, and B has the name ‘daughter’, then the notation is as follows:
John-is_father-Natalie
The equivalent notation is: father John-is_father-daughter Natalie.
If x is an instance of concept A and y is an instance of concept B, then the notation is used:
x-A.R.B-y.
Thus, A . R . B is a role R limited to instances of concepts A , B .
The statements that x is an instance of the concept C, and that x, y are instances related by the role R, are called assertions of the concept C or the role R. They are written as x : C, (x, y) : R and read: x is in the concept C; x, y are in the role R. Dual concepts C^d and dual roles R^d are defined by the formulas:
x : C^d ⇔ ¬(x : C),
(x, y) : R^d ⇔ ¬((x, y) : R).
Such assertions and their dual assertions are treated as atomic assertions. Atomic formulas are formulas of the assertion language. The assertion formulas are any substitutions of the sentence calculus schemes with atomic assertions. For example, if, in the scheme ((p ⇒ q) ∧ p) ⇒ q, p is substituted by the atomic assertion x : C, and q by (x, y) : R, then the formula for these assertions with the description ϕ is given by (x, y) : ϕ =df ((x : C ⇒ (x, y) : R) ∧ x : C) ⇒ (x, y) : R, where (x, y) : ϕ is the assertion formula ϕ and is read as: the instances x, y are instances of the assertion occurring in the formula ϕ. The occurrence of an assertion is coded by assigning it the value one, and non-occurrence by the value zero. The fulfilment of the assertion connectives, such as negation ¬, conjunction ∧, disjunction ∨, implication ⇒ and exclusion \ (the assertion does not occur if the second one occurs), is specified in Table 1. It is similar to the propositional calculus.
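The connective values of Table 1 can be coded directly. This is a plain Boolean sketch in Python; reading the exclusion α \ β as ¬α ∧ β is an assumption that reproduces the table's values:

```python
# The five assertion connectives of Table 1 on {0, 1}: an assertion's
# occurrence is coded 1, non-occurrence 0. The exclusion a \ b ('the
# assertion does not occur if the second one occurs') is read here as
# (not a) and b, which reproduces the table's values.

def neg(a):        return 1 - a
def conj(a, b):    return a & b            # a AND b
def disj(a, b):    return a | b            # a OR b
def impl(a, b):    return disj(neg(a), b)  # a implies b
def excl(a, b):    return conj(neg(a), b)  # a \ b (exclusion)

# print all rows of the truth table
for a in (1, 0):
    for b in (1, 0):
        print(a, b, neg(a), disj(a, b), conj(a, b), impl(a, b), excl(a, b))
```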
Furthermore, the second-order semantic web [1,2] is described and then there are the concepts of assertion formulas and the roles between them, called connectives of assertion formulas.
Thus, concepts of atomic assertions of the form x : C, (x, y) : R are the sets Ψ1 = {x : C | x ∈ U, x ∈ C}, Ψ2 = {(x, y) : R | x ∈ U, y ∈ U, (x, y) ∈ R}, where U is the set of all instances in the Web, i.e., x : C is an instance of the concept Ψ1 when (x : C) ∈ Ψ1, and (x, y) : R is an instance of the concept Ψ2 when ((x, y) : R) ∈ Ψ2. Atomic assertions are instances of these concepts. Ψ1 is a concept of the assertion C, and Ψ2 is a concept of the role assertion R. For the assertion formula ϕ with assertions of the form (x1, x2, …, xk) : ϕ, its concept is:
Ψ = {(x1, x2, …, xk) : ϕ | x1, x2, …, xk ∈ U, (x1, x2, …, xk) : ϕ}.
It is assumed that:
¬Ψ1 =df {x : C^d | x ∈ U, x ∈ C^d},
¬Ψ2 =df {(x, y) : R^d | x ∈ U, y ∈ U, (x, y) ∈ R^d},
¬Ψ =df {(x1, x2, …, xk) : ϕ^d | x1, x2, …, xk ∈ U, (x1, x2, …, xk) : ϕ^d},
where ϕ^d is the formula dual to ϕ, in the sense of the propositional calculus:
∧^d =df ∨, ∨^d =df ∧, ⇒^d =df \, \^d =df ⇒,
and for any assertion formulas α , β :
(α^d)^d = α, (¬α)^d = ¬(α^d),
(α □ β)^d = α^d □^d β^d,
where □ is one of the assertion connectives ∨, ∧, ⇒, \. De Morgan's laws apply:
¬(α ∧ β) ⇔ (¬α ∨ ¬β), ¬(α ∨ β) ⇔ (¬α ∧ ¬β),
¬(α ⇒ β) ⇔ (¬α \ ¬β), ¬(α \ β) ⇔ (¬α ⇒ ¬β).
For any concepts of the assertion formulas Ψ1, Ψ2, and assertions α ∈ Ψ1, β ∈ Ψ2:
α, β are in the role (Ψ1 Ψ2) iff (¬α ⇒ β).
This role between α ∈ Ψ1 and β ∈ Ψ2 is called a positive coupling between these assertion formulas: if one does not belong to the concept Ψ1, then the second belongs to the concept Ψ2.
α, β are in the role (Ψ1 Ψ2) iff ¬(α ⇒ β).
This role between α ∈ Ψ1 and β ∈ Ψ2 is called a negative coupling between these assertion formulas: it is not true that if one belongs to the concept Ψ1, then the second belongs to the concept Ψ2.
α, β are in the role (Ψ1 Ψ2) iff ¬(α ⇒ β) ∧ ¬(β ⇒ α).
This role between α ∈ Ψ1 and β ∈ Ψ2 is called a negative feedback between these assertion formulas: it is not true that if one belongs to the concept Ψ1, then the second belongs to the concept Ψ2, and conversely.
α, β are in the role (Ψ1 Ψ2) iff (¬α ⇒ β) ∧ (¬β ⇒ α).
This role between α ∈ Ψ1 and β ∈ Ψ2 is called a positive feedback between these assertion formulas: if one does not belong to the concept Ψ1, then the second belongs to the concept Ψ2, and conversely.
Concepts Ψ1, Ψ2, Ψ3, Ψ4 of the assertion formulas create a set of the four roles defined above (positive and negative coupling, positive and negative feedback), called the logical square of these concepts (Figure 2).
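Under the same 0/1 coding of assertions, the four roles can be sketched as follows. The function names are illustrative; the propositional readings follow the 'iff' conditions above:

```python
# The four roles between assertion formulas, with assertions coded 0/1.
# Each role is defined exactly by its 'iff' condition in the text.

def impl(a, b): return (1 - a) | b   # a implies b
def neg(a):     return 1 - a

def positive_coupling(a, b):
    # if one does not occur, the second occurs: (not a) implies b
    return impl(neg(a), b)

def negative_coupling(a, b):
    # it is not the case that a implies b
    return neg(impl(a, b))

def negative_feedback(a, b):
    # neither direction of implication holds
    return negative_coupling(a, b) & negative_coupling(b, a)

def positive_feedback(a, b):
    # positive coupling in both directions
    return positive_coupling(a, b) & positive_coupling(b, a)
```

Note that, read this way, negative feedback is never satisfied on {0, 1}, since ¬(α ⇒ β) and ¬(β ⇒ α) cannot hold simultaneously in the bivalent case.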
It can be noted that:
  • Positive coupling is satisfied if the consequent is satisfied.
  • Negative coupling is satisfied if it is excluded that the antecedent is satisfied, when the consequent is satisfied.
  • Positive feedback is satisfied if the consequent is satisfied.
  • Negative feedback is satisfied if it is excluded that the antecedent is satisfied, when the consequent is satisfied.
For example, the following logical squares are satisfied for De Morgan’s law in Figure 3, Figure 4 and Figure 5.
Theorem 1.
There is a logical square:
Proof (Proof of Theorem 1).
For any assertion formulas α, β: each 'edge' of the square in Figure 6 is a substitution of an appropriate tautology of the propositional calculus by the formulas α, β, which proves the theorem. □
The square from the Theorem 1 is called the De Morgan logical square.
The following notation agreement can be accepted, where □ stands for any of the assertion connectives ∨, ∧, ⇒, \:
α_D □_D β_D =df α □ β, (α □ β)_D =df α_D □_D β_D,
where
α_D ⇔ α, β_D ⇔ β, 1_D =df 1, 0_D =df 0,
(1_A)_D =df 0_D, (0_A)_D =df 1_D,
(1_D)_A =df 0_A, (0_D)_A =df 1_A,
α_A □_A β_A =df (¬(¬(α_A)_D □_D ¬(β_A)_D))_A,
(α □ β)_A =df α_A □_A β_A,
(1_R)_D =df 0_D, (0_R)_D =df 1_D,
(1_D)_R =df 0_R, (0_D)_R =df 1_R,
α_R □_R β_R =df (¬((α_R)_D □_D (β_R)_D))_R,
(α □ β)_R =df α_R □_R β_R,
(1_S)_D =df 0_D, (0_S)_D =df 1_D,
(1_D)_S =df 1_S, (0_D)_S =df 0_S,
α_S □_S β_S =df (¬(α_S)_D □_D ¬(β_S)_D)_S,
(α □ β)_S =df α_S □_S β_S.

3. Compatibility of the Ontology Expressions with the Thesaurus Expressions

Below, a certain heuristic algorithm based on the experience of the author of this paper is presented. The algorithm allows us to determine how heuristic methods of information retrieval on the Web by the auto-diagnosis agent, such as analysis, reduction, deduction and synthesis, are understood. In this section, Formulas (17)–(29) are used to specify the logical operation tables used in the methods of analysis, reduction, deduction and synthesis.

3.1. Algorithm for the Compatibility of Ontology Expressions with Thesaurus Expressions

The procedure compares the expressions searched for on the Web with the thesaurus expressions: the auto-diagnosis agent downloads assertions X from the Web and checks them against the thesaurus (see Algorithm 1).
Algorithm 1
[Algorithm 1 is given as a figure in the original publication.]
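Since Algorithm 1 is published only as a figure, the following is a purely hypothetical sketch of the comparison step described in the surrounding text (download assertions X from the Web, then check each against the thesaurus); all names and data are invented for illustration:

```python
# Hypothetical sketch only: the paper's Algorithm 1 is given as a figure
# and is not reproduced here. This illustrates the comparison described in
# the text: assertions downloaded from the Web are checked for
# compatibility with the thesaurus expressions.

def compatible_with_thesaurus(assertions, thesaurus):
    """Split downloaded assertions into those matching the thesaurus and the rest."""
    matched, unmatched = [], []
    for assertion in assertions:
        (matched if assertion in thesaurus else unmatched).append(assertion)
    return matched, unmatched

# toy data, invented for illustration
thesaurus = {"John-is_father-Natalie", "Natalie-is_daughter-John"}
downloaded = ["John-is_father-Natalie", "John-is_uncle-Tom"]
print(compatible_with_thesaurus(downloaded, thesaurus))
```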

3.2. Deduction

It is assumed that 1_D means that the assertion formula is justified, while 0_D means that it is unjustified. In Table 2, we present the values of the connectives for the deduction.

3.3. Analysis

It is assumed that 1_A means that the assertion formula is problematic, while 0_A means that it is unproblematic. In Table 3, we present the values of the connectives for the analysis.
(1_A)_D =df 0_D, (0_A)_D =df 1_D,
(1_D)_A =df 0_A, (0_D)_A =df 1_A,
α_A □_A β_A =df (¬(¬(α_A)_D □_D ¬(β_A)_D))_A, where □ is one of the connectives ∨, ∧, ⇒, \.

3.4. Reduction

It is assumed that 1_R means that the assertion formula is hypothetical, while 0_R means that it is not hypothetical. In Table 4, we present the values of the connectives for the reduction.
(1_R)_D =df 0_D, (0_R)_D =df 1_D,
(1_D)_R =df 0_R, (0_D)_R =df 1_R,
α_R □_R β_R =df (¬((α_R)_D □_D (β_R)_D))_R, where □ is one of the connectives ∨, ∧, ⇒, \.

3.5. Synthesis

It is assumed that 1_S means that the assertion formula is resolved, while 0_S means that it is to be resolved. In Table 5, we present the values of the connectives for the synthesis.
(1_S)_D =df 0_D, (0_S)_D =df 1_D,
(1_D)_S =df 1_S, (0_D)_S =df 0_S,
α_S □_S β_S =df (¬(α_S)_D □_D ¬(β_S)_D)_S, where □ is one of the connectives ∨, ∧, ⇒, \.
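As a consistency check of Tables 2–5 (this is a verification sketch, not an algorithm from the paper), the three derived tables can be generated from the deduction table: reduction negates deduction pointwise, analysis swaps each connective with its dual, and synthesis negates analysis. This mirrors the subscript-translation formulas of Section 2:

```python
# Consistency sketch for Tables 2-5, treating the value 1 of each system
# as Boolean 1. Reduction = pointwise negation of deduction; analysis =
# deduction with each connective swapped for its dual (or<->and,
# imp<->exc); synthesis = pointwise negation of analysis.

VALS = [(1, 1), (0, 1), (1, 0), (0, 0)]   # the (alpha, beta) columns

def table(f):
    return [f(a, b) for a, b in VALS]

ded = {
    "or":  lambda a, b: a | b,
    "and": lambda a, b: a & b,
    "imp": lambda a, b: (1 - a) | b,
    "exc": lambda a, b: (1 - a) & b,
}
dual = {"or": "and", "and": "or", "imp": "exc", "exc": "imp"}

ana = {c: ded[dual[c]] for c in ded}                                   # dual swap
red = {c: (lambda f: lambda a, b: 1 - f(a, b))(ded[c]) for c in ded}   # negation
syn = {c: (lambda f: lambda a, b: 1 - f(a, b))(ana[c]) for c in ana}   # negated analysis

print(table(ded["imp"]))  # deduction implication row: [1, 1, 0, 1]
print(table(red["imp"]))  # reduction implication row: [0, 0, 1, 0]
```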

4. Assertion Perception Systems in de Morgan Algebras

4.1. Triangular Norms of Deduction

The t-norms presented in this section are analogous to the standard t-norms described in the literature [7,24,25].

4.1.1. Deduction Norms

The operation ⊗_D : [0, 1] × [0, 1] → [0, 1] is called the t-norm of deduction in [0, 1], or the triangular norm of deduction, when it satisfies the following conditions for any numbers x, y, z ∈ [0, 1]:
  • boundary conditions
    0 ⊗_D y = 0, y ⊗_D 1 = y,
  • monotonicity
    x ⊗_D y ≤ z ⊗_D y, when x ≤ z,
  • commutativity
    x ⊗_D y = y ⊗_D x,
  • associativity
    x ⊗_D (y ⊗_D z) = (x ⊗_D y) ⊗_D z,
  • there exists x →_D y = sup { t ∈ [0, 1] : x ⊗_D t ≤ y }.
The operation →_D : [0, 1] × [0, 1] → [0, 1] so described is called a t-residuum in the set [0, 1].
The operations − : [0, 1] → [0, 1], ⊕_D : [0, 1] × [0, 1] → [0, 1], ⇝_D : [0, 1] × [0, 1] → [0, 1] are described by the following formulas for any numbers x, y ∈ [0, 1]:
−x = 1 − x,
x ⊕_D y = 1 − ((1 − x) ⊗_D (1 − y)),
x ⇝_D y = 1 − ((1 − x) →_D (1 − y)),
and are called the complement, the s-norm of deduction (triangular co-norm of deduction) and the s-residuum of deduction (co-residuum of deduction), respectively.
The s-norm satisfies the following conditions:
Theorem 2.
For any numbers x, y, z ∈ [0, 1]:
  • boundary conditions
    0 ⊕_D y = y, y ⊕_D 1 = 1,
  • monotonicity
    x ⊕_D y ≤ z ⊕_D y, when x ≤ z,
  • commutativity
    x ⊕_D y = y ⊕_D x,
  • associativity
    x ⊕_D (y ⊕_D z) = (x ⊕_D y) ⊕_D z,
  • there exists
    x ⇝_D y = inf { t ∈ [0, 1] : y ≤ x ⊕_D t }.
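The axioms leave the deduction t-norm abstract. As an illustration, one may take the minimum t-norm (a standard choice from the cited literature) and derive the remaining operations by the formulas of this subsection:

```python
# Illustrative instantiation of the deduction norms: the minimum t-norm
# and the operations derived from it by the formulas of this subsection.
# The choice of minimum is an assumption; the paper keeps the t-norm abstract.

def t_norm(x, y):
    """x (t-norm) y: the minimum t-norm."""
    return min(x, y)

def t_residuum(x, y):
    """x -> y = sup{t in [0,1] : min(x, t) <= y} (the Goedel residuum)."""
    return 1.0 if x <= y else y

def compl(x):
    """Complement: -x = 1 - x."""
    return 1.0 - x

def s_norm(x, y):
    """x (s-norm) y = -((-x) t-norm (-y)); for minimum this is maximum."""
    return compl(t_norm(compl(x), compl(y)))

def s_residuum(x, y):
    """x (s-residuum) y = -((-x) -> (-y))."""
    return compl(t_residuum(compl(x), compl(y)))

# boundary conditions: 0 (t) y = 0, y (t) 1 = y; 0 (s) y = y, y (s) 1 = 1
print(t_norm(0.0, 0.7), t_norm(0.7, 1.0), s_norm(0.0, 0.7), s_norm(0.7, 1.0))
# → 0.0 0.7 0.7 1.0
```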
The remaining norms are introduced by the following notation:

4.1.2. Analysis Norms

The t-norm of analysis:
x ⊗_A y =df −(−x ⊗_D −y),
The s-norm of analysis:
x ⊕_A y =df −(−x ⊕_D −y),
The t-residuum of analysis:
x →_A y =df −(−x →_D −y),
The s-residuum of analysis:
x ⇝_A y =df −(−x ⇝_D −y).

4.1.3. Reduction Norms

The t-norm of reduction:
x ⊗_R y =df −(x ⊗_D y),
The s-norm of reduction:
x ⊕_R y =df −(x ⊕_D y),
The t-residuum of reduction:
x →_R y =df −(x →_D y),
The s-residuum of reduction:
x ⇝_R y =df −(x ⇝_D y).

4.1.4. Synthesis Norms

The t-norm of synthesis:
x ⊗_S y =df −x ⊗_D −y,
The s-norm of synthesis:
x ⊕_S y =df −x ⊕_D −y,
The t-residuum of synthesis:
x →_S y =df −x →_D −y,
The s-residuum of synthesis:
x ⇝_S y =df −x ⇝_D −y.
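Following the complement patterns of the definitions above (an interpretation reconstructed from the text, mirroring the subscript-translation formulas of Section 2), the derived t-norms can be sketched with the minimum t-norm standing in for the abstract deduction t-norm:

```python
# Sketch of the derived norms of Sections 4.1.2-4.1.4: each system's
# t-norm is obtained from the deduction t-norm by a complement pattern
# (analysis: complements inside and outside; reduction: complement
# outside; synthesis: complements inside). Minimum is the illustrative
# deduction t-norm; the pattern assignment is a reconstruction.

def t_D(x, y): return min(x, y)      # deduction t-norm (illustrative choice)
def c(x):      return 1.0 - x        # complement

def t_A(x, y): return c(t_D(c(x), c(y)))   # analysis:  -(-x (t_D) -y)
def t_R(x, y): return c(t_D(x, y))         # reduction: -(x (t_D) y)
def t_S(x, y): return t_D(c(x), c(y))      # synthesis: -x (t_D) -y

print(t_A(0.3, 0.8), t_R(0.3, 0.8), t_S(0.3, 0.8))
```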

4.1.5. De Morgan Algebra

It is easy to check the following Theorem:
Theorem 3.
The tables of the values of the operations—complement, t-norm, s-norm, t-residuum and s-residuum—restricted to the values zero and one, for deduction, analysis, reduction and synthesis, respectively, are isomorphic with the tables of the logical values of negation, disjunction, conjunction, implication and exclusion of assertions, as given in Table 2, Table 3, Table 4 and Table 5 for the deduction, analysis, reduction and synthesis systems, respectively.
Algebras described in Theorem 3 are called the de Morgan algebras:
  • the algebra of deduction:
    NS_D = ⟨[0, 1], −, ⊗_D, ⊕_D, →_D, ⇝_D⟩,
  • the algebra of analysis:
    NS_A = ⟨[0, 1], −, ⊗_A, ⊕_A, →_A, ⇝_A⟩,
  • the algebra of reduction:
    NS_R = ⟨[0, 1], −, ⊗_R, ⊕_R, →_R, ⇝_R⟩,
  • the algebra of synthesis:
    NS_S = ⟨[0, 1], −, ⊗_S, ⊕_S, →_S, ⇝_S⟩.

4.2. Perception Systems

According to the suggestions given in the Introduction, object perception determines the weight, rank and validity of object descriptions representing information about an object, pointing to objects, i.e., to descriptions of objects called instances (objects belonging to the universe U). Each such reference is called an information granule for the description of an object [2,3].
It is postulated that only such perception sets P s will be considered for which only one perception with the same name exists for each μ P s used on the Web:
μ : U [ 0 , 1 ] .
In addition, for any de Morgan algebra NS_○, where ○ is one of {D, A, R, S}, for each x ∈ U and any perceptions μ1, μ2 ∈ Ps, there is exactly one perception μ3 ∈ Ps that fulfils one of these conditions:
  • Let μ3 =df −μ1; then (−μ1)(x) = −(μ1(x)),
  • Let μ3 =df μ1 ⊗ μ2; then (μ1 ⊗ μ2)(x) = μ1(x) ⊗ μ2(x),
  • Let μ3 =df μ1 ⊕ μ2; then (μ1 ⊕ μ2)(x) = μ1(x) ⊕ μ2(x),
  • Let μ3 =df μ1 → μ2; then (μ1 → μ2)(x) = μ1(x) → μ2(x),
  • Let μ3 =df μ1 ⇝ μ2; then (μ1 ⇝ μ2)(x) = μ1(x) ⇝ μ2(x).
The algebras thus defined, PS = ⟨Ps, −, ⊗, ⊕, →, ⇝⟩, are called, respectively, the perception systems of deduction, analysis, reduction and synthesis. When Ps is the set of all functions μ : U → [0, 1], identified in this paper with fuzzy sets, then we can talk about fuzzy set algebras.
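A perception system can be sketched as pointwise-lifted fuzzy-set operations. The toy universe, the membership values and the min/max choice of norms below are illustrative assumptions:

```python
# A sketch of a perception system: perceptions are fuzzy sets mu: U -> [0,1],
# and the algebra operations are lifted pointwise, as in the conditions
# above. Minimum/maximum serve as an illustrative t-norm/s-norm pair.

U = ["a", "b", "c"]                      # a toy universe of instances

mu1 = {"a": 0.2, "b": 0.9, "c": 0.5}     # two example perceptions
mu2 = {"a": 0.7, "b": 0.4, "c": 0.5}

def compl(mu):
    """(-mu)(x) = -(mu(x)) = 1 - mu(x), pointwise."""
    return {x: 1.0 - mu[x] for x in U}

def t_norm(mu_a, mu_b):
    """(mu1 (t) mu2)(x) = mu1(x) (t) mu2(x), with minimum as the t-norm."""
    return {x: min(mu_a[x], mu_b[x]) for x in U}

def s_norm(mu_a, mu_b):
    """(mu1 (s) mu2)(x) = mu1(x) (s) mu2(x), with maximum as the s-norm."""
    return {x: max(mu_a[x], mu_b[x]) for x in U}

print(t_norm(mu1, mu2))   # → {'a': 0.2, 'b': 0.4, 'c': 0.5}
print(compl(mu1)["a"])    # → 0.8
```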

5. Conclusions

The problems presented in the paper [1] listed in the Introduction were not resolved in this paper, but they indicated a certain direction for research. The structure of the perception of logical values has been revealed: problematic, hypothetical, justification and decidability, which may correspond to such neuronal processes of information processing in the human brain as stimulation, inhibition, conditioning and communication of neurons. It can also be assumed that these processes can be simulated by the methods described above: analysis, reduction, deduction and synthesis. The computational complexity of these processes may then justify the occurrence of so-called brain waves. These simulations can be improved, e.g., by the ideas of heuristics proposed in [22,23]. This research motivates us to formulate a strict theory of information granules induced by the systems of the perception of analysis, reduction, deduction and synthesis. This theory can be used in other fields of science—not only in computer engineering, but also in psychology or research on the human brain. The results obtained in this paper will be used in the future, mainly to refine the given heuristic algorithm for the compatibility of ontology expressions with thesaurus expressions on the Web. The aim of this research will be to extend information processing in the self-diagnosis system of automatic information retrieval by the Web search engines in accordance with this algorithm and to study the complexity of this processing.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Bryniarska, A. Autodiagnosis of Information Retrieval on the Web as a Simulation of Selected Processes of Consciousness in the Human Brain. In Biomedical Engineering and Neuroscience; Hunek, W.P., Paszkiel, S., Eds.; Advances in Intelligent Systems and Computing 720; Springer: Berlin/Heidelberg, Germany, 2018; pp. 111–120. [Google Scholar]
  2. Bryniarska, A. Certain information granule system as a result of sets approximation by fuzzy context. Int. J. Approx. Reason. 2019, 111, 1–20. [Google Scholar] [CrossRef]
  3. Peters, J.F. Discovery of perceptually near information granules. In Novel Developments in Granular Computing: Applications of Advanced Human Reasoning and Soft Computation; Yao, J.T., Ed.; Information Science Reference: Hershey, PA, USA, 2009. [Google Scholar]
  4. Peters, J.F.; Ramanna, S. Affinities between perceptual granules: Foundations and Perspectives. In Human-Centric Information Processing Through Granular Modelling; Bargiela, A., Pedrycz, W., Eds.; Springer: Berlin, Germany, 2009; pp. 49–66. [Google Scholar]
  5. Pedrycz, W. Allocation of information granularity in optimization and decision-making models: Towards building the foundations of Granular Computing. Eur. J. Oper. Res. 2014, 232, 137–145. [Google Scholar] [CrossRef]
  6. Moore, R. Interval Analysis; Prentice-Hall: Englewood Cliffs, NJ, USA, 1966. [Google Scholar]
  7. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
  8. Zadeh, L.A. Towards a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets Syst. 1997, 90, 111–117. [Google Scholar] [CrossRef]
  9. Zadeh, L.A. Toward a generalized theory of uncertainty (GTU) an outline. Inf. Sci. 2005, 172, 1–40. [Google Scholar] [CrossRef]
  10. Pawlak, Z. Rough sets. Int. J. Comp. Inform. Sci. 1982, 11, 341–356. [Google Scholar] [CrossRef]
  11. Pawlak, Z. Rough Sets: Theoretical Aspects of Reasoning about Data; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1991. [Google Scholar]
  12. Pawlak, Z.; Skowron, A. Rough membership functions. In Advances in the Dempster-Shafer Theory of Evidence; Yager, R.R., Fedrizzi, M., Kacprzyk, J., Eds.; Wiley: New York, NY, USA, 1994; pp. 251–271. [Google Scholar]
  13. Pawlak, Z.; Skowron, A. Rudiments of rough sets. Inf. Sci. 2007, 177, 3–27. [Google Scholar] [CrossRef]
  14. Pawlak, Z.; Skowron. A. Rough sets and Boolean reasoning. Inf. Sci. 2007, 177, 41–73. [Google Scholar] [CrossRef]
  15. Pedrycz, W. Shadowed sets: Representing and processing fuzzy sets. IEEE Trans. Syst. Man Cybern. Part B 1998, 28, 103–109. [Google Scholar] [CrossRef] [PubMed]
  16. Pedrycz, W. Knowledge-Based Clustering: From Data to Information Granules; John Wiley & Sons: Hoboken, NJ, USA, 2005. [Google Scholar]
  17. Lindsay, P.H.; Norman, D.A. Human Information Processing: An Introduction to Psychology; Academic Press, Inc.: New York, NY, USA, 1972. [Google Scholar]
  18. Bryniarska, A. The Paradox of the Fuzzy Disambiguation in the Information Retrieval. Int. J. Adv. Res. Artif. Intell. 2013, 2, 55–58. [Google Scholar] [CrossRef]
  19. Manola, F.; Miller, E. RDF Primer. 2004. Available online: http://www.w3.org/TR/rdf-primer/ (accessed on 15 October 2020).
  20. Resource Description Framework (RDF). RDF Working Group. 2004. Available online: http://www.w3.org/RDF/ (accessed on 15 October 2020).
  21. Lassila, O.; Swick, R.R. Resource Description Framework (RDF): Model and Syntax Specification. Rekomendacja W3C. 2003. Available online: http://www.w3.org/TR/REC-rdf-syntax/ (accessed on 15 October 2020).
  22. Michalewicz, Z.; Fogel, D.B. How to Solve It: Modern Heuristics; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  23. Kowalski, R. Logic for Problem Solving; North-Holland: New York, NY, USA; Amsterdam, The Netherlands; Oxford, UK, 1979. [Google Scholar]
  24. Bobillo, F.; Straccia, U. A Fuzzy Description Logic with Product T-norm. In Proceedings of the 2007 IEEE International Fuzzy Systems Conference, London, UK, 23–26 July 2007; pp. 1–6. [Google Scholar] [CrossRef]
  25. Sanchez, E. Fuzzy Logic and the Semantic Web; Elsevier: Amsterdam, The Netherlands, 2006; ISBN 13: 978-0-444-51948-1. [Google Scholar]
Figure 1. Connection between subject and object of a statement on the Semantic Web.
Figure 2. The logical square of the concepts Ψ 1 , Ψ 2 , Ψ 3 , Ψ 4 of the assertion formulas.
Figure 3. The logical square.
Figure 4. The logical square.
Figure 5. The logical square.
Figure 6. The logical square.
Table 1. Values of the assertion connectives.

| α | β | ¬α | α ∨ β | α ∧ β | α ⇒ β | α \ β |
|---|---|----|-------|-------|-------|-------|
| 1 | 1 | 0  | 1     | 1     | 1     | 0     |
| 0 | 1 | 1  | 1     | 0     | 1     | 1     |
| 1 | 0 | 0  | 1     | 0     | 0     | 0     |
| 0 | 0 | 1  | 0     | 0     | 1     | 0     |
Table 2. Values of the connectives for the deduction.

| α_D         | 1_D | 0_D | 1_D | 0_D |
| β_D         | 1_D | 1_D | 0_D | 0_D |
| ¬α_D        | 0_D | 1_D | 0_D | 1_D |
| α_D ∨_D β_D | 1_D | 1_D | 1_D | 0_D |
| α_D ∧_D β_D | 1_D | 0_D | 0_D | 0_D |
| α_D ⇒_D β_D | 1_D | 1_D | 0_D | 1_D |
| α_D \_D β_D | 0_D | 1_D | 0_D | 0_D |
Table 3. Values of the connectives for the analysis.

| α_A         | 1_A | 0_A | 1_A | 0_A |
| β_A         | 1_A | 1_A | 0_A | 0_A |
| ¬α_A        | 0_A | 1_A | 0_A | 1_A |
| α_A ∨_A β_A | 1_A | 0_A | 0_A | 0_A |
| α_A ∧_A β_A | 1_A | 1_A | 1_A | 0_A |
| α_A ⇒_A β_A | 0_A | 1_A | 0_A | 0_A |
| α_A \_A β_A | 1_A | 1_A | 0_A | 1_A |
Table 4. Values of the connectives for the reduction.

| α_R         | 1_R | 0_R | 1_R | 0_R |
| β_R         | 1_R | 1_R | 0_R | 0_R |
| ¬α_R        | 0_R | 1_R | 0_R | 1_R |
| α_R ∨_R β_R | 0_R | 0_R | 0_R | 1_R |
| α_R ∧_R β_R | 0_R | 1_R | 1_R | 1_R |
| α_R ⇒_R β_R | 0_R | 0_R | 1_R | 0_R |
| α_R \_R β_R | 1_R | 0_R | 1_R | 1_R |
Table 5. Values of the connectives for the synthesis.

| α_S         | 1_S | 0_S | 1_S | 0_S |
| β_S         | 1_S | 1_S | 0_S | 0_S |
| ¬α_S        | 0_S | 1_S | 0_S | 1_S |
| α_S ∨_S β_S | 0_S | 1_S | 1_S | 1_S |
| α_S ∧_S β_S | 0_S | 0_S | 0_S | 1_S |
| α_S ⇒_S β_S | 1_S | 0_S | 1_S | 1_S |
| α_S \_S β_S | 0_S | 0_S | 1_S | 0_S |