Article

Simplifying Implications with Positive and Negative Attributes: A Logic-Based Approach

by Francisco Pérez-Gámez, Domingo López-Rodríguez, Pablo Cordero, Ángel Mora and Manuel Ojeda-Aciego *
Departamento Matemática Aplicada, Universidad de Málaga, 29071 Málaga, Spain
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(4), 607; https://doi.org/10.3390/math10040607
Submission received: 15 December 2021 / Revised: 9 February 2022 / Accepted: 12 February 2022 / Published: 16 February 2022
(This article belongs to the Section Mathematics and Computer Science)

Abstract:
Concepts and implications are two facets of the knowledge contained within a binary relation between objects and attributes. Simplification logic (SL) has proved to be valuable for the study of attribute implications in a concept lattice, a topic of interest in the more general framework of formal concept analysis (FCA). Specifically, SL has become the kernel of automated methods to remove redundancy or obtain different types of bases of implications. Although originally FCA used only the positive information contained in the dataset, negative information (explicitly stating that an attribute does not hold) has been proposed by several authors, but without an adequate set of equivalence-preserving rules for simplification. In this work, we propose a mixed simplification logic and a method to automatically remove redundancy in implications, which will serve as a foundational standpoint for the automated reasoning methods for this extended framework.

1. Introduction

Since the 1980s, formal concept analysis (FCA) has been a solid framework to analyze data and extract hidden knowledge, comparable to other well-known techniques in terms of cost. Given a binary table, called a formal context, FCA can build ontologies similar to AI-based knowledge representation methods [1] but with a solid algebraic structure, in which order theory and logic are the main tools: given a formal context, FCA builds a hierarchical structure of concepts, the so-called concept lattice (indeed a complete lattice), which encompasses all the information in the formal context. Moreover, in the same process, FCA returns sets of implications and/or association rules (well known in other areas such as data mining, machine learning, and rough set theory) within a rich algebraic framework in which we can also compute closed sets and their minimal generators, pseudointents, different types of bases, etc. This knowledge can reveal interesting patterns and solve significant problems in modern areas such as social network analysis [2,3] or recommender systems [4,5].
In the classical framework of FCA, the formal context is a binary relation between the elements of two sets, that is, a relationship between a set of objects and a set of attributes, establishing the properties that each object satisfies (namely, positive information). Nevertheless, sometimes the properties that are not satisfied by each object (negative information) are also relevant. For instance, in a table in which objects are birds, the object “ostrich” does not have the property (attribute) “fly”. That is, for “ostrich”, not only the positive information is relevant (“large”, “heavy”, “fast”); the negative information (“does not fly”) is also relevant.
The management of negative attributes in association rules already appeared in [6,7]. Mining concise sets of association rules (also called bases) is of particular importance in the machine learning community, and, to this end, several works have been proposed that, on top of the classical minimum support–minimum confidence strategy, impose other measures of interestingness [8] or informativeness [9] based on statistical parameters, which help in the pruning of frequent item sets when determining a representative set of association rules.
In our work, we focus on exact association rules (also called implications); hence, we resort to the framework of FCA, where the first occurrences of negative information appear in Missaoui et al. [10,11], in which the authors computed mixed implications (with positive and negative attributes) from a double context formed by the initial context together with its opposite. This approach can generate a huge number of redundant rules, and the algorithms executed on the double context suffer from an increased execution time. Moreover, and more importantly, the relationship between positive and negative information (mixed attributes) is not considered.
In [12], Rodríguez et al. proposed a generalization of classical FCA to consider both positive and negative attributes together with the relationship between them. New derivation operators and a Galois connection were defined to extend the classical framework. In addition, algorithms to compute the mixed concept lattice and mixed implications were proposed; furthermore, an axiomatic system based on the simplification paradigm was proposed to manage mixed implications [13].
In different areas, such as artificial intelligence, database theory, data mining, and machine learning, one of the main problems is the huge degree of redundancy contained in the rules extracted from a dataset. The focus of this work is the study of redundancy elimination in implications with positive and negative attributes. We must recall that several authors have worked to remove redundancy in association rules in data mining. Zaki [14] used the notion of closed itemsets to reduce the set of rules, and also stated “the number of redundant rules is exponential in the length of the longest frequent itemsets”. Other works study the relaxation of the notion of closure and closed itemsets [15] to describe a compact set of association rules which is approximately informative in the sense that the support and confidence of the remaining association rules can be derived from this compact set with high accuracy. Very recent works [16,17] prove that this is still an open problem.
When dealing with implications, our team has worked on this problem, providing the axiomatic system of simplification logic [18], on which it is possible to develop automated methods to remove redundancy in formal concept analysis, which constitute the core of the methods to obtain bases of implications [19]. However, all these works consider only the positive information in the dataset. To the best of our knowledge, no methods have been developed to eliminate redundancy in the rules when considering both positive and negative attributes within the rules.
In this paper, we develop an automated logic-based method to remove redundancy in a set of mixed implications, that is, implications relating positive and negative attributes. For this, we propose the notion of a simplified mixed implicational system and an axiomatic system equivalent to that given in [12], but with rules better suited for implementation. The main idea is a set of logical equivalences oriented to detect redundancy and contradictions between positive and negative attributes, and therefore to simplify the set of implications, by removing attributes inside a single implication or even removing the whole implication.
The rest of this work is structured as follows: in Section 2, we present some preliminary notions about FCA and implicational systems with positive and mixed attributes. In Section 3, the idea of a simplified mixed implicational system is presented together with its motivation. In Section 4, we present logical equivalence rules especially suited for implications with mixed attributes, with the purpose of simplifying the set of implications. The algorithm to build simplified systems of implications is presented in Section 5, and a thorough experimental evaluation of the simplification achieved by the proposed algorithm is given in Section 6, as well as a discussion of the obtained results. The conclusions and future research lines are presented in Section 7. For the sake of readability, the proofs of the technical results of this work are collected in Appendix A.

2. Preliminary Notions and Results

Formal concept analysis (FCA) [20,21] is a mathematical theory based on lattice theory which analyses the information given in a formal context, i.e., a relationship between a set of objects and a set of attributes stored in a table. Usually, a formal context is given by a triple K = (G, M, I), where G is the set of all the objects, M is the set of all the attributes, and I is called the incidence relation, defined as I : G × M → {0, 1} with I(g, m) = 1 if the object g has the attribute m and I(g, m) = 0 otherwise.
FCA extracts the information stored in a formal context by using the derivation operators, namely, the pair of mappings (↑, ↓) defined as follows: given X ⊆ G, we have X↑ = {m ∈ M ∣ I(g, m) = 1 for all g ∈ X}, that is, the attributes shared by all the objects in X; and, given Y ⊆ M, we have Y↓ = {g ∈ G ∣ I(g, m) = 1 for all m ∈ Y}, that is, all the objects that share all the attributes in Y. Using the derivation operators, a formal concept is defined as a pair (X, Y) satisfying X↑ = Y and Y↓ = X; the set of formal concepts can be ordered by inclusion in the first component, and this ordering provides the structure of a complete lattice.
Another type of information provided by a formal context is given by the so-called attribute implications, which are expressions of the form A → B where A and B are sets of attributes, i.e., A, B ⊆ M for a given formal context K = (G, M, I). We are interested in implications that hold in the formal context K: an implication A → B holds in K if A↓ ⊆ B↓, i.e., if all the objects that have all the attributes from A satisfy the attributes from B as well. We denote by L_M the set of all the implications in M.
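To make these definitions concrete, the following minimal Python sketch (a toy bird context of ours, not taken from the paper) implements the derivation operators and the validity test for implications:

```python
# Hypothetical toy context, chosen for illustration only.
G = {"ostrich", "eagle", "penguin"}
M = {"flies", "large", "fast"}
I = {("eagle", "flies"), ("eagle", "fast"),
     ("ostrich", "large"), ("ostrich", "fast"),
     ("penguin", "large")}

def up(X):
    """X up-arrow: attributes shared by all objects in X."""
    return frozenset(m for m in M if all((g, m) in I for g in X))

def down(Y):
    """Y down-arrow: objects having all attributes in Y."""
    return frozenset(g for g in G if all((g, m) in I for m in Y))

def holds(A, B):
    """A -> B holds in K iff down(A) is contained in down(B)."""
    return down(A) <= down(B)

print(up({"eagle"}))               # attributes of 'eagle': flies and fast
print(holds({"flies"}, {"fast"}))  # True: every flier in this context is fast
```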
Simplification logic (S) [18] was introduced as a means to remove redundancies in a given set of implications that hold in a formal context. The axiom system for S consists of the axiom schema [Ref] and the three inference rules [Frag], [Comp] and [Simp] given below, for all sets A, B, C, D ⊆ M:
[Ref]
Reflexivity: ⊢ A → A.
[Frag]
Fragmentation: (We will follow the usual convention in this research area of omitting the symbol ∪ whenever necessary. For instance, in this rule BC means B ∪ C.) A → BC ⊢ A → B.
[Comp]
Composition: A → B, C → D ⊢ AC → BD.
[Simp]
Simplification: A → B, C → D ⊢ A(C ∖ B) → D.
The notion of inference in S is defined as usual: let φ be an implication in L_M and let Σ be a set of implications in L_M. We say that φ is a syntactic consequence of Σ in S, denoted by Σ ⊢ φ, if there exists a sequence of implications φ1, …, φn such that φn = φ and, for all φi with 1 ≤ i ≤ n, either φi ∈ Σ or φi can be obtained by applying one of the rules of S to the implications in the set {φj ∣ j < i}.
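As an illustration (a sketch of ours, not code from the paper), each inference rule can be read as a simple set operation on implications represented as pairs of frozensets:

```python
# Implications are pairs (antecedent, consequent) of frozensets of attributes.
def frag(impl, part):
    """[Frag]: from A -> BC infer A -> B, for any part B of the consequent."""
    A, BC = impl
    assert part <= BC, "B must be contained in the original consequent"
    return (A, part)

def comp(i1, i2):
    """[Comp]: from A -> B and C -> D infer AC -> BD."""
    (A, B), (C, D) = i1, i2
    return (A | C, B | D)

def simp(i1, i2):
    """[Simp]: from A -> B and C -> D infer A union (C minus B) -> D."""
    (A, B), (C, D) = i1, i2
    return (A | (C - B), D)

# Example: from a -> b and ab -> c, [Simp] yields a -> c.
a, b, c = frozenset("a"), frozenset("b"), frozenset("c")
print(simp((a, b), (a | b, c)))  # (frozenset({'a'}), frozenset({'c'}))
```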
We recall below some derived rules from simplification logic, which we will use in this paper. The proof is straightforward using the definition of derivation. Given a set of attributes M, the following inference rules hold for all A , B , C , D M :
[GenRef]
Generalized reflexivity: ⊢ A → C if C ⊆ A.
[Augm]
Augmentation: A → B ⊢ AC → BC.
It is worth mentioning that some rules of simplification logic are in fact logical equivalences. The main equivalence rules in S are the following:
[FragEq]
{A → B} ≡ {A → B ∖ A}.
[UnEq]
{A → B, A → C} ≡ {A → BC}.
[GenEq]
{A → B, C → D} ≡ {A → BD} when A ⊆ C ⊆ AB.
[⌀-Eq]
{A → ∅} ≡ ∅.
[SimpEq]
{A → B, C → D} ≡ {A → B, C ∖ B → D ∖ B} when A ⊆ C ∖ B.
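These equivalences translate directly into size-reducing rewritings. The following sketch of ours, using the same pair representation as above, shows how [FragEq], [UnEq] and [SimpEq] could be coded:

```python
def frag_eq(impl):
    """[FragEq]: A -> B is equivalent to A -> B minus A."""
    A, B = impl
    return (A, B - A)

def un_eq(i1, i2):
    """[UnEq]: {A -> B, A -> C} is equivalent to {A -> BC}."""
    (A, B), (A2, C) = i1, i2
    assert A == A2, "both implications must share the antecedent"
    return (A, B | C)

def simp_eq(i1, i2):
    """[SimpEq]: if A is contained in C minus B, replace C -> D by C-B -> D-B."""
    (A, B), (C, D) = i1, i2
    assert A <= C - B
    return (A, B), (C - B, D - B)

# {a -> b, ab -> cd} becomes {a -> b, a -> cd}: one attribute fewer.
a, b = frozenset("a"), frozenset("b")
cd = frozenset({"c", "d"})
print(simp_eq((a, b), (a | b, cd)))
```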
Thus far, we have used the information explicitly given by the formal context, but have said nothing about pairs satisfying I(g, m) = 0. The point is that, in principle, I(g, m) = 0 does not mean that object g does not have the attribute m; it only means that we do not have any evidence either in favour or to the contrary. Some authors [11,12,22,23], however, assume that I(g, m) = 0 means that the object g does not have the attribute m. To the best of our knowledge, the paper [11] was the first of these approaches which, given K = (G, M, I), built a new formal context (K | K̄) = (G, M ∪ M̄, I*) where I*(g, m) = I(g, m) for all m ∈ M and I*(g, m̄) = min(1, 1 − I(g, m)); that is, I*(g, m̄) = 0 if I(g, m) = 1 and I*(g, m̄) = 1 otherwise. In this approach, the attributes in M are called positive and those in M̄ are called negative. Note that, with this view, we duplicate the number of columns and, as a consequence, the method is not the most efficient possible.
In this work, we adopt the approach considered in [12], in which I(g, m) = 0 means that the object g does not have the attribute m and, instead of duplicating the number of columns, the derivation operators are changed without changing the formal context.
The extended operators considered here are ⇑ : 2^G → 2^(M∪M̄) and ⇓ : 2^(M∪M̄) → 2^G, defined as follows:
X⇑ = {m ∈ M ∣ (g, m) ∈ I for all g ∈ X} ∪ {m̄ ∈ M̄ ∣ (g, m) ∉ I for all g ∈ X}
Y⇓ = {g ∈ G ∣ (g, m) ∈ I for all m ∈ Y ∩ M} ∩ {g ∈ G ∣ (g, m) ∉ I for all m̄ ∈ Y ∩ M̄}
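A possible Python encoding of the extended operators is shown below; representing negative attributes with a "~" prefix is our own arbitrary choice for illustration:

```python
def neg(x):
    """Negation on attribute names: neg('m') == '~m' and neg('~m') == 'm'."""
    return x[1:] if x.startswith("~") else "~" + x

def up_mixed(X, G, M, I):
    """X up double arrow: attributes shared by all of X, plus negations of those shared by none."""
    pos = {m for m in M if all((g, m) in I for g in X)}
    negs = {neg(m) for m in M if all((g, m) not in I for g in X)}
    return frozenset(pos | negs)

def down_mixed(Y, G, M, I):
    """Y down double arrow: objects having every positive attribute of Y and lacking every negated one."""
    pos = {m for m in Y if not m.startswith("~")}
    negs = {neg(m) for m in Y if m.startswith("~")}
    return frozenset(g for g in G
                     if all((g, m) in I for m in pos)
                     and all((g, m) not in I for m in negs))
```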
When considering both positive and negative attributes, we will say that we are using a mixed context; when working with K (resp. K̄) with the classical view, we say that we are working with a positive (resp. negative) context.
As in the classical case, we can define when an implication holds in a mixed context in terms of ⇑ and ⇓ as follows: A → B is valid in a mixed context (denoted by K ⊨ A → B) if and only if A⇓ ⊆ B⇓.
The axiom system of simplification logic was extended to this new approach with mixed attributes in [12,24].
The new axiomatic system contains one axiom schema [Ref] and four inference rules, [Simp], [Key], [InKey] and [Red]:
[Ref]
Reflexivity: ⊢ A → A.
[Simp]
Simplification: A → B, C → D ⊢ A(C ∖ B) → D.
[Key]
Key: (Following the convention, from here onwards b represents the singleton {b}, and Ab̄ means A ∪ {b̄}.) A → b ⊢ Ab̄ → MM̄.
[InKey]
Inverse key: Ab → MM̄ ⊢ A → b̄.
[Red]
Reduction: Ab → C, Ab̄ → C ⊢ A → C.
This new system is a proper extension of that of simplification logic in that rules [Frag] and [Comp] can be derived from the new axioms [12]. Our set of derived rules can be further extended by a version of the ex contradictione quodlibet and the contraposition rule:
[Cont]
Contradiction: ⊢ aā → MM̄.
[Rft]
Reflection: Aa → b ⊢ Ab̄ → ā.
Notice that [Key] is in fact the converse of [InKey], and they provide an equivalence between the implications Ab → MM̄ and A → b̄, which reflects the fact that, whenever the set A ∪ {b} is inconsistent, then A → b̄ should hold. Moreover, a version of the well-known cut rule arises as the equivalence between A → C and the set {Ab → C, Ab̄ → C}.

3. Simplified Mixed Implicational Systems

In order to formally define the notion of a simplified mixed implicational system, we will use the following notation:
Notation 1.
Hereafter, we will use X̄ := {x̄ : x ∈ X} and notice that the bar operator is an involution: x̿ = x for any x ∈ M ∪ M̄. Thus, x ∈ X̄ if, and only if, x̄ ∈ X. For any A, B ⊆ M ∪ M̄, the bar operator distributes over union and intersection: the bar of A ∪ B is Ā ∪ B̄, and the bar of A ∩ B is Ā ∩ B̄.
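Under the same "~" encoding used in the previous sketch (and reusing the neg helper defined there), Notation 1 becomes a one-line helper whose properties can be checked directly:

```python
def bar(X):
    """X-bar = {neg(x) for x in X}; reuses neg from the previous sketch."""
    return frozenset(neg(x) for x in X)

A, B = frozenset({"a", "~b"}), frozenset({"~b", "c"})
assert bar(bar(A)) == A                  # the bar is an involution
assert bar(A | B) == bar(A) | bar(B)     # it distributes over union
assert bar(A & B) == bar(A) & bar(B)     # and over intersection
```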
Definition 1.
Let M be a set of attributes and Σ be an implicational system with attributes in M ∪ M̄. Σ is said to be a simplified mixed implicational system (or sm-implicational system, in short) if the following conditions hold for all A → B, C → D ∈ Σ:
 (i) 
B ≠ ∅ and A ∩ B = ∅;
 (ii) 
A = C implies B = D;
 (iii) 
A ⊊ C implies C ∩ B = ∅ = D ∩ B;
 (iv) 
B ≠ MM̄ and A ∩ Ā = ∅;
 (v) 
If x ∈ A ∩ C̄ and A ∖ x = C ∖ x̄, then D ⊈ B.
The first three properties are inherited from the definition of a simplified implicational system by [19] and express the idea that the size of the problem cannot be reduced by applying the equivalence rules for positive attributes. We comment below on the ideas underlying conditions (iv) and (v), which are specific to the case of mixed attributes.
For (iv), the condition B ≠ MM̄ refers to the fact that, on the one hand, ∅ → MM̄ is not admissible and, on the other hand, if A ≠ ∅, we would have that A → MM̄ is equivalent to A ∖ x → x̄ for any x ∈ A (hence, the system would be reduced by considering the latter). Furthermore, by the derived rule [Cont], any implication of the form A → B with a contradiction in the antecedent (A ∩ Ā ≠ ∅) is valid; therefore, it can be safely removed from the implicational system.
Finally, (v) expresses syntactically a situation in which rule [Red] would apply. If A ∖ x = C ∖ x̄, there exists a set of attributes S such that A = Sx and C = Sx̄. In addition, if D ⊆ B, we would have
{A → B, C → D} = {Sx → B, Sx̄ → D} ≡ {Sx → B ∖ D, Sx → D, Sx̄ → D} (by [FragEq]) ≡ {Sx → B ∖ D, S → D} (by [Red]);
hence, the size of the system can be reduced.
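Definition 1 can be transcribed almost literally into a checker. The sketch below (ours, reusing the neg and bar helpers from the previous sections) tests conditions (i)–(v) on a finite system:

```python
def is_sm_system(sigma, all_attrs):
    """Check conditions (i)-(v) of Definition 1; sigma is a set of
    (frozenset, frozenset) pairs and all_attrs stands for M union M-bar."""
    for (A, B) in sigma:
        if not B or (A & B):                     # (i) B nonempty, A and B disjoint
            return False
        if B == all_attrs or (A & bar(A)):       # (iv) B != MM-bar, no contradiction in A
            return False
        for (C, D) in sigma:
            if (A, B) == (C, D):
                continue
            if A == C:                           # (ii) equal antecedents are forbidden
                return False
            if A < C and (C & B or D & B):       # (iii)
                return False
            for x in A & bar(C):                 # (v) [Red] must not be applicable
                if A - {x} == C - {neg(x)} and D <= B:
                    return False
    return True
```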
The next theorem provides a set of equivalence rules oriented to obtaining an automated method to build an sm-implicational system from a given one. For the sake of readability, the proofs of all the technical results are collected in Appendix A.
Theorem 1.
Let K = (G, M, I) be a formal context and A, B, C, D ⊆ M ∪ M̄. Then, the following equivalences hold:
[KeyEq]
If A ⊆ C and B ∩ C̄ ≠ ∅, then
{A → B, C → D} ≡ {A → B}.
[InKeyEq]
If there exists x ∈ A ∖ C such that A ∖ x ⊆ C, then
{A → MM̄, C → D} ≡ {A → MM̄, C → Dx̄}.
[RedEq]
If D ⊆ B and there exists x ∈ A ∩ C̄ with A ∖ x ⊆ C ∖ x̄, then
{A → B, C → D} ≡ {A → B, C ∖ x̄ → D}.
By using the previous theorem, we propose the following inference rules, which are more convenient to be implemented algorithmically.
[Ref]
Reflexivity: ⊢ A → A.
[Simp]
Simplification: A → B, C → D ⊢ A(C ∖ B) → D.
[Key′]
Key: A → B, C → D ⊢ C → MM̄ if A ⊆ C and B ∩ C̄ ≠ ∅.
[InKey′]
Inverse key: A → MM̄, C → D ⊢ C → Dx̄ if there exists x ∈ A ∖ C such that A ∖ x ⊆ C.
[Red′]
Reduction: A → B, C → D ⊢ C ∖ x̄ → D if D ⊆ B and there exists x ∈ A ∩ C̄ with A ∖ x ⊆ C ∖ x̄.
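For instance, [Key′] and [Red′] admit direct implementations as partial rewriting functions; the following sketch of ours (reusing neg and bar) returns None when a rule does not apply:

```python
def key_prime(i1, i2, full):
    """[Key']: if A is contained in C and B meets C-bar, derive C -> MM-bar
    (full stands for M union M-bar)."""
    (A, B), (C, D) = i1, i2
    if A <= C and (B & bar(C)):
        return (C, frozenset(full))
    return None

def red_prime(i1, i2):
    """[Red']: if D is contained in B and some x in A and C-bar satisfies
    A minus x contained in C minus x-bar, derive C minus x-bar -> D."""
    (A, B), (C, D) = i1, i2
    if D <= B:
        for x in A & bar(C):
            if A - {x} <= C - {neg(x)}:
                return (C - {neg(x)}, D)
    return None
```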
Of course, this new axiomatic system is equivalent to the initial one, as shown in the theorem below.
Theorem 2.
The system formed by rules [Ref], [Simp], [Key′], [InKey′] and [Red′] is equivalent to that formed by [Ref], [Simp], [Key], [InKey] and [Red].

4. Simplification via Equivalence Rules

In this section, we introduce some other equivalence rules that will be useful for removing redundant attributes. The starting point is the following result:
Lemma 1.
For all A, B, C, D ⊆ M ∪ M̄, we have:
[ContEq]
If A ∩ Ā ≠ ∅, then {A → B} ≡ ∅.
[ContEq′]
If A ≠ ∅ and AB ∩ B̄ ≠ ∅, then {A → B} ≡ {A → MM̄} ≡ {A ∖ x → x̄} for any x ∈ A.
[ContEq″]
If C ≠ ∅, A ⊆ CD and B ∩ C̄D̄ ≠ ∅, then, for any x ∈ C,
{A → B, C → D} ≡ {A → B, C → MM̄} ≡ {A → B, C ∖ x → x̄}.
The equivalences above allow us to detect contradictions and, hence, reduce the size of the set of implications. Below, we propose some other equivalence rules that take into account the possible relationship between different implications in order to reduce their size.
Theorem 3.
Consider A, B, C, D ⊆ M ∪ M̄:
[KeyEq′]
If there exist x ∈ A ∩ D and y ∈ B ∩ C̄ with A ∖ x = C ∖ ȳ, then
{A → B, C → D} ≡ {A → B ∖ y, C ∖ ȳ → y} ≡ {A → B ∖ y, C → MM̄}.
[KeyEq″]
If A ⊆ C and B ∩ D̄ ≠ ∅, then, for any x ∈ C, we have
{A → B, C → D} ≡ {A → B, C ∖ x → x̄}.
[RedEq′]
If D ⊆ B and there exists x ∈ A ∩ C̄ such that A ∖ x = C ∖ x̄, then
{A → B, C → D} ≡ {A → B ∖ D, C ∖ x̄ → D}.
[RftEq]
If there exist x ∈ A and y ∈ B ∩ C̄ with A ∖ x = C ∖ ȳ, then
{A → B, C → D} ≡ {A → B ∖ y, C → Dx̄}.
[RftEq′]
If there exist x ∈ A ∩ D̄ and y ∈ B ∩ C̄ with A ∖ x ⊆ C ∖ ȳ, then
{A → B, C → D} ≡ {A → B, C → D ∖ x̄}.
[MixUnEq]
If there exist x ∈ A and y ∈ C such that A ∖ x = C ∖ y, and b ∈ D, then
{A → b, C → D} ≡ {(A ∖ x)b̄ → x̄ȳ, C → D ∖ b}.
Among all the rules above, there are two which, in principle, do not help to reduce the size of the system ([RftEq] and [MixUnEq]), but keep the number of attributes fixed. However, there is a particular case where the size is reduced as a result of removing implications: the application of either [RftEq] or [MixUnEq] when one of the implications is a unit implication (its right-hand side has only one element) allows us to remove that implication. The result is formally stated in the following corollary:
Corollary 1.
For all A, C, D ⊆ M ∪ M̄ and b, y ∈ M ∪ M̄, the following equivalences hold:
[RftUnitEq]
If y ∈ C̄ and there exists x ∈ A such that A ∖ x = C ∖ ȳ, then
{A → y, C → D} ≡ {C → Dx̄}.
[MixUnUnitEq]
If there exist x ∈ A and y ∈ C such that A ∖ x = C ∖ y, then
{A → b, C → b} ≡ {(A ∖ x)b̄ → x̄ȳ}.

5. Automatic Computation of sm-Implicational Systems

In this section, we propose an algorithm to simplify a system of implications with positive and negative attributes. The algorithm is based on the equivalence rules obtained in Section 3 and Section 4. For the sake of readability, the main algorithm is decomposed into subroutines that check the conditions required to apply certain equivalence rules.
Following the same strategy as in Section 4, we start by defining the algorithms to simplify the implications that contain contradictions. In Algorithm 1, we translate [ContEq′] into pseudocode, whereas in Algorithm 2 we do the same with [ContEq″].
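In our running Python notation, the role of Algorithm 1 could be sketched as follows (our own rendering, not the paper's pseudocode):

```python
def simplify_cont(impl):
    """Apply [ContEq] / [ContEq'] to one implication: return None if it can
    be removed, a reduced implication if its consequent is contradictory,
    and the implication unchanged otherwise."""
    A, B = impl
    if A & bar(A):                    # [ContEq]: contradictory antecedent
        return None
    if A and (A | B) & bar(B):        # [ContEq']: AB meets B-bar
        x = next(iter(A))             # any x in A gives an equivalent system
        return (A - {x}, frozenset({neg(x)}))
    return impl
```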
In Algorithm 2 below, we work with two implications, A → B and C → D, and check the conditions of [ContEq″] in both orders by changing the roles of A → B and C → D in the equivalence rule, so we can simplify both implications in a single execution of Simplify-Cont2. We will make use of this strategy in the rest of the algorithms.
The equivalence rules [KeyEq] and [KeyEq′] are presented in Algorithm 3. Observe that the conditions of those equivalence rules are nested, since they have some common requirements, so the algorithm can be written more compactly. Additionally, notice that, when an implication X → Y can be removed from the implicational system due to the use of an equivalence rule, in the algorithms, this is indicated as X, Y := ∅.
Algorithm 4 presents the pseudocode for [KeyEq″]. It has not been incorporated into the same algorithm as [KeyEq] and [KeyEq′] for the sake of readability. The equivalence rules derived from the inference rule [Red] are condensed in Algorithm 5. Then, in Algorithm 6, we present the equivalence rules [RftEq′] and [RftUnitEq].
Algorithm 7 presents the code for the equivalence rule [MixUnUnitEq]. It is a simple translation of the conditions into pseudocode. Note that, in this case, there is no need to include the converse conditions due to the equality required.
The complete method is presented in Algorithm 8, which incorporates the other algorithms in order to simplify the implications with all the studied equivalence rules. We can explain its procedure as follows: firstly, the set Σ′ of proper implications (i.e., those in which the antecedent and consequent are disjoint) without contradictions is built from Σ. We initialize Σ_s := Σ′ and reset Σ′. We will use Σ′ as a list to store the simplified implications in each iteration.
For each implication A → B ∈ Σ_s, we try to simplify it with the other implications already stored in Σ′. If the result after simplification is a proper implication (nonempty consequent), it is added to Σ′. By this procedure, all implications are compared with each other, and both A → B ∈ Σ_s and all C → D ∈ Σ′ are simplified in a single step. The algorithm checks whether the implications have any contradiction; in that case, rules [ContEq] and [ContEq″] are used (lines 5 and 8). Lines 9–10 express the conditions to use [GenEq], that is, where A → B is more general than C → D or vice versa, keeping only the more general one. If that is not the case, then the algorithm proceeds to check the conditions of the simplification rules described in Section 3 and Section 4. The first simplification to be considered is [SimpEq] (lines 12–15), since it allows us to remove attributes from both the left-hand and right-hand sides of the implications, whenever applicable. Later, the algorithm checks all the simplifications that are specific to mixed attributes and that have already been described in Algorithms 2 to 7. Line 22 is used to add the implications C → D that have not been removed after all the simplification steps. Lines 23–26 add the implication A → B (if it has not been removed) to Σ′.
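The overall structure of the method can be sketched as the following fixed-point loop; this is a simplified rendering of ours in which only [SimpEq] and [UnEq] appear in the pairwise step, with the mixed-attribute rules of Algorithms 2 to 7 elided:

```python
def simplify_mixed(sigma):
    """Fixed-point skeleton of Simplify-Mixed; sigma is a set of
    (frozenset, frozenset) pairs over the '~'-encoded attributes."""
    # Build the initial set of proper, non-contradictory implications.
    sigma = {(A, B - A) for (A, B) in sigma if B - A}
    sigma = {i for i in map(simplify_cont, sigma) if i is not None and i[1]}
    changed = True
    while changed:                     # repeat until a fixed point is reached
        changed = False
        out = []
        for (A, B) in sigma:
            kept = []
            for (C, D) in out:
                if A <= C - B:         # [SimpEq]: A -> B shrinks C -> D
                    C, D = C - B, D - B
                if C <= A - D:         # ... and symmetrically
                    A, B = A - D, B - D
                if A == C:             # [UnEq]: merge equal antecedents
                    B, D = B | D, frozenset()
                # ... the mixed-attribute rules (Algorithms 2-7) go here ...
                if D:
                    kept.append((C, D))
            out = kept + ([(A, B)] if B else [])
        if set(out) != sigma:
            sigma, changed = set(out), True
    return sigma
```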
The theorem below proves the termination and correctness of Algorithm 8 in the sense that the output is an implicational system equivalent to the input and, in addition, is an sm-implicational system. The notion of size and the following notations will be useful for the proofs of the results hereafter.
The size of an implicational system Σ is defined as ‖Σ‖ = ∑_{A→B ∈ Σ} (|A| + |B|), where |X| represents the cardinality of the set X. Thus, ‖Σ‖ is the number of attribute occurrences in the antecedents and consequents of the implications in Σ.
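In code, with the pair representation used in our sketches, this measure reads:

```python
def size(sigma):
    """Total number of attribute occurrences in antecedents and consequents."""
    return sum(len(A) + len(B) for (A, B) in sigma)
```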
Theorem 4.
The function Simplify-Mixed given in Algorithm 8 reaches a fixed point Σ′ given any finite implicational system Σ. In addition, Σ ≡ Σ′ and Σ′ is an sm-implicational system.
Proof. 
First, note that, after the comparison of a pair of implications (lines 7–22), the size of {A → B, C → D} does not increase, since there are only three possibilities: (1) no simplification is made; (2) one or more attributes are removed from the left-hand side or the right-hand side of either of the two implications; and (3) one implication is removed at the cost of adding, at most, one attribute to the other implication (this happens in [RftUnitEq] and [MixUnUnitEq]). Since the minimum size of a proper implication is one, in all three cases we can guarantee that the size of the system does not increase.
Moreover, if no simplification is made in an iteration over all implications A B Σ s , i.e., the system is not modified in that iteration, then the algorithm stops. Thus, in an iteration of the repeat loop in lines 2–27, the only two possibilities are (1) the system is reduced by removing implications or by removing attributes, and (2) the system is not modified in the iteration. There cannot be infinitely many iterations since the size of the system is finite and decreases in each iteration. Hence, the algorithm will stop after finitely many steps.
Let Σ be an implicational system and let Σ′ be the output of Algorithm 8 on Σ. Since Σ′ is built iteratively by applying equivalence rules, it is obvious that Σ ≡ Σ′. Let us now prove that Σ′ is an sm-implicational system. Assume A → B, C → D ∈ Σ′:
 (i) 
B ≠ ∅ and A ∩ B = ∅ follow from the construction of Σ′ in lines 1 and 23–26;
 (ii) 
Let us prove that in Σ′ there cannot be two different implications A → B and C → D with A = C. At a given iteration of the algorithm, if two implications verify that their left-hand sides are equal, they meet the conditions in line 9, so they will be merged into a single implication and, therefore, the algorithm would not reach a fixed point at that iteration. Hence, in the fixed point Σ′, there cannot be such duplicity, since the implications would have been previously merged;
 (iii) 
A ⊊ C implies C ∩ B = ∅ = D ∩ B: it is a consequence of the application of [SimpEq] in lines 12–15;
 (iv) 
B ≠ MM̄ and A ∩ Ā = ∅, due to rules [ContEq] and [ContEq′];
 (v) 
If x ∈ A ∩ C̄ and A ∖ x = C ∖ x̄, then D ⊈ B; otherwise, [RedEq] could be applied and, therefore, Σ′ would not be the fixed point.
Hence, Σ′ is an sm-implicational system.    □
Note that the only equivalence rules for mixed attributes needed to obtain an sm-implicational system are [ContEq], [ContEq′] and [RedEq]. The purpose of the rest of the equivalence rules is to further minimize and obtain an even simpler implicational system.
Let us now show a brief example of the application of Algorithm 8, omitting minor details for an easier reading.
Example 1.
Given Σ = {adb̄ → c, b̄c̄ → ad, acb̄ → d, ac̄b → d}, the application of Simplify-Mixed(Σ) is as follows:
  • Firstly, the pair {adb̄ → c, b̄c̄ → ad} is studied. Using [ContEq″], we simplify b̄c̄ → ad into c̄ → b. Thus,
    {adb̄ → c, b̄c̄ → ad, acb̄ → d, ac̄b → d} ≡ {adb̄ → c, c̄ → b, acb̄ → d, ac̄b → d};
  • Later, {adb̄ → c, ac̄b → d} satisfies the conditions of [RftUnitEq], so adb̄ → c is removed:
    {adb̄ → c, c̄ → b, acb̄ → d, ac̄b → d} ≡ {c̄ → b, acb̄ → d, ac̄b → d};
  • For {c̄ → b, ac̄b → d}, [SimpEq] can be applied, changing ac̄b → d into ac̄ → d:
    {c̄ → b, acb̄ → d, ac̄b → d} ≡ {c̄ → b, acb̄ → d, ac̄ → d};
  • When comparing {acb̄ → d, ac̄ → d}, [RedEq] can be applied, transforming acb̄ → d into ab̄ → d:
    {c̄ → b, acb̄ → d, ac̄ → d} ≡ {c̄ → b, ab̄ → d, ac̄ → d};
  • Now, for {ab̄ → d, ac̄ → d}, [MixUnUnitEq] can also be applied, changing ac̄ → d to ad̄ → bc and removing ab̄ → d:
    {c̄ → b, ab̄ → d, ac̄ → d} ≡ {c̄ → b, ad̄ → bc};
  • At this point, the algorithm has reached a fixed point and returns Σ′ = {c̄ → b, ad̄ → bc}.
We can check that the size of Σ is 16, and the size of the sm-implicational system obtained by Algorithm 8 is 6; that is, we have obtained a size reduction of ≈62.5%.
This example shows that the algorithm can be highly effective in reducing the size of a system with mixed attributes. In the next section, we will present a thorough experimental evaluation of the size reduction achieved. We conclude this section by presenting a worst-case complexity analysis of the algorithm, showing that it has polynomial complexity.
Theorem 5.
Let Σ be an implicational system with mixed attributes. The worst-case time complexity of Algorithm 8 on Σ is O(|Σ|² · ‖Σ‖).
Proof. 
In every iteration of the algorithm, we remove at least one attribute or one implication. Therefore, the maximum number of iterations of the repeat loop is max{|Σ|, ‖Σ‖} = ‖Σ‖, since ‖Σ‖ ≥ |Σ|, for we are considering only proper implications. Furthermore, in each iteration, all possible pairs of implications are studied in order to apply the equivalence rules, so a maximum of O(|Σ|²) steps are made in every iteration. In aggregate, the maximum number of steps is O(|Σ|² · ‖Σ‖).    □
It is worth noting that, as ‖Σ‖ ≥ |Σ|, the complexity of Algorithm 8 is polynomial, O(‖Σ‖³), in the size of the input.
As a related issue, it is important to note that the complexity of the construction of the canonical basis of implications, as well as of the enumeration of the rules belonging to this basis, has been studied before by other authors [25,26,27]. However, such a problem is different from the one we study in the present work, where we consider a previously computed system of implications as the input for our method, which, with polynomial complexity, returns an equivalent simplified system. Moreover, the input system in our proposal does not have to represent a basis associated with a given formal context.

6. Experimental Results

In order to evaluate the capability of Algorithm 8 for simplifying an implicational system, a number of random mixed contexts have been generated with different numbers of attributes in M and different densities (proportion of non-zero elements in the table of the relation I), and their Duquenne–Guigues bases of implications [28] have been computed. As these bases are built without using the logic of mixed attributes, it makes sense to use them as a benchmark for the proposed algorithm.
Contexts were constructed for |M| ∈ {4, 5, …, 10} (that is, |M ∪ M̄| ∈ {8, 10, …, 20}) and density δ ∈ {0.1, 0.25}. This choice of values for δ is due to two main reasons: on the one hand, (purely positive) formal contexts in real situations are very sparse, with a very low proportion of non-zero elements in the table of the relation I; on the other hand, the apposition of a context K and its associated negative context K̄ always has, by construction, density 0.5. Therefore, the choice of δ has no relevance to the complexity of the problem since, as we have seen in Theorem 5, the main indicators of the complexity of the problem are the size and the cardinality of the implication basis. Hence, particular values of δ typically representative of low densities have been selected. For each combination of these parameters, 50 contexts have been randomly generated. All experiments have been performed using the R programming language and the library fcaR [29], particularly for the generation of the datasets as well as for the computation of the implication bases.
As stated in Section 5, in order to obtain an sm-implicational system, it is enough to consider [ContEq], [ContEq′] and [RedEq]. For this reason, in the experiment, we have compared two different versions of the algorithm: (v1), where only the simplifications related to the positive attributes, [GenEq] and [SimpEq], as well as the aforementioned [ContEq], [ContEq′] and [RedEq], are performed; and (v2), the simplification algorithm as described in Algorithm 8. Thus, (v1) is obtained by removing lines 6 and 10 from Algorithm 5 and lines 17–21 from Algorithm 8. The idea behind this comparison is to check the improvement obtained by adding the rest of the equivalence rules.
Hence, for each experiment carried out, the size and the cardinality of the sm-implicational system returned by the simplification algorithm have been measured, both for versions (v1) and (v2). The number of iterations performed by each of the two versions before reaching the fixed point was also taken into account, as well as the number of simplifications performed and the computation time.
In Table 1, we present the averaged results of the execution of the two versions on the problems generated, according to their number of attributes and density. As expected, whereas the reduction produced by version (v1) of the algorithm to obtain sm-implicational systems ranges from 19% to 74% with respect to the system size, and from 2% to 47% with respect to cardinality, version (v2) provides much more reduced outputs, with a reduction between 73% and 85% with respect to size, and between 61% and 79% with respect to cardinality. Furthermore, it is worth mentioning that, despite the fact that (v2) performs a greater number of checks, its computation time is, on many occasions, smaller than that of (v1); the reason is that, in (v2), it is possible to perform many more simplifications in each iteration. This shows that the extra cost of applying the additional equivalence rules pays off, since we obtain a notably smaller version of the implicational system with shorter computation times.
The study of the relationship between the size and cardinality of the input system and the size and cardinality of the systems simplified by versions (v1) and (v2) provides the following results: in Figure 1, we can see the practically linear relationship in the reduction of both factors. Again, we observe that (v2) presents a lower slope, corroborating the trend indicated in Table 1. A linear regression has been performed on the data, obtaining:
(v1)
‖Σ′‖ ≈ 0.629 ‖Σ‖ (with R² = 0.956), and |Σ′| ≈ 0.975 |Σ| (with R² = 0.9992).
(v2)
‖Σ′‖ ≈ 0.249 ‖Σ‖ (with R² = 0.967), and |Σ′| ≈ 0.331 |Σ| (with R² = 0.961).
Finally, in Figure 2, we show the execution time of the algorithm for each of the problems, depending on the size of the input. On the scatterplot, the cubic fit to the data has been plotted using ‖Σ‖ as the predictor. It can be seen how the theoretical polynomial bound given in Theorem 5 is also reflected in the experimental evaluation.

Discussion

The topic of attribute reduction and simplification of implicational systems is important insofar as it can be considered a preliminary step to other algorithms (computation of logical closure, generation of direct bases) whose computational complexity depends directly on the size of the implicational system; however, to the best of our knowledge, there is no previous algorithm for simplifying implications with mixed attributes and, thus, our proposal is the first one in this extended framework.
We have presented experimental evidence of the algorithm’s ability to reduce the size of an implicational system with mixed attributes. Specifically, we have presented two versions of the algorithm: the first one, (v1), simply guarantees that an sm-implicational system is obtained, whereas the more complete one, (v2), produces more simplified results, achieving an average reduction of about 75% of the initial system size (see Figure 1), even with a lower execution time (see Figure 2).

7. Conclusions and Future Work

We have proposed a novel logic-based method to face the problem of simplifying implicational systems containing positive and negative attributes. This method is based on the simplification logic for mixed attributes, which provides a sound and complete framework to deal with this type of formal context. The contributions of this paper include a set of logical equivalence rules for mixed attribute implications suitable for computational implementation, and an algorithm with polynomial time complexity to simplify an implicational system. The experimental evaluation shows very promising results with respect to the reduction ratio achieved by the algorithm.
Concerning future work, a thorough study of the minimality properties of sm-implicational systems is needed in order to pursue definitions similar to the different types of bases for classical implicational systems [19]. This will certainly be a first step on which to build automated reasoning methods based on simplification logic.
Another interesting direction for future work is the extension of these results to the case of unknown information, which may be due to the existence of partial information in the data, or because the context has been obtained by compacting the data, as proposed in [30]. Here, we will follow the line of research initiated in [31] and consider formal contexts as trivalued relations in which a third value representing the unknown is added apart from the standard positive and negative values.
Extending the work presented here to the case of association rules will require adaptation of the axiomatic system and equivalence properties in order to ensure the correctness of the scheme and the preservation of the informativeness and completeness properties of a basis of rules, and this task will be undertaken in future work.
Finally, in this work, we have focused on rules with conjunctive antecedents and consequents, where both positive and negative attributes appear. However, this is not the only point of view through which to model the problem of negation. In particular, the use of disjunctive rules, such as in [32], is of particular interest because of their generality. As part of our future work, we will explore the relationship of the axiomatic system and the equivalence rules described under this construction.

Author Contributions

Conceptualization, F.P.-G., D.L.-R., P.C., Á.M. and M.O.-A.; Investigation, F.P.-G., D.L.-R., P.C., Á.M. and M.O.-A.; Validation, F.P.-G., D.L.-R., P.C., Á.M. and M.O.-A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors were partially supported by the Spanish Ministry of Science, Innovation, and Universities (MCIU), State Agency of Research (AEI), Junta de Andalucía (JA), Universidad de Málaga (UMA) and European Regional Development Fund (FEDER) through the projects PGC2018-095869-B-I00 (MCIU/AEI/FEDER), TIN2017-89023-P (MCIU/AEI/FEDER), PRE2018-085199 and UMA2018-FEDERJA-001 (JA/UMA/FEDER).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs of the Technical Results

Proof of Theorem 1.
[KeyEq]
Clearly, {A → B, C → D} ⊢ {A → B}. Let us prove {A → B} ⊢ {C → D} under the hypotheses. Assume A ⊆ C and consider x ∈ B ∩ C̄. Then:
(1) A → B (by hypothesis);
(2) x̄ → x̄ (by [Ref]);
(3) Ax̄ → Bx̄ (by [Comp] on (1) and (2));
(4) C → A (by [GenRef], since A ⊆ C);
(5) C → Ax̄ (by [Comp] with (2) and (4), since x̄ ∈ C by hypothesis);
(6) C → Bx̄ (by [Simp] on (5) and (3));
(7) B → x (by [GenRef], since x ∈ B);
(8) Bx̄ → MM̄ (by [Key] on (7));
(9) C → MM̄ (by [Simp] on (6) and (8));
(10) C → D (by [Frag]).
[InKeyEq]
Since {A → MM̄, C → Dx̄} ⊢ {A → MM̄, C → D} by [Frag], let us show that, under the hypotheses, we can infer C → Dx̄ from A → MM̄ and C → D:
(1) (A ∖ x)x → MM̄ (by hypothesis);
(2) A ∖ x → x̄ (by [InKey] on (1));
(3) C → A ∖ x (by [GenRef], since A ∖ x ⊆ C);
(4) C → x̄ (by [Simp] on (2) and (3));
(5) C → D (by hypothesis);
(6) C → Dx̄ (by [Comp] on (4) and (5)).
[RedEq]
{A → B, C ∖ x̄ → D} ⊢ {A → B, C → D} holds since:
(1) C → C ∖ x̄ (by [GenRef]);
(2) C ∖ x̄ → D (by premise);
(3) C → D (by [Simp] on (1) and (2)).
Let us show now that we have {A → B, C → D} ⊢ {C ∖ x̄ → D}:
(1) C ∖ x̄ → A ∖ x (by hypothesis and [GenRef]);
(2) x → x (by [Ref]);
(3) (C ∖ x̄)x → (A ∖ x)x (by [Comp] on (1) and (2));
(4) A → B (by premise);
(5) (C ∖ x̄)x → B (by [Simp] on (3) and (4));
(6) B → D (by hypothesis and [GenRef]);
(7) (C ∖ x̄)x → D (by [Simp] on (5) and (6));
(8) C = (C ∖ x̄)x̄ → D (by premise);
(9) C ∖ x̄ → D (by [Red] on (7) and (8)).
   □
Proof of Theorem 2.
It suffices to show that [Key], [InKey] and [Red] hold in the new system, since the converse is true by Theorem 1.
[Key]
Let us suppose A → b and show that Ab̄ → MM̄:
(1) A → b (premise);
(2) Ab̄ → Ab̄ (by [Ref]);
(3) Ab̄ → MM̄ (by [Key′] on (1) and (2)).
[InKey]
Let us suppose Ab → MM̄ and prove that A → b̄:
(1) Ab → MM̄ (premise);
(2) A → A (by [Ref]);
(3) A → Ab̄ (by [InKey′] on (1) and (2));
(4) A → b̄ (by [Frag]).
[Red]
Assuming Ab → C and Ab̄ → C, then, by [Red′], we can infer A → C.
   □
Proof of Lemma 1.
[ContEq]
We only need to show how to derive A → B from the axioms. Suppose there is x ∈ A ∩ Ā, i.e., x, x̄ ∈ A:
(1) A → xx̄ (by [GenRef]);
(2) xx̄ → MM̄ (by [Cont]);
(3) A → MM̄ (by [Simp] on (1) and (2));
(4) A → B (by [Frag]).
[ContEq′]
Firstly, note that, by [Key] and [InKey], we have {A → MM̄} ≡ {A ∖ x → x̄} for all x ∈ A. In addition, {A → MM̄} ⊢ {A → B}, so it suffices to show that we can infer A → MM̄ from A → B. Consider y ∈ AB ∩ B̄; note that, in particular, both y and ȳ belong to AB. Then:
(1) A → B (premise);
(2) A → AB (by [Augm] on (1));
(3) A → yȳ (by [Frag] on (2));
(4) yȳ → MM̄ (by [Cont]);
(5) A → MM̄ (by [Simp] on (3) and (4)).
[ContEq″]
As with [ContEq′], it now suffices to prove {A → B, C → D} ⊢ C → MM̄ under the hypotheses. Thus:
(1) CD → A (by [GenRef]);
(2) C → D (premise);
(3) C → A (by [Simp] on (1) and (2));
(4) A → B (premise);
(5) C → B (by [Simp] on (3) and (4));
(6) C → BCD (by [Comp] on (2) and (5), and C → C);
(7) C → MM̄ (by [ContEq′]).
   □
Proof of Theorem 3.
[KeyEq′]
We will prove just the first equivalence, since the second one is obtained by the application of [Key].
Assume A → B and C → D. Since A → B ∖ y can be obtained by [Frag], we just have to show how to infer C ∖ ȳ → y:
(1) (A ∖ x)x → y (by [Frag] on the premise A → B);
(2) (C ∖ ȳ)x → y (by the hypothesis A ∖ x = C ∖ ȳ);
(3) (C ∖ ȳ)ȳ → x (by [Frag] on the premise C → D);
(4) (C ∖ ȳ)x̄ → y (by [Rft] on (3));
(5) C ∖ ȳ → y (by [Red] on (2) and (4)).
Under the hypotheses, suppose A → B ∖ y and C ∖ ȳ → y; let us prove that A → B and C → D:
(1) A ∖ x → y (by premise and C ∖ ȳ = A ∖ x);
(2) A → A ∖ x (by [GenRef]);
(3) A → y (by [Simp] on (1) and (2));
(4) A → B (by [Comp] of (3) with the premise A → B ∖ y);
(5) (C ∖ ȳ)ȳ → MM̄ (by [Key] on the second premise);
(6) C → D (by [Frag]).
[KeyEq″]
Consider x ∈ C and y ∈ B ∩ D̄, and assume as premises A → B and C → D.
(1) C → A (by [GenRef]);
(2) A → B (premise);
(3) C → B (by [Simp] on (1) and (2));
(4) C → BD (by [Comp] on (3) and the premise C → D);
(5) C ∖ x → x̄ (by Lemma 1 [ContEq′], since y, ȳ ∈ BD).
Now, assuming A → B and C ∖ x → x̄, let us prove C → D:
(1) C → MM̄ (by [Key] on the second premise);
(2) C → D (by [Frag]).
[RedEq′]
Let us denote S = A ∖ x = C ∖ x̄, so that A = Sx and C = Sx̄. Then:
{A → B, C → D} ≡ {A → B ∖ D, A → D, C → D} (by [Frag] and [UnEq])
≡ {A → B ∖ D, Sx → D, Sx̄ → D} (since A = Sx and C = Sx̄)
≡ {A → B ∖ D, S → D} = {A → B ∖ D, C ∖ x̄ → D} (by [RedEq]).
[RftEq]
Let us show that {A → B, C → D} ≡ {A → B ∖ y, C → Dx̄}:
(1) A → B (premise);
(2) C → D (premise);
(3) A → B ∖ y (by [Frag] on (1));
(4) (A ∖ x)x → y (by [Frag] on (1));
(5) (A ∖ x)ȳ → x̄ (by [Rft] on (4));
(6) (C ∖ ȳ)ȳ → x̄ (by the hypothesis A ∖ x = C ∖ ȳ);
(7) C → Dx̄ (by [UnEq] on (2) and (6)).
For the converse:
(1) A → B ∖ y (premise);
(2) (C ∖ ȳ)ȳ → x̄ (by [Frag] on the premise C → Dx̄);
(3) (C ∖ ȳ)x → y (by [Rft] on (2));
(4) (A ∖ x)x → y (by the hypothesis A ∖ x = C ∖ ȳ);
(5) A → B (by [Comp] on (1) and (4));
(6) C → D (by [Frag] on C → Dx̄).
[RftEq′]
{A → B, C → D} ⊢ {A → B, C → D ∖ x̄} holds by [Frag].
It suffices to show that we can infer C → D from {A → B, C → D ∖ x̄}:
(1) A → B (premise);
(2) C → D ∖ x̄ (premise);
(3) (A ∖ x)x → y (by [Frag] on (1));
(4) (A ∖ x)ȳ → x̄ (by [Rft] on (3));
(5) C ∖ ȳ → A ∖ x (by hypothesis and [GenRef]);
(6) (C ∖ ȳ)ȳ → (A ∖ x)ȳ (by [Augm] on (5));
(7) C → x̄ (by [Simp] on (4) and (6));
(8) C → D (by [UnEq] on (2) and (7)).
[MixUnEq]
Let us write S = A ∖ x = C ∖ y. Since C → D ∖ b can be obtained by using [Frag], we just need to prove Sb̄ → x̄ȳ:
(1) Sx → b (premise);
(2) Sy → b (by [Frag] on C → D);
(3) Sb̄ → x̄ (by [Rft] on (1));
(4) Sb̄ → ȳ (by [Rft] on (2));
(5) Sb̄ → x̄ȳ (by [UnEq] on (3) and (4)).
Assume the premises {Sb̄ → x̄ȳ, C → D ∖ b} and let us prove that A → b and C → D:
(1) Sb̄ → x̄ (by [Frag] on the first premise);
(2) A → b (by [Rft] on (1), since Sx = A);
(3) Sb̄ → ȳ (by [Frag] on the first premise);
(4) C → b (by [Rft] on (3), since Sy = C);
(5) C → D ∖ b (second premise);
(6) C → D (by [UnEq] on (4) and (5)).

References

  1. Staab, S.; Studer, R. Handbook on Ontologies, 2nd ed.; Springer Publishing Company, Incorporated: New York, NY, USA, 2009. [Google Scholar]
  2. Messaoudi, A.; Missaoui, R.; Ibrahim, M.H. Detecting Overlapping Communities in Two-mode Data Networks using Formal Concept Analysis. Revue des Nouvelles Technologies de l’Information 2019, RNTI-E-35, 189–200. [Google Scholar]
  3. Ibrahim, M.H.; Missaoui, R.; Vaillancourt, J. Identifying Influential Nodes in Two-Mode Data Networks Using Formal Concept Analysis. IEEE Access 2021, 9, 159549–159565. [Google Scholar] [CrossRef]
  4. Cordero, P.; Enciso, M.; López, D.; Mora, A. A conversational recommender system for diagnosis using fuzzy rules. Expert Syst. Appl. 2020, 154, 113449. [Google Scholar] [CrossRef]
  5. Cordero, P.; Enciso, M.; Ángel, M.; Ojeda-Aciego, M.; Rossi, C. A Formal Concept Analysis Approach to Cooperative Conversational Recommendation. Int. J. Comput. Intell. Syst. 2020, 13, 1243–1252. [Google Scholar] [CrossRef]
  6. Agrawal, R.; Srikant, R. Fast Algorithms for Mining Association Rules in Large Databases. In Proceedings of the 20th International Conference on Very Large Data Bases, Santiago, Chile, 12–15 September 1994; pp. 487–499, VLDB ’94. [Google Scholar]
  7. Boulicaut, J.F.; Bykowski, A.; Jeudy, B. Towards the Tractable Discovery of Association Rules with Negations. In Flexible Query Answering Systems; Larsen, H.L., Andreasen, T., Christiansen, H., Kacprzyk, J., Zadrożny, S., Eds.; Springer Publishing Company: New York, NY, USA, 2001; pp. 425–434. [Google Scholar]
  8. Wu, X.; Zhang, C.; Zhang, S. Efficient mining of both positive and negative association rules. ACM Trans. Inf. Syst. (TOIS) 2004, 22, 381–405. [Google Scholar] [CrossRef]
  9. Ben Yahia, S.; Gasmi, G.; Mephu Nguifo, E. A new generic basis of “factual” and “implicative” association rules. Intell. Data Anal. 2009, 13, 633–656. [Google Scholar] [CrossRef]
  10. Missaoui, R.; Nourine, L.; Renaud, Y. An Inference System for Exhaustive Generation of Mixed and Purely Negative Implications from Purely Positive Ones. In Proceedings of the 7th International Conference on Concept Lattices and Their Applications, CEUR Workshop Proceedings. Sevilla, Spain, 19–21 October 2010; Volume 672, pp. 271–282. [Google Scholar]
  11. Missaoui, R.; Nourine, L.; Renaud, Y. Computing Implications with Negation from a Formal Context. Fundam. Informaticae 2012, 115, 357–375. [Google Scholar] [CrossRef]
  12. Rodríguez-Jiménez, J.M.; Cordero, P.; Enciso, M.; Mora, A. Data mining algorithms to compute mixed concepts with negative attributes: An application to breast cancer data analysis. Math. Methods Appl. Sci. 2016, 39, 4829–4845. [Google Scholar] [CrossRef]
  13. Cordero, P.; Enciso, M.; Mora-Bonilla, A.; Rodríguez-Jiménez, J. Inference of Mixed Information in Formal Concept Analysis. In Trends in Mathematics and Computational Intelligence; Studies in Computational Intelligence; Springer: Cham, Switzerland, 2019; pp. 81–87. [Google Scholar]
  14. Zaki, M.J. Generating Non-Redundant Association Rules. In Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Boston, MA, USA, 20–23 August 2000; pp. 34–43. [Google Scholar]
  15. Cheng, J.; Ke, Y.; Ng, W. Effective elimination of redundant association rules. Data Min. Knowl. Discov. 2008, 16, 221–249. [Google Scholar] [CrossRef]
  16. Díaz Vera, J.; Negrín Ortiz, G.; Molina, C.; Amparo Vila, M. Knowledge redundancy approach to reduce size in association rules. Informatica 2020, 44, 167–181. [Google Scholar] [CrossRef]
  17. Jin, M.; Wang, H.; Zhang, Q. Association rules redundancy processing algorithm based on hypergraph in data mining. Clust. Comput. 2019, 22, 8089–8098. [Google Scholar] [CrossRef]
  18. Mora, A.; Cordero, P.; Enciso, M.; Fortes, I.; Aguilera, G. Closure via functional dependence simplification. Int. J. Comput. Math. 2012, 89, 510–526. [Google Scholar] [CrossRef]
  19. Rodríguez Lorenzo, E.; Bertet, K.; Cordero, P.; Enciso, M.; Mora, A. Direct-optimal basis computation by means of the fusion of simplification rules. Discret. Appl. Math. 2018, 249, 106–119. [Google Scholar] [CrossRef]
  20. Ganter, B.; Wille, R. Formal Concept Analysis’ Mathematical Foundations; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
  21. Wille, R. Restructuring Lattice Theory: An Approach Based on Hierarchies of Concepts. Ordered Sets 1982, 83, 445–470. [Google Scholar]
  22. Konecny, J. Attribute implications in L-concept analysis with positive and negative attributes: Validity and properties of models. Int. J. Approx. Reason. 2020, 120, 203–215. [Google Scholar] [CrossRef]
  23. Rodríguez-Jiménez, J.M.; Cordero, P.; Enciso, M.; Rudolph, S. Concept lattices with negative information: A characterization theorem. Inf. Sci. 2016, 369, 51–62. [Google Scholar] [CrossRef]
  24. Rodríguez-Jiménez, J.M. Extracción de Conocimiento Usando Atributos Negativos en el Análisis de Conceptos Formales Aplicaciones en la Ingeniería. Ph.D. Thesis, Universidad de Málaga, Málaga, Spain, 2017. [Google Scholar]
  25. Kuznetsov, S.O. On the Intractability of Computing the Duquenne-Guigues Base. J. Univers. Comput. Sci. 2004, 10, 927–933. [Google Scholar]
  26. Distel, F.; Sertkaya, B. On the complexity of enumerating pseudo-intents. Discret. Appl. Math. 2011, 159, 450–466. [Google Scholar] [CrossRef]
  27. Babin, M.A.; Kuznetsov, S.O. Computing premises of a minimal cover of functional dependencies is intractable. Discret. Appl. Math. 2013, 161, 742–749. [Google Scholar] [CrossRef]
  28. Guigues, J.L.; Duquenne, V. Familles Minimales d’Implications Informatives Résultant d’un Tableau de Données Binaires. Mathématiques Sci. Hum. 1986, 95, 5–18. [Google Scholar]
  29. López-Rodríguez, D.; Mora, A.; Domínguez, J.; Villalón, A.; Johnson, I. fcaR: Formal Concept Analysis. R Package Version 1.1.0. 2020. Available online: https://cran.r-project.org/web/packages/fcaR/index.html (accessed on 14 December 2021).
  30. Ganter, B.; Meschke, C. A Formal Concept Analysis Approach to Rough Data Tables. In Transactions on Rough Sets XIV; Springer: Berlin/Heidelberg, Germany, 2011; pp. 37–61. [Google Scholar]
  31. Pérez-Gámez, F.; Cordero, P.; Enciso, M.; Mora, A. A New Kind of Implication to Reason with Unknown Information. Lect. Notes Comput. Sci. 2021, 12733, 74–90. [Google Scholar]
  32. Hamrouni, T.; Ben Yahia, S.; Mephu Nguifo, E. Sweeping the disjunctive search space towards mining new exact concise representations of frequent itemsets. Data Knowl. Eng. 2009, 68, 1091–1111. [Google Scholar] [CrossRef]
Figure 1. Comparison of versions (v1) and (v2) with respect to the reduction ratio of ‖Σ‖ (a) and of |Σ| (b) for all the datasets used in the experiments.
Figure 2. Execution time of versions (v1) and (v2), with cubic adjustment.
Table 1. Data from the experimental evaluation. Columns 3 to 10 show the average values obtained in the experiments for each of the studied parameters (cardinality, size and execution time) for versions (v1) and (v2).

|M|   δ      |Σ|      ‖Σ‖      (v1) |Σ′|  (v1) ‖Σ′‖  (v1) t    (v2) |Σ′|  (v2) ‖Σ′‖  (v2) t
4     0.1      8.34    45.68     4.40      11.66       0.105      2.68       6.64       0.089
4     0.25    11.48    55.92     7.52      21.44       0.261      2.96       8.94       0.121
5     0.1     11.50    75.92     6.68      21.86       0.186      3.92      12.14       0.167
5     0.25    22.86   120.38    17.90      57.02       1.694      6.16      21.08       0.852
6     0.1     15.60   117.22     9.78      34.78       0.435      5.66      19.16       0.335
6     0.25    38.48   220.18    32.50     111.58       6.299     10.78      40.94       2.771
7     0.1     20.40   171.04    13.64      51.88       0.956      7.36      26.24       0.563
7     0.25    61.96   379.92    54.96     203.40      20.356     18.68      75.16       9.439
8     0.1     28.04   254.36    20.32      78.56       2.388     10.04      37.56       1.642
8     0.25   102.84   652.44    94.92     368.56      67.209     33.68     146.44      34.22
9     0.1     35.80   351.10    27.20     108.80       3.319     12.90      51.20       2.596
9     0.25   149.10   992.20   140.10     571.40     144.114     50.50     232.00      74.970
10    0.1     46.21   486.50    36.40     149.90       7.921     17.60      70.10       5.220
10    0.25   218.92  1494.20   208.90     891.10     393.850     81.40     393.10     188.461

