Article

Alternative Support Threshold Computation for Market Basket Analysis

Damiano Verda * and Marco Muselli
Rulex Innovation Labs, Rulex Inc., 16122 Genoa, Italy
* Author to whom correspondence should be addressed.
AppliedMath 2025, 5(2), 71; https://doi.org/10.3390/appliedmath5020071
Submission received: 24 April 2025 / Revised: 26 May 2025 / Accepted: 10 June 2025 / Published: 13 June 2025

Abstract

This article aims to limit the rule explosion problem affecting market basket analysis (MBA) algorithms. More specifically, it is shown how, if the minimum support threshold is not specified explicitly but in terms of the number of items to consider, it is possible to compute an upper bound for the number of generated association rules. Moreover, if the results of previous analyses (with different thresholds) are available, this information can be taken into account to refine the upper bound and to compute lower bounds as well. The support determination technique is implemented as an extension to the Apriori algorithm but may be applied to any other MBA technique. Tests are executed on benchmarks and on a real problem, provided by one of the major Italian supermarket chains, regarding more than 500,000 transactions. Experiments on these benchmarks show that the rate of growth in the number of rules between tests with increasingly more permissive thresholds ranges from 21.4 to 31.8 with the proposed method, while it would range from 39.6 to 3994.3 if the traditional thresholding method were applied.

1. Introduction

Market basket analysis (MBA) identifies a group of techniques used to process a transactional dataset for extracting association rules characterized by values of relevance and reliability greater than a given minimum.
Generally, users of any software addressing the MBA field belong to one of the two following categories:
  • Data scientists, experts in algorithms and programming, but lacking domain-specific knowledge related to the MBA application;
  • Managers, experts in their own application domain, but (possibly) lacking specific competencies in designing and interpreting association rule mining algorithms.
Data scientists and managers usually cooperate [1], combining their skills for a more effective analysis. Distinguishing relevant and irrelevant data automatically is crucial for data scientists, both to reduce the computational cost of MBA algorithms, allowing useful association rules to be generated even for big datasets, and to provide managers with succinct, actionable information. Well-known MBA algorithms, such as Apriori [2,3], FP-Growth [4] and Eclat [5], filter data according to a relevance threshold: items (and itemsets) whose number of occurrences in the dataset does not reach this threshold are discarded.
Still, data scientists may have to deal with datasets whose characteristics are unknown. It is then challenging to find an effective threshold that guarantees that data are filtered as desired. Moreover, some datasets may involve a considerable number of items whose support levels differ only slightly (e.g., supermarket purchases). In such cases, a small change in the support threshold significantly alters the generated association rules, i.e., the algorithm is sensitive to the choice of this parameter.
An alternative support threshold computation technique is proposed to face this problem: the number of items to consider (most frequent ones first) is specified and the corresponding support threshold is automatically determined and applied.
It will be proven that, by employing this technique, results are more predictable, even for a user lacking domain-specific knowledge. More specifically, an upper bound for the number of generated association rules is computable, and this bound may be refined by taking into account the results of other analyses on the same dataset with different parametrizations.
To validate the effectiveness of the proposed technique, tests are executed on well-known benchmarks (such as the retail dataset [6]), as well as on real data concerning 552,626 transactions, provided by a big Italian supermarket chain whose name has to be kept confidential.
The paper is organized as follows: Section 1.1 introduces market basket analysis terminology and Section 1.2 is dedicated to related work. Section 2 describes the proposed automatic support threshold computation technique and quantifies the bound that it is possible to impose on the number of generated association rules by employing it. Section 3 presents experimental results and Section 4 examines the current limitations of the approach, as well as future work. Conclusions follow.

1.1. Definitions

Denote with $I = \{b_1, \ldots, b_n\}$ the universe constituted by the items $b_1, \ldots, b_n$ (with $n \in \mathbb{N}$), and with $B \subseteq I$ an itemset. The cardinality $|B|$ of the itemset is the number of items $b_i$ such that $b_i \in B$ for $i = 1, \ldots, n$. If $|B| = 1$, $B$ is defined as an item.
Let $\mathcal{D}$ be a dataset constituted by a set of transactions $T_1, \ldots, T_m$ (with $m \in \mathbb{N}$), such that $T_j \subseteq I$ for each $j = 1, \ldots, m$; that is, each transaction constitutes an itemset. An occurrence of the itemset $B$ in $\mathcal{D}$ takes place if, for at least one value of $j$ between 1 and $m$, $B \subseteq T_j$. The number of occurrences of $B$ in $\mathcal{D}$ is then the number of different values of $j$ such that $B \subseteq T_j$.
An association rule $r$ is defined as
$$r = (A, C),$$
where $A, C \subseteq I$ and $A \cap C = \emptyset$. The rule $r$ identifies a relation between the occurrence of the itemset $A$ and that of $C$; usually, $A$ is called the antecedent and $C$ the consequent. A rule $r$ is thus univocally associated with the itemsets $A$ and $C$, and, necessarily, its cardinality is
$$|r| = |A \cup C|.$$
It is important to note that each frequent itemset $B$ such that $|B| > 1$ may generate multiple association rules $r = (A, B \setminus A)$, according to the choice of $A$. More specifically, up to
$$\sum_{k=1}^{|B|-1} \binom{|B|}{k} = 2^{|B|} - 2 \tag{1}$$
association rules can be generated. As a matter of fact, the antecedent of the rule can contain from one to $|B| - 1$ different items, chosen inside the itemset $B$; hence, at most $\binom{|B|}{k}$ distinct association rules whose antecedent has cardinality $k$ can be derived. For instance, a frequent itemset of cardinality 3 generates at most $2^3 - 2 = 6$ association rules.
Three proper measures, defined in terms of the occurrences of $A$ and $C$, characterize the association rule $r$ as follows (a small numerical example is given after the list):
  • The support $s_B$ of an itemset $B$ is the number of occurrences of the itemset $B$ in $\mathcal{D}$; $s_B / m$ measures the empirical probability of the occurrence of $B$ in $\mathcal{D}$, where $m$ denotes the total number of transactions in $\mathcal{D}$. Considering that any rule $r = (A, C)$ is univocally associated with the itemset $A \cup C$, its support and empirical probability of occurrence, respectively denoted as $s_r$ and $s_r / m$, are equal to $s_{A \cup C}$ and $s_{A \cup C} / m$.
  • The confidence $c_r$ of $r$ is the ratio between $s_r$ and the number $s_a$ of occurrences of $A$ in $\mathcal{D}$:
    $$c_r = \frac{s_r}{s_a}. \tag{2}$$
    It is a measure of the reliability of $r$ and represents how often the consequent occurs when the antecedent is verified.
  • The lift $l_r$ of $r$ is the ratio between the empirical probability that $r$ occurs in $\mathcal{D}$ and its expected value in case $A$ and $C$ were independent. Since the probability of occurrence for an itemset in $\mathcal{D}$ is the ratio between its support and the total number of transactions $m$,
    $$l_r = \frac{P(r)}{P(A)\,P(C)} = \frac{s_r / m}{s_a s_c / m^2} = \frac{s_r\, m}{s_a s_c}, \tag{3}$$
    where $s_c$ denotes the number of occurrences of $C$ in $\mathcal{D}$. Lift is a measure of correlation between the occurrences of the itemsets $A$ and $C$; it is greater than 1 in case of positive correlation and lower than 1 in case of negative correlation.
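The following Python snippet (not part of the original paper) illustrates the three measures on a small, hypothetical transactional dataset; the items and the rule are invented for illustration only.

```python
# Toy transactional dataset: each transaction is a set of items (hypothetical data).
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "eggs"},
]
m = len(transactions)  # total number of transactions

def occurrences(itemset):
    """Number of transactions containing every item of the given itemset."""
    return sum(1 for t in transactions if itemset <= t)

# Rule r = (A, C) with antecedent A = {bread} and consequent C = {milk}.
A, C = {"bread"}, {"milk"}
s_r = occurrences(A | C)                           # support of the rule (occurrences of A ∪ C)
c_r = s_r / occurrences(A)                         # confidence
l_r = s_r * m / (occurrences(A) * occurrences(C))  # lift

print(s_r, c_r, l_r)  # 3 0.75 0.9375
```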

1.2. Related Work

The technique described in the present article has been implemented as an extension of an existing algorithm, namely the Apriori algorithm, described in [2,3] and still widely employed in the MBA field, also recently [7,8].
The Apriori algorithm is constituted by a loop; at each step $k$, all the itemsets of cardinality $k$ with support at least equal to a threshold value $t \in \mathbb{N}$ are identified. The itemsets fulfilling this condition are called frequent itemsets, and the set composed of them is referred to as $F_k$.
The value of $k$ is initialized to 1 and increased after each iteration; the loop ends when $k$ exceeds the maximum cardinality $y \in \mathbb{N}$ specified by the user or when $F_k = \emptyset$.
Each iteration is composed of two steps:
  • Candidate generation: the output of this step is the candidate set $C_k$. $C_1$ is composed of all the items in $\mathcal{D}$; for $k > 1$, $C_k$ is composed of all the itemsets of cardinality $k$ such that, if any item is removed from them, the resulting itemset belongs to $F_{k-1}$.
  • Pruning: the support of each itemset in $C_k$ is computed; if it is at least equal to $t$, the itemset is included in $F_k$.
Finally, all the itemsets belonging to $F_i$, $i = 1, \ldots, y$, are examined to identify all the association rules associated with them, according to the choice of the antecedent. All these rules, possibly filtered by a minimum confidence and/or lift threshold set by the user, constitute the output of the Apriori algorithm. A minimal sketch of this procedure is given below.
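As a purely illustrative aid, the loop described above can be sketched in Python as follows. This is a minimal, unoptimized reading of the Apriori scheme, not the authors' implementation; rule generation with confidence/lift filtering is omitted, and the function name is hypothetical.

```python
from itertools import combinations

def apriori_frequent_itemsets(transactions, t, y):
    """Frequent itemsets with support >= t and cardinality <= y.
    `transactions` is a list of sets of items."""
    def support(itemset):
        return sum(1 for tr in transactions if itemset <= tr)

    # F_1: frequent items.
    items = {i for tr in transactions for i in tr}
    F = {1: {frozenset([i]) for i in items if support({i}) >= t}}

    k = 2
    while k <= y and F[k - 1]:
        # Candidate generation: itemsets of cardinality k whose
        # (k-1)-subsets all belong to F_{k-1}.
        frequent_items = {i for s in F[1] for i in s}
        candidates = set()
        for base in F[k - 1]:
            for item in frequent_items - base:
                cand = base | {item}
                if all(frozenset(sub) in F[k - 1]
                       for sub in combinations(cand, k - 1)):
                    candidates.add(cand)
        # Pruning: keep the candidates whose support reaches the threshold.
        F[k] = {c for c in candidates if support(c) >= t}
        k += 1
    return F
```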
Apriori needs a complete scan of the dataset for each pruning step of the candidate itemsets, i.e., once for each considered rule cardinality, and can consequently be slower than the FP-Growth algorithm [4], which overcomes this constraint by means of auxiliary data structures. Nevertheless, FP-Growth is less suitable for the considered application, since the reference dataset may be composed of a huge number of transactions and items; hence, the memory overhead due to auxiliary data structures could be significant. In addition, given the practical flavor of the proposed technique, the focus will be on low-cardinality association rules (few antecedents, few consequents), which are characterized by higher support and may be applied more easily; the computational time advantage of FP-Growth is therefore less relevant. The authors of [9] discuss another tree-based algorithm (DFP-SEPSF) for frequent pattern mining in case the input data is received as a stream, while in [10], an integration between Apriori and FP-Growth for the extraction of multi-level association rules is discussed.
Also, Eclat [5] would offer some advantages with respect to Apriori in appropriate operational contexts. It allows an efficient parallelization in mining frequent itemsets, which can be used for generating association rules. Still, a pre-processing phase as well as an auxiliary data structure are necessary and, also in this case, the need to consider big datasets and low-cardinality association rules dampens the advantages and emphasizes the drawbacks of the algorithm.
It is also worth noting that the proposed technique can be adopted whatever the association rule mining algorithm is. We have chosen Apriori since it is the most consolidated algorithm and allows us to single out the innovative aspects of the proposed technique more clearly.
A possible support threshold determination technique is described in [11]; the authors propose generating association rules with variable accuracy, but it is necessary to work on a sampled version of the original dataset, consequently risking a loss of information.
In [12], a filtering criterion based on multiple, automatically determined support thresholds is defined. Yet, the proposed approach is not general: it may be applied only if it is possible to define a taxonomy for items, i.e., only if items are grouped in known categories and, possibly, sub-categories. Also, the authors of [13,14], more recently, suggest faster and refined algorithms for multiple support thresholds determination but, even in these cases, an item taxonomy needs to be specified.
An automated tuning algorithm for generating different support thresholds is proposed in [15]; there, the refinement process is iterative and hence not suitable for working with big datasets.
The authors of [16] describe how to automatically determine multiple support thresholds by means of the ant colony optimization algorithm [17]. The need for an iterative execution reduces computational efficiency; furthermore, the user loses control over the thresholds, completely relying on the algorithm. Conversely, the technique proposed in this paper allows control over the result, enabling specification of the number of considered items. Similar observations are applicable to the suggestion, by the same authors [18], to apply another evolutionary algorithm, i.e., particle swarm optimization, [19] to compute the value of support thresholds.
The use of two different support thresholds is proposed in [20]: one for most frequent items and another for rare ones. If an itemset is composed only of frequent items, it must satisfy a specified criterion not to be discarded; if rare items are included then another, less restrictive, criterion is applied. Also, the notion of relative support is introduced and used to prune the itemsets more effectively. The MBA algorithm gains flexibility but is still unpredictably sensitive to the choice of these parameters.
A drastic solution to the rule explosion problem is depicted in [21]: the user is enabled to determine how many itemsets (leading then to association rules) they would like to generate. Automatically, the best ones are chosen according to an optimizing criterion. Still, the optimization step implies only mining maximal frequent itemsets, and this is known to be a lossy itemset compression technique; a significant bound to rule explosion comes then at the price of an unavoidable and only partially controllable loss of information.
In [22], a general solution to the problem of automatically determining a reasonable support threshold is proposed, but in a different operational context; the aim is, in fact, to look for rare itemsets, rather than for frequent ones.
A comprehensive framework for assessing the statistical significance of the set of itemsets (and, consequently, of association rules) is developed in [23]. Using this statistical significance measurement as a reference, it is possible to compute a suitable support threshold. Still, the procedure to compute this threshold is complex and computationally intensive, while the aim of the present paper is to equip any MBA user with an intuitive tool to deal with a well-known, pervasive problem.
Also, the approach described in [24] represents an original solution to the determination of the support threshold, but the goal of the procedure proposed by the authors is not to bound rule explosion but to attenuate the effects of information uncertainty affecting input data.
Extensions to traditional market basket analysis algorithms, so that they can benefit from the improvements associated with the use of a Map-Reduce computing paradigm, have been recently discussed in [25].
Alternative approaches also encompass the idea of estimating the number of occurrences of items and itemsets rather than computing it exactly, thus gaining a computational advantage at the expense of a noisier, non-deterministic result [26].

2. Materials and Methods

Through the adoption of the technique proposed in this article, the user is still allowed to choose a maximum cardinality $y$ for the itemsets, but they are not requested to provide the minimum support threshold $t$. Instead of $t$, they specify the number $f \in \mathbb{N}$ of items to monitor.
A support threshold $t \in \mathbb{R}$, which allows considering only the $f$ most frequent items, is then automatically computed by ordering $C_1$ in descending order of support and setting $t$ equal to the support of its $f$-th element. Ties are broken according to a proper ordering criterion, e.g., the order of appearance in the dataset, so that $F_1$ is always composed of exactly $f$ items. A minimal sketch of this computation follows.
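The sketch below is an illustrative reading of the described procedure (with first-appearance tie-breaking used as the example criterion), not the authors' implementation; the function name is hypothetical.

```python
from collections import Counter

def support_threshold(transactions, f):
    """Return the threshold t (support of the f-th most frequent item)
    and the f selected items. `transactions` is a list of item collections."""
    counts = Counter(item for tr in transactions for item in tr)
    first_seen = {}
    for pos, item in enumerate(i for tr in transactions for i in tr):
        first_seen.setdefault(item, pos)
    # Descending support; ties broken by order of first appearance.
    ranked = sorted(counts, key=lambda i: (-counts[i], first_seen[i]))
    selected = ranked[:f]
    t = counts[selected[-1]]  # support of the f-th most frequent item
    return t, selected
```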
An estimate of the number of rules that the analysis is going to generate is crucial information for an MBA user, and adopting the described support threshold computation technique improves the predictability of the number of generated association rules. More specifically, it allows the computation of valid upper bounds given $y$ and $f$.
Denote with $e$ the number of generated frequent itemsets (those with cardinality between 2 and $y$, i.e., those that can generate rules) and with $d$ the number of association rules associated with them. Then, owing to (1), the following relations can be readily formulated:
$$d \le \sum_{j=2}^{y} e_j\,(2^j - 2), \qquad e = \sum_{j=2}^{y} e_j, \tag{4}$$
where $e_j$ denotes the number of frequent itemsets of cardinality $j \in \mathbb{N}$, which is a fraction of the number of candidate itemsets generated at the $j$-th iteration of the Apriori algorithm. This quantity can be upper-bounded by the number $\binom{f}{j}$ of combinations of $j$ elements among the $f$ items satisfying the support constraint. Then, we have:
$$e_j \le \binom{f}{j} \quad \text{for each } j = 2, \ldots, y. \tag{5}$$
It follows that $e$ and $d$ are upper-bounded by expressions involving only the values of $f$ and $y$:
$$e \le \hat{e} = \sum_{j=2}^{y} \binom{f}{j}, \tag{6}$$
$$d \le \sum_{j=2}^{y} e_j\,(2^j - 2) \le \sum_{j=2}^{y} \binom{f}{j}\,(2^j - 2). \tag{7}$$
The proposed technique to compute a support threshold allows the user to fix the value of $f$ directly, instead of specifying it indirectly through the choice of a minimum support threshold. Together with the choice of $y$, this allows us to bound the value of $e$ and, consequently, of $d$. In other words, limiting the number of items considered in the rule extraction process defines an upper bound for the number of rules that will be generated. A sketch of the computation of these bounds is given below.
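For illustration, the bounds in (6) and (7) can be evaluated with a few lines of Python; the function name is hypothetical and the snippet simply encodes the two formulas above.

```python
from math import comb

def itemset_and_rule_bounds(f, y):
    """Upper bounds (6) and (7) on the number of frequent itemsets (e_hat)
    and on the number of association rules (d_hat), given f monitored items
    and maximum cardinality y."""
    e_hat = sum(comb(f, j) for j in range(2, y + 1))
    d_hat = sum(comb(f, j) * (2 ** j - 2) for j in range(2, y + 1))
    return e_hat, d_hat

# Example: with f = 10 items and y = 2, at most 45 frequent itemsets of
# cardinality 2 and at most 90 association rules can be generated.
print(itemset_and_rule_bounds(10, 2))  # (45, 90)
```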
Also, let $e^{(1)}$ be the number of frequent itemsets obtained if $f_1$ items are taken into account. Then, it is possible to compute a more accurate bound for $e^{(2)}$, that is, the number of frequent itemsets obtained when considering $f_2$ items, with $f_2 > f_1$. Let us denote with $\hat{e}^{(1)}$ and $\hat{e}^{(2)}$ the upper bounds for $e^{(1)}$ and $e^{(2)}$ computed according to (6). We aim to show that
$$e^{(2)} - e^{(1)} \le \hat{e}^{(2)} - \hat{e}^{(1)} = \sum_{j=2}^{y} \binom{f_2}{j} - \sum_{j=2}^{y} \binom{f_1}{j}, \tag{8}$$
i.e., that $e^{(2)}$ is upper-bounded as follows:
$$e^{(2)} \le e^{(1)} + \hat{e}^{(2)} - \hat{e}^{(1)}. \tag{9}$$
Theorem 1.
$$e^{(2)} - e^{(1)} \le \hat{e}^{(2)} - \hat{e}^{(1)}.$$
Proof. 
The set $A_1$ is defined as including the original $f_1$ items, while the set $A_2$ comprises the additional $f_2 - f_1$ ones. The value of $e^{(2)}$ can be decomposed as
$$e^{(2)} = e^{(1)} + e^{(21)} + e^{(22)}, \tag{10}$$
where $e^{(21)}$ accounts for the frequent itemsets comprising items belonging to $A_1$ together with items belonging to $A_2$, while $e^{(22)}$ denotes the number of frequent itemsets involving only items in $A_2$. By substituting (10) in (8), the thesis of the theorem can be reformulated as
$$e^{(21)} + e^{(22)} \le \hat{e}^{(2)} - \hat{e}^{(1)}. \tag{11}$$
From (6), it follows that
$$e^{(22)} \le \sum_{j=2}^{y} \binom{f_2 - f_1}{j}, \tag{12}$$
$$\hat{e}^{(2)} = \sum_{j=2}^{y} \binom{f_2}{j}, \qquad \hat{e}^{(1)} = \sum_{j=2}^{y} \binom{f_1}{j}. \tag{13}$$
Now, denoting with $e_j^{(21)}$ the number of frequent itemsets of cardinality $j$ that contribute to $e^{(21)}$, so that
$$e^{(21)} = \sum_{j=2}^{y} e_j^{(21)}, \tag{14}$$
we can observe that each frequent itemset contributing to $e_j^{(21)}$ includes $k$ items from $A_1$ and $j - k$ items from $A_2$, with $k = 1, \ldots, j-1$. Since the number of different ways of selecting $k$ items from $A_1$ is $\binom{f_1}{k}$, whereas the other $j - k$ items from $A_2$ can be chosen in $\binom{f_2 - f_1}{j-k}$ different ways, we have:
$$e_j^{(21)} \le \sum_{k=1}^{j-1} \binom{f_1}{k} \binom{f_2 - f_1}{j-k}. \tag{15}$$
By employing the relationship of the hypergeometric distribution [27]
$$\sum_{k=0}^{j} \binom{f_1}{k} \binom{f_2 - f_1}{j-k} = \binom{f_2}{j},$$
we obtain
$$\sum_{k=1}^{j-1} \binom{f_1}{k} \binom{f_2 - f_1}{j-k} = \binom{f_2}{j} - \binom{f_1}{j} - \binom{f_2 - f_1}{j}, \tag{16}$$
and by substituting (16) in (15), it follows that
$$e_j^{(21)} \le \binom{f_2}{j} - \binom{f_1}{j} - \binom{f_2 - f_1}{j}, \tag{17}$$
from which (see (14))
$$e^{(21)} \le \sum_{j=2}^{y} \left[ \binom{f_2}{j} - \binom{f_1}{j} - \binom{f_2 - f_1}{j} \right]. \tag{18}$$
The quantity $e^{(21)} + e^{(22)}$ can then be upper-bounded by employing (18) for $e^{(21)}$ and (12) for $e^{(22)}$:
$$e^{(21)} + e^{(22)} \le \sum_{j=2}^{y} \left[ \binom{f_2}{j} - \binom{f_1}{j} \right] = \hat{e}^{(2)} - \hat{e}^{(1)}. \tag{19}$$
 □
Then, if $r_1$ association rules are generated by taking into account $f_1$ items, it is possible to upper-bound the number $r_2$ of association rules generated considering $f_2$ items (with $f_2 > f_1$) by combining (7) and (9):
$$r_2 \le r_1 + \sum_{j=2}^{y} \left[ \binom{f_2}{j} - \binom{f_1}{j} \right] (2^j - 2). \tag{20}$$
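As an illustration of (20), the refined bound can be computed as follows; the function name and the numbers in the example are hypothetical.

```python
from math import comb

def refined_rule_bound(r1, f1, f2, y):
    """Bound (20): rules obtainable with f2 items, knowing that r1 rules
    were generated with f1 < f2 items."""
    return r1 + sum((comb(f2, j) - comb(f1, j)) * (2 ** j - 2)
                    for j in range(2, y + 1))

# Hypothetical example: r1 = 30 rules observed with f1 = 10 items;
# with f2 = 20 items and y = 2, at most 30 + 2 * (190 - 45) = 320 rules.
print(refined_rule_bound(30, 10, 20, 2))  # 320
```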

3. Results

The effectiveness of the computed bound is verified for differently sized problems, addressing both well-known benchmarks and problems concerning real sales data from big companies. More specifically, five cases are considered:
  • The groceries dataset [28], comprising 9835 transactions and 169 different categories of items;
  • The Mondrian foodmart sales dataset (https://github.com/kijiproject/kiji-modeling/tree/master/kiji-modeling-examples/src/main/datasets/foodmart, accessed on 15 January 2020) including 7824 transactions (which are assumed to be identified by their customer id) and 1559 different items;
  • The retail dataset [6], comprising 88,162 transactions, with 16,470 different items;
  • The online retail dataset [29], comprising 25,900 transactions, with 4194 different item descriptions;
  • Real, anonymized data (only aggregate results will be commented on, no detailed data) gathered by one of the largest Italian supermarket chains; this dataset is composed of 552,626 transactions, involving 51,892 different items. The transactions refer to thousands of different customers, with an average basket size of 13.21 items (and a standard deviation of 12.33 items). At least one of the 100 most frequent items is included in 56.1% of the transactions.
For all these datasets, we aim to verify the predictive power of the upper bound determined according to (20).
The tests where an explicit minimum support threshold is specified are referred to as group 1, whereas group 2 includes the tests in which the number of items to consider is set by the user. Firstly, an experiment showing the better predictability of results in group 2 is proposed.
Let us denote with $s_\alpha \in \mathbb{R}$ the support of the $\alpha$-th most frequent item in the dataset, with $\alpha \in \mathbb{N}$ (in the experiments, $\alpha = 10$). Let $\gamma \in \mathbb{N}$ represent the number of tests for each dataset, for both group 1 and group 2. Then, for each $\beta \in \mathbb{N}$ such that $1 \le \beta \le \gamma$:
  • A new test is included in group 1, setting $s_\alpha / \beta$ as the minimum item support: $r_\beta^{(1)}$ rules will be generated;
  • A new test is included in group 2, taking $\alpha\beta$ items into account: $r_\beta^{(2)}$ rules will be generated.
Let $\delta \in \mathbb{N}$ denote the minimum itemset support threshold. Considering that the present work addresses the comparison between two different ways of computing the minimum item support, $\delta$ has been set to the same value for both groups of tests; in particular, the least restrictive possible value, $\delta = 1$, has been adopted. A minimum lift threshold of 1 has been chosen for both groups, so as to select rules identifying positive correlations between antecedents and consequents (see Equation (3)). The proposed technique for support threshold computation, as anticipated in the introduction, can be applied to any frequent pattern mining algorithm (Apriori has just been picked as an example). For better generality, the following analysis focuses on the number of rules generated at each step, which is independent of the choice of the algorithm and of its implementation details, rather than on computation time or memory peak, which are correlated with the total number of extracted rules but depend on the chosen pattern mining algorithm and its implementation. A sketch of how the two groups of tests are parameterized is given below.
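The following sketch (not from the paper) summarizes the test parameterization; `item_supports` is a hypothetical helper holding the item supports sorted in descending order.

```python
def test_parameters(item_supports, alpha=10, gamma=5):
    """For beta = 1..gamma, return the group-1 minimum item support
    (s_alpha / beta) and the group-2 number of items to consider (alpha * beta)."""
    s_alpha = item_supports[alpha - 1]  # support of the alpha-th most frequent item
    params = []
    for beta in range(1, gamma + 1):
        params.append((beta, s_alpha / beta, alpha * beta))
    return params
```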
Figure 1 shows how the ratios $r_\beta^{(1)} / r_1^{(1)}$ and $r_\beta^{(2)} / r_1^{(2)}$ vary with $\beta$ for group 1 (left plot) and group 2 (right plot), for all considered datasets.
The growth in the number of rules is more regular for group 2, i.e., when the approach proposed in the present paper is adopted to filter data: $r_5^{(2)} / r_1^{(2)}$ ranges from 21.4 to 31.8, while $r_5^{(1)} / r_1^{(1)}$ ranges from 39.6 to 3994.3. The quantitative results displayed in Figure 1 are also reported in Table 1.
An upper bound for $r_\beta^{(2)}$ can be computed according to (7) (with $y = 2$):
$$r_\beta^{(2)} \le 2\binom{\alpha\beta}{2} = 100\beta^2 - 10\beta. \tag{21}$$
Also, it is possible to bound $r_\beta^{(2)}$ more accurately by taking into account the number $r_1^{(2)}$ of rules generated for $\beta = 1$: the value of the bound is then computed by substituting $y = 2$, $f_1 = \alpha$ and $f_2 = \alpha\beta$ in (20):
$$r_\beta^{(2)} \le r_1^{(2)} + 2\left[\binom{\alpha\beta}{2} - \binom{\alpha}{2}\right] = r_1^{(2)} + 100\beta^2 - 10\beta - 90, \tag{22}$$
since $\alpha = 10$. The accuracy of the upper bound for the number of association rules generated at the $\beta$-th iteration can be further improved by taking into account the number of association rules generated at the $(\beta - 1)$-th iteration of the test instead of the first one (that is, the one with $\beta = 1$). This corresponds to substituting $y = 2$, $f_1 = \alpha(\beta - 1)$ and $f_2 = \alpha\beta$ in (20):
$$r_\beta^{(2)} \le r_{\beta-1}^{(2)} + 2\left[\binom{\alpha\beta}{2} - \binom{\alpha(\beta-1)}{2}\right] = r_{\beta-1}^{(2)} + 200\beta - 110. \tag{23}$$
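For reference, the three bounds can be evaluated numerically for $\alpha = 10$ (the constants 10, 90, 110 and 200 already incorporate this value); `r_first` and `r_prev` are hypothetical placeholders for the number of rules observed at the first and at the previous iteration.

```python
def bound_absolute(beta):
    return 100 * beta ** 2 - 10 * beta                  # Equation (21)

def bound_given_first(r_first, beta):
    return r_first + 100 * beta ** 2 - 10 * beta - 90   # Equation (22)

def bound_given_previous(r_prev, beta):
    return r_prev + 200 * beta - 110                    # Equation (23)

for beta in range(1, 6):
    print(beta, bound_absolute(beta))  # 90, 380, 870, 1560, 2450
```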
Figure 2 displays the upper bounds for $r_\beta^{(2)} / r_1^{(2)}$ according to (21)–(23).

4. Discussion

Before summarizing the main contribution of the paper in the conclusions, we would like to draw attention to the aspects that are not yet captured by this initial set of experiments and, consequently, identify and suggest some possible areas for future work.
Firstly, since the adoption of the proposed technique is expected to facilitate the interaction between data scientists and domain-expert users, it would be valuable to gather human feedback on the use of the suggested support threshold computation technique, as well as to evaluate how users exploit the information provided by the increasingly refined upper bounds on the number of rules they will need to analyze and apply.
Also, the paper focuses on the definition, calculation, and experimental validation of bounds that give an idea of how many rules could be extracted. Yet, even though support itself is one of the metrics commonly associated with rule quality, a comparative analysis of the effect of the proposed technique on rule quality as a whole, from different angles (for example, confidence, lift, or symmetric confidence), could constitute an additional contribution stemming from the present work.
Future work will also be oriented towards the creation of effective item packages, enabling the management and evaluation of item hierarchies (constituted, for instance, by single items, possibly included in packages, in turn belonging to categories) by means of MBA techniques.

5. Conclusions

This paper aims to show to what extent an alternative support threshold determination, guided by the choice of the number of items to take into account, may be used to attenuate the rule explosion problem.
More specifically, experiments show, on five different datasets, that the rate of growth in the number of rules between tests with increasingly more permissive thresholds ranges from 21.4 to 31.8 with the proposed method, while it would range from 39.6 to 3994.3 if the traditional thresholding method were applied.
As shown in Section 3, addressing both well-known benchmarks and real datasets, the use of this technique induces better predictability in terms of the number of generated rules, which is particularly useful for users lacking domain-specific knowledge.

Author Contributions

Conceptualization, D.V. and M.M.; Methodology, D.V.; Investigation, D.V.; Writing—original draft, D.V.; Writing—review & editing, D.V. and M.M.; Supervision, M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Only one of the datasets described in the article is not publicly available, while the others can be downloaded from public repositories.

Conflicts of Interest

Author Damiano Verda and Marco Muselli were employed by the company Rulex Innovation Labs. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Kumar, P.; Manisha, K.; Nivetha, M. Market Basket Analysis for Retail Sales Optimization. In Proceedings of the 2024 Second International Conference on Emerging Trends in Information Technology and Engineering (ICETITE), Vellore, India, 22–23 February 2024; IEEE: New York, NY, USA, 2024; pp. 1–7.
  2. Agrawal, R.; Imieliński, T.; Swami, A. Mining association rules between sets of items in large databases. In Proceedings of the ACM SIGMOD Record, ACM, Washington, DC, USA, 26–28 May 1993; Volume 22, pp. 207–216.
  3. Agrawal, R.; Srikant, R. Fast algorithms for mining association rules. In Proceedings of the 20th International Conference on Very Large Data Bases, VLDB, Santiago de Chile, Chile, 12–15 September 1994; Volume 1215, pp. 487–499.
  4. Han, J.; Pei, J.; Yin, Y. Mining frequent patterns without candidate generation. In Proceedings of the ACM SIGMOD Record, ACM, Dallas, TX, USA, 16–18 May 2000; Volume 29, pp. 1–12.
  5. Zaki, M.J.; Parthasarathy, S.; Ogihara, M.; Li, W. Parallel algorithms for discovery of association rules. Data Min. Knowl. Discov. 1997, 1, 343–373.
  6. Brijs, T. Retail market basket data set. In Proceedings of the Workshop on Frequent Itemset Mining Implementations (FIMI'03), Melbourne, FL, USA, 19 November 2003.
  7. Omol, E.J.; Onyango, D.A.; Mburu, L.W.; Abuonji, P.A. Apriori algorithm and market basket analysis to uncover consumer buying patterns: Case of a Kenyan supermarket. Buana Inf. Technol. Comput. Sci. (BIT CS) 2024, 5, 51–63.
  8. Wahidi, N.; Ismailova, R. A market basket analysis of seven retail branches in Kyrgyzstan using an Apriori algorithm. Int. J. Bus. Intell. Data Min. 2025, 26, 236–255.
  9. Alavi, F.; Hashemi, S. DFP-SEPSF: A dynamic frequent pattern tree to mine strong emerging patterns in streamwise features. Eng. Appl. Artif. Intell. 2015, 37, 54–70.
  10. Alcan, D.; Ozdemir, K.; Ozkan, B.; Mucan, A.Y.; Ozcan, T. A comparative analysis of apriori and fp-growth algorithms for market basket analysis using multi-level association rule mining. In Global Joint Conference on Industrial Engineering and Its Application Areas; Springer: Cham, Switzerland, 2022; pp. 128–137.
  11. Park, J.S.; Yu, P.S.; Chen, M.S. Mining association rules with adjustable accuracy. In Proceedings of the Sixth International Conference on Information and Knowledge Management, ACM, Las Vegas, NV, USA, 10–14 November 1997; pp. 151–160.
  12. Tseng, M.C.; Lin, W.Y. Efficient mining of generalized association rules with non-uniform minimum support. Data Knowl. Eng. 2007, 62, 41–64.
  13. Vo, B.; Le, B. Fast algorithm for mining generalized association rules. Int. J. Database Theory Appl. 2009, 2, 1–12.
  14. Baralis, E.; Cagliero, L.; Cerquitelli, T.; D'Elia, V.; Garza, P. Support driven opportunistic aggregation for generalized itemset extraction. In Proceedings of the Intelligent Systems (IS), 2010 5th IEEE International Conference, London, UK, 7–9 July 2010; IEEE: New York, NY, USA, 2010; pp. 102–107.
  15. Hu, Y.H.; Chen, Y.L. Mining association rules with multiple minimum supports: A new mining algorithm and a support tuning mechanism. Decis. Support Syst. 2006, 42, 1–24.
  16. Kuo, R.; Shih, C. Association rule mining through the ant colony system for National Health Insurance Research Database in Taiwan. Comput. Math. Appl. 2007, 54, 1303–1318.
  17. Dorigo, M.; Birattari, M. Ant colony optimization. In Encyclopedia of Machine Learning; Springer: Cham, Switzerland, 2010; pp. 36–39.
  18. Kuo, R.J.; Chao, C.M.; Chiu, Y. Application of particle swarm optimization to association rule mining. Appl. Soft Comput. 2011, 11, 326–336.
  19. Kennedy, J. Particle swarm optimization. In Encyclopedia of Machine Learning; Springer: Cham, Switzerland, 2010; pp. 760–766.
  20. Yun, H.; Ha, D.; Hwang, B.; Ho Ryu, K. Mining association rules on significant rare data using relative support. J. Syst. Softw. 2003, 67, 181–191.
  21. Salam, A.; Khayal, M.S.H. Mining top-k frequent patterns without minimum support threshold. Knowl. Inf. Syst. 2012, 30, 57–86.
  22. Sadhasivam, K.S.; Angamuthu, T. Mining rare itemset with automated support thresholds. J. Comput. Sci. 2011, 7, 394.
  23. Kirsch, A.; Mitzenmacher, M.; Pietracaprina, A.; Pucci, G.; Upfal, E.; Vandin, F. An efficient rigorous approach for identifying statistically significant frequent itemsets. J. ACM 2012, 59, 12.
  24. Tong, Y.; Chen, L.; Yu, P.S. UFIMT: An uncertain frequent itemset mining toolbox. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, Beijing, China, 12–16 August 2012; pp. 1508–1511.
  25. Vats, S.; Sharma, V.; Bajaj, M.; Singh, S.; Sagar, B. Advanced frequent itemset mining algorithm (AFIM). In Uncertainty in Computational Intelligence-Based Decision Making; Elsevier: Amsterdam, The Netherlands, 2025; pp. 187–201.
  26. Sadeequllah, M.; Rauf, A.; Rehman, S.U.; Alnazzawi, N. Probabilistic support prediction: Fast frequent itemset mining in dense data. IEEE Access 2024, 12, 39330–39350.
  27. Feller, W. An Introduction to Probability Theory and Its Applications; John Wiley & Sons: Hoboken, NJ, USA, 2008; Volume 2.
  28. Hahsler, M.; Hornik, K.; Reutterer, T. Implications of probabilistic data modeling for mining association rules. In From Data and Information Analysis to Knowledge Engineering; Springer: Cham, Switzerland, 2006; pp. 598–605.
  29. Chen, D.; Sain, S.L.; Guo, K. Data mining for the online retail industry: A case study of RFM model-based customer segmentation using data mining. J. Database Mark. Cust. Strategy Manag. 2012, 19, 197–208.
Figure 1. Number of generated rules with respect to the thresholding factor. Group 1 tests are represented in the left plot (a); group 2 results are displayed in the right one (b). The x axis reports the tested values of β; the y axis reports the ratio between the number of rules for a given β and the number of rules for β = 1, for group 1 (a) and group 2 (b) tests.
Figure 2. The number of generated rules is upper-bounded in group 2 for all datasets: "upper bound", "upper bound given first iteration", and "upper bound given previous iteration" identify the bounds computed by means of Equations (21)–(23), respectively.
Table 1. Number of rules generated with respect to the thresholding factor: $r_\beta^{(1)}/r_1^{(1)}$ and $r_\beta^{(2)}/r_1^{(2)}$ values observed in group 1 and group 2 tests.

| Dataset | β | $r_\beta^{(1)}/r_1^{(1)}$ | $r_\beta^{(2)}/r_1^{(2)}$ |
| Groceries | 1 | 1 | 1 |
| Groceries | 2 | 9.38 | 3.83 |
| Groceries | 3 | 19.27 | 9.38 |
| Groceries | 4 | 33.44 | 17.37 |
| Groceries | 5 | 39.64 | 26.75 |
| Foodmart | 1 | 1 | 1 |
| Foodmart | 2 | 2189.70 | 3.87 |
| Foodmart | 3 | 2194.65 | 8.98 |
| Foodmart | 4 | 2194.65 | 16.52 |
| Foodmart | 5 | 2194.65 | 26.17 |
| Retail | 1 | 1 | 1 |
| Retail | 2 | 4.93 | 3.63 |
| Retail | 3 | 26.38 | 8.35 |
| Retail | 4 | 75.87 | 13.69 |
| Retail | 5 | 155.25 | 21.39 |
| Online Retail | 1 | 1 | 1 |
| Online Retail | 2 | 148.00 | 3.91 |
| Online Retail | 3 | 812.53 | 9.16 |
| Online Retail | 4 | 2218.58 | 16.80 |
| Online Retail | 5 | 3994.31 | 26.43 |
| Supermarket Chain Data | 1 | 1 | 1 |
| Supermarket Chain Data | 2 | 25.51 | 4.41 |
| Supermarket Chain Data | 3 | 186.42 | 10.46 |
| Supermarket Chain Data | 4 | 601.87 | 19.89 |
| Supermarket Chain Data | 5 | 1344.78 | 31.77 |
