The mining of fuzzy association rules has recently been proposed in the literature. Many of the ensuing algorithms were developed for a single processor or machine. They can be further enhanced by exploiting the scalability of parallel or distributed computer systems. The increasing ability to collect data, and the resulting huge data volumes, make the exploitation of parallel or distributed systems increasingly important to the success of fuzzy association rule mining algorithms. This chapter proposes a new distributed algorithm, called DFARM, for mining fuzzy association rules from very large databases. Unlike many existing algorithms, which adopt the support-confidence framework so that an association is considered interesting if it satisfies some user-specified minimum percentage thresholds, DFARM embraces an objective measure to distinguish interesting associations from uninteresting ones. This measure is defined as a function of the difference between the actual and the expected number of tuples characterized by different linguistic variables (attributes) and linguistic terms (attribute values). Given a database, DFARM first divides it into several horizontal partitions and assigns them to different sites in a distributed system. Each site then scans its own database partition to obtain the number of tuples characterized by different linguistic variables and linguistic terms (i.e., the local counts) and exchanges these local counts with all the other sites to find the global counts. Based on the global counts, the values of the interestingness measure are computed, and the sites can uncover interesting associations. By repeating this process of counting, exchanging counts, and calculating the interestingness measure, DFARM unveils the interesting associations hidden in the data. We implemented DFARM in a distributed system and used a popular benchmark data set to evaluate its performance.
The results show that it has very good size-up, speedup, and scale-up performance. We also evaluated the effectiveness of the proposed interestingness measure on two synthetic data sets. The experimental results show that it is very effective in differentiating between interesting and uninteresting associations.
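The count-then-exchange step described above can be illustrated with a minimal single-process sketch. The function and variable names below are illustrative, not taken from the chapter; a real deployment would exchange the local counts over the network between sites rather than in one process.

```python
from collections import Counter

def local_counts(partition):
    """Count tuples per (linguistic variable, linguistic term) pair in one partition."""
    counts = Counter()
    for tuple_ in partition:
        for variable, term in tuple_.items():
            counts[(variable, term)] += 1
    return counts

def global_counts(partitions):
    """Merge the local counts from every site, as the exchange step would."""
    total = Counter()
    for partition in partitions:
        total += local_counts(partition)
    return total

# Three horizontal partitions, as if assigned to three sites.
partitions = [
    [{"age": "young", "income": "low"}],
    [{"age": "young", "income": "high"}],
    [{"age": "old", "income": "low"}],
]
print(global_counts(partitions)[("age", "young")])  # 2
```

The global counts obtained this way are what the interestingness measure is computed from at each site.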
Key Terms in this Chapter
Fuzzy Partitioning: It is a methodology for generating fuzzy sets to represent the underlying data. Fuzzy partitioning techniques can be classified into three categories: grid partitioning, tree partitioning, and scatter partitioning. Of the different fuzzy partitioning methods, grid partitioning is the most commonly used in practice, particularly in system control applications. Grid partitioning forms a partition by dividing the input space into several fuzzy slices, each of which is specified by a membership function for each feature dimension.
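Grid partitioning of one feature dimension can be sketched as follows, using triangular membership functions; the term names and interval bounds are illustrative assumptions, not values from the chapter.

```python
def triangular(a, b, c):
    """Return a triangular membership function with feet at a and c and peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# A grid partition of the hypothetical dimension "age" (0-100)
# into three overlapping fuzzy slices.
age_terms = {
    "young":       triangular(-1, 0, 50),
    "middle_aged": triangular(20, 50, 80),
    "old":         triangular(50, 100, 101),
}
print(age_terms["middle_aged"](35))  # 0.5
```

A full grid partition repeats this for every feature dimension, so each cell of the grid is specified by one membership function per dimension.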
Negative Association Rule: A negative association rule’s antecedent and consequent show a negative association. If its antecedent is satisfied, it is unlikely that its consequent will be satisfied.
Fuzzy Association Rule: A fuzzy association rule involves linguistic terms (fuzzy sets) in its antecedent and/or consequent.
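As a sketch of what "linguistic terms in the antecedent" means in practice, the degree to which a tuple satisfies a fuzzy antecedent such as "age is young AND income is low" can be computed by combining membership degrees with the min t-norm. The membership functions here are hypothetical placeholders.

```python
def young(age):
    # Hypothetical membership function: fully young below 20, not young above 50.
    return max(0.0, min(1.0, (50 - age) / 30))

def low(income):
    # Hypothetical membership function for "income is low".
    return max(0.0, min(1.0, (40 - income) / 20))

def antecedent_degree(tuple_):
    """Degree to which a tuple satisfies 'age is young AND income is low'."""
    # The min t-norm combines the membership degrees of the fuzzy terms.
    return min(young(tuple_["age"]), low(tuple_["income"]))

print(antecedent_degree({"age": 30, "income": 25}))
```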
Positive Association Rule: A positive association rule’s antecedent and consequent show a positive association. If its antecedent is satisfied, it is likely that its consequent will be satisfied.
Adjusted Residual: It is a statistic defined as a function of the difference between the actual and the expected number of tuples characterized by different linguistic variables (attributes) and linguistic terms (attribute values). It can be used as an interestingness measure.
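A minimal sketch of the adjusted residual for one cell of a contingency table, using the standard statistical definition (the chapter only states that the measure is a function of the actual and expected counts, so take the exact normalization here as an assumption):

```python
import math

def adjusted_residual(observed, row_total, col_total, n):
    """Adjusted residual of one cell in a contingency table.

    observed: actual count of tuples with this (term_i, term_j) pair;
    the expected count under independence is row_total * col_total / n.
    """
    expected = row_total * col_total / n
    variance = expected * (1 - row_total / n) * (1 - col_total / n)
    return (observed - expected) / math.sqrt(variance)

# An absolute value above 1.96 suggests significance at the 5% level.
d = adjusted_residual(observed=60, row_total=80, col_total=100, n=200)
print(round(d, 2))  # 5.77
```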
Interestingness Measure: An interestingness measure represents how interesting an association is. Support is an example of an interestingness measure.
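Support, the example measure mentioned above, is the fraction of transactions that contain every item of a rule; a minimal sketch with illustrative transaction data:

```python
def support(rule_items, transactions):
    """Fraction of transactions containing every item in rule_items."""
    hits = sum(1 for t in transactions if rule_items <= t)
    return hits / len(transactions)

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
]
print(support({"bread", "milk"}, transactions))  # 0.5
```

Under the support-confidence framework, a rule is kept only if its support exceeds a user-specified minimum threshold, which is exactly the practice DFARM's objective measure replaces.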
Associative Classification: It is a classification method based on association rules. An association rule with the class label as its consequent provides a clue that a tuple satisfying its antecedent belongs to a specific class. It can therefore be used as the basis of classification.
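The idea of classifying a tuple by the rule whose antecedent it satisfies can be sketched as follows; the rules, attribute encodings, and default class here are illustrative assumptions.

```python
# Hypothetical rules: (antecedent itemset, class label, confidence),
# assumed sorted by descending confidence.
rules = [
    ({"outlook=sunny", "humidity=high"}, "no",  0.90),
    ({"outlook=overcast"},               "yes", 0.85),
]

def classify(tuple_items, rules, default="yes"):
    """Assign the class of the first (highest-confidence) rule whose
    antecedent is satisfied by the tuple; fall back to a default class."""
    for antecedent, label, _confidence in rules:
        if antecedent <= tuple_items:
            return label
    return default

print(classify({"outlook=sunny", "humidity=high", "wind=weak"}, rules))  # no
```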