Classification Algorithms and Dataflow Implementation

DOI: 10.4018/978-1-7998-8350-0.ch003

Abstract

Implementing data mining methods on dataflow computers makes parallelism easy to exploit, but it also faces numerous obstacles. Currently developed algorithms cannot be used in their existing form because they are tailored to the von Neumann computer model, which assumes sequential computation and intensive use of memory. This is one of the reasons why, at the moment this text is written, the open literature contains no fully developed classification algorithms for dataflow computer models. This chapter summarizes characteristics that can serve as directions for the future construction of such algorithms and outlines drafts of two implementations of the k-nearest neighbor algorithm.

Introduction

The dataflow paradigm is used in an increasing number of software products (Maxeler Technologies, 2021). Its characteristics have proven practical in various kinds of applications that process large amounts of data (for example, Google's Cloud Dataflow). On the other hand, data mining methods and analyses based on data mining are nowadays incorporated in numerous business projects and applications, as well as in science and research. Supporting data mining methods and algorithms (especially classification methods and algorithms, which are the most prevalent in applications) is very important for the future development of dataflow for at least two reasons: (1) within the dataflow paradigm itself, the possibility of implementing various data mining methods on dataflow machines opens an increasing number of application areas, and (2) a wider presence on the market means a higher level of profit for manufacturers of dataflow computers. Some initial steps have already been taken (see, for example, Guo et al., 2016), but due to the complexity of implementation, full use of data mining methods in the dataflow paradigm is still a long way off.

The development of dataflow systems started from the first “pure” dataflow architectures (static and dynamic), went through the introduction of new concepts during the 1980s (Hurson & Kavi, 2007; Treleaven, Hopkins, & Rautenbach, 1982), and evolved into today's dataflow computers, which include hybrid architectures combining “pure” dataflow components with associated control-flow parts (Ye et al., 2020). Dataflow programming languages developed in parallel with dataflow computers: starting from “pure” dataflow programming languages (Whiting & Pascoe, 1994), through mixtures of dataflow and another paradigm (for example, the functional paradigm), up to contemporary, specially developed variants of “ordinary” languages and their compilers used to generate dataflow code, such as Maxeler Java with maxCompiler (Milutinović, Salom, Trifunović, & Giorgi, 2015), or the Wave tool used to compile programs into a dataflow graph (Nicol, 2017a).
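To illustrate the kind of source such compilers consume, the following is a minimal sketch of a kernel in MaxJ, the Maxeler Java dialect compiled by maxCompiler into a dataflow graph. The class name, stream names, and numeric type are illustrative assumptions rather than code from the chapter, and the sketch assumes the standard maxCompiler kernel API with MaxJ operator overloading on DFEVar.

    // Minimal MaxJ kernel sketch (illustrative names; assumes the standard
    // maxCompiler kernel API and MaxJ operator overloading on DFEVar).
    import com.maxeler.maxcompiler.v2.kernelcompiler.Kernel;
    import com.maxeler.maxcompiler.v2.kernelcompiler.KernelParameters;
    import com.maxeler.maxcompiler.v2.kernelcompiler.types.base.DFEVar;

    class ScaleKernel extends Kernel {
        ScaleKernel(KernelParameters parameters) {
            super(parameters);
            // One value of the input stream enters the graph per tick.
            DFEVar x = io.input("x", dfeFloat(8, 24));
            // The arithmetic is compiled into a pipelined hardware graph,
            // not executed as a sequence of instructions.
            DFEVar y = x * x + 1.0;
            io.output("y", y, dfeFloat(8, 24));
        }
    }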

Because modern commercial dataflow systems include a control-flow CPU (and sometimes an additional GPU), all currently known data mining algorithms can, as a consequence, be implemented on them in some way. Such an implementation can be called an “implementation on dataflow” only nominally: it brings no program improvement or significant advantage over solving the identical problem on control-flow computers.

At the moment when this text is written, an implementation of complete data mining methods, with all corresponding options, solely on dataflow computers and without the use of control-flow components does not exist in the open literature. Only the first steps have been taken, including implementations of Euclidean distance calculation, correlation, and elementary linear regression (see Maxeler Technologies, 2021). Data mining methods often involve very complex calculations, data preprocessing, work with different data types and value manipulation, anomaly detection, and (hardware) operations that are not always supported by available dataflow hardware. These facts apply to methods from all three groups of data mining techniques: classification, clustering, and association rules.
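For reference, the elementary Euclidean distance computation mentioned above takes only a few lines in plain (control-flow) Java; on a dataflow engine the same accumulation would instead be expressed as a pipeline over the stream of feature values. The method name and signature are illustrative, not taken from the chapter.

    // Squared Euclidean distance between two feature vectors
    // (plain Java, control-flow form of the elementary computation above).
    static double squaredEuclideanDistance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double diff = a[i] - b[i];
            // On a dataflow engine this accumulation would become a pipelined adder tree.
            sum += diff * diff;
        }
        return sum;
    }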

Instead of implementing complete algorithms, it is possible to perform a partial implementation on dataflow, while the parts of the algorithms that are not suitable for dataflow can be implemented, in accordance with the control-flow paradigm, on the control-flow components of a (hybrid) dataflow computer. Emphasis will be placed on the parts of algorithms whose characteristics better match the dataflow paradigm and bring new quality, i.e., parts whose execution is potentially more efficient on dataflow machines than on the classical (von Neumann) control-flow architecture. Here, “more efficient” covers, in addition to the lower energy consumption expected of dataflow computers, better program performance or easier writing of correct programs. A short description of the Maxeler dataflow architecture is added at the end of this chapter in order to clarify and illustrate the basic concepts and the architecture of currently available commercial systems.
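As a concrete illustration of this partitioning, the sketch below mocks up in plain Java how a k-nearest-neighbor classifier could be split: the regular, data-parallel distance computation is the natural candidate for the dataflow engine, while the selection of the k smallest distances and the majority vote remain on the control-flow CPU. All class, method, and variable names are illustrative assumptions and not code from the chapter.

    // Control-flow mock-up of the hybrid partitioning described above,
    // applied to k-nearest neighbors.
    import java.util.*;

    class HybridKnnSketch {
        // Stand-in for the part that would be offloaded to the dataflow engine:
        // one (squared Euclidean) distance per training record, computed independently.
        static double[] distancesToQuery(double[][] trainingSet, double[] query) {
            double[] d = new double[trainingSet.length];
            for (int i = 0; i < trainingSet.length; i++) {
                double s = 0.0;
                for (int j = 0; j < query.length; j++) {
                    double diff = trainingSet[i][j] - query[j];
                    s += diff * diff;
                }
                d[i] = s;
            }
            return d;
        }

        // Control-flow part: pick the k nearest neighbors and vote on their labels.
        static int classify(double[][] trainingSet, int[] labels, double[] query, int k) {
            double[] d = distancesToQuery(trainingSet, query);
            Integer[] idx = new Integer[d.length];
            for (int i = 0; i < idx.length; i++) idx[i] = i;
            Arrays.sort(idx, Comparator.comparingDouble(i -> d[i]));
            Map<Integer, Integer> votes = new HashMap<>();
            for (int i = 0; i < k && i < idx.length; i++)
                votes.merge(labels[idx[i]], 1, Integer::sum);
            return Collections.max(votes.entrySet(), Map.Entry.comparingByValue()).getKey();
        }
    }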

Key Terms in this Chapter

Execution Graph: Graph of operations and data dependencies that describes a program in the dataflow paradigm.

Von Neumann: The conventional programming paradigm, based on sequential execution of instructions and intensive use of memory.

Functional Programming: Programming model based on expressions and closures.

Dataflow: Programming paradigm based on execution graphs.

DFE: Dataflow engine which reconfigures according to the execution graph.
