Neural Network-Based Process Analysis in Sport

Juergen Perl
Copyright: © 2009 | Pages: 7
DOI: 10.4018/978-1-59904-849-9.ch177

Abstract

Processes in sport, such as motions or games, are influenced by communication, interaction, adaptation, and spontaneous decisions. On the one hand, such processes are therefore often fuzzy and unpredictable and so have not yet been dealt with extensively. On the other hand, most of these processes are structurally determined, at least roughly, by intention, rules, and context conditions, and can therefore be classified by means of information patterns deduced from data models of the processes. Self-organizing neural networks of the Kohonen Feature Map (KFM) type help to classify information patterns – either by mapping whole processes to corresponding neurons (see Perl & Lames, 2000; McGarry & Perl, 2004) or by mapping process steps to neurons, which can then be connected by trajectories that serve as process patterns for further analyses (see the examples below). In either case, the dimension of the original data (i.e. the number of contained attributes) is reduced to the dimension of the representing neuron (normally 2 or 3), which makes the data much easier to deal with. Additionally, extensions of the KFM approach are introduced that can flexibly adjust the net to dynamically changing training situations. Moreover, these extensions allow for simulating adaptation processes such as learning or tactical behaviour. Finally, a current project is introduced in which tactical processes in soccer are analysed under the aspect of simulation-based optimization.

Main Focus Of The Chapter

Artificial Neural Networks

Current developments in the fields of Soft Computing and Computational Intelligence demonstrate how information patterns can be extracted from data collections by means of fuzziness, similarity, and learning, of which the approach of Artificial Neural Networks gives an impressive example. In particular, self-organizing neural networks of the KFM type (Kohonen Feature Map) play an important role in aggregating input data into clusters or types by means of a self-organized similarity analysis (Kohonen, 1995).
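
As a rough illustration of this self-organized similarity analysis, the following minimal Python/NumPy sketch trains a small KFM grid on a set of attribute vectors. The grid size, the decay schedules, and the Gaussian neighbourhood function are illustrative assumptions, not the implementation used in the chapter.

```python
import numpy as np

def train_kfm(data, rows=10, cols=10, epochs=20,
              lr0=0.5, radius0=3.0, seed=0):
    """Train a minimal Kohonen Feature Map on attribute vectors.

    data: array of shape (n_samples, n_attributes); returns the trained
    weight grid of shape (rows, cols, n_attributes).
    """
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # each neuron holds an attribute vector ("entry") of the input dimension
    weights = rng.random((rows, cols, dim))
    # grid coordinates, used by the neighbourhood function
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # learning rate and radius decay over the training
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            radius = radius0 * (1.0 - frac) + 1e-9  # avoid division by zero
            # best-matching unit: neuron whose entry is most similar to x
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood around the BMU on the grid
            grid_d2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2.0 * radius ** 2))
            # move all entries towards x, strongest at the BMU
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights
```

After training, each attribute vector can be mapped to the grid position of its most similar neuron, which is the basis of the tests, types, and clusters defined below.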

Key Terms in this Chapter

DyCoNG: The DyCoNG concept combines the concepts of DyCoN and GNG and completes them by dynamically generating “quality” neurons in order to represent relevant and rare information during the training process (Perl et al., 2006).

Test: In a test, an attribute vector is fed to the network to determine its type – i.e. the neuron it corresponds to.
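
In code terms, a test is a nearest-entry lookup over the trained grid; a minimal sketch matching the training function above (the Euclidean metric is an assumption):

```python
import numpy as np

def test_kfm(weights, x):
    """Return the type of attribute vector x: the grid position of the
    neuron whose entry is nearest to x."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)
```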

Type: The collection of attribute vectors that, after training, is represented by a neuron is called its type. The representing neuron itself can also be called the type.

GNG: A GNG is a network without a fixed neuron topology that is able to generate new neurons on demand. A GNG can therefore dynamically adapt its neuron structure to the amount and structure of the trained information (Fritzke, 1997).
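
To illustrate how such on-demand growth can work, here is a deliberately simplified sketch of the insertion step alone; the edge aging and error decay of the full GNG algorithm (Fritzke, 1997) are omitted, and all names and splitting factors are assumptions:

```python
import numpy as np

def grow(neurons, errors, edges):
    """One simplified neuron-insertion step of a GNG-style network.

    neurons: list of attribute vectors (NumPy arrays); errors: accumulated
    quantization error per neuron; edges: set of frozenset({i, j})
    neighbourhood links. Assumes the worst neuron has at least one edge.
    """
    # neuron with the largest accumulated error ...
    q = int(np.argmax(errors))
    # ... and its worst neighbour
    nbrs = [j for e in edges if q in e for j in e if j != q]
    f = max(nbrs, key=lambda j: errors[j])
    # insert a new neuron halfway between them and rewire the links
    r = len(neurons)
    neurons.append((neurons[q] + neurons[f]) / 2.0)
    edges.discard(frozenset({q, f}))
    edges.update({frozenset({q, r}), frozenset({f, r})})
    # split the accumulated error between the affected neurons
    errors[q] *= 0.5
    errors[f] *= 0.5
    errors.append((errors[q] + errors[f]) / 2.0)
    return neurons, errors, edges
```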

Information Pattern: An information pattern is a structure of information units, such as a vector or matrix of numbers, a stream of video frames, or a distribution of probabilities.

Training: During training, attribute vectors are fed to the network, each being mapped to the neuron whose entry is most similar to it. After training, the space of training attribute vectors is (more or less) completely represented by the neurons of the network – meaning that every training attribute vector belongs to the neuron whose entry it is most similar to.
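
In standard KFM notation (Kohonen, 1995), one training step can be written as follows; note that in a DyCoN (see below) the rate and radius are self-controlled per neuron rather than following a global schedule:

```latex
% One Kohonen training step: map vector x to its best-matching
% neuron c, then move every entry w_j towards x, weighted by a
% neighbourhood function h_{cj}(t) that shrinks with the grid
% distance between j and c, and a decaying learning rate alpha(t).
\[
  c = \arg\min_j \,\lVert x - w_j \rVert, \qquad
  w_j \leftarrow w_j + \alpha(t)\, h_{cj}(t)\,(x - w_j)
\]
```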

DyCoN: A DyCoN is a KFM-type network in which each neuron contains an individual PerPot-based self-control of its activation radius and learning rate. The DyCoN concept enables continuous learning and therefore supports continuous training and testing, training in phases and with generated data, online adaptation during tests and analyses, and flexible adaptation to new information patterns (Perl, 2002a). (Note that DyCoN is used commercially; technical details therefore cannot be published and are kept confidential by DyCoS GmbH (www.dycos.net).)

PerPot: PerPot is a model of dynamic adaptation in which an input flow feeds an internal strain potential as well as an internal response potential, from which an output potential is fed by specifically delayed flows. Since the strain flow is negative and the response flow is positive, resulting in an oscillating, stabilizing adaptation, the model is called antagonistic (Perl, 2002a).
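
Since the published PerPot calibration is beyond the scope of this glossary, the following Python sketch only illustrates the antagonistic mechanism described above; the delay constants and the simple proportional flow rule are assumptions:

```python
def perpot(load, delay_strain=4.0, delay_response=2.0):
    """Illustrative sketch of PerPot-style antagonistic adaptation.

    Each input raises both an internal strain and an internal response
    potential; both drain into the output potential with their own delays,
    the response flow positively, the strain flow negatively.
    """
    strain = response = 0.0
    output = 1.0
    trace = []
    for x in load:
        strain += x            # input feeds the strain potential ...
        response += x          # ... and the response potential
        # delayed flows: the larger the delay, the slower the drain
        strain_flow = strain / delay_strain
        response_flow = response / delay_response
        strain -= strain_flow
        response -= response_flow
        # antagonism: response raises the output, strain lowers it
        output += response_flow - strain_flow
        trace.append(output)
    return trace
```

Feeding a constant load (e.g. perpot([0.5] * 20)) shows the faster response flow outpacing the slower strain flow at first, after which the two flows balance and the output stabilizes; the oscillating behaviour of the full model depends on the published calibration.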

KFM: A KFM consists of a (normally 2-dimensional) matrix of neurons, each of which contains a vector of attributes. Two neurons are called similar if the (Euclidean) distance of their attribute vectors is below a given threshold. Two neurons are called neighboured if they are next to each other with regard to the given net topology (see Kohonen, 1995).

Cluster: A collection of neurons is called a cluster if they are similar and locally neighboured. Due to the topology-preserving property of KFM training, classes of similar training vectors are mapped to clusters of neighboured neurons.
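
Given a trained grid such as the one sketched above, clusters can be extracted by flood-filling across grid neighbours whose entries are pairwise similar; a minimal sketch, with the similarity threshold left as a free parameter:

```python
import numpy as np
from collections import deque

def clusters(weights, threshold):
    """Group the neurons of a trained KFM grid into clusters of
    neighboured, pairwise-similar neurons; returns sets of grid positions."""
    rows, cols, _ = weights.shape
    seen, result = set(), []
    for start in ((i, j) for i in range(rows) for j in range(cols)):
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        seen.add(start)
        while queue:
            i, j = queue.popleft()
            comp.add((i, j))
            # visit the four grid neighbours, following similar entries
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < rows and 0 <= nj < cols
                        and (ni, nj) not in seen
                        and np.linalg.norm(weights[i, j] - weights[ni, nj])
                            < threshold):
                    seen.add((ni, nj))
                    queue.append((ni, nj))
        result.append(comp)
    return result
```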
