Analysis of Large-Scale OMIC Data Using Self Organizing Maps

Hans Binder (Interdisciplinary Center for Bioinformatics, University of Leipzig, Germany) and Henry Wirth (Interdisciplinary Center for Bioinformatics, University of Leipzig, Germany)
Copyright © 2015 | Pages: 12
DOI: 10.4018/978-1-4666-5888-2.ch157
Chapter Preview


The SOM Portraying Method

The data produced by high-throughput bioanalytics is usually given as a feature matrix of dimension N × M (see Figure 1), where N is the number of features measured per sample and M is the number of samples, referring, e.g., to different treatments, time points, or individuals. As a convention, each row of the matrix will be termed the profile of the respective feature. The columns, on the other hand, will be termed states, each referring to one of the conditions studied. In general, the number of features can range from several thousand to millions, depending on the experimental screening technique used. Typically, this number largely exceeds the number of states studied, i.e., N >> M. SOM machine learning aims at reducing the number of relevant features by grouping the input data into clusters of appropriate size, and thus at transforming the matrix of input data into a matrix of so-called meta-data with a reduced number of meta-features, K << N (Figure 1a and b). In other words, the SOM maps the space of high-dimensional input data onto a meta-data space of reduced dimensionality.

Figure 1.

Two-step data compression using SOM machine learning: First, the input data are transformed into meta-data, where each meta-feature is trained such that its profile resembles that of a cluster of input features. Second, the meta-data are clustered into ‘spots’ of similar meta-features. The data-reduction topology of the SOM resembles that of neural networks, as shown on the right.
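The first compression step described above, training grid units whose profiles come to resemble clusters of input features, can be sketched in Python with NumPy. This is only an illustrative toy implementation, not the authors' pipeline; the function name `train_som`, the grid size, and all parameter values are assumptions chosen for the example, and the second step (clustering meta-features into ‘spots’) is not shown.

```python
import numpy as np

def train_som(data, grid_rows=4, grid_cols=4, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Toy SOM: each of the grid_rows*grid_cols units learns a 'meta-feature'
    profile of length M (the number of samples/states). Hypothetical sketch."""
    rng = np.random.default_rng(seed)
    n, m = data.shape
    k = grid_rows * grid_cols
    # Unit coordinates on the 2-D grid, used by the neighborhood function.
    coords = np.array([(r, c) for r in range(grid_rows)
                       for c in range(grid_cols)], dtype=float)
    weights = rng.normal(size=(k, m))  # one profile per meta-feature
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighborhood
        for i in rng.permutation(n):
            x = data[i]
            # Best-matching unit: the meta-feature profile closest to x.
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            # Gaussian neighborhood on the grid pulls nearby units toward x too.
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    # Assign each input feature to its best-matching meta-feature.
    labels = np.array([np.argmin(((weights - x) ** 2).sum(axis=1)) for x in data])
    return weights, labels

# Example: 150 noisy feature profiles drawn around 3 prototype profiles (M = 8).
rng = np.random.default_rng(1)
protos = rng.normal(size=(3, 8))
data = np.repeat(protos, 50, axis=0) + 0.1 * rng.normal(size=(150, 8))
weights, labels = train_som(data, grid_rows=3, grid_cols=3, epochs=10)
# weights holds 9 meta-feature profiles; labels maps each of the 150 input
# features to one meta-feature, realizing the N -> K compression of Figure 1.
```

Note that the grid topology, not just the clustering, is what makes this a SOM: similar meta-feature profiles end up at neighboring grid positions, which is what enables the ‘portraits’ of the samples.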

Key Terms in this Chapter

Machine Learning: Branch of artificial intelligence addressing algorithms that derive knowledge, in terms of patterns, from empirical ‘input’ data. These patterns can be used to characterize and make predictions about previously unseen data.

Mass Spectrometry (MS): Analytical technique that produces spectra of the masses of the atoms or molecules comprising a sample.

SNP: Single nucleotide polymorphisms are persistent mutations of a single base pair in the genome. They are the most frequent source of DNA sequence variation among humans.

MALDI-TOF Mass Spectrometry: Matrix-assisted laser desorption/ionization time-of-flight MS is used to ionize large molecules such as proteins.

Neural Networks: Biological neural networks are thought to form the basal structure enabling brain function. Artificial neural networks adopt these wiring structures to solve machine-learning and pattern-recognition problems on computers.

Feature Selection: Techniques to extract statistically significant and therefore potentially biologically relevant features such as differentially expressed genes from a data set.

Meta-Feature: The prefix ‘meta’ is of Greek origin and means ‘among’, ‘after’, ‘beside’, or ‘with’. As a prefix, it is often used to identify something that provides information about something else. A metagene, for example, can serve as a surrogate for a number of ‘real’ single genes showing similar expression patterns. The term ‘meta-feature’ thus generalizes this view by defining one feature as representative of a set of single features.
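As an illustration of the metagene idea, a minimal sketch (assuming NumPy; the expression values are invented for the example) computes a surrogate profile as the mean of several similarly expressed genes:

```python
import numpy as np

# Expression profiles of three co-expressed genes (rows = genes,
# columns = samples/states); the numbers are invented for illustration.
genes = np.array([
    [1.0, 2.1, 3.0, 1.9],
    [0.9, 2.0, 3.2, 2.1],
    [1.1, 1.9, 2.9, 2.0],
])

# The metagene summarizes the group by its mean profile and can stand in
# for all three genes in downstream analysis.
metagene = genes.mean(axis=0)
```

Averaging is only one possible choice of summary; a trained SOM unit plays the same role, with its profile learned during training rather than computed as a simple mean.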

DNA Microarray: A key technology for high-throughput acquisition of molecular biological data. Microarrays allow massively parallel measurement of the abundances of many tens of thousands of molecules, e.g., mRNAs (expression arrays) or millions of SNPs (SNP arrays), in a given sample solution.

Omics: A useful concept in biology which informally annotates fields of study whose names end in ‘-omics’. Omics aims at the collective characterization and quantification of pools of biological molecules that translate into the structure, dynamics, and function of an organism. Accordingly, ‘genomics’ deals with the entirety of an organism's hereditary information coded in its DNA (also called the genome); ‘transcriptomics’ deals with the entirety of RNA transcribed from the DNA (transcriptome); ‘proteomics’ deals with the entirety of proteins translated from the mRNA (proteome); and ‘epigenomics’ addresses factors and mechanisms affecting the accessibility of genomic information through modifications of its structure, e.g., via DNA methylation or chemical modifications of the histones serving as DNA-packing proteins (epigenome). Historically, the first ‘omics’ term was ‘genome’, coined in 1920 by the botanist H. Winkler as a blend of the words ‘gene’ and ‘chromosome’ to annotate the chromosome set as the material foundation of an organism. In recent years, however, ‘omics’ terms have proliferated and are often used simply to annotate any field of study.
