Learning Binding Affinity from Augmented High Throughput Screening Data

Nicos Angelopoulos, Andreas Hadjiprocopis, Malcolm D. Walkinshaw
Copyright: © 2013 |Pages: 22
DOI: 10.4018/978-1-4666-3604-0.ch020

Abstract

In high throughput screening, a large number of molecules are tested against a single target protein to determine the binding affinity of each molecule to the target. The objective of such tests within the pharmaceutical industry is to identify potential drug-like lead molecules. Current technology allows thousands of molecules to be tested inexpensively. Linking such biological data with molecular properties is thus becoming a major goal in both academic and pharmaceutical research. This chapter details how screening data can be augmented with high-dimensional descriptor data and how machine learning techniques can be utilised to build predictive models. The pyruvate kinase protein is used as a model target throughout the chapter. Binding affinity data from a public repository provide binding information on a large set of screened molecules. The authors consider three machine learning paradigms: Bayesian model averaging, neural networks, and support vector machines. They apply algorithms from the three paradigms to three subsets of the data and comment on the relative merits of each. They also use the learnt models to classify the molecules in a large in-house molecular database that holds commercially available chemical structures from a large number of suppliers, and discuss the degree of agreement in the compounds selected and ranked by the three algorithms. Details of the technical challenges in such large scale classification, and the ability of each paradigm to cope with them, are put forward. The application of machine learning techniques to binding data augmented by high-dimensional descriptors can provide a powerful tool in compound testing. The emphasis of this work is on making very few assumptions or technical choices with regard to the machine learning techniques, so as to facilitate the application of such techniques by non-experts.

Background

Recent estimates suggest that there are about 3,000 druggable proteins (Zheng et al., 2006). These have structural features that allow small molecules to bind; when bound, such molecules modulate the biological function of their target. The object of ligand discovery is to identify molecules that will have a desired effect on the function of the protein.

Descriptor based approaches to ligand discovery have a long history in the field of QSAR (Hansch, Hoekman, & Gao, 1996). In early approaches, simple models built by experts were proposed as predictors of chemical activity. The well-known Lipinski Rule of Five (Lipinski, Lombardo, Dominy, & Feeney, 2001) describes properties common to the most successful drug molecules and depends on just four descriptors: solubility, molecular weight, and the numbers of hydrogen bond donor and acceptor atoms in the molecule. There are, however, hundreds of characterising descriptors for every molecule (Todeschini, Consonni, Mannhold, Kubinyi, & Timmerman, 2000), covering calculated physical properties such as polarizability, and shape properties describing, for example, the presence of ring structures or particular chemical groups in the molecule. More modern approaches use statistical methods to build models based on large numbers of descriptors. In most cases, preprocessing is used to perform feature selection, that is, to reduce the original number of descriptors to a subset that provides predictions approximately as good as those achieved with the whole set.
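The select-then-model workflow described above can be illustrated with a minimal sketch. The descriptor matrix and binding labels below are synthetic stand-ins, not the chapter's data, and the simple correlation filter and nearest-centroid classifier are illustrative choices, not the methods the authors use:

```python
import numpy as np

rng = np.random.default_rng(0)
n_mol, n_desc = 300, 100  # molecules x descriptors (synthetic)
X = rng.normal(size=(n_mol, n_desc))
# Pretend binding depends on only three of the hundred descriptors
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

# Feature selection: rank descriptors by |correlation| with the label,
# keep the top k (a simple univariate filter)
k = 10
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_desc)])
top = np.argsort(corr)[-k:]
Xs = X[:, top]

# Classify on the reduced descriptor set with a nearest-centroid rule
train, test = slice(0, 200), slice(200, None)
c0 = Xs[train][y[train] == 0].mean(axis=0)
c1 = Xs[train][y[train] == 1].mean(axis=0)
pred = (np.linalg.norm(Xs[test] - c1, axis=1)
        < np.linalg.norm(Xs[test] - c0, axis=1)).astype(int)
acc = (pred == y[test]).mean()
print(f"held-out accuracy on {k} of {n_desc} descriptors: {acc:.2f}")
```

Because the filter and the classifier are decoupled, the reduced descriptor set could equally be fed to any of the paradigms the chapter compares.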
