Overview of Cellular Computing-Basic Principles and Applications

Amit Das, Rakhi Dasgupta, Angshuman Bagchi
DOI: 10.4018/978-1-7998-1204-3.ch095

Abstract

Computers, owing to their raw speed and massive computing power, have long been used by biologists to expedite life science research, while several computational algorithms, such as artificial neural networks and genetic algorithms, have been inspired by the behavior of biological or cellular entities. To date, however, the two disciplines, life sciences and computer sciences, have mostly progressed separately, although recent studies increasingly highlight the impact of each discipline on the other. This chapter describes several features of biological systems that could be used to further optimize computer programs or could be engineered to provide the necessary computational capabilities in place of traditional silicon-chip systems. We also highlight the underlying challenges and avenues for implementation of cellular computing.
Chapter Preview

Introduction

Computers, especially modern-day desktops and workstations, owing to their massive processing power and speed, have traditionally been used to gain a deeper understanding of several biological phenomena. Techniques essential for high-throughput processing of genetic (gene sequencing), transcriptomic (microarray), proteomic (mass spectrometry) and imaging (microscopy and diagnostic techniques like CT scan, PET, MRI, etc.) data rely heavily on traditional silicon-chip-based computing systems (Kuznetsov et al., 2013). However, at the heart of every computer lies a processor that ultimately understands Boolean logic and relies on a direct or modified form of the von Neumann architecture. This simple core of a computer also resembles the behavior of an individual cell, which responds to certain input parameters (signals) by generating an output based on certain sets of logic. This common feature of computers and cells gives rise to a relatively new field of study, termed ‘Cellular Computing’ (CC), which focuses on harnessing the power and characteristics of biological entities such as cells for the purpose of computation, rather than using traditional in-silico, network-analogy-based methods for a better understanding of complex systems (Sipper, 1999; Teuscher, 2012). In CC, the word ‘computation’ does not restrict its meaning to ‘adding numbers’ only; rather, it describes outcomes achievable through direct or engineered (synthetic) exploitation of bio-molecular/cellular logic sets that have been evolutionarily selected.

The following two examples provide a better understanding of the analogy between a computer and a cellular logic-processing system, along with a couple of the advantages of CC:

  • 1.

    The bacterial growth scenario:

Logic such as IF/ELSE or AND/OR is a common part of almost every kind of algorithm and rarely requires any introduction. It is remarkable that even a miniature-scale cellular biological entity, i.e., a bacterium, constantly responds to its surrounding environment on the basis of such logic, and does so in service of its primary aim, bacterial growth and division (Figure 1). For example, IF nutrition is present in the surrounding medium, THEN continue growth, ELSE suspend growth and prepare for tough conditions. So it is clear that, without going into any underlying biological pathway details, it is possible to find a similarity between a cell (or bacterium) and a computer. To make the scenario a little more complex, let us add one more factor, the temperature: IF nutrition is present in the surroundings AND the temperature is favorable, continue growth, ELSE suspend growth (Figure 1). A minimal code sketch of this logic appears after this list.

The underlying molecular details of this bacterial sensing and growth regulation are themselves the outcome of a series of such logic-based events and can therefore be specifically categorized as ‘Molecular Computing’. Although related, that topic is discussed in detail in a separate chapter for the benefit of the readers and has therefore been deliberately kept outside the scope of this chapter.

  • 2.

    The traveling salesman problem:

Before we go into the details of the problem, let us briefly consider how data are processed by a computer. A computer processor primarily works in sequential mode. Thanks to multiple threads combined with massive clock speeds, a processor can perform millions of calculations per second, but at the heart of this speedy calculator lies a system that can only operate sequentially and can therefore only manage sequential inputs. Even today, with multi-core processors and general-purpose graphics processing units (GP-GPUs) promoted as enabling small- to large-scale parallelization (depending on the number of processor cores), it must be remembered that these are essentially arrays of sequentially performing processors managed in a smart way (by yet another set of sequentially performing processors) for a certain degree of performance gain. Moreover, separate lines of code are required to make a program able to harness this performance benefit, which carries its own additional energetic cost (illustrated in the second sketch below). The bottom line of this discussion is that true parallelization of computational architecture is yet to be achieved.
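
To make the analogy in the first example concrete, the bacterial growth decision can be expressed as a short conditional. This is a minimal sketch only, assuming a particular favorable temperature range; the function name, the threshold values and the return strings are illustrative and are not taken from the chapter.

```python
# Minimal sketch of the bacterial growth logic described in example 1.
# The favorable temperature range and the function name are assumptions
# made for illustration; they do not come from the chapter.

def bacterial_growth_decision(nutrition_present: bool, temperature_c: float) -> str:
    """Mirror the IF ... AND ... ELSE logic: grow only when both
    nutrition and a favorable temperature are available."""
    temperature_favorable = 20.0 <= temperature_c <= 45.0  # assumed range
    if nutrition_present and temperature_favorable:
        return "continue growth and division"
    return "suspend growth and prepare for tough conditions"

# Example usage
print(bacterial_growth_decision(True, 37.0))   # continue growth and division
print(bacterial_growth_decision(True, 4.0))    # suspend growth ...
```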
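
For the second example, the sketch below contrasts a plain sequential evaluation of traveling-salesman tours with a parallel evaluation that needs extra scaffolding (a worker pool) to use multiple cores. It is only meant to illustrate the point that, on conventional hardware, parallelism is layered on top of sequentially performing processors and requires its own additional code; the city coordinates and the brute-force approach are illustrative assumptions, not taken from the chapter.

```python
# Sketch contrasting sequential and parallel evaluation of traveling-salesman
# tours. The cities and the brute-force enumeration are illustrative only.

import itertools
import math
from multiprocessing import Pool

CITIES = [(0, 0), (1, 5), (4, 3), (6, 1), (2, 2)]  # assumed coordinates

def tour_length(order):
    """Total length of a closed tour visiting the cities in the given order."""
    return sum(
        math.dist(CITIES[order[i]], CITIES[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def best_tour_sequential(tours):
    # Plain sequential evaluation: one tour after another on a single core.
    return min(tours, key=tour_length)

def best_tour_parallel(tours):
    # Parallel evaluation needs explicit extra code (a process pool) on top
    # of the same sequentially performing cores.
    with Pool() as pool:
        lengths = pool.map(tour_length, tours)
    return min(zip(lengths, tours))[1]

if __name__ == "__main__":
    all_tours = list(itertools.permutations(range(len(CITIES))))
    print(best_tour_sequential(all_tours))
    print(best_tour_parallel(all_tours))
```

Even in this toy setting, the parallel path carries its own overhead: the pool has to be created, the work distributed and the results collected, which is exactly the additional coding and energetic cost referred to above.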
