Neural Network Model to Estimate and Predict Cell Mass Concentration in Lipase Fermentation

David K. Daniel (VIT University, India) and Vikramaditya Bhandari (Shasun Pharma Solutions Limited, UK)
Copyright: © 2014 | Pages: 14
DOI: 10.4018/978-1-4666-4940-8.ch015


Lipase is an industrially important enzyme used mainly in the food industries, and the demand for it is increasing every year. Online prediction of cell mass concentration is of great value in real-time processes for lipase production. The current work targets the use of a back-propagation multilayer neural network to predict cell mass during lipase production by Rhizopus delemar NRRL 1472. Network training data with respect to time is generated by carrying out experiments in the laboratory. The fungus is grown in Erlenmeyer flasks at an initial pH of 5.6, a temperature of 30ºC, and 150 rpm. During the experiments, readings of cell mass growth are collected at specific time intervals. Using the training data, an artificial neural network model programmed in MATLAB for Windows is trained and used for prediction of cell mass. The Levenberg-Marquardt algorithm with back-propagation is used to obtain the optimized network weights. The optimum network configuration, with different activation functions and numbers of nodes in the hidden layer, is identified by trial and error. For the unipolar sigmoid activation function the optimum configuration is 2-5-1, whereas for the logarithmoid and bipolar sigmoid functions it is 2-3-1. These are chosen according to the Sum of Squares of Errors (SSE) and Root Mean Square (RMS) error values for training and testing. The unipolar sigmoid activation function with the 2-5-1 network configuration gives a good fit to the estimated values and could be used for generalization.
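The 2-5-1 topology described above (two inputs, five unipolar sigmoid hidden nodes, one output) can be sketched as follows. This is an illustrative NumPy sketch, not the chapter's MATLAB implementation: the chapter trains with Levenberg-Marquardt, whereas for brevity this sketch uses plain gradient-descent back-propagation, and the training pairs are synthetic stand-ins for the experimental time/cell-mass data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    """Unipolar sigmoid activation, output in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic training pairs: two inputs -> one target, scaled to (0, 1).
# (Stand-ins for the chapter's measured fermentation data.)
X = rng.uniform(0, 1, size=(20, 2))
y = sigmoid(X @ np.array([2.0, -1.0]) + 0.3).reshape(-1, 1)

# Weights and biases for the 2-5-1 topology.
W1 = rng.normal(0, 0.5, size=(2, 5)); b1 = np.zeros(5)
W2 = rng.normal(0, 0.5, size=(5, 1)); b2 = np.zeros(1)

lr = 1.0
sse0 = None
for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)            # hidden layer (5 nodes)
    out = sigmoid(h @ W2 + b2)          # output layer (1 node)
    err = out - y
    sse = float(np.sum(err ** 2))       # Sum of Squares of Errors
    if epoch == 0:
        sse0 = sse                      # remember the starting error
    # Backward pass: chain rule through both sigmoid layers,
    # using mean gradients over the batch.
    n = len(X)
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) / n;  b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_h) / n;    b1 -= lr * d_h.mean(axis=0)

rms = np.sqrt(sse / len(X))             # Root Mean Square error
print(f"final SSE = {sse:.5f}, RMS = {rms:.5f}")
```

The SSE and RMS values computed at the end correspond to the criteria the chapter uses to compare candidate configurations and activation functions.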
Chapter Preview

Literature Survey

The idea of using artificial neural networks as a potential solution strategy for problems requiring complex data analysis is not new. In the early 1940s, scientists came up with the hypothesis that neurons, the fundamental active cells in all animal nervous systems, might be regarded as devices for manipulating binary numbers. With the advent of modern electronics, it was only natural to try to harness this thinking process. The first step toward artificial neural networks came in 1943, when Warren McCulloch, a neurophysiologist, and a young mathematician, Walter Pitts, wrote a paper on how a neuron might work; they modeled a simple neural network with electrical circuits. In 1958, Frank Rosenblatt invented an artificial neural network called the perceptron and demonstrated how the human brain processes visual data and learns to recognize objects. In 1959, Bernard Widrow and Marcian Hoff developed models they called ADALINE and MADALINE, named for their use of Multiple Adaptive Linear Elements. MADALINE was the first neural network to be applied to a real-world problem: an adaptive filter that eliminated echoes on phone lines. From then on, many researchers from diverse disciplines have contributed to the field of ANNs, including work on modeling, system identification, and control. Bhat and McAvoy (1990) applied a back-propagation neural network to model the dynamic response of pH in a continuous stirred tank reactor. They showed that the back-propagation neural network was able to model the non-linear characteristics of the continuous stirred tank reactor better than an autoregressive moving-average model. Bhat et al. (1990) applied neural nets to modeling non-linear chemical systems; a back-propagation net was used successfully to model a steady-state reactor, a dynamic pH stirred tank system, and the interpretation of biosensor data.
Linko and Zhu (1992) applied neural networks to both real-time estimation and multi-step-ahead prediction of enzyme activity and biomass dry matter in fungal Aspergillus niger fermentation. A back-propagation algorithm with a momentum term was used to train the neural network on varying input/output data pairs. Freeman and Skapura (1992) discussed the different algorithms, applications, and programming techniques associated with artificial neural networks, particularly the Levenberg-Marquardt optimization algorithm. Horiuchi et al. (2001) proposed a simple neural-network method for modeling microbial dynamic behavior in a chemostat and applied it to the pH response in continuous anaerobic acidogenesis. A three-layered neural network with a back-propagation algorithm was used to model the pH step response of the acid reactor. The simulation results revealed that the ANN system could successfully model the transient behavior in response to pH change in the acid reactor under various retention times; to simulate the entire range of pH response, only the experimental data during the steady state and the pH shift were required. Pramanik (2004) discussed the production of ethanol from grape waste using the yeast Saccharomyces cerevisiae, based on a feed-forward architecture with back-propagation as the training algorithm. The Levenberg-Marquardt optimization algorithm was used to update the network weights by minimizing the sum of squared errors (SSE).
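The Levenberg-Marquardt scheme cited throughout this survey minimizes the SSE by blending Gauss-Newton and gradient-descent steps, solving (JᵀJ + μI)Δp = -Jᵀr at each iteration, where J is the Jacobian of the residuals r and μ is an adaptive damping factor. A minimal illustrative sketch, fitting a hypothetical two-parameter model y = a·exp(b·x) rather than a full network:

```python
import numpy as np

# Synthetic data generated from a = 2.0, b = 1.5.
x = np.linspace(0, 1, 10)
y = 2.0 * np.exp(1.5 * x)

def residuals(p):
    a, b = p
    return y - a * np.exp(b * x)

def jacobian(p):
    a, b = p
    e = np.exp(b * x)
    # Columns: d(residual)/da, d(residual)/db
    return np.column_stack([-e, -a * x * e])

p = np.array([1.0, 1.0])   # initial parameter guess
mu = 1e-2                  # damping factor
for _ in range(50):
    r = residuals(p)
    J = jacobian(p)
    # Levenberg-Marquardt step: (J^T J + mu I) dp = -J^T r
    dp = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ r)
    p_new = p + dp
    if np.sum(residuals(p_new) ** 2) < np.sum(r ** 2):
        p, mu = p_new, mu * 0.5   # step reduced SSE: accept, trust Gauss-Newton more
    else:
        mu *= 2.0                 # step increased SSE: reject, lean toward gradient descent

sse = float(np.sum(residuals(p) ** 2))
print(f"fitted a = {p[0]:.3f}, b = {p[1]:.3f}, SSE = {sse:.2e}")
```

The adaptive damping is what makes the method robust far from the optimum yet fast near it, which is why it is the common choice for training small networks such as those used in the chapter.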
