Google Stock Movement: A Time Series Study Using LSTM Network

Nishu Sethi, Shalini Bhaskar Bajaj, Jitendra Kumar Verma, Utpal Shrivastava
DOI: 10.4018/978-1-7998-5876-8.ch004

Abstract

Human beings tend to make predictions about future events irrespective of their probability of occurrence. We are fascinated by puzzles and patterns. One such area that intrigues many, full of complexity and unpredictable behavior, is the stock market. For the last decade or so, researchers have been trying to find patterns and understand the behavior of the stock market with the help of robust computation systems and new approaches to extracting and analyzing huge amounts of data. In this chapter, the authors try to understand stock price movement using a long short-term memory (LSTM) network and predict the future behavior of the stock price.
Chapter Preview

Google Stock Price Prediction

Google was established in 1998 in the technology field and has been evolving at lightning speed. Stock prices for big tech giants like Google vary a lot over a short timespan, which makes it more difficult to predict stock movement (Fama, 1965).

Here we have used an LSTM neural network (a kind of recurrent neural network, figure 2), trained on Google (GOOGL) stock data, to predict future prices (Finance, n.d.; Shah et al., 2018). LSTM is better at learning from large chunks of data and produces a sustainable model for predicting future stock prices.
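Before an LSTM can be trained on a price series, the data is typically framed as a supervised problem: a sliding window of past closing prices becomes the input, and the next day's price becomes the target. The sketch below illustrates this common preprocessing step with NumPy; the 60-day window and the toy price series are illustrative assumptions, not values taken from the chapter.

```python
import numpy as np

def make_windows(prices, window=60):
    """Split a 1-D price series into (past-window, next-day-price) pairs,
    the usual supervised framing for LSTM price forecasting."""
    X, y = [], []
    for i in range(window, len(prices)):
        X.append(prices[i - window:i])  # last `window` closing prices
        y.append(prices[i])             # the price to predict
    return np.array(X), np.array(y)

# Toy series standing in for GOOGL daily closing prices (70 "days")
prices = np.arange(100.0, 170.0)
X, y = make_windows(prices, window=60)
print(X.shape, y.shape)  # (10, 60) (10,)
```

Each row of `X` is then fed to the LSTM as one input sequence; in practice the prices would also be scaled (e.g. to [0, 1]) before training.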

RNN: A Brief Overview

As mentioned above, the LSTM network is a special type of recurrent neural network (RNN), as shown in figure 1 (Rather et al., 2015). A simple recurrent neural network consists of a set of repetitive neural units. As the network runs forward, it passes data from the starting units to the end units, i.e., all the units are connected in sequence. Initially, RNNs looked promising for all sorts of problems ranging from speech recognition to text classification, but researchers later found drawbacks in the structure of the network, and it performed poorly when compared with benchmark results (Yoshihara et al., 2014).

Figure 1.

Structure of a simple RNN


In this network:

  • One unit of the neural network

  • Inputs to the units of the neural network

  • Outputs from these units
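The chain of units in figure 1 can be sketched as a forward pass: each unit takes the current input and the hidden state handed forward by the previous unit, and produces the next hidden state. This is a minimal NumPy illustration of that recurrence; the dimensions, weight names, and tanh activation are conventional assumptions rather than details given in the chapter.

```python
import numpy as np

def rnn_forward(xs, Wx, Wh, b, h0):
    """Simple RNN: each unit computes h_t = tanh(Wx @ x_t + Wh @ h_{t-1} + b)
    and passes h_t on to the next unit in the chain."""
    h = h0
    outputs = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)  # hidden state handed to the next unit
        outputs.append(h)
    return outputs

rng = np.random.default_rng(0)
d_in, d_h, T = 3, 4, 5                      # input size, hidden size, sequence length
xs = [rng.standard_normal(d_in) for _ in range(T)]
Wx = 0.1 * rng.standard_normal((d_h, d_in))  # input-to-hidden weights
Wh = 0.1 * rng.standard_normal((d_h, d_h))   # hidden-to-hidden (recurrent) weights
b = np.zeros(d_h)
outputs = rnn_forward(xs, Wx, Wh, b, np.zeros(d_h))
print(len(outputs), outputs[0].shape)  # 5 (4,)
```

Because every hidden state is squashed through tanh and reused at the next step, gradients flowing back through many such units tend to vanish, which is the structural drawback that motivates the LSTM's gated design.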
