
Martin Žáček (University of Ostrava, Czech Republic)

Copyright: © 2017
Pages: 21

DOI: 10.4018/978-1-5225-0565-5.ch002

Chapter Preview

Many statistical methods relate to data that are independent, or at least uncorrelated. There are, however, many practical situations where data might be correlated. This is particularly so where repeated observations on a given system are made sequentially in time. (Reiner, 2010)

Data gathered sequentially in time are called a *time series*.

The analysis of experimental data that have been observed at different points in time leads to new and unique problems in statistical modeling and inference. The obvious correlation introduced by the sampling of adjacent points in time can severely restrict the applicability of the many conventional statistical methods that traditionally depend on the assumption that adjacent observations are independent and identically distributed. The systematic approach by which one answers the mathematical and statistical questions posed by these time correlations is commonly referred to as time series analysis.
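The effect of correlated adjacent observations can be made concrete with a short simulation. The sketch below, in Python with NumPy (a tool choice this chapter does not prescribe; the AR(1) model and its coefficient are illustrative assumptions), generates a series whose neighbouring values are correlated and then measures that correlation:

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    xm = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(xm[:-lag], xm[lag:]) / np.dot(xm, xm)

# An AR(1) process x_t = 0.8 * x_{t-1} + w_t: each observation carries
# information about its neighbour, so methods that assume i.i.d. data
# would misstate the uncertainty of estimates computed from x.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()

print(round(autocorr(x, 1), 2))  # strongly positive, near 0.8
```

The lag-1 autocorrelation comes out near the autoregressive coefficient, which is exactly the dependence structure that conventional i.i.d.-based methods ignore.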

Historically, time series methods were applied to problems in the physical and environmental sciences. This fact accounts for the basic engineering flavor permeating the language of time series analysis. The first step in any time series investigation always involves careful scrutiny of the recorded data plotted over time.

Before looking more closely at the particular statistical methods, it is appropriate to mention that two separate, but not necessarily mutually exclusive, approaches to time series analysis exist, commonly identified as the time domain approach and the frequency domain approach.
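The two approaches can be illustrated on a single simulated series. In the sketch below (Python with NumPy; the noisy sinusoid and all parameter values are assumptions made for illustration), the time domain view examines autocorrelations at a few lags, while the frequency domain view uses the periodogram to locate the dominant cycle:

```python
import numpy as np

# A noisy sinusoid: 100 observations of a cycle repeating every
# 20 time steps, i.e. a driving frequency of 0.05 cycles per step.
rng = np.random.default_rng(1)
n = 100
t = np.arange(n)
x = np.sin(2 * np.pi * t / 20) + 0.3 * rng.standard_normal(n)
xm = x - x.mean()

# Time domain: sample autocorrelations at selected lags; the value
# at lag 20 is large because the series repeats every 20 steps.
acf = [np.dot(xm[:n - h], xm[h:]) / np.dot(xm, xm) for h in (1, 10, 20)]

# Frequency domain: the periodogram (squared FFT magnitudes) peaks
# at the driving frequency.
freqs = np.fft.rfftfreq(n, d=1.0)
power = np.abs(np.fft.rfft(xm)) ** 2
peak = freqs[np.argmax(power)]
print(peak)  # dominant frequency: 0.05
```

Both views describe the same periodic structure; the time domain reports it as a recurring correlation at lag 20, the frequency domain as a spectral peak at 0.05.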

A time series is a set of statistics, usually collected at regular intervals. Time series data occur naturally in many application areas:

- **Economics and Finance:** E.g., monthly data for unemployment, hospital admissions, daily exchange rate, a share price, etc. (Barro, 1987).
- **Environmental Modelling:** E.g., daily rainfall, air quality readings.
- **Meteorology and Hydrology:** E.g., weather forecast.
- **Demographics:** E.g., population development.
- **Medicine:** E.g., ECG brain wave activity every 2−8 secs.
- **Engineering and Quality Control.**

Figure 1 shows the daily returns (or percent change) of the New York Stock Exchange (NYSE) from February 2, 1984 to December 31, 1991. It is easy to spot the crash of October 19, 1987 in Figure 1. The data shown in Figure 1 are typical of return data. The mean of the series appears to be stable, with an average return of approximately zero; however, the volatility (or variability) of the data changes over time. In fact, the data show volatility clustering; that is, highly volatile periods tend to be clustered together. A problem in the analysis of this type of financial data is forecasting the volatility of future returns. GARCH models, for example, have been developed to handle such problems.
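A minimal simulation can reproduce the volatility clustering that the GARCH family is designed to capture. The Python/NumPy sketch below assumes a GARCH(1,1) specification with illustrative parameter values, not values estimated from the NYSE data:

```python
import numpy as np

# GARCH(1,1): r_t = sigma_t * z_t,
# sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2.
# Parameter values are illustrative assumptions only.
rng = np.random.default_rng(2)
omega, alpha, beta = 0.05, 0.1, 0.85
n = 2000
r = np.zeros(n)
sigma2 = np.full(n, omega / (1 - alpha - beta))  # unconditional variance
for t in range(1, n):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

def autocorr(x, lag):
    xm = x - x.mean()
    return np.dot(xm[:-lag], xm[lag:]) / np.dot(xm, xm)

# Volatility clustering: the returns themselves are nearly uncorrelated,
# but the squared returns (a proxy for volatility) are correlated.
print(round(autocorr(r, 1), 2), round(autocorr(r ** 2, 1), 2))
```

The simulated returns mimic the pattern described for Figure 1: a stable mean near zero with calm and turbulent periods that cluster together, which is visible in the positive autocorrelation of the squared returns.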
