Astronomical Roots of Risk Management Measures

Colin Read
DOI: 10.4018/978-1-5225-4754-9.ch006

Abstract

The mean-variance approach has remained the de facto method to characterize risk ever since Markowitz' development of Modern Portfolio Theory. This mean-variance underpinning goes back much further, though, to an era before modern street lighting, when humankind held a fascination with the cosmos and the movement of the planets. At the same time, physicists and mathematicians were employed to help gamblers improve their odds in games of chance. These techniques are now applied to the more down-to-earth challenges of characterizing risk and optimizing reward. I describe the work of the pioneers who collectively gave us the mean-variance tool. This retrospective traces how the modern treatment of risk in financial markets arose from the collective innovations of Daniel Bernoulli, Carl Friedrich Gauss, Louis Bachelier, Jacob Marschak, Harry Markowitz, William Sharpe, Paul Samuelson, and Fischer Black and Myron Scholes. Their contributions helped establish our understanding of the science of risk management.

Introduction

Risk is everywhere. Nature is riddled with uncertainty, and almost every human decision creates an outcome that cannot be predicted with complete certainty. The disciplines of finance and economics are built upon the notion that every decision has a cost, and not all costs can be known in advance, even if their probabilities can be. But while human decision-making and finances are fraught with uncertainties, our understanding of risk is relatively new and remains incomplete. I describe the roots of the study of risk in decision making, and its evolution and refinement with improvements in economic theory and applied mathematical techniques. I then treat how measures of risk are incorporated into sophisticated models of finance.

In the first section, I describe the first analytic model of risk, arising from the work of two Bernoulli cousins and their understanding of games of chance. In section two, I describe how the applied mathematicians of the nineteenth century began to measure and incorporate uncertainty into mathematics, and how a mathematician named Louis Bachelier used these measures to price risk. I then turn in section three to the originator of the modern definition of risk, and father of the Chicago School, Frank Hyneman Knight. In section four, I treat the economics pioneer Jacob Marschak and his description of the risk-return tradeoff. In section five, I describe how his graduate student, Harry Markowitz, took Marschak's definition of the risk-return tradeoff and created Modern Portfolio Theory. In section six, I show how William Sharpe and his contemporaries evolved Markowitz' Modern Portfolio Theory into a technique to "price" individual securities of uncertain returns. In section seven, I demonstrate how Fischer Black and Myron Scholes reinvented Bachelier's work in developing the most common method to price risk and volatility in derivatives markets, the Black-Scholes options pricing model. I summarize the current state of our understanding in section eight and conclude in section nine.

The First Foray into the Science of Risk

Games of chance had fascinated mathematicians for centuries before a chance exchange between two cousins who shared perhaps mathematics' most famous pedigree. The Bernoulli family, best known for dozens of innovations in mathematics, and especially for the Bernoulli Effect that keeps airplanes in the air, descends from Nicolaus Bernoulli (1623-1708), the family patriarch. Daniel Bernoulli (1700-1782) received a correspondence from his cousin posing a simple question, now known as the St. Petersburg Paradox: should one be willing to bet one ducat on a fair coin toss that yields two ducats if heads and zero ducats if tails? If so, why would few be willing instead to bet 1,000 ducats for a 50/50 chance of winning two thousand ducats, assuming they could afford it?

This coin-flipping gamble was a simple and common game at the time, and the mathematics seemed simple enough: each stake exactly equals the wager's expected payoff (0.5 × 2 + 0.5 × 0 = 1 ducat, and likewise at a thousandfold scale), so in expected terms both bets are equally fair. Yet the problem forced Bernoulli to better understand uncertainty.

The risk in this problem was one that could be quantified in advance: the uncertainties were of known probabilities, in this case a 50/50 chance that a coin would come up heads or tails. Bernoulli recognized that these known probabilities ought to be used to calculate not the expected winnings, but rather the expected valuation of the wins and losses to the gambler. In his chapter, written in Latin in 1738, and hence lost to modern risk managers until the 20th century translation by Louise Sommer,1,2 Bernoulli defined an expected value:

Expected values are computed by multiplying each possible gain by the number of ways in which it can occur, and then dividing the sum of these products by the total number of possible cases where, in this theory, the consideration of cases which are all of the same probability is insisted upon.3
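
To make this rule concrete, the short sketch below (my own illustration; the wealth level and function names are hypothetical, not Bernoulli's) applies it to the two coin-toss wagers above, and then to Bernoulli's 1738 resolution, which values outcomes by the logarithm of the gambler's resulting wealth rather than by the ducats themselves:

```python
# A sketch of Bernoulli's expected-value rule and his logarithmic resolution.
# The wealth level and names here are illustrative, not drawn from the chapter.
from math import log

def expected_value(outcomes):
    """Multiply each gain by the number of ways it can occur, then divide
    the sum of these products by the total number of equally likely cases."""
    total_cases = sum(ways for _, ways in outcomes)
    return sum(gain * ways for gain, ways in outcomes) / total_cases

# Each wager as (payoff in ducats, number of equally likely cases).
small_bet = [(2, 1), (0, 1)]       # stake: 1 ducat
large_bet = [(2000, 1), (0, 1)]    # stake: 1,000 ducats

print(expected_value(small_bet))   # 1.0    -> equals the 1-ducat stake
print(expected_value(large_bet))   # 1000.0 -> equals the 1,000-ducat stake

def expected_log_utility(wealth, stake, outcomes):
    """Bernoulli's resolution: average the logarithm of final wealth,
    not the ducats, over the same equally likely cases."""
    total_cases = sum(ways for _, ways in outcomes)
    return sum(log(wealth - stake + gain) * ways
               for gain, ways in outcomes) / total_cases

wealth = 1100.0  # a hypothetical gambler's wealth in ducats
print(expected_log_utility(wealth, 1, small_bet) - log(wealth))     # ~0.0
print(expected_log_utility(wealth, 1000, large_bet) - log(wealth))  # ~ -0.88
```

Run as written, the expected values confirm that both wagers are actuarially fair, while the logarithmic valuation shows why a gambler of modest wealth shuns the 1,000-ducat stake: the possible loss weighs far more heavily than the equal-sized possible gain.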

From this insight, Bernoulli stated the paradox:
