In this episode, Charlie uses statistical decision theory to determine where a would-be kidnapper will meet his target.
"We know there are known knowns: there are things we know we know. We also know there are known unknowns: that is to say we know there are things we know we don't know. But there are also unknown unknowns — the ones we don't know we don't know."
~ Former U.S. Defense Secretary Donald Rumsfeld, at a 2002 Department of Defense news briefing, neatly capturing a central problem of normative decision theory.
Normative decision theory is the study of how a fully informed and perfectly rational actor should behave in a given set of circumstances. However, for most real-world decisions, obtaining complete information is impossible: most decisions are made with incomplete, inaccurate, and imprecise information. It is hard work to quantify exactly how much of the available information is accurate, and how much information is missing that the actor does not even know is missing.
Humans also act in decidedly non-rational ways. Cognitive errors and emotions can cloud judgement, and humans often act illogically even when given complete information. The study of the choices that human beings actually make, as opposed to the choices a perfectly rational actor would make, is called decision analysis.
Normative decision theory attempts to predict how logical, fully informed actors act. By relaxing either or both of these assumptions, we can attempt to model real-world decisions better.
Sometimes there are multiple actors, and their choices affect each other in either a social or a competitive context. The study of decisions made in such an interactive context falls under the heading of game theory. Much of game theory is the quantification of one actor modeling the decision-making process of another. This process is recursive, because the second actor is himself modeling the decision-making process of the first, so in order to do a complete analysis, an actor must accurately model his own modeling process.
The situation is decidedly more complex when there is uncertainty surrounding the consequences of the various choices, either because the actor has incomplete information about the world, or because the uncertainty is inherent to the system, as is the case with dice rolls or quantum wave-function collapse.
Statistical decision theory is the science of how to make choices in a world where the end result of a choice is subject to random chance. One convenient way to model such systems is to think of the actors as playing a game in which each actor is awarded some (different) number of points. Here points are simply a generic, unit-less stand-in for some type of reward, such as dollars or liquidated enemy assets. One method, though not the only one, assumes that each actor attempts to maximize their average reward when playing the game repeatedly. Once a choice is made, many outcomes can result. Each outcome has a particular reward and a particular probability of occurring, which depends on the actions of the actor. The average reward for such a choice is the sum of the products of each particular reward times the probability of that reward coming to fruition.
In equations, this is:

E = p_1 r_1 + p_2 r_2 + … + p_n r_n = Σ_i p_i r_i,

where r_i is the reward of the i-th possible outcome and p_i is the probability of that outcome occurring.
Such an average reward is called the expected value of a choice.
Making choices by comparing their expected values is a pervasive method of analysis because of its ease of computation and its power.
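The ease of computation is easy to see in a short sketch. The choice below, with its three outcomes and their probabilities, is invented purely for illustration:

```python
def expected_value(outcomes):
    """Expected value of a choice, given (probability, reward) pairs,
    one pair per possible outcome: the sum of probability * reward."""
    return sum(p * r for p, r in outcomes)

# A hypothetical choice with three possible outcomes: win 100 points
# 25% of the time, win 10 points 50% of the time, lose 20 points
# 25% of the time.
choice = [(0.25, 100), (0.50, 10), (0.25, -20)]
print(expected_value(choice))  # 0.25*100 + 0.5*10 + 0.25*(-20) = 25.0
```

An actor following this method simply computes the expected value of each available choice and picks the largest.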
The widespread adoption of insurance seems to defy the naive assumption that everyone works to maximize their expected monetary reward.
Let us simplify car insurance somewhat. Let us suppose that you are considering insuring yourself against a catastrophic car accident sometime in the next year. The cost of having such a bad accident is $500,000, and the probability of you having such an accident is 0.05%. An insurance company is willing to insure you against this risk for the next year for a price of $500.
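Under the naive maximize-expected-money rule, the two choices can be compared directly. A sketch using the numbers above:

```python
# Expected monetary value of the simplified car-insurance decision.
accident_cost = 500_000   # cost of a catastrophic accident
p_accident = 0.0005       # 0.05% chance of such an accident this year
premium = 500             # yearly price of the insurance policy

# Without insurance: lose $500,000 with probability 0.05%, lose nothing
# otherwise.
ev_uninsured = p_accident * (-accident_cost)   # an expected loss of about $250

# With insurance: pay the $500 premium with certainty.
ev_insured = -premium

print(ev_uninsured, ev_insured)
```

On expected monetary value alone, going uninsured (an expected loss of about $250) beats buying the policy (a certain loss of $500), yet many rational people buy the policy anyway.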
Relying only on expected value is obviously flawed, as shown by the previous activities. A more sophisticated structure is required to accurately model the behavior of rational actors.
Consider a destitute man who has no money at all. To him, $10 means a very great deal, because with this amount of money he can purchase enough food to feed himself for a day or two. However, to Bill Gates, $10 means much less. It is likely that Mr. Gates might not even notice the addition or removal of $10 from a bank account. Such a trivial amount would be a rounding error, or perhaps lost in the earned interest. What is the difference between these two cases? Why is the same amount of money worth much more to one person than another?
The answer is that when a rational person has money, the first things they buy are the things they need most. Luxuries come second or not at all. Whether a person has $100 or $100 billion, their first priorities are food and shelter. Whatever comes after these is necessarily worth less to the buyer, simply because it comes afterwards. This is the law of diminishing returns: the value of a set amount of money to a person decreases as the person becomes wealthier.
What does the law of diminishing returns say about our insurance example? Does it explain why taking insurance is a good or a bad idea?
To take this fact into account, we focus not on the amount of reward (money) an actor receives, but rather on how much happiness an actor receives from an outcome. We quantify this happiness with a utility function that depends on the amount of money received. The utility function is a map from the real line to the real line, where we interpret the domain as the amount of money gained or lost in a particular game, and the range as the amount of happiness or sadness such a gain or loss will bring about. We do not necessarily assume that more money brings about more happiness, but we do assume that more money cannot make a person less happy, so the utility function must be non-decreasing. Because of the law of diminishing returns, the utility function can never be concave up (that is, convex), though we do not restrict it from being locally linear.
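Maximizing expected utility rather than expected money can explain the insurance example. As a sketch, take the logarithm of total wealth as a hypothetical utility function (it is non-decreasing and never concave up); the starting wealth below is an assumption made purely for illustration:

```python
import math

# A hypothetical concave utility function: u(w) = log(w), applied to the
# actor's total wealth.  Non-decreasing, never concave up.
def utility(wealth):
    return math.log(wealth)

wealth = 550_000    # assumed starting wealth, for illustration only
loss = 500_000      # cost of the catastrophic accident
p_loss = 0.0005     # probability of the accident this year
premium = 500       # insurance premium

# Expected utility without insurance: full wealth with probability 99.95%,
# wealth minus the catastrophic loss with probability 0.05%.
eu_uninsured = (1 - p_loss) * utility(wealth) + p_loss * utility(wealth - loss)

# Expected utility with insurance: pay the premium with certainty.
eu_insured = utility(wealth - premium)

print(eu_insured > eu_uninsured)  # True: insurance wins on expected utility
```

Even though insurance has the lower expected monetary value (a certain -$500 versus an expected -$250), the concave utility penalizes the catastrophic outcome so heavily that the insured choice has the higher expected utility. The exact conclusion depends on the utility function and starting wealth assumed here, which is precisely the point: the shape of the utility function, not expected money alone, determines the rational choice.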