Mathematicians estimate money in proportion to its quantity,
and men of good sense in proportion to the usage that they may make of it.
Gabriel Cramer (1704-1752)
(2001-07-04) Rational decision analysis and the concept of
utility...
How do utilities differ from expectations? How are utilities used?
A utility is a numerical rating assigned to every possible outcome
a decision maker may be faced with.
(In a choice between several alternative prospects,
the one with the highest utility is always preferred.)
To qualify as a true utility scale however, the rating must be such that
the utility of any uncertain prospect is equal to the expected value
(the mathematical expectation) of the utilities of all its possible outcomes
(which could be either "final" outcomes or uncertain prospects themselves).
When decisions are made by a so-called rational agent
(one whose preferences are transitive: if A is preferred to B and B to C,
then A must be preferred to C), it should be
clear that some numerical scale can be devised to rate all possible outcomes
"simply" by comparing and ranking them.
Determining equivalence in money terms may be helpful in such a systematic process
but it's not theoretically indispensable.
What may be less clear, however, is how to devise such a rating system so that
it would possess the above fundamental property required of a utility scale.
One theoretical way to do so is to compare prospects and/or final outcomes
to tickets entitling the holder to a chance at winning some jackpot,
which is at least as valuable as any outcome under consideration.
A ticket with a face value of 75% means a chance of winning the jackpot
with a probability of 0.75 and it will be assigned a utility of 0.75.
Anything which is estimated to be just as valuable as such a ticket (no more, no less)
will be assigned a utility of 0.75 as well.
The scale so defined does have the property required of utility scales.
Consider, for example, a prospect which may have one of two outcomes:
The first outcome has a probability of 0.3 and a utility of 0.6
(it could be a ticket with a 60% face value).
The second outcome has a probability of 0.7 and a utility of 0.2
(it could be a ticket with a 20% face value).
When these two outcomes actually consist of lottery tickets, the whole thing
is completely equivalent
(think long and hard about this) to having a chance
to win the jackpot with probability
0.3 × 0.6  +  0.7 × 0.2  =  0.32
The prospect has therefore, by definition, a utility of 0.32,
and we do observe that the result has been computed with the same rule as a
mathematical expectation.
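As a quick check of that arithmetic, here is a minimal Python sketch of the
expected-utility rule applied to the two-outcome prospect above
(the probabilities and ticket face values are those of the example):

   # Expected-utility rule: the utility of a prospect is the
   # probability-weighted average of the utilities of its outcomes.
   outcomes = [(0.3, 0.6),   # (probability, utility) of the first outcome
               (0.7, 0.2)]   # (probability, utility) of the second outcome
   utility = sum(p * u for p, u in outcomes)
   print(utility)   # 0.32 (up to floating-point rounding):
                    # the overall probability of winning the jackpot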
It would be so in any other case involving either lottery tickets or
things/situations previously assigned a utility
(by direct or indirect comparisons with such tickets).
The utilities introduced above are between 0 and 1,
but no such restriction is in fact required.
The key observation is that we may either translate or rescale a utility scale
without affecting at all the decisions it implies:
Each side of every comparison is translated or rescaled the same way and it does
not affect inequalities as long as the scaling factor is positive.
In particular, we may keep the same utility scale if we're faced with an outcome
more valuable than whatever jackpot we first considered.
If that jackpot is estimated to be just as desirable as a chance of
winning the bigger prize with probability p, we may assign a utility 1/p to the
bigger prize (and that, of course, is larger than 1).
Similarly, the original "ticket" scale may have to be extended to assign
negative utilities to certain undesirable situations.
Considering such a situation "in context",
as an outcome of a prospect whose other outcomes are quite positive, allows
the semi-direct use of the "ticket" scale to evaluate its negative utility.
Even when there is no such thing as a "top prize",
the utilities of all prospects must be bounded.
(Recall the difference between a maximum, which is achieved in at least one
case, and an upper limit, which may not be.
Utilities have an upper limit, not necessarily a maximum.)
This may be visualized by considering that the utility function of money,
which is normally nondecreasing, may either have an asymptote or be constant above
a certain point.
For a proof that utilities must be bounded,
see our discussion of the St. Petersburg Paradox...
In real life, utilities are not linearly related to money values
(or else the lotteries would go out of business), which is another way to say that
the mathematical expectation of a monetary gamble need not be the proper
utility measure to use.
The monetary expectation is only a special example of a utility,
which is mathematically acceptable but not at all realistic.
It is, unfortunately, given in elementary texts
(which do not introduce the utility concept)
as the sole basis for a rational analysis of gambling decisions.
This is clearly not so in practice:
For example, you may be willing to pay one dollar for
an (unfair) chance in 2 000 000 at $1 000 000,
but very few people (if any) would pay
$499 999 for a chance in two at $1 000 000.
However, someone could take the latter bet in a very special situation
where an immediate gain of $1 000 000 would make a critical difference,
whereas the loss of even half a million might not be crucial...
The rational basis for such choices lies in the utilities involved.
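As a preview, here is a minimal numerical sketch using the exponential utility
function introduced further down this page (the $100 000 risk tolerance is
purely an illustrative assumption):

   from math import exp, log

   r = 100_000.0        # assumed risk tolerance, in dollars
   prize = 1_000_000.0

   # Expected utility of a 1-in-2 chance at the prize, under u(x) = 1 - exp(-x/r):
   eu = 0.5 * (1 - exp(-prize / r)) + 0.5 * (1 - exp(0.0))

   # Certainty equivalent: the sure gain with the same utility as the gamble.
   ce = -r * log(1 - eu)
   print(round(ce, 2))  # about 69310.18: such a player rightly refuses
                        # to pay $499 999 for the coin flip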
Before you analyze choices, you have to determine the relevant "utility curve" carefully
when it comes to actual possible outcomes:
If your current wealth is W, what would be the
exact utility rating to you of a total wealth equal to W, W-1,
W-499999, or W+1000000?
How does that compare to nonmonetary things like the loss of a limb?
Above or below the knee?
What's a relationship or a marriage worth?
What about social status?
Recognition? Public ridicule?
Will you go out naked for $10 000, for $10,
or would someone have to pay you not to expose yourself?
Everything that carries any weight at all in your choices has to be assigned some
utility on your own personal scale, which you may only build by introspection or,
better, retrospection (recalling relevant past choices).
In some cases, comparisons with the ubiquitous money scale may help.
Although the so-called
utility function (u) which gives utility as a function of money
(total wealth) is normally not a linear function,
it may have a simple
mathematical form under certain common assumptions (see below).
One caveat is that nonmonetary gratifications often play a role in actual choices which
seem based solely on monetary exchanges:
There's some playful element in any lottery, which increases the appeal of
purchasing a lottery ticket.
Lottery operators know this very well and they design
their lottery "games" with this in mind.
Note that it's always the entire situation which is
assigned a utility rating, not its separate components
(money, health, happiness, etc.).
Now, if you assume that your attitude towards money does not
depend on how much of it you have right now,
then the monetary part of your own utility function u
must be (up to irrelevant rescaling)
an exponential function of your wealth.
(It could also be linear, but this is usually disallowed on the ground that a proper
utility function must be bounded when the stakes are potentially unbounded.)
Mathematically, this assumption states that your
utility function u is such that a constant wealth shift h
is irrelevant to your preference between
a prospect of utility p u(a+h) + (1-p) u(b+h) and
one of utility q u(c+h) + (1-q) u(d+h).
For now, we'll leave it up to the reader to show that this is true if [easy] and
only if [tougher] the function u is either linear [ruled out] or of the
following form, up to some irrelevant rescaling:
u(x) = 1 - exp( -x/r )
Although x should normally be equal to one's entire wealth, changing the
"zero point" merely rescales an exponential function linearly and is therefore
irrelevant to decisions (as explained above).
It's therefore customary, when using the exponential utility function,
to consider that x is the amount to be gained or lost in a given gamble.
Separate gambles can be analyzed separately with an exponential utility
function (that's not true for any other utility functions).
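That separation property is easy to test numerically. In the following sketch
(the gambles and the $1000 risk tolerance are arbitrary assumptions), the
preference between two gambles is unchanged by any constant shift h applied to
all payoffs, which is precisely the invariance stated above:

   from math import exp

   R = 1000.0                           # assumed risk tolerance, in dollars

   def u(x):
       return 1 - exp(-x / R)           # exponential utility of a gain x

   def expected_u(gamble, h):
       # gamble: list of (probability, payoff); h: constant shift of all payoffs
       return sum(p * u(x + h) for p, x in gamble)

   g1 = [(0.5, 400.0), (0.5, -100.0)]   # a 50/50 gamble
   g2 = [(1.0, 120.0)]                  # a sure gain of $120

   for h in (0.0, 500.0, -250.0):
       print(h, expected_u(g1, h) > expected_u(g2, h))   # same verdict for every h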
In the above expression for an exponential utility function of money,
the constant amount r (measured in the same money unit used for the variable x)
is called the risk tolerance.
For more general utility functions, the risk tolerance isn't a constant
and may be defined at each point x of the utility curve as follows:
r(x)  =  -u'(x) / u''(x)
Notice that this definition is indeed independent of the allowed linear rescaling
of the utility function.
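As a quick symbolic check of that definition (a sketch using the sympy
library), the exponential utility has a constant risk tolerance r, whereas the
logarithmic and square-root utilities discussed later on this page have risk
tolerances growing linearly with wealth:

   import sympy as sp

   x, r = sp.symbols('x r', positive=True)

   for u in (1 - sp.exp(-x/r),   # exponential utility
             sp.log(x),          # logarithmic utility (Daniel Bernoulli)
             sp.sqrt(x)):        # square-root utility (Cramer)
       tolerance = sp.simplify(-sp.diff(u, x) / sp.diff(u, x, 2))
       print(u, '->', tolerance)   # prints r, x and 2*x, respectively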
Portfolio managers will tell you that an investor's risk tolerance
is roughly proportional to his assets
(at least that's what most of them assume to be true).
This may be interpreted in either one of two ways:
EITHER: When prospects are analyzed, the risk tolerance
used in the analysis of future uncertainty is the constant corresponding
to the current situation.
At the next step, when certain events have actually come to pass,
a different constant will be used to make a slightly different analysis,
using the new risk tolerance corresponding to the new situation.
OR: The utility function used to make strategic
decisions incorporates the future variability of the investor's risk tolerance.
For example, if the risk tolerance is indeed proportional to wealth (r=kx),
then the utility function is a solution of the differential equation:
k x u'' + u'  =  0
Solve this by letting y be u'(x), so that
k x dy + y dx = 0 (or k dy/y + dx/x = 0), which means
that y is proportional to x^(-1/k).
Therefore, up to some irrelevant rescaling,
the utility u is also a power of x, namely -x^(1-1/k).
For this function to have an upper limit, the exponent should be negative.
That is to say that we must have k < 1.
The rule of thumb [that's all it is]
in the corporate world seems to be that the management of most companies
behaves as if k=1/6 (risk tolerance = one sixth of equity).
With the second of the above interpretations, this would mean the
utility function of a major corporation (unless it's close to bankruptcy)
would typically be -1/x^5.
Rather surprisingly, interviews of experienced corporate decision makers
seem to be consistent with this...
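Here is a short symbolic verification of that solution, as a sketch
(the value k = 1/6 is just the rule of thumb quoted above):

   import sympy as sp

   x = sp.symbols('x', positive=True)
   k = sp.Rational(1, 6)             # rule of thumb: risk tolerance = x/6

   u = -x**(1 - 1/k)                 # candidate utility -x^(1-1/k), i.e. -1/x^5
   tolerance = sp.simplify(-sp.diff(u, x) / sp.diff(u, x, 2))
   print(u, tolerance)               # prints -x**(-5) and x/6: r = kx, as required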
(2001-07-04) The Saint-Petersburg Game
A fair coin is tossed until heads appears.
If the game lasts for n+1 tosses, the player receives 2^n dollars.
Namely: $1 if heads appears first,
$2 if it takes two tosses, then $4, $8, $16, $32, etc.
What's a decent price to pay for the privilege to play this game?
This is called the "Saint-Petersburg Paradox":
The mathematical expectation of the above Saint-Petersburg Game
is infinite, since it would be the sum of the following divergent series
(the payoff 2^n being weighted by its probability 1/2^(n+1)):
1/2 + 1/2 + 1/2 + 1/2 + ...
Clearly however, nobody would ever pay more than a few dollars for a
shot at this type of gamble... Why?
When the question was first posed,
early in the 18th century, it was still believed that the value of a gamble should only
be based on its "fair" price, which is another name for its
mathematical expectation.
The fact that it clearly cannot be so with the above
game ultimately led to the introduction of the modern concept of the
utility of a prospect.
The discussion originated with a correspondence between the Swiss mathematician
Nicolas Bernoulli (1687-1759, residing in Basel; not to be confused with his
well-known father, also called Nicolas, 1662-1705)
and Pierre Rémond de Montmort (1678-1719), in Paris.
Montmort had authored a successful book entitled
Essay d'analyse sur les jeux de hazard (Paris, 1708).
Bernoulli was making
suggestions for a future edition, focusing on a set of 5 problems to appear on page 402,
including "Problem 5", which essentially describes a
version of the above Petersburg Game...
The very first letter from Bernoulli (dated September 9, 1713) mentions a die
instead of a fair coin, but the lower probability (1/6) of terminating the game at each toss
makes the expectation series diverge even more rapidly.
(Bernoulli introduces other payoff sequences which are not necessarily paradoxical,
so that Montmort initially missed his point.)
A few years later,
Gabriel Cramer (1704-1752) was prompted
to address the issue from London, in a
letter to Bernoulli, dated May 21, 1728.
(Since turning 20 in 1724, Cramer had been sharing a chair of mathematics in Geneva
with Giovanni Ludovico Calandrini, under an arrangement that called for one of them to
travel while the other was teaching.)
Cramer restated the game in its modern form, for the sake of simplicity,
with a fair coin instead of a die.
He went on to say that "mathematicians estimate money in proportion to its quantity,
and men of good sense in proportion to the usage that they may make of it".
Cramer quantified that statement in terms of what's now called a
"utility function", which he dubbed a "moral value of goods".
Cramer's first example of a utility function
was simply proportional to the money amount up to a certain point
(he used 2^24 coins, for convenience)
and constant thereafter.
His second example was a utility function of money proportional to the square root
of the amount of money.
Either of these utility functions does assign a finite utility
to the original Petersburg game, but the second one
would fail to resolve the issue if the payoff sequence were increasing faster
(for example, if the player were paid 4^n dollars for completing n+1 tosses,
since the square root of 4^n is 2^n, which makes the expected-utility series
diverge just like the original expectation).
In fact,
this very example may be used to show that any
utility function must have an upper bound, or else one could exhibit an infinite
sequence of prospects, the n-th of which has a utility at least equal to
2^n.
Offering the n-th such prospect as payoff for successfully
completing n tosses in a Petersburg game would assign infinite "utility"
to such a game, which is not acceptable.
(The basic utility tenet is to assign a finite utility rating
to a single prospect, which is what the whole Petersburg game is.)
This revived the issue originally raised by Nicolas Bernoulli,
who asked the opinion of his brilliant cousin,
Daniel Bernoulli (1700-1782).
At that time, Daniel was professor of mathematics in St. Petersburg,
and his influential work on the subject would later be published (in 1738) by the
St. Petersburg Academy, which is how the paradox got its modern name.
Back in 1731, Daniel Bernoulli rediscovered (independently of Cramer)
the modern notion of utilities, which Nicolas Bernoulli kept rejecting...
Daniel also made a point which Cramer had missed entirely, namely that it is
generally crucial to consider only the entire wealth of the player and assign
a utility only to the whole thing, as the marginal utility of
an additional coin will depend on the rest of one's fortune.
Bernoulli ventured the guess that the additional utility (du) of an additional dollar (dw)
could be inversely proportional to one's entire wealth w.
This assumption (du = k dw/w) makes utility (u) a logarithmic function of the
total wealth (w).
As we are free to rescale utilities, it may then be stated without loss of generality
that this translates into u(w) = ln(w).
However, this logarithmic "utility" function suffers
from the same flaw as Cramer's square root function of money, because it's
not bounded either: If a successful sequence of n+1 tosses were paid
exp(k 2^n) dollars, the game would still end up having an
infinite "utility", even for a small value of the parameter k.
With a small value like k=0.01, there's an unattractive sequence of payoffs at
first, then the growth becomes explosive:
$1.01, $1.02, $1.04, $1.08, $1.17, $1.38, $1.90, $3.60, $12.94, $167.34, $28001.13,
$784063053.14, ...
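That schedule is easy to reproduce with a one-line sketch
(using the parameter k = 0.01 quoted above):

   from math import exp

   k = 0.01
   print([f'${exp(k * 2**n):.2f}' for n in range(12)])
   # ['$1.01', '$1.02', '$1.04', '$1.08', '$1.17', '$1.38',
   #  '$1.90', '$3.60', '$12.94', '$167.34', '$28001.13', '$784063053.14']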
This sequence of payoffs is clearly worth a substantial premium,
but consider the related schedule where you get paid $1.00 for
any successful sequence of less than 100 tosses and exp(k 2^(n-100)) dollars
thereafter. That gamble is worth $1.00 to absolutely anybody,
in spite of the fact that its logarithmic "utility" is infinite...
There is no way around it. Utilities are always bounded.
If we're presented with a theoretical problem where payoffs are unbounded,
as they are in the Petersburg Gamble, then the utility function itself must have an
upper limit (in practical situations, potential payoffs are always bounded,
which makes the exact mathematical form of the utility function irrelevant
beyond a certain point and the issue does not arise because of such practical limits).
If a tool, like Bernoulli's logarithmic utilities, fails to make sense of the
Petersburg Gamble for some particular payoff schedule,
then it clearly cannot be trusted to analyze any other schedule.
It turns out that only very few utility functions allow a self-consistent
analysis fully compatible with the nature of the question we are asked.
In fact, we only have the freedom to choose a single scalar parameter
(the player's so-called risk tolerance)! Read on:
There's a hidden assumption in this and other similar theoretical puzzles,
which we must make explicit in order to solve the riddle:
The question is asked out of context and must be answered likewise if
it is to be answered at all.
We are not to involve sordid details about the
rest of the player's life (size of bank accounts, mortgages, etc.).
That approach is logically consistent only with the assumption of an exponential
utility function of money, which is the only type of utility function
where decisions about a particular prospect are not influenced by the
rest of one's situation...
It does not make sense to analyze an isolated gamble except
by assuming an exponential utility function, since no other
utility function of money even permits such isolation.
This is a theoretical argument, of course, but it's clearly appropriate
for a theoretical question like the one at hand...
Since we must, we shall happily assume that the player's
utility function of money is
of the form 1-exp(-x/r) for some parameter r
(which is a dollar amount, usually called risk tolerance).
In this, x should generally be the player's total wealth, but the unique properties
of the exponential function allow us to consider that x is simply the
amount gained or lost in the gamble(s) at hand
(since changing the zero point on the money scale merely rescales exponential
utilities without affecting the comparisons relevant for decisions).
We do not have such freedom with
a more general utility function, as Daniel Bernoulli first recognized.
Also, since additive and/or (positive) multiplicative constants in the utility function
do not affect decisions, we may as well use u(x) = -exp(-x/r)
as the utility of gaining (or losing) x dollars in the gamble at hand.
(The only aesthetic thing lost in the rescaling is that we no longer
have a utility of 0 for a gain of 0.)
It's interesting to observe that the exponential utility function
(with a positive risk tolerance)
does not have a lower bound.
Therefore it could not be used to analyze a gamble with unbounded negative payoffs (or fees).
This is not surprising in view of the fact that such a gamble is clearly a major
decision which cannot be considered independently of the rest of the
player's situation, because the entire wealth of the player (and more) is at risk.
Everybody's actual overall utility function is bounded on both sides (if you can't
possibly repay a huge debt, it makes very little difference if it is $100,000,000 or
$200,000,000). The decision of whether to play the Petersburg Game is a minor one
for which the exponential utility function is entirely appropriate.
The decision to bankroll
such a game would be a major one, even for a risk-loving entity (if one was
ever foolish enough to be attracted by the tiny fees ordinary players are willing to risk).
After this long preamble, the rest is easy.
Let's call u(x) the utility of having x more dollars than initially.
If you pay y dollars for the privilege to play, the utility of playing the
Petersburg game is clearly
Σ  u(2^n - y) / 2^(n+1)        (summing over all n ≥ 0)
and the gamble should be accepted if and only if this is greater than u(0).
In the particular case where u is exponential, this is equivalent to comparing
Σ  u(2^n) / 2^(n+1)
and u(y), namely the utility of the free gamble
and the utility of a so-called certainty equivalent (CE).
The CE is whatever (minimum) amount of money we would be willing to receive as a
compensation for giving up the right to gamble.
It may not be quite the same as the (maximum) price we're willing to pay to acquire
that right!
Only in the case of the exponential (or linear) utility function are these two amounts always
equal.
The CE is the quantity actually computed in
Cramer's original text based on a square root utility function.
It was probably silently assumed at the time that the CE would not be too different from
the price one would be willing to pay.
However, rigorously speaking, the two prices are only equal for an
exponential (or linear) utility function, as noted above!
All told, if a player has an exponential utility function with a
risk tolerance equal to r (expressed in dollars),
the highest price (y) s/he will be willing to pay for a shot at the Petersburg game
is given by the relation:
exp(-y/r)  =  Σ  exp(-2^n/r) / 2^(n+1)        (n running from 0 to ∞)
Once we evaluate the sum on the RHS, this is easy to solve for y
(just take the natural logarithms of both sides and multiply by -r).
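Here is a minimal numerical sketch of that computation (truncating the sum
after 100 terms is harmless, since the terms vanish extremely fast):

   from math import exp, log

   def petersburg_price(r, terms=100):
       # Solve  exp(-y/r) = Σ exp(-2^n/r) / 2^(n+1)  for y.
       s = sum(exp(-2**n / r) / 2**(n + 1) for n in range(terms))
       return -r * log(s)

   for r in (1.0, 100.0, 10_000.0, 1_000_000.0):
       # the price grows roughly like ln(r)/ln(4) for large r (see below)
       print(r, round(petersburg_price(r), 4))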
The computation is best done numerically (see table below) for midrange values of r,
but we may also want to investigate what happens when r
is very large or very small:
For large values of r, we may observe that,
when r is much larger than 2^n, each term of the sum is roughly
equal to 1/2^(n+1) - 1/(2r).
This approximation holds for a number of terms
roughly equal to the base-2 logarithm of r,
after which the terms vanish exponentially fast.
(Notice how the exponential utility function turns out to behave very much like
the original "moral value" function proposed by Cramer in 1728;
proportional to the money at first, then nearly constant after a certain threshold.)
We may thus expect the RHS to be equal to about
k - ln(r)/(2r ln 2) for some constant k, which turns out to
be equal to 1.
The natural logarithm of that, for large values of r, would therefore be
-ln(r)/(r ln(4)), so that y is roughly equal to
ln(r)/ln(4) for large values of r
(actually, it's about 0.5549745 above that).
On the other hand, when r is very small,
the sum on the RHS essentially reduces to its first term, so that y is
extremely close to 1+r ln(2) .
The rest of the expansion is smaller than any power of r,
since the leading term equals (-r /2)exp(-1/r).
In particular, a player with a risk tolerance of zero
(r = 0) will only pay $1 for the gamble, since this is the amount
s/he is guaranteed to get back...
For educational purposes, we've included what a similar analysis would entail
for a nonexponential utility function (last two columns of the above table).
The utility function chosen is such that wealth (or equity) is 6 times the
risk tolerance appearing in the first column.
The entire fortune of the player is thus taken into
account (something we avoided with the exponential function).
Note that the price for which the player is willing to sell a right to play
(the CE, or certainty equivalent) is different from the price he would be
willing to pay to acquire such a right, although this is only significant at low
levels of risk tolerance (both prices are always equal for an exponential utility).
At a zero risk tolerance, it's the buying price
which is equal to $1 (since we're guaranteed to get $1 back, no matter what), whereas
the selling price may be significantly greater
(the value is ½ · 63^(1/5),
or about $1.145086 in this particular case).
That's because a nonexponential utility function integrates future variations of the
risk tolerance and this influences the decision, which is not solely based
on the current instantaneous value of the player's risk
tolerance -u'(x)/u''(x)...
You may wish to use the table backwards:
Determine by introspection what the Petersburg Gamble is worth to you
and you will know roughly what your risk tolerance is.
For example, if you decide that a Petersburg game is worth $6,
your risk tolerance is $1872.28.
The method may not be very accurate because
you are essentially guessing on a logarithmic scale which amplifies errors
(estimating the game to be worth $6.05 would correspond to a
risk tolerance of $2008.07).
However, it's only the order of magnitude of your risk tolerance
which counts for many decisions, and the Petersburg game will allow
you to evaluate that.
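Here is a sketch of that reverse lookup, bisecting on a logarithmic scale
under the same exponential-utility assumption:

   from math import exp, log

   def petersburg_price(r, terms=100):
       # Highest acceptable price for the game, given a risk tolerance r.
       s = sum(exp(-2**n / r) / 2**(n + 1) for n in range(terms))
       return -r * log(s)

   def risk_tolerance(price, lo=0.01, hi=1e12):
       # petersburg_price increases with r, so bisect (geometrically).
       for _ in range(200):
           mid = (lo * hi) ** 0.5
           lo, hi = (mid, hi) if petersburg_price(mid) < price else (lo, mid)
       return lo

   print(round(risk_tolerance(6.00), 2))   # about 1872.28, as quoted above
   print(round(risk_tolerance(6.05), 2))   # about 2008.07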
(2016-07-12) The Two-Envelope Problem (exchange paradox)
That's puzzling only if one misuses the concept of random variable.
You are presented with two indistinguishable envelopes,
knowing only that one of them contains twice as much money as the other.
After you've made your choice, you're given an opportunity to swap.
Should you do so?
Common sense (correctly) says that it doesn't make any difference.
However, there's a popular fallacious argument associated with this problem which
would seem to indicate that the other envelope is always preferable
because, whatever the actual value X of your envelope may be, the expected
value of the other envelope is supposedly 25% larger, based on the following
equation:
½ [ X / 2 ] + ½ [ 2 X ] = 1.25 X
This misleading tautology is certainly not a correct way to compute
the expected value of the second envelope!
The fallacy is simply that X is a random variable,
which cannot be used as if it were an ordinary variable (i.e.,
the unknown value of some fixed parameter, not subject to chance).
The only legitimate parameter in this problem determines the unrevealed
amount of money in the envelopes (a in one, 2a
in the other).
The fact that the parameter a is hidden is just a
circumstance which makes it impossible to know
which one of the following events has occurred, even if you can peek inside
your envelope before deciding to swap or not:
X = a and the value of the other envelope is 2 X = 2 a.
X = 2 a and the value of the other envelope is X/2 = a.
The expected value of the amount in either envelope is clearly equal to:
½ [ a ] + ½ [ 2 a ] = 1.5 a
That expression is utterly irrelevant in practice, since you don't know the value of
a at the time you are given the opportunity to swap.
You simply know that it will be the same value regardless of your choice.
However, being allowed to look inside your chosen envelope before deciding to swap may
definitely influence your decision:
Are you willing to gamble half of your winnings for a 50% chance at doubling your money?
Well, as the rest of this page goes to show, it all depends on what amount of money is actually at stake.
A rational decision will depend on how much money you had before the deal unless
your utility function is exponential. Let's consider only that case.
If r is your risk tolerance, then the (normalized) utility you assign to
a gain of x amount of money is:
u(x) = 1 - exp ( -x / r )
Thus, if you see an amount x in your envelope, you should risk losing x/2 for a 50% chance
of gaining x only if the utility of swapping is positive:
0 < ½ [ 1 - exp ( x / 2r ) ] + ½ [ 1 - exp ( -x / r ) ]
Introducing the variable y = exp( x / 2r ) > 1, the above reads:
0  <  2 - y - 1/y^2
or
0  <  2 y^2 - y^3 - 1  =  (1-y)(y^2 - y - 1)
Therefore, the last bracketed polynomial must be negative
(since the first factor, 1-y, is negative for y > 1).
That quadratic polynomial has a negative root -1/φ and a positive root φ
(the golden ratio), and it is negative between those two roots.
Our inequality is thus satisfied for y > 1 if and only if y remains below φ.
For the amount of money x, this translates into the condition:
x  <  2 r ln(φ)  =  (0.96242365...) r
In other words, rational players should swap envelopes iff
they find less than about 96.24% of their own
risk tolerance inside their first envelope.
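As a closing numerical sketch of that criterion
(the $1000 risk tolerance is an arbitrary assumption):

   from math import exp, log

   phi = (1 + 5**0.5) / 2     # the golden ratio
   r = 1000.0                 # assumed risk tolerance, in dollars
   print(2 * r * log(phi))    # about 962.42: swapping pays below this amount

   def swap_gain(x):
       # Expected utility of swapping after seeing x in the chosen envelope.
       return 0.5 * (1 - exp(x / (2*r))) + 0.5 * (1 - exp(-x / r))

   print(swap_gain(900.0) > 0, swap_gain(1000.0) > 0)   # True False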