In practice, the Bayesian formula (which we shall presently introduce)
is often applied to an uncertain hypothesis A and a "test" B for it,
relating the a priori
probability P(A) to the a posteriori probability P(A|B).
However, A and B can simply be considered events placed on the same footing,
playing no distinct mathematical rôles:
- P(A) denotes the probability of A. P(B) is the probability of B.
- P(A,B) is the joint probability that A and B both occur.
- P(A|B) is the conditional probability of A given that B occurs;
it is read as the probability of "A knowing B".
The following holds:
P(A|B) P(B) = P(A,B) = P(B,A) = P(B|A) P(A)
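As a quick sanity check, this identity can be verified numerically on any toy joint distribution; the probabilities in the following sketch are arbitrary illustrative values, not taken from the text.

```python
# Toy joint distribution over two binary events A and B (illustrative numbers only).
# Keys are (a, b) with True/False indicating whether each event occurs.
joint = {
    (True, True): 0.12,
    (True, False): 0.18,
    (False, True): 0.28,
    (False, False): 0.42,
}

p_A = sum(p for (a, b), p in joint.items() if a)   # P(A)
p_B = sum(p for (a, b), p in joint.items() if b)   # P(B)
p_AB = joint[(True, True)]                         # P(A,B) = P(B,A)

p_A_given_B = p_AB / p_B   # P(A|B)
p_B_given_A = p_AB / p_A   # P(B|A)

# Both products recover the same joint probability:
assert abs(p_A_given_B * p_B - p_AB) < 1e-12
assert abs(p_B_given_A * p_A - p_AB) < 1e-12
print(p_A_given_B * p_B, p_B_given_A * p_A, p_AB)
```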
The fact that joint occurrence is symmetric [i.e., P(A,B) = P(B,A)]
is the crucial point: dividing both sides of the above by P(B)
yields a proof of the following Bayes' formula of inference,
upon which Bayesian statistics is based.
The formula itself is arguably due to Laplace (1812) as
Thomas Bayes (c. 1701-1761)
didn't bother with such a formal expression. Neither did
Richard Price
who edited the work of Bayes posthumously (1763).
Bayes' Theorem (Bayes' Formula)
P(A|B)  =  P(B|A) P(A) / P(B)
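A standard worked example helps fix ideas; the numbers below are hypothetical values chosen purely for illustration, with A standing for "patient has the condition" and B for "the test is positive".

```python
# Hypothetical screening-test numbers, chosen only to illustrate the formula.
p_A = 0.01             # prior P(A): 1% prevalence
p_B_given_A = 0.95     # sensitivity P(B|A)
p_B_given_notA = 0.05  # false-positive rate P(B|not A)

# Total probability of a positive test, P(B):
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' theorem: posterior P(A|B)
p_A_given_B = p_B_given_A * p_A / p_B
print(round(p_A_given_B, 4))   # about 0.161: a positive test leaves roughly a 16% probability
```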
This is consistent with a classical description of reality
where probabilities are expressed in terms of classical events which
could be, among other possibilities, independent or mutually exclusive.
There are probabilistic systems which cannot be described in classical terms
(systems in which no two events are ever independent or mutually exclusive).
The quantum universe (which we live in)
is one such system, where the above foundational relation of Bayesian statistics is neither
obvious nor true, as experimental violations of
Bell's inequality demonstrate.
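For a concrete sense of what such a violation means, the following toy calculation (not from the text) evaluates the quantum-mechanical prediction for the CHSH form of Bell's inequality, whose classical bound is |S| ≤ 2.

```python
from math import cos, pi, sqrt

# Quantum-mechanical correlation for spin measurements on a singlet pair,
# at analyser angles a and b:  E(a, b) = -cos(a - b).
def E(a, b):
    return -cos(a - b)

# Standard CHSH combination with the usual optimal angle choices.
a, a2 = 0.0, pi / 2
b, b2 = pi / 4, 3 * pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S), 2 * sqrt(2))   # |S| = 2*sqrt(2) ≈ 2.83, exceeding the classical bound of 2
```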
Bayes' theorem is simply true in any self-consistent system where
probability is a classical measure on the subsets of some universal set E
of probability 1, satisfying the usual axioms of measure theory
(nonnegativity, and additivity over mutually exclusive events).
Thomas Bayes (c. 1701-1761)  |  Richard Price (1723-1791)  |  Laplace (1749-1827)
Wikipedia :  Bayes' theorem  |  Bayesian probability  |  Bayesian inference
No, it's not. At least not a perfect one.
If it were, it wouldn't be capable of irrational decisions.
Arguably, the biology of the brain involves processes that entail
voting and the associated irrational nontransitivity
stemming from Condorcet's paradox.
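As a reminder of how such nontransitivity arises, here is a minimal sketch of Condorcet's paradox with three hypothetical voters; the preference orderings are the textbook configuration, not anything taken from the source.

```python
from itertools import combinations

# Three voters with cyclic preference orderings over options A, B, C
# (the classic Condorcet configuration).
ballots = [
    ["A", "B", "C"],   # voter 1: A > B > C
    ["B", "C", "A"],   # voter 2: B > C > A
    ["C", "A", "B"],   # voter 3: C > A > B
]

# Pairwise majority vote between every pair of options.
for x, y in combinations("ABC", 2):
    x_wins = sum(ballot.index(x) < ballot.index(y) for ballot in ballots)
    winner, loser = (x, y) if x_wins > len(ballots) / 2 else (y, x)
    print(f"{winner} beats {loser} by majority")

# Prints: A beats B, C beats A, B beats C -- a majority cycle,
# so the collective preference is nontransitive.
```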
This is not to say, thankfully, that humans cannot consciously revise their
beliefs or opinions when presented with new evidence.
It merely means that doing so takes some effort,
just as it takes some effort to practice mathematics and obtain flawless
results in spite of our natural inclinations toward intuition,
preconceptions, and mistakes.