Definition and examples

In probability theory, a normalizing constant is a constant by which an everywhere nonnegative function must be multiplied in order to get a probability density function or a probability mass function. For example, if we define

p(x) = e^{-x^2/2}, \quad x \in (-\infty, \infty),

we have

\int_{-\infty}^{\infty} p(x) \, dx = \int_{-\infty}^{\infty} e^{-x^2/2} \, dx = \sqrt{2\pi},

so that

\varphi(x) = \frac{1}{\sqrt{2\pi}} p(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}

is a probability density function. This is the density of the standard normal distribution. (Standard, in this case, means the expected value is 0 and the variance is 1.) The constant 1/\sqrt{2\pi} is the normalizing constant of the function p(x).
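As a minimal numerical sketch (not part of the original article; it assumes SciPy is available and the names are arbitrary), the value of this normalizing constant can be checked directly:

import math
from scipy.integrate import quad

# Unnormalized function p(x) = exp(-x^2/2); its integral over the real line is sqrt(2*pi).
def p(x):
    return math.exp(-x**2 / 2)

area, _ = quad(p, -math.inf, math.inf)
print(area, math.sqrt(2 * math.pi))  # both approximately 2.5066

# Dividing by the area (i.e., multiplying by the normalizing constant 1/sqrt(2*pi))
# yields the standard normal density phi(x), which integrates to 1.
def phi(x):
    return p(x) / area

total, _ = quad(phi, -math.inf, math.inf)
print(total)  # approximately 1.0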
Similarly,

\sum_{n=0}^{\infty} \frac{\lambda^n}{n!} = e^{\lambda},

and consequently

f(n) = \frac{\lambda^n e^{-\lambda}}{n!}

is a probability mass function on the set of all nonnegative integers. This is the probability mass function of the Poisson distribution with expected value \lambda.
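A companion sketch (again an illustration, not from the article; the rate \lambda = 3 is an arbitrary choice) verifies the Poisson normalization:

import math

lam = 3.0  # arbitrary rate parameter for illustration

# The partial sums of lambda^n / n! converge to e^lambda,
# whose reciprocal e^(-lambda) is the normalizing constant.
partial = sum(lam**n / math.factorial(n) for n in range(100))
print(partial, math.exp(lam))  # both approximately 20.0855

# The normalized terms form the Poisson probability mass function, which sums to 1.
pmf = [lam**n * math.exp(-lam) / math.factorial(n) for n in range(100)]
print(sum(pmf))  # approximately 1.0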
Bayes' theorem

Bayes' theorem says that the posterior probability measure is proportional to the product of the prior probability measure and the likelihood function. "Proportional to" implies that one must multiply or divide by a normalizing constant in order to assign measure 1 to the whole space, i.e., to get a probability measure. In a simple discrete case we have

P(H_0 \mid D) = \frac{P(H_0) \, P(D \mid H_0)}{P(D)},
where P(H0) is the prior probability that the hypothesis is true; P(D|H0) is the likelihood of the data given that the hypothesis is true; and P(H0|D) is the posterior probability that the hypothesis is true given the data. P(D) should be the probability of producing the data, but on its own it is difficult to calculate, so an alternative way to describe this relationship is as one of proportionality:

P(H_0 \mid D) \propto P(H_0) \, P(D \mid H_0).

Since P(H|D) is a probability, the sum over all possible (mutually exclusive) hypotheses should be 1, leading to the conclusion that

P(H_0 \mid D) = \frac{P(H_0) \, P(D \mid H_0)}{\sum_i P(H_i) \, P(D \mid H_i)}.

In this case, the reciprocal of the value P(D) = \sum_i P(H_i) \, P(D \mid H_i) is the normalizing constant.
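As a small sketch of this normalization step (the numbers are hypothetical, chosen only for illustration):

priors = [0.5, 0.3, 0.2]        # hypothetical P(H_i) for three mutually exclusive hypotheses
likelihoods = [0.1, 0.4, 0.7]   # hypothetical P(D | H_i)

# Unnormalized posterior weights P(H_i) * P(D | H_i).
weights = [p * l for p, l in zip(priors, likelihoods)]

# P(D) is the sum of the weights; its reciprocal is the normalizing constant.
p_data = sum(weights)
posteriors = [w / p_data for w in weights]
print(posteriors)       # approximately [0.161, 0.387, 0.452]
print(sum(posteriors))  # 1.0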
Non-probabilistic uses

The Legendre polynomials are characterized by orthogonality with respect to the uniform measure on the interval [−1, 1] and the fact that they are normalized so that their value at 1 is 1. The constant by which one multiplies a polynomial in order that its value at 1 will be 1 is a normalizing constant.
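As a brief sketch (assuming NumPy is available; the property itself is standard), NumPy's Legendre basis uses exactly this normalization, and an unnormalized multiple can be rescaled by its value at 1:

from numpy.polynomial import legendre

# Each Legendre basis polynomial P_n satisfies P_n(1) = 1.
for n in range(5):
    coeffs = [0] * n + [1]  # coefficient vector selecting P_n
    print(n, legendre.legval(1.0, coeffs))  # prints 1.0 for each n

# A multiple such as 3*P_2 has value 3 at x = 1; dividing by that value
# (the normalizing constant here) restores the normalization.
q = [0.0, 0.0, 3.0]
c = legendre.legval(1.0, q)  # 3.0
normalized = [coef / c for coef in q]
print(legendre.legval(1.0, normalized))  # 1.0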