There are two types of uniform distribution: discrete and continuous.
The discrete case
In the discrete case, if there are N possible outcomes x1, x2, ..., xN which are distributed uniformly, then the probability of the outcome xn is:
P(xn) = 1/N for each n = 1, 2, ..., N.
A simple example of the discrete uniform distribution is throwing a fair die. The possible values of x are 1, 2, 3, 4, 5 and 6; each time the die is thrown, the probability of a given score is 1/6.
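As a quick illustration, the die-throwing example can be simulated with a pseudo-random number generator. The following Python sketch (the function name roll_die is only illustrative) estimates the probability of each score from repeated throws; every estimate should come out close to 1/6.

    import random
    from collections import Counter

    def roll_die():
        # Each of the six scores is returned with probability 1/6.
        return random.randint(1, 6)

    throws = 60000
    counts = Counter(roll_die() for _ in range(throws))
    for score in range(1, 7):
        # Empirical relative frequency, expected to be close to 1/6 (about 0.167).
        print(score, counts[score] / throws)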
The continuous case
In the continuous case, the uniform distribution is also called the rectangular distribution because of the shape of its probability density function (see below). It is parameterised by the smallest and largest values that the uniformly distributed random variable can take, a and b. The probability density function of the uniform distribution is thus:
f(x) = 1/(b - a) for a ≤ x ≤ b, and f(x) = 0 otherwise,
and the cumulative distribution function is:
F(x) = 0 for x < a, F(x) = (x - a)/(b - a) for a ≤ x ≤ b, and F(x) = 1 for x > b.
The graph of the probability density function is a rectangle of height 1/(b - a) over the interval from a to b, which is the origin of the name rectangular distribution.
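Written as code, these definitions translate directly; the Python sketch below (the names uniform_pdf and uniform_cdf are illustrative, and the interval [2, 5] is an arbitrary example) implements both functions.

    def uniform_pdf(x, a, b):
        # Density is 1/(b - a) inside [a, b] and 0 outside.
        return 1.0 / (b - a) if a <= x <= b else 0.0

    def uniform_cdf(x, a, b):
        # The CDF rises linearly from 0 at a to 1 at b.
        if x < a:
            return 0.0
        if x > b:
            return 1.0
        return (x - a) / (b - a)

    # Example: uniform distribution on [2, 5].
    print(uniform_pdf(3.0, 2.0, 5.0))  # 1/3
    print(uniform_cdf(3.5, 2.0, 5.0))  # 0.5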
For a random variable following this distribution, the expected value is (a + b)/2 and the standard deviation is
(b - a)/√12.
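These two values are easy to check empirically; the sketch below draws a large sample from a uniform distribution on an example interval and compares the sample mean and standard deviation with (a + b)/2 and (b - a)/√12.

    import math
    import random
    import statistics

    a, b = 2.0, 5.0
    samples = [random.uniform(a, b) for _ in range(200000)]

    # The sample estimates should be close to the theoretical values.
    print(statistics.mean(samples), (a + b) / 2)                 # both near 3.5
    print(statistics.pstdev(samples), (b - a) / math.sqrt(12))   # both near 0.866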
This distribution can be generalized to more complicated sets than intervals. If S is a Borel set of positive, finite measure, the uniform probability distribution on S can be specified by saying that the pdf is zero outside S and constantly equal to 1/K on S, where K is the Lebesgue measure of S.
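As a rough sketch of this idea, suppose S is a union of disjoint intervals: a point with constant density 1/K on S can be drawn by first choosing an interval with probability proportional to its length and then sampling uniformly within it. The particular intervals below are an arbitrary example.

    import random

    # S is the union of two disjoint intervals; K is its total length (Lebesgue measure).
    intervals = [(0.0, 1.0), (4.0, 6.0)]
    lengths = [hi - lo for lo, hi in intervals]
    K = sum(lengths)

    def sample_uniform_on_S():
        # Pick an interval with probability length/K, then a uniform point inside it,
        # giving overall density (length/K) * (1/length) = 1/K on S.
        lo, hi = random.choices(intervals, weights=lengths, k=1)[0]
        return random.uniform(lo, hi)

    print([round(sample_uniform_on_S(), 3) for _ in range(5)])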
The standard uniform distribution
The standard uniform distribution is the continuous uniform distribution with
the values of a and b set to 0 and 1 respectively, so that the random
variable can take values only between 0 and 1.
Sampling from a uniform distribution
When working with probability, it is often useful to run experiments such as computational simulations. Many programming languages have the ability to generate [[Pseudorandom number sequence|pseudo-random
numbers]] which are effectively distributed according to the standard uniform
distribution.
If u is a value sampled from the standard uniform distribution, then the value a + (b - a)u follows the uniform distribution parameterised by a and b, as described above. Other transformations can be used to generate other statistical distributions from the uniform distribution (see the uses described below).
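For example, in Python random.random() returns a pseudo-random value that is effectively a sample from the standard uniform distribution, and the scaling above converts it into a sample from the uniform distribution on [a, b]; the sketch below is a minimal illustration.

    import random

    def uniform_ab(a, b):
        # u is (approximately) a standard uniform sample on [0, 1).
        u = random.random()
        # a + (b - a)u is then uniform on [a, b).
        return a + (b - a) * u

    print([round(uniform_ab(10.0, 20.0), 2) for _ in range(5)])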
Uses of the uniform distribution
Although the uniform distribution is not commonly found in nature, it is particularly useful for sampling from arbitrary distributions.
A general method is the inverse transform sampling method, which uses the cumulative distribution function (CDF) of the target random variable. This method is very useful in theoretical work. Since simulations using this method require inverting the CDF of the target variable, alternative methods have been devised for cases where the CDF is not known in closed form. One such method is rejection sampling.
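As a concrete instance of inverse transform sampling, the exponential distribution with rate λ has CDF F(x) = 1 - e^(-λx), whose inverse is x = -ln(1 - u)/λ. The Python sketch below (the rate value is an arbitrary example) uses this to turn standard uniform samples into exponential samples.

    import math
    import random

    def sample_exponential(rate):
        # Inverse transform: solve u = F(x) = 1 - exp(-rate * x) for x.
        u = random.random()
        return -math.log(1.0 - u) / rate

    rate = 2.0
    samples = [sample_exponential(rate) for _ in range(100000)]
    # The sample mean should be close to the theoretical mean 1/rate = 0.5.
    print(sum(samples) / len(samples))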
The normal distribution is an important example where the inverse transform method is not efficient. However, there is an exact method, the Box-Muller transformation, which uses the inverse transform to convert two independent uniform variables into two independent Gaussian variables.
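One common form of the transformation is sketched below in Python: two independent standard uniform samples u1 and u2 are converted into two independent standard Gaussian samples.

    import math
    import random

    def box_muller():
        # Two independent standard uniform samples.
        u1 = random.random()
        u2 = random.random()
        # The radius is Rayleigh-distributed (obtained by inverse transform);
        # the angle is uniform on [0, 2*pi).
        r = math.sqrt(-2.0 * math.log(1.0 - u1))
        theta = 2.0 * math.pi * u2
        # The two coordinates are independent standard normal variables.
        return r * math.cos(theta), r * math.sin(theta)

    z1, z2 = box_muller()
    print(z1, z2)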