Table of contents

1 Definition
  Interesting special cases
  Examples
2 Constructing stochastic processes
  The Kolmogorov extension
  Separability, or what the Kolmogorov extension does not provide
  The algebraic approach
3 Bibliography
Definition

Mathematically, a stochastic process is usually defined as an indexed collection of random variables

  { f(i) | i ∈ I },

where i runs over some index set I and W is some probability space on which the random variables are defined.
This definition captures the idea of a random function in the following way. To make a function f : X → Y random, take the index set I to be the domain X; then, for each point x of the domain, the value f(x) is a random variable taking values in the range Y.
Interesting special cases

Of course, the mathematical definition of a function includes the case "a function from {1,...,n} to R is a vector in R^n", so multivariate random variables are a special case of stochastic processes.
Examples

For our first infinite example, take the domain to be N, the natural numbers, and our range to be R, the real numbers. Then, a function f : N → R is a sequence of real numbers, and a stochastic process with domain N and range R is a random sequence. The following questions arise:
What is a suitable elementary example to develop in full? Maybe coin-tossing or random walk?
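For instance, a simple random walk is one concrete random sequence: each f(n) is a random variable (the position after n fair coin tosses), and a single draw of the whole process is one sequence of real numbers. A minimal simulation sketch (Python with NumPy; the function name and the ±1 step distribution are illustrative choices, not from the text):

```python
import numpy as np

def sample_random_walk(n_steps, rng=None):
    """Draw one realization of a simple random walk f : {0, ..., n_steps} -> R.

    Each increment is +1 or -1 with probability 1/2, so f(n) is the sum of the
    first n coin tosses.  Sampling repeatedly gives different random sequences,
    i.e. different realizations of the same stochastic process.
    """
    rng = np.random.default_rng() if rng is None else rng
    increments = rng.choice([-1, 1], size=n_steps)        # the coin tosses
    return np.concatenate(([0], np.cumsum(increments)))   # f(0), f(1), ..., f(n_steps)

# Three independent realizations of the same process: three random sequences.
for _ in range(3):
    print(sample_random_walk(10))
```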
Constructing stochastic processes

In the ordinary axiomatization of probability theory by means of measure theory, the problem is to construct a sigma-algebra of measurable subsets of the space of all functions, and then put a probability measure on it. For this purpose one traditionally uses a method called the Kolmogorov extension.
There is at least one alternative axiomatization of probability theory by means of expectations on algebras of observables. In this case the method goes by the name of Gelfand-Naimark-Segal construction.
This is analogous to the two approaches to measure and integration, where one has the choice to construct measures of sets first and define integrals later, or construct integrals first and define set measures as integrals of characteristic functions.
The Kolmogorov extension

The Kolmogorov extension proceeds along the following lines: assuming that a probability measure on the space of all functions f : X → Y exists, it can be used to specify the joint probability distribution of the finite-dimensional random variables [f(x_1), ..., f(x_n)]. Now, from this n-dimensional probability distribution we can deduce an (n-1)-dimensional marginal probability distribution for [f(x_1), ..., f(x_{n-1})]. There is an obvious compatibility condition, namely that this marginal probability distribution be the same as the one derived directly from the full-blown stochastic process. When this condition is expressed in terms of probability densities, the result is called the Chapman-Kolmogorov equation.
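In symbols, writing μ_{x_1,...,x_n} for the distribution of [f(x_1), ..., f(x_n)] on Y^n and p_{x_1,...,x_n} for its density (the symbols μ and p are my own shorthand for the laws described in the prose), the compatibility condition can be sketched as:

```latex
% Compatibility (consistency) of the finite-dimensional distributions:
% integrating out the last coordinate reproduces the (n-1)-dimensional law.
\mu_{x_1,\dots,x_{n-1}}(A) \;=\; \mu_{x_1,\dots,x_n}\bigl(A \times Y\bigr)
  \qquad \text{for every measurable } A \subseteq Y^{n-1},

% or, in terms of probability densities (the Chapman-Kolmogorov form referred to above):
p_{x_1,\dots,x_{n-1}}(y_1,\dots,y_{n-1}) \;=\; \int_Y p_{x_1,\dots,x_n}(y_1,\dots,y_n)\,\mathrm{d}y_n .
```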
The Kolmogorov extension theorem guarantees the existence of a stochastic process with a given family of finite-dimensional probability distributions satisfying the Chapman-Kolmogorov compatibility condition.
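As a concrete illustration of such a family, a zero-mean Gaussian process defined through a covariance kernel assigns a multivariate normal law to every finite set of points, and these laws are automatically compatible: dropping a point just deletes the corresponding row and column of the covariance matrix. A minimal numerical sketch of that compatibility check (Python with NumPy; the kernel choice and function names are illustrative assumptions, not from the text):

```python
import numpy as np

def kernel(x1, x2, length_scale=1.0):
    """Squared-exponential covariance; any positive-definite kernel yields a
    consistent family of finite-dimensional (Gaussian) distributions."""
    return np.exp(-0.5 * ((x1 - x2) / length_scale) ** 2)

def finite_dimensional_law(points):
    """Mean vector and covariance matrix of [f(x_1), ..., f(x_n)] for the
    zero-mean Gaussian process with the kernel above."""
    points = np.asarray(points, dtype=float)
    cov = kernel(points[:, None], points[None, :])
    return np.zeros(len(points)), cov

# Consistency check: the law of [f(x_1), f(x_2)] obtained directly agrees with
# the marginal of [f(x_1), f(x_2), f(x_3)] after dropping the last coordinate.
_, cov12 = finite_dimensional_law([0.0, 0.7])
_, cov123 = finite_dimensional_law([0.0, 0.7, 1.5])
assert np.allclose(cov12, cov123[:2, :2])
```

The Kolmogorov extension theorem then promises a genuine stochastic process whose finite-dimensional distributions are exactly these Gaussian laws.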
Recall that, in the Kolmogorov axiomatization, measurable sets are the sets which have a probability or, in other words, the sets corresponding to yes/no questions that have a probabilistic answer.
The Kolmogorov extension starts by declaring to be measurable all sets of functions where finitely many coordinates [f(x_1), ..., f(x_n)] are restricted to lie in measurable subsets of Y^n. In other words, if a yes/no question about f can be answered by looking at the values of at most finitely many coordinates, then it has a probabilistic answer.
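Such a set is usually called a cylinder set (standard terminology, though not used in the text above); schematically:

```latex
% A cylinder set: a set of functions constrained only through finitely many coordinates.
C_{x_1,\dots,x_n;\,B} \;=\; \{\, f : X \to Y \;\mid\; (f(x_1),\dots,f(x_n)) \in B \,\},
  \qquad B \subseteq Y^{n} \text{ measurable}.
```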
In measure theory, if we have a countably infinite collection of measurable sets, then their union and their intersection are again measurable sets. For our purposes, this means that yes/no questions that depend on countably many coordinates have a probabilistic answer.
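For example, for a random sequence the event "f(n) → 0 as n → ∞" is built from finite-coordinate events by countably many unions and intersections, so it is measurable:

```latex
\{\, f : \lim_{n\to\infty} f(n) = 0 \,\}
  \;=\; \bigcap_{k \ge 1} \bigcup_{N \ge 1} \bigcap_{n \ge N} \{\, f : |f(n)| < 1/k \,\}.
```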
Separability, or what the Kolmogorov extension does not provide

The good news is that the Kolmogorov extension makes it possible to construct stochastic processes with fairly arbitrary finite-dimensional distributions. Also, every question that one could ask about a sequence has a probabilistic answer when asked of a random sequence. The bad news is that certain questions about functions on a continuous domain don't have a probabilistic answer. One might hope that the questions that depend on uncountably many values of a function would be of little interest, but the really bad news is that virtually all concepts of calculus are of this sort. For example: whether f is continuous, whether f is differentiable, the supremum of f over an interval, and the integral of f over an interval all depend on uncountably many values f(x).
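To see the difficulty, even the innocent-looking event that f vanishes everywhere on an interval involves uncountably many coordinates (the interval [0,1] is an illustrative choice):

```latex
\{\, f \mid f(x) = 0 \text{ for all } x \in [0,1] \,\}
  \;=\; \bigcap_{x \in [0,1]} \{\, f \mid f(x) = 0 \,\},
```

an intersection over uncountably many coordinates, which the Kolmogorov construction does not automatically make measurable.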
One solution to this problem is to require that the stochastic process be separable. In other words, that there be some countable set of coordinates {f(x_i)} whose values determine the whole random function f.
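For instance, if f on [0,1] is separable with respect to the countable set Q ∩ [0,1] (an illustrative choice of countable set), then a question such as the supremum reduces to countably many coordinates:

```latex
\sup_{x \in [0,1]} f(x) \;=\; \sup_{x \in \mathbb{Q} \cap [0,1]} f(x),
```

which is determined by the countably many values f(x) with x rational, and therefore has a probabilistic answer.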
The algebraic approach

In the algebraic axiomatization of probability theory, one of whose main proponents was Segal, the primary concept is not that of probability of an event, but rather that of a random variable. Probability distributions are determined by assigning an expectation to each random variable. The measurable space and the probability measure arise from the random variables and expectations by means of well-known representation theorems of analysis. One of the important features of the algebraic approach is that apparently infinite-dimensional probability distributions are not harder to formalize than finite-dimensional ones.
Random variables are assumed to have the following properties: they can be added and multiplied with each other, they can be multiplied by complex numbers, multiplication is commutative, and every random variable a has a conjugate a*. This means that random variables form complex abelian *-algebras. If a = a*, the random variable a is called "real".

An expectation E on an algebra A of random variables is a normalized, positive linear functional. What this means is that E is linear, E(1) = 1 (normalization), and E(a*a) ≥ 0 for every a in A (positivity).
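To make these axioms concrete, here is a toy finite model: random variables on a three-point sample space, with pointwise algebra operations, complex conjugation as the *-operation, and expectation given by a probability weight vector. This is only an illustrative sketch (Python with NumPy; the class and function names are my own, not from the text):

```python
import numpy as np

class RandomVariable:
    """A random variable on a finite sample space: a complex-valued array."""
    def __init__(self, values):
        self.values = np.asarray(values, dtype=complex)

    def __add__(self, other):
        return RandomVariable(self.values + other.values)

    def __mul__(self, other):                 # commutative, pointwise product
        if isinstance(other, RandomVariable):
            return RandomVariable(self.values * other.values)
        return RandomVariable(other * self.values)   # multiplication by a scalar

    def conj(self):                           # the *-operation a -> a*
        return RandomVariable(np.conj(self.values))

def expectation(a, probs):
    """Normalized positive linear functional E(a) = sum_i p_i a(i)."""
    return np.dot(probs, a.values)

probs = np.array([0.2, 0.5, 0.3])             # a probability distribution
one = RandomVariable([1, 1, 1])
a = RandomVariable([1 + 2j, -1, 0.5j])

print(expectation(one, probs))                # E(1) = 1   (normalization)
print(expectation(a.conj() * a, probs))       # E(a* a) >= 0   (positivity)
```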
Bibliography