The notion of an independent variable often (but not always) implies the ability to choose the levels of the independent variable, with the dependent variable responding naturally, as in the stimulus-response model. The independent variable x may be a scalar or a vector. In the former case we may write one of the simplest linear-regression models as follows:

yi = α + β xi + εi,  for i = 1, ..., n,

where α and β are unknown parameters and εi is a random "error" term.
Historically, in applications to measurements in astronomy, the "error" was actually a random measurement error, but in many applications ε is merely the amount by which the individual y-value differs from the average y-value among individuals having the same x-value. The average value of the random "error" is zero. Often in linear regression problems statisticians rely on the Gauss-Markov assumptions: the errors εi have expected value zero, they all have the same (finite) variance σ^2, and they are uncorrelated with one another.
Sometimes stronger assumptions are relied on: that the errors εi are independent and normally distributed with mean zero and common variance σ^2.
It is often erroneously thought that the reason the technique is called "linear regression" is that the graph of y = α + βx is a line. But in fact, if the model is

yi = α + β xi + γ xi^2 + εi

(in which case the vector (xi, xi^2) plays the role of the independent variable), the problem is still one of linear regression, even though the graph is not a straight line, because the model is linear in the unknown parameters α, β, and γ.
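To illustrate this point, here is a minimal sketch of fitting the quadratic model above by ordinary linear least squares, simply by treating 1, x, and x^2 as columns of a design matrix. The data, array names, and noise level are hypothetical, and np.linalg.lstsq is used here as a generic least-squares routine, not as the method prescribed by the text.

    import numpy as np

    # Hypothetical data: a quadratic trend plus noise (illustrative values only).
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 50)
    y = 2.0 + 1.5 * x - 0.3 * x**2 + rng.normal(scale=1.0, size=x.size)

    # Design matrix with columns 1, x, x^2: the model is linear in (alpha, beta, gamma)
    # even though the fitted curve is not a straight line.
    X = np.column_stack([np.ones_like(x), x, x**2])

    # Ordinary least squares for the coefficients (alpha, beta, gamma).
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("alpha, beta, gamma estimates:", coef)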
A statistician will usually estimate the unobservable values of the parameters α and β by the method of least squares, which consists of finding the values of a and b that minimize the sum of squares of the residuals

ei = yi - (a + b xi),  i = 1, ..., n,

that is, the quantity Σi (yi - a - b xi)^2.
Notice that, whereas the errors are independent, the residuals cannot be independent, because the use of the least-squares estimates implies that the sum of the residuals must be 0 and the dot product of the vector of residuals with the vector of x-values must be 0; i.e., we must have

Σi ei = 0  and  Σi ei xi = 0.
These facts make it possible to use Student's t-distribution with n - 2 degrees of freedom (so named in honor of the pseudonymous "Student") to find confidence intervals for α and β.
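A minimal numerical sketch of the last three paragraphs, assuming hypothetical data arrays: it fits a and b by least squares, checks the two linear constraints on the residuals, and forms t-based confidence intervals with n - 2 degrees of freedom. The standard-error formulas used for a and b are the usual textbook ones, which are not stated in the text above.

    import numpy as np
    from scipy import stats

    # Hypothetical data (illustrative values only).
    rng = np.random.default_rng(1)
    x = rng.uniform(0.0, 10.0, size=25)
    y = 1.0 + 2.0 * x + rng.normal(scale=1.5, size=x.size)
    n = x.size

    # Least-squares estimates a and b.
    sxx = ((x - x.mean()) ** 2).sum()
    b = ((x - x.mean()) * (y - y.mean())).sum() / sxx
    a = y.mean() - b * x.mean()

    # The residuals satisfy the two constraints noted above (up to rounding error).
    e = y - (a + b * x)
    print("sum of residuals:         ", e.sum())
    print("dot product with x-values:", (e * x).sum())

    # Estimate of the error variance, with n - 2 degrees of freedom.
    s2 = (e ** 2).sum() / (n - 2)

    # Standard textbook standard errors for a and b (an assumption, not stated in the text).
    se_b = np.sqrt(s2 / sxx)
    se_a = np.sqrt(s2 * (1.0 / n + x.mean() ** 2 / sxx))

    # 95% confidence intervals from Student's t-distribution with n - 2 degrees of freedom.
    t = stats.t.ppf(0.975, df=n - 2)
    print("95% CI for alpha:", (a - t * se_a, a + t * se_a))
    print("95% CI for beta: ", (b - t * se_b, b + t * se_b))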
Denote by capital Y the column vector whose ith entry is yi, and by capital X the n × 2 matrix whose second column has xi as its ith entry and whose first column contains n 1s. Let ε be the column vector containing the errors εi. Let δ and d be, respectively, the 2 × 1 column vector containing α and β and the 2 × 1 column vector containing the estimates a and b. Then the model can be written as

Y = Xδ + ε.
Then it can be shown that

d = (X'X)^-1 X'Y

and that the vector of residuals is

e = Y - Xd = (In - X(X'X)^-1X')Y.
The matrix In - X(X'X)^-1X' that appears above is a symmetric idempotent matrix of rank n - 2. Here is an example of the use of that fact in the theory of linear regression. The finite-dimensional spectral theorem of linear algebra says that any real symmetric matrix M can be diagonalized by an orthogonal matrix G, i.e., the matrix G'MG is a diagonal matrix. If the matrix M is also idempotent, then the diagonal entries of G'MG must themselves be idempotent, and the only idempotent real numbers are 0 and 1. So In - X(X'X)^-1X', after diagonalization, has n - 2 1s and two 0s on the diagonal. That is most of the work in showing that the sum of squares of the residuals, divided by σ^2, has a chi-square distribution with n - 2 degrees of freedom.
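The matrix identities in the last two paragraphs are easy to check numerically. The sketch below assumes a small hypothetical data set: it computes d = (X'X)^-1 X'Y and then verifies that In - X(X'X)^-1X' is symmetric, idempotent, and has n - 2 eigenvalues equal to 1 and two equal to 0.

    import numpy as np

    # Hypothetical data (illustrative values only).
    rng = np.random.default_rng(3)
    n = 10
    xi = rng.uniform(0.0, 5.0, size=n)
    yi = 3.0 - 1.2 * xi + rng.normal(scale=0.4, size=n)

    # Y is the n-by-1 vector of y-values; X has a column of 1s and a column of x-values.
    Y = yi.reshape(-1, 1)
    X = np.column_stack([np.ones(n), xi])

    # d = (X'X)^-1 X'Y, computed with a linear solve rather than an explicit inverse.
    d = np.linalg.solve(X.T @ X, X.T @ Y)
    print("a, b:", d.ravel())

    # M = In - X(X'X)^-1 X' is symmetric and idempotent with rank n - 2.
    M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T
    print("symmetric: ", np.allclose(M, M.T))
    print("idempotent:", np.allclose(M @ M, M))

    # Its eigenvalues are n - 2 ones and two zeros (up to rounding error).
    print("eigenvalues:", np.round(np.linalg.eigvalsh(M), 10))
    print("rank:", np.linalg.matrix_rank(M))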
Table of contents

1 Summarizing the data
2 Estimating beta
3 Estimating alpha
4 Displaying the residuals
5 Ancillary statistics
Summarizing the data

We sum the observations, the squares of the Y's and X's, and the products X·Y to obtain the following quantities:

SX = x1 + x2 + ... + xn and SY similarly.
SXX = x1^2 + x2^2 + ... + xn^2 and SYY similarly.
SXY = x1 y1 + x2 y2 + ... + xn yn.
Estimating beta

We use the summary statistics above to calculate b, the estimate of beta:

b = (n SXY - SX SY) / (n SXX - SX^2).
Estimating alpha

We use the estimate of beta and the other statistics to estimate alpha by

a = (SY - b SX) / n.
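A minimal sketch of the whole computation in these two subsections, assuming a small hypothetical data set (the arrays x and y below are illustrative, not from the text): it forms the summary sums and then the estimates b and a from the formulas above.

    import numpy as np

    # Hypothetical paired observations (illustrative values only).
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])
    n = x.size

    # Summary sums.
    SX, SY = x.sum(), y.sum()
    SXX, SYY = (x ** 2).sum(), (y ** 2).sum()
    SXY = (x * y).sum()

    # Estimates of beta and alpha from the formulas above.
    b = (n * SXY - SX * SY) / (n * SXX - SX ** 2)
    a = (SY - b * SX) / n
    print("b (slope):    ", b)
    print("a (intercept):", a)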
Displaying the residuals

The first method of displaying the residuals uses a histogram or cumulative-distribution plot to depict the similarity (or lack thereof) to a normal distribution. Non-normality suggests that the model may not be a good summary description of the data.
The second method is to plot the residuals against the independent variable, X. There should be no discernible trend or pattern if the model is satisfactory for these data. Some of the possible problems are: residuals whose spread increases (or decreases) with X, indicating that the variance of the errors is not constant; a systematic curved pattern, indicating that a straight line is not an adequate model; and a few isolated extreme residuals, indicating outliers or recording errors.
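Both displays described above can be produced with a plotting library; the sketch below uses matplotlib and hypothetical data, so the arrays, bin count, and figure layout are illustrative choices, not prescriptions from the text.

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical data and least-squares fit (illustrative values only).
    rng = np.random.default_rng(5)
    x = rng.uniform(0.0, 10.0, size=60)
    y = 5.0 + 1.3 * x + rng.normal(scale=1.0, size=x.size)
    b = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
    a = y.mean() - b * x.mean()
    e = y - (a + b * x)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

    # Histogram of residuals: roughly bell-shaped if the normality assumption is reasonable.
    ax1.hist(e, bins=15)
    ax1.set_title("Histogram of residuals")

    # Residuals against X: no trend or funnel shape if the model is adequate.
    ax2.scatter(x, e)
    ax2.axhline(0.0, linestyle="--")
    ax2.set_xlabel("X")
    ax2.set_ylabel("residual")
    ax2.set_title("Residuals vs X")

    plt.tight_layout()
    plt.show()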
Ancillary statistics

The sum of squared deviations can be partitioned as in ANOVA to indicate what part of the dispersion of the dependent variable is explained by the independent variable:

Σ(yi - ȳ)^2 = Σ(ŷi - ȳ)^2 + Σ(yi - ŷi)^2,

where ŷi = a + b xi is the ith fitted value and ȳ is the mean of the y-values.
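A short numerical check of this partition, again on hypothetical data: the total sum of squared deviations equals the explained part plus the residual part, up to rounding error.

    import numpy as np

    # Hypothetical data and least-squares fit (illustrative values only).
    rng = np.random.default_rng(6)
    x = rng.uniform(0.0, 10.0, size=40)
    y = 2.0 + 0.7 * x + rng.normal(scale=0.8, size=x.size)
    b = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
    a = y.mean() - b * x.mean()
    yhat = a + b * x

    # ANOVA-style partition: total = explained + residual.
    sst = ((y - y.mean()) ** 2).sum()
    ssreg = ((yhat - y.mean()) ** 2).sum()
    sse = ((y - yhat) ** 2).sum()
    print("total:               ", sst)
    print("explained + residual:", ssreg + sse)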
The correlation coefficient, r, can be calculated by

r = (n SXY - SX SY) / √((n SXX - SX^2)(n SYY - SY^2)).
This statistic is a measure of how well a straight line describes the data. Values near zero suggest that the model is ineffective. r^2 is frequently interpreted as the fraction of the variability in Y explained by the independent variable, X.
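For the straight-line fit, r^2 coincides with the ratio of the explained sum of squares to the total sum of squares in the partition above. The sketch below computes r from the formula and checks that identity on the same hypothetical data used earlier; the arrays are illustrative only.

    import numpy as np

    # Hypothetical paired observations (illustrative values only).
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])
    n = x.size

    # Summary sums as defined above.
    SX, SY = x.sum(), y.sum()
    SXX, SYY, SXY = (x ** 2).sum(), (y ** 2).sum(), (x * y).sum()

    # Correlation coefficient from the formula above.
    r = (n * SXY - SX * SY) / np.sqrt((n * SXX - SX ** 2) * (n * SYY - SY ** 2))

    # r^2 agrees with the explained fraction of the dispersion.
    b = (n * SXY - SX * SY) / (n * SXX - SX ** 2)
    a = (SY - b * SX) / n
    yhat = a + b * x
    ssreg = ((yhat - y.mean()) ** 2).sum()
    sst = ((y - y.mean()) ** 2).sum()
    print("r:", r, " r^2:", r ** 2, " SSreg/SST:", ssreg / sst)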