# Generalized Least Squares

In [1]:
from __future__ import print_function
import statsmodels.api as sm
import numpy as np
from statsmodels.iolib.table import (SimpleTable, default_txt_fmt)


The Longley dataset is a time series dataset:

In [2]:
data = sm.datasets.longley.load()
print(data.exog[:5])

[[      1.       83.   234289.     2356.     1590.   107608.     1947. ]
 [      1.       88.5  259426.     2325.     1456.   108632.     1948. ]
 [      1.       88.2  258054.     3682.     1616.   109773.     1949. ]
 [      1.       89.5  284599.     3351.     1650.   110929.     1950. ]
 [      1.       96.2  328975.     2099.     3099.   112075.     1951. ]]



Let's assume that the data is heteroskedastic and that we know the nature of the heteroskedasticity. We can then define sigma and use it to give us a GLS model.

First we will obtain the residuals from an OLS fit:

In [3]:
ols_resid = sm.OLS(data.endog, data.exog).fit().resid


Assume that the error terms follow an AR(1) process with a trend:

$\epsilon_i = \beta_0 + \rho\epsilon_{i-1} + \eta_i$

where $\eta_i \sim N(0,\sigma^2)$,

and that $\rho$ is simply the correlation of the residuals. A consistent estimator for $\rho$ is obtained by regressing the residuals on the lagged residuals:

In [4]:
resid_fit = sm.OLS(ols_resid[1:], sm.add_constant(ols_resid[:-1])).fit()
print(resid_fit.tvalues[1])
print(resid_fit.pvalues[1])

-1.43902298399
0.173784447884



While we don't have strong evidence that the errors follow an AR(1) process, we continue:

In [5]:
rho = resid_fit.params[1]


As we know, an AR(1) process means that near-neighbors have a stronger relation, so we can give this structure by using a Toeplitz matrix:

In [6]:
from scipy.linalg import toeplitz

toeplitz(range(5))

Out[6]:
array([[0, 1, 2, 3, 4],
       [1, 0, 1, 2, 3],
       [2, 1, 0, 1, 2],
       [3, 2, 1, 0, 1],
       [4, 3, 2, 1, 0]])

In [7]:
order = toeplitz(range(len(ols_resid)))


so that our error covariance structure is actually rho**order, which defines an autocorrelation structure:

In [8]:
sigma = rho**order
gls_model = sm.GLS(data.endog, data.exog, sigma=sigma)
gls_results = gls_model.fit()


Of course, the exact rho in this instance is not known, so it might make more sense to use feasible GLS, which currently only has experimental support.
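
A minimal, hand-rolled sketch of what such a feasible GLS iteration could look like is shown below: rho is re-estimated from the residuals of each GLS fit, sigma is rebuilt, and the model is refit. The fixed number of iterations and the absence of a convergence check are simplifying assumptions for illustration; this is not the experimental implementation referred to above.

# Illustrative feasible GLS loop (assumption: a handful of iterations is enough for rho to settle)
rho_i = rho
for _ in range(5):
    sigma_i = rho_i**order                      # AR(1)-implied error covariance
    fgls_res = sm.GLS(data.endog, data.exog, sigma=sigma_i).fit()
    resid_i = fgls_res.resid
    # re-estimate rho by regressing residuals on lagged residuals, as above
    rho_i = sm.OLS(resid_i[1:], sm.add_constant(resid_i[:-1])).fit().params[1]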

We can use the GLSAR model with one lag to get a similar result:

In [9]:
glsar_model = sm.GLSAR(data.endog, data.exog, 1)
glsar_results = glsar_model.iterative_fit(1)
print(glsar_results.summary())

                           GLSAR Regression Results
==============================================================================
Dep. Variable:                      y   R-squared:                       0.996
Method:                 Least Squares   F-statistic:                     295.2
Date:                Tue, 02 Dec 2014   Prob (F-statistic):           6.09e-09
Time:                        12:53:58   Log-Likelihood:                -102.04
No. Observations:                  15   AIC:                             218.1
Df Residuals:                       8   BIC:                             223.0
Df Model:                           6
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
const      -3.468e+06   8.72e+05     -3.979      0.004     -5.48e+06 -1.46e+06
x1            34.5568     84.734      0.408      0.694      -160.840   229.953
x2            -0.0343      0.033     -1.047      0.326        -0.110     0.041
x3            -1.9621      0.481     -4.083      0.004        -3.070    -0.854
x4            -1.0020      0.211     -4.740      0.001        -1.489    -0.515
x5            -0.0978      0.225     -0.435      0.675        -0.616     0.421
x6          1823.1829    445.829      4.089      0.003       795.100  2851.266
==============================================================================
Omnibus:                        1.960   Durbin-Watson:                   2.554
Prob(Omnibus):                  0.375   Jarque-Bera (JB):                1.423
Skew:                           0.713   Prob(JB):                        0.491
Kurtosis:                       2.508   Cond. No.                     4.80e+09
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 4.8e+09. This might indicate that there are
strong multicollinearity or other numerical problems.


/home/skipper/.virtualenvs/statsmodels/local/lib/python2.7/site-packages/scipy/stats/stats.py:1205: UserWarning: kurtosistest only valid for n>=20 ... continuing anyway, n=15
int(n))



Comparing gls and glsar results, we see that there are some small differences in the parameter estimates and the resulting standard errors of the parameter estimates. This might be due to numerical differences in the algorithm, e.g. the treatment of initial conditions, because of the small number of observations in the Longley dataset.

In [10]:
print(gls_results.params)
print(glsar_results.params)
print(gls_results.bse)
print(glsar_results.bse)

[-3797854.9015      -12.7656       -0.038        -2.1869       -1.1518
-0.0681     1993.9529]
[-3467960.6325       34.5568       -0.0343       -1.9621       -1.002
-0.0978     1823.1829]
[ 670688.6993      69.4308       0.0262       0.3824       0.1653
0.1764     342.6346]
[ 871584.0517      84.7337       0.0328       0.4805       0.2114
0.2248     445.8287]


