[SciPy-user] ROLS

John Hassler hasslerjc at comcast.net
Tue Oct 2 17:35:37 EDT 2007


The question on recursive ordinary least squares (ROLS) estimation 
caught my interest because I used to teach a course in adaptive 
control.  At the time (mid '90s), we were using Matlab (since I'd never 
heard of Python).  I found some of the old programs I'd used in class 
and translated them to Python.  Here is one, if anyone is interested.
This ROLS program is very simple, even in Fortran.  It uses the Matrix 
Inversion Lemma (MIL) to calculate the update to the covariance matrix.
The MIL update is known to be somewhat unstable numerically, but it 
works 'well enough' for many problems.  The Bierman UD or Givens 
square-root methods are more stable, but somewhat more complicated to 
program.  For class examples, I usually used the simplest method.
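
In the notation of the rolsf code below, one MIL step with regressor x,
measurement y, covariance P, estimate th, and forgetting factor lam is

    g  = 1/(x'Px + lam)
    k  = g*P*x                     (gain vector)
    th = th + k*(y - x'th)         (estimate update)
    P  = (P - g*(Px)(Px)')/lam     (covariance update)

so no explicit matrix inversion is ever done; the MIL reduces it to
dividing by the scalar x'Px + lam.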

From ROLS, it's easy to move on to Approximate Maximum Likelihood 
(AML), and not too difficult to get to the Kalman filter.
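
To make that connection concrete, here is a minimal sketch (mine, not
part of the old class programs) of the same update written as a Kalman
filter for the parameter vector: the regressor plays the role of the
measurement matrix, and the forgetting factor is replaced by additive
process noise q.  The names kalman_param_step, q, and r are just
illustrative.

import numpy as nu

def kalman_param_step(x, y, p, th, r=1.0, q=0.0):
    # Random-walk parameter model:
    #   th(k) = th(k-1) + w,    w ~ N(0, q*I)   (state equation)
    #   y(k)  = x(k)'th(k) + v, v ~ N(0, r)     (measurement equation)
    p = p + q*nu.eye(len(th))          # time update: add process noise
    a = nu.inner(p, x)                 # P*x
    g = 1./(nu.inner(x, a) + r)        # 1/(innovation variance)
    k = g*a                            # Kalman gain
    th = th + k*(y - nu.inner(x, th))  # measurement update of estimate
    p = p - g*nu.outer(a, a)           # covariance measurement update
    return th, p

With q = 0 and r = 1, this reduces exactly to rolsf with lam = 1.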

Here is a simple ROLS with a forgetting factor, and a driver program to 
play with.
john

# +++++++++++++++++++++++++++++++++++++++++++
#                rolsftest.py
#    j. c. hassler
#    12-feb-95
#    Originally written in Matlab.  Translated to Python 2-oct-07.
#      This is a simple driver program to test recursive OLS with a
#    forgetting factor, rolsf.  The process is an ARMA model driven by
#    a random input.  The parameters are estimated by ROLSF.
#
# ------------------------------------------
import numpy as nu
import random
from rolsf import rolsf
from pylab import plot, grid, show

tht = [-.50, .25, 1., .25]    # true parameters
p = 1000.*nu.eye(4)           # initialize covariance matrix
# large values because we have very low confidence in the initial guess.
th = [0., 0., 0., 0.]         # initial guesses at parameters
lam = 0.95                    # forgetting factor
xt = [0., 0., 0., 0.]         # regressor for the truth model
x = [0., 0., 0., 0.]          # regressor for the design (estimator) model
a = nu.zeros(200, float)      # history of th[0] estimates, for plotting
b = nu.zeros(200, float)      # history of th[1] estimates
indx = nu.zeros(200, int)     # sample index, for the x-axis
for i in range(200):
    if i > 100:               # change one of the parameters
        tht[0] = -.40         # .. on the fly.
    # the point of the forgetting factor is to make the estimator
    # responsive to such changes.
    e = 0.02*(.5 - random.random())  # random 'noise'
    u = 1.*(.5 - random.random())    # random forcing function
    yt = nu.inner(xt, tht)           # truth model

    xt[1] = xt[0]            # stuff in the new true y ...
    xt[0] = yt
    xt[3] = xt[2]            # ... and the new input
    xt[2] = u
    y = yt + e               # add 'measurement noise' to the true value
    th, p = rolsf(x, y, p, th, lam)  # call recursive OLS with FF
    x[1] = x[0]              # stuff in the new measured y ...
    x[0] = y
    x[3] = x[2]              # ... and the new input to the design model
    x[2] = u
    a[i] = th[0]             # save for later plotting
    b[i] = th[1]
    indx[i] = i
plot(indx, a, indx, b)
grid()
show()
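# If it runs correctly, the plotted estimate a should settle near the
# true value of -0.5 and then move toward -0.4 after sample 100, as the
# forgetting factor lets the estimator track the change, while b
# hovers around 0.25.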

# +++++++++++++++++++++++++++++++++
#                rolsf.py
#    j. c. hassler
#    12-feb-95
#    Originally written in Matlab as function [th,p] = rolsf(x,y,p,th,lam).
#    Translated to Python 2-oct-07.
# -------------------------
import numpy as nu

def rolsf(x, y, p, th, lam):
    """Recursive ordinary least squares for the single-output case,
    with a forgetting factor, using the Matrix Inversion Lemma method.

    Enter with x(N) = input (regressor), y = output, p(N,N) = covariance,
    th(N) = estimate, lam = forgetting factor.  Returns the updated
    estimate th and covariance p.
    """
    a = nu.inner(p, x)                 # a = P*x
    g = 1./(nu.inner(x, a) + lam)      # scalar 'inversion' from the MIL
    k = g*a                            # gain vector
    e = y - nu.inner(x, th)            # prediction error
    th = th + k*e                      # update the estimate
    p = (p - g*nu.outer(a, a))/lam     # update and inflate the covariance
    return th, p
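
# +++++++++++++++++++++++++++++++++
#   A quick sanity check (my addition, not part of the original class
#   code): with lam = 1 and constant parameters, the recursive estimate
#   should converge to the batch least-squares answer from
#   numpy.linalg.lstsq, up to a small bias from the initial covariance.
#   The synthetic regressors here are just for illustration.
# -------------------------
import numpy as nu
from rolsf import rolsf

nu.random.seed(0)
N = 500
tht = nu.array([-0.5, 0.25, 1.0, 0.25])     # true parameters
X = nu.random.randn(N, 4)                   # synthetic regressors
y = X.dot(tht) + 0.01*nu.random.randn(N)    # noisy measurements

th = nu.zeros(4)
p = 1000.*nu.eye(4)
for i in range(N):
    th, p = rolsf(X[i], y[i], p, th, 1.0)   # lam = 1: no forgetting

print(th)                                   # recursive estimate
print(nu.linalg.lstsq(X, y, rcond=None)[0]) # batch OLS, for comparison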
   


