Some time ago (here, here and here) I posted about the Kalman filter, and recently I have been looking at Kalman filters again because of this Trend Without Hiccups paper hosted at SSRN. I also came across this Estimation Lecture paper, which provides MATLAB code for testing Kalman filters; my Octave-compatible version of this code is shown in the code box below.
clear all ;
1 ; % dummy statement so Octave treats this file as a script; function declarations follow
function [ x_pred , p_pred ] = predict( x , P , F , Q )
x_pred = F * x ;
p_pred = F * P *F' + Q ;
endfunction
function [ nu , S ] = innovation( x_pred , p_pred , z , H , R )
nu = z - H * x_pred ; % innovation
S = H * p_pred * H' + R ; % innovation covariance
endfunction
function [ x_new , p_new ] = innovation_update( x_pred , p_pred , nu , S , H )
K = p_pred * H' / S ; % Kalman gain
x_new = x_pred + K * nu ; % new state
p_new = p_pred - K * S * K' ; % new covariance
endfunction
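% For reference, the three functions above implement the standard Kalman recursion:
%   prediction:        x_pred = F * x ,         p_pred = F * P * F' + Q
%   innovation:        nu = z - H * x_pred ,    S = H * p_pred * H' + R
%   innovation update: K = p_pred * H' / S ,    x_new = x_pred + K * nu ,
%                      p_new = p_pred - K * S * K'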
pkg load signal ; % for xcorr function
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Script to generate some "true" constant velocity data for later assessment
%%% Generates:
%%% x: the state history which evolves according to
%%% x(k+1) = Fx(k) + w(k)
%%% w: the process noise history (randomly generated)
%%% z: a set of observations on the state corrupted by noise
%%% v: the noise on each observation (randomly generated)
N = 100 ;
delT = 1 ;
F = [ 1 delT ;
0 1 ] ;
H = [ 1 0 ] ;
% process and measurement noise variances
sigma2Q = 0.01 ;
sigma2R = 0.1 ;
% process covariance matrix
Q = sigma2Q * [ delT^3/3 delT^2/2 ;
delT^2/2 delT ] ; % standard discretised constant velocity form (identical to delT/3 etc. for delT = 1)
P = Q ;
R = sigma2R * [ 1 ] ;
x = zeros( 2 , N ) ;
w = zeros( 2 , N ) ;
z = zeros( 1 , N ) ;
v = zeros( 1 , N ) ;
for ii = 2 : N
w( : , ii ) = chol( Q )' * randn( 2 , 1 ) ; % draw process noise with covariance Q
x( : , ii ) = F * x( : , ii - 1 ) + w( : , ii ) ; % update state
v( : , ii ) = sqrt( R ) * randn( 1 ) ; % draw measurement noise with variance R
z( : , ii ) = H * x( : , ii ) + v( : , ii ) ; % noise corrupted measurement of the state
endfor
% visualise data
%figure( 1 ) ; plot( x( 1 , : ) , 'k' , 'linewidth' , 2 ) ;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Script to generate some "true" constant acceleration data for later assessment
%%% Generates:
%%% x: the state history which evolves according to
%%% x(k+1) = Fx(k) + w(k)
%%% w: the process noise history (randomly generated)
%%% z: a set of observations on the state corrupted by noise
%%% v: the noise on each observation (randomly generated)
N = 100 ;
delT = 1 ;
F = [ 1 delT delT^2/2 ;
0 1 delT ;
0 0 1 ] ;
H = [ 1 0 0 ] ;
% process and measurement noise variances
sigma2Q = 0.01 ;
sigma2R = 0.1 ;
% process covariance matrix
Q = sigma2Q * [ delT^5/20 delT^4/8 delT^3/6 ;
delT^4/8 delT^3/3 delT^2/2 ;
delT^3/6 delT^2/2 delT ] ; % standard discretised constant acceleration form (identical to delT/20 etc. for delT = 1)
P = Q ;
R = sigma2R * [ 1 ] ;
x = zeros( 3 , N ) ;
w = zeros( 3 , N ) ;
z = zeros( 1 , N ) ;
v = zeros( 1 , N ) ;
for ii = 2 : N
w( : , ii ) = chol( Q )' * randn( 3 , 1 ) ; % draw process noise with covariance Q
x( : , ii ) = F * x( : , ii - 1 ) + w( : , ii ) ; % update state
v( : , ii ) = sqrt( R ) * randn( 1 ) ; % draw measurement noise with variance R
z( : , ii ) = H * x( : , ii ) + v( : , ii ) ; % noise corrupted measurement of the state
endfor
% visualise data
%figure( 1 ) ; plot( x( 1 , : ) , 'k' , 'linewidth' , 2 ) ;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Octave script to assess Kalman filter performance
%%% The script assumes the existence of a vector z of
%%% noise corrupted observations
N = length( z ) ; % number of Kalman filter iterations
Qfactor = 1 ; % process noise mult factor
Rfactor = 1 ; % measurement noise mult factor
delT = 1 ; % time step
F = [ 1 delT ;
0 1 ] ; % update matrix
H = [ 1 0 ] ; % measurement matrix
sigmaQ = Qfactor * sqrt( 0.01 ) ;
sigmaR = Rfactor * sqrt( 0.1 ) ;
Q = sigmaQ^2 * [ 1/3 1/2 ;
1/2 1 ] ; % process noise covariance matrix (constant velocity form with delT = 1)
P = 10 * Q ;
R = sigmaR^2 * [ 1 ] ; % measurement noise covariance
% allocate space prior to loop
xhat = zeros( 2 , N ) ; % state estimate
nu = zeros( 1 , N ) ; % innovation
S = zeros( 1 , N ) ; % innovation (co)variance
q = zeros( 1 , N ) ; % normalised innovation squared
for ii = 2 : N
% predict using the state transition matrix F, the previous state estimate and the
% covariance matrix P updated in the previous iteration of the loop
% x_pred is the predicted state vector
% p_pred is the predicted covariance matrix given the previous P and the process noise Q
[ x_pred , p_pred ] = predict( xhat( : , ii - 1 ) , P , F , Q ) ;
% measurement
% "nu" is the innovation: the difference between the actual measurement z( ii ) and the
% measurement predicted from x_pred via the measurement matrix H
% "S" is the innovation covariance: the predicted variance of "nu", i.e. the uncertainty
% of the predicted measurement ( H * p_pred * H' ) plus the measurement noise R
[ nu( : , ii ) , S( : , ii ) ] = innovation( x_pred , p_pred , z( ii ) , H , R ) ;
% update step updates the state estimates and covariance matrix P using the Kalman gain,
% which is internally calculated in the "innovation_update" function
[ xhat( : , ii ) , P ] = innovation_update( x_pred , p_pred , nu( : , ii ) , S( : , ii ) , H ) ;
% q is just a record keeping vector for later analysis of normalised innovation squared
q( : , ii ) = nu( : , ii )' * inv( S( : , ii ) ) * nu( : , ii ) ;
endfor
sumQ = sum( q ) ; % sum of normalised innovations squared, which should be Chi^2 distributed on N d.o.f.
r = xcorr( nu ) ; % get autocorrelation of innovation
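% (added sketch) consistency check on sumQ: for a well tuned filter sumQ should fall
% within the central 95% interval of a Chi^2 distribution on N degrees of freedom.
% chi2inv is assumed to be available from the Octave statistics package.
pkg load statistics ;
chi2_lo = chi2inv( 0.025 , N ) ;
chi2_hi = chi2inv( 0.975 , N ) ;
printf( 'Sum of normalised innovations squared = %.2f, 95%% Chi^2 bounds = [ %.2f , %.2f ]\n' , sumQ , chi2_lo , chi2_hi ) ;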
% plot state and state estimate
subplot( 2 , 2 , 1 ) ; plot( x( 1 , : ) , 'k' , 'linewidth' , 2 , xhat( 1 , : ) , 'r' , 'linewidth' , 2 ) ;
title( 'State and State Estimate' ) ; legend( 'State' , 'State Estimate' ) ;
% plot innovation and 2sigma confidence interval
subplot( 2 , 2 , 2 ) ; plot( nu , 'b' , 'linewidth' , 2 , 2 * sqrt( S ) , 'r' , -2 * sqrt( S ) , 'r' , ...
zeros(1,N) , 'r.' , 'linewidth' , 1 ) ;
title( 'Innovation and 2sigma confidence intervals' ) ; legend( 'Innovation' , '2Sigma Levels' ) ;
% plot normalised innovation squared
subplot( 2 , 2 , 3 ) ; plot( q , 'k' , 'linewidth' , 2 , mean( q ) .* ones( 1 , N ) , 'r' , 'linewidth' , 2 ) ;
title( 'Normalised innovation squared' ) ;
% plot autocorrelation of innovation (normalised)
subplot( 2 , 2 , 4 ) ; plot( r( N : 2 * N - 1 ) / r( N ) , 'k' , 'linewidth' , 2 , zeros(1,N) , 'k.' , 'linewidth' , 1 ) ;
title( 'Autocorrelation of innovation (normalised)' ) ;
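% (added sketch) simple whiteness check on the innovation sequence: for a well tuned
% filter roughly 95% of the normalised autocorrelation values at non-zero lags should
% lie within the approximate two sigma bounds of +/- 2/sqrt(N)
rho = r( N + 1 : 2 * N - 1 ) / r( N ) ; % normalised autocorrelation at lags 1 to N-1
outside = sum( abs( rho ) > 2 / sqrt( N ) ) ;
printf( 'Whiteness check: %d of %d autocorrelation values outside +/- 2/sqrt(N)\n' , outside , numel( rho ) ) ;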
Soon I shall be using this code, perhaps with some additions, to test various kinematic model implementations of Kalman filters on financial time series, with a view to identifying which models are suitable and which are not.
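As a rough sketch of how that might look (the file name and the use of daily closing prices below are purely illustrative assumptions on my part), the idea is simply to replace the synthetic observations z with a real price series and then run the assessment script above unchanged:
% hypothetical example: use a vector of daily closing prices as the observations z
% "prices.csv" is a placeholder file name, assumed to contain one closing price per line
prices = csvread( 'prices.csv' ) ;
z = prices( : )' ; % row vector of noise corrupted "measurements" of the underlying price level
% now run the Kalman filter assessment script above, which only assumes the existence of z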
More in the near future.