Perfect observations in a reduced-order 3D-Var/OI analysis
Introduction: 3D-Var/OI analysis
The linearised 3D-Var expression is an OI-type analysis. The analysis, $x^a$, is a correction of the background field, $x^b$:
$$ x^a = x^b + \delta x $$
where all vectors have length $I$. The number of observations is $M$, with:
$$ M < I $$
The analysis increment, $\delta x$, is a linear function of the innovation, $d$, a vector of length $M$:
$$ \delta x = K d $$
where $K$ is the gain matrix, of order $(I,M)$, and
$$ d = y^o - H x^b $$
Here $y^o$ is the observation vector (of length $M$) and $y^b \equiv H x^b$ is the background estimate of the observations, obtained by applying the observation operator $H$ to the background, $x^b$.
The gain matrix has two equivalent expressions:
$$ K = B H^T \left( H B H^T + R \right)^{-1} \qquad K = \left( B^{-1} + H^T R^{-1} H \right)^{-1} H^T R^{-1} $$
where $B$ is the $(I,I)$ background error covariance matrix, $R$ is the $(M,M)$ observation error covariance matrix, and $H$, of order $(M,I)$, is the Jacobian matrix (evaluated at the background $x^b$) of the function $H$ (which may be non-linear).
That the two expressions are equivalent is easily seen. In fact the relation:
$$ \left( B^{-1} + H^T R^{-1} H \right)^{-1} H^T R^{-1} = B H^T \left( H B H^T + R \right)^{-1} $$
is equivalent to:
$$ H^T R^{-1} \left( H B H^T + R \right) = \left( B^{-1} + H^T R^{-1} H \right) B H^T $$
which is an identity.
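The identity is easy to verify numerically as well; below is a minimal sketch with NumPy, using random well-conditioned matrices (all dimensions and variable names are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
I, M = 8, 5  # state dimension and number of observations, M < I

# Random symmetric positive-definite B (I,I) and R (M,M)
A = rng.standard_normal((I, I)); B = A @ A.T + I * np.eye(I)
C = rng.standard_normal((M, M)); R = C @ C.T + M * np.eye(M)
H = rng.standard_normal((M, I))  # linearised observation operator

# First expression: K = B H^T (H B H^T + R)^{-1}
K1 = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)

# Second expression: K = (B^{-1} + H^T R^{-1} H)^{-1} H^T R^{-1}
Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
K2 = np.linalg.inv(Binv + H.T @ Rinv @ H) @ H.T @ Rinv

print(np.allclose(K1, K2))  # True: the two gains coincide
```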
Now, in order to assume that observations are perfect, only the first of the two expressions can be used. By setting $R = 0$, it becomes:
$$ K_{\mathrm{perfect\ obs}} = B H^T \left( H B H^T \right)^{-1} $$
Note that $H B H^T$ has order $(M,M)$ with $M < I$, so it can be invertible (though this is not guaranteed: for example, it is not invertible when $H$ has two equal rows).
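When $H B H^T$ is indeed invertible, the perfect-observation gain satisfies $H K_{\mathrm{perfect\ obs}} = I_M$, so the analysis reproduces the observations exactly. A numerical sketch (illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(1)
I, M = 8, 5  # illustrative dimensions, M < I
A = rng.standard_normal((I, I)); B = A @ A.T + I * np.eye(I)
H = rng.standard_normal((M, I))  # generic H, so H B H^T is invertible

# Perfect-observation gain: K = B H^T (H B H^T)^{-1}
K_po = B @ H.T @ np.linalg.inv(H @ B @ H.T)

# H K = I_M, hence H x^a = H x^b + H K d = y^o: observations fitted exactly
print(np.allclose(H @ K_po, np.eye(M)))  # True
```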
 
To examine the two expressions more closely under the “perfect observation” assumption, it is useful to define two scalars:
$$ \sigma_o^2 \equiv \frac{1}{M} \operatorname{Tr} R \qquad \sigma_b^2 \equiv \frac{1}{I} \operatorname{Tr} B $$
where $\operatorname{Tr}$ indicates the trace of the matrix. Then, define the following “tilde” matrices so that:
$$ \sigma_o^2 \tilde{R} = R \qquad \sigma_b^2 \tilde{B} = B $$
The two gain matrix expressions become:
$$ K = \tilde{B} H^T \left[ H \tilde{B} H^T + \frac{\sigma_o^2}{\sigma_b^2} \tilde{R} \right]^{-1} \qquad K = \left[ \frac{\sigma_o^2}{\sigma_b^2} \tilde{B}^{-1} + H^T \tilde{R}^{-1} H \right]^{-1} H^T \tilde{R}^{-1} $$
Consider now the observations to be perfect with respect to the background field. This is obtained by assuming that:
$$ \frac{\sigma_o^2}{\sigma_b^2} \ll 1 $$
By neglecting $\sigma_o^2 / \sigma_b^2$, the first expression readily becomes, as above:
$$ K_{\mathrm{perfect\ obs}} = \tilde{B} H^T \left( H \tilde{B} H^T \right)^{-1} = B H^T \left( H B H^T \right)^{-1} $$
In the second expression, though:
$$ \frac{\sigma_o^2}{\sigma_b^2} \tilde{B}^{-1} + H^T \tilde{R}^{-1} H \longrightarrow H^T \tilde{R}^{-1} H $$
where $\tilde{R}^{-1}$ has order $(M,M)$, but the matrix $H^T \tilde{R}^{-1} H$ has order $(I,I)$, so its rank is at most $M$: because $M < I$, it must be rank-deficient, i.e. non-invertible. The second expression cannot be used for perfect observations.
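The rank deficiency is easy to observe numerically: with $M < I$, the $(I,I)$ matrix $H^T \tilde{R}^{-1} H$ has rank at most $M$. An illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
I, M = 8, 5  # illustrative dimensions, M < I
H = rng.standard_normal((M, I))
C = rng.standard_normal((M, M))
Rt = C @ C.T + M * np.eye(M)  # full-rank (M,M) "tilde R"

G = H.T @ np.linalg.inv(Rt) @ H  # (I,I), but rank at most M
print(np.linalg.matrix_rank(G))  # 5: rank M, not I, so G is singular
```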
 
Reduced-order
In the general expression, the analysis is obtained as a linear combination of $M$ vectors of length $I$: the $M$ columns of the $(I,M)$ matrix $B H^T$.
A reduced-order analysis is obtained as a combination of $N$ vectors, with $N < M$, collected in the $N$ columns of the $(I,N)$ matrix $E$. This is done when the matrix $B$ can be approximated as:
$$ B \simeq E \Gamma E^T $$
where the columns of $E$ are supposed to have been normalised, so that the magnitude of the background error is carried by the $(N,N)$ matrix $\Gamma$, the “background error covariance matrix” in the subspace spanned by the columns of $E$. Again, two equivalent expressions are obtained for the gain matrix:
$$ K_E = E \Gamma (HE)^T \left[ HE \, \Gamma (HE)^T + R \right]^{-1} $$
$$ K_E = E \left[ \Gamma^{-1} + (HE)^T R^{-1} HE \right]^{-1} (HE)^T R^{-1} $$
Note that the matrix E (the N -dimensional basis) appears in both expressions on the left. That the two expressions are the same can be seen in a way similar to what was shown above. In fact, the relation:
$$ \left[ \Gamma^{-1} + (HE)^T R^{-1} HE \right]^{-1} (HE)^T R^{-1} = \Gamma (HE)^T \left[ HE \, \Gamma (HE)^T + R \right]^{-1} $$
is equivalent to:
$$ (HE)^T R^{-1} \left[ HE \, \Gamma (HE)^T + R \right] = \left[ \Gamma^{-1} + (HE)^T R^{-1} HE \right] \Gamma (HE)^T $$
which again is an identity.
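As before, the equivalence can be checked numerically; a sketch with illustrative dimensions (`Idim` is used for the state dimension to avoid confusion with an identity matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
Idim, M, N = 8, 5, 3  # illustrative: N < M < Idim
E = rng.standard_normal((Idim, N))
E /= np.linalg.norm(E, axis=0)  # normalised columns
G = rng.standard_normal((N, N)); Gam = G @ G.T + N * np.eye(N)
C = rng.standard_normal((M, M)); R = C @ C.T + M * np.eye(M)
H = rng.standard_normal((M, Idim))
HE = H @ E  # (M,N)

# First expression: K_E = E Gam (HE)^T [HE Gam (HE)^T + R]^{-1}
K1 = E @ Gam @ HE.T @ np.linalg.inv(HE @ Gam @ HE.T + R)

# Second expression: K_E = E [Gam^{-1} + (HE)^T R^{-1} HE]^{-1} (HE)^T R^{-1}
Rinv = np.linalg.inv(R)
K2 = E @ np.linalg.inv(np.linalg.inv(Gam) + HE.T @ Rinv @ HE) @ HE.T @ Rinv

print(np.allclose(K1, K2))  # True
```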
Now, use again:
$$ R = \sigma_o^2 \tilde{R} $$
and use the scalar $\gamma^2$, defined as:
$$ \gamma^2 \equiv \frac{1}{N} \operatorname{Tr} \Gamma $$
so that:
$$ \gamma^2 \tilde{\Gamma} = \Gamma $$
The two expressions become:
$$ K_E = E \tilde{\Gamma} (HE)^T \left[ HE \, \tilde{\Gamma} (HE)^T + \frac{\sigma_o^2}{\gamma^2} \tilde{R} \right]^{-1} $$
$$ K_E = E \left[ \frac{\sigma_o^2}{\gamma^2} \tilde{\Gamma}^{-1} + (HE)^T \tilde{R}^{-1} HE \right]^{-1} (HE)^T \tilde{R}^{-1} $$
To treat the observations as perfect with respect to the background field, assume:
$$ \frac{\sigma_o^2}{\gamma^2} \ll 1 $$
Since $N < M$, neglecting $\sigma_o^2 / \gamma^2$ in the second expression is not a problem: the matrix $(HE)^T \tilde{R}^{-1} HE$ has order $(N,N)$ and is generically invertible. The result is:
$$ K_{E,\mathrm{perfect\ obs}} = E \left[ (HE)^T \tilde{R}^{-1} HE \right]^{-1} (HE)^T \tilde{R}^{-1} = E \left[ (HE)^T R^{-1} HE \right]^{-1} (HE)^T R^{-1} $$
(the factors $\sigma_o^2$ cancel, so $\tilde{R}$ and $R$ give the same gain).
In the first expression, though, the matrix $HE \, \tilde{\Gamma} (HE)^T$ has order $(M,M)$ but, since $\tilde{\Gamma}$ has order $(N,N)$ with $N < M$, its rank is at most $N$: it is rank-deficient and not invertible.
In conclusion, in the reduced-order analysis, it is the second expression that has to be used when the case of “perfect” observations is considered.
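A numerical sketch of this conclusion: the $(M,M)$ matrix $HE \, \Gamma (HE)^T$ is singular, while the second expression remains well defined; moreover, applied to an innovation of the form $d = HE \, c$, the perfect-observation gain recovers the increment $E c$ (names and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
Idim, M, N = 8, 5, 3  # illustrative: N < M < Idim
E = rng.standard_normal((Idim, N)); E /= np.linalg.norm(E, axis=0)
G = rng.standard_normal((N, N)); Gam = G @ G.T + N * np.eye(N)
C = rng.standard_normal((M, M)); R = C @ C.T + M * np.eye(M)
H = rng.standard_normal((M, Idim)); HE = H @ E

# First expression fails: HE Gam (HE)^T is (M,M) of rank N < M
print(np.linalg.matrix_rank(HE @ Gam @ HE.T))  # 3

# Second expression works: (HE)^T R^{-1} HE is (N,N) and invertible
Rinv = np.linalg.inv(R)
K_po = E @ np.linalg.inv(HE.T @ Rinv @ HE) @ HE.T @ Rinv

# Sanity check: innovations representable in the subspace, d = HE c,
# are mapped back to the increment E c
c = rng.standard_normal(N)
print(np.allclose(K_po @ (HE @ c), E @ c))  # True
```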
 
Illustrative case N=1
When $M > N = 1$, the matrix $E$ reduces to a single normalised column vector, $e$, and $\Gamma$ reduces to the scalar $\gamma^2$. Assume also that $R$ is diagonal, with diagonal elements $\sigma_m^2$, and denote by $\eta_m$ the components of the vector $H e$.

The analysis increment, obtained by applying the gain matrix to the innovation, is then a multiple of the vector $e$:
$$ \delta x = K_E d = e \, \frac{(He)^T R^{-1} d}{\frac{1}{\gamma^2} + (He)^T R^{-1} He} = e \, \frac{\sum_{m=1}^{M} \frac{\eta_m d_m}{\sigma_m^2}}{\frac{1}{\gamma^2} + \sum_{m=1}^{M} \frac{\eta_m^2}{\sigma_m^2}} $$
Its expression for “perfect” observations is obtained by defining:
$$ \sigma_o^2 = \frac{1}{M} \sum_{m=1}^{M} \sigma_m^2 $$
Then $R = \sigma_o^2 \tilde{R}$, where $\tilde{R}$ also is diagonal, with elements $\mu_m^2 \equiv \sigma_m^2 / \sigma_o^2$. The analysis increment is:
$$ \delta x = K_E d = e \, \frac{(He)^T \tilde{R}^{-1} d}{\frac{\sigma_o^2}{\gamma^2} + (He)^T \tilde{R}^{-1} He} = e \, \frac{\sum_{m=1}^{M} \frac{\eta_m d_m}{\mu_m^2}}{\frac{\sigma_o^2}{\gamma^2} + \sum_{m=1}^{M} \frac{\eta_m^2}{\mu_m^2}} $$
Now neglect $\sigma_o^2 / \gamma^2$ to obtain the analysis increment in the perfect-observations case:
$$ \delta x = K_E d = e \, \frac{(He)^T \tilde{R}^{-1} d}{(He)^T \tilde{R}^{-1} He} = e \, \frac{\sum_{m=1}^{M} \frac{\eta_m d_m}{\mu_m^2}}{\sum_{m=1}^{M} \frac{\eta_m^2}{\mu_m^2}} $$
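The two writings of these scalar formulas can be cross-checked numerically: using $\mu_m^2 = \sigma_m^2 / \sigma_o^2$ must give the same increment as using $\sigma_m^2$ directly. A sketch with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(5)
Idim, M = 8, 5  # illustrative dimensions
e = rng.standard_normal(Idim); e /= np.linalg.norm(e)  # normalised basis vector
H = rng.standard_normal((M, Idim))
eta = H @ e                          # eta_m: components of H e
sig2 = rng.uniform(0.5, 2.0, M)      # sigma_m^2: diagonal of R
d = rng.standard_normal(M)           # innovation
gam2 = 1.7                           # gamma^2 (arbitrary illustrative value)

# Increment written with sigma_m^2
dx = e * np.sum(eta * d / sig2) / (1.0 / gam2 + np.sum(eta**2 / sig2))

# Same increment written with mu_m^2 = sigma_m^2 / sigma_o^2
sig_o2 = sig2.mean()
mu2 = sig2 / sig_o2
dx2 = e * np.sum(eta * d / mu2) / (sig_o2 / gam2 + np.sum(eta**2 / mu2))
print(np.allclose(dx, dx2))  # True

# Perfect-observation limit: drop sigma_o^2 / gamma^2 from the denominator
dx_po = e * np.sum(eta * d / mu2) / np.sum(eta**2 / mu2)
```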
 
Creative Commons License. Francesco Uboldi 2014, 2015, 2016, 2017