The linearised 3D-Var expression is an OI-type analysis. The analysis, ${x}^{a}$, is a correction of the background field, ${x}^{b}$:

$${x}^{a}={x}^{b}+\delta x$$

The analysis increment, $\delta x$, is a linear function of the innovation, $d$, a vector of length $M$:

$$\delta x=Kd,\phantom{\rule{2em}{0ex}}d\equiv {y}^{o}-{y}^{b}$$

Here ${y}^{o}$ is the observation vector (of length $M$) and ${y}^{b}\equiv H{x}^{b}$ is the background estimate of the observations, obtained by applying the observation operator $H$ to the background, ${x}^{b}$. The $\left(I,M\right)$ gain matrix $K$ admits two equivalent expressions:

$$K=B{H}^{T}{\left(HB{H}^{T}+R\right)}^{-1}$$

$$K={\left({B}^{-1}+{H}^{T}{R}^{-1}H\right)}^{-1}{H}^{T}{R}^{-1}$$

where $B$ is the $\left(I,I\right)$ background error covariance matrix, $R$ is the $\left(M,M\right)$ observation error covariance matrix, and $H$, of order $\left(M,I\right)$, is the Jacobian matrix (evaluated at the background ${x}^{b}$) of the observation operator, which may be non-linear.

That the two expressions are equivalent is easily seen. In fact the relation: $${\left({B}^{-1}+{H}^{T}{R}^{-1}H\right)}^{-1}{H}^{T}{R}^{-1}=B{H}^{T}{\left(HB{H}^{T}+R\right)}^{-1}$$

is equivalent to:$${H}^{T}{R}^{-1}\left(HB{H}^{T}+R\right)=\left({B}^{-1}+{H}^{T}{R}^{-1}H\right)B{H}^{T}$$

which holds identically, both sides being equal to ${H}^{T}{R}^{-1}HB{H}^{T}+{H}^{T}$.
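The equivalence of the two gain expressions can also be checked numerically. A minimal NumPy sketch, with illustrative small dimensions and random symmetric positive-definite covariances (all variable names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
I, M = 6, 4  # illustrative state and observation dimensions

# Random symmetric positive-definite covariances and a random Jacobian
A = rng.standard_normal((I, I)); B = A @ A.T + I * np.eye(I)
C = rng.standard_normal((M, M)); R = C @ C.T + M * np.eye(M)
H = rng.standard_normal((M, I))

# First form: K = B H^T (H B H^T + R)^(-1)
K1 = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
# Second form: K = (B^(-1) + H^T R^(-1) H)^(-1) H^T R^(-1)
Ri = np.linalg.inv(R)
K2 = np.linalg.inv(np.linalg.inv(B) + H.T @ Ri @ H) @ H.T @ Ri

print(np.allclose(K1, K2))  # True
```

Note that the first form inverts an $\left(M,M\right)$ matrix while the second inverts $\left(I,I\right)$ matrices, which is why the two forms behave so differently in the limits discussed below.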

Now, in order to treat the observations as perfect, only the first of the two expressions can be used. By setting $R=0$, it becomes:

$${K}_{perfect\phantom{\rule{6px}{0ex}}obs}=B{H}^{T}{\left(HB{H}^{T}\right)}^{-1}$$

Remark that $HB{H}^{T}$ has order $\left(M,M\right)$ with $M<I$, so it can be invertible (though this is not guaranteed: for example, it is not invertible when $H$ has two equal rows).
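A quick numerical sanity check of this gain with $R=0$: for a generic $H$ of full row rank, $HB{H}^{T}$ is invertible and $HK=I$, i.e. the analysis reproduces the observations exactly. A NumPy sketch (sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
I, M = 6, 4
A = rng.standard_normal((I, I)); B = A @ A.T + I * np.eye(I)
H = rng.standard_normal((M, I))  # generic H: full row rank, so H B H^T is invertible

K = B @ H.T @ np.linalg.inv(H @ B @ H.T)  # gain with R = 0
# H K = I: the analysis fits the (perfect) observations exactly
print(np.allclose(H @ K, np.eye(M)))  # True
```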

In the perspective of the “perfect observation” assumption, in order to look more deeply into the two expressions, it is useful to define a couple of scalars:

$${\sigma}_{o}^{2}\equiv \frac{Tr\left(R\right)}{M}\phantom{\rule{2em}{0ex}}{\sigma}_{b}^{2}\equiv \frac{Tr\left(B\right)}{I}$$

where $Tr$ indicates the trace of the matrix. Then, define the following “tilde” matrices so that:$${\sigma}_{o}^{2}\stackrel{\sim}{R}=R$$$${\sigma}_{b}^{2}\stackrel{\sim}{B}=B$$

The two expressions for the gain then read:$$K=\stackrel{\sim}{B}{H}^{T}{\left[H\stackrel{\sim}{B}{H}^{T}+\frac{{\sigma}_{o}^{2}}{{\sigma}_{b}^{2}}\stackrel{\sim}{R}\right]}^{-1}$$

$$K={\left[\frac{{\sigma}_{o}^{2}}{{\sigma}_{b}^{2}}{\stackrel{\sim}{B}}^{-1}+{H}^{T}{\stackrel{\sim}{R}}^{-1}H\right]}^{-1}{H}^{T}{\stackrel{\sim}{R}}^{-1}$$

Consider now the observations to be perfect *with respect to the background field*. This is obtained by assuming that:

$$\frac{{\sigma}_{o}^{2}}{{\sigma}_{b}^{2}}\ll 1$$

By neglecting ${\sigma}_{o}^{2}/{\sigma}_{b}^{2}$, the first expression readily becomes, as above:$${K}_{perfect\phantom{\rule{6px}{0ex}}obs}=\stackrel{\sim}{B}{H}^{T}{\left(H\stackrel{\sim}{B}{H}^{T}\right)}^{-1}=B{H}^{T}{\left(HB{H}^{T}\right)}^{-1}$$

In the second expression, though:$$\frac{{\sigma}_{o}^{2}}{{\sigma}_{b}^{2}}{\stackrel{\sim}{B}}^{-1}+{H}^{T}{\stackrel{\sim}{R}}^{-1}H\approx {H}^{T}{\stackrel{\sim}{R}}^{-1}H$$

where ${\stackrel{\sim}{R}}^{-1}$ has order $\left(M,M\right)$, but the matrix ${H}^{T}{\stackrel{\sim}{R}}^{-1}H$ has order $\left(I,I\right)$ and rank at most $M<I$. It must therefore be rank-deficient, i.e. non-invertible: the second expression cannot be used for perfect observations.
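The rank argument can be illustrated directly: for $M<I$, ${H}^{T}{\stackrel{\sim}{R}}^{-1}H$ has rank at most $M$ and is therefore singular. A NumPy sketch with a diagonal $\stackrel{\sim}{R}$ for simplicity (sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
I, M = 6, 4  # M < I
H = rng.standard_normal((M, I))
Rt = np.diag(rng.uniform(0.5, 2.0, M))  # a diagonal R-tilde, for simplicity

G = H.T @ np.linalg.inv(Rt) @ H  # order (I, I)
# rank(G) <= rank(H) = M < I, so G is singular and cannot be inverted
print(np.linalg.matrix_rank(G), "<", I)  # 4 < 6
```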

In the general expression, the analysis increment is obtained as a linear combination of $M$ vectors of length $I$, the $M$ columns of the $\left(I,M\right)$ matrix $B{H}^{T}$.

A reduced-order analysis is obtained as a combination of $N$ vectors, with $N<M$, collected in the $N$ columns of the $\left(I,N\right)$ matrix $E$. This is done when the matrix $B$ can be approximated as:

$$B\approx E\Gamma {E}^{T}$$

where the columns of $E$ are supposed to have been normalised, so that the magnitude of the background error is carried by the $\left(N,N\right)$ matrix $\Gamma $, the “background error covariance matrix” in the subspace spanned by the columns of $E$. Again, two equivalent expressions are obtained for the gain matrix:

$${K}_{E}=E\Gamma {\left(HE\right)}^{T}{\left[HE\Gamma {\left(HE\right)}^{T}+R\right]}^{-1}$$

$${K}_{E}=E{\left[{\Gamma}^{-1}+{\left(HE\right)}^{T}{R}^{-1}HE\right]}^{-1}{\left(HE\right)}^{T}{R}^{-1}$$

Note that the matrix $E$ (the $N$-dimensional basis) appears in both expressions on the left. That the two expressions are the same can be seen in a way similar to what was shown above. In fact, the relation:

$${\left[{\Gamma}^{-1}+{\left(HE\right)}^{T}{R}^{-1}HE\right]}^{-1}{\left(HE\right)}^{T}{R}^{-1}=\Gamma {\left(HE\right)}^{T}{\left[HE\Gamma {\left(HE\right)}^{T}+R\right]}^{-1}$$

is equivalent to:

$${\left(HE\right)}^{T}{R}^{-1}\left[HE\Gamma {\left(HE\right)}^{T}+R\right]=\left[{\Gamma}^{-1}+{\left(HE\right)}^{T}{R}^{-1}HE\right]\Gamma {\left(HE\right)}^{T}$$

which again holds identically, both sides being equal to ${\left(HE\right)}^{T}{R}^{-1}HE\Gamma {\left(HE\right)}^{T}+{\left(HE\right)}^{T}$.
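As before, the equivalence of the two reduced-order gains can be verified numerically. A NumPy sketch with hypothetical small dimensions $N<M<I$:

```python
import numpy as np

rng = np.random.default_rng(3)
I, M, N = 8, 5, 3  # N < M < I, all sizes illustrative
H = rng.standard_normal((M, I))
E = rng.standard_normal((I, N))  # the N-dimensional basis
G0 = rng.standard_normal((N, N)); Gam = G0 @ G0.T + N * np.eye(N)
C = rng.standard_normal((M, M)); R = C @ C.T + M * np.eye(M)

HE = H @ E
Ri = np.linalg.inv(R)
K1 = E @ Gam @ HE.T @ np.linalg.inv(HE @ Gam @ HE.T + R)
K2 = E @ np.linalg.inv(np.linalg.inv(Gam) + HE.T @ Ri @ HE) @ HE.T @ Ri
print(np.allclose(K1, K2))  # True
```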

Define, analogously, a scalar ${\gamma}^{2}$ and a “tilde” matrix $\stackrel{\sim}{\Gamma}$ so that ${\gamma}^{2}\stackrel{\sim}{\Gamma}=\Gamma $. The two expressions become:$${K}_{E}=E\stackrel{\sim}{\Gamma}{\left(HE\right)}^{T}{\left[HE\stackrel{\sim}{\Gamma}{\left(HE\right)}^{T}+\frac{{\sigma}_{o}^{2}}{{\gamma}^{2}}\stackrel{\sim}{R}\right]}^{-1}$$

$${K}_{E}=E{\left[\frac{{\sigma}_{o}^{2}}{{\gamma}^{2}}{\stackrel{\sim}{\Gamma}}^{-1}+{\left(HE\right)}^{T}{\stackrel{\sim}{R}}^{-1}HE\right]}^{-1}{\left(HE\right)}^{T}{\stackrel{\sim}{R}}^{-1}$$

By neglecting ${\sigma}_{o}^{2}/{\gamma}^{2}$, the second expression becomes:

$${{K}_{E}}^{perfect\phantom{\rule{6px}{0ex}}obs}=E{\left[{\left(HE\right)}^{T}{\stackrel{\sim}{R}}^{-1}HE\right]}^{-1}{\left(HE\right)}^{T}{\stackrel{\sim}{R}}^{-1}=E{\left[{\left(HE\right)}^{T}{R}^{-1}HE\right]}^{-1}{\left(HE\right)}^{T}{R}^{-1}$$

In the first expression, though, the matrix $HE\stackrel{\sim}{\Gamma}{\left(HE\right)}^{T}$ has order $\left(M,M\right)$ but rank at most $N$, because $\stackrel{\sim}{\Gamma}$ has order $\left(N,N\right)$, with $N<M$: it is rank-deficient and not invertible.

In conclusion, in the reduced-order analysis it is the second expression that has to be used when the case of “perfect” observations is considered.
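The surviving second expression can be read as an ${R}^{-1}$-weighted least-squares fit of the innovation within the span of the columns of $E$: the resulting increment satisfies the normal equations ${\left(HE\right)}^{T}{R}^{-1}\left(d-H\delta x\right)=0$. A NumPy sketch (names and sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
I, M, N = 8, 5, 3
H = rng.standard_normal((M, I))
E = rng.standard_normal((I, N))
Ri = np.diag(1.0 / rng.uniform(0.5, 2.0, M))  # inverse of a diagonal R
d = rng.standard_normal(M)

HE = H @ E
K = E @ np.linalg.inv(HE.T @ Ri @ HE) @ HE.T @ Ri  # perfect-obs reduced gain
dx = K @ d
# The increment satisfies the weighted normal equations:
# (HE)^T R^(-1) (d - H dx) = 0
print(np.allclose(HE.T @ Ri @ (d - H @ dx), 0.0))  # True
```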

Consider finally the simplest case, a single basis vector ($N=1$):

- the matrix $E$ has a single column, the vector $e$ (of length $I$);
- the matrix $\Gamma $ is reduced to a scalar, ${\gamma}^{2}$;
- the matrix $HE=He$ becomes a vector of length $M$, with components ${\eta}_{m}$; ${\left(He\right)}^{T}$ is then a row vector;
- let ${d}_{m}={y}_{m}^{o}-{y}_{m}^{b}$ be the components of the innovation vector $d$ (of length $M$);
- assume $R$, which has order $\left(M,M\right)$, to be diagonal, with diagonal elements ${\sigma}_{m}^{2}$.

The analysis increment, obtained by applying the gain matrix to the innovation, is then a multiple of the vector $e$:

$$\delta x={K}_{E}d=e\frac{{\left(He\right)}^{T}{R}^{-1}d}{\frac{1}{{\gamma}^{2}}+{\left(He\right)}^{T}{R}^{-1}He}=e\frac{{\sum}_{m=1}^{M}\frac{{\eta}_{m}{d}_{m}}{{\sigma}_{m}^{2}}}{\frac{1}{{\gamma}^{2}}+{\sum}_{m=1}^{M}\frac{{\eta}_{m}^{2}}{{\sigma}_{m}^{2}}}$$
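The scalar form above can be checked against the matrix form of ${K}_{E}$ with a one-column $E$. A NumPy sketch (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
I, M = 6, 4
e = rng.standard_normal(I)          # the single basis vector
H = rng.standard_normal((M, I))
gamma2 = 1.5                        # Gamma reduced to a scalar
sig2 = rng.uniform(0.5, 2.0, M)     # diagonal elements of R
d = rng.standard_normal(M)

eta = H @ e                         # components eta_m of He
# Scalar form of the increment
alpha = np.sum(eta * d / sig2) / (1.0 / gamma2 + np.sum(eta**2 / sig2))
dx = alpha * e

# Matrix form K_E with E = e as a single column
E = e[:, None]
Ri = np.diag(1.0 / sig2)
KE = E @ np.linalg.inv(np.array([[1.0 / gamma2]]) + E.T @ H.T @ Ri @ H @ E) @ E.T @ H.T @ Ri
print(np.allclose(dx, KE @ d))  # True
```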

Then $R={\sigma}_{o}^{2}\stackrel{\sim}{R}$, where $\stackrel{\sim}{R}$ is also diagonal, with elements ${\mu}_{m}^{2}\equiv {\sigma}_{m}^{2}/{\sigma}_{o}^{2}$. The analysis increment is:

$$\delta x={K}_{E}d=e\frac{{\left(He\right)}^{T}{\stackrel{\sim}{R}}^{-1}d}{\frac{{\sigma}_{o}^{2}}{{\gamma}^{2}}+{\left(He\right)}^{T}{\stackrel{\sim}{R}}^{-1}He}=e\frac{{\sum}_{m=1}^{M}\frac{{\eta}_{m}{d}_{m}}{{\mu}_{m}^{2}}}{\frac{{\sigma}_{o}^{2}}{{\gamma}^{2}}+{\sum}_{m=1}^{M}\frac{{\eta}_{m}^{2}}{{\mu}_{m}^{2}}}$$

Now neglect ${\sigma}_{o}^{2}/{\gamma}^{2}$ to obtain the analysis increment in the perfect observations case:

$$\delta x=e\frac{{\sum}_{m=1}^{M}\frac{{\eta}_{m}{d}_{m}}{{\mu}_{m}^{2}}}{{\sum}_{m=1}^{M}\frac{{\eta}_{m}^{2}}{{\mu}_{m}^{2}}}=e\frac{{\sum}_{m=1}^{M}\frac{{\eta}_{m}{d}_{m}}{{\sigma}_{m}^{2}}}{{\sum}_{m=1}^{M}\frac{{\eta}_{m}^{2}}{{\sigma}_{m}^{2}}}$$
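In this limit the coefficient of $e$ is exactly the ${\mu}_{m}^{-2}$-weighted least-squares slope fitting ${d}_{m}$ against ${\eta}_{m}$. A NumPy sketch comparing the closed form with `numpy.linalg.lstsq` (values illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
I, M = 6, 4
e = rng.standard_normal(I)
H = rng.standard_normal((M, I))
mu2 = rng.uniform(0.5, 2.0, M)  # diagonal elements of R-tilde
d = rng.standard_normal(M)

eta = H @ e
# Perfect-observation limit: the sigma_o^2 / gamma^2 term is dropped
alpha = np.sum(eta * d / mu2) / np.sum(eta**2 / mu2)

# The same coefficient from an explicit weighted least-squares fit
w = 1.0 / np.sqrt(mu2)
alpha_ls = np.linalg.lstsq((eta * w)[:, None], d * w, rcond=None)[0][0]
print(np.allclose(alpha, alpha_ls))  # True
```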