In signal processing, the H2 and H∞ norms are two of the most important measures of system performance. In this entry, I explain the difference between them from the viewpoint of filter design.
For a stable and causal system F(z), the H2 norm is defined by

||F||_2 = ( (1/2π) ∫_{-π}^{π} |F(e^{jω})|² dω )^{1/2}.

This is the average (or root-mean-square) value of the magnitude of the frequency response of F(z).
On the other hand, the H∞ norm is defined by

||F||_∞ = max_{ω ∈ [-π, π]} |F(e^{jω})|.

This is the maximum value of the magnitude of the frequency response of F(z).
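Both norms are easy to evaluate numerically by sampling the frequency response on a dense grid. As a sketch, take the hypothetical first-order filter F(z) = 1/(1 - 0.5 z^{-1}) (stable and causal), whose norms can also be computed by hand for comparison:

```python
import numpy as np

# Hypothetical example filter F(z) = 1 / (1 - 0.5 z^{-1}) (stable and causal)
w = np.linspace(0, np.pi, 4096, endpoint=False)  # frequency grid on [0, pi)
F = 1.0 / (1.0 - 0.5 * np.exp(-1j * w))          # frequency response F(e^{jw})

# H2 norm: RMS magnitude (|F| is even in w, so the grid on [0, pi) suffices)
h2 = np.sqrt(np.mean(np.abs(F) ** 2))
# Hinf norm: peak magnitude over frequency
hinf = np.max(np.abs(F))

print(h2, hinf)
```

For this filter the peak is at ω = 0, giving ||F||_∞ = 1/(1 - 0.5) = 2, while Parseval's relation gives ||F||_2 = (Σ 0.5^{2n})^{1/2} = (4/3)^{1/2} ≈ 1.155; note the RMS value never exceeds the peak.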
Now, let us consider the following inverse-filtering problem:
Given a filter H(z), find another filter K(z) such that H(z)K(z) ≈ 1.
For this problem, we can use either of the two norms above. With the H2 norm, the problem is formulated as

(H2) Find a stable and causal K(z) that minimizes ||H(z)K(z) - 1||_2.
With the H∞ norm, the problem is formulated as

(Hinf) Find a stable and causal K(z) that minimizes ||H(z)K(z) - 1||_∞.
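When H(z) and K(z) are restricted to FIR filters, the (H2) problem has a particularly simple solution: by Parseval's relation, ||H(z)K(z) - 1||_2 equals the l2 norm of the impulse response of H(z)K(z) - 1, so it reduces to ordinary least squares in the coefficients of K(z). A minimal sketch, with a hypothetical H(z) = 1 + 0.8 z^{-1} chosen only for illustration:

```python
import numpy as np

# Hypothetical FIR example: H(z) = 1 + 0.8 z^{-1} (minimum phase,
# so a good causal FIR inverse exists)
h = np.array([1.0, 0.8])
N = 16  # number of taps of the FIR inverse filter K(z)

# Convolution (Toeplitz) matrix T such that T @ k = impulse response of H(z)K(z)
T = np.zeros((len(h) + N - 1, N))
for i in range(N):
    T[i:i + len(h), i] = h

# Impulse response of the ideal system "1" (a unit impulse)
target = np.zeros(len(h) + N - 1)
target[0] = 1.0

# By Parseval's relation, || H(z)K(z) - 1 ||_2 equals the l2 norm of the
# time-domain error, so the H2-optimal K(z) is the least-squares solution
k, *_ = np.linalg.lstsq(T, target, rcond=None)

residual = np.linalg.norm(T @ k - target)  # = || H(z)K(z) - 1 ||_2
print(residual)
```

The residual is at most 0.8^16 ≈ 0.028 here, since the truncated series expansion of 1/H(z) is one feasible (but not optimal) choice of K(z). The (Hinf) problem has no such least-squares shortcut; it is a min-max problem and is usually solved by convex optimization or H∞ synthesis methods.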
The difference is that (H2) tries to minimize the average magnitude of the error system H(z)K(z)-1 while (Hinf) tries to minimize the maximum magnitude. The following picture shows an example of the two designs.
The blue line is the magnitude of the frequency response of the error H(z)K(z)-1 for the H2 design. The error is very small at almost all frequencies but very large around one frequency, say f0. On the other hand, the red line is the error magnitude for the H∞ design, which is uniformly small, since H∞ optimization is a min-max optimization that tries to make the response as flat as possible. The average error of the H2 design is smaller than that of the H∞ design, while the maximum error of the H∞ design is smaller than that of the H2 design.
This picture makes the difference clear.
The H2-designed filter shows very good performance at almost all frequencies but is fragile at frequencies around f0. Hence, H2 is the better choice if it is quite certain that the input signals contain no frequency components around f0. On the other hand, the H∞-designed filter guarantees a certain error level at all frequencies. In other words, H∞ optimization leads to robustness against uncertainty in the frequency content of the input signals.
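The fragility of the H2 design can be reproduced numerically. As a minimal sketch, take the hypothetical filter H(z) = 1 - z^{-1}: its zero lies on the unit circle at f0 = 0 (DC), so no causal K(z) can make the error small there, and the H2-optimal design concentrates a large error peak at f0 while keeping the average small:

```python
import numpy as np

# Hypothetical H(z) = 1 - z^{-1}: a zero on the unit circle at f0 = 0 (DC)
h = np.array([1.0, -1.0])
N = 32  # number of taps of the FIR inverse filter K(z)

# H2-optimal K(z) via least squares on the time-domain error (Parseval)
T = np.zeros((len(h) + N - 1, N))
for i in range(N):
    T[i:i + len(h), i] = h
target = np.zeros(len(h) + N - 1)
target[0] = 1.0
k, *_ = np.linalg.lstsq(T, target, rcond=None)

# Error spectrum E(w) = H(e^{jw}) K(e^{jw}) - 1 on a dense frequency grid
err = np.convolve(h, k)
err[0] -= 1.0  # impulse response of H(z)K(z) - 1
w = np.linspace(0, np.pi, 4096, endpoint=False)
E = np.abs(np.exp(-1j * np.outer(w, np.arange(len(err)))) @ err)

rms, peak = np.sqrt(np.mean(E ** 2)), np.max(E)
print(rms, peak)  # the peak (at f0 = 0) is several times the average
```

Here |E(f0)| = 1 exactly, because H vanishes at DC, while the RMS error is only 1/√(N+1) ≈ 0.17: exactly the blue-line behavior in the picture. An H∞ design for the same H(z) would instead spread the error evenly across frequencies.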