
Tutorial: Interpolatory model order reduction for LTI systems

Here we discuss interpolatory model order reduction methods (also known as: moment matching, Krylov subspace methods, and Padé approximation) for LTI systems. In Tutorial: Linear time-invariant systems, we discussed transfer functions of LTI systems, which give a frequency-domain description of the input-output mapping and are involved in system norms. Therefore, a reasonable approach to approximating a given LTI system is to build a reduced-order model such that its transfer function interpolates the full-order transfer function.

We start with simpler approaches (higher-order interpolation at a single point [Gri97]) and then move on to bitangential Hermite interpolation, which is directly supported in pyMOR.

We demonstrate them on Penzl’s FOM example.

from pymor.models.examples import penzl_example

fom = penzl_example()
print(fom)
LTIModel
    class: LTIModel
    number of equations: 1006
    number of inputs:    1
    number of outputs:   1
    continuous-time
    linear time-invariant
    solution_space:  NumpyVectorSpace(1006, id='STATE')

Interpolation at infinity

Given an LTI system

\[\begin{split}\begin{align*} \dot{x}(t) & = A x(t) + B u(t), \\ y(t) & = C x(t) + D u(t), \end{align*}\end{split}\]

the most straightforward interpolatory method is using Krylov subspaces

\[\begin{split}\begin{align*} V & = \begin{bmatrix} B & A B & \cdots & A^{r - 1} B \end{bmatrix}, \\ W & = \begin{bmatrix} C^T & A^T C^T & \cdots & {\left(A^T\right)}^{r - 1} C^T \end{bmatrix} \end{align*}\end{split}\]

to perform a (Petrov-)Galerkin projection. This will achieve interpolation of the first \(2 r\) moments at infinity of the transfer function. The moments at infinity, also called Markov parameters, are the coefficients in the Laurent series expansion:

\[\begin{split}\begin{align*} H(s) & = C {(s I - A)}^{-1} B + D \\ & = C \frac{1}{s} {\left(I - \frac{1}{s} A \right)}^{-1} B + D \\ & = C \frac{1}{s} \sum_{k = 0}^{\infty} {\left(\frac{1}{s} A \right)}^{k} B + D \\ & = D + \sum_{k = 0}^{\infty} C A^k B \frac{1}{s^{k + 1}} \\ & = D + \sum_{k = 1}^{\infty} C A^{k - 1} B \frac{1}{s^{k}}. \end{align*}\end{split}\]

The moments at infinity are thus

\[M_0(\infty) = D, \quad M_k(\infty) = C A^{k - 1} B, \text{ for } k \geqslant 1.\]
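
As a quick sanity check of this expansion, we can compare \(s H(s)\) for a large \(s\) with the first Markov parameter \(C B\). This is a small sketch, assuming \(D = 0\) and \(E = I\), which holds for Penzl's example.

# sketch: for large s, s*H(s) should approach the first Markov parameter C B
# (assuming D = 0 and E = I, as in Penzl's example)
CB = fom.C.apply(fom.B.as_range_array()).to_numpy().item()
print(f'C B             = {CB}')
print(f's H(s) at s=1e8 = {(1e8 * fom.transfer_function.eval_tf(1e8)).item()}')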

A reduced-order model

\[\begin{split}\begin{align*} \dot{\widehat{x}}(t) & = \widehat{A} \widehat{x}(t) + \widehat{B} u(t), \\ \widehat{y}(t) & = \widehat{C} \widehat{x}(t) + \widehat{D} u(t), \end{align*}\end{split}\]

where

\[\widehat{A} = {\left(W^T V\right)}^{-1} W^T A V, \quad \widehat{B} = {\left(W^T V\right)}^{-1} W^T B, \quad \widehat{C} = C V, \quad \widehat{D} = D,\]

with \(V\) and \(W\) as above, satisfies

\[C A^{k - 1} B = \widehat{C} \widehat{A}^{k - 1} \widehat{B}, \quad k = 1, \ldots, 2 r.\]

We can compute \(V\) and \(W\) using the arnoldi function.

from pymor.algorithms.krylov import arnoldi

r_max = 5
V = arnoldi(fom.A, fom.E, fom.B.as_range_array(), r_max)
W = arnoldi(fom.A.H, fom.E.H, fom.C.as_source_array(), r_max)

We then use those to perform a Petrov-Galerkin projection.

from pymor.reductors.basic import LTIPGReductor

pg = LTIPGReductor(fom, W, V)
roms_inf = [pg.reduce(i + 1) for i in range(r_max)]
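
Before comparing the transfer functions, we can verify moment matching numerically. The following sketch computes the first four Markov parameters of the FOM and of the ROM of order \(r = 2\), which should agree; the helper markov_parameters is defined here for illustration and is not part of pyMOR.

import numpy as np

def markov_parameters(m, n):
    # first n Markov parameters M_k = C (E^{-1} A)^{k-1} E^{-1} B
    # (E is the identity for Penzl's example; it is kept here for generality)
    v = m.E.apply_inverse(m.B.as_range_array())
    mp = []
    for _ in range(n):
        mp.append(m.C.apply(v).to_numpy().item())
        v = m.E.apply_inverse(m.A.apply(v))
    return np.array(mp)

print(markov_parameters(fom, 4))
print(markov_parameters(roms_inf[1], 4))  # the order-2 ROM matches 2r = 4 moments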

To compare, we first draw the Bode plots.

import matplotlib.pyplot as plt
plt.rcParams['axes.grid'] = True

w = (1e-1, 1e4)
bode_plot_opts = dict(sharex=True, squeeze=False, figsize=(6, 8), constrained_layout=True)

fig, ax = plt.subplots(2, 1, **bode_plot_opts)
fom.transfer_function.bode_plot(w, ax=ax, label='FOM')
for rom in roms_inf:
    rom.transfer_function.bode_plot(w, ax=ax, label=f'ROM $r = {rom.order}$')
_ = ax[0, 0].legend()
_ = ax[1, 0].legend()

As expected, we see good approximation for higher frequencies. Drawing the magnitude of the error makes it clearer.

fig, ax = plt.subplots()
for rom in roms_inf:
    err = fom - rom
    err.transfer_function.mag_plot(w, ax=ax, label=f'Error $r = {rom.order}$')
_ = ax.legend()

To check the stability of the ROMs, we can plot their poles and compute the maximum real part.

fig, ax = plt.subplots()
markers = '.x+12'
spectral_abscissas = []
for rom, marker in zip(roms_inf, markers):
    poles = rom.poles()
    spectral_abscissas.append(poles.real.max())
    ax.plot(poles.real, poles.imag, marker, label=f'ROM $r = {rom.order}$')
_ = ax.legend()
print(f'Maximum real part: {max(spectral_abscissas)}')
Maximum real part: -9.324900122511512

We see that they are all asymptotically stable.

Interpolation at zero

The next option is to use inverse Krylov subspaces (the resulting method is more commonly known as Padé approximation).

\[\begin{split}\begin{align*} V & = \begin{bmatrix} A^{-1} B & A^{-2} B & \cdots & A^{-r} B \end{bmatrix}, \\ W & = \begin{bmatrix} A^{-T} C^T & {\left(A^{-T}\right)}^{2} C^T & \cdots & {\left(A^{-T}\right)}^{r} C^T \end{bmatrix} \end{align*}\end{split}\]

This will achieve interpolation of the first \(2 r\) moments at zero of the transfer function.

\[\begin{split}\begin{align*} H(s) & = C {(s I - A)}^{-1} B + D \\ & = C {\left(A \left(s A^{-1} - I\right)\right)}^{-1} B + D \\ & = C {\left(s A^{-1} - I\right)}^{-1} A^{-1} B + D \\ & = -C {\left(I - s A^{-1}\right)}^{-1} A^{-1} B + D \\ & = -C \sum_{k = 0}^{\infty} {\left(s A^{-1} \right)}^{k} A^{-1} B + D \\ & = D - \sum_{k = 0}^{\infty} C A^{-(k + 1)} B s^k. \end{align*}\end{split}\]

The moments at zero are

\[\begin{split}\begin{align*} M_0(0) & = D - C A^{-1} B = H(0), \\ M_k(0) & = -C A^{-(k + 1)} B = \frac{1}{k!} H^{(k)}(0), \text{ for } k \ge 1. \end{align*}\end{split}\]
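
As a small sketch of the derivative formula, we can check numerically that \(-C A^{-2} B\) equals \(H'(0)\) for the FOM, using eval_dtf to evaluate the derivative of the transfer function (again, \(E = I\) here).

# sketch: M_1(0) = -C A^{-2} B should equal H'(0)
Ainv_B = fom.A.apply_inverse(fom.B.as_range_array())
M1 = -fom.C.apply(fom.A.apply_inverse(Ainv_B)).to_numpy().item()
print(M1, fom.transfer_function.eval_dtf(0).item())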

We can again use the arnoldi function to compute \(V\) and \(W\).

from pymor.operators.constructions import InverseOperator

r_max = 5
v0 = fom.A.apply_inverse(fom.B.as_range_array())
V = arnoldi(InverseOperator(fom.A), InverseOperator(fom.E), v0, r_max)
w0 = fom.A.apply_inverse_adjoint(fom.C.as_source_array())
W = arnoldi(InverseOperator(fom.A.H), InverseOperator(fom.E.H), w0, r_max)

Then, in the same way, compute the Petrov-Galerkin projection…

pg = LTIPGReductor(fom, W, V)
roms_zero = [pg.reduce(i + 1) for i in range(r_max)]

…and draw the Bode plot.

fig, ax = plt.subplots(2, 1, **bode_plot_opts)
fom.transfer_function.bode_plot(w, ax=ax, label='FOM')
for rom in roms_zero:
    rom.transfer_function.bode_plot(w, ax=ax, label=f'ROM $r = {rom.order}$')
_ = ax[0, 0].legend()
_ = ax[1, 0].legend()

Now, because of interpolation at zero, we get good approximation for lower frequencies. The error plot shows this better.

fig, ax = plt.subplots()
for rom in roms_zero:
    err = fom - rom
    err.transfer_function.mag_plot(w, ax=ax, label=f'Error $r = {rom.order}$')
_ = ax.legend()
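
Besides the plots, we can check the interpolation at zero directly by comparing values and first derivatives of the transfer functions at \(s = 0\) (a quick sketch; both quantities should agree for every ROM, since the first \(2 r\) moments are matched).

# sketch: values and derivatives of FOM and ROM transfer functions at s = 0
for rom in roms_zero:
    err_tf = abs(fom.transfer_function.eval_tf(0) - rom.transfer_function.eval_tf(0)).item()
    err_dtf = abs(fom.transfer_function.eval_dtf(0) - rom.transfer_function.eval_dtf(0)).item()
    print(f"r = {rom.order}: |H(0) - Hr(0)| = {err_tf:.2e}, |H'(0) - Hr'(0)| = {err_dtf:.2e}")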

We again check stability using the poles of the ROMs.

fig, ax = plt.subplots()
spectral_abscissas = []
for rom, marker in zip(roms_zero, markers):
    poles = rom.poles()
    spectral_abscissas.append(poles.real.max())
    ax.plot(poles.real, poles.imag, marker, label=f'ROM $r = {rom.order}$')
_ = ax.legend()
print(f'Maximum real part: {max(spectral_abscissas)}')
Maximum real part: -1.0000026360387688

The ROMs are again all asymptotically stable.

Interpolation at an arbitrary finite point

A more general approach is to use rational Krylov subspaces (the resulting method is also known as shifted Padé approximation).

\[\begin{split}\begin{align*} V & = \begin{bmatrix} (\sigma I - A)^{-1} B & (\sigma I - A)^{-2} B & \cdots & (\sigma I - A)^{-r} B \end{bmatrix}, \\ W & = \begin{bmatrix} (\sigma I - A)^{-H} C^T & {\left((\sigma I - A)^{-H}\right)}^{2} C^T & \cdots & {\left((\sigma I - A)^{-H}\right)}^{r} C^T \end{bmatrix}, \end{align*}\end{split}\]

for some \(\sigma \in \mathbb{C}\) that is not an eigenvalue of \(A\).

This will achieve interpolation of the first \(2 r\) moments at \(\sigma\) of the transfer function.

\[\begin{split}\begin{align*} H(s) & = C {(s I - A)}^{-1} B + D \\ & = C {((s - \sigma) I + \sigma I - A)}^{-1} B + D \\ & = C {\left((\sigma I - A) \left((s - \sigma) (\sigma I - A)^{-1} + I\right)\right)}^{-1} B + D \\ & = C {\left((s - \sigma) (\sigma I - A)^{-1} + I\right)}^{-1} {(\sigma I - A)}^{-1} B + D \\ & = C {\left(I - (s - \sigma) (A - \sigma I)^{-1}\right)}^{-1} {(\sigma I - A)}^{-1} B + D \\ & = C \sum_{k = 0}^{\infty} {\left((s - \sigma) (A - \sigma I)^{-1} \right)}^{k} {(\sigma I - A)}^{-1} B + D \\ & = D - \sum_{k = 0}^{\infty} C (A - \sigma I)^{-(k + 1)} B (s - \sigma)^{k}. \end{align*}\end{split}\]

The moments at \(\sigma\) are

\[\begin{split}\begin{align*} M_0(\sigma) & = C {(\sigma I - A)}^{-1} B + D = H(\sigma), \\ M_k(\sigma) & = -C {(A - \sigma I)}^{-(k + 1)} B = \frac{1}{k!} H^{(k)}(\sigma), \text{ for } k \ge 1. \end{align*}\end{split}\]

To preserve realness in the ROMs, we interpolate at a conjugate pair of points (\(\pm 200 i\)).

from pymor.algorithms.gram_schmidt import gram_schmidt

s = 200j
r_max = 5
v0 = (s * fom.E - fom.A).apply_inverse(fom.B.as_range_array())
V_complex = arnoldi(InverseOperator(s * fom.E - fom.A), InverseOperator(fom.E), v0, r_max)
V = fom.A.source.empty(reserve=2 * r_max)
for v1, v2 in zip(V_complex.real, V_complex.imag):
    V.append(v1)
    V.append(v2)
_ = gram_schmidt(V, atol=0, rtol=0, copy=False)
w0 = (s * fom.E - fom.A).apply_inverse_adjoint(fom.C.as_source_array())
W_complex = arnoldi(InverseOperator(s * fom.E.H - fom.A.H), InverseOperator(fom.E.H), w0, r_max)
W = fom.A.source.empty(reserve=2 * r_max)
for w1, w2 in zip(W_complex.real, W_complex.imag):
    W.append(w1)
    W.append(w2)
_ = gram_schmidt(W, atol=0, rtol=0, copy=False)

We perform a Petrov-Galerkin projection.

pg = LTIPGReductor(fom, W, V)
roms_sigma = [pg.reduce(2 * (i + 1)) for i in range(r_max)]
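
Before plotting, here is a quick sketch of a check that the interpolation point is indeed matched (by realness of the ROMs, \(s = -200 i\) is then matched as well).

# sketch: FOM and ROM transfer functions should agree at s = 200j for all ROMs
for rom in roms_sigma:
    err_tf = abs(fom.transfer_function.eval_tf(200j) - rom.transfer_function.eval_tf(200j)).item()
    print(f'r = {rom.order}: |H(200j) - Hr(200j)| = {err_tf:.2e}')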

We then draw the Bode plots.

fig, ax = plt.subplots(2, 1, **bode_plot_opts)
fom.transfer_function.bode_plot(w, ax=ax, label='FOM')
for rom in roms_sigma:
    rom.transfer_function.bode_plot(w, ax=ax, label=f'ROM $r = {rom.order}$')
_ = ax[0, 0].legend()
_ = ax[1, 0].legend()

The ROMs are now more accurate around the interpolated frequency, which the error plot also shows.

fig, ax = plt.subplots()
for rom in roms_sigma:
    err = fom - rom
    err.transfer_function.mag_plot(w, ax=ax, label=f'Error $r = {rom.order}$')
_ = ax.legend()

The poles of the ROMs are as follows.

fig, ax = plt.subplots()
spectral_abscissas = []
for rom, marker in zip(roms_sigma, markers):
    poles = rom.poles()
    spectral_abscissas.append(poles.real.max())
    ax.plot(poles.real, poles.imag, marker, label=f'ROM $r = {rom.order}$')
_ = ax.legend()
print(f'Maximum real part: {max(spectral_abscissas)}')
Maximum real part: 633.2912934294063

We observe that some of the ROMs are unstable: like Petrov-Galerkin projection in general, interpolation does not necessarily preserve stability. A common remedy is to use a Galerkin projection with \(W = V\) if \(A + A^T\) is negative definite, as sketched below. This preserves stability, but reduces the number of matched moments from \(2 r\) to \(r\).
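
The following is a minimal sketch of this remedy, reusing the real rational Krylov basis \(V\) constructed above (for Penzl's example, \(A + A^T\) is indeed negative definite and \(E\) is the identity).

# sketch: one-sided (Galerkin) projection with W = V preserves stability here,
# at the cost of matching only half as many moments
pg_galerkin = LTIPGReductor(fom, V, V)
roms_galerkin = [pg_galerkin.reduce(2 * (i + 1)) for i in range(r_max)]
print(f'Maximum real part: {max(rom.poles().real.max() for rom in roms_galerkin)}')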

Interpolation at multiple points

Instead of the local approximation given by higher-order interpolation at a single point, good approximation over a larger frequency range can be achieved by interpolating at multiple points (sometimes called multipoint Padé approximation), with lower or higher interpolation order at each point. pyMOR implements bitangential Hermite interpolation (BHI) for different types of Models, i.e., methods to construct a reduced-order transfer function \(\widehat{H}\) such that

\[\begin{split}\begin{align*} H(\sigma_i) b_i & = \widehat{H}(\sigma_i) b_i, \\ c_i^H H(\sigma_i) & = c_i^H \widehat{H}(\sigma_i), \\ c_i^H H'(\sigma_i) b_i & = c_i^H \widehat{H}'(\sigma_i) b_i, \end{align*}\end{split}\]

for given interpolation points \(\sigma_i\), right tangential directions \(b_i\), and left tangential directions \(c_i\), \(i = 1, \ldots, r\). Such interpolation is relevant for systems with multiple inputs and outputs, where interpolation of matrices

\[\begin{split}\begin{align*} H(\sigma_i) & = \widehat{H}(\sigma_i), \\ H'(\sigma_i) & = \widehat{H}'(\sigma_i), \end{align*}\end{split}\]

may be too restrictive. Here we focus on single-input single-output systems, where tangential and matrix interpolation are the same.

Bitangential Hermite interpolation is also relevant for \(\mathcal{H}_2\)-optimal model order reduction and methods such as IRKA (iterative rational Krylov algorithm), but we don’t discuss them in this tutorial.

Interpolation at multiple points can be achieved by concatenating different \(V\) and \(W\) matrices discussed above. In pyMOR, LTIBHIReductor can be used directly to perform bitangential Hermite interpolation.

import numpy as np
from pymor.reductors.interpolation import LTIBHIReductor

bhi = LTIBHIReductor(fom)
freq = np.array([0.5, 5, 50, 500, 5000])
sigma = np.concatenate([1j * freq, -1j * freq])
b = np.ones((10, 1))
c = np.ones((10, 1))
rom_bhi = bhi.reduce(sigma, b, c)
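
As a quick sketch of a check, we can verify the interpolation conditions numerically; since the system is single-input single-output and \(b_i = c_i = 1\), the bitangential conditions reduce to matching values and first derivatives of the transfer function at the interpolation points.

# sketch: verify value and derivative matching at the points on the positive
# imaginary axis (the conjugate points follow by realness)
for s_i in 1j * freq:
    H, Hr = fom.transfer_function.eval_tf(s_i), rom_bhi.transfer_function.eval_tf(s_i)
    dH, dHr = fom.transfer_function.eval_dtf(s_i), rom_bhi.transfer_function.eval_dtf(s_i)
    print(f"s = {s_i}: |H - Hr| = {abs(H - Hr).item():.2e}, |H' - Hr'| = {abs(dH - dHr).item():.2e}")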

We can compare the Bode plots.

fig, ax = plt.subplots(2, 1, **bode_plot_opts)
fom.transfer_function.bode_plot(w, ax=ax, label='FOM')
rom_bhi.transfer_function.bode_plot(w, ax=ax, label=f'ROM $r = {rom_bhi.order}$')
for a in ax.flat:
    for f in freq:
        a.axvline(f, color='gray', linestyle='--')
    _ = a.legend()

The error plot shows interpolation more clearly.

fig, ax = plt.subplots()
err = fom - rom_bhi
err.transfer_function.mag_plot(w, ax=ax, label=f'Error $r = {rom_bhi.order}$')
for f in freq:
    ax.axvline(f, color='gray', linestyle='--')
_ = ax.legend()

We can check the stability of the ROM.

fig, ax = plt.subplots()
poles = rom_bhi.poles()
ax.plot(poles.real, poles.imag, '.', label=f'ROM $r = {rom_bhi.order}$')
_ = ax.legend()
print(f'Maximum real part: {poles.real.max()}')
Maximum real part: -1.0071270631815548

In this case, the ROM is asymptotically stable.
