
Tutorial: Interpolatory model order reduction for LTI systems

Here we discuss interpolatory model order reduction methods (also known as: moment matching, Krylov subspace methods, and Padé approximation) for LTI systems. In Tutorial: Linear time-invariant systems, we discussed transfer functions of LTI systems, which give a frequency-domain description of the input-output mapping and are involved in system norms. Therefore, a reasonable approach to approximating a given LTI system is to build a reduced-order model such that its transfer function interpolates the full-order transfer function.

We start with simpler approaches (higher-order interpolation at a single point [Gri97]) and then move on to bitangential Hermite interpolation, which is directly supported in pyMOR.

We demonstrate them on Penzl’s FOM example.

from pymor.models.examples import penzl_example

fom = penzl_example()
print(fom)
LTIModel
    class: LTIModel
    number of equations: 1006
    number of inputs:    1
    number of outputs:   1
    continuous-time
    linear time-invariant
    solution_space:  NumpyVectorSpace(1006)

Interpolation at infinity

Given an LTI system

$$
\dot{x}(t) = A x(t) + B u(t), \qquad y(t) = C x(t) + D u(t),
$$

the most straightforward interpolatory method is using Krylov subspaces

$$
V = \begin{bmatrix} B & A B & \cdots & A^{r-1} B \end{bmatrix}, \qquad
W = \begin{bmatrix} C^T & A^T C^T & \cdots & (A^T)^{r-1} C^T \end{bmatrix}
$$

to perform a (Petrov-)Galerkin projection. This will achieve interpolation of the first 2r moments at infinity of the transfer function. The moments at infinity, also called Markov parameters, are the coefficients in the Laurent series expansion:

$$
\begin{aligned}
H(s)
&= C (s I - A)^{-1} B + D
= C \frac{1}{s} \left(I - \frac{1}{s} A\right)^{-1} B + D
= C \frac{1}{s} \sum_{k=0}^{\infty} \left(\frac{1}{s} A\right)^k B + D \\
&= D + \sum_{k=0}^{\infty} C A^k B \frac{1}{s^{k+1}}
= D + \sum_{k=1}^{\infty} C A^{k-1} B \frac{1}{s^k}.
\end{aligned}
$$

The moments at infinity are thus

$$
M_0(\infty) = D, \qquad
M_k(\infty) = C A^{k-1} B, \text{ for } k \ge 1.
$$

A reduced-order model

$$
\dot{\hat{x}}(t) = \hat{A} \hat{x}(t) + \hat{B} u(t), \qquad
\hat{y}(t) = \hat{C} \hat{x}(t) + \hat{D} u(t),
$$

where

$$
\hat{A} = (W^T V)^{-1} W^T A V, \quad
\hat{B} = (W^T V)^{-1} W^T B, \quad
\hat{C} = C V, \quad
\hat{D} = D,
$$

with V and W as above, satisfies

$$
C A^{k-1} B = \hat{C} \hat{A}^{k-1} \hat{B}, \quad k = 1, \ldots, 2r.
$$

We can compute V and W using the arnoldi function.

from pymor.algorithms.krylov import arnoldi

r_max = 5
V = arnoldi(fom.A, fom.E, fom.B.as_range_array(), r_max)
W = arnoldi(fom.A.H, fom.E.H, fom.C.as_source_array(), r_max)

We then use those to perform a Petrov-Galerkin projection.

from pymor.reductors.basic import LTIPGReductor

pg = LTIPGReductor(fom, W, V)
roms_inf = [pg.reduce(i + 1) for i in range(r_max)]

To compare, we first draw the Bode plots.

import matplotlib.pyplot as plt
plt.rcParams['axes.grid'] = True

w = (1e-1, 1e4)
bode_plot_opts = dict(sharex=True, squeeze=False, figsize=(6, 8), constrained_layout=True)

fig, ax = plt.subplots(2, 1, **bode_plot_opts)
fom.transfer_function.bode_plot(w, ax=ax, label='FOM')
for rom in roms_inf:
    rom.transfer_function.bode_plot(w, ax=ax, label=f'ROM $r = {rom.order}$')
_ = ax[0, 0].legend()
_ = ax[1, 0].legend()

As expected, we see good approximation for higher frequencies. Drawing the magnitude of the error makes it clearer.

fig, ax = plt.subplots()
for rom in roms_inf:
    err = fom - rom
    err.transfer_function.mag_plot(w, ax=ax, label=f'Error $r = {rom.order}$')
_ = ax.legend()
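We can also verify the moment-matching property numerically. The following is a minimal sketch (with variable names of our choosing) that compares the first $2r$ Markov parameters of the FOM and of the order-5 ROM by repeatedly applying $E^{-1} A$; for this example $E = I$, so these are exactly the moments $C A^{k-1} B$ derived above. The higher moments are typically matched less accurately in floating-point arithmetic.

# Compare the first 2r Markov parameters of the FOM and the order-5 ROM.
# For this example E = I, so these are C A^(k-1) B; we apply E^{-1}
# explicitly only to keep the iteration general.
rom_r5 = roms_inf[-1]
v_fom = fom.E.apply_inverse(fom.B.as_range_array())
v_rom = rom_r5.E.apply_inverse(rom_r5.B.as_range_array())
for k in range(1, 2 * r_max + 1):
    m_fom = fom.C.apply(v_fom).to_numpy().item()
    m_rom = rom_r5.C.apply(v_rom).to_numpy().item()
    print(f'k = {k}: relative difference {abs(m_fom - m_rom) / abs(m_fom):.2e}')
    v_fom = fom.E.apply_inverse(fom.A.apply(v_fom))
    v_rom = rom_r5.E.apply_inverse(rom_r5.A.apply(v_rom))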

To check the stability of the ROMs, we can plot their poles and compute the maximum real part.

fig, ax = plt.subplots()
markers = '.x+12'
spectral_abscissas = []
for rom, marker in zip(roms_inf, markers):
    poles = rom.poles()
    spectral_abscissas.append(poles.real.max())
    ax.plot(poles.real, poles.imag, marker, label=f'ROM $r = {rom.order}$')
_ = ax.legend()
print(f'Maximum real part: {max(spectral_abscissas)}')
Maximum real part: -9.324900122511472

We see that they are all asymptotically stable.

Interpolation at zero

The next option is using inverse Krylov subspaces (an approach more commonly known as Padé approximation).

$$
V = \begin{bmatrix} A^{-1} B & A^{-2} B & \cdots & A^{-r} B \end{bmatrix}, \qquad
W = \begin{bmatrix} A^{-T} C^T & (A^{-T})^2 C^T & \cdots & (A^{-T})^r C^T \end{bmatrix}
$$

This will achieve interpolation of the first 2r moments at zero of the transfer function.

$$
\begin{aligned}
H(s)
&= C (s I - A)^{-1} B + D
= C \left(A \left(s A^{-1} - I\right)\right)^{-1} B + D
= C \left(s A^{-1} - I\right)^{-1} A^{-1} B + D \\
&= -C \left(I - s A^{-1}\right)^{-1} A^{-1} B + D
= -C \sum_{k=0}^{\infty} \left(s A^{-1}\right)^k A^{-1} B + D
= D - \sum_{k=0}^{\infty} C A^{-(k+1)} B s^k.
\end{aligned}
$$

The moments at zero are

$$
M_0(0) = D - C A^{-1} B = H(0), \qquad
M_k(0) = -C A^{-(k+1)} B = \frac{1}{k!} H^{(k)}(0), \text{ for } k \ge 1.
$$

We can again use the arnoldi function to compute V and W.

from pymor.operators.constructions import InverseOperator

r_max = 5
v0 = fom.A.apply_inverse(fom.B.as_range_array())
V = arnoldi(InverseOperator(fom.A), InverseOperator(fom.E), v0, r_max)
w0 = fom.A.apply_inverse_adjoint(fom.C.as_source_array())
W = arnoldi(InverseOperator(fom.A.H), InverseOperator(fom.E.H), w0, r_max)

Then, in the same way, compute the Petrov-Galerkin projection…

pg = LTIPGReductor(fom, W, V)
roms_zero = [pg.reduce(i + 1) for i in range(r_max)]

…and draw the Bode plot.

fig, ax = plt.subplots(2, 1, **bode_plot_opts)
fom.transfer_function.bode_plot(w, ax=ax, label='FOM')
for rom in roms_zero:
    rom.transfer_function.bode_plot(w, ax=ax, label=f'ROM $r = {rom.order}$')
_ = ax[0, 0].legend()
_ = ax[1, 0].legend()

Now, because of interpolation at zero, we get good approximation for lower frequencies. The error plot shows this better.

fig, ax = plt.subplots()
for rom in roms_zero:
    err = fom - rom
    err.transfer_function.mag_plot(w, ax=ax, label=f'Error $r = {rom.order}$')
_ = ax.legend()
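Since $H(0)$ equals the zeroth moment at zero, every ROM should reproduce it exactly up to round-off. A quick numerical check (a minimal sketch using eval_tf to evaluate the transfer functions at a point):

# H(0) of the FOM and of each ROM should agree up to round-off.
H0_fom = fom.transfer_function.eval_tf(0).item()
for rom in roms_zero:
    H0_rom = rom.transfer_function.eval_tf(0).item()
    print(f'r = {rom.order}: |H(0) - Hr(0)| = {abs(H0_fom - H0_rom):.2e}')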

We again check stability using the poles of the ROMs.

fig, ax = plt.subplots()
spectral_abscissas = []
for rom, marker in zip(roms_zero, markers):
    poles = rom.poles()
    spectral_abscissas.append(poles.real.max())
    ax.plot(poles.real, poles.imag, marker, label=f'ROM $r = {rom.order}$')
_ = ax.legend()
print(f'Maximum real part: {max(spectral_abscissas)}')
Maximum real part: -1.000002636038773

The ROMs are again all asymptotically stable.

Interpolation at an arbitrary finite point

A more general approach is using rational Krylov subspaces (also known as the shifted Padé approximation).

$$
V = \begin{bmatrix} (\sigma I - A)^{-1} B & (\sigma I - A)^{-2} B & \cdots & (\sigma I - A)^{-r} B \end{bmatrix}, \qquad
W = \begin{bmatrix} (\sigma I - A)^{-H} C^T & \left((\sigma I - A)^{-H}\right)^2 C^T & \cdots & \left((\sigma I - A)^{-H}\right)^r C^T \end{bmatrix},
$$

for some $\sigma \in \mathbb{C}$ that is not an eigenvalue of $A$.

This will achieve interpolation of the first 2r moments at σ of the transfer function.

$$
\begin{aligned}
H(s)
&= C (s I - A)^{-1} B + D
= C \left((s - \sigma) I + \sigma I - A\right)^{-1} B + D \\
&= C \left((\sigma I - A) \left((s - \sigma) (\sigma I - A)^{-1} + I\right)\right)^{-1} B + D
= C \left((s - \sigma) (\sigma I - A)^{-1} + I\right)^{-1} (\sigma I - A)^{-1} B + D \\
&= C \left(I - (s - \sigma) (A - \sigma I)^{-1}\right)^{-1} (\sigma I - A)^{-1} B + D
= C \sum_{k=0}^{\infty} \left((s - \sigma) (A - \sigma I)^{-1}\right)^k (\sigma I - A)^{-1} B + D \\
&= D - \sum_{k=0}^{\infty} C (A - \sigma I)^{-(k+1)} B (s - \sigma)^k.
\end{aligned}
$$

The moments at σ are

$$
M_0(\sigma) = C (\sigma I - A)^{-1} B + D = H(\sigma), \qquad
M_k(\sigma) = -C (A - \sigma I)^{-(k+1)} B = \frac{1}{k!} H^{(k)}(\sigma), \text{ for } k \ge 1.
$$

To preserve realness in the ROMs, we interpolate at a conjugate pair of points (±200i).

from pymor.algorithms.gram_schmidt import gram_schmidt

s = 200j
r_max = 5
v0 = (s * fom.E - fom.A).apply_inverse(fom.B.as_range_array())
V_complex = arnoldi(InverseOperator(s * fom.E - fom.A), InverseOperator(fom.E), v0, r_max)
V = fom.A.source.empty(reserve=2 * r_max)
for v1, v2 in zip(V_complex.real, V_complex.imag):
    V.append(v1)
    V.append(v2)
_ = gram_schmidt(V, atol=0, rtol=0, copy=False)
w0 = (s * fom.E - fom.A).apply_inverse_adjoint(fom.C.as_source_array())
W_complex = arnoldi(InverseOperator(s * fom.E.H - fom.A.H), InverseOperator(fom.E.H), w0, r_max)
W = fom.A.source.empty(reserve=2 * r_max)
for w1, w2 in zip(W_complex.real, W_complex.imag):
    W.append(w1)
    W.append(w2)
_ = gram_schmidt(W, atol=0, rtol=0, copy=False)

We perform a Petrov-Galerkin projection.

pg = LTIPGReductor(fom, W, V)
roms_sigma = [pg.reduce(2 * (i + 1)) for i in range(r_max)]

Then draw the Bode plots.

fig, ax = plt.subplots(2, 1, **bode_plot_opts)
fom.transfer_function.bode_plot(w, ax=ax, label='FOM')
for rom in roms_sigma:
    rom.transfer_function.bode_plot(w, ax=ax, label=f'ROM $r = {rom.order}$')
_ = ax[0, 0].legend()
_ = ax[1, 0].legend()

The ROMs are now more accurate around the interpolated frequency, which the error plot also shows.

fig, ax = plt.subplots()
for rom in roms_sigma:
    err = fom - rom
    err.transfer_function.mag_plot(w, ax=ax, label=f'Error $r = {rom.order}$')
_ = ax.legend()
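The interpolation itself can be checked numerically: the transfer function values at $\pm 200 i$ should agree up to round-off (a minimal sketch using eval_tf):

# H(200j) of the FOM and of each ROM should agree up to round-off
# (the value at -200j follows by conjugation, since the models are real).
Hs_fom = fom.transfer_function.eval_tf(200j).item()
for rom in roms_sigma:
    Hs_rom = rom.transfer_function.eval_tf(200j).item()
    print(f'r = {rom.order}: |H(200j) - Hr(200j)| = {abs(Hs_fom - Hs_rom):.2e}')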

The poles of the ROMs are as follows.

fig, ax = plt.subplots()
spectral_abscissas = []
for rom, marker in zip(roms_sigma, markers):
    poles = rom.poles()
    spectral_abscissas.append(poles.real.max())
    ax.plot(poles.real, poles.imag, marker, label=f'ROM $r = {rom.order}$')
_ = ax.legend()
print(f'Maximum real part: {max(spectral_abscissas)}')
Maximum real part: 633.2912934294059

We observe that some of the ROMs are unstable; interpolation, like Petrov-Galerkin projection in general, does not necessarily preserve stability. A common remedy is using a Galerkin projection with $W = V$, which preserves stability if $A + A^T$ is negative definite, but reduces the number of interpolated moments from $2r$ to $r$.
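For Penzl's example, $A + A^T$ is indeed negative definite, so the one-sided (Galerkin) variant is guaranteed to produce stable ROMs. A minimal sketch, reusing the $V$ computed above for $\sigma = \pm 200 i$ and passing it for both bases (the variable names are ours):

# Galerkin projection with W = V (stability-preserving here, since
# A + A^T is negative definite, but only r moments are matched).
pg_galerkin = LTIPGReductor(fom, V, V)
roms_galerkin = [pg_galerkin.reduce(2 * (i + 1)) for i in range(r_max)]
print('Maximum real part:',
      max(rom.poles().real.max() for rom in roms_galerkin))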

Interpolation at multiple points

To achieve good approximation over a larger frequency range, instead of the local approximation given by higher-order interpolation at a single point, one idea is to interpolate at multiple points (sometimes called multipoint Padé approximation), whether with lower or higher interpolation orders. pyMOR implements bitangential Hermite interpolation (BHI) for different types of Models, i.e., methods to construct a reduced-order transfer function $\hat{H}$ such that

$$
H(\sigma_i) b_i = \hat{H}(\sigma_i) b_i, \qquad
c_i^H H(\sigma_i) = c_i^H \hat{H}(\sigma_i), \qquad
c_i^H H'(\sigma_i) b_i = c_i^H \hat{H}'(\sigma_i) b_i,
$$

for given interpolation points $\sigma_i$, right tangential directions $b_i$, and left tangential directions $c_i$, $i = 1, \ldots, r$. Such interpolation is relevant for systems with multiple inputs and outputs, where interpolation of the full matrices

$$
H(\sigma_i) = \hat{H}(\sigma_i), \qquad
H'(\sigma_i) = \hat{H}'(\sigma_i),
$$

may be too restrictive. Here we focus on single-input single-output systems, where tangential and matrix interpolation are the same.

Bitangential Hermite interpolation is also relevant for $\mathcal{H}_2$-optimal model order reduction and methods such as IRKA (the iterative rational Krylov algorithm), but we do not discuss them in this tutorial.

Interpolation at multiple points can be achieved by concatenating different V and W matrices discussed above. In pyMOR, LTIBHIReductor can be used directly to perform bitangential Hermite interpolation.
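Before using LTIBHIReductor, here is a minimal sketch of the concatenation idea, combining rational Krylov vectors for two illustrative shifts ($5i$ and $500i$, made real by taking real and imaginary parts) and checking interpolation there; the shifts and variable names are chosen only for illustration.

# Multi-point interpolation by concatenating rational Krylov vectors
# for two shifts and performing a Petrov-Galerkin projection (a sketch).
V_multi = fom.A.source.empty()
W_multi = fom.A.source.empty()
for sigma in [5j, 500j]:
    v = (sigma * fom.E - fom.A).apply_inverse(fom.B.as_range_array())
    w = (sigma * fom.E - fom.A).apply_inverse_adjoint(fom.C.as_source_array())
    V_multi.append(v.real)
    V_multi.append(v.imag)
    W_multi.append(w.real)
    W_multi.append(w.imag)
_ = gram_schmidt(V_multi, atol=0, rtol=0, copy=False)
_ = gram_schmidt(W_multi, atol=0, rtol=0, copy=False)
rom_multi = LTIPGReductor(fom, W_multi, V_multi).reduce(4)
# The transfer function values at the chosen shifts should now match.
for f in (5, 500):
    err_f = abs(fom.transfer_function.eval_tf(1j * f).item()
                - rom_multi.transfer_function.eval_tf(1j * f).item())
    print(f'f = {f}: |H(fj) - Hr(fj)| = {err_f:.2e}')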

import numpy as np
from pymor.reductors.interpolation import LTIBHIReductor

bhi = LTIBHIReductor(fom)
freq = np.array([0.5, 5, 50, 500, 5000])
sigma = np.concatenate([1j * freq, -1j * freq])
b = np.ones((10, 1))
c = np.ones((10, 1))
rom_bhi = bhi.reduce(sigma, b, c)

We can compare the Bode plots.

fig, ax = plt.subplots(2, 1, **bode_plot_opts)
fom.transfer_function.bode_plot(w, ax=ax, label='FOM')
rom_bhi.transfer_function.bode_plot(w, ax=ax, label=f'ROM $r = {rom_bhi.order}$')
for a in ax.flat:
    for f in freq:
        a.axvline(f, color='gray', linestyle='--')
    _ = a.legend()

The error plot shows interpolation more clearly.

fig, ax = plt.subplots()
err = fom - rom_bhi
err.transfer_function.mag_plot(w, ax=ax, label=f'Error $r = {rom_bhi.order}$')
for f in freq:
    ax.axvline(f, color='gray', linestyle='--')
_ = ax.legend()
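Evaluating the transfer functions at the chosen frequencies confirms the interpolation numerically (a minimal sketch using eval_tf):

# The values at the interpolation points should agree up to round-off.
for f in freq:
    H_fom = fom.transfer_function.eval_tf(1j * f).item()
    H_rom = rom_bhi.transfer_function.eval_tf(1j * f).item()
    print(f'f = {f}: |H - Hr| = {abs(H_fom - H_rom):.2e}')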

We can check the stability of the ROM.

fig, ax = plt.subplots()
poles = rom_bhi.poles()
ax.plot(poles.real, poles.imag, '.', label=f'ROM $r = {rom_bhi.order}$')
_ = ax.legend()
print(f'Maximum real part: {poles.real.max()}')
Maximum real part: -1.0071270631815525

In this case, the ROM is asymptotically stable.

Download the code: tutorial_interpolation.md, tutorial_interpolation.ipynb.