Module Contents

class pymor.reductors.era.ERAReductor(data, sampling_time, force_stability=True, feedthrough=None)[source]

Bases: pymor.core.cache.CacheableObject

Eigensystem Realization Algorithm reductor.

Constructs a (reduced) realization from a sequence of Markov parameters \(h_i\), for \(i\in\{1,\,\dots,\,2s-1\}\), \(s\in\mathbb{N}\), by a (reduced) orthogonal factorization of the Hankel matrix of Markov parameters

\[\begin{split}H = \begin{bmatrix} h_1 & h_2 & \dots & h_s \\ h_2 & h_3 & \dots & h_{s+1}\\ \vdots & \vdots & \ddots & \vdots\\ h_s & h_{s+1} & \dots & h_{2s-1} \end{bmatrix}=U\Sigma V^T\in\mathbb{R}^{ps\times ms},\end{split}\]

where \(r\leq\min\{ms,ps\}\) is the reduced order. See [Kun78].
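The factorization above is the core of ERA. The following is a simplified from-scratch sketch (not pyMOR's implementation) of extracting a reduced realization from the block Hankel matrix; it assumes \(2s\) Markov parameters are available so that the shifted Hankel matrix can also be formed:

```python
import numpy as np

def era(h, r):
    """Sketch of the core ERA step (illustration only, not pyMOR's code).

    h : array of shape (2*s, p, m), where h[k] is the (k+1)-th Markov
        parameter h_{k+1}; r : reduced order.
    Returns (A, B, C) with C @ A^(k) @ B approximating h[k].
    """
    two_s, p, m = h.shape
    s = two_s // 2
    # block Hankel matrix from h_1, ..., h_{2s-1} and its shifted version
    H1 = np.block([[h[i + j] for j in range(s)] for i in range(s)])
    H2 = np.block([[h[i + j + 1] for j in range(s)] for i in range(s)])
    U, sv, Vt = np.linalg.svd(H1, full_matrices=False)
    Ur, svr, Vtr = U[:, :r], sv[:r], Vt[:r]
    sqrt_sv = np.sqrt(svr)
    # balanced truncated realization:
    # A = S^{-1/2} U^T H2 V S^{-1/2}, B/C from the first block of S^{1/2} V^T / U S^{1/2}
    A = (Ur / sqrt_sv).T @ H2 @ (Vtr.T / sqrt_sv)
    B = (sqrt_sv[:, None] * Vtr)[:, :m]
    C = (Ur * sqrt_sv)[:p]
    return A, B, C

# usage on Markov parameters of a known order-3 system (hypothetical data)
rng = np.random.default_rng(0)
A0 = np.diag([0.5, -0.3, 0.2])
B0 = rng.standard_normal((3, 2))
C0 = rng.standard_normal((2, 3))
h = np.stack([C0 @ np.linalg.matrix_power(A0, k) @ B0 for k in range(12)])
Ar, Br, Cr = era(h, r=3)
h_rec = np.stack([Cr @ np.linalg.matrix_power(Ar, k) @ Br for k in range(12)])
# h_rec matches h up to floating-point error, since the true order is 3
```

Since the data comes from an exact order-3 system, the Hankel matrix has rank 3 and the reduced realization reproduces the Markov parameters exactly.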

For the identified model to be stable, the Markov parameters must decay substantially within \(s\) samples. Stability is enforced automatically through zero-padding, which can be deactivated by setting force_stability=False.
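The zero-padding used to enforce Kung's stability assumption can be sketched as follows (a minimal illustration with a hypothetical Markov-parameter array, not pyMOR's implementation):

```python
import numpy as np

# hypothetical matrix-valued Markov parameters, shape (n, p, m), decaying in n
rng = np.random.default_rng(0)
h = rng.standard_normal((10, 2, 3)) * 0.5 ** np.arange(10)[:, None, None]

# pad with zeros to double the length, so the padded sequence has
# decayed completely within the extended window
h_padded = np.concatenate([h, np.zeros_like(h)])
```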

For a large number of inputs and/or outputs, the factorization of the Hankel matrix can be accelerated by tangentially projecting the Markov parameters to reduce the dimension of the Hankel matrix, i.e.

\[\hat{h}_i = W_L^T h_i W_R,\]

where \(n_L \leq p\) and \(n_R \leq m\) are the number of left and right tangential directions and \(W_L \in \mathbb{R}^{p \times n_L}\) and \(W_R \in \mathbb{R}^{m \times n_R}\) are the left and right projectors, respectively. See [KG16].
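One common way to obtain such projectors is from the leading singular vectors of the stacked Markov parameters; the following NumPy sketch (an illustration under that assumption, not pyMOR's input_projector/output_projector implementation, with hypothetical dimensions) shows the projection step:

```python
import numpy as np

# hypothetical Markov parameters, shape (n, p, m)
rng = np.random.default_rng(0)
h = rng.standard_normal((50, 6, 4))
n, p, m = h.shape
n_L, n_R = 2, 2  # chosen numbers of left/right tangential directions

# left projector: leading left singular vectors of horizontally stacked parameters
theta_L = np.hstack(h)                                    # shape (p, n*m)
W_L = np.linalg.svd(theta_L, full_matrices=False)[0][:, :n_L]
# right projector: leading right singular vectors of vertically stacked parameters
theta_R = np.vstack(h)                                    # shape (n*p, m)
W_R = np.linalg.svd(theta_R, full_matrices=False)[2][:n_R].T

# tangentially projected Markov parameters, shape (n, n_L, n_R)
h_hat = W_L.T @ h @ W_R
```

The projected sequence yields a Hankel matrix of size \(n_L s \times n_R s\) instead of \(ps \times ms\), which is cheaper to factorize.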


Parameters

data
NumPy array that contains the first \(n\) Markov parameters of an LTI system. Has to be one- or three-dimensional with either:

data.shape == (n,)

for scalar-valued Markov parameters or:

data.shape == (n, p, m)

for matrix-valued Markov parameters of dimension \(p\times m\), where \(m\) is the number of inputs and \(p\) is the number of outputs of the system.


sampling_time
A number that denotes the sampling time of the system (in seconds).


force_stability
Whether the Markov parameters are zero-padded to double the length in order to enforce Kung's stability assumption. See [Kun78]. Defaults to True.


feedthrough
(Optional) Operator or NumPy array of shape (p, m). The zeroth Markov parameter that defines the feedthrough of the realization. Defaults to None.



error_bounds
Compute the error bounds for all possible reduction orders.

input_projector
Construct the right/input projector \(W_2\).

output_projector
Construct the left/output projector \(W_1\).

reduce
Construct a minimal realization.

cache_region = memory[source]
error_bounds(num_left=None, num_right=None)[source]

Compute the error bounds for all possible reduction orders.

Without tangential projection of the Markov parameters, the \(\mathcal{L}_2\)-error of the Markov parameters \(\epsilon\) is bounded by

\[\epsilon^2 = \sum_{i = 1}^{2 s - 1} \lVert C_r A_r^{i - 1} B_r - h_i \rVert_F^2 \leq \sigma_{r + 1}(\mathcal{H}) \sqrt{r + p + m},\]

where \((A_r,B_r,C_r)\) is the reduced realization of order \(r\), \(h_i\in\mathbb{R}^{p\times m}\) is the \(i\)-th Markov parameter and \(\sigma_{r+1}(\mathcal{H})\) is the first neglected singular value of the Hankel matrix of Markov parameters.

With tangential projection, the bound is given by

\[\epsilon^2 = \sum_{i = 1}^{2 s - 1} \lVert C_r A_r^{i - 1} B_r - h_i \rVert_F^2 \leq 4 \left( \sum_{i = n_L + 1}^p \sigma_i^2(\Theta_L) + \sum_{i = n_R + 1}^m \sigma_i^2(\Theta_R) \right) + 2 \sigma_{r + 1}(\mathcal{H}) \sqrt{r + n_L + n_R},\]

where \(\Theta_L\) and \(\Theta_R\) are the matrices of horizontally and vertically stacked Markov parameters, respectively. See [KG16] (Thm. 3.4) for details.
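The first (non-tangential) bound can be evaluated for all orders at once from the singular values of the Hankel matrix; a small sketch with hypothetical values:

```python
import numpy as np

# hypothetical Hankel singular values and system dimensions (p outputs, m inputs)
sv = np.array([10.0, 5.0, 1.0, 0.1, 0.01])
p, m = 2, 3

# bound on the squared L2 Markov-parameter error for each order r = 1, ..., len(sv) - 1:
# eps^2 <= sigma_{r+1} * sqrt(r + p + m)
r = np.arange(1, len(sv))
bounds_sq = sv[1:] * np.sqrt(r + p + m)
```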


input_projector(num_right)[source]

Construct the right/input projector \(W_2\).


output_projector(num_left)[source]

Construct the left/output projector \(W_1\).

reduce(r=None, tol=None, num_left=None, num_right=None)[source]

Construct a minimal realization.



Parameters

r
Order of the reduced model if tol is None, maximum order if tol is specified.


tol
Tolerance for the error bound.


num_left
Number of left (output) directions for tangential projection.


num_right
Number of right (input) directions for tangential projection.



Returns

rom
Reduced-order LTIModel.