pymor.reductors package

Submodules

basic module


class pymor.reductors.basic.DelayLTIPGReductor(fom, W, V, E_biorthonormal=False)[source]

Bases: pymor.reductors.basic.ProjectionBasedReductor

Petrov-Galerkin projection of a LinearDelayModel.

Parameters

fom

The full order Model to reduce.

W

The basis of the test space.

V

The basis of the ansatz space.

E_biorthonormal

If True, no E matrix will be assembled for the reduced Model. Set to True if W and V are biorthonormal w.r.t. fom.E.

Methods

DelayLTIPGReductor

build_rom, extend_basis, project_operators, project_operators_to_subbasis, reconstruct

ProjectionBasedReductor

assemble_error_estimator, assemble_error_estimator_for_subbasis, reduce

BasicObject

disable_logging, enable_logging

reconstruct(u, basis='V')[source]

Reconstruct high-dimensional vector from reduced vector u.


class pymor.reductors.basic.InstationaryRBReductor(fom, RB=None, product=None, initial_data_product=None, product_is_mass=False, check_orthonormality=None, check_tol=None)[source]

Bases: pymor.reductors.basic.ProjectionBasedReductor

Galerkin projection of an InstationaryModel.

Parameters

fom

The full order Model to reduce.

RB

The basis of the reduced space onto which to project. If None an empty basis is used.

product

Inner product Operator w.r.t. which RB is orthonormalized. If None, the Euclidean inner product is used.

initial_data_product

Inner product Operator w.r.t. which the initial_data of fom is orthogonally projected. If None, the Euclidean inner product is used.

product_is_mass

If True, no mass matrix for the reduced Model is assembled. Set to True if RB is orthonormal w.r.t. the mass matrix of fom.

check_orthonormality

See ProjectionBasedReductor.

check_tol

See ProjectionBasedReductor.

Methods

InstationaryRBReductor

build_rom, project_operators, project_operators_to_subbasis

ProjectionBasedReductor

assemble_error_estimator, assemble_error_estimator_for_subbasis, extend_basis, reconstruct, reduce

BasicObject

disable_logging, enable_logging


class pymor.reductors.basic.LTIPGReductor(fom, W, V, E_biorthonormal=False)[source]

Bases: pymor.reductors.basic.ProjectionBasedReductor

Petrov-Galerkin projection of an LTIModel.

Parameters

fom

The full order Model to reduce.

W

The basis of the test space.

V

The basis of the ansatz space.

E_biorthonormal

If True, no E matrix will be assembled for the reduced Model. Set to True if W and V are biorthonormal w.r.t. fom.E.

Methods

LTIPGReductor

build_rom, extend_basis, project_operators, project_operators_to_subbasis, reconstruct

ProjectionBasedReductor

assemble_error_estimator, assemble_error_estimator_for_subbasis, reduce

BasicObject

disable_logging, enable_logging

reconstruct(u, basis='V')[source]

Reconstruct high-dimensional vector from reduced vector u.
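A minimal sketch of how such a Petrov-Galerkin reduction might be driven. The toy matrices and the randomly chosen, orthonormalized bases below are purely illustrative assumptions; in practice W and V would come from a structure-exploiting algorithm such as balanced truncation or rational interpolation.

    import numpy as np
    import scipy.sparse as sps

    from pymor.algorithms.gram_schmidt import gram_schmidt
    from pymor.models.iosys import LTIModel
    from pymor.reductors.basic import LTIPGReductor

    # toy full-order model (illustrative data only)
    n = 100
    A = sps.diags([-2. * np.ones(n), np.ones(n - 1), np.ones(n - 1)], [0, -1, 1])
    B = np.ones((n, 1))
    C = np.ones((1, n))
    fom = LTIModel.from_matrices(A, B, C)

    # illustrative ansatz and test bases of dimension 10
    V = gram_schmidt(fom.A.source.random(10))
    W = gram_schmidt(fom.A.source.random(10))

    pg = LTIPGReductor(fom, W, V)
    rom = pg.reduce()   # reduced LTIModel of order 10
    # reduced state vectors can be lifted back to the full space via pg.reconstruct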


class pymor.reductors.basic.ProjectionBasedReductor(**wrapper_kwargs)[source]

Bases: pymor.core.base.BasicObject

Generic projection based reductor.

Parameters

fom

The full order Model to reduce.

bases

A dict of VectorArrays of basis vectors.

products

A dict of inner product Operators w.r.t. which the corresponding bases are orthonormalized. A value of None corresponds to orthonormalization of the basis w.r.t. the Euclidean inner product.

check_orthonormality

If True, check if bases which have a corresponding entry in the products dict are orthonormal w.r.t. the given inner product. After each basis extension, orthonormality is checked again.

check_tol

If check_orthonormality is True, the numerical tolerance with which the checks are performed.

Methods

ProjectionBasedReductor

assemble_error_estimator, assemble_error_estimator_for_subbasis, build_rom, extend_basis, project_operators, project_operators_to_subbasis, reconstruct, reduce

BasicObject

disable_logging, enable_logging

reconstruct(u, basis='RB')[source]

Reconstruct high-dimensional vector from reduced vector u.


class pymor.reductors.basic.SOLTIPGReductor(fom, W, V, M_biorthonormal=False)[source]

Bases: pymor.reductors.basic.ProjectionBasedReductor

Petrov-Galerkin projection of a SecondOrderModel.

Parameters

fom

The full order Model to reduce.

W

The basis of the test space.

V

The basis of the ansatz space.

M_biorthonormal

If True, no mass matrix will be assembled for the reduced Model. Set to True if W and V are biorthonormal w.r.t. fom.M.

Methods

SOLTIPGReductor

build_rom, extend_basis, project_operators, project_operators_to_subbasis, reconstruct

ProjectionBasedReductor

assemble_error_estimator, assemble_error_estimator_for_subbasis, reduce

BasicObject

disable_logging, enable_logging

reconstruct(u, basis='V')[source]

Reconstruct high-dimensional vector from reduced vector u.


class pymor.reductors.basic.StationaryRBReductor(fom, RB=None, product=None, check_orthonormality=None, check_tol=None)[source]

Bases: pymor.reductors.basic.ProjectionBasedReductor

Galerkin projection of a StationaryModel.

Parameters

fom

The full order Model to reduce.

RB

The basis of the reduced space onto which to project. If None an empty basis is used.

product

Inner product Operator w.r.t. which RB is orthonormalized. If None, the Euclidean inner product is used.

check_orthonormality

See ProjectionBasedReductor.

check_tol

See ProjectionBasedReductor.

Methods

StationaryRBReductor

build_rom, project_operators, project_operators_to_subbasis

ProjectionBasedReductor

assemble_error_estimator, assemble_error_estimator_for_subbasis, extend_basis, reconstruct, reduce

BasicObject

disable_logging, enable_logging
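A rough sketch of the typical workflow. It assumes fom is a parametric StationaryModel coming from pyMOR's builtin CG discretizer (which provides h1_0_semi_product) and that its parameters range over [0.1, 1]; both are assumptions, not requirements of the reductor.

    from pymor.reductors.basic import StationaryRBReductor

    reductor = StationaryRBReductor(fom, product=fom.h1_0_semi_product)

    # enrich the basis with a few snapshots; a greedy algorithm would
    # normally drive this loop
    for mu in fom.parameters.space(0.1, 1.).sample_randomly(5):
        reductor.extend_basis(fom.solve(mu))

    rom = reductor.reduce()
    u_rb = rom.solve(mu)               # reduced coefficients (here: last sampled mu)
    U_rb = reductor.reconstruct(u_rb)  # high-dimensional representation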


pymor.reductors.basic.extend_basis(U, basis, product=None, method='gram_schmidt', pod_modes=1, pod_orthonormalize=True, copy_U=True)[source]

bt module


class pymor.reductors.bt.BRBTReductor(fom, gamma=1, mu=None, solver_options=None)[source]

Bases: pymor.reductors.bt.GenericBTReductor

Bounded Real (BR) Balanced Truncation reductor.

See [A05] (Section 7.5.3) and [OJ88].

Parameters

fom

The full-order LTIModel to reduce.

gamma

Upper bound for the \(\mathcal{H}_\infty\)-norm.

mu

Parameter values.

solver_options

The solver options to use to solve the positive Riccati equations.

_gramians()[source]

Return low-rank Cholesky factors of Gramians.

error_bounds()[source]

Returns error bounds for all possible reduced orders.


class pymor.reductors.bt.BTReductor(fom, mu=None)[source]

Bases: pymor.reductors.bt.GenericBTReductor

Standard (Lyapunov) Balanced Truncation reductor.

See Section 7.3 in [A05].

Parameters

fom

The full-order LTIModel to reduce.

mu

Parameter values.

_gramians()[source]

Return low-rank Cholesky factors of Gramians.

error_bounds()[source]

Returns error bounds for all possible reduced orders.


class pymor.reductors.bt.GenericBTReductor(fom, mu=None)[source]

Bases: pymor.core.base.BasicObject

Generic Balanced Truncation reductor.

Parameters

fom

The full-order LTIModel to reduce.

mu

Parameter values.

_gramians()[source]

Return low-rank Cholesky factors of Gramians.

_sv_U_V()[source]

Return singular values and vectors.

error_bounds()[source]

Returns error bounds for all possible reduced orders.

reconstruct(u)[source]

Reconstruct high-dimensional vector from reduced vector u.

reduce(r=None, tol=None, projection='bfsr')[source]

Generic Balanced Truncation.

Parameters

r

Order of the reduced model if tol is None, maximum order if tol is specified.

tol

Tolerance for the error bound if r is None.

projection

Projection method used:

  • 'sr': square root method

  • 'bfsr': balancing-free square root method (default, since it avoids scaling by singular values and orthogonalizes the projection matrices, which might make it more accurate than the square root method)

  • 'biorth': like the balancing-free square root method, except it biorthogonalizes the projection matrices (using gram_schmidt_biorth)

Returns

rom

Reduced-order model.
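For illustration, a typical invocation might look as follows; fom is assumed to be an asymptotically stable LTIModel, and the relative \(\mathcal{H}_2\)-error at the end is just one way to assess the result.

    from pymor.reductors.bt import BTReductor

    bt = BTReductor(fom)
    rom = bt.reduce(r=10, projection='bfsr')   # fixed reduced order
    # alternatively, let the error bound pick the order:
    # rom = bt.reduce(tol=1e-5)

    err = (fom - rom).h2_norm() / fom.h2_norm()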


class pymor.reductors.bt.LQGBTReductor(fom, mu=None, solver_options=None)[source]

Bases: pymor.reductors.bt.GenericBTReductor

Linear Quadratic Gaussian (LQG) Balanced Truncation reductor.

See Section 3 in [MG91].

Parameters

fom

The full-order LTIModel to reduce.

mu

Parameter values.

solver_options

The solver options to use to solve the Riccati equations.

_gramians()[source]

Return low-rank Cholesky factors of Gramians.

error_bounds()[source]

Returns error bounds for all possible reduced orders.

coercive module


class pymor.reductors.coercive.CoerciveRBEstimator(*args, **kwargs)[source]

Bases: pymor.core.base.ImmutableObject

Instantiated by CoerciveRBReductor.

Not to be used directly.

Methods

CoerciveRBEstimator

estimate, estimate_error, restricted_to_subbasis

ImmutableObject

with_, __setattr__

BasicObject

disable_logging, enable_logging


class pymor.reductors.coercive.CoerciveRBReductor(fom, RB=None, product=None, coercivity_estimator=None, check_orthonormality=None, check_tol=None)[source]

Bases: pymor.reductors.basic.StationaryRBReductor

Reduced Basis reductor for StationaryModels with coercive linear operator.

The only addition to StationaryRBReductor is an error estimator which evaluates the dual norm of the residual with respect to a given inner product. For the reduction of the residual we use ResidualReductor for improved numerical stability [BEOR14].

Parameters

fom

The Model which is to be reduced.

RB

VectorArray containing the reduced basis on which to project.

product

Inner product for the orthonormalization of RB, the projection of the Operators given by vector_ranged_operators and for the computation of Riesz representatives of the residual. If None, the Euclidean product is used.

coercivity_estimator

None or a ParameterFunctional returning a lower bound for the coercivity constant of the given problem. Note that the computed error estimate is only guaranteed to be an upper bound for the error when an appropriate coercivity estimate is specified.

Methods

CoerciveRBReductor

assemble_error_estimator, assemble_error_estimator_for_subbasis

StationaryRBReductor

build_rom, project_operators, project_operators_to_subbasis

ProjectionBasedReductor

extend_basis, reconstruct, reduce

BasicObject

disable_logging, enable_logging
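A hedged sketch of how the estimator is typically put to work. The product, the coercivity expression 'min(diffusion)' and the given training_set are assumptions about the concrete problem, and the exact error-estimation call may differ between pyMOR versions.

    from pymor.parameters.functionals import ExpressionParameterFunctional
    from pymor.reductors.coercive import CoerciveRBReductor

    reductor = CoerciveRBReductor(
        fom,
        product=fom.h1_0_semi_product,
        coercivity_estimator=ExpressionParameterFunctional('min(diffusion)', fom.parameters),
    )
    for mu in training_set:
        reductor.extend_basis(fom.solve(mu))
    rom = reductor.reduce()

    # estimated upper bound on the reduction error for a parameter value mu
    est = rom.estimate_error(mu)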


class pymor.reductors.coercive.SimpleCoerciveRBEstimator(*args, **kwargs)[source]

Bases: pymor.core.base.ImmutableObject

Instantiated by SimpleCoerciveRBReductor.

Not to be used directly.

Methods

SimpleCoerciveRBEstimator

estimate, estimate_error, restricted_to_subbasis

ImmutableObject

with_, __setattr__

BasicObject

disable_logging, enable_logging


class pymor.reductors.coercive.SimpleCoerciveRBReductor(fom, RB=None, product=None, coercivity_estimator=None, check_orthonormality=None, check_tol=None)[source]

Bases: pymor.reductors.basic.StationaryRBReductor

Reductor for linear StationaryModels with affinely decomposed operator and rhs.

Note

The reductor CoerciveRBReductor can be used for arbitrary coercive StationaryModels and offers an improved error estimator with better numerical stability.

The only addition to StationaryRBReductor is an error estimator, which evaluates the norm of the residual with respect to a given inner product.

Parameters

fom

The Model which is to be reduced.

RB

VectorArray containing the reduced basis on which to project.

product

Inner product for the orthonormalization of RB, the projection of the Operators given by vector_ranged_operators and for the computation of Riesz representatives of the residual. If None, the Euclidean product is used.

coercivity_estimator

None or a ParameterFunctional returning a lower bound for the coercivity constant of the given problem. Note that the computed error estimate is only guaranteed to be an upper bound for the error when an appropriate coercivity estimate is specified.

Methods

SimpleCoerciveRBReductor

assemble_error_estimator, assemble_error_estimator_for_subbasis

StationaryRBReductor

build_rom, project_operators, project_operators_to_subbasis

ProjectionBasedReductor

extend_basis, reconstruct, reduce

BasicObject

disable_logging, enable_logging

h2 module

Reductors based on H2-norm.


class pymor.reductors.h2.GenericIRKAReductor(fom, mu=None)[source]

Bases: pymor.core.base.BasicObject

Generic IRKA related reductor.

Parameters

fom

The full-order Model to reduce.

mu

Parameter values.

reconstruct(u)[source]

Reconstruct high-dimensional vector from reduced vector u.


class pymor.reductors.h2.IRKAReductor(fom, mu=None)[source]

Bases: pymor.reductors.h2.GenericIRKAReductor

Iterative Rational Krylov Algorithm reductor.

Parameters

fom

The full-order LTIModel to reduce.

mu

Parameter values.

reduce(rom0_params, tol=0.0001, maxit=100, num_prev=1, force_sigma_in_rhp=False, projection='orth', conv_crit='sigma', compute_errors=False)[source]

Reduce using IRKA.

See [GAB08] (Algorithm 4.1) and [ABG10] (Algorithm 1).

Parameters

rom0_params

Can be:

  • order of the reduced model (a positive integer),

  • initial interpolation points (a 1D NumPy array),

  • dict with 'sigma', 'b', 'c' as keys mapping to initial interpolation points (a 1D NumPy array), right tangential directions (NumPy array of shape (len(sigma), fom.dim_input)), and left tangential directions (NumPy array of shape (len(sigma), fom.dim_output)),

  • initial reduced-order model (LTIModel).

If the order of reduced model is given, initial interpolation data is generated randomly.

tol

Tolerance for the convergence criterion.

maxit

Maximum number of iterations.

num_prev

Number of previous iterations to compare the current iteration to. A larger number can avoid occasional cyclic behavior of IRKA.

force_sigma_in_rhp

If False, new interpolation points are reflections of the current reduced-order model’s poles. Otherwise, only poles in the left half-plane are reflected.

projection

Projection method:

  • 'orth': projection matrices are orthogonalized with respect to the Euclidean inner product

  • 'biorth': projection matrices are biorthogonalized with respect to the E product

  • 'arnoldi': projection matrices are orthogonalized using the Arnoldi process (available only for SISO systems).

conv_crit

Convergence criterion:

  • 'sigma': relative change in interpolation points

  • 'h2': relative \(\mathcal{H}_2\) distance of reduced-order models

compute_errors

Should the relative \(\mathcal{H}_2\)-errors of intermediate reduced-order models be computed.

Warning

Computing \(\mathcal{H}_2\)-errors is expensive. Use this option only if necessary.

Returns

rom

Reduced-order LTIModel.
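A minimal illustrative call, assuming fom is an asymptotically stable LTIModel:

    from pymor.reductors.h2 import IRKAReductor

    irka = IRKAReductor(fom)
    rom = irka.reduce(10, conv_crit='h2')        # order-10 ROM from random initial data
    err = (fom - rom).h2_norm() / fom.h2_norm()  # relative H2-error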


class pymor.reductors.h2.OneSidedIRKAReductor(fom, version, mu=None)[source]

Bases: pymor.reductors.h2.GenericIRKAReductor

One-Sided Iterative Rational Krylov Algorithm reductor.

Parameters

fom

The full-order LTIModel to reduce.

version

Version of the one-sided IRKA:

  • 'V': Galerkin projection using the input Krylov subspace,

  • 'W': Galerkin projection using the output Krylov subspace.

mu

Parameter values.

reduce(rom0_params, tol=0.0001, maxit=100, num_prev=1, force_sigma_in_rhp=False, projection='orth', conv_crit='sigma', compute_errors=False)[source]

Reduce using one-sided IRKA.

Parameters

rom0_params

Can be:

  • order of the reduced model (a positive integer),

  • initial interpolation points (a 1D NumPy array),

  • dict with 'sigma', 'b', 'c' as keys mapping to initial interpolation points (a 1D NumPy array), right tangential directions (NumPy array of shape (len(sigma), fom.dim_input)), and left tangential directions (NumPy array of shape (len(sigma), fom.dim_output)),

  • initial reduced-order model (LTIModel).

If the order of reduced model is given, initial interpolation data is generated randomly.

tol

Tolerance for the largest change in interpolation points.

maxit

Maximum number of iterations.

num_prev

Number of previous iterations to compare the current iteration to. A larger number can avoid occasional cyclic behavior.

force_sigma_in_rhp

If False, new interpolation points are reflections of the current reduced-order model’s poles. Otherwise, only poles in the left half-plane are reflected.

projection

Projection method:

  • 'orth': projection matrix is orthogonalized with respect to the Euclidean inner product,

  • 'Eorth': projection matrix is orthogonalized with respect to the E product.

conv_crit

Convergence criterion:

  • 'sigma': relative change in interpolation points,

  • 'h2': relative \(\mathcal{H}_2\) distance of reduced-order models.

compute_errors

Should the relative \(\mathcal{H}_2\)-errors of intermediate reduced-order models be computed.

Warning

Computing \(\mathcal{H}_2\)-errors is expensive. Use this option only if necessary.

Returns

rom

Reduced-order LTIModel.


class pymor.reductors.h2.TFIRKAReductor(fom, mu=None)[source]

Bases: pymor.reductors.h2.GenericIRKAReductor

Realization-independent IRKA reductor.

See [BG12].

Parameters

fom

The full-order Model with eval_tf and eval_dtf methods.

mu

Parameter values.

reconstruct(u)[source]

Reconstruct high-dimensional vector from reduced vector u.

reduce(rom0_params, tol=0.0001, maxit=100, num_prev=1, force_sigma_in_rhp=False, conv_crit='sigma', compute_errors=False)[source]

Reduce using TF-IRKA.

Parameters

rom0_params

Can be:

  • order of the reduced model (a positive integer),

  • initial interpolation points (a 1D NumPy array),

  • dict with 'sigma', 'b', 'c' as keys mapping to initial interpolation points (a 1D NumPy array), right tangential directions (NumPy array of shape (len(sigma), fom.dim_input)), and left tangential directions (NumPy array of shape (len(sigma), fom.dim_output)),

  • initial reduced-order model (LTIModel).

If the order of reduced model is given, initial interpolation data is generated randomly.

tol

Tolerance for the convergence criterion.

maxit

Maximum number of iterations.

num_prev

Number of previous iterations to compare the current iteration to. A larger number can avoid occasional cyclic behavior of TF-IRKA.

force_sigma_in_rhp

If False, new interpolation points are reflections of the current reduced-order model’s poles. Otherwise, only poles in the left half-plane are reflected.

conv_crit

Convergence criterion:

  • 'sigma': relative change in interpolation points

  • 'h2': relative \(\mathcal{H}_2\) distance of reduced-order models

compute_errors

Should the relative \(\mathcal{H}_2\)-errors of intermediate reduced-order models be computed.

Warning

Computing \(\mathcal{H}_2\)-errors is expensive. Use this option only if necessary.

Returns

rom

Reduced-order LTIModel.


class pymor.reductors.h2.TSIAReductor(fom, mu=None)[source]

Bases: pymor.reductors.h2.GenericIRKAReductor

Two-Sided Iteration Algorithm reductor.

Parameters

fom

The full-order LTIModel to reduce.

mu

Parameter values.

reduce(rom0_params, tol=0.0001, maxit=100, num_prev=1, projection='orth', conv_crit='sigma', compute_errors=False)[source]

Reduce using TSIA.

See [XZ11] (Algorithm 1) and [BKS11].

In exact arithmetic, TSIA is equivalent to IRKA (under some assumptions on the poles of the reduced model). The main difference in implementation is that TSIA computes the Schur decomposition of the reduced matrices, while IRKA computes the eigenvalue decomposition. Therefore, TSIA might behave better for non-normal reduced matrices.

Parameters

rom0_params

Can be:

  • order of the reduced model (a positive integer),

  • initial interpolation points (a 1D NumPy array),

  • dict with 'sigma', 'b', 'c' as keys mapping to initial interpolation points (a 1D NumPy array), right tangential directions (NumPy array of shape (len(sigma), fom.dim_input)), and left tangential directions (NumPy array of shape (len(sigma), fom.dim_output)),

  • initial reduced-order model (LTIModel).

If the order of reduced model is given, initial interpolation data is generated randomly.

tol

Tolerance for the convergence criterion.

maxit

Maximum number of iterations.

num_prev

Number of previous iterations to compare the current iteration to. A larger number can avoid occasional cyclic behavior of TSIA.

projection

Projection method:

  • 'orth': projection matrices are orthogonalized with respect to the Euclidean inner product

  • 'biorth': projection matrices are biorthogonalized with respect to the E product

conv_crit

Convergence criterion:

  • 'sigma': relative change in interpolation points

  • 'h2': relative \(\mathcal{H}_2\) distance of reduced-order models

compute_errors

Should the relative \(\mathcal{H}_2\)-errors of intermediate reduced-order models be computed.

Warning

Computing \(\mathcal{H}_2\)-errors is expensive. Use this option only if necessary.

Returns

rom

Reduced LTIModel.


pymor.reductors.h2._lti_to_poles_b_c(rom)[source]

Compute poles and residues.

Parameters

rom

Reduced LTIModel (consisting of NumpyMatrixOperators).

Returns

poles

1D NumPy array of poles.

b

NumPy array of shape (rom.order, rom.dim_input).

c

NumPy array of shape (rom.order, rom.dim_output).


pymor.reductors.h2._poles_b_c_to_lti(poles, b, c)[source]

Create an LTIModel from poles and residue rank-1 factors.

Returns an LTIModel with real matrices such that its transfer function is

\[\sum_{i = 1}^r \frac{c_i b_i^T}{s - \lambda_i}\]

where \(\lambda_i, b_i, c_i\) are the poles and residue rank-1 factors.

Parameters

poles

Sequence of poles.

b

NumPy array of right residue factors, of shape (len(poles), m), where m is the number of inputs of the resulting LTIModel.

c

NumPy array of left residue factors, of shape (len(poles), p), where p is the number of outputs of the resulting LTIModel.

Returns

LTIModel.

interpolation module


class pymor.reductors.interpolation.DelayBHIReductor(fom, mu=None)[source]

Bases: pymor.reductors.interpolation.GenericBHIReductor

Bitangential Hermite interpolation for LinearDelayModels.

Parameters

fom

The full-order LinearDelayModel to reduce.

mu

Parameter values.

_PGReductor

alias of pymor.reductors.basic.DelayLTIPGReductor


class pymor.reductors.interpolation.GenericBHIReductor(fom, mu=None)[source]

Bases: pymor.core.base.BasicObject

Generic bitangential Hermite interpolation reductor.

This is a generic reductor for reducing any linear InputStateOutputModel whose transfer function can be written in the generalized coprime factorization \(H(s) = \mathcal{C}(s) \mathcal{K}(s)^{-1} \mathcal{B}(s)\) as in [BG09]. Interpolation is limited to the values and first derivatives of the transfer function. Interpolation points are assumed to be pairwise distinct.

In particular, given interpolation points \(\sigma_i\), right tangential directions \(b_i\), and left tangential directions \(c_i\), for \(i = 1, 2, \ldots, r\), which are closed under conjugation (if \(\sigma_i\) is real, then so are \(b_i\) and \(c_i\); if \(\sigma_i\) is complex, there is \(\sigma_j\) such that \(\sigma_j = \overline{\sigma_i}\), \(b_j = \overline{b_i}\), \(c_j = \overline{c_i}\)), this reductor finds a transfer function \(\hat{H}\) such that

\[\begin{split}H(\sigma_i) b_i & = \hat{H}(\sigma_i) b_i, \\ c_i^T H(\sigma_i) & = c_i^T \hat{H}(\sigma_i), \\ c_i^T H'(\sigma_i) b_i & = c_i^T \hat{H}'(\sigma_i) b_i,\end{split}\]

for all \(i = 1, 2, \ldots, r\).

Parameters

fom

The full-order Model to reduce.

mu

Parameter values.

_PGReductor

alias of pymor.reductors.basic.ProjectionBasedReductor

reconstruct(u)[source]

Reconstruct high-dimensional vector from reduced vector u.

reduce(sigma, b, c, projection='orth')[source]

Bitangential Hermite interpolation.

Parameters

sigma

Interpolation points (closed under conjugation), sequence of length r.

b

Right tangential directions, NumPy array of shape (r, fom.dim_input).

c

Left tangential directions, NumPy array of shape (r, fom.dim_output).

projection

Projection method:

  • 'orth': projection matrices are orthogonalized with respect to the Euclidean inner product

  • 'biorth': projection matrices are biorthogonalized with respect to the E product

Returns

rom

Reduced-order model.


class pymor.reductors.interpolation.LTIBHIReductor(fom, mu=None)[source]

Bases: pymor.reductors.interpolation.GenericBHIReductor

Bitangential Hermite interpolation for LTIModels.

Parameters

fom

The full-order LTIModel to reduce.

mu

Parameter values.

_PGReductor

alias of pymor.reductors.basic.LTIPGReductor

reduce(sigma, b, c, projection='orth')[source]

Bitangential Hermite interpolation.

Parameters

sigma

Interpolation points (closed under conjugation), sequence of length r.

b

Right tangential directions, NumPy array of shape (r, fom.dim_input).

c

Left tangential directions, NumPy array of shape (r, fom.dim_output).

projection

Projection method:

  • 'orth': projection matrices are orthogonalized with respect to the Euclidean inner product

  • 'biorth': projection matrices are biorthogonalized with respect to the E product

  • 'arnoldi': projection matrices are orthogonalized using the rational Arnoldi process (available only for SISO systems).

Returns

rom

Reduced-order model.
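For illustration, interpolation data closed under conjugation could be set up as follows; the points and tangential directions are arbitrary choices, not recommendations, and fom is assumed to be an LTIModel.

    import numpy as np

    from pymor.reductors.interpolation import LTIBHIReductor

    sigma = np.array([1j, -1j, 1.])     # interpolation points, closed under conjugation
    b = np.ones((3, fom.dim_input))     # right tangential directions
    c = np.ones((3, fom.dim_output))    # left tangential directions
    # directions belonging to conjugate points must themselves be conjugate;
    # the real directions above satisfy this trivially

    rom = LTIBHIReductor(fom).reduce(sigma, b, c, projection='orth')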


class pymor.reductors.interpolation.SOBHIReductor(fom, mu=None)[source]

Bases: pymor.reductors.interpolation.GenericBHIReductor

Bitangential Hermite interpolation for SecondOrderModels.

Parameters

fom

The full-order SecondOrderModel to reduce.

mu

Parameter values.

_PGReductor

alias of pymor.reductors.basic.SOLTIPGReductor


class pymor.reductors.interpolation.TFBHIReductor(fom, mu=None)[source]

Bases: pymor.core.base.BasicObject

Loewner bitangential Hermite interpolation reductor.

See [BG12].

Parameters

fom

The Model with eval_tf and eval_dtf methods.

mu

Parameter values.

reconstruct(u)[source]

Reconstruct high-dimensional vector from reduced vector u.

reduce(sigma, b, c)[source]

Realization-independent tangential Hermite interpolation.

Parameters

sigma

Interpolation points (closed under conjugation), sequence of length r.

b

Right tangential directions, NumPy array of shape (r, fom.dim_input).

c

Left tangential directions, NumPy array of shape (r, fom.dim_output).

Returns

lti

The reduced-order LTIModel interpolating the transfer function of fom.

neural_network module


class pymor.reductors.neural_network.CustomDataset(training_data)[source]

Bases: torch.utils.data.dataset.Dataset

Class that represents the dataset to use in PyTorch.

Parameters

training_data

Set of training parameters and the respective coefficients of the solution in the reduced basis.


class pymor.reductors.neural_network.EarlyStoppingScheduler(size_training_validation_set, patience=10, delta=0.0)[source]

Bases: pymor.core.base.BasicObject

Class for performing early stopping in training of neural networks.

If the validation loss does not decrease over a certain amount of epochs, the training should be aborted to avoid overfitting the training data. This class implements an early stopping scheduler that recommends to stop the training process if the validation loss did not decrease by at least delta over patience epochs.

Parameters

size_training_validation_set

Combined size of the training and validation sets.

patience

Number of epochs with non-decreasing validation loss that are allowed before the training process is stopped early.

delta

Minimal amount of decrease in the validation loss that is required to reset the counter of non-decreasing epochs.

__call__(losses, neural_network=None)[source]

Returns True if early stopping of training is suggested.

Parameters

losses

Dictionary of losses on the validation and the training set in the current epoch.

neural_network

Neural network that produces the current validation loss.

Returns

True if early stopping is suggested, False otherwise.
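A rough sketch of how such a scheduler might be driven from a custom training loop. The helper functions and the key names of the losses dict are assumptions made for illustration and are not part of the documented interface.

    from pymor.reductors.neural_network import EarlyStoppingScheduler

    scheduler = EarlyStoppingScheduler(len(training_set) + len(validation_set),
                                       patience=10, delta=0.)
    for epoch in range(max_epochs):
        train_loss = train_one_epoch(net)   # hypothetical training helper
        val_loss = validate(net)            # hypothetical validation helper
        # the key names of the losses dict are assumed here
        if scheduler({'train': train_loss, 'val': val_loss}, neural_network=net):
            break   # validation loss stagnated for `patience` epochs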


class pymor.reductors.neural_network.NeuralNetworkInstationaryReductor(fom, training_set, validation_set=None, validation_ratio=0.1, basis_size=None, rtol=0.0, atol=0.0, l2_err=0.0, pod_params=None, ann_mse='like_basis')[source]

Bases: pymor.reductors.neural_network.NeuralNetworkReductor

Reduced Basis reductor for instationary problems relying on artificial neural networks.

This is a reductor that constructs a reduced basis using proper orthogonal decomposition and trains a neural network that approximates the mapping from parameter and time space to coefficients of the full-order solution in the reduced basis. The approach is described in [WHR19].

Parameters

fom

The full-order Model to reduce.

training_set

Set of parameter values to use for POD and training of the neural network.

validation_set

Set of parameter values to use for validation in the training of the neural network.

validation_ratio

Fraction of the training set to use for validation in the training of the neural network (only used if no validation set is provided).

basis_size

Desired size of the reduced basis. If None, rtol, atol or l2_err must be provided.

rtol

Relative tolerance the basis should guarantee on the training set.

atol

Absolute tolerance the basis should guarantee on the training set.

l2_err

L2-approximation error the basis should not exceed on the training set.

pod_params

Dict of additional parameters for the POD-method.

ann_mse

If 'like_basis', the mean squared error of the neural network on the training set should not exceed the error of projecting onto the basis. If None, the neural network with smallest validation error is used to build the ROM. If a tolerance is prescribed, the mean squared error of the neural network on the training set should not exceed this threshold. Training is interrupted if a neural network that undercuts the error tolerance is found.

_build_rom()[source]

Construct the reduced order model.

_compute_layers_sizes(hidden_layers)[source]

Compute the number of neurons in the layers of the neural network (make sure to increase the input dimension to account for the time).

_compute_sample(mu, u, reduced_basis)[source]

Transform parameter and corresponding solution to tensors (make sure to include the time instances in the inputs).

build_basis()[source]

Compute a reduced basis using proper orthogonal decomposition.


class pymor.reductors.neural_network.NeuralNetworkReductor(fom, training_set, validation_set=None, validation_ratio=0.1, basis_size=None, rtol=0.0, atol=0.0, l2_err=0.0, pod_params=None, ann_mse='like_basis')[source]

Bases: pymor.core.base.BasicObject

Reduced Basis reductor relying on artificial neural networks.

This is a reductor that constructs a reduced basis using proper orthogonal decomposition and trains a neural network that approximates the mapping from parameter space to coefficients of the full-order solution in the reduced basis. The approach is described in [HU18].

Parameters

fom

The full-order Model to reduce.

training_set

Set of parameter values to use for POD and training of the neural network.

validation_set

Set of parameter values to use for validation in the training of the neural network.

validation_ratio

Fraction of the training set to use for validation in the training of the neural network (only used if no validation set is provided).

basis_size

Desired size of the reduced basis. If None, rtol, atol or l2_err must be provided.

rtol

Relative tolerance the basis should guarantee on the training set.

atol

Absolute tolerance the basis should guarantee on the training set.

l2_err

L2-approximation error the basis should not exceed on the training set.

pod_params

Dict of additional parameters for the POD-method.

ann_mse

If 'like_basis', the mean squared error of the neural network on the training set should not exceed the error of projecting onto the basis. If None, the neural network with smallest validation error is used to build the ROM. If a tolerance is prescribed, the mean squared error of the neural network on the training set should not exceed this threshold. Training is interrupted if a neural network that undercuts the error tolerance is found.

_build_rom()[source]

Construct the reduced order model.

_compute_layers_sizes(hidden_layers)[source]

Compute the number of neurons in the layers of the neural network.

_compute_sample(mu, u, reduced_basis)[source]

Transform parameter and corresponding solution to tensors.

_train(layers, activation_function, optimizer, epochs, batch_size, learning_rate)[source]

Perform a single training iteration and return the resulting neural network.

build_basis()[source]

Compute a reduced basis using proper orthogonal decomposition.

reconstruct(u)[source]

Reconstruct high-dimensional vector from reduced vector u.

reduce(hidden_layers='[(N+P)*3, (N+P)*3]', activation_function=torch.tanh, optimizer=torch.optim.LBFGS, epochs=1000, batch_size=20, learning_rate=1.0, restarts=10, seed=0)[source]

Reduce by training artificial neural networks.

Parameters

hidden_layers

Number of neurons in the hidden layers. Can either be fixed or a Python expression string depending on the reduced basis size N and the total dimension of the Parameters P.

activation_function

Activation function to use between the hidden layers.

optimizer

Algorithm to use as optimizer during training.

epochs

Maximum number of epochs for training.

batch_size

Batch size to use if optimizer allows mini-batching.

learning_rate

Step size to use in each optimization step.

restarts

Number of restarts of the training algorithm. Since the training results highly depend on the initial starting point, i.e. the initial weights and biases, it is advisable to train multiple neural networks starting from different initial values and to choose the one that performs best on the validation set.

seed

Seed to use for various functions in PyTorch. Using a fixed seed makes it possible to reproduce previous results.

Returns

rom

Reduced-order NeuralNetworkModel.
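An illustrative end-to-end run; the parameter space, sample sizes and basis size are assumptions about the concrete problem.

    from pymor.reductors.neural_network import NeuralNetworkReductor

    parameter_space = fom.parameters.space(0.1, 1.)
    training_set = parameter_space.sample_uniformly(100)
    validation_set = parameter_space.sample_randomly(20)

    reductor = NeuralNetworkReductor(fom, training_set, validation_set,
                                     basis_size=10, ann_mse='like_basis')
    rom = reductor.reduce(restarts=5)

    mu = parameter_space.sample_randomly(1)[0]
    u = rom.solve(mu)              # reduced coefficients predicted by the network
    U = reductor.reconstruct(u)    # high-dimensional reconstruction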

parabolic module


class pymor.reductors.parabolic.ParabolicRBEstimator(*args, **kwargs)[source]

Bases: pymor.core.base.ImmutableObject

Instantiated by ParabolicRBReductor.

Not to be used directly.

Methods

ParabolicRBEstimator

estimate, estimate_error, restricted_to_subbasis

ImmutableObject

with_, __setattr__

BasicObject

disable_logging, enable_logging


class pymor.reductors.parabolic.ParabolicRBReductor(fom, RB=None, product=None, coercivity_estimator=None, check_orthonormality=None, check_tol=None)[source]

Bases: pymor.reductors.basic.InstationaryRBReductor

Reduced Basis Reductor for parabolic equations.

This reductor uses InstationaryRBReductor for the actual RB-projection. The only addition is the assembly of an error estimator which bounds the discrete l2-in time / energy-in space error similar to [GP05], [HO08] as follows:

\[\left[ C_a^{-1}(\mu)\|e_N(\mu)\|^2 + \sum_{n=1}^{N} \Delta t\|e_n(\mu)\|^2_e \right]^{1/2} \leq \left[ C_a^{-2}(\mu)\Delta t \sum_{n=1}^{N}\|\mathcal{R}^n(u_n(\mu), \mu)\|^2_{e,-1} + C_a^{-1}(\mu)\|e_0\|^2 \right]^{1/2}\]

Here, \(\|\cdot\|\) denotes the norm induced by the problem’s mass matrix (e.g. the L^2-norm), \(\|\cdot\|_e\) is an arbitrary energy norm w.r.t. which the space operator \(A(\mu)\) is coercive, and \(C_a(\mu)\) is a lower bound for its coercivity constant. Finally, \(\mathcal{R}^n\) denotes the implicit Euler timestepping residual for the (fixed) time step size \(\Delta t\),

\[\mathcal{R}^n(u_n(\mu), \mu) := f - M \frac{u_{n}(\mu) - u_{n-1}(\mu)}{\Delta t} - A(u_n(\mu), \mu),\]

where \(M\) denotes the mass operator and \(f\) the source term. The dual norm of the residual is computed using the numerically stable projection from [BEOR14].

Parameters

fom

The InstationaryModel which is to be reduced.

RB

VectorArray containing the reduced basis on which to project.

product

The energy inner product Operator w.r.t. which the reduction error is estimated and RB is orthonormalized.

coercivity_estimator

None or a ParameterFunctional returning a lower bound \(C_a(\mu)\) for the coercivity constant of fom.operator w.r.t. product.

Methods

ParabolicRBReductor

assemble_error_estimator, assemble_error_estimator_for_subbasis

InstationaryRBReductor

build_rom, project_operators, project_operators_to_subbasis

ProjectionBasedReductor

extend_basis, reconstruct, reduce

BasicObject

disable_logging, enable_logging
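A hedged sketch analogous to the coercive case; the product, the coercivity expression and the use of POD modes of a single trajectory for the initial basis are problem-specific assumptions, and mu denotes some given parameter value.

    from pymor.parameters.functionals import ExpressionParameterFunctional
    from pymor.reductors.parabolic import ParabolicRBReductor

    reductor = ParabolicRBReductor(
        fom,                             # an InstationaryModel
        product=fom.h1_0_semi_product,   # energy product (assumes a CG discretization)
        coercivity_estimator=ExpressionParameterFunctional('min(diffusion)', fom.parameters),
    )
    # compress one solution trajectory into 20 POD modes to start the basis
    reductor.extend_basis(fom.solve(mu), method='pod', pod_modes=20)
    rom = reductor.reduce()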

residual module


class pymor.reductors.residual.ImplicitEulerResidualOperator(*args, **kwargs)[source]

Bases: pymor.operators.interface.Operator

Instantiated by ImplicitEulerResidualReductor.

apply(U, U_old, mu=None)[source]

Apply the operator to a VectorArray.

Parameters

U

VectorArray of vectors to which the operator is applied.

mu

The parameter values for which to evaluate the operator.

Returns

VectorArray of the operator evaluations.


class pymor.reductors.residual.ImplicitEulerResidualReductor(RB, operator, mass, dt, rhs=None, product=None)[source]

Bases: pymor.core.base.BasicObject

Reduced basis residual reductor with mass operator for implicit Euler timestepping.

Given an operator, a mass operator and a functional, the concatenation of the residual operator with the Riesz isomorphism is given by:

riesz_residual.apply(U, U_old, mu)
    == product.apply_inverse(operator.apply(U, mu) + 1/dt*mass.apply(U, mu) - 1/dt*mass.apply(U_old, mu)
       - rhs.as_vector(mu))

This reductor determines a low-dimensional subspace of the image of a reduced basis space under riesz_residual using estimate_image_hierarchical, computes an orthonormal basis residual_range of this range space and then returns the Petrov-Galerkin projection

projected_riesz_residual
    == riesz_residual.projected(range_basis=residual_range, source_basis=RB)

of the riesz_residual operator. Given reduced basis coefficient vectors u and u_old, the dual norm of the residual can then be computed as

projected_riesz_residual.apply(u, u_old, mu).norm()

Moreover, a reconstruct method is provided such that

residual_reductor.reconstruct(projected_riesz_residual.apply(u, u_old, mu))
    == riesz_residual.apply(RB.lincomb(u), RB.lincomb(u_old), mu)

Parameters

operator

See definition of riesz_residual.

mass

The mass operator. See definition of riesz_residual.

dt

The time step size. See definition of riesz_residual.

rhs

See definition of riesz_residual. If None, zero right-hand side is assumed.

RB

VectorArray containing a basis of the reduced space onto which to project.

product

Inner product Operator w.r.t. which to compute the Riesz representatives.

reconstruct(u)[source]

Reconstruct high-dimensional residual vector from reduced vector u.


class pymor.reductors.residual.NonProjectedImplicitEulerResidualOperator(*args, **kwargs)[source]

Bases: pymor.reductors.residual.ImplicitEulerResidualOperator

Instantiated by ImplicitEulerResidualReductor.

Not to be used directly.

apply(U, U_old, mu=None)[source]

Apply the operator to a VectorArray.

Parameters

U

VectorArray of vectors to which the operator is applied.

mu

The parameter values for which to evaluate the operator.

Returns

VectorArray of the operator evaluations.


class pymor.reductors.residual.NonProjectedResidualOperator(*args, **kwargs)[source]

Bases: pymor.reductors.residual.ResidualOperator

Instantiated by ResidualReductor.

Not to be used directly.

apply(U, mu=None)[source]

Apply the operator to a VectorArray.

Parameters

U

VectorArray of vectors to which the operator is applied.

mu

The parameter values for which to evaluate the operator.

Returns

VectorArray of the operator evaluations.


class pymor.reductors.residual.ResidualOperator(*args, **kwargs)[source]

Bases: pymor.operators.interface.Operator

Instantiated by ResidualReductor.

apply(U, mu=None)[source]

Apply the operator to a VectorArray.

Parameters

U

VectorArray of vectors to which the operator is applied.

mu

The parameter values for which to evaluate the operator.

Returns

VectorArray of the operator evaluations.


class pymor.reductors.residual.ResidualReductor(RB, operator, rhs=None, product=None, riesz_representatives=False)[source]

Bases: pymor.core.base.BasicObject

Generic reduced basis residual reductor.

Given an operator and a right-hand side, the residual is given by:

residual.apply(U, mu) == operator.apply(U, mu) - rhs.as_range_array(mu)

When operator maps to functionals instead of vectors, we are interested in the Riesz representative of the residual:

residual.apply(U, mu)
    == product.apply_inverse(operator.apply(U, mu) - rhs.as_range_array(mu))

Given a basis RB of a subspace of the source space of operator, this reductor uses estimate_image_hierarchical to determine a low-dimensional subspace containing the image of the subspace under residual (resp. riesz_residual), computes an orthonormal basis residual_range for this range space and then returns the Petrov-Galerkin projection

projected_residual
    == project(residual, range_basis=residual_range, source_basis=RB)

of the residual operator. Given a reduced basis coefficient vector u w.r.t. RB, the (dual) norm of the residual can then be computed as

projected_residual.apply(u, mu).norm()

Moreover, a reconstruct method is provided such that

residual_reductor.reconstruct(projected_residual.apply(u, mu))
    == residual.apply(RB.lincomb(u), mu)

Parameters

RB

VectorArray containing a basis of the reduced space onto which to project.

operator

See definition of residual.

rhs

See definition of residual. If None, zero right-hand side is assumed.

product

Inner product Operator w.r.t. which to orthonormalize and w.r.t. which to compute the Riesz representatives in case operator maps to functionals.

riesz_representatives

If True compute the Riesz representative of the residual.

reconstruct(u)[source]

Reconstruct high-dimensional residual vector from reduced vector u.
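To make the relations above concrete, a rough sketch; it assumes fom is a StationaryModel whose operator maps to functionals, u is a reduced coefficient vector w.r.t. RB, and the projected residual operator described above is obtained via the reductor's reduce method.

    from pymor.reductors.residual import ResidualReductor

    residual_reductor = ResidualReductor(RB, fom.operator, rhs=fom.rhs,
                                         product=product, riesz_representatives=True)
    projected_residual = residual_reductor.reduce()

    # dual norm of the residual for the reduced coefficient vector u
    est = projected_residual.apply(u, mu=mu).norm()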

sobt module


class pymor.reductors.sobt.GenericSOBTpvReductor(fom, mu=None)[source]

Bases: pymor.core.base.BasicObject

Generic Second-Order Balanced Truncation position/velocity reductor.

See [RS08].

Parameters

fom

The full-order SecondOrderModel to reduce.

mu

Parameter values.

_gramians()[source]

Return Gramians.

_projection_matrices_and_singular_values(r, gramians)[source]

Return projection matrices and singular values.

reconstruct(u)[source]

Reconstruct high-dimensional vector from reduced vector u.

reduce(r, projection='bfsr')[source]

Reduce using GenericSOBTpv.

Parameters

r

Order of the reduced model.

projection

Projection method used:

  • 'sr': square root method

  • 'bfsr': balancing-free square root method (default, since it avoids scaling by singular values and orthogonalizes the projection matrices, which might make it more accurate than the square root method)

  • 'biorth': like the balancing-free square root method, except it biorthogonalizes the projection matrices

Returns

rom

Reduced-order SecondOrderModel.


class pymor.reductors.sobt.SOBTReductor(fom, mu=None)[source]

Bases: pymor.core.base.BasicObject

Second-Order Balanced Truncation reductor.

See [CLVV06].

Parameters

fom

The full-order SecondOrderModel to reduce.

mu

Parameter values.

reconstruct(u)[source]

Reconstruct high-dimensional vector from reduced vector u.

reduce(r, projection='bfsr')[source]

Reduce using SOBT.

Parameters

r

Order of the reduced model.

projection

Projection method used:

  • 'sr': square root method

  • 'bfsr': balancing-free square root method (default, since it avoids scaling by singular values and orthogonalizes the projection matrices, which might make it more accurate than the square root method)

  • 'biorth': like the balancing-free square root method, except it biorthogonalizes the projection matrices

Returns

rom

Reduced-order SecondOrderModel.
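Illustrative usage; fom is assumed to be an asymptotically stable SecondOrderModel, and the error computation via to_lti is just one option.

    from pymor.reductors.sobt import SOBTReductor

    rom = SOBTReductor(fom).reduce(10, projection='bfsr')
    err = (fom.to_lti() - rom.to_lti()).h2_norm() / fom.to_lti().h2_norm()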


class pymor.reductors.sobt.SOBTfvReductor(fom, mu=None)[source]

Bases: pymor.core.base.BasicObject

Free-velocity Second-Order Balanced Truncation reductor.

See [MS96].

Parameters

fom

The full-order SecondOrderModel to reduce.

mu

Parameter values.

reconstruct(u)[source]

Reconstruct high-dimensional vector from reduced vector u.

reduce(r, projection='bfsr')[source]

Reduce using SOBTfv.

Parameters

r

Order of the reduced model.

projection

Projection method used:

  • 'sr': square root method

  • 'bfsr': balancing-free square root method (default, since it avoids scaling by singular values and orthogonalizes the projection matrices, which might make it more accurate than the square root method)

  • 'biorth': like the balancing-free square root method, except it biorthogonalizes the projection matrices

Returns

rom

Reduced-order SecondOrderModel.


class pymor.reductors.sobt.SOBTpReductor(fom, mu=None)[source]

Bases: pymor.reductors.sobt.GenericSOBTpvReductor

Second-Order Balanced Truncation position reductor.

See [RS08].

Parameters

fom

The full-order SecondOrderModel to reduce.

mu

Parameter values.

_gramians()[source]

Return Gramians.

_projection_matrices_and_singular_values(r, gramians)[source]

Return projection matrices and singular values.


class pymor.reductors.sobt.SOBTpvReductor(fom, mu=None)[source]

Bases: pymor.reductors.sobt.GenericSOBTpvReductor

Second-Order Balanced Truncation position-velocity reductor.

See [RS08].

Parameters

fom

The full-order SecondOrderModel to reduce.

mu

Parameter values.

_gramians()[source]

Return Gramians.

_projection_matrices_and_singular_values(r, gramians)[source]

Return projection matrices and singular values.


class pymor.reductors.sobt.SOBTvReductor(fom, mu=None)[source]

Bases: pymor.reductors.sobt.GenericSOBTpvReductor

Second-Order Balanced Truncation velocity reductor.

See [RS08].

Parameters

fom

The full-order SecondOrderModel to reduce.

mu

Parameter values.

_gramians()[source]

Return Gramians.

_projection_matrices_and_singular_values(r, gramians)[source]

Return projection matrices and singular values.


class pymor.reductors.sobt.SOBTvpReductor(fom, mu=None)[source]

Bases: pymor.reductors.sobt.GenericSOBTpvReductor

Second-Order Balanced Truncation velocity-position reductor.

See [RS08].

Parameters

fom

The full-order SecondOrderModel to reduce.

mu

Parameter values.

_gramians()[source]

Return Gramians.

_projection_matrices_and_singular_values(r, gramians)[source]

Return projection matrices and singular values.

sor_irka module

IRKA-type reductor for SecondOrderModels.


class pymor.reductors.sor_irka.SORIRKAReductor(fom, mu=None)[source]

Bases: pymor.reductors.h2.GenericIRKAReductor

SOR-IRKA reductor.

Parameters

fom

The full-order SecondOrderModel to reduce.

mu

Parameter values.

reduce(rom0_params, tol=0.0001, maxit=100, num_prev=1, force_sigma_in_rhp=False, projection='orth', conv_crit='sigma', compute_errors=False, irka_options=None)[source]

Reduce using SOR-IRKA.

It uses IRKA as the intermediate reductor to reduce from 2r to r poles. See Section 5.3.2 in [W12].

Parameters

rom0_params

Can be:

  • order of the reduced model (a positive integer),

  • dict with 'sigma', 'b', 'c' as keys mapping to initial interpolation points (a 1D NumPy array), right tangential directions (VectorArray from fom.D.source), and left tangential directions (VectorArray from fom.D.range), all of the same length (the order of the reduced model),

  • initial reduced-order model (LTIModel).

If the order of reduced model is given, initial interpolation data is generated randomly.

tol

Tolerance for the convergence criterion.

maxit

Maximum number of iterations.

num_prev

Number of previous iterations to compare the current iteration to. A larger number can avoid occasional cyclic behavior of IRKA.

force_sigma_in_rhp

If False, new interpolation points are reflections of the current reduced-order model’s poles. Otherwise, only the poles in the left half-plane are reflected.

projection

Projection method:

  • 'orth': projection matrices are orthogonalized with respect to the Euclidean inner product

  • 'biorth': projection matrices are biorthogonalized with respect to the E product

conv_crit

Convergence criterion:

  • 'sigma': relative change in interpolation points

  • 'h2': relative \(\mathcal{H}_2\) distance of reduced-order models

compute_errors

Should the relative \(\mathcal{H}_2\)-errors of intermediate reduced order models be computed.

Warning

Computing \(\mathcal{H}_2\)-errors is expensive. Use this option only if necessary.

irka_options

Dict of options for IRKAReductor.reduce.

Returns

rom

Reduced-order SecondOrderModel.
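A minimal illustrative call, assuming fom is a SecondOrderModel:

    from pymor.reductors.sor_irka import SORIRKAReductor

    rom = SORIRKAReductor(fom).reduce(10, conv_crit='h2', irka_options={'maxit': 50})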