pymor.algorithms.greedy

Module Contents
- class pymor.algorithms.greedy.RBSurrogate(fom, reductor, use_error_estimator, error_norm, extension_params, pool)
Bases: WeakGreedySurrogate

Surrogate for the weak_greedy error used in rb_greedy.

Not intended to be used directly.
Methods

- evaluate – Evaluate the surrogate for given parameters.
- extend – Extend the approximation basis.
- evaluate(mus, return_all_values=False)
Evaluate the surrogate for given parameters.
- Parameters:
  - mus – List of parameters for which to estimate the approximation error. When parallelization is used, mus can be a RemoteObject.
  - return_all_values – See below.
- Returns:
  If return_all_values is True, a NumPy array of the estimated errors. If return_all_values is False, the maximum estimated error as first return value and the corresponding parameter as second return value.
- class pymor.algorithms.greedy.WeakGreedySurrogate
Bases: pymor.core.base.BasicObject

Surrogate for the approximation error in weak_greedy.

Methods

- evaluate – Evaluate the surrogate for given parameters.
- extend – Extend the approximation basis.
- abstract evaluate(mus, return_all_values=False)
Evaluate the surrogate for given parameters.
- Parameters:
  - mus – List of parameters for which to estimate the approximation error. When parallelization is used, mus can be a RemoteObject.
  - return_all_values – See below.
- Returns:
  If return_all_values is True, a NumPy array of the estimated errors. If return_all_values is False, the maximum estimated error as first return value and the corresponding parameter as second return value.
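Since WeakGreedySurrogate only fixes the evaluate/extend interface, a custom surrogate can be written with plain NumPy. The following toy class is a hypothetical sketch (ProjectionSurrogate and its attributes are not part of pyMOR): it measures the exact best-approximation error of vectors v(mu) in R^n by orthogonal projection onto the snapshots collected so far.

```python
import numpy as np

from pymor.algorithms.greedy import WeakGreedySurrogate
from pymor.core.exceptions import ExtensionError


class ProjectionSurrogate(WeakGreedySurrogate):
    """Toy surrogate (hypothetical): exact best-approximation error in R^n.

    `v` is a callable mapping a parameter to a 1D NumPy array. The basis is
    kept orthonormal, so the best-approximation error of v(mu) is the norm
    of its orthogonal complement w.r.t. the current basis.
    """

    def __init__(self, v):
        self.v = v
        self.basis = []  # orthonormal 1D NumPy arrays collected so far

    def evaluate(self, mus, return_all_values=False):
        errs = np.array([np.linalg.norm(self._defect(mu)) for mu in mus])
        if return_all_values:
            return errs
        i = int(np.argmax(errs))
        return errs[i], mus[i]

    def extend(self, mu):
        u = self._defect(mu)
        norm = np.linalg.norm(u)
        if norm == 0:
            # signal that the basis cannot be extended any further
            raise ExtensionError
        self.basis.append(u / norm)

    def _defect(self, mu):
        # subtract the orthogonal projection onto the current basis
        u = self.v(mu)
        for b in self.basis:
            u = u - (u @ b) * b
        return u
```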
- pymor.algorithms.greedy.rb_greedy(fom, reductor, training_set, use_error_estimator=True, error_norm=None, atol=None, rtol=None, max_extensions=None, extension_params=None, pool=None)
Weak Greedy basis generation using the RB approximation error as surrogate.
This algorithm generates a reduced basis using the weak greedy algorithm [BCD+11], where the approximation error is estimated by computing solutions of the reduced order model for the current reduced basis and then estimating the model reduction error.

- Parameters:
  - fom – The Model to reduce.
  - reductor – Reductor for reducing the given Model. This has to be an object with a reduce method, such that reductor.reduce() yields the reduced model, and an extend_basis method, such that reductor.extend_basis(U, copy_U=False, **extension_params) extends the current reduced basis by the vectors contained in U. For an example see CoerciveRBReductor.
  - training_set – The training set of Parameters on which to perform the greedy search.
  - use_error_estimator – If False, exactly compute the model reduction error by also computing the solution of fom for all parameter values of the training set. This is mainly useful when no estimator for the model reduction error is available.
  - error_norm – If use_error_estimator is False, use this function to calculate the norm of the error. If None, the Euclidean norm is used.
  - atol – See weak_greedy.
  - rtol – See weak_greedy.
  - max_extensions – See weak_greedy.
  - extension_params – dict of parameters passed to the reductor.extend_basis method. If None, 'gram_schmidt' basis extension will be used as a default for stationary problems (fom.solve returns VectorArrays of length 1) and 'pod' basis extension (adding a single POD mode) for instationary problems.
  - pool – See weak_greedy.
- Returns:
  data – Dict with the following fields:

  - rom: The reduced Model obtained for the computed basis.
  - max_errs: Sequence of maximum errors during the greedy run.
  - max_err_mus: The parameters corresponding to max_errs.
  - extensions: Number of performed basis extensions.
  - time: Total runtime of the algorithm.
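As a usage sketch (following the pattern of pyMOR's thermal block tutorial; the grid resolution, coercivity estimate and tolerances below are illustrative choices, not requirements of rb_greedy):

```python
from pymor.algorithms.greedy import rb_greedy
from pymor.analyticalproblems.thermalblock import thermal_block_problem
from pymor.discretizers.builtin import discretize_stationary_cg
from pymor.parameters.functionals import ExpressionParameterFunctional
from pymor.reductors.coercive import CoerciveRBReductor

# full-order model for a 2x2 thermal block problem
problem = thermal_block_problem(num_blocks=(2, 2))
fom, _ = discretize_stationary_cg(problem, diameter=1/50)

# reductor providing the reduce/extend_basis methods required by rb_greedy
reductor = CoerciveRBReductor(
    fom,
    product=fom.h1_0_semi_product,
    coercivity_estimator=ExpressionParameterFunctional('min(diffusion)',
                                                       fom.parameters),
)

training_set = problem.parameter_space.sample_uniformly(4)
greedy_data = rb_greedy(fom, reductor, training_set,
                        rtol=1e-5, max_extensions=20)

rom = greedy_data['rom']  # the reduced Model
print(greedy_data['extensions'], greedy_data['max_errs'][-1])
```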
- pymor.algorithms.greedy.weak_greedy(surrogate, training_set, atol=None, rtol=None, max_extensions=None, pool=None)
Weak greedy basis generation algorithm [BCD+11].
This algorithm generates an approximation basis for a given set of vectors

\[\mathcal{M} := \{v_{\mu} \,|\, \mu \in \mathcal{S}_{\text{train}}\}.\]

In each iteration of the algorithm, a vector \(v_{\mu^*}\) from \(\mathcal{M}\) is determined which maximizes the estimated best-approximation error w.r.t. the current basis. Then, the basis is extended with \(v_{\mu^*}\).
The algorithm expects a surrogate, which can estimate the best-approximation error for any given \(v_\mu\), \(\mu \in \mathcal{S}_{\text{train}}\), where the training_set \(\mathcal{S}_{\text{train}}\) is a list of arbitrary Python objects (not necessarily Mu instances). Further, the surrogate needs to be able to extend the approximation basis with \(v_\mu\) for any given \(\mu \in \mathcal{S}_{\text{train}}\). The constructed basis has to be extracted from the surrogate by the user after termination of the algorithm.
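Schematically, the loop can be sketched as follows (a simplified illustration of the behavior described above, assuming only the surrogate interface; the actual implementation additionally handles rtol, parallelization via pool, timings and extension failures):

```python
def weak_greedy_sketch(surrogate, training_set, atol=None, max_extensions=None):
    """Simplified sketch of the weak greedy loop (not the actual source)."""
    max_errs, max_err_mus, extensions = [], [], 0
    while max_extensions is None or extensions < max_extensions:
        # estimate the worst best-approximation error over the training set
        max_err, max_err_mu = surrogate.evaluate(training_set)
        max_errs.append(max_err)
        max_err_mus.append(max_err_mu)
        if atol is not None and max_err <= atol:
            break
        # extend the basis with the snapshot for the worst parameter
        surrogate.extend(max_err_mu)
        extensions += 1
    return {'max_errs': max_errs, 'max_err_mus': max_err_mus,
            'extensions': extensions}
```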
- Parameters:
  - surrogate – An instance of WeakGreedySurrogate representing the surrogate for the approximation error.
  - training_set – The set of (parameter) samples on which to perform the greedy search.
  - atol – If not None, stop the algorithm if the maximum (estimated) error on the training set drops below this value.
  - rtol – If not None, stop the algorithm if the maximum (estimated) relative error on the training set drops below this value.
  - max_extensions – If not None, stop the algorithm after max_extensions extension steps.
  - pool – If not None, a WorkerPool to use for parallelization. Parallelization needs to be supported by surrogate.
- Returns:
  data – Dict with the following fields:

  - max_errs: Sequence of maximum estimated errors during the greedy run.
  - max_err_mus: The parameters corresponding to max_errs.
  - extensions: Number of performed basis extensions.
  - time: Total runtime of the algorithm.
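Putting this together with the toy ProjectionSurrogate sketched under WeakGreedySurrogate above (again a hypothetical example; the tolerances are illustrative):

```python
import numpy as np

from pymor.algorithms.greedy import weak_greedy

# approximate the monomial curve mu -> (1, mu, mu^2, mu^3) over [0, 1];
# ProjectionSurrogate is the toy class from the sketch above
surrogate = ProjectionSurrogate(lambda mu: np.array([1., mu, mu**2, mu**3]))
training_set = list(np.linspace(0., 1., 100))

data = weak_greedy(surrogate, training_set, rtol=1e-10, max_extensions=10)

print(data['extensions'], data['max_errs'])
basis = surrogate.basis  # the basis has to be extracted from the surrogate
```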