# Tutorial: Building a Reduced Basis

In this tutorial we will learn more about `VectorArrays` and how to
construct a reduced basis using pyMOR.

A reduced basis spans a low-dimensional subspace of a `Model`'s
`solution_space`, in which the `solutions` of the `Model` can be well
approximated for all `parameter values`. In this context, time is treated
as an additional parameter. So for time-dependent problems, the reduced
space (the span of the reduced basis) should approximate the solution for
all `parameter values` and time instances.

An upper bound for the possible quality of a reduced space is given by the so-called Kolmogorov \(N\)-width \(d_N(\mathcal{M})\) of the solution manifold

\[\mathcal{M} := \{u(\mu) \mid \mu \in \mathcal{P}\},\]

given as

\[d_N(\mathcal{M}) := \inf_{\substack{V_N \subseteq V\\ \dim V_N \leq N}}\ \sup_{x \in \mathcal{M}}\ \inf_{v \in V_N} \|x - v\|.\]

In these definitions, \(V\) denotes the `solution_space`,
\(\mathcal{P} \subset \mathbb{R}^p\) denotes the set of all
`parameter values` we are interested in, and \(u(\mu)\) denotes the
`solution` of the `Model` for the given `parameter values` \(\mu\).
In pyMOR the set \(\mathcal{P}\) is called the `ParameterSpace`.

How to read this formula? For each candidate reduced space \(V_N\) we look at all possible \(x \in \mathcal{M}\) and compute the best-approximation error in \(V_N\) (the second infimum). The supremum over the infimum is thus the worst-case best-approximation error over all solutions \(x = u(\mu)\) of interest. Now we take the infimum of the worst-case best-approximation errors over all possible reduced spaces of dimension at most \(N\), and this is \(d_N(\mathcal{M})\).

So whatever reduced space of dimension \(N\) we pick, we will always find a \(\mu\) for which the best-approximation error in our space is at least \(d_N(\mathcal{M})\). Reduced basis methods aim at constructing spaces \(V_N\) for which the worst-case best-approximation error is as close to \(d_N(\mathcal{M})\) as possible.
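To make the sup-inf structure concrete, here is a small NumPy sketch (illustrative only, not pyMOR API): given an orthonormal basis of a candidate space \(V_N\) and a set of snapshot vectors standing in for \(\mathcal{M}\), the inner infimum is realized by orthogonal projection onto \(V_N\), and the supremum is the worst error over all snapshots.

```python
import numpy as np

def worst_case_best_approx_error(snapshots, basis):
    """Worst-case best-approximation error of the snapshots in span(basis).

    snapshots: (n, d) array; each row stands in for a solution u(mu).
    basis:     (N, d) array with orthonormal rows spanning the candidate V_N.
    """
    # inner infimum: the best approximation in V_N is the orthogonal projection
    projections = snapshots @ basis.T @ basis
    # supremum: take the worst error over all snapshots
    return np.linalg.norm(snapshots - projections, axis=1).max()

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((25, 50))                 # toy stand-in for M
basis = np.linalg.qr(rng.standard_normal((50, 10)))[0].T  # random 10-dim V_N
err = worst_case_best_approx_error(snapshots, basis)
```

Minimizing this worst-case error over all possible choices of `basis` (the outer infimum) would yield \(d_{10}\) of the toy snapshot set.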

However, we will only find a good \(V_N\) of small dimension \(N\) if the values \(d_N(\mathcal{M})\) decrease quickly for growing \(N\). It can be shown that this is the case as soon as \(u(\mu)\) depends analytically on \(\mu\), which is true for many problems of interest. More precisely, it can be shown [BCD+11], [RD13] that there are constants \(C, c > 0\) such that

\[d_N(\mathcal{M}) \leq C \cdot e^{-N^c}.\]
In this tutorial we will construct reduced spaces \(V_N\) for a concrete problem with pyMOR and study their error decay.

## Model setup

First we need to define a `Model` and a `ParameterSpace` for which we want
to build a reduced basis. We choose here the standard `thermal block`
benchmark problem shipped with pyMOR (see Getting started). However, any
pyMOR `Model` can be used, except for the Weak Greedy Algorithm section,
where some more assumptions have to be made on the `Model`.

First we import everything we need:

```
import numpy as np
from pymor.basic import *
```

Then we build a 3-by-2 thermalblock problem that we discretize using pyMOR's
`builtin discretizers` (see Tutorial: Using pyMOR's discretization toolkit
for an introduction to pyMOR's discretization toolkit).

```
problem = thermal_block_problem((3,2))
fom, _ = discretize_stationary_cg(problem, diameter=1/100)
```

Next, we need to define a `ParameterSpace` of `parameter values` for which
the solutions of the full-order model `fom` should be approximated by the
reduced basis. We do this by calling the `space` method of the `parameters`
of `fom`:

```
parameter_space = fom.parameters.space(0.0001, 1.)
```

Here, `0.0001` and `1.` are the common lower and upper bounds for the
individual components of all `parameters` of `fom`. In our case `fom`
expects a single parameter `'diffusion'` of 6 values:

```
fom.parameters
```

```
Parameters({diffusion: 6})
```

If `fom` were to depend on multiple `parameters`, we could also call the
`space` method with a dictionary of lower and upper bounds per parameter.

The main use of `ParameterSpaces` in pyMOR is that they allow us to easily
sample `parameter values` from their domain using the methods
`sample_uniformly` and `sample_randomly`.
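Conceptually, `sample_randomly` draws parameter vectors uniformly from the box defined by the bounds. The following NumPy sketch mimics the idea (the real method returns `Mu` objects rather than raw arrays; the function name here is illustrative, not pyMOR API):

```python
import numpy as np

def sample_randomly_sketch(lower, upper, dim, count, seed=None):
    """Draw `count` parameter vectors uniformly from [lower, upper]^dim.

    Mimics what `sample_randomly` does conceptually; the actual pyMOR
    method returns a list of Mu objects, not a raw NumPy array.
    """
    rng = np.random.default_rng(seed)
    return rng.uniform(lower, upper, size=(count, dim))

# 25 samples from the same box as parameter_space above
samples = sample_randomly_sketch(0.0001, 1., dim=6, count=25, seed=42)
```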

## Computing the snapshot data

Reduced basis methods are snapshot-based, which means that they build the
reduced space as a linear subspace of the linear span of solutions of the
`fom` for certain `parameter values`. The easiest approach is to just pick
these values randomly, which is what we will do in the following. First we
define a training set of 25 parameters:

```
training_set = parameter_space.sample_randomly(25)
print(training_set)
```

```
[Mu({'diffusion': array([0.77397865, 0.43893455, 0.85861206, 0.69739829, 0.09426793,
0.97562479])}), Mu({'diffusion': array([0.76116359, 0.7860857 , 0.12820082, 0.4504409 , 0.37086094,
0.92677231])}), Mu({'diffusion': array([0.64390073, 0.82277934, 0.44346986, 0.227316 , 0.55462933,
0.06391087])}), Mu({'diffusion': array([0.82764841, 0.63170123, 0.75811193, 0.35459052, 0.97070095,
0.89313181])}), Mu({'diffusion': array([0.77840566, 0.19471924, 0.46677433, 0.04389939, 0.15437406,
0.68308065])}), Mu({'diffusion': array([0.74478768, 0.96751298, 0.32589278, 0.37052266, 0.46960886,
0.18955241])}), Mu({'diffusion': array([0.13000851, 0.47575736, 0.22698666, 0.66984701, 0.4372082 ,
0.83269493])}), Mu({'diffusion': array([0.70029508, 0.3124354 , 0.83227658, 0.80478388, 0.38753963,
0.28839927])}), Mu({'diffusion': array([0.68252725, 0.13983851, 0.19998821, 0.00746153, 0.78694569,
0.66488437])}), Mu({'diffusion': array([0.70519486, 0.78075096, 0.45896988, 0.56878432, 0.13988302,
0.11461862])}), Mu({'diffusion': array([0.66843612, 0.4711491 , 0.56527958, 0.76502236, 0.63475485,
0.55362404])}), Mu({'diffusion': array([0.55925124, 0.3040197 , 0.03091475, 0.43677372, 0.21466321,
0.40858779])}), Mu({'diffusion': array([0.85341773, 0.23401609, 0.05839691, 0.28145575, 0.2936644 ,
0.66195032])}), Mu({'diffusion': array([0.55707645, 0.78391982, 0.66434711, 0.40644622, 0.81403898,
0.16705622])}), Mu({'diffusion': array([0.0228098 , 0.09013886, 0.72238711, 0.46193104, 0.16135565,
0.50109467])}), Mu({'diffusion': array([0.15239687, 0.69635074, 0.44621166, 0.38108312, 0.30158194,
0.63031956])}), Mu({'diffusion': array([0.36187643, 0.08774115, 0.1180941 , 0.96190147, 0.90858983,
0.69973716])}), Mu({'diffusion': array([0.26594337, 0.96917946, 0.77877303, 0.7169185 , 0.44941657,
0.27231434])}), Mu({'diffusion': array([0.09648132, 0.90261214, 0.45583071, 0.20244313, 0.30602603,
0.57926165])}), Mu({'diffusion': array([0.17685511, 0.85662862, 0.75854368, 0.71949101, 0.43214983,
0.62734611])}), Mu({'diffusion': array([0.58413956, 0.64988162, 0.08453588, 0.41586582, 0.04171001,
0.49404142])}), Mu({'diffusion': array([0.32992823, 0.14460974, 0.10349263, 0.58768581, 0.17067591,
0.92512761])}), Mu({'diffusion': array([0.58110303, 0.34693512, 0.5909564 , 0.02290159, 0.95856336,
0.48235521])}), Mu({'diffusion': array([0.78275695, 0.08282173, 0.48670967, 0.49075792, 0.93783267,
0.57177088])}), Mu({'diffusion': array([0.47354205, 0.26704897, 0.33163584, 0.52072034, 0.43896757,
0.02170992])})]
```

Then we `solve` the full-order model for all `parameter values` in the
training set and accumulate all solution vectors in a single `VectorArray`
using its `append` method. But first we need to create an empty
`VectorArray` to which the solutions can be appended. New `VectorArrays`
in pyMOR are always created by a corresponding `VectorSpace`. Empty arrays
are created using the `empty` method. But what is the right `VectorSpace`?
All `solutions` of a `Model` belong to its `solution_space`, so we write:

```
training_data = fom.solution_space.empty()
for mu in training_set:
    training_data.append(fom.solve(mu))
```

Note that `fom.solve` returns a `VectorArray` containing a single vector.
This entire array (of one vector) is then appended to the `training_data`
array. pyMOR has no notion of single vectors; we only speak of
`VectorArrays`.

What exactly is a `VectorSpace`? A `VectorSpace` in pyMOR holds all
information necessary to build `VectorArrays` containing vector objects
of a certain type. In our case we have

```
fom.solution_space
```

```
NumpyVectorSpace(20201, id='STATE')
```

which means that the created `VectorArrays` will internally hold
`NumPy arrays` for data storage. The number is the dimension of the
vectors. We have a `NumpyVectorSpace` here because pyMOR's builtin
discretizations are built around the NumPy/SciPy stack. If `fom`
represented a `Model` living in an external PDE solver, we would have
a different type of `VectorSpace` which, for instance, might hold a
reference to a discrete function space object inside the PDE solver
instead of the dimension.
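As a rough illustration of this idea (a toy stand-in, not pyMOR's actual class hierarchy), a vector space object only needs to store enough information, here just the dimension, to create compatible arrays:

```python
import numpy as np

class MiniVectorSpace:
    """Toy stand-in for a pyMOR VectorSpace: it knows the dimension of
    its vectors and can create (empty) arrays of them."""

    def __init__(self, dim):
        self.dim = dim

    def empty(self):
        # an array holding zero vectors of dimension self.dim
        return np.empty((0, self.dim))

    @staticmethod
    def append(array, vectors):
        # append copies of the given vectors to the array
        return np.vstack([array, np.array(vectors, copy=True)])

space = MiniVectorSpace(20201)
U = space.empty()                              # len(U) == 0
U = space.append(U, np.ones((1, space.dim)))   # now holds one vector
```

An external-solver `VectorSpace` would store a handle to the solver's function space object in place of `dim`.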

After appending all solution vectors to `training_data`, we can verify that
`training_data` now really contains 25 vectors:

```
len(training_data)
```

```
25
```

Note that appending one `VectorArray` `V` to another array `U` will append
copies of the vectors in `V` to `U`. So modifying `U` will not affect `V`.
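This copy-on-append behavior can be mimicked with plain NumPy arrays (an analogy only; `append` itself is pyMOR API):

```python
import numpy as np

V = np.ones((3, 4))    # stand-in for a VectorArray holding 3 vectors
U = np.empty((0, 4))   # stand-in for an empty array to append to
U = np.vstack([U, V])  # "appending" copies the data from V into U
U[0, 0] = 99.0         # modifying U ...
# ... leaves V untouched: V[0, 0] is still 1.0
```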

Let's look at the solution snapshots we have just computed using the
`visualize` method of `fom`. A `VectorArray` containing multiple vectors
is visualized as a time series:

```
fom.visualize(training_data)
```