Tutorial: Building a Reduced Basis

Run this tutorial

Click here to run this tutorial on mybinder.org.
Please note that starting the notebook server may take a couple of minutes.

In this tutorial we will learn more about VectorArrays and how to construct a reduced basis using pyMOR.

A reduced basis spans a low dimensional subspace of a Model’s solution_space, in which the solutions of the Model can be well approximated for all parameter values. In this context, time is treated as an additional parameter. So for time-dependent problems, the reduced space (the span of the reduced basis) should approximate the solution for all parameter values and time instances.

An upper bound for the possible quality of a reduced space is given by the so-called Kolmogorov N-width d_N, defined as

d_N := \inf_{\substack{V_N \subseteq V \\ \dim V_N \leq N}}\ \sup_{\mu \in \mathcal{P}}\ \inf_{v \in V_N}\ \|u(\mu) - v\|.

In this formula V denotes the solution_space, \mathcal{P} \subset \mathbb{R}^p denotes the set of all parameter values we are interested in, and u(\mu) denotes the solution of the Model for the given parameter values \mu. In pyMOR the set \mathcal{P} is called the ParameterSpace.

How should this formula be read? For each candidate reduced space V_N we look at all possible parameter values \mu and compute the best-approximation error in V_N (the innermost infimum). The supremum over these best-approximation errors is the worst-case best-approximation error over all parameter values of interest. Finally, we take the infimum of these worst-case errors over all possible reduced spaces of dimension at most N, and this is d_N.

So whatever reduced space of dimension N we pick, we will always find a \mu for which the best-approximation error in our space is at least d_N. Reduced basis methods aim at constructing spaces V_N for which the worst-case best-approximation error is as close to d_N as possible.

However, we will only find a good V_N of small dimension N if the values d_N decrease quickly for growing N. It can be shown that this is the case as soon as u(\mu) analytically depends on \mu, which is true for many problems of interest. More precisely, it can be shown [BCDDPW11], [DPW13] that there are constants C, c > 0 such that

d_N \leq C \cdot e^{-N^c}.

In this tutorial we will construct reduced spaces V_N for a concrete problem with pyMOR and study their error decay.

Model setup

First we need to define a Model and a ParameterSpace for which we want to build a reduced basis. We choose here the standard thermal block benchmark problem shipped with pyMOR (see Getting started). However, any pyMOR Model can be used, except for Section ‘Weak greedy algorithm’ where some more assumptions have to be made on the Model.

First we import everything we need:

import numpy as np
from pymor.basic import *

Then we build a 3-by-3 thermalblock problem that we discretize using pyMOR’s builtin discretizers (see Tutorial: Using pyMOR’s discretization toolkit for an introduction to pyMOR’s discretization toolkit).

problem = thermal_block_problem((3,3))
fom, _ = discretize_stationary_cg(problem, diameter=1/100)
Log Output

00:00 DiffusionOperatorP1: Calculate gradients of shape functions transformed by reference map ...

00:00 DiffusionOperatorP1: Calculate all local scalar products between gradients ...

00:00 DiffusionOperatorP1: Determine global dofs ...

00:00 DiffusionOperatorP1: Boundary treatment ...

00:00 DiffusionOperatorP1: Assemble system matrix ...

00:01 L2ProductP1: Integrate the products of the shape functions on each element

00:01 L2ProductP1: Determine global dofs ...

00:01 L2ProductP1: Boundary treatment ...

00:01 L2ProductP1: Assemble system matrix ...

Next, we need to define a ParameterSpace of parameter values for which the solutions of the full-order model fom should be approximated by the reduced basis. We do this by calling the space method of the Parameters of fom:

parameter_space = fom.parameters.space(0.0001, 1.)

Here, 0.0001 and 1. are the common lower and upper bounds for the individual components of all Parameters of fom. In our case fom expects a single parameter 'diffusion' of 9 values:

fom.parameters
Parameters({diffusion: 9})

If fom were to depend on multiple Parameters, we could also call the space method with a dictionary of lower and upper bounds per parameter.

The main use of ParameterSpaces in pyMOR is that they make it easy to sample parameter values from their domain using the methods sample_uniformly and sample_randomly.
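Conceptually, sample_randomly draws each parameter component independently and uniformly from the given bounds. A NumPy-only sketch of this sampling (the function name and the fixed seed are our own, not part of pyMOR's API):

```python
import numpy as np

rng = np.random.default_rng(0)  # hypothetical fixed seed for reproducibility
low, high = 0.0001, 1.0         # the bounds passed to fom.parameters.space above

def sample_randomly(count, dim=9):
    # draw `count` parameter values uniformly from [low, high]^dim
    return [rng.uniform(low, high, size=dim) for _ in range(count)]

training = sample_randomly(25)
assert len(training) == 25
assert all(low <= mu.min() and mu.max() <= high for mu in training)
```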

Computing the snapshot data

Reduced basis methods are snapshot-based, which means that they build the reduced space as a linear subspace of the linear span of solutions of the fom for certain parameter values. The easiest approach is to just pick these values randomly, which is what we will do in the following. First we define a training set of 25 parameter values:

training_set = parameter_space.sample_randomly(25)
print(training_set)
[Mu({diffusion: [0.37460266483547777, 0.9507192349792751, 0.7320207424172239, 0.5986986183486169, 0.15610303857839228, 0.15607892088416903, 0.05817780380698265, 0.8661895281603577, 0.6011549002420344]}), Mu({diffusion: [0.7081017705382658, 0.020682435846372867, 0.9699128611767781, 0.8324593965363417, 0.21241787676720833, 0.1819067847103799, 0.18348616940244847, 0.30431181873524177, 0.5248039559890746]}), Mu({diffusion: [0.43200182414025157, 0.2913000172840221, 0.6118917094329073, 0.13957991126597663, 0.2922154340703646, 0.3664252071093623, 0.4561243772186142, 0.7851974437968743, 0.1997538147801439]}), Mu({diffusion: [0.5142830149697702, 0.5924553274051563, 0.04654576767872573, 0.6075840974162482, 0.17060707127492278, 0.065145087825981, 0.9488906486996079, 0.9656354698712519, 0.8084165083816495]}), Mu({diffusion: [0.30468330779645336, 0.09776234679498323, 0.6842646032095057, 0.4402084784902273, 0.12212603102129435, 0.49522739242025904, 0.03448508226310688, 0.9093294700385742, 0.2588541036018569]}), Mu({diffusion: [0.6625560321255466, 0.311779904981802, 0.520116014375693, 0.5467556083153453, 0.18493597007997448, 0.9695876693017821, 0.7751553100787785, 0.9395049916700327, 0.8948378676926061]}), Mu({diffusion: [0.597940188813204, 0.9218820475996146, 0.0885836528017143, 0.1960632641329033, 0.04532276618164702, 0.325397797730188, 0.38873842196051306, 0.2714218968707185, 0.8287546354010141]}), Mu({diffusion: [0.3568176513609199, 0.28100641623641204, 0.5427418135499327, 0.14101013255226516, 0.8022167610559643, 0.07464318861540285, 0.9868882479068573, 0.7722675448197277, 0.19879580996601898]}), Mu({diffusion: [0.005621564911890039, 0.8154798823119886, 0.7068866581132324, 0.7290342673241832, 0.7712932196512772, 0.07413724726891696, 0.35852988197141816, 0.1159574726191772, 0.863117115533006]}), Mu({diffusion: [0.6233357970148752, 0.3309649350501639, 0.06365199445099504, 0.3110512234834906, 0.32525080369454434, 0.7296332177202303, 0.6375937156080776, 0.8872240213020689, 
0.4722677036694331]}), Mu({diffusion: [0.11968228651370788, 0.7132734627442727, 0.7608089701120357, 0.5613210698497393, 0.7709900832365655, 0.4938462168047543, 0.5227805560990558, 0.42759826425671377, 0.025516584831420778]}), Mu({diffusion: [0.10798063785060512, 0.031526042768165584, 0.6364467702226541, 0.314424545478219, 0.5086198340955863, 0.9075757172787005, 0.24936729992596005, 0.41044188474332616, 0.7555755834291944]}), Mu({diffusion: [0.2288752856750733, 0.07707221183781011, 0.28982247776847664, 0.161305165125279, 0.9297046825773388, 0.8081395675264605, 0.6334404161347724, 0.871473444128699, 0.8036917096914246]}), Mu({diffusion: [0.18665140188014723, 0.8925697425901288, 0.5393883076914592, 0.8074594111485461, 0.8961016907935009, 0.3180716746243667, 0.11014091933522399, 0.22801236902568747, 0.42716507784739366]}), Mu({diffusion: [0.8180329644459009, 0.8607445101980178, 0.007051435318137585, 0.5107962278473079, 0.41746926204846413, 0.22218559968968316, 0.11995338079694944, 0.3376814098864876, 0.9429154129421279]}), Mu({diffusion: [0.32327061172755317, 0.5188387426811918, 0.7030486569992883, 0.36369323941905607, 0.9717849045126886, 0.962451050212617, 0.2518571175957816, 0.4972987810417962, 0.30094822198578797]}), Mu({diffusion: [0.2849120103280299, 0.03698325865979735, 0.6096033775464988, 0.5027287553265386, 0.05157360337486436, 0.27871859959018774, 0.9082750593780571, 0.2396379344779057, 0.14498038260401397]}), Mu({diffusion: [0.4895038150015352, 0.9856518890651896, 0.24213106598434925, 0.672168333851138, 0.7616434533671848, 0.2377137802380004, 0.7282435269769983, 0.3678463544059813, 0.6323426000105201]}), Mu({diffusion: [0.6335663577898186, 0.535821106606351, 0.09038074107740286, 0.8353189653396791, 0.32084798696523864, 0.18659985854881422, 0.040871064040608446, 0.590933853893923, 0.6775966054060982]}), Mu({diffusion: [0.016686170144963364, 0.512141848993451, 0.22657312562041815, 0.645208273130409, 0.17444899236209094, 0.6909686443286557, 0.38679667276590735, 
0.9367363157378609, 0.13760719205157865]}), Mu({diffusion: [0.3411322444151535, 0.11356217388846501, 0.9247011489167349, 0.8773516194456429, 0.2580158335523841, 0.6600180476295756, 0.8172404779811957, 0.5552452915183024, 0.5296976132981709]}), Mu({diffusion: [0.24192810567136164, 0.09319345752911863, 0.8972260363775314, 0.9004280153576141, 0.6331381471275406, 0.3390958880695958, 0.3492746536551996, 0.7259830833023524, 0.8971205489265819]}), Mu({diffusion: [0.8870977156226908, 0.7798975583030381, 0.6420674429896723, 0.08423155099854933, 0.1617125512232043, 0.8985643331082266, 0.606468416753624, 0.009296131911467984, 0.10156139571174551]}), Mu({diffusion: [0.663535418931145, 0.005161077687834065, 0.16089197061235688, 0.5487789159876495, 0.6919260081729239, 0.6519960633766503, 0.2243468825296137, 0.7122080034254011, 0.23732536258805037]}), Mu({diffusion: [0.32546715818945177, 0.7465167559775123, 0.64966793575731, 0.8492384881531285, 0.6576471310111133, 0.568351772475138, 0.09376540035130967, 0.36777903147912755, 0.26527584744495725]})]

Then we solve the full-order model for all parameter values in the training set and accumulate all solution vectors in a single VectorArray using its append method. But first we need to create an empty VectorArray to which the solutions can be appended. New VectorArrays in pyMOR are always created by a corresponding VectorSpace. Empty arrays are created using the empty method. But what is the right VectorSpace? All solutions of a Model belong to its solution_space, so we write:

U = fom.solution_space.empty()
for mu in training_set:
    U.append(fom.solve(mu))
Log Output

00:01 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.37460266483547777, 0.9507192349792751, 0.7320207424172239, 0.5986986183486169, 0.15610303857839228, 0.15607892088416903, 0.05817780380698265, 0.8661895281603577, 0.6011549002420344]} ...


Note that fom.solve returns a VectorArray containing a single vector. This entire array (of one vector) is then appended to the U array. pyMOR has no notion of single vectors; there are only VectorArrays.

What exactly is a VectorSpace? A VectorSpace in pyMOR holds all information necessary to build VectorArrays containing vector objects of a certain type. In our case we have

fom.solution_space
NumpyVectorSpace(20201, id='STATE')

which means that the created VectorArrays will internally hold NumPy arrays for data storage. The number is the dimension of the vectors. We have a NumpyVectorSpace here because pyMOR's builtin discretizations are built around the NumPy/SciPy stack. If fom represented a Model living in an external PDE solver, we would have a different type of VectorSpace, which might, for instance, hold a reference to a discrete function space object inside the PDE solver instead of the dimension.
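To illustrate the idea that a VectorSpace holds all information necessary to build VectorArrays, here is a toy stand-in (our own sketch, not pyMOR's actual implementation) for a space that only needs to know the dimension of its vectors:

```python
import numpy as np

class MinimalVectorSpace:
    # toy stand-in for pyMOR's NumpyVectorSpace: all it knows is the dimension
    def __init__(self, dim):
        self.dim = dim

    def empty(self):
        # an "array" of zero vectors, to which vectors can later be appended
        return np.empty((0, self.dim))

    def zeros(self, count=1):
        # an array of `count` zero vectors
        return np.zeros((count, self.dim))

space = MinimalVectorSpace(20201)
U = space.empty()
assert U.shape == (0, 20201)
```

An external-solver VectorSpace would follow the same pattern, but store a handle to the solver's function space instead of a dimension.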

After appending all solution vectors to U, we can verify that U now really contains 25 vectors:

len(U)
25

Note that appending one VectorArray V to another array U will append copies of the vectors in V to U. So modifying U will not affect V.

Let’s look at the solution snapshots we have just computed using the visualize method of fom. A VectorArray containing multiple vectors is visualized as a time series:

fom.visualize(U)

A trivial reduced basis

Given some snapshot data, the easiest option to get a reduced basis is to just use the snapshot vectors as the basis:

trivial_basis = U.copy()

Note that assignment in Python never copies data! Thus, if we had written trivial_basis = U and then modified trivial_basis, U would change as well, since both names would refer to the same VectorArray object. So whenever you want to use a VectorArray somewhere else and are unsure whether some code might change it, you should create a copy. pyMOR uses copy-on-write semantics for copies of VectorArrays: calling copy is generally cheap, as it does not duplicate any data in memory; only when one of the arrays is modified is the data actually copied.
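The same pitfall can be demonstrated with any mutable Python object, e.g. a plain list:

```python
a = [1, 2, 3]
b = a            # no copy: a and b refer to the same list object
b.append(4)
assert a == [1, 2, 3, 4]   # modifying b also changed a

c = a.copy()     # an actual copy
c.append(5)
assert a == [1, 2, 3, 4]          # a is unaffected
assert c == [1, 2, 3, 4, 5]
```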

Now we want to know how good our reduced basis is. So we want to compute

\inf_{v \in V_N}\ \|u(\mu) - v\|,

where V_N denotes the span of our reduced basis, for all \mu in some validation set of parameter values. Assuming that we are in a Hilbert space, we can compute the infimum via orthogonal projection onto V_N: in that case, the projection will be the best approximation in V_N and the norm of the difference between u(\mu) and its orthogonal projection will be the best-approximation error.

So let v_{proj} be the orthogonal projection of v onto the linear space spanned by the basis vectors u_i,\ i=1, \ldots, N. By definition this means that v - v_{proj} is orthogonal to all u_i:

(v - v_{proj}, u_i) = 0, \qquad i = 1, \ldots, N.

Let \lambda_j, j = 1, \ldots, N be the coefficients of v_{proj} with respect to this basis, i.e. \sum_{j=1}^N \lambda_j u_j = v_{proj}. Then:

\sum_{j=1}^N \lambda_j (u_j, u_i) = (v, u_i), \qquad i = 1, \ldots, N.

With G_{i,j} := (u_i, u_j), R := [(v, u_1), \ldots, (v, u_N)]^T and \Lambda := [\lambda_1, \ldots, \lambda_N]^T, we obtain the linear equation system

G \cdot \Lambda = R,

which determines \lambda_i and, thus, v_{proj}.
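Before turning to pyMOR, the derivation can be checked in plain NumPy; the basis vectors u_i and the vector v below are random stand-ins, stored as rows of a NumPy array:

```python
import numpy as np

rng = np.random.default_rng(42)
U = rng.standard_normal((5, 100))   # 5 hypothetical basis vectors u_1, ..., u_5 (rows)
v = rng.standard_normal(100)        # vector to project onto span(u_1, ..., u_5)

G = U @ U.T                         # Gramian, G_ij = (u_i, u_j)
R = U @ v                           # right-hand side, R_i = (v, u_i)
lam = np.linalg.solve(G, R)         # coefficients Lambda of the projection
v_proj = lam @ U                    # v_proj = sum_j lambda_j u_j

# by construction, v - v_proj is orthogonal to every basis vector
assert np.allclose(U @ (v - v_proj), 0)
```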

Let’s assemble and solve this equation system using pyMOR to determine the best-approximation error in trivial_basis for some test vector V which we take as another random solution of our Model:

V = fom.solve(parameter_space.sample_randomly(1)[0])
Log Output

00:07 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.24406524441474567, 0.9730132536969703, 0.3931584148942937, 0.8920573505215956, 0.6311755121346632, 0.7948318224112942, 0.5026868293958816, 0.5769461942378965, 0.492568442049482]} ...

The matrix G of all inner products between vectors in trivial_basis is a so-called Gramian matrix. Consequently, every VectorArray has a gramian method that computes precisely this matrix:

G = trivial_basis.gramian()

The Gramian is computed w.r.t. the Euclidean inner product. For the right-hand side R, we need to compute all (Euclidean) inner products between the vectors in trivial_basis and (the single vector in) V. For that, we can use the inner method:

R = trivial_basis.inner(V)

which will give us a 25\times 1 NumPy array of all inner products.

Now, we can use NumPy to solve the linear equation system:

lambdas = np.linalg.solve(G, R)

Finally, we need to form the linear combination

\sum_{j=1}^N \lambda_j u_j = v_{proj}

using the lincomb method of trivial_basis. It expects row vectors of linear coefficients, but solve returns column vectors, so we need to take the transpose:

V_proj = trivial_basis.lincomb(lambdas.T)

Let’s look at V, V_proj and the difference of both. VectorArrays of the same length can simply be subtracted, yielding a new array of the differences:

fom.visualize((V, V_proj, V - V_proj),
              legend=('V', 'V_proj', 'best-approximation err'),
              separate_colorbars=True)

As you can see, we already have a quite good approximation of V with only 25 basis vectors.

In many cases, the Euclidean norm will work just fine. However, when the full-order model comes from a PDE, it is usually not the norm we are interested in, and you may get poor results for problems with strongly anisotropic meshes.

For our diffusion problem with homogeneous Dirichlet boundaries, the Sobolev semi-norm (of order one) is a natural choice. Among other useful products, discretize_stationary_cg already assembled a corresponding inner product Operator for us, which is available as

fom.h1_0_semi_product
NumpyMatrixOperator(<20201x20201 sparse, 140601 nnz>, source_id='STATE', range_id='STATE', name='h1_0_semi')

Note

The 0 in h1_0_semi_product refers to the fact that rows and columns of Dirichlet boundary DOFs have been cleared in the matrix of the Operator to make it invertible. This is important for being able to compute Riesz representatives w.r.t. this inner product (required for a posteriori estimation of the ROM error). If you want to compute the H1 semi-norm of a function that does not vanish at the Dirichlet boundary, use fom.h1_semi_product.

To use fom.h1_0_semi_product as an inner product Operator for computing the projection error, we can simply pass it as the optional product argument to gramian and inner:

G = trivial_basis[:10].gramian(product=fom.h1_0_semi_product)
R = trivial_basis[:10].inner(V, product=fom.h1_0_semi_product)
lambdas = np.linalg.solve(G, R)
V_h1_proj = trivial_basis[:10].lincomb(lambdas.T)

fom.visualize((V, V_h1_proj, V - V_h1_proj), separate_colorbars=True)
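The same product-weighted projection can be sketched in plain NumPy; the diagonal matrix W below is a hypothetical SPD stand-in for the sparse matrix behind fom.h1_0_semi_product:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 50, 4
basis = rng.standard_normal((N, n))      # N hypothetical basis vectors (rows)
v = rng.standard_normal(n)

# simple SPD product matrix (diagonal weights); a real H^1_0 semi-product
# matrix would be sparse and come from the finite element assembly
W = np.diag(rng.uniform(1.0, 2.0, size=n))

G = basis @ W @ basis.T                  # weighted Gramian, G_ij = (u_i, u_j)_W
R = basis @ W @ v                        # weighted right-hand side
lam = np.linalg.solve(G, R)
v_proj = lam @ basis

# the residual is orthogonal to the basis w.r.t. the W-inner product
assert np.allclose(basis @ W @ (v - v_proj), 0)
```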

As you might have guessed, we have additionally opted here to only use the span of the first 10 basis vectors of trivial_basis. Like NumPy arrays, VectorArrays can be sliced and indexed. The result is always a view onto the original data. If the view object is modified, the original array is modified as well.

Next we will assess the approximation error a bit more thoroughly, by evaluating it on a validation set of 100 parameter values for varying basis sizes.

First, we compute the validation snapshots:

validation_set = parameter_space.sample_randomly(100)
V = fom.solution_space.empty()
for mu in validation_set:
    V.append(fom.solve(mu))
Log Output

00:12 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.1953234634992647, 0.7224798700499792, 0.2808442852046117, 0.024413534834810693, 0.6455077486775771, 0.17719296833910822, 0.940464538494479, 0.9539331841448871, 0.9148729037814265]} ...


00:14 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.4944708626721112, 0.05765300414064262, 0.5495739294355032, 0.4415863483232397, 0.8877154123400239, 0.3509799210508235, 0.11715530972596311, 0.14307738288463057, 0.7615344806543005]} ...

00:14 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.6182562415099294, 0.10121256385517798, 0.08419839543438594, 0.7009990345459741, 0.07285573006355711, 0.8218778732844272, 0.7062716029337806, 0.08144064576383558, 0.0849292303137834]} ...

00:14 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.9866409145433254, 0.3743333686765447, 0.37070508285218423, 0.8128182873007769, 0.9472538525261203, 0.9860024637164886, 0.7534028474404157, 0.37632195957236264, 0.0835923666269989]} ...

00:14 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.777169201235844, 0.5584484093108315, 0.42427958704605157, 0.9063637496562265, 0.11128636255792072, 0.49267584178043006, 0.011452509402942327, 0.46871377592992686, 0.05639764535426917]} ...

00:14 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.11890603447644511, 0.11761449415242717, 0.649245381085852, 0.7460702747774968, 0.5834104282206499, 0.9621763312196945, 0.3749330924657517, 0.2857835150732325, 0.8686122682766413]} ...

00:14 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.2236734789356007, 0.9632262171866672, 0.01225325924234736, 0.9698818388249683, 0.04325559595938106, 0.8911539993867013, 0.5277483389753912, 0.9929654996396884, 0.07388918507892532]} ...

00:14 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.5538988989728807, 0.969305605365537, 0.5231455343857317, 0.629435698271449, 0.6957791141157187, 0.45459561066129645, 0.627595324276055, 0.5843558804919079, 0.9011678946899399]} ...

00:15 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.045541835703423725, 0.2810350932732711, 0.9504164429281511, 0.8902747575125273, 0.45571118711043435, 0.6201705845417566, 0.2774534448628346, 0.18820234760778892, 0.46375203509948815]} ...

00:15 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.3534168928032502, 0.583697746239687, 0.07782686350128834, 0.9743973681853999, 0.986212123405155, 0.6981918978483431, 0.536142756707486, 0.3095966635246991, 0.8138136402049779]} ...

00:15 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.6847626994366239, 0.1627006776509568, 0.910936091775393, 0.8225549891988767, 0.9498049333005948, 0.7257469364375211, 0.6134538544161963, 0.41830121198698983, 0.9327352105056779]} ...

00:15 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.8660772831114583, 0.04531414823917879, 0.02646433779980228, 0.3765257205413618, 0.8105722754487547, 0.987277401702013, 0.15050184941441783, 0.5941713022805999, 0.3809527675453584]} ...

00:15 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.9699174063748217, 0.8421347112433951, 0.8383448718406667, 0.4687462904789908, 0.41487802038743143, 0.27347973122351316, 0.056469859101262024, 0.8647359040174277, 0.8129197190291646]} ...

00:15 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.999717701518802, 0.996637173390198, 0.5554761624320672, 0.7690105164389924, 0.9447712533094398, 0.8496624259383437, 0.24742336693302333, 0.4505990808965625, 0.12924649920997983]} ...

00:15 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.9540556221559965, 0.6062140169874349, 0.22871994122291237, 0.6717335143374161, 0.61816642763385, 0.3582269017610372, 0.11364623644040907, 0.6716060382732403, 0.5203556701337029]} ...

00:15 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7723411598964658, 0.5202114847618822, 0.8521962821685083, 0.551951648090608, 0.5609818777382327, 0.8766659372980792, 0.4035425179257758, 0.13410182692779565, 0.028879798045707637]} ...

00:15 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7551617419480516, 0.6203475203983293, 0.7041093601224137, 0.21304286509275983, 0.13645783843921108, 0.01464321120131514, 0.3506525000507163, 0.5899586950859477, 0.3923048206952223]} ...

00:15 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.4375311745315267, 0.904168278624299, 0.348320641476598, 0.5140380902108949, 0.7836746474398689, 0.39660312804303804, 0.6221244915578507, 0.8623774723758705, 0.9495256715952763]} ...

00:15 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.14715877358094503, 0.9265949663989782, 0.49216708145023025, 0.25831856386012847, 0.45918984266263746, 0.9800345720279485, 0.4926688321834703, 0.32881873512647947, 0.6334375142312941]} ...

00:15 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.24022160421631525, 0.07595574177585306, 0.12896683393845815, 0.12813303437387666, 0.1519875032429431, 0.1389132899321452, 0.6409106573287343, 0.1819618963907049, 0.3457327165955308]} ...

00:15 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.8967987310650213, 0.47401424409884607, 0.6675909827471751, 0.17240263921450966, 0.1923697899067899, 0.04096452940485222, 0.16901816956585733, 0.27866247999805543, 0.17709278322831914]} ...

00:16 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.08879366350367991, 0.12072380751349075, 0.4608326901559225, 0.20641308503395192, 0.36433343406197066, 0.5034669291277714, 0.6904257891465023, 0.03940820862711483, 0.7994304578691518]} ...

00:16 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.6279375994519587, 0.08185085604567702, 0.8735912662443666, 0.92088031329176, 0.06117185205887827, 0.276949960382389, 0.806220659665082, 0.7482848644146199, 0.18460256725444169]} ...

00:16 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.20942838840437664, 0.37053505558110283, 0.48457453289250224, 0.6182929460531429, 0.36897674820581544, 0.4625884626615346, 0.7474961910399431, 0.036779534570308844, 0.25251170064958633]} ...

00:16 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.713378250925964, 0.8952173170034307, 0.5117262743714546, 0.5321602739167891, 0.10726129413864208, 0.4474676255867723, 0.5326640047283776, 0.24254625658436618, 0.26931630662628603]} ...

00:16 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.3773464346883122, 0.020169190657948595, 0.32214695766661994, 0.211526862195845, 0.32756460244269686, 0.11985015560606929, 0.8905382280118209, 0.5936330943086933, 0.6791344089125751]} ...

00:16 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7891923214834775, 0.49849235470916436, 0.08701159605861496, 0.537152831164366, 0.586882433909077, 0.7454649302369115, 0.43171638027505643, 0.1276675447652842, 0.2838475282081446]} ...

00:16 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.36314598816899524, 0.645952649607468, 0.5708212268384449, 0.3561611162252565, 0.9865165972681004, 0.6058142418749515, 0.23730306905682086, 0.1018722943731417, 0.15294385327041357]} ...

00:16 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.24603313261166967, 0.1607653051222297, 0.1866483673489006, 0.2851666591769777, 0.17345625793522534, 0.8967757480839625, 0.08032572228707605, 0.5245589384312976, 0.4104557873069625]} ...

00:16 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.9823803790469156, 0.11212769827783554, 0.39791581348583704, 0.9694734862320413, 0.8655205751813909, 0.817090363742185, 0.2579770367622353, 0.1709704986313268, 0.6686763556024385]} ...

00:16 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.9293830515286731, 0.5568072167246284, 0.5716555282009529, 0.28005109575091813, 0.7695159838986178, 0.18712504418266762, 0.32374686848060324, 0.42549389497255513, 0.5076596176465866]} ...

00:16 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.24248549144183873, 0.11492534105672962, 0.6106589804373884, 0.28870169018493175, 0.5812800976004701, 0.15444727900267488, 0.481191987844632, 0.5326361736083307, 0.05191835446874467]} ...

00:16 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.3366706177661012, 0.13450123547128034, 0.06346863297572047, 0.9899612363667063, 0.3224216095902255, 0.8098934584100494, 0.25471519069828746, 0.6815345719517069, 0.7602518371036976]} ...

00:17 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.5956791767337835, 0.4716290309313034, 0.4118997300558538, 0.348933379716341, 0.929536191333401, 0.8306363458469503, 0.9650304079754459, 0.12438479376319618, 0.7308943884561239]} ...

00:17 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.9383466227753557, 0.18131494285904357, 0.06658961774004082, 0.74114653722513, 0.5745156658685939, 0.8418445938805962, 0.13985839938862685, 0.7952877851287042, 0.20170715731573974]} ...

00:17 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.16373957727141794, 0.16434937135119984, 0.814593262759359, 0.6652307009741305, 0.5231131182266424, 0.35889460107509014, 0.877212820759027, 0.3925058629118931, 0.8166177795276299]} ...

00:17 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.4391909950793614, 0.3770067349819651, 0.4627335176910395, 0.3014477363767257, 0.7476346192382335, 0.5027701180534699, 0.23228947387730262, 0.899584615817241, 0.38395283225107407]} ...

00:17 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.5435985058278772, 0.9064814637534505, 0.6242755721144007, 0.11698635090429324, 0.9398381404011138, 0.6277452822661107, 0.3349721240956204, 0.1393581454561209, 0.7940457867513689]} ...

00:17 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.6201107486529207, 0.5335077458671239, 0.8939031937926526, 0.7886183515034083, 0.15175971224477788, 0.31179089558876866, 0.2485642909004843, 0.7439718979434197, 0.033629081492305805]} ...

00:17 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.5699326959028294, 0.7624824398721165, 0.8767779601980733, 0.34214754054103586, 0.8212751789415457, 0.11072067378151171, 0.8464676465053448, 0.1275759134657492, 0.3973475618313112]} ...

00:17 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7973156362429756, 0.1500024356060389, 0.22932847009311821, 0.722280343136227, 0.7200645328924198, 0.6411835181220088, 0.6939790496226533, 0.5427701709032614, 0.25187387900106206]} ...

00:17 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.34576142390456904, 0.1816795570297455, 0.908459716277495, 0.5834334555866438, 0.4009113316219635, 0.46205960306376825, 0.9472886112778541, 0.1534360679757686, 0.5862712090335955]} ...

00:17 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.5059380900165775, 0.6114930900111044, 0.018208372802458423, 0.8721366965532571, 0.932125070655364, 0.5651766702708501, 0.6966811587945045, 0.9225071312391779, 0.7072679104499673]} ...

00:17 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.15262378900996992, 0.5763307313307965, 0.6067543748782176, 0.4241882582352558, 0.7364705912011603, 0.934373578067538, 0.9255759560554857, 0.4508942874669917, 0.11332672203617118]} ...

00:17 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.9848427148424383, 0.8389141966372895, 0.12475021493514653, 0.9208497984291106, 0.869909372425922, 0.5188861733203595, 0.5913163082013547, 0.3990628035997432, 0.05485616265814913]} ...

00:17 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.3352637219217363, 0.8028731632531516, 0.004731559802302399, 0.33356582177397504, 0.3982288767215842, 0.5374418633776291, 0.9198636308511192, 0.34641135976652465, 0.34701850657603805]} ...

00:18 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7375274979849374, 0.45227271909571815, 0.22468236245768797, 0.4524942721810801, 0.14094293467776187, 0.17646934780757267, 0.4984179359622058, 0.41898355695959744, 0.9148544164779933]} ...

00:18 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.36245765972672145, 0.5806302914430156, 0.6323010614907384, 0.013193147142674803, 0.6635710182795089, 0.17811816327306446, 0.9610742104377082, 0.14874786148033764, 0.41468266131465104]} ...

00:18 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.085441133111836, 0.9968745644207627, 0.5022447908302095, 0.5954254788183118, 0.06716976974068864, 0.7499854743521379, 0.20998460253627618, 0.8980644840117696, 0.20521912651795896]} ...

00:18 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.1907686518916002, 0.03664601288131007, 0.47211973841548827, 0.5648846491492902, 0.06580206856440954, 0.775550063933341, 0.4533435058645528, 0.5244378303006473, 0.4408186706635343]} ...

00:18 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.4008229845692384, 0.5596843672750871, 0.15532472190611943, 0.1820099376822216, 0.8617994424514159, 0.9461208505874193, 0.373371985348125, 0.2708175986762394, 0.6440351432846918]} ...

00:18 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.4087932976809866, 0.025483817024778827, 0.15623698210645373, 0.7160006316245128, 0.6589580495159604, 0.027193282904233142, 0.22204996471675614, 0.23115168910841258, 0.6719255543243685]} ...

00:18 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.01980856670058872, 0.10419817112637539, 0.799936093764652, 0.17862680758812818, 0.6527808332410895, 0.23825896276862185, 0.09953144862006923, 0.24324787378035412, 0.7222947051624736]} ...

00:18 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.8557108984594751, 0.8302368425805349, 0.39724381126549296, 0.6681183280569891, 0.20506379698627938, 0.29321841548798716, 0.8963461849392677, 0.013100623318384961, 0.08559998000138264]} ...

00:18 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.2079654665205127, 0.026629550653432334, 0.18151729154628832, 0.5830832568135952, 0.4214824081374387, 0.8926824439058672, 0.8174618173822671, 0.3418831699627062, 0.25949749108794934]} ...

00:18 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.37975443892645017, 0.5903359130205562, 0.26813683445879394, 0.624186492958349, 0.4094707110260213, 0.552091976133895, 0.43618291648240315, 0.2945363129659635, 0.9484584616314604]} ...

00:18 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7636294335803449, 0.1401991644488759, 0.868481129100323, 0.4874824551296887, 0.894562771671402, 0.7998752704217205, 0.4252709831187865, 0.022567061389285385, 0.26875049164900755]} ...

00:18 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.5416800512394008, 0.6335148720041647, 0.25796189666465896, 0.13944213846541684, 0.8349467437756192, 0.9844037404854817, 0.5257376132844556, 0.17176211791971865, 0.272380095786717]} ...

00:18 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.018488837479811986, 0.914307376679843, 0.11783930778185213, 0.5765588238667021, 0.2741278151651373, 0.5542225847155419, 0.651455246313029, 0.8297588295268309, 0.20650062963342714]} ...

00:19 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.011094729075614628, 0.1369719415058004, 0.9000286399839202, 0.8739026885547589, 0.5974533608600913, 0.60055680874761, 0.6650701708788008, 0.17545374149558723, 0.9144205047303673]} ...

00:19 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.4188286478395839, 0.3832002143966725, 0.5189658135123093, 0.047061270178377544, 0.1663667404192038, 0.7380598130647278, 0.08289038805833396, 0.6031917942554416, 0.24542457477035282]} ...

00:19 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.38935668448057237, 0.2887648673970228, 0.3557371491933026, 0.7190740005937271, 0.2971920034516127, 0.5664479998328675, 0.47610279715887976, 0.6637047982461118, 0.9368360563508256]} ...

00:19 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7325988400005292, 0.2150188845528977, 0.03128001674777855, 0.2623378178953951, 0.5951184229071848, 0.051520670902901924, 0.4964166105765162, 0.5968831646319975, 0.3343104664278791]} ...

00:19 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7709351125254473, 0.10668759330845844, 0.07523026795763518, 0.728215937327983, 0.4955417670745778, 0.6884335561880932, 0.43488385586988587, 0.2464773930357829, 0.8191204074424323]} ...

00:19 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7994359373810823, 0.6947270012073413, 0.2722179227162397, 0.5902716438024002, 0.36103779955033277, 0.09167291511930149, 0.9173218441046965, 0.1369049490558695, 0.9502423300854204]} ...

00:19 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.44606117238065995, 0.18521441554573578, 0.5419467572836203, 0.8729585412928207, 0.7322516639209202, 0.8065804917466636, 0.6588174883740463, 0.6923073368614007, 0.8492107320001627]} ...

00:19 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.2497430420583, 0.48947602114677613, 0.22128732087542025, 0.9876692411958473, 0.9440649337526446, 0.03952286868736905, 0.7056046149984369, 0.9252557925839242, 0.1806572875928208]} ...

00:19 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.5679884360295742, 0.915496748758283, 0.034042583987940266, 0.6974505252201152, 0.2974192724718135, 0.9244037557569927, 0.9710611393408513, 0.9442720624645226, 0.474266795235798]} ...

00:19 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.8620564467242144, 0.8445649435952168, 0.3191685631959313, 0.8289325826032623, 0.03710393395202107, 0.5963102514942048, 0.23008583640397426, 0.12065482908870108, 0.07704550630904623]} ...

00:19 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.696319147000552, 0.33994097627168934, 0.7247942948516086, 0.06544980516486248, 0.31535880879682726, 0.5395373432460997, 0.7907440925224797, 0.3188206276817767, 0.6259287872993654]} ...

00:19 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.8859891504613635, 0.6159016018634863, 0.2330361788061584, 0.024498341478382365, 0.8701118640135398, 0.0213672839093021, 0.874714202516931, 0.5289842403138093, 0.9390737917430448]} ...

00:20 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.798803357450088, 0.9979343171222841, 0.350776744270165, 0.7672115701022337, 0.4019907205178812, 0.47992763274187844, 0.6275427126720482, 0.8736897464749201, 0.9840850608523752]} ...

00:20 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7682965865231317, 0.417825005489117, 0.42141486657684246, 0.7376085433587326, 0.2388532680537255, 0.11056306572008152, 0.35468669542501247, 0.2873102677549163, 0.2963784896439445]} ...

00:20 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.23368439027480495, 0.04218898031739825, 0.01797214733994047, 0.987723617497058, 0.42783035642248873, 0.3843882144949657, 0.6796793179648005, 0.21833206247627765, 0.9499661878318303]} ...

00:20 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7863663799141105, 0.08950206121202829, 0.41763901770734985, 0.8791303957314085, 0.9447375490891812, 0.4674547710987447, 0.6134500480717866, 0.167117242697466, 0.9911695092743629]} ...

00:20 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.23174853421330505, 0.9427375009577121, 0.6496816843274692, 0.6077760211993712, 0.512737242165407, 0.2307467447365869, 0.1766103792023077, 0.22056416044926833, 0.1865196183180401]} ...

00:20 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7796065151193966, 0.35019024664081344, 0.05793689229574003, 0.9691057198777973, 0.8837975063749294, 0.9277595079668934, 0.9949083318641706, 0.17397785969456533, 0.3963023947007685]} ...

00:20 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7582626518565209, 0.6960510159919868, 0.15398051674922106, 0.8158515416781187, 0.224518127779477, 0.2238952330611408, 0.5370207254511242, 0.592980640848219, 0.5801281992170274]} ...

00:20 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.0915776887140151, 0.8774731165441173, 0.2656734825844472, 0.1296019697906759, 0.8887592050609903, 0.9556559330799304, 0.8621414045037241, 0.8095351231174056, 0.6552764564409578]} ...

00:20 StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.5509022848720784, 0.087078061235419, 0.40851236774939165, 0.3727512481606127, 0.2598278083819502, 0.7234477716772161, 0.4959261475052629, 0.08113811128605718, 0.22026118362961636]} ...

To optimize the computation of the projection matrix and the right-hand side for varying basis sizes, we first compute these for the full basis and then extract appropriate sub-matrices:

def compute_proj_errors(basis, V, product):
    G = basis.gramian(product=product)   # Gramian of the full basis
    R = basis.inner(V, product=product)  # inner products with all validation snapshots
    errors = []
    for N in range(len(basis) + 1):
        if N > 0:
            # projection coefficients w.r.t. the first N basis vectors
            v = np.linalg.solve(G[:N, :N], R[:N, :])
        else:
            v = np.zeros((0, len(V)))
        V_proj = basis[:N].lincomb(v.T)
        errors.append(np.max((V - V_proj).norm(product=product)))
    return errors

trivial_errors = compute_proj_errors(trivial_basis, V, fom.h1_0_semi_product)

Here we have used the fact that we can form multiple linear combinations at once by passing multiple rows of linear coefficients to lincomb. The norm method returns a NumPy array of the norms of all vectors in the array with respect to the given inner product Operator. When no product is specified, the Euclidean norms of the vectors are computed.
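As an aside, for vectors held in plain NumPy arrays, forming many linear combinations at once is nothing but a single matrix product. The following is a minimal sketch with made-up data (plain arrays standing in for VectorArrays, not pyMOR API):

```python
import numpy as np

# Three hypothetical "basis vectors" of dimension 4, stored as rows,
# mimicking how a VectorArray holds multiple vectors.
basis = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.],
                  [0., 0., 1., 0.]])

# Two rows of linear coefficients; each row defines one linear combination.
coeffs = np.array([[1.,  2., 3.],
                   [0., -1., 1.]])

# All linear combinations at once as a single matrix product.
combos = coeffs @ basis  # rows: [1, 2, 3, 0] and [0, -1, 1, 0]
```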

Let’s plot the projection errors:

from matplotlib import pyplot as plt
plt.figure()
plt.semilogy(trivial_errors)
plt.ylim(1e-1, 1e1)
plt.show()

Good! We see an exponential decay of the error with growing basis size. However, we can do better. If we want to use a smaller basis than we have snapshots available, just picking the first of these obviously won’t be optimal.

Strong greedy algorithm

The strong greedy algorithm iteratively builds reduced spaces V_N with a small worst-case best-approximation error on a training set of solution snapshots by adding, in each iteration, the currently worst-approximated snapshot vector to the basis of V_N.

We can easily implement it as follows:

def strong_greedy(U, product, N):
    basis = U.space.empty()

    for n in range(N):
        # compute projection errors
        G = basis.gramian(product)
        R = basis.inner(U, product=product)
        lambdas = np.linalg.solve(G, R)
        U_proj = basis.lincomb(lambdas.T)
        errors = (U - U_proj).norm(product)

        # extend basis
        basis.append(U[np.argmax(errors)])

    return basis

Obviously, this algorithm is not optimized as we keep computing inner products we already know, but it will suffice for our purposes. Let’s compute a reduced basis using the strong greedy algorithm:

greedy_basis = strong_greedy(U, fom.h1_0_semi_product, 25)

We compute the approximation errors for the validation set as before:

greedy_errors = compute_proj_errors(greedy_basis, V, fom.h1_0_semi_product)

plt.figure()
plt.semilogy(trivial_errors, label='trivial')
plt.semilogy(greedy_errors, label='greedy')
plt.ylim(1e-1, 1e1)
plt.legend()
plt.show()

Indeed, the strong greedy algorithm constructs better bases than the trivial basis construction algorithm. For compact training sets contained in a Hilbert space, it can actually be shown that the greedy algorithm constructs quasi-optimal spaces in the sense that polynomial or exponential decay of the N-widths d_N yields similar rates for the worst-case best-approximation errors of the constructed V_N.

Orthonormalization required

There is one technical problem with both algorithms, however: the condition numbers of the Gramians used to compute the projection onto V_N explode:

G_trivial = trivial_basis.gramian(fom.h1_0_semi_product)
G_greedy = greedy_basis.gramian(fom.h1_0_semi_product)
trivial_conds, greedy_conds = [], []
for N in range(1, len(U)):
    trivial_conds.append(np.linalg.cond(G_trivial[:N, :N]))
    greedy_conds.append(np.linalg.cond(G_greedy[:N, :N]))
plt.figure()
plt.semilogy(range(1, len(U)), trivial_conds, label='trivial')
plt.semilogy(range(1, len(U)), greedy_conds, label='greedy')
plt.legend()
plt.show()

This is to be expected, as the snapshot vectors in U become more and more linearly dependent as the basis grows.

If we were to use the bases we just constructed to build a reduced-order model, we would quickly get bitten by the limited accuracy of floating-point arithmetic.
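The effect is easy to reproduce with plain NumPy. Here a hypothetical family of smooth snapshots u(mu) = exp(-mu*x) stands in for the thermal-block solutions (an illustration only, not the tutorial's data):

```python
import numpy as np

x = np.linspace(0., 1., 100)
mus = np.linspace(1., 2., 10)

# A smooth, hence nearly linearly dependent, snapshot family u(mu) = exp(-mu*x),
# one snapshot per row.
snapshots = np.exp(-np.outer(mus, x))

# Condition number of the (Euclidean) Gramian for growing basis size.
G = snapshots @ snapshots.T
conds = [np.linalg.cond(G[:N, :N]) for N in (2, 5, 10)]
print(conds)  # grows by many orders of magnitude
```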

There is a simple remedy, however: we orthonormalize our bases. The standard algorithm in pyMOR for this is gram_schmidt, a modified Gram-Schmidt procedure with re-orthogonalization to improve numerical accuracy:

gram_schmidt(greedy_basis, product=fom.h1_0_semi_product, copy=False)
gram_schmidt(trivial_basis, product=fom.h1_0_semi_product, copy=False)

The copy=False argument tells the algorithm to orthonormalize the given VectorArray in-place instead of returning a new array with the orthonormalized vectors.
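To illustrate the underlying algorithm, here is a plain-NumPy sketch of modified Gram-Schmidt with one re-orthogonalization sweep with respect to a given inner-product matrix. This only shows the idea and is not pyMOR's implementation:

```python
import numpy as np

def mgs(vectors, product=None, reiterate=True):
    """Orthonormalize the rows of `vectors` w.r.t. the inner product
    induced by the SPD matrix `product` (Euclidean if None).
    Sketch of modified Gram-Schmidt, not pyMOR's implementation."""
    P = np.eye(vectors.shape[1]) if product is None else product
    inner = lambda a, b: a @ P @ b
    basis = []
    for v in vectors:
        v = v.copy()
        # one orthogonalization sweep, repeated once to counter
        # floating-point cancellation ("re-orthogonalization")
        for _ in range(2 if reiterate else 1):
            for b in basis:
                v -= inner(b, v) * b
        norm = np.sqrt(inner(v, v))
        if norm > 1e-12:  # drop (numerically) linearly dependent vectors
            basis.append(v / norm)
    return np.array(basis)

# the third vector is the sum of the first two, so it gets dropped
Q = mgs(np.array([[1., 1., 0.], [1., 0., 1.], [2., 1., 1.]]))
print(Q @ Q.T)  # identity up to round-off
```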

Since the vectors in greedy_basis and trivial_basis are now orthonormal, their Gramians are identity matrices (up to numerics). Thus, their condition numbers should be near 1:

G_trivial = trivial_basis.gramian(fom.h1_0_semi_product)
G_greedy = greedy_basis.gramian(fom.h1_0_semi_product)

print(f'trivial: {np.linalg.cond(G_trivial)}, '
      f'greedy: {np.linalg.cond(G_greedy)}')
trivial: 1.0000000000000038, greedy: 1.000000000000002

Orthonormalizing the bases does not change their linear span, so best-approximation errors stay the same. Also, we can compute these errors now more easily by exploiting orthogonality:

def compute_proj_errors_orth_basis(basis, V, product):
    errors = []
    for N in range(len(basis) + 1):
        # for an orthonormal basis, the projection coefficients are
        # simply the inner products with the basis vectors
        v = V.inner(basis[:N], product=product)
        V_proj = basis[:N].lincomb(v)
        errors.append(np.max((V - V_proj).norm(product)))
    return errors

trivial_errors = compute_proj_errors_orth_basis(trivial_basis, V, fom.h1_0_semi_product)
greedy_errors  = compute_proj_errors_orth_basis(greedy_basis, V, fom.h1_0_semi_product)

plt.figure()
plt.semilogy(trivial_errors, label='trivial')
plt.semilogy(greedy_errors, label='greedy')
plt.ylim(1e-1, 1e1)
plt.legend()
plt.show()

Proper Orthogonal Decomposition

Another popular method to create a reduced basis out of snapshot data is the so-called Proper Orthogonal Decomposition (POD) which can be seen as a non-centered version of Principal Component Analysis (PCA). First we build a snapshot matrix

\begin{split}A := \begin{bmatrix} \vdots & \vdots & \cdots & \vdots \\ u(\mu_1) & u(\mu_2) & \cdots & u(\mu_K)\\ \vdots & \vdots & \cdots & \vdots \end{bmatrix},\end{split}

where K denotes the total number of solution snapshots. Then we compute the SVD of A

A = U \Sigma V^T,

where \Sigma is an r \times r-diagonal matrix, U is an n \times r-matrix and V is a K \times r-matrix. Here n denotes the dimension of the solution_space and r is the rank of A. The diagonal entries \sigma_i of \Sigma are the singular values of A, which are assumed to be monotonically decreasing. The pairwise orthogonal and normalized columns of U and V are the left and right singular vectors of A, respectively. The i-th POD mode is then simply the i-th left-singular vector of A, i.e. the i-th column of U. The larger the corresponding singular value, the more important this vector is for the approximation of the snapshot data. In fact, if we let V_N be the span of the first N left-singular vectors of A, then the following error identity holds:

\begin{split}\sum_{k = 1}^K \inf_{v \in V_N} \|u(\mu_k) - v\|^2 = \min_{\substack{W_N \subseteq V\\ \operatorname{dim}(W_N) \leq N}} \sum_{k = 1}^K \inf_{v \in W_N} \|u(\mu_k) - v\|^2 = \sum_{i = N+1}^{r} \sigma_i^2.\end{split}

Thus, the linear spaces produced by the POD are actually optimal, albeit in a different error measure: instead of the worst-case best-approximation error over all parameter values, the POD minimizes the \ell^2-sum of all best-approximation errors. So in the mean-squared sense the POD spaces are optimal, but there might be individual parameter values for which the best-approximation error is quite large.
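For the Euclidean inner product, this error identity is easy to verify numerically. The following sketch uses a random low-rank snapshot matrix (not the thermal-block data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 8)) @ rng.standard_normal((8, 20))  # 20 snapshots as columns, rank 8

U_, s, _ = np.linalg.svd(A, full_matrices=False)

N = 3
# The best approximation of each snapshot in the span of the first N
# left-singular vectors is its orthogonal projection onto that span.
A_proj = U_[:, :N] @ (U_[:, :N].T @ A)

# l2-sum of squared best-approximation errors vs. tail of the squared singular values
print(np.sum((A - A_proj) ** 2), np.sum(s[N:] ** 2))  # equal up to round-off
```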

So far we have completely neglected that the snapshot vectors may lie in a Hilbert space with some given inner product. To account for that, instead of the snapshot matrix A, we consider the linear mapping that sends the k-th canonical basis vector e_k of \mathbb{R}^K to the vector u(\mu_k) in the solution_space:

\Phi: \mathbb{R}^K \to V, \ e_k \mapsto u(\mu_k).

Also for this finite-rank (hence compact) operator there exists an SVD of the form

\Phi(v) = \sum_{i=1}^r u_i \cdot \sigma_i \cdot (v_i, v) \qquad \forall v \in \mathbb{R}^K,

with orthonormal vectors u_i and v_i that generalizes the SVD of a matrix.
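One common way to compute this generalized SVD is the so-called method of snapshots, which the log output below also mentions: eigen-decompose the K x K Gramian of the snapshots with respect to the given inner product. Here is a plain-NumPy sketch of the idea with random data; it illustrates the principle only and is not pyMOR's pod implementation:

```python
import numpy as np

def pod_sketch(snapshots, P, num_modes):
    """POD w.r.t. the inner product induced by the SPD matrix P, via the
    method of snapshots: eigen-decompose the K x K Gramian.
    A sketch of the idea only, not pyMOR's pod implementation."""
    A = snapshots.T                   # columns are the snapshots u(mu_k)
    G = A.T @ P @ A                   # K x K Gramian w.r.t. P
    evals, evecs = np.linalg.eigh(G)  # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]
    svals = np.sqrt(np.clip(evals[:num_modes], 0., None))
    # POD modes (left-singular vectors): u_i = A v_i / sigma_i
    modes = (A @ evecs[:, :num_modes]) / svals
    return modes.T, svals

rng = np.random.default_rng(1)
snapshots = rng.standard_normal((10, 40))  # 10 random snapshots of dimension 40
W = rng.standard_normal((40, 40))
P = W @ W.T + 40. * np.eye(40)             # some SPD inner-product matrix
pod_modes, svals = pod_sketch(snapshots, P, 5)

# the modes are orthonormal w.r.t. the P-inner product
print(np.allclose(pod_modes @ P @ pod_modes.T, np.eye(5)))  # True
```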

The POD in this more general form is implemented in pyMOR by the pod method, which can be called as follows:

pod_basis, pod_singular_values = pod(U, product=fom.h1_0_semi_product, modes=25)
Log Output

00:27 pod: Computing SVD ...

00:27 | method_of_snapshots: Computing Gramian (25 vectors) ...

00:27 | method_of_snapshots: Computing eigenvalue decomposition ...

00:27 | method_of_snapshots: Computing left-singular vectors (25 vectors) ...

00:27 pod: Checking orthonormality ...

We said that the POD modes (left-singular vectors) are orthonormal with respect to the inner product on the target Hilbert space. Let's check that:

np.linalg.cond(pod_basis.gramian(fom.h1_0_semi_product))
1.000000000010063

Now, let us compare how the POD performs against the greedy algorithm in the worst-case best-approximation error:

pod_errors = compute_proj_errors_orth_basis(pod_basis, V, fom.h1_0_semi_product)

plt.figure()
plt.semilogy(trivial_errors, label='trivial')
plt.semilogy(greedy_errors, label='greedy')
plt.semilogy(pod_errors, label='POD')
plt.ylim(1e-1, 1e1)
plt.legend()
plt.show()
_images/tutorial_basis_generation_32_0.png

As it turns out, the POD spaces even perform slightly better than the greedy spaces. Why is that? Note that for finite training or validation sets, the two error measures under consideration are equivalent. In particular:

\sup_{k = 1,\ldots,K} \inf_{v \in V_N} \|u(\mu_k) - v\| \leq \left[\sum_{k = 1}^K \inf_{v \in V_N} \|u(\mu_k) - v\|^2\right]^{1/2} \leq \sqrt{K} \cdot \sup_{k = 1,\ldots,K} \inf_{v \in V_N} \|u(\mu_k) - v\|.

Since POD spaces are optimal in the \ell^2-error, they miss the error of the optimal space in the Kolmogorov sense by at most a factor of \sqrt{K}, which in our case is 5. On the other hand, the greedy algorithm also only produces quasi-optimal spaces. For very large training sets with a more complex parameter dependence, the differences between both spaces may be more significant.
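The chain of inequalities above can be sanity-checked for any finite list of errors (the values below are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical best-approximation errors for K = 5 parameter values:
errors = np.array([0.5, 0.1, 0.3, 0.2, 0.05])
K = len(errors)

sup_err = errors.max()               # worst-case error
l2_err = np.sqrt(np.sum(errors**2))  # l^2-sum error

# sup <= l^2-sum <= sqrt(K) * sup
assert sup_err <= l2_err <= np.sqrt(K) * sup_err
```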

Finally, it is often insightful to look at the POD modes themselves:

fom.visualize(pod_basis)

As you can see, the first (more important) basis vectors account for the approximation of the solutions in the bulk of the subdomains, whereas the higher modes are responsible for approximating the solutions at the subdomain interfaces.

Weak greedy algorithm

Both POD and the strong greedy algorithm require computing the solution u(\mu) for every parameter value \mu in the training_set. Since each of these solutions requires an expensive full-order solve, it is clear right from the start that we cannot afford very large training sets; otherwise we would not be interested in model order reduction in the first place. This becomes a problem when the number of Parameters increases and/or the solution depends less uniformly on the Parameters.

Reduced basis methods have a very elegant solution to this problem, which allows training sets that are orders of magnitude larger than the training sets affordable for POD: instead of computing the best-approximation error we only compute a surrogate

\inf_{v \in V_N} \|u(\mu) - v\| \approx \mathcal{E}(\mu)

for it. Replacing the best-approximation error by this surrogate in the strong greedy algorithm, we arrive at the weak greedy algorithm. If the surrogate \mathcal{E}(\mu) is an upper and lower bound to the best-approximation error up to some fixed factor, it can still be shown that the produced reduced spaces are quasi-optimal in the same sense as for the strong greedy algorithm, although the involved constants might be worse, depending on the efficiency of \mathcal{E}(\mu).

Now here comes the trick: to get a surrogate that can be quickly computed, we can use our current reduced-order model for it. More precisely, we choose \mathcal{E}(\mu) to be of the form

\mathcal{E}(\mu):= \operatorname{Err-Est}(\operatorname{ROM-Solve}(\mu), \mu).

So to compute the surrogate for fixed parameter values \mu, we first solve the reduced-order model for the current reduced basis for these parameter values and then compute an estimate for the model order reduction error.

We won’t go into further detail in this tutorial, but for nice problem classes (linear coercive problems with an affine dependence of the system matrix on the Parameters), one can derive a posteriori error estimators for which equivalence with the best-approximation error can be shown and which can be computed efficiently, independently of the size of the full-order model. Here we will only give a simple example of how to use the weak greedy algorithm for our problem at hand.
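The overall structure of the greedy loop can be sketched in a few lines. The toy below uses plain NumPy with names of our own choosing; for simplicity its surrogate is the true projection error, which recovers the strong greedy algorithm as a special case, whereas a weak greedy method would replace it by a cheap estimate such as a residual-based error bound:

```python
import numpy as np

# Toy greedy sketch: "solutions" u(mu) are columns of a random rank-5
# matrix; parameter values are represented by column indices.
rng = np.random.default_rng(2)
U_all = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 12))
training_set = range(U_all.shape[1])

basis = np.empty((40, 0))

def surrogate(mu):
    # Here: the exact best-approximation error in span(basis).  A weak
    # greedy method would use a cheap error estimate instead.
    u = U_all[:, mu]
    if basis.shape[1] == 0:
        return np.linalg.norm(u)
    Q, _ = np.linalg.qr(basis)
    return np.linalg.norm(u - Q @ (Q.T @ u))

for _ in range(5):                               # max_extensions = 5
    mu_star = max(training_set, key=surrogate)   # worst estimated error
    basis = np.column_stack([basis, U_all[:, mu_star]])

# after 5 extensions the rank-5 snapshot set is resolved exactly:
assert max(surrogate(mu) for mu in training_set) < 1e-8
```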

In order to do so, we need to be able to build a reduced-order model with an appropriate error estimator. For the given (linear coercive) thermal block problem we can use CoerciveRBReductor:

reductor = CoerciveRBReductor(
    fom,
    product=fom.h1_0_semi_product,
    coercivity_estimator=ExpressionParameterFunctional('min(diffusion)', fom.parameters)
)

Here product specifies the inner product with respect to which we want to compute the model order reduction error. With coercivity_estimator we need to specify a function which estimates the coercivity constant of the system matrix with respect to the given inner product. In our case, this is just the minimum of the diffusivities over all subdomains.
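Why min(diffusion) works can be checked on a small affine toy problem of our own construction (plain NumPy, not the thermal block discretization): if A(\mu) = \sum_q \mu_q A_q with symmetric positive definite blocks A_q and the inner product is induced by M = \sum_q A_q, then \min_q \mu_q is a lower bound for the coercivity constant of A(\mu) with respect to M:

```python
import numpy as np

# Toy affine decomposition A(mu) = sum_q mu_q * A_q with SPD blocks.
rng = np.random.default_rng(3)
Aq = []
for _ in range(3):
    B = rng.standard_normal((10, 10))
    Aq.append(B.T @ B)                       # symmetric positive definite block
M = sum(Aq)                                  # inner-product matrix

mu = np.array([0.2, 0.7, 0.05])
A = sum(m * A_q for m, A_q in zip(mu, Aq))   # system matrix A(mu)

# smallest generalized eigenvalue of (A, M) via a Cholesky transform:
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
lam_min = np.linalg.eigvalsh(Linv @ A @ Linv.T).min()

# A(mu) - min(mu) * M = sum_q (mu_q - min(mu)) * A_q is positive
# semidefinite, so min(mu) bounds the coercivity constant from below:
assert lam_min >= mu.min() - 1e-10
```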

Now we can call rb_greedy, which builds the surrogate \mathcal{E}(\mu) from fom and the reductor we just constructed and passes it to the weak_greedy method. Furthermore, we need to specify the number of basis vectors we want to compute (we could also have specified an error tolerance) and the training set. As the surrogate is cheap to evaluate, we choose a training set of 1000 different parameter values:

greedy_data = rb_greedy(fom, reductor, parameter_space.sample_randomly(1000),
                        max_extensions=25)
Log Output

00:33 weak_greedy: Started greedy search on training set of size 1000.

00:33 weak_greedy: Estimating errors ...

00:33 | RBSurrogate: Reducing ...

00:33 | | CoerciveRBReductor: Operator projection ...

00:33 | | CoerciveRBReductor: Assembling error estimator ...

00:33 | | | ResidualReductor: Estimating residual range ...

00:33 | | | | estimate_image_hierarchical: Estimating image for basis vector -1 ...

00:33 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:33 | | | ResidualReductor: Projecting residual operator ...

00:33 | | CoerciveRBReductor: Building ROM ...

00:33 weak_greedy: Maximum error after 0 extensions: 1679.219624132986 (mu = {diffusion: [0.42528566261915013, 0.037667333289867495, 0.1279540145422953, 0.7655701958408092, 0.0001116335918906045, 0.41662417189944395, 0.5225579300674799, 0.054728956558972204, 0.9730808551560356]})

00:33 weak_greedy: Extending surrogate for mu = {diffusion: [0.42528566261915013, 0.037667333289867495, 0.1279540145422953, 0.7655701958408092, 0.0001116335918906045, 0.41662417189944395, 0.5225579300674799, 0.054728956558972204, 0.9730808551560356]} ...

00:33 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.42528566261915013, 0.037667333289867495, 0.1279540145422953, 0.7655701958408092, 0.0001116335918906045, 0.41662417189944395, 0.5225579300674799, 0.054728956558972204, 0.9730808551560356]} ...

00:33 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.42528566261915013, 0.037667333289867495, 0.1279540145422953, 0.7655701958408092, 0.0001116335918906045, 0.41662417189944395, 0.5225579300674799, 0.054728956558972204, 0.9730808551560356]} ...

00:33 | RBSurrogate: Extending basis with solution snapshot ...

00:33 | RBSurrogate: Reducing ...

00:33 | | CoerciveRBReductor: Operator projection ...

00:33 | | CoerciveRBReductor: Assembling error estimator ...

00:33 | | | ResidualReductor: Estimating residual range ...

00:33 | | | | estimate_image_hierarchical: Estimating image for basis vector 0 ...

00:34 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:34 | | | | | gram_schmidt: Removing vector 1 of norm 4.176384069555026e-21

00:34 | | | | | gram_schmidt: Orthonormalizing vector 2 again

00:34 | | | | | gram_schmidt: Orthonormalizing vector 5 again

00:34 | | | | | gram_schmidt: Orthonormalizing vector 7 again

00:34 | | | | | gram_schmidt: Orthonormalizing vector 8 again

00:34 | | | | | gram_schmidt: Orthonormalizing vector 9 again

00:34 | | | | | gram_schmidt: Removing linearly dependent vector 10

00:34 | | | ResidualReductor: Projecting residual operator ...

00:34 | | CoerciveRBReductor: Building ROM ...

00:34 weak_greedy: Estimating errors ...

00:34 weak_greedy: Maximum error after 1 extensions: 1424.8951837712245 (mu = {diffusion: [0.779842901921667, 0.1183068004288595, 0.00013071577349787764, 0.7121658321760169, 0.356660516368607, 0.25455588348509844, 0.012996180042546487, 0.5403453768208235, 0.8511656203524391]})

00:34 weak_greedy: Extending surrogate for mu = {diffusion: [0.779842901921667, 0.1183068004288595, 0.00013071577349787764, 0.7121658321760169, 0.356660516368607, 0.25455588348509844, 0.012996180042546487, 0.5403453768208235, 0.8511656203524391]} ...

00:34 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.779842901921667, 0.1183068004288595, 0.00013071577349787764, 0.7121658321760169, 0.356660516368607, 0.25455588348509844, 0.012996180042546487, 0.5403453768208235, 0.8511656203524391]} ...

00:34 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.779842901921667, 0.1183068004288595, 0.00013071577349787764, 0.7121658321760169, 0.356660516368607, 0.25455588348509844, 0.012996180042546487, 0.5403453768208235, 0.8511656203524391]} ...

00:34 | RBSurrogate: Extending basis with solution snapshot ...

00:34 | RBSurrogate: Reducing ...

00:34 | | CoerciveRBReductor: Operator projection ...

00:34 | | CoerciveRBReductor: Assembling error estimator ...

00:34 | | | ResidualReductor: Estimating residual range ...

00:34 | | | | estimate_image_hierarchical: Estimating image for basis vector 1 ...

00:34 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:34 | | | | | gram_schmidt: Removing vector 9 of norm 5.282889070834107e-24

00:34 | | | | | gram_schmidt: Orthonormalizing vector 10 again

00:34 | | | | | gram_schmidt: Orthonormalizing vector 11 again

00:34 | | | | | gram_schmidt: Orthonormalizing vector 12 again

00:34 | | | | | gram_schmidt: Orthonormalizing vector 13 again

00:34 | | | | | gram_schmidt: Orthonormalizing vector 14 again

00:34 | | | | | gram_schmidt: Orthonormalizing vector 15 again

00:34 | | | | | gram_schmidt: Orthonormalizing vector 16 again

00:34 | | | | | gram_schmidt: Orthonormalizing vector 17 again

00:34 | | | | | gram_schmidt: Removing linearly dependent vector 18

00:34 | | | ResidualReductor: Projecting residual operator ...

00:34 | | CoerciveRBReductor: Building ROM ...

00:34 weak_greedy: Estimating errors ...

00:35 weak_greedy: Maximum error after 2 extensions: 1208.6520537551214 (mu = {diffusion: [0.4293798334276248, 0.4079441383576739, 0.11403387491759907, 0.676174174422597, 0.7021269786606531, 0.7209211297301925, 0.7284980984640361, 0.00015282164960357146, 0.9539254117663375]})

00:35 weak_greedy: Extending surrogate for mu = {diffusion: [0.4293798334276248, 0.4079441383576739, 0.11403387491759907, 0.676174174422597, 0.7021269786606531, 0.7209211297301925, 0.7284980984640361, 0.00015282164960357146, 0.9539254117663375]} ...

00:35 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.4293798334276248, 0.4079441383576739, 0.11403387491759907, 0.676174174422597, 0.7021269786606531, 0.7209211297301925, 0.7284980984640361, 0.00015282164960357146, 0.9539254117663375]} ...

00:35 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.4293798334276248, 0.4079441383576739, 0.11403387491759907, 0.676174174422597, 0.7021269786606531, 0.7209211297301925, 0.7284980984640361, 0.00015282164960357146, 0.9539254117663375]} ...

00:35 | RBSurrogate: Extending basis with solution snapshot ...

00:35 | RBSurrogate: Reducing ...

00:35 | | CoerciveRBReductor: Operator projection ...

00:35 | | CoerciveRBReductor: Assembling error estimator ...

00:35 | | | ResidualReductor: Estimating residual range ...

00:35 | | | | estimate_image_hierarchical: Estimating image for basis vector 2 ...

00:35 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:35 | | | | | gram_schmidt: Removing vector 17 of norm 1.85705593607105e-20

00:35 | | | | | gram_schmidt: Orthonormalizing vector 18 again

00:35 | | | | | gram_schmidt: Orthonormalizing vector 19 again

00:35 | | | | | gram_schmidt: Orthonormalizing vector 20 again

00:35 | | | | | gram_schmidt: Orthonormalizing vector 21 again

00:35 | | | | | gram_schmidt: Orthonormalizing vector 22 again

00:35 | | | | | gram_schmidt: Orthonormalizing vector 23 again

00:35 | | | | | gram_schmidt: Orthonormalizing vector 24 again

00:35 | | | | | gram_schmidt: Orthonormalizing vector 25 again

00:35 | | | | | gram_schmidt: Removing linearly dependent vector 26

00:35 | | | ResidualReductor: Projecting residual operator ...

00:35 | | CoerciveRBReductor: Building ROM ...

00:35 weak_greedy: Estimating errors ...

00:36 weak_greedy: Maximum error after 3 extensions: 783.5372141152287 (mu = {diffusion: [0.05442660161942154, 0.7486703696027873, 0.31765503658159056, 0.00023467953518487152, 0.5111780263233061, 0.04694722333675037, 0.2762419408651174, 0.7070057898069706, 0.06278335265970368]})

00:36 weak_greedy: Extending surrogate for mu = {diffusion: [0.05442660161942154, 0.7486703696027873, 0.31765503658159056, 0.00023467953518487152, 0.5111780263233061, 0.04694722333675037, 0.2762419408651174, 0.7070057898069706, 0.06278335265970368]} ...

00:36 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.05442660161942154, 0.7486703696027873, 0.31765503658159056, 0.00023467953518487152, 0.5111780263233061, 0.04694722333675037, 0.2762419408651174, 0.7070057898069706, 0.06278335265970368]} ...

00:36 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.05442660161942154, 0.7486703696027873, 0.31765503658159056, 0.00023467953518487152, 0.5111780263233061, 0.04694722333675037, 0.2762419408651174, 0.7070057898069706, 0.06278335265970368]} ...

00:36 | RBSurrogate: Extending basis with solution snapshot ...

00:36 | RBSurrogate: Reducing ...

00:36 | | CoerciveRBReductor: Operator projection ...

00:36 | | CoerciveRBReductor: Assembling error estimator ...

00:36 | | | ResidualReductor: Estimating residual range ...

00:36 | | | | estimate_image_hierarchical: Estimating image for basis vector 3 ...

00:36 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:36 | | | | | gram_schmidt: Removing vector 25 of norm 1.0026605552259645e-23

00:36 | | | | | gram_schmidt: Orthonormalizing vector 26 again

00:36 | | | | | gram_schmidt: Orthonormalizing vector 27 again

00:36 | | | | | gram_schmidt: Orthonormalizing vector 28 again

00:36 | | | | | gram_schmidt: Orthonormalizing vector 29 again

00:36 | | | | | gram_schmidt: Orthonormalizing vector 30 again

00:36 | | | | | gram_schmidt: Orthonormalizing vector 31 again

00:36 | | | | | gram_schmidt: Orthonormalizing vector 32 again

00:36 | | | | | gram_schmidt: Orthonormalizing vector 33 again

00:36 | | | | | gram_schmidt: Orthonormalizing vector 34 again

00:36 | | | ResidualReductor: Projecting residual operator ...

00:36 | | CoerciveRBReductor: Building ROM ...

00:36 weak_greedy: Estimating errors ...

00:37 weak_greedy: Maximum error after 4 extensions: 557.6026048187521 (mu = {diffusion: [0.00032701551443371194, 0.9150100375937524, 0.8438950232898059, 0.7423180022637984, 0.16789781418646402, 0.12045153985236506, 0.06769798899743591, 0.7240567858540439, 0.9288150798918166]})

00:37 weak_greedy: Extending surrogate for mu = {diffusion: [0.00032701551443371194, 0.9150100375937524, 0.8438950232898059, 0.7423180022637984, 0.16789781418646402, 0.12045153985236506, 0.06769798899743591, 0.7240567858540439, 0.9288150798918166]} ...

00:37 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.00032701551443371194, 0.9150100375937524, 0.8438950232898059, 0.7423180022637984, 0.16789781418646402, 0.12045153985236506, 0.06769798899743591, 0.7240567858540439, 0.9288150798918166]} ...

00:37 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.00032701551443371194, 0.9150100375937524, 0.8438950232898059, 0.7423180022637984, 0.16789781418646402, 0.12045153985236506, 0.06769798899743591, 0.7240567858540439, 0.9288150798918166]} ...

00:37 | RBSurrogate: Extending basis with solution snapshot ...

00:37 | RBSurrogate: Reducing ...

00:37 | | CoerciveRBReductor: Operator projection ...

00:37 | | CoerciveRBReductor: Assembling error estimator ...

00:37 | | | ResidualReductor: Estimating residual range ...

00:37 | | | | estimate_image_hierarchical: Estimating image for basis vector 4 ...

00:37 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:37 | | | | | gram_schmidt: Removing vector 34 of norm 4.07079518056062e-20

00:37 | | | | | gram_schmidt: Orthonormalizing vector 35 again

00:37 | | | | | gram_schmidt: Orthonormalizing vector 36 again

00:37 | | | | | gram_schmidt: Orthonormalizing vector 37 again

00:37 | | | | | gram_schmidt: Orthonormalizing vector 38 again

00:37 | | | | | gram_schmidt: Orthonormalizing vector 39 again

00:37 | | | | | gram_schmidt: Orthonormalizing vector 40 again

00:37 | | | | | gram_schmidt: Orthonormalizing vector 41 again

00:37 | | | | | gram_schmidt: Orthonormalizing vector 42 again

00:37 | | | | | gram_schmidt: Removing linearly dependent vector 43

00:37 | | | ResidualReductor: Projecting residual operator ...

00:37 | | CoerciveRBReductor: Building ROM ...

00:37 weak_greedy: Estimating errors ...

00:38 weak_greedy: Maximum error after 5 extensions: 531.5997929279478 (mu = {diffusion: [0.8312626805094335, 0.7667916795027019, 0.35070762718800114, 0.37687294671143506, 0.5336010781102241, 0.00034098058466603586, 0.24132021160442405, 0.2083109854273536, 0.25055789845508847]})

00:38 weak_greedy: Extending surrogate for mu = {diffusion: [0.8312626805094335, 0.7667916795027019, 0.35070762718800114, 0.37687294671143506, 0.5336010781102241, 0.00034098058466603586, 0.24132021160442405, 0.2083109854273536, 0.25055789845508847]} ...

00:38 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.8312626805094335, 0.7667916795027019, 0.35070762718800114, 0.37687294671143506, 0.5336010781102241, 0.00034098058466603586, 0.24132021160442405, 0.2083109854273536, 0.25055789845508847]} ...

00:38 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.8312626805094335, 0.7667916795027019, 0.35070762718800114, 0.37687294671143506, 0.5336010781102241, 0.00034098058466603586, 0.24132021160442405, 0.2083109854273536, 0.25055789845508847]} ...

00:38 | RBSurrogate: Extending basis with solution snapshot ...

00:38 | RBSurrogate: Reducing ...

00:38 | | CoerciveRBReductor: Operator projection ...

00:38 | | CoerciveRBReductor: Assembling error estimator ...

00:38 | | | ResidualReductor: Estimating residual range ...

00:38 | | | | estimate_image_hierarchical: Estimating image for basis vector 5 ...

00:38 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:38 | | | | | gram_schmidt: Removing vector 42 of norm 1.0624340002778586e-22

00:38 | | | | | gram_schmidt: Orthonormalizing vector 43 again

00:38 | | | | | gram_schmidt: Orthonormalizing vector 44 again

00:38 | | | | | gram_schmidt: Orthonormalizing vector 45 again

00:38 | | | | | gram_schmidt: Orthonormalizing vector 46 again

00:38 | | | | | gram_schmidt: Orthonormalizing vector 47 again

00:38 | | | | | gram_schmidt: Orthonormalizing vector 48 again

00:38 | | | | | gram_schmidt: Orthonormalizing vector 49 again

00:38 | | | | | gram_schmidt: Orthonormalizing vector 50 again

00:38 | | | | | gram_schmidt: Orthonormalizing vector 51 again

00:38 | | | ResidualReductor: Projecting residual operator ...

00:38 | | CoerciveRBReductor: Building ROM ...

00:38 weak_greedy: Estimating errors ...

00:39 weak_greedy: Maximum error after 6 extensions: 369.48823917960414 (mu = {diffusion: [0.023208357400945415, 0.37306979916857286, 0.14887340502076796, 0.5584286391224649, 0.9173571666165982, 0.3915405823609823, 0.8449658822348769, 0.0003408504304146873, 0.1685977919948383]})

00:39 weak_greedy: Extending surrogate for mu = {diffusion: [0.023208357400945415, 0.37306979916857286, 0.14887340502076796, 0.5584286391224649, 0.9173571666165982, 0.3915405823609823, 0.8449658822348769, 0.0003408504304146873, 0.1685977919948383]} ...

00:39 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.023208357400945415, 0.37306979916857286, 0.14887340502076796, 0.5584286391224649, 0.9173571666165982, 0.3915405823609823, 0.8449658822348769, 0.0003408504304146873, 0.1685977919948383]} ...

00:39 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.023208357400945415, 0.37306979916857286, 0.14887340502076796, 0.5584286391224649, 0.9173571666165982, 0.3915405823609823, 0.8449658822348769, 0.0003408504304146873, 0.1685977919948383]} ...

00:39 | RBSurrogate: Extending basis with solution snapshot ...

00:39 | | gram_schmidt: Orthonormalizing vector 6 again

00:39 | RBSurrogate: Reducing ...

00:39 | | CoerciveRBReductor: Operator projection ...

00:39 | | CoerciveRBReductor: Assembling error estimator ...

00:39 | | | ResidualReductor: Estimating residual range ...

00:39 | | | | estimate_image_hierarchical: Estimating image for basis vector 6 ...

00:39 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:39 | | | | | gram_schmidt: Removing vector 51 of norm 2.0132511471087368e-18

00:39 | | | | | gram_schmidt: Orthonormalizing vector 52 again

00:39 | | | | | gram_schmidt: Orthonormalizing vector 53 again

00:39 | | | | | gram_schmidt: Orthonormalizing vector 54 again

00:39 | | | | | gram_schmidt: Orthonormalizing vector 55 again

00:39 | | | | | gram_schmidt: Orthonormalizing vector 56 again

00:39 | | | | | gram_schmidt: Orthonormalizing vector 57 again

00:39 | | | | | gram_schmidt: Orthonormalizing vector 58 again

00:39 | | | | | gram_schmidt: Orthonormalizing vector 59 again

00:39 | | | | | gram_schmidt: Orthonormalizing vector 60 again

00:39 | | | ResidualReductor: Projecting residual operator ...

00:40 | | CoerciveRBReductor: Building ROM ...

00:40 weak_greedy: Estimating errors ...

00:40 weak_greedy: Maximum error after 7 extensions: 211.51563996370336 (mu = {diffusion: [0.7333545759259906, 0.0005102484102309538, 0.5962230511354366, 0.915951409653657, 0.2880053006838097, 0.18560742266019342, 0.6517146268989391, 0.7126301471576634, 0.6880617159000861]})

00:40 weak_greedy: Extending surrogate for mu = {diffusion: [0.7333545759259906, 0.0005102484102309538, 0.5962230511354366, 0.915951409653657, 0.2880053006838097, 0.18560742266019342, 0.6517146268989391, 0.7126301471576634, 0.6880617159000861]} ...

00:40 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.7333545759259906, 0.0005102484102309538, 0.5962230511354366, 0.915951409653657, 0.2880053006838097, 0.18560742266019342, 0.6517146268989391, 0.7126301471576634, 0.6880617159000861]} ...

00:40 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7333545759259906, 0.0005102484102309538, 0.5962230511354366, 0.915951409653657, 0.2880053006838097, 0.18560742266019342, 0.6517146268989391, 0.7126301471576634, 0.6880617159000861]} ...

00:40 | RBSurrogate: Extending basis with solution snapshot ...

00:40 | RBSurrogate: Reducing ...

00:40 | | CoerciveRBReductor: Operator projection ...

00:40 | | CoerciveRBReductor: Assembling error estimator ...

00:40 | | | ResidualReductor: Estimating residual range ...

00:40 | | | | estimate_image_hierarchical: Estimating image for basis vector 7 ...

00:40 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:40 | | | | | gram_schmidt: Removing vector 60 of norm 1.3470573881545163e-19

00:40 | | | | | gram_schmidt: Orthonormalizing vector 61 again

00:40 | | | | | gram_schmidt: Orthonormalizing vector 62 again

00:40 | | | | | gram_schmidt: Orthonormalizing vector 63 again

00:40 | | | | | gram_schmidt: Orthonormalizing vector 64 again

00:40 | | | | | gram_schmidt: Orthonormalizing vector 65 again

00:40 | | | | | gram_schmidt: Orthonormalizing vector 66 again

00:41 | | | | | gram_schmidt: Orthonormalizing vector 67 again

00:41 | | | | | gram_schmidt: Orthonormalizing vector 68 again

00:41 | | | | | gram_schmidt: Removing linearly dependent vector 69

00:41 | | | ResidualReductor: Projecting residual operator ...

00:41 | | CoerciveRBReductor: Building ROM ...

00:41 weak_greedy: Estimating errors ...

00:41 weak_greedy: Maximum error after 8 extensions: 189.43142981045602 (mu = {diffusion: [0.0005015038709373388, 0.8059105800799345, 0.07934168116931582, 0.19157046594662863, 0.3862977716595597, 0.484910253842132, 0.35505961630236044, 0.3207800937677954, 0.9639196204719811]})

00:41 weak_greedy: Extending surrogate for mu = {diffusion: [0.0005015038709373388, 0.8059105800799345, 0.07934168116931582, 0.19157046594662863, 0.3862977716595597, 0.484910253842132, 0.35505961630236044, 0.3207800937677954, 0.9639196204719811]} ...

00:41 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.0005015038709373388, 0.8059105800799345, 0.07934168116931582, 0.19157046594662863, 0.3862977716595597, 0.484910253842132, 0.35505961630236044, 0.3207800937677954, 0.9639196204719811]} ...

00:41 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.0005015038709373388, 0.8059105800799345, 0.07934168116931582, 0.19157046594662863, 0.3862977716595597, 0.484910253842132, 0.35505961630236044, 0.3207800937677954, 0.9639196204719811]} ...

00:41 | RBSurrogate: Extending basis with solution snapshot ...

00:41 | | gram_schmidt: Orthonormalizing vector 8 again

00:41 | RBSurrogate: Reducing ...

00:41 | | CoerciveRBReductor: Operator projection ...

00:41 | | CoerciveRBReductor: Assembling error estimator ...

00:41 | | | ResidualReductor: Estimating residual range ...

00:41 | | | | estimate_image_hierarchical: Estimating image for basis vector 8 ...

00:41 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:41 | | | | | gram_schmidt: Removing vector 68 of norm 8.743727437370289e-18

00:41 | | | | | gram_schmidt: Orthonormalizing vector 69 again

00:42 | | | | | gram_schmidt: Orthonormalizing vector 70 again

00:42 | | | | | gram_schmidt: Orthonormalizing vector 71 again

00:42 | | | | | gram_schmidt: Orthonormalizing vector 72 again

00:42 | | | | | gram_schmidt: Orthonormalizing vector 73 again

00:42 | | | | | gram_schmidt: Orthonormalizing vector 74 again

00:42 | | | | | gram_schmidt: Orthonormalizing vector 75 again

00:42 | | | | | gram_schmidt: Orthonormalizing vector 76 again

00:42 | | | | | gram_schmidt: Removing linearly dependent vector 77

00:42 | | | ResidualReductor: Projecting residual operator ...

00:42 | | CoerciveRBReductor: Building ROM ...

00:42 weak_greedy: Estimating errors ...

00:43 weak_greedy: Maximum error after 9 extensions: 167.27107233178626 (mu = {diffusion: [0.1717928179747908, 0.529880905573174, 0.2900583939048774, 0.6289422544091227, 0.11802688100674606, 0.6549756104568648, 0.28336819834490246, 0.0003429101410692803, 0.12702336406840894]})

00:43 weak_greedy: Extending surrogate for mu = {diffusion: [0.1717928179747908, 0.529880905573174, 0.2900583939048774, 0.6289422544091227, 0.11802688100674606, 0.6549756104568648, 0.28336819834490246, 0.0003429101410692803, 0.12702336406840894]} ...

00:43 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.1717928179747908, 0.529880905573174, 0.2900583939048774, 0.6289422544091227, 0.11802688100674606, 0.6549756104568648, 0.28336819834490246, 0.0003429101410692803, 0.12702336406840894]} ...

00:43 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.1717928179747908, 0.529880905573174, 0.2900583939048774, 0.6289422544091227, 0.11802688100674606, 0.6549756104568648, 0.28336819834490246, 0.0003429101410692803, 0.12702336406840894]} ...

00:43 | RBSurrogate: Extending basis with solution snapshot ...

00:43 | | gram_schmidt: Orthonormalizing vector 9 again

00:43 | RBSurrogate: Reducing ...

00:43 | | CoerciveRBReductor: Operator projection ...

00:43 | | CoerciveRBReductor: Assembling error estimator ...

00:43 | | | ResidualReductor: Estimating residual range ...

00:43 | | | | estimate_image_hierarchical: Estimating image for basis vector 9 ...

00:43 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:43 | | | | | gram_schmidt: Removing vector 76 of norm 2.1151064280568365e-19

00:43 | | | | | gram_schmidt: Orthonormalizing vector 77 again

00:43 | | | | | gram_schmidt: Orthonormalizing vector 78 again

00:43 | | | | | gram_schmidt: Orthonormalizing vector 79 again

00:43 | | | | | gram_schmidt: Orthonormalizing vector 80 again

00:43 | | | | | gram_schmidt: Orthonormalizing vector 81 again

00:43 | | | | | gram_schmidt: Orthonormalizing vector 82 again

00:43 | | | | | gram_schmidt: Orthonormalizing vector 83 again

00:43 | | | | | gram_schmidt: Orthonormalizing vector 84 again

00:43 | | | | | gram_schmidt: Orthonormalizing vector 85 again

00:43 | | | ResidualReductor: Projecting residual operator ...

00:43 | | CoerciveRBReductor: Building ROM ...

00:43 weak_greedy: Estimating errors ...

00:44 weak_greedy: Maximum error after 10 extensions: 120.09349177483494 (mu = {diffusion: [0.7641165909129602, 0.6193893708057053, 0.23221069940887615, 0.0010429482232375852, 0.7571748652494085, 0.9852088051785026, 0.8099321289940598, 0.4602605974947997, 0.9037768846729667]})

00:44 weak_greedy: Extending surrogate for mu = {diffusion: [0.7641165909129602, 0.6193893708057053, 0.23221069940887615, 0.0010429482232375852, 0.7571748652494085, 0.9852088051785026, 0.8099321289940598, 0.4602605974947997, 0.9037768846729667]} ...

00:44 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.7641165909129602, 0.6193893708057053, 0.23221069940887615, 0.0010429482232375852, 0.7571748652494085, 0.9852088051785026, 0.8099321289940598, 0.4602605974947997, 0.9037768846729667]} ...

00:44 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7641165909129602, 0.6193893708057053, 0.23221069940887615, 0.0010429482232375852, 0.7571748652494085, 0.9852088051785026, 0.8099321289940598, 0.4602605974947997, 0.9037768846729667]} ...

00:44 | RBSurrogate: Extending basis with solution snapshot ...

00:44 | | gram_schmidt: Orthonormalizing vector 10 again

00:44 | RBSurrogate: Reducing ...

00:44 | | CoerciveRBReductor: Operator projection ...

00:44 | | CoerciveRBReductor: Assembling error estimator ...

00:44 | | | ResidualReductor: Estimating residual range ...

00:44 | | | | estimate_image_hierarchical: Estimating image for basis vector 10 ...

00:44 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:44 | | | | | gram_schmidt: Removing vector 85 of norm 2.5427808514840146e-17

00:44 | | | | | gram_schmidt: Orthonormalizing vector 86 again

00:44 | | | | | gram_schmidt: Orthonormalizing vector 87 again

00:44 | | | | | gram_schmidt: Orthonormalizing vector 88 again

00:44 | | | | | gram_schmidt: Orthonormalizing vector 89 again

00:44 | | | | | gram_schmidt: Orthonormalizing vector 90 again

00:44 | | | | | gram_schmidt: Orthonormalizing vector 91 again

00:44 | | | | | gram_schmidt: Orthonormalizing vector 92 again

00:44 | | | | | gram_schmidt: Orthonormalizing vector 93 again

00:45 | | | | | gram_schmidt: Removing linearly dependent vector 94

00:45 | | | ResidualReductor: Projecting residual operator ...

00:45 | | CoerciveRBReductor: Building ROM ...

00:45 weak_greedy: Estimating errors ...

00:45 weak_greedy: Maximum error after 11 extensions: 116.26316252511337 (mu = {diffusion: [0.14121200568749193, 0.0008894321421094647, 0.8101590767363867, 0.0876658055299396, 0.6338864337534504, 0.91462499286974, 0.20873104796711517, 0.6653063089532871, 0.07879070684983872]})

00:45 weak_greedy: Extending surrogate for mu = {diffusion: [0.14121200568749193, 0.0008894321421094647, 0.8101590767363867, 0.0876658055299396, 0.6338864337534504, 0.91462499286974, 0.20873104796711517, 0.6653063089532871, 0.07879070684983872]} ...

00:45 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.14121200568749193, 0.0008894321421094647, 0.8101590767363867, 0.0876658055299396, 0.6338864337534504, 0.91462499286974, 0.20873104796711517, 0.6653063089532871, 0.07879070684983872]} ...

00:45 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.14121200568749193, 0.0008894321421094647, 0.8101590767363867, 0.0876658055299396, 0.6338864337534504, 0.91462499286974, 0.20873104796711517, 0.6653063089532871, 0.07879070684983872]} ...

00:45 | RBSurrogate: Extending basis with solution snapshot ...

00:45 | | gram_schmidt: Orthonormalizing vector 11 again

00:45 | RBSurrogate: Reducing ...

00:45 | | CoerciveRBReductor: Operator projection ...

00:45 | | CoerciveRBReductor: Assembling error estimator ...

00:45 | | | ResidualReductor: Estimating residual range ...

00:45 | | | | estimate_image_hierarchical: Estimating image for basis vector 11 ...

00:45 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:45 | | | | | gram_schmidt: Removing vector 93 of norm 5.2437512478945804e-18

00:45 | | | | | gram_schmidt: Orthonormalizing vector 94 again

00:46 | | | | | gram_schmidt: Orthonormalizing vector 95 again

00:46 | | | | | gram_schmidt: Orthonormalizing vector 96 again

00:46 | | | | | gram_schmidt: Orthonormalizing vector 97 again

00:46 | | | | | gram_schmidt: Orthonormalizing vector 98 again

00:46 | | | | | gram_schmidt: Orthonormalizing vector 99 again

00:46 | | | | | gram_schmidt: Orthonormalizing vector 100 again

00:46 | | | | | gram_schmidt: Orthonormalizing vector 101 again

00:46 | | | | | gram_schmidt: Orthonormalizing vector 102 again

00:46 | | | ResidualReductor: Projecting residual operator ...

00:46 | | CoerciveRBReductor: Building ROM ...

00:46 weak_greedy: Estimating errors ...

00:47 weak_greedy: Maximum error after 12 extensions: 110.42942052573356 (mu = {diffusion: [0.7271584837622393, 0.0008167870207091237, 0.9368597447480062, 0.1646089198555731, 0.23365707617893694, 0.26354709834927886, 0.03791635453838011, 0.12313981837796588, 0.9863512879518676]})

00:47 weak_greedy: Extending surrogate for mu = {diffusion: [0.7271584837622393, 0.0008167870207091237, 0.9368597447480062, 0.1646089198555731, 0.23365707617893694, 0.26354709834927886, 0.03791635453838011, 0.12313981837796588, 0.9863512879518676]} ...

00:47 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.7271584837622393, 0.0008167870207091237, 0.9368597447480062, 0.1646089198555731, 0.23365707617893694, 0.26354709834927886, 0.03791635453838011, 0.12313981837796588, 0.9863512879518676]} ...

00:47 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7271584837622393, 0.0008167870207091237, 0.9368597447480062, 0.1646089198555731, 0.23365707617893694, 0.26354709834927886, 0.03791635453838011, 0.12313981837796588, 0.9863512879518676]} ...

00:47 | RBSurrogate: Extending basis with solution snapshot ...

00:47 | | gram_schmidt: Orthonormalizing vector 12 again

00:47 | RBSurrogate: Reducing ...

00:47 | | CoerciveRBReductor: Operator projection ...

00:47 | | CoerciveRBReductor: Assembling error estimator ...

00:47 | | | ResidualReductor: Estimating residual range ...

00:47 | | | | estimate_image_hierarchical: Estimating image for basis vector 12 ...

00:47 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:47 | | | | | gram_schmidt: Removing vector 102 of norm 1.4334670316276567e-17

00:47 | | | | | gram_schmidt: Orthonormalizing vector 103 again

00:47 | | | | | gram_schmidt: Orthonormalizing vector 104 again

00:47 | | | | | gram_schmidt: Orthonormalizing vector 105 again

00:47 | | | | | gram_schmidt: Orthonormalizing vector 106 again

00:47 | | | | | gram_schmidt: Orthonormalizing vector 107 again

00:47 | | | | | gram_schmidt: Orthonormalizing vector 108 again

00:47 | | | | | gram_schmidt: Orthonormalizing vector 109 again

00:47 | | | | | gram_schmidt: Orthonormalizing vector 110 again

00:47 | | | | | gram_schmidt: Removing linearly dependent vector 111

00:47 | | | ResidualReductor: Projecting residual operator ...

00:48 | | CoerciveRBReductor: Building ROM ...

00:48 weak_greedy: Estimating errors ...

00:48 weak_greedy: Maximum error after 13 extensions: 89.6909070279236 (mu = {diffusion: [0.007081563654160083, 0.4035406209300969, 0.2348309462827988, 0.8188873835934195, 0.3373968757848809, 0.8275409415844217, 0.000697879040911829, 0.30788836811111653, 0.24124109778633115]})

00:48 weak_greedy: Extending surrogate for mu = {diffusion: [0.007081563654160083, 0.4035406209300969, 0.2348309462827988, 0.8188873835934195, 0.3373968757848809, 0.8275409415844217, 0.000697879040911829, 0.30788836811111653, 0.24124109778633115]} ...

00:48 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.007081563654160083, 0.4035406209300969, 0.2348309462827988, 0.8188873835934195, 0.3373968757848809, 0.8275409415844217, 0.000697879040911829, 0.30788836811111653, 0.24124109778633115]} ...

00:48 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.007081563654160083, 0.4035406209300969, 0.2348309462827988, 0.8188873835934195, 0.3373968757848809, 0.8275409415844217, 0.000697879040911829, 0.30788836811111653, 0.24124109778633115]} ...

00:48 | RBSurrogate: Extending basis with solution snapshot ...

00:48 | | gram_schmidt: Orthonormalizing vector 13 again

00:48 | RBSurrogate: Reducing ...

00:48 | | CoerciveRBReductor: Operator projection ...

00:48 | | CoerciveRBReductor: Assembling error estimator ...

00:48 | | | ResidualReductor: Estimating residual range ...

00:48 | | | | estimate_image_hierarchical: Estimating image for basis vector 13 ...

00:48 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:48 | | | | | gram_schmidt: Removing vector 110 of norm 1.446377442028206e-17

00:48 | | | | | gram_schmidt: Orthonormalizing vector 111 again

00:48 | | | | | gram_schmidt: Orthonormalizing vector 112 again

00:49 | | | | | gram_schmidt: Orthonormalizing vector 113 again

00:49 | | | | | gram_schmidt: Orthonormalizing vector 114 again

00:49 | | | | | gram_schmidt: Orthonormalizing vector 115 again

00:49 | | | | | gram_schmidt: Orthonormalizing vector 116 again

00:49 | | | | | gram_schmidt: Orthonormalizing vector 117 again

00:49 | | | | | gram_schmidt: Orthonormalizing vector 118 again

00:49 | | | | | gram_schmidt: Orthonormalizing vector 119 again

00:49 | | | ResidualReductor: Projecting residual operator ...

00:49 | | CoerciveRBReductor: Building ROM ...

00:49 weak_greedy: Estimating errors ...

00:50 weak_greedy: Maximum error after 14 extensions: 62.03075900020038 (mu = {diffusion: [0.7011710172704385, 0.2907139050096004, 0.2975629207256689, 0.2964184420728927, 0.7255956907351923, 0.22990305918835682, 0.0010057446625656161, 0.7152653297972849, 0.17126762536074175]})

00:50 weak_greedy: Extending surrogate for mu = {diffusion: [0.7011710172704385, 0.2907139050096004, 0.2975629207256689, 0.2964184420728927, 0.7255956907351923, 0.22990305918835682, 0.0010057446625656161, 0.7152653297972849, 0.17126762536074175]} ...

00:50 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.7011710172704385, 0.2907139050096004, 0.2975629207256689, 0.2964184420728927, 0.7255956907351923, 0.22990305918835682, 0.0010057446625656161, 0.7152653297972849, 0.17126762536074175]} ...

00:50 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.7011710172704385, 0.2907139050096004, 0.2975629207256689, 0.2964184420728927, 0.7255956907351923, 0.22990305918835682, 0.0010057446625656161, 0.7152653297972849, 0.17126762536074175]} ...

00:50 | RBSurrogate: Extending basis with solution snapshot ...

00:50 | | gram_schmidt: Orthonormalizing vector 14 again

00:50 | RBSurrogate: Reducing ...

00:50 | | CoerciveRBReductor: Operator projection ...

00:50 | | CoerciveRBReductor: Assembling error estimator ...

00:50 | | | ResidualReductor: Estimating residual range ...

00:50 | | | | estimate_image_hierarchical: Estimating image for basis vector 14 ...

00:50 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:50 | | | | | gram_schmidt: Removing vector 119 of norm 7.674733338635646e-18

00:50 | | | | | gram_schmidt: Orthonormalizing vector 120 again

00:50 | | | | | gram_schmidt: Orthonormalizing vector 121 again

00:50 | | | | | gram_schmidt: Orthonormalizing vector 122 again

00:50 | | | | | gram_schmidt: Orthonormalizing vector 123 again

00:50 | | | | | gram_schmidt: Orthonormalizing vector 124 again

00:50 | | | | | gram_schmidt: Orthonormalizing vector 125 again

00:50 | | | | | gram_schmidt: Orthonormalizing vector 126 again

00:50 | | | | | gram_schmidt: Orthonormalizing vector 127 again

00:51 | | | | | gram_schmidt: Orthonormalizing vector 128 again

00:51 | | | ResidualReductor: Projecting residual operator ...

00:51 | | CoerciveRBReductor: Building ROM ...

00:51 weak_greedy: Estimating errors ...

00:51 weak_greedy: Maximum error after 15 extensions: 56.813733565506574 (mu = {diffusion: [0.795946794504788, 0.1752056278163402, 0.6727973379510255, 0.22058221300930353, 0.218285713798717, 0.8740843661278547, 0.24974398584495613, 0.26332102178335937, 0.000753325422223425]})

00:51 weak_greedy: Extending surrogate for mu = {diffusion: [0.795946794504788, 0.1752056278163402, 0.6727973379510255, 0.22058221300930353, 0.218285713798717, 0.8740843661278547, 0.24974398584495613, 0.26332102178335937, 0.000753325422223425]} ...

00:51 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.795946794504788, 0.1752056278163402, 0.6727973379510255, 0.22058221300930353, 0.218285713798717, 0.8740843661278547, 0.24974398584495613, 0.26332102178335937, 0.000753325422223425]} ...

00:51 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.795946794504788, 0.1752056278163402, 0.6727973379510255, 0.22058221300930353, 0.218285713798717, 0.8740843661278547, 0.24974398584495613, 0.26332102178335937, 0.000753325422223425]} ...

00:52 | RBSurrogate: Extending basis with solution snapshot ...

00:52 | | gram_schmidt: Orthonormalizing vector 15 again

00:52 | RBSurrogate: Reducing ...

00:52 | | CoerciveRBReductor: Operator projection ...

00:52 | | CoerciveRBReductor: Assembling error estimator ...

00:52 | | | ResidualReductor: Estimating residual range ...

00:52 | | | | estimate_image_hierarchical: Estimating image for basis vector 15 ...

00:52 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:52 | | | | | gram_schmidt: Removing vector 128 of norm 8.737850376890759e-18

00:52 | | | | | gram_schmidt: Orthonormalizing vector 129 again

00:52 | | | | | gram_schmidt: Orthonormalizing vector 130 again

00:52 | | | | | gram_schmidt: Orthonormalizing vector 131 again

00:52 | | | | | gram_schmidt: Orthonormalizing vector 132 again

00:52 | | | | | gram_schmidt: Orthonormalizing vector 133 again

00:52 | | | | | gram_schmidt: Orthonormalizing vector 134 again

00:52 | | | | | gram_schmidt: Orthonormalizing vector 135 again

00:52 | | | | | gram_schmidt: Orthonormalizing vector 136 again

00:52 | | | | | gram_schmidt: Orthonormalizing vector 137 again

00:52 | | | ResidualReductor: Projecting residual operator ...

00:52 | | CoerciveRBReductor: Building ROM ...

00:52 weak_greedy: Estimating errors ...

00:53 weak_greedy: Maximum error after 16 extensions: 56.28326968586152 (mu = {diffusion: [0.20605943787337397, 0.0010433209261043238, 0.03936112514349537, 0.9772860947946116, 0.2424294597672486, 0.6633443784330515, 0.8391157888297316, 0.5507053931906406, 0.1532291856339478]})

00:53 weak_greedy: Extending surrogate for mu = {diffusion: [0.20605943787337397, 0.0010433209261043238, 0.03936112514349537, 0.9772860947946116, 0.2424294597672486, 0.6633443784330515, 0.8391157888297316, 0.5507053931906406, 0.1532291856339478]} ...

00:53 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.20605943787337397, 0.0010433209261043238, 0.03936112514349537, 0.9772860947946116, 0.2424294597672486, 0.6633443784330515, 0.8391157888297316, 0.5507053931906406, 0.1532291856339478]} ...

00:53 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.20605943787337397, 0.0010433209261043238, 0.03936112514349537, 0.9772860947946116, 0.2424294597672486, 0.6633443784330515, 0.8391157888297316, 0.5507053931906406, 0.1532291856339478]} ...

00:53 | RBSurrogate: Extending basis with solution snapshot ...

00:53 | | gram_schmidt: Orthonormalizing vector 16 again

00:53 | RBSurrogate: Reducing ...

00:53 | | CoerciveRBReductor: Operator projection ...

00:53 | | CoerciveRBReductor: Assembling error estimator ...

00:53 | | | ResidualReductor: Estimating residual range ...

00:53 | | | | estimate_image_hierarchical: Estimating image for basis vector 16 ...

00:53 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:53 | | | | | gram_schmidt: Removing vector 137 of norm 1.3042863516699176e-17

00:53 | | | | | gram_schmidt: Orthonormalizing vector 138 again

00:53 | | | | | gram_schmidt: Orthonormalizing vector 139 again

00:54 | | | | | gram_schmidt: Orthonormalizing vector 140 again

00:54 | | | | | gram_schmidt: Orthonormalizing vector 141 again

00:54 | | | | | gram_schmidt: Orthonormalizing vector 142 again

00:54 | | | | | gram_schmidt: Orthonormalizing vector 143 again

00:54 | | | | | gram_schmidt: Orthonormalizing vector 144 again

00:54 | | | | | gram_schmidt: Orthonormalizing vector 145 again

00:54 | | | | | gram_schmidt: Orthonormalizing vector 146 again

00:54 | | | ResidualReductor: Projecting residual operator ...

00:54 | | CoerciveRBReductor: Building ROM ...

00:54 weak_greedy: Estimating errors ...

00:55 weak_greedy: Maximum error after 17 extensions: 42.150415713579946 (mu = {diffusion: [0.9166514383836467, 0.7055261022694304, 0.6718120923278404, 0.3944017122360292, 0.0013842970842487337, 0.12024724280974838, 0.5346010893015636, 0.6751623728558352, 0.41750653906807283]})

00:55 weak_greedy: Extending surrogate for mu = {diffusion: [0.9166514383836467, 0.7055261022694304, 0.6718120923278404, 0.3944017122360292, 0.0013842970842487337, 0.12024724280974838, 0.5346010893015636, 0.6751623728558352, 0.41750653906807283]} ...

00:55 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.9166514383836467, 0.7055261022694304, 0.6718120923278404, 0.3944017122360292, 0.0013842970842487337, 0.12024724280974838, 0.5346010893015636, 0.6751623728558352, 0.41750653906807283]} ...

00:55 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.9166514383836467, 0.7055261022694304, 0.6718120923278404, 0.3944017122360292, 0.0013842970842487337, 0.12024724280974838, 0.5346010893015636, 0.6751623728558352, 0.41750653906807283]} ...

00:55 | RBSurrogate: Extending basis with solution snapshot ...

00:55 | | gram_schmidt: Orthonormalizing vector 17 again

00:55 | RBSurrogate: Reducing ...

00:55 | | CoerciveRBReductor: Operator projection ...

00:55 | | CoerciveRBReductor: Assembling error estimator ...

00:55 | | | ResidualReductor: Estimating residual range ...

00:55 | | | | estimate_image_hierarchical: Estimating image for basis vector 17 ...

00:55 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:55 | | | | | gram_schmidt: Removing vector 146 of norm 6.667401960169191e-18

00:55 | | | | | gram_schmidt: Orthonormalizing vector 147 again

00:55 | | | | | gram_schmidt: Orthonormalizing vector 148 again

00:56 | | | | | gram_schmidt: Orthonormalizing vector 149 again

00:56 | | | | | gram_schmidt: Orthonormalizing vector 150 again

00:56 | | | | | gram_schmidt: Orthonormalizing vector 151 again

00:56 | | | | | gram_schmidt: Orthonormalizing vector 152 again

00:56 | | | | | gram_schmidt: Orthonormalizing vector 153 again

00:56 | | | | | gram_schmidt: Orthonormalizing vector 154 again

00:56 | | | | | gram_schmidt: Orthonormalizing vector 155 again

00:56 | | | ResidualReductor: Projecting residual operator ...

00:56 | | CoerciveRBReductor: Building ROM ...

00:56 weak_greedy: Estimating errors ...

00:57 weak_greedy: Maximum error after 18 extensions: 25.580991605051587 (mu = {diffusion: [0.6978630022266195, 0.9002740847909502, 0.7922119756821959, 0.6763934317659368, 0.679665219076904, 0.9459300515676833, 0.2958945150867864, 0.001211982986767789, 0.27202736394225885]})

00:57 weak_greedy: Extending surrogate for mu = {diffusion: [0.6978630022266195, 0.9002740847909502, 0.7922119756821959, 0.6763934317659368, 0.679665219076904, 0.9459300515676833, 0.2958945150867864, 0.001211982986767789, 0.27202736394225885]} ...

00:57 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.6978630022266195, 0.9002740847909502, 0.7922119756821959, 0.6763934317659368, 0.679665219076904, 0.9459300515676833, 0.2958945150867864, 0.001211982986767789, 0.27202736394225885]} ...

00:57 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.6978630022266195, 0.9002740847909502, 0.7922119756821959, 0.6763934317659368, 0.679665219076904, 0.9459300515676833, 0.2958945150867864, 0.001211982986767789, 0.27202736394225885]} ...

00:57 | RBSurrogate: Extending basis with solution snapshot ...

00:57 | | gram_schmidt: Orthonormalizing vector 18 again

00:57 | RBSurrogate: Reducing ...

00:57 | | CoerciveRBReductor: Operator projection ...

00:57 | | CoerciveRBReductor: Assembling error estimator ...

00:57 | | | ResidualReductor: Estimating residual range ...

00:57 | | | | estimate_image_hierarchical: Estimating image for basis vector 18 ...

00:57 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:57 | | | | | gram_schmidt: Removing vector 155 of norm 3.674871675740909e-17

00:57 | | | | | gram_schmidt: Orthonormalizing vector 156 again

00:57 | | | | | gram_schmidt: Orthonormalizing vector 157 again

00:58 | | | | | gram_schmidt: Orthonormalizing vector 158 again

00:58 | | | | | gram_schmidt: Orthonormalizing vector 159 again

00:58 | | | | | gram_schmidt: Orthonormalizing vector 160 again

00:58 | | | | | gram_schmidt: Orthonormalizing vector 161 again

00:58 | | | | | gram_schmidt: Orthonormalizing vector 162 again

00:58 | | | | | gram_schmidt: Orthonormalizing vector 163 again

00:58 | | | | | gram_schmidt: Orthonormalizing vector 164 again

00:58 | | | ResidualReductor: Projecting residual operator ...

00:59 | | CoerciveRBReductor: Building ROM ...

00:59 weak_greedy: Estimating errors ...

00:59 weak_greedy: Maximum error after 19 extensions: 21.34965960585272 (mu = {diffusion: [0.29337927991948665, 0.7731645597312796, 0.5180087316598077, 0.34816078360897124, 0.3718021209768396, 0.0014534903528966073, 0.2998705475124794, 0.6464940957637667, 0.9741970243719755]})

00:59 weak_greedy: Extending surrogate for mu = {diffusion: [0.29337927991948665, 0.7731645597312796, 0.5180087316598077, 0.34816078360897124, 0.3718021209768396, 0.0014534903528966073, 0.2998705475124794, 0.6464940957637667, 0.9741970243719755]} ...

00:59 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.29337927991948665, 0.7731645597312796, 0.5180087316598077, 0.34816078360897124, 0.3718021209768396, 0.0014534903528966073, 0.2998705475124794, 0.6464940957637667, 0.9741970243719755]} ...

00:59 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.29337927991948665, 0.7731645597312796, 0.5180087316598077, 0.34816078360897124, 0.3718021209768396, 0.0014534903528966073, 0.2998705475124794, 0.6464940957637667, 0.9741970243719755]} ...

00:59 | RBSurrogate: Extending basis with solution snapshot ...

00:59 | | gram_schmidt: Orthonormalizing vector 19 again

00:59 | RBSurrogate: Reducing ...

00:59 | | CoerciveRBReductor: Operator projection ...

00:59 | | CoerciveRBReductor: Assembling error estimator ...

00:59 | | | ResidualReductor: Estimating residual range ...

00:59 | | | | estimate_image_hierarchical: Estimating image for basis vector 19 ...

00:59 | | | | estimate_image_hierarchical: Orthonormalizing ...

00:59 | | | | | gram_schmidt: Removing vector 164 of norm 3.3190390109245884e-17

01:00 | | | | | gram_schmidt: Orthonormalizing vector 165 again

01:00 | | | | | gram_schmidt: Orthonormalizing vector 166 again

01:00 | | | | | gram_schmidt: Orthonormalizing vector 167 again

01:00 | | | | | gram_schmidt: Orthonormalizing vector 168 again

01:00 | | | | | gram_schmidt: Orthonormalizing vector 169 again

01:00 | | | | | gram_schmidt: Orthonormalizing vector 170 again

01:00 | | | | | gram_schmidt: Orthonormalizing vector 171 again

01:00 | | | | | gram_schmidt: Orthonormalizing vector 172 again

01:00 | | | | | gram_schmidt: Orthonormalizing vector 173 again

01:01 | | | ResidualReductor: Projecting residual operator ...

01:01 | | CoerciveRBReductor: Building ROM ...

01:01 weak_greedy: Estimating errors ...

01:01 weak_greedy: Maximum error after 20 extensions: 21.20906940013854 (mu = {diffusion: [0.6079444235692947, 0.3094417805909958, 0.8238080051828726, 0.9550650054338404, 0.8212161627219715, 0.001664948680953757, 0.6364377300463887, 0.05123026790395652, 0.2576814596116885]})

01:01 weak_greedy: Extending surrogate for mu = {diffusion: [0.6079444235692947, 0.3094417805909958, 0.8238080051828726, 0.9550650054338404, 0.8212161627219715, 0.001664948680953757, 0.6364377300463887, 0.05123026790395652, 0.2576814596116885]} ...

01:01 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.6079444235692947, 0.3094417805909958, 0.8238080051828726, 0.9550650054338404, 0.8212161627219715, 0.001664948680953757, 0.6364377300463887, 0.05123026790395652, 0.2576814596116885]} ...

01:01 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.6079444235692947, 0.3094417805909958, 0.8238080051828726, 0.9550650054338404, 0.8212161627219715, 0.001664948680953757, 0.6364377300463887, 0.05123026790395652, 0.2576814596116885]} ...

01:02 | RBSurrogate: Extending basis with solution snapshot ...

01:02 | | gram_schmidt: Orthonormalizing vector 20 again

01:02 | RBSurrogate: Reducing ...

01:02 | | CoerciveRBReductor: Operator projection ...

01:02 | | CoerciveRBReductor: Assembling error estimator ...

01:02 | | | ResidualReductor: Estimating residual range ...

01:02 | | | | estimate_image_hierarchical: Estimating image for basis vector 20 ...

01:02 | | | | estimate_image_hierarchical: Orthonormalizing ...

01:02 | | | | | gram_schmidt: Removing vector 173 of norm 2.2059446890149225e-17

01:02 | | | | | gram_schmidt: Orthonormalizing vector 174 again

01:02 | | | | | gram_schmidt: Orthonormalizing vector 175 again

01:02 | | | | | gram_schmidt: Orthonormalizing vector 176 again

01:02 | | | | | gram_schmidt: Orthonormalizing vector 177 again

01:02 | | | | | gram_schmidt: Orthonormalizing vector 178 again

01:02 | | | | | gram_schmidt: Orthonormalizing vector 179 again

01:02 | | | | | gram_schmidt: Orthonormalizing vector 180 again

01:03 | | | | | gram_schmidt: Orthonormalizing vector 181 again

01:03 | | | | | gram_schmidt: Orthonormalizing vector 182 again

01:03 | | | ResidualReductor: Projecting residual operator ...

01:03 | | CoerciveRBReductor: Building ROM ...

01:03 weak_greedy: Estimating errors ...

01:04 weak_greedy: Maximum error after 21 extensions: 20.727778800421216 (mu = {diffusion: [0.28747379359209796, 0.5579233956595278, 0.3043771610616822, 0.08654886136944032, 0.0015282002448688016, 0.4496383945849703, 0.8368797358843696, 0.5723579541115136, 0.9992073782135465]})

01:04 weak_greedy: Extending surrogate for mu = {diffusion: [0.28747379359209796, 0.5579233956595278, 0.3043771610616822, 0.08654886136944032, 0.0015282002448688016, 0.4496383945849703, 0.8368797358843696, 0.5723579541115136, 0.9992073782135465]} ...

01:04 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.28747379359209796, 0.5579233956595278, 0.3043771610616822, 0.08654886136944032, 0.0015282002448688016, 0.4496383945849703, 0.8368797358843696, 0.5723579541115136, 0.9992073782135465]} ...

01:04 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.28747379359209796, 0.5579233956595278, 0.3043771610616822, 0.08654886136944032, 0.0015282002448688016, 0.4496383945849703, 0.8368797358843696, 0.5723579541115136, 0.9992073782135465]} ...

01:04 | RBSurrogate: Extending basis with solution snapshot ...

01:04 | | gram_schmidt: Orthonormalizing vector 21 again

01:04 | RBSurrogate: Reducing ...

01:04 | | CoerciveRBReductor: Operator projection ...

01:04 | | CoerciveRBReductor: Assembling error estimator ...

01:04 | | | ResidualReductor: Estimating residual range ...

01:04 | | | | estimate_image_hierarchical: Estimating image for basis vector 21 ...

01:04 | | | | estimate_image_hierarchical: Orthonormalizing ...

01:04 | | | | | gram_schmidt: Removing vector 182 of norm 2.9434815109335376e-17

01:04 | | | | | gram_schmidt: Orthonormalizing vector 183 again

01:04 | | | | | gram_schmidt: Orthonormalizing vector 184 again

01:04 | | | | | gram_schmidt: Orthonormalizing vector 185 again

01:04 | | | | | gram_schmidt: Orthonormalizing vector 186 again

01:05 | | | | | gram_schmidt: Orthonormalizing vector 187 again

01:05 | | | | | gram_schmidt: Orthonormalizing vector 188 again

01:05 | | | | | gram_schmidt: Orthonormalizing vector 189 again

01:05 | | | | | gram_schmidt: Orthonormalizing vector 190 again

01:05 | | | | | gram_schmidt: Orthonormalizing vector 191 again

01:05 | | | ResidualReductor: Projecting residual operator ...

01:05 | | CoerciveRBReductor: Building ROM ...

01:05 weak_greedy: Estimating errors ...

01:06 weak_greedy: Maximum error after 22 extensions: 15.229039722937438 (mu = {diffusion: [0.41296989847025434, 0.5375400042978672, 0.04939473244292903, 0.002447239120066408, 0.2448753930144257, 0.6806150346094754, 0.1213527668698008, 0.11385058260866897, 0.41101951197509967]})

01:06 weak_greedy: Extending surrogate for mu = {diffusion: [0.41296989847025434, 0.5375400042978672, 0.04939473244292903, 0.002447239120066408, 0.2448753930144257, 0.6806150346094754, 0.1213527668698008, 0.11385058260866897, 0.41101951197509967]} ...

01:06 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.41296989847025434, 0.5375400042978672, 0.04939473244292903, 0.002447239120066408, 0.2448753930144257, 0.6806150346094754, 0.1213527668698008, 0.11385058260866897, 0.41101951197509967]} ...

01:06 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.41296989847025434, 0.5375400042978672, 0.04939473244292903, 0.002447239120066408, 0.2448753930144257, 0.6806150346094754, 0.1213527668698008, 0.11385058260866897, 0.41101951197509967]} ...

01:06 | RBSurrogate: Extending basis with solution snapshot ...

01:06 | | gram_schmidt: Orthonormalizing vector 22 again

01:06 | RBSurrogate: Reducing ...

01:06 | | CoerciveRBReductor: Operator projection ...

01:06 | | CoerciveRBReductor: Assembling error estimator ...

01:06 | | | ResidualReductor: Estimating residual range ...

01:06 | | | | estimate_image_hierarchical: Estimating image for basis vector 22 ...

01:06 | | | | estimate_image_hierarchical: Orthonormalizing ...

01:06 | | | | | gram_schmidt: Removing vector 191 of norm 3.879727091706633e-17

01:06 | | | | | gram_schmidt: Orthonormalizing vector 192 again

01:07 | | | | | gram_schmidt: Orthonormalizing vector 193 again

01:07 | | | | | gram_schmidt: Orthonormalizing vector 194 again

01:07 | | | | | gram_schmidt: Orthonormalizing vector 195 again

01:07 | | | | | gram_schmidt: Orthonormalizing vector 196 again

01:07 | | | | | gram_schmidt: Orthonormalizing vector 197 again

01:07 | | | | | gram_schmidt: Orthonormalizing vector 198 again

01:07 | | | | | gram_schmidt: Orthonormalizing vector 199 again

01:07 | | | | | gram_schmidt: Orthonormalizing vector 200 again

01:08 | | | ResidualReductor: Projecting residual operator ...

01:08 | | CoerciveRBReductor: Building ROM ...

01:08 weak_greedy: Estimating errors ...

01:09 weak_greedy: Maximum error after 23 extensions: 11.628513674023147 (mu = {diffusion: [0.22620271871654696, 0.30426829976431724, 0.30401211800380185, 0.23049362045113816, 0.001573674704102785, 0.7293718596085875, 0.9668488152493054, 0.22437105393671655, 0.6630808870842421]})

01:09 weak_greedy: Extending surrogate for mu = {diffusion: [0.22620271871654696, 0.30426829976431724, 0.30401211800380185, 0.23049362045113816, 0.001573674704102785, 0.7293718596085875, 0.9668488152493054, 0.22437105393671655, 0.6630808870842421]} ...

01:09 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.22620271871654696, 0.30426829976431724, 0.30401211800380185, 0.23049362045113816, 0.001573674704102785, 0.7293718596085875, 0.9668488152493054, 0.22437105393671655, 0.6630808870842421]} ...

01:09 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.22620271871654696, 0.30426829976431724, 0.30401211800380185, 0.23049362045113816, 0.001573674704102785, 0.7293718596085875, 0.9668488152493054, 0.22437105393671655, 0.6630808870842421]} ...

01:09 | RBSurrogate: Extending basis with solution snapshot ...

01:09 | | gram_schmidt: Orthonormalizing vector 23 again

01:09 | RBSurrogate: Reducing ...

01:09 | | CoerciveRBReductor: Operator projection ...

01:09 | | CoerciveRBReductor: Assembling error estimator ...

01:09 | | | ResidualReductor: Estimating residual range ...

01:09 | | | | estimate_image_hierarchical: Estimating image for basis vector 23 ...

01:09 | | | | estimate_image_hierarchical: Orthonormalizing ...

01:09 | | | | | gram_schmidt: Removing vector 200 of norm 5.217741250001337e-17

01:09 | | | | | gram_schmidt: Orthonormalizing vector 201 again

01:09 | | | | | gram_schmidt: Orthonormalizing vector 202 again

01:09 | | | | | gram_schmidt: Orthonormalizing vector 203 again

01:09 | | | | | gram_schmidt: Orthonormalizing vector 204 again

01:09 | | | | | gram_schmidt: Orthonormalizing vector 205 again

01:09 | | | | | gram_schmidt: Orthonormalizing vector 206 again

01:10 | | | | | gram_schmidt: Orthonormalizing vector 207 again

01:10 | | | | | gram_schmidt: Orthonormalizing vector 208 again

01:10 | | | | | gram_schmidt: Orthonormalizing vector 209 again

01:10 | | | ResidualReductor: Projecting residual operator ...

01:10 | | CoerciveRBReductor: Building ROM ...

01:10 weak_greedy: Estimating errors ...

01:11 weak_greedy: Maximum error after 24 extensions: 10.743182319873716 (mu = {diffusion: [0.9334617597393272, 0.34466213606739904, 0.161068333244206, 0.5632080720151095, 0.9437797429327817, 0.0039299074712725425, 0.23820103076967333, 0.9384537956463666, 0.17116885826047631]})

01:11 weak_greedy: Extending surrogate for mu = {diffusion: [0.9334617597393272, 0.34466213606739904, 0.161068333244206, 0.5632080720151095, 0.9437797429327817, 0.0039299074712725425, 0.23820103076967333, 0.9384537956463666, 0.17116885826047631]} ...

01:11 | RBSurrogate: Computing solution snapshot for mu = {diffusion: [0.9334617597393272, 0.34466213606739904, 0.161068333244206, 0.5632080720151095, 0.9437797429327817, 0.0039299074712725425, 0.23820103076967333, 0.9384537956463666, 0.17116885826047631]} ...

01:11 | | StationaryModel: Solving ThermalBlock((3, 3))_CG for {diffusion: [0.9334617597393272, 0.34466213606739904, 0.161068333244206, 0.5632080720151095, 0.9437797429327817, 0.0039299074712725425, 0.23820103076967333, 0.9384537956463666, 0.17116885826047631]} ...

01:11 | RBSurrogate: Extending basis with solution snapshot ...

01:11 | | gram_schmidt: Orthonormalizing vector 24 again

01:11 | RBSurrogate: Reducing ...

01:11 | | CoerciveRBReductor: Operator projection ...

01:11 | | CoerciveRBReductor: Assembling error estimator ...

01:11 | | | ResidualReductor: Estimating residual range ...

01:11 | | | | estimate_image_hierarchical: Estimating image for basis vector 24 ...

01:11 | | | | estimate_image_hierarchical: Orthonormalizing ...

01:11 | | | | | gram_schmidt: Removing vector 209 of norm 2.1392072431960174e-17

01:11 | | | | | gram_schmidt: Orthonormalizing vector 210 again

01:11 | | | | | gram_schmidt: Orthonormalizing vector 211 again

01:12 | | | | | gram_schmidt: Orthonormalizing vector 212 again

01:12 | | | | | gram_schmidt: Orthonormalizing vector 213 again

01:12 | | | | | gram_schmidt: Orthonormalizing vector 214 again

01:12 | | | | | gram_schmidt: Orthonormalizing vector 215 again

01:12 | | | | | gram_schmidt: Orthonormalizing vector 216 again

01:12 | | | | | gram_schmidt: Orthonormalizing vector 217 again

01:12 | | | | | gram_schmidt: Orthonormalizing vector 218 again

01:13 | | | ResidualReductor: Projecting residual operator ...

01:13 | | CoerciveRBReductor: Building ROM ...

01:13 weak_greedy: Maximum number of 25 extensions reached.

01:13 weak_greedy: Greedy search took 40.209163589999946 seconds

Take a look at the log output to see how the basis is built iteratively using the surrogate \mathcal{E}(\mu).
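The loop structure visible in the log (estimate errors over the training set, pick the maximizer, compute its snapshot, orthonormalize, rebuild the ROM) can be sketched in plain NumPy. The following is an illustration only, not pyMOR code: the exact best-approximation error stands in for pyMOR's residual-based estimator, and standard unit vectors play the role of solution snapshots:

```python
import numpy as np

def solve(mu):
    # toy "full-order solve": the snapshot for parameter mu is the
    # mu-th standard basis vector of R^3
    u = np.zeros(3)
    u[mu] = 1.0
    return u

def error_surrogate(basis, u):
    # stand-in for pyMOR's residual-based error estimator: here simply
    # the exact best-approximation error w.r.t. the current basis
    if basis:
        B = np.array(basis)
        u = u - B.T @ (B @ u)  # subtract orthogonal projection onto span(basis)
    return np.linalg.norm(u)

training_set = [0, 1, 2]
basis = []
for _ in range(3):  # max_extensions = 3
    # greedy step: select the parameter with the largest estimated error
    mu_max = max(training_set, key=lambda mu: error_surrogate(basis, solve(mu)))
    u = solve(mu_max)
    u -= sum(b * (b @ u) for b in basis)  # Gram-Schmidt against current basis
    basis.append(u / np.linalg.norm(u))
```

After three extensions the toy basis spans all snapshots and the surrogate error drops to zero for every parameter, mirroring the decreasing maximum errors reported in the log above.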

The greedy_data dictionary returned by the algorithm contains detailed information about its run, including the final ROM. Here, however, we are interested in the generated reduced basis, which is managed by the reductor:

weak_greedy_basis = reductor.bases['RB']

Let’s see how the weak-greedy basis performs:

weak_greedy_errors = compute_proj_errors_orth_basis(weak_greedy_basis, V, fom.h1_0_semi_product)
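For an orthonormal basis, the best-approximation error is obtained by orthogonal projection, which is what compute_proj_errors_orth_basis (defined earlier in this tutorial) computes using pyMOR's VectorArray operations. A minimal NumPy analogue — using the Euclidean inner product in place of fom.h1_0_semi_product — could look like this:

```python
import numpy as np

def proj_errors_orth_basis(basis, U):
    # basis: (N, dim) array with orthonormal rows,
    # U: (k, dim) array of vectors to approximate.
    # Returns the worst-case best-approximation error when using the
    # first 0, 1, ..., N basis vectors.
    errors = []
    for N in range(len(basis) + 1):
        coeffs = U @ basis[:N].T     # (k, N) projection coefficients
        U_proj = coeffs @ basis[:N]  # orthogonal projections of U
        errors.append(np.max(np.linalg.norm(U - U_proj, axis=1)))
    return errors

rng = np.random.default_rng(42)
basis = np.linalg.qr(rng.standard_normal((5, 3)))[0].T  # 3 orthonormal rows in R^5
U = rng.standard_normal((4, 5))
errors = proj_errors_orth_basis(basis, U)  # monotonically non-increasing
```

Since each larger basis contains the smaller ones, the resulting error sequence can never increase with the basis size.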

plt.figure()
plt.semilogy(trivial_errors, label='trivial')
plt.semilogy(greedy_errors, label='greedy')
plt.semilogy(pod_errors, label='POD')
plt.semilogy(weak_greedy_errors, label='weak greedy')
plt.ylim(1e-1, 1e1)
plt.legend()
plt.show()
[Figure: best-approximation errors of the trivial, greedy, POD, and weak-greedy bases]

We see that for smaller basis sizes the weak-greedy basis is slightly worse than the POD and strong-greedy bases. This can be explained by the fact that the surrogate \mathcal{E}(\mu) can over-estimate the actual best-approximation error by a certain (fixed) factor, possibly resulting in the selection of sub-optimal snapshots. For larger basis sizes, this is mitigated by the very large training set from which we were able to choose: the 25 snapshots in training_set used for the POD and strong-greedy bases can approximate the entire manifold of solutions only to a certain degree, whereas the weak-greedy algorithm could select the snapshots from 1000 possible parameter values.
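The effect of an over-estimating surrogate can be seen in a two-parameter toy example. Suppose the surrogate only guarantees error(mu) <= surrogate(mu) <= C * error(mu) with C = 2; then it may rank the parameters differently than the exact errors do. The numbers below are made up purely for illustration:

```python
# exact best-approximation errors for two hypothetical parameters
exact = {'mu_1': 1.0, 'mu_2': 0.8}

# surrogate values, each within [error, 2 * error] of the exact error
surrogate = {'mu_1': 1.0, 'mu_2': 1.52}

best_exact = max(exact, key=exact.get)              # exact errors favor mu_1
best_surrogate = max(surrogate, key=surrogate.get)  # surrogate selects mu_2
```

So even a reliable surrogate with a known over-estimation factor can steer the greedy search toward a sub-optimal snapshot in a single step; the guarantee is only that the overall convergence rate stays close to optimal.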

Download the code: tutorial_basis_generation.py tutorial_basis_generation.ipynb