# DORiE issues
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues

## Issue #169: Use BCGS_AMG_SSOR solver for finite volume methods
_Lukas Riedel (mail@lukasriedel.com), 2019-11-25_
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/169

### Description
We currently use the `AMG_4_DG` solver in all cases. As its name suggests, the solver is optimized for solving DG problems by separating them into a CG and DG subspace. As finite volume spaces are essentially DG spaces of order zero, this solver works, but it likely performs many no-ops. Specifically, the CG subspace of the problem should be empty. We can therefore switch to a more direct solver.
### Proposal
Use the `BCGS_AMG_SSOR` solver for the finite volume method. It uses the same overall routines as the `AMG_4_DG`, which should ensure that the results remain comparable. It uses `AMG` for preconditioning, `BiCGStab` for solving the problem, and `SSOR` for smoothing the solution.
The PDELab class is called `Dune::PDELab::ISTLBackend_BCGS_AMG_SSOR` and defined in the `dune/pdelab/backend/istl/ovlpistlsolverbackend.hh` header. The class must replace the other linear solver via a conditional type definition based on the polynomial order of the problem. Initialization of the solver object is also different and must be controlled via `if constexpr`.
### How to test the implementation?
Test suite still succeeds.
### Related issues
---

## Issue #155: Add spline MvG parameterisation
_Santiago Ospina De Los Ríos (sospinar@gmail.com), 2019-07-04_
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/155

### Description
Prof. Roth has suggested several times that the MvG parameterisation be implemented as a spline in order to reduce computational cost. Ole has also observed that the power operations are a bottleneck for the overall runtime.
### Proposal
None yet. Feel free to start a discussion on this.
### How to test the implementation?
It should have approximately the same behaviour as the raw MvG parameterisation.
### Related issues
See #63
---

## Issue #140: Implement a global mass conservation operator
_Santiago Ospina De Los Ríos (sospinar@gmail.com), 2020-01-09_
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/140

The following discussion from !96 should be addressed:
- [ ] @lriedel started a [discussion](https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/merge_requests/96#note_20802): (+12 comments)
> 2. Check global mass conservation.
>
> Similar to the mass conservation test of the Richards solver, we can run the coupled solver in multiple homogeneous and heterogeneous test cases and evaluate the solute mass conservation. This requires a separate test executable or a general check similar to the one implemented for the flux reconstruction.
_Milestone: Solute Transport Feature_

---

## Issue #132: Support restarting of simulations
_Santiago Ospina De Los Ríos (sospinar@gmail.com), 2019-05-17_
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/132

### Description
From discussion at #122:
> @sospinar: **If we write `vtk`, it is logical that we must be able to read them.** It would, for example, allow us to set checkpoints on simulations or just use the end of some simulations to start a new one without any further postprocessing.
> @lriedel: I generally agree. Restarting from a written simulation would be nice to have. But there are two major issues:
>
>1. VTK is a really huge library. It does not only contain data I/O, but (mostly) functions for data visualization (!) and analysis. It is now available through most common package managers, but I can understand that the DUNE devs did not want to include it because it will be unavailable on most clusters.
>
>2. The written VTK file does not include the entire GFS information. This is a particular problem with local grid refinement, where the grid configuration cannot—at least not simply—be inferred from the VTK output. Additionally, one may choose cell or vertex data output, and different levels of subsampling. If this is evaluated as initial condition, one never has the guarantee that the assembled DOF vector is the same as in the simulation run which generated the VTK file.
> @sospinar: Well, that's better than nothing. Hopefully, subsampling can support part of the error one makes when writing into `vtk`s.
> @lriedel: But that's exactly my point. Then the restart option is no real restart, but a simulation that starts from an initial condition which is _somewhat_ similar to an old solution. I'm not sure if this is really useful.
>
> I see two general options for a program restart:
>
> 1. Use program checkpoints. At a checkpoint, the full precision DOF vector is written to a file. The grid is stored in the [DUNE grid format](https://www.dune-project.org/doxygen/2.6.0/group__DuneGridFormatParser.html). However, I'm unsure about its capabilities and have never worked with it.
>
> 2. Use a solution from a previous program run as initial condition. This is the cheap way. Grid configurations don't need to match and the old solution is interpolated onto the new grid function space.
But that's better discussed in another Issue.
### Proposal
### How to test the implementation?
### Related issues
See #...
---

## Issue #130: Switch to vertex data evaluation on system tests
_Santiago Ospina De Los Ríos (sospinar@gmail.com), 2019-09-09_
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/130

### Description
The following discussion from !128 should be addressed:
- [ ] @lriedel started a [discussion](https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/merge_requests/128#note_19781):
> This requires an update of all system tests. The Python scripts only work with cell data. This is the only reason why `vertexData` is not set to `True` by default.
>
> I have a Python implementation ready which can also evaluate vertex VTKs, based on Ole's `dune-vtkdiff`, see https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/snippets/16. I'll probably use that to update the Python test evaluators "soon".
### Proposal
...
### How to test the implementation?
...
### Related issues
See #...
---

## Issue #118: Use SubGrid to be able to have different grid views in each model
_Santiago Ospina De Los Ríos (sospinar@gmail.com), 2019-07-11_
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/118

### Description
Use the metagrid [`dune-subgrid`](http://numerik.mi.fu-berlin.de/dune-subgrid/index.php) so that we are able to have different grid views in each model.
### Proposal
The idea is that each model is able to adapt the grid for its own purposes while still sharing the same host grid with the whole simulation. `dune-subgrid` would give us the possibility to create custom grid views for each model and would even provide some support for the marking of the grid. It would remain for us to create the adapters that transfer information between entities of the two subgrids.
### How to test the implementation?
Each model is still able to compute steps in the new grid views.
---

## Issue #113: Apply variable naming convention to Richards local operator
_Lukas Riedel (mail@lukasriedel.com), 2020-01-27_
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/113

### Summary
The following discussions from !93 should be addressed:
- [ ] @sospinar started a [discussion](https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/merge_requests/93#note_16474): (+2 comments)
> Before removing the WIP again, I will modify the naming convention within the local operator. In !112 I finally arrived at a naming convention that I think is more expressive:
>
> | Convention | Meaning |
> |---|---|
> | `<description>` + `_i` | `<description>` with respect to the **inside** entity. |
> | `<description>` + `_o` | `<description>` with respect to the **outside** entity. |
> | `<description>` + `_f` | `<description>` with respect to the **face** entity (or intersection). |
> | `<description>` + `_g` | `<description>` with respect to the **global** grid. |
> | `<description>` + `_n` | `<description>` in **normal** direction. |
>
> Notice that when working with gradients, the suffix for the normal direction can be combined with the first three suffixes (`_n_i`, `_n_o`, and `_n_f`). When it is not specified, it is understood to be with respect to the face (`_n` $`\equiv`$ `_n_f`).
### Proposal
Adjust the local operator of the Richards model to this naming convention.
### How to test the implementation?
Everything works as before :wink:
### Related issues/MRs
!93

---

## Issue #109: Add time step schemes to the Richards simulation
_Santiago Ospina De Los Ríos (sospinar@gmail.com), 2018-12-06_
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/109

### Proposal
Add time step schemes to the `RichardsSimulation` object.
### How to test the implementation?
Create system tests that run each of the implemented time step schemes.
---

## Issue #102: Specify output intervals
_Lukas Riedel (mail@lukasriedel.com), 2019-01-17_
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/102

### Description
DORiE currently writes VTK output for every time step, or none at all. Users might want to specify output intervals for retrieving data and comparing them to measurements, which are typically conducted at certain time intervals, too.
### Proposal
Add a config key `output.interval`, specifying the output interval in seconds. This requires the `CalculationController` to adjust the time step to these intervals. `output.interval = 0` leads to the current output behavior.
### How to test the implementation?
Write a test which
* counts the number of VTK files if `output.interval` is used
* verifies that the VTK files have the expected time stamps
### Related issues
None.

---

## Issue #63: [meta] Parameters and Parametrization
_Santiago Ospina De Los Ríos (sospinar@gmail.com), 2019-05-16_
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/63

### Description
Today's parameter objects have several problems.
* Maybe the most important one is that they work with dynamic polymorphism even though we only have one parametrization. This is simply not acceptable for functions that do virtually no computation (like `Interpolation` and `MualemVanGenuchten`) and is even worse for functions that only have to access data. Local operators are usually memory-bound, and this kind of polymorphism directly affects their performance.
* Another problem is that they are strongly coupled with the input of parameters and with their use in the local operator. Since the base structure is an array, it directly affects the partitioning in parallel and causes strange artifacts in the solution at the interface of two parameter cells.
### Proposal
Change the parameter and parametrization objects such that they are based on a continuous or element-wise representation. The Gmsh and DGF readers in DUNE have ways to identify elements (in codimensions 0 and dim) and therefore also ways to attach parameters to the grid.
Thus, my proposal is to create parameter grids that contain the parameters in a cleaner and more efficient way. I already have a bit of the code, and it seems to be a good approach for any kind of input data that is $`C^{-1}`$ or $`C^0`$. However, the main problem then would be the parameter field generator `pfg`. My approach would be to first implement the proposal such that we can manually attach data to Gmsh and DGF files, and later look at how to generate the data directly for those files.
### Procedures
- [x] #71: Build new parameter structures on top of current implementation
- [x] #89: Introduce [`yaml-cpp`](https://github.com/jbeder/yaml-cpp) as dependency
- [x] #86: Implement new parameter input scheme (with `yaml-cpp`)
- [x] #110: Revamp scaling implementation and add input of global scaling fields
- [ ] Add deprecation warnings to branch `1.1-stable`
- [ ] Add Mualem-Brooks-Corey parameterization
_Milestone: v2.0 Release_

---

## Issue #38: Perform L2 projection in grid adaptation on water content, not matric head
_Lukas Riedel (mail@lukasriedel.com), 2018-10-16_
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/38

The solution DOF vector is currently inserted into the `PDELab::adapt_grid()` algorithm, which performs an L2 projection onto the new grid. At soil layer boundaries, this might cause abrupt changes in volumetric water content, which might imply that global mass conservation does not hold. The projection conserves the integrated matric head, but not the water content.
To circumvent this, a water content DOF vector has to be created by interpolating from an appropriate `DiscreteGridFunction`. This vector is then handed to `adapt_grid`. Afterwards, we interpolate the new matric head solution from the newly received water content.
This issue might be related to the artifacts reported in #8.

---

## Issue #15: Add analytic jacobian to increase performance
_Lukas Riedel (mail@lukasriedel.com), 2018-11-09_
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/15

### Current understanding
Implementing an analytical expression for the residual jacobian would significantly improve performance, because numeric differentiation requires evaluating the residual several times. However, Bastian (2014) mentions that numeric differentiation can increase stability and improve convergence, especially in the case of "non-smooth non-linearities" (it is debatable whether this occurs in the Richards equation).
The original description focused on artifacts in the solution gradient. However, directly interpreting this gradient as flux is difficult because the gradient is now reconstructed (as is usually required for DG spaces). Additionally, we expect an oscillatory error component due to the actual solvers.
With this, implementing the analytic jacobian might be worth a shot in terms of performance, and *likely* not in terms of precision.
### Original description
Dorie uses numerical differentiation to compute the jacobian of the residual. The local operators are derived from the `NumericalJacobianXYZ` classes. ~~The PDELab developers will consider this a bug in the future.~~ _Edit: The PDELab developers consider it a bug if one of the local operators in PDELab does not implement an analytical jacobian._
The numerical differentiation might be the cause of the issue of 'wiggles' appearing in the solution plotted by a subsampling VTK writer. It limits the relative precision of the gradients to half the solution's precision, namely ~1e-8. These errors can become significant if values are added or subtracted due to the parameterization (e.g. in the case of the gravitational force).
There are two possible solutions:
* ~~use automated differentiation: There will be a future routine which computes the jacobian automatically, based on the function calls leading to the residual assembly~~ _Edit: Feature won't be ready in the near future._
* analytic differentiation: Ideally, the local operator should assemble the analytical jacobian. For this, the entire expression of the residual has to be differentiated
Both solutions are expected to improve the gradient precision towards the actual solution precision (double precision).
---
Bug reports that are probably related to this issue:
* [Issue 62 on dune-grid](https://gitlab.dune-project.org/core/dune-grid/issues/62)
* #8 and [Issue 79 on dune-pdelab](https://gitlab.dune-project.org/pdelab/dune-pdelab/issues/79)

---

## Issue #6: Enable parallel runs
_Lukas Riedel (mail@lukasriedel.com), 2018-10-07_
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/6

Dorie requires MPI as a dependency and supports creating a random field in parallel. The main routine is always compiled with parallel support. However, the current `SEQ_SuperLU` linear solver backend lacks parallel computation abilities. We intend to switch to the [OVLP_AMG_4_DG](https://dune-project.org/doxygen/pdelab/master/classDune_1_1PDELab_1_1ISTLBackend__OVLP__AMG__4__DG.html) backend for this task. However, assembling the (processor) boundary constraints correctly takes considerable effort.
Also: In the past, the parallel solver proved to be inferior to the sequential one when computing a 'small' (not memory-bound) problem.
Tasks for going parallel:
- [x] Enable collective read of HDF5 parameter field file
- [x] revisit verbosity: only rank 0 should print into the `std::cout` stream
- find working combinations of `Grid`, `EntitySet`, `FiniteElementMap`, `Constraints`:
- [x] `YaspGrid` **with** overlap, `OverlappingEntitySet`, `P0` / `QkDG`, `P0ParallelConstraints`
- [ ] _Hint by Marian:_ Use `OVLP_AMG_4_DG` for parallel UG with an `AllEntitySet`. Constraints are probably `P0ParallelConstraints`; the subspace has to be P0 (which is already implemented for parallel YaspGrid).
__OR__
- [ ] employ a NOVLP solver when using UG. This needs a new templated type helper.