# DORiE issues
Source: https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues

## Issue #67 — PkLocalFiniteElementMap creates continuous function space
_Lukas Riedel <mail@lukasriedel.com> · 2018-05-15_

### Description
We use `PkLocalFiniteElementMap` for grids with simplices. This type of finite element map creates a continuous function space instead of a discontinuous one.
This was noticed when comparing the number of DOF of a simplex grid to its number of vertices: as is the case for CG spaces, the number of DOF was equal to the number of vertices.
### Proposal
Switch `PkLocalFiniteElementMap` to `OPBLocalFiniteElementMap`, which builds on an orthonormal polynomial basis and should yield a truly discontinuous space.
### How to test the implementation?
* Pipeline passes
* Verify number of DOF
## Issue #68 — Update to DUNE v2.6
_Lukas Riedel <mail@lukasriedel.com> · 2018-08-18_

### Description
[DUNE Core v2.6](https://www.dune-project.org/releases/2.6.0/#dune-2-6-release-notes) has been released. [`UG`](https://gitlab.dune-project.org/staging/dune-uggrid/blob/master/CHANGELOG.md) and [PDELab](https://gitlab.dune-project.org/pdelab/dune-pdelab/blob/releases/2.6/CHANGELOG.md) received similar updates.
We should update to these releases as soon as possible.
### Expected Adaptations
* `ReferenceElement` had a major overhaul. It is now constructable with `referenceElement(geometry)`.
* Layouts for `MultipleCodimMultipleGeomTypeMapper` are deprecated.
* This sounds really useful (for KnoFu?):
> The new method `MCMGMapper::indices(entity)` returns an iterable range (instance of `IntegralRange<Index>`) with the indices of dofs attached to the given entity: `for (const auto& i : mapper.indices(entity) ) dof = vector[i];`
* `StructuredGridFactory` returns a `unique_ptr` to the grid now
* PDELab extracts the "natural" blocking size from the function space now. This will break un-squaring in `adaptivity.hh`
* `L2` operator now sets integration order by itself (is probably used in adaptivity)
* Rename namespaces this time: `Dune::PDELab::istl` -> `Dune::PDELab::ISTL`
### How to test the implementation?
* Pipeline passing
## Issue #69 — Add additional debug job with Clang
_Lukas Riedel <mail@lukasriedel.com> · 2018-09-03_

### Description
We now support Clang, which emits more warnings than GCC. This should be reflected in the testing setup.
### Proposal
* Have two testing images, one with Clang and one with GCC.
* Compile a smaller executable for debug builds.
* Have two debug builds, one with GCC and one with Clang.
* Fix all compiler warnings
### How to test the implementation?
* Both Debug builds succeed (without warnings)
### Related issues
See #28.

## Issue #70 — Solution Container
_Santiago Ospina De Los Ríos <sospinar@gmail.com> · 2019-01-08_

### Description
Because the two simulations (or models) we want to couple, Richards and Transport, can have different time steps, it is necessary to store their solutions and to provide intermediate results via some evaluation policy (e.g. linear interpolation, previous solution, or last solution).
`dune-modelling` provides a solution, but it has two drawbacks:
1. Simple containers always make a hard copy of the solutions (easy to change).
2. `SolutionStorage` requires a complete `dune-modelling` interface, with definitions of `Traits`, `ModelParameters`, `EquationTraits`, and `Boundary`.
### Proposal
In order to keep using the same interface, I would like a container of several solutions to mirror or extend a `GridFunction`, so that it is easy to reuse in other contexts. That way, for example, it will be easy to store the solution at arbitrary times with the `VTKWriter` (see discussion in #102). It will also select the right solution when a solver asks the local operators to set the time.
### How to test the implementation?
* An indirect test is to check that mass is conserved in the solute solution.
### Related MR.
_Milestone: Solute Transport Feature_

## Issue #71 — Rebuild Parameter data structures on current I/O
_Lukas Riedel <mail@lukasriedel.com> · 2018-10-25_

### Description
As a first step, the data structures for Parameters and Parameterizations will be rebuilt without changes to the current data I/O and parameter input.
### Proposal
* Add class `FlowParameters`, storing parameterization information for each grid entity
* Add class interface `RichardsParameters`
* Add class `MualemVanGenuchtenParameters`
* Use strong types internally (only)
* Adapt local operators
* Adapt VTK adaptors
To keep using the current I/O, we need to:
* Use `MualemVanGenuchten` with `NearestNeighborInterpolator` to read in the data from the file
* Then translate to the new data structures and destroy `MualemVanGenuchten`.
### New class structure
```plantuml
FlowParameters "1 per medium" *-- RichardsParameterization
FlowParameters "1 per cell" *-- Scaling
RichardsParameterization <|-- MualemVanGenuchtenParameterization
RichardsParameterization <|-- BrooksCoreyParameterization
class FlowParameters {
- _param : map<index, pair<shared_ptr<RP>, Scaling>>
- _cache : pair<index, pair<shared_ptr<RP>, Scaling>>
- _gv : LevelGridView
- _mapper : MCMGMapper<LevelGridView>
- _config: ParameterTree
__ Caching __
+ bind (Entity)
+ cache () : pair<index, pair<shared_ptr<RP>, Scaling>>
- verify_cache ()
__ Parameterization functions __
+ conductivity_f () : function<RF(RF)>
+ saturation_f () : function<RF(RF)>
+ water_content_f () : function<RF(RF)>
+ matric_head_f () : function<RF(RF)>
}
note top of FlowParameters
Class communicating with the LocalOperator
and all other DUNE and DORiE classes and functions.
end note
class Scaling {
+ head_scale : double
+ cond_scale : double
+ por_scale : double
}
abstract class RichardsParameterization {
# _theta_r : ResidualWaterContent
# _theta_s : SaturatedWaterContent
# _k0 : SaturatedConductivity
__
+ water_content_f () : function<WaterContent(Saturation)>
..
+ {abstract} saturation_f () : function<Saturation(MatricHead)>
+ {abstract} conductivity_f () : function<Conductivity(Saturation)>
..
+ {abstract} parameters () : map<string, double&>
}
note top of RichardsParameterization
Abstract interface for parameterizations.
Contains common values and functions.
end note
class MualemVanGenuchtenParameterization {
- _alpha : Alpha
- _n : N
- _tau : Tortuosity
__
+ saturation_f () : function<Saturation(MatricHead)>
+ conductivity_f () : function<Conductivity(Saturation)>
..
+ parameters () : map<string, double&>
}
class BrooksCoreyParameterization {
- _h0 : AirEntryValue
- _lambda : PoreSizeDistribution
- _tau : Tortuosity
__
+ saturation_f () : function<Saturation(MatricHead)>
+ conductivity_f () : function<Conductivity(Saturation)>
..
+ parameters () : map<string, double&>
}
```
### How to test the implementation?
Load data from old to new structure. Then verify that functions return the same values at all grid cells.
### Related issues
See #63.

_Milestone: v2.0 Release_

## Issue #72 — [meta] Finite Volume Method for Solute Transport
_Santiago Ospina De Los Ríos <sospinar@gmail.com> · 2019-12-18_

_Note:_ This is a meta-task. It bundles several tasks together and is only closed once all these tasks are finished.
### Aims
Following the sequence of steps stated in %"Solute Transport Feature", this meta-issue implements a Finite Volume method in `dorie` based on the code from !30, and creates benchmarks to test the transport solution.
### Tasks
* Finite Volume solver
* [x] #73 Create a Local Operator for finite volume scheme.
* [x] #94 Propose a base class for simulations.
* [x] !65 Modify traits system such that they are consistent with the two different models.
* [x] #95 Modify `RichardsSimulation` to have the base class #94.
* [x] #98 Implement a simulation object for transport: `TransportSimulation`.
* [x] #70 Define data structures for data exchange between `RichardsSimulation` and `TransportSimulation`.
* [ ] ~~#100 Couple the `TransportSimulation` with a simple ODE solver for richards equation.~~
* [x] !64 Manage solutions (and therefore adaptors) with shared pointers.
* [x] !96 Couple the `TransportSimulation` with `RichardsSimulation`.
* [ ] ~~!96 Manage adaptivity for coupled systems.~~
* Benchmarks
  * [ ] ~~Prepare a few benchmarks to test the code, for later comparison with the dG method.~~
### People involved
@sospinar
### Related meta-tasks
#73
_Milestone: Solute Transport Feature_

## Issue #73 — Transport Local Operator for finite volume scheme
_Santiago Ospina De Los Ríos <sospinar@gmail.com> · 2018-10-22_

## Description
The local operator must implement the finite volume formulation of the transport equation. It will be created taking into account the operator from !30, the `convectiondiffusionccfv.hh` operator implemented in PDELab, and the one implemented by @oklein in dune-modelling-example.
## Tasks
- [x] Rename the [local operator file for Richards](dune/dorie/solver/operator_DG.hh) to `richards_operator_DG.hh`.
- [x] Implement the `alpha_volume()` method.
- [x] Implement the `jacobian_volume()` method.
- [x] Implement the `alpha_skeleton()` method.
- [x] Implement the `jacobian_skeleton()` method.
- [x] Implement the `alpha_boundary()` method.
- [x] Implement the `jacobian_boundary()` method.
- [x] Implement the `lambda_volume()` method.
## How to test the implementation?
Testing the local operator is hard without the external framework, so this task will be tied to those tasks implementing a `TransportSimulation`.
## Formulation
_**Warning**: These equations need to be checked again; there seems to be a mistake. Source terms are omitted for now._
The strong formulation for solute transport is
```math
\begin{aligned}
\partial_t[\theta C_w] + \nabla\cdot [\textbf{j}_w C_w] - \nabla\cdot [\theta \mathsf{D}_{eff}\nabla C_w]=0 &\qquad \text{in } \Omega\\
C_w = g &\qquad \text{on } \Gamma_D \subseteq\partial\Omega\\
\nabla C_w \cdot \textbf{n} = \textbf{j}_{\scriptscriptstyle C_w}& \qquad \text{on } \Gamma_N =\partial\Omega \backslash \Gamma_D
\end{aligned}
```
with $`\textbf{j}_w = \theta \textbf{v}_w`$ and $`\mathsf{D}_{eff}(\textbf{j}_w,\theta)`$. Now, following the formulation in `dune-pdelab-tutorial02`, with the ansatz function $`u=C_w`$ and the test function $`v`$, the weak formulation for the spatial part is
```math
\int_\Omega \nabla\cdot [\textbf{j}_w u]\, v - \int_\Omega\nabla\cdot [\theta \mathsf{D}_{eff}\nabla u]\, v \qquad \forall v\in W_h
```
with $` W_h=\{w\in L^2(\Omega)\, : \, w|_T=\text{const for all } T\in\mathcal{T}_h\} `$. Then, integrating by parts
```math
\sum_{T\in\mathcal{T}_h}\left\{\int_{\partial T}(\textbf{j}_wu \cdot \textbf{n})v\,ds-\int_T(\textbf{j}_w\cdot\nabla v)u\,dx-\int_{\partial T}\theta\mathsf{D}_{eff}(\nabla u\cdot \textbf{n}) v\,ds+\int_T\theta\mathsf{D}_{eff}(\nabla u\cdot\nabla v)dx\right\}
```
Taking into account that $`\nabla u|_T=0`$ and approximating the normal derivatives, we have
```math
\sum_{T\in\mathcal{T}_h}\left\{\int_{\partial T}(\textbf{j}_wu \cdot \textbf{n})v\,ds-\int_{\partial T}\theta\mathsf{D}_{eff}(\nabla u\cdot \textbf{n}) v\,ds\right\}
```
and rearranging
```math
\begin{aligned}
\sum_{F\in\mathcal{F}_h^i}&\left\{\int_{\partial F}(\textbf{j}_w u\cdot\textbf{n}_F)[v(x_{T_F^-})-v(x_{T_F^+})]\right.\\
&-\left.\int_{\partial F}\mathsf{D}_{eff}(\nabla u\cdot \textbf{n}_F)[v(x_{T_F^-})\theta(x_{T_F^-})-v(x_{T_F^+})\theta(x_{T_F^+})] \,ds\right\} \\
+\sum_{F\in\mathcal{F}_h^{\partial\Omega}}&\left\{\int_{\partial F}\left[(\textbf{j}_wu \cdot \textbf{n}_F)-\mathsf{D}_{eff}(\nabla u\cdot \textbf{n}_F)\theta(x_{T_F^-})\right]v(x_{T_F^-})\,ds\right\},
\end{aligned}
```
```math
\text{with}\quad\nabla u\cdot \textbf{n}_F=\frac{u_h(x_{T_F^+})-u_h(x_{T_F^-})}{||x_{T_F^+}-x_{T_F^-}||}+\text{error}
```
For piecewise-constant test functions, the basis functions that generate the space $`W_h`$ are one on one element and zero on all others, i.e.
```math
\phi_i(x) = \begin{cases}
1 &\text{if } x\in T_i \\
0 &\text{else}
\end{cases}
```
which is equivalent to $`v(x_{T_F^+})=0`$ in the equations above. Then,
```math
\begin{aligned}
&\sum_{F\in\mathcal{F}_h^i}\int_{\partial F}\left[(\textbf{j}_w u \cdot\textbf{n}_F) - \mathsf{D}_{eff}(\nabla u\cdot \textbf{n}_F)\theta(x_{T_F^-})\right]\,ds \\
+&\sum_{F\in\mathcal{F}_h^{\partial\Omega}\cap\Gamma_D}\int_{\partial F}\left[(\textbf{j}_w g \cdot \textbf{n}_F)-\mathsf{D}_{eff}(\nabla^* u\cdot \textbf{n}_F)\theta(x_{T_F^-})\right]\,ds\\
-&\sum_{F\in\mathcal{F}_h^{\partial\Omega}\cap\Gamma_N}\int_{\partial F}\mathsf{D}_{eff}\textbf{j}_{\scriptscriptstyle C_w}\theta(x_{T_F^-})\,ds
\end{aligned}
```
where $`\nabla^* u\cdot \textbf{n}_F`$ is the finite difference between $`u`$ and the boundary condition $`g`$ in direction $`\textbf{n}_F`$:
```math
\nabla^* u\cdot \textbf{n}_F=\frac{g(x_F)-u_h(x_{T_F^-})}{||x_F-x_{T_F^-}||}+\text{error}
```
### General Residual Form for Local Operators
```math
\begin{aligned}
r(u,v)=&\sum_{T\in\mathcal{T}_h}\alpha_T^V(\mathcal{R}_Tu,\mathcal{R}_Tv)+\sum_{T\in\mathcal{T}_h}\lambda_T^V(\mathcal{R}_Tv)\\
+&\sum_{F\in\mathcal{F}_h^i}\alpha_F^S(\mathcal{R}_{T_F^-}u,\mathcal{R}_{T_F^+}u,\mathcal{R}_{T_F^-}v,\mathcal{R}_{T_F^+}v)\\
+&\sum_{F\in\mathcal{F}_h^{\partial\Omega}}\alpha_F^B(\mathcal{R}_{T_F^-}u,\mathcal{R}_{T_F^-}v)+\sum_{F\in\mathcal{F}_h^{\partial\Omega}}\lambda_F^B(\mathcal{R}_{T_F^-}v)
\end{aligned}
```
## Related issues
#72
_Milestone: Solute Transport Feature · Due: 2018-07-18_

## Issue #74 — Switch to OVLP_AMG_4_DG for all computations
_Lukas Riedel <mail@lukasriedel.com> · 2018-09-03_

### Description
With the switch to DUNE v2.6, proper static blocking was introduced (see #68 !53) which changed the performance of the `OVLP_AMG_4_DG` linear solver significantly. @oklein reports that it achieves similar speeds as `SuperLU` even for small problems and is significantly faster for larger problems.
### Proposal
Remove the runtime switch for the linear solver and only use `OVLP_AMG_4_DG` for any computation.
### How to test the implementation?
Pipeline passes.
### Related issues
See #68.

## Issue #75 — Documentation page down
_Santiago Ospina De Los Ríos <sospinar@gmail.com> · 2018-09-03_

The documentation page listed in `README.md` (http://dorie-docs.gitballoon.com) is down.

## Issue #76 — Make compilation procedure more efficient
_Lukas Riedel <mail@lukasriedel.com> · 2019-11-28_

### Description
The current compilation procedure is focused on not exceeding 2 GB of RAM for sequential `make` runs. As indicated in !63, compiling single instantiations of the `Simulation` template can be inefficient, both in terms of memory and CPU usage.
The new restricting limit is the default GitLab runner, which is capped at 3 GB of RAM.
### Proposal
Use fewer object files:
1. Move multiple `YASPGrid` instantiations into one object file
2. _(optional)_ Move more `UGGrid` instantiations into single object files
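The grouping could look like this in CMake (file and target names are made up for illustration; DORiE's actual build layout differs):

```cmake
# Before: one translation unit per grid/order instantiation
#   simulation_yasp_2d_o1.cc, simulation_yasp_2d_o2.cc, ...
# After: group instantiations into few translation units, trading
# per-file memory peaks for fewer, larger compile jobs.
add_library(dorie_impl OBJECT
  simulation_yasp.cc   # explicit instantiations for all YASPGrid variants
  simulation_ug_2d.cc  # UGGrid instantiations, grouped per dimension
  simulation_ug_3d.cc
)
```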
### How to test the implementation?
Pipeline passes (faster than now)
### Related issues
See !63.

## Issue #77 — Define the public API
_Lukas Riedel <mail@lukasriedel.com> · 2018-09-03_

### Description
DORiE is intended to comply with [Semantic Versioning](https://semver.org/). Updates are indicated by patch, minor, and major revisions; minor revisions indicate backwards-compatible changes and major revisions indicate breaking changes of the public API. Therefore, an API must be defined. This has not happened as of v1.0.0.
The public API should encompass:
* The `dorie` CLI (how the compiled program is executed)
* The input files for the program:
- Config files
- Parameter field H5 file
- boundary condition input file
* The API of the main program instance: `Simulation`.
The config files are defined by the Cheat Sheet. The API of `Simulation` could be looked up in the doxygen documentation (which currently does not exist!). The boundary condition input file is defined in the docs.
### Proposal
* Add a new docs page concerning the public API
* Update the docs on the `dorie` CLI
* Add a specification of the parameter field file
* Deprecations and removals will be mentioned in the `CHANGELOG.md`
### How to test the implementation?
No testing necessary
### Related issues

## Roll out version 1.1 (#78)
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/78 · 2018-08-29 · Lukas Riedel (mail@lukasriedel.com)

### To-do in the code
- [x] Create branch `1.1-stable`
- [x] Update `VERSION`, `CHANGELOG`, and `dune.module`
- [x] Create tag `1.1.0`
### To-do in GitLab
- [x] Create label `Pick into 1.1`
### To-do externally
- [x] Deploy Sphinx docs manually (see problems in !71)
- [x] Update description on Docker Hub
- [x] Update `latest` tag on Docker Hub
- [x] Update "Release" badge

## Add more Issue and MR templates (#79)
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/79 · 2019-09-24 · Lukas Riedel (mail@lukasriedel.com)

### Description
The [GitLab description templates](https://docs.gitlab.com/ee/user/project/description_templates.html) are very useful, but we need more of them.
### Proposal
Add MR templates for
* Creating an MR _without_ issue
* Bugfix release MR, see https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/merge_requests/90
Add Issue templates for
* Release rollout:
- bugfix release, see https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/issues/96
- minor release, see https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/issues/78
- major release
### People involved
@sospinar, do you have another idea for an additional template? Does an existing template need an update?

## Estimate of matrix backend entry numbers is wrong (#80)
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/80 · 2018-09-03 · Lukas Riedel (mail@lukasriedel.com)

### Summary
`estimate_mbe_entries` produces wrong estimates of the matrix backend size.
When constructing the matrix backend, one can give an estimate of the number of matrix entries per element for faster resource allocation. The old values are wrong.
The numbers were extracted from the `patternStatistics` of the `OneStepGridOperator` jacobian, but are related to the wrong blocking that was used before the update to DUNE v2.6 (see #68, !56)
### Correct values
For a DG method, the numbers are
* simplex: `dim + 2`
* cube: `2*dim + 1`
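For reference, the corrected estimates can be transcribed into a short Python sketch. The real `estimate_mbe_entries` helper is part of DORiE's C++ sources; this function only encodes the values listed above (one entry per face neighbor plus one for the element itself):

```python
def estimate_mbe_entries(geometry: str, dim: int) -> int:
    """Estimated matrix entries per element for a DG method.

    A simplex has dim + 1 faces, a cube has 2 * dim faces; each face
    neighbor contributes one entry, plus one entry for the element itself.
    """
    if geometry == "simplex":
        return dim + 2
    if geometry == "cube":
        return 2 * dim + 1
    raise ValueError(f"unknown geometry: {geometry!r}")
```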
## Output should display the exact computed solution (#81)
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/81 · 2018-09-03 · Lukas Riedel (mail@lukasriedel.com)

### Description
The current step scheme first adapts the grid and then prints the solution. This is confusing for users, because the output does not show the actual solution that was computed.
### Proposal
Change the order of steps in the `Simulation::run` algorithm:
1. Compute solution
2. Print solution
3. Adapt grid (only if loop continues)
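The reordered algorithm can be sketched as follows; this is a hypothetical Python transcription, and the method names are illustrative rather than DORiE's actual C++ API:

```python
def run(simulation, time_end):
    """Reordered step scheme: output always shows the computed solution."""
    while simulation.time < time_end:
        simulation.compute_solution()    # 1. compute solution (advances time)
        simulation.write_output()        # 2. print exactly what was computed
        if simulation.time < time_end:   # 3. adapt only if the loop continues
            simulation.adapt_grid()
```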
### How to test the implementation?
Current pipeline works.
### Related issues

## Update License to include Santiago as contributor (#82)
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/82 · 2018-09-03 · Lukas Riedel (mail@lukasriedel.com)

### Description
Santiago Ospinar is now a contributor to DORiE (to `master` and the releases) and should be mentioned in the `LICENSE.md` :tada:
@sospinar, do you want to be mentioned by your full name "Santiago Ospina De Los Ríos"? (if that is correct...?)

## Remove unnecessary files from deployed Docker image (#83)
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/83 · 2018-10-07 · Lukas Riedel (mail@lukasriedel.com)

### Description
The `deploy` jobs have no dedicated `dependencies`, which leads to artifacts from the previous stage being downloaded by these jobs. This increases the image's size and clutters it.
### Proposal
Declare 'empty' dependencies according to the [GitLab docs](https://docs.gitlab.com/ee/ci/yaml/#dependencies):
> Defining an empty array will skip downloading any artifacts for that job.
```yaml
dependencies: []
```
### How to test the implementation?
* Pipeline passes
* No artifacts are downloaded for `deploy` jobs
### Related issues
* #44 seeks to reduce the image size

## Improve triggering of DUNE environment image setup in CI pipeline (#84)
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/84 · 2018-09-03 · Lukas Riedel (mail@lukasriedel.com)

### Description
The CI pipeline currently rebuilds the DUNE environment image(s) whenever a pipeline is manually started through the web interface ("Run pipeline"). The interface allows for adding custom CI variables. This should be used to actually trigger a new setup of the images.
The `only` keyword in the [`.gitlab-ci.yml` syntax](https://docs.gitlab.com/ee/ci/yaml/#only-and-except-complex) allows for evaluating variable expressions. In particular, one can check for a variable being [defined and non-empty](https://docs.gitlab.com/ee/ci/variables/README.html#supported-syntax).
### Proposal
* Run the `setup` jobs only when the variable `REBUILD_BASE_IMAGE` is defined and non-empty:
```yaml
only:
variables:
- $REBUILD_BASE_IMAGE
```
* Add a `README.md` to the `docker` directory, explaining the Docker image usage and pipeline.
### How to test the implementation?
* Pipeline passes
### Related issues
* #83 will update the pipeline as well. It might be easier to merge both into one MR.

## CI schedule for released versions (#85)
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/85 · 2018-08-20 · Santiago Ospina De Los Ríos (sospinar@gmail.com)

### Proposal
Schedule CI Pipelines regularly (e.g. monthly) for released versions ([v1.0.0](https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/tags/v1.0.0) and [v1.1.0](https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/tags/v1.1.0)).
## Implement new parameterization data input (#86)
https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie/-/issues/86 · 2018-11-06 · Lukas Riedel (mail@lukasriedel.com)

### Description
With the new data structures almost ready, a new data input scheme can be established. The new scheme aims at being more customizable while always ensuring that grid intersections coincide with parameterization singularities (_"compatible mesh"_). The latter is achieved by the data structure itself.
For the new parameter input, we generally have to discern two use cases. For both, we want to end up with a data structure that maps an entity (or its index) to the soil medium it belongs to.
1. An (unstructured) grid is created from a GMSH file.
GMSH supports the definition of multiple "physical" entities on a mesh. When reading the mesh file, the grid entities can be mapped to the ID of the physical entity they belong to by their index (using a `Dune::Mapper`).
2. A regular (but possibly unstructured) grid is created with the `GridFactory`.
In this case, we require additional input in the form of a data file. When building a grid with $`x \times y \times z`$ cells, a dataset with equal dimensions and extensions is required. For each cell, it states a medium index: `id = data[z][y][x]`. H5 is a suitable format, and we can re-use `Dorie::H5File`.
Both use cases can/should encompass the usage of Python scripts: With [`pygmsh`](https://github.com/nschloe/pygmsh), one can easily write a `.geo` GMSH input file with Python and afterwards compile it to a mesh (this requires the GMSH CLI to be installed). Also, it's really easy to write a multi-dimensional H5 file with `h5py` and `numpy`.
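As an illustration of the second use case, the sketch below builds such a medium-index dataset with `numpy`; the `h5py` write is shown as a comment, and the dataset name `medium_index` is a hypothetical choice, not a fixed part of the input format:

```python
import numpy as np

# Grid of x * y * z = 4 * 3 * 2 cells; the dataset uses the layout
# id = data[z][y][x] described above.
nx, ny, nz = 4, 3, 2
data = np.zeros((nz, ny, nx), dtype=np.int32)
data[1, :, :] = 1  # e.g. all cells in the upper z-layer belong to medium 1

# Writing the file would be a one-liner with h5py:
# import h5py
# with h5py.File("cell_ids.h5", "w") as f:
#     f.create_dataset("medium_index", data=data)

# Medium id of the cell at (x=2, y=0, z=1):
medium_id = int(data[1, 0, 2])
```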
For the input of the parameters themselves, we use a new file in [YAML](http://yaml.org/) format. It specifies the name of the soil layer, its id, the parameterization type, and the parameters themselves. The hierarchical layout of YAML makes it easy to hand over sub-nodes to the respective data structures for readout.
In the first implementation, we drop Miller scaling.
### Proposal
- Remove the Parameter Field Generator
- Install `pygmsh` and `h5py` into `virtualenv`
- Recommend installing the GMSH CLI
- Give example files and instructions on how to generate input files
- Rework `H5File` to also read datasets of `H5_NATIVE_INT` values
- Rework `RichardsSimulation` to build grid itself
- Two modes: `gmsh` and `regular`: Read GMSH file or build grid with `GridFactory`
- Create mapping from cell ids to soil medium id
- Leave option for building `RichardsSimulation` with a grid instance
- Read `param.yml`
- Create `RichardsParameterization` instances from input file
- Build parameter map from mapping between cells ids and medium ids
- Adapt CLI
### Use case flow chart
```plantuml
:User: ..> (write_geo.py)
(write_geo.py) -> (grid.geo) : Python
(grid.geo) -> (grid.msh) : GMSH
(grid.msh) ..> (DORiE) : GMSH mode
:User: .> (grid.geo) : directly supply
:User: ..> (write_ids.py)
(write_ids.py) -> (cell_ids.h5) : Python
(cell_ids.h5) ..> (DORiE) : rectangular mode
```
### `param.yml` layout
```yaml
---
sand_1:         # name
  type: MvG     # Mualem–van Genuchten
  index: 1      # layer index for reference
  parameters:
    alpha: -10.0
    K_0: 1E-5
    n:
    tau:
    theta_r:
    theta_s:
my_layer:
  type: # ...
  # ...
```
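To illustrate how this layout could be consumed, the sketch below maps layer indices to their parameterization data. The dict literal stands in for the result of `yaml.safe_load` on the file above; the function and key names are illustrative, not DORiE's actual readout code:

```python
def build_layer_map(config: dict) -> dict:
    """Map each layer index to its name, parameterization type, and parameters."""
    layers = {}
    for name, node in config.items():
        layers[node["index"]] = {
            "name": name,
            "type": node["type"],  # e.g. "MvG" for Mualem-van Genuchten
            "parameters": node["parameters"],
        }
    return layers

# Stand-in for `yaml.safe_load(open("param.yml"))`:
config = {
    "sand_1": {
        "type": "MvG",
        "index": 1,
        "parameters": {"alpha": -10.0, "K_0": 1e-5},
    },
}
layers = build_layer_map(config)
```

Sub-nodes like `parameters` could then be handed directly to the respective parameterization constructor, matching the hierarchical readout described above.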
### How to test the implementation?
- Tests work with new exemplary/default input files
Add test executables for
- Reading arbitrary H5 files.
- Correct grid cell - medium id mapping for 2D and 3D test cases, respectively.
### Related issues
See #63

Milestone: v2.0 Release