---
BasedOnStyle: Mozilla
# Break before braces, except for one-line statements
BreakBeforeBraces: Allman
# Do not break before assignment operators
BreakBeforeBinaryOperators: NonAssignment
......@@ -115,8 +115,6 @@ prep:update-dune-clang:
script:
- docker build -f docker/dune-env-update.dockerfile
--build-arg DUNE_ENV_IMAGE=${DUNE_ENV_IMAGE}-clang
--build-arg CC=clang
--build-arg CXX=clang++
--build-arg PROCNUM=$CPUS_DIND
-t ${DUNE_ENV_IMAGE}-clang .
- docker push ${DUNE_ENV_IMAGE}-clang
......
......@@ -4,3 +4,6 @@
[submodule "plugins/vendor/googletest"]
path = plugins/vendor/googletest
url = https://github.com/google/googletest.git
[submodule "plugins/vendor/clang-format-hooks"]
path = plugins/vendor/clang-format-hooks
url = https://github.com/barisione/clang-format-hooks.git
......@@ -2,7 +2,20 @@
## Unreleased
<!-- Add new entries here! -->
### Added
* Extended Miller scaling adapter with additional porosity scaling !122
### Changed
* Linear solver for finite volumes changed from `AMG_4_DG` to
`BCGS_AMG_SSOR` !204
* Simplify installation instructions !205
### Fixed
* Compiler warnings for unused variables in `dune/dorie/common/h5file.hh` !206
* Ensure UTF-8 locale definition in Docker images !210
## 2.0.0 (2020-05-14)
......
......@@ -68,6 +68,41 @@ templates. Provide information on how your code changes and additions solve the
problem or implement the feature. Make sure that your MR fulfills the
criteria for being merged.
#### C++ Coding Style
The DORiE repository contains a `.clang-format` file in the top-level directory specifying the desired style for C++ code in DORiE.
We ask developers to use this style when writing new code, ideally by instructing their editor and/or [ClangFormat](https://clang.llvm.org/docs/ClangFormat.html) tool to do so.
Every change to C++ files introduced by Git commits should be formatted according to this style.
DORiE vendors [clang-format-hooks](https://github.com/barisione/clang-format-hooks) to automate formatting and avoid committing unformatted changes.
It provides a Git pre-commit hook that checks the staged changes right before they are committed.
If the changes do not comply with the coding style, the user is shown the suggested formatting and prompted to apply it or abort the commit.
Follow these instructions for registering the hook:
1. Install `clang-format`.
On **macOS**:
```bash
brew install clang-format
```
On **Ubuntu**:
```bash
apt install clang-format
```
2. Register the hook. From the top-level DORiE source directory, execute
```bash
./plugins/vendor/clang-format-hooks/git-pre-commit-format install
```
If you run into trouble, consult the [original installation instructions](https://github.com/barisione/clang-format-hooks#setup).
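You can also format staged changes on demand instead of at commit time, using the vendored `apply-format` script. A minimal sketch, assuming the upstream script names and flags are unchanged in the vendored copy:

```bash
# show the suggested formatting for the currently staged changes
./plugins/vendor/clang-format-hooks/apply-format --staged
# apply it in place
./plugins/vendor/clang-format-hooks/apply-format --staged -i
```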
Happy coding!
### Releases
Tasks for creating and publishing a release are listed in the `version-rollout`
......
# NOTE: If you want to update this file with options for your own setup, create
# a copy of it. Make sure you *do not* commit any changes specific to your
# setup to the original file!
# CMake configuration options
# - Default build type: Release
# - Create virtual environment in dune-common
# - Fetch pip and install it into virtual environment
# - Prefer parallel over sequential HDF5 (required for Ubuntu/Debian)
CMAKE_FLAGS="
-DCMAKE_BUILD_TYPE=Release
-DDUNE_PYTHON_VIRTUALENV_SETUP=True
-DDUNE_PYTHON_ALLOW_GET_PIP=True
-DHDF5_PREFER_PARALLEL=True
"
#!/usr/bin/env bash
# Abort script on command failure
set -e
echo "=== Cloning DUNE modules into $(pwd) ==="
DEFAULT_BRANCH="releases/2.6"
git clone https://gitlab.dune-project.org/staging/dune-uggrid.git -b $DEFAULT_BRANCH
git clone https://gitlab.dune-project.org/oklein/dune-randomfield.git -b $DEFAULT_BRANCH
git clone https://gitlab.dune-project.org/core/dune-common.git -b $DEFAULT_BRANCH
git clone https://gitlab.dune-project.org/core/dune-geometry.git -b $DEFAULT_BRANCH
git clone https://gitlab.dune-project.org/core/dune-grid.git -b $DEFAULT_BRANCH
git clone https://gitlab.dune-project.org/core/dune-istl.git -b $DEFAULT_BRANCH
git clone https://gitlab.dune-project.org/core/dune-localfunctions.git -b $DEFAULT_BRANCH
git clone https://gitlab.dune-project.org/staging/dune-functions.git -b $DEFAULT_BRANCH
git clone https://gitlab.dune-project.org/pdelab/dune-pdelab.git -b $DEFAULT_BRANCH
git clone https://gitlab.dune-project.org/staging/dune-typetree.git -b $DEFAULT_BRANCH
git clone https://gitlab.dune-project.org/quality/dune-testtools.git -b $DEFAULT_BRANCH
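The environment Dockerfile below executes this script with `bash clone_dune` from `/opt/dune`; the same call works for a local checkout (a sketch, assuming the modules should be cloned into the current directory):

```bash
# clone all DUNE dependencies into the current directory
bash clone_dune
```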
......@@ -20,6 +20,7 @@ add_custom_target(example_tests
# create the mapping datafile as preparation for all tests
add_custom_target(prepare_testing
COMMAND ${DUNE_PYTHON_VIRTUALENV_EXECUTABLE} ${PROJECT_SOURCE_DIR}/test/maps/create_param_maps.py ${PROJECT_SOURCE_DIR}/test/maps/cell_ids.h5
COMMAND ${DUNE_PYTHON_VIRTUALENV_EXECUTABLE} ${PROJECT_SOURCE_DIR}/dune/dorie/test/create_scaling.py ${PROJECT_BINARY_DIR}/dune/dorie/test/scaling.h5
)
add_dependencies(system_tests prepare_testing)
add_dependencies(unit_tests prepare_testing)
......
......@@ -82,6 +82,26 @@ the Richards solver must be utilized to solve them. Use the value
``richards+transport`` in the ``simulation.mode`` keyword to run the solute
transport solver.
Advanced Solver Features
========================
* **Unstructured Grids**: The DG scheme of DORiE can be applied to any grid
geometry. Users can load unstructured simplex grids into DORiE via GMSH
``.msh`` files. DORiE will then use an unstructured grid manager. See
:doc:`/manual/grid` for details.
* **Adaptive Grid Refinement** (h-refinement): DORiE supports adaptive local
grid refinement based on an a-posteriori flux error estimator. Enabling grid
refinement requires using an unstructured grid manager. Adaptivity can be
enabled by setting a refinement policy in the
:doc:`Configuration File </manual/config-file>`.
* **Parallel Execution:** DUNE modules are generally set up to support parallel
execution on distributed hardware via MPI. DORiE supports parallel execution
on a best-effort basis. This feature is currently only supported for
structured rectangular grids. Parallel execution is controlled through the
:ref:`Command Line Interface <man-cli-commands>`.
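A typical parallel run is launched through MPI. The exact invocation is
documented in the :ref:`Command Line Interface <man-cli-commands>`; the
following sketch is hypothetical (command syntax and process count are
illustrative only):

.. code-block:: bash

   mpirun -n 4 dorie run config.ini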
Available Models
================
......@@ -92,21 +112,27 @@ combinations users may choose from via the
Docker image or with the default targets built locally on your machine, these
are the available combinations of options:
+----------------------+-------------------+-----------------------------+------------------------------+----------------+
|``simulation.mode`` |``grid.dimensions``|``richards.numerics.FEorder``|``transport.numerics.FEorder``| GMSH Grid and |
| | | | | grid refinement|
+======================+===================+=============================+==============================+================+
|``richards`` | 2 | <= 3 | ✗ | ✓* |
| +-------------------+-----------------------------+------------------------------+----------------+
| | 3 | <= 1 | ✗ | ✓* |
+----------------------+-------------------+-----------------------------+------------------------------+----------------+
|``richards+transport``| 2 | <= 3 | same as ``richards`` | ✓*\ :sup:`†` |
| +-------------------+-----------------------------+------------------------------+----------------+
| | 3 | <= 1 | same as ``richards`` | ✓*\ :sup:`†` |
+----------------------+-------------------+-----------------------------+------------------------------+----------------+
* *: Only for ``richards.numerics.FEorder`` > 0. Finite volume solvers do not
support unstructured grids.
+----------------------+-------------------+-----------------------------+------------------------------+----------------+---------------------+
|``simulation.mode`` |``grid.dimensions``|``richards.numerics.FEorder``|``transport.numerics.FEorder``| GMSH Grid and | Parallel |
| | | | | grid refinement| execution |
+======================+===================+=============================+==============================+================+=====================+
|``richards`` | 2 | 0 | ✗ | ✗ | ✓ |
| +-------------------+-----------------------------+------------------------------+----------------+---------------------+
| | 2 | 1..3 | ✗ | ✓ | on structured grids |
| +-------------------+-----------------------------+------------------------------+----------------+---------------------+
| | 3 | 0 | ✗ | ✗ | ✓ |
| +-------------------+-----------------------------+------------------------------+----------------+---------------------+
| | 3 | 1 | ✗ | ✓ | on structured grids |
+----------------------+-------------------+-----------------------------+------------------------------+----------------+---------------------+
|``richards+transport``| 2 | 0 | same as ``richards`` | ✗ | ✓ |
| +-------------------+-----------------------------+------------------------------+----------------+---------------------+
| | 2 | 1..3 | same as ``richards`` | ✓\ :sup:`†` | on structured grids |
| +-------------------+-----------------------------+------------------------------+----------------+---------------------+
| | 3 | 0 | same as ``richards`` | ✗ | ✓ |
| +-------------------+-----------------------------+------------------------------+----------------+---------------------+
| | 3 | 1 | same as ``richards`` | ✓\ :sup:`†` | on structured grids |
+----------------------+-------------------+-----------------------------+------------------------------+----------------+---------------------+
* :sup:`†`: Grid refinement on *rectangular* grids produces hanging nodes which
are currently not considered in the
:ref:`flux reconstruction <man-flux_reconstruction>`. This can result in
......
......@@ -126,18 +126,19 @@ Richards Parameterizations
~~~~~~~~~~~~~~~~~~~~~~~~~~
As the *soil porosity* :math:`\phi \, [-]` and the *residual water content*
:math:`\theta_s \, [-]` vary between soil types, it is convenient to define the
:math:`\theta_r \, [-]` vary between soil types, it is convenient to define the
*soil water saturation* :math:`\Theta \, [-]` by
.. math::
\Theta = \frac{\theta_w - \theta_r}{\phi - \theta_r},
\Theta (\theta_w) = \frac{\theta_w - \theta_r}{\phi - \theta_r},
\quad \Theta \in \left[ 0, 1 \right],
where :math:`\theta_w \, [-]` is the volumetric soil water content.
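For illustration, a soil with porosity :math:`\phi = 0.4`, residual water
content :math:`\theta_r = 0.05`, and volumetric water content
:math:`\theta_w = 0.25` has a saturation of

.. math::

   \Theta = \frac{0.25 - 0.05}{0.4 - 0.05} \approx 0.57 .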
One typically assumes that the entire pore space can be filled up with water,
hence we set the *saturated water content* :math:`\theta_s \, [-]` equal to the
porosity, :math:`\theta_s = \phi`.
This relation can be manipulated in certain :ref:`local scalings <man-scaling>`.
Mualem–van Genuchten
""""""""""""""""""""
......@@ -492,6 +493,7 @@ Effective Hydromechanic Dispersion Tensor for Isotropic Media
lambda_t:
# <Deff_type> parameters ...
.. _man-scaling:
Scalings
~~~~~~~~
......@@ -507,6 +509,7 @@ Every ``scaling_section`` has the following layout:
extensions: <sequence>
offset: <sequence>
The setting ``interpolation`` accepts any of the :doc:`implemented Interpolators <interpolators>`.
The values of ``extensions`` and ``offset`` are sequences containing the
coordinates in the respective spatial dimensions.
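A filled-in scaling section might look like the following sketch (file name,
dataset, and values are illustrative, mirroring the test data shipped with
DORiE):

.. code-block:: yaml

   scale_miller:
     file: scaling.h5
     dataset: twos
     interpolation: nearest
     extensions: [1, 1]
     offset: [0, 0]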
......@@ -545,6 +548,34 @@ Miller Scaling
scale_miller:
# ...
Miller Porosity Scaling
"""""""""""""""""""""""
Applies porosity scaling in addition to regular :ref:`Miller scaling <man-miller_scaling>`, violating its geometric similarity assumption.
.. math::
\Theta &= \Theta (h_m \cdot \xi_M) \\
K &= K (\Theta) \cdot \xi_M^2 \\
\phi &= \theta_s - \xi_\theta
* ``type: MilPor``
Scaling sections:
* ``scale_miller``: Scaling factor :math:`\xi_M` applied to matric head and conductivity simultaneously.
* ``scale_porosity``: Scaling offset :math:`\xi_\theta` subtracted from the saturated water content.
YAML template:
.. code-block:: yaml
type: MilPor
data:
scale_miller:
# ...
scale_porosity:
# ...
.. _YAML: https://gettaurus.org/docs/YAMLTutorial/
.. _HDF5: https://www.h5py.org/
......@@ -9,9 +9,6 @@ LABEL maintainer="lriedel@iup.uni-heidelberg.de"
# number of cores for parallel builds
ARG PROCNUM=1
# Compilers to be used
ARG CC=gcc
ARG CXX=g++
# copy the build context to this image
WORKDIR /opt/dune/dorie
......@@ -20,8 +17,7 @@ COPY ./ ./
# build the executable
WORKDIR /opt/dune/
RUN MAKE_FLAGS="-j${PROCNUM}" \
CMAKE_FLAGS="-DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=${CC} -DCMAKE_CXX_COMPILER=${CXX} -DDUNE_PYTHON_VIRTUALENV_SETUP=True -DDUNE_PYTHON_ALLOW_GET_PIP=True" \
./dune-common/bin/dunecontrol --only=dorie all
./dune-common/bin/dunecontrol --opts=dorie/build.opts --only=dorie all
# Start a fresh image as production environment
FROM $DUNE_ENV_IMAGE as prod-env
......
......@@ -6,13 +6,20 @@ LABEL maintainer="lriedel@iup.uni-heidelberg.de"
# number of cores for parallel builds
ARG PROCNUM=1
# Compilers to be used
ARG CC=gcc
ARG CXX=g++
RUN apt-get clean && apt-get update && apt-get upgrade -y && apt-get clean
# Upgrade Ubuntu packages
# NOTE: "localedef" is required in case "locale" package is updated
RUN apt-get clean \
&& apt-get update \
&& apt-get upgrade -y \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
ENV LANG en_US.utf8
# Update and build DUNE modules
WORKDIR /opt/dune
COPY build.opts ./
RUN ./dune-common/bin/dunecontrol update
RUN MAKE_FLAGS="-j ${PROCNUM}" \
CMAKE_FLAGS="-DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=${CC} -DCMAKE_CXX_COMPILER=${CXX} -DDUNE_PYTHON_VIRTUALENV_SETUP=True -DDUNE_PYTHON_ALLOW_GET_PIP=True" \
./dune-common/bin/dunecontrol all
./dune-common/bin/dunecontrol --opts=build.opts all
......@@ -6,6 +6,9 @@ ARG PROCNUM=1
ARG CC=gcc
ARG CXX=g++
# Pass compilers as environment variables (will persist in other images)
ENV CC=$CC CXX=$CXX
# disable any prompts while installing packages
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get clean \
......@@ -37,26 +40,18 @@ RUN apt-get clean \
python3-dev \
python3-pip \
python3-vtk7 \
&& apt-get clean
RUN rm -rf /var/lib/apt/lists/* \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
ENV LANG en_US.utf8
WORKDIR /opt/dune
RUN git clone https://gitlab.dune-project.org/staging/dune-uggrid.git -b releases/2.6 \
&& git clone https://gitlab.dune-project.org/oklein/dune-randomfield.git -b releases/2.6 \
&& git clone https://gitlab.dune-project.org/core/dune-common.git -b releases/2.6 \
&& git clone https://gitlab.dune-project.org/core/dune-geometry.git -b releases/2.6 \
&& git clone https://gitlab.dune-project.org/core/dune-grid.git -b releases/2.6 \
&& git clone https://gitlab.dune-project.org/core/dune-istl.git -b releases/2.6 \
&& git clone https://gitlab.dune-project.org/core/dune-localfunctions.git -b releases/2.6 \
&& git clone https://gitlab.dune-project.org/staging/dune-functions.git -b releases/2.6 \
&& git clone https://gitlab.dune-project.org/pdelab/dune-pdelab.git -b releases/2.6 \
&& git clone https://gitlab.dune-project.org/staging/dune-typetree.git -b releases/2.6 \
&& git clone https://gitlab.dune-project.org/quality/dune-testtools.git -b releases/2.6
WORKDIR /opt/dune
# Clone DUNE modules
COPY clone_dune ./
RUN bash clone_dune
# Configure and build DUNE modules
COPY build.opts ./
RUN MAKE_FLAGS="-j ${PROCNUM}" \
CMAKE_FLAGS="-DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=${CC} -DCMAKE_CXX_COMPILER=${CXX} -DDUNE_PYTHON_VIRTUALENV_SETUP=True -DDUNE_PYTHON_ALLOW_GET_PIP=True" \
./dune-common/bin/dunecontrol all
./dune-common/bin/dunecontrol --opts=build.opts all
......@@ -58,6 +58,8 @@ public:
// release properties
herr_t status = H5Pclose(h5_plist_id);
assert(status > -1);
if (status < 0)
_log->warn("Unable to release H5 file properties");
// check for errors regardless of build type
if (_file_id < 0) {
......@@ -81,6 +83,9 @@ public:
// close the opened file
status = H5Fclose(_file_id);
assert(status > -1);
if (status < 0)
_log->warn("Error closing H5 file: {}", _file_path);
}
/// Return the file path of this object
......@@ -92,14 +97,17 @@ public:
*/
void open_group(const std::string& group_path="./")
{
_log->trace("Opening H5 group: {}", group_path);
// close the group if we already opened it
if (_group_id >= 0) {
herr_t status = H5Gclose(_group_id);
assert(status > -1);
if (status < 0)
_log->warn("Error closing previously opened H5 group");
}
// open the new group
_log->trace("Opening H5 group: {}", group_path);
_group_id = H5Gopen(_file_id, group_path.c_str(), H5P_DEFAULT);
if (_group_id < 0) {
_log->error("Failed to open H5 group: {}", group_path);
......
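The pattern introduced above (abort via `assert` in debug builds, log a warning in release builds) could also be factored into a small helper. A sketch, assuming an spdlog logger is at hand; `check_h5_status` is a hypothetical name, not part of DORiE:

```cpp
#include <cassert>
#include <memory>
#include <hdf5.h>
#include <spdlog/spdlog.h>

// Abort on a failed HDF5 call in debug builds; warn and continue in release.
inline void check_h5_status (const herr_t status, const char* what,
                             const std::shared_ptr<spdlog::logger>& log)
{
    assert(status > -1);
    if (status < 0)
        log->warn("HDF5 operation failed: {}", what);
}
```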
......@@ -72,7 +72,7 @@ which are applied to the parameterization functions as follows:
@f{align*}{
\Theta &= \Theta(h_m \cdot \xi_{h_m}) \\
\phi &= \theta_s + \xi_{\phi} \\
\phi &= \theta_s - \xi_{\phi} \\
K &= K(\Theta) \cdot \xi_K^2
@f}
......
......@@ -5,6 +5,8 @@
#include <string>
#include <map>
#include <dune/common/float_cmp.hh>
namespace Dune {
namespace Dorie {
namespace Parameterization {
......@@ -123,9 +125,11 @@ public:
const RF& scale_por
) const
{
verify_porosity_scaling(scale_por);
return [this, &scale_por](const Saturation sat) {
return WaterContent{
sat.value * (_theta_s.value - _theta_r.value + scale_por)
sat.value * (_theta_s.value - _theta_r.value - scale_por)
+ _theta_r.value
};
};
......@@ -140,13 +144,15 @@ public:
const RF& scale_por
) const
{
verify_porosity_scaling(scale_por);
return [this, &scale_por](const WaterContent theta)
{
// Small value to avoid numerical issues
constexpr double eps_min = 1E-6;
const double sat = (theta.value - _theta_r.value)
/ (_theta_s.value + scale_por - _theta_r.value);
/ (_theta_s.value - scale_por - _theta_r.value);
return Saturation{std::clamp(sat, eps_min, 1.0)};
};
}
......@@ -189,6 +195,24 @@ public:
*/
virtual std::unique_ptr<Richards<Traits>> clone () const = 0;
private:
/// Check if the porosity scaling factor conforms to the parameterization
/** \param scaling The porosity scaling factor to check
*/
void verify_porosity_scaling (const RF scaling) const
{
if (FloatCmp::eq(scaling, 0.0))
return;
if (scaling >= _theta_s.value - _theta_r.value) {
DUNE_THROW(Dune::IOError, "Invalid porosity scaling "
"(negative water content).");
}
else if (_theta_s.value - scaling > 1.0) {
DUNE_THROW(Dune::IOError, "Invalid porosity scaling "
"(water content exceeds 1).");
}
}
};
......
......@@ -160,6 +160,62 @@ public:
~MillerScalingAdapter () override = default;
};
/// ScalingAdapter implementing Miller scaling with additional porosity scaling.
/**
* For a description of Miller scaling, see MillerScalingAdapter.
* In addition, the porosity is scaled through an additive scaling
* \f$ \xi_\theta \f$:
* \f[
* \Theta (\theta) =
* \frac{\theta - \theta_r}{\theta_s - \xi_\theta - \theta_r}
* \f]
*
* The MilPor scaling violates the geometric similarity assumption of Miller
* scaling. The sign of \f$ -\xi_\theta \f$ (instead of \f$ +\xi_\theta \f$)
* reflects that a material with a smaller length scale (Miller scaling factor
* \f$ \xi < 1 \f$) has a larger porosity.
*
* \ingroup RichardsParam
* \author Hannes Bauser
* \date 2018
*/
template<typename Traits>
class MilPorScalingAdapter : public ScalingAdapter<Traits>
{
private:
using Base = ScalingAdapter<Traits>;
using Domain = typename Base::Domain;
using GridView = typename Base::GridView;
using return_t = typename Base::return_t;
public:
/// Create this adapter and the internal interpolator.
explicit MilPorScalingAdapter (
const YAML::Node& scale_data_cfg,
const GridView& grid_view,
const std::shared_ptr<spdlog::logger> log
):
Base(scale_data_cfg, grid_view, log)
{
this->add_interpolator(scale_data_cfg["scale_miller"], "scale_miller");
this->add_interpolator(scale_data_cfg["scale_porosity"], "scale_porosity");
}
/// Export type of this scaling adapter
static inline std::string type = "MilPor";
/// Evaluate this scaling adapter
return_t evaluate (const Domain& pos) const override
{
const auto miller_scaling_value = this->_interpolators[0]->evaluate(pos);
const auto porosity_scaling_value = this->_interpolators[1]->evaluate(pos);
return return_t{miller_scaling_value, miller_scaling_value, porosity_scaling_value};
}
/// Delete this adapter
~MilPorScalingAdapter () override = default;
};
/// A ScalingAdapter that does not implement any scaling.
/** This adapter returns a default-initialized instance of ScalingFactors
* when evaluated.
......@@ -241,6 +297,11 @@ public:
grid_view,
log);
}
else if (type == MilPorScalingAdapter<Traits>::type) {
return std::make_shared<MilPorScalingAdapter<Traits>>(config,
grid_view,
log);
}
else if (to_lower(type) == DummyScalingAdapter<Traits>::type) {
return std::make_shared<DummyScalingAdapter<Traits>>(config,
grid_view,
......
......@@ -154,9 +154,20 @@ void ModelRichards<Traits>::operator_setup()
igo = std::make_unique<IGO>(*go0,*go1);
// --- Solvers ---
lsgfs = std::make_unique<LSGFS>(LSGFSHelper::create(gv));
lscc = std::make_unique<LSCC>();
ls = std::make_unique<LS>(*igo,*cc,*lsgfs,*lscc,1000,0,true,true);
// FV linear solver
if constexpr (order == 0) {
// Solver args: GFS, maxiter, verbose, reuse, superlu
ls = std::make_unique<LS>(*gfs, 1000, 0, true, true);
}
// DG linear solver
else {
lsgfs = std::make_unique<LSGFS>(LSGFSHelper::create(gv));
lscc = std::make_unique<LSCC>();
// Solver args: GO, constraints, LSGFS, LSGFS constraints, maxiter, verbose,
// reuse, superlu
ls = std::make_unique<LS>(*igo, *cc, *lsgfs, *lscc, 1000, 0, true, true);
}
pdesolver = std::make_unique<PDESOLVER>(*igo,*ls);
pdesolver->setParameters(inifile.sub("NewtonParameters"));
......
......@@ -130,8 +130,10 @@ struct ModelRichardsTraits : public BaseTraits
/// Solution vector type
using U = typename IGO::Traits::Domain;
/// Linear solver types
using LS = Dune::PDELab::ISTLBackend_OVLP_AMG_4_DG<IGO,CC,LSGFS,LSCC,
Dune::PDELab::CG2DGProlongation,Dune::SeqSSOR,Dune::BiCGSTABSolver>;
using LS =
std::conditional_t<order == 0,
Dune::PDELab::ISTLBackend_BCGS_AMG_SSOR<IGO>,
Dune::PDELab::ISTLBackend_OVLP_AMG_4_DG<IGO, CC, LSGFS, LSCC, Dune::PDELab::CG2DGProlongation, Dune::SeqSSOR, Dune::BiCGSTABSolver>>;
/// Non-linear solver types
using PDESOLVER = Dune::PDELab::Newton<IGO,LS,U>;
/// Time stepping scheme
......
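Stripped of the PDELab types, the compile-time selection used here combines `std::conditional_t` for the type alias with `if constexpr` for the construction. A self-contained sketch with hypothetical placeholder solvers:

```cpp
#include <memory>
#include <type_traits>

// Hypothetical placeholders standing in for the PDELab solver backends
struct FVSolver { explicit FVSolver (int maxiter) {} };
struct DGSolver { DGSolver (int maxiter, bool verbose) {} };

template<int order>
struct SolverTraits
{
    // Select the backend type at compile time, as the traits above do
    using LS = std::conditional_t<order == 0, FVSolver, DGSolver>;
};

template<int order>
std::unique_ptr<typename SolverTraits<order>::LS> make_solver ()
{
    // Only the branch matching 'order' is instantiated
    if constexpr (order == 0)
        return std::make_unique<FVSolver>(1000);
    else
        return std::make_unique<DGSolver>(1000, false);
}
```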
......@@ -182,16 +182,33 @@ void ModelTransport<Traits>::operator_setup()
explicit_igo = std::make_unique<ExplicitIGO>(*go0,*go1);
// --- Solvers ---
lsgfs = std::make_unique<LSGFS>(LSGFSHelper::create(gv));
lscc = std::make_unique<LSCC>();
// Initialize helper spaces for DG linear solver only
if constexpr (order > 0)
{
lsgfs = std::make_unique<LSGFS>(LSGFSHelper::create(gv));
lscc = std::make_unique<LSCC>();
}
if (ts_param->implicit())
implicit_ls = std::make_unique<ImplicitLS>(
*implicit_igo, *cc, *lsgfs, *lscc, 1000, 0, true, true
);
{
if constexpr (order == 0)
implicit_ls = std::make_unique<ImplicitLS>(*gfs, 1000, 0, true, true);
else
{
implicit_ls = std::make_unique<ImplicitLS>(
*implicit_igo, *cc, *lsgfs, *lscc, 1000, 0, true, true);
}
}
else
explicit_ls = std::make_unique<ExplicitLS>(
*explicit_igo, *cc, *lsgfs, *lscc, 1000, 0, true, true
);
{
if constexpr (order == 0)
explicit_ls = std::make_unique<ExplicitLS>(*gfs, 1000, 0, true, true);
else
{
explicit_ls = std::make_unique<ExplicitLS>(
*explicit_igo, *cc, *lsgfs, *lscc, 1000, 0, true, true);
}
}
// --- Time Step Operators ---
if (ts_param->implicit()){
......
......@@ -158,14 +158,26 @@ struct ModelTransportTraits : public BaseTraits
/// Solution vector type
using U = typename ImplicitIGO::Traits::Domain;
/// Linear solver type
using ImplicitLS = Dune::PDELab::ISTLBackend_OVLP_AMG_4_DG<
ImplicitIGO, CC, LSGFS, LSCC, Dune::PDELab::CG2DGProlongation,
Dune::SeqSSOR, Dune::BiCGSTABSolver
>;
using ExplicitLS = Dune::PDELab::ISTLBackend_OVLP_AMG_4_DG<
ExplicitIGO, CC, LSGFS, LSCC, Dune::PDELab::CG2DGProlongation,
Dune::SeqSSOR, Dune::BiCGSTABSolver
>;
using ImplicitLS = std::conditional_t<
order == 0,
Dune::PDELab::ISTLBackend_BCGS_AMG_SSOR<ImplicitIGO>,
Dune::PDELab::ISTLBackend_OVLP_AMG_4_DG<ImplicitIGO,
CC,
LSGFS,
LSCC,
Dune::PDELab::CG2DGProlongation,
Dune::SeqSSOR,
Dune::BiCGSTABSolver>>;
using ExplicitLS = std::conditional_t<
order == 0,
Dune::PDELab::ISTLBackend_BCGS_AMG_SSOR<ExplicitIGO>,
Dune::PDELab::ISTLBackend_OVLP_AMG_4_DG<ExplicitIGO,
CC,
LSGFS,
LSCC,
Dune::PDELab::CG2DGProlongation,
Dune::SeqSSOR,
Dune::BiCGSTABSolver>>;
/// Methods computing the time step
using ImplicitSLPS
= Dune::PDELab::StationaryLinearProblemSolver<ImplicitIGO, ImplicitLS, U>;
......
message(STATUS "Handling unit tests")
# copy the config files required by the unit tests
file(COPY scaling.h5
scaling.yml
test-param-richards-scaled.yml
file(COPY test-param-richards-scaled.yml
test-param-richards-unscaled.yml
test-param-transport.yml
test-param-factory.yml
DESTINATION .)
# scaling adapter test
dorie_add_unit_test(SOURCES test-model-base.cc NAME test-model-base)
dorie_add_unit_test(SOURCES test-grid-function-container.cc NAME test-grid-function-container)
dorie_add_unit_test(SOURCES test-interpolators.cc NAME test-interpolators)
dorie_add_unit_test(SOURCES test-scaling-adapters.cc NAME test-scaling-adapters)
dorie_add_unit_test(SOURCES test-param-factory.cc NAME test-param-factory)
dorie_add_unit_test(SOURCES test-boundary-condition.cc
NAME test-boundary-condition)
# scaling adapter test
dorie_add_metaini_test(
UNIT_TEST CUSTOM_MAIN
SOURCE test-scaling-adapters.cc
BASENAME test-scaling-adapters
METAINI test-scaling-adapters.mini.in
CREATED_TARGETS sa_target
)
# boundary condition test
dorie_add_metaini_test(
UNIT_TEST CUSTOM_MAIN
......
import h5py
import numpy as np
import os
import argparse
def create_and_write_datasets(args):
zeros = np.zeros((2, 2))
ones = np.ones((2, 2))
twos = np.ones((2, 2)) * 2
shift = np.zeros((2, 1))
shift[1, ...] = 1
porosity = np.ones((2, 2)) * 0.1
f = h5py.File(args.path, "w")
f.create_dataset("zeros", data=zeros)
f.create_dataset("ones", data=ones)
f.create_dataset("twos", data=twos)
f.create_dataset("shift", data=shift)
f.create_dataset("porosity", data=porosity)
f.close()
def parse_cli():
parser = argparse.ArgumentParser(
description="Write parameter mapping datasets for all tests"
)
parser.add_argument(
"path", type=os.path.realpath, help="File path to write the data"
)
return parser.parse_args()
if __name__ == "__main__":
args = parse_cli()
create_and_write_datasets(args)
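Invoked standalone, the script writes all datasets into a single HDF5 file at the given path, e.g.:

```bash
python3 create_scaling.py scaling.h5
```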
# macro grid on which the scaling adapters are evaluated
grid:
extensions: [1, 1]
offset: [0, 0]
scalings:
# Loads twos as Miller scaling factors
scale1:
type: Miller
data:
scale_miller:
file: scaling.h5
dataset: twos
interpolation: nearest
extensions: [1, 1]
offset: [0, 0]
# Loads twos as Miller scaling factors (automatic extension deduction)
scale2:
type: Miller
data:
scale_miller:
file: scaling.h5
dataset: twos
interpolation: nearest
# Loads Dummy scale adapter returning 1.0 as factors
scale3:
type: none
......@@ -19,7 +19,7 @@ file = none
type = {_initial_type}
quantity = {_quantity}
file = ${CMAKE_CURRENT_LIST_DIR}/scaling.h5
file = scaling.h5
dataset = twos
interpolation = nearest
......@@ -34,6 +34,6 @@ quantity = soluteConcentration
equation = -1+2*y
file = ${CMAKE_CURRENT_LIST_DIR}/scaling.h5
file = scaling.h5
dataset = twos
interpolation = nearest
......@@ -3,8 +3,6 @@ include ${CMAKE_BINARY_DIR}/doc/default_files/config.ini
__name = scaled_param
_asset_path = "${PROJECT_SOURCE_DIR}/test"
adaptivity.useAdaptivity = false
[grid]
initialLevel = 1
gridType = rectangular
......
......@@ -24,7 +24,7 @@ volumes:
tau: 0.0
scaling:
type: Miller
type: MilPor
data:
scale_miller:
file: scaling.h5
......@@ -32,3 +32,9 @@ scaling:
interpolation: nearest
extensions: [1, 1]
offset: [0, 0]
scale_porosity:
file: scaling.h5
dataset: porosity
interpolation: nearest
extensions: [1, 1]
offset: [0, 0]
#include <gtest/gtest.h>
#ifdef HAVE_CONFIG_H
#include "config.h"
#include "config.h"
#endif
#include <cassert>
#include <exception>
#include <functional>
#include <random>
#include <string>
#include <vector>
#include <yaml-cpp/yaml.h>
#include <dune/common/float_cmp.hh>
#include <dune/common/fvector.hh>
#include <dune/common/parallel/mpihelper.hh>
#include <dune/common/parametertree.hh>
#include <dune/common/parametertreeparser.hh>
#include <dune/grid/yaspgrid.hh>
#include <dune/dorie/common/grid_creator.hh>
#include <dune/dorie/common/setup.hh>
#include <dune/dorie/model/richards/parameterization/scaling.hh>
int argc = 0;
char** argv;

struct Traits
{
static constexpr int dim = 2;
using RF = double;
using DF = double;
using Grid = Dune::YaspGrid<dim, Dune::EquidistantOffsetCoordinates<DF, dim>>;
using Domain = Dune::FieldVector<DF, dim>;
using GridView = typename Grid::LeafGridView;
};

/// Fixture storing scaling settings, grid, and sampling positions
class ScalingFixture : public ::testing::Test
{
protected:
using Grid = typename Traits::Grid;
using Domain = typename Traits::Domain;
using SAF = Dune::Dorie::Parameterization::ScalingAdapterFactory<Traits>;
using Scaling =
Dune::Dorie::Parameterization::ScalingFactors<typename Traits::RF>;

std::shared_ptr<spdlog::logger> log;
std::shared_ptr<typename Traits::Grid> grid;
YAML::Node scale_file;

/// Positions where the scaling adapter will be sampled
std::vector<Domain> pos_sample;

void SetUp() override
{
auto [cfg, lg, helper] = Dune::Dorie::Setup::init(argc, argv);
log = lg;
scale_file = YAML::LoadFile(cfg["_scaling_file_name"]);

// Create the grid
Dune::Dorie::GridCreator<Grid> gc(cfg, helper);
grid = gc.grid();

// Refine the grid so we sample more than the coarsest grid vertices
grid->globalRefine(1);

// Store the sampling positions
for (auto&& vertex : vertices(grid->leafGridView()))
{
pos_sample.emplace_back(vertex.geometry().center());
}
}
void TearDown() override { spdlog::drop_all(); }
};
// Inject operator== into this namespace so Google Test finds it via ADL
namespace Dune::Dorie::Parameterization
{
/// Compare two scaling factor structs
template<typename T>
std::enable_if_t<std::is_arithmetic_v<T>, bool>
operator==(const ScalingFactors<T>& a, const ScalingFactors<T>& b)
{
using namespace Dune::FloatCmp;
if (eq(a.scale_cond, b.scale_cond) and eq(a.scale_head, b.scale_head)
and eq(a.scale_por, b.scale_por))
{
return true;
}
return false;
}
} // namespace Dune::Dorie::Parameterization

/// Create a scaling adapter from a YAML settings node
template<class GV>
decltype(auto)
create_scaling(const YAML::Node& settings,
const GV& grid_view,
const std::shared_ptr<spdlog::logger> log)
{
using SAF = Dune::Dorie::Parameterization::ScalingAdapterFactory<Traits>;
const auto type = settings["type"].as<std::string>();
return SAF::create(type, settings["data"], grid_view, log);
}
// --- Test cases --- //
TEST_F(ScalingFixture, DummyScaling)
{
auto scaling = create_scaling(scale_file["dummy"], grid->leafGridView(), log);
const Scaling reference{ 1.0, 1.0, 0.0 };
for (auto&& pos : pos_sample)
{
const auto value = scaling->evaluate(pos);
EXPECT_EQ(value, reference) << "Position : " << pos[0] << ", " << pos[1];
}
}
TEST_F(ScalingFixture, MillerAutomaticExtension)
{
auto scaling = create_scaling(
scale_file["miller_automatic_extension"], grid->leafGridView(), log);
const Scaling reference{ 2.0, 2.0, 0.0 };
for (auto&& pos : pos_sample)
{
const auto value = scaling->evaluate(pos);
EXPECT_EQ(value, reference) << "Position : " << pos[0] << ", " << pos[1];
}
}
TEST_F(ScalingFixture, MillerCustomExtension)
{
auto scaling = create_scaling(
scale_file["miller_custom_extension"], grid->leafGridView(), log);
const Scaling reference{ 2.0, 2.0, 0.0 };
for (auto&& pos : pos_sample)
{
const auto value = scaling->evaluate(pos);
EXPECT_EQ(value, reference) << "Position : " << pos[0] << ", " << pos[1];
}
}
TEST_F(ScalingFixture, MillerShift)
{
auto scaling =
create_scaling(scale_file["miller_shift"], grid->leafGridView(), log);
const Scaling ref_low{ 0.0, 0.0, 0.0 }, ref_high{ 1.0, 1.0, 0.0 };
for (auto&& pos : pos_sample)
{
const auto value = scaling->evaluate(pos);
auto reference = ref_low;
if (int(pos[1]) > 0)
reference = ref_high;
EXPECT_EQ(value, reference) << "Position : " << pos[0] << ", " << pos[1];
}
}
TEST_F(ScalingFixture, MilPor)
{
auto scaling =
create_scaling(scale_file["milpor"], grid->leafGridView(), log);
const Scaling reference{ 0.0, 0.0, 2.0 };
for (auto&& pos : pos_sample)
{
const auto value = scaling->evaluate(pos);
EXPECT_EQ(value, reference) << "Position : " << pos[0] << ", " << pos[1];
}
}
// --- Main Function --- //
int
main(int _argc, char** _argv)
{
// Parse input arguments
::testing::InitGoogleTest(&_argc, _argv);
argc = _argc;
argv = _argv;
return RUN_ALL_TESTS();
}
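Note that the file-scope `argc`/`argv` pair defined at the top of this file forwards the command line from `main` to `SetUp()`, since Google Test fixtures cannot receive the arguments directly.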
include ${CMAKE_BINARY_DIR}/doc/default_files/config.ini
__name = sa
_scaling_file_name = ${CMAKE_CURRENT_LIST_DIR}/test-scaling-adapters.yml
output.logLevel = warn
[grid]
extensions = 1 2
cells = 10 20
# Loads twos as Miller scaling factors
miller_custom_extension:
type: Miller
data:
scale_miller:
file: scaling.h5
dataset: twos
interpolation: nearest
extensions: [1, 2]
offset: [0, 0]
# Loads twos as Miller scaling factors (automatic extension deduction)
miller_automatic_extension:
type: Miller
data:
scale_miller:
file: scaling.h5
dataset: twos
interpolation: nearest
# Loads zeros and ones as Miller scaling factors
miller_shift:
type: Miller
data:
scale_miller:
file: scaling.h5
dataset: shift
interpolation: nearest
# Loads zeros as Miller scaling factors and twos as porosity scaling factors
milpor:
type: MilPor
data:
scale_miller:
file: scaling.h5
dataset: zeros
interpolation: nearest
scale_porosity:
file: scaling.h5
dataset: twos
interpolation: nearest
# Loads Dummy scale adapter returning 1.0 as factors
dummy:
type: none
Subproject commit ac35e705ebf20187b7ff92c29d1411f6a4c8d522