[Dune] clear_matrix_row_block() and write_matrix_element_if_exists_to_block() in backend/istl/matrixhelpers.hh
Eike Mueller
E.Mueller at bath.ac.uk
Thu Dec 17 08:22:37 CET 2015
Dear DUNE PDELab,
could anyone tell me what the methods clear_matrix_row() and write_matrix_element_if_exists() in pdelab/backend/istl/matrixhelpers.hh do, exactly?
They are specialised on the type of the first (or second) argument, which can be, for example, tags::field_matrix_1_any:
template<typename RI, typename Block>
void clear_matrix_row(tags::field_matrix_1_any, Block& b, const RI& ri, int i)
{
  assert(i == -1);
  b[0] = 0;
}
template<typename T, typename RI, typename CI, typename Block>
void write_matrix_element_if_exists(const T& v, tags::field_matrix_1_m, Block& b, const RI& ri, const CI& ci, int i, int j)
{
  assert(i == -1);
  assert(j == 0);
  b[0][ci[0]] = v;
}
I’m asking because the EXADUNE fork has similar methods clear_matrix_row_block() and write_matrix_element_if_exists_to_block(), e.g.:
template<typename RI, typename Block>
void clear_matrix_row_block(tags::field_matrix_n_any, Block& b, const RI& ri, int i)
{
  assert(i == 0);
  b = 0;
}
template<typename T, typename RI, typename CI, typename Block>
void write_matrix_element_if_exists_to_block(const T& v, tags::field_matrix_n_m, Block& b, const RI& ri, const CI& ci, int i, int j)
{
  assert(i == 0);
  assert(j == 0);
  for (std::size_t i = 0; i < b.rows; ++i)
    b[i][i] = v;
}
However, the versions of these that take 1xm, 1x1 and nx1 block matrices throw a NotImplemented exception in EXADUNE. They are not called in the sequential version of my program, but they are invoked in the parallel version, so this must be related to the parallel constraints.
The reason I get 1xm blocks in my matrix is that I implemented a prolongation operator from a P0 space (1 unknown per cell) to a DG space of degree p (with (p+1)^d unknowns per cell).
If I simply replace the exception-throwing code with what is done in the nxm versions of those routines, my code compiles and runs fine, even in parallel.
Thanks a lot,
Eike