[Dune] [dune-pdelab] Implementing tri-diagonal matrices in DUNE/PDELab

Eike Mueller em459 at bath.ac.uk
Tue Dec 6 11:03:47 CET 2011


Hi Steffen,

thanks a lot for these suggestions. It sounds like it is not as easy as I
had hoped, so for the moment I'm trying to circumvent PDELab and have
constructed my blocked matrix 'by hand', i.e. the matrix I use is of type

Dune::BCRSMatrix<Dune::BTDMatrix<Dune::FieldMatrix<double,1,1>>>

Now, unfortunately, the matrix construction fails with a segfault, which
I suspect is due to the fact that the size of the tridiagonal blocks
is not fixed and is not known at the point where the structure of the
'outer' matrix is built up (by calling setrowsize() and addindex()). To
construct the matrix I use the class below.
(The stencil is a 7-point stencil in 3d, i.e. a 5-point direct
nearest-neighbour stencil in the horizontal. The object subdomain carries
the size of the matrix, i.e. nx, ny, nz, and has a method nhoriz() which
returns the size of the horizontal domain, i.e. nx*ny.)

class BlockMatrix
  : public Dune::BCRSMatrix<Dune::BTDMatrix<Dune::FieldMatrix<double,1,1>>> {

  public:
    typedef Dune::FieldMatrix<double,1,1> FM;
    typedef double ElementType;
    typedef Dune::BTDMatrix<FM> TridiagMatrix;
    typedef Dune::BCRSMatrix<TridiagMatrix> BaseT;
    typedef BaseT::size_type size_type;

    // Constructor: build the sparsity pattern of the 'outer' BCRS matrix
    BlockMatrix(const DomainSize& subdomain_)
      : BaseT(subdomain_.nhoriz(), subdomain_.nhoriz(),
              5*subdomain_.nhoriz(), BaseT::random),
        subdomain(subdomain_) {
      int nx = subdomain.nx();
      int ny = subdomain.ny();
      int nz = subdomain.nz();  // size of the tridiagonal blocks
      TridiagMatrix subm;       // default-constructed, i.e. of size 0
      subm = 0;
      // each row couples to itself and its four horizontal neighbours
      for (size_type ihoriz=0; ihoriz<subdomain.nhoriz(); ihoriz++) {
        this->setrowsize(ihoriz,5);
      }
      this->endrowsizes();
      // column indices of the periodic 5-point stencil in the horizontal
      std::set<int> colIndices;
      for (int ix=0; ix<nx; ix++) {
        for (int iy=0; iy<ny; iy++) {
          size_type ihoriz = ny*ix+iy;
          colIndices.clear();
          colIndices.insert(ihoriz);                // diagonal
          colIndices.insert(ny*((ix+1)%nx)+iy);     // x+1 neighbour
          colIndices.insert(ny*ix+(iy+1)%ny);       // y+1 neighbour
          colIndices.insert(ny*((ix+nx-1)%nx)+iy);  // x-1 neighbour
          colIndices.insert(ny*ix+(iy+ny-1)%ny);    // y-1 neighbour
          for (std::set<int>::const_iterator it=colIndices.begin();
               it!=colIndices.end(); ++it) {
            std::cout << ihoriz << " " << *it << std::endl;
            this->addindex(ihoriz,*it);
          }
        }
      }
      this->endindices();
      // initialise all blocks with the (empty) tridiagonal matrix
      typedef BaseT::RowIterator Row;
      typedef BaseT::ColIterator Col;
      for (Row row=BaseT::begin(); row!=BaseT::end(); ++row) {
        std::cout << " row = " << row.index() << std::endl;
        for (Col col=row->begin(); col!=row->end(); ++col) {
          std::cout << " col = " << col.index() << std::endl;
          *col = subm;
        }
      }
[...]

This crashes at *col = subm;
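If my suspicion is correct, the problem should already show up in a
minimal example along the following lines (just a sketch of what I mean,
untested):

typedef Dune::BTDMatrix<Dune::FieldMatrix<double,1,1>> Block;
// 1x1 block matrix with a single entry, built in 'random' mode
Dune::BCRSMatrix<Block> A(1,1,1,Dune::BCRSMatrix<Block>::random);
A.setrowsize(0,1);
A.endrowsizes();
A.addindex(0,0);
A.endindices();
Block block(4);  // tridiagonal block with four rows
block = 0;
// The blocks inside A were default-constructed, i.e. with size 0;
// if this assignment does not resize them, that would explain the segfault
A[0][0] = block;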

What confuses me is that the analogous construction in my hack of the
ISTL matrix backend has a very similar structure and does work! Here
is the constructor of the matrix container I wrote there, for comparison:

         Matrix (const T& t_)
           : BaseT(t_.globalSizeV()/BLOCKSIZE, t_.globalSizeU()/BLOCKSIZE,
                   Dune::BCRSMatrix<Dune::BTDMatrix<FM>>::random) {
           Pattern pattern(t_.globalSizeV()/BLOCKSIZE,
                           t_.globalSizeU()/BLOCKSIZE);
           t_.fill_pattern(pattern);
           for (size_t i=0; i<pattern.size(); ++i) {
             this->setrowsize(i, pattern[i].size());
           }
           this->endrowsizes();
           for (size_t i=0; i<pattern.size(); ++i) {
             for (typename std::set<size_type>::iterator it=pattern[i].begin();
                  it!=pattern[i].end(); ++it) {
               this->addindex(i, *it);
             }
           }
           this->endindices();
           // here the block size is known at compile time, so every block
           // can be initialised with a tridiagonal matrix of the right size
           TridiagMatrixT m(BLOCKSIZE);
           typedef typename BaseT::RowIterator Row;
           typedef typename BaseT::ColIterator Col;
           for (Row row=BaseT::begin(); row!=BaseT::end(); ++row) {
             for (Col col=row->begin(); col!=row->end(); ++col) {
               *col = m;
             }
           }
         }

Ok, in this case BLOCKSIZE (= nz) is known at compile time, but the
BCRSMatrix should not need to know it at the point where I allocate its
memory by setting the row sizes and column indices!
I thought about using a BCRS matrix whose entries are pointers to
BTDMatrix, i.e.

Dune::BCRSMatrix<Dune::BTDMatrix<Dune::FieldMatrix<double,1,1>>*>

What does work, of course, is using

Dune::BCRSMatrix<Dune::FieldMatrix<double,BLOCKSIZE,BLOCKSIZE>>

and then writing a tridiagonal solver for the dense
Dune::FieldMatrix<double,BLOCKSIZE,BLOCKSIZE>. But this wastes a lot of
memory and is inefficient (as I wrote earlier).
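For reference, the tridiagonal solve on such a dense block is just the
Thomas algorithm; a minimal sketch of what I mean (not my actual
implementation) would be:

#include <dune/common/fvector.hh>
#include <dune/common/fmatrix.hh>

// Thomas algorithm for A*x = b, where the dense matrix A is known to be
// tridiagonal (only the three central diagonals are ever accessed)
template<int N>
void solveTridiag(const Dune::FieldMatrix<double,N,N>& A,
                  Dune::FieldVector<double,N>& x,
                  const Dune::FieldVector<double,N>& b) {
  Dune::FieldVector<double,N> c(0.0), d(0.0);
  // forward elimination
  c[0] = (N>1) ? A[0][1]/A[0][0] : 0.0;
  d[0] = b[0]/A[0][0];
  for (int i=1; i<N; ++i) {
    double m = A[i][i] - A[i][i-1]*c[i-1];
    if (i<N-1) c[i] = A[i][i+1]/m;
    d[i] = (b[i] - A[i][i-1]*d[i-1])/m;
  }
  // back substitution
  x[N-1] = d[N-1];
  for (int i=N-2; i>=0; --i)
    x[i] = d[i] - c[i]*x[i+1];
}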

Thanks a lot,

Eike

On 2 Dec 2011, at 08:43, Steffen Müthing wrote:

> Hello Eike,
>
>> Dear dune-pdelab,
>>
>> I already posted this on the main dune mailing list, but as it is  
>> really a pdelab-question I'm reposting it here. Apologies for any  
>> confusion!
>>
>> I'm trying to exploit the strong vertical coupling of the problem  
>> I'm solving (essentially a 3d Poisson equation, discretised by a 7  
>> point FV stencil) by using a line- instead of a point smoother and  
>> have implemented this using the BTDMatrix class (i.e. my matrix is  
>> a block matrix, where each block is tridiagonal and the vertical  
>> degrees of freedom for each horizontal position (x,y) are grouped  
>> together in a block of the solution vector).
>>
>> I tried to build this into the PDELab framework by writing a new  
>> version of ISTLVectorBackend, based on
>>
>> Dune::BlockVector<Dune::BlockVector<Dune::BlockFieldVector<E,1>>>
>>
>> instead of
>>
>> Dune::BlockVector<BlockFieldVector<E,BLOCKSIZE>>
>>
>> and a modified version of ISTLBCRSMatrixBackend, based on
>>
>> Dune::BCRSMatrix<Dune::BTDMatrix<Dune::FieldMatrix<E,1,1>>>
>>
>> instead of
>>
>> Dune::BCRSMatrix<Dune::FieldMatrix<E,BLOCKROWSIZE,BLOCKCOLSIZE>>
>>
>> and this all works fine, i.e. I really just need to replace the  
>> backends in my code and everything else stays the same:
>
> that's pretty neat! The problem is that by doing this, you have  
> ventured into an area of PDELab that most users
> (and developers) rarely, if ever, touch... the LA backend interface  
> is not very fleshed out yet, mostly because we
> lacked good examples of more advanced setups to figure out a good  
> interface design.
>
> I'm afraid there are no obvious, "clean" solutions to your  
> questions, but here are some thoughts:
>
>>
>> If I solve a Poisson problem on a unit cube with a strong  
>> anisotropy in the x-direction and use SSOR as a preconditioner  
>> (i.e. an x-line smoother), the tridiagonal solve is very efficient  
>> and the solver converges much faster than just by using the default  
>> Vector/Matrix backends (i.e. a point smoother). The only issues I  
>> still have are:
>>
>> * I have to specify the size of the x-direction, i.e. the block  
>> size as a template parameter to my backends (I'd rather specify it  
>> at runtime) and
>
> I take it that's for the mapping "global flat index" -> "blocked  
> matrix index" in the backend. The problem here is that the backend  
> only exposes
> some static methods and cannot carry any dynamic state (like the  
> block size). Until that changes, I'm afraid the only thing you might  
> be able to
> do is to store the block size on the vector / matrix wrappers (those  
> exposed by the backends) and then read them back in from those  
> objects in
> the access() methods of the backend. Of course, that would require a  
> non-standard constructor that gets passed the block size in addition  
> to
> the GFS / GOS. As I said, it's an ugly hack... ;-)
>
>> * I don't seem to have any control over how the degrees of freedom  
>> are mapped from the grid onto the vector container, for example in  
>> my case all points on an x-line are stored in one block of the  
>> block vector, but I actually want the vertical degrees at a given  
>> (x,y) to be grouped together. I guess I could modify the access()  
>> methods in the backends, but the backends do not know anything  
>> about the grid function space.
>
> Well, what you want is a custom reordering of the IndexSet which  
> belongs to the GridView of the GFS. PDELab just arranges the DOFs in  
> the
> same way the IndexSet does (and sorts different geometry types  
> according to their natural ordering, given by operator<() ). The  
> precise layout
> of the IndexSet actually depends on the grid you used, UG would for  
> example yield a different pattern than what you observe (I take it you
> currently use YaspGrid, which iterates (from inner to outer loop)  
> over x,y,z, giving you the blocking in x-direction). If you want to  
> change that,
> the easiest way to do so will be to determine the correct
> reordering of the IndexSet and permute the entity indices wherever
> they are used. If I
> remember correctly, you probably only need to touch the  
> implementation of the following methods of GridFunctionSpace in  
> gridfunctionspace.hh:
>
> - globalIndices()
> - entityOffset()
> - dataHandleGlobalIndices()
> - update()
>
> There are three different implementations of GridFunctionSpace in  
> that file, you only need to modify the first one in your case.
>
>>
>> Maybe rewriting the backends is also not the right way forward and  
>> the grid function space class needs to be adapted as well?
>
> As I laid out above, I'm afraid you need both...
>
> Steffen
>
>>
>> Thanks,
>>
>> Eike
>>
>> _______________________________________________
>> dune-pdelab mailing list
>> dune-pdelab at dune-project.org
>> http://lists.dune-project.org/mailman/listinfo/dune-pdelab
>
> Steffen Müthing
> Universität Stuttgart
> Institut für Parallele und Verteilte Systeme
> Universitätsstr. 38
> 70569 Stuttgart
> Tel: +49 711 685 88429
> Fax: +49 711 685 88340
> Email: steffen.muething at ipvs.uni-stuttgart.de
>
>
> _______________________________________________
> dune-pdelab mailing list
> dune-pdelab at dune-project.org
> http://lists.dune-project.org/mailman/listinfo/dune-pdelab
>




