[dune-pdelab] nonoverlappingsinglephaseflow

Bernd Flemisch bernd at iws.uni-stuttgart.de
Fri Aug 14 10:33:36 CEST 2009


Hey Felix, dear Dune-pdelab,

thank you for the hint! I got it up and running now.

First of all: Great job! I can run it successfully in parallel on both
ALUGrid and UGGrid, something I managed in dune-disc only for a few
special, uninteresting grids and processor counts. This is really nice!

An interesting side note: the two attached pictures show the
partitioning produced by UGGrid and ALUGrid for a particular grid we
use. As the pictures suggest, the subdomain-wise ILU-preconditioned CG
needs 60 iterations with the UGGrid partitioning, but only 30 with
ALUGrid.

This brings me directly to my real concern and question: solvers. We are
in desperate need of a coarse grid correction. I see three alternatives:
1. Use/adapt the ISTL parallel AMG.
2. Interface to, e.g., PETSc.
3. Develop something on our own.

Can you maybe give me a recommendation and tell me what you already did
or plan to do in this direction?

Thank you! Kind regards
Bernd


> Hi Bernd,
>
> I think the changes to the boundary conditions of problem "C" in the commit for 
> revision 96 probably broke something. However, problem "D" still works fine. So 
> if you just want a parallel example replace
>
> -    std::string problem="C";
> +    std::string problem="D";
>
> in nonoverlappingsinglephaseflow.cc . This problem realizes a simple pressure 
> gradient as the boundary condition, so it should be a good parallel example to 
> start from.
>
> Best Regards,
> Felix
>
>
> On Thursday, 13 August 2009 12:10:50 Bernd Flemisch wrote:
>   
>> Dear Dune-pdelab,
>>
>> I tried to run the example "nonoverlappingsinglephaseflow" without
>> modifying anything. The CG solver does not converge, I get the following
>> output:
>> parallel run on 1 process(es)
>> GridFunctionSpace(static size version):
>>  1 degrees of freedom in codim 2
>> codim=2 offset=0 size=66049
>> WARNING: cannot handle multiple geometry types in static size grid
>> function space
>> /0/ constrained dofs=514
>> === matrix setup 0.416026 s
>> === jacobian assembly 0.372023 s
>> === CGSolver
>> 25001        0.0174095
>> === rate=0.999621, T=124.972, TIT=0.00499867, IT=25001
>>
>> Can you maybe check that and/or tell me from where to start when I want
>> to test dune-pdelab in parallel?
>>
>> Thank you! Kind regards
>> Bernd
>>     
>
> _______________________________________________
> dune-pdelab mailing list
> dune-pdelab at dune-project.org
> http://lists.dune-project.org/mailman/listinfo/dune-pdelab
>
>   


-- 
_____________________________________________________________________

Bernd Flemisch                               phone: +49 711 685 69162
IWS, Universität Stuttgart                   fax:   +49 711 685 60430
Pfaffenwaldring 61                  email: bernd at iws.uni-stuttgart.de
D-70569 Stuttgart                       url: www.iws.uni-stuttgart.de
_____________________________________________________________________


-------------- next part --------------
A non-text attachment was scrubbed...
Name: ug_partition.jpg
Type: image/jpeg
Size: 6168 bytes
Desc: not available
URL: <https://lists.dune-project.org/pipermail/dune-pdelab/attachments/20090814/cbce7692/attachment.jpg>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: alu_partition.jpg
Type: image/jpeg
Size: 6362 bytes
Desc: not available
URL: <https://lists.dune-project.org/pipermail/dune-pdelab/attachments/20090814/cbce7692/attachment-0001.jpg>

