Hi Markus,

Many thanks for your explanations. I'm not sure I understand everything, so maybe you can confirm whether the following is right, or whether I have got it completely wrong:

Without ParMETIS, each processor coarsens its local region, and the data is only pulled together onto one processor to solve the problem on the coarsest level. I can see that this causes all-to-all communication, so it might be a problem for parallel scalability? (When I say my code stopped scaling, I have not looked in detail at what causes the problem, so this is just a wild guess.)

I presume that in my case, even if each process has only one dof left (or possibly more, if the lower limit is 2000), the agglomerated problem that is solved on one processor still has 32768 dofs, whereas on smaller processor counts it was much smaller?

If ParMETIS is installed, then as soon as the lower limit of dofs per process is reached, the data is pulled together onto a smaller number of processes, each of which then has more than 2000 dofs. Coarsening continues until the dofs per process fall below 2000 again, and this is repeated until we end up with a single process holding fewer than 2000 dofs. The size of the problem solved on the coarsest level therefore does not grow with the process count. ParMETIS (or rather the METIS subroutines inside it, which makes me believe I do not need to install METIS in addition to ParMETIS) is used to work out the best way of pulling the data together, i.e. to partition the problem between a decreasing number of processors on the coarser levels.

If I switch off data agglomeration, what happens at the coarsest level?

The next thing I'm going to do is run some tests on our smaller machine to see what impact the use of ParMETIS and SuperLU has on up to 512 cores.

Thanks a lot,

Eike
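For reference, the coarsen target and the data-agglomeration behaviour discussed in this thread are set on the AMG coarsening criterion. The following is a minimal sketch only, assuming the dune-istl paamg parameter interface; the typedefs are illustrative and the exact setter and enum names should be checked against the installed dune-istl release:

  // Sketch only: verify names against the installed dune-istl (paamg) release.
  typedef Dune::BCRSMatrix<Dune::FieldMatrix<double,1,1> > Matrix;
  typedef Dune::Amg::CoarsenCriterion<
            Dune::Amg::SymmetricCriterion<Matrix,Dune::Amg::FirstDiagonal> > Criterion;

  Criterion criterion;
  criterion.setCoarsenTarget(2000);   // stop once a process holds <= 2000 dofs (the default mentioned below)
  criterion.setMinCoarsenRate(1.2);   // the rate whose breakdown triggers the "1<1.2" warning
  // Agglomeration of coarse-level data (assumed enum values):
  //   Dune::Amg::atOnceAccu     - accumulate onto one master process (fallback without ParMETIS)
  //   Dune::Amg::successiveAccu - accumulate onto successively fewer processes (needs ParMETIS)
  //   Dune::Amg::noAccu         - switch data agglomeration off
  criterion.setAccumulate(Dune::Amg::successiveAccu);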
On 28 Feb 2012, at 16:54, Markus Blatt wrote:

> Hi,
>
> I will produce some TOFU here (text above, full quote below).
>
> The coarsening in our AMG method is decoupled. That is, every process coarsens its own region, and no aggregation can take place across process boundaries.
>
> If you do not have ParMETIS installed, we coarsen until we reach the coarsen target (which defaults to 2000 dofs) or until we cannot coarsen any more. In your 32K case every process only has 1 unknown left. Then we agglomerate all the data on one master process and solve that system.
>
> I would recommend installing ParMETIS. However, because we had a lot of trouble with ParMETIS on large core counts, we use the METIS routines for computing the data agglomeration (we use the METIS methods provided with ParMETIS).
>
> If your coarse-level system can be solved with BiCGSTAB preconditioned by your smoother, you do not need to install SuperLU. Otherwise you should.
>
> BTW: If you think that you do not need to agglomerate the data, there is the possibility to switch it off.
>
> Cheers,
>
> Markus
>
> On Fri, Feb 24, 2012 at 04:00:48PM +0000, Eike Mueller wrote:
>> I have now started some highly parallel runs on HECToR, where my first goal is to get the solver to scale to 65536 cores (the maximum available core count in Phase 3 is ~90,000). So far I have done weak scaling runs on 64, 512, 4096 and 32768 cores.
>>
>> I have not tuned anything: I use the ISTL overlapping CG solver backend with the parallel AMG preconditioner (with an SSOR point smoother). I am not using SuperLU to solve the coarse-level problem. On the smaller machine (up to 800 cores) that I have used so far, this already gave quite good results.
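The setup described in the quoted paragraph above (overlapping CG preconditioned by the parallel AMG with an SSOR point smoother) corresponds roughly to the sketch below. The typedefs, the index-set/MPI setup and the exact constructor signatures are assumptions that differ between dune-istl releases, so treat this purely as an illustration:

  // Rough sketch: overlapping parallel CG preconditioned by AMG with a
  // block-SSOR smoother. Verify signatures against the installed dune-istl.
  // Headers needed (roughly): dune/istl/paamg/amg.hh, dune/istl/solvers.hh,
  // dune/istl/schwarz.hh, dune/istl/owneroverlapcopy.hh.
  typedef Dune::BCRSMatrix<Dune::FieldMatrix<double,1,1> > Matrix;
  typedef Dune::BlockVector<Dune::FieldVector<double,1> >  Vector;
  typedef Dune::OwnerOverlapCopyCommunication<int,int>     Comm;
  typedef Dune::OverlappingSchwarzOperator<Matrix,Vector,Vector,Comm> Operator;
  typedef Dune::BlockPreconditioner<Vector,Vector,Comm,
            Dune::SeqSSOR<Matrix,Vector,Vector> >          Smoother;
  typedef Dune::Amg::CoarsenCriterion<
            Dune::Amg::SymmetricCriterion<Matrix,Dune::Amg::FirstDiagonal> > Criterion;
  typedef Dune::Amg::AMG<Operator,Vector,Smoother,Comm>    AMG;

  Comm comm(MPI_COMM_WORLD);            // parallel index-set setup omitted here
  Operator op(A, comm);                 // A: assembled, distributed BCRSMatrix

  Dune::Amg::SmootherTraits<Smoother>::Arguments smootherArgs;
  smootherArgs.iterations = 1;          // one SSOR sweep per pre-/post-smoothing step
  smootherArgs.relaxationFactor = 1.0;

  Criterion criterion;                  // coarsen target, accumulation mode etc. as in the sketch further up
  AMG amg(op, criterion, smootherArgs, comm);

  Dune::OverlappingSchwarzScalarProduct<Vector,Comm> sp(comm);
  Dune::CGSolver<Vector> cg(op, sp, amg, 1e-8, 500,
                            comm.communicator().rank() == 0 ? 1 : 0);
  Dune::InverseOperatorResult stat;
  cg.apply(x, b, stat);                 // x, b: distributed solution and right-hand side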
>> Basically, as compute time on HECToR is expensive, I would be interested to hear whether anybody already has experience with the ideal setup of the parallel AMG for very large core counts, which I could use as a starting point.
>>
>> The two main questions are:
>>
>> * Will using SuperLU help (or be essential)?
>> * Will using ParMETIS help (or be essential), and do I need to use METIS in addition to ParMETIS, or will ParMETIS alone be enough?
>>
>> The first three runs (on 64, 512 and 4096 cores) look OK, with the time per iteration increasing from 0.6s to 0.65s to 1.1s between 64, 512 and 4096 cores (and on 8 cores I get 0.59s). The 32768-core run does not complete within 10 minutes, but it does get to the point where it has built the coarse grid matrices. This, however, takes 48.7s instead of the 6.5s on 4096 cores, so it has effectively stopped scaling, as 48.7/6.5 is not very far from 8.
>>
>> In the largest run I use 4096 x 4096 x 1024 = 1.8E10 degrees of freedom.
>>
>> I observed that for the 4096- and 32768-core runs I get this warning message:
>> 'Stopped coarsening because of rate breakdown 32768/32768=1<1.2
>> and the hierarchy is built up to 9 level only.'
>> I guess this is potentially a problem if I do not use SuperLU.
>>
>> I have not compiled with ParMETIS support, which is why I get this message as well:
>> 'Successive accumulation of data on coarse levels only works with
>> ParMETIS installed. Fell back to accumulation to one domain on
>> coarsest level'
>>
>> Thank you very much for any ideas,
>>
>> Eike
>
> --
> Do you need more support with DUNE or HPC in general?
>
> Dr. Markus Blatt - HPC-Simulation-Software & Services  http://www.dr-blatt.de
> Rappoltsweilerstr. 5, 68229 Mannheim, Germany
> Tel.: +49 (0) 160 97590858  Fax: +49 (0) 322 1108991658
>
> _______________________________________________
> Dune mailing list
> Dune@dune-project.org
> http://lists.dune-project.org/mailman/listinfo/dune

Dr Eike Mueller
Department of Mathematical Sciences
University of Bath
e.mueller@bath.ac.uk