[Dune-devel] [GSoC] Performance testing: more detailed schedule
Miha Čančula
miha at noughmad.eu
Sun Aug 18 15:22:16 CEST 2013
2013/8/17 Christian Engwer <christian.engwer at uni-muenster.de>
> Hi Miha,
>
> it seems you are making good progress with the build-system
> integration :-)
>
I don't think it's going that well. The CMake integration works and is
pretty intuitive (just call add_performance_test(<name> <target>)), but I
still don't know how to do the same with automake. As far as I know, there
is no automake equivalent of add_custom_target() or add_dependencies().
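For reference, the CMake side expands to something along these lines (the perftest.py path and arguments are placeholders, not the actual implementation):

```cmake
# Rough sketch of what add_performance_test might expand to;
# the perftest.py invocation is hypothetical.
macro(add_performance_test name target)
  add_custom_target(${name}
    COMMAND python ${PROJECT_SOURCE_DIR}/perftest.py $<TARGET_FILE:${target}>
    WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR})
  add_dependencies(${name} ${target})
endmacro()
```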
I've been thinking of using intermediate files and a wildcard target
(pattern rule), so that you would add something like

perftest_myexample: myexample myexample_perftest.log

and there would be a general rule

%_perftest.log:
	<run the test here>

I will try such a rule today, unless you have a better suggestion.
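Spelled out, the idea would be roughly this (how the log is produced is a placeholder, and the driver target should probably be phony):

```make
# Hypothetical Makefile fragment; the perftest.py invocation is made up.
%_perftest.log: %
	./perftest.py $< > $@

.PHONY: perftest_myexample
perftest_myexample: myexample myexample_perftest.log
```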
>
> It would be great, to have a short readme in doc, which explains, how
> the tests interact with the buildsystem and what the user is supposed
> to do.
>
Yes, of course. But first the interaction should work :)
>
> Then I have some additional comments...
> - all tests should be performed in the build directory; this is then
>   also the place where results should be stored.
> - it would be convenient to have all results in a separate
>   subdirectory (of the build dir), and then the html files again in a
>   subdirectory.
>
I agree, will make it that way.
> - in order to be able to integrate the perftests with the autobuild
>   services, it is necessary to separate data acquisition and
>   storage. Therefore it is necessary to write some kind of log files
>   which can then be transferred to the server (or is this already
>   integrated?).
>
This is somewhat separated right now. There are separate python modules,
but so far the main script calls them all in sequence.
Log files are generated in any case, and are optionally deleted afterwards.
There shouldn't be problems there.
The flow of data is like this: (measure -> logfile), (logfile -> sqlite
database), (database -> HTML).
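As a sketch of the last two stages (table and column names here are illustrative, not the actual perftest schema):

```python
# Minimal sketch of the logfile -> sqlite and sqlite -> HTML stages;
# the schema and the rendering are made-up examples.
import sqlite3

def store_log(conn, rows):
    """Load parsed logfile rows (test, phase, seconds) into the database."""
    conn.execute("CREATE TABLE IF NOT EXISTS results "
                 "(test TEXT, phase TEXT, seconds REAL)")
    conn.executemany("INSERT INTO results VALUES (?, ?, ?)", rows)

def render_html(conn):
    """Render the database contents as a simple HTML table."""
    cells = "".join(
        "<tr><td>%s</td><td>%s</td><td>%.3f</td></tr>" % row
        for row in conn.execute("SELECT * FROM results ORDER BY test"))
    return "<table>%s</table>" % cells

conn = sqlite3.connect(":memory:")
store_log(conn, [("myexample", "compile", 1.25), ("myexample", "run", 0.4)])
html = render_html(conn)
```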
> - when we have a large set of tests, an overview file would help.
> Ciao
> Christian
>
> On Thu, Aug 15, 2013 at 05:06:39PM +0200, Miha Čančula wrote:
> > Apparently Chrome is more security-conscious than Firefox and refuses to
> > load local files via XMLHttpRequest. If I put it in the same file, then
> > graphs work (at least in Chromium, I don't have any other browsers to
> > check). I just pushed the change.
> >
> > I'm tackling CMake right now; I hope to get at least the compiler and
> > flags today.
> >
> >
> > 2013/8/14 Christian Engwer <christian.engwer at uni-muenster.de>
> >
> > > Hi Miha,
> > >
> > > > 2.) Using the buildsystem to call the measurement routine. I agree,
> > > > this is probably the best way to get all the needed compiler
> > > > information. Of course, it should be easy to use, so a single macro
> > > > or target (per measured program) should be used. I will try to do it
> > > > in automake, but would it be ok if I first write a CMake macro, get
> > > > the details worked out, and then do the same with autotools?
> > > > Considering the use of two build systems, this part would be a
> > > > pretty thin wrapper, just getting the compiler info and passing it
> > > > to python.
> > >
> > > This would be viable... in particular this would mean that you create
> > > a suitable ini file during the configure/cmake run.
> > >
> > > > 3.) Split HTML files. I agree, this would be a good idea. I don't
> > > > know if it works for you, but I added table filtering, so you can
> > > > choose to display only Run, only Compile, or all measurements. I
> > > > would keep the total file, but I can also generate type-specific
> > > > ones.
> > > >
> > > > 4.) No graphs in results. I don't know why that should be. Yes, the
> > > > data is in a separate CSV file, but in my testing it was always
> > > > loaded correctly. There might be some errors in handling paths; I'll
> > > > check when I get home.
> > >
> > > I think the problem is due to security reasons... I tried with
> > > chromium and epiphany, which are both webkit-based, and I assume that
> > > the browser does not allow loading local files from JavaScript. As
> > > many users will have Chrome as their browser, we must be able to
> > > handle this, and thus I suggest including the CSV in the HTML file
> > > again.
> > >
> > > > 5.) Automatic testing. The program now parses a configuration file
> > > > (perftest.ini) and runs the tests described in it. This means you
> > > > still have to manually run a command, but it's only one command per
> > > > run. I suppose "make perftest" is easier to remember than
> > > > "perftest.py perftest.ini", so some integration with the buildsystem
> > > > is needed. However, I wouldn't run performance tests automatically
> > > > every time the program is compiled; this would take too much time.
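For illustration, a perftest.ini along these lines is what I have in mind (section and key names are just examples, not the final format):

```ini
# Hypothetical example; the real keys may differ.
[myexample]
command = ./myexample
repetitions = 10
```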
> > >
> > > No, we should not run the test every time. I would expect that you can
> > > enable/disable the automatic data collection. Using a single "make"
> > > target is not really good, as we want to have separate data for all
> > > our compilation units.
> > >
> > > In my vision, we constantly collect data (when enabled, e.g. on the
> > > build server) and then have individual html files per make lib/exe.
> > > Within this html file we can then select which data to look at, using
> > > the filter approach. This will limit the number of files required and
> > > make sure they don't grow too large.
> > >
> > > > 6.) My progress. I added statistics for finding outliers; you may
> > > > notice in the results that some rows have different colors depending
> > > > on their distance from the mean. Currently all points with at least
> > > > 1-sigma deviation are marked, but I don't think that should be the
> > > > case; there are more rigorous definitions of outliers. The
> > > > documentation uses Doxygen, but I haven't converted all the actual
> > > > docstrings yet. The graphs show both memory consumption and time
> > > > spent, and there are separate graphs for compilation and running.
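For reference, the current 1-sigma marking is essentially the following (the sample values are made up; k=1 mirrors the current, probably too aggressive, threshold):

```python
# Flag measurements whose distance from the mean exceeds k standard
# deviations. k=1.0 reproduces the current marking behaviour.
from statistics import mean, stdev

def flag_outliers(samples, k=1.0):
    m, s = mean(samples), stdev(samples)
    return [abs(x - m) > k * s for x in samples]

times = [1.02, 0.98, 1.01, 0.99, 2.5]  # the last run is clearly off
flags = flag_outliers(times, k=1.0)
```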
> > >
> > > I guess I will see this in detail, once the data is included again :-)
> > >
> > > Christian
> > >
>
> --
> Prof. Dr. Christian Engwer
> Institut für Numerische und Angewandte Mathematik
> Fachbereich Mathematik und Informatik der Universität Münster
> Einsteinstrasse 62
> 48149 Münster
>
> E-Mail christian.engwer at uni-muenster.de
> Telefon +49 251 83-35067
> FAX +49 251 83-32729
>