[Dune-devel] [GSoC] Performance testing: more detailed schedule

Christian Engwer christian.engwer at uni-muenster.de
Wed Aug 14 14:06:26 CEST 2013


Hi Miha,

> 2.) Using the buildsystem to call the measurement routine. I agree, this is
> probably the best way to get all the needed compiler information. Of
> course, it should be easy to use, so a single macro or target (per measured
> program) should be used. I will try to do it in automake, but would it be
> ok if I first write a CMake macro, get the details worked out, and then do
> the same with autotools? Considering the use of two build systems, I think
> this part would be a pretty thin wrapper, just getting the compiler info and
> passing it to Python.

This would be viable... in particular, it would mean that you create
a suitable ini file during the configure/CMake run.
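A minimal sketch of how the configure step could record compiler information into such an ini file, written here in Python for illustration — the section and key names are assumptions, not an existing perftest.ini schema:

```python
import configparser

# Hypothetical compiler details, as the build system (CMake/automake)
# could pass them in during the configure run.
compiler_info = {
    "cxx": "g++",
    "cxx_version": "4.8.1",
    "cxx_flags": "-O3 -DNDEBUG",
}

# Write them into an ini file for the measurement script.
config = configparser.ConfigParser()
config["compiler"] = compiler_info
with open("perftest.ini", "w") as f:
    config.write(f)

# The Python measurement script can later read the same file back.
check = configparser.ConfigParser()
check.read("perftest.ini")
print(check["compiler"]["cxx_flags"])
```

The thin build-system wrapper then only has to substitute the real compiler variables into this file and invoke the Python script.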

> 3.) Split HTML files. I agree, this would be a good idea. I don't know if
> it works for you, but I added table filtering, so you can choose to only
> display Run, Compile or all measurements. I would keep the total file, but
> I can also generate type-specific ones.
> 
> 4.) No graphs in results. I don't know why that should be. Yes, the data
> is in a separate CSV file, but in my testing, it was always loaded
> correctly. There might be some errors in handling paths, I'll check when I
> get home.

I think the problem is due to security restrictions... I tried with
Chromium and Epiphany, which are both WebKit-based, and I assume that
the browser does not allow JavaScript to load local files. As many
users will have Chrome as their browser, we must be able to handle
this, and thus I suggest embedding the CSV into the HTML file again.
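One way to do that is to generate the report with the CSV inlined in a script block that the page's JavaScript reads directly, so no local file ever has to be fetched. A rough sketch, where the file names, sample data, and markup are assumptions rather than the actual report generator:

```python
# Sketch: embed the CSV measurements directly into the generated HTML so
# the page's JavaScript never needs to fetch a local file (which
# WebKit-based browsers block for pages opened via file://).
csv_data = "test,time,memory\ncompile,1.23,4096\nrun,0.45,2048"

template = """<html><body>
<table id="results"></table>
<script id="perfdata" type="text/csv">
{data}
</script>
<script>
  // Read the embedded data instead of XHR-loading a separate CSV file:
  var csv = document.getElementById('perfdata').textContent;
</script>
</body></html>"""

html = template.format(data=csv_data)
print("compile,1.23" in html)
```

The graphs and the filterable table can then both be built from the embedded string, and the report stays a single self-contained file.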

> 5.) Automatic testing. The program now parses a configuration file
> (perftest.ini) and runs the tests described in it. This means you still
> have to manually run a command, but it's only one command per run. I
> suppose "make perftest" is easier to remember than "perftest.py
> perftest.ini", so some integration with the buildsystem is needed. However,
> I wouldn't run performance tests automatically every time the program is
> compiled, this would take too much time.

No, we should not run the tests every time. I would expect that you can
enable/disable the automatic data collection. Using a single "make" is
not really good, as we want to have separate data for all our
compilation units.

In my vision, we constantly collect data (when enabled, e.g. on the
build server) and then have individual HTML files per make
lib/exe. Within such an HTML file we can then select which data to look
at, using the filter approach. This will limit the number of files
required and make sure they don't grow too large.

> 6.) My progress. I added statistics for finding outliers; you may notice
> that in the results some rows have different colors depending on their
> distance from the mean. Currently all points with at least 1-sigma deviation
> are marked, but I don't think that should be the case. There are more
> rigorous definitions of outliers. The documentation uses Doxygen,
> but I haven't converted all the actual docstrings yet. The graphs show both
> memory consumption and time spent, and there are separate graphs for
> compilation and running.
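For reference, flagging rows beyond a configurable k-sigma threshold could look like the following sketch. The function name and sample data are made up; k=1 reproduces the current 1-sigma marking, while a larger k is closer to the usual outlier conventions:

```python
import statistics

def flag_outliers(samples, k=1.0):
    """Return indices of samples more than k standard deviations
    from the mean; k=1 matches the current 1-sigma marking."""
    mean = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    if sigma == 0:
        return []  # all samples identical, nothing to flag
    return [i for i, x in enumerate(samples) if abs(x - mean) > k * sigma]

# Example run times in seconds; only the last one is clearly off.
times = [1.0, 1.0, 1.0, 1.0, 2.0]
print(flag_outliers(times, k=1.5))  # → [4]
```

Making k a parameter means the stricter 2- or 3-sigma conventions can be tried without changing the marking code.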

I guess I will see this in detail, once the data is included again :-)

Christian




More information about the Dune-devel mailing list