[Buildbot-commits] [Buildbot] #2461: Add support for providing and graphing data charts of build statistics over time. (was: Add support for providing and graphing data charts of metrics over time.)
Buildbot trac
trac at buildbot.net
Sun Mar 31 23:43:10 UTC 2013
#2461: Add support for providing and graphing data charts of build statistics over
time.
-------------------------+--------------------
 Reporter:  juj          |      Owner:
     Type:  project-idea |     Status:  new
 Priority:  major        |  Milestone:  0.9.+
  Version:  0.8.7p1      | Resolution:
 Keywords:               |
-------------------------+--------------------
Description changed by dustin:
New description:
This is a feature request I would find extremely useful, and it would
probably also be a good one for GSoC, since it is a new feature with
visually concrete results, which can make it more compelling and tractable
for a new developer to get interested in.
I maintain a continuous testing architecture for the Emscripten C++->JS
compiler project (https://github.com/kripken/emscripten). It has an
extensive unit testing and benchmarking suite, which prints out a lot of
graphable metrics as the result of its run. The buildbot page is openly
accessible at http://clb.demon.fi:8112/waterfall . More explanation is
available at the bottom of https://github.com/kripken/emscripten/wiki .
While maintaining the buildbot, I've often wondered about the following:
* How do build times vary over time? Are we optimizing the compiler, or
is it getting more bloated and slowing down over time? Which commits
caused big regressions/improvements?
* How do buildbot task run times vary over time? These are already
logged at the end of stdio with lines like "program finished with exit code
0 elapsedTime=1060.951319", which we'd like to graph over time (a rough
extraction sketch follows this list).
* How does runtime execution performance of compiled apps change over
time? Emscripten has a runtime benchmark tester that measures runtime
performance:
http://clb.demon.fi:8112/builders/ubuntu-emcc-incoming-tests/builds/175/steps/Benchmarks/logs/stdio .
Which commits caused big regressions/improvements?
* How does compiled code size vary over time? There are some test apps
that get built and stored after each commit:
http://clb.demon.fi/dump/emcc/win-emcc-incoming-code-test/ . Which commits
caused big regressions/improvements?
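For the task-run-time case above, even the existing 0.8.x step API seems
enough to get the raw numbers out: a custom ShellCommand could scrape the
elapsedTime line from the stdio headers and stash the value as a build
property. A minimal sketch, assuming the 0.8.x createSummary()/setProperty()
hooks (the class name, property name and regex are only illustrative):

    from buildbot.steps.shell import ShellCommand
    import re

    class TimedTests(ShellCommand):
        """Run the test suite and record its elapsed time as a build property."""
        name = "timed-tests"

        def createSummary(self, log):
            # The "program finished with exit code 0 elapsedTime=1060.951319"
            # line sits in the log's header channel, so include headers here.
            match = re.search(r"elapsedTime=([\d.]+)", log.getTextWithHeaders())
            if match:
                self.setProperty("test_elapsed_time", float(match.group(1)),
                                 "TimedTests")

A step like that would drop into the existing build factories easily
enough; what is missing is a sensible place to put the numbers afterwards
and a way to chart them, which is what this ticket asks for.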
To be able to measure these kinds of quality concerns, visual graphing
could be the answer. Being able to feed custom data fields into a data
store inside buildbot, and having a built-in data grapher integrated into
the buildbot HTTP server to visually compare metrics over time, would be
immensely helpful.
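As a rough illustration of the HTTP-server half, something along the lines
of the sketch below (the names are invented, and hooking the resource into
the existing web status is glossed over) could already expose one per-build
property as JSON for a client-side line-graph library, such as the ones
listed at the end of this description, to plot:

    import json
    from twisted.web import resource

    class MetricJson(resource.Resource):
        """Sketch: serve one build property for recent builds of a single
        builder as JSON, for a client-side grapher to plot."""
        isLeaf = True

        def __init__(self, builder_status, prop_name, num_builds=50):
            resource.Resource.__init__(self)
            self.builder_status = builder_status
            self.prop_name = prop_name
            self.num_builds = num_builds

        def render_GET(self, request):
            points = []
            for build in self.builder_status.generateFinishedBuilds(
                    num_builds=self.num_builds):
                value = build.getProperties().getProperty(self.prop_name)
                if value is not None:
                    points.append({"build": build.getNumber(),
                                   "value": value})
            request.setHeader("Content-Type", "application/json")
            return json.dumps(points)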
The architecture should be configuration-driven, so that the buildbot
config files can control what data to generate and feed into graphs, since
most of the above data points are specific to the project in question.
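To make that concrete, the master.cfg side could perhaps look something
like the sketch below. Nothing in it exists in buildbot today: the
GraphedMetric record and the c['graphs'] key are purely invented to show
the shape of configuration I have in mind (the builder names are the real
ones from the URLs above):

    # Purely hypothetical master.cfg fragment -- no such API exists yet.
    from collections import namedtuple

    GraphedMetric = namedtuple("GraphedMetric",
                               "name builder source title chart")

    c['graphs'] = [    # 'graphs' is an invented configuration key
        GraphedMetric(name="test_elapsed_time",
                      builder="ubuntu-emcc-incoming-tests",
                      source="property",    # could also be a log regex
                      title="Unit test run time (s)",
                      chart="line"),
        GraphedMetric(name="compiled_code_size",
                      builder="win-emcc-incoming-code-test",
                      source="property",
                      title="Compiled code size (bytes)",
                      chart="line"),
    ]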
Utilizing existing free graphing solutions would be expected; see for
reference:
* http://raphaeljs.com/analytics.html
* https://www.reverserisk.com/ico/ (line graphs)
* etc.
--
Ticket URL: <http://trac.buildbot.net/ticket/2461#comment:2>
Buildbot <http://buildbot.net/>
Buildbot: build/test automation