[Buildbot-commits] [Buildbot] #2461: Add support for providing and graphing data charts of metrics over time.
Buildbot
nobody at buildbot.net
Sat Mar 2 21:50:52 UTC 2013
#2461: Add support for providing and graphing data charts of metrics over time.
-------------------------+--------------------
 Reporter:  juj          |       Owner:
     Type:  project-idea |      Status:  new
 Priority:  major        |   Milestone:  0.9.+
  Version:  0.8.7p1      |  Resolution:
 Keywords:               |
-------------------------+--------------------
Changes (by dustin):
* keywords: visualization graphing metrics =>
* milestone: undecided => 0.9.+
Old description:
> The project idea description contained too many links, and trac rejects
> it as spam, so find the description of the ticket here instead:
>
> https://dl.dropbox.com/u/40949268/code/buildbot_visualization_projectidea.txt
New description:
This is a feature request I would find extremely useful, and it would
probably also make a good GSoC project: it is a new feature with visually
concrete results, which can make it more compelling and tractable for a
new developer to get interested in.
I maintain a continuous testing architecture for the Emscripten C++->JS
compiler project (https://github.com/kripken/emscripten). It has an
extensive unit testing and benchmarking suite, which prints out many
graphable metrics as part of each run. The buildbot page is openly
accessible at http://clb.demon.fi:8112/waterfall , with more explanation
at the bottom of https://github.com/kripken/emscripten/wiki . While
maintaining the buildbot, I've often wondered about the following:
* How do build times vary over time? Are we optimizing the compiler, or
is it getting more bloated and slowing down over time? Which commits
caused big regressions/improvements?
* How do buildbot task run times vary over time? These are already
logged at the end of stdio with lines like "program finished with exit
code 0 elapsedTime=1060.951319", which we'd like to graph over time (see
the parsing sketch after this list).
* How does the runtime performance of compiled apps change over time?
Emscripten has a runtime benchmark suite that tests runtime performance:
http://clb.demon.fi:8112/builders/ubuntu-emcc-incoming-tests/builds/175/steps/Benchmarks/logs/stdio
. Which commits caused big regressions/improvements?
* How does compiled code size vary over time? There are some test apps
that get built and stored after each commit:
http://clb.demon.fi/dump/emcc/win-emcc-incoming-code-test/ . Which
commits caused big regressions/improvements?
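For concreteness, here is a minimal sketch of pulling the elapsedTime
values out of a stdio log with standard Python. The log format is quoted
from the second bullet above; parse_elapsed_times is just an illustrative
name:

    import re

    # Matches the "elapsedTime=1060.951319" suffix on the
    # "program finished with exit code N" line at the end of stdio.
    ELAPSED_RE = re.compile(r'elapsedTime=([0-9.]+)')

    def parse_elapsed_times(stdio_text):
        """Return all elapsedTime values (seconds) found in a stdio log."""
        return [float(v) for v in ELAPSED_RE.findall(stdio_text)]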
To be able to measure these kinds of quality concerns, visual graphing
could be the answer. Being able to feed custom data fields into a data
store inside buildbot, and having a built-in data grapher integrated into
the buildbot HTTP server to visually compare metrics over time, would be
immensely helpful.
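As a rough sketch of the data-feeding half, assuming Buildbot 0.8.x-era
BuildStep APIs (createSummary and setProperty exist; the step name and
property name here are made up), a custom step could record one metric
per build:

    import re
    from buildbot.steps.shell import ShellCommand

    class BenchmarkStep(ShellCommand):
        """Runs the benchmark suite and records its elapsed time
        as a build property."""

        def createSummary(self, log):
            ShellCommand.createSummary(self, log)
            m = re.search(r'elapsedTime=([0-9.]+)', log.getText())
            if m:
                # A grapher could later read this property per build.
                self.setProperty('benchmark_seconds', float(m.group(1)),
                                 'BenchmarkStep')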
The architecture should be configuration-driven, so that the buildbot
config files can control what data to generate and feed into graphs,
since most of the above data points are specific to the project in
question.
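Purely as a hypothetical illustration of what that configuration could
look like (no MetricGraph class or c['metrics'] key exists in buildbot
today; both are invented for this sketch):

    # Hypothetical API: this only shows how master.cfg might declare
    # which build properties get graphed, and for which builders.
    c['metrics'] = [
        MetricGraph(name='benchmark-time',
                    source_property='benchmark_seconds',
                    builders=['ubuntu-emcc-incoming-tests'],
                    unit='seconds'),
    ]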
Utilizing existing free graphing libraries would be expected; see for
reference:
* http://raphaeljs.com/analytics.html
* https://www.reverserisk.com/ico/ (line graphs)
* etc.
--
Comment:
This is a great idea, but it needs a bit more design work before it can
become a GSoC project. We've found over the years that large-scale design
work isn't suitable for GSoC.
In particular, I'd like to see more detailed information on
* how the data would be gathered from builds
* how the data would be stored
* how users would configure all of this
all within the framework of the 'nine' branch.
--
Ticket URL: <http://trac.buildbot.net/ticket/2461#comment:1>
Buildbot <http://buildbot.net/>
Buildbot: build/test automation