[Buildbot-devel] GSoC: Initial thoughts on the Graphs and Data Charts Project

Prasoon Shukla prasoon92.iitr at gmail.com
Tue Mar 17 14:17:30 UTC 2015


I have been reading Dustin's last post, and I agree with the proposed usage.
I do have a few questions, though.

1. I am unsure how exactly the metrics module will be used. Right now, I am
thinking of it as merely a delegator: it will take the gathered data, log
it, and pass it on to InfluxDB, roughly like the sketch below. Is this
sufficient? Or should the metrics module do anything more?
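To make that concrete, here is a rough sketch of the delegator idea. The
names (MetricsService, postMetric, the backend's write()) are placeholders
I made up for illustration, not existing Buildbot or InfluxDB API:

from twisted.internet import defer
from twisted.python import log

class MetricsService(object):
    """Takes gathered data points, logs them, and forwards them on."""

    def __init__(self, backend):
        # backend: anything with a write(name, value, context) method,
        # e.g. a thin wrapper around an InfluxDB client (hypothetical).
        self.backend = backend

    @defer.inlineCallbacks
    def postMetric(self, name, value, context=None):
        log.msg("metric %s=%r" % (name, value))
        yield self.backend.write(name, value, context or {})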

2. How will we collect, say, the number of skipped tests?

Dustin wrote:

> Overall, I see the configuration looking something like
> c['services'].append(
>   InfluxDbInjector(
>     influxUrl="...",
>     trackBuildTimes=True,
>     trackStepTimes=['compile', 'test'],
>     trackProperties=['lint_warnings', 'test_skips']
>   ))


Similarly, how will we track the number of linter warnings?
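One simple possibility, sketched under the assumption that the linter emits
flake8-style "file:line:col: CODE message" lines (the pattern below is
tailored to that format and nothing else):

import re

# Matches flake8-style lines such as "module/foo.py:42:1: E302 expected ..."
LINT_LINE = re.compile(r'^\S+:\d+:\d+:\s+[A-Z]\d+', re.MULTILINE)

def count_lint_warnings(stdout):
    return len(LINT_LINE.findall(stdout))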

@tardyp posted a link a while back to buildbot_travis:
https://github.com/isotoma/buildbot_travis/blob/master/buildbot_travis/steps/create_steps.py#L44

That code supports parsing the output of Nose, Trial, and Plone, using
regex-based matching.
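For instance, Trial prints a summary line like "PASSED (skips=2,
successes=40)", so counting skipped tests can be as small as this (the
exact pattern is my assumption about Trial's output format):

import re

TRIAL_SKIPS = re.compile(r'skips=(\d+)')

def count_skipped_tests(stdout):
    # Return the skip count from Trial's summary line, or 0 if absent.
    match = TRIAL_SKIPS.search(stdout)
    return int(match.group(1)) if match else 0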

We could do the same, though it will be hard. There are innumerable testing
frameworks out there, just as there are innumerable linters; we cannot
possibly write regexes to parse the output of all of them.

One possibility is this: first, we create output parsers for all the major
test frameworks and linters. We ask the user to specify the test
framework/linter being used as a build property. Then, if we have a parser
for that framework/linter, we proceed; otherwise, we log that we don't have
a parser for it and disable all metric collection. A sketch of the dispatch
follows below.
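Roughly, the lookup could work like this; the property name
"test_framework", the PARSERS registry, and the parser function are
placeholders for illustration, not existing API:

import re
from twisted.python import log

def parse_trial_output(stdout):
    # Regex-based, as sketched above; returns the collected metrics.
    m = re.search(r'skips=(\d+)', stdout)
    return {'test_skips': int(m.group(1))} if m else {}

PARSERS = {
    'trial': parse_trial_output,
    # 'nose': parse_nose_output, etc., added as we write them
}

def collect_test_metrics(framework, stdout):
    parser = PARSERS.get(framework)
    if parser is None:
        # No parser for this framework: log it and skip metric collection.
        log.msg("no output parser for %r; metric collection disabled"
                % framework)
        return None
    return parser(stdout)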

What do you guys think?
