[Buildbot-devel] GSoC: Initial thoughts on the Graphs and Data Charts Project

Mikhail Sobolev mss at mawhrin.net
Tue Mar 17 18:17:52 UTC 2015


Hi Prasoon,

On Tue, Mar 17, 2015 at 07:47:30PM +0530, Prasoon Shukla wrote:
>    I have been reading Dustin's last post. I agree on the usage. I do have a
>    few questions though.
>    1. I am conflicted on how exactly the metrics module will be used. Right
>    now, I am thinking of it as merely a delegator - it will take the gathered
>    data, log it and pass it to influx. Is this sufficient? Or should I do
>    anything additional with the metrics module?
First, you need to think about how the data produced through those classes
will end up in any external storage.
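
For illustration only, the storage end could be as simple as this (a
minimal sketch, assuming the influxdb Python client; the exact client
version and point format are details to be checked):

    # minimal sketch, not a design: pushing one data point into
    # InfluxDB using the influxdb Python client
    from influxdb import InfluxDBClient

    client = InfluxDBClient('localhost', 8086, database='buildbot')
    client.write_points([{
        'measurement': 'build_times',       # what is being measured
        'tags': {'builder': 'runtests'},    # where it came from
        'fields': {'value': 93.4},          # the actual number
    }])

The interesting part is everything before that call: where the number
comes from, and who makes the call.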

Second, _you_ should think about whether anything is missing from the module.

>    2. How will we collect, say, the number of skipped tests?
That's basically one of the points of this project: to understand, design
and implement a way for certain data provided by various components to
appear as a metric, which is then collected and stored.
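
Note that Buildbot already ships a metrics module
(buildbot/process/metrics.py); whether logging an event through it is the
right entry point is part of the design work.  A sketch, assuming those
classes:

    # sketch: a component reports a number through the existing
    # metrics classes, e.g. a test step that just saw 3 skipped tests
    from buildbot.process.metrics import MetricCountEvent

    MetricCountEvent.log('tests.skipped', count=3)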

>    Dustin wrote:
> 
>      Overall, I see the configuration looking something like
>      c['services'].append(
>        InfluxDbInjector(
>          influxUrl="...",
>          trackBuildTimes=True,
>          trackStepTimes=['compile', 'test'],
>          trackProperties=['lint_warnings', 'test_skips']
>        ))
> 
>    Similarly, how will we track the number of linter warnings?
As I said, this is the part that needs to be understood, designed and
implemented in the project.
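
To give the discussion something to poke at, here is one possible shape of
the injector (a rough sketch, assuming nine-style service and
message-queue APIs; the names come from Dustin's sketch, and error
handling, the property lookup and the actual InfluxDB write are left out):

    from buildbot.util.service import BuildbotService

    class InfluxDbInjector(BuildbotService):

        def reconfigService(self, influxUrl=None, trackProperties=None,
                            **kwargs):
            self.influxUrl = influxUrl
            self.trackProperties = trackProperties or []

        def startService(self):
            BuildbotService.startService(self)
            # get told about every finished build
            self.consumer = self.master.mq.startConsuming(
                self.buildFinished, ('builds', None, 'finished'))

        def buildFinished(self, key, build):
            # fetch the tracked properties for this build and write
            # one point per property to InfluxDB
            pass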

>    @tardyp posted a link a while back for buildbot_travis:
>    https://github.com/isotoma/buildbot_travis/blob/master/buildbot_travis/steps/create_steps.py#L44
>    The code there supports parsing the output of Nose, Trial and Plone;
>    for this, they've used regex-based matching.
>    We can do this as well, though it will be hard.
If you think something is "hard", please provide a rationale; that will
help others follow your reasoning and offer additional information and/or
a different point of view, should that be necessary.

For example, I do not see this to be hard :)

> There are innumerable testing frameworks out there just as there are
> innumerable linters. We cannot possibly make regexes to parse the output of
> all of them.
True.  However, there are already build steps that do that, so the
question is how the data they produce turns into a metric.
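
For example (a sketch only; the base-class details would need checking),
a step derived from WarningCountingShellCommand already counts warnings,
so re-publishing that count under the name the injector tracks could be
all that's needed:

    from buildbot.steps.shell import WarningCountingShellCommand

    class LintStep(WarningCountingShellCommand):
        warningPattern = '.*: warning: .*'  # whatever the linter emits

        def createSummary(self, log):
            # the base class parses the log and sets self.warnCount
            WarningCountingShellCommand.createSummary(self, log)
            # re-publish under the name the injector is told to track
            self.setProperty('lint_warnings', self.warnCount, 'LintStep')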

>    One possibility is this: First, we'll create test-output parsers for all
>    major test frameworks and linters. We'll ask the user to specify the test
>    framework/linter being used as a build property. Then, if we have a parser
>    for the test framework/linter, we can continue. Otherwise, we'll log that
>    we don't have a parser for the test framework and disable all metric
>    collection.
I'd say that test-output parsers are out of scope for this project.  For
the frameworks Buildbot supports, we should be able to use that data
already and turn it into a metric.  For unsupported ones we do nothing,
but we describe a way to turn _that_ data into a metric, should somebody
implement support for a new framework.
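
That description could be as short as "extract the number yourself and
hand it over under the agreed name".  A sketch (the framework, its output
and the parsing are all made up; only the setProperty call matters):

    from buildbot.steps.shell import ShellCommand

    class FancyTestStep(ShellCommand):
        command = ['fancytest', '--run-all']

        def commandComplete(self, cmd):
            out = cmd.logs['stdio'].getText()
            skipped = out.count('SKIPPED')  # framework-specific parsing
            self.setProperty('test_skips', skipped, 'FancyTestStep')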

-- 
Misha (who's sorry for replying to the last message first)



