[Buildbot-devel] differences in test coverage from 0.7.x to 0.8.2?

Dustin J. Mitchell dustin at v.igoro.us
Mon Nov 29 18:56:17 UTC 2010


On Mon, Nov 29, 2010 at 1:39 PM, Axel Hecht <l10n.moz at googlemail.com> wrote:
> That's somewhat clearer, yet I still don't see a concrete plan to regain
> ground on test coverage. As it is, I wouldn't want to try to write patches
> on the 0.8.x code base. Also, I think a good many tests fell through the
> cracks.

I miss my army of minions, too :)

> I don't think so, at least not in terms of tests. I think there are some
> parallels between "pure unit" and xpcshell tests, many of the 0.7.12 tests
> and mochitest, and maybe there's even a parallel for mozmill. To rephrase
> that totally for those on the thread who are not deeply into the Firefox
> testing lingo, both Firefox and buildbot are network-aware, event-driven,
> single-thread-event-loop, callback-based systems. The coding patterns that
> are hard to test or easy to test are rather similar, AFAICT.

My point was that Firefox has a lot more development going on, and
much of it comes from long-term developers rather than from contributed
patches.

> I think that testing overall functionality is key. In particular if we're
> considering buildbot to be a CI platform, there are promises made that
> should be verified. From as simple things as "it runs two builds" to "it
> runs builds in parallel" to "if my change has a property, and my build
> request has a property, the build ends up with ...".

We need to be clear on what those promises are before we can test them,
and we don't have that clarity yet.  You're right, though: once the
promises are clear, they should be rigorously tested.

> Users of the platform can then take tests for functionality they depend on,
> integrate them into their own test suite, and on an update of the platform,
> run their test suite against the new platform and be at least highly
> comfortable that their code is going to work. Or, if the tests fail, they
> can figure out what changes were made to the upstream tests, and how that
> affects their own intergration with the platform.

I feel a lot less comfortable with this, since it makes the tests part
of the interface, and thereby requires a much higher level of quality
and maintenance for those tests.  In an ideal world, yes, but Buildbot
is a long way from supporting this.

While I respect and agree with all of your opinions on this matter, I
do need to push back against the implied "you should" in all of this.
I'm one person, and I have a lot of priorities within Buildbot, and
Buildbot is only one of my work-related priorities, and my work is
only one of my life's priorities.  If the condition of Buildbot's
testing seems broken, then look no further than yourself to find the
person to fix it.

This gets to the larger point I'd like to discuss at the summit:
Buildbot needs more people who are willing to stick with the project
long-term, taking projects like "get Buildbot's test coverage above
80%" from initiation to completion.  Ideas on how to attract and
encourage such contributors are welcome.

Dustin
