[Buildbot-devel] testing progress?

Brian Warner warner-buildbot at lothar.com
Fri Dec 12 17:33:57 UTC 2003


> I noticed that the twisted process/steps seem to have support for 
> parsing (python) unittest results. It looks to me that this parsing can 
> be re-used by different projects that need such parsing. Does it make 
> sense to move this functionality to a generic '(unittest) logfile parse' 
> component of buildbot?

Yes. My plans for that part of the code are:

 write test-result parsers for various standard test suites. Each parser
 should return a dict that maps test case names to a result tuple (a summary
 symbol like PASSED, FAILED, or SKIPPED, plus any text emitted or exceptions
 raised during that particular test case). A strawman sketch of one such
 parser follows this list.

 write some code to display this dictionary on a page linked to by the Test
 step's event box

 have the Builder (or maybe BuilderStatus, or a related class... still
 discussing this with slyphon) track the test results, noticing when there
 are new failures. Each new test failure causes a Problem object to be
 created. The Problem is "resolved" when the tests that caused it start
 passing again.

 associate Problems with the users who committed the changes that created
 them. Then a Nagger object gets to harass the guilty parties until they fix
 their code.
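
Here is the strawman parser sketch promised above. It assumes the verbose
('-v') output format of the stock unittest text runner; the constant names
and the exact tuple shape are just my current thinking, not a settled
interface:

    import re

    PASSED, FAILED, SKIPPED = 'passed', 'failed', 'skipped'

    # matches verbose-mode unittest lines like:
    #   testBanana (twisted.test.test_banana.BananaTestCase) ... ok
    lineRE = re.compile(r'^(\w+) \(([\w.]+)\) \.\.\. (ok|FAIL|ERROR)')
    statusMap = {'ok': PASSED, 'FAIL': FAILED, 'ERROR': FAILED}

    def parseUnittestOutput(text):
        """Map test case names to (summary, text, exception) tuples."""
        results = {}
        for line in text.splitlines():
            m = lineRE.match(line)
            if m:
                method, case, status = m.groups()
                # capturing per-test output and tracebacks is left out here
                results['%s.%s' % (case, method)] = (statusMap[status],
                                                     '', None)
        return results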

So a good test-result parser is central to this feature (which has been on
the todo list since the beginning, but only recently has anyone had time to
devote to it).
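
To make the Problem lifecycle concrete, here is a rough sketch (reusing the
constants from the parser sketch above; all of these names are placeholders,
and where this state should live is exactly the open question):

    class Problem:
        """One newly-failing test; resolved when it passes again."""
        def __init__(self, testName, guilty):
            self.testName = testName
            self.guilty = guilty     # committers for the Nagger to go after
            self.resolved = False

    def updateProblems(problems, oldResults, newResults, committers):
        """Compare two parser result dicts, opening/resolving Problems."""
        for name, result in newResults.items():
            summary = result[0]
            previous = oldResults.get(name, (PASSED, '', None))[0]
            if summary == FAILED and previous != FAILED:
                problems[name] = Problem(name, committers)  # new breakage
            elif summary == PASSED and name in problems:
                problems[name].resolved = True              # fixed again
        return problems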

Note that Twisted uses the 'trial' package for its unit tests, which is a
re-implementation of the standard 'unittest' package with some useful
additional features. It has a slightly different output format, so we'll
probably need separate trial and unittest parsers.

Also note that one experimental 'trial' feature we'll probably use is its
--jelly option, which serializes the test results in the wire format used by
Twisted's RPC layer. This means there's no parser: we just unserialize the
test's stdout and get a list of test results directly. My big hope here is to
collect much more information from exception tracebacks (like local
variables) than you could ever reasonably display on the screen and then
parse. Twisted has some excellent code for HTML-formatting exceptions (used,
when enabled, to display exceptions that occur during a web server request),
which I'd like to use to display exceptions that occur in unit tests.
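
To show why this is appealing, the jelly round trip itself is just this (a
sketch: the framing needed to actually move these bytes across stdout is
left out):

    from twisted.spread import jelly

    # the dict a parser would otherwise have to reconstruct from text
    results = {'testFoo': ('passed', '', None),
               'testBar': ('failed', 'assert 1 == 2', 'Traceback ...')}

    wire = jelly.jelly(results)      # serialize to jelly's s-expression form
    recovered = jelly.unjelly(wire)  # the same dict back, no text parsing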

> On a side note, is it possible to update the status page during tests? 
> (ie. "testing, xxx tests passed, yyyy tests failed, etc). Currently only 
> 'testing' is shown.

Yes, in theory. If the number of tests run/passed/failed can be gleaned from
stdout/stderr while the test is running, then the BuildStep can use
updateCurrentActivity() to change the text inside the little "running tests"
event box.

For ShellCommands, the addStdout() and addStderr() methods are called each
time a chunk of text is available from the child process. The step can parse
these chunks as they arrive and use the counts to update the event box.
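
A real-time Test step might therefore look something like this. This is only
a sketch: the import path and the updateCurrentActivity() arguments are
guesses at the current interfaces, and a real version would have to buffer
partial lines across chunks:

    import re
    from buildbot.process.step import ShellCommand  # path may differ

    class CountingTest(ShellCommand):
        # count result lines like '... ok' / '... FAIL' as they scroll by
        resultRE = re.compile(r'\.\.\. (ok|FAIL|ERROR)')

        def __init__(self, **kwargs):
            ShellCommand.__init__(self, **kwargs)
            self.passed = self.failed = 0

        def addStdout(self, data):
            ShellCommand.addStdout(self, data)
            for m in self.resultRE.finditer(data):
                if m.group(1) == 'ok':
                    self.passed += 1
                else:
                    self.failed += 1
            # the exact arguments here are a guess
            self.updateCurrentActivity(text=['testing',
                                             '%d passed' % self.passed,
                                             '%d failed' % self.failed])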

Trial emits the total number of tests run/passed/failed as the last line of
stdout, so I found it more convenient to wait until the end to do the count.
But if the test output is parseable as it runs, you should be able to get
real-time updates. (And trial --jelly should make this trivial.)

BTW, this should be wired into the Progress class. Each Step declares a list
of metrics along which its progress can be measured. At the moment there are
just two: time (elapsed seconds) and output volume (number of characters
emitted on stdout/stderr). Each time the child process sends us an update, we
re-calculate the current ETA based upon how far each metric has progressed
relative to the last build. (I.e. if the Compile step usually emits 50kb, and
we've emitted 40kb so far, that particular metric thinks we're 80% done.) The
per-metric fractions are averaged (yeah, kinda lame), and the unfinished
remainder is multiplied by the average build time to figure out how many
seconds we've got left to go. Eventually this will also be fed to the status
client, so you can see a row of bar graphs (progress bar widgets) for time,
output, test cases, etc., all marching towards 100%.
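
In code terms, the calculation is roughly this (a simplified sketch; the
real Progress class keeps more state, and the 'tests' metric is the one a
real-time parser would feed):

    def remainingSeconds(current, lastBuild, lastBuildTime):
        """current/lastBuild map metric names ('time', 'output', 'tests')
        to counts; lastBuildTime is the previous build's duration."""
        fractions = [min(current.get(name, 0) / float(total), 1.0)
                     for name, total in lastBuild.items() if total]
        if not fractions:
            return None                         # no history to compare to
        done = sum(fractions) / len(fractions)  # average (yeah, kinda lame)
        return lastBuildTime * (1 - done)       # seconds left to go

    # Compile usually emits 50kb; at 40kb the output metric says 80% done
    eta = remainingSeconds({'output': 40000, 'time': 90},
                           {'output': 50000, 'time': 120}, 120)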

This would be more accurate and more useful with more metrics. "number of
test cases run" is an excellent metric for this purpose. So any real-time
test-results parser should also feed number-of-tests-run into the progress
object.

 -Brian
