From warner at users.sourceforge.net Tue Jul 5 19:56:45 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 05 Jul 2005 19:56:45 +0000 Subject: [Buildbot-commits] buildbot/debian .cvsignore,1.1,NONE Message-ID: Update of /cvsroot/buildbot/buildbot/debian In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv23684/debian Removed Files: .cvsignore Log Message: remove leftover debian/.cvsignore file --- .cvsignore DELETED --- From warner at users.sourceforge.net Tue Jul 5 19:56:27 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 05 Jul 2005 19:56:27 +0000 Subject: [Buildbot-commits] buildbot ChangeLog,1.461,1.462 Message-ID: Update of /cvsroot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv23684 Modified Files: ChangeLog Log Message: remove leftover debian/.cvsignore file Index: ChangeLog =================================================================== RCS file: /cvsroot/buildbot/buildbot/ChangeLog,v retrieving revision 1.461 retrieving revision 1.462 diff -u -d -r1.461 -r1.462 --- ChangeLog 18 Jun 2005 03:35:25 -0000 1.461 +++ ChangeLog 5 Jul 2005 19:56:24 -0000 1.462 @@ -1,3 +1,7 @@ +2005-07-05 Brian Warner + + * debian/.cvsignore: oops, missed one. Removing leftover file. 
+ 2005-06-17 Brian Warner * buildbot/test/test_vc.py (VCSupport.__init__): svn --version From warner at users.sourceforge.net Thu Jul 7 08:09:03 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Thu, 07 Jul 2005 08:09:03 +0000 Subject: [Buildbot-commits] buildbot/docs buildbot.texinfo,1.6,1.7 Message-ID: Update of /cvsroot/buildbot/buildbot/docs In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv911/docs Modified Files: buildbot.texinfo Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-230 Creator: Brian Warner fix buildbot.texinfo so it can produce an HTML manual * docs/examples/twisted_master.cfg: update to match current usage * docs/buildbot.texinfo (System Architecture): comment out the image, it doesn't exist yet and just screws up the HTML manual. Index: buildbot.texinfo =================================================================== RCS file: /cvsroot/buildbot/buildbot/docs/buildbot.texinfo,v retrieving revision 1.6 retrieving revision 1.7 diff -u -d -r1.6 -r1.7 --- buildbot.texinfo 18 May 2005 07:49:30 -0000 1.6 +++ buildbot.texinfo 7 Jul 2005 08:08:59 -0000 1.7 @@ -297,7 +297,7 @@ @end smallexample @end ifinfo @ifnotinfo - at image{images/overview} + at c @image{images/overview} @end ifnotinfo The buildmaster is configured and maintained by the ``buildmaster From warner at users.sourceforge.net Thu Jul 7 08:09:14 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Thu, 07 Jul 2005 08:09:14 +0000 Subject: [Buildbot-commits] buildbot ChangeLog,1.462,1.463 Message-ID: Update of /cvsroot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv911 Modified Files: ChangeLog Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-230 Creator: Brian Warner fix buildbot.texinfo so it can produce an HTML manual * docs/examples/twisted_master.cfg: update to match current usage * docs/buildbot.texinfo (System Architecture): comment out the image, it doesn't exist yet and 
just screws up the HTML manual. Index: ChangeLog =================================================================== RCS file: /cvsroot/buildbot/buildbot/ChangeLog,v retrieving revision 1.462 retrieving revision 1.463 diff -u -d -r1.462 -r1.463 --- ChangeLog 5 Jul 2005 19:56:24 -0000 1.462 +++ ChangeLog 7 Jul 2005 08:09:01 -0000 1.463 @@ -1,3 +1,10 @@ +2005-07-07 Brian Warner + + * docs/examples/twisted_master.cfg: update to match current usage + + * docs/buildbot.texinfo (System Architecture): comment out the + image, it doesn't exist yet and just screws up the HTML manual. + 2005-07-05 Brian Warner * debian/.cvsignore: oops, missed one. Removing leftover file. @@ -22,7 +29,7 @@ Fix this by not upcalling to the buggy parent method. Note: twisted-2.0 fixes this, but the function only has 3 lines so it makes more sense to copy it than to try and detect the buggyness - of the parent class. + of the parent class. Fixes SF#1207588. * buildbot/changes/changes.py (Change.branch): doh! Add a class-level attribute to accomodate old Change instances that were From warner at users.sourceforge.net Thu Jul 7 08:09:15 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Thu, 07 Jul 2005 08:09:15 +0000 Subject: [Buildbot-commits] buildbot/docs/examples twisted_master.cfg,1.26,1.27 Message-ID: Update of /cvsroot/buildbot/buildbot/docs/examples In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv911/docs/examples Modified Files: twisted_master.cfg Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-230 Creator: Brian Warner fix buildbot.texinfo so it can produce an HTML manual * docs/examples/twisted_master.cfg: update to match current usage * docs/buildbot.texinfo (System Architecture): comment out the image, it doesn't exist yet and just screws up the HTML manual. 
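The twisted_master.cfg diff that follows starts by extending sys.path so the config file can import helper modules (such as the private.py that holds passwords) from a directory outside the master's basedir, and hoists BuildmasterConfig = c = {} to the top of the file. A minimal, self-contained sketch of that config-file pattern (the support-directory path is the one shown in the diff, used here purely as an example):

```python
# Sketch of the pattern from the twisted_master.cfg diff below:
# extend sys.path so modules living next to (but outside) the
# master's basedir are importable, then build the dictionary the
# buildmaster reads under the name BuildmasterConfig.
import sys

SUPPORT_DIR = '/home/buildbot/BuildBot/support-master'  # example path from the diff
if SUPPORT_DIR not in sys.path:
    sys.path.append(SUPPORT_DIR)

# BuildmasterConfig is the name the buildmaster looks for when it
# evaluates master.cfg; aliasing it to the short name 'c' up front
# (as the diff does) lets the rest of the file fill it in.
BuildmasterConfig = c = {}
c['bots'] = []
c['builders'] = []
```

Defining the alias before the builder definitions, rather than assigning BuildmasterConfig = c at the end of the file, means a syntax error partway through the config still leaves the name bound, which is presumably why the diff moves it up.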
Index: twisted_master.cfg =================================================================== RCS file: /cvsroot/buildbot/buildbot/docs/examples/twisted_master.cfg,v retrieving revision 1.26 retrieving revision 1.27 diff -u -d -r1.26 -r1.27 --- twisted_master.cfg 4 Dec 2004 22:16:59 -0000 1.26 +++ twisted_master.cfg 7 Jul 2005 08:09:12 -0000 1.27 @@ -6,6 +6,9 @@ # http://www.twistedmatrix.com/buildbot/ . Passwords and other secret # information are loaded from a neighboring file called 'private.py'. +import sys +sys.path.append('/home/buildbot/BuildBot/support-master') + import os.path from buildbot import master @@ -22,13 +25,14 @@ import private # holds passwords reload(private) # make it possible to change the contents without a restart +BuildmasterConfig = c = {} + # I set really=False when testing this configuration at home really = True useFreshCVS = False useMaildir = False usePBChangeSource = True -c = {} c['bots'] = [] for bot in private.bot_passwords.keys(): @@ -80,14 +84,12 @@ } builders.append(b1) -f22 = FullTwistedBuildFactory(svnurl, - python="python2.2", processDocs=1) b22 = {'name': "full-2.2", 'slavename': "bot-exarkun", 'builddir': "full2.2", 'factory': FullTwistedBuildFactory(svnurl, python="python2.2", - processDocs=1), + processDocs=0), } builders.append(b22) @@ -102,6 +104,7 @@ python=["python2.3", "-Wall"], # use -Werror soon compileOpts=b23compile_opts, + processDocs=1, runTestsRandomly=1), } builders.append(b23) @@ -127,7 +130,8 @@ 'factory': TwistedDebsBuildFactory(svnurl, python="python2.2"), } -builders.append(b3) +# debuild is offline while we figure out how to build 2.0 .debs from SVN +#builders.append(b3) reactors = ['gtk2', 'gtk', 'qt', 'poll'] b4 = {'name': "reactors", @@ -139,12 +143,14 @@ } builders.append(b4) + b23osx = {'name': "OS-X", - 'slavename': "bot-OSX", - 'builddir': "OSX-full2.3", + 'slavename': "bot-jerub", + 'builddir': "OSX-full2.4", 'factory': TwistedReactorsBuildFactory(svnurl, - python="python2.3", - 
reactors=["default", "cf"], + python="python2.4", + reactors=["default", #"cf", + "threadedselect"], ), } builders.append(b23osx) @@ -157,6 +163,7 @@ compileOpts2=["-c","mingw32"], reactors=["default", "iocp", + "win32", ]), } builders.append(b22w32) @@ -172,14 +179,13 @@ } builders.append(b23bsd) -b23netbsd = {'name': "netbsd", - 'slavename': "bot-netbsd", - 'builddir': "netbsd-full2.3", - 'factory': TwistedReactorsBuildFactory(svnurl, - python="python2.3", - reactors=["default"]), - } -builders.append(b23netbsd) +b24threadless = {'name': 'threadless', + 'slavename': 'bot-threadless', + 'builddir': 'debian-threadless-2.4', + 'factory': TwistedReactorsBuildFactory(svnurl, + python='python', + reactors=['default'])} +builders.append(b24threadless) c['builders'] = builders @@ -199,12 +205,11 @@ channels=["twisted"])) c['debugPassword'] = private.debugPassword -c['interlocks'] = [("do-deb", ["full-2.2"], ["debuild"])] +#c['interlocks'] = [("do-deb", ["full-2.2"], ["debuild"])] if hasattr(private, "manhole"): c['manhole'] = master.Manhole(*private.manhole) c['status'].append(client.PBListener(9936)) m = mail.MailNotifier(fromaddr="buildbot at twistedmatrix.com", - #builders=["quick", "full-2.2", "full-2.3", "full-2.4"], builders=["quick", "full-2.3"], sendToInterestedUsers=True, extraRecipients=["warner at lothar.com"], @@ -214,7 +219,3 @@ c['projectName'] = "Twisted" c['projectURL'] = "http://twistedmatrix.com/" c['buildbotURL'] = "http://twistedmatrix.com/buildbot/" - -# TODO?: services = ["change", "status"] - -BuildmasterConfig = c From warner at users.sourceforge.net Thu Jul 7 22:35:24 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Thu, 07 Jul 2005 22:35:24 +0000 Subject: [Buildbot-commits] site index.html,1.42,1.43 Message-ID: Update of /cvsroot/buildbot/site In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv12294 Modified Files: index.html Log Message: the spamassassin buildbot moved, add a link to netboxblue.com (their buildbot is 
internal-only, but they're kindly providing a twisted buildslave) Index: index.html =================================================================== RCS file: /cvsroot/buildbot/site/index.html,v retrieving revision 1.42 retrieving revision 1.43 diff -u -d -r1.42 -r1.43 --- index.html 9 Jun 2005 00:51:34 -0000 1.42 +++ index.html 7 Jul 2005 22:35:21 -0000 1.43 @@ -94,7 +94,7 @@ and release branches of the main project on several architectures.
  • Justin Mason reports that the SpamAssassin project is running a buildbot too.
  • + href="http://buildbot.spamassassin.org:8010/">buildbot too.
  • Rene Rivera says that the well-known Boost C++ project is moving all their @@ -112,6 +112,10 @@ at the University of Alabama, Birmingham, to maintain their visualization and virtual-environment projects. +
  • Stephen Thorne says that his company, Netbox Blue, uses a buildbot to build and + test their network security appliance.
  • +
  • install a Buildbot today and get your name added here!
  • @@ -134,5 +138,5 @@ align="right" /> -Last modified: Wed Jun 8 17:50:51 PDT 2005 +Last modified: Thu Jul 7 15:33:58 PDT 2005 From warner at users.sourceforge.net Sun Jul 17 23:28:35 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Sun, 17 Jul 2005 23:28:35 +0000 Subject: [Buildbot-commits] buildbot/buildbot/process process_twisted.py,1.37,1.38 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/process In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv17470/buildbot/process Modified Files: process_twisted.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-232 Creator: Brian Warner update twisted buildmaster a bit * buildbot/process/process_twisted.py (TwistedReactorsBuildFactory): change the treeStableTimer to 5 minutes, to match the other twisted BuildFactories, and don't excuse failures in c/qt/win32 reactors any more. * docs/examples/twisted_master.cfg: turn off the 'threadless' and 'freebsd' builders, since the buildslaves have been unavailable for quite a while Index: process_twisted.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/process/process_twisted.py,v retrieving revision 1.37 retrieving revision 1.38 diff -u -d -r1.37 -r1.38 --- process_twisted.py 18 Apr 2005 00:26:56 -0000 1.37 +++ process_twisted.py 17 Jul 2005 23:28:33 -0000 1.38 @@ -88,7 +88,7 @@ self.steps.append(s(BuildDebs, warnOnWarnings=True)) class TwistedReactorsBuildFactory(TwistedBaseFactory): - treeStableTimer = 10*60 + treeStableTimer = 5*60 def __init__(self, svnurl, python="python", compileOpts=[], compileOpts2=[], @@ -118,10 +118,10 @@ for reactor in reactors: flunkOnFailure = 1 warnOnFailure = 0 - if reactor in ['c', 'qt', 'win32']: - # these are buggy, so tolerate failures for now - flunkOnFailure = 0 - warnOnFailure = 1 + #if reactor in ['c', 'qt', 'win32']: + # # these are buggy, so tolerate failures for now + # flunkOnFailure = 0 + # warnOnFailure = 1 
self.steps.append(s(RemovePYCs)) # TODO: why? self.steps.append(s(TwistedTrial, name=reactor, python=python, reactor=reactor, From warner at users.sourceforge.net Sun Jul 17 23:28:35 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Sun, 17 Jul 2005 23:28:35 +0000 Subject: [Buildbot-commits] buildbot ChangeLog,1.463,1.464 Message-ID: Update of /cvsroot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv17470 Modified Files: ChangeLog Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-232 Creator: Brian Warner update twisted buildmaster a bit * buildbot/process/process_twisted.py (TwistedReactorsBuildFactory): change the treeStableTimer to 5 minutes, to match the other twisted BuildFactories, and don't excuse failures in c/qt/win32 reactors any more. * docs/examples/twisted_master.cfg: turn off the 'threadless' and 'freebsd' builders, since the buildslaves have been unavailable for quite a while Index: ChangeLog =================================================================== RCS file: /cvsroot/buildbot/buildbot/ChangeLog,v retrieving revision 1.463 retrieving revision 1.464 diff -u -d -r1.463 -r1.464 --- ChangeLog 7 Jul 2005 08:09:01 -0000 1.463 +++ ChangeLog 17 Jul 2005 23:28:32 -0000 1.464 @@ -1,3 +1,14 @@ +2005-07-17 Brian Warner + + * buildbot/process/process_twisted.py + (TwistedReactorsBuildFactory): change the treeStableTimer to 5 + minutes, to match the other twisted BuildFactories, and don't + excuse failures in c/qt/win32 reactors any more. 
+ + * docs/examples/twisted_master.cfg: turn off the 'threadless' and + 'freebsd' builders, since the buildslaves have been unavailable + for quite a while + 2005-07-07 Brian Warner * docs/examples/twisted_master.cfg: update to match current usage From warner at users.sourceforge.net Sun Jul 17 23:28:35 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Sun, 17 Jul 2005 23:28:35 +0000 Subject: [Buildbot-commits] buildbot/docs/examples twisted_master.cfg,1.27,1.28 Message-ID: Update of /cvsroot/buildbot/buildbot/docs/examples In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv17470/docs/examples Modified Files: twisted_master.cfg Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-232 Creator: Brian Warner update twisted buildmaster a bit * buildbot/process/process_twisted.py (TwistedReactorsBuildFactory): change the treeStableTimer to 5 minutes, to match the other twisted BuildFactories, and don't excuse failures in c/qt/win32 reactors any more. * docs/examples/twisted_master.cfg: turn off the 'threadless' and 'freebsd' builders, since the buildslaves have been unavailable for quite a while Index: twisted_master.cfg =================================================================== RCS file: /cvsroot/buildbot/buildbot/docs/examples/twisted_master.cfg,v retrieving revision 1.27 retrieving revision 1.28 diff -u -d -r1.27 -r1.28 --- twisted_master.cfg 7 Jul 2005 08:09:12 -0000 1.27 +++ twisted_master.cfg 17 Jul 2005 23:28:33 -0000 1.28 @@ -177,7 +177,7 @@ "kqueue", ]), } -builders.append(b23bsd) +#builders.append(b23bsd) b24threadless = {'name': 'threadless', 'slavename': 'bot-threadless', @@ -185,7 +185,7 @@ 'factory': TwistedReactorsBuildFactory(svnurl, python='python', reactors=['default'])} -builders.append(b24threadless) +#builders.append(b24threadless) c['builders'] = builders From warner at users.sourceforge.net Tue Jul 19 01:55:23 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 01:55:23 
+0000 Subject: [Buildbot-commits] buildbot/buildbot/test sleep.py,NONE,1.1 test_slavecommand.py,1.14,1.15 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/test In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv10106/buildbot/test Modified Files: test_slavecommand.py Added Files: sleep.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-234 Creator: Brian Warner overhaul ShellCommand timeout/interrupt/cleanup, add tests * buildbot/slave/commands.py (ShellCommand): overhaul error-handling code, to try and make timeout/interrupt work properly, and make win32 happier * buildbot/test/test_slavecommand.py: clean up, stop using reactor.iterate, add tests for timeout and interrupt * buildbot/test/sleep.py: utility for a new timeout test * buildbot/twcompat.py: copy over twisted 1.3/2.0 compatibility code from the local-usebranches branch --- NEW FILE: sleep.py --- #! /usr/bin/python import sys, time delay = int(sys.argv[1]) sys.stdout.write("sleeping for %d seconds\n" % delay) time.sleep(delay) sys.stdout.write("woke up\n") sys.exit(0) Index: test_slavecommand.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_slavecommand.py,v retrieving revision 1.14 retrieving revision 1.15 diff -u -d -r1.14 -r1.15 --- test_slavecommand.py 16 May 2005 08:50:23 -0000 1.14 +++ test_slavecommand.py 19 Jul 2005 01:55:20 -0000 1.15 @@ -1,8 +1,9 @@ # -*- test-case-name: buildbot.test.test_slavecommand -*- from twisted.trial import unittest -from twisted.internet import reactor, defer -from twisted.python import util, runtime +from twisted.internet import reactor +from twisted.python import util, runtime, failure +from buildbot.twcompat import maybeWait noisy = False if noisy: @@ -10,10 +11,11 @@ import sys startLogging(sys.stdout) -import os, re, time, sys +import os, re, sys import signal -from buildbot.slave.commands import SlaveShellCommand +from buildbot.slave import 
commands +SlaveShellCommand = commands.SlaveShellCommand # test slavecommand.py by running the various commands with a fake # SlaveBuilder object that logs the calls to sendUpdate() @@ -21,28 +23,13 @@ def findDir(): # the same directory that holds this script return util.sibpath(__file__, ".") - -class FakeSlaveBuild: - pass class FakeSlaveBuilder: - def __init__(self, d, usePTY): + def __init__(self, usePTY): self.updates = [] - self.failure = None - self.deferred = d self.basedir = findDir() self.usePTY = usePTY - def startBuild(self): - self.build = FakeSlaveBuild() - def commandComplete(self, dummy): - if noisy: print "FakeSlaveBuilder.commandComplete" - self.completed = 1 - self.deferred.callback(0) - def commandFailed(self, failure): - if noisy: print "FakeSlaveBuilder.commandFailed", failure - self.failure = failure - self.deferred.callback(1) def sendUpdate(self, data): if noisy: print "FakeSlaveBuilder.sendUpdate", data self.updates.append(data) @@ -64,40 +51,14 @@ signal.signal(signal.SIGCHLD, self.sigchldHandler) -class Shell(SignalMixin, unittest.TestCase): - usePTY = False +class ShellBase(SignalMixin): def setUp(self): - d = defer.Deferred() - self.builder = FakeSlaveBuilder(d, self.usePTY) - d.addCallback(self.callback) - self.failed = None - self.results = None - - def callback(self, failed): - self.failed = failed - self.results = self.builder.updates - - def doTest(self, commandfactory, args): - builder = self.builder - builder.startBuild() - stepId = None - cmd = commandfactory(builder, stepId, args) - d = cmd.start() - d.addCallbacks(builder.commandComplete, builder.commandFailed) - - timeout = time.time() + 2 - while not (self.results or self.failed) and time.time() < timeout: - reactor.iterate(0.01) - if not (self.results or self.failed): - self.fail("timeout") - if self.failed: - print self.builder.failure - return self.failed + self.builder = FakeSlaveBuilder(self.usePTY) def getfile(self, which): got = "" - for r in self.results: + for r 
in self.builder.updates: if r.has_key(which): got += r[which] return got @@ -126,8 +87,8 @@ self.assertEquals(got, contents) def getrc(self): - self.failUnless(self.results[-1].has_key('rc')) - got = self.results[-1]['rc'] + self.failUnless(self.builder.updates[-1].has_key('rc')) + got = self.builder.updates[-1]['rc'] return got def checkrc(self, expected): got = self.getrc() @@ -136,48 +97,69 @@ def testShell1(self): cmd = sys.executable + " emit.py 0" args = {'command': cmd, 'workdir': '.', 'timeout': 60} - failed = self.doTest(SlaveShellCommand, args) - self.failIf(failed) - self.checkOutput([('stdout', "this is stdout\n"), - ('stderr', "this is stderr\n")]) - self.checkrc(0) + c = SlaveShellCommand(self.builder, None, args) + d = c.start() + expected = [('stdout', "this is stdout\n"), + ('stderr', "this is stderr\n")] + d.addCallback(self._checkPass, expected, 0) + return maybeWait(d) + + def _checkPass(self, res, expected, rc): + self.checkOutput(expected) + self.checkrc(rc) def testShell2(self): - cmd = sys.executable + " emit.py 1" + cmd = [sys.executable, "emit.py", "0"] args = {'command': cmd, 'workdir': '.', 'timeout': 60} - failed = self.doTest(SlaveShellCommand, args) - self.failIf(failed) - self.checkOutput([('stdout', "this is stdout\n"), - ('stderr', "this is stderr\n")]) - self.checkrc(1) + c = SlaveShellCommand(self.builder, None, args) + d = c.start() + expected = [('stdout', "this is stdout\n"), + ('stderr', "this is stderr\n")] + d.addCallback(self._checkPass, expected, 0) + return maybeWait(d) - def testShell3(self): + def testShellRC(self): + cmd = [sys.executable, "emit.py", "1"] + args = {'command': cmd, 'workdir': '.', 'timeout': 60} + c = SlaveShellCommand(self.builder, None, args) + d = c.start() + expected = [('stdout', "this is stdout\n"), + ('stderr', "this is stderr\n")] + d.addCallback(self._checkPass, expected, 1) + return maybeWait(d) + + def testShellEnv(self): cmd = sys.executable + " emit.py 0" args = {'command': cmd, 'workdir': 
'.', 'env': {'EMIT_TEST': "envtest"}, 'timeout': 60} - failed = self.doTest(SlaveShellCommand, args) - self.failIf(failed) - self.checkOutput([('stdout', "this is stdout\n"), - ('stderr', "this is stderr\n"), - ('stdout', "EMIT_TEST: envtest\n"), - ]) - self.checkrc(0) + c = SlaveShellCommand(self.builder, None, args) + d = c.start() + expected = [('stdout', "this is stdout\n"), + ('stderr', "this is stderr\n"), + ('stdout', "EMIT_TEST: envtest\n"), + ] + d.addCallback(self._checkPass, expected, 0) + return maybeWait(d) - def testShell4(self): + def testShellSubdir(self): cmd = sys.executable + " emit.py 0" args = {'command': cmd, 'workdir': "subdir", 'timeout': 60} - failed = self.doTest(SlaveShellCommand, args) - self.failIf(failed) - self.checkOutput([('stdout', "this is stdout in subdir\n"), - ('stderr', "this is stderr\n")]) - self.checkrc(0) + c = SlaveShellCommand(self.builder, None, args) + d = c.start() + expected = [('stdout', "this is stdout in subdir\n"), + ('stderr', "this is stderr\n")] + d.addCallback(self._checkPass, expected, 0) + return maybeWait(d) - def testShellZ(self): + def testShellMissingCommand(self): args = {'command': "/bin/EndWorldHungerAndMakePigsFly", 'workdir': '.', 'timeout': 10} - failed = self.doTest(SlaveShellCommand, args) - self.failIf(failed) - self.failUnless(self.getrc() != 0) + c = SlaveShellCommand(self.builder, None, args) + d = c.start() + d.addCallback(self._testShellMissingCommand_1) + return maybeWait(d) + def _testShellMissingCommand_1(self, res): + self.failIfEqual(self.getrc(), 0) got = self.getfile('stdout') + self.getfile('stderr') self.failUnless(re.search(r'no such file', got, re.I) # unix or re.search(r'cannot find the path specified', @@ -188,12 +170,89 @@ "message, got '%s'" % got ) - # todo: interrupt(), kill process + def testTimeout(self): + args = {'command': [sys.executable, "sleep.py", "10"], + 'workdir': '.', 'timeout': 2} + c = SlaveShellCommand(self.builder, None, args) + d = c.start() + 
d.addCallback(self._testTimeout_1) + return maybeWait(d) + def _testTimeout_1(self, res): + self.failIfEqual(self.getrc(), 0) + got = self.getfile('header') + self.failUnlessIn("command timed out: 2 seconds without output", got) + if runtime.platformType == "posix": + # the "killing pid" message is not present in windows + self.failUnlessIn("killing pid", got) + # but the process *ought* to be killed somehow + self.failUnlessIn("process killed by signal", got) + #print got + if runtime.platformType != 'posix': + testTimeout.todo = "timeout doesn't appear to work under windows" + + def testInterrupt1(self): + args = {'command': [sys.executable, "sleep.py", "10"], + 'workdir': '.', 'timeout': 20} + c = SlaveShellCommand(self.builder, None, args) + d = c.start() + reactor.callLater(1, c.interrupt) + d.addCallback(self._testInterrupt1_1) + return maybeWait(d) + def _testInterrupt1_1(self, res): + self.failIfEqual(self.getrc(), 0) + got = self.getfile('header') + self.failUnlessIn("command interrupted", got) + if runtime.platformType == "posix": + self.failUnlessIn("process killed by signal", got) + # todo: twisted-specific command tests +class Shell(ShellBase, unittest.TestCase): + usePTY = False + + def testInterrupt2(self): + # test the backup timeout. This doesn't work under a PTY, because the + # transport.loseConnection we do in the timeout handler actually + # *does* kill the process. + args = {'command': [sys.executable, "sleep.py", "5"], + 'workdir': '.', 'timeout': 20} + c = SlaveShellCommand(self.builder, None, args) + d = c.start() + c.command.BACKUP_TIMEOUT = 1 + # make it unable to kill the child, by changing the signal it uses + # from SIGKILL to the do-nothing signal 0. + c.command.KILL = None + reactor.callLater(1, c.interrupt) + d.addBoth(self._testInterrupt2_1) + return maybeWait(d) + def _testInterrupt2_1(self, res): + # the slave should raise a TimeoutError exception. In a normal build + # process (i.e. 
one that uses step.RemoteShellCommand), this + # exception will be handed to the Step, which will acquire an ERROR + # status. In our test environment, it isn't such a big deal. + self.failUnless(isinstance(res, failure.Failure), + "res is not a Failure: %s" % (res,)) + self.failUnless(res.check(commands.TimeoutError)) + self.checkrc(-1) + return + # the command is still actually running. Start another command, to + # make sure that a) the old command's output doesn't interfere with + # the new one, and b) the old command's actual termination doesn't + # break anything + args = {'command': [sys.executable, "sleep.py", "5"], + 'workdir': '.', 'timeout': 20} + c = SlaveShellCommand(self.builder, None, args) + d = c.start() + d.addCallback(self._testInterrupt2_2) + return d + def _testInterrupt2_2(self, res): + self.checkrc(0) + # N.B.: under windows, the trial process hangs out for another few + # seconds. I assume that the win32eventreactor is waiting for one of + # the lingering child processes to really finish. 
if runtime.platformType == 'posix': # test with PTYs also - class ShellPTY(Shell): + class ShellPTY(ShellBase, unittest.TestCase): usePTY = True From warner at users.sourceforge.net Tue Jul 19 01:55:23 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 01:55:23 +0000 Subject: [Buildbot-commits] buildbot ChangeLog,1.464,1.465 Makefile,1.12,1.13 .arch-inventory,1.3,1.4 Message-ID: Update of /cvsroot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv10106 Modified Files: ChangeLog Makefile .arch-inventory Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-234 Creator: Brian Warner overhaul ShellCommand timeout/interrupt/cleanup, add tests * buildbot/slave/commands.py (ShellCommand): overhaul error-handling code, to try and make timeout/interrupt work properly, and make win32 happier * buildbot/test/test_slavecommand.py: clean up, stop using reactor.iterate, add tests for timeout and interrupt * buildbot/test/sleep.py: utility for a new timeout test * buildbot/twcompat.py: copy over twisted 1.3/2.0 compatibility code from the local-usebranches branch Index: .arch-inventory =================================================================== RCS file: /cvsroot/buildbot/buildbot/.arch-inventory,v retrieving revision 1.3 retrieving revision 1.4 diff -u -d -r1.3 -r1.4 --- .arch-inventory 23 Apr 2005 00:01:22 -0000 1.3 +++ .arch-inventory 19 Jul 2005 01:55:21 -0000 1.4 @@ -4,3 +4,4 @@ junk ^_trial_temp$ junk ^MANIFEST$ junk ^dist$ +junk ^_darcs$ Index: ChangeLog =================================================================== RCS file: /cvsroot/buildbot/buildbot/ChangeLog,v retrieving revision 1.464 retrieving revision 1.465 diff -u -d -r1.464 -r1.465 --- ChangeLog 17 Jul 2005 23:28:32 -0000 1.464 +++ ChangeLog 19 Jul 2005 01:55:21 -0000 1.465 @@ -1,3 +1,15 @@ +2005-07-18 Brian Warner + + * buildbot/slave/commands.py (ShellCommand): overhaul + error-handling code, to try and make 
timeout/interrupt work + properly, and make win32 happier + * buildbot/test/test_slavecommand.py: clean up, stop using + reactor.iterate, add tests for timeout and interrupt + * buildbot/test/sleep.py: utility for a new timeout test + + * buildbot/twcompat.py: copy over twisted 1.3/2.0 compatibility + code from the local-usebranches branch + 2005-07-17 Brian Warner * buildbot/process/process_twisted.py Index: Makefile =================================================================== RCS file: /cvsroot/buildbot/buildbot/Makefile,v retrieving revision 1.12 retrieving revision 1.13 diff -u -d -r1.12 -r1.13 --- Makefile 18 Jun 2005 02:50:39 -0000 1.12 +++ Makefile 19 Jul 2005 01:55:21 -0000 1.13 @@ -8,6 +8,9 @@ else T= endif +ifdef T13 +T=~/stuff/python/twisted/Twisted-1.3.0 +endif PP = PYTHONPATH=$(BBBASE):$(T) .PHONY: test From warner at users.sourceforge.net Tue Jul 19 01:55:23 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 01:55:23 +0000 Subject: [Buildbot-commits] buildbot/buildbot twcompat.py,1.1,1.2 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv10106/buildbot Modified Files: twcompat.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-234 Creator: Brian Warner overhaul ShellCommand timeout/interrupt/cleanup, add tests * buildbot/slave/commands.py (ShellCommand): overhaul error-handling code, to try and make timeout/interrupt work properly, and make win32 happier * buildbot/test/test_slavecommand.py: clean up, stop using reactor.iterate, add tests for timeout and interrupt * buildbot/test/sleep.py: utility for a new timeout test * buildbot/twcompat.py: copy over twisted 1.3/2.0 compatibility code from the local-usebranches branch Index: twcompat.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/twcompat.py,v retrieving revision 1.1 retrieving revision 1.2 diff -u -d 
-r1.1 -r1.2 --- twcompat.py 17 May 2005 10:14:10 -0000 1.1 +++ twcompat.py 19 Jul 2005 01:55:21 -0000 1.2 @@ -1,16 +1,18 @@ -from twisted.python import components +if 0: + print "hey python-mode, stop thinking I want 8-char indentation" """ utilities to be compatible with both Twisted-1.3 and 2.0 -implements. Use this like: - from buildbot.tcompat import implements - class Foo: - if implements: - implements(IFoo) - else: - __implements__ = IFoo, +implements. Use this like the following. + +from buildbot.tcompat import implements +class Foo: + if implements: + implements(IFoo) + else: + __implements__ = IFoo, Interface: from buildbot.tcompat import Interface @@ -21,6 +23,9 @@ assert providedBy(obj, IFoo) """ +from twisted.copyright import version +from twisted.python import components + # does our Twisted use zope.interface? if hasattr(components, "interface"): # yes @@ -33,3 +38,208 @@ implements = None from twisted.python.components import Interface providedBy = components.implements + +# are we using a version of Trial that allows setUp/testFoo/tearDown to +# return Deferreds? +oldtrial = version.startswith("1.3") + +# use this at the end of setUp/testFoo/tearDown methods +def maybeWait(d, timeout="none"): + from twisted.trial import unittest + if oldtrial: + # this is required for oldtrial (twisted-1.3.0) compatibility. When we + # move to retrial (twisted-2.0.0), replace these with a simple 'return + # d'. + if timeout == "none": + unittest.deferredResult(d) + else: + unittest.deferredResult(d, timeout) + return None + return d + +# waitForDeferred and getProcessOutputAndValue are twisted-2.0 things. If +# we're running under 1.3, patch them into place. These versions are copied +# from twisted somewhat after 2.0.1 . 
+ +from twisted.internet import defer +if not hasattr(defer, 'waitForDeferred'): + Deferred = defer.Deferred + class waitForDeferred: + """ + API Stability: semi-stable + + Maintainer: U{Christopher Armstrong} + + waitForDeferred and deferredGenerator help you write + Deferred-using code that looks like it's blocking (but isn't + really), with the help of generators. + + There are two important functions involved: waitForDeferred, and + deferredGenerator. + + def thingummy(): + thing = waitForDeferred(makeSomeRequestResultingInDeferred()) + yield thing + thing = thing.getResult() + print thing #the result! hoorj! + thingummy = deferredGenerator(thingummy) + + waitForDeferred returns something that you should immediately yield; + when your generator is resumed, calling thing.getResult() will either + give you the result of the Deferred if it was a success, or raise an + exception if it was a failure. + + deferredGenerator takes one of these waitForDeferred-using + generator functions and converts it into a function that returns a + Deferred. The result of the Deferred will be the last + value that your generator yielded (remember that 'return result' won't + work; use 'yield result; return' in place of that). + + Note that not yielding anything from your generator will make the + Deferred result in None. Yielding a Deferred from your generator + is also an error condition; always yield waitForDeferred(d) + instead. + + The Deferred returned from your deferred generator may also + errback if your generator raised an exception. + + def thingummy(): + thing = waitForDeferred(makeSomeRequestResultingInDeferred()) + yield thing + thing = thing.getResult() + if thing == 'I love Twisted': + # will become the result of the Deferred + yield 'TWISTED IS GREAT!' 
+ return + else: + # will trigger an errback + raise Exception('DESTROY ALL LIFE') + thingummy = deferredGenerator(thingummy) + + Put succinctly, these functions connect deferred-using code with this + 'fake blocking' style in both directions: waitForDeferred converts from + a Deferred to the 'blocking' style, and deferredGenerator converts from + the 'blocking' style to a Deferred. + """ + def __init__(self, d): + if not isinstance(d, Deferred): + raise TypeError("You must give waitForDeferred a Deferred. You gave it %r." % (d,)) + self.d = d + + def getResult(self): + if hasattr(self, 'failure'): + self.failure.raiseException() + return self.result + + def _deferGenerator(g, deferred=None, result=None): + """ + See L{waitForDeferred}. + """ + while 1: + if deferred is None: + deferred = defer.Deferred() + try: + result = g.next() + except StopIteration: + deferred.callback(result) + return deferred + except: + deferred.errback() + return deferred + + # Deferred.callback(Deferred) raises an error; we catch this case + # early here and give a nicer error message to the user in case + # they yield a Deferred. Perhaps eventually these semantics may + # change. 
+ if isinstance(result, defer.Deferred): + return fail(TypeError("Yield waitForDeferred(d), not d!")) + + if isinstance(result, waitForDeferred): + waiting=[True, None] + # Pass vars in so they don't get changed going around the loop + def gotResult(r, waiting=waiting, result=result): + result.result = r + if waiting[0]: + waiting[0] = False + waiting[1] = r + else: + _deferGenerator(g, deferred, r) + def gotError(f, waiting=waiting, result=result): + result.failure = f + if waiting[0]: + waiting[0] = False + waiting[1] = f + else: + _deferGenerator(g, deferred, f) + result.d.addCallbacks(gotResult, gotError) + if waiting[0]: + # Haven't called back yet, set flag so that we get reinvoked + # and return from the loop + waiting[0] = False + return deferred + else: + result = waiting[1] + + def func_metamerge(f, g): + """ + Merge function metadata from f -> g and return g + """ + try: + g.__doc__ = f.__doc__ + g.__dict__.update(f.__dict__) + g.__name__ = f.__name__ + except (TypeError, AttributeError): + pass + return g + + def deferredGenerator(f): + """ + See L{waitForDeferred}. 
+ """ + def unwindGenerator(*args, **kwargs): + return _deferGenerator(f(*args, **kwargs)) + return func_metamerge(f, unwindGenerator) + + defer.waitForDeferred = waitForDeferred + defer.deferredGenerator = deferredGenerator + +from twisted.internet import utils +if not hasattr(utils, "getProcessOutputAndValue"): + from twisted.internet import reactor, protocol + _callProtocolWithDeferred = utils._callProtocolWithDeferred + try: + import cStringIO as StringIO + except ImportError: + import StringIO + + class _EverythingGetter(protocol.ProcessProtocol): + + def __init__(self, deferred): + self.deferred = deferred + self.outBuf = StringIO.StringIO() + self.errBuf = StringIO.StringIO() + self.outReceived = self.outBuf.write + self.errReceived = self.errBuf.write + + def processEnded(self, reason): + out = self.outBuf.getvalue() + err = self.errBuf.getvalue() + e = reason.value + code = e.exitCode + if e.signal: + self.deferred.errback((out, err, e.signal)) + else: + self.deferred.callback((out, err, code)) + + def getProcessOutputAndValue(executable, args=(), env={}, path='.', + reactor=reactor): + """Spawn a process and returns a Deferred that will be called back + with its output (from stdout and stderr) and it's exit code as (out, + err, code) If a signal is raised, the Deferred will errback with the + stdout and stderr up to that point, along with the signal, as (out, + err, signalNum) + """ + return _callProtocolWithDeferred(_EverythingGetter, + executable, args, env, path, + reactor) + utils.getProcessOutputAndValue = getProcessOutputAndValue From warner at users.sourceforge.net Tue Jul 19 01:55:24 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 01:55:24 +0000 Subject: [Buildbot-commits] buildbot/buildbot/slave commands.py,1.35,1.36 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/slave In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv10106/buildbot/slave Modified Files: commands.py Log Message: Revision: arch at 
buildbot.sf.net--2004/buildbot--dev--0--patch-234 Creator: Brian Warner overhaul ShellCommand timeout/interrupt/cleanup, add tests * buildbot/slave/commands.py (ShellCommand): overhaul error-handling code, to try and make timeout/interrupt work properly, and make win32 happier * buildbot/test/test_slavecommand.py: clean up, stop using reactor.iterate, add tests for timeout and interrupt * buildbot/test/sleep.py: utility for a new timeout test * buildbot/twcompat.py: copy over twisted 1.3/2.0 compatibility code from the local-usebranches branch Index: commands.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/slave/commands.py,v retrieving revision 1.35 retrieving revision 1.36 diff -u -d -r1.35 -r1.36 --- commands.py 17 May 2005 22:19:18 -0000 1.35 +++ commands.py 19 Jul 2005 01:55:21 -0000 1.36 @@ -64,6 +64,12 @@ def connectionMade(self): if self.debug: log.msg("ShellCommandPP.connectionMade") + if not self.command.process: + if self.debug: + log.msg(" assigning self.command.process: %s" % + (self.transport,)) + self.command.process = self.transport + if self.command.stdin: if self.debug: log.msg(" writing to stdin") self.transport.write(self.command.stdin) @@ -107,6 +113,8 @@ # child shell. notreally = False + BACKUP_TIMEOUT = 5 + KILL = "KILL" def __init__(self, builder, command, workdir, environ=None, @@ -133,7 +141,6 @@ self.stdin = stdin self.timeout = timeout self.timer = None - self.interrupted = 0 self.keepStdout = keepStdout # usePTY=True is a convenience for cleaning up all children and @@ -219,10 +226,23 @@ log.msg(" " + msg) self.sendStatus({'header': msg+"\n"}) - self.process = reactor.spawnProcess(self.pp, argv[0], argv, - self.environ, - self.workdir, - usePTY=self.usePTY) + # win32eventreactor's spawnProcess (under twisted <= 2.0.1) returns + # None, as opposed to all the posixbase-derived reactors (which + # return the new Process object). This is a nuisance. 
We can make up + # for it by having the ProcessProtocol give us their .transport + # attribute after they get one. I'd prefer to get it from + # spawnProcess because I'm concerned about returning from this method + # without having a valid self.process to work with. (if kill() were + # called right after we return, but somehow before connectionMade + # were called, then kill() would blow up). + self.process = None + p = reactor.spawnProcess(self.pp, argv[0], argv, + self.environ, + self.workdir, + usePTY=self.usePTY) + # connectionMade might have been called during spawnProcess + if not self.process: + self.process = p # connectionMade also closes stdin as long as we're not using a PTY. # This is intended to kill off inappropriately interactive commands @@ -230,25 +250,20 @@ # enhanced to allow the same childFDs argument that Process takes, # which would let us connect stdin to /dev/null . - if self.timeout: self.timer = reactor.callLater(self.timeout, self.doTimeout) def addStdout(self, data): - if self.interrupted: return if self.sendStdout: self.sendStatus({'stdout': data}) if self.keepStdout: self.stdout += data if self.timer: self.timer.reset(self.timeout) def addStderr(self, data): - if self.interrupted: return if self.sendStderr: self.sendStatus({'stderr': data}) if self.timer: self.timer.reset(self.timeout) def finished(self, sig, rc): log.msg("command finished with signal %s, exit code %s" % (sig,rc)) - if self.interrupted: - return if sig is not None: rc = -1 if self.sendRC: @@ -259,22 +274,38 @@ if self.timer: self.timer.cancel() self.timer = None - self.deferred.callback(rc) + d = self.deferred + self.deferred = None + if d: + d.callback(rc) + else: + log.msg("Hey, command %s finished twice" % self) + + def failed(self, why): + log.msg("ShellCommand.failed: command failed: %s" % (why,)) + if self.timer: + self.timer.cancel() + self.timer = None + d = self.deferred + self.deferred = None + if d: + d.errback(why) + else: + log.msg("Hey, command %s 
finished twice" % self) def doTimeout(self): + self.timer = None msg = "command timed out: %d seconds without output" % self.timeout self.kill(msg) def kill(self, msg): - if not self.process: - msg += ", but there is no current process, finishing anyway" - log.msg(msg) - self.sendStatus({'header': "\n" + msg + "\n"}) - if self.pp: - self.pp.command = None - self.commandFailed(CommandInterrupted("no process to interrupt")) - return - msg += ", killing pid %d" % self.process.pid + # This may be called by the timeout, or when the user has decided to + # abort this build. + if self.timer: + self.timer.cancel() + self.timer = None + if hasattr(self.process, "pid"): + msg += ", killing pid %d" % self.process.pid log.msg(msg) self.sendStatus({'header': "\n" + msg + "\n"}) @@ -285,36 +316,65 @@ # Groups are ideal for this, but that requires # spawnProcess(usePTY=1). Try both ways in case process was # not started that way. - log.msg("trying os.kill(-pid, signal.SIGKILL)") - os.kill(-self.process.pid, signal.SIGKILL) - log.msg(" successful") - hit = 1 + + # the test suite sets self.KILL=None to tell us we should + # only pretend to kill the child. This lets us test the + # backup timer. 
+ + sig = None + if self.KILL is not None: + sig = getattr(signal, "SIG"+ self.KILL, None) + + if self.KILL == None: + log.msg("self.KILL==None, only pretending to kill child") + elif sig is None: + log.msg("signal module is missing SIG%s" % self.KILL) + elif not hasattr(os, "kill"): + log.msg("os module is missing the 'kill' function") + else: + log.msg("trying os.kill(-pid, %d)" % (sig,)) + os.kill(-self.process.pid, sig) + log.msg(" signal %s sent successfully" % sig) + hit = 1 except OSError: # probably no-such-process, maybe because there is no process # group pass if not hit: try: - log.msg("trying process.signalProcess('KILL')") - self.process.signalProcess('KILL') - log.msg(" successful") - hit = 1 + if self.KILL is None: + log.msg("self.KILL==None, only pretending to kill child") + else: + log.msg("trying process.signalProcess('KILL')") + self.process.signalProcess(self.KILL) + log.msg(" signal %s sent successfully" % (self.KILL,)) + hit = 1 except OSError: # could be no-such-process, because they finished very recently pass if not hit: log.msg("signalProcess/os.kill failed both times") - # finished ought to be called momentarily - self.timer = reactor.callLater(5, self.doBackupTimeout) # just in case + + if runtime.platformType == "posix": + # we only do this under posix because the win32eventreactor + # blocks here until the process has terminated, while closing + # stderr. This is weird. + self.pp.transport.loseConnection() + + # finished ought to be called momentarily. Just in case it doesn't, + # set a timer which will abandon the command. + self.timer = reactor.callLater(self.BACKUP_TIMEOUT, + self.doBackupTimeout) def doBackupTimeout(self): - # we tried to kill the process, and it wouldn't die. Finish anyway. - self.sendStatus({'header': "SIGKILL failed to kill process\n"}) + log.msg("we tried to kill the process, and it wouldn't die.." 
+ " finish anyway") self.timer = None - self.pp.command = None # take away its voice - # note, if/when the command finally does complete, an exception will - # be raised as pp tries to send status through .command - self.commandFailed(TimeoutError("SIGKILL failed to kill process")) + self.sendStatus({'header': "SIGKILL failed to kill process\n"}) + if self.sendRC: + self.sendStatus({'header': "using fake rc=-1\n"}) + self.sendStatus({'rc': -1}) + self.failed(TimeoutError("SIGKILL failed to kill process")) class Command: From warner at users.sourceforge.net Tue Jul 19 19:49:38 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 19:49:38 +0000 Subject: [Buildbot-commits] buildbot/buildbot/test test_slavecommand.py,1.15,1.16 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/test In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv31532/buildbot/test Modified Files: test_slavecommand.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-237 Creator: Brian Warner don't use open(mode="wt+") to fix OS-X problem, other small fixes * docs/buildbot.texinfo (@settitle): don't claim version 1.0 * buildbot/changes/mail.py (parseSyncmail): update comment * buildbot/test/test_slavecommand.py: disable Shell tests on platforms that don't suport IReactorProcess * buildbot/status/builder.py (LogFile): remove the 't' mode from all places where we open logfiles. It causes OS-X to open the file in some weird mode that that prevents us from mixing reads and writes to the same filehandle, which we depend upon to implement _generateChunks properly. This change doesn't appear to break win32, on which "b" and "t" are treated differently but a missing flag seems to be interpreted as "t". 
Index: test_slavecommand.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_slavecommand.py,v retrieving revision 1.15 retrieving revision 1.16 diff -u -d -r1.15 -r1.16 --- test_slavecommand.py 19 Jul 2005 01:55:20 -0000 1.15 +++ test_slavecommand.py 19 Jul 2005 19:49:36 -0000 1.16 @@ -1,7 +1,7 @@ # -*- test-case-name: buildbot.test.test_slavecommand -*- from twisted.trial import unittest -from twisted.internet import reactor +from twisted.internet import reactor, interfaces from twisted.python import util, runtime, failure from buildbot.twcompat import maybeWait @@ -252,7 +252,12 @@ # seconds. I assume that the win32eventreactor is waiting for one of # the lingering child processes to really finish. +haveProcess = interfaces.IReactorProcess(reactor, None) if runtime.platformType == 'posix': # test with PTYs also class ShellPTY(ShellBase, unittest.TestCase): usePTY = True + if not haveProcess: + ShellPTY.skip = "this reactor doesn't support IReactorProcess" +if not haveProcess: + Shell.skip = "this reactor doesn't support IReactorProcess" From warner at users.sourceforge.net Tue Jul 19 19:49:38 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 19:49:38 +0000 Subject: [Buildbot-commits] buildbot ChangeLog,1.465,1.466 Message-ID: Update of /cvsroot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv31532 Modified Files: ChangeLog Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-237 Creator: Brian Warner don't use open(mode="wt+") to fix OS-X problem, other small fixes * docs/buildbot.texinfo (@settitle): don't claim version 1.0 * buildbot/changes/mail.py (parseSyncmail): update comment * buildbot/test/test_slavecommand.py: disable Shell tests on platforms that don't support IReactorProcess * buildbot/status/builder.py (LogFile): remove the 't' mode from 
It causes OS-X to open the file in some weird mode that that prevents us from mixing reads and writes to the same filehandle, which we depend upon to implement _generateChunks properly. This change doesn't appear to break win32, on which "b" and "t" are treated differently but a missing flag seems to be interpreted as "t". Index: ChangeLog =================================================================== RCS file: /cvsroot/buildbot/buildbot/ChangeLog,v retrieving revision 1.465 retrieving revision 1.466 diff -u -d -r1.465 -r1.466 --- ChangeLog 19 Jul 2005 01:55:21 -0000 1.465 +++ ChangeLog 19 Jul 2005 19:49:35 -0000 1.466 @@ -1,3 +1,20 @@ +2005-07-19 Brian Warner + + * docs/buildbot.texinfo (@settitle): don't claim version 1.0 + + * buildbot/changes/mail.py (parseSyncmail): update comment + + * buildbot/test/test_slavecommand.py: disable Shell tests on + platforms that don't suport IReactorProcess + + * buildbot/status/builder.py (LogFile): remove the 't' mode from + all places where we open logfiles. It causes OS-X to open the file + in some weird mode that that prevents us from mixing reads and + writes to the same filehandle, which we depend upon to implement + _generateChunks properly. This change doesn't appear to break + win32, on which "b" and "t" are treated differently but a missing + flag seems to be interpreted as "t". 
+ 2005-07-18 Brian Warner * buildbot/slave/commands.py (ShellCommand): overhaul From warner at users.sourceforge.net Tue Jul 19 19:49:38 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 19:49:38 +0000 Subject: [Buildbot-commits] buildbot/buildbot/changes mail.py,1.19,1.20 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/changes In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv31532/buildbot/changes Modified Files: mail.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-237 Creator: Brian Warner don't use open(mode="wt+") to fix OS-X problem, other small fixes * docs/buildbot.texinfo (@settitle): don't claim version 1.0 * buildbot/changes/mail.py (parseSyncmail): update comment * buildbot/test/test_slavecommand.py: disable Shell tests on platforms that don't support IReactorProcess * buildbot/status/builder.py (LogFile): remove the 't' mode from all places where we open logfiles. It causes OS-X to open the file in some weird mode that prevents us from mixing reads and writes to the same filehandle, which we depend upon to implement _generateChunks properly. This change doesn't appear to break win32, on which "b" and "t" are treated differently but a missing flag seems to be interpreted as "t". Index: mail.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/changes/mail.py,v retrieving revision 1.19 retrieving revision 1.20 diff -u -d -r1.19 -r1.20 --- mail.py 18 May 2005 07:49:29 -0000 1.19 +++ mail.py 19 Jul 2005 19:49:36 -0000 1.20 @@ -114,7 +114,9 @@ subject = m.getheader("subject") # syncmail puts the repository-relative directory in the subject: - # "%(dir)s %(file)s,%(oldversion)s,%(newversion)s" + # mprefix + "%(dir)s %(file)s,%(oldversion)s,%(newversion)s", where + # 'mprefix' is something that could be added by a mailing list + # manager. 
# this is the only reasonable way to determine the directory name space = subject.find(" ") if space != -1: From warner at users.sourceforge.net Tue Jul 19 19:49:39 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 19:49:39 +0000 Subject: [Buildbot-commits] buildbot/buildbot/status builder.py,1.59,1.60 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/status In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv31532/buildbot/status Modified Files: builder.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-237 Creator: Brian Warner don't use open(mode="wt+") to fix OS-X problem, other small fixes * docs/buildbot.texinfo (@settitle): don't claim version 1.0 * buildbot/changes/mail.py (parseSyncmail): update comment * buildbot/test/test_slavecommand.py: disable Shell tests on platforms that don't suport IReactorProcess * buildbot/status/builder.py (LogFile): remove the 't' mode from all places where we open logfiles. It causes OS-X to open the file in some weird mode that that prevents us from mixing reads and writes to the same filehandle, which we depend upon to implement _generateChunks properly. This change doesn't appear to break win32, on which "b" and "t" are treated differently but a missing flag seems to be interpreted as "t". Index: builder.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/status/builder.py,v retrieving revision 1.59 retrieving revision 1.60 diff -u -d -r1.59 -r1.60 --- builder.py 23 May 2005 17:45:56 -0000 1.59 +++ builder.py 19 Jul 2005 19:49:36 -0000 1.60 @@ -216,7 +216,7 @@ self.name = name self.filename = logfilename assert not os.path.exists(self.getFilename()) - self.openfile = open(self.getFilename(), "wt+") + self.openfile = open(self.getFilename(), "w+") self.runEntries = [] self.watchers = [] self.finishedWatchers = [] @@ -249,7 +249,7 @@ # don't close it! 
return self.openfile # otherwise they get their own read-only handle - return open(self.getFilename(), "rt") + return open(self.getFilename(), "r") def getText(self): # this produces one ginormous string @@ -427,7 +427,7 @@ pickled LogFile (inside the pickled Build) won't be modified.""" self.filename = logfilename if not os.path.exists(self.getFilename()): - self.openfile = open(self.getFilename(), "wt") + self.openfile = open(self.getFilename(), "w") self.finished = False for channel,text in self.entries: self.addEntry(channel, text) From warner at users.sourceforge.net Tue Jul 19 19:49:37 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 19:49:37 +0000 Subject: [Buildbot-commits] buildbot/docs buildbot.texinfo,1.7,1.8 Message-ID: Update of /cvsroot/buildbot/buildbot/docs In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv31532/docs Modified Files: buildbot.texinfo Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-237 Creator: Brian Warner don't use open(mode="wt+") to fix OS-X problem, other small fixes * docs/buildbot.texinfo (@settitle): don't claim version 1.0 * buildbot/changes/mail.py (parseSyncmail): update comment * buildbot/test/test_slavecommand.py: disable Shell tests on platforms that don't support IReactorProcess * buildbot/status/builder.py (LogFile): remove the 't' mode from all places where we open logfiles. It causes OS-X to open the file in some weird mode that prevents us from mixing reads and writes to the same filehandle, which we depend upon to implement _generateChunks properly. This change doesn't appear to break win32, on which "b" and "t" are treated differently but a missing flag seems to be interpreted as "t". 
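The test-disabling technique mentioned in the log message above can be sketched without Twisted. Here probe_process_support() is a hypothetical stand-in for adapting the reactor via interfaces.IReactorProcess(reactor, None); trial honors a class-level 'skip' string, while stdlib unittest achieves the same effect by raising SkipTest:

```python
import unittest

def probe_process_support():
    """Stand-in for interfaces.IReactorProcess(reactor, None): the real
    call returns an adapter when the reactor can spawn processes, or the
    supplied default (None) when it cannot."""
    return None  # simulate a reactor without process support

have_process = probe_process_support()

class ShellTests(unittest.TestCase):
    def setUp(self):
        # trial reads a class-level .skip string; with stdlib unittest the
        # equivalent is raising SkipTest before the test body runs.
        if not have_process:
            raise unittest.SkipTest("reactor doesn't support IReactorProcess")

    def test_spawn(self):
        self.fail("never reached when process support is missing")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ShellTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("skipped:", len(result.skipped))  # prints "skipped: 1"
```

The design point is that capability probing happens once at import time, so unsupported tests are reported as skipped rather than failing noisily on reactors (like some win32 reactors of that era) that cannot spawn processes.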
Index: buildbot.texinfo =================================================================== RCS file: /cvsroot/buildbot/buildbot/docs/buildbot.texinfo,v retrieving revision 1.7 retrieving revision 1.8 diff -u -d -r1.7 -r1.8 --- buildbot.texinfo 7 Jul 2005 08:08:59 -0000 1.7 +++ buildbot.texinfo 19 Jul 2005 19:49:34 -0000 1.8 @@ -1,7 +1,7 @@ \input texinfo @c -*-texinfo-*- @c %**start of header @setfilename buildbot.info - at settitle BuildBot Manual 1.0 + at settitle BuildBot Manual x.x @c %**end of header @copying From warner at users.sourceforge.net Tue Jul 19 23:12:03 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 23:12:03 +0000 Subject: [Buildbot-commits] buildbot ChangeLog,1.466,1.467 Makefile,1.13,1.14 Message-ID: Update of /cvsroot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv17398 Modified Files: ChangeLog Makefile Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-239 Creator: Brian Warner merge in build-on-branch code: Merged from warner at monolith.lothar.com--2005 (patch 0-18, 40-41) Patches applied: * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-40 Merged from arch at buildbot.sf.net--2004 (patch 232-238) * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-41 Merged from local-usebranches (warner at monolith.lothar.com--2005/buildbot--usebranches--0( (patch 0-18) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--base-0 tag of warner at monolith.lothar.com--2005/buildbot--dev--0--patch-38 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-1 rearrange build scheduling * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-2 replace ugly 4-tuple with a distinct SourceStamp class * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-3 document upcoming features, clean up CVS branch= argument * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-4 Merged from arch at 
buildbot.sf.net--2004 (patch 227-231), warner at monolith.lothar.com--2005 (patch 39) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-5 implement per-Step Locks, add tests (which all fail) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-6 implement scheduler.Dependent, add (failing) tests * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-7 make test_dependencies work * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-8 finish making Locks work, tests now pass * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-9 fix test failures when run against twisted >2.0.1 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-10 rename test_interlock.py to test_locks.py * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-11 add more Locks tests, add branch examples to manual * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-12 rewrite test_vc.py, create repositories in setUp rather than offline * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-13 make new tests work with twisted-1.3.0 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-14 implement/test build-on-branch for most VC systems * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-15 minor changes: test-case-name tags, init cleanup * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-16 Merged from arch at buildbot.sf.net--2004 (patch 232-233) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-17 Merged from arch at buildbot.sf.net--2004 (patch 234-236) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-18 Merged from arch at buildbot.sf.net--2004 (patch 237-238), warner at monolith.lothar.com--2005 (patch 40) Index: ChangeLog =================================================================== RCS file: /cvsroot/buildbot/buildbot/ChangeLog,v retrieving 
revision 1.466 retrieving revision 1.467 diff -u -d -r1.466 -r1.467 --- ChangeLog 19 Jul 2005 19:49:35 -0000 1.466 +++ ChangeLog 19 Jul 2005 23:12:00 -0000 1.467 @@ -1,5 +1,14 @@ 2005-07-19 Brian Warner + * buildbot/test/test_slaves.py: stubs for new test case + + * buildbot/scheduler.py: add test-case-name tag + * buildbot/test/test_buildreq.py: same + + * buildbot/slave/bot.py (SlaveBuilder.__init__): remove some + unnecessary init code + (Bot.remote_setBuilderList): match it + * docs/buildbot.texinfo (@settitle): don't claim version 1.0 * buildbot/changes/mail.py (parseSyncmail): update comment @@ -38,6 +47,155 @@ 'freebsd' builders, since the buildslaves have been unavailable for quite a while +2005-07-13 Brian Warner + + * buildbot/test/test_vc.py (VCBase.do_branch): test the new + build-on-branch feature + + * buildbot/process/step.py (Darcs.__init__): add base_url and + default_branch arguments, just like SVN + (Arch.__init__): note that the version= argument is really the + default branch name + + * buildbot/slave/commands.py (SourceBase): keep track of the + repository+branch that was used for the last checkout in + SRCDIR/.buildbot-sourcedata . If the contents of this file do not + match, we clobber the directory and perform a fresh checkout + rather than trying to do an in-place update. This should protect + us against trying to get to branch B by doing an update in a tree + obtained from branch A. + (CVS.setup): add CVS-specific sourcedata: root, module, and branch + (SVN.setup): same, just the svnurl + (Darcs.setup): same, just the repourl + (Arch.setup): same, arch coordinates (url), version, and + buildconfig. Also pull the buildconfig from the args dictionary, + which we weren't doing before, so the build-config was effectively + disabled. + (Arch.sourcedirIsUpdateable): don't try to update when we're + moving to a specific revision: arch can't go backwards, so it is + safer to just clobber the tree and checkout a new one at the + desired revision. 
+ (Bazaar.setup): same sourcedata as Arch + + * buildbot/test/test_dependencies.py (Dependencies.testRun_Fail): + use maybeWait, to work with twisted-1.3.0 and twcompat + (Dependencies.testRun_Pass): same + + * buildbot/test/test_vc.py: rearrange, cleanup + + * buildbot/twcompat.py: add defer.waitForDeferred and + utils.getProcessOutputAndValue, so test_vc.py (which uses them) + can work under twisted-1.3.0 . + + * buildbot/test/test_vc.py: rewrite. The sample repositories are + now created at setUp time. This increases the runtime of the test + suite considerably (from 91 seconds to 151), but it removes the + need for an offline tarball, which should solve a problem I've + seen where the test host has a different version of svn than the + tarball build host. The new code also validates that mode=update + really picks up recent commits. This approach will also make it + easier to test out branches, because the code which creates the VC + branches is next to the code which uses them. It will also make it + possible to test some change-notification hooks, by actually + performing a VC commit and watching to see the ChangeSource get + notified. + +2005-07-12 Brian Warner + + * docs/buildbot.texinfo (SVN): add branches example + * docs/Makefile (buildbot.ps): add target for postscript manual + + * buildbot/test/test_dependencies.py: s/test_interlocks/test_locks/ + * buildbot/test/test_locks.py: same + + * buildbot/process/step.py (Darcs): comment about default branches + + * buildbot/master.py (BuildMaster.loadConfig): don't look for + c['interlocks'] in the config file, complain if it is present. + Scan all locks in c['builders'] to make sure the Locks they use + are uniquely named. 
+ * buildbot/test/test_config.py: remove old c['interlocks'] test, + add some tests to check for non-uniquely-named Locks + * buildbot/test/test_vc.py (Patch.doPatch): fix factory.steps, + since the unique-Lock validation code requires it now + + * buildbot/locks.py: fix test-case-name + + * buildbot/interlock.py: remove old file + +2005-07-11 Brian Warner + + * buildbot/test/test_interlock.py: rename to.. + * buildbot/test/test_locks.py: .. something shorter + + * buildbot/slave/bot.py (BuildSlave.stopService): newer Twisted + versions (after 2.0.1) changed internet.TCPClient to shut down the + connection in stopService. Change the code to handle this + gracefully. + + * buildbot/process/base.py (Build): handle whole-Build locks + * buildbot/process/builder.py (Builder.compareToSetup): same + * buildbot/test/test_interlock.py: make tests work + + * buildbot/process/step.py (BuildStep.startStep): complain if a + Step tries to claim a lock that's owned by its own Build + (BuildStep.releaseLocks): typo + + * buildbot/locks.py (MasterLock): use ComparableMixin so config + file reloads don't replace unchanged Builders + (SlaveLock): same + * buildbot/test/test_config.py (ConfigTest.testInterlocks): + rewrite to cover new Locks instead of old c['interlocks'] + * buildbot/test/runutils.py (RunMixin.connectSlaves): remember + slave2 too + + + * buildbot/test/test_dependencies.py (Dependencies.setUp): always + start the master and connect the buildslave + + * buildbot/process/step.py (FailingDummy.done): finish with a + FAILURE status rather than raising an exception + + * buildbot/process/base.py (BuildRequest.mergeReasons): don't try to + stringify a BuildRequest.reason that is None + + * buildbot/scheduler.py (BaseUpstreamScheduler.buildSetFinished): + minor fix + * buildbot/status/builder.py (BuildSetStatus): implement enough to + allow scheduler.Dependent to work + * buildbot/buildset.py (BuildSet): set .reason and .results + + * buildbot/test/test_interlock.py 
(Locks.setUp): connect both + slaves, to make the test stop hanging. It still fails, of course, + because I haven't even started to implement Locks. + + * buildbot/test/runutils.py (RunMixin.connectSlaves): new utility + + * docs/buildbot.texinfo (Build-Dependencies): redesign the feature + * buildbot/interfaces.py (IUpstreamScheduler): new Interface + * buildbot/scheduler.py (BaseScheduler): factor out common stuff + (Dependent): new class for downstream build dependencies + * buildbot/test/test_dependencies.py: tests (still failing) + + * buildbot/buildset.py (BuildSet.waitUntilSuccess): minor notes + +2005-07-07 Brian Warner + + * buildbot/test/runutils.py (RunMixin): factored this class out.. + * buildbot/test/test_run.py: .. from here + * buildbot/test/test_interlock.py: removed old c['interlock'] tests, + added new buildbot.locks tests (which all hang right now) + * buildbot/locks.py (SlaveLock, MasterLock): implement Locks + * buildbot/process/step.py: claim/release per-BuildStep locks + + * docs/Makefile: add 'buildbot.html' target + + * buildbot/process/step.py (CVS.__init__): allow branch=None to be + interpreted as "HEAD", so that all VC steps can accept branch=None + and have it mean the "default branch". + + * docs/buildbot.texinfo: add Schedulers, Dependencies, and Locks + 2005-07-07 Brian Warner * docs/examples/twisted_master.cfg: update to match current usage @@ -60,6 +218,29 @@ * MANIFEST.in: same * Makefile (release): same +2005-06-07 Brian Warner + + * everything: create a distinct SourceStamp class to replace the + ungainly 4-tuple, let it handle merging instead of BuildRequest. + Changed the signature of Source.startVC to include the revision + information (instead of passing it through self.args). Implement + branches for SVN (now only Darcs/Git is missing support). Add more + Scheduler tests. + +2005-06-06 Brian Warner + + * everything: rearrange build scheduling. 
Create a new Scheduler + object (configured in c['schedulers'], which submit BuildSets to a + set of Builders. Builders can now use multiple slaves. Builds can + be run on alternate branches, either requested manually or driven + by changes. This changed some of the Status classes. Interlocks + are out of service until they've been properly split into Locks + and Dependencies. treeStableTimer, isFileImportant, and + periodicBuild have all been moved from the Builder to the + Scheduler. + (BuilderStatus.currentBigState): removed the 'waiting' and + 'interlocked' states, removed the 'ETA' argument. + 2005-05-24 Brian Warner * buildbot/pbutil.py (ReconnectingPBClientFactory): Twisted-1.3 Index: Makefile =================================================================== RCS file: /cvsroot/buildbot/buildbot/Makefile,v retrieving revision 1.13 retrieving revision 1.14 diff -u -d -r1.13 -r1.14 --- Makefile 19 Jul 2005 01:55:21 -0000 1.13 +++ Makefile 19 Jul 2005 23:12:00 -0000 1.14 @@ -19,8 +19,6 @@ test: $(PP) trial $(TRIALARGS) $(TEST) -test-vc: - $(PP) BUILDBOT_TEST_VC=$(PWD)/.. 
trial $(TRIALARGS) $(TEST) #debuild -uc -us From warner at users.sourceforge.net Tue Jul 19 23:12:02 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 23:12:02 +0000 Subject: [Buildbot-commits] buildbot/buildbot/slave commands.py,1.36,1.37 bot.py,1.13,1.14 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/slave In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv17398/buildbot/slave Modified Files: commands.py bot.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-239 Creator: Brian Warner merge in build-on-branch code: Merged from warner at monolith.lothar.com--2005 (patch 0-18, 40-41) Patches applied: * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-40 Merged from arch at buildbot.sf.net--2004 (patch 232-238) * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-41 Merged from local-usebranches (warner at monolith.lothar.com--2005/buildbot--usebranches--0( (patch 0-18) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--base-0 tag of warner at monolith.lothar.com--2005/buildbot--dev--0--patch-38 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-1 rearrange build scheduling * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-2 replace ugly 4-tuple with a distinct SourceStamp class * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-3 document upcoming features, clean up CVS branch= argument * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-4 Merged from arch at buildbot.sf.net--2004 (patch 227-231), warner at monolith.lothar.com--2005 (patch 39) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-5 implement per-Step Locks, add tests (which all fail) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-6 implement scheduler.Dependent, add (failing) tests * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-7 make test_dependencies work * 
warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-8 finish making Locks work, tests now pass * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-9 fix test failures when run against twisted >2.0.1 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-10 rename test_interlock.py to test_locks.py * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-11 add more Locks tests, add branch examples to manual * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-12 rewrite test_vc.py, create repositories in setUp rather than offline * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-13 make new tests work with twisted-1.3.0 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-14 implement/test build-on-branch for most VC systems * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-15 minor changes: test-case-name tags, init cleanup * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-16 Merged from arch at buildbot.sf.net--2004 (patch 232-233) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-17 Merged from arch at buildbot.sf.net--2004 (patch 234-236) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-18 Merged from arch at buildbot.sf.net--2004 (patch 237-238), warner at monolith.lothar.com--2005 (patch 40) Index: bot.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/slave/bot.py,v retrieving revision 1.13 retrieving revision 1.14 diff -u -d -r1.13 -r1.14 --- bot.py 6 May 2005 04:57:58 -0000 1.13 +++ bot.py 19 Jul 2005 23:12:00 -0000 1.14 @@ -61,9 +61,9 @@ # when the step is started remoteStep = None - def __init__(self, parent, name, not_really): - #service.Service.__init__(self) - self.name = name + def __init__(self, name, not_really): + #service.Service.__init__(self) # Service has no __init__ method + 
self.setName(name) self.not_really = not_really def __repr__(self): @@ -267,7 +267,7 @@ % (name, b.builddir, builddir)) b.setBuilddir(builddir) else: - b = SlaveBuilder(self, name, self.not_really) + b = SlaveBuilder(name, self.not_really) b.usePTY = self.usePTY b.setServiceParent(self) b.setBuilddir(builddir) @@ -446,4 +446,6 @@ self.bf.continueTrying = 0 service.MultiService.stopService(self) # now kill the TCP connection - self.connection._connection.disconnect() + # twisted >2.0.1 does this for us, and leaves _connection=None + if self.connection._connection: + self.connection._connection.disconnect() Index: commands.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/slave/commands.py,v retrieving revision 1.36 retrieving revision 1.37 diff -u -d -r1.36 -r1.37 --- commands.py 19 Jul 2005 01:55:21 -0000 1.36 +++ commands.py 19 Jul 2005 23:12:00 -0000 1.37 @@ -618,6 +618,8 @@ that experience transient network failures. """ + sourcedata = "" + def setup(self, args): self.workdir = args['workdir'] self.mode = args.get('mode', "update") @@ -637,12 +639,15 @@ self.srcdir = "source" # hardwired directory name, sorry else: self.srcdir = self.workdir + self.sourcedatafile = os.path.join(self.builder.basedir, + self.srcdir, + ".buildbot-sourcedata") d = defer.succeed(None) # do we need to clobber anything? if self.mode in ("copy", "clobber", "export"): d.addCallback(self.doClobber, self.workdir) - if not self.sourcedirIsUpdateable(): + if not (self.sourcedirIsUpdateable() and self.sourcedataMatches()): # the directory cannot be updated, so we have to clobber it. # Perhaps the master just changed modes from 'export' to # 'update'. 
@@ -665,15 +670,32 @@ def doVC(self, res): if self.interrupted: raise AbandonChain(1) - if self.sourcedirIsUpdateable(): + if self.sourcedirIsUpdateable() and self.sourcedataMatches(): d = self.doVCUpdate() d.addCallback(self.maybeDoVCFallback) else: d = self.doVCFull() d.addBoth(self.maybeDoVCRetry) d.addCallback(self._abandonOnFailure) + d.addCallback(self.writeSourcedata) return d + def sourcedataMatches(self): + try: + olddata = open(self.sourcedatafile, "r").read() + if olddata != self.sourcedata: + return False + except IOError: + return False + return True + + def writeSourcedata(self, res): + open(self.sourcedatafile, "w").write(self.sourcedata) + return res + + def sourcedirIsUpdateable(self): + raise NotImplementedError("this must be implemented in a subclass") + def doVCUpdate(self): raise NotImplementedError("this must be implemented in a subclass") @@ -823,6 +845,8 @@ self.global_options = args.get('global_options', []) self.branch = args.get('branch') self.login = args.get('login') + self.sourcedata = "%s\n%s\n%s\n" % (self.cvsroot, self.cvsmodule, + self.branch) def sourcedirIsUpdateable(self): if os.path.exists(os.path.join(self.builder.basedir, @@ -897,6 +921,7 @@ def setup(self, args): SourceBase.setup(self, args) self.svnurl = args['svnurl'] + self.sourcedata = "%s\n" % self.svnurl def sourcedirIsUpdateable(self): if os.path.exists(os.path.join(self.builder.basedir, @@ -944,6 +969,7 @@ def setup(self, args): SourceBase.setup(self, args) self.repourl = args['repourl'] + self.sourcedata = "%s\n" % self.repourl def sourcedirIsUpdateable(self): if os.path.exists(os.path.join(self.builder.basedir, @@ -988,6 +1014,7 @@ def setup(self, args): SourceBase.setup(self, args) self.repourl = args['repourl'] + #self.sourcedata = "" # TODO def sourcedirIsUpdateable(self): if os.path.exists(os.path.join(self.builder.basedir, @@ -1036,8 +1063,18 @@ self.url = args['url'] self.version = args['version'] self.revision = args.get('revision') + self.buildconfig = 
args.get('build-config') + self.sourcedata = "%s\n%s\n%s\n" % (self.url, self.version, + self.buildconfig) def sourcedirIsUpdateable(self): + if self.revision: + # Arch cannot roll a directory backwards, so if they ask for a + # specific revision, clobber the directory. Technically this + # could be limited to the cases where the requested revision is + # later than our current one, but it's too hard to extract the + # current revision from the tree. + return False if os.path.exists(os.path.join(self.builder.basedir, self.srcdir, ".buildbot-patched")): return False @@ -1134,6 +1171,8 @@ # require that the buildmaster configuration to provide both the # archive name and the URL. self.archive = args['archive'] # required for Baz + self.sourcedata = "%s\n%s\n%s\n" % (self.url, self.version, + self.buildconfig) # in _didRegister, the regexp won't match, so we'll stick with the name # in self.archive From warner at users.sourceforge.net Tue Jul 19 23:12:02 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 23:12:02 +0000 Subject: [Buildbot-commits] buildbot/docs .arch-inventory,1.2,1.3 Makefile,1.1,1.2 buildbot.texinfo,1.8,1.9 .cvsignore,1.1,1.2 Message-ID: Update of /cvsroot/buildbot/buildbot/docs In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv17398/docs Modified Files: .arch-inventory Makefile buildbot.texinfo .cvsignore Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-239 Creator: Brian Warner merge in build-on-branch code: Merged from warner at monolith.lothar.com--2005 (patch 0-18, 40-41) Patches applied: * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-40 Merged from arch at buildbot.sf.net--2004 (patch 232-238) * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-41 Merged from local-usebranches (warner at monolith.lothar.com--2005/buildbot--usebranches--0( (patch 0-18) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--base-0 tag of warner at 
monolith.lothar.com--2005/buildbot--dev--0--patch-38 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-1 rearrange build scheduling * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-2 replace ugly 4-tuple with a distinct SourceStamp class * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-3 document upcoming features, clean up CVS branch= argument * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-4 Merged from arch at buildbot.sf.net--2004 (patch 227-231), warner at monolith.lothar.com--2005 (patch 39) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-5 implement per-Step Locks, add tests (which all fail) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-6 implement scheduler.Dependent, add (failing) tests * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-7 make test_dependencies work * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-8 finish making Locks work, tests now pass * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-9 fix test failures when run against twisted >2.0.1 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-10 rename test_interlock.py to test_locks.py * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-11 add more Locks tests, add branch examples to manual * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-12 rewrite test_vc.py, create repositories in setUp rather than offline * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-13 make new tests work with twisted-1.3.0 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-14 implement/test build-on-branch for most VC systems * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-15 minor changes: test-case-name tags, init cleanup * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-16 Merged from arch 
at buildbot.sf.net--2004 (patch 232-233) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-17 Merged from arch at buildbot.sf.net--2004 (patch 234-236) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-18 Merged from arch at buildbot.sf.net--2004 (patch 237-238), warner at monolith.lothar.com--2005 (patch 40) Index: .arch-inventory =================================================================== RCS file: /cvsroot/buildbot/buildbot/docs/.arch-inventory,v retrieving revision 1.2 retrieving revision 1.3 diff -u -d -r1.2 -r1.3 --- .arch-inventory 11 May 2005 23:25:25 -0000 1.2 +++ .arch-inventory 19 Jul 2005 23:12:00 -0000 1.3 @@ -1,2 +1,5 @@ junk ^reference$ precious \.info$ +precious ^buildbot.html$ +precious ^buildbot$ +precious ^buildbot.ps$ Index: .cvsignore =================================================================== RCS file: /cvsroot/buildbot/buildbot/docs/.cvsignore,v retrieving revision 1.1 retrieving revision 1.2 diff -u -d -r1.1 -r1.2 --- .cvsignore 2 Aug 2004 20:37:08 -0000 1.1 +++ .cvsignore 19 Jul 2005 23:12:00 -0000 1.2 @@ -1 +1,3 @@ *.html +*.info +*.ps Index: Makefile =================================================================== RCS file: /cvsroot/buildbot/buildbot/docs/Makefile,v retrieving revision 1.1 retrieving revision 1.2 diff -u -d -r1.1 -r1.2 --- Makefile 11 May 2005 23:25:25 -0000 1.1 +++ Makefile 19 Jul 2005 23:12:00 -0000 1.2 @@ -1,3 +1,14 @@ buildbot.info: buildbot.texinfo makeinfo --fill-column=70 $< + +buildbot.html: buildbot.texinfo + makeinfo --no-split --html $< + +buildbot.ps: buildbot.texinfo + texi2dvi $< + dvips buildbot.dvi + rm buildbot.aux buildbot.cp buildbot.cps buildbot.fn buildbot.ky buildbot.log buildbot.pg buildbot.toc buildbot.tp buildbot.vr + rm buildbot.dvi + + Index: buildbot.texinfo =================================================================== RCS file: /cvsroot/buildbot/buildbot/docs/buildbot.texinfo,v retrieving revision 1.8 retrieving revision 1.9 diff 
-u -d -r1.8 -r1.9 --- buildbot.texinfo 19 Jul 2005 19:49:34 -0000 1.8 +++ buildbot.texinfo 19 Jul 2005 23:12:00 -0000 1.9 @@ -42,6 +42,7 @@ * Build Process:: Controlling how each build is run. * Status Delivery:: Telling the world about the build's results. * Resources:: Getting help. +* Developer's Appendix:: * Index:: Complete index. @detailmenu @@ -74,7 +75,10 @@ Concepts * Version Control Systems:: [...1610 lines suppressed...] +as follows: - at node Index, , Resources, Top + at example +BuildMaster + ChangeMaster (in .change_svc) + [IChangeSource instances] + [IScheduler instances] (in .schedulers) + BotMaster (in .botmaster) + [IStatusTarget instances] (in .statusTargets) + at end example + +The BotMaster has a collection of Builder objects as values of its + at code{.builders} dictionary. + + + at node Index, , Developer's Appendix, Top @unnumbered Index @printindex cp From warner at users.sourceforge.net Tue Jul 19 23:12:03 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 23:12:03 +0000 Subject: [Buildbot-commits] buildbot/buildbot/clients base.py,1.11,1.12 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/clients In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv17398/buildbot/clients Modified Files: base.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-239 Creator: Brian Warner merge in build-on-branch code: Merged from warner at monolith.lothar.com--2005 (patch 0-18, 40-41) Patches applied: * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-40 Merged from arch at buildbot.sf.net--2004 (patch 232-238) * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-41 Merged from local-usebranches (warner at monolith.lothar.com--2005/buildbot--usebranches--0( (patch 0-18) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--base-0 tag of warner at monolith.lothar.com--2005/buildbot--dev--0--patch-38 * warner at 
monolith.lothar.com--2005/buildbot--usebranches--0--patch-1 rearrange build scheduling * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-2 replace ugly 4-tuple with a distinct SourceStamp class * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-3 document upcoming features, clean up CVS branch= argument * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-4 Merged from arch at buildbot.sf.net--2004 (patch 227-231), warner at monolith.lothar.com--2005 (patch 39) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-5 implement per-Step Locks, add tests (which all fail) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-6 implement scheduler.Dependent, add (failing) tests * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-7 make test_dependencies work * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-8 finish making Locks work, tests now pass * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-9 fix test failures when run against twisted >2.0.1 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-10 rename test_interlock.py to test_locks.py * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-11 add more Locks tests, add branch examples to manual * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-12 rewrite test_vc.py, create repositories in setUp rather than offline * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-13 make new tests work with twisted-1.3.0 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-14 implement/test build-on-branch for most VC systems * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-15 minor changes: test-case-name tags, init cleanup * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-16 Merged from arch at buildbot.sf.net--2004 (patch 232-233) * warner at 
monolith.lothar.com--2005/buildbot--usebranches--0--patch-17 Merged from arch at buildbot.sf.net--2004 (patch 234-236) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-18 Merged from arch at buildbot.sf.net--2004 (patch 237-238), warner at monolith.lothar.com--2005 (patch 40) Index: base.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/clients/base.py,v retrieving revision 1.11 retrieving revision 1.12 diff -u -d -r1.11 -r1.12 --- base.py 22 Apr 2005 07:36:01 -0000 1.11 +++ base.py 19 Jul 2005 23:12:01 -0000 1.12 @@ -63,20 +63,21 @@ class TextClient: def __init__(self, master, events="steps"): + """ + @type events: string, one of builders, builds, steps, logs, full + @param events: specify what level of detail should be reported. + - 'builders': only announce new/removed Builders + - 'builds': also announce builderChangedState, buildStarted, and + buildFinished + - 'steps': also announce buildETAUpdate, stepStarted, stepFinished + - 'logs': also announce stepETAUpdate, logStarted, logFinished + - 'full': also announce log contents + """ self.master = master self.listener = StatusClient(events) def run(self): - """Start the TextClient. - @type events: string, one of builders, builds, steps, logs, full - @param events: specify what level of detail should be reported. 
- - 'builders': only announce new/removed Builders - - 'builds': also announce builderChangedState, buildStarted, and - buildFinished - - 'steps': also announce buildETAUpdate, stepStarted, stepFinished - - 'logs': also announce stepETAUpdate, logStarted, logFinished - - 'full': also announce log contents - """ + """Start the TextClient.""" self.startConnecting() reactor.run() From warner at users.sourceforge.net Tue Jul 19 23:12:03 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 23:12:03 +0000 Subject: [Buildbot-commits] buildbot/buildbot/scripts runner.py,1.29,1.30 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/scripts In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv17398/buildbot/scripts Modified Files: runner.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-239 Creator: Brian Warner merge in build-on-branch code: Merged from warner at monolith.lothar.com--2005 (patch 0-18, 40-41) Patches applied: * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-40 Merged from arch at buildbot.sf.net--2004 (patch 232-238) * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-41 Merged from local-usebranches (warner at monolith.lothar.com--2005/buildbot--usebranches--0( (patch 0-18) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--base-0 tag of warner at monolith.lothar.com--2005/buildbot--dev--0--patch-38 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-1 rearrange build scheduling * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-2 replace ugly 4-tuple with a distinct SourceStamp class * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-3 document upcoming features, clean up CVS branch= argument * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-4 Merged from arch at buildbot.sf.net--2004 (patch 227-231), warner at monolith.lothar.com--2005 (patch 39) * warner at 
monolith.lothar.com--2005/buildbot--usebranches--0--patch-5 implement per-Step Locks, add tests (which all fail) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-6 implement scheduler.Dependent, add (failing) tests * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-7 make test_dependencies work * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-8 finish making Locks work, tests now pass * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-9 fix test failures when run against twisted >2.0.1 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-10 rename test_interlock.py to test_locks.py * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-11 add more Locks tests, add branch examples to manual * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-12 rewrite test_vc.py, create repositories in setUp rather than offline * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-13 make new tests work with twisted-1.3.0 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-14 implement/test build-on-branch for most VC systems * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-15 minor changes: test-case-name tags, init cleanup * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-16 Merged from arch at buildbot.sf.net--2004 (patch 232-233) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-17 Merged from arch at buildbot.sf.net--2004 (patch 234-236) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-18 Merged from arch at buildbot.sf.net--2004 (patch 237-238), warner at monolith.lothar.com--2005 (patch 40) Index: runner.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/scripts/runner.py,v retrieving revision 1.29 retrieving revision 1.30 diff -u -d -r1.29 -r1.30 --- runner.py 23 May 
2005 22:47:51 -0000 1.29 +++ runner.py 19 Jul 2005 23:12:01 -0000 1.30 @@ -367,9 +367,10 @@ that's owned by the user and has the file we're looking for wins. Windows skips the owned-by-user test. - @rtype : dict + @rtype: dict @return: a dictionary of names defined in the options file. If no options - file was found, return an empty dict.""" + file was found, return an empty dict. + """ if here is None: here = os.getcwd() From warner at users.sourceforge.net Tue Jul 19 23:12:04 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 23:12:04 +0000 Subject: [Buildbot-commits] buildbot/buildbot/changes changes.py,1.25,1.26 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/changes In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv17398/buildbot/changes Modified Files: changes.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-239 Creator: Brian Warner merge in build-on-branch code: Merged from warner at monolith.lothar.com--2005 (patch 0-18, 40-41) Patches applied: * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-40 Merged from arch at buildbot.sf.net--2004 (patch 232-238) * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-41 Merged from local-usebranches (warner at monolith.lothar.com--2005/buildbot--usebranches--0( (patch 0-18) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--base-0 tag of warner at monolith.lothar.com--2005/buildbot--dev--0--patch-38 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-1 rearrange build scheduling * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-2 replace ugly 4-tuple with a distinct SourceStamp class * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-3 document upcoming features, clean up CVS branch= argument * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-4 Merged from arch at buildbot.sf.net--2004 (patch 227-231), warner at 
monolith.lothar.com--2005 (patch 39) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-5 implement per-Step Locks, add tests (which all fail) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-6 implement scheduler.Dependent, add (failing) tests * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-7 make test_dependencies work * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-8 finish making Locks work, tests now pass * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-9 fix test failures when run against twisted >2.0.1 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-10 rename test_interlock.py to test_locks.py * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-11 add more Locks tests, add branch examples to manual * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-12 rewrite test_vc.py, create repositories in setUp rather than offline * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-13 make new tests work with twisted-1.3.0 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-14 implement/test build-on-branch for most VC systems * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-15 minor changes: test-case-name tags, init cleanup * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-16 Merged from arch at buildbot.sf.net--2004 (patch 232-233) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-17 Merged from arch at buildbot.sf.net--2004 (patch 234-236) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-18 Merged from arch at buildbot.sf.net--2004 (patch 237-238), warner at monolith.lothar.com--2005 (patch 40) Index: changes.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/changes/changes.py,v retrieving revision 1.25 retrieving revision 
1.26 diff -u -d -r1.25 -r1.26 --- changes.py 24 May 2005 18:57:49 -0000 1.25 +++ changes.py 19 Jul 2005 23:12:01 -0000 1.26 @@ -178,7 +178,6 @@ service.MultiService.__init__(self) self.changes = [] # self.basedir must be filled in by the parent - # self.botmaster too self.nextNumber = 1 def addSource(self, source): @@ -205,7 +204,7 @@ change.number = self.nextNumber self.nextNumber += 1 self.changes.append(change) - self.botmaster.addChange(change) + self.parent.addChange(change) # TODO: call pruneChanges after a while def pruneChanges(self): @@ -238,13 +237,11 @@ del d['parent'] del d['services'] # lose all children del d['namedServices'] - del d['botmaster'] return d def __setstate__(self, d): self.__dict__ = d # self.basedir must be set by the parent - # self.botmaster too self.services = [] # they'll be repopulated by readConfig self.namedServices = {} From warner at users.sourceforge.net Tue Jul 19 23:12:03 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 23:12:03 +0000 Subject: [Buildbot-commits] buildbot/buildbot/status client.py,1.19,1.20 builder.py,1.60,1.61 mail.py,1.17,1.18 words.py,1.37,1.38 html.py,1.64,1.65 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/status In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv17398/buildbot/status Modified Files: client.py builder.py mail.py words.py html.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-239 Creator: Brian Warner merge in build-on-branch code: Merged from warner at monolith.lothar.com--2005 (patch 0-18, 40-41) Patches applied: * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-40 Merged from arch at buildbot.sf.net--2004 (patch 232-238) * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-41 Merged from local-usebranches (warner at monolith.lothar.com--2005/buildbot--usebranches--0( (patch 0-18) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--base-0 tag of warner at 
monolith.lothar.com--2005/buildbot--dev--0--patch-38 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-1 rearrange build scheduling * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-2 replace ugly 4-tuple with a distinct SourceStamp class * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-3 document upcoming features, clean up CVS branch= argument * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-4 Merged from arch at buildbot.sf.net--2004 (patch 227-231), warner at monolith.lothar.com--2005 (patch 39) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-5 implement per-Step Locks, add tests (which all fail) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-6 implement scheduler.Dependent, add (failing) tests * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-7 make test_dependencies work * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-8 finish making Locks work, tests now pass * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-9 fix test failures when run against twisted >2.0.1 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-10 rename test_interlock.py to test_locks.py * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-11 add more Locks tests, add branch examples to manual * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-12 rewrite test_vc.py, create repositories in setUp rather than offline * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-13 make new tests work with twisted-1.3.0 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-14 implement/test build-on-branch for most VC systems * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-15 minor changes: test-case-name tags, init cleanup * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-16 Merged from arch 
at buildbot.sf.net--2004 (patch 232-233) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-17 Merged from arch at buildbot.sf.net--2004 (patch 234-236) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-18 Merged from arch at buildbot.sf.net--2004 (patch 237-238), warner at monolith.lothar.com--2005 (patch 40) Index: builder.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/status/builder.py,v retrieving revision 1.60 retrieving revision 1.61 diff -u -d -r1.60 -r1.61 --- builder.py 19 Jul 2005 19:49:36 -0000 1.60 +++ builder.py 19 Jul 2005 23:12:01 -0000 1.61 @@ -14,7 +14,7 @@ import pickle # sibling imports -from buildbot import interfaces, util +from buildbot import interfaces, util, sourcestamp from buildbot.twcompat import implements SUCCESS, WARNINGS, FAILURE, SKIPPED, EXCEPTION = range(5) @@ -180,14 +180,7 @@ which new entries can easily be appended. The file on disk has a name like 12-log-compile-output, under the Builder's directory. The actual filename is generated (before the LogFile is created) by - L{Builder.generateLogfileName}. - - @type parent: L{BuildStepStatus} - @param parent: the Step that this log is a part of - @type name: string - @param name: the name of this log, typically 'output' - @type logfilename: string - @param logfilename: the Builder-relative pathname for the saved entries + L{BuildStatus.generateLogfileName}. Old LogFile pickles (which kept their contents in .entries) must be upgraded. 
The L{BuilderStatus} is responsible for doing this, when it @@ -212,6 +205,14 @@ openfile = None def __init__(self, parent, name, logfilename): + """ + @type parent: L{BuildStepStatus} + @param parent: the Step that this log is a part of + @type name: string + @param name: the name of this log, typically 'output' + @type logfilename: string + @param logfilename: the Builder-relative pathname for the saved entries + """ self.step = parent self.name = name self.filename = logfilename @@ -535,6 +536,31 @@ return self.logs +class BuildSetStatus: + if implements: + implements(interfaces.IBuildSetStatus) + else: + __implements__ = interfaces.IBuildSetStatus, + + def __init__(self): + # TODO + pass + + def setSourceStamp(self, sourceStamp): + self.source = sourceStamp + def setReason(self, reason): + self.reason = reason + def setResults(self, results): + self.results = results + + def getSourceStamp(self): + return self.source + def getReason(self): + return self.reason + def getResults(self): + return self.results + + class BuildStepStatus: """ I represent a collection of output status for a @@ -779,7 +805,7 @@ else: __implements__ = interfaces.IBuildStatus, interfaces.IStatusEvent - sourceStamp = None + source = None reason = None changes = [] blamelist = [] @@ -830,7 +856,7 @@ return self.builder.getBuild(self.number-1) def getSourceStamp(self): - return self.sourceStamp + return (self.source.branch, self.source.revision, self.source.patch) def getReason(self): return self.reason @@ -955,12 +981,12 @@ def addTestResult(self, result): self.testResults[result.getName()] = result - def setSourceStamp(self, revision, patch=None): - self.sourceStamp = (revision, patch) + def setSourceStamp(self, sourceStamp): + self.source = sourceStamp + self.changes = self.source.changes + def setReason(self, reason): self.reason = reason - def setChanges(self, changes): - self.changes = changes def setBlamelist(self, blamelist): self.blamelist = blamelist def setProgress(self, 
progress): @@ -1081,6 +1107,16 @@ self.watchers = [] self.updates = {} self.finishedWatchers = [] + if d.has_key('sourceStamp'): + revision, patch = d['sourceStamp'] + changes = d.get('changes', []) + source = sourcestamp.SourceStamp(branch=None, + revision=revision, + patch=patch, + changes=changes) + self.source = source + self.changes = source.changes + del self.sourceStamp def upgradeLogfiles(self): # upgrade any LogFiles that need it. This must occur after we've been @@ -1155,7 +1191,6 @@ category = None currentBuild = None currentBigState = "offline" # or idle/waiting/interlocked/building - ETA = None nextBuildNumber = 0 basedir = None # filled in by our parent @@ -1170,7 +1205,6 @@ #self.currentBig = None #self.currentSmall = None self.nextBuild = None - self.eta = None self.watchers = [] self.buildCache = [] # TODO: age builds out of the cache @@ -1252,21 +1286,12 @@ for b in self.builds[0:-self.stepHorizon]: b.pruneSteps() - def getETA(self): - eta = self.ETA # absolute time, set by currentlyWaiting - state = self.currentBigState - if state == "waiting": - eta = self.ETA - util.now() - elif state == "building": - eta = self.currentBuild.getETA() - return eta - # IBuilderStatus methods def getName(self): return self.name def getState(self): - return (self.currentBigState, self.getETA(), self.currentBuild) + return (self.currentBigState, self.currentBuild) def getSlave(self): return self.status.getSlave(self.slavename) @@ -1361,62 +1386,21 @@ self.events.append(e) return e # for consistency, but they really shouldn't touch it - def currentlyOffline(self): - log.msg("currentlyOffline") - self.currentBigState = "offline" - self.publishState() - - def currentlyIdle(self): - self.currentBigState = "idle" - self.ETA = None - self.currentBuild = None - self.publishState() - - def currentlyWaiting(self, when): - self.currentBigState = "waiting" - self.ETA = when - self.currentBuild = None - self.publishState() - - def currentlyInterlocked(self, interlocks): - 
self.currentBigState = "interlocked" - self.ETA = None - self.currentBuild = None - #names = [interlock.name for interlock in interlocks] - #self.currentBig = Event(color="yellow", - # text=["interlocked"] + names) - self.publishState() - - def buildETAText(self, text): - # UNUSED, should live in the clients - if self.eta: - done = self.eta.eta() - if done != None: - text += [time.strftime("ETA: %H:%M:%S", time.localtime(done)), - "[%d seconds]" % (done - util.now())] - else: - text += ["ETA: ?"] - - def NOTcurrentlyBuilding(self, build, eta): - # eta is a progress.BuildProgress object - self.currentBigState = "building" - self.currentBuild = build - if eta: - self.ETA = eta.eta() - else: - self.ETA = None - self.publishState() + def setBigState(self, state): + needToUpdate = state != self.currentBuild + self.currentBigState = state + if needToUpdate: + self.publishState() def publishState(self, target=None): state = self.currentBigState - eta = self.getETA() if target is not None: # unicast - target.builderChangedState(self.name, state, eta) + target.builderChangedState(self.name, state) return for w in self.watchers: - w.builderChangedState(self.name, state, eta) + w.builderChangedState(self.name, state) def newBuild(self): """The Builder has decided to start a build, but the Build object is @@ -1424,6 +1408,8 @@ Steps). 
Create a BuildStatus object that it can use.""" number = self.nextBuildNumber self.nextBuildNumber += 1 + # TODO: self.saveYourself(), to make sure we don't forget about the + # build number we've just allocated s = BuildStatus(self, number) s.waitUntilFinished().addCallback(self._buildFinished) return s @@ -1436,9 +1422,7 @@ assert s.builder is self # paranoia assert s.number == self.nextBuildNumber - 1 self.currentBuild = s - self.currentBigState = "building" self.addBuildToCache(self.currentBuild) - self.publishState() # now that the BuildStatus is prepared to answer queries, we can # announce the new build to all our watchers @@ -1453,8 +1437,6 @@ def _buildFinished(self, s): assert s is self.currentBuild - self.currentBigState = "idle" - self.ETA = None self.currentBuild.saveYourself() self.currentBuild = None @@ -1467,13 +1449,6 @@ # waterfall display (history) - - # top-row: last-build status - def setLastBuildStatus(self, event): - log.msg("setLastBuildStatus", event) - self.lastBuildStatus = event - for w in self.watchers: - self.sendLastBuildStatus(w) # I want some kind of build event that holds everything about the build: # why, what changes went into it, the results of the build, itemized @@ -1497,67 +1472,13 @@ client.currentlyOffline() elif state == "idle": client.currentlyIdle() - elif state == "waiting": - client.currentlyWaiting(self.nextBuild - util.now()) - elif state == "interlocked": - client.currentlyInterlocked() elif state == "building": - client.currentlyBuilding(self.eta) - # let them format the time as they wish + client.currentlyBuilding() else: log.msg("Hey, self.currentBigState is weird:", state) - - # current-activity-small - def OFFsetCurrentActivity(self, event): - self.pushEvent(event) - self.currentSmall = event - for s in self.subscribers: - s.newEvent(event) - - def OFFpushEvent(self, event): - if self.events: - next = self.events[-1].number + 1 - else: - next = 0 - event.setName(self, next) - self.events.append(event) - 
self.pruneEvents() - - - def OFFupdateCurrentActivity(self, **kwargs): - self.currentSmall.update(**kwargs) - def OFFaddFileToCurrentActivity(self, name, logfile): - self.currentSmall.addFile(name, logfile) - def OFFfinishCurrentActivity(self): - self.currentSmall.finish() - - def setCurrentBuild(self): - pass - def finishCurrentBuild(self): - pass ## HTML display interface - def getLastBuildStatus(self): - return self.lastBuildStatus - def getCurrentBig(self): - state = self.currentBigState - if state == "waiting": - when = self.nextBuild - return Event(color="yellow", - text=["waiting", "next build", - time.strftime("%H:%M:%S", - time.localtime(when)), - "[%d seconds]" % (when - util.now()) - ]) - elif state == "building": - text = ["building"] - self.buildETAText(text) - return Event(color="yellow", text=text) - else: - return self.currentBig - def getCurrentSmall(self): - return self.currentSmall def getEventNumbered(self, num): # deal with dropped events, pruned events @@ -1722,7 +1643,7 @@ if not os.path.isdir(builder_status.basedir): os.mkdir(builder_status.basedir) - builder_status.currentlyOffline() + builder_status.setBigState("offline") for t in self.watchers: self.announceNewBuilder(t, name, builder_status) Index: client.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/status/client.py,v retrieving revision 1.19 retrieving revision 1.20 diff -u -d -r1.19 -r1.20 --- client.py 18 May 2005 00:50:51 -0000 1.19 +++ client.py 19 Jul 2005 23:12:01 -0000 1.20 @@ -30,8 +30,8 @@ return self.b.getName() def remote_getState(self): - state, ETA, build = self.b.getState() - return (state, ETA, makeRemote(build)) + state, build = self.b.getState() + return (state, None, makeRemote(build)) # TODO: remove leftover ETA def remote_getSlave(self): return IRemote(self.b.getSlave()) @@ -308,8 +308,9 @@ return self return None - def builderChangedState(self, name, state, eta): - 
self.client.callRemote("builderChangedState", name, state, eta) + def builderChangedState(self, name, state): + self.client.callRemote("builderChangedState", name, state, None) + # TODO: remove leftover ETA argument def builderRemoved(self, name): if name in self.subscribed_to_builders: Index: html.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/status/html.py,v retrieving revision 1.64 retrieving revision 1.65 diff -u -d -r1.64 -r1.65 --- html.py 17 May 2005 10:14:09 -0000 1.64 +++ html.py 19 Jul 2005 23:12:01 -0000 1.65 @@ -677,24 +677,16 @@ return time.strftime("%H:%M:%S", time.localtime(util.now()+eta)) def getBox(self): - state, ETA, build = self.original.getState() + state, build = self.original.getState() color = "white" - if state in ("waiting", "interlocked", "building"): + if state == "building": color = "yellow" text = [state] if state == "offline": color = "red" - if state == "waiting": - if ETA is not None: - text.extend(["next build", - self.formatETA(ETA), - "[%d seconds]" % ETA]) if state == "building": - if ETA is not None: - text.extend(["ETA: %s" % self.formatETA(ETA), - "[%d seconds]" % ETA]) - else: - text.extend(["ETA: ?"]) + # TODO: ETA calculation + pass return Box(text, color=color, class_="Activity " + state) components.registerAdapter(CurrentBox, builder.BuilderStatus, ICurrentBox) @@ -988,7 +980,7 @@ for b in builders: text = "" color = "#ca88f7" - state, ETA, build = b.getState() + state, build = b.getState() if state != "offline": text += "%s
    \n" % state #b.getCurrentBig().text[0] else: @@ -1473,9 +1465,9 @@ will be used for the 'favicon.ico' resource. Many browsers automatically request this file and use it as an icon in any bookmark generated from this site. - Defaults to the L{buildbot.png} image provided in the - distribution. Can be set to None to avoid using - a favicon at all. + Defaults to the L{buildbot/buildbot.png} image + provided in the distribution. Can be set to None to + avoid using a favicon at all. """ base.StatusReceiverMultiService.__init__(self) Index: mail.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/status/mail.py,v retrieving revision 1.17 retrieving revision 1.18 diff -u -d -r1.17 -r1.18 --- mail.py 18 May 2005 00:29:43 -0000 1.17 +++ mail.py 19 Jul 2005 23:12:01 -0000 1.18 @@ -243,11 +243,16 @@ if ss is None: source = "unavailable" else: - revision, patch = ss - if patch is None: - source = revision + branch, revision, patch = ss + source = "" + if branch: + source += "[branch %s] " + if revision: + source += revision else: - source = "%s (plus patch)" % revision + source += "HEAD" + if patch is not None: + source += " (plus patch)" text += "Build Source Stamp: %s\n" % source text += "Blamelist: %s\n" % ",".join(build.getResponsibleUsers()) Index: words.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/status/words.py,v retrieving revision 1.37 retrieving revision 1.38 diff -u -d -r1.37 -r1.38 --- words.py 18 May 2005 01:03:49 -0000 1.37 +++ words.py 19 Jul 2005 23:12:01 -0000 1.38 @@ -24,6 +24,38 @@ def __init__(self, string = "Invalid usage", *more): ValueError.__init__(self, string, *more) +class IrcBuildRequest: + hasStarted = False + timer = None + + def __init__(self, parent, reply): + self.parent = parent + self.reply = reply + self.timer = reactor.callLater(5, self.soon) + + def soon(self): + del self.timer + if not 
self.hasStarted: + self.parent.reply(self.reply, + "The build has been queued, I'll give a shout" + " when it starts") + + def started(self, c): + self.hasStarted = True + if self.timer: + self.timer.cancel() + del self.timer + s = c.getStatus() + eta = s.getETA() + response = "build #%d forced" % s.getNumber() + if eta is not None: + response = "build forced [ETA %s]" % self.parent.convertTime(eta) + self.parent.reply(reply, response) + self.parent.reply(reply, "I'll give a shout when the build finishes") + d = s.waitUntilFinished() + d.addCallback(self.parent.buildFinished, reply) + + class IrcStatusBot(irc.IRCClient): silly = { "What happen ?": "Somebody set up us the bomb.", @@ -266,7 +298,9 @@ # 'reply' argument. r = "forced: by IRC user <%s>: %s" % (user, reason) try: - c = bc.forceBuild(who, r) + # TODO: replace this with bc.requestBuild, and maybe give certain + # users the ability to request builds of certain branches + d = bc.forceBuild(who, r) except interfaces.NoSlaveError: self.reply(reply, "sorry, I can't force a build: the slave is offline") @@ -275,19 +309,14 @@ self.reply(reply, "sorry, I can't force a build: the slave is in use") return - if not c: + if not d: self.reply(reply, "sorry, I can't force a build: I must have " "left the builder in my other pants") return - s = c.getStatus() - eta = s.getETA() - response = "build #%d forced" % s.getNumber() - if eta is not None: - response = "build forced [ETA %s]" % self.convertTime(eta) - self.reply(reply, response) - self.reply(reply, "I'll give a shout when the build finishes") - d = s.waitUntilFinished() - d.addCallback(self.buildFinished, reply) + + req = IrcBuildRequest(self, reply) + d.addCallback(req.started) + command_FORCE.usage = "force build - Force a build" def command_STOP(self, user, reply, args): From warner at users.sourceforge.net Tue Jul 19 23:23:23 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 23:23:23 +0000 Subject: [Buildbot-commits] 
buildbot/buildbot/process base.py,1.56,1.57 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/process In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv20581/buildbot/process Modified Files: base.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-242 Creator: Brian Warner remove references to old 'interlock' module * buildbot/master.py (BuildMaster): remove references to old 'interlock' module, this caused a bunch of post-merge test failures * buildbot/test/test_config.py: same * buildbot/process/base.py (Build): same Index: base.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/process/base.py,v retrieving revision 1.56 retrieving revision 1.57 diff -u -d -r1.56 -r1.57 --- base.py 19 Jul 2005 23:11:58 -0000 1.56 +++ base.py 19 Jul 2005 23:23:21 -0000 1.57 @@ -174,7 +174,6 @@ self.source = requests[0].mergeWith(requests[1:]) self.reason = requests[0].mergeReasons(requests[1:]) - #self.interlocks = [] #self.abandoned = False self.progress = None @@ -209,36 +208,6 @@ havedirs = 1 return files - def OFFcheckInterlocks(self, interlocks): - assert interlocks - # Build.interlocks is a list of the ones we are waiting for. As each - # deferred fires, we remove one from the list. When the last one - # fires, the build is started. When the first one fails, the build - # is abandoned. - - # This could be done with a DeferredList, but we track the actual - # Interlocks so we can provide better status information (i.e. - # *which* interlocks the build is waiting for). 
- - self.interlocks = interlocks[:] - for interlock in interlocks: - d = interlock.hasPassed(self.maxChangeNumber) - d.addCallback(self.interlockDone, interlock) - # wait for all of them to pass - - def OFFinterlockDone(self, passed, interlock): - # one interlock has finished - self.interlocks.remove(interlock) - if self.abandoned: - return - if passed and not self.interlocks: - # that was the last holdup, we are now .buildable - self.builder.interlockPassed(self) - else: - # failed, do failmerge - self.abandoned = True - self.builder.interlockFailed(self) - def __repr__(self): return "" % (self.builder.name) From warner at users.sourceforge.net Tue Jul 19 23:23:24 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 23:23:24 +0000 Subject: [Buildbot-commits] buildbot/buildbot/test test_config.py,1.22,1.23 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/test In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv20581/buildbot/test Modified Files: test_config.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-242 Creator: Brian Warner remove references to old 'interlock' module * buildbot/master.py (BuildMaster): remove references to old 'interlock' module, this caused a bunch of post-merge test failures * buildbot/test/test_config.py: same * buildbot/process/base.py (Build): same Index: test_config.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_config.py,v retrieving revision 1.22 retrieving revision 1.23 diff -u -d -r1.22 -r1.23 --- test_config.py 19 Jul 2005 23:11:58 -0000 1.22 +++ test_config.py 19 Jul 2005 23:23:21 -0000 1.23 @@ -23,7 +23,6 @@ from twisted.web.distrib import ResourcePublisher from buildbot.process.builder import Builder from buildbot.process.factory import BasicBuildFactory -from buildbot.process.interlock import Interlock from buildbot.process import step from buildbot.status import html, builder 
try: From warner at users.sourceforge.net Tue Jul 19 23:23:23 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 23:23:23 +0000 Subject: [Buildbot-commits] buildbot/buildbot master.py,1.74,1.75 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv20581/buildbot Modified Files: master.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-242 Creator: Brian Warner remove references to old 'interlock' module * buildbot/master.py (BuildMaster): remove references to old 'interlock' module, this caused a bunch of post-merge test failures * buildbot/test/test_config.py: same * buildbot/process/base.py (Build): same Index: master.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/master.py,v retrieving revision 1.74 retrieving revision 1.75 diff -u -d -r1.74 -r1.75 --- master.py 19 Jul 2005 23:11:59 -0000 1.74 +++ master.py 19 Jul 2005 23:23:21 -0000 1.75 @@ -24,7 +24,6 @@ from buildbot.twcompat import implements from buildbot.util import now from buildbot.pbutil import NewCredPerspective -from buildbot.process.interlock import Interlock from buildbot.process.builder import Builder from buildbot.status.builder import BuilderStatus, SlaveStatus, Status from buildbot.changes.changes import Change, ChangeMaster @@ -291,7 +290,6 @@ # They are added by calling botmaster.addBuilder() from the startup # code. self.slaves = {} # maps slavename to BotPerspective - self.interlocks = {} self.statusClientService = None self.watchers = {} @@ -359,7 +357,6 @@ self.builders[builder.name] = builder self.builderNames.append(builder.name) builder.setBotmaster(self) - #self.checkInactiveInterlocks() # TODO?: do this in caller instead? 
slave = self.slaves[slavename] return slave.addBuilder(builder) @@ -374,18 +371,6 @@ if self.debug: print "removeBuilder", builder log.msg("Botmaster.removeBuilder(%s)" % builder.name) b = self.builders[builder.name] - # any linked interlocks will be made inactive before the builder is - # removed -## interlocks = [] -## for i in b.feeders: -## assert i not in interlocks -## interlocks.append(i) -## for i in b.interlocks: -## assert i not in interlocks -## interlocks.append(i) -## for i in interlocks: -## if self.debug: print " deactivating interlock", i -## i.deactivate(self.builders) del self.builders[builder.name] self.builderNames.remove(builder.name) slave = self.slaves.get(builder.slavename) @@ -393,38 +378,6 @@ return slave.removeBuilder(builder) return defer.succeed(None) - def addInterlock(self, interlock): - """This is called by the setup code to create build interlocks: - objects which let one build wait until another has successfully - build the same set of changes. These objects are created by name, - then builds are told if they feed the interlock or if the interlock - feeds them. - - If any of the referenced builds do not exist, the interlock is left - inactive. All inactive interlocks will be checked again when new - builders are added. This should only be a transient condition as - config changes are read: if it persists after the config file is - fully parsed, a warning should be emitted. 
- """ - - if self.debug: print "addInterlock", interlock - assert isinstance(interlock, Interlock) - self.interlocks[interlock.name] = interlock - interlock.tryToActivate(self.builders) - - def checkInactiveInterlocks(self): - if self.debug: print "checkInactiveInterlocks" - for interlock in self.interlocks.values(): - if not interlock.active: - interlock.tryToActivate(self.builders) - - def removeInterlock(self, interlock): - if self.debug: print "removeInterlock", interlock - assert isinstance(interlock, Interlock) - del self.interlocks[interlock.name] - if interlock.active: - interlock.deactivate(self.builders) - def getPerspective(self, slavename): return self.slaves[slavename] @@ -759,18 +712,6 @@ if config.has_key('interlocks'): raise KeyError("c['interlocks'] is no longer accepted") -## for i in interlocks: -## name, feeders, watchers = i -## if type(feeders) != type([]): -## raise TypeError, "interlock feeders must be a list" -## if type(watchers) != type([]): -## raise TypeError, "interlock watchers must be a list" -## bnames = feeders + watchers -## for bname in bnames: -## if bnames.count(bname) > 1: -## why = ("builder '%s' appears multiple times for " + \ -## "interlock %s") % (bname, name) -## raise ValueError, why for s in status: assert interfaces.IStatusReceiver(s) @@ -872,9 +813,6 @@ self.slavePort.setServiceParent(self) log.msg("BuildMaster listening on port %d" % slavePortnum) self.slavePortnum = slavePortnum - - # self.interlocks: - #self.loadConfig_Interlocks(interlocks) log.msg("configuration updated") self.readConfig = True @@ -999,39 +937,6 @@ return defer.DeferredList(dl) - def loadConfig_Interlocks(self, newInterlocks): - newList = {} - for interlockData in newInterlocks: - name, feeders, watchers = interlockData - feeders.sort() - watchers.sort() - newList[name] = interlockData - # identify all that were removed, and remove them - for old in self.botmaster.interlocks.values(): - if old.name not in newList.keys(): - if self.debug: print 
"old interlock", old - self.botmaster.removeInterlock(old) - # everything in newList is either unchanged, changed, or new - for newName, data in newList.items(): - old = self.botmaster.interlocks.get(newName) - name, feeders, watchers = data - if not old: - # new - i = Interlock(name, feeders, watchers) - if self.debug: print "new interlock", i - self.botmaster.addInterlock(i) - elif (old.feederNames == feeders and - old.watcherNames == watchers): - # unchanged: leave it alone - if self.debug: print "unchanged interlock", old - pass - else: - # changed: remove and re-add - if self.debug: print "interlock changed", name - self.botmaster.removeInterlock(old) - i = Interlock(name, feeders, watchers) - self.botmaster.addInterlock(i) - def addChange(self, change): for s in self.schedulers: From warner at users.sourceforge.net Tue Jul 19 23:23:24 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 23:23:24 +0000 Subject: [Buildbot-commits] buildbot ChangeLog,1.467,1.468 Message-ID: Update of /cvsroot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv20581 Modified Files: ChangeLog Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-242 Creator: Brian Warner remove references to old 'interlock' module * buildbot/master.py (BuildMaster): remove references to old 'interlock' module, this caused a bunch of post-merge test failures * buildbot/test/test_config.py: same * buildbot/process/base.py (Build): same Index: ChangeLog =================================================================== RCS file: /cvsroot/buildbot/buildbot/ChangeLog,v retrieving revision 1.467 retrieving revision 1.468 diff -u -d -r1.467 -r1.468 --- ChangeLog 19 Jul 2005 23:12:00 -0000 1.467 +++ ChangeLog 19 Jul 2005 23:23:22 -0000 1.468 @@ -1,5 +1,11 @@ 2005-07-19 Brian Warner + * buildbot/master.py (BuildMaster): remove references to old + 'interlock' module, this caused a bunch of post-merge test + failures + * 
buildbot/test/test_config.py: same + * buildbot/process/base.py (Build): same + * buildbot/test/test_slaves.py: stubs for new test case * buildbot/scheduler.py: add test-case-name tag From warner at users.sourceforge.net Tue Jul 19 23:12:00 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 23:12:00 +0000 Subject: [Buildbot-commits] buildbot/buildbot/process step.py,1.66,1.67 factory.py,1.9,1.10 base.py,1.55,1.56 builder.py,1.26,1.27 interlock.py,1.7,NONE Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/process In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv17398/buildbot/process Modified Files: step.py factory.py base.py builder.py Removed Files: interlock.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-239 Creator: Brian Warner merge in build-on-branch code: Merged from warner at monolith.lothar.com--2005 (patch 0-18, 40-41) Patches applied: * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-40 Merged from arch at buildbot.sf.net--2004 (patch 232-238) * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-41 Merged from local-usebranches (warner at monolith.lothar.com--2005/buildbot--usebranches--0( (patch 0-18) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--base-0 tag of warner at monolith.lothar.com--2005/buildbot--dev--0--patch-38 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-1 rearrange build scheduling * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-2 replace ugly 4-tuple with a distinct SourceStamp class * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-3 document upcoming features, clean up CVS branch= argument * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-4 Merged from arch at buildbot.sf.net--2004 (patch 227-231), warner at monolith.lothar.com--2005 (patch 39) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-5 implement per-Step 
Locks, add tests (which all fail) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-6 implement scheduler.Dependent, add (failing) tests * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-7 make test_dependencies work * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-8 finish making Locks work, tests now pass * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-9 fix test failures when run against twisted >2.0.1 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-10 rename test_interlock.py to test_locks.py * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-11 add more Locks tests, add branch examples to manual * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-12 rewrite test_vc.py, create repositories in setUp rather than offline * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-13 make new tests work with twisted-1.3.0 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-14 implement/test build-on-branch for most VC systems * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-15 minor changes: test-case-name tags, init cleanup * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-16 Merged from arch at buildbot.sf.net--2004 (patch 232-233) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-17 Merged from arch at buildbot.sf.net--2004 (patch 234-236) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-18 Merged from arch at buildbot.sf.net--2004 (patch 237-238), warner at monolith.lothar.com--2005 (patch 40) Index: base.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/process/base.py,v retrieving revision 1.55 retrieving revision 1.56 diff -u -d -r1.55 -r1.56 --- base.py 22 May 2005 02:16:14 -0000 1.55 +++ base.py 19 Jul 2005 23:11:58 -0000 1.56 @@ -15,58 
+15,167 @@ from buildbot.status.builder import Results from buildbot.status.progress import BuildProgress +class BuildRequest: + """I represent a request to a specific Builder to run a single build. + + I have a SourceStamp which specifies what sources I will build. This may + specify a specific revision of the source tree (so source.branch, + source.revision, and source.patch are used). The .patch attribute is + either None or a tuple of (patchlevel, diff), consisting of a number to + use in 'patch -pN', and a unified-format context diff. + + Alternatively, the SourceStamp may specify a set of Changes to be built, + contained in source.changes. In this case, I may be mergeable with other + BuildRequests on the same branch. + + I may be part of a BuildSet, in which case I will report status results + to it. + + @type source: a L{buildbot.buildset.SourceStamp} instance. + @ivar source: the source code that this BuildRequest use + + @type reason: string + @ivar reason: the reason this Build is being requested. Schedulers + provide this, but for forced builds the user requesting the + build will provide a string. + + @ivar status: the IBuildStatus object which tracks our status + + @ivar submittedAt: a timestamp (seconds since epoch) when this request + was submitted to the Builder. This is used by the CVS + step to compute a checkout timestamp. 
+ """ + + source = None + builder = None + startCount = 0 # how many times we have tried to start this build + + if implements: + implements(interfaces.IBuildRequestControl) + else: + __implements__ = interfaces.IBuildRequestControl, + + def __init__(self, reason, source): + assert interfaces.ISourceStamp(source, None) + self.reason = reason + self.source = source + self.start_watchers = [] + self.finish_watchers = [] + + def canBeMergedWith(self, other): + return self.source.canBeMergedWith(other.source) + + def mergeWith(self, others): + return self.source.mergeWith([o.source for o in others]) + + def mergeReasons(self, others): + """Return a reason for the merged build request.""" + reasons = [] + for req in [self] + others: + if req.reason and req.reason not in reasons: + reasons.append(req.reason) + return ", ".join(reasons) + + def waitUntilStarted(self): + """Get a Deferred that will fire (with a + L{buildbot.interfaces.IBuildControl} instance) when the build starts. + TODO: there could be multiple Builds to satisfy a BuildRequest, but + this API only allows you to wait for the first one.""" + # TODO: if you call this after the build has started, the Deferred + # will never fire. + d = defer.Deferred() + self.start_watchers.append(d) + return d + + def waitUntilFinished(self): + """Get a Deferred that will fire (with a + L{buildbot.interfaces.IBuildStatus} instance when the build + finishes.""" + d = defer.Deferred() + self.finish_watchers.append(d) + return d + + # these are called by the Builder + + def requestSubmitted(self, builder): + # the request has been placed on the queue + self.builder = builder + + def buildStarted(self, build, buildstatus): + """This is called by the Builder when a Build has been started in the + hopes of satifying this BuildRequest. 
It may be called multiple + times, since interrupted builds and lost buildslaves may force + multiple Builds to be run until the fate of the BuildRequest is known + for certain.""" + for w in self.start_watchers: + w.callback(build) + self.start_watchers = [] + + def finished(self, buildstatus): + """This is called by the Builder when the BuildRequest has been + retired. This happens when its Build has either succeeded (yay!) or + failed (boo!). TODO: If it is halted due to an exception (oops!), or + some other retryable error, C{finished} will not be called yet.""" + + for w in self.finish_watchers: + w.callback(buildstatus) + self.finish_watchers = [] + + # IBuildRequestControl + def cancel(self): + """Cancel this request. This can only be successful if the Build has + not yet been started. + + @return: a boolean indicating if the cancel was successful.""" + if self.builder: + return self.builder.cancelBuildRequest(self) + return False + + class Build: """I represent a single build by a single bot. Specialized Builders can use subclasses of Build to hold status information unique to those build processes. - I am responsible for two things: - 1. deciding B{when} a build should occur. This involves knowing - which file changes to ignore (documentation or comments files, - for example), and deciding how long to wait for the tree to - become stable before starting. The base class pays attention - to all files, and waits 10 seconds for a stable tree. - - 2. controlling B{how} the build proceeds. The actual build is - broken up into a series of steps, saved in the .buildSteps[] - array as a list of L{buildbot.process.step.BuildStep} - objects. Each step is a single remote command, possibly a shell - command. - - Before the build is started, I accumulate Changes and track the - tree-stable timers and interlocks necessary to decide when I ought to - start building. + I control B{how} the build proceeds. 
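waitUntilFinished()/finished() above use a watcher-list idiom: callers register a Deferred, and the Builder fires and clears the whole list exactly once when the request is retired. A minimal sketch with plain callables standing in for Twisted Deferreds (MiniRequest is illustrative):

```python
class MiniRequest:
    def __init__(self):
        self.finish_watchers = []

    def wait_until_finished(self, callback):
        # register interest; fires once when finished() runs
        self.finish_watchers.append(callback)

    def finished(self, buildstatus):
        # swap out and clear the list before firing, so a watcher that
        # re-registers during its callback waits for the *next* finish
        watchers, self.finish_watchers = self.finish_watchers, []
        for w in watchers:
            w(buildstatus)

results = []
req = MiniRequest()
req.wait_until_finished(results.append)
req.finished("SUCCESS")   # results now holds the build status
```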
The actual build is broken up into a + series of steps, saved in the .buildSteps[] array as a list of + L{buildbot.process.step.BuildStep} objects. Each step is a single remote + command, possibly a shell command. - During the build, I don't do anything interesting. + During the build, I put status information into my C{BuildStatus} + gatherer. - After the build, I hold historical data about the build, like how long - it took, tree size, lines of code, etc. It is expected to be used to - generate graphs and quantify long-term trends. It does not hold any - status events or build logs. + After the build, I go away. I can be used by a factory by setting buildClass on L{buildbot.process.factory.BuildFactory} + + @ivar request: the L{BuildRequest} that triggered me + @ivar build_status: the L{buildbot.status.builder.BuildStatus} that + collects our status """ + if implements: + implements(interfaces.IBuildControl) + else: + __implements__ = interfaces.IBuildControl, - treeStableTimer = 10 #*60 workdir = "build" build_status = None reason = "changes" - sourceStamp = (None, None) finished = False results = None - def __init__(self): - self.unimportantChanges = [] - self.changes = [] - self.failedChanges = [] - self.maxChangeNumber = None - # .timer and .nextBuildTime are only set while we are in the - # Builder's 'waiting' slot - self.timer = None - self.nextBuildTime = None - self.interlocks = [] - self.abandoned = False + def __init__(self, requests): + self.requests = requests + for req in self.requests: + req.startCount += 1 + self.locks = [] + # build a source stamp + self.source = requests[0].mergeWith(requests[1:]) + self.reason = requests[0].mergeReasons(requests[1:]) + + #self.interlocks = [] + #self.abandoned = False self.progress = None self.currentStep = None @@ -80,131 +189,27 @@ """ self.builder = builder - def setSourceStamp(self, baserev, patch, reason="try"): - # sourceStamp is (baserevision, (patchlevel, diff)) - self.sourceStamp = (baserev, patch) - 
self.reason = reason - - def isFileImportant(self, filename): - """ - I check if the given file is important enough to trigger a rebuild. - - Override me to ignore unimporant files: documentation, .cvsignore - files, etc. - - The timer is not restarted, so a checkout may occur in the middle of - a set of changes marked 'unimportant'. Also, the checkout may or may - not pick up the 'unimportant' changes. The implicit assumption is - that any file marked 'unimportant' is incapable of affecting the - results of the build. - - @param filename: name of a file to check, relative to the VC base - @type filename: string - - @rtype: boolean - @returns: whether the change to this file should trigger a rebuild - """ - return True - - def isBranchImportant(self, branch): - """I return True if the given branch is important enough to trigger a - rebuild, False if it should be ignored. Override me to ignore - unimportant branches. The timer is not restarted, so a checkout may - occur in the middle of a set of changes marked 'unimportant'. Also, - the checkout may or may not pick up the 'unimportant' changes.""" - return True - - def bumpMaxChangeNumber(self, change): - if not self.maxChangeNumber: - self.maxChangeNumber = change.number - if change.number > self.maxChangeNumber: - self.maxChangeNumber = change.number - - def addChange(self, change): - """ - Add the change, deciding if the change is important or not. 
- Called by L{buildbot.process.builder.filesChanged} - - @type change: L{buildbot.changes.changes.Change} - """ - # for a change to be important, it needs to be with an important - # branch and it need to contain an important file - - important = 0 - - if self.isBranchImportant(change.branch): - for filename in change.files: - if self.isFileImportant(filename): - important = 1 - break - - if important: - self.addImportantChange(change) - else: - self.addUnimportantChange(change) + def setLocks(self, locks): + self.locks = locks - def addImportantChange(self, change): - log.msg("builder %s: change is important, adding" % self.builder.name) - self.bumpMaxChangeNumber(change) - self.changes.append(change) - self.nextBuildTime = change.when + self.treeStableTimer - self.setTimer(self.nextBuildTime) - self.builder.updateBigStatus() - - def addUnimportantChange(self, change): - self.unimportantChanges.append(change) + def getSourceStamp(self): + return self.source def allChanges(self): - return self.changes + self.failedChanges + self.unimportantChanges + return self.source.changes def allFiles(self): # return a list of all source files that were changed files = [] havedirs = 0 - for c in self.changes + self.unimportantChanges: + for c in self.allChanges(): for f in c.files: files.append(f) if c.isdir: havedirs = 1 return files - def failMerge(self, b): - for c in b.unimportantChanges + b.changes + b.failedChanges: - self.bumpMaxChangeNumber(c) - self.failedChanges.append(c) - def merge(self, b): - self.unimportantChanges.extend(b.unimportantChanges) - self.failedChanges.extend(b.failedChanges) - self.changes.extend(b.changes) - for c in b.unimportantChanges + b.changes + b.failedChanges: - self.bumpMaxChangeNumber(c) - - def getSourceStamp(self): - return self.sourceStamp - - def setTimer(self, when): - log.msg("setting timer to %s" % - time.strftime("%H:%M:%S", time.localtime(when))) - if when < now(): - when = now() + 1 - if self.timer: - self.timer.cancel() - 
self.timer = reactor.callLater(when - now(), self.fireTimer) - def stopTimer(self): - if self.timer: - self.timer.cancel() - self.timer = None - - def fireTimer(self): - """ - Fire the build timer on the builder. - """ - self.timer = None - self.nextBuildTime = None - # tell the Builder to deal with us - self.builder.buildTimerFired(self) - - def checkInterlocks(self, interlocks): + def OFFcheckInterlocks(self, interlocks): assert interlocks # Build.interlocks is a list of the ones we are waiting for. As each # deferred fires, we remove one from the list. When the last one @@ -221,7 +226,7 @@ d.addCallback(self.interlockDone, interlock) # wait for all of them to pass - def interlockDone(self, passed, interlock): + def OFFinterlockDone(self, passed, interlock): # one interlock has finished self.interlocks.remove(interlock) if self.abandoned: @@ -241,23 +246,19 @@ d = self.__dict__.copy() if d.has_key('remote'): del d['remote'] - d['timer'] = None return d - def __setstate__(self, state): - self.__dict__ = state - if self.nextBuildTime: - self.setTimer(self.nextBuildTime) def blamelist(self): - who = {} - for c in self.unimportantChanges + self.changes + self.failedChanges: - who[c.who] = 1 - blamelist = who.keys() + blamelist = [] + for c in self.allChanges(): + if c.who not in blamelist: + blamelist.append(c.who) blamelist.sort() return blamelist + def changesText(self): changetext = "" - for c in self.failedChanges + self.unimportantChanges + self.changes: + for c in self.allChanges(): changetext += "-" * 60 + "\n\n" + c.asText() + "\n" # consider sorting these by number return changetext @@ -277,14 +278,19 @@ useProgress = True - def startBuild(self, build_status, expectations, remote): + def getSlaveCommandVersion(self, command, oldversion=None): + return self.slavebuilder.getSlaveCommandVersion(command, oldversion) + + def startBuild(self, build_status, expectations, slavebuilder): """This method sets up the build, then starts it by invoking the first Step. 
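The rewritten blamelist() above deduplicates committers from the merged changes and sorts the result. A standalone sketch (Change is a stand-in exposing only the .who attribute used here):

```python
class Change:
    def __init__(self, who):
        self.who = who

def blamelist(changes):
    """Unique committer names from a list of changes, sorted."""
    names = []
    for c in changes:
        if c.who not in names:
            names.append(c.who)
    names.sort()
    return names

who = blamelist([Change("zoe"), Change("ann"), Change("zoe")])
print(who)  # -> ['ann', 'zoe']
```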
It returns a Deferred which will fire when the build - finishes.""" + finishes. This Deferred is guaranteed to never errback.""" log.msg("%s.startBuild" % self) self.build_status = build_status - self.remote = remote + self.slavebuilder = slavebuilder + self.locks = [l.getLock(self.slavebuilder) for l in self.locks] + self.remote = slavebuilder.remote self.remote.notifyOnDisconnect(self.lostRemote) d = self.deferred = defer.Deferred() @@ -308,9 +314,27 @@ return d self.build_status.buildStarted(self) - self.startNextStep() + self.acquireLocks().addCallback(self._startBuild_2) return d + def acquireLocks(self, res=None): + log.msg("acquireLocks(step %s, locks %s)" % (self, self.locks)) + if not self.locks: + return defer.succeed(None) + for lock in self.locks: + if not lock.isAvailable(): + log.msg("Build %s waiting for lock %s" % (self, lock)) + d = lock.waitUntilAvailable(self) + d.addCallback(self.acquireLocks) + return d + # all locks are available, claim them all + for lock in self.locks: + lock.claim(self) + return defer.succeed(None) + + def _startBuild_2(self, res): + self.startNextStep() + def setupBuild(self, expectations): # create the actual BuildSteps. If there are any name collisions, we # add a count to the loser until it is unique. @@ -361,11 +385,8 @@ self.progress.setExpectationsFrom(expectations) # we are now ready to set up our BuildStatus. - self.build_status.setSourceStamp(self.maxChangeNumber) + self.build_status.setSourceStamp(self.source) self.build_status.setReason(self.reason) - self.build_status.setChanges(self.changes + - self.failedChanges + - self.unimportantChanges) self.build_status.setBlamelist(self.blamelist()) self.build_status.setProgress(self.progress) @@ -438,7 +459,7 @@ terminate = True return terminate - def lostRemote(self, remote): + def lostRemote(self, remote=None): # the slave went away. There are several possible reasons for this, # and they aren't necessarily fatal. 
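acquireLocks() above retries from the top whenever a lock it was waiting on becomes free, and only claims once every lock is available at the same time. A synchronous sketch of that loop, with a callback queue standing in for Twisted's waitUntilAvailable Deferreds (FakeLock and the function names are illustrative):

```python
class FakeLock:
    def __init__(self):
        self.owner = None
        self.waiters = []

    def is_available(self):
        return self.owner is None

    def wait_until_available(self, callback):
        self.waiters.append(callback)

    def claim(self, owner):
        self.owner = owner

    def release(self):
        # free the lock, then wake everyone who was waiting on it
        self.owner, waiters = None, self.waiters
        self.waiters = []
        for w in waiters:
            w()

def acquire_locks(build, locks, on_ready):
    # retry the whole scan whenever a lock we waited on frees up
    for lock in locks:
        if not lock.is_available():
            lock.wait_until_available(
                lambda: acquire_locks(build, locks, on_ready))
            return
    for lock in locks:       # all available: claim them all at once
        lock.claim(build)
    on_ready()

started = []
l = FakeLock()
l.claim("other-build")
acquire_locks("build-1", [l], lambda: started.append("build-1"))
l.release()   # frees the lock; the waiter retries, claims, and starts
```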
For now, kill the build, but # TODO: see if we can resume the build when it reconnects. @@ -449,7 +470,7 @@ log.msg(" stopping currentStep", self.currentStep) self.currentStep.interrupt(Failure(error.ConnectionLost())) - def stopBuild(self, reason): + def stopBuild(self, reason=""): # the idea here is to let the user cancel a build because, e.g., # they realized they committed a bug and they don't want to waste # the time building something that they know will fail. Another @@ -519,21 +540,19 @@ # XXX: also test a 'timing consistent' flag? log.msg(" setting expectations for next time") self.builder.setExpectations(self.progress) + reactor.callLater(0, self.releaseLocks) self.deferred.callback(self) self.deferred = None - def testsFinished(self, results): - """Accept a TestResults object.""" - self.builder.testsFinished(results) + def releaseLocks(self): + log.msg("releaseLocks(%s): %s" % (self, self.locks)) + for lock in self.locks: + lock.release(self) -class BuildControl(components.Adapter): - if implements: - implements(interfaces.IBuildControl) - else: - __implements__ = interfaces.IBuildControl, + # IBuildControl def getStatus(self): - return self.original.build_status + return self.build_status + + # stopBuild is defined earlier - def stopBuild(self, reason=""): - self.original.stopBuild(reason) Index: builder.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/process/builder.py,v retrieving revision 1.26 retrieving revision 1.27 diff -u -d -r1.26 -r1.27 --- builder.py 17 May 2005 10:14:10 -0000 1.26 +++ builder.py 19 Jul 2005 23:11:58 -0000 1.27 @@ -1,18 +1,157 @@ #! 
/usr/bin/python +import warnings + from twisted.python import log, components, failure from twisted.spread import pb from twisted.internet import reactor, defer -from buildbot import interfaces +from buildbot import interfaces, sourcestamp from buildbot.twcompat import implements from buildbot.status.progress import Expectations from buildbot.status import builder from buildbot.util import now from buildbot.process import base -class Builder(pb.Referenceable): +class SlaveBuilder(pb.Referenceable): + """I am the master-side representative for one of the + L{buildbot.slave.bot.SlaveBuilder} objects that lives in a remote + buildbot. When a remote builder connects, I query it for command versions + and then make it available to any Builds that are ready to run. """ + + remote = None + build = None + + def __init__(self, builder): + self.builder = builder + self.ping_watchers = [] + + def getSlaveCommandVersion(self, command, oldversion=None): + if self.remoteCommands is None: + # the slave is 0.5.0 or earlier + return oldversion + return self.remoteCommands.get(command) + + def attached(self, slave, remote, commands): + self.slave = slave + self.remote = remote + self.remoteCommands = commands # maps command name to version + log.msg("Buildslave %s attached to %s" % (slave.slavename, + self.builder.name)) + d = self.remote.callRemote("setMaster", self) + d.addErrback(self._attachFailure, "Builder.setMaster") + d.addCallback(self._attached2) + return d + + def _attached2(self, res): + d = self.remote.callRemote("print", "attached") + d.addErrback(self._attachFailure, "Builder.print 'attached'") + d.addCallback(self._attached3) + return d + + def _attached3(self, res): + # now we say they're really attached + return self + + def _attachFailure(self, why, where): + assert type(where) is str + log.msg(where) + log.err(why) + return why + + def detached(self): + self.slave = None + self.remote = None + self.remoteCommands = None + + def startBuild(self, build): + 
self.build = build + + def finishBuild(self): + self.build = None + + + def ping(self, timeout, status=None): + """Ping the slave to make sure it is still there. Returns a Deferred + that fires with True if it is. + + @param status: if you point this at a BuilderStatus, a 'pinging' + event will be pushed. + """ + + newping = not self.ping_watchers + d = defer.Deferred() + self.ping_watchers.append(d) + if newping: + if status: + event = status.addEvent(["pinging"], "yellow") + d2 = defer.Deferred() + d2.addCallback(self._pong_status, event) + self.ping_watchers.insert(0, d2) + # I think it will make the tests run smoother if the status + # is updated before the ping completes + Ping().ping(self.remote, timeout).addCallback(self._pong) + + return d + + def _pong(self, res): + watchers, self.ping_watchers = self.ping_watchers, [] + for d in watchers: + d.callback(res) + + def _pong_status(self, res, event): + if res: + event.text = ["ping", "success"] + event.color = "green" + else: + event.text = ["ping", "failed"] + event.color = "red" + event.finish() + +class Ping: + running = False + timer = None + + def ping(self, remote, timeout): + assert not self.running + self.running = True + log.msg("sending ping") + self.d = defer.Deferred() + remote.callRemote("print", "ping").addBoth(self._pong) + + # We use either our own timeout or the (long) TCP timeout to detect + # silently-missing slaves. This might happen because of a NAT + # timeout or a routing loop. If the slave just shuts down (and we + # somehow missed the FIN), we should get a "connection refused" + # message. + self.timer = reactor.callLater(timeout, self._ping_timeout, remote) + return self.d + + def _ping_timeout(self, remote): + log.msg("ping timeout") + # force the BotPerspective to disconnect, since this indicates that + # the bot is unreachable. 
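SlaveBuilder.ping() above coalesces concurrent pings: only the first caller actually sends one, later callers just join the watcher list, and the single pong answers everyone. A sketch with plain callables (Pinger and send_ping are illustrative stand-ins for the remote call machinery):

```python
class Pinger:
    def __init__(self, send_ping):
        self.send_ping = send_ping   # invoked once per outstanding ping
        self.watchers = []

    def ping(self, callback):
        newping = not self.watchers  # is a ping already in flight?
        self.watchers.append(callback)
        if newping:
            self.send_ping(self._pong)

    def _pong(self, result):
        watchers, self.watchers = self.watchers, []
        for w in watchers:
            w(result)

sent = []
p = Pinger(lambda done: sent.append(done))
got = []
p.ping(got.append)
p.ping(got.append)   # coalesced: no second ping is sent
assert len(sent) == 1
sent[0](True)        # the one pong answers both callers
```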
+ del self.timer + remote.broker.transport.loseConnection() + # the forcibly-lost connection will now cause the ping to fail + + def _pong(self, res): + if not self.running: + return + self.running = False + log.msg("ping finished") + if self.timer: + self.timer.cancel() + del self.timer + + if isinstance(res, failure.Failure): + self.d.callback(False) + else: + self.d.callback(True) + + +class Builder(pb.Referenceable): """I manage all Builds of a given type. Each Builder is created by an entry in the config file (the c['builders'] @@ -24,76 +163,72 @@ Build object defines when and how the build is performed, so a new Factory or Builder should be defined to control this behavior. - The Builder holds on to a number of these Build - objects, in various slots like C{.waiting}, C{.interlocked}, - C{.buildable}, and C{.currentBuild}. Incoming - L{Change} objects are passed to the - C{.waiting} build, and when it decides it is ready to go, I move it to - the C{.buildable} slot. When a slave becomes available, I move it to the - C{.currentBuild} slot and start it running. - - The Builder is also the master-side representative for one of the - L{buildbot.slave.bot.SlaveBuilder} objects that lives in a remote - buildbot. When a remote builder connects, I query it for command versions - and then make it available to any Builds that are ready to run. + The Builder holds on to a number of L{base.BuildRequest} objects in a + list named C{.buildable}. Incoming BuildRequest objects will be added to + this list, or (if possible) merged into an existing request. When a slave + becomes available, I will use my C{BuildFactory} to turn the request into + a new C{Build} object. The C{BuildRequest} is forgotten, the C{Build} + goes into C{.building} while it runs. Once the build finishes, I will + discard it. - I also manage Interlocks, periodic build timers, forced builds, progress - expectation (ETA) management, and some status delivery chores. 
+ I maintain a list of available SlaveBuilders, one for each connected + slave that the C{slavename} parameter says we can use. Some of these will + be idle, some of them will be busy running builds for me. If there are + multiple slaves, I can run multiple builds at once. - @type waiting: L{buildbot.process.base.Build} - @ivar waiting: a slot for a Build waiting for its 'tree stable' timer to - expire + I also manage forced builds, progress expectation (ETA) management, and + some status delivery chores. - @type interlocked: list of L{buildbot.process.base.Build} - @ivar interlocked: a slot for the Builds that are stable, but which must - wait for other Builds to complete successfully before - they can be run. + I am persisted in C{BASEDIR/BUILDERNAME/builder}, so I can remember how + long a build usually takes to run (in my C{expectations} attribute). This + pickle also includes the L{buildbot.status.builder.BuilderStatus} object, + which remembers the set of historic builds. - @type buildable: L{buildbot.process.base.Build} - @ivar buildable: a slot for a Build that is stable and ready to build, - but which is waiting for a buildslave to be available. + @type buildable: list of L{buildbot.process.base.BuildRequest} + @ivar buildable: BuildRequests that are ready to build, but which are + waiting for a buildslave to be available. 
- @type currentBuild: L{buildbot.process.base.Build} - @ivar currentBuild: a slot for the Build that actively running + @type building: list of L{buildbot.process.base.Build} + @ivar building: Builds that are actively running """ - remote = None - lastChange = None - buildNumber = 0 - periodicBuildTimer = None - buildable = None - currentBuild = None - status = "idle" - debug = False - wantToStartBuild = None expectations = None # this is created the first time we get a good build + START_BUILD_TIMEOUT = 10 def __init__(self, setup, builder_status): """ @type setup: dict @param setup: builder setup data, as stored in BuildmasterConfig['builders']. Contains name, - slavename, builddir, factory. + slavename, builddir, factory, locks. @type builder_status: L{buildbot.status.builder.BuilderStatus} """ self.name = setup['name'] self.slavename = setup['slavename'] self.builddir = setup['builddir'] self.buildFactory = setup['factory'] - - self.periodicBuildTime = setup.get('periodicBuildTime', None) + self.locks = setup.get("locks", []) + if setup.has_key('periodicBuildTime'): + raise ValueError("periodicBuildTime can no longer be defined as" + " part of the Builder: use scheduler.Periodic" + " instead") # build/wannabuild slots: Build objects move along this sequence - self.waiting = self.newBuild() - self.interlocked = [] + self.buildable = [] + self.building = [] - self.interlocks = [] # I watch these interlocks to know when to build - self.feeders = [] # I feed these interlocks + # buildslaves at our disposal. This maps SlaveBuilder instances to + # state, where state is one of "attaching", "idle", "pinging", + # "busy". "pinging" is used when a Build is about to start, to make + # sure that they're still alive. 
+ self.slaves = {} self.builder_status = builder_status self.builder_status.setSlavename(self.slavename) - self.watchers = {'attach': [], 'detach': []} + + # for testing, to help synchronize tests + self.watchers = {'attach': [], 'detach': [], 'idle': []} def setBotmaster(self, botmaster): self.botmaster = botmaster @@ -108,252 +243,201 @@ % (self.builddir, setup['builddir'])) if setup['factory'] != self.buildFactory: # compare objects diffs.append('factory changed') - if setup.get('periodicBuildTime', None) != self.periodicBuildTime: - diffs.append('periodicBuildTime changed from %s to %s' \ - % (self.periodicBuildTime, - setup.get('periodicBuildTime', None))) + oldlocks = [lock.name for lock in setup.get('locks',[])] + newlocks = [lock.name for lock in self.locks] + if oldlocks != newlocks: + diffs.append('locks changed from %s to %s' % (oldlocks, newlocks)) return diffs - def newBuild(self): - """ - Create a new build from our build factory and set ourself as the - builder. - - @rtype: L{buildbot.process.base.Build} - """ - b = self.buildFactory.newBuild() - b.setBuilder(self) - return b - - def watchInterlock(self, interlock): - """This builder will wait for the given interlock to open up before - it starts.""" - self.interlocks.append(interlock) - def stopWatchingInterlock(self, interlock): - self.interlocks.remove(interlock) + def __repr__(self): + return "" % self.name - def feedInterlock(self, interlock): - """The following interlocks will be fed by this build.""" - self.feeders.append(interlock) - def stopFeedingInterlock(self, interlock): - self.feeders.remove(interlock) + def submitBuildRequest(self, req): + req.submittedAt = now() + self.buildable.append(req) + req.requestSubmitted(self) + self.maybeStartBuild() - def __repr__(self): - return "" % self.name + def cancelBuildRequest(self, req): + if req in self.buildable: + self.buildable.remove(req) + return True + return False def __getstate__(self): d = self.__dict__.copy() - d['remote'] = None - 
d['currentBuild'] = None # XXX: failover to a new Build - d['periodicBuildTimer'] = None + # TODO: note that d['buildable'] can contain Deferreds + del d['building'] # TODO: move these back to .buildable? + del d['slaves'] return d - - def attached(self, remote, commands): + + def __setstate__(self, d): + self.__dict__ = d + self.building = [] + self.slaves = {} + + def fireTestEvent(self, name, with=None): + if with is None: + with = self + watchers = self.watchers[name] + self.watchers[name] = [] + for w in watchers: + w.callback(with) + + def attached(self, slave, remote, commands): """This is invoked by the BotPerspective when the self.slavename bot registers their builder. - @rtype : L{twisted.internet.defer.Deferred} + @type slave: L{buildbot.master.BotPerspective} + @param slave: the BotPerspective that represents the buildslave as a + whole + @type remote: L{twisted.spread.pb.RemoteReference} + @param remote: a reference to the L{buildbot.slave.bot.SlaveBuilder} + @type commands: dict: string -> string, or None + @param commands: provides the slave's version of each RemoteCommand + + @rtype: L{twisted.internet.defer.Deferred} @return: a Deferred that fires (with 'self') when the slave-side builder is fully attached and ready to accept commands. 
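The reconfig comparison above reports lock changes by diffing the lock names from the old and new setups. That check in isolation (Lock is a stand-in exposing only .name):

```python
class Lock:
    def __init__(self, name):
        self.name = name

def lock_diffs(old_locks, new_locks):
    """Report a config diff when the ordered list of lock names changed."""
    diffs = []
    oldnames = [l.name for l in old_locks]
    newnames = [l.name for l in new_locks]
    if oldnames != newnames:
        diffs.append("locks changed from %s to %s" % (oldnames, newnames))
    return diffs

print(lock_diffs([Lock("cpu")], [Lock("cpu"), Lock("disk")]))
```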
""" - if self.remote == remote: - # already attached to them - log.msg("Builder %s already attached" % self.name) - return defer.succeed(self) - if self.remote: - log.msg("WEIRD", self.remote, remote) - self.remote = remote - self.remoteCommands = commands # maps command name to version - log.msg("Builder %s attached" % self.name) - d = self.remote.callRemote("setMaster", self) - d.addErrback(self._attachFailure, "Builder.setMaster") - d.addCallback(self._attached2) - return d - - def _attachFailure(self, why, where): - assert type(where) is str - log.msg(where) - log.err(why) - - def _attached2(self, res): - d = self.remote.callRemote("print", "attached") - d.addErrback(self._attachFailure, "Builder.print 'attached'") - d.addCallback(self._attached3) + for s in self.slaves.keys(): + if s.slave == slave: + # already attached to them. TODO: how does this ever get + # reached? + log.msg("%s.attached: WEIRD slave %s already attached" + % (self, slave)) + return defer.succeed(self) + sb = SlaveBuilder(self) + self.slaves[sb] = "attaching" + d = sb.attached(slave, remote, commands) + d.addCallback(self._attached) + d.addErrback(self._not_attached, slave) return d - def _attached3(self, res): - # now we say they're really attached - self.builder_status.addPointEvent(['connect']) - if self.currentBuild: - # XXX: handle interrupted build: flunk the current buildStep, - # see if it can be restarted. buildStep.setBuilder(self) must be - # done to allow it to run finishStep() when it is complete. - log.msg("interrupted build!") - pass - self.startPeriodicBuildTimer() + def _attached(self, sb): + # TODO: make this .addSlaveEvent(slave.slavename, ['connect']) ? 
+ self.builder_status.addPointEvent(['connect', sb.slave.slavename]) + self.slaves[sb] = "idle" self.maybeStartBuild() - for w in self.watchers['attach']: - w.callback(self) - self.watchers['attach'] = [] + + self.fireTestEvent('attach') return self - def getSlaveCommandVersion(self, command, oldversion=None): - if self.remoteCommands is None: - # the slave is 0.5.0 or earlier - return oldversion - return self.remoteCommands.get(command) + def _not_attached(self, why, slave): + # already log.err'ed by SlaveBuilder._attachFailure + # TODO: make this .addSlaveEvent? + # TODO: remove from self.slaves + self.builder_status.addPointEvent(['failed', 'connect', + slave.slave.slavename]) + # TODO: add an HTMLLogFile of the exception + self.fireTestEvent('attach', why) - def detached(self): + def detached(self, slave): """This is called when the connection to the bot is lost.""" - log.msg("%s.detached" % self) - self.remote = None - reactor.callLater(0, self._detached) - # the current step will be stopped (via a notifyOnDisconnect - # callback), and the build will probably stop. + log.msg("%s.detached" % self, slave.slavename) + for sb in self.slaves.keys(): + if sb.slave == slave: + break + if self.slaves[sb] == "busy": + # the Build's .lostRemote method (invoked by a notifyOnDisconnect + # handler) will cause the Build to be stopped, probably right + # after the notifyOnDisconnect that invoked us finishes running. 
- def _detached(self): - if self.currentBuild: - log.msg("%s._detached: killing build" % self) - # wasn't enough - try: - self.currentBuild.stopBuild("slave lost") - except: - log.msg("currentBuild.stopBuild failed") - log.err() - self.currentBuild = None # TODO: should failover to a new Build - self.builder_status.addPointEvent(['disconnect']) - self.builder_status.currentlyOffline() - self.stopPeriodicBuildTimer() - log.msg("Builder %s detached" % self.name) - for w in self.watchers['detach']: - w.callback(self) - self.watchers['detach'] = [] - - def updateBigStatus(self): - if self.currentBuild: - return # leave it alone - if self.buildable and self.remote: - log.msg("(self.buildable and self.remote) shouldn't happen") - # maybeStartBuild should have moved this to self.currentBuild - # before we get to see it - elif self.buildable and not self.remote: - # TODO: make this a big-status - log.msg("want to start build, but we don't have a remote") - if self.interlocked: - # TODO: list all blocked interlocks - self.builder_status.currentlyInterlocked(self.interlocks) - elif self.waiting and self.waiting.nextBuildTime: - self.builder_status.currentlyWaiting(self.waiting.nextBuildTime) - # nextBuildTime == None means an interlock failed and the - # changes were merged into the next build, but we don't know - # when that will be. Call this state of affairs "idle" - elif self.remote: - self.builder_status.currentlyIdle() - else: - self.builder_status.currentlyOffline() - - def filesChanged(self, change): - """ - Tell the waiting L{buildbot.process.base.Build} that files have - changed. 
-
-        @type change: L{buildbot.changes.changes.Change}
-        """
-        # this is invoked by the BotMaster to distribute change notification
-        # we assume they are added in strictly increasing order
-        if not self.waiting:
-            self.waiting = self.newBuild()
-        self.waiting.addChange(change)
-        # eventually, our buildTimerFired() method will be called
-
-    def buildTimerFired(self, wb):
-        """
-        Called by the Build when the build timer fires.
+            #self.retryBuild(sb.build)
+            pass
-        @type wb: L{buildbot.process.base.Build}
-        @param wb: the waiting build that fires the timer
-        """
+        del self.slaves[sb]
-        if not self.interlocks:
-            # move from .waiting to .buildable
-            if self.buildable:
-                self.buildable.merge(wb)
-            else:
-                self.buildable = wb
-            self.waiting = None
-            self.maybeStartBuild()
-            return
-        # interlocked. Move from .waiting to .interlocked[]
-        self.interlocked.append(wb)
-        self.waiting = None
-        # tell them to ask build interlock when they can proceed
-        wb.checkInterlocks(self.interlocks)
-        self.updateBigStatus()
-        # if the interlocks are not blocked, interlockDone may be fired
-        # inside checkInterlocks
-
-    def interlockPassed(self, b):
-        log.msg("%s: interlockPassed" % self)
-        self.interlocked.remove(b)
-        if self.buildable:
-            self.buildable.merge(b)
-        else:
-            self.buildable = b
-        self.maybeStartBuild()
-
-    def interlockFailed(self, b):
-        log.msg("%s: interlockFailed" % self)
-        # who do we merge to?
-        assert(self.interlocked[0] == b)
-        self.interlocked.remove(b)
-        if self.interlocked:
-            target = self.interlocked[0]
-        elif self.waiting:
-            target = self.waiting
-        else:
-            self.waiting = self.newBuild()
-            target = self.waiting
-        target.failMerge(b)
+
+        # TODO: make this .addSlaveEvent?
+        self.builder_status.addPointEvent(['disconnect', slave.slavename])
+        sb.detached() # inform the SlaveBuilder that their slave went away
         self.updateBigStatus()
-
-    def startPeriodicBuildTimer(self):
-        self.stopPeriodicBuildTimer()
-        if self.periodicBuildTime:
-            t = reactor.callLater(self.periodicBuildTime,
-                                  self.doPeriodicBuild)
-            self.periodicBuildTimer = t
-
-    def stopPeriodicBuildTimer(self):
-        if self.periodicBuildTimer:
-            self.periodicBuildTimer.cancel()
-            self.periodicBuildTimer = None
+        self.fireTestEvent('detach')
-
-    def doPeriodicBuild(self):
-        self.periodicBuildTimer = None
-        self.forceBuild(None, "periodic build")
-        self.startPeriodicBuildTimer()
+
+    def updateBigStatus(self):
+        if not self.slaves:
+            self.builder_status.setBigState("offline")
+        elif self.building:
+            self.builder_status.setBigState("building")
+        else:
+            self.builder_status.setBigState("idle")
+            self.fireTestEvent('idle')
 
     def maybeStartBuild(self):
-        if self.currentBuild:
-            return # must wait
+        log.msg("maybeStartBuild: %s %s" % (self.buildable, self.slaves))
         if not self.buildable:
             self.updateBigStatus()
             return # nothing to do
-        if not self.remote:
-            #log.msg("want to start build, but we don't have a remote")
+        idle_slaves = [sb for sb in self.slaves.keys()
+                       if self.slaves[sb] == "idle"]
+        if not idle_slaves:
+            log.msg("%s: want to start build, but we don't have a remote"
+                    % self)
             self.updateBigStatus()
             return
-        # move to .building, start it
-        self.currentBuild = self.buildable
-        self.buildable = None
-        return self.startBuild(self.currentBuild)
+        sb = idle_slaves[0]
-
-    def startBuild(self, build):
-        log.msg("starting build %s" % build)
-        d = self.remote.callRemote("startBuild") # informational courtesy
-        d.addErrback(self._startBuildFailed, build)
+
+        # there is something to build, and there is a slave on which to build
+        # it. Grab the oldest request, see if we can merge it with anything
+        # else.
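The rewritten updateBigStatus above collapses the builder's coarse ("big") status into three states driven by two pieces of state. A minimal sketch of that selection logic (the function name is illustrative, not buildbot's):

```python
# Pick the builder's coarse status the way the new updateBigStatus does:
# no attached slaves means "offline", any build in progress means
# "building", otherwise "idle".
def big_state(slaves, building):
    if not slaves:
        return "offline"
    if building:
        return "building"
    return "idle"
```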
+        req = self.buildable.pop(0)
+        mergers = []
+        for br in self.buildable[:]:
+            if req.canBeMergedWith(br):
+                self.buildable.remove(br)
+                mergers.append(br)
+        requests = [req] + mergers
+
+        # Create a new build from our build factory and set ourself as the
+        # builder.
+        build = self.buildFactory.newBuild(requests)
+        build.setBuilder(self)
+        build.setLocks(self.locks)
+
+        # start it
+        self.startBuild(build, sb)
+
+    def startBuild(self, build, sb):
+        """Start a build on the given slave.
+        @param build: the L{base.Build} to start
+        @param slave: the L{SlaveBuilder} which will host this build
+
+        @return: a Deferred which fires with a L{base.BuildControl} that can
+        be used to stop the Build, or to access a
+        L{buildbot.status.builder.BuildStatus} which will watch the Build as
+        it runs. """
+
+        self.building.append(build)
+
+        # claim the slave
+        self.slaves[sb] = "pinging"
+        sb.startBuild(build)
+
+        self.updateBigStatus()
+
+        log.msg("starting build %s.. pinging the slave" % build)
+        # ping the slave to make sure they're still there. If they've fallen
+        # off the map (due to a NAT timeout or something), this will fail in
+        # a couple of minutes, depending upon the TCP timeout. TODO: consider
+        # making this time out faster, or at least characterize the likely
+        # duration.
+        d = sb.ping(self.START_BUILD_TIMEOUT)
+        d.addCallback(self._startBuild_1, build, sb)
+        return d
+
+    def _startBuild_1(self, res, build, sb):
+        if not res:
+            return self._startBuildFailed("slave ping failed", build, sb)
+        # The buildslave is ready to go.
+        self.slaves[sb] = "building"
+        d = sb.remote.callRemote("startBuild")
+        d.addCallbacks(self._startBuild_2, self._startBuildFailed,
+                       callbackArgs=(build,sb), errbackArgs=(build,sb))
+        return d
 
+    def _startBuild_2(self, res, build, sb):
         # create the BuildStatus object that goes with the Build
         bs = self.builder_status.newBuild()
@@ -361,32 +445,51 @@
         # BuildStatus that it has started, which will announce it to the
         # world (through our BuilderStatus object, which is its parent).
         # Finally it will start the actual build process.
-        d = build.startBuild(bs, self.expectations, self.remote)
-        d.addCallback(self.buildFinished)
-        d.addErrback(self._buildNotFinished)
-        control = base.BuildControl(build)
-        return control
-
-    def _buildNotFinished(self, why):
-        log.msg("_buildNotFinished")
-        log.err()
+        d = build.startBuild(bs, self.expectations, sb)
+        d.addCallback(self.buildFinished, sb)
+        d.addErrback(log.err) # this shouldn't happen. if it does, the slave
+                              # will be wedged
+        for req in build.requests:
+            req.buildStarted(build, bs)
+        return build # this is the IBuildControl
 
-    def _startBuildFailed(self, why, build):
+    def _startBuildFailed(self, why, build, sb):
+        # put the build back on the buildable list
         log.msg("I tried to tell the slave that the build %s started, but "
                 "remote_startBuild failed: %s" % (build, why))
+        # release the slave
+        sb.finishBuild()
+        if sb in self.slaves:
+            self.slaves[sb] = "idle"
 
-    def testsFinished(self, results):
-        # XXX: add build number, datestamp, Change information
-        #self.testTracker.testsFinished(results)
-        pass
-
-    def buildFinished(self, build):
-        self.currentBuild = None
-        for f in self.feeders:
-            f.buildFinished(self.name, build.maxChangeNumber,
-                            (build.results == builder.SUCCESS))
+        log.msg("re-queueing the BuildRequest")
+        self.building.remove(build)
+        for req in build.requests:
+            self.buildable.insert(0, req) # they get first priority
+
+        # other notifyOnDisconnect calls will mark the slave as disconnected.
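The request-merging step this patch introduces pops the oldest queued BuildRequest and folds in every later request it can be merged with, so one build services all of them. A minimal sketch of that policy (class and function names are illustrative; buildbot's canBeMergedWith compares full SourceStamps, simplified here to a branch name):

```python
# Pop the oldest request plus every later request it can merge with,
# leaving unmergeable requests queued for a later build.
class BuildRequest:
    def __init__(self, reason, branch):
        self.reason = reason
        self.branch = branch

    def canBeMergedWith(self, other):
        # stand-in for buildbot's SourceStamp comparison
        return self.branch == other.branch

def pop_mergeable(buildable):
    req = buildable.pop(0)
    mergers = [br for br in buildable if req.canBeMergedWith(br)]
    for br in mergers:
        buildable.remove(br)
    return [req] + mergers

queue = [BuildRequest("change 1", "trunk"),
         BuildRequest("change 2", "branch-a"),
         BuildRequest("change 3", "trunk")]
requests = pop_mergeable(queue)
```

Both "trunk" requests end up in one build, while the "branch-a" request stays queued.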
+        # Re-try after they have fired, maybe there's another slave
+        # available. TODO: I don't like these un-synchronizable callLaters..
+        # a better solution is to mark the SlaveBuilder as disconnected
+        # ourselves, but we'll need to make sure that they can tolerate
+        # multiple disconnects first.
+        reactor.callLater(0, self.maybeStartBuild)
+
+    def buildFinished(self, build, sb):
+        """This is called when the Build has finished (either success or
+        failure). Any exceptions during the build are reported with
+        results=FAILURE, not with an errback."""
+
+        # release the slave
+        sb.finishBuild()
+        if sb in self.slaves:
+            self.slaves[sb] = "idle"
+        # otherwise the slave probably got removed in detach()
+
+        self.building.remove(build)
+        for req in build.requests:
+            req.finished(build.build_status)
         self.maybeStartBuild()
-        return build.results # give to whoever started the build
 
     def setExpectations(self, progress):
         """Mark the build as successful and update expectations for the next
@@ -404,74 +507,10 @@
         log.msg("new expectations: %s seconds" % \
                 self.expectations.expectedBuildTime())
 
-    def forceBuild(self, who, reason):
-        # only add a build if there isn't anything already building
-        if self.currentBuild:
-            log.msg(self,
-                    "forceBuild(%s,%s) ignored because a build is running" % \
-                    (who, reason))
-            raise interfaces.BuilderInUseError
-        if not self.remote:
-            log.msg(self,
-                    "forceBuild(%s,%s) ignored because we have no slave" % \
-                    (who, reason))
-            raise interfaces.NoSlaveError
-        if self.buildable:
-            self.buildable.reason = reason
-        else:
-            self.buildable = self.newBuild()
-            self.buildable.reason = reason
-        return self.maybeStartBuild()
-
     def shutdownSlave(self):
         if self.remote:
             self.remote.callRemote("shutdown")
-
-
-class Ping:
-    def ping(self, status, remote, timeout):
-        if not remote:
-            status.addPointEvent(["ping", "no slave"], "red")
-            return defer.succeed(False) # interfaces.NoSlaveError
-        self.event = status.addEvent(["pinging"], "yellow")
-        self.active = True
-        self.d = defer.Deferred()
-        d = remote.callRemote("print", "ping")
-        d.addBoth(self._pong)
-
-        # We use either our own timeout or the (long) TCP timeout to detect
-        # silently-missing slaves. This might happen because of a NAT
-        # timeout or a routing loop. If the slave just shuts down (and we
-        # somehow missed the FIN), we should get a "connection refused"
-        # message.
-        self.timer = reactor.callLater(timeout, self.timeout)
-        return self.d
-
-    def timeout(self):
-        self.timer = None
-        self._pong(failure.Failure(interfaces.NoSlaveError("timeout")))
-
-    def _pong(self, res):
-        if not self.active:
-            return
-        self.active = False
-        if self.timer:
-            self.timer.cancel()
-        e = self.event
-        if isinstance(res, failure.Failure):
-            e.text = ["ping", "failed"]
-            e.color = "red"
-            ponged = False
-            # TODO: force the BotPerspective to disconnect, since this
-            # indicates that the bot is unreachable. That will also append a
-            # "disconnect" event to the builder_status, terminating this
-            # "ping failed" event.
-        else:
-            e.text = ["ping", "success"]
-            e.color = "green"
-            ponged = True
-        e.finish()
-        self.d.callback(ponged)
 
 class BuilderControl(components.Adapter):
     if implements:
@@ -480,18 +519,47 @@
         __implements__ = interfaces.IBuilderControl,
 
     def forceBuild(self, who, reason):
-        bc = self.original.forceBuild(who, reason)
-        return bc
+        """This is a shortcut for building the current HEAD. You get back a
+        BuildRequest, just as if you'd asked politely. To get control of the
+        resulting build, you'll need to wait for req.waitUntilStarted().
+
+        This shortcut peeks into the Builder and raises an exception if there
+        is no slave available, to make backwards-compatibility a little
+        easier.
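The Ping helper being removed above (its replacement lives on the SlaveBuilder) guards a remote call with its own timer so a silently-vanished slave is detected before the long TCP timeout; whichever of reply or timeout fires first wins, and the loser is ignored via a one-shot `active` flag. A synchronous sketch of that race guard (no Twisted; names are illustrative):

```python
# One-shot ping outcome: the first of on_reply/on_timeout to fire decides
# the result, the other is silently dropped, mirroring Ping._pong's
# "if not self.active: return" guard.
class PingGuard:
    def __init__(self):
        self.active = True
        self.result = None

    def _pong(self, ok):
        if not self.active:   # both reply and timer fired: keep the first
            return
        self.active = False
        self.result = ok

    def on_reply(self):
        self._pong(True)

    def on_timeout(self):
        self._pong(False)
```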
+ """ + + warnings.warn("Please use BuilderControl.requestBuild instead", + category=DeprecationWarning, stacklevel=1) + idle_slaves = [sb for sb in self.original.slaves + if self.original.slaves[sb] == "idle"] + if not idle_slaves: + if self.original.building: + raise interfaces.BuilderInUseError("All slaves are in use") + raise interfaces.NoSlaveError("There are no slaves connected") + req = base.BuildRequest(reason, sourcestamp.SourceStamp()) + self.requestBuild(req) + return req.waitUntilStarted() + + def requestBuild(self, req): + self.original.submitBuildRequest(req) + + def getPendingBuilds(self): + # return IBuildRequestControl objects + raise NotImplementedError def getBuild(self, number): - b = self.original.currentBuild - if b and b.build_status.number == number: - return base.BuildControl(b) + for b in self.original.building: + if b.build_status.number == number: + return b return None def ping(self, timeout=30): - d = Ping().ping(self.original.builder_status, - self.original.remote, timeout) + if not self.original.slaves: + self.original.builder_status.addPointEvent(["ping", "no slave"], + "red") + return defer.succeed(False) # interfaces.NoSlaveError + d = self.original.slaves.keys()[0].ping(timeout, + self.original.builder_status) return d components.registerAdapter(BuilderControl, Builder, interfaces.IBuilderControl) Index: factory.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/process/factory.py,v retrieving revision 1.9 retrieving revision 1.10 diff -u -d -r1.9 -r1.10 --- factory.py 24 Apr 2005 21:30:24 -0000 1.9 +++ factory.py 19 Jul 2005 23:11:58 -0000 1.10 @@ -15,21 +15,21 @@ @type buildClass: L{buildbot.process.base.Build} """ buildClass = Build - treeStableTimer = None steps = [] useProgress = 1 - compare_attrs = ['buildClass', 'treeStableTimer', 'steps', 'useProgress'] + compare_attrs = ['buildClass', 'steps', 'useProgress'] def __init__(self, steps=None): if steps is 
None: steps = [] self.steps = steps - def newBuild(self): - b = self.buildClass() + def newBuild(self, request): + """Create a new Build instance. + @param request: a L{base.BuildRequest} describing what is to be built + """ + b = self.buildClass(request) b.useProgress = self.useProgress b.setSteps(self.steps) - if self.treeStableTimer: - b.treeStableTimer = self.treeStableTimer return b --- interlock.py DELETED --- Index: step.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/process/step.py,v retrieving revision 1.66 retrieving revision 1.67 diff -u -d -r1.66 -r1.67 --- step.py 17 May 2005 04:40:55 -0000 1.66 +++ step.py 19 Jul 2005 23:11:57 -0000 1.67 @@ -403,7 +403,7 @@ flunkOnFailure = False warnOnWarnings = False warnOnFailure = False - parms = ['build', 'name', + parms = ['build', 'name', 'locks', 'haltOnFailure', 'flunkOnWarnings', 'flunkOnFailure', @@ -411,6 +411,7 @@ 'warnOnFailure',] name = "generic" + locks = [] progressMetrics = [] # 'time' is implicit useProgress = True # set to False if step is really unpredictable build = None @@ -468,17 +469,45 @@ self.remote = remote self.deferred = defer.Deferred() + # convert all locks into their real form (SlaveLocks get narrowed + # down to the slave that this build is being run on) + self.locks = [l.getLock(self.build.slavebuilder) for l in self.locks] + for l in self.locks: + if l in self.build.locks: + log.msg("Hey, lock %s is claimed by both a Step (%s) and the" + " parent Build (%s)" % (l, self, self.build)) + raise RuntimeError("lock claimed by both Step and Build") + d = self.acquireLocks() + d.addCallback(self._startStep_2) + return self.deferred + + def acquireLocks(self, res=None): + log.msg("acquireLocks(step %s, locks %s)" % (self, self.locks)) + if not self.locks: + return defer.succeed(None) + for lock in self.locks: + if not lock.isAvailable(): + log.msg("step %s waiting for lock %s" % (self, lock)) + d = 
lock.waitUntilAvailable(self) + d.addCallback(self.acquireLocks) + return d + # all locks are available, claim them all + for lock in self.locks: + lock.claim(self) + return defer.succeed(None) + + def _startStep_2(self, res): if self.progress: self.progress.start() self.step_status.stepStarted() try: skip = self.start() if skip == SKIPPED: + reactor.callLater(0, self.releaseLocks) reactor.callLater(0, self.deferred.callback, SKIPPED) except: log.msg("BuildStep.startStep exception in .start") self.failed(Failure()) - return self.deferred def start(self): """Begin the step. Override this method and add code to do local @@ -537,10 +566,16 @@ ['step', 'interrupted'] or ['remote', 'lost']""" pass + def releaseLocks(self): + log.msg("releaseLocks(%s): %s" % (self, self.locks)) + for lock in self.locks: + lock.release(self) + def finished(self, results): if self.progress: self.progress.finish() self.step_status.stepFinished(results) + self.releaseLocks() self.deferred.callback(results) def failed(self, why): @@ -565,13 +600,19 @@ # the progress stuff may still be whacked (the StepStatus may # think that it is still running), but the build overall will now # finish + try: + self.releaseLocks() + except: + log.msg("exception while releasing locks") + log.err() + log.msg("BuildStep.failed now firing callback") self.deferred.callback(EXCEPTION) # utility methods that BuildSteps may find useful def slaveVersion(self, command, oldversion=None): - return self.build.builder.getSlaveCommandVersion(command, oldversion) + return self.build.getSlaveCommandVersion(command, oldversion) def addLog(self, name): loog = self.step_status.addLog(name) @@ -975,16 +1016,20 @@ % self.name) return SKIPPED - # can we construct a source stamp? - #revision = None # default: use the latest sources (-rHEAD) - revision, patch = self.build.getSourceStamp() - # 'patch' is None or a tuple of (patchlevel, diff) + # what source stamp would this build like to use? 
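The acquireLocks loop added above waits on the first unavailable lock and re-enters itself when that lock's waiter fires; only when every lock is available are they all claimed atomically. A synchronous sketch of the same re-entry pattern (no Deferreds; class and function names are illustrative, not buildbot's):

```python
# A toy lock with waiter callbacks, and an acquire-all loop that parks
# itself on the first busy lock and retries from scratch when woken.
class SimpleLock:
    def __init__(self):
        self.owner = None
        self.waiters = []

    def isAvailable(self):
        return self.owner is None

    def claim(self, who):
        assert self.owner is None
        self.owner = who

    def release(self, who):
        assert self.owner is who
        self.owner = None
        if self.waiters:
            self.waiters.pop(0)()   # wake the next waiter

    def waitUntilAvailable(self, callback):
        self.waiters.append(callback)

def acquire_all(step, locks):
    """Claim every lock, re-entering when a blocked lock is released."""
    for lock in locks:
        if not lock.isAvailable():
            lock.waitUntilAvailable(lambda: acquire_all(step, locks))
            return False          # parked; will retry on release
    for lock in locks:            # all available: claim them all
        lock.claim(step)
    return True
```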
+        s = self.build.getSourceStamp()
+        # if branch is None, then use the Step's "default" branch
+        branch = s.branch or self.branch
+        # if revision is None, use the latest sources (-rHEAD)
+        revision = s.revision
         if not revision and not self.alwaysUseLatest:
-            changes = self.build.allChanges()
-            revision = self.computeSourceRevision(changes)
-        self.args['revision'] = revision
-        self.args['patch'] = patch
-        self.startVC()
+            revision = self.computeSourceRevision(s.changes)
+        # if patch is None, then do not patch the tree after checkout
+
+        # 'patch' is None or a tuple of (patchlevel, diff)
+        patch = s.patch
+
+        self.startVC(branch, revision, patch)
 
 
 class CVS(Source):
@@ -1009,7 +1054,7 @@
     # called with each complete line.
 
     def __init__(self, cvsroot, cvsmodule,
-                 global_options=[], branch="HEAD", checkoutDelay=None,
+                 global_options=[], branch=None, checkoutDelay=None,
                  login=None,
                  clobber=0, export=0, copydir=None,
                  **kwargs):
@@ -1038,9 +1083,10 @@
                        it was previously performed or not.
 
         @type  branch: string
-        @param branch: a string to be used in a '-r' argument to specify
-                       which named branch of the source tree should be
-                       used for this checkout. Defaults to 'HEAD'.
+        @param branch: the default branch name, will be used in a '-r'
+                       argument to specify which branch of the source tree
+                       should be used for this checkout. Defaults to None,
+                       which means to use 'HEAD'.
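The branch-defaulting rule this patch adds reads as a simple precedence chain: the build's SourceStamp branch wins, then the Step's configured default, and CVS finally maps "no branch at all" to HEAD. A sketch of that chain (the function name is illustrative):

```python
# Precedence for choosing the branch to check out, as in the new
# Source.startStep plus CVS.startVC: stamp branch, then the Step's
# default, then the VC system's fallback (HEAD for CVS).
def effective_branch(stamp_branch, step_default, fallback="HEAD"):
    return stamp_branch or step_default or fallback
```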
 
         @type  checkoutDelay: int or None
         @param checkoutDelay: if not None, the number of seconds to put
@@ -1065,6 +1111,7 @@
                        ,v files)."""
 
         self.checkoutDelay = checkoutDelay
+        self.branch = branch
 
         if not kwargs.has_key('mode') and (clobber or export or copydir):
             # deal with old configs
@@ -1084,7 +1131,6 @@
         self.args.update({'cvsroot': cvsroot,
                           'cvsmodule': cvsmodule,
                           'global_options': global_options,
-                          'branch': branch,
                           'login': login,
                           })
 
@@ -1095,10 +1141,17 @@
         if self.checkoutDelay is not None:
             when = lastChange + self.checkoutDelay
         else:
-            when = lastChange + self.build.treeStableTimer / 2
+            lastSubmit = max([r.submittedAt for r in self.build.requests])
+            when = (lastChange + lastSubmit) / 2
         return formatdate(when)
 
-    def startVC(self):
+    def startVC(self, branch, revision, patch):
+        if branch is None:
+            branch = "HEAD"
+        self.args['branch'] = branch
+        self.args['revision'] = revision
+        self.args['patch'] = patch
+
         if self.args['branch'] == "HEAD" and self.args['revision']:
             # special case. 'cvs update -r HEAD -D today' gives no files
             # TODO: figure out why, see if it applies to -r BRANCH
@@ -1126,14 +1179,28 @@
 
     name = 'svn'
 
-    def __init__(self, svnurl, directory=None, **kwargs):
+    def __init__(self, svnurl=None, base_url=None, default_branch=None,
+                 directory=None, **kwargs):
         """
         @type  svnurl: string
         @param svnurl: the URL which points to the Subversion server,
                        combining the access method (HTTP, ssh, local file),
-                       the repository host/port, the repository path,
-                       the sub-tree within the repository, and the branch
-                       to check out.
+                       the repository host/port, the repository path, the
+                       sub-tree within the repository, and the branch to
+                       check out. Using C{svnurl} does not enable builds of
+                       alternate branches: use C{base_url} to enable this.
+                       Use exactly one of C{svnurl} and C{base_url}.
+
+        @param base_url: if branches are enabled, this is the base URL to
+                         which a branch name will be appended. It should
+                         probably end in a slash. Use exactly one of
+                         C{svnurl} and C{base_url}.
+
+        @param default_branch: if branches are enabled, this is the branch
+                               to use if the Build does not specify one
+                               explicitly. It will simply be appended
+                               to C{base_url} and the result handed to
+                               the SVN command.
         """
 
         if not kwargs.has_key('workdir') and directory is not None:
@@ -1141,9 +1208,16 @@
             warnings.warn("Please use workdir=, not directory=",
                           DeprecationWarning)
             kwargs['workdir'] = directory
+
+        if not svnurl and not base_url:
+            raise ValueError("you must use exactly one of svnurl and base_url")
+
+        self.svnurl = svnurl
+        self.base_url = base_url
+        self.branch = default_branch
+
         Source.__init__(self, **kwargs)
-        self.args['svnurl'] = svnurl
 
     def computeSourceRevision(self, changes):
         if not changes:
@@ -1151,7 +1225,8 @@
         lastChange = max([c.revision for c in changes])
         return lastChange
 
-    def startVC(self):
+    def startVC(self, branch, revision, patch):
+        # accommodate old slaves
         errorMessage = None
         slavever = self.slaveVersion("svn", "old")
@@ -1167,13 +1242,20 @@
             log.msg("WARNING: this slave only does mode=update")
             assert self.args['mode'] != "export" # more serious
             self.args['directory'] = self.args['workdir']
-            if self.args['revision'] is not None:
+            if revision is not None:
                 # 0.5.0 can only do HEAD
                 errorMessage = "WARNING: this slave can only update to HEAD"
-                errorMessage += ", not revision=%s\n" % self.args['revision']
+                errorMessage += ", not revision=%s\n" % revision
                 log.msg("WARNING: this slave only does -rHEAD")
-                self.args['revision'] = "HEAD" # interprets this key differently
-            assert not self.args['patch'] # 0.5.0 slave can't do patch
+                revision = "HEAD" # interprets this key differently
+            assert not patch # 0.5.0 slave can't do patch
+
+        if self.svnurl:
+            self.args['svnurl'] = self.svnurl
+        else:
+            self.args['svnurl'] = self.base_url + branch
+        self.args['revision'] = revision
+        self.args['patch'] = patch
 
         self.cmd = LoggedRemoteCommand("svn", self.args)
         ShellCommand.start(self, errorMessage)
@@ -1191,19 +1273,47 @@
 
     name = "darcs"
 
-    def __init__(self, repourl, **kwargs):
+    def __init__(self, repourl=None, base_url=None, default_branch=None,
+                 **kwargs):
         """
         @type  repourl: string
-        @param repourl: the URL which points at the Darcs repository
+        @param repourl: the URL which points at the Darcs repository. This
+                        is used as the default branch. Using C{repourl} does
+                        not enable builds of alternate branches: use
+                        C{base_url} to enable this. Use either C{repourl} or
+                        C{base_url}, not both.
+
+        @param base_url: if branches are enabled, this is the base URL to
+                         which a branch name will be appended. It should
+                         probably end in a slash. Use exactly one of
+                         C{repourl} and C{base_url}.
+
+        @param default_branch: if branches are enabled, this is the branch
+                               to use if the Build does not specify one
+                               explicitly. It will simply be appended to
+                               C{base_url} and the result handed to the
+                               'darcs pull' command.
         """
         assert kwargs['mode'] != "export", \
                "Darcs does not have an 'export' mode"
+        if (not repourl and not base_url) or (repourl and base_url):
+            raise ValueError("you must provide exactly one of repourl and"
+                             " base_url")
+        self.repourl = repourl
+        self.base_url = base_url
+        self.branch = default_branch
         Source.__init__(self, **kwargs)
-        self.args['repourl'] = repourl
 
-    def startVC(self):
+    def startVC(self, branch, revision, patch):
         slavever = self.slaveVersion("darcs")
         assert slavever, "slave is too old, does not know about darcs"
+
+        if self.repourl:
+            self.args['repourl'] = self.repourl
+        else:
+            self.args['repourl'] = self.base_url + branch
+        self.args['revision'] = revision
+        self.args['patch'] = patch
 
         self.cmd = LoggedRemoteCommand("darcs", self.args)
         ShellCommand.start(self)
@@ -1218,10 +1328,14 @@
         @type  repourl: string
         @param repourl: the URL which points at the git repository
         """
+        self.branch = None # TODO
         Source.__init__(self, **kwargs)
         self.args['repourl'] = repourl
 
-    def startVC(self):
+    def startVC(self, branch, revision, patch):
+        self.args['branch'] = branch
+        self.args['revision'] = revision
+        self.args['patch'] = patch
         slavever = self.slaveVersion("git")
         assert slavever, "slave is too old, does not know about git"
         self.cmd = LoggedRemoteCommand("git", self.args)
@@ -1246,38 +1360,21 @@
                     pathname of a local directory instead.
 
         @type  version: string
-        @param version: the category--branch--version to check out
+        @param version: the category--branch--version to check out. This is
+                        the default branch. If a build specifies a different
+                        branch, it will be used instead of this.
 
         @type  archive: string
         @param archive: The archive name. If provided, it must match the one
                         that comes from the repository. If not, the
                         repository's default will be used.
         """
+        self.branch = version
         Source.__init__(self, **kwargs)
         self.args.update({'url': url,
-                          'version': version,
                           'archive': archive,
                           })
 
-    def checkSlaveVersion(self):
-        slavever = self.slaveVersion("arch")
-        assert slavever, "slave is too old, does not know about arch"
-        # slave 1.28 and later understand 'revision'
-        oldslave = False
-        try:
-            if slavever.startswith("1.") and int(slavever[2:]) < 28:
-                oldslave = True
-        except ValueError:
-            pass
-        if oldslave:
-            if not self.alwaysUseLatest:
-                log.msg("warning, slave is too old to use a revision")
-
-    def startVC(self):
-        self.checkSlaveVersion()
-        self.cmd = LoggedRemoteCommand("arch", self.args)
-        ShellCommand.start(self)
-
     def computeSourceRevision(self, changes):
         # in Arch, fully-qualified revision numbers look like:
         # arch at buildbot.sourceforge.net--2004/buildbot--dev--0--patch-104
@@ -1302,6 +1399,29 @@
             return "base-0"
         return "patch-%d" % lastChange
 
+    def checkSlaveVersion(self):
+        slavever = self.slaveVersion("arch")
+        assert slavever, "slave is too old, does not know about arch"
+        # slave 1.28 and later understand 'revision'
+        oldslave = False
+        try:
+            if slavever.startswith("1.") and int(slavever[2:]) < 28:
+                oldslave = True
+        except ValueError:
+            pass
+        if oldslave:
+            if not self.alwaysUseLatest:
+                log.msg("warning, slave is too old to use a revision")
+
+    def startVC(self, branch, revision, patch):
+        self.args['version'] = branch
+        self.args['revision'] = revision
+        self.args['patch'] = patch
+        self.checkSlaveVersion()
+        self.cmd = LoggedRemoteCommand("arch", self.args)
+        ShellCommand.start(self)
+
+
 class Bazaar(Arch):
     """Bazaar is an alternative client for Arch repositories. baz is mostly
     compatible with tla, but archive registration is slightly different."""
@@ -1323,9 +1443,9 @@
                         buildslave will attempt to get sources from the wrong
                         archive.
         """
+        self.branch = version
         Source.__init__(self, **kwargs)
         self.args.update({'url': url,
-                          'version': version,
                           'archive': archive,
                           })
 
@@ -1341,7 +1461,10 @@
             pass
         assert not oldslave, "slave is too old, does not know about baz"
 
-    def startVC(self):
+    def startVC(self, branch, revision, patch):
+        self.args['version'] = branch
+        self.args['revision'] = revision
+        self.args['patch'] = patch
         self.checkSlaveVersion()
         self.cmd = LoggedRemoteCommand("bazaar", self.args)
         ShellCommand.start(self)
@@ -1363,7 +1486,7 @@
                           'view': view,
                           })
 
-    def startVC(self):
+    def startVC(self, branch, revision, patch):
         self.cmd = LoggedRemoteCommand("p4", self.args)
         ShellCommand.start(self)
 
@@ -1389,6 +1512,7 @@
 
     def __init__(self, p4port, **kwargs):
         assert kwargs['mode'] == "copy", "P4Sync can only be used in mode=copy"
+        self.branch = None
         Source.__init__(self, **kwargs)
         self.args['p4port'] = p4port
 
@@ -1398,7 +1522,7 @@
         lastChange = max([c.revision for c in changes])
         return lastChange
 
-    def startVC(self):
+    def startVC(self, branch, revision, patch):
         slavever = self.slaveVersion("p4sync")
         assert slavever, "slave is too old, does not know about p4"
         self.cmd = LoggedRemoteCommand("p4sync", self.args)
@@ -1440,8 +1564,8 @@
         self.finished(SUCCESS)
 
 class FailingDummy(Dummy):
-    """I am a dummy no-op step that 'runs' master-side and raises an
-    Exception after by default 5 seconds."""
+    """I am a dummy no-op step that 'runs' master-side and finishes (with a
+    FAILURE status) after 5 seconds."""
 
     name = "failing dummy"
 
@@ -1451,13 +1575,8 @@
         self.timer = reactor.callLater(self.timeout, self.done)
 
     def done(self):
-        class Boom(Exception):
-            pass
-        try:
-            raise Boom("boom")
-        except Boom:
-            f = Failure()
-            self.failed(f)
+        self.step_status.setColor("red")
+        self.finished(FAILURE)
 
 # subclasses from Shell Command to get the output reporting
 class RemoteDummy(ShellCommand):

From warner at users.sourceforge.net  Tue Jul 19 23:12:01 2005
From: warner at users.sourceforge.net (Brian Warner)
Date: Tue, 19 Jul 2005 23:12:01 +0000
Subject: [Buildbot-commits] buildbot/buildbot/test test_dependencies.py,NONE,1.1 test_slaves.py,NONE,1.1 test_locks.py,NONE,1.1 test_buildreq.py,NONE,1.1 runutils.py,NONE,1.1 test_changes.py,1.4,1.5 test_steps.py,1.13,1.14 test_config.py,1.21,1.22 test_run.py,1.32,1.33 test_control.py,1.6,1.7 test_vc.py,1.32,1.33 test_web.py,1.18,1.19 test_status.py,1.21,1.22 test_interlock.py,1.2,NONE
Message-ID: 

Update of /cvsroot/buildbot/buildbot/buildbot/test
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv17398/buildbot/test

Modified Files:
	test_changes.py test_steps.py test_config.py test_run.py
	test_control.py test_vc.py test_web.py test_status.py
Added Files:
	test_dependencies.py test_slaves.py test_locks.py test_buildreq.py
	runutils.py
Removed Files:
	test_interlock.py
Log Message:
Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-239
Creator:  Brian Warner

merge in build-on-branch code: Merged from warner at monolith.lothar.com--2005 (patch 0-18, 40-41)

Patches applied:

 * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-40
   Merged from arch at buildbot.sf.net--2004 (patch 232-238)

 * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-41
   Merged from local-usebranches (warner at monolith.lothar.com--2005/buildbot--usebranches--0) (patch 0-18)

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--base-0
   tag of warner at monolith.lothar.com--2005/buildbot--dev--0--patch-38

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-1
   rearrange build scheduling

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-2
   replace ugly 4-tuple with a distinct SourceStamp class

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-3
   document upcoming features, clean up CVS branch= argument

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-4
   Merged from arch at buildbot.sf.net--2004 (patch 227-231), warner at monolith.lothar.com--2005 (patch 39)

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-5
   implement per-Step Locks, add tests (which all fail)

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-6
   implement scheduler.Dependent, add (failing) tests

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-7
   make test_dependencies work

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-8
   finish making Locks work, tests now pass

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-9
   fix test failures when run against twisted >2.0.1

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-10
   rename test_interlock.py to test_locks.py

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-11
   add more Locks tests, add branch examples to manual

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-12
   rewrite test_vc.py, create repositories in setUp rather than offline

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-13
   make new tests work with twisted-1.3.0

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-14
   implement/test build-on-branch for most VC systems

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-15
   minor changes: test-case-name tags, init cleanup

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-16
   Merged from arch
at buildbot.sf.net--2004 (patch 232-233)

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-17
   Merged from arch at buildbot.sf.net--2004 (patch 234-236)

 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-18
   Merged from arch at buildbot.sf.net--2004 (patch 237-238), warner at monolith.lothar.com--2005 (patch 40)

--- NEW FILE: runutils.py ---

import shutil, os, errno

from twisted.internet import defer
from twisted.python import log

from buildbot import master, interfaces
from buildbot.twcompat import maybeWait
from buildbot.slave import bot

class MyBot(bot.Bot):
    def remote_getSlaveInfo(self):
        return self.parent.info

class MyBuildSlave(bot.BuildSlave):
    botClass = MyBot

class RunMixin:
    master = None
    slave = None
    slave2 = None

    def rmtree(self, d):
        try:
            shutil.rmtree(d, ignore_errors=1)
        except OSError, e:
            # stupid 2.2 appears to ignore ignore_errors
            if e.errno != errno.ENOENT:
                raise

    def setUp(self):
        self.rmtree("basedir")
        self.rmtree("slavebase")
        self.rmtree("slavebase2")
        os.mkdir("basedir")
        self.master = master.BuildMaster("basedir")
        self.status = self.master.getStatus()
        self.control = interfaces.IControl(self.master)

    def connectSlave(self, builders=["dummy"]):
        port = self.master.slavePort._port.getHost().port
        os.mkdir("slavebase")
        slave = MyBuildSlave("localhost", port, "bot1", "sekrit",
                             "slavebase", keepalive=0, usePTY=1)
        slave.info = {"admin": "one"}
        self.slave = slave
        slave.startService()
        dl = []
        # initiate call for all of them, before waiting on result,
        # otherwise we might miss some
        for b in builders:
            dl.append(self.master.botmaster.waitUntilBuilderAttached(b))
        d = defer.DeferredList(dl)
        return d

    def connectSlaves(self, builders=["dummy"]):
        port = self.master.slavePort._port.getHost().port
        os.mkdir("slavebase")
        slave1 = MyBuildSlave("localhost", port, "bot1", "sekrit",
                              "slavebase", keepalive=0, usePTY=1)
        slave1.info = {"admin": "one"}
        self.slave = slave1
        slave1.startService()

        os.mkdir("slavebase2")
        slave2 = MyBuildSlave("localhost", port, "bot2", "sekrit",
                              "slavebase2", keepalive=0, usePTY=1)
        slave2.info = {"admin": "one"}
        self.slave2 = slave2
        slave2.startService()

        dl = []
        # initiate call for all of them, before waiting on result,
        # otherwise we might miss some
        for b in builders:
            dl.append(self.master.botmaster.waitUntilBuilderAttached(b))
        d = defer.DeferredList(dl)
        return d

    def connectSlave2(self):
        port = self.master.slavePort._port.getHost().port
        os.mkdir("slavebase2")
        slave = MyBuildSlave("localhost", port, "bot1", "sekrit",
                             "slavebase2", keepalive=0, usePTY=1)
        slave.info = {"admin": "two"}
        self.slave2 = slave
        slave.startService()

    def connectSlave3(self):
        # this slave has a very fast keepalive timeout
        port = self.master.slavePort._port.getHost().port
        os.mkdir("slavebase")
        slave = MyBuildSlave("localhost", port, "bot1", "sekrit",
                             "slavebase", keepalive=2, usePTY=1,
                             keepaliveTimeout=1)
        slave.info = {"admin": "one"}
        self.slave = slave
        slave.startService()
        d = self.master.botmaster.waitUntilBuilderAttached("dummy")
        return d

    def tearDown(self):
        log.msg("doing tearDown")
        d = self.shutdownSlave()
        d.addCallback(self._tearDown_1)
        d.addCallback(self._tearDown_2)
        return maybeWait(d)
    def _tearDown_1(self, res):
        if self.master:
            return defer.maybeDeferred(self.master.stopService)
    def _tearDown_2(self, res):
        self.master = None
        log.msg("tearDown done")

    # various forms of slave death

    def shutdownSlave(self):
        # the slave has disconnected normally: they SIGINT'ed it, or it shut
        # down willingly. This will kill child processes and give them a
        # chance to finish up. We return a Deferred that will fire when
        # everything is finished shutting down.
        log.msg("doing shutdownSlave")
        dl = []
        if self.slave:
            dl.append(self.slave.waitUntilDisconnected())
            dl.append(defer.maybeDeferred(self.slave.stopService))
        if self.slave2:
            dl.append(self.slave2.waitUntilDisconnected())
            dl.append(defer.maybeDeferred(self.slave2.stopService))
        d = defer.DeferredList(dl)
        d.addCallback(self._shutdownSlaveDone)
        return d
    def _shutdownSlaveDone(self, res):
        self.slave = None
        self.slave2 = None
        return self.master.botmaster.waitUntilBuilderDetached("dummy")

    def killSlave(self):
        # the slave has died, its host sent a FIN. The .notifyOnDisconnect
        # callbacks will terminate the current step, so the build should be
        # flunked (no further steps should be started).
        self.slave.bf.continueTrying = 0
        bot = self.slave.getServiceNamed("bot")
        broker = bot.builders["dummy"].remote.broker
        broker.transport.loseConnection()
        self.slave = None

    def disappearSlave(self):
        # the slave's host has vanished off the net, leaving the connection
        # dangling. This will be detected quickly by app-level keepalives or
        # a ping, or slowly by TCP timeouts.

        # implement this by replacing the slave Broker's .dataReceived method
        # with one that just throws away all data.
        def discard(data):
            pass
        bot = self.slave.getServiceNamed("bot")
        broker = bot.builders["dummy"].remote.broker
        broker.dataReceived = discard # seal its ears
        broker.transport.write = discard # and take away its voice

    def ghostSlave(self):
        # the slave thinks it has lost the connection, and initiated a
        # reconnect. The master doesn't yet realize it has lost the previous
        # connection, and sees two connections at once.
raise NotImplementedError --- NEW FILE: test_buildreq.py --- # -*- test-case-name: buildbot.test.test_buildreq -*- from twisted.trial import unittest from twisted.internet import defer, reactor from twisted.application import service from buildbot import buildset, scheduler, interfaces, sourcestamp from buildbot.twcompat import maybeWait from buildbot.process import base from buildbot.status import builder from buildbot.changes.changes import Change class Request(unittest.TestCase): def testMerge(self): R = base.BuildRequest S = sourcestamp.SourceStamp b1 = R("why", S("branch1", None, None, None)) b1r1 = R("why2", S("branch1", "rev1", None, None)) b1r1a = R("why not", S("branch1", "rev1", None, None)) b1r2 = R("why3", S("branch1", "rev2", None, None)) b2r2 = R("why4", S("branch2", "rev2", None, None)) b1r1p1 = R("why5", S("branch1", "rev1", (3, "diff"), None)) c1 = Change("alice", [], "changed stuff", branch="branch1") c2 = Change("alice", [], "changed stuff", branch="branch1") c3 = Change("alice", [], "changed stuff", branch="branch1") c4 = Change("alice", [], "changed stuff", branch="branch1") c5 = Change("alice", [], "changed stuff", branch="branch1") c6 = Change("alice", [], "changed stuff", branch="branch1") b1c1 = R("changes", S("branch1", None, None, [c1,c2,c3])) b1c2 = R("changes", S("branch1", None, None, [c4,c5,c6])) self.failUnless(b1.canBeMergedWith(b1)) self.failIf(b1.canBeMergedWith(b1r1)) self.failIf(b1.canBeMergedWith(b2r2)) self.failIf(b1.canBeMergedWith(b1r1p1)) self.failIf(b1.canBeMergedWith(b1c1)) self.failIf(b1r1.canBeMergedWith(b1)) self.failUnless(b1r1.canBeMergedWith(b1r1)) self.failIf(b1r1.canBeMergedWith(b2r2)) self.failIf(b1r1.canBeMergedWith(b1r1p1)) self.failIf(b1r1.canBeMergedWith(b1c1)) self.failIf(b1r2.canBeMergedWith(b1)) self.failIf(b1r2.canBeMergedWith(b1r1)) self.failUnless(b1r2.canBeMergedWith(b1r2)) self.failIf(b1r2.canBeMergedWith(b2r2)) self.failIf(b1r2.canBeMergedWith(b1r1p1)) self.failIf(b1r1p1.canBeMergedWith(b1)) 
        self.failIf(b1r1p1.canBeMergedWith(b1r1))
        self.failIf(b1r1p1.canBeMergedWith(b1r2))
        self.failIf(b1r1p1.canBeMergedWith(b2r2))
        self.failIf(b1r1p1.canBeMergedWith(b1c1))

        self.failIf(b1c1.canBeMergedWith(b1))
        self.failIf(b1c1.canBeMergedWith(b1r1))
        self.failIf(b1c1.canBeMergedWith(b1r2))
        self.failIf(b1c1.canBeMergedWith(b2r2))
        self.failIf(b1c1.canBeMergedWith(b1r1p1))
        self.failUnless(b1c1.canBeMergedWith(b1c1))
        self.failUnless(b1c1.canBeMergedWith(b1c2))

        sm = b1.mergeWith([])
        self.failUnlessEqual(sm.branch, "branch1")
        self.failUnlessEqual(sm.revision, None)
        self.failUnlessEqual(sm.patch, None)
        self.failUnlessEqual(sm.changes, [])

        ss = b1r1.mergeWith([b1r1])
        self.failUnlessEqual(ss, S("branch1", "rev1", None, None))
        why = b1r1.mergeReasons([b1r1])
        self.failUnlessEqual(why, "why2")
        why = b1r1.mergeReasons([b1r1a])
        self.failUnlessEqual(why, "why2, why not")

        ss = b1c1.mergeWith([b1c2])
        self.failUnlessEqual(ss, S("branch1", None, None,
                                   [c1,c2,c3,c4,c5,c6]))
        why = b1c1.mergeReasons([b1c2])
        self.failUnlessEqual(why, "changes")


class FakeBuilder:
    def __init__(self):
        self.requests = []
    def submitBuildRequest(self, req):
        self.requests.append(req)


class Set(unittest.TestCase):
    def testBuildSet(self):
        S = buildset.BuildSet
        a,b = FakeBuilder(), FakeBuilder()

        # two builds, the first one fails, the second one succeeds. The
        # waitUntilSuccess watcher fires as soon as the first one fails,
        # while the waitUntilFinished watcher doesn't fire until all builds
        # are complete.

        source = sourcestamp.SourceStamp()
        s = S(["a","b"], source, "forced build")
        s.start([a,b])
        self.failUnlessEqual(len(a.requests), 1)
        self.failUnlessEqual(len(b.requests), 1)
        r1 = a.requests[0]
        self.failUnlessEqual(r1.reason, s.reason)
        self.failUnlessEqual(r1.source, s.source)

        res = []
        d1 = s.waitUntilSuccess()
        d1.addCallback(lambda r: res.append(("success", r)))
        d2 = s.waitUntilFinished()
        d2.addCallback(lambda r: res.append(("finished", r)))
        self.failUnlessEqual(res, [])

        builderstatus_a = builder.BuilderStatus("a")
        builderstatus_b = builder.BuilderStatus("b")
        bsa = builder.BuildStatus(builderstatus_a, 1)
        bsa.setResults(builder.FAILURE)
        a.requests[0].finished(bsa)
        self.failUnlessEqual(len(res), 1)
        self.failUnlessEqual(res[0][0], "success")
        bss = res[0][1]
        self.failUnless(interfaces.IBuildSetStatus(bss, None))

        bsb = builder.BuildStatus(builderstatus_b, 1)
        bsb.setResults(builder.SUCCESS)
        b.requests[0].finished(bsb)
        self.failUnlessEqual(len(res), 2)
        self.failUnlessEqual(res[1][0], "finished")
        self.failUnlessEqual(res[1][1], bss)


class FakeMaster(service.MultiService):
    def submitBuildSet(self, bs):
        self.sets.append(bs)

class Scheduling(unittest.TestCase):
    def setUp(self):
        self.master = master = FakeMaster()
        master.sets = []
        master.startService()
    def tearDown(self):
        d = self.master.stopService()
        return maybeWait(d)

    def addScheduler(self, s):
        s.setServiceParent(self.master)

    def testPeriodic1(self):
        self.addScheduler(scheduler.Periodic("quickly", ["a","b"], 2))
        d = defer.Deferred()
        reactor.callLater(5, d.callback, None)
        d.addCallback(self._testPeriodic1_1)
        return maybeWait(d)
    def _testPeriodic1_1(self, res):
        self.failUnless(len(self.master.sets) > 1)
        s1 = self.master.sets[0]
        self.failUnlessEqual(s1.builderNames, ["a","b"])

    def testPeriodic2(self):
        # Twisted-2.0 starts the TimerService right away
        # Twisted-1.3 waits one interval before starting it.
        # so don't bother asserting anything about it
        raise unittest.SkipTest("twisted-1.3 and -2.0 are inconsistent")
        self.addScheduler(scheduler.Periodic("hourly", ["a","b"], 3600))
        d = defer.Deferred()
        reactor.callLater(1, d.callback, None)
        d.addCallback(self._testPeriodic2_1)
        return maybeWait(d)
    def _testPeriodic2_1(self, res):
        # the Periodic scheduler *should* fire right away
        self.failUnless(self.master.sets)

    def isImportant(self, change):
        if "important" in change.files:
            return True
        return False

    def testBranch(self):
        s = scheduler.Scheduler("b1", "branch1", 2, ["a","b"],
                                fileIsImportant=self.isImportant)
        self.addScheduler(s)

        c0 = Change("carol", ["important"], "other branch", branch="other")
        s.addChange(c0)
        self.failIf(s.timer)
        self.failIf(s.importantChanges)

        c1 = Change("alice", ["important", "not important"], "some changes",
                    branch="branch1")
        s.addChange(c1)
        c2 = Change("bob", ["not important", "boring"], "some more changes",
                    branch="branch1")
        s.addChange(c2)
        c3 = Change("carol", ["important", "dull"], "even more changes",
                    branch="branch1")
        s.addChange(c3)

        self.failUnlessEqual(s.importantChanges, [c1,c3])
        self.failUnlessEqual(s.unimportantChanges, [c2])
        self.failUnless(s.timer)

        d = defer.Deferred()
        reactor.callLater(4, d.callback, None)
        d.addCallback(self._testBranch_1)
        return maybeWait(d)
    def _testBranch_1(self, res):
        self.failUnlessEqual(len(self.master.sets), 1)
        s = self.master.sets[0].source
        self.failUnlessEqual(s.branch, "branch1")
        self.failUnlessEqual(s.revision, None)
        self.failUnlessEqual(len(s.changes), 3)
        self.failUnlessEqual(s.patch, None)

    def testAnyBranch(self):
        s = scheduler.AnyBranchScheduler("b1", None, 2, ["a","b"],
                                         fileIsImportant=self.isImportant)
        self.addScheduler(s)

        c1 = Change("alice", ["important", "not important"], "some changes",
                    branch="branch1")
        s.addChange(c1)
        c2 = Change("bob", ["not important", "boring"], "some more changes",
                    branch="branch1")
        s.addChange(c2)
        c3 = Change("carol", ["important", "dull"], "even more changes",
                    branch="branch1")
        s.addChange(c3)

        c4 = Change("carol", ["important"], "other branch", branch="branch2")
        s.addChange(c4)

        d = defer.Deferred()
        reactor.callLater(4, d.callback, None)
        d.addCallback(self._testAnyBranch_1)
        return maybeWait(d)
    def _testAnyBranch_1(self, res):
        self.failUnlessEqual(len(self.master.sets), 2)
        self.master.sets.sort(lambda a,b: cmp(a.source.branch,
                                              b.source.branch))
        s1 = self.master.sets[0].source
        self.failUnlessEqual(s1.branch, "branch1")
        self.failUnlessEqual(s1.revision, None)
        self.failUnlessEqual(len(s1.changes), 3)
        self.failUnlessEqual(s1.patch, None)
        s2 = self.master.sets[1].source
        self.failUnlessEqual(s2.branch, "branch2")
        self.failUnlessEqual(s2.revision, None)
        self.failUnlessEqual(len(s2.changes), 1)
        self.failUnlessEqual(s2.patch, None)

Index: test_changes.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_changes.py,v
retrieving revision 1.4
retrieving revision 1.5
diff -u -d -r1.4 -r1.5
--- test_changes.py	17 May 2005 03:36:54 -0000	1.4
+++ test_changes.py	19 Jul 2005 23:11:58 -0000	1.5
@@ -59,23 +59,18 @@
         self.failUnlessEqual(c3.who, "alice")
 
 config_empty = """
-from buildbot.changes import pb
-c = {}
+BuildmasterConfig = c = {}
 c['bots'] = []
 c['builders'] = []
 c['sources'] = []
+c['schedulers'] = []
 c['slavePortnum'] = 0
-BuildmasterConfig = c
 """
 
-config_sender = """
+config_sender = config_empty + \
+"""
 from buildbot.changes import pb
-c = {}
-c['bots'] = []
-c['builders'] = []
 c['sources'] = [pb.PBChangeSource(port=None)]
-c['slavePortnum'] = 0
-BuildmasterConfig = c
 """
 
 class Sender(unittest.TestCase):

Index: test_config.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_config.py,v
retrieving revision 1.21
retrieving revision 1.22
diff -u -d -r1.21 -r1.22
--- test_config.py	22 May 2005 02:16:13 -0000	1.21
+++ test_config.py	19 Jul 2005 23:11:58 -0000	1.22
@@ -16,6 +16,7 @@
 from buildbot.twcompat import providedBy
 from buildbot.master import BuildMaster
+from buildbot import scheduler
 from twisted.application import service, internet
 from twisted.spread import pb
 from twisted.web.server import Site
@@ -36,402 +37,298 @@
 
 emptyCfg = \
 """
-c = {}
+BuildmasterConfig = c = {}
 c['bots'] = []
 c['sources'] = []
+c['schedulers'] = []
 c['builders'] = []
 c['slavePortnum'] = 9999
 c['projectName'] = 'dummy project'
 c['projectURL'] = 'http://dummy.example.com'
 c['buildbotURL'] = 'http://dummy.example.com/buildbot'
-BuildmasterConfig = c
-"""
-
-slaveportCfg = \
-"""
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9000
-BuildmasterConfig = c
-"""
-
-botsCfg = \
-"""
-c = {}
-c['bots'] = [('bot1', 'pw1'), ('bot2', 'pw2')]
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
-"""
-
-sourcesCfg = \
-"""
-from buildbot.changes.freshcvs import FreshCVSSource
-c = {}
-c['bots'] = []
-s1 = FreshCVSSource('cvs.example.com', 1000, 'pname', 'spass',
-                    prefix='Prefix/')
-c['sources'] = [s1]
-c['builders'] = []
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
 """
 
 buildersCfg = \
 """
 from buildbot.process.factory import BasicBuildFactory
-c = {}
+BuildmasterConfig = c = {}
 c['bots'] = [('bot1', 'pw1')]
 c['sources'] = []
+c['schedulers'] = []
+c['slavePortnum'] = 9999
 f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
 c['builders'] = [{'name':'builder1', 'slavename':'bot1',
                   'builddir':'workdir', 'factory':f1}]
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
 """
 
-buildersCfg2 = \
+buildersCfg2 = buildersCfg + \
 """
-from buildbot.process.factory import BasicBuildFactory
-c = {}
-c['bots'] = [('bot1', 'pw1')]
-c['sources'] = []
 f1 = BasicBuildFactory('cvsroot', 'cvsmodule2')
 c['builders'] = [{'name':'builder1', 'slavename':'bot1',
                   'builddir':'workdir', 'factory':f1}]
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
 """
 
-buildersCfg2new = \
-"""
-from buildbot.process.factory import BasicBuildFactory
-c = {}
-c['bots'] = [('bot1', 'pw1')]
-c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule2')
-c['builders'] = [{ 'name': 'builder1', 'slavename': 'bot1',
-                   'builddir': 'workdir', 'factory': f1 }]
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
-"""
-
-buildersCfg1new = \
-"""
-from buildbot.process.factory import BasicBuildFactory
-c = {}
-c['bots'] = [('bot1', 'pw1')]
-c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
-c['builders'] = [{ 'name': 'builder1', 'slavename': 'bot1',
-                   'builddir': 'workdir', 'factory': f1 }]
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
-"""
-
-buildersCfg3 = \
+buildersCfg3 = buildersCfg2 + \
 """
-from buildbot.process.factory import BasicBuildFactory
-c = {}
-c['bots'] = [('bot1', 'pw1')]
-c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule2')
-c['builders'] = [{ 'name': 'builder1', 'slavename': 'bot1',
-                   'builddir': 'workdir', 'factory': f1 },
-                 { 'name': 'builder2', 'slavename': 'bot1',
-                   'builddir': 'workdir2', 'factory': f1 }]
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
+c['builders'].append({'name': 'builder2', 'slavename': 'bot1',
+                      'builddir': 'workdir2', 'factory': f1 })
 """
 
-buildersCfg4 = \
+buildersCfg4 = buildersCfg2 + \
 """
-from buildbot.process.factory import BasicBuildFactory
-c = {}
-c['bots'] = [('bot1', 'pw1')]
-c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule2')
 c['builders'] = [{ 'name': 'builder1', 'slavename': 'bot1',
                    'builddir': 'newworkdir', 'factory': f1 },
                  { 'name': 'builder2', 'slavename': 'bot1',
                    'builddir': 'workdir2', 'factory': f1 }]
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
 """
 
-ircCfg1 = \
+ircCfg1 = emptyCfg + \
 """
 from buildbot.status import words
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
 c['status'] = [words.IRC('irc.us.freenode.net', 'buildbot', ['twisted'])]
-BuildmasterConfig = c
 """
 
-ircCfg2 = \
+ircCfg2 = emptyCfg + \
 """
 from buildbot.status import words
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
 c['status'] = [words.IRC('irc.us.freenode.net', 'buildbot', ['twisted']),
                words.IRC('irc.example.com', 'otherbot', ['chan1', 'chan2'])]
-BuildmasterConfig = c
 """
 
-ircCfg3 = \
+ircCfg3 = emptyCfg + \
 """
 from buildbot.status import words
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
 c['status'] = [words.IRC('irc.us.freenode.net', 'buildbot', ['knotted'])]
-BuildmasterConfig = c
 """
 
-webCfg1 = \
+webCfg1 = emptyCfg + \
 """
 from buildbot.status import html
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
 c['status'] = [html.Waterfall(http_port=9980)]
-BuildmasterConfig = c
 """
 
-webCfg2 = \
+webCfg2 = emptyCfg + \
 """
 from buildbot.status import html
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
 c['status'] = [html.Waterfall(http_port=9981)]
-BuildmasterConfig = c
 """
 
-webNameCfg1 = \
+webNameCfg1 = emptyCfg + \
 """
 from buildbot.status import html
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
 c['status'] = [html.Waterfall(distrib_port='~/.twistd-web-pb')]
-BuildmasterConfig = c
 """
 
-webNameCfg2 = \
+webNameCfg2 = emptyCfg + \
 """
 from buildbot.status import html
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
 c['status'] = [html.Waterfall(distrib_port='bar.socket')]
-BuildmasterConfig = c
 """
 
-debugPasswordCfg = \
+debugPasswordCfg = emptyCfg + \
 """
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
 c['debugPassword'] = 'sekrit'
-BuildmasterConfig = c
 """
 
-# create an inactive interlock (builder3 is not yet defined). This isn't
-# recommended practice, it is only here to test the code
-interlockCfg1 = \
+interlockCfgBad = \
 """
 from buildbot.process.factory import BasicBuildFactory
 c = {}
 c['bots'] = [('bot1', 'pw1')]
 c['sources'] = []
+c['schedulers'] = []
 f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
 c['builders'] = [
     { 'name': 'builder1', 'slavename': 'bot1',
       'builddir': 'workdir', 'factory': f1 },
     { 'name': 'builder2', 'slavename': 'bot1',
       'builddir': 'workdir2', 'factory': f1 },
-    { 'name': 'builder4', 'slavename': 'bot1',
-      'builddir': 'workdir4', 'factory': f1 },
-    { 'name': 'builder5', 'slavename': 'bot1',
-      'builddir': 'workdir5', 'factory': f1 },
     ]
+# interlocks have been removed
 c['interlocks'] = [('lock1', ['builder1'], ['builder2', 'builder3']),
     ]
 c['slavePortnum'] = 9999
 BuildmasterConfig = c
 """
 
-# make it active
-interlockCfg2 = \
+lockCfgBad1 = \
 """
-from buildbot.process.factory import BasicBuildFactory
+from buildbot.process.step import Dummy
+from buildbot.process.factory import BuildFactory, s
+from buildbot.locks import MasterLock
 c = {}
 c['bots'] = [('bot1', 'pw1')]
 c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
+c['schedulers'] = []
+l1 = MasterLock('lock1')
+l2 = MasterLock('lock1') # duplicate lock name
+f1 = BuildFactory([s(Dummy, locks=[])])
 c['builders'] = [
     { 'name': 'builder1', 'slavename': 'bot1',
-      'builddir': 'workdir', 'factory': f1 },
+      'builddir': 'workdir', 'factory': f1, 'locks': [l1, l2] },
     { 'name': 'builder2', 'slavename': 'bot1',
       'builddir': 'workdir2', 'factory': f1 },
-    { 'name': 'builder3', 'slavename': 'bot1',
-      'builddir': 'workdir3', 'factory': f1 },
-    { 'name': 'builder4', 'slavename': 'bot1',
-      'builddir': 'workdir4', 'factory': f1 },
-    { 'name': 'builder5', 'slavename': 'bot1',
-      'builddir': 'workdir5', 'factory': f1 },
-    ]
-c['interlocks'] = [('lock1', ['builder1'], ['builder2', 'builder3']),
     ]
 c['slavePortnum'] = 9999
 BuildmasterConfig = c
 """
 
-# add a second lock
-interlockCfg3 = \
+lockCfgBad2 = \
 """
-from buildbot.process.factory import BasicBuildFactory
+from buildbot.process.step import Dummy
+from buildbot.process.factory import BuildFactory, s
+from buildbot.locks import MasterLock, SlaveLock
 c = {}
 c['bots'] = [('bot1', 'pw1')]
 c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
+c['schedulers'] = []
+l1 = MasterLock('lock1')
+l2 = SlaveLock('lock1') # duplicate lock name
+f1 = BuildFactory([s(Dummy, locks=[])])
 c['builders'] = [
     { 'name': 'builder1', 'slavename': 'bot1',
-      'builddir': 'workdir', 'factory': f1 },
+      'builddir': 'workdir', 'factory': f1, 'locks': [l1, l2] },
     { 'name': 'builder2', 'slavename': 'bot1',
       'builddir': 'workdir2', 'factory': f1 },
-    { 'name': 'builder3', 'slavename': 'bot1',
-      'builddir': 'workdir3', 'factory': f1 },
-    { 'name': 'builder4', 'slavename': 'bot1',
-      'builddir': 'workdir4', 'factory': f1 },
-    { 'name': 'builder5', 'slavename': 'bot1',
-      'builddir': 'workdir5', 'factory': f1 },
     ]
-c['interlocks'] = [('lock1', ['builder1'], ['builder2', 'builder3']),
-                   ('lock2', ['builder3', 'builder4'], ['builder5']),
+c['slavePortnum'] = 9999
+BuildmasterConfig = c
+"""
+
+lockCfgBad3 = \
+"""
+from buildbot.process.step import Dummy
+from buildbot.process.factory import BuildFactory, s
+from buildbot.locks import MasterLock
+c = {}
+c['bots'] = [('bot1', 'pw1')]
+c['sources'] = []
+c['schedulers'] = []
+l1 = MasterLock('lock1')
+l2 = MasterLock('lock1') # duplicate lock name
+f1 = BuildFactory([s(Dummy, locks=[l2])])
+f2 = BuildFactory([s(Dummy)])
+c['builders'] = [
+    { 'name': 'builder1', 'slavename': 'bot1',
+      'builddir': 'workdir', 'factory': f2, 'locks': [l1] },
+    { 'name': 'builder2', 'slavename': 'bot1',
+      'builddir': 'workdir2', 'factory': f1 },
     ]
 c['slavePortnum'] = 9999
 BuildmasterConfig = c
 """
 
-# change the second lock
-interlockCfg4 = \
+lockCfg1a = \
 """
 from buildbot.process.factory import BasicBuildFactory
+from buildbot.locks import MasterLock
 c = {}
 c['bots'] = [('bot1', 'pw1')]
 c['sources'] = []
+c['schedulers'] = []
 f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
+l1 = MasterLock('lock1')
+l2 = MasterLock('lock2')
 c['builders'] = [
     { 'name': 'builder1', 'slavename': 'bot1',
-      'builddir': 'workdir', 'factory': f1 },
+      'builddir': 'workdir', 'factory': f1, 'locks': [l1, l2] },
     { 'name': 'builder2', 'slavename': 'bot1',
       'builddir': 'workdir2', 'factory': f1 },
-    { 'name': 'builder3', 'slavename': 'bot1',
-      'builddir': 'workdir3', 'factory': f1 },
-    { 'name': 'builder4', 'slavename': 'bot1',
-      'builddir': 'workdir4', 'factory': f1 },
-    { 'name': 'builder5', 'slavename': 'bot1',
-      'builddir': 'workdir5', 'factory': f1 },
-    ]
-c['interlocks'] = [('lock1', ['builder1'], ['builder2', 'builder3']),
-                   ('lock2', ['builder1', 'builder4'], ['builder5']),
     ]
 c['slavePortnum'] = 9999
 BuildmasterConfig = c
 """
 
-# delete the first lock
-interlockCfg5 = \
+lockCfg1b = \
 """
 from buildbot.process.factory import BasicBuildFactory
+from buildbot.locks import MasterLock
 c = {}
 c['bots'] = [('bot1', 'pw1')]
 c['sources'] = []
+c['schedulers'] = []
 f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
+l1 = MasterLock('lock1')
+l2 = MasterLock('lock2')
 c['builders'] = [
     { 'name': 'builder1', 'slavename': 'bot1',
-      'builddir': 'workdir', 'factory': f1 },
+      'builddir': 'workdir', 'factory': f1, 'locks': [l1] },
     { 'name': 'builder2', 'slavename': 'bot1',
       'builddir': 'workdir2', 'factory': f1 },
-    { 'name': 'builder3', 'slavename': 'bot1',
-      'builddir': 'workdir3', 'factory': f1 },
-    { 'name': 'builder4', 'slavename': 'bot1',
-      'builddir': 'workdir4', 'factory': f1 },
-    { 'name': 'builder5', 'slavename': 'bot1',
-      'builddir': 'workdir5', 'factory': f1 },
-    ]
-c['interlocks'] = [('lock2', ['builder1', 'builder4'], ['builder5']),
     ]
 c['slavePortnum'] = 9999
 BuildmasterConfig = c
 """
 
-# render the lock inactive by removing a builder it depends upon
-interlockCfg6 = \
+# test out step Locks
+lockCfg2a = \
 """
-from buildbot.process.factory import BasicBuildFactory
+from buildbot.process.step import Dummy
+from buildbot.process.factory import BuildFactory, s
+from buildbot.locks import MasterLock
 c = {}
 c['bots'] = [('bot1', 'pw1')]
 c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
+c['schedulers'] = []
+l1 = MasterLock('lock1')
+l2 = MasterLock('lock2')
+f1 = BuildFactory([s(Dummy, locks=[l1,l2])])
+f2 = BuildFactory([s(Dummy)])
+
 c['builders'] = [
     { 'name': 'builder1', 'slavename': 'bot1',
       'builddir': 'workdir', 'factory': f1 },
     { 'name': 'builder2', 'slavename': 'bot1',
-      'builddir': 'workdir2', 'factory': f1 },
-    { 'name': 'builder3', 'slavename': 'bot1',
-      'builddir': 'workdir3', 'factory': f1 },
-    { 'name': 'builder4', 'slavename': 'bot1',
-      'builddir': 'workdir4', 'factory': f1 },
+      'builddir': 'workdir2', 'factory': f2 },
     ]
-c['interlocks'] = [('lock2', ['builder1', 'builder4'], ['builder5']),
+c['slavePortnum'] = 9999
+BuildmasterConfig = c
+"""
+
+lockCfg2b = \
+"""
+from buildbot.process.step import Dummy
+from buildbot.process.factory import BuildFactory, s
+from buildbot.locks import MasterLock
+c = {}
+c['bots'] = [('bot1', 'pw1')]
+c['sources'] = []
+c['schedulers'] = []
+l1 = MasterLock('lock1')
+l2 = MasterLock('lock2')
+f1 = BuildFactory([s(Dummy, locks=[l1])])
+f2 = BuildFactory([s(Dummy)])
+
+c['builders'] = [
+    { 'name': 'builder1', 'slavename': 'bot1',
+      'builddir': 'workdir', 'factory': f1 },
+    { 'name': 'builder2', 'slavename': 'bot1',
+      'builddir': 'workdir2', 'factory': f2 },
     ]
 c['slavePortnum'] = 9999
 BuildmasterConfig = c
 """
 
-# finally remove the interlock
-interlockCfg7 = \
+lockCfg2c = \
 """
-from buildbot.process.factory import BasicBuildFactory
+from buildbot.process.step import Dummy
+from buildbot.process.factory import BuildFactory, s
+from buildbot.locks import MasterLock
 c = {}
 c['bots'] = [('bot1', 'pw1')]
 c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
+c['schedulers'] = []
+l1 = MasterLock('lock1')
+l2 = MasterLock('lock2')
+f1 = BuildFactory([s(Dummy)])
+f2 = BuildFactory([s(Dummy)])
+
 c['builders'] = [
     { 'name': 'builder1', 'slavename': 'bot1',
       'builddir': 'workdir', 'factory': f1 },
     { 'name': 'builder2', 'slavename': 'bot1',
-      'builddir': 'workdir2', 'factory': f1 },
-    { 'name': 'builder3', 'slavename': 'bot1',
-      'builddir': 'workdir3', 'factory': f1 },
-    { 'name': 'builder4', 'slavename': 'bot1',
-      'builddir': 'workdir4', 'factory': f1 },
+      'builddir': 'workdir2', 'factory': f2 },
     ]
-c['interlocks'] = []
 c['slavePortnum'] = 9999
 BuildmasterConfig = c
 """
@@ -504,7 +401,6 @@
         self.checkPorts(master, [(9999, pb.PBServerFactory)])
         self.failUnlessEqual(list(master.change_svc), [])
         self.failUnlessEqual(master.botmaster.builders, {})
-        self.failUnlessEqual(master.botmaster.interlocks, {})
         self.failUnlessEqual(master.checker.users,
                              {"change": "changepw"})
         self.failUnlessEqual(master.projectName, "dummy project")
@@ -528,7 +424,7 @@
                     "the slave port was changed even " + \
                     "though the configuration was not")
 
-        master.loadConfig(slaveportCfg)
+        master.loadConfig(emptyCfg + "c['slavePortnum'] = 9000\n")
        self.failUnlessEqual(master.slavePortnum, 9000)
         ports = self.checkPorts(master, [(9000, pb.PBServerFactory)])
         self.failIf(p is ports[0],
@@ -541,6 +437,8 @@
         self.failUnlessEqual(master.botmaster.builders, {})
         self.failUnlessEqual(master.checker.users,
                              {"change": "changepw"})
+        botsCfg = (emptyCfg +
+                   "c['bots'] = [('bot1', 'pw1'), ('bot2', 'pw2')]\n")
         master.loadConfig(botsCfg)
         self.failUnlessEqual(master.checker.users,
                              {"change": "changepw",
@@ -563,6 +461,14 @@
         master.loadConfig(emptyCfg)
         self.failUnlessEqual(list(master.change_svc), [])
 
+        sourcesCfg = emptyCfg + \
+"""
+from buildbot.changes.freshcvs import FreshCVSSource
+s1 = FreshCVSSource('cvs.example.com', 1000, 'pname', 'spass',
+                    prefix='Prefix/')
+c['sources'] = [s1]
+"""
+
         d = master.loadConfig(sourcesCfg)
         dr(d)
         self.failUnlessEqual(len(list(master.change_svc)), 1)
@@ -586,6 +492,39 @@
         dr(d)
         self.failUnlessEqual(list(master.change_svc), [])
 
+    def testSchedulers(self):
+        master = self.buildmaster
+        master.loadChanges()
+        master.loadConfig(emptyCfg)
+        self.failUnlessEqual(master.schedulers, [])
+
+        schedulersCfg = \
+"""
+from buildbot.scheduler import Scheduler
+from buildbot.process.factory import BasicBuildFactory
+c = {}
+c['bots'] = [('bot1', 'pw1')]
+c['sources'] = []
+c['schedulers'] = [Scheduler('full', None, 60, ['builder1'])]
+f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
+c['builders'] = [{'name':'builder1', 'slavename':'bot1',
+                  'builddir':'workdir', 'factory':f1}]
+c['slavePortnum'] = 9999
+c['projectName'] = 'dummy project'
+c['projectURL'] = 'http://dummy.example.com'
+c['buildbotURL'] = 'http://dummy.example.com/buildbot'
+BuildmasterConfig = c
+"""
+
+        d = master.loadConfig(schedulersCfg)
+        dr(d)
+        self.failUnlessEqual(len(master.schedulers), 1)
+        s = master.schedulers[0]
+        self.failUnless(isinstance(s, scheduler.Scheduler))
+        self.failUnlessEqual(s.name, "full")
+        self.failUnlessEqual(s.branch, None)
+        self.failUnlessEqual(s.treeStableTimer, 60)
+        self.failUnlessEqual(s.builderNames, ['builder1'])
 
     def testBuilders(self):
         master = self.buildmaster
@@ -634,22 +573,6 @@
         #statusbag3 = master.client_svc.statusbags["builder1"]
         #self.failUnlessIdentical(statusbag, statusbag3)
 
-        # moving to a new-style builder spec shouldn't cause a change
-        master.loadConfig(buildersCfg2new)
-        b3n = master.botmaster.builders["builder1"]
-        self.failUnlessIdentical(b3n, b3)
-        # TODO
-        #statusbag3n = master.client_svc.statusbags["builder1"]
-        #self.failUnlessIdentical(statusbag3n, statusbag3)
-
-        # unless it is different somehow
-        master.loadConfig(buildersCfg1new)
-        b3nn = master.botmaster.builders["builder1"]
-        self.failIf(b3nn is b3n)
-
-        master.loadConfig(buildersCfg2new)
-        b3 = master.botmaster.builders["builder1"]
-
         # adding new builder
         master.loadConfig(buildersCfg3)
         self.failUnlessEqual(master.botmaster.builderNames, ["builder1",
@@ -787,121 +710,44 @@
         self.failUnlessEqual(master.checker.users,
                              {"change": "changepw"})
 
-    def checkInterlocks(self, botmaster, expected):
-        for (bname, (feeders, interlocks)) in expected.items():
-            b = botmaster.builders[bname]
-            self.failUnlessListsEquivalent(b.feeders, feeders)
-            self.failUnlessListsEquivalent(b.interlocks, interlocks)
-        for bname, b in botmaster.builders.items():
-            if bname not in expected.keys():
-                self.failUnlessEqual(b.feeders, [])
-                self.failUnlessEqual(b.interlocks, [])
-
-    def testInterlocks(self):
+    def testLocks(self):
         master = self.buildmaster
         botmaster = master.botmaster
 
-        # create an inactive interlock
-        master.loadConfig(interlockCfg1)
-        self.failUnlessListsEquivalent(botmaster.interlocks.keys(),
-                                       ['lock1'])
-        i1 = botmaster.interlocks['lock1']
-        self.failUnless(isinstance(i1, Interlock))
-        self.failUnlessEqual(i1.name, 'lock1')
-        self.failUnlessEqual(i1.feederNames, ['builder1'])
-        self.failUnlessEqual(i1.watcherNames, ['builder2', 'builder3'])
-        self.failUnlessEqual(i1.active, False)
-        self.checkInterlocks(botmaster, {'builder1': ([], [])})
-
-        # make it active by adding the builder
-        master.loadConfig(interlockCfg2)
-        self.failUnlessListsEquivalent(botmaster.interlocks.keys(),
-                                       ['lock1'])
-        # should be the same Interlock object as before
-        self.failUnlessIdentical(i1, botmaster.interlocks['lock1'])
-        self.failUnless(isinstance(i1, Interlock))
-        self.failUnlessEqual(i1.name, 'lock1')
-        self.failUnlessEqual(i1.feederNames, ['builder1'])
-        self.failUnlessEqual(i1.watcherNames, ['builder2', 'builder3'])
-        self.failUnlessEqual(i1.active, True)
-        self.checkInterlocks(botmaster, {'builder1': ([i1], []),
-                                         'builder2': ([], [i1]),
-                                         'builder3': ([], [i1]),
-                                         })
-
-        # add a second lock
-        master.loadConfig(interlockCfg3)
-        self.failUnlessListsEquivalent(botmaster.interlocks.keys(),
-                                       ['lock1', 'lock2'])
-        self.failUnlessIdentical(i1, botmaster.interlocks['lock1'])
-        self.failUnless(isinstance(i1, Interlock))
-        self.failUnlessEqual(i1.name, 'lock1')
-        self.failUnlessEqual(i1.feederNames, ['builder1'])
-        self.failUnlessEqual(i1.watcherNames, ['builder2', 'builder3'])
-        self.failUnlessEqual(i1.active, True)
-        i2 = botmaster.interlocks['lock2']
-        self.failUnless(isinstance(i2, Interlock))
-        self.failUnlessEqual(i2.name, 'lock2')
-        self.failUnlessEqual(i2.feederNames, ['builder3', 'builder4'])
-        self.failUnlessEqual(i2.watcherNames, ['builder5'])
-        self.failUnlessEqual(i2.active, True)
-        self.checkInterlocks(botmaster, {'builder1': ([i1], []),
-                                         'builder2': ([], [i1]),
-                                         'builder3': ([i2], [i1]),
-                                         'builder4': ([i2], []),
-                                         'builder5': ([], [i2]),
-                                         })
-
-        # modify the second interlock
-        master.loadConfig(interlockCfg4)
-        self.failUnlessListsEquivalent(botmaster.interlocks.keys(),
-                                       ['lock1', 'lock2'])
-        self.failUnlessIdentical(i1, botmaster.interlocks['lock1'])
-        self.failUnless(isinstance(i1, Interlock))
-        self.failUnlessEqual(i1.name, 'lock1')
-        self.failUnlessEqual(i1.feederNames, ['builder1'])
-        self.failUnlessEqual(i1.watcherNames, ['builder2', 'builder3'])
-        self.failUnlessEqual(i1.active, True)
-        # second interlock has changed, should be a new Interlock object
-        self.failIf(i2 is botmaster.interlocks['lock2'])
-        i2 = botmaster.interlocks['lock2']
-        self.failUnless(isinstance(i2, Interlock))
-        self.failUnlessEqual(i2.name, 'lock2')
-        self.failUnlessEqual(i2.feederNames, ['builder1', 'builder4'])
-        self.failUnlessEqual(i2.watcherNames, ['builder5'])
-        self.failUnlessEqual(i2.active, True)
-        self.checkInterlocks(botmaster, {'builder1': ([i1,i2], []),
-                                         'builder2': ([], [i1]),
-                                         'builder3': ([], [i1]),
-                                         'builder4': ([i2], []),
-                                         'builder5': ([], [i2]),
-                                         })
+        # make sure that c['interlocks'] is rejected properly
+        self.failUnlessRaises(KeyError, master.loadConfig, interlockCfgBad)
+        # and that duplicate-named Locks are caught
+        self.failUnlessRaises(ValueError, master.loadConfig, lockCfgBad1)
+        self.failUnlessRaises(ValueError, master.loadConfig, lockCfgBad2)
+        self.failUnlessRaises(ValueError, master.loadConfig, lockCfgBad3)
 
-        # delete the first interlock
-        master.loadConfig(interlockCfg5)
-        self.failUnlessEqual(botmaster.interlocks.keys(), ['lock2'])
-        self.failUnlessIdentical(i2, botmaster.interlocks['lock2'])
-        self.failUnless(isinstance(i2, Interlock))
-        self.failUnlessEqual(i2.name, 'lock2')
-        self.failUnlessEqual(i2.feederNames, ['builder1', 'builder4'])
-        self.failUnlessEqual(i2.watcherNames, ['builder5'])
-        self.failUnlessEqual(i2.active, True)
-        self.checkInterlocks(botmaster, {'builder1': ([i2], []),
-                                         'builder4': ([i2], []),
-                                         'builder5': ([], [i2]),
-                                         })
+        # create a Builder that uses Locks
+        master.loadConfig(lockCfg1a)
+        b1 = master.botmaster.builders["builder1"]
+        self.failUnlessEqual(len(b1.locks), 2)
 
-        # make it inactive by removing a builder it depends upon
-        master.loadConfig(interlockCfg6)
-        self.failUnlessEqual(botmaster.interlocks.keys(), ['lock2'])
-        self.failUnlessIdentical(i2, botmaster.interlocks['lock2'])
-        self.failUnlessEqual(i2.active, False)
-        self.checkInterlocks(botmaster, {})
+        # reloading the same config should not change the Builder
+        master.loadConfig(lockCfg1a)
+        self.failUnlessIdentical(b1, master.botmaster.builders["builder1"])
+        # but changing the set of locks used should change it
+        master.loadConfig(lockCfg1b)
+        self.failIfIdentical(b1, master.botmaster.builders["builder1"])
+        b1 = master.botmaster.builders["builder1"]
+        self.failUnlessEqual(len(b1.locks), 1)
 
-        # now remove it
-        master.loadConfig(interlockCfg7)
-        self.failUnlessEqual(botmaster.interlocks, {})
-        self.checkInterlocks(botmaster, {})
+        # similar test with step-scoped locks
+        master.loadConfig(lockCfg2a)
+        b1 = master.botmaster.builders["builder1"]
+        # reloading the same config should not change the Builder
+        master.loadConfig(lockCfg2a)
+        self.failUnlessIdentical(b1, master.botmaster.builders["builder1"])
+        # but changing the set of locks used should change it
+        master.loadConfig(lockCfg2b)
+        self.failIfIdentical(b1, master.botmaster.builders["builder1"])
+        b1 = master.botmaster.builders["builder1"]
+        # remove
the locks entirely + master.loadConfig(lockCfg2c) + self.failIfIdentical(b1, master.botmaster.builders["builder1"]) class ConfigFileTest(unittest.TestCase): @@ -909,6 +755,7 @@ def testFindConfigFile(self): os.mkdir("test_cf") open(os.path.join("test_cf", "master.cfg"), "w").write(emptyCfg) + slaveportCfg = emptyCfg + "c['slavePortnum'] = 9000\n" open(os.path.join("test_cf", "alternate.cfg"), "w").write(slaveportCfg) m = BuildMaster("test_cf") Index: test_control.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_control.py,v retrieving revision 1.6 retrieving revision 1.7 diff -u -d -r1.6 -r1.7 --- test_control.py 17 May 2005 10:14:10 -0000 1.6 +++ test_control.py 19 Jul 2005 23:11:58 -0000 1.7 @@ -7,10 +7,12 @@ from twisted.internet import defer, reactor from buildbot import master, interfaces -from buildbot.twcompat import providedBy +from buildbot.sourcestamp import SourceStamp +from buildbot.twcompat import providedBy, maybeWait from buildbot.slave import bot from buildbot.status import builder from buildbot.status.builder import SUCCESS +from buildbot.process import base config = """ from buildbot.process import factory, step @@ -24,6 +26,7 @@ c = {} c['bots'] = [['bot1', 'sekrit']] c['sources'] = [] +c['schedulers'] = [] c['builders'] = [{'name': 'force', 'slavename': 'bot1', 'builddir': 'force-dir', 'factory': f1}] c['slavePortnum'] = 0 @@ -91,14 +94,17 @@ self.connectSlave() def tearDown(self): + dl = [] if self.slave: - d = self.master.botmaster.waitUntilBuilderDetached("force") - dr(defer.maybeDeferred(self.slave.stopService)) - dr(d) + dl.append(self.master.botmaster.waitUntilBuilderDetached("force")) + dl.append(defer.maybeDeferred(self.slave.stopService)) if self.master: - dr(defer.maybeDeferred(self.master.stopService)) + dl.append(defer.maybeDeferred(self.master.stopService)) + return maybeWait(defer.DeferredList(dl)) def testForce(self): + # TODO: since 
BuilderControl.forceBuild has been deprecated, this + # test is scheduled to be removed soon m = self.master m.loadConfig(config) m.readConfig = True @@ -107,11 +113,17 @@ c = interfaces.IControl(m) builder_control = c.getBuilder("force") - build_control = builder_control.forceBuild("bob", "I was bored") + d = builder_control.forceBuild("bob", "I was bored") + d.addCallback(self._testForce_1) + return maybeWait(d) + + def _testForce_1(self, build_control): self.failUnless(providedBy(build_control, interfaces.IBuildControl)) d = build_control.getStatus().waitUntilFinished() - bs = dr(d) + d.addCallback(self._testForce_2) + return d + def _testForce_2(self, bs): self.failUnless(providedBy(bs, interfaces.IBuildStatus)) self.failUnless(bs.isFinished()) self.failUnlessEqual(bs.getResults(), SUCCESS) @@ -119,20 +131,7 @@ self.failUnlessEqual(bs.getChanges(), []) #self.failUnlessEqual(bs.getReason(), "forced") # TODO - def testNoSlave(self): - m = self.master - m.loadConfig(config) - m.readConfig = True - m.startService() - # don't connect the slave here - - c = interfaces.IControl(m) - builder_control = c.getBuilder("force") - self.failUnlessRaises(interfaces.NoSlaveError, - builder_control.forceBuild, - "bob", "I was bored") - - def testBuilderInUse(self): + def testRequest(self): m = self.master m.loadConfig(config) m.readConfig = True @@ -140,20 +139,9 @@ self.connectSlave() c = interfaces.IControl(m) - bc1 = c.getBuilder("force") - self.failUnless(bc1) - b = bc1.forceBuild("bob", "running first build") - # this test depends upon less than one second occurring between the - # two calls to forceBuild - - failed = "did not raise exception" - try: - bc1.forceBuild("bob", "finger twitched") - except interfaces.BuilderInUseError: - failed = None - except Exception, e: - failed = "raised the wrong exception: %s" % e - - dr(b.getStatus().waitUntilFinished()) - if failed: - self.fail(failed) + req = base.BuildRequest("I was bored", SourceStamp()) + builder_control = 
c.getBuilder("force") + d = req.waitUntilStarted() + builder_control.requestBuild(req) + d.addCallback(self._testForce_1) + return maybeWait(d) --- NEW FILE: test_dependencies.py --- # -*- test-case-name: buildbot.test.test_dependencies -*- from twisted.trial import unittest from twisted.internet import reactor, defer from buildbot import interfaces from buildbot.process import step from buildbot.sourcestamp import SourceStamp from buildbot.process.base import BuildRequest from buildbot.test.runutils import RunMixin from buildbot.twcompat import maybeWait config_1 = """ from buildbot import scheduler from buildbot.process import step, factory s = factory.s from buildbot.test.test_locks import LockStep BuildmasterConfig = c = {} c['bots'] = [('bot1', 'sekrit'), ('bot2', 'sekrit')] c['sources'] = [] c['schedulers'] = [] c['slavePortnum'] = 0 s1 = scheduler.Scheduler('upstream1', None, 10, ['slowpass', 'fastfail']) s2 = scheduler.Dependent('downstream2', s1, ['b3', 'b4']) s3 = scheduler.Scheduler('upstream3', None, 10, ['fastpass', 'slowpass']) s4 = scheduler.Dependent('downstream4', s3, ['b3', 'b4']) s5 = scheduler.Dependent('downstream5', s4, ['b5']) c['schedulers'] = [s1, s2, s3, s4, s5] f_fastpass = factory.BuildFactory([s(step.Dummy, timeout=1)]) f_slowpass = factory.BuildFactory([s(step.Dummy, timeout=2)]) f_fastfail = factory.BuildFactory([s(step.FailingDummy, timeout=1)]) def builder(name, f): d = {'name': name, 'slavename': 'bot1', 'builddir': name, 'factory': f} return d c['builders'] = [builder('slowpass', f_slowpass), builder('fastfail', f_fastfail), builder('fastpass', f_fastpass), builder('b3', f_fastpass), builder('b4', f_fastpass), builder('b5', f_fastpass), ] """ class Dependencies(RunMixin, unittest.TestCase): def setUp(self): RunMixin.setUp(self) self.master.loadConfig(config_1) self.master.startService() d = self.connectSlave(["slowpass", "fastfail", "fastpass", "b3", "b4", "b5"]) return maybeWait(d) def findScheduler(self, name): for s in 
self.master.schedulers: if s.name == name: return s raise KeyError("No Scheduler named '%s'" % name) def testParse(self): self.master.loadConfig(config_1) # that's it, just make sure this config file is loaded successfully def testRun_Fail(self): # kick off upstream1, which has a failing Builder and thus will not # trigger downstream3 s = self.findScheduler("upstream1") # this is an internal function of the Scheduler class s.fireTimer() # fires a build # t=0: two builders start: 'slowpass' and 'fastfail' # t=1: builder 'fastfail' finishes # t=2: builder 'slowpass' finishes d = defer.Deferred() d.addCallback(self._testRun_Fail_1) reactor.callLater(3, d.callback, None) return maybeWait(d) def _testRun_Fail_1(self, res): # 'slowpass' and 'fastfail' should have run one build each b = self.status.getBuilder('slowpass').getLastFinishedBuild() self.failUnless(b) self.failUnlessEqual(b.getNumber(), 0) b = self.status.getBuilder('fastfail').getLastFinishedBuild() self.failUnless(b) self.failUnlessEqual(b.getNumber(), 0) # none of the other builders should have run self.failIf(self.status.getBuilder('b3').getLastFinishedBuild()) self.failIf(self.status.getBuilder('b4').getLastFinishedBuild()) self.failIf(self.status.getBuilder('b5').getLastFinishedBuild()) def testRun_Pass(self): # kick off upstream3, which will fire downstream4 and then # downstream5 s = self.findScheduler("upstream3") # this is an internal function of the Scheduler class s.fireTimer() # fires a build # t=0: slowpass and fastpass start # t=1: builder 'fastpass' finishes # t=2: builder 'slowpass' finishes # scheduler 'downstream4' fires # builds b3 and b4 are started # t=3: builds b3 and b4 finish # scheduler 'downstream5' fires # build b5 is started # t=4: build b5 is finished d = defer.Deferred() d.addCallback(self._testRun_Pass_1) reactor.callLater(5, d.callback, None) return maybeWait(d) def _testRun_Pass_1(self, res): # 'fastpass' and 'slowpass' should have run one build each b = 
self.status.getBuilder('fastpass').getLastFinishedBuild() self.failUnless(b) self.failUnlessEqual(b.getNumber(), 0) b = self.status.getBuilder('slowpass').getLastFinishedBuild() self.failUnless(b) self.failUnlessEqual(b.getNumber(), 0) self.failIf(self.status.getBuilder('fastfail').getLastFinishedBuild()) b = self.status.getBuilder('b3').getLastFinishedBuild() self.failUnless(b) self.failUnlessEqual(b.getNumber(), 0) b = self.status.getBuilder('b4').getLastFinishedBuild() self.failUnless(b) self.failUnlessEqual(b.getNumber(), 0) b = self.status.getBuilder('b5').getLastFinishedBuild() self.failUnless(b) self.failUnlessEqual(b.getNumber(), 0) --- test_interlock.py DELETED --- --- NEW FILE: test_locks.py --- # -*- test-case-name: buildbot.test.test_locks -*- from twisted.trial import unittest from twisted.internet import defer from buildbot import interfaces from buildbot.process import step from buildbot.sourcestamp import SourceStamp from buildbot.process.base import BuildRequest from buildbot.test.runutils import RunMixin from buildbot.twcompat import maybeWait class LockStep(step.Dummy): def start(self): number = self.build.requests[0].number self.build.requests[0].events.append(("start", number)) step.Dummy.start(self) def done(self): number = self.build.requests[0].number self.build.requests[0].events.append(("done", number)) step.Dummy.done(self) config_1 = """ from buildbot import locks from buildbot.process import step, factory s = factory.s from buildbot.test.test_locks import LockStep BuildmasterConfig = c = {} c['bots'] = [('bot1', 'sekrit'), ('bot2', 'sekrit')] c['sources'] = [] c['schedulers'] = [] c['slavePortnum'] = 0 first_lock = locks.SlaveLock('first') second_lock = locks.MasterLock('second') f1 = factory.BuildFactory([s(LockStep, timeout=2, locks=[first_lock])]) f2 = factory.BuildFactory([s(LockStep, timeout=3, locks=[second_lock])]) f3 = factory.BuildFactory([s(LockStep, timeout=2, locks=[])]) b1a = {'name': 'full1a', 'slavename': 'bot1', 
'builddir': '1a', 'factory': f1} b1b = {'name': 'full1b', 'slavename': 'bot1', 'builddir': '1b', 'factory': f1} b1c = {'name': 'full1c', 'slavename': 'bot1', 'builddir': '1c', 'factory': f3, 'locks': [first_lock, second_lock]} b1d = {'name': 'full1d', 'slavename': 'bot1', 'builddir': '1d', 'factory': f2} b2a = {'name': 'full2a', 'slavename': 'bot2', 'builddir': '2a', 'factory': f1} b2b = {'name': 'full2b', 'slavename': 'bot2', 'builddir': '2b', 'factory': f3, 'locks': [second_lock]} c['builders'] = [b1a, b1b, b1c, b1d, b2a, b2b] """ class Locks(RunMixin, unittest.TestCase): def setUp(self): RunMixin.setUp(self) self.req1 = req1 = BuildRequest("forced build", SourceStamp()) req1.number = 1 self.req2 = req2 = BuildRequest("forced build", SourceStamp()) req2.number = 2 self.req3 = req3 = BuildRequest("forced build", SourceStamp()) req3.number = 3 req1.events = req2.events = req3.events = self.events = [] d = self.master.loadConfig(config_1) d.addCallback(lambda res: self.master.startService()) d.addCallback(lambda res: self.connectSlaves(["full1a", "full1b", "full1c", "full1d", "full2a", "full2b"])) return maybeWait(d) def testLock1(self): self.control.getBuilder("full1a").requestBuild(self.req1) self.control.getBuilder("full1b").requestBuild(self.req2) d = defer.DeferredList([self.req1.waitUntilFinished(), self.req2.waitUntilFinished()]) d.addCallback(self._testLock1_1) return d def _testLock1_1(self, res): # full1a should complete its step before full1b starts it self.failUnlessEqual(self.events, [("start", 1), ("done", 1), ("start", 2), ("done", 2)]) def testLock2(self): # two builds run on separate slaves with slave-scoped locks should # not interfere self.control.getBuilder("full1a").requestBuild(self.req1) self.control.getBuilder("full2a").requestBuild(self.req2) d = defer.DeferredList([self.req1.waitUntilFinished(), self.req2.waitUntilFinished()]) d.addCallback(self._testLock2_1) return d def _testLock2_1(self, res): # full2a should start its step before full1a 
finishes it. They run on # different slaves, however, so they might start in either order. self.failUnless(self.events[:2] == [("start", 1), ("start", 2)] or self.events[:2] == [("start", 2), ("start", 1)]) def testLock3(self): # two builds run on separate slaves with master-scoped locks should # not overlap self.control.getBuilder("full1c").requestBuild(self.req1) self.control.getBuilder("full2b").requestBuild(self.req2) d = defer.DeferredList([self.req1.waitUntilFinished(), self.req2.waitUntilFinished()]) d.addCallback(self._testLock3_1) return d def _testLock3_1(self, res): # full2b should not start until after full1c finishes. The builds run # on different slaves, so we can't really predict which will start # first. The important thing is that they don't overlap. self.failUnless(self.events == [("start", 1), ("done", 1), ("start", 2), ("done", 2)] or self.events == [("start", 2), ("done", 2), ("start", 1), ("done", 1)] ) def testLock4(self): self.control.getBuilder("full1a").requestBuild(self.req1) self.control.getBuilder("full1c").requestBuild(self.req2) self.control.getBuilder("full1d").requestBuild(self.req3) d = defer.DeferredList([self.req1.waitUntilFinished(), self.req2.waitUntilFinished(), self.req3.waitUntilFinished()]) d.addCallback(self._testLock4_1) return d def _testLock4_1(self, res): # full1a starts, then full1d starts (because they do not interfere). # Once both are done, full1c can run. 
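The property these lock tests assert — two builds sharing one lock never overlap, so their (start, done) event pairs never interleave — can be sketched in standalone Python, without the buildbot API, using a plain threading.Lock in place of a master-scoped buildbot Lock:

```python
# Standalone sketch of the event-ordering property the lock tests assert:
# two "builds" contend for one lock, and whichever wins, the recorded
# events always come out as complete start/done pairs, never interleaved.
import threading

events = []
events_guard = threading.Lock()   # protects the shared events list
build_lock = threading.Lock()     # stands in for a master-scoped Lock

def run_build(number):
    with build_lock:              # only one build may hold the lock at a time
        with events_guard:
            events.append(("start", number))
        with events_guard:
            events.append(("done", number))

threads = [threading.Thread(target=run_build, args=(n,)) for n in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Either build may have won the race, but the pairs never interleave.
assert events in ([("start", 1), ("done", 1), ("start", 2), ("done", 2)],
                  [("start", 2), ("done", 2), ("start", 1), ("done", 1)])
```

The real tests distinguish SlaveLock (scoped per slave, so builds on different slaves may overlap) from MasterLock (global); the sketch above models only the global case.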
self.failUnlessEqual(self.events, [("start", 1), ("start", 3), ("done", 1), ("done", 3), ("start", 2), ("done", 2)]) Index: test_run.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_run.py,v retrieving revision 1.32 retrieving revision 1.33 diff -u -d -r1.32 -r1.33 --- test_run.py 17 May 2005 10:14:10 -0000 1.32 +++ test_run.py 19 Jul 2005 23:11:58 -0000 1.33 @@ -1,154 +1,77 @@ # -*- test-case-name: buildbot.test.test_run -*- from twisted.trial import unittest -dr = unittest.deferredResult from twisted.internet import reactor, defer from twisted.python import log import sys, os, os.path, shutil, time, errno #log.startLogging(sys.stderr) from buildbot import master, interfaces +from buildbot.sourcestamp import SourceStamp from buildbot.util import now from buildbot.slave import bot from buildbot.changes import changes from buildbot.status import base, builder +from buildbot.process.base import BuildRequest +from buildbot.twcompat import maybeWait -def maybeWait(d, timeout="none"): - # this is required for oldtrial (twisted-1.3.0) compatibility. When we - # move to retrial (twisted-2.0.0), replace these with a simple 'return - # d'. 
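The recurring conversion in these diffs — from the synchronous `dr = unittest.deferredResult` style to `d.addCallback(...); return maybeWait(d)` — relies on Twisted's Deferred callback chaining. A toy stand-in (a hypothetical simplified class, not the real twisted.internet.defer.Deferred) shows the mechanism:

```python
# Toy model of Deferred callback chaining: callbacks added before the
# result arrives are queued; the eventual result is threaded through them.
class TinyDeferred:
    def __init__(self):
        self.callbacks = []
        self.called = False
        self.result = None

    def addCallback(self, fn):
        if self.called:
            self.result = fn(self.result)   # result already here: run now
        else:
            self.callbacks.append(fn)       # otherwise queue for later
        return self

    def callback(self, result):
        self.called = True
        self.result = result
        for fn in self.callbacks:
            self.result = fn(self.result)

d = TinyDeferred()
d.addCallback(lambda res: res + 1)
d.addCallback(lambda res: res * 2)
d.callback(10)
assert d.result == 22   # (10 + 1) * 2
```

This is why the rewritten tests can be split into `_testForce_1`, `_testForce_2`, etc.: each stage is just the next callback in the chain.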
- if timeout == "none": - unittest.deferredResult(d) - else: - unittest.deferredResult(d, timeout) - return None - -config_1 = """ -from buildbot.process import factory - -c = {} -c['bots'] = [['bot1', 'sekrit']] -c['sources'] = [] -c['builders'] = [] -f1 = factory.QuickBuildFactory('fakerep', 'cvsmodule', configure=None) -c['builders'].append({'name':'quick', 'slavename':'bot1', - 'builddir': 'quickdir', 'factory': f1}) -c['slavePortnum'] = 0 -BuildmasterConfig = c -""" +from buildbot.test.runutils import RunMixin -config_2 = """ +config_base = """ from buildbot.process import factory, step +s = factory.s -def s(klass, **kwargs): - return (klass, kwargs) +f1 = factory.QuickBuildFactory('fakerep', 'cvsmodule', configure=None) -f1 = factory.BuildFactory([ +f2 = factory.BuildFactory([ s(step.Dummy, timeout=1), s(step.RemoteDummy, timeout=2), ]) -c = {} -c['bots'] = [['bot1', 'sekrit']] -c['sources'] = [] -c['builders'] = [{'name': 'dummy', 'slavename': 'bot1', - 'builddir': 'dummy1', 'factory': f1}, - {'name': 'testdummy', 'slavename': 'bot1', - 'builddir': 'dummy2', 'factory': f1, 'category': 'test'}] -c['slavePortnum'] = 0 -BuildmasterConfig = c -""" - -config_3 = """ -from buildbot.process import factory, step -def s(klass, **kwargs): - return (klass, kwargs) - -f1 = factory.BuildFactory([ - s(step.Dummy, timeout=1), - s(step.RemoteDummy, timeout=2), - ]) -c = {} +BuildmasterConfig = c = {} c['bots'] = [['bot1', 'sekrit']] c['sources'] = [] -c['builders'] = [ - {'name': 'dummy', 'slavename': 'bot1', - 'builddir': 'dummy1', 'factory': f1}, - {'name': 'testdummy', 'slavename': 'bot1', - 'builddir': 'dummy2', 'factory': f1, 'category': 'test'}, - {'name': 'adummy', 'slavename': 'bot1', - 'builddir': 'adummy3', 'factory': f1}, - {'name': 'bdummy', 'slavename': 'bot1', - 'builddir': 'adummy4', 'factory': f1, 'category': 'test'}, -] +c['schedulers'] = [] +c['builders'] = [] +c['builders'].append({'name':'quick', 'slavename':'bot1', + 'builddir': 'quickdir', 'factory': 
f1}) c['slavePortnum'] = 0 -BuildmasterConfig = c """ -config_4 = """ -from buildbot.process import factory, step - -def s(klass, **kwargs): - return (klass, kwargs) +config_run = config_base + """ +from buildbot.scheduler import Scheduler +c['schedulers'] = [Scheduler('quick', None, 120, ['quick'])] +""" -f1 = factory.BuildFactory([ - s(step.Dummy, timeout=1), - s(step.RemoteDummy, timeout=2), - ]) -c = {} -c['bots'] = [['bot1', 'sekrit']] -c['sources'] = [] +config_2 = config_base + """ c['builders'] = [{'name': 'dummy', 'slavename': 'bot1', - 'builddir': 'dummy', 'factory': f1}] -c['slavePortnum'] = 0 -BuildmasterConfig = c + 'builddir': 'dummy1', 'factory': f2}, + {'name': 'testdummy', 'slavename': 'bot1', + 'builddir': 'dummy2', 'factory': f2, 'category': 'test'}] """ -config_4_newbasedir = """ -from buildbot.process import factory, step - -def s(klass, **kwargs): - return (klass, kwargs) +config_3 = config_2 + """ +c['builders'].append({'name': 'adummy', 'slavename': 'bot1', + 'builddir': 'adummy3', 'factory': f2}) +c['builders'].append({'name': 'bdummy', 'slavename': 'bot1', + 'builddir': 'adummy4', 'factory': f2, + 'category': 'test'}) +""" -f1 = factory.BuildFactory([ - s(step.Dummy, timeout=1), - s(step.RemoteDummy, timeout=2), - ]) -c = {} -c['bots'] = [['bot1', 'sekrit']] -c['sources'] = [] +config_4 = config_base + """ c['builders'] = [{'name': 'dummy', 'slavename': 'bot1', - 'builddir': 'dummy2', 'factory': f1}] -c['slavePortnum'] = 0 -BuildmasterConfig = c + 'builddir': 'dummy', 'factory': f2}] """ -config_4_newbuilder = """ -from buildbot.process import factory, step - -def s(klass, **kwargs): - return (klass, kwargs) - -f1 = factory.BuildFactory([ - s(step.Dummy, timeout=1), - s(step.RemoteDummy, timeout=2), - ]) -c = {} -c['bots'] = [['bot1', 'sekrit']] -c['sources'] = [] +config_4_newbasedir = config_4 + """ c['builders'] = [{'name': 'dummy', 'slavename': 'bot1', - 'builddir': 'dummy2', 'factory': f1}, - {'name': 'dummy2', 'slavename': 'bot1', - 
'builddir': 'dummy23', 'factory': f1},] -c['slavePortnum'] = 0 -BuildmasterConfig = c + 'builddir': 'dummy2', 'factory': f2}] """ -class MyBot(bot.Bot): - def remote_getSlaveInfo(self): - return self.parent.info -class MyBuildSlave(bot.BuildSlave): - botClass = MyBot +config_4_newbuilder = config_4_newbasedir + """ +c['builders'].append({'name': 'dummy2', 'slavename': 'bot1', + 'builddir': 'dummy23', 'factory': f2}) +""" class STarget(base.StatusReceiver): debug = False @@ -165,8 +88,8 @@ self.announce() if "builder" in self.mode: return self - def builderChangedState(self, name, state, eta): - self.events.append(("builderChangedState", name, state, eta)) + def builderChangedState(self, name, state): + self.events.append(("builderChangedState", name, state)) self.announce() def buildStarted(self, name, build): self.events.append(("buildStarted", name, build)) @@ -221,155 +144,48 @@ self.rmtree("basedir") os.mkdir("basedir") m = master.BuildMaster("basedir") - m.loadConfig(config_1) + m.loadConfig(config_run) m.readConfig = True m.startService() cm = m.change_svc c = changes.Change("bob", ["Makefile", "foo/bar.c"], "changed stuff") cm.addChange(c) - b1 = m.botmaster.builders["quick"] - self.failUnless(b1.waiting) - # now kill the timer - b1.waiting.stopTimer() + # verify that the Scheduler is now waiting + s = m.schedulers[0] + self.failUnless(s.timer) + # halting the service will also stop the timer d = defer.maybeDeferred(m.stopService) - maybeWait(d) - -class RunMixin: - master = None - slave = None - slave2 = None - - def rmtree(self, d): - try: - shutil.rmtree(d, ignore_errors=1) - except OSError, e: - # stupid 2.2 appears to ignore ignore_errors - if e.errno != errno.ENOENT: - raise - - def setUp(self): - self.rmtree("basedir") - self.rmtree("slavebase") - self.rmtree("slavebase2") - os.mkdir("basedir") - self.master = master.BuildMaster("basedir") - - def connectSlave(self, builders=["dummy"]): - port = self.master.slavePort._port.getHost().port - 
os.mkdir("slavebase") - slave = MyBuildSlave("localhost", port, "bot1", "sekrit", - "slavebase", keepalive=0, usePTY=1) - slave.info = {"admin": "one"} - self.slave = slave - slave.startService() - dl = [] - # initiate call for all of them, before waiting on result, - # otherwise we might miss some - for b in builders: - dl.append(self.master.botmaster.waitUntilBuilderAttached(b)) - d = defer.DeferredList(dl) - dr(d) - - def connectSlave2(self): - port = self.master.slavePort._port.getHost().port - os.mkdir("slavebase2") - slave = MyBuildSlave("localhost", port, "bot1", "sekrit", - "slavebase2", keepalive=0, usePTY=1) - slave.info = {"admin": "two"} - self.slave2 = slave - slave.startService() - - def connectSlave3(self): - # this slave has a very fast keepalive timeout - port = self.master.slavePort._port.getHost().port - os.mkdir("slavebase") - slave = MyBuildSlave("localhost", port, "bot1", "sekrit", - "slavebase", keepalive=2, usePTY=1, - keepaliveTimeout=1) - slave.info = {"admin": "one"} - self.slave = slave - slave.startService() - d = self.master.botmaster.waitUntilBuilderAttached("dummy") - dr(d) - - def tearDown(self): - log.msg("doing tearDown") - d = self.shutdownSlave() - d.addCallback(self._tearDown_1) - d.addCallback(self._tearDown_2) return maybeWait(d) - def _tearDown_1(self, res): - if self.master: - return defer.maybeDeferred(self.master.stopService) - def _tearDown_2(self, res): - self.master = None - log.msg("tearDown done") - # various forms of slave death +class Ping(RunMixin, unittest.TestCase): + def testPing(self): + self.master.loadConfig(config_2) + self.master.readConfig = True + self.master.startService() - def shutdownSlave(self): - # the slave has disconnected normally: they SIGINT'ed it, or it shut - # down willingly. This will kill child processes and give them a - # chance to finish up. We return a Deferred that will fire when - # everything is finished shutting down. 
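The tearDown/shutdownSlave code above gathers several independent cleanup steps and waits for all of them via defer.DeferredList. A synchronous plain-Python sketch of that gather-everything pattern (illustrative only, not the Twisted API):

```python
# Run every cleanup step, collecting (success, result) pairs the way a
# DeferredList collects its members' outcomes, instead of stopping at the
# first failure.
def gather_cleanups(thunks):
    results = []
    for thunk in thunks:
        try:
            results.append((True, thunk()))
        except Exception as e:
            results.append((False, e))
    return results

res = gather_cleanups([lambda: "slave stopped",
                       lambda: "master stopped"])
assert res == [(True, "slave stopped"), (True, "master stopped")]
```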
+ d = self.connectSlave() + d.addCallback(self._testPing_1) + return maybeWait(d) - log.msg("doing shutdownSlave") - dl = [] - if self.slave: - dl.append(self.slave.waitUntilDisconnected()) - dl.append(defer.maybeDeferred(self.slave.stopService)) - if self.slave2: - dl.append(self.slave2.waitUntilDisconnected()) - dl.append(defer.maybeDeferred(self.slave2.stopService)) - d = defer.DeferredList(dl) - d.addCallback(self._shutdownSlaveDone) + def _testPing_1(self, res): + d = interfaces.IControl(self.master).getBuilder("dummy").ping(1) + d.addCallback(self._testPing_2) return d - def _shutdownSlaveDone(self, res): - self.slave = None - self.slave2 = None - return self.master.botmaster.waitUntilBuilderDetached("dummy") - - def killSlave(self): - # the slave has died, its host sent a FIN. The .notifyOnDisconnect - # callbacks will terminate the current step, so the build should be - # flunked (no further steps should be started). - self.slave.bf.continueTrying = 0 - bot = self.slave.getServiceNamed("bot") - broker = bot.builders["dummy"].remote.broker - broker.transport.loseConnection() - self.slave = None - - def disappearSlave(self): - # the slave's host has vanished off the net, leaving the connection - # dangling. This will be detected quickly by app-level keepalives or - # a ping, or slowly by TCP timeouts. - # implement this by replacing the slave Broker's .dataReceived method - # with one that just throws away all data. - def discard(data): - pass - bot = self.slave.getServiceNamed("bot") - broker = bot.builders["dummy"].remote.broker - broker.dataReceived = discard # seal its ears - broker.transport.write = discard # and take away its voice - - def ghostSlave(self): - # the slave thinks it has lost the connection, and initiated a - # reconnect. The master doesn't yet realize it has lost the previous - # connection, and sees two connections at once. 
- raise NotImplementedError + def _testPing_2(self, res): + pass class Status(RunMixin, unittest.TestCase): def testSlave(self): m = self.master s = m.getStatus() - t1 = STarget(["builder"]) + self.t1 = t1 = STarget(["builder"]) #t1.debug = True; print s.subscribe(t1) self.failUnlessEqual(len(t1.events), 0) - t3 = STarget(["builder", "build", "step"]) + self.t3 = t3 = STarget(["builder", "build", "step"]) s.subscribe(t3) m.loadConfig(config_2) @@ -379,20 +195,18 @@ self.failUnlessEqual(len(t1.events), 4) self.failUnlessEqual(t1.events[0][0:2], ("builderAdded", "dummy")) self.failUnlessEqual(t1.events[1], - ("builderChangedState", "dummy", "offline", - None)) + ("builderChangedState", "dummy", "offline")) self.failUnlessEqual(t1.events[2][0:2], ("builderAdded", "testdummy")) self.failUnlessEqual(t1.events[3], - ("builderChangedState", "testdummy", "offline", - None)) + ("builderChangedState", "testdummy", "offline")) t1.events = [] self.failUnlessEqual(s.getBuilderNames(), ["dummy", "testdummy"]) self.failUnlessEqual(s.getBuilderNames(categories=['test']), ["testdummy"]) - s1 = s.getBuilder("dummy") + self.s1 = s1 = s.getBuilder("dummy") self.failUnlessEqual(s1.getName(), "dummy") - self.failUnlessEqual(s1.getState(), ("offline", None, None)) + self.failUnlessEqual(s1.getState(), ("offline", None)) self.failUnlessEqual(s1.getCurrentBuild(), None) self.failUnlessEqual(s1.getLastFinishedBuild(), None) self.failUnlessEqual(s1.getBuild(-1), None) @@ -400,40 +214,46 @@ # status targets should, upon being subscribed, immediately get a # list of all current builders matching their category - t2 = STarget([]) + self.t2 = t2 = STarget([]) s.subscribe(t2) self.failUnlessEqual(len(t2.events), 2) self.failUnlessEqual(t2.events[0][0:2], ("builderAdded", "dummy")) self.failUnlessEqual(t2.events[1][0:2], ("builderAdded", "testdummy")) - self.connectSlave(builders=["dummy", "testdummy"]) + d = self.connectSlave(builders=["dummy", "testdummy"]) + d.addCallback(self._testSlave_1, t1) 
+ return maybeWait(d) + def _testSlave_1(self, res, t1): self.failUnlessEqual(len(t1.events), 2) self.failUnlessEqual(t1.events[0], - ("builderChangedState", "dummy", "idle", None)) + ("builderChangedState", "dummy", "idle")) self.failUnlessEqual(t1.events[1], - ("builderChangedState", "testdummy", "idle", - None)) + ("builderChangedState", "testdummy", "idle")) t1.events = [] - c = interfaces.IControl(m) - bc = c.getBuilder("dummy").forceBuild(None, - "forced build for testing") - d = bc.getStatus().waitUntilFinished() - res = dr(d) + c = interfaces.IControl(self.master) + req = BuildRequest("forced build for testing", SourceStamp()) + c.getBuilder("dummy").requestBuild(req) + d = req.waitUntilFinished() + d2 = self.master.botmaster.waitUntilBuilderIdle("dummy") + dl = defer.DeferredList([d, d2]) + dl.addCallback(self._testSlave_2) + return dl + def _testSlave_2(self, res): # t1 subscribes to builds, but not anything lower-level - ev = t1.events + ev = self.t1.events self.failUnlessEqual(len(ev), 4) self.failUnlessEqual(ev[0][0:3], ("builderChangedState", "dummy", "building")) self.failUnlessEqual(ev[1][0], "buildStarted") self.failUnlessEqual(ev[2][0:2]+ev[2][3:4], ("buildFinished", "dummy", builder.SUCCESS)) - self.failUnlessEqual(ev[3], - ("builderChangedState", "dummy", "idle", None)) + self.failUnlessEqual(ev[3][0:3], + ("builderChangedState", "dummy", "idle")) - self.failUnlessEqual([ev[0] for ev in t3.events], + self.failUnlessEqual([ev[0] for ev in self.t3.events], ["builderAdded", "builderChangedState", # offline "builderAdded", @@ -449,11 +269,11 @@ "builderChangedState", # idle ]) - b = s1.getLastFinishedBuild() + b = self.s1.getLastFinishedBuild() self.failUnless(b) self.failUnlessEqual(b.getBuilder().getName(), "dummy") self.failUnlessEqual(b.getNumber(), 0) - self.failUnlessEqual(b.getSourceStamp(), (None, None)) + self.failUnlessEqual(b.getSourceStamp(), (None, None, None)) self.failUnlessEqual(b.getReason(), "forced build for testing") 
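The STarget assertions above exercise a simple observer pattern: status targets subscribe once, then accumulate every announced event. A schematic version in plain Python (not the buildbot status API) makes the expected event lists easy to see:

```python
# Schematic status/observer pair in the spirit of STarget: subscribers
# record (event, *args) tuples in the order the status object announces them.
class Status:
    def __init__(self):
        self.targets = []

    def subscribe(self, target):
        self.targets.append(target)

    def announce(self, event, *args):
        for t in self.targets:
            t.events.append((event,) + args)

class Target:
    def __init__(self):
        self.events = []

s = Status()
t1 = Target()
s.subscribe(t1)
s.announce("builderChangedState", "dummy", "offline")
s.announce("builderChangedState", "dummy", "idle")
assert t1.events == [("builderChangedState", "dummy", "offline"),
                     ("builderChangedState", "dummy", "idle")]
```

Note the diff also drops the `eta` argument from `builderChangedState`, which is why the expected tuples shrink from four elements to three.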
self.failUnlessEqual(b.getChanges(), []) self.failUnlessEqual(b.getResponsibleUsers(), []) @@ -490,16 +310,23 @@ self.failUnlessEqual(logs[0].getName(), "log") self.failUnlessEqual(logs[0].getText(), "data") + self.eta = eta # now we run it a second time, and we should have an ETA - t4 = STarget(["builder", "build", "eta"]) - s.subscribe(t4) - c = interfaces.IControl(m) - bc = c.getBuilder("dummy").forceBuild(None, - "forced build for testing") - d = bc.getStatus().waitUntilFinished() - res = dr(d) + self.t4 = t4 = STarget(["builder", "build", "eta"]) + self.master.getStatus().subscribe(t4) + c = interfaces.IControl(self.master) + req = BuildRequest("forced build for testing", SourceStamp()) + c.getBuilder("dummy").requestBuild(req) + d = req.waitUntilFinished() + d2 = self.master.botmaster.waitUntilBuilderIdle("dummy") + dl = defer.DeferredList([d, d2]) + dl.addCallback(self._testSlave_3) + return dl + def _testSlave_3(self, res): + t4 = self.t4 + eta = self.eta self.failUnless(eta-1 < t4.eta_build < eta+1, # should be 3 seconds "t4.eta_build was %g, not in (%g,%g)" % (t4.eta_build, eta-1, eta+1)) @@ -521,37 +348,33 @@ class Disconnect(RunMixin, unittest.TestCase): - def disconnectSetupMaster(self): + def setUp(self): + RunMixin.setUp(self) + # verify that disconnecting the slave during a build properly # terminates the build m = self.master - s = m.getStatus() - c = interfaces.IControl(m) + s = self.status + c = self.control m.loadConfig(config_2) m.readConfig = True m.startService() self.failUnlessEqual(s.getBuilderNames(), ["dummy", "testdummy"]) - s1 = s.getBuilder("dummy") + self.s1 = s1 = s.getBuilder("dummy") self.failUnlessEqual(s1.getName(), "dummy") - self.failUnlessEqual(s1.getState(), ("offline", None, None)) + self.failUnlessEqual(s1.getState(), ("offline", None)) self.failUnlessEqual(s1.getCurrentBuild(), None) self.failUnlessEqual(s1.getLastFinishedBuild(), None) self.failUnlessEqual(s1.getBuild(-1), None) - return m,s,c,s1 - def 
disconnectSetup(self): - m,s,c,s1 = self.disconnectSetupMaster() - self.connectSlave() - self.failUnlessEqual(s1.getState(), ("idle", None, None)) - return m,s,c,s1 + d = self.connectSlave() + d.addCallback(self._disconnectSetup_1) + return maybeWait(d) - def disconnectSetup2(self): - m,s,c,s1 = self.disconnectSetupMaster() - self.connectSlave3() - self.failUnlessEqual(s1.getState(), ("idle", None, None)) - return m,s,c,s1 + def _disconnectSetup_1(self, res): + self.failUnlessEqual(self.s1.getState(), ("idle", None)) def verifyDisconnect(self, bs): @@ -575,186 +398,187 @@ def testIdle1(self): - m,s,c,s1 = self.disconnectSetup() # disconnect the slave before the build starts d = self.shutdownSlave() # dies before it gets started - d.addCallback(self._testIdle1_1, (m,s,c,s1)) + d.addCallback(self._testIdle1_1) return d - def _testIdle1_1(self, res, (m,s,c,s1)): + def _testIdle1_1(self, res): # trying to force a build now will cause an error. Regular builds # just wait for the slave to re-appear, but forced builds that # cannot be run right away trigger NoSlaveErrors - fb = c.getBuilder("dummy").forceBuild + fb = self.control.getBuilder("dummy").forceBuild self.failUnlessRaises(interfaces.NoSlaveError, fb, None, "forced build") def testIdle2(self): - # this used to be a testIdle2.skip="msg", but that caused a - # UserWarning when used with Twisted-1.3, which I think was an - # indication of an internal Trial problem - raise unittest.SkipTest("SF#1083403 pre-ping not yet implemented") - m,s,c,s1 = self.disconnectSetup() # now suppose the slave goes missing + self.slave.bf.continueTrying = 0 self.disappearSlave() - # forcing a build will work: the build will begin, since we think we - # have a slave. The build will fail, however, because of a timeout - # error. 
- bc = c.getBuilder("dummy").forceBuild(None, "forced build") - bs = bc.getStatus() - print "build started" - d = bs.waitUntilFinished() - dr(d, 5) - print bs.getText() - - def testSlaveTimeout(self): - m,s,c,s1 = self.disconnectSetup2() # fast timeout - - # now suppose the slave goes missing. We want to find out when it - # creates a new Broker, so we reach inside and mark it with the - # well-known sigil of impending messy death. - bd = self.slave.getServiceNamed("bot").builders["dummy"] - broker = bd.remote.broker - broker.redshirt = 1 - - # make sure the keepalives will keep the connection up - later = now() + 5 - while 1: - if now() > later: - break - bd = self.slave.getServiceNamed("bot").builders["dummy"] - if not bd.remote or not hasattr(bd.remote.broker, "redshirt"): - self.fail("slave disconnected when it shouldn't have") - reactor.iterate(0.01) + # forcing a build will work: the build will detect that the slave is no + # longer available and will be re-queued. Wait 5 seconds, then check + # to make sure the build is still in the 'waiting for a slave' queue. + self.control.getBuilder("dummy").original.START_BUILD_TIMEOUT = 1 + req = BuildRequest("forced build", SourceStamp()) + self.failUnlessEqual(req.startCount, 0) + self.control.getBuilder("dummy").requestBuild(req) + # this should ping the slave, which doesn't respond, and then give up + # after a second. The BuildRequest will be re-queued, and its + # .startCount will be incremented. + d = defer.Deferred() + d.addCallback(self._testIdle2_1, req) + reactor.callLater(3, d.callback, None) + return maybeWait(d, 5) + testIdle2.timeout = 5 - d = self.master.botmaster.waitUntilBuilderDetached("dummy") - # whoops! how careless of me. - self.disappearSlave() + def _testIdle2_1(self, res, req): + self.failUnlessEqual(req.startCount, 1) + cancelled = req.cancel() + self.failUnless(cancelled) - # the slave will realize the connection is lost within 2 seconds, and - # reconnect.
- dr(d, 5) - d = self.master.botmaster.waitUntilBuilderAttached("dummy") - dr(d, 5) - # make sure it is a new connection (i.e. a new Broker) - bd = self.slave.getServiceNamed("bot").builders["dummy"] - self.failUnless(bd.remote, "hey, slave isn't really connected") - self.failIf(hasattr(bd.remote.broker, "redshirt"), - "hey, slave's Broker is still marked for death") def testBuild1(self): - m,s,c,s1 = self.disconnectSetup() # this next sequence is timing-dependent. The dummy build takes at # least 3 seconds to complete, and this batch of commands must # complete within that time. # - bc = c.getBuilder("dummy").forceBuild(None, "forced build") - bs = bc.getStatus() + d = self.control.getBuilder("dummy").forceBuild(None, "forced build") + d.addCallback(self._testBuild1_1) + return maybeWait(d) + def _testBuild1_1(self, bc): + bs = bc.getStatus() # now kill the slave before it gets to start the first step d = self.shutdownSlave() # dies before it gets started - dr(d, 5) + d.addCallback(self._testBuild1_2, bs) + return d # TODO: this used to have a 5-second timeout + def _testBuild1_2(self, res, bs): # now examine the just-stopped build and make sure it is really # stopped. This is checking for bugs in which the slave-detach gets # missed or causes an exception which prevents the build from being # marked as "finished due to an error". 
d = bs.waitUntilFinished() - dr(d, 5) + d2 = self.master.botmaster.waitUntilBuilderDetached("dummy") + dl = defer.DeferredList([d, d2]) + dl.addCallback(self._testBuild1_3, bs) + return dl # TODO: this had a 5-second timeout too - self.failUnlessEqual(s1.getState()[0], "offline") + def _testBuild1_3(self, res, bs): + self.failUnlessEqual(self.s1.getState()[0], "offline") self.verifyDisconnect(bs) + def testBuild2(self): - m,s,c,s1 = self.disconnectSetup() # this next sequence is timing-dependent - bc = c.getBuilder("dummy").forceBuild(None, "forced build") + d = self.control.getBuilder("dummy").forceBuild(None, "forced build") + d.addCallback(self._testBuild1_1) + return maybeWait(d, 30) + testBuild2.timeout = 30 + + def _testBuild1_1(self, bc): bs = bc.getStatus() # shutdown the slave while it's running the first step reactor.callLater(0.5, self.shutdownSlave) d = bs.waitUntilFinished() - d.addCallback(self._testBuild2_1, s1, bs) - return maybeWait(d, 30) - testBuild2.timeout = 30 + d.addCallback(self._testBuild2_2, bs) + return d - def _testBuild2_1(self, res, s1, bs): + def _testBuild2_2(self, res, bs): # we hit here when the build has finished. The builder is still being # torn down, however, so spin for another second to allow the # callLater(0) in Builder.detached to fire. 
d = defer.Deferred() reactor.callLater(1, d.callback, None) - d.addCallback(self._testBuild2_2, s1, bs) + d.addCallback(self._testBuild2_3, bs) return d - def _testBuild2_2(self, res, s1, bs): - self.failUnlessEqual(s1.getState()[0], "offline") + def _testBuild2_3(self, res, bs): + self.failUnlessEqual(self.s1.getState()[0], "offline") self.verifyDisconnect(bs) def testBuild3(self): - m,s,c,s1 = self.disconnectSetup() # this next sequence is timing-dependent - bc = c.getBuilder("dummy").forceBuild(None, "forced build") + d = self.control.getBuilder("dummy").forceBuild(None, "forced build") + d.addCallback(self._testBuild3_1) + return maybeWait(d, 30) + testBuild3.timeout = 30 + + def _testBuild3_1(self, bc): bs = bc.getStatus() # kill the slave while it's running the first step reactor.callLater(0.5, self.killSlave) d = bs.waitUntilFinished() - d.addCallback(self._testBuild3_1, s1, bs) - return maybeWait(d, 30) - testBuild3.timeout = 30 + d.addCallback(self._testBuild3_2, bs) + return d - def _testBuild3_1(self, res, s1, bs): + def _testBuild3_2(self, res, bs): # the builder is still being torn down, so give it another second d = defer.Deferred() reactor.callLater(1, d.callback, None) - d.addCallback(self._testBuild3_2, s1, bs) + d.addCallback(self._testBuild3_3, bs) return d - def _testBuild3_2(self, res, s1, bs): - self.failUnlessEqual(s1.getState()[0], "offline") + def _testBuild3_3(self, res, bs): + self.failUnlessEqual(self.s1.getState()[0], "offline") self.verifyDisconnect(bs) def testBuild4(self): - m,s,c,s1 = self.disconnectSetup() # this next sequence is timing-dependent - bc = c.getBuilder("dummy").forceBuild(None, "forced build") + d = self.control.getBuilder("dummy").forceBuild(None, "forced build") + d.addCallback(self._testBuild4_1) + return maybeWait(d, 30) + testBuild4.timeout = 30 + + def _testBuild4_1(self, bc): bs = bc.getStatus() # kill the slave while it's running the second (remote) step reactor.callLater(1.5, self.killSlave) + d = 
bs.waitUntilFinished() + d.addCallback(self._testBuild4_2, bs) + return d - dr(bs.waitUntilFinished(), 30) + def _testBuild4_2(self, res, bs): # at this point, the slave is in the process of being removed, so it # could either be 'idle' or 'offline'. I think there is a # reactor.callLater(0) standing between here and the offline state. - reactor.iterate() # TODO: remove the need for this + #reactor.iterate() # TODO: remove the need for this - self.failUnlessEqual(s1.getState()[0], "offline") + self.failUnlessEqual(self.s1.getState()[0], "offline") self.verifyDisconnect2(bs) + def testInterrupt(self): - m,s,c,s1 = self.disconnectSetup() # this next sequence is timing-dependent - bc = c.getBuilder("dummy").forceBuild(None, "forced build") + d = self.control.getBuilder("dummy").forceBuild(None, "forced build") + d.addCallback(self._testInterrupt_1) + return maybeWait(d, 30) + testInterrupt.timeout = 30 + + def _testInterrupt_1(self, bc): bs = bc.getStatus() # halt the build while it's running the first step reactor.callLater(0.5, bc.stopBuild, "bang go splat") + d = bs.waitUntilFinished() + d.addCallback(self._testInterrupt_2, bs) + return d - dr(bs.waitUntilFinished(), 30) - + def _testInterrupt_2(self, res, bs): self.verifyDisconnect(bs) + def testDisappear(self): - m,s,c,s1 = self.disconnectSetup() - bc = c.getBuilder("dummy") + bc = self.control.getBuilder("dummy") # ping should succeed d = bc.ping(1) - d.addCallback(self._testDisappear_1, (m,s,c,s1,bc)) + d.addCallback(self._testDisappear_1, bc) return maybeWait(d) - def _testDisappear_1(self, res, (m,s,c,s1,bc)): + def _testDisappear_1(self, res, bc): self.failUnlessEqual(res, True) # now, before any build is run, make the slave disappear @@ -769,9 +593,8 @@ self.failUnlessEqual(res, False) def testDuplicate(self): - m,s,c,s1 = self.disconnectSetup() - bc = c.getBuilder("dummy") - bs = s.getBuilder("dummy") + bc = self.control.getBuilder("dummy") + bs = self.status.getBuilder("dummy") ss = bs.getSlave() 
self.failUnless(ss.isConnected()) @@ -784,13 +607,93 @@ d = self.master.botmaster.waitUntilBuilderDetached("dummy") # now let the new slave take over self.connectSlave2() - dr(d, 2) + d.addCallback(self._testDuplicate_1, ss) + return maybeWait(d, 2) + testDuplicate.timeout = 5 + + def _testDuplicate_1(self, res, ss): d = self.master.botmaster.waitUntilBuilderAttached("dummy") - dr(d, 2) + d.addCallback(self._testDuplicate_2, ss) + return d + def _testDuplicate_2(self, res, ss): self.failUnless(ss.isConnected()) self.failUnlessEqual(ss.getAdmin(), "two") + +class Disconnect2(RunMixin, unittest.TestCase): + + def setUp(self): + RunMixin.setUp(self) + # verify that disconnecting the slave during a build properly + # terminates the build + m = self.master + s = self.status + c = self.control + + m.loadConfig(config_2) + m.readConfig = True + m.startService() + + self.failUnlessEqual(s.getBuilderNames(), ["dummy", "testdummy"]) + self.s1 = s1 = s.getBuilder("dummy") + self.failUnlessEqual(s1.getName(), "dummy") + self.failUnlessEqual(s1.getState(), ("offline", None)) + self.failUnlessEqual(s1.getCurrentBuild(), None) + self.failUnlessEqual(s1.getLastFinishedBuild(), None) + self.failUnlessEqual(s1.getBuild(-1), None) + + d = self.connectSlave3() + d.addCallback(self._setup_disconnect2_1) + return maybeWait(d) + + def _setup_disconnect2_1(self, res): + self.failUnlessEqual(self.s1.getState(), ("idle", None)) + + + def testSlaveTimeout(self): + # now suppose the slave goes missing. We want to find out when it + # creates a new Broker, so we reach inside and mark it with the + # well-known sigil of impending messy death. 
+ bd = self.slave.getServiceNamed("bot").builders["dummy"] + broker = bd.remote.broker + broker.redshirt = 1 + + # make sure the keepalives will keep the connection up + d = defer.Deferred() + reactor.callLater(5, d.callback, None) + d.addCallback(self._testSlaveTimeout_1) + return maybeWait(d, 20) + testSlaveTimeout.timeout = 20 + + def _testSlaveTimeout_1(self, res): + bd = self.slave.getServiceNamed("bot").builders["dummy"] + if not bd.remote or not hasattr(bd.remote.broker, "redshirt"): + self.fail("slave disconnected when it shouldn't have") + + d = self.master.botmaster.waitUntilBuilderDetached("dummy") + # whoops! how careless of me. + self.disappearSlave() + # the slave will realize the connection is lost within 2 seconds, and + # reconnect. + d.addCallback(self._testSlaveTimeout_2) + return d + + def _testSlaveTimeout_2(self, res): + # the ReconnectingPBClientFactory will attempt a reconnect in two + # seconds. + d = self.master.botmaster.waitUntilBuilderAttached("dummy") + d.addCallback(self._testSlaveTimeout_3) + return d + + def _testSlaveTimeout_3(self, res): + # make sure it is a new connection (i.e. 
a new Broker) + bd = self.slave.getServiceNamed("bot").builders["dummy"] + self.failUnless(bd.remote, "hey, slave isn't really connected") + self.failIf(hasattr(bd.remote.broker, "redshirt"), + "hey, slave's Broker is still marked for death") + + class Basedir(RunMixin, unittest.TestCase): def testChangeBuilddir(self): m = self.master @@ -798,19 +701,26 @@ m.readConfig = True m.startService() - self.connectSlave() - bot = self.slave.bot - builder = bot.builders.get("dummy") + d = self.connectSlave() + d.addCallback(self._testChangeBuilddir_1) + return maybeWait(d) + + def _testChangeBuilddir_1(self, res): + self.bot = bot = self.slave.bot + self.builder = builder = bot.builders.get("dummy") self.failUnless(builder) self.failUnlessEqual(builder.builddir, "dummy") self.failUnlessEqual(builder.basedir, os.path.join("slavebase", "dummy")) - d = m.loadConfig(config_4_newbasedir) - dr(d) + d = self.master.loadConfig(config_4_newbasedir) + d.addCallback(self._testChangeBuilddir_2) + return d + def _testChangeBuilddir_2(self, res): + bot = self.bot # this causes the builder to be replaced - self.failIfIdentical(builder, bot.builders.get("dummy")) + self.failIfIdentical(self.builder, bot.builders.get("dummy")) builder = bot.builders.get("dummy") self.failUnless(builder) # the basedir should be updated @@ -819,7 +729,5 @@ os.path.join("slavebase", "dummy2")) # add a new builder, which causes the basedir list to be reloaded - d = m.loadConfig(config_4_newbuilder) - dr(d) - - + d = self.master.loadConfig(config_4_newbuilder) + return d --- NEW FILE: test_slaves.py --- # -*- test-case-name: buildbot.test.test_slaves -*- from twisted.trial import unittest from buildbot.twcompat import maybeWait from buildbot.test.runutils import RunMixin config_1 = """ from buildbot.process import step, factory s = factory.s BuildmasterConfig = c = {} c['bots'] = [('bot1', 'sekrit'), ('bot2', 'sekrit')] c['sources'] = [] c['schedulers'] = [] c['slavePortnum'] = 0 c['schedulers'] = [] f = 
factory.BuildFactory([s(step.RemoteDummy, timeout=1)]) c['builders'] = [ {'name': 'b1', 'slavename': 'bot1', 'builddir': 'b1', 'factory': f}, ] """ class Slave(RunMixin, unittest.TestCase): skip = "Not implemented yet" def setUp(self): RunMixin.setUp(self) self.master.loadConfig(config_1) self.master.startService() d = self.connectSlave(["b1"]) return maybeWait(d) def testClaim(self): # have three slaves connect for the same builder, make sure all show # up in the list of known slaves. # run a build, make sure it doesn't freak out. # Disable the first slave, so that a slaveping will timeout. Then # start a build, and verify that the non-failing (second) one is # claimed for the build, and that the failing one is moved to the # back of the list. print "done" def testDontClaimPingingSlave(self): # have two slaves connect for the same builder. Do something to the # first one so that slavepings are delayed (but do not fail # outright). # submit a build, which should claim the first slave and send the # slaveping. While that is (slowly) happening, submit a second build. # Verify that the second build does not claim the first slave (since # it is busy doing the slaveping). pass def testFirstComeFirstServed(self): # submit three builds, then connect a slave which fails the # slaveping. The first build will claim the slave, do the slaveping, # give up, and re-queue the build. Verify that the build gets # re-queued in front of all other builds. This may be tricky, because # the other builds may attempt to claim the just-failed slave. 
pass Index: test_status.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_status.py,v retrieving revision 1.21 retrieving revision 1.22 diff -u -d -r1.21 -r1.22 --- test_status.py 23 May 2005 17:45:55 -0000 1.21 +++ test_status.py 19 Jul 2005 23:11:59 -0000 1.22 @@ -7,6 +7,7 @@ dr = unittest.deferredResult from buildbot import interfaces +from buildbot.sourcestamp import SourceStamp from buildbot.twcompat import implements, providedBy from buildbot.status import builder try: @@ -79,7 +80,7 @@ def __init__(self, parent, number, results): builder.BuildStatus.__init__(self, parent, number) self.results = results - self.sourceStamp = ("1.14", None) + self.source = SourceStamp(revision="1.14") self.reason = "build triggered by changes" self.finished = True def getLogs(self): Index: test_steps.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_steps.py,v retrieving revision 1.13 retrieving revision 1.14 diff -u -d -r1.13 -r1.14 --- test_steps.py 6 May 2005 06:40:04 -0000 1.13 +++ test_steps.py 19 Jul 2005 23:11:58 -0000 1.14 @@ -20,6 +20,7 @@ from twisted.internet import reactor from twisted.internet.defer import Deferred +from buildbot.sourcestamp import SourceStamp from buildbot.process import step, base, factory from buildbot.process.step import ShellCommand #, ShellCommands from buildbot.status import builder @@ -39,6 +40,7 @@ class FakeBuilder: statusbag = None name = "fakebuilder" +class FakeSlaveBuilder: def getSlaveCommandVersion(self, command, oldversion=None): return "1.10" @@ -68,9 +70,11 @@ self.builder_status.basedir = "test_steps" os.mkdir(self.builder_status.basedir) self.build_status = self.builder_status.newBuild() - self.build = base.Build() + req = base.BuildRequest("reason", SourceStamp()) + self.build = base.Build([req]) self.build.build_status = self.build_status # fake it self.build.builder = 
self.builder + self.build.slavebuilder = FakeSlaveBuilder() self.remote = FakeRemote() self.finished = 0 @@ -160,5 +164,6 @@ (step.Test, {'command': "make testharder"}), ] f = factory.ConfigurableBuildFactory(steps) - b = f.newBuild() + req = base.BuildRequest("reason", SourceStamp()) + b = f.newBuild([req]) #for s in b.steps: print s.name Index: test_vc.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_vc.py,v retrieving revision 1.32 retrieving revision 1.33 diff -u -d -r1.32 -r1.33 --- test_vc.py 18 Jun 2005 03:35:21 -0000 1.32 +++ test_vc.py 19 Jul 2005 23:11:58 -0000 1.33 @@ -5,7 +5,7 @@ from twisted.trial import unittest dr = unittest.deferredResult -from twisted.internet import defer, reactor +from twisted.internet import defer, reactor, utils #defer.Deferred.debug = True from twisted.python import log @@ -13,28 +13,28 @@ from buildbot import master, interfaces [...1548 lines suppressed...] + r = base.BuildRequest("forced", SourceStamp()) + b = base.Build([r]) + s = step.SVN(svnurl="dummy", workdir=None, build=b) self.failUnlessEqual(s.computeSourceRevision(b.allChanges()), None) def testSVN2(self): - b = base.Build() - b.treeStableTimer = 100 - self.addChange(b, revision=4) - self.addChange(b, revision=10) - self.addChange(b, revision=67) - s = step.SVN(svnurl=None, workdir=None, build=b) + c = [] + c.append(self.makeChange(revision=4)) + c.append(self.makeChange(revision=10)) + c.append(self.makeChange(revision=67)) + r = base.BuildRequest("forced", SourceStamp(changes=c)) + b = base.Build([r]) + s = step.SVN(svnurl="dummy", workdir=None, build=b) self.failUnlessEqual(s.computeSourceRevision(b.allChanges()), 67) Index: test_web.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_web.py,v retrieving revision 1.18 retrieving revision 1.19 diff -u -d -r1.18 -r1.19 --- test_web.py 17 May 2005 10:14:10 -0000 
1.18 +++ test_web.py 19 Jul 2005 23:11:59 -0000 1.19 @@ -11,7 +11,7 @@ from twisted.internet.interfaces import IReactorUNIX from twisted.web import client -from buildbot import master, interfaces +from buildbot import master, interfaces, buildset, sourcestamp from buildbot.twcompat import providedBy from buildbot.status import html, builder from buildbot.changes.changes import Change @@ -32,13 +32,14 @@ interfaces.IControl) -config1 = """ -BuildmasterConfig = { +base_config = """ +from buildbot.status import html +BuildmasterConfig = c = { 'bots': [], 'sources': [], + 'schedulers': [], 'builders': [], 'slavePortnum': 0, - '%(k)s': %(v)s, } """ @@ -95,16 +96,7 @@ def test_webPortnum(self): # run a regular web server on a TCP socket - config = """ -from buildbot.status import html -BuildmasterConfig = { - 'bots': [], - 'sources': [], - 'builders': [], - 'slavePortnum': 0, - 'status': [html.Waterfall(http_port=0)], - } -""" + config = base_config + "c['status'] = [html.Waterfall(http_port=0)]\n" os.mkdir("test_web1") self.master = m = ConfiguredMaster("test_web1", config) m.startService() @@ -120,16 +112,8 @@ # running a t.web.distrib server over a UNIX socket if not providedBy(reactor, IReactorUNIX): raise unittest.SkipTest("UNIX sockets not supported here") - config = """ -from buildbot.status import html -BuildmasterConfig = { - 'bots': [], - 'sources': [], - 'builders': [], - 'slavePortnum': 0, - 'status': [html.Waterfall(distrib_port='.web-pb')], - } -""" + config = (base_config + + "c['status'] = [html.Waterfall(distrib_port='.web-pb')]\n") os.mkdir("test_web2") self.master = m = ConfiguredMaster("test_web2", config) m.startService() @@ -145,16 +129,8 @@ def test_webPathname_port(self): # running a t.web.distrib server over TCP - config = """ -from buildbot.status import html -BuildmasterConfig = { - 'bots': [], - 'sources': [], - 'builders': [], - 'slavePortnum': 0, - 'status': [html.Waterfall(distrib_port=0)], - } -""" + config = (base_config + + "c['status'] 
= [html.Waterfall(distrib_port=0)]\n") os.mkdir("test_web3") self.master = m = ConfiguredMaster("test_web3", config) m.startService() @@ -169,17 +145,11 @@ def test_waterfall(self): # this is the right way to configure the Waterfall status - config1 = """ -from buildbot.status import html -from buildbot.changes import mail -BuildmasterConfig = { - 'bots': [], - 'sources': [mail.SyncmailMaildirSource('my-maildir')], - 'builders': [], - 'slavePortnum': 0, - 'status': [html.Waterfall(http_port=0)], - } -""" + config1 = \ + (base_config + \ + "from buildbot.changes import mail\n" + + "c['sources'] = [mail.SyncmailMaildirSource('my-maildir')]\n" + + "c['status'] = [html.Waterfall(http_port=0)]\n") os.mkdir("test_web4") os.mkdir("my-maildir"); os.mkdir("my-maildir/new") self.master = m = ConfiguredMaster("test_web4", config1) @@ -221,6 +191,7 @@ BuildmasterConfig = { 'bots': [('bot1', 'passwd1')], 'sources': [], + 'schedulers': [], 'builders': [{'name': 'builder1', 'slavename': 'bot1', 'builddir':'workdir', 'factory':f1}], 'slavePortnum': 0, @@ -235,8 +206,9 @@ # insert an event s = m.status.getBuilder("builder1") + req = base.BuildRequest("reason", sourcestamp.SourceStamp()) bs = s.newBuild() - build1 = base.Build() + build1 = base.Build([req]) step1 = step.BuildStep(build=build1) step1.name = "setup" bs.addStep(step1) From warner at users.sourceforge.net Tue Jul 19 23:12:02 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 23:12:02 +0000 Subject: [Buildbot-commits] buildbot/buildbot buildset.py,NONE,1.1 locks.py,NONE,1.1 sourcestamp.py,NONE,1.1 scheduler.py,NONE,1.1 interfaces.py,1.26,1.27 master.py,1.73,1.74 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv17398/buildbot Modified Files: interfaces.py master.py Added Files: buildset.py locks.py sourcestamp.py scheduler.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-239 Creator: Brian Warner 
merge in build-on-branch code: Merged from warner at monolith.lothar.com--2005 (patch 0-18, 40-41) Patches applied: * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-40 Merged from arch at buildbot.sf.net--2004 (patch 232-238) * warner at monolith.lothar.com--2005/buildbot--dev--0--patch-41 Merged from local-usebranches (warner at monolith.lothar.com--2005/buildbot--usebranches--0) (patch 0-18) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--base-0 tag of warner at monolith.lothar.com--2005/buildbot--dev--0--patch-38 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-1 rearrange build scheduling * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-2 replace ugly 4-tuple with a distinct SourceStamp class * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-3 document upcoming features, clean up CVS branch= argument * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-4 Merged from arch at buildbot.sf.net--2004 (patch 227-231), warner at monolith.lothar.com--2005 (patch 39) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-5 implement per-Step Locks, add tests (which all fail) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-6 implement scheduler.Dependent, add (failing) tests * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-7 make test_dependencies work * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-8 finish making Locks work, tests now pass * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-9 fix test failures when run against twisted >2.0.1 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-10 rename test_interlock.py to test_locks.py * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-11 add more Locks tests, add branch examples to manual * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-12 rewrite
test_vc.py, create repositories in setUp rather than offline * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-13 make new tests work with twisted-1.3.0 * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-14 implement/test build-on-branch for most VC systems * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-15 minor changes: test-case-name tags, init cleanup * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-16 Merged from arch at buildbot.sf.net--2004 (patch 232-233) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-17 Merged from arch at buildbot.sf.net--2004 (patch 234-236) * warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-18 Merged from arch at buildbot.sf.net--2004 (patch 237-238), warner at monolith.lothar.com--2005 (patch 40) --- NEW FILE: buildset.py --- from twisted.internet import defer from buildbot.process import base from buildbot.status import builder class BuildSet: """I represent a set of potential Builds, all of the same source tree, across a specified list of Builders. I can represent a build of a specific version of the source tree (named by source.branch and source.revision), or a build of a certain set of Changes (source.changes=list).""" def __init__(self, builderNames, source, reason=None): """ @param source: a L{SourceStamp} """ self.builderNames = builderNames self.source = source self.reason = reason self.set_status = bss = builder.BuildSetStatus() bss.setSourceStamp(source) bss.setReason(reason) self.successWatchers = [] self.finishedWatchers = [] self.failed = False def waitUntilSuccess(self): """Return a Deferred that will fire (with an IBuildSetStatus) when we know whether or not this BuildSet will be a complete success (all builds succeeding). 
This means it will fire upon the first failing build, or upon the last successful one.""" # TODO: make it safe to call this after the buildset has completed d = defer.Deferred() self.successWatchers.append(d) return d def waitUntilFinished(self): """Return a Deferred that will fire when all builds have finished.""" d = defer.Deferred() self.finishedWatchers.append(d) return d def start(self, builders): """This is called by the BuildMaster to actually create and submit the BuildRequests.""" self.requests = [] reqs = [] # create the requests for b in builders: req = base.BuildRequest(self.reason, self.source) reqs.append((b, req)) self.requests.append(req) d = req.waitUntilFinished() d.addCallback(self.requestFinished, req) # now submit them self.status = {} # maps requests to BuildStatus for b,req in reqs: b.submitBuildRequest(req) def requestFinished(self, buildstatus, req): self.requests.remove(req) self.status[req] = buildstatus if buildstatus.getResults() == builder.FAILURE: if not self.failed: self.failed = True self.set_status.setResults(builder.FAILURE) self.notifySuccessWatchers() if not self.requests: self.set_status.setResults(builder.SUCCESS) self.notifyFinishedWatchers() def notifySuccessWatchers(self): for d in self.successWatchers: d.callback(self.set_status) self.successWatchers = [] def notifyFinishedWatchers(self): if not self.failed: self.notifySuccessWatchers() for d in self.finishedWatchers: d.callback(self.set_status) self.finishedWatchers = [] Index: interfaces.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/interfaces.py,v retrieving revision 1.26 retrieving revision 1.27 diff -u -d -r1.26 -r1.27 --- interfaces.py 15 May 2005 23:43:56 -0000 1.26 +++ interfaces.py 19 Jul 2005 23:11:59 -0000 1.27 @@ -34,6 +34,28 @@ """Should return a string which briefly describes this source. 
This string will be displayed in an HTML status page.""" +class IScheduler(Interface): + """I watch for Changes in the source tree and decide when to trigger + Builds. I create BuildSet objects and submit them to the BuildMaster. I + am a service, and the BuildMaster is always my parent.""" + + def addChange(change): + """A Change has just been dispatched by one of the ChangeSources. + Each Scheduler will receive this Change. I may decide to start a + build as a result, or I might choose to ignore it.""" + +class IUpstreamScheduler(Interface): + """This marks an IScheduler as being eligible for use as the 'upstream=' + argument to a buildbot.scheduler.Dependent instance.""" + + def subscribeToSuccessfulBuilds(target): + """Request that the target callable be invoked after every + successful buildset. The target will be called with a single + argument: the SourceStamp used by the successful builds.""" + +class ISourceStamp(Interface): + pass + class IEmailSender(Interface): """I know how to send email, and can be used by other parts of the Buildbot to contact developers.""" @@ -78,6 +100,33 @@ """Unregister an IStatusReceiver.
No further status messages will be delivered.""" +class IBuildSetStatus(Interface): + """I represent a set of Builds, each run on a separate Builder but all + using the same source tree.""" + + def getSourceStamp(): + pass + def getReason(): + pass + def getChanges(): + pass + def getResponsibleUsers(): + pass + def getInterestedUsers(): + pass + def getBuilds(): + """Return a list of IBuildStatus objects that represent my + component Builds.""" + def isFinished(): + pass + def waitUntilFirstFailure(): + pass + def waitUntilFinished(): + pass + def getResults(): + pass + + class ISlaveStatus(Interface): def getName(): """Return the name of the build slave.""" @@ -91,26 +140,36 @@ def isConnected(): """Return True if the slave is currently online, False if not.""" +class ISchedulerStatus(Interface): + def getName(): + """Return the name of this Scheduler (a string).""" + + def getPendingBuildsets(): + """Return an IBuildSet for all BuildSets that are pending. These + BuildSets are waiting for their tree-stable-timers to expire.""" + + class IBuilderStatus(Interface): def getName(): """Return the name of this Builder (a string).""" def getState(): - """Return a tuple (state, ETA, build=None) for this Builder. 'state' - is the so-called 'big-status', indicating overall status (as opposed - to which step is currently running). It is a string, one of - 'offline', 'idle', 'waiting', 'interlocked', or 'building'. In the - 'waiting' and 'building' states, 'ETA' may be a number indicating - how long the builder expectes to be in that state (expressed as - seconds from now). 'ETA' may be None if it cannot be estimated or - the state does not have an ETA. In the 'building' state, 'build' - will be an IBuildStatus object representing the current build.""" - # we could make 'build' valid for 'waiting' and 'interlocked' too + # TODO: this isn't nearly as meaningful as it used to be + """Return a tuple (state, build=None) for this Builder.
'state' is + the so-called 'big-status', indicating overall status (as opposed to + which step is currently running). It is a string, one of 'offline', + 'idle', or 'building'. In the 'building' state, 'build' will be an + IBuildStatus object representing the current build.""" def getSlave(): """Return an ISlaveStatus object for the buildslave that is used by this builder.""" + def getPendingBuilds(): + """Return an IBuildRequestStatus object for all upcoming builds + (those which are ready to go but which are waiting for a buildslave + to be available).""" + def getCurrentBuild(): """Return an IBuildStatus object for the current build in progress. If the state is not 'building', this will be None.""" @@ -150,41 +209,55 @@ delivered.""" class IBuildStatus(Interface): - """I represent the status of a single build, which may or may not be - finished.""" + """I represent the status of a single Build/BuildRequest. It could be + finished, in-progress, or not yet started.""" def getBuilder(): """ - Return the BuilderStatus that ran this build. + Return the BuilderStatus that owns this build. @rtype: implementor of L{IBuilderStatus} """ - def getNumber(): - """Within each builder, each Build has a number. Return it.""" + def isStarted(): + """Return a boolean. True means the build has started, False means it + is still in the pending queue.""" - def getPreviousBuild(): - """Convenience method. Returns None if the previous build is - unavailable.""" + def waitUntilStarted(): + """Return a Deferred that will fire (with this IBuildStatus instance + as an argument) when the build starts. If the build has already + started, this deferred will fire right away.""" - def getSourceStamp(): - """Return a tuple of (revision, patch) which can be used to re-create - the source tree that this build used. 'revision' is a string, the - sort you would pass to 'cvs co -r REVISION'.
'patch' is either None, - or a string which represents a patch that should be applied with - 'patch -p0 < PATCH' from the directory created by the checkout - operation. + def isFinished(): + """Return a boolean. True means the build has finished, False means + it is still running.""" + + def waitUntilFinished(): + """Return a Deferred that will fire when the build finishes. If the + build has already finished, this deferred will fire right away. The + callback is given this IBuildStatus instance as an argument.""" - This method will return None if the source information is no longer - available.""" - # TODO: it should be possible to expire the patch but still remember - # that the build was r123+something. def getReason(): """Return a string that indicates why the build was run. 'changes', 'forced', and 'periodic' are the most likely values. 'try' will be added in the future.""" + def getSourceStamp(): + """Return a tuple of (branch, revision, patch) which can be used to + re-create the source tree that this build used. 'branch' is a string + with a VC-specific meaning, or None to indicate that the checkout + step used its default branch. 'revision' is a string, the sort you + would pass to 'cvs co -r REVISION'. 'patch' is either None, or a + (level, diff) tuple which represents a patch that should be applied + with 'patch -pLEVEL < DIFF' from the directory created by the + checkout operation. + + This method will return None if the source information is no longer + available.""" + # TODO: it should be possible to expire the patch but still remember + # that the build was r123+something. + def getChanges(): """Return a list of Change objects which represent which source changes went into the build.""" @@ -204,6 +277,15 @@ make the Changes that went into it (build sheriffs, code-domain owners).""" + # once the build has started, the following methods become available + + def getNumber(): + """Within each builder, each Build has a number. 
Return it.""" + + def getPreviousBuild(): + """Convenience method. Returns None if the previous build is + unavailable.""" + def getSteps(): """Return a list of IBuildStepStatus objects. For invariant builds (those which always use the same set of Steps), this should always @@ -217,15 +299,6 @@ (seconds since the epoch) when the Build started and finished. If the build is still running, 'end' will be None.""" - def isFinished(): - """Return a boolean. True means the build has finished, False means - it is still running.""" - - def waitUntilFinished(): - """Return a Deferred that will fire when the build finishes. If the - build has already finished, this deferred will fire right away. The - callback is given this IBuildStatus instance as an argument.""" - # while the build is running, the following methods make sense. # Afterwards they return None @@ -569,11 +642,9 @@ @rtype: implementor of L{IStatusReceiver} """ - def builderChangedState(builderName, state, eta=None): + def builderChangedState(builderName, state): """Builder 'builderName' has changed state. The possible values for - 'state' are 'offline', 'idle', 'waiting', 'interlocked', and - 'building'. For waiting and building, 'eta' gives the number of - seconds from now that the state is expected to change.""" + 'state' are 'offline', 'idle', and 'building'.""" def buildStarted(builderName, build): """Builder 'builderName' has just started a build. The build is an @@ -652,12 +723,19 @@ themselves whether the change is interesting or not, and may initiate a build as a result.""" + def submitBuildSet(buildset): + """Submit a BuildSet object, which will eventually be run on all of + the builders listed therein.""" + # TODO: return a status object + def getBuilder(name): """Retrieve the IBuilderControl object for the given Builder.""" class IBuilderControl(Interface): - def forceBuild(who, reason): # TODO: add sourceStamp, patch - """Start a build of the latest sources. 
If 'who' is not None, it is + def forceBuild(who, reason): + """DEPRECATED, please use L{requestBuild} instead. + + Start a build of the latest sources. If 'who' is not None, it is string with the name of the user who is responsible for starting the build: they will be added to the 'interested users' list (so they may be notified via email or another Status object when it finishes). @@ -667,12 +745,22 @@ even if the Status object would normally only send results upon failures. - forceBuild() may raise NoSlaveError or BuilderInUseError if it + forceBuild() may raise L{NoSlaveError} or L{BuilderInUseError} if it cannot start the build. - forceBuild() returns an IBuildControl object which can be used to - further control the new build, or from which an IBuildStatus object - can be obtained.""" + forceBuild() returns a Deferred which fires with an L{IBuildControl} + object that can be used to further control the new build, or from + which an L{IBuildStatus} object can be obtained.""" + + def requestBuild(request): + """Queue a L{buildbot.process.base.BuildRequest} object for later + building.""" + + def getPendingBuilds(): + """Return a list of L{IBuildRequestControl} objects for this Builder. + Each one corresponds to a pending build that has not yet started (due + to a scarcity of build slaves). These upcoming builds can be canceled + through the control object.""" def getBuild(number): """Attempt to return an IBuildControl object for the given build. @@ -690,10 +778,14 @@ # or something. However the event that is emitted is most useful in # the Builder column, so it kinda fits here too. +class IBuildRequestControl(Interface): + def cancel(): + """Remove the build from the pending queue. Has no effect if the + build has already been started.""" + class IBuildControl(Interface): def getStatus(): """Return an IBuildStatus object for the Build that I control.""" def stopBuild(reason=""): """Halt the build. 
This has no effect if the build has already finished.""" - --- NEW FILE: locks.py --- # -*- test-case-name: buildbot.test.test_locks -*- from twisted.python import log from twisted.internet import reactor, defer from buildbot import util class BaseLock: owner = None description = "" def __init__(self, name): self.name = name self.waiting = [] def __repr__(self): return self.description def isAvailable(self): log.msg("%s isAvailable: self.owner=%s" % (self, self.owner)) return not self.owner def claim(self, owner): log.msg("%s claim(%s)" % (self, owner)) assert owner is not None self.owner = owner def release(self, owner): log.msg("%s release(%s)" % (self, owner)) assert owner is self.owner self.owner = None reactor.callLater(0, self.nowAvailable) def waitUntilAvailable(self, owner): log.msg("%s waitUntilAvailable(%s)" % (self, owner)) assert self.owner, "You aren't supposed to call this on a free Lock" d = defer.Deferred() self.waiting.append((d, owner)) return d def nowAvailable(self): log.msg("%s nowAvailable" % self) assert not self.owner if not self.waiting: return d,owner = self.waiting.pop(0) d.callback(self) class MasterLock(BaseLock, util.ComparableMixin): compare_attrs = ['name'] def __init__(self, name): BaseLock.__init__(self, name) self.description = "<MasterLock(%s)>" % (name,) def getLock(self, slave): return self class SlaveLock(util.ComparableMixin): compare_attrs = ['name'] def __init__(self, name): self.name = name self.locks = {} def getLock(self, slavebuilder): slavename = slavebuilder.slave.slavename if not self.locks.has_key(slavename): lock = self.locks[slavename] = BaseLock(self.name) lock.description = "<SlaveLock(%s)[%s]>" % (self.name, slavename) self.locks[slavename] = lock return self.locks[slavename] Index: master.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/master.py,v retrieving revision 1.73 retrieving revision 1.74 diff -u -d -r1.73 -r1.74 --- master.py 22 May 2005 02:16:14 -0000 1.73 +++
master.py 19 Jul 2005 23:11:59 -0000 1.74 @@ -71,7 +71,7 @@ """ self.builders.remove(builder) if self.slave: - builder.detached() + builder.detached(self) return self.sendBuilderList() return defer.succeed(None) @@ -238,7 +238,7 @@ # if we sent the builders list because of a config # change, the Builder might already be attached. # Builder.attached will ignore us if this happens. - d = b.attached(remote, self.slave_commands) + d = b.attached(self, remote, self.slave_commands) dl.append(d) continue return defer.DeferredList(dl) @@ -270,7 +270,7 @@ self.slave = None self.slave_status.connected = False for b in self.builders: - b.detached() + b.detached(self) log.msg("Botmaster.detached(%s)" % self.slavename) @@ -279,9 +279,6 @@ """This is the master-side service which manages remote buildbot slaves. It provides them with BotPerspectives, and distributes file change notification messages to them. - - Any CVS changes that arrive should be handed to the .addChange method. - """ debug = 0 @@ -300,20 +297,33 @@ def waitUntilBuilderAttached(self, name): # convenience function for testing - d = defer.Deferred() b = self.builders[name] + if b.slaves: + return defer.succeed(None) + d = defer.Deferred() b.watchers['attach'].append(d) return d def waitUntilBuilderDetached(self, name): # convenience function for testing - d = defer.Deferred() - b = self.builders.get(name, None) - if not b or not b.remote: + b = self.builders.get(name) + if not b or not b.slaves: return defer.succeed(None) + d = defer.Deferred() b.watchers['detach'].append(d) return d + def waitUntilBuilderIdle(self, name): + # convenience function for testing + b = self.builders[name] + for sb in b.slaves.keys(): + if b.slaves[sb] != "idle": + d = defer.Deferred() + b.watchers['idle'].append(d) + return d + return defer.succeed(None) + + def addSlave(self, slavename): slave = BotPerspective(slavename) self.slaves[slavename] = slave @@ -349,7 +359,7 @@ self.builders[builder.name] = builder 
self.builderNames.append(builder.name) builder.setBotmaster(self) - self.checkInactiveInterlocks() # TODO?: do this in caller instead? + #self.checkInactiveInterlocks() # TODO?: do this in caller instead? slave = self.slaves[slavename] return slave.addBuilder(builder) @@ -366,16 +376,16 @@ b = self.builders[builder.name] # any linked interlocks will be made inactive before the builder is # removed - interlocks = [] - for i in b.feeders: - assert i not in interlocks - interlocks.append(i) - for i in b.interlocks: - assert i not in interlocks - interlocks.append(i) - for i in interlocks: - if self.debug: print " deactivating interlock", i - i.deactivate(self.builders) +## interlocks = [] +## for i in b.feeders: +## assert i not in interlocks +## interlocks.append(i) +## for i in b.interlocks: +## assert i not in interlocks +## interlocks.append(i) +## for i in interlocks: +## if self.debug: print " deactivating interlock", i +## i.deactivate(self.builders) del self.builders[builder.name] self.builderNames.remove(builder.name) slave = self.slaves.get(builder.slavename) @@ -418,22 +428,6 @@ def getPerspective(self, slavename): return self.slaves[slavename] - def addChange(self, change): - for b in self.builders.values(): - b.filesChanged(change) - - def forceBuild(self, name, reason="forced", who=None): - """Manually tell a builder with the given name to start a build. - Returns an IBuildControl object, which can be used to control or - observe the build.""" - - log.msg("BotMaster.forceBuild(%s)" % name) - b = self.builders.get(name) - if b: - return b.forceBuild(who, reason) - else: - log.msg("unknown builder '%s'" % name) - def shutdownSlaves(self): # TODO: make this into a bot method rather than a builder method for b in self.slaves.values(): @@ -612,6 +606,7 @@ self.statusTargets = [] + self.schedulers = [] self.bots = [] # this ChangeMaster is a dummy, only used by tests. 
In the real # buildmaster, where the BuildMaster instance is activated @@ -659,7 +654,6 @@ self.change_svc.disownServiceParent() self.change_svc = changes self.change_svc.basedir = self.basedir - self.change_svc.botmaster = self.botmaster self.change_svc.setName("changemaster") self.dispatcher.changemaster = self.change_svc self.change_svc.setServiceParent(self) @@ -729,9 +723,9 @@ log.err("config file must define BuildmasterConfig") raise - known_keys = "bots sources builders slavePortnum " + \ + known_keys = "bots sources schedulers builders slavePortnum " + \ "debugPassword manhole " + \ - "interlocks status projectName projectURL buildbotURL" + "status projectName projectURL buildbotURL" known_keys = known_keys.split() for k in config.keys(): if k not in known_keys: @@ -741,13 +735,13 @@ # required bots = config['bots'] sources = config['sources'] + schedulers = config['schedulers'] builders = config['builders'] slavePortnum = config['slavePortnum'] # optional debugPassword = config.get('debugPassword') manhole = config.get('manhole') - interlocks = config.get('interlocks', []) status = config.get('status', []) projectName = config.get('projectName') projectURL = config.get('projectURL') @@ -762,18 +756,21 @@ for name, passwd in bots: if name in ("debug", "change", "status"): raise KeyError, "reserved name '%s' used for a bot" % name - for i in interlocks: - name, feeders, watchers = i - if type(feeders) != type([]): - raise TypeError, "interlock feeders must be a list" - if type(watchers) != type([]): - raise TypeError, "interlock watchers must be a list" - bnames = feeders + watchers - for bname in bnames: - if bnames.count(bname) > 1: - why = ("builder '%s' appears multiple times for " + \ - "interlock %s") % (bname, name) - raise ValueError, why + if config.has_key('interlocks'): + raise KeyError("c['interlocks'] is no longer accepted") + +## for i in interlocks: +## name, feeders, watchers = i +## if type(feeders) != type([]): +## raise TypeError, 
"interlock feeders must be a list" +## if type(watchers) != type([]): +## raise TypeError, "interlock watchers must be a list" +## bnames = feeders + watchers +## for bname in bnames: +## if bnames.count(bname) > 1: +## why = ("builder '%s' appears multiple times for " + \ +## "interlock %s") % (bname, name) +## raise ValueError, why for s in status: assert interfaces.IStatusReceiver(s) @@ -796,6 +793,31 @@ % (b['name'], b['builddir'])) dirnames.append(b['builddir']) + # assert that all locks used by the Builds and their Steps are + # uniquely named. + locks = {} + for b in builders: + for l in b.get('locks', []): + if locks.has_key(l.name): + if locks[l.name] is not l: + raise ValueError("Two different locks (%s and %s) " + "share the name %s" + % (l, locks[l.name], l.name)) + else: + locks[l.name] = l + # TODO: this will break with any BuildFactory that doesn't use a + # .steps list, but I think the verification step is more + # important. + for s in b['factory'].steps: + for l in s[1].get('locks', []): + if locks.has_key(l.name): + if locks[l.name] is not l: + raise ValueError("Two different locks (%s and %s)" + " share the name %s" + % (l, locks[l.name], l.name)) + else: + locks[l.name] = l + # now we're committed to implementing the new configuration, so do # it atomically @@ -829,6 +851,7 @@ manhole.setServiceParent(self) dl.append(self.loadConfig_Sources(sources)) + dl.append(self.loadConfig_Schedulers(schedulers)) # add/remove self.botmaster.builders to match builders. The # botmaster will handle startup/shutdown issues. 
@@ -851,7 +874,7 @@ self.slavePortnum = slavePortnum # self.interlocks: - self.loadConfig_Interlocks(interlocks) + #self.loadConfig_Interlocks(interlocks) log.msg("configuration updated") self.readConfig = True @@ -888,6 +911,15 @@ for source in sources if source not in self.change_svc] return defer.DeferredList(dl) + def loadConfig_Schedulers(self, newschedulers): + old = [s for s in self.schedulers if s not in newschedulers] + [self.schedulers.remove(s) for s in old] + dl = [s.disownServiceParent() for s in old] + [s.setServiceParent(self) + for s in newschedulers if s not in self.schedulers] + self.schedulers = newschedulers + return defer.DeferredList(dl) + def loadConfig_Builders(self, newBuilders): dl = [] old = self.botmaster.getBuildernames() @@ -1001,6 +1033,27 @@ self.botmaster.addInterlock(i) + def addChange(self, change): + for s in self.schedulers: + s.addChange(change) + + def submitBuildSet(self, bs): + # determine the set of Builders to use + builders = [] + for name in bs.builderNames: + b = self.botmaster.builders.get(name) + if b: + if b not in builders: + builders.append(b) + continue + # TODO: add aliases like 'all' + raise KeyError("no such builder named '%s'" % name) + + # now tell the BuildSet to create BuildRequests for all those + # Builders and submit them + bs.start(builders) + + class Control: if implements: implements(interfaces.IControl) @@ -1013,6 +1066,10 @@ def addChange(self, change): self.master.change_svc.addChange(change) + def submitBuildSet(self, bs): + self.master.submitBuildSet(bs) + # TODO: return a BuildSetStatus + def getBuilder(self, name): b = self.master.botmaster.builders[name] return interfaces.IBuilderControl(b) --- NEW FILE: scheduler.py --- # -*- test-case-name: buildbot.test.test_dependencies -*- import time from twisted.internet import reactor from twisted.application import service, internet from twisted.python import log from buildbot import interfaces, buildset, util from buildbot.util import now from 
buildbot.status import builder from buildbot.twcompat import implements, providedBy from buildbot.sourcestamp import SourceStamp class BaseScheduler(service.MultiService, util.ComparableMixin): if implements: implements(interfaces.IScheduler) else: __implements__ = interfaces.IScheduler, def __init__(self, name): service.MultiService.__init__(self) self.name = name def __repr__(self): return "<Scheduler '%s'>" % self.name def submit(self, bs): self.parent.submitBuildSet(bs) class BaseUpstreamScheduler(BaseScheduler): if implements: implements(interfaces.IUpstreamScheduler) else: __implements__ = interfaces.IUpstreamScheduler, def __init__(self, name): BaseScheduler.__init__(self, name) self.successWatchers = [] def subscribeToSuccessfulBuilds(self, watcher): self.successWatchers.append(watcher) def unsubscribeToSuccessfulBuilds(self, watcher): self.successWatchers.remove(watcher) def submit(self, bs): d = bs.waitUntilFinished() d.addCallback(self.buildSetFinished) self.parent.submitBuildSet(bs) def buildSetFinished(self, bss): if not self.running: return if bss.getResults() == builder.SUCCESS: ss = bss.getSourceStamp() for w in self.successWatchers: w(ss) class Scheduler(BaseUpstreamScheduler): """The default Scheduler class will run a build after some period of time called the C{treeStableTimer}, on a given set of Builders. It only pays attention to a single branch. You can provide a C{fileIsImportant} function which will evaluate each Change to decide whether or not it should trigger a new build. """ compare_attrs = ('name', 'treeStableTimer', 'builderNames', 'branch', 'fileIsImportant') def __init__(self, name, branch, treeStableTimer, builderNames, fileIsImportant=None): """ @param name: the name of this Scheduler @param branch: The branch name that the Scheduler should pay attention to. Any Change that is not on this branch will be ignored. It can be set to None to only pay attention to the default branch.
@param treeStableTimer: the duration, in seconds, for which the tree must remain unchanged before a build will be triggered. This is intended to avoid builds of partially-committed fixes. @param builderNames: a list of Builder names. When this Scheduler decides to start a set of builds, they will be run on the Builders named by this list. @param fileIsImportant: A callable which takes one argument (a Change instance) and returns True if the change is worth building, and False if it is not. Unimportant Changes are accumulated until the build is triggered by an important change. The default value of None means that all Changes are important. """ BaseUpstreamScheduler.__init__(self, name) self.treeStableTimer = treeStableTimer for b in builderNames: assert type(b) is str self.builderNames = builderNames self.branch = branch if fileIsImportant: assert callable(fileIsImportant) self.fileIsImportant = fileIsImportant self.importantChanges = [] self.unimportantChanges = [] self.nextBuildTime = None self.timer = None def fileIsImportant(self, change): # note that externally-provided fileIsImportant callables are # functions, not methods, and will only receive one argument. Or you # can override this method, in which case it will behave like a # normal method. 
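[Editorial aside: the fileIsImportant hook described in the comment above takes a Change and returns a boolean. A standalone callable of that shape might look like the following sketch; the filtering policy and suffix list are illustrative assumptions, not part of this commit, though the `files` attribute does exist on buildbot's Change objects.]

```python
# Illustrative fileIsImportant callable: receives a Change and returns True
# only when the change is worth building. The doc-only suffix list below is
# a hypothetical policy chosen for the example.
def file_is_important(change):
    boring = (".txt", ".html")  # documentation-only suffixes (assumption)
    for name in change.files:
        if not name.endswith(boring):
            return True   # at least one non-documentation file changed
    return False          # documentation-only change: skip the build
```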
return True def addChange(self, change): if change.branch != self.branch: log.msg("%s ignoring off-branch %s" % (self, change)) return if self.fileIsImportant(change): self.addImportantChange(change) else: self.addUnimportantChange(change) def addImportantChange(self, change): log.msg("%s: change is important, adding %s" % (self, change)) self.importantChanges.append(change) self.nextBuildTime = max(self.nextBuildTime, change.when + self.treeStableTimer) self.setTimer(self.nextBuildTime) def addUnimportantChange(self, change): log.msg("%s: change is not important, adding %s" % (self, change)) self.unimportantChanges.append(change) def setTimer(self, when): log.msg("%s: setting timer to %s" % (self, time.strftime("%H:%M:%S", time.localtime(when)))) now = util.now() if when < now: when = now + 1 if self.timer: self.timer.cancel() self.timer = reactor.callLater(when - now, self.fireTimer) def stopTimer(self): if self.timer: self.timer.cancel() self.timer = None def fireTimer(self): # clear out our state self.timer = None self.nextBuildTime = None changes = self.importantChanges + self.unimportantChanges self.importantChanges = [] self.unimportantChanges = [] # create a BuildSet, submit it to the BuildMaster bs = buildset.BuildSet(self.builderNames, SourceStamp(changes=changes)) self.submit(bs) def stopService(self): self.stopTimer() return service.MultiService.stopService(self) class AnyBranchScheduler(BaseUpstreamScheduler): """This Scheduler will handle changes on a variety of branches. It will accumulate Changes for each branch separately. It works by creating a separate Scheduler for each new branch it sees.""" schedulerFactory = Scheduler compare_attrs = ('name', 'branches', 'treeStableTimer', 'builderNames', 'fileIsImportant') def __init__(self, name, branches, treeStableTimer, builderNames, fileIsImportant=None): """ @param name: the name of this Scheduler @param branches: The branch names that the Scheduler should pay attention to. 
Any Change that is not on one of these branches will be ignored. It can be set to None to accept changes from any branch. @param treeStableTimer: the duration, in seconds, for which the tree must remain unchanged before a build will be triggered. This is intended to avoid builds of partially-committed fixes. @param builderNames: a list of Builder names. When this Scheduler decides to start a set of builds, they will be run on the Builders named by this list. @param fileIsImportant: A callable which takes one argument (a Change instance) and returns True if the change is worth building, and False if it is not. Unimportant Changes are accumulated until the build is triggered by an important change. The default value of None means that all Changes are important. """ BaseUpstreamScheduler.__init__(self, name) self.treeStableTimer = treeStableTimer for b in builderNames: assert type(b) is str self.builderNames = builderNames self.branches = branches if fileIsImportant: assert callable(fileIsImportant) self.fileIsImportant = fileIsImportant self.schedulers = {} # one per branch def addChange(self, change): branch = change.branch if self.branches and branch not in self.branches: log.msg("%s ignoring off-branch %s" % (self, change)) return s = self.schedulers.get(branch) if not s: name = self.name + "." + branch s = self.schedulerFactory(name, branch, self.treeStableTimer, self.builderNames, self.fileIsImportant) s.successWatchers = self.successWatchers s.setServiceParent(self) # TODO: does this result in schedulers that stack up forever? # When I make the persistify-pass, think about this some more. 
self.schedulers[branch] = s s.addChange(change) def submitBuildSet(self, bs): self.parent.submitBuildSet(bs) class Dependent(BaseUpstreamScheduler): """This scheduler runs some set of 'downstream' builds when the 'upstream' scheduler has completed successfully.""" compare_attrs = ('name', 'upstream', 'builders') def __init__(self, name, upstream, builderNames): assert providedBy(upstream, interfaces.IUpstreamScheduler) BaseUpstreamScheduler.__init__(self, name) self.upstream = upstream self.builderNames = builderNames def startService(self): service.MultiService.startService(self) self.upstream.subscribeToSuccessfulBuilds(self.upstreamBuilt) def stopService(self): d = service.MultiService.stopService(self) self.upstream.unsubscribeToSuccessfulBuilds(self.upstreamBuilt) return d def upstreamBuilt(self, ss): bs = buildset.BuildSet(self.builderNames, ss) self.submit(bs) class Periodic(BaseUpstreamScheduler): """Instead of watching for Changes, this Scheduler can just start a build at fixed intervals. The C{periodicBuildTimer} parameter sets the number of seconds to wait between such periodic builds. The first build will be run immediately.""" # TODO: consider having this watch another (changed-based) scheduler and # merely enforce a minimum time between builds. 
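[Editorial aside: the Periodic scheduler above leans on Twisted's TimerService, which calls its function immediately and then once per interval. The resulting firing schedule can be sketched in plain Python; this is an illustration, not buildbot code.]

```python
def periodic_fire_times(start, interval, horizon):
    """Fire times of a TimerService-style timer: one run immediately at
    `start`, then one every `interval` seconds, up to and including
    `horizon`."""
    times = []
    t = start
    while t <= horizon:
        times.append(t)
        t += interval
    return times
```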
compare_attrs = ('name', 'builderNames', 'periodicBuildTimer', 'branch') def __init__(self, name, builderNames, periodicBuildTimer, branch=None): BaseUpstreamScheduler.__init__(self, name) self.builderNames = builderNames self.periodicBuildTimer = periodicBuildTimer self.branch = branch self.timer = internet.TimerService(self.periodicBuildTimer, self.doPeriodicBuild) self.timer.setServiceParent(self) def doPeriodicBuild(self): bs = buildset.BuildSet(self.builderNames, SourceStamp(branch=self.branch)) self.submit(bs) --- NEW FILE: sourcestamp.py --- from buildbot import util, interfaces from buildbot.twcompat import implements class SourceStamp(util.ComparableMixin): """ a tuple of (branch, revision, patchspec, changes). C{branch} is always valid, although it may be None to let the Source step use its default branch. There are four possibilities for the remaining elements: - (revision=REV, patchspec=None, changes=None): build REV - (revision=REV, patchspec=(LEVEL, DIFF), changes=None): checkout REV, then apply a patch to the source, with C{patch -pPATCHLEVEL Update of /cvsroot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv26774 Modified Files: ChangeLog Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-244 Creator: Brian Warner remove deferredResult from test_web.py * buildbot/test/test_web.py (WebTest): remove use of deferredResult, bring it properly up to date with twisted-2.0 test guidelines Index: ChangeLog =================================================================== RCS file: /cvsroot/buildbot/buildbot/ChangeLog,v retrieving revision 1.468 retrieving revision 1.469 diff -u -d -r1.468 -r1.469 --- ChangeLog 19 Jul 2005 23:23:22 -0000 1.468 +++ ChangeLog 19 Jul 2005 23:51:52 -0000 1.469 @@ -1,5 +1,8 @@ 2005-07-19 Brian Warner + * buildbot/test/test_web.py (WebTest): remove use of deferredResult, + bring it properly up to date with twisted-2.0 test guidelines + * buildbot/master.py (BuildMaster): remove 
references to old 'interlock' module, this caused a bunch of post-merge test failures From warner at users.sourceforge.net Tue Jul 19 23:51:55 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Tue, 19 Jul 2005 23:51:55 +0000 Subject: [Buildbot-commits] buildbot/buildbot/test test_web.py,1.19,1.20 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/test In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv26774/buildbot/test Modified Files: test_web.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-244 Creator: Brian Warner remove deferredResult from test_web.py * buildbot/test/test_web.py (WebTest): remove use of deferredResult, bring it properly up to date with twisted-2.0 test guidelines Index: test_web.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_web.py,v retrieving revision 1.19 retrieving revision 1.20 diff -u -d -r1.19 -r1.20 --- test_web.py 19 Jul 2005 23:11:59 -0000 1.19 +++ test_web.py 19 Jul 2005 23:51:52 -0000 1.20 @@ -5,14 +5,13 @@ #log.startLogging(sys.stderr) from twisted.trial import unittest -dr = unittest.deferredResult from twisted.internet import reactor, defer from twisted.internet.interfaces import IReactorUNIX from twisted.web import client from buildbot import master, interfaces, buildset, sourcestamp -from buildbot.twcompat import providedBy +from buildbot.twcompat import providedBy, maybeWait from buildbot.status import html, builder from buildbot.changes.changes import Change from buildbot.process import step, base @@ -88,7 +87,7 @@ http._logDateTimeStop() if self.master: d = self.master.stopService() - dr(d) + return maybeWait(d) def find_waterfall(self, master): return filter(lambda child: isinstance(child, html.Waterfall), @@ -104,7 +103,10 @@ port = list(self.find_waterfall(m)[0])[0]._port.getHost().port d = client.getPage("http://localhost:%d/" % port) - page = dr(d, 10) + 
d.addCallback(self._test_webPortnum_1) + return maybeWait(d) + test_webPortnum.timeout = 10 + def _test_webPortnum_1(self, page): #print page self.failUnless(page) @@ -121,11 +123,14 @@ p = DistribUNIX("test_web2/.web-pb") d = client.getPage("http://localhost:%d/remote/" % p.portnum) - page = dr(d, 10) + d.addCallback(self._test_webPathname_1, p) + return maybeWait(d) + test_webPathname.timeout = 10 + def _test_webPathname_1(self, page, p): #print page self.failUnless(page) - dr(p.shutdown()) - + return p.shutdown() + def test_webPathname_port(self): # running a t.web.distrib server over TCP @@ -139,9 +144,12 @@ p = DistribTCP(dport) d = client.getPage("http://localhost:%d/remote/" % p.portnum) - page = dr(d, 10) + d.addCallback(self._test_webPathname_port_1, p) + return maybeWait(d) + test_webPathname_port.timeout = 10 + def _test_webPathname_port_1(self, page, p): self.failUnlessIn("BuildBot", page) - dr(p.shutdown()) + return p.shutdown() def test_waterfall(self): # this is the right way to configure the Waterfall status @@ -160,7 +168,10 @@ m.change_svc.addChange(Change("user", ["foo.c"], "comments")) d = client.getPage("http://localhost:%d/" % port) - page = dr(d) + d.addCallback(self._test_waterfall_1, port) + return maybeWait(d) + test_waterfall.timeout = 10 + def _test_waterfall_1(self, page, port): self.failUnless(page) self.failUnlessIn("current activity", page) self.failUnlessIn("", page) @@ -169,17 +180,23 @@ # phase=0 is really for debugging the waterfall layout d = client.getPage("http://localhost:%d/?phase=0" % port) - page = dr(d) + d.addCallback(self._test_waterfall_2, port) + return d + def _test_waterfall_2(self, page, port): self.failUnless(page) self.failUnlessIn("", page) d = client.getPage("http://localhost:%d/favicon.ico" % port) - icon = dr(d) + d.addCallback(self._test_waterfall_3, port) + return d + def _test_waterfall_3(self, icon, port): expected = open(html.buildbot_icon,"rb").read() self.failUnless(icon == expected) d = 
client.getPage("http://localhost:%d/changes" % port) - changes = dr(d) + d.addCallback(self._test_waterfall_4) + return d + def _test_waterfall_4(self, changes): self.failUnlessIn("
  • Syncmail mailing list in maildir " + "my-maildir
  • ", changes) @@ -225,19 +242,29 @@ bs.buildFinished() d = client.getPage("http://localhost:%d/" % port) - page = dr(d, 5) + d.addCallback(self._test_logfile_1, port) + return maybeWait(d) + test_logfile.timeout = 10 + def _test_logfile_1(self, page, port): self.failUnless(page) logurl = "http://localhost:%d/builder1/builds/0/setup/0" % port d = client.getPage(logurl) - logbody = dr(d, 5) + d.addCallback(self._test_logfile_2, port) + return d + def _test_logfile_2(self, logbody, port): self.failUnless(logbody) + logurl = "http://localhost:%d/builder1/builds/0/setup/0" % port d = client.getPage(logurl + "/text") - logtext = dr(d, 5) + d.addCallback(self._test_logfile_3, port) + return d + def _test_logfile_3(self, logtext, port): self.failUnlessEqual(logtext, "some stdout\n") logurl = "http://localhost:%d/builder1/builds/0/setup/1" % port d = client.getPage(logurl) - logbody = dr(d, 5) + d.addCallback(self._test_logfile_4) + return d + def _test_logfile_4(self, logbody): self.failUnlessEqual(logbody, "ouch") From warner at users.sourceforge.net Wed Jul 20 02:18:24 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 02:18:24 +0000 Subject: [Buildbot-commits] buildbot/buildbot/test test_vc.py,1.34,1.35 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/test In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv25026/buildbot/test Modified Files: test_vc.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-249 Creator: Brian Warner disable bazaar's revision cache, since it causes test failures * buildbot/test/test_vc.py (Arch.createRepository): and disable bazaar's revision cache, since they cause test failures (the multiple repositories we create all interfere with each other through the cache) Index: test_vc.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_vc.py,v retrieving revision 1.34 retrieving revision 1.35 diff -u -d 
-r1.34 -r1.35 --- test_vc.py 20 Jul 2005 00:28:06 -0000 1.34 +++ test_vc.py 20 Jul 2005 02:18:22 -0000 1.35 @@ -1172,6 +1172,18 @@ "Buildbot Test Suite "]) yield w; w.getResult() + if VCS.have['baz']: + # bazaar keeps a cache of revisions, but this test creates a new + # archive each time it is run, so the cache causes errors. + # Disable the cache to avoid these problems. This will be + # slightly annoying for people who run the buildbot tests under + # the same UID as one which uses baz on a regular basis, but + # bazaar doesn't give us a way to disable the cache just for this + # one archive. + cmd = "baz cache-config --disable" + w = self.do(tmp, cmd) + yield w; w.getResult() + w = waitForDeferred(self.unregisterRepository("tla")) yield w; w.getResult() From warner at users.sourceforge.net Wed Jul 20 02:18:24 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 02:18:24 +0000 Subject: [Buildbot-commits] buildbot ChangeLog,1.470,1.471 Message-ID: Update of /cvsroot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv25026 Modified Files: ChangeLog Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-249 Creator: Brian Warner disable bazaar's revision cache, since it causes test failures * buildbot/test/test_vc.py (Arch.createRepository): and disable bazaar's revision cache, since they cause test failures (the multiple repositories we create all interfere with each other through the cache) Index: ChangeLog =================================================================== RCS file: /cvsroot/buildbot/buildbot/ChangeLog,v retrieving revision 1.470 retrieving revision 1.471 diff -u -d -r1.470 -r1.471 --- ChangeLog 20 Jul 2005 00:28:06 -0000 1.470 +++ ChangeLog 20 Jul 2005 02:18:21 -0000 1.471 @@ -3,6 +3,9 @@ * buildbot/test/test_vc.py (Arch.createRepository): set the tla ID if it wasn't already set: most tla commands will fail unless one has been set. 
+ (Arch.createRepository): and disable bazaar's revision cache, since + they cause test failures (the multiple repositories we create all + interfere with each other through the cache) * buildbot/test/test_web.py (WebTest): remove use of deferredResult, bring it properly up to date with twisted-2.0 test guidelines From warner at users.sourceforge.net Wed Jul 20 00:28:10 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 00:28:10 +0000 Subject: [Buildbot-commits] buildbot/buildbot/test test_vc.py,1.33,1.34 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/test In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv2777/buildbot/test Modified Files: test_vc.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-247 Creator: Brian Warner make sure tla 'my-id' parameter is set, otherwise Arch tests fail * buildbot/test/test_vc.py (Arch.createRepository): set the tla ID if it wasn't already set: most tla commands will fail unless one has been set. Index: test_vc.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_vc.py,v retrieving revision 1.33 retrieving revision 1.34 diff -u -d -r1.33 -r1.34 --- test_vc.py 19 Jul 2005 23:11:58 -0000 1.33 +++ test_vc.py 20 Jul 2005 00:28:06 -0000 1.34 @@ -335,15 +335,17 @@ self.httpServer = reactor.listenTCP(0, self.site) self.httpPort = self.httpServer.getHost().port - def runCommand(self, basedir, command): - # all commands passed to do() should be strings. None of the - # arguments may have spaces. This makes the commands less verbose at - # the expense of restricting what they can specify. - command = command.split(" ") + def runCommand(self, basedir, command, failureIsOk=False): + # all commands passed to do() should be strings or lists. If they are + # strings, none of the arguments may have spaces. 
This makes the + # commands less verbose at the expense of restricting what they can + # specify. + if type(command) not in (list, tuple): + command = command.split(" ") d = utils.getProcessOutputAndValue(command[0], command[1:], env=os.environ, path=basedir) def check((out, err, code)): - if code != 0: + if code != 0 and not failureIsOk: log.msg("command %s finished with exit code %d" % (command, code)) log.msg(" and stdout %s" % (out,)) @@ -354,8 +356,8 @@ d.addCallback(check) return d - def do(self, basedir, command): - d = self.runCommand(basedir, command) + def do(self, basedir, command, failureIsOk=False): + d = self.runCommand(basedir, command, failureIsOk=failureIsOk) return waitForDeferred(d) def populate(self, basedir): @@ -1162,6 +1164,14 @@ self.populate(tmp) + w = self.do(tmp, "tla my-id", failureIsOk=True) + yield w; res = w.getResult() + if not res: + # tla will fail a lot of operations if you have not set an ID + w = self.do(tmp, ["tla", "my-id", + "Buildbot Test Suite "]) + yield w; w.getResult() + w = waitForDeferred(self.unregisterRepository("tla")) yield w; w.getResult() From warner at users.sourceforge.net Wed Jul 20 00:28:08 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 00:28:08 +0000 Subject: [Buildbot-commits] buildbot ChangeLog,1.469,1.470 Message-ID: Update of /cvsroot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv2777 Modified Files: ChangeLog Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-247 Creator: Brian Warner make sure tla 'my-id' parameter is set, otherwise Arch tests fail * buildbot/test/test_vc.py (Arch.createRepository): set the tla ID if it wasn't already set: most tla commands will fail unless one has been set. 
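The string-or-list argument handling that patch-247 adds to runCommand() can be sketched on its own. The helper below is a hypothetical standalone distillation, not buildbot code: plain strings are split on single spaces (compact, but no argument may contain a space), while lists and tuples pass through untouched, which is what lets the tla ID carry spaces.

```python
# Hypothetical sketch of the argument handling patch-247 adds to
# runCommand(): strings become an argv list by splitting on spaces,
# while lists/tuples are used as-is so individual arguments (such as a
# full tla ID) may themselves contain spaces.
def normalize_command(command):
    if not isinstance(command, (list, tuple)):
        command = command.split(" ")
    return list(command)

# String form: terse, but space-free arguments only.
short = normalize_command("tla my-id")
# List form: spaced arguments survive intact.
full = normalize_command(["tla", "my-id", "Buildbot Test Suite"])
```

The same split governs the failureIsOk probe in the test: "tla my-id" is short enough for the string form, while setting the ID requires the list form because the ID value contains spaces.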
Index: ChangeLog =================================================================== RCS file: /cvsroot/buildbot/buildbot/ChangeLog,v retrieving revision 1.469 retrieving revision 1.470 diff -u -d -r1.469 -r1.470 --- ChangeLog 19 Jul 2005 23:51:52 -0000 1.469 +++ ChangeLog 20 Jul 2005 00:28:06 -0000 1.470 @@ -1,5 +1,9 @@ 2005-07-19 Brian Warner + * buildbot/test/test_vc.py (Arch.createRepository): set the tla ID + if it wasn't already set: most tla commands will fail unless one + has been set. + * buildbot/test/test_web.py (WebTest): remove use of deferredResult, bring it properly up to date with twisted-2.0 test guidelines From warner at users.sourceforge.net Wed Jul 20 05:07:50 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 05:07:50 +0000 Subject: [Buildbot-commits] buildbot/buildbot master.py,1.76,1.77 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv21704/buildbot Modified Files: master.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-258 Creator: Brian Warner oops, update sanity-checking to handle Dependent instances * buildbot/master.py (BuildMaster.loadConfig): oops, sanity-check c['schedulers'] in such a way that we can actually accept Dependent instances * buildbot/test/test_config.py: check it Index: master.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/master.py,v retrieving revision 1.76 retrieving revision 1.77 diff -u -d -r1.76 -r1.77 --- master.py 20 Jul 2005 04:21:57 -0000 1.76 +++ master.py 20 Jul 2005 05:07:46 -0000 1.77 @@ -714,16 +714,16 @@ assert type(sources) in (list, tuple) for s in sources: - assert interfaces.IChangeSource(s) + assert interfaces.IChangeSource(s, None) # this assertion catches c['schedulers'] = Scheduler(), since # Schedulers are service.MultiServices and thus iterable. 
assert type(schedulers) in (list, tuple) for s in schedulers: - assert (interfaces.IScheduler(s) - or interfaces.IUpstreamScheduler(s)) + assert (interfaces.IScheduler(s, None) + or interfaces.IUpstreamScheduler(s, None)) assert type(status) in (list, tuple) for s in status: - assert interfaces.IStatusReceiver(s) + assert interfaces.IStatusReceiver(s, None) slavenames = [name for name,pw in bots] buildernames = [] @@ -866,7 +866,7 @@ def loadConfig_Schedulers(self, newschedulers): old = [s for s in self.schedulers if s not in newschedulers] [self.schedulers.remove(s) for s in old] - dl = [s.disownServiceParent() for s in old] + dl = [defer.maybeDeferred(s.disownServiceParent) for s in old] [s.setServiceParent(self) for s in newschedulers if s not in self.schedulers] self.schedulers = newschedulers From warner at users.sourceforge.net Wed Jul 20 05:07:55 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 05:07:55 +0000 Subject: [Buildbot-commits] buildbot/buildbot/test test_config.py,1.24,1.25 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/test In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv21704/buildbot/test Modified Files: test_config.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-258 Creator: Brian Warner oops, update sanity-checking to handle Dependent instances * buildbot/master.py (BuildMaster.loadConfig): oops, sanity-check c['schedulers'] in such a way that we can actually accept Dependent instances * buildbot/test/test_config.py: check it Index: test_config.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_config.py,v retrieving revision 1.24 retrieving revision 1.25 diff -u -d -r1.24 -r1.25 --- test_config.py 20 Jul 2005 04:21:57 -0000 1.24 +++ test_config.py 20 Jul 2005 05:07:48 -0000 1.25 @@ -505,7 +505,7 @@ self.schedulersCfg = \ """ -from buildbot.scheduler import Scheduler +from 
buildbot.scheduler import Scheduler, Dependent from buildbot.process.factory import BasicBuildFactory c = {} c['bots'] = [('bot1', 'pw1')] @@ -540,7 +540,7 @@ d.addBoth(self._testSchedulers_2) return d def _testSchedulers_2(self, res): - self.shouldBeFailure(res, AssertionError, components.CannotAdapt) + self.shouldBeFailure(res, AssertionError) # c['schedulers'] must point at real builders badcfg = self.schedulersCfg + \ """ @@ -563,6 +563,24 @@ self.failUnlessEqual(s.treeStableTimer, 60) self.failUnlessEqual(s.builderNames, ['builder1']) + newcfg = self.schedulersCfg + \ +""" +s1 = Scheduler('full', None, 60, ['builder1']) +c['schedulers'] = [s1, Dependent('downstream', s1, ['builder1'])] +""" + d = self.buildmaster.loadConfig(newcfg) + d.addCallback(self._testSchedulers_5) + return d + def _testSchedulers_5(self, res): + self.failUnlessEqual(len(self.buildmaster.schedulers), 2) + s = self.buildmaster.schedulers[0] + self.failUnless(isinstance(s, scheduler.Scheduler)) + s = self.buildmaster.schedulers[1] + self.failUnless(isinstance(s, scheduler.Dependent)) + self.failUnlessEqual(s.name, "downstream") + self.failUnlessEqual(s.builderNames, ['builder1']) + + def testBuilders(self): master = self.buildmaster master.loadConfig(emptyCfg) From warner at users.sourceforge.net Wed Jul 20 05:07:55 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 05:07:55 +0000 Subject: [Buildbot-commits] buildbot ChangeLog,1.474,1.475 Message-ID: Update of /cvsroot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv21704 Modified Files: ChangeLog Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-258 Creator: Brian Warner oops, update sanity-checking to handle Dependent instances * buildbot/master.py (BuildMaster.loadConfig): oops, sanity-check c['schedulers'] in such a way that we can actually accept Dependent instances * buildbot/test/test_config.py: check it Index: ChangeLog 
=================================================================== RCS file: /cvsroot/buildbot/buildbot/ChangeLog,v retrieving revision 1.474 retrieving revision 1.475 diff -u -d -r1.474 -r1.475 --- ChangeLog 20 Jul 2005 04:49:07 -0000 1.474 +++ ChangeLog 20 Jul 2005 05:07:53 -0000 1.475 @@ -1,5 +1,10 @@ 2005-07-19 Brian Warner + * buildbot/master.py (BuildMaster.loadConfig): oops, sanity-check + c['schedulers'] in such a way that we can actually accept + Dependent instances + * buildbot/test/test_config.py: check it + * buildbot/scheduler.py (Dependent.listBuilderNames): oops, add utility method to *all* the Schedulers (Periodic.listBuilderNames): same From warner at users.sourceforge.net Wed Jul 20 04:49:09 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 04:49:09 +0000 Subject: [Buildbot-commits] buildbot/docs buildbot.texinfo,1.10,1.11 Message-ID: Update of /cvsroot/buildbot/buildbot/docs In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv18786/docs Modified Files: buildbot.texinfo Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-256 Creator: Brian Warner update Locks docs, add listBuilderNames to all Schedulers * buildbot/scheduler.py (Dependent.listBuilderNames): oops, add utility method to *all* the Schedulers (Periodic.listBuilderNames): same * docs/buildbot.texinfo (Interlocks): update chapter to match reality Index: buildbot.texinfo =================================================================== RCS file: /cvsroot/buildbot/buildbot/docs/buildbot.texinfo,v retrieving revision 1.10 retrieving revision 1.11 diff -u -d -r1.10 -r1.11 --- buildbot.texinfo 20 Jul 2005 03:27:04 -0000 1.10 +++ buildbot.texinfo 20 Jul 2005 04:49:06 -0000 1.11 @@ -2705,11 +2705,17 @@ name. To use a lock, simply include it in the @code{locks=} argument of the - at code{BuildStep} or @code{Build} object that should obtain the lock -before it runs. 
These arguments accept a list of @code{Lock} objects: -the Step or Build will acquire all of them before it runs. The - at code{BuildFactory} also accepts @code{locks=}, and simply passes it -on to the @code{Build} that it creates. + at code{BuildStep} object that should obtain the lock before it runs. +This argument accepts a list of @code{Lock} objects: the Step will +acquire all of them before it runs. + +To claim a lock for the whole Build, add a @code{'locks'} key to the +builder specification dictionary with the same list of @code{Lock} +objects. (This is the dictionary that has the @code{'name'}, + at code{'slavename'}, @code{'builddir'}, and @code{'factory'} keys). The + at code{Build} object also accepts a @code{locks=} argument, but unless +you are writing your own @code{BuildFactory} subclass then it will be +easier to set the locks in the builder dictionary. Note that there are no partial-acquire or partial-release semantics: this prevents deadlocks caused by two Steps each waiting for a lock @@ -2760,12 +2766,15 @@ slow_lock = locks.SlaveLock("cpu") source = s(step.SVN, svnurl="http://example.org/svn/Trunk") -f22 = factory.Trial(source, trialpython=["python2.2"], locks=[slow_lock]) -f23 = factory.Trial(source, trialpython=["python2.3"], locks=[slow_lock]) -f24 = factory.Trial(source, trialpython=["python2.4"], locks=[slow_lock]) -b1 = @{'name': 'p22', 'slavename': 'bot-1, builddir='p22', 'factory': f22@} -b2 = @{'name': 'p23', 'slavename': 'bot-1, builddir='p23', 'factory': f23@} -b3 = @{'name': 'p24', 'slavename': 'bot-1, builddir='p24', 'factory': f24@} +f22 = factory.Trial(source, trialpython=["python2.2"]) +f23 = factory.Trial(source, trialpython=["python2.3"]) +f24 = factory.Trial(source, trialpython=["python2.4"]) +b1 = @{'name': 'p22', 'slavename': 'bot-1, builddir='p22', 'factory': f22, + 'locks': [slow_lock] @} +b2 = @{'name': 'p23', 'slavename': 'bot-1, builddir='p23', 'factory': f23, + 'locks': [slow_lock] @} +b3 = @{'name': 'p24', 
'slavename': 'bot-1, builddir='p24', 'factory': f24, + 'locks': [slow_lock] @} c['builders'] = [b1, b2, b3] @end example From warner at users.sourceforge.net Wed Jul 20 04:49:09 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 04:49:09 +0000 Subject: [Buildbot-commits] buildbot ChangeLog,1.473,1.474 Message-ID: Update of /cvsroot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv18786 Modified Files: ChangeLog Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-256 Creator: Brian Warner update Locks docs, add listBuilderNames to all Schedulers * buildbot/scheduler.py (Dependent.listBuilderNames): oops, add utility method to *all* the Schedulers (Periodic.listBuilderNames): same * docs/buildbot.texinfo (Interlocks): update chapter to match reality Index: ChangeLog =================================================================== RCS file: /cvsroot/buildbot/buildbot/ChangeLog,v retrieving revision 1.473 retrieving revision 1.474 diff -u -d -r1.473 -r1.474 --- ChangeLog 20 Jul 2005 04:21:58 -0000 1.473 +++ ChangeLog 20 Jul 2005 04:49:07 -0000 1.474 @@ -1,5 +1,12 @@ 2005-07-19 Brian Warner + * buildbot/scheduler.py (Dependent.listBuilderNames): oops, add + utility method to *all* the Schedulers + (Periodic.listBuilderNames): same + + * docs/buildbot.texinfo (Interlocks): update chapter to match + reality + * buildbot/master.py (BuildMaster.loadConfig): Add sanity checks to make sure that c['sources'], c['schedulers'], and c['status'] are all lists of the appropriate objects, and that the Schedulers From warner at users.sourceforge.net Wed Jul 20 04:49:08 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 04:49:08 +0000 Subject: [Buildbot-commits] buildbot/buildbot scheduler.py,1.2,1.3 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv18786/buildbot Modified Files: scheduler.py Log Message: Revision: 
arch at buildbot.sf.net--2004/buildbot--dev--0--patch-256 Creator: Brian Warner update Locks docs, add listBuilderNames to all Schedulers * buildbot/scheduler.py (Dependent.listBuilderNames): oops, add utility method to *all* the Schedulers (Periodic.listBuilderNames): same * docs/buildbot.texinfo (Interlocks): update chapter to match reality Index: scheduler.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/scheduler.py,v retrieving revision 1.2 retrieving revision 1.3 diff -u -d -r1.2 -r1.3 --- scheduler.py 20 Jul 2005 04:21:57 -0000 1.2 +++ scheduler.py 20 Jul 2005 04:49:06 -0000 1.3 @@ -256,6 +256,9 @@ self.upstream = upstream self.builderNames = builderNames + def listBuilderNames(self): + return self.builderNames + def startService(self): service.MultiService.startService(self) self.upstream.subscribeToSuccessfulBuilds(self.upstreamBuilt) @@ -292,6 +295,9 @@ self.doPeriodicBuild) self.timer.setServiceParent(self) + def listBuilderNames(self): + return self.builderNames + def doPeriodicBuild(self): bs = buildset.BuildSet(self.builderNames, SourceStamp(branch=self.branch)) From warner at users.sourceforge.net Wed Jul 20 04:22:00 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 04:22:00 +0000 Subject: [Buildbot-commits] buildbot/buildbot/test test_config.py,1.23,1.24 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/test In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv14016/buildbot/test Modified Files: test_config.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-254 Creator: Brian Warner add sanity checks to the config file parser * buildbot/master.py (BuildMaster.loadConfig): Add sanity checks to make sure that c['sources'], c['schedulers'], and c['status'] are all lists of the appropriate objects, and that the Schedulers all point to real Builders * buildbot/test/test_config.py (ConfigTest.testSchedulers): test it 
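The builder-name sanity check this log message describes can be sketched independently of buildbot's classes. The stub scheduler and helper below are illustrative assumptions, not the real implementation: every name a scheduler reports via listBuilderNames() must match a configured builder, and config loading should fail loudly otherwise.

```python
# Hedged sketch of the c['schedulers'] sanity check patch-254 describes,
# using a stand-in scheduler class rather than buildbot's own.
class StubScheduler:
    def __init__(self, name, builderNames):
        self.name = name
        self.builderNames = builderNames

    def listBuilderNames(self):
        return self.builderNames

def check_schedulers(schedulers, builders):
    # 'builders' mirrors the list of builder specification dicts in a
    # master.cfg; every scheduler must feed only known builder names.
    known = set(b['name'] for b in builders)
    for s in schedulers:
        for name in s.listBuilderNames():
            assert name in known, (
                "scheduler %r feeds unknown builder %r" % (s.name, name))
```

This is why the same patch adds listBuilderNames() to every Scheduler class: the check can only work if each scheduler, including Dependent and Periodic, can report which builders it might feed.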
Index: test_config.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_config.py,v retrieving revision 1.23 retrieving revision 1.24 diff -u -d -r1.23 -r1.24 --- test_config.py 19 Jul 2005 23:23:21 -0000 1.23 +++ test_config.py 20 Jul 2005 04:21:57 -0000 1.24 @@ -5,7 +5,7 @@ from twisted.trial import unittest dr = unittest.deferredResult -from twisted.python import components +from twisted.python import components, failure from twisted.internet import defer try: @@ -14,7 +14,7 @@ except ImportError: cvstoys = None -from buildbot.twcompat import providedBy +from buildbot.twcompat import providedBy, maybeWait from buildbot.master import BuildMaster from buildbot import scheduler from twisted.application import service, internet @@ -491,13 +491,19 @@ dr(d) self.failUnlessEqual(list(master.change_svc), []) + def shouldBeFailure(self, res, *expected): + self.failUnless(isinstance(res, failure.Failure), + "we expected this to fail, not produce %s" % (res,)) + res.trap(*expected) + return None # all is good + def testSchedulers(self): master = self.buildmaster master.loadChanges() master.loadConfig(emptyCfg) self.failUnlessEqual(master.schedulers, []) - schedulersCfg = \ + self.schedulersCfg = \ """ from buildbot.scheduler import Scheduler from buildbot.process.factory import BasicBuildFactory @@ -515,10 +521,42 @@ BuildmasterConfig = c """ - d = master.loadConfig(schedulersCfg) - dr(d) - self.failUnlessEqual(len(master.schedulers), 1) - s = master.schedulers[0] + # c['schedulers'] must be a list + badcfg = self.schedulersCfg + \ +""" +c['schedulers'] = Scheduler('full', None, 60, ['builder1']) +""" + d = defer.maybeDeferred(self.buildmaster.loadConfig, badcfg) + d.addBoth(self._testSchedulers_1) + return maybeWait(d) + def _testSchedulers_1(self, res): + self.shouldBeFailure(res, AssertionError) + # c['schedulers'] must be a list of IScheduler objects + badcfg = self.schedulersCfg + \ +""" 
+c['schedulers'] = ['oops', 'problem'] +""" + d = defer.maybeDeferred(self.buildmaster.loadConfig, badcfg) + d.addBoth(self._testSchedulers_2) + return d + def _testSchedulers_2(self, res): + self.shouldBeFailure(res, AssertionError, components.CannotAdapt) + # c['schedulers'] must point at real builders + badcfg = self.schedulersCfg + \ +""" +c['schedulers'] = [Scheduler('full', None, 60, ['builder-bogus'])] +""" + d = defer.maybeDeferred(self.buildmaster.loadConfig, badcfg) + d.addBoth(self._testSchedulers_3) + return d + def _testSchedulers_3(self, res): + self.shouldBeFailure(res, AssertionError) + d = self.buildmaster.loadConfig(self.schedulersCfg) + d.addCallback(self._testSchedulers_4) + return d + def _testSchedulers_4(self, res): + self.failUnlessEqual(len(self.buildmaster.schedulers), 1) + s = self.buildmaster.schedulers[0] self.failUnless(isinstance(s, scheduler.Scheduler)) self.failUnlessEqual(s.name, "full") self.failUnlessEqual(s.branch, None) From warner at users.sourceforge.net Wed Jul 20 04:21:59 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 04:21:59 +0000 Subject: [Buildbot-commits] buildbot/buildbot interfaces.py,1.27,1.28 scheduler.py,1.1,1.2 master.py,1.75,1.76 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv14016/buildbot Modified Files: interfaces.py scheduler.py master.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-254 Creator: Brian Warner add sanity checks to the config file parser * buildbot/master.py (BuildMaster.loadConfig): Add sanity checks to make sure that c['sources'], c['schedulers'], and c['status'] are all lists of the appropriate objects, and that the Schedulers all point to real Builders * buildbot/test/test_config.py (ConfigTest.testSchedulers): test it Index: interfaces.py =================================================================== RCS file: 
/cvsroot/buildbot/buildbot/buildbot/interfaces.py,v retrieving revision 1.27 retrieving revision 1.28 diff -u -d -r1.27 -r1.28 --- interfaces.py 19 Jul 2005 23:11:59 -0000 1.27 +++ interfaces.py 20 Jul 2005 04:21:57 -0000 1.28 @@ -44,6 +44,10 @@ Each Scheduler will receive this Change. I may decide to start a build as a result, or I might choose to ignore it.""" + def listBuilderNames(): + """Return a list of strings indicating the Builders that this + Scheduler might feed.""" + class IUpstreamScheduler(Interface): """This marks an IScheduler as being eligible for use as the 'upstream=' argument to a buildbot.scheduler.Dependent instance.""" @@ -53,6 +57,10 @@ successful buildset. The target will be called with a single argument: the SourceStamp used by the successful builds.""" + def listBuilderNames(): + """Return a list of strings indicating the Builders that this + Scheduler might feed.""" + class ISourceStamp(Interface): pass Index: master.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/master.py,v retrieving revision 1.75 retrieving revision 1.76 diff -u -d -r1.75 -r1.76 --- master.py 19 Jul 2005 23:23:21 -0000 1.75 +++ master.py 20 Jul 2005 04:21:57 -0000 1.76 @@ -712,6 +712,16 @@ if config.has_key('interlocks'): raise KeyError("c['interlocks'] is no longer accepted") + assert type(sources) in (list, tuple) + for s in sources: + assert interfaces.IChangeSource(s) + # this assertion catches c['schedulers'] = Scheduler(), since + # Schedulers are service.MultiServices and thus iterable. 
+ assert type(schedulers) in (list, tuple) + for s in schedulers: + assert (interfaces.IScheduler(s) + or interfaces.IUpstreamScheduler(s)) + assert type(status) in (list, tuple) for s in status: assert interfaces.IStatusReceiver(s) @@ -734,6 +744,10 @@ % (b['name'], b['builddir'])) dirnames.append(b['builddir']) + for s in schedulers: + for b in s.listBuilderNames(): + assert b in buildernames + # assert that all locks used by the Builds and their Steps are # uniquely named. locks = {} @@ -816,7 +830,7 @@ log.msg("configuration updated") self.readConfig = True - return defer.DeferredList(dl) + return defer.DeferredList(dl, fireOnOneErrback=1, consumeErrors=1) def loadConfig_Slaves(self, bots): # set up the Checker with the names and passwords of all valid bots @@ -836,7 +850,7 @@ # all done self.bots = bots - return defer.DeferredList(dl) + return defer.DeferredList(dl, fireOnOneErrback=1, consumeErrors=1) def loadConfig_Sources(self, sources): log.msg("loadConfig_Sources, change_svc is", self.change_svc, @@ -847,7 +861,7 @@ for source in oldsources if source not in sources] [self.change_svc.addSource(source) for source in sources if source not in self.change_svc] - return defer.DeferredList(dl) + return defer.DeferredList(dl, fireOnOneErrback=1, consumeErrors=1) def loadConfig_Schedulers(self, newschedulers): old = [s for s in self.schedulers if s not in newschedulers] @@ -856,7 +870,7 @@ [s.setServiceParent(self) for s in newschedulers if s not in self.schedulers] self.schedulers = newschedulers - return defer.DeferredList(dl) + return defer.DeferredList(dl, fireOnOneErrback=1, consumeErrors=1) def loadConfig_Builders(self, newBuilders): dl = [] @@ -916,7 +930,7 @@ # now that everything is up-to-date, make sure the names are in the # desired order self.botmaster.builderNames = newNames - return defer.DeferredList(dl) + return defer.DeferredList(dl, fireOnOneErrback=1, consumeErrors=1) def loadConfig_status(self, status): dl = [] @@ -935,7 +949,7 @@ 
s.setServiceParent(self) self.statusTargets.append(s) - return defer.DeferredList(dl) + return defer.DeferredList(dl, fireOnOneErrback=1, consumeErrors=1) def addChange(self, change): Index: scheduler.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/scheduler.py,v retrieving revision 1.1 retrieving revision 1.2 diff -u -d -r1.1 -r1.2 --- scheduler.py 19 Jul 2005 23:11:59 -0000 1.1 +++ scheduler.py 20 Jul 2005 04:21:57 -0000 1.2 @@ -109,6 +109,9 @@ self.nextBuildTime = None self.timer = None + def listBuilderNames(self): + return self.builderNames + def fileIsImportant(self, change): # note that externally-provided fileIsImportant callables are # functions, not methods, and will only receive one argument. Or you @@ -215,6 +218,9 @@ self.fileIsImportant = fileIsImportant self.schedulers = {} # one per branch + def listBuilderNames(self): + return self.builderNames + def addChange(self, change): branch = change.branch if self.branches and branch not in self.branches: From warner at users.sourceforge.net Wed Jul 20 04:22:00 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 04:22:00 +0000 Subject: [Buildbot-commits] buildbot ChangeLog,1.472,1.473 Message-ID: Update of /cvsroot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv14016 Modified Files: ChangeLog Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-254 Creator: Brian Warner add sanity checks to the config file parser * buildbot/master.py (BuildMaster.loadConfig): Add sanity checks to make sure that c['sources'], c['schedulers'], and c['status'] are all lists of the appropriate objects, and that the Schedulers all point to real Builders * buildbot/test/test_config.py (ConfigTest.testSchedulers): test it Index: ChangeLog =================================================================== RCS file: /cvsroot/buildbot/buildbot/ChangeLog,v retrieving revision 1.472 
retrieving revision 1.473 diff -u -d -r1.472 -r1.473 --- ChangeLog 20 Jul 2005 03:27:07 -0000 1.472 +++ ChangeLog 20 Jul 2005 04:21:58 -0000 1.473 @@ -1,5 +1,14 @@ 2005-07-19 Brian Warner + * buildbot/master.py (BuildMaster.loadConfig): Add sanity checks + to make sure that c['sources'], c['schedulers'], and c['status'] + are all lists of the appropriate objects, and that the Schedulers + all point to real Builders + * buildbot/interfaces.py (IScheduler, IUpstreamScheduler): add + 'listBuilderNames' utility method to support this + * buildbot/scheduler.py: implement the utility method + * buildbot/test/test_config.py (ConfigTest.testSchedulers): test it + * docs/buildbot.texinfo: add some @cindex entries * buildbot/test/test_vc.py (Arch.createRepository): set the tla ID From warner at users.sourceforge.net Wed Jul 20 03:27:11 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 03:27:11 +0000 Subject: [Buildbot-commits] buildbot ChangeLog,1.471,1.472 Message-ID: Update of /cvsroot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv5256 Modified Files: ChangeLog Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-252 Creator: Brian Warner * docs/buildbot.texinfo: add some @cindex entries Index: ChangeLog =================================================================== RCS file: /cvsroot/buildbot/buildbot/ChangeLog,v retrieving revision 1.471 retrieving revision 1.472 diff -u -d -r1.471 -r1.472 --- ChangeLog 20 Jul 2005 02:18:21 -0000 1.471 +++ ChangeLog 20 Jul 2005 03:27:07 -0000 1.472 @@ -1,5 +1,7 @@ 2005-07-19 Brian Warner + * docs/buildbot.texinfo: add some @cindex entries + * buildbot/test/test_vc.py (Arch.createRepository): set the tla ID if it wasn't already set: most tla commands will fail unless one has been set. 
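The sanity checks described in the ChangeLog entry above follow a simple pattern: confirm that each config value really is a list/tuple of the right kind of object, then cross-check that every builder name a Scheduler mentions corresponds to a configured Builder. The sketch below illustrates that pattern in plain Python; the `check_config` function and the toy `Scheduler` class are illustrative stand-ins, not Buildbot's actual API.

```python
# Sketch of the config sanity-check pattern described above: verify that
# config values are real lists of the right kind of object, and that every
# builder name a scheduler mentions refers to a configured builder.
# check_config and this toy Scheduler class are illustrative, not Buildbot API.

class Scheduler:
    def __init__(self, name, builderNames):
        self.name = name
        self.builderNames = builderNames

    def listBuilderNames(self):
        # mirrors the utility method added to IScheduler in the diff above
        return self.builderNames

def check_config(config):
    schedulers = config['schedulers']
    # catches c['schedulers'] = Scheduler(...) (a bare object, not a list)
    assert type(schedulers) in (list, tuple), "c['schedulers'] must be a list"
    buildernames = [b['name'] for b in config['builders']]
    for s in schedulers:
        for b in s.listBuilderNames():
            assert b in buildernames, \
                "%s uses unknown builder %s" % (s.name, b)
    return True

good = {'schedulers': [Scheduler('quick', ['full'])],
        'builders': [{'name': 'full'}]}
assert check_config(good)

bad = {'schedulers': [Scheduler('quick', ['missing'])],
       'builders': [{'name': 'full'}]}
try:
    check_config(bad)
except AssertionError:
    pass  # the unknown-builder check fires, as intended
```

Catching these mistakes at config-load time, with a message naming the offending scheduler and builder, is much friendlier than letting a typo surface later as a build that silently never runs.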
From warner at users.sourceforge.net Wed Jul 20 05:36:58 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 05:36:58 +0000 Subject: [Buildbot-commits] buildbot/buildbot master.py,1.77,1.78 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv26391/buildbot Modified Files: master.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-260 Creator: Brian Warner make sure SlaveLock('name') and MasterLock('name') are distinct * buildbot/master.py (BuildMaster.loadConfig): give a better error message when schedulers use unknown builders * buildbot/process/builder.py (Builder.compareToSetup): make sure SlaveLock('name') and MasterLock('name') are distinct Index: master.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/master.py,v retrieving revision 1.77 retrieving revision 1.78 diff -u -d -r1.77 -r1.78 --- master.py 20 Jul 2005 05:07:46 -0000 1.77 +++ master.py 20 Jul 2005 05:36:55 -0000 1.78 @@ -746,7 +746,8 @@ for s in schedulers: for b in s.listBuilderNames(): - assert b in buildernames + assert b in buildernames, \ + "%s uses unknown builder %s" % (s, b) # assert that all locks used by the Builds and their Steps are # uniquely named. 
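The `Builder.compareToSetup` fix in the commit above works because tuples compare element-wise: including the lock's class alongside its name keeps `SlaveLock('name')` and `MasterLock('name')` distinct, where comparing names alone would conflate them. A minimal illustration with toy lock classes (stand-ins, not Buildbot's real lock implementations):

```python
# Toy stand-ins for Buildbot's lock classes; only the comparison matters here.
class MasterLock:
    def __init__(self, name):
        self.name = name

class SlaveLock:
    def __init__(self, name):
        self.name = name

old = [MasterLock('upload')]   # locks from the previous config
new = [SlaveLock('upload')]    # same name, different lock type

# Comparing by name alone wrongly treats the two configs as identical...
assert [lock.name for lock in old] == [lock.name for lock in new]

# ...while (class, name) tuples detect that the lock type changed.
oldlocks = [(lock.__class__, lock.name) for lock in old]
newlocks = [(lock.__class__, lock.name) for lock in new]
assert oldlocks != newlocks
```

Without this, swapping a MasterLock for a SlaveLock of the same name in the config would not register as a change, so the Builder would keep using the stale lock.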
From warner at users.sourceforge.net Wed Jul 20 05:36:58 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 05:36:58 +0000 Subject: [Buildbot-commits] buildbot/buildbot/process builder.py,1.27,1.28 Message-ID: Update of /cvsroot/buildbot/buildbot/buildbot/process In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv26391/buildbot/process Modified Files: builder.py Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-260 Creator: Brian Warner make sure SlaveLock('name') and MasterLock('name') are distinct * buildbot/master.py (BuildMaster.loadConfig): give a better error message when schedulers use unknown builders * buildbot/process/builder.py (Builder.compareToSetup): make sure SlaveLock('name') and MasterLock('name') are distinct Index: builder.py =================================================================== RCS file: /cvsroot/buildbot/buildbot/buildbot/process/builder.py,v retrieving revision 1.27 retrieving revision 1.28 diff -u -d -r1.27 -r1.28 --- builder.py 19 Jul 2005 23:11:58 -0000 1.27 +++ builder.py 20 Jul 2005 05:36:56 -0000 1.28 @@ -243,8 +243,10 @@ % (self.builddir, setup['builddir'])) if setup['factory'] != self.buildFactory: # compare objects diffs.append('factory changed') - oldlocks = [lock.name for lock in setup.get('locks',[])] - newlocks = [lock.name for lock in self.locks] + oldlocks = [(lock.__class__, lock.name) + for lock in setup.get('locks',[])] + newlocks = [(lock.__class__, lock.name) + for lock in self.locks] if oldlocks != newlocks: diffs.append('locks changed from %s to %s' % (oldlocks, newlocks)) return diffs From warner at users.sourceforge.net Wed Jul 20 05:36:58 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 05:36:58 +0000 Subject: [Buildbot-commits] buildbot ChangeLog,1.475,1.476 Message-ID: Update of /cvsroot/buildbot/buildbot In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv26391 Modified Files: ChangeLog Log Message: Revision: 
arch at buildbot.sf.net--2004/buildbot--dev--0--patch-260 Creator: Brian Warner make sure SlaveLock('name') and MasterLock('name') are distinct * buildbot/master.py (BuildMaster.loadConfig): give a better error message when schedulers use unknown builders * buildbot/process/builder.py (Builder.compareToSetup): make sure SlaveLock('name') and MasterLock('name') are distinct Index: ChangeLog =================================================================== RCS file: /cvsroot/buildbot/buildbot/ChangeLog,v retrieving revision 1.475 retrieving revision 1.476 diff -u -d -r1.475 -r1.476 --- ChangeLog 20 Jul 2005 05:07:53 -0000 1.475 +++ ChangeLog 20 Jul 2005 05:36:56 -0000 1.476 @@ -1,5 +1,11 @@ 2005-07-19 Brian Warner + * buildbot/master.py (BuildMaster.loadConfig): give a better error + message when schedulers use unknown builders + + * buildbot/process/builder.py (Builder.compareToSetup): make sure + SlaveLock('name') and MasterLock('name') are distinct + * buildbot/master.py (BuildMaster.loadConfig): oops, sanity-check c['schedulers'] in such a way that we can actually accept Dependent instances From warner at users.sourceforge.net Wed Jul 20 03:27:09 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 03:27:09 +0000 Subject: [Buildbot-commits] buildbot/docs buildbot.texinfo,1.9,1.10 Message-ID: Update of /cvsroot/buildbot/buildbot/docs In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv5256/docs Modified Files: buildbot.texinfo Log Message: Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-252 Creator: Brian Warner * docs/buildbot.texinfo: add some @cindex entries Index: buildbot.texinfo =================================================================== RCS file: /cvsroot/buildbot/buildbot/docs/buildbot.texinfo,v retrieving revision 1.9 retrieving revision 1.10 diff -u -d -r1.9 -r1.10 --- buildbot.texinfo 19 Jul 2005 23:12:00 -0000 1.9 +++ buildbot.texinfo 20 Jul 2005 03:27:04 -0000 1.10 @@ -234,6 +234,8 @@ @node 
History and Philosophy, System Architecture, Introduction, Introduction @section History and Philosophy +@cindex Philosophy of operation + The Buildbot was inspired by a similar project built for a development team writing a cross-platform embedded system. The various components of the project were supposed to compile and run on several flavors of @@ -441,6 +443,8 @@ @node Installing the code, Creating a buildmaster, Requirements, Installation @section Installing the code +@cindex installation + The Buildbot is installed using the standard python @code{distutils} module. After unpacking the tarball, the process is: @@ -753,6 +757,8 @@ @node Logfiles, Shutdown, Launching the daemons, Installation @section Logfiles +@cindex logfiles + While a buildbot daemon runs, it emits text to a logfile, named @file{twistd.log}. A command like @code{tail -f twistd.log} is useful to watch the command output as it runs. @@ -899,7 +905,7 @@ @node Version Control Systems, Schedulers, Concepts, Concepts @section Version Control Systems -@cindex CVS +@cindex Version Control These source trees come from a Version Control System of some kind. CVS and Subversion are two popular ones, but the Buildbot supports @@ -1338,6 +1344,8 @@ @node Users, , Builder, Concepts @section Users +@cindex Users + Buildbot has a somewhat limited awareness of @emph{users}. It assumes the world consists of a set of developers, each of whom can be described by a couple of simple attributes.
These developers make @@ -1477,6 +1485,8 @@ @node Configuration, Getting Source Code Changes, Concepts, Top @chapter Configuration +@cindex Configuration + The buildbot's behavior is defined by the ``config file'', which normally lives in the @file{master.cfg} file in the buildmaster's base directory (but this can be changed with an option to the @@ -1603,15 +1613,18 @@ c['buildbotURL'] = "http://localhost:8010/" @end example +@cindex c['projectName'] @code{projectName} is a short string will be used to describe the project that this buildbot is working on. For example, it is used as the title of the waterfall HTML page. +@cindex c['projectURL'] @code{projectURL} is a string that gives a URL for the project as a whole. HTML status displays will show @code{projectName} as a link to @code{projectURL}, to provide a link from buildbot HTML pages to your project's home page. +@cindex c['buildbotURL'] The @code{buildbotURL} string should point to the location where the buildbot's internal web server (usually the @code{html.Waterfall} page) is visible. This typically uses the port number set when you @@ -1628,6 +1641,7 @@ @node Listing Change Sources and Schedulers, Setting the slaveport, Defining the Project, Configuration @section Listing Change Sources and Schedulers +@cindex c['sources'] The @code{c['sources']} key is a list of ChangeSource instances@footnote{To be precise, it is a list of objects which all implement the @code{buildbot.interfaces.IChangeSource} Interface}. @@ -1640,7 +1654,7 @@ c['sources'] = [buildbot.changes.pb.PBChangeSource()] @end example - +@cindex c['schedulers'] @code{c['schedulers']} is a list of Scheduler instances, each of which causes builds to be started on a particular set of Builders.
The two basic Scheduler classes you are likely to start with are @@ -1685,7 +1699,7 @@ @node Build Dependencies, , Listing Change Sources and Schedulers, Listing Change Sources and Schedulers @subsection Build Dependencies -@cindex dependencies +@cindex Dependencies It is common to wind up with one kind of build which should only be performed if the same source code was successfully handled by some @@ -1732,7 +1746,7 @@ @node Setting the slaveport, Buildslave Specifiers, Listing Change Sources and Schedulers, Configuration @section Setting the slaveport -@cindex slavePortnum +@cindex c['slavePortnum'] The buildmaster will listen on a TCP port of your choosing for connections from buildslaves. It can also use this port for @@ -1754,7 +1768,7 @@ @node Buildslave Specifiers, Defining Builders, Setting the slaveport, Configuration @section Buildslave Specifiers -@cindex bots +@cindex c['bots'] The @code{c['bots']} key is a list of known buildslaves. Each buildslave is defined by a tuple of (slavename, slavepassword). These @@ -1780,7 +1794,7 @@ @node Defining Builders, Defining Status Targets, Buildslave Specifiers, Configuration @section Defining Builders -@cindex builders +@cindex c['builders'] The @code{c['builders']} key is a list of dictionaries which specify the Builders. The Buildmaster runs a collection of Builders, each of @@ -1847,6 +1861,8 @@ in the configuration's @code{status} list. To add status targets, you just append more objects to this list: +@cindex c['status'] + @example c['status'] = [] @@ -1872,6 +1888,7 @@ @section Debug options +@cindex c['debugPassword'] If you set @code{c['debugPassword']}, then you can connect to the buildmaster with the diagnostic tool launched by @code{buildbot debugclient MASTER:PORT}.
From this tool, you can reload the config @@ -1885,6 +1902,7 @@ c['debugPassword'] = "debugpassword" @end example +@cindex c['manhole'] If you set @code{c['manhole']} to an instance of the @code{buildbot.master.Manhole} class, you can telnet into the buildmaster and get an interactive Python shell, which may be useful From warner at users.sourceforge.net Wed Jul 20 07:22:24 2005 From: warner at users.sourceforge.net (Brian Warner) Date: Wed, 20 Jul 2005 07:22:24 +0000 Subject: [Buildbot-commits] site manual-0.6.6.html,NONE,1.1 manual-CVS.html,NONE,1.1 index.html,1.43,1.44 source-Arch.html,1.5,1.6 Message-ID: Update of /cvsroot/buildbot/site In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv13263 Modified Files: index.html source-Arch.html Added Files: manual-0.6.6.html manual-CVS.html Log Message: add user manuals (both 0.6.6 and HEAD) and Darcs archive information Index: index.html =================================================================== RCS file: /cvsroot/buildbot/site/index.html,v retrieving revision 1.43 retrieving revision 1.44 diff -u -d -r1.43 -r1.44 --- index.html 7 Jul 2005 22:35:21 -0000 1.43 +++ index.html 20 Jul 2005 07:22:19 -0000 1.44 @@ -20,10 +20,18 @@
-  • The latest code is available from CVS for browsing or read-only
-    checkout. There is also an Arch repository which tracks the main CVS
-    tree, details are here.
+  • The latest code is available from CVS for browsing or read-only
+    checkout. There are also Arch and Darcs repositories which track the
+    main CVS tree and provide lower-latency access than anonymous CVS,
+    details are here.

-  • The README file: installation hints, overview
+  • The README file: installation hints, overview.
+    There is also a preliminary User's Manual for version 0.6.6: it is very
+    rough and may be incomplete or incorrect in places, but it is better
+    than nothing. Alternatively, the CVS -rHEAD User's Manual is available:
+    it is generated from the latest source code, so it is more complete,
+    but describes features and interfaces that are not yet in the latest
+    release.

   • Recent changes are summarized in the NEWS file, while the complete
     details are in the

-Last modified: Thu Jul 7 15:33:58 PDT 2005
+Last modified: Wed Jul 20 00:16:12 PDT 2005

--- NEW FILE: manual-0.6.6.html ---

BuildBot Manual 0.6.6