[Buildbot-commits] buildbot/buildbot/test runutils.py,1.2,1.3 test_locks.py,1.1,1.2 test_config.py,1.25,1.26 test_scheduler.py,1.5,1.6 test_run.py,1.34,1.35 test_slaves.py,1.1,1.2

Brian Warner warner at users.sourceforge.net
Fri Oct 14 19:42:41 UTC 2005


Update of /cvsroot/buildbot/buildbot/buildbot/test
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv32254/buildbot/test

Modified Files:
	runutils.py test_locks.py test_config.py test_scheduler.py 
	test_run.py test_slaves.py 
Log Message:
Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-326
Creator:  Brian Warner <warner at lothar.com>

implement multiple slaves per Builder, allowing concurrent Builds

	* lots: implement multiple slaves per Builder, which means multiple
	current builds per Builder. Some highlights:
	* buildbot/interfaces.py (IBuilderStatus.getState): return a tuple
	of (state,currentBuilds) instead of (state,currentBuild)
	(IBuilderStatus.getCurrentBuilds): replace getCurrentBuild()
	(IBuildStatus.getSlavename): new method, so you can tell which
	slave got used. This only gets set when the build completes.
	(IBuildRequestStatus.getBuilds): new method
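
	For illustration, the new status surface might be used like this
	(a rough sketch, not part of the patch; 'status' stands for the
	master's status object):

	    b = status.getBuilder("dummy")
	    state, current = b.getState()    # now a (state, currentBuilds) tuple
	    for build in b.getCurrentBuilds():
	        print build.getReason(), build.isFinished()
	    last = b.getLastFinishedBuild()
	    if last:
	        print "last build ran on slave", last.getSlavename()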

	* buildbot/process/builder.py (SlaveBuilder): add a .state
	attribute to track things like ATTACHING and IDLE and BUILDING,
	instead of..
	(Builder): .. the .slaves attribute here, which has been turned
	into a simple list of available slaves. Added a separate
	attaching_slaves list to track ones that are not yet ready for
	builds.
	(Builder.fireTestEvent): defer the test-event callback by one
	reactor turn, to make tests a bit more consistent.
	(Ping): cleaned up the slaveping a bit; it now disconnects if the
	ping fails due to an exception. This needs work: I'm worried that
	a code error could lead to a constantly reconnecting slave,
	especially since I'm trying to move to a distinct remote_ping
	method, separate from the remote_print that we currently use.
	(BuilderControl.requestBuild): return a convenience Deferred that
	provides an IBuildStatus when the build finishes.
	(BuilderControl.ping): ping all connected slaves, returning True
	only if they all respond.
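
	A rough usage sketch of the new convenience Deferred (illustrative
	only; 'control' stands for an IControl wrapper around the
	buildmaster, and 'b1' for a builder configured with several slaves):

	    from buildbot.sourcestamp import SourceStamp
	    from buildbot.process.base import BuildRequest

	    req = BuildRequest("forced", SourceStamp())
	    d = control.getBuilder("b1").requestBuild(req)
	    # req.status is an IBuildRequestStatus; its .getBuilds() lists
	    # the builds started to satisfy this request
	    def done(build_status):
	        # the Deferred fires with an IBuildStatus when the build finishes
	        print build_status.getResults(), build_status.getSlavename()
	    d.addCallback(done)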

	* buildbot/slave/bot.py (BuildSlave.stopService): stop trying to
	reconnect when we shut down.

	* buildbot/status/builder.py: implement new methods, convert
	one-build-at-a-time methods to handle multiple builds
	* buildbot/status/*.py: do the same in all default status targets
	* buildbot/status/html.py: report the build's slavename on the
	per-Build page, and report all buildslaves on the per-Builder page
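
	The shape of that conversion, sketched against a hypothetical
	status target ('builder' stands for an IBuilderStatus):

	    # before: at most one build could be in progress
	    #     current = builder.getCurrentBuild()
	    #     if current:
	    #         print current.getReason()
	    # after: any number of builds may be in progress
	    for build in builder.getCurrentBuilds():
	        print build.getReason()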

	* buildbot/test/test_run.py: update/create tests
	* buildbot/test/test_slaves.py: same
	* buildbot/test/test_scheduler.py: remove stale test

	* docs/buildbot.texinfo: document the new builder-specification
	'slavenames' parameter
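
	For example, a builder that can run on any of three slaves (taken
	from the updated test_slaves.py config below):

	    c['bots'] = [('bot1', 'sekrit'), ('bot2', 'sekrit'), ('bot3', 'sekrit')]
	    c['builders'] = [
	        {'name': 'b1', 'slavenames': ['bot1', 'bot2', 'bot3'],
	         'builddir': 'b1', 'factory': f},
	        ]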


Index: runutils.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/runutils.py,v
retrieving revision 1.2
retrieving revision 1.3
diff -u -d -r1.2 -r1.3
--- runutils.py	20 Jul 2005 08:08:23 -0000	1.2
+++ runutils.py	14 Oct 2005 19:42:39 -0000	1.3
@@ -16,8 +16,6 @@
 
 class RunMixin:
     master = None
-    slave = None
-    slave2 = None
 
     def rmtree(self, d):
         try:
@@ -28,79 +26,77 @@
                 raise
 
     def setUp(self):
+        self.slaves = {}
         self.rmtree("basedir")
-        self.rmtree("slavebase")
-        self.rmtree("slavebase2")
         os.mkdir("basedir")
         self.master = master.BuildMaster("basedir")
         self.status = self.master.getStatus()
         self.control = interfaces.IControl(self.master)
 
-    def connectSlave(self, builders=["dummy"]):
+    def connectOneSlave(self, slavename, opts={}):
         port = self.master.slavePort._port.getHost().port
-        os.mkdir("slavebase")
-        slave = MyBuildSlave("localhost", port, "bot1", "sekrit",
-                             "slavebase", keepalive=0, usePTY=1)
+        self.rmtree("slavebase-%s" % slavename)
+        os.mkdir("slavebase-%s" % slavename)
+        slave = MyBuildSlave("localhost", port, slavename, "sekrit",
+                             "slavebase-%s" % slavename,
+                             keepalive=0, usePTY=1, debugOpts=opts)
         slave.info = {"admin": "one"}
-        self.slave = slave
+        self.slaves[slavename] = slave
         slave.startService()
+
+    def connectSlave(self, builders=["dummy"], slavename="bot1",
+                     opts={}):
+        # connect buildslave 'slavename' and wait for it to connect to all of
+        # the given builders
         dl = []
         # initiate call for all of them, before waiting on result,
         # otherwise we might miss some
         for b in builders:
             dl.append(self.master.botmaster.waitUntilBuilderAttached(b))
         d = defer.DeferredList(dl)
+        self.connectOneSlave(slavename, opts)
         return d
 
-    def connectSlaves(self, builders=["dummy"]):
-        port = self.master.slavePort._port.getHost().port
-        os.mkdir("slavebase")
-        slave1 = MyBuildSlave("localhost", port, "bot1", "sekrit",
-                             "slavebase", keepalive=0, usePTY=1)
-        slave1.info = {"admin": "one"}
-        self.slave = slave1
-        slave1.startService()
-
-        os.mkdir("slavebase2")
-        slave2 = MyBuildSlave("localhost", port, "bot2", "sekrit",
-                             "slavebase2", keepalive=0, usePTY=1)
-        slave2.info = {"admin": "one"}
-        self.slave2 = slave2
-        slave2.startService()
-
+    def connectSlaves(self, slavenames, builders):
         dl = []
         # initiate call for all of them, before waiting on result,
         # otherwise we might miss some
         for b in builders:
             dl.append(self.master.botmaster.waitUntilBuilderAttached(b))
         d = defer.DeferredList(dl)
+        for name in slavenames:
+            self.connectOneSlave(name)
         return d
 
     def connectSlave2(self):
+        # this takes over for bot1, so it has to share the slavename
         port = self.master.slavePort._port.getHost().port
-        os.mkdir("slavebase2")
+        self.rmtree("slavebase-bot2")
+        os.mkdir("slavebase-bot2")
+        # this uses bot1, really
         slave = MyBuildSlave("localhost", port, "bot1", "sekrit",
-                             "slavebase2", keepalive=0, usePTY=1)
+                             "slavebase-bot2", keepalive=0, usePTY=1)
         slave.info = {"admin": "two"}
-        self.slave2 = slave
+        self.slaves['bot2'] = slave
         slave.startService()
 
-    def connectSlave3(self):
+    def connectSlaveFastTimeout(self):
         # this slave has a very fast keepalive timeout
         port = self.master.slavePort._port.getHost().port
-        os.mkdir("slavebase")
+        self.rmtree("slavebase-bot1")
+        os.mkdir("slavebase-bot1")
         slave = MyBuildSlave("localhost", port, "bot1", "sekrit",
-                             "slavebase", keepalive=2, usePTY=1,
+                             "slavebase-bot1", keepalive=2, usePTY=1,
                              keepaliveTimeout=1)
         slave.info = {"admin": "one"}
-        self.slave = slave
+        self.slaves['bot1'] = slave
         slave.startService()
         d = self.master.botmaster.waitUntilBuilderAttached("dummy")
         return d
 
     def tearDown(self):
         log.msg("doing tearDown")
-        d = self.shutdownSlave()
+        d = self.shutdownAllSlaves()
         d.addCallback(self._tearDown_1)
         d.addCallback(self._tearDown_2)
         return maybeWait(d)
@@ -110,52 +106,67 @@
     def _tearDown_2(self, res):
         self.master = None
         log.msg("tearDown done")
+        
 
     # various forms of slave death
 
-    def shutdownSlave(self):
+    def shutdownAllSlaves(self):
         # the slave has disconnected normally: they SIGINT'ed it, or it shut
         # down willingly. This will kill child processes and give them a
         # chance to finish up. We return a Deferred that will fire when
         # everything is finished shutting down.
 
-        log.msg("doing shutdownSlave")
+        log.msg("doing shutdownAllSlaves")
         dl = []
-        if self.slave:
-            dl.append(self.slave.waitUntilDisconnected())
-            dl.append(defer.maybeDeferred(self.slave.stopService))
-        if self.slave2:
-            dl.append(self.slave2.waitUntilDisconnected())
-            dl.append(defer.maybeDeferred(self.slave2.stopService))
+        for slave in self.slaves.values():
+            dl.append(slave.waitUntilDisconnected())
+            dl.append(defer.maybeDeferred(slave.stopService))
         d = defer.DeferredList(dl)
-        d.addCallback(self._shutdownSlaveDone)
+        d.addCallback(self._shutdownAllSlavesDone)
         return d
-    def _shutdownSlaveDone(self, res):
-        self.slave = None
-        self.slave2 = None
-        return self.master.botmaster.waitUntilBuilderDetached("dummy")
+    def _shutdownAllSlavesDone(self, res):
+        for name in self.slaves.keys():
+            del self.slaves[name]
+        return self.master.botmaster.waitUntilBuilderFullyDetached("dummy")
+
+    def shutdownSlave(self, slavename, buildername):
+        # this slave has disconnected normally: they SIGINT'ed it, or it shut
+        # down willingly. This will kill child processes and give them a
+        # chance to finish up. We return a Deferred that will fire when
+        # everything is finished shutting down, and the given Builder knows
+        # that the slave has gone away.
+
+        s = self.slaves[slavename]
+        dl = [self.master.botmaster.waitUntilBuilderDetached(buildername),
+              s.waitUntilDisconnected()]
+        d = defer.DeferredList(dl)
+        d.addCallback(self._shutdownSlave_done, slavename)
+        s.stopService()
+        return d
+    def _shutdownSlave_done(self, res, slavename):
+        del self.slaves[slavename]
 
     def killSlave(self):
         # the slave has died, its host sent a FIN. The .notifyOnDisconnect
         # callbacks will terminate the current step, so the build should be
         # flunked (no further steps should be started).
-        self.slave.bf.continueTrying = 0
-        bot = self.slave.getServiceNamed("bot")
+        self.slaves['bot1'].bf.continueTrying = 0
+        bot = self.slaves['bot1'].getServiceNamed("bot")
         broker = bot.builders["dummy"].remote.broker
         broker.transport.loseConnection()
-        self.slave = None
+        del self.slaves['bot1']
 
-    def disappearSlave(self):
+    def disappearSlave(self, slavename="bot1", buildername="dummy"):
         # the slave's host has vanished off the net, leaving the connection
         # dangling. This will be detected quickly by app-level keepalives or
         # a ping, or slowly by TCP timeouts.
 
-        # implement this by replacing the slave Broker's .dataReceived method
+        # simulate this by replacing the slave Broker's .dataReceived method
         # with one that just throws away all data.
         def discard(data):
             pass
-        bot = self.slave.getServiceNamed("bot")
-        broker = bot.builders["dummy"].remote.broker
+        bot = self.slaves[slavename].getServiceNamed("bot")
+        broker = bot.builders[buildername].remote.broker
         broker.dataReceived = discard # seal its ears
         broker.transport.write = discard # and take away its voice
 

Index: test_config.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_config.py,v
retrieving revision 1.25
retrieving revision 1.26
diff -u -d -r1.25 -r1.26
--- test_config.py	20 Jul 2005 05:07:48 -0000	1.25
+++ test_config.py	14 Oct 2005 19:42:39 -0000	1.26
@@ -592,7 +592,7 @@
         b = master.botmaster.builders["builder1"]
         self.failUnless(isinstance(b, Builder))
         self.failUnlessEqual(b.name, "builder1")
-        self.failUnlessEqual(b.slavename, "bot1")
+        self.failUnlessEqual(b.slavenames, ["bot1"])
         self.failUnlessEqual(b.builddir, "workdir")
         f1 = b.buildFactory
         self.failUnless(isinstance(f1, BasicBuildFactory))

Index: test_locks.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_locks.py,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -d -r1.1 -r1.2
--- test_locks.py	19 Jul 2005 23:11:58 -0000	1.1
+++ test_locks.py	14 Oct 2005 19:42:39 -0000	1.2
@@ -63,7 +63,8 @@
         req1.events = req2.events = req3.events = self.events = []
         d = self.master.loadConfig(config_1)
         d.addCallback(lambda res: self.master.startService())
-        d.addCallback(lambda res: self.connectSlaves(["full1a", "full1b",
+        d.addCallback(lambda res: self.connectSlaves(["bot1", "bot2"],
+                                                     ["full1a", "full1b",
                                                       "full1c", "full1d",
                                                       "full2a", "full2b"]))
         return maybeWait(d)

Index: test_run.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_run.py,v
retrieving revision 1.34
retrieving revision 1.35
diff -u -d -r1.34 -r1.35
--- test_run.py	7 Oct 2005 18:45:42 -0000	1.34
+++ test_run.py	14 Oct 2005 19:42:39 -0000	1.35
@@ -206,8 +206,8 @@
                              ["testdummy"])
         self.s1 = s1 = s.getBuilder("dummy")
         self.failUnlessEqual(s1.getName(), "dummy")
-        self.failUnlessEqual(s1.getState(), ("offline", None))
-        self.failUnlessEqual(s1.getCurrentBuild(), None)
+        self.failUnlessEqual(s1.getState(), ("offline", []))
+        self.failUnlessEqual(s1.getCurrentBuilds(), [])
         self.failUnlessEqual(s1.getLastFinishedBuild(), None)
         self.failUnlessEqual(s1.getBuild(-1), None)
         #self.failUnlessEqual(s1.getEvent(-1), foo("created"))
@@ -364,8 +364,8 @@
         self.failUnlessEqual(s.getBuilderNames(), ["dummy", "testdummy"])
         self.s1 = s1 = s.getBuilder("dummy")
         self.failUnlessEqual(s1.getName(), "dummy")
-        self.failUnlessEqual(s1.getState(), ("offline", None))
-        self.failUnlessEqual(s1.getCurrentBuild(), None)
+        self.failUnlessEqual(s1.getState(), ("offline", []))
+        self.failUnlessEqual(s1.getCurrentBuilds(), [])
         self.failUnlessEqual(s1.getLastFinishedBuild(), None)
         self.failUnlessEqual(s1.getBuild(-1), None)
 
@@ -374,7 +374,7 @@
         return maybeWait(d)
 
     def _disconnectSetup_1(self, res):
-        self.failUnlessEqual(self.s1.getState(), ("idle", None))
+        self.failUnlessEqual(self.s1.getState(), ("idle", []))
 
 
     def verifyDisconnect(self, bs):
@@ -399,7 +399,7 @@
 
     def testIdle1(self):
         # disconnect the slave before the build starts
-        d = self.shutdownSlave() # dies before it gets started
+        d = self.shutdownAllSlaves() # dies before it gets started
         d.addCallback(self._testIdle1_1)
         return d
     def _testIdle1_1(self, res):
@@ -412,7 +412,7 @@
 
     def testIdle2(self):
         # now suppose the slave goes missing
-        self.slave.bf.continueTrying = 0
+        self.slaves['bot1'].bf.continueTrying = 0
         self.disappearSlave()
 
         # forcing a build will work: the build detect that the slave is no
@@ -449,7 +449,7 @@
     def _testBuild1_1(self, bc):
         bs = bc.getStatus()
         # now kill the slave before it gets to start the first step
-        d = self.shutdownSlave() # dies before it gets started
+        d = self.shutdownAllSlaves() # dies before it gets started
         d.addCallback(self._testBuild1_2, bs)
         return d  # TODO: this used to have a 5-second timeout
 
@@ -479,7 +479,7 @@
     def _testBuild1_1(self, bc):
         bs = bc.getStatus()
         # shutdown the slave while it's running the first step
-        reactor.callLater(0.5, self.shutdownSlave)
+        reactor.callLater(0.5, self.shutdownAllSlaves)
 
         d = bs.waitUntilFinished()
         d.addCallback(self._testBuild2_2, bs)
@@ -582,7 +582,7 @@
         self.failUnlessEqual(res, True)
 
         # now, before any build is run, make the slave disappear
-        self.slave.bf.continueTrying = 0
+        self.slaves['bot1'].bf.continueTrying = 0
         self.disappearSlave()
 
         # at this point, a ping to the slave should timeout
@@ -595,13 +595,13 @@
     def testDuplicate(self):
         bc = self.control.getBuilder("dummy")
         bs = self.status.getBuilder("dummy")
-        ss = bs.getSlave()
+        ss = bs.getSlaves()[0]
 
         self.failUnless(ss.isConnected())
         self.failUnlessEqual(ss.getAdmin(), "one")
 
         # now, before any build is run, make the first slave disappear
-        self.slave.bf.continueTrying = 0
+        self.slaves['bot1'].bf.continueTrying = 0
         self.disappearSlave()
 
         d = self.master.botmaster.waitUntilBuilderDetached("dummy")
@@ -638,24 +638,24 @@
         self.failUnlessEqual(s.getBuilderNames(), ["dummy", "testdummy"])
         self.s1 = s1 = s.getBuilder("dummy")
         self.failUnlessEqual(s1.getName(), "dummy")
-        self.failUnlessEqual(s1.getState(), ("offline", None))
-        self.failUnlessEqual(s1.getCurrentBuild(), None)
+        self.failUnlessEqual(s1.getState(), ("offline", []))
+        self.failUnlessEqual(s1.getCurrentBuilds(), [])
         self.failUnlessEqual(s1.getLastFinishedBuild(), None)
         self.failUnlessEqual(s1.getBuild(-1), None)
 
-        d = self.connectSlave3()
+        d = self.connectSlaveFastTimeout()
         d.addCallback(self._setup_disconnect2_1)
         return maybeWait(d)
 
     def _setup_disconnect2_1(self, res):
-        self.failUnlessEqual(self.s1.getState(), ("idle", None))
+        self.failUnlessEqual(self.s1.getState(), ("idle", []))
 
 
     def testSlaveTimeout(self):
         # now suppose the slave goes missing. We want to find out when it
         # creates a new Broker, so we reach inside and mark it with the
         # well-known sigil of impending messy death.
-        bd = self.slave.getServiceNamed("bot").builders["dummy"]
+        bd = self.slaves['bot1'].getServiceNamed("bot").builders["dummy"]
         broker = bd.remote.broker
         broker.redshirt = 1
 
@@ -667,7 +667,7 @@
     testSlaveTimeout.timeout = 20
 
     def _testSlaveTimeout_1(self, res):
-        bd = self.slave.getServiceNamed("bot").builders["dummy"]
+        bd = self.slaves['bot1'].getServiceNamed("bot").builders["dummy"]
         if not bd.remote or not hasattr(bd.remote.broker, "redshirt"):
             self.fail("slave disconnected when it shouldn't have")
 
@@ -688,7 +688,7 @@
 
     def _testSlaveTimeout_3(self, res):
         # make sure it is a new connection (i.e. a new Broker)
-        bd = self.slave.getServiceNamed("bot").builders["dummy"]
+        bd = self.slaves['bot1'].getServiceNamed("bot").builders["dummy"]
         self.failUnless(bd.remote, "hey, slave isn't really connected")
         self.failIf(hasattr(bd.remote.broker, "redshirt"),
                     "hey, slave's Broker is still marked for death")
@@ -706,12 +706,12 @@
         return maybeWait(d)
 
     def _testChangeBuilddir_1(self, res):
-        self.bot = bot = self.slave.bot
+        self.bot = bot = self.slaves['bot1'].bot
         self.builder = builder = bot.builders.get("dummy")
         self.failUnless(builder)
         self.failUnlessEqual(builder.builddir, "dummy")
         self.failUnlessEqual(builder.basedir,
-                             os.path.join("slavebase", "dummy"))
+                             os.path.join("slavebase-bot1", "dummy"))
 
         d = self.master.loadConfig(config_4_newbasedir)
         d.addCallback(self._testChangeBuilddir_2)
@@ -726,7 +726,7 @@
         # the basedir should be updated
         self.failUnlessEqual(builder.builddir, "dummy2")
         self.failUnlessEqual(builder.basedir,
-                             os.path.join("slavebase", "dummy2"))
+                             os.path.join("slavebase-bot1", "dummy2"))
 
         # add a new builder, which causes the basedir list to be reloaded
         d = self.master.loadConfig(config_4_newbuilder)

Index: test_scheduler.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_scheduler.py,v
retrieving revision 1.5
retrieving revision 1.6
diff -u -d -r1.5 -r1.6
--- test_scheduler.py	31 Aug 2005 01:12:07 -0000	1.5
+++ test_scheduler.py	14 Oct 2005 19:42:39 -0000	1.6
@@ -46,20 +46,6 @@
         s1 = self.master.sets[0]
         self.failUnlessEqual(s1.builderNames, ["a","b"])
 
-    def testPeriodic2(self):
-        # Twisted-2.0 starts the TimerService right away
-        # Twisted-1.3 waits one interval before starting it.
-        # so don't bother asserting anything about it
-        raise unittest.SkipTest("twisted-1.3 and -2.0 are inconsistent")
-        self.addScheduler(scheduler.Periodic("hourly", ["a","b"], 3600))
-        d = defer.Deferred()
-        reactor.callLater(1, d.callback, None)
-        d.addCallback(self._testPeriodic2_1)
-        return maybeWait(d)
-    def _testPeriodic2_1(self, res):
-        # the Periodic scheduler *should* fire right away
-        self.failUnless(self.master.sets)
-
     def isImportant(self, change):
         if "important" in change.files:
             return True

Index: test_slaves.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_slaves.py,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -d -r1.1 -r1.2
--- test_slaves.py	19 Jul 2005 23:11:58 -0000	1.1
+++ test_slaves.py	14 Oct 2005 19:42:39 -0000	1.2
@@ -2,16 +2,19 @@
 
 from twisted.trial import unittest
 from buildbot.twcompat import maybeWait
+from twisted.internet import defer, reactor
 
 from buildbot.test.runutils import RunMixin
-
+from buildbot.sourcestamp import SourceStamp
+from buildbot.process.base import BuildRequest
+from buildbot.status.builder import SUCCESS
 
 config_1 = """
 from buildbot.process import step, factory
 s = factory.s
 
 BuildmasterConfig = c = {}
-c['bots'] = [('bot1', 'sekrit'), ('bot2', 'sekrit')]
+c['bots'] = [('bot1', 'sekrit'), ('bot2', 'sekrit'), ('bot3', 'sekrit')]
 c['sources'] = []
 c['schedulers'] = []
 c['slavePortnum'] = 0
@@ -20,43 +23,166 @@
 f = factory.BuildFactory([s(step.RemoteDummy, timeout=1)])
 
 c['builders'] = [
-    {'name': 'b1', 'slavename': 'bot1', 'builddir': 'b1', 'factory': f},
+    {'name': 'b1', 'slavenames': ['bot1','bot2','bot3'],
+     'builddir': 'b1', 'factory': f},
     ]
-
 """
 
 class Slave(RunMixin, unittest.TestCase):
-    skip = "Not implemented yet"
+
     def setUp(self):
         RunMixin.setUp(self)
         self.master.loadConfig(config_1)
         self.master.startService()
         d = self.connectSlave(["b1"])
+        d.addCallback(lambda res: self.connectSlave(["b1"], "bot2"))
         return maybeWait(d)
 
-    def testClaim(self):
-        # have three slaves connect for the same builder, make sure all show
-        # up in the list of known slaves.
+    def doBuild(self, buildername):
+        br = BuildRequest("forced", SourceStamp())
+        d = self.control.getBuilder(buildername).requestBuild(br)
+        return d
 
-        # run a build, make sure it doesn't freak out.
+    def testSequence(self):
+        # make sure both slaves appear in the list.
+        attached_slaves = [c for c in self.master.botmaster.slaves.values()
+                           if c.slave]
+        self.failUnlessEqual(len(attached_slaves), 2)
+        b = self.master.botmaster.builders["b1"]
+        self.failUnlessEqual(len(b.slaves), 2)
+
+        # since the current scheduling algorithm is simple and does not
+        # rotate or attempt any sort of load-balancing, two builds in
+        # sequence should both use the first slave. This may change later if
+        # we move to a more sophisticated scheme.
+
+        d = self.doBuild("b1")
+        d.addCallback(self._testSequence_1)
+        return maybeWait(d)
+    def _testSequence_1(self, res):
+        self.failUnlessEqual(res.getResults(), SUCCESS)
+        self.failUnlessEqual(res.getSlavename(), "bot1")
+
+        d = self.doBuild("b1")
+        d.addCallback(self._testSequence_2)
+        return d
+    def _testSequence_2(self, res):
+        self.failUnlessEqual(res.getSlavename(), "bot1")
+
+
+    def testSimultaneous(self):
+        # make sure we can actually run two builds at the same time
+        d1 = self.doBuild("b1")
+        d2 = self.doBuild("b1")
+        d1.addCallback(self._testSimultaneous_1, d2)
+        return maybeWait(d1)
+    def _testSimultaneous_1(self, res, d2):
+        self.failUnlessEqual(res.getResults(), SUCCESS)
+        self.failUnlessEqual(res.getSlavename(), "bot1")
+        d2.addCallback(self._testSimultaneous_2)
+        return d2
+    def _testSimultaneous_2(self, res):
+        self.failUnlessEqual(res.getResults(), SUCCESS)
+        self.failUnlessEqual(res.getSlavename(), "bot2")
 
+    def testFallback1(self):
+        # detach the first slave, verify that a build is run using the second
+        # slave instead
+        d = self.shutdownSlave("bot1", "b1")
+        d.addCallback(self._testFallback1_1)
+        return maybeWait(d)
+    def _testFallback1_1(self, res):
+        attached_slaves = [c for c in self.master.botmaster.slaves.values()
+                           if c.slave]
+        self.failUnlessEqual(len(attached_slaves), 1)
+        self.failUnlessEqual(len(self.master.botmaster.builders["b1"].slaves),
+                             1)
+        d = self.doBuild("b1")
+        d.addCallback(self._testFallback1_2)
+        return d
+    def _testFallback1_2(self, res):
+        self.failUnlessEqual(res.getResults(), SUCCESS)
+        self.failUnlessEqual(res.getSlavename(), "bot2")
+
+    def testFallback2(self):
         # Disable the first slave, so that a slaveping will timeout. Then
         # start a build, and verify that the non-failing (second) one is
-        # claimed for the build, and that the failing one is moved to the
-        # back of the list.
-        print "done"
+        # claimed for the build, and that the failing one is removed from the
+        # list.
+
+        # reduce the ping time so we'll failover faster
+        self.master.botmaster.builders["b1"].START_BUILD_TIMEOUT = 1
+        self.disappearSlave("bot1", "b1")
+        d = self.doBuild("b1")
+        d.addCallback(self._testFallback2_1)
+        return maybeWait(d)
+    def _testFallback2_1(self, res):
+        self.failUnlessEqual(res.getResults(), SUCCESS)
+        self.failUnlessEqual(res.getSlavename(), "bot2")
+        b1slaves = self.master.botmaster.builders["b1"].slaves
+        self.failUnlessEqual(len(b1slaves), 1)
+        self.failUnlessEqual(b1slaves[0].slave.slavename, "bot2")
+
+
+    def notFinished(self, brs):
+        # utility method
+        builds = brs.getBuilds()
+        self.failIf(len(builds) > 1)
+        if builds:
+            self.failIf(builds[0].isFinished())
 
     def testDontClaimPingingSlave(self):
         # have two slaves connect for the same builder. Do something to the
         # first one so that slavepings are delayed (but do not fail
         # outright).
+        timers = []
+        self.slaves['bot1'].debugOpts["stallPings"] = (10, timers)
+        br = BuildRequest("forced", SourceStamp())
+        d1 = self.control.getBuilder("b1").requestBuild(br)
+        s1 = br.status # this is a BuildRequestStatus
+        # give it a chance to start pinging
+        d2 = defer.Deferred()
+        d2.addCallback(self._testDontClaimPingingSlave_1, d1, s1, timers)
+        reactor.callLater(1, d2.callback, None)
+        return maybeWait(d2)
+    def _testDontClaimPingingSlave_1(self, res, d1, s1, timers):
+        # now the first build is running (waiting on the ping), so start the
+        # second build. This should claim the second slave, not the first,
+        # because the first is busy doing the ping.
+        self.notFinished(s1)
+        d3 = self.doBuild("b1")
+        d3.addCallback(self._testDontClaimPingingSlave_2, d1, s1, timers)
+        return d3
+    def _testDontClaimPingingSlave_2(self, res, d1, s1, timers):
+        self.failUnlessEqual(res.getSlavename(), "bot2")
+        self.notFinished(s1)
+        # now let the ping complete
+        self.failUnlessEqual(len(timers), 1)
+        timers[0].reset(0)
+        d1.addCallback(self._testDontClaimPingingSlave_3)
+        return d1
+    def _testDontClaimPingingSlave_3(self, res):
+        self.failUnlessEqual(res.getSlavename(), "bot1")
 
-        # submit a build, which should claim the first slave and send the
-        # slaveping. While that is (slowly) happening, submit a second build.
-        # Verify that the second build does not claim the first slave (since
-        # it is busy doing the slaveping).
 
-        pass
+class Slave2(RunMixin, unittest.TestCase):
+
+    revision = 0
+
+    def setUp(self):
+        RunMixin.setUp(self)
+        self.master.loadConfig(config_1)
+        self.master.startService()
+
+    def doBuild(self, buildername, reason="forced"):
+        # we need to prevent these builds from being merged, so we create
+        # each of them with a different revision specifier. The revision is
+        # ignored because our build process does not have a source checkout
+        # step.
+        self.revision += 1
+        br = BuildRequest(reason, SourceStamp(revision=self.revision))
+        d = self.control.getBuilder(buildername).requestBuild(br)
+        return d
 
     def testFirstComeFirstServed(self):
         # submit three builds, then connect a slave which fails the
@@ -64,5 +190,36 @@
         # give up, and re-queue the build. Verify that the build gets
         # re-queued in front of all other builds. This may be tricky, because
         # the other builds may attempt to claim the just-failed slave.
-        pass
-    
+
+        d1 = self.doBuild("b1", "first")
+        d2 = self.doBuild("b1", "second")
+        #buildable = self.master.botmaster.builders["b1"].buildable
+        #print [b.reason for b in buildable]
+
+        # specifically, I want the poor build to get precedence over any
+        # others that were waiting. To test this, we need more builds than
+        # slaves.
+
+        # now connect a broken slave. The first build started as soon as it
+        # connects, so by the time we get to our _1 method, the ill-fated
+        # build has already started.
+        d = self.connectSlave(["b1"], opts={"failPingOnce": True})
+        d.addCallback(self._testFirstComeFirstServed_1, d1, d2)
+        return maybeWait(d)
+    def _testFirstComeFirstServed_1(self, res, d1, d2):
+        # the master has sent the slaveping. When this is received, it will
+        # fail, causing the master to hang up on the slave. When it
+        # reconnects, it should find the first build at the front of the
+        # queue. If we simply wait for both builds to complete, then look at
+        # the status logs, we should see that the builds ran in the correct
+        # order.
+
+        d = defer.DeferredList([d1,d2])
+        d.addCallback(self._testFirstComeFirstServed_2)
+        return d
+    def _testFirstComeFirstServed_2(self, res):
+        b = self.status.getBuilder("b1")
+        builds = b.getBuild(0), b.getBuild(1)
+        reasons = [build.getReason() for build in builds]
+        self.failUnlessEqual(reasons, ["first", "second"])
+




