[Buildbot-commits] buildbot/buildbot/test test_dependencies.py,NONE,1.1 test_slaves.py,NONE,1.1 test_locks.py,NONE,1.1 test_buildreq.py,NONE,1.1 runutils.py,NONE,1.1 test_changes.py,1.4,1.5 test_steps.py,1.13,1.14 test_config.py,1.21,1.22 test_run.py,1.32,1.33 test_control.py,1.6,1.7 test_vc.py,1.32,1.33 test_web.py,1.18,1.19 test_status.py,1.21,1.22 test_interlock.py,1.2,NONE
Brian Warner
warner at users.sourceforge.net
Tue Jul 19 23:12:01 UTC 2005
Update of /cvsroot/buildbot/buildbot/buildbot/test
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv17398/buildbot/test
Modified Files:
test_changes.py test_steps.py test_config.py test_run.py
test_control.py test_vc.py test_web.py test_status.py
Added Files:
test_dependencies.py test_slaves.py test_locks.py
test_buildreq.py runutils.py
Removed Files:
test_interlock.py
Log Message:
Revision: arch at buildbot.sf.net--2004/buildbot--dev--0--patch-239
Creator: Brian Warner <warner at monolith.lothar.com>
merge in build-on-branch code: Merged from warner at monolith.lothar.com--2005 (patch 0-18, 40-41)
Patches applied:
* warner at monolith.lothar.com--2005/buildbot--dev--0--patch-40
Merged from arch at buildbot.sf.net--2004 (patch 232-238)
* warner at monolith.lothar.com--2005/buildbot--dev--0--patch-41
Merged from local-usebranches (warner at monolith.lothar.com--2005/buildbot--usebranches--0) (patch 0-18)
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--base-0
tag of warner at monolith.lothar.com--2005/buildbot--dev--0--patch-38
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-1
rearrange build scheduling
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-2
replace ugly 4-tuple with a distinct SourceStamp class
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-3
document upcoming features, clean up CVS branch= argument
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-4
Merged from arch at buildbot.sf.net--2004 (patch 227-231), warner at monolith.lothar.com--2005 (patch 39)
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-5
implement per-Step Locks, add tests (which all fail)
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-6
implement scheduler.Dependent, add (failing) tests
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-7
make test_dependencies work
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-8
finish making Locks work, tests now pass
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-9
fix test failures when run against twisted >2.0.1
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-10
rename test_interlock.py to test_locks.py
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-11
add more Locks tests, add branch examples to manual
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-12
rewrite test_vc.py, create repositories in setUp rather than offline
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-13
make new tests work with twisted-1.3.0
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-14
implement/test build-on-branch for most VC systems
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-15
minor changes: test-case-name tags, init cleanup
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-16
Merged from arch at buildbot.sf.net--2004 (patch 232-233)
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-17
Merged from arch at buildbot.sf.net--2004 (patch 234-236)
* warner at monolith.lothar.com--2005/buildbot--usebranches--0--patch-18
Merged from arch at buildbot.sf.net--2004 (patch 237-238), warner at monolith.lothar.com--2005 (patch 40)
--- NEW FILE: runutils.py ---
import shutil, os, errno

from twisted.internet import defer
from twisted.python import log

from buildbot import master, interfaces
from buildbot.twcompat import maybeWait
from buildbot.slave import bot

class MyBot(bot.Bot):
    def remote_getSlaveInfo(self):
        return self.parent.info

class MyBuildSlave(bot.BuildSlave):
    botClass = MyBot

class RunMixin:
    master = None
    slave = None
    slave2 = None

    def rmtree(self, d):
        try:
            shutil.rmtree(d, ignore_errors=1)
        except OSError, e:
            # stupid 2.2 appears to ignore ignore_errors
            if e.errno != errno.ENOENT:
                raise

    def setUp(self):
        self.rmtree("basedir")
        self.rmtree("slavebase")
        self.rmtree("slavebase2")
        os.mkdir("basedir")
        self.master = master.BuildMaster("basedir")
        self.status = self.master.getStatus()
        self.control = interfaces.IControl(self.master)

    def connectSlave(self, builders=["dummy"]):
        port = self.master.slavePort._port.getHost().port
        os.mkdir("slavebase")
        slave = MyBuildSlave("localhost", port, "bot1", "sekrit",
                             "slavebase", keepalive=0, usePTY=1)
        slave.info = {"admin": "one"}
        self.slave = slave
        slave.startService()
        dl = []
        # initiate call for all of them, before waiting on result,
        # otherwise we might miss some
        for b in builders:
            dl.append(self.master.botmaster.waitUntilBuilderAttached(b))
        d = defer.DeferredList(dl)
        return d

    def connectSlaves(self, builders=["dummy"]):
        port = self.master.slavePort._port.getHost().port
        os.mkdir("slavebase")
        slave1 = MyBuildSlave("localhost", port, "bot1", "sekrit",
                              "slavebase", keepalive=0, usePTY=1)
        slave1.info = {"admin": "one"}
        self.slave = slave1
        slave1.startService()

        os.mkdir("slavebase2")
        slave2 = MyBuildSlave("localhost", port, "bot2", "sekrit",
                              "slavebase2", keepalive=0, usePTY=1)
        slave2.info = {"admin": "one"}
        self.slave2 = slave2
        slave2.startService()

        dl = []
        # initiate call for all of them, before waiting on result,
        # otherwise we might miss some
        for b in builders:
            dl.append(self.master.botmaster.waitUntilBuilderAttached(b))
        d = defer.DeferredList(dl)
        return d

    def connectSlave2(self):
        port = self.master.slavePort._port.getHost().port
        os.mkdir("slavebase2")
        slave = MyBuildSlave("localhost", port, "bot1", "sekrit",
                             "slavebase2", keepalive=0, usePTY=1)
        slave.info = {"admin": "two"}
        self.slave2 = slave
        slave.startService()

    def connectSlave3(self):
        # this slave has a very fast keepalive timeout
        port = self.master.slavePort._port.getHost().port
        os.mkdir("slavebase")
        slave = MyBuildSlave("localhost", port, "bot1", "sekrit",
                             "slavebase", keepalive=2, usePTY=1,
                             keepaliveTimeout=1)
        slave.info = {"admin": "one"}
        self.slave = slave
        slave.startService()
        d = self.master.botmaster.waitUntilBuilderAttached("dummy")
        return d

    def tearDown(self):
        log.msg("doing tearDown")
        d = self.shutdownSlave()
        d.addCallback(self._tearDown_1)
        d.addCallback(self._tearDown_2)
        return maybeWait(d)
    def _tearDown_1(self, res):
        if self.master:
            return defer.maybeDeferred(self.master.stopService)
    def _tearDown_2(self, res):
        self.master = None
        log.msg("tearDown done")

    # various forms of slave death

    def shutdownSlave(self):
        # the slave has disconnected normally: they SIGINT'ed it, or it shut
        # down willingly. This will kill child processes and give them a
        # chance to finish up. We return a Deferred that will fire when
        # everything is finished shutting down.
        log.msg("doing shutdownSlave")
        dl = []
        if self.slave:
            dl.append(self.slave.waitUntilDisconnected())
            dl.append(defer.maybeDeferred(self.slave.stopService))
        if self.slave2:
            dl.append(self.slave2.waitUntilDisconnected())
            dl.append(defer.maybeDeferred(self.slave2.stopService))
        d = defer.DeferredList(dl)
        d.addCallback(self._shutdownSlaveDone)
        return d
    def _shutdownSlaveDone(self, res):
        self.slave = None
        self.slave2 = None
        return self.master.botmaster.waitUntilBuilderDetached("dummy")

    def killSlave(self):
        # the slave has died, its host sent a FIN. The .notifyOnDisconnect
        # callbacks will terminate the current step, so the build should be
        # flunked (no further steps should be started).
        self.slave.bf.continueTrying = 0
        bot = self.slave.getServiceNamed("bot")
        broker = bot.builders["dummy"].remote.broker
        broker.transport.loseConnection()
        self.slave = None

    def disappearSlave(self):
        # the slave's host has vanished off the net, leaving the connection
        # dangling. This will be detected quickly by app-level keepalives or
        # a ping, or slowly by TCP timeouts.

        # implement this by replacing the slave Broker's .dataReceived method
        # with one that just throws away all data.
        def discard(data):
            pass
        bot = self.slave.getServiceNamed("bot")
        broker = bot.builders["dummy"].remote.broker
        broker.dataReceived = discard      # seal its ears
        broker.transport.write = discard   # and take away its voice

    def ghostSlave(self):
        # the slave thinks it has lost the connection, and initiated a
        # reconnect. The master doesn't yet realize it has lost the previous
        # connection, and sees two connections at once.
        raise NotImplementedError
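The disappearSlave trick above (replacing the Broker's dataReceived and the
transport's write with a no-op) works on any connection-like object, not just
a Twisted Broker. A stdlib-only sketch under that assumption, using a
hypothetical FakeBroker in place of the real thing:

```python
class FakeBroker:
    """Hypothetical stand-in for a connected Broker (illustration only)."""
    def __init__(self):
        self.received = []
        self.sent = []

    def dataReceived(self, data):
        self.received.append(data)

    def write(self, data):
        self.sent.append(data)

def sever(broker):
    # Simulate the host vanishing off the net: the connection object
    # still exists, but incoming data is dropped and outgoing data
    # goes nowhere, just like disappearSlave() does to the Broker.
    def discard(data):
        pass
    broker.dataReceived = discard   # seal its ears
    broker.write = discard          # and take away its voice

b = FakeBroker()
b.dataReceived("hello")
sever(b)
b.dataReceived("lost")    # silently dropped
b.write("lost too")       # goes nowhere
```

After sever() both peers see only silence, so failure detection falls to
application-level keepalives or TCP timeouts, exactly the situation the
tests want to provoke.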
--- NEW FILE: test_buildreq.py ---
# -*- test-case-name: buildbot.test.test_buildreq -*-

from twisted.trial import unittest
from twisted.internet import defer, reactor
from twisted.application import service

from buildbot import buildset, scheduler, interfaces, sourcestamp
from buildbot.twcompat import maybeWait
from buildbot.process import base
from buildbot.status import builder
from buildbot.changes.changes import Change

class Request(unittest.TestCase):
    def testMerge(self):
        R = base.BuildRequest
        S = sourcestamp.SourceStamp
        b1 = R("why", S("branch1", None, None, None))
        b1r1 = R("why2", S("branch1", "rev1", None, None))
        b1r1a = R("why not", S("branch1", "rev1", None, None))
        b1r2 = R("why3", S("branch1", "rev2", None, None))
        b2r2 = R("why4", S("branch2", "rev2", None, None))
        b1r1p1 = R("why5", S("branch1", "rev1", (3, "diff"), None))
        c1 = Change("alice", [], "changed stuff", branch="branch1")
        c2 = Change("alice", [], "changed stuff", branch="branch1")
        c3 = Change("alice", [], "changed stuff", branch="branch1")
        c4 = Change("alice", [], "changed stuff", branch="branch1")
        c5 = Change("alice", [], "changed stuff", branch="branch1")
        c6 = Change("alice", [], "changed stuff", branch="branch1")
        b1c1 = R("changes", S("branch1", None, None, [c1,c2,c3]))
        b1c2 = R("changes", S("branch1", None, None, [c4,c5,c6]))

        self.failUnless(b1.canBeMergedWith(b1))
        self.failIf(b1.canBeMergedWith(b1r1))
        self.failIf(b1.canBeMergedWith(b2r2))
        self.failIf(b1.canBeMergedWith(b1r1p1))
        self.failIf(b1.canBeMergedWith(b1c1))

        self.failIf(b1r1.canBeMergedWith(b1))
        self.failUnless(b1r1.canBeMergedWith(b1r1))
        self.failIf(b1r1.canBeMergedWith(b2r2))
        self.failIf(b1r1.canBeMergedWith(b1r1p1))
        self.failIf(b1r1.canBeMergedWith(b1c1))

        self.failIf(b1r2.canBeMergedWith(b1))
        self.failIf(b1r2.canBeMergedWith(b1r1))
        self.failUnless(b1r2.canBeMergedWith(b1r2))
        self.failIf(b1r2.canBeMergedWith(b2r2))
        self.failIf(b1r2.canBeMergedWith(b1r1p1))

        self.failIf(b1r1p1.canBeMergedWith(b1))
        self.failIf(b1r1p1.canBeMergedWith(b1r1))
        self.failIf(b1r1p1.canBeMergedWith(b1r2))
        self.failIf(b1r1p1.canBeMergedWith(b2r2))
        self.failIf(b1r1p1.canBeMergedWith(b1c1))

        self.failIf(b1c1.canBeMergedWith(b1))
        self.failIf(b1c1.canBeMergedWith(b1r1))
        self.failIf(b1c1.canBeMergedWith(b1r2))
        self.failIf(b1c1.canBeMergedWith(b2r2))
        self.failIf(b1c1.canBeMergedWith(b1r1p1))
        self.failUnless(b1c1.canBeMergedWith(b1c1))
        self.failUnless(b1c1.canBeMergedWith(b1c2))

        sm = b1.mergeWith([])
        self.failUnlessEqual(sm.branch, "branch1")
        self.failUnlessEqual(sm.revision, None)
        self.failUnlessEqual(sm.patch, None)
        self.failUnlessEqual(sm.changes, [])

        ss = b1r1.mergeWith([b1r1])
        self.failUnlessEqual(ss, S("branch1", "rev1", None, None))
        why = b1r1.mergeReasons([b1r1])
        self.failUnlessEqual(why, "why2")
        why = b1r1.mergeReasons([b1r1a])
        self.failUnlessEqual(why, "why2, why not")

        ss = b1c1.mergeWith([b1c2])
        self.failUnlessEqual(ss, S("branch1", None, None, [c1,c2,c3,c4,c5,c6]))
        why = b1c1.mergeReasons([b1c2])
        self.failUnlessEqual(why, "changes")

class FakeBuilder:
    def __init__(self):
        self.requests = []
    def submitBuildRequest(self, req):
        self.requests.append(req)

class Set(unittest.TestCase):
    def testBuildSet(self):
        S = buildset.BuildSet
        a,b = FakeBuilder(), FakeBuilder()

        # two builds, the first one fails, the second one succeeds. The
        # waitUntilSuccess watcher fires as soon as the first one fails,
        # while the waitUntilFinished watcher doesn't fire until all builds
        # are complete.

        source = sourcestamp.SourceStamp()
        s = S(["a","b"], source, "forced build")
        s.start([a,b])
        self.failUnlessEqual(len(a.requests), 1)
        self.failUnlessEqual(len(b.requests), 1)
        r1 = a.requests[0]
        self.failUnlessEqual(r1.reason, s.reason)
        self.failUnlessEqual(r1.source, s.source)

        res = []
        d1 = s.waitUntilSuccess()
        d1.addCallback(lambda r: res.append(("success", r)))
        d2 = s.waitUntilFinished()
        d2.addCallback(lambda r: res.append(("finished", r)))
        self.failUnlessEqual(res, [])

        builderstatus_a = builder.BuilderStatus("a")
        builderstatus_b = builder.BuilderStatus("b")
        bsa = builder.BuildStatus(builderstatus_a, 1)
        bsa.setResults(builder.FAILURE)
        a.requests[0].finished(bsa)
        self.failUnlessEqual(len(res), 1)
        self.failUnlessEqual(res[0][0], "success")
        bss = res[0][1]
        self.failUnless(interfaces.IBuildSetStatus(bss, None))

        bsb = builder.BuildStatus(builderstatus_b, 1)
        bsb.setResults(builder.SUCCESS)
        b.requests[0].finished(bsb)
        self.failUnlessEqual(len(res), 2)
        self.failUnlessEqual(res[1][0], "finished")
        self.failUnlessEqual(res[1][1], bss)

class FakeMaster(service.MultiService):
    def submitBuildSet(self, bs):
        self.sets.append(bs)

class Scheduling(unittest.TestCase):
    def setUp(self):
        self.master = master = FakeMaster()
        master.sets = []
        master.startService()
    def tearDown(self):
        d = self.master.stopService()
        return maybeWait(d)

    def addScheduler(self, s):
        s.setServiceParent(self.master)

    def testPeriodic1(self):
        self.addScheduler(scheduler.Periodic("quickly", ["a","b"], 2))
        d = defer.Deferred()
        reactor.callLater(5, d.callback, None)
        d.addCallback(self._testPeriodic1_1)
        return maybeWait(d)
    def _testPeriodic1_1(self, res):
        self.failUnless(len(self.master.sets) > 1)
        s1 = self.master.sets[0]
        self.failUnlessEqual(s1.builderNames, ["a","b"])

    def testPeriodic2(self):
        # Twisted-2.0 starts the TimerService right away,
        # Twisted-1.3 waits one interval before starting it,
        # so don't bother asserting anything about it
        raise unittest.SkipTest("twisted-1.3 and -2.0 are inconsistent")
        self.addScheduler(scheduler.Periodic("hourly", ["a","b"], 3600))
        d = defer.Deferred()
        reactor.callLater(1, d.callback, None)
        d.addCallback(self._testPeriodic2_1)
        return maybeWait(d)
    def _testPeriodic2_1(self, res):
        # the Periodic scheduler *should* fire right away
        self.failUnless(self.master.sets)

    def isImportant(self, change):
        if "important" in change.files:
            return True
        return False

    def testBranch(self):
        s = scheduler.Scheduler("b1", "branch1", 2, ["a","b"],
                                fileIsImportant=self.isImportant)
        self.addScheduler(s)

        c0 = Change("carol", ["important"], "other branch", branch="other")
        s.addChange(c0)
        self.failIf(s.timer)
        self.failIf(s.importantChanges)

        c1 = Change("alice", ["important", "not important"], "some changes",
                    branch="branch1")
        s.addChange(c1)
        c2 = Change("bob", ["not important", "boring"], "some more changes",
                    branch="branch1")
        s.addChange(c2)
        c3 = Change("carol", ["important", "dull"], "even more changes",
                    branch="branch1")
        s.addChange(c3)

        self.failUnlessEqual(s.importantChanges, [c1,c3])
        self.failUnlessEqual(s.unimportantChanges, [c2])
        self.failUnless(s.timer)

        d = defer.Deferred()
        reactor.callLater(4, d.callback, None)
        d.addCallback(self._testBranch_1)
        return maybeWait(d)
    def _testBranch_1(self, res):
        self.failUnlessEqual(len(self.master.sets), 1)
        s = self.master.sets[0].source
        self.failUnlessEqual(s.branch, "branch1")
        self.failUnlessEqual(s.revision, None)
        self.failUnlessEqual(len(s.changes), 3)
        self.failUnlessEqual(s.patch, None)

    def testAnyBranch(self):
        s = scheduler.AnyBranchScheduler("b1", None, 2, ["a","b"],
                                         fileIsImportant=self.isImportant)
        self.addScheduler(s)

        c1 = Change("alice", ["important", "not important"], "some changes",
                    branch="branch1")
        s.addChange(c1)
        c2 = Change("bob", ["not important", "boring"], "some more changes",
                    branch="branch1")
        s.addChange(c2)
        c3 = Change("carol", ["important", "dull"], "even more changes",
                    branch="branch1")
        s.addChange(c3)
        c4 = Change("carol", ["important"], "other branch", branch="branch2")
        s.addChange(c4)

        d = defer.Deferred()
        reactor.callLater(4, d.callback, None)
        d.addCallback(self._testAnyBranch_1)
        return maybeWait(d)
    def _testAnyBranch_1(self, res):
        self.failUnlessEqual(len(self.master.sets), 2)
        self.master.sets.sort(lambda a,b: cmp(a.source.branch,
                                              b.source.branch))
        s1 = self.master.sets[0].source
        self.failUnlessEqual(s1.branch, "branch1")
        self.failUnlessEqual(s1.revision, None)
        self.failUnlessEqual(len(s1.changes), 3)
        self.failUnlessEqual(s1.patch, None)
        s2 = self.master.sets[1].source
        self.failUnlessEqual(s2.branch, "branch2")
        self.failUnlessEqual(s2.revision, None)
        self.failUnlessEqual(len(s2.changes), 1)
        self.failUnlessEqual(s2.patch, None)
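The merge rules exercised by testMerge can be summarized in a short sketch.
This is a hypothetical simplification for illustration, not the real
buildbot.sourcestamp.SourceStamp: requests merge only within one branch,
patched builds never merge, change-triggered builds merge with each other,
and otherwise both sides must name the same revision.

```python
class SourceStamp:
    """Hypothetical simplification of the merge rules tested above."""
    def __init__(self, branch=None, revision=None, patch=None, changes=None):
        self.branch = branch
        self.revision = revision
        self.patch = patch
        self.changes = changes or []

    def canBeMergedWith(self, other):
        if self.branch != other.branch:
            return False          # never merge across branches
        if self.patch or other.patch:
            return False          # patched builds are one-shot
        if self.changes and other.changes:
            return True           # two change-triggered builds merge
        # otherwise both must name the same (possibly None) revision
        return (not self.changes and not other.changes
                and self.revision == other.revision)

    def mergeWith(self, others):
        # a merged stamp carries the union of the changes, in order
        changes = self.changes[:]
        for o in others:
            changes.extend(o.changes)
        return SourceStamp(self.branch, self.revision, self.patch, changes)

a = SourceStamp("branch1", changes=["c1"])
b = SourceStamp("branch1", changes=["c2"])
merged = a.mergeWith([b])
```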
Index: test_changes.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_changes.py,v
retrieving revision 1.4
retrieving revision 1.5
diff -u -d -r1.4 -r1.5
--- test_changes.py 17 May 2005 03:36:54 -0000 1.4
+++ test_changes.py 19 Jul 2005 23:11:58 -0000 1.5
@@ -59,23 +59,18 @@
self.failUnlessEqual(c3.who, "alice")
config_empty = """
-from buildbot.changes import pb
-c = {}
+BuildmasterConfig = c = {}
c['bots'] = []
c['builders'] = []
c['sources'] = []
+c['schedulers'] = []
c['slavePortnum'] = 0
-BuildmasterConfig = c
"""
-config_sender = """
+config_sender = config_empty + \
+"""
from buildbot.changes import pb
-c = {}
-c['bots'] = []
-c['builders'] = []
c['sources'] = [pb.PBChangeSource(port=None)]
-c['slavePortnum'] = 0
-BuildmasterConfig = c
"""
class Sender(unittest.TestCase):
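The refactoring in this hunk builds each test configuration by concatenating
onto a shared base string instead of repeating the boilerplate. A minimal
sketch of the idiom (names illustrative; the real master does more than a
bare exec when loading its config file):

```python
# One shared base config; every variant appends only what differs.
base_cfg = """
BuildmasterConfig = c = {}
c['bots'] = []
c['sources'] = []
c['schedulers'] = []
c['slavePortnum'] = 0
"""

sender_cfg = base_cfg + """
c['sources'] = ['a change source goes here']
"""

# Executing the string yields the config dict, much as the master
# does with the contents of its config file.
namespace = {}
exec(sender_cfg, namespace)
c = namespace['BuildmasterConfig']
```

Besides cutting duplication, this means a new mandatory key (like
c['schedulers'] in this commit) only has to be added to the base string once.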
Index: test_config.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_config.py,v
retrieving revision 1.21
retrieving revision 1.22
diff -u -d -r1.21 -r1.22
--- test_config.py 22 May 2005 02:16:13 -0000 1.21
+++ test_config.py 19 Jul 2005 23:11:58 -0000 1.22
@@ -16,6 +16,7 @@
from buildbot.twcompat import providedBy
from buildbot.master import BuildMaster
+from buildbot import scheduler
from twisted.application import service, internet
from twisted.spread import pb
from twisted.web.server import Site
@@ -36,402 +37,298 @@
emptyCfg = \
"""
-c = {}
+BuildmasterConfig = c = {}
c['bots'] = []
c['sources'] = []
+c['schedulers'] = []
c['builders'] = []
c['slavePortnum'] = 9999
c['projectName'] = 'dummy project'
c['projectURL'] = 'http://dummy.example.com'
c['buildbotURL'] = 'http://dummy.example.com/buildbot'
-BuildmasterConfig = c
-"""
-
-slaveportCfg = \
-"""
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9000
-BuildmasterConfig = c
-"""
-
-botsCfg = \
-"""
-c = {}
-c['bots'] = [('bot1', 'pw1'), ('bot2', 'pw2')]
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
-"""
-
-sourcesCfg = \
-"""
-from buildbot.changes.freshcvs import FreshCVSSource
-c = {}
-c['bots'] = []
-s1 = FreshCVSSource('cvs.example.com', 1000, 'pname', 'spass',
- prefix='Prefix/')
-c['sources'] = [s1]
-c['builders'] = []
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
"""
buildersCfg = \
"""
from buildbot.process.factory import BasicBuildFactory
-c = {}
+BuildmasterConfig = c = {}
c['bots'] = [('bot1', 'pw1')]
c['sources'] = []
+c['schedulers'] = []
+c['slavePortnum'] = 9999
f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
c['builders'] = [{'name':'builder1', 'slavename':'bot1',
'builddir':'workdir', 'factory':f1}]
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
"""
-buildersCfg2 = \
+buildersCfg2 = buildersCfg + \
"""
-from buildbot.process.factory import BasicBuildFactory
-c = {}
-c['bots'] = [('bot1', 'pw1')]
-c['sources'] = []
f1 = BasicBuildFactory('cvsroot', 'cvsmodule2')
c['builders'] = [{'name':'builder1', 'slavename':'bot1',
'builddir':'workdir', 'factory':f1}]
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
"""
-buildersCfg2new = \
-"""
-from buildbot.process.factory import BasicBuildFactory
-c = {}
-c['bots'] = [('bot1', 'pw1')]
-c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule2')
-c['builders'] = [{ 'name': 'builder1', 'slavename': 'bot1',
- 'builddir': 'workdir', 'factory': f1 }]
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
-"""
-
-buildersCfg1new = \
-"""
-from buildbot.process.factory import BasicBuildFactory
-c = {}
-c['bots'] = [('bot1', 'pw1')]
-c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
-c['builders'] = [{ 'name': 'builder1', 'slavename': 'bot1',
- 'builddir': 'workdir', 'factory': f1 }]
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
-"""
-
-buildersCfg3 = \
+buildersCfg3 = buildersCfg2 + \
"""
-from buildbot.process.factory import BasicBuildFactory
-c = {}
-c['bots'] = [('bot1', 'pw1')]
-c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule2')
-c['builders'] = [{ 'name': 'builder1', 'slavename': 'bot1',
- 'builddir': 'workdir', 'factory': f1 },
- { 'name': 'builder2', 'slavename': 'bot1',
- 'builddir': 'workdir2', 'factory': f1 }]
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
+c['builders'].append({'name': 'builder2', 'slavename': 'bot1',
+ 'builddir': 'workdir2', 'factory': f1 })
"""
-buildersCfg4 = \
+buildersCfg4 = buildersCfg2 + \
"""
-from buildbot.process.factory import BasicBuildFactory
-c = {}
-c['bots'] = [('bot1', 'pw1')]
-c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule2')
c['builders'] = [{ 'name': 'builder1', 'slavename': 'bot1',
'builddir': 'newworkdir', 'factory': f1 },
{ 'name': 'builder2', 'slavename': 'bot1',
'builddir': 'workdir2', 'factory': f1 }]
-c['slavePortnum'] = 9999
-BuildmasterConfig = c
"""
-ircCfg1 = \
+ircCfg1 = emptyCfg + \
"""
from buildbot.status import words
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
c['status'] = [words.IRC('irc.us.freenode.net', 'buildbot', ['twisted'])]
-BuildmasterConfig = c
"""
-ircCfg2 = \
+ircCfg2 = emptyCfg + \
"""
from buildbot.status import words
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
c['status'] = [words.IRC('irc.us.freenode.net', 'buildbot', ['twisted']),
words.IRC('irc.example.com', 'otherbot', ['chan1', 'chan2'])]
-BuildmasterConfig = c
"""
-ircCfg3 = \
+ircCfg3 = emptyCfg + \
"""
from buildbot.status import words
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
c['status'] = [words.IRC('irc.us.freenode.net', 'buildbot', ['knotted'])]
-BuildmasterConfig = c
"""
-webCfg1 = \
+webCfg1 = emptyCfg + \
"""
from buildbot.status import html
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
c['status'] = [html.Waterfall(http_port=9980)]
-BuildmasterConfig = c
"""
-webCfg2 = \
+webCfg2 = emptyCfg + \
"""
from buildbot.status import html
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
c['status'] = [html.Waterfall(http_port=9981)]
-BuildmasterConfig = c
"""
-webNameCfg1 = \
+webNameCfg1 = emptyCfg + \
"""
from buildbot.status import html
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
c['status'] = [html.Waterfall(distrib_port='~/.twistd-web-pb')]
-BuildmasterConfig = c
"""
-webNameCfg2 = \
+webNameCfg2 = emptyCfg + \
"""
from buildbot.status import html
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
c['status'] = [html.Waterfall(distrib_port='bar.socket')]
-BuildmasterConfig = c
"""
-debugPasswordCfg = \
+debugPasswordCfg = emptyCfg + \
"""
-c = {}
-c['bots'] = []
-c['sources'] = []
-c['builders'] = []
-c['slavePortnum'] = 9999
c['debugPassword'] = 'sekrit'
-BuildmasterConfig = c
"""
-# create an inactive interlock (builder3 is not yet defined). This isn't
-# recommended practice, it is only here to test the code
-interlockCfg1 = \
+interlockCfgBad = \
"""
from buildbot.process.factory import BasicBuildFactory
c = {}
c['bots'] = [('bot1', 'pw1')]
c['sources'] = []
+c['schedulers'] = []
f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
c['builders'] = [
{ 'name': 'builder1', 'slavename': 'bot1',
'builddir': 'workdir', 'factory': f1 },
{ 'name': 'builder2', 'slavename': 'bot1',
'builddir': 'workdir2', 'factory': f1 },
- { 'name': 'builder4', 'slavename': 'bot1',
- 'builddir': 'workdir4', 'factory': f1 },
- { 'name': 'builder5', 'slavename': 'bot1',
- 'builddir': 'workdir5', 'factory': f1 },
]
+# interlocks have been removed
c['interlocks'] = [('lock1', ['builder1'], ['builder2', 'builder3']),
]
c['slavePortnum'] = 9999
BuildmasterConfig = c
"""
-# make it active
-interlockCfg2 = \
+lockCfgBad1 = \
"""
-from buildbot.process.factory import BasicBuildFactory
+from buildbot.process.step import Dummy
+from buildbot.process.factory import BuildFactory, s
+from buildbot.locks import MasterLock
c = {}
c['bots'] = [('bot1', 'pw1')]
c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
+c['schedulers'] = []
+l1 = MasterLock('lock1')
+l2 = MasterLock('lock1') # duplicate lock name
+f1 = BuildFactory([s(Dummy, locks=[])])
c['builders'] = [
{ 'name': 'builder1', 'slavename': 'bot1',
- 'builddir': 'workdir', 'factory': f1 },
+ 'builddir': 'workdir', 'factory': f1, 'locks': [l1, l2] },
{ 'name': 'builder2', 'slavename': 'bot1',
'builddir': 'workdir2', 'factory': f1 },
- { 'name': 'builder3', 'slavename': 'bot1',
- 'builddir': 'workdir3', 'factory': f1 },
- { 'name': 'builder4', 'slavename': 'bot1',
- 'builddir': 'workdir4', 'factory': f1 },
- { 'name': 'builder5', 'slavename': 'bot1',
- 'builddir': 'workdir5', 'factory': f1 },
- ]
-c['interlocks'] = [('lock1', ['builder1'], ['builder2', 'builder3']),
]
c['slavePortnum'] = 9999
BuildmasterConfig = c
"""
-# add a second lock
-interlockCfg3 = \
+lockCfgBad2 = \
"""
-from buildbot.process.factory import BasicBuildFactory
+from buildbot.process.step import Dummy
+from buildbot.process.factory import BuildFactory, s
+from buildbot.locks import MasterLock, SlaveLock
c = {}
c['bots'] = [('bot1', 'pw1')]
c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
+c['schedulers'] = []
+l1 = MasterLock('lock1')
+l2 = SlaveLock('lock1') # duplicate lock name
+f1 = BuildFactory([s(Dummy, locks=[])])
c['builders'] = [
{ 'name': 'builder1', 'slavename': 'bot1',
- 'builddir': 'workdir', 'factory': f1 },
+ 'builddir': 'workdir', 'factory': f1, 'locks': [l1, l2] },
{ 'name': 'builder2', 'slavename': 'bot1',
'builddir': 'workdir2', 'factory': f1 },
- { 'name': 'builder3', 'slavename': 'bot1',
- 'builddir': 'workdir3', 'factory': f1 },
- { 'name': 'builder4', 'slavename': 'bot1',
- 'builddir': 'workdir4', 'factory': f1 },
- { 'name': 'builder5', 'slavename': 'bot1',
- 'builddir': 'workdir5', 'factory': f1 },
]
-c['interlocks'] = [('lock1', ['builder1'], ['builder2', 'builder3']),
- ('lock2', ['builder3', 'builder4'], ['builder5']),
+c['slavePortnum'] = 9999
+BuildmasterConfig = c
+"""
+
+lockCfgBad3 = \
+"""
+from buildbot.process.step import Dummy
+from buildbot.process.factory import BuildFactory, s
+from buildbot.locks import MasterLock
+c = {}
+c['bots'] = [('bot1', 'pw1')]
+c['sources'] = []
+c['schedulers'] = []
+l1 = MasterLock('lock1')
+l2 = MasterLock('lock1') # duplicate lock name
+f1 = BuildFactory([s(Dummy, locks=[l2])])
+f2 = BuildFactory([s(Dummy)])
+c['builders'] = [
+ { 'name': 'builder1', 'slavename': 'bot1',
+ 'builddir': 'workdir', 'factory': f2, 'locks': [l1] },
+ { 'name': 'builder2', 'slavename': 'bot1',
+ 'builddir': 'workdir2', 'factory': f1 },
]
c['slavePortnum'] = 9999
BuildmasterConfig = c
"""
-# change the second lock
-interlockCfg4 = \
+lockCfg1a = \
"""
from buildbot.process.factory import BasicBuildFactory
+from buildbot.locks import MasterLock
c = {}
c['bots'] = [('bot1', 'pw1')]
c['sources'] = []
+c['schedulers'] = []
f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
+l1 = MasterLock('lock1')
+l2 = MasterLock('lock2')
c['builders'] = [
{ 'name': 'builder1', 'slavename': 'bot1',
- 'builddir': 'workdir', 'factory': f1 },
+ 'builddir': 'workdir', 'factory': f1, 'locks': [l1, l2] },
{ 'name': 'builder2', 'slavename': 'bot1',
'builddir': 'workdir2', 'factory': f1 },
- { 'name': 'builder3', 'slavename': 'bot1',
- 'builddir': 'workdir3', 'factory': f1 },
- { 'name': 'builder4', 'slavename': 'bot1',
- 'builddir': 'workdir4', 'factory': f1 },
- { 'name': 'builder5', 'slavename': 'bot1',
- 'builddir': 'workdir5', 'factory': f1 },
- ]
-c['interlocks'] = [('lock1', ['builder1'], ['builder2', 'builder3']),
- ('lock2', ['builder1', 'builder4'], ['builder5']),
]
c['slavePortnum'] = 9999
BuildmasterConfig = c
"""
-# delete the first lock
-interlockCfg5 = \
+lockCfg1b = \
"""
from buildbot.process.factory import BasicBuildFactory
+from buildbot.locks import MasterLock
c = {}
c['bots'] = [('bot1', 'pw1')]
c['sources'] = []
+c['schedulers'] = []
f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
+l1 = MasterLock('lock1')
+l2 = MasterLock('lock2')
c['builders'] = [
{ 'name': 'builder1', 'slavename': 'bot1',
- 'builddir': 'workdir', 'factory': f1 },
+ 'builddir': 'workdir', 'factory': f1, 'locks': [l1] },
{ 'name': 'builder2', 'slavename': 'bot1',
'builddir': 'workdir2', 'factory': f1 },
- { 'name': 'builder3', 'slavename': 'bot1',
- 'builddir': 'workdir3', 'factory': f1 },
- { 'name': 'builder4', 'slavename': 'bot1',
- 'builddir': 'workdir4', 'factory': f1 },
- { 'name': 'builder5', 'slavename': 'bot1',
- 'builddir': 'workdir5', 'factory': f1 },
- ]
-c['interlocks'] = [('lock2', ['builder1', 'builder4'], ['builder5']),
]
c['slavePortnum'] = 9999
BuildmasterConfig = c
"""
-# render the lock inactive by removing a builder it depends upon
-interlockCfg6 = \
+# test out step Locks
+lockCfg2a = \
"""
-from buildbot.process.factory import BasicBuildFactory
+from buildbot.process.step import Dummy
+from buildbot.process.factory import BuildFactory, s
+from buildbot.locks import MasterLock
c = {}
c['bots'] = [('bot1', 'pw1')]
c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
+c['schedulers'] = []
+l1 = MasterLock('lock1')
+l2 = MasterLock('lock2')
+f1 = BuildFactory([s(Dummy, locks=[l1,l2])])
+f2 = BuildFactory([s(Dummy)])
+
c['builders'] = [
{ 'name': 'builder1', 'slavename': 'bot1',
'builddir': 'workdir', 'factory': f1 },
{ 'name': 'builder2', 'slavename': 'bot1',
- 'builddir': 'workdir2', 'factory': f1 },
- { 'name': 'builder3', 'slavename': 'bot1',
- 'builddir': 'workdir3', 'factory': f1 },
- { 'name': 'builder4', 'slavename': 'bot1',
- 'builddir': 'workdir4', 'factory': f1 },
+ 'builddir': 'workdir2', 'factory': f2 },
]
-c['interlocks'] = [('lock2', ['builder1', 'builder4'], ['builder5']),
+c['slavePortnum'] = 9999
+BuildmasterConfig = c
+"""
+
+lockCfg2b = \
+"""
+from buildbot.process.step import Dummy
+from buildbot.process.factory import BuildFactory, s
+from buildbot.locks import MasterLock
+c = {}
+c['bots'] = [('bot1', 'pw1')]
+c['sources'] = []
+c['schedulers'] = []
+l1 = MasterLock('lock1')
+l2 = MasterLock('lock2')
+f1 = BuildFactory([s(Dummy, locks=[l1])])
+f2 = BuildFactory([s(Dummy)])
+
+c['builders'] = [
+ { 'name': 'builder1', 'slavename': 'bot1',
+ 'builddir': 'workdir', 'factory': f1 },
+ { 'name': 'builder2', 'slavename': 'bot1',
+ 'builddir': 'workdir2', 'factory': f2 },
]
c['slavePortnum'] = 9999
BuildmasterConfig = c
"""
-# finally remove the interlock
-interlockCfg7 = \
+lockCfg2c = \
"""
-from buildbot.process.factory import BasicBuildFactory
+from buildbot.process.step import Dummy
+from buildbot.process.factory import BuildFactory, s
+from buildbot.locks import MasterLock
c = {}
c['bots'] = [('bot1', 'pw1')]
c['sources'] = []
-f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
+c['schedulers'] = []
+l1 = MasterLock('lock1')
+l2 = MasterLock('lock2')
+f1 = BuildFactory([s(Dummy)])
+f2 = BuildFactory([s(Dummy)])
+
c['builders'] = [
{ 'name': 'builder1', 'slavename': 'bot1',
'builddir': 'workdir', 'factory': f1 },
{ 'name': 'builder2', 'slavename': 'bot1',
- 'builddir': 'workdir2', 'factory': f1 },
- { 'name': 'builder3', 'slavename': 'bot1',
- 'builddir': 'workdir3', 'factory': f1 },
- { 'name': 'builder4', 'slavename': 'bot1',
- 'builddir': 'workdir4', 'factory': f1 },
+ 'builddir': 'workdir2', 'factory': f2 },
]
-c['interlocks'] = []
c['slavePortnum'] = 9999
BuildmasterConfig = c
"""
@@ -504,7 +401,6 @@
self.checkPorts(master, [(9999, pb.PBServerFactory)])
self.failUnlessEqual(list(master.change_svc), [])
self.failUnlessEqual(master.botmaster.builders, {})
- self.failUnlessEqual(master.botmaster.interlocks, {})
self.failUnlessEqual(master.checker.users,
{"change": "changepw"})
self.failUnlessEqual(master.projectName, "dummy project")
@@ -528,7 +424,7 @@
"the slave port was changed even " + \
"though the configuration was not")
- master.loadConfig(slaveportCfg)
+ master.loadConfig(emptyCfg + "c['slavePortnum'] = 9000\n")
self.failUnlessEqual(master.slavePortnum, 9000)
ports = self.checkPorts(master, [(9000, pb.PBServerFactory)])
self.failIf(p is ports[0],
@@ -541,6 +437,8 @@
self.failUnlessEqual(master.botmaster.builders, {})
self.failUnlessEqual(master.checker.users,
{"change": "changepw"})
+ botsCfg = (emptyCfg +
+ "c['bots'] = [('bot1', 'pw1'), ('bot2', 'pw2')]\n")
master.loadConfig(botsCfg)
self.failUnlessEqual(master.checker.users,
{"change": "changepw",
@@ -563,6 +461,14 @@
master.loadConfig(emptyCfg)
self.failUnlessEqual(list(master.change_svc), [])
+ sourcesCfg = emptyCfg + \
+"""
+from buildbot.changes.freshcvs import FreshCVSSource
+s1 = FreshCVSSource('cvs.example.com', 1000, 'pname', 'spass',
+ prefix='Prefix/')
+c['sources'] = [s1]
+"""
+
d = master.loadConfig(sourcesCfg)
dr(d)
self.failUnlessEqual(len(list(master.change_svc)), 1)
@@ -586,6 +492,39 @@
dr(d)
self.failUnlessEqual(list(master.change_svc), [])
+ def testSchedulers(self):
+ master = self.buildmaster
+ master.loadChanges()
+ master.loadConfig(emptyCfg)
+ self.failUnlessEqual(master.schedulers, [])
+
+ schedulersCfg = \
+"""
+from buildbot.scheduler import Scheduler
+from buildbot.process.factory import BasicBuildFactory
+c = {}
+c['bots'] = [('bot1', 'pw1')]
+c['sources'] = []
+c['schedulers'] = [Scheduler('full', None, 60, ['builder1'])]
+f1 = BasicBuildFactory('cvsroot', 'cvsmodule')
+c['builders'] = [{'name':'builder1', 'slavename':'bot1',
+ 'builddir':'workdir', 'factory':f1}]
+c['slavePortnum'] = 9999
+c['projectName'] = 'dummy project'
+c['projectURL'] = 'http://dummy.example.com'
+c['buildbotURL'] = 'http://dummy.example.com/buildbot'
+BuildmasterConfig = c
+"""
+
+ d = master.loadConfig(schedulersCfg)
+ dr(d)
+ self.failUnlessEqual(len(master.schedulers), 1)
+ s = master.schedulers[0]
+ self.failUnless(isinstance(s, scheduler.Scheduler))
+ self.failUnlessEqual(s.name, "full")
+ self.failUnlessEqual(s.branch, None)
+ self.failUnlessEqual(s.treeStableTimer, 60)
+ self.failUnlessEqual(s.builderNames, ['builder1'])
def testBuilders(self):
master = self.buildmaster
@@ -634,22 +573,6 @@
#statusbag3 = master.client_svc.statusbags["builder1"]
#self.failUnlessIdentical(statusbag, statusbag3)
- # moving to a new-style builder spec shouldn't cause a change
- master.loadConfig(buildersCfg2new)
- b3n = master.botmaster.builders["builder1"]
- self.failUnlessIdentical(b3n, b3)
- # TODO
- #statusbag3n = master.client_svc.statusbags["builder1"]
- #self.failUnlessIdentical(statusbag3n, statusbag3)
-
- # unless it is different somehow
- master.loadConfig(buildersCfg1new)
- b3nn = master.botmaster.builders["builder1"]
- self.failIf(b3nn is b3n)
-
- master.loadConfig(buildersCfg2new)
- b3 = master.botmaster.builders["builder1"]
-
# adding new builder
master.loadConfig(buildersCfg3)
self.failUnlessEqual(master.botmaster.builderNames, ["builder1",
@@ -787,121 +710,44 @@
self.failUnlessEqual(master.checker.users,
{"change": "changepw"})
- def checkInterlocks(self, botmaster, expected):
- for (bname, (feeders, interlocks)) in expected.items():
- b = botmaster.builders[bname]
- self.failUnlessListsEquivalent(b.feeders, feeders)
- self.failUnlessListsEquivalent(b.interlocks, interlocks)
- for bname, b in botmaster.builders.items():
- if bname not in expected.keys():
- self.failUnlessEqual(b.feeders, [])
- self.failUnlessEqual(b.interlocks, [])
-
- def testInterlocks(self):
+ def testLocks(self):
master = self.buildmaster
botmaster = master.botmaster
- # create an inactive interlock
- master.loadConfig(interlockCfg1)
- self.failUnlessListsEquivalent(botmaster.interlocks.keys(),
- ['lock1'])
- i1 = botmaster.interlocks['lock1']
- self.failUnless(isinstance(i1, Interlock))
- self.failUnlessEqual(i1.name, 'lock1')
- self.failUnlessEqual(i1.feederNames, ['builder1'])
- self.failUnlessEqual(i1.watcherNames, ['builder2', 'builder3'])
- self.failUnlessEqual(i1.active, False)
- self.checkInterlocks(botmaster, {'builder1': ([], [])})
-
- # make it active by adding the builder
- master.loadConfig(interlockCfg2)
- self.failUnlessListsEquivalent(botmaster.interlocks.keys(),
- ['lock1'])
- # should be the same Interlock object as before
- self.failUnlessIdentical(i1, botmaster.interlocks['lock1'])
- self.failUnless(isinstance(i1, Interlock))
- self.failUnlessEqual(i1.name, 'lock1')
- self.failUnlessEqual(i1.feederNames, ['builder1'])
- self.failUnlessEqual(i1.watcherNames, ['builder2', 'builder3'])
- self.failUnlessEqual(i1.active, True)
- self.checkInterlocks(botmaster, {'builder1': ([i1], []),
- 'builder2': ([], [i1]),
- 'builder3': ([], [i1]),
- })
-
- # add a second lock
- master.loadConfig(interlockCfg3)
- self.failUnlessListsEquivalent(botmaster.interlocks.keys(),
- ['lock1', 'lock2'])
- self.failUnlessIdentical(i1, botmaster.interlocks['lock1'])
- self.failUnless(isinstance(i1, Interlock))
- self.failUnlessEqual(i1.name, 'lock1')
- self.failUnlessEqual(i1.feederNames, ['builder1'])
- self.failUnlessEqual(i1.watcherNames, ['builder2', 'builder3'])
- self.failUnlessEqual(i1.active, True)
- i2 = botmaster.interlocks['lock2']
- self.failUnless(isinstance(i2, Interlock))
- self.failUnlessEqual(i2.name, 'lock2')
- self.failUnlessEqual(i2.feederNames, ['builder3', 'builder4'])
- self.failUnlessEqual(i2.watcherNames, ['builder5'])
- self.failUnlessEqual(i2.active, True)
- self.checkInterlocks(botmaster, {'builder1': ([i1], []),
- 'builder2': ([], [i1]),
- 'builder3': ([i2], [i1]),
- 'builder4': ([i2], []),
- 'builder5': ([], [i2]),
- })
-
- # modify the second interlock
- master.loadConfig(interlockCfg4)
- self.failUnlessListsEquivalent(botmaster.interlocks.keys(),
- ['lock1', 'lock2'])
- self.failUnlessIdentical(i1, botmaster.interlocks['lock1'])
- self.failUnless(isinstance(i1, Interlock))
- self.failUnlessEqual(i1.name, 'lock1')
- self.failUnlessEqual(i1.feederNames, ['builder1'])
- self.failUnlessEqual(i1.watcherNames, ['builder2', 'builder3'])
- self.failUnlessEqual(i1.active, True)
- # second interlock has changed, should be a new Interlock object
- self.failIf(i2 is botmaster.interlocks['lock2'])
- i2 = botmaster.interlocks['lock2']
- self.failUnless(isinstance(i2, Interlock))
- self.failUnlessEqual(i2.name, 'lock2')
- self.failUnlessEqual(i2.feederNames, ['builder1', 'builder4'])
- self.failUnlessEqual(i2.watcherNames, ['builder5'])
- self.failUnlessEqual(i2.active, True)
- self.checkInterlocks(botmaster, {'builder1': ([i1,i2], []),
- 'builder2': ([], [i1]),
- 'builder3': ([], [i1]),
- 'builder4': ([i2], []),
- 'builder5': ([], [i2]),
- })
+ # make sure that c['interlocks'] is rejected properly
+ self.failUnlessRaises(KeyError, master.loadConfig, interlockCfgBad)
+ # and that duplicate-named Locks are caught
+ self.failUnlessRaises(ValueError, master.loadConfig, lockCfgBad1)
+ self.failUnlessRaises(ValueError, master.loadConfig, lockCfgBad2)
+ self.failUnlessRaises(ValueError, master.loadConfig, lockCfgBad3)
- # delete the first interlock
- master.loadConfig(interlockCfg5)
- self.failUnlessEqual(botmaster.interlocks.keys(), ['lock2'])
- self.failUnlessIdentical(i2, botmaster.interlocks['lock2'])
- self.failUnless(isinstance(i2, Interlock))
- self.failUnlessEqual(i2.name, 'lock2')
- self.failUnlessEqual(i2.feederNames, ['builder1', 'builder4'])
- self.failUnlessEqual(i2.watcherNames, ['builder5'])
- self.failUnlessEqual(i2.active, True)
- self.checkInterlocks(botmaster, {'builder1': ([i2], []),
- 'builder4': ([i2], []),
- 'builder5': ([], [i2]),
- })
+ # create a Builder that uses Locks
+ master.loadConfig(lockCfg1a)
+ b1 = master.botmaster.builders["builder1"]
+ self.failUnlessEqual(len(b1.locks), 2)
- # make it inactive by removing a builder it depends upon
- master.loadConfig(interlockCfg6)
- self.failUnlessEqual(botmaster.interlocks.keys(), ['lock2'])
- self.failUnlessIdentical(i2, botmaster.interlocks['lock2'])
- self.failUnlessEqual(i2.active, False)
- self.checkInterlocks(botmaster, {})
+ # reloading the same config should not change the Builder
+ master.loadConfig(lockCfg1a)
+ self.failUnlessIdentical(b1, master.botmaster.builders["builder1"])
+ # but changing the set of locks used should change it
+ master.loadConfig(lockCfg1b)
+ self.failIfIdentical(b1, master.botmaster.builders["builder1"])
+ b1 = master.botmaster.builders["builder1"]
+ self.failUnlessEqual(len(b1.locks), 1)
- # now remove it
- master.loadConfig(interlockCfg7)
- self.failUnlessEqual(botmaster.interlocks, {})
- self.checkInterlocks(botmaster, {})
+ # similar test with step-scoped locks
+ master.loadConfig(lockCfg2a)
+ b1 = master.botmaster.builders["builder1"]
+ # reloading the same config should not change the Builder
+ master.loadConfig(lockCfg2a)
+ self.failUnlessIdentical(b1, master.botmaster.builders["builder1"])
+ # but changing the set of locks used should change it
+ master.loadConfig(lockCfg2b)
+ self.failIfIdentical(b1, master.botmaster.builders["builder1"])
+ b1 = master.botmaster.builders["builder1"]
+ # remove the locks entirely
+ master.loadConfig(lockCfg2c)
+ self.failIfIdentical(b1, master.botmaster.builders["builder1"])
class ConfigFileTest(unittest.TestCase):
@@ -909,6 +755,7 @@
def testFindConfigFile(self):
os.mkdir("test_cf")
open(os.path.join("test_cf", "master.cfg"), "w").write(emptyCfg)
+ slaveportCfg = emptyCfg + "c['slavePortnum'] = 9000\n"
open(os.path.join("test_cf", "alternate.cfg"), "w").write(slaveportCfg)
m = BuildMaster("test_cf")
Index: test_control.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_control.py,v
retrieving revision 1.6
retrieving revision 1.7
diff -u -d -r1.6 -r1.7
--- test_control.py 17 May 2005 10:14:10 -0000 1.6
+++ test_control.py 19 Jul 2005 23:11:58 -0000 1.7
@@ -7,10 +7,12 @@
from twisted.internet import defer, reactor
from buildbot import master, interfaces
-from buildbot.twcompat import providedBy
+from buildbot.sourcestamp import SourceStamp
+from buildbot.twcompat import providedBy, maybeWait
from buildbot.slave import bot
from buildbot.status import builder
from buildbot.status.builder import SUCCESS
+from buildbot.process import base
config = """
from buildbot.process import factory, step
@@ -24,6 +26,7 @@
c = {}
c['bots'] = [['bot1', 'sekrit']]
c['sources'] = []
+c['schedulers'] = []
c['builders'] = [{'name': 'force', 'slavename': 'bot1',
'builddir': 'force-dir', 'factory': f1}]
c['slavePortnum'] = 0
@@ -91,14 +94,17 @@
self.connectSlave()
def tearDown(self):
+ dl = []
if self.slave:
- d = self.master.botmaster.waitUntilBuilderDetached("force")
- dr(defer.maybeDeferred(self.slave.stopService))
- dr(d)
+ dl.append(self.master.botmaster.waitUntilBuilderDetached("force"))
+ dl.append(defer.maybeDeferred(self.slave.stopService))
if self.master:
- dr(defer.maybeDeferred(self.master.stopService))
+ dl.append(defer.maybeDeferred(self.master.stopService))
+ return maybeWait(defer.DeferredList(dl))
def testForce(self):
+ # TODO: since BuilderControl.forceBuild has been deprecated, this
+ # test is scheduled to be removed soon
m = self.master
m.loadConfig(config)
m.readConfig = True
@@ -107,11 +113,17 @@
c = interfaces.IControl(m)
builder_control = c.getBuilder("force")
- build_control = builder_control.forceBuild("bob", "I was bored")
+ d = builder_control.forceBuild("bob", "I was bored")
+ d.addCallback(self._testForce_1)
+ return maybeWait(d)
+
+ def _testForce_1(self, build_control):
self.failUnless(providedBy(build_control, interfaces.IBuildControl))
d = build_control.getStatus().waitUntilFinished()
- bs = dr(d)
+ d.addCallback(self._testForce_2)
+ return d
+ def _testForce_2(self, bs):
self.failUnless(providedBy(bs, interfaces.IBuildStatus))
self.failUnless(bs.isFinished())
self.failUnlessEqual(bs.getResults(), SUCCESS)
@@ -119,20 +131,7 @@
self.failUnlessEqual(bs.getChanges(), [])
#self.failUnlessEqual(bs.getReason(), "forced") # TODO
- def testNoSlave(self):
- m = self.master
- m.loadConfig(config)
- m.readConfig = True
- m.startService()
- # don't connect the slave here
-
- c = interfaces.IControl(m)
- builder_control = c.getBuilder("force")
- self.failUnlessRaises(interfaces.NoSlaveError,
- builder_control.forceBuild,
- "bob", "I was bored")
-
- def testBuilderInUse(self):
+ def testRequest(self):
m = self.master
m.loadConfig(config)
m.readConfig = True
@@ -140,20 +139,9 @@
self.connectSlave()
c = interfaces.IControl(m)
- bc1 = c.getBuilder("force")
- self.failUnless(bc1)
- b = bc1.forceBuild("bob", "running first build")
- # this test depends upon less than one second occurring between the
- # two calls to forceBuild
-
- failed = "did not raise exception"
- try:
- bc1.forceBuild("bob", "finger twitched")
- except interfaces.BuilderInUseError:
- failed = None
- except Exception, e:
- failed = "raised the wrong exception: %s" % e
-
- dr(b.getStatus().waitUntilFinished())
- if failed:
- self.fail(failed)
+ req = base.BuildRequest("I was bored", SourceStamp())
+ builder_control = c.getBuilder("force")
+ d = req.waitUntilStarted()
+ builder_control.requestBuild(req)
+ d.addCallback(self._testForce_1)
+ return maybeWait(d)
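For context, the request-based flow that replaces `BuilderControl.forceBuild` above can be sketched with a small self-contained model. This is purely illustrative: it does not import buildbot, and the class and method names merely mirror those used in the test (`BuildRequest`, a builder control accepting `requestBuild`-style calls), not the real implementation.

```python
# Toy model of the request-based build flow: a caller constructs a request,
# registers interest in its lifecycle, then hands it to a builder control.
# Names are stand-ins mirroring the test above, not buildbot's classes.

class BuildRequest:
    """A queued request that callers can wait on."""
    def __init__(self, reason):
        self.reason = reason
        self._started = []   # callbacks fired when the build starts
        self._finished = []  # callbacks fired when the build completes

    def wait_until_started(self, cb):
        self._started.append(cb)

    def wait_until_finished(self, cb):
        self._finished.append(cb)

class BuilderControl:
    """Accepts requests instead of forcing builds directly."""
    def __init__(self, name):
        self.name = name
        self.queue = []

    def request_build(self, req):
        # queue the request; a real builder would dispatch it to a slave
        # asynchronously, firing these notifications as the build progresses
        self.queue.append(req)
        for cb in req._started:
            cb(req)
        for cb in req._finished:
            cb(req)

events = []
bc = BuilderControl("force")
req = BuildRequest("I was bored")
req.wait_until_started(lambda r: events.append("started"))
req.wait_until_finished(lambda r: events.append("finished"))
bc.request_build(req)
print(events)  # -> ['started', 'finished']
```

The point of the shape change tested above: forcing a build synchronously raised `NoSlaveError`/`BuilderInUseError`, whereas a request can simply sit in the queue until a slave is available.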
--- NEW FILE: test_dependencies.py ---
# -*- test-case-name: buildbot.test.test_dependencies -*-
from twisted.trial import unittest
from twisted.internet import reactor, defer
from buildbot import interfaces
from buildbot.process import step
from buildbot.sourcestamp import SourceStamp
from buildbot.process.base import BuildRequest
from buildbot.test.runutils import RunMixin
from buildbot.twcompat import maybeWait
config_1 = """
from buildbot import scheduler
from buildbot.process import step, factory
s = factory.s
from buildbot.test.test_locks import LockStep
BuildmasterConfig = c = {}
c['bots'] = [('bot1', 'sekrit'), ('bot2', 'sekrit')]
c['sources'] = []
c['schedulers'] = []
c['slavePortnum'] = 0
s1 = scheduler.Scheduler('upstream1', None, 10, ['slowpass', 'fastfail'])
s2 = scheduler.Dependent('downstream2', s1, ['b3', 'b4'])
s3 = scheduler.Scheduler('upstream3', None, 10, ['fastpass', 'slowpass'])
s4 = scheduler.Dependent('downstream4', s3, ['b3', 'b4'])
s5 = scheduler.Dependent('downstream5', s4, ['b5'])
c['schedulers'] = [s1, s2, s3, s4, s5]
f_fastpass = factory.BuildFactory([s(step.Dummy, timeout=1)])
f_slowpass = factory.BuildFactory([s(step.Dummy, timeout=2)])
f_fastfail = factory.BuildFactory([s(step.FailingDummy, timeout=1)])
def builder(name, f):
d = {'name': name, 'slavename': 'bot1', 'builddir': name, 'factory': f}
return d
c['builders'] = [builder('slowpass', f_slowpass),
builder('fastfail', f_fastfail),
builder('fastpass', f_fastpass),
builder('b3', f_fastpass),
builder('b4', f_fastpass),
builder('b5', f_fastpass),
]
"""
class Dependencies(RunMixin, unittest.TestCase):
def setUp(self):
RunMixin.setUp(self)
self.master.loadConfig(config_1)
self.master.startService()
d = self.connectSlave(["slowpass", "fastfail", "fastpass",
"b3", "b4", "b5"])
return maybeWait(d)
def findScheduler(self, name):
for s in self.master.schedulers:
if s.name == name:
return s
raise KeyError("No Scheduler named '%s'" % name)
def testParse(self):
self.master.loadConfig(config_1)
# that's it, just make sure this config file is loaded successfully
def testRun_Fail(self):
# kick off upstream1, which has a failing Builder and thus will not
# trigger the Dependent scheduler 'downstream2'
s = self.findScheduler("upstream1")
# this is an internal function of the Scheduler class
s.fireTimer() # fires a build
# t=0: two builders start: 'slowpass' and 'fastfail'
# t=1: builder 'fastfail' finishes
# t=2: builder 'slowpass' finishes
d = defer.Deferred()
d.addCallback(self._testRun_Fail_1)
reactor.callLater(3, d.callback, None)
return maybeWait(d)
def _testRun_Fail_1(self, res):
# 'slowpass' and 'fastfail' should have run one build each
b = self.status.getBuilder('slowpass').getLastFinishedBuild()
self.failUnless(b)
self.failUnlessEqual(b.getNumber(), 0)
b = self.status.getBuilder('fastfail').getLastFinishedBuild()
self.failUnless(b)
self.failUnlessEqual(b.getNumber(), 0)
# none of the other builders should have run
self.failIf(self.status.getBuilder('b3').getLastFinishedBuild())
self.failIf(self.status.getBuilder('b4').getLastFinishedBuild())
self.failIf(self.status.getBuilder('b5').getLastFinishedBuild())
def testRun_Pass(self):
# kick off upstream3, which will fire downstream4 and then
# downstream5
s = self.findScheduler("upstream3")
# this is an internal function of the Scheduler class
s.fireTimer() # fires a build
# t=0: slowpass and fastpass start
# t=1: builder 'fastpass' finishes
# t=2: builder 'slowpass' finishes
# scheduler 'downstream4' fires
# builds b3 and b4 are started
# t=3: builds b3 and b4 finish
# scheduler 'downstream5' fires
# build b5 is started
# t=4: build b5 is finished
d = defer.Deferred()
d.addCallback(self._testRun_Pass_1)
reactor.callLater(5, d.callback, None)
return maybeWait(d)
def _testRun_Pass_1(self, res):
# 'fastpass' and 'slowpass' should have run one build each
b = self.status.getBuilder('fastpass').getLastFinishedBuild()
self.failUnless(b)
self.failUnlessEqual(b.getNumber(), 0)
b = self.status.getBuilder('slowpass').getLastFinishedBuild()
self.failUnless(b)
self.failUnlessEqual(b.getNumber(), 0)
self.failIf(self.status.getBuilder('fastfail').getLastFinishedBuild())
b = self.status.getBuilder('b3').getLastFinishedBuild()
self.failUnless(b)
self.failUnlessEqual(b.getNumber(), 0)
b = self.status.getBuilder('b4').getLastFinishedBuild()
self.failUnless(b)
self.failUnlessEqual(b.getNumber(), 0)
b = self.status.getBuilder('b5').getLastFinishedBuild()
self.failUnless(b)
self.failUnlessEqual(b.getNumber(), 0)
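The upstream/Dependent relationship exercised by this test can be summarized with a simplified model. This is a sketch only: `Scheduler` and `Dependent` here are minimal stand-ins for the classes configured in `config_1`, reduced to the single rule the test relies on.

```python
# Minimal model of Dependent scheduling: a Dependent fires its builders
# only when every builder of its upstream Scheduler succeeded.
# Illustrative stand-ins, not buildbot's scheduler classes.

class Scheduler:
    def __init__(self, name, builder_results):
        self.name = name
        # builder name -> whether its build would succeed
        self.builder_results = builder_results
        self.dependents = []

    def fire(self):
        ran = list(self.builder_results)
        # downstream builders run only if all upstream builds passed
        if all(self.builder_results.values()):
            for dep in self.dependents:
                ran.extend(dep.fire())
        return ran

class Dependent(Scheduler):
    def __init__(self, name, upstream, builder_results):
        Scheduler.__init__(self, name, builder_results)
        upstream.dependents.append(self)

# mirrors testRun_Fail: fastfail blocks the downstream builders
s1 = Scheduler("upstream1", {"slowpass": True, "fastfail": False})
Dependent("downstream2", s1, {"b3": True, "b4": True})
print(s1.fire())  # -> ['slowpass', 'fastfail']
```

In the passing case (`upstream3` in the test), every upstream build succeeds, so the chain of Dependents fires all the way down to `b5`.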
--- test_interlock.py DELETED ---
--- NEW FILE: test_locks.py ---
# -*- test-case-name: buildbot.test.test_locks -*-
from twisted.trial import unittest
from twisted.internet import defer
from buildbot import interfaces
from buildbot.process import step
from buildbot.sourcestamp import SourceStamp
from buildbot.process.base import BuildRequest
from buildbot.test.runutils import RunMixin
from buildbot.twcompat import maybeWait
class LockStep(step.Dummy):
def start(self):
number = self.build.requests[0].number
self.build.requests[0].events.append(("start", number))
step.Dummy.start(self)
def done(self):
number = self.build.requests[0].number
self.build.requests[0].events.append(("done", number))
step.Dummy.done(self)
config_1 = """
from buildbot import locks
from buildbot.process import step, factory
s = factory.s
from buildbot.test.test_locks import LockStep
BuildmasterConfig = c = {}
c['bots'] = [('bot1', 'sekrit'), ('bot2', 'sekrit')]
c['sources'] = []
c['schedulers'] = []
c['slavePortnum'] = 0
first_lock = locks.SlaveLock('first')
second_lock = locks.MasterLock('second')
f1 = factory.BuildFactory([s(LockStep, timeout=2, locks=[first_lock])])
f2 = factory.BuildFactory([s(LockStep, timeout=3, locks=[second_lock])])
f3 = factory.BuildFactory([s(LockStep, timeout=2, locks=[])])
b1a = {'name': 'full1a', 'slavename': 'bot1', 'builddir': '1a', 'factory': f1}
b1b = {'name': 'full1b', 'slavename': 'bot1', 'builddir': '1b', 'factory': f1}
b1c = {'name': 'full1c', 'slavename': 'bot1', 'builddir': '1c', 'factory': f3,
'locks': [first_lock, second_lock]}
b1d = {'name': 'full1d', 'slavename': 'bot1', 'builddir': '1d', 'factory': f2}
b2a = {'name': 'full2a', 'slavename': 'bot2', 'builddir': '2a', 'factory': f1}
b2b = {'name': 'full2b', 'slavename': 'bot2', 'builddir': '2b', 'factory': f3,
'locks': [second_lock]}
c['builders'] = [b1a, b1b, b1c, b1d, b2a, b2b]
"""
class Locks(RunMixin, unittest.TestCase):
def setUp(self):
RunMixin.setUp(self)
self.req1 = req1 = BuildRequest("forced build", SourceStamp())
req1.number = 1
self.req2 = req2 = BuildRequest("forced build", SourceStamp())
req2.number = 2
self.req3 = req3 = BuildRequest("forced build", SourceStamp())
req3.number = 3
req1.events = req2.events = req3.events = self.events = []
d = self.master.loadConfig(config_1)
d.addCallback(lambda res: self.master.startService())
d.addCallback(lambda res: self.connectSlaves(["full1a", "full1b",
"full1c", "full1d",
"full2a", "full2b"]))
return maybeWait(d)
def testLock1(self):
self.control.getBuilder("full1a").requestBuild(self.req1)
self.control.getBuilder("full1b").requestBuild(self.req2)
d = defer.DeferredList([self.req1.waitUntilFinished(),
self.req2.waitUntilFinished()])
d.addCallback(self._testLock1_1)
return d
def _testLock1_1(self, res):
# full1a should complete its step before full1b starts it
self.failUnlessEqual(self.events,
[("start", 1), ("done", 1),
("start", 2), ("done", 2)])
def testLock2(self):
# two builds run on separate slaves with slave-scoped locks should
# not interfere
self.control.getBuilder("full1a").requestBuild(self.req1)
self.control.getBuilder("full2a").requestBuild(self.req2)
d = defer.DeferredList([self.req1.waitUntilFinished(),
self.req2.waitUntilFinished()])
d.addCallback(self._testLock2_1)
return d
def _testLock2_1(self, res):
# full2a should start its step before full1a finishes it. They run on
# different slaves, however, so they might start in either order.
self.failUnless(self.events[:2] == [("start", 1), ("start", 2)] or
self.events[:2] == [("start", 2), ("start", 1)])
def testLock3(self):
# two builds run on separate slaves with master-scoped locks should
# not overlap
self.control.getBuilder("full1c").requestBuild(self.req1)
self.control.getBuilder("full2b").requestBuild(self.req2)
d = defer.DeferredList([self.req1.waitUntilFinished(),
self.req2.waitUntilFinished()])
d.addCallback(self._testLock3_1)
return d
def _testLock3_1(self, res):
# full2b should not start until after full1c finishes. The builds run
# on different slaves, so we can't really predict which will start
# first. The important thing is that they don't overlap.
self.failUnless(self.events == [("start", 1), ("done", 1),
("start", 2), ("done", 2)]
or self.events == [("start", 2), ("done", 2),
("start", 1), ("done", 1)]
)
def testLock4(self):
self.control.getBuilder("full1a").requestBuild(self.req1)
self.control.getBuilder("full1c").requestBuild(self.req2)
self.control.getBuilder("full1d").requestBuild(self.req3)
d = defer.DeferredList([self.req1.waitUntilFinished(),
self.req2.waitUntilFinished(),
self.req3.waitUntilFinished()])
d.addCallback(self._testLock4_1)
return d
def _testLock4_1(self, res):
# full1a starts, then full1d starts (because they do not interfere).
# Once both are done, full1c can run.
self.failUnlessEqual(self.events,
[("start", 1), ("start", 3),
("done", 1), ("done", 3),
("start", 2), ("done", 2)])
Index: test_run.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_run.py,v
retrieving revision 1.32
retrieving revision 1.33
diff -u -d -r1.32 -r1.33
--- test_run.py 17 May 2005 10:14:10 -0000 1.32
+++ test_run.py 19 Jul 2005 23:11:58 -0000 1.33
@@ -1,154 +1,77 @@
# -*- test-case-name: buildbot.test.test_run -*-
from twisted.trial import unittest
-dr = unittest.deferredResult
from twisted.internet import reactor, defer
from twisted.python import log
import sys, os, os.path, shutil, time, errno
#log.startLogging(sys.stderr)
from buildbot import master, interfaces
+from buildbot.sourcestamp import SourceStamp
from buildbot.util import now
from buildbot.slave import bot
from buildbot.changes import changes
from buildbot.status import base, builder
+from buildbot.process.base import BuildRequest
+from buildbot.twcompat import maybeWait
-def maybeWait(d, timeout="none"):
- # this is required for oldtrial (twisted-1.3.0) compatibility. When we
- # move to retrial (twisted-2.0.0), replace these with a simple 'return
- # d'.
- if timeout == "none":
- unittest.deferredResult(d)
- else:
- unittest.deferredResult(d, timeout)
- return None
-
-config_1 = """
-from buildbot.process import factory
-
-c = {}
-c['bots'] = [['bot1', 'sekrit']]
-c['sources'] = []
-c['builders'] = []
-f1 = factory.QuickBuildFactory('fakerep', 'cvsmodule', configure=None)
-c['builders'].append({'name':'quick', 'slavename':'bot1',
- 'builddir': 'quickdir', 'factory': f1})
-c['slavePortnum'] = 0
-BuildmasterConfig = c
-"""
+from buildbot.test.runutils import RunMixin
-config_2 = """
+config_base = """
from buildbot.process import factory, step
+s = factory.s
-def s(klass, **kwargs):
- return (klass, kwargs)
+f1 = factory.QuickBuildFactory('fakerep', 'cvsmodule', configure=None)
-f1 = factory.BuildFactory([
+f2 = factory.BuildFactory([
s(step.Dummy, timeout=1),
s(step.RemoteDummy, timeout=2),
])
-c = {}
-c['bots'] = [['bot1', 'sekrit']]
-c['sources'] = []
-c['builders'] = [{'name': 'dummy', 'slavename': 'bot1',
- 'builddir': 'dummy1', 'factory': f1},
- {'name': 'testdummy', 'slavename': 'bot1',
- 'builddir': 'dummy2', 'factory': f1, 'category': 'test'}]
-c['slavePortnum'] = 0
-BuildmasterConfig = c
-"""
-
-config_3 = """
-from buildbot.process import factory, step
-def s(klass, **kwargs):
- return (klass, kwargs)
-
-f1 = factory.BuildFactory([
- s(step.Dummy, timeout=1),
- s(step.RemoteDummy, timeout=2),
- ])
-c = {}
+BuildmasterConfig = c = {}
c['bots'] = [['bot1', 'sekrit']]
c['sources'] = []
-c['builders'] = [
- {'name': 'dummy', 'slavename': 'bot1',
- 'builddir': 'dummy1', 'factory': f1},
- {'name': 'testdummy', 'slavename': 'bot1',
- 'builddir': 'dummy2', 'factory': f1, 'category': 'test'},
- {'name': 'adummy', 'slavename': 'bot1',
- 'builddir': 'adummy3', 'factory': f1},
- {'name': 'bdummy', 'slavename': 'bot1',
- 'builddir': 'adummy4', 'factory': f1, 'category': 'test'},
-]
+c['schedulers'] = []
+c['builders'] = []
+c['builders'].append({'name':'quick', 'slavename':'bot1',
+ 'builddir': 'quickdir', 'factory': f1})
c['slavePortnum'] = 0
-BuildmasterConfig = c
"""
-config_4 = """
-from buildbot.process import factory, step
-
-def s(klass, **kwargs):
- return (klass, kwargs)
+config_run = config_base + """
+from buildbot.scheduler import Scheduler
+c['schedulers'] = [Scheduler('quick', None, 120, ['quick'])]
+"""
-f1 = factory.BuildFactory([
- s(step.Dummy, timeout=1),
- s(step.RemoteDummy, timeout=2),
- ])
-c = {}
-c['bots'] = [['bot1', 'sekrit']]
-c['sources'] = []
+config_2 = config_base + """
c['builders'] = [{'name': 'dummy', 'slavename': 'bot1',
- 'builddir': 'dummy', 'factory': f1}]
-c['slavePortnum'] = 0
-BuildmasterConfig = c
+ 'builddir': 'dummy1', 'factory': f2},
+ {'name': 'testdummy', 'slavename': 'bot1',
+ 'builddir': 'dummy2', 'factory': f2, 'category': 'test'}]
"""
-config_4_newbasedir = """
-from buildbot.process import factory, step
-
-def s(klass, **kwargs):
- return (klass, kwargs)
+config_3 = config_2 + """
+c['builders'].append({'name': 'adummy', 'slavename': 'bot1',
+ 'builddir': 'adummy3', 'factory': f2})
+c['builders'].append({'name': 'bdummy', 'slavename': 'bot1',
+ 'builddir': 'adummy4', 'factory': f2,
+ 'category': 'test'})
+"""
-f1 = factory.BuildFactory([
- s(step.Dummy, timeout=1),
- s(step.RemoteDummy, timeout=2),
- ])
-c = {}
-c['bots'] = [['bot1', 'sekrit']]
-c['sources'] = []
+config_4 = config_base + """
c['builders'] = [{'name': 'dummy', 'slavename': 'bot1',
- 'builddir': 'dummy2', 'factory': f1}]
-c['slavePortnum'] = 0
-BuildmasterConfig = c
+ 'builddir': 'dummy', 'factory': f2}]
"""
-config_4_newbuilder = """
-from buildbot.process import factory, step
-
-def s(klass, **kwargs):
- return (klass, kwargs)
-
-f1 = factory.BuildFactory([
- s(step.Dummy, timeout=1),
- s(step.RemoteDummy, timeout=2),
- ])
-c = {}
-c['bots'] = [['bot1', 'sekrit']]
-c['sources'] = []
+config_4_newbasedir = config_4 + """
c['builders'] = [{'name': 'dummy', 'slavename': 'bot1',
- 'builddir': 'dummy2', 'factory': f1},
- {'name': 'dummy2', 'slavename': 'bot1',
- 'builddir': 'dummy23', 'factory': f1},]
-c['slavePortnum'] = 0
-BuildmasterConfig = c
+ 'builddir': 'dummy2', 'factory': f2}]
"""
-class MyBot(bot.Bot):
- def remote_getSlaveInfo(self):
- return self.parent.info
-class MyBuildSlave(bot.BuildSlave):
- botClass = MyBot
+config_4_newbuilder = config_4_newbasedir + """
+c['builders'].append({'name': 'dummy2', 'slavename': 'bot1',
+ 'builddir': 'dummy23', 'factory': f2})
+"""
class STarget(base.StatusReceiver):
debug = False
@@ -165,8 +88,8 @@
self.announce()
if "builder" in self.mode:
return self
- def builderChangedState(self, name, state, eta):
- self.events.append(("builderChangedState", name, state, eta))
+ def builderChangedState(self, name, state):
+ self.events.append(("builderChangedState", name, state))
self.announce()
def buildStarted(self, name, build):
self.events.append(("buildStarted", name, build))
@@ -221,155 +144,48 @@
self.rmtree("basedir")
os.mkdir("basedir")
m = master.BuildMaster("basedir")
- m.loadConfig(config_1)
+ m.loadConfig(config_run)
m.readConfig = True
m.startService()
cm = m.change_svc
c = changes.Change("bob", ["Makefile", "foo/bar.c"], "changed stuff")
cm.addChange(c)
- b1 = m.botmaster.builders["quick"]
- self.failUnless(b1.waiting)
- # now kill the timer
- b1.waiting.stopTimer()
+ # verify that the Scheduler is now waiting
+ s = m.schedulers[0]
+ self.failUnless(s.timer)
+ # halting the service will also stop the timer
d = defer.maybeDeferred(m.stopService)
- maybeWait(d)
-
-class RunMixin:
- master = None
- slave = None
- slave2 = None
-
- def rmtree(self, d):
- try:
- shutil.rmtree(d, ignore_errors=1)
- except OSError, e:
- # stupid 2.2 appears to ignore ignore_errors
- if e.errno != errno.ENOENT:
- raise
-
- def setUp(self):
- self.rmtree("basedir")
- self.rmtree("slavebase")
- self.rmtree("slavebase2")
- os.mkdir("basedir")
- self.master = master.BuildMaster("basedir")
-
- def connectSlave(self, builders=["dummy"]):
- port = self.master.slavePort._port.getHost().port
- os.mkdir("slavebase")
- slave = MyBuildSlave("localhost", port, "bot1", "sekrit",
- "slavebase", keepalive=0, usePTY=1)
- slave.info = {"admin": "one"}
- self.slave = slave
- slave.startService()
- dl = []
- # initiate call for all of them, before waiting on result,
- # otherwise we might miss some
- for b in builders:
- dl.append(self.master.botmaster.waitUntilBuilderAttached(b))
- d = defer.DeferredList(dl)
- dr(d)
-
- def connectSlave2(self):
- port = self.master.slavePort._port.getHost().port
- os.mkdir("slavebase2")
- slave = MyBuildSlave("localhost", port, "bot1", "sekrit",
- "slavebase2", keepalive=0, usePTY=1)
- slave.info = {"admin": "two"}
- self.slave2 = slave
- slave.startService()
-
- def connectSlave3(self):
- # this slave has a very fast keepalive timeout
- port = self.master.slavePort._port.getHost().port
- os.mkdir("slavebase")
- slave = MyBuildSlave("localhost", port, "bot1", "sekrit",
- "slavebase", keepalive=2, usePTY=1,
- keepaliveTimeout=1)
- slave.info = {"admin": "one"}
- self.slave = slave
- slave.startService()
- d = self.master.botmaster.waitUntilBuilderAttached("dummy")
- dr(d)
-
- def tearDown(self):
- log.msg("doing tearDown")
- d = self.shutdownSlave()
- d.addCallback(self._tearDown_1)
- d.addCallback(self._tearDown_2)
return maybeWait(d)
- def _tearDown_1(self, res):
- if self.master:
- return defer.maybeDeferred(self.master.stopService)
- def _tearDown_2(self, res):
- self.master = None
- log.msg("tearDown done")
- # various forms of slave death
+class Ping(RunMixin, unittest.TestCase):
+ def testPing(self):
+ self.master.loadConfig(config_2)
+ self.master.readConfig = True
+ self.master.startService()
- def shutdownSlave(self):
- # the slave has disconnected normally: they SIGINT'ed it, or it shut
- # down willingly. This will kill child processes and give them a
- # chance to finish up. We return a Deferred that will fire when
- # everything is finished shutting down.
+ d = self.connectSlave()
+ d.addCallback(self._testPing_1)
+ return maybeWait(d)
- log.msg("doing shutdownSlave")
- dl = []
- if self.slave:
- dl.append(self.slave.waitUntilDisconnected())
- dl.append(defer.maybeDeferred(self.slave.stopService))
- if self.slave2:
- dl.append(self.slave2.waitUntilDisconnected())
- dl.append(defer.maybeDeferred(self.slave2.stopService))
- d = defer.DeferredList(dl)
- d.addCallback(self._shutdownSlaveDone)
+ def _testPing_1(self, res):
+ d = interfaces.IControl(self.master).getBuilder("dummy").ping(1)
+ d.addCallback(self._testPing_2)
return d
- def _shutdownSlaveDone(self, res):
- self.slave = None
- self.slave2 = None
- return self.master.botmaster.waitUntilBuilderDetached("dummy")
-
- def killSlave(self):
- # the slave has died, its host sent a FIN. The .notifyOnDisconnect
- # callbacks will terminate the current step, so the build should be
- # flunked (no further steps should be started).
- self.slave.bf.continueTrying = 0
- bot = self.slave.getServiceNamed("bot")
- broker = bot.builders["dummy"].remote.broker
- broker.transport.loseConnection()
- self.slave = None
-
- def disappearSlave(self):
- # the slave's host has vanished off the net, leaving the connection
- # dangling. This will be detected quickly by app-level keepalives or
- # a ping, or slowly by TCP timeouts.
- # implement this by replacing the slave Broker's .dataReceived method
- # with one that just throws away all data.
- def discard(data):
- pass
- bot = self.slave.getServiceNamed("bot")
- broker = bot.builders["dummy"].remote.broker
- broker.dataReceived = discard # seal its ears
- broker.transport.write = discard # and take away its voice
-
- def ghostSlave(self):
- # the slave thinks it has lost the connection, and initiated a
- # reconnect. The master doesn't yet realize it has lost the previous
- # connection, and sees two connections at once.
- raise NotImplementedError
+ def _testPing_2(self, res):
+ pass
class Status(RunMixin, unittest.TestCase):
def testSlave(self):
m = self.master
s = m.getStatus()
- t1 = STarget(["builder"])
+ self.t1 = t1 = STarget(["builder"])
#t1.debug = True; print
s.subscribe(t1)
self.failUnlessEqual(len(t1.events), 0)
- t3 = STarget(["builder", "build", "step"])
+ self.t3 = t3 = STarget(["builder", "build", "step"])
s.subscribe(t3)
m.loadConfig(config_2)
@@ -379,20 +195,18 @@
self.failUnlessEqual(len(t1.events), 4)
self.failUnlessEqual(t1.events[0][0:2], ("builderAdded", "dummy"))
self.failUnlessEqual(t1.events[1],
- ("builderChangedState", "dummy", "offline",
- None))
+ ("builderChangedState", "dummy", "offline"))
self.failUnlessEqual(t1.events[2][0:2], ("builderAdded", "testdummy"))
self.failUnlessEqual(t1.events[3],
- ("builderChangedState", "testdummy", "offline",
- None))
+ ("builderChangedState", "testdummy", "offline"))
t1.events = []
self.failUnlessEqual(s.getBuilderNames(), ["dummy", "testdummy"])
self.failUnlessEqual(s.getBuilderNames(categories=['test']),
["testdummy"])
- s1 = s.getBuilder("dummy")
+ self.s1 = s1 = s.getBuilder("dummy")
self.failUnlessEqual(s1.getName(), "dummy")
- self.failUnlessEqual(s1.getState(), ("offline", None, None))
+ self.failUnlessEqual(s1.getState(), ("offline", None))
self.failUnlessEqual(s1.getCurrentBuild(), None)
self.failUnlessEqual(s1.getLastFinishedBuild(), None)
self.failUnlessEqual(s1.getBuild(-1), None)
@@ -400,40 +214,46 @@
# status targets should, upon being subscribed, immediately get a
# list of all current builders matching their category
- t2 = STarget([])
+ self.t2 = t2 = STarget([])
s.subscribe(t2)
self.failUnlessEqual(len(t2.events), 2)
self.failUnlessEqual(t2.events[0][0:2], ("builderAdded", "dummy"))
self.failUnlessEqual(t2.events[1][0:2], ("builderAdded", "testdummy"))
- self.connectSlave(builders=["dummy", "testdummy"])
+ d = self.connectSlave(builders=["dummy", "testdummy"])
+ d.addCallback(self._testSlave_1, t1)
+ return maybeWait(d)
+ def _testSlave_1(self, res, t1):
self.failUnlessEqual(len(t1.events), 2)
self.failUnlessEqual(t1.events[0],
- ("builderChangedState", "dummy", "idle", None))
+ ("builderChangedState", "dummy", "idle"))
self.failUnlessEqual(t1.events[1],
- ("builderChangedState", "testdummy", "idle",
- None))
+ ("builderChangedState", "testdummy", "idle"))
t1.events = []
- c = interfaces.IControl(m)
- bc = c.getBuilder("dummy").forceBuild(None,
- "forced build for testing")
- d = bc.getStatus().waitUntilFinished()
- res = dr(d)
+ c = interfaces.IControl(self.master)
+ req = BuildRequest("forced build for testing", SourceStamp())
+ c.getBuilder("dummy").requestBuild(req)
+ d = req.waitUntilFinished()
+ d2 = self.master.botmaster.waitUntilBuilderIdle("dummy")
+ dl = defer.DeferredList([d, d2])
+ dl.addCallback(self._testSlave_2)
+ return dl
+ def _testSlave_2(self, res):
# t1 subscribes to builds, but not anything lower-level
- ev = t1.events
+ ev = self.t1.events
self.failUnlessEqual(len(ev), 4)
self.failUnlessEqual(ev[0][0:3],
("builderChangedState", "dummy", "building"))
self.failUnlessEqual(ev[1][0], "buildStarted")
self.failUnlessEqual(ev[2][0:2]+ev[2][3:4],
("buildFinished", "dummy", builder.SUCCESS))
- self.failUnlessEqual(ev[3],
- ("builderChangedState", "dummy", "idle", None))
+ self.failUnlessEqual(ev[3][0:3],
+ ("builderChangedState", "dummy", "idle"))
- self.failUnlessEqual([ev[0] for ev in t3.events],
+ self.failUnlessEqual([ev[0] for ev in self.t3.events],
["builderAdded",
"builderChangedState", # offline
"builderAdded",
@@ -449,11 +269,11 @@
"builderChangedState", # idle
])
- b = s1.getLastFinishedBuild()
+ b = self.s1.getLastFinishedBuild()
self.failUnless(b)
self.failUnlessEqual(b.getBuilder().getName(), "dummy")
self.failUnlessEqual(b.getNumber(), 0)
- self.failUnlessEqual(b.getSourceStamp(), (None, None))
+ self.failUnlessEqual(b.getSourceStamp(), (None, None, None))
self.failUnlessEqual(b.getReason(), "forced build for testing")
self.failUnlessEqual(b.getChanges(), [])
self.failUnlessEqual(b.getResponsibleUsers(), [])
@@ -490,16 +310,23 @@
self.failUnlessEqual(logs[0].getName(), "log")
self.failUnlessEqual(logs[0].getText(), "data")
+ self.eta = eta
# now we run it a second time, and we should have an ETA
- t4 = STarget(["builder", "build", "eta"])
- s.subscribe(t4)
- c = interfaces.IControl(m)
- bc = c.getBuilder("dummy").forceBuild(None,
- "forced build for testing")
- d = bc.getStatus().waitUntilFinished()
- res = dr(d)
+ self.t4 = t4 = STarget(["builder", "build", "eta"])
+ self.master.getStatus().subscribe(t4)
+ c = interfaces.IControl(self.master)
+ req = BuildRequest("forced build for testing", SourceStamp())
+ c.getBuilder("dummy").requestBuild(req)
+ d = req.waitUntilFinished()
+ d2 = self.master.botmaster.waitUntilBuilderIdle("dummy")
+ dl = defer.DeferredList([d, d2])
+ dl.addCallback(self._testSlave_3)
+ return dl
+ def _testSlave_3(self, res):
+ t4 = self.t4
+ eta = self.eta
self.failUnless(eta-1 < t4.eta_build < eta+1, # should be 3 seconds
"t4.eta_build was %g, not in (%g,%g)"
% (t4.eta_build, eta-1, eta+1))
@@ -521,37 +348,33 @@
class Disconnect(RunMixin, unittest.TestCase):
- def disconnectSetupMaster(self):
+ def setUp(self):
+ RunMixin.setUp(self)
+
# verify that disconnecting the slave during a build properly
# terminates the build
m = self.master
- s = m.getStatus()
- c = interfaces.IControl(m)
+ s = self.status
+ c = self.control
m.loadConfig(config_2)
m.readConfig = True
m.startService()
self.failUnlessEqual(s.getBuilderNames(), ["dummy", "testdummy"])
- s1 = s.getBuilder("dummy")
+ self.s1 = s1 = s.getBuilder("dummy")
self.failUnlessEqual(s1.getName(), "dummy")
- self.failUnlessEqual(s1.getState(), ("offline", None, None))
+ self.failUnlessEqual(s1.getState(), ("offline", None))
self.failUnlessEqual(s1.getCurrentBuild(), None)
self.failUnlessEqual(s1.getLastFinishedBuild(), None)
self.failUnlessEqual(s1.getBuild(-1), None)
- return m,s,c,s1
- def disconnectSetup(self):
- m,s,c,s1 = self.disconnectSetupMaster()
- self.connectSlave()
- self.failUnlessEqual(s1.getState(), ("idle", None, None))
- return m,s,c,s1
+ d = self.connectSlave()
+ d.addCallback(self._disconnectSetup_1)
+ return maybeWait(d)
- def disconnectSetup2(self):
- m,s,c,s1 = self.disconnectSetupMaster()
- self.connectSlave3()
- self.failUnlessEqual(s1.getState(), ("idle", None, None))
- return m,s,c,s1
+ def _disconnectSetup_1(self, res):
+ self.failUnlessEqual(self.s1.getState(), ("idle", None))
def verifyDisconnect(self, bs):
@@ -575,186 +398,187 @@
def testIdle1(self):
- m,s,c,s1 = self.disconnectSetup()
# disconnect the slave before the build starts
d = self.shutdownSlave() # dies before it gets started
- d.addCallback(self._testIdle1_1, (m,s,c,s1))
+ d.addCallback(self._testIdle1_1)
return d
- def _testIdle1_1(self, res, (m,s,c,s1)):
+ def _testIdle1_1(self, res):
# trying to force a build now will cause an error. Regular builds
# just wait for the slave to re-appear, but forced builds that
# cannot be run right away trigger NoSlaveErrors
- fb = c.getBuilder("dummy").forceBuild
+ fb = self.control.getBuilder("dummy").forceBuild
self.failUnlessRaises(interfaces.NoSlaveError,
fb, None, "forced build")
def testIdle2(self):
- # this used to be a testIdle2.skip="msg", but that caused a
- # UserWarning when used with Twisted-1.3, which I think was an
- # indication of an internal Trial problem
- raise unittest.SkipTest("SF#1083403 pre-ping not yet implemented")
- m,s,c,s1 = self.disconnectSetup()
# now suppose the slave goes missing
+ self.slave.bf.continueTrying = 0
self.disappearSlave()
- # forcing a build will work: the build will begin, since we think we
- # have a slave. The build will fail, however, because of a timeout
- # error.
- bc = c.getBuilder("dummy").forceBuild(None, "forced build")
- bs = bc.getStatus()
- print "build started"
- d = bs.waitUntilFinished()
- dr(d, 5)
- print bs.getText()
-
- def testSlaveTimeout(self):
- m,s,c,s1 = self.disconnectSetup2() # fast timeout
-
- # now suppose the slave goes missing. We want to find out when it
- # creates a new Broker, so we reach inside and mark it with the
- # well-known sigil of impending messy death.
- bd = self.slave.getServiceNamed("bot").builders["dummy"]
- broker = bd.remote.broker
- broker.redshirt = 1
-
- # make sure the keepalives will keep the connection up
- later = now() + 5
- while 1:
- if now() > later:
- break
- bd = self.slave.getServiceNamed("bot").builders["dummy"]
- if not bd.remote or not hasattr(bd.remote.broker, "redshirt"):
- self.fail("slave disconnected when it shouldn't have")
- reactor.iterate(0.01)
+ # forcing a build will work: the build will detect that the slave is no
+ # longer available and will be re-queued. Wait 5 seconds, then check
+ # to make sure the build is still in the 'waiting for a slave' queue.
+ self.control.getBuilder("dummy").original.START_BUILD_TIMEOUT = 1
+ req = BuildRequest("forced build", SourceStamp())
+ self.failUnlessEqual(req.startCount, 0)
+ self.control.getBuilder("dummy").requestBuild(req)
+ # this should ping the slave, which doesn't respond, and then give up
+ # after a second. The BuildRequest will be re-queued, and its
+ # .startCount will be incremented.
+ d = defer.Deferred()
+ d.addCallback(self._testIdle2_1, req)
+ reactor.callLater(3, d.callback, None)
+ return maybeWait(d, 5)
+ testIdle2.timeout = 5
- d = self.master.botmaster.waitUntilBuilderDetached("dummy")
- # whoops! how careless of me.
- self.disappearSlave()
+ def _testIdle2_1(self, res, req):
+ self.failUnlessEqual(req.startCount, 1)
+ cancelled = req.cancel()
+ self.failUnless(cancelled)
- # the slave will realize the connection is lost within 2 seconds, and
- # reconnect.
- dr(d, 5)
- d = self.master.botmaster.waitUntilBuilderAttached("dummy")
- dr(d, 5)
- # make sure it is a new connection (i.e. a new Broker)
- bd = self.slave.getServiceNamed("bot").builders["dummy"]
- self.failUnless(bd.remote, "hey, slave isn't really connected")
- self.failIf(hasattr(bd.remote.broker, "redshirt"),
- "hey, slave's Broker is still marked for death")
def testBuild1(self):
- m,s,c,s1 = self.disconnectSetup()
# this next sequence is timing-dependent. The dummy build takes at
# least 3 seconds to complete, and this batch of commands must
# complete within that time.
#
- bc = c.getBuilder("dummy").forceBuild(None, "forced build")
- bs = bc.getStatus()
+ d = self.control.getBuilder("dummy").forceBuild(None, "forced build")
+ d.addCallback(self._testBuild1_1)
+ return maybeWait(d)
+ def _testBuild1_1(self, bc):
+ bs = bc.getStatus()
# now kill the slave before it gets to start the first step
d = self.shutdownSlave() # dies before it gets started
- dr(d, 5)
+ d.addCallback(self._testBuild1_2, bs)
+ return d # TODO: this used to have a 5-second timeout
+ def _testBuild1_2(self, res, bs):
# now examine the just-stopped build and make sure it is really
# stopped. This is checking for bugs in which the slave-detach gets
# missed or causes an exception which prevents the build from being
# marked as "finished due to an error".
d = bs.waitUntilFinished()
- dr(d, 5)
+ d2 = self.master.botmaster.waitUntilBuilderDetached("dummy")
+ dl = defer.DeferredList([d, d2])
+ dl.addCallback(self._testBuild1_3, bs)
+ return dl # TODO: this had a 5-second timeout too
- self.failUnlessEqual(s1.getState()[0], "offline")
+ def _testBuild1_3(self, res, bs):
+ self.failUnlessEqual(self.s1.getState()[0], "offline")
self.verifyDisconnect(bs)
+
def testBuild2(self):
- m,s,c,s1 = self.disconnectSetup()
# this next sequence is timing-dependent
- bc = c.getBuilder("dummy").forceBuild(None, "forced build")
+ d = self.control.getBuilder("dummy").forceBuild(None, "forced build")
+ d.addCallback(self._testBuild2_1)
+ return maybeWait(d, 30)
+ testBuild2.timeout = 30
+
+ def _testBuild2_1(self, bc):
bs = bc.getStatus()
# shutdown the slave while it's running the first step
reactor.callLater(0.5, self.shutdownSlave)
d = bs.waitUntilFinished()
- d.addCallback(self._testBuild2_1, s1, bs)
- return maybeWait(d, 30)
- testBuild2.timeout = 30
+ d.addCallback(self._testBuild2_2, bs)
+ return d
- def _testBuild2_1(self, res, s1, bs):
+ def _testBuild2_2(self, res, bs):
# we hit here when the build has finished. The builder is still being
# torn down, however, so spin for another second to allow the
# callLater(0) in Builder.detached to fire.
d = defer.Deferred()
reactor.callLater(1, d.callback, None)
- d.addCallback(self._testBuild2_2, s1, bs)
+ d.addCallback(self._testBuild2_3, bs)
return d
- def _testBuild2_2(self, res, s1, bs):
- self.failUnlessEqual(s1.getState()[0], "offline")
+ def _testBuild2_3(self, res, bs):
+ self.failUnlessEqual(self.s1.getState()[0], "offline")
self.verifyDisconnect(bs)
def testBuild3(self):
- m,s,c,s1 = self.disconnectSetup()
# this next sequence is timing-dependent
- bc = c.getBuilder("dummy").forceBuild(None, "forced build")
+ d = self.control.getBuilder("dummy").forceBuild(None, "forced build")
+ d.addCallback(self._testBuild3_1)
+ return maybeWait(d, 30)
+ testBuild3.timeout = 30
+
+ def _testBuild3_1(self, bc):
bs = bc.getStatus()
# kill the slave while it's running the first step
reactor.callLater(0.5, self.killSlave)
d = bs.waitUntilFinished()
- d.addCallback(self._testBuild3_1, s1, bs)
- return maybeWait(d, 30)
- testBuild3.timeout = 30
+ d.addCallback(self._testBuild3_2, bs)
+ return d
- def _testBuild3_1(self, res, s1, bs):
+ def _testBuild3_2(self, res, bs):
# the builder is still being torn down, so give it another second
d = defer.Deferred()
reactor.callLater(1, d.callback, None)
- d.addCallback(self._testBuild3_2, s1, bs)
+ d.addCallback(self._testBuild3_3, bs)
return d
- def _testBuild3_2(self, res, s1, bs):
- self.failUnlessEqual(s1.getState()[0], "offline")
+ def _testBuild3_3(self, res, bs):
+ self.failUnlessEqual(self.s1.getState()[0], "offline")
self.verifyDisconnect(bs)
def testBuild4(self):
- m,s,c,s1 = self.disconnectSetup()
# this next sequence is timing-dependent
- bc = c.getBuilder("dummy").forceBuild(None, "forced build")
+ d = self.control.getBuilder("dummy").forceBuild(None, "forced build")
+ d.addCallback(self._testBuild4_1)
+ return maybeWait(d, 30)
+ testBuild4.timeout = 30
+
+ def _testBuild4_1(self, bc):
bs = bc.getStatus()
# kill the slave while it's running the second (remote) step
reactor.callLater(1.5, self.killSlave)
+ d = bs.waitUntilFinished()
+ d.addCallback(self._testBuild4_2, bs)
+ return d
- dr(bs.waitUntilFinished(), 30)
+ def _testBuild4_2(self, res, bs):
# at this point, the slave is in the process of being removed, so it
# could either be 'idle' or 'offline'. I think there is a
# reactor.callLater(0) standing between here and the offline state.
- reactor.iterate() # TODO: remove the need for this
+ #reactor.iterate() # TODO: remove the need for this
- self.failUnlessEqual(s1.getState()[0], "offline")
+ self.failUnlessEqual(self.s1.getState()[0], "offline")
self.verifyDisconnect2(bs)
+
def testInterrupt(self):
- m,s,c,s1 = self.disconnectSetup()
# this next sequence is timing-dependent
- bc = c.getBuilder("dummy").forceBuild(None, "forced build")
+ d = self.control.getBuilder("dummy").forceBuild(None, "forced build")
+ d.addCallback(self._testInterrupt_1)
+ return maybeWait(d, 30)
+ testInterrupt.timeout = 30
+
+ def _testInterrupt_1(self, bc):
bs = bc.getStatus()
# halt the build while it's running the first step
reactor.callLater(0.5, bc.stopBuild, "bang go splat")
+ d = bs.waitUntilFinished()
+ d.addCallback(self._testInterrupt_2, bs)
+ return d
- dr(bs.waitUntilFinished(), 30)
-
+ def _testInterrupt_2(self, res, bs):
self.verifyDisconnect(bs)
+
def testDisappear(self):
- m,s,c,s1 = self.disconnectSetup()
- bc = c.getBuilder("dummy")
+ bc = self.control.getBuilder("dummy")
# ping should succeed
d = bc.ping(1)
- d.addCallback(self._testDisappear_1, (m,s,c,s1,bc))
+ d.addCallback(self._testDisappear_1, bc)
return maybeWait(d)
- def _testDisappear_1(self, res, (m,s,c,s1,bc)):
+ def _testDisappear_1(self, res, bc):
self.failUnlessEqual(res, True)
# now, before any build is run, make the slave disappear
@@ -769,9 +593,8 @@
self.failUnlessEqual(res, False)
def testDuplicate(self):
- m,s,c,s1 = self.disconnectSetup()
- bc = c.getBuilder("dummy")
- bs = s.getBuilder("dummy")
+ bc = self.control.getBuilder("dummy")
+ bs = self.status.getBuilder("dummy")
ss = bs.getSlave()
self.failUnless(ss.isConnected())
@@ -784,13 +607,93 @@
d = self.master.botmaster.waitUntilBuilderDetached("dummy")
# now let the new slave take over
self.connectSlave2()
- dr(d, 2)
+ d.addCallback(self._testDuplicate_1, ss)
+ return maybeWait(d, 2)
+ testDuplicate.timeout = 5
+
+ def _testDuplicate_1(self, res, ss):
d = self.master.botmaster.waitUntilBuilderAttached("dummy")
- dr(d, 2)
+ d.addCallback(self._testDuplicate_2, ss)
+ return d
+ def _testDuplicate_2(self, res, ss):
self.failUnless(ss.isConnected())
self.failUnlessEqual(ss.getAdmin(), "two")
+
+class Disconnect2(RunMixin, unittest.TestCase):
+
+ def setUp(self):
+ RunMixin.setUp(self)
+ # verify that disconnecting the slave during a build properly
+ # terminates the build
+ m = self.master
+ s = self.status
+ c = self.control
+
+ m.loadConfig(config_2)
+ m.readConfig = True
+ m.startService()
+
+ self.failUnlessEqual(s.getBuilderNames(), ["dummy", "testdummy"])
+ self.s1 = s1 = s.getBuilder("dummy")
+ self.failUnlessEqual(s1.getName(), "dummy")
+ self.failUnlessEqual(s1.getState(), ("offline", None))
+ self.failUnlessEqual(s1.getCurrentBuild(), None)
+ self.failUnlessEqual(s1.getLastFinishedBuild(), None)
+ self.failUnlessEqual(s1.getBuild(-1), None)
+
+ d = self.connectSlave3()
+ d.addCallback(self._setup_disconnect2_1)
+ return maybeWait(d)
+
+ def _setup_disconnect2_1(self, res):
+ self.failUnlessEqual(self.s1.getState(), ("idle", None))
+
+
+ def testSlaveTimeout(self):
+ # now suppose the slave goes missing. We want to find out when it
+ # creates a new Broker, so we reach inside and mark it with the
+ # well-known sigil of impending messy death.
+ bd = self.slave.getServiceNamed("bot").builders["dummy"]
+ broker = bd.remote.broker
+ broker.redshirt = 1
+
+ # make sure the keepalives will keep the connection up
+ d = defer.Deferred()
+ reactor.callLater(5, d.callback, None)
+ d.addCallback(self._testSlaveTimeout_1)
+ return maybeWait(d, 20)
+ testSlaveTimeout.timeout = 20
+
+ def _testSlaveTimeout_1(self, res):
+ bd = self.slave.getServiceNamed("bot").builders["dummy"]
+ if not bd.remote or not hasattr(bd.remote.broker, "redshirt"):
+ self.fail("slave disconnected when it shouldn't have")
+
+ d = self.master.botmaster.waitUntilBuilderDetached("dummy")
+ # whoops! how careless of me.
+ self.disappearSlave()
+ # the slave will realize the connection is lost within 2 seconds, and
+ # reconnect.
+ d.addCallback(self._testSlaveTimeout_2)
+ return d
+
+ def _testSlaveTimeout_2(self, res):
+ # the ReconnectingPBClientFactory will attempt a reconnect in two
+ # seconds.
+ d = self.master.botmaster.waitUntilBuilderAttached("dummy")
+ d.addCallback(self._testSlaveTimeout_3)
+ return d
+
+ def _testSlaveTimeout_3(self, res):
+ # make sure it is a new connection (i.e. a new Broker)
+ bd = self.slave.getServiceNamed("bot").builders["dummy"]
+ self.failUnless(bd.remote, "hey, slave isn't really connected")
+ self.failIf(hasattr(bd.remote.broker, "redshirt"),
+ "hey, slave's Broker is still marked for death")
+
+
class Basedir(RunMixin, unittest.TestCase):
def testChangeBuilddir(self):
m = self.master
@@ -798,19 +701,26 @@
m.readConfig = True
m.startService()
- self.connectSlave()
- bot = self.slave.bot
- builder = bot.builders.get("dummy")
+ d = self.connectSlave()
+ d.addCallback(self._testChangeBuilddir_1)
+ return maybeWait(d)
+
+ def _testChangeBuilddir_1(self, res):
+ self.bot = bot = self.slave.bot
+ self.builder = builder = bot.builders.get("dummy")
self.failUnless(builder)
self.failUnlessEqual(builder.builddir, "dummy")
self.failUnlessEqual(builder.basedir,
os.path.join("slavebase", "dummy"))
- d = m.loadConfig(config_4_newbasedir)
- dr(d)
+ d = self.master.loadConfig(config_4_newbasedir)
+ d.addCallback(self._testChangeBuilddir_2)
+ return d
+ def _testChangeBuilddir_2(self, res):
+ bot = self.bot
# this causes the builder to be replaced
- self.failIfIdentical(builder, bot.builders.get("dummy"))
+ self.failIfIdentical(self.builder, bot.builders.get("dummy"))
builder = bot.builders.get("dummy")
self.failUnless(builder)
# the basedir should be updated
@@ -819,7 +729,5 @@
os.path.join("slavebase", "dummy2"))
# add a new builder, which causes the basedir list to be reloaded
- d = m.loadConfig(config_4_newbuilder)
- dr(d)
-
-
+ d = self.master.loadConfig(config_4_newbuilder)
+ return d
--- NEW FILE: test_slaves.py ---
# -*- test-case-name: buildbot.test.test_slaves -*-
from twisted.trial import unittest
from buildbot.twcompat import maybeWait
from buildbot.test.runutils import RunMixin
config_1 = """
from buildbot.process import step, factory
s = factory.s
BuildmasterConfig = c = {}
c['bots'] = [('bot1', 'sekrit'), ('bot2', 'sekrit')]
c['sources'] = []
c['schedulers'] = []
c['slavePortnum'] = 0
f = factory.BuildFactory([s(step.RemoteDummy, timeout=1)])
c['builders'] = [
{'name': 'b1', 'slavename': 'bot1', 'builddir': 'b1', 'factory': f},
]
"""
class Slave(RunMixin, unittest.TestCase):
skip = "Not implemented yet"
def setUp(self):
RunMixin.setUp(self)
self.master.loadConfig(config_1)
self.master.startService()
d = self.connectSlave(["b1"])
return maybeWait(d)
def testClaim(self):
# have three slaves connect for the same builder, make sure all show
# up in the list of known slaves.
# run a build, make sure it doesn't freak out.
# Disable the first slave, so that a slaveping will timeout. Then
# start a build, and verify that the non-failing (second) one is
# claimed for the build, and that the failing one is moved to the
# back of the list.
print "done"
def testDontClaimPingingSlave(self):
# have two slaves connect for the same builder. Do something to the
# first one so that slavepings are delayed (but do not fail
# outright).
# submit a build, which should claim the first slave and send the
# slaveping. While that is (slowly) happening, submit a second build.
# Verify that the second build does not claim the first slave (since
# it is busy doing the slaveping).
pass
def testFirstComeFirstServed(self):
# submit three builds, then connect a slave which fails the
# slaveping. The first build will claim the slave, do the slaveping,
# give up, and re-queue the build. Verify that the build gets
# re-queued in front of all other builds. This may be tricky, because
# the other builds may attempt to claim the just-failed slave.
pass
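The testIdle2 rewrite earlier in this diff exercises the new re-queue path: a `BuildRequest` whose slave fails the pre-build ping goes back on the queue with its `.startCount` incremented rather than failing the build. A toy model of that bookkeeping (plain Python; `FakeBuildRequest`/`FakeBuilder` are hypothetical stand-ins, not buildbot's classes):

```python
from collections import deque

class FakeBuildRequest:
    """Toy stand-in for the re-queue bookkeeping testIdle2 asserts."""
    def __init__(self, reason):
        self.reason = reason
        self.startCount = 0   # bumped each time a build attempt begins

class FakeBuilder:
    def __init__(self):
        self.queue = deque()

    def requestBuild(self, req):
        self.queue.append(req)
        self._maybeStartBuild()

    def _maybeStartBuild(self):
        if not self.queue:
            return
        req = self.queue.popleft()
        req.startCount += 1
        if not self._pingSlave():
            # slave did not answer the pre-build ping: put the
            # request back on the queue instead of flunking it
            self.queue.append(req)

    def _pingSlave(self):
        return False   # simulate the vanished slave of testIdle2

builder = FakeBuilder()
request = FakeBuildRequest("forced build")
builder.requestBuild(request)
assert request.startCount == 1      # one failed attempt
assert builder.queue[0] is request  # still waiting for a slave
```

This is the behavior the stubbed testFirstComeFirstServed above will eventually pin down: whether a re-queued request keeps its place at the front.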
Index: test_status.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_status.py,v
retrieving revision 1.21
retrieving revision 1.22
diff -u -d -r1.21 -r1.22
--- test_status.py 23 May 2005 17:45:55 -0000 1.21
+++ test_status.py 19 Jul 2005 23:11:59 -0000 1.22
@@ -7,6 +7,7 @@
dr = unittest.deferredResult
from buildbot import interfaces
+from buildbot.sourcestamp import SourceStamp
from buildbot.twcompat import implements, providedBy
from buildbot.status import builder
try:
@@ -79,7 +80,7 @@
def __init__(self, parent, number, results):
builder.BuildStatus.__init__(self, parent, number)
self.results = results
- self.sourceStamp = ("1.14", None)
+ self.source = SourceStamp(revision="1.14")
self.reason = "build triggered by changes"
self.finished = True
def getLogs(self):
Index: test_steps.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_steps.py,v
retrieving revision 1.13
retrieving revision 1.14
diff -u -d -r1.13 -r1.14
--- test_steps.py 6 May 2005 06:40:04 -0000 1.13
+++ test_steps.py 19 Jul 2005 23:11:58 -0000 1.14
@@ -20,6 +20,7 @@
from twisted.internet import reactor
from twisted.internet.defer import Deferred
+from buildbot.sourcestamp import SourceStamp
from buildbot.process import step, base, factory
from buildbot.process.step import ShellCommand #, ShellCommands
from buildbot.status import builder
@@ -39,6 +40,7 @@
class FakeBuilder:
statusbag = None
name = "fakebuilder"
+class FakeSlaveBuilder:
def getSlaveCommandVersion(self, command, oldversion=None):
return "1.10"
@@ -68,9 +70,11 @@
self.builder_status.basedir = "test_steps"
os.mkdir(self.builder_status.basedir)
self.build_status = self.builder_status.newBuild()
- self.build = base.Build()
+ req = base.BuildRequest("reason", SourceStamp())
+ self.build = base.Build([req])
self.build.build_status = self.build_status # fake it
self.build.builder = self.builder
+ self.build.slavebuilder = FakeSlaveBuilder()
self.remote = FakeRemote()
self.finished = 0
@@ -160,5 +164,6 @@
(step.Test, {'command': "make testharder"}),
]
f = factory.ConfigurableBuildFactory(steps)
- b = f.newBuild()
+ req = base.BuildRequest("reason", SourceStamp())
+ b = f.newBuild([req])
#for s in b.steps: print s.name
Index: test_vc.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_vc.py,v
retrieving revision 1.32
retrieving revision 1.33
diff -u -d -r1.32 -r1.33
--- test_vc.py 18 Jun 2005 03:35:21 -0000 1.32
+++ test_vc.py 19 Jul 2005 23:11:58 -0000 1.33
@@ -5,7 +5,7 @@
from twisted.trial import unittest
dr = unittest.deferredResult
-from twisted.internet import defer, reactor
+from twisted.internet import defer, reactor, utils
#defer.Deferred.debug = True
from twisted.python import log
@@ -13,28 +13,28 @@
from buildbot import master, interfaces
[...1548 lines suppressed...]
+ r = base.BuildRequest("forced", SourceStamp())
+ b = base.Build([r])
+ s = step.SVN(svnurl="dummy", workdir=None, build=b)
self.failUnlessEqual(s.computeSourceRevision(b.allChanges()), None)
def testSVN2(self):
- b = base.Build()
- b.treeStableTimer = 100
- self.addChange(b, revision=4)
- self.addChange(b, revision=10)
- self.addChange(b, revision=67)
- s = step.SVN(svnurl=None, workdir=None, build=b)
+ c = []
+ c.append(self.makeChange(revision=4))
+ c.append(self.makeChange(revision=10))
+ c.append(self.makeChange(revision=67))
+ r = base.BuildRequest("forced", SourceStamp(changes=c))
+ b = base.Build([r])
+ s = step.SVN(svnurl="dummy", workdir=None, build=b)
self.failUnlessEqual(s.computeSourceRevision(b.allChanges()), 67)
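The two SVN assertions above pin down `computeSourceRevision`: with no triggering changes the step checks out no specific revision, otherwise it builds at the newest revision among the changes. Under that reading the behavior reduces to the following sketch (`Change` here is a hypothetical stand-in, not buildbot's class):

```python
class Change:
    """Minimal stand-in carrying only the revision attribute."""
    def __init__(self, revision=None):
        self.revision = revision

def compute_source_revision(changes):
    # Sketch of what testSVN1/testSVN2 assert: no changes means no
    # pinned revision (i.e. check out HEAD); otherwise build at the
    # highest revision number among the triggering changes.
    if not changes:
        return None
    return max(int(c.revision) for c in changes)

assert compute_source_revision([]) is None
assert compute_source_revision([Change(4), Change(10), Change(67)]) == 67
```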
Index: test_web.py
===================================================================
RCS file: /cvsroot/buildbot/buildbot/buildbot/test/test_web.py,v
retrieving revision 1.18
retrieving revision 1.19
diff -u -d -r1.18 -r1.19
--- test_web.py 17 May 2005 10:14:10 -0000 1.18
+++ test_web.py 19 Jul 2005 23:11:59 -0000 1.19
@@ -11,7 +11,7 @@
from twisted.internet.interfaces import IReactorUNIX
from twisted.web import client
-from buildbot import master, interfaces
+from buildbot import master, interfaces, buildset, sourcestamp
from buildbot.twcompat import providedBy
from buildbot.status import html, builder
from buildbot.changes.changes import Change
@@ -32,13 +32,14 @@
interfaces.IControl)
-config1 = """
-BuildmasterConfig = {
+base_config = """
+from buildbot.status import html
+BuildmasterConfig = c = {
'bots': [],
'sources': [],
+ 'schedulers': [],
'builders': [],
'slavePortnum': 0,
- '%(k)s': %(v)s,
}
"""
@@ -95,16 +96,7 @@
def test_webPortnum(self):
# run a regular web server on a TCP socket
- config = """
-from buildbot.status import html
-BuildmasterConfig = {
- 'bots': [],
- 'sources': [],
- 'builders': [],
- 'slavePortnum': 0,
- 'status': [html.Waterfall(http_port=0)],
- }
-"""
+ config = base_config + "c['status'] = [html.Waterfall(http_port=0)]\n"
os.mkdir("test_web1")
self.master = m = ConfiguredMaster("test_web1", config)
m.startService()
@@ -120,16 +112,8 @@
# running a t.web.distrib server over a UNIX socket
if not providedBy(reactor, IReactorUNIX):
raise unittest.SkipTest("UNIX sockets not supported here")
- config = """
-from buildbot.status import html
-BuildmasterConfig = {
- 'bots': [],
- 'sources': [],
- 'builders': [],
- 'slavePortnum': 0,
- 'status': [html.Waterfall(distrib_port='.web-pb')],
- }
-"""
+ config = (base_config +
+ "c['status'] = [html.Waterfall(distrib_port='.web-pb')]\n")
os.mkdir("test_web2")
self.master = m = ConfiguredMaster("test_web2", config)
m.startService()
@@ -145,16 +129,8 @@
def test_webPathname_port(self):
# running a t.web.distrib server over TCP
- config = """
-from buildbot.status import html
-BuildmasterConfig = {
- 'bots': [],
- 'sources': [],
- 'builders': [],
- 'slavePortnum': 0,
- 'status': [html.Waterfall(distrib_port=0)],
- }
-"""
+ config = (base_config +
+ "c['status'] = [html.Waterfall(distrib_port=0)]\n")
os.mkdir("test_web3")
self.master = m = ConfiguredMaster("test_web3", config)
m.startService()
@@ -169,17 +145,11 @@
def test_waterfall(self):
# this is the right way to configure the Waterfall status
- config1 = """
-from buildbot.status import html
-from buildbot.changes import mail
-BuildmasterConfig = {
- 'bots': [],
- 'sources': [mail.SyncmailMaildirSource('my-maildir')],
- 'builders': [],
- 'slavePortnum': 0,
- 'status': [html.Waterfall(http_port=0)],
- }
-"""
+ config1 = (base_config +
+ "from buildbot.changes import mail\n" +
+ "c['sources'] = [mail.SyncmailMaildirSource('my-maildir')]\n" +
+ "c['status'] = [html.Waterfall(http_port=0)]\n")
os.mkdir("test_web4")
os.mkdir("my-maildir"); os.mkdir("my-maildir/new")
self.master = m = ConfiguredMaster("test_web4", config1)
@@ -221,6 +191,7 @@
BuildmasterConfig = {
'bots': [('bot1', 'passwd1')],
'sources': [],
+ 'schedulers': [],
'builders': [{'name': 'builder1', 'slavename': 'bot1',
'builddir':'workdir', 'factory':f1}],
'slavePortnum': 0,
@@ -235,8 +206,9 @@
# insert an event
s = m.status.getBuilder("builder1")
+ req = base.BuildRequest("reason", sourcestamp.SourceStamp())
bs = s.newBuild()
- build1 = base.Build()
+ build1 = base.Build([req])
step1 = step.BuildStep(build=build1)
step1.name = "setup"
bs.addStep(step1)
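The test_web.py changes replace several copy-pasted config strings with one `base_config` plus per-test additions. The trick is the double binding `BuildmasterConfig = c = {...}`: appended lines can mutate `c` and the master still reads the result through `BuildmasterConfig`. A minimal sketch of how such a composed string evaluates (plain `exec` in modern syntax; the `'status'` value is a placeholder, not a real Waterfall):

```python
# Sketch of the base_config composition pattern from the diff.
base_config = """
BuildmasterConfig = c = {
    'bots': [],
    'sources': [],
    'schedulers': [],
    'builders': [],
    'slavePortnum': 0,
}
"""

# per-test additions simply append lines that poke keys into c
config = base_config + "c['status'] = ['waterfall-placeholder']\n"

namespace = {}
exec(config, namespace)
cfg = namespace['BuildmasterConfig']
assert cfg['status'] == ['waterfall-placeholder']
assert cfg['slavePortnum'] == 0
```

The payoff in the diff is that adding the newly required `'schedulers'` key happens once in `base_config` instead of in every per-test string.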