[Buildbot-devel] Changing the master config dynamically

Pierre Tardy tardyp at gmail.com
Thu Oct 9 12:33:25 UTC 2014


This reconfig issue is kind of a FAQ. I think it deserves a chapter in
the buildbot manual. It is actually quite an advanced topic once you
start to go beyond the simple stuff.


Here is how it works (I'll take the action of making a PR with a
cleaned-up version of this text).

buildbot reconfig actually sends a SIGHUP to the master's process.

The master loads the "master.cfg" file with Python's execfile.

This means that master.cfg is indeed fully parsed and run.
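In Python 3 terms (where the execfile builtin no longer exists), the load amounts to something like the following simplified sketch; the real master does far more bookkeeping and error handling around it, but `BuildmasterConfig` is the dict that master.cfg is expected to define:

```python
# Simplified sketch of what loading master.cfg amounts to.  Python 3
# spells execfile as exec+compile; the real master wraps this in much
# more error handling and context setup.
def load_master_cfg(path):
    namespace = {'__file__': path}
    with open(path) as f:
        exec(compile(f.read(), path, 'exec'), namespace)
    # master.cfg is expected to define a BuildmasterConfig dict
    return namespace['BuildmasterConfig']
```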

master.cfg can import Python modules. In that case, the Python runtime
of the buildbot process is used. Modules that were already loaded when
buildbot started are *not* reloaded.

You can, however, use Python's reload builtin
<https://docs.python.org/2/library/functions.html#reload> in order to
reload some of your modules.
Thus, if you have some custom build steps in a separate Python module,
you can just reload them (I'll detail the pitfalls later).
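To make this concrete: in today's Python 3 the reload builtin lives in importlib, but the behaviour is the same. Here is a self-contained sketch; the module name `mysteps` and its contents are made up for illustration:

```python
import importlib
import os
import sys
import tempfile

# Sketch: rewrite a module on disk, then reload picks up the new
# definitions inside the already-running process.  (Python 3 moved the
# reload builtin into importlib.)  The module 'mysteps' is hypothetical.
sys.dont_write_bytecode = True          # avoid stale .pyc shortcuts

moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, 'mysteps.py'), 'w') as f:
    f.write('VERSION = 1\n')
sys.path.insert(0, moddir)

import mysteps
assert mysteps.VERSION == 1

with open(os.path.join(moddir, 'mysteps.py'), 'w') as f:
    f.write('VERSION = 2\n')
importlib.reload(mysteps)               # same module object, new contents
```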

Once master.cfg has been loaded, the new version of the configuration is
checked by the master. During this time, the master is still running,
and the builds are still running with a copy of the last configuration.

Once the config is known to be good, the master compares the new config
against the old one, and reconfigures the appropriate services
accordingly.
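Conceptually, this comparison is a dict diff over the named services. The sketch below is only an illustration of the idea; buildbot's real comparison is done per service type (builders, schedulers, workers) with its own logic:

```python
# Conceptual sketch of the comparison step: find which named entries
# were added, removed, or modified between the old and new config.
# The real master handles each service type with dedicated logic.
def changed_entries(old_cfg, new_cfg):
    names = set(old_cfg) | set(new_cfg)
    return {n for n in names if old_cfg.get(n) != new_cfg.get(n)}
```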

If you changed a builder's configuration during this reload, only new
builds will be affected. Builds that were already running will keep
running with a copy of the buildfactory that was configured when the
build started; thus the steps attached to that buildfactory will also be
from the previous configuration.
If you reload at a crazy rate, with crazy long builds, you can have
several versions of the buildfactory running at the same time for the
same builder, and everything will still run fine.
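The "copy of the buildfactory" behaviour can be pictured as a snapshot taken at build start; the step names below are invented for illustration:

```python
import copy

# Sketch: a build snapshots the factory's steps when it starts, so a
# later reconfig that changes the factory does not touch builds already
# in flight.  Step names are purely illustrative.
factory_steps = ['git-checkout', 'compile']
running_build = copy.deepcopy(factory_steps)   # snapshot at build start

factory_steps.append('run-tests')              # reconfig alters the factory
```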


One thing you have to take care of, though: if there are pending
buildrequests, those were created with the previous version of the
schedulers' configuration, and thus have a set of properties that may be
incompatible with the new version of the buildfactory. This kind of
issue is not strictly related to a reload; it can also happen with a
restart, since stopping the master keeps the buildrequest queue as-is
(while running builds are aborted and marked with the retry state).
So the list of properties a builder needs has to be well reasoned, and
you should avoid changing the properties and their meaning all the time.
When adding new configuration properties, make sure you set up
reasonable defaults so that previous buildrequests remain compatible.
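A plain-Python sketch of that advice: a buildrequest queued under the old scheduler config simply won't carry a newly-added property, so the consumer has to supply a default. The property name `enable_coverage` is made up for illustration:

```python
# Buildrequests created before the reconfig won't have any newly-added
# property; merging in defaults keeps them compatible with the new
# buildfactory.  'enable_coverage' is a hypothetical property.
NEW_PROPERTY_DEFAULTS = {'enable_coverage': 'no'}

def effective_properties(request_properties):
    props = dict(NEW_PROPERTY_DEFAULTS)
    props.update(request_properties)
    return props

old_request = {'branch': 'master'}                           # pre-reconfig
new_request = {'branch': 'master', 'enable_coverage': 'yes'}  # post-reconfig
```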

If you have a complex flow of builders 1 -> 2 -> 3 -> 4, each connected
via a Triggerable scheduler, a Dependent scheduler, or a ForceScheduler
with inherited properties (for the manual promotion use case), you
cannot ensure that 1 and 4 will all run with the same version of the
global configuration.


About reloading Python modules: my team has been using it for a while,
and we went through some pretty complicated stuff trying to reload all
the modules of our process. This did not go well in the end. If, like
the metabuildbot, you only have half a dozen modules, this is okay. But
if you have a large system with lots of dependencies between modules,
the order of module reloading becomes important:

Let's say you have:

common.py:
    class BaseStep(BuildStep):
        pass

step1.py:
    from .common import BaseStep

step2.py:
    from .common import BaseStep

In this case, if you reload step1 before common, then step1 will be
reloaded with the previous version of common.py's BaseStep. Obviously,
you will very quickly run into lots of issues: reloading everything is a
tree-ordering problem combined with Python's quite slow introspection
mechanisms. Short story: avoid it!
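To illustrate why the ordering matters: getting it right means topologically sorting the import graph so dependencies reload before their dependents, which is exactly the machinery that makes whole-process reloading not worth the trouble. A sketch with Python 3.9+'s graphlib, using the common/step1/step2 example:

```python
import graphlib  # Python 3.9+

# Import graph from the example: step1 and step2 both depend on common,
# so common must be reloaded first or they keep the old BaseStep.
deps = {
    'step1': {'common'},
    'step2': {'common'},
    'common': set(),
}
reload_order = list(graphlib.TopologicalSorter(deps).static_order())
```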

The better method is what Dan is suggesting: a Domain Specific
Language (DSL).

The idea is to separate logic and data in the configuration.
Everything that is allowed to change via reconfig is data, described in
JSON or YAML files. The Python code is the logic: it loads the JSON/YAML
configuration files and generates a buildbot configuration accordingly.
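A minimal sketch of that logic/data split, using JSON from the stdlib. The schema here (builders with a name and a list of shell commands) is invented for illustration, not buildbot's actual format:

```python
import json

# Sketch of the logic/data split: the data (a hypothetical JSON schema)
# drives the Python "logic", which emits a buildbot-style config dict.
def generate_config(json_text):
    data = json.loads(json_text)
    return {
        'builders': [
            {
                'name': b['name'],
                'steps': [['sh', '-c', cmd] for cmd in b['steps']],
            }
            for b in data['builders']
        ]
    }
```

A reconfig then only needs to re-read the data file; the Python logic stays loaded and is never reloaded.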

Dan pushes this to the limit and does not do any process-specific stuff
in the Python, putting everything in its JSON file.
Travis-CI actually takes a very similar approach.
You could quite easily implement a .travis.yml parser in buildbot.
Isotoma has been doing it with eight
<https://github.com/isotoma/buildbot_travis>; it will be even easier to
do with nine's dynamic buildfactories and plugin-able UI architecture.

In my team's process, we still have a lot of logic coded in the Python
steps. Python has far more expressivity than any DSL. The good news with
buildbot is that you can really put the boundary where you want, whether
you want a high-level DSL or a low-level one.


Hopefully this answers your question and does not confuse you further.

Pierre

On Wed, Oct 8, 2014 at 7:20 PM, Karoly Kamaras <karoly.kamaras at prezi.com>
wrote:

> Hi,
>
> I remember seeing this problem somewhere, but I thought it would be
> worth reopening it in a new thread:
>
> I would like to be able to edit the master configuration dynamically
> without hurting the currently running builds. I know I can reconfigure the
> master without restarting, but I also saw that if the configuration is built
> up from smaller components, like builders, buildsteps, schedulers, etc. in
> different Python files, then changing one file might not affect the
> reconfig in the expected way.
>
> Here is my example:
> I have a few teams; every team has their own master and N slave nodes.
> They have their own GitHub projects where they can define the build/test
> steps of each project in a custom DSL that we generate into Buildbot's
> configuration. In this case, if they decide to change one of their projects'
> configuration - let's say they add an extra step to a builder - they can
> commit the changes into the git repo (even to a different branch), then we
> regenerate the config, reconfigure the master, and Buildbot will run with
> the new config.
>
> Is it possible to make this scenario work in a nice way? By nice I mean
> not killing the currently running jobs, but letting them finish and publish
> their results to the same place from both the "old" (currently running) and
> "new" (just committed and regenerated) configurations.
>
> Our first thought was running multiple masters on the same server
> with the same DB but different configurations, and forwarding the
> connection to the most recently configured master without killing the
> running "old" one (until it finishes the currently running jobs). It's just
> a first thought; I haven't tried it yet. I am wondering what kinds of
> solutions exist for this problem?
>
> Thank you for your help in advance,
>
> Regards,
>
> *Karoly Kamaras*
> Developer at Prezi <http://prezi.com>
>
>

