[Buildbot-devel] MQ design plan and code review
Dustin J. Mitchell
dustin at v.igoro.us
Sat Jun 14 22:15:55 UTC 2014
On Fri, Jun 13, 2014 at 11:35 PM, Damon Wang <damon.devops at gmail.com> wrote:
> First, some news: Kombu doesn't support Qpid as we expected. If we
> want to keep our doc (http://docs.buildbot.net/nine/developer/mq.html)
> accurate, we must implement another MQ plugin using qpid-python.
Then there's no need to support qpid. It's fine for Buildbot to
support what Kombu supports and no more.
> Besides, we used persistent delivery to make sure a message is sent
> to a consumer as soon as it becomes active. But with an MQ, the
> message stays available in the broker, so persistence can be ignored
> here. Our new MQ has another attribute -- durable: durable queues
> remain active when the server restarts. My design is that, by
> default, exchanges and queues are created when Buildbot starts and
> deleted when Buildbot shuts down. Durability helps when Buildbot
> stops abnormally and queues are not deleted as expected; then we can
> recover the messages that were not consumed.
Actually, the first bit is false: in AMQP, if a message is sent to an
exchange which has no active queues, then the message is dropped.
It's the *queues* that are persistent -- or not. And Buildbot needs
both. A persistent queue would be used by a scheduler, for example,
which needs to get a notification of *every* change, even one which
occurs while the scheduler is not connected to the MQ server. So that
MQ server needs to have a queue that stays around. But we need
non-persistent queues for connections from web clients, which come and
go all the time. We can't have the MQ server caching every message
sent since last Wednesday when I looked at the site with my laptop!
Persistent queues should also be durable.
> What's more, our former MQ used filters so that consumers get the
> right messages. A standard MQ usually binds a consumer to an explicit
> queue, from which it gets all messages. Do we still need filters? We
> could implement this by filtering on the message's routing_key after
> the consumer gets it from the queue, but I don't know whether that is
> necessary.
It's the queue that performs the filtering. So a consumer which wants
messages matching "foo.#.bar.*.bing" would create a new queue, bound
to the exchange using that pattern.
In other words, every consumer would have its own queue. I suspect
you're thinking of a different model, where all consumers would
subscribe to the same queue. That won't work for Buildbot.
> About the serializer: we use JSON as the doc says, but does JSON
> support Python's datetime type? Some messages contain time info like
> "'complete_at': datetime.datetime(2014, 6, 14, 1, 47, 39,
> tzinfo=<buildbot.util.UTC object at 0x2c8b7d0>)", but when I test in
> my environment I get "datetime.datetime(2014, 6, 14, 1, 47, 39) is
> not JSON serializable". Has Buildbot extended json?
The REST API uses a _toJson method to handle that issue:
that could be refactored a little bit so that the MQ code can use it, too.
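For reference, a minimal sketch of the kind of encoder such a helper
provides -- Buildbot's actual _toJson may serialize datetimes
differently (e.g. as epoch timestamps rather than ISO 8601 strings):

```python
import datetime
import json

class DateTimeEncoder(json.JSONEncoder):
    """Extend the stock JSON encoder to handle datetime objects, which
    json.dumps rejects by default."""

    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            return obj.isoformat()
        # Fall back to the base class, which raises TypeError for
        # anything else unserializable.
        return super().default(obj)

msg = {'complete_at': datetime.datetime(2014, 6, 14, 1, 47, 39)}
print(json.dumps(msg, cls=DateTimeEncoder))
# {"complete_at": "2014-06-14T01:47:39"}
```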
> Last but not least, to consume messages asynchronously our former MQ
> used Twisted's Deferred. I'm not very familiar with it, but after
> reading Twisted's docs I find it is not easy to use Deferreds here.
> Kombu consumes messages with something like connection.drain_events,
> which returns when any consumer gets a message (event), or with a hub
> (Kombu's async method); all of these operate on the connection, not
> on a particular queue or consumer. So running the hub in a thread may
> be a solution, but Twisted's async I/O is famous, and there must be a
> better solution than using a system thread or process. Please comment
> if you have any ideas.
From a brief Google around, it looks like there is no common solution.
There is a tool called txamqp which can speak AMQP directly, but of
course that bypasses Kombu!
I think the best way to start is to use a separate thread, dedicated
to Kombu. Later, we can work on a more natural way to integrate the
two.
> The current code is on my GitHub; the link is
> Maybe I can make a pull request for review? Not for merging, just for
> the convenience of commenting.
From a brief look, it seems like you're registering all of the queues
in advance. The queues need to be created and deleted as consumers
subscribe and stop their subscriptions, instead.
I'm not sure what the standardizeKey method is trying to accomplish -
it looks like it sometimes returns a value, and sometimes doesn't?
This is the sort of method for which unit tests can be *very* helpful!