<div dir="ltr">Hmm, it sounds like you have found a memory leak in the UI.<div><br><div>Several things to check that could grow without bound on the UI side:</div><div>- Messages could be queuing up over hung (or badly closed) websockets.</div><div>- REST query data could be retained by a lingering reference somewhere, letting results accumulate in memory.</div><div>- Somebody could be making huge REST API queries; it is unfortunately quite easy to hit /api/v2/builds and dump the whole build database.</div><div>Even so, that wouldn't get you all the way to 45 GB, IMO.</div><div><br></div><div>During my perf tests I saw that the log system is quite memory hungry: Buildbot trades memory for CPU when the master can't keep up recording all the logs in the database (nothing fancy; the Twisted queues just keep growing while they wait for their callbacks to fire).</div><div>Due to the repeated encoding and decoding, memory usage in the log queue can be 2-4 times the amount of actual log bytes processed. Anyway, this is probably not your problem.</div><div><br></div><div>You could look at <a href="https://github.com/tardyp/buildbot_profiler">https://github.com/tardyp/buildbot_profiler</a> and perhaps extend it to add a memory profiler.</div><div><br></div><div>Pierre</div></div></div><br><div class="gmail_quote"><div dir="ltr">On Thu, Aug 10, 2017 at 21:52, Neil Gilmore <<a href="mailto:ngilmore@grammatech.com">ngilmore@grammatech.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi all,<br>
<br>
A couple of times in the last couple weeks, the master running our UI<br>
has crashed/exited/ceased to run. This most recent time it was a<br>
MemoryError, backed up by the kernel log saying it was OOM. Current run<br>
shows it using 43.1G memory. Seems like a lot, doesn't it? The other 2<br>
larger masters (the ones actually running builds) are about 2G, and the<br>
remaining, smallest one is about 1.7G.<br>
<br>
What does the UI hold onto that makes it so large?<br>
<br>
I'm supposed to get some heap profiling thingy wedged into buildbot so I<br>
can try to figure it out. My current plan is to introduce a new build<br>
step (like our current custom ones) that triggers the profiling process,<br>
create a sort of dummy build using the step, and create a force<br>
scheduler to trigger that build. That'll keep it on our UI master<br>
anyway. Is there a better way to trigger such code in the master at<br>
arbitrary times?<br>
<br>
I'm also starting to get complaints (and I've noticed this myself), that<br>
sometimes queued builds take a long time to start after the previous<br>
build finishes. Sometimes on the order of a half-hour or more. Or at<br>
least it appears so. I'm going on what the UI tells me. I haven't tried<br>
delving into the logs of the master controlling the builds to match up<br>
times. Any ideas? I'm afraid I'm not up on exactly how/when builds are<br>
supposed to get started.<br>
<br>
And our database seems to be accumulating integrity errors again.<br>
<br>
Neil Gilmore<br>
<a href="mailto:raito@raito.com" target="_blank">raito@raito.com</a><br>
_______________________________________________<br>
users mailing list<br>
<a href="mailto:users@buildbot.net" target="_blank">users@buildbot.net</a><br>
<a href="https://lists.buildbot.net/mailman/listinfo/users" rel="noreferrer" target="_blank">https://lists.buildbot.net/mailman/listinfo/users</a><br>
</blockquote></div>
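The memory-profiler idea mentioned above could be prototyped with Python's standard-library tracemalloc module. The sketch below is not Buildbot code: the function name `dump_top_allocations` is made up for illustration, and wiring it into a custom build step (as Neil describes) or a periodic Twisted timer is left out.

```python
# Minimal memory-snapshot helper built on Python's stdlib tracemalloc.
# dump_top_allocations is a hypothetical name; in a master you might call it
# from a custom BuildStep or a twisted.internet.task.LoopingCall and write
# the result to a log file.
import tracemalloc


def dump_top_allocations(limit=10):
    """Return the `limit` largest allocation sites as formatted strings."""
    if not tracemalloc.is_tracing():
        # First call only arms the tracer; allocations made before this
        # point are invisible, so a snapshot now would be empty.
        tracemalloc.start(25)  # keep up to 25 frames per allocation
        return ["tracemalloc started; call again later for a snapshot"]
    snapshot = tracemalloc.take_snapshot()
    stats = snapshot.statistics("lineno")[:limit]
    return [str(stat) for stat in stats]


if __name__ == "__main__":
    dump_top_allocations()            # arm the tracer
    data = [object() for _ in range(1000)]  # some allocations to observe
    for line in dump_top_allocations(limit=5):
        print(line)
```

Tracing adds noticeable CPU and memory overhead of its own, so on a master already under memory pressure it is safer to enable it briefly around a suspected window than to leave it running permanently.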