Songbird’s build farm recently moved nests.
This I know because an ex-coworker recently asked for some random configuration details and any gotchas to look out for, in preparation for the migration¹.
After we finished discussing “bidness,” I wondered aloud how the farm had fared, post-Preed as the shepherd. His answer surprised me: it had been humming along fine for months. In fact, he said they hadn’t really needed to touch a thing until now.
I was, of course, glad to hear that the infrastructure I built years ago had continued to “Just Work” ™ for them, so their main activity—shipping code—wasn’t affected in the slightest.
But it was also nice to have real-world data to validate my approach to building and managing the farm.
The core tenets of that approach:
- Being meticulous about the core of your organization’s build infrastructure—the build farm—matters. As do the details. Thinking about them as a bunch of machines that “just have stuff installed” on them is a recipe for disaster.
- Taking time to design the infrastructure is important. Specifically, we made the infrastructure pluggable, so the VM images were common, but certain features were tunable. This allowed different VMs to be used for different things, but the images weren’t entirely dissimilar or unrelated to each other².
- Access control to the farm is important. Shared passwords and multiple access paths are your enemy. You need to know and manage who can access which resources, and what they have access to inside the farm, so you can…
- Implement some process for basic change control. This is a core tenet of configuration management, but for some reason, it is often the first thing to fall by the wayside. Sometimes, this will require being a bit of a Nazi about it, but it pays off for the entire organization, even developers.
- You need to build visibility into the farm’s operations. Whether it’s Nagios, ESX monitoring, or even just the way continuous-integration notifications are handled, building in early reporting of problems is key to keeping a farm humming.
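The “pluggable images with tunable features” idea above can be sketched as a tiny configuration merge: a common base definition shared by every VM, with per-role overrides for the tunables (like the per-VM artifact drive size mentioned in the footnote). All names and values here are illustrative assumptions, not the actual farm’s configuration.

```python
# Hypothetical sketch of a pluggable build-VM configuration.
# One common base image; roles tune only what they must.

BASE_IMAGE = {
    "os": "linux",
    "toolchain": "standard-build",
    "artifact_drive_gb": 40,   # default size of the separate artifact drive
    "monitoring": True,        # every VM reports in
}

# Per-role tunables: big builders get a bigger artifact drive so they
# don't run out of space; small builders don't waste disk.
ROLE_OVERRIDES = {
    "full-build": {"artifact_drive_gb": 200},
    "unit-test": {"artifact_drive_gb": 20},
}

def vm_config(role):
    """Merge the common base image with a role's tunable overrides."""
    cfg = dict(BASE_IMAGE)
    cfg.update(ROLE_OVERRIDES.get(role, {}))
    cfg["role"] = role
    return cfg

print(vm_config("full-build")["artifact_drive_gb"])  # 200
print(vm_config("unit-test")["artifact_drive_gb"])   # 20
```

The design choice being illustrated: because every role starts from the same base, the images stay related to each other, and a change-controlled edit to `BASE_IMAGE` propagates everywhere, while role-specific drift is confined to the overrides.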
While these may all seem simple, they of course turn out to be more complex to implement. And I’m sure the farm’s months of happy compiling were helped by a lower rate of configuration changes.
But this provides proof that if you make the time, you can sow the seeds of a build farm that is reliable, stable, allows you to “Set it and forget it³,” and will provide months (at least!) of trouble-free operation.
¹ I was more than happy to help ensure a successful flight.
² For example, the artifacts for each VM were stored on a separate drive, so machines responsible for larger builds could easily have larger drives and wouldn’t run out of space, while VMs doing smaller builds weren’t wasting space.
³ Thank you, Ronco!