Continuous Integration


Simply ship. Every time.


Pomp and Circumstances


It’s that time of year again, when students button up their final projects (hopefully!) and march across stages all over the world to be awarded the degrees they’ve been striving toward for years.

Congratulations to all those donning caps and gowns right now, and welcome to the quote-unquote “real world!”

A little known fact: back in *cough*, I gave the commencement speech for our Computer Science department’s ceremony1.

Even though the speech is about a decade old now, when I read through it, I figured some of that advice may still be of use to today’s computer science/engineering grads.

(This was written as a riff on Baz Luhrmann’s reinterpretation of Mary Schmich’s2 Chicago Tribune column3, for those who remember that song…)

Fellow graduates of the class of 2003: use a firewall.

Security research conducted at UC Davis and Carnegie Mellon reports that, used in combination with other security best-practices, a firewall is one of the best ways to reduce your vulnerability to security breaches, whereas the rest of this advice has no basis more scientific than my own circuitous journey on Poly’s twelve year plan.

I will dispense these observations… now. Read More


Git is the new COM


(Editor’s note: I don’t mean to be banging the drum on Git this week, but Mike Fiedler and I were talking about this yesterday, and I’ve been meaning to write this for awhile, so…)

I apparently have a bit of a reputation for being a “Git hater.”

While I understand where it comes from, the notion is misplaced.

Git is not always all roses, and that’s probably where the reputation stems from: the frank discussions I’ve had about the real pros and cons of the tool—and there are cons—stand in contrast to the general unbridled enthusiasm over Git1.

Having said that, there are many facets of Git I really like, and there is no denying that it revitalized open source development and is changing software configuration management as we know it.

Despite that, I think Git is the new COM2.

Read More

EPISODE 20: Does Your Entire Team Have to Git It?


On Git’ing It


The Ship Show’s current episode tackles a topic near and dear to my heart: Git and, for our purposes, its (non-open source project) usages.

The seed for this episode’s topic came up when the podcast crew was discussing whether or not release engineers, and by extension ops/tools/automation engineers, should have intricate knowledge of the version control tool they’re supporting in their positions. Because this is more and more commonly Git these days, the seed-question naturally drifted to “Does everyone on your team have to have a more complex knowledge of Git?”

What makes Git different from other version control tools is that it puts the power to destroy the repository1 in the hands of every developer. (Some may point out that you can disable these operations on the server side, and they’re right. Unfortunately, GitHub does not allow you to do this in their hosted product, so in the common case, that doesn’t much matter2.) And, given that, “with great power comes great responsibility,” so the saying goes… but are all developers equipped3 to handle that responsibility?
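For what it’s worth, on a server you run yourself, that server-side protection boils down to a couple of receive-side settings. A minimal sketch (the repository path here is hypothetical):

```shell
# Sketch: hardening a self-hosted bare repository against history rewrites.
git init --bare /tmp/example-repo.git

# Reject non-fast-forward pushes (i.e., force-pushes) to any ref:
git -C /tmp/example-repo.git config receive.denyNonFastForwards true

# Reject ref deletion, so branches and tags can't be removed remotely:
git -C /tmp/example-repo.git config receive.denyDeletes true
```

With those set, a `git push --force` against that repository is refused at the receive step, regardless of what any individual developer does locally.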

Even if you’re not using Git, the question about version control proficiency remains. One of the things I’ve always done at any shop I’ve worked at is work to understand the intricacies of the version control tool’s versioning model and how it understands the world that we put into it.

This is key to being able to make coherent recommendations about code-line policies, merging strategies, and general repository administration, which is a large part of the release engineering functional role.

Despite this, I’ve worked with many release engineers who don’t really understand what’s going on under the covers, and have only a basic understanding of the tool. Because of Git’s complexities and sometimes confusing “porcelain”4, this is a very real problem facing a lot of release and ops engineers.
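To make the porcelain/plumbing split concrete: porcelain commands are the human-facing interface, while plumbing commands expose Git’s underlying object database directly. A quick comparison in a throwaway repository:

```shell
# Toy repository so the commands below have something to operate on.
cd "$(mktemp -d)" && git init -q . \
  && git -c user.name=doc -c user.email=doc@example.com \
     commit -q --allow-empty -m "init"

# Porcelain: the human-oriented commands most developers live in.
git status
git log --oneline

# Plumbing: scriptable commands that expose the raw object model.
git rev-parse HEAD     # resolve a ref to a commit SHA
git cat-file -p HEAD   # dump the raw commit object (tree, parents, author)
```

Understanding what the plumbing reveals—commits pointing at trees pointing at blobs—is exactly the kind of “under the covers” knowledge at issue here.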

Is it something we should be concerned with?

Well… listen to the episode… and tell me what you think.

(As an aside: this episode’s musical interstitials turned out to be among my favorite thus far; kudos to everyone who caught what they are. If you don’t like them, don’t blame me!)

1 Referring to force-pushes, mostly
2 I still don’t understand why they don’t allow that; Bitbucket does; is it a feature-differentiation thing? Do they assume that pull requests protect the repository-of-record from this behavior?
3 Via proper training, etc.
4 The fact that the Git community insists on making the distinction between “porcelain” and “plumbing” is part of the problem, methinks

Eulogy for a Founding Father, revisited


In response to my post earlier this week on Tinderbox’s end-of-life, reader Carsten Mattner asked:

Reading [your post], I couldn’t figure out what replaced Tinderbox for the Mozilla builds. What feeds tbpl? Does Mozilla not use Tinderbox to build continuously?

When I left Mozilla in 2007, there was a Release Engineering project in progress to actively replace Tinderbox (Client) with buildbot. So in short, no, Mozilla does not use Tinderbox Client to drive its continuous integration builds, and hasn’t for some time.

Do they still use buildbot today?

I didn’t know the answer to that question, so I tracked down Coop on IRC, who graciously gave me a few minutes of his time to answer exactly that.

He said:

  • Mozilla currently uses “95% buildbot, with 5% Jenkins for random small projects”
  • There are multiple buildbot masters that drive the buildbot clients
  • Unlike the out-of-the-box buildbot master setup, the masters query a job scheduling database instead of monitoring source control for changes themselves; they then report their results to a database, which tbpl (and other services) use to generate their reports/dashboards; the buildbot master waterfall pages aren’t accessible to the external world (which makes sense, because they include unsecured administrative functionality1)
  • There are about 60 masters right now, but Coop said “number keeps growing though, so we need to rethink the whole solution”
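The scheduling flow Coop describes—masters pulling work from a central jobs database rather than watching source control themselves, then writing results back for dashboards to read—can be sketched roughly as below. This is purely illustrative, not Mozilla’s actual code; the table and column names are invented:

```python
import sqlite3

def claim_pending_jobs(conn):
    """A master polls the shared scheduling database for work,
    instead of monitoring source control for changes itself."""
    with conn:  # single transaction, so two masters can't claim the same job
        rows = conn.execute(
            "SELECT id, revision FROM jobs WHERE state = 'pending'"
        ).fetchall()
        conn.execute("UPDATE jobs SET state = 'claimed' WHERE state = 'pending'")
    return rows

def report_result(conn, job_id, status):
    """Write the build outcome back, where a tbpl-style dashboard reads it."""
    with conn:
        conn.execute("UPDATE jobs SET state = ? WHERE id = ?", (status, job_id))

# Demo with an in-memory database standing in for the shared scheduler DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, revision TEXT, state TEXT)")
conn.execute("INSERT INTO jobs (revision, state) VALUES ('abc123', 'pending')")

for job_id, revision in claim_pending_jobs(conn):
    report_result(conn, job_id, "success")  # pretend the build ran and passed
```

The appeal of the pattern is that the masters become interchangeable workers: adding a sixty-first master means pointing another poller at the database, not reconfiguring change sources.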

So there’s your answer, Carsten!

1 A long standing criticism of mine, among others

ChefConf 2013 Revue Review


This year was my first year attending ChefConf.

For episode 19 of The Ship Show, we did a joint podcast with the Food Fight Show crew and had a ton of fun; if you’re looking for a high-level review of the conference, that’s a good place to start1.

Even though the conference wound down over a week ago, it cracked open a world view I haven’t had a lot of experience with to date, and that experience stuck with me, so I wanted to discuss it a bit more after having digested things.

It’s interesting to me that most of the people attending ChefConf are approaching many of the same topics we do in traditional “release engineering” for “native”/“boxed” software, but it’s similarly obvious that the dynamics and interactions2 can be pretty different, too.

To wit: I found it very interesting that among a sea of developers, in talks and in the hallways, I could pick out common themes that wouldn’t surprise any practitioner: version control, deployment, scaling, configuration management… but one phrase I did not hear uttered once by a single person was “release engineering.”

What was especially interesting, having overheard bits and pieces of various conversations, was that the problem space people were tackling clearly had a great deal of overlap with the issues release engineering has been tackling for years.

It’s just no one called it that.

I haven’t quite been able to figure out why that is.

It’s possible that “release engineering” may have a “curmudgeony” connotation to it, so there’s a desire to distance from (any) roles that have traditionally been at odds with “lean manufacturing”3 concepts. Or, perhaps, it’s just that the bulk of the attendees at ChefConf hail from a so-called “[Web]Ops” background, and release engineering (and release engineers) weren’t an obvious part of the Web 1.0+-world?

I don’t know… but no matter your opinion of release engineering, all y’alls are doing it5… even if you don’t know it, or find the concept abhorrent.

Which brings me to the next thing I found fascinating: we did a word association segment for The Ship Show with various conference-goers that turned out not only be a lot of fun, but revealed the wide, often extremely disparate viewpoints on some standard ideas within the DevOps sphere. For instance:

  • “Release engineering” prompted responses spanning from “confusing”, “tiring”, and “hopefully a thing of the past” to “sanity”, “frictionless”, “really cool”, and “craft.”
  • “Configuration management” prompted “hard” and “Jeezum… yah… no… I dunno” all the way to “good”, “sanity”, and “important”7.
  • “Shell scripts” engendered, on the one hand, “old school”, “horrifying”, “obsolete”, “the fallback”, and “if you have to” all the way to “kinda awesome sometimes”, “I like ‘em”, “much needed”, and “actually pretty awesome.”
  • Our beloved “DevOps” excited utterances ranging from “I don’t know”, “ouch”, “dear God”, “confusion”, and “not-a-Thing” to “sweet”, “cool”, “so cool”, and “fun.”

You can draw your own conclusions8, but as someone who spends his days thinking about DevOps, the people who identify as part of that community, and the problems it faces, I found this illustrative of the group’s real heterogeneity.

In attempting to fix broken social and technical systems, I think we can often forget or gloss over the fact that we, and the systems we work on, aren’t as uniform as we might think, which is probably something we should all be more aware of.

Read More

"Red is bad, right?"

Eulogy for a Founding Father


About a month ago, I noticed a tweet from Coop:

Pouring out a little liquor for tinderbox today. Drinking the rest, because, you know, tinderbox.

It linked to a post describing the plan to end-of-life Tinderbox1.

As one of a handful of people who were required, in an employment capacity, to support Tinderbox in production2,3, I can certainly understand the elation at getting rid of the aged continuous integration system. It hasn’t changed much (or seen much maintenance, for that matter) since its original open source release fifteen years ago, and it certainly has plenty of warts4.

Having said that, part of me is sad at the… glee, for lack of a better word, at its demise.

Tinderbox is certainly antiquated by any modern standard, but it should not be forgotten that, having been released in 1998, it is very much the grandfather of continuous integration systems.

It may have “sucked,” but it facilitated a workflow and nurtured an ethos that is not only extremely important, but taken for granted today: namely the notion that individual developers should be “on the hook” when checking in, and have a responsibility to their peers to monitor the build and make sure the tree “stays green.”

It was Tinderbox that was largely responsible for introducing a generation of software engineers to this now-commonplace concept, and for helping a previous generation of engineers come to care about such things. Mozilla was the poster-child user for Tinderbox, but I know that at least VMware and Yahoo used it years before Hudson/Jenkins and Buildbot existed.

Beyond that, it sports features that those systems TO THIS DAY do not:

Read More

A Decade Ago


Today marks a somber event, which I started recording in a series of posts in my blog-of-the-time, ten years ago today.

What follows is an excerpt from the first one:

A family friend picked me up and we started the trek back towards Fort Collins. We heard one last update as we sped off from KDEN that they had him out of surgery and were waiting to revive him. He said the doctor quoted her a “10% chance,” but then was honest with him and said “about a 1% chance.” I don’t know why they bother trying to quantify it below 50%; at that point, it’s all the same: “We don’t really know.” Or “We don’t really want to tell you.”

It was at this point that I finally got the full story: Dad had been packing things up with Mom this morning when he fell to the floor, complaining of pains in his leg; he thought he had pinched a nerve in his back. They waited a few minutes and then Mom asked if she should call an ambulance. To her surprise, he said yes.

Upon arriving at the hospital, an EKG was done, with normal results. Next, a chest X-ray series was done, but he moved as the X-rays pierced his body, requiring them to re-shoot. They decided to do a CT scan instead. They rolled him over to radiology to get the series done. From what I understand, as soon as the tech finished the series, he noticed the tear in the aortic valve. Just as he went to call the doctors to get him into surgery immediately, Dad arrested on the table.

In front of Mom.

She was quickly pushed out of the way and they began working on him, trying to stabilize him. It didn’t work, so they put him on an artificial heart pump and took him up to surgery to repair the valve. He never woke up.

The doctor declared it all over an hour or so after the three hour surgery when they tried to warm his body back up and get his heart pumping again. In repairing the valve, they had won the battle… at the expense of the war.
Read More

“Burn bridges. It will be fine.”


In some ways, tech is a much smaller and more incestuous scene than I ever imagined – and I grew up in the midwest. There are also people in it who are dishonest, manipulative, abusive, bullying, mean-spirited, harassing and destructive. Early in my career I was very paranoid about maintaining amicable relationships with these individuals or staying quiet despite my moral qualms about their actions, because I was always told I’d have to work with them again, and that someday they might be on the other side of a hiring board or committee or collective I needed something from. I’ve since realized that these very fears ensure these assholes will have long prosperous careers, where we’re all forced to see them again.

Definitely worth the read.

On Footnotes


Longtime readers of The Sober Build Engineer may find today’s XKCD amusing1.

I get asked a lot of times why this blog tends to rely heavily on footnotes; it turns out it’s mostly a historical footnote2.

The original incarnation of this blog started back when I was one of Mozilla Corporation’s two3 full-time release engineers. Since a lot of the community communication regarding builds came through that blog at the time, a lot of the funnier content4 had to be relegated down to the footnotes.

And then it just sort of snowballed from there; it turns out people found them amusing and missed them; they’re also a great tool for citations, without being too intrusive.

So yeah… they’re here to stay… for now.

1 Thanks to Redfive for calling it out!
2 See what I did there?
3 And then only one… and then two again
4 Or stuff that was inside jokes, anyway

EPISODE 16: PaaS: Play or Passé?


Mozilla’s Brandon Burton, on The Ship Show


In case you missed it, Mozilla’s own Brandon Burton (aka @solarce) joined the panel for the most recent episode of The Ship Show1, to talk about his research into and experiences with building and rolling out an internal platform-as-a-service (PaaS).

He had a lot of interesting stories and data about introducing PaaS infrastructure at Mozilla to service web developers’ infrastructure needs. If you’re interested in how Mozilla’s forays into PaaS have fared, definitely check the episode out.

A great listen, and a huge shout out to Brandon for taking the time to join us.

And, “if build engineering, DevOps, release management, and everything in between” is of interest to you, feel free to join The Ship Show Crew every couple of weeks for discussions on precisely those sets of topics!

You can find us in the iTunes Music Store, or peruse the podcast archives.

1 Yes, the episode title—PaaS: Play or Passé—is a little cheesy…2
2 But I picked it, so I guess I only have myself to blame…
