The Cost of Failing to Brief Your Approach

10/28/2010

One of the blogs I follow, the Aviation Mentor, recently wrote a fascinating post about the Oakland VORTAC.[1]

For the av-geeks in the crowd, I definitely recommend taking the couple of minutes to read it.

For those not here for the av-geekery: there was some road construction[2] done at the Oakland airport about six years ago. Since then, certain portions of the navigation signal provided by the VORTAC have been unreliable or outright unusable. The FAA has been trying to fix the problem ever since, but each “solution” has caused another problem.

For instance, they recently “Dopplerized”[3] the VORTAC to improve its accuracy and try to make the previously-unusable radials usable again. But that change made the navigation beacon unusable for high-altitude operations.[4]

This whole debacle may seem familiar to software engineers: someone, or some project spec, requires a change to the way the system was designed, and all sorts of unintended consequences, of varying severity, result.

Sometimes they’re not even all known when the initial change occurs, either because they’re too subtle or too disjoint to be noticed immediately, or because they’re caused later by a patch-to-a-patch-to-a-patch attempt to solve a problem that the forced change created in the first place.

Configuration management really only exists to analyze and effectively manage change, so I’m not arguing that we should never make any changes to the systems on which we work. But this story is a great illustration of the myriad unintended consequences of failing to adequately study, weigh, and plan a change, and then to execute it deliberately rather than haphazardly, so as to minimize those “gotchas!”

The idea that we should make the time to perform this analysis has seemingly become unpopular in recent years: it often gets characterized as letting “the perfect be the enemy of the good,” or as “stop energy.”

Like most carpenters, I’ve never understood why “measure twice, cut once” is demonized so. Maybe it’s because there’s an assumption that we, as software engineers, have an infinite amount of “wood” available.

Back to the Oakland VORTAC: investigations continue, but the unintended consequences of this probably-unnecessary and arguably-illegal initial construction have turned out to be many. Various approach and departure procedures are no longer available unless you’re flying a GPS-equipped aircraft, which, apparently, many cargo aircraft are not.

As a result, many FedEx and UPS heavy jets must now fly over communities during early-morning and late-night hours until they are high enough to receive on-course vectors from air traffic control, where before they could have flown a defined departure procedure out over the Bay.

In some cases, even that won’t help, and the procedure simply isn’t available. As the blog post above mentions, this left its author and his student stranded, and they weren’t even trying to land at Oakland! But—you guessed it—the approach to their destination airport was unavailable because its missed approach segment was defined by (drum roll) the Oakland VORTAC.

All of this consternation because someone with a personal profit motive rammed through a $100 million “feature” that was tacked on without consideration of its impact on users.

In some sense, it should be comforting that this story turns out to be not only age-old, but also observable in fields other than software development… but it isn’t.

_______________
[1] And here, you thought aviation navigational aids were boring!
[2] Certainly of an ethically questionable nature; the FBI investigated it to determine whether it was also legally questionable.
[3] I learned a new term today!
[4] Which is important, since OAK defines six Jet airways.