Forestalling death marches.

This came up at work.

Now that I’m leading a team I’ve gotten suggestions from a couple of people about using external frameworks or rearranging our code. Some of the suggestions have come from people outside my team, some from inside it. And it makes sense: anyone with more than a few years of experience starts thinking about code reuse–and that leads them either to design patterns or to code frameworks that, in the short term, seem like they should save time.

But then, despite the best intentions of senior developers and architects and development managers, the whole thing turns into a cluster-fuck of biblical proportions. The projects that succeed do so because of a death march. The ones that fail do so because the programmers couldn’t make the death march. In fact, the idea is so ingrained in the collective psyche of the development community that we just assume all projects end with a death march.

In all the years I’ve been working on code, I’ve worked either on projects that shipped early without a death march, or on projects that turned into a death march for members of my team–and ultimately failed.

And I’ve noticed that the ones that do turn into death marches do so because they violate one of the following four principles:

Discoverability

A software developer cannot keep an entire project inside his head. It’s just impossible. So all the software developers I know rely upon a variety of shortcuts, mnemonics and half-remembered notions of how they put their own code together to keep track of what is in there.

Software developers often want to decry other people’s work–and that makes sense as well: if you didn’t write it, it probably doesn’t make sense.

The reality is that IDEs like Eclipse, Xcode and NetBeans contain all sorts of tools to help you figure out how someone else’s code works. It’s easy to find all the methods that call into a particular method, or to set a breakpoint in the code and see how it’s being called.

Assuming, of course, that you haven’t used a framework which uses Java reflection to invoke methods named in a configuration file (I’m looking at you, Spring!), or haven’t used the anonymous id type in Objective-C to invoke methods whose meanings differ subtly depending on the type of the object being invoked.
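To make that concrete, here’s a minimal sketch (the class and configuration names are hypothetical, not from any real framework) of the kind of reflective, configuration-driven dispatch that defeats a call-hierarchy tool: nothing in the source ever mentions AccountService.close() directly, so “find callers” comes up empty.

    import java.lang.reflect.Method;
    import java.util.Properties;

    // A toy stand-in for a real service class (the name is hypothetical).
    class AccountService {
        public void close() { System.out.println("account closed"); }
    }

    public class ConfigDriven {
        public static void main(String[] args) throws Exception {
            // Stand-in for an external configuration file.
            Properties config = new Properties();
            config.setProperty("handler.class", "AccountService");
            config.setProperty("handler.method", "close");

            // An IDE asked to "find callers of AccountService.close()" will
            // never see this call site: the class and method are just strings.
            Class<?> cls = Class.forName(config.getProperty("handler.class"));
            Method method = cls.getMethod(config.getProperty("handler.method"));
            method.invoke(cls.getDeclaredConstructor().newInstance());
        }
    }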

Discoverability allows a new software developer to figure out what’s going on in unfamiliar code. It requires good documentation, both in the comments and alongside the code. And it requires that we not use development techniques that defeat method-call analysis tools and class hierarchy browsers, or wrap such small pieces of functionality into individual libraries that we are effectively “programming” by using configuration files to plug together groups of a dozen lines of code or less.

The opposite of discoverability is opaqueness: the inability to look at existing code and understand what it is doing. When code is no longer discoverable, we as developers no longer feel comfortable reaching in and changing or modifying the existing code base, but instead resort to using the “Lava Flow” anti-pattern. And that contributes to code bloat, uncertainty in functionality, and increased development and testing costs.
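For what that looks like in practice, here is a contrived sketch (all names invented for illustration): nobody is sure what still calls the older flows, so they harden in place and new code gets poured on top.

    // A contrived illustration of the "Lava Flow" anti-pattern.
    public class InvoiceCalculator {
        // The original version. Nobody knows whether anything still calls
        // it, so nobody dares delete it.
        public double computeTotal(double subtotal) {
            return subtotal * 1.05; // old tax rate, hard-coded
        }

        // A "v2" added by a later developer who couldn't safely change the
        // original.
        public double computeTotalV2(double subtotal, double taxRate) {
            return subtotal + subtotal * taxRate;
        }

        // The version actually used today. The earlier flows remain,
        // bloating the class and confusing every new reader.
        public double computeTotalWithDiscount(double subtotal, double taxRate,
                                               double discount) {
            return (subtotal - discount) * (1 + taxRate);
        }
    }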

Compilability

Related to discoverability and debugability, this is the ability to use an IDE (such as Eclipse), check out someone else’s source code from a source control system, and easily and quickly build the end product.

The key here is “easily and quickly.”

A lack of compilability makes it harder for new people on your team to build the product or understand the project. It makes it harder for an operations person to rebuild your product. It puts firewalls in the way of QA testing, since it becomes harder for someone to slipstream a new revision to resolve known bugs. And it generally leads to a failure of debugability, described below.

I once worked on a project that, on a very fast machine, took an hour and a half to compile. Worse, the project could not be hosted in an IDE for debugging; the only way to debug it was by inserting print statements. So I used to start my day scrubbing bugs and coming up with a strategy for debugging and fixing them: since I only had time during the day to compile the product six times, I had to plan how I was going to fix multiple bugs in a way that maximized my six opportunities to build the product. And God help me if QA needed a new deployment: that meant rolling back my test changes and removing my print statements–and it ate one of my six chances to build the project that day.

Debugability

This one is key: to me, it means the ability, once your environment is set up, to hit the “debug” button and see the entire project work, from the UI all the way down the stack, through the client/server calls, to the database calls made at the back end.

GWT is wonderful in that, with the standard Eclipse project, you can write a client/server system and, on hitting “debug” and cutting and pasting the suggested URL into your browser, set breakpoints both in the front-end Java code that drives the UI, and in the back-end server calls, and see how everything works all the way down.
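For anyone who hasn’t seen it, the arrangement looks roughly like this–a minimal sketch modeled on GWT’s standard RPC pattern, with illustrative names: a shared service interface visible to the client code, and a servlet implementation on the server side, both stoppable in the same Eclipse debug session.

    // --- GreetingService.java (shared between client and server) ---
    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    @RemoteServiceRelativePath("greet")
    public interface GreetingService extends RemoteService {
        String greet(String name);
    }

    // --- GreetingServiceImpl.java (server side) ---
    import com.google.gwt.user.server.rpc.RemoteServiceServlet;

    public class GreetingServiceImpl extends RemoteServiceServlet
            implements GreetingService {
        @Override
        public String greet(String name) {
            // A breakpoint here is hit in the same Eclipse debug session
            // that is also stopping in the client-side UI code.
            return "Hello, " + name;
        }
    }

The client calls it through the corresponding GreetingServiceAsync interface with an AsyncCallback; the point is simply that both ends live in one debuggable project.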

One of the people on our team was so wedded to Maven that he suggested we sacrifice Eclipse, and sacrifice debugability, in order to integrate with Maven. (The problem was that Maven, Eclipse and GWT didn’t interact very well together–and while I’m sure things have improved, at the time it was “Maven, Eclipse, GWT: pick two.”)

Let’s just say it was hard for me not to visibly lose my temper.

Without debugability there is no discoverability. There is no way to set a breakpoint and know how the project works. There is no easy way to fix bugs. There is no easy way to do your work. It turns what should be an interesting and perhaps fun project into the death march from hell.

In the aforementioned project, because I only had six chances to build the product each day, and could only use print statements to expose the internal state of the system, I had to be tremendously clever to figure out what was going on. I had to rely on code review–and there were a number of bugs that easily took me a week to resolve which, had I been able to set a breakpoint in Eclipse and run the entire product by hitting “debug”, would literally have taken me minutes to fix.

Think of that: without debugability, a problem that should take minutes took hundreds of times longer to resolve.

Debugability is easy to lose. Use the wrong build system, one that is incompatible with a good IDE. Set up a product that is built across multiple projects, some of which ship as opaque libraries. Use a client/server system where one or both pieces contain large opaque elements. Many developers don’t even realize the value of debugability; they’re still used to writing code in vi or Emacs, and who wants to use an IDE anyway? It’s only when they finally turn to an IDE to debug–and discover they’ve set up a build process that cannot be altered to fit within that IDE–that they find themselves stuck with only six chances a day to find their bugs, wasting a week fixing what should have taken minutes.

Flexibility

I’m a huge believer, within UI systems, in the delegate design pattern and the principles behind MVC–though I can’t say I fully appreciate the MVC pattern as a cure-all for everything that ails UI development. (I’ve seen, for example, the suggestion that Ruby on Rails implements web MVC: *shudder.* Unless you’re doing pure AJAX, what people call “MVC” really isn’t–so I tend to cast a leery eye toward such claims.)

I believe in them because I believe it should be possible to take any component in your system, rip it out, replace it with a substitute component, and have minimal (if any) impact on the overall code base. You should, for example, be able to replace a GWT Button with a custom button built using GWT Label and not have to rewrite your entire user interface. You should be able to take your custom text editor panel and plug in the NSTextField and (aside from renaming a few methods to fit the delegate protocol) have no changes to your overall program. If your custom table code doesn’t work, you should be able to plug in someone else’s custom table code and–again, with little change besides renaming a few interface methods–use the new custom table code without rewriting your entire application.
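Here’s a minimal sketch of the idea in Java (all names hypothetical): the application talks to any editor through one small interface, and the editor talks back through a delegate, so swapping implementations touches a single constructor call.

    // What the application calls on any editor component.
    interface TextEditor {
        void setText(String text);
    }

    // What any editor component calls back on the application.
    interface TextEditorDelegate {
        void textChanged(String newText);
    }

    // One editor implementation...
    class SimpleTextEditor implements TextEditor {
        private final TextEditorDelegate delegate;
        SimpleTextEditor(TextEditorDelegate delegate) { this.delegate = delegate; }
        public void setText(String text) { delegate.textChanged(text); }
    }

    // ...and a drop-in replacement. Swapping one for the other touches a
    // single constructor call, not the whole application.
    class SpellCheckingTextEditor implements TextEditor {
        private final TextEditorDelegate delegate;
        SpellCheckingTextEditor(TextEditorDelegate delegate) { this.delegate = delegate; }
        public void setText(String text) {
            // (imagine a spell-check pass here)
            delegate.textChanged(text);
        }
    }

    public class EditorDemo implements TextEditorDelegate {
        public void textChanged(String newText) {
            System.out.println("text is now: " + newText);
        }

        public static void main(String[] args) {
            EditorDemo app = new EditorDemo();
            TextEditor editor = new SpellCheckingTextEditor(app); // or SimpleTextEditor
            editor.setText("hello");
        }
    }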

And, especially with a user interface, you should be able to rearrange the buttons, the controls, the layout of your system without having to do much work beyond perhaps changing a few observer calls.

Without flexibility, when product requirements change (as they inevitably do), it becomes difficult to make the changes needed without revamping large amounts of code. And if your code is not discoverable, those revamps often follow the “Lava Flow” anti-pattern. Worse, if your code is not debuggable, those changes become incredibly risky and problematic–and your entire project becomes a death march very quickly.

On my team I’m trying to communicate these four points. Beyond that, I don’t care what people do–I assume all of them are extremely smart, capable and self-motivated folks who are doing what they do because they want to do it, and because at some level they love doing it.

But these four points must be maintained. Without discoverability, compilability, debugability and flexibility, we will quickly sink into a death march–and not only does that mean long hours, but it means no one will want to be at work. It is discouraging knowing you have a deadline in a month, five bugs to fix, and no way to fix those bugs faster than one bug a week.

Me, I’d rather fix those bugs in an hour–and take the next week off to read random development blogs.

3 thoughts on “Forestalling death marches.”

  1. Couldn’t agree more.

    I’ve often thought about this ‘discoverability’ thing. I even tried Googling ‘design for discoverability’ once, but got no useful results. Naming is the first step towards discoverability: I exposed a save() method once, but then had to add a “forced save” (save, even if in invalid state). I contemplated calling the new method forceSave(), but then it wouldn’t come up right below save() when the developer hit Ctrl-Space for auto-complete. So I opted for the slightly forced (ha!) saveWithforce(). I think that’s a first step to discoverability via auto-complete.

    I once groaned at another developer’s lava-flow on top of my own API. There was a perfectly proper way of plugging in his code; he just didn’t go to the trouble of studying the model. But that’s it: developers don’t want to study your little model in detail to figure out how to best fit in, they just want to get their use case done. They will take time to study language features or common, pervasive 3rd party libraries in their ecosystem, because they deem this time-investment worthwhile: they can re-use this knowledge in the next project (or next job), and chances are that the pervasive library has proper plug-points and a design that’s had many eyes on it, not some internal guy’s pet project.

  2. Pingback: Things I think about when starting a new project that, surprisingly, many people appear not to. | Development Chaos Theory

  3. Pingback: I think part of the problem with CS educations today is the overuse of design patterns. | Development Chaos Theory
