Tactical and Strategic Programming

What became apparent while working in this old codebase is that there is a direct correlation between what engineering and product management valued and the code that was produced. It immediately reminded me of a chapter in John Ousterhout’s excellent “A Philosophy of Software Design” about tactical and strategic programming.

In his book Ousterhout acknowledges that “many organizations encourage a tactical mindset, focused on getting features working as quickly as possible” and that tactical programming is “trying to finish a task as quickly as possible”.

On the other hand, most of the “code in any system is written by extending the existing code base, so your most important job as a developer is to facilitate those future extensions,” and producing a great design along the way is strategic programming. For that to happen, the organization has to reward investments in the system rather than just doing things as quickly as possible.

And while looking at the code, I could see the ebb and flow between the two modes.

“Agile” as the end goal

The biggest predictor of code quality and cohesiveness was how team productivity was being measured. I noticed a trend: during periods of “high productivity,” with retrospectives that fawned over how much work was crammed into each sprint, the code became harder to understand, microservices were spun up with no documentation, and debt that had accrued in prior versions just kept sitting there. By all measured metrics things were humming along without a hitch, but a look at the code and documentation told another story: new features were added without tests, existing code was never refactored, and so on. This happens when certain parts of the process ceremony are valued more highly than the end deliverable - working software.

Ousterhout calls out a type of programmer he dubs a “tactical tornado” - a person who is able to write code faster than other developers and deliver features quicker, without regard for quality. If the benchmark of productivity is shipping code as quickly as possible without addressing technical debt, then you can see the marks of developers aiming to hit that goal. A handful of these shortcuts are probably not noticeable, but dozens or more begin to slow down the people maintaining existing services. A short burst of work like this on a product is fine, but a prolonged period will eventually show up as code rot.

John Cutler has written about this phenomenon as well, referring to it as a feature factory.

If the primary marker of doing-the-right-thing is story points and burndowns, then there’s a pretty high likelihood that engineering isn’t actually solving the problem and is performing only in a slingin’-code capacity. I’d love to know more about organizations that feel this is working for them, because it may be that my sample size is too small or that they’ve been keeping churn on their teams relatively low.

Indicators of investing in software

On the other side, there are indicators between versions of a codebase that show a period of investment: adding tooling, instrumenting for observability, or addressing productivity papercuts. Tackling complexity that accrued over the years shows an acknowledgement that the codebase could be simpler. From a product perspective, this meant admitting where a solution was overwrought and allowing a simpler one to be shipped. From an engineering perspective, it meant adding tests in areas where they were missing, which in turn made refactoring easier.

One of the best feelings as a developer is opening up a solution to see what changed between versions and finding less code. I know I may be in the minority here, but I’d prefer to see fewer lines of the right code. I know that when I ship something, if I did my job correctly, it will last far longer than I will, and I want the person maintaining it after me to not have to worry about changing its behavior or fight against code that I wrote.


2019-03-16