Tech Debt, Depreciation, and Features
There was some discourse on Twitter (yes, it's still there, for now) yesterday about the nature of Technical Debt and how effective it is as a metaphor. The real seed was the new boss arguing with—and eventually firing—one of the few remaining engineers on the Android team. But as far as I can tell, Cindy Sridharan—a person whose opinions are always worth considering—really kicked off the discourse segment with this tweet:
And Marco Rogers—another worthwhile follow, while you can—came in pretty hot with:
I don't fully agree with Marco here. I think there's a lot of value in the debt metaphor for describing technical decisions within a team. Making use of "good," deliberate debt to ship something faster than you otherwise could is a trade-off, and in healthy product teams, that kind of good debt gets paid down regularly. (I even think you can extend that metaphor.)
This kind of debt—what Martin Fowler would call "deliberate and prudent"—is usually much more concrete corner-cutting than what Marco's describing at the top of that thread. It can be captured in issue trackers, estimated—if that's a thing you do—prioritized, and fixed. It's things like "this API is designed to be extensible, but for now the implementation is not. We will fix the implementation after it ships." Or "using these two features together can cause data loss, so for now we will prevent using them together. We will fix the bug after we ship."
(And for the record, there is plenty in that thread I do agree with.)
Outside of product teams, though, I generally try to use Camille's phrase: "sustaining engineering." It's the work we do to keep ourselves effective and responsive over time—and engineers should absolutely understand and learn to communicate how that work contributes business value.
But in the original discussion that Cindy quoted, technical debt is absolutely the wrong metaphor. What Eric, the now-former-Twitter developer, describes as tech debt is the consequence of a team that never prioritized performance, and did not treat it as a feature.
When we say performance is a feature, that means that while we may not always work on it, we should test, measure, and monitor it. We shouldn't allow it to "break"—and that means we have an agreed-upon, UX-driven threshold for what counts as "broken."
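One way to make that threshold concrete is a performance "budget" check that runs alongside other tests. This is a minimal sketch, not any team's actual tooling: the budget number and the `check_performance_budget` helper are hypothetical, standing in for whatever UX-driven line you've agreed on.

```python
# Hypothetical performance-budget check: "broken" means the 95th-percentile
# load time exceeds an agreed-upon, UX-driven threshold.

P95_BUDGET_MS = 2000  # hypothetical threshold agreed on with product/UX


def check_performance_budget(samples_ms: list[float],
                             budget_ms: float = P95_BUDGET_MS) -> bool:
    """Return True if the p95 of measured load times is within budget."""
    ordered = sorted(samples_ms)
    # 95th percentile via the nearest-rank method
    idx = max(0, int(0.95 * len(ordered)) - 1)
    return ordered[idx] <= budget_ms
```

A check like this is what lets you treat performance as a feature with a definition of "working": it passes quietly while you ignore performance, and fails loudly when it actually breaks.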
In multiple roles now, engineers on my teams have done work to measure the impact of better performance. In every case, there's been a fairly obvious point of diminishing returns: once it's this fast, making it faster doesn't have much impact on engagement, business metrics, or even SEO. Somewhere around there—maybe a bit shy of that point, if the remaining gains are already marginal for the engineering time—is usually a good place to set a threshold.
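Finding that point can be as simple as walking your measurements from slow to fast and asking where the marginal gain per unit of speedup falls off. The sketch below uses entirely made-up numbers and a hypothetical threshold; real data is noisier, but the shape of the analysis is the same.

```python
# Hypothetical data: (p95 load time in ms, engagement rate). Illustrative only.
measurements = [
    (4000, 0.20), (3000, 0.26), (2000, 0.30), (1500, 0.315), (1000, 0.32),
]


def diminishing_returns_point(points, min_gain_per_100ms=0.002):
    """Walk slow -> fast; return the load time after which each further
    100ms of speedup buys less than min_gain_per_100ms of engagement."""
    pts = sorted(points, reverse=True)  # slowest first
    for (slow_ms, slow_eng), (fast_ms, fast_eng) in zip(pts, pts[1:]):
        gain_per_100ms = (fast_eng - slow_eng) / ((slow_ms - fast_ms) / 100)
        if gain_per_100ms < min_gain_per_100ms:
            return slow_ms  # pushing past this point isn't worth much
    return pts[-1][0]  # never flattened out within the measured range
```

With these numbers, going from 2000ms to 1500ms still moves engagement meaningfully, while 1500ms to 1000ms barely does, so the analysis points at 1500ms as a sensible place to draw the line.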
And then, like any other feature, you can stop actively working on it. This is sometimes hard for engineers, because it's fun. Performance optimization often feels like the "real engineering work," especially when you have to find new, clever solutions for each tenth of a second.
"Debt" is a useful metaphor, but not for everything, and not when it is used as a handwavy catch-all term.