Technical debt is usually easy to identify. Typical indicators are:
- Poorly named variables, functions or classes
- Poorly formatted code
- Duplicate code
- An excessive use of 'true' and 'false' constants in code
- Functions with multiple 'return' statements
- Functions with any number of 'continue' statements
- A flat, wide class hierarchy
- A tall, narrow class hierarchy
- Cyclic dependencies among classes
Most of these indicators can be measured automatically. (The meaningfulness of names cannot.) Measuring these indicators gives you an estimate of your technical debt. Measuring them over time gives you an estimate of the trajectory of technical debt: how fast it is growing.
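As a minimal sketch of what "automatic measurement" can look like, here is one way to count an indicator from the list above (functions with multiple 'return' statements) using Python's standard `ast` module. The sample source and the function name `count_returns` are illustrative, not from the original post.

```python
import ast

# Illustrative input: one function with several returns, one with a single return.
SOURCE = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

def double(x):
    return 2 * x
"""

def count_returns(source):
    """Map each top-level function name to its number of 'return' statements."""
    tree = ast.parse(source)
    counts = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            returns = [n for n in ast.walk(node) if isinstance(n, ast.Return)]
            counts[node.name] = len(returns)
    return counts

print(count_returns(SOURCE))  # {'clamp': 3, 'double': 1}
```

Run such a script over the codebase on every commit and you get exactly the kind of trajectory described above: a per-function count today, and a trend line over time.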
But measurements are not enough. What it comes down to is this: I can quantify the technical debt, but not the benefits of fixing it. And that's a problem.
It's a problem because very few people find such information compelling. I'm describing (and quantifying) a problem, while not quantifying (or even describing) the benefits of the solution. Who would buy anything with that kind of sales pitch?
What we need is a measure of the cost of technical debt. We need to measure the effect of technical debt on our projects. Does it increase the time needed to add new features? Does it increase the risk of defects? Does it drive developers to other projects (thus increasing staff turnover)?
Intuitively, I know (or at least I believe) that we want to reduce technical debt. We want our code to be "clean".
But measuring the effect of technical debt is hard.