Earlier this month, I blogged a response to the question "What would you say is the average percentage of development time devoted to creating the unit test scripts?" As I was telling a friend about it, I realized that I missed an important point!
The question also implies that development time is constant. If I tell you that I spend X% of my time creating unit tests, and your project currently spends Y time on development, I imagine you conclude that development with unit testing takes Y + Y*X.
However, this isn't the case. The real time is Y + Y*X – Z, where Z is the time saved by finding defects in the code faster. The benefits extend beyond the current project, but this factor matters even if you measure only within the current project.
Another way of expressing this is that "your Y is different from my Y." My development time has already been adjusted by creating software in a different way.
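To make the arithmetic concrete, here is a tiny sketch with made-up numbers; the values of Y, X, and Z below are entirely hypothetical and just illustrate the formula:

```python
# Toy illustration of Y + Y*X - Z versus the naive estimate Y + Y*X.
# Y: baseline development time (hours), X: fraction of time spent on unit
# tests, Z: hours saved by finding defects faster. All values hypothetical.

def total_time(y, x, z):
    """Total development time when unit testing is included."""
    return y + y * x - z

baseline = 100.0       # Y: hours without unit tests (hypothetical)
test_fraction = 0.25   # X: 25% extra spent writing unit tests (hypothetical)
time_saved = 30.0      # Z: hours saved by catching defects sooner (hypothetical)

naive_estimate = baseline + baseline * test_fraction
with_tests = total_time(baseline, test_fraction, time_saved)

print(f"naive estimate (Y + Y*X): {naive_estimate:.0f} hours")  # 125 hours
print(f"actual (Y + Y*X - Z):     {with_tests:.0f} hours")      # 95 hours
```

With these particular numbers, unit testing is a net win; in general it pays off within the project whenever Z exceeds Y*X.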
This seems like a convoluted way of expressing it. Has anyone come across a better explanation?