another reason % of time writing tests is meaningless

Earlier this month, I blogged a response to – “What would you say is the average percentage of development time devoted to creating the unit test scripts?”.  As I was telling a friend about it, I realized that I missed an important point!

The question also implies that development time is constant.  Or maybe not.  But if I tell you that I spend X% of my time creating unit tests, and your project currently spends Y time on development, I imagine you conclude that development with unit testing takes Y + Y*X.

However, this isn’t the case.  The real time is Y + Y*X – Z, where Z is the time saved by finding defects in the code faster.  The benefits extend beyond the current project, but this factor matters even if one measures only within the current project.
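To make this concrete with purely made-up numbers: suppose your project spends Y = 100 hours on development, and I tell you unit testing takes X = 15% of my time.  The naive estimate is 100 + 15 = 115 hours.  But if the tests save Z = 25 hours of debugging and late defect fixing within the same project, the real total is 100 + 15 – 25 = 90 hours, less than the original 100.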

Or, another way of expressing this: “your Y is different from my Y.”  My development time has already been adjusted by creating software in a different way.

This seems like a convoluted way of expressing it.  Has anyone come across a better explanation?

a response to – “What would you say is the average percentage of development time devoted to creating the unit test scripts?”

How long does it take to compile?  We don’t ask that.  It would be absurd.  The fact that people ask how long unit testing takes means they see it as an optional cost to be incurred.  What I want to know is why they don’t ask for a similar accounting of the cost of NOT writing unit tests!

I think the “cost” of unit testing falls into three categories:
1) The early days/training/learning
2) The ongoing cost
3) The cost of not testing/cost of errors

The early days/training/learning
The first time you do something, it takes longer.  This is why we pay experienced people more. It is expected there is a training or learning cost. If an activity is worthwhile, one recoups this cost quickly.  It is called an investment.

The problem I see is that some teams don’t get past this point.  They see unit testing taking longer the first time and imagine it will take that long forever.  I wonder how these people learned Java or regular expressions or anything else.  The difference, I suspect, is that they wanted to learn that hard tech.  If one believes the current buggy state of affairs works, or can’t imagine a better way, staying motivated to get over the learning hump is difficult.  This is why having a mentor or coach promoting unit testing is helpful.

The ongoing cost
When writing unit tests as part of the task, it is difficult to measure the amount of time they take.  I really can’t tell you what percentage of development time I spend “writing tests” because it happens at the same time as the other parts of the task.  I also couldn’t account for the percentage of time I spend typing, thinking, compiling, etc.  These things occur simultaneously.

Many of the people who complain that the ongoing cost of testing is too high are treating it as a separate activity.  They write the code, manually test it, debug, repeat, and then write a unit test.  Done that way, the entire cost of writing the test is extra, and often resented, because they really are “done” before they start writing the tests.  In this case, the unit tests only have value as regression protection, not as part of the development process.  While I suppose that is better than nothing, I find this a way to make writing tests more costly than it needs to be.

I’m not saying one has to use TDD (test driven development) to see a benefit.  Using the unit tests as a replacement for that manual testing folds them into a cost you need to incur anyway.  Granted, it takes a little longer to write a test than to test by hand, but the difference is minimal.  It is also offset by far fewer regression issues and the ability to test error conditions easily.  And if you write the tests in close proximity to the code, it is even faster.
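As a minimal sketch of the difference, here is a JUnit 4 test with a JDK method standing in for the code under test.  The error-condition check runs on every build, where a manual check happens once, if at all:

import static org.junit.Assert.*;

import org.junit.Test;

public class ParseIntTest {

   // the check you would otherwise do by hand, once
   @Test
   public void parsesDigits() {
      assertEquals(42, Integer.parseInt("42"));
   }

   // the error condition: easy to cover here, awkward to reproduce manually
   @Test(expected = NumberFormatException.class)
   public void rejectsNonNumericInput() {
      Integer.parseInt("forty two");
   }
}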

There’s another measurement issue going on here.  Yes, it takes longer to write a test than to test manually ONCE.  If you are writing a script that will only be run once, it isn’t worth it.  But how many times have you had an application that was written once and then never touched again?

The cost of not testing/cost of errors

This is the part that really bothers me.  There is a cost to not doing an activity.  It tends to be swept under the rug and treated as a cost of doing business.  Opportunity cost is hard to see, but it still exists.  The two biggest costs I see are finding errors late and regression errors in future releases.

  1. We’ve all seen the curve showing that errors fixed late cost far more than errors found during development.  Unfortunately, developers get to claim they are “done” when they have really just moved the errors to later, where they are more expensive, while the developer still gets to claim they finished the task in X days.  Managers need to stop allowing this.
  2. The future-release problem is even harder.  I think more of the value of having unit tests comes from the future.  Some developers claim that they don’t need unit testing because they produce high quality code without it.  Sometimes this is even true.  However, what happens when that developer looks at the code in a year, or leaves the team?  The unit tests are code and live on.  My development velocity on maintenance/enhancement tasks is much higher with unit tests because I don’t have to reconstruct what I was thinking a year ago when I wrote the code.

What if my management still needs a cost?

This blog post is inspired by someone asking me this question: “What would you say is the average percentage of development time devoted to creating the unit test scripts?”  While I take issue with the question, he still needs an answer.  As a result, here are links to three articles/webpages that use numbers.

  1. Misko Hevery comes up with a figure of 10% cost.  He calls it a 10% tax and points out the benefits that come with a tax.  Note that he writes tests as an integral part of his process and is fluent in doing so.  He also actively dispels the myth that testing takes twice as long.
  2. Brian Johnston discusses costs.  I like that he covers the cost of not testing too.  While he picks extreme figures (based on the worst-case myth), he does cover risks and things to take into account for your own shop.  He also discusses the hardest 15% of tests; the tests written earliest are the easier ones and cost less.
  3. A variety of opinions wiki’d. While numbers are mentioned, the real value of the page is the caveats for dealing with such numbers.

Conclusion

When talking to management about a cost, make sure they know about the associated benefits.  And the cost of other options.  Doing nothing is an option and has a definite cost!

see part 2 of this blog post

Refactoring JUnit 3.8 to 4.0 when hierarchy extends TestCase

Problem:

I want to start writing tests in JUnit 4.0, but I have a lot of tests in JUnit 3.8.  I can’t just start writing tests in 4.0 because I rely on common setup/assertions in my custom superclass, which extends TestCase.  (This means JUnit will only look for 3.8-style tests.)

Solution:

Create one or more new classes to contain the static method equivalents you need.  New code can use static imports to get these methods, and the original abstract class can delegate to them.

Limitations:

Some frameworks require you to extend a certain class.  Until the framework provides a version without the JUnit 3.8 dependency, there’s not much you can do here.  You can still use this approach for tests that don’t require that special framework.

UML for refactoring: (diagram omitted)

Sample code:

package com.javaranch.asserts;

import static com.javaranch.asserts.JavaRanchTestUtil.*;
import junit.framework.*;

// still extends TestCase, so existing 3.8-style tests keep running unchanged;
// the shared setup logic itself now lives in JavaRanchTestUtil
public abstract class AbstractJavaRanchTestCase extends TestCase {
   @Override
   public void setUp() throws Exception {
      super.setUp();
      // delegate to the static method so 4.0 tests can share it via static import
      setUp_propertyConfiguration();
   }
}
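
For completeness, here is a sketch of the other two pieces: the static utility class and a new JUnit 4.0 test that uses it.  The method bodies and the assertContains helper are placeholders I made up for illustration; substitute your own shared setup and assertions.

package com.javaranch.asserts;

public class JavaRanchTestUtil {

   private JavaRanchTestUtil() {
      // static utility methods only; no instances
   }

   // formerly an instance method on the abstract superclass
   public static void setUp_propertyConfiguration() {
      // placeholder: load whatever shared property configuration the tests need
   }

   // common assertions become static methods as well (placeholder example)
   public static void assertContains(String expected, String actual) {
      if (!actual.contains(expected)) {
         throw new AssertionError("expected [" + actual + "] to contain [" + expected + "]");
      }
   }
}

And a new test written against JUnit 4.0:

package com.javaranch.tests;

import static com.javaranch.asserts.JavaRanchTestUtil.*;

import org.junit.Before;
import org.junit.Test;

// no superclass needed; JUnit 4.0 finds tests by annotation
public class HypotheticalJUnit4Test {

   @Before
   public void setUp() {
      setUp_propertyConfiguration();
   }

   @Test
   public void sharedAssertionsStillWork() {
      assertContains("ranch", "javaranch");
   }
}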

Conclusion:

I’ve done this refactoring enough times that it is second nature by now.  This blog post documents the refactoring since I couldn’t find it written up on the web.

What other problems/sticky situations have you encountered when mixing JUnit 3.8 and 4.0 code?