Bad Example for Test-Driven Development

by Krishna on August 13, 2009

Uncle Bob has a point with respect to “generic code and specific tests”, but the example he gives to support his argument (the “Prime Factors Kata”) is a poor one. In fact, it supports the argument against Test-Driven Development, namely that TDD pays too little attention to design and treats design as an afterthought. That is not necessarily a valid argument, but examples like these fuel such criticism.

If you go through the entire PowerPoint deck for the kata, you will see how tests are written to validate the output of the algorithm that generates the prime factors of integers, starting from 1. The algorithm evolves from returning an empty list (as 1 has no prime factors) to returning a hard-coded value (2) for the tests for 2 and 3, gets progressively more convoluted as it handles bigger numbers, and then, lo and behold, the programmer suddenly realizes that the algorithm can be simplified.
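For reference, the generalized algorithm the kata eventually arrives at fits in a handful of lines. Here is a sketch in Java of that kind of solution (the class and method names are mine, not the exact code from the slides):

    import java.util.ArrayList;
    import java.util.List;

    public class PrimeFactors {
        // Repeatedly divide out each candidate factor, starting at 2.
        // Any candidate that still divides n at this point must be prime,
        // because all smaller factors have already been divided out.
        public static List<Integer> generate(int n) {
            List<Integer> primes = new ArrayList<Integer>();
            for (int candidate = 2; n > 1; candidate++) {
                while (n % candidate == 0) {
                    primes.add(candidate);
                    n /= candidate;
                }
            }
            return primes;
        }
    }

Note that inputs below 2 simply fall through the loop and return an empty list.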

This is truly silly. First, you could easily construct a mental model of the right algorithm before you even start writing it, instead of needing tests to lead you in the proper direction. Second, what would you do with a more complex problem, where you may need many more tests before you arrive at generic, simplified code?

Third, some of the test cases in the example are both wasteful and insufficient. You want to focus on edge cases: negative numbers, 0, 1, prime numbers, non-prime numbers with a single repeated factor, and non-prime numbers with multiple prime factors that may or may not repeat. In the example, not all of these edge cases are covered, yet some cases are covered multiple times. A sketch of what edge-case-focused tests could look like follows below.
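Here is one hypothetical way such a test class might look, using JUnit against the PrimeFactors sketch above (these are not the tests from the slides, and they assume inputs below 2 yield an empty list):

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.Collections;

    import org.junit.Test;

    public class PrimeFactorsTest {
        @Test
        public void numbersBelowTwoHaveNoPrimeFactors() {
            // Edge cases: negative numbers, zero and one.
            assertEquals(Collections.emptyList(), PrimeFactors.generate(-7));
            assertEquals(Collections.emptyList(), PrimeFactors.generate(0));
            assertEquals(Collections.emptyList(), PrimeFactors.generate(1));
        }

        @Test
        public void aPrimeIsItsOwnOnlyFactor() {
            assertEquals(Arrays.asList(13), PrimeFactors.generate(13));
        }

        @Test
        public void compositeWithOneRepeatedFactor() {
            assertEquals(Arrays.asList(2, 2, 2), PrimeFactors.generate(8));
        }

        @Test
        public void compositeWithDistinctFactors() {
            assertEquals(Arrays.asList(2, 3, 5), PrimeFactors.generate(30));
        }
    }

Each case exercises a distinct branch of behaviour, rather than re-testing the same path with different numbers.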

The biggest problem I see is this: why are the tests driving the implementation of the program? The tests should verify the output of a method. There may be an incidental benefit in that they sometimes point out defects in the implementation, but they shouldn’t impose themselves as drivers of the design and implementation. In this regard, I agree with Rebecca Wirfs-Brock’s article on “Design for Test”.

So where did I agree with Uncle Bob? Well, on a few things. First, it is true that you will end up writing more specific tests. The reason is that your requirements become more specific over time, because you know more about the problem. This requires you to test more scenarios and hence write more tests. This can lead to more code, but ongoing refactoring helps you streamline the code and make it more generic.

Therefore, what starts out as very specialized code with few tests ends up as generic code with more specific tests. But note that this is a function of incomplete knowledge of the requirements, and it usually plays out over a much longer timespan than one coding session. Before you sit down to code, you should know all the requirements applicable to your code, design properly, and then write your code. The tests written at that point test the requirements known at that time. In fact, the example of FitNesse that Bob mentions illustrates this point: that is a one-year timespan, not a 4-hour coding session.

What is really driving the change from specialized code to generic code in this cycle is the changing requirements. New tests are a manifestation of those changes, not drivers of code changes in themselves. You should write generalized code whenever you sit down to code. But new requirements can lead to code changes, which may force you to make the code more generic than it already is.
