Brilliant post on testing by Michael Bolton (emphasis mine):
Bug Investigation and Reporting is time spent on tests that find bugs; Test Design and Execution is time spent on tests that don’t find bugs.

| Module | Time spent on tests that find bugs | Time spent on tests that don’t find bugs | Total Tests |
|--------|------------------------------------|------------------------------------------|-------------|
| A      | 0 minutes (no bugs found)          | 90 minutes (45 tests)                    | 45          |
| B      | 10 minutes (1 test, 1 bug)         | 80 minutes (40 tests)                    | 41          |
| C      | 80 minutes (8 tests, 8 bugs)       | 10 minutes (5 tests)                     | 13          |
[…] If we are being measured based on the number of bugs we find (exactly the sort of measurement that will be taken by managers who don’t understand testing), Team A makes us look awful—we’re not finding any bugs in their stuff. Meanwhile, Team C makes us look great in the eyes of management. We’re finding lots of bugs! That’s good! How could that be bad?
On the other hand, if we’re being measured based on the test coverage we obtain in a day (which is exactly the sort of measurement that will be taken by managers who count test cases; that is, managers who probably have an even more damaging model of testing than the managers in the last paragraph), Team C makes us look terrible. “You’re not getting enough done! You could have performed 45 test cases today on Module C, and you’ve only done 13!”
And yet, remember that in our scenario we started with the assumption that, no matter what the module, we always find a problem if there’s one there. That is, there’s no difference between the testers or the testing for each of the three modules; it’s solely the condition of the product that makes all the difference.
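The point of the scenario can be made concrete with a few lines of code. This is a minimal sketch using the figures from the table above (the module names and dictionary layout are my own): the exact same day of testing ranks the modules in opposite orders depending on which metric the manager picks.

```python
# Figures from the scenario's table: for each module, bugs found and
# test cases executed in the same 90-minute testing session.
modules = {
    "A": {"bugs": 0, "tests": 45},
    "B": {"bugs": 1, "tests": 41},
    "C": {"bugs": 8, "tests": 13},
}

# Metric 1: rank by bugs found (more bugs = "better" testing).
by_bugs = sorted(modules, key=lambda m: modules[m]["bugs"], reverse=True)

# Metric 2: rank by test cases executed (more tests = "better" testing).
by_tests = sorted(modules, key=lambda m: modules[m]["tests"], reverse=True)

print(by_bugs)   # ['C', 'B', 'A'] — testing on Module C looks best
print(by_tests)  # ['A', 'B', 'C'] — the very same work now looks worst
```

The testing effort is identical in every row; only the condition of the product differs, yet each metric confidently declares a different “best” and “worst” tester.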
The obvious larger lesson here is that if you are going to use metrics, comparing raw numbers accomplishes nothing; you have to be willing to spend the time to understand what is actually going on. Too often, managers use metrics as a club to beat employees over productivity, which is enormously counter-productive: employees find ways to game the metrics toward what the manager wants instead of toward the needs of the project.
This is also a case for collecting fewer metrics so that you can spend more time analyzing them. For example, Module A looks pretty good on test coverage, but it is possible we simply haven’t written some necessary tests. And now that we know Module C is buggy, we may need to look at the development process: at what stage are these bugs being introduced, and why?
In several recent posts, I have talked about the quality-versus-productivity tension in mature software organizations. As deadlines loom larger, developers and testers often give them higher priority at the expense of quality, so managers have to make a special effort to emphasize quality and discourage shortcuts. Creating productivity metrics swings the pendulum in the other direction: it introduces pressure to get more done instead of getting things done right. So when you evaluate these kinds of metrics, stop looking at “finding lots of bugs” or “getting enough done”, and look instead at the direction product quality is taking.