I just went through last week’s unread feed (a sleeping problem? me?) and found this on the Daily WTF. The unit test code there is so absurdly wrong it makes you smile, but there’s a serious problem behind it.
The author has been assigned to introduce a development process built on unit testing, with a team whose members don’t buy into the idea. So management escalates the problem and enforces a policy that uses the code coverage metric to measure the quality of the unit tests. Of course, the team members still don’t buy into automated testing, and they game the metric to the extreme.
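To make the gaming concrete, here is a hypothetical sketch (the function name and tests are invented for illustration, not taken from the Daily WTF article) of how a test can push line coverage to 100% while verifying nothing at all:

```python
import unittest

# Hypothetical production code under "test".
def calculate_discount(price, percent):
    return price * (1 - percent / 100)

class TestDiscount(unittest.TestCase):
    # This test executes every line of calculate_discount, so the
    # coverage metric is satisfied, yet it contains no assertion:
    # it passes even if the implementation is completely wrong.
    def test_discount(self):
        calculate_discount(100, 10)
        calculate_discount(0, 50)

    # A barely better variant that asserts a tautology; a coverage
    # tool counts it exactly the same as a real test would count.
    def test_discount_tautology(self):
        result = calculate_discount(100, 10)
        self.assertTrue(result == result)

if __name__ == "__main__":
    unittest.main()
```

Coverage measures only that code ran, not that its behavior was checked, which is exactly the gap an unwilling team can exploit.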
And there’s the problem: if you want automated tests that really improve quality, rather than tests written only to satisfy a process requirement, you need to find a way to get developers to love them. If they don’t, they’ll find a way to weasel around any metric you put in place. All you can do then is mandate reviews of the tests, and live with the extra work and bad blood that creates.
It is very easy to develop a love for unit testing when you’re the one responsible for maintenance problems: when it’s your own code you have to live with, or when you’ll be held responsible for what others produce. On the other hand, if you’re mostly responsible for delivering a set of features in limited time, and those results then melt into some larger product, the responsibility for maintenance gets blurry.
Anyone: how do you put chocolate around a unit test?