I love deadlines. I like the whooshing sound they make as they fly by.
– Douglas Adams
So you estimated three months of testing. You end up being given six weeks. You hear talk about adding non-testers from other teams as helpers to compensate – yes, even non-techies and people who have never seen your product before (testing can’t be that hard, right?). You see the Big Triangle in front of you, the three levers one can pull: resources, release date – and quality. The release date was communicated and set in stone months ago. And the resources? You have one tester – for this example, let’s say it’s yourself. By adding two others with no testing or product experience, how much more work can you do? You know the answer in your heart: approximately none. You’ll spend a great deal of time handing out instructions and answering questions, and your “helpers” will be far less efficient than you in the first place.
In the meeting with the program head or project lead, you hold back your comment about how the company should maybe triple your salary instead of burning the money. No, you voice your concerns, and then you present the third possible solution to the problem: you could reduce the quality of the product. As you will not have enough time to run all the tests, the chance that more bugs remain in the final product is high, and that usually means a decrease in quality.
As the professional tester you are – and let’s assume you are not the final decision maker – being presented with such a “solution” probably makes you cry out loud, or at least scowl in anger. But if all else fails, and you can explain to the “business” what this means, an informed decision to deliver a product that wasn’t fully tested is an option that will save you a lot of pain and energy. Energy that you can then invest in fixing bugs and preparing for the next release instead of falling into the deep black abyss after the crunch. Let’s be honest: testers and coders working 14-hour days, making last-minute changes on a Sunday – this has never helped any product’s quality. A proper, detailed risk analysis, presented to the decision makers; a realistic, meaningful report showing where you are and what is still in the queue – you need to be informed and know the big picture. You need a contingency plan. Management backing. And a little bit of luck.
A firm set of acceptance criteria, agreed upon with the team and management, can be your “quality safety net”. A maximum number of known bugs of a given severity, or a minimum test case coverage, can be defined as a “contract” and prevent a premature product release. Obviously, certain kinds of critical systems, e.g. defense, medical or aerospace, have zero tolerance built in. But your average B2B or B2C software can usually live with a margin of error – and if you define this margin in advance, it becomes easy to come to a go/no-go decision.
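The “contract” idea can even be made mechanical. The sketch below is a minimal, hypothetical go/no-go gate; the severity names, bug limits and coverage threshold are placeholder assumptions standing in for whatever your team and management actually agree on:

```python
# Hypothetical release gate: thresholds and severity names are
# illustrative assumptions, not prescriptions.

def release_gate(known_bugs, coverage, max_bugs=None, min_coverage=0.85):
    """Return (go, reasons).

    known_bugs   -- dict mapping severity name to open bug count
    coverage     -- executed test case coverage as a fraction (0.0-1.0)
    max_bugs     -- agreed maximum open bugs per severity
    min_coverage -- agreed minimum coverage for a release
    """
    if max_bugs is None:
        max_bugs = {"critical": 0, "major": 3}  # example contract

    reasons = []
    for severity, limit in max_bugs.items():
        count = known_bugs.get(severity, 0)
        if count > limit:
            reasons.append(f"{count} {severity} bugs (limit {limit})")
    if coverage < min_coverage:
        reasons.append(f"coverage {coverage:.0%} below {min_coverage:.0%}")

    # "Go" only when every agreed criterion is met; otherwise the
    # reasons list documents exactly why the release is blocked.
    return (not reasons, reasons)
```

The point is not the code itself but that the decision becomes a lookup against criteria fixed in advance, rather than a Friday-evening argument.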
Every tester wants to deliver the best possible result, but so does management – they simply have more than just one priority. As a tester you will have heard this before, but it’s very true and, in my experience, forgotten too often.
Plan, prioritize, execute and follow up on all your tests in such a way that when you stop testing, no matter how much you have actually tested by then, you have done the best possible test.