Monday, February 1, 2010

Testing under a deadline -- the definition of insanity

Ever have to test something with limited time? Of course you have. When I first started testing, the plan for the Software Development Life Cycle (SDLC) was:

- get the requirements
- design the application
- start thinking about how to test it
- start implementing the design
- start testing the application
- fix the defects
- test it again
- ship it

Generally speaking, there is a lot more iterating over parts of this, and occasionally the application gets redesigned in the middle of everything.

On many projects, the plan is laid out so that each step gets a set amount of time, with a buffer built in for unanticipated problems. As the project progresses, when it looks like we aren't going to meet the target date, they put more resources on it, reduce the scope, or both.

If things really aren't going well (at some point putting more people on the project just makes things worse, and if you cut too much it just isn't worth shipping), they follow Albert Einstein's definition of insanity: doing the same thing over and over again and expecting different results. They just do it faster and more frequently. In the end, the Spiral development model becomes the Death Spiral, and testing time gets shorter and shorter on each iteration.

So what do you do as a tester?

If you started off well-intentioned, you asked for, say, 2 weeks (10 business days) to do testing. After you got the initial design and more detailed information, you re-estimated and asked for 3 weeks (15 business days). You created a wonderful test suite with hundreds of necessary tests. You felt you should be able to complete all the testing in 12 days (80% of 15 days is 12 days; keep 20% in reserve for unforeseen problems).

You ran the entire test suite and filed a few hundred defect reports. Development threw a new version of the product over the wall, but now you have only 5 days to test it again. Do you run the entire test suite? There are a few possible choices.

The first is to order the tests by importance. This way, if you don't get through all the tests before the ship date, you have covered the most important ones. But 5 days is only 33% of the expected 15-day test period. Even if we assume nothing goes wrong, 5 days is less than 42% of the 12-day test period. This means that, best-case scenario, more than 58% of the tests go unrun. Not good.
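
Here is a minimal sketch of that arithmetic in Python; the suite size and the tests-per-day rate are hypothetical figures chosen only so the percentages line up with the example above:

# Hypothetical numbers: a suite of roughly 300 tests that the team can run
# at about 25 tests per day (i.e. the whole suite over the planned 12 days).
suite_size = 300
tests_per_day = 25
days_left = 5

runnable = min(suite_size, days_left * tests_per_day)
skipped = suite_size - runnable

print(f"Best case: {runnable} of {suite_size} tests run "
      f"({runnable / suite_size:.0%}); {skipped} ({skipped / suite_size:.0%}) not run.")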

We can test those things which will be hardest or impossible to patch once the product has shipped. Then we continue testing after release, create a patch, and hope we didn't lose too many customers.

Another option is to examine the source code and determine which areas have definitely not changed. Determine which test cases apply only to those unchanged areas and drop them from the test suite. If a test failed previously, we know development either had to change that code or the defect still exists, so tests which failed must be kept. How much interdependency does the product code have? The poorer the development, the harder it will be to test this way. From a QA standpoint, good development comes before good testing.
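
A minimal sketch of that kind of change-based selection, assuming the code lives in git and you maintain a mapping from test cases to the source areas they exercise; the mapping, module paths, test names and revision labels below are all hypothetical:

import subprocess

# Hypothetical mapping from test cases to the source areas they exercise.
TEST_TO_MODULES = {
    "test_login": {"auth/"},
    "test_checkout": {"cart/", "payment/"},
    "test_reports": {"reporting/"},
}

def changed_paths(old_rev, new_rev):
    """Return the files touched between the last tested build and the new one."""
    result = subprocess.run(
        ["git", "diff", "--name-only", old_rev, new_rev],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

def tests_to_keep(old_rev, new_rev, previously_failed):
    """Keep tests that failed before, plus tests whose code areas changed."""
    changed = changed_paths(old_rev, new_rev)
    keep = set(previously_failed)  # failed tests always stay in the suite
    for test, modules in TEST_TO_MODULES.items():
        if any(path.startswith(module) for path in changed for module in modules):
            keep.add(test)
    return sorted(keep)

print(tests_to_keep("build-41", "build-42", previously_failed=["test_checkout"]))

Tests that map only to untouched areas drop out automatically. In practice the mapping is the hard part, which is why heavy interdependency in the product code makes this approach much less useful.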

If your staff is fairly senior and good at ad hoc testing, you might have a better chance at just re-testing the failed tests and using good judgement to test the related features. But now we are off the test plan. How do we record what testing was done and the results? What I like to do is keep a copy of notepad/textpad/vi/etc. open and write point-form notes on what I'm doing, as I do it, and what results I see. If the application is very graphical and that matters, use something like Microsoft Word or Apple Pages so you can copy and paste images into the document. The important thing is being able to reproduce the tests if needed.

I once had the idea of recording the video output onto video tape. Then you could play the recording back and transcribe the video to a test plan, if necessary.

All of these ideas accept the premise that you have to deal with the limited time to test. How about talking to project management (or their management) about doing a post-mortem? Figure out what went wrong and put things in place to make sure it does not happen again.

The most important thing is to not do it again; remember Einstein's definition of insanity. Features will have been dropped because the project was behind schedule. There will be strong pressure to release a patch or the next version in order to get those dropped features out to the customer. Resist the urge to just dive in and do it all over again.
