After years of software testing I have found one constant... there is never enough time to test everything.
So how do I pick what to test? Here are a few tricks I use to focus my testing.
First, test what changed. I like to get a snapshot of the source code from the last release and a snapshot of the current release candidate, then compare the two. Often the source code is readable enough for any tester to understand what has changed. If you are not a programmer, are not familiar with the particular language, or the code requires in-depth knowledge only the programmers have, then you might want to talk with some of the developers about what you are looking at.
For example, if I am testing a shopping cart for an online web store and I see changes to a class called TaxRules, then I'm going to look at the requirements for tax rules and see if there is anything new. Even if there aren't requirements for tax rules, I can talk to the person who wanted this feature (or improvement) and I can look at the code. Comments are written for humans, and I'm human (no really, I am). Good code should also be self-documenting. In other words, the variable and method names should almost read like English.
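To make that concrete, here is a minimal sketch of the kind of self-documenting code I mean. The TaxRules name comes from my example above, but the rates, the methods, and the LineItem record are all invented for illustration:

    import java.math.BigDecimal;

    // Invented for illustration: one cart line with a price and a food flag.
    record LineItem(BigDecimal price, boolean isFood) {}

    public class TaxRules {

        // Hypothetical rates -- the values don't matter, the readability does.
        private static final BigDecimal FOOD_TAX_RATE = new BigDecimal("0.05");
        private static final BigDecimal DEFAULT_TAX_RATE = new BigDecimal("0.13");

        // Food is taxed at a reduced rate; everything else gets the default.
        public BigDecimal taxFor(LineItem item) {
            if (item.isFood()) {
                return item.price().multiply(FOOD_TAX_RATE);
            }
            return item.price().multiply(DEFAULT_TAX_RATE);
        }
    }

Even if you have never written Java, names like isFood and FOOD_TAX_RATE tell you a reduced food rate exists, and that is exactly the sort of thing I would go check against the requirements.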
The next trick is to get someone to add a code coverage tool to the web site (or whatever it is you are testing). As you test the application (automated or manual), the coverage tool records which parts of the code you executed. Look to see if the code which has changed was executed. Set the coverage tool so it records actions within the methods and not just method-level data. This way you can see which branch conditions were exercised. If there is a loop which never gets entered, or a portion of an if/elsif/else which never executes, you know you need another test case. You could even ask the developer to relate the missed statements to user actions. Essentially, get them to tell you what test case you missed. :)
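As a made-up illustration of the gaps a branch-level coverage report exposes, consider a method like this (the class and its discount rules are invented):

    import java.math.BigDecimal;
    import java.util.List;

    public class CartTotals {

        // Hypothetical method: every missed branch below is a missing test case.
        public BigDecimal total(List<BigDecimal> prices, boolean hasCoupon) {
            BigDecimal total = BigDecimal.ZERO;

            // If every test run uses an empty cart, this loop body never
            // executes and the coverage report will flag it.
            for (BigDecimal price : prices) {
                total = total.add(price);
            }

            if (hasCoupon) {
                total = total.multiply(new BigDecimal("0.90"));   // missed if no test applies a coupon
            } else if (total.compareTo(new BigDecimal("100")) > 0) {
                total = total.subtract(new BigDecimal("5"));      // missed if test totals stay small
            }
            return total;
        }
    }

If your test runs only ever buy one cheap item without a coupon, the report will flag both discount branches, and each miss maps straight to a new test case.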
Finally, look at the inputs to the methods. Can you see a way to affect those inputs? Does the method handle situations where the data is bad? Developers think in terms of "make the program do what it is supposed to do." They make sure it does the right thing when given the right data. As a tester, you need to think in terms of "make sure the program doesn't do what it is not supposed to do." In other words, if you input bad data, it should report a nice, helpful message to the user, or simply restrict the user's ability to input the bad data in the first place.
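Here is a minimal sketch of the defensive style I mean; the class and the messages are invented, but each guard clause suggests a bad input for a tester to try: empty, non-numeric, zero, negative.

    // Hypothetical validator: turns bad data into a helpful message
    // instead of crashing or silently corrupting the order.
    public class QuantityParser {

        public int parseQuantity(String input) {
            if (input == null || input.isBlank()) {
                throw new IllegalArgumentException("Please enter a quantity.");
            }
            final int quantity;
            try {
                quantity = Integer.parseInt(input.trim());
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException("Quantity must be a whole number, e.g. 3.");
            }
            if (quantity < 1) {
                throw new IllegalArgumentException("Quantity must be at least 1.");
            }
            return quantity;
        }
    }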
As you do code coverage, you can keep notes with your test cases. Just a comment about how test case XXX should be re-run when class YYY is altered.
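For example, the note can be as simple as a comment in the test itself. This sketch assumes JUnit and reuses the invented TaxRules example from above:

    import java.math.BigDecimal;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    public class TaxRulesTest {

        // NOTE: re-run whenever TaxRules is altered (it showed up in the release diff).
        @Test
        public void foodIsTaxedAtTheReducedRate() {
            BigDecimal tax = new TaxRules().taxFor(new LineItem(new BigDecimal("10.00"), true));
            assertEquals(new BigDecimal("0.5000"), tax);
        }
    }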