Thursday, May 6, 2010

Organizing your automation

It has been a while since I have posted something to the blog. Life has been keeping me busy with things other than software testing.

I've been to a number of interviews in the past few weeks and have been asked a variety of questions. One of them was about traceability: if I have a set of automated tests and a set of requirements, how do I connect the two, and how do I determine what has been run and what has not?

The first part of this is how the requirements are organized. If there is a tracking system and a unique identifier for each requirement, I want to use that identifier in my test automation. For example, at a previous job the defect tracking system was used to manage requirements. In addition to categories like 'defect', 'enhancement', etc., there was a 'new feature' category. The business analyst would file a 'new feature' request into the system and flesh out the requirements using mock-ups and text descriptions.

Each defect report has a unique tracking number, and I would need to incorporate that number into the test automation. There might be one number for the requirement but several different test cases. Additionally, the identifier might be just a number, e.g. 28364, which would not stand out as a requirement in the automation environment. For this reason I would adopt a convention where all requirement references start with REQ-. Thus REQ-28364 could be added to the test automation.

Ideally, you want the automation to be self-explanatory. If a test case fails, it is helpful to know what failed without having to look up the requirements documentation. With automation like Selenium RC or Watij (both using Java), I can name the test case class after the requirement number, e.g. REQ-28364, but if I am looking at the test suite or test results it might not be obvious what that is. So instead I would create an @REQ annotation and put the requirement information in the comments of the source code.
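
As a rough sketch, the annotation itself might look something like this in Java. The runtime retention, the targets and the single string value are one reasonable choice on my part, not something the tracking system dictates:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Marks a test class (or test method) with the requirement it covers,
    // e.g. @REQ("REQ-28364").
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.TYPE, ElementType.METHOD})
    public @interface REQ {
        String value(); // the requirement identifier, e.g. "REQ-28364"
    }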

The name of the class can then be used to indicate what the new feature is. The name of each test case would be a description of what is being tested. For example, if I'm adding Print support to an editor I might have the class name "PrintSupport" or "NewFeaturePrintSupport". The test cases might be:

  • canPrintAllPages
  • canPrintAllEvenPages
  • canPrintAllOddPages
  • canPrintRangeOfPages
  • canCancelPrintDialog
When I look at the results for the test run I would see:
  • PrintSupport->canPrintAllPages: Pass
  • PrintSupport->canPrintAllEvenPages: Pass
  • PrintSupport->canPrintAllOddPages: Pass
  • PrintSupport->canPrintRangeOfPages: Pass
  • PrintSupport->canCancelPrintDialog: Fail
Very easy to see what is working and what is not.
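
Putting the pieces together, a minimal JUnit 4 sketch of the PrintSupport class might look like the following, assuming the @REQ annotation shown earlier. The test bodies, which would drive the editor through Selenium RC or Watij, are left as placeholder comments:

    import org.junit.Test;

    // New feature: print support for the editor (REQ-28364).
    @REQ("REQ-28364")
    public class PrintSupport {

        @Test
        public void canPrintAllPages() {
            // print all pages and verify the output
        }

        @Test
        public void canPrintAllEvenPages() {
            // print only the even pages and verify
        }

        @Test
        public void canPrintAllOddPages() {
            // print only the odd pages and verify
        }

        @Test
        public void canPrintRangeOfPages() {
            // print a range such as pages 2-5 and verify
        }

        @Test
        public void canCancelPrintDialog() {
            // open the print dialog, cancel it, and verify nothing printed
        }
    }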

The most important thing for tying requirements and automation together is creating a convention and sticking to it. To help you stick to it, edit the templates in your IDE. Whenever I create a new test case class, the template will have an @REQ field in the comments at the top of the class. I can even go one step further and have source control check for the @REQ field. If the field does not exist or is blank, the check-in of my automation will fail with an error telling me to add a requirement reference to the source code.
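
The source control check is not tied to any particular tool. As a rough illustration, it could be as simple as a small Java program run from a pre-commit hook or a build step; the class name and the default test source path below are hypothetical:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.stream.Collectors;

    // Scans the test sources and exits with a non-zero code if any .java
    // file is missing an @REQ reference. A pre-commit hook or build step
    // would key on the exit code to reject the check-in.
    public class CheckReqTag {
        public static void main(String[] args) throws IOException {
            Path testRoot = Paths.get(args.length > 0 ? args[0] : "src/test/java");

            List<Path> sources = Files.walk(testRoot)
                    .filter(p -> p.toString().endsWith(".java"))
                    .collect(Collectors.toList());

            boolean missing = false;
            for (Path p : sources) {
                String source = new String(Files.readAllBytes(p));
                if (!source.contains("@REQ")) {
                    System.err.println("Missing @REQ reference: " + p);
                    missing = true;
                }
            }

            if (missing) {
                System.exit(1);
            }
        }
    }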
