Friday, December 23, 2011

Your automation must not dictate your test plan

One of the things I see people new to automation doing is selecting what to automate, or how to test an application, based on what the automation tool will let them do. This is a dangerous approach to automation.

The test cases I create for an application are based on what is important to the customer. I want to make sure that the customer experience is a good one. If I create a set of test cases or user stories which reflect real customer usage of the application, then I am most likely to find the issues which will affect the customer.

I remember working on a project for 4 years. After 4 years of testing and improving the application, we were at a point where over 90% of the features important to the customer were tested and bug-free. Of the remaining 10%, we knew of most of the defects and had work-arounds. We were at the point of testing extreme edge cases. At this point I found a major defect. The developer looked at the fix and realized the defect had been there since the beginning. In 4 years not one single customer had reported this defect. The defect was easy to automate a test for, but doing so added essentially zero value to the application. This is NOT a test case you want to start with when automating.

On another project someone found a defect in a desktop application. The steps to reproduce were:


  1. Run an application not involved in the test case at all (Outlook for example)
  2. Hand edit a project file using notepad or something other than the application it was intended for
  3. Make a very specific change to the file
  4. Run the application under test
  5. Open the corrupted file
  6. Select a feature which relies on the corrupted file
  7. A modal dialog appears telling you the file is corrupt and asking whether you wish to repair it
  8. Ignore the dialog
  9. Use CTRL-TAB to switch to a different application not involved in the test case at all
  10. Click on the application under test in a very specific location on the MDI client window


At this point the modal dialog is hidden behind the window with focus and the window with focus appears to be frozen. It is really waiting for you to respond to the modal dialog. This was a design flaw in the operating system. It was virtually impossible to fix in the application under test without a major re-design. It was highly unlikely a customer would be able to reproduce this defect. When the tester showed me the 'locked' state it only took me a few minutes to figure out what was going on. Our customer was typically a software developer with 7+ years of experience.

This was a useless test case. Both this and the previous example were bad test cases, regardless of whether they were executed manually or automated. My point is, the test case came first. Even before we attempted to automate it, we decided whether or not it was a good test case.

Test automation NEVER precedes test case creation or test planning.

Once you know what you want to test and the steps to test it, you automate those steps.

This is the second mistake I see people new to automation making. They decide WHAT they want to test, but when they start automating it, the steps they generate with the automation are not the same steps they would perform manually. In this case you have taken the time to create a good set of test cases and then thrown them out the door when you start automating. This is not a good idea.

Rather than changing the test case to something which is easy to automate, you need to figure out how to automate the test steps. This is what separates good automation from bad automation.

Many times I have seen a test case automated. It gets run and passes. We ship the application to the customer. He uses the same feature and it fails horribly. Why? Because the steps he used to get to the defect were not the steps we automated. We had a good test case. If an experienced tester had executed the test case manually, they would have found the defect. The person automating the test case just found it easier to automate something close to, but not equivalent to, the test case.

I am currently using Selenium 2.x with WebDriver. One of the changes from Selenium 1.x to 2.x is that you cannot interact with invisible elements. For example, a common trick on a website is to have an Accept checkbox on a download page. If you accept the terms the Download button becomes visible. In Selenium 1.x I could click on the Download button without clicking the Accept checkbox. The REAL test case was (see the code sketch after the list):


  1. Go to download page
  2. Click Accept checkbox
  3. Click Download button
  4. Confirm download
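
As a rough sketch, here is what those four steps might look like in Java with the Selenium 2.x WebDriver bindings. The URL and the element ids are assumptions for illustration, not taken from any real page:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class DownloadTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            // Step 1: go to the download page (hypothetical URL)
            driver.get("http://www.example.com/download");

            // Step 2: click the Accept checkbox (assumed id); this
            // makes the Download button visible
            driver.findElement(By.id("accept")).click();

            // Step 3: click the now-visible Download button (assumed id)
            driver.findElement(By.id("download")).click();

            // Step 4: confirm the download, e.g. by checking that a
            // confirmation message appears (assumed id)
            driver.findElement(By.id("confirmation"));
        } finally {
            driver.quit();
        }
    }
}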

What someone would automate with Selenium 1.x was (again, sketched in code after the list):


  1. Go to download page
  2. Click Download button
  3. Confirm download
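
For comparison, the shortened version in the Selenium 1.x Java client might have looked something like this (the host, port, and locator are likewise assumptions):

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public class DownloadShortcut {
    public static void main(String[] args) {
        // Assumes a Selenium RC server running locally on port 4444
        Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://www.example.com/");
        selenium.start();
        selenium.open("/download");
        // Selenium 1.x will click the Download button even though it is
        // still invisible, a step no real customer could ever perform
        selenium.click("id=download");
        selenium.stop();
    }
}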

The idea is that it saves a step. One less step means it is quicker to code, runs quicker, and is one less thing to maintain. Do this a thousand times and it adds up. HOWEVER, the customer would never click on the invisible Download button.

In Selenium 2.x you would get a failure with the shortened test case. People occasionally complain that Selenium 2.x has removed an important feature. They want to know how they can click on the invisible Download button. They come up with tricky JavaScript snippets which allow Selenium 2.x to 'see' and click the Download button. Is a customer going to create a JavaScript snippet, inject it into the page, and run it just so they can click the Download button? Is a manual tester going to do this? If the answer is no, then why is our automation doing this? If the manual test case calls for clicking the Accept checkbox then our automation should do so as well. If clicking the Accept checkbox does not enable the Download button, file a bug and move on to something else.
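
For the record, the workaround people reach for looks roughly like the following, using WebDriver's JavascriptExecutor. This is exactly the anti-pattern I am arguing against; the helper name is mine:

import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class InvisibleClickWorkaround {
    // The anti-pattern: force a click on an element the user cannot see.
    static void clickInvisible(WebDriver driver, By locator) {
        WebElement element = driver.findElement(locator);
        // Selenium 2.x will not perform element.click() on an invisible
        // element, so people inject JavaScript to click it anyway. No
        // customer and no manual tester interacts with a page this way.
        ((JavascriptExecutor) driver).executeScript("arguments[0].click();", element);
    }
}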

Finally, automation is all about finding elements on a page, interacting with them (clicking, right clicking, typing, etc.) and checking what happened. As a manual tester you are going to use your eyes and hands to do everything. The test case might have a step like, "Locate the folder called 'My Documents' and double click it." This is really two steps. The automation should locate the folder called 'My Documents'; this is step 1. It should double click the element; this is step 2. As a manual tester I find the element by looking for the text 'My Documents'. If this is a web page and the HTML is:

<div id='lsk499s'><a href="...">My Documents</a></div>

I am not going to use the div id to find the element. I'm going to use the text. As a manual tester I used the text to find the element. There is no reason to do anything differently with the automation tool.
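
In WebDriver terms, that means preferring a locator based on the visible text over the machine-generated id. A minimal sketch (the helper name is mine):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class FindByText {
    static WebElement findMyDocuments(WebDriver driver) {
        // Find the link the way a manual tester would: by its visible
        // text, not by the machine-generated div id.
        return driver.findElement(By.linkText("My Documents"));
        // Avoid: driver.findElement(By.id("lsk499s"));
    }
}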

What if the web page is made to look like Windows Explorer? On the left is a tree view with one of the nodes being 'My Documents' and on the right is a thumbnail view with a 'My Documents' folder. In the manual test case, does it specify which 'My Documents' to double click? If yes, follow the test case. If no, how would you as a tester decide? Do you always go with the thumbnail view? Do you pick randomly? Do you change every test run? If you are a good manual tester, we want that experience captured by the automation. If I would normally change every test run but never test the same feature twice in one day, it might be sufficient to say: if the day of the year is even, double click the thumbnail; otherwise double click the tree view. If the automation gets run daily, it will pick a different way each day.
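
As a sketch of how that habit might be captured in code; the tree view and thumbnail view locators are assumptions about how such a page could be structured:

import java.util.Calendar;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.interactions.Actions;

public class AlternateViews {
    static void openMyDocuments(WebDriver driver) {
        int dayOfYear = Calendar.getInstance().get(Calendar.DAY_OF_YEAR);
        // Even days use the thumbnail view, odd days the tree view, so a
        // daily run exercises both paths over time.
        By locator = (dayOfYear % 2 == 0)
                ? By.xpath("//div[@id='thumbnails']//a[text()='My Documents']")
                : By.xpath("//div[@id='treeview']//a[text()='My Documents']");
        WebElement folder = driver.findElement(locator);
        new Actions(driver).doubleClick(folder).perform();
    }
}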

The important thing in all of this is that I am a good tester. I write good test plans and test cases. When I decide to automate my good test cases, I should not compromise the quality of my testing just because I am testing with an automation tool rather than manually.

Happy testing!
