- Windows 2000, Internet Explorer 6.x
- Windows 2000, Internet Explorer 7.x
- Windows 2000, Firefox 3.x
- Windows 2000, Safari 3.x
- Windows 2000, Safari 4.x
- Windows XP, Internet Explorer 6.x
- Windows XP, Internet Explorer 7.x
- Windows XP, Firefox 3.x
- Windows XP, Safari 3.x
- Windows XP, Safari 4.x
- Windows Vista, Internet Explorer 7.x
- Windows Vista, Firefox 3.x
- Windows Vista, Safari 4.x
- Solaris 10, Firefox 3.x
- Redhat Linux, Firefox 3.x
- SuSE Linux, Firefox 3.x
Without automation, we would look at equivalence classes. Using our experience and knowledge of the different operating systems, we would estimate that Internet Explorer 6.x on Windows 2000 is going to behave the same as Internet Explorer 6.x on Windows XP. We might also assume Firefox 3.x on Redhat Linux and SuSE Linux are going to be equivalent. Additionally, we would look at our customer base and see if we can reduce the combinations because customers are unlikely to be using some of them. For example, we found that most Windows customers who were using Firefox 3.x were also on Windows XP. So the list was trimmed down to:
- Windows 2000, Internet Explorer 6.x
- Windows XP, Firefox 3.x
- Windows XP, Safari 3.x
- Windows Vista, Internet Explorer 7.x
- Windows Vista, Safari 4.x
- Solaris 10, Firefox 3.x
- SuSE Linux, Firefox 3.x
We selected SuSE Linux because we had encountered issues unique to SuSE Linux, whereas all the issues found with Firefox 3.x on Redhat Linux were similar to those found with Firefox 3.x on Solaris 10.
Looking at the list, there are still 7 configurations. If manually testing each configuration takes two weeks and a test cycle is two weeks, we need 7 testers working on this full time. This also assumes everything goes fine and there aren't additional issues to contend with. The reality was that we had to add more configurations. The different desktop managers for Linux actually made a difference in how things rendered. This was really exposing problems in the web browser, but from a customer's point of view, most websites look fine while our application does not; therefore the problem is in our application, not the browser. Non-technical customers don't want to hear the explanation of why our application doesn't work with Redhat Linux 4, the Gnome desktop and Firefox 3.x.
So we had to add in another configuration:
- Windows 2000, Internet Explorer 6.x
- Windows XP, Firefox 3.x
- Windows XP, Safari 3.x
- Windows Vista, Internet Explorer 7.x
- Windows Vista, Safari 4.x
- Solaris 10, Firefox 3.x
- SuSE Linux, KDE Desktop, Firefox 3.x
- Redhat Linux 4, Gnome Desktop, Firefox 3.x
This adds one more tester to the test cycle. If we assume testers will be productive for 6 hours a day (checking mail in the morning, lunch, meetings, etc.), then we are looking at 6 hours * 5 days * 2 weeks * 8 configurations, for a total of 480 hours.
But what if I had an automation tool that ran on all the platforms and worked with all the different web browsers? I could write one test suite and run it with different configurations. One such tool is Selenium. It is written using JavaScript and Java and therefore runs on all these platforms.
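As a minimal sketch of what that one test suite might look like (using the Selenium WebDriver Java bindings; the application URL, element ids and the `browser` system property are illustrative placeholders, not details from the project described here), the test logic itself stays browser-agnostic and only the driver changes per configuration:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;

public class LoginSmokeTest {

    public static void main(String[] args) {
        // The browser is chosen from outside the test, e.g. -Dbrowser=ie
        String browser = System.getProperty("browser", "firefox");
        WebDriver driver = createDriver(browser);
        try {
            // The same steps run unchanged on every OS/browser combination
            driver.get("http://localhost:8080/login");            // placeholder URL
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();
            if (!driver.getTitle().contains("Welcome")) {
                throw new AssertionError("Login did not reach the welcome page");
            }
        } finally {
            driver.quit();
        }
    }

    private static WebDriver createDriver(String browser) {
        if (browser.equalsIgnoreCase("ie")) {
            return new InternetExplorerDriver();
        }
        return new FirefoxDriver();
    }
}
```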
The test automation can be run after hours, which means 24 hours a day, 7 days a week. We can also run the test suite in parallel. Even if the test suite ran as slowly as a manual tester, I would just need to add another computer to run the test suite. A computer costs a lot less than a manual tester.
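As a rough illustration of running the same suite against several configurations in parallel (the configuration names and the `runSuite` stub below are hypothetical; in practice each entry would point at a machine, or a Selenium Grid node, hosting that OS/browser pair), one simple scheduler is enough to keep every configuration busy overnight:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelRunner {

    public static void main(String[] args) throws InterruptedException {
        // One entry per configuration from the list above (names are made up)
        List<String> configurations = Arrays.asList(
                "win2000-ie6", "winxp-ff3", "winxp-safari3",
                "vista-ie7", "vista-safari4", "solaris10-ff3",
                "suse-kde-ff3", "rhel4-gnome-ff3");

        // One worker per configuration; each could drive a separate machine
        ExecutorService pool = Executors.newFixedThreadPool(configurations.size());
        for (String config : configurations) {
            pool.submit(() -> runSuite(config));
        }
        pool.shutdown();
        pool.awaitTermination(12, TimeUnit.HOURS); // let the overnight run finish
    }

    private static void runSuite(String config) {
        // Placeholder: launch the Selenium suite against the machine that
        // hosts this OS/browser combination and collect its results.
        System.out.println("Running test suite on " + config);
    }
}
```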
This is a little naive, however. You will need at least one tester to automate the test suite. Since automation is development, it should be someone who knows how to create test strategies, test plans and test cases, and who also knows how to do software development. This one employee is probably going to cost more than a manual tester would. However, they are not going to cost 8 times as much.
The reality is that creating a maintainable test automation framework takes longer than 2 weeks. Additionally, as you find defects, time will be lost filing them and following up on them. Another issue is that the implementations of Selenium on the different platforms will present their own set of problems. Basically, the same reasons we find issues with our web application on the different platforms are going to cause issues with Selenium as well.
So be forewarned, test automation is not as great as some people would lead you to believe. If done correctly, there will be some cost savings. The best thing is that once you have a good test framework in place, maintaining it does yield significant cost savings. The problem is, most people don't put enough effort into the design and implementation of the framework to reap the benefits later.
One huge thing which can derail test automation will also add cost to development. One project I worked on looked the same from the customer's point of view regardless of which configuration they were using, but if you looked at source control there was actually different code for the different configurations. The web pages being sent down from the server were slightly different depending on the configuration. The code was littered with lots of "if IE6 then do one thing else if IE7 do another thing else if FF3 do something TOTALLY different". Patches to the web browsers and operating systems often broke the application.
From an automation point of view, if the web page served for IE6 is different from the web page served for FF3, then it is as if we are testing two different web pages. The worst part is that the pages are often, initially, close enough that an automator will try to code once for both pages. The effort to create a single automation script that works for both configurations will be more time consuming than normally anticipated, and the maintenance for this script will be greater than expected.
But if the application is coded well, it can be automated for one configuration, tweaked slightly for the other configurations, and then run continuously on all of them.
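One way to keep those per-configuration tweaks cheap (a hypothetical sketch; the locators and browser names are invented for illustration) is to isolate every difference in a single place so the rest of the suite stays identical across configurations:

```java
import org.openqa.selenium.By;

// Hypothetical example: all browser-specific differences live here, so the
// test scripts themselves never branch on the configuration.
public class Locators {

    private final String browser;

    public Locators(String browser) {
        this.browser = browser;
    }

    // Suppose the page served to IE6 names the login button differently
    // than the page served to Firefox 3 (assumed markup, for illustration).
    public By loginButton() {
        if (browser.equalsIgnoreCase("ie6")) {
            return By.name("login_btn");
        }
        return By.id("login-button");
    }
}
```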