For JUnit, the execution is:
- Run Before, open web browser
- Run Test Case #1
- Run After, close web browser
- Run Before, open web browser
- Run Test Case #2
- Run After, close web browser
- Run Before, open web browser
- Run Test Case #3
- Run After, close web browser
What test automators want is:
- Run Before, open web browser
- Run Test Case #1
- Run Test Case #2
- Run Test Case #3
- Run After, close web browser
The problem is, this is not how things work in JUnit. So they have been using static methods and helper classes to create the web browser. It looks like:
- Run Before, if browser == null [true], browser = open web browser
- Run Test Case #1
- Run After, if last test case [false], close web browser
- Run Before, if browser == null [false], browser = open web browser
- Run Test Case #2
- Run After, if last test case [false], close web browser
- Run Before, if browser == null [false], browser = open web browser
- Run Test Case #3, flag last test case
- Run After, if last test case [true], close web browser
This is essentially what test automators want because the After/Before calls between test cases do nothing and leave the web browser open.
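The lazy "open once, close after the last test" pattern described above can be sketched in plain Java. This is a minimal, self-contained sketch: the `Browser` class is a hypothetical stand-in for a real web driver, and `before()`/`after()` play the role of JUnit's Before/After hooks:

```java
public class SharedBrowserDemo {
    // Hypothetical stand-in for a real web browser/driver.
    static class Browser {
        static int opens = 0;                 // how many times a browser was opened
        Browser() { opens++; System.out.println("open web browser"); }
        void close() { System.out.println("close web browser"); }
    }

    static Browser browser;                   // shared across all test cases
    static final int TOTAL_TESTS = 3;
    static int testsRun = 0;

    // Before hook: only the first call actually opens the browser.
    static void before() {
        if (browser == null) {
            browser = new Browser();
        }
    }

    // After hook: only the call after the last test closes the browser.
    static void after() {
        testsRun++;
        if (testsRun == TOTAL_TESTS) {
            browser.close();
            browser = null;
        }
    }

    public static void main(String[] args) {
        for (int i = 1; i <= TOTAL_TESTS; i++) {
            before();
            System.out.println("run test case #" + i);
            after();
        }
    }
}
```

Running it opens the browser once, runs all three test cases, then closes the browser once, matching the flow above.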
Writing test cases which follow each other, i.e. the final state of test case n is the setup state for test case n+1, is a bad idea.
When the suite starts out small and simple, it may not seem like a bad idea.
Imagine the test suite growing and growing. A few months from now you have 5000 'test cases'. Everything is going great. Then they make a change to the application. Your test cases start failing. You investigate the first failure which happens to be test case 3476 of 5000. You need to run all the test cases from 1 to 3476 before you can get to the point it APPEARS to fail.
A few hours later you find out that things have changed in the application and it is a false negative, i.e. the problem is in the test automation and not the application. You fix it and run the test suite again. A few hours later you find out your fix didn't work. So you tweak it and run the test suite again. A few hours later you find the fix still isn't right. You try one more time. While the test suite is running for the third time that day, 5pm hits. Do you go home and check the results in the morning? Do you work late?
Let's say you work late and find test case 3476 is working again. But wait, test case 3788 fails now. You set a break point just before the failure point and run the test suite again. Even if you can fix the problems in one attempt, if there are multiple test cases needing maintenance you will still take days to just FIX the old test cases. Where do you find time to add new test cases for the next features? They will take just as long if you add them to the end of this chain.
Another thing to consider is, hopefully, you will get to a point that your test suite takes MANY hours to run. I've had test suites which took over 8 hours to run. What do you do? You could ask the Project Manager for more time. As a Project Manager he is going to look at the problem just like any other problem he deals with. If Tim has a task to do and it is going to take him 12 days to complete but we need it done in 4 days, break the task into 3 sub-tasks and bring in two more people to help Tim. For safety, bring in three more people. With four people (Tim plus the three new people) working on it, each person should take 3 days to do their part. Working in parallel, the whole task will be done in 3 days.
How does this apply to test automation? Suppose one machine takes 40 hours to run the test suite but we want it run daily. Out of a 24 hour day we need time for maintenance, building the application, backing up the system, etc. So let's say you have 8 hours to run the test suite. If it takes 40 hours to run the whole thing, 40 / 8 = 5, so get 5 computers and break the test suite into 5 parts. Heck, computers are a lot cheaper than people; get 10 computers and break the test suite into 10 parts. Now it takes 4 hours to run the test suite.
But how do you break apart the test suite? The way I write a test suite is to have the Before call set up for the test, I run the test case, then the After call returns the system to the exact same state as before the test was run. This means all test cases start from the same point and end at the same point. I can run the tests in any order I want. I can run one, some or all the tests without worrying about how one test will affect another test. For me, if I have 5000 tests and I want to run them on 10 machines then I run tests 1 to 500 on machine 1, tests 501 to 1000 on machine 2, tests 1001 to 1500 on machine 3, ..., tests 4501 to 5000 on machine 10.
If I try this and find that machine 3 is taking 7 hours and machines 5 and 9 are taking 3 hours and 2 hours, I can start moving some from machine 3 to machine 9, keep doing this until machine 9 is taking 4 hours and machine 3 is taking 5 hours. Then I can move some from machine 3 to machine 5 until they are both taking 4 hours.
Which test cases I move to which machine does not matter. It should take me seconds to make the change.
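Because the tests are independent, splitting them across machines is pure arithmetic. A minimal sketch (the class and method names are illustrative, not from any real framework):

```java
public class ShardDemo {
    // Returns {firstTest, lastTest} (1-based, inclusive) for the given machine.
    static int[] shard(int totalTests, int machines, int machine) {
        int perMachine = (totalTests + machines - 1) / machines; // ceiling division
        int first = (machine - 1) * perMachine + 1;
        int last = Math.min(machine * perMachine, totalTests);
        return new int[] { first, last };
    }

    public static void main(String[] args) {
        // The example from the text: 5000 tests across 10 machines.
        for (int m = 1; m <= 10; m++) {
            int[] r = shard(5000, 10, m);
            System.out.println("machine " + m + ": tests " + r[0] + " to " + r[1]);
        }
    }
}
```

Rebalancing (moving tests from an overloaded machine to a lighter one) is just a matter of adjusting these ranges, which is why it takes seconds rather than days.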
On the other hand, if your test cases all depend on the previous test case, you will need to figure out where to make the first break, i.e. test case 1 to what? Once you figure that out, you'll have to figure out how to get the next test case into the correct state before it starts. You will have to do this 10 times and each time it could take you over a day to get things set up. In other words, it could take weeks just to get back to nightly build and test.
Bottom line, it feels like you are saving time by having the tests run one after the other without closing but in the long run it will cost you so much that you might have to abandon test automation or seriously jeopardize the project.
It is important to understand that if this happens a project manager has to deal with a lot more than testing, a LOT MORE. He will not want to hear why you need two EXTRA weeks. He will just expect you to find a solution and bring the project in on time and under budget. If fingers start pointing you can be sure that the project manager will throw the test automators 'under the bus'.
6 comments:
Are these projects you're working on large scale projects? A lot of our web application projects that seem large to our small company sound like they are much smaller than these.
The project I'm currently working on is small and only has a few hundred test cases. If we were to automate everything there might be a thousand test cases. The code base is approximately 100,000 lines of code.
Previously, I have worked on projects which are 500,000 lines of code. They have been in development for over ten years and now have approximately 5,000 test cases.
"Writing test cases which follow each other, i.e. the final state of test case n is the setup state for test case n+1, is a bad idea."
I would agree with you for test cases which are independent of each other.
Consider a situation with two test cases wherein the outcome of one test case determines whether the second test case is to be executed or not. So I would develop it like this -
Test Case 1:
- page 1
- page 2
- page 1

Test Case 2:
- page 1 (left open by Test Case 1; check whether Test Case 2 is to be executed, if yes then -)
- page 3

And this is how I constitute Test Case 2.
Now here I would not close the browser (page 1) on the third step of test case 1, and would continue execution with test case 2. Would this not save time on opening and closing the browser?
How would you handle such a test case?
~ T
Tarun K,
Your approach will save time for the execution of the test cases.
There are two general rules I have learned from software development.
First, you will spend 4 times more time maintaining a piece of software than creating it (i.e. 20% creation, 80% maintenance).
Second, do not spend time optimizing code until you know you need to. Have a plan for making the application faster, if necessary, but do not do premature optimization.
- setUp
- openBrowser
- testCase1
- tearDown
- closeBrowser
- setUp
- openBrowser
- testCase2
- tearDown
- closeBrowser
will take longer than:
- setUp
- openBrowser
- testCase1
- testCase2
- tearDown
- closeBrowser
The problem is what are you saving when you make test cases dependent? What are you losing?
In my scenario execution time is longer. How much does a second computer cost? Let's get a top of the line computer. Maybe $3000 (you can use a switch box and share the keyboard/mouse/monitor between the test machines).
In your scenario, debugging the test cases might take longer for the test automation staff.
If the test automation suite gets harder and harder to maintain, management has a few options: (a) hire another test automator, (b) do less testing and hope for the best, (c) abandon automation and outsource testing to India or China.
Option (b) is not a good option at all. That is just a recipe for disaster. It happens though. Sometimes the automation staff ask for more help, management pushes back on them and they just hide the fact maintaining the test suite is falling behind.
Option (a) is going to cost the company a lot of money. Your staff costs you salary, benefits, government taxes, unemployment insurance, worker compensation insurance, HR to administer all this, office space, etc. A good automator is really someone who knows software development. When you factor in all the cost overhead for an employee and not just their salary, it could cost a company $100,000 a year. If the difficulty of maintaining the test suite grows with the size of the test suite, hiring a new automator is just delaying the inevitable.
Option (c) is a viable option for the company but as a test automator not one I would hope for.
Bottom line, if you set up your test suite to be fast now but hard to maintain years from now you are saving the company a few thousand dollars (which they can amortize over time) in the short run but costing them a lot of money in the long run.
The longer a test framework is in place and getting used the better return on investment. Even the people selling record and playback (snake oil) will admit this. Test automation should be a long term thing and not a quick fix to an immediate problem.
Think years not just the current release.
On a lighter note -
"(c) abandon automation and outsource testing to India or China."
Instead of abandoning automation they could outsource automation to India or China.
My experience with outsourcing has been to create test plans and outsource the work. How the other company achieves their goals is up to them, but there is no reason the company I outsource to could not use automation.