Wednesday, June 20, 2012

StaleElementException

Most automation tools depend on the idea that a page has finished loading. With AJAX and Web 2.0 this has become a grey area. META tags can refresh the page and Javascript can update the DOM at regular intervals.

For Selenium this means that a StaleElementException can occur (the actual class in Selenium WebDriver is StaleElementReferenceException). It occurs when I find an element, the DOM gets updated, and then I try to interact with the element.

Actions like:
driver.findElement(By.id("foo")).click();
are not atomic. Just because it was all entered on one line does not mean it executes as one step; the generated code is no different from:
By fooID = By.id("foo");
WebElement foo = driver.findElement(fooID);
foo.click();
If Javascript updates the page between the findElement call and the click call, I'll get a StaleElementException. This is not uncommon on modern web pages, but it will not happen consistently; the timing has to be just right for the bug to occur.

Generally speaking, if you know the page has Javascript which automatically updates the DOM, you should assume a StaleElementException will occur. It might not occur while you are writing the test or running it on your local machine, but it will happen. Often it will happen after you have 5000 test cases and haven't touched this code for over a year. Like most developers, when something that worked yesterday stops working today, you'll look at what you changed recently and never find this bug.

So how do I handle it? I use the following click method:
public boolean retryingFindClick(By by) {
    boolean result = false;
    int attempts = 0;
    while (attempts < 2) {
        try {
            // Find and click in one pass; either call can fail if the DOM changes underneath us.
            driver.findElement(by).click();
            result = true;
            break;
        } catch (StaleElementReferenceException e) {
            // The element went stale between the find and the click; loop around and retry once.
        }
        attempts++;
    }
    return result;
}
This will attempt to find and click the element. If the DOM changes between the find and the click, it will try again. The idea is that if the first attempt fails, an immediate second attempt will usually succeed. If the DOM changes very rapidly, this will not work; at that point you need to ask development to slow down the DOM updates so this works, or you need to build a custom solution for that particular project.

The method takes as input a locator for the element you want to click. If it makes it past the click call, it returns true. If both attempts fail because the element went stale, it returns false. Note that other failures, such as the element not being found at all, are not caught here and will still throw.
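A call site might look like this (the saveButton locator and the JUnit-style fail() call are just illustrations, not part of the method above):

if (!retryingFindClick(By.id("saveButton"))) {
    // Both attempts hit a stale element; fail the test with a meaningful message.
    fail("Could not click the Save button; the DOM kept changing underneath the test.");
}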

Personally, I would argue this should always work. If the developers are refreshing the page so quickly that two back-to-back attempts both fail, they are overloading the browser on the client machine.

Tuesday, June 19, 2012

When trial and error is a bad thing

I'm a hacker. Not the breaking-into-systems-and-doing-damage kind of hacker. To me a hacker is someone who learns things by trial and error. I will systematically poke away at something which, for all intents and purposes, is a black box. I hack to learn. I hack things I own. I'll create an instance of something and hack away at it. People I work with will create development, test or staging environments which I will hack.

I do not hack sites I don't own or have permission to hack. This is what differentiates good hackers from bad hackers.

What I do is poke at something. Maybe I'll try changing an input or altering the environment slightly and see how that changes things. I'll keep doing this until a pattern emerges. From trying different things I start forming a hypothesis of what is happening inside the black box. If I try something and the result does not fit my hypothesis, I form a new hypothesis. At some point I usually get a clear understanding of what is happening inside the black box without ever seeing what is in it.

I essentially look at the symptoms and narrow down what the cause would be.

This is a good use of trial and error. The goal is not simply to find the input which gives me the desired output. If I stopped the moment I got the desired output, I might think I have the solution but I don't. Case in point: I input 2 and 2 and get 4. My hypothesis is that the black box does addition. At this point my hypothesis is correct. However, if I poke further I might find that inputting 2 and 3 gives me 6. Now I see that it is not addition. My new hypothesis is that it is multiplication.

Hacking is really empirical. Unless I try every possible input, I cannot be certain my hypothesis is correct. For example, I might input 1 to a function and it returns 43, I input 2 and get 47, I input 3 and get 53. After inputting the numbers from 1 to 20 I notice all the results are prime! My hypothesis is that the function is a prime number generator. However, if I input 41 I get 1763, which is not prime (41 * 43 = 1763). It turns out the function is Euler's prime-generating polynomial, n^2 + n + 41, which produces primes for every n from 0 to 39 but not beyond (n = 40 gives 1681, which is 41 * 41).
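Here is a small sketch of that experiment in Java (the class name and the choice of sample inputs are mine):

public class EulerPolynomial {

    // Trial division is plenty for numbers this small.
    static boolean isPrime(long x) {
        if (x < 2) return false;
        for (long d = 2; d * d <= x; d++) {
            if (x % d == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Looks like a prime generator for small n, then breaks at 40 and 41.
        for (long n : new long[] {1, 2, 3, 20, 39, 40, 41}) {
            long value = n * n + n + 41;
            System.out.println("n=" + n + " -> " + value + (isPrime(value) ? " (prime)" : " (not prime)"));
        }
    }
}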

Still, hacking can be a good thing. Trial and error to find THE answer is never a good thing.

I see a lot of people solving problems as follows:

  1. Program or computer not functioning correctly.
  2. Change something.
  3. If program or computer not functioning correctly go to step 2.
  4. Problem solved.
Now maybe they did find the right solution, but most often they didn't. Later the problem will come back with different symptoms. If I purchased a program from you and you used this method to solve the problem, here is how I see it as a consumer of software:

My car is running slower than normal. I bring it to my mechanic and he does the following:
  1. He changes the spark plugs and charges me for that.
  2. Car is still running slow.
  3. He adjusts the valve on the carburetor and charges me for that.
  4. Car is still running slow.
  5. He rotates the tires and charges me for that.
  6. Car is still running slow.
  7. He changes all the fluids and charges me for that.
  8. Car is still running slow.
  9. Cars today have a lot of electronics, so he disconnects the battery for a week.
  10. All my programming, bluetooth, radio stations, clock, GPS, etc. are gone.
  11. The car is no longer running slow.
  12. Three months later the car is running slow again.
  13. My mechanic disconnects the battery for a week.
  14. All my programming, bluetooth, radio stations, clock, GPS, etc. are gone.
  15. My car is still running slow.
Would you pay for all the work the mechanic did? I think it is safe to say that NO ONE would put up with this. Some people might put up with it until step 12 then find a new mechanic. Others would put up with this until just step 2 or 4. Most of us would not pay for anything after step 2.

I've worked in industries where EVERYONE programs like this. There might be 4 or 5 different vendors and you really don't have any other choice. However, it just takes one guy to write quality software and everyone switches to that other guy. Trying to win back those customers means you have to make up for all the poor software issues PLUS give them some incentive to switch away from the guy who has always given them good software.

Thursday, June 14, 2012

So you want to do unit testing


What is a unit test? Wikipedia describes unit testing as testing individual units of code in isolation. If the code has external dependencies, you simulate the dependencies using mock objects.

For example, if I am testing code which gets data from a database, hopefully access to the database is via something like ODBC or JDBC. In that case, it is possible to use a fake database (file system or memory based) rather than, say, an Oracle or SQL Server driver.

If my database connection is hard coded to a particular machine or assumes the machine is localhost then my first step is to refactor the code to remove this dependency.
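Here is a minimal sketch of that refactoring, assuming JDBC; the class name, the table and the connection URL are invented for illustration:

// Before: the connection is hard coded, so every test needs a real server.
//     Connection conn = DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:prod");

// After: the code asks an injected DataSource for its connection.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class UserDao {
    private final DataSource dataSource;

    // Production passes in the real connection pool; a unit test passes an
    // in-memory database (H2, HSQLDB, etc.) or a mock DataSource instead.
    public UserDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int countUsers() throws SQLException {
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM users")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}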

Part of the purpose of having unit test cases is so that we can safely change the code and know we didn't break any existing functionality. So if we need to modify the code to be able to add unit tests we have a bit of a Catch-22 situation. The truth of the matter is, if we have been changing the code without unit tests, changing it one more time in order to add unit tests is actually a step in the right direction and no worse than previous development.

Another important feature of unit tests is speed. If I am adding a new feature and I want to be sure it hasn't broken anything, I want to know as soon as possible. I don't want to write the feature, run the tests and check the results tomorrow. Ideally, I want to know in seconds. Realistically, I might have to live with minutes at first.

Test runs should be automated. If I have to make a change, figure out what tests to run, run them and check the results, there is a strong chance I will stop running them. Especially if I'm on a tight timeline.

Ideally, I would check in my code. This will fire a trigger which builds my code (not the entire product, just my code) and runs the unit tests against it. Putting such a build system in place is a great deal of work but worth the effort. Every minute it takes to create this build system should be weighed against how much time developers spend testing their code before they check in, how many minutes testers spend finding bugs, and how much time developers take understanding a bug and fixing it. Numerous studies have shown fixing bugs is much more expensive than never introducing them in the first place.

So what do we need so far?

First, we need a unit test framework. You wouldn't create your own replacement for JDBC/ODBC, so why create your own unit test framework? There are plenty of them out there.

Second, we need mocking frameworks for the technologies we are utilizing. Which mock object frameworks you require depends on what you are using in your application. If it is a web application, you might need to mock out the web server. If it accesses a database, you will need to mock out the database.
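Mockito is one example of such a framework for Java. Here is a rough sketch; the UserRepository interface and PasswordResetService class are invented for illustration and stand in for your real database-backed code:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class PasswordResetServiceTest {

    // Stand-in for the real data access layer, which would normally hit the database.
    interface UserRepository {
        String findEmail(String userName);
    }

    // Stand-in for the code under test, shown inline to keep the sketch self-contained.
    static class PasswordResetService {
        private final UserRepository users;
        PasswordResetService(UserRepository users) { this.users = users; }

        String resetPassword(String userName) {
            String email = users.findEmail(userName);
            return (email == null)
                    ? "No email address was set for your account. Please contact the System Administrator."
                    : "A reset link was sent to " + email;
        }
    }

    @Test
    public void resetWithoutEmailTellsUserToContactAdmin() {
        // The mock replaces the real database so the test runs in isolation.
        UserRepository repo = mock(UserRepository.class);
        when(repo.findEmail("bob")).thenReturn(null);

        PasswordResetService service = new PasswordResetService(repo);

        assertEquals("No email address was set for your account. Please contact the System Administrator.",
                service.resetPassword("bob"));
    }
}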

Third, we need a build system to automate the running and reporting of the unit tests. Reporting the results is important too. Most systems will either report back to the source control client or send you an email. If the tests run in, literally, seconds, you can afford to reject the checkin if a unit test fails. If it takes more than, say, 5 seconds, you might want to send an email when a checkin fails.

Fourth, we need commitment from management and the team. If you don't believe there is benefit to unit testing there will be no benefit to unit testing. Training people on how to create good unit tests and maintain them is critical. If I'm starting a new project and writing tests from the beginning it is easy but the majority of you will be adding unit tests to existing code.

The first three things are relatively easy to obtain. There are plenty of technologies and examples of people using them. The fourth requirement is the biggest reason adopting unit testing fails. If you don’t get buy-in from everyone involved it just won’t work. The developers need to understand this will benefit them in the long run. The testers need to understand that less manual regression testing will be required and they need to focus on things unit testing will not catch. There will always be plenty of things to test, so there should be no fear that unit testing will replace integration or system testing. Management has to understand that if they cut timelines for a project, developers will not get time to write the unit tests. If you reward the Project Manager for getting the project out on time, he will get the project out on time even if it means giving developers no time for unit test creation. As a Project Manager, if reducing the number of issues AFTER the project has shipped is not a metric I’m evaluated on, I’m happy to ship a product which will make the next project difficult to get out on time.

So, you have the tools and you have buy-in from everyone. Now what? If you have 100,000+ lines of code, where do you start writing unit tests? The answer is actually really simple. For every piece of code a developer touches, they should add unit tests. Bug fixing is the best place to start. I would FIRST write a unit test which would have caught the bug. Then I’d fix the bug and see the unit test pass.
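As a sketch, using JUnit (the InvoiceTotals class and its rounding bug are invented for illustration): the test is written first, fails while the bug is present, and passes once the fix goes in.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class InvoiceTotalsTest {

    // Written before the fix: this is the test that would have caught the bug.
    @Test
    public void totalIsRoundedToTwoDecimalPlaces() {
        assertEquals(10.35, InvoiceTotals.total(3, 3.449), 0.001);
    }
}

// Production code, included here only so the sketch stands on its own.
class InvoiceTotals {
    static double total(int quantity, double unitPrice) {
        return Math.round(quantity * unitPrice * 100.0) / 100.0;
    }
}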

By focusing on unit tests for bug fixes, we reduce the need for regression testing, we focus on the features customers are actually using, and the developers are in that code anyway. If we need to refactor the code to support unit testing, it might as well happen while we are already changing the code. The code was broken when we started the bug fix, so we’ll have to manually test this fix without unit tests. Hopefully, with a unit test in place, it will be the last time we manually test changes to this code.

If we are modifying the code for feature creation, not bug fixing, we want to write unit tests to confirm the current behaviour. Once we have a test which passes with the current code, we can add the feature and the tests should continue to pass.
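For example (ReportFormatter is an invented class standing in for your existing code), I would pin down today's behaviour before touching it:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ReportFormatterTest {

    // Characterization test: it documents what the code does today, so any
    // accidental change while adding the new feature shows up immediately.
    @Test
    public void existingReportsUseCommaSeparators() {
        assertEquals("1,234,567", ReportFormatter.format(1234567));
    }
}

// Existing production code, included here only so the sketch stands on its own.
class ReportFormatter {
    static String format(long value) {
        return String.format(java.util.Locale.US, "%,d", value);
    }
}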

At this point we know what we need and where to start. So let’s cover some of the basics of how to write a unit test.

First, a unit test is going to be a function/method which calls our code. We want the name of the unit test to reflect what it is testing. When results are published they will go out to the developer, but they will also be seen by the backup developer, project management and various other people. If I got an email telling me test17() failed, I’m going to have to open the code and read what test17() is testing. You added comments and kept them up to date, right? Of course you didn’t. The comments shouldn’t be necessary. The test name should tell me what it is doing. If the test method was called callingForgotPasswordWhenNoEmailInUserPreferences(), then we all know what is being tested.

Second, what failed? Most unit test frameworks have assert statements. There is the basic fail() call, but there are also things like AssertTrue, AssertEquals, AssertNotNull, etc. They can be called with just what you are checking, or with a message and what you are checking. You don’t want to code any more than you have to, but enough that someone receiving the results will know what failed. Say the requirement for my software is “When a user clicks the Forgot Password button but they have not set an email address in their preferences, they should be presented with a message telling them to contact the system administrator.” Then the result message from my example here might be something like, “callingForgotPasswordWhenNoEmailInUserPreferences() failed. Was expecting: ‘No email address was set for your account. Please contact the System Administrator.’ but received: ‘No email address.’”. From this it is pretty clear what was expected and what we received instead. Failing to tell the user how to proceed should be considered a show stopper for the customer. On the other hand, if the result was: “callingForgotPasswordWhenNoEmailInUserPreferences() failed. Was expecting: ‘No email address was set for your account. Please contact the System Administrator.’ but received: ‘No email address was set for your account. Please contact the system administrator.’” the customer might consider this acceptable. We might even update the unit test case to ignore case so the test becomes a pass.
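In JUnit, passing the message as the first argument to the assert gives you exactly that kind of report (the helper method here is a made-up stand-in for the real call into the application):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ForgotPasswordTest {

    @Test
    public void callingForgotPasswordWhenNoEmailInUserPreferences() {
        String expected = "No email address was set for your account. "
                + "Please contact the System Administrator.";

        // The first argument is the message reported on failure, so whoever reads
        // the results knows what was being checked without opening the test code.
        assertEquals("Forgot Password message when the user has no email address",
                expected, forgotPasswordMessageFor("userWithNoEmail"));
    }

    // Stand-in for the real application call; invented for illustration.
    private String forgotPasswordMessageFor(String userName) {
        return "No email address was set for your account. Please contact the System Administrator.";
    }
}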

Unit test frameworks are pretty well established now. The general structure of a unit test is:

  • set up for the test
  • run the test
  • assert the test passed
  • clean up so the next test starts at the same point


The set up would be things like creating mock objects, initializing the inputs for the test, etc. The running of the test would be a call to the method being tested. Next would be an assert statement confirming that we received the expected result or side effect. Finally, clean up (often called tear down) the environment so it is in the exact same condition it was in before the set up occurred.

Often you will group similar tests in one test suite. If I have 12 tests and they all require the same set up, I will put them all in one suite. The code will then have one setUp() method that creates the environment for each test, one method for each test (12 methods in total for this example) and one tearDown(). The setUp() method will create any mock objects, initialize global variables, etc. The test method will create anything particular to that test, call the method being tested, then make an assert call. The tearDown() method will then clean up the environment so it is just like it was before the setUp() method was called. This is important because most unit test frameworks do not guarantee the order the tests will be run in. Assuming one test starts where a previous test left off is just bad practice. I have worked on a project with 45,000 unit tests. All tests are run as part of the nightly build. Rather than running all the tests on one machine, they are distributed to 238 different machines. If they all ran on one machine they would take 378 hours (over 2 weeks) to run. By distributing them over 238 computers they run in approximately 3 hours. However, if test1932 depends on test1931 and the two tests get sent to different machines, test1932 will not run correctly. Each test must be independent of all other tests. This will not seem important at first, but a year later you might find yourself needing weeks (possibly months) to refactor all your unit tests. Moments like these often cause management to abandon unit testing.
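In JUnit 4 that structure looks roughly like this (the class and the shared 'cart' state are invented for illustration; setUp() and tearDown() become @Before and @After methods):

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class ShoppingCartTest {

    private List<String> cart;  // stands in for whatever environment the tests share

    @Before
    public void setUp() {
        // Runs before every test: build the shared starting state.
        cart = new ArrayList<String>();
    }

    @Test
    public void newCartIsEmpty() {
        assertEquals(0, cart.size());
    }

    @Test
    public void addingAnItemIncreasesTheCount() {
        cart.add("book");
        assertEquals(1, cart.size());
    }

    @After
    public void tearDown() {
        // Runs after every test: put the environment back exactly as it was,
        // so no test depends on the one that ran before it.
        cart = null;
    }
}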

This is unit testing in a nutshell. I will warn you, ‘the devil is in the details.’ Hiring someone who has gone through the pains of setting up a unit test framework is always a good idea. Either find a good consultant or hire someone full time to work on the framework for you. Some unit test frameworks are jUnit for Java, cppUnit for C++, nUnit for .NET, etc. Gerard Meszaros has written an excellent book called “xUnit Test Patterns: Refactoring Test Code”. In it he talks about “Test Smells”. Essentially, you can sometimes look at a piece of code and say, “This code stinks.” A code or test ‘smell’ is an indication that the code has problems, i.e. it stinks. I have found that by reading Gerard Meszaros’s book I know what to look out for before I write the tests. Originally the book was aimed at people who created unit tests, found the tests had issues, i.e. they ‘smell’, and are looking to fix them, i.e. refactor. By reading the book, I avoid creating the bad unit tests in the first place.

Good luck and have fun!