Thursday, September 25, 2014

Silencing ChromeDriver with WebDriver

While setting up a test environment today we decided to have the tests running on the same machine as the build radiator.

A build radiator takes up the entire display. It shows a green bar for each job on the build server. If someone checks in a change and it breaks a test, the bar turns red and everyone stops to fix the build.

The consequence of this is that the build radiator has to be visible to everyone in the room. Having a browser open on the display is not an option.

So we need to run our WebDriver tests without showing the browser or any other output. Our build server runs Linux, and we can run the tests from the command line using something like:
java org.testng.TestNG testng.xml
where testng.xml is a TestNG test suite file. When we run it like this we see the browser open and the tests executing. The tests were written using ChromeDriver. On the build radiator, however, we don't want the browser opening. The solution is actually quite easy on Linux. We use an application called Xvfb:
xvfb-run --server-args="-screen 0 1600x1200x24" java org.testng.TestNG testng.xml
The command xvfb-run will run the application using the X Virtual FrameBuffer. The --server-args lets us pass arguments to the server. The "-screen 0" tells xvfb to use screen 0. The "1600x1200x24" tells xvfb to make the virtual display 1600 by 1200 with 24 bit depth. If your application has to work on 1024 by 768 and 16 bit colour then you can use "1024x768x16".

When you execute this you will not see the browser open. It almost seems like nothing is happening. The only thing you will see is the output from TestNG (a dot for a pass, an I for an ignore and an F for a failure) and the output from chromedriver. What if you want to look at the logs and see just the output from TestNG, not interlaced with output from chromedriver?

This requires a few changes to the creation of the WebDriver object. Normally, you might have something like:
System.setProperty("webdriver.chrome.driver", "./chromedriver");
WebDriver driver = new ChromeDriver();
but this outputs chromedriver log information to the screen. You could use:
System.setProperty("webdriver.chrome.driver", "./chromedriver");
System.setProperty("webdriver.chrome.args", "--disable-logging");
WebDriver driver = new ChromeDriver();
This will stop most of the output but you will still see the header when chromedriver starts up:
Starting ChromeDriver (v2.9.248307) on port 9515
So how do you get rid of this? I was digging through the code for chromedriver (remember, it is open source) and I found some code where it was checking for the property webdriver.chrome.silentOutput. If this property was set to true then it would run with the silent flag set to true. So I tried:
System.setProperty("webdriver.chrome.driver", "./chromedriver");
System.setProperty("webdriver.chrome.args", "--disable-logging");
System.setProperty("webdriver.chrome.silentOutput", "true");
WebDriver driver = new ChromeDriver();
Sure enough that did it. Complete silence from chromedriver.

Saturday, July 12, 2014


I was recently poking around on my Terminal (Mac OS X) and I noticed one of the shell variables was DIRSTACK. So I checked the man page for the bash shell to see what I could find about it:
man bash
Reading the man page I find DIRSTACK is an array relating to popd, pushd and dirs. Rather than using cd to change to a directory I can use pushd. For example:
pushd ~/Downloads
This will change directory to ~/Downloads plus it will add the directory to the DIRSTACK array. I can add some more to the DIRSTACK using:
pushd ~/Documents
pushd /Volumes
Now if I issue a dirs I will see:
/Volumes ~/Documents ~/Downloads
If you search for popd, pushd and dirs on the bash man page you will find all the settings for these builtin commands:
dirs [-clpv] [+n] [-n]
+n display the nth entry from the left, e.g. +2 will display the entry in position 2, this is zero-indexed
-n displays the nth entry from the right, just like the +n this is zero-indexed, e.g. -0 is the first entry
-c clears the DIRSTACK
-l displays a longer list, e.g. ~ gets expanded to the full directory name /Users/darrell
-p display one entry per line
-v display one entry per line with a number at the start of each line

You might think the -v option just adds line numbers but they are more than that. The numbers are directly related to the -n and +n options. Additionally, I can refer to specific entries in the list using ~n. For example, if dirs -v displays:
 0  ~/Public
 1  ~/Downloads
 2  ~/Documents
 3  ~
then ls -l ~2 will be the same as ls -l ~/Documents. I can also use the tilde notation for popping elements off the stack as well. The next command, popd, has the following format:
popd [-n] [+n] [-n]
-n is literally -n; normally popd changes to the directory you pop, and -n suppresses this
+n removes the nth entry from the left, e.g. +2 will remove the third element from the left (zero-indexed)
-n removes the nth entry from the right, e.g. -1 will remove the second element from the right

The pushd commands looks similar:
pushd [-n] [dir]
pushd [-n] [+n] [-n]
-n is literally -n, and like popd it adds to the stack but does not cd to the new directory.
[dir] will push [dir] on the DIRSTACK then cd [dir]
+n will rotate the stack so the nth directory from the left is at the top
-n will rotate the stack so the nth directory from the right is at the top
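Putting the above together, here is a short session sketch (the directories are chosen only for illustration; any existing directories work):

```shell
#!/bin/bash
# Each pushd changes directory AND records it on DIRSTACK;
# popd removes the top entry and returns to the one below it.
cd /tmp
pushd /usr > /dev/null    # stack is now: /usr /tmp
pushd /etc > /dev/null    # stack is now: /etc /usr /tmp
dirs                      # prints the stack, current directory first
popd > /dev/null          # drops /etc and changes back to /usr
pwd                       # prints /usr
```

The output redirects just hide pushd/popd's habit of printing the stack after every call.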


Friday, July 11, 2014

Interactive Ruby Shell

My current project uses Ruby and has a web testing component to it. The obvious choice for testing a web application with Ruby would be Selenium-WebDriver.

If you are familiar with Ruby you should be familiar with the Interactive Ruby Shell or irb.

If I enter irb at a command prompt I am placed at the Interactive Ruby Shell:

1.9.3-p547 :001 > 
Once you are at the Interactive Ruby Shell you can try things to see how they work. In a compiled language like Java you would have to compile the code into class files then execute them. With Ruby you can actually type the lines out and see what happens immediately. For example, to do the basic Selenium example I can enter:
require 'selenium-webdriver'
driver = Selenium::WebDriver.for :chrome
At this point a Chrome browser should open. If it does not, the most likely problem is that chromedriver isn't in your PATH. Make sure chromedriver is in your PATH before you open the command prompt and run irb. The code above also assumes you have Chrome installed.

Once the browser opens you can do:
driver.methods - Object.methods
All objects in Ruby have a methods method. All objects also inherit from Object. So the line above says: give me all the methods for driver and subtract all the Object methods from the list. What remains are the Selenium WebDriver methods:

 => [:save_screenshot, :screenshot_as, :action, :mouse, :keyboard, :navigate, :switch_to, :manage, :get, :current_url, :title, :page_source, :quit, :close, :window_handles, :window_handle, :execute_script, :execute_async_script, :first, :all, :script, :[], :browser, :capabilities, :ref, :find_element, :find_elements]
From this list I can see all the things I can do with driver, an instance of Selenium WebDriver. So now that I have an instance of WebDriver and I have the browser open I can enter:
driver.get ''
text_field = driver.find_element :id => 'gbqfq'
text_field.send_keys 'Selenium'
puts driver.title
driver.quit
As I type these lines I will see the browser switch to Google, the text 'Selenium' being sent to the search box, the page title printed and then the browser closing (driver.quit).

Friday, June 20, 2014

Desktop automation tools

I was talking to one of my colleagues today. He asked me about testing desktop applications. As someone who predates the Internet, testing desktop applications was not foreign to me. However, I realized that many testers today have only worked on web or mobile device applications.

Testing things like Microsoft Word, Notepad, Calendar or Eclipse isn't something many testers have done.

You wouldn't use something like Selenium, cucumber or Watir to test a desktop application.

So where would you go to find desktop application test tools? The first place I look is At this time this page notes:

  • AutoIt (free)
  • IBM/Rational Functional Tester (commercial)
  • Quality First Software (commercial)
  • Sikuli (free)
  • SilkTest (commercial)
  • Test Automation FX (commercial)
  • Telerik TestStudio (commercial)
In addition to these links I would also check out:
The first site has a wide range of links and is currently actively maintained. Unfortunately, it has more links for web and mobile testing, but there are some desktop application tools listed.

The second link is to sourceforge. It will take you to the recently updated testing links. It includes open source applications for all sorts of testing. You can remove the "recently-updated" filter and see more but I usually restrict myself to recently updated tools. 

If the tool is for desktop testing and has not been recently updated, there is a strong chance it does not work with modern applications or a current operating system. The older the tool, the higher the risk it will not work on Windows 7 or 8, or Mac OS X 10.8 or 10.9.

If you enter a search term, like "testing", then a whole set of menus will appear below the search term and you can narrow things down even further. For example, after entering "testing" into the search text box I can select the OS to be "Mac" and the category "Quality Assurance".

I can also clear all the filters and search for things which might help narrow it down in different ways. For example, if I know how to program in Java and think I might want to contribute to the open source tool, I can filter for things with Programming Language "Java". Additionally, if the desktop application is written using Java and Swing I could search for Programming Language "Java", Category "Testing" and then enter "swing". From the results I find jrobot and gtt (GUI Test Tool).

It is also worth just poking around sourceforge and keeping a mental note of the things you find, because you'll be surprised what you stumble across. For example, JFCUnit is a good tool for testing Java Swing applications but it didn't come up in these searches.

Tuesday, June 10, 2014

What is agile testing?

There are books on agile testing and there are people who claim to be agile testers (as opposed to??). But what does this actually mean?

In recent years I have worked at companies/clients who claim to be 'agile' or I hear of companies who say they have a prescription for 'being' agile.

But agile is a concept and a way of doing software development. To me, the moment you start creating lists or writing down a checklist of activities you need to do in order to be agile is the moment you stop being agile.

Doing agile software development is constantly thinking about how you can be better, faster, more efficient in what you are doing to get your software to the people who can use it or who will be affected by it.

I started being agile probably around 1998. Actually, I was probably trying to be agile before then, but around that time I started working with other like-minded individuals.

The first thing I started noticing was that the developer to tester ratio caused testing to be the bottleneck in the software development life cycle. So I had to start thinking about how I could reduce risk and test more with less effort.

One possible aid was automating tests. Writing automation was not faster than manually testing an application, but if I needed to test the application on Windows 95, Windows 98, Windows 98 Second Edition, Windows ME, Windows NT4 and Windows 2000 then I had three choices: test it fully 6 times; test it fully once and do limited testing on the other 5 platforms (what I was doing before automation); or take 3 times longer automating it (than manually testing it once) and run it on all 6.

Before test automation I would test one platform completely then pick the highest priority tests and run them on the other 5 platforms. This made testing faster but because I didn't test everything on every platform it increased risk.

Writing automation and running it 6 times maintained the risk but allowed me to test everything faster. It was this last idea that made test automation the best way to do all this repetitive testing.

I wasn't agile because I was writing test automation. I wrote test automation because I wanted to be agile. So if you write test automation it doesn't make you agile. If your test automation makes you able to test more without increasing risk then you are being agile.

Now I have seen people write automation which takes days or even weeks to run. In some cases this is not faster than manual testing. Or running the tests and analysing the results might take longer than manually testing everything. It is automated, but it is not agile.

What if we have an emergency fix? Someone finds a defect in production code. They figure out what is wrong and fix it. I am then tasked with testing the new build before we deploy to production. I could run the entire test suite on the new build. Or I could use the source control system to find out the difference between the production code and the fix. I could examine the source code, understand what has changed and manually test only what I determine would be affected by the fix.
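With git, that source-control step might look like the sketch below (the tag name prod-release is made up; use whatever marks your production release):

```shell
# Sketch: list only the files that differ between the production
# release tag and the fix, so manual testing can focus on what changed.
changed_files() {
  git diff --name-only prod-release..HEAD
}
```

Running changed_files in the repository holding the fix prints one changed file per line.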

The best part is that the fix might expose a new problem that the test automation isn't even testing for, but my brain should quickly see the potential danger. It is faster to do this focused testing and I probably have a better chance of spotting new problems an automated test suite would not detect. This is agile: faster and with reduced risk. I could run the test suite as well, but if the tests are still running when we are ready to go to production, we could release to production and decide whether to roll back should something in the test suite fail. We weigh getting to production against the likelihood the tests will fail. I'm not the gatekeeper trying to stop the code from going to production. I am part of the development team trying to improve the quality of the code in production. It is no longer a black and white issue. If it is better than what is in production but still not perfect, we should still release it.

How about testing the right thing and understanding what the right thing REALLY is? Old school testing was when you were given a list of requirements and you simply checked that the requirements were met. This does not mean you implemented a good, high quality piece of software.

As part of agile software development, we have stories. Everyone needs to understand the details of a story. The developer needs to understand the story so they can implement the requirements (or acceptance criteria). A tester needs to understand the story so they can confirm the requirements have been implemented correctly. The key is that the developer, tester, business analyst and possibly the product owner all need to confirm with each other what the requirements are before the story is ready. We are essentially testing the requirements before we test that they have been met.

Can we go even further? At one company I was testing software for a retail store. It was a store I shopped at. I could have claimed to be representative of the customer because I was a customer. But was I really the typical customer? I was not. How did I determine this? I would go to retail stores after work and just observe other customers. I would go to different stores on different days and at different times. Were people on their smartphones? Did anyone use the computer terminals at the entrance? I found the typical customer was much older, or a mom with kids. I saw people with ads and coupons printed out on 8.5" x 11" paper.

I would chat with people in the store. I didn't tell them who I was. I asked them how they found out about sales, where they got coupons, whether other stores did things they wished this store did on its website, etc. I found that if people thought they could get you to implement their personal agenda, or that they were being surveyed, their answers weren't honest. If I was just a chatty shopper in the store I feel I got more honest responses.

So next time you are thinking about being an agile tester, do something different. Visit the customer in their natural environment. Visit different stores and see whether different neighbourhoods have different customers or whether the store always attracts the same sort of person.

Friday, May 16, 2014

Test Driven Development

I've read a lot for and a lot against Test Driven Development (TDD), but I don't often see mention of one thing TDD has really helped me with. If you write a test and then write the code which makes the test pass, the end result is a piece of code which demonstrates how the application code works.

I have used a number of open source applications, and like every library/framework/application I have used, the documentation ends up way out of date. But if you are writing tests for all the features you add (before or after you write the code) you are essentially creating examples of how to use the library/framework/application. So even with the documentation falling out of date, so long as the code has a good base of unit tests, you can figure out how it works.

I find this helpful in multiple ways. When I'm automating I tend to use tools like Selenium. Whenever I need to use a new feature of Selenium I can look at the documentation but ultimately, the unit tests for the framework tell me how it actually behaves. Additionally, the applications I am testing are written by people who understand TDD and have a good set of unit tests for each application. So I find it easy to understand how new features are implemented by looking at the unit tests for the application I am testing.

For example, Selenium has a TakesScreenshot class. If I want to see the tests for it, I take the path of the source file and change 'src' to 'test' (adjusting the file name accordingly). When I look at the latter file I see many examples of how to use the TakesScreenshot class.

Wednesday, April 30, 2014

WebDriverWait versus FluentWait

The WebDriver code waits for the page to load, but today we have dynamic websites using things like AngularJS and Ajax. WebDriver does not wait for the javascript to execute. So you might have something like:
<a href="http://{{env}}">{{env}}</a>
When the page loads, javascript runs and converts {{env}} to some defined value. So on the test environment {{env}} might convert to test, on stage it converts to stage and on production it converts to www. However, WebDriver will not wait for the javascript to make the substitution. The end result is that clicking the element will cause WebDriver to go to the literal "{{env}}" address when we really wanted it to wait for the substitution and go to the real address.

So how do we make WebDriver wait for the variable to be updated by javascript?

The answer is WebDriverWait or FluentWait. Below are examples of how you can use WebDriverWait and FluentWait to wait for the javascript to finish:
import com.google.common.base.Function;
import com.google.common.base.Predicate;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.*;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import java.util.Arrays;
import static java.util.concurrent.TimeUnit.*;
import static org.junit.Assert.assertFalse;

public class TestingFluentWaitAndWebDriverWait {
  WebDriver driver;
  WebElement button;

  @Before
  public void setUp() {
    System.setProperty("webdriver.chrome.driver", "/Users/ThoughtWorks/IdeaProjects/SeleniumTesting/chromedriver2");
    DesiredCapabilities dc =;
    dc.setCapability("driver.switches", Arrays.asList("--start-maximized"));
    driver = new ChromeDriver(dc);
    driver.get("");  // the page with the #gbqfq search box and #gbqfb button
    driver.findElement(By.cssSelector("#gbqfq")).sendKeys("\"Darrell Grainger\"");
    button = driver.findElement(By.cssSelector("#gbqfb"));
  }

  @Test
  public void testWebDriverWait() {
    new WebDriverWait(driver, 3).until(ExpectedConditions.visibilityOf(button)).click();
    assertMyBlogLinkExists();
  }

  @Test
  public void testFluentWait() {
    new FluentWait<WebElement>(button).withTimeout(3, SECONDS)
        .pollingEvery(100, MILLISECONDS)
        .until(new Function<WebElement, Boolean>() {
          public Boolean apply(WebElement w) {
            return w.isDisplayed();
          }
        });;
    assertMyBlogLinkExists();
  }

  @Test
  public void testFluentWaitPredicate() {
    new FluentWait<WebElement>(button).withTimeout(3, SECONDS)
        .pollingEvery(100, MILLISECONDS)
        .until(new Predicate<WebElement>() {
          public boolean apply(WebElement w) {
            return w.isDisplayed();
          }
        });;
    assertMyBlogLinkExists();
  }

  private void assertMyBlogLinkExists() {
    final String linkText = "QA & Testing";
    try {
      new FluentWait<WebDriver>(driver).withTimeout(3, SECONDS)
          .pollingEvery(100, MILLISECONDS)
          .until(new Function<WebDriver, Boolean>() {
            public Boolean apply(WebDriver d) {
              WebElement link = d.findElement(By.linkText(linkText));
              return link.isDisplayed();
            }
          });
    } catch(TimeoutException te) {
      assertFalse(String.format("Timeout waiting for link: '%s'", linkText), true);
    }
  }

  @After
  public void tearDown() {
    if(driver != null)
      driver.quit();
  }
}
All three tests will result in the same outcome.

Tuesday, April 29, 2014

Creating good locators

I am constantly seeing people look for tools which will create locators. I think a lot of us started by using Firebug to examine the DOM and select locators. With Chrome I can now right-click on an element and select Inspect Element.

None of this really helps figure out what makes a good locator. So people are still looking for a tool which will tell them the best locator for an element.

If I go to and inspect the Google Search button I will see:
<button class="gbqfba" aria-label="Google Search" id="gbqfba" name="btnK">
<span id="gbqfsa">Google Search</span>
We might think that the Selenium locator By.cssSelector("#gbqfba") is a good locator. One definition of a good locator is that it needs to be consistent and unique. For years the Google Search button has had the id attribute of gbqfba. So it is consistent. It is an id, and according to the HTML standard there can be only one element per id value. So it is unique.

But is it a good locator? I will argue that it is not a good locator. If I had the following Selenium code:
driver.findElement(By.cssSelector("#gbqfq")).sendKeys("Darrell Grainger");
assertThat(driver.findElement(By.xpath("//a[contains(text(),'Testing')]")).getText(), startsWith("QA"));
it will run consistently over the years. But what happens if something changes and I need to refactor this code? The first line is sending my name to something with id='gbqfq', the second is clicking something with id='gbqfba' and the third is checking a search result. What if we changed the HTML to have:
<button class="gbqfba" aria-label="Google Search" id="gbqfba" data-qa='search button'>
<span id="gbqfsa" data-qa='text for search button'>Google Search</span>
then the Selenium code could read:
driver.findElement(By.cssSelector("[data-qa='search input box']")).sendKeys("Darrell Grainger");
driver.findElement(By.cssSelector("[data-qa='search button']")).click();
assertThat(driver.findElement(By.cssSelector("[data-qa='first search result']")).getText(), startsWith("QA"));
Now if I'm looking at the code and have no knowledge of the website being automated, it should be easy for me to figure out that the first element is the search input box, the second element is the search button and the last element is the first search result.

Essentially, adding the data-qa attribute to the elements we are trying to locate makes it:
  • easier to locate the element
  • easier to read the code when running/refactoring it
  • less brittle
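If you adopt this convention, it can also be worth centralizing the selector construction in one place so the pattern never gets mistyped. A small sketch (the DataQa class name is made up):

```java
public class DataQa {
    // Build a CSS selector that matches an element by its data-qa value.
    static String selector(String value) {
        return "[data-qa='" + value + "']";
    }
}
```

With Selenium you would then write driver.findElement(By.cssSelector(DataQa.selector("search button"))), and a rename of the attribute touches only one method.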
A possible complaint would be that adding these attributes could interfere with other elements on the page. The whole 'data-' syntax is a standard part of HTML5. If there is a chance someone else is going to use data-qa then you can use data-foo, where foo is the name of your application, your company name or your own name.

Another possible concern is that the value might not be unique. It only has to be unique on the given page. If the value is incredibly descriptive then it should be unique. If you make the value of data-qa reflect the information necessary for someone testing the application manually, it should be unique. If it is not unique, then how would a manual tester find THE element if its description was not unique?

I also considered that adding more data to the page would give the CSS and XPath engines more information to index. So I tried loading up a page with these attributes added and could not measure any noticeable impact on performance.

The only other concern I can imagine is that if I add too much data, or too verbose descriptions in the data-qa values, I could increase the amount of data transmitted. More bytes transmitted might impact the performance of the web page. I wasn't able to see any significant impact in this area, but your mileage may vary, so this is something to watch out for.

Tuesday, January 28, 2014

Forcing a remote computer to restart

Occasionally I find I need to reboot a remote computer. In my current situation the remote computer is running Windows XP and is accessed via VNC. However, the VNC server will occasionally stop responding.

If the machine's IP address was and I try:
shutdown /m /r /t 1
it does not reboot the computer. Assuming there is nothing to save on the computer and I just want it to reboot and restart the services, I am happy to just kill processes until I hit something critical and it reboots.

However, if you kill the RPC process, you will be unable to send commands to kill more processes. So you have to be careful about which process you kill. What I have found is that killing the lsass.exe process will cause the computer to reboot. So issuing:
taskkill /im "lsass.exe" /s /f
will kill the lsass.exe process on the remote machine and force a reboot of that computer. The reboot will not be immediate; a 30 second countdown will happen as the system warns you a critical process was killed. However, it will reboot the computer after the countdown is complete.