
Search

To search for specific articles you can use advanced Google features. Go to www.google.com and enter "site:darrellgrainger.blogspot.com" before your search terms, e.g.

site:darrellgrainger.blogspot.com CSS selectors

will search for "CSS selectors" but only on my site.


Wednesday, December 28, 2011

The 9 Oddest Job Interview Questions Asked at Tech Companies in 2011





I recently read an article on Mashable.com about the 9 oddest job interview questions asked at tech companies in 2011. Here they are:


  1. How many people are using Facebook in San Francisco at 2:30 p.m. on a Friday?
  2. If Germans were the tallest people in the world, how would you prove it?
  3. Given 20 ‘destructible’ light bulbs (which break at a certain height), and a building with 100 floors, how do you determine the height that the light bulbs break?
  4. How would you cure world hunger?
  5. You’re in a row boat, which is in a large tank filled with water. You have an anchor on board, which you throw overboard (the chain is long enough so the anchor rests completely on the bottom of the tank). Does the water level in the tank rise or fall?
  6. Please spell ‘diverticulitis’.
  7. You have a bouquet of flowers. All but two are roses, all but two are daisies, and all but two are tulips. How many flowers do you have?
  8. How do you feel about those jokers at Congress?
  9. If you were a Microsoft Office program, which one would you be?

I wondered how I would have answered these.

The first one was immediately obvious to me. The answer is "all of them." Okay, a little facetious, but that is how I'd answer it at first. Sort of an ice-breaker. If they had a problem with that answer and moved on, I'd have a bit of a problem working there. If it got a chuckle, I'd see if they were looking for something else. I think I'd ask for further clarification. What is the motivation for this question? What frame of mind should someone be in to properly answer it? Is there a specific answer you are looking for?

Number 2 seems a little vague to me. As a software tester, ambiguity doesn't work for me. What does "Germans were the tallest people in the world" mean? Is the combined height of all Germans greater than the combined height of every other nation's people? By Germans do you mean people born in Germany? People who hold German citizenship? What about immigrants to Germany? What about people who hold dual citizenship? What about former Germans who emigrated to another country and no longer consider themselves German? In short, the question needs further clarification to be answered.

Number 3 is also a little vague. How accurate does the answer for each bulb have to be? Can we open a window on each floor of the building? Do we know the exact height of each window? Are we to assume all the bulbs break at the same height and there is one answer for all 20 bulbs? If that is the case and we really want to know the floor from which the bulbs will break, drop one from the first floor. If it does not break, go to the second floor and repeat. If no bulb has broken by the time you make it to the 20th floor, go down, collect all 20 unbroken bulbs and start again from the 21st floor. Once you find the floor the bulb breaks on, you will have broken only 1 bulb. If I make it to the 100th floor and no bulb has broken, I'd have to devise some way to go above 100 floors and continue the test.

On the other hand, if the bulbs might break at different heights, I'd have to drop all of them from the first floor, then all surviving bulbs from the second floor, and so on.
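For the all-bulbs-break-at-the-same-height reading of the question, the strategies are easy to sketch in code. This is a rough sketch of my own, not anything from the interview: the floor numbers are assumptions, and the binary-search alternative is simply what having 20 spare bulbs makes possible.

```java
public class BulbDrop {
    // One bulb at a time: walk up one floor per drop. Breaks at most
    // 1 bulb, but can cost up to 100 drops.
    static int linearDrops(int breakFloor) {
        int drops = 0;
        for (int floor = 1; floor <= 100; floor++) {
            drops++;
            if (floor >= breakFloor) break; // the bulb finally breaks
        }
        return drops;
    }

    // With 20 bulbs to spare, a binary search over the 100 floors finds
    // the breaking floor in at most 7 drops (breaking at most 7 bulbs).
    static int binaryDrops(int breakFloor) {
        int lo = 1, hi = 100, drops = 0;
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            drops++;
            if (mid >= breakFloor) hi = mid; // broke: answer is mid or lower
            else lo = mid + 1;               // survived: answer is higher
        }
        return drops;
    }

    public static void main(String[] args) {
        System.out.println(linearDrops(37) + " vs " + binaryDrops(37)); // 37 vs 7
    }
}
```

Either way, the interviewer is probably after the trade-off discussion (drops versus broken bulbs), not the code.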

For number 4, if I had an answer to this I wouldn't be sitting in a job interview at a tech company. I would be implementing my answer.

Number 5 seems like a question about displacement. A real physics question. While the anchor is in the boat, the boat floats and therefore displaces a volume of water whose weight equals the combined weight of the boat, myself and the anchor. If I throw the anchor overboard, the boat rises in the water and now displaces only enough water to match the weight of the boat and myself. The anchor resting on the bottom displaces only its own volume. Because an anchor is denser than water, that volume is less than the volume of water its weight was displacing while it sat in the boat. The total displaced volume therefore goes down, so the water level in the tank falls.
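One way to pressure-test the displacement argument is to put numbers on it. The tank area, masses and steel-density anchor here are all invented for illustration:

```java
public class Displacement {
    public static void main(String[] args) {
        double rhoWater  = 1000.0;    // kg/m^3
        double rhoAnchor = 7800.0;    // assume a steel anchor, denser than water
        double tankArea  = 10.0;      // m^2, assumed
        double massBoatAndMe = 300.0; // kg, assumed
        double massAnchor    = 20.0;  // kg, assumed

        // Afloat, the boat displaces water equal in weight to everything aboard.
        double vAnchorInBoat = (massBoatAndMe + massAnchor) / rhoWater;
        // Anchor on the bottom: the boat displaces less, the anchor only its own volume.
        double vAnchorOverboard = massBoatAndMe / rhoWater + massAnchor / rhoAnchor;

        double levelChange = (vAnchorOverboard - vAnchorInBoat) / tankArea; // metres
        System.out.println("level change: " + levelChange + " m");
        System.out.println(levelChange < 0 ? "water level falls" : "water level rises");
    }
}
```

Because the anchor is denser than water, the volume it displaces on the bottom is smaller than the volume of water its weight displaced while aboard, so the arithmetic comes out negative: the level falls.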

Kind of hard to mess up number 6. This is a question which only works verbally. Since I can see the spelling it is pointless. Not sure what they are trying to test here. They like people who are good at spelling bees?

Number 7 would have to be 3 flowers: 1 rose, 1 daisy and 1 tulip. With 3 flowers, if I have 1 rose then all but two are roses (3 - 2 = 1 rose), and the same arithmetic holds for the daisy and the tulip.

Number 8 seems to assume I care about American politics and have an opinion on Congress. It is a leading question. Personally, I'd answer it with, "I don't believe in mixing politics, religion and work. Since this is a job interview, no politics or religion please."

And for number 9, I'd have to take a moment and think about it. I'd need to know myself, then I'd have to relay how a Microsoft Office program could be analogous to the traits I like about myself. Outlook is good for communication. Word is probably the most popular program and good at the most jobs. Excel is great for finance, budgeting and invoicing. PowerPoint is good for conveying ideas and is used in presentations which inform and teach. Do we include Messenger? I think the key to this one is knowing yourself. If you can describe any Microsoft Office application as exhibiting the same traits, you're probably giving a good answer. On the other hand, you might want to say you could never limit yourself to one Microsoft Office program. Like the full Office suite, you do it all.

Personally, I try to avoid interview questions like these. Often the interviewer just thinks the answer is creative or smart, and that if you can get it you must be creative or smart. That does not take into account cultural differences, training background, having already heard the question, etc. In other cases the interviewer thinks that by making the question cryptic, it will be harder for the interviewee to know what the interviewer is looking for, and so they'll get an honest answer. Realistically, that doesn't work. If you look at research surveys, they will ask 100 questions. Of those, 20 questions are probably related. The candidate might think they know what I'm looking for on 2 or 3 of those questions, but the majority of the questions will give me honest answers.

In the end, these are all games and statistically, employers should find a good candidate. They might not find the best candidate but they'll never know because the person they hire will be okay and possibly even great.



Selecting WebDriver locators

When you are using WebDriver for test automation a test case really boils down to:
  1. Find an element
  2. Perform an action
  3. Confirm the expected result
Finding an element and confirming the expected result require locators. In WebDriver a locator is a way of uniquely identifying an element on the web page, i.e. in the Document Object Model (DOM). The By class is used in WebDriver to locate elements. You have:
  • By.className(className);
  • By.id(id);
  • By.linkText(linkText);
  • By.name(name);
  • By.partialLinkText(linkText);
  • By.tagName(name);
  • By.cssSelector(selector);
  • By.xpath(xpathExpression);
The most powerful locators are CSS and XPath. All the other locators can actually be expressed using CSS or XPath. For example:

Original                    | CSS                             | XPath
By.className("foo");        | By.cssSelector(".foo");         | By.xpath("//*[@class='foo']");
By.id("bar");               | By.cssSelector("#bar");         | By.xpath("//*[@id='bar']");
By.linkText("Click Me");    | N/A                             | By.xpath("//a[text()='Click Me']");
By.name("fee");             | By.cssSelector("[name='fee']"); | By.xpath("//*[@name='fee']");
By.partialLinkText("some"); | N/A                             | By.xpath("//a[contains(text(),'some')]");
By.tagName("div");          | By.cssSelector("div");          | By.xpath("//div");

In addition to the simple locators, CSS and XPath can select more complex elements. Rather than saying I want all the DIV tags, I can say I want all the DIV tags whose parent is a SPAN. In CSS this would be "span>div" and in XPath this would be "//span/div".

The combinations are endless. For each tag in the XPath or CSS I can add multiple identifiers. So I could have locators for things as complex as "all DIV tags, with a name containing 'foo' and a class equal to 'bar', whose parent is a TD, but only in the TABLE with id='summary' and class equal to 'humho'".
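You don't need a browser session to experiment with expressions like these. The JDK ships its own XPath engine, so you can sanity-check a locator against a scrap of markup before putting it in a test. The markup in this sketch is made up for the example:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class XPathCheck {
    public static void main(String[] args) throws Exception {
        // Made-up markup: one DIV inside a SPAN, one DIV that is not.
        String page = "<body><span><div name='a'/></span><div name='b'/></body>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(page.getBytes("UTF-8")));

        // "//span/div": only DIV tags whose parent is a SPAN.
        NodeList hits = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate("//span/div", doc, XPathConstants.NODESET);
        System.out.println(hits.getLength() + " match(es)"); // 1 match(es)
    }
}
```

Real pages are messier than XML (an unclosed tag will make the parser choke), but for reasoning about what an expression actually selects it is a quick feedback loop.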

The first thing to understand is that CSS will be noticeably faster than XPath when testing against Internet Explorer. Your tests could run as much as 10 times slower (something which runs in a day on Firefox could take a week on Internet Explorer) when using XPath.

So the first thing to remember is CSS is better than XPath. However, some things are easier to express as XPath. So occasionally you might need to use XPath.

If you have a selector like "html>body>table>tbody>tr:nth-child(2)>td:nth-child(3)>a" it might work, but if the developer finds it does not format nicely on Chrome, they need to throw in a DIV. So the selector changes to "html>body>div>table>tbody>tr:nth-child(2)>td:nth-child(3)>a". Later a new version of Internet Explorer comes out and the developer finds they need to add a SPAN to make it look proper on the new Internet Explorer and still look okay on older versions. So the locator becomes "html>body>div>table>tbody>tr:nth-child(2)>td:nth-child(3)>span>a".

If we spend all our time maintaining the locators, it could end up that the cost of maintaining the automation is greater than running the tests manually. In which case the automation is deemed a failure.

So you have to start looking for patterns. Is there something I could use on the first version of the application which also works on the second and third version? Can I predict a locator which will work on the fourth and subsequent versions?

Often the underlying technology changes but it continues to look the same to the user. So is there something visual I can use which will not change? In this example, the text for the anchor probably never changed. So I'd use the By.linkText("whatever"); locator or By.xpath("//a[text()='whatever']");.

What if I find myself changing locators because sometimes the text is "  whatever", sometimes it is "whatever" and other times it is "whatever  "? Then I'm going to use By.partialLinkText("whatever"); or By.xpath("//a[contains(text(), 'whatever')]");.
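The difference between the exact-text and contains() locators is easy to demonstrate with the same JDK XPath engine. The three anchors here, with their whitespace variants, are invented for the example:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class TextMatch {
    public static void main(String[] args) throws Exception {
        // Three links: leading spaces, trailing spaces, exact text.
        String page = "<body><a>  whatever</a><a>whatever  </a><a>whatever</a></body>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(page.getBytes("UTF-8")));
        XPath xp = XPathFactory.newInstance().newXPath();

        // Exact match finds only the third link; contains() finds all three.
        NodeList exact = (NodeList) xp.evaluate(
                "//a[text()='whatever']", doc, XPathConstants.NODESET);
        NodeList partial = (NodeList) xp.evaluate(
                "//a[contains(text(),'whatever')]", doc, XPathConstants.NODESET);
        System.out.println(exact.getLength() + " vs " + partial.getLength()); // 1 vs 3
    }
}
```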

The danger is that there might be two links which contain the substring "whatever". I need to make sure I am selecting the correct link. So the locator might need to be more complex. It might need to be partial text plus parent information. For example, say the text appears in two different tables and I want the text from table 2. If table 2 has id='foo2' then the locator might be:

  • "table#foo2 a"
  • "//table[@id='foo2']/tbody/tr/td/a[contains(text(),'whatever')]"
The first locator assumes there is only 1 anchor in the table with id 'foo2'. This might not be true in all cases. The second locator finds the same table, but it searches all rows (TR) and all columns (TD) for an anchor (A) whose text contains the substring "whatever". This can be extremely slow, especially for large tables.

Finding the balance between locators which are too long and too short can be an art. The trick is to pick something. If it requires maintenance, pick a new locator which works on the previous versions and the new version. As you continue to maintain the locators you will see a pattern. You will start to see that some chunks of the HTML never change while the HTML outside those chunks does change (so keep the locator short enough to stay inside the chunk that does not change). Within the chunk there might be multiple matches if you make the locator too short. So figure out, within that chunk, what makes the element you want different from all the other matches.

So how do I look at the DOM? I need to see what the DOM looks like to be able to see all the possible locators which would work.


If you are using Internet Explorer 8 or higher you can press F12 to open the developer tools. If you are using Firefox you need to install Firebug then F12 will open Firebug. If you are using Chrome then CTRL-SHIFT-I will open the developer tools.

Beyond that, the only tool I use is my brain and the W3 standards.

Reading the W3 standards (or any standards documentation, ISO, ANSI, IEEE, etc.) can be difficult at first. Especially if you have been learning from books like "Web Design in 21 Days" or "Software Testing for Dummies." However, the more you read and understand standards documentation, the easier it gets to read other standards documents. If generating XPath was easy enough for a piece of software then why would they pay you to do the work? There are probably a dozen XPath locators for any given element on a page. Some will work once and need to be revised on the next release of the application. Some will work within the design pattern of the application and might never need updating. There is no way for a piece of software to spot the design pattern and know which locator will work best. This is what they pay you to do.

Excessively long XPath is brittle and will need a great deal of revising from release to release. Extremely short XPath will sometimes find the wrong element between releases. This leads to a test which fails unpredictably and can be difficult to debug. Not something you want in an automation suite. Finding the right balance is your job. The first time you select a locator it might need revising for the next release. You need to look at why you selected the first locator when selecting the revised locator. The second locator should work for the first release and the second release. When the locator fails, you need to select a new locator which would have worked on the first release and all subsequent releases, including the next release. After a while you should start to see the pattern. The pattern is usually derived from some design pattern the application is being developed with. Learn about Design Patterns, it will be extremely helpful in generating good test automation. If the developers change the tools, libraries, design patterns, etc. you should expect the locators to fail. At this point, selecting a locator which works with the next release but does not work with the previous release makes sense. Major change in development usually implies major change in test automation. It would be difficult for a tool to realize when it needs to abandon old locators.

Essentially, automation is all about finding elements (locators), performing actions on them, confirming the expected results (usually involves more locators). Two thirds of the work is about the locators. Learning XPath, CSS and DOM will make your job that much easier.




When possible, use CSS selectors as they are faster. Some things are easier to locate using XPath and its functions. It is better to have a test that runs slowly but is easy to maintain. So if the CSS selectors are complex and unintuitive, you might want to use XPath functions instead.

This is essentially how I decide on locators.


Friday, December 23, 2011

Your automation must not dictate your test plan

One of the things I see people new to automation doing is selecting what to automate, or how to test an application, based on what the automation tool will let them do. This is a dangerous approach to automation.

The test cases I create for an application are based on what is important to the customer. I want to make sure that the customer experience is a good experience. If I create a set of test cases or user stories which reflect real customer usage of the application then I am most likely to find issues which will affect the customer.

I remember working on a project for 4 years. After 4 years of testing and improving the application we were at a point where over 90% of the features important to the customer were tested and bug free. Of the remaining 10%, we knew of most of the defects and had work-arounds. We were at a point where we were testing extreme edge cases. At this point I found a major defect. The developer looked at the fix and realized the defect had been there since the beginning. In 4 years not one single customer reported this defect. A test for the defect was easy to automate but really added zero value to the application. This is NOT a test case you want to start with when automating.

On another project someone found a defect in a desktop application. The steps to reproduce were:


  1. Run an application not involved in the test case at all (Outlook for example)
  2. Hand edit a project file using notepad or something other than the application it was intended for
  3. Make a very specific change to the file
  4. Run the application under test
  5. Open the corrupted file
  6. Select a feature which relies on the corrupted file
  7. A modal dialog appears telling you the file is corrupt and asking if you wish to repair it
  8. Ignore the dialog
  9. Use CTRL-TAB to switch to a different application not involved in the test case at all
  10. Click on the application under test in a very specific location on the MDI client window


At this point the modal dialog is hidden behind the window with focus and the window with focus appears to be frozen. It is really waiting for you to respond to the modal dialog. This was a design flaw in the operating system. It was virtually impossible to fix in the application under test without a major re-design. It was highly unlikely a customer would be able to reproduce this defect. When the tester showed me the 'locked' state it only took me a few minutes to figure out what was going on. Our customer was typically a software developer with 7+ years of experience.

This was a useless test case. Both this and the previous test case were bad test cases regardless of whether they were executed manually or automated. My point is, the test case came first. Even before we attempted to automate it, we decided whether or not it was a good test case.

Test automation NEVER precedes test case creation or test planning.

Once you know what you want to test and the steps to testing it, you automate those steps.

This is the second mistake I see people new to automation making. They decide WHAT they want to test, but when they start automating it, the steps they generate with the automation are not the same steps as they would perform manually. In this case you have taken the time to create a good set of test cases and thrown them out the door when you start automating. This is not a good idea.

Rather than changing the test case to something which is easy to automate, you need to figure out how to automate the test steps. This is what separates good automation from bad automation.

Many times I have seen a test case automated. It gets run and passes. We ship the application to the customer. He uses the same feature and it fails horribly. Why? Because the steps he used to get to the defect were not the steps we automated. We had a good test case. If an experienced tester had executed the test case manually, they would have found the defect. The person automating the test case just found it easier to automate something close to, but not equivalent to, the test case.

I am currently using Selenium 2.x with WebDriver. One of the changes from Selenium 1.x to 2.x is that you cannot interact with invisible elements. For example, a common trick on a website is to have an Accept checkbox on a download page. If you accept the terms the Download button becomes visible. In Selenium 1.x I could click on the Download button without clicking the Accept checkbox. The REAL test case was:


  1. Go to download page
  2. Click Accept checkbox
  3. Click Download button
  4. Confirm download

What someone would automate with Selenium 1.x was:


  1. Go to download page
  2. Click Download button
  3. Confirm download

The idea is that it saves a step. One less step means quicker to code, runs quicker, one less thing to maintain. You do this a thousand times and it adds up. HOWEVER, the customer would never click on the invisible Download button.

In Selenium 2.x you would get a failure with the shortened test case. People occasionally complain that Selenium 2.x has removed an important feature. They want to know how they can click on the invisible Download button. They come up with tricky Javascript snippets which will allow Selenium 2.x to 'see' and click the Download button. Is a customer going to create a Javascript snippet, inject it into the page and run it just so they can click the Download button? Is a manual tester going to do this? If the answer is no, then why is our automation doing this? If the manual test case calls for clicking the Accept checkbox then our automation should as well. If clicking the Accept checkbox does not enable the Download button, file a bug and move on to something else.

Finally, automation is all about finding elements on a page, interacting with them (clicking, right clicking, typing, etc.) and checking what happened. As a manual tester you are going to use your eyes and hands to do everything. The test case might have a step like, "Locate the folder called 'My Documents' and double click it." This is really two steps. The automation should locate the folder called 'My Documents', this is step 1. It should double click the element, this is step 2. As a manual tester I find the element by looking for the text 'My Documents'. If this is a web page and the HTML is:

<div id='lsk499s'><a href="...">My Documents</a></div>

I am not going to use the div id to find the element. I'm going to use the text. As a manual tester I used the text to find the element. There is no reason to do anything differently with the automation tool.

What if the web page is made to look like Windows Explorer? On the left is a tree view with one of the nodes being 'My Documents' and on the right is a thumbnail view with a 'My Documents' folder. In the manual test case, does it specify which 'My Documents' to double click? If yes, follow the test case. If no, how would you as a tester decide? Do you always go with the thumbnail view? Do you pick randomly? Do you change every test run? If you are a good manual tester, we want that experience captured by the automation. If I would normally change every test run but never test the same feature twice in one day, it might be sufficient to say: if the day of the year is even, double click the thumbnail, else double click the tree view. If the automation gets run daily, it will pick a different way each day.
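The even/odd-day idea takes only a few lines. A sketch, where the view names are placeholders for whichever element the test would actually double click:

```java
import java.time.LocalDate;

public class ViewChooser {
    // Deterministic variation: even day of the year -> thumbnail view,
    // odd day -> tree view. A daily run alternates without being random.
    static String viewFor(LocalDate date) {
        return date.getDayOfYear() % 2 == 0 ? "thumbnail" : "tree";
    }

    public static void main(String[] args) {
        System.out.println(viewFor(LocalDate.now()));
    }
}
```

Being deterministic matters here: when a run fails, you can reproduce exactly which path the test took that day, which a purely random choice would not give you.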

The important thing in all this is that I am a good tester. I write good test plans and test cases. When I decide to automate my good test cases, I should not compromise the quality of my testing just because I am testing with an automation tool rather than manually.

Happy testing!


Wednesday, December 14, 2011

Using the right tool for the job

From time to time I see people asking questions about how to use an automation tool to do something the tool was never meant to do. For example, how do I use Selenium to get the web page for a site without loading the javascript or CSS?

Selenium is designed to simulate a user browsing a website. When I open a web page with a browser, the website sends me javascript and CSS files. The browser just naturally processes those. If I don't want that, I shouldn't use a browser. If I am not using a browser, why would I use Selenium to send the HTTP request?

That is all the get() method in Selenium does. It opens a connection to the website and sends an HTTP request using the web browser. The website sends back an HTTP response and the browser processes it.

If all I want to do is send the request and get the response back unprocessed, I don't need a web browser.
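If you happen to live in code anyway, the Java standard library alone covers this. A self-contained sketch: it starts a throwaway local server to stand in for the website, so there is nothing real to hit, then performs the request/response round trip with HttpURLConnection.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class RawRequest {
    public static void main(String[] args) throws Exception {
        // Throwaway stand-in for the website under test.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            byte[] body = "<html><body>hello</body></html>".getBytes("UTF-8");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();

        // The whole request/response round trip: no browser, no javascript, no CSS.
        URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            System.out.println(in.readLine()); // the unprocessed HTML, exactly as sent
        }
        server.stop(0);
    }
}
```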

So how can I send an HTTP request and get the HTTP response back? There are a number of tools to do this.

Fiddler2: http://www.fiddler2.com/fiddler2/

The way Fiddler works is you add a proxy to your web browser (actually Fiddler does it automatically). Now when you use the web browser, if Fiddler is running, the web browser sends the HTTP request to Fiddler and Fiddler records the request and passes it on to the intended website. The website sends the response back to Fiddler and Fiddler passes it back to the web browser.

You can save the request/response pair and play them back. Before you play the request back you can edit it. You can edit the website address, you can edit the context root of the URL and, if there is POST data, you can edit the data as well.

Charles: http://www.charlesproxy.com/

Charles is much like Fiddler2 but there are two main differences. The first is that Charles is not free. You can get an evaluation copy of Charles but ultimately you need to pay for it. So why would you use Charles? With purchase comes support. If there are things not working (SSL decryption, for example) you can get help with that. Additionally, Fiddler is only available on Windows. Charles works on Mac OS X and Linux as well.

curl: http://curl.haxx.se/

Fiddler and Charles are GUI applications with menus and dialogs. They are intended for interacting with humans. If you are more of a script writer or want something you can add to an automated test, you want something you can run from the command line. That would be curl. Because it is lightweight and command line driven, I can run curl commands over and over again. I can even use it for crude load testing.

The most common use for curl is checking the contents of a web page or that a website is up and running. There are many command line options (-d to pass POST data, -k to ignore certificate errors, etc.) but the general use is curl -o output.txt http://your.website.com/some/context/root. This will send the HTTP request for /some/context/root to the website your.website.com. A more realistic example would be:

curl -o output.txt "http://www.google.ca/search?q=curl"

I could then use another command line tool to parse the output.txt file. Or I could use piping to pipe the output to another program.


Another nice command line tool is wget. The wget command, like curl, will let you send an HTTP request. The nice thing about wget is that you can use it to crawl an entire website. One of my favourite wget commands is:

wget -t 1 -nc -S --ignore-case -x -r -l 999 -k -p http://your.website.com

The options, briefly:

  • -t sets the number of tries. I always figure if they don't send it to me on the first try they probably won't send it to me ever.
  • -nc is for 'no clobber'. If two files are sent with the same name, it will write the first file using the full name and the second file with a .1 on the end. You might wonder, how could it have the same file twice in the same directory? The answer is UNIX versus Windows. On a UNIX system there might be index.html and INDEX.html. To UNIX these are different files, but downloading to Windows I need to treat them as the same file.
  • -S prints the server response header to stderr. It doesn't get saved to the files but lets me see that things are still going and something is being sent back.
  • --ignore-case is because Windows ignores case, so we should as well.
  • -x forces the creation of directories. This will create a directory structure similar to the original website. This is important because two different directories on the server might have the same file name and we want to preserve that.
  • -r is for recursive: keep going down into subdirectories.
  • -l is the number of levels to recurse. If you don't specify it, the default is 5.
  • -k is for converting links. If there are links in the pages being downloaded, they get converted. Relative links like src="../../index.html" will be fine, but if they hard coded something like src="http://your.website.com/foo.html" we want to convert this to a file:// link rather than go back to the original website.
  • -p says to get entire pages. If the HTML page we retrieve needs other things like CSS files, javascript, images, etc., the -p option will retrieve them as well.

These are just some of the tools I use when Selenium is not the right tool for the job.