I was just reviewing some documentation for my new job and came across the term soak test. I had never heard the term before and tried to imagine what it might be.
Turns out the term soak test is essentially a load test conducted over an extended period of time.
A stress test is putting a load on a system which is beyond acceptable limits. The purpose is to see how the system handles the load. Any system will eventually fail under enough load, but a proper system will fail gracefully.
A load test is putting a load on a system which is within acceptable limits. The purpose is to confirm the system will handle what is considered an acceptable load. For example, when a university opens the online course selection system, thousands of students will log on and try to select their courses for the year. We know how many students are enrolled and there is the theoretical possibility they will all submit selections at the same time. That would be an acceptable load test.
I've looked at load testing over a period of time before but never knew it was called a soak test. From the descriptions I have read, you would put a load on the system and leave it there for a period of time. In some examples they talk about leaving the load for days or weeks.
For me, I would record metrics on the system and plot a graph. If any of the recorded metrics increased over time but with no change in the load, I would suspect a failure. For example, if I am recording used memory and the application continues to increase the amount of memory required, I can extrapolate (a) the application will run out of memory and (b) the approximate time when this will happen. There was never a need to run the test for days or weeks.
Technically, this was a soak test. I was running a test with a load for a period of time. However, I took the data from the load test and projected where it would go over time. By doing that I could see trends and determine when they would have an impact.
For example, on a system which is performing well I might see the memory usage go to 10,000 megabytes then plateau. On a system with a memory leak I would see it go to 10,000 megabytes then start to slowly climb. It might be that it leaks 100 megabytes an hour. If the system had 200,000 megabytes of memory for the application, I can predict it will take 1,900 hours or approximately 80 days for an out of memory error to occur.
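To make the extrapolation concrete, here is a minimal Java sketch of the calculation; the numbers are the hypothetical ones from the example above, not from a real test run:

    public class LeakProjection {
        public static void main(String[] args) {
            double plateauMb = 10000;    // steady-state memory once the load is applied
            double limitMb = 200000;     // memory available to the application
            double leakMbPerHour = 100;  // slope measured from the recorded metrics

            double hoursToFailure = (limitMb - plateauMb) / leakMbPerHour;
            System.out.printf("Out of memory in roughly %.0f hours (about %.0f days)%n",
                    hoursToFailure, hoursToFailure / 24);
        }
    }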
If I ran a soak test for 30 days I might assume the application is fine. I would still need to analyze the data to see that it was leaking memory. It would be easier to see the memory leak after 30 days but not always necessary.
Personally, I would run a soak test if I found no obvious problem using load testing and an analysis of the data. If I found a memory leak or data I/O issue without the soak test then a load test is sufficient to find the error.
Once the load test with analysis passes, however, it would be prudent to run a soak test if time and resources allow. You could even run the soak test after releasing the application. If you do find an issue, create a patch.
Tuesday, September 28, 2010
On the move again
Decided the job at Certicom was not right for me. Whenever I am looking for a job I spend as much time preparing and interviewing the company as they spend interviewing me. I want to be sure that I'm a good fit for the company and the company is a good fit for me.
Just like a company, I use the probationary period to decide if I am going to stay with the company. There was no one reason I felt I didn't fit in, but a quantity of small things made me realize I would never be quite happy.
So I'm no longer commuting to Mississauga and have taken a new job within walking distance of my home.
Monday, October 4th I start my new job working at the Ministry of Education. Not sure which technologies I'll be working with just yet but I will try to blog about them here.
Friday, August 20, 2010
New Job
It certainly has been a while since I posted to my blog. For anyone who is curious, I started a job as QA Manager at Certicom Corporation.
Certicom is in the business of cryptography. We create libraries and toolkits for C and Java which are used by customer applications. At the lowest level is Crypto, the cryptographic routines and algorithms used as part of a security solution. Customers like XM Radio use our Security Builder® Crypto™.
The next level up is PKI or Public Key Infrastructure. Security Builder® PKI™ enables you to add robust, standards-based digital certificates and key management to applications and devices, ensuring trust and non-repudiation. Some customers will develop their own Crypto solution for use with our PKI or they will use our Crypto solution with our PKI.
The security most people in the public know about is SSL or Secure Sockets Layer. Our Security Builder® SSL™ product can be used by anyone wishing to implement SSL, either in a client or a server. For example, with our product you can develop a mod_ssl for use with Apache Web Server.
Another term you may be familiar with is VPN or Virtual Private Network. To create a VPN requires IPSec or Internet Protocol Security. This is achieved using Certicom's Security Builder® IPSec™.
In addition to all the publicly available products we create custom solutions for various industries and companies.
The company was recently acquired by Research In Motion. Most people know this company as BlackBerry, which is just a product the company produces.
As QA Manager I have the challenging task of testing all the different implementations of our products. Some are Windows, AIX, HP-UX, Linux, Solaris or Mac OS based but others are built for embedded devices. Our Asset Management System, used in chip manufacturing plants, utilizes Web Services, AJAX, JavaEE and other web technologies as a front end to a complex cryptography solution.
At this time I am actually looking to hire testers for testing these products. It is quite challenging to find people who can do the work. Ideally, they need to know or be able to learn:
- C, C++ and Java programming
- Defect tracking systems
- Test reporting
- Knowledge of Windows, Linux or UNIX
- Experience testing embedded devices or mobile devices, e.g. BlackBerry
- Ability to create test plans
- Working knowledge of test automation
- Unit testing, system testing, integration testing
- Experience with source control
- Knowledge of cryptography
- Development experience or experience testing toolkits and libraries
If you know anyone who fits this description or you believe you are up for the challenge, you can apply to the position at https://rim.taleo.net/careersection/professional/jobdetail.ftl?lang=en&job=188752. If this link does not work, try the following:
- Go to http://www.rim.com/
- Go to the Careers section
- Go to Americas, this should bring you to a Job Search page
- In the keywords field enter: Certicom
Wednesday, June 23, 2010
Understanding how xpath relates to web pages
When using automation tools like Selenium or Watij you often find yourself creating an xpath to find an element. From talking to a few people, there seems to be a lack of understanding of how an xpath relates to a web page.
A web page is merely a group of blocks inside blocks. To illustrate I have the following image:
I think the step which is missing for most people is understanding how to look at a web page.
Imagine the outer block is the <BODY> of the web page. Inside the outer block, i.e. the BODY, are two rectangles. Let's say they are <TABLE> elements. The top table, i.e. /HTML/BODY/TABLE[1], has one row and three columns. The lower table, i.e. /HTML/BODY/TABLE[2], has three rows and four columns.
Let's say that both tables have one row of cells where the class was 'foo', i.e. <TD class='foo'>. If I wanted to find all cells with class='foo' and the text contained 'bar' I would use:
//TD[@class='foo' and contains(text(), 'bar')]
But what if I wanted to search only the second table? Then I would use:
//TABLE[2]/TBODY/TR/TD[@class='foo' and contains(text(), 'bar')]
Essentially, the longer the xpath the smaller the area of the web page I am searching. Using //BODY will search the largest square in my example. Using //BODY/TABLE[2] will search the lower table or the second level in.
If you look at the third row of the lower table you can see the 'cells' contain another level of element. Let's say that the cells, i.e. <TD>, contain a <DIV>. Using //TABLE[2]/TBODY/TR[3]/TD/DIV[1] focuses on the first div in the last row of the lower table.
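In HTML terms, the layout being described might look roughly like this (the class and text values are made up for illustration; note that browsers insert the TBODY element into the DOM even when the markup omits it):

    <html>
    <body>
      <!-- /HTML/BODY/TABLE[1]: one row, three columns -->
      <table>
        <tbody>
          <tr><td>a</td><td>b</td><td>c</td></tr>
        </tbody>
      </table>
      <!-- /HTML/BODY/TABLE[2]: three rows, four columns -->
      <table>
        <tbody>
          <tr><td class="foo">bar 1</td><td></td><td></td><td></td></tr>
          <tr><td></td><td></td><td></td><td></td></tr>
          <tr><td><div>first div</div></td><td></td><td></td><td></td></tr>
        </tbody>
      </table>
    </body>
    </html>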
Labels:
automation,
Selenium,
Watij,
xpath
Wednesday, June 9, 2010
Creating Time
I've started a new job and I'm working with my staff to see how things are done. One of the problems I see in many organizations is 'not enough time'. There never seems to be enough time in the day/week/month to get everything done.
This will always be true because there is always a pressure to get to market before the competition.
However, I have found ways of getting more done compared to someone else in the same time period. The trick is to look for lost moments.
Yesterday I was talking to a staff member while he set up an appliance. The process took 5 minutes of running commands and answering questions and around 15 minutes of the software getting installed. I asked him a question and he immediately paused to answer me. He was literally one key press away from the point where the setup no longer required his attention. I stopped him and told him to press ENTER first.
When you look at what he was going to do:
- Enter 99% of the interactive portion of setup (4 minutes, 59 seconds)
- Answer my question (10 minutes)
- Press ENTER to finish the interactive portion of setup (1 second)
- Wait for the batch portion of setup to finish (15 minutes)
- Total running time = 30 minutes
The way I would do it would be:
- Enter 99% of the interactive portion of setup (4 minutes, 59 seconds)
- Ask me to wait for a moment (3 seconds)
- Press ENTER to finish the interactive portion of setup (1 second)
- Answer my question (10 minutes)
- Wait for the batch portion of setup to finish (5 minutes)
- Total running time = 20 minutes, 3 seconds
With microprocessors, you save a cycle here, a cycle there. It does not look like a lot, but look at the ratio: I have 1 cycle saved for every 12 cycles spent. This means 1/12, or over 8%. In real time, saving 8% of an 8-hour day means an extra 40 minutes a day. Over the course of a week I have an extra 3 hours and 20 minutes.
This is how you create extra time. I will also use tools like expect to automate a process. Rather than typing responses to all the prompts of a Bourne shell script, I will do the following:
- Run the script using sh -x <script-name>
- This will output everything which is happening
- Take the output and determine what all the prompts are
- Use a tool like expect to script the response to the Bourne shell script
- Use the expect script and, while it is running, do something else (see the sketch after this list)
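Here is a minimal expect sketch of that idea. The prompts and answers are hypothetical; the real ones would come from the sh -x output:

    #!/usr/bin/expect -f
    # Hypothetical prompts gathered from watching the script run with sh -x;
    # adjust the expect/send pairs to match the real questions.
    set timeout 60
    spawn sh setup.sh
    expect "Enter hostname:" { send "appliance01\r" }
    expect "Enter admin password:" { send "mySecretPassw0rd\r" }
    expect "Proceed with install? (y/n)" { send "y\r" }
    # The batch portion now runs unattended; wait however long it takes.
    set timeout -1
    expect eof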
Important to note however is that if you have 5 or 10 minutes while you are waiting for something, don't switch to a task that requires you to change what you are thinking about. Getting back into the right mindset for the thing you are waiting for could cost you 5 or 10 minutes. In that case there is no real savings.
So think about the times you are waiting for something and what you can do while you are waiting. Think about what tools you can use to create situations where you will be waiting and therefore able to do something else.
One last parting example: if you are using automation tools that take over your computer (they create mouse and keyboard activity), set up a VMware image and run the tool inside the VM. While the automation is running, minimize the VMware window and do something else on your desktop.
Sunday, May 9, 2010
Down on your luck? We can make it worse.
The scammers are at it again. I recently received an email from ECIT GROUP LTD. The email appears to be coming from Workopolis. The From address appears to be Workopolis.com <workopolis.jobs@workopolis.ca>.
If you receive an email via GMail, you can open the menu with "Reply" and there will be an option for Show Original. When you show the original it will display the full email header. Here is the email header for the message I received:
Received: by 10.231.60.10 with SMTP id n10cs78111ibh; Fri, 7 May 2010 15:54:15 -0700 (PDT)
Received: by 10.227.154.11 with SMTP id m11mr642898wbw.165.1273272854287; Fri, 07 May 2010 15:54:14 -0700 (PDT)
Return-Path: <pbcpttnl@kristell.webhosters-direct.com>
Received: from kristell.webhosters-direct.com (kristell.computerdirect.dedicated.redunix.net [193.34.167.216]) by mx.google.com with ESMTP id e7si7418364wbb.9.2010.05.07.15.54.13; Fri, 07 May 2010 15:54:14 -0700 (PDT)
Received-SPF: neutral (google.com: 193.34.167.216 is neither permitted nor denied by best guess record for domain of pbcpttnl@kristell.webhosters-direct.com) client-ip=193.34.167.216;
Authentication-Results: mx.google.com; spf=neutral (google.com: 193.34.167.216 is neither permitted nor denied by best guess record for domain of pbcpttnl@kristell.webhosters-direct.com) smtp.mail=pbcpttnl@kristell.webhosters-direct.com
Received: from pbcpttnl by kristell.webhosters-direct.com with local (Exim 4.60) (envelope-from <pbcpttnl@kristell.webhosters-direct.com>) id 1OAWQv-0005e2-PJ for my.address@gmail.com; Sat, 08 May 2010 00:54:13 +0200
To: my.address@gmail.com
Subject: Executive Financial Manager Position
From: Workopolis.com <workopolis.jobs@workopolis.ca>
Reply-To: recruiter@ecitgroup.co.uk
MIME-Version: 1.0
Content-Type: text/html
Content-Transfer-Encoding: 8bit
Message-Id: <E1OAWQv-0005e2-PJ@kristell.webhosters-direct.com>
Date: Sat, 08 May 2010 00:54:13 +0200
Most email programs will take bits from the email header and display them, hiding the rest. You'll see the Subject, To, From and Date fields. You might see the Reply-To address as well. When you click reply, you will almost definitely see the Reply-To information.
Notice how the From address and the Reply-To address are completely different. That should be your first clue something is odd. Above the To field is the route the email took to get to me. You don't need to know a lot of the details but just look at all the machine names and notice that workopolis.ca does not appear in anything above the To field. On a quick glance it looks like the email came from kristell.webhosters-direct.com. I haven't checked but I would guess that www.webhosters-direct.com is a web hosting company. Anyone can buy an account from them and set up email accounts and a website.
The idea is, the criminal creates an email account and possibly a website using a general hosting company. Anyone can do this, including you. They will pay for the account in such a way as to make it hard to find out who they really are (stolen credit card). Minutes after they create the email accounts they spam millions of people with their scam email. A few hundred people respond and give them personal information (bank information, credit card, identity, etc.). They take all that information and disappear.
People like me will report them and MAYBE someone will investigate. They will find the site was created with a stolen credit card. The criminal logged in from a stolen account. Usually the criminal will log into one company, from there log into a second company, then a third computer, and so on, using different ways of logging in and hoping that somewhere in that chain of logins someone is not keeping a record of the log in. For example, the criminal might have installed a virus on your computer and used your computer to log into the web hosting site. So when the police get the logs from the web hosting site, it leads back to your computer. When they get a warrant for your computer they find a virus on your computer and no logs. Dead end.
The best thing you can do is (a) don't fall victim to these scams and (b) tell your friends.
General rule of thumb is that NO ONE asks for personal information via an email. This scam says:
Have a chequing account at ROYAL BANK OF CANADA (RBC)

They are telling you right up front you have to have an RBC chequing account. This immediately rings warning bells for me. Other warning bells:
- They promise you money. Sometimes it is millions but sometimes it is just a good income.
- They want personal information. Not just credit card information. When you call RBC they validate you are who you say you are. The more information I can gather about you, the better chance I have of pretending to be you.
- The From address and the Reply-To address are completely different.
- They continually claim to be reputable. We are recognized by the Better Business Bureau, we are registered with the Chamber of Commerce, etc. Maybe they even are. But why do they feel the need to convince you they are legitimate?
Bottom line, if it sounds too good to be true it probably is. If something doesn't seem right, i.e. you have an odd feeling about it, no matter how small, investigate. I'd say 10 times out of 10 that odd feeling turns out to be right on the money and they are a scam.
Remember that the scammers have been doing this for years. They are MUCH better at this than you are. If logically it seems legitimate but your gut says something is wrong, your gut is probably right. The only way for you to find out what you are missing is to be taken or to find someone else who has been taken. Sometimes I walk away from a situation without solid proof it is a scam but (a) at least I wasn't taken and (b) sooner or later I find out what the scam was and realize I was right to walk away.
Thursday, May 6, 2010
Organizing your automation
It has been a while since I have posted something to the blog. Life has been keeping me busy with things other than software testing.
I've been to a number of interviews in the past few weeks and I've been asked a number of questions. One of those questions was regarding traceability. If I have a number of automated tests and a set of requirements, how do I connect the two and how do I determine what has been run and what has not?
The first part to this is how the requirements are organized. If there is a tracking system and some unique identifier for the requirements, I want to use that identifier in my test automation. For example, at a previous job the defect tracking system was used to manage requirements. In addition to categories like 'defect', 'enhancement', etc. there was a 'new feature' category. The business analyst would file a 'new feature' request into the system. Using mock-ups and text descriptions, they would flesh out the requirements.
Each defect report has a unique tracking number. I would need to incorporate this number into the test automation. There might be one number for the requirement but a number of different test cases. Additionally, the number might be just a number, e.g. 28364. In the automation environment this might not stand out as a requirement. For this reason I would develop a convention where all requirements references would start with REQ-. Thus REQ-28364 could get added to the test case automation.
Ideally, you want the automation to be self-explanatory. If a test case fails, it would be helpful to know what failed without having to look up the requirements documentation. With automation like Selenium RC or Watij (both using Java) I can name the test case class after the requirement number, e.g. REQ-28364, but if I am looking at the test suite or test results it might not be obvious what this is. So I would create an annotation, @REQ, and put the requirement information in the comments of the source code.
The name of the class can then be used to indicate what the new feature is. The name of each test case would be a description of what is being tested. For example, if I'm adding Print support to an editor I might have the class name "PrintSupport" or "NewFeaturePrintSupport". The test cases might be (see the sketch after this list):
- canPrintAllPages
- canPrintAllEvenPages
- canPrintAllOddPages
- canPrintRangeOfPages
- canCancelPrintDialog
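As a sketch of the convention, assuming JUnit 4 style tests; the requirement number is the example one from above and the test bodies are placeholders:

    import org.junit.Test;

    /**
     * New feature: Print support for the editor.
     *
     * @REQ REQ-28364
     */
    public class PrintSupport {
        @Test
        public void canPrintAllPages() throws Exception {
            // drive the print dialog and verify every page is sent to the printer
        }

        @Test
        public void canPrintRangeOfPages() throws Exception {
            // select a range, e.g. pages 2 to 5, and verify only those pages print
        }

        @Test
        public void canCancelPrintDialog() throws Exception {
            // open the print dialog, cancel it and verify nothing prints
        }
    }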
When I look at the results for the test run I would see:
- PrintSupport->canPrintAllPages: Pass
- PrintSupport->canPrintAllEvenPages: Pass
- PrintSupport->canPrintAllOddPages: Pass
- PrintSupport->canPrintRangeOfPages: Pass
- PrintSupport->canCancelPrintDialog: Fail
Very easy to see what is not working and what is.
The most important thing for tying requirements and automation together is creating a convention and sticking to it. To help you stick to it, edit the templates for the IDE. Whenever I create a new Test Case class, the template will have an @REQ field in the comments at the top of the class. I can even go one step further and have source control check for the @REQ field. If the field does not exist or is blank, the check-in of my automation will fail with an error telling me to add a requirement reference to the source code.
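As a rough sketch, assuming Subversion as the source control system, a pre-commit hook along these lines could enforce the convention; the paths and the .java filter are assumptions:

    #!/bin/sh
    # Hypothetical Subversion pre-commit hook: reject Java sources
    # committed without an @REQ requirement reference.
    REPOS="$1"
    TXN="$2"
    SVNLOOK=/usr/bin/svnlook

    STATUS=0
    for f in `$SVNLOOK changed -t "$TXN" "$REPOS" | awk '{print $2}' | grep '\.java$'`
    do
        if ! $SVNLOOK cat -t "$TXN" "$REPOS" "$f" | grep -q '@REQ'; then
            echo "Missing @REQ requirement reference in $f" >&2
            STATUS=1
        fi
    done
    exit $STATUS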
Labels:
automation,
development,
tips
Thursday, March 25, 2010
How to develop an automation framework for a legacy application
If you join a team testing an application which already exists and may even be released to the customer but there is no automation in place, how do you start?
For many this can be an overwhelming task. The real answer is, one feature at a time.
Let's take for example the last project I worked on. It was created years before I joined the company and had a small base of customers who depended on it and tolerated the quirks and bugs of the system.
The first thing to do is pick an automation tool that will work for the application. Talk to the developers, business analysts and stakeholders to get a feel for where the project is going and keep that in mind. For example, the current requirements are Windows and Internet Explorer 7. In the future, we would like to support Windows, Linux, Solaris, etc. and we want support for any and all browsers (Internet Explorer, Firefox, Opera, Safari, Mozilla, etc.). Additionally, we also have a Windows Mobile application but we would like to change to using the web browser on iPhone, Palm Pre or BlackBerry. So do we use a tool that supports all these possible future combinations, or do we use a tool that works for the current requirements, given that the future requirements aren't written in stone and 95% of our customers for the next few years will be on Windows with Internet Explorer 7 (or IE8 in IE7 compatible mode)?
The tools I found were either a very mature product which only supported Internet Explorer or a newer product which supported everything but wasn't quite as mature. I selected the more mature product, Watij, which only supported Internet Explorer. The majority of our customers were Internet Explorer users, while all the developers did their work using Firefox. The defects which appeared in one web browser and not another were typically layout issues. To date, all the layout issues were in Internet Explorer because the developers were using Firefox to do their design.
Next you want to think about code maintenance. This might seem strange; we don't have a single line of code and I'm thinking about how I'm going to maintain this code that doesn't exist.
Think about successful products. Microsoft Office was released 20 years ago. JBoss was started over 10 years ago. Photoshop 1.0 was released 20 years ago. The last proven web based application I tested was started in 2004 and is still going strong. Basically, a successful software project can exist for 10 to 20 years. How long do you think you have to create a test framework? If the software is going to be released within one year, the project manager is going to expect the framework and initial test cases to be written in the first release cycle. So you have less than a year to create your framework. Thus, 6 months to 1 year to create and use the framework on, hopefully, a project which will last for 10 to 20 years. So 90% of your time is going to be on maintenance. Even if we expect the framework to be used for 5 years you are still looking at 80% of your time spent on maintenance.
So plan out how to break the framework into small, manageable pieces. Look at something like a library. Take for example the Java APIs. There are over 200 packages with over 3700 classes. A typical class might have dozens of methods. Did this happen overnight? Absolutely not. James Gosling started work on Java (aka Oak) in 1991 and by 1995 v1.0 was released. So the 3700+ classes were developed over 19 years.
Creating a successful framework doesn't mean all the code has to be in place at version 1.0. It just means the structure has to be there. If you look at version 1.0 of Java, it was object oriented, it had packages, constructors, exceptions, etc. All the basic functionality of the language which exists today was there in v1.0. Was there support for XML, Xpath, SQL? I'm not sure but there didn't need to be. There just had to be enough to produce something useful.
So your framework just needs to be structured in such a way that it will grow into something we can use 10 years from now. So how do you create something which will be comparable to Java, C++, etc.?
As the saying goes, a dwarf standing on the shoulders of a giant may see farther than a giant himself. Borrow the design of something like Java. If the language you are using for automation is going to be object oriented, use proven object oriented designs. Leverage the work of James Gosling. Look at the application you are testing. Can you break it apart into sub-sections? For a web application you have pages. Each page will have a variety of actions. Some will alter the current page using JavaScript and some will load a new page. Focus on the functionality of just that page. The new page loading will be in some other package, class, method.
Looking at things like the MSDN library, I noticed the original libraries were very detailed. To accomplish one thing you often had to call numerous library functions. Over the years Microsoft recognized that most programmers will call:

    result1 = functionA();
    result2 = functionB(result1);
    result3 = functionC(result2);
    result4 = functionD(result3);
    print result4;

So they created a new library where you called:

    print functionABCD();

If you look at the existing code in Microsoft you will see:

    functionABCD() {
        result1 = functionA();
        result2 = functionB(result1);
        result3 = functionC(result2);
        result4 = functionD(result3);
    }

You want to build things up the same way. You can have separate projects, separate packages, different naming conventions. The choice is all up to you. Just remember that the division will look silly at first but by this time next year you will be happy you broke it down as much as you did. What I mean is you might find you have 2 packages, each package has 3 or 4 classes, each class has 5 or 6 methods. In a few years time you should find you have hundreds of packages, each package has dozens of classes and each class has numerous methods.
Additionally, do things on the page translate to data structures that need to get passed around? For example, on a web page you will have forms to fill in. Later you might need to edit the data you input. The data required for creation (filling in the initial form) and the data required for editing will be the same data. So create data structures that mirror the form. If the data on the form changes, you just need to edit the data structure. Use getters and setters to obfuscate away the implementation of the data structure. For example, you might store the date string as a Date or Calendar object. Later the web page may change from a text field to a read-only text field with a Calendar widget. If you hardcoded a string for the date field you will have to go and clean up all your automation. If you successfully obfuscated away the data type, you should be able to just update one library call and the automation will continue to work. Imagine they decide to change the web form two years from now. You have 4,879 test cases which fill in that form. How long will it take to find and fix 4,879 test cases? How long will it take to update one library call?
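A sketch of such a form-mirroring data structure; the class and field names are invented for illustration, and the point is that the internal representation can change without touching the test cases:

    import java.util.Calendar;

    public class RegistrationData {
        private String username;
        private Calendar startDate = Calendar.getInstance();

        public void setUsername(String username) { this.username = username; }
        public String getUsername() { return username; }

        // The form takes the date as text today; if the page later switches
        // to a calendar widget, only this getter/setter pair changes.
        public void setStartDate(String mmddyyyy) {
            String[] parts = mmddyyyy.split("/");
            startDate.set(Integer.parseInt(parts[2]),      // year
                          Integer.parseInt(parts[0]) - 1,  // month is 0-based
                          Integer.parseInt(parts[1]));     // day of month
        }

        public String getStartDate() {
            return String.format("%02d/%02d/%04d",
                    startDate.get(Calendar.MONTH) + 1,
                    startDate.get(Calendar.DAY_OF_MONTH),
                    startDate.get(Calendar.YEAR));
        }
    }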
Once you have decided on how to structure the framework you can start writing test cases. You might have noticed, I talked about designing the framework but I didn't write any of the code. The reason for this is because you want to have the test cases drive the code creation. If you have test plans in place and the priority of the test cases (conducted manually) is known, then the order you want to automate the test cases is also known. Just as you are not going to start manually testing a minor feature before you test a major feature, you are not going to start automating a minor feature before you automate a major feature.
My first test case might be something like:

    class MyFirstTestCase {
        public void setUp() throws Exception {
        }

        public void testLoggingIn() throws Exception {
            String username = getProperty("username", "darrell");
            String password = getProperty("password", "mySecretPassw0rd");
            loginPage.goToTheLoginPage();
            loginPage.logIn(username, password);
            assertTrue(homePage.assertHomePage());
        }

        public void tearDown() throws Exception {
        }
    }

If you enter this into an IDE like IntelliJ/IDEA or Eclipse you will get a lot of error messages. The getProperty method does not exist. The loginPage object does not exist. The goToTheLoginPage method does not exist. But the IDE has a helpful feature: it will make suggestions as to how to fix the errors.
It will tell you the getProperty method does not exist and ask if you want to create it. You can create it inside the current test class, but won't other test classes need to get properties as well? So maybe you want to extend the test class and put the getProperty method in a super class. So I would go to the class definition and add an extends statement. Now I get an error in the extends statement. So I take the suggestion to create a new super class. The class will be empty and the error will go away. Now when I deal with the getProperty error, one of the suggestions is to create a method in the super class, so I do.
As you resolve each error, the test framework starts growing. You keep adding in more and more code. Once you have resolved all the errors in your first test case, you have JUST enough code in the test framework to run one test case. The test case should now be runnable and you can add it to the nightly build process. Each night it will build the application, deploy it and run your one test. Next day, add another test. Are there things from the framework you can use for test case number two? Then reuse them.
If you find yourself putting code into more than one place, move the code to a library and change the two places you are using it into a call to the library.
At this point all your libraries will be fairly low level and work on one page at a time. What if I wanted to do a larger action? Maybe 'create account' would create a user, log in, and add information about the user.
You might be tempted to put a higher level function call into one of the existing classes. If you do this you will have one library calling another library. This is not a good idea as it can lead to circular references.
What I have found happens is you end up writing helper methods in the test cases. So if the test case was originally:

    import com.company.application.pages.LoginPage;
    import com.company.application.pages.HomePage;
    import com.company.application.pages.RegisterUserPage;
    import com.company.application.pages.UserProfilePage;
    import com.company.application.datatypes.UserProfile;

    class MySecondTestCase extends MyTestCase {
        LoginPage loginPage;
        HomePage homePage;
        RegisterUserPage registerUserPage;
        UserProfilePage userProfilePage;

        public void setUp() throws Exception {
            super.setUp();
            loginPage = new LoginPage();
            homePage = new HomePage();
            registerUserPage = new RegisterUserPage();
            userProfilePage = new UserProfilePage();
        }

        public void testCreateUser() throws Exception {
            String username = getProperty("username", "darrell");
            String password = getProperty("password", "mySecretPassw0rd");
            loginPage.goToRegisterNewUser();
            registerUserPage.fillInForm(username, password, password);
            registerUserPage.submitForm();
            loginPage.goToTheLoginPage();
            loginPage.logIn(username, password);
            homePage.goToUserProfile();
            userProfilePage.goToEditUserProfile();
            UserProfile profile = new UserProfile();
            // code to set the various fields of the user profile
            // e.g. profile.setHobbies("scuba diving, travelling, programming");
            userProfilePage.fillInForm(profile);
            assertEquals(userProfilePage.getUserProfile(), profile);
        }
    }

I might change it to:

    import com.company.application.pages.LoginPage;
    import com.company.application.pages.HomePage;
    import com.company.application.pages.RegisterUserPage;
    import com.company.application.pages.UserProfilePage;
    import com.company.application.datatypes.UserProfile;

    class MySecondTestCase extends MyTestCase {
        LoginPage loginPage;
        HomePage homePage;
        RegisterUserPage registerUserPage;
        UserProfilePage userProfilePage;

        public void setUp() throws Exception {
            super.setUp();
            loginPage = new LoginPage();
            homePage = new HomePage();
            registerUserPage = new RegisterUserPage();
            userProfilePage = new UserProfilePage();
        }

        public void testCreateUser() throws Exception {
            String username = getProperty("username", "darrell");
            String password = getProperty("password", "mySecretPassw0rd");
            registerUser(username, password);
            logInAndGoToUserProfile(username, password);
            UserProfile profile = createAUserProfile();
            updateUserProfile(profile);
            assertEquals(userProfilePage.getUserProfile(), profile);
        }

        private void registerUser(String username, String password) throws Exception {
            loginPage.goToRegisterNewUser();
            registerUserPage.fillInForm(username, password, password);
            registerUserPage.submitForm();
        }

        private void logInAndGoToUserProfile(String username, String password) throws Exception {
            loginPage.goToTheLoginPage();
            loginPage.logIn(username, password);
            homePage.goToUserProfile();
            userProfilePage.goToEditUserProfile();
        }

        private UserProfile createAUserProfile() throws Exception {
            UserProfile profile = new UserProfile();
            // code to set the various fields of the user profile
            // e.g. profile.setHobbies("scuba diving, travelling, programming");
            return profile;
        }

        private void updateUserProfile(UserProfile profile) throws Exception {
            userProfilePage.fillInForm(profile);
            userProfilePage.submitForm(); // submitting the profile sends us to the home page
            homePage.goToUserProfile();
        }
    }

This is okay but if I need to register a user, go to a user profile from the login page or update a user profile from more than one test case, I'm going to have code duplication. You want to avoid code duplication. The more you duplicate code the more maintenance work you are creating PLUS there is a chance you will miss one of the duplicate pieces of code.
So you want to move some of these methods to a more common location. You could create packages which are feature and use case oriented rather than page oriented. So your initial test cases will be very low level and test a page at a time. Next you start creating test cases which test features, use cases or stories. You can continue to run the page test cases but now you have much more powerful libraries. Just like with my example of the MSDN libraries: at first, test cases had to call the page methods directly; later they call higher level methods which call the page methods.
You can create 'requirement' test cases which call the page methods to test end-to-end requirements. You can create 'user-defect' methods. How a user does something or the data they use might reveal a defect. So you can create packages for all user defect reports then create a class for each customer. In each class will be the high level library calls for how they achieve something and the data they used to find a defect. Now you can create test cases which cross reference to the defect number and call these user-defect methods. If a project manager wants to see if a defect has been fixed, he can look to see if the corresponding test case has passed.
You want to think about how the test cases are organized, how they are executed, etc. just as you would for a set of manual test cases. The automation should reflect this because a project manager, QA manager, stakeholder, etc. might request a specific subset of the tests be run at different stages of the project.
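For example, a hypothetical sketch using JUnit 4 suites; the test class names are invented:

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // A hypothetical suite grouping the subset a manager might ask for
    // at a given stage; add or remove classes to shape the run.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        LoginPageTests.class,       // invented name
        UserProfilePageTests.class  // invented name
    })
    public class SmokeSuite {
    }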
Finally, you might have noticed there is not a lot of error handling. To keep this article short I have not been putting in error handling but you should. If you give a Java library bad input it will throw an exception. If you give your library method bad input, it should throw an exception and end the test case. If you were manually testing the application and an error appeared on the screen, you wouldn't keep executing the test case. You would STOP and investigate. With automation, you need to anticipate where things will go wrong. Any time you are submitting user input, an error can occur. Any time you are receiving input from outside the application (network, printer, COM call, etc.) an error can occur. Your code should be constantly checking for things which can go wrong. An analogy is: every time you would look at the screen with your eyes, the automation should be scanning the application for problems/errors.
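As an illustration of that constant scanning, here is a hypothetical page method using the Selenium RC API; the locators and the error-message convention are assumptions about the application:

    import com.thoughtworks.selenium.Selenium;

    public class LoginPage {
        private final Selenium selenium;

        public LoginPage(Selenium selenium) {
            this.selenium = selenium;
        }

        public void logIn(String username, String password) {
            selenium.type("id=username", username);
            selenium.type("id=password", password);
            selenium.click("id=submit");
            selenium.waitForPageToLoad("30000");
            failIfErrorDisplayed(); // the automation's equivalent of glancing at the screen
        }

        private void failIfErrorDisplayed() {
            if (selenium.isElementPresent("css=div.error")) {
                throw new IllegalStateException(
                        "Application reported: " + selenium.getText("css=div.error"));
            }
        }
    }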
Tuesday, March 23, 2010
One advantage of test automation - part 2
In my previous blog One advantage of test automation I talked about the savings of writing once and running many times. Essentially, if I have 8 configurations and it takes 480 hours to test everything, and I test everything 12 times in a release, I am looking at 5,760 hours of testing. Let's say the cost of a manual tester is $30/hour (salary, benefits, computer equipment, office space, vacation pay, support staff, etc.); then we are looking at $172,800 to manually test the application for the current release.
If I can write the automation once and run it 8 times (once for each configuration) I'm going to see significant savings. If automated testers cost twice as much, I'm going to see a cost savings of 4 times. Still looking good. So I can accept the cost of creating the initial framework, but the cost of maintaining that framework needs to stay below 4 times the cost of manual testing.
Here are four scenarios I have observed (the amount noted is the total cost from the beginning of the release):
Scenario #1
Iteration #1: manual=X, automated=X
Iteration #2: manual=2X, automated=2X
Iteration #3: manual=3X, automated=3X
Iteration #4: manual=4X, automated=4X
Iteration #5: manual=5X, automated=5X
Iteration #6: manual=6X, automated=6X
Iteration #7: manual=7X, automated=7X
Iteration #8: manual=8X, automated=8X
Iteration #9: manual=9X, automated=9X
Iteration #10: manual=10X, automated=10X
Iteration #11: manual=11X, automated=11X
Iteration #12: manual=12X, automated=12X
What is happening in scenario #1 is record and playback. The idea is that you buy the software, you record one iteration then play it back over and over. The cost of recording is the same as manual testing. You turn on record then manually test the application. Stop recording and save it. For the next iteration you simply play back the recorded first iteration. You find there are timing issues. Changes to the application cause tests to no longer work. Debugging what is wrong takes time. If you are lucky, it takes around the same amount of time as manual testing. But you paid for the software and training your staff. Let's say that is $20,000. You could have saved $20,000 if you just stuck to manual testing.
Scenario #2
Iteration #1: manual=X, automated=2X
Iteration #2: manual=2X, automated=4X
Iteration #3: manual=3X, automated=6X
Iteration #4: manual=4X, automated=8X
Iteration #5: manual=5X, automated=10X
Iteration #6: manual=6X, automated=12X
Iteration #7: manual=7X, automated=14X
Iteration #8: manual=8X, automated=16X
Iteration #9: manual=9X, automated=18X
Iteration #10: manual=10X, automated=20X
Iteration #11: manual=11X, automated=22X
Iteration #12: manual=12X, automated=24X
What is happening in scenario #2 is poor planning at the start of the automation. Not enough time was spent designing and implementing the test framework. Every time there is a new release of the application, the cost to maintain the automation is as much as the initial creation. We are not leveraging anything we did from the first iteration. Usually test automation is seen as a failure around the third or fourth iteration. When this happens, management is usually pretty apprehensive about trying test automation again. They didn't expect these sort of losses. If we use the initial estimate that manual testing costs $172,800 and assuming we abandoned test automation after iteration 3. The total cost of attempting automation is $216,000 ($43,200 more than just doing manual testing). And this is assuming we called it quits after iteration 3. Each iteration we hesitate on is costing the company an additional $14,400.
It is understandable why so many companies become disillusioned with test automation. Companies selling test automation software will pitch it as saving you 8 times the cost of manual testing. It looks like the cost of automation is going to be $21,600. Even if you double that and add $10,000 for the software ($53,200) it is better than one third the cost of manual testing. So you are expecting $55,000 and it ends of costing you $210,000 or more.
Scenario #3
Iteration #1: manual=X, automated=5X
Iteration #2: manual=2X, automated=6X
Iteration #3: manual=3X, automated=7X
Iteration #4: manual=4X, automated=8X
Iteration #5: manual=5X, automated=9X
Iteration #6: manual=6X, automated=10X
Iteration #7: manual=7X, automated=11X
Iteration #8: manual=8X, automated=12X
Iteration #9: manual=9X, automated=13X
Iteration #10: manual=10X, automated=14X
Iteration #11: manual=11X, automated=15X
Iteration #12: manual=12X, automated=16X
In scenario #3 we spent time creating a test framework but for some reason the cost of maintaining the test framework is the same as manual testing. Sometimes this is because we haven't really thought out the design or the application we are testing is in flux. Some applications are not ready for test automation. If your test automation efforts are failing, you need to look at why. If the application is changing so much that automators need to do some creative coding (which becomes difficult to maintain) or they end up throwing away a lot of the previous iteration automation, then you need to look at stabilizing the development process. Another thing you might have done wrong at this point is write different code for the different web browsers. If I have methods with:
Or if I actually write one method for IE and another for Firefox then you are not really taking advantage of write once and run many. Additionally, you don't want to be too creative with the automation code. If you have to use a very complex regular expression to find a text field on all the different browsers then maybe the application is not ready for automation and you need to work with development to make the application friendlier to automation.
Scenario #4
Iteration #1: manual=X, automated=5X
Iteration #2: manual=2X, automated=5.5X
Iteration #3: manual=3X, automated=6X
Iteration #4: manual=4X, automated=6.5X
Iteration #5: manual=5X, automated=7X
Iteration #6: manual=6X, automated=7.5X
Iteration #7: manual=7X, automated=8X
Iteration #8: manual=8X, automated=8.5X
Iteration #9: manual=9X, automated=9X
Iteration #10: manual=10X, automated=9.5X
Iteration #11: manual=11X, automated=10X
Iteration #12: manual=12X, automated=10.5X
In scenario #4 we spent time creating a test framework and we were able to leverage the work in the previous iteration for the next iteration. Maintaining the automated test framework and adding new functionality after the initial iteration now costs us half as much as manual testing. At iteration #9 we are breaking even. By the end of the release we are looking at a cost savings of $21,600. This does not seem like a lot but if your automation team gets good at this or you hire someone with the right experience you can actually increase the saving. The cost of maintenance might not be linear. It could grow with each iteration. Or you might realize even more savings with someone who is already familiar with the pitfalls of automating.
Things to look for that might give you additional cost savings:
Obviously, someone who uses the same tool, has written automation for an application using similar technologies, a proven record of creating automation frameworks would be ideal and can work with development if the application is not friendly to automation tools.
I would say the most important factor is someone with experience working with development. If they are not comfortable working with development they might be set up for failure and no way to prevent it. Additionally, good automation is development. If the test automator is a developer, they should be comfortable working with development to make the application more amenable to automation.
Next would be experience creating an automation framework is the second most important factor. You can easily land yourself in scenario #2 if you don't know how to create a good test framework.
The second least important factor is testing similar applications. The technologies and problems for web testing are very different from desktop application testing. Additionally, if an application is using certain third party libraries, making it more amenable to automation might not be as easy. However, most automation is similar. Approximately 80% of the work will be the same regardless of the technology being tested. It is that last 20% which might make the difference. I'd also point out that there are dozen of resources on the various technologies and people within your company can help the automator understand the technology. Understanding the pitfalls and how to avoid them for automation frameworks and the soft skills to work with development are not as easy to find.
The least important factor is familiarity with the automation tool. This is not important if the automator has a proven record of learning new applications and technologies. For many experienced test automators, a new automation tool is trivial to learn. Test automators who know how to use a tool without understanding how and why it does when it does will be restricted to the tools they already know. If an automator understands the underlying principles behind all automation tools for a given technology then learning a new tool would be trivial.
If I can write the automation once and run it 8 times (once for each configuration) I'm going to see significant savings. If automated testers cost twice as much, I'm still going to see a cost savings of 4 times. Still looking good. So what I'm watching is the cost of creating the initial framework plus the cost of maintaining it: each cycle, maintenance has to take less than 4 times as long as manual testing or the savings evaporate.
Here are four scenarios I have observed (the amount noted is the total cost from the beginning of the release):
Scenario #1
Iteration #1: manual=X, automated=X
Iteration #2: manual=2X, automated=2X
Iteration #3: manual=3X, automated=3X
Iteration #4: manual=4X, automated=4X
Iteration #5: manual=5X, automated=5X
Iteration #6: manual=6X, automated=6X
Iteration #7: manual=7X, automated=7X
Iteration #8: manual=8X, automated=8X
Iteration #9: manual=9X, automated=9X
Iteration #10: manual=10X, automated=10X
Iteration #11: manual=11X, automated=11X
Iteration #12: manual=12X, automated=12X
What is happening in scenario #1 is record and playback. The idea is that you buy the software, you record one iteration, then play it back over and over. The cost of recording is the same as manual testing: you turn on record, manually test the application, stop recording and save it. For the next iteration you simply play back the recorded first iteration. You find there are timing issues. Changes to the application cause tests to no longer work. Debugging what is wrong takes time. If you are lucky, it takes around the same amount of time as manual testing. But you paid for the software and for training your staff. Let's say that is $20,000. You could have saved $20,000 if you had just stuck to manual testing.
Scenario #2
Iteration #1: manual=X, automated=2X
Iteration #2: manual=2X, automated=4X
Iteration #3: manual=3X, automated=6X
Iteration #4: manual=4X, automated=8X
Iteration #5: manual=5X, automated=10X
Iteration #6: manual=6X, automated=12X
Iteration #7: manual=7X, automated=14X
Iteration #8: manual=8X, automated=16X
Iteration #9: manual=9X, automated=18X
Iteration #10: manual=10X, automated=20X
Iteration #11: manual=11X, automated=22X
Iteration #12: manual=12X, automated=24X
What is happening in scenario #2 is poor planning at the start of the automation. Not enough time was spent designing and implementing the test framework. Every time there is a new release of the application, the cost to maintain the automation is as much as the initial creation. We are not leveraging anything we did from the first iteration. Usually the test automation is seen as a failure around the third or fourth iteration. When this happens, management is usually pretty apprehensive about trying test automation again. They didn't expect this sort of loss. If we use the initial estimate that manual testing costs $172,800 and assume we abandoned test automation after iteration 3, the total cost of attempting automation is $216,000: three iterations of automation at 2X ($86,400) plus the nine remaining iterations of manual testing ($129,600). That is $43,200 more than just doing manual testing, and it assumes we called it quits after iteration 3. Each iteration we hesitate costs the company an additional $14,400.
It is understandable why so many companies become disillusioned with test automation. Companies selling test automation software will pitch it as saving you 8 times the cost of manual testing. It looks like the cost of automation is going to be $21,600. Even if you double that and add $10,000 for the software ($53,200), it is still less than one third the cost of manual testing. So you are expecting around $55,000 and it ends up costing you $210,000 or more.
Scenario #3
Iteration #1: manual=X, automated=5X
Iteration #2: manual=2X, automated=6X
Iteration #3: manual=3X, automated=7X
Iteration #4: manual=4X, automated=8X
Iteration #5: manual=5X, automated=9X
Iteration #6: manual=6X, automated=10X
Iteration #7: manual=7X, automated=11X
Iteration #8: manual=8X, automated=12X
Iteration #9: manual=9X, automated=13X
Iteration #10: manual=10X, automated=14X
Iteration #11: manual=11X, automated=15X
Iteration #12: manual=12X, automated=16X
In scenario #3 we spent time creating a test framework but, for some reason, the cost of maintaining it is the same as manual testing. Sometimes this is because we haven't really thought out the design, or the application we are testing is in flux. Some applications are not ready for test automation. If your test automation efforts are failing, you need to look at why. If the application is changing so much that automators need to do some creative coding (which becomes difficult to maintain), or they end up throwing away a lot of the previous iteration's automation, then you need to look at stabilizing the development process. Another thing you might have done wrong at this point is writing different code for the different web browsers, i.e. methods like:
if IE6 then do X else if IE7 then do Y else if FF3 then do Z
Or writing one method for IE and another for Firefox; either way you are not really taking advantage of write once, run many. Additionally, you don't want to be too creative with the automation code. If you have to use a very complex regular expression to find a text field on all the different browsers then maybe the application is not ready for automation and you need to work with development to make the application friendlier to automation.
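If development can't unify the pages right away, one way to at least keep the branching out of the test logic is to push every per-browser difference into a single lookup. A sketch (the browser strings are Selenium RC launcher names; the locator ids are invented):

import java.util.HashMap;
import java.util.Map;

// Hypothetical helper: the per-browser locators live in one table, so a
// test calls Locators.submitButton(browser) and never branches itself.
public class Locators {
    private static final Map<String, String> SUBMIT = new HashMap<String, String>();
    static {
        SUBMIT.put("*iexplore", "submit_button");  // id on the page served to IE
        SUBMIT.put("*firefox", "submitButton");    // id on the page served to Firefox
    }

    public static String submitButton(String browser) {
        String locator = SUBMIT.get(browser);
        if (locator == null) {
            throw new IllegalArgumentException("no locator for browser: " + browser);
        }
        return locator;
    }
}

The tests stay identical across configurations; when a page changes, the fix is one table entry instead of a hunt through every script.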
Scenario #4
Iteration #1: manual=X, automated=5X
Iteration #2: manual=2X, automated=5.5X
Iteration #3: manual=3X, automated=6X
Iteration #4: manual=4X, automated=6.5X
Iteration #5: manual=5X, automated=7X
Iteration #6: manual=6X, automated=7.5X
Iteration #7: manual=7X, automated=8X
Iteration #8: manual=8X, automated=8.5X
Iteration #9: manual=9X, automated=9X
Iteration #10: manual=10X, automated=9.5X
Iteration #11: manual=11X, automated=10X
Iteration #12: manual=12X, automated=10.5X
In scenario #4 we spent time creating a test framework and we were able to leverage the work of each iteration in the next. Maintaining the automated test framework and adding new functionality after the initial iteration now costs us half as much as manual testing. At iteration #9 we break even. By the end of the release we are looking at a cost savings of $21,600. This does not seem like a lot, but if your automation team gets good at this, or you hire someone with the right experience, you can actually increase the savings. Keep in mind the cost of maintenance might not be linear: it could grow with each iteration, or you might realize even more savings with someone who is already familiar with the pitfalls of automating.
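As a quick sanity check of the scenario #4 numbers (a sketch, using X = $14,400, i.e. $172,800 divided by 12 iterations):

// Back-of-the-envelope check for scenario #4: the initial framework
// costs 5X, each later iteration adds 0.5X, versus X per iteration
// for manual testing.
public class Scenario4 {
    public static void main(String[] args) {
        double X = 14400.0;  // one iteration of manual testing (172,800 / 12)
        double manual = 0.0, automated = 0.0;
        for (int iteration = 1; iteration <= 12; iteration++) {
            manual += X;
            automated += (iteration == 1) ? 5 * X : 0.5 * X;
            System.out.printf("Iteration %2d: manual=%8.0f automated=%8.0f%n",
                    iteration, manual, automated);
        }
        // the columns meet at iteration 9 and end $21,600 apart at iteration 12
    }
}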
Things to look for that might give you additional cost savings:
- Someone with experience working with development
- Someone with general experience creating automation frameworks
- Someone with experience automating a similar application (web based, desktop application, mobile device, etc.)
- Someone familiar with the automation tool you are using.
Obviously, the ideal candidate uses the same tool, has written automation for applications using similar technologies, has a proven record of creating automation frameworks, and can work with development if the application is not friendly to automation tools.
I would say the most important factor is experience working with development. If someone is not comfortable working with development they might be set up for failure with no way to prevent it. Additionally, good automation is development. If the test automator is a developer, they should be comfortable working with development to make the application more amenable to automation.
Experience creating an automation framework is the second most important factor. You can easily land yourself in scenario #2 if you don't know how to create a good test framework.
The second least important factor is experience testing similar applications. The technologies and problems for web testing are very different from desktop application testing. Additionally, if an application is using certain third party libraries, making it more amenable to automation might not be as easy. However, most automation is similar. Approximately 80% of the work will be the same regardless of the technology being tested. It is that last 20% which might make the difference. I'd also point out that there are dozens of resources on the various technologies, and people within your company can help the automator understand the technology. Understanding the pitfalls of automation frameworks and how to avoid them, plus the soft skills to work with development, are not as easy to find.
The least important factor is familiarity with the automation tool. This is not important if the automator has a proven record of learning new applications and technologies. For many experienced test automators, a new automation tool is trivial to learn. Test automators who know how to use a tool without understanding how it works and why it does what it does will be restricted to the tools they already know. If an automator understands the underlying principles behind all automation tools for a given technology then learning a new tool is trivial.
One advantage of test automation
One big advantage of test automation is testing different configurations. For example, when I was working at Quest Software I tested web applications for multiple configurations. There were different operating systems with different web browsers. Here are some of the combinations:
- Windows 2000, Internet Explorer 6.x
- Windows 2000, Internet Explorer 7.x
- Windows 2000, Firefox 3.x
- Windows 2000, Safari 3.x
- Windows 2000, Safari 4.x
- Windows XP, Internet Explorer 6.x
- Windows XP, Internet Explorer 7.x
- Windows XP, Firefox 3.x
- Windows XP, Safari 3.x
- Windows XP, Safari 4.x
- Windows Vista, Internet Explorer 7.x
- Windows Vista, Firefox 3.x
- Windows Vista, Safari 4.x
- Solaris 10, Firefox 3.x
- Redhat Linux, Firefox 3.x
- SuSE Linux, Firefox 3.x
Without automation, we would look at equivalence classes. Using our experience and knowledge of the different operating systems, we would estimate that Internet Explorer 6.x on Windows 2000 is going to behave the same as Internet Explorer 6.x on Windows XP. We might also assume Firefox 3.x on Redhat Linux and SuSE Linux are going to be equivalent. Additionally, we would look at the customer base and see if we can reduce the combinations because customers are unlikely to be using some of them. For example, we found that most Windows customers who were using Firefox 3.x were also on Windows XP. So the list was trimmed down to:
- Windows 2000, Internet Explorer 6.x
- Windows XP, Firefox 3.x
- Windows XP, Safari 3.x
- Windows Vista, Internet Explorer 7.x
- Windows Vista, Safari 4.x
- Solaris 10, Firefox 3.x
- SuSE Linux, Firefox 3.x
We selected SuSE Linux because we had encountered issues unique to SuSE Linux. All issues found in Firefox 3.x on Redhat Linux were similar to Firefox 3.x on Solaris 10.
Looking at the list, there are still 7 configurations. If manually testing each configuration takes two weeks and a test cycle is two weeks, we need 7 testers working on this full time. This also assumes everything goes fine and there aren't additional issues to contend with. The reality was that we had to add a couple more configurations. The different desktop managers for Linux actually made a difference in how things rendered. It was really exposing problems in the web browser, but from a customer's point of view, if most websites look fine and our application does not, the problem is in our application, not the browser. Non-technical customers don't want to hear the explanation of why our application doesn't work with the Redhat Linux 4, Gnome desktop and Firefox 3.x.
So we had to add in another configuration:
- Windows 2000, Internet Explorer 6.x
- Windows XP, Firefox 3.x
- Windows XP, Safari 3.x
- Windows Vista, Internet Explorer 7.x
- Windows Vista, Safari 4.x
- Solaris 10, Firefox 3.x
- SuSE Linux, KDE Desktop, Firefox 3.x
- Redhat Linux 4, Gnome Desktop, Firefox 3.x
This adds one more tester to the test cycle. If we assume testers will be productive 6 hours a day (checking mail in the morning, lunch, meetings, etc.) then we are looking at 6 hours * 5 days * 2 weeks * 8 configurations for a total of 480 hours.
But what if I had an automation tool that ran on all the platforms and worked with all the different web browsers? I could write one test suite and run it with different configurations. One such tool is Selenium. It is written using JavaScript and Java and therefore runs on all these platforms.
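As a sketch of the write once, run many idea (the server, port, URL and locators are invented, and it assumes a Selenium server is already running):

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

// One test, written once; only the browser string changes per run.
public class LoginSmokeTest {
    public static void run(String browser) {
        // the browser string ("*iexplore", "*firefox", ...) is the ONLY
        // thing that varies between configurations
        Selenium selenium = new DefaultSelenium("localhost", 4444, browser, "http://localhost:8080/");
        selenium.start();
        selenium.open("/login");
        selenium.type("username", "testuser");  // same locators on every browser,
        selenium.type("password", "secret");    // which assumes the server sends
        selenium.click("login");                // the same page to all of them
        selenium.waitForPageToLoad("30000");
        selenium.stop();
    }

    public static void main(String[] args) {
        run("*iexplore");
        run("*firefox");
    }
}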
The test automation can be run after hours. This means 24 hours a day, 7 days a week. We can also run the test suite in parallel. Even if the test suite ran as slow as a manual tester, I just need to add another computer to run the test suite. A computer costs a lot less than a manual tester.
This is a little naive, however. You will need at least one tester to automate the test suite. Since automation is development, it should be someone who knows how to create test strategies, test plans and test cases, and also knows how to do software development. This one employee is probably going to cost more than a manual tester would. However, they are not going to cost 8 times as much.
The reality is that creating a maintainable test automation framework takes longer than 2 weeks. Additionally, as you find defects, time will be lost filing the defects and following up on them. Another issue is that the implementation of Selenium on each platform will present its own set of problems. Basically, the same reasons we find issues with our web application on the different platforms are going to cause issues with Selenium as well.
So be forewarned, test automation is not as great as some people would lead you to believe. If done correctly, there will be some cost savings. Best of all, once you have a good test framework in place, maintaining it does yield significant savings. The problem is most people don't put enough effort into the design and implementation of the framework to reap the benefits later.
One huge thing which can derail test automation will also add cost to development. One project I worked on looked the same from the customer's point of view regardless of which configuration they were using, but if you looked at source control they actually had different code for the different configurations. The web pages being sent down from the server were slightly different depending on the configuration. The code was littered with lots of "if IE6 then do one thing else if IE7 do another thing else if FF3 do something TOTALLY different". Patches to the web browsers and operating systems often broke the application.
From an automation point of view, if the web page served for IE6 is different from the web page served for FF3 then it is as if we are testing two different web pages. The worst part is that the pages are often, initially, close enough that an automator will try to code once for both pages. The effort to create a single automation script that works for both configurations will be more time consuming than normally anticipated, and the maintenance for this script will be greater than expected.
But if coded well, the application can be automated for one configuration, tweaked slightly for the other configurations, and then run continuously on all of them.
Saturday, March 20, 2010
Ways to facilitate communication
One of the most important things on a team is communication. I have worked on teams where everything was put into a document. Unfortunately, as the project progressed, time became more and more precious. People needed to be more efficient and, far too often, the documentation suffered. Decisions would be made between two individuals and the rest of the team would miss out.
So how do we make sure everything is well communicated? The simplest answer is to talk to each other.
In scrum you have sprints and planning meetings. What I would do is, at the end of the week, look at the backlog (features and defects in the tracking system) and decide what I could get done by the end of the next week. I might add a few notes to the defect report and assign it to myself.
On Monday morning we would have the planning meeting for the week. Everyone would decide what features and bug fixes should and could be completed that week. If there was anything which would take more than one week to complete, it needed to be broken down into multiple, smaller tasks. No task should take more than one week. This means implementing and testing the feature/bug fix should happen in the same week.
Everyone would pick the defects they were going to work on and give estimates for how long each would take. Every day we would meet and go over the status of the team. Have things changed? Did sales land a new contract but promise a feature we need to complete this week? Or is that feature multiple tasks and we want to start on one of those tasks this week? If yes, we reassess the plan and change priorities.
The key to all of this is that developers, business analysts, testers, project stakeholders all meet and decide what will be done each week and continue to track progress on a daily basis. Most people who do scrum understand this.
What about the time between each scrum? What if you need clarification on the feature you are working on? Whether you are a developer or a tester you might need further clarification. If everyone is in the same room you could just shout to the people involved. This could be disruptive to the other people in the room.
So how do you deal with it? We could create teams and partition the rooms. Each room would have a tester, a developer and a business analyst. But what about the project manager? There will usually be one project manager and multiple developers. We cannot have one manager for each developer. Additionally, I have never worked on a team where the developer to tester ratio was 1:1. So this idea is not going to work.
What has worked for me in the past is having everyone on MSN Messenger or Google Talk. These tools let you talk with one or multiple people at once. You can set up aliases for the different groups you need to talk with. This is really helpful if you are working in an open concept office or on a distributed team. In an open concept office, shouting to all interested parties might work for you, but it is disruptive to the people in the room who are not involved in the conversation. By using chat software I can ask quick questions of just the people who need to be involved. The same is true for people at remote locations. I can chat with everyone involved without having to book a conference room and have everyone call in.
An added benefit is the ability to opt out of a conversation. If I am working on something which requires my full concentration I can log out of chat. This sends a signal to everyone that I cannot be disturbed. The danger is forgetting to log back into chat, but hopefully you will get into the habit of remembering. Everyone checks email after they have been working hard on a feature which required their full attention. If someone is not logging back into chat and you need to talk with them, send them an email. When they are done it will be a gentle reminder to log back into chat.
If you find that chat is becoming too tedious and requires too much typing, switch to a meeting. Book a conference room and invite everyone to a 30 minute or 1 hour meeting. If the team is distributed you can use Skype or VOIP to get everyone calling in.
Labels:
agile,
communication,
tips
Friday, March 19, 2010
How does a conventional tester fit into an agile environment?
When I think 'agile' I think test driven development (TDD). In TDD, the programmers actually write the tests first, then write code to make the tests pass. If you look at something like the Eclipse IDE, I can write a jUnit class that creates code to instantiate a class that does not exist yet. Eclipse will complain about the non-existent class and offer to create the class for me (without the body). If I write calls to methods which do not exist, Eclipse will complain about the non-existent methods and offer to write the methods for me (without the body).
In Agile development you have sprints. A sprint is typically a week. On most agile projects I have worked on, the QA staff are involved from the very beginning of each sprint. Every Monday (or some other day) all the stakeholders (developers, business analysts, testers, etc.) get together to decide what will be completed in the next sprint. They will come up with "user stories" (sort of like requirements). Everyone signs off on the sprint (meaning the stories should cover all the requirements and from that all the test scenarios plus test cases).
As each feature is written, there is daily deployment (or more often). As a QA tester I would write automation to conduct Integration and System level tests.
The most important aspect of agile development is fast turnaround. In a traditional SDLC, I would often not see a functioning application until weeks after a developer created it. By the time I am testing the first feature, the developer could be done with that feature and working on something totally different. If I file a defect report on a feature a developer completed 3 weeks ago, she has to stop what she is working on, recall the code for the feature I am testing, fix the bug, then get back to the new feature she was originally working on. This can be quite disruptive.
With agile development, a developer makes a change, compiles the code, runs the unit tests. If something she did breaks a unit test she knows in seconds. If it passes the unit test she can check the change in. The QA Tester can then build and test the application.
With the frameworks I worked on, the application would get built automatically by a nightly build process. All unit tests would get run on it. There might also be static analysis tools run on the project before compiling. Once the project is built and unit tests pass it would get deployed to test machines and the QA automation suite would be run. This all happens after business hours. In the morning, I would examine the test results. Sometimes a change in the program breaks my automation so I would do some maintenance on the automation and re-run the tests which failed. If there was an actual failure in the application and my automation found it, I would file a defect report.
On a daily basis the team would meet to talk about what was done the previous 24 hours and what would be the goal for each person in the next 24 hours (this is scrum, a form of agile development). I would bring up any defects I found at the scrum.
Fast turnaround and immediate feedback are key to good agile development. I would write regression tests to make sure defects found and fixed do not creep back into the product, but I would also work with the developers to bring the regression tests into the unit test framework. This way the defect can be caught sooner should it creep back in.
Because the number of regression tests grows with each iteration (sprint), it is recommended that they be automated. This frees the test team to direct manual test efforts towards new features.
The most important thing to understand as a tester in an agile development environment is that there is a STRONG belief in minimal documentation. Source code should be self-documenting. The reason for this, and I have seen this on EVERY traditional development team, is that the documentation is often neglected. No documentation is often better than wrong documentation. If there is no documentation and the developer has not gotten to the point where their code is self-documenting, readers will be forced to look at the code and figure out what it does, rather than read outdated documentation and believe it does something different than it really does.
To understand the requirements of the software, you look at the unit tests. These should also be self-documenting and there should be one unit test for each requirement (user story).
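For instance (a made-up story and class, just to show the shape), a unit test that doubles as the requirement might read:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// User story: "A new account starts with a zero balance." The test name
// restates the story, so reading the test suite is reading the requirements.
public class AccountTest {

    static class Account {               // minimal stand-in so the example runs
        private int balanceCents = 0;
        int getBalanceCents() { return balanceCents; }
    }

    @Test
    public void aNewAccountStartsWithAZeroBalance() {
        assertEquals(0, new Account().getBalanceCents());
    }
}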
As a tester, I will not expect to get a set of requirements. If I need to, I will take notes at the planning meeting (beginning of each sprint) and update them at each daily meeting (scrum).
The key to success for the tester will be communication. Without communication between the tester, the developer, business analysts and project stakeholders, it will be extremely difficult for the tester to properly test the application.
Thursday, March 18, 2010
Evaluating HP Quick Test Professional - Part 1
To get a free 14 day evaluation version of Quick Test Professional (QTP) I went to the main HP website and searched for QTP. On the search result page, on the left, under Recommended links, was a link for Download a free evaluation of QuickTest Professional 10.0 as well as a link for Download a free evaluation of Quality Center 10.0.
Clicking the QTP link takes you to a login page. If you don't have an HP login you can create one (it is free). Once logged in you will be taken to a page where you can download a ZIP file containing the installer for QTP.
The installation guide is 108 pages long but, generally speaking, if you have read the Product Availability Matrix (PAM) and know the technology you are going to automate is supported, the installation is very straightforward.
When you run the installer it asks what you want to install, QTP, and you follow the prompts. If there is anything not installed, e.g. Visual Studio 2005 runtime distributable, the installer will let you know and give you the option to install the necessary files.
After QTP is installed, a second installer will ask about installing additional requirements. You can usually go with the defaults. I noticed that the documentation talks about support for IE8 Beta 2; at this time IE8 is a fully released product. Additionally, the second installer recommends downloading and installing the Microsoft Script Debugger. IE8 includes a script debugger and developer tools. If you are using IE8 I would recommend not downloading and installing the Script Debugger as it is not necessary and might actually interfere with the IE8 built-in debugger.
Additionally, the second installer will install the License Wizard. There is no license for the 14 day free trial period. If you are using the 14 day free trial, you can skip this since you will not have a license key to complete the installation.
Once the installation is finished, there will be a shortcut to run QTP on the desktop.
On the Start menu you will find:
- Start Menu
- Programs
- QuickTest Professional
- Documentation
- Printer-Friendly Documentation
- QuickTest Automation Reference
- QuickTest Professional Code Samples Plus
- QuickTest Professional Help
- Tutorial
- Extensibility
- Documentation
- .Net Add-in Windows Form Extensibility Help
- Delphi Add-in Extensibility Help
- Java Add-in Extensibility Help
- Web Add-in Extensibility Help
- Sample Applications
- Flight
- Mercury Tours Web site
- Tools
- Additional Installation Requirements
- HP Micro Player
- License Validation Utility
- Password Encoder
- QuickTest Script Editor
- Register New Browser Control
- Remote Agent
- Save and Restore Settings
- Silent Test Runner
- Test Batch Runner
- Test Result Deletion Tool
- Check for Updates
- QuickTest Professional
- Readme
- Test Results Viewer
The obvious starting point would be reading the Tutorial. This will be part 2 of my evaluation.
Tuesday, March 16, 2010
Selenium RC [Java] waitForCondition example
I've seen a few people post questions about using the Selenium waitForCondition method.
If you read the documentation for the waitForCondition method (in either Selenium or DefaultSelenium) you will see a mention of selenium.browserbot.getCurrentWindow(). This is key to the wait for condition.
When you start a test case, two windows open up. The first window has the Selenium Command History and the ability to display the log. I call this the Selenium window. The next window which opens is the window for the application under test, or AUT window.
The waitForCondition method has two parameters. The second parameter is the easier of the two to understand. We don't want the test to wait forever; if we don't get a response by a certain time we can assume the test failed. The second parameter is the timeout period in milliseconds, so if the condition should occur within 20 seconds, the second parameter is "20000".
The first parameter is the key to the waiting: it is the condition we are waiting for. There are two windows open, Selenium and AUT. The first parameter to waitForCondition is a javascript expression, and it will normally be evaluated in the Selenium window. If you want to evaluate some javascript in the AUT window you need to use selenium.browserbot.getCurrentWindow(). For example, let's say this is the AUT window:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Insert title here</title>
</head>
<body>
<script type="text/javascript">
// this variable is the key. When it is false the loading is done
var loading=true;

function loadingDone() {
    // hide the spinning GIF
    document.getElementById("loading").style.visibility="hidden";
    // set the flag to indicate we are done loading
    loading=false;
}

// this sets a timer to simulate something loading for ten seconds.
// After ten seconds it will call the loadingDone() function.
var seconds=10;
var t=setTimeout("loadingDone()", seconds * 1000);
loading=true;
</script>
<!-- the moment the page loads this will create -->
<!-- a spinning GIF to signal something loading -->
<img id="loading" src="http://www.oshawa.ca/images/loading.gif" style="visibility:visible"/>
</body>
</html>
What this page does is simulate something loading. After 10 seconds it will hide the spinning GIF and set the loading variable to false. So the condition we are waiting for is when loading == false.
This means from waitForCondition we want to look at the variable "selenium.browserbot.getCurrentWindow().loading == false". In other words, the full statement is going to be:
selenium.waitForCondition("selenium.browserbot.getCurrentWindow().loading == false", "20000");
It is important to note that the selenium in selenium.waitForCondition is a Java object in your test code and the selenium in selenium.browserbot is a javascript object running in the web browser.
Let's say we take this HTML and save it to the file /Users/darrell/workspace/LearningSelenium/waitForCondition.html then we can create the following Selenium RC test case:
package com.example.tests;

import org.openqa.selenium.server.RemoteControlConfiguration;
import org.openqa.selenium.server.SeleniumServer;

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.SeleneseTestCase;

public class TestingSelenium extends SeleneseTestCase {

    SeleniumServer ss;
    RemoteControlConfiguration rcc = new RemoteControlConfiguration();

    public void setUp() throws Exception {
        int ssPort = 4444;
        rcc.setPort(ssPort);
        rcc.setTimeoutInSeconds(1200);
        ss = new SeleniumServer(rcc);
        ss.start();
        selenium = new DefaultSelenium("localhost", ssPort, "*safari", "file://");
        selenium.start();
    }

    public void testStub() throws Exception {
        long start, stop;
        selenium.open("/Users/darrell/workspace/LearningSelenium/waitForCondition.html");
        start = System.currentTimeMillis();
        selenium.waitForCondition("selenium.browserbot.getCurrentWindow().loading == false", "300000");
        stop = System.currentTimeMillis();
        System.err.println("Elapsed time: " + ((stop - start) / 1000.0) + " seconds");
    }

    public void tearDown() throws Exception {
        super.tearDown();
        ss.stop();
    }
}
and that is one example of waitForCondition. You will have to know what condition to wait for in your application. If you are using AJAX you might find there is an AJAX object in your application. In it will be an activeRequestCount. When AJAX.activeRequestCount goes to zero, all the AJAX calls are done. So you can wait for an AJAX call to complete with:
selenium.waitForCondition("selenium.browserbot.getCurrentWindow().AJAX.activeRequestCount == 0", "30000");
Saturday, March 13, 2010
SQL Injection
Do you test an application which uses a database? If you do you should be testing for SQL Injection.
What is SQL Injection? It is when the input to the application allows queries to be sent to the database which the programmer did not intend. To make it more clear, here is an example.
A web application has an administrator log in which checks the username and password against an entry in a database. The user inputs:
Username: admin'; --
Password: whatever
This seems odd but harmless. Now let's look at some potential code:
statement = "SELECT * FROM users WHERE username=' + userName + "' AND password='" + passWord + "';"
Given the input from the user, the SQL statement becomes
SELECT * FROM users WHERE username='admin'; -- 'AND password='whatever';

If the administrator's username is admin then this will return the rowset for the administrator. Here is how it works. The -- in MySQL is a comment. Everything from the -- to the end of the line is a comment. So the above statement might as well be:

SELECT * FROM users WHERE username='admin';

It eliminates the check for the correct password. This is a known hack and I would be DEEPLY surprised if you find this simple mistake. The solution is simple. Check the user input before sending it to the method that builds the SQL statement. We could either reject input with semicolons, SQL comments, etc. or we could escape them. That is, change the input so things like ' become \'.
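Beyond rejecting or escaping input, the usual fix in Java is a parameterized query, where user input is never spliced into the SQL string at all. A minimal sketch (the table and column names follow the example above):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LoginDao {
    // The ? placeholders are bound by the JDBC driver, so input like
    // "admin'; --" is treated as a literal username, not as SQL.
    public boolean isValidLogin(Connection conn, String userName, String passWord)
            throws SQLException {
        String sql = "SELECT * FROM users WHERE username=? AND password=?";
        PreparedStatement stmt = conn.prepareStatement(sql);
        stmt.setString(1, userName);
        stmt.setString(2, passWord);
        ResultSet rs = stmt.executeQuery();
        boolean found = rs.next();
        rs.close();
        stmt.close();
        return found;
    }
}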
You might have noticed one thing. I mentioned that -- is a comment for the MySQL database. How did I know it was the MySQL database? Maybe I didn't and I'm just guessing. Actually, as a tester I would know which database it is but as a hacker, how would I know? It is actually quite simple. Peek under the covers.
Sometimes there is information in the HTTP headers that reveals information about the technology being used at the site. Also, have a look at how data is getting sent to the server. If it is a form with POST or GET parameters, I can fake those. Let's say the HTML source code for http://my.site.com/myapp/ is:
<form action="login.action">
<input type="hidden" name="key" value="293674"/>
Username: <input name="username" value=""/><br/>
Password: <input type="password" name="password" value=""/>
</form>
I can go:
http://my.site.com/myapp/login.action?key=293674&username=admin&password=wildguess
Or I could change the value of the hidden inputs until I get an error from the application server. Usually the error messages will have something in them to tell me what technology is being used. Worst case, if the site administrator is smart enough to hide the database, we can use syntax that works on all databases, or just try hacks we know work for MySQL in hopes the site is using it.
So if there are known security flaws in your database, you want to make sure you get them patched or the software guards against them.
So how do you test for this? You could become the ultimate hacker and discover new and innovative ways to inject data into SQL queries. Or you could use some common tools for generating SQL injections; a web search will turn up several sites to start with.
Should we use customer data or static test data when testing an application?
My general approach when testing a legacy system is to use the customer data in the system for testing. I'll often find the application will let me do something that creates bad data in persistent storage (e.g. a database). Before we ship version 8.0, development will fix the application so the user interface will not allow the bad data to get stored.
The idea is to control the scenarios by restricting the user input. Unfortunately, the application has been in use for years and that ability to store bad data has been there for years. There is a good chance that someone stored that bad data in the production database. It wasn't a problem in version 7.7 but that nice new feature in 8.0 breaks on the bad data.
So although version 8.0 will stop any future customers storing the bad data, there is still the old data in the system. What we need to do, ideally, is get rid of the bad data. If the user interface had stopped the customer from entering the bad data in the past, what would the customer have done? If the customer would have skipped entering the data altogether, then we can delete the data now. If the customer would have changed the input to valid data (e.g. not leaving fields blank), or if we now store bad inputs as default good inputs (e.g. blank fields are no longer stored as null but as an empty string, or -1 for a numeric value), then we can convert the old data the same way. But what if different customers would have done different things? The business rules could become quite complex, especially if you have hundreds of customers using the same application (e.g. a web based service).
So the solution is to leave the bad data and change version 8.0 so it handles bad data.
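What does 'handles bad data' look like in code? One common approach is to normalize legacy values as they are read rather than trusting the database. A sketch with hypothetical column names, assuming the rows come back through a java.sql.ResultSet:

// Older versions could store a missing quantity as SQL NULL.
// Normalize on read so the new 8.0 feature never sees it.
int quantity = rs.getInt("quantity");
if (rs.wasNull()) {
    quantity = -1;   // documented default for "not entered"
}
String name = rs.getString("name");
if (name == null) {
    name = "";       // blank fields become empty strings
}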
How do we test this? There are two camps of thought on this. We can use the existing customer data or we can generate static data.
How do I test this? I like to use both ways. The short term solution is to use customer data. Often during my testing it is hard to imagine what bad data might be hiding in the system. For the long term I will create static data which tests all the scenarios I think are valid. Trying to test all the possible inputs would be impossible. So I'm going to test what the application will let me use. If I didn't use customer data and simply used what version 8.0 was willing to generate, I'd miss the hidden bad data.
On pretty much every project I have worked on, using customer data has resulted in old data causing problems that could never be created with the application as it exists when I joined the project.
So create test cases with known, static data but use legacy data as well.
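In practice that can be as simple as running the same validation over both data sets. A JUnit 4 sketch; OrderParser, makeRow and legacyConnection are hypothetical names standing in for your own code:

// Static case: hand-built rows with the bad values we know about.
@Test
public void handlesStaticBadData() throws Exception {
    assertTrue(OrderParser.parse(makeRow(null, "")).isValid());
}

// Legacy case: every row from a snapshot of the production database.
@Test
public void handlesLegacyData() throws Exception {
    ResultSet rs = legacyConnection().createStatement()
            .executeQuery("SELECT * FROM orders");
    while (rs.next()) {
        assertTrue(OrderParser.parse(rs).isValid());
    }
}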
A good defect report
What makes a good defect report? Let's look at the flip side.
A bad defect report is a defect report with insufficient information. If I file a report and the developer has to come back to me for more information, it is a bad report. So is a report where I write down incorrect information or make assumptions and report them as facts.
A bad defect report can also be a report with too much information. You might think I am wrong. How can you have too much information? Well, if the report goes on and on, the developer will have to wade through all the information you provided to find the relevant parts. The developer's time costs the company money. Also, how much time did you spend gathering and entering information that is not useful or necessary? Your time costs the company money.
So like many things in life, you need balance. But how do you know what is the right amount of information?
Most defect tracking systems will have a form to fill out. As you are doing your testing, think about the spaces on that form. There will be:
- Which project you are testing.
- The version of the project.
- A summary of the defect.
This is where you need to start thinking. When someone is trying to find the defect, they will search the summary. Was there an error message? The title of an error dialog, the message in the dialog, the Exception from the log file (e.g. NullPointerException, IllegalFormatException, etc.). Try to put a keyword the Developer or Project Manager will associate with the problem. Error dialogs are good for the Project Manager and log messages are good for the Developer (and many Project Managers as well).
- The severity.
This should not be confused with the priority. The priority is how important fixing the defect is. On most projects this is decided by the Project Manager. The severity is how harmful the defect is to the customer. For example, let's say I found the following defect: If you enter "Bob" at the top of the page, scroll down two rows, press the Home key, press the shift key at least 37 times, turn the monitor off then back on again, while an audio CD is playing in the computer, it will short circuit and cause the monitor to burst into flames.
The severity of this is VERY high. Any time a computer program can cause your equipment to catch fire I would classify it as the highest possible severity. Now if the defect only happens with THE exact steps above and no other way, then the priority would be extremely low, in my opinion.
- Steps to reproduce.
I see a lot of defect tracking systems that do not have a specific section for this but it is very important. This also requires balance. I might be recording everything I did all day and after 5 hours of testing I find a bug. It would not be reasonable to list all 5 hours of testing. For me to enter all that information would probably take over 5 hours, and for the developer to step through everything would probably take another 5 hours. So this would triple the test time for each defect found. Additionally, I'd have to do all these steps when the defect is resolved to confirm it was fixed.
Often what I find myself doing is trying to find a short set of steps to reproduce the error. The more I know about the application the more I can guess at what went wrong. Hopefully this will help me determine a shorter test case to reproduce the problem.
- The environment.
If you are testing a web site, the type of web browser is important. The version might matter as well. When I test a web site I will use different browsers for the same test. If it looks fine in one browser but not the other I will mention it. What was the error/problem? What are the contributing factors? Is the amount of memory a factor? What about disk space or file permissions?
When you are trying to determine what to include and what not to, i.e. finding that balance, it is usually best to err on the side of caution. Anything you know is a factor must be included in the defect report. Anything you know does not contribute to the defect should be left out. Is there something you are unsure about? Take a few seconds (maybe minutes) to try a few things to eliminate or confirm the information should be included. If I click on something in a web page and an error dialog pops up, I'd try a different web browser. If we supported four web browsers, I have to test all of them anyway, so I try the test case with all four. Now I know if it is a defect for all browsers or just some browsers and I can include that in the defect report.
On the other hand, if I think it might have something to do with the amount of memory available I might just put the amount of memory available in the defect report. The effort to free up more memory might be more than it is worth. Additionally, how much memory do I need to free up to determine memory was a contributing factor?
Finally, any files associated with the defect should be attached. Attach any log files, but only if there is something in the log that looks unusual, like an Exception message or gibberish.
How to get a job
A few people I know are trying to find a job. Some will go to Workopolis and scan the job listing. Some will do it religiously. Some will do it occasionally. Most will do it religiously at first but stop when they get depressed.
The truth of the matter is, finding a job is a job. Think about what you do well. I'm good at testing software. Why am I good at testing software? I'm always learning. I'm always using what I learn to improve.
If finding a job is my job, what have I learned about finding a job? Am I getting better at it? Am I doing the same thing every day and expecting different results? Albert Einstein said this was the definition of insanity.
So learn something new and use it to get better at finding a job. How many people do you think go to Workopolis and look for work? That is how many you are competing with.
If you can get a friend to hand in a resume it will go a long way. Don't have any friends at a company you want to work for? Find anyone at the company and ask them to hand in your resume. A lot of companies will give an employee a bonus if someone they refer is hired. This doesn't usually apply to management, so go after an individual contributor.
If you are a programmer, you might be thinking working on an open source project would be good. Maybe it will help you meet someone who can hand in your resume. But the people doing the actual hiring won't be looking at open source projects. They'll be asking employees if they know anyone, placing ads on Workopolis or hiring a recruiter.
Rather than working on a project, figure out where recruiters hang out (LinkedIn) and be impressive there. If you stand out, in a good way, a recruiter will find you and introduce you to the hiring manager at the company.
Additionally, a lot of people believe the purpose of a resume is to get a job. Totally wrong. The purpose of a resume is to get an interview. The interview will get you a job. So you need to focus on getting an interview. Talk to hiring managers and see what they look for in a resume. If you can see things from their perspective you might be able to help them find you.
Think about this: I used to get 1000+ resumes for a position a former company advertised. The bigger the company and the better the job, the more people applying for it, and the more your resume has to SCREAM that you are the right candidate. I had a full time job. Scanning resumes was not my top priority. Anything someone could do to make my job easier I liked.
With 1000+ resumes you cannot read each one. Big companies will use a scanner to look for keywords. They will either reject any resume which does not have ALL the requirements, by keyword, or they will rank the resumes and keep the top ones. If they think they have time to read 20 resumes, they'll keep the top 20. So the first phase of screening resumes is throwing them away. I want an employee who I can trust. Are you trying to be sneaky? Did you put things like "Familiar with Java" but no evidence to support it? Stuff like this will get past phase one, but as hiring manager, I'll keep the resumes which show where the candidate actually used Java.
This might sound harsh but think of it this way. To properly read a resume might take me 3 minutes. If I gave everyone a chance we are talking 3000 minutes, or over 7 working days. With all the work a manager has, there is NO way they could devote that much time to finding a candidate. Even 20 resumes is going to be an hour out of my day.
If you embellish or lie on your resume there is a small chance you can get the job, but more likely you are going to tick off the interviewer. More than once I've brought someone in for an interview, slid a sheet of source code across the table and asked them questions about it, only to find they had no idea what the letters on the paper meant.
So try to be truthful on the resume BUT don't sell yourself short. Leave them guessing. Tell them something you did at your last job so it sounds interesting but vague enough that they will want to find out more. Don't give too much information. Too much information might help me to eliminate you from the interview process. Remember, you are trying to get an interview not a job.
Once you get the interview, think about what the employer wants. I'm amazed when I ask questions like, "I give you a task to do. I believe it will take 4 days to complete. After 2 days you finish the task. What do you do with the 2 extra days?"
Good answer: let me know you are done and ask what you should do now.
Bad answer: read a book, a nice fiction novel. I never seem to have time to read a good book anymore.
Find interview questions (technical and soft skills). Think about the answer. Practice your answer. Talk to yourself in a mirror or have a friend ask the questions and give them the answer. Having the right answer is part of it but saying it with confidence is important as well. The more you practice the better you get.
Finally, I once had two candidates. One was very technically competent but seemed to be taking the job because he couldn't find any other work. The other candidate was younger, less knowledgeable but really excited about working for me. I asked a colleague which one he would hire. He said to me, "I'd hire the eager candidate. You can always teach him the technical skills. You cannot teach someone to be passionate about their job." So if you are not excited about the idea of working for a company, don't bother applying.
Be forewarned, every time I got rejected from a job it hurt because I really wanted to work for that company.
Good luck.
Thursday, March 11, 2010
How to pick identifiers
When doing software automation pretty much all automation tools require some way of finding the elements in the application. If it is a desktop application, the tool needs a way to find the menus, buttons, text fields, etc. If it is a web application, the tool needs a way to find the various HTML elements, e.g. input, select, ul, li, table.
The trick to easily maintainable automation is to pick something unique and unlikely to change. Take for example a web page with a table where I want to find the contents of a cell. Tools like Selenium use 'locators'. A locator can be: id, name, dom, xpath, css. There are others but these five are the ones common to most web automation tools. I like to use these because if I switch to a different tool (new project, current tool no longer supported, different company, etc.) my knowledge is more transferable. If I use a recognition method unique to a tool I am tying myself to that tool.
If you read the HTML standard, you will find that the ID of an element must be unique. Being unique is paramount to automation. If two elements on a page can have the same value then there is the chance at some point your automation will fail. Some automation languages will fail with a helpful error, i.e. it will tell you there was more than one element which matched. Many will just select the first match, or the last match, or fail farther down in the test case. The problem with ID is not all elements have an ID attribute. There is nothing in the standard indicating all elements have to have an ID. But if an ID exists, this is the best choice.
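For example, with the Selenium WebDriver Java bindings (other tools have slightly different syntax), finding an element by ID is a one-liner; assume 'driver' is an initialized WebDriver:

// Look the cell up by its unique ID; this fails fast if the ID disappears.
WebElement cell = driver.findElement(By.id("foo"));
String contents = cell.getText();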
The NAME attribute does not have to be unique and therefore could cause problems in the future. I will occasionally use the NAME attribute, but only if I am working closely with the development team and have a feel for how they select attributes. If the attribute is generated by some framework (like Struts) and I know this framework is guaranteed to make all the names unique, I'll use it, but only if there is no better choice.
The DOM is going to be something a web developer will understand. However, the structure of the DOM might change. If the UI is fairly simple or the application is very stable then using the DOM might be okay. But if they are adding menus, moving around navigation bars, changing tables to div/span combinations, adding divs or spans to deal with new web browsers, etc. then the DOM will be changing and you will be required to update your automation. So what works today might give you a lot of headaches in the future. The whole idea behind automation is to spend more time creating the automation and reap the benefit of running it again and again. If you spend 4 times more time automating compared to manual testing and have to spend the same amount of time maintaining the automation as you would manual testing, you'll never recoup the initial cost of automation. The more the DOM is changing the less beneficial it will be to use the DOM.
The XPATH is my preferred method of identifying elements on a web page. How you use it can make a world of difference however. If the full xpath to the cell in a table was /html/body/div[2]/span[4]/table[3]/tbody/tr[7]/td[9] I could use that. It would definitely be unique. But if the document is in flux, this xpath will change. Alternative xpaths would be //body/div[2]/span[4]/table[3]/tbody/tr[7]/td[9] or even //table[3]/tbody/tr[7]/td[9]. The shorter I can get the xpath the less likely it will change. Even the last xpath will break if they add another table above the current one. What if they add or subtract a column? Then the td becomes td[8] or td[10].
Basically, using magic numbers is a bad thing. This is taught in first year computer programming. If we look closer at xpath we find there are functions we can use. If the cell has ID=foo I can use //TD[@id='foo']. What if today it is a cell in a table but in the future they change it to a set of DIV/SPAN elements? I could use //*[@id='foo'].
What if the automated tool occasionally adds a space to the attribute strings (I see this a lot)? I'd have to figure this out and change it to //*[@id='foo ']. But there is a better solution: try using //*[contains(@id,'foo')]. The danger of this is that there might be an ID='foo' and a second element with ID='foobar'. At this point you need to use your judgement. Are there attribute values that are subsets of other attribute values? If yes, don't use contains(). Is it common for spaces to get added to attribute values? If yes, do use contains(). What if both conditions occur? This is where it gets hard. There is no one right choice.
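One middle ground worth knowing when trailing spaces are the only problem: XPath's normalize-space() function strips leading and trailing whitespace but still requires an exact match, so ID='foobar' cannot slip through. A sketch in the same WebDriver style:

// Matches id="foo" and id="foo " but NOT id="foobar".
WebElement el = driver.findElement(By.xpath("//*[normalize-space(@id)='foo']"));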
Sometimes you want to add some more in. For example, if the table has ID='bar' and the structure of the table is fairly solid, I might be okay to use //table[@id='bar']/tbody/tr[7]/td[9].
Another solution is when things are relative to one another. For example, if I have a table where the third column has an element which is unique (the text, the id, whatever) and I want the seventh column on that row, I can use a relative path. For example, say I want the input field on the same row as the text 'Enter the Quantity:' then I could use:
//td[contains(text(),'Enter the Quantity:')]/../td[7]/input
This starts from the cell with the known text, goes up one to be on the row then down into the 7th column and finally down into the input element.
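Putting that into a WebDriver call, still assuming the label text from the example:

// Anchor on the label text, step up to the row, then into column 7.
WebElement qty = driver.findElement(By.xpath(
        "//td[contains(text(),'Enter the Quantity:')]/../td[7]/input"));
qty.sendKeys("3");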
The CSS is very much the same as XPATH. The only difference is that understanding how you can match things with CSS requires knowledge of CSS. If you are a developer who uses CSS to identify and target elements for, say, AJAX then obviously you are going to be comfortable using CSS to identify the elements. Just like with XPATH you can do things like TD#foo (e.g. //TD[@id='foo']), #foo (e.g. //*[@id='foo']), *[id*='foo'] (e.g. //*[contains(@id,'foo')]). NOTE: I'm not a CSS expert so take these examples with a grain of salt.
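For completeness, here are those same lookups as WebDriver calls, with the XPath equivalents in the comments; treat it as a sketch, not gospel:

driver.findElement(By.cssSelector("td#foo"));      // //TD[@id='foo']
driver.findElement(By.cssSelector("#foo"));        // //*[@id='foo']
driver.findElement(By.cssSelector("[id*='foo']")); // //*[contains(@id,'foo')]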
Summary, pick identifiers which are (1) unique, (2) unlikely to change and (3) you understand and can maintain.