Decided the job at Certicom was not right for me. Whenever I am looking for a job I spend as much time preparing and interviewing the company as they spend interviewing me. I want to be sure that I'm a good fit for the company and the company is a good fit for me.
Just like a company, I use the probationary period to decide if I am going to stay with the company. There was no one reason I felt I didn't fit in, but a number of small things made me realize I would never be quite happy.
So I'm no longer commuting to Mississauga and have taken a new job within walking distance of my home.
Monday, October 4th I start my new job at the Ministry of Education. I'm not sure which technologies I'll be working with just yet, but I will try to blog about them here.
Search
To search for specific articles you can use advanced Google features. Go to www.google.com and enter "site:darrellgrainger.blogspot.com" before your search terms, e.g. site:darrellgrainger.blogspot.com CSS selectors will search for "CSS selectors" but only on my site.
Tuesday, September 28, 2010
Friday, August 20, 2010
New Job
It certainly has been a while since I posted to my blog. For anyone who is curious, I started a job as QA Manager at Certicom Corporation.
Certicom is in the business of cryptography. We create libraries or toolkits for C and Java which are used by customer applications. At the lowest level is Crypto. Crypto is cryptographic routines and algorithms used as part of a security solution. Customers like XM Radio use our Security Builder® Crypto™.
The next level up is PKI or Public Key Infrastructure. Security Builder® PKI™ enables you to add robust, standards-based digital certificates and key management to applications and devices, ensuring trust and non-repudiation. Some customers will develop their own Crypto solution for use with our PKI or they will use our Crypto solution with our PKI.
The security most people know about is SSL, or Secure Sockets Layer. Our Security Builder® SSL™ product can be used by anyone wishing to implement SSL, either in a client or a server. For example, with our product you can develop a mod_ssl for use with Apache Web Server.
Another term you may be familiar with is VPN, or Virtual Private Network. Creating a VPN requires IPSec, or Internet Protocol Security. This is achieved using Certicom's Security Builder® IPSec™.
In addition to all the publicly available products we create custom solutions for various industries and companies.
The company was recently acquired by Research In Motion. Most people know this company as BlackBerry, which is just a product the company produces.
As QA Manager I have the challenging task of testing all the different implementations of our products. Some run on Windows, AIX, HP-UX, Linux, Solaris or Mac OS, but others are built for embedded devices. Our Asset Management System, used in chip manufacturing plants, utilizes Web Services, AJAX, JavaEE and other web technologies as a front end to a complex cryptography solution.
At this time I am actually looking to hire testers for testing these products. It is quite challenging to find people who can do the work. Ideally, they need to know or be able to learn:
- C, C++ and Java programming
- Defect tracking systems
- Test reporting
- Knowledge of Windows, Linux or UNIX
- Experience testing embedded devices or mobile devices, e.g. BlackBerry
- Ability to create test plans
- Working knowledge of test automation
- Unit testing, system testing, integration testing
- Experience with source control
- Knowledge of cryptography
- Development experience or experience testing toolkits and libraries
If you know anyone who fits this description or you believe you are up for the challenge, you can apply to the position at https://rim.taleo.net/careersection/professional/jobdetail.ftl?lang=en&job=188752. If this link does not work, try the following:
- Go to http://www.rim.com/
- Go to the Careers section
- Go to Americas, this should bring you to a Job Search page
- In the keywords field enter: Certicom
Wednesday, June 23, 2010
Understanding how xpath relates to web pages
When using automation tools like Selenium or Watij you often find yourself creating an xpath to find an element. Talking to a few people, I've noticed there is a lack of understanding of how an xpath relates to a web page.
A web page is merely a group of blocks inside blocks. To illustrate I have the following image:
I think the step which is missing for most people is understanding how to look at a web page.
Imagine the outer block is the <BODY> of the web page. Inside the outer block, i.e. the BODY, are two rectangles. Let's say they are <TABLE> elements. The top table, i.e. /HTML/BODY/TABLE[1], has one row and three columns. The lower table, i.e. /HTML/BODY/TABLE[2], has three rows and four columns.
Let's say that both tables have one row of cells where the class was 'foo', i.e. <TD class='foo'>. If I wanted to find all cells with class='foo' and the text contained 'bar' I would use:
//TD[@class='foo' and contains(text(), 'bar')]
But what if I wanted to search only the second table? Then I would use:
//TABLE[2]/TBODY/TR/TD[@class='foo' and contains(text(), 'bar')]
Essentially, the longer the xpath the smaller the area of the web page I am searching. Using //BODY will search the largest rectangle in my example. Using //BODY/TABLE[2] will search only the lower table, one level in.
If you look at the third row of the lower table you can see the 'cells' contain another level of element. Let's say the cells, i.e. <TD>, contain a <DIV>. Using //TABLE[2]/TBODY/TR[3]/TD/DIV[1] focuses on the first div in the last row of the lower table.
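To see the scoping idea in action, here is a small sketch using Python's built-in ElementTree on a page shaped like the example above. Note this is an illustration, not the Selenium/Watij usage from the post, and ElementTree only supports a subset of xpath, so the contains() part of the predicate is done in Python:

```python
import xml.etree.ElementTree as ET

# A toy page shaped like the example: two tables inside the BODY.
html = """
<HTML><BODY>
  <TABLE><TBODY><TR>
    <TD class="foo">bar one</TD><TD class="foo">baz</TD><TD>bar</TD>
  </TR></TBODY></TABLE>
  <TABLE><TBODY>
    <TR><TD class="foo">bar two</TD><TD>x</TD><TD>y</TD><TD>z</TD></TR>
    <TR><TD>a</TD><TD>b</TD><TD>c</TD><TD>d</TD></TR>
    <TR><TD><DIV>inner</DIV></TD><TD/><TD/><TD/></TR>
  </TBODY></TABLE>
</BODY></HTML>
"""
root = ET.fromstring(html)

# Like //TD[@class='foo' and contains(text(), 'bar')]: the whole page.
everywhere = [td.text for td in root.findall(".//TD[@class='foo']")
              if "bar" in (td.text or "")]

# Scoped to the second table only, like //TABLE[2]/TBODY/TR/TD[...].
table2 = root.findall(".//TABLE")[1]
scoped = [td.text for td in table2.findall(".//TD[@class='foo']")
          if "bar" in (td.text or "")]
```

The unscoped search finds matching cells in both tables; the scoped one finds only the cell in the lower table, which is exactly the "longer xpath, smaller area" effect.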
Labels:
automation,
Selenium,
Watij,
xpath
Wednesday, June 9, 2010
Creating Time
I've started a new job and I'm working with my staff to see how things are done. One of the problems I see in many organizations is 'not enough time'. There never seems to be enough time in the day/week/month to get everything done.
This will always be true because there is always pressure to get to market before the competition.
However, I have found ways of getting more done compared to someone else in the same time period. The trick is to look for lost moments.
Yesterday I was talking to a staff member while he set up an appliance. The process took 5 minutes of running commands and answering questions and around 15 minutes of the software installing. I asked him a question and he immediately paused to answer me. He was literally one key press away from the point where the setup no longer required his attention. I stopped him and told him to press ENTER first.
When you look at what he was going to do:
- Enter 99% of the interactive portion of setup (4 minutes, 59 seconds)
- Answer my question (10 minutes)
- Press ENTER to finish the interactive portion of setup (1 second)
- Wait for the batch portion of setup to finish (15 minutes)
- Total running time = 30 minutes
The way I would do it would be:
- Enter 99% of the interactive portion of setup (4 minutes, 59 seconds)
- Ask me to wait for a moment (3 seconds)
- Press ENTER to finish the interactive portion of setup (1 second)
- Answer my question (10 minutes)
- Wait for the batch portion of setup to finish (5 minutes)
- Total running time = 20 minutes, 3 seconds
With microprocessors, you save a cycle here, a cycle there. It does not look like a lot, but look at the ratio: 1 cycle saved for every 12 cycles spent is 1/12, or over 8%. In real time, saving 8% of an 8-hour day means roughly an extra 40 minutes a day. Over the course of a week that is an extra 3 hours and 20 minutes.
This is how you create extra time. I will also do things like use a tool such as expect to automate a process. Rather than typing responses to all the prompts of a Bourne shell script, I will do the following:
- Run the script using sh -x <script-name>
- This will output everything which is happening
- Take the output and determine what all the prompts are
- Use a tool like expect to script the response to the Bourne shell script
- Run the expect script and, while it is running, do something else
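The same idea can be sketched with nothing but the Python standard library. A real expect script pattern-matches each prompt as it appears; this simpler sketch just pipes the canned answers in up front, so the "interactive" portion finishes without anyone sitting at the keyboard. The prompts and answers here are made up for illustration:

```python
import subprocess
import sys
import textwrap

# A stand-in for an interactive setup script that asks two questions.
child_source = textwrap.dedent("""\
    name = input("Hostname? ")
    port = input("Port? ")
    print("Configured " + name + ":" + port)
""")

# Feed the answers up front, the way an expect script would, then
# walk away while the rest of the setup runs.
result = subprocess.run(
    [sys.executable, "-c", child_source],
    input="appliance01\n8080\n",
    capture_output=True,
    text=True,
)
```

Once the answers are scripted, the whole setup becomes a waiting period you can fill with other work.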
Important to note however is that if you have 5 or 10 minutes while you are waiting for something, don't switch to a task that requires you to change what you are thinking about. Getting back into the right mindset for the thing you are waiting for could cost you 5 or 10 minutes. In that case there is no real savings.
So think about the times you are waiting for something and what you can do while you wait. Think about what tools you can use to create situations where you will be waiting and therefore able to do something else.
One last parting example: if you are using automation tools that take over your computer (they create mouse and keyboard activity), set up a VMware image and run the tool inside the VM. While the automation is running, minimize the VMware window and do something else on your desktop.
Sunday, May 9, 2010
Down on your luck? We can make it worse.
The scammers are at it again. I recently received an email from ECIT GROUP LTD. The email appears to come from Workopolis; the From address appears to be Workopolis.com <workopolis.jobs@workopolis.ca>.
If you receive an email via GMail, you can open the menu with "Reply" and there will be an option for Show Original. When you show the original it will display the full email header. Here is the email header for the message I received:
Received: by 10.231.60.10 with SMTP id n10cs78111ibh; Fri, 7 May 2010 15:54:15 -0700 (PDT)
Received: by 10.227.154.11 with SMTP id m11mr642898wbw.165.1273272854287; Fri, 07 May 2010 15:54:14 -0700 (PDT)
Return-Path: <pbcpttnl@kristell.webhosters-direct.com>
Received: from kristell.webhosters-direct.com (kristell.computerdirect.dedicated.redunix.net [193.34.167.216]) by mx.google.com with ESMTP id e7si7418364wbb.9.2010.05.07.15.54.13; Fri, 07 May 2010 15:54:14 -0700 (PDT)
Received-SPF: neutral (google.com: 193.34.167.216 is neither permitted nor denied by best guess record for domain of pbcpttnl@kristell.webhosters-direct.com) client-ip=193.34.167.216;
Authentication-Results: mx.google.com; spf=neutral (google.com: 193.34.167.216 is neither permitted nor denied by best guess record for domain of pbcpttnl@kristell.webhosters-direct.com) smtp.mail=pbcpttnl@kristell.webhosters-direct.com
Received: from pbcpttnl by kristell.webhosters-direct.com with local (Exim 4.60) (envelope-from <pbcpttnl@kristell.webhosters-direct.com>) id 1OAWQv-0005e2-PJ for my.address@gmail.com; Sat, 08 May 2010 00:54:13 +0200
To: my.address@gmail.com
Subject: Executive Financial Manager Position
From: Workopolis.com <workopolis.jobs@workopolis.ca>
Reply-To: recruiter@ecitgroup.co.uk
MIME-Version: 1.0
Content-Type: text/html
Content-Transfer-Encoding: 8bit
Message-Id: <E1OAWQv-0005e2-PJ@kristell.webhosters-direct.com>
Date: Sat, 08 May 2010 00:54:13 +0200
Most email programs will take bits from the email header and display them, hiding the rest. You'll see the Subject, To, From and Date fields. You might see the Reply-To address as well. When you click reply, you will almost definitely see the Reply-To information.
Notice how the From address and the Reply-To address are completely different. That should be your first clue something is odd. Above the To field is the route the email took to get to me. You don't need to know a lot of the details but just look at all the machine names and notice that workopolis.ca does not appear in anything above the To field. On a quick glance it looks like the email came from kristell.webhosters-direct.com. I haven't checked but I would guess that www.webhosters-direct.com is a web hosting company. Anyone can buy an account from them and set up email accounts and a website.
The idea is, the criminal creates an email account and possibly a website using a general hosting company. Anyone can do this, including you. They will pay for the account in such a way as to make it hard to find out who they really are (stolen credit card). Minutes after they create the email accounts they spam millions of people with their scam email. A few hundred people respond and give them personal information (bank information, credit card, identity, etc.). They take all that information and disappear.
People like me will report them and MAYBE someone will investigate. They will find the site was created with a stolen credit card and that the criminal logged in from a stolen account. A criminal will typically log into one compromised computer, from there into a second, then a third, and so on, hoping that somewhere in that chain of logins no record is kept. For example, they might install a virus on your computer and use your computer to log into the web hosting site. So when the police get the logs from the web hosting site, it leads back to your computer. When they get a warrant for your computer they find a virus and no logs. Dead end.
The best thing you can do is (a) don't fall victim to these scams and (b) tell your friends.
General rule of thumb is that NO ONE asks for personal information via an email. This scam says:
"Have a chequing account at ROYAL BANK OF CANADA (RBC)"

They are telling you right up front you have to have an RBC chequing account. This immediately rings warning bells for me. Other warning bells:
- They promise you money. Sometimes it is millions, but sometimes it is just a good income.
- They want personal information. Not just credit card information. When you call RBC they validate you are who you say you are. The more information I can gather about you, the better chance I have of pretending to be you.
- The From address and the Reply-To address are completely different.
- They continually claim to be reputable. We are recognized by the Better Business Bureau, we are registered with the Chamber of Commerce, etc. Maybe they even are. But why do they feel the need to convince you they are legitimate?
Bottom line, if it sounds too good to be true it probably is. If something doesn't seem right, i.e. you have an odd feeling about it, no matter how small, investigate. I'd say 10 times out of 10 that odd feeling turns out to be right on the money and they are a scam.
Remember that the scammers have been doing this for years. They are MUCH better at this than you are. If logically it seems legitimate but your gut says something is wrong, your gut is probably right. The only way for you to find out what you are missing is to be taken, or to find someone else who has been taken. Sometimes I walk away from a situation without solid proof it is a scam, but (a) at least I wasn't taken and (b) sooner or later I find out what the scam was and realize I was right to walk away.
Thursday, May 6, 2010
Organizing your automation
It has been a while since I have posted something to the blog. Life has been keeping me busy with things other than software testing.
I've been to a number of interviews in the past few weeks and I've been asked a number of questions. One of those questions was regarding traceability. If I have a number of automated tests and a set of requirements, how do I connect the two and how do I determine what has been run and what has not?
The first part of this is how the requirements are organized. If there is a tracking system and some unique identifier for the requirements, I want to use that identifier in my test automation. For example, at a previous job the defect tracking system was used to manage requirements. In addition to categories like 'defect', 'enhancement', etc. there was a 'new feature' category. The business analyst would file a 'new feature' request into the system. Using mock-ups and text descriptions, they would flesh out the requirements.
Each defect report has a unique tracking number. I would need to incorporate this number into the test automation. There might be one number for the requirement but a number of different test cases. Additionally, the number might be just a number, e.g. 28364. In the automation environment this might not stand out as a requirement. For this reason I would develop a convention where all requirements references would start with REQ-. Thus REQ-28364 could get added to the test case automation.
Ideally, you want the automation to be self-explanatory. If a test case fails, it would be helpful to know what failed without having to look up the requirements documentation. With automation like Selenium RC or Watij (both using Java) I can name the test case class the same as the requirement number, e.g. REQ-28364 but if I am looking at the test suite or test results it might not be obvious what this is. So I would create an annotation @REQ and put the requirement information in the comments of the source code.
The name of the class can then be used to indicate what the new feature is. The name of each test case would be a description of what is being tested. For example, if I'm adding Print support to an editor I might have the class name "PrintSupport" or "NewFeaturePrintSupport". The test cases might be:
- canPrintAllPages
- canPrintAllEvenPages
- canPrintAllOddPages
- canPrintRangeOfPages
- canCancelPrintDialog
When I look at the results for the test run I would see:
- PrintSupport->canPrintAllPages: Pass
- PrintSupport->canPrintAllEvenPages: Pass
- PrintSupport->canPrintAllOddPages: Pass
- PrintSupport->canPrintRangeOfPages: Pass
- PrintSupport->canCancelPrintDialog: Fail
It is very easy to see what is working and what is not.
The most important thing for tying requirements and automation together is creating a convention and sticking to it. To help you stick to it, edit the templates for the IDE. Whenever I create a new Test Case class, the template will have an @REQ field in the comments at the top of the class. I can even go one step further and have source control check for the @REQ field. If the field does not exist or it is blank, the check in of my automation will fail with an error telling me to add a requirement reference to the source code.
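As a sketch of that source-control check, here is what a simple pre-check-in script might look like in Python. The "@REQ REQ-<number>" comment format is an assumed convention for illustration, not something from an actual tool:

```python
import re

# Hypothetical convention: a comment containing "@REQ REQ-<number>".
REQ_TAG = re.compile(r"@REQ[:\s]+(REQ-\d+)")

def requirement_tags(source: str):
    """Return every requirement reference found in the source text."""
    return REQ_TAG.findall(source)

def check_in_allowed(source: str) -> bool:
    """A pre-check-in style gate: reject files with no @REQ reference."""
    return bool(requirement_tags(source))

# A tagged test class passes the gate; an untagged one does not.
tagged = """
# @REQ REQ-28364 - Print support for the editor
class PrintSupport:
    def can_print_all_pages(self): ...
"""
untagged = "class Mystery:\n    pass\n"
```

Hooked into source control, a check like this is what turns the naming convention from a guideline into something the team cannot forget.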
Labels:
automation,
development,
tips
Thursday, March 25, 2010
How to develop an automation framework for a legacy application
If you join a team testing an application which already exists, and may even have been released to customers, but has no automation in place, how do you start?
For many this can be an overwhelming task. The real answer is, one feature at a time.
Let's take for example the last project I worked on. It was created years before I joined the company and had a small base of customers who depended on it and tolerated the quirks and bugs of the system.
The first thing to do is pick an automation tool that will work for the application. Talk to the developers, business analysts and stakeholders to get a feel for where the project is going and keep that in mind. For example, the current requirements are Windows and Internet Explorer 7. In the future, we would like to support Windows, Linux, Solaris, etc. and we want support for any and all browsers (Internet Explorer, Firefox, Opera, Safari, Mozilla, etc.). Additionally, we also have a Windows Mobile application but we would like to change to using the web browser on iPhone, Palm Pre or BlackBerry. So do we use a tool that supports all these possible future combinations, or a tool that works for the current requirements? After all, the future requirements aren't written in stone, and 95% of our customers for the next few years will be on Windows with Internet Explorer 7 (or IE8 in IE7-compatible mode).
The choices came down to a very mature product which only supported Internet Explorer, or a newer product which supported everything but wasn't quite as mature. I selected Watij, the more mature product which only supported Internet Explorer. The majority of our customers used Internet Explorer, while all the developers did their work using Firefox. The defects which appeared in one web browser and not another were typically layout issues, and to date all the layout issues had been in Internet Explorer, because the developers were using Firefox to do their design.
Next you want to think about code maintenance. This might seem strange; we don't have a single line of code and I'm thinking about how I'm going to maintain this code that doesn't exist.
Think about successful products. Microsoft Office was released 20 years ago. JBoss was started over 10 years ago. Photoshop 1.0 was released 20 years ago. The last proven web based application I tested was started in 2004 and is still going strong. Basically, a successful software project can exist for 10 to 20 years. How long do you think you have to create a test framework? If the software is going to be released within one year, the project manager is going to expect the framework and initial test cases to be written in the first release cycle. So you have less than a year to create your framework. Thus, 6 months to 1 year to create and use the framework on, hopefully, a project which will last for 10 to 20 years. So 90% of your time is going to be on maintenance. Even if we expect the framework to be used for 5 years you are still looking at 80% of your time spent on maintenance.
So plan out how to break the framework into small, manageable pieces. Look at something like a library. Take for example the Java APIs. There are over 200 packages with over 3700 classes. A typical class might have dozens of methods. Did this happen overnight? Absolutely not. James Gosling started work on Java (aka Oak) in 1991 and by 1995 v1.0 was released. So the 3700+ classes were developed over 19 years.
Creating a successful framework doesn't mean all the code has to be in place at version 1.0. It just means the structure has to be there. If you look at version 1.0 of Java, it was object oriented, it had packages, constructors, exceptions, etc. All the basic functionality of the language which exists today was there in v1.0. Was there support for XML, Xpath, SQL? I'm not sure but there didn't need to be. There just had to be enough to produce something useful.
So your framework just needs to be structured in such a way that it will grow into something we can still use 10 years from now. So how do you create something comparable to Java, C++, etc.?
Looking at things like the MSDN library, I noticed the original libraries were very detailed. To accomplish one thing you often had to call numerous library functions. Over the years Microsoft recognized that most programmers call the same low-level functions in the same sequence, and added higher-level functions which wrap those common patterns.
Additionally, do things on the page translate to data structures that need to get passed around? For example, on a web page you will have forms to fill in. Later you might need to edit the data you input. The data required for creation (filling in the initial form) and the data required for editing will be the same data. So create data structures that mirror the form. If the data on the form changes, you just need to edit the data structure. Use getters and setters to obfuscate away the implementation of the data structure. For example, you might store the date string as a Date or Calendar object. Later the web page may change from a text field to a read-only text field with a Calendar widget. If you hardcoded a string for the date field you will have to go and clean up all your automation. If you successfully obfuscated away the data type, you should be able to update one library call and the automation will continue to work. Imagine they decide to change the web form two years from now. You have 4,879 test cases which fill in that form. How long will it take to find and fix 4,879 test cases? How long will it take to update one library call?
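A minimal sketch of that obfuscation, in Python with a hypothetical form and field names: the test code only ever reads and writes the date as a string, while the class keeps a real date object inside, so a widget change on the page touches one class instead of thousands of test cases:

```python
from datetime import date

class UserForm:
    """Mirrors a hypothetical signup form. Tests set and read the
    start date as text, but internally it is a date object; if the
    page swaps its text field for a calendar widget, only this
    class needs to change."""

    def __init__(self):
        self._start = date(2010, 1, 1)

    @property
    def start_date(self) -> str:
        # Render the date in whatever format the current widget expects.
        return self._start.strftime("%Y-%m-%d")

    @start_date.setter
    def start_date(self, value: str) -> None:
        # Parse the incoming text back into the internal date object.
        year, month, day = (int(part) for part in value.split("-"))
        self._start = date(year, month, day)
```

Every test case keeps calling start_date; only the property bodies know how the page actually represents the value today.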
Once you have decided on how to structure the framework you can start writing test cases. You might have noticed, I talked about designing the framework but I didn't write any of the code. The reason for this is because you want to have the test cases drive the code creation. If you have test plans in place and the priority of the test cases (conducted manually) is known, then the order you want to automate the test cases is also known. Just as you are not going to start manually testing a minor feature before you test a major feature, you are not going to start automating a minor feature before you automated a major feature.
My first test case might be something like:
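A minimal sketch of such a first test case and the super class it drags into existence, in Python where the post describes Java, and with every name made up (Python identifiers cannot contain a dash, so REQ-28364 becomes REQ28364):

```python
class TestFramework:
    """Hypothetical super class the framework grows out of; it holds
    helpers every test class will need, such as reading shared
    settings."""

    # In a real framework this would come from a properties file.
    defaults = {"base.url": "http://localhost:8080"}

    def get_property(self, name):
        return self.defaults[name]

class REQ28364(TestFramework):
    """Test class named after the requirement it covers."""

    def can_open_login_page(self):
        url = self.get_property("base.url") + "/login"
        # A real test would drive the browser here; the sketch just
        # returns the URL it would open.
        return url
```

Nothing exists until the test case demands it: getProperty (get_property here) only appears because the first test needed it, and it lands in the super class because every later test will need it too.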
The IDE will tell you the getProperty method does not exist and ask if you want to create it. You could create it inside the current test class, but won't other test classes need to get properties as well? So maybe you want to extend the test class and put the getProperty method in a super class. So I would go to the class definition and add an extends statement. Now I get an error on the extends statement, so I take the suggestion to create a new super class. The class will be empty and the error will go away. Now when I deal with the getProperty error, one of the suggestions is to create the method in the super class, so I do.
As you resolve each error, the test framework starts growing. You keep adding more and more code. Once you have resolved all the errors in your first test case, you have JUST enough code in the test framework to run one test case. The test case should now be runnable and you can add it to the nightly build process. Each night it will build the application, deploy it and run your one test. The next day, add another test. Are there things from the framework you can reuse for test case number two? Then reuse them.
If you find yourself putting code into more than one place, move the code to a library and change the two places you are using it into a call to the library.
At this point all your libraries will be fairly low level and work on one page at a time. What if I wanted to do a larger action? Maybe create account would create a user, login, add information about the user.
You might be tempted to put a higher level function call into one of the existing classes. If you do this you will have one library calling another library. This is not a good idea as it can lead to circular references.
What I have found happens is you end up writing helper methods in the test cases. So of the test case was originally:
So you want to move some of these methods to a more common location. You could create packages which are feature and use case oriented rather than page oriented. So your initial test cases will be very low level and test a page at a time. Next you start creating test cases which test features, use cases or stories. You can continue to run the page test cases but now you have much more powerful libraries. Just like with my example of MSDN libraries. Twenty years ago, programmers had to call the page tests. Later they called methods which called the page tests.
You can create 'requirement' test cases which call the page methods to test end-to-end requirements. You can create 'user-defect' methods. How a user does something or the data they use might reveal a defect. So you can create packages for all user defect reports then create a class for each customer. In each class will be the high level library calls for how they achieve something and the data they used to find a defect. Now you can create test cases which cross reference to the defect number and call these user-defect methods. If a project manager wants to see if a defect has been fixed, he can look to see of the corresponding test case has passed.
You want to think about how the test cases are organized, how they are executed, etc. as a set of manual test cases. The automation should reflect this because a project manager, QA manager, stakeholder, etc. might request a specific subset of the tests be run at different stages of the project.
Finally, you might have noticed there is not a lot of error handling. To keep this article short I have not been putting in error handling but you do. If you give a Java library bad input it will throw an exception. If you give your library method bad input, it should throw an exception and end the test case. If you were manually testing the application and an error appeared on the screen, you don't keep executing the test case. You STOP and investigate. With automation, you need to anticipate were things will go wrong. Any time you are submitting user input, an error can occur. Any time you are receiving input from outside the application (network, printer, COM call, etc.) an error can occur. You code should be constantly checking for things which can go wrong. An analogy is, every time you look at the screen with your eyes, the automation should be scanning the application for problems/errors.
For many this can be an overwhelming task. The real answer is, one feature at a time.
Let's take for example the last project I worked on. It was created years before I joined the company and had a small base of customers who depended on it and tolerated the quirks and bugs of the system.
The first thing to do is pick an automation tool that will work for the application. Talk to the developers, business analysts and stakeholders to get a feel for where the project is going and keep that in mind. For example, the current requirements are Windows and Internet Explorer 7. In the future, we would like to support Windows, Linux, Solaris, etc. and we want support for any and all browsers (Internet Explorer, Firefox, Opera, Safari, Mozilla, etc.). Additionally, we also have a Windows Mobile application but we would like to change to using the web browser on iPhone, Palm Pre or BlackBerry. So do we use a tool that supports all these possible future combinations, or do we use a tool that works for the current requirements? After all, the future requirements aren't written in stone and 95% of our customers for the next few years will be on Windows with Internet Explorer 7 (or IE8 in IE7 compatibility mode).
The tools I found were a very mature product which only supported Internet Explorer and a newer product which supported everything but wasn't quite as mature. I selected Watij, the more mature product which only supported Internet Explorer. The majority of our customers were Internet Explorer users, but all the developers did their work using Firefox. The defects which appeared in one web browser and not another were typically layout issues. To date, all the layout issues were in Internet Explorer because the developers were using Firefox to do their design.
Next you want to think about code maintenance. This might seem strange; we don't have a single line of code and I'm thinking about how I'm going to maintain this code that doesn't exist.
Think about successful products. Microsoft Office was released 20 years ago. JBoss was started over 10 years ago. Photoshop 1.0 was released 20 years ago. The last proven web based application I tested was started in 2004 and is still going strong. Basically, a successful software project can exist for 10 to 20 years. How long do you think you have to create a test framework? If the software is going to be released within one year, the project manager is going to expect the framework and initial test cases to be written in the first release cycle. So you have less than a year to create your framework. Thus, 6 months to 1 year to create and use the framework on, hopefully, a project which will last for 10 to 20 years. So 90% of your time is going to be on maintenance. Even if we expect the framework to be used for 5 years you are still looking at 80% of your time spent on maintenance.
So plan out how to break the framework into small, manageable pieces. Look at something like a library. Take for example the Java APIs. There are over 200 packages with over 3700 classes. A typical class might have dozens of methods. Did this happen overnight? Absolutely not. James Gosling started work on Java (aka Oak) in 1991 and by 1995 v1.0 was released. So the 3700+ classes were developed over 19 years.
Creating a successful framework doesn't mean all the code has to be in place at version 1.0. It just means the structure has to be there. If you look at version 1.0 of Java, it was object oriented; it had packages, constructors, exceptions, etc. All the basic functionality of the language which exists today was there in v1.0. Was there support for XML, XPath, SQL? I'm not sure, but there didn't need to be. There just had to be enough to produce something useful.
So your framework just needs to be structured in such a way that it will grow into something we can use 10 years from now. So how do you create something which will be comparable to Java, C++, etc.?
"A dwarf standing on the shoulders of a giant may see farther than the giant himself."

Borrow the design of something like Java. If the language you are using for automation is going to be object oriented, use proven object oriented designs. Leverage the work of James Gosling. Look at the application you are testing. Can you break it apart into sub-sections? For a web application you have pages. Each page will have a variety of actions. Some will alter the current page using JavaScript and some will load a new page. Focus on the functionality of just that page. Loading a new page will be handled by some other package, class or method.
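To make the one-class-per-page idea concrete, here is a minimal sketch in Java. The class and method names are invented for illustration, and the Browser interface stands in for whatever automation tool (Watij, Selenium, etc.) actually drives the browser; a recording fake is enough to show the shape of the design.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the automation tool's browser API (illustrative only).
interface Browser {
    void open(String url);
    void type(String field, String text);
    void click(String button);
}

// One class per page: everything you can do on the login page lives here,
// and nothing else. Loading a different page is some other class's job.
class LoginPage {
    private final Browser browser;

    LoginPage(Browser browser) {
        this.browser = browser;
    }

    void goToTheLoginPage() {
        browser.open("/login");
    }

    void logIn(String username, String password) {
        browser.type("username", username);
        browser.type("password", password);
        browser.click("submit");
    }
}

// A recording fake, just to show the calls the page class makes.
class RecordingBrowser implements Browser {
    final List<String> calls = new ArrayList<String>();
    public void open(String url) { calls.add("open " + url); }
    public void type(String field, String text) { calls.add("type " + field); }
    public void click(String button) { calls.add("click " + button); }
}

class LoginPageDemo {
    public static void main(String[] args) {
        RecordingBrowser browser = new RecordingBrowser();
        LoginPage loginPage = new LoginPage(browser);
        loginPage.goToTheLoginPage();
        loginPage.logIn("darrell", "mySecretPassw0rd");
        System.out.println(browser.calls);
        // prints [open /login, type username, type password, click submit]
    }
}
```

Swapping RecordingBrowser for a real implementation backed by the automation tool is then a change to one class, not to every test.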
Looking at something like the MSDN library, I noticed the original libraries were very detailed. To accomplish one thing you often had to call numerous library functions. Over the years Microsoft recognized that most programmers will call:

    result1 = functionA();
    result2 = functionB(result1);
    result3 = functionC(result2);
    result4 = functionD(result3);
    print result4;

So they created a new library where you called:

    print functionABCD();

If you look at the existing code in Microsoft you will see:

    functionABCD() {
        result1 = functionA();
        result2 = functionB(result1);
        result3 = functionC(result2);
        result4 = functionD(result3);
        return result4;
    }

You want to build things up the same way. You can have separate projects, separate packages, different naming conventions. The choice is all up to you. Just remember that the division will look silly at first but by this time next year you will be happy you broke it down as much as you did. What I mean is you might find you have 2 packages, each package has 3 or 4 classes, each class has 5 or 6 methods. In a few years' time you should find you have hundreds of packages, each package has dozens of classes and each class has numerous methods.
Additionally, do things on the page translate to data structures that need to get passed around? For example, on a web page you will have forms to fill in. Later you might need to edit the data you input. The data required for creation (filling in the initial form) and the data required for editing will be the same data. So create data structures that mirror the form. If the data on the form changes, you just need to edit the data structure. Use getters and setters to abstract away the implementation of the data structure. For example, you might store the date as a Date or Calendar object. Later the web page may change from a text field to a read-only text field with a calendar widget. If you hardcoded a string for the date field you will have to go and clean up all your automation. If you successfully abstracted away the data type, you should be able to just update one library call and the automation will continue to work. Imagine they decide to change the web form two years from now. You have 4,879 test cases which fill in that form. How long will it take to find and fix 4,879 test cases? How long will it take to update one library call?
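As a minimal sketch of the date example (the class and field names are illustrative, not from any real application): the tests always deal in strings, while the internal representation is a Calendar that can change without touching a single test case.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Calendar;

// Data structure mirroring a form. The date of birth is stored internally
// as a Calendar, but the getter/setter hide that choice behind strings, so
// the page can switch from a text field to a calendar widget and only this
// class needs to change.
class UserProfile {
    private static final String DATE_PATTERN = "yyyy-MM-dd";
    private final Calendar dateOfBirth = Calendar.getInstance();

    public void setDateOfBirth(String text) throws ParseException {
        dateOfBirth.setTime(new SimpleDateFormat(DATE_PATTERN).parse(text));
    }

    public String getDateOfBirth() {
        return new SimpleDateFormat(DATE_PATTERN).format(dateOfBirth.getTime());
    }
}

class UserProfileDemo {
    public static void main(String[] args) throws ParseException {
        UserProfile profile = new UserProfile();
        profile.setDateOfBirth("1990-05-17");
        System.out.println(profile.getDateOfBirth()); // prints 1990-05-17
    }
}
```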
Once you have decided on how to structure the framework you can start writing test cases. You might have noticed, I talked about designing the framework but I didn't write any of the code. The reason is that you want the test cases to drive the code creation. If you have test plans in place and the priority of the test cases (conducted manually) is known, then the order you want to automate the test cases is also known. Just as you are not going to start manually testing a minor feature before you test a major feature, you are not going to start automating a minor feature before you automate a major feature.
My first test case might be something like:

    class MyFirstTestCase {
        public void setUp() throws Exception {
        }

        public void testLoggingIn() throws Exception {
            String username = getProperty("username", "darrell");
            String password = getProperty("password", "mySecretPassw0rd");
            loginPage.goToTheLoginPage();
            loginPage.logIn(username, password);
            assertTrue(homePage.assertHomePage());
        }

        public void tearDown() throws Exception {
        }
    }

If you enter this into an IDE like IntelliJ IDEA or Eclipse you will get a lot of error messages. The getProperty method does not exist. The loginPage object does not exist. The goToTheLoginPage method does not exist. But the IDE has a helpful feature: it will make suggestions as to how to fix the errors.
It will tell you the getProperty method does not exist and ask if you want to create it. You can create it inside the current test class, but won't other test classes need to get properties as well? So maybe you want to extend the test class and put the getProperty method in a super class. So I would go to the class definition and add an extends statement. Now I get an error on the extends statement. So I take the suggestion to create a new super class. The class will be empty and the error will go away. Now when I deal with the getProperty error, one of the suggestions is to create the method in the super class, so I do.
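The super class that grows out of this step might start out as small as the sketch below (a guess at the shape, not the author's actual code). Reading from System.getProperty means any value can be overridden on the command line with -Dusername=... while the default keeps the test runnable out of the box.

```java
// Minimal base class shared by all test classes. Only getProperty lives
// here for now; more shared helpers move in as more tests are written.
class MyTestCase {

    // Returns the JVM system property if set, otherwise the default.
    protected String getProperty(String key, String defaultValue) {
        String value = System.getProperty(key);
        return (value != null) ? value : defaultValue;
    }

    public static void main(String[] args) {
        MyTestCase base = new MyTestCase();
        // Falls back to the default when the JVM property is not set.
        System.out.println(base.getProperty("username", "darrell"));
    }
}
```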
As you resolve each error, the test framework starts growing. You keep adding in more and more code. Once you have resolved all the errors in your first test case, you have JUST enough code in the test framework to run one test case. The test case should now be runnable and you can add it to the nightly build process. Each night it will build the application, deploy it and run your one test. Next day, add another test. Are there things from the framework you can use for test case number two? Then reuse them.
If you find yourself putting code into more than one place, move the code to a library and change the two places you are using it into a call to the library.
At this point all your libraries will be fairly low level and work on one page at a time. What if I wanted to do a larger action? Maybe a create-account action would register a user, log in and add information about the user.
You might be tempted to put a higher level function call into one of the existing classes. If you do this you will have one library calling another library. This is not a good idea as it can lead to circular references.
What I have found happens is you end up writing helper methods in the test cases. So if the test case was originally:

    import com.company.application.pages.LoginPage;
    import com.company.application.pages.HomePage;
    import com.company.application.pages.RegisterUserPage;
    import com.company.application.pages.UserProfilePage;
    import com.company.application.datatypes.UserProfile;

    class MySecondTestCase extends MyTestCase {
        LoginPage loginPage;
        HomePage homePage;
        RegisterUserPage registerUserPage;
        UserProfilePage userProfilePage;

        public void setUp() throws Exception {
            super.setUp();
            loginPage = new LoginPage();
            homePage = new HomePage();
            registerUserPage = new RegisterUserPage();
            userProfilePage = new UserProfilePage();
        }

        public void testCreateUser() throws Exception {
            String username = getProperty("username", "darrell");
            String password = getProperty("password", "mySecretPassw0rd");
            loginPage.goToRegisterNewUser();
            registerUserPage.fillInForm(username, password, password);
            registerUserPage.submitForm();
            loginPage.goToTheLoginPage();
            loginPage.logIn(username, password);
            homePage.goToUserProfile();
            userProfilePage.goToEditUserProfile();
            UserProfile profile = new UserProfile();
            // code to set the various fields of the user profile
            // e.g. profile.setHobbies("scuba diving, travelling, programming");
            userProfilePage.fillInForm(profile);
            assertEquals(userProfilePage.getUserProfile(), profile);
        }
    }

I might change it to:

    import com.company.application.pages.LoginPage;
    import com.company.application.pages.HomePage;
    import com.company.application.pages.RegisterUserPage;
    import com.company.application.pages.UserProfilePage;
    import com.company.application.datatypes.UserProfile;

    class MySecondTestCase extends MyTestCase {
        LoginPage loginPage;
        HomePage homePage;
        RegisterUserPage registerUserPage;
        UserProfilePage userProfilePage;

        public void setUp() throws Exception {
            super.setUp();
            loginPage = new LoginPage();
            homePage = new HomePage();
            registerUserPage = new RegisterUserPage();
            userProfilePage = new UserProfilePage();
        }

        public void testCreateUser() throws Exception {
            String username = getProperty("username", "darrell");
            String password = getProperty("password", "mySecretPassw0rd");
            registerUser(username, password);
            logInAndGoToUserProfile(username, password);
            UserProfile profile = createAUserProfile();
            updateUserProfile(profile);
            assertEquals(userProfilePage.getUserProfile(), profile);
        }

        private void registerUser(String username, String password) throws Exception {
            loginPage.goToRegisterNewUser();
            registerUserPage.fillInForm(username, password, password);
            registerUserPage.submitForm();
        }

        private void logInAndGoToUserProfile(String username, String password) throws Exception {
            loginPage.goToTheLoginPage();
            loginPage.logIn(username, password);
            homePage.goToUserProfile();
            userProfilePage.goToEditUserProfile();
        }

        private UserProfile createAUserProfile() throws Exception {
            UserProfile profile = new UserProfile();
            // code to set the various fields of the user profile
            // e.g. profile.setHobbies("scuba diving, travelling, programming");
            return profile;
        }

        private void updateUserProfile(UserProfile profile) throws Exception {
            userProfilePage.fillInForm(profile);
            userProfilePage.submitForm();
            // submitting the profile sends us to the home page
            homePage.goToUserProfile();
        }
    }

This is okay but if I need to register a user, go to a user profile from the login page or update a user profile from more than one test case, I'm going to have code duplication. You want to avoid code duplication. The more you duplicate code the more maintenance work you are creating PLUS there is a chance you will miss one of the duplicate pieces of code.
So you want to move some of these methods to a more common location. You could create packages which are feature and use case oriented rather than page oriented. So your initial test cases will be very low level and test a page at a time. Next you start creating test cases which test features, use cases or stories. You can continue to run the page test cases but now you have much more powerful libraries. Just like with my example of the MSDN libraries: at first the test cases had to call the page methods directly; later they call higher level methods which call the page methods for them.
You can create 'requirement' test cases which call the page methods to test end-to-end requirements. You can create 'user-defect' methods. How a user does something or the data they use might reveal a defect. So you can create packages for all user defect reports then create a class for each customer. In each class will be the high level library calls for how they achieve something and the data they used to find a defect. Now you can create test cases which cross reference to the defect number and call these user-defect methods. If a project manager wants to see if a defect has been fixed, he can look to see if the corresponding test case has passed.
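A sketch of how such a user-defect class might look; the customer name, defect number and data below are all made up for illustration.

```java
// One class per customer; one method per defect report. The defect number
// in the method name lets a test case cross-reference the original report,
// and the method preserves the exact data the customer used.
class AcmeCorpDefects {

    // Defect #1234: saving a profile failed when the hobbies field
    // contained commas with no following space. Replays the exact data
    // from the customer's report.
    public String defect1234HobbiesData() {
        return "scuba diving,travelling,programming";
    }

    public static void main(String[] args) {
        AcmeCorpDefects acme = new AcmeCorpDefects();
        System.out.println(acme.defect1234HobbiesData());
        // prints scuba diving,travelling,programming
    }
}
```

A test case named after the defect would call this method, fill in the form with the returned data and assert the save succeeds, so a passing run maps straight back to report #1234.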
You want to think about how the test cases are organized, how they are executed, etc., just as you would for a set of manual test cases. The automation should reflect this because a project manager, QA manager, stakeholder, etc. might request a specific subset of the tests be run at different stages of the project.
Finally, you might have noticed there is not a lot of error handling. To keep this article short I have not been putting in error handling, but you should. If you give a Java library bad input it will throw an exception. If you give your library method bad input, it should throw an exception and end the test case. If you were manually testing the application and an error appeared on the screen, you wouldn't keep executing the test case. You would STOP and investigate. With automation, you need to anticipate where things will go wrong. Any time you are submitting user input, an error can occur. Any time you are receiving input from outside the application (network, printer, COM call, etc.) an error can occur. Your code should be constantly checking for things which can go wrong. An analogy is: every time you look at the screen with your eyes, the automation should be scanning the application for problems/errors.
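A minimal sketch of that fail-fast habit (the page class, exception name and error-box lookup are invented; a real implementation would read the DOM through the automation tool after every submit):

```java
// Thrown by library methods when the application shows an error; it ends
// the test case immediately, just as a manual tester would stop.
class ApplicationErrorException extends Exception {
    ApplicationErrorException(String message) {
        super(message);
    }
}

// Stub page: errorBoxText simulates whatever the automation tool would
// scrape from an on-screen error box (null means no error is shown).
class FormPage {
    private final String errorBoxText;

    FormPage(String errorBoxText) {
        this.errorBoxText = errorBoxText;
    }

    void submitForm() throws ApplicationErrorException {
        // ... click submit via the automation tool ...
        // Scan for an error box before returning control to the test.
        if (errorBoxText != null) {
            throw new ApplicationErrorException("Error on page: " + errorBoxText);
        }
    }
}

class FailFastDemo {
    public static void main(String[] args) {
        try {
            new FormPage("Username is already taken").submitForm();
        } catch (ApplicationErrorException e) {
            System.out.println(e.getMessage());
            // prints Error on page: Username is already taken
        }
    }
}
```

Because every library method checks as soon as the action completes, the test dies at the step that went wrong rather than several pages later.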