Seems life is turning my site into a monthly event. This month I'm going to talk about boundary test cases and a little about resumes.
When you are testing an application you want to be systematic. Even the simplest of applications can have so many possible combinations that testing all of them does not make sense. What you need to figure out is what subset is sufficient.
For example, a web site I was testing had login security and a session timeout. You had to log in, and the system would automatically log you out after a certain period of inactivity.
Recently, they made the timeout period configurable. You would edit a config file and set a property. If the application.timeout property existed and had a value of 30 it would automatically log users out after 30 minutes of inactivity.
So, how do you test this? I could try all possible values, but the application server takes 10 minutes to power up and 5 minutes to power down. Each value I try therefore costs 15 minutes plus the timeout period. Even if it only supported values up to 99 minutes, that is 99 restarts at 15 minutes each plus 1 + 2 + ... + 99 = 4950 minutes of waiting for timeouts, well over 100 hours; it would take me one solid week to test this one setting. If I took this long for every feature it would take me decades to test the application. In other words, before I finished testing it, the computer would be obsolete.
What I need to do is pick a subset. If 1 worked and 2 worked, I'd guess that 3, 4, 5, 6, etc. will work. If I understand the language the application was written in, I might be able to figure out the special cases. It was written in Java. Is the timeout value stored in an int? A long? An Integer? If the data type is 8 bits, then I know Java has no unsigned types, so the range is -128 to 127. What happens if I use 128? What happens if I use a negative number? The value 0 fits the data type, but what will the timeout code do with a 0? Does 0 mean 'disable timeout'?
Turns out the programmer used a 32 bit number. This means it ranges from -2147483648 to 2147483647. So I could try a value of 2147483648 (MAX+1). If they read the input as a String and use the Integer class to convert it, the Integer class will throw an exception for anything outside that range. What if I set the value to "twenty"? Did they think to handle that? For the application I'm testing, the user is an Application Server Administrator for LARGE enterprise environments. I didn't test for "twenty" because our users wouldn't try that.
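Something like this little throwaway program shows how Integer.parseInt() treats those values (the candidate values here are just examples, not anything from the real config file):

public class TimeoutBoundaryProbe {
    public static void main(String[] args) {
        // Candidate values straddling the 32-bit signed boundary, plus a non-numeric one.
        String[] candidates = { "30", "0", "-1", "2147483647", "2147483648", "twenty" };
        for (String value : candidates) {
            try {
                int timeout = Integer.parseInt(value);
                System.out.println(value + " parsed as " + timeout);
            } catch (NumberFormatException e) {
                // 2147483648 (MAX+1) and "twenty" both land here
                System.out.println(value + " rejected: NumberFormatException");
            }
        }
    }
}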
At this point you might be wondering, what about the resume thing? I see a lot of testers putting programming languages on their resumes. I test development tools. All my staff are programmers and I expect anyone I hire to be a programmer. So for me, putting a language on your resume means you know how to program in that language. If you don't, I have to put you on a different team (e.g. testing the user interface for application monitoring solutions). If I were hiring someone to test a non-development application, the language on the resume might mean you understand the limitations of the language and can apply that to your testing.
For example, if you note that you test Java and C applications, it means you understand that C has unsigned data types and Java does not. In Java the boundary cases are going to be around 2^7, 2^15 and 2^31, but in C it will be all the Java boundary cases PLUS 2^8, 2^16 and 2^32 (there are actually more, but this shows the difference between Java and C applications). All the people I've interviewed had no idea why the language the application was written in made a difference. When I asked them why they put it on their resume, they had no idea. If you don't know why something is worth putting on your resume, ask. If no one can tell you, don't put it on your resume until someone can explain why.
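On the Java side those numbers fall straight out of the wrapper-class constants; here is a quick listing (for a C application you would add the unsigned boundaries around 2^8, 2^16 and 2^32 on top of these):

public class JavaBoundaries {
    public static void main(String[] args) {
        // Java's integer types are all signed, so these are the interesting edges.
        System.out.println("byte:  " + Byte.MIN_VALUE + " to " + Byte.MAX_VALUE);       // -2^7 to 2^7 - 1
        System.out.println("short: " + Short.MIN_VALUE + " to " + Short.MAX_VALUE);     // -2^15 to 2^15 - 1
        System.out.println("int:   " + Integer.MIN_VALUE + " to " + Integer.MAX_VALUE); // -2^31 to 2^31 - 1
        System.out.println("long:  " + Long.MIN_VALUE + " to " + Long.MAX_VALUE);       // -2^63 to 2^63 - 1
    }
}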
Wednesday, June 4, 2008
Saturday, May 3, 2008
Been a while
It has been a while since I posted to my blog. I've been reading less techie books and taking time to myself.
Been using Watij at work to do load testing. I am testing Foglight 5.0. Foglight is an application monitoring tool. You install it on a computer, then deploy agents to other computers. The agents collect information and send it back to Foglight. Foglight saves the data in a database. A user can then log into the web console for Foglight and view the data. A Foglight cartridge is a package of agents, configuration files and schema information. For example, you would have an Oracle Database cartridge. This would have an agent which monitors an Oracle database, sends back all the information an Oracle Database Administrator would be interested in, then displays it in a manner the DBA would appreciate. The cartridge has default dashboards (a dashboard is a chart/table/view of the agent data) and rules (a rule does things like email the DBA when the database crashes, when Foglight detects a bottleneck, when someone tries to access the database illegally, etc.). There are other things like reports (PDF) and analysis tools.
So, if you load all the cartridges into Foglight (OracleDB, WebLogic, WebSphere, Windows, Solaris, AIX, HP-UX, Vmware, MySQL, DB2, etc.) you will have hundreds of different views. For example, just the Windows cartridge will have agents for DiskIO, FileSystem, CPU, Memory, EventLog, AppMonitor, WebMonitor, ApacheSvr, LogMonitor, etc. and each agent will have dozens of views.
Verifying all these views can be quite time-consuming. Each dashboard has an associated URL. As a user of Foglight I would log into the console (username/password), then select a dashboard from a treeview. I could also type the URL into the address bar and go to the dashboard directly.
This is how I use Watij. I created a set of JUnit test cases. The setUp() started IE and logged into the Foglight console. Each test case [test*()] loaded a URL, i.e. a dashboard. The tearDown() logged out of the console and quit IE.
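The skeleton looked something like this (a rough sketch only; the dashboard names are made up and the Watij calls are left as comments rather than exact API calls):

import junit.framework.TestCase;

public class DashboardSmokeTest extends TestCase {

    protected void setUp() throws Exception {
        // Watij: create the IE object, start the browser,
        // go to the Foglight console login page and log in (username/password).
    }

    public void testHostsDashboard() throws Exception {
        // Watij: navigate straight to this dashboard's URL, then wait for it to render.
    }

    public void testDiskIODashboard() throws Exception {
        // Watij: navigate straight to this dashboard's URL, then wait for it to render.
    }

    protected void tearDown() throws Exception {
        // Watij: log out of the console and quit IE.
    }
}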
One of the challenges I faced with Foglight's Web Console Framework (WCF) was the use of AJAX and client-side Javascript. A fair amount of the code was Javascript running on the client side. This meant the HTTP response would complete and Watij would see the request as done, but the client (Internet Explorer) would still be processing the Javascript (many of the views were complex enough that a page would take an additional 1 to 5 seconds to render).
The solution: WCF has a GIF whose style is set to style="VISIBILITY: visible" while the page is rendering and changed to style="VISIBILITY: hidden" when the rendering completes. So I just wrote a method which gets the CSS for the image as a string, then uses the matches() method of Java's String class to look for "style=\".*VISIBILITY:[\s]*visible.*" and loops until this changes. Basically it is a:
do {
    Thread.sleep(250);   // sleep 250 milliseconds
    // get the CSS for the GIF into the string s (via Watij)
} while (s.matches(REGEX));
The moment the style changes from visible to anything else, the loop exits and I know the page is really done. As a double-check I do a windowCapture from Watij and then manually inspect the images.
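If you want to convince yourself the pattern behaves, a quick standalone check (the style strings below are made-up examples, not actual WCF output) looks like this:

public class VisibilityRegexCheck {
    public static void main(String[] args) {
        String regex = "style=\".*VISIBILITY:[\\s]*visible.*";
        // Made-up sample attribute strings for the loading GIF.
        String rendering = "style=\"POSITION: absolute; VISIBILITY: visible\"";
        String done      = "style=\"POSITION: absolute; VISIBILITY: hidden\"";
        System.out.println(rendering.matches(regex)); // true  - keep looping
        System.out.println(done.matches(regex));      // false - rendering is finished
    }
}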
Darrell
Monday, March 17, 2008
nmap and amap
Back in the day I used to frequent alt.2600. This was a Usenet newsgroup. They are not as popular nowadays. If you wanted to learn something about, say, C programming, you'd go to comp.lang.c and read the messages. It was a lot like a bulletin board. The alt.* newsgroups were easier to form and didn't require you to get a bunch of people to vote on whether or not you could form the group. You would constantly see things like alt.bork.bork.bork.swedish.chef or alt.wesley.crusher.must.die.
One alt group that was formed and lasted for quite some time was alt.2600. The frequency 2600 Hz was a tone AT&T used to indicate a line was not in use. A hacker found that a whistle you got with Captain Crunch cereal could transmit the 2600 Hz signal. He would call a long distance number, blow the whistle and AT&T would assume the line was not in use, thus he got long distance calls for free.
alt.2600 was devoted to hackers, like Captain Crunch (the nickname of the guy who discovered the 2600 Hz trick), sharing information about hacking. Some were ethical and some were not.
Today there is a 2600 magazine available. I was reading this magazine today and it had a nice article on tools like nmap and amap.
I was familiar with nmap. The nmap software is a Network MAPper. You can use it to probe a network. You pick a machine and nmap will probe it to see what ports are active. You can either listen passively for transmissions, or you can actively send data to various ports and see what responses you get back. If you are passive, the machine operator won't know you are out there probing the network, but you don't get a lot of information back this way. If you actively probe the network you get a lot more data back, but the system operator will be able to detect that you are probing their network.
Hacking seems to be a dying art. Many of the system operators out there today don't think to look for people probing their network. Most will keep some sort of logging; if you do something malicious they will check the logs to see who did it, i.e. they are more reactive than proactive.
So you can often probe networks so long as you don't do anything to make the system operator respond.
The amap program, which I just read about, is an Application MAPper. Programs like nmap or netscan will probe a system, but they can easily be tricked. For example, web servers are typically at port 80. If I have an application server running at port 80, mapping software might mistakenly assume it is a web server. The claim of the 2600 article is that amap is a little smarter than that.
I'll still have to download the source code and compile a version for myself.
You can find information about nmap at http://en.wikipedia.org/wiki/Nmap. For amap, you'll need to go to http://freeworld.thc.org/thc-amap/ and compile the program yourself. NOTE: a good way to hack someone is to give a novice a tool that lets them play without the source code. The novice downloads and runs the 'tool' only to find out they have installed a Trojan horse on their system. You always want the source code *AND* you want to look at the source to see what it is doing. If there is any cryptic code like:
main(){int j=10024;char t[]=":@abcdefghijklmnopqrstuvwxyz.\n",*i=
"@zp:pf:qbogw\nxbz\nexke.z";char *strchr(const char *,int);while(
*i){ j+=strchr(t,*i++)-t;j%=sizeof t-1;putchar(t[j]);} return 0;}
Don't use it. If you don't understand what a piece of code does, there is a good chance it is a Trojan horse.
Wednesday, February 20, 2008
Safely probing web sites
The whole recruitmenttech.com / Bernard Haldane scam thing got me using my old telnet trick to examine the contents of a website. A number of bad eggs like to use security flaws in IE or Firefox to infect your computer. Most people try to make sure the security patches are up to date. But, there is always a period of time between when a virus is released and when a security patch is released to deal with it. If you visit the wrong website during that time you could be in for trouble.
What I like to do is avoid the security flaw by using a method the virus writers are not expecting. I like to use telnet. I tend to telnet from different operating systems as well. You might not have the luxury of a dozen different operating systems; you could consider VMware or some other OS emulator.
Anyway, here is how it works. I'll use telnet from an MS-DOS prompt. The connection for a web browser and for telnet is pretty much the same; telnet defaults to port 23 and web browsers default to port 80. So if I wanted to use telnet to connect to, say, www.blogger.com, I'd use:
telnet www.blogger.com 80
At this point the MS-DOS telnet program will print nothing. If you press CTRL-] you get to the telnet settings. In there, enter set localecho. Press ENTER twice, once to turn on local echo and once to get out of the telnet settings. You are now back at a blank screen again. Enter:
GET / HTTP/1.1
Host: www.bogus-computer.com
NOTE: you have to press enter twice at the end. Once to send the Host: field and once to signal the end of the HTTP header.
If you take too long to type things in, the computer at the other end will timeout and hang up on you. If you type it in quickly enough you'll get something back like:
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=ISO-8859-1
Set-Cookie: PREF=ID=0df2c62f96b9ffe7:TM=1203534740:LM=1203534740:S=0L2HwAZgQbyqrbmI; expires=Fri, 19-Feb-2010 19:12:20 GMT; path=/; domain=.google.com
Server: gws
Transfer-Encoding: chunked
Date: Wed, 20 Feb 2008 19:12:20 GMT
...
There should be a lot more as well. What I'm not posting here is all the stuff between <html> and </html>. What I have posted is the HTTP response header. Your web browser eats this and uses the information. For example, the Set-Cookie: field will make the web browser set a cookie.
The GET command is a standard HTTP request command. The next part, /, is the path you want to get, and HTTP/1.1 is the protocol version to use. Some web servers only work with HTTP/1.0, some only work with HTTP/1.1, most will accept both.
If you wanted the web site, http://en.wikipedia.org/wiki/List_of_HTTP_headers then the sequence would be:
telnet en.wikipedia.org 80
GET /wiki/List_of_HTTP_headers HTTP/1.1
Host: www.bogus-computer.com
You'll notice that I always put a Host: field. HTTP/1.1 requires it, and many web servers will not respond without it; a single machine can serve several sites, and the Host: field tells the server which one you are asking for. Many servers are also suspicious of robots and automation. If you leave the field out of the header, they just hang up or send you an HTTP/1.1 403 Forbidden (or 400 Bad Request) response. Try using telnet to www.google.com and they will refuse you; they not only want the Host: information but they expect a number of other fields as well. If you go to the http://en.wikipedia.org/wiki/List_of_HTTP_headers web page, they talk about some of the common header fields and have a link to the HTTP standard on www.w3.org.
When you get the response back, you'll have to look through the body and see if there are other references, e.g. <SCRIPT> tags, which will create more GET requests. Your web browser is often getting the first page and from there doing multiple GET commands for the contents (each <IMG> tag is a GET, running Javascript will create more GET commands, etc.).
Once you have downloaded everything you can then look at it with a text editor and see if there is anything in it which could harm your computer. If you don't know how to read Javascript, this is obviously not an option for you but as a big nerd this is what I do. :^)
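If you get tired of racing the server's timeout, you can script the same request. Here is a rough sketch in Java (the host and path are just the Wikipedia example from above) that opens a socket, sends the same GET request, and prints whatever comes back so you can save it to a file and inspect it:

import java.io.*;
import java.net.Socket;

public class RawHttpGet {
    public static void main(String[] args) throws IOException {
        String host = "en.wikipedia.org";                 // example host from above
        String path = "/wiki/List_of_HTTP_headers";       // example path from above

        try (Socket socket = new Socket(host, 80)) {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));

            // Same thing you would type into telnet, blank line included.
            out.print("GET " + path + " HTTP/1.1\r\n");
            out.print("Host: " + host + "\r\n");
            out.print("Connection: close\r\n");           // ask the server to hang up when done
            out.print("\r\n");
            out.flush();

            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);                  // headers first, then the body
            }
        }
    }
}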
Hope you enjoy this. Let me know if you have any questions.
Happy hunting!
Wednesday, January 16, 2008
Donald E. Knuth
Last week, Thursday, January 10th, was Donald E. Knuth's birthday. I cannot believe I missed it.
I first learned about him in university. Someone mentioned The Art of Computer Programming. They talked about it as if EVERYONE knew about it. Coming from a trade school background I had no idea who Donald E. Knuth was or any of his publications. Since my background was graphic arts and typesetting I was first intrigued by TeX and started using it to typeset my math homework. As time went on I switched from a Mathematics major to a Computer Science major, in part due to Donald E. Knuth.
When I started reading The Art of Computer Programming I noted in the preface the information about exercises and how he ranked them. If it had a [1] it would be a question that takes a second to answer. Something like [10] might take a few seconds to a minute, [20] might take a day, etc. (I might have these estimates wrong). The one which stuck in my head was the questions ranked [50]. Remember, I had no idea who Donald E. Knuth was nor how brilliant he was. He gave as an example the following question:
If an integer n is greater than 2, then the equation a^n + b^n = c^n has no solutions in non-zero integers a, b, and c.
He then proceeded to say, "If anyone finds an answer to these questions I'd appreciate you letting me know."
I figured this question didn't look too hard so I'd give it a try. I spent 3 months on it and figured out that if I could prove it for an integer n that is a prime number, then I could prove it for any integer n. Try as I might, that was the closest thing I could come up with. I'd figured out a lot of algebra and log/exp theory but I was stumped. After Christmas (I spent from mid September to Christmas working on this), I was defeated. It was the first time I couldn't answer a math puzzle. I went to my professor and asked him for the solution; I felt so stupid for not being able to figure it out myself. My professor laughed out loud when I asked for the solution. I felt so humiliated and a little angry; I was thinking he was laughing at me because of how stupid I must be. He quickly realized I actually expected an answer and that I thought he was laughing at my stupidity. He told me no one knows the answer to this puzzle and that the most brilliant minds have been trying to prove it since Fermat wrote it over 350 years ago (this was 2 years before Andrew Wiles published his proof).
The funniest thing is that a year later I was watching an episode of Star Trek TNG and Picard is reading a book about Fermat's Last Theorem; he says something like, "There are things man was never meant to understand, like Fermat's Last Theorem." The writers of Star Trek TNG assumed no one was ever going to solve this thing.
Donald E. Knuth always seemed to write in a very unassuming way. I have never had the pleasure of meeting the man. I would guess he just truly loves math and computers. Maybe some day.