It has been a short while since I posted. For a while there I was without a computer. My Windows box had decided I needed an updated video driver. Last time it did this I was reduced to 320x200 16 colour graphics. This time it totally hosed my machine. I tried booting in safe mode; everything appeared to start okay, but after all the drivers loaded it switched to the Windows XP splash screen (just before the login dialog) and hung there. I booted the thing with a Dr.DOS CD I created (just for such an emergency). The registry had been corrupted.
I decided now was the time to buy a new computer. I had been looking at the Mac for a while. It is really a UNIX box with an Apple GUI. The thing makes as much noise turned on as it does turned off, while the Windows box sounds like a jet taking off in the distance. I connected my ethernet cable, powered it up and it immediately found the DSL modem, knew what kind of protocol it used, asked me two questions (username and password for my ISP) and bang, I was on the net.
The really cool thing is I don't have to install something like Cygwin to do Bourne shell scripting. The REALLY cool thing is what I can do without much knowledge of the UNIX command line. There is a utility on the thing called 'Network Utility'. You go to Spotlight (Command-Spacebar) and enter 'network'. The first thing it finds is the Network Utility.
A lot of the stuff on this is pretty easy to use from the command line, e.g. netstat or ping. But there is a feature labeled 'Port Scan'. This thing is like nmap but a lot easier to use. You give it the name or IP address of a machine and it will probe the machine's ports. It will see if something is at each port and figure out what that something is. You might think it just sees something at port 80 and assumes it is http, but I've put an application server on a non-standard port and it still identified it.
I'm scanning a site now and seeing that they have turned off telnet (I think it is a Linux machine so telnetd is off by default) but they have sshd running on the standard port 22. It is a web site so no surprise there is an httpd running on port 80.
It is not clear whether this thing has a passive mode (or if it is always in passive mode). I'm going to guess it does not, and the host I'm probing sees me peeking at all the ports. If you want to do some stealth probing, this is probably not the tool for you. You'd have to go to the command line.
Makes me wonder, why would you want this? Your average Joe user can find you are running iiop on port 3528, but this will mean nothing to them. Most of the people who know about this sort of stuff know command line utilities like nmap. Still, it is kind of fun to play with.
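For comparison, here is a rough sketch of the equivalent probe from the command line, assuming nmap is installed (the host name is just a placeholder):

# scan the common ports on a host
nmap example.com

# scan a specific range of ports
nmap -p 1-1024 example.com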
The typical UNIX or Linux startup does not apply to the Mac. After the initial rc.d everything is started by a launchd application. Additionally, things you run from the GUI are run very differently than on, say, SuSE Linux. A GUI application has an elaborate directory structure and there is a special program that runs the application. For example, if the GUI indicates Foo is an application in /homes/darrell then to run it from the command line I'd have to issue: "/System/Library/Frameworks/Carbon.framework/Versions/A/Support/LaunchCFMApp /homes/darrell/Foo.app/Contents/MacOS/Foo"
The Foo.app is what you see as Foo from the GUI. You don't see the Contents folder or anything else inside the Foo.app directory.
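As a rough sketch, assuming the Foo.app example above, you can poke at the bundle from Terminal (the open command is the simpler way to launch a GUI application from the shell):

# look inside the bundle; the Finder hides all of this
ls /homes/darrell/Foo.app/Contents/MacOS/

# launch the application the way the Finder would
open /homes/darrell/Foo.app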
Wednesday, November 7, 2007
Sunday, October 7, 2007
Good list of books you should read
A group of people and companies got together and created the Software Engineering Body of Knowledge, or SWEBOK. It is basically a reference to software engineering covering everything from requirements to maintenance. There are chapters on all the stages of the software development life cycle.
A lot of that book is based on traditional, proven methodologies. A lot of what you see in Agile or eXtreme Programming environments is not well reflected in the SWEBOK. It is still a good starting point for anyone wanting to understand things like Testing or Software Quality Assurance.
The book is a list of known definitions and descriptions plus, and this is the important part, a reference to where the definition/description came from. It is the list of reference material at the back of each chapter which makes the SWEBOK most valuable. It is a great place to build a good library of books from.
When I started out in programming, I was self taught. I only knew things from trial and error or from whatever books I could find in stores or the public library. Books like The Art of Computer Programming by Donald Knuth or even Algorithms by Robert Sedgewick were not available in computer stores. Once I found these books I started to realize how little I knew. To paraphrase Socrates, "I am smarter than most because I know I know nothing." Read the SWEBOK and you'll know just how much there is to know, and hopefully you'll realize how much more there is for you to learn.
Interview Help Sites
I recently went to Wikipedia and searched for Software Quality Assurance. The page for SQA contained two external links, both of which pointed to the same site, just different pages.
I went to the external links and found something I've run into before while trying to grow my knowledge of SQA: a site full of interview questions.
The general idea behind these types of sites is a mish-mash of interview questions with answers. The site has some semblance of organization but as you go through it you will find the same questions with different answers. If I had to guess, someone set up a site to have people post questions they have been asked in interviews. The person is trying to remember the question and often forgets important details, so the questions are not well formed. On top of that, the answers to the questions are often from the people who failed the interview or from various people trying to help the interviewee answer the question.
For example, there is a section on C language (NOT C++). The first question is "1.What is polymorphism?". Obviously not a C language question.
In some cases I wonder if the person who created the original question really knows what they are doing. For example,
10. What will be the output of the following code?

void main()
{
    int i = 0, a[3];
    a[i] = i++;
    printf("%d", a[i]);
}
The answer posted notes that a[0] will be assigned the value 0, then i will be incremented to a value of 1. The printf will attempt to reference a[1] but since nothing has been assigned to it, you will get back a random value.
This is very true. What should also be noted, if this is a C language question, is that the ANSI C standard requires main to return an int for defined behaviour. Declaring main as "void main()" is okay in C++ but not in C. In pre-ANSI C the keyword void did not exist. When you see something like:
main()
{
    printf("Hello world.\n");
    return 0;
}
The default return type, when not explicitly indicated, is int. So the above snippet is the equivalent of:
int main()
{
    printf("Hello world.\n");
    return 0;
}
Many people wrongly assume no explicit return type means it returns void.
The questions on the interview web site have a lot of wrong answers. Oddly enough, I have conducted technical interviews for hundreds of people on various languages and operating systems. A fair number of them either have no knowledge of what they claim to know, or they frequent these interview web sites and have picked up bad knowledge of what they claim to know.
If you are surfing the web looking for answers to interview questions, think twice about the source of the information. Just surf the site and ask yourself: are there things about the questions which are questionable? Is the same question posted twice but with different answers? Are questions in the wrong section? Are there questions without answers? If the answer is yes to these questions then the person putting up the site probably knows as much as you, or less.
Additionally, when I run a whois on the site, the owner of the site is hidden. If you don't know who owns the site, how do you know you can trust the information? Why don't they want you to know who they are?
Bottom line, if you try using these interview sites to make it through an interview you might get the job but you will not keep it. These sites are good for questions but you want to find out the answers for yourself and not trust the answers posted. I hang out on various forums and newsgroups. If you seem like someone who really wants to learn I'll help you out. If you just want to pass an interview I'll know it.
Tuesday, October 2, 2007
Is Automated testing development?
I'm not talking about unit testing. I am talking about regression testing. There are a number of automation tools out there and for some applications you can just use the record and playback feature. WinRunner, SilkTest, RationalRobot, etc. all have a feature where you can turn on a recorder, manually walk through an application then save the script. Later you can play the script back; if nothing has changed the script should execute without error.
This is the theory. The reality is that most projects change and the scripts fail. You then have to take the time to re-record the script or edit the code so it matches the change in the application. Additionally, the scripts tend to make the application do things but the tester still needs to add code to the script to confirm the right things happen, e.g. assert statements or capture points.
So testers are creating, maintaining, enhancing and debugging source code. This sounds a lot like development work. Yet in most of the places I've seen people doing automation, and with most of the people I've interviewed (and some I hired), very few have knowledge of software development.
Yesterday I was talking to someone using an automated script. The script worked fine for the person developing it but did not for the person I was talking to. It turns out that the script assumes relative paths to other things. If you don't run it from the right directory (not the directory the script is in) it fails to work. To fix this flaw the 'developer' added a command line option to the script. The logic was "If there is a $1 parameter, cd $1 else assume you are in the correct directory."
There were no comments in the script, they did not reassign the $1 variable to something more sensible, and they checked for $1 deep in the script, i.e. not at the top.
The person I spoke with spent an hour trying to figure out what was wrong. She even spoke with the creator of the script and he couldn't figure out what she was doing wrong.
A good development practice is to follow a coding style guideline: use appropriate comments, parse input parameters near the beginning of the script, and possibly break the work into functions. Developers working on a team have learned that a consistent style makes it easier for everyone to take over someone else's code. At first a new developer might want everyone to switch to their standard, but once they come around everyone benefits.
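As a rough sketch of what that might have looked like for the script described above (the variable name here is my own, not from the actual script):

#!/bin/sh
# If a working directory is given as the first parameter, use it;
# otherwise assume we are already in the correct directory.
WORK_DIR=${1:-.}
cd "$WORK_DIR" || exit 1

# ... the rest of the script refers to $WORK_DIR instead of a bare $1 ...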
Creators of automated regression tests rarely create a coding standard. In many cases they don't use source control. Additionally, they will pick automation tools that have poor or no debugging capabilities. Developers like Visual C++ or Java because the IDEs are so advanced. Once you get familiar with Eclipse or NetBeans, you could never imagine using Java from the command line again.
If developers are using powerful tools like Eclipse to develop their code, how is an automated tester going to keep up? Every time the developer makes a required change/enhancement to the application, the tester will have to maintain their scripts. If the developer can make the change in minutes but the tester takes hours, the cost of maintaining the automation will not be worth it.
I like the idea of pair programming where one person does the development and the other person codes the tests. Agile programmers are usually thinking about unit testing when they describe this concept, but why not have an integration or system level tester be a person with development skills?
I'm not saying that developers should start doing automation testing. Developers have a different mindset than testers. I'm suggesting that testers should have some fundamental development training. If you hire QA or testing staff with development experience you will probably get better automation.
Additionally, if you are an automation tester, learn development techniques and apply them to your script development. Become more efficient. In many companies, they get automation tools but end up abandoning them because they become a maintenance nightmare. Maybe you will be the person who saves the tools and keeps the company using them.
Finally, automated testing is not WHAT you want to test. It is HOW you want to test. If I'm creating an application, I'll first come up with a list of requirements and design WHAT I want to create. If I decide I'm going to write the application in Java or C++, the language does not, for the most part, dictate WHAT I'm going to create. The automation tool you use comes at the implementation stage. You still need a test strategy, test plans and a list of test cases. Only then should you be looking at HOW you are going to automate the test cases.
Wednesday, August 8, 2007
AJAX
It has been a while since I have posted. I've worked on a number of things but I think the 'hot' topic right now would be testing AJAX. AJAX stands for Asynchronous JavaScript And XML. See Wikipedia's page on AJAX for more information.
A few years back a web site might be set up so you have two frames (left and right). The left frame has a tree view. When you select something in the tree view the web browser sends an HTTP Request to the web server. The web server replies with an HTTP Response to the web browser. The browser then displays it in the right frame.
Basically, every time you click the appropriate link, the browser sends a synchronous HTTP Request, the user waits for the response, then the browser displays it.
Along comes AJAX. Now the tree view is JavaScript. When you select something in the tree view, AJAX sends an HTTP Request directly to the web server. The web browser is unaware of the request and therefore does not wait for the response. You can continue using the browser while AJAX, asynchronously, waits for the response.
The problem with testing AJAX is that most test software detects when the web browser does an HTTP Request and waits for the HTTP Response. Because AJAX handles the request and response, the test tools are unaware there is a need to wait. If the selection in the tree view took 3 seconds, the test script will click the tree view and nanoseconds later expect the results to be in the right frame.
Solution #1: I can put a sleep for 3 seconds. Problem is, network conditions change. It might be next time it takes 5 seconds or 7 seconds. We could brute force it and make the sleep 1 hour. But if it sleeps for 1 hour on each click and there are more than two dozen clicks, it will take over a day for even a simple test case. NOT A GOOD SOLUTION.
Solution #2: If my test suite is written in javascript and running in the same authentication domain as the website I'm testing (e.g. Selenium) then I could write my AJAX so it sets a flag when it does the HTTP Request and clears the flag when it gets the HTTP Response. Now the test code can wait for the flag to get set then wait again for the flag to be cleared before it assumes the right frame has the results in it. This is a good solution but it requires you to modify the application under test (AUT).
Solution #3: Create a man-in-the-middle setup. The man-in-the-middle technique originally comes from phishing (fraud) schemes. You have probably seen it. You get an email from 'your bank' telling you to log in and fix a problem with your account. The link says http://mybank.com but the actual link is to http://mybank_com.evil_doers.com. The evil_doers.com website will receive the HTTP Request from your browser, look over the information you are sending, then pass it on to the real mybank.com. When mybank.com receives it, it will log you in and send an HTTP Response back to evil_doers.com. The evil_doers.com site will examine the response, log it and send it back to you.
It is like putting a wire tap on your telephone. We can use this for good. I have host1.com running the web browser. I have host2.com running man-in-the-middle software and it will forward things to testsite.com. On host1.com I would normally go to http://testsite.com/my_fabulous_app/index.jsp. Now I go to http://host2.com/my_fabulous_app/index.jsp. The man-in-the-middle software will be my test software.
Realistically, I could run the web browser and the test software on the same machine. I'd have fewer security issues if I did that, and the URL would become http://localhost/my_fabulous_app/index.jsp.
Labels:
AJAX,
automation,
HTTP,
Selenium
Thursday, July 12, 2007
Bourne shell scripting made easy
Someone was having trouble writing a shell script. A common activity for Bourne shell scripting is to take the output from various commands and use it as the input for other commands. Case in point, we have a server that monitors clients. Whenever we get new monitoring software we have to use the server command line tool to install the cartridge on the server, create the agents for each client, deploy the agents, configure them and activate them.
The general steps are:
1) get a list of the cartridges (ls)
2) using a tool, install them (tool.sh)
3) using the same tool, get a list of agents
4) using the tool, get a list of clients
5) using the tool, for each client create an instance of each agent
6) using the tool, for each agent created deploy to the client
7) using the tool, configure the agents
8) using the tool, activate the agents
Just looking at the first two steps, if I was doing this by hand I would use ls to get a list of all the cartridges. I would then cut and paste the cartridge names into a command to install them.
So a Bourne shell script should just cut the same things out of the ls list.
If the cartridge files all end with the extension .cart I can use:
ls -1 *.cart
If the command to install a cartridge was:
./tool.sh --install_cart [cartridge_name]
I could use:
for c in `ls -1 *.cart`; do
    ./tool.sh --install_cart $c
done
This is pretty easy and straightforward. What if the command was not as clean as ls? What if the list of agents was something like:
./tool.sh --list_agents
OS: Linux, Level: 2.4, Version: 3.8, Name: Disk
OS: Linux, Level: 2.4, Version: 3.8, Name: Kernel
OS: Windows, Level: 5.1, Version: 3.8, Name: System
To install the agent I only need the Name. If I only wanted the Linux agents, how would I get just the Name? First, you want to narrow it down to the lines you want:
./tool.sh --list_agents | grep "OS: Linux"
This will remove all the other agents from the list and give me:
OS: Linux, Level: 2.4, Version: 3.8, Name: Disk
OS: Linux, Level: 2.4, Version: 3.8, Name: Kernel
Now I need to parse each line. If I use the above command in a for loop I can start with:
for a in `./tool.sh --list_agents | grep "OS: Linux"`; do
    echo $a
done
Now I can try adding to the backtick command to narrow things down. The two ways I like to parse a line are with awk or cut. For cut I could use:
for a in `./tool.sh --list_agents | grep "OS: Linux" | cut -d: -f5`; do
    echo $a
done
This will break the line at the colon. The cut on the first line would give the fields:
- OS
- Linux, Level
- 2.4, Version
- 3.8, Name
- Disk
The problem is there is a space in front of Disk. I can add a cut -b2-, which will give me from character 2 to the end, i.e. cut off the first character. What if there is more than one space? This is why I like to use awk. For awk it would be:
for a in `./tool.sh --list_agents | grep "OS: Linux" | awk '{print $8}'`; do
    echo $a
done
For awk the fields would become:
- OS:
- Linux,
- Level:
- 2.4,
- Version:
- 3.8,
- Name:
- Disk
The spaces would not be an issue.
So by using backticks, piping and grep I can break things apart into just the lines I want. Piping the result of grep to cut or awk breaks each line apart and keeps just the bits I want.
The only other command I like to use for parsing output like this is sed. I can use sed for things like:
cat file | sed -e '/^$/d'
The // is a regex pattern. The ^ means beginning of line. The $ means end of line. So ^$ would be a blank line. The d is for delete. This will delete blank lines.
Actually, let's give an example usage. I want to list all files in a given directory plus all subdirectories, and I want the file size for each file. The ls -lR will give me a listing like:
.:
total 4
drwxrwxrwx+ 2 Darrell None    0 Apr 19 14:56 ListCarFiles
drwxr-xr-x+ 2 Darrell None    0 May  7 21:58 bin
-rw-rw-rw-  1 Darrell None  631 Oct 17  2006 cvsroots

./ListCarFiles:
total 8
-rwxrwxrwx 1 Darrell None 2158 Mar 30 22:37 ListCarFiles.class
-rwxrwxrwx 1 Darrell None 1929 Mar 31 09:09 ListCarFiles.java

./bin:
total 4
-rwxr-xr-x 1 Darrell None  823 May  7 21:58 ps-p.sh
To get rid of the blank lines I can use the sed -e '/^$/d'. To get rid of the path information I can use grep -v ":", assuming there are no colons in the filenames. To get rid of the directories I can use sed -e '/^d/d' because all directory lines start with a 'd'. So the whole thing looks like:
ls -lR | sed -e '/^$/d' -e '/^d/d' | grep -v ":"
But there is actually an easier answer. Rather than cutting out what I don't want, I can use sed to keep what I do want. The sed -n command will output nothing BUT if the script has a 'p' command it will print that. So I want to sed -n with the right 'p' commands. Here is the solution:
ls -lR | sed -n -e '/^-/p'
This is because all the files have '-' at the start of the line. This will output:
-rw-rw-rw-  1 Darrell None  631 Oct 17  2006 cvsroots
-rwxrwxrwx 1 Darrell None 2158 Mar 30 22:37 ListCarFiles.class
-rwxrwxrwx 1 Darrell None 1929 Mar 31 09:09 ListCarFiles.java
-rwxr-xr-x 1 Darrell None  823 May  7 21:58 ps-p.sh
I can now use awk to cut the file size out, i.e. awk '{print $5}'. So the whole command becomes:
ls -lR | sed -n -e '/^-/p' | awk '{print $5}'
If I want to add all the file sizes for a total I can use:
TOTAL=0
for fs in `ls -lR | sed -n -e '/^-/p' | awk '{print $5}'`; do
    TOTAL=`expr $TOTAL + $fs`
done
echo $TOTAL
The expr will let me do simple integer math with the output.
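As a quick illustration of expr (the spaces around the operator matter):

expr 5 + 3      # prints 8
expr 5+3        # prints the literal string 5+3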
NOTE: you can use man to learn more about the various commands I've shown here:
- man grep
- man cut
- man awk
- man sed
- man regex
- man expr
The sed and awk commands are actually powerful enough to have entire chapters written on them. But the man page will get you started.
While you are at it, do a man man.
Enjoy!
Labels:
AWK,
Bourne,
Linux,
shell scripting,
UNIX
Wednesday, June 13, 2007
We're not dead yet...
It has been a while since I posted to my blog. I've been fairly busy moving into my new home. I'm in now and the computer is set up. So it is time to blog again...
We have been hiring people to work in my department, Software Quality Assurance. Because our software products are development and system administrator tools, our QA staff needs to know how to program and how to validate the information our tools are providing; do you know AIX, HP-UX, Solaris, Linux (Redhat and SuSE) and Windows? Can you confirm the Disk I/O, Process, Thread, NIC, etc. information is correct? Can you write a multithreaded application which is guaranteed to deadlock so our tools will detect the deadlock? Can you write a J2EE application that exercises all J2EE technologies (EJB, JDBC, Servlets, JSPs, RMI, JNDI, etc.)?
These are the sort of skills the QA staff at my company possess. We interview a lot of people. Most don't have a clue about the basics. No one (myself included) had all the knowledge necessary to do the job well. So how do we do it? An ability to learn and find answers.
As we hire people, some work out but many more don't make it through the probation period; we either terminate them or they quit. I've been trying to put my finger on what the survivors have that the others don't and I think I figured it out. Those who survive have a hacker mentality. One guy I hired, Jerry, found this magazine and thought it would be right up my alley. It was 2600.
It has been over a decade since I hung out in alt.2600. When I saw the magazine I thought I'd point Jerry to the alt.2600 newsgroup. I was surprised to find out it was gone. I checked google.com to see if the archives were there and there was no hint of alt.2600. If you google "alt 2600" you will find the FAQ and references to the newsgroup but the newsgroup itself is gone. The last time the FAQ was updated was April 2004.
The magazine made me realize though that hackers think differently. Case in point, when Kryptonite locks came out they were advertised as impossible to cut with bolt cutters. I knew someone who took 4 foot bolt cutters and tried. He bent the bolt cutters. I looked at the lock and realized the locking mechanism overlapped the Kryptonite bar by 2mm. A swift whack at this point with a 2 pound hammer and the lock popped open. Most people looked at the ad and tried to figure out how to cut the bar (the ads indicated the bar was uncuttable). I stepped back and thought, the problem is not cutting the bar. That is narrow thinking. The real problem is removing the lock from what it held. Cutting the bar was only one way to do this.
Hackers get into web sites by looking for the weak points. They don't let the requirements lead them. The login web page only lets me enter limited information; don't use the login web page. Create your own web page and set the FORM action to point to the other web site. Design your FORM so you can send more information. Do something you know will fail just to see if there is useful information in the error message. The more you can reveal about the technology the more you can determine the weak point.
When I test a piece of software I'm looking for the weak point. This ability to see things from a different point of view lets me find the bugs the developer did not see.
Is being a hacker a dying art?
Friday, April 27, 2007
Sun is not the only vendor of Java
Many people who know about the Java programming language only know about the Sun implementation. But there are actually different vendors. There are:
- IBM
- HP
- Apple
- BEA
- Blackdown
If you are programming Java on the AIX operating system (IBM's version of UNIX) then you would be using IBM Java. If you are programming Java on the HP-UX operating system (Hewlett-Packard's version of UNIX) then you would be using HP Java. Similarly, if you are programming Java on MacOS X then you are using Apple Java. BEA is not a creator of an operating system. They create a J2EE application server called WebLogic. It typically ships with the Sun version of Java and BEA's version of Java. The BEA version is called JRockit. Finally, Blackdown is an implementation associated with Linux.
The idea behind all these different implementations of Java is that they are better in some way. You should get better performance on your web applications if you use JRockit on the BEA WebLogic application server. If you are running Linux, the Blackdown implementation should give you better performance.
If you don't have access to HP-UX, AIX or MacOS X then you will not have the opportunity to use the OS manufacturer's specific version. If you want though, you can download JRockit from BEA. Go to the BEA website, select Products then select JRockit. On the main JRockit page is an option to download JRockit for free. You can get Blackdown for free from the Blackdown website.
Labels:
Java
Wednesday, April 25, 2007
Identifying UNIX versions
I work in an environment with numerous different versions of UNIX and Linux. Sometimes I'll be accessing multiple machines from my workstation. Occasionally, I need to confirm the OS for the current terminal. The way to determine which version of UNIX you are using is with:
uname -a
For Solaris you would get something like:
SunOS rd-r220-01 5.8 Generic_117350-26 sun4u sparc SUNW,Ultra-60
For HP-UX you would get something like:
HP-UX l2000-cs B.11.11 U 9000/800 158901567 unlimited-user license

or

HP-UX rdhpux04 B.11.23 U ia64 0216397005 unlimited-user license
For AIX you would get something like:
AIX rd-aix09 2 5 00017F8A4C00
From this it is a little harder to see the version. It is actually AIX 5.2. If you check the man page for uname it will help you decode the hexadecimal number at the end. This will tell you things like 4C is the model ID and the 00 is the submodel ID. Additionally, AIX uses other switches to tell you about things the -a normally gives you on other platforms. For example,
uname -p    # the processor architecture
uname -M    # the model
For Linux things are a little trickier. The uname -a will tell you it is Linux but it will not tell you if it is SuSE Linux Enterprise Server (SLES) 10.0, Redhat AS 5.0, et cetera. To figure this out, look for a text file in /etc/ which ends in 'release', i.e.
cat /etc/*release
This text file will tell you which distribution of Linux you are using.
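Putting it together, a minimal sketch of a script that branches on the OS reported by uname (the Linux branch just dumps the release file):

#!/bin/sh
OS=`uname -s`
case "$OS" in
    SunOS)  echo "This is Solaris" ;;
    HP-UX)  echo "This is HP-UX" ;;
    AIX)    echo "This is AIX" ;;
    Linux)  cat /etc/*release ;;
    *)      echo "Unknown OS: $OS" ;;
esac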
Tuesday, April 17, 2007
Going from small programs to large programs
Whenever I'm interviewing someone (or being interviewed) I like to know how many lines of code you have created for one project. I'm not looking for a magic number; people tend to have either programmed a few hundred to a thousand lines of code, or they have worked on something in the tens of thousands.
The reason for asking this is that you can be a junior programmer and still survive programming a few hundred lines of code.
The trick to programming thousands of lines of code is DON'T. When junior programmers write a program they tend to write the entire program at once. If you are programming 100 lines of code, you can keep the entire concept in your head. Trying to remember 500,000 lines of code would be impossible for all but a few people.
The way you do it is to take the program and break it into sub-programs. You keep breaking it down until you have 5000 small snippets of code. Then you write one of those snippets.
For example, I assigned a co-op student to write a small Bourne shell script. Our product builds in parts and has dependencies. The build system puts all the build output in a specific directory (let's call it $BUILD_DIR). The structure is:
$BUILD_DIR/$PRODUCT/$BRANCH/$BUILD/
What I wanted for the script is for the user to specify the product, branch and build. Then the script would scan the build log for references to any other product in $BUILD_DIR.
The co-op student wrote a getopts loop to get the inputs from the user. Inside the loop was a case statement for each input (product, branch, build, help). In each case statement was an if/else statement for whether you did or didn't get the needed input. If you did not get the needed input there was a loop to list all the possible inputs.
As you can see, he was writing the code to get input, parse it, deal with it, etc. all in one loop/case/if/else/loop structure.
How could this be written more easily?
# Check that $BUILD_DIR is defined and exists
# Get the user input
# Save the product in $PRODUCT
# Save the branch in $BRANCH
# Save the build in $BUILD
# if $BUILD_DIR/$PRODUCT is not defined or does not exist
# list possible inputs for product
# exit
# if $BUILD_DIR/$PRODUCT/$BRANCH is not defined or does not exist
# list possible inputs for branch
# exit
# if $BUILD_DIR/$PRODUCT/$BRANCH/$BUILD is not defined or does not exist
# list possible inputs for build
# exit
# build a list of all other products (omit the current product)
# search $BUILD_DIR/$PRODUCT/$BRANCH/$BUILD for references to anything from
# list of all other products
# print the results
Each break is a separate concept. I would program one at a time. I am going to write the check for $BUILD_DIR. I'm going to think about all the possible problems. The variable could be undefined, check for that. The variable could have the wrong value, check for that. The directory might not be readable by me, check for that. I'd keep thinking of things like this. Once I am positive $BUILD_DIR will hold a good value, I forget about it and focus on getting input from the user. I'm just going to get input from the user. I'm not going to validate it is good input. I'm just going to parse the command line and save all the inputs. Once I have written that, perfectly, I move on to validating $PRODUCT. This will be similar to validating $BUILD_DIR. Maybe the code to validate $BUILD_DIR should be a subroutine and I can use it to validate $PRODUCT as well.
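As a rough sketch of what one of those snippets might look like, here is the $BUILD_DIR check written as a reusable subroutine (the function name and messages are my own, not from the actual script):

# check_dir <path> -- complain and exit if the directory is missing or unreadable
check_dir() {
    if [ -z "$1" ]; then
        echo "Error: directory not specified" >&2
        exit 1
    fi
    if [ ! -d "$1" ] || [ ! -r "$1" ]; then
        echo "Error: cannot read directory $1" >&2
        exit 1
    fi
}

check_dir "$BUILD_DIR"
check_dir "$BUILD_DIR/$PRODUCT"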
By breaking it down into small, manageable chunks it is just writing a bunch of small code snippets. If you can write one code snippet then writing a dozen is possible.
It is good to get into this habit with small programs. If you practise this technique on small programs then writing the large ones will come naturally.
The reason for asking this is because you can be a junior programmer and still survive programming a few hundreds lines of code.
The trick to programming thousands of lines of code is DON'T. When junior programmers write a program they tend to write the entire program at once. If you are programming 100 lines of code, you can keep the entire concept in your head. Trying to remember 500,000 lines of code would be impossible for all but a few people.
The way you do it is to take the program and break it into sub-programs. You keep breaking it down until you have 5000 small snippets of code. They you write one of those snippets.
For example, I assigned a co-op student to write a small Bourne shell script. Our product builds in parts and has dependencies. The build system puts all the build output in a specific directory (let's call it $BUILD_DIR). The structure is:
$BUILD_DIR/$PRODUCT/$BRANCH/$BUILD/
What I wanted for the script is for the user to specify the product, branch and build. Then the script would scan the build log for references to any other product in $BUILD_DIR.
The co-op student wrote a getops loop to get the inputs from the user. Inside the loop was a case statement for each input (product, branch, build, help). In each case statement was an if/else statement for, if you did or didn't get the needed input. If you did not get the needed input was a loop to list all the possible inputs.
As you can see, I'm writing the code to get input, parse it, deal with it, etc. all in one loop/case/if/else/loop structure.
How could this be written easier?
# Check that $BUILD_DIR is defined and exists
# Get the user input
# Save the product in $PRODUCT
# Save the branch in $BRANCH
# Save the build in $BUILD
# if $BUILD_DIR/$PRODUCT is not defined or does not exist
# list possible inputs for product
# exit
# if $BUILD_DIR/$PRODUCT/$BRANCH is not defined or does not exist
# list possible inputs for branch
# exit
# if $BUILD_DIR/$PRODUCT/$BRANCH/$BUILD is not defined or does not exist
# list possible inputs for build
# exit
# build a list of all other products (omit the current product)
# search $BUILD_DIR/$PRODUCT/$BRANCH/$BUILD for references to anything from
# list of all other products
# print the results
Each break is a separate concept. I would program one at a time. I am going to write the check for $BUILD_DIR. I'm going to think about all the possible problems. The variable could be undefined, check for that. The variable could have the wrong value, check for that. The directory might not be readable by me, check for that. I'd keep thinking of things like this. Once I am positive $BUILD_DIR will hold a good value, I forget about it and focus on getting input from the user. I'm just going to get input from the user. I'm not going to validate it is good input. I'm just going to parse the command line and save all the inputs. Once I have written that, perfectly, I'm move on to validating the $PRODUCT. This will be similar to validating the $BUILD_DIR. Maybe the code to validate $BUILD_DIR should be a subroutine and I can use it to validate $PRODUCT as well.
By breaking it down into small, manageable chunks it is just writing a bunch of small code snippets. If you can write one code snippet then writing a dozen is possible.
It is good to get into this habit with small programs. If you practise this technique on small programs then writing the large ones will come naturally.
Tuesday, April 3, 2007
Planning a multi-threaded application
I have used analogies to teach programming to people for over two decades. A while ago I realized that analogies aren't only good for teaching but sometimes you can apply the concepts from something completely different to programming (and you can apply programming concepts to other aspects of life).
For example, I used to talk about Relational Database Management Systems (RDBMS) using file cabinets, drawers, folders, index cards, etc., and I realized that the systems libraries used when I was a kid (before computers were affordable) actually worked the same way, and those concepts helped form the RDBMSes we see today. In other words, before computers existed, secretaries, clerks, librarians, etc. had to keep track of data (books, client information, etc.). If I wanted to search for a book by author they had a set of index cards sorted by author. If I wanted to search for a book by title they had a set of index cards sorted by title. The actual books were sorted, on the shelves, according to the Dewey Decimal System. If a book was added to the library, they had to insert it on a shelf (leaving room for new books so you don't have to shift the entire collection) and an index card for each index was created. The same sort of thing happens in a database: you insert the record for the book and then you update all the indices.
So, how does this relate to multi-threaded applications? Well, we just need to look at a multi-threaded application differently. A professor at U of T used things like Human, Male, Female, Baby, etc. or Animal, Mammal, etc. to teach object oriented design (OOD). I learned to use project management software; people were 'resources'. You planned the activities of resources. You created dependencies between two resources, e.g. Karen might be working on something but she needs work from Bob on day 4, so at day 4 there is a dependency between Bob and Karen. If the work Bob is doing will take 9 days, I want Bob to start 5 days before Karen, or I want someone to help Bob so it gets done earlier, or I want work for Karen to do while she is waiting for Bob to finish. Basic techniques to ensure my 'resources' are used efficiently.
What if we look at the threads in an application as 'resources' and use project planning software to make sure all the different threads are working together well? Or what about systems with multiple processors? Each processor could be a resource and I want to make sure they are utilized efficiently. If the math co-processor will take 9 seconds to complete a computation and the main CPU will need the result in 4 seconds, I have the same options as with Karen and Bob. I can start the math computation 5 seconds before the CPU needs it, I can have 5 seconds of other work for the CPU while it waits for the math computation, or I can add more math co-processors. As I type this I also realize I could use a faster math co-processor. In the human scenario, that means putting someone who is faster than Bob on the task so it gets completed in 4 days rather than 9.
So for everyone who thinks learning how to use project management software is only for managing projects, think again.
Friday, March 23, 2007
Named pipes
If you are familiar with UNIX, you are familiar with pipes. For example, I can do:
ps -ef | sort | more
The ps command will output a list of all processes to stdout. Normally, this would be to the console window. The pipe (|) will tell UNIX to take the output of ps and make it the input to sort. Then the output from sort will become the input to more.
Without using pipes I could do:
ps -ef > temp_file1
sort < temp_file1 > temp_file2
rm temp_file1
more temp_file2
rm temp_file2
This is like using the pipe but instead we put the output of ps into temp_file1. Then we use temp_file1 as the input to sort and send the output to temp_file2. Finally, we use temp_file2 as the input to more. You should be able to see how this is a lot like the first example using pipes.
Now here is a third way using Named Pipes. To create a named pipe use:
mkfifo temp_file1
If you list this entry using ls -l you will see something like:
prw-r--r-- 1 dgrainge staff 0 Mar 23 08:13 temp_file1
Notice the first letter is not - for a file or even d for a directory. It is p for a named pipe. Also the size of the 'file' is 0. To use the named pipe like the pipe example above, we will need two shells.
# shell 1
mkfifo temp_pipe1
mkfifo temp_pipe2
ps -ef > temp_pipe1    # this will block so switch to shell 2

# shell 2
sort < temp_pipe1 > temp_pipe2    # this will block so switch back to shell 1

# shell 1
more temp_pipe2
rm temp_pipe1
rm temp_pipe2
The interesting thing about this example is that we needed two shells to do this. At first this might seem like a downside but the truth is, this is a positive. I can do something like:
mkfifo stdout
mkfifo stderr

# shell 2
more stdout

# shell 3
more stderr

# shell 1
sh -x some_script.sh 1> stdout 2> stderr
The -x will turn on trace. Debug information will be output to stderr. By using the named pipes, I can redirect the regular output to shell 2 and the debug information to shell 3.
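Named pipes also stick around like ordinary files once you are done with them, so remove them the same way as in the first example:
rm stdout stderr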
Monday, March 19, 2007
Windows Manage
Originally, there were Windows 3.1, Windows 9x and Windows NT. The 3.1 version was pretty much dead when Windows 95 came out. Windows 95/98/ME were okay but did not have the robustness of Windows NT.
If you wanted a home computer you'd go for Windows 9x, but for a work environment they liked Windows NT. Windows NT 4.0 came before Windows 2000. Windows 2000 is REALLY Windows NT 5.0. If you look in the registry (regedit.exe) you will see evidence of NT 5.0. Windows XP Professional is really Windows NT 5.1 and Windows Server 2003 is NT 5.2. Windows Server 2003 and Windows XP Professional are on par; one is server class and the other is workstation class.
Since Windows 2000, all of these (including Windows Vista) have an extra menu option when you right click on "My Computer". It is the Manage option. When you select Manage it opens up the Microsoft Management Console (mmc). You can even find this on Windows XP Home.
MMC is a framework that you can open other programs inside. If you go to the Control Panel, you will find things like Services, Disk Management, Event Viewer, etc. When you open up Manage, it opens the MMC with *ALL* these administrative tools in it. You can edit users/groups, manage disk drives, view events, create performance logs, etc.
Any place you can see "My Computer" you can right click on it and select Manage. This is a great shortcut to these administrative tools. You should give this area a peek and see what is available in there. Be careful though; this is an area where you can lock yourself out of the system or disable the bootable hard drive if you are not careful.
Labels:
Windows
Thursday, March 8, 2007
Extracting part of a log using Bourne shell
Someone recently asked me how to select a range of text from a log file. Because it was a log file, each line started with the date and time for each log entry.
She wanted to extract all the log entries from a start time to an end time. For example, all log entries from 08:07 to 08:16 on March 8th, 2007. The format for the timestamp would be:
2007-03-08 08:07:ss.sss [log message]
where ss.sss was the seconds and [log message] was the actual text message written to the log.
My solution, using Bourne shell, was to determine the first occurrence of "2007-03-08 08:07" using grep. The GNU grep command would be:
START=`grep -n -m1 "2007-03-08 08:07" logfile.log | cut -d: -f1`
The -n will prefix the results with the line number. The -m1 tells it to quit after the first match. The output is going to be something like:
237:2007-03-08 08:07:ss.sss [log message]
where 237 is the line number. The cut -d: will break the line at the colons and the -f1 will take the first field, i.e. 237.
Next you want to find the last occurrence of 08:16. I would suggest looking for 08:17 using the same grep command, e.g.
END=`grep -n -m1 "2007-03-08 08:17" logfile.log | cut -d: -f1`
The reason you want to look for the value after the real END time is because a log might have many entries for 08:16. By looking for 08:17 we know we have captured all the entries for 08:16 rather than just the first entry.
This will give us the line AFTER the line we want, so we do the following to decrement it by one:
END=`expr $END - 1`
Now we want to extract everything from START to END in the log. We start by extracting everything from 1 to the END using the head command:
head -n $END logfile.log
Now we want to trim off everything before line $START. For that we can use the tail command, but tail wants to know how many lines to keep. We want to keep lines $START through $END, which is $END - $START + 1 lines. So:
LINES=`expr $END - $START + 1`
Finally we would have:
head -n $END logfile.log | tail -n $LINES
and this will display only the lines from 08:07 to 08:16 on March 8, 2007.
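Putting it all together, here is a minimal sketch of the whole thing as a single script. The log file name and the hard-coded times are just the values from the example; it assumes GNU grep (for the -m option) and does no error checking if the times are not found:
#!/bin/sh
LOG=logfile.log
# line number of the first 08:07 entry
START=`grep -n -m1 "2007-03-08 08:07" $LOG | cut -d: -f1`
# line number of the first 08:17 entry, then back up one to the last 08:16 entry
END=`grep -n -m1 "2007-03-08 08:17" $LOG | cut -d: -f1`
END=`expr $END - 1`
# number of lines to keep
LINES=`expr $END - $START + 1`
head -n $END $LOG | tail -n $LINES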
Labels:
Bourne,
shell scripting,
UNIX
Friday, February 23, 2007
Explaining URLs
I've been a little busy with work so we are looking at almost a month since my last post.
Here is an example URL:
http://128.23.64.233:8080/contest/reward.html?key=value&key2=value2
Here is the break down to the parts of the URL:
- http:// is the transmission protocol (humans talk French, English, etc.; computers talk http, ftp, telnet, etc.)
- 128.23.64.233 is the IP address of the website. If you use www.google.ca, your computer will turn this into 64.233.167.104.
- A web server typically listens on port 80 for http. The :8080 tells our web browser to send the message to port 8080 on the website computer instead. You can access Google using http://www.google.ca:80/ but the :80 is not necessary.
- The /contest is called the context root. The web site sees the /contest and knows that all the files for that context root are stored in a specific location on the hard drive. Typically there will be a root to the website, e.g. C:\www\http\. For this context root the files could be in C:\www\http\contest\.
- The reward.html is the file that gets processed. The website will process different files in different ways. It knows how to send you an HTML file or a JSP file.
- Everything after the ? is the set of parameters for the web page. If I have a form on a web page and it has the fields name and password, then I can enter the text in the textboxes or I can do: http://somesite/login.html?name=myname&password=secret
So that is a simple break down for an HTTP URL. There is more to it but this is a start. I'll post more later and maybe a tutorial on how to peek at websites without using a web browser. Good way to avoid website viruses.
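As a small taste of peeking at a site without a browser, here is a sketch of sending the request by hand with telnet, using the example address above (treat the host and port as placeholders; a real site may not be listening on 8080):
telnet 128.23.64.233 8080
# once connected, type the request line and a blank line to send it:
# GET /contest/reward.html?key=value&key2=value2 HTTP/1.0
# Host: 128.23.64.233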
Labels:
RFC
Wednesday, January 24, 2007
J2EE, what is it?
It has been a while since I posted. I've been busy training new co-op students and trying to get two projects out the door.
One of the projects runs on JBoss application server. The interface for configuring the application is deployed as a WAR file. So we can update the configuration tools on future releases by just shipping a new WAR file to the customers. The student didn't understand how this works. Here was my 5 minute explanation...
In Java you have JAR files. A JAR file is a ZIP file with a bunch of Java classes in it. You can create a JAR file such that it can be run using 'java -jar'. It just requires the JAR file to have specific files in specific locations. A WAR file is the same idea.
For example, if I want to create the WAR file console.war I might do the following:
- Create the directory C:\console
- Create the directory C:\console\WEB-INF
- Create the directory C:\console\META-INF
- Create the file C:\console\WEB-INF\web.xml
- Create the file C:\console\META-INF\Manifest.mf
This is the basic structure. Now I need the code that will actually get run. Let's say I create the class AdminConsole in the package com.mycompany.myproject.console. Or, in other words, I created the file C:\console\WEB-INF\classes\com\mycompany\myproject\console\AdminConsole.class.
The web.xml is defined by J2EE WAR file standards to have a specific format. It might contain something like:
<web-app>
    <servlet>
        <servlet-name>
            com.mycompany.myproject.console.AdminConsole
        </servlet-name>
        <servlet-class>
            com.mycompany.myproject.console.AdminConsole
        </servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>
            com.mycompany.myproject.console.AdminConsole
        </servlet-name>
        <url-pattern>
            /console/Admin
        </url-pattern>
    </servlet-mapping>
</web-app>
The <servlet> tag tells the application server about the class file and the <servlet-mapping> tells the application server when to run the code. So if the server is on the machine darrell.blogger.com the user could enter the following in a web browser:
http://darrell.blogger.com/console/Admin
The <url-pattern> takes the /console/Admin and knows which Java code to run. It will run the AdminConsole.class code. The AdminConsole class has to have specific methods implemented. Just like a desktop application knows to start at:
public static void main(String[] args)
The AdminConsole class will extend javax.servlet.http.HttpServlet and will have doGet() and possibly doPost() methods.
With a desktop application, if I wanted to print to the screen I'd use System.out, e.g.
System.out.println("Hello");
but with a servlet I'd have to get the output stream and write to it, e.g.
ServletOutputStream sos = response.getOutputStream();
// use the methods of ServletOutputStream to write to the web browser,
// e.g. sos.write()
Additionally, when you write to the web browser you have to indicate whether the data is an image, text, HTML, etc. Before you write you can do things like:
response.setContentType("text/html");
So if you can write a Java application, it is fairly easy to convert to a web application.
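The one step not shown above is turning the C:\console directory tree into console.war. Here is a minimal sketch using the JDK's jar tool (it assumes the JDK's bin directory is on your PATH; jar will also add its own MANIFEST.MF if you let it):
cd C:\console
jar cvf ..\console.war .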
Tuesday, January 9, 2007
Making shell scripts atomic
What do you do if you have a shell script that cannot be run twice at the same time, i.e. you have to wait until the script finishes before you can run it a second time?
The solution is to make the script check to see if it is running. If it is not running then let it start. The problem with this is it is possible to have the following scenario:
- start the script
- it checks to see if it is running
- start the script running a second time
- the second script checks to see if it is running
- flag that the first script is running and enter the critical section
- flag that the second script is running and enter the critical section
In other words, the check and the setting of the flag have to be atomic, i.e. you cannot have them be two steps in the script.
The solution is to use ln to create a link. Creating the link both sets the flag and performs the check in a single, atomic step. Then you check the exit status of the ln command to see whether you got the lock.
So here is the code to do it:
# Bourne Shell
#!/bin/sh
# create a filename for the ln
LOCK=`echo $0 | awk -F/ '{print $NF}' | awk -F. '{print $1}'`.LOCK
# create the link
ln $0 ${LOCK} 2> /dev/null
# see if we are already running
if [ $? -eq "0" ]; then
    echo "running atomic script here"
    echo "it just sleeps for 5 seconds"
    sleep 5
    /bin/rm ${LOCK}
else
    echo "script is already running"
fi
Or if you prefer the C shell:
# C shell
#!/bin/csh
# create a filename for the ln
set LOCK=${0:r}.LOCK
# create the link
ln $0 ${LOCK} >& /dev/null
# see if we are already running
if ( ! $status ) then
    echo "running atomic script here"
    echo "it just sleeps for 5 seconds"
    sleep 5
    /bin/rm ${LOCK}
else
    echo "script is already running"
endif
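To see it work, save the Bourne shell version as a script (I'll call it atomic.sh here; the name is arbitrary) and start it twice. While the first copy is still sleeping, the second one should report that the script is already running:
sh atomic.sh &     # first copy grabs the lock and sleeps for 5 seconds
sh atomic.sh       # second copy prints "script is already running"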
Labels:
Linux,
shell scripting,
UNIX