Search

To search for specific articles you can use advanced Google features. Go to www.google.com and enter "site:darrellgrainger.blogspot.com" before your search terms, e.g.

site:darrellgrainger.blogspot.com CSS selectors

will search for "CSS selectors" but only on my site.


Tuesday, May 4, 2021

What is QA to me

Before I did testing and QA I was a software developer and a lecturer at university.

I actually wanted to teach high school (mathematics, computer science, statistics, etc.). I applied to Teacher's College but due to a bureaucratic mistake, I was told I had to apply again a year later.

I could have fought it but I figured I'd just find a job for a year and try again a year later.

Funny enough, I found a job testing software development tools for a company which created computer chips. Testing an IDE, compiler, assembler, micro-kernel, etc. required hardware and software development knowledge. I had both and thought working in 'QA' would be ideal as it wasn't a 'real' job.

I took that job in the Fall of 1998 with the full intention of going back to school in the Fall of 1999. It is now May 2021 and I'm still doing QA and testing software. I just fell in love with it.

Automated testing wasn't really a thing. Many people, myself included, were still doing Waterfall software development.

Over the years I moved from testing desktop applications to testing web applications. Initially, when software developers started doing Agile software development, QA or testing was still an afterthought. One signatory of the Agile Manifesto actually published an article about how QA was dead and Agile software developers didn't need software testers anymore. He was wrong and others pointed it out.

But it still got me wondering about the future of QA and software testing. Software development was evolving. So without skipping a beat, I evolved too.

With manual testing, I'd get a story from the developers. I'd manually test it. Then I'd get a second story. I'd manually test it. But I'd also re-test story number one. By the time the developers had written story 500, I was unable to do regression testing on stories 1 through 499. My first thought was to order the stories in terms of importance and test the most important stories first. Maybe I could test 300 of the stories before the Project Manager asked if the product was ready to ship. I was nervous to say yes. He shipped it anyway.

I'd keep testing after the product shipped. If I found anything wrong, we'd create a "service patch".

This didn't feel ideal. So I started automating UI testing. Tools like Selenium and HP Quality Center started appearing. Being a former software developer, I liked the programmability of Selenium. I could use the tools and best practices of software development to develop Selenium test suites.

But I remembered an article from IBM about how maintaining code was four-fifths of the work. I quickly realized that maintaining my test automation would become a huge part of the development effort. If I couldn't maintain it, it would fail.

Try as I might, maintaining UI automation which tested everything wasn't maintainable. It tested a lot but it was still lacking.

Along comes Mike Cohn, Mountain Goat Software, and the test pyramid. It made sense to me. I started learning about flipping the pyramid, or as Alister B. Scott put it, the test ice cream cone. Rather than doing a lot of testing early, I was testing everything at the end. I almost saw it as the testing lollipop.

So I started encouraging developers to test as much as they could at the unit level. Kent Beck was promoting JUnit, an xUnit test framework. This seemed ideal. I could take one of my tests at the top of the pyramid and see that it had 10 reasons it might be failing. If I found a defect, could the developers write a unit test that would detect the defect? If yes, then I no longer had to maintain my UI automation for that potential failure. I started to realize that they could have unit tests that caught maybe 6 of the 10 possible failures.

Then we started learning about mocking, test doubles, etc. Gerard Meszaros wrote a great book titled xUnit Test Patterns. You can read more about it at http://xunitpatterns.com/.

As I looked more and more into this, I started finding out about integration testing, contract testing, etc. All these other lower-level tests were easier for developers to write and maintain.

I started realizing that UI testing an application wasn't really valuable at all. The test pyramid showed me that UI testing, at the top of the pyramid, was the smallest portion. Trying to convince people I needed to write more UI tests was self-defeating. If I REALLY wanted to assure quality, I needed to enable the team to write better software.

Looking at patterns for software failure and thinking about how could this have been caught and fixed early was now a priority for me. Writing automation was starting to shift from my job to the job of the software developers. Was that Agile Manifesto signatory right? Was I no longer needed?

He wasn't. The software developers still have a different mindset. I still notice to this day that a software developer tends to make sure the application does what it is supposed to do. But a QA tends to make sure the application doesn't do what it isn't supposed to do. It is a subtle difference but an important one.

Are there software developers who know how to think like a software developer AND think like a QA? Absolutely! Does my working with a developer help them think like a QA and potentially put me out of a job? Well, kind of. When I pair with a developer a little bit of me rubs off on them. But there are so many other developers I can help. I've been doing this for almost 20 years now and I still haven't found people I can't help.

I still write automation but I also try to understand where problems might arise in the application. Enterprise architecture can get really complex. How can we reduce that complexity? How can I find and eliminate defects? It seems to be a never-ending task. I keep trying to put myself out of a job but so far I've just made myself and my team better.

Bottom line, stop doing Quality Control (you can't test quality into software) and start doing Quality Assurance.

Friday, April 30, 2021

Using git show to find breaking changes

First, sorry for not being around much. I've been lucky enough to be busy with work.

We had all the unit tests passing, and the smoke and integration tests passing, but when we put the full enterprise environment to the test, things were failing.

The technology we are using can't be run locally. So stepping through the code and debugging the issue wasn't possible. We had to go old school and add print statements to the code to see what was happening.

First, you should be using something like log4j to control the log levels. This way you can add debug statements to the code but be able to turn them off when you deploy to a production environment.

But how do you determine how many log statements are enough? How do you determine what is too few?

I looked at our build. The end to end tests were passing on Monday. They only run nightly. In the course of the day there were 11 commits to the git repository. First, was the end to end test failing because of a commit in the code? Or was there a change in the enterprise environment?

My first step was to deploy the code from the last time it was working.

Fortunately, most build pipelines will let you build and deploy based on the git commit hash. So I looked up the last time the build was working for the end to end tests. The pipeline would tell me which git commit was used to build it.

So I ran the pipeline and asked it to deploy that specific git commit. Then I ran the current end to end tests on the deployed application. They passed. Now I had the git commit for when things were working.

I also had the git commit at the head of master. This is the commit which was failing.

Let's say that the commit which was working was 0a803c1 and the latest commit was ea3dbb9. There are 11 commits. So how do I figure out what commit broke it?

I can see all the files which were changed using the git show command. The exact syntax is:

git show --pretty="" --name-only 0a803c1..ea3dbb9

This will provide a list of all the files which have changed. If a file was changed in multiple commits, it will appear in the list more than once. If all you care about is a short list of the files which changed, I'd actually use:

git show --pretty="" --name-only 0a803c1..ea3dbb9 | sort | uniq

This will get rid of duplication.
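To make this concrete, here is a throwaway demo. It builds a tiny repository with a known-good commit and two later commits, then lists the files changed in between. All of the repository contents, file names, and commit messages here are made up for illustration; the two rev-parse results play the roles of 0a803c1 and ea3dbb9.

```shell
# Build a scratch repo: one known-good commit, then two commits touching
# the same file.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "last known good build"
good=$(git rev-parse --short HEAD)      # plays the role of 0a803c1
echo "v1" > service.py
git add service.py
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "change service"
echo "v2" >> service.py
git add service.py
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "change service again"
bad=$(git rev-parse --short HEAD)       # plays the role of ea3dbb9
# List the files changed between the good and bad commits, deduplicated.
files=$(git show --pretty="" --name-only "$good".."$bad" | sort | uniq)
echo "$files"
```

Even though service.py was changed in two commits, it shows up only once in the deduplicated list.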

Once I know which files have been affected, I can start adding debug statements to these files and not bother focusing on any of the other files in the repository.


Monday, February 10, 2020

Hello old friend

It has been a while since I wrote to my blog. What happened? Life.

But why am I back here? The macOS Catalina 10.15.xx.

I'm still working as a consultant testing software my colleagues have written. Recently, I got a new laptop for work and it came with the Catalina OS.

Over the last year I have noticed that Apple Macintosh has become the new Microsoft Windows. Whenever I get an update or install something new and things behave badly (Wifi won't connect) I have learned to reboot my computer. Even though the update or installation didn't say it needed to reboot my computer, I'm finding rebooting my computer actually fixes things from time to time.

Normally, to save time, I never shut down or log out of my laptop unless I feel I REALLY need to. So at the end of the day, I close my laptop and it goes to sleep. I have it plugged in all day. So when I close it, it is at 100% battery power. The next morning, I open my laptop and it is often at 100% power. Sometimes it drops to 99% power.

Since I upgraded to Catalina, this is no longer true. I closed my laptop and didn't use it over the weekend. Monday morning I opened my laptop and it was at 7% battery remaining. What the heck!?!!

So after a little digging I find that the default settings for a lot of things leave background processes running, even when it should be in sleep mode. Here is what I did to stop my laptop from dying when the lid is closed:

  1. Go to System Preferences -> Energy Saver
  2. Go to the Battery settings and uncheck Enable Power Nap while on battery power
  3. Go to Bluetooth, click the Advanced... button
  4. Uncheck Allow Bluetooth devices to wake this computer
  5. If you are using an external mouse and keyboard this could screw them up but I don't use bluetooth devices
  6. Go to Notifications, select Do Not Disturb
  7. Check on When display is sleeping and When the screen is locked
With these changes, you should find your laptop is not draining power when the lid is closed and it is on battery. If you are still seeing the battery drain, you might have to reset the System Management Controller (SMC). To do this:
  1. Shut down your laptop
  2. Once it has shut down, press Shift-Control-Option on the left side of the keyboard
  3. While still holding down the 3 keys, press the power button and hold it for around 5 seconds
    1. The Apple logo will appear then disappear while you do this
    2. If the Apple logo appears a second time, you held the keys for too long
    3. Go back to step 1 and try again
  4. Release all the keys before the logo appears the second time
  5. Wait a second then press the power button to turn on the computer
  6. The SMC should now be reset

Wednesday, February 7, 2018

Clearing space on a macOS hard drive


As a tester I'm often hoarding articles, data, VMs and tools on my hard drive.

The hard drives a decade ago were IDE or SATA hard drives. It is easy to find 1 terabyte (1000 gigabyte) hard drives. So I tend to put 1 or 2 terabyte hard drives in my computer.

Today you tend to find computers using SSD hard drives. When they first came out all you could find was 128 gigabyte or 256 gigabyte hard drives. Getting 128 gigabytes was normal because the 256 gigabyte hard drive was often a few hundred dollars more.

As time went on, the cost of SSD drives came down. Now it is common to find 256 gigabyte hard drives, and it is a few hundred dollars more for a 512 gigabyte hard drive. Still, 256 gigabytes is a LOT smaller than a 1 terabyte hard drive.

Large data sets and VMs for testing different operating systems and browsers would often take up a LOT of space. Add to that all the tools I typically use (development, automation, communication, productivity, etc.) and I quickly find a 256 gigabyte hard drive full.

Recently, I kept finding I would have to backup something on my hard drive and delete it to make room for something on a new project. I kept trying to figure out how to free up more space on my macOS hard drive. I have only a 250 gigabyte flash drive in my laptop and it always seemed to be at 230 gigabytes used.

Recently I read about how backing up your hard drive using Time Machine would create these local backups on your hard drive. The article talked about where the folders with the local backup were and how you could just delete the folder and it would free up a lot of room.

With the latest version of macOS (High Sierra 10.13), it uses APFS. This new file system has a number of advantages around speed, encryption, reliability, etc. However, a new file system often means that just deleting things from the Terminal, or using older hard drive clean-up applications, could result in the file system getting a little messed up.

So rather than just delete local Time Machine backups, I would recommend using the tools which come with the operating system. In this case the tool is called tmutil.

If you go to a Terminal and enter tmutil, it will give you a list of things it can do. One of those things is listing the local backups. To see them you can use:
tmutil listlocalsnapshots /
This will list the local snapshots on the root hard drive. From the output, you can then delete one or more of the local backups. I deleted all the local backups and found my hard drive went from 230 gigabytes used down to 117 gigabytes used. Essentially, the local backups were taking 113 gigabytes (almost half my hard drive).

To delete a local backup, I'd use:
tmutil deletelocalsnapshots <snapshot_date> 
where <snapshot_date> would be the date of the snapshot. Basically, the output of listlocalsnapshots would be:
com.apple.TimeMachine.<snapshot_date>
where <snapshot_date> is something like 2018-01-14.
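Since tmutil only exists on macOS, here is a sketch of scripting the cleanup against a mocked copy of the listlocalsnapshots output. The snapshot dates are made up; on a real Mac you would pipe in the real `tmutil listlocalsnapshots /` output and drop the echo so the delete commands actually run.

```shell
# Mocked output of `tmutil listlocalsnapshots /`.
snapshots='com.apple.TimeMachine.2018-01-13
com.apple.TimeMachine.2018-01-14'
# Strip the com.apple.TimeMachine. prefix from each line and print the
# delete command you would run for that snapshot.
cmds=$(printf '%s\n' "$snapshots" | while read -r s; do
    echo "tmutil deletelocalsnapshots ${s#com.apple.TimeMachine.}"
done)
echo "$cmds"
```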

Wednesday, March 29, 2017

How to use secrets


From time to time you hear about how someone's site has been compromised and customers' personal information has been leaked. For example, Target Got Hacked Hard.

What can you do? Good passwords are a start.

The first step is understanding how we choose a password.

If the password can be 10 characters with uppercase, lowercase, digits and symbols, there could be (26 + 26 + 10 + 32) to the power of 10, or roughly 5.39e19, different possible passwords. Even if you could attempt thousands of passwords a second, it would take you eons to crack this. Most systems will also throttle how many attempts you can make per minute.
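The arithmetic can be double-checked right in a shell:

```shell
# 26 lowercase + 26 uppercase + 10 digits + 32 symbols = 94 possible
# characters, so a 10-character password has 94^10 combinations.
total=$(awk 'BEGIN { printf "%.3e", (26+26+10+32)^10 }')
echo "$total"
```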

A good hacker won't attempt all possible combinations. They would use a dictionary to restrict the attempts to known password combinations. They might be able to reduce the number of different possible passwords to something they could attempt within a year. So systems will often request you change your password every 90 days.

Additionally, if the system allows for more characters (password length of 20 or 30 characters) just adding a few characters will make it exponentially harder to crack.
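To see why it grows exponentially: each extra character multiplies the search space by the size of the character set, so even two extra characters make a huge difference.

```shell
# With 94 possible characters, going from a 10-character password to a
# 12-character password multiplies the number of combinations by 94^2.
factor=$(awk 'BEGIN { printf "%d", 94^2 }')
echo "$factor"
```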

So don't use known patterns (dictionary attack), make your password as long as possible, change it every few months. The change should also be significant.

Now maybe you have to type the password into a mobile device and a really long, complicated password is going to be a bother. So you end up shortening it. Also, if your password is something like "CBVcP3Zj/i}3mK,xUL7U", you are never going to remember it. You will be tempted to make it shorter, easy to remember, and you won't want to change it too often. So something like "!Passw0rd!" seems like a good password. It has uppercase, lowercase, a digit, symbols, it is easy to remember, and it is 10 characters long. However, it is easy to guess. A hacker will DEFINITELY have this in their dictionary.

What about changing the password? If your password was "My!S3cr3t!Passw0rd!" and I change it to "My!S3cr3t!Passw0rd!01", then next use "My!S3cr3t!Passw0rd!02", someone will guess this with a dictionary attack.

Additionally, it used to be that a 6 character password wasn't crackable by brute force. As computers got faster we needed 8 character passwords. Today it seems that companies are recommending 12 character passwords. So if you want to be safe, you should use at least 20 character passwords.

So REALLY the password of "CBVcP3Zj/i}3mK,xUL7U" is what we want to use. That said, I've known security staff to walk around the office looking for post-it notes. People will write their password on a post-it note and stick the note to a monitor. A lot of the time, systems are hacked because an insider got hold of a password which was written down. When I first got into computers, a high school teacher kept the password to the system on a sheet in his desk. I know this because students would see him looking in his desk then logging into the system. One of the students stole the password from his desk.

What about saving it in a file on the computer? That works even better for some things. I need to log into a website: I open my text file, copy the password, and paste it into the website. But what happens if someone gets a copy of my text file? Then they have all my passwords.

I can put them in a spreadsheet with a password or a zip file with a password. Is the password to the spreadsheet going to be "CBVcP3Zj/i}3mK,xUL7U"? And I don't write it down? Probably not.

This brings us to dynamic and static storage. If my password is in a variable or the clipboard, I can turn off my computer and it is gone. If it is stored in a text file, someone can make a copy of the file and take days trying to crack it. Hackers will sometimes get encrypted database files then spend weeks trying to crack it. This is where they get common passwords for their dictionary attacks.

What if your files are automatically backed up to iCloud or Google? Then someone might be able to intercept them and take weeks trying to crack them.

Also, how do you think I came up with "CBVcP3Zj/i}3mK,xUL7U"? The answer is a password vault program. There are a few. I use 1Password (because it works on all the devices I use). There are free options like LastPass, KeePass, Dashlane. Have a look at List of password managers for these and other options. Now you just have to pick one good, strong password to secure your vault.

Additionally, if you don't backup your vault to a cloud or let people have access to it, they cannot hack it.

Another bad practice I see people doing is saving passwords in variables. This is less for your personal passwords and more for programmers accessing enterprise sites. I might have a utility which accesses the XYZ system. If variables are set, it never asks me for a password. I might have:

export XYZ_USERNAME='abssass'
export XYZ_PASSWORD='2eD$g^^nJk5wHki6Lsst4Gwr'
Now if I run the utility it will see these variables are set and use them to log in. Or I might have something like: "http://$XYZ_USERNAME:$XYZ_PASSWORD@hostname" and so long as the variables are set, it will automatically log me into the website.

But now we are back to saving passwords in a text file. Things like export statements are saved in ~/.bash_profile on my computer.

For the team, they might save the passwords in a team password manager. For example, Vault by HashiCorp. To get my secrets I might execute something like:

vault read /secrets/team/storage/xyz
This might return something like:

Key           Value
xyz_username  abssass
xyz_password  2eD$g^^nJk5wHki6Lsst4Gwr

So now I can use my UNIX/Linux skills to parse the username and password out and save them in a variable. For example:

export XYZ_USERNAME=$(vault read /secrets/team/storage/xyz | grep xyz_username | awk '{print $2}')
export XYZ_PASSWORD=$(vault read /secrets/team/storage/xyz | grep xyz_password | awk '{print $2}')
This will read the information from the vault, parse out the appropriate information and save it to a variable. When I turn off my computer the variables are gone and the password isn't saved in a text file... not even encrypted.
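Since vault isn't installed everywhere, here is the same parsing step run against a mocked copy of the `vault read` output shown above, so you can see exactly what grep and awk are doing (the key/value pairs are the illustrative ones from this post, not real credentials):

```shell
# Mocked output of `vault read /secrets/team/storage/xyz`.
mock_vault='Key           Value
xyz_username  abssass
xyz_password  2eD$g^^nJk5wHki6Lsst4Gwr'
# grep selects the row, awk prints the second whitespace-separated field.
XYZ_USERNAME=$(printf '%s\n' "$mock_vault" | grep xyz_username | awk '{print $2}')
XYZ_PASSWORD=$(printf '%s\n' "$mock_vault" | grep xyz_password | awk '{print $2}')
echo "$XYZ_USERNAME"
```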

Tuesday, March 21, 2017

Using Charles from the command line

If you are testing network traffic you are probably familiar with Fiddler. Fiddler is a nice, easy to use tool for monitoring network traffic.

It is very easy to use. You start up Fiddler and it configures your Internet Settings. Now when you start up a web browser, it automatically routes traffic through Fiddler. As you hit web pages in the browser, the HTTP requests and responses show up in Fiddler. It makes it very easy to read and understand what is going on.

If you are using a macOS computer you will be sad to learn that Fiddler does not exist for macOS. It is a Windows only product. If you check for free options to do the same thing you find Wireshark (formerly Ethereal). But Wireshark's configuration and output assumes you have knowledge of TCP, HTTP, Sockets, packets, etc. You can get the information that you need but it is not as easy as Fiddler.

Additionally, to play back a request with some modifications is a lot harder with Wireshark than with Fiddler.

So what do you do? Charles Proxy. Unfortunately, it is not free but at $50 it is a good investment. If you are working at a company with many people needing it, there are discounts available as well.

Now if you get Charles you will find it automatically starts up and changes the Network Settings on your macOS computer. So all the browsers, and anything else which uses Network Settings, will automatically go through Charles.

What about command line? For example, I have a Docker script which creates a container, deploys a web service and waits for someone to hit it. What if I'm creating automation using Python, Java, bash script, etc.? These do not use the macOS Network Settings. So you will see nothing in Charles.

The solution is to add the necessary information to the shell before you launch your test scripts.

The way Charles works is rather simple. If my machine is using 192.168.0.4 and I want to hit www.google.com (209.52.144.114), the traffic might follow this route:
  • 192.168.0.4
  • 64.114.101.7
  • 209.121.102.146
  • 209.52.144.114
Charles acts as a MITM (Man-In-The-Middle). So if I want Charles to be able to observe the traffic, the route might be:
  • 192.168.0.4
  • CharlesProxy
  • 64.114.101.7
  • 209.121.102.146
  • 209.52.144.114
The way it does this is by creating proxy settings in Network Settings. To create proxy settings on the command line you need to set certain environment variables. For HTTP and HTTPS traffic, Charles tells macOS to use IP address 127.0.0.1 and port 8888.

For the command line you want to use:
export http_proxy="http://127.0.0.1:8888"
export https_proxy="http://127.0.0.1:8888"
Additionally, Charles tells macOS to bypass certain addresses. What I do is go to System Preferences, select Network, click the Advanced... button, and go to the Proxies tab.

On this page, assuming you are running Charles, you will see a bunch of addresses in the Bypass proxy settings box. Select all of them, copy them into the clipboard, go back to the command line and enter:
export no_proxy="<paste>"
With these three settings, anything you run from the command line will go through Charles.
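A quick way to sanity-check that the settings took effect is to confirm the variables are exported, and therefore visible to child processes (Charles itself does not need to be running for this check):

```shell
# Export the two proxy variables and count how many of them child
# processes can actually see in their environment.
export http_proxy="http://127.0.0.1:8888"
export https_proxy="http://127.0.0.1:8888"
proxied=$(env | grep -Ec '^https?_proxy=')
echo "$proxied"
```

If the count is not 2, the variables were set but not exported, and command-line tools will not pick them up.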

However, if you close the shell you lose all the settings. If you want to keep the settings you can add them to your ~/.bash_profile text file. Every time you open a shell it will add the proxy information to the shell. HOWEVER, you don't want this if you are not using Charles. To disable this you need to enter:
unset http_proxy
unset https_proxy
unset no_proxy
So what I do is add the following to my ~/.bash_profile text file:
# Charles shortcuts
function charles_on {
        export http_proxy="http://127.0.0.1:8888"
        export https_proxy="http://127.0.0.1:8888"
        export no_proxy="127.0.0.1:6258, 127.0.0.1:6263, 127.0.0.1:10191, 127.0.0.1:14821, 127.0.0.1:24861, 127.0.0.1:25007, 127.0.0.1:38151, 127.0.0.1:46360, 127.0.0.1:49801, 127.0.0.1:55730, 127.0.0.1:59483"
}
function charles_off {
        unset http_proxy
        unset https_proxy
        unset no_proxy
}
By adding this to my ~/.bash_profile text file I can use:
charles_on
to enable Charles. And I can use:
charles_off
to disable Charles. Whenever Charles is not running I MUST disable Charles on the command line.


Wednesday, August 24, 2016

Mobile web testing

I've been working on a project recently which required testing on a mobile device. The project started in April this year and was focused solely on iOS.

When I looked into what was available for mobile testing I found a number of different tools:


  • KIF
  • Appium
  • Frank
  • Calabash
  • EarlGrey
  • UI Automation
What I found next was a bother. I needed a tool which could be used for testing the IPA we would ship from an app store. Tools like Frank and Calabash are great at automating tests, but they require you to build a special version of the app. This would not be the same app you deployed to an app store.

This made it easy to eliminate those two great tools from my list of potential test automation tools.

I then looked into KIF and EarlGrey. They had great reviews and looked really promising, until I noticed Apple made significant changes to the UI Automation framework and broke KIF and EarlGrey. So if I wanted to test against iOS 9.3, developed with Xcode 7 and Swift, I was probably not going to want to use KIF or EarlGrey.

So the obvious choice was Appium. However, even Appium seemed to be affected by the changes Apple announced at their 2015 developer conference. :(

Since our iterations were one week and waiting to see who would 'fix' issues with their framework wasn't really an option, we branched Frank and started developing the app using Frank for UI testing. In the meantime we looked at UI Automation (our app was iOS only, so we didn't need to worry about Android support).

Initial use of UI Automation seemed good. So we started automating UI tests with it but continued to keep the Frank tests running in parallel. However after a few iterations we started to see maintaining the UI Automation tests was becoming increasingly difficult. Since I got into UI automation in 1998 I have found that failure to maintain a test automation framework is one of the common reasons for UI automation to fail. We didn't have a team of 20 QA Automation experts to keep the UI Automation framework going. :(

So I had a second look at the previously discarded frameworks. To my surprise and delight I found that support for them had been re-established and I took a second look at using Appium. 

Appium is definitely not fast and I'm looking for ways I can reduce the execution time of the Appium test suite (currently 15 minutes when run on hardware; I'd like to get it down to 5 minutes plus add more tests; maybe run tests in parallel on four or more phones).

Bottom line, Appium seems to be working well for us. I've created a page object model framework. In my next article I'll talk about using the Appium Inspector on a Mac laptop and things I found blocked me or slowed me down.