Tuesday, November 29, 2011

Test Automation Magic Bullet

Ah yes, the myth of the test automation magic bullet.  You've got too many test cases to run and not enough time?  Let's just automate them all!  That'll do it!  Every test automator has probably lived through this scenario before, and probably more than once.

We have all heard the "Infinite Monkey Theorem" or some variation on it. In the case of test automation, it is kind of the opposite. My "Test Automation Theorem" goes like this:

A single test automator can automate an infinite number of
test cases of any complexity in any specified time period.

That about sums up management's typical understanding of test automation.  So, given the Test Automation Theorem, we can easily derive the Magic Bullet of Test Automation, which solves all timeline and workload problems when it comes to testing.

Here is an actual example:

The company has decided to migrate to a new piece of software, and it is super critical to the business (aren't they all?).  The schedule is getting pushed back due to unforeseen (and obviously unplanned-for - more on project planning in another rant) development issues and delays.  The test team has created over 7,800 test cases, with 2,700 identified as urgent or high priority. Senior management declares that the test team will automate all 2,700 urgent/high tests, and that will take care of any schedule problems.  So let it be written; so let it be done!

Yes! The test automation magic bullet will fix it all, and we have full management support to release the magic!  Ah, yo, um, Bob, um, flip that magic bullet switch over there and get this done. Sweet! That was easy.

Seriously, this was my situation recently. No joke.  Luckily, my manager has been down this path with me before, so we decided we needed to manage expectations and educate some people.

So let's break the situation down and see the (obvious to some) problem with the magic bullet approach. Here are the challenges:
  • The product has not yet been through a full test cycle, and all of the test cases are, of course, GUI based.
  • A new product means no test automation infrastructure is in place; everything has to be built from the ground up.
  • How big an automation team do you have, and on what timeline?
  • You can never automate every functional test.

Hitting the Moving Target:
If you have never tried to create test automation, especially GUI automation, for a version 1.0 product, here is what to expect: you will be trying to hit a moving target.  The interface is likely to change many times during the development cycle, and you will end up rewriting more than half of any automation code you create.  You will also end up stressed out, overworked, and, more than likely, hating your job.

Search around and you'll find many articles saying that automated testing at the GUI level is fragile.  It is, though many of the issues can be mitigated if the automation is designed well; but the GUI of a brand new application is the most unstable, ever-changing beast you can encounter.  I have read plenty of articles stressing automation at the API level, testing the business logic directly rather than through the fragile GUI. In a perfect world, that would be great, but testing at the API level typically requires cooperation from the application developers that is often hard to come by.

The application developers are already at full workload getting the "must have" product out the door, and if automation requirements weren't factored into the initial product planning (yeah, right), they just don't have time to "fix your automation problems".  It can be really hard to establish a good relationship between a test or test automation team and the development team, especially when your company starts outsourcing more than half of the development work.  The outsourced team is trying to meet their own timelines, and test automation just doesn't fit into their workload.  Chalk it up to shortsightedness in both budget and planning.
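To make the API-level point concrete, here is a minimal sketch of what testing a business rule below the GUI looks like.  It's plain JUnit, and the DiscountCalculator class and its 10% rule are hypothetical stand-ins, not from the actual product; the point is that no window objects or click coordinates appear anywhere:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical example: the business rule is verified directly,
    // with no windows, grids, or click coordinates involved.
    public class DiscountCalculatorTest {

        // Stand-in for whatever class actually owns the business logic.
        static class DiscountCalculator {
            double priceFor(double listPrice, boolean preferredCustomer) {
                return preferredCustomer ? listPrice * 0.90 : listPrice;
            }
        }

        @Test
        public void preferredCustomerGetsTenPercentOff() {
            assertEquals(90.0, new DiscountCalculator().priceFor(100.0, true), 0.001);
        }

        @Test
        public void regularCustomerPaysListPrice() {
            assertEquals(100.0, new DiscountCalculator().priceFor(100.0, false), 0.001);
        }
    }

Nothing in that test breaks when the GUI layout changes, which is exactly why those articles push the approach; it just requires the developer cooperation I mentioned to expose the logic in the first place.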

The "Hidden" Work of Building New App Automation:
Even when you already have test automation in place, you can't always leverage the work you've already done. In this case, the new application to automate was created in PowerBuilder (yuck!).  All of our previous automation efforts were for desktop Java Swing apps or web interfaces.  Looks like we have to start from (almost) scratch on this one.  That will sure make things easy </sarcasm>.

Now, even if this new application were web or Java based, there would still be lots of start-up work to do.  We have to build an object repository of some sort, depending on the automation tool of choice and how we've handled this sort of thing before.  We have to decide on the best approach for automation development (data-driven, keyword-driven, hybrid, etc.), and that will undoubtedly require learning how the application works and what data is required to execute the test cases.

In our environment, the test automation team is separate from the manual test teams, so we don't have any application knowledge by default (sort of), but this application is all new, so everyone is learning it. If your automators are part of the manual test team, then they have to learn the application anyway, but they also have to write test cases, test plans, etc., which eats into their time; it's a trade-off, and I can't say either arrangement is hands-down better than the other.

So after you have your strategy, objects, and application knowledge down, you have to start building up function libraries and such for the automation (see the sketch below).  What??? You thought we were just going to Record and Playback the tests?  Really? Have you ever tried to maintain test automation created like that? You still want to do it that way? GTFO!!!
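For anyone who hasn't built one: a function library is just a shared layer of named actions sitting on top of the raw tool API.  Here is a sketch of one entry, assuming a web UI driven with Selenium WebDriver; the element IDs are hypothetical, not from our actual app:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // One reusable action in the shared function library. Every test
    // that needs to log in calls this method instead of re-recording
    // the clicks, so when the login screen changes, only this one
    // spot has to be fixed.
    public class LoginActions {

        private final WebDriver driver;

        public LoginActions(WebDriver driver) {
            this.driver = driver;
        }

        public void loginAs(String user, String password) {
            driver.findElement(By.id("userField")).sendKeys(user);
            driver.findElement(By.id("passwordField")).sendKeys(password);
            driver.findElement(By.id("loginButton")).click();
        }
    }

Building a few hundred of these, plus the object repository behind them, is the "hidden" work that never shows up in the magic bullet estimate.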

The Other Magic Bullet: Outsourcing:

So the Test Automation Theorem isn't working out for you? You must have done something to mess up the equation.  Oh well, just use the other magic bullet: outsourcing.  With outsourcing, you can get a shit ton of test automators for the price of a can of Coke and a bag of chips.  That'll fix your problem.  Oh boy...

Outsourcing, in my experience, produces sloppy, inefficient, impossible-to-maintain automation code.  The outsourced "automators" (okay, let's call them what they really are: Record and Playback Drones) slap together just enough code to run each test, then repeat the process as fast as they can, driving the "number of tests automated" count as high as possible to demonstrate progress, meet deadlines, and get paid. They won't be around next test cycle to deal with the mess, but you will.  Good luck with that.
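If you've never seen what that mess looks like, here is an illustrative (entirely hypothetical) fragment of the kind of script record-and-playback churns out, again using Selenium-style Java for the example:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Typical recorded output: absolute XPaths tied to today's layout,
    // login steps copied inline into every script, and nothing named
    // after what it actually means.
    public class Test0417 {
        public void run(WebDriver driver) {
            driver.findElement(By.xpath("/html/body/div[2]/form/input[1]")).sendKeys("jsmith");
            driver.findElement(By.xpath("/html/body/div[2]/form/input[2]")).sendKeys("Passw0rd");
            driver.findElement(By.xpath("/html/body/div[2]/form/input[3]")).click();
            driver.findElement(By.xpath("/html/body/div[3]/table/tbody/tr[5]/td[2]/a")).click();
            // ...dozens more hard-coded steps, duplicated in the next script too.
        }
    }

Multiply that by 2,700 tests and the first GUI change turns the whole suite into a maintenance bonfire.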
