Why and how to automate tests.
By: Slav Kurochkin
The best way to think about automation is as repeatable tests: the kind you usually perform during regression testing. A QA engineer may repeat the same tests every release, and that repetition creates problems that automated tests can solve.
Here are three main reasons why automation is so important:
1. Human factor
We are all human, which means that when we do something often, we get used to it. It is like driving a car with a manual transmission: when you first start driving, you always expect that something can go wrong and the car can stall because you didn't change gears well. After a couple of weeks you have formed habits; you have done the same thing so many times that you do it automatically, without paying attention to how you do it and without any concern. The same thing happens with tests: when you run them manually sprint after sprint, you start doing them on autopilot and may miss details. With automated tests, if they are developed the right way, that never happens.
2. Speed
How long would it take you to test 100-1000 test cases manually? It is good if you have an answer to that question; at least you can measure it. But when you let a machine do the job, it is much faster. The machine doesn't take restroom breaks, drink coffee, or talk with co-workers about how badly your hometown football team is doing.
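To make the speed argument concrete, here is a rough back-of-the-envelope comparison. The per-case times are illustrative assumptions, not measurements from any real project:

```python
# Rough comparison of manual vs. automated regression time.
# Both per-case durations below are assumed for illustration only.
MANUAL_MINUTES_PER_CASE = 5
AUTOMATED_SECONDS_PER_CASE = 10

def regression_hours(cases: int, manual: bool) -> float:
    """Total hours to run a regression suite of `cases` test cases."""
    if manual:
        return cases * MANUAL_MINUTES_PER_CASE / 60
    return cases * AUTOMATED_SECONDS_PER_CASE / 3600

print(round(regression_hours(1000, manual=True), 1))   # 1000 cases by hand
print(round(regression_hours(1000, manual=False), 1))  # same suite automated
```

Under these assumptions, a 1000-case suite drops from roughly 83 hours by hand to under 3 hours of machine time, and the machine can run it overnight.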
3. Multiple environments
Because automated testing is more structured, we can run the same tests on multiple environments. For example, when a developer finishes a new component for a story, it moves through the developer, QA, release, and production environments. While it would take a manual tester a while to run quality tests on each one, automation does it quickly, and without complaining about doing the same thing over and over.
Selective or full tests? Why choose one over the other?
In the selective-testing methodology, an engineer uses experience and knowledge of the software to pick specific areas of concern to test. So why would somebody test selectively? The answer is obvious: it costs less money and is usually faster than full testing. The other side of selective testing is that you have no guarantee that you covered everything, that your software works properly, and that no other module was affected by the changed components.
So how can you be sure that you have 100% coverage and no issues in your web app? The best way to get there is to have automated tests.
Continuous testing. As I mentioned before, you usually have several environments, and to lower the risk of missing a bug in any of them it is good to have automation in each environment. It is like factory workers who receive parts from a co-worker: before they start working on a part, they check the quality of the existing part, since nobody wants to do a job twice without getting paid for it. This model ensures that the environment itself does not affect the software.

But why would you do the same job several times over? Usually many different developers work on different components. Imagine a new component arriving on the dev environment: the ability to identify issues caused by a change in one component lets you, as a QA engineer, figure out faster what caused the issue and whom to assign the bug to.
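One common way to run the same checks in every environment is to parameterize the tests over a list of environments. This is a minimal sketch; the environment names follow the pipeline described above, the URLs are hypothetical placeholders, and `check_login_page` stands in for a real check such as a Selenium page-load assertion:

```python
# Run the same check against every environment in the pipeline.
# Base URLs are hypothetical placeholders for a real deployment.
ENVIRONMENTS = {
    "dev":        "https://dev.example.com",
    "qa":         "https://qa.example.com",
    "release":    "https://release.example.com",
    "production": "https://www.example.com",
}

def check_login_page(base_url: str) -> bool:
    """Stand-in for a real check (e.g. a Selenium assertion)."""
    return base_url.startswith("https://")

def run_everywhere() -> dict:
    """Pass/fail per environment, so a failure pinpoints where it broke."""
    return {env: check_login_page(url) for env, url in ENVIRONMENTS.items()}

results = run_everywhere()
```

Because the result is keyed by environment, a single failing entry tells you immediately which stage of the pipeline introduced the problem.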
Testing can be divided into different types:
Manual - the tester performs the tests by hand each time they are required.
Semi-automation - a QA engineer is involved in the automation process to start and maintain the tests, while a tool such as Selenium WebDriver runs them.
Full automation - every time there is a new push or release, testing starts automatically (for example, Selenium launched by Jenkins and Maven). Testing can also be scheduled, for example to run each night.
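The kind of test a fully automated pipeline runs on every push can be sketched as an ordinary test module that CI invokes (for example, Jenkins running `python -m unittest`). The `apply_discount` function here is a hypothetical stand-in for real application code:

```python
# A minimal test module that a CI job can run unattended on every push.
# `apply_discount` is a hypothetical stand-in for a real component.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Nothing in the module needs a human: CI loads it, runs every test method, and fails the build if any assertion fails, which is exactly what "full automation" means in practice.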
How would I write my automation scripts? Where do I start?
First of all, you have to review the story.
As a tester, you need to understand why the business decided to do something. Are we developing a new feature for existing software, or are we starting a new project from scratch? You need to fully understand why you are doing it.
As a QA engineer, you should ask what will happen if something goes wrong in the story. How would it affect the entire system? Can we prevent the issue if something in the story might go wrong? Do we have enough information?
How can you improve story to minimize failure risk?
Second, you need to develop the tests. But how do you make sure you cover everything, and where do you start?
Usually, before you start writing a test case, you already understand what the story is about and what its core is. To start, picture a fish (or whatever you like). Its main part is the skeleton; without it, the fish cannot hold its parts together. Ask yourself: without what would this particular component be unable to exist? What is it all about? What should it do? What do I, as an end user, expect it to do? Is it simple enough for me as an end user? And if something stops working, will I still be able to get to the point I want to be?
The best practice is to write test cases that are small and independent. Your test cases will be used across different releases and different environments, so the simpler they are, the easier they are to implement. There is another benefit to small and simple test cases: what if, in a couple of months, your company decides to rewrite a component? Will you be able to localize your test case and change just it, or will you need to rewrite the entire framework?
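Small and independent can be shown with a sketch: one test per behaviour of a hypothetical `validate_username` component. If the rules for one behaviour change, only its one test needs editing, and each failure message points at exactly one rule:

```python
# Small, independent test cases: each one checks a single behaviour
# of a hypothetical component, so a rule change touches one test only.
import unittest

def validate_username(name: str) -> bool:
    """Hypothetical component: 3-20 characters, alphanumeric only."""
    return 3 <= len(name) <= 20 and name.isalnum()

class UsernameTests(unittest.TestCase):
    # One assertion per test keeps failures easy to localize.
    def test_accepts_simple_name(self):
        self.assertTrue(validate_username("slav42"))

    def test_rejects_too_short(self):
        self.assertFalse(validate_username("ab"))

    def test_rejects_special_characters(self):
        self.assertFalse(validate_username("slav!"))
```

Contrast this with one big test that asserts all three rules at once: the first failed assertion hides the others, and a rewrite of the component forces you to untangle the whole thing.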
Here is a simple bug priority list:
| Priority | Score | Description |
|----------|-------|-------------|
| Minor | 2.8 | These are "nuisance" bugs: a default setting not being applied, a read-only field showing as editable (or vice versa), a race condition in the UI, a misleading error message, etc. Fix in this release if there are no higher-priority issues; otherwise in the following release. |
| Major | 1.8 | Usually reserved for performance issues. Anything that seriously hampers productivity but doesn't actually prevent work from being done. Fix by the next release. |
| Critical | 0.5 | These may refer to unhandled exceptions or other "serious" bugs that only happen under certain specific conditions (i.e. a practical workaround is available). No hard limit on resolution time, but they should be fixed within the week (hotfix) and must be fixed by the next release. The key distinction between Critical and Blocker is not severity or impact but the existence of a workaround. |
| Blocker | 0.3 | Reserved for catastrophic failures: exceptions, crashes, corrupt data, etc. that (a) prevent somebody from completing their task and (b) have no workaround. These should be extremely rare. They must be fixed immediately (same day) and deployed as hotfixes. |
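The triage rule in the list above, in particular that a workaround is what separates Critical from Blocker, can be sketched as a small function. The boolean field names are hypothetical:

```python
# Sketch of the triage rule from the priority list above: a bug that
# prevents work AND has no workaround is a Blocker; with a workaround
# it is only Critical. Field names are hypothetical.
def triage(prevents_work: bool, has_workaround: bool,
           hampers_productivity: bool) -> str:
    if prevents_work and not has_workaround:
        return "Blocker"   # fix same day, deploy as hotfix
    if prevents_work:
        return "Critical"  # workaround exists; fix within the week
    if hampers_productivity:
        return "Major"     # fix by the next release
    return "Minor"         # fix when no higher-priority work remains

print(triage(prevents_work=True, has_workaround=False,
             hampers_productivity=True))  # prints "Blocker"
```

Note that the same crash is triaged differently depending only on whether a practical workaround exists, which is exactly the distinction the list draws between the two highest priorities.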
After a bug is reported, it is important to have an "informative inspection". Informative inspection means that a bug does not come from nowhere: it should be associated with some story. It is important to understand this and keep records, so related bugs should be linked to the particular story and test case. Then, if there is an update, it is easy for an engineer to go back and make sure that the risky parts function as expected without any issues. Informative inspection lowers risk and lets the tester prevent repeat issues.
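The record-keeping this describes can be sketched as a minimal data structure linking a bug back to its story and test case. All of the IDs here are hypothetical examples:

```python
# Minimal record linking a bug to the story and test case it came from,
# so an engineer can re-check the risky area after any later update.
# All IDs and the summary are hypothetical examples.
from dataclasses import dataclass

@dataclass
class BugReport:
    bug_id: str
    story_id: str      # the story the bug was found in
    test_case_id: str  # the test case that exposed it
    summary: str

bug = BugReport(
    bug_id="BUG-101",
    story_id="STORY-42",
    test_case_id="TC-7",
    summary="Default setting not applied on first load",
)

def related_work(b: BugReport) -> tuple:
    """Everything an engineer should re-inspect when the story changes."""
    return (b.story_id, b.test_case_id)
```

In practice a bug tracker such as Jira stores these links for you; the point is simply that every bug carries pointers back to its story and test case, so nothing "comes from nowhere".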