Sanity Testing follows a narrow and deep approach, with detailed testing of a few limited features. It is mainly non-scripted and is a subset of Regression Testing. It is a type of testing performed to prove that the software application works as per the requirements mentioned in the specification documents, and that it is built according to user needs. It is used after minor changes to a part of the code to check that the application still works as per the user's requirements.
Sanity testing is executed towards the end of the software development life cycle. It helps to check whether newly added functionality is working according to the requirements.
If the newly added functionality is not working according to the requirements, the sanity test fails.
If the newly added functionality is working according to the requirements, the sanity test passes.
Once the sanity test passes, complete system testing is carried out to check that the newly added functionality does not affect the previously existing components of the system or application.
It helps to avoid wasting time.
It helps to save testing cost and effort when a build fails.
In this type of testing, the tester rejects the build outright upon failure.
Why Sanity Testing Matters
Basically, this testing comes into practice when a build with minor fixes arrives and there is no time for a complete regression cycle.
It is executed to check that the defect fixes and changes made to the software have not broken its existing functionality.
It is a narrow and deep approach to testing, and therefore focuses on a limited set of the main features.
How to Adopt Sanity Testing?
Whenever the tester receives a software build with minor fixes in code or functionality, sanity testing is carried out. It is then checked whether the bugs reported in the previous build are fixed, since a regression can be introduced by these fixes. So the main aim of this testing is to check that the planned functionality is working as expected. Executing a sanity test in this way helps avoid doing a whole regression cycle.
The conditions under which this testing has to be adopted are as follows:
Big releases are generally planned and executed in a proper and systematic format. But sometimes small releases are asked to be delivered as soon as possible. In such situations, teams do not get much time to first document the test cases, then execute them, document the bugs, and finally do the regression following the whole process.
For such situations, there are some things to keep in mind, which are explained as follows:
Perform Dev-QA Testing.
Report whatever bugs are found.
If there is a requirement to do Performance, Stress, or Load Testing, have a proper automation framework in place. The reason is that it is nearly impossible to cover these manually within a sanity test.
This is the last and most important part of this testing strategy: while drafting the release email or document, mention the following:
All the executed test cases.
The bugs found, each with a status marker.
Anything left untested, along with the reasons.
Best Practices
This type of testing is executed after new functionality is added to a system.
Adopting this type of testing is useless if no new functionality has been added to the system.
Sanity Testing Tools
Sanity testing is generally not done with the help of any tool. It checks that, after making changes to a part of the software, the software still works as per the requirements. Only the functionality is checked, by simply analyzing its behavior.
Comprehensive Approach
Sanity testing is a subset of regression testing used to perform some basic tests on a build. To execute this testing in an appropriate way, focus the checks on the areas affected by the recent changes.
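As a sketch of how such a focused check might look in practice, the snippet below exercises only the behaviour touched by a recent fix instead of the whole suite. All function names and values here are illustrative, not from any real project:

```python
# Minimal sanity-check sketch: verify only the changed behaviour.

def apply_discount(price, percent):
    """Toy stand-in for the function that was just patched."""
    return round(price * (1 - percent / 100), 2)

def run_sanity_checks():
    """Run a handful of checks around the recent change, nothing more."""
    results = {}
    # Check 1: the reported bug -- 10% off 100.00 must now be 90.00.
    results["discount_fix"] = apply_discount(100.00, 10) == 90.00
    # Check 2: a closely related path -- 0% discount leaves the price unchanged.
    results["no_discount"] = apply_discount(59.99, 0) == 59.99
    return results

if __name__ == "__main__":
    outcome = run_sanity_checks()
    # Reject the build as soon as any sanity check fails.
    print("PASS" if all(outcome.values()) else "FAIL: reject build")
```

If any of these checks fails, the build is rejected without spending time on the remaining test cases.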
Sanity Testing Vs Regression Testing
In short, regression testing verifies the entire application in depth with planned, documented test cases, whereas sanity testing quickly verifies only the affected functionality and is usually unscripted. Sanity testing is, in effect, a small subset of the regression suite run when there is no time for the full cycle.
Strategy For Mobile App Testing
You must be wondering why I am specifically mentioning mobile apps here.
The reason is that the OS and browser versions for web or desktop apps do not vary much, and the screen sizes are fairly standard. But with mobile apps, the screen size, the mobile network, the OS versions, etc., affect the stability, the look, and, in short, the success of the app.
Hence, formulating a strategy becomes critical when you are performing this testing on a mobile app, because one failure can land you in big trouble. The testing must be done smartly, and with caution too.
Following are some pointers to help you perform this testing successfully on a mobile app:
#1) First of all, analyze the impact of the OS version on the implementation with your team.
Try to find answers to questions like: Will the behaviour differ across versions? Will the implementation work on the lowest supported version? Will there be performance issues on certain versions? Is there any specific feature of the OS that might impact the behaviour of the implementation?
#2) On the above note, analyze the phone models too, i.e., are there any features of the phone that will impact the implementation? Does the behaviour of the implementation change with GPS? Does it change with the phone's camera? If you find that there is no impact, avoid testing on different phone models.
#3) Unless there are UI changes in the implementation, I would recommend keeping UI testing at the lowest priority; you can inform the team (if you want) that the UI will not be tested.
#4) To save time, avoid testing on good networks, because it is obvious that the implementation is going to work as expected on a strong network. I would recommend starting with testing on a 3G or 4G network.
#5) This testing is to be done in less time, but make sure that you do at least one field test, unless it is a mere UI change.
#6) If you must test against a matrix of different OSes and their versions, I would suggest doing it in a smart way. For instance, choose the lowest, a medium, and the latest OS version for testing. You can mention in the release document that not every combination was tested.
#7) On a similar line, for a UI implementation sanity test, use small, medium, and large screen sizes to save time. You can also use simulators and emulators.
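The "lowest, medium, latest" selection from pointer #6 can be sketched as a small helper. The Android version list here is purely illustrative:

```python
# Pick representative OS versions instead of testing every combination.

def pick_sanity_targets(versions):
    """Return the lowest, middle, and latest entries of a version list."""
    ordered = sorted(versions)
    if len(ordered) <= 3:
        return ordered
    return [ordered[0], ordered[len(ordered) // 2], ordered[-1]]

# Illustrative list of supported Android major versions.
supported_android = [10, 11, 12, 13, 14]
targets = pick_sanity_targets(supported_android)
print(targets)  # lowest, middle, latest -> [10, 12, 14]
```

The same trick applies to screen sizes in pointer #7: pick one small, one medium, and one large device rather than the full matrix.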
Precautionary Measures
Sanity testing is performed when you are running short of time: it is not possible to run each and every test case, and, most importantly, you are not given enough time to plan your testing. To avoid blame games, it is better to take precautionary measures.
In such cases, a lack of written communication, missing test documentation, and miss-outs are quite common.
To ensure that you don't fall prey to this, make sure that:
Never accept a build for testing until you are given a written requirement shared by the client. It happens that clients communicate changes or new implementations verbally, in chat, or in a one-liner email, and expect us to treat that as a requirement. Compel your client to provide basic functionality points and acceptance criteria.
Always make rough notes of your test cases and bugs if you do not have sufficient time to write them up neatly. Never leave these undocumented. If there is some time, share them with your lead or team, so that if anything is missing they can point it out easily.
If you and your team are short of time, make sure that the bugs are marked with the appropriate state in an email. You can email the complete list of bugs to the team and have the developers mark them appropriately. Always keep the ball in the other's court.
If you have an automation framework ready, use it and avoid manual testing; that way, you can cover more in less time.
Avoid the scenario of a "release in 1 hour" unless you are 100% sure that you will be able to deliver.
Last but not least, as mentioned above, draft a detailed release email communicating what was tested, what was left out and why, the risks, which bugs are resolved, and which are deferred, etc.
As a QA, you should judge which parts of the implementation are the most important to test, and which parts can be left out or given only basic testing.
Even with a short time, plan a strategy for how you want to proceed, and you will be able to achieve the best result in the given time frame.
Smoke Testing
Smoke Testing is not exhaustive testing; it is a group of tests executed to verify that the basic functionalities of a particular build are working fine, as expected. This is, and should always be, the first test done on any 'new' build.
When the development team releases a build to QA for testing, it is obviously not possible to test the entire build and immediately verify whether any of the implementations have bugs or whether any working functionality is broken.
In light of this, how will QA make sure that the basic functionalities are working fine?
The answer is to perform Smoke Testing.
Only once the tests marked as smoke tests (in the test suite) pass is the build accepted by QA for in-depth testing and/or regression. If any of the smoke tests fail, the build is rejected, and the development team needs to fix the issue and release a new build for testing.
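A minimal sketch of this accept/reject gate in plain Python follows; the two toy checks stand in for real smoke tests, and all names are assumptions for illustration:

```python
# Accept/reject gate: run the smoke-tagged tests first; any failure
# rejects the build before deeper testing starts.

def gate_build(smoke_tests):
    """Run (name, test) pairs; return 'accepted' only if every one passes."""
    for name, test in smoke_tests:
        try:
            test()
        except AssertionError:
            return f"rejected: {name} failed"
    return "accepted"

# Two toy smoke tests standing in for real build checks.
def app_starts():
    assert True  # e.g. the main screen loads

def login_works():
    assert "token" in {"token": "abc"}  # e.g. login returns a session token

print(gate_build([("app_starts", app_starts), ("login_works", login_works)]))
```

In a real project, the same effect is usually achieved by tagging tests in the test suite (for example with a "smoke" marker) and running only that tag before anything else.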
Theoretically, a smoke test is defined as surface-level testing used to certify that the build provided by the development team to the QA team is ready for further testing. This testing is also performed by the development team before releasing the build to the QA team.
This testing is normally used in Integration Testing, System Testing, and Acceptance-level Testing. Never treat it as a substitute for actual end-to-end complete testing. It comprises both positive and negative tests, depending on the build implementation.
Smoke Testing Examples
This testing is normally used for Integration, Acceptance, and System Testing.
In my career as a QA, I always accepted a build only after I had performed a smoke test. So, let's understand what a smoke test looks like from the perspective of all three of these levels, with some examples.
#1) Acceptance Testing
Whenever a build is released to QA, a smoke test in the form of acceptance testing should be done.
In this test, the first and most important smoke check is to verify the basic expected functionality of the implementation. In the same way, you should verify all the implementations in that particular build.
Let us take the following Examples as implementations done in a build to understand the smoke tests for those:
Implemented the login functionality to allow the registered drivers to log in successfully.
Implemented the dashboard functionality to show the routes that a driver is to execute today.
Implemented the functionality to show an appropriate message if no routes exist for a given day.
In the above build, at the acceptance level, the smoke test means verifying that the three basic implementations are working fine. If any of the three is broken, then QA should reject the build.
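The three acceptance-level smoke checks above can be sketched as follows. The driver/route data and function names are illustrative stand-ins, not a real application:

```python
# Acceptance-level smoke checks for the three example implementations.
# All data and names below are illustrative.

DRIVERS = {"driver1": "pass123"}
ROUTES = {"driver1": {"2024-05-01": ["Route A", "Route B"]}}

def login(user, password):
    """Toy login: a registered driver with the right password gets in."""
    return DRIVERS.get(user) == password

def dashboard_message(user, day):
    """Toy dashboard: today's routes, or a message when none exist."""
    routes = ROUTES.get(user, {}).get(day, [])
    return routes if routes else "No routes scheduled for today"

# Smoke check 1: a registered driver can log in.
assert login("driver1", "pass123")
# Smoke check 2: the dashboard lists the routes for the day.
assert dashboard_message("driver1", "2024-05-01") == ["Route A", "Route B"]
# Smoke check 3: an appropriate message appears when no routes exist.
assert dashboard_message("driver1", "2024-05-02") == "No routes scheduled for today"
print("smoke checks passed: accept the build")
```

If any one of the three assertions fails, the build is rejected at this stage.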
#2) Integration Testing
This testing is usually done once the individual modules have been implemented and tested. At the integration level, this testing is performed to make sure that all the basic integration and end-to-end functionalities are working fine, as expected.
It may be the integration of two modules or of all modules together; hence, the complexity of the smoke test will vary depending on the level of integration.
Let us consider the following Examples of integration implementation for this testing:
Implemented the integration of route and stops modules.
Implemented the integration of the arrival-status update, reflecting the same on the stops screen.
Implemented the integration of complete pick up till the delivery functionality modules.
In this build, the smoke test will not only verify these three basic implementations; for the third one, a few cases will verify the complete integration too. It helps a lot in finding issues that were introduced during integration and went unnoticed by the development team.
#3) System Testing
As the name itself suggests, at the system level, smoke testing includes tests for the most important and most commonly used workflows of the system. This is done only after the complete system is ready and tested, and this system-level testing can also be referred to as smoke testing before regression testing.
Before starting regression of the complete system, the basic end-to-end features are tested as a part of the smoke test. The smoke test suite for the complete system comprises the end-to-end test cases that end-users are going to use most frequently.
This is usually done with the help of automation tools.
Importance In SCRUM Methodology
Nowadays, projects hardly follow the Waterfall methodology; almost all projects follow Agile and SCRUM. Compared to the traditional Waterfall method, smoke testing is held in high regard in SCRUM and Agile.
I worked for 4 years in SCRUM. As we know, in SCRUM the sprints are of shorter duration, and hence it is extremely important to do this testing, so that failed builds can be reported to the development team and fixed immediately.
Following are some takeaway on the importance of this testing in SCRUM:
Out of a fortnight-long sprint, half the time is allocated to QA, but at times the builds reach QA late.
In sprints, it is best for the team that the issues are reported at an early stage.
Each story has a set of acceptance criteria; hence, testing the first 2-3 acceptance criteria amounts to smoke testing of that functionality. Customers reject the delivery if even a single criterion fails.
Just imagine what will happen if the development team delivered the build 2 days ago, only 3 days remain before the demo, and you come across a basic functionality failure.
On average, a sprint has 5-10 stories; hence, when the build is given, it is important to make sure that each story is implemented as expected before accepting the build into testing.
When the complete system is to be tested and regressed, a dedicated sprint is allocated to the activity. A fortnight may be a little short to test the whole system, so it is very important to verify the most basic functionalities before starting the regression.
Smoke Test Vs Build Acceptance Testing
Smoke Testing is directly related to Build Acceptance Testing (BAT).
In BAT, we do the same testing: verify that the build has not failed and that the system is working fine. Sometimes, issues get introduced when a build is created, and when it is delivered, the build does not work for QA.
I would say that BAT is a part of a smoke check, because if the system is failing, how can you, as a QA, accept the build for testing? Not just the functionalities: the system itself has to work before QA proceeds with in-depth testing.
Smoke Test Cycle
The following flowchart explains the Smoke Testing Cycle.
Once a build is deployed to QA, the basic cycle followed is this: if the smoke test passes, the build is accepted by the QA team for further testing, but if it fails, the build is rejected until the reported issues are fixed.
Who Should Perform the Smoke Test?
The whole team is not involved in this type of testing, to avoid wasting every QA's time.
Smoke testing is ideally performed by the QA lead, who decides, based on the result, whether to pass the build to the team for further testing or to reject it. In the absence of the lead, the QA engineers themselves can also perform this testing.
At times, when the project is a large-scale one, a group of QA engineers can also perform this testing to check for showstoppers. But this is not so in the case of SCRUM, because SCRUM is a flat structure with no leads or managers, and each tester has their own responsibilities towards their stories.
Hence, individual QA engineers perform this testing for the stories that they own.
Why Should We Automate Smoke Tests?
This testing is the first test to be done on a build released by the development team(s). Based on the results of this testing, further testing is done (or the build is rejected).
The best way to do this testing is to use an automation tool and schedule the smoke suite to run whenever a new build is created. You may be thinking, "Why should I automate the smoke testing suite?"
Let us look at the following case:
Let's say that you are a week away from your release, and out of the total 500 test cases, your smoke test suite comprises 80-90. If you start executing all these 80-90 test cases manually, imagine how much time you will take: 4-5 days at a minimum.
But if you use automation and create scripts to run all these 80-90 test cases, then ideally they will run in 2-3 hours, and you will have the results instantly. Doesn't that save your precious time and give you the results about the build much faster?
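The rough arithmetic behind that estimate can be laid out explicitly. The per-case timings below are assumptions chosen to match the figures above, not measurements:

```python
# Back-of-the-envelope comparison of manual vs automated smoke runs.
# All per-case timings are illustrative assumptions.
cases = 85                         # middle of the 80-90 range above
manual_minutes_per_case = 25       # assumed manual execution time
automated_minutes_per_case = 1.8   # assumed scripted execution time

manual_hours = cases * manual_minutes_per_case / 60        # ~35 working hours
automated_hours = cases * automated_minutes_per_case / 60  # ~2.5 hours

print(f"manual: ~{manual_hours / 8:.1f} working days")   # roughly 4-5 days
print(f"automated: ~{automated_hours:.1f} hours")        # roughly 2-3 hours
```

Even with generous assumptions, the automated run finishes the same day, which is what makes it practical to schedule the suite against every new build.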
Advantages And Disadvantages
Let us first take a look at the advantages as it has a lot to offer when compared to its few disadvantages.
Advantages:
Easy to perform.
Reduces the risk.
Defects are identified at a very early stage.
Saves efforts, time and money.
Runs quickly if automated.
Minimal integration risks and issues.
Improves the overall quality of the system.
Disadvantages:
This testing is not equal to or a substitute for complete functional testing.
Even after the smoke test passes, you may find showstopper bugs.
This type of testing is best suited to automation; otherwise, a lot of time is spent manually executing the test cases, especially in large-scale projects having around 700-800 test cases.
Smoke testing should definitely be done on every build, as it points out major failures and showstoppers at a very early stage. This applies not only to new functionality but also to the integration of modules, the fixing of issues, and improvements as well. It is a very simple process to perform and gives a reliable result.
This testing can be treated as the entry point for complete functional testing of a functionality or of the system as a whole. But before that, the QA team should be very clear about which tests are to be run as smoke tests. This testing can minimize effort, save time, and improve the quality of the system. It holds a very important place in sprints, as the time in sprints is short.
This testing can be done manually as well as with the help of automation tools, but the best and preferred way is to use automation tools to save time.
Difference Between Smoke And Sanity Testing
Most of the time, we get confused about the meanings of sanity testing and smoke testing. First of all, these two types of testing are quite "different" and are performed during different stages of a testing cycle.
Their differences are summarized below:
SMOKE TESTING
This testing originated in the hardware practice of powering on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, this testing is a shallow-and-wide approach in which all the areas of the application are tested without going too deep.
A smoke test is scripted, using either a written set of tests or an automated test suite.
A smoke test is designed to touch every part of the application in a cursory way. It is shallow and wide.
This testing is conducted to ensure that the most crucial functions of a program are working, without bothering with the finer details (such as build verification).
This testing is a normal health check-up of the build of an application before taking it into in-depth testing.
SANITY TESTING
A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity Testing is usually narrow and deep.
This test is usually unscripted.
This test is used to determine that a small section of the application is still working after a minor change.
This is cursory testing, performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.
It verifies whether the requirements are still met after a change, checking the affected features in depth.