Wednesday, 1 April 2015

Role of a tester in Defect Prevention

“What is the role of a tester in Defect Prevention and Defect Detection?” In this post we will discuss the role of a tester in these phases: how testers can prevent more defects in the Defect Prevention phase and how they can detect more bugs in the Defect Detection phase.
Role of a tester in defect prevention and defect detection.
Defect prevention – In defect prevention, developers play an important role. In this phase developers carry out activities such as code reviews/static code analysis, unit testing, etc. Testers are also involved in defect prevention by reviewing specification documents. Studying a specification document is an art.
While studying specification documents, testers raise various queries, and many times those queries lead to the requirement document being changed or updated.
Developers often neglect primary ambiguities in specification documents in order to complete the project; or they fail to identify them when they see them. Those ambiguities are then built into the code and represent a bug when compared to the end-user's needs. This is how testers help in defect prevention. 

What is Black Box Testing

Black Box Testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the “legal” inputs and what the expected outputs should be, but not how the program actually arrives at those outputs.

It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. For this kind of testing, test groups are often used: “Test groups are sometimes called professional idiots…people who are good at designing incorrect data.” Also, due to the nature of black box testing, the test planning can begin as soon as the specifications are written. The opposite of this would be glass box testing, where test data are derived from direct examination of the code to be tested. For glass box testing, the test cases cannot be determined until the code has actually been written. Both of these testing techniques have advantages and disadvantages, but when combined, they help to ensure thorough testing of the product.
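
As a minimal sketch (the shipping_cost() function and its pricing rules are hypothetical, not taken from any real product), a black-box test supplies only legal inputs and checks the outputs against the specification, without looking at how the result is computed:

    # Minimal black-box test sketch: the tester knows only the "legal" inputs
    # and the expected outputs from the specification, not the implementation.
    import unittest

    def shipping_cost(weight_kg):
        # Imagine this implementation is hidden from the tester.
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        return 5.0 if weight_kg <= 1 else 5.0 + 2.0 * (weight_kg - 1)

    class ShippingCostBlackBoxTest(unittest.TestCase):
        def test_minimum_charge(self):
            self.assertEqual(shipping_cost(1), 5.0)

        def test_heavier_parcel(self):
            self.assertEqual(shipping_cost(3), 9.0)

        def test_illegal_input_is_rejected(self):
            with self.assertRaises(ValueError):
                shipping_cost(0)

    if __name__ == "__main__":
        unittest.main()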

Waterfall Model

The Waterfall Model is one of the most common methods used in software development and testing. It is called a waterfall method because work flows steadily downwards from step to step, like a waterfall.
The main phases or steps in the waterfall method are:

Conception,
Initiation,
Analysis,
Design,
Construction,
Testing,
Production/Implementation,
Maintenance.
The waterfall method actually originated in the manufacturing and construction industries. Since no software methodologies existed at the time, it was carried over to software development and testing. The main highlight of this method is that one can move to the next step of development only after completing the ongoing step.


Also, developers can go back only one step, that is, to the immediately previous phase. In this method, each phase of the development activity is followed by verification and validation activities. In the waterfall method, the following steps are involved, and you can move on to the next step only when you finish the present one. The phases or steps are:

Software requirement specification
System and software design
Implementation (coding and unit testing)
Integration
Testing and validation
Operation or installation
Maintenance

What is Beta Testing

In this type of testing, the software is distributed as a beta version to users, and the users test the application at their own sites. As the users explore the software, any exception/defect that occurs is reported to the developers. Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company.

The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the general public to increase the feedback from the maximum possible number of future users.


What is Alpha Testing


In this type of testing, the users are invited to the development center, where they use the application while the developers note every particular input or action carried out by the user. Any type of abnormal behavior of the system is noted and rectified by the developers.

Alpha testing is done before beta testing, typically after in-house system testing is complete. Mostly it is done by in-house members of the development and QA teams. In simple words, it is the testing done by the development team just before launching the live beta version of the software.

What is User Acceptance Testing



In this type of testing, the software is handed over to the user in order to find out if the software meets the user expectations and works as it is expected to. In software development, user acceptance testing (UAT) – also called beta testing, application testing, and end user testing – is a phase of software development in which the software is tested in the “real world” by the intended audience.


User Acceptance Testing can be done by in-house testing in which volunteers or paid test subjects use the software or, more typically for widely-distributed software, by making the test version available for downloading and free trial over the Web. The experiences of the early users are forwarded back to the developers who make final changes before releasing the software commercially.

What is Regression Testing



Regression testing is a style of testing that focuses on retesting after changes are made. In traditional regression testing, we reuse the same tests (the regression tests). In risk-oriented regression testing, we test the same areas as before, but we use different (increasingly complex) tests. Traditional regression tests are often partially automated. These notes focus on traditional regression testing.
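
As a small sketch (the tax_for() function and its expected values are hypothetical), a traditional regression suite is simply a fixed set of automated checks that is re-run unchanged after every change to the code:

    # Hypothetical regression suite: the same checks are re-run after every
    # change to tax_for() to catch behavior that used to work and now breaks.
    def tax_for(amount):
        # Imagine this function was recently modified by the dev team.
        return round(amount * 0.18, 2)

    REGRESSION_CASES = [
        (100.00, 18.00),
        (0.00, 0.00),
        (19.99, 3.60),
    ]

    def run_regression_suite():
        failures = []
        for amount, expected in REGRESSION_CASES:
            actual = tax_for(amount)
            if actual != expected:
                failures.append((amount, expected, actual))
        return failures

    if __name__ == "__main__":
        failed = run_regression_suite()
        print("PASS" if not failed else "FAIL: %s" % failed)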


What is Scenario Testing


Scenario tests are realistic, credible and motivating to stakeholders, challenging for the program and easy to evaluate for the tester. They provide meaningful combinations of functions and variables rather than the more artificial combinations you get with domain testing or combinatorial test design.


This test finds issues in our software against practical usage. The end users create the scenarios here. Let us consider an example to get a better idea. Suppose we have developed billing software for a shop. We have completed a lot of testing, there are no bugs in the code and all features are working, which is good. Now we are discussing with our customer, and he describes a scenario: “I have entered and processed a bill for one order, and then my customer wants to change the quantity of material he purchased. I need to issue it as the same bill.” When we try this scenario in our software, we find that our software is not able to edit the generated bill because there is no option for that, so we need to add that facility too. This is only a general example. In simple words, scenario testing is testing against practical situations, and those stories can be given by end customers.
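
As a rough sketch (the Bill class and its methods are hypothetical, purely to illustrate the idea), the customer's billing story can be turned directly into an automated scenario test:

    # Hypothetical scenario test for the billing story described above:
    # generate a bill, then try to change a quantity on that same bill.
    class Bill:
        def __init__(self):
            self.items = {}          # item name -> quantity
            self.finalized = False

        def add_item(self, name, quantity):
            self.items[name] = quantity

        def finalize(self):
            self.finalized = True

        def update_quantity(self, name, quantity):
            # The scenario requires editing an already generated bill.
            self.items[name] = quantity

    def test_edit_quantity_on_generated_bill():
        bill = Bill()
        bill.add_item("cement bag", 10)
        bill.finalize()
        bill.update_quantity("cement bag", 12)   # the customer changed his order
        assert bill.items["cement bag"] == 12

    test_edit_quantity_on_generated_bill()
    print("scenario passed")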


What is Domain Testing

Domain testing is the most frequently described test technique. Some authors write only about domain testing when they write about test design. The basic notion is that you take the huge space of possible tests of an individual variable and subdivide it into subsets that are (in some way) equivalent. Then you test a representative from each subset. This type of testing is also known as equivalence partitioning or boundary analysis.
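
A minimal sketch of the idea (the 18-60 age rule and the is_eligible() function are hypothetical): the input domain is split into equivalence classes, and a representative from each class plus the boundary values are tested:

    # Hypothetical domain test for an age field whose valid range is 18-60.
    def is_eligible(age):
        return 18 <= age <= 60

    # One representative per equivalence class plus the boundary values.
    cases = [
        (17, False),  # just below the lower boundary (invalid class)
        (18, True),   # lower boundary (valid class)
        (35, True),   # representative from the middle of the valid class
        (60, True),   # upper boundary (valid class)
        (61, False),  # just above the upper boundary (invalid class)
    ]

    for age, expected in cases:
        assert is_eligible(age) == expected, "failed for age %d" % age
    print("all domain/boundary cases passed")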





What is Volume Testing


Volume testing is done to check the efficiency of the application. A huge amount of data is processed through the application under test in order to check the extreme limits of the system.

Volume Testing, as its name implies, is testing that purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems can be transaction processing systems capturing real-time sales, or systems performing database updates and/or data retrieval.


Volume testing will seek to verify the physical and logical limits to a system’s capacity and ascertain whether such limits are acceptable to meet the projected capacity of the organization’s business processing.
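
A very small sketch of the idea (import_records() and the record count are hypothetical stand-ins for the real system): push a large volume of records through the processing code and verify that all of them are handled within an acceptable time:

    # Hypothetical volume test: push a large number of records through an
    # import routine and verify that all of them are processed.
    import time

    def import_records(records):
        # Stand-in for the real import/processing code of the system under test.
        return sum(1 for r in records if r is not None)

    RECORD_COUNT = 1_000_000
    records = ({"id": i} for i in range(RECORD_COUNT))

    start = time.time()
    processed = import_records(records)
    elapsed = time.time() - start

    assert processed == RECORD_COUNT
    print("processed %d records in %.1fs" % (processed, elapsed))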


What is Recovery Testing



Recovery testing is basically done in order to check how fast and how well the application can recover from any type of crash, hardware failure, etc. The type or extent of recovery is specified in the requirement specifications. It is basically testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.


What is Smoke Testing


This type of testing is sometimes also called sanity testing, although there are some differences between smoke and sanity testing. It is done in order to check whether the application is ready for further major testing and is working properly without failing below the minimum expected level. The name comes from hardware: a test of new or repaired equipment by turning it on. If it smokes… guess what… it doesn’t work! The term also refers to testing the basic functions of software, and was originally coined in the manufacture of containers and pipes, where smoke was introduced to determine whether there were any leaks.
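
As a minimal sketch (the base URL and the paths below are hypothetical), a software smoke test is just a handful of quick checks that the basic functions respond before any deeper testing begins:

    # Minimal smoke-test sketch: a few quick checks that the build is
    # stable enough for further testing. The URL below is hypothetical.
    import urllib.request

    BASE_URL = "http://qa.example.com"   # hypothetical QA environment

    def check(path):
        try:
            with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
                return resp.status == 200
        except Exception:
            return False

    smoke_checks = ["/", "/login", "/health"]
    failed = [p for p in smoke_checks if not check(p)]
    print("smoke test passed" if not failed else "smoke test FAILED: %s" % failed)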

What is Usability Testing



This testing is also called ‘testing for user-friendliness’. It is done when the user interface of the application is an important consideration and needs to suit a specific type of user.

Usability testing is the process of working with end-users directly and indirectly to assess how the user perceives a software package and how they interact with it. This process will uncover areas of difficulty for users as well as areas of strength.


The goal of usability testing should be to limit and remove difficulties for users and to leverage areas of strength for maximum usability. This testing should ideally involve direct user feedback, indirect feedback (observed behavior), and when possible computer-supported feedback. Computer-supported feedback is often (if not always) left out of this process. It can be as simple as a timer on a dialog to monitor how long it takes users to use the dialog, and counters to determine how often certain conditions occur (i.e. error messages, help messages, etc.). Often this involves trivial modifications to existing software, but can result in tremendous return on investment.
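
A tiny sketch of such computer-supported feedback (the save_file_dialog() function is a hypothetical stand-in for a real UI dialog): wrap the dialog in a timer and count how often help or error messages appear:

    # Hypothetical instrumentation for computer-supported usability feedback:
    # time how long a dialog is open and count error/help message occurrences.
    import time
    from collections import Counter

    usage_counters = Counter()

    def timed_dialog(dialog_func):
        def wrapper(*args, **kwargs):
            start = time.time()
            result = dialog_func(*args, **kwargs)
            usage_counters["dialog_seconds"] += time.time() - start
            return result
        return wrapper

    def record_event(name):
        usage_counters[name] += 1   # e.g. "error_message", "help_opened"

    @timed_dialog
    def save_file_dialog():
        # Stand-in for the real UI dialog being observed.
        time.sleep(0.1)
        record_event("help_opened")

    save_file_dialog()
    print(dict(usage_counters))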

Ultimately, usability testing should result in changes to the delivered product in line with the discoveries made regarding usability. These changes should be directly related to real-world usability by average users. As much as possible, documentation should be written supporting changes so that in the future, similar situations can be handled with ease.


What is Exploratory Testing

This testing is similar to ad-hoc testing and is done in order to learn/explore the application. It is known in short as ET.

Exploratory software testing is a powerful and fun approach to testing. In some situations, it can be orders of magnitude more productive than scripted testing. At least unconsciously, testers perform exploratory testing at one time or another. Yet it doesn’t get much respect in our field. It can be considered as “scientific thinking” in real time.


What is Load Testing



The application is tested against heavy loads or inputs, such as the testing of web sites, in order to find out at what point the website/application fails or at what point its performance degrades. Load testing operates at a predefined load level, usually the highest load that the system can accept while still functioning properly.
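
A minimal load-test sketch (the target URL and the user/request counts are hypothetical): a predefined number of simulated users fire requests concurrently and we count how many succeed at that load level:

    # Minimal load-test sketch: fire a predefined number of concurrent
    # requests at a (hypothetical) URL and report how many succeed.
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET = "http://qa.example.com/search?q=test"   # hypothetical endpoint
    CONCURRENT_USERS = 50
    REQUESTS_PER_USER = 10

    def one_user(_):
        ok = 0
        for _ in range(REQUESTS_PER_USER):
            try:
                with urllib.request.urlopen(TARGET, timeout=10) as resp:
                    ok += resp.status == 200
            except Exception:
                pass
        return ok

    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        successes = sum(pool.map(one_user, range(CONCURRENT_USERS)))

    total = CONCURRENT_USERS * REQUESTS_PER_USER
    print("%d/%d requests succeeded at this load level" % (successes, total))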


What is Stress Testing



The application is tested against heavy load such as complex numerical values, a large number of inputs, a large number of queries, etc., which checks the amount of stress/load the application can withstand. Stress testing deals with the quality of the application in its environment.

The idea is to create an environment more demanding of the application than the application would experience under normal work loads. This is the hardest and most complex category of testing to accomplish and it requires a joint effort from all teams. A test environment is established with many testing stations. At each station, a script is exercising the system. These scripts are usually based on the regression suite. More and more stations are added, all simultaneously hammering on the system, until the system breaks. The system is repaired and the stress test is repeated until a level of stress is reached that is higher than expected to be present at a customer site.


Race conditions and memory leaks are often found under stress testing. A race condition is a conflict between at least two tests. Each test works correctly when done in isolation. When the two tests are run in parallel, one or both of the tests fail. This is usually due to an incorrectly managed lock. A memory leak happens when a test leaves allocated memory behind and does not correctly return the memory to the memory allocation scheme. The test seems to run correctly, but after being exercised several times, available memory is reduced until the system fails.
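
A tiny sketch of the kind of race condition stress testing tends to expose (the shared counter is hypothetical): two threads update shared state without a lock, so interleaved read-modify-write operations can lose updates:

    # Hypothetical race-condition demo of the kind stress testing exposes:
    # two threads increment a shared counter without a lock, so updates
    # can be lost when the read and the write interleave between threads.
    import threading

    counter = 0

    def hammer(iterations):
        global counter
        for _ in range(iterations):
            current = counter          # read
            counter = current + 1      # write (not atomic with the read)

    threads = [threading.Thread(target=hammer, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Expected 200000, but under contention the result is often lower.
    print("counter = %d (expected 200000)" % counter)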


What is Ad Hoc Testing



This type of testing is done without any formal Test Plan or Test Case creation. Ad-hoc testing helps in deciding the scope and duration of the various other types of testing, and it also helps testers learn the application prior to starting any other testing. It is the least formal method of testing.

One of the best uses of ad hoc testing is for discovery. Reading the requirements or specifications (if they exist) rarely gives you a good sense of how a program actually behaves. Even the user documentation may not capture the “look and feel” of a program. Ad hoc testing can find holes in your test strategy, and can expose relationships between subsystems that would otherwise not be apparent. In this way, it serves as a tool for checking the completeness of your testing. Missing cases can be found and added to your testing arsenal. Finding new tests in this way can also be a sign that you should perform root cause analysis.


Ask yourself or your test team, “What other tests of this class should we be running?” Defects found while doing ad hoc testing are often examples of entire classes of forgotten test cases. Another use for ad hoc testing is to determine the priorities for your other testing activities. In our example program, Panorama may allow the user to sort photographs that are being displayed. If ad hoc testing shows this to work well, the formal testing of this feature might be deferred until the problematic areas are completed. On the other hand, if ad hoc testing of this sorting photograph feature uncovers problems, then the formal testing might receive a higher priority.


Test Execution Process

Once the test cases are written, shared with the BAs and Dev team, and reviewed by them, changes (if any) are notified to the QA team and the QA team makes the necessary amendments – the test design phase is complete. However, getting the test cases ready does not mean we can initiate the test run. We need to have the application ready as well, among other things.
Test Execution Guidelines:

Let us now make a list of all the things that are important to understand about the Test Execution phase:
#1. The build (the code written by the dev team is packaged into what is referred to as a build – this is nothing but an installable piece of software (AUT), ready to be deployed to the QA environment) being deployed (in other words, installed and made available) to the QA environment is one of the most important things that needs to happen for the test execution to start.
#2. Test execution happens in the QA environment. To make sure that the dev team’s work on the code is not in the same place where the QA team is testing, the general practice is to have dedicated Dev and QA environments. (There is also a production environment to host the live application.) This is basically to preserve the integrity of the application at various stages of the SDLC. Otherwise, ideally, all 3 environments are identical in nature.
#3. Test team size is not constant from the beginning of the project. When the test plan is initiated the team might have just a Team Lead. During the test design phase, a few testers come on board. Test execution is the phase when the team is at its maximum size.
#4. Test execution also happens in at least 2 cycles (3 in some projects). Typically in each cycle, all the test cases (the entire test suite) will be executed. The objective of the first cycle is to identify any blocking, critical defects, and most of the high defects. The objective of the second cycle is to identify remaining high and medium defects, correct gaps in the scripts and obtain results.
#5. Test execution phase consists of- Executing the test scripts + test script maintenance (correct gaps in the scripts) + reporting (defects, status, metrics, etc.) Therefore, when planning this phase schedules and efforts should be estimated taking into consideration all these aspects and not just the script execution.
#6. After the test scripts are done and the AUT is deployed – and before the test execution begins – there is an intermediary step. This is called the “Test Readiness Review (TRR)”. It is a sort of transitional step that ends the test design phase and eases us into the test execution.
For information on this step and a sample “Test readiness review checklist”, check out this link: Software testing Checklist
#7. In addition to the TRR, there are a few more checks to ensure that we can go ahead with accepting the current build that is deployed in the QA environment for test execution.
Those are the smoke and sanity tests. Detailed information on what these are is at: What is Smoke and Sanity Test?
#8. On the successful completion of TRR, smoke and sanity tests, the test cycle officially begins.
#9. Exploratory Testing would be carried out once the build is ready for testing. The purpose of this test is to make sure critical defects are removed before the next levels of testing can start. This exploratory testing is carried out in the application without any test scripts and documentation. It also helps in getting familiar with the AUT.
#10. Just like with the other phases of the STLC, work is divided among team members in the test execution phase also. The division might be based on module wise or test case count wise or anything else that might make sense.
#11. The primary outcome of the test execution phase is in the form of reports – primarily, defect report and test execution status report. The detailed process for reporting can be found at: Test executions reports
New Columns in Test Cases Document:

The test case document now gets to be expanded with the following two columns – Status and Actual result.
(Note – For live project test execution, we have added and updated these columns with test execution results in the test cases spreadsheet provided for download below)
Status column:
Test execution is nothing but using the test steps on the AUT, supplying the test data (as identified in the test case document) and observing the behavior of the AUT to see whether it satisfies the expected result or not. If the expected result is not met, it can be construed as a defect and the status of the test case becomes “Fail”; if the expected result is met, the status is “Pass”. If the test case cannot be executed for any reason (an existing defect, or the environment not supporting it), the status would be “Blocked”. The status of a test case that is yet to be run can be set to “No run”/unexecuted or can be left empty.
For a test case with multiple steps, if a certain step’s (in the middle of the test case steps) expected result is not met, the test case status can be set to “Fail” right there and the next steps need not be executed.
The status “Fail” can be indicated in red color, if you would like to draw attention to it immediately.
Actual result column:
This is a space where we testers can record what the deviation from the expected result is. When the expected result is met (i.e. a test case whose status is “Pass”) this field can be left empty, because if the expected result is met, the actual result equals the expected result, and rewriting it in the Actual result column would be a repetition and redundancy.
A screenshot of the deviation can be attached in this column for enhanced clarity of what the problem is.
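
As an illustrative sketch (the field names simply mirror the Status and Actual result columns described above), the status of an executed step can be derived by comparing the expected and actual results:

    # Hypothetical record for one executed test step, mirroring the Status
    # and Actual result columns described above.
    def record_result(expected, actual):
        if actual is None:
            return {"status": "Blocked",
                    "actual_result": "Could not execute (defect/environment issue)"}
        if actual == expected:
            return {"status": "Pass", "actual_result": ""}   # left empty when met
        return {"status": "Fail",
                "actual_result": "Expected '%s' but got '%s'" % (expected, actual)}

    print(record_result("Login successful", "Login successful"))
    print(record_result("Login successful", "Error 500 displayed"))
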
Test Execution Results for OrangeHRM Live Project:

Let us now take OrangeHRM and carry out the test execution based on the guidelines listed above. Here are a few points to note:
  • The extended test case template is used.
  • Exploratory testing, as indicated, is to be carried out without test scripts. So please feel free to test the application in parallel as you see fit.
  • Due to the limitations that we have in presenting the live project in the form of readable content, only a limited amount of test cases/functionality of the OrangeHRM application is shown in the sample test execution template. Again, please feel free to work on more for the most practical experience.
  • The sanity and smoke test suites are also added to the document, to give you an idea about what kind of test cases are considered for these stages.
  • Defects are not logged yet, even though the status of some test cases is set to “Fail”. This is because logging the defects is the next most important/commonly worked-on aspect of our life as testers. So we want to deal with it in detail in the next article.
Test Cases with Execution Results:
=> Click here to download the test case execution document.
It Contains – Test cases execution result, smoke tests, sanity tests, exploratory test – spreadsheets
Lastly, if a test management tool was used for creating and maintaining the test cases, the same tool can be used for test execution as well. The use of a tool makes reporting easier, but otherwise the process of running the test cases is the same.


How to Write a Test Plan Document

STLC can be roughly divided into 3 parts:
  1. Test planning
  2. Test Design
  3. Test Execution
In the previous article we have seen that in a practical QA project, we started with the SRS review and Test scenario writing – which is actually the step 2 in the STLC process – the test design, which involves details on what to test and how to test. Why haven’t we started with the Test planning?
Test planning indeed is the first and foremost activity that happens in a testing project.

How test planning takes place at each phase of the SDLC:

SDLC Phase => Test planning activity
Initiate => Ideally QA team should get involved while the scope of the project is gathered from the customer/client in the form of business requirements. But in the real world, that is not the case. From a practical point of view, the involvement of the QA team is NIL. At the end of this phase, BRD is finalized and a basic Project Plan is created.
Define => SRS is created from the BRD. Test plan's initial draft is created. At this point, since the QA team is not done with the SRS review, the scope of testing is not clear. So the TP at this phase will only contain information on when testing is going to happen, project information and the team information (if we have it).
Design => The SRS review is carried out and the scope of testing is identified. We have much more information on what to test and a good estimate of how many test cases we might get etc. A second version of the Test plan is created incorporating all this information.
From the above table it is clear that test plan is not a document that you can create all at once and use it from then on.
Test Plan is a dynamic document. The success of a testing project depends on a well written test plan document that is current at all times. Test Plan is more or less like a blue print of how the testing activity is going to take place in a project.

It has clear information on the following aspects:

Items in a Test Plan Template => What do they contain
Scope => Test scenarios/test objectives that will be validated
Out of scope => Enhanced clarity on what we are not going to cover
Assumptions => All the conditions that need to hold true for us to be able to proceed successfully
Schedules => Test scenario preparation
Test documentation – test cases/test data/setting up the environment
Test execution
Test cycles – how many cycles
Start and end dates for cycles
Roles and Responsibilities => Team members are listed
Who is to do what
Module owners are listed along with their contact info
Deliverables => What documents (test artifacts) are going to be produced and in what time frames
What can be expected from each document
Environment => What kind of environment requirements exist
Who is going to be in charge
What to do in case of problems
Tools => For example: JIRA for bug tracking
Login
How to use JIRA
Defect Management => Who we are going to report the defects to
How we are going to report them
What is expected – do we provide screenshots?
Risks and Risk Management => Risks are listed
Risks are analyzed – likelihood and impact are documented
Risk mitigation plans are drawn up
Exit criteria => When to stop testing
Since all the above information is critical for the day-to-day working of a QA project, it is important to keep the Test Plan document updated at all times.

Here are a few important pointers regarding a test plan:

  1. Test Plan is a document that is the point of reference based on which testing is carried out within the QA team.
  2. It is also a document we share with the Business Analysts, Project Managers, Dev team and the other teams. This is to enhance the level of transparency into the QA team’s working to the external teams.
  3. It is documented by the QA manager/QA lead based on the inputs from the QA team members.
  4. Test Planning is typically allocated 1/3rd of the time it takes for the entire QA engagement. Another 1/3rd is for Test Design and the rest is for Test Execution.
  5. Test plan is not static and is updated on an on demand basis.
  6. The more detailed and comprehensive the Test plan, the more successful the testing activity.

Sample Test Plan Document for OrangeHRM Project

A sample Test plan template document is created for our “ORANGEHRM VERSION 3.0 – MY INFO MODULE” Project and attached below. Please take a look at it. Additional comments have been added to the document in Red to explain the sections. This Test plan is for the Functional as well as the UAT phases. It also explains the Test Management process using the HP ALM tool.

How to Review SRS Document and Create Test Scenarios

Let us now get into a detailed analysis of how an SRS walkthrough happens, what we need to identify from this step, what pre-steps we need to take before we begin, what challenges we could face, etc.
SDLC’s Design Phase:
The next phase in the SDLC is “Design”- this is where the functional requirements are translated into the technical details. The dev, design, environment and data teams are involved in this step. The outcome of this step is typically a Technical Design Document- TDD. The input is the SRS document both for the creation of the TDD and for the QA team to start working on the QA aspect of the project- which is to review the SRS and identify the test objective.

What is an SRS review?

SRS is a document that is created by the development team in collaboration with business analysts and environment/data teams. Typically, this document once finalized will be shared with the QA team via a meeting where a detailed walkthrough is arranged. Sometimes, for an already existing application, we might not need a formal meeting and someone guiding us through this document. We might have the necessary information to do this by ourselves.
SRS review is nothing but going through the functional requirements specification document and trying to understand what the target application is going to be like.
The formal format and a sample were shared with you all in the previous article. It does not necessarily mean that all SRSs are going to be documented that way exactly. Always, form is secondary to the content. Some teams will just choose to write a bulleted list, some teams will include use cases, some teams will include sample screenshots (like the document we had) and some just describe the details in paragraphs.

Pre-steps to software requirements specification review:

Step #1: Documents go through multiple revisions, so make sure we have the right version of the reference document, the SRS.
Step #2: Establish guidelines on what is expected at the end of the review from each team member. In other words, decide on what deliverables are expected from this step – typically, the output of this step is to identify the test scenarios. Test scenarios are nothing but one line pointers of ‘what to test’ for a certain functionality.
Step #3: Also establish guidelines on how this deliverable is to be presented- I mean, the template.
Step #4: Decide on whether each member of the team is going to work on the entire document or divide it among themselves. It is highly recommended that everyone reads everything because that will prevent knowledge concentration with certain team members. But in case of a huge project, with the SRS documents running close to 1000 pages, the approach of breaking up the document module wise and assigning to individual team members is most practical.
Step #5: SRS review also helps in better understanding if there are any specific prerequisites required for the testing of the software.
Step #6: As a byproduct, a list of queries is identified – cases where some functionality is difficult to understand, where more information needs to be incorporated into the functional requirements, or where mistakes have been made in the SRS.

What do we need to get started?

  • The correct version of the SRS document
  • Clear instructions on who is going to work on what and how much time they have
  • A template to create test scenarios
  • Other information on who to contact in case of a question, or who to report to in case of a documentation inconsistency
Who would provide this information?
Team leads are generally responsible for providing all the items listed in the section above. However, team members’ inputs are always important for the success of this entire endeavor.
Team leads often ask- What kind of inputs? Wouldn’t it be better to assign a certain module to someone interested in it than to a team member who is not? Wouldn’t it be nice to decide on the target date based on the team’s opinion than thrust a decision on them? Also, for the success of a project, templates are important. As a general rule, templates have a higher rate of efficiency when they are tailored to the specific team’s convenience and comfort.
It should therefore be noted that, team leads more than anything are team members. Getting your team onboard on the day-to-day decisions is crucial for the smooth running of the project.

Why a template for test scenarios – isn’t it enough if we just make a list?

It sure is. However, software projects are not ‘one-man’ shows. Imagine a team of 4, where each member decides to review one module of the software requirements specification. Team member 1 has made a list on a sheet of paper, team member 2 used an Excel sheet, team member 3 used a notepad and team member 4 used a Word doc. How do we consolidate all the work done for the project at the end of the day?
Also, how can we decide which one is the standard and how can we say what is right and what’s not if we did not create the rules to begin with?
That is what a template is – a set of guidelines and an agreed format for uniformity and concurrence across the entire team.
How to create a template for QA Test scenarios?
Templates don’t have to be complicated or inflexible.
All they need to be is an efficient mechanism to create a useful testing artifact – something simple like the one we see below:
test scenarios template
The header of this template contains the space required to capture basic information about the project, the current document and the reference document.
The table below will let us create the test scenarios. The columns included are:
Column #1: Test scenario ID 
Every entity in our testing process has to be uniquely identifiable. So every test scenario has to be assigned an ID, and the rules to follow while assigning this ID have to be defined. For the sake of this article we are going to follow this naming convention: TS (prefix that stands for Test Scenario), followed by ‘_’, the module name MI (My Info module of the OrangeHRM project), followed by ‘_’ and then the sub-section (e.g. MIM for My Info module, P for photograph and so on), followed by a serial number. An example would be: “TS_MI_MIM_01”.
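
A small sketch of a helper that builds IDs following this naming convention (the helper function itself is hypothetical; only the convention comes from this article):

    # Hypothetical helper that builds test scenario IDs in the form
    # TS_<module>_<subsection>_<serial number>.
    def scenario_id(module, subsection, serial):
        return "TS_%s_%s_%02d" % (module, subsection, serial)

    print(scenario_id("MI", "MIM", 1))   # -> TS_MI_MIM_01
    print(scenario_id("MI", "P", 12))    # -> TS_MI_P_12
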
Column #2: Requirement 
It helps that when we create a test scenario, we are able to map it back to the section of the SRS document from which we picked it. If the requirements have IDs, we can use those. If not, section numbers or even page numbers of the SRS document from which we identified a testable requirement will do.
Column #3: Test scenario description
A one liner specifying ‘what to test’. We would also refer to it as test objective.
Column #4: Importance
This is to give an idea about how important certain functionality is for the AUT. Values like high, medium and low can be assigned to this field. You could also choose a point system, like 1-5, with 5 being the most important and 1 the least important. Whatever values this field can take, they have to be pre-decided.
Column #5: No. of Test cases
A rough estimate of how many individual test cases we might end up writing for that one test scenario. For example, to test the login we include these situations: correct username and password; correct username and wrong password; correct password and wrong username. So validating the login functionality will result in 3 test cases.
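
As a quick sketch (the validate_login() function and the credentials are hypothetical), those three situations translate directly into three data-driven test cases:

    # Hypothetical data-driven sketch of the three login test cases above.
    VALID_USER, VALID_PASS = "admin", "secret"   # assumed test data

    def validate_login(username, password):
        # Stand-in for the real authentication logic of the AUT.
        return username == VALID_USER and password == VALID_PASS

    login_cases = [
        ("admin", "secret", True),    # correct username and password
        ("admin", "wrong", False),    # correct username, wrong password
        ("other", "secret", False),   # wrong username, correct password
    ]

    for user, pwd, expected in login_cases:
        assert validate_login(user, pwd) == expected, (user, pwd)
    print("all 3 login test cases passed")
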
Note: You can expand this template or remove the fields as you see fit.
For example, you can add “Reviewed by” in the header or remove the date of creation etc. Also in the table, you can include a field “Created by” to designate the tester responsible for a certain test scenarios or remove the “No. of Test cases” column. The choice is yours. Go with what works best for the entire team.

Let us now review our Orange HRM SRS Document and create the test scenarios

Tip: check out the table of contents in the SRS sample we provided in the 1st tutorial to get a good idea of how any document is going to flow and how much work it might involve.
Section 1 is the purpose of the document. No testable requirements there.
Section 2.1 – Project Overview- Audience- no testable requirements there either.
Section 2.2 – Hardware and hosting – This section talks about how the Orange HRM site is going to be hosted. Now, is this information important to us testers? The answer is Yes and No. Yes, because when we test we need to have an environment that is similar to the real-time environment, and this gives us an idea of how it needs to be. No, because it is not a testable requirement – it is more of a prerequisite for the testing to happen.
Section 3: There is a login screen here and the details of the type of account we need to have to enter the site. This is a testable requirement. So it needs to be a part of our Test scenarios.
Please see the test scenarios document where test scenarios for a few sections of the SRS have been added. For practice, please add the rest of the scenarios in a similar manner. However, I am going right to section 4 of the document.
Section 4: Aesthetic/HTML Requirements and Guidelines- This section best explains how some requirements might not make sense to the test team at the time of SRS review, but the team should make a note of them as testable requirements all the same. How to test them and if we need specific set up/any team’s help to validate it are details we might not know at this point of time. But making them a part of our testing scope is the first step to ensure that we do not miss them.

Some important observations regarding SRS review:

#1. No information is to be left uncovered.
#2. Perform a feasibility analysis on whether a certain requirement is correct or not and also if it can be tested or not.
#3. Unless a separate performance/security team (or any other specialized test team) exists, it is our job to make sure that all nonfunctional requirements are taken into consideration.
#4. Not all information is targeted at the testers, so it is important to understand what to note and what not to.
#5. The importance and no. of test cases for a test scenario need not be accurate and can be filled in with an approximate value or can be left empty.
To sum up, SRS review results in:
  • Test scenarios list
  • Review results – documentation/requirement errors found by statically going through/verifying the SRS document
  • A list of Questions for better understanding- in case of any
  • Preliminary idea of how the test environment is supposed to be like
  • Test scope identification and a rough idea on how many test cases we might end up having- so how much time we need for documentation and eventually execution.