Educate everyone, especially village youth and the unemployed, and increase their job opportunities in their new lives. This is my small step toward a big success: helping them step into the software industry.
Thursday, 2 April 2015
About ISTQB and Benefits for individuals
About ISTQB
The ISTQB was officially founded as the International Software Testing Qualifications Board in Edinburgh in November 2002. The international recognition of the certification is due to the participation of many countries, as opposed to any country-specific scheme.
ISTQB® has created the world's most successful scheme for certifying software testers.
As of June 2013, ISTQB® has issued over 307,000 certifications in 70 countries worldwide, with a growth rate of approximately 12,000 certifications per quarter.
The main aims, tasks, and responsibilities of the ISTQB are:
To define and maintain all aspects of the ISTQB Certified Tester scheme such as core syllabi, examination structure, regulations, and certification guidelines.
To ensure that each successful participant receives the "ISTQB Certified Tester" certificate (or the local variant with the added "ISTQB compliant" logo).
To promote testing as a profession, increase the number of qualified testers, and develop a common body of understanding and knowledge about testing through its syllabi and terminology.
To approve national boards, monitor their compliance, and expel them where necessary.
Read more about the ISTQB at www.istqb.org.
Some advantages offered by the ISTQB Certified Tester scheme are:
Testers do not need to know English to gain a recognized qualification, and the scheme carries less cultural bias; the certification also allows testers to move across country borders.
Economic benefits accrue to testing-related suppliers such as training providers, consultants, etc. in all participating countries.
European/multinational/international projects can have a common understanding of testing issues.
Benefits for individuals
ISTQB® certified testers:
Gain independently assessed knowledge and skills.
Increase their marketability throughout the industry.
Have greater career opportunities and increased earning potential.
Can add the "ISTQB® Certified Tester" logo and credential to their resumes.
Are recognized as having subscribed to a Code of Ethics.
Benefits for employers
Employers of ISTQB® Certified Testers can take advantage of many benefits, as shown below:
Having certified staff can be a competitive advantage for organizations, which can benefit from the adoption of more structured testing practices and the optimization of test activities derived from the ISTQB® competencies.
For consulting organizations, certified staff can offer high-level services to customers, increasing revenues and brand value.
Adoption of ISTQB® certification schemes in an organization can help in recruiting and retaining highly qualified staff, and can help organizations remain up to date with testing innovations.
Formal recognition for organizations having adopted ISTQB® certifications will be available in the future.
Benefits for training providers
ISTQB® Training providers can:
Access an international market that recognizes ISTQB®.
Distinguish themselves through the independently assessed professionalism of their teachers and the quality and coverage of their training material.
Offer their clients the most up-to-date testing knowledge.
Provide a continuously expanding professional development path to their clients in the field of testing.
Participate in early reviews of the ISTQB® syllabi and in other activities organized by ISTQB®.
Use the ISTQB® accredited training provider logos and credential within their marketing materials.
Mobile application development and testing process
Recently, we have been involved in mobile application development and testing. The checklist below helps ensure that both developers and testers cover these high-level scenarios during their requirements discussions, development, and testing activities. A mobile application development and testing checklist also helps you refine your requirements so that your scope of work is clearly defined. These are high-level questions and not specific to the application's functionality (we will cover that in the next article in the series).
1. Which mobile platform to develop for? iOS or Android?
iOS and Android are the preferred platforms for developing mobile applications. However, BlackBerry is still used by several enterprise users, and a significant population of the developing world still uses Symbian phones. It's good to know upfront which platforms are expected to be supported. This will help you make crucial decisions regarding your architecture and design, and also inform whether you want to develop a completely native application or use a framework like PhoneGap. Popular platforms are listed below:
Android
iOS
Windows Mobile
Blackberry
Symbian
[Charts: smartphone market share and device unit forecasts by mobile OS]
This Forbes article provides details on the current market share and forecast, based on mobile OS.
2. Which version of iOS should I target? What version of Android should I develop for?
Does your application use any features introduced in a specific version of the OS? If so, you will want to mention this in your marketing material and also test the application on the target OS. It is also good practice to prevent the application from being installed on OS versions that are not supported; this avoids users leaving low ratings and negative feedback after installing the application on an unsupported OS. Of course, you will want to test and ensure that your application works on the targeted OS versions.
[Chart: Android and iOS version usage]
The above image shows the usage statistics for different versions of Android and iOS. Google provides updated metrics for the versions of Android in use across all Android devices, along with information on how you can support multiple versions. Apple also provides statistics for iOS version usage, along with a checklist of its own (these stats are from September 2013, before iOS 7 launched).
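On Android, installation on unsupported OS versions is blocked declaratively with minSdkVersion in the app manifest, while optional features can be gated at runtime. A minimal sketch; the immersive-mode feature (introduced in API level 19) is our illustrative choice, not something a particular app requires:

```java
import android.os.Build;

public final class FeatureGate {

    // KitKat (API 19) is when immersive full-screen mode arrived;
    // the feature chosen here is purely for illustration.
    private static final int MIN_API_FOR_IMMERSIVE = Build.VERSION_CODES.KITKAT;

    /** True when the running OS is new enough for the optional feature. */
    public static boolean supportsImmersiveMode() {
        return Build.VERSION.SDK_INT >= MIN_API_FOR_IMMERSIVE;
    }
}
```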
3. Device Hardware Requirements
Does your application have any specific hardware requirements, such as memory, a camera, or CPU capabilities? As mentioned above, it's best to prevent installation on unsupported devices programmatically when possible. There are instructions to check whether a device has sufficient memory/RAM on Android and iOS, and also to check for a camera on iOS and Android.
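On Android, both kinds of checks can be coded against standard framework APIs, as in this minimal sketch (treating the system's low-memory flag as the cutoff is one possible policy, not the only one):

```java
import android.app.ActivityManager;
import android.content.Context;
import android.content.pm.PackageManager;

public final class HardwareChecks {

    /** True if the device reports a rear-facing camera. */
    public static boolean hasCamera(Context context) {
        return context.getPackageManager()
                      .hasSystemFeature(PackageManager.FEATURE_CAMERA);
    }

    /** True if the system is not currently in a low-memory state. */
    public static boolean hasSufficientMemory(Context context) {
        ActivityManager am =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        ActivityManager.MemoryInfo info = new ActivityManager.MemoryInfo();
        am.getMemoryInfo(info);
        return !info.lowMemory;
    }
}
```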
4. Which screen resolution should I target?
Make sure that your application looks good on your target screen resolution. Smartphones and tablets come in all shapes and sizes. A list of devices with screen resolution and display density is available on Wikipedia. Some of the common screen resolutions are below:
320 x 480px
640 x 960px
480 x 800px
720 x 1280px
768 x 1280px
800 x 1280px
1200 x 1920px
2048 x 1536px
Google also provides statistics on the number of devices that have a particular physical screen size and density. Information on pixel count for various screen sizes and how you can support multiple screen sizes is also available.
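On Android, resolution independence comes from density-independent pixels (dp), converted to physical pixels with the platform's formula px = dp x (dpi / 160); DisplayMetrics.density already encodes that dpi/160 scale factor. A small helper, as a sketch:

```java
import android.content.Context;
import android.util.DisplayMetrics;

public final class Dimensions {

    /** Converts density-independent pixels to physical pixels: px = dp * (dpi / 160). */
    public static int dpToPx(Context context, float dp) {
        DisplayMetrics metrics = context.getResources().getDisplayMetrics();
        return Math.round(dp * metrics.density); // metrics.density == dpi / 160
    }
}
```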
5. Should I develop another app for tablets?
It's good practice to use high-quality graphics for large devices like tablets, especially if your application or game is expected to be used on these devices. Some developers release a separate HD version of the application/game instead of using a single package. Irrespective of your implementation approach, it's good practice to test on both phones and tablets if you expect significant usage on both. Google has also released an app quality checklist to help developers deliver quality apps for tablet devices.
6. Portrait or Landscape Orientation?
Some games work only in landscape mode, some applications are designed to work only in portrait mode, and others work in both. Make sure you test your applications to see if there are any issues when changing the orientation, such as crashes or UI bugs.
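Orientation changes are also easy to automate. As one example (our tooling choice, not the article's), here is a minimal sketch using the Appium Java client, assuming an Appium server is running locally; the device name and APK path are placeholders:

```java
import java.net.URL;

import org.openqa.selenium.ScreenOrientation;
import org.openqa.selenium.remote.DesiredCapabilities;

import io.appium.java_client.android.AndroidDriver;

public class OrientationCheck {

    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "test-device"); // placeholder device name
        caps.setCapability("app", "/path/to/app.apk");   // placeholder APK path

        AndroidDriver driver =
                new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
        try {
            // Flip the orientation both ways; a crash surfaces as a thrown exception.
            driver.rotate(ScreenOrientation.LANDSCAPE);
            driver.rotate(ScreenOrientation.PORTRAIT);
        } finally {
            driver.quit();
        }
    }
}
```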
7. Testing GPS functionality, Accelerometer, Hardware keys
If your application requires use of the following hardware features, your test cases also need to cover scenarios in which they are not available:
Hardware keys – e.g., a camera application using a dedicated camera button, task/event manager applications using hardware buttons to snooze a reminder, or media players using the volume and other keys. Some applications also use the power button to provide additional functionality or shortcuts to application behavior.
Accelerometer – Applications that make use of the accelerometer require testing to ensure that the readings are recorded accurately and utilized correctly within the application. This test case might be relevant to applications like star maps, pedometers, jump trackers, games, and 3D visualization applications.
GPS – How will your navigation application respond if the GPS is disabled or turned off abruptly during operation?
Any other sensor – If your application depends on additional sensors for temperature or luminosity, or on any accessory that provides additional functionality, then you need to ensure that you have tested the conditions in which they are not available or do not function accurately.
8. Network Connectivity Issues – GPRS, 2G, 3G, WiFi, Intermittent connectivity, No connectivity
Most applications are developed in the presence of WiFi, which provides good network connectivity. However, it's important to test applications in the real world, where the user might not have access to WiFi. Usually, when people are on the move, network connectivity is intermittent, with the connection being dropped once in a while. Network speeds also vary based on the user's location and the kind of connectivity they are paying for. Applications must handle these situations gracefully, and they must be tested for it.
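On Android, one defensive pattern is to check connectivity before each network call and degrade gracefully (cached data, a retry prompt) when offline. A minimal sketch using the standard ConnectivityManager API:

```java
import android.content.Context;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;

public final class NetworkUtil {

    /** True only when an active network exists and is actually connected. */
    public static boolean isOnline(Context context) {
        ConnectivityManager cm =
                (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo active = cm.getActiveNetworkInfo();
        return active != null && active.isConnected();
    }
}
```

Testers can exercise the offline branch by toggling airplane mode or moving out of WiFi range mid-operation.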
9. Test Mobile + web app updates
Does your mobile application have a server side component or a web service it uses? Does the mobile application need an update when the server side component is updated? If so, make sure there is a test case to track this to avoid any human error.
10. Testing interruptions to the mobile app
There are various events that can interrupt the flow of your application. Your application should be able to handle these gracefully and should be tested for each of them; a state-saving sketch follows the list below.
Incoming Call
Text message
Other app notifications
Storage low
Battery low
Battery dead
No storage
Airplane mode
Intermittent connectivity
Home screen jump
Sleep mode
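On Android, many of these interruptions pause or even evict the foreground activity, so one basic defense is to persist in-progress state in the lifecycle callbacks. A minimal sketch, with a hypothetical draft-text field standing in for whatever the user was doing:

```java
import android.app.Activity;
import android.os.Bundle;

public class DraftActivity extends Activity {

    private static final String KEY_DRAFT = "draft_text"; // hypothetical state key
    private String draftText = "";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Restore in-progress input after the interruption, if any was saved.
        if (savedInstanceState != null) {
            draftText = savedInstanceState.getString(KEY_DRAFT, "");
        }
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        // Called before an incoming call, home press, etc. may evict the activity:
        // persist the user's unsent input so nothing is lost.
        outState.putString(KEY_DRAFT, draftText);
    }
}
```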
11. Mobile application security testing
Security and data privacy are of the utmost importance today. Users are worried about their data and credentials being exposed through vulnerable applications. Questions to consider include:
Is your application storing payment information or credit card details?
Does your application use secure network protocols?
Can they be switched to insecure ones?
Does the application ask for more permissions than it needs?
Does your application use certificates?
Does your application use a Device ID as an identifier?
Does your application require a user to be authenticated before they are allowed to access their data?
Is there a maximum number of login attempts before they are locked out?
Applications should encrypt usernames and passwords when authenticating the user over a network. One way to test security-related scenarios is to route your mobile device's traffic through a proxy server like OWASP Zed Attack Proxy and look for vulnerabilities.
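As a sketch of the encryption point above, the snippet below submits credentials only over HTTPS, so TLS encrypts them in transit; the endpoint URL is hypothetical:

```java
import java.io.OutputStream;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

import javax.net.ssl.HttpsURLConnection;

public final class LoginClient {

    /** Sends credentials only over TLS; plain HTTP would expose them to an intercepting proxy. */
    public static int login(String user, String password) throws Exception {
        URL url = new URL("https://example.com/api/login"); // hypothetical endpoint
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        String body = "user=" + URLEncoder.encode(user, "UTF-8")
                    + "&password=" + URLEncoder.encode(password, "UTF-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode(); // e.g., 200 on success, 401 after a lockout
    }
}
```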
12. Testing In-app payment, advertisements and payment gateway integrations
If your app makes use of in-app payments, advertisements, or payment gateways for e-commerce transactions, you will need to test the functionality end to end to ensure that there are no issues in the transactions. Testing payment gateway and advertisement integrations requires accounts to be created with the payment gateways and advertisement servers before testing can begin.
13. Mobile application performance testing
Have you checked whether the performance of your mobile application degrades as the size of the mailbox, album, message store, music library, or any other content relevant to the application grows?
It's good practice to test your application for performance and scalability issues. With large storage capacity available at affordable prices, it's not uncommon for users to have large amounts of data and content on their smartphones; users even keep SMS messages for several years. If your application has user-generated content or data associated with it (e.g., photographs, SMS) that can grow to huge proportions over the lifetime of the application, your testing should include these scenarios to see how the application performs. If the application has a server-side component, you should also test the application with an increasing number of users. While this testing can be done manually, tools like Little Eye and NeoLoad can help with performance and load testing of your mobile application.
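Even before reaching for those tools, a crude probe helps: grow the data set in steps and time the operation under test, watching for super-linear slowdowns. A minimal plain-Java sketch, where the string search is just a stand-in for your app's real query path:

```java
import java.util.ArrayList;
import java.util.List;

public class ScalingProbe {

    /** Times a simple search as the content store grows, to spot super-linear slowdowns. */
    public static void main(String[] args) {
        List<String> messages = new ArrayList<>();
        for (int size = 1_000; size <= 100_000; size *= 10) {
            while (messages.size() < size) {
                messages.add("message #" + messages.size());
            }
            long start = System.nanoTime();
            int hits = 0;
            for (String m : messages) {
                if (m.contains("#999")) hits++; // stand-in for the app's real query
            }
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.println(size + " items -> " + micros + " us (" + hits + " hits)");
        }
    }
}
```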
14. Mobile Application Localization and Timezone issues
If your application is multilingual, it needs to be tested in other languages to ensure that there are no character encoding issues, data truncation issues, or UI issues due to varying character lengths. You also need to test applications to ensure that they handle time zone changes. What happens if a user travels forward across time zones and then returns to his/her previous time zone? How does your app handle entries whose dates and times are in sequence but not in chronological order?
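One common way to avoid such ordering bugs is to store timestamps as UTC epoch milliseconds and convert to the user's time zone only at display time. A minimal sketch:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public final class Timestamps {

    /** Epoch milliseconds are timezone-independent, so entries always sort correctly. */
    public static long nowUtcMillis() {
        return System.currentTimeMillis();
    }

    /** Format only at display time, in whatever zone the user is currently in. */
    public static String display(long utcMillis, TimeZone userZone) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm");
        fmt.setTimeZone(userZone);
        return fmt.format(new Date(utcMillis));
    }
}
```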
15. Testing Social network integration
Many applications these days ship with the ability to share a post from the application on the user's social networking account. However, most users would like to be prompted before a post is published on their account. Does your application handle this? Are users allowed to review and edit the status message before it is shared?
16. Test hardware connectivity – Bluetooth, WiFi, NFC, USB – Device recognition
Smartphones come with a plethora of connectivity options. If your application makes use of the connectivity options below (e.g., file managers or photo editors that let you share files, or AirDroid, which allows you to transfer files between a PC and your mobile over WiFi), then you should test them to ensure they work as expected. You should also test how they handle errors when the connection is lost during a transfer or transaction. Commonly used mechanisms to share data or transact are:
Bluetooth
WiFi
USB
NFC
17. Google Play / Apple App store integration and supported device list/restrictions
Consultants and organizations that provide end-to-end services should also include test cases to ensure that the mobile app is successfully deployed to the App Store / Play Store and is available only to the supported devices. This could also include validation of all the text, screenshots, version numbers, etc. that are part of the app listing.
In the next article in this series we will cover test cases related to mobile application functionality.
If you found this post useful, please share it with your friends / colleagues.
What is Prototype model- advantages, disadvantages and when to use it?
The basic idea here is that, instead of freezing the requirements before design or coding can proceed, a throwaway prototype is built to understand the requirements. This prototype is developed based on the currently known requirements. By using this prototype, the client can get an "actual feel" of the system, since interactions with the prototype can enable the client to better understand the requirements of the desired system. Prototyping is an attractive idea for complicated and large systems for which there is no manual process or existing system to help determine the requirements. Prototypes are usually not complete systems, and many of the details are not built into them. The goal is to provide a system with the overall functionality.
Diagram of Prototype model:
[Image: Prototype model diagram]
Advantages of Prototype model:
Users are actively involved in the development.
Since in this methodology a working model of the system is provided, the users get a better understanding of the system being developed.
Errors can be detected much earlier.
Quicker user feedback is available, leading to better solutions.
Missing functionality can be identified easily.
Confusing or difficult functions can be identified.
Requirements validation: quick implementation of an incomplete but functional application.
Disadvantages of Prototype model:
Leads to an "implement first, repair later" way of building systems.
Practically, this methodology may increase the complexity of the system as scope of the system may expand beyond original plans.
The incomplete application may not be used in the way the full system was designed to be used.
May encourage incomplete or inadequate problem analysis.
When to use Prototype model:
Prototype model should be used when the desired system needs to have a lot of interaction with the end users.
Typically, online systems and web interfaces, which have a very high amount of interaction with end users, are best suited for the prototype model. It might take a while to build a system that allows ease of use and needs minimal training for the end user.
Prototyping ensures that the end users constantly work with the system and provide feedback, which is incorporated into the prototype, resulting in a usable system. Prototypes are excellent for designing good human-computer interface systems.
Spiral model- advantages, disadvantages and when to use it?
The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: planning, risk analysis, engineering, and evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
Planning Phase: Requirements such as the BRS (Business Requirement Specification) and the SRS (System Requirement Specification) are gathered during this phase.
Risk Analysis Phase: A process is undertaken to identify risks and alternate solutions, and a prototype is produced at the end of the phase. If any risk is found during the risk analysis, alternate solutions are suggested and implemented.
Engineering Phase: The software is developed in this phase, with testing at its end; hence both development and testing are done here.
Evaluation Phase: This phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.
Diagram of Spiral model:
[Image: Spiral model diagram]
Advantages of Spiral model:
The high amount of risk analysis enhances risk avoidance.
Good for large and mission-critical projects.
Strong approval and documentation control.
Additional Functionality can be added at a later date.
Software is produced early in the software life cycle.
Disadvantages of Spiral model:
Can be a costly model to use.
Risk analysis requires highly specific expertise.
Project’s success is highly dependent on the risk analysis phase.
Doesn’t work well for smaller projects.
When to use Spiral model:
When cost and risk evaluation is important
For medium- to high-risk projects
When long-term project commitment is unwise because of potential changes to economic priorities
When users are unsure of their needs
When requirements are complex
For a new product line
When significant changes are expected (research and exploration)
Agile model – advantages, disadvantages and when to use it?
The agile development model is also a type of incremental model. Software is developed in incremental, rapid cycles. This results in small incremental releases, with each release building on previous functionality. Each release is thoroughly tested to ensure software quality is maintained. The model is used for time-critical applications. Extreme Programming (XP) is currently one of the best-known agile development life cycle models.
Diagram of Agile model:
[Image: Agile model diagram]
Advantages of Agile model:
Customer satisfaction by rapid, continuous delivery of useful software.
People and interactions are emphasized rather than process and tools. Customers, developers and testers constantly interact with each other.
Working software is delivered frequently (weeks rather than months).
Face-to-face conversation is the best form of communication.
Close, daily cooperation between business people and developers.
Continuous attention to technical excellence and good design.
Regular adaptation to changing circumstances.
Even late changes in requirements are welcomed.
Disadvantages of Agile model:
In case of some software deliverables, especially the large ones, it is difficult to assess the effort required at the beginning of the software development life cycle.
There is a lack of emphasis on necessary design and documentation.
The project can easily get taken off track if the customer representative is not clear about the final outcome they want.
Only senior programmers are capable of taking the kinds of decisions required during the development process, so the model leaves little room for newbie programmers unless they are combined with experienced resources.
When to use Agile model:
When new changes need to be implemented: the freedom agile gives to change is very important. New changes can be implemented at very little cost because of the frequency of the new increments that are produced.
To implement a new feature, the developers need to lose only a few days' work, or even only hours, to roll back and implement it.
Unlike the waterfall model, in the agile model very limited planning is required to get started with the project. Agile assumes that end users' needs are ever-changing in a dynamic business and IT world. Changes can be discussed, and features can be added or removed based on feedback. This effectively gives the customer the finished system they want or need.
System developers and stakeholders alike find they also get more freedom of time and options than if the software were developed in a more rigid, sequential way. Having options gives them the ability to leave important decisions until more or better data, or even entire hosting programs, are available, meaning the project can continue to move forward without fear of reaching a sudden standstill.
What is Capability Maturity Model (CMM)? What are CMM Levels?
The Capability Maturity Model (CMM) is a benchmark for measuring the maturity of an organization's software process. It is a methodology used to develop and refine an organization's software development process. CMM can be used to assess an organization against a scale of five process maturity levels based on certain Key Process Areas (KPAs). It describes the maturity of the company based upon the projects the company is dealing with and its clients. Each level ranks the organization according to its standardization of processes in the subject area being assessed.
A maturity model provides:
A place to start
The benefit of a community’s prior experiences
A common language and a shared vision
A framework for prioritizing actions
A way to define what improvement means for your organization
In CMMI models with a staged representation, there are five maturity levels designated by the numbers 1 through 5 as shown below:
Initial
Managed
Defined
Quantitatively Managed
Optimizing
[Image: CMM level diagram showing the characteristics of the maturity levels]
Maturity levels consist of a predefined set of process areas. The maturity levels are measured by the achievement of the specific and generic goals that apply to each predefined set of process areas. The following sections describe the characteristics of each maturity level in detail.
Maturity Level 1 – Initial: The company has no standard process for software development, nor does it have a project-tracking system that enables developers to predict costs or finish dates with any accuracy.
In detail we can describe it as given below:
At maturity level 1, processes are usually ad hoc and chaotic.
The organization usually does not provide a stable environment. Success in these organizations depends on the competence and heroics of the people in the organization and not on the use of proven processes.
Maturity level 1 organizations often produce products and services that work, but the company has no standard process for software development, nor a project-tracking system that enables developers to predict costs or finish dates with any accuracy.
Maturity level 1 organizations are characterized by a tendency to overcommit, to abandon processes in times of crisis, and to be unable to repeat their past successes.
Maturity Level 2 – Managed: The company has installed basic software management processes and controls, but there is no consistency or coordination among different groups.
In detail we can describe it as given below:
At maturity level 2, an organization has achieved all the specific and generic goals of the maturity level 2 process areas. In other words, the projects of the organization have ensured that requirements are managed and that processes are planned, performed, measured, and controlled.
The process discipline reflected by maturity level 2 helps to ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans.
At maturity level 2, requirements, processes, work products, and services are managed. The status of the work products and the delivery of services are visible to management at defined points.
Commitments are established among relevant stakeholders and are revised as needed. Work products are reviewed with stakeholders and are controlled.
The work products and services satisfy their specified requirements, standards, and objectives.
Maturity Level 3 – Defined: The company has pulled together a standard set of processes and controls for the entire organization so that developers can move between projects more easily and customers can begin to get consistency from different groups.
In detail we can describe it as given below:
At maturity level 3, an organization has achieved all the specific and generic goals.
At maturity level 3, processes are well characterized and understood, and are described in standards, procedures, tools, and methods.
A critical distinction between maturity level 2 and maturity level 3 is the scope of standards, process descriptions, and procedures. At maturity level 2, the standards, process descriptions, and procedures may be quite different in each specific instance of the process (for example, on a particular project). At maturity level 3, the standards, process descriptions, and procedures for a project are tailored from the organization’s set of standard processes to suit a particular project or organizational unit.
The organization’s set of standard processes includes the processes addressed at maturity level 2 and maturity level 3. As a result, the processes that are performed across the organization are consistent except for the differences allowed by the tailoring guidelines.
Another critical distinction is that at maturity level 3, processes are typically described in more detail and more rigorously than at maturity level 2.
At maturity level 3, processes are managed more proactively using an understanding of the interrelationships of the process activities and detailed measures of the process, its work products, and its services.
Maturity Level 4 – Quantitatively Managed: In addition to implementing standard processes, the company has installed systems to measure the quality of those processes across all projects.
In detail we can describe it as given below:
At maturity level 4, an organization has achieved all the specific goals of the process areas assigned to maturity levels 2, 3, and 4 and the generic goals assigned to maturity levels 2 and 3.
At maturity level 4, sub-processes are selected that contribute significantly to overall process performance. These selected sub-processes are controlled using statistical and other quantitative techniques.
Quantitative objectives for quality and process performance are established and used as criteria in managing processes. Quantitative objectives are based on the needs of the customer, end users, organization, and process implementers. Quality and process performance are understood in statistical terms and are managed throughout the life of the processes.
For these processes, detailed measures of process performance are collected and statistically analyzed. Special causes of process variation are identified and, where appropriate, the sources of special causes are corrected to prevent future occurrences.
Quality and process performance measures are incorporated into the organization's measurement repository to support fact-based decision making in the future.
A critical distinction between maturity level 3 and maturity level 4 is the predictability of process performance. At maturity level 4, the performance of processes is controlled using statistical and other quantitative techniques, and is quantitatively predictable. At maturity level 3, processes are only qualitatively predictable.
Maturity Level 5 – Optimizing: The company has accomplished all of the above and can now begin to see patterns in its performance over time, so it can tweak its processes to improve productivity and reduce defects in software development across the entire organization.
In detail we can describe it as given below:
At maturity level 5, an organization has achieved all the specific goals of the process areas assigned to maturity levels 2, 3, 4, and 5 and the generic goals assigned to maturity levels 2 and 3.
Processes are continually improved based on a quantitative understanding of the common causes of variation inherent in processes.
Maturity level 5 focuses on continually improving process performance through both incremental and innovative technological improvements.
Quantitative process-improvement objectives for the organization are established, continually revised to reflect changing business objectives, and used as criteria in managing process improvement.
The effects of deployed process improvements are measured and evaluated against the quantitative process-improvement objectives. Both the defined processes and the organization’s set of standard processes are targets of measurable improvement activities.
Optimizing processes that are agile and innovative depends on the participation of an empowered workforce aligned with the business values and objectives of the organization.
The organization’s ability to rapidly respond to changes and opportunities is enhanced by finding ways to accelerate and share learning. Improvement of the processes is inherently part of everybody’s role, resulting in a cycle of continual improvement.
A critical distinction between maturity level 4 and maturity level 5 is the type of process variation addressed. At maturity level 4, processes are concerned with addressing special causes of process variation and providing statistical predictability of the results. Though processes may produce predictable results, the results may be insufficient to achieve the established objectives. At maturity level 5, processes are concerned with addressing common causes of process variation and changing the process (that is, shifting the mean of the process performance) to improve process performance (while maintaining statistical predictability) to achieve the established quantitative process-improvement objectives.
A maturity model provides:
A place to start
The benefit of a community’s prior experiences
A common language and a shared vision
A framework for prioritizing actions
A way to define what improvement means for your organization
In CMMI models with a staged representation, there are five maturity levels designated by the numbers 1 through 5 as shown below:
Initial
Managed
Defined
Quantitatively Managed
Optimizing
CMM level diagram - Characteristics of maturity levelsMaturity levels consist of a predefined set of process areas. The maturity levels are measured by the achievement of the specific and generic goals that apply to each predefined set of process areas. The following sections describe the characteristics of each maturity level in detail.
Maturity Level 1 – Initial: Company has no standard process for software development. Nor does it have a project-tracking system that enables developers to predict costs or finish dates with any accuracy.
In detail we can describe it as given below:
At maturity level 1, processes are usually ad hoc and chaotic.
The organization usually does not provide a stable environment. Success in these organizations depends on the competence and heroics of the people in the organization and not on the use of proven processes.
Maturity level 1 organizations often produce products and services that work but company has no standard process for software development. Nor does it have a project-tracking system that enables developers to predict costs or finish dates with any accuracy.
Maturity level 1 organizations are characterized by a tendency to over commit, abandon processes in the time of crisis, and not be able to repeat their past successes.
Maturity Level 2 – Managed: Company has installed basic software management processes and controls. But there is no consistency or coordination among different groups.
In detail we can describe it as given below:
At maturity level 2, an organization has achieved all the specific and generic goals of the maturity level 2 process areas. In other words, the projects of the organization have ensured that requirements are managed and that processes are planned, performed, measured, and controlled.
The process discipline reflected by maturity level 2 helps to ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans.
At maturity level 2, requirements, processes, work products, and services are managed. The status of the work products and the delivery of services are visible to management at defined points.
Commitments are established among relevant stakeholders and are revised as needed. Work products are reviewed with stakeholders and are controlled.
The work products and services satisfy their specified requirements, standards, and objectives.
Maturity Level 3 – Defined: Company has pulled together a standard set of processes and controls for the entire organization so that developers can move between projects more easily and customers can begin to get consistency from different groups.
In detail we can describe it as given below:
At maturity level 3, an organization has achieved all the specific and generic goals.
At maturity level 3, processes are well characterized and understood, and are described in standards, procedures, tools, and methods.
A critical distinction between maturity level 2 and maturity level 3 is the scope of standards, process descriptions, and procedures. At maturity level 2, the standards, process descriptions, and procedures may be quite different in each specific instance of the process (for example, on a particular project). At maturity level 3, the standards, process descriptions, and procedures for a project are tailored from the organization’s set of standard processes to suit a particular project or organizational unit.
The organization’s set of standard processes includes the processes addressed at maturity level 2 and maturity level 3. As a result, the processes that are performed across the organization are consistent except for the differences allowed by the tailoring guidelines.
Another critical distinction is that at maturity level 3, processes are typically described in more detail and more rigorously than at maturity level 2.
At maturity level 3, processes are managed more proactively using an understanding of the interrelationships of the process activities and detailed measures of the process, its work products, and its services.
Maturity Level 4 – Quantitatively Managed: In addition to implementing standard processes, company has installed systems to measure the quality of those processes across all projects.
In detail we can describe it as given below:
At maturity level 4, an organization has achieved all the specific goals of the process areas assigned to maturity levels 2, 3, and 4 and the generic goals assigned to maturity levels 2 and 3.
At maturity level 4 Sub-processes are selected that significantly contribute to overall process performance. These selected sub-processes are controlled using statistical and other quantitative techniques.
Quantitative objectives for quality and process performance are established and used as criteria in managing processes. Quantitative objectives are based on the needs of the customer, end users, organization, and process implementers. Quality and process performance are understood in statistical terms and are managed throughout the life of the processes.
For these processes, detailed measures of process performance are collected and statistically analyzed. Special causes of process variation are identified and, where appropriate, the sources of special causes are corrected to prevent future occurrences.
Quality and process performance measures are incorporated into the organization's measurement repository to support fact-based decision making in the future.
A critical distinction between maturity level 3 and maturity level 4 is the predictability of process performance. At maturity level 4, the performance of processes is controlled using statistical and other quantitative techniques, and is quantitatively predictable. At maturity level 3, processes are only qualitatively predictable.
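To make the maturity level 4 idea concrete, below is a minimal sketch in Python of the kind of statistical control described above: deriving a process mean and three-sigma control limits from baseline measurements of a sub-process metric, then flagging out-of-control points as candidate special causes. The metric (defect-fix turnaround time) and all the numbers are illustrative assumptions, not taken from CMMI.

from statistics import mean, stdev

# Baseline measurements (hours to fix a defect) used to derive control limits.
baseline = [14.2, 15.1, 13.8, 14.9, 15.3, 14.4, 14.6, 15.0, 13.9, 14.7]
center = mean(baseline)                 # process mean
sigma = stdev(baseline)                 # sample standard deviation
ucl, lcl = center + 3 * sigma, center - 3 * sigma   # control limits

# New measurements are checked against the established limits; a point
# outside the limits suggests a special cause worth investigating.
for i, value in enumerate([14.8, 15.2, 22.7, 14.5]):
    status = "OUT OF CONTROL" if not lcl <= value <= ucl else "ok"
    print(f"sample {i}: {value:5.1f}h  {status}")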
Maturity Level 5 – Optimizing: Company has accomplished all of the above and can now begin to see patterns in performance over time, so it can tweak its processes in order to improve productivity and reduce defects in software development across the entire organization.
In detail we can describe it as given below:
At maturity level 5, an organization has achieved all the specific goals of the process areas assigned to maturity levels 2, 3, 4, and 5 and the generic goals assigned to maturity levels 2 and 3.
Processes are continually improved based on a quantitative understanding of the common causes of variation inherent in processes.
Maturity level 5 focuses on continually improving process performance through both incremental and innovative technological improvements.
Quantitative process-improvement objectives for the organization are established, continually revised to reflect changing business objectives, and used as criteria in managing process improvement.
The effects of deployed process improvements are measured and evaluated against the quantitative process-improvement objectives. Both the defined processes and the organization’s set of standard processes are targets of measurable improvement activities.
Optimizing processes that are agile and innovative depends on the participation of an empowered workforce aligned with the business values and objectives of the organization.
The organization’s ability to rapidly respond to changes and opportunities is enhanced by finding ways to accelerate and share learning. Improvement of the processes is inherently part of everybody’s role, resulting in a cycle of continual improvement.
A critical distinction between maturity level 4 and maturity level 5 is the type of process variation addressed. At maturity level 4, processes are concerned with addressing special causes of process variation and providing statistical predictability of the results. Though processes may produce predictable results, the results may be insufficient to achieve the established objectives. At maturity level 5, processes are concerned with addressing common causes of process variation and changing the process (that is, shifting the mean of the process performance) to improve process performance (while maintaining statistical predictability) to achieve the established quantitative process-improvement objectives.
What is independent testing? Its benefits and risks
A degree of independence avoids author bias and often makes testing more effective at finding defects and failures.
There are several levels of independence, listed here from the lowest level of independence to the highest:
i. Tests by the person who wrote the item.
ii. Tests by another person within the same team, such as another programmer.
iii. Tests by a person from a different group, such as an independent test team.
iv. Tests by a person from a different organization or company, such as outsourced testing or certification by an external body.
When we think about how independent the test team is, it is important to understand that independence is not an either/or condition, but a range:
At one end of the range lies the absence of independence, where the programmer performs testing within the programming team.
Moving toward independence, we find an integrated tester or group of testers working alongside the programmers, but still within and reporting to the development manager.
Moving a little further toward independence, we might find a team of testers who are independent of the development team and outside it, but reporting to project management.
Near the other end of the continuum lies complete independence. We might see a separate test team reporting into the organization at a point equal to the development or project team. We might find specialists in the business domain (such as users of the system), specialists in technology (such as database experts), and specialists in testing (such as security testers, certification testers, or test automation experts) in a separate test team, as part of a larger independent test team, or as part of a contract, outsourced test team.
Benefits of independent testing:
An independent tester can often find more, and different, defects than a tester working within a programming team – or a tester who is by profession a programmer.
While business analysts, marketing staff, designers, and programmers bring their own assumptions to the specification and implementation of the item under test, an independent tester brings a different set of assumptions to testing and to reviews, which often helps in exposing hidden defects and problems.
An independent tester who reports to senior management can report results honestly and without concern for reprisal that might result from pointing out problems in coworkers' or, worse yet, the manager's work.
An independent test team often has a separate budget, which helps ensure the proper level of money is spent on tester training, testing tools, test equipment, etc.
In addition, in some organizations, testers in an independent test team may find it easier to have a career path that leads up into more senior roles in testing.
Risks of independence and integrated testing:
There is a possibility that the testers and the test team can get isolated. This can take the form of interpersonal isolation from the programmers, the designers, and the project team itself, or it can take the form of isolation from the broader view of quality and the business objectives (e.g., obsessive focus on defects, often accompanied by a refusal to accept business prioritization of defects).
Isolation can lead to communication problems; feelings of unfriendliness and hostility; lack of identification with, and support for, the project goals; spontaneous blame festivals; and political backstabbing.
Even well-integrated test teams can suffer problems. Other project stakeholders might come to see the independent test team – rightly or wrongly – as a bottleneck and a source of delay. Some programmers give up their responsibility for quality, saying, ‘Well, we have this test team now, so why do I need to unit test my code?’
What is the Psychology of testing?
In this section we will discuss:
The comparison of the mindset of the tester and the developer.
The balance between self-testing and independent testing.
The need for clear and courteous communication and feedback on defects between tester and developer.
Comparison of the mindset of the tester and developer:
Testing and reviewing an application is different from analysing and developing it. When we are building or developing an application, we work positively to solve problems during the development process and to make the product meet the user specification. While testing or reviewing a product, however, we look for defects and failures in it. Thus building software requires a different mindset from testing software.
The balance between self-testing and independent testing:
The comparison of the mindsets of the tester and the developer above simply contrasts two different perspectives. It does not mean that a tester cannot be a programmer, or that a programmer cannot be a tester, although they often are separate roles. In fact, programmers are testers too: they always test the components they build, and while testing their own code they find many problems, so programmers, architects and developers test their own code before handing it over. However, we all know that it is difficult to find our own mistakes, so programmers, architects and business analysts depend on others to help test their work. This other person might be another developer from the same team, or a testing specialist or professional tester. Handing the application to testing specialists or professional testers allows an independent test of the system.
This degree of independence avoids author bias and is often more effective at finding defects and failures.
There are several levels of independence in software testing, listed here from the lowest to the highest:
i. Tests by the person who wrote the item.
ii. Tests by another person within the same team, such as another programmer.
iii. Tests by a person from a different group, such as an independent test team.
iv. Tests by a person from a different organization or company, such as outsourced testing or certification by an external body.
Clear and courteous communication and feedback on defects between tester and developer:
We all make mistakes, and we sometimes get annoyed, upset or depressed when someone points them out. From our viewpoint as testers, a test that finds defects and failures in the software is a good test, but at the same time we need to be careful about how we report those defects and failures to the programmers. We may be pleased that we found a good bug, but how will the requirements analyst, the designer, the developer, the project manager and the customer react?
The people who build the application may react defensively and take this reported defect as personal criticism.
The project manager may be annoyed with everyone for holding up the project.
The customer may lose confidence in the product because he can see defects.
Because testing can be seen as a destructive activity, we need to take care to report defects and failures as objectively and politely as possible.
What is fundamental test process in software testing?
Testing is a process rather than a single activity. This process starts from test planning, then moves through designing test cases, preparing for execution and evaluating status, until test closure. So, we can divide the activities within the fundamental test process into the following basic steps:
1) Planning and Control
2) Analysis and Design
3) Implementation and Execution
4) Evaluating exit criteria and Reporting
5) Test Closure activities
1) Planning and Control:
Test planning has the following major tasks:
i. To determine the scope and risks and identify the objectives of testing.
ii. To determine the test approach.
iii. To implement the test policy and/or the test strategy. (A test strategy is an outline that describes the testing portion of the software development cycle. It is created to inform the PM, testers and developers about key issues of the testing process, including the testing objectives, methods of testing, and the total time, resources and testing environments required for the project.)
iv. To determine the required test resources like people, test environments, PCs, etc.
v. To schedule test analysis and design tasks, test implementation, execution and evaluation.
vi. To determine the exit criteria, we need to set criteria such as coverage criteria. (Coverage criteria specify, for example, the percentage of statements in the software that must be executed during testing. They help us track whether we are completing test activities correctly, and they show us which tasks and checks we must complete for a particular level of testing before we can say that testing is finished; see the sketch below.)
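As a toy illustration of a coverage criterion, the sketch below (hand-made line sets; a real project would use a coverage tool rather than maintain these by hand) computes statement coverage as the percentage of executable lines that a test run actually reached, and checks it against an assumed 80% exit criterion:

def statement_coverage(executable_lines, executed_lines):
    # Percentage of executable statements exercised by the tests.
    hit = executable_lines & executed_lines
    return 100.0 * len(hit) / len(executable_lines)

executable = {1, 2, 3, 5, 6, 8, 9}   # lines containing statements
executed = {1, 2, 3, 5, 8}           # lines the test run actually reached

pct = statement_coverage(executable, executed)
print(f"statement coverage: {pct:.0f}%")   # prints 71%
print("exit criterion met:", pct >= 80)    # assumed 80% threshold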
Test control has the following major tasks:
i. To measure and analyze the results of reviews and testing.
ii. To monitor and document progress, test coverage and exit criteria.
iii. To provide information on testing.
iv. To initiate corrective actions.
v. To make decisions.
2) Analysis and Design:
Test analysis and test design have the following major tasks:
i. To review the test basis. (The test basis is the information we need in order to start test analysis and create our test cases. Basically, it is the documentation on which test cases are based, such as requirements, design specifications, product risk analysis, architecture and interfaces. We can use the test basis documents to understand what the system should do once built.)
ii. To identify test conditions.
iii. To design the tests.
iv. To evaluate testability of the requirements and system.
v. To design the test environment set-up and identify any required infrastructure and tools.
3) Implementation and Execution:
During test implementation and execution, we turn the test conditions into test cases and procedures and other testware, such as scripts for automation, the test environment and any other test infrastructure. (A test case is a set of conditions under which a tester will determine whether an application is working correctly or not.)
(Testware is a collective term for all the utilities that serve in combination for testing software, such as scripts, the test environment and any other test infrastructure, which are kept for later reuse.)
Test implementation has the following major tasks:
i. To develop and prioritize our test cases using test design techniques, and to create test data for those tests. (In order to test a software application you need to enter some data to exercise most of the features. Any such specifically identified data used in tests is known as test data.)
We also write instructions for carrying out the tests; these are known as test procedures.
We may also need to automate some tests using test harness and automated tests scripts. (A test harness is a collection of software and test data for testing a program unit by running it under different conditions and monitoring its behavior and outputs.)
ii. To create test suites from the test cases for efficient test execution (see the unittest sketch after this list).
(A test suite is a collection of test cases used to test a software program and show that it has some specified set of behaviours. A test suite often contains, for each collection of test cases, detailed instructions and information on the system configuration to be used during testing. Test suites are used to group similar test cases together.)
iii. To implement and verify the environment.
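As a minimal sketch of test cases, test data and a test suite in practice, Python's standard unittest module lets us group cases into a suite and run them. The function under test and its data are invented for illustration:

import unittest

def apply_discount(price, percent):
    # Function under test (illustrative).
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    # Each method is a test case; the literals are its test data.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

if __name__ == "__main__":
    # Build a test suite from the individual test cases and execute it.
    suite = unittest.TestSuite()
    suite.addTest(DiscountTests("test_typical_discount"))
    suite.addTest(DiscountTests("test_zero_discount"))
    unittest.TextTestRunner(verbosity=2).run(suite)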
Test execution has the following major tasks:
i. To execute test suites and individual test cases following the test procedures.
ii. To re-execute the tests that previously failed in order to confirm a fix. This is known as confirmation testing or re-testing.
iii. To log the outcome of test execution and record the identities and versions of the software under test. The test log is used for the audit trail. (A test log records which test cases were executed, in what order, who executed them, and the status of each test case (pass/fail). This documented record is called the test log.)
iv. To compare actual results with expected results.
v. Where there are differences between actual and expected results, to report the discrepancies as incidents (a minimal test-log sketch follows this list).
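The sketch below (field names are invented) shows the execution loop just described: run each case, compare actual to expected results, and log the outcome together with the identity and version of the software under test; a mismatch is reported as an incident. It reuses apply_discount from the previous sketch.

SOFTWARE_UNDER_TEST = {"name": "discount-service", "version": "1.4.2"}

test_cases = [
    {"id": "TC-01", "input": (100.0, 10), "expected": 90.0},
    {"id": "TC-02", "input": (50.0, 0),   "expected": 49.0},  # deliberately wrong
]

test_log = []
for case in test_cases:
    actual = apply_discount(*case["input"])
    status = "pass" if actual == case["expected"] else "fail"
    # Record what ran, its outcome, and the version of the software under test.
    test_log.append({**case, "actual": actual, "status": status,
                     "sut": SOFTWARE_UNDER_TEST})
    if status == "fail":
        print(f"INCIDENT: {case['id']} expected {case['expected']}, got {actual}")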
4) Evaluating Exit criteria and Reporting:
Based on the risk assessment of the project, we set criteria for each test level against which we measure whether we have done "enough testing". These criteria vary from project to project and are known as exit criteria.
Exit criteria come into the picture when, for example:
– Most test cases have been executed with a certain pass percentage.
– The bug rate falls below a certain level.
– The deadlines have been reached. (A small sketch of such a check is given below.)
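A small sketch of how such an exit-criteria check might look; the thresholds (95% execution, 90% pass rate, 2% open-bug rate) are illustrative assumptions a project would set during test planning:

def exit_criteria_met(executed, total, passed, open_bug_rate,
                      min_execution=0.95, min_pass=0.90, max_bug_rate=0.02):
    # True only if every illustrative exit criterion is satisfied.
    execution_ratio = executed / total
    pass_ratio = passed / executed if executed else 0.0
    return (execution_ratio >= min_execution
            and pass_ratio >= min_pass
            and open_bug_rate <= max_bug_rate)

# Example: 480 of 500 cases executed, 450 passed, 1% open-bug rate -> True.
print(exit_criteria_met(executed=480, total=500, passed=450, open_bug_rate=0.01))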
Evaluating exit criteria has the following major tasks:
i. To check the test logs against the exit criteria specified in test planning.
ii. To assess whether more tests are needed or whether the specified exit criteria should be changed.
iii. To write a test summary report for stakeholders.
5) Test Closure activities:
Test closure activities are performed when the software is delivered. Testing can also be closed for other reasons, such as:
When all the information needed for testing has been gathered.
When a project is cancelled.
When some target is achieved.
When a maintenance release or update is done.
Test closure activities have the following major tasks:
i. To check which planned deliverables are actually delivered and to ensure that all incident reports have been resolved.
ii. To finalize and archive testware such as scripts, test environments, etc. for later reuse.
iii. To hand over the testware to the maintenance organization, which will support the software.
iv. To evaluate how the testing went and learn lessons for future releases and projects.
What are the principles of testing?
Principles of Testing
There are seven principles of testing. They are as follows:
1) Testing shows presence of defects: Testing can show that defects are present, but cannot prove that there are no defects. Even after testing the application or product thoroughly we cannot say that the product is 100% defect free. Testing always reduces the number of undiscovered defects remaining in the software, but even if no defects are found, that is not a proof of correctness.
2) Exhaustive testing is impossible: Testing everything, including all combinations of inputs and preconditions, is not possible. So, instead of exhaustive testing, we use risks and priorities to focus testing effort. For example: if one screen of an application has 15 input fields, each with 5 possible values, then testing all the valid combinations would require 5^15 = 30,517,578,125 tests (verified in the quick sketch below). It is very unlikely that the project timescales would allow for this number of tests. So, assessing and managing risk is one of the most important activities, and a key reason for testing, in any project.
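The arithmetic in the example is easy to verify; the execution-speed figure below is just an assumption to show the scale:

combinations = 5 ** 15          # 15 fields, 5 possible values each
print(combinations)             # 30517578125

# Even at an assumed 1,000 automated tests per second, this is nearly a year:
days = combinations / 1_000 / (60 * 60 * 24)
print(round(days))              # ~353 days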
3) Early testing: In the software development life cycle testing activities should start as early as possible and should be focused on defined objectives.
4) Defect clustering: A small number of modules usually contains most of the defects discovered during pre-release testing, or shows the most operational failures.
5) Pesticide paradox: If the same kinds of tests are repeated again and again, eventually the same set of test cases will no longer be able to find any new bugs. To overcome this “Pesticide Paradox”, it is really very important to review the test cases regularly and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
6) Testing is context dependent: Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
7) Absence-of-errors fallacy: If the system built is unusable and does not fulfil the user's needs and expectations, then finding and fixing defects does not help.
What is the difference between Severity and Priority?
There are two key attributes of a defect in software testing. They are:
1) Severity
2) Priority
What is the difference between Severity and Priority?
1) Severity:
It is the extent to which a defect can affect the software; in other words, it defines the impact that a given defect has on the system. For example: if an application or web page crashes when a rarely used link is clicked, clicking that link is rare, but the impact of the crash is severe. So the severity is high but the priority is low.
Severity can be of the following types:
Critical: A defect that results in the termination of the complete system, or of one or more of its components, and causes extensive corruption of data. If the failed function is unusable and there is no acceptable alternative method to achieve the required results, the severity is stated as critical.
Major: A defect that results in the termination of the complete system, or of one or more of its components, and causes extensive corruption of data. If the failed function is unusable but there is an acceptable alternative method to achieve the required results, the severity is stated as major.
Moderate: A defect that does not result in termination, but causes the system to produce incorrect, incomplete or inconsistent results; the severity is stated as moderate.
Minor: A defect that does not result in termination and does not damage the usability of the system, where the desired results can easily be obtained by working around the defect; the severity is stated as minor.
Cosmetic: A defect related to an enhancement of the system where the changes concern the look and feel of the application; the severity is stated as cosmetic.
2) Priority:
Priority defines the order in which we should resolve defects. Should we fix a defect now, or can it wait? The priority status is set by the tester for the developer, stating the time frame within which the defect should be fixed. If high priority is stated, the developer has to fix it at the earliest opportunity. Priority is set based on customer requirements. For example: if the company name is misspelled on the home page of a website, the priority is high and the severity is low.
Priority can be of the following types:
Low: The defect is an irritant which should be repaired, but repair can be deferred until more serious defects have been fixed.
Medium: The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created.
High: The defect must be resolved as soon as possible because it affects the application or product severely; the system cannot be used until the repair has been done. (A small triage sketch follows.)
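To make the two attributes concrete, here is a minimal sketch (the defect records are invented) that models severity and priority separately and sorts a defect backlog by priority first, with severity as the tie-breaker:

from enum import IntEnum

class Severity(IntEnum):        # impact of the defect on the system
    COSMETIC = 1
    MINOR = 2
    MODERATE = 3
    MAJOR = 4
    CRITICAL = 5

class Priority(IntEnum):        # order in which defects should be fixed
    LOW = 1
    MEDIUM = 2
    HIGH = 3

defects = [
    {"id": "D-1", "summary": "crash on a rarely used export link",
     "severity": Severity.CRITICAL, "priority": Priority.LOW},
    {"id": "D-2", "summary": "company name misspelled on home page",
     "severity": Severity.COSMETIC, "priority": Priority.HIGH},
]

# Fix order is driven by priority; severity breaks ties.
for d in sorted(defects, key=lambda d: (d["priority"], d["severity"]),
                reverse=True):
    print(d["id"], d["summary"])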
A few important scenarios related to severity and priority that are often asked about in interviews:
High Priority & High Severity: An error in the basic functionality of the application that will not allow the user to use the system. (E.g. in a site maintaining student details, if saving a record fails, this is a high priority and high severity bug.)
High Priority & Low Severity: Spelling mistakes on the cover page, heading or title of an application.
High Severity & Low Priority: An error in the functionality of the application (for which there is no workaround) that will not allow the user to use the system, but which occurs only on clicking a link that is rarely used by the end user.
Low Priority and Low Severity: Any cosmetic or spelling issue within a paragraph or a report (not on the cover page, heading or title).
What is the cost of defects in software testing?
The cost of a defect depends on its impact and on when we find it: the earlier a defect is found, the lower the cost of fixing it. For example, if an error is found in the requirement specifications, it is relatively cheap to fix; the requirement specification can be corrected and re-issued. In the same way, when a defect or error is found in the design, the design can be corrected and re-issued. But if the error is not caught in the specifications and is not found until user acceptance, the cost of fixing those errors or defects becomes far higher.
If the error is made and the consequent defect is detected in the requirements phase then it is relatively cheap to fix it.
Similarly if an error is made and the consequent defect is found in the design phase then the design can be corrected and reissued with relatively little expense.
(Figure: cost of defects in software testing.)
The same applies to the construction phase. If, however, a defect is introduced in the requirement specification and is not detected until acceptance testing, or even once the system has been implemented, it will be much more expensive to fix. This is because rework will be needed in the specification and design before changes can be made in construction; because one defect in the requirements may well propagate into several places in the design and code; and because all the testing work done to that point will need to be repeated in order to reach the confidence level in the software that we require.
It is quite often the case that defects detected at a very late stage are, depending on how serious they are, not corrected, because the cost of doing so is too high. (The sketch below illustrates the escalation.)
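As a rough illustration of this escalation, the multipliers below are illustrative assumptions in the spirit of the often-quoted rule that fix costs grow by roughly an order of magnitude per phase; they are not figures from this article:

# Illustrative relative cost of fixing the same defect, by detection phase.
relative_cost = {
    "requirements": 1,
    "design": 5,
    "construction": 10,
    "acceptance testing": 50,
    "live use": 100,
}

base_fix_cost = 200   # assumed cost of a requirements-phase fix
for phase, factor in relative_cost.items():
    print(f"{phase:20s} ~{base_fix_cost * factor}")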
When do defects in software testing arise?
Software defects arise for reasons such as the following:
- The person using the software application or product may not have enough knowledge of the product.
- Maybe the software is used in the wrong way, which leads to defects or failures.
- The developers may have coded incorrectly and there can be defects present in the design.
- Incorrect setup of the testing environments.
To see when defects in software testing arise, let us take a small example, illustrated by the figure below.
We can see that Requirement 1 is implemented correctly – we understood the customer’s requirement, designed correctly to meet that requirement, built correctly to meet the design, and so deliver that requirement with the right attributes: functionally, it does what it is supposed to do and it also has the right non-functional attributes, so it is fast enough, easy to understand and so on.
(Figure: types of errors and defects - when do defects arise.) With the other requirements, errors have been made at different stages. Requirement 2 is fine until the software is coded, when we make some mistakes and introduce defects. These are probably easily spotted and corrected during testing, because we can see that the product does not meet its design specification.
The defects introduced in Requirement 3 are harder to deal with; we built exactly what we were told to but unfortunately the designer made some mistakes so there are defects in the design. Unless we check against the requirements definition, we will not spot those defects during testing. When we do notice them they will be hard to fix because design changes will be required.
The defects in Requirement 4 were introduced during the definition of the requirements; the product has been designed and built to meet that flawed requirements definition. If we test that the product meets its requirements and design, it will pass its tests but may be rejected by the user or customer. Defects reported by the customer in acceptance testing or live use can be very costly. Unfortunately, requirements and design defects are not rare; assessments of thousands of projects have shown that defects introduced during requirements and design make up close to half of the total number of defects.
From where do defects and failures in software testing arise?
Defects and failures basically arise from:
Errors in the specification, design and implementation of the software and system
Errors in use of the system
Environmental conditions
Intentional damage
Potential consequences of earlier errors
Errors in the specification and design of the software:
A specification is basically a written document which describes the functional and non-functional aspects of the software using prose and pictures. There is no need to have code in order to test specifications; we can test them without it. About 55% of all the bugs present in a product are due to mistakes in the specification, so testing the specifications can save a lot of time and cost in later stages of the product.
Errors in use of the system:
Errors in use of the system or product or application may arise because of the following reasons:
- Inadequate knowledge of the product or the software on the tester's part. The tester may not be aware of the functionalities of the product, and hence defects or failures may appear while testing the product.
- Lack of understanding of the functionalities by the developer. It may happen that the developers have not understood the functionalities of the product or application properly; a feature developed on such an understanding may not match the specifications, resulting in a defect or failure.
Environmental conditions:
Testers may report defects or failures because of a wrong setup of the testing environment. Recent surveys suggest that about 40% of testers' time is consumed by environment issues, which has a great impact on quality and productivity. Hence proper test environments are required for quality and on-time delivery of the product to customers.
Intentional damage:
The defects and failures reported by testers while testing the product or the application may also arise because of intentional damage.
Potential consequences of earlier errors:
Errors found at earlier stages of development reduce the cost of production, so it is very important to find errors early. This can be done by reviewing the specification documents or through walkthroughs. The further a defect flows downstream, the more it increases the cost of production.
What is a Failure in software testing?
If, under certain environments and situations, defects in the application or product get executed, the system will produce wrong results, causing a failure.
Not all defects result in failures; some may stay inactive in the code and we may never notice them. For example, defects in dead code will never result in failures.
It is not just defects that give rise to failures. Failures can also be caused by other factors, such as:
Environmental conditions: a radiation burst, a strong magnetic field, an electronic field or pollution could cause faults in hardware or firmware. Those faults might prevent or change the execution of the software.
Failures may also arise because of human error in interacting with the software, perhaps a wrong input value being entered or an output being misinterpreted.
Finally failures may also be caused by someone deliberately trying to cause a failure in the system.
Difference between Error, Defect and Failure in software testing:
Error: A mistake made by a programmer is known as an 'error'. This could happen for the following reasons:
- Because of some confusion in understanding the functionality of the software
- Because of some miscalculation of the values
- Because of misinterpretation of any value, etc.
Defect: A bug introduced by a programmer inside the code is known as a defect. This can happen because of programming mistakes.
Failure: If, under certain circumstances, these defects get executed during testing, the result is what is known as a software failure. (A small sketch of the chain follows.)
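A tiny sketch (the function is invented) showing the chain just defined: the programmer's error (a wrong divisor) becomes a defect sitting in the code, and it turns into a failure only when that code is executed:

def average(values):
    # Error: the programmer meant to divide by len(values).
    # The wrong divisor is now a defect in the code.
    return sum(values) / (len(values) + 1)

# No failure yet: the defect lies dormant until the code is executed.
result = average([10, 20, 30])   # executing the defective code...
# ...produces a wrong result (15 instead of 20): an observed failure.
assert result == 20, f"failure observed: expected 20, got {result}"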
A few points that are important to know:
When a tester is executing a test, he or she may observe a difference in the behaviour of a feature or functionality, but this is not always because of a failure. It may happen because of wrong test data being entered, the tester not being aware of the feature or functionality, or a bad environment. For these reasons incidents are reported, in what is known as an incident report. A condition or situation which requires further analysis or clarification is known as an incident. To deal with an incident, the programmer needs to analyse whether it occurred because of a failure or not.
Defects or bugs are not introduced into the product only through the software. To understand this, let's take an example: a bug or defect can also be introduced by a business analyst. Defects present in specifications, such as the requirements specification and design specifications, can be detected during reviews. A defect or bug caught during a review cannot result in a failure, because the software has not yet been executed.
These defects or bugs are reported not to blame the developers or anyone else, but to judge the quality of the product. The quality of the product is of utmost importance: to gain the confidence of customers it is very important to deliver a quality product on time.
What are software testing objectives and purpose?
Software testing has several goals and objectives. The major objectives are as follows:
Finding defects introduced by the programmer while developing the software.
Gaining confidence in, and providing information about, the level of quality.
Preventing defects.
Making sure that the end result meets the business and user requirements.
Ensuring that it satisfies the BRS (Business Requirement Specification) and the SRS (System Requirement Specification).
Gaining the confidence of customers by providing them a quality product.
Software testing helps in validating the software application or product against the business and user requirements. Good test coverage is very important in order to test the application completely and to make sure that it performs well and as per the specifications.
While determining coverage, the test cases should be designed with the maximum possibility of finding errors or bugs, so that they are as effective as possible. Effectiveness can be measured by the number of defects reported per test case: the higher the number of defects reported, the more effective the test cases.
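As a quick illustration of this measure, here is a minimal Python sketch; the figures for the two test suites are invented for illustration only.

# Invented numbers: (defects reported, test cases executed) per suite.
suites = {"suite A": (18, 120), "suite B": (9, 40)}

for name, (defects, cases) in suites.items():
    print(name, "->", round(defects / cases, 2), "defects per test case")
# By this measure, suite B (0.22) is more effective than suite A (0.15).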
Once delivery is made to the end users or customers, they should be able to operate the product without complaints. To make this happen, the tester should know how customers are going to use the product, and write test scenarios and design test cases accordingly. This helps a great deal in fulfilling all of the customers' requirements.
Software testing also makes sure that testing is done properly, and hence that the system is ready for use. Good coverage means the testing has covered the various areas: functionality of the application; compatibility with the OS, hardware, and different types of browsers; performance testing; and load testing to make sure the system is reliable, does not crash, and has no blocking issues. It also confirms that the application can be deployed easily and without resistance, and hence is easy to install, learn, and use.
Why is Software Testing Necessary?
Software testing is necessary because we all make mistakes. Some of those mistakes are unimportant, but some of them are expensive or dangerous. We need to check everything and anything we produce, because things can always go wrong; humans make mistakes all the time.
Since we assume that our work may contain mistakes, we all need to check our own work. However, some mistakes come from bad assumptions and blind spots, so we might make the same mistakes when we check our own work as we made when we did it, and so fail to notice the flaws in what we have done.
Ideally, we should get someone else to check our work, because another person is more likely to spot the flaws.
There are several reasons that clearly tell us why software testing is important and what the major things are that we should consider while testing any product or application.
Software testing is very important for the following reasons:
Testing is required to point out the defects and errors that were made during the development phases.
It is essential because it ensures the customer's trust in, and satisfaction with, the application.
It is very important for ensuring the quality of the product; a quality product delivered to customers helps in gaining their confidence.
Testing is necessary in order to deliver a high-quality product or software application that requires a lower maintenance cost and gives more accurate, consistent, and reliable results.
Testing is required for effective performance of the software application or product.
It is important to ensure that the application does not result in failures, because fixing these in the later stages of development can be very expensive.
It is required to stay in business.
2014 Winners of the European Software Testing Awards
The Borland European Software Testing Award
Home Office Technology – Test Design & Consultancy Services
---------------------------------------------------------------------------------------
Lifetime Achievement Award
Bob Bartlett
---------------------------------------------------------------------------------------
The Cigniti Technologies Best Agile Project
EPAM Systems (winner)
- Mindfire Solutions
- Black Pepper Software
- AkBank
- Cognizant Technology Solutions
----------------------------------------------------------------------------------------
The Neotys Best Mobile Project
Waitrose in partnership with Cognizant Technology Solutions (winner)
- Centrica in partnership with Cognizant Technology Solutions
- Virgin Media in Partnership with Accenture
- Lloyds Banking Group in partnership with Cognizant Technology Solutions
- Proxama
------------------------------------------------------------------------------------------
Best Test Automation Project
TIBCO Jaspersoft (winner)
- Original Software
- Infuse IT powered by useMangoTM
- BD Medication Workflow Solutions (Becton Dickinson Austria GmbH)
- HCL Technologies Ltd
- Lloyds Banking Group in partnership with Cognizant Technology Solutions
---------------------------------------------------------------------------------------------
The Sogeti Green Testing Team Of The Year
Tech Mahindra (winner)
- Sage UK
- Banking Testing team of Cognizant Technology Solutions
-----------------------------------------------------------------------------------------------
Graduate Tester Of The Year
Kieran Hunter, Cognizant Technology Solutions (winner)
- Paul Foy, Sogeti UK
- Karthik Kannan, Tata Consultancy Services
- Stacey Ballance, Wincor Nixdorf
- Prabhdeep Bhopal, Sopra
------------------------------------------------------------------------------------------------
Best Overall Testing Project – Finance Sector
Barclays (winner)
- Brickendon Consulting
- Xbosoft
- Credit Suisse in partnership with Cognizant Technology Solutions
- Infosys Limited
- IFDS - Oval Project
------------------------------------------------------------------------------------------------
Leading Vendor
Tata Consultancy Services (winner)
- Cognizant Technology Solutions with Credit Suisse
- Neotys
-------------------------------------------------------------------------------------------------
The Sage Most Innovative Project
Proxama (winner)
King (highly commended)
- Philips in partnership with Tech Mahindra
- Allianz Insurance and ACIS
- British Gas
--------------------------------------------------------------------------------------------------
The Maveric Systems Best Overall Project
Aditi Technologies (winner)
- Barclays
- Leading Global Reinsurer in partnership with Cognizant Technology Solutions
- British Gas
- HISCOX in partnership with Cognizant Technology Solutions
-----------------------------------------------------------------------------------------------------
Wednesday, 1 April 2015
Role of a tester in Defect Prevention
“What is the role of a tester in defect prevention and defect detection?” In this post we will discuss the role of a tester in these phases: how testers can prevent more defects in the defect prevention phase, and how they can detect more bugs in the defect detection phase.
Role of a tester in defect prevention and defect detection.
Defect prevention – In defect prevention, developers play an important role. In this phase, developers carry out activities such as code reviews, static code analysis, and unit testing. Testers are also involved in defect prevention by reviewing specification documents; studying a specification document is an art.
While studying specification documents, testers raise various queries, and many times the requirement document gets changed or updated as a result of those queries.
Developers often neglect ambiguities in specification documents in order to complete the project, or they fail to identify them when they see them. Those ambiguities are then built into the code and represent bugs when compared with the end user's needs. This is how testers help in defect prevention.
What is Black Box Testing
Black Box Testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the “legal” inputs and what the expected outputs should be, but not how the program actually arrives at those outputs.
It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, which avoids programmer bias toward his own work. For this kind of testing, test groups are often used: “Test groups are sometimes called professional idiots… people who are good at designing incorrect data.” Also, due to the nature of black box testing, test planning can begin as soon as the specifications are written. The opposite of this is glass box testing, where test data are derived from direct examination of the code to be tested; for glass box testing, the test cases cannot be determined until the code has actually been written. Both of these testing techniques have advantages and disadvantages, but when combined, they help to ensure thorough testing of the product.
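As a brief illustration, here is a hypothetical black-box test sketch in Python using the standard library's unittest module. The is_leap_year function and its specification are invented; the point is that the tests know only the legal inputs and expected outputs, never the implementation.

import unittest

def is_leap_year(year: int) -> bool:
    # Implementation under test; a black-box tester never looks inside.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearSpecTests(unittest.TestCase):
    # Each test is derived purely from the (assumed) specification.
    def test_typical_leap_year(self):
        self.assertTrue(is_leap_year(2016))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_four_hundred_year_is_leap(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()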
Waterfall Model
The waterfall model is the most common model used in software development and testing. It is said to be a waterfall method because work flows steadily downwards from step to step, like a waterfall.
The main phases or steps in the waterfall method are:
Conception,
Initiation,
Analysis,
Design,
Construction,
Testing,
Production/Implementation,
Maintenance.
The waterfall method actually originated in the manufacturing and construction industries. Since no software methodologies existed at that time, it was carried over into software development and testing. The main highlight of this method is that one can go to the next step of the development only after completing the ongoing step.
Also, developers can go back only one step, that is, to the immediately previous phase. In this method, each phase of the development activity is followed by verification and validation activities. The steps involved in the waterfall method are listed below; you can move on to the next step only when you finish the present one. The phases or steps are:
Software requirement specification
System and software design
Implementation (coding or unit testing)
Integration
Testing and validation
Operation or installation
Maintenance
What is Beta Testing
In this type of testing, the software is distributed as a beta version to users, and the users test the application at their own sites. As the users explore the software, any exception or defect that occurs is reported to the developers. Beta testing comes after alpha testing: versions of the software, known as beta versions, are released to a limited audience outside the company.
The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the general public to widen the feedback field to the maximum number of future users.
What is ALPHA TESTING
In this type of testing, users are invited to the development center, where they use the application and the developers note every input or action carried out by each user. Any abnormal behavior of the system is noted and rectified by the developers.
Alpha testing is done before beta testing, and is mostly done by in-house members of the development and QA teams. In simple words, it is testing by the development team just before launching the live beta version of the software.
What is User Acceptance Testing
In this type of testing, the software is handed over to the user in order to find out whether it meets the user's expectations and works as expected. In software development, user acceptance testing (UAT) – also called beta testing, application testing, or end-user testing – is a phase of software development in which the software is tested in the “real world” by its intended audience.
User acceptance testing can be done in-house, with volunteers or paid test subjects using the software, or, more typically for widely distributed software, by making a test version available for download and free trial over the web. The experiences of the early users are forwarded back to the developers, who make final changes before releasing the software commercially.
What is Regression Testing
Regression testing is a style of testing that focuses on retesting after changes are made. In traditional regression testing, we reuse the same tests (the regression tests). In risk-oriented regression testing, we test the same areas as before, but we use different (increasingly complex) tests. Traditional regression tests are often partially automated. These notes focus on traditional regression testing.
What is Scenario Testing
Scenario tests are realistic, credible, and motivating to stakeholders, challenging for the program, and easy to evaluate for the tester. They provide meaningful combinations of functions and variables, rather than the more artificial combinations you get with domain testing or combinatorial test design.
Scenario tests find issues in the software against practical usage, and the end users create the scenarios. Consider an example to get a better idea. Suppose we have developed billing software for a shop. We have completed a lot of testing, there are no bugs in the code, and all the features are working. Now we are discussing the software with our customer, and he describes a scenario: he has entered and processed a bill for one order, and then his customer wants to change the quantity of material purchased, so he needs to issue it as the same bill. We try this scenario in our software and find that it cannot edit the generated bill, because there is no option for that, so we need to add that facility too. This is only a general example; in simple words, scenario testing means testing against practical situations, and those stories can be given by end customers.
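Here is a hypothetical Python sketch of that billing scenario. The Bill class and its methods are invented purely to illustrate how a scenario test exercises the customer's story end to end.

class Bill:
    def __init__(self):
        self.items = {}          # item name -> quantity
        self.processed = False

    def add_item(self, name, qty):
        self.items[name] = qty

    def process(self):
        self.processed = True

    def change_quantity(self, name, qty):
        # The facility the customer's scenario exposed as missing:
        # amending an already-processed bill.
        if not self.processed:
            raise RuntimeError("can only amend a processed bill")
        self.items[name] = qty

def test_amend_processed_bill():
    bill = Bill()
    bill.add_item("cement bag", 10)
    bill.process()
    bill.change_quantity("cement bag", 12)  # same bill, new quantity
    assert bill.items["cement bag"] == 12

if __name__ == "__main__":
    test_amend_processed_bill()
    print("scenario passed: a processed bill can be amended")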
What is Domain Testing
What is Domain Testing
Domain testing is the most frequently described test technique. Some authors write only about domain testing when they write about test design. The basic notion is that you take the huge space of possible tests of an individual variable and subdivide it into subsets that are (in some way) equivalent; then you test a representative from each subset. This type of testing is also known as equivalence partitioning or boundary analysis.
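As a small illustration, here is a hypothetical Python sketch of domain testing for a single variable: an age field whose valid range is assumed, for this example only, to be 18 to 60. We test one representative per equivalence class, plus the boundary values.

def is_valid_age(age: int) -> bool:
    # Invented rule under test: valid ages are 18 through 60 inclusive.
    return 18 <= age <= 60

# One representative per equivalence class:
representatives = {
    "below range (invalid)": 5,
    "within range (valid)":  35,
    "above range (invalid)": 75,
}
# Boundary analysis: values on and immediately around the edges.
boundaries = [17, 18, 19, 59, 60, 61]

if __name__ == "__main__":
    for label, value in representatives.items():
        print(f"{label}: {value} -> {is_valid_age(value)}")
    for value in boundaries:
        print(f"boundary {value} -> {is_valid_age(value)}")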
What is Volume Testing
Volume testing checks the efficiency of the application: a huge amount of data is processed through the application under test in order to check the extreme limits of the system.
Volume testing, as its name implies, purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems can be transaction-processing systems capturing real-time sales, or systems performing database updates and data retrieval.
Volume testing seeks to verify the physical and logical limits of a system's capacity and to ascertain whether those limits are acceptable to meet the projected capacity of the organization's business processing.
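A minimal, hypothetical Python sketch of the idea follows; the processing routine and the one-million-record volume are invented stand-ins for a real component and its projected capacity.

import time

def process_records(records):
    # Stand-in for the component under test, e.g. a batch importer.
    return sum(1 for r in records if r % 2 == 0)

def test_one_million_records():
    volume = 1_000_000
    start = time.perf_counter()
    processed = process_records(range(volume))
    elapsed = time.perf_counter() - start
    assert processed == volume // 2   # the run completed correctly
    print(f"processed {volume} records in {elapsed:.2f}s")

if __name__ == "__main__":
    test_one_million_records()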
What is Recovery Testing
Recovery testing is done in order to check how fast and how well the application can recover from any type of crash, hardware failure, or other catastrophic problem. The type or extent of recovery is specified in the requirement specifications.
What is Smoke Testing
This type of testing is sometimes also called sanity testing, although there are some differences between smoke and sanity testing. It is done in order to check whether the application is ready for further major testing and is working properly, without failing, up to the least expected level. The name comes from a test of new or repaired equipment by turning it on: if it smokes, it doesn't work! The term also refers to testing the basic functions of software, and was originally coined in the manufacture of containers and pipes, where smoke was introduced to determine if there were any leaks.
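As a small illustration, here is a hypothetical Python sketch of a smoke suite; the three checks are invented stand-ins for real probes such as launching the application or loading the home page.

def app_starts():
    return True   # stand-in: e.g. the process launches

def home_page_loads():
    return True   # stand-in: e.g. GET / returns HTTP 200

def login_works():
    return True   # stand-in: e.g. a known user can sign in

SMOKE_CHECKS = [app_starts, home_page_loads, login_works]

def run_smoke_suite():
    failures = [check.__name__ for check in SMOKE_CHECKS if not check()]
    if failures:
        raise SystemExit(f"smoke test failed: {failures}; build rejected")
    print("smoke test passed: build accepted for further testing")

if __name__ == "__main__":
    run_smoke_suite()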
What is Usability Testing
This testing is also called 'testing for user-friendliness'. It is done when the user interface of the application is an important consideration and needs to suit a specific type of user.
Usability testing is the process of working with end-users directly and indirectly to assess how the user perceives a software package and how they interact with it. This process will uncover areas of difficulty for users as well as areas of strength.
The goal of usability testing should be to limit and remove difficulties for users and to leverage areas of strength for maximum usability. This testing should ideally involve direct user feedback, indirect feedback (observed behavior), and, when possible, computer-supported feedback. Computer-supported feedback is often (if not always) left out of this process. It can be as simple as a timer on a dialog to monitor how long it takes users to complete the dialog, plus counters to determine how often certain conditions occur (e.g. error messages or help messages). Often this involves trivial modifications to existing software, but it can result in a tremendous return on investment.
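As a small illustration of such computer-supported feedback, here is a hypothetical Python sketch with a timer around a simulated dialog and counters for error and help events; all names are invented, and real instrumentation would hook into the application's UI events.

import time
from collections import Counter

events = Counter()

def record(event_name):
    events[event_name] += 1

def timed_dialog(simulated_seconds=0.25):
    start = time.perf_counter()
    time.sleep(simulated_seconds)   # stands in for the user's work in the dialog
    record("error_message")         # e.g. the user hit a validation error
    record("help_opened")           # e.g. the user opened the help panel
    return time.perf_counter() - start

if __name__ == "__main__":
    duration = timed_dialog()
    print(f"dialog took {duration:.2f}s; event counts: {dict(events)}")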
Ultimately, usability testing should result in changes to the delivered product in line with the discoveries made regarding usability. These changes should be directly related to real-world usability by average users. As much as possible, documentation should be written supporting changes so that in the future, similar situations can be handled with ease.
What is Exploratory Testing
This testing is similar to ad-hoc testing and is done in order to learn and explore the application. It is known as ET for short.
Exploratory software testing is a powerful and fun approach to testing. In some situations, it can be orders of magnitude more productive than scripted testing. At least unconsciously, testers perform exploratory testing at one time or another, yet it doesn't get much respect in our field. It can be considered “scientific thinking” in real time.
What is Load Testing
The application is tested against heavy loads or inputs, such as the testing of web sites, in order to find out at what point the web site or application fails, or at what point its performance degrades. Load testing operates at a predefined load level, usually the highest load that the system can accept while still functioning properly.
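A minimal, hypothetical Python sketch of the idea follows: a predefined number of concurrent users drives requests against an invented stand-in for the application, and we count successes and total time.

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    # Invented stand-in for the web site or API under test.
    time.sleep(0.01)     # simulated service time
    return 200           # simulated HTTP status

def run_load(concurrent_users=50, requests_per_user=20):
    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        statuses = list(pool.map(handle_request,
                                 range(concurrent_users * requests_per_user)))
    elapsed = time.perf_counter() - started
    ok = statuses.count(200)
    print(f"{ok}/{len(statuses)} requests succeeded in {elapsed:.2f}s")

if __name__ == "__main__":
    run_load()   # the predefined load level for this sketch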
What is Stress Testing
The application is tested against heavy load, such as complex numerical values, a large number of inputs, or a large number of queries, to check how much stress or load the application can withstand. Stress testing deals with the quality of the application under a demanding environment.
The idea is to create an environment more demanding of the application than it would experience under normal workloads. This is the hardest and most complex category of testing to accomplish, and it requires a joint effort from all teams. A test environment is established with many testing stations. At each station, a script exercises the system; these scripts are usually based on the regression suite. More and more stations are added, all simultaneously hammering on the system, until the system breaks. The system is repaired and the stress test is repeated until a level of stress is reached that is higher than expected to be present at a customer site.
Race conditions and memory leaks are often found under stress testing. A race condition is a conflict between at least two tests. Each test works correctly when done in isolation. When the two tests are run in parallel, one or both of the tests fail. This is usually due to an incorrectly managed lock. A memory leak happens when a test leaves allocated memory behind and does not correctly return the memory to the memory allocation scheme. The test seems to run correctly, but after being exercised several times, available memory is reduced until the system fails.
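As a small illustration of such a race, here is a hypothetical Python sketch: two threads update a shared counter without a lock, so the unsynchronized read-modify-write can lose updates (results vary by interpreter and timing).

import threading

counter = 0

def increment(times):
    global counter
    for _ in range(times):
        value = counter   # read...
        value += 1        # ...modify...
        counter = value   # ...write: not atomic, so updates can be lost

if __name__ == "__main__":
    threads = [threading.Thread(target=increment, args=(100_000,))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Expected 200000; with the lock missing (an "incorrectly managed
    # lock"), the run typically prints less.
    print("counter =", counter)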
What is Ad Hoc Testing
This type of testing is done without any formal test plan or test case creation. Ad-hoc testing helps in deciding the scope and duration of the various other kinds of testing, and it also helps testers learn the application before starting any other testing. It is the least formal method of testing.
One of the best uses of ad hoc testing is for discovery. Reading the requirements or specifications (if they exist) rarely gives you a good sense of how a program actually behaves. Even the user documentation may not capture the “look and feel” of a program. Ad hoc testing can find holes in your test strategy, and can expose relationships between subsystems that would otherwise not be apparent. In this way, it serves as a tool for checking the completeness of your testing. Missing cases can be found and added to your testing arsenal. Finding new tests in this way can also be a sign that you should perform root cause analysis.
Ask yourself or your test team, “What other tests of this class should we be running?” Defects found while doing ad hoc testing are often examples of entire classes of forgotten test cases. Another use for ad hoc testing is to determine the priorities for your other testing activities. In our example program, Panorama may allow the user to sort photographs that are being displayed. If ad hoc testing shows this to work well, the formal testing of this feature might be deferred until the problematic areas are completed. On the other hand, if ad hoc testing of this sorting photograph feature uncovers problems, then the formal testing might receive a higher priority.
Test Execution Process
Once the test cases are written, shared with the BAs and the dev team, reviewed by them, any changes notified to the QA team, and the necessary amendments made by the QA team, the test design phase is complete. However, having the test cases ready does not mean we can initiate the test run: we also need the application ready, among other things.
Test Execution Guidelines:
Let us now list all the things that are important to understand about the test execution phase:
#1. The build being deployed (in other words, installed and made available) to the QA environment is one of the most important things that needs to happen before test execution can start. (The code written by the dev team is packaged into what is referred to as a build: an installable piece of software, the AUT, ready to be deployed to the QA environment.)
#2. Test execution happens in the QA environment. To make sure that the dev team's work on the code is not in the same place where the QA team is testing, the general practice is to have dedicated dev and QA environments. (There is also a production environment to host the live application.) This is basically to preserve the integrity of the application at the various stages of the SDLC. Otherwise, ideally, all three environments are identical in nature.
#3. The test team size is not constant from the beginning of the project. When the test plan is initiated, the team might have just a team lead. During the test design phase, a few testers come on board. Test execution is the phase when the team is at its maximum size.
#4. Test execution happens in at least 2 cycles (3 in some projects). Typically, in each cycle, all the test cases (the entire test suite) are executed. The objective of the first cycle is to identify any blocking or critical defects, and most of the high-severity defects. The objective of the second cycle is to identify the remaining high and medium defects, correct gaps in the scripts, and obtain results.
#5. The test execution phase consists of executing the test scripts, test script maintenance (correcting gaps in the scripts), and reporting (defects, status, metrics, etc.). Therefore, when planning this phase, schedules and effort should be estimated taking all these aspects into consideration, not just script execution.
#6. After the test scripts are done and the AUT is deployed, and before the test execution begins, there is an intermediary step. This is called the Test Readiness Review (TRR). It is a transitional step that ends the test design phase and eases us into test execution.
For information on this step and a sample “Test readiness review checklist”, check out this link: Software testing Checklist
#7. In addition to the TRR, there are a few more checks before we can accept the current build deployed in the QA environment for test execution.
Those are the smoke and sanity tests. Detailed information on what these are is at: What is Smoke and Sanity Test?
#8. On the successful completion of TRR, smoke and sanity tests, the test cycle officially begins.
#9. Exploratory testing is carried out once the build is ready for testing. The purpose of this testing is to make sure critical defects are removed before the next levels of testing start. It is carried out on the application without any test scripts or documentation, and it also helps in getting familiar with the AUT.
#10. Just as in the other phases of the STLC, work is divided among team members in the test execution phase as well. The division might be module-wise, by test case count, or by anything else that makes sense.
#11. The primary outcome of the test execution phase is reports: primarily, the defect report and the test execution status report. The detailed process for reporting can be found at: Test executions reports
New Columns in Test Cases Document:
The test case document now gets expanded with the following two columns: Status and Actual result.
(Note – For live project test execution, we have added and updated these columns with test execution results in the test cases spreadsheet provided for download below)
------------
Status column:
Test execution is nothing but using the test steps on the AUT, supplying the test data (as identified in the test case document), and observing the behavior of the AUT to see whether it satisfies the expected result. If the expected result is not met, it can be construed as a defect, and the status of the test case becomes “Fail”; if the expected result is met, the status is “Pass”. If the test case cannot be executed for any reason (an existing defect, or the environment not supporting it), the status is “Blocked”. The status of a test case that is yet to be run can be set to “No run”/“Unexecuted”, or can be left empty.
For a test case with multiple steps, if a step in the middle does not meet its expected result, the test case status can be set to “Fail” right there, and the subsequent steps need not be executed.
The status “Fail” can be indicated in red if you would like to draw attention to it immediately.
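As a small illustration, here is a hypothetical Python sketch of these status rules; the step data structure is invented for the example.

def execute_test_case(steps, environment_ok=True):
    if not environment_ok:
        return "Blocked"          # the case cannot be executed at all
    for step in steps:
        if step["actual"] != step["expected"]:
            return "Fail"         # stop at the first failing step
    return "Pass"

if __name__ == "__main__":
    case = [
        {"expected": "login page shown", "actual": "login page shown"},
        {"expected": "dashboard shown",  "actual": "error 500"},
        {"expected": "report opens",     "actual": "report opens"},
    ]
    print(execute_test_case(case))                         # Fail (at step 2)
    print(execute_test_case(case, environment_ok=False))   # Blocked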
Actual result column:
This is a space where testers record the deviation from the expected result. When the expected result is met (that is, for a test case whose status is “Pass”), this field can be left empty: if the expected result is met, the actual result equals the expected result, so rewriting it in the actual result column would be repetitive and redundant.
A screenshot of the deviation can be attached in this column for enhanced clarity of what the problem is.
Test Execution Results for OrangeHRM Live Project:
Let us now take OrangeHRM and carry out the test execution based on the guidelines listed above. Here are a few points to note:
The extended test case template is used.
Exploratory testing, as indicated, is to be carried out without test scripts, so please feel free to test the application in parallel as you see fit.
Due to the limitations of presenting a live project in the form of readable content, only a limited amount of the test cases/functionality of the OrangeHRM application is shown in the sample test execution template. Again, please feel free to work on more for the most practical experience.
The sanity and smoke test suites are also added to the document, to give you an idea of what kinds of test cases are considered at these stages.
Defects are not logged yet, even though the status of some test cases is set to “Fail”. This is because logging defects is the next most important, and most commonly worked-on, aspect of our life as testers, so we want to deal with it in detail in the next article.
Test Cases with Execution Results:
=> Click here to download the test case execution document.
It Contains – Test cases execution result, smoke tests, sanity tests, exploratory test – spreadsheets
Lastly, if a test management tool was used for creating and maintaining the test cases, the same tool can be used for test execution as well. The use of a tool makes reporting easier, but otherwise the process of running the test cases is the same.