Sample Software Test Plan Draft / Template
The project is initiated as a pilot for the testing of [application].
[Provide information about the application in 3-4 lines]
[Provide more details of the application including its feature set]
[Technology being used in the application]
The following table defines the scope of risk-based testing for the [application].
Risk Based Testing - Test (Release x.x.x) Scope | |
IS | IS NOT |
*Globalization testing (partial): Globalization testing will not be performed exhaustively, i.e. with all locales and cultures and all kinds of input data as per the i18n standard.
For the pilot, the plan is to validate the display (UI) during the [features] work flow with the supported language set (ar, en, es, etc.). Content verification while reading will be out of scope; the focus will be on UI and functional flow.
*Regression testing (partial): Approximately 10 test cases will be automated. Issues such as [list of issues that will not be automated] will not be handled during the pilot.
The following table defines the scope of testing in terms of features of the application.
Feature Testing - Test (Release x.x.x) Scope | |
IS | IS NOT |
The following table defines the scope of testing in terms of testing tasks required.
Testing Tasks - Test (Release x.x.x) Scope | |
IS | IS NOT |
Types of Tests to be conducted
The features in scope for the [application] will be tested for functionality during the pilot. Functionality will be tested in terms of –
a. Work flows
b. Data validation at front end
c. Data correctness at backend for [features]
d. Multiple languages handling and work flow (display perspective)
e. User interface (UI)
Exploratory testing will be conducted on a regular basis to learn more about the application and its end users, and to unearth difficult-to-find defects in the application.
Test Case Development
o Test cases will be written for all features mentioned in the scope section. Initially, test cases will be written in terms of work flows (negative, positive and boundary) and line items (for data correctness); these will be peer reviewed first and detailed later. Every test case may internally contain multiple test case items to cover a wide range of test data.
o Test cases will be grouped by features and testing types, and every group will be treated as a test suite. Every test suite will be independent of the others. This helps in managing the test cycles and brings efficiency to test execution planning.
o Test cases within a test suite can depend on each other, i.e. one test case's execution can be a prerequisite for another. This helps in clubbing dependent test cases. It will be done only when required; otherwise, test cases should normally be executable independently of each other.
o Detailed test case development and exploratory testing will run in parallel; for this reason, on average one test case per hour will be completed (documentation and execution). In four weeks, around 160 (8 hr * 5 days * 4 wk) test cases will be targeted. All test cases related to functional work flow and data validation will target the English language, whereas from a globalization perspective all non-English languages will be covered, but only for a subset of test cases (~15).
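The suite grouping described above can be sketched with Python's standard unittest module: each feature/testing-type group becomes an independent suite that can be scheduled on its own in a test cycle. The class and suite names below are illustrative, not taken from the plan.

```python
import unittest

# Hypothetical feature suites; real ones would map to the in-scope features.
class WorkflowTests(unittest.TestCase):
    def test_positive_flow(self):
        self.assertTrue(True)   # placeholder for a positive work-flow check

    def test_negative_flow(self):
        self.assertTrue(True)   # placeholder for a negative work-flow check

class DataValidationTests(unittest.TestCase):
    def test_boundary_value(self):
        self.assertTrue(True)   # placeholder for a boundary-value check

def build_suites(groups):
    """Turn each feature/testing-type group into an independent TestSuite,
    so any subset of suites can be picked for a given test cycle."""
    loader = unittest.TestLoader()
    return {name: loader.loadTestsFromTestCase(cls) for name, cls in groups.items()}

suites = build_suites({"workflows": WorkflowTests,
                       "data_validation": DataValidationTests})
# Run one suite completely in isolation from the others.
result = unittest.TextTestRunner(verbosity=0).run(suites["workflows"])
```

Because each suite is built independently, executing only the work-flow suite (as above) leaves the data-validation suite untouched, which is what makes partial test cycles cheap to plan.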
Test Case Automation
o Around 10 test cases of the [features] flow (one test case per two days) will be picked and automated as complete scenarios. Automation will cover a subset of work flows from the features.
o Test automation will be developed in Python using the Selenium WebDriver API and the database API.
o All test case items will be automated separately, completely in isolation. This will help later when integrating the test automation with an external test case management solution.
o Nine positive test cases and one test case from the globalization test suite will be automated. The test case IDs to be automated are listed below –
§ [List of test cases]
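A minimal sketch of how one such automated scenario could be structured. Selenium WebDriver is replaced with a fake driver here so the skeleton is self-contained; in the real pilot `self.driver = webdriver.Firefox()` (Selenium Python bindings) would take its place, and the backend checks would go through the CouchDB API. The URL and element IDs are hypothetical.

```python
import unittest

class FakeElement:
    """Stand-in for a Selenium WebElement."""
    def __init__(self, element_id):
        self.element_id = element_id
        self.text = ""
    def send_keys(self, text):
        self.text = text
    def click(self):
        pass

class FakeDriver:
    """Stand-in for selenium.webdriver so the skeleton runs anywhere."""
    def get(self, url):
        self.url = url
    def find_element_by_id(self, element_id):
        return FakeElement(element_id)
    def quit(self):
        pass

class FeatureWorkflowTest(unittest.TestCase):
    """One complete scenario automated in isolation (the plan targets ~10)."""
    def setUp(self):
        # Real run: from selenium import webdriver; self.driver = webdriver.Firefox()
        self.driver = FakeDriver()

    def test_submit_form_workflow(self):
        self.driver.get("http://qa-host/app")            # hypothetical QA URL
        field = self.driver.find_element_by_id("name")   # hypothetical element ID
        field.send_keys("sample")
        self.driver.find_element_by_id("submit").click()
        # Real run: verify the stored record via the CouchDB API here.
        self.assertEqual(field.text, "sample")

    def tearDown(self):
        self.driver.quit()   # leave the environment in its starting state

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(FeatureWorkflowTest))
```

Keeping each scenario in its own TestCase class, with its own setUp/tearDown, is what keeps the automated items isolated and easy to hand over to an external test case management tool later.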
Test Execution Methodology
The following subsections define the activities and participants involved in test execution.
Test Hours
Testing team will test primarily between the hours of 11:30 AM and 7:30 PM (all times in IST), Monday through Friday. However, we may start earlier and continue later, as the workload requires. The escalation points section below specifies on-hours and off-hours support numbers. All meetings with the stakeholders will be held at x hours.
Test Cycles
As this is a pilot project, a single test cycle of six weeks will be used. The following activities will be part of the test cycle. For more details, see the Effort and Schedule section.
1. Knowledge Transfer / Understanding
2. Test Case Line Items (Test Case Summary)
3. Detail Test Case Preparation
4. Exploratory Testing
5. Test Execution
6. Test Automation
7. Reporting
8. Test Plan / Test Case / Defects / Test Execution Review
Test Execution Process
Exploratory testing will be performed by all test engineers throughout the cycle. Initially more effort will be devoted to exploratory testing, and this will be reduced subsequently. While executing written test cases manually, the tester will be responsible for exploring related areas as well to find defects. Once defects are found, new test cases will be added accordingly.
The priority of test case execution will be as listed below –
1) The work flows in English language
2) Data correctness at backend for [features]
3) Data validation for the form fields
4) The work flows in other languages
Test Case and Bug Tracking
Test cases will be tracked using a set of Excel worksheets. These will provide both detail-level and summary-level information on test cases. There will be separate Excel files for different testing types (Functionality, Data Correctness, etc.), and each worksheet within a file will correspond to a test suite. This will help in executing a subset of test suites when required.
For the pilot, all bugs found during testing will be reported in Excel.
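The tabular defect log can be modelled with Python's csv module (used here as a lightweight stand-in for the Excel sheets; the column names are illustrative). The same structure also makes it easy to filter, e.g. for open blockers.

```python
import csv
import io

# Columns mirror the fields a defect sheet might carry; names are illustrative.
FIELDS = ["id", "summary", "severity", "priority", "status"]

buf = io.StringIO()   # stands in for a .csv/.xls file on disk
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerow({"id": "BUG-001", "summary": "Crash on save",
                 "severity": 1, "priority": "P1", "status": "Open"})
writer.writerow({"id": "BUG-002", "summary": "Label misspelled",
                 "severity": 5, "priority": "P4", "status": "Open"})

buf.seek(0)
rows = list(csv.DictReader(buf))
# Severity 1 = Blocker per the severity table; filter open blockers for triage.
open_blockers = [r for r in rows if r["status"] == "Open" and r["severity"] == "1"]
```

One sheet per testing type, with one worksheet per suite, then maps naturally onto one file per type with one such table per suite.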
Bug severity and priority are defined below. For testers, severity is usually the more important attribute, while priority is defined by the Project Manager. Test engineers will generally give their opinion on priority, but assigning it is the Project Manager's responsibility.
Severity | |
1 | Blocker – System crash or anything that blocks testing progress |
2 | Critical – Crash or anything that loses persistent data or data corruption |
3 | Major – Feature that doesn’t work at all |
4 | Minor – Aspect of a feature that doesn’t work |
5 | Trivial – Cosmetic problems, misspelling in dialogs, redraw issues etc. |
Hardware Environment
2 desktop-class machines: Intel(R) Core(TM)2 CPU 4300 @ 1.80 GHz, 1.97 GB of RAM
Software Environment
Windows XP OS
Firefox 3.6
PuTTY (for making secure connections)
Test Tools
Tool Name | Purpose |
PICT 2.0 | Generating test data for combinatorial testing |
MS Excel 2007 | Documenting test cases as well as defects |
Selenium 2.0 | Automating web work flows using the Selenium WebDriver API |
CouchDB 1.0.1 API | Automating backend data processing |
Python 2.7 (language) | Automating all test cases, internally using the WebDriver and CouchDB APIs |
DOS batch files | Wrapping Python scripts and executing multiple test case automations with one click |
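To illustrate the combinatorial test data that PICT helps with: the sketch below enumerates every combination of some hypothetical form parameters with the standard itertools module. For larger models PICT would instead emit a much smaller pairwise-covering subset; full enumeration is shown here only because the example model is tiny.

```python
from itertools import product

# Hypothetical parameter model; real values would come from the feature specs.
parameters = {
    "language": ["en", "ar", "es"],
    "browser":  ["Firefox"],
    "role":     ["admin", "guest"],
}

# Exhaustive enumeration: 3 * 1 * 2 = 6 combinations.
names = list(parameters)
combinations = [dict(zip(names, values))
                for values in product(*parameters.values())]
```

Each resulting dict is one row of test data that a data-driven test case item can consume, which is how the "multiple test case items per test case" idea from the test case development section gets fed.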
Communication Protocols
The application is web based and is therefore accessed through web browsers over the HTTP protocol.
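Since all access goes over HTTP, a quick environment smoke check can be written with the standard library alone. The sketch below spins up a throwaway local HTTP server as a stand-in for the application (the real check would point at the QA environment's URL instead).

```python
import http.server
import threading
import urllib.request

def smoke_check(url, timeout=5):
    """GET the application URL and report whether it answers with HTTP 200."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status == 200

# Stand-in for the application under test: a local server on a free port.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
ok = smoke_check(url)
server.shutdown()
```

Running such a check at the start of each test day would surface QA-environment outages (one of the risks listed later) before any manual effort is spent.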
Test Effort and Schedule
The following shows the scheduled events that affect this test effort.
Week | Milestone/Effort | Start | End |
Week – 1 | Knowledge Transfer / Understanding | | |
Week – 2 | Test Case Line Items, Automation (initial setup) | | |
Week – 3 to 4 | Detail Test Case Preparation | | |
Week – 2 to 6 | Exploratory Testing | | |
Week – 4 | Test Case Review – Within Testing team and with Customer | | |
Week – 5 to 6 | Test Execution | | |
Week – 3 to 6 | Test Automation | | |
Week – 6 | Reporting | | |
Week – 6 | Test Execution Review | | |
The following is the list of deliverables that will be provided to the client during or after testing.
Deliverables | Delivery Date |
Test Plan Document | |
Test Case Line Items, List of Test Cases for Automation | |
Detail Test Case Document | |
Defect Reports | |
Test Automation Demo and Scripts | |
Test Execution Result Report | |
The test team will send a status report to stakeholders every Friday evening (IST). Documents sent during the prior week will be reviewed every Monday evening. A weekly call is scheduled for every Monday at 6:30 PM IST.
Test Contact List
Name | Role | E-mail | Cell/Landline |
| Sr. Test Engineer (Manual) | | |
| Test Lead (Automation) | | |
| Test Manager | | |
| Delivery Manager | | |
The table describes the human resources needed to execute this plan. Key support and liaison roles are also defined.
Title | Roles | Name |
Delivery Manager | Responsible for the Program - Manages the support and escalation process | |
Test Manager | Plan, track, and report on test execution Secure appropriate human and other resources Provide technical and managerial leadership of test team | |
Test Lead (Automation) | Automate manual test cases Report results to the Test Manager | |
Sr. Test Engineer (Manual) | Develop manual tests Execute manual tests Report results to the Test Manager | |
Stake Holder Team - Roles and Responsibilities
The table describes the human resources needed to review / track the execution of this plan.
Title | Roles | Name |
| The single point of contact for resolving any technical queries | |
| Responsible for reviewing the documents | |
| Responsible for doing inspections | |
o Test cases will be written by team members separately for every test suite mentioned in the in-scope section
o Every test case written by a team member will be reviewed by other team members, and subsets of test cases will be reviewed by the test manager. Rework will be done on identification of any defects in the test cases. The test cases will be reviewed using the following checklist –
§ Every test suite has a collection of positive / negative / boundary cases. If a test suite doesn't contain any negative / boundary test cases, there should be an explanation for it.
§ Test cases are written clearly, so that any tester can execute them easily.
§ Every test case has two parts: a generic test case, and test case items dependent on the test data set (this split can be used later for automating the test cases)
§ Every test case can be executed independently. If a test case changes the state of the system, its tear-down step should restore the system to the same state after test execution.
o Every document that is prepared and delivered to stakeholders will be reviewed / inspected by the corresponding contact points. The list of deliverables and the corresponding review activities is given below:*
SNo | Document | Inspection / Review | Inspection / Review conducted by @ Testing Team | Inspection / Review conducted by @ Stakeholders |
1. | Test Plan | Inspection | | |
2. | Test Cases | Review | | |
3. | Test Execution Report | Review | | |
* The storage location of the delivered documents is not added here because this is a proof of concept. Otherwise, it is important information.
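The checklist's independence rule (a test that mutates system state must restore it in tear-down) can be demonstrated with unittest's setUp/tearDown hooks. The shared dict below is an illustrative stand-in for persistent application state.

```python
import unittest

SYSTEM_STATE = {"records": 0}   # stands in for persistent application state

class StateRestoringTest(unittest.TestCase):
    """If a test changes system state, tearDown must restore it, so that
    every test case can still be executed independently."""
    def setUp(self):
        self.before = dict(SYSTEM_STATE)   # snapshot the starting state

    def test_creates_a_record(self):
        SYSTEM_STATE["records"] += 1       # the test mutates the system
        self.assertEqual(SYSTEM_STATE["records"], self.before["records"] + 1)

    def tearDown(self):
        SYSTEM_STATE.clear()
        SYSTEM_STATE.update(self.before)   # bring the system back

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(StateRestoringTest))
```

After the run, the shared state is back to its snapshot value, so the next test case (or a re-run of the same one) starts from a clean slate.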
Assumptions
o As compatibility testing is out of scope, Windows XP and Firefox are assumed for the testing – stakeholders to confirm the same.
o The QA build (x.x.x) will not change during our testing; in other words, we will not be testing a moving target. In case of such changes, the testing team requests to be intimated of the changes ASAP, so it can understand them and take appropriate action.
o All relevant test data required for testing will be available in the QA environment
Dependencies
o The QA environment is managed and maintained by stakeholders other than the QA team
o Testing during the pilot will depend on the test data currently present in the test environment
Acceptance Criteria: The technical documents, project management documents and test results are reviewed; feedback is provided on the content and processes, and any changes / modifications are taken up for discussion.
Sign off Criteria: [provide sign off criteria]
The following table describes the key risks to success of this plan, and contingencies to address them.
Risk | Contingency |
Unavailability of test environment | A dedicated resource from another team will be responsible for resolving QA environment issues |
Unavailability of support related to requirements, review of test artifacts, feedback etc. | A dedicated resource from another team will be responsible for clarifying doubts, conducting reviews and providing feedback on time |
Loss of access to the stakeholders' QA environment | |
Unavailability of functional documents | |