Posts

Showing posts from August, 2012

QA Metrics - Testing Progress Metrics

Quality Progress Metrics

Start tracking before Test Execution

User Stories, Features or Requirements Coverage Progress
- Are test cases being authored for all required stories, features or requirements?
- Are test cases for important features or requirements being targeted first?
- Is the number of test cases (per story, feature or requirement) proportional to the effort required for its development?
- Are any features, stories or requirements being left out?
- Is the team on target to cover all required features, stories or requirements?

Test Case Readiness Progress
- When will all the test cases be ready to run?
- Will all the test cases be ready to run by the end of the iteration?
- How many test cases must the team still write and review?
- How many test cases are ready to run?

Test Automation Progress
- How many test cases have been automated?
- How many test cases are being automated on a regular basis?
- Is the team on track in automating the test cases?

Start tracking during Test Execution…
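As an illustration, progress questions like the ones above can be rolled up from a simple per-requirement tracking table. The data structure and field names below are assumptions made for this sketch, not anything prescribed by the post:

```python
from dataclasses import dataclass

@dataclass
class RequirementStatus:
    """Hypothetical per-requirement tracking record."""
    name: str
    test_cases_planned: int
    test_cases_ready: int      # written and reviewed, ready to run
    test_cases_automated: int

def progress_metrics(reqs):
    """Roll per-requirement records up into the progress questions above."""
    planned = sum(r.test_cases_planned for r in reqs)
    ready = sum(r.test_cases_ready for r in reqs)
    automated = sum(r.test_cases_automated for r in reqs)
    uncovered = [r.name for r in reqs if r.test_cases_planned == 0]
    return {
        # share of requirements that have at least one planned test case
        "coverage_pct": 100.0 * sum(1 for r in reqs if r.test_cases_planned > 0) / len(reqs),
        "readiness_pct": 100.0 * ready / planned if planned else 0.0,
        "automation_pct": 100.0 * automated / planned if planned else 0.0,
        "still_to_write": planned - ready,
        "requirements_left_over": uncovered,
    }

reqs = [
    RequirementStatus("login", 10, 8, 5),
    RequirementStatus("checkout", 20, 10, 2),
    RequirementStatus("reports", 0, 0, 0),   # not yet covered
]
print(progress_metrics(reqs))
```

Tracked at regular intervals (e.g. daily during test authoring), these numbers answer "is the team on target?" as a trend rather than a one-off snapshot.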

Manual Testing Core Activities

What are the core activities of testing, especially for manual testers? How should we plan testing efforts? Too often the effort for review activities and for retesting of defects is heavily ignored, and the schedule suffers because of it. The following core test activities should be considered while planning:

- Test Ideas (Not Reviewed): a test idea is a test case with only a summary and no details
- Test Ideas (Reviewed L1): test ideas reviewed by a peer team member
- Test Ideas (Reviewed L2): test ideas reviewed by the Test Lead / Test Manager
- Test Ideas (Signed-off): test ideas reviewed by the client and signed off
- Test Cases (Not Reviewed): a test case is an extension of a test idea, adding more detail around it (descriptive, clear steps for execution, test data information, etc.)
- Test Cases (Reviewed L1): test cases reviewed by a peer team member
- Test Cases (Reviewed L2): test cases reviewed by the test lead / test manager. Review…
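The activities above form a review pipeline with a fixed order of stages. A minimal sketch of tracking where each artifact sits in that pipeline (the stage identifiers and the `readiness_report` helper are illustrative assumptions, not terms from the post):

```python
from collections import Counter

# Hypothetical pipeline stages, in order, mirroring the activities above.
STAGES = [
    "idea_not_reviewed",
    "idea_reviewed_l1",
    "idea_reviewed_l2",
    "idea_signed_off",
    "case_not_reviewed",
    "case_reviewed_l1",
    "case_reviewed_l2",
]

def readiness_report(artifacts):
    """artifacts: mapping of artifact id -> current pipeline stage.

    Returns a count per stage, so a planner can see how much review
    and rework effort is still ahead of the team.
    """
    counts = Counter(artifacts.values())
    return {stage: counts.get(stage, 0) for stage in STAGES}

work = {
    "TC-1": "idea_not_reviewed",
    "TC-2": "idea_reviewed_l1",
    "TC-3": "case_reviewed_l2",
}
report = readiness_report(work)
print(report)
```

Counting artifacts per stage makes the hidden review effort visible in the plan instead of being discovered as a schedule slip later.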

Testing Metrics & Measurement

Metrics & Management

You are conducting testing for a product, and in the middle of testing the Product Owner (PO) would like to know the quality of the product. What do you answer? How do you measure the quality of the product? If the number of open defects is huge, especially high-severity defects, then the question is easy to answer: "At the moment, the product sucks." But what if the number of open defects is very small, especially if no high-severity defect exists? Is the quality of the product good? It depends on how long it has been since a high-severity defect was found; the other important aspect is the coverage of testing. What percentage of requirements has been tested? What percentage of code has been covered during testing? If coverage is low, the quality of the product can't be measured; only the quality of the covered features can be described. If high-severity defects are few in number and none have been raised for a long time, and coverage is also almost m…
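The reasoning above combines three signals: open high-severity defects, how long it has been since one was found, and test coverage. A minimal heuristic sketch of that decision, where the thresholds (80% coverage, 14 defect-free days) are illustrative assumptions rather than standards from the post:

```python
def quality_signal(open_high_severity, days_since_last_high_severity, coverage_pct):
    """Heuristic quality answer for the PO; thresholds are assumptions."""
    if open_high_severity > 0:
        return "poor: high-severity defects are still open"
    if coverage_pct < 80:
        return "unknown: coverage too low to judge the whole product"
    if days_since_last_high_severity >= 14:
        return "good: broad coverage and no recent high-severity defects"
    return "watch: coverage is broad but high-severity defects were found recently"

print(quality_signal(open_high_severity=0,
                     days_since_last_high_severity=21,
                     coverage_pct=90))
```

The key point the sketch encodes: a low defect count alone returns "unknown" when coverage is thin, because an untested product and a good product look the same in the defect tracker.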

Integration of Testing Process in Sprint

1. Understanding the stories' context from a testability perspective
Phase: Sprint Planning
- Stories should be understood from the end user's perspective
- It should be clear to testers what the changes are and the reasons behind those changes
- Identify missing test requirements (e.g. login / password, contest id, test environment accessibility, access to new features, etc.). What should be done to fulfil these requirements? Track missing requirements and make them available as soon as possible
- Identify features, requirements or parts of requirements that are not testable (e.g. not permissible to log in to Salesforce, etc.)

2. Create a running document containing clarifications and their resolutions
Phase: Sprint Planning
- A document that contains the history of all questions or concerns raised
- A reference document for testers when in doubt
- A document for the product owner to know how well equipped his / her testers are

3. Add notes to testers, testing ideas or…