Integration of Testing Process in Sprint

1. Understanding the stories' context from a testability perspective

Phase: Sprint Planning
  • Stories should be understood from the end user's perspective
  • Testers should be clear about what the changes are and the reasons behind them
  • Identify missing test requirements (e.g. login / password, contest id, test environment accessibility, access to new features, etc.)
    • What should be done to fulfil these requirements?
    • Track missing requirements and make them available as soon as possible
  • Flag features, requirements, or parts of requirements that are not testable (e.g. login to Salesforce is not permitted, etc.)

2. Create a running document containing clarifications and their resolutions

Phase: Sprint Planning
  • A document that contains the history of all questions or concerns raised
  • A reference document for testers when in doubt
  • A document for the product owner to gauge how well equipped the testers are

3. Add notes for testers: testing ideas or pointers

Phase: Sprint Planning
  • A broader idea of how to test the stories
  • TODO items that testers need to track and arrange in order to conduct the testing successfully

4. Adding new test ideas / cases targeting stories in sprint

Phase: Sprint Planning
  • At this point, only the test summary should be written
  • A test idea should address risks and answer one or more of the following questions:
    • Is this test case trying to validate the functionality of a new feature?
    • Is this test case trying to re-test a bug fix?
    • Is there a high probability of finding defects on execution of this test case?
  • Attributes of good test cases
    • It explains the problem or scenario from the end user's perspective
    • It is written as if the application is not available and still in development
    • It suggests to the tester what to test, not exactly what steps to perform
  • Example - Instead of "Name field shouldn't be more than 25 characters", it could be "Data validation of Registration form", and the details of the test case should mention what exactly to test for each field. Here a data-driven test strategy can be used while executing the test case, and a pairwise testing strategy can be used to come up with relevant test data (see the sketch after this list).
  • Avoid prefixing test case titles with "Verifying", as implicitly all test cases are verifying
  • The goal should be to write fewer 'quality' test cases while covering more requirements
    • More test cases mean more documentation and less exploratory testing
    • More test cases mean more time spent on reviews and less time for exploratory testing
    • More test cases mean more time spent executing scripted test cases
    • More execution of scripted test cases means more tedious work and less productivity
    • More execution of scripted test cases means less time for finding defects
    • Now, what do you want?
    • This doesn't mean simply writing fewer test cases, but taking a balanced approach to writing them
    • Writing a minimal number of test cases that cover more is a challenge; take it up
    • Authored test cases are required to show our output to customers and can't be avoided
    • Documented test cases can be automated, which means more time for finding defects
  • Once you find a defect, add new test case(s) for it
    • What has failed today has a high probability of failing in the future
    • Remember the question - "Is there a high probability of finding defects on execution of this test case?"
  • Categorize each test idea / case as 'Positive', 'Boundary', or 'Negative'
  • Usually a story with more points should have more test cases, but it is not mandatory
    • More points on a story means more effort going into its implementation
    • More effort in implementation means more chance of introducing bugs
    • This requires more testing, which means more test cases
    • It is not mandatory because sometimes a simple change by a novice developer can result in nasty bugs
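
  A minimal sketch of the data-driven approach described above, using Python with pytest. The field names, the 25-character limit, and the submit_registration helper are illustrative assumptions, and the data rows are a small hand-picked, pairwise-style selection rather than a full cross-product.

    import pytest

    def submit_registration(name, email, age):
        """Stand-in for driving the real Registration form (e.g. via Selenium).
        Implements the assumed validation rules so the sketch is runnable."""
        return (0 < len(name) <= 25) and ("@" in email) and age.isdigit()

    # One test case ("Data validation of Registration form") with many data rows:
    # positive, boundary, and negative inputs all live under a single test idea.
    @pytest.mark.parametrize("name, email, age, should_pass", [
        ("Alice", "alice1@mailinator.com", "30", True),    # positive
        ("A" * 25, "alice2@mailinator.com", "18", True),   # boundary: max name length
        ("A" * 26, "alice3@mailinator.com", "30", False),  # negative: name too long
        ("Alice", "not-an-email", "30", False),            # negative: invalid email
        ("", "alice4@mailinator.com", "-1", False),        # negative: empty name, bad age
    ])
    def test_registration_form_data_validation(name, email, age, should_pass):
        assert submit_registration(name, email, age) == should_pass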

5. Linking existing test ideas / cases targeting stories in sprint

Phase: Sprint Planning
  • Resist the temptation to add a new test case; instead, search the test case management system for a relevant existing one
  • For the same reason, Excel is not a good tool for maintaining the test plan and test cases
  • Add tags in the description field of each test case so that relevant test cases can be found faster
  • Create a new test plan for every sprint and link test cases to it

6. Reviewing and finalizing the test ideas / cases for the sprint

Phase: Sprint Planning
  • For each story there must be a set of:
    • Positive Normal Scenarios - cover the basic functionality as described in the requirement document
    • Positive Boundary Scenarios - cover the functionality at its boundaries or in less common ways
    • Negative Scenarios - cover cases where the application is expected to return errors
  • Review Technique
    • Each team member writes test ideas / cases for their assigned stories
    • Team members review each other's test ideas and jot down their review comments
    • The team brainstorms and reviews the ideas together, documents them, and gets them reviewed by the Test Lead / Test Manager
  • As this is an initial review, documenting every review comment can be skipped
  • Documenting the final list of ideas should be the goal
  • Review by the Test Lead / Test Manager is an important step in the process
    • Conduct a thorough formal review at this stage
    • A history of review comments should be maintained
    • If required, review metrics can be maintained and tracked from here
    • As the senior resource on the team, their input should cover gaps in the testing ideas
  • Once the test ideas / cases are reviewed by the Test Lead / Test Manager and finalized, they should be shared with the client for comments

7. Write detailed test cases

Phase: Sprint Planning, Sprint Design
  • By this time, the application or feature should be in a working state so that the steps to execute can be detailed
  • Test data should be created for the test case and kept ready for the execution phase (see the sketch after this list)
    • Test data should be meaningful and follow a consistent pattern
    • Real company names and personal details should be avoided
    • Test data prepared for learning the application or for testing should be deleted to keep the workspace clean
    • Any difficulty in creating test data, for whatever reason, should be highlighted to the lead / manager / stakeholders immediately
    • When many mail IDs are needed, a service like 'www.mailinator.com' can be used
  • Expected results should be mentioned clearly
    • It is not necessary to mention expected results for each and every step
    • For example, login must succeed before proceeding to the next step, so there is no need to verify it explicitly
    • All expected results for a scenario should be listed explicitly
    • A test case with partial expected results is of little use, as it doesn't test the scenario / idea from multiple angles
    • Automation built on it implements only partial checkpoints / validations and hence might miss defects
  • Assign a priority to each test case; the priority is associated with the sprint rather than with the test case itself
  • While writing detailed test cases, each test idea can be expanded horizontally or vertically
    • Horizontal expansion means adding more test ideas / test cases to the test suite
    • Vertical expansion means adding more test items under a test idea / test case
    • Test items usually take the form of sets of data inputs, i.e. combinations of data values from different fields
    • If a test idea is associated with multiple data points implemented separately on different pages / screens, scale horizontally
    • If a test idea is associated with multiple data points implemented on the same page / screen, scale vertically
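
  A minimal sketch of patterned test data creation, following the guidelines above; the fields and the naming prefix are illustrative assumptions. Mailinator-style addresses give reachable throwaway inboxes without personal details, and a recognizable prefix makes later cleanup easy.

    # Build one user record with a consistent, recognizable pattern;
    # the "qa-s<sprint>" prefix is an illustrative convention, not a standard.
    def make_registration_user(seq: int, sprint: int = 12) -> dict:
        tag = f"qa-s{sprint}-user{seq:03d}"       # e.g. "qa-s12-user001"
        return {
            "name": tag,                          # no real personal names
            "email": f"{tag}@mailinator.com",     # throwaway but reachable inbox
            "company": "TestCo",                  # fictitious placeholder company
        }

    if __name__ == "__main__":
        # Prepare three users ahead of the execution phase.
        for i in range(1, 4):
            print(make_registration_user(i))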

8. Reviewing and finalizing the detailed test cases

Phase: Sprint Design
  • A percentage of randomly selected test cases should be reviewed, depending on time availability
    • They should be reviewed by running the application and following the "steps to execute"
    • If more than x test cases by one owner are found to be written incorrectly, with missing steps
      • Return all test cases to their owner for rework
      • Review another owner's test cases instead
      • As the work was not done properly, the owner needs to fix it by putting in extra hours
    • If only a small number of test cases need updating
      • Update the test case and put your comment in the remarks section
      • Review it together with the owner and finalize the test case
    • If required, metrics can be maintained and tracked here to monitor the review process
  • All test cases will be reviewed and corrected during the execution phase
    • Every team member will execute other members' test cases
    • If a test case is written incorrectly or steps are missing, it will be corrected and a review comment added
  • Review comments should be preserved in all cases

9. Execution of test cases

Phase: Sprint Execution
  • Execute all automated regression test cases
  • Follow the test scripts' instructions to manually execute all test cases assigned to the stories
  • If a test case fails, raise a defect and copy the details from the test case into the bug description
  • Perform exploratory testing around existing test cases
    • Done properly, this yields the maximum number of defects
    • Log defects in detail
  • There might be multiple cycles of test case execution in one sprint
    • Complete test execution in the QA environment
    • Complete test execution in the Integration environment
    • Partial test execution in the Pre-Production environment
  • The number of test cases to execute depends on time availability and the test case priority (see the sketch after this list)
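
  A minimal sketch of priority-driven selection across environments using pytest markers; the marker names (p1, p3), the TEST_ENV variable, and the URLs are illustrative assumptions. Custom markers should also be registered in pytest.ini to avoid warnings.

    import os
    import pytest

    # The environment is chosen outside the suite, e.g.:
    #   TEST_ENV=preprod pytest -m p1          (partial, high-priority run)
    #   TEST_ENV=qa pytest -m "p1 or p3"       (full run in QA)
    BASE_URLS = {
        "qa": "https://qa.example.com",
        "integration": "https://int.example.com",
        "preprod": "https://preprod.example.com",
    }
    BASE_URL = BASE_URLS[os.environ.get("TEST_ENV", "qa")]

    @pytest.mark.p1  # highest priority: runs in every cycle, including Pre-Production
    def test_registration_smoke():
        assert BASE_URL.startswith("https://")  # placeholder body

    @pytest.mark.p3  # lower priority: dropped first when time is short
    def test_registration_field_help_text():
        assert BASE_URL.startswith("https://")  # placeholder body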

10. Identifying defects

Phase: Sprint Execution
  • Defects should always be raised after first understanding the context
    • Is the defect related to the stories being tested?
    • Is the defect related to usability, which is not much of a priority right now?
    • Other details vary from project to project
  • Make sure the severity of defects is assigned properly
    • If not done properly, this is the one thing that irritates developers / the product owner most
    • If low-severity defects are assigned high severity, triage meetings take a lot of time to filter them out
    • If high-severity defects are assigned low severity, they are ignored and not fixed
  • Defects should be logged with enough detail (see the example after this list)
    • Steps to reproduce should be clear
    • If not done properly, especially when working remotely, it takes developers a lot of time to understand the issue
    • Give test data details in the bug; rather than just suggesting logging in to the application, mention the credentials as well
    • More information: http://www.rgnotes.com/2011/12/how-to-report-bugs-effectively.html
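
  A minimal example of a defect write-up following the points above; the story ID, build number, credentials, and values are illustrative placeholders.

    Title: Registration form accepts a 26-character name (Story S-123)
    Severity: Medium
    Environment: QA, build 1.4.2
    Test data: login qa-s12-user001 / Passw0rd! (QA test account), name = 26 x "A"
    Steps to reproduce:
      1. Log in with the credentials above
      2. Open the Registration form
      3. Enter the 26-character name and submit
    Expected: validation error, as the name is limited to 25 characters
    Actual: the form is submitted successfully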

11. Adding test cases around defects

Phase: Sprint Execution
  • Add test cases for every defect raised
    • The test case needs only a title (see the sketch after this list)
    • The description / steps to execute can refer to the defect ID
  • Add test cases for scenarios around or related to the defects
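
  A minimal sketch of a regression test added around a defect; the defect ID BUG-1042 and the helper function are hypothetical.

    # The title (test name) carries the intent; per the guideline above,
    # the body simply refers back to the hypothetical defect BUG-1042.
    def submit_registration_name(name: str) -> bool:
        """Stand-in for driving the real form; assumed 25-character name limit."""
        return 0 < len(name) <= 25

    def test_registration_rejects_26_char_name():
        """Regression for BUG-1042; see the defect for full steps to reproduce."""
        assert submit_registration_name("A" * 26) is False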
