Key Agile Metrics for a Sprint


In any agile program, it is important to track both business and development progress metrics. Agile metrics help a team better understand its development process and make releasing quality software easier and faster. There is an ongoing debate about the usage of some metrics, and there are concerns about how they are used within teams. Usually, the usage of metrics is guided by three rules -
  1. When a measure becomes a target, it ceases to be a good measure (Goodhart's Law)
  2. Measures tend to be corrupted/gamed when used for target setting (Campbell's Law)
  3. Monitoring a metric may subtly influence people to maximize that measure (The Observer Effect)
Below are some popular agile metrics at the sprint level, grouped by when they apply as the sprint progresses.

Pre-Sprint Execution
  • Business Value: A value assigned to a user story by the product owner, representing its impact on stakeholders

During Sprint Execution
  • Work in Progress (WIP): Number of tasks that are in progress
  • Burn-down / Burn-up Chart: Shows the trend of remaining effort for the sprint; if new tasks are added in the middle of the sprint, the burn-up chart should be used instead
  • Cycle Time: Total time taken by an individual issue to move from the "in progress" to the "done" state
  • Velocity: Rate at which the team completes and delivers stories
  • % Automated Test Coverage: Percentage of the code base or requirements covered by automated tests
  • Test Pass / Fail Over Time: Shows the trend of testing progress within the sprint
  • Defects Trend (CFD): Shows the trend of product quality during development in the sprint

Post-Sprint Execution
  • Customer / User Satisfaction: Count of smiley-face indicators after the sprint demo
  • Team Happiness: Count of smiley-face indicators after the retrospective meeting
  • Story Committed vs Completed (On-Time Delivery): Measures the team's ability to predict its capacity and estimate effort

Business Value
The purpose of any software development effort is to create features that deliver business value. There are two questions associated with it: (1) How do we know if we are delivering value? (2) Are we delivering the right thing? The "Business Value" metric can measure the value delivered per sprint in terms of points or a dollar amount, but there is no way to track the real impact of the software until it is released. Some more key points related to this metric are as follows -
  • The product owner prioritizes higher-value items towards the top of the backlog so that each sprint can deliver the maximum value possible
  • There is no standard formula for measuring value, but a clear view of what "value" means to the stakeholders needs to be articulated at the beginning
  • The product owner can use techniques such as "T-shirt sizing" to prioritize project stories, or alternatively use a three-dimensional metric that incorporates complexity, business value, and ROI
  • For a project with a definite end, the early sprints deliver very high value and later sprints gradually tend to deliver less and less
  • At some point, the cost of development eclipses the potential value of running another sprint, which is a good time for the team to switch to a new product
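As an illustration only, here is a minimal sketch of the three-dimensional scoring idea above; the story data, field names, and weighting are hypothetical, since there is no standard formula for value:

```python
# Hypothetical three-dimensional priority score combining business value,
# complexity, and ROI; each team must define its own scoring scheme.
stories = [
    {"name": "Checkout flow", "business_value": 8, "complexity": 5, "roi": 7},
    {"name": "Admin report",  "business_value": 3, "complexity": 2, "roi": 4},
    {"name": "Login via SSO", "business_value": 6, "complexity": 8, "roi": 5},
]

def priority_score(story):
    # Favor high value and ROI, penalize complexity (assumed weighting).
    return story["business_value"] + story["roi"] - 0.5 * story["complexity"]

# The product owner orders the backlog so each sprint delivers maximum value.
for story in sorted(stories, key=priority_score, reverse=True):
    print(f'{story["name"]}: score {priority_score(story):.1f}')
```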

Work in Progress (WIP)
Multitasking becomes the norm while working on multiple items, and that leads to high WIP. Multitasking sounds productive, but it is deceptively time-consuming: when WIP is high, it is most likely that work is waiting and the team is switching between tasks. Limiting the amount of WIP therefore improves throughput and forces the team to finish work before starting more. At a fundamental level, a WIP limit encourages a culture of "done".
  • A WIP limit determines the minimum and maximum amount of work that can stay in each status of the workflow
  • The goal of a WIP limit is to ensure that everybody has work to do, but no one is multitasking
  • As a best practice, some teams set the maximum WIP limit below the number of team members. If someone finishes an item first and the team is already at its WIP limit, he or she can join another developer or tester to knock out items from their plate
  • Resist the temptation to raise the WIP limit just because the team keeps breaching it. Understand the reasons behind the breaches first and act accordingly
  • Consistent sizing of individual tasks helps in setting the WIP limit correctly. It is important to keep individual tasks to no more than 16 hours of work
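Below is a minimal sketch of how a board might enforce a WIP limit before a task is pulled into a column; the column names, tasks, and limit values are assumptions for illustration:

```python
# Illustrative WIP-limit check: refuse to pull a task into a workflow
# column once that column has reached its configured maximum.
wip_limits = {"in_progress": 3, "in_review": 2}  # assumed limits per column

board = {
    "todo":        ["T-4", "T-5", "T-6"],
    "in_progress": ["T-1", "T-2", "T-3"],
    "in_review":   [],
    "done":        [],
}

def pull_task(board, task, from_col, to_col):
    limit = wip_limits.get(to_col)
    if limit is not None and len(board[to_col]) >= limit:
        # At the limit: swarm on existing work instead of starting more.
        raise RuntimeError(f"WIP limit of {limit} reached for '{to_col}'")
    board[from_col].remove(task)
    board[to_col].append(task)

try:
    pull_task(board, "T-4", "todo", "in_progress")
except RuntimeError as err:
    print(err)  # WIP limit of 3 reached for 'in_progress'
```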

Burn-down / Burn-up Chart
Burn-down and burn-up charts are both used to track and communicate the progress of a project. A burn-down chart shows how much work remains to be done, whereas a burn-up chart shows how much work has been completed against the total amount of work. The burn-down chart is simpler but can sometimes hide important information; the burn-up chart avoids that ambiguity and gives a more complete picture of the project's progress.
  • People confuse the effort spent with the effort remaining. If these are plotted incorrectly, the insight from the report will be inaccurate
  • On a burn-down chart, it can appear that the team didn't accomplish much in the middle of the project but heroically finished everything at the end. The same data plotted on a burn-up chart might reveal that scope increased in the middle, items were removed at the end to meet the deadline, and the team actually made steady progress all along
  • The burn-up chart is preferred over the burn-down chart when project progress is presented regularly to the same audience. Apart from showing steady progress, it shows changes in scope due to the addition of work items or to testing revealing significant bugs, which might help convince customers to stop requesting changes
  • Both burn-up and burn-down charts help show the velocity of a team, which can be compared against the velocity required to meet the deadline
  • Burn-down charts are usually used at the sprint level, whereas burn-up charts are mainly used at the release or project level
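As a sketch, the data behind both charts can be derived from daily snapshots of total scope and completed work; the numbers below are invented, and the burn-up view exposes the day-3 scope increase that a burn-down line alone would hide:

```python
# Minimal sketch: derive burn-down and burn-up series from daily snapshots.
# "scope" is the total story points in the sprint, "done" is points
# completed; all numbers are invented.
snapshots = [  # (day, total scope, completed)
    (1, 40, 0), (2, 40, 8), (3, 48, 14), (4, 48, 25), (5, 48, 40),
]

for day, scope, done in snapshots:
    remaining = scope - done  # burn-down plots this remaining effort
    print(f"Day {day}: burn-down remaining={remaining:2d}, "
          f"burn-up done={done:2d} of scope={scope}")
```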

Cycle Time
It measures the elapsed time from when work starts on an item (story, task, bug, etc.) until it is ready for delivery. In the DevOps era, it is measured until the task is deployed to production.
  • It tells how long it takes to complete a task, so if an issue is reopened, worked on, and completed again, that extra time is also added to the cycle time
  • A team with consistent cycle times across many types of work (new features, bugs, etc.) is more predictable in delivering work and can give business owners data-driven estimates
  • Cycle time is a direct measurement of productivity: with a short cycle time, new features reach end users more quickly
  • If the cycle time stretches to a couple of days or more, possible reasons include
    • The story is too large
    • The task is not well understood by the team
    • The definition of done has expanded
    • Work in progress (WIP) is high
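A minimal sketch of computing cycle time from an issue's status-transition timestamps; the event log is invented and includes a reopen, whose extra time counts toward the total:

```python
from datetime import datetime

# Invented status-transition log for one issue. Cycle time is the total
# elapsed time between "in progress" and "done", so the reopen on Jan 5
# adds to the measured cycle time.
events = [
    ("2024-01-02 09:00", "in_progress"),
    ("2024-01-04 17:00", "done"),
    ("2024-01-05 10:00", "in_progress"),  # issue reopened
    ("2024-01-06 12:00", "done"),
]

def cycle_time_hours(events):
    total, started = 0.0, None
    for stamp, status in events:
        t = datetime.strptime(stamp, "%Y-%m-%d %H:%M")
        if status == "in_progress":
            started = t
        elif status == "done" and started is not None:
            total += (t - started).total_seconds() / 3600
            started = None
    return total

print(f"Cycle time: {cycle_time_hours(events):.0f} hours")  # 56 + 26 = 82
```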

Velocity
It measures the team's regular run rate: the rate at which the team delivers stories. It is one of the most hated metrics because it can be gamed easily, so the temptation to compare velocity across teams must be resisted; otherwise, instead of focusing on delivering working software that has business value for stakeholders, the team will be concerned only with delivering more story points. Instead, the trend should be analyzed in retrospectives to figure out the reason for a change in velocity.
  • Velocity measured in stories done is better than in story points
  • If velocity stays erratic over a long time, the team's estimation practices should be revisited
  • There are many ways velocity can be inflated
    • The team might start estimating higher effort
    • Stories will tend to become smaller (which is a good sign, though)
    • The team might put less effort into refactoring or testing
  • There can also be many reasons for a change in velocity other than team issues
    • Changes in team size between sprints: a new member joins or a veteran leaves
    • The sprint targets a short release cycle or maintenance work
    • The team doesn't understand the scope of work at the start of the sprint
    • The team is working on something new (technology, domain, etc.)
    • The team is working on legacy code
    • There are many holidays or sick leaves in the current sprint
    • The team had to deal with a few critical bugs
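One way to discuss the trend rather than sprint-to-sprint noise is a rolling average, as in this sketch with invented sprint data and an assumed window size:

```python
# Invented story points completed per sprint. A rolling average smooths
# sprint-to-sprint noise so retrospectives can focus on the trend.
completed_points = [21, 24, 18, 26, 23, 25]
WINDOW = 3  # assumed rolling-window size

for i in range(WINDOW - 1, len(completed_points)):
    window = completed_points[i - WINDOW + 1 : i + 1]
    print(f"Sprint {i + 1}: velocity={completed_points[i]}, "
          f"{WINDOW}-sprint average={sum(window) / WINDOW:.1f}")
```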

% Automated Test Coverage
Software projects become more complex over time as the lines of code grow (added features, defect fixes, etc.). That growing complexity tends to decrease the test coverage and quality of the product. The goal of automation is to reduce the time of testing and the cost of delivery while increasing test coverage and quality. Carefully defined metrics can provide insight into the status of the automated testing effort, and one such metric is percentage automated test coverage. It determines how much of the code base and functionality is covered by automated tests in a sprint. The automated tests include both unit tests and acceptance regression tests.
  • It is practically impossible to do full regression testing in a sprint without automated tests
  • All bug fixes should also be covered by automated tests
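The metric itself is a simple ratio; below is a rough sketch with invented counts. In practice, line coverage would come from a tool such as coverage.py (`coverage run -m pytest` followed by `coverage report`), while requirements coverage needs a mapping from automated acceptance tests to stories:

```python
# Rough sketch of the two flavors of this metric; all counts are invented.
covered_lines, total_lines = 4200, 5600        # from a coverage tool
automated_stories, total_stories = 17, 24      # stories with automated acceptance tests

print(f"Code coverage:         {100 * covered_lines / total_lines:.1f}%")
print(f"Requirements coverage: {100 * automated_stories / total_stories:.1f}%")
```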

Test Pass / Fail Over Time
As the application gets larger with each sprint, the total number of tests executed and passed should continue to increase. Failed tests should be flagged, especially if they relate to priority defects, and any test that stays red for a long time warrants investigation and resolution. This helps reach the agile objective of software that is releasable and of high quality at any given time, on a continuous basis.
  • The value of this metric should increase as the project progresses. If it doesn't, it might be because the QA team is not able to close the defects
  • When the test pass rate decreases after a steady increase, it might be because the QA team has started reopening defects
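A sketch of the data behind this chart, with invented daily counts; the total keeps growing while any dip in the pass rate gets flagged (the 95% threshold is an assumption):

```python
# Invented daily test results for one sprint. The total should keep growing
# as the application grows; a falling pass rate or long-lived failures
# warrant investigation.
daily_results = [  # (day, passed, failed)
    (1, 120, 5), (2, 131, 4), (3, 140, 9), (4, 152, 3), (5, 160, 1),
]

for day, passed, failed in daily_results:
    total = passed + failed
    rate = 100 * passed / total
    flag = "  <-- investigate" if rate < 95 else ""
    print(f"Day {day}: {passed}/{total} passed ({rate:.1f}%){flag}")
```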

Defects Trend (CFD)
The defects trend chart shows cumulative defects opened versus cumulative defects closed in the sprint, in the form of a cumulative flow diagram (CFD). It shows the rate at which defects are submitted and the rate at which they are closed. If the distance between the cumulative opened and cumulative closed lines is short, the team is resolving defects efficiently; otherwise it warrants investigation. Some of the questions that can be asked of the defects trend are –
  • Is the defect submission rate declining toward the end of the sprint, as expected?
  • Are new defects being found at all?
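A minimal sketch of the cumulative series behind a CFD, with invented daily defect counts; the gap between the two running totals is the open-defect backlog:

```python
# Invented defects opened/closed per sprint day. The CFD plots the two
# cumulative series; a widening gap means defects are being found faster
# than the team resolves them.
daily = [  # (day, opened, closed)
    (1, 4, 1), (2, 6, 3), (3, 5, 6), (4, 2, 5), (5, 1, 3),
]

cum_opened = cum_closed = 0
for day, opened, closed in daily:
    cum_opened += opened
    cum_closed += closed
    print(f"Day {day}: cumulative opened={cum_opened:2d}, "
          f"closed={cum_closed:2d}, backlog={cum_opened - cum_closed}")
```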

Customer / User Satisfaction
Any product is made for its customers, so the focus must be on fulfilling their needs, and every sprint should aim to increase the value delivered to them. Customer/user satisfaction is therefore one of the most important metrics to track. One way to measure it is to show a demo of the product after every sprint and count the smiley faces; a more formal way is to send surveys to customers and stakeholders, asking, for example, whether they are willing to recommend the company's products or services to others.

If the count is low, does this mean the team is not doing a proper job? That is a hard question, because there could be many other factors behind a low value –
  • The product owner is not clear about the customer requirements
  • The customer is not involved in developing stories or defining story acceptance criteria
  • Sprint reviews are not being conducted properly
  • Customers and stakeholders are usually not present during sprint reviews
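If surveys are used, one common way to quantify the "willing to recommend" question is a Net Promoter Score (NPS); here is a sketch with invented responses:

```python
# Sketch of a Net Promoter Score from answers to "How likely are you to
# recommend this product?" on a 0-10 scale (responses invented).
# NPS = % promoters (9-10) minus % detractors (0-6).
responses = [10, 9, 8, 7, 9, 4, 10, 6, 8, 9]

promoters = sum(1 for r in responses if r >= 9)
detractors = sum(1 for r in responses if r <= 6)
nps = 100 * (promoters - detractors) / len(responses)
print(f"NPS: {nps:.0f}")  # 5 promoters, 2 detractors -> NPS 30
```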

Team Happiness
This is a key metric for a successful scrum team. If teammates aren't enthusiastic and passionate about their work, no process or methodology can drive further improvement. If a team is not happy, the side effects usually start showing in future iterations, in the form of more injected defects, lower velocity, more reopened defects, etc. This metric is very hard to measure, as teammates may not always be vocal about raising concerns or showing their unhappiness during the retrospective meeting. Some of the reasons for team unhappiness could be –
  • There is a high number of impediments during the sprint and they are not being removed in a timely manner
  • Team members can't contribute in a product area because they lack knowledge or experience
  • Team members are working long hours sprint after sprint
  • There are internal conflicts among team members and they are not working collaboratively
  • Repeated mistakes are not being acknowledged or addressed
  • Team members do not feel encouraged or valued and have lost passion for their work
  • Agile metrics are being used to target individuals or teams

Story Committed vs Completed (On-Time Delivery)
This metric is a way to measure predictability, computed by comparing the number of stories committed versus the number of stories completed in the sprint (a sketch follows the list below). One thing an agile team should definitely be able to do is deliver software by a certain date. The score of this metric can be low for several reasons other than team effectiveness –
  • The team doesn't have an applicable reference story to make relative estimates
  • Not all team members are experienced in the story's domain or technology
  • Customer requirements are not clearly communicated to the team
  • Requirements keep changing (scope creep) during the sprint
  • There are many changes in the team (team disruption)
  • Changes need to be made in legacy code (new to many team members)
  • A single team member has made the estimation decision based on his or her own capability and thought process
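A minimal sketch of the computation, using an invented sprint history:

```python
# Invented sprint history. The committed-vs-completed ratio measures
# predictability: how reliably the team delivers what it plans.
sprints = [  # (sprint number, stories committed, stories completed)
    (1, 10, 7), (2, 9, 9), (3, 11, 8), (4, 10, 10),
]

for num, committed, completed in sprints:
    ratio = 100 * completed / committed
    print(f"Sprint {num}: {completed}/{committed} completed ({ratio:.0f}%)")
```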

