Posts

Showing posts with the label Test Metrics

Key Agile Metrics for a Sprint

In any agile program, it is important to track both business and development progress metrics. Agile metrics help a team better understand their development process and make releasing quality software easier and faster. There is a big debate about the usage of some metrics, and there are concerns about using them in teams. Usage of metrics is usually guided by three rules:
- When a measure becomes a target, it ceases to be a good measure (Goodhart's Law)
- Measures tend to be corrupted/gamed when used for target setting (Campbell's Law)
- Monitoring a metric may subtly influence people to maximize that measure (The Observer Effect)
Below are some popular key agile metrics at the sprint level as the sprint progresses. Pre Sprint Execution: Business Value, a value given to a user story by
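For illustration, here is a minimal Python sketch of how sprint-level figures such as planned versus delivered business value might be tallied. The Story fields and example numbers are assumptions made for the sketch, not something taken from the post.

```python
# Minimal sketch: summarising business value and remaining effort for a sprint.
# The Story fields (business_value, points, status) are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Story:
    name: str
    business_value: int   # relative value assigned before the sprint
    points: int           # estimated effort
    status: str           # "todo", "in_progress", or "done"

def sprint_summary(stories):
    total_value = sum(s.business_value for s in stories)
    delivered_value = sum(s.business_value for s in stories if s.status == "done")
    remaining_points = sum(s.points for s in stories if s.status != "done")
    return {
        "planned_business_value": total_value,
        "delivered_business_value": delivered_value,
        "value_delivered_pct": round(100 * delivered_value / total_value, 1) if total_value else 0.0,
        "remaining_points": remaining_points,
    }

if __name__ == "__main__":
    sprint = [
        Story("Login page", business_value=8, points=5, status="done"),
        Story("Password reset", business_value=5, points=3, status="in_progress"),
        Story("Audit log", business_value=3, points=8, status="todo"),
    ]
    print(sprint_summary(sprint))
```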

Measurement & Metrics

"You cannot improve what you cannot measure" A lot had been talked and spoken already about ‘Measurement & Metrics” in Software Testing.  In this article I am not saying that something new will be written or discussed, but my idea for this article is to summarize on this topic from various different sources. Measuring, Monitoring and Acting on it are key roles of any manager in any industry and Software Testing is no different.  A lot of times it has been seen that KPIs are being measured and monitored but not being acted.  In that case, this whole exercise becomes worthless.  Sometimes, the measurements themselves are not collected and reported correctly, which is more dangerous than not doing this activity at all. Other times, it had been observed that there are numerous KPIs are being monitored without having any goals in mind.  As per me, the number of KPIs that should be measured and monitored should be minimal and any addition to new KPIs should be discuss

QA Metrics - Testing Progress Metrics

Quality Progress Metrics

Start tracking before Test Execution

User Stories, Features or Requirements Coverage Progress
- Are test cases being authored for all required stories, features or requirements?
- Are test cases for important features or requirements being targeted first?
- Is the number of test cases (for stories, features or requirements) proportional to the effort required for their development?
- Are any features, stories or requirements being left over?
- Is the team on target in covering all required features, stories or requirements?

Test Case Readiness Progress
- When will all the test cases be ready to run?
- Will all the test cases be ready to run by the end of the iteration?
- How many test cases must the team still write and review?
- How many test cases are ready to run?

Test Automation Progress
- How many test cases have been automated?
- How many test cases are being automated on a regular basis?
- Is the team on track in automating the test cases?

Start tracking during Te
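As a rough illustration of how the progress questions above can be turned into numbers, here is a minimal Python sketch that computes coverage, readiness and automation percentages from simple counts. The field names and sample data are assumptions for the sketch, not taken from the post.

```python
# Minimal sketch: turning testing progress questions into simple percentages.
# The TestCase fields and sample data are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TestCase:
    requirement_id: str
    ready_to_run: bool     # authored and reviewed
    automated: bool

def progress_metrics(required_requirements, test_cases):
    covered = {tc.requirement_id for tc in test_cases}
    coverage_pct = 100 * len(covered & set(required_requirements)) / len(required_requirements)
    ready_pct = 100 * sum(tc.ready_to_run for tc in test_cases) / len(test_cases)
    automated_pct = 100 * sum(tc.automated for tc in test_cases) / len(test_cases)
    return {
        "requirements_coverage_%": round(coverage_pct, 1),
        "test_case_readiness_%": round(ready_pct, 1),
        "test_automation_%": round(automated_pct, 1),
    }

if __name__ == "__main__":
    requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]
    cases = [
        TestCase("REQ-1", ready_to_run=True,  automated=True),
        TestCase("REQ-1", ready_to_run=True,  automated=False),
        TestCase("REQ-2", ready_to_run=False, automated=False),
        TestCase("REQ-3", ready_to_run=True,  automated=True),
    ]
    print(progress_metrics(requirements, cases))
```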

Testing Metrics & Measurement

Metrics & Management You are conducting testing for a product and, in the middle of testing, the Product Owner (PO) would like to know the quality of the product. What do you answer? How do you measure the quality of the product? If the number of open defects is huge, especially high severity defects, then the question is easy to answer: "At the moment the product sucks". But what if the number of open defects is very low, especially if no high severity defects exist? Is the quality of the product good? It depends on how long it has been since a high severity defect was found, and the other important aspect is to check the coverage of testing. What percentage of requirements has been tested? What percentage of code has been covered during the testing? If coverage is low, then the quality of the product can't be measured; only the quality of the tested features can be explained. If high severity defects are few in number and have not been raised for a long time, and coverage is also almost m
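To make the reasoning above concrete, here is an illustrative Python sketch that combines open defects by severity with test coverage into a crude quality read-out. The thresholds and verdict strings are invented assumptions for the example, not an accepted standard.

```python
# Illustrative sketch only: a crude quality read-out combining open defects
# by severity with test coverage. Thresholds and verdicts are assumptions.

def quality_readout(open_defects_by_severity, requirements_coverage_pct,
                    days_since_last_high_severity):
    high = open_defects_by_severity.get("high", 0)
    total = sum(open_defects_by_severity.values())

    if requirements_coverage_pct < 60:
        return "Coverage too low to judge product quality; only tested features can be assessed."
    if high > 0:
        return f"Quality is poor: {high} high severity defect(s) still open."
    if total > 20:
        return f"Quality is questionable: {total} open defects, though none high severity."
    if days_since_last_high_severity >= 14:
        return "Quality looks good: high coverage and no recent high severity defects."
    return "Too early to say: no open high severity defects, but the last one was found recently."

print(quality_readout({"high": 0, "medium": 3, "low": 5},
                      requirements_coverage_pct=85,
                      days_since_last_high_severity=21))
```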