Platform QA Metrics

Registered by Gema Gomez

Track progress, determine how long it will take us to get to 100% coverage, and help us prioritize the work.

Blueprint information

Status: Complete
Approver: Pete Graner
Priority: Medium
Drafter: Gema Gomez
Direction: Approved
Assignee: Canonical Platform QA Team
Definition: Approved
Series goal: Accepted for quantal
Implementation: Implemented
Milestone target: ubuntu-12.10
Started by: Gema Gomez
Completed by: Gema Gomez

Whiteboard

*Metrics P*
During Precise we established a way of tagging bugs[1] and produced two bug reports to keep track of bugs found and still open[2] and bugs closed[3]. We also provided JSON so that anyone can access the lists and manipulate the data[4][5] (see the sketch after the links below).
[1] https://wiki.ubuntu.com/QATeam/AutomatedTesting/TestingTypeAndBugTracking
[2] http://reports.qa.ubuntu.com/reports/qa/qa-open-bugs.html
[3] http://reports.qa.ubuntu.com/reports/qa/qa-closed-bugs.html
[4] http://reports.qa.ubuntu.com/reports/qa/qa-open-bugs.json
[5] http://reports.qa.ubuntu.com/reports/qa/qa-closed-bugs.json
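
For example, a short Python sketch can pull one of the JSON reports above and tally open bugs per source package. The flat-list layout and the field names used here are assumptions; inspect the actual JSON to confirm its schema first.

import json
from collections import Counter
from urllib.request import urlopen

URL = "http://reports.qa.ubuntu.com/reports/qa/qa-open-bugs.json"

with urlopen(URL) as response:
    # Assumes the report is a flat list of bug records.
    bugs = json.loads(response.read().decode("utf-8"))

# "package" is an assumed field name for the affected source package.
per_package = Counter(bug.get("package", "unknown") for bug in bugs)

for package, count in per_package.most_common(10):
    print("%4d  %s" % (count, package))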

*Metrics Q*

Objectives

- Track bugs found as a result of all the QA initiatives within Ubuntu: an overall view of Platform QA + PS QA + Certification + Community, and interpret the numbers (see the tag-count sketch after this list).

NOTE: A wiki has been created to keep track of the tags each group will be using: https://wiki.ubuntu.com/QATeam/QABugTracking

- Track test cases added to the Platform QA Team's pool and run regularly.

- Track regressions found with automated testing.
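
A minimal sketch of how such per-group counts could be gathered with launchpadlib, assuming each group files bugs under an agreed tag. The tag names below are placeholders; the real ones are listed on the QABugTracking wiki page above.

from launchpadlib.launchpad import Launchpad

# Placeholder tag names; confirm against https://wiki.ubuntu.com/QATeam/QABugTracking
GROUP_TAGS = {
    "Platform QA": "qa-daily-testing",
    "Community": "qa-manual-testing",
}

lp = Launchpad.login_anonymously("qa-metrics", "production")
ubuntu = lp.distributions["ubuntu"]

for group, tag in GROUP_TAGS.items():
    # searchTasks defaults to open statuses; pass status=[...] to widen the search.
    tasks = ubuntu.searchTasks(tags=[tag])
    print("%s: %d open bug tasks tagged %s" % (group, len(tasks), tag))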

Blueprint Pad:
Now we need to set targets and agree on metrics for Q.

* # of new test cases. Platform QA Team to count the number of test cases added to the lab and running automatically on a regular basis.

* # of regressions found by our test suites (this is meant to measure test case effectiveness; we could keep stats of defects found per test case to know which test cases are the most effective at finding defects and write more of those). How would we do this?

When a test case that was passing suddenly fails and a bug is filed, that bug is to be considered a regression (see the sketch below).
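
A minimal sketch of that rule in Python, assuming pass/fail results for each test case are available as plain dictionaries (the data layout and names here are hypothetical):

def find_regressions(previous_run, current_run, bugs_by_test):
    """Return test cases that flipped from PASS to FAIL and have a bug filed."""
    regressions = []
    for test, result in current_run.items():
        if result == "FAIL" and previous_run.get(test) == "PASS":
            if test in bugs_by_test:  # only count failures with a bug attached
                regressions.append(test)
    return regressions

# Example with made-up data:
previous = {"boot-speed": "PASS", "suspend-resume": "PASS"}
current = {"boot-speed": "PASS", "suspend-resume": "FAIL"}
bugs = {"suspend-resume": "LP bug link"}
print(find_regressions(previous, current, bugs))  # ['suspend-resume']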

* Start doing test escape analysis: find out how many bugs escaped our testing and why. Can we automate test cases for them? Why weren't the test cases there in the first place? How many could we have found? How many are insignificant? What is the best way to keep track of this?

After release, we can count how many High and Critical bugs we have missed, and compare the bugs we've found against the bugs found by end users.
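
As one input to this count, a launchpadlib sketch could list High/Critical bug tasks filed against Ubuntu after release day. Treating every such bug as a potential test escape is an assumption to refine during the analysis.

from datetime import datetime
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously("qa-escape-analysis", "production")
ubuntu = lp.distributions["ubuntu"]

release_day = datetime(2012, 10, 18)  # Ubuntu 12.10 release date
escaped = ubuntu.searchTasks(
    created_since=release_day,
    importance=["High", "Critical"],
)
print("High/Critical bug tasks filed since release: %d" % len(escaped))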

Automated testing: Platform QA + PS QA + certification
Manual testing: Community testing (actually, *all*)
Other: End Users/Manufacturers

Number of confirmed bugs against precise in the first month after release; how many of them are regressions?

 * classify by area/package

Number of executed test cases during a release
Number of hits a Launchpad page gets?
How many bugs we missed with automated testing vs. manual testing

(?)

Work Items

Work items:
[gema] Create a wiki to keep track of all the different groups' tagging: DONE
[nskaggs] Find out or agree with the community on which tags they are going to use: DONE
[gema] Figure out which tag to use for Platform QA, generic bugs (https://wiki.ubuntu.com/QATeam/AutomatedTesting/TestingTypeAndBugTracking): DONE
[gema] Add a report to the QA reporting to track all these bugs (already in place for existing bugs; since nobody else is adding theirs, there is no need to change the report): DONE
[gema] Agree on a place to share the scripts to gather the metrics (I have my script in Launchpad at https://code.launchpad.net/~gema.gomez/+junk/metrics; whenever we have more scripts we can consider a Launchpad project): DONE
