Integrate CI Metrics

Registered by Ronald Bradford

Use the Unit Testing coverage as a metric for a non-voting gate when code coverage decreases. Also use Unit Test coverage to identify packages with poor coverage (e.g. < 30%) as possible work for new contributors. Ideally, after a long evaluation period, this gate would become a candidate for a -1 vote.

Blueprint information

Steven Dake
Ronald Bradford
Ronald Bradford
Series goal:
Accepted for liberty
Milestone target:
liberty-2
Started by
Adrian Otto
Completed by
Adrian Otto

Related branches



As discussed at the Design Summit, we wish to use Unit Testing coverage as a metric for a future -1 gate when coverage decreases, and to use Unit Test coverage as an identifier of possible work for new contributors.

Coverage test output can be produced in three formats (HTML, XML, TXT). We can use the TXT format, as it is both human readable and the easiest to parse.

Coverage is run by default with:

$ tox -e cover

This produces HTML output in the /cover directory (/cover is listed in .gitignore).
The TXT version can be created with:

$ coverage report -m > coverage.txt
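Because the TXT report is line oriented, recomputing a precise overall figure from it is straightforward. As a sketch (the column layout is assumed from the sample output shown later in this blueprint and may vary between coverage.py versions), the overall percentage can be derived at two-decimal precision from the TOTAL line:

```python
# Sketch: extract a precise overall coverage figure from coverage.txt.
# Assumed layout: Name Stmts Miss Branch BrMiss Cover Missing
# (column positions may differ between coverage.py versions).

def total_coverage(report_path="coverage.txt"):
    """Return overall coverage as a percentage with 2-decimal precision."""
    with open(report_path) as report:
        for line in report:
            fields = line.split()
            if fields and fields[0] == "TOTAL":
                stmts, miss = int(fields[1]), int(fields[2])
                return round(100.0 * (stmts - miss) / stmts, 2)
    raise ValueError("no TOTAL line found in %s" % report_path)
```

Recomputing from the Stmts/Miss columns, rather than reading the rounded Cover column, is what gives the extra precision discussed below.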

Do we want to record the current coverage percentage and historical percentages over time? Wiki, repo?
This overall current percentage number for HEAD could be included in the weekly meeting for reference.

Calculating Percentage (for -1 test)

The coverage report produces a whole-number percentage (e.g. 42%), which is not granular enough for calculations.
A layman's test is: if the percentage of code coverage is less than the recorded value, a -1 is given. This would require the percentage to drop to 41% before triggering.
A better calculation is to produce a two-decimal-precision percentage from the per-line items in the report. If that percentage drops at all, a -1 is given.

i.e. if the current percentage is 42.50%, then the coverage of the changed code (the difference) needs to be 42.50% or better for the total not to drop. To raise the total percentage, new code would need coverage above that rate, ideally 100%.

The proposed gate can compare coverage with HEAD and HEAD~1 for differences.
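A minimal sketch of that comparison follows; the coverage_vote helper and the two-decimal rounding are assumptions for illustration, not an Infra implementation, and the HEAD / HEAD~1 percentages would come from running the coverage report against each revision:

```python
# Sketch of the proposed gate check. Assumes a coverage percentage has been
# computed for both HEAD~1 and HEAD (e.g. by checking out each revision and
# parsing "coverage report -m" output). Precision and tolerance are open
# questions for the real gate.

def coverage_vote(old_pct, new_pct):
    """Return -1 if coverage dropped (at 2-decimal precision), else 0."""
    if round(new_pct, 2) < round(old_pct, 2):
        return -1
    return 0

# Worked example: 425 of 1000 statements covered at HEAD~1 (42.50%).
# A patch adds 100 statements, of which only 40 are covered.
old_pct = round(100.0 * 425 / 1000, 2)          # 42.50
new_pct = round(100.0 * (425 + 40) / 1100, 2)   # 42.27 -> coverage dropped
print(coverage_vote(old_pct, new_pct))          # -1
```

The worked example shows why the whole-number report is too coarse: a 0.23-point drop is invisible at integer precision but is exactly the regression the gate should catch.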

Listing Unit Test Work

For the benefit of new contributors, a list of packages that have < X% coverage can be produced. I would propose we record this on a wiki page (A placeholder is at

The content would include:

$ awk -v project="magnum" -v ratio=0.30 '/^Name|^TOTAL/ {print $0} $1 ~ project {if ($2 > 0 && $3 > 0 && ($2-$3)/$2 < ratio) print $0}' coverage.txt

Name                                              Stmts  Miss  Branch  BrMiss  Cover  Missing
magnum/base                                          56    56      16      16     0%  13-113
magnum/cloud/nova_driver                             20    20       0       0     0%  13-45
magnum/cmd/api                                       23    23       2       2     0%  15-54
magnum/cmd/conductor                                 28    28       3       3     0%  15-65
magnum/cmd/db_manage                                 42    42       2       2     0%  14-91
magnum/cmd/template_manage                           49    49      15      15     0%  14-96
magnum/common/pythonk8sclient/client/ApivbetaApi   4570  4415    1656    1656     2%  30, 55-109, 135-189,

The following Etherpad outlines a blog/wiki format that describes an example of identifying and improving Unit Tests.

A thought is to add the coverage output to the Infra nodepool results for each test run. These are available in logstash and linked from the review output for each review.

Gerrit topic: topic:bp/integrate-ci-metrics,n,z

Addressed by:
    Improving Unit Test coverage of k8s_manifest


Work Items
