Automated tests backlog for Utopic

Registered by Jean-Baptiste Lallement

The goal of this blueprint is to track tests failing in CI during automated testing of the phone.

Blueprint information

Status:
Complete
Approver:
Julien Funk
Priority:
High
Drafter:
Jean-Baptiste Lallement
Direction:
Approved
Assignee:
Canonical Platform QA Team
Definition:
Approved
Series goal:
Accepted for utopic
Implementation:
Informational
Milestone target:
ubuntu-14.10
Started by:
Leo Arias
Completed by:
Leo Arias

Sprints

Whiteboard

We will continue improving the existing automation from https://blueprints.launchpad.net/ubuntu/+spec/qa-v-automatedtests-backlog

The bugs will be tagged qa-broken-test and qa-daily-testing: http://ur1.ca/hevqd
To highlight bugs to the landing team, tag them with qa-landing-email: http://ur1.ca/hi6xd

https://wiki.ubuntu.com/CI/UnstableTests

Observations and recommendation
-------------------------------------------------

- Things usually break when major components land (unity8, uitk, qt, mir)
- Breakages land even when they should have shown up as CI failures (and thus blocked the MP)
 - When tests are known to be flaky, teams may ignore genuine failures, e.g. the messaging-app failure caused by a change to an icon name
- Many projects have cumbersome release processes (necessitated by the nature of the project, e.g. Unity8, Mir, UITK). This means that landings are fraught with risk and getting fixes in is very slow.
- Some projects don't have adequate test infrastructure (core-apps don't run tests on ARM)
- Test and release cycle is not tight enough. A silo might be tested on 172, land in 174, but be broken by a change in 173.
- Click apps can use Python modules that in turn require deb packages to work - see https://bugs.launchpad.net/ubuntu-calendar-app/+bug/1353921
- Landings frequently contain a lot of code, making it difficult to assess their impact

Recommendations

- Use autopkgtest where test suites are stable
- Have more frequent releases (one image per landing)
- Put more importance on having stable, reliable CI results
- Make sure all projects run tests on device or emulator
- Make sure CI tests are passing on branches that are landing - hard rule
- Make sure all CI results are for the very latest image prior to publishing
- Enable large projects to run full set of suites quickly - parallelise
- Have a way to process test runs and possibly rerun failing tests that may have a chance of passing when rerun. Make sure to adequately report the reruns so they can be addressed.
- Test Click apps in CI as they are on the dashboard
- It should be possible to isolate fixes into separate landings
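The rerun recommendation above can be sketched roughly as follows. This is a hypothetical helper, not part of any existing CI tooling: it retries each failing test a bounded number of times and keeps tests that only passed on a rerun in a separate bucket, so flakiness is reported rather than silently absorbed.

```python
def run_with_reruns(tests, run_test, max_reruns=2):
    """Run each test, retrying failures up to max_reruns extra times.

    Returns (passed, flaky, failed):
      passed - passed on the first attempt
      flaky  - failed at least once but eventually passed; must be
               reported so the flaky tests can be addressed
      failed - never passed within the rerun budget
    """
    passed, flaky, failed = [], [], []
    for name in tests:
        for attempt in range(1 + max_reruns):
            if run_test(name):
                (passed if attempt == 0 else flaky).append(name)
                break
        else:  # rerun budget exhausted without a pass
            failed.append(name)
    return passed, flaky, failed

# Example with a fake runner: "a" passes first try, "b" only passes on
# a rerun, "c" never passes.
outcomes = {"a": [True], "b": [False, True], "c": [False, False, False]}
passed, flaky, failed = run_with_reruns(
    ["a", "b", "c"], lambda name: outcomes[name].pop(0))
```

In a real CI setup, `run_test` would invoke the test runner on the device or emulator; the key point is that the `flaky` bucket is published alongside the pass/fail results instead of being folded into `passed`.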


Work Items

This blueprint contains Public information 