Automated tests backlog for V

Registered by Leo Arias

The goal of this blueprint is to improve the tests that fail most often in daily testing.

Blueprint information

Status:
Not started
Approver:
Thomi Richards
Priority:
Undefined
Drafter:
Leo Arias
Direction:
Needs approval
Assignee:
Leo Arias
Definition:
New
Series goal:
None
Implementation:
Unknown
Milestone target:
None

Sprints

Whiteboard

This is a continuation of https://blueprints.launchpad.net/ubuntu/+spec/qa-u-automatedtests-backlog

Process:

Review the most recent results of the automated tests on the dashboards:
ci.ubuntu.com
dashboard.ubuntu-ci

For each failure, report a bug and link it to this blueprint.
The bugs should be tagged qa-broken-test and qa-daily-testing: http://ur1.ca/hevqd

For each reported bug, try to understand the cause of the failure. If the cause is clear, explain it in a comment on the bug and mark it as triaged. If the cause is not clear, add any information that might be useful and attach logs, screenshots and videos of the failed executions.

To flag bugs as high priority to the landing team, send an email to Lukasz. High-priority failures are the ones caused by developer changes. We need the landing team to draw attention to these failures before further changes make them harder to fix, and to block promotions if they are ignored for a long time.

Once all the failures of a run have been reported and analyzed, choose one bug to fix and assign it to yourself. Prefer the problems that would give a developer a hard time, or the ones a developer is most likely to make worse with a quick patch; for example, turning an ugly test setup into a reusable fixture (see the sketch below).
Ask for a review from the QA team by pinging ubuntu-qa in #ubuntu-quality on freenode, and for a review from one of the project's developers. Do not land the branch without both approvals.
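
As a rough illustration of the fixture idea, here is a minimal sketch using the fixtures library that testtools/autopilot test cases already support. The TempContactsDatabase name and its set-up steps are hypothetical placeholders, not an existing helper:

    import shutil
    import tempfile

    import fixtures


    class TempContactsDatabase(fixtures.Fixture):
        """Create a throwaway contacts database and remove it after the test."""

        def _setUp(self):
            # Build the sample data once, here, instead of repeating the
            # same steps in every test's setUp.
            self.path = tempfile.mkdtemp(prefix='contacts-')
            self.addCleanup(shutil.rmtree, self.path)
            # ... populate self.path with the data the tests need ...


    # In any testtools/autopilot test case:
    #     db = self.useFixture(TempContactsDatabase())
    #     self.launch_app_with_database(db.path)  # hypothetical helper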

While triaging failures, it's likely that we will find common problems that repeatedly make the tests unstable or hard to maintain. Comment on those findings in the Observations and Recommendations sections below.

Advice on Triaging

Isolate failures locally: for each suite that has a failure, run it locally and figure out which tests also fail there. You may need several runs to tell persistent failures from transient ones, since some do not occur every time (see the sketch below).
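
A rough sketch of repeating a suite locally to separate persistent failures from transient ones. It assumes the suite can be run with `autopilot3 run <suite>`, and the suite name is a placeholder; adjust both to how the project under test is actually run:

    import subprocess

    SUITE = 'ubuntu_calendar_app.tests'  # hypothetical suite name
    RUNS = 5

    failed = 0
    for i in range(RUNS):
        # autopilot3 exits non-zero when any test in the run fails.
        code = subprocess.call(['autopilot3', 'run', SUITE])
        if code != 0:
            failed += 1
        print('run %d of %d finished with exit code %d' % (i + 1, RUNS, code))

    print('%d of %d runs failed' % (failed, RUNS))
    # Every run failing points at a real regression; occasional failures
    # point at a flaky test that needs to be made deterministic.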

Each test suite records its results in a subunit file, including tracebacks, console output and screenshots. Use trv to view the subunit results; a demo and installation instructions are at: https://www.youtube.com/watch?v=jkLtbmQxXYc
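
If trv is not at hand, python-subunit and testtools can give a quick pass/fail summary of the same file. A minimal sketch, assuming a subunit v2 stream saved as results.subunit:

    import subunit
    import testtools

    with open('results.subunit', 'rb') as stream:
        # Turn the subunit v2 byte stream into test events and feed them
        # to a summarising result object.
        case = subunit.ByteStreamToStreamResult(stream, non_subunit_name='stdout')
        summary = testtools.StreamSummary()
        summary.startTestRun()
        case.run(summary)
        summary.stopTestRun()

    print('tests run:', summary.testsRun)
    print('successful:', summary.wasSuccessful())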

Resources:

Links to documentation of our automation strategy:
- Autopilot tutorial http://developer.ubuntu.com/api/devel/ubuntu-14.04/autopilot/tutorial/tutorial.html
- Acceptance testing using the page-object model http://developer.ubuntu.com/apps/platform/guides/acceptance-testing-using-the-page-object-model/
- QA lightning talks https://wiki.ubuntu.com/QATeam/LightningTalks
- Landing process https://wiki.ubuntu.com/CI/UnstableTests

Observations:

- Things usually break when major components land (unity8, uitk, qt, mir)
- Tests break even when the failure should have shown up in CI and blocked the merge proposal
 - When tests are known to be flaky, teams may ignore real failures, e.g. the messaging-app failure caused by a change to an icon name
- Many projects have cumbersome release processes (necessitated by the nature of the project, e.g. Unity8, Mir, UITK). This means that landings are fraught with risk and getting fixes in is very slow.
- Some projects don't have adequate test infrastructure (core-apps don't run tests on ARM)
- The test and release cycle is not tight enough. A silo might be tested on image 172, land in 174, but be broken by a change in 173.
- Click apps can use Python modules that in turn require deb packages to work - see https://bugs.launchpad.net/ubuntu-calendar-app/+bug/1353921
- Landings frequently contain a lot of code, making it difficult to assess their impact

Recommendations:

- Use autopkgtest where test suites are stable
- Have more frequent releases (one image per landing)
- Put more importance on having stable, reliable CI results
- Make sure all projects run tests on device or emulator
- Make sure CI tests are passing on branches that are landing - hard rule
- Make sure all CI results are for the very latest image prior to publishing
- Enable large projects to run full set of suites quickly - parallelise
- Have a way to process test runs and, when a failure looks transient, rerun the failing tests; report the reruns adequately so the flaky tests still get fixed (see the sketch after this list)
- Test Click apps in CI as they are on the dashboard
- It should be possible to isolate fixes into separate landings
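
A rough sketch of the rerun-and-report idea above. It assumes the failing test ids were already extracted from the first run (for example with trv or python-subunit) and that autopilot3 accepts individual test ids; both are assumptions about the tooling, not a description of the current CI setup:

    import subprocess

    # Hypothetical ids collected from the failed run.
    failed_ids = [
        'ubuntu_calendar_app.tests.test_new_event.NewEventTestCase.test_add_event',
    ]

    flaky = []
    for test_id in failed_ids:
        if subprocess.call(['autopilot3', 'run', test_id]) == 0:
            flaky.append(test_id)

    # Tests that pass on rerun are flaky rather than broken; report them so
    # the rerun does not silently hide the instability.
    for test_id in flaky:
        print('FLAKY (passed on rerun):', test_id)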


Work Items

Work items:
Identify all the apps that are not following the best practices: TODO
[elopio] Clean up gallery app set up: INPROGRESS
