Improving automated certification testing of Kernel SRUs

Registered by Ara Pulido

Kernel Stable Release Updates (SRUs) are tested by the certification team in search of possible regressions or bugs that may affect certified systems.

These tests have to be performed in sync with the SRU release cadence and, in order not to hold up an SRU's release, within a tight schedule. They therefore consist entirely of automated tests, which provide the response times and scalability the SRU process requires.

By its nature, SRU testing benefits from as many tests as possible, but always with the constraint that they be automated or, at most, require a human to review collected data in bulk. Tests that require interactive verification or human action are to be avoided.

To enhance the value provided by SRU testing, we need both to add new tests and to improve existing ones, based on input from the Kernel team about the areas that traditionally cause the most trouble and about possible ways to test for bugs or regressions.

== Improving the test development process ==
From inception to deployment in real SRU runs, a new test may take up to four months to start producing results. This diminishes its value and means that very few tests can be implemented per cycle.

The test development process needs to be described, analyzed and reengineered with the goal of reducing the turnaround time for new tests as much as possible.

== Improving coverage in existing areas ==
Coverage has been improved in some areas (e.g. wireless testing), but others, such as graphics and audio, still need work. We will consult with experts in these areas about which tests would be useful to have, and implement them (ideally using the new process, so we can start seeing results as soon as possible).
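As an illustration of the kind of fully automated audio check that could fit the "no human interaction" constraint, here is a minimal sketch. It assumes the snd-aloop loopback module is loaded so playback can be captured in software; the device names, duration and silence threshold are placeholders, not an agreed design:

{{{
#!/usr/bin/env python3
# Minimal sketch: play a test tone into an ALSA loopback and verify
# that a non-silent signal is captured, with no human involvement.
import math
import struct
import subprocess
import wave

PLAYBACK_DEV = "hw:Loopback,0"   # assumed snd-aloop playback side
CAPTURE_DEV = "hw:Loopback,1"    # assumed snd-aloop capture side
CAPTURE_FILE = "/tmp/sru_audio_check.wav"

# Capture 3 seconds from the loopback while a tone is played into it.
recorder = subprocess.Popen(
    ["arecord", "-D", CAPTURE_DEV, "-f", "S16_LE", "-r", "44100",
     "-d", "3", CAPTURE_FILE])
subprocess.run(
    ["speaker-test", "-D", PLAYBACK_DEV, "-t", "sine", "-f", "440",
     "-l", "1"])
recorder.wait()

# Compute the RMS level of the capture; silence suggests a playback
# or capture regression introduced by the kernel under test.
with wave.open(CAPTURE_FILE, "rb") as wav:
    frames = wav.readframes(wav.getnframes())
samples = struct.unpack("<{}h".format(len(frames) // 2), frames)
rms = math.sqrt(sum(s * s for s in samples) / max(len(samples), 1))
print("PASS" if rms > 100.0 else "FAIL: captured audio is silent")
}}}

A check like this produces a pass/fail result (or data reviewable in bulk), so it scales with the SRU cadence.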

== Automating plugging/unplugging of devices ==
We need to come up with a solution that can automate traditionally manual actions, such as the insertion and removal of USB devices, memory cards, external video and FireWire. This would enable automation of these high-interest tests and open up new areas for SRU testing.
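One purely software-based approach worth evaluating is toggling the kernel's per-device "authorized" sysfs attribute, which makes a USB device disappear and reappear from the driver's point of view. The sketch below uses a hypothetical bus address and only approximates physical removal (there is no electrical disconnect), so a hardware solution would still be needed for true insertion/removal testing:

{{{
#!/usr/bin/env python3
# Minimal sketch: simulate a USB unplug/replug cycle in software by
# toggling the device's 'authorized' attribute in sysfs.
import time
from pathlib import Path

DEVICE = "1-2"  # hypothetical bus address; real ones live under
                # /sys/bus/usb/devices and can be found via lsusb
AUTH = Path("/sys/bus/usb/devices") / DEVICE / "authorized"

def set_authorized(enabled):
    # Requires root: writing 0 deauthorizes the device (its interfaces
    # are dropped, as in an unplug), writing 1 reauthorizes it.
    AUTH.write_text("1" if enabled else "0")

# One simulated unplug/replug cycle, with time for udev to settle.
set_authorized(False)
time.sleep(2)
set_authorized(True)
time.sleep(2)
# A follow-up check (lsusb, dmesg) can then verify that the device
# re-enumerated correctly under the kernel being tested.
}}}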

== Benefits from this blueprint ==

The certification team will benefit from the ability to develop reliable tests more quickly, allowing it to respond to requests from other teams, and from techniques for automating physical interaction, which will allow fully automatic and more comprehensive testing while maintaining scalability.

The kernel team will benefit from wider testing that catches more bugs and regressions in sync with the SRU cadence, and from an increase in the number of test results the certification team provides.

List of possible tests:
https://docs.google.com/spreadsheet/pub?key=0AhMEQ8F2hKQOdC1Vd0RGWllLa3R1ZXNkbldsaDJvaWc&output=html

Blueprint information

Status: Not started
Approver: None
Priority: High
Drafter: Daniel Manrique
Direction: Needs approval
Assignee: None
Definition: New
Series goal: None
Implementation: Unknown
Milestone target: None

Whiteboard

== Definition of Done ==

- There is a documented, new process for developing tests more quickly.
- The tests listed in https://docs.google.com/spreadsheet/pub?key=0AhMEQ8F2hKQOdC1Vd0RGWllLa3R1ZXNkbldsaDJvaWc&output=html are implemented and/or deployed in the labs.
- Tests and data collection for graphics and audio are implemented by the time Alpha 2 is released (~2 months).
- An automation driving solution is proposed, with a production-level prototype in use in the lab.
- Proposals exist for adding as many buses and tests as possible to the automation solution, with costs and parts lists.
- Based on the two items above, a decision is made on whether the automation proposal can be implemented on more systems.

