Improve certification testsuite and processes for self-testing

Registered by Ara Pulido on 2011-09-27

Ubuntu certification is currently performed entirely by Canonical engineers, who receive systems to be certified and test them at a Canonical facility. It would be desirable to allow partners to perform a pre-test themselves on-site, so they get a clear idea of whether a system is likely to be certified and can take any action they deem necessary. This eases logistics and increases the number of systems that can be tested by allowing partners to anticipate work that needs to be done.

However, since a certification program implies a strong validation component, the process needs to be improved and formalized to better respond to the special requirements of this new arrangement.

A third-party testing environment introduces the following challenges (note possible solutions in parentheses):
- Non-Canonical engineers won't be as familiar with the Ubuntu certification suite (make the tests clearer and less prone to failure; provide unambiguous documentation on how to run tests and what to do when they fail; redesign the testing tool's user interface to make the process friendlier and more "foolproof").
- A certain motivation for "cheating" appears (automate as much as possible to reduce reliance on human judgement at test runtime; do not show results during the testing process, but instead send them to Canonical for assessment).
- Partner testing labs may not have internet connectivity (ensure that the tools can reliably collect all needed data offline, and submit on a connected system).

Particular suggestions include:
- A complete redesign of the Checkbox UI to address usability concerns that have been raised both in the past and as part of the large amount of feedback gathered from Ubuntu Friendly. This is particularly important as the tool gets wider exposure and is compared to other, more polished applications present in Ubuntu.
- Splitting the tests into categories:
  1 - Tests that can be executed and verified automatically, with no human interaction required (e.g. testing CPU frequency governors)
  2 - Tests where the system can perform the required action and only confirmation is required from a human (e.g. changing display resolution)
  3 - Tests where a human has to perform an action and then the system can test and decide whether the test passes (e.g. inserting a USB stick and automatically testing read/write behavior, or wiggling the mouse and checking that things work as expected)
- For category 2, collecting the information for later review, possibly by Canonical engineers, so the partners will mainly collect information and won't necessarily receive feedback as to which tests passed and which failed.
- An "academic exam" approach, where, as mentioned, the testing tools just collect raw data and send it to Canonical for analysis. To avoid all possible forms of cheating, some form of video surveillance has been suggested.
- Signing and/or encryption of test data, to identify and/or avoid tampering. This needs more thought as, being free software, the source code for the tools is widely available.
- Looking at the way Checkbox is used currently and adding functionality that's needed (keeping old submissions, something that at the moment is done ad hoc, but whose frequent need indicates it should be doable through the UI).
- Leveraging information gathered from the Ubuntu Friendly program. Since that program will have widespread use, a lot can be learned about which tests are important to people, how people are running them, how to make them easier or automate them, and how to redesign the user interface for the testing tools to make the process as streamlined as possible. Ubuntu Friendly constitutes an external user running the test suite with no prior knowledge and no help, so the usability and automation improvements derived from it will prove valuable in enhancing the partner self-testing experience.
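The three-category split above could be driven by per-job metadata rather than hard-coded lists. A minimal sketch in Python, assuming a hypothetical "plugin" field loosely modeled on Checkbox-style job types; the field names and the mapping are illustrative assumptions, not the actual Checkbox schema:

```python
# Illustrative sketch: grouping tests into the three proposed categories
# based on a hypothetical "plugin" attribute. The plugin names and the
# mapping below are assumptions for illustration only.

CATEGORY_BY_PLUGIN = {
    "shell": 1,          # category 1: fully automated, no human interaction
    "user-verify": 2,    # category 2: system acts, human only confirms
    "user-interact": 3,  # category 3: human acts, system verifies
}

def categorize(jobs):
    """Group job names by category, so a run can schedule automated
    tests first and batch the human-involved ones together."""
    groups = {1: [], 2: [], 3: []}
    for job in jobs:
        category = CATEGORY_BY_PLUGIN.get(job["plugin"])
        if category is not None:
            groups[category].append(job["name"])
    return groups

jobs = [
    {"name": "cpu/frequency_governors", "plugin": "shell"},
    {"name": "graphics/resolution_change", "plugin": "user-verify"},
    {"name": "usb/storage_insert", "plugin": "user-interact"},
]
print(categorize(jobs))
# → {1: ['cpu/frequency_governors'], 2: ['graphics/resolution_change'],
#    3: ['usb/storage_insert']}
```

Such a grouping would also let the tool apply the "academic exam" rule selectively: run and record category 2 and 3 evidence without displaying pass/fail verdicts locally.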

Partners will benefit from faster certification times, since they get local feedback on how their systems perform before sending them in for actual certification. Canonical hardware certification engineers will profit from a more automated set of tests and tools, enabling them to certify more systems. End-users will benefit from a wider selection of Ubuntu-certified systems, confident that results are trustworthy thanks to the verification processes in place to prevent faulty systems from gaining certified status. All three will also benefit from the widespread in-the-field experience, knowledge, user interface redesign and test improvements stemming from the Ubuntu Friendly program and its results.

Blueprint information

Status:
Complete
Approver:
Ara Pulido
Priority:
High
Drafter:
Daniel Manrique
Direction:
Approved
Assignee:
None
Definition:
Approved
Series goal:
None
Implementation:
Implemented
Milestone target:
None
Started by:
Ara Pulido on 2012-05-04
Completed by:
Ara Pulido on 2012-05-04

Related branches

Sprints

Whiteboard

Definition of Done
- Checkbox has a new user interface that's easy to use for beginners but agile enough for experienced, high-volume testers such as partners and Canonical certification engineers. Bonus points for attractive design. The command-line interface also works very well.
- Tests added and/or enhanced for areas identified as important through Ubuntu Friendly testing.
- There are three selectable whitelists (self-testing, certification, ubuntu-friendly) with tests divided into three categories (automated, manual interaction with auto-verification, and fully manual).
- A user can complete a successful run of any of the three whitelists from beginning to end without having to ask for help.
- The submitted result contains enough information to determine whether a system should be certified.
- There is information, either in the submission or in an additional "package" of data, to determine whether the results are valid and haven't been tampered with.
- There is clear documentation for both the tester and the person evaluating the raw results, so that decisions can be made according to procedure and not in an ad hoc fashion.
- There is a mechanism to get required additional software from the "universe" repository easily, so that all tests can be run.
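The tamper-evidence item above is tricky precisely because the client is free software: any key shipped with the client is public, so client-side signing alone proves nothing. One hedged sketch is a keyed MAC applied server-side, with a hypothetical secret held only by Canonical's submission service, so results can be countersigned on receipt and checked again at evaluation time:

```python
# Minimal sketch of tamper detection for a submission, assuming a
# hypothetical secret known only to the submission service (never
# shipped with the open-source client, for the reason noted above).
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # illustrative placeholder only

def sign(submission: bytes) -> str:
    """Produce a keyed MAC over the raw submission bytes."""
    return hmac.new(SECRET_KEY, submission, hashlib.sha256).hexdigest()

def verify(submission: bytes, signature: str) -> bool:
    """Constant-time check that the submission matches its MAC."""
    return hmac.compare_digest(sign(submission), signature)

data = b"<submission>raw test results</submission>"
tag = sign(data)
print(verify(data, tag))                   # unmodified data checks out
print(verify(data + b"tampered", tag))     # any modification is detected
```

This only protects the data after it reaches the service; protecting the window between collection at the partner site and submission would need a different mechanism, which is why the whiteboard notes the signing question needs more thought.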


Work Items