Checkbox Needs a Scheduler for test cases

Registered by Jeff Lane 

As our list of tests grows, it is becoming more and more difficult to run a battery of tests efficiently to achieve certification status. The suite is a mix of automated and manual tests, and while SOME of our current manual tests can be automated, a large number cannot, as they require physically interacting with the system and testing "user experience" items that only a human can judge.

The problem, as it stands, is that Checkbox does not schedule tests by type. Instead, the flow is something like this:

Info Gathering -> manual tests -> auto tests -> manual tests -> auto tests -> manual tests -> auto tests -> done

Unfortunately, that means a lot of sitting around waiting for a prompt whenever a manual case is reached. If I'm testing three systems simultaneously, two of those systems are going to sit idle at a manual test case while I'm working on the third.

The optimal flow should be this:

Info gathering/Local tests -> all manual tests -> all automated tests -> done.

Or perhaps swap the order of manual and automated tests, but they should all run as a group. This will take some thought and planning, but ultimately it could make testing far more efficient if we could get all the interactive tests out of the way at one time and then simply walk away and let the automated tests take over. If we could achieve that, then once the manual tests are done, the automated tests would run to completion and the results would (should) be automatically uploaded to the certification site.

This would also go a long way toward making the end-user test experience better, as our end users won't have to sit through the entire test cycle waiting on interaction prompts.
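A minimal sketch of the proposed grouping, assuming each test record carries an interactive flag (a hypothetical field for illustration, not Checkbox's actual job format):

```python
from dataclasses import dataclass

@dataclass
class Test:
    name: str
    interactive: bool  # True if the test needs a human at the machine

def schedule(tests):
    """Group every interactive test before every automated test,
    preserving relative order within each group (info gathering /
    local tests are assumed to have already run)."""
    manual = [t for t in tests if t.interactive]
    auto = [t for t in tests if not t.interactive]
    return manual + auto
```

With this ordering, the operator answers all interaction prompts up front and can then walk away while the automated group runs unattended.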

Blueprint information

Approver: Jeff Lane
Definition: Needs approval
Series goal: Accepted for archives
Implementation: Not started
Milestone target: none
Completed by: Victor Tuson Palau

Related branches: none
At the same time, resolve bug #626736: checkbox always sorts tests alphabetically, even when it does not intend to.

Dependencies will not suffice, since they imply that a test cannot run without a previous test having been run. What if we wanted to run all the user-interactive tests at one time and then let all the non-interactive tests run later? One would have to create a fake dependency to make this happen, and that seems like a poor solution. I would propose the following:

1) If order cannot be guaranteed, then the tests should be randomised to help uncover anomalous problems (e.g. hardware tests that leave residual state), and the test order should be logged,

2) Ordering should be allowed, to reproduce specific problems uncovered by randomising, to allow for replay, or to test a specific series of steps that cause a problem to occur, separate from dependencies (the tests may be unrelated but cause a problem if run in a specific order),

3) Mixed tests (ordered & unordered) should run ordered first, then randomised.
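Points 1) and 2) can be sketched with a seeded shuffle: logging the seed is enough to replay an anomalous random order exactly. The function name and interface here are hypothetical, not an existing Checkbox API:

```python
import random

def randomise_order(tests, seed=None):
    """Shuffle the unordered tests, returning the order and the seed.

    Logging the seed alongside the results means a run that exposed
    an ordering problem (e.g. residual hardware state) can be
    replayed exactly by passing the same seed back in."""
    if seed is None:
        seed = random.randrange(2**32)
    rng = random.Random(seed)  # private RNG so the seed fully determines the order
    shuffled = list(tests)
    rng.shuffle(shuffled)
    return shuffled, seed
```

A replay is then just a second call with the logged seed, which is guaranteed to produce the identical order.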

Ordering scheme proposal:

1) Ordering can be specified by a new tag (i.e. "Priority=")

2) A missing Priority tag or "Priority=0" signifies random execution (perhaps by assigning a random priority between zero and one)

3) The actual order in which the tests are run (random or not) is captured in a file that can be replayed via a command-line option and/or other means

4) Priorities less than zero (<0) are run before random tests

5) Priorities greater than one (>1) are run after random tests
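A minimal sketch of this scheme, assuming tests are plain dictionaries with an optional "priority" key (the field name and the JSON log format are illustrative, not Checkbox's actual representation):

```python
import json
import random

def order_tests(tests, replay_file=None, log_file="test-order.json"):
    """Order tests using the proposed Priority tag.

    Priority < 0 runs before the randomised group; a missing tag or
    Priority=0 is assigned a random value in (0, 1); Priority > 1
    runs after the randomised group. The final order is written to a
    log file so a run can be replayed exactly."""
    if replay_file:
        # Replay a previously logged order instead of randomising.
        with open(replay_file) as f:
            saved = json.load(f)
        by_name = {t["name"]: t for t in tests}
        return [by_name[name] for name in saved]

    keyed = []
    for t in tests:
        priority = t.get("priority", 0)
        if priority == 0:
            priority = random.uniform(0, 1)  # randomise untagged tests
        keyed.append((priority, t))
    keyed.sort(key=lambda pair: pair[0])  # stable sort: ties keep input order
    ordered = [t for _, t in keyed]

    with open(log_file, "w") as f:
        json.dump([t["name"] for t in ordered], f)
    return ordered
```

Because the sort key is the priority value itself, negative priorities naturally land before the randomised (0, 1) block and priorities above one land after it, with no special-case branches.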

This did not get done in N and is not scheduled for O.


Work Items
