Automated testing for HDA sound cards

Registered by David Henningsson

Hda-emu is a way to test kernel code for Intel HDA sound cards without having the hardware at hand. Discuss how to evolve this code into a regression test suite that we could run e.g. before releasing proposed kernels, and how to integrate it into existing QA efforts (Jenkins etc.).

Blueprint information

Chris Van Hoof
David Henningsson
Canonical Hardware Enablement
Series goal: Accepted for quantal
Milestone target: ubuntu-12.10-beta-1
Started by: Kate Stewart
Completed by: Chris Van Hoof

== Introduction to hda-emu ==

hda-emu is an HD-audio emulator. The main purpose of this program is to debug an HD-audio codec without the real hardware. Thus, it doesn't emulate behavior with real audio I/O; it just dumps the codec register changes and the ALSA-driver internal changes while probing and operating the HD-audio driver.

It is written and maintained by Takashi Iwai.
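As a concrete illustration, driving hda-emu from a script could look like the sketch below. The invocation `hda-emu <codec-proc-file>` reflects hda-emu's basic usage of taking a codec proc dump as input, but the exact options vary between versions, and the helper names here (`hda_emu_command`, `run_hda_emu`) are ours, not part of hda-emu:

```python
import subprocess

def hda_emu_command(codec_file, binary="hda-emu"):
    # hda-emu takes a codec proc-file dump (such as the codec section
    # extracted from an alsa-info report) as its main argument.
    return [binary, codec_file]

def run_hda_emu(codec_file):
    # Capture stdout/stderr so the dumped register and driver state
    # can be inspected later; requires the hda-emu binary installed.
    return subprocess.run(hda_emu_command(codec_file),
                          capture_output=True, text=True)
```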

== Suggested improvement of hda-emu ==

 * Introduce a "batch run mode" in hda-emu (or develop a framework around it that runs it several times in a row) and report errors, i.e. when codec parsing fails for some reason or the code tries to send an invalid verb to the device.
 * Collect codec information from as many machines as possible (easiest done through the alsa-info script). The set of machines that we certify could be a good starting set.
(Upstream also has a collection of codecs)
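Until hda-emu grows a native batch mode, a wrapper along these lines could approximate one. The error-message patterns below are assumptions about what hda-emu might print, not confirmed output strings, and `run_one` stands in for whatever actually invokes hda-emu on a codec dump:

```python
import re

# Patterns to grep for in hda-emu output.  The exact message texts
# are assumptions and would need adjusting to real hda-emu output.
ERROR_PATTERNS = [
    re.compile(r"invalid verb", re.IGNORECASE),
    re.compile(r"parsing failed", re.IGNORECASE),
    re.compile(r"\berror\b", re.IGNORECASE),
]

def scan_output(output):
    """Return the output lines that match a known error pattern."""
    return [line for line in output.splitlines()
            if any(p.search(line) for p in ERROR_PATTERNS)]

def batch_run(codec_files, run_one):
    """Run `run_one` (a callable: codec file -> output string) over
    every collected codec dump; return {file: [error lines]} for the
    files whose output shows errors."""
    failures = {}
    for path in codec_files:
        errors = scan_output(run_one(path))
        if errors:
            failures[path] = errors
    return failures
```

With dependency injection for `run_one`, the same loop works against the real emulator, a daily build, or canned outputs in a test suite.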

== Suggestion about when to run the test suite ==

 * For kernel updates, e.g. before a stable kernel is uploaded to the proposed repository
 * Throughout the development cycle, e.g. before a kernel is released
 * When developing driver code. If this regression test could be run even before new kernel code is committed to the upstream repository, that would be the best scenario of them all! While I have not yet consulted upstream about these plans, I believe it would be a welcomed effort and hopefully a well-used one.

== Possible errors to check for ==

This is probably the trickiest part.
 * Invalid verbs - these are usually ignored by the codec, so perhaps not a critical error?
 * Two volume controls controlling the same thing (we had that for a few machines in 3.0)
 * Check for errors returned when:
   - trying to get/set/info a volume control
   - testing playback or record
   - suspend / resume

 * How can we keep track of the current status of different machines, so that we can shout loudly if things change for the worse, but stay silent if things that are already bad stay that way?
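One possible answer, sketched below with hypothetical data structures: keep each machine's last known results as a stored baseline, and report only regressions, i.e. tests that used to pass and now fail. Known-bad results stay silent:

```python
def regressions(baseline, current):
    """Compare per-machine results (dicts of test name -> pass/fail
    bool) against a stored baseline.  Return only the machines and
    tests that got worse; tests that were already failing, or that
    are new and passing, are not reported."""
    worse = {}
    for machine, results in current.items():
        old = baseline.get(machine, {})
        new_failures = [test for test, ok in results.items()
                        if not ok and old.get(test, True)]
        if new_failures:
            worse[machine] = new_failures
    return worse
```

After each run the baseline would be updated from the current results, so the suite only ever shouts once per newly broken machine.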

== Raw Notes From UDS-Q Session from Etherpad ==

== Test system status ==

The QA team now has regular VM-based jobs running for other projects in the QA lab. Once we've finalized what we're running, I can get a job set up for that. --nuclearbob

Test suite up and running on Takashi's daily builds:


Work items:
[roadmr] Cert team to add a job to Gather alsa-info during test runs, these are uploaded to cert website and imported to HEXR: DONE
[vanhoof] Add BIOS version, system-version, system-product-name to DONE
[vanhoof] if plan b (git) becomes the plan a sort out workflow: DONE
[jk-ozlabs] Jeremy can chat to schwuk about adding to alsa-info to HEXR: DONE
[nuclearbob] will assist in getting us a virtual machine for use: POSTPONED
[diwic] to write the parser/tool :): DONE
[roadmr] to look into helping us integrating this into the standard SRU cycle: POSTPONED
[vanhoof] to ensure these tests are run at each of the aforementioned milestones: DONE
[jk-ozlabs] chat w/ Dave on "Is Private" in HEXR implemented to trigger push to public git: DONE
