LAVA test optimization for very slow targets
LAVA Test is responsible for running our tests on various devices. So far we have been using it on moderately fast A8 and A9 silicon, where performance is entirely acceptable. With the introduction of fast models and QEMU we need to rethink certain assumptions to ensure we are not wasting precious simulation time on tasks that can be done outside the simulation.
Blueprint information
- Status: Complete
- Approver: Paul Larson
- Priority: Low
- Drafter: Zygmunt Krynicki
- Direction: Needs approval
- Assignee: Andy Doan
- Definition: Obsolete
- Series goal: None
- Implementation: Not started
- Milestone target: None
- Started by:
- Completed by: Neil Williams
Whiteboard
[asac, Feb 24, 2012]: feels like low priority to me. Engineering time is more expensive than emulation time for now. Not saying we don't need it, but it's definitely not in the first batch.
[zkrynick, 2012-05-18]: based on my observations when running on fast models it may not be required; we should still do it later to remove the need for Python on the target, once lava-test merges with lava-core as a host-based testing framework.
Meta:
Headline: LAVA Test framework is efficient on slow hardware simulators
Acceptance:
1. No regressions for current hardware (same mode of operation remains possible)
2. New split mode where parsing is performed outside of the simulation
3. In split mode, the part that runs inside the simulation still gathers device information and software metadata
Roadmap id: LAVA2012-
Work items:
Add a --split mode where the device is not running the parser: TODO
Add a --merge mode where we combine raw logs, parse results and software/hardware data: TODO
Ensure that in split mode we capture hardware and software data on the device: TODO
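The work items above can be sketched as a two-phase pipeline. This is a minimal illustrative sketch only, not the lava-test API: all function and field names here (run_split, merge, raw_log, etc.) are hypothetical. The on-device half captures the raw log plus hardware/software metadata and does no parsing inside the simulation; the host half parses the log and merges everything into one result bundle.

```python
"""Hypothetical sketch of the proposed --split / --merge workflow.

Names and formats are illustrative assumptions; they do not
correspond to any real lava-test interface.
"""
import json


def run_split(raw_log, hw_info, sw_info):
    # On-device half (--split): capture raw test output and device
    # metadata only; parsing is deliberately deferred to the host.
    return {"raw_log": raw_log, "hardware": hw_info, "software": sw_info}


def parse_log(raw_log):
    # Host-side parsing, done outside the simulation where CPU time
    # is cheap. Assumes simple "test_name: OUTCOME" lines.
    results = []
    for line in raw_log.splitlines():
        if ":" in line:
            name, outcome = line.split(":", 1)
            results.append({"test_case": name.strip(),
                            "result": outcome.strip().lower()})
    return results


def merge(split_bundle):
    # Host-side half (--merge): combine the raw log, parsed results
    # and hardware/software metadata into a single result bundle.
    bundle = dict(split_bundle)
    bundle["results"] = parse_log(split_bundle["raw_log"])
    return bundle


bundle = merge(run_split("boot_test: PASS\nnet_test: FAIL",
                         {"cpu": "fast-model"},
                         {"image": "linaro-nano"}))
print(json.dumps(bundle["results"]))
```

The design point is simply that the device (or simulator) emits data as fast as it can, and all log processing moves to the host, satisfying acceptance criteria 2 and 3.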