Report benchmark results in LAVA

Registered by Asa Sandahl

Use LAVA for reporting and visualization of toolchain benchmark results.

This is the first step towards using LAVA to automate the toolchain benchmarks. Use the available reporting tools/APIs in LAVA to store and visualize benchmark results. All other steps for building, running, and extracting results will be done in the cbuild system and with the Python scripts written by Michael.

The final step depends on a private report feature in LAVA, which is not yet implemented. Work with dummy numbers or open benchmarks in the meantime.

Blueprint information

Status:
Complete
Approver:
Michael Hope
Priority:
Medium
Drafter:
None
Direction:
Approved
Assignee:
None
Definition:
Approved
Series goal:
None
Implementation:
Informational
Milestone target:
backlog
Started by
Matthew Gretton-Dann
Completed by
Matthew Gretton-Dann

Related branches

Sprints

Whiteboard

[2013-08-22 matthew-gretton-dann] This is being done under the Support & Maintenance work in Jira


Work Items

Work items:
Get LAVA permission from the validation team: TODO
Talk to the validation team about a "private report feature": TODO
Do a manual test run of EEMBC on one of the ursas: TODO
Do a comparison of SPEC2K results with ref and train data, using Michael's scripts: TODO
Get started with LAVA, go through the steps to schedule a simple job: TODO
Read up on the visualization APIs in LAVA and create some example graphs: TODO
Adapt the results from toolchain benchmarks to the format expected by LAVA: TODO
Create a toolchain benchmark report area in LAVA and store some results in it: BLOCKED
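The "adapt the results to the format expected by LAVA" work item could be sketched as follows. This is a minimal, hypothetical example assuming the target is the JSON dashboard bundle format that LAVA's dashboard accepted at the time; the function name, test IDs, and the dummy measurement values are illustrative only, and the exact format version string should be checked against the LAVA deployment in use.

```python
import json
import uuid
from datetime import datetime, timezone

def make_bundle(test_id, measurements):
    """Build a dashboard-bundle-style dict for a set of benchmark measurements.

    `measurements` is a list of (test_case_id, value, units) tuples.
    """
    return {
        # Format version is an assumption; verify against the target LAVA instance.
        "format": "Dashboard Bundle Format 1.3",
        "test_runs": [
            {
                "test_id": test_id,
                "analyzer_assigned_uuid": str(uuid.uuid4()),
                "analyzer_assigned_date": datetime.now(timezone.utc).strftime(
                    "%Y-%m-%dT%H:%M:%SZ"
                ),
                "time_check_performed": False,
                "test_results": [
                    {
                        "test_case_id": case_id,
                        "result": "pass",
                        # Measurements are serialized as strings in the bundle schema.
                        "measurement": str(value),
                        "units": units,
                    }
                    for case_id, value, units in measurements
                ],
            }
        ],
    }

# Dummy numbers stand in for real benchmark output, as the blueprint suggests
# while the private report feature is unavailable.
bundle = make_bundle("toolchain-benchmarks-eembc", [
    ("consumer/cjpeg", 123.4, "iterations/s"),
    ("consumer/djpeg", 210.7, "iterations/s"),
])
print(json.dumps(bundle, indent=2))
```

The resulting JSON could then be uploaded to a benchmark report area (bundle stream) once one exists, which is why that final work item is blocked.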

This blueprint contains Public information.