Add tracing support

Registered by Amit Pundir

[dmart]: I think the main focus of this instrumentation is going to be tracing.
My current ideas about what is needed here are fairly vague, but I
think that as a minimum we need something like the following. Even if
we want more tracing in the long term, this subset should be a
sensible place to start.

Events (and data) to trace:
 * switch starting (timestamp, CPU#)
 * switch finished (timestamp, CPU#)

(in the b.L MP and real frequency scaling cases, we probably want
these events to generalize to all performance-point transitions,
across all CPUs)
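
As a rough sketch of what that minimal pair of events could look like
as kernel tracepoints: everything below (the header path, the
bl_switcher system name and the bl_switch_start/bl_switch_finish event
names) is invented for illustration. Note that the trace ring buffer
already timestamps each record and tags it with the emitting CPU, so
the events only need to carry the CPU being switched explicitly.

/*
 * Hypothetical trace/events/bl_switcher.h -- illustration only,
 * not an existing kernel header.
 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM bl_switcher

#if !defined(_TRACE_BL_SWITCHER_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_BL_SWITCHER_H

#include <linux/tracepoint.h>

TRACE_EVENT(bl_switch_start,
    TP_PROTO(unsigned int cpu),
    TP_ARGS(cpu),
    TP_STRUCT__entry(
        __field(unsigned int, cpu)
    ),
    TP_fast_assign(
        __entry->cpu = cpu;
    ),
    TP_printk("cpu=%u", __entry->cpu)
);

TRACE_EVENT(bl_switch_finish,
    TP_PROTO(unsigned int cpu),
    TP_ARGS(cpu),
    TP_STRUCT__entry(
        __field(unsigned int, cpu)
    ),
    TP_fast_assign(
        __entry->cpu = cpu;
    ),
    TP_printk("cpu=%u", __entry->cpu)
);

#endif /* _TRACE_BL_SWITCHER_H */

/* This part must stay outside the multi-read guard */
#include <trace/define_trace.h>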

Mechanisms:
 * tracepoints
 * ftrace (either just trace specific functions, or provide our own plugin)

Using ftrace would allow us to piggy-back on existing infrastructure
without having to reinvent as much as we would with raw tracepoints
alone. For example, ftrace has well-defined mechanisms for sending
trace streams to userspace.
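
To make the piggy-backing concrete: a tracepoint defined with
TRACE_EVENT is exported through ftrace automatically, so once the
switcher calls it, the events can be enabled and read through the
standard tracing files with no extra plumbing. The bl_do_switch()
function below is made up for this sketch; only the TRACE_EVENT
machinery is real.

/* In exactly one .c file, so the tracepoint bodies get emitted: */
#define CREATE_TRACE_POINTS
#include <trace/events/bl_switcher.h>  /* hypothetical header above */

static void bl_do_switch(unsigned int cpu)
{
    trace_bl_switch_start(cpu);

    /* ... save state, wake the inbound CPU, hand over context ... */

    trace_bl_switch_finish(cpu);
}

/*
 * The events then appear under the usual ftrace interface, e.g.:
 *   echo 1 > /sys/kernel/debug/tracing/events/bl_switcher/enable
 *   cat /sys/kernel/debug/tracing/trace
 */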

cpufreq presumably also has some tracing support, so we may be able to
reuse some of this. The cpu_frequency event described in
Documentation/trace/events-power.txt may be useful.
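
If that reuse pans out, it could be as simple as calling the existing
tracepoint behind that event. trace_cpu_frequency() is the real
tracepoint from include/trace/events/power.h; the wrapper function
around it is hypothetical.

#include <linux/smp.h>
#include <trace/events/power.h>

/*
 * Hypothetical hook: report a switch as an ordinary frequency
 * transition so existing cpufreq-aware tools pick it up unchanged.
 */
static void bl_report_transition(unsigned int new_freq_khz)
{
    trace_cpu_frequency(new_freq_khz, smp_processor_id());
}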

[npitre]: In the switcher case, I see the following events worth tracking:

For the outbound CPU:
1) switch initiation (timestamp, CPU#, cluster#)
2) switch preparation done and inbound CPU awakened (timestamp, CPU#, cluster#)
3) ready to go offline (timestamp, CPU#, cluster#)

On the inbound CPU:
4) entry into recovery from switch (timestamp, CPU#, cluster#)
5) switch recovery done, ready to resume user space (timestamp, CPU#, cluster#)

Obviously, event 3 can occur in parallel with, or even after, events 4 or 5.
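
Since all five events carry the same payload (CPU# and cluster#; the
timestamp comes for free from the trace buffer), they could share a
single event class. A sketch using the kernel's
DECLARE_EVENT_CLASS/DEFINE_EVENT mechanism, with made-up event names:

#include <linux/tracepoint.h>

DECLARE_EVENT_CLASS(bl_switch_phase,
    TP_PROTO(unsigned int cpu, unsigned int cluster),
    TP_ARGS(cpu, cluster),
    TP_STRUCT__entry(
        __field(unsigned int, cpu)
        __field(unsigned int, cluster)
    ),
    TP_fast_assign(
        __entry->cpu = cpu;
        __entry->cluster = cluster;
    ),
    TP_printk("cpu=%u cluster=%u", __entry->cpu, __entry->cluster)
);

/* Outbound CPU: events 1-3 */
DEFINE_EVENT(bl_switch_phase, bl_switch_initiate,
    TP_PROTO(unsigned int cpu, unsigned int cluster),
    TP_ARGS(cpu, cluster));
DEFINE_EVENT(bl_switch_phase, bl_switch_inbound_awake,
    TP_PROTO(unsigned int cpu, unsigned int cluster),
    TP_ARGS(cpu, cluster));
DEFINE_EVENT(bl_switch_phase, bl_switch_offline_ready,
    TP_PROTO(unsigned int cpu, unsigned int cluster),
    TP_ARGS(cpu, cluster));

/* Inbound CPU: events 4-5 */
DEFINE_EVENT(bl_switch_phase, bl_switch_recovery_enter,
    TP_PROTO(unsigned int cpu, unsigned int cluster),
    TP_ARGS(cpu, cluster));
DEFINE_EVENT(bl_switch_phase, bl_switch_recovery_done,
    TP_PROTO(unsigned int cpu, unsigned int cluster),
    TP_ARGS(cpu, cluster));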

We'll want to optimize the time spent between events 2 and 3: not to
make it as short as possible, but to keep the outbound CPU online with
its cache alive long enough for the inbound CPU to snoop it. Some
cache usage metric would therefore be required here; it might even be
made self-tuning at runtime.

This implies that the inbound CPU might decide to switch back before
the outbound CPU actually goes offline. The code will need to cope
with that, of course, but that is also another event worth tracking
(it might be indicative of an overly aggressive switching policy).
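
If the shared event class sketched above were adopted, this
switch-back case would simply become one more event in that class
(again, the name is invented):

/*
 * Emitted when the inbound CPU requests a switch back before the
 * outbound CPU has gone offline; a high rate of these may indicate
 * a too-aggressive switching policy.
 */
DEFINE_EVENT(bl_switch_phase, bl_switch_early_switchback,
    TP_PROTO(unsigned int cpu, unsigned int cluster),
    TP_ARGS(cpu, cluster));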

I never looked at the available mechanisms closely enough to have an
opinion here. But going with one of the available frameworks (and
improving/extending it if necessary) is certainly the best path
forward.

[pfefferz]: Seems like tracepoints may be the better fit, since inter-function
tracing and various ad-hoc tracepoints may be needed.

Blueprint information

Status:
Complete
Approver:
Zach Pfeffer
Priority:
Undefined
Drafter:
None
Direction:
Needs approval
Assignee:
Amit Pundir
Definition:
Superseded
Series goal:
None
Implementation:
Unknown
Milestone target:
None
Completed by:
Amit Pundir

Whiteboard

Notes:
[2012/04/16 pundiramit] Put notes here.
[2012/05/07 pundiramit] This blueprint is superseded by https://blueprints.launchpad.net/linux-linaro/+spec/big-little-instrumentation

Meta:
Roadmap id: ANDROID2012-BIG-LITTLE
Headline: Tracing support is available for the big.LITTLE switcher
Acceptance: Trace the events/data of the big.LITTLE switcher on the A15/A7 simulator running Android.

Work Items

Work items:
Investigate existing tracing mechanisms: TODO
Extend existing mechanisms to support missing features: TODO
Try ftrace: TODO
Try tracepoints: TODO
