Task placement for asymmetric cores, hotplug improvements and asymmetric Tegra

Registered by Vincent Guittot on 2012-07-20

=== Task placement for asymmetric cores ===

[Slides](http://www.linuxplumbersconf.org/2012/wp-content/uploads/2012/09/2012-lpc-scheduler-task-placement-rasmussen.pdf)

Traditional SMP scheduling aims for an equal load distribution across all CPUs. To take full advantage of the power/performance heterogeneity of big.LITTLE MP, task affinity is crucial. I have experimented with modifications to the Linux scheduler that attempt to minimize power consumption by selecting an appropriate affinity for each task.
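The idea can be illustrated with a toy placement heuristic: light tasks go to the energy-efficient cluster, heavy tasks to the performance cluster. This is a minimal sketch only, not the actual scheduler patches; the cluster names, capacity values, and threshold below are hypothetical numbers chosen for the example.

```python
# Toy placement heuristic (illustrative only, not the real patches).
# Capacities use a hypothetical 0..1024 scale; the threshold is made up.

LITTLE = {"name": "little", "capacity": 430}   # energy-efficient cores
BIG = {"name": "big", "capacity": 1024}        # performance cores

# Tasks whose tracked load stays well below LITTLE capacity are assumed
# to be cheaper to run on an energy-efficient core.
LOAD_THRESHOLD = 0.8

def select_cluster(task_load):
    """Pick a cluster for a task based on its tracked load (0..1024 scale)."""
    if task_load < LOAD_THRESHOLD * LITTLE["capacity"]:
        return LITTLE   # light task: prefer the energy-efficient cluster
    return BIG          # heavy task: needs the performance cluster

print(select_cluster(100)["name"])   # a light background task
print(select_cluster(900)["name"])   # a CPU-bound task
```

A real implementation would of course base the decision on the scheduler's tracked per-task load rather than a fixed threshold, but the shape of the decision is the same.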

Topic Lead: Morten Rasmussen

=== Scheduling and the big.LITTLE Architecture ===

[Slides](http://www.linuxplumbersconf.org/2012/wp-content/uploads/2012/09/2012-lpc-scheduler-big-little-mckenney.pdf)

ARM's big.LITTLE architecture is an example of asymmetric multiprocessing where all CPUs are instruction-set compatible, but where different CPUs have very different performance and energy-efficiency characteristics. In the case of big.LITTLE, the big CPUs are Cortex-A15 CPUs with deep pipelines and numerous functional units, providing maximum performance. In contrast, the LITTLE CPUs are Cortex-A7 CPUs with short pipelines and few functional units, optimized for energy efficiency. Linaro is working on two methods of supporting big.LITTLE systems.

One way to configure big.LITTLE systems is an MP configuration, in which both the big and LITTLE CPUs are present and usable at the same time. Traditionally, most SMP operating systems have assumed that all CPUs are identical, but this is emphatically not the case for big.LITTLE, so changes are required to support it. This talk will give an overview of the progress toward big.LITTLE support in the Linux plumbing.

Topic Lead: Paul E. McKenney

=== cpuquiet: Dynamic CPU core management ===

[Slides](http://www.linuxplumbersconf.org/2012/wp-content/uploads/2012/08/cpuquiet.pdf)

NVIDIA Tegra30 has CPU clusters with different capabilities: a fast cluster with four Cortex-A9 cores and a low-power cluster with a single Cortex-A9 core. Only one of the clusters can be active at a time, which means the number of cores available to the kernel changes at runtime. Currently we use CPU hotplug to make cores unavailable so we can initiate a switch to the low-power cluster, but this has a number of problems, such as long latencies when switching between the clusters. Therefore, a new mechanism in which CPUs are unavailable yet not completely removed from the system would be useful. CPUs in this quiet state would not run any userspace or kernelspace code until they are explicitly made available again. However, the kernel data structures associated with each CPU would be preserved, so transitions can be low-latency operations. The policy can be encapsulated in a governor, like the cpufreq and cpuidle governors we already have.
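The governor concept above can be sketched as a simple policy loop: estimate the runnable load, wake a quiet core when the active ones are saturated, and quiesce one when they are mostly idle. This is a hedged illustration only; the class name and thresholds are invented for the example, and a real governor would hook into kernel load statistics and the cpuquiet driver rather than run as plain Python.

```python
# Sketch of a cpuquiet-style governor policy loop (illustrative only).
# Thresholds and the CoreGovernor class are hypothetical.

class CoreGovernor:
    def __init__(self, max_cores, up_threshold=0.75, down_threshold=0.30):
        self.max_cores = max_cores
        self.up_threshold = up_threshold      # per-core load above which a core is woken
        self.down_threshold = down_threshold  # per-core load below which one is quiesced
        self.active_cores = 1                 # start on a single low-power core

    def update(self, load):
        """load: total runnable load, where 1.0 means one fully busy core."""
        per_core = load / self.active_cores
        if per_core > self.up_threshold and self.active_cores < self.max_cores:
            self.active_cores += 1            # bring a quiet core back into service
        elif per_core < self.down_threshold and self.active_cores > 1:
            self.active_cores -= 1            # quiesce a core; its state is preserved
        return self.active_cores

# Feed the governor a synthetic load history: ramp up, then go idle.
gov = CoreGovernor(max_cores=4)
for load in (0.2, 1.5, 2.8, 3.5, 0.4, 0.1):
    gov.update(load)
print(gov.active_cores)
```

Because quiesced cores keep their kernel data structures, each `update` step can change the active-core count cheaply, which is exactly the latency advantage over full CPU hotplug described above.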

Topic Lead: Peter De Schrijver
Peter is an NVIDIA Tegra Linux kernel engineer and a Debian developer, and previously worked on power management in Maemo for Nokia.

Blueprint information

Status: Not started
Approver: None
Priority: Undefined
Drafter: None
Direction: Needs approval
Assignee: Morten Rasmussen
Definition: New
Series goal: None
Implementation: Unknown
Milestone target: None
