libvirt, enable hugepages memory
The aim of this blueprint is to enable hugepage-backed memory for guests on the libvirt hypervisor, by way of an image property.
Blueprint information
- Status:
- Complete
- Approver:
- Russell Bryant
- Priority:
- Undefined
- Drafter:
- Sahid Orentino
- Direction:
- Needs approval
- Assignee:
- Sahid Orentino
- Definition:
- Superseded
- Series goal:
- None
- Implementation:
- Blocked
- Milestone target:
- None
- Started by:
- Sahid Orentino
- Completed by:
- Sahid Orentino
Related branches
Related bugs
Sprints
Whiteboard
Gerrit topic: https:/
Addressed by: https:/
libvirt: Adds support of the option hugepages
The design information for this blueprint is totally inadequate. Enabling the feature flag in the libvirt guest config is the least of the problems to deal with.
This has scheduler implications, e.g. if the guest RAM is 2 GB and hugepages are 4 MB, then booting the guest requires 500 hugepages. The scheduler may have checked that the host has 2 GB of free memory, but this does *not* imply that there are 500 hugepages available. The 2 GB of free memory may be highly fragmented, resulting in only 200 hugepages being available. Thus the scheduler could put the guest on a host where it is incapable of starting. This implies we need to track available hugepages on hosts and make the scheduler aware of this info.
There are also security implications to deal with. Given that hugepages are a finite resource, there need to be controls placed on their usage, likely against the flavour.
Finally, there's a question as to whether this is even worthwhile, given that the kernel now has the "transparent hugepages" feature, whereby it will automatically allocate hugepages for VMs when available.
There needs to be a document written up on the wiki for this feature covering all these core design questions.
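The fragmentation point above can be sketched in a few lines. This is illustrative only, not Nova scheduler code; a 4 MB hugepage size is assumed so that a 2 GB guest needs the 500 pages mentioned (2000 MB / 4 MB):

```python
# Illustrative sketch (not Nova code): free RAM does not imply free
# hugepages, so the scheduler must track the hugepage pool explicitly.

HUGEPAGE_SIZE_MB = 4  # assumed 4 MB hugepages

def required_hugepages(guest_ram_mb):
    """Number of hugepages needed to back the guest's RAM (ceiling)."""
    return -(-guest_ram_mb // HUGEPAGE_SIZE_MB)

def can_host(guest_ram_mb, host_free_mb, host_free_hugepages):
    """Free memory alone is not enough: the host must also have enough
    memory already reserved as hugepages for this guest."""
    return (host_free_mb >= guest_ram_mb and
            host_free_hugepages >= required_hugepages(guest_ram_mb))

# A 2 GB guest needs 500 x 4 MB hugepages.
assert required_hugepages(2000) == 500
# Host has 2 GB free, but fragmentation left only 200 hugepages: placement fails.
assert can_host(2000, host_free_mb=2048, host_free_hugepages=200) is False
```

A scheduler filter would need per-host data like `host_free_hugepages` reported up from the compute node, which is exactly the tracking gap described above.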
deferred from icehouse-3 to "next": http://
Being able to explicitly request huge pages (probably from the flavour), rather than relying on transparent huge pages, is attractive for HPC-style workloads, where you may want to *know* you have huge pages available for a given instance or instance group. Ideally the scheduler will then ensure that the instance lands on a host where enough huge pages are available. I think there is some work required in libvirt itself to support these scheduler determinations though, for instance:
- Exposing what page sizes the host supports.
- The number of pages exposed, in total or ideally per NUMA node.
- The number of pages used, in total or ideally per NUMA node.
My understanding is libvirt does not expose these details today.
--sgordon
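The inventory sgordon describes could look something like the following. This is a hypothetical data model for what libvirt would need to report, not an existing libvirt API; all numbers are made up:

```python
# Hypothetical per-host hugepage inventory (not a real libvirt API):
# supported page sizes with total/free counts, broken down per NUMA node.

from collections import namedtuple

PageInfo = namedtuple('PageInfo', ['size_kb', 'total', 'free'])

host_pages = {
    # NUMA node -> pages of each supported size (illustrative numbers)
    0: [PageInfo(size_kb=4, total=524288, free=300000),
        PageInfo(size_kb=2048, total=512, free=200)],
    1: [PageInfo(size_kb=4, total=524288, free=450000),
        PageInfo(size_kb=2048, total=512, free=512)],
}

def free_pages(size_kb, node=None):
    """Free pages of a given size, for one NUMA node or host-wide."""
    nodes = [node] if node is not None else host_pages
    return sum(p.free for n in nodes for p in host_pages[n]
               if p.size_kb == size_kb)

assert free_pages(2048) == 712          # host-wide free 2 MB hugepages
assert free_pages(2048, node=0) == 200  # node 0 only
```

With something like this reported per compute host, the scheduler could place a guest requesting explicit huge pages only on hosts (and NUMA nodes) with enough free pages of the requested size.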
Removed from next, as next is now reserved for near misses from the last milestone --johnthetubaguy
Marking this blueprint as definition: Drafting. If you are still working on this, please re-submit via nova-specs. If not, please mark as obsolete, and add a quick comment to describe why. --johnthetubaguy (20th April 2014)