Multiple VMware vCenter Clusters managed using single compute service
The current implementation of the VMware VC driver for OpenStack uses one proxy server running the nova-compute service to manage a single cluster.
The new model makes the following changes to the nova-compute service's VMware VC driver:
• To allow a single VC driver to model multiple Clusters in vCenter as multiple nova-compute nodes.
• To allow the VC driver to be configured to represent a set of clusters as compute nodes.
• To dynamically create / update / delete nova-compute nodes based on changes in vCenter for Clusters.
A nova-compute node is uniquely identified by the combination of the vCenter server and the managed object ID (mob id) of the cluster or resource pool.
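As a small illustration of the identity scheme above, a unique nodename could be built from the vCenter host plus the cluster's managed object ID. The function name and format below are hypothetical, not the actual driver's API:

```python
# Hypothetical sketch: compose a nodename that is unique across
# clusters and across vCenter servers. The naming convention here
# is illustrative only, not what the real driver uses.

def build_nodename(vcenter_host, cluster_moref):
    """Return a nodename unique per (vCenter, cluster) pair."""
    return "%s.%s" % (cluster_moref, vcenter_host)

# Example: a cluster with managed object ID 'domain-c7' on vcenter01
print(build_nodename("vcenter01.example.com", "domain-c7"))
# -> domain-c7.vcenter01.example.com
```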
This is an enhancement to the VMware vCenter Nova driver. The VMware vCenter driver treats an ESX cluster as one compute node, whereas our proposal is in line with the baremetal nova driver: we would like to present one nova proxy driver that serves multiple ESX clusters.
Blueprint information
- Status:
- Complete
- Approver:
- Russell Bryant
- Priority:
- Low
- Drafter:
- Kiran Kumar Vaddi
- Direction:
- Approved
- Assignee:
- Kiran Kumar Vaddi
- Definition:
- Approved
- Series goal:
- Accepted for havana
- Implementation:
- Implemented
- Milestone target:
- 2013.2
- Started by:
- Kiran Kumar Vaddi
- Completed by:
- Russell Bryant
Related branches
Related bugs
Sprints
Whiteboard
Hi, I spoke with some folks at HP about this at the summit and prior. I'd really like to see either a more in-depth spec, or work-in-progress code posted somewhere to get a better idea of the proposed changes, particularly with respect to interactions with the scheduler. I was told that at least a version of this code was done, so perhaps posting it as a work-in-progress branch would be easiest, as long as it is not too large of a patch. --?
There are multiple blueprints for the vmware driver all assigned to the same person. I'd like some clarification that 1) there's consensus around these features, 2) who is actually doing the work, and 3) a realistic target for completion for each one, before adding these to the havana release plan. --russellb
Hi, I would need some more time to post the code, hopefully in the next 2 weeks. Let me share some details so that we can have a discussion on IRC.
1. The compute driver's nova.conf will have the names of the clusters to be managed. Since vCenter supports having multiple clusters of the same name, the full path starting from the datacenter/folder can be specified.
2. The driver on startup reads this conf and makes each cluster available as a compute node. This is achieved by implementing the get_available_nodes method of the driver interface in the vCenter driver. Each cluster is now available as a compute node. We use the hypervisor_hostname to store the cluster identifier.
3. When the scheduler selects a node and posts a message on the rabbitmq, the vCenter driver will retrieve the cluster identifier from the message and create the instance on the specific cluster.
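The three steps above can be sketched roughly as follows. The class, option, and method bodies are assumptions for illustration; the real driver plugs into nova's virt driver interface and stores cluster morefs rather than plain names:

```python
# Hypothetical sketch of steps 1-3: read a configured cluster list,
# expose each cluster as a compute node via get_available_nodes(),
# and dispatch a boot request to the cluster named in the message.
# All names here are illustrative, not the actual driver's code.

class FakeConf:
    # Stands in for nova.conf; full paths disambiguate clusters
    # that share a name across datacenters/folders.
    cluster_names = ["DC1/folder1/ClusterA", "DC1/ClusterB"]

class MultiClusterDriver:
    def __init__(self, conf):
        # Map each node identifier to its configured cluster path;
        # the real driver would keep the cluster's moref here.
        self._nodes = {path.split("/")[-1]: path
                       for path in conf.cluster_names}

    def get_available_nodes(self, refresh=False):
        # Report each managed cluster as a separate compute node.
        return list(self._nodes)

    def spawn(self, context, instance, node):
        # The scheduler's chosen node arrives with the boot request;
        # create the instance on the matching cluster.
        cluster_path = self._nodes[node]
        return "created %s on %s" % (instance, cluster_path)

driver = MultiClusterDriver(FakeConf())
print(driver.get_available_nodes())   # ['ClusterA', 'ClusterB']
print(driver.spawn(None, "vm-1", "ClusterB"))
# -> created vm-1 on DC1/ClusterB
```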
The cleanup of nodes and the stats updates are all done similarly to baremetal. This relies on the multi compute-node support that exists in nova today, which was introduced when bare metal support was originally written; it will continue to be in nova even as other aspects of the baremetal driver are proposed to be moved out into a separate project/module.
-kirankv
Gerrit topic: https:/
Addressed by: https:/
Multiple Clusters using single compute service
The patch put up for review also includes the implementation of the following blueprint.
https:/
Since there is very little difference between supporting clusters and supporting resource pools, the changes are in a single patch (https:/
Addressed by: https:/
Nova support for vmware cinder driver