Manually managing the bay nodes

Registered by hongbin

Magnum manages bay nodes by using ResourceGroup from Heat. This approach works, but it makes it infeasible to manage heterogeneity across bay nodes, which is a frequently requested feature. For example, there is a request to provision bay nodes across availability zones [1], and another request to provision bay nodes with different sets of flavors [2]. For features like these, ResourceGroup won't work well.

The proposal is to remove the usage of ResourceGroup and manually create a Heat stack for each bay node. For example, for creating a cluster with 2 masters and 3 minions, Magnum is going to manage 6 Heat stacks (instead of 1 big Heat stack as it does right now):
* A kube cluster stack that manages the global resources
* Two kube master stacks that manage the two master nodes
* Three kube minion stacks that manage the three minion nodes

The proposal might require an additional API endpoint to manage nodes or a group of nodes, but that should be addressed in a separate blueprint.
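To make the stack layout above concrete, here is a small sketch of the set of stacks Magnum would manage for a cluster of 2 masters and 3 minions (the naming scheme is hypothetical, not Magnum's actual one):

```python
def stack_names(cluster, n_masters, n_minions):
    # One global "cluster" stack, plus one stack per master and per minion.
    names = ["%s-cluster" % cluster]
    names += ["%s-master-%d" % (cluster, i) for i in range(n_masters)]
    names += ["%s-minion-%d" % (cluster, i) for i in range(n_minions)]
    return names

# 2 masters + 3 minions -> 6 stacks in total
names = stack_names("kube", 2, 3)
```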


Blueprint information

Not started
Needs approval

Below are the implementation options:

* Option 1:
Implement it in Heat templates declaratively. For example, if users want to create a cluster with 5 nodes, Magnum will generate a set of mappings of parameters for each node. For example:

  $ heat stack-create -f cluster.yaml \
      -P count=5 \
      -P az_map='{"0":"az1",...,"4":"az4"}' \
      -P flavor_map='{"0":"",...,"4":""}'
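A sketch of how such parameter maps could be generated from a per-node spec (the `nodes` structure and helper name are illustrative, not Magnum's real data model):

```python
import json

def build_param_maps(nodes):
    # Build the index->value JSON maps passed as az_map / flavor_map above.
    az_map = {str(i): n.get("availability_zone", "") for i, n in enumerate(nodes)}
    flavor_map = {str(i): n.get("flavor", "") for i, n in enumerate(nodes)}
    return json.dumps(az_map), json.dumps(flavor_map)

nodes = [{"availability_zone": "az1", "flavor": "m1.small"},
         {"availability_zone": "az2", "flavor": ""}]
az_json, flavor_json = build_param_maps(nodes)
```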

The top-level template contains a single resource group. The trick is passing %index% to the nested template.

  $ cat cluster.yaml
  heat_template_version: 2015-04-30
  parameters:
    count:
      type: integer
    az_map:
      type: json
    flavor_map:
      type: json
  resources:
    server_group:
      type: OS::Heat::ResourceGroup
      properties:
        count: {get_param: count}
        resource_def:
          type: server.yaml
          properties:
            availability_zone_map: {get_param: az_map}
            flavor_map: {get_param: flavor_map}
            index: '%index%'

In the nested template, use 'index' to retrieve the parameters.

  $ cat server.yaml
  heat_template_version: 2015-04-30
  parameters:
    availability_zone_map:
      type: json
    flavor_map:
      type: json
    index:
      type: string
  resources:
    server:
      type: OS::Nova::Server
      properties:
        image: the_image
        flavor: {get_param: [flavor_map, {get_param: index}]}
        availability_zone: {get_param: [availability_zone_map, {get_param: index}]}
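In plain Python terms, the nested get_param is just a keyed dictionary lookup by the node's stringified index (an analogy for illustration, not Heat code):

```python
def resolve_node_params(flavor_map, availability_zone_map, index):
    # Equivalent of:
    #   flavor:            {get_param: [flavor_map, {get_param: index}]}
    #   availability_zone: {get_param: [availability_zone_map, {get_param: index}]}
    return flavor_map[index], availability_zone_map[index]

flavor_map = {"0": "m1.small", "1": "m1.large"}
az_map = {"0": "az1", "1": "az2"}
flavor, az = resolve_node_params(flavor_map, az_map, "1")
```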

This approach has a critical drawback. As pointed out by Zane [1], we cannot remove a member from the middle of the list. For that reason, the use of ResourceGroup was not recommended.

* Option 2:
Generate the Heat template by using the generator [2]. The code to generate the Heat template would look something like this:

  $ cat
  from os_hotgen import composer
  from os_hotgen import heat

  tmpl_a = heat.Template(description="...")

  for group in rsr_groups:
      # parameters
      param_name = group.name + '_flavor'
      param_type = 'string'
      param_flavor = heat.Parameter(name=param_name, type=param_type)
      param_name = group.name + '_az'
      param_type = 'string'
      param_az = heat.Parameter(name=param_name, type=param_type)

      # resources
      rsc = heat.Resource(group.name, 'OS::Heat::ResourceGroup')
      resource_def = {
          'type': 'server.yaml',
          'properties': {
              'availability_zone': heat.FnGetParam(param_az.name),
              'flavor': heat.FnGetParam(param_flavor.name),
          },
      }
      resource_def_prp = heat.ResourceProperty('resource_def', resource_def)
      count_prp = heat.ResourceProperty('count', group.count)

  print composer.compose_template(tmpl_a)

* Option 3:
Remove the usage of ResourceGroup and manually manage a Heat stack for each bay node. For example, for a cluster with 5 nodes, Magnum is going to create 5 Heat stacks:

  for node in nodes:
      fields = {
          'parameters': {
              'flavor': node.flavor,
              'availability_zone': node.availability_zone,
          },
          'template': 'server.yaml',
      }
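Concretely, Option 3 would issue one stack-create call per node. A runnable sketch with a stand-in Heat client (the real call would go through python-heatclient; the class and helper below are illustrative):

```python
class FakeStacksManager:
    # Stand-in for python-heatclient's stacks manager; records each create.
    def __init__(self):
        self.created = []

    def create(self, **fields):
        self.created.append(fields)

def create_node_stacks(stacks, nodes, template_body):
    # One Heat stack per bay node, as Option 3 proposes.
    for i, node in enumerate(nodes):
        fields = {
            'stack_name': 'node-%d' % i,
            'template': template_body,
            'parameters': {
                'flavor': node['flavor'],
                'availability_zone': node['availability_zone'],
            },
        }
        stacks.create(**fields)

stacks = FakeStacksManager()
nodes = [{'flavor': 'm1.small', 'availability_zone': 'az1'}] * 5
create_node_stacks(stacks, nodes, '<contents of server.yaml>')
```

Deleting one node then maps to deleting exactly one stack, which avoids the remove-from-the-middle problem of ResourceGroup.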

I have a question for option 3. Does this option require users to specify a flavor for each node, or does it allow users to specify a node count per flavor, e.g., 10 nodes with flavor_x, 5 nodes with flavor_y, etc.?
Will it scale for large clusters, e.g., hundreds of nodes? Thanks! --wanyen
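If the count-per-flavor form were allowed, expanding it into the per-node specs that Option 3 consumes is straightforward (an illustrative sketch, not part of the proposal):

```python
def expand_flavor_counts(flavor_counts):
    # e.g. {"flavor_x": 10, "flavor_y": 5} -> 15 per-node specs.
    nodes = []
    for flavor, count in sorted(flavor_counts.items()):
        nodes.extend({"flavor": flavor} for _ in range(count))
    return nodes

nodes = expand_flavor_counts({"flavor_x": 10, "flavor_y": 5})
```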


Work Items
