Manually managing the bay nodes

Registered by hongbin

Magnum manages bay nodes by using ResourceGroup from Heat. This approach works, but it makes it infeasible to support heterogeneity across bay nodes, which is a frequently requested feature. For example, there is a request to provision bay nodes across availability zones [1], and another to provision bay nodes with different sets of flavors [2]. ResourceGroup does not handle these requests well.

The proposal is to remove the usage of ResourceGroup and have Magnum create a Heat stack for each bay node. For example, to create a cluster with 2 masters and 3 minions, Magnum would manage 6 Heat stacks (instead of 1 big Heat stack as it does now):
* A kube cluster stack that manages the global resources
* Two kube master stacks that manage the two master nodes
* Three kube minion stacks that manage the three minion nodes

The proposal might require an additional API endpoint to manage nodes or groups of nodes, but that should be addressed in a separate blueprint.

[1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
[2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor

Blueprint information

Status:
Not started
Approver:
hongbin
Priority:
Undefined
Drafter:
hongbin
Direction:
Needs approval
Assignee:
None
Definition:
New
Series goal:
None
Implementation:
Unknown
Milestone target:
None

Related branches

Sprints

Whiteboard

Below are the implementation options:

* Option 1:
Implement it declaratively in Heat templates. For example, if a user wants to create a cluster with 5 nodes, Magnum generates a mapping of parameters for each node:

  $ heat stack-create -f cluster.yaml \
      -P count=5 \
      -P az_map='{"0":"az1",...,"4":"az4"}' \
      -P flavor_map='{"0":"m1.foo",...,"4":"m1.bar"}'
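Magnum would need to assemble those map parameters from its node list before calling Heat. A minimal sketch of that step (the node tuples and the function name are hypothetical, not Magnum's actual data model):

```python
import json

def build_index_maps(nodes):
    """Build the index-keyed map parameters for the cluster stack.

    `nodes` is a list of (availability_zone, flavor) tuples; each node's
    position in the list becomes its string index in the maps.
    """
    az_map = {str(i): az for i, (az, _) in enumerate(nodes)}
    flavor_map = {str(i): flavor for i, (_, flavor) in enumerate(nodes)}
    return {
        'count': len(nodes),
        'az_map': json.dumps(az_map),
        'flavor_map': json.dumps(flavor_map),
    }

params = build_index_maps([('az1', 'm1.foo'), ('az1', 'm1.foo'), ('az4', 'm1.bar')])
```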

The top-level template contains a single resource group. The trick is to pass %index% to the nested template.

  $ cat cluster.yaml
  heat_template_version: 2015-04-30
  parameters:
    count:
      type: integer
    az_map:
      type: json
    flavor_map:
      type: json
  resources:
    AGroup:
      type: OS::Heat::ResourceGroup
      properties:
        count: {get_param: count}
        resource_def:
          type: server.yaml
          properties:
            availability_zone_map: {get_param: az_map}
            flavor_map: {get_param: flavor_map}
            index: '%index%'

In the nested template, use 'index' to retrieve the per-node parameters.

  $ cat server.yaml
  heat_template_version: 2015-04-30
  parameters:
    availability_zone_map:
      type: json
    flavor_map:
      type: json
    index:
      type: string
  resources:
    server:
      type: OS::Nova::Server
      properties:
        image: the_image
        flavor: {get_param: [flavor_map, {get_param: index}]}
        availability_zone: {get_param: [availability_zone_map, {get_param: index}]}

This approach has a critical drawback. As pointed out by Zane [1], a member cannot be removed from the middle of the list, which is why the usage of resource groups was not recommended.
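The drawback can be illustrated with a small simulation (purely hypothetical code, just to show the index shift): deleting a middle member renumbers every later index, so surviving servers pick up a different entry from the flavor/AZ maps than the one they were created with.

```python
def rebuild_maps(flavor_map, removed_index):
    """Simulate deleting the member at `removed_index` and renumbering.

    Every later index shifts down by one, so the remaining servers would be
    rebuilt with parameters that originally belonged to a different node.
    """
    flavors = [flavor_map[str(i)] for i in sorted(int(k) for k in flavor_map)
               if i != removed_index]
    return {str(i): f for i, f in enumerate(flavors)}

before = {'0': 'm1.small', '1': 'm1.medium', '2': 'm1.large'}
after = rebuild_maps(before, 1)
# Index 1 now maps to 'm1.large', so the server formerly at index 2 no longer
# matches its own parameters.
```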

* Option 2:
Generate the Heat template by using the generator [2]. The code to generate the Heat template would look something like this:

  $ cat generator.py
  from os_hotgen import composer
  from os_hotgen import heat

  tmpl_a = heat.Template(description="...")
  ...

  for group in rsr_groups:
      # parameters
      param_name = group.name + '_flavor'
      param_type = 'string'
      param_flavor = heat.Parameter(name=param_name, type=param_type)
      tmpl_a.add_parameter(param_flavor)
      param_name = group.name + '_az'
      param_type = 'string'
      param_az = heat.Parameter(name=param_name, type=param_type)
      tmpl_a.add_parameter(param_az)
      ...

      # resources
      rsc = heat.Resource(group.name, 'OS::Heat::ResourceGroup')
      resource_def = {
          'type': 'server.yaml',
          'properties': {
              'availability_zone': heat.FnGetParam(param_az.name),
              'flavor': heat.FnGetParam(param_flavor.name),
              ...
          }
      }
      resource_def_prp = heat.ResourceProperty('resource_def', resource_def)
      rsc.add_property(resource_def_prp)
      count_prp = heat.ResourceProperty('count', group.count)
      rsc.add_property(count_prp)
      tmpl_a.add_resource(rsc)
      ...

  print(composer.compose_template(tmpl_a))
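Independent of os_hotgen, the shape of the template this generator would emit can be sketched with plain Python dicts, one ResourceGroup per homogeneous group of nodes (group names and the `groups` input here are hypothetical):

```python
def build_group_template(groups):
    """Sketch of the generated template as a dict: one ResourceGroup per
    (name, count) group, each with its own flavor and AZ parameters."""
    tmpl = {'heat_template_version': '2015-04-30',
            'parameters': {},
            'resources': {}}
    for name, count in groups:
        tmpl['parameters'][name + '_flavor'] = {'type': 'string'}
        tmpl['parameters'][name + '_az'] = {'type': 'string'}
        tmpl['resources'][name] = {
            'type': 'OS::Heat::ResourceGroup',
            'properties': {
                'count': count,
                'resource_def': {
                    'type': 'server.yaml',
                    'properties': {
                        'availability_zone': {'get_param': name + '_az'},
                        'flavor': {'get_param': name + '_flavor'},
                    },
                },
            },
        }
    return tmpl

tmpl = build_group_template([('master', 2), ('minion', 3)])
```

Heterogeneity is only possible across groups here; all nodes inside one group still share the same flavor and availability zone.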

* Option 3:
Remove the usage of ResourceGroup and have Magnum manage a Heat stack for each bay node. For example, for a cluster with 5 nodes, Magnum creates 5 Heat stacks:

  for node in nodes:
      fields = {
          'stack_name': node.name,
          'parameters': {
              'flavor': node.flavor,
              'availability_zone': node.availability_zone,
              ...
          },
          'template': 'server.yaml',
          ...
      }
      osc.heat().stacks.create(**fields)
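A sketch of the per-node bookkeeping this enables, without any heatclient calls (field names are assumed from the snippet above; removing a specific node is just deleting that node's own stack, which is exactly what a count-based ResourceGroup could not do):

```python
def stack_fields(node):
    """Build the create-stack arguments for one node (node attributes are
    assumptions for illustration, not Magnum's actual model)."""
    return {
        'stack_name': node['name'],
        'parameters': {
            'flavor': node['flavor'],
            'availability_zone': node['availability_zone'],
        },
        'template': 'server.yaml',
    }

def remove_node(stacks, name):
    """Deleting one node only touches its own stack; all other node
    stacks (and their indices/parameters) are left alone."""
    return {n: s for n, s in stacks.items() if n != name}

stacks = {n['name']: stack_fields(n) for n in [
    {'name': 'minion-0', 'flavor': 'm1.foo', 'availability_zone': 'az1'},
    {'name': 'minion-1', 'flavor': 'm1.bar', 'availability_zone': 'az2'},
]}
stacks = remove_node(stacks, 'minion-0')
```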

=============================================================================

I have a question for option 3. Does this option require users to specify a flavor for each node, or does it allow users to specify a node count per flavor, e.g., 10 nodes for flavor_x, 5 nodes for flavor_y, etc.?
Will it scale for large clusters, e.g., hundreds of nodes? Thanks! --wanyen


Work Items
