Inventory JSON should be better formatted
=========================================

Registered by Kevin Carter on 2014-12-10

The dynamic inventory script should be updated to produce a cleaner, more easily consumable layout. Presently, a number of entries in the inventory parser have simply been bolted onto container and host entries, and the plays then consume them in a less than ideal way.

Problem Description
-------------------

As the inventory is created, it should be saved and re-rendered with its keys sorted, so that if the inventory ever needs to be modified with a text editor it is not a nightmare to pick through everything to find the entry that needs the update.
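The sorted-key rendering can be sketched with Python's standard ``json`` module (a minimal illustration only, not the actual ``dynamic_inventory.py`` code; the host entry is abbreviated from the example inventory below):

```python
import json

# Abbreviated host entry; the real inventory is produced by
# dynamic_inventory.py and carries many more keys.
inventory = {
    "infra1_glance_container-96f49a2f": {
        "physical_host": "infra1",
        "component": "glance_api",
        "ansible_ssh_host": "172.29.239.73",
    }
}

# sort_keys=True renders every entry in a stable, alphabetical order,
# which makes hand-editing the saved inventory file far less painful.
print(json.dumps(inventory, indent=4, sort_keys=True))
```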

Proposed Change
---------------

All of the initial work to update the inventory would need to be done in the ``dynamic_inventory.py`` file. Once that is changed, the various plays and templates will need to be updated for the new layout, specifically wherever they access **host_vars**.

Old Inventory:
  .. code-block:: json

      "578126-infra1_glance_container-266782ed": {
          "storage_netmask": "255.255.252.0",
          "is_metal": false,
          "container_network": {
              "container_interface": "eth1",
              "container_netmask": "255.255.252.0",
              "container_bridge": "br-mgmt"
          },
          "component": "glance_api",
          "ansible_ssh_host": "172.24.243.105",
          "snet_address": "172.24.251.204",
          "container_netmask": "255.255.252.0",
          "physical_host": "578126-infra1",
          "container_name": "578126-infra1_glance_container-266782ed",
          "snet_netmask": "255.255.252.0",
          "storage_address": "172.24.244.73",
          "container_address": "172.24.243.105"
      }

New Inventory:
  .. code-block:: json

    "infra1_glance_container-96f49a2f": {
        "ansible_ssh_host": "172.29.239.73",
        "cinder_default_availability_zone": "cinderAZ_1",
        "cinder_storage_availability_zone": "cinderAZ_1",
        "component": "glance_api",
        "container_address": "172.29.239.73",
        "container_name": "infra1_glance_container-96f49a2f",
        "container_networks": {
            "management_address": {
                "address": "172.29.239.73",
                "bridge": "br-mgmt",
                "interface": "eth1",
                "netmask": "255.255.252.0",
                "type": "veth"
            },
            "snet_address": {
                "address": "172.29.249.25",
                "bridge": "br-snet",
                "interface": "eth3",
                "netmask": "255.255.252.0",
                "type": "veth"
            },
            "storage_address": {
                "address": "172.29.245.89",
                "bridge": "br-storage",
                "interface": "eth2",
                "netmask": "255.255.252.0",
                "type": "veth"
            }
        },
        "physical_host": "infra1",
        "physical_host_group": "os-infra_hosts",
        "properties": {
            "container_release": "trusty",
            "service_name": "glance"
        }
    }

Notable changes:
  * The addition of the **physical_host_group** key for container items.
  * The **properties** key holds *other* information about the container type. The service_name and the default container_release are easy first entries, and would be added as host vars for a given container type.
  * The **container_networks** key houses all of the networks used in a container.
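The mapping from the old flat keys to the new **container_networks** structure can be sketched as follows (a hypothetical helper, not part of ``dynamic_inventory.py``; the bridge and interface names follow the example inventories above and are assumptions, not fixed by this spec):

```python
# Old flat address keys mapped to (network name, bridge, interface)
# triplets used by the new layout. Bridge and interface values mirror
# the example inventory above; real deployments may differ.
NETWORK_MAP = {
    "container_address": ("management_address", "br-mgmt", "eth1"),
    "storage_address": ("storage_address", "br-storage", "eth2"),
    "snet_address": ("snet_address", "br-snet", "eth3"),
}


def build_container_networks(old_entry):
    """Fold the flat *_address/*_netmask keys from the old layout into
    the nested container_networks structure of the new layout."""
    networks = {}
    for old_key, (name, bridge, interface) in NETWORK_MAP.items():
        if old_key not in old_entry:
            continue
        # The matching netmask key shares the prefix of the address key.
        netmask_key = old_key.replace("_address", "_netmask")
        networks[name] = {
            "address": old_entry[old_key],
            "bridge": bridge,
            "interface": interface,
            "netmask": old_entry.get(netmask_key),
            "type": "veth",
        }
    return networks
```

Running this against the old example entry yields the management, storage, and snet networks in the new nested form.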

Playbook Impact
---------------

A lot of this work has been accomplished here: https://github.com/cloudnull/os-lxc-hosts

The following roles would need to be updated for networking:
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/neutron_common
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/cinder_common

The following roles would need to be updated for *is_metal*:
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/swift_container
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/neutron_common
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/cinder_device_add
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/neutron_add_network_interfaces
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/container_common
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/common
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/nova_compute_devices

The following roles could likely be removed:
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/neutron_add_network_interfaces
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/container_extra_setup

The following script may also need to be updated:
  * /opt/ansible-lxc-rpc/scripts/inventory-manage.py
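As an illustration of the kind of update ``inventory-manage.py`` might receive, a sorted host listing over the new layout could look like this (a sketch only; the ``_meta``/``hostvars`` nesting follows the Ansible dynamic-inventory convention and is assumed here, as is the helper name):

```python
import json


def list_containers(raw_inventory):
    """Return (name, physical_host, component) tuples for every host,
    sorted by container name, from a serialized inventory document."""
    hostvars = json.loads(raw_inventory).get("_meta", {}).get("hostvars", {})
    return [
        (name, host.get("physical_host"), host.get("component"))
        for name, host in sorted(hostvars.items())
    ]


# Minimal example document; a real inventory carries many more keys.
example = json.dumps({
    "_meta": {"hostvars": {
        "infra1_glance_container-96f49a2f": {
            "physical_host": "infra1",
            "component": "glance_api",
        },
    }},
})
print(list_containers(example))
```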

Alternatives
------------

Keep things as-is

Security Impact
---------------

None

Notifications Impact
--------------------

None

Other End User Impact
---------------------

The user would be able to better understand what is being created and why, and would be able to logically sort and update items in the inventory if needed.

Performance Impact
------------------

There will be an impact when writing the inventory for the first time; however, I suspect that it is minor and would go unnoticed by the end user.

Other Deployer Impact
---------------------

Faster deployments. With the inventory parsed in this way, much more of the container configuration can be done up front. Because the container network and deployment setup can happen earlier, most of the host and container setup roles can be consolidated, leaving far fewer redundant tasks.

Developer Impact
----------------

The playbooks would need to be updated to use the new layout. Future playbook development should be cleaner, though some of the original playbooks would need to be refactored.

Blueprint information
---------------------

* Status: Complete
* Approver: None
* Priority: Undefined
* Drafter: Kevin Carter
* Direction: Approved
* Assignee: None
* Definition: Obsolete
* Series goal: None
* Implementation: Implemented
* Milestone target: None
* Started by: Kevin Carter on 2015-06-17
* Completed by: Kevin Carter on 2015-02-26


Whiteboard
----------

Additional changes would need to be made to the rpc_environment.yml and rpc_user_config.yml files. These changes would provide additional functionality but should remain backwards compatible with old releases.

Questions:

* Could we remove service_name from inventory and pass it as a role parameter where needed? This would allow users to combine roles if necessary (as happens when multiple roles are added to the physical host in an AIO), and would prevent us from having to manually source group_vars.

@hughsaunders: yes, we should remove the global group_vars munging we are doing. The service name should be a default per role that is namespaced and can be overridden when needed, e.g. "https://github.com/os-cloud/os_nova/blob/master/defaults/main.yml#L69".

This BP also goes hand in hand with the ansible-galaxy BP and is affected by the rackspace-namesake BP. Here is a sample of what I believe the refactored JSON inventory should look like. "https://gist.github.com/cloudnull/e586ef61f0edc8684cf6"

