Inventory JSON should be better formatted.

Registered by Kevin Carter

The dynamic inventory script should be updated to produce a cleaner, more easily consumable layout. Presently, a number of entries in the inventory parser have simply been appended to container and host entries, which the plays then consume in a less than ideal way.

Problem Description

As the inventory is created it should be saved and re-rendered with its keys sorted, so that if the inventory ever needs to be modified with a text editor it is not a nightmare to pick through everything to find what needs the update.
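The sorted rendering described above is a one-liner with the standard library. A minimal sketch (the function name and file path are illustrative, not from the actual inventory script):

.. code-block:: python

    import json

    def save_inventory(inventory, path):
        """Render the inventory to disk with keys sorted so that hand
        edits are easy to locate later."""
        with open(path, "w") as f:
            json.dump(inventory, f, indent=4, sort_keys=True)

    # With sort_keys=True the rendered keys come back in a stable,
    # alphabetical order regardless of insertion order.
    rendered = json.dumps(
        {"physical_host": "infra1", "component": "glance_api"},
        indent=4, sort_keys=True)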

Proposed Change

All of the initial work to update the inventory would need to be done in the ```` file, though once that is changed the various plays and templates will need to be updated for the new layout, specifically where they access **host_vars**.

Old Inventory:
  .. code-block:: json

      "578126-infra1_glance_container-266782ed": {
          "storage_netmask": "",
          "is_metal": false,
          "container_network": {
              "container_interface": "eth1",
              "container_netmask": "",
              "container_bridge": "br-mgmt"
          "component": "glance_api",
          "ansible_ssh_host": "",
          "snet_address": "",
          "container_netmask": "",
          "physical_host": "578126-infra1",
          "container_name": "578126-infra1_glance_container-266782ed",
          "snet_netmask": "",
          "storage_address": "",
          "container_address": ""

New Inventory:
  .. code-block:: json

    "infra1_glance_container-96f49a2f": {
        "ansible_ssh_host": "",
        "cinder_default_availability_zone": "cinderAZ_1",
        "cinder_storage_availability_zone": "cinderAZ_1",
        "component": "glance_api",
        "container_address": "",
        "container_name": "infra1_glance_container-96f49a2f",
        "container_networks": {
            "management_address": {
                "address": "",
                "bridge": "br-mgmt",
                "interface": "eth1",
                "netmask": "",
                "type": "veth"
            "snet_address": {
                "address": "",
                "bridge": "br-snet",
                "interface": "eth3",
                "netmask": "",
                "type": "veth"
            "storage_address": {
                "address": "",
                "bridge": "br-storage",
                "interface": "eth2",
                "netmask": "",
                "type": "veth"
        "physical_host": "infra1",
        "physical_host_group": "os-infra_hosts",
        "properties": {
            "container_release": "trusty",
            "service_name": "glance"

Notable changes:
  * The addition of the **physical_host_group** key for container items
  * The **properties** key holds *other* information about the container type. Presently I see the service_name and the default container_release as easy first entries, which would be added as host vars for a given container type.
  * The **container_networks** key would house all of the networks being used in a container.
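As an illustration of why the nested layout is easier to consume, here is a hypothetical sketch of how a play or template helper might walk **container_networks** instead of picking at flat ``container_*``/``snet_*``/``storage_*`` keys (the ``iter_networks`` helper is not part of the actual codebase):

.. code-block:: python

    # The host dict mirrors the "New Inventory" example above (trimmed).
    host = {
        "container_networks": {
            "management_address": {"bridge": "br-mgmt", "interface": "eth1",
                                   "netmask": "", "type": "veth"},
            "snet_address": {"bridge": "br-snet", "interface": "eth3",
                             "netmask": "", "type": "veth"},
        },
        "properties": {"container_release": "trusty",
                       "service_name": "glance"},
    }

    def iter_networks(host_vars):
        """Yield (name, bridge, interface) for every network on a
        container, in a stable sorted order."""
        for name, net in sorted(host_vars.get("container_networks", {}).items()):
            yield name, net["bridge"], net["interface"]

    networks = list(iter_networks(host))

With the old flat layout, adding a network meant adding new top-level keys and touching every consumer; here, consumers that iterate the mapping pick up new networks for free.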

Playbook Impact

A lot of this work has been accomplished here:

The following roles would need to be updated for networking:
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/neutron_common
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/cinder_common

The following roles would need to be updated for *is_metal*:
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/swift_container
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/neutron_common
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/cinder_device_add
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/neutron_add_network_interfaces
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/container_common
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/common
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/nova_compute_devices

The following roles may be able to be removed:
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/neutron_add_network_interfaces
  * /opt/ansible-lxc-rpc/rpc_deployment/roles/container_extra_setup

The following script may be able to be updated:
  * /opt/ansible-lxc-rpc/scripts/


Alternatives

Keep things as-is.

Security Impact


Notifications Impact


Other End User Impact

The user would be able to better understand what is being created and why, and they would be able to logically sort and update items in the inventory if needed.

Performance Impact

There will be an impact when writing the inventory for the first time; however, I suspect that it is minor and would go unnoticed by the end user.

Other Deployer Impact

Faster deployments. With the inventory parsed in this way, much more of the container configuration can be done up front. Being able to do the container network setup and deployment earlier means most of the host and container setup roles can be consolidated, leaving far fewer redundant tasks.

Developer Impact

The playbooks would need to be updated to use the new layout. Future playbook development should be cleaner; however, some of the original playbooks would need to be refactored.

Blueprint information

Kevin Carter
Series goal:
Milestone target:
Started by: Kevin Carter
Completed by: Kevin Carter

Related branches



Additional changes would need to be made to the rpc_environment.yml and rpc_use_config.yml files. These changes would provide additional functionality but should be backwards compatible with old releases.


* Could we remove service_name from the inventory and pass it as a role parameter where needed? This would allow users to combine roles if necessary (as happens when multiple roles are added to the physical host in an AIO). This would prevent us from having to manually source group_vars.

@hughsaunders: yes, we should remove the global group_vars munging we are doing. The service name should be a default per role that is namespaced and can be overridden when needed. IE: "".

This BP also goes hand in hand with the ansible-galaxy BP and is affected by the rackspace-namesake BP. Here is a sample of what I believe the refactored JSON inventory should look like. ""


Work Items
