Provide OpenStack credential for COE services

Registered by Ton Ngo

Some COEs need to request OpenStack services via API calls during their operation. An example is the Kubernetes external load balancer feature, which talks to Neutron to create and manage load balancer pools, members, VIPs, and health monitors. The Swarm registry needs similar support. We need to provide the proper credentials for these uses in a secure manner, and the support should be generalized so it can be reused in different scenarios.

Blueprint information

Status:
Complete
Approver:
Adrian Otto
Priority:
High
Drafter:
Ton Ngo
Direction:
Approved
Assignee:
Ton Ngo
Definition:
Approved
Series goal:
Accepted for ocata
Implementation:
Implemented
Milestone target:
ocata-2
Started by
Spyros Trigazis
Completed by
Spyros Trigazis

Related branches

Sprints

Whiteboard

A trust user that serves this purpose is created for every cluster, and its credential is injected into the nodes.
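For illustration, a service running on a node could use the injected trustee credential to request a trust-scoped token from Keystone. The sketch below only constructs the Identity v3 request body; the user ID, password, and trust ID are placeholders, not values Magnum actually uses:

```python
import json

# Build a Keystone Identity v3 request body for a trust-scoped token.
# Scoping to the trust yields a token limited to the roles the cluster
# owner delegated to the trustee user.
def trust_token_request(user_id, password, trust_id):
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {"id": user_id, "password": password},
                },
            },
            "scope": {"OS-TRUST:trust": {"id": trust_id}},
        }
    }

# Placeholder values for illustration only.
body = trust_token_request("trustee-user-id", "trustee-password", "trust-id")
print(json.dumps(body, indent=2))
```

The resulting JSON would be POSTed to the cloud's Keystone `/v3/auth/tokens` endpoint; the returned token can then be used against Neutron without ever exposing the cluster owner's own password.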

Recapping discussion on the mailing list on the implementation.

tango 9/20/15
A little background for context. In the current k8s cluster, all k8s pods and services run within a private subnet (on Flannel); they can access each other, but they cannot be reached from the external network. The way to publish an endpoint to the external network is to specify this attribute in your service manifest:
type: LoadBalancer
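For example, a complete Service manifest using this attribute might look like the following (service name, selector, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend          # placeholder service name
spec:
  type: LoadBalancer          # ask k8s to provision an external load balancer
  selector:
    app: web-frontend         # pods backing this service
  ports:
  - port: 80                  # port exposed on the load balancer VIP
    targetPort: 8080          # port the pods listen on
```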
Then k8s will talk to OpenStack Neutron to create the load balancer pool, members, VIP, and health monitor. The user would associate the VIP with a floating IP, and the endpoint of the service would then be accessible from the external internet.
To talk to Neutron, k8s needs the user credential, which is stored in a config file on the master node. This includes the username, tenant name, and password. When k8s starts up, it loads the config file and creates an authenticated client with Keystone.
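As a sketch, such a config file (in the format read by the Kubernetes OpenStack cloud provider; all values are placeholders) might look like:

```ini
[Global]
auth-url=http://keystone.example.com:5000/v2.0
username=k8s-user
tenant-name=demo
password=secret                    ; the credential this blueprint aims to secure

[LoadBalancer]
subnet-id=<private-subnet-uuid>    ; subnet where LB pool members live
```

Anyone with access to this file on the master node can read the password, which is why how it gets there, and whose password it is, matters.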
The issue we need to find a good solution for is how to handle the password. With the current effort on security to make Magnum production-ready, we want to make sure to handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to use, but this will require sizeable change upstream in k8s. We have good reason to pursue this but it will take time.
For now, the current implementation is as follows:
1. In a bay-create, the Magnum client adds the password to the API call (normally it authenticates and sends only the token).
2. The conductor picks it up and passes it as an input parameter to the Heat templates.
3. When the master node is configured, the password is saved in the config file for the k8s services.
4. Magnum does not store the password internally.
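Schematically, the flow above could look like the following (all function and field names are hypothetical illustrations of the steps, not Magnum's actual internals):

```python
# Schematic sketch of the password flow in steps 1-4 above.

def client_bay_create(name, password):
    # Step 1: the client adds the password to the API request body.
    return {"name": name, "password": password}

def conductor_handle(request, stored_bays):
    # Step 2: the conductor forwards the password only as a Heat
    # stack parameter; step 4: it never persists it.
    heat_params = {"bay_name": request["name"],
                   "trustee_password": request["password"]}
    stored_bays.append({"name": request["name"]})  # no password stored
    return heat_params

def master_node_config(heat_params):
    # Step 3: Heat writes the password into the k8s config file on the
    # master node (shown here as a rendered string).
    return "[Global]\npassword={}\n".format(heat_params["trustee_password"])

bays = []
params = conductor_handle(client_bay_create("k8sbay", "s3cret"), bays)
config = master_node_config(params)
assert "s3cret" in config                       # reaches the master's config file
assert all("password" not in b for b in bays)   # Magnum keeps no copy
```

The key property the sketch illustrates is that the password exists transiently in the API call and the Heat parameters, and at rest only on the master node.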

This is probably not ideal, but it would let us proceed for now, and we can deprecate it later when we have a better solution. So, leaving aside the issue of how k8s should be changed, the question is: is this approach reasonable for the time being, or is there a better approach?

hongbin 9/20/15
If I understand your proposal correctly, it means the supplied password will be exposed to users in the same tenant (since the password is passed as a stack parameter, which is visible within the tenant). If users are not admins, they don't have the privilege to create a temporary user. As a result, users have to expose their own password to create a bay, which is suboptimal.

A slight amendment is to have the operator create a user dedicated to communication between k8s and the Neutron load balancer service. That user's password can be written into the config file, picked up by the conductor, and passed to Heat. The drawback is that there is no multi-tenancy for the OpenStack load balancer service, since all bays would share the same credential.

Another solution I can think of is to have Magnum create a Keystone domain [1] for each bay (using the admin credential in the config file) and assign the bay's owner to that domain. As a result, the user will have the privilege to create a bay user within that domain. Heat appears to support native Keystone resources [2], which would make administering Keystone users much easier. The drawback is that the implementation is more complicated.

[1] https://wiki.openstack.org/wiki/Domains
[2] http://specs.openstack.org/openstack/heat-specs/specs/kilo/keystone-resources.html
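Using the native Keystone resources from [2], the per-bay user could be expressed roughly as follows (a HOT sketch assuming the OS::Keystone::Domain and OS::Keystone::User resource types; names and properties are illustrative):

```yaml
heat_template_version: 2015-10-15

resources:
  bay_domain:
    type: OS::Keystone::Domain       # per-bay domain owned by the bay's user
    properties:
      name: bay-example-domain

  bay_user:
    type: OS::Keystone::User         # dedicated credential for k8s -> Neutron
    properties:
      name: bay-example-user
      domain: { get_resource: bay_domain }
```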

sdake 9/20/15
I believe the domain approach is the preferred long-term solution. It will require more R&D to execute than the other options, but it will also be completely secure.

eghobo 9/20/15
+1 to Hongbin's concerns about exposing passwords. I think we should start with a dedicated kube user in the Magnum config and move to Keystone domains afterward.


Work Items

Dependency tree

