message proxy server

Registered by Isaku Yamahata

A proxy server which relays messages, especially RPC messages, between two messaging servers.

The use case
This is a requirement of the Neutron servicevm framework.
https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms

One messaging server is in the OpenStack control network, which the OpenStack servers connect to,
and another messaging server is in an OpenStack tenant network, which the agents connect to.
An OpenStack server wants to send RPC messages to agents in tenant networks,
but the control network isn't directly connected to the tenant networks.
So the proxy server relays RPC messages over a unix domain socket to cross the Linux netns boundary.
The supported RPC patterns are cast, call and fanout. Notifications aren't supported because they aren't needed at this moment, but they would be easy to add.
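
For illustration, here is a minimal sketch of the relay idea in terms of the oslo.messaging public API. The transport URLs, topic and method names are hypothetical, and the actual patches add a unix-domain-socket transport driver rather than a second broker:

    # Hypothetical relay: an RPC server on the control-network transport
    # forwards each method to an RPC client on the tenant-side transport.
    from oslo_config import cfg
    import oslo_messaging

    control = oslo_messaging.get_transport(
        cfg.CONF, 'rabbit://control-broker:5672/')
    tenant = oslo_messaging.get_transport(
        cfg.CONF, 'rabbit://tenant-broker:5672/')

    client = oslo_messaging.RPCClient(
        tenant, oslo_messaging.Target(topic='agent_rpc'))


    class RelayEndpoint(object):
        # One relayed method as an example; call() preserves the RPC
        # request/response semantics across the relay.
        def do_work(self, ctxt, **kwargs):
            return client.call(ctxt, 'do_work', **kwargs)


    server = oslo_messaging.get_rpc_server(
        control,
        oslo_messaging.Target(topic='agent_rpc', server='proxy'),
        [RelayEndpoint()])
    server.start()
    server.wait()

With the unix-socket driver from the patches below, presumably only the transport URL on one side would change; the relay logic stays the same.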

Blueprint information

Status:
Complete
Approver:
Mark McLoughlin
Priority:
Medium
Drafter:
Isaku Yamahata
Direction:
Needs approval
Assignee:
None
Definition:
Obsolete
Series goal:
None
Implementation:
Needs Code Review
Milestone target:
None
Started by:
Isaku Yamahata
Completed by:
Isaku Yamahata

Whiteboard

Gerrit topic: https://review.openstack.org/#q,topic:bp/message-proxy-server,n,z

Addressed by: https://review.openstack.org/77862
    _driver: implement unix domain support

Addressed by: https://review.openstack.org/77863
    proxy: implement proxy server

---

I haven't looked into this too carefully yet, but at first glance I find it difficult to understand your use case in detail - for example, how are the two "proxy server agents" managed? Is Nova or Neutron responsible for starting them? How is the agent connected to the tenant network? How is the hostname used to identify the agent passed to the proxy server? etc.

Basically, I think a more detailed writeup of the use case is important here. A wiki page would make it easier to go into this in the detail required.

Another issue I think you should cover in this blueprint is what messaging semantics are supported by this driver - in other words, are RPC cast/call, fanout, notification sending and receiving, etc. all supported by the driver?

I'm assuming this is Juno material. It would be quite helpful to present on this at the Juno design summit, if possible.
---
[Isaku - Mar 20, 2014]
Thanks for the comment.
I've filed a topic suggestion for the summit: http://summit.openstack.org/cfp/details/52
I will write up a document for the use case and so on.
RPC cast/call/fanout are supported, but notifications aren't because I don't need them yet. However, they would be easy to add.
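
For reference, the three supported patterns look like this from the client side (a sketch with a hypothetical topic and method names; {} stands for an empty request context):

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_transport(cfg.CONF)
    client = oslo_messaging.RPCClient(
        transport, oslo_messaging.Target(topic='agent_rpc'))

    client.cast({}, 'update_config', key='value')  # fire-and-forget
    status = client.call({}, 'get_status')         # blocks for a reply
    client.prepare(fanout=True).cast({}, 'ping')   # delivered to all agents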

[Isaku - April 7, 2014]
The first document is written up:
https://wiki.openstack.org/wiki/Oslo/blueprints/message-proxy-server

---

To me, it seems that what we need is 'notify a guest service that something has happened' (which - incidentally - is not an RPC, because there's not necessarily any need for a reply), and what this does is 'extend the RPC mechanism across a privilege boundary' - which certainly does satisfy our need but is a gross extension of the requirement itself.

While I agree that we need to get notifications (internally, messages) out, I'm not sure that I would treat it as an RPC proxy, and (admittedly skimming the code a bit here) I'm not sure I see the protections I would expect about who gets to see messages, who gets to subscribe, and how you manage things to avoid overloading the service. Also, we don't need to send messages into OpenStack, not even RPC replies; we already have APIs, and it seems to me that we should use the existing and well-tested interface rather than adding a backdoor one. Finally, what happens if a message is lost or the receiver crashes? -- Ian.

---
[Isaku - April 13, 2014]
Hi Ian.
I added a "How neutron agents work" section.
As you already pointed out, simple notification doesn't work due to message loss/reordering.
To address agent crashes, the agent needs to periodically send liveness messages to the server.
Regarding protections, the logic to subscribe/manage doesn't live in oslo.messaging but in the agent/server code, because oslo.messaging is just the transport. The patches above just provide transport support for unix sockets and the RPC proxy.
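
A minimal sketch of such a liveness loop on the agent side (the report_state method and agent_state topic are hypothetical; the server side would mark an agent dead when no report arrives within a timeout window):

    import time

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_transport(cfg.CONF)
    client = oslo_messaging.RPCClient(
        transport, oslo_messaging.Target(topic='agent_state'))

    def heartbeat_loop(host, interval=10):
        # cast() is fire-and-forget, so a lost heartbeat is simply
        # compensated for by the next one.
        while True:
            client.cast({}, 'report_state', host=host, alive=True)
            time.sleep(interval)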

---
Isaku, like Mark I am interested to know what is going to manage the agents running inside tenant networks. The networks will be created at users' request, and with each new network a new agent must be started. The same applies to network deletion. So what is going to start/kill these agents?

Also I have a suggestion for the design. On the wiki page it is suggested to establish communication between the agent running in the tenant network and the guest agent via an AMQP broker, either running the broker on a single VM or on each VM where a guest agent runs. I think it would make sense to use ZeroMQ here instead: it needs no broker and is good for 1-to-1 communication. Besides, since all communication will be done within the tenant network, auth is not required here. Oslo Messaging already has a driver for it, but frankly, when I tried to use it, I didn't succeed; working with it seems to be trickier than with Rabbit/Qpid.
-- Dmitry Mescheryakov
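
To illustrate the broker-less point, a bare pyzmq request/reply pair needs nothing but the two endpoints (raw sockets here, not the oslo.messaging zmq driver; addresses and payloads are made up):

    import zmq

    ctx = zmq.Context()

    # Guest-agent side: a reply socket bound inside the tenant network.
    rep = ctx.socket(zmq.REP)
    rep.bind('tcp://127.0.0.1:5555')

    # Tenant-network agent side: a request socket connecting directly,
    # with no broker in between.
    req = ctx.socket(zmq.REQ)
    req.connect('tcp://127.0.0.1:5555')

    req.send_json({'method': 'get_status'})
    print(rep.recv_json())           # {'method': 'get_status'}
    rep.send_json({'status': 'ok'})
    print(req.recv_json())           # {'status': 'ok'}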
---
[Isaku - April 15, 2014]
As for agent management, we can have an agent that manages subagents if necessary.
Such a super-agent can be run in the guest by a startup script and can be assumed to always be running.
Regarding the message queue service, AMQP is just a suggestion; any kind of messaging service can be used. Yes, ZeroMQ is one of the possibilities.

Addressed by: https://review.openstack.org/91500
    tests/test_notifier_logger.py: use six.moves._thread

---

[dhellmann - 27 May 2014] -- If I remember correctly, the consensus at the summit was to look at using a tool like Marconi. Is that still your plan? Should this blueprint still be targeted for Juno?

[Isaku - 28 May 2014] -- Yes, that's still my plan, and I'm still targeting it for Juno.
But not right now; I'll settle the servicevm work first.

