Redis Driver - Pooling Support

Registered by Kurt Griffiths

Enhance the basic Redis driver to support a pool of backend nodes, with the following features:

* Sharding individual queues across multiple nodes for efficiency and horizontal scalability. We may be able to use redis-cluster if we move to timestamp-based message IDs (scatter-gather, then sort by timestamp; the timestamps must be granular enough). A hash-ring sketch follows this list.
* Replication of each message across 2-3 nodes for HA. We can't use master-slave replication, because Redis ACKs the write before propagating the message.
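
To make the sharding idea concrete, here is a minimal consistent-hash-ring sketch that maps queue names onto a set of Redis nodes. The class and constant names (RedisRing, VIRTUAL_REPLICAS) are hypothetical and only illustrate the approach; they are not the driver's actual API.

    import bisect
    import hashlib

    import redis

    VIRTUAL_REPLICAS = 64  # virtual points per node to smooth the distribution


    def _hash(key):
        # Hash to a large integer so points spread evenly around the ring
        return int(hashlib.md5(key.encode()).hexdigest(), 16)


    class RedisRing(object):
        """Map queue names onto a ring of Redis nodes."""

        def __init__(self, nodes):
            # nodes: list of (host, port) tuples
            self._clients = [redis.Redis(host=h, port=p) for h, p in nodes]
            self._ring = []  # sorted list of (point, node index)

            for idx, (host, port) in enumerate(nodes):
                for replica in range(VIRTUAL_REPLICAS):
                    point = _hash('%s:%s-%d' % (host, port, replica))
                    self._ring.append((point, idx))

            self._ring.sort()
            self._points = [point for point, _ in self._ring]

        def client_for(self, queue_name):
            # All messages for a given queue land on (and are read from)
            # the same node
            pos = bisect.bisect(self._points, _hash(queue_name)) % len(self._ring)
            return self._clients[self._ring[pos][1]]

With a ring, adding a node only remaps the queues whose hash points fall between the new node's virtual points and their predecessors; that property is the main argument for a ring over a static catalog.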

== Sharding Questions ==

* Can we get away with using redis-cluster?
    * If not, then...
        * Should we use a hash ring or a catalog?
        * If we use a ring, what is the process for introducing new nodes?
* Should we use a per-node counter, or simply rely on the system clock and ntpd? In the latter case, is there any possibility of a race condition causing a message to be missed by a client? See also: http://redis.io/commands/time (both marker options are sketched after this list)
* What do we do when a client does not provide a marker? We need a way to find the head of the queue (i.e., the oldest message).
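
The sketch below contrasts the two marker strategies (per-node counter vs. the TIME command) and shows one way to find the head of a queue when no marker is supplied, using a sorted set scored by marker. The key names (q:<name>:counter, q:<name>:msgs) are hypothetical.

    import redis

    r = redis.Redis()


    def next_marker_counter(queue):
        # Option 1: per-node monotonic counter. Immune to clock skew, but the
        # counter lives on (and dies with) the node that owns the shard.
        return r.incr('q:%s:counter' % queue)


    def next_marker_clock(queue):
        # Option 2: server clock via TIME, which returns (seconds, microseconds).
        # Microsecond granularity, but two writes in the same microsecond (or
        # skew between nodes) could still reorder or hide a message.
        seconds, microseconds = r.time()
        return seconds * 1000000 + microseconds


    def post(queue, body, marker):
        # Score each message by its marker so reads can resume from any point
        # with ZRANGEBYSCORE (bodies assumed unique here for brevity)
        r.zadd('q:%s:msgs' % queue, {body: marker})


    def head(queue):
        # No marker from the client: the oldest message is simply the
        # lowest-scored member of the sorted set
        oldest = r.zrange('q:%s:msgs' % queue, 0, 0, withscores=True)
        return oldest[0] if oldest else None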

== Proposed Design ==

* Embed node ID in marker. That way, markers can use per-node counters.
* Query each node in turn, following behind the same cycle used in writing the messages.
* Use a clock (simple counter or system clock) to create locality of message writes and reads, obviating the need for scatter-gather queries.
* Implement replication with an active-active design that copies messages in parallel via async I/O. (A rough write-path sketch follows this list.)
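
A rough write-path sketch for this design, assuming redis-py's asyncio client (redis.asyncio): the marker embeds the node ID alongside a per-node counter, and the message is copied to the primary and its replicas in parallel rather than relying on Redis master-slave replication. The node list, REPLICATION_FACTOR, and key names are hypothetical.

    import asyncio

    import redis.asyncio as aredis

    NODES = [('10.0.0.1', 6379), ('10.0.0.2', 6379), ('10.0.0.3', 6379)]
    REPLICATION_FACTOR = 2  # copies in addition to the primary

    clients = [aredis.Redis(host=h, port=p) for h, p in NODES]


    async def post(queue, body, node_id):
        primary = clients[node_id]

        # Per-node counter; embedding the node ID in the marker lets readers
        # follow the same node cycle the writer used
        counter = await primary.incr('q:%s:counter' % queue)
        marker = '%d.%d' % (node_id, counter)

        # Active-active replication: write to the primary and its replicas in
        # parallel instead of letting Redis ACK before propagating the message
        replicas = [clients[(node_id + i) % len(clients)]
                    for i in range(1, REPLICATION_FACTOR + 1)]

        await asyncio.gather(*(
            node.zadd('q:%s:msgs' % queue, {body: counter})
            for node in [primary] + replicas
        ))

        return marker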

More thoughts here (applies also to mongodb driver): https://gist.github.com/kgriffs/2f83c10654f1ace06fc3

~

Food for thought: http://antirez.com/news/78

Blueprint information

Status: Not started
Approver: None
Priority: Low
Drafter: Kurt Griffiths
Direction: Approved
Assignee: None
Definition: Drafting
Series goal: Accepted for kilo
Implementation: Not started
Milestone target: None

Whiteboard

How about trying out the new specs process with this bp? (kgriffs)

This may slip to Kilo. A durable message store for graduation is probably higher priority (TBD). (kgriffs)

