Implement automatic sharding, shard management, data migration

Registered by Kostja Osipov

Implement automatic sharding, shard management, data migration.

Shard configuration should be query-able on any node. The shard manager is a separate application that subscribes to all changes in metadata and topology and initiates data re-balancing and re-distribution across the shards. The shard manager is a role, similar to that of a transaction coordinator: it does not have to be a standalone application.
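As a purely illustrative sketch of this role, the Python snippet below shows a manager that consumes metadata/topology change events and schedules a rebalance when the shard set changes. All names (ShardManager, schedule_rebalance, the event shapes) are assumptions made for this example and are not part of any existing API.

```python
# Hypothetical sketch of the shard-manager role: subscribe to metadata and
# topology changes, schedule a rebalance when the shard set changes.
# All names are illustrative; this is not an existing Tarantool API.

import queue

class ShardManager:
    def __init__(self, event_source):
        # event_source delivers dicts such as
        # {"type": "shard_added", "shard": "shard-3"}.
        self.events = event_source
        self.shards = set()

    def handle(self, event):
        if event["type"] == "shard_added":
            self.shards.add(event["shard"])
            self.schedule_rebalance("shard added")
        elif event["type"] == "shard_removed":
            self.shards.discard(event["shard"])
            self.schedule_rebalance("shard removed")

    def run(self):
        # The manager is a long-lived role: it blocks on the change feed.
        while True:
            self.handle(self.events.get())

    def schedule_rebalance(self, reason):
        # Placeholder: a real manager would compute a migration plan here.
        print(f"rebalance scheduled ({reason}); shards: {sorted(self.shards)}")


if __name__ == "__main__":
    feed = queue.Queue()
    manager = ShardManager(feed)
    manager.handle({"type": "shard_added", "shard": "shard-1"})
    manager.handle({"type": "shard_added", "shard": "shard-2"})
```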

A key property of automatic sharding is automatic data rebalancing across shards when an individual shard grows too large or when shards are added or removed. Another duty of the shard manager, therefore, is to watch data distribution and initiate a rebalance when it becomes uneven.
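As one possible reading of "uneven", the sketch below flags a rebalance when the largest shard exceeds the average shard size by a configurable factor. The 1.5x threshold and the function name are assumptions made for this illustration, not part of the blueprint.

```python
# Illustrative imbalance check: trigger a rebalance when the largest shard
# holds noticeably more data than the average. The 1.5x threshold is an
# arbitrary assumption for this sketch.

def needs_rebalance(shard_sizes, threshold=1.5):
    """shard_sizes: mapping of shard id -> data size in bytes."""
    if not shard_sizes:
        return False
    average = sum(shard_sizes.values()) / len(shard_sizes)
    return max(shard_sizes.values()) > threshold * average

# Example: shard-2 has grown well past the others, so a rebalance is due.
sizes = {"shard-1": 10_000, "shard-2": 40_000, "shard-3": 12_000}
print(needs_rebalance(sizes))  # True
```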

Since each shard contains both local data (data that belongs to the shard itself) and globally replicated data (metadata and the sharding topology), sharding as a technique necessitates at least two simultaneous replication transports. The first is the HA stream, perhaps built on synchronous master-master replication, which provides high availability of an individual shard (a shard may consist of several nodes) and also transfers changes in sharding topology and global metadata of the entire cluster.
The second stream carries rebalancing requests and can be built on top of asynchronous replication plus an rsync-like transport to seed empty shards.
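To make the two-transport idea concrete, here is an illustrative description of the two streams as a data structure; the field names, modes, and values are assumptions for this sketch only, not an existing configuration format.

```python
# Illustrative description of the two replication transports mentioned above.
# Field names and values are assumptions for this sketch only.

from dataclasses import dataclass

@dataclass
class ReplicationStream:
    name: str
    mode: str        # e.g. "sync-master-master" or "async"
    carries: tuple   # what kinds of changes flow over this stream

ha_stream = ReplicationStream(
    name="ha",
    mode="sync-master-master",
    carries=("local shard data", "sharding topology", "global metadata"),
)

rebalance_stream = ReplicationStream(
    name="rebalance",
    mode="async",    # plus an rsync-like transport to seed empty shards
    carries=("rebalancing requests", "migrated data"),
)

for stream in (ha_stream, rebalance_stream):
    print(stream)
```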

If we mix the two streams in the write-ahead log and make their events indistinguishable from each other, we will never be able to tell which event belongs to which stream, and thus will not be able to verify the consistency of a node.
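One way to keep the streams distinguishable, sketched below purely as an illustration, is to tag each write-ahead-log record with the identifier of the stream it originated from. The record layout here is an assumption for this example and is not the actual Tarantool WAL format.

```python
# Illustrative WAL records tagged with their originating replication stream,
# so a node can tell HA traffic from rebalancing traffic. The layout is an
# assumption for this sketch, not the actual Tarantool WAL format.

HA_STREAM = 0         # synchronous master-master stream
REBALANCE_STREAM = 1  # asynchronous rebalancing stream

def wal_record(stream_id, lsn, payload):
    return {"stream": stream_id, "lsn": lsn, "payload": payload}

def split_by_stream(records):
    """Separate a mixed WAL into per-stream sequences for consistency checks."""
    streams = {HA_STREAM: [], REBALANCE_STREAM: []}
    for record in records:
        streams[record["stream"]].append(record)
    return streams

wal = [
    wal_record(HA_STREAM, 1, "insert into local space"),
    wal_record(REBALANCE_STREAM, 2, "migrate bucket 17 to shard-2"),
    wal_record(HA_STREAM, 3, "update topology: add shard-3"),
]
print(split_by_stream(wal))
```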

Blueprint information

Status: Not started
Approver: Kostja Osipov
Priority: Essential
Drafter: Kostja Osipov
Direction: Approved
Assignee: UNera
Definition: Approved
Series goal: Accepted for 1.5
Implementation: Unknown
Milestone target: None
