Redis optimization

Registered by Lars Butler

Currently, when an OpenQuake job completes, its KVS data remains in Redis. This data is no longer needed once the job finishes; it's just clutter. We've noticed that if the number of records on a given Redis server instance is allowed to reach into the millions, the OpenQuake engine takes a performance hit. When an OpenQuake job completes, all Redis data associated with that job should be deleted.
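
As a rough sketch of how the cleanup could work (the key pattern and function name below are hypothetical; the real pattern would need to match whatever key scheme the engine actually writes), a job could purge its own keys on completion with something like:

    import redis

    def gc_job_keys(job_id, host='localhost', port=6379):
        """Delete all KVS records belonging to a completed job.

        Assumption: every key for a job shares a common job_id
        prefix; the '<job_id>::*' pattern here is illustrative,
        not our real naming scheme.
        """
        client = redis.Redis(host=host, port=port)
        # KEYS is O(N) over the keyspace, but acceptable for a
        # one-shot GC pass at job teardown.
        keys = client.keys('%s::*' % job_id)
        if keys:
            client.delete(*keys)  # DEL accepts multiple keys at once
        return len(keys)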
Also, we're currently using Redis in a very 'memcached' way (simple key/value pairs with JSON strings for data structures). In a few cases we take advantage of Redis's native data types (lists, for example), but the bulk of our data is stored as plain JSON strings. The JSON encoding/decoding is ugly and adds a serialize/deserialize round trip to every read and write; we should investigate this and look into using Redis data types instead (see the sketch below).
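
To illustrate the difference (the 'site' key names and fields below are made up for the example, not our actual schema), compare storing a record as a JSON blob versus as a Redis hash:

    import json
    import redis

    client = redis.Redis()

    # Current style: the whole record is one JSON string, so reading
    # a single field means fetching and decoding everything.
    client.set('site:1', json.dumps({'lon': '1.0', 'lat': '2.0'}))
    lon = json.loads(client.get('site:1'))['lon']

    # With a Redis hash, fields are addressable individually and
    # there is no encode/decode step for the structure itself.
    client.hset('site:1:h', 'lon', '1.0')
    client.hset('site:1:h', 'lat', '2.0')
    lon = client.hget('site:1:h', 'lon')    # fetch one field
    record = client.hgetall('site:1:h')     # or the whole hash as a dict

One nice side effect of hashes is that a partial update (HSET on a single field) doesn't require rewriting the whole serialized record.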

In summary, I propose the following updates to our utilization of Redis:

1) Garbage collection; make jobs clean up after themselves.
2) Use less JSON and more Redis data types (Hashes, Lists, Sets, etc.).

Blueprint information

Status: Started
Approver: None
Priority: Low
Drafter: Lars Butler
Direction: Needs approval
Assignee: None
Definition: New
Series goal: None
Implementation: Started
Milestone target: None
Started by: John Tarter

