[EDP] Add an engine for a Spark standalone deployment

Registered by Trevor McKay

Sahara needs an EDP implementation that can run jobs on clusters created with the Spark plugin. This implementation should include the three basic EDP functions:

run_job()
get_job_status()
cancel_job()

The Spark plugin creates "Spark standalone" deployments, which use Spark's native scheduler rather than YARN or Mesos. Therefore the EDP implementation must use only facilities provided natively by Spark and Linux.
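The three entry points above could be sketched roughly as follows. This is a minimal illustration, not Sahara's actual engine: the class and parameter names are hypothetical, and it launches the driver locally via spark-submit, whereas the real engine would run equivalent steps on the cluster's master node. Since only Spark and Linux facilities are available, status is derived from the driver process and cancellation is a plain signal.

```python
import signal
import subprocess


class SparkStandaloneEngine:
    """Hypothetical sketch of the three EDP functions for a Spark
    standalone cluster (native scheduler, no YARN/Mesos)."""

    def __init__(self, spark_submit="spark-submit",
                 master_url="spark://master:7077"):
        self.spark_submit = spark_submit
        self.master_url = master_url
        self._procs = {}  # job_id -> Popen handle for the driver

    def run_job(self, job_id, app_jar, main_class, *app_args):
        # Launch the driver with spark-submit; the standalone
        # scheduler places the executors.
        cmd = [self.spark_submit, "--master", self.master_url,
               "--class", main_class, app_jar] + list(app_args)
        proc = subprocess.Popen(cmd)
        self._procs[job_id] = proc
        return proc.pid

    def get_job_status(self, job_id):
        # Poll the driver process: still alive means RUNNING,
        # otherwise map the exit code to a terminal status.
        rc = self._procs[job_id].poll()
        if rc is None:
            return "RUNNING"
        return "SUCCEEDED" if rc == 0 else "DONEWITHERROR"

    def cancel_job(self, job_id):
        # Cancellation with Linux facilities only: signal the
        # driver process and wait for it to exit.
        proc = self._procs[job_id]
        if proc.poll() is None:
            proc.send_signal(signal.SIGTERM)
            proc.wait()
        return "KILLED"
```

The PID returned by run_job is what ties the job record back to the running driver; tracking it is what makes polling and cancellation possible without a cluster-level resource manager.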

Blueprint information

Status:
Complete
Approver:
Sergey Lukjanov
Priority:
High
Drafter:
Trevor McKay
Direction:
Approved
Assignee:
Trevor McKay
Definition:
Approved
Series goal:
Accepted for juno
Implementation:
Implemented
Milestone target:
2014.2
Started by:
Sergey Lukjanov
Completed by:
Sergey Lukjanov

Related branches

Sprints

Whiteboard

Waiting for a spec.

Gerrit topic: https://review.openstack.org/#q,topic:bp/edp-spark-standalone,n,z

Addressed by: https://review.openstack.org/109403
    [EDP] Add an engine for a Spark standalone deployment

Addressed by: https://review.openstack.org/107871
    Implement EDP for a Spark standalone cluster

Gerrit topic: https://review.openstack.org/#q,topic:bp/edp-spark-job-type,n,z


Work Items
