Make puppet shine in clouds

Registered by Mathias Gug

 # modular stored config backends (cloud friendly backends)
 # multi puppetmaster setup (running multiple puppetmasters in clouds)
 # monitoring integration: automatically register puppet clients with monitoring frameworks

Blueprint information

Status:
Complete
Approver:
Jos Boumans
Priority:
Undefined
Drafter:
Mathias Gug
Direction:
Needs approval
Assignee:
None
Definition:
Obsolete
Series goal:
None
Implementation:
Deferred
Milestone target:
None
Completed by:
Mathias Gug

Related branches

Sprints

Whiteboard

Reviewers: ttx + jib

ttx review / 20100526:
 * Spec doc is missing, should be completed if we intend to work on that
 * Should move "goals" to blueprint description, and "Notes" to the spec doc (if any)
 * Suggested assignees: mathiaz / smoser
 * Estimated complexity: 3-4
 * Suggested priority: 3/Low
 * Suggested Subcycle: Iteration 3 (Beta)

jib review / 20100526:
  * UDS outcome suggests significant work needs to be done upstream before this
    can be attempted. Deferring to Maverick+1 may be sensible for that reason.

Goal:
 * Investigate techniques for load-balancing puppetmasters on UEC/EC2.

Work items:
Investigate puppet client behavior for catalog compiled from old manifests (puppetmaster instance out of sync): TODO
Fix/document wrong behavior (with upstream): TODO
Test and document a puppetmaster load-balanced with HAProxy (on UEC): TODO
Test and document a puppetmaster load-balanced on EC2 with AWS Elastic Load Balancing: TODO
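For the HAProxy work item, a starting point could be a minimal TCP-mode configuration like the sketch below. The server addresses are invented; puppet traffic is TLS on 8140, so balancing happens at the TCP layer and certificate verification stays between clients and masters.

```
# haproxy.cfg sketch -- backend addresses are illustrative only
listen puppetmasters
    bind *:8140
    mode tcp
    balance roundrobin
    server master1 10.0.0.11:8140 check
    server master2 10.0.0.12:8140 check
```

TCP mode avoids terminating TLS on the proxy, which would otherwise break puppet's client-certificate authentication.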

= UDS Lucid Discussion notes: Puppet-n-da-cloud =


Stored configs:
 * modular cloud-friendly backends: zookeeper, couchdb, S3
 * integration/role of puppetqd
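To make the "modular backends" idea concrete, a pluggable storedconfig layer might look like the sketch below. This interface is invented for illustration (it is not puppet's actual API); the point is that couchdb, zookeeper, or S3 adapters would all implement the same small contract.

```python
# Sketch only: a hypothetical pluggable storedconfig backend interface.

class StoredConfigBackend:
    """Contract a cloud-friendly backend adapter would implement."""
    def save(self, host, resources):
        raise NotImplementedError

    def exported_resources(self, tag):
        raise NotImplementedError

class MemoryBackend(StoredConfigBackend):
    """Toy in-memory adapter standing in for couchdb/zookeeper/S3."""
    def __init__(self):
        self.store = {}

    def save(self, host, resources):
        self.store[host] = resources

    def exported_resources(self, tag):
        # Collect resources from every host that carry the given tag.
        return [r for rs in self.store.values() for r in rs
                if tag in r.get("tags", ())]

backend = MemoryBackend()
backend.save("web1", [{"type": "Nagios_service", "tags": ("monitor",)}])
print(len(backend.exported_resources("monitor")))  # 1
```

A shared interface like this would also let the monitoring-integration idea (exporting client resources to a monitoring framework) reuse whichever backend is deployed.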

Running multiple masters:
 * failover
 * load balancing
 * site manifests synchronization:
   - impact on puppet clients.

-------------
Multimaster

Options for backend
  riak
  couch
  zookeeper
  s3 -- ??? -- horrible, last writer wins
  Is eventual consistency a problem? No, because check-in intervals are long

Talked about distributed caching and distributed storeconfigs

module dir on S3, or invalidate cache after checkouts

ocfs/gfs for puppet content?

catalog versions should never revert:
add logic to the node so it refuses to backstep
(split-network scenario)
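The "don't backstep" note could be sketched as a small guard on the client: remember the highest catalog version applied so far and refuse anything older, so a stale puppetmaster behind a network split cannot roll the node backwards. The names here are illustrative, not actual puppet code.

```python
# Sketch of the "don't backstep" guard (hypothetical, not puppet's code).

class CatalogGuard:
    def __init__(self):
        self.last_version = None   # highest catalog version applied so far

    def should_apply(self, version):
        """Accept a catalog only if it is not older than the last one."""
        if self.last_version is not None and version < self.last_version:
            return False           # stale master: skip this run
        self.last_version = version
        return True

guard = CatalogGuard()
print(guard.should_apply(10))  # True
print(guard.should_apply(12))  # True
print(guard.should_apply(11))  # False: refuse the older catalog
```

The guard fails safe: a refused run simply leaves the node on its current configuration until a master with a current catalog answers.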

 * deploying puppetmasters via cloud-config

A solution based on Riak to distribute the cached manifests (http://github.com/mpdehaan)

Site manifests management with VCS (bzr):
 * dev -> staging -> prod workflow
 * integration with LP.

Monitoring integration:
 * use puppet/facter to gather monitoring statistics?

 Goal from session:
  * Information from upstream
  * How to get puppet integrated into the cloud

Backend config datastore options:
 * couchdb
 * s3
 * riak
 * cassandra
 * zookeeper

From IRC:
  <hazmat> concurrent updates
  <hazmat> a overwrites b
  <hazmat> riak, cassandra use eventual consistency (riak w/ vector clocks)
  <hazmat> multi-master s3 is a disaster
  <hazmat> its going to be last writer wins on s3
  <hazmat> ideal would be a true multi master consistent view
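The IRC point about vector clocks can be sketched in a few lines: with last-writer-wins (S3), one concurrent update silently overwrites the other, whereas vector clocks let the store detect that neither update has seen the other and surface the conflict. This is a generic illustration, not Riak's implementation.

```python
# Sketch: vector clocks detect the concurrent updates hazmat describes.
# A clock is a dict of node -> counter.

def merge_max(a, b):
    """Pointwise max of two vector clocks."""
    return {n: max(a.get(n, 0), b.get(n, 0)) for n in set(a) | set(b)}

def descends(a, b):
    """True if clock a has seen everything in clock b."""
    return all(a.get(n, 0) >= c for n, c in b.items())

def concurrent(a, b):
    """Neither clock descends from the other: a real conflict."""
    return not descends(a, b) and not descends(b, a)

# Two masters update the same key from a common ancestor {m1: 1}.
ancestor = {"m1": 1}
write_a = merge_max(ancestor, {"m1": 2})   # master 1 increments its slot
write_b = merge_max(ancestor, {"m2": 1})   # master 2 increments its slot
print(concurrent(write_a, write_b))        # True: conflict detected, not lost
```

Under last-writer-wins the second PUT would simply replace the first, which is exactly the "a overwrites b" failure mode quoted above.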

Environments
 * Staging/Production/QA
 * The client's configuration states its environment
 * To change the environment, use puppet to update the conf file,
   or set the environment in an external node classifier
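Setting the environment from a node classifier could look like the sketch below: an external node classifier (ENC) script that prints the YAML puppet expects, with the environment decided server-side instead of in each client's puppet.conf. The hostname-to-environment mapping is invented for illustration.

```python
#!/usr/bin/env python
# Sketch of an external node classifier assigning environments
# server-side. The mapping below is hypothetical; a real deployment
# would query a datastore.

import sys

ENVIRONMENTS = {
    "web1.example.com": "production",
    "web2.example.com": "staging",
}

def classify(hostname):
    """Return a YAML document of the shape an ENC script emits."""
    env = ENVIRONMENTS.get(hostname, "production")
    return "---\nenvironment: %s\nclasses:\n  - base\n" % env

if __name__ == "__main__" and len(sys.argv) > 1:
    print(classify(sys.argv[1]))
```

Wired in via puppet.conf (`node_terminus = exec`, `external_nodes = /path/to/script`), this moves the staging/production decision out of the clients entirely.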

ACTIONS:
 1) Look up puppet client behavior on old manifests served by the puppetmaster
 2) Be able to specify environments through external nodes (maybe move out of client's puppet.conf)
 3) Upstream to implement stored config backend modules
 4) Use HTTP proxying, a hardware load balancer, or similar for load-balancing puppetmasters
