Flavors per service_type

Registered by yogesh-mehra

There is a need to enhance Trove to filter the flavors returned from Nova based on the service type, so that only the relevant flavors are returned when requested.

In addition, Trove should ensure that only the flavors a ServiceType is meant to work with are allowed, so some validation needs to be done on instance create, resize, etc.

Some more info here: https://gist.github.com/vipulsabhaya/6599241

Blueprint information

Status:
Complete
Approver:
Michael Basnight
Priority:
Undefined
Drafter:
None
Direction:
Needs approval
Assignee:
Sushil Kumar
Definition:
Obsolete
Series goal:
None
Implementation:
Unknown
Milestone target:
None
Completed by
Nikhil Manchanda

Related branches

Sprints

Whiteboard

Gerrit topic: https://review.openstack.org/#q,topic:bp/service-type-filter-on-flavors,n,z

Addressed by: https://review.openstack.org/43741
    Adds support for admin to create flavors through mgmt API. Extends the management API to allow an admin to create flavors categorized by service type in Trove. Implements: blueprint service-type-filter-on-flavors

Addressed by: https://review.openstack.org/44767
    Adds support for admin to create flavors through mgmt API

Addressed by: https://review.openstack.org/44769
    Adding integration tests and instance validation for flavor create API

Addressed by: https://review.openstack.org/44948
    Adding instance validation for flavor create API

Addressed by: https://review.openstack.org/44950
    Adding instance validation for flavor create API

Addressed by: https://review.openstack.org/49479
    Adds support for admin to register/deregister/get flavors-service mapping through mgmt API

What is a “flavor”?

Virtual hardware templates are called "flavors" in OpenStack, defining sizes for RAM, disk, number of cores, and so on.

How does Trove manage flavors?

    It doesn’t. Trove passes flavor identifiers received via python-troveclient directly to Nova/Heat, without any manipulation.
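
For context, a simplified and hypothetical sketch of what this pass-through looks like with a novaclient-style API; it is not the actual Trove code path, only an illustration that the flavor ID is forwarded unmodified:

    # Hypothetical pass-through: the flavor ID from the request is handed to
    # Nova as-is; Trove performs no filtering or validation of its own.
    def create_compute_instance(nova, name, flavor_id, image_id):
        flavor = nova.flavors.get(flavor_id)      # no Trove-side check here
        return nova.servers.create(name=name, image=image_id, flavor=flavor)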

Why does Trove require filtering flavors by datastore?

    Datastore hardware requirements

SQL databases hardware requirements

The following hardware components are required for Oracle Database:
    Physical memory (RAM): 256 MB minimum; 512 MB recommended (at least)
    Virtual memory: double the amount of RAM
    Disk space: Basic Installation Type total: 2.04 GB; Advanced Installation Types total: 1.94 GB
    Processor: 550 MHz minimum

MySQL. Minimum System Requirements:
    2 or more CPU cores
    2 or more GB of RAM
    Disk I/O subsystem applicable for a write-intensive database
Recommended System Requirements (if monitoring 100 or more MySQL servers):
    4 or more CPU cores
    8 or more GB of RAM
    Disk I/O subsystem applicable for a write-intensive database

PostgreSQL. Minimum Production Requirements:
    64-bit CPU
    64-bit Operating System
    2 gigabytes of memory
    Dual CPU/Core
    RAID 1

NoSQL databases hardware requirements

Cassandra

Choosing appropriate hardware depends on selecting the right balance of the following resources: memory, CPU, disks, number of nodes, and network.
Memory. The more memory a Cassandra node has, the better its read performance. More RAM allows for larger cache sizes and reduces disk I/O for reads. More RAM also allows memory tables (memtables) to hold more recently written data. Larger memtables lead to fewer SSTables being flushed to disk and fewer files to scan during a read. The ideal amount of RAM depends on the anticipated size of your hot data.
    For dedicated hardware, the optimal price-performance sweet spot is 16GB to 64GB; the minimum is 8GB.
    For virtual environments, the optimal range may be 8GB to 16GB; the minimum is 4GB.
    For testing light workloads, Cassandra can run on a virtual machine as small as 256MB.
    For setting Java heap space.
CPU. Insert-heavy workloads are CPU-bound in Cassandra before becoming memory-bound. (All writes go to the commit log, but Cassandra is so efficient in writing that the CPU is the limiting factor.) Cassandra is highly concurrent and uses as many CPU cores as available:
    For dedicated hardware, 8-core processors are the current price-performance sweet spot.
    For virtual environments, consider using a provider that allows CPU bursting, such as Rackspace Cloud Servers.
Disk. Disk space depends a lot on usage, so it's important to understand the mechanism. Cassandra writes data to disk when appending data to the commit log for durability and when flushing memtable to SSTable data files for persistent storage. SSTables are periodically compacted. Compaction improves performance by merging and rewriting data and discarding old data. However, depending on the compaction strategy and size of the compactions, compaction can substantially increase disk utilization and data directory volume. For this reason, you should leave an adequate amount of free disk space available on a node: 50% (worst case) for SizeTieredCompactionStrategy and large compactions, and 10% for LeveledCompactionStrategy.

MongoDB
Hardware Considerations
MongoDB is designed specifically with commodity hardware in mind and has few hardware requirements or limitations. MongoDB’s core components run on little-endian hardware, primarily x86/x86_64 processors. Client libraries (i.e. drivers) can run on big or little endian systems.
Hardware Requirements and Limitations
The hardware for the most effective MongoDB deployments has the following properties.
Allocate Sufficient RAM and CPU
As with all software, more RAM and a faster CPU clock speed are important for performance. In general, databases are not CPU bound. As such, increasing the number of cores can help, but does not provide significant marginal return.

Conclusion

    As you can see, each database has its own hardware requirements, both minimum and recommended. From the Trove developer side I don’t see any problems with limiting this through flavors, but from the user/administrator perspective I’d like to be at least notified, and ideally blocked, when provisioning with an inappropriate flavor that does not meet the minimum requirements.
Datastore base model extension.

The datastore base model should be extended with a new column:

    name: FLAVORS
    type: TEXT

It should contain:

    list of flavors allowed for provisioning
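
A minimal sketch of how the extended model could look, assuming Trove’s SQLAlchemy-based models; the class name, table name, and column details below are illustrative rather than the actual Trove code:

    # Hypothetical sketch: a FLAVORS column on the datastore model.
    from sqlalchemy import Column, String, Text
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class DBDatastore(Base):                      # illustrative model class
        __tablename__ = 'datastores'
        id = Column(String(36), primary_key=True)
        name = Column(String(255))
        # New column: a comma-separated (or JSON-encoded) list of flavor IDs
        # allowed for provisioning; NULL/empty means "all flavors allowed".
        flavors = Column(Text(), nullable=True)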

Trove core ReST API extension

    Description

        HTTP method: GET
        URL: /{tenant_id}/flavors/datastore/{id}

    Request body

        body: { }

    Response object

        flavors: {
            flavor_1: {
                'id': INT,
                'links': links,
                'name': 'xlarge',
                'ram': 8Gb
            },
            flavor_2: {
                'id': INT,
                'links': links,
                'name': 'x-super-large',
                'ram': 32Gb
            }
        }
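
A hedged sketch of the filtering this endpoint could perform, under the assumption that the datastore record stores its allowed flavors as a comma-separated list of IDs (empty meaning unrestricted); the helper name and attributes are illustrative:

    # Hypothetical filtering helper for GET /{tenant_id}/flavors/datastore/{id}
    def flavors_for_datastore(nova_flavors, datastore):
        """Return only the Nova flavors allowed for this datastore."""
        if not datastore.flavors:
            return list(nova_flavors)             # empty => no restriction
        allowed = set(datastore.flavors.split(','))
        return [f for f in nova_flavors if str(f.id) in allowed]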
Trove-manage util extension.

    Suggestion:

        trove-manage datastore-flavor-add datastore_id_or_name flavor_id_or_name
        trove-manage datastore-flavor-delete datastore_id_or_name flavor_id_or_name

    or:

        trove-manage datastore-flavor-assign datastore_id_or_name flavor_id_or_name
        trove-manage datastore-flavor-unassign datastore_id_or_name flavor_id_or_name
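
A rough sketch of what the helpers behind these commands might do, reusing the comma-separated FLAVORS column assumed above; the save() call and attribute names are illustrative:

    # Hypothetical back-end for datastore-flavor-add / datastore-flavor-delete.
    def datastore_flavor_add(datastore, flavor_id):
        allowed = set(datastore.flavors.split(',')) if datastore.flavors else set()
        allowed.add(str(flavor_id))
        datastore.flavors = ','.join(sorted(allowed))
        datastore.save()                          # illustrative persistence call

    def datastore_flavor_delete(datastore, flavor_id):
        allowed = set(datastore.flavors.split(',')) if datastore.flavors else set()
        allowed.discard(str(flavor_id))
        datastore.flavors = ','.join(sorted(allowed)) or None
        datastore.save()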

Python-troveclient extension.

    Suggestion:

        trove flavor-list --datastore UUID_or_name
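
A possible client-side shape for this option, assuming the troveclient flavors manager grows a datastore filter; the datastore keyword and the exact Client constructor arguments are assumptions for illustration, not an existing interface:

    # Hypothetical python-troveclient usage for the proposed filter.
    from troveclient.v1 import client

    username, password = 'demo', 'secret'
    tenant, auth_url = 'demo-tenant', 'http://keystone:5000/v2.0'

    trove = client.Client(username, password, project_id=tenant,
                          auth_url=auth_url)
    for flavor in trove.flavors.list(datastore='mysql'):    # proposed keyword
        print(flavor.id, flavor.name, flavor.ram)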

Workflow elaboration

If the “flavors” field is empty in a given datastore model, it means that all flavors are allowed for provisioning.
If a wrong flavor is passed, Trove should raise an exception with an appropriate message, something like: “Flavor is not allowed for this datastore.”
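
A minimal validation sketch for the create/resize path, under the same comma-separated FLAVORS assumption; the exception class and attribute names are illustrative:

    # Hypothetical validation hook invoked on instance create and resize.
    class FlavorNotAllowed(Exception):
        pass

    def validate_flavor(datastore, flavor_id):
        """Raise if the requested flavor is not allowed for the datastore."""
        if not datastore.flavors:
            return                                # empty => all flavors allowed
        allowed = set(datastore.flavors.split(','))
        if str(flavor_id) not in allowed:
            raise FlavorNotAllowed("Flavor %s is not allowed for datastore %s."
                                   % (flavor_id, datastore.name))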

Iterations

    Iteration 1: filtering flavors per datastore.
    Iteration 2: check flavors on provisioning datastore.
    Iteration 3: flavor check on instance resize action.

WIKI page: https://wiki.openstack.org/wiki/TroveFlavorsPerDatastore


Work Items


Subscribers

No subscribers.