Provide ability for length of list responses to be limited

Registered by Henry Nash

It should be possible for a cloud provider to optionally set a limit on the maximum number of records that will be returned, ensuring that poorly filtered queries do not take excessive time.

Like filtering, this should be enabled in the backends wherever possible, allowing the limit to be passed down to the underlying subsystem (e.g. SQL).

Blueprint information

Henry Nash
Needs approval
Henry Nash
Series goal: Accepted for icehouse
Milestone target: 2014.1
Started by: Henry Nash
Completed by: Dolph Mathews

Hackathon outcome: ask drivers to return one more item than the deployer-configured limit (N). If the driver returns that maximum (N+1), the last item is truncated from the list and an HTTP 203 Subset response is returned with the N objects, indicating to the client that filters should be applied to subsequent requests.
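The over-fetch-by-one detection described above can be sketched as follows. This is a minimal illustration, not Keystone's actual code; `driver_list` is a hypothetical stand-in for a backend list call.

```python
def list_with_limit(driver_list, limit):
    """Ask the driver for one more item than the configured limit (N).

    If the driver returns limit + 1 items, the result set was truncated:
    drop the extra item and flag the response so the client knows to
    apply filters on subsequent requests.
    """
    items = driver_list(limit + 1)  # fetch N+1 to detect truncation
    truncated = len(items) > limit
    if truncated:
        items = items[:limit]  # return only the first N objects
    return items, truncated
```

The caller can then decide how to signal truncation to the client (e.g. via a status code or a flag in the collection body).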

> Update post-Hackathon: significant concern was raised over 203, mainly because, depending on how you read its description, the "subset" may actually refer to the header info rather than the body. While there are a number of other approaches we could pursue in terms of a redirect, it is proposed that we go with one of the other options discussed at the Hackathon: including a 'truncated' attribute in the collection (with the normal status code of 200), which if set to 'true' would indicate truncation.
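A response body under this proposal might be built as below. The wrapper name and collection shape are illustrative assumptions, not Keystone's actual serialization code; only the 'truncated' attribute itself comes from the proposal above.

```python
def wrap_collection(name, items, truncated):
    """Build a normal 200-style collection body, adding a 'truncated'
    flag only when the deployer-configured limit was hit.

    'name' is the collection key (e.g. 'users'); 'items' is the
    already-limited list of objects.
    """
    body = {name: items}
    if truncated:
        body["truncated"] = True  # signals the client to apply filters
    return body
```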

Gerrit topic: bp/list-limiting

Addressed by:
    Implement list limiting support in driver backends

As briefly discussed at the summit, a smaller first step would be to allow deployers to set a hard limit on collections in keystone.conf, thus forcing clients to invoke filters and punting on pagination.
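A keystone.conf fragment along these lines could express the hard limit; the exact values here are illustrative, and the per-section override is shown on the assumption that a driver-level setting takes precedence over the global one.

```ini
[DEFAULT]
# Global cap on the number of entities returned by any list call.
list_limit = 100

[identity]
# Per-driver override for identity backends (takes precedence
# over the [DEFAULT] value, per the assumption above).
list_limit = 50
```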

Addressed by:
    list limit doc cleanup

Addressed by:
    refactor _get_list_limit() as a @property

Addressed by:
    explicitly expect hints in the @truncated signature
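A driver-side decorator taking hints explicitly in its signature, as the change above describes, might look roughly like this. This is a hedged sketch: the `Hints` attributes and wrapper logic are stand-ins for Keystone's actual driver_hints machinery, not its real implementation.

```python
import functools


def truncated(f):
    """Sketch of a list-limiting decorator that explicitly expects a
    'hints' argument: over-fetch by one record, then mark the hints
    object if the deployer-configured limit was exceeded."""
    @functools.wraps(f)
    def wrapper(self, hints, *args, **kwargs):
        if hints.limit is None:
            # No limit configured; pass straight through to the driver.
            return f(self, hints, *args, **kwargs)
        original = hints.limit
        hints.limit = original + 1  # fetch N+1 to detect truncation
        refs = f(self, hints, *args, **kwargs)
        hints.limit = original  # restore the caller-visible limit
        if len(refs) > original:
            hints.truncated = True
            refs = refs[:original]
        return refs
    return wrapper
```

Making `hints` an explicit positional parameter (rather than digging it out of `kwargs`) keeps the decorated driver methods' signatures honest and easy to inspect.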


Work Items
