Comment 4 for bug 1964992

Heitor Alves de Siqueira (halves) wrote:

Hi Robie,

You're right, the patch does essentially invert the problem. This is still the behavior upstream, and it currently works as you mentioned: if the user tries to set a min above the default max (ramsize/2), the update fails.

I'm working on a patch to propose upstream that should fix this. We should be setting min/max values as a pair, otherwise we'll run into a similar issue to the one reported here. I'm also going to double-check other tunables to see if they exhibit similar issues, so we can avoid further problems on those too.
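
To make the ordering problem concrete, here's a minimal, self-contained C sketch (not the actual OpenZFS module parameter handlers; the struct, helper names, and RAM size are made up for illustration). Validating a new min only against whatever max currently happens to be set rejects a configuration that would be valid once both values are applied, while validating them together as a pair does not:

/*
 * Hypothetical illustration only -- not OpenZFS code. Shows why checking
 * zfs_arc_min against the *current* zfs_arc_max rejects valid settings,
 * and why validating the pair together avoids that.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define GiB (1024ULL * 1024 * 1024)

/* Pretend this is a high-memory system: 512 GiB of RAM. */
static const uint64_t ramsize = 512 * GiB;

struct arc_limits {
	uint64_t min;	/* 0 = use default */
	uint64_t max;	/* 0 = use default (ramsize / 2) */
};

static uint64_t effective_max(const struct arc_limits *l)
{
	return l->max != 0 ? l->max : ramsize / 2;
}

/* Independent update: the new min is checked against the current max. */
static bool set_min_only(struct arc_limits *l, uint64_t new_min)
{
	if (new_min > effective_max(l)) {
		fprintf(stderr, "min %llu GiB rejected: above current max %llu GiB\n",
		    (unsigned long long)(new_min / GiB),
		    (unsigned long long)(effective_max(l) / GiB));
		return false;
	}
	l->min = new_min;
	return true;
}

/* Paired update: min and max are validated against each other at once. */
static bool set_min_max(struct arc_limits *l, uint64_t new_min, uint64_t new_max)
{
	if (new_min > new_max) {
		fprintf(stderr, "pair rejected: min %llu GiB > max %llu GiB\n",
		    (unsigned long long)(new_min / GiB),
		    (unsigned long long)(new_max / GiB));
		return false;
	}
	l->min = new_min;
	l->max = new_max;
	return true;
}

int main(void)
{
	struct arc_limits l = { 0, 0 };

	/* Rejected: 300 GiB is above the default max of ramsize/2 = 256 GiB,
	 * even though the user intends to raise the max as well. */
	set_min_only(&l, 300 * GiB);

	/* Accepted: min and max are applied together as a consistent pair. */
	if (set_min_max(&l, 300 * GiB, 320 * GiB))
		printf("paired update accepted: min=%llu GiB max=%llu GiB\n",
		    (unsigned long long)(l.min / GiB),
		    (unsigned long long)(l.max / GiB));

	return 0;
}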

For this particular LP bug, do you think we should wait for a "proper" fix to land upstream? I do understand the point about breaking setups that rely on the current min/max behavior, but that will also happen when upgrading to newer releases. My (subjective) opinion is that users trying to reduce the ZFS memory footprint are far more common than the alternative, and on high-memory systems that's currently not possible due to this bug. What do you think?