On 12.05.2015 at 14:35, Eric Sandeen wrote:
> On 5/12/15 7:01 AM, Stefan Priebe - Profihost AG wrote:
>> While cloud / VM usage becomes more and more popular, and qemu now also
>> offers memory hot add and unplug and CPU hot add and unplug, we still
>> suffer from a missing XFS shrink.
> Filesystem shrink is a very different scenario than "cloud / vms" needs
> for hot memory & cpu plugging, IMHO.
>> I would like to continue to use XFS, as it has been a rock solid base
>> for us for around 10 years.
>> But one missing piece in variable resource usage for us is disk
>> shrinking. Is there any chance of getting XFS online shrinking?
> Under what circumstance does your workflow require a filesystem shrink?
It may be a special case, but there are a lot of customers out there who
do not manage their resources themselves; they have partners, and
partners of partners. It happens quite often that the disk runs out of
space and the customer does not know what to do, as third parties
control the waste of space. So we are in the situation where we need to
extend the partition so the customer can continue his business. Later,
when the partner has solved the issue (in the real world mostly 2 days
to 3 weeks), the customer wants to shrink again, as he does not want to
pay for the space.
> Honest question, I'm not challenging you, but I would like to understand
> what it is about shrink that is so critical it may cause you to stop using
> XFS.
>
> One thing about shrink - while e.g. ext4 supports it, the end result of
> a shrunk filesystem is a scrambled filesystem. All those heuristics that
> made the filesystem reasonably performant as it aged get thrown out the
> window as the fs scavenges for space into which to shrink the filesystem...
> That said, another option is to use the dm-thinp target, and allocate /
> de-allocate storage resources to the underlying block device as needed.
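
The dm-thinp approach suggested above can be sketched with LVM's thin
provisioning (lvmthin); the volume group, pool, and LV names below, as
well as the sizes, are hypothetical placeholders:

```shell
# Sketch of thin provisioning via LVM's dm-thinp target.
# vg0, pool0, customer1 and all sizes are example placeholders.

# Create a thin pool backed by real storage in volume group vg0:
lvcreate --type thin-pool -L 100G -n pool0 vg0

# Create an over-provisioned thin volume for a customer:
lvcreate --type thin -V 500G -n customer1 --thinpool vg0/pool0

mkfs.xfs /dev/vg0/customer1
mount -o discard /dev/vg0/customer1 /mnt/customer1

# Only blocks actually written consume pool space; discard/fstrim
# returns freed blocks to the pool.  Actual consumption is visible in
# the Data% column:
lvs vg0
```

With `-o discard` (or periodic `fstrim`), blocks freed by the filesystem
are handed back to the pool, so the pool's `Data%` tracks real usage
rather than the provisioned size.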
We do so while using Ceph and trim, but the customer pays for what he
can theoretically use, not for what he actually uses. Ceph also does not
provide a way to show us the real usage of an RBD disk.