On 6/23/16 10:04 AM, Danny Shavit wrote:
> I see. We will try this direction.
> BTW: I thought that a good estimate would be "volume_size -
> allocated_size - free_space", but it produced quite a difference
> compared to the metadata dump size.
> Is there a specific reason?
How do you determine allocated_size, with du?
How different? Can you show an example?
-Eric
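
For illustration, a rough Python sketch of the estimate Danny describes,
assuming a hypothetical mount point /mnt/xfs and taking allocated_size
from du (one possible answer to Eric's question above):

import os
import subprocess

def estimate_metadata_bytes(mountpoint: str) -> int:
    """Proposed estimate: volume_size - allocated_size - free_space."""
    st = os.statvfs(mountpoint)
    volume_size = st.f_blocks * st.f_frsize  # total filesystem size
    free_space = st.f_bfree * st.f_frsize    # unallocated space
    # Take allocated_size from du: blocks charged to files below the
    # mount point (-x stays on one filesystem, -B1 reports bytes).
    du_out = subprocess.check_output(["du", "-sxB1", mountpoint])
    allocated_size = int(du_out.split()[0])
    return volume_size - allocated_size - free_space

print(estimate_metadata_bytes("/mnt/xfs"))

Note that where allocated_size comes from matters: du, for instance,
also counts the blocks backing directories, which XFS treats as
metadata, so the choice of tool can shift the result noticeably.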
> Thanks,
> Danny
>
> On Thu, Jun 23, 2016 at 1:51 AM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>
> On Wed, Jun 22, 2016 at 06:58:16PM +0300, Danny Shavit wrote:
> > Hi,
> >
> > We are looking for a method to estimate the size of the metadata
> > overhead for a given file system.
> > We would like to use this value as an indicator of how much cache
> > memory a system needs for faster operation.
> > Are there any counters maintained in the on-disk data structures,
> > like free space for example?
>
> No.
>
> Right now, you'll need to take a metadump of the filesystem to
> measure it. The size of the dump file will be a close indication of
> the amount of metadata in the filesystem as it only contains
> the filesystem metadata.
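
For concreteness, a minimal sketch of that measurement, assuming a
hypothetical device /dev/sdb1 that is unmounted (or a snapshot of it,
per the note below):

import os
import subprocess

def metadata_size_via_metadump(device: str, dumpfile: str) -> int:
    """Estimate metadata size from the size of an xfs_metadump image."""
    # xfs_metadump copies only the filesystem's metadata blocks into
    # dumpfile; the filesystem must not be mounted while this runs.
    subprocess.run(["xfs_metadump", device, dumpfile], check=True)
    return os.path.getsize(dumpfile)

print(metadata_size_via_metadump("/dev/sdb1", "/tmp/xfs-meta.img"))

Since the dump contains only metadata blocks plus a small amount of
dump framing, its size tracks the metadata footprint closely.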
>
> In future, querying the rmap will enable us to calculate it on the
> fly (i.e. without requiring the filesystem to be snapshotted or
> taken offline to do a metadump).
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@xxxxxxxxxxxxx
>
>
>
>
> --
> Regards,
> Danny
>
>