I see. We will try this direction.

BTW: I thought that a good estimate would be "volume_size -
allocated_size - free_space", but it produced quite a difference
compared to the metadata dump size. Is there a specific reason?
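To be concrete, the calculation I have in mind looks roughly like the
sketch below. The interpretation of "allocated_size" as the summed
on-disk footprint of file data (st_blocks over a tree walk) is an
assumption on my part, and may itself explain part of the gap:

#!/usr/bin/env python3
# Naive estimate: metadata ~= volume_size - allocated_size - free_space.
# "allocated_size" is approximated here by summing st_blocks over every
# file under the mount point, which over/under-counts for hardlinks,
# sparse or reflinked files, and anything changing during the walk.

import os
import sys

def naive_metadata_estimate(mountpoint):
    st = os.statvfs(mountpoint)
    volume_size = st.f_blocks * st.f_frsize  # total filesystem bytes
    free_space = st.f_bfree * st.f_frsize    # unallocated bytes

    allocated_size = 0
    for root, dirs, files in os.walk(mountpoint):
        for name in files:
            try:
                s = os.lstat(os.path.join(root, name))
            except OSError:
                continue                     # file vanished mid-walk
            allocated_size += s.st_blocks * 512  # st_blocks is in 512B units

    return volume_size - allocated_size - free_space

if __name__ == "__main__":
    print(naive_metadata_estimate(sys.argv[1]))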
Thanks,
Danny

On Thu, Jun 23, 2016 at 1:51 AM, Dave Chinner <david@fromorbit.com> wrote:
> On Wed, Jun 22, 2016 at 06:58:16PM +0300, Danny Shavit wrote:
> > Hi,
> >
> > We are looking for a method to estimate the size of the metadata
> > overhead of a given filesystem. We would like to use this value as
> > an indicator of the amount of cache memory a system needs for
> > faster operation. Are there any counters maintained in the on-disk
> > data structures, like the free space counters, for example?
>
> No.
>
> Right now, you'll need to take a metadump of the filesystem to
> measure it. The size of the dump file will be a close indication of
> the amount of metadata in the filesystem, as it only contains the
> filesystem metadata.
>
> In future, querying the rmap will enable us to calculate it on the
> fly (i.e. without requiring the filesystem to be snapshotted or taken
> offline to do a metadump).
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
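For anyone wanting to script the measurement, here is a minimal sketch
of the metadump approach suggested above. It assumes xfs_metadump from
xfsprogs and a device that is unmounted (or a snapshot), per the
offline requirement Dave mentions; the helper name and paths are
illustrative:

#!/usr/bin/env python3
# Measure XFS metadata size by dumping it with xfs_metadump and taking
# the size of the resulting dump file, which contains only metadata.

import os
import subprocess
import sys

def metadata_size_via_metadump(device, dumpfile):
    # xfs_metadump copies only the filesystem's metadata blocks into
    # dumpfile; the device must not be mounted read-write.
    subprocess.run(["xfs_metadump", device, dumpfile], check=True)
    return os.path.getsize(dumpfile)

if __name__ == "__main__":
    device, dumpfile = sys.argv[1], sys.argv[2]  # e.g. /dev/sdb1 /tmp/fs.md
    print("approx. metadata bytes:",
          metadata_size_via_metadump(device, dumpfile))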

--
Regards,
Danny