| To: | Danny Shavit <danny@xxxxxxxxxxxxxxxxx> |
|---|---|
| Subject: | Re: xfs metadata overhead |
| From: | Dave Chinner <david@xxxxxxxxxxxxx> |
| Date: | Thu, 23 Jun 2016 08:51:17 +1000 |
| Cc: | xfs@xxxxxxxxxxx |
| Delivered-to: | xfs@xxxxxxxxxxx |
| In-reply-to: | <CAC=x_0jDYb17Vh97Led7XXDiUMcUTJbpJ2Dw45gn=D0_w0K5VQ@xxxxxxxxxxxxxx> |
| References: | <CAC=x_0jDYb17Vh97Led7XXDiUMcUTJbpJ2Dw45gn=D0_w0K5VQ@xxxxxxxxxxxxxx> |
| User-agent: | Mutt/1.5.21 (2010-09-15) |
On Wed, Jun 22, 2016 at 06:58:16PM +0300, Danny Shavit wrote:
> Hi,
>
> We are looking for a method to estimate the size of the metadata
> overhead for a given file system. We would like to use this value as
> an indicator of the amount of cache memory a system needs for faster
> operation. Are there any counters maintained in the on-disk data
> structures, like free space, for example?

No. Right now, you'll need to take a metadump of the filesystem to
measure it. The size of the dump file is a close indication of the
amount of metadata in the filesystem, as the dump contains only the
filesystem metadata.

In future, querying the rmap will enable us to calculate it on the
fly (i.e. without requiring the filesystem to be snapshotted/taken
offline to do a metadump).

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
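The metadump approach Dave describes can be sketched with `xfs_metadump` from xfsprogs. The device and output paths below are placeholders; adjust them for your system. Note that metadump must be run against an unmounted (or snapshotted) filesystem:

```shell
# Placeholder paths -- substitute your own XFS device and dump location.
DEV=/dev/sdb1
DUMP=/tmp/sdb1.metadump

# Capture only the filesystem metadata into a dump file.
# -o disables obfuscation of names, so the dump reflects the real metadata.
xfs_metadump -o "$DEV" "$DUMP"

# The dump contains metadata only, so its size approximates the
# filesystem's metadata footprint (and hence a cache-sizing estimate).
ls -lh "$DUMP"
```

Because the dump is a sparse, compact copy of metadata blocks, its on-disk size is a reasonable proxy for the amount of metadata a cache would need to hold, per Dave's reply above.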