Is XFS suitable for 350 million files on 20TB storage?
Dave Chinner
david at fromorbit.com
Sat Sep 6 17:54:24 CDT 2014
On Sat, Sep 06, 2014 at 10:51:05AM -0400, Brian Foster wrote:
> On Sat, Sep 06, 2014 at 09:05:28AM +1000, Dave Chinner wrote:
> > On Fri, Sep 05, 2014 at 02:40:32PM +0200, Stefan Priebe - Profihost AG wrote:
> > >
> > > Am 05.09.2014 um 14:30 schrieb Brian Foster:
> > > > On Fri, Sep 05, 2014 at 11:47:29AM +0200, Stefan Priebe - Profihost AG wrote:
> > > >> Hi,
> > > >>
> > > >> I have a backup system with 20TB of storage holding 350 million
> > > >> files. This was working fine for months.
> > > >>
> > > >> But now the free space is so heavily fragmented that I only see
> > > >> the kworker threads at 4x 100% CPU and write speeds are very slow.
> > > >> 15TB of the 20TB is in use.
> >
> > What does perf tell you about the CPU being burnt? (i.e. run perf
> > top for 10-20s while that CPU burn is happening and paste the top
> > 10 CPU-consuming functions.)
> >
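FWIW, if it's easier to capture for pasting, something like this
(assuming the perf userspace tools are installed and match your
kernel) samples all CPUs for ~15s and dumps the top consumers to
stdout:

# perf record -a -g -- sleep 15
# perf report --stdio --sort symbol | head -20
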
> > > >>
> > > >> Overall there are 350 million files, spread across many
> > > >> directories - max 5000 per dir.
> > > >>
> > > >> Kernel is 3.10.53 and mount options are:
> > > >> noatime,nodiratime,attr2,inode64,logbufs=8,logbsize=256k,noquota
> > > >>
> > > >> # xfs_db -r -c freesp /dev/sda1
> > > >>    from      to      extents       blocks    pct
> > > >>       1       1     29484138     29484138   2.16
> > > >>       2       3     16930134     39834672   2.92
> > > >>       4       7     16169985     87877159   6.45
> > > >>       8      15     78202543    999838327  73.41
> >
> > With an inode size of 256 bytes, this is going to be your real
> > problem soon - most of the free space is smaller than an inode
> > chunk, so before long you won't be able to allocate new inodes
> > even though there is free space on disk.
> >
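As an aside, the summary form of that command is handy for tracking
free space fragmentation over time - it should report the total free
extents, total free blocks and the average free extent size:

# xfs_db -r -c "freesp -s" /dev/sda1
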
>
> The extent list here is in fsb units, right? 256b inodes means 16k inode
> chunks, in which case it seems like there's still plenty of room for
> inode chunks (e.g., 8-15 blocks -> 32k-64k).
PEBKAC. My bad.
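
To spell out the arithmetic: XFS allocates inodes in chunks of 64, so
with 256 byte inodes a chunk is 64 * 256 = 16k - only 4 contiguous
filesystem blocks at the (assumed) default 4k block size. The 8-15
block free extents above are 32-64k, so inode allocation has plenty
of headroom. The actual isize/bsize are easy to confirm:

# xfs_info /dev/sda1 | grep -E 'isize|bsize'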
Cheers,
Dave.
--
Dave Chinner
david at fromorbit.com