
Re: xfs + 100TB+ storage + lots of small files + NFS

To: Marcin Sura <mailing-lists@xxxxxxx>, xfs@xxxxxxxxxxx
Subject: Re: xfs + 100TB+ storage + lots of small files + NFS
From: Ric Wheeler <ricwheeler@xxxxxxxxx>
Date: Sun, 10 Jul 2016 12:24:22 +0300
In-reply-to: <CACNifpXnSMcm6EfSjqLFPXXfqv7XWd4Yu_=UhqTcdn6-o+49Yw@xxxxxxxxxxxxxx>
References: <CACNifpXnSMcm6EfSjqLFPXXfqv7XWd4Yu_=UhqTcdn6-o+49Yw@xxxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.1.1
On 07/09/2016 02:14 PM, Marcin Sura wrote:
Hi,

A friend of mine asked me to evaluate XFS for their purposes. I currently don't have physical access to their system, but here is the information I've got so far:

SAN:
- physical storage is a thin-provisioned RAID 6 volume from an FSC array,
- volumes are 100TB+ in size
- there are SSD disks in the array, which could potentially be used for the journal
- storage is connected to the host via 10GbE iSCSI

Host:
- They are using CentOS 6.5, with stock kernel 2.6.32-*
- System uses all default values, no optimization has been done
- OS installed on SSD
- Don't know exact details of CPU, but I assume some recent multicore CPU
- Don't know amount of RAM installed, I assume 32GB+

NFS:
- they are exporting the filesystem via NFS to 10-20 clients (services), some VMs, some bare metal
- clients are connected via 1GbE or 10GbE links

Workload:
- they are storing tens or hundreds of millions of small files
- files are not in a single directory
- files are under 1K, usually 200 - 500 bytes
- I assume that some NFS clients constantly write files
- some NFS clients initiate massive reads, millions of random files
- those reads are on demand, but during peak hours there can be many such requests

So far they have been using ext4; after some basic tests they observed a 40% improvement in application counters. But I'm afraid those tests were done in an environment not even close to production (a much smaller filesystem, far fewer files).

I want to ask you what the best mkfs.xfs settings would be for such a setup.
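
For reference, the kind of invocation I have in mind is something like this (the device path, inode size and external log are just placeholders to illustrate the knobs, not tuned values):

    # hypothetical device paths, not a recommendation
    mkfs.xfs -f -i size=512 /dev/mapper/san_vol
    # or, if one of the SSDs were set aside for the journal, an external log section:
    mkfs.xfs -f -i size=512 -l logdev=/dev/ssd_journal /dev/mapper/san_vol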

I assume they should use the inode64 mount option for such a large filesystem with that number of files, but I'm a bit worried about compatibility with NFS (the default version shipped with CentOS 6.5). I think inode32 is totally out of scope here.
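
Roughly, what I had in mind is something like this (the UUID, mount point and export network are placeholders, only meant as a sketch):

    # /etc/fstab
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /srv/data  xfs  defaults,inode64  0 0

    # /etc/exports
    /srv/data  10.0.0.0/24(rw,sync,no_subtree_check)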

Any other hints for setting this stuff up?
Probably a more recent OS/kernel would also help a lot, right?

Also, do you know of any benchmark which can be used to simulate such a workload? I've googled a lot, but there is quite a short list of multi-threaded, small-file-oriented benchmarks. To be honest, I've only found https://github.com/bengland2/smallfile to be close to what I need. Any other alternatives?
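
For completeness, this is roughly how I was planning to drive smallfile (based on my reading of its README; flag names may differ between versions, so treat it as a sketch):

    # create phase: 8 threads, 100k files per thread, ~1 KB files (placeholder mount point)
    python smallfile_cli.py --top /mnt/test/smf --operation create \
        --threads 8 --files 100000 --file-size 1
    # read phase over the same file set
    python smallfile_cli.py --top /mnt/test/smf --operation read \
        --threads 8 --files 100000 --file-size 1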

BR
Marcin

I think that is a good test to explore - Ben wrote that for exactly this kind of workload.

For a single system (i.e., the performance of a single NFS client or local file system), you could also test using fs_mark.
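
For example, something along these lines, with the file size and counts adjusted to match their workload (the mount point and numbers below are only placeholders):

    # 8 threads, 100k files per thread, 512-byte files
    fs_mark -d /mnt/test/fsmark -s 512 -n 100000 -t 8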

Regards,
Ric

