| To: | xfs@xxxxxxxxxxx |
|---|---|
| Subject: | xfs + 100TB+ storage + lots of small files + NFS |
| From: | Marcin Sura <mailing-lists@xxxxxxx> |
| Date: | Sat, 9 Jul 2016 13:14:37 +0200 |
Hi,
A friend of mine asked me about evaluating XFS for their purposes. Currently I don't have physical access to their system, but here is the info I've got so far:

SAN:
- physical storage is an FSC array, thin-provisioned RAID 6 volume
- volumes are 100TB+ in size
- there are SSD disks in the array, which could potentially be used for the journal
- storage is connected to the host via 10GbE iSCSI

Host:
- they are using CentOS 6.5, with stock kernel 2.6.32-*
- the system uses all default values, no tuning has been done
- OS installed on SSD
- don't know the exact CPU details, but I assume some recent multicore CPU
- don't know the amount of RAM installed, I assume 32GB+

NFS:
- they are exporting the filesystem via NFS to 10-20 clients (services), some VMs, some bare metal
- clients are connected via 1GbE or 10GbE links

Workload:
- they are storing tens or hundreds of millions of small files
- files are not in a single directory
- files are under 1K, usually 200-500 bytes
- I assume that some NFS clients constantly write files
- some NFS clients initiate massive reads, millions of random files
- those reads are on demand, but during peak hours there can be many such requests

So far they have been using ext4; after some basic tests they observed a 40% improvement in application counters. But I'm afraid those tests were done in an environment not even close to production (not nearly as large a filesystem, not nearly as many files).

I want to ask you what the best mkfs.xfs settings would be for such a setup. I assume they should use the inode64 mount option for such a large filesystem with that amount of files, but I'm a bit worried about compatibility with NFS (default shipped with CentOS 6.5). I think inode32 is totally out of scope here. Any other hints for setting this stuff up? Probably a more recent OS/kernel would also help a lot, right?

Also, do you know of any benchmark which can be used to simulate such a workload? I've googled a lot, but there is quite a short list of multi-threaded, small-file oriented benchmarks. To be honest, I've found only https://github.com/bengland2/smallfile to be close to what I need. Any other alternatives?

BR
Marcin
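PS: to make the access pattern I mean a bit more concrete, below is a rough, made-up Python sketch: several threads writing 200-500 byte files hashed into subdirectories, plus a few threads reading random files back. The paths, thread counts and file counts are just placeholders; it is only meant to illustrate the pattern, not to replace a proper tool like smallfile.

```python
#!/usr/bin/env python3
# Illustrative sketch of the workload described above: many threads writing
# small (200-500 byte) files spread across subdirectories, plus threads
# reading random files back. All parameters are made-up placeholders.
import os
import random
import threading
import time

TARGET_DIR = "/mnt/xfs-test/smallfiles"   # hypothetical mount point
NUM_WRITERS = 8
NUM_READERS = 4
FILES_PER_WRITER = 10_000                 # scale up for a real test
SUBDIRS = 256                             # spread files over directories


def writer(worker_id: int) -> None:
    """Create many small files, hashed into subdirectories."""
    rng = random.Random(worker_id)
    for i in range(FILES_PER_WRITER):
        subdir = os.path.join(TARGET_DIR, f"d{rng.randrange(SUBDIRS):03d}")
        os.makedirs(subdir, exist_ok=True)
        payload = os.urandom(rng.randint(200, 500))  # 200-500 byte payload
        with open(os.path.join(subdir, f"w{worker_id}-f{i}"), "wb") as f:
            f.write(payload)


def reader(worker_id: int, duration_s: float = 30.0) -> None:
    """Read random existing files for a fixed amount of time."""
    rng = random.Random(1000 + worker_id)
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        subdir = os.path.join(TARGET_DIR, f"d{rng.randrange(SUBDIRS):03d}")
        try:
            names = os.listdir(subdir)
        except FileNotFoundError:
            continue        # writers may not have created this subdir yet
        if not names:
            continue
        with open(os.path.join(subdir, rng.choice(names)), "rb") as f:
            f.read()


def main() -> None:
    os.makedirs(TARGET_DIR, exist_ok=True)
    threads = [threading.Thread(target=writer, args=(i,)) for i in range(NUM_WRITERS)]
    threads += [threading.Thread(target=reader, args=(i,)) for i in range(NUM_READERS)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"done in {time.monotonic() - start:.1f}s")


if __name__ == "__main__":
    main()
```

For a real test this would of course have to run from the NFS clients against the export, with far higher file counts than in this sketch.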