I am new to XFS. I have spent significant time searching and have not found any documentation on using XFS over RAID with N drives where the Stripe Unit is greater than 256KB.
To be clear: using, for example, 3 disks in RAID 5 with a Stripe Unit of 1MB and a Stripe Width of 2 would give 2MB of data and 1MB of parity per stripe.
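The arithmetic behind that geometry can be sketched as follows; this is a minimal illustration assuming classic RAID 5 with one parity chunk per stripe (the function name is my own, not from any tool):

```python
def raid5_stripe_geometry(n_disks: int, chunk_bytes: int):
    """Return (data_bytes, parity_bytes) per full stripe for RAID 5."""
    data_disks = n_disks - 1          # one chunk per stripe holds parity
    return data_disks * chunk_bytes, chunk_bytes

MB = 1024 * 1024
data, parity = raid5_stripe_geometry(3, 1 * MB)
print(f"data={data // MB}MB parity={parity // MB}MB")  # data=2MB parity=1MB
```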
Through extensive testing of raw RAID volumes with various chunk sizes, I have found that larger chunks tend to produce the best performance in most cases, whether sequential, random, read, or write, or combinations thereof. I am therefore keen to perform similar tests with XFS to determine the best performance, and would like to test the widest possible range of larger chunks/Stripe Units, including those larger than XFS apparently permits. I am aware that XFS performance will not necessarily mirror that of raw RAID volumes, but this needs to be verified by testing.
It appears that XFS on RHEL7 and clones is limited to a maximum Stripe Unit of 256KB.
Since it is not possible to specify the correct parameters (SU=1024K, SW=2), is it possible and valid to specify adjusted values where SU is reduced and SW is increased by the same factor?
For example, SU=256K with SW=8 would describe the same 2MB of data per stripe as SU=1024K with SW=2.
Is this valid? Are there any performance or other consequences of using different dimensions? Is there any issue with using other permutations e.g. SU=32K, SW=64?
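To make the permutations concrete, the following sketch enumerates (su, sw) pairs whose product equals the real 2MB data width while keeping su at or below the 256KB ceiling described above, and prints the corresponding `mkfs.xfs -d su=...,sw=...` invocations (the `su`/`sw` options are real mkfs.xfs options; `/dev/mdX` is a placeholder device of my own):

```python
# Enumerate stripe-unit/stripe-width pairs that all describe the same
# 2MB data width, with su constrained to the apparent 256KB maximum.
KB = 1024
DATA_WIDTH = 2048 * KB   # su * sw must equal the real data width (2MB)
MAX_SU = 256 * KB        # apparent stripe-unit ceiling on RHEL7

su = 4 * KB              # start at a typical filesystem block size
while su <= MAX_SU:
    sw = DATA_WIDTH // su
    print(f"mkfs.xfs -d su={su // KB}k,sw={sw} /dev/mdX")
    su *= 2
```

Each printed line, e.g. `mkfs.xfs -d su=256k,sw=8 /dev/mdX`, preserves the product su × sw = 2MB; the question remains whether these permutations are equivalent in practice.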
While I understand that XFS is intended to select optimal defaults, I am not confident in those defaults, particularly with larger RAID chunks.
I would also like to find information on how to correctly interpret the summary output printed when formatting a volume with XFS, particularly since the process may substitute other settings that would require further investigation.
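For what it's worth, my understanding is that the `sunit` and `swidth` figures in the mkfs.xfs summary (the line ending in `blks`, e.g. `data = bsize=4096 ..., sunit=64 swidth=512 blks`) are reported in filesystem blocks of `bsize` bytes, so converting them back to bytes is a way to check what was actually applied. A minimal sketch, assuming that interpretation:

```python
# Convert sunit/swidth from the mkfs.xfs summary (reported in filesystem
# blocks, the "blks" suffix) back into bytes for verification.
def stripe_bytes(bsize: int, sunit_blks: int, swidth_blks: int):
    """Return (stripe_unit_bytes, data_width_bytes)."""
    return sunit_blks * bsize, swidth_blks * bsize

# Hypothetical summary values: bsize=4096, sunit=64, swidth=512
su_bytes, width_bytes = stripe_bytes(bsize=4096, sunit_blks=64, swidth_blks=512)
print(f"su={su_bytes // 1024}KB, data width={width_bytes // 1024}KB")
# su=256KB, data width=2048KB -> consistent with su=256k, sw=8
```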
If you cannot answer these questions directly, can you point me to a resource where I might find this information?