Subject: Need advice on building a new XFS setup for large files
From: Alvin Ong <alvin.ong@xxxxxxxxxxxxxxxxx>
Date: Tue, 22 Jan 2013 12:22:42 +0800
We are building a solution with a web front end for our users to store large files.
Files start at around 500GB and can grow up to 1-2TB each.
This is the reason we are trying out XFS to see if we can get a test system running.
We plan to use a 6+2 RAID6 to start off with. Then, when it fills up to maybe 60-70%, we will
expand by adding another 6+2 RAID6 to the array.
The maximum we can grow this configuration to is 252TB usable, which should be enough for a year.
Our requirements might grow to 2PB within 2 years if all goes well.
So I have been testing all of this out on a VM with three vmdks, using LVM to create a single logical volume across the three disks.
I noticed that out of sdb, sdc and sdd, files keep getting written to sdc.
This is probably because our web app creates a single folder and all files are written under it.
Is this due to the way XFS allocation groups work? Is there a way to avoid it? Otherwise files will keep being written to the same disk, creating a hot spot.
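One workaround I am considering (the directory layout and file name below are just examples, not our real app paths) is to shard uploads across several subdirectories, since with inode64 each new directory should land in a different allocation group, with file data allocated near its parent directory:

```shell
# Sketch: pick a shard directory per file so uploads spread across AGs.
N=4                                  # number of shard directories
name="upload-001.bin"                # example file name
crc=$(printf '%s' "$name" | cksum | cut -d' ' -f1)   # POSIX checksum
shard=$(( crc % N ))
echo "/xfs/data/shard$shard/$name"   # where this file would be stored
```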
Although it might not hurt us that much if we fill up a single RAID6 to 60-70% and then add another RAID6 to the mix. We could go up to a total of 14 RAID6 sets.
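As a sanity check on the 252TB figure (this assumes 3TB drives, which is what the numbers imply):

```shell
# 14 RAID6 sets x 6 data disks per set x 3TB per disk (assumed drive size)
sets=14
data_disks=6       # 8 disks minus 2 parity
tb_per_disk=3
usable=$(( sets * data_disks * tb_per_disk ))
echo "${usable}TB usable"    # 252TB
```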
Is LVM a good choice for this configuration? Or do you have a better recommendation?
The reason we thought LVM would be good was so that we could easily grow XFS.
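The growth path I have in mind looks roughly like this (the device name /dev/sde for the new RAID6 LUN is just a placeholder):

```shell
# Sketch of growing the filesystem online; /dev/sde stands in for
# whatever device the new 6+2 RAID6 LUN shows up as.
pvcreate /dev/sde                          # initialise the new LUN for LVM
vgextend vg_xfs /dev/sde                   # add it to the volume group
lvextend -l +100%FREE /dev/vg_xfs/lv_xfs   # grow the LV into the new space
xfs_growfs /xfs                            # grow XFS while mounted
```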
Here is some info:
# xfs_info /xfs
meta-data=/dev/mapper/vg_xfs-lv_xfs isize=256    agcount=4, agsize=32768000 blks
         =                          sectsz=512   attr=2
data     =                          bsize=4096   blocks=131072000, imaxpct=25
         =                          sunit=0      swidth=0 blks
naming   =version 2                 bsize=4096   ascii-ci=0
log      =external                  bsize=4096   blocks=65536, version=2
         =                          sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                      extsz=4096   blocks=0, rtextents=0
/dev/mapper/vg_xfs-lv_xfs on /xfs type xfs (rw,noatime,nodiratime,logdev=/dev/vg_xfs/lv_log_xfs,nobarrier,inode64,logbsize=262144,allocsize=512m)
If I were to use the 8-disk RAID6 array with a 256kB stripe size, it would have a sunit of 512 and a swidth of (8-2)*512=3072, both in 512-byte sectors:
# mkfs.xfs -d sunit=512,swidth=3072 /dev/mapper/vg_xfs-lv_xfs
# mount -o remount,sunit=512,swidth=3072 /xfs
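To double-check my arithmetic (mkfs.xfs takes sunit/swidth in 512-byte sectors; if I understand correctly, the byte/multiplier form su=256k,sw=6 should be equivalent):

```shell
stripe_bytes=$(( 256 * 1024 ))      # 256kB per-disk stripe unit
sunit=$(( stripe_bytes / 512 ))     # in 512-byte sectors -> 512
swidth=$(( (8 - 2) * sunit ))       # 6 data disks -> 3072
echo "sunit=$sunit swidth=$swidth"  # sunit=512 swidth=3072
```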
What about the logdev option? What is the optimal size to create for it?
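For reference, the external log in the test setup above works out to 256MiB (blocks=65536 at bsize=4096):

```shell
# Size of the current external log, from the xfs_info output above.
log_bytes=$(( 65536 * 4096 ))
echo "$(( log_bytes / 1024 / 1024 )) MiB"   # 256 MiB
```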
Hope to get some guidance from you gurus, please.