| To: | linux-xfs@xxxxxxxxxxx |
|---|---|
| Subject: | xfs + lvm on software raid5 |
| From: | Alexander Bergolth <leo@xxxxxxxxxxxxxxxxxxxx> |
| Date: | Thu, 04 Mar 2004 17:27:57 +0100 |
| Sender: | linux-xfs-bounce@xxxxxxxxxxx |
| User-agent: | Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.5) Gecko/20031115 Thunderbird/0.3 |
Hi!

I'd like to create an XFS-formatted filesystem on an LVM logical volume that resides on a software RAID5 device. Searching the list archives, I found that performance should be OK with an internal log for kernel versions > 2.4.18, and also a recommendation to use log version 2. However, when creating the filesystem, the kernel reports that the cache buffer size is reduced to 512 bytes:

```
# mkfs.xfs -l version=2 /dev/vg_raid5/lv_images
raid5: switching cache buffer size, 1024 --> 512
```

Are additional arguments needed for mkfs.xfs?

The first reduction of the cache buffer size occurs when activating LVM:

```
LVM version 1.0.7(28/03/2003) module loaded
loop: loaded (max 8 devices)
raid5: switching cache buffer size, 4096 --> 1024
```

Does this mean a performance penalty? Is there a way to avoid these switches? Is it still recommended to use an external log? How should the log be configured?

Thanks in advance,
--leo

P.S.: Additional info:

```
# cat /etc/fedora-release
Fedora Core release 1 (Yarrow)

# uname -r
2.4.22-1.2115.nptl_22.rhfc1.at

# rpm -q xfsprogs
xfsprogs-2.6.2-0_7.rhfc1.at

# cat /proc/mdstat
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 hdg1[0] hdi1[1] hdk1[2] hdm1[3] hdo1[4]
      976751616 blocks level 5, 32k chunk, algorithm 2 [5/5] [UUUUU]

# mkfs.xfs -l version=2 /dev/vg_raid5/lv_images
meta-data=/dev/vg_raid5/lv_images isize=256    agcount=16, agsize=8192000 blks
         =                        sectsz=512
data     =                        bsize=4096   blocks=131072000, imaxpct=25
         =                        sunit=0      swidth=0 blks, unwritten=1
naming   =version 2               bsize=4096
log      =internal log            bsize=4096   blocks=32768, version=2
         =                        sectsz=512   sunit=0 blks
realtime =none                    extsz=65536  blocks=0, rtextents=0

# pvdisplay /dev/md0
--- Physical volume ---
PV Name               /dev/md0
VG Name               vg_raid5
PV Size               931.50 GB [1953503232 secs] / NOT usable 64.19 MB [LVM: 186 KB]
PV#                   1
PV Status             available
Allocatable           yes
Cur LV                1
PE Size (KByte)       65536
Total PE              14903
Free PE               6903
Allocated PE          8000
PV UUID               d63Tg4-A41Z-4JXb-FwuV-wIik-QuVh-k2XW7E

# vgdisplay /dev/vg_raid5
--- Volume group ---
VG Name               vg_raid5
VG Access             read/write
VG Status             available/resizable
VG #                  0
MAX LV                256
Cur LV                1
Open LV               0
MAX LV Size           2 TB
Max PV                256
Cur PV                1
Act PV                1
VG Size               931.44 GB
PE Size               64 MB
Total PE              14903
Alloc PE / Size       8000 / 500 GB
Free PE / Size        6903 / 431.44 GB
VG UUID               1f13FE-qPVM-1CSB-0gAu-Fe07-5TlS-ElliwA
```

-- 
-----------------------------------------------------------------------
Alexander (Leo) Bergolth                    leo@xxxxxxxxxxxxxxxxx
WU-Wien - Zentrum fuer Informatikdienste    http://leo.wu-wien.ac.at
Computers are like air conditioners - they stop working properly
when you open Windows
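[Editorial note: the mkfs.xfs output above shows `sunit=0 swidth=0`, i.e. the filesystem was created without stripe alignment. mkfs.xfs does accept explicit stripe geometry via `-d sunit=...,swidth=...` (values in 512-byte sectors). A minimal sketch computing those values for the geometry reported here (32k chunk, 5-disk RAID5, hence 4 data disks); the echoed mkfs.xfs command is illustrative only, not something the poster ran:]

```shell
#!/bin/sh
# Stripe geometry from /proc/mdstat above: 32k chunk, 5 disks in RAID5,
# so 4 disks carry data and 1 chunk per stripe holds parity.
CHUNK_KB=32
DATA_DISKS=4

# mkfs.xfs expects sunit/swidth in 512-byte sectors.
SUNIT=$((CHUNK_KB * 1024 / 512))   # one chunk  = 64 sectors
SWIDTH=$((SUNIT * DATA_DISKS))     # full stripe = 256 sectors

echo "mkfs.xfs -l version=2 -d sunit=$SUNIT,swidth=$SWIDTH /dev/vg_raid5/lv_images"
```

[Whether alignment survives the intervening LVM layer depends on the LVM extent layout, so treat the numbers as a starting point rather than a guarantee.]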