According to information on the XFS home page, writing XFS files can be sped
up by as much as 30 percent by storing the metadata journal (the log) on a
separate drive.
I am trying to use that speed-up procedure, so far without any success.
Here's my setup. I have six FireWire drives arranged into three RAID 1 arrays
(md0, md1, md2). Those three RAID 1 arrays are then combined into one RAID 0
array (md3). The end result is a single RAID 10 array.
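For reference, the layout is roughly the mdadm equivalent of the following (I
may not have used exactly these commands, and the sd* device names are only
examples):

  # three mirrored pairs across the six FireWire drives
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
  # one stripe across the three mirrors
  mdadm --create /dev/md3 --level=0 --raid-devices=3 /dev/md0 /dev/md1 /dev/md2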
I don't have any trouble putting XFS on the array with a simple

  mkfs.xfs -f -b size=4096 /dev/md3

and then mounting the array with

  mount -t xfs /dev/md3 /home/raid1
However, I have been unable to configure XFS on the array so that the journal
goes on a separate drive. I am wondering if I'm doing something wrong.
The place where I'm trying to put the journal file is a 4 GB partition on an
internal IDE drive.
The command

  mkfs.xfs -f -l logdev=/dev/hdb3,size=10000b -b size=4096 /dev/md3

is accepted by the system, and it looks as if I have formatted the RAID with
XFS and put the journal in a separate place.
But when I try to mount the RAID as I did above, I get the following error:
mount: wrong fs type, bad option, bad superblock on /dev/md3
or too many mounted file systems.
I believe I also tried something like

  mount -t xfs logdev=dev/hdb3 /dev/md3

and

  mount -t xfs logdev=dev/hdb3,size=10000b /dev/md3
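Perhaps the log device is supposed to be passed as a mount option with -o,
together with a full device path and a mount point, something like

  mount -t xfs -o logdev=/dev/hdb3 /dev/md3 /home/raid1

but I'm not sure I ever tried it exactly that way.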
Does anybody have any ideas about what the trouble is? Am I issuing the wrong
commands? Is it impossible to put the journal on the IDE partition? It's a
primary partition at the end of a drive. I've tried marking the partition with
fdisk as type 83 (Linux) and also as Linux raid autodetect, and I get the same
problem either way.
I vaguely remember reading that the device where the journal goes has to be
exactly the same number of blocks as the size specified in the
logdev=/dev/...,size=xxxxxb option. If that's true, I have no idea how to
accomplish it.
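I assume I could at least check how big the partition is with something like

  # size of the partition in 512-byte sectors, if I understand blockdev correctly
  blockdev --getsize /dev/hdb3
  # list the partition table on the IDE drive
  fdisk -l /dev/hdb

but I don't know how that would be matched up against the size= value.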
Thanks in advance for the help. I'm about to launch into storing a lot of
data on the RAID and I would like to set it up so that it works as efficiently
as
possible.
Regards,
Andy Liebman