On Thu, 6 Mar 2003, l.a walsh wrote:
> So let's say you had 2 disks...would it make sense to put the log
> of disk1 on disk2 and the log of disk2 on disk1? This would be
Yes, external logs are available for this reason.
Your cross-log scenario would probably only help if you were really
writing to one filesystem at a time.
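For reference, an external log is set up at mkfs and mount time. A minimal sketch (device names and log size here are placeholders, not a recommendation):

```shell
# Hypothetical layout: /dev/sda1 holds the filesystem,
# /dev/sdb1 holds its journal on a separate spindle.
mkfs.xfs -l logdev=/dev/sdb1,size=32m /dev/sda1

# The external log device must also be named at mount time.
mount -t xfs -o logdev=/dev/sdb1 /dev/sda1 /mnt/data
```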
> Second question ... suppose one disk was faster than the other --
> or one was a sda and the other hda. How much metadata is written
> compared to file data, i.e. is there some average ratio or range?
You can look at the stats with the xfs_stats.pl script in CVS.
It really depends on the nature of your workload.
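If you'd rather look at the raw counters, XFS exposes them in /proc/fs/xfs/stat (which is what xfs_stats.pl parses). A rough way to compare log traffic against file data traffic, assuming the "log" and "xpc" counter lines are present in your kernel's stat file:

```shell
# "log" counts log writes and log blocks written;
# "xpc" reports byte counts through the read/write data path.
# Comparing the two gives a crude metadata-vs-data ratio
# for whatever workload has run since boot.
grep -E '^(log|xpc)' /proc/fs/xfs/stat
```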
> On the assumption that metadata is smaller, it seems like one could
> use a slower disk as the log disk for a primary work disk, where the
> slower disk mostly holds archival things that aren't written a lot,
> but read a lot -- like mp3's, or CD images....things where the slower
> read isn't going to be a big problem.
True, for reads the log speed isn't critical, but for writes again
it will depend on your workload.
> When writing to disks with a cache, does XFS force any flushes (like
> on log data?) Seems like even if you had a slower disk with an 8 MB
> cache you could keep up with a fairly good write speed on the faster
> disk.
Which is great until you crash, if the cached data is lost...
I don't think XFS explicitly does any IDE cache flushing.
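If the risk of losing cached log writes is a concern, one blunt workaround (assuming an IDE drive at /dev/hda; the device name is a placeholder) is to disable the drive's write cache so completed writes are actually on the platter:

```shell
hdparm -W0 /dev/hda   # -W0 turns the on-drive write cache off
hdparm -W1 /dev/hda   # -W1 turns it back on
```

The trade-off is a large drop in write throughput, which is exactly the performance the cache was buying you.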
> But here's another Q...if you don't flush the on-disk cache after
> a log write, then it is 'granted' that the potential for metadata
> loss is at least the size of the on-disk cache. That could beg
> the question -- would it be of any benefit to write a pseudo-block
> device that lives on top of a disk and just does read-write
> caching -- maybe it lives with a 64 MB buffer and attempts to use
> geometry knowledge of the disk to optimize head motion, whatever.
Ick. :) I think the drive manufacturer is the only one who has that knowledge.
-Eric