
RE: Just some odd questions out of the blue...

To: "'Eric Sandeen'" <sandeen@xxxxxxx>
Subject: RE: Just some odd questions out of the blue...
From: "l.a walsh" <xfs@xxxxxxxxx>
Date: Fri, 7 Mar 2003 13:39:49 -0800
Cc: <linux-xfs@xxxxxxxxxxx>
Importance: Normal
In-reply-to: <Pine.LNX.4.44.0303061954261.16532-100000@xxxxxxxxxxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
> -----Original Message-----
> From: linux-xfs-bounce@xxxxxxxxxxx
> [mailto:linux-xfs-bounce@xxxxxxxxxxx] On Behalf Of Eric Sandeen
> Sent: Thu, Mar 06, 2003 6:00p
> To: l.a walsh
> Cc: linux-xfs@xxxxxxxxxxx
> Subject: Re: Just some odd questions out of the blue...
>
>
> On Thu, 6 Mar 2003, l.a walsh wrote:
>
> > So lets say you had 2 disks...would it make sense to put the log
> > of disk1 on disk2 and the log of disk2 on disk1?  This would be
>
> Yes, external logs are available for this reason.
>
> Your cross-log scenario would probably only help if you were
> really only writing to 1 fs at a time.
---
        Yes...but say you have one disk set up with the system and
home dir, where the home dir is mainly docs and email, and a second
disk set up mainly to do builds on....
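
        E.g., if I understand the mkfs.xfs/mount knobs right (the
device names here are made up for a two-disk setup):

            # put each fs's log on the *other* disk:
            mkfs.xfs -l logdev=/dev/hdc1 /dev/hda5
            mkfs.xfs -l logdev=/dev/hda6 /dev/hdc2
            # mount has to name the same log device:
            mount -t xfs -o logdev=/dev/hdc1 /dev/hda5 /home
            mount -t xfs -o logdev=/dev/hda6 /dev/hdc2 /build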

> > Second question ... suppose one disk was faster than the other --
> > or one was a sda and the other hda.  How much metadata is written
> > compared to file data, i.e. is there some average ratio or range?
>
> You can look at the stats with the xfs_stats.pl script in cvs.
> It really depends on the nature of your workload.
---
        CVS again...everything is in CVS...seems like for every piece
of software I'm interested in I've got to pull down the whole bloomin'
CVS tree...grumble.  I don't suppose such scripts could be put into
xfsprogs or xfsprogs-devel?  (Not right this second...but just in
general.)
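
        (Though for the curious, the raw counters that script digests
are visible without CVS; my reading of the fields, so take with salt:)

            # running XFS counters, including log activity:
            cat /proc/fs/xfs/stat
            # the "log" line includes log writes and log blocks
            # written -- compare against the data you know you wrote
            # for a rough metadata:data ratio on your workload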

> > On the assumption that metadata is smaller, seems like one could
> > use a slower disk to hold the log of a primary work disk, where
> > the slower disk holds mostly archival things that aren't written
> > a lot, but are read a lot -- like mp3's, or CD images....things
> > where the slower read isn't going to be a big problem.
>
> True, for reads the log speed isn't critical, but for writes again
> it will depend on your workload.
---
        Yeah, I think of builds and mail-reading/browsing as separate
workloads that may overlap, but mail-reading/browsing isn't usually
that disk-intensive...compared to a parallel build.
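
        (Easy enough to eyeball with iostat from the sysstat package:
run a build, then read some mail, and watch which disk's numbers move:

            # per-device transfer stats every 5 seconds:
            iostat 5
        )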


> > When writing to disks with a cache, does XFS force any flushes
> > (like on log data?)  Seems like even if you had a slower disk with
> > an 8MB cache you could keep up with a fairly good write speed on
> > the faster disk.
>
> Which is great until you crash, if the cached data is lost...
---
        Well...XFS's delayed writes are exactly this type of
operation: grouping writes in hopes of decreasing fragmentation.

        If your system is stable and stays up for many days except
for planned reboots, and you're on a UPS, one might accept that risk
in exchange for the speed.  On the other hand, if you are testing a
new kernel or a new patch, you might want to disable some of those
features.  It's like syslog: normally many log messages are written
asynchronously, but if I'm expecting trouble, I'll change them to
synchronous and mount disks synchronously -- though drive caching is
usually ok unless I'm also expecting a power outage :-).
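
        E.g. with the stock Linux syslogd, the leading "-" in
/etc/syslog.conf is exactly that async/sync knob:

            # "-" = don't sync after each message (fast, riskier):
            kern.*          -/var/log/kern.log
            # no "-" = sync every message (slow, safer):
            kern.*          /var/log/kern.log
            # and the fs-level equivalent:
            mount -o remount,sync /home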


> I don't think xfs explicitly does any IDE cache flushing.
>
> > But here's another Q...if you don't flush the on-disk cache after
> > a log write, then it is 'granted' that the potential for metadata
> > loss is at least the size of the on-disk cache.  That could beg
> > the question -- would it be of any benefit to write a pseudo-block
> > device that lives on top of a disk and just does read-write
> > caching -- maybe it lives with a 64MB buffer and attempts to use
> > geometry knowledge of the disk to optimize head motion, whatever.
>
> Ick.  :)  I think the drive mfgr is the only one who has the
> knowledge.
---
        Hmm....I thought that info was available for some drives
via scsiinfo.  For IDE drives the best you could do would be to
section up the disk by "logical track", assuming that a logical
track would normally map to a group of physical sectors that are
(barring defect management) physically located near each other --
though that is an assumption, admittedly.
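
        E.g. (exact flags from memory, so check the man pages):

            scsiinfo -g /dev/sda    # rigid-disk geometry mode page
            hdparm -g /dev/hda      # (logical) CHS geometry
            hdparm -W0 /dev/hda     # or just turn off the IDE write
                                    # cache if you don't trust it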

-l


