Indeed, I turned the sync and wsync flags on. As expected, write performance was terrible (1 MB/s), so I turned them back off and got my 100 MB/s write throughput back.
I just wanted to eliminate as much unnecessary caching as possible between my VMs and my physical hard drives, knowing that there are up to 8 write-cache levels.
I'm getting a bit off topic, but here is the list. This is only my own conclusion; I don't know if I'm right.

- Guest page cache.
- Virtual disk drive write cache (off: KVM cache=directsync).
- Host page cache (off: KVM cache=directsync).
- GlusterFS cache (off).
- NAS page cache (?).
- XFS cache (filesystem).
- RAID controller write cache (off).
- Physical hard drive write cache (off).

The main difficulty is that I have to gather information from different sources (software vendors / hardware manufacturers) to get an overview of the cache mechanisms. I need to make sure our databases will not crash due to a failure in any one of those layers. (I've put a rough sketch of what I mean in a PS below the quoted thread.)

If you have any suggestions on where to find information, or whom to ask, I would be very grateful.
At least I now have answers about the XFS part.
Thank you very much!

> Date: Wed, 30 Jul 2014 18:18:58 +1000
> From: david@fromorbit.com
> To: neutrino8@gmail.com
> CC: bfoster@redhat.com; frank_1005@msn.com; xfs@oss.sgi.com
> Subject: Re: Delaylog information enquiry
> 
> On Wed, Jul 30, 2014 at 07:42:32AM +0200, Grozdan wrote:
> > On Wed, Jul 30, 2014 at 1:41 AM, Dave Chinner <david@fromorbit.com> wrote:
> > > Note that this does not change file data behaviour. In this case you
> > > need to add the "sync" mount option, which forces all buffered IO to
> > > be synchronous and so will be *very slow*. But if you've already
> > > turned off the BBWC on the RAID controller then your storage is
> > > already terribly slow and so you probably won't care about making
> > > performance even worse...
> > 
> > Dave, excuse my ignorant questions.
> > 
> > I know the Linux kernel keeps data in cache for up to 30 seconds before
> > a kernel daemon flushes it to disk, unless the configured dirty ratio
> > (which is 40% of RAM, iirc) is reached before those 30 seconds, in
> > which case the flush happens earlier.
> 
> 10% of RAM, actually.
> 
> > What I did is lower those 30 seconds to 5 seconds, so data is flushed
> > to disk every 5 seconds (I've set dirty_expire_centisecs to 500).
> > So, are there any drawbacks in doing this?
> 
> Depends on your workload. For a desktop, you probably won't notice
> anything different. For a machine that creates lots of temporary
> files and then removes them (e.g. build machines) then it could
> crater performance completely because it causes writeback before the
> files are removed...
> 
> > I mean, I don't care *that* much about performance, but I do want my
> > dirty data to reach storage in a reasonable amount of time. I looked
> > at the various sync mount options, but they are all synchronous, so my
> > impression is they will be slower than giving the kernel 5 seconds to
> > keep data and then flush it.
> > 
> > From an XFS perspective, I'd like to know whether this is recommended
> > or not. I know that setting the above to 500 centisecs means more
> > writes to disk, which may result in wear and tear, thus shortening
> > the lifetime of the storage.
> > 
> > This is a regular desktop system with a single Seagate Constellation
> > SATA disk, so no RAID, LVM, thin provisioning or anything else.
> > 
> > What do you think? :)
> 
> I don't think it really matters either way. I don't change
> the writeback time on my workstations, build machines or test
> machines, but I actually *increase* it on my laptops to save power
> by not writing to disk as often. So if you want a little more
> safety, then reducing the writeback timeout shouldn't have any
> significant effect on performance or wear unless you are doing
> something unusual....
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
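
PS: To make the cache question a bit more concrete, here is a rough sketch of
how I am checking the synchronous-write behaviour myself. It is only an
illustration, not my actual benchmark: the file path, block size and write
count are made up, and it just shows why writes that wait for stable storage
are so much slower than buffered writes that can sit in one of the cache
layers listed above.

    #!/usr/bin/env python3
    # Rough comparison of buffered writes (absorbed by the cache layers)
    # versus O_SYNC writes (each write waits for stable storage).
    # PATH, BLOCK and COUNT are illustrative values, not my real setup.
    import os
    import time

    PATH = "/data/cache-test.bin"    # hypothetical file on the XFS volume
    BLOCK = b"\0" * (1024 * 1024)    # 1 MiB per write
    COUNT = 100                      # 100 MiB total

    def run(flags, label):
        fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | flags, 0o644)
        start = time.monotonic()
        for _ in range(COUNT):
            os.write(fd, BLOCK)
        os.fsync(fd)                 # make the buffered run durable at the end
        os.close(fd)
        elapsed = time.monotonic() - start
        print(f"{label}: {COUNT / elapsed:.1f} MiB/s")

    run(0, "buffered")
    run(os.O_SYNC, "O_SYNC")
    os.unlink(PATH)

The gap between the two numbers is, as far as I can tell, the same effect as
the 100 MB/s versus 1 MB/s difference I described at the top of this mail.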
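For the writeback tunables from the thread above, this is roughly how I look
at them (and would change them if I decided to). The target values are just
the ones from the thread: dirty_expire_centisecs=500, plus the 10% figure
Dave mentioned, which I take to be dirty_background_ratio; they are examples,
not a recommendation. Writing to /proc/sys needs root and is equivalent to
using sysctl.

    #!/usr/bin/env python3
    # Inspect (and optionally set) the dirty-writeback tunables via /proc/sys.
    # The target values are taken from the thread, purely as an example.
    TUNABLES = {
        "dirty_expire_centisecs": 500,   # flush dirty pages older than 5 s
        "dirty_background_ratio": 10,    # background writeback threshold (% of RAM)
    }

    for name, wanted in TUNABLES.items():
        path = f"/proc/sys/vm/{name}"
        with open(path) as f:
            current = int(f.read())
        print(f"vm.{name}: current={current}, example value={wanted}")
        # Uncomment to apply (equivalent to sysctl -w vm.<name>=<value>); needs root:
        # with open(path, "w") as f:
        #     f.write(str(wanted))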