XFS: Abysmal write performance because of excessive seeking (allocation groups to blame?)
Stefan Ring
stefanrin at gmail.com
Fri Apr 6 10:37:32 CDT 2012
> As to 'ext4' and doing (euphemism) insipid tests involving
> peculiar setups, there is an interesting story in this post:
>
> http://oss.sgi.com/archives/xfs/2012-03/msg00465.html
I really don't see the connection to this thread. You're mostly
advocating that tar fsync every file, which to me seems absurd. If
the system goes down halfway through a tar extraction, I would simply
delete the tree and untar again. Why would I care whether some files
are corrupt when the entire tree is incomplete anyway?
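
For reference, the per-file fsync pattern boils down to something
like the following sketch (the helper name and error handling are
mine, not tar's). On rotating storage every fsync() costs at least
one seek plus a device cache flush, which is exactly what makes it
so expensive for a large tree:

#include <fcntl.h>
#include <unistd.h>

/* Hypothetical helper: create `path`, write `len` bytes from `buf`,
 * and force the data to stable storage before returning. */
static int write_file_durably(const char *path, const char *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, buf, len) != (ssize_t)len) {
        close(fd);
        return -1;
    }
    if (fsync(fd) < 0) {       /* one cache flush per extracted file */
        close(fd);
        return -1;
    }
    return close(fd);
}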
Despite the somewhat inflammatory thread subject, I don't want to bash
anyone. It's just that untarring large source trees is a very typical
workload for me, and I just don't want to accept that XFS cannot do
better than being several orders of magnitude slower than ext4
(binary orders of magnitude, i.e. factors of two). As I see it, both
file systems give the same guarantees:
1) That upon completion of sync, all data is safely on permanent
storage (see the sketch after this list).
2) That the file system metadata doesn't suffer corruption, should
the system lose power during the operation.
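
Guarantee 1 is what a single flush at the end of the extraction
already provides; there is no need for per-file fsync to get it. A
minimal sketch, assuming Linux (syncfs() is Linux-specific, available
since kernel 2.6.39; plain sync() would be the portable fallback):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Flush everything on the filesystem containing `any_path_on_fs`.
 * Extract the whole tree first, then call this once at the end. */
int flush_tree(const char *any_path_on_fs)
{
    int fd = open(any_path_on_fs, O_RDONLY);
    if (fd < 0)
        return -1;
    int ret = syncfs(fd);  /* flush this filesystem only, not the whole system */
    close(fd);
    return ret;
}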