
Re: compression

To: Jordan Mendler <jmendler@xxxxxxxx>
Subject: Re: compression
From: Josef Sipek <jsipek@xxxxxxxxxxxxxxxxx>
Date: Wed, 12 Sep 2007 13:42:16 -0400
Cc: xfs@xxxxxxxxxxx
In-reply-to: <654e62180709111643k4700c2bdibec2a16eb5446e76@mail.gmail.com>
References: <654e62180709111643k4700c2bdibec2a16eb5446e76@mail.gmail.com>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.5.16 (2007-07-16)
On Tue, Sep 11, 2007 at 04:43:20PM -0700, Jordan Mendler wrote:
> Hi all,
> 
> I searched the mailing list archive and could not find an answer. We are
> currently using XFS on Linux for a 17TB volume used for backups. We are
> running out of space, so rather than order another array, I would like to
> try to implement filesystem-level compression. Does XFS support any type of
> compression? If not, are there any other ways to optimize for more storage
> space? We are doing extensive rsyncs as our method of backups, so gzipping
> on top of the filesystem is not really an option.
 
Implementation-wise, one major thing to keep in mind is that offsets into
the uncompressed copies of files in memory need to be mapped to offsets in
the compressed on-disk data. This is rather painful to do right (supporting
writing to files as well as reading from them).
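To make the mapping concrete, here is a minimal sketch (plain Python,
nothing to do with XFS internals; the block size and the flat index layout
are invented purely for illustration) of per-block compression with an
index that translates uncompressed offsets into compressed ones:

    # Split the file into fixed-size logical blocks, compress each block
    # independently, and keep an index mapping logical block number ->
    # (offset, length) in the compressed stream, so a read at an arbitrary
    # uncompressed offset doesn't require decompressing the whole file.
    import zlib

    BLOCK_SIZE = 64 * 1024  # logical (uncompressed) block size

    def compress_file(data: bytes):
        """Return (compressed stream, per-block (offset, length) index)."""
        stream = bytearray()
        index = []
        for i in range(0, len(data), BLOCK_SIZE):
            cblock = zlib.compress(data[i:i + BLOCK_SIZE])
            index.append((len(stream), len(cblock)))
            stream += cblock
        return bytes(stream), index

    def read_at(stream: bytes, index, offset: int, length: int) -> bytes:
        """Read `length` bytes at uncompressed `offset` via the index."""
        out = bytearray()
        block = offset // BLOCK_SIZE
        skip = offset % BLOCK_SIZE
        while length > 0 and block < len(index):
            coff, clen = index[block]
            plain = zlib.decompress(stream[coff:coff + clen])
            chunk = plain[skip:skip + length]
            out += chunk
            length -= len(chunk)
            skip = 0   # only the first block needs an intra-block offset
            block += 1
        return bytes(out)

Reads are the easy half; writes are where it hurts: rewriting one block
usually changes its compressed size, which shifts every block after it and
invalidates the index, so a real filesystem has to manage variable-sized
extents rather than a flat stream like this.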

As Eric mentioned, you may want to try to eliminate copies of identical
files with symlinks or even hardlinks (just make sure your backup software
is smart enough to break the links when necessary, so that updating one
copy doesn't silently change every snapshot that shares it).
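
Along those lines, a minimal sketch (a hypothetical standalone script, not
part of any backup tool; it hashes whole files into memory for brevity,
where a real deduplicator would compare sizes first and verify
byte-for-byte before linking) of collapsing identical files into hardlinks:

    # Walk a tree and replace duplicate regular files with hardlinks to
    # the first copy seen. Assumes POSIX and a single filesystem (hard
    # links can't cross filesystem boundaries).
    import hashlib
    import os
    import sys

    def dedup(root: str):
        seen = {}  # content hash -> path of first file with that content
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if not os.path.isfile(path) or os.path.islink(path):
                    continue
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                if digest in seen:
                    os.unlink(path)              # drop the duplicate...
                    os.link(seen[digest], path)  # ...link to the original
                else:
                    seen[digest] = path

    if __name__ == "__main__":
        dedup(sys.argv[1])

Also, since you're already rsyncing, rsync's --link-dest option gets you
much of this at backup time: files unchanged since the previous snapshot
are hardlinked against it instead of being stored again.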

Josef 'Jeff' Sipek.

-- 
The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all progress
depends on the unreasonable man.
                - George Bernard Shaw

