On Thu, 25 Dec 2008, David Lethe wrote:
How do others maintain data integrity? Just not worry about it until you
have to, rely on backups... or?
4GB files using gpg and tar in the '90s?
Sorry, to clarify: they were ~650-700MiB tars, which I later combined into a
~4GiB file. This was in the late '90s; CDs were cheap, and yes, there was the
2GiB limit. Later on, to consolidate, I moved them to 4GiB DVDs, so I tarred
them together and then ran gpg on top of that.
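The workflow described above might be sketched like this (file names and the
symmetric-mode gpg invocation are illustrative placeholders, not the original
commands):

```shell
# Combine several ~650-700MiB tarballs into one larger archive...
tar -cf combined.tar part1.tar part2.tar part3.tar

# ...then encrypt the result with gpg (symmetric mode shown here as
# an example; a keypair would work just as well).
gpg --batch --pinentry-mode loopback --passphrase-file pw.txt \
    --symmetric combined.tar            # produces combined.tar.gpg

# Later, to restore:
gpg --batch --pinentry-mode loopback --passphrase-file pw.txt \
    -o restored.tar --decrypt combined.tar.gpg
tar -xf restored.tar
```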
Once I pulled the data back off the DVDs, I was able to recover *all* of it.
I know gpg had 2GB file-related bugs as late as 2005 that caused corruption,
and there were a heck of a lot of 2GB-related bugs in the 2.2 and 2.4 kernels,
which you must have been running back then. You are also using later versions
of these programs on the new systems, and I'd be willing to bet the bugs
compound, assuming there was no corruption to begin with.
- Gzipped tar archives, but I gzip the individual files rather than the
tarball. That way any compression-related bugs are limited to a single file.
I copy them to DVDs.
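That scheme, as a minimal sketch (file names are made up): compress each file
on its own, then tar the already-compressed files, so a bad spot damages at
most one .gz member instead of everything after it:

```shell
# Compress each file individually first...
gzip -9 notes.txt report.txt photo.ppm

# ...then bundle the compressed files into an *uncompressed* tarball.
# A corrupted region now affects at most one .gz member, not the
# whole stream as it would with tar | gzip.
tar -cf bundle.tar notes.txt.gz report.txt.gz photo.ppm.gz

# Restore: unpack, then gunzip only the members you need.
tar -xf bundle.tar
gunzip notes.txt.gz
```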
That works as well, but are they just your regular files gzipped, with no
encryption?
- For online/nearline, I now use ZFS, on a native Solaris system that
functions as my primary NFS/CIFS/iSCSI server with a ZFS software-RAID-based
file system. I am profoundly impressed with it, and when they release the
deduplication enhancement for ZFS, I'll adopt it and won't have to buy any
more DVDs, except for offsite archiving purposes.
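For reference, a software-RAID ZFS setup like that might look something like
the following on Solaris (pool name, disk names, and layout are invented for
illustration, not the poster's actual configuration):

```shell
# Create a raidz (single-parity, RAID-5-like) pool from three disks
zpool create tank raidz c0t1d0 c0t2d0 c0t3d0

# Create a file system and share it over NFS
zfs create tank/archive
zfs set sharenfs=on tank/archive

# Scrub periodically so checksum errors are detected and repaired
# from parity before they accumulate
zpool scrub tank
```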
Wow, I did not know ZFS had plans for de-dupe!! I will have to look into this,
thanks for the info.