| To: | L A Walsh <law@xxxxxxxxx> |
|---|---|
| Subject: | RE: block size in XFS = hard coded constant? |
| From: | Olaf Frączyk <olaf@xxxxxxxxxxxxx> |
| Date: | 30 Sep 2002 14:07:57 +0200 |
| Cc: | Stephen Lord <lord@xxxxxxx>, Linux-Xfs <linux-xfs@xxxxxxxxxxx>, Linux-Kernel <linux-kernel@xxxxxxxxxxxxxxx>, Linux-Fsdevel <linux-fsdevel@xxxxxxxxxxxxxxx> |
| In-reply-to: | <NFBBKNPJLGIDJFAHGKMBIEIJCDAA.law@xxxxxxxxx> |
| References: | <NFBBKNPJLGIDJFAHGKMBIEIJCDAA.law@xxxxxxxxx> |
| Sender: | linux-xfs-bounce@xxxxxxxxxxx |
On Mon, 2002-09-30 at 10:55, L A Walsh wrote:
> Right -- I know it isn't the filesystem block size.
>
> In this day and age, it seems anachronistic. Given the 10% higher block
> density, not only would it yield higher capacities, but should yield higher
> transfer rates, no?
>
> I know it isn't a simple constant switch -- but I wouldn't want to switch
> constants since not all disks should be constrained to the same block size.
>
> Do other file systems have the same limitation? Are there any problems in
> the linux-kernel with non-512 byte blocks?

Hi,

DVD-RAM (2048-byte block size) works well in Linux. I use ext2 for DVD-RAM.

Regards,

Olaf Fraczyk
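For anyone curious what sector size a given device actually reports, a minimal sketch along the following lines queries it with the BLKSSZGET ioctl; the device path is only an example, and a DVD-RAM drive would typically report 2048 here where an ordinary disk reports 512:

```c
/* Minimal sketch: ask the kernel for a block device's logical sector size
 * via the BLKSSZGET ioctl. The default device path is illustrative only. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "/dev/hda";  /* example path */
    int fd = open(dev, O_RDONLY);
    if (fd < 0) {
        perror(dev);
        return 1;
    }

    int sector_size = 0;
    if (ioctl(fd, BLKSSZGET, &sector_size) < 0) {
        perror("BLKSSZGET");
        close(fd);
        return 1;
    }

    printf("%s: logical sector size = %d bytes\n", dev, sector_size);
    close(fd);
    return 0;
}
```

On a 2048-byte device the filesystem block size simply needs to be at least the sector size, so for ext2 something like `mke2fs -b 2048 /dev/scd0` (device name again only an example) is sufficient.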