On Wed, Feb 12, 2003 at 09:07:21PM +0100, Bogdan Costescu wrote:
> [Maybe OT] In between these messages, I talked to a friend about a
> networking problem that he has as part of a P2P network.
Applications which write to multiple 'parts' of a file simultaneously
(i.e. fetching different parts of the file from different remote hosts
and writing each chunk as it arrives) tend to cause bad fragmentation.
I'm not sure whether this is better or worse with other filesystems.
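To make that pattern concrete, the sketch below is roughly what a P2P
client ends up doing: chunks arrive out of order and are written
straight to their final offset with pwrite(), so the file grows in
scattered pieces rather than sequentially. The chunk size, chunk count,
ordering and filename here are made up purely for illustration.

/* Illustrative only: out-of-order, chunked write pattern that tends
 * to fragment the file on disk. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define CHUNK_SIZE (256 * 1024)   /* hypothetical chunk size */
#define NUM_CHUNKS 64             /* hypothetical 16 MB file */

int main(void)
{
    static char buf[CHUNK_SIZE];
    int fd = open("download.part", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    memset(buf, 0xAA, sizeof(buf));

    /* Write chunks in a scattered order (a simple stride here) instead
     * of sequentially, leaving holes that get filled in later. */
    for (int i = 0; i < NUM_CHUNKS; i++) {
        int chunk = (i * 7) % NUM_CHUNKS;   /* stand-in for peer ordering */
        off_t off = (off_t)chunk * CHUNK_SIZE;
        if (pwrite(fd, buf, CHUNK_SIZE, off) != CHUNK_SIZE) {
            perror("pwrite");
            return 1;
        }
    }
    close(fd);
    return 0;
}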
> Then made the connection that the way files are stored when
> downloaded from such network is very similar to our situation
> (talking here only about large stuff like ISO images, movies and
> whatnot) - files are written to disk over several hours, days or
> weeks, depending on Internet link speed and availability.
Slowly writing to disk over a long period of time whilst there is
other disk activity increases the likelihood of fragmentation. A
good example here is log files.
It's even worse if your write pattern means you open and close the
file between writes. If that isn't the case, you could in theory
increase the filesystem's preallocation size and see if that helps (I
tried it here with positive results).
> The difference might be in appended vs. random writing mode; for
> the networks that support chunked file transfer they are probably
> written as sparse files.
Writing to sparse files randomly bites. Without application-level
changes this is pretty hard to do much about.
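One application-level change that can help, assuming the filesystem
cooperates, is to reserve the whole file up front with
posix_fallocate() before the random chunk writes start, so the
allocator can try to lay the blocks out contiguously instead of
allocating them piecemeal as holes get filled. A minimal sketch (the
filename and size are made up; glibc emulates posix_fallocate() by
touching every block when the filesystem has no native support):

/* Sketch: preallocate the whole file before writing chunks randomly,
 * so later out-of-order writes land in already-allocated blocks. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const off_t total_size = 700L * 1024 * 1024;  /* e.g. a CD-sized ISO */
    int fd = open("download.part", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* posix_fallocate() returns 0 on success, an errno value otherwise. */
    int err = posix_fallocate(fd, 0, total_size);
    if (err != 0) {
        fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
        return 1;
    }

    /* ... chunk writes with pwrite() at arbitrary offsets go here ... */

    close(fd);
    return 0;
}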
> Then the files have to be read for writing to CD (ISO images),
> played (movies) etc., so they have the same problem.
A small amount of fragmentation isn't really that bad... I guess it
depends on the average extent size, the physical layout of the extents
on disk, and how hard the drive has to work to access the data.
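If you want to see how badly a particular file is fragmented, one
rough way on Linux is the FIBMAP ioctl (needs root), mapping each
logical block to its physical block and counting the discontiguous
runs. A sketch, not a polished tool (holes map to physical block 0 and
are counted naively here):

/* Rough fragmentation check: walk the file block by block with FIBMAP
 * and count how many times the physical block number jumps. */
#include <fcntl.h>
#include <linux/fs.h>    /* FIBMAP, FIGETBSZ */
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct stat st;
    int blocksize = 0;
    if (fstat(fd, &st) < 0 || ioctl(fd, FIGETBSZ, &blocksize) < 0) {
        perror("fstat/FIGETBSZ");
        return 1;
    }

    long nblocks = (st.st_size + blocksize - 1) / blocksize;
    long extents = 0;
    int prev = 0;

    for (long i = 0; i < nblocks; i++) {
        int block = (int)i;                   /* in: logical block number */
        if (ioctl(fd, FIBMAP, &block) < 0) {  /* out: physical block */
            perror("FIBMAP");
            return 1;
        }
        if (i == 0 || block != prev + 1)
            extents++;                        /* discontiguous: new extent */
        prev = block;
    }

    printf("%s: ~%ld extent(s) over %ld block(s)\n", argv[1], extents, nblocks);
    close(fd);
    return 0;
}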
--cw