At 16:15 10-8-2001 +0200, Bram Moolenaar wrote:
cc-ing the list
> I was noticing that on journaling filesystems (in this case XFS) you can
> lose data using vi.
> Example, possible explanation and solution provided. I have not read the
> code to vi (6.0z in my test), so I might have made a wrong assumption
> somewhere, but the problem stands as it is.
What you describe sounds like a bug in the filesystem. Vim closes the file,
and that should be sufficient to be sure that the data has been written. On
an "old" file system there is always the risk that the computer crashes before
the cache has been flushed. Therefore Vim provides a backup file.
Calling fsync() after writing a file would reduce the chance of losing data,
but it also makes Vim slower and increases the system load.
Is there an option within vi to use this as a "safe mode"?
That the file is filled with NULLS is certainly a bug in the FS. If nothing
got written yet, the old data should still be there. If the meta data was
updated without the file contents being there, then something has gone wrong
in the FS. It should first write the new data and then update the meta data,
so that there is never an inconsistent situation. When overwriting a file
with new data the metadata doesn't even have to be changed, unless the size
changes.
The NULs come from the fact that XFS supports extents. The metadata is
written out to disk and XFS allocates the extents for the file; a truncate
is done on the file before the data is written.
Because the data write never happened, you are seeing the NULs from the
"empty" extents.
Observation has revealed that the data is only pushed out to disk after 30
seconds or so by the VM. The metadata (size and timestamp) was pushed out
to disk on exit.
Other filesystems might have this problem as well if they are journaling.
In the "old" FS there was only a problem when the system crashed halfway
through flushing the file.
A journaling FS will recover from this: the disk write is logged, and if it
was incomplete the log replay reproduces the old data. In this case,
however, the data is not pushed out simultaneously, or takes another path.
Actually, a power failure halfway through a write can cause
anything to happen to the hard disk. A spike is even worse: it's easy to
create a bad sector (or write while the head is moving, destroying several
sectors). A journaling filesystem doesn't protect you from hardware failure!
That is not what it is supposed to do.
> When exiting vi, call fsync on the file you work on to make sure the data
> is sent to disk, or the user may lose their data when something
> goes wrong.
When Vim exits the file has already been closed. It would have to be done
each time you write a file, just before closing it. It actually doesn't
matter if you exit Vim or not, you might just use ":w" when you go off for
lunch and trip over the power cord.
This produces the exact same pattern.
I opened minicom.log, which had one line of text and was 35 bytes.
Added a line "test"
wrote it with :w
After reboot the file is:
[seth@lsautom seth]$ ll minicom.log
-rw-r--r-- 1 seth staff 42 Aug 10 17:37 minicom.log
[seth@lsautom seth]$ cat -v minicom.log
So the metadata was written but the data never got there.
vi -r minicom.log did contain the valid data. So why was the .swp file
written out but the original file not?
Calling sync manually after exiting gets it to disk for sure.
Waiting 10 seconds before pulling the power was not enough to get it to disk.
Every program has two purposes: one for which
it was written and another for which it wasn't.
I use the last kind.