
Problems after creating a big file with dd

To: linux-xfs@xxxxxxxxxxx
Subject: Problems after creating a big file with dd
From: Daniel Just <d.just@xxxxxx>
Date: Mon, 21 Jan 2002 20:11:08 +0100
Mail-followup-to: Daniel Just <d.just@xxxxxx>, linux-xfs@xxxxxxxxxxx
Sender: owner-linux-xfs@xxxxxxxxxxx
User-agent: Mutt/1.3.25i (Linux/2.4.17-xfs (i686))

Hello list,

today I wanted to see what happens when I create a file bigger than 2GB
on my Debian woody box running 2.4.17-xfs (xfs-2.4.17-all-i386.bz2,
which should be from 01/10/01, plus the preempt patch 2.4.17-1, compiled
with gcc 2.95.4). So, as a "normal user", I tried

   $ dd if=/dev/zero of=bigone bs=1M count=3000

Some time later dd apparently got stuck at 1.4G; I pressed CTRL-C and
it stopped. Looking at /var/log/messages, I found tons of these:

   Jan 21 14:55:42 maryland kernel: xfs_alloc_read_agf: error in <ide1(22,3)> AG 0
   Jan 21 14:55:42 maryland kernel: bad agf_magicnum 0x0
   Jan 21 14:55:42 maryland kernel: Bad version number 0x0
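
For reference, the requested size works out to roughly 2.9 GiB, well past
the 2 GiB mark, while dd only got to about 1.4G. A quick way to compare,
assuming the half-written file is still lying around:

   $ echo $((3000 * 1024 * 1024))   # requested: 3145728000 bytes, ~2.9 GiB
   $ ls -l bigone                   # bytes actually written before the hang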

After unmounting that partition, I ran xfs_check and xfs_repair, which
produced some messages that I unfortunately didn't keep.
Afterwards I had some stuff in lost+found, and some files were lost.
(But I had a backup handy, so... :->)
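
In case it helps, the sequence was roughly the following; the mount point
and the /dev/hdc3 device name are only my reconstruction from the
<ide1(22,3)> tag in the log above, so treat them as illustrative:

   # umount /data                 # illustrative mount point of the affected XFS partition
   # xfs_check /dev/hdc3          # read-only consistency check (filesystem must be unmounted)
   # xfs_repair /dev/hdc3         # repair pass that produced the messages I didn't keep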

What happened here? An XFS problem? A hardware problem? A compiler issue?
User error? Please ask if you need more information.

Best regards and thanks in advance,

Daniel

