

To: xfs@xxxxxxxxxxx
Subject: du vs. ls
From: pille <pille+xfs+mailinglist+sgi@xxxxxxxxxxxx>
Date: Fri, 04 Jan 2013 16:35:48 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:16.0) Gecko/20121104 Thunderbird/16.0.1
hi,

For a few weeks now I've noticed that the sizes reported for new files differ
between the output of ls and the output of du.
file contains 100MB read from /dev/urandom; copy is a copy of file.
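
For completeness, the two files were created roughly like this (the exact
commands don't matter much, this is just the kind of setup used):

# dd if=/dev/urandom of=file bs=1M count=100
# cp file copy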


# ls -l file copy
-rw-r--r-- 1 root root 104857600 2013-01-04 11:43 copy
-rw-r--r-- 1 root root 104857600 2013-01-04 10:21 file

# ls -lh file copy
-rw-r--r-- 1 root root 100M 2013-01-04 11:43 copy
-rw-r--r-- 1 root root 100M 2013-01-04 10:21 file

# du -bs file copy
104857600       file
104857600       copy

# du -hs file copy
128M    file                     !!
100M    copy

# stat file copy
  File: `file'
  Size: 104857600       Blocks: 262144     IO Block: 4096   regular file
Device: fb03h/64259d    Inode: 4705643     Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2013-01-04 10:21:28.721872771 +0000
Modify: 2013-01-04 10:21:42.641599647 +0000
Change: 2013-01-04 10:21:42.641599647 +0000
  File: `copy'
  Size: 104857600       Blocks: 204800     IO Block: 4096   regular file
Device: fb03h/64259d    Inode: 4709043     Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2013-01-04 11:43:39.665785370 +0000
Modify: 2013-01-04 11:43:40.005778696 +0000
Change: 2013-01-04 11:43:40.005778696 +0000


Notice the different block counts: file has 262144 blocks allocated, copy only
204800 (stat counts 512-byte blocks). Those extra 57344 blocks are exactly the
28MB difference that du shows for file.
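
Spelling out the arithmetic:

# echo $((262144 * 512)) $((204800 * 512)) $(((262144 - 204800) * 512))
134217728 104857600 29360128

i.e. 128MB allocated for file, exactly 100MB for copy, 28MB of extra space.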

# xfs_fsr -v file copy
file
file already fully defragmented.
copy
copy already fully defragmented.


# sha1sum file copy
23071d0b4caeeb4aa9579283ca67c7c13e66b8ee  file
23071d0b4caeeb4aa9579283ca67c7c13e66b8ee  copy



It's hard to create such files reliably, but the following procedure worked
for me (see random.part below):
1) write the file with dd as concatenated 1MB parts
2) overwrite it again the same way
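
Rolled into one script, that procedure is roughly this (just a sketch of the
two steps above, not the exact script I use):

#!/bin/sh
# write the file as 100 concatenated 1MB chunks, then overwrite it the same way
dd if=/dev/urandom of=random.part bs=1M count=100
dd if=/dev/urandom of=random.part bs=1M count=100
# compare apparent size against allocated size
ls -lh random.part
du -hs random.part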


# dd if=/dev/urandom of=random.part bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 10.6646 s, 9.8 MB/s

# dd if=/dev/urandom of=random.full bs=100M count=1
1+0 records in
1+0 records out
104857600 bytes (105 MB) copied, 11.132 s, 9.4 MB/s

# ls -lh; du -hs *
-rw-r--r-- 1 root root 100M 2013-01-04 12:11 random.full
-rw-r--r-- 1 root root 100M 2013-01-04 12:10 random.part
100M    random.full
100M    random.part

# dd if=/dev/urandom of=random.part bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 11.3788 s, 9.2 MB/s

# dd if=/dev/urandom of=random.full bs=100M count=1
1+0 records in
1+0 records out
104857600 bytes (105 MB) copied, 10.6937 s, 9.8 MB/s

# ls -lh; du -hs *
-rw-r--r-- 1 root root 100M 2013-01-04 12:11 random.full
-rw-r--r-- 1 root root 100M 2013-01-04 12:11 random.part
100M    random.full
128M    random.part                 !!


I'm on Ubuntu 10.04, kernel 2.6.38-13-server #57~lucid1-Ubuntu.
The issue has not been around for as long as the server has been up; it only
started appearing at some point during the current uptime.
I can't trigger the issue on another server running 3.2.0-35-generic
#55-Ubuntu (12.04).
Is this a known issue?
Please tell me if you need additional information.

cheers
  pille


PS: when I stat the shell script I use to check for this issue, it claims
that it uses 128 blocks for 849 bytes:
  File: `check_issue8634.sh'
  Size: 849             Blocks: 128        IO Block: 4096   regular file
Device: fb03h/64259d    Inode: 4708316     Links: 1
Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2013-01-03 10:45:15.000000000 +0000
Modify: 2013-01-04 11:54:01.583582496 +0000
Change: 2013-01-04 11:54:01.583582496 +0000
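
Such files can be flagged by comparing the allocated blocks against what the
apparent size would need; a minimal sketch (not the actual check_issue8634.sh):

#!/bin/sh
# for each file given on the command line, warn when noticeably more space is
# allocated (512-byte blocks, as reported by stat %b) than the apparent size
# would need, allowing for rounding up to the 4096-byte filesystem block size
for f in "$@"; do
    size=$(stat -c '%s' "$f")
    blocks=$(stat -c '%b' "$f")
    allocated=$(( blocks * 512 ))
    needed=$(( (size + 4095) / 4096 * 4096 ))
    if [ "$allocated" -gt "$needed" ]; then
        echo "$f: $allocated bytes allocated for $size bytes of data"
    fi
done

Run it e.g. as: sh check_sketch.sh file copy random.part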
