2008/8/26 Dave Chinner <david@xxxxxxxxxxxxx>:
> On Mon, Aug 25, 2008 at 01:08:29PM +0200, Sławomir Nowakowski wrote:
>> 2008/8/23 Dave Chinner <david@xxxxxxxxxxxxx>:
>> Next we created some files:
>> - one big file called "bigfile", 5109497856 bytes in size
>> - two small text files called "file1" and "file2"
>>
>> At this stage it looked as follows:
> ....
>> Filesystem 1K-blocks Used Available Use% Mounted on
>> /dev/sda3 4993984 4989916 4068 100% /mnt/z
>>
>> Then we booted the system with the 2.6.25.13 kernel and checked again:
> .....
>> Filesystem 1K-blocks Used Available Use% Mounted on
>> /dev/sda3 4993984 4993984 0 100% /mnt/z
>>
>> As shown above, with the 2.6.25.13 kernel the system reports no free
>> space, while under the 2.6.17.13 kernel there is 4068 kB of free space.
>>
>> At this stage, when editing file1 with e.g. mcedit and trying to
>> write the changes, the file gets truncated to 0 bytes!
>
> Oh, look, yet another editor that doesn't safely handle ENOSPC and
> trashes files when it can't overwrite them. That's not an XFS
> problem - I suggest raising a bug against the editor....
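Understood. For reference, a minimal sketch of the safer save pattern
(assuming the edited file is /mnt/z/file1; the ".new" temporary name is
just an arbitrary example):

  # write the new contents to a temporary file first; if that write
  # fails with ENOSPC the original file is left untouched
  cat > /mnt/z/file1.new && mv /mnt/z/file1.new /mnt/z/file1
  # the mv (a rename on the same filesystem) only runs if the write
  # succeeded, so file1 itself can never be truncated by ENOSPC
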
>
>> >> Is this a known issue, and/or does a solution or workaround exist?
>> >
>> > $ sudo xfs_io -x -c 'resblks 0' <file in filesystem>
>> >
>> > will remove the reservation. This means your filesystem can shutdown
>> > or lose data at ENOSPC in certain circumstances....
>>
>> A question: does using the command:
>>
>> $ sudo xfs_io -x -c 'resblks 0' <file in filesystem>
>>
>> on a 2.6.25.13 kernel give a higher risk of losing data than on a
>> 2.6.17.13 kernel?
>
> Hard to say. If you don't run into ENOSPC then there is no difference.
> If you do run into ENOSPC then I think there is a slightly higher
> risk of tripping problems on 2.6.25.x because of other ENOSPC fixes
> that have been included since 2.6.17.13. This really is a safety net
> in that it allows the system to continue without problems in
> conditions where it would previously have done a bad thing...
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@xxxxxxxxxxxxx
>
Dear Dave,
Can you please take a look at the following outputs of some commands
run under the 2.6.17.13 and 2.6.25.13 kernels?
Here is the situation on the 2.6.17.13 kernel:
xfs_io -x -c 'statfs' /mnt/point
fd.path = "/mnt/sda"
statfs.f_bsize = 4096
statfs.f_blocks = 487416
statfs.f_bavail = 6
statfs.f_files = 160
statfs.f_ffree = 154
geom.bsize = 4096
geom.agcount = 8
geom.agblocks = 61247
geom.datablocks = 489976
geom.rtblocks = 0
geom.rtextents = 0
geom.rtextsize = 1
geom.sunit = 0
geom.swidth = 0
counts.freedata = 6
counts.freertx = 0
counts.freeino = 58
counts.allocino = 64
xfs_io -x -c 'resblks' /mnt/point
reserved blocks = 0
available reserved blocks = 0
xfs_info /mnt/point
meta-data=/dev/sda4 isize=256 agcount=8, agsize=61247 blks
= sectsz=512 attr=0
data = bsize=4096 blocks=489976, imaxpct=25
= sunit=0 swidth=0 blks, unwritten=1
naming =version 2 bsize=4096
log =internal bsize=4096 blocks=2560, version=1
= sectsz=512 sunit=0 blks, lazy-count=0
realtime =none extsz=4096 blocks=0, rtextents=0
But under the 2.6.25.13 kernel the situation looks different:
xfs_io -x -c 'statfs' /mnt/point:
fd.path = "/mnt/-sda4"
statfs.f_bsize = 4096
statfs.f_blocks = 487416
statfs.f_bavail = 30
statfs.f_files = 544
statfs.f_ffree = 538
geom.bsize = 4096
geom.agcount = 8
geom.agblocks = 61247
geom.datablocks = 489976
geom.rtblocks = 0
geom.rtextents = 0
geom.rtextsize = 1
geom.sunit = 0
geom.swidth = 0
counts.freedata = 30
counts.freertx = 0
counts.freeino = 58
counts.allocino = 64
xfs_io -x -c 'resblks' /mnt/point:
reserved blocks = 18446744073709551586
available reserved blocks = 18446744073709551586
xfs_info /mnt/point
meta-data=/dev/sda4 isize=256 agcount=8, agsize=61247 blks
= sectsz=512 attr=0
data = bsize=4096 blocks=489976, imaxpct=25
= sunit=0 swidth=0 blks, unwritten=1
naming =version 2 bsize=4096
log =internal bsize=4096 blocks=2560, version=1
= sectsz=512 sunit=0 blks
realtime =none extsz=4096 blocks=0, rtextents=0
As you can easily see, the statfs.f_bavail, statfs.f_files, statfs.f_ffree
and counts.freedata values differ between the two kernels.
Can you explain why? (Note that 18446744073709551586 is 2^64 - 30, i.e. it
looks like -30 printed as an unsigned 64-bit value, and 30 is exactly the
counts.freedata figure reported above.)
Also, after applying your suggested workaround "xfs_io -x -c 'resblks 0'
<file in filesystem>", the command
xfs_io -x -c 'resblks' /mnt/point gives this output:
reserved blocks = 0
available reserved blocks = 18446744073709551586
Is that OK?
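In case it is useful, these are the commands we would use to put a small
explicit reservation back and re-check the counters (a sketch only; the
1024-block figure is an arbitrary example, not a tested value):

  # set an explicit reservation of 1024 filesystem blocks, then
  # re-read the reservation and the free-space counters
  sudo xfs_io -x -c 'resblks 1024' /mnt/point
  sudo xfs_io -x -c 'resblks' /mnt/point
  sudo xfs_io -x -c 'statfs' /mnt/point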
One more question: do you have any advice on tuning an XFS filesystem
that will contain at most 10 files?
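To illustrate the kind of knobs we mean, here is a purely hypothetical
mkfs invocation; the agcount and maxpct values are made-up examples, not
settings we have tested:

  # fewer allocation groups and a small inode-space ceiling, since only
  # a handful of files will ever exist on this filesystem
  mkfs.xfs -f -d agcount=2 -i maxpct=1 /dev/sda4

Any guidance on whether options along these lines make sense would be
very welcome.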
Thank you very much for your help!
I really appreciate it.